
Pixel Pirate Studio
Los Angeles, California, United States
Posted by the Chief Operating Officer
Project · Academic experience
120 hours of work total
Learner location: Anywhere
Advanced level

Project scope

Categories
UI design, UX design, Software development
Skills
Debugging, C++ (programming language), lip sync, Unreal Engine, communication, character animation, computer facial animation, user experience (UX), drag and drop, JavaScript (programming language)
Details

UNOMi is seeking interns with C++ and front-end engineering skills to help us build out the front-end integration of our 3D Lip Sync Plugin for Unreal Engine. The backend, built in JavaScript, is fully functional and production-ready. Your role will be to build the front-end interface inside Unreal Engine, enabling creators to automate facial animation and lip sync for 3D characters.


The estimated timeline is one to two months. You’ll work directly with our core engineering and animation team to bring this plugin to life.

Requirements:

  • C++ skills
Deliverables


  • Create a user-friendly UI/UX within Unreal Engine to control 3D lip sync features (importing audio, triggering lip movement, timeline scrubbing, etc.).
  • Implement support for drag-and-drop audio and character input.
  • Optimize the plugin for performance within the Unreal Editor environment.
  • Conduct thorough testing and debugging to ensure stability and accuracy.
  • Deliver clear documentation for internal and external use, including setup, usage, and troubleshooting instructions.



Phase 1: Requirements & Planning

Define Plugin Features

  • Input: Audio or text
  • Output: Facial animation (blendshapes, bones, or control curves)
  • Import format: FBX/GLTF with rig
  • Compatibility: Specify which UE versions (e.g., 5.1, 5.3+)
  • Engine target: Windows, Mac, Linux? Editor only or runtime?

Choose Communication Architecture

  • If using a backend (e.g., Python API): REST, WebSocket, or gRPC
  • Local processing or cloud-based (Google Cloud)?
  • File handling (audio upload, animation return, caching)

Phase 2: Plugin Development

Step 1: Set Up Plugin Skeleton

Use the Unreal Plugin Wizard (Edit > Plugins > Add) and pick a plugin template.

• Choose an editor-only or runtime plugin template, depending on your needs.
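
For reference, the wizard generates a module class implementing IModuleInterface. A minimal sketch of that skeleton, assuming a placeholder plugin/module name of UNOMiLipSync, looks roughly like this:

    // UNOMiLipSync.h (hypothetical module name generated by the wizard)
    #pragma once

    #include "CoreMinimal.h"
    #include "Modules/ModuleManager.h"

    class FUNOMiLipSyncModule : public IModuleInterface
    {
    public:
        // Called after the module's DLL is loaded: register editor tabs, menus, etc. here.
        virtual void StartupModule() override;
        // Called before the module is unloaded: undo anything done in StartupModule.
        virtual void ShutdownModule() override;
    };

    // UNOMiLipSync.cpp
    #include "UNOMiLipSync.h"

    void FUNOMiLipSyncModule::StartupModule() {}
    void FUNOMiLipSyncModule::ShutdownModule() {}

    IMPLEMENT_MODULE(FUNOMiLipSyncModule, UNOMiLipSync)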

Step 2: Add Audio Input Interface

  • UI to let users upload .wav files or input text (if using text-to-speech)
  • Use the editor's desktop platform file dialog (IDesktopPlatform::OpenFileDialog) or a UMG widget for file selection
  • Optionally integrate TTS if starting from text
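
A sketch of the file-selection piece, assuming an editor-only build (the desktop file dialog is not available in packaged runtime builds; the helper name is a placeholder):

    #include "CoreMinimal.h"
    #include "DesktopPlatformModule.h"
    #include "IDesktopPlatform.h"
    #include "Misc/Paths.h"

    // Hypothetical helper: opens a native file dialog and returns the chosen .wav path,
    // or an empty string if the user cancels. Requires the DesktopPlatform module (editor builds).
    static FString PickWavFile()
    {
        FString Chosen;
        if (IDesktopPlatform* DesktopPlatform = FDesktopPlatformModule::Get())
        {
            TArray<FString> OutFiles;
            const bool bPicked = DesktopPlatform->OpenFileDialog(
                nullptr,                              // parent window handle
                TEXT("Select voice-over audio"),      // dialog title
                FPaths::ProjectContentDir(),          // default directory
                TEXT(""),                             // default file name
                TEXT("WAV files (*.wav)|*.wav"),      // file type filter
                EFileDialogFlags::None,
                OutFiles);

            if (bPicked && OutFiles.Num() > 0)
            {
                Chosen = OutFiles[0];
            }
        }
        return Chosen;
    }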

Step 3: Handle Backend Communication

  • Add HTTP or WebSocket module (HttpModule, WebSocketsModule)
  • Send request to UNOMi API (with audio or text input)
  • Receive JSON or animation data in response (e.g., bone transforms, blendshape weights)
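
A minimal sketch of the HTTP round trip, assuming a placeholder endpoint URL and a raw-WAV request body (the real UNOMi API contract may differ):

    #include "CoreMinimal.h"
    #include "HttpModule.h"
    #include "Interfaces/IHttpRequest.h"
    #include "Interfaces/IHttpResponse.h"
    #include "Misc/FileHelper.h"

    // Hypothetical sketch: POST a .wav file to the lip-sync service and log the JSON reply.
    // Requires the "HTTP" module in the plugin's Build.cs dependencies.
    void SendAudioForLipSync(const FString& WavPath)
    {
        TArray<uint8> AudioBytes;
        if (!FFileHelper::LoadFileToArray(AudioBytes, *WavPath))
        {
            UE_LOG(LogTemp, Error, TEXT("Could not read %s"), *WavPath);
            return;
        }

        TSharedRef<IHttpRequest, ESPMode::ThreadSafe> Request = FHttpModule::Get().CreateRequest();
        Request->SetURL(TEXT("https://example.com/api/lipsync"));   // placeholder endpoint
        Request->SetVerb(TEXT("POST"));
        Request->SetHeader(TEXT("Content-Type"), TEXT("audio/wav"));
        Request->SetContent(AudioBytes);
        Request->OnProcessRequestComplete().BindLambda(
            [](FHttpRequestPtr /*Req*/, FHttpResponsePtr Response, bool bConnected)
            {
                if (bConnected && Response.IsValid())
                {
                    // Expected: JSON with blendshape weights or bone curves (see Step 4).
                    UE_LOG(LogTemp, Log, TEXT("Lip sync response: %s"), *Response->GetContentAsString());
                }
                else
                {
                    UE_LOG(LogTemp, Error, TEXT("Lip sync request failed."));
                }
            });
        Request->ProcessRequest();
    }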

Step 4: Apply Lip Sync to Character

  • Parse JSON response (use FJsonObject)
  • Apply the result to:
     • Morph targets / blendshapes via USkeletalMeshComponent::SetMorphTarget
     • Bone transforms via SetBoneRotationByName (on a UPoseableMeshComponent)
  • Optional: bake into an Animation Sequence (UAnimSequence) or Timeline
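
A sketch of the apply step, assuming the response carries a flat map of blendshape weights (the "curves" field name and payload shape are assumptions, not the actual API schema):

    #include "CoreMinimal.h"
    #include "Components/SkeletalMeshComponent.h"
    #include "Dom/JsonObject.h"
    #include "Dom/JsonValue.h"
    #include "Serialization/JsonReader.h"
    #include "Serialization/JsonSerializer.h"

    // Hypothetical sketch: parse { "curves": { "MouthOpen": 0.8, ... } } and push the
    // weights onto the character's morph targets. Names must match the face rig's blendshapes.
    void ApplyVisemeFrame(USkeletalMeshComponent* Mesh, const FString& JsonPayload)
    {
        TSharedPtr<FJsonObject> Root;
        TSharedRef<TJsonReader<>> Reader = TJsonReaderFactory<>::Create(JsonPayload);
        if (!Mesh || !FJsonSerializer::Deserialize(Reader, Root) || !Root.IsValid())
        {
            UE_LOG(LogTemp, Warning, TEXT("Invalid lip sync payload"));
            return;
        }

        const TSharedPtr<FJsonObject>* Curves = nullptr;
        if (Root->TryGetObjectField(TEXT("curves"), Curves))
        {
            for (const TPair<FString, TSharedPtr<FJsonValue>>& Pair : (*Curves)->Values)
            {
                Mesh->SetMorphTarget(FName(*Pair.Key), static_cast<float>(Pair.Value->AsNumber()));
            }
        }
    }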

Step 5: UI Integration (Optional)

  • Create a custom Editor tab or Details Panel using Slate/UMG
  • Add controls: Upload, Sync, Preview, Export Animation
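
One way to sketch the editor tab in Slate (the tab name, button labels, and missing click handlers below are placeholders):

    #include "CoreMinimal.h"
    #include "Framework/Docking/TabManager.h"
    #include "Widgets/Docking/SDockTab.h"
    #include "Widgets/SBoxPanel.h"
    #include "Widgets/Input/SButton.h"

    // Hypothetical sketch: register a nomad editor tab with the plugin's main controls.
    // Call this from StartupModule(); unregister the spawner in ShutdownModule().
    static const FName LipSyncTabName(TEXT("UNOMiLipSyncTab"));

    void RegisterLipSyncTab()
    {
        FGlobalTabmanager::Get()->RegisterNomadTabSpawner(LipSyncTabName,
            FOnSpawnTab::CreateLambda([](const FSpawnTabArgs&)
            {
                return SNew(SDockTab)
                    .TabRole(ETabRole::NomadTab)
                    [
                        SNew(SVerticalBox)
                        + SVerticalBox::Slot().AutoHeight()
                        [
                            SNew(SButton)
                            .Text(NSLOCTEXT("UNOMi", "UploadAudio", "Upload Audio"))
                            // .OnClicked(...) would call the Step 2 file-dialog helper.
                        ]
                        + SVerticalBox::Slot().AutoHeight()
                        [
                            SNew(SButton)
                            .Text(NSLOCTEXT("UNOMi", "GenerateSync", "Generate Lip Sync"))
                            // .OnClicked(...) would kick off the Step 3 API request.
                        ]
                    ];
            }))
            .SetDisplayName(NSLOCTEXT("UNOMi", "TabTitle", "UNOMi Lip Sync"));
    }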

Phase 3: Testing & Optimization

  1. Test Cases
     • Different rigs: face rigs with bones vs. blendshapes
     • File sizes: 10-second vs. 1-minute audio clips
     • Different languages (once multilingual support is added)
  2. Performance Tuning
     • Cache results on disk
     • Compress audio before upload
     • Enable async loading of animation data
  3. Error Handling (see the sketch after this list)
     • API timeouts, incorrect rigs, unsupported characters
     • Logging using UE_LOG
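
A small sketch of the error-handling side, assuming a plugin-specific log category (the category name and the response handler wired to the Step 3 request are placeholders):

    #include "CoreMinimal.h"
    #include "Interfaces/IHttpRequest.h"
    #include "Interfaces/IHttpResponse.h"

    // Hypothetical: a dedicated log category so plugin failures are easy to filter in the Output Log.
    DEFINE_LOG_CATEGORY_STATIC(LogUNOMiLipSync, Log, All);

    // Distinguish connection failures/timeouts from server-side errors before parsing anything.
    void HandleLipSyncResponse(FHttpRequestPtr /*Request*/, FHttpResponsePtr Response, bool bConnected)
    {
        if (!bConnected || !Response.IsValid())
        {
            UE_LOG(LogUNOMiLipSync, Error, TEXT("Lip sync request failed: timeout or no connection."));
            return;
        }
        if (Response->GetResponseCode() != 200)
        {
            UE_LOG(LogUNOMiLipSync, Warning, TEXT("Lip sync API returned HTTP %d: %s"),
                Response->GetResponseCode(), *Response->GetContentAsString());
            return;
        }
        UE_LOG(LogUNOMiLipSync, Log, TEXT("Lip sync data received (%d bytes)."), Response->GetContent().Num());
    }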

Phase 4: Packaging & Deployment

  1. Package the Plugin
     • Plugins > Package Plugin
     • Create a ZIP for distribution or publish to the Unreal Marketplace
  2. Documentation
     • Setup guide and dependencies (e.g., Google Cloud key, Python API)
     • Rigging requirements (naming conventions, supported controls)
     • Sample projects or demo scene
  3. Optional: Blueprint Integration (see the sketch after this list)
     • Expose core C++ functions to Blueprints with UFUNCTION(BlueprintCallable)
     • Let users trigger lip sync in BP workflows
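
As a sketch of the Blueprint-facing surface (the class, function, and category names below are placeholders, not the plugin's actual API):

    // UNOMiBlueprintLibrary.h
    #pragma once

    #include "CoreMinimal.h"
    #include "Kismet/BlueprintFunctionLibrary.h"
    #include "Components/SkeletalMeshComponent.h"
    #include "UNOMiBlueprintLibrary.generated.h"

    UCLASS()
    class UUNOMiBlueprintLibrary : public UBlueprintFunctionLibrary
    {
        GENERATED_BODY()

    public:
        // Callable from any Blueprint graph: run lip sync for the given mesh and audio file.
        UFUNCTION(BlueprintCallable, Category = "UNOMi|Lip Sync")
        static bool GenerateLipSync(USkeletalMeshComponent* TargetMesh, const FString& AudioFilePath);
    };

    // UNOMiBlueprintLibrary.cpp
    #include "UNOMiBlueprintLibrary.h"

    bool UUNOMiBlueprintLibrary::GenerateLipSync(USkeletalMeshComponent* TargetMesh, const FString& AudioFilePath)
    {
        // Placeholder body: wire this to the Step 2-4 helpers (file load, API call, morph application).
        return TargetMesh != nullptr && !AudioFilePath.IsEmpty();
    }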



Timeline Example (4–6 Weeks)

• Week 1: Define API, plugin skeleton, Unreal setup

• Week 2: Audio input + backend communication

• Week 3: Rig integration (morph targets / bones)

• Week 4: UI integration + error handling

• Week 5: Testing and optimization

• Week 6: Packaging + documentation



Mentorship
Domain expertise and knowledge

Providing specialized, in-depth knowledge and general industry insights for a comprehensive understanding.

Skills, knowledge and expertise

Sharing knowledge in specific technical skills, techniques, methodologies required for the project.

Hands-on support

Direct involvement in project tasks, offering guidance, and demonstrating techniques.

Tools and/or resources

Providing access to necessary tools, software, and resources required for project completion.

Regular meetings

Scheduled check-ins to discuss progress, address challenges, and provide feedback.

About the company

Pixel Pirate Studio
Los Angeles, California, United States
2 - 10 employees
Entertainment, Technology
Representation: Minority-Owned

UNOMi is an innovative, easy-to-use software tool for animators. UNOMi reduces the production time and budget for developing content by 30% to 70%. It does this by automatically syncing 2D and 3D mouth poses to the voice-over recordings of each character that an artist or animator creates. We understand the pain involved in producing quality animated content, and we’ve created the perfect tool to help with the process. It normally takes animators about a day to animate one character talking for 30 seconds, but with UNOMi they can get that done in seconds.

UNOMi’s top mission is to solve the greatest challenges facing animators today. With the technology available now, there is no reason animators should still struggle to tell their stories.