Add the UNOMi 3D Lip Sync plugin to Blender
Main contact

Portals - San Francisco, California, United States
Project scope
Categories
Information technology · Software development · Artificial intelligence
Skills
Debugging · WebSocket · Batch processing · Parsing · Data processing · Lip sync · Communication · Python (programming language) · Application programming interface (API) · Linear interpolation

Integrate the UNOMi 3D Lip Sync plugin into Blender. The approach revolves around building an addon in Python that communicates with your existing Kaldi-based backend, applies the results (phoneme/mouth-movement data) to 3D characters in Blender, and presents an intuitive UI to the user.
Overview of Requirements
- Blender version: Targeting 3.x+
- Backend: Already functional (Kaldi + Python)
- Input: Audio or text
- Output: Blendshape/morph target animation or bone movement
- Output format from backend: JSON or similar
- Goal: Apply lip sync to rigged characters in Blender (face rigs or shape keys)
Plugin Architecture
Frontend: Blender Python Addon
Backend: Existing UNOMi API (local or cloud)
Communication: REST or WebSocket (Blender → Backend)
Data Flow:
Audio/Text ➝ UNOMi API ➝ JSON Animation ➝ Blender (keyframes/blendshapes)
Development Schedule & Step-by-Step Breakdown (5 Weeks)
Week 1: Project Setup + API Integration
Step 1: Create Blender Addon Structure
- Create the addon folder with __init__.py, bl_info, and registration functions
- Define a basic UI panel in the 3D View or Text Editor using bpy.types.Panel
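The step above can be sketched as a minimal addon skeleton. This runs only inside Blender, and the class name, idname, and sidebar category below are illustrative placeholders, not the actual plugin's:

```python
# Minimal Blender addon skeleton (Blender 3.x). Names are placeholders.
import bpy

bl_info = {
    "name": "UNOMi 3D Lip Sync",
    "author": "UNOMi",
    "version": (0, 1, 0),
    "blender": (3, 0, 0),
    "location": "View3D > Sidebar > UNOMi",
    "category": "Animation",
}

class UNOMI_PT_lipsync_panel(bpy.types.Panel):
    """Sidebar panel in the 3D Viewport."""
    bl_label = "UNOMi Lip Sync"
    bl_idname = "UNOMI_PT_lipsync_panel"
    bl_space_type = "VIEW_3D"
    bl_region_type = "UI"
    bl_category = "UNOMi"

    def draw(self, context):
        layout = self.layout
        layout.label(text="Audio input")  # input widgets added in Step 2

classes = (UNOMI_PT_lipsync_panel,)

def register():
    for cls in classes:
        bpy.utils.register_class(cls)

def unregister():
    for cls in reversed(classes):
        bpy.utils.unregister_class(cls)
```

Blender discovers the addon through bl_info and calls register()/unregister() on enable/disable.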
Step 2: Implement Audio/Text Input
- Add UI to:
- Select or drag in an audio file (.wav)
- Optionally input text for TTS (if supported)
- Save input locally in temp folder
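Staging the chosen file in a temp folder can be as simple as the helper below; the unomi_lipsync subfolder name is an assumption, not part of the plugin spec:

```python
import os
import shutil
import tempfile

def stage_audio(src_path: str) -> str:
    """Copy the selected audio file into a per-session temp folder
    and return the staged path (assumed layout, not UNOMi's)."""
    staging_dir = os.path.join(tempfile.gettempdir(), "unomi_lipsync")
    os.makedirs(staging_dir, exist_ok=True)
    dst_path = os.path.join(staging_dir, os.path.basename(src_path))
    shutil.copy2(src_path, dst_path)
    return dst_path
```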
Step 3: Send Request to UNOMi Backend
- Use requests or aiohttp to send audio or text to your backend API
- Receive JSON with animation data (phoneme timings, blendshape weights)
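In practice this would be a requests or aiohttp call; the sketch below uses only the standard library (urllib) so it has no dependencies, and the endpoint URL, JSON field names, and base64 audio encoding are all assumptions about the backend:

```python
import base64
import json
import os
import urllib.request

def build_lipsync_request(audio_path: str, api_url: str) -> urllib.request.Request:
    """Package an audio file as a JSON POST request for the backend.
    Payload schema here is assumed, not UNOMi's documented format;
    send it with urllib.request.urlopen(req)."""
    with open(audio_path, "rb") as f:
        audio_bytes = f.read()
    payload = json.dumps({
        "filename": os.path.basename(audio_path),
        "audio_b64": base64.b64encode(audio_bytes).decode("ascii"),
    }).encode("utf-8")
    return urllib.request.Request(
        api_url,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
```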
Week 2: Data Processing & Application
Step 4: Parse Backend Response
- Process returned JSON:
- Timecodes
- Phoneme-to-shape mappings
- Weight values over time
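Since the exact response schema isn't specified here, a parser can be sketched against an assumed shape — a list of {time, phoneme, weights} entries:

```python
import json

def parse_lipsync_response(raw: str) -> list:
    """Parse backend JSON into time-sorted keyframe records.
    The schema {"frames": [{"time": s, "phoneme": ..., "weights": {...}}]}
    is an assumption, not UNOMi's documented format."""
    data = json.loads(raw)
    frames = []
    for entry in data.get("frames", []):
        frames.append({
            "time": float(entry["time"]),
            "phoneme": entry.get("phoneme", ""),
            "weights": {k: float(v) for k, v in entry.get("weights", {}).items()},
        })
    frames.sort(key=lambda f: f["time"])
    return frames
```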
Step 5: Connect to Blender Character Rig
- Detect selected object and check:
- Shape keys (for blendshape-based rigs)
- Bone structure (for bone-driven rigs)
- Validate character compatibility
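The detection logic can be duck-typed: the attribute paths mirror bpy (obj.data.shape_keys, obj.pose.bones), but the function itself never imports bpy, so it also works on plain mock objects in tests:

```python
def detect_rig_type(obj) -> str:
    """Classify a Blender object as shape-key-driven, bone-driven,
    or unsupported. Uses getattr chains so any object shape works."""
    shape_keys = getattr(getattr(obj, "data", None), "shape_keys", None)
    if shape_keys is not None and getattr(shape_keys, "key_blocks", None):
        return "shape_keys"
    pose = getattr(obj, "pose", None)
    if pose is not None and getattr(pose, "bones", None):
        return "bones"
    return "unsupported"
```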
Week 3: Apply Keyframes
Step 6: Generate Animation in Timeline
- For Shape Keys:
- Use bpy.data.objects[...].data.shape_keys.key_blocks[...].value
- Insert keyframes using .keyframe_insert(data_path="value", frame=...)
- For Bones:
- Use bpy.data.objects[...].pose.bones[...].rotation_euler or .location
- Apply transforms and keyframes
- Optional: Smooth values with linear interpolation
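The optional smoothing step is plain linear interpolation between phoneme samples. The helper below resamples a sparse (time, value) curve onto whole animation frames; the values it returns are what would then be fed to keyframe_insert, and the fps is whatever the scene uses:

```python
def resample_linear(samples, fps=24.0):
    """Resample sparse (time_seconds, value) pairs onto whole frames,
    linearly interpolating between neighbouring samples."""
    samples = sorted(samples)
    if not samples:
        return []
    first_f = round(samples[0][0] * fps)
    last_f = round(samples[-1][0] * fps)
    out = []
    i = 0
    for frame in range(first_f, last_f + 1):
        t = frame / fps
        while i + 1 < len(samples) and samples[i + 1][0] <= t:
            i += 1
        if i + 1 == len(samples):
            out.append((frame, samples[i][1]))  # past the last sample: hold
            continue
        (t0, v0), (t1, v1) = samples[i], samples[i + 1]
        alpha = (t - t0) / (t1 - t0) if t1 > t0 else 0.0
        out.append((frame, v0 + alpha * (v1 - v0)))
    return out
```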
Week 4: UI Polishing & Usability
Step 7: Add Playback and Preview Features
- Add “Play Preview” button in UI
- Option to delete or re-generate animation
Step 8: Export or Bake Animation
- Add option to export animation to .fbx or .glb
- Option to bake keyframes to reduce complexity
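Keyframe reduction can be sketched as a simple decimation pass: drop any keyframe that linear interpolation between its neighbours already reproduces within a tolerance. This is a greedy single-pass sketch, not Blender's own decimate operator, and the 0.01 default tolerance is an assumption:

```python
def decimate_keyframes(keys, tolerance=0.01):
    """Remove (frame, value) keys that are linearly predictable from
    their surviving left neighbour and their right neighbour."""
    if len(keys) <= 2:
        return list(keys)
    kept = [keys[0]]
    for i in range(1, len(keys) - 1):
        f0, v0 = kept[-1]
        f1, v1 = keys[i]
        f2, v2 = keys[i + 1]
        alpha = (f1 - f0) / (f2 - f0)
        predicted = v0 + alpha * (v2 - v0)
        if abs(predicted - v1) > tolerance:
            kept.append(keys[i])  # not reproducible: keep this key
    kept.append(keys[-1])
    return kept
```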
Week 5: Testing, Debugging & Packaging
Step 9: Error Handling & Logs
- Handle edge cases: missing shape keys, upload errors, backend timeout
- Display user messages via self.report({'ERROR'}, message)
Step 10: Finalize & Package Addon
- Zip plugin folder for distribution
- Write installation instructions
- Create sample character or demo .blend file
Tools and Libraries Needed
- Python (Blender native)
- requests or aiohttp for HTTP communication
- Blender API (bpy)
- Optional: NumPy or pandas for phoneme data processing (if needed)
Deliverables
- unomi_blender_lipsync/ addon folder
- README and setup guide
- Example character with mouth rig
- (Optional) Video walkthrough/demo
Bonus Features (Post-launch Ideas)
- TTS integration if text-only input desired
- Language selector once multilingual phoneme support is live
- Batch processing for multiple characters/scenes
Providing specialized knowledge in the project subject area, with industry context.
Sharing knowledge in specific technical skills, techniques, methodologies required for the project.
Direct involvement in project tasks, offering guidance, and demonstrating techniques.
Providing access to necessary tools, software, and resources required for project completion.
Scheduled check-ins to discuss progress, address challenges, and provide feedback.
About the company
UNOMi is innovative, easy-to-use software for animators. UNOMi reduces production time and budget for developing content by 30% to 70%. It does this by automatically syncing 2D and 3D mouth poses to the voice-over recordings an artist or animator creates for each character. We understand the pain involved in producing quality animated content, and we’ve created the perfect tool to help with the process. It normally takes animators about a day to animate one character talking for 30 seconds, but with UNOMi they can get that done in seconds.
UNOMi’s top mission is to solve the greatest challenges facing animators today. With the level of technology in the world, there is no reason animators should still struggle to tell their stories.