What Are Integration Apps?
Integration apps allow Omi to interact with external services by sending data to your webhook endpoints. Unlike prompt-based apps, these require you to host a server.

There are three trigger types:
- Memory Triggers: Run code when a memory is created
- Real-Time Transcript: Process live transcripts as they happen
- Audio Streaming: Receive raw audio bytes for custom processing
Memory Creation Triggers
These apps are activated when Omi creates a new memory, allowing you to process or store the data externally.

How It Works
- User completes a conversation
- Omi processes and creates a memory
- Your webhook receives the complete memory object
- Your server processes and responds
Example Use Cases
- Slack Integration: Post conversation summaries to team channels
- CRM Updates: Log customer interactions automatically
- Project Management: Create tasks in Notion, Asana, or Jira
- Knowledge Base: Build a searchable archive of conversations
- Analytics: Track conversation patterns and insights
Video Tutorial
Running FastAPI locally (no cloud deployment).
Webhook Payload
Your endpoint receives a POST request with the memory object:

POST /your-endpoint?uid=user123
Real-Time Transcript Processors
Process conversation transcripts as they occur, enabling real-time analysis and actions.

How It Works
- User starts speaking
- Omi transcribes in real-time
- Your webhook receives transcript segments as they’re created
- Your server processes and can trigger immediate actions
Example Use Cases
- Live Coaching: Provide real-time feedback during presentations
- Fact-Checking: Verify claims as they’re made
- Smart Home: Trigger actions based on spoken commands
- Sentiment Analysis: Monitor emotional tone in real-time
- Translation: Live translation of conversations
Video Tutorial
Webhook Payload
Your endpoint receives transcript segments with session context:

POST /your-endpoint?session_id=abc123&uid=user123
Implementation Tips
- Use session_id: Track context across multiple calls using the session_id parameter
- Avoid Redundancy: Implement logic to prevent processing the same segments twice
- Accumulate Context: Build complete conversation context by storing segments
- Handle Errors: Fail gracefully; don’t block transcription with slow processing
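The tips above can be combined into a small per-session tracker. This is a sketch, not a documented schema: the segment fields (`id`, `text`) are assumptions about the webhook payload.

```python
# Accumulate transcript context per session_id while skipping segments
# that were already processed in an earlier webhook call.
# Segment field names ("id", "text") are illustrative assumptions.
class TranscriptSession:
    def __init__(self):
        self.seen_ids = set()
        self.segments = []

    def add(self, new_segments):
        """Store only unseen segments; return the ones that were new."""
        added = []
        for seg in new_segments:
            if seg["id"] in self.seen_ids:
                continue  # duplicate delivery; already handled
            self.seen_ids.add(seg["id"])
            self.segments.append(seg)
            added.append(seg)
        return added

    def full_text(self):
        """Full conversation context accumulated so far."""
        return " ".join(seg["text"] for seg in self.segments)

sessions = {}  # session_id -> TranscriptSession

def handle_webhook(session_id, segments):
    session = sessions.setdefault(session_id, TranscriptSession())
    return session.add(segments)
```

In a real deployment you would back `sessions` with a store that survives restarts (e.g. Redis), keyed by the `session_id` query parameter.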
Real-Time Audio Bytes
Stream raw audio bytes from Omi directly to your endpoint for custom audio processing.

How It Works
- User speaks into Omi device
- Raw PCM audio is streamed to your endpoint
- Your server processes the audio bytes directly
- Handle as needed (custom STT, VAD, feature extraction, etc.)
Example Use Cases
- Custom ASR: Use your own speech recognition models
- Voice Activity Detection: Implement custom VAD logic
- Audio Features: Extract spectrograms, embeddings, or other features
- Recording: Store raw audio for later processing
- Real-time Translation: Feed audio to translation services
Technical Details
| Setting | Value |
|---|---|
| Trigger Type | audio_bytes |
| HTTP Method | POST |
| Content-Type | application/octet-stream |
| Audio Format | PCM16 (16-bit little-endian) |
| Bytes per Sample | 2 |
POST /your-endpoint?sample_rate=16000&uid=user123
Body contains raw PCM16 audio bytes.
To produce a playable WAV file, prepend a WAV header and concatenate the received chunks.
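A minimal sketch of that step using Python's standard-library `wave` module, which writes the WAV header for you; this assumes mono audio, which matches the single-channel PCM16 stream described above.

```python
# Assemble a playable WAV file from raw PCM16 chunks received on the
# audio-bytes webhook. The wave module writes the header; we only
# concatenate the chunks. Mono audio is assumed.
import wave

def write_wav(path_or_file, pcm_chunks, sample_rate=16000):
    """Concatenate raw PCM16 mono chunks and save them as a WAV file."""
    with wave.open(path_or_file, "wb") as wav:
        wav.setnchannels(1)            # mono
        wav.setsampwidth(2)            # PCM16 = 2 bytes per sample
        wav.setframerate(sample_rate)  # use the sample_rate query parameter
        for chunk in pcm_chunks:
            wav.writeframes(chunk)
```

Pass the `sample_rate` value from the query string so the file plays back at the correct speed.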
Configuring Delivery Frequency
You can control how often audio is sent in the Omi app's Developer Settings by appending a comma and an interval in seconds to your webhook URL. For example, https://your-endpoint.com/audio,5 sends audio every 5 seconds.
For a complete implementation, see the Audio Streaming Guide.
Creating an Integration App
Choose Your Trigger Type
Decide which integration type(s) you need:
- Memory Trigger: Process completed conversations
- Real-Time Transcript: React to live speech
- Audio Bytes: Process raw audio
Set Up Your Endpoint
Create a webhook endpoint that can receive POST requests. For testing, use webhook.site or webhook-test.com.

Your endpoint should:
- Accept POST requests
- Parse the JSON body (or binary data for audio)
- Read uid from the query parameters
- Return 200 OK quickly
Implement Your Logic
Process the incoming data and integrate with external services.
Test Your Integration
Use Developer Mode to test without creating new memories (see Testing Your Integration below).
Submit Your App
Publish through the Omi mobile app
Testing Your Integration
Enable Developer Mode
Open Omi app → Settings → Enable Developer Mode → Developer Settings
Set Webhook URL
- Memory Triggers: Enter URL in “Memory Creation Webhook”
- Real-Time: Enter URL in “Real-Time Transcript Webhook”
Test Memory Triggers
Go to any memory → Tap 3-dot menu → Developer Tools → Trigger webhook with existing data
Test Real-Time
Start speaking - your endpoint receives updates immediately
App Submission Fields
When submitting your integration app:

| Field | Required | Description |
|---|---|---|
| Webhook URL | Yes | Your POST endpoint for receiving data |
| Setup Completed URL | No | GET endpoint returning {"is_setup_completed": boolean} |
| Auth URL | No | URL for user authentication (uid appended automatically) |
| Setup Instructions | No | Text or link explaining how to configure your app |
Setup Instructions Best Practices
- Step-by-Step Guide: Clear numbered instructions for configuration
- Screenshots: Visual aids for complex setup steps
- Authentication Flow: If required, explain how to connect accounts
- Troubleshooting: Common issues and how to resolve them
When users open your setup links, Omi automatically appends a uid query parameter. Use this to associate credentials with specific users.

Related Documentation
- Developer API: Access your own personal Omi data programmatically
- Data Import APIs: Create conversations and memories via REST API
- Audio Streaming Guide: Detailed guide for processing raw audio bytes
- Chat Tools: Add custom tools users can invoke in chat
- OAuth Setup: Add authentication flows to your apps
- Notifications: Send push notifications from your app
- Prompt-Based Apps: Create apps without hosting a server
- Apps Introduction: Overview of all app types