Backend Setup
Welcome to the Omi backend setup guide! Omi is an innovative, multimodal AI assistant that combines cutting-edge technologies to provide a seamless user experience. This guide will help you set up the backend infrastructure that powers Omi’s intelligent capabilities.
Prerequisites 📋
Before you start, make sure you have the following:
- Google Cloud Project: You need a Google Cloud project with Firebase enabled. If you’ve already set up Firebase for the Omi app, you’re good to go.
- API Keys: 🔑 Obtain API keys for:
- OpenAI: For AI language models (OpenAI API Keys)
- Deepgram: For speech-to-text (Deepgram API Keys)
- Redis: Upstash is recommended (Upstash Redis Console)
- Pinecone: For vector database; use “text-embedding-3-large” model (Pinecone API Keys)
- Modal: [optional] For serverless deployment (Modal Dashboard)
- Hugging Face: For voice activity detection (Hugging Face Access Tokens)
- GitHub: [optional] For firmware updates (GitHub Personal Access Tokens)
- Google Maps API Key: [optional] For location features (Google Maps API Key)
- Typesense Credentials: For search functionality (Typesense Cloud Dashboard)
- Stripe Credentials: [optional] For paid apps payment processing (Stripe API Keys)
Note: If you are not very experienced in backend development, we recommend installing Homebrew (for macOS or Linux) or Chocolatey (for Windows).
Video Walkthrough
I. Setting Up Google Cloud & Firebase ☁️
1. Install Google Cloud SDK:
   - Mac (using brew): `brew install google-cloud-sdk`
   - Nix Envdir: The SDK is usually pre-installed
   - Windows (using choco): `choco install gcloudsdk`
2. Enable Necessary APIs: 🔧
   - Go to the Google Cloud Console
   - Select your project
   - Navigate to APIs & Services -> Library
   - Enable the following APIs:
3. Authenticate with Google Cloud: 🔐
   - Open your terminal
   - Run the following commands one by one, replacing `<project-id>` with your Google Cloud project ID:
   - This process generates an `application_default_credentials.json` file in the `~/.config/gcloud` directory. This file is used for automatic authentication with Google Cloud services in Python
   - Copy the credentials file to your backend directory and rename it:
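The command blocks for this step did not survive extraction. A typical sequence with the standard `gcloud` CLI looks like this (the target filename `google-credentials.json` is the name the backend expects; the rest is the usual authentication flow, so double-check it against the current repository docs):

```shell
# Log in and point gcloud at your project (replace <project-id>)
gcloud auth login
gcloud config set project <project-id>

# Generate application-default credentials for Python client libraries;
# this writes ~/.config/gcloud/application_default_credentials.json
gcloud auth application-default login

# Copy the credentials into the backend directory under the expected name
cp ~/.config/gcloud/application_default_credentials.json google-credentials.json
```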
II. Backend Setup 🛠️
1. Install Python & Dependencies: 🐍
   - Mac (using brew): `brew install python`
   - Nix Envdir: Python is pre-installed
   - Windows (using choco): `choco install python`
   - Install pip (if not present): follow the instructions at https://pip.pypa.io/en/stable/installation/
   - Install Git and FFmpeg:
     - Mac (using brew): `brew install git ffmpeg`
     - Nix Envdir: Git and FFmpeg are pre-installed
     - Windows (using choco): `choco install git.install ffmpeg`
   - Install Opus:
     - Mac (using brew): `brew install opus`
     - Windows: You should already have it installed if you are on Windows 10 version 1903 or above
   - Install PyOgg:
     - All platforms: `pip install PyOgg`
   - Install all required dependencies:
     - All platforms: `pip install -r requirements.txt`
2. Clone the Backend Repository: 📂
   - Open your terminal and navigate to your desired directory
   - Clone the Omi backend repository:
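The clone command itself was stripped from the original. Assuming the repository lives under the `BasedHardware` GitHub organization (verify the canonical URL on the project's GitHub page), it would be:

```shell
# Clone the repository and enter the backend directory
# (the omi/backend path is an assumption; check the repo layout)
git clone https://github.com/BasedHardware/omi.git
cd omi/backend
```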
3. Set up Pusher Service: 📡 [Optional]
   - You don't need the Pusher service running if you do not intend to use the webhooks feature
   - Navigate to the pusher directory:
   - Create a copy of the `.env.template` file and rename it to `.env`:
   - Set the `SERVICE_ACCOUNT_JSON` environment variable in the `.env` file to the string representation of your Google Cloud service account credentials (`google-credentials.json`). This is used to authenticate with Google Cloud
   - Move back to the backend directory and run the following command to start the Pusher service:
   - Optionally, you can expose the Pusher endpoint using Ngrok or a similar service
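The commands for this step were lost in extraction. A plausible sequence, assuming the Pusher service is a FastAPI app started with `uvicorn` (the module path and port below are assumptions; check the entrypoint inside the `pusher` directory):

```shell
# Prepare the pusher environment file
cd pusher
cp .env.template .env
# Edit pusher/.env and set SERVICE_ACCOUNT_JSON to the JSON string
# from google-credentials.json, then return to the backend directory
cd ..

# Start the Pusher service (entrypoint and port are assumptions)
uvicorn pusher.main:app --reload --env-file pusher/.env --port 8002
```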
4. Set up Typesense: 🔎 [Optional]
   - You don't need to set up Typesense if you do not intend to use the search functionality
   - Create an account on Typesense
   - Create a new collection in Typesense named `memories`, using the schema provided in the `typesense/memories.schema` file
   - Install the Firebase Typesense extension from the Firebase Extensions Hub
   - While setting up the extension, use the following values for the configuration:
     - Firestore Collection Path: `users/{userId}/memories`
     - Firestore Collection Fields: `structured,transcript_segments,created_at,deleted,discarded,started_at,id,finished_at,geolocation,userId`
   - Create a `typesense_sync` collection and add a document named `backfill` with data `{'trigger': true}` (required only if you already have memories in Firestore and want to sync them to Typesense)
   - Set the `TYPESENSE_HOST`, `TYPESENSE_HOST_PORT`, and `TYPESENSE_API_KEY` environment variables in the `.env` file to the host URL and API key provided by Typesense
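If you prefer the command line to the dashboard, the collection can also be created through Typesense's HTTP API. A sketch, assuming `typesense/memories.schema` contains a valid JSON collection schema, and substituting your own host and API key:

```shell
# Create the `memories` collection from the schema file
# (replace <typesense-host> and <api-key> with your values)
curl -X POST "https://<typesense-host>/collections" \
  -H "X-TYPESENSE-API-KEY: <api-key>" \
  -H "Content-Type: application/json" \
  --data-binary @typesense/memories.schema
```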
5. Set up the Environment File: 📝
   - Create a copy of the `.env.template` file and rename it to `.env`:
   - Open the `.env` file and fill in the following:
     - OPENAI_API_KEY: Obtained from your OpenAI API Settings
     - DEEPGRAM_API_KEY: Obtained from your Deepgram Console
     - Redis Credentials: Host, port, username, and password from your Upstash Redis Console
     - Modal API Key: Obtained from your Modal Dashboard
     - ADMIN_KEY: Set to a temporary value (e.g., `123`) for local development
     - HOSTED_PUSHER_API_URL: Endpoint of your hosted Pusher service (if you are using it; see step 3)
     - Typesense Credentials: Host, port, and API key from your Typesense Cloud Dashboard
     - NO_SOCKET_TIMEOUT: (Optional) Set to `True` to disable the socket timeout for the backend server (the socket stays connected for as long as the app is open)
     - Other API Keys: Fill in any other API keys required by your integrations (e.g., Google Maps API key)
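The copy step above is a single command; a minimal sketch (the key values shown are placeholders, not working credentials):

```shell
# Copy the template and open it for editing
cp .env.template .env

# Then fill in your keys in .env, for example:
#   OPENAI_API_KEY=<your-openai-key>
#   DEEPGRAM_API_KEY=<your-deepgram-key>
#   ADMIN_KEY=123
```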
6. Install Python Dependencies: 📚 You have two options for installing the required Python packages:

   Option A: Using a Virtual Environment (Recommended) 🐍
   - It's recommended to use a virtual environment to isolate your project dependencies and avoid conflicts
   - Create a new virtual environment in the backend directory:
   - You should see `(venv)` at the beginning of your command prompt, indicating that the virtual environment is active
   - Install dependencies within the virtual environment:
   - All packages will be installed isolated from your system's Python installation

   Option B: Direct Installation
   - If you prefer not to use a virtual environment, you can install the dependencies directly:
   - Note that this approach may lead to conflicts with other Python projects on your system
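The stripped commands for both options are the standard Python tooling; a sketch:

```shell
# Option A: create and activate a virtual environment, then install
python3 -m venv venv
source venv/bin/activate        # on Windows: venv\Scripts\activate
pip install -r requirements.txt

# Option B: install directly into the system Python
# (may conflict with other projects)
pip install -r requirements.txt
```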
III. Running the Backend Locally 🏃‍♂️
1. Set up Ngrok for Tunneling: 🚇
   - Sign up for a free account at https://ngrok.com/ and install Ngrok
   - Follow their instructions to authenticate Ngrok with your account
   - During onboarding, Ngrok provides a command to create a tunnel to your localhost. Change the port in that command to `8000` (the default port for the backend). For example:
   - Run this command in your terminal. Ngrok will provide a public URL (like `https://example.ngrok-free.app`) that points to your local backend
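The example command was lost from the original; the usual form is:

```shell
# Tunnel public traffic to the backend on localhost:8000
ngrok http 8000
```

If you have reserved a static domain with Ngrok, `ngrok http --domain=<your-static-domain> 8000` keeps the URL stable across restarts.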
2. Start the Backend Server: 🖥️
   - In your terminal, run:
   - `--reload` automatically restarts the server when code changes are saved, making development easier
   - `--env-file .env` loads environment variables from your `.env` file
   - `--host 0.0.0.0` listens on every interface on your machine, so you don't have to set up `ngrok` when developing on your local network
   - `--port 8000` sets the port the backend listens on
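The start command itself was stripped in extraction. Given the flags described above, and assuming the FastAPI application object is `main:app` (check the backend's entry module), it would be:

```shell
# Start the backend with auto-reload on port 8000
uvicorn main:app --reload --env-file .env --host 0.0.0.0 --port 8000
```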
3. Troubleshooting: 🔒
   - SSL Errors: If you encounter SSL certificate errors during model downloads, add this to `utils/stt/vad.py`:
   - API Key Issues: Double-check all API keys in your `.env` file and ensure there are no trailing spaces
   - Ngrok Connection: Ensure your Ngrok tunnel is active and the URL is correctly set in the Omi app
   - Dependencies: If you encounter "module not found" errors, try reinstalling dependencies with `pip install -r requirements.txt`
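The snippet referenced for the SSL fix did not survive extraction. A commonly used workaround (and a deliberately insecure one, so development-only) is to disable HTTPS certificate verification for the download:

```python
# Development-only workaround for SSL certificate errors during
# model downloads: replace the default HTTPS context with an
# unverified one. Do NOT ship this to production.
import ssl

ssl._create_default_https_context = ssl._create_unverified_context
```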
4. Connect the App to the Backend: 🔗
   - In your Omi app's environment variables, set `API_BASE_URL` to the public URL provided by Ngrok (e.g., `https://example.ngrok-free.app`)
Now, your Omi app should be successfully connected to the locally running backend.
5. When You're Done: 🛑
   - If you used a virtual environment, deactivate it when you're finished working with the backend by running `deactivate`
   - This command returns you to your system's global Python environment
   - To reactivate the virtual environment later, just run the activation command again (`source venv/bin/activate` on macOS/Linux or `venv\Scripts\activate` on Windows)
Environment Variables 🔐
Here's a detailed explanation of each environment variable you need to define in your `.env` file:
- `HUGGINGFACE_TOKEN`: Your Hugging Face Hub API token, used to download models for speech processing (like voice activity detection)
- `BUCKET_SPEECH_PROFILES`: The name of the Google Cloud Storage bucket where user speech profiles are stored
- `BUCKET_BACKUPS`: The name of the Google Cloud Storage bucket used for backups (if applicable)
- `GOOGLE_APPLICATION_CREDENTIALS`: The path to your Google Cloud service account credentials file (`google-credentials.json`). This is generated in step 3 of "I. Setting Up Google Cloud & Firebase"
  - By default, the backend expects to find a file named `google-credentials.json` in the same directory where the application is running
  - If you've followed Option 1 in step 3 of the Google Cloud setup, this will already be set correctly
  - If you prefer to use the default location of the credentials, set this to the full path of your `application_default_credentials.json` file (e.g., `~/.config/gcloud/application_default_credentials.json` on macOS/Linux or `%APPDATA%\gcloud\application_default_credentials.json` on Windows)
- `PINECONE_API_KEY`: Your Pinecone API key, used for vector database operations. Each memory is converted into a numerical representation (embedding); Pinecone stores these embeddings efficiently and lets Omi quickly find the memories most relevant to a user's query
- `PINECONE_INDEX_NAME`: The name of your Pinecone index where memory embeddings are stored
- `REDIS_DB_HOST`: The host address of your Redis instance
- `REDIS_DB_PORT`: The port number of your Redis instance
- `REDIS_DB_PASSWORD`: The password for your Redis instance
- `DEEPGRAM_API_KEY`: Your Deepgram API key, used for real-time and pre-recorded audio transcription
- `ADMIN_KEY`: A temporary key used for authentication during local development (replace with a more secure method in production)
- `OPENAI_API_KEY`: Your OpenAI API key, used for accessing OpenAI's language models for chat, memory processing, and more
- `GITHUB_TOKEN`: Your GitHub personal access token, used to access GitHub's API for retrieving the latest firmware version
- `WORKFLOW_API_KEY`: Your custom API key for securing communication with external workflows or integrations

Make sure to replace the placeholders (`<api-key>`, `<bucket-name>`, etc.) with your actual values.
Contributing 🤝
We welcome contributions from the open source community! Whether it’s improving documentation, adding new features, or reporting bugs, your input is valuable. Check out our Contribution Guide for more information.
Support 🆘
If you’re stuck, have questions, or just want to chat about Omi:
- GitHub Issues: 🐛 For bug reports and feature requests
- Community Forum: 💬 Join our community forum for discussions and questions
- Documentation: 📚 Check out our full documentation for in-depth guides
Happy coding! 💻 If you have any questions or need further assistance, don’t hesitate to reach out to our community.