This repository contains an AI Voice Assistant application built with Python and React, utilizing Deepgram for speech-to-text and text-to-speech functionality, and Llama 3 with Groq for natural language processing.
You can access a live demo of the AI Voice Assistant application at this link.
For a detailed walkthrough of the code and the technologies used, check out our blog post series:
- Part 1: A local Python voice assistant app.
- Part 2: A Python backend with FastAPI & WebSockets.
- Part 3: An interactive React user interface.
This project was developed by CodeAwake and is licensed under the MIT License.
The repository is organized into two main folders:

- `backend/`: Contains the Python FastAPI backend code and a simple local Python assistant.
- `frontend/`: Contains the React Next.js frontend code.
To run the project you will need:

- Python 3.11 or higher
- Node.js 18.17 or higher
- Poetry (Python package manager)

To set up the project:

1. Navigate to the backend folder and install the Python dependencies using Poetry:

   ```bash
   cd backend
   poetry install
   ```

2. Create a `.env` file in the backend folder by copying the `.env.example` file provided, and set the required environment variables:

   - `GROQ_API_KEY`: Your Groq API key.
   - `DEEPGRAM_API_KEY`: Your Deepgram API key.

3. Navigate to the frontend folder and install the JavaScript dependencies:

   ```bash
   cd frontend
   npm install
   ```

4. Create a `.env` file in the frontend folder by copying the `.env.example` file provided, which includes the required environment variable:

   - `NEXT_PUBLIC_WEBSOCKET_URL`: The WebSocket URL to connect to the backend API.
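For reference, the two `.env` files end up looking roughly like this. The key values below are placeholders, and the WebSocket URL assumes the backend is running on FastAPI's default development port (8000); the exact path depends on the WebSocket route defined in the backend:

```
# backend/.env — placeholder values, copy your real keys from the provider dashboards
GROQ_API_KEY=your_groq_api_key
DEEPGRAM_API_KEY=your_deepgram_api_key

# frontend/.env — assumes the backend's default dev address; adjust to match your setup
NEXT_PUBLIC_WEBSOCKET_URL=ws://localhost:8000
```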
You can run the local Python assistant script using the provided Poetry script:

```bash
cd backend
poetry run local-assistant
```

To run the full-stack web application:

1. Activate the virtual environment for the backend and start the backend server:

   ```bash
   cd backend
   poetry shell
   fastapi dev app/main.py
   ```

2. In a separate terminal, start the frontend server:

   ```bash
   cd frontend
   npm run dev
   ```

3. Open your web browser and visit `http://localhost:3000` to access the application.
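The end-to-end flow described above (speech-to-text with Deepgram, language generation with Llama 3 via Groq, then text-to-speech with Deepgram) can be sketched as a simple three-stage pipeline. Every function below is a stand-in stub for illustration only; the actual project calls the Deepgram and Groq SDKs:

```python
def transcribe(audio: bytes) -> str:
    """Stand-in stub for Deepgram speech-to-text."""
    return "what's the weather like?"


def generate_reply(transcript: str) -> str:
    """Stand-in stub for Llama 3 running on Groq."""
    return f"You asked: {transcript}"


def synthesize(reply: str) -> bytes:
    """Stand-in stub for Deepgram text-to-speech."""
    return reply.encode("utf-8")


def assistant_turn(audio: bytes) -> bytes:
    """One voice-assistant turn: audio in, audio out."""
    transcript = transcribe(audio)      # 1. speech -> text
    reply = generate_reply(transcript)  # 2. text -> LLM reply
    return synthesize(reply)            # 3. reply -> speech


if __name__ == "__main__":
    print(assistant_turn(b"\x00\x01").decode("utf-8"))
```

In the web application, this same loop runs over a WebSocket: the browser streams microphone audio to the FastAPI backend, and the backend streams synthesized audio back.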