Talking AI is a simple API built with FastAPI that allows users to interact with an AI-powered chatbot using OpenAI's GPT-4. The API receives text input via a POST request and returns a response generated by the AI.
- Startup Process
- Loads the OpenAI API key from a .env file or prompts the user to enter it manually.
- Initializes a FastAPI application with an endpoint for AI interaction.
- Query Processing
- A POST request is sent to the /talking_ia endpoint with a JSON payload.
- The system processes the input and sends it to OpenAI's GPT-4.
- The AI generates a response, which is returned as JSON.
Install dependencies using:
pip install -r requirements.txt
The requirements.txt file includes:
- FastAPI: Web framework for the API.
- uvicorn: ASGI server for FastAPI.
- python-dotenv: Manages environment variables.
- langchain & langchain-openai: Handles interactions with OpenAI models.
1. Set Up Your API Key:
- Create a .env file or set the environment variable OPENAI_API_KEY with your OpenAI API key.
- If not set, the application will prompt you to enter the key at startup.
2. Start the FastAPI application:
uvicorn main:app --reload
3. Send a request to interact with the AI:
{
"text": "Your message here"
}
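With the server running, the request above can be sent from Python's standard library. A small sketch, assuming uvicorn's default address of 127.0.0.1:8000; `build_request` is a hypothetical helper name.

```python
import json
from urllib.request import Request, urlopen

def build_request(text: str, base_url: str = "http://127.0.0.1:8000") -> Request:
    # Build the POST request carrying the JSON payload shown above.
    payload = json.dumps({"text": text}).encode()
    return Request(
        f"{base_url}/talking_ia",
        data=payload,
        headers={"Content-Type": "application/json"},
    )

# With the server started (uvicorn main:app --reload), send and print the reply:
# with urlopen(build_request("Your message here")) as resp:
#     print(json.load(resp))
```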