Simple FastAPI for interacting with OpenAI's GPT-4. Sends user input to the model and returns AI-generated responses.

didibrabosa/chat-openai


OpenAI Chat Integration FastAPI App

Project Description

Talking AI is a simple API built with FastAPI that allows users to interact with an AI-powered chatbot using OpenAI's GPT-4. The API receives text input via a POST request and returns a response generated by the AI.

How It Works

  1. Startup Process
  • Loads the OpenAI API key from a .env file or prompts the user to enter it manually.
  • Initializes a FastAPI application with an endpoint for AI interaction.
  2. Query Processing
  • A POST request is sent to the /talking_ia endpoint with a JSON payload.
  • The system processes the input and sends it to OpenAI's GPT-4.
  • The AI generates a response, which is returned as JSON.

Requirements

Install dependencies using:

pip install -r requirements.txt

The requirements.txt file includes:

  • FastAPI: Web framework for the API.
  • uvicorn: ASGI server for FastAPI.
  • python-dotenv: Manages environment variables.
  • langchain & langchain-openai: Handles interactions with OpenAI models.
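A requirements.txt matching the list above might look like this (the exact entries and any version pins in the real file may differ):

```text
fastapi
uvicorn
python-dotenv
langchain
langchain-openai
```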

How to Use

1. Set Up Your API Key:
  • Create a .env file or set the environment variable OPENAI_API_KEY with your OpenAI API key.
  • If not set, the application will prompt you to enter the key at startup.
2. Start the FastAPI application:
 uvicorn main:app --reload
3. Send a POST request to the /talking_ia endpoint with a JSON body such as:
   {
     "text": "Your message here"
   }
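For example, the request can be built and sent with Python's standard library; the URL assumes uvicorn's default host and port:

```python
import json
import urllib.request

# Build the POST request for the /talking_ia endpoint; 127.0.0.1:8000 is
# uvicorn's default bind address, adjust it if you run the server elsewhere.
payload = {"text": "Your message here"}
req = urllib.request.Request(
    "http://127.0.0.1:8000/talking_ia",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

# With the server running, send it and print the AI's JSON reply:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp))
```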
