A modern chat interface for tabbyAPI that provides an intuitive way to interact with large language models.
- Backendless: all code runs in your browser
- Support for multimodal interactions (text and image)
- Specialized rendering for Markdown, model "thinking" blocks, inline and display LaTeX, and syntax-highlighted code
- One-click model switching for different tasks:
  - General Assistant
  - Vision tasks
  - Coding assistance
  - Chain-of-thought reasoning
- Conversation organization with folders
- Conversations and settings persistence using localStorage
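
As a rough sketch of how localStorage-backed conversation persistence could work (the `Conversation` shape and `conv:` key prefix here are illustrative assumptions, not the app's actual schema):

```typescript
// Hypothetical sketch of conversation persistence; the Conversation shape
// and "conv:" key prefix are illustrative, not the app's real schema.
interface ChatMessage {
  role: "user" | "assistant" | "system";
  content: string;
}

interface Conversation {
  id: string;
  title: string;
  folder?: string; // folder-based organization, per the feature list
  messages: ChatMessage[];
}

// Stand-in for window.localStorage so the sketch runs outside a browser.
const storage = new Map<string, string>();

function saveConversation(conv: Conversation): void {
  // Serialize the whole conversation under a namespaced key.
  storage.set(`conv:${conv.id}`, JSON.stringify(conv));
}

function loadConversation(id: string): Conversation | null {
  const raw = storage.get(`conv:${id}`);
  return raw ? (JSON.parse(raw) as Conversation) : null;
}
```

In the real app, `storage.set`/`storage.get` would be `localStorage.setItem`/`localStorage.getItem`, with the same JSON round-trip.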
1. Ensure you have a running tabbyAPI instance
2. Configure your server URL and API key in the settings
3. Load your preferred model through the model management interface
4. Start chatting!
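
To illustrate what a chat request from the browser might look like once the server URL and API key are configured (tabbyAPI exposes an OpenAI-style HTTP API; `buildChatRequest` is a hypothetical helper, and the endpoint path and payload are a minimal sketch, not the app's exact implementation):

```typescript
// Illustrative sketch: building an OpenAI-style chat completion request
// for a tabbyAPI server. buildChatRequest is a hypothetical helper.
interface Message {
  role: "user" | "assistant" | "system";
  content: string;
}

function buildChatRequest(serverUrl: string, apiKey: string, messages: Message[]) {
  return {
    // Strip a trailing slash so the path joins cleanly.
    url: `${serverUrl.replace(/\/$/, "")}/v1/chat/completions`,
    init: {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${apiKey}`,
      },
      body: JSON.stringify({ messages, stream: true }),
    },
  };
}

// In the app this would feed straight into fetch:
// const { url, init } = buildChatRequest(settings.serverUrl, settings.apiKey, messages);
// const response = await fetch(url, init);
```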
An AWS-deployed instance is available at https://main.d1nwbxsgjn09jn.amplifyapp.com/. Note that because this page is served over HTTPS, connecting it to a locally hosted tabbyAPI server over plain HTTP requires allowing mixed content in your browser.
This project is built with:
- React + TypeScript
- Material-UI components
- Vite build system
To run locally:

```sh
npm install
npm run build
npm run preview
```
Contributions are welcome! Please feel free to submit issues and pull requests.