Finetune Llama 3.3, DeepSeek-R1 & Reasoning LLMs 2x faster with 70% less memory! 🦥
Updated Feb 28, 2025 - Python
Efficient Triton Kernels for LLM Training
An efficient, flexible and full-featured toolkit for fine-tuning LLMs (InternLM2, Llama3, Phi3, Qwen, Mistral, ...)
🔥🔥 LLaVA++: Extending LLaVA with Phi-3 and LLaMA-3 (LLaVA LLaMA-3, LLaVA Phi-3)
Official repository for ICLR 2025 paper "Alignment Data Synthesis from Scratch by Prompting Aligned LLMs with Nothing". Your efficient and high-quality synthetic data generation pipeline!
An open-source implementation for fine-tuning Phi3-Vision and Phi3.5-Vision by Microsoft.
ArchNetAI is a Python library that leverages the Ollama API for generating AI-powered content.
LLM Interview Preparation Assistant using RAG, ElasticSearch and Ollama/ChatGPT
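The core RAG loop behind an assistant like this one can be sketched without any external services: retrieve the most relevant documents, then prepend them to the prompt sent to the LLM. Below, a toy keyword-overlap retriever stands in for the repo's Elasticsearch index (an assumption; the real project presumably issues Elasticsearch queries), and the function names are illustrative, not the repo's API:

```python
def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Toy retriever: rank documents by word overlap with the query.
    Stand-in for an Elasticsearch full-text search (assumed, not the repo's code)."""
    q_words = set(query.lower().split())
    scored = sorted(
        docs,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_rag_prompt(query: str, docs: list[str]) -> str:
    """Augment the user's question with retrieved context before
    handing it to Ollama or ChatGPT."""
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"
```

The retrieved snippets are injected as plain text, so the same prompt works unchanged with either an Ollama model or the ChatGPT API.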
Generates AI-driven responses using the Ollama API with Microsoft's Phi3 mini model. Lightweight, memoryless implementation for language generation.
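A memoryless call of this kind can be sketched against Ollama's documented `/api/generate` REST endpoint; each request carries only the current prompt, so no conversation state survives between calls. The helper names are illustrative, and the `phi3` model tag is assumed to be pulled on a locally running Ollama server:

```python
import json
import urllib.request

# Ollama's default local endpoint for single-shot generation.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(prompt: str, model: str = "phi3") -> dict:
    """Build a stateless generation request: only the current prompt is
    sent, which is what makes this implementation 'without memory'."""
    return {"model": model, "prompt": prompt, "stream": False}

def generate(prompt: str) -> str:
    """POST the prompt to a running Ollama server and return the text reply.
    Requires `ollama serve` with the phi3 model pulled (assumption)."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_payload(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

With the server running, `generate("Summarize RAG in one sentence.")` returns a single completion; asking a follow-up question starts from scratch, since no history is resent.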
⚗️ Phi-3-mini 3.8B instruct model repository
Generates AI-driven responses using the Ollama API with Microsoft's Phi3 mini model. Heavyweight implementation with conversation memory, featuring remote RAG and Pinecone for persistent storage.
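The memory-keeping variant maps naturally onto Ollama's documented `/api/chat` endpoint, which accepts a running `messages` list. This sketch keeps history in process only; the repo's Pinecone-backed persistent memory and remote RAG are omitted, and the class name is illustrative rather than the repo's API:

```python
import json
import urllib.request

# Ollama's chat endpoint, which accepts multi-turn message histories.
OLLAMA_CHAT_URL = "http://localhost:11434/api/chat"

class ChatSession:
    """Keep the full message history so the model sees prior turns.
    In-process memory only (the repo's Pinecone persistence is omitted)."""

    def __init__(self, model: str = "phi3"):
        self.model = model
        self.messages: list[dict] = []

    def build_payload(self, user_text: str) -> dict:
        # The whole history plus the new user turn is resent every request;
        # that resending is what gives the model its "memory".
        return {
            "model": self.model,
            "messages": self.messages + [{"role": "user", "content": user_text}],
            "stream": False,
        }

    def ask(self, user_text: str) -> str:
        """Send one turn to a running Ollama server and record both sides."""
        req = urllib.request.Request(
            OLLAMA_CHAT_URL,
            data=json.dumps(self.build_payload(user_text)).encode("utf-8"),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            reply = json.loads(resp.read())["message"]["content"]
        # Persist both turns so the next request carries the context.
        self.messages.append({"role": "user", "content": user_text})
        self.messages.append({"role": "assistant", "content": reply})
        return reply
```

Because the payload grows with every turn, a production version would trim or summarize old turns, or offload them to a vector store such as Pinecone, as this repo's description suggests.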