Fine-Tune Your Own Llama 2 Model LOCALLY in a Colab Notebook
Updated Aug 8, 2023 · Jupyter Notebook
This repository contains notebooks and resources related to the Software Development Group Project (SDGP) machine learning component. Specifically, it includes two notebooks used for creating a dataset and fine-tuning a Mistral-7B-Instruct-v0.1 model.
Colab notebook for fine-tuning Microsoft's Phi-2 (2.7B) LLM to solve mathematical word problems using QLoRA.
The LLM FineTuning and Evaluation project enhances FLAN-T5 models for tasks like summarizing Spanish news articles. It features detailed notebooks on fine-tuning and evaluating models to optimize performance for specific applications.
This repository contains experiments on fine-tuning LLMs (Llama, Llama 3.1, Gemma). It includes notebooks for model tuning, data preprocessing, and hyperparameter optimization to enhance model performance.
This repository contains a notebook for fine-tuning the meta-llama/Llama-3.2-3B-Instruct model (or any other generative language model) using Quantized LoRA (QLoRA) for sentiment classification on the Arabic HARD dataset.
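Several of the repositories above use QLoRA, which freezes a 4-bit-quantized base model and trains only small low-rank adapter matrices. The core LoRA update they all share can be sketched numerically; the following is a minimal NumPy illustration of the adapter math only (the dimensions, rank, and scaling factor are arbitrary, and the 4-bit quantization step is omitted):

```python
import numpy as np

rng = np.random.default_rng(0)

d, k, r = 16, 16, 4   # layer dimensions and LoRA rank (r << d)
alpha = 8             # LoRA scaling factor

W = rng.standard_normal((d, k))          # frozen base weight (quantized in real QLoRA)
A = rng.standard_normal((r, k)) * 0.01   # trainable, small random init
B = np.zeros((d, r))                     # trainable, zero init -> adapter delta starts at 0

def lora_forward(x):
    # Base path uses the frozen weight; adapter path adds (alpha / r) * B @ A @ x.
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(k)

# With B initialized to zero, the adapter contributes nothing at step 0,
# so the output matches the frozen base model exactly.
assert np.allclose(lora_forward(x), W @ x)

# Only A and B are trained: d*r + r*k = 128 parameters here,
# versus d*k = 256 for the full weight matrix.
print(A.size + B.size, "trainable vs", W.size, "full")
```

In practice these notebooks delegate this to the Hugging Face PEFT library (a `LoraConfig` with chosen `r` and `lora_alpha`) on top of a 4-bit model loaded via bitsandbytes; the sketch just shows why training stays cheap: the frozen weight is never updated, and the trainable parameter count scales with the rank `r` rather than the full layer size.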