From c3b834737f65f88853cdedbaaf20ed770016824f Mon Sep 17 00:00:00 2001
From: Benson Wong
Date: Mon, 13 Jan 2025 22:37:30 -0800
Subject: [PATCH] Update README.md

---
 README.md | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/README.md b/README.md
index cbc2e89..a27451d 100644
--- a/README.md
+++ b/README.md
@@ -5,10 +5,12 @@
 # Introduction
 
 llama-swap is a light weight, transparent proxy server that provides automatic model swapping to llama.cpp's server.
-Written in golang, it is very easy to install (single binary with no dependancies) and configure (single yaml file). Download a pre-built [release](/~https://github.com/mostlygeek/llama-swap/releases) or built it yourself from source with `make clean all`.
+Written in golang, it is very easy to install (single binary with no dependencies) and configure (single yaml file).
+
+Download a pre-built [release](/~https://github.com/mostlygeek/llama-swap/releases) or build it yourself from source with `make clean all`.
 
 ## How does it work?
 
-When a request is made to an OpenAI compatible endpoints, lama-swap will extract the `model` value load the appropriate server configuration to serve it. If a server is already running it will stop it and start a new one. This is where the "swap" part comes in. The upstream server is automatically swapped to the correct one to serve the request.
+When a request is made to an OpenAI compatible endpoint, llama-swap will extract the `model` value and load the appropriate server configuration to serve it. If a server is already running, it will stop it and start the correct one. This is where the "swap" part comes in. The upstream server is automatically swapped to the correct one to serve the request.
 
 In the most basic configuration llama-swap handles one model at a time. For more advanced use cases, the `profiles` feature can load multiple models at the same time. You have complete control over how your system resources are used.
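
The "single yaml file" and `profiles` wording above refer to llama-swap's configuration file. A minimal sketch of what such a file could look like follows; the key names, ports, and model paths are illustrative assumptions rather than text from this patch, so check the project's own configuration documentation for the exact schema.

```yaml
# Illustrative sketch of a llama-swap style config (field names are assumptions,
# not taken from this patch): each model name maps to the upstream llama.cpp
# server command to launch and the local address to proxy requests to.
models:
  "qwen2.5":
    cmd: llama-server --port 9001 -m /path/to/qwen2.5.gguf
    proxy: http://127.0.0.1:9001
  "llama3":
    cmd: llama-server --port 9002 -m /path/to/llama3.gguf
    proxy: http://127.0.0.1:9002

# Optional: a profile groups models that should be available at the same time.
profiles:
  coding:
    - qwen2.5
    - llama3
```

With a layout like this, a request whose `model` field names one of the configured entries would be routed to that entry's upstream server, with llama-swap starting or swapping the server process as needed.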