Notes from learning and testing models. I'm running them locally with Ollama in Docker on an NVIDIA GPU.
https://huggingface.co/huggingface
https://huggingface.co/settings/keys
https://huggingface.co/docs/hub/index
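Some gated models need a Hugging Face access token (created under settings/keys above). A minimal sketch of grabbing model files with the huggingface_hub CLI; the repo id is just an example, here one of the DeepSeek Coder checkpoints linked further down:
pip install -U "huggingface_hub[cli]"
export HF_TOKEN=<your token>   # token from https://huggingface.co/settings/keys
huggingface-cli download deepseek-ai/deepseek-coder-6.7b-instruct --local-dir ./models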
/~https://github.com/mudler/LocalAI/
# start LocalAI (all-in-one GPU image, CUDA 12); the API listens on :8080
docker run -ti --name local-ai -p 8080:8080 --gpus all localai/localai:latest-aio-gpu-nvidia-cuda-12
https://localai.io/basics/container/
https://localai.io/docs/getting-started/models/
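The AIO images expose an OpenAI-compatible API on port 8080 and ship with preconfigured model aliases. A quick smoke test with curl; "gpt-4" here is assumed to be one of the aliases the AIO image maps to a local model, not the OpenAI service:
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "gpt-4", "messages": [{"role": "user", "content": "Hello"}]}'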
/~https://github.com/deepseek-ai/deepseek-coder/
/~https://github.com/deepseek-ai
https://huggingface.co/deepseek-ai
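DeepSeek Coder is also packaged in the Ollama library, so once the Ollama container from the section below is running, it can be pulled straight through it; the 6.7b tag is just one of the available sizes:
docker exec -it ollama ollama pull deepseek-coder:6.7b
docker exec -it ollama ollama run deepseek-coder:6.7b "write hello world in go"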
https://mistral.ai/news/codestral-mamba/
/~https://github.com/ollama/ollama
https://hub.docker.com/r/ollama/ollama
# start the Ollama container with GPU support; the API listens on :11434
docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
# enter the container
docker exec -it ollama bash
# model blobs and manifests live under:
/root/.ollama/models
/root/.ollama/models/manifests/registry.ollama.ai/library
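With the container up, models can be pulled and prompted through the same binary, or via the REST API on :11434; llama3 is just an example tag:
docker exec -it ollama ollama run llama3
curl http://localhost:11434/api/generate -d '{"model": "llama3", "prompt": "Why is the sky blue?"}'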
/~https://github.com/meta-llama
/~https://github.com/meta-llama/llama3
/~https://github.com/lmstudio-ai
/~https://github.com/Codium-ai/cover-agent
https://klu.ai/blog/open-source-llm-models