A vim plugin for code completion with local LLMs.
To perform autocompletion, select the lines that you want to complete (in visual mode) and call `:'<,'>AutoComplete`.
To comment a piece of code, select the lines (in visual mode) and call `:'<,'>Comment`.
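Both commands operate on the visual selection. If you use them often, you can bind them to keys; a minimal sketch (the mappings below are hypothetical examples, not defined by the plugin):

```vim
" Example mappings for your vimrc; the key choices are arbitrary.
" In visual mode, pressing ':' prefills the range '<,'>, so these
" expand to :'<,'>AutoComplete and :'<,'>Comment.
vnoremap <leader>ac :AutoComplete<CR>
vnoremap <leader>cm :Comment<CR>
```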
Requires vim with python3 support and a server for inference. The server can be, for example, ollama (for local inference), or you can use remote models (e.g., Groq).
The default model is `deepseek-coder:6.7b-instruct`; you can change it by editing the `model_name` value in the per-project config file `.codeassistant_config.json`.
Similarly, to change the server address, edit the `url` field.
The config file is created the first time you open vim in a directory.
You can enable RAG by setting the `rag` field in the config file and choosing an appropriate `rag_model` served with ollama.
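For reference, here is a sketch of what `.codeassistant_config.json` might contain. Only the field names (`model_name`, `url`, `rag`, `rag_model`) come from the description above; the values shown, including the ollama default address and the embedding model, are assumptions, so check the file generated in your project for the actual defaults.

```json
{
  "model_name": "deepseek-coder:6.7b-instruct",
  "url": "http://localhost:11434",
  "rag": false,
  "rag_model": "nomic-embed-text"
}
```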
First, install ollama from https://ollama.com/, then install the plugin with vim-plug:

```vim
Plug 'leocus/codeassistant.vim'
```
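If you are not already using vim-plug, the line above goes between `plug#begin()` and `plug#end()` in your vimrc; a minimal sketch:

```vim
" Minimal vimrc using vim-plug (https://github.com/junegunn/vim-plug)
call plug#begin()
Plug 'leocus/codeassistant.vim'
call plug#end()
```

Restart vim and run `:PlugInstall`. If you run inference locally with ollama, also pull the model first, e.g. `ollama pull deepseek-coder:6.7b-instruct`.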
This plugin uses pretrained LLMs to generate code. Beware of the limitations of LLMs and of possible bugs in the plugin (which are quite likely :) ). If you have any suggestions for improving this plugin or want to report a bug, please open an issue! :)
- Add support for an authentication token
- Add a refactor mode