Issues: kherud/java-llama.cpp
What is the process to build the project if I am only making changes to the java-llama code?
#90 opened Dec 29, 2024 by siddhsql
Aren't JSON requests with roles such as "system" or "user" supported?
#88 opened Dec 26, 2024 by yousefabdel1727
Does llama-3.4.1-cuda12-linux-x86-64.jar handle both CPU and GPU, or only GPU?
#86 opened Dec 18, 2024 by siddhsql
SIGILL error when executing a language model code on CPU on M1 MacBook with Rosetta 2 and Java 8
#85 opened Nov 26, 2024 by s0t00524
Android inference issue "A FORTIFY: pthread_mutex_lock called on a destroyed mutex"
#79 opened Sep 19, 2024 by xunuohope1107
Add support for params.lora_adapters in newer llama.cpp (after b3534)
#78 opened Sep 12, 2024 by xunuohope1107
Process finished with exit code -1073741819 (0xC0000005) while trying to infer CodeGemma-2B GGUF
#77 opened Sep 10, 2024 by 32kda
Android build error: libllama.so is incompatible with aarch64linux
#50 opened Feb 28, 2024 by RageshAntonyHM