Actions: ggml-org/llama.cpp

Server

10,667 workflow runs
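A listing like the one below can be reproduced programmatically with the GitHub Actions REST API endpoint `GET /repos/{owner}/{repo}/actions/workflows/{workflow_id}/runs`. The sketch below assumes the "Server" workflow lives in a file named `server.yml`; that file name is an assumption, while the endpoint and response fields (`workflow_runs`, `display_title`, `run_number`, `actor.login`, `head_branch`) are part of the documented API.

```python
# Sketch: fetch recent "Server" workflow runs for ggml-org/llama.cpp via the
# GitHub Actions REST API and render them roughly in the style of this page.
# NOTE: the workflow file name "server.yml" is an assumption.
import json
import urllib.request

API = ("https://api.github.com/repos/ggml-org/llama.cpp"
       "/actions/workflows/server.yml/runs")


def fetch_runs(per_page: int = 25) -> list[dict]:
    """Return the most recent workflow runs as a list of dicts."""
    req = urllib.request.Request(
        f"{API}?per_page={per_page}",
        headers={"Accept": "application/vnd.github+json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["workflow_runs"]


def format_run(run: dict) -> str:
    """Render one run in the three-line style used by the listing below."""
    return (f"{run['display_title']}\n"
            f"Server #{run['run_number']}: {run['event']} by "
            f"{run['actor']['login']}\n"
            f"{run['created_at']} {run['head_branch']}")
```

An authenticated request (an `Authorization: Bearer <token>` header) raises the rate limit but is not required for public repositories.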

llama : refactor llama_kv_cache, llama_context and llm_build_context
Server #11279: Pull request #11213 synchronize by ggerganov
February 28, 2025 16:01 8m 24s gg/llama-kv-cache
llama : refactor llama_kv_cache, llama_context and llm_build_context
Server #11276: Pull request #11213 synchronize by ggerganov
February 28, 2025 14:30 10m 11s gg/llama-kv-cache
llama : refactor llama_kv_cache, llama_context and llm_build_context
Server #11275: Pull request #11213 synchronize by ggerganov
February 28, 2025 14:29 2m 28s gg/llama-kv-cache
ggml : upgrade init_tensor API to return a ggml_status (#11854)
Server #11274: Commit 70680c4 pushed by slaren
February 28, 2025 13:41 8m 35s master
sycl: cleanup oneDNN related code
Server #11273: Pull request #12097 synchronize by sgeor255
February 28, 2025 13:41 7m 47s sgeor255:svet/llama-onednn
CUDA: compress-mode size
Server #11271: Pull request #12029 synchronize by Green-Sky
February 28, 2025 12:55 9m 49s Green-Sky:cuda_compress
Adding UTF-8 support to linenoise.cpp
Server #11270: Pull request #12111 opened by ericcurtin
February 28, 2025 12:48 9m 1s llama-run-utf-8
llama : add Phi-4-mini support (supersede #12099) (#12108)
Server #11269: Commit c43a3e7 pushed by ngxson
February 28, 2025 11:44 7m 44s master
sync : ggml
Server #11267: Pull request #12104 synchronize by ggerganov
February 28, 2025 10:37 8m 27s sync-ggml-25-02-28
llama : add Phi-4-mini support (supersede #12099)
Server #11266: Pull request #12108 synchronize by ngxson
February 28, 2025 09:54 7m 49s xsn/phi-4
llama : add Phi-4-mini support (supersede #12099)
Server #11265: Pull request #12108 opened by ngxson
February 28, 2025 09:52 2m 32s xsn/phi-4
llama : refactor llama_kv_cache, llama_context and llm_build_context
Server #11264: Pull request #11213 synchronize by ggerganov
February 28, 2025 08:51 16m 20s gg/llama-kv-cache
vulkan: add specific MMV kernels for IQ2 and IQ3 quants + optimizatio…
Server #11263: Commit 438a839 pushed by 0cc4m
February 28, 2025 08:42 8m 44s master
ggml: aarch64: implement SVE kernels for q2_k_q8_k vector dot (#12064)
Server #11262: Commit 05e6f5a pushed by ggerganov
February 28, 2025 07:36 15m 11s master
sync : ggml
Server #11261: Pull request #12104 synchronize by ggerganov
February 28, 2025 07:30 12m 23s sync-ggml-25-02-28
CANN: Fix build error with GCC 13 (#11990)
Server #11260: Commit 673cfef pushed by hipudding
February 28, 2025 07:23 13m 36s master
vulkan: matmul dequantization improvements (#12015)
Server #11259: Commit fbeda90 pushed by 0cc4m
February 28, 2025 07:20 12m 4s master
sync : ggml
Server #11258: Pull request #12104 opened by ggerganov
February 28, 2025 07:11 7m 40s sync-ggml-25-02-28
Upgrade init_tensor API to return a ggml_status
Server #11256: Pull request #11854 synchronize by slaren
February 28, 2025 01:26 7m 46s WilliamTambellini:init_tensor
llama : refactor llama_kv_cache, llama_context and llm_build_context
Server #11252: Pull request #11213 synchronize by ggerganov
February 27, 2025 14:00 8m 41s gg/llama-kv-cache