sdgoij/llama.cpp
mirror of https://github.com/ggml-org/llama.cpp.git synced 2026-05-08 10:04:10 +00:00
llama.cpp/tools @ 217469f07f75db1594eb4b4ac3c4e723d6ea8553
Latest commit: Georgi Gerganov 16451d6bc3 Merge branch 'master' into HEAD (2025-12-01 14:47:50 +02:00)
Directory            Last commit                                                        Date
batched-bench        batched-bench : add "separate text gen" mode (#17103)              2025-11-10 12:59:29 +02:00
cvector-generator    refactor : simplify and improve memory management                  2025-11-28 16:09:42 +02:00
export-lora          cmake : Do not install tools on iOS targets (#15903)               2025-09-16 09:54:44 +07:00
gguf-split           ci : use smaller model (#16168)                                    2025-09-22 09:11:39 +03:00
imatrix              refactor : simplify and improve memory management                  2025-11-28 16:09:42 +02:00
llama-bench          bench : cache the llama_context state at computed depth (#16944)   2025-11-07 21:23:11 +02:00
main                 Merge branch 'master' into HEAD                                    2025-12-01 14:47:50 +02:00
mtmd                 Merge branch 'master' into HEAD                                    2025-12-01 14:47:50 +02:00
perplexity           refactor : simplify and improve memory management                  2025-11-28 16:09:42 +02:00
quantize             ci : use smaller model (#16168)                                    2025-09-22 09:11:39 +03:00
rpc                  Install rpc-server when GGML_RPC is ON. (#17149)                   2025-11-11 10:53:59 +00:00
run                  Manually link -lbsd to resolve flock symbol on AIX (#16610)        2025-10-23 19:37:31 +08:00
server               llama : naming                                                     2025-11-30 00:05:47 +02:00
tokenize             cmake : Do not install tools on iOS targets (#15903)               2025-09-16 09:54:44 +07:00
tts                  refactor : simplify and improve memory management                  2025-11-28 16:09:42 +02:00
CMakeLists.txt       mtmd : rename llava directory to mtmd (#13311)                     2025-05-05 16:02:55 +02:00