sdgoij/llama.cpp, a mirror of https://github.com/ggml-org/llama.cpp.git (synced 2026-05-10 02:54:06 +00:00)
llama.cpp/ggml at b7378

Latest commit: 482211438d  CUDA: fix overflow in MMA kernel without stream-k (#17939)  (Johannes Gäßler, 2025-12-12 17:43:58 +01:00)
Name | Last commit | Last updated
cmake | ggml: Skip backend library linking code when GGML_BACKEND_DL=ON (#15094) | 2025-08-07 13:45:41 +02:00
include | ggml-cpu : fix RISC-V Q4_0 repack select and RVV feature reporting (#17951) | 2025-12-12 16:26:03 +02:00
src | CUDA: fix overflow in MMA kernel without stream-k (#17939) | 2025-12-12 17:43:58 +01:00
.gitignore | vulkan : cmake integration (#8119) | 2024-07-13 18:12:39 +02:00
CMakeLists.txt | ggml-cpu: add ggml_thread_cpu_relax with Zihintpause support (#17784) | 2025-12-08 10:41:34 +02:00
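
The cmake entry above refers to the GGML_BACKEND_DL build option. As a rough, hypothetical sketch only (not the repository's actual build code; only the GGML_BACKEND_DL name comes from the commit message, while the example_app target and the ggml-cpu library name are placeholder assumptions), the pattern named in that commit is to skip linking backend libraries into a consumer when backends are built as dynamically loadable modules:

```cmake
# Hypothetical sketch of the "skip backend linking when GGML_BACKEND_DL=ON" pattern.
# Only GGML_BACKEND_DL is taken from the commit message above; all other names are
# placeholders, not the repository's actual CMake code.
option(GGML_BACKEND_DL "Build ggml backends as dynamically loadable libraries" OFF)

add_executable(example_app main.c)          # placeholder consumer target

if (GGML_BACKEND_DL)
    # Backends are loaded at runtime, so there is no direct link step here.
    target_compile_definitions(example_app PRIVATE GGML_BACKEND_DL)
else()
    # Without dynamic loading, the backend library is linked into the consumer.
    target_link_libraries(example_app PRIVATE ggml-cpu)  # placeholder backend target
endif()
```

In such a setup the consumer would select and load backends at runtime rather than at link time.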