llama.cpp/ggml
Latest commit 5065da554e by Aman Gupta (2026-02-13 17:01:40 +05:30):
CUDA: loop over ne2*ne3 in case it overflows (#19538)

* CUDA: loop over ne2*ne3 in case it overflows
* use fastdiv
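The commit message only hints at the fix: when a kernel maps ne2*ne3 onto a single grid dimension, the launch can exceed CUDA's 65535 limit on gridDim.y/gridDim.z, so the kernel instead strides over that flattened range, and the index split is done with a precomputed "fastdiv" instead of a 64-bit division. The following is a minimal, hypothetical sketch of that pattern, not the code from the PR: the dimension names ne0..ne3 follow ggml's tensor convention, while the kernel, the launcher, and the plain div/mod standing in for ggml's fastdiv helper are assumptions for illustration.

```cuda
// Minimal sketch, assuming ggml-style dims ne0..ne3; NOT the actual ggml-cuda kernel.
#include <algorithm>
#include <cstdint>
#include <cuda_runtime.h>

// CUDA caps gridDim.z at 65535, so one block per (i2, i3) pair fails once
// ne2*ne3 exceeds that. Instead, each block strides over the flat ne2*ne3 range.
__global__ void copy_kernel(const float * src, float * dst,
                            int64_t ne0, int64_t ne1, int64_t ne2, int64_t ne3) {
    const int64_t i0 = (int64_t) blockIdx.x*blockDim.x + threadIdx.x;
    const int64_t i1 = blockIdx.y;
    if (i0 >= ne0 || i1 >= ne1) {
        return;
    }
    const int64_t n23 = ne2*ne3; // may be larger than the grid-dimension limit
    for (int64_t i23 = blockIdx.z; i23 < n23; i23 += gridDim.z) {
        // the commit replaces this div/mod with a precomputed "fastdiv"
        // (multiply-and-shift) to avoid slow 64-bit integer division
        const int64_t i3 = i23 / ne2;
        const int64_t i2 = i23 % ne2;
        const int64_t idx = ((i3*ne2 + i2)*ne1 + i1)*ne0 + i0;
        dst[idx] = src[idx]; // placeholder element-wise op
    }
}

// Host-side launch (hypothetical): clamp the z grid dimension so the launch
// never exceeds the hardware limit; the in-kernel loop covers the remainder.
static void launch_copy(const float * src, float * dst,
                        int64_t ne0, int64_t ne1, int64_t ne2, int64_t ne3,
                        cudaStream_t stream) {
    const dim3 block(256, 1, 1);
    const unsigned grid_x = (unsigned) ((ne0 + block.x - 1)/block.x);
    const unsigned grid_z = (unsigned) std::min<int64_t>(ne2*ne3, 65535);
    const dim3 grid(grid_x, (unsigned) ne1, grid_z);
    copy_kernel<<<grid, block, 0, stream>>>(src, dst, ne0, ne1, ne2, ne3);
}
```

The plain / and % above stand in for the fastdiv the commit mentions; the usual trick is to precompute a multiplier and shift for the fixed divisor on the host so the device code avoids hardware integer division in the inner loop.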
cmake           ggml: Skip backend library linking code when GGML_BACKEND_DL=ON (#15094)    2025-08-07 13:45:41 +02:00
include         ggml-virtgpu: make the code thread safe (#19204)                             2026-02-04 10:46:18 +08:00
src             CUDA: loop over ne2*ne3 in case it overflows (#19538)                        2026-02-13 17:01:40 +05:30
.gitignore      vulkan : cmake integration (#8119)                                           2024-07-13 18:12:39 +02:00
CMakeLists.txt  Bump cmake max version (needed for Windows on Snapdragon builds) (#19188)    2026-02-01 14:13:38 -08:00