sdgoij/llama.cpp
mirror of https://github.com/ggml-org/llama.cpp.git synced 2026-05-13 04:24:17 +00:00
llama.cpp/ggml at commit f0198ef6fcb024c8cddc6e593bf16b4006dffda4
Latest commit: aa8b62105c by Gaurav Garg, 2026-02-16 15:39:26 +05:30
Support device-specific host buffer types if all underlying backends expose the same type. This allows using pinned memory instead of pageable memory for CUDA.
Fix compilation errors.
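As background for that commit message, the sketch below illustrates in plain CUDA runtime API calls why pinned (page-locked) host memory is preferable to pageable memory: cudaMemcpyAsync can only run truly asynchronously from a pinned buffer, while a pageable buffer forces the runtime to stage the copy. This is not ggml's actual code; the buffer size and variable names are illustrative.

// Minimal sketch (not ggml code): pinned vs. pageable host memory in CUDA.
#include <cstdlib>
#include <cuda_runtime.h>

int main() {
    const size_t n = 64 * 1024 * 1024;  // 64 Mi floats, size is illustrative

    // Pageable host buffer: plain malloc; the GPU cannot DMA from it directly.
    float *pageable = (float *)malloc(n * sizeof(float));

    // Pinned host buffer: page-locked and DMA-addressable by the GPU.
    float *pinned = nullptr;
    cudaMallocHost((void **)&pinned, n * sizeof(float));

    float *device = nullptr;
    cudaMalloc((void **)&device, n * sizeof(float));

    cudaStream_t stream;
    cudaStreamCreate(&stream);

    // From pinned memory this enqueues the copy and returns immediately,
    // so it can overlap with kernels running in other streams.
    cudaMemcpyAsync(device, pinned, n * sizeof(float),
                    cudaMemcpyHostToDevice, stream);
    cudaStreamSynchronize(stream);

    // The same call from pageable memory is legal, but the runtime must
    // stage the data through an internal pinned buffer, so it effectively
    // behaves synchronously.
    cudaMemcpyAsync(device, pageable, n * sizeof(float),
                    cudaMemcpyHostToDevice, stream);
    cudaStreamSynchronize(stream);

    cudaStreamDestroy(stream);
    cudaFree(device);
    cudaFreeHost(pinned);  // pinned memory must be freed with cudaFreeHost
    free(pageable);
    return 0;
}

Presumably this is also why the commit requires every underlying backend to expose the same host buffer type: a single pinned allocation can then back transfers to any of the devices.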
Name            Last commit message                               Date
..
cmake           Remove shfl and AllReduce from backend interface  2026-02-11 14:51:37 +01:00
include         support for tensor dims % n_devs != 0             2026-02-13 00:40:00 +01:00
src             Support device-specific host buffer types if all
                underlying backends expose the same type. This
                allows using pinned memory instead of pageable
                memory for CUDA.                                  2026-02-16 15:39:26 +05:30
.gitignore      vulkan : cmake integration (#8119)                2024-07-13 18:12:39 +02:00
CMakeLists.txt  GGML: HIP: add RCCL support                       2026-02-11 14:51:33 +01:00