sdgoij / llama.cpp
Mirror of https://github.com/ggml-org/llama.cpp.git, synced 2026-05-15 13:34:06 +00:00
llama.cpp / ggml at b9150
Latest commit 81b0d882ae by alex-spacemit (2026-05-14 17:39:30 +08:00): ggml-cpu: Add IME2 Instruction Support for the SpacemiT Backend (#22863)
Name            Last commit                                                                          Date
cmake           ggml: backend-agnostic tensor parallelism (experimental) (#19378)                    2026-04-09 16:42:19 +02:00
include         CUDA: lower-case PCI bus id, standardize for ggml (#22820)                           2026-05-08 10:09:38 +02:00
src             ggml-cpu: Add IME2 Instruction Support for the SpacemiT Backend (#22863)             2026-05-14 17:39:30 +08:00
.gitignore      vulkan : cmake integration (#8119)                                                   2024-07-13 18:12:39 +02:00
CMakeLists.txt  SYCL: fix multi-GPU system RAM exhaustion by using Level Zero allocations (#21597)   2026-05-14 13:39:14 +08:00