mirror of https://github.com/ggml-org/llama.cpp.git synced 2026-05-15 13:34:06 +00:00
llama.cpp/docs/backend (ref: b9143)
Latest commit: 7e16646015 "docs : Update OPENVINO.md (#22959)" by Ravi Panchumarthy, 2026-05-13 17:12:15 +03:00
Updated OPENVINO.md with validated models and quantizations.
Co-authored-by: Haarika Madaka <haarika.madaka@intel.com>
| Name | Latest commit | Date |
| --- | --- | --- |
| snapdragon/ | hexagon: add support for basic and extended Op profiling (#22269) | 2026-04-23 14:17:21 -07:00 |
| VirtGPU/ | ggml-virtgpu: Fix some build commands (#20341) | 2026-03-12 15:47:45 +08:00 |
| BLIS.md | make : deprecate (#10514) | 2024-12-02 21:22:53 +02:00 |
| CANN.md | CANN: update docker images to 8.5.0 and improve CANN.md (#20801) | 2026-03-27 08:53:00 +08:00 |
| CUDA-FEDORA.md | docs: update: improve the Fedora CUDA guide (#12536) | 2025-03-24 11:02:26 +00:00 |
| OPENCL.md | docs: add linux to index (#18907) | 2026-01-18 18:03:35 +08:00 |
| OPENVINO.md | docs : Update OPENVINO.md (#22959) | 2026-05-13 17:12:15 +03:00 |
| SYCL.md | SYCL: reduce allocation overhead during flash attention (#22732) | 2026-05-09 09:30:39 +03:00 |
| VirtGPU.md | ggml-virtgpu: improve the reliability of the code (#19846) | 2026-02-26 20:00:57 +08:00 |
| zDNN.md | ggml-zendnn : add ZenDNN backend for AMD CPUs (#17690) | 2025-12-07 00:13:33 +08:00 |
| ZenDNN.md | ggml-zendnn : add MUL_MAT_ID op support for MoE models (#21315) | 2026-04-03 12:19:08 +03:00 |