sdgoij/llama.cpp
mirror of https://github.com/ggml-org/llama.cpp.git synced 2026-05-09 02:24:17 +00:00
Files: llama.cpp/ggml (at b4909)
Latest commit: fd123cfead by 0cc4m, 2025-03-18 07:21:40 +01:00
Vulkan: Default to 1GB allocations instead of 4GB to avoid fragmentation and driver issues (#12434)
Name            Last commit                                                                                            Date
cmake           cmake : enable building llama.cpp using system libggml (#12321)                                        2025-03-17 11:05:23 +02:00
include         llama: Add support for RWKV v7 architecture (#12412)                                                   2025-03-18 07:27:50 +08:00
src             Vulkan: Default to 1GB allocations instead of 4GB to avoid fragmentation and driver issues (#12434)    2025-03-18 07:21:40 +01:00
.gitignore      vulkan : cmake integration (#8119)                                                                      2024-07-13 18:12:39 +02:00
CMakeLists.txt  opencl: use OpenCL C standard supported by the device (#12221)                                         2025-03-10 09:57:00 -07:00