sdgoij/llama.cpp
Mirror of https://github.com/ggml-org/llama.cpp.git, last synced 2026-05-12 03:54:06 +00:00.
File: llama.cpp/tools/server/server-context.cpp (commit 8e8e2007269670cb0fae82f6fe17da970210ed07)
Latest commit: 8e8e200726 by Ruben Ortlam, 2026-04-21 14:33:26 +02:00
server: add --models-memory-max parameter to allow dynamically unloading models when they exceed a memory size threshold
File size: 174 KiB