llama.cpp/tools/server/server-context.cpp (158 KiB)

Latest commit: 4893cc07bb by o7si, 2025-12-26 16:35:29 +01:00

server : fix crash when seq_rm fails for hybrid/recurrent models (#18391)

* server : fix crash when seq_rm fails for hybrid/recurrent models
* server : add allow_processing param to clear_slot