llama.cpp/ggml
commit 61af07c22d — ggml-zendnn : adaptive fallback to CPU backend for small batch sizes (#22681)
Author: Sachin Sharma
Date:   2026-05-13 09:13:47 +03:00

* ggml-zendnn : add runtime env var GGML_ZENDNN_ADAPTIVE_FALLBACK to control adaptive fallback (default: enabled)

* ggml-zendnn : restore original fallback logic when adaptive fallback is disabled
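A minimal sketch of how the runtime toggle from this commit might be used. Only the env var name GGML_ZENDNN_ADAPTIVE_FALLBACK comes from the commit message; the accepted value (0 to disable) and any binary invocation are assumptions:

```shell
# Disable the adaptive CPU fallback for subsequent runs.
# (Value "0" for "disabled" is an assumption; the commit only states
# the feature defaults to enabled.)
export GGML_ZENDNN_ADAPTIVE_FALLBACK=0
```

With the variable unset, the commit states the adaptive fallback is enabled by default; when disabled, the original fallback logic is restored.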