ggerganov 43f14a0a46 llama-eval : support multiple evaluation endpoints with dynamic task distribution
- Add ServerConfig dataclass (url, threads, name)
- Accept comma-separated --server, --threads, --server-name CLI args
- Dynamic shared-queue task distribution across servers (fast servers do more work)
- One ThreadPoolExecutor per server, workers pull from shared Queue
- Track which server processed each task (server_name in results)
- Thread-safe EvalState with threading.Lock for concurrent mutations
- Server column in HTML report and console output
- Backward compatible: single server works as before
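
The dynamic distribution above can be sketched as follows. This is a minimal illustration, not the actual llama-eval code: the names `ServerConfig`, `EvalState`, and `run_eval` follow the commit message, but their bodies here are assumptions. Workers pull from one shared `Queue`, so a faster server's threads naturally claim more tasks, and each result records which server handled it.

```python
# Hypothetical sketch of shared-queue task distribution across servers.
# ServerConfig / EvalState mirror the names in the commit message; the
# actual request-sending logic is omitted.
import queue
import threading
from concurrent.futures import ThreadPoolExecutor
from dataclasses import dataclass, field


@dataclass
class ServerConfig:
    url: str
    threads: int
    name: str


@dataclass
class EvalState:
    # Lock guards concurrent mutation by workers on different servers.
    lock: threading.Lock = field(default_factory=threading.Lock)
    results: list = field(default_factory=list)

    def record(self, task, server_name):
        with self.lock:
            self.results.append({"task": task, "server_name": server_name})


def run_eval(servers, tasks):
    q = queue.Queue()
    for t in tasks:
        q.put(t)
    state = EvalState()

    def worker(server):
        # Pull until the shared queue is drained; fast servers do more work.
        while True:
            try:
                task = q.get_nowait()
            except queue.Empty:
                return
            # ... send `task` to server.url here (request logic omitted) ...
            state.record(task, server.name)

    # One ThreadPoolExecutor per server, sized by its --threads setting.
    pools = [ThreadPoolExecutor(max_workers=s.threads) for s in servers]
    futures = []
    for s, pool in zip(servers, pools):
        futures += [pool.submit(worker, s) for _ in range(s.threads)]
    for f in futures:
        f.result()  # propagate worker exceptions, wait for completion
    for pool in pools:
        pool.shutdown()
    return state.results
```

With a single entry in `servers`, all tasks go through one pool, matching the backward-compatible single-server behavior noted above.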

Assisted-by: llama.cpp:local pi
2026-05-10 20:42:23 +03:00