# Server tests
Python-based server tests using pytest.

Tests target GitHub workflow job runners with 4 vCPU.

Note: If inference on the host is faster than on the GitHub runners, the parallel scenario may randomly fail.
To mitigate this, you can increase the values of `n_predict` and `kv_size`.
## Install dependencies

```shell
pip install -r requirements.txt
```
## Run tests

- Build the server

```shell
cd ../../..
cmake -B build
cmake --build build --target llama-server
```

- Start the test:

```shell
./tests.sh
```
It's possible to override some scenario step values with environment variables:
| variable | description |
|---|---|
| `PORT` | `context.server_port` to set the listening port of the server during the scenario, default: `8080` |
| `LLAMA_SERVER_BIN_PATH` | to change the server binary path, default: `../../../build/bin/llama-server` |
| `DEBUG` | to enable steps and server verbose mode `--verbose` |
| `N_GPU_LAYERS` | number of model layers to offload to VRAM `-ngl, --n-gpu-layers` |
| `LLAMA_CACHE` | by default server tests re-download models to the `tmp` subfolder. Set this to your cache (e.g. `$HOME/Library/Caches/llama.cpp` on Mac or `$HOME/.cache/llama.cpp` on Unix) to avoid this |
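For example, several of these variables can be combined on a single invocation (the values below are illustrative; adjust them to your setup):

```shell
LLAMA_CACHE=$HOME/.cache/llama.cpp N_GPU_LAYERS=99 ./tests.sh
```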
To run slow tests (this will download many models; make sure to set `LLAMA_CACHE` if needed):

```shell
SLOW_TESTS=1 ./tests.sh
```
To run with stdout/stderr displayed in real time (verbose output, but useful for debugging):

```shell
DEBUG=1 ./tests.sh -s -v -x
```
To run all the tests in a file:

```shell
./tests.sh unit/test_chat_completion.py -v -x
```
To run a single test:

```shell
./tests.sh unit/test_chat_completion.py::test_invalid_chat_completion_req
```
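When adding a new test, the existing unit tests follow a common pattern: a module-level server preset from the local `utils.py` helper module and plain request assertions. Below is a minimal, hypothetical sketch; the preset name, endpoint, and helper signatures are assumptions based on the existing `unit/` tests, so check those files for the current API:

```python
# unit/test_example.py -- minimal sketch of a server test (hypothetical file)
import pytest
from utils import ServerPreset  # test helper module in tools/server/tests

server = ServerPreset.tinyllama2()

@pytest.fixture(autouse=True)
def create_server():
    # re-create the server process wrapper before each test
    global server
    server = ServerPreset.tinyllama2()

def test_completion_returns_content():
    global server
    server.start()
    # send a small completion request and check the response shape
    res = server.make_request("POST", "/completion", data={
        "prompt": "Hello",
        "n_predict": 8,
    })
    assert res.status_code == 200
    assert "content" in res.body
```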
Hint: You can compile and run the tests in a single command, which is useful for local development:

```shell
cmake --build build -j --target llama-server && ./tools/server/tests/tests.sh
```
To see all available arguments, please refer to the pytest documentation.
## Debugging external llama-server
It can sometimes be useful to run the server in a debugger when investigating test
failures. To do this, set the environment variable `DEBUG_EXTERNAL=1`, which
causes the test to skip starting a `llama-server` itself; instead, the server can
be started manually in a debugger.
Example using gdb:

```shell
$ gdb --args ../../../build/bin/llama-server \
    --host 127.0.0.1 --port 8080 \
    --temp 0.8 --seed 42 \
    --hf-repo ggml-org/models --hf-file tinyllamas/stories260K.gguf \
    --batch-size 32 --no-slots --alias tinyllama-2 --ctx-size 512 \
    --parallel 2 --n-predict 64
```
A breakpoint can then be set before running:

```console
(gdb) br server.cpp:4604
(gdb) r
main: server is listening on http://127.0.0.1:8080 - starting the main loop
srv  update_slots: all slots are idle
```
Then the test in question can be run in another terminal:

```console
(venv) $ env DEBUG_EXTERNAL=1 ./tests.sh unit/test_chat_completion.py -v -x
```

This should trigger the breakpoint and allow inspection of the server state in the debugger terminal.
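Before running the test against an externally started server, it can help to confirm that the server is actually reachable. `llama-server` exposes a `/health` endpoint for this; the host and port below match the gdb example above:

```shell
# should return a small JSON status object once the server is up
curl http://127.0.0.1:8080/health
```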