# Server tests

Python-based server test scenarios using [pytest](https://docs.pytest.org/en/stable/).

Tests target GitHub workflow job runners with 4 vCPUs.

Note: If inference on the host is faster than on the GitHub runners, the parallel scenarios may fail at random.
To mitigate this, you can increase the `n_predict` and `kv_size` values.

### Install dependencies

`pip install -r requirements.txt`
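
For an isolated setup, you can install the dependencies into a virtual environment first (a minimal sketch, assuming a `python3` interpreter is available; the prompts in the debugging section below assume such a `venv` is active):

```shell
# create and activate a virtual environment, then install the test dependencies
python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt
```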

### Run tests

1. Build the server

```shell
cd ../../..
cmake -B build
cmake --build build --target llama-server
```

2. Start the tests: `./tests.sh`

Some scenario step values can be overridden with environment variables:

| variable                | description                                                                                     |
|-------------------------|-------------------------------------------------------------------------------------------------|
| `PORT`                  | `context.server_port` to set the listening port of the server during the scenario, default: `8080` |
| `LLAMA_SERVER_BIN_PATH` | to change the server binary path, default: `../../../build/bin/llama-server`                   |
| `DEBUG`                 | to enable step and server verbose mode (`--verbose`)                                            |
| `N_GPU_LAYERS`          | number of model layers to offload to VRAM (`-ngl`, `--n-gpu-layers`)                            |
| `LLAMA_CACHE`           | by default the server tests re-download models to the `tmp` subfolder; set this to your cache directory (e.g. `$HOME/Library/Caches/llama.cpp` on macOS or `$HOME/.cache/llama.cpp` on Linux) to avoid this |
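
For example, to run the tests against a different port with model layers offloaded to the GPU (the specific values below are illustrative, adjust them to your setup):

```shell
# override the listening port and offload layers to VRAM for this run
PORT=8081 N_GPU_LAYERS=99 ./tests.sh
```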

To run slow tests (these download many models, so make sure to set `LLAMA_CACHE` if needed):

```shell
SLOW_TESTS=1 ./tests.sh
```
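
For example, reusing an existing cache directory so the models are downloaded only once (the path below is the Linux default from the table above; use the macOS path if applicable):

```shell
# point the tests at an existing model cache before running the slow suite
LLAMA_CACHE=$HOME/.cache/llama.cpp SLOW_TESTS=1 ./tests.sh
```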

To run with stdout/stderr displayed in real time (verbose output, but useful for debugging):

```shell
DEBUG=1 ./tests.sh -s -v -x
```

To run all the tests in a file:

```shell
./tests.sh unit/test_chat_completion.py -v -x
```

To run a single test:

```shell
./tests.sh unit/test_chat_completion.py::test_invalid_chat_completion_req
```

Hint: You can compile and run the tests in a single command, which is useful for local development:

```shell
cmake --build build -j --target llama-server && ./tools/server/tests/tests.sh
```

To see all available arguments, please refer to the [pytest documentation](https://docs.pytest.org/en/stable/how-to/usage.html).

### Debugging external llama-server

It can sometimes be useful to run the server in a debugger when investigating test failures. To do this, set the environment variable `DEBUG_EXTERNAL=1`, which causes the tests to skip starting a llama-server themselves. Instead, the server can be started manually in a debugger.

Example using `gdb`:
```console
$ gdb --args ../../../build/bin/llama-server \
    --host 127.0.0.1 --port 8080 \
    --temp 0.8 --seed 42 \
    --hf-repo ggml-org/models --hf-file tinyllamas/stories260K.gguf \
    --batch-size 32 --no-slots --alias tinyllama-2 --ctx-size 512 \
    --parallel 2 --n-predict 64
```
A breakpoint can then be set before running:
```console
(gdb) br server.cpp:4604
(gdb) r
main: server is listening on http://127.0.0.1:8080 - starting the main loop
srv update_slots: all slots are idle
```

The test in question can then be run in another terminal:
```console
(venv) $ env DEBUG_EXTERNAL=1 ./tests.sh unit/test_chat_completion.py -v -x
```
This should trigger the breakpoint and allow inspection of the server state
in the debugger terminal.
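
If you use `lldb` instead (for example on macOS), a sketch of the same workflow might look as follows, reusing the arguments and the illustrative breakpoint location from the `gdb` example above:

```console
$ lldb -- ../../../build/bin/llama-server \
    --host 127.0.0.1 --port 8080 \
    --temp 0.8 --seed 42 \
    --hf-repo ggml-org/models --hf-file tinyllamas/stories260K.gguf \
    --batch-size 32 --no-slots --alias tinyllama-2 --ctx-size 512 \
    --parallel 2 --n-predict 64
(lldb) breakpoint set --file server.cpp --line 4604
(lldb) run
```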