From b333b06772c89d96aacb5490d6a219fba7c09cc6 Mon Sep 17 00:00:00 2001
From: Mitja Felicijan
Date: Thu, 12 Feb 2026 20:57:17 +0100
Subject: Engage!

---
 llama.cpp/examples/parallel/README.md | 20 ++++++++++++++++++++
 1 file changed, 20 insertions(+)
 create mode 100644 llama.cpp/examples/parallel/README.md

diff --git a/llama.cpp/examples/parallel/README.md b/llama.cpp/examples/parallel/README.md
new file mode 100644
index 0000000..2468a30
--- /dev/null
+++ b/llama.cpp/examples/parallel/README.md
@@ -0,0 +1,20 @@
+# llama.cpp/examples/parallel
+
+Simplified simulation of serving incoming requests in parallel.
+
+## Example
+
+Generate 128 client requests (`-ns 128`), simulating 8 concurrent clients (`-np 8`). The system prompt is shared (`-pps`), meaning it is computed once at the start. Each client request consists of up to 10 junk questions (`--junk 10`) followed by the actual question.
+
+```bash
+llama-parallel -m model.gguf -np 8 -ns 128 --top-k 1 -pps --junk 10 -c 16384
+```
+
+> [!NOTE]
+> It's recommended to use base models with this example. Instruction-tuned models might not be able to properly follow the custom chat template specified here, so the results might not be as expected.
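+
+For comparison, a sketch of the same run without `-pps`: each sequence is then expected to evaluate its own copy of the system prompt, so prompt processing is repeated per client instead of being shared.
+
+```bash
+llama-parallel -m model.gguf -np 8 -ns 128 --top-k 1 --junk 10 -c 16384
+```
-- 
cgit v1.2.3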