| field | value | date |
|---|---|---|
| author | Mitja Felicijan <mitja.felicijan@gmail.com> | 2026-02-12 20:57:17 +0100 |
| committer | Mitja Felicijan <mitja.felicijan@gmail.com> | 2026-02-12 20:57:17 +0100 |
| commit | b333b06772c89d96aacb5490d6a219fba7c09cc6 (patch) | |
| tree | 211df60083a5946baa2ed61d33d8121b7e251b06 /llama.cpp/examples/server-llama2-13B.sh | |
| download | llmnpc-b333b06772c89d96aacb5490d6a219fba7c09cc6.tar.gz | |
Engage!
Diffstat (limited to 'llama.cpp/examples/server-llama2-13B.sh')

| mode | file | lines |
|---|---|---|
| -rwxr-xr-x | llama.cpp/examples/server-llama2-13B.sh | 26 |

1 files changed, 26 insertions, 0 deletions
```diff
diff --git a/llama.cpp/examples/server-llama2-13B.sh b/llama.cpp/examples/server-llama2-13B.sh
new file mode 100755
index 0000000..fd5a575
--- /dev/null
+++ b/llama.cpp/examples/server-llama2-13B.sh
@@ -0,0 +1,26 @@
+#!/usr/bin/env bash
+
+set -e
+
+cd "$(dirname "$0")/.." || exit
+
+# Specify the model you want to use here:
+MODEL="${MODEL:-./models/llama-2-13b-chat.ggmlv3.q5_K_M.bin}"
+PROMPT_TEMPLATE=${PROMPT_TEMPLATE:-./prompts/chat-system.txt}
+
+# Adjust to the number of CPU cores you want to use.
+N_THREAD="${N_THREAD:-12}"
+
+# Note: you can also override the generation options by specifying them on the command line:
+GEN_OPTIONS="${GEN_OPTIONS:---ctx_size 4096 --batch-size 1024}"
+
+
+# shellcheck disable=SC2086 # Intended splitting of GEN_OPTIONS
+./llama-server $GEN_OPTIONS \
+  --model "$MODEL" \
+  --threads "$N_THREAD" \
+  --rope-freq-scale 1.0 \
+  "$@"
+
+# I used this to test the model with mps, but omitted it from the general purpose. If you want to use it, just specify it on the command line.
+# -ngl 1 \
```
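Every setting in the script above uses the `${VAR:-default}` parameter-expansion pattern, so callers can override `MODEL`, `N_THREAD`, or `GEN_OPTIONS` from the environment without editing the file. A minimal sketch of how that pattern behaves (the variable name mirrors the script; the values are illustrative):

```shell
#!/usr/bin/env bash
# ${VAR:-default} yields the default only when VAR is unset or empty.
unset N_THREAD
echo "${N_THREAD:-12}"   # N_THREAD unset: prints the default, 12

N_THREAD=4
echo "${N_THREAD:-12}"   # N_THREAD set: prints the override, 4
```

This is why an invocation like `N_THREAD=8 ./examples/server-llama2-13B.sh` works with no changes to the script.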
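The `shellcheck disable=SC2086` line marks the unquoted `$GEN_OPTIONS` expansion as deliberate: the string must undergo word splitting so each flag and value becomes a separate argument to `llama-server`. A short sketch of the difference, using the same option string as the script:

```shell
#!/usr/bin/env bash
GEN_OPTIONS="--ctx_size 4096 --batch-size 1024"

# Unquoted: the shell splits on whitespace, producing four argv entries.
printf '%s\n' $GEN_OPTIONS

# Quoted ("$GEN_OPTIONS") would instead pass the whole string as ONE
# argument, which llama-server would reject as an unknown option.
```

The trade-off is that word splitting also applies glob expansion, which is why the pattern is only safe for option strings the script itself controls.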
