From b333b06772c89d96aacb5490d6a219fba7c09cc6 Mon Sep 17 00:00:00 2001
From: Mitja Felicijan
Date: Thu, 12 Feb 2026 20:57:17 +0100
Subject: Engage!

---
 llama.cpp/examples/simple/README.md | 21 +++++++++++++++++++++
 1 file changed, 21 insertions(+)
 create mode 100644 llama.cpp/examples/simple/README.md

diff --git a/llama.cpp/examples/simple/README.md b/llama.cpp/examples/simple/README.md
new file mode 100644
index 0000000..937008b
--- /dev/null
+++ b/llama.cpp/examples/simple/README.md
@@ -0,0 +1,21 @@
+# llama.cpp/examples/simple
+
+The purpose of this example is to demonstrate minimal usage of llama.cpp for generating text from a given prompt.
+
+```bash
+./llama-simple -m ./models/llama-7b-v2/ggml-model-f16.gguf "Hello my name is"
+
+...
+
+main: n_len = 32, n_ctx = 2048, n_parallel = 1, n_kv_req = 32
+
+ Hello my name is Shawn and I'm a 20 year old male from the United States. I'm a 20 year old
+
+main: decoded 27 tokens in 2.31 s, speed: 11.68 t/s
+
+llama_print_timings:        load time =     579.15 ms
+llama_print_timings:      sample time =       0.72 ms /    28 runs   (    0.03 ms per token, 38888.89 tokens per second)
+llama_print_timings: prompt eval time =     655.63 ms /    10 tokens (   65.56 ms per token,    15.25 tokens per second)
+llama_print_timings:        eval time =    2180.97 ms /    27 runs   (   80.78 ms per token,    12.38 tokens per second)
+llama_print_timings:       total time =    2891.13 ms
+```
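For readers who want the corresponding API calls rather than just the CLI, the core of `llama-simple` reduces to a tokenize/decode/sample loop. Below is a minimal sketch of that loop against a recent llama.cpp C API; the exact names (`llama_model_load_from_file`, `llama_init_from_model`, the `llama_sampler_*` chain, `llama_vocab_is_eog`) are taken from newer upstream releases and are assumptions relative to the copy vendored by this commit, whose `llama_print_timings` output suggests it predates the sampler-chain API.

```cpp
// Minimal greedy-decoding sketch against a recent llama.cpp C API.
// NOTE: function names are assumptions from newer upstream releases;
// older trees use e.g. llama_load_model_from_file and
// llama_new_context_with_model instead.
#include "llama.h"

#include <cstdio>
#include <string>
#include <vector>

int main() {
    const std::string model_path = "./models/llama-7b-v2/ggml-model-f16.gguf"; // path from the README above
    const std::string prompt     = "Hello my name is";
    const int n_predict          = 32; // matches n_len in the sample output

    // load the model with default parameters
    llama_model_params mparams = llama_model_default_params();
    llama_model * model = llama_model_load_from_file(model_path.c_str(), mparams);
    if (!model) { fprintf(stderr, "failed to load model\n"); return 1; }

    const llama_vocab * vocab = llama_model_get_vocab(model);

    // tokenize the prompt: a first call with a NULL buffer returns the
    // negated token count, the second call fills the buffer
    const int n_prompt = -llama_tokenize(vocab, prompt.c_str(), prompt.size(), NULL, 0, true, true);
    std::vector<llama_token> tokens(n_prompt);
    llama_tokenize(vocab, prompt.c_str(), prompt.size(), tokens.data(), tokens.size(), true, true);

    // create a context just big enough for the prompt plus the new tokens
    llama_context_params cparams = llama_context_default_params();
    cparams.n_ctx   = n_prompt + n_predict;
    cparams.n_batch = n_prompt;
    llama_context * ctx = llama_init_from_model(model, cparams);

    // greedy sampler: always pick the most probable token
    llama_sampler * smpl = llama_sampler_chain_init(llama_sampler_chain_default_params());
    llama_sampler_chain_add(smpl, llama_sampler_init_greedy());

    // evaluate the prompt, then feed each sampled token back in
    llama_batch batch = llama_batch_get_one(tokens.data(), tokens.size());
    for (int n_decoded = 0; n_decoded + batch.n_tokens < n_prompt + n_predict; ) {
        if (llama_decode(ctx, batch) != 0) break;
        n_decoded += batch.n_tokens;

        llama_token tok = llama_sampler_sample(smpl, ctx, -1);
        if (llama_vocab_is_eog(vocab, tok)) break; // stop at end-of-generation

        char buf[128];
        const int n = llama_token_to_piece(vocab, tok, buf, sizeof(buf), 0, true);
        fwrite(buf, 1, n, stdout);
        fflush(stdout);

        batch = llama_batch_get_one(&tok, 1); // next step decodes only the new token
    }
    printf("\n");

    llama_sampler_free(smpl);
    llama_free(ctx);
    llama_model_free(model);
    return 0;
}
```

Greedy sampling keeps the sketch deterministic; the real example adds error handling and the timing report shown above. Building it amounts to compiling against the llama.cpp headers and linking the library (for instance `c++ sketch.cpp -lllama`, exact flags depending on how llama.cpp was built).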