From b333b06772c89d96aacb5490d6a219fba7c09cc6 Mon Sep 17 00:00:00 2001
From: Mitja Felicijan
Date: Thu, 12 Feb 2026 20:57:17 +0100
Subject: Engage!

---
 llama.cpp/examples/lookup/README.md | 12 ++++++++++++
 1 file changed, 12 insertions(+)
 create mode 100644 llama.cpp/examples/lookup/README.md

diff --git a/llama.cpp/examples/lookup/README.md b/llama.cpp/examples/lookup/README.md
new file mode 100644
index 0000000..07d7384
--- /dev/null
+++ b/llama.cpp/examples/lookup/README.md
@@ -0,0 +1,12 @@
+# llama.cpp/examples/lookup
+
+Demonstration of Prompt Lookup Decoding
+
+https://github.com/apoorvumang/prompt-lookup-decoding
+
+The key parameters for lookup decoding are `ngram_min`, `ngram_max`, and `n_draft`. The first two determine the sizes of the n-grams to search for in the prompt for a match. The latter specifies how many subsequent tokens to draft if a match is found.
+
+More info:
+
+https://github.com/ggml-org/llama.cpp/pull/4484
+https://github.com/ggml-org/llama.cpp/issues/4226
--
cgit v1.2.3