author    Mitja Felicijan <mitja.felicijan@gmail.com>  2026-02-12 20:57:17 +0100
committer Mitja Felicijan <mitja.felicijan@gmail.com>  2026-02-12 20:57:17 +0100
commit    b333b06772c89d96aacb5490d6a219fba7c09cc6 (patch)
tree      211df60083a5946baa2ed61d33d8121b7e251b06 /llama.cpp/examples/lookup/README.md
Engage!
Diffstat (limited to 'llama.cpp/examples/lookup/README.md')
 llama.cpp/examples/lookup/README.md | 12 ++++++++++++
 1 file changed, 12 insertions(+), 0 deletions(-)
diff --git a/llama.cpp/examples/lookup/README.md b/llama.cpp/examples/lookup/README.md
new file mode 100644
index 0000000..07d7384
--- /dev/null
+++ b/llama.cpp/examples/lookup/README.md
@@ -0,0 +1,12 @@
+# llama.cpp/examples/lookup
+
+Demonstration of Prompt Lookup Decoding
+
+https://github.com/apoorvumang/prompt-lookup-decoding
+
+The key parameters for lookup decoding are `ngram_min`, `ngram_max`, and `n_draft`. The first two set the minimum and maximum length of the n-grams to search for in the prompt when looking for a match. The latter sets how many subsequent tokens to draft once a match is found.
+
+More info:
+
+https://github.com/ggml-org/llama.cpp/pull/4484
+https://github.com/ggml-org/llama.cpp/issues/4226
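
The drafting step the README describes can be sketched as follows. This is a minimal illustration of the idea, not llama.cpp's actual implementation; the function name `find_draft` and its token-list representation are hypothetical. It tries the longest n-gram first (`ngram_max` down to `ngram_min`), looks for an earlier occurrence of the current tail in the context, and drafts the `n_draft` tokens that followed that occurrence.

```python
def find_draft(tokens, ngram_min=1, ngram_max=4, n_draft=16):
    """Sketch of prompt lookup drafting (hypothetical helper, not llama.cpp API).

    tokens: the prompt plus tokens generated so far, as a list of token ids.
    Returns a list of up to n_draft drafted tokens, or [] if no n-gram matches.
    """
    # Prefer longer n-grams: a longer match is more likely to predict
    # the continuation correctly.
    for n in range(ngram_max, ngram_min - 1, -1):
        if len(tokens) < n:
            continue
        tail = tokens[-n:]  # the n-gram ending at the current position
        # Scan earlier positions, most recent first, excluding the tail itself.
        for i in range(len(tokens) - n - 1, -1, -1):
            if tokens[i:i + n] == tail:
                # Draft the tokens that followed the earlier occurrence.
                return tokens[i + n:i + n + n_draft]
    return []
```

The drafted tokens are then verified in a single batched forward pass, as in speculative decoding: accepted tokens are kept, and generation resumes from the first mismatch.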