# llama.cpp/examples/lookup

Demonstration of Prompt Lookup Decoding

https://github.com/apoorvumang/prompt-lookup-decoding
The key parameters for lookup decoding are `ngram_min`, `ngram_max`, and `n_draft`. The first two set the minimum and maximum size of the n-grams to search for in the prompt when matching against the most recently generated tokens. The latter specifies how many subsequent tokens to draft once a match is found.
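The core idea can be sketched as follows. This is a minimal, illustrative Python sketch of prompt lookup drafting, not the actual llama.cpp implementation; the function name `find_draft` and the default parameter values are chosen here for illustration only.

```python
def find_draft(tokens, ngram_min=1, ngram_max=4, n_draft=8):
    """Sketch of prompt lookup decoding: match the tail of the
    token sequence against an earlier occurrence in the context
    and draft the tokens that followed that occurrence."""
    # Prefer larger n-grams: a longer match is stronger evidence
    # that the continuation will repeat as well.
    for n in range(ngram_max, ngram_min - 1, -1):
        if len(tokens) < n:
            continue
        tail = tokens[-n:]
        # Scan earlier occurrences of the tail n-gram, most recent first.
        for i in range(len(tokens) - n - 1, -1, -1):
            if tokens[i:i + n] == tail:
                # Draft up to n_draft tokens that followed the match.
                return tokens[i + n:i + n + n_draft]
    return []  # no match: fall back to normal decoding
```

The drafted tokens are then verified in a single batch by the target model, as in other speculative-decoding schemes; any tokens after the first mismatch are discarded.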

More info:

https://github.com/ggml-org/llama.cpp/pull/4484
https://github.com/ggml-org/llama.cpp/issues/4226