# llama.cpp/example/embedding

This example demonstrates how to generate a high-dimensional embedding vector for a given text with llama.cpp.
 4
 5## Quick Start
 6
 7To get started right away, run the following command, making sure to use the correct path for the model you have:
 8
 9### Unix-based systems (Linux, macOS, etc.):
10
11```bash
12./llama-embedding -m ./path/to/model --pooling mean --log-disable -p "Hello World!" 2>/dev/null
13```
14
### Windows:

```powershell
llama-embedding.exe -m ./path/to/model --pooling mean --log-disable -p "Hello World!" 2>$null
```

The above command will output space-separated float values.
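
That space-separated output is easy to read back into a vector. A minimal sketch (not part of llama.cpp; it assumes one embedding vector per line of output):

```python
def parse_embedding(line: str) -> list[float]:
    """Parse one line of space-separated float output into a list."""
    return [float(tok) for tok in line.split()]
```

For example, `parse_embedding("0.1 -0.2 0.3")` yields `[0.1, -0.2, 0.3]`.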

## extra parameters
### --embd-normalize $integer$

| $integer$ | description         | formula |
|-----------|---------------------|---------|
| $-1$      | none                |         |
| $0$       | max absolute int16  | $\Large{{32760 * x_i} \over\max \lvert x_i\rvert}$ |
| $1$       | taxicab             | $\Large{x_i \over\sum \lvert x_i\rvert}$ |
| $2$       | euclidean (default) | $\Large{x_i \over\sqrt{\sum x_i^2}}$ |
| $>2$      | p-norm              | $\Large{x_i \over\sqrt[p]{\sum \lvert x_i\rvert^p}}$ |
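
The formulas above can be sketched in Python for reference. This is an illustration of the table, not llama.cpp's implementation:

```python
def normalize(values: list[float], norm: int = 2) -> list[float]:
    """Apply the --embd-normalize options from the table above (sketch)."""
    if norm == -1:                                   # none
        return list(values)
    if norm == 0:                                    # max absolute int16
        m = max(abs(x) for x in values)
        return [x * 32760 / m for x in values]
    # taxicab (1), euclidean (2, default), and general p-norm (>2)
    s = sum(abs(x) ** norm for x in values) ** (1 / norm)
    return [x / s for x in values]
```

For instance, `normalize([3.0, 4.0], 2)` returns `[0.6, 0.8]`, since the euclidean norm of the input is 5.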

### --embd-output-format $'string'$

| $'string'$ | description                  |                                                   |
|------------|------------------------------|---------------------------------------------------|
| ''         | same as before               | (default)                                         |
| 'array'    | single embedding             | $[[x_1,...,x_n]]$                                 |
|            | multiple embeddings          | $[[x_1,...,x_n],[x_1,...,x_n],...,[x_1,...,x_n]]$ |
| 'json'     | openai style                 |                                                   |
| 'json+'    | add cosine similarity matrix |                                                   |
| 'raw'      | plain text output            |                                                   |
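
Assuming the `'array'` format emits the JSON shown above, its output can be parsed and a pairwise cosine-similarity matrix (the extra information `'json+'` reports) computed as follows. This is a hedged sketch, not llama.cpp's own code:

```python
import json
import math

def cosine_matrix(array_output: str) -> list[list[float]]:
    """Parse 'array'-format output ([[...],[...],...]) and return the
    pairwise cosine-similarity matrix between the embeddings."""
    embeddings = json.loads(array_output)

    def cos(a: list[float], b: list[float]) -> float:
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(y * y for y in b))
        return dot / (na * nb)

    return [[cos(a, b) for b in embeddings] for a in embeddings]
```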

### --embd-separator $"string"$

| $"string"$   |               |
|--------------|---------------|
| "\n"         | (default)     |
| "<#embSep#>" | for example   |
| "<#sep#>"    | other example |

## examples
### Unix-based systems (Linux, macOS, etc.):

```bash
./llama-embedding -p 'Castle<#sep#>Stronghold<#sep#>Dog<#sep#>Cat' --pooling mean --embd-separator '<#sep#>' --embd-normalize 2 --embd-output-format '' -m './path/to/model.gguf' --n-gpu-layers 99 --log-disable 2>/dev/null
```

### Windows:

```powershell
llama-embedding.exe -p 'Castle<#sep#>Stronghold<#sep#>Dog<#sep#>Cat' --pooling mean --embd-separator '<#sep#>' --embd-normalize 2 --embd-output-format '' -m './path/to/model.gguf' --n-gpu-layers 99 --log-disable 2>$null
```