# export-lora

Apply LoRA adapters to a base model and export the resulting model.

```
usage: llama-export-lora [options]

options:
  -m,    --model                  model path from which to load base model (default '')
         --lora FNAME             path to LoRA adapter  (can be repeated to use multiple adapters)
         --lora-scaled FNAME S    path to LoRA adapter with user defined scaling S  (can be repeated to use multiple adapters)
  -t,    --threads N              number of threads to use during computation (default: 4)
  -o,    --output FNAME           output file (default: 'ggml-lora-merged-f16.gguf')
```

For example:

```bash
./bin/llama-export-lora \
    -m open-llama-3b-v2.gguf \
    -o open-llama-3b-v2-english2tokipona-chat.gguf \
    --lora lora-open-llama-3b-v2-english2tokipona-chat-LATEST.gguf
```
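
The merged file is a regular GGUF model, so it can be used like any other model. As an illustration (file name taken from the example above; the prompt is only a placeholder):

```bash
# Load the merged model with llama-cli and run a test prompt.
./bin/llama-cli \
    -m open-llama-3b-v2-english2tokipona-chat.gguf \
    -p "Translate to Toki Pona: hello"
```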

Multiple LoRA adapters can be applied by passing multiple `--lora FNAME` or `--lora-scaled FNAME S` command-line parameters:

```bash
./bin/llama-export-lora \
    -m your_base_model.gguf \
    -o your_merged_model.gguf \
    --lora-scaled lora_task_A.gguf 0.5 \
    --lora-scaled lora_task_B.gguf 0.5
```
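
The two forms can also be mixed in one invocation: a plain `--lora` adapter is applied at its default strength, while `--lora-scaled` lets you down- or up-weight individual adapters. A sketch, assuming placeholder adapter files and a default scale of 1.0 for `--lora`:

```bash
# Merge one adapter at its default strength and a second one at reduced strength.
./bin/llama-export-lora \
    -m your_base_model.gguf \
    -o your_merged_model.gguf \
    --lora lora_task_A.gguf \
    --lora-scaled lora_task_B.gguf 0.3
```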