| author | Mitja Felicijan <mitja.felicijan@gmail.com> | 2026-02-12 20:57:17 +0100 |
|---|---|---|
| committer | Mitja Felicijan <mitja.felicijan@gmail.com> | 2026-02-12 20:57:17 +0100 |
| commit | b333b06772c89d96aacb5490d6a219fba7c09cc6 (patch) | |
| tree | 211df60083a5946baa2ed61d33d8121b7e251b06 /llama.cpp/examples/training/README.md | |
| download | llmnpc-b333b06772c89d96aacb5490d6a219fba7c09cc6.tar.gz | |
Engage!
Diffstat (limited to 'llama.cpp/examples/training/README.md')
| -rw-r--r-- | llama.cpp/examples/training/README.md | 17 |
1 file changed, 17 insertions, 0 deletions
diff --git a/llama.cpp/examples/training/README.md b/llama.cpp/examples/training/README.md
new file mode 100644
index 0000000..df42527
--- /dev/null
+++ b/llama.cpp/examples/training/README.md
@@ -0,0 +1,17 @@
+# llama.cpp/examples/training
+
+This directory contains examples related to language model training using llama.cpp/GGML.
+So far finetuning is technically functional (for FP32 models and limited hardware setups) but the code is very much WIP.
+Finetuning of Stories 260K and LLaMA 3.2 1b seems to work with 24 GB of memory.
+**For CPU training, compile llama.cpp without any additional backends such as CUDA.**
+**For CUDA training, use the maximum number of GPU layers.**
+
+Proof of concept:
+
+``` sh
+export model_name=llama_3.2-1b && export quantization=f32
+./build/bin/llama-finetune --file wikitext-2-raw/wiki.test.raw -ngl 999 --model models/${model_name}-${quantization}.gguf -c 512 -b 512 -ub 512
+./build/bin/llama-perplexity --file wikitext-2-raw/wiki.test.raw -ngl 999 --model finetuned-model.gguf
+```
+
+The perplexity value of the finetuned model should be lower after training on the test set for 2 epochs.
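As a companion to the README above, here is a minimal build-and-data sketch (not part of the commit) for the CPU-only vs. CUDA builds it mentions. The `GGML_CUDA` CMake option and the `scripts/get-wikitext-2.sh` download script are assumed from upstream llama.cpp and may differ in this tree:

``` sh
# Sketch only: CMake options and helper script names are assumed from upstream llama.cpp.

# CPU training: build without any additional backends such as CUDA.
cmake -B build-cpu
cmake --build build-cpu --config Release -j

# CUDA training: enable the CUDA backend; at runtime, pass -ngl 999 so the
# maximum number of GPU layers is offloaded, as in the proof of concept.
cmake -B build-cuda -DGGML_CUDA=ON
cmake --build build-cuda --config Release -j

# Fetch the wikitext-2 raw data used by llama-finetune and llama-perplexity above
# (script name assumed from the upstream repository).
./scripts/get-wikitext-2.sh
```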
