# llama.cpp/examples/training

This directory contains examples related to language model training using llama.cpp/GGML.
So far, finetuning is technically functional (for FP32 models and limited hardware setups), but the code is very much a work in progress.
Finetuning of Stories 260K and Llama 3.2 1B seems to work with 24 GB of memory.
**For CPU training, compile llama.cpp without any additional backends such as CUDA.**
**For CUDA training, use the maximum number of GPU layers.**
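
A minimal sketch of the two build configurations, assuming the standard CMake build of llama.cpp; the exact flag names can vary between versions:

``` sh
# CPU-only training build: leave out extra backends such as CUDA.
cmake -B build
cmake --build build --config Release

# CUDA training build: enable the CUDA backend, then offload all layers
# at runtime with -ngl 999 as in the commands below.
cmake -B build-cuda -DGGML_CUDA=ON
cmake --build build-cuda --config Release
```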

Proof of concept:

``` sh
export model_name=llama_3.2-1b && export quantization=f32
./build/bin/llama-finetune --file wikitext-2-raw/wiki.test.raw -ngl 999 --model models/${model_name}-${quantization}.gguf -c 512 -b 512 -ub 512
./build/bin/llama-perplexity --file wikitext-2-raw/wiki.test.raw -ngl 999 --model finetuned-model.gguf
```
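
The commands above assume the WikiText-2 raw dataset and an F32 GGUF of the model are already available locally. One hypothetical way to prepare both, assuming the helper script and converter shipped in the main llama.cpp repository (paths may differ in your checkout):

``` sh
# Fetch the WikiText-2 raw dataset referenced above (assumed script location).
./scripts/get-wikitext-2.sh
# Convert a local Hugging Face checkout to the F32 GGUF expected above;
# the source path is a placeholder.
python convert_hf_to_gguf.py path/to/Llama-3.2-1B --outtype f32 \
    --outfile models/llama_3.2-1b-f32.gguf
```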

After training on the test set for 2 epochs, the perplexity of the finetuned model should be lower than that of the original model.
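
One way to check that claim is to measure the baseline perplexity of the unmodified model with the same tool and compare it against the finetuned result (this reuses the shell variables from the proof-of-concept block):

``` sh
# Baseline perplexity of the original model, for comparison with finetuned-model.gguf.
./build/bin/llama-perplexity --file wikitext-2-raw/wiki.test.raw -ngl 999 \
    --model models/${model_name}-${quantization}.gguf
```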