# llama.cpp

![llama](https://user-images.githubusercontent.com/1991296/230134379-7181e485-c521-4d23-a0d6-f7b3b61ba524.png)

[![License: MIT](https://img.shields.io/badge/license-MIT-blue.svg)](https://opensource.org/licenses/MIT)
[![Release](https://img.shields.io/github/v/release/ggml-org/llama.cpp)](https://github.com/ggml-org/llama.cpp/releases)
[![Server](https://github.com/ggml-org/llama.cpp/actions/workflows/server.yml/badge.svg)](https://github.com/ggml-org/llama.cpp/actions/workflows/server.yml)

[Manifesto](https://github.com/ggml-org/llama.cpp/discussions/205) / [ggml](https://github.com/ggml-org/ggml) / [ops](https://github.com/ggml-org/llama.cpp/blob/master/docs/ops.md)

LLM inference in C/C++

## Recent API changes

- [Changelog for `libllama` API](https://github.com/ggml-org/llama.cpp/issues/9289)
- [Changelog for `llama-server` REST API](https://github.com/ggml-org/llama.cpp/issues/9291)

## Hot topics

- **[guide : using the new WebUI of llama.cpp](https://github.com/ggml-org/llama.cpp/discussions/16938)**
- [guide : running gpt-oss with llama.cpp](https://github.com/ggml-org/llama.cpp/discussions/15396)
- [[FEEDBACK] Better packaging for llama.cpp to support downstream consumers 🤗](https://github.com/ggml-org/llama.cpp/discussions/15313)
- Support for the `gpt-oss` model with native MXFP4 format has been added | [PR](https://github.com/ggml-org/llama.cpp/pull/15091) | [Collaboration with NVIDIA](https://blogs.nvidia.com/blog/rtx-ai-garage-openai-oss) | [Comment](https://github.com/ggml-org/llama.cpp/discussions/15095)
- Multimodal support arrived in `llama-server`: [#12898](https://github.com/ggml-org/llama.cpp/pull/12898) | [documentation](./docs/multimodal.md)
- VS Code extension for FIM completions: https://github.com/ggml-org/llama.vscode
- Vim/Neovim plugin for FIM completions: https://github.com/ggml-org/llama.vim
- Hugging Face Inference Endpoints now support GGUF out of the box! https://github.com/ggml-org/llama.cpp/discussions/9669
- Hugging Face GGUF editor: [discussion](https://github.com/ggml-org/llama.cpp/discussions/9268) | [tool](https://huggingface.co/spaces/CISCai/gguf-editor)
----

## Quick start

Getting started with `llama.cpp` is straightforward. Here are several ways to install it on your machine:

- Install `llama.cpp` using [brew, nix or winget](docs/install.md)
- Run with Docker - see our [Docker documentation](docs/docker.md)
- Download pre-built binaries from the [releases page](https://github.com/ggml-org/llama.cpp/releases)
- Build from source by cloning this repository - check out [our build guide](docs/build.md)

Once installed, you'll need a model to work with. Head to the [Obtaining and quantizing models](#obtaining-and-quantizing-models) section to learn more.

Example commands:

```sh
# Use a local model file
llama-cli -m my_model.gguf

# Or download and run a model directly from Hugging Face
llama-cli -hf ggml-org/gemma-3-1b-it-GGUF

# Launch OpenAI-compatible API server
llama-server -hf ggml-org/gemma-3-1b-it-GGUF
```
## Description

The main goal of `llama.cpp` is to enable LLM inference with minimal setup and state-of-the-art performance on a wide
range of hardware - locally and in the cloud.

- Plain C/C++ implementation without any dependencies
- Apple silicon is a first-class citizen - optimized via ARM NEON, Accelerate and Metal frameworks
- AVX, AVX2, AVX512 and AMX support for x86 architectures
- RVV, ZVFH, ZFH, ZICBOP and ZIHINTPAUSE support for RISC-V architectures
- 1.5-bit, 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, and 8-bit integer quantization for faster inference and reduced memory use
- Custom CUDA kernels for running LLMs on NVIDIA GPUs (support for AMD GPUs via HIP and Moore Threads GPUs via MUSA)
- Vulkan and SYCL backend support
- CPU+GPU hybrid inference to partially accelerate models larger than the total VRAM capacity (see the sketch below)

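As a minimal sketch of hybrid inference: the `-ngl`/`--n-gpu-layers` flag controls how many model layers are offloaded to the GPU, with the remainder running on the CPU. The layer count here is only illustrative - tune it to your VRAM:

```sh
# offload 32 of the model's layers to the GPU; the rest stay on the CPU
llama-cli -m model.gguf -ngl 32
```
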
The `llama.cpp` project is the main playground for developing new features for the [ggml](https://github.com/ggml-org/ggml) library.

<details>
<summary>Models</summary>

Typically finetunes of the base models below are supported as well.

Instructions for adding support for new models: [HOWTO-add-model.md](docs/development/HOWTO-add-model.md)

#### Text-only

- [x] LLaMA 🦙
- [x] LLaMA 2 🦙🦙
- [x] LLaMA 3 🦙🦙🦙
- [x] [Mistral 7B](https://huggingface.co/mistralai/Mistral-7B-v0.1)
- [x] [Mixtral MoE](https://huggingface.co/models?search=mistral-ai/Mixtral)
- [x] [DBRX](https://huggingface.co/databricks/dbrx-instruct)
- [x] [Jamba](https://huggingface.co/ai21labs)
- [x] [Falcon](https://huggingface.co/models?search=tiiuae/falcon)
- [x] [Chinese LLaMA / Alpaca](https://github.com/ymcui/Chinese-LLaMA-Alpaca) and [Chinese LLaMA-2 / Alpaca-2](https://github.com/ymcui/Chinese-LLaMA-Alpaca-2)
- [x] [Vigogne (French)](https://github.com/bofenghuang/vigogne)
- [x] [BERT](https://github.com/ggml-org/llama.cpp/pull/5423)
- [x] [Koala](https://bair.berkeley.edu/blog/2023/04/03/koala/)
- [x] [Baichuan 1 & 2](https://huggingface.co/models?search=baichuan-inc/Baichuan) + [derivations](https://huggingface.co/hiyouga/baichuan-7b-sft)
- [x] [Aquila 1 & 2](https://huggingface.co/models?search=BAAI/Aquila)
- [x] [Starcoder models](https://github.com/ggml-org/llama.cpp/pull/3187)
- [x] [Refact](https://huggingface.co/smallcloudai/Refact-1_6B-fim)
- [x] [MPT](https://github.com/ggml-org/llama.cpp/pull/3417)
- [x] [Bloom](https://github.com/ggml-org/llama.cpp/pull/3553)
- [x] [Yi models](https://huggingface.co/models?search=01-ai/Yi)
- [x] [StableLM models](https://huggingface.co/stabilityai)
- [x] [Deepseek models](https://huggingface.co/models?search=deepseek-ai/deepseek)
- [x] [Qwen models](https://huggingface.co/models?search=Qwen/Qwen)
- [x] [PLaMo-13B](https://github.com/ggml-org/llama.cpp/pull/3557)
- [x] [Phi models](https://huggingface.co/models?search=microsoft/phi)
- [x] [PhiMoE](https://github.com/ggml-org/llama.cpp/pull/11003)
- [x] [GPT-2](https://huggingface.co/gpt2)
- [x] [Orion 14B](https://github.com/ggml-org/llama.cpp/pull/5118)
- [x] [InternLM2](https://huggingface.co/models?search=internlm2)
- [x] [CodeShell](https://github.com/WisdomShell/codeshell)
- [x] [Gemma](https://ai.google.dev/gemma)
- [x] [Mamba](https://github.com/state-spaces/mamba)
- [x] [Grok-1](https://huggingface.co/keyfan/grok-1-hf)
- [x] [Xverse](https://huggingface.co/models?search=xverse)
- [x] [Command-R models](https://huggingface.co/models?search=CohereForAI/c4ai-command-r)
- [x] [SEA-LION](https://huggingface.co/models?search=sea-lion)
- [x] [GritLM-7B](https://huggingface.co/GritLM/GritLM-7B) + [GritLM-8x7B](https://huggingface.co/GritLM/GritLM-8x7B)
- [x] [OLMo](https://allenai.org/olmo)
- [x] [OLMo 2](https://allenai.org/olmo)
- [x] [OLMoE](https://huggingface.co/allenai/OLMoE-1B-7B-0924)
- [x] [Granite models](https://huggingface.co/collections/ibm-granite/granite-code-models-6624c5cec322e4c148c8b330)
- [x] [GPT-NeoX](https://github.com/EleutherAI/gpt-neox) + [Pythia](https://github.com/EleutherAI/pythia)
- [x] [Snowflake-Arctic MoE](https://huggingface.co/collections/Snowflake/arctic-66290090abe542894a5ac520)
- [x] [Smaug](https://huggingface.co/models?search=Smaug)
- [x] [Poro 34B](https://huggingface.co/LumiOpen/Poro-34B)
- [x] [Bitnet b1.58 models](https://huggingface.co/1bitLLM)
- [x] [Flan T5](https://huggingface.co/models?search=flan-t5)
- [x] [Open Elm models](https://huggingface.co/collections/apple/openelm-instruct-models-6619ad295d7ae9f868b759ca)
- [x] [ChatGLM3-6b](https://huggingface.co/THUDM/chatglm3-6b) + [ChatGLM4-9b](https://huggingface.co/THUDM/glm-4-9b) + [GLMEdge-1.5b](https://huggingface.co/THUDM/glm-edge-1.5b-chat) + [GLMEdge-4b](https://huggingface.co/THUDM/glm-edge-4b-chat)
- [x] [GLM-4-0414](https://huggingface.co/collections/THUDM/glm-4-0414-67f3cbcb34dd9d252707cb2e)
- [x] [SmolLM](https://huggingface.co/collections/HuggingFaceTB/smollm-6695016cad7167254ce15966)
- [x] [EXAONE-3.0-7.8B-Instruct](https://huggingface.co/LGAI-EXAONE/EXAONE-3.0-7.8B-Instruct)
- [x] [FalconMamba Models](https://huggingface.co/collections/tiiuae/falconmamba-7b-66b9a580324dd1598b0f6d4a)
- [x] [Jais](https://huggingface.co/inceptionai/jais-13b-chat)
- [x] [Bielik-11B-v2.3](https://huggingface.co/collections/speakleash/bielik-11b-v23-66ee813238d9b526a072408a)
- [x] [RWKV-7](https://huggingface.co/collections/shoumenchougou/rwkv7-gxx-gguf)
- [x] [RWKV-6](https://github.com/BlinkDL/RWKV-LM)
- [x] [QRWKV-6](https://huggingface.co/recursal/QRWKV6-32B-Instruct-Preview-v0.1)
- [x] [GigaChat-20B-A3B](https://huggingface.co/ai-sage/GigaChat-20B-A3B-instruct)
- [x] [Trillion-7B-preview](https://huggingface.co/trillionlabs/Trillion-7B-preview)
- [x] [Ling models](https://huggingface.co/collections/inclusionAI/ling-67c51c85b34a7ea0aba94c32)
- [x] [LFM2 models](https://huggingface.co/collections/LiquidAI/lfm2-686d721927015b2ad73eaa38)
- [x] [Hunyuan models](https://huggingface.co/collections/tencent/hunyuan-dense-model-6890632cda26b19119c9c5e7)
- [x] [BailingMoeV2 (Ring/Ling 2.0) models](https://huggingface.co/collections/inclusionAI/ling-v2-68bf1dd2fc34c306c1fa6f86)

#### Multimodal

- [x] [LLaVA 1.5 models](https://huggingface.co/collections/liuhaotian/llava-15-653aac15d994e992e2677a7e), [LLaVA 1.6 models](https://huggingface.co/collections/liuhaotian/llava-16-65b9e40155f60fd046a5ccf2)
- [x] [BakLLaVA](https://huggingface.co/models?search=SkunkworksAI/Bakllava)
- [x] [Obsidian](https://huggingface.co/NousResearch/Obsidian-3B-V0.5)
- [x] [ShareGPT4V](https://huggingface.co/models?search=Lin-Chen/ShareGPT4V)
- [x] [MobileVLM 1.7B/3B models](https://huggingface.co/models?search=mobileVLM)
- [x] [Yi-VL](https://huggingface.co/models?search=Yi-VL)
- [x] [Mini CPM](https://huggingface.co/models?search=MiniCPM)
- [x] [Moondream](https://huggingface.co/vikhyatk/moondream2)
- [x] [Bunny](https://github.com/BAAI-DCAI/Bunny)
- [x] [GLM-EDGE](https://huggingface.co/models?search=glm-edge)
- [x] [Qwen2-VL](https://huggingface.co/collections/Qwen/qwen2-vl-66cee7455501d7126940800d)
- [x] [LFM2-VL](https://huggingface.co/collections/LiquidAI/lfm2-vl-68963bbc84a610f7638d5ffa)

</details>

<details>
<summary>Bindings</summary>

- Python: [ddh0/easy-llama](https://github.com/ddh0/easy-llama)
- Python: [abetlen/llama-cpp-python](https://github.com/abetlen/llama-cpp-python)
- Go: [go-skynet/go-llama.cpp](https://github.com/go-skynet/go-llama.cpp)
- Node.js: [withcatai/node-llama-cpp](https://github.com/withcatai/node-llama-cpp)
- JS/TS (llama.cpp server client): [lgrammel/modelfusion](https://modelfusion.dev/integration/model-provider/llamacpp)
- JS/TS (Programmable Prompt Engine CLI): [offline-ai/cli](https://github.com/offline-ai/cli)
- JavaScript/Wasm (works in browser): [tangledgroup/llama-cpp-wasm](https://github.com/tangledgroup/llama-cpp-wasm)
- TypeScript/Wasm (nicer API, available on npm): [ngxson/wllama](https://github.com/ngxson/wllama)
- Ruby: [yoshoku/llama_cpp.rb](https://github.com/yoshoku/llama_cpp.rb)
- Rust (more features): [edgenai/llama_cpp-rs](https://github.com/edgenai/llama_cpp-rs)
- Rust (nicer API): [mdrokz/rust-llama.cpp](https://github.com/mdrokz/rust-llama.cpp)
- Rust (more direct bindings): [utilityai/llama-cpp-rs](https://github.com/utilityai/llama-cpp-rs)
- Rust (automated build from crates.io): [ShelbyJenkins/llm_client](https://github.com/ShelbyJenkins/llm_client)
- C#/.NET: [SciSharp/LLamaSharp](https://github.com/SciSharp/LLamaSharp)
- C#/VB.NET (more features - community license): [LM-Kit.NET](https://docs.lm-kit.com/lm-kit-net/index.html)
- Scala 3: [donderom/llm4s](https://github.com/donderom/llm4s)
- Clojure: [phronmophobic/llama.clj](https://github.com/phronmophobic/llama.clj)
- React Native: [mybigday/llama.rn](https://github.com/mybigday/llama.rn)
- Java: [kherud/java-llama.cpp](https://github.com/kherud/java-llama.cpp)
- Java: [QuasarByte/llama-cpp-jna](https://github.com/QuasarByte/llama-cpp-jna)
- Zig: [deins/llama.cpp.zig](https://github.com/Deins/llama.cpp.zig)
- Flutter/Dart: [netdur/llama_cpp_dart](https://github.com/netdur/llama_cpp_dart)
- Flutter: [xuegao-tzx/Fllama](https://github.com/xuegao-tzx/Fllama)
- PHP (API bindings and features built on top of llama.cpp): [distantmagic/resonance](https://github.com/distantmagic/resonance) [(more info)](https://github.com/ggml-org/llama.cpp/pull/6326)
- Guile Scheme: [guile_llama_cpp](https://savannah.nongnu.org/projects/guile-llama-cpp)
- Swift: [srgtuszy/llama-cpp-swift](https://github.com/srgtuszy/llama-cpp-swift)
- Swift: [ShenghaiWang/SwiftLlama](https://github.com/ShenghaiWang/SwiftLlama)
- Delphi: [Embarcadero/llama-cpp-delphi](https://github.com/Embarcadero/llama-cpp-delphi)
- Go (no CGo needed): [hybridgroup/yzma](https://github.com/hybridgroup/yzma)
- Android: [llama.android](/examples/llama.android)

</details>

<details>
<summary>UIs</summary>

*(to have a project listed here, it should clearly state that it depends on `llama.cpp`)*

- [AI Sublime Text plugin](https://github.com/yaroslavyaroslav/OpenAI-sublime-text) (MIT)
- [Autopen](https://github.com/blackhole89/autopen) (GPL)
- [BonzAI App](https://apps.apple.com/us/app/bonzai-your-local-ai-agent/id6752847988) (proprietary)
- [cztomsik/ava](https://github.com/cztomsik/ava) (MIT)
- [Dot](https://github.com/alexpinel/Dot) (GPL)
- [eva](https://github.com/ylsdamxssjxxdd/eva) (MIT)
- [iohub/collama](https://github.com/iohub/coLLaMA) (Apache-2.0)
- [janhq/jan](https://github.com/janhq/jan) (AGPL)
- [johnbean393/Sidekick](https://github.com/johnbean393/Sidekick) (MIT)
- [KanTV](https://github.com/zhouwg/kantv?tab=readme-ov-file) (Apache-2.0)
- [KodiBot](https://github.com/firatkiral/kodibot) (GPL)
- [llama.vim](https://github.com/ggml-org/llama.vim) (MIT)
- [LARS](https://github.com/abgulati/LARS) (AGPL)
- [Llama Assistant](https://github.com/vietanhdev/llama-assistant) (GPL)
- [LlamaLib](https://github.com/undreamai/LlamaLib) (Apache-2.0)
- [LLMFarm](https://github.com/guinmoon/LLMFarm?tab=readme-ov-file) (MIT)
- [LLMUnity](https://github.com/undreamai/LLMUnity) (MIT)
- [LMStudio](https://lmstudio.ai/) (proprietary)
- [LocalAI](https://github.com/mudler/LocalAI) (MIT)
- [LostRuins/koboldcpp](https://github.com/LostRuins/koboldcpp) (AGPL)
- [MindMac](https://mindmac.app) (proprietary)
- [MindWorkAI/AI-Studio](https://github.com/MindWorkAI/AI-Studio) (FSL-1.1-MIT)
- [Mobile-Artificial-Intelligence/maid](https://github.com/Mobile-Artificial-Intelligence/maid) (MIT)
- [Mozilla-Ocho/llamafile](https://github.com/Mozilla-Ocho/llamafile) (Apache-2.0)
- [nat/openplayground](https://github.com/nat/openplayground) (MIT)
- [nomic-ai/gpt4all](https://github.com/nomic-ai/gpt4all) (MIT)
- [ollama/ollama](https://github.com/ollama/ollama) (MIT)
- [oobabooga/text-generation-webui](https://github.com/oobabooga/text-generation-webui) (AGPL)
- [PocketPal AI](https://github.com/a-ghorbani/pocketpal-ai) (MIT)
- [psugihara/FreeChat](https://github.com/psugihara/FreeChat) (MIT)
- [ptsochantaris/emeltal](https://github.com/ptsochantaris/emeltal) (MIT)
- [pythops/tenere](https://github.com/pythops/tenere) (AGPL)
- [ramalama](https://github.com/containers/ramalama) (MIT)
- [semperai/amica](https://github.com/semperai/amica) (MIT)
- [withcatai/catai](https://github.com/withcatai/catai) (MIT)

</details>

<details>
<summary>Tools</summary>

- [akx/ggify](https://github.com/akx/ggify) – download PyTorch models from Hugging Face Hub and convert them to GGML
- [akx/ollama-dl](https://github.com/akx/ollama-dl) – download models from the Ollama library to be used directly with llama.cpp
- [crashr/gppm](https://github.com/crashr/gppm) – launch llama.cpp instances utilizing NVIDIA Tesla P40 or P100 GPUs with reduced idle power consumption
- [gpustack/gguf-parser](https://github.com/gpustack/gguf-parser-go/tree/main/cmd/gguf-parser) – review/check the GGUF file and estimate the memory usage
- [Styled Lines](https://marketplace.unity.com/packages/tools/generative-ai/styled-lines-llama-cpp-model-292902) (proprietary licensed, async wrapper of inference part for game development in Unity3d with pre-built Mobile and Web platform wrappers and a model example)
- [unslothai/unsloth](https://github.com/unslothai/unsloth) – 🦥 exports/saves fine-tuned and trained models to GGUF (Apache-2.0)

</details>

<details>
<summary>Infrastructure</summary>

- [Paddler](https://github.com/intentee/paddler) - Open-source LLMOps platform for hosting and scaling AI in your own infrastructure
- [GPUStack](https://github.com/gpustack/gpustack) - Manage GPU clusters for running LLMs
- [llama_cpp_canister](https://github.com/onicai/llama_cpp_canister) - llama.cpp as a smart contract on the Internet Computer, using WebAssembly
- [llama-swap](https://github.com/mostlygeek/llama-swap) - transparent proxy that adds automatic model switching with llama-server
- [Kalavai](https://github.com/kalavai-net/kalavai-client) - Crowdsource end-to-end LLM deployment at any scale
- [llmaz](https://github.com/InftyAI/llmaz) - ☸️ Easy, advanced inference platform for large language models on Kubernetes

</details>

<details>
<summary>Games</summary>

- [Lucy's Labyrinth](https://github.com/MorganRO8/Lucys_Labyrinth) - A simple maze game where agents controlled by an AI model will try to trick you.

</details>

## Supported backends

| Backend | Target devices |
| --- | --- |
| [Metal](docs/build.md#metal-build) | Apple Silicon |
| [BLAS](docs/build.md#blas-build) | All |
| [BLIS](docs/backend/BLIS.md) | All |
| [SYCL](docs/backend/SYCL.md) | Intel and NVIDIA GPU |
| [MUSA](docs/build.md#musa) | Moore Threads GPU |
| [CUDA](docs/build.md#cuda) | NVIDIA GPU |
| [HIP](docs/build.md#hip) | AMD GPU |
| [ZenDNN](docs/build.md#zendnn) | AMD CPU |
| [Vulkan](docs/build.md#vulkan) | GPU |
| [CANN](docs/build.md#cann) | Ascend NPU |
| [OpenCL](docs/backend/OPENCL.md) | Adreno GPU |
| [IBM zDNN](docs/backend/zDNN.md) | IBM Z & LinuxONE |
| [WebGPU [In Progress]](docs/build.md#webgpu) | All |
| [RPC](https://github.com/ggml-org/llama.cpp/tree/master/tools/rpc) | All |
| [Hexagon [In Progress]](docs/backend/hexagon/README.md) | Snapdragon |
| [VirtGPU](docs/backend/VirtGPU.md) | VirtGPU APIR |
## Obtaining and quantizing models

The [Hugging Face](https://huggingface.co) platform hosts a [number of LLMs](https://huggingface.co/models?library=gguf&sort=trending) compatible with `llama.cpp`:

- [Trending](https://huggingface.co/models?library=gguf&sort=trending)
- [LLaMA](https://huggingface.co/models?sort=trending&search=llama+gguf)

You can either manually download a GGUF file or directly use any `llama.cpp`-compatible model from [Hugging Face](https://huggingface.co/) or other model hosting sites, such as [ModelScope](https://modelscope.cn/), by using the CLI argument `-hf <user>/<model>[:quant]`. For example:

```sh
llama-cli -hf ggml-org/gemma-3-1b-it-GGUF
```

By default, the CLI downloads from Hugging Face. You can switch to another endpoint with the `MODEL_ENDPOINT` environment variable - for example, set `MODEL_ENDPOINT=https://www.modelscope.cn/` to download model checkpoints from ModelScope or other model sharing communities.

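As a concrete sketch of the endpoint switch (this assumes the same model name is mirrored on ModelScope, which is an illustrative assumption):

```sh
# fetch the model from ModelScope instead of Hugging Face
MODEL_ENDPOINT=https://www.modelscope.cn/ llama-cli -hf ggml-org/gemma-3-1b-it-GGUF
```
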
After downloading a model, use the CLI tools to run it locally - see below.

`llama.cpp` requires the model to be stored in the [GGUF](https://github.com/ggml-org/ggml/blob/master/docs/gguf.md) file format. Models in other data formats can be converted to GGUF using the `convert_*.py` Python scripts in this repo.

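As a sketch of that conversion step (script names and flags can drift between revisions - check the `convert_*.py` scripts in the repository root), converting a downloaded Hugging Face checkpoint might look like:

```sh
# convert a Hugging Face checkpoint directory to a 16-bit GGUF file
python convert_hf_to_gguf.py /path/to/hf-model --outfile model-f16.gguf --outtype f16
```
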
The Hugging Face platform provides a variety of online tools for converting, quantizing and hosting models with `llama.cpp`:

- Use the [GGUF-my-repo space](https://huggingface.co/spaces/ggml-org/gguf-my-repo) to convert to GGUF format and quantize model weights to smaller sizes
- Use the [GGUF-my-LoRA space](https://huggingface.co/spaces/ggml-org/gguf-my-lora) to convert LoRA adapters to GGUF format (more info: https://github.com/ggml-org/llama.cpp/discussions/10123)
- Use the [GGUF-editor space](https://huggingface.co/spaces/CISCai/gguf-editor) to edit GGUF meta data in the browser (more info: https://github.com/ggml-org/llama.cpp/discussions/9268)
- Use the [Inference Endpoints](https://ui.endpoints.huggingface.co/) to directly host `llama.cpp` in the cloud (more info: https://github.com/ggml-org/llama.cpp/discussions/9669)

To learn more about model quantization, [read this documentation](tools/quantize/README.md).

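For local quantization, the `llama-quantize` tool (covered in the documentation linked above) rewrites a high-precision GGUF file as a smaller quantized one. A minimal sketch:

```sh
# quantize a 16-bit GGUF down to 4-bit Q4_K_M
llama-quantize model-f16.gguf model-q4_k_m.gguf Q4_K_M
```
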
## [`llama-cli`](tools/cli)

#### A CLI tool for accessing and experimenting with most of `llama.cpp`'s functionality.

- <details open>
    <summary>Run in conversation mode</summary>

    Models with a built-in chat template will automatically activate conversation mode. If this doesn't occur, you can manually enable it by adding `-cnv` and specifying a suitable chat template with `--chat-template NAME`.

    ```bash
    llama-cli -m model.gguf

    # > hi, who are you?
    # Hi there! I'm your helpful assistant! I'm an AI-powered chatbot designed to assist and provide information to users like you. I'm here to help answer your questions, provide guidance, and offer support on a wide range of topics. I'm a friendly and knowledgeable AI, and I'm always happy to help with anything you need. What's on your mind, and how can I assist you today?
    #
    # > what is 1+1?
    # Easy peasy! The answer to 1+1 is... 2!
    ```

    </details>

- <details>
    <summary>Run in conversation mode with custom chat template</summary>

    ```bash
    # use the "chatml" template (use -h to see the list of supported templates)
    llama-cli -m model.gguf -cnv --chat-template chatml

    # use a custom template
    llama-cli -m model.gguf -cnv --in-prefix 'User: ' --reverse-prompt 'User:'
    ```

    </details>

- <details>
    <summary>Constrain the output with a custom grammar</summary>

    ```bash
    llama-cli -m model.gguf -n 256 --grammar-file grammars/json.gbnf -p 'Request: schedule a call at 8pm; Command:'

    # {"appointmentTime": "8pm", "appointmentDetails": "schedule a a call"}
    ```

    The [grammars/](grammars/) folder contains a handful of sample grammars. To write your own, check out the [GBNF Guide](grammars/README.md). A minimal end-to-end sketch follows this list.

    For authoring more complex JSON grammars, check out https://grammar.intrinsiclabs.ai/

    </details>

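The sketch below writes a tiny GBNF grammar and uses it to restrict the model to a yes/no answer. The file name and prompt are made up for illustration; only the `root ::= ...` rule syntax and the `--grammar-file` flag come from the docs linked above:

```bash
# create a one-rule grammar that only admits "yes" or "no"
cat > yesno.gbnf <<'EOF'
root ::= "yes" | "no"
EOF

llama-cli -m model.gguf --grammar-file yesno.gbnf -p 'Is the sky blue? Answer:'
```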

## [`llama-server`](tools/server)

#### A lightweight, [OpenAI API](https://github.com/openai/openai-openapi)-compatible HTTP server for serving LLMs.

A sample client request is shown after the examples below.

- <details open>
    <summary>Start a local HTTP server with default configuration on port 8080</summary>

    ```bash
    llama-server -m model.gguf --port 8080

    # Basic web UI can be accessed via browser: http://localhost:8080
    # Chat completion endpoint: http://localhost:8080/v1/chat/completions
    ```

    </details>

- <details>
    <summary>Support multiple users and parallel decoding</summary>

    ```bash
    # up to 4 concurrent requests, each with 4096 max context
    llama-server -m model.gguf -c 16384 -np 4
    ```

    </details>

- <details>
    <summary>Enable speculative decoding</summary>

    ```bash
    # the draft.gguf model should be a small variant of the target model.gguf
    llama-server -m model.gguf -md draft.gguf
    ```

    </details>


- <details>
    <summary>Serve an embedding model</summary>

    ```bash
    # use the /embedding endpoint
    llama-server -m model.gguf --embedding --pooling cls -ub 8192
    ```

    </details>

- <details>
    <summary>Serve a reranking model</summary>

    ```bash
    # use the /reranking endpoint
    llama-server -m model.gguf --reranking
    ```

    </details>

- <details>
    <summary>Constrain all outputs with a grammar</summary>

    ```bash
    # custom grammar
    llama-server -m model.gguf --grammar-file grammar.gbnf

    # JSON
    llama-server -m model.gguf --grammar-file grammars/json.gbnf
    ```

    </details>

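As a sketch of a client request against the OpenAI-compatible endpoint shown above, assuming a server started as in the first example on the default port (with a single loaded model, the `model` field can typically be omitted):

```bash
curl http://localhost:8080/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{
          "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Hello!"}
          ]
        }'
```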

## [`llama-perplexity`](tools/perplexity)

#### A tool for measuring the [perplexity](tools/perplexity/README.md) [^1] (and other quality metrics) of a model over a given text.

- <details open>
    <summary>Measure the perplexity over a text file</summary>

    ```bash
    llama-perplexity -m model.gguf -f file.txt

    # [1]15.2701,[2]5.4007,[3]5.3073,[4]6.2965,[5]5.8940,[6]5.6096,[7]5.7942,[8]4.9297, ...
    # Final estimate: PPL = 5.4007 +/- 0.67339
    ```

    </details>

- <details>
    <summary>Measure KL divergence</summary>

    ```bash
    # TODO
    ```

    </details>
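
Pending the TODO above, a possible two-step invocation based on the flags described in [tools/perplexity/README.md](tools/perplexity/README.md) (verify against `llama-perplexity --help` for your build) is:

```bash
# 1) save the base model's logits while computing perplexity
llama-perplexity -m base-model.gguf -f file.txt --kl-divergence-base logits.bin

# 2) compare a quantized model against the saved logits
llama-perplexity -m quantized-model.gguf --kl-divergence-base logits.bin --kl-divergence
```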

[^1]: [https://huggingface.co/docs/transformers/perplexity](https://huggingface.co/docs/transformers/perplexity)

## [`llama-bench`](tools/llama-bench)

#### Benchmark the performance of the inference for various parameters.

- <details open>
    <summary>Run default benchmark</summary>

    ```bash
    llama-bench -m model.gguf

    # Output:
    # | model               |       size |     params | backend    | threads |          test |                  t/s |
    # | ------------------- | ---------: | ---------: | ---------- | ------: | ------------: | -------------------: |
    # | qwen2 1.5B Q4_0     | 885.97 MiB |     1.54 B | Metal,BLAS |      16 |         pp512 |      5765.41 ± 20.55 |
    # | qwen2 1.5B Q4_0     | 885.97 MiB |     1.54 B | Metal,BLAS |      16 |         tg128 |        197.71 ± 0.81 |
    #
    # build: 3e0ba0e60 (4229)
    ```

    </details>

## [`llama-simple`](examples/simple)

#### A minimal example for implementing apps with `llama.cpp`. Useful for developers.

- <details>
    <summary>Basic text completion</summary>

    ```bash
    llama-simple -m model.gguf

    # Hello my name is Kaitlyn and I am a 16 year old girl. I am a junior in high school and I am currently taking a class called "The Art of
    ```

    </details>

## Contributing

- Contributors can open PRs
- Collaborators will be invited based on contributions
- Maintainers can push to branches in the `llama.cpp` repo and merge PRs into the `master` branch
- Any help with managing issues, PRs and projects is very appreciated!
- See [good first issues](https://github.com/ggml-org/llama.cpp/issues?q=is%3Aissue+is%3Aopen+label%3A%22good+first+issue%22) for tasks suitable for first contributions
- Read the [CONTRIBUTING.md](CONTRIBUTING.md) for more information
- Make sure to read this: [Inference at the edge](https://github.com/ggml-org/llama.cpp/discussions/205)
- A bit of backstory for those who are interested: [Changelog podcast](https://changelog.com/podcast/532)

## Other documentation

- [cli](tools/cli/README.md)
- [completion](tools/completion/README.md)
- [server](tools/server/README.md)
- [GBNF grammars](grammars/README.md)

#### Development documentation

- [How to build](docs/build.md)
- [Running on Docker](docs/docker.md)
- [Build on Android](docs/android.md)
- [Performance troubleshooting](docs/development/token_generation_performance_tips.md)
- [GGML tips & tricks](https://github.com/ggml-org/llama.cpp/wiki/GGML-Tips-&-Tricks)

#### Seminal papers and background on the models

If your issue is with model generation quality, then please at least scan the following links and papers to understand the limitations of LLaMA models. This is especially important when choosing an appropriate model size and appreciating both the significant and subtle differences between LLaMA models and ChatGPT:

- LLaMA:
    - [Introducing LLaMA: A foundational, 65-billion-parameter large language model](https://ai.facebook.com/blog/large-language-model-llama-meta-ai/)
    - [LLaMA: Open and Efficient Foundation Language Models](https://arxiv.org/abs/2302.13971)
- GPT-3:
    - [Language Models are Few-Shot Learners](https://arxiv.org/abs/2005.14165)
- GPT-3.5 / InstructGPT / ChatGPT:
    - [Aligning language models to follow instructions](https://openai.com/research/instruction-following)
    - [Training language models to follow instructions with human feedback](https://arxiv.org/abs/2203.02155)

## XCFramework

The XCFramework is a precompiled version of the library for iOS, visionOS, tvOS,
and macOS. It can be used in Swift projects without the need to compile the
library from source. For example:

```swift
// swift-tools-version: 5.10
// The swift-tools-version declares the minimum version of Swift required to build this package.

import PackageDescription

let package = Package(
    name: "MyLlamaPackage",
    targets: [
        .executableTarget(
            name: "MyLlamaPackage",
            dependencies: [
                "LlamaFramework"
            ]),
        .binaryTarget(
            name: "LlamaFramework",
            url: "https://github.com/ggml-org/llama.cpp/releases/download/b5046/llama-b5046-xcframework.zip",
            checksum: "c19be78b5f00d8d29a25da41042cb7afa094cbf6280a225abe614b03b20029ab"
        )
    ]
)
```

The above example uses an intermediate build `b5046` of the library. To use a different version, change the URL and checksum accordingly.

## Completions

Command-line completion is available for some environments.

#### Bash Completion

```bash
$ build/bin/llama-cli --completion-bash > ~/.llama-completion.bash
$ source ~/.llama-completion.bash
```

Optionally this can be added to your `.bashrc` or `.bash_profile` to load it
automatically. For example:

```console
$ echo "source ~/.llama-completion.bash" >> ~/.bashrc
```

## Dependencies

- [yhirose/cpp-httplib](https://github.com/yhirose/cpp-httplib) - Single-header HTTP server, used by `llama-server` - MIT license
- [stb-image](https://github.com/nothings/stb) - Single-header image format decoder, used by multimodal subsystem - Public domain
- [nlohmann/json](https://github.com/nlohmann/json) - Single-header JSON library, used by various tools/examples - MIT License
- [miniaudio.h](https://github.com/mackron/miniaudio) - Single-header audio format decoder, used by multimodal subsystem - Public domain
- [subprocess.h](https://github.com/sheredom/subprocess.h) - Single-header process launching solution for C and C++ - Public domain