# llama-server Development Documentation

This document provides an in-depth technical overview of `llama-server`, intended for maintainers and contributors.

If you are an end user consuming `llama-server` as a product, please refer to the main [README](./README.md) instead.

## Backend

### Overview

The server supports two primary operating modes:

- **Inference mode**: The default mode for performing inference with a single loaded GGUF model.
- **Router mode**: Enables management of multiple inference server instances behind a single API endpoint. Requests are automatically routed to the appropriate backend instance based on the requested model.

The core architecture consists of the following components:

- `server_context`: Holds the primary inference state, including the main `llama_context` and all active slots.
- `server_slot`: An abstraction over a single “sequence” in llama.cpp, responsible for managing individual parallel inference requests.
- `server_routes`: Middleware layer between `server_context` and the HTTP interface; handles JSON parsing/formatting and request routing logic.
- `server_http_context`: Implements the HTTP server using `cpp-httplib`.
- `server_queue`: Thread-safe queue used by HTTP workers to submit new tasks to `server_context`.
- `server_response`: Thread-safe queue used by `server_context` to return results to HTTP workers.
- `server_response_reader`: Higher-level wrapper around the two queues above for cleaner code.
- `server_task`: Unit of work pushed into `server_queue`.
- `server_task_result`: Unit of result pushed into `server_response`.
- `server_tokens`: Unified representation of token sequences (supports both text and multimodal tokens); used by `server_task` and `server_slot`.
- `server_prompt_checkpoint`: For recurrent (e.g., RWKV) and SWA models, stores snapshots of KV cache state. Enables reuse when subsequent requests share the same prompt prefix, saving redundant computation.
- `server_models`: Standalone component for managing multiple backend instances (used in router mode). It is completely independent of `server_context`.

```mermaid
graph TD
    API_User <--> server_http_context
    server_http_context <-- router mode --> server_models
    server_http_context <-- inference mode --> server_routes
    server_routes -- server_task --> server_queue
    subgraph server_context
        server_queue --> server_slot
        server_slot -- server_task_result --> server_response
        server_slot[multiple server_slot]
    end
    server_response --> server_routes
```

### Batching

The server context maintains a single batch shared across all slots. When `update_slots()` is invoked, the system iterates through all active slots to populate this batch. For each slot, either a generated token from the previous decoding step or available prompt tokens are added to the batch.

Batching constraints apply: slots can only be batched together if they share compatible configurations. For instance, slots using a specific LoRA adapter can be batched with each other, but not with slots using a different LoRA adapter or no adapter at all.

Once the batch reaches capacity or all slots have been processed, `llama_decode` is called to execute the inference. This operation represents the primary computational bottleneck in `update_slots()`.

Following decoding, the system either retrieves embeddings or samples the next token using `common_sampler_sample`. If a slot has remaining prompt tokens to process, it yields until the next `update_slots()` iteration.

### Thread Management

`server_context` runs on a dedicated single thread. Because it is single-threaded, heavy post-processing (especially after token generation) should be avoided, as it directly impacts multi-sequence throughput.

Each incoming HTTP request is handled by its own thread managed by the HTTP library. The following operations are performed in HTTP worker threads:

- JSON request parsing
- Chat template application
- Tokenization
- Conversion of `server_task_result` into final JSON response
- Error formatting into JSON
- Tracking of partial/incremental responses (e.g., streaming tool calls or reasoning steps)

**Best practices to follow:**

- All JSON formatting and chat template logic must stay in the HTTP layer.
- Avoid passing raw JSON between the HTTP layer and `server_slot`. Instead, parse everything into native C++ types as early as possible.

### Example trace of a request

Here is an example trace of an API request for text completion:

- A request arrives at the HTTP layer.
- The request is routed to the corresponding handler inside `server_routes`. In this case, `handle_completions_impl` is invoked.
- The handler parses the input request, constructs a new `server_task`, and passes it to `server_res_generator`.
- `server_res_generator` creates a new `task_result_state` for each task:
    - `task_result_state` stays in the HTTP layer and is responsible for tracking the current state of the response (e.g., parsing tool calls or thinking messages).
    - `server_task` is moved into `server_queue` inside `server_context`.
- `server_context` launches the task by moving it into an available slot (see `launch_slot_with_task()`).
- `update_slots()` processes the task as described in the "Batching" section above.
- Results may be sent using `send_partial_response` or `send_final_response`, either of which creates a new `server_task_result` and pushes it to the response queue.
- Meanwhile, `server_res_generator` listens on the response queue and retrieves this response.
- Because the result object itself is stateless, `server_res_generator` calls `response->update()` to combine it with the current state.
- `server_res_generator` then calls `response->to_json()` and passes the result to the HTTP layer.

### Testing

`llama-server` includes an automated test suite based on `pytest`.

The framework automatically starts a `llama-server` instance, sends requests, and validates responses.

For detailed instructions, see the [test documentation](./tests/README.md).

### Notable Related PRs

- Initial server implementation: https://github.com/ggml-org/llama.cpp/pull/1443
- Parallel decoding support: https://github.com/ggml-org/llama.cpp/pull/3228
- Refactor introducing `server_queue` and `server_response`: https://github.com/ggml-org/llama.cpp/pull/5065
- Reranking endpoint: https://github.com/ggml-org/llama.cpp/pull/9510
- Multimodal model support (`libmtmd`): https://github.com/ggml-org/llama.cpp/pull/12898
- Unified KV cache handling: https://github.com/ggml-org/llama.cpp/pull/16736
- Separation of HTTP logic into dedicated files: https://github.com/ggml-org/llama.cpp/pull/17216
- Large-scale code base split into smaller files: https://github.com/ggml-org/llama.cpp/pull/17362
- Introduction of router mode: https://github.com/ggml-org/llama.cpp/pull/17470
- Speculative decoding: https://github.com/ggml-org/llama.cpp/pull/17808
- INI presets: https://github.com/ggml-org/llama.cpp/pull/17859 (+ refactoring: https://github.com/ggml-org/llama.cpp/pull/18169)
- Sleeping mode: https://github.com/ggml-org/llama.cpp/pull/18228

## Web UI

The project includes a web-based user interface for interacting with `llama-server`. It supports both single-model (`MODEL` mode) and multi-model (`ROUTER` mode) operation.

The SvelteKit-based Web UI was introduced in this PR: https://github.com/ggml-org/llama.cpp/pull/14839

### Features

-   **Chat interface** with streaming responses
-   **Multi-model support** (ROUTER mode) - switch between models, auto-load on selection
-   **Modality validation** - ensures selected model supports conversation's attachments (images, audio)
-   **Conversation management** - branching, regeneration, editing with history preservation
-   **Attachment support** - images, audio, PDFs (with vision/text fallback)
-   **Configurable parameters** - temperature, top_p, etc. synced with server defaults
-   **Dark/light theme**

### Tech Stack

-   **SvelteKit** - frontend framework with Svelte 5 runes for reactive state
-   **TailwindCSS** + **shadcn-svelte** - styling and UI components
-   **Vite** - build tooling
-   **IndexedDB** (Dexie) - local storage for conversations
-   **LocalStorage** - user settings persistence

### Architecture

The WebUI follows a layered architecture:

```
Routes → Components → Hooks → Stores → Services → Storage/API
```

-   **Stores** - reactive state management (`chatStore`, `conversationsStore`, `modelsStore`, `serverStore`, `settingsStore`)
-   **Services** - stateless API/database communication (`ChatService`, `ModelsService`, `PropsService`, `DatabaseService`)
-   **Hooks** - reusable logic (`useModelChangeValidation`, `useProcessingState`)

For detailed architecture diagrams, see [`tools/server/webui/docs/`](webui/docs/):

-   `high-level-architecture.mmd` - full architecture with all modules
-   `high-level-architecture-simplified.mmd` - simplified overview
-   `data-flow-simplified-model-mode.mmd` - data flow for single-model mode
-   `data-flow-simplified-router-mode.mmd` - data flow for multi-model mode
-   `flows/*.mmd` - detailed per-domain flows (chat, conversations, models, etc.)

### Development

```sh
# make sure you have Node.js installed
cd tools/server/webui
npm i

# run dev server (with hot reload)
npm run dev

# run tests
npm run test

# build production bundle
npm run build
```

After `public/index.html.gz` has been generated, rebuild `llama-server` as described in the [build](#build) section to include the updated UI.

**Note:** The Vite dev server automatically proxies API requests to `http://localhost:8080`. Make sure `llama-server` is running on that port during development.