Added readme

Author Mitja Felicijan <mitja.felicijan@gmail.com> 2026-02-12 21:12:09 +0100
Committer Mitja Felicijan <mitja.felicijan@gmail.com> 2026-02-12 21:12:09 +0100
Commit 03ebe8c1e276650e63e1db0a97c00d0191f3d520 (patch)
-rw-r--r-- README.md 58
1 file changed, 58 insertions, 0 deletions
diff --git a/README.md b/README.md
  
# llmnpc

A command-line LLM inference tool powered by
[llama.cpp](https://github.com/ggerganov/llama.cpp) for testing whether and how
NPCs could use LLMs.

## Building

### Prerequisites

- C compiler (gcc/clang)
- CMake
- Docker (optional, for containerized use of binaries)

### Build Steps

1. Build llama.cpp libraries:
   ```bash
   make llamacpp
   ```

2. Build the prompt binary:
   ```bash
   make prompt
   ```
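
For reference, a minimal `Makefile` backing these targets might look like the sketch below. The llama.cpp checkout path, CMake flags, and link flags are assumptions, not taken from the repository:

```makefile
# Hypothetical sketch of the Makefile targets described above.
# Paths and flags are assumptions; adjust to the actual repository layout.

LLAMA_DIR = llama.cpp

.PHONY: llamacpp prompt clean

llamacpp:
	cmake -S $(LLAMA_DIR) -B $(LLAMA_DIR)/build -DBUILD_SHARED_LIBS=OFF
	cmake --build $(LLAMA_DIR)/build --config Release

prompt: prompt.c
	cc -O2 -o prompt prompt.c \
	   -I$(LLAMA_DIR)/include -L$(LLAMA_DIR)/build/src -lllama -lm

clean:
	rm -f prompt
```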
  

## Usage

```bash
./prompt -p "Your prompt here"
./prompt -m flan-t5-small -p "What is machine learning?"
```
  

### Options

| Flag | Description |
|------|-------------|
| `-m, --model` | Model to use (default: first model in config) |
| `-p, --prompt` | Prompt text (required) |
| `-h, --help` | Show help message |
  

## Models

Configure models in `models.h`. The default model is `flan-t5-small`, expecting a GGUF file at `models/flan-t5-small.F16.gguf`.
  

## Docker

```bash
make docker
```

This builds a Docker image and drops you into a shell with the prompt binary and models available at `/app/`.
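
A `Dockerfile` for this could look roughly like the sketch below; the base image, build steps, and paths are assumptions rather than the repository's actual file:

```dockerfile
# Hypothetical sketch; the repository's actual Dockerfile may differ.
FROM debian:bookworm-slim AS build
RUN apt-get update && apt-get install -y --no-install-recommends \
    build-essential cmake git
WORKDIR /src
COPY . .
RUN make llamacpp && make prompt

FROM debian:bookworm-slim
WORKDIR /app
COPY --from=build /src/prompt /app/prompt
COPY --from=build /src/models /app/models
CMD ["/bin/bash"]
```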
  

## Cleaning

```bash
make clean
```