Update readme

Author Mitja Felicijan <mitja.felicijan@gmail.com> 2026-02-18 01:53:14 +0100
Committer Mitja Felicijan <mitja.felicijan@gmail.com> 2026-02-18 01:53:14 +0100
Commit d8bc33b57e1fb80d10def874a54e91bed84df79b (patch)
-rw-r--r-- README.md 24
1 file changed, 12 insertions(+), 12 deletions(-)
diff --git a/README.md b/README.md
 # llmnpc
 
-Command-line LLM inference and simple context retrieval powered by
-[llama.cpp](https://github.com/ggerganov/llama.cpp) to test viability of using
-LLM's to drive NPC behaviour.
+Command-line tooling for NPC-focused LLM experiments with lightweight context
+retrieval, powered by [llama.cpp](https://github.com/ggerganov/llama.cpp).
 
 ## Building
 
...
 
 3. Build binaries:
    ```bash
-   make build/prompt
    make build/context
+   make build/npc
    ```
 
 ## Usage
...
 ./context -m flan-t5-small -i context.txt -o context.vdb
 ```
 
-### Run a prompt with retrieved context
+### Run an NPC query with retrieved context
 
-`prompt` reads the context text file, embeds the query, selects the top 3
-matching lines by cosine similarity, and builds a prompt from those lines.
+`npc` loads a vector database, embeds the prompt, selects the top 3 matching
+lines by cosine similarity, and runs the NPC system prompt against that context.
 
 ```bash
-./prompt -p "What is machine learning?" -c context.txt
-./prompt -m flan-t5-small -p "What is machine learning?" -c context.txt
+./npc -m flan-t5-small -p "Who is Gandalf?" -c context.vdb
+./npc -m flan-t5-small -p "Who is Frodo?" -c context.vdb
 ```
 
 ### context options
...
 | `-v, --verbose` | Enable llama.cpp logging |
 | `-h, --help` | Show help message |
 
-### prompt options
+### npc options
 
 | Flag | Description |
 |------|-------------|
-| `-m, --model` | Model to use (default: first model in config) |
+| `-m, --model` | Model to use (required) |
 | `-p, --prompt` | Prompt text (required) |
-| `-c, --context` | Context text file (required) |
+| `-c, --context` | Context vector database file (.vdb) (required) |
+| `-l, --list` | List available models |
 | `-v, --verbose` | Enable llama.cpp logging |
 | `-h, --help` | Show help message |
 
...
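The retrieval step the patch describes — embed the prompt, score every context line by cosine similarity, keep the top 3 — can be sketched as follows. This is a minimal illustration of the technique only, not the project's actual C/llama.cpp implementation; the function names and toy vectors here are hypothetical, standing in for real flan-t5-small embeddings.

```python
import math

def cosine(a, b):
    # Cosine similarity: dot(a, b) / (|a| * |b|), with a guard for zero vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def top_k_lines(query_vec, line_vecs, lines, k=3):
    # Rank every context line by similarity to the query embedding and
    # keep the k best matches, as `npc` does with k=3.
    scored = sorted(zip(lines, line_vecs),
                    key=lambda lv: cosine(query_vec, lv[1]),
                    reverse=True)
    return [line for line, _ in scored[:k]]
```

The selected lines would then be concatenated into the context portion of the system prompt before generation.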