Enhance README with project details and usage

Expanded project description to include embedding models, vector database creation, and local inference details.

Author Mitja Felicijan <mitja.felicijan@gmail.com> 2026-02-20 14:13:15 +0100
Committer GitHub <noreply@github.com> 2026-02-20 14:13:15 +0100
Commit 809f5def0c3a49c10d41a8e2165192fe5fa938da (patch)
-rw-r--r-- README.md 9
1 file changed, 9 insertions(+), 0 deletions(-)
diff --git a/README.md b/README.md
--- a/README.md
+++ b/README.md
@@ -1,5 +1,14 @@
 An experiment using tiny LLMs as NPCs that could be embedded into the game.
 
+Embed models into the game, build a simple vector database from text, embed
+prompts, retrieve top‑k by cosine similarity, and feed context into tiny
+CPU LLMs for NPC interactions.
+
+**No external API calls.** Everything is local, directly using GGUF models
+and [llama.cpp](https://github.com/ggml-org/llama.cpp) for inference.
+
+https://github.com/user-attachments/assets/863b75eb-0da7-4235-8112-f00bc82d81f6
+
 > [!NOTE]
 > This project is just for fun, to see how LLMs would fare as NPCs. Because of
 > the non-deterministic nature of LLMs, the results vary and are often quite
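
The added README text describes retrieving the top‑k entries of a vector database by cosine similarity before feeding them to the LLM. As a rough illustration of that retrieval step only (not the project's actual code — the function and data-structure names below are hypothetical, and real embeddings would come from the embedding model rather than being hand-written), a minimal sketch:

```python
import math

def cosine_similarity(a, b):
    # dot(a, b) / (|a| * |b|); returns 0.0 for a zero vector
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def top_k(query_vec, db, k=3):
    # db: list of (text, embedding) pairs, as a stand-in for the
    # "simple vector database built from text" the README mentions.
    # Sort by similarity to the query embedding, highest first.
    scored = sorted(db, key=lambda item: cosine_similarity(query_vec, item[1]),
                    reverse=True)
    return [text for text, _ in scored[:k]]
```

The retrieved texts would then be concatenated into the prompt context for the tiny CPU-side LLM.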