author    Mitja Felicijan <mitja.felicijan@gmail.com>  2026-02-12 21:12:09 +0100
committer Mitja Felicijan <mitja.felicijan@gmail.com>  2026-02-12 21:12:09 +0100
commit    03ebe8c1e276650e63e1db0a97c00d0191f3d520 (patch)
tree      1237b6a4797d26659df420ea8c08ca786e08cccd
parent    59df893b07dff149c19e833368660cb594037db8 (diff)
download  llmnpc-03ebe8c1e276650e63e1db0a97c00d0191f3d520.tar.gz
Added readme
-rw-r--r--  README.md | 58
1 file changed, 58 insertions, 0 deletions
diff --git a/README.md b/README.md
new file mode 100644
index 0000000..9ee0241
--- /dev/null
+++ b/README.md
@@ -0,0 +1,58 @@
+# llmnpc
+
+A command-line LLM inference tool powered by
+[llama.cpp](https://github.com/ggerganov/llama.cpp) for testing whether, and
+how, NPCs could use LLMs.
+
+## Building
+
+### Prerequisites
+
+- C compiler (gcc/clang)
+- CMake
+- Docker (optional, for running the binaries in a container)
+
+### Build Steps
+
+1. Build llama.cpp libraries:
+ ```bash
+ make llamacpp
+ ```
+
+2. Build the prompt binary:
+ ```bash
+ make prompt
+ ```
+
+## Usage
+
+```bash
+./prompt -p "Your prompt here"
+./prompt -m flan-t5-small -p "What is machine learning?"
+```
+
+### Options
+
+| Flag | Description |
+|------|-------------|
+| `-m, --model` | Model to use (default: first model in config) |
+| `-p, --prompt` | Prompt text (required) |
+| `-h, --help` | Show help message |
+
+## Models
+
+Configure models in `models.h`. The default model is `flan-t5-small`, expecting a GGUF file at `models/flan-t5-small.F16.gguf`.
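+
+The actual layout of `models.h` is not shown in this commit, so the struct and
+field names below are assumptions; a hypothetical sketch of what such a model
+table could look like:
+
+```c
+/* Hypothetical models.h sketch; real field names in llmnpc may differ. */
+typedef struct {
+    const char *name; /* value accepted by -m/--model */
+    const char *path; /* GGUF file expected on disk */
+} model_entry;
+
+static const model_entry models[] = {
+    /* the first entry doubles as the default model */
+    { "flan-t5-small", "models/flan-t5-small.F16.gguf" },
+};
+```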
+
+## Docker
+
+```bash
+make docker
+```
+
+This builds a Docker image and drops you into a shell with the `prompt` binary and models available under `/app/`.
+
+## Cleaning
+
+```bash
+make clean
+```