---
base_model:
- {base_model}
---
# {model_name} GGUF

The recommended way to run this model is with llama.cpp's `llama-server`:

```sh
llama-server -hf {namespace}/{model_name}-GGUF
```

Then open http://localhost:8080 in your browser to chat with the model through the built-in web UI.
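
`llama-server` also exposes an OpenAI-compatible HTTP API on the same port. A minimal request might look like this (the prompt and sampling parameters below are illustrative, not prescribed by this model):

```sh
# Send a chat request to the OpenAI-compatible endpoint served by llama-server.
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "messages": [
          {"role": "user", "content": "Hello!"}
        ],
        "max_tokens": 128
      }'
```

Because the API follows the OpenAI schema, existing OpenAI client libraries can typically be pointed at `http://localhost:8080/v1` instead.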