---
base_model:
- {base_model}
---
# {model_name} GGUF
Recommended way to run this model:
```sh
llama-server -hf {namespace}/{model_name}-GGUF
```
Then open http://localhost:8080 in your browser to use the built-in web UI.
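
Besides the web UI, `llama-server` also exposes an OpenAI-compatible HTTP API. A minimal request sketch, assuming the server is running on the default port (8080); the prompt text is illustrative:

```sh
# Send a chat completion request to the local llama-server instance.
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [
      {"role": "user", "content": "Hello!"}
    ]
  }'
```

The response follows the OpenAI chat-completions JSON shape, so existing OpenAI client libraries can be pointed at this endpoint by overriding their base URL.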