---
base_model:
- {base_model}
---
# {model_name} GGUF
The recommended way to run this model is with llama.cpp's `llama-server` (`-c 0` uses the context length stored in the model's metadata):
```sh
llama-server -hf {namespace}/{model_name}-GGUF -c 0
```
Then open http://localhost:8080 in your browser to use the built-in web UI.
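
As an alternative to the web UI, `llama-server` also exposes an OpenAI-compatible HTTP API. A minimal sketch, assuming the server is running on its default port 8080:

```sh
# Send a chat request to the OpenAI-compatible endpoint
# (the prompt text here is just an illustrative example)
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [
      {"role": "user", "content": "Hello, who are you?"}
    ]
  }'
```

The response is a JSON object in the same shape as the OpenAI Chat Completions API, so existing OpenAI client libraries can be pointed at this endpoint.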