Sync from upstream llama.cpp repository
examples/model-conversion/scripts/causal/modelcard.template (new file, 13 lines)
@@ -0,0 +1,13 @@
---
base_model:
- {base_model}
---
# {model_name} GGUF

Recommended way to run this model:

```sh
llama-server -hf {namespace}/{model_name}-GGUF
```

Then, access http://localhost:8080
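For context, the curly-brace placeholders in the template above ({base_model}, {model_name}, {namespace}) look like Python `str.format` fields. A minimal sketch of how such a template might be rendered — the field names come from the diff, but the substitution code and the example values are assumptions, not the repository's actual conversion script:

```python
# Hypothetical rendering of the modelcard template via str.format.
# The placeholder names match the template in the diff; the values
# below are made up for illustration.
template = (
    "---\n"
    "base_model:\n"
    "- {base_model}\n"
    "---\n"
    "# {model_name} GGUF\n"
    "\n"
    "Recommended way to run this model:\n"
    "\n"
    "llama-server -hf {namespace}/{model_name}-GGUF\n"
)

card = template.format(
    base_model="org/ExampleModel",  # hypothetical upstream model id
    model_name="ExampleModel",      # hypothetical converted model name
    namespace="example-user",       # hypothetical Hugging Face namespace
)
print(card)
```

Every `{...}` field must be supplied to `format()`, or a `KeyError` is raised; the literal ```` ```sh ```` fence in the real template is safe because it contains no braces.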