Upload README.md with huggingface_hub
This commit is contained in:
@@ -24,7 +24,7 @@ tags:
## <span style="color: #7F7FFF;">Model Generation Details</span>
-This model was generated using [llama.cpp](https://github.com/ggerganov/llama.cpp) at commit [`f505bd83`](https://github.com/ggerganov/llama.cpp/commit/f505bd83ca7a43c4585ff3d59135e77eae9c793b).
+This model was generated using [llama.cpp](https://github.com/ggerganov/llama.cpp) at commit [`ee09828cb`](https://github.com/ggerganov/llama.cpp/commit/ee09828cb057460b369576410601a3a09279e23c).
@@ -131,6 +131,8 @@ However, we do not recommend using them for tasks that are knowledge-intensive o
* `min_p=0.15`
* `repetition_penalty=1.05`
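
The recommended sampling parameters above can be passed directly when running the GGUF locally. A minimal sketch using the `llama-cpp-python` bindings, assuming a downloaded quant file (the model filename here is a placeholder, not a file shipped with this repo):

```python
# Sketch only: requires llama-cpp-python and a locally downloaded GGUF quant.
from llama_cpp import Llama

llm = Llama(model_path="LFM2-2.6B-Q4_K_M.gguf")  # hypothetical filename

out = llm(
    "Explain min-p sampling in one sentence.",
    min_p=0.15,           # recommended min_p from this README
    repeat_penalty=1.05,  # recommended repetition penalty
    max_tokens=128,
)
print(out["choices"][0]["text"])
```

The equivalent `llama-cli` flags are `--min-p 0.15 --repeat-penalty 1.05`.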
**Reasoning**: LFM2-2.6B is the only model in this family to use dynamic hybrid reasoning (traces between `<think>` and `</think>` tokens) for complex or multilingual prompts.
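
Because the reasoning trace is delimited by literal `<think>` and `</think>` tokens, it can be separated from the final answer with a simple regex. A sketch (the helper name and sample text are illustrative, not part of the model's API):

```python
import re


def split_reasoning(text: str):
    """Split a <think>...</think> reasoning trace from the final answer.

    Returns (trace, answer); trace is None when the model emitted no
    reasoning block, which LFM2-2.6B does for simple prompts.
    """
    match = re.search(r"<think>(.*?)</think>", text, flags=re.DOTALL)
    if match is None:
        return None, text.strip()
    trace = match.group(1).strip()
    answer = text[match.end():].strip()
    return trace, answer


# Illustrative output string, not a real model completion:
trace, answer = split_reasoning("<think>2+2 is 4</think>The answer is 4.")
```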
**Chat template**: LFM2 uses a ChatML-like chat template as follows:
```