diff --git a/README.md b/README.md
index 198f39c..bfc7377 100644
--- a/README.md
+++ b/README.md
@@ -24,7 +24,7 @@ tags:
 
 ## Model Generation Details
 
-This model was generated using [llama.cpp](https://github.com/ggerganov/llama.cpp) at commit [`f505bd83`](https://github.com/ggerganov/llama.cpp/commit/f505bd83ca7a43c4585ff3d59135e77eae9c793b).
+This model was generated using [llama.cpp](https://github.com/ggerganov/llama.cpp) at commit [`ee09828cb`](https://github.com/ggerganov/llama.cpp/commit/ee09828cb057460b369576410601a3a09279e23c).
@@ -131,6 +131,8 @@ However, we do not recommend using them for tasks that are knowledge-intensive o
 * `min_p=0.15`
 * `repetition_penalty=1.05`
 
+**Reasoning**: LFM2-2.6B is the only model in this family to use dynamic hybrid reasoning (traces between `<think>` and `</think>` tokens) for complex or multilingual prompts.
+
 **Chat template**: LFM2 uses a ChatML-like chat template as follows:
 
 ```