Initialize project; model provided by the ModelHub XC community

Model: CultriX/Qwen2.5-14B-ReasoningMerge
Source: Original Platform
This commit is contained in:
ModelHub XC
2026-04-21 16:10:19 +08:00
commit 82aec94864
17 changed files with 151806 additions and 0 deletions

README.md

@@ -0,0 +1,51 @@
---
base_model:
- Sakalti/Saka-14B
- RDson/WomboCombo-R1-Coder-14B-Preview
library_name: transformers
tags:
- mergekit
- merge
license: apache-2.0
language:
- en
pipeline_tag: text-generation
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [SLERP](https://en.wikipedia.org/wiki/Slerp) merge method.
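SLERP interpolates along the great-circle arc between two weight tensors rather than along the straight line, which tends to preserve weight magnitudes better than plain linear averaging. A minimal numerical sketch of the underlying math (illustrative only, not mergekit's actual implementation):

```python
import numpy as np

def slerp(t, v0, v1, eps=1e-8):
    """Spherical linear interpolation between two tensors.

    t=0 returns v0, t=1 returns v1; intermediate t follows the
    arc between the two (normalized) directions.
    """
    v0f = v0.ravel().astype(np.float64)
    v1f = v1.ravel().astype(np.float64)
    # Angle between the two tensors, treated as flat vectors
    dot = np.clip(
        np.dot(v0f / np.linalg.norm(v0f), v1f / np.linalg.norm(v1f)),
        -1.0, 1.0,
    )
    theta = np.arccos(dot)
    if theta < eps:
        # Nearly parallel: fall back to linear interpolation
        return (1.0 - t) * v0 + t * v1
    s0 = np.sin((1.0 - t) * theta) / np.sin(theta)
    s1 = np.sin(t * theta) / np.sin(theta)
    return (s0 * v0f + s1 * v1f).reshape(v0.shape)
```

In a merge like this one, something of this shape would be applied tensor-by-tensor across the two checkpoints, with `t` chosen per layer type as in the configuration below.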
### Models Merged
The following models were included in the merge:
* [Sakalti/Saka-14B](https://huggingface.co/Sakalti/Saka-14B)
* [RDson/WomboCombo-R1-Coder-14B-Preview](https://huggingface.co/RDson/WomboCombo-R1-Coder-14B-Preview)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model: Sakalti/Saka-14B
merge_method: slerp
dtype: bfloat16
parameters:
  t:
    - filter: self_attn
      value: 0.4 # Leans toward Sakalti/Saka-14B (the base) for self-attention
    - filter: mlp
      value: 0.7 # Leans toward RDson/WomboCombo-R1-Coder for MLP layers
    - value: 0.5 # Even blend for all remaining tensors
models:
- model: Sakalti/Saka-14B
- model: RDson/WomboCombo-R1-Coder-14B-Preview
tokenizer_source: Sakalti/Saka-14B
chat_template: chatml
name: reasoning_blend
```
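The `t` schedule above assigns a different interpolation factor per tensor: tensors matching the `self_attn` filter use 0.4, `mlp` tensors use 0.7, and everything else falls back to the unfiltered default of 0.5. A toy sketch of that first-match selection (hypothetical helper `t_for_tensor`; mergekit's real filter matching is more involved):

```python
def t_for_tensor(name, rules, default):
    """Return the interpolation factor for a tensor; first matching filter wins."""
    for filt, value in rules:
        if filt in name:
            return value
    return default

# Mirrors the t schedule in the YAML configuration above
rules = [("self_attn", 0.4), ("mlp", 0.7)]
```

For example, `model.layers.0.self_attn.q_proj.weight` would resolve to 0.4, `model.layers.0.mlp.gate_proj.weight` to 0.7, and an embedding tensor to the 0.5 default.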