---
base_model: []
library_name: transformers
tags:
- mergekit
- merge
---

# qwen2.5-1.5b-thinking-task-arithmetic

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the [Task Arithmetic](https://arxiv.org/abs/2212.04089) merge method, with /workspace/dqdung/khoaluan/model/Qwen2.5-1.5B as the base model.
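To make the method concrete, here is a minimal sketch of the underlying arithmetic, not mergekit's actual implementation: each fine-tuned model contributes a "task vector" (its parameter delta from the base), and the merge adds a weighted sum of those deltas to the base weights. The toy scalar parameters and the 0.3/0.7 weights are illustrative only, chosen to mirror the configuration in this card.

```python
def task_arithmetic(base, finetuned, weights):
    """Merge models by weighted task-vector addition.

    base: dict mapping parameter name -> value (toy scalars here).
    finetuned: list of dicts with the same keys as `base`.
    weights: per-model scaling factors (alpha in the paper).
    """
    merged = dict(base)
    for params, alpha in zip(finetuned, weights):
        for name, value in params.items():
            # task vector = (fine-tuned - base); scale by alpha and accumulate
            merged[name] += alpha * (value - base[name])
    return merged

# Toy example: two fine-tuned "models" with scalar parameters.
base = {"w": 1.0}
model_a = {"w": 2.0}   # task vector = +1.0
model_b = {"w": 0.0}   # task vector = -1.0
merged = task_arithmetic(base, [model_a, model_b], [0.3, 0.7])
# merged["w"] = 1.0 + 0.3 * (+1.0) + 0.7 * (-1.0) = 0.6
```

In the real merge the same update is applied tensor-by-tensor across all transformer layers, with optional sparsification (`density`) applied to each task vector first.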

### Models Merged

The following models were included in the merge:

* /workspace/dqdung/khoaluan/model/Qwen2.5-Math-1.5B-Instruct
* /workspace/dqdung/khoaluan/model/DeepSeek-R1-Distill-Qwen-1.5B

### Configuration

The following YAML configuration was used to produce this model:

```yaml
merge_method: task_arithmetic
slices:
- sources:
  - model: /workspace/dqdung/khoaluan/model/Qwen2.5-1.5B
    layer_range: [0, 28]
  - model: /workspace/dqdung/khoaluan/model/Qwen2.5-Math-1.5B-Instruct
    layer_range: [0, 28]
    parameters:
      density: 0.8 # k = 0.8 (top 80% of task-vector values kept)
      weight: 0.3  # α = 0.3
  - model: /workspace/dqdung/khoaluan/model/DeepSeek-R1-Distill-Qwen-1.5B
    layer_range: [0, 28]
    parameters:
      density: 0.8 # k = 0.8
      weight: 0.7  # α = 0.7
parameters:
  normalize: false # do not normalize the merged task vectors
dtype: bfloat16
base_model: /workspace/dqdung/khoaluan/model/Qwen2.5-1.5B
```