Initialize project; model provided by the ModelHub XC community

Model: khazarai/Qwen3-4B-Qwen3.6-plus-Reasoning-Slerp
Source: Original Platform
ModelHub XC
2026-05-05 01:08:02 +08:00
commit 13d77c411d
10 changed files with 651 additions and 0 deletions

37
.gitattributes vendored Normal file

@@ -0,0 +1,37 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
tokenizer.json filter=lfs diff=lfs merge=lfs -text
benchmark/Merged_Model.png filter=lfs diff=lfs merge=lfs -text

87
README.md Normal file

@@ -0,0 +1,87 @@
---
base_model:
- khazarai/Qwen3-4B-Kimi2.5-Reasoning-Distilled
- khazarai/Qwen3-4B-Qwen3.6-plus-Reasoning-Distilled
library_name: transformers
tags:
- mergekit
- merge
license: apache-2.0
pipeline_tag: text-generation
language:
- en
datasets:
- khazarai/kimi-2.5-high-reasoning-250x
- khazarai/qwen3.6-plus-high-reasoning-500x
---
# khazarai/Qwen3-4B-Qwen3.6-plus-Reasoning-Slerp
![General Benchmark Comparison Chart](benchmark/Merged_Model.png)
*Note: The sharp drop in "Creative Writing" is an expected and accepted trade-off to maximize extreme logical reasoning and coding precision.*
This is an experimental, reasoning-optimized model created through a surgical SLERP merge of two strong 4B reasoning models. The goal of the merge was to combine the deep analytical capabilities of Kimi with the mathematical and structural precision of Qwen, while mitigating the catastrophic forgetting commonly seen when merging SFT models.
After multiple iterations and layer-by-layer tensor analysis, we achieved a **"1+1=3 Synergy Effect"** in Logical Inference and Planning, outperforming both base models and the official Qwen Thinking model.
### The "Golden Path" (V5) Strategy
Standard SLERP merges often destroy RAG capabilities and syntax adherence. To solve this, this model utilizes a custom merge configuration:
1. **RAG/Vocabulary Fix:** `embed_tokens` and `lm_head` are strictly pinned to `1.0` (Qwen). The model reads and speaks purely using Qwen's vocabulary, completely eliminating the RAG degradation problem.
2. **Gradient Attention:** The intermediate attention and MLP layers follow a smooth gradient `[0, 0.1, 0.2, 0.3, 0.5, 0.7, 0.8, 0.9, 1]` to prevent weight interference in deep reasoning steps.
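For intuition, here is a minimal Python sketch of the SLERP operation that such a gradient drives (the flattening step and fallback threshold are illustrative assumptions, not mergekit's exact implementation; mergekit also interpolates the nine anchor values across all 36 layers):
```python
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two weight tensors.

    t = 0 returns `a` (the base model, Kimi); t = 1 returns `b` (Qwen).
    """
    a_flat, b_flat = a.flatten().float(), b.flatten().float()
    a_dir = a_flat / (a_flat.norm() + eps)
    b_dir = b_flat / (b_flat.norm() + eps)
    dot = torch.clamp(a_dir @ b_dir, -1.0, 1.0)
    omega = torch.arccos(dot)           # angle between the two weight directions
    if omega < 1e-4:                    # nearly colinear: fall back to plain lerp
        return (1 - t) * a + t * b
    sin_omega = torch.sin(omega)
    mixed = (torch.sin((1 - t) * omega) * a_flat + torch.sin(t * omega) * b_flat) / sin_omega
    return mixed.reshape(a.shape).to(a.dtype)

# Early layers stay close to Kimi (t ~ 0); late layers approach pure Qwen (t ~ 1).
anchor_t = [0, 0.1, 0.2, 0.3, 0.5, 0.7, 0.8, 0.9, 1]
```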
## Benchmark Performance (Multi-Domain Reasoning)
| Model | Score |
| :--- | :--- |
| **khazarai/Qwen3-4B-Qwen3.6-plus-Reasoning-Slerp** | **77.18** |
| khazarai/Qwen3-4B-Kimi2.5-Reasoning-Distilled | 76.09 |
| khazarai/Qwen3-4B-Qwen3.6-plus-Reasoning-Distilled | 75.64 |
| Qwen/Qwen3-4B-Thinking-2507 | 73.73 |
- **Benchmark**: khazarai/Multi-Domain-Reasoning-Benchmark
- **Total Questions**: 100
## 💡 Intended Use Cases
* **Ideal for:** Complex logical deductions, Python code debugging, mathematical problem-solving, and strict RAG (Retrieval-Augmented Generation) pipelines.
* **Not recommended for:** Creative writing, poetry, or highly imaginative storytelling.
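As a starting point for such pipelines, a minimal `transformers` loading sketch is shown below (generation settings are illustrative, not tuned recommendations):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "khazarai/Qwen3-4B-Qwen3.6-plus-Reasoning-Slerp"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "A train leaves at 09:40 and arrives at 13:05. How long is the trip?"}]
# The chat template opens a <think> block, so the model reasons before answering.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=2048)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```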
### Models Merged
The following models were included in the merge:
* [khazarai/Qwen3-4B-Kimi2.5-Reasoning-Distilled](https://huggingface.co/khazarai/Qwen3-4B-Kimi2.5-Reasoning-Distilled)
* [khazarai/Qwen3-4B-Qwen3.6-plus-Reasoning-Distilled](https://huggingface.co/khazarai/Qwen3-4B-Qwen3.6-plus-Reasoning-Distilled)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: khazarai/Qwen3-4B-Kimi2.5-Reasoning-Distilled
- model: khazarai/Qwen3-4B-Qwen3.6-plus-Reasoning-Distilled
merge_method: slerp
base_model: khazarai/Qwen3-4B-Kimi2.5-Reasoning-Distilled
parameters:
t:
- filter: embed_tokens
value: 1
- filter: lm_head
value: 1
- value: 1
- filter: self
value: [0, 0.1, 0.2, 0.3, 0.5, 0.7, 0.8, 0.9, 1]
dtype: bfloat16
```
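Under mergekit's standard workflow, a merge like this can typically be reproduced by pointing the `mergekit-yaml` CLI at the config above, e.g. `mergekit-yaml mergekit_config.yml ./merged-model` (optionally with `--cuda`); exact flags depend on your mergekit version.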

3
benchmark/Merged_Model.png Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7ab02f4b4c12f0ff5f4977f8d0ae2a6721b92ccca7ee20720e96b96c16e7b5c2
size 157595

73
config.json Normal file

@@ -0,0 +1,73 @@
{
"architectures": [
"Qwen3ForCausalLM"
],
"attention_bias": false,
"attention_dropout": 0.0,
"bos_token_id": null,
"dtype": "bfloat16",
"eos_token_id": 151645,
"head_dim": 128,
"hidden_act": "silu",
"hidden_size": 2560,
"initializer_range": 0.02,
"intermediate_size": 9728,
"layer_types": [
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention",
"full_attention"
],
"max_position_embeddings": 262144,
"max_window_layers": 36,
"model_type": "qwen3",
"num_attention_heads": 32,
"num_hidden_layers": 36,
"num_key_value_heads": 8,
"pad_token_id": 151669,
"rms_norm_eps": 1e-06,
"rope_parameters": {
"rope_theta": 5000000,
"rope_type": "default"
},
"sliding_window": null,
"tie_word_embeddings": true,
"transformers_version": "5.0.0",
"unsloth_fixed": true,
"unsloth_version": "2026.3.7",
"use_cache": true,
"use_sliding_window": false,
"vocab_size": 151936
}

20
mergekit_config.yml Normal file

@@ -0,0 +1,20 @@
models:
- model: khazarai/Qwen3-4B-Kimi2.5-Reasoning-Distilled
- model: khazarai/Qwen3-4B-Qwen3.6-plus-Reasoning-Distilled
merge_method: slerp
base_model: khazarai/Qwen3-4B-Kimi2.5-Reasoning-Distilled
parameters:
t:
- filter: embed_tokens
value: 1
- filter: lm_head
value: 1
- value: 1
- filter: self
value: [0, 0.1, 0.2, 0.3, 0.5, 0.7, 0.8, 0.9, 1]
dtype: bfloat16

3
model-00001-of-00002.safetensors Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:518236ed993065e71c3fdff8e649e4a157c4195ca8834b0e7c4987ca66bebbf0
size 4990818752

3
model-00002-of-00002.safetensors Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:08185a8789244ef018789f02833f0e08dc5105aabea9aa79398a06d63f03dbb7
size 3054163248

406
model.safetensors.index.json Normal file

@@ -0,0 +1,406 @@
{
"metadata": {
"total_size": 8044936192,
"mergekit_version": "0.1.4"
},
"weight_map": {
"model.embed_tokens.weight": "model-00001-of-00002.safetensors",
"model.layers.0.input_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.0.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.0.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.0.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.0.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.0.self_attn.k_norm.weight": "model-00001-of-00002.safetensors",
"model.layers.0.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.0.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.0.self_attn.q_norm.weight": "model-00001-of-00002.safetensors",
"model.layers.0.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.0.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.1.input_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.1.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.1.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.1.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.1.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.1.self_attn.k_norm.weight": "model-00001-of-00002.safetensors",
"model.layers.1.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.1.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.1.self_attn.q_norm.weight": "model-00001-of-00002.safetensors",
"model.layers.1.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.1.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.10.input_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.10.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.10.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.10.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.10.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.10.self_attn.k_norm.weight": "model-00001-of-00002.safetensors",
"model.layers.10.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.10.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.10.self_attn.q_norm.weight": "model-00001-of-00002.safetensors",
"model.layers.10.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.10.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.11.input_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.11.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.11.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.11.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.11.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.11.self_attn.k_norm.weight": "model-00001-of-00002.safetensors",
"model.layers.11.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.11.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.11.self_attn.q_norm.weight": "model-00001-of-00002.safetensors",
"model.layers.11.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.11.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.12.input_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.12.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.12.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.12.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.12.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.12.self_attn.k_norm.weight": "model-00001-of-00002.safetensors",
"model.layers.12.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.12.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.12.self_attn.q_norm.weight": "model-00001-of-00002.safetensors",
"model.layers.12.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.12.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.13.input_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.13.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.13.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.13.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.13.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.13.self_attn.k_norm.weight": "model-00001-of-00002.safetensors",
"model.layers.13.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.13.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.13.self_attn.q_norm.weight": "model-00001-of-00002.safetensors",
"model.layers.13.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.13.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.14.input_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.14.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.14.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.14.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.14.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.14.self_attn.k_norm.weight": "model-00001-of-00002.safetensors",
"model.layers.14.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.14.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.14.self_attn.q_norm.weight": "model-00001-of-00002.safetensors",
"model.layers.14.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.14.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.15.input_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.15.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.15.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.15.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.15.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.15.self_attn.k_norm.weight": "model-00001-of-00002.safetensors",
"model.layers.15.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.15.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.15.self_attn.q_norm.weight": "model-00001-of-00002.safetensors",
"model.layers.15.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.15.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.16.input_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.16.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.16.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.16.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.16.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.16.self_attn.k_norm.weight": "model-00001-of-00002.safetensors",
"model.layers.16.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.16.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.16.self_attn.q_norm.weight": "model-00001-of-00002.safetensors",
"model.layers.16.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.16.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.17.input_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.17.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.17.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.17.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.17.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.17.self_attn.k_norm.weight": "model-00001-of-00002.safetensors",
"model.layers.17.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.17.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.17.self_attn.q_norm.weight": "model-00001-of-00002.safetensors",
"model.layers.17.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.17.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.18.input_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.18.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.18.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.18.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.18.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.18.self_attn.k_norm.weight": "model-00001-of-00002.safetensors",
"model.layers.18.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.18.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.18.self_attn.q_norm.weight": "model-00001-of-00002.safetensors",
"model.layers.18.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.18.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.19.input_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.19.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.19.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.19.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.19.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.19.self_attn.k_norm.weight": "model-00001-of-00002.safetensors",
"model.layers.19.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.19.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.19.self_attn.q_norm.weight": "model-00001-of-00002.safetensors",
"model.layers.19.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.19.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.2.input_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.2.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.2.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.2.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.2.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.2.self_attn.k_norm.weight": "model-00001-of-00002.safetensors",
"model.layers.2.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.2.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.2.self_attn.q_norm.weight": "model-00001-of-00002.safetensors",
"model.layers.2.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.2.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.20.input_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.20.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.20.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.20.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.20.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.20.self_attn.k_norm.weight": "model-00001-of-00002.safetensors",
"model.layers.20.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.20.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.20.self_attn.q_norm.weight": "model-00001-of-00002.safetensors",
"model.layers.20.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.20.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.21.input_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.21.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.21.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.21.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.21.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.21.self_attn.k_norm.weight": "model-00001-of-00002.safetensors",
"model.layers.21.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.21.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.21.self_attn.q_norm.weight": "model-00001-of-00002.safetensors",
"model.layers.21.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.21.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.22.input_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.22.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.22.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.22.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.22.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.22.self_attn.k_norm.weight": "model-00001-of-00002.safetensors",
"model.layers.22.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.22.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.22.self_attn.q_norm.weight": "model-00001-of-00002.safetensors",
"model.layers.22.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.22.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.23.input_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.23.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.23.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.23.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.23.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.23.self_attn.k_norm.weight": "model-00001-of-00002.safetensors",
"model.layers.23.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.23.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.23.self_attn.q_norm.weight": "model-00001-of-00002.safetensors",
"model.layers.23.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.23.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.24.input_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.24.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.24.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.24.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.24.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.24.self_attn.k_norm.weight": "model-00001-of-00002.safetensors",
"model.layers.24.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.24.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.24.self_attn.q_norm.weight": "model-00001-of-00002.safetensors",
"model.layers.24.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.24.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.25.input_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.25.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.25.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.25.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.25.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.25.self_attn.k_norm.weight": "model-00001-of-00002.safetensors",
"model.layers.25.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.25.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.25.self_attn.q_norm.weight": "model-00001-of-00002.safetensors",
"model.layers.25.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.25.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.26.input_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.26.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.26.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.26.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.26.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.26.self_attn.k_norm.weight": "model-00001-of-00002.safetensors",
"model.layers.26.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.26.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.26.self_attn.q_norm.weight": "model-00001-of-00002.safetensors",
"model.layers.26.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.26.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.27.input_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.27.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.27.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.27.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.27.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
"model.layers.27.self_attn.k_norm.weight": "model-00001-of-00002.safetensors",
"model.layers.27.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.27.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
"model.layers.27.self_attn.q_norm.weight": "model-00001-of-00002.safetensors",
"model.layers.27.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.27.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.28.input_layernorm.weight": "model-00002-of-00002.safetensors",
"model.layers.28.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.28.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.28.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.28.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
"model.layers.28.self_attn.k_norm.weight": "model-00002-of-00002.safetensors",
"model.layers.28.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.28.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.28.self_attn.q_norm.weight": "model-00002-of-00002.safetensors",
"model.layers.28.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.28.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.29.input_layernorm.weight": "model-00002-of-00002.safetensors",
"model.layers.29.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.29.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.29.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.29.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
"model.layers.29.self_attn.k_norm.weight": "model-00002-of-00002.safetensors",
"model.layers.29.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.29.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.29.self_attn.q_norm.weight": "model-00002-of-00002.safetensors",
"model.layers.29.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.29.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.3.input_layernorm.weight": "model-00002-of-00002.safetensors",
"model.layers.3.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.3.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.3.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.3.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
"model.layers.3.self_attn.k_norm.weight": "model-00002-of-00002.safetensors",
"model.layers.3.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.3.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.3.self_attn.q_norm.weight": "model-00002-of-00002.safetensors",
"model.layers.3.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.3.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.30.input_layernorm.weight": "model-00002-of-00002.safetensors",
"model.layers.30.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.30.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.30.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.30.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
"model.layers.30.self_attn.k_norm.weight": "model-00002-of-00002.safetensors",
"model.layers.30.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.30.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.30.self_attn.q_norm.weight": "model-00002-of-00002.safetensors",
"model.layers.30.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.30.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.31.input_layernorm.weight": "model-00002-of-00002.safetensors",
"model.layers.31.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.31.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.31.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.31.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
"model.layers.31.self_attn.k_norm.weight": "model-00002-of-00002.safetensors",
"model.layers.31.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.31.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.31.self_attn.q_norm.weight": "model-00002-of-00002.safetensors",
"model.layers.31.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.31.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.32.input_layernorm.weight": "model-00002-of-00002.safetensors",
"model.layers.32.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.32.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.32.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.32.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
"model.layers.32.self_attn.k_norm.weight": "model-00002-of-00002.safetensors",
"model.layers.32.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.32.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.32.self_attn.q_norm.weight": "model-00002-of-00002.safetensors",
"model.layers.32.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.32.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.33.input_layernorm.weight": "model-00002-of-00002.safetensors",
"model.layers.33.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.33.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.33.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.33.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
"model.layers.33.self_attn.k_norm.weight": "model-00002-of-00002.safetensors",
"model.layers.33.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.33.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.33.self_attn.q_norm.weight": "model-00002-of-00002.safetensors",
"model.layers.33.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.33.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.34.input_layernorm.weight": "model-00002-of-00002.safetensors",
"model.layers.34.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.34.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.34.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.34.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
"model.layers.34.self_attn.k_norm.weight": "model-00002-of-00002.safetensors",
"model.layers.34.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.34.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.34.self_attn.q_norm.weight": "model-00002-of-00002.safetensors",
"model.layers.34.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.34.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.35.input_layernorm.weight": "model-00002-of-00002.safetensors",
"model.layers.35.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.35.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.35.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.35.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
"model.layers.35.self_attn.k_norm.weight": "model-00002-of-00002.safetensors",
"model.layers.35.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.35.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.35.self_attn.q_norm.weight": "model-00002-of-00002.safetensors",
"model.layers.35.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.35.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.4.input_layernorm.weight": "model-00002-of-00002.safetensors",
"model.layers.4.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.4.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.4.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.4.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
"model.layers.4.self_attn.k_norm.weight": "model-00002-of-00002.safetensors",
"model.layers.4.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.4.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.4.self_attn.q_norm.weight": "model-00002-of-00002.safetensors",
"model.layers.4.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.4.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.5.input_layernorm.weight": "model-00002-of-00002.safetensors",
"model.layers.5.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.5.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.5.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.5.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
"model.layers.5.self_attn.k_norm.weight": "model-00002-of-00002.safetensors",
"model.layers.5.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.5.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.5.self_attn.q_norm.weight": "model-00002-of-00002.safetensors",
"model.layers.5.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.5.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.6.input_layernorm.weight": "model-00002-of-00002.safetensors",
"model.layers.6.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.6.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.6.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.6.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
"model.layers.6.self_attn.k_norm.weight": "model-00002-of-00002.safetensors",
"model.layers.6.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.6.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.6.self_attn.q_norm.weight": "model-00002-of-00002.safetensors",
"model.layers.6.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.6.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.7.input_layernorm.weight": "model-00002-of-00002.safetensors",
"model.layers.7.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.7.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.7.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.7.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
"model.layers.7.self_attn.k_norm.weight": "model-00002-of-00002.safetensors",
"model.layers.7.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.7.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.7.self_attn.q_norm.weight": "model-00002-of-00002.safetensors",
"model.layers.7.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.7.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.8.input_layernorm.weight": "model-00002-of-00002.safetensors",
"model.layers.8.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.8.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.8.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.8.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
"model.layers.8.self_attn.k_norm.weight": "model-00002-of-00002.safetensors",
"model.layers.8.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.8.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.8.self_attn.q_norm.weight": "model-00002-of-00002.safetensors",
"model.layers.8.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.8.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.9.input_layernorm.weight": "model-00002-of-00002.safetensors",
"model.layers.9.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.9.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.9.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.9.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
"model.layers.9.self_attn.k_norm.weight": "model-00002-of-00002.safetensors",
"model.layers.9.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.9.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.9.self_attn.q_norm.weight": "model-00002-of-00002.safetensors",
"model.layers.9.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
"model.layers.9.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
"model.norm.weight": "model-00002-of-00002.safetensors"
}
}

3
tokenizer.json Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d7430e9138b76e93fb6f93462394d236b411111aef53cb421ba97d2691040cca
size 11423114

16
tokenizer_config.json Normal file

@@ -0,0 +1,16 @@
{
"add_prefix_space": false,
"backend": "tokenizers",
"bos_token": null,
"clean_up_tokenization_spaces": false,
"eos_token": "<|im_end|>",
"errors": "replace",
"is_local": false,
"model_max_length": 262144,
"pad_token": "<|PAD_TOKEN|>",
"padding_side": "left",
"split_special_tokens": false,
"tokenizer_class": "Qwen2Tokenizer",
"unk_token": null,
"chat_template": "{%- if tools %}\n {{- '<|im_start|>system\\n' }}\n {%- if messages[0].role == 'system' %}\n {{- messages[0].content + '\\n\\n' }}\n {%- endif %}\n {{- \"# Tools\\n\\nYou may call one or more functions to assist with the user query.\\n\\nYou are provided with function signatures within <tools></tools> XML tags:\\n<tools>\" }}\n {%- for tool in tools %}\n {{- \"\\n\" }}\n {{- tool | tojson }}\n {%- endfor %}\n {{- \"\\n</tools>\\n\\nFor each function call, return a json object with function name and arguments within <tool_call></tool_call> XML tags:\\n<tool_call>\\n{\\\"name\\\": <function-name>, \\\"arguments\\\": <args-json-object>}\\n</tool_call><|im_end|>\\n\" }}\n{%- else %}\n {%- if messages[0].role == 'system' %}\n {{- '<|im_start|>system\\n' + messages[0].content + '<|im_end|>\\n' }}\n {%- endif %}\n{%- endif %}\n{%- set ns = namespace(multi_step_tool=true, last_query_index=messages|length - 1) %}\n{%- for message in messages[::-1] %}\n {%- set index = (messages|length - 1) - loop.index0 %}\n {%- if ns.multi_step_tool and message.role == \"user\" and message.content is string and not(message.content.startswith('<tool_response>') and message.content.endswith('</tool_response>')) %}\n {%- set ns.multi_step_tool = false %}\n {%- set ns.last_query_index = index %}\n {%- endif %}\n{%- endfor %}\n{%- for message in messages %}\n {%- if message.content is string %}\n {%- set content = message.content %}\n {%- else %}\n {%- set content = '' %}\n {%- endif %}\n {%- if (message.role == \"user\") or (message.role == \"system\" and not loop.first) %}\n {{- '<|im_start|>' + message.role + '\\n' + content + '<|im_end|>' + '\\n' }}\n {%- elif message.role == \"assistant\" %}\n {%- set reasoning_content = '' %}\n {%- if message.reasoning_content is string %}\n {%- set reasoning_content = message.reasoning_content %}\n {%- else %}\n {%- if '</think>' in content %}\n {%- set reasoning_content = content.split('</think>')[0].rstrip('\\n').split('<think>')[-1].lstrip('\\n') %}\n {%- set content = content.split('</think>')[-1].lstrip('\\n') %}\n {%- endif %}\n {%- endif %}\n {%- if loop.index0 > ns.last_query_index %}\n {%- if loop.last or (not loop.last and reasoning_content) %}\n {{- '<|im_start|>' + message.role + '\\n<think>\\n' + reasoning_content.strip('\\n') + '\\n</think>\\n\\n' + content.lstrip('\\n') }}\n {%- else %}\n {{- '<|im_start|>' + message.role + '\\n' + content }}\n {%- endif %}\n {%- else %}\n {{- '<|im_start|>' + message.role + '\\n' + content }}\n {%- endif %}\n {%- if message.tool_calls %}\n {%- for tool_call in message.tool_calls %}\n {%- if (loop.first and content) or (not loop.first) %}\n {{- '\\n' }}\n {%- endif %}\n {%- if tool_call.function %}\n {%- set tool_call = tool_call.function %}\n {%- endif %}\n {{- '<tool_call>\\n{\"name\": \"' }}\n {{- tool_call.name }}\n {{- '\", \"arguments\": ' }}\n {%- if tool_call.arguments is string %}\n {{- tool_call.arguments }}\n {%- else %}\n {{- tool_call.arguments | tojson }}\n {%- endif %}\n {{- '}\\n</tool_call>' }}\n {%- endfor %}\n {%- endif %}\n {{- '<|im_end|>\\n' }}\n {%- elif message.role == \"tool\" %}\n {%- if loop.first or (messages[loop.index0 - 1].role != \"tool\") %}\n {{- '<|im_start|>user' }}\n {%- endif %}\n {{- '\\n<tool_response>\\n' }}\n {{- content }}\n {{- '\\n</tool_response>' }}\n {%- if loop.last or (messages[loop.index0 + 1].role != \"tool\") %}\n {{- '<|im_end|>\\n' }}\n {%- endif %}\n {%- endif %}\n{%- endfor %}\n{%- if add_generation_prompt %}\n {{- '<|im_start|>assistant\\n<think>\\n' 
}}\n{%- endif %}"
}