Initialize project; model provided by the ModelHub XC community

Model: nv-community/OpenCodeReasoning-Nemotron-14B
Source: Original Platform
This commit is contained in:
ModelHub XC
2026-04-12 17:36:00 +08:00
commit 2b3c6c8ef3
17 changed files with 152557 additions and 0 deletions

.gitattributes vendored Normal file (+49 lines)

@@ -0,0 +1,49 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bin.* filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zstandard filter=lfs diff=lfs merge=lfs -text
*.tfevents* filter=lfs diff=lfs merge=lfs -text
*.db* filter=lfs diff=lfs merge=lfs -text
*.ark* filter=lfs diff=lfs merge=lfs -text
**/*ckpt*data* filter=lfs diff=lfs merge=lfs -text
**/*ckpt*.meta filter=lfs diff=lfs merge=lfs -text
**/*ckpt*.index filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.gguf* filter=lfs diff=lfs merge=lfs -text
*.ggml filter=lfs diff=lfs merge=lfs -text
*.llamafile* filter=lfs diff=lfs merge=lfs -text
*.pt2 filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
tokenizer.json filter=lfs diff=lfs merge=lfs -text
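The patterns above use gitattributes glob syntax to route large binary artifacts through Git LFS. As a rough illustration (not Git's actual matcher — Python's `fnmatch` approximates only the simple basename globs, not `**` or path-anchored patterns), a filename can be checked against a subset of these patterns like so:

```python
from fnmatch import fnmatch

# A few of the basename-only LFS patterns from the .gitattributes above
LFS_PATTERNS = ["*.safetensors", "*.bin", "*.gguf*", "*tfevents*", "tokenizer.json"]

def is_lfs_tracked(filename: str) -> bool:
    """Return True if the file's basename matches any LFS pattern (approximation)."""
    basename = filename.rsplit("/", 1)[-1]
    return any(fnmatch(basename, pat) for pat in LFS_PATTERNS)
```

This explains why `tokenizer.json` and the `model-*.safetensors` shards appear below as LFS pointer files rather than inline content.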

LICENSE Normal file (+16 lines)

@@ -0,0 +1,16 @@
/*
* SPDX-FileCopyrightText: Copyright (c) 2025 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
* SPDX-License-Identifier: Apache-2.0
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/

README.md Normal file (+206 lines)

@@ -0,0 +1,206 @@
---
base_model:
- Qwen/Qwen2.5-14B-Instruct
datasets:
- nvidia/OpenCodeReasoning
language:
- en
library_name: transformers
license: apache-2.0
tags:
- nvidia
- code
pipeline_tag: text-generation
---
# OpenCodeReasoning-Nemotron-14B Overview
## Description: <br>
OpenCodeReasoning-Nemotron-14B is a large language model (LLM) derived from Qwen2.5-14B-Instruct (the reference model). It is a reasoning model post-trained for code generation and supports a context length of 32K tokens. <br>
This model is ready for commercial/non-commercial use. <br>
![Evaluation Results](./results.png)
## Results from [OpenCodeReasoning](https://arxiv.org/abs/2504.01943)
The results below are the average of **64 evaluations** on each benchmark.
| Model | LiveCodeBench Avg. | CodeContest All |
|------------------------|--------------------|-----------------|
| DeepSeek-R1 | 65.6 | 26.2 |
| QwQ-32B | 61.3 | 20.2 |
| | | |
| **Distilled 7B+ Models** | | |
| | | |
| Bespoke-Stratos-7B | 14.7 | 2.0 |
| OpenThinker-7B | 25.5 | 5.0 |
| R1-Distill-Qwen-7B | 38.0 | 11.1 |
| OlympicCoder-7B | 40.9 | 10.6 |
| **OCR-Qwen-7B** | **48.5** | **16.3** |
| **OCR-Qwen-7B-Instruct** | **51.3** | **18.1** |
| | | |
| **Distilled 14B+ Models**| | |
| | | |
| R1-Distill-Qwen-14B | 51.3 | 17.6 |
| **OCR-Qwen-14B** | **57.7** | **22.6** |
| **OCR-Qwen-14B-Instruct**| **59.4** | **23.6** |
| | | |
| **Distilled 32B+ Models**| | |
| | | |
| Bespoke-Stratos-32B | 30.1 | 6.3 |
| OpenThinker-32B | 54.1 | 16.4 |
| R1-Distill-Qwen-32B | 58.1 | 18.3 |
| OlympicCoder-32B | 57.4 | 18.0 |
| **OCR-Qwen-32B** | **61.8** | **24.6** |
| **OCR-Qwen-32B-Instruct**| **61.7** | **24.4** |
## Reproducing our results
* [Models](https://huggingface.co/collections/nvidia/opencodereasoning-2-68168f37cd7c6beb1e3f92e7)
* [Dataset](https://huggingface.co/datasets/nvidia/OpenCodeReasoning)
* [Paper](https://arxiv.org/abs/2504.01943)
## How to use the models?
To run inference on coding problems:
````python
import transformers
import torch

model_id = "nvidia/OpenCodeReasoning-Nemotron-14B"

pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

prompt = """You are a helpful and harmless assistant. You should think step-by-step before responding to the instruction below.
Please use python programming language only.
You must use ```python for just the final solution code block with the following format:
```python
# Your code here
```
{user}
"""

messages = [
    {"role": "user",
     "content": prompt.format(user="Write a program to calculate the sum of the first $N$ fibonacci numbers")},
]

outputs = pipeline(messages, max_new_tokens=32768)
print(outputs[0]["generated_text"][-1]["content"])
````
## Citation
If you find the data useful, please cite:
```
@article{ahmad2025opencodereasoning,
title={OpenCodeReasoning: Advancing Data Distillation for Competitive Coding},
  author={Wasi Uddin Ahmad and Sean Narenthiran and Somshubra Majumdar and Aleksander Ficek and Siddhartha Jain and Jocelyn Huang and Vahid Noroozi and Boris Ginsburg},
year={2025},
eprint={2504.01943},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2504.01943},
}
```
## Additional Information
## Model Architecture: <br>
Architecture Type: Dense decoder-only Transformer model
Network Architecture: Qwen2.5-14B-Instruct
<br>
**OpenCodeReasoning-Nemotron-14B was developed based on Qwen2.5-14B-Instruct and has 14B model parameters. <br>**
## Input: <br>
**Input Type(s):** Text <br>
**Input Format(s):** String <br>
**Input Parameters:** One-Dimensional (1D) <br>
**Other Properties Related to Input:** Context length up to 32,768 tokens <br>
## Output: <br>
**Output Type(s):** Text <br>
**Output Format:** String <br>
**Output Parameters:** One-Dimensional (1D) <br>
**Other Properties Related to Output:** Context length up to 32,768 tokens <br>
Our AI models are designed and/or optimized to run on NVIDIA GPU-accelerated systems. By leveraging NVIDIA's hardware (e.g., GPU cores) and software frameworks (e.g., CUDA libraries), the model achieves faster training and inference times compared to CPU-only solutions. <br>
## Software Integration: <br>
* Runtime Engine: NeMo 2.3.0 <br>
* Recommended Hardware Microarchitecture Compatibility: <br>
NVIDIA Ampere <br>
NVIDIA Hopper <br>
* Preferred/Supported Operating System(s): Linux <br>
## Model Version(s):
1.0 (4/25/2025) <br>
OpenCodeReasoning-Nemotron-7B<br>
OpenCodeReasoning-Nemotron-14B<br>
OpenCodeReasoning-Nemotron-32B<br>
OpenCodeReasoning-Nemotron-32B-IOI<br>
# Training and Evaluation Datasets: <br>
## Training Dataset:
The training corpus for OpenCodeReasoning-Nemotron-14B is the [OpenCodeReasoning](https://huggingface.co/datasets/nvidia/OpenCodeReasoning) dataset, which is composed of competitive programming questions and DeepSeek-R1-generated responses.
Data Collection Method: Hybrid: Automated, Human, Synthetic <br>
Labeling Method: Hybrid: Automated, Human, Synthetic <br>
Properties: 736k samples from OpenCodeReasoning (https://huggingface.co/datasets/nvidia/OpenCodeReasoning)
## Evaluation Dataset:
We used the datasets listed in the next section to evaluate OpenCodeReasoning-Nemotron-14B. <br>
**Data Collection Method: Hybrid: Automated, Human, Synthetic <br>**
**Labeling Method: Hybrid: Automated, Human, Synthetic <br>**
### License/Terms of Use: <br>
GOVERNING TERMS: Use of this model is governed by [Apache 2.0](https://huggingface.co/nvidia/OpenCode-Nemotron-2-14B/blob/main/LICENSE).
### Deployment Geography:
Global<br>
### Use Case: <br>
This model is intended for developers and researchers building LLMs. <br>
### Release Date: <br>
Huggingface [04/25/2025] via https://huggingface.co/nvidia/OpenCodeReasoning-Nemotron-7B/ <br>
## Reference(s):
[2504.01943] OpenCodeReasoning: Advancing Data Distillation for Competitive Coding
<br>
## Inference:
**Engine:** vLLM <br>
**Test Hardware:** NVIDIA H100-80GB <br>
## Ethical Considerations:
NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse.
Please report security vulnerabilities or NVIDIA AI concerns here.

added_tokens.json Normal file (+24 lines)

@@ -0,0 +1,24 @@
{
"</tool_call>": 151658,
"<tool_call>": 151657,
"<|box_end|>": 151649,
"<|box_start|>": 151648,
"<|endoftext|>": 151643,
"<|file_sep|>": 151664,
"<|fim_middle|>": 151660,
"<|fim_pad|>": 151662,
"<|fim_prefix|>": 151659,
"<|fim_suffix|>": 151661,
"<|im_end|>": 151645,
"<|im_start|>": 151644,
"<|image_pad|>": 151655,
"<|object_ref_end|>": 151647,
"<|object_ref_start|>": 151646,
"<|quad_end|>": 151651,
"<|quad_start|>": 151650,
"<|repo_name|>": 151663,
"<|video_pad|>": 151656,
"<|vision_end|>": 151653,
"<|vision_pad|>": 151654,
"<|vision_start|>": 151652
}
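The 22 added special tokens occupy the contiguous ID range 151643–151664, directly above the base BPE vocabulary (the `vocab_size` of 152064 in config.json leaves extra padded rows beyond them). A quick sanity-check sketch, using a few entries from the map above:

```python
# A few entries from added_tokens.json (IDs are contiguous across all 22 tokens)
added_tokens = {
    "<|endoftext|>": 151643,  # also bos_token_id in config.json
    "<|im_start|>": 151644,
    "<|im_end|>": 151645,     # also eos_token_id in config.json
    "<|file_sep|>": 151664,   # highest added-token ID
}

# Invert the map to recover a token string from its ID
id_to_token = {i: t for t, i in added_tokens.items()}
```

The bos/eos IDs here line up with `bos_token_id`/`eos_token_id` in config.json and generation_config.json below.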

config.json Normal file (+29 lines)

@@ -0,0 +1,29 @@
{
"_name_or_path": "Qwen/Qwen2.5-14B-Instruct",
"architectures": [
"Qwen2ForCausalLM"
],
"attention_dropout": 0.0,
"bos_token_id": 151643,
"eos_token_id": 151645,
"hidden_act": "silu",
"hidden_size": 5120,
"initializer_range": 0.02,
"intermediate_size": 13824,
"max_position_embeddings": 32768,
"max_window_layers": 70,
"model_type": "qwen2",
"num_attention_heads": 40,
"num_hidden_layers": 48,
"num_key_value_heads": 8,
"rms_norm_eps": 1e-06,
"rope_scaling": null,
"rope_theta": 1000000.0,
"sliding_window": null,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.47.1",
"use_cache": true,
"use_sliding_window": false,
"vocab_size": 152064
}
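The hyperparameters above fully determine the dense parameter count. A back-of-the-envelope calculation (a sketch, assuming the standard Qwen2 layout: biased q/k/v projections, bias-free o_proj and MLP, grouped-query attention, untied embeddings) reproduces the total implied by the safetensors index's `total_size` of 29,540,067,328 bytes at 2 bytes per bf16 parameter:

```python
# Hyperparameters from config.json
hidden, inter, layers = 5120, 13824, 48
heads, kv_heads, vocab = 40, 8, 152064
head_dim = hidden // heads       # 128
kv_dim = kv_heads * head_dim     # 1024 (grouped-query attention)

attn = (hidden * hidden + hidden              # q_proj weight + bias
        + 2 * (hidden * kv_dim + kv_dim)      # k_proj and v_proj (+ biases)
        + hidden * hidden)                    # o_proj (no bias)
mlp = 3 * hidden * inter                      # gate_proj, up_proj, down_proj
norms = 2 * hidden                            # two RMSNorms per layer
per_layer = attn + mlp + norms

total = (layers * per_layer
         + 2 * vocab * hidden                 # embed_tokens + untied lm_head
         + hidden)                            # final norm

print(total)  # 14770033664 -> ~14.8B params, ~29.5 GB in bf16
```

Doubling `total` gives exactly the 29,540,067,328 bytes recorded in model.safetensors.index.json below, which is a useful integrity check when mirroring the shards.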

configuration.json Normal file (+1 line)

@@ -0,0 +1 @@
{"framework": "pytorch", "task": "text-generation", "allow_remote": true}

generation_config.json Normal file (+6 lines)

@@ -0,0 +1,6 @@
{
"_from_model_config": true,
"bos_token_id": 151643,
"eos_token_id": 151645,
"transformers_version": "4.47.1"
}

merges.txt Normal file (+151388 lines)

File diff suppressed because it is too large

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:2870a88bcef736b77679f86de06f3ccedf56d7aebb992c79e785cff21382cb3b
size 9941058640


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8e2928a4b1dc7d04a0f56972ffb9547e4a8b278ef7243f47333c6ae14cea4ba3
size 9909694792


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:4340738c2f709ce8af77b7734a90e6e55e1817fd2acde1c6a266f73ae395f3cd
size 9689380560


@@ -0,0 +1,586 @@
{
"metadata": {
"total_size": 29540067328
},
"weight_map": {
"lm_head.weight": "model-00003-of-00003.safetensors",
"model.embed_tokens.weight": "model-00001-of-00003.safetensors",
"model.layers.0.input_layernorm.weight": "model-00001-of-00003.safetensors",
"model.layers.0.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.0.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.0.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.0.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
"model.layers.0.self_attn.k_proj.bias": "model-00001-of-00003.safetensors",
"model.layers.0.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.0.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.0.self_attn.q_proj.bias": "model-00001-of-00003.safetensors",
"model.layers.0.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.0.self_attn.v_proj.bias": "model-00001-of-00003.safetensors",
"model.layers.0.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.1.input_layernorm.weight": "model-00001-of-00003.safetensors",
"model.layers.1.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.1.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.1.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.1.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
"model.layers.1.self_attn.k_proj.bias": "model-00001-of-00003.safetensors",
"model.layers.1.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.1.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.1.self_attn.q_proj.bias": "model-00001-of-00003.safetensors",
"model.layers.1.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.1.self_attn.v_proj.bias": "model-00001-of-00003.safetensors",
"model.layers.1.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.10.input_layernorm.weight": "model-00001-of-00003.safetensors",
"model.layers.10.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.10.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.10.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.10.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
"model.layers.10.self_attn.k_proj.bias": "model-00001-of-00003.safetensors",
"model.layers.10.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.10.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.10.self_attn.q_proj.bias": "model-00001-of-00003.safetensors",
"model.layers.10.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.10.self_attn.v_proj.bias": "model-00001-of-00003.safetensors",
"model.layers.10.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.11.input_layernorm.weight": "model-00001-of-00003.safetensors",
"model.layers.11.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.11.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.11.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.11.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
"model.layers.11.self_attn.k_proj.bias": "model-00001-of-00003.safetensors",
"model.layers.11.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.11.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.11.self_attn.q_proj.bias": "model-00001-of-00003.safetensors",
"model.layers.11.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.11.self_attn.v_proj.bias": "model-00001-of-00003.safetensors",
"model.layers.11.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.12.input_layernorm.weight": "model-00001-of-00003.safetensors",
"model.layers.12.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.12.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.12.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.12.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
"model.layers.12.self_attn.k_proj.bias": "model-00001-of-00003.safetensors",
"model.layers.12.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.12.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.12.self_attn.q_proj.bias": "model-00001-of-00003.safetensors",
"model.layers.12.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.12.self_attn.v_proj.bias": "model-00001-of-00003.safetensors",
"model.layers.12.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.13.input_layernorm.weight": "model-00001-of-00003.safetensors",
"model.layers.13.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.13.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.13.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.13.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
"model.layers.13.self_attn.k_proj.bias": "model-00001-of-00003.safetensors",
"model.layers.13.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.13.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.13.self_attn.q_proj.bias": "model-00001-of-00003.safetensors",
"model.layers.13.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.13.self_attn.v_proj.bias": "model-00001-of-00003.safetensors",
"model.layers.13.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.14.input_layernorm.weight": "model-00001-of-00003.safetensors",
"model.layers.14.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.14.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.14.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.14.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
"model.layers.14.self_attn.k_proj.bias": "model-00001-of-00003.safetensors",
"model.layers.14.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.14.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.14.self_attn.q_proj.bias": "model-00001-of-00003.safetensors",
"model.layers.14.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.14.self_attn.v_proj.bias": "model-00001-of-00003.safetensors",
"model.layers.14.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.15.input_layernorm.weight": "model-00002-of-00003.safetensors",
"model.layers.15.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.15.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.15.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.15.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
"model.layers.15.self_attn.k_proj.bias": "model-00001-of-00003.safetensors",
"model.layers.15.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.15.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.15.self_attn.q_proj.bias": "model-00001-of-00003.safetensors",
"model.layers.15.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.15.self_attn.v_proj.bias": "model-00001-of-00003.safetensors",
"model.layers.15.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.16.input_layernorm.weight": "model-00002-of-00003.safetensors",
"model.layers.16.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.16.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.16.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.16.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
"model.layers.16.self_attn.k_proj.bias": "model-00002-of-00003.safetensors",
"model.layers.16.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.16.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.16.self_attn.q_proj.bias": "model-00002-of-00003.safetensors",
"model.layers.16.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.16.self_attn.v_proj.bias": "model-00002-of-00003.safetensors",
"model.layers.16.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.17.input_layernorm.weight": "model-00002-of-00003.safetensors",
"model.layers.17.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.17.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.17.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.17.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
"model.layers.17.self_attn.k_proj.bias": "model-00002-of-00003.safetensors",
"model.layers.17.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.17.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.17.self_attn.q_proj.bias": "model-00002-of-00003.safetensors",
"model.layers.17.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.17.self_attn.v_proj.bias": "model-00002-of-00003.safetensors",
"model.layers.17.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.18.input_layernorm.weight": "model-00002-of-00003.safetensors",
"model.layers.18.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.18.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.18.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.18.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
"model.layers.18.self_attn.k_proj.bias": "model-00002-of-00003.safetensors",
"model.layers.18.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.18.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.18.self_attn.q_proj.bias": "model-00002-of-00003.safetensors",
"model.layers.18.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.18.self_attn.v_proj.bias": "model-00002-of-00003.safetensors",
"model.layers.18.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.19.input_layernorm.weight": "model-00002-of-00003.safetensors",
"model.layers.19.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.19.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.19.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.19.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
"model.layers.19.self_attn.k_proj.bias": "model-00002-of-00003.safetensors",
"model.layers.19.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.19.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.19.self_attn.q_proj.bias": "model-00002-of-00003.safetensors",
"model.layers.19.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.19.self_attn.v_proj.bias": "model-00002-of-00003.safetensors",
"model.layers.19.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.2.input_layernorm.weight": "model-00001-of-00003.safetensors",
"model.layers.2.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.2.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.2.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.2.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
"model.layers.2.self_attn.k_proj.bias": "model-00001-of-00003.safetensors",
"model.layers.2.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.2.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.2.self_attn.q_proj.bias": "model-00001-of-00003.safetensors",
"model.layers.2.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.2.self_attn.v_proj.bias": "model-00001-of-00003.safetensors",
"model.layers.2.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.20.input_layernorm.weight": "model-00002-of-00003.safetensors",
"model.layers.20.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.20.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.20.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.20.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
"model.layers.20.self_attn.k_proj.bias": "model-00002-of-00003.safetensors",
"model.layers.20.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.20.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.20.self_attn.q_proj.bias": "model-00002-of-00003.safetensors",
"model.layers.20.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.20.self_attn.v_proj.bias": "model-00002-of-00003.safetensors",
"model.layers.20.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.21.input_layernorm.weight": "model-00002-of-00003.safetensors",
"model.layers.21.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.21.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.21.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.21.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
"model.layers.21.self_attn.k_proj.bias": "model-00002-of-00003.safetensors",
"model.layers.21.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.21.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.21.self_attn.q_proj.bias": "model-00002-of-00003.safetensors",
"model.layers.21.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.21.self_attn.v_proj.bias": "model-00002-of-00003.safetensors",
"model.layers.21.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.22.input_layernorm.weight": "model-00002-of-00003.safetensors",
"model.layers.22.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.22.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.22.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.22.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
"model.layers.22.self_attn.k_proj.bias": "model-00002-of-00003.safetensors",
"model.layers.22.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.22.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.22.self_attn.q_proj.bias": "model-00002-of-00003.safetensors",
"model.layers.22.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.22.self_attn.v_proj.bias": "model-00002-of-00003.safetensors",
"model.layers.22.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.23.input_layernorm.weight": "model-00002-of-00003.safetensors",
"model.layers.23.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.23.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.23.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.23.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
"model.layers.23.self_attn.k_proj.bias": "model-00002-of-00003.safetensors",
"model.layers.23.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.23.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.23.self_attn.q_proj.bias": "model-00002-of-00003.safetensors",
"model.layers.23.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.23.self_attn.v_proj.bias": "model-00002-of-00003.safetensors",
"model.layers.23.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.24.input_layernorm.weight": "model-00002-of-00003.safetensors",
"model.layers.24.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.24.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.24.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.24.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
"model.layers.24.self_attn.k_proj.bias": "model-00002-of-00003.safetensors",
"model.layers.24.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.24.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.24.self_attn.q_proj.bias": "model-00002-of-00003.safetensors",
"model.layers.24.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.24.self_attn.v_proj.bias": "model-00002-of-00003.safetensors",
"model.layers.24.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.25.input_layernorm.weight": "model-00002-of-00003.safetensors",
"model.layers.25.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.25.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.25.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.25.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
"model.layers.25.self_attn.k_proj.bias": "model-00002-of-00003.safetensors",
"model.layers.25.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.25.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.25.self_attn.q_proj.bias": "model-00002-of-00003.safetensors",
"model.layers.25.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.25.self_attn.v_proj.bias": "model-00002-of-00003.safetensors",
"model.layers.25.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.26.input_layernorm.weight": "model-00002-of-00003.safetensors",
"model.layers.26.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.26.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.26.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.26.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
"model.layers.26.self_attn.k_proj.bias": "model-00002-of-00003.safetensors",
"model.layers.26.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.26.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.26.self_attn.q_proj.bias": "model-00002-of-00003.safetensors",
"model.layers.26.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.26.self_attn.v_proj.bias": "model-00002-of-00003.safetensors",
"model.layers.26.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.27.input_layernorm.weight": "model-00002-of-00003.safetensors",
"model.layers.27.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.27.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.27.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.27.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
"model.layers.27.self_attn.k_proj.bias": "model-00002-of-00003.safetensors",
"model.layers.27.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.27.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.27.self_attn.q_proj.bias": "model-00002-of-00003.safetensors",
"model.layers.27.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.27.self_attn.v_proj.bias": "model-00002-of-00003.safetensors",
"model.layers.27.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.28.input_layernorm.weight": "model-00002-of-00003.safetensors",
"model.layers.28.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.28.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.28.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.28.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
"model.layers.28.self_attn.k_proj.bias": "model-00002-of-00003.safetensors",
"model.layers.28.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.28.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.28.self_attn.q_proj.bias": "model-00002-of-00003.safetensors",
"model.layers.28.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.28.self_attn.v_proj.bias": "model-00002-of-00003.safetensors",
"model.layers.28.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.29.input_layernorm.weight": "model-00002-of-00003.safetensors",
"model.layers.29.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.29.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.29.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.29.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
"model.layers.29.self_attn.k_proj.bias": "model-00002-of-00003.safetensors",
"model.layers.29.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.29.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.29.self_attn.q_proj.bias": "model-00002-of-00003.safetensors",
"model.layers.29.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.29.self_attn.v_proj.bias": "model-00002-of-00003.safetensors",
"model.layers.29.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.3.input_layernorm.weight": "model-00001-of-00003.safetensors",
"model.layers.3.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.3.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.3.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.3.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
"model.layers.3.self_attn.k_proj.bias": "model-00001-of-00003.safetensors",
"model.layers.3.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.3.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.3.self_attn.q_proj.bias": "model-00001-of-00003.safetensors",
"model.layers.3.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.3.self_attn.v_proj.bias": "model-00001-of-00003.safetensors",
"model.layers.3.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.30.input_layernorm.weight": "model-00002-of-00003.safetensors",
"model.layers.30.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.30.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.30.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.30.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
"model.layers.30.self_attn.k_proj.bias": "model-00002-of-00003.safetensors",
"model.layers.30.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.30.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.30.self_attn.q_proj.bias": "model-00002-of-00003.safetensors",
"model.layers.30.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.30.self_attn.v_proj.bias": "model-00002-of-00003.safetensors",
"model.layers.30.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.31.input_layernorm.weight": "model-00002-of-00003.safetensors",
"model.layers.31.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.31.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.31.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.31.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
"model.layers.31.self_attn.k_proj.bias": "model-00002-of-00003.safetensors",
"model.layers.31.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.31.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.31.self_attn.q_proj.bias": "model-00002-of-00003.safetensors",
"model.layers.31.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.31.self_attn.v_proj.bias": "model-00002-of-00003.safetensors",
"model.layers.31.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.32.input_layernorm.weight": "model-00002-of-00003.safetensors",
"model.layers.32.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.32.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.32.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.32.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
"model.layers.32.self_attn.k_proj.bias": "model-00002-of-00003.safetensors",
"model.layers.32.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.32.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.32.self_attn.q_proj.bias": "model-00002-of-00003.safetensors",
"model.layers.32.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.32.self_attn.v_proj.bias": "model-00002-of-00003.safetensors",
"model.layers.32.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.33.input_layernorm.weight": "model-00003-of-00003.safetensors",
"model.layers.33.mlp.down_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.33.mlp.gate_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.33.mlp.up_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.33.post_attention_layernorm.weight": "model-00003-of-00003.safetensors",
"model.layers.33.self_attn.k_proj.bias": "model-00002-of-00003.safetensors",
"model.layers.33.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.33.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.33.self_attn.q_proj.bias": "model-00002-of-00003.safetensors",
"model.layers.33.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.33.self_attn.v_proj.bias": "model-00002-of-00003.safetensors",
"model.layers.33.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.34.input_layernorm.weight": "model-00003-of-00003.safetensors",
"model.layers.34.mlp.down_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.34.mlp.gate_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.34.mlp.up_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.34.post_attention_layernorm.weight": "model-00003-of-00003.safetensors",
"model.layers.34.self_attn.k_proj.bias": "model-00003-of-00003.safetensors",
"model.layers.34.self_attn.k_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.34.self_attn.o_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.34.self_attn.q_proj.bias": "model-00003-of-00003.safetensors",
"model.layers.34.self_attn.q_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.34.self_attn.v_proj.bias": "model-00003-of-00003.safetensors",
"model.layers.34.self_attn.v_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.35.input_layernorm.weight": "model-00003-of-00003.safetensors",
"model.layers.35.mlp.down_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.35.mlp.gate_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.35.mlp.up_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.35.post_attention_layernorm.weight": "model-00003-of-00003.safetensors",
"model.layers.35.self_attn.k_proj.bias": "model-00003-of-00003.safetensors",
"model.layers.35.self_attn.k_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.35.self_attn.o_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.35.self_attn.q_proj.bias": "model-00003-of-00003.safetensors",
"model.layers.35.self_attn.q_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.35.self_attn.v_proj.bias": "model-00003-of-00003.safetensors",
"model.layers.35.self_attn.v_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.36.input_layernorm.weight": "model-00003-of-00003.safetensors",
"model.layers.36.mlp.down_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.36.mlp.gate_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.36.mlp.up_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.36.post_attention_layernorm.weight": "model-00003-of-00003.safetensors",
"model.layers.36.self_attn.k_proj.bias": "model-00003-of-00003.safetensors",
"model.layers.36.self_attn.k_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.36.self_attn.o_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.36.self_attn.q_proj.bias": "model-00003-of-00003.safetensors",
"model.layers.36.self_attn.q_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.36.self_attn.v_proj.bias": "model-00003-of-00003.safetensors",
"model.layers.36.self_attn.v_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.37.input_layernorm.weight": "model-00003-of-00003.safetensors",
"model.layers.37.mlp.down_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.37.mlp.gate_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.37.mlp.up_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.37.post_attention_layernorm.weight": "model-00003-of-00003.safetensors",
"model.layers.37.self_attn.k_proj.bias": "model-00003-of-00003.safetensors",
"model.layers.37.self_attn.k_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.37.self_attn.o_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.37.self_attn.q_proj.bias": "model-00003-of-00003.safetensors",
"model.layers.37.self_attn.q_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.37.self_attn.v_proj.bias": "model-00003-of-00003.safetensors",
"model.layers.37.self_attn.v_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.38.input_layernorm.weight": "model-00003-of-00003.safetensors",
"model.layers.38.mlp.down_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.38.mlp.gate_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.38.mlp.up_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.38.post_attention_layernorm.weight": "model-00003-of-00003.safetensors",
"model.layers.38.self_attn.k_proj.bias": "model-00003-of-00003.safetensors",
"model.layers.38.self_attn.k_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.38.self_attn.o_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.38.self_attn.q_proj.bias": "model-00003-of-00003.safetensors",
"model.layers.38.self_attn.q_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.38.self_attn.v_proj.bias": "model-00003-of-00003.safetensors",
"model.layers.38.self_attn.v_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.39.input_layernorm.weight": "model-00003-of-00003.safetensors",
"model.layers.39.mlp.down_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.39.mlp.gate_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.39.mlp.up_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.39.post_attention_layernorm.weight": "model-00003-of-00003.safetensors",
"model.layers.39.self_attn.k_proj.bias": "model-00003-of-00003.safetensors",
"model.layers.39.self_attn.k_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.39.self_attn.o_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.39.self_attn.q_proj.bias": "model-00003-of-00003.safetensors",
"model.layers.39.self_attn.q_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.39.self_attn.v_proj.bias": "model-00003-of-00003.safetensors",
"model.layers.39.self_attn.v_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.4.input_layernorm.weight": "model-00001-of-00003.safetensors",
"model.layers.4.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.4.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.4.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.4.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
"model.layers.4.self_attn.k_proj.bias": "model-00001-of-00003.safetensors",
"model.layers.4.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.4.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.4.self_attn.q_proj.bias": "model-00001-of-00003.safetensors",
"model.layers.4.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.4.self_attn.v_proj.bias": "model-00001-of-00003.safetensors",
"model.layers.4.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.40.input_layernorm.weight": "model-00003-of-00003.safetensors",
"model.layers.40.mlp.down_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.40.mlp.gate_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.40.mlp.up_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.40.post_attention_layernorm.weight": "model-00003-of-00003.safetensors",
"model.layers.40.self_attn.k_proj.bias": "model-00003-of-00003.safetensors",
"model.layers.40.self_attn.k_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.40.self_attn.o_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.40.self_attn.q_proj.bias": "model-00003-of-00003.safetensors",
"model.layers.40.self_attn.q_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.40.self_attn.v_proj.bias": "model-00003-of-00003.safetensors",
"model.layers.40.self_attn.v_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.41.input_layernorm.weight": "model-00003-of-00003.safetensors",
"model.layers.41.mlp.down_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.41.mlp.gate_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.41.mlp.up_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.41.post_attention_layernorm.weight": "model-00003-of-00003.safetensors",
"model.layers.41.self_attn.k_proj.bias": "model-00003-of-00003.safetensors",
"model.layers.41.self_attn.k_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.41.self_attn.o_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.41.self_attn.q_proj.bias": "model-00003-of-00003.safetensors",
"model.layers.41.self_attn.q_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.41.self_attn.v_proj.bias": "model-00003-of-00003.safetensors",
"model.layers.41.self_attn.v_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.42.input_layernorm.weight": "model-00003-of-00003.safetensors",
"model.layers.42.mlp.down_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.42.mlp.gate_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.42.mlp.up_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.42.post_attention_layernorm.weight": "model-00003-of-00003.safetensors",
"model.layers.42.self_attn.k_proj.bias": "model-00003-of-00003.safetensors",
"model.layers.42.self_attn.k_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.42.self_attn.o_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.42.self_attn.q_proj.bias": "model-00003-of-00003.safetensors",
"model.layers.42.self_attn.q_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.42.self_attn.v_proj.bias": "model-00003-of-00003.safetensors",
"model.layers.42.self_attn.v_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.43.input_layernorm.weight": "model-00003-of-00003.safetensors",
"model.layers.43.mlp.down_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.43.mlp.gate_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.43.mlp.up_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.43.post_attention_layernorm.weight": "model-00003-of-00003.safetensors",
"model.layers.43.self_attn.k_proj.bias": "model-00003-of-00003.safetensors",
"model.layers.43.self_attn.k_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.43.self_attn.o_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.43.self_attn.q_proj.bias": "model-00003-of-00003.safetensors",
"model.layers.43.self_attn.q_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.43.self_attn.v_proj.bias": "model-00003-of-00003.safetensors",
"model.layers.43.self_attn.v_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.44.input_layernorm.weight": "model-00003-of-00003.safetensors",
"model.layers.44.mlp.down_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.44.mlp.gate_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.44.mlp.up_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.44.post_attention_layernorm.weight": "model-00003-of-00003.safetensors",
"model.layers.44.self_attn.k_proj.bias": "model-00003-of-00003.safetensors",
"model.layers.44.self_attn.k_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.44.self_attn.o_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.44.self_attn.q_proj.bias": "model-00003-of-00003.safetensors",
"model.layers.44.self_attn.q_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.44.self_attn.v_proj.bias": "model-00003-of-00003.safetensors",
"model.layers.44.self_attn.v_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.45.input_layernorm.weight": "model-00003-of-00003.safetensors",
"model.layers.45.mlp.down_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.45.mlp.gate_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.45.mlp.up_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.45.post_attention_layernorm.weight": "model-00003-of-00003.safetensors",
"model.layers.45.self_attn.k_proj.bias": "model-00003-of-00003.safetensors",
"model.layers.45.self_attn.k_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.45.self_attn.o_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.45.self_attn.q_proj.bias": "model-00003-of-00003.safetensors",
"model.layers.45.self_attn.q_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.45.self_attn.v_proj.bias": "model-00003-of-00003.safetensors",
"model.layers.45.self_attn.v_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.46.input_layernorm.weight": "model-00003-of-00003.safetensors",
"model.layers.46.mlp.down_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.46.mlp.gate_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.46.mlp.up_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.46.post_attention_layernorm.weight": "model-00003-of-00003.safetensors",
"model.layers.46.self_attn.k_proj.bias": "model-00003-of-00003.safetensors",
"model.layers.46.self_attn.k_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.46.self_attn.o_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.46.self_attn.q_proj.bias": "model-00003-of-00003.safetensors",
"model.layers.46.self_attn.q_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.46.self_attn.v_proj.bias": "model-00003-of-00003.safetensors",
"model.layers.46.self_attn.v_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.47.input_layernorm.weight": "model-00003-of-00003.safetensors",
"model.layers.47.mlp.down_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.47.mlp.gate_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.47.mlp.up_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.47.post_attention_layernorm.weight": "model-00003-of-00003.safetensors",
"model.layers.47.self_attn.k_proj.bias": "model-00003-of-00003.safetensors",
"model.layers.47.self_attn.k_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.47.self_attn.o_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.47.self_attn.q_proj.bias": "model-00003-of-00003.safetensors",
"model.layers.47.self_attn.q_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.47.self_attn.v_proj.bias": "model-00003-of-00003.safetensors",
"model.layers.47.self_attn.v_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.5.input_layernorm.weight": "model-00001-of-00003.safetensors",
"model.layers.5.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.5.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.5.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.5.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
"model.layers.5.self_attn.k_proj.bias": "model-00001-of-00003.safetensors",
"model.layers.5.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.5.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.5.self_attn.q_proj.bias": "model-00001-of-00003.safetensors",
"model.layers.5.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.5.self_attn.v_proj.bias": "model-00001-of-00003.safetensors",
"model.layers.5.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.6.input_layernorm.weight": "model-00001-of-00003.safetensors",
"model.layers.6.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.6.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.6.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.6.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
"model.layers.6.self_attn.k_proj.bias": "model-00001-of-00003.safetensors",
"model.layers.6.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.6.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.6.self_attn.q_proj.bias": "model-00001-of-00003.safetensors",
"model.layers.6.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.6.self_attn.v_proj.bias": "model-00001-of-00003.safetensors",
"model.layers.6.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.7.input_layernorm.weight": "model-00001-of-00003.safetensors",
"model.layers.7.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.7.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.7.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.7.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
"model.layers.7.self_attn.k_proj.bias": "model-00001-of-00003.safetensors",
"model.layers.7.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.7.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.7.self_attn.q_proj.bias": "model-00001-of-00003.safetensors",
"model.layers.7.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.7.self_attn.v_proj.bias": "model-00001-of-00003.safetensors",
"model.layers.7.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.8.input_layernorm.weight": "model-00001-of-00003.safetensors",
"model.layers.8.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.8.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.8.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.8.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
"model.layers.8.self_attn.k_proj.bias": "model-00001-of-00003.safetensors",
"model.layers.8.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.8.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.8.self_attn.q_proj.bias": "model-00001-of-00003.safetensors",
"model.layers.8.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.8.self_attn.v_proj.bias": "model-00001-of-00003.safetensors",
"model.layers.8.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.9.input_layernorm.weight": "model-00001-of-00003.safetensors",
"model.layers.9.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.9.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.9.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.9.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
"model.layers.9.self_attn.k_proj.bias": "model-00001-of-00003.safetensors",
"model.layers.9.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.9.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.9.self_attn.q_proj.bias": "model-00001-of-00003.safetensors",
"model.layers.9.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.9.self_attn.v_proj.bias": "model-00001-of-00003.safetensors",
"model.layers.9.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
"model.norm.weight": "model-00003-of-00003.safetensors"
}
}
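The `weight_map` in the index above maps each parameter name to the shard file that stores it; a loader typically inverts that mapping so each shard is opened once. A minimal plain-Python sketch of that inversion, using a toy subset of the map above (the real `safetensors`/Transformers loaders do this internally — this is illustration only):

```python
from collections import defaultdict

# Toy subset of the "weight_map" above: parameter name -> shard file.
# Note layer 33 straddles the shard boundary: its attention projections
# live in shard 2 while its norms and MLP live in shard 3.
weight_map = {
    "model.layers.32.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.33.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.33.input_layernorm.weight": "model-00003-of-00003.safetensors",
    "model.norm.weight": "model-00003-of-00003.safetensors",
}

def group_by_shard(weight_map):
    """Invert the index: shard file -> sorted list of parameter names."""
    shards = defaultdict(list)
    for name, shard in weight_map.items():
        shards[shard].append(name)
    return {shard: sorted(names) for shard, names in shards.items()}

shards = group_by_shard(weight_map)
```

A loader would then iterate `shards.items()`, open each file once, and read only the tensors listed for it.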

BIN
results.png Normal file

Binary file not shown.

Size: 108 KiB

31
special_tokens_map.json Normal file
View File

@@ -0,0 +1,31 @@
{
"additional_special_tokens": [
"<|im_start|>",
"<|im_end|>",
"<|object_ref_start|>",
"<|object_ref_end|>",
"<|box_start|>",
"<|box_end|>",
"<|quad_start|>",
"<|quad_end|>",
"<|vision_start|>",
"<|vision_end|>",
"<|vision_pad|>",
"<|image_pad|>",
"<|video_pad|>"
],
"eos_token": {
"content": "<|im_end|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
},
"pad_token": {
"content": "<|endoftext|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
}
}
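In the map above the end-of-turn token (`<|im_end|>`) and the padding token (`<|endoftext|>`) are distinct, so generation can stop on `<|im_end|>` while batches are padded with `<|endoftext|>` without padding ever triggering early stopping. A minimal sketch of reading such a map (entries may be plain strings or AddedToken-style dicts, as here):

```python
# Inlined subset of special_tokens_map.json above.
special_tokens_map = {
    "eos_token": {"content": "<|im_end|>"},
    "pad_token": {"content": "<|endoftext|>"},
}

def token_content(entry):
    """Entries may be bare strings or AddedToken-style dicts with 'content'."""
    return entry if isinstance(entry, str) else entry["content"]

eos = token_content(special_tokens_map["eos_token"])
pad = token_content(special_tokens_map["pad_token"])
assert eos != pad  # padding must never be mistaken for end-of-turn
```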

3
tokenizer.json Normal file
View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:9c5ae00e602b8860cbd784ba82a8aa14e8feecec692e7076590d014d7b7fdafa
size 11421896

208
tokenizer_config.json Normal file
View File

@@ -0,0 +1,208 @@
{
"add_bos_token": false,
"add_prefix_space": false,
"added_tokens_decoder": {
"151643": {
"content": "<|endoftext|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"151644": {
"content": "<|im_start|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"151645": {
"content": "<|im_end|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"151646": {
"content": "<|object_ref_start|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"151647": {
"content": "<|object_ref_end|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"151648": {
"content": "<|box_start|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"151649": {
"content": "<|box_end|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"151650": {
"content": "<|quad_start|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"151651": {
"content": "<|quad_end|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"151652": {
"content": "<|vision_start|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"151653": {
"content": "<|vision_end|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"151654": {
"content": "<|vision_pad|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"151655": {
"content": "<|image_pad|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"151656": {
"content": "<|video_pad|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"151657": {
"content": "<tool_call>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": false
},
"151658": {
"content": "</tool_call>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": false
},
"151659": {
"content": "<|fim_prefix|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": false
},
"151660": {
"content": "<|fim_middle|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": false
},
"151661": {
"content": "<|fim_suffix|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": false
},
"151662": {
"content": "<|fim_pad|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": false
},
"151663": {
"content": "<|repo_name|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": false
},
"151664": {
"content": "<|file_sep|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": false
}
},
"additional_special_tokens": [
"<|im_start|>",
"<|im_end|>",
"<|object_ref_start|>",
"<|object_ref_end|>",
"<|box_start|>",
"<|box_end|>",
"<|quad_start|>",
"<|quad_end|>",
"<|vision_start|>",
"<|vision_end|>",
"<|vision_pad|>",
"<|image_pad|>",
"<|video_pad|>"
],
"bos_token": null,
"chat_template": "{%- if tools %}\n {{- '<|im_start|>system\\n' }}\n {%- if messages[0]['role'] == 'system' %}\n {{- messages[0]['content'] }}\n {%- else %}\n {{- 'You are Qwen, created by Alibaba Cloud. You are a helpful assistant.' }}\n {%- endif %}\n {{- \"\\n\\n# Tools\\n\\nYou may call one or more functions to assist with the user query.\\n\\nYou are provided with function signatures within <tools></tools> XML tags:\\n<tools>\" }}\n {%- for tool in tools %}\n {{- \"\\n\" }}\n {{- tool | tojson }}\n {%- endfor %}\n {{- \"\\n</tools>\\n\\nFor each function call, return a json object with function name and arguments within <tool_call></tool_call> XML tags:\\n<tool_call>\\n{\\\"name\\\": <function-name>, \\\"arguments\\\": <args-json-object>}\\n</tool_call><|im_end|>\\n\" }}\n{%- else %}\n {%- if messages[0]['role'] == 'system' %}\n {{- '<|im_start|>system\\n' + messages[0]['content'] + '<|im_end|>\\n' }}\n {%- else %}\n {{- '<|im_start|>system\\nYou are Qwen, created by Alibaba Cloud. 
You are a helpful assistant.<|im_end|>\\n' }}\n {%- endif %}\n{%- endif %}\n{%- for message in messages %}\n {%- if (message.role == \"user\") or (message.role == \"system\" and not loop.first) or (message.role == \"assistant\" and not message.tool_calls) %}\n {{- '<|im_start|>' + message.role + '\\n' + message.content + '<|im_end|>' + '\\n' }}\n {%- elif message.role == \"assistant\" %}\n {{- '<|im_start|>' + message.role }}\n {%- if message.content %}\n {{- '\\n' + message.content }}\n {%- endif %}\n {%- for tool_call in message.tool_calls %}\n {%- if tool_call.function is defined %}\n {%- set tool_call = tool_call.function %}\n {%- endif %}\n {{- '\\n<tool_call>\\n{\"name\": \"' }}\n {{- tool_call.name }}\n {{- '\", \"arguments\": ' }}\n {{- tool_call.arguments | tojson }}\n {{- '}\\n</tool_call>' }}\n {%- endfor %}\n {{- '<|im_end|>\\n' }}\n {%- elif message.role == \"tool\" %}\n {%- if (loop.index0 == 0) or (messages[loop.index0 - 1].role != \"tool\") %}\n {{- '<|im_start|>user' }}\n {%- endif %}\n {{- '\\n<tool_response>\\n' }}\n {{- message.content }}\n {{- '\\n</tool_response>' }}\n {%- if loop.last or (messages[loop.index0 + 1].role != \"tool\") %}\n {{- '<|im_end|>\\n' }}\n {%- endif %}\n {%- endif %}\n{%- endfor %}\n{%- if add_generation_prompt %}\n {{- '<|im_start|>assistant\\n' }}\n{%- endif %}\n",
"clean_up_tokenization_spaces": false,
"eos_token": "<|im_end|>",
"errors": "replace",
"extra_special_tokens": {},
"model_max_length": 131072,
"pad_token": "<|endoftext|>",
"split_special_tokens": false,
"tokenizer_class": "Qwen2Tokenizer",
"unk_token": null
}
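The `chat_template` above renders a message list into ChatML using the `<|im_start|>`/`<|im_end|>` tokens registered earlier in this config. A deliberately simplified plain-Python sketch of its no-tools path — not a substitute for `tokenizer.apply_chat_template`, which evaluates the real Jinja template:

```python
DEFAULT_SYSTEM = "You are Qwen, created by Alibaba Cloud. You are a helpful assistant."

def render_chatml(messages, add_generation_prompt=True):
    """Mimic the no-tools branch of the chat_template: inject the default
    system prompt when absent, wrap each turn in <|im_start|>/<|im_end|>,
    and optionally open an assistant turn for generation."""
    out = []
    if not messages or messages[0]["role"] != "system":
        out.append(f"<|im_start|>system\n{DEFAULT_SYSTEM}<|im_end|>\n")
    for m in messages:
        out.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n")
    if add_generation_prompt:
        out.append("<|im_start|>assistant\n")
    return "".join(out)

prompt = render_chatml([{"role": "user", "content": "Hi"}])
```

Generation then stops when the model emits the `eos_token` `<|im_end|>` (id 151645), closing the assistant turn that the prompt leaves open.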

1
vocab.json Normal file

File diff suppressed because one or more lines are too long