Initialize project; model provided by the ModelHub XC community
Model: nv-community/AceMath-RL-Nemotron-7B Source: Original Platform
This commit is contained in:
49
.gitattributes
vendored
Normal file
@@ -0,0 +1,49 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bin.* filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zstandard filter=lfs diff=lfs merge=lfs -text
*.tfevents* filter=lfs diff=lfs merge=lfs -text
*.db* filter=lfs diff=lfs merge=lfs -text
*.ark* filter=lfs diff=lfs merge=lfs -text
**/*ckpt*data* filter=lfs diff=lfs merge=lfs -text
**/*ckpt*.meta filter=lfs diff=lfs merge=lfs -text
**/*ckpt*.index filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.gguf* filter=lfs diff=lfs merge=lfs -text
*.ggml filter=lfs diff=lfs merge=lfs -text
*.llamafile* filter=lfs diff=lfs merge=lfs -text
*.pt2 filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text

tokenizer.json filter=lfs diff=lfs merge=lfs -text
102
README.md
Normal file
@@ -0,0 +1,102 @@
---
library_name: transformers
license: other
license_name: nvidia-open-model-license
license_link: >-
  https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license/
pipeline_tag: text-generation
language:
- en
tags:
- nvidia
- reasoning
- math
- reinforcement learning
- pytorch
---

## Introduction

![AIME 2024 accuracy](img/aime24_accuracy.png)

We’re thrilled to introduce AceMath-RL-Nemotron-7B, a math reasoning model trained entirely through reinforcement learning (RL), starting from DeepSeek-R1-Distill-Qwen-7B. It delivers impressive results, achieving 69.0% Pass@1 accuracy on AIME 2024 (+13.5% gain) and 53.6% Pass@1 accuracy on AIME 2025 (+14.4% gain).

Interestingly, this math-focused RL training also improves the model’s coding accuracy on LiveCodeBench, reaching 44.4% Pass@1 (+6.8% gain), demonstrating the generalization capabilities of scaled RL training.

We share our training recipe, training logs, and data curation details in our [BLOG](https://research.nvidia.com/labs/adlr/acemath_rl/).

## Results

We evaluate our model against competitive reasoning models of comparable size on AIME 2024, AIME 2025, and GPQA.

| **Model** | **AIME 2024<br>(AVG@64)** | **AIME 2025<br>(AVG@64)** | **GPQA-Diamond<br>(AVG@8)** |
| :---: | :---: | :---: | :---: |
| DeepSeek-R1-Distill-Qwen-7B | 55.5 | 39.2 | 49.1 |
| Light-R1-7B-DS | 59.1 | 44.3 | 49.4 |
| AReaL-boba-RL-7B | 61.9 | 48.3 | 47.6 |
| Llama-Nemotron-Nano-v1 (8B) | 63.8 | 47.1 | 54.1 |
| Skywork-OR1-Math-7B-Preview | 69.8 | 52.3 | - |
| [AceMath-RL-Nemotron-7B 🤗](https://huggingface.co/nvidia/AceMath-RL-Nemotron-7B) | 69.0 | 53.6 | 52.1 |

We also evaluate our model on further math benchmarks and on LiveCodeBench for a more comprehensive picture.

| **Model** | **GSM8K<br>(AVG@1)** | **MATH500<br>(AVG@4)** | **Minerva Math<br>(AVG@1)** | **GaoKao2023En<br>(AVG@1)** | **Olympiad Bench<br>(AVG@1)** | **College Math<br>(AVG@1)** | **ACM23<br>(AVG@5)** | **LiveCodeBench<br>(AVG@8)** |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| DeepSeek-R1-Distill-Qwen-7B | 92.7 | 92.8 | 57.4 | 82.3 | 58.2 | 56.7 | 89.0 | 37.6 |
| [AceMath-RL-Nemotron-7B 🤗](https://huggingface.co/nvidia/AceMath-RL-Nemotron-7B) | 93.3 | 94.1 | 56.6 | 85.5 | 66.7 | 59.8 | 94.0 | 44.4 |

## How to use

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = 'nvidia/AceMath-RL-Nemotron-7B'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype="auto", device_map="auto")

prompt = "Jen enters a lottery by picking $4$ distinct numbers from $S=\\{1,2,3,\\cdots,9,10\\}.$ $4$ numbers are randomly chosen from $S.$ She wins a prize if at least two of her numbers were $2$ of the randomly chosen numbers, and wins the grand prize if all four of her numbers were the randomly chosen numbers. The probability of her winning the grand prize given that she won a prize is $\\tfrac{m}{n}$ where $m$ and $n$ are relatively prime positive integers. Find $m+n$."
messages = [{"role": "user", "content": prompt}]

text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to("cuda")

# Reasoning traces can be long, so budget generously for new tokens.
# do_sample=True is required for temperature/top_p to take effect.
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=32768,
    do_sample=True,
    temperature=0.6,
    top_p=0.95
)
# Keep only the newly generated tokens, dropping the prompt.
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
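
The final answer lands inside `\boxed{}` after the closing `</think>` tag (see the usage recommendations below), so a small post-processing step is typically needed. A minimal sketch continuing from the snippet above; `extract_boxed_answer` is an illustrative helper, not part of any library API:

```python
import re

def extract_boxed_answer(text: str) -> str | None:
    # Illustrative helper: return the contents of the last \boxed{...},
    # tolerating one level of nested braces.
    matches = re.findall(r"\\boxed\{((?:[^{}]|\{[^{}]*\})*)\}", text)
    return matches[-1] if matches else None

# Keep only the part after the reasoning trace, then pull out the answer.
final_part = response.split("</think>")[-1]
print(extract_boxed_answer(final_part))  # "116" for the lottery problem above
```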

## Usage Recommendations

1. Don't include a system prompt; instead, place all instructions directly in the user prompt.
2. We recommend using the following prompt format for math questions (a sketch of this construction follows the list):<br>*<|begin▁of▁sentence|><|User|>{math_question}\nPlease reason step by step, and put your final answer within \boxed{}.<|Assistant|>\<think\>\n*
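
A minimal sketch of building this raw prompt string, e.g. for completion-style serving stacks that accept plain text rather than chat messages; the example question is hypothetical, and the special-token strings are copied from this repo's tokenizer_config.json:

```python
# Hypothetical example question; substitute your own.
math_question = "Compute the remainder when 2^100 is divided by 7."

prompt = (
    "<|begin▁of▁sentence|><|User|>"
    + math_question
    + "\nPlease reason step by step, and put your final answer within \\boxed{}."
    + "<|Assistant|><think>\n"
)

# If tokenizing this string yourself, disable automatic BOS insertion,
# since the BOS token is already in the string (add_bos_token is true):
# input_ids = tokenizer(prompt, add_special_tokens=False).input_ids
```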

## Correspondence to
Yang Chen (yachen@nvidia.com),<br>Zihan Liu (zihanl@nvidia.com),<br>Chankyu Lee (chankyul@nvidia.com),<br>Wei Ping (wping@nvidia.com)


## License
Your use of this model is governed by the [NVIDIA Open Model License](https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license/).


## Citation
```
@article{acemath2024,
  title={AceMath: Advancing Frontier Math Reasoning with Post-Training and Reward Modeling},
  author={Liu, Zihan and Chen, Yang and Shoeybi, Mohammad and Catanzaro, Bryan and Ping, Wei},
  journal={arXiv preprint},
  year={2024}
}
```
30
config.json
Normal file
@@ -0,0 +1,30 @@
{
  "architectures": [
    "Qwen2ForCausalLM"
  ],
  "attention_dropout": 0.0,
  "bos_token_id": 151646,
  "eos_token_id": 151643,
  "hidden_act": "silu",
  "hidden_size": 3584,
  "initializer_range": 0.02,
  "intermediate_size": 18944,
  "max_position_embeddings": 131072,
  "max_window_layers": 28,
  "model_type": "qwen2",
  "num_attention_heads": 28,
  "num_hidden_layers": 28,
  "num_key_value_heads": 4,
  "pad_token_id": 151643,
  "rms_norm_eps": 1e-06,
  "rope_scaling": null,
  "rope_theta": 10000,
  "sliding_window": 4096,
  "tie_word_embeddings": false,
  "torch_dtype": "bfloat16",
  "transformers_version": "4.49.0",
  "use_cache": true,
  "use_mrope": false,
  "use_sliding_window": false,
  "vocab_size": 152064
}
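
As a quick sanity check, the attention geometry implied by this config can be derived directly from the values above (a sketch, not an official utility; the arithmetic just restates the config):

```python
# Derived attention geometry from the config values above.
hidden_size = 3584
num_attention_heads = 28
num_key_value_heads = 4

head_dim = hidden_size // num_attention_heads           # 3584 / 28 = 128
kv_groups = num_attention_heads // num_key_value_heads  # 7 query heads per KV head (GQA)
kv_proj_dim = num_key_value_heads * head_dim            # 4 * 128 = 512

print(head_dim, kv_groups, kv_proj_dim)  # 128 7 512
```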
1
configuration.json
Normal file
@@ -0,0 +1 @@
{"framework": "pytorch", "task": "text-generation", "allow_remote": true}
7
generation_config.json
Normal file
@@ -0,0 +1,7 @@
{
  "_from_model_config": true,
  "bos_token_id": 151646,
  "eos_token_id": 151643,
  "pad_token_id": 151643,
  "transformers_version": "4.49.0"
}
BIN
img/aime24_accuracy.png
Normal file
Binary file not shown.
Size: 90 KiB
3
model-00001-of-00004.safetensors
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:86739f30b28a218c8d2663e995a4b9549c777fa45268fa206fa0e9b9edc9a693
size 4173009960
3
model-00002-of-00004.safetensors
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f3b4ef4260b9498c75f0f444ca4000d315b9579ca7429144561667c0690fa37f
size 4932679304
3
model-00003-of-00004.safetensors
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f0394f62d24d0786510b40b06399b806a4be575ff60c7ff9260d3bc770a31fcc
size 4998822648
3
model-00004-of-00004.safetensors
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:6529ca28c527668b3fb154720dc6472c5f21cf2fc8a0b2c7d2f4b8d7f2437a66
size 1126759856
346
model.safetensors.index.json
Normal file
@@ -0,0 +1,346 @@
{
  "metadata": {
    "total_size": 15231233024
  },
  "weight_map": {
    "lm_head.weight": "model-00002-of-00004.safetensors",
    "model.embed_tokens.weight": "model-00002-of-00004.safetensors",
    "model.layers.0.input_layernorm.weight": "model-00004-of-00004.safetensors",
    "model.layers.0.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.0.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.0.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.0.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
    "model.layers.0.self_attn.k_proj.bias": "model-00001-of-00004.safetensors",
    "model.layers.0.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.0.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.0.self_attn.q_proj.bias": "model-00001-of-00004.safetensors",
    "model.layers.0.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.0.self_attn.v_proj.bias": "model-00003-of-00004.safetensors",
    "model.layers.0.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.1.input_layernorm.weight": "model-00003-of-00004.safetensors",
    "model.layers.1.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.1.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.1.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.1.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
    "model.layers.1.self_attn.k_proj.bias": "model-00002-of-00004.safetensors",
    "model.layers.1.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.1.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.1.self_attn.q_proj.bias": "model-00001-of-00004.safetensors",
    "model.layers.1.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.1.self_attn.v_proj.bias": "model-00002-of-00004.safetensors",
    "model.layers.1.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.10.input_layernorm.weight": "model-00002-of-00004.safetensors",
    "model.layers.10.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.10.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.10.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.10.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
    "model.layers.10.self_attn.k_proj.bias": "model-00001-of-00004.safetensors",
    "model.layers.10.self_attn.k_proj.weight": "model-00004-of-00004.safetensors",
    "model.layers.10.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.10.self_attn.q_proj.bias": "model-00003-of-00004.safetensors",
    "model.layers.10.self_attn.q_proj.weight": "model-00004-of-00004.safetensors",
    "model.layers.10.self_attn.v_proj.bias": "model-00003-of-00004.safetensors",
    "model.layers.10.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.11.input_layernorm.weight": "model-00001-of-00004.safetensors",
    "model.layers.11.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.11.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.11.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.11.post_attention_layernorm.weight": "model-00004-of-00004.safetensors",
    "model.layers.11.self_attn.k_proj.bias": "model-00003-of-00004.safetensors",
    "model.layers.11.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.11.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.11.self_attn.q_proj.bias": "model-00002-of-00004.safetensors",
    "model.layers.11.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.11.self_attn.v_proj.bias": "model-00003-of-00004.safetensors",
    "model.layers.11.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.12.input_layernorm.weight": "model-00002-of-00004.safetensors",
    "model.layers.12.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.12.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.12.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.12.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
    "model.layers.12.self_attn.k_proj.bias": "model-00002-of-00004.safetensors",
    "model.layers.12.self_attn.k_proj.weight": "model-00004-of-00004.safetensors",
    "model.layers.12.self_attn.o_proj.weight": "model-00004-of-00004.safetensors",
    "model.layers.12.self_attn.q_proj.bias": "model-00001-of-00004.safetensors",
    "model.layers.12.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.12.self_attn.v_proj.bias": "model-00003-of-00004.safetensors",
    "model.layers.12.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.13.input_layernorm.weight": "model-00003-of-00004.safetensors",
    "model.layers.13.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.13.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.13.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.13.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
    "model.layers.13.self_attn.k_proj.bias": "model-00004-of-00004.safetensors",
    "model.layers.13.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.13.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.13.self_attn.q_proj.bias": "model-00002-of-00004.safetensors",
    "model.layers.13.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.13.self_attn.v_proj.bias": "model-00004-of-00004.safetensors",
    "model.layers.13.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.14.input_layernorm.weight": "model-00001-of-00004.safetensors",
    "model.layers.14.mlp.down_proj.weight": "model-00004-of-00004.safetensors",
    "model.layers.14.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.14.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.14.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
    "model.layers.14.self_attn.k_proj.bias": "model-00002-of-00004.safetensors",
    "model.layers.14.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.14.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.14.self_attn.q_proj.bias": "model-00003-of-00004.safetensors",
    "model.layers.14.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.14.self_attn.v_proj.bias": "model-00001-of-00004.safetensors",
    "model.layers.14.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.15.input_layernorm.weight": "model-00003-of-00004.safetensors",
    "model.layers.15.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.15.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.15.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.15.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
    "model.layers.15.self_attn.k_proj.bias": "model-00003-of-00004.safetensors",
    "model.layers.15.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.15.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.15.self_attn.q_proj.bias": "model-00002-of-00004.safetensors",
    "model.layers.15.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.15.self_attn.v_proj.bias": "model-00003-of-00004.safetensors",
    "model.layers.15.self_attn.v_proj.weight": "model-00004-of-00004.safetensors",
    "model.layers.16.input_layernorm.weight": "model-00003-of-00004.safetensors",
    "model.layers.16.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.16.mlp.gate_proj.weight": "model-00004-of-00004.safetensors",
    "model.layers.16.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.16.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
    "model.layers.16.self_attn.k_proj.bias": "model-00001-of-00004.safetensors",
    "model.layers.16.self_attn.k_proj.weight": "model-00004-of-00004.safetensors",
    "model.layers.16.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.16.self_attn.q_proj.bias": "model-00004-of-00004.safetensors",
    "model.layers.16.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.16.self_attn.v_proj.bias": "model-00003-of-00004.safetensors",
    "model.layers.16.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.17.input_layernorm.weight": "model-00002-of-00004.safetensors",
    "model.layers.17.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.17.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.17.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.17.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
    "model.layers.17.self_attn.k_proj.bias": "model-00003-of-00004.safetensors",
    "model.layers.17.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.17.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.17.self_attn.q_proj.bias": "model-00001-of-00004.safetensors",
    "model.layers.17.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.17.self_attn.v_proj.bias": "model-00003-of-00004.safetensors",
    "model.layers.17.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.18.input_layernorm.weight": "model-00001-of-00004.safetensors",
    "model.layers.18.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.18.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.18.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.18.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
    "model.layers.18.self_attn.k_proj.bias": "model-00001-of-00004.safetensors",
    "model.layers.18.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.18.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.18.self_attn.q_proj.bias": "model-00001-of-00004.safetensors",
    "model.layers.18.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.18.self_attn.v_proj.bias": "model-00003-of-00004.safetensors",
    "model.layers.18.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.19.input_layernorm.weight": "model-00003-of-00004.safetensors",
    "model.layers.19.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.19.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.19.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.19.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
    "model.layers.19.self_attn.k_proj.bias": "model-00003-of-00004.safetensors",
    "model.layers.19.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.19.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.19.self_attn.q_proj.bias": "model-00003-of-00004.safetensors",
    "model.layers.19.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.19.self_attn.v_proj.bias": "model-00002-of-00004.safetensors",
    "model.layers.19.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.2.input_layernorm.weight": "model-00002-of-00004.safetensors",
    "model.layers.2.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.2.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.2.mlp.up_proj.weight": "model-00004-of-00004.safetensors",
    "model.layers.2.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
    "model.layers.2.self_attn.k_proj.bias": "model-00002-of-00004.safetensors",
    "model.layers.2.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.2.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.2.self_attn.q_proj.bias": "model-00003-of-00004.safetensors",
    "model.layers.2.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.2.self_attn.v_proj.bias": "model-00002-of-00004.safetensors",
    "model.layers.2.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.20.input_layernorm.weight": "model-00003-of-00004.safetensors",
    "model.layers.20.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.20.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.20.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.20.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
    "model.layers.20.self_attn.k_proj.bias": "model-00002-of-00004.safetensors",
    "model.layers.20.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.20.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.20.self_attn.q_proj.bias": "model-00001-of-00004.safetensors",
    "model.layers.20.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.20.self_attn.v_proj.bias": "model-00001-of-00004.safetensors",
    "model.layers.20.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.21.input_layernorm.weight": "model-00002-of-00004.safetensors",
    "model.layers.21.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.21.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.21.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.21.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
    "model.layers.21.self_attn.k_proj.bias": "model-00003-of-00004.safetensors",
    "model.layers.21.self_attn.k_proj.weight": "model-00004-of-00004.safetensors",
    "model.layers.21.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.21.self_attn.q_proj.bias": "model-00001-of-00004.safetensors",
    "model.layers.21.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.21.self_attn.v_proj.bias": "model-00003-of-00004.safetensors",
    "model.layers.21.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.22.input_layernorm.weight": "model-00003-of-00004.safetensors",
    "model.layers.22.mlp.down_proj.weight": "model-00004-of-00004.safetensors",
    "model.layers.22.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.22.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.22.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
    "model.layers.22.self_attn.k_proj.bias": "model-00003-of-00004.safetensors",
    "model.layers.22.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.22.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.22.self_attn.q_proj.bias": "model-00001-of-00004.safetensors",
    "model.layers.22.self_attn.q_proj.weight": "model-00004-of-00004.safetensors",
    "model.layers.22.self_attn.v_proj.bias": "model-00004-of-00004.safetensors",
    "model.layers.22.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.23.input_layernorm.weight": "model-00003-of-00004.safetensors",
    "model.layers.23.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.23.mlp.gate_proj.weight": "model-00004-of-00004.safetensors",
    "model.layers.23.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.23.post_attention_layernorm.weight": "model-00004-of-00004.safetensors",
    "model.layers.23.self_attn.k_proj.bias": "model-00002-of-00004.safetensors",
    "model.layers.23.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.23.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.23.self_attn.q_proj.bias": "model-00003-of-00004.safetensors",
    "model.layers.23.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.23.self_attn.v_proj.bias": "model-00001-of-00004.safetensors",
    "model.layers.23.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.24.input_layernorm.weight": "model-00002-of-00004.safetensors",
    "model.layers.24.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.24.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.24.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.24.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
    "model.layers.24.self_attn.k_proj.bias": "model-00003-of-00004.safetensors",
    "model.layers.24.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.24.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.24.self_attn.q_proj.bias": "model-00002-of-00004.safetensors",
    "model.layers.24.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.24.self_attn.v_proj.bias": "model-00003-of-00004.safetensors",
    "model.layers.24.self_attn.v_proj.weight": "model-00004-of-00004.safetensors",
    "model.layers.25.input_layernorm.weight": "model-00001-of-00004.safetensors",
    "model.layers.25.mlp.down_proj.weight": "model-00004-of-00004.safetensors",
    "model.layers.25.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.25.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.25.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
    "model.layers.25.self_attn.k_proj.bias": "model-00003-of-00004.safetensors",
    "model.layers.25.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.25.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.25.self_attn.q_proj.bias": "model-00001-of-00004.safetensors",
    "model.layers.25.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.25.self_attn.v_proj.bias": "model-00001-of-00004.safetensors",
    "model.layers.25.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.26.input_layernorm.weight": "model-00003-of-00004.safetensors",
    "model.layers.26.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.26.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.26.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.26.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
    "model.layers.26.self_attn.k_proj.bias": "model-00003-of-00004.safetensors",
    "model.layers.26.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.26.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.26.self_attn.q_proj.bias": "model-00004-of-00004.safetensors",
    "model.layers.26.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.26.self_attn.v_proj.bias": "model-00002-of-00004.safetensors",
    "model.layers.26.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.27.input_layernorm.weight": "model-00002-of-00004.safetensors",
    "model.layers.27.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.27.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.27.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.27.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
    "model.layers.27.self_attn.k_proj.bias": "model-00001-of-00004.safetensors",
    "model.layers.27.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.27.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.27.self_attn.q_proj.bias": "model-00003-of-00004.safetensors",
    "model.layers.27.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.27.self_attn.v_proj.bias": "model-00003-of-00004.safetensors",
    "model.layers.27.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.3.input_layernorm.weight": "model-00003-of-00004.safetensors",
    "model.layers.3.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.3.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.3.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.3.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
    "model.layers.3.self_attn.k_proj.bias": "model-00003-of-00004.safetensors",
    "model.layers.3.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.3.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.3.self_attn.q_proj.bias": "model-00002-of-00004.safetensors",
    "model.layers.3.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.3.self_attn.v_proj.bias": "model-00003-of-00004.safetensors",
    "model.layers.3.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.4.input_layernorm.weight": "model-00001-of-00004.safetensors",
    "model.layers.4.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.4.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.4.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.4.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
    "model.layers.4.self_attn.k_proj.bias": "model-00003-of-00004.safetensors",
    "model.layers.4.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.4.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.4.self_attn.q_proj.bias": "model-00001-of-00004.safetensors",
    "model.layers.4.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.4.self_attn.v_proj.bias": "model-00003-of-00004.safetensors",
    "model.layers.4.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.5.input_layernorm.weight": "model-00001-of-00004.safetensors",
    "model.layers.5.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.5.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.5.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.5.post_attention_layernorm.weight": "model-00004-of-00004.safetensors",
    "model.layers.5.self_attn.k_proj.bias": "model-00003-of-00004.safetensors",
    "model.layers.5.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.5.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.5.self_attn.q_proj.bias": "model-00002-of-00004.safetensors",
    "model.layers.5.self_attn.q_proj.weight": "model-00004-of-00004.safetensors",
    "model.layers.5.self_attn.v_proj.bias": "model-00003-of-00004.safetensors",
    "model.layers.5.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.6.input_layernorm.weight": "model-00004-of-00004.safetensors",
    "model.layers.6.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.6.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.6.mlp.up_proj.weight": "model-00004-of-00004.safetensors",
    "model.layers.6.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
    "model.layers.6.self_attn.k_proj.bias": "model-00004-of-00004.safetensors",
    "model.layers.6.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.6.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.6.self_attn.q_proj.bias": "model-00002-of-00004.safetensors",
    "model.layers.6.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.6.self_attn.v_proj.bias": "model-00003-of-00004.safetensors",
    "model.layers.6.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.7.input_layernorm.weight": "model-00003-of-00004.safetensors",
    "model.layers.7.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.7.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.7.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.7.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
    "model.layers.7.self_attn.k_proj.bias": "model-00003-of-00004.safetensors",
    "model.layers.7.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.7.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.7.self_attn.q_proj.bias": "model-00001-of-00004.safetensors",
    "model.layers.7.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.7.self_attn.v_proj.bias": "model-00003-of-00004.safetensors",
    "model.layers.7.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.8.input_layernorm.weight": "model-00001-of-00004.safetensors",
    "model.layers.8.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.8.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.8.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.8.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
    "model.layers.8.self_attn.k_proj.bias": "model-00003-of-00004.safetensors",
    "model.layers.8.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.8.self_attn.o_proj.weight": "model-00004-of-00004.safetensors",
    "model.layers.8.self_attn.q_proj.bias": "model-00003-of-00004.safetensors",
    "model.layers.8.self_attn.q_proj.weight": "model-00004-of-00004.safetensors",
    "model.layers.8.self_attn.v_proj.bias": "model-00001-of-00004.safetensors",
    "model.layers.8.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.9.input_layernorm.weight": "model-00003-of-00004.safetensors",
    "model.layers.9.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.9.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.9.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
    "model.layers.9.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
    "model.layers.9.self_attn.k_proj.bias": "model-00001-of-00004.safetensors",
    "model.layers.9.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.9.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
    "model.layers.9.self_attn.q_proj.bias": "model-00003-of-00004.safetensors",
    "model.layers.9.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
    "model.layers.9.self_attn.v_proj.bias": "model-00002-of-00004.safetensors",
    "model.layers.9.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
    "model.norm.weight": "model-00004-of-00004.safetensors"
  }
}
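
The weight map above scatters tensors across the four shards; `transformers` resolves each tensor's shard from this index at load time. A minimal sketch for inspecting the index, assuming the file has been downloaded locally:

```python
import json
from collections import Counter

# Count how many tensors live in each shard and report the total tensor bytes.
with open("model.safetensors.index.json") as f:
    index = json.load(f)

print(Counter(index["weight_map"].values()))

total = index["metadata"]["total_size"]
# ~14.19 GiB; each shard file is slightly larger than its tensor payload
# because safetensors files carry a JSON header.
print(f"{total / 1024**3:.2f} GiB of tensor data")
```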
23
special_tokens_map.json
Normal file
@@ -0,0 +1,23 @@
{
  "bos_token": {
    "content": "<|begin▁of▁sentence|>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "eos_token": {
    "content": "<|end▁of▁sentence|>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "pad_token": {
    "content": "<|end▁of▁sentence|>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  }
}
3
tokenizer.json
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e20ddafc659ba90242154b55275402edeca0715e5dbb30f56815a4ce081f4893
size 11422778
195
tokenizer_config.json
Normal file
@@ -0,0 +1,195 @@
{
  "add_bos_token": true,
  "add_eos_token": false,
  "add_prefix_space": null,
  "added_tokens_decoder": {
    "151643": {
      "content": "<|end▁of▁sentence|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151644": {
      "content": "<|User|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": false
    },
    "151645": {
      "content": "<|Assistant|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": false
    },
    "151646": {
      "content": "<|begin▁of▁sentence|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151647": {
      "content": "<|EOT|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": false
    },
    "151648": {
      "content": "<think>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": false
    },
    "151649": {
      "content": "</think>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": false
    },
    "151650": {
      "content": "<|quad_start|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151651": {
      "content": "<|quad_end|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151652": {
      "content": "<|vision_start|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151653": {
      "content": "<|vision_end|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151654": {
      "content": "<|vision_pad|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151655": {
      "content": "<|image_pad|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151656": {
      "content": "<|video_pad|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151657": {
      "content": "<tool_call>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": false
    },
    "151658": {
      "content": "</tool_call>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": false
    },
    "151659": {
      "content": "<|fim_prefix|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": false
    },
    "151660": {
      "content": "<|fim_middle|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": false
    },
    "151661": {
      "content": "<|fim_suffix|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": false
    },
    "151662": {
      "content": "<|fim_pad|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": false
    },
    "151663": {
      "content": "<|repo_name|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": false
    },
    "151664": {
      "content": "<|file_sep|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": false
    }
  },
  "bos_token": "<|begin▁of▁sentence|>",
  "chat_template": "{% if not add_generation_prompt is defined %}{% set add_generation_prompt = false %}{% endif %}{% set ns = namespace(is_first=false, is_tool=false, is_output_first=true, system_prompt='') %}{%- for message in messages %}{%- if message['role'] == 'system' %}{% set ns.system_prompt = message['content'] %}{%- endif %}{%- endfor %}{{bos_token}}{{ns.system_prompt}}{%- for message in messages %}{%- if message['role'] == 'user' %}{%- set ns.is_tool = false -%}{{'<|User|>' + message['content']}}{%- endif %}{%- if message['role'] == 'assistant' and message['content'] is none %}{%- set ns.is_tool = false -%}{%- for tool in message['tool_calls']%}{%- if not ns.is_first %}{{'<|Assistant|><|tool▁calls▁begin|><|tool▁call▁begin|>' + tool['type'] + '<|tool▁sep|>' + tool['function']['name'] + '\\n' + '```json' + '\\n' + tool['function']['arguments'] + '\\n' + '```' + '<|tool▁call▁end|>'}}{%- set ns.is_first = true -%}{%- else %}{{'\\n' + '<|tool▁call▁begin|>' + tool['type'] + '<|tool▁sep|>' + tool['function']['name'] + '\\n' + '```json' + '\\n' + tool['function']['arguments'] + '\\n' + '```' + '<|tool▁call▁end|>'}}{{'<|tool▁calls▁end|><|end▁of▁sentence|>'}}{%- endif %}{%- endfor %}{%- endif %}{%- if message['role'] == 'assistant' and message['content'] is not none %}{%- if ns.is_tool %}{{'<|tool▁outputs▁end|>' + message['content'] + '<|end▁of▁sentence|>'}}{%- set ns.is_tool = false -%}{%- else %}{% set content = message['content'] %}{% if '</think>' in content %}{% set content = content.split('</think>')[-1] %}{% endif %}{{'<|Assistant|>' + content + '<|end▁of▁sentence|>'}}{%- endif %}{%- endif %}{%- if message['role'] == 'tool' %}{%- set ns.is_tool = true -%}{%- if ns.is_output_first %}{{'<|tool▁outputs▁begin|><|tool▁output▁begin|>' + message['content'] + '<|tool▁output▁end|>'}}{%- set ns.is_output_first = false %}{%- else %}{{'\\n<|tool▁output▁begin|>' + message['content'] + '<|tool▁output▁end|>'}}{%- endif %}{%- endif %}{%- endfor -%}{% if ns.is_tool %}{{'<|tool▁outputs▁end|>'}}{% endif %}{% if add_generation_prompt and not ns.is_tool %}{{'<|Assistant|><think>\\n'}}{% endif %}",
  "clean_up_tokenization_spaces": false,
  "eos_token": "<|end▁of▁sentence|>",
  "extra_special_tokens": {},
  "legacy": true,
  "model_max_length": 16384,
  "pad_token": "<|end▁of▁sentence|>",
  "sp_model_kwargs": {},
  "tokenizer_class": "LlamaTokenizerFast",
  "unk_token": null,
  "use_default_system_prompt": false
}
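
Two behaviors worth noting in the chat template above: with `add_generation_prompt=True` it ends the prompt with `<|Assistant|><think>\n`, and it strips everything up to `</think>` from earlier assistant turns before re-serializing them. A small sketch checking both (the example messages are hypothetical):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("nvidia/AceMath-RL-Nemotron-7B")

history = [
    {"role": "user", "content": "What is 2+2?"},
    {"role": "assistant", "content": "<think>\nscratch work...\n</think>\n\\boxed{4}"},
    {"role": "user", "content": "And 3+3?"},
]
text = tokenizer.apply_chat_template(history, tokenize=False, add_generation_prompt=True)

assert text.endswith("<|Assistant|><think>\n")  # generation prompt opens a fresh think block
assert "scratch work" not in text               # prior reasoning is dropped on re-serialization
```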