Initialize project; model provided by the ModelHub XC community

Model: prithivMLmods/Blaze-14B-xElite
Source: Original Platform
ModelHub XC
2026-05-01 10:37:25 +08:00
commit 35ecfb9f35
17 changed files with 602785 additions and 0 deletions

41
.gitattributes vendored Normal file

@@ -0,0 +1,41 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
model-00001-of-00006.safetensors filter=lfs diff=lfs merge=lfs -text
model-00002-of-00006.safetensors filter=lfs diff=lfs merge=lfs -text
model-00003-of-00006.safetensors filter=lfs diff=lfs merge=lfs -text
model-00004-of-00006.safetensors filter=lfs diff=lfs merge=lfs -text
model-00005-of-00006.safetensors filter=lfs diff=lfs merge=lfs -text
model-00006-of-00006.safetensors filter=lfs diff=lfs merge=lfs -text
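Each line above maps a path pattern to the Git LFS filter, so matching files are stored as lightweight pointers rather than full blobs. A minimal sketch of how such rules can be parsed to list the LFS-tracked patterns (plain Python; the rule list is abridged for illustration):

```python
# Parse .gitattributes-style lines and collect patterns routed through Git LFS.
rules = """\
*.bin filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
""".splitlines()

lfs_patterns = [
    line.split()[0]            # first token is the path pattern
    for line in rules
    if "filter=lfs" in line.split()[1:]   # remaining tokens are attributes
]
print(lfs_patterns)  # ['*.bin', '*.safetensors', 'saved_model/**/*']
```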

220
README.md Normal file

@@ -0,0 +1,220 @@
---
license: llama3.1
language:
- en
pipeline_tag: text-generation
library_name: transformers
tags:
- phi-4
- LlamaForCausalLM
- xElite
- 14B
model-index:
- name: Blaze-14B-xElite
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: IFEval (0-Shot)
type: wis-k/instruction-following-eval
split: train
args:
num_few_shot: 0
metrics:
- type: inst_level_strict_acc and prompt_level_strict_acc
value: 3.63
name: averaged accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=prithivMLmods%2FBlaze-14B-xElite
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BBH (3-Shot)
type: SaylorTwift/bbh
split: test
args:
num_few_shot: 3
metrics:
- type: acc_norm
value: 51.57
name: normalized accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=prithivMLmods%2FBlaze-14B-xElite
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MATH Lvl 5 (4-Shot)
type: lighteval/MATH-Hard
split: test
args:
num_few_shot: 4
metrics:
- type: exact_match
value: 35.88
name: exact match
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=prithivMLmods%2FBlaze-14B-xElite
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GPQA (0-shot)
type: Idavidrein/gpqa
split: train
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 19.24
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=prithivMLmods%2FBlaze-14B-xElite
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MuSR (0-shot)
type: TAUR-Lab/MuSR
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 17.68
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=prithivMLmods%2FBlaze-14B-xElite
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU-PRO (5-shot)
type: TIGER-Lab/MMLU-Pro
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 45.68
name: accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=prithivMLmods%2FBlaze-14B-xElite
name: Open LLM Leaderboard
---
![xlite.gif](https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/SW_hpO9bn8vtkf5F1NteF.gif)
# **Blaze-14B-xElite**
Blaze-14B-xElite is a state-of-the-art open model built on a LLaMA-based architecture. It has been fine-tuned using a blend of synthetic datasets, data from filtered public-domain websites, and acquired academic books and Q&A datasets. The goal of this approach is to ensure that small yet capable models are trained with high-quality data focused on advanced reasoning.
Blaze-14B-xElite has adopted a robust safety post-training approach. This approach leverages a variety of both open-source and in-house generated synthetic datasets. The overall technique employed to achieve safety alignment combines SFT (Supervised Fine-Tuning) and iterative DPO (Direct Preference Optimization), including publicly available datasets focusing on helpfulness and harmlessness as well as various questions and answers targeted at multiple safety categories.
# **Dataset Info**
Blaze-14B-xElite is fine-tuned on a synthetic dataset curated through a pipeline explicitly built for this purpose. The data is primarily based on the Chain of Thought (CoT) or Chain of Continuous Flow methodologies. This approach ensures that the dataset is rich in reasoning, problem-solving, and step-by-step breakdowns of complex tasks. The model is specifically designed to excel in reasoning, mathematics, and breaking down problems into logical, manageable steps.
# **Run with Transformers**
```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
tokenizer = AutoTokenizer.from_pretrained("prithivMLmods/Blaze-14B-xElite")
model = AutoModelForCausalLM.from_pretrained(
"prithivMLmods/Blaze-14B-xElite",
device_map="auto",
torch_dtype=torch.bfloat16,
)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids, max_new_tokens=32)
print(tokenizer.decode(outputs[0]))
```
You can ensure the correct chat template is applied by using `tokenizer.apply_chat_template` as follows:
```python
messages = [
{"role": "user", "content": "Write me a poem about Machine Learning."},
]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt", return_dict=True).to("cuda")
outputs = model.generate(**input_ids, max_new_tokens=256)
print(tokenizer.decode(outputs[0]))
```
# **Intended Use**
The Blaze-14B-xElite model is designed for a wide range of applications, particularly those requiring advanced reasoning, high-quality text generation, and multilingual capabilities. Below are some of the intended use cases:
1. **Complex Reasoning Tasks**:
- Solving intricate problems in mathematics, logic, and science.
- Assisting in academic research by providing detailed explanations and summaries.
2. **Multilingual Applications**:
- Translating text across multiple languages while preserving context and nuance.
- Generating content in various languages for global audiences.
3. **Content Creation**:
- Assisting writers, marketers, and creators with high-quality text generation.
- Generating creative ideas, stories, and technical documentation.
4. **Educational Tools**:
- Providing explanations, tutoring, and Q&A support for students and educators.
- Generating practice questions and answers for learning purposes.
5. **Customer Support**:
- Automating responses to customer queries with accurate and helpful information.
- Handling complex customer service scenarios with advanced reasoning.
6. **Safety-Critical Applications**:
- Ensuring responses are aligned with safety guidelines, making it suitable for sensitive domains.
- Providing harmlessness-focused interactions in public-facing applications.
# **Limitations**
While Blaze-14B-xElite is a powerful and versatile model, it has certain limitations that users should be aware of:
1. **Bias and Fairness**:
- Despite rigorous training and safety alignment, the model may still exhibit biases present in the training data. Users should critically evaluate outputs, especially in sensitive contexts.
2. **Contextual Understanding**:
- The model may occasionally misinterpret complex or ambiguous prompts, leading to inaccurate or irrelevant responses.
3. **Real-Time Knowledge**:
- The model's knowledge is limited to the data it was trained on and does not include real-time or post-training updates. It may not be aware of recent events or developments.
4. **Safety and Harmlessness**:
- While extensive efforts have been made to align the model with safety guidelines, it may still generate outputs that are inappropriate or harmful in certain contexts. Continuous monitoring and human oversight are recommended.
5. **Resource Requirements**:
- Running the model efficiently may require significant computational resources, especially for large-scale or real-time applications.
6. **Ethical Considerations**:
- The model should not be used for malicious purposes, such as generating harmful content, misinformation, or spam. Users are responsible for ensuring ethical use.
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/prithivMLmods__Blaze-14B-xElite-details)!
Summarized results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/contents/viewer/default/train?q=prithivMLmods%2FBlaze-14B-xElite&sort[column]=Average%20%E2%AC%86%EF%B8%8F&sort[direction]=desc)!
| Metric |Value (%)|
|-------------------|--------:|
|**Average** | 28.95|
|IFEval (0-Shot) | 3.63|
|BBH (3-Shot) | 51.57|
|MATH Lvl 5 (4-Shot)| 35.88|
|GPQA (0-shot) | 19.24|
|MuSR (0-shot) | 17.68|
|MMLU-PRO (5-shot) | 45.68|
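The reported average is the unweighted mean of the six benchmark scores, which can be checked directly:

```python
# Recompute the reported leaderboard average from the six scores in the table above.
scores = {
    "IFEval (0-Shot)": 3.63,
    "BBH (3-Shot)": 51.57,
    "MATH Lvl 5 (4-Shot)": 35.88,
    "GPQA (0-shot)": 19.24,
    "MuSR (0-shot)": 17.68,
    "MMLU-PRO (5-shot)": 45.68,
}
average = round(sum(scores.values()) / len(scores), 2)
print(average)  # 28.95
```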

31
config.json Normal file

@@ -0,0 +1,31 @@
{
"architectures": [
"LlamaForCausalLM"
],
"attention_bias": false,
"attention_dropout": 0.0,
"bos_token_id": 100257,
"eos_token_id": 100265,
"head_dim": 128,
"hidden_act": "silu",
"hidden_size": 5120,
"initializer_range": 0.02,
"intermediate_size": 17920,
"max_position_embeddings": 16384,
"mlp_bias": false,
"model_type": "llama",
"num_attention_heads": 40,
"num_hidden_layers": 40,
"num_key_value_heads": 10,
"original_max_position_embeddings": 16384,
"pad_token_id": 100351,
"pretraining_tp": 1,
"rms_norm_eps": 1e-05,
"rope_scaling": null,
"rope_theta": 250000,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.47.1",
"use_cache": true,
"vocab_size": 100352
}
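The attention geometry can be read off the values above: 40 query heads with a head dimension of 128 give the 5120 hidden size, and 10 key/value heads imply grouped-query attention with 4 query heads sharing each KV head. A quick sanity check (plain Python, no dependencies):

```python
# Sanity-check the attention geometry implied by config.json.
hidden_size = 5120
num_attention_heads = 40
num_key_value_heads = 10
head_dim = 128

assert num_attention_heads * head_dim == hidden_size          # 40 * 128 == 5120
assert num_attention_heads % num_key_value_heads == 0         # heads divide evenly
gqa_group_size = num_attention_heads // num_key_value_heads   # queries per KV head
print(gqa_group_size)  # 4
```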

1
configuration.json Normal file

@@ -0,0 +1 @@
{"framework": "pytorch", "task": "text-generation", "allow_remote": true}

8
generation_config.json Normal file

@@ -0,0 +1,8 @@
{
"_from_model_config": true,
"bos_token_id": 100257,
"eos_token_id": 100265,
"max_length": 16384,
"pad_token_id": 100351,
"transformers_version": "4.47.1"
}

100001
merges.txt Normal file

File diff suppressed because it is too large


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ae37a64bea8c5bb310c860ba39e918f1548d9a97eccbd8b863ea7431498dce70
size 4933658528
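Stub files like this one follow the Git LFS pointer format: a spec version line, a sha256 object id, and the byte size of the real file. A minimal parser sketch, using the pointer above as sample input:

```python
# Parse a Git LFS pointer file into its three fields.
pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:ae37a64bea8c5bb310c860ba39e918f1548d9a97eccbd8b863ea7431498dce70
size 4933658528
"""

# Each line is "key value"; split on the first space only.
fields = dict(line.split(" ", 1) for line in pointer.strip().splitlines())
algo, digest = fields["oid"].split(":", 1)

assert algo == "sha256" and len(digest) == 64  # sha256 digests are 64 hex chars
print(int(fields["size"]))  # 4933658528
```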


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:029ec03cab40537ca1df85fbf49e58aee257d8e1dabc1d857876c44f011aa8fd
size 4954693112


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:4745c983087cc0192b850e7b4f9cbb41d32079e2ad98b23666e5eb621c67a9dd
size 4902243992


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:969b56c8d56fd4c2f6dc43263149f511811f90ce233a9d73f83e6ff417dcc77f
size 4954672440


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:b5da91a9fb0d2ddb6fbd4aaa2f6ed2c013689d1eddacd05d5912852bd28408a8
size 4954672432


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:63f1c3d8b8f7f79907a240b150b4be8f928763714c4d18bca01ca7f7805cb48c
size 4619116224


@@ -0,0 +1,370 @@
{
"metadata": {
"total_size": 29319014400
},
"weight_map": {
"lm_head.weight": "model-00006-of-00006.safetensors",
"model.embed_tokens.weight": "model-00001-of-00006.safetensors",
"model.layers.0.input_layernorm.weight": "model-00001-of-00006.safetensors",
"model.layers.0.mlp.down_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.0.mlp.gate_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.0.mlp.up_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.0.post_attention_layernorm.weight": "model-00001-of-00006.safetensors",
"model.layers.0.self_attn.k_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.0.self_attn.o_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.0.self_attn.q_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.0.self_attn.v_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.1.input_layernorm.weight": "model-00001-of-00006.safetensors",
"model.layers.1.mlp.down_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.1.mlp.gate_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.1.mlp.up_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.1.post_attention_layernorm.weight": "model-00001-of-00006.safetensors",
"model.layers.1.self_attn.k_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.1.self_attn.o_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.1.self_attn.q_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.1.self_attn.v_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.10.input_layernorm.weight": "model-00002-of-00006.safetensors",
"model.layers.10.mlp.down_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.10.mlp.gate_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.10.mlp.up_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.10.post_attention_layernorm.weight": "model-00002-of-00006.safetensors",
"model.layers.10.self_attn.k_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.10.self_attn.o_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.10.self_attn.q_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.10.self_attn.v_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.11.input_layernorm.weight": "model-00002-of-00006.safetensors",
"model.layers.11.mlp.down_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.11.mlp.gate_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.11.mlp.up_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.11.post_attention_layernorm.weight": "model-00002-of-00006.safetensors",
"model.layers.11.self_attn.k_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.11.self_attn.o_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.11.self_attn.q_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.11.self_attn.v_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.12.input_layernorm.weight": "model-00002-of-00006.safetensors",
"model.layers.12.mlp.down_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.12.mlp.gate_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.12.mlp.up_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.12.post_attention_layernorm.weight": "model-00002-of-00006.safetensors",
"model.layers.12.self_attn.k_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.12.self_attn.o_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.12.self_attn.q_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.12.self_attn.v_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.13.input_layernorm.weight": "model-00003-of-00006.safetensors",
"model.layers.13.mlp.down_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.13.mlp.gate_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.13.mlp.up_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.13.post_attention_layernorm.weight": "model-00003-of-00006.safetensors",
"model.layers.13.self_attn.k_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.13.self_attn.o_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.13.self_attn.q_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.13.self_attn.v_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.14.input_layernorm.weight": "model-00003-of-00006.safetensors",
"model.layers.14.mlp.down_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.14.mlp.gate_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.14.mlp.up_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.14.post_attention_layernorm.weight": "model-00003-of-00006.safetensors",
"model.layers.14.self_attn.k_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.14.self_attn.o_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.14.self_attn.q_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.14.self_attn.v_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.15.input_layernorm.weight": "model-00003-of-00006.safetensors",
"model.layers.15.mlp.down_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.15.mlp.gate_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.15.mlp.up_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.15.post_attention_layernorm.weight": "model-00003-of-00006.safetensors",
"model.layers.15.self_attn.k_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.15.self_attn.o_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.15.self_attn.q_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.15.self_attn.v_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.16.input_layernorm.weight": "model-00003-of-00006.safetensors",
"model.layers.16.mlp.down_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.16.mlp.gate_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.16.mlp.up_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.16.post_attention_layernorm.weight": "model-00003-of-00006.safetensors",
"model.layers.16.self_attn.k_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.16.self_attn.o_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.16.self_attn.q_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.16.self_attn.v_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.17.input_layernorm.weight": "model-00003-of-00006.safetensors",
"model.layers.17.mlp.down_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.17.mlp.gate_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.17.mlp.up_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.17.post_attention_layernorm.weight": "model-00003-of-00006.safetensors",
"model.layers.17.self_attn.k_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.17.self_attn.o_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.17.self_attn.q_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.17.self_attn.v_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.18.input_layernorm.weight": "model-00003-of-00006.safetensors",
"model.layers.18.mlp.down_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.18.mlp.gate_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.18.mlp.up_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.18.post_attention_layernorm.weight": "model-00003-of-00006.safetensors",
"model.layers.18.self_attn.k_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.18.self_attn.o_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.18.self_attn.q_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.18.self_attn.v_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.19.input_layernorm.weight": "model-00003-of-00006.safetensors",
"model.layers.19.mlp.down_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.19.mlp.gate_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.19.mlp.up_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.19.post_attention_layernorm.weight": "model-00003-of-00006.safetensors",
"model.layers.19.self_attn.k_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.19.self_attn.o_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.19.self_attn.q_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.19.self_attn.v_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.2.input_layernorm.weight": "model-00001-of-00006.safetensors",
"model.layers.2.mlp.down_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.2.mlp.gate_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.2.mlp.up_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.2.post_attention_layernorm.weight": "model-00001-of-00006.safetensors",
"model.layers.2.self_attn.k_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.2.self_attn.o_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.2.self_attn.q_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.2.self_attn.v_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.20.input_layernorm.weight": "model-00004-of-00006.safetensors",
"model.layers.20.mlp.down_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.20.mlp.gate_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.20.mlp.up_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.20.post_attention_layernorm.weight": "model-00004-of-00006.safetensors",
"model.layers.20.self_attn.k_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.20.self_attn.o_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.20.self_attn.q_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.20.self_attn.v_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.21.input_layernorm.weight": "model-00004-of-00006.safetensors",
"model.layers.21.mlp.down_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.21.mlp.gate_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.21.mlp.up_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.21.post_attention_layernorm.weight": "model-00004-of-00006.safetensors",
"model.layers.21.self_attn.k_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.21.self_attn.o_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.21.self_attn.q_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.21.self_attn.v_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.22.input_layernorm.weight": "model-00004-of-00006.safetensors",
"model.layers.22.mlp.down_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.22.mlp.gate_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.22.mlp.up_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.22.post_attention_layernorm.weight": "model-00004-of-00006.safetensors",
"model.layers.22.self_attn.k_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.22.self_attn.o_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.22.self_attn.q_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.22.self_attn.v_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.23.input_layernorm.weight": "model-00004-of-00006.safetensors",
"model.layers.23.mlp.down_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.23.mlp.gate_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.23.mlp.up_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.23.post_attention_layernorm.weight": "model-00004-of-00006.safetensors",
"model.layers.23.self_attn.k_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.23.self_attn.o_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.23.self_attn.q_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.23.self_attn.v_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.24.input_layernorm.weight": "model-00004-of-00006.safetensors",
"model.layers.24.mlp.down_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.24.mlp.gate_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.24.mlp.up_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.24.post_attention_layernorm.weight": "model-00004-of-00006.safetensors",
"model.layers.24.self_attn.k_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.24.self_attn.o_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.24.self_attn.q_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.24.self_attn.v_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.25.input_layernorm.weight": "model-00004-of-00006.safetensors",
"model.layers.25.mlp.down_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.25.mlp.gate_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.25.mlp.up_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.25.post_attention_layernorm.weight": "model-00004-of-00006.safetensors",
"model.layers.25.self_attn.k_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.25.self_attn.o_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.25.self_attn.q_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.25.self_attn.v_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.26.input_layernorm.weight": "model-00004-of-00006.safetensors",
"model.layers.26.mlp.down_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.26.mlp.gate_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.26.mlp.up_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.26.post_attention_layernorm.weight": "model-00004-of-00006.safetensors",
"model.layers.26.self_attn.k_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.26.self_attn.o_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.26.self_attn.q_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.26.self_attn.v_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.27.input_layernorm.weight": "model-00005-of-00006.safetensors",
"model.layers.27.mlp.down_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.27.mlp.gate_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.27.mlp.up_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.27.post_attention_layernorm.weight": "model-00005-of-00006.safetensors",
"model.layers.27.self_attn.k_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.27.self_attn.o_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.27.self_attn.q_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.27.self_attn.v_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.28.input_layernorm.weight": "model-00005-of-00006.safetensors",
"model.layers.28.mlp.down_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.28.mlp.gate_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.28.mlp.up_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.28.post_attention_layernorm.weight": "model-00005-of-00006.safetensors",
"model.layers.28.self_attn.k_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.28.self_attn.o_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.28.self_attn.q_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.28.self_attn.v_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.29.input_layernorm.weight": "model-00005-of-00006.safetensors",
"model.layers.29.mlp.down_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.29.mlp.gate_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.29.mlp.up_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.29.post_attention_layernorm.weight": "model-00005-of-00006.safetensors",
"model.layers.29.self_attn.k_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.29.self_attn.o_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.29.self_attn.q_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.29.self_attn.v_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.3.input_layernorm.weight": "model-00001-of-00006.safetensors",
"model.layers.3.mlp.down_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.3.mlp.gate_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.3.mlp.up_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.3.post_attention_layernorm.weight": "model-00001-of-00006.safetensors",
"model.layers.3.self_attn.k_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.3.self_attn.o_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.3.self_attn.q_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.3.self_attn.v_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.30.input_layernorm.weight": "model-00005-of-00006.safetensors",
"model.layers.30.mlp.down_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.30.mlp.gate_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.30.mlp.up_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.30.post_attention_layernorm.weight": "model-00005-of-00006.safetensors",
"model.layers.30.self_attn.k_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.30.self_attn.o_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.30.self_attn.q_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.30.self_attn.v_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.31.input_layernorm.weight": "model-00005-of-00006.safetensors",
"model.layers.31.mlp.down_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.31.mlp.gate_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.31.mlp.up_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.31.post_attention_layernorm.weight": "model-00005-of-00006.safetensors",
"model.layers.31.self_attn.k_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.31.self_attn.o_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.31.self_attn.q_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.31.self_attn.v_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.32.input_layernorm.weight": "model-00005-of-00006.safetensors",
"model.layers.32.mlp.down_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.32.mlp.gate_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.32.mlp.up_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.32.post_attention_layernorm.weight": "model-00005-of-00006.safetensors",
"model.layers.32.self_attn.k_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.32.self_attn.o_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.32.self_attn.q_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.32.self_attn.v_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.33.input_layernorm.weight": "model-00005-of-00006.safetensors",
"model.layers.33.mlp.down_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.33.mlp.gate_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.33.mlp.up_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.33.post_attention_layernorm.weight": "model-00005-of-00006.safetensors",
"model.layers.33.self_attn.k_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.33.self_attn.o_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.33.self_attn.q_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.33.self_attn.v_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.34.input_layernorm.weight": "model-00006-of-00006.safetensors",
"model.layers.34.mlp.down_proj.weight": "model-00006-of-00006.safetensors",
"model.layers.34.mlp.gate_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.34.mlp.up_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.34.post_attention_layernorm.weight": "model-00006-of-00006.safetensors",
"model.layers.34.self_attn.k_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.34.self_attn.o_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.34.self_attn.q_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.34.self_attn.v_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.35.input_layernorm.weight": "model-00006-of-00006.safetensors",
"model.layers.35.mlp.down_proj.weight": "model-00006-of-00006.safetensors",
"model.layers.35.mlp.gate_proj.weight": "model-00006-of-00006.safetensors",
"model.layers.35.mlp.up_proj.weight": "model-00006-of-00006.safetensors",
"model.layers.35.post_attention_layernorm.weight": "model-00006-of-00006.safetensors",
"model.layers.35.self_attn.k_proj.weight": "model-00006-of-00006.safetensors",
"model.layers.35.self_attn.o_proj.weight": "model-00006-of-00006.safetensors",
"model.layers.35.self_attn.q_proj.weight": "model-00006-of-00006.safetensors",
"model.layers.35.self_attn.v_proj.weight": "model-00006-of-00006.safetensors",
"model.layers.36.input_layernorm.weight": "model-00006-of-00006.safetensors",
"model.layers.36.mlp.down_proj.weight": "model-00006-of-00006.safetensors",
"model.layers.36.mlp.gate_proj.weight": "model-00006-of-00006.safetensors",
"model.layers.36.mlp.up_proj.weight": "model-00006-of-00006.safetensors",
"model.layers.36.post_attention_layernorm.weight": "model-00006-of-00006.safetensors",
"model.layers.36.self_attn.k_proj.weight": "model-00006-of-00006.safetensors",
"model.layers.36.self_attn.o_proj.weight": "model-00006-of-00006.safetensors",
"model.layers.36.self_attn.q_proj.weight": "model-00006-of-00006.safetensors",
"model.layers.36.self_attn.v_proj.weight": "model-00006-of-00006.safetensors",
"model.layers.37.input_layernorm.weight": "model-00006-of-00006.safetensors",
"model.layers.37.mlp.down_proj.weight": "model-00006-of-00006.safetensors",
"model.layers.37.mlp.gate_proj.weight": "model-00006-of-00006.safetensors",
"model.layers.37.mlp.up_proj.weight": "model-00006-of-00006.safetensors",
"model.layers.37.post_attention_layernorm.weight": "model-00006-of-00006.safetensors",
"model.layers.37.self_attn.k_proj.weight": "model-00006-of-00006.safetensors",
"model.layers.37.self_attn.o_proj.weight": "model-00006-of-00006.safetensors",
"model.layers.37.self_attn.q_proj.weight": "model-00006-of-00006.safetensors",
"model.layers.37.self_attn.v_proj.weight": "model-00006-of-00006.safetensors",
"model.layers.38.input_layernorm.weight": "model-00006-of-00006.safetensors",
"model.layers.38.mlp.down_proj.weight": "model-00006-of-00006.safetensors",
"model.layers.38.mlp.gate_proj.weight": "model-00006-of-00006.safetensors",
"model.layers.38.mlp.up_proj.weight": "model-00006-of-00006.safetensors",
"model.layers.38.post_attention_layernorm.weight": "model-00006-of-00006.safetensors",
"model.layers.38.self_attn.k_proj.weight": "model-00006-of-00006.safetensors",
"model.layers.38.self_attn.o_proj.weight": "model-00006-of-00006.safetensors",
"model.layers.38.self_attn.q_proj.weight": "model-00006-of-00006.safetensors",
"model.layers.38.self_attn.v_proj.weight": "model-00006-of-00006.safetensors",
"model.layers.39.input_layernorm.weight": "model-00006-of-00006.safetensors",
"model.layers.39.mlp.down_proj.weight": "model-00006-of-00006.safetensors",
"model.layers.39.mlp.gate_proj.weight": "model-00006-of-00006.safetensors",
"model.layers.39.mlp.up_proj.weight": "model-00006-of-00006.safetensors",
"model.layers.39.post_attention_layernorm.weight": "model-00006-of-00006.safetensors",
"model.layers.39.self_attn.k_proj.weight": "model-00006-of-00006.safetensors",
"model.layers.39.self_attn.o_proj.weight": "model-00006-of-00006.safetensors",
"model.layers.39.self_attn.q_proj.weight": "model-00006-of-00006.safetensors",
"model.layers.39.self_attn.v_proj.weight": "model-00006-of-00006.safetensors",
"model.layers.4.input_layernorm.weight": "model-00001-of-00006.safetensors",
"model.layers.4.mlp.down_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.4.mlp.gate_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.4.mlp.up_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.4.post_attention_layernorm.weight": "model-00001-of-00006.safetensors",
"model.layers.4.self_attn.k_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.4.self_attn.o_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.4.self_attn.q_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.4.self_attn.v_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.5.input_layernorm.weight": "model-00002-of-00006.safetensors",
"model.layers.5.mlp.down_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.5.mlp.gate_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.5.mlp.up_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.5.post_attention_layernorm.weight": "model-00002-of-00006.safetensors",
"model.layers.5.self_attn.k_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.5.self_attn.o_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.5.self_attn.q_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.5.self_attn.v_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.6.input_layernorm.weight": "model-00002-of-00006.safetensors",
"model.layers.6.mlp.down_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.6.mlp.gate_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.6.mlp.up_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.6.post_attention_layernorm.weight": "model-00002-of-00006.safetensors",
"model.layers.6.self_attn.k_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.6.self_attn.o_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.6.self_attn.q_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.6.self_attn.v_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.7.input_layernorm.weight": "model-00002-of-00006.safetensors",
"model.layers.7.mlp.down_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.7.mlp.gate_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.7.mlp.up_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.7.post_attention_layernorm.weight": "model-00002-of-00006.safetensors",
"model.layers.7.self_attn.k_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.7.self_attn.o_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.7.self_attn.q_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.7.self_attn.v_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.8.input_layernorm.weight": "model-00002-of-00006.safetensors",
"model.layers.8.mlp.down_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.8.mlp.gate_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.8.mlp.up_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.8.post_attention_layernorm.weight": "model-00002-of-00006.safetensors",
"model.layers.8.self_attn.k_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.8.self_attn.o_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.8.self_attn.q_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.8.self_attn.v_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.9.input_layernorm.weight": "model-00002-of-00006.safetensors",
"model.layers.9.mlp.down_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.9.mlp.gate_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.9.mlp.up_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.9.post_attention_layernorm.weight": "model-00002-of-00006.safetensors",
"model.layers.9.self_attn.k_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.9.self_attn.o_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.9.self_attn.q_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.9.self_attn.v_proj.weight": "model-00002-of-00006.safetensors",
"model.norm.weight": "model-00006-of-00006.safetensors"
}
}
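The weight map above assigns each tensor name to one of six shard files; note that a single layer's weights can straddle a shard boundary (layer 34 spans shards 5 and 6), which is exactly why the index file is needed. A minimal sketch of the grouping a loader performs, inverting tensor→shard into shard→tensors so each file is opened once — using a handful of entries copied from the index (pure Python, no safetensors dependency):

```python
import json
from collections import defaultdict

# A few entries copied from the weight map above; the real index
# ("model.safetensors.index.json") holds one entry per tensor.
weight_map = {
    "model.layers.34.input_layernorm.weight": "model-00006-of-00006.safetensors",
    "model.layers.34.mlp.gate_proj.weight": "model-00005-of-00006.safetensors",
    "model.layers.34.self_attn.q_proj.weight": "model-00005-of-00006.safetensors",
    "model.norm.weight": "model-00006-of-00006.safetensors",
}

def group_by_shard(weight_map):
    """Invert tensor->shard into shard->[tensors] so each shard file
    is opened once and all of its tensors are read together."""
    shards = defaultdict(list)
    for tensor_name, shard_file in weight_map.items():
        shards[shard_file].append(tensor_name)
    return dict(shards)

shards = group_by_shard(weight_map)
print(json.dumps(shards, indent=2))
```

In practice a loader such as `transformers` reads the full index and performs this grouping internally; the sketch only illustrates the shape of the data.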

30
special_tokens_map.json Normal file

@@ -0,0 +1,30 @@
{
"bos_token": {
"content": "<|endoftext|>",
"lstrip": true,
"normalized": false,
"rstrip": true,
"single_word": false
},
"eos_token": {
"content": "<|im_end|>",
"lstrip": true,
"normalized": false,
"rstrip": true,
"single_word": false
},
"pad_token": {
"content": "<|dummy_87|>",
"lstrip": true,
"normalized": false,
"rstrip": true,
"single_word": false
},
"unk_token": {
"content": "�",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
}
}
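special_tokens_map.json names the special tokens by role, while tokenizer_config.json (below) maps the same strings to vocabulary IDs in `added_tokens_decoder`. A small sketch of how the two files line up, with the strings taken from the map above and the IDs from the config below:

```python
# Strings from special_tokens_map.json above; IDs from the
# "added_tokens_decoder" section of tokenizer_config.json.
special_tokens = {
    "bos_token": "<|endoftext|>",
    "eos_token": "<|im_end|>",
    "pad_token": "<|dummy_87|>",
    "unk_token": "\ufffd",  # the replacement character U+FFFD
}
token_ids = {
    "<|endoftext|>": 100257,
    "<|im_end|>": 100265,
    "<|dummy_87|>": 100351,
    "\ufffd": 5809,
}

# Resolve each role to its vocabulary ID.
special_token_ids = {role: token_ids[tok] for role, tok in special_tokens.items()}
print(special_token_ids)
```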

501273
tokenizer.json Normal file

File diff suppressed because it is too large

791
tokenizer_config.json Normal file

@@ -0,0 +1,791 @@
{
"add_prefix_space": false,
"added_tokens_decoder": {
"5809": {
"content": "�",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"100256": {
"content": "<|dummy_0|>",
"lstrip": true,
"normalized": false,
"rstrip": true,
"single_word": false,
"special": true
},
"100257": {
"content": "<|endoftext|>",
"lstrip": true,
"normalized": false,
"rstrip": true,
"single_word": false,
"special": true
},
"100258": {
"content": "<|fim_prefix|>",
"lstrip": true,
"normalized": false,
"rstrip": true,
"single_word": false,
"special": true
},
"100259": {
"content": "<|fim_middle|>",
"lstrip": true,
"normalized": false,
"rstrip": true,
"single_word": false,
"special": true
},
"100260": {
"content": "<|fim_suffix|>",
"lstrip": true,
"normalized": false,
"rstrip": true,
"single_word": false,
"special": true
},
"100261": {
"content": "<|dummy_1|>",
"lstrip": true,
"normalized": false,
"rstrip": true,
"single_word": false,
"special": true
},
"100262": {
"content": "<|dummy_2|>",
"lstrip": true,
"normalized": false,
"rstrip": true,
"single_word": false,
"special": true
},
"100263": {
"content": "<|dummy_3|>",
"lstrip": true,
"normalized": false,
"rstrip": true,
"single_word": false,
"special": true
},
"100264": {
"content": "<|im_start|>",
"lstrip": true,
"normalized": false,
"rstrip": true,
"single_word": false,
"special": true
},
"100265": {
"content": "<|im_end|>",
"lstrip": true,
"normalized": false,
"rstrip": true,
"single_word": false,
"special": true
},
"100266": {
"content": "<|im_sep|>",
"lstrip": true,
"normalized": false,
"rstrip": true,
"single_word": false,
"special": true
},
"100267": {
"content": "<|dummy_4|>",
"lstrip": true,
"normalized": false,
"rstrip": true,
"single_word": false,
"special": true
},
"100268": {
"content": "<|dummy_5|>",
"lstrip": true,
"normalized": false,
"rstrip": true,
"single_word": false,
"special": true
},
"100269": {
"content": "<|dummy_6|>",
"lstrip": true,
"normalized": false,
"rstrip": true,
"single_word": false,
"special": true
},
"100270": {
"content": "<|dummy_7|>",
"lstrip": true,
"normalized": false,
"rstrip": true,
"single_word": false,
"special": true
},
"100271": {
"content": "<|dummy_8|>",
"lstrip": true,
"normalized": false,
"rstrip": true,
"single_word": false,
"special": true
},
"100272": {
"content": "<|dummy_9|>",
"lstrip": true,
"normalized": false,
"rstrip": true,
"single_word": false,
"special": true
},
"100273": {
"content": "<|dummy_10|>",
"lstrip": true,
"normalized": false,
"rstrip": true,
"single_word": false,
"special": true
},
"100274": {
"content": "<|dummy_11|>",
"lstrip": true,
"normalized": false,
"rstrip": true,
"single_word": false,
"special": true
},
"100275": {
"content": "<|dummy_12|>",
"lstrip": true,
"normalized": false,
"rstrip": true,
"single_word": false,
"special": true
},
"100276": {
"content": "<|endofprompt|>",
"lstrip": true,
"normalized": false,
"rstrip": true,
"single_word": false,
"special": true
},
"100277": {
"content": "<|dummy_13|>",
"lstrip": true,
"normalized": false,
"rstrip": true,
"single_word": false,
"special": true
},
"100278": {
"content": "<|dummy_14|>",
"lstrip": true,
"normalized": false,
"rstrip": true,
"single_word": false,
"special": true
},
"100279": {
"content": "<|dummy_15|>",
"lstrip": true,
"normalized": false,
"rstrip": true,
"single_word": false,
"special": true
},
"100280": {
"content": "<|dummy_16|>",
"lstrip": true,
"normalized": false,
"rstrip": true,
"single_word": false,
"special": true
},
"100281": {
"content": "<|dummy_17|>",
"lstrip": true,
"normalized": false,
"rstrip": true,
"single_word": false,
"special": true
},
"100282": {
"content": "<|dummy_18|>",
"lstrip": true,
"normalized": false,
"rstrip": true,
"single_word": false,
"special": true
},
"100283": {
"content": "<|dummy_19|>",
"lstrip": true,
"normalized": false,
"rstrip": true,
"single_word": false,
"special": true
},
"100284": {
"content": "<|dummy_20|>",
"lstrip": true,
"normalized": false,
"rstrip": true,
"single_word": false,
"special": true
},
"100285": {
"content": "<|dummy_21|>",
"lstrip": true,
"normalized": false,
"rstrip": true,
"single_word": false,
"special": true
},
"100286": {
"content": "<|dummy_22|>",
"lstrip": true,
"normalized": false,
"rstrip": true,
"single_word": false,
"special": true
},
"100287": {
"content": "<|dummy_23|>",
"lstrip": true,
"normalized": false,
"rstrip": true,
"single_word": false,
"special": true
},
"100288": {
"content": "<|dummy_24|>",
"lstrip": true,
"normalized": false,
"rstrip": true,
"single_word": false,
"special": true
},
"100289": {
"content": "<|dummy_25|>",
"lstrip": true,
"normalized": false,
"rstrip": true,
"single_word": false,
"special": true
},
"100290": {
"content": "<|dummy_26|>",
"lstrip": true,
"normalized": false,
"rstrip": true,
"single_word": false,
"special": true
},
"100291": {
"content": "<|dummy_27|>",
"lstrip": true,
"normalized": false,
"rstrip": true,
"single_word": false,
"special": true
},
"100292": {
"content": "<|dummy_28|>",
"lstrip": true,
"normalized": false,
"rstrip": true,
"single_word": false,
"special": true
},
"100293": {
"content": "<|dummy_29|>",
"lstrip": true,
"normalized": false,
"rstrip": true,
"single_word": false,
"special": true
},
"100294": {
"content": "<|dummy_30|>",
"lstrip": true,
"normalized": false,
"rstrip": true,
"single_word": false,
"special": true
},
"100295": {
"content": "<|dummy_31|>",
"lstrip": true,
"normalized": false,
"rstrip": true,
"single_word": false,
"special": true
},
"100296": {
"content": "<|dummy_32|>",
"lstrip": true,
"normalized": false,
"rstrip": true,
"single_word": false,
"special": true
},
"100297": {
"content": "<|dummy_33|>",
"lstrip": true,
"normalized": false,
"rstrip": true,
"single_word": false,
"special": true
},
"100298": {
"content": "<|dummy_34|>",
"lstrip": true,
"normalized": false,
"rstrip": true,
"single_word": false,
"special": true
},
"100299": {
"content": "<|dummy_35|>",
"lstrip": true,
"normalized": false,
"rstrip": true,
"single_word": false,
"special": true
},
"100300": {
"content": "<|dummy_36|>",
"lstrip": true,
"normalized": false,
"rstrip": true,
"single_word": false,
"special": true
},
"100301": {
"content": "<|dummy_37|>",
"lstrip": true,
"normalized": false,
"rstrip": true,
"single_word": false,
"special": true
},
"100302": {
"content": "<|dummy_38|>",
"lstrip": true,
"normalized": false,
"rstrip": true,
"single_word": false,
"special": true
},
"100303": {
"content": "<|dummy_39|>",
"lstrip": true,
"normalized": false,
"rstrip": true,
"single_word": false,
"special": true
},
"100304": {
"content": "<|dummy_40|>",
"lstrip": true,
"normalized": false,
"rstrip": true,
"single_word": false,
"special": true
},
"100305": {
"content": "<|dummy_41|>",
"lstrip": true,
"normalized": false,
"rstrip": true,
"single_word": false,
"special": true
},
"100306": {
"content": "<|dummy_42|>",
"lstrip": true,
"normalized": false,
"rstrip": true,
"single_word": false,
"special": true
},
"100307": {
"content": "<|dummy_43|>",
"lstrip": true,
"normalized": false,
"rstrip": true,
"single_word": false,
"special": true
},
"100308": {
"content": "<|dummy_44|>",
"lstrip": true,
"normalized": false,
"rstrip": true,
"single_word": false,
"special": true
},
"100309": {
"content": "<|dummy_45|>",
"lstrip": true,
"normalized": false,
"rstrip": true,
"single_word": false,
"special": true
},
"100310": {
"content": "<|dummy_46|>",
"lstrip": true,
"normalized": false,
"rstrip": true,
"single_word": false,
"special": true
},
"100311": {
"content": "<|dummy_47|>",
"lstrip": true,
"normalized": false,
"rstrip": true,
"single_word": false,
"special": true
},
"100312": {
"content": "<|dummy_48|>",
"lstrip": true,
"normalized": false,
"rstrip": true,
"single_word": false,
"special": true
},
"100313": {
"content": "<|dummy_49|>",
"lstrip": true,
"normalized": false,
"rstrip": true,
"single_word": false,
"special": true
},
"100314": {
"content": "<|dummy_50|>",
"lstrip": true,
"normalized": false,
"rstrip": true,
"single_word": false,
"special": true
},
"100315": {
"content": "<|dummy_51|>",
"lstrip": true,
"normalized": false,
"rstrip": true,
"single_word": false,
"special": true
},
"100316": {
"content": "<|dummy_52|>",
"lstrip": true,
"normalized": false,
"rstrip": true,
"single_word": false,
"special": true
},
"100317": {
"content": "<|dummy_53|>",
"lstrip": true,
"normalized": false,
"rstrip": true,
"single_word": false,
"special": true
},
"100318": {
"content": "<|dummy_54|>",
"lstrip": true,
"normalized": false,
"rstrip": true,
"single_word": false,
"special": true
},
"100319": {
"content": "<|dummy_55|>",
"lstrip": true,
"normalized": false,
"rstrip": true,
"single_word": false,
"special": true
},
"100320": {
"content": "<|dummy_56|>",
"lstrip": true,
"normalized": false,
"rstrip": true,
"single_word": false,
"special": true
},
"100321": {
"content": "<|dummy_57|>",
"lstrip": true,
"normalized": false,
"rstrip": true,
"single_word": false,
"special": true
},
"100322": {
"content": "<|dummy_58|>",
"lstrip": true,
"normalized": false,
"rstrip": true,
"single_word": false,
"special": true
},
"100323": {
"content": "<|dummy_59|>",
"lstrip": true,
"normalized": false,
"rstrip": true,
"single_word": false,
"special": true
},
"100324": {
"content": "<|dummy_60|>",
"lstrip": true,
"normalized": false,
"rstrip": true,
"single_word": false,
"special": true
},
"100325": {
"content": "<|dummy_61|>",
"lstrip": true,
"normalized": false,
"rstrip": true,
"single_word": false,
"special": true
},
"100326": {
"content": "<|dummy_62|>",
"lstrip": true,
"normalized": false,
"rstrip": true,
"single_word": false,
"special": true
},
"100327": {
"content": "<|dummy_63|>",
"lstrip": true,
"normalized": false,
"rstrip": true,
"single_word": false,
"special": true
},
"100328": {
"content": "<|dummy_64|>",
"lstrip": true,
"normalized": false,
"rstrip": true,
"single_word": false,
"special": true
},
"100329": {
"content": "<|dummy_65|>",
"lstrip": true,
"normalized": false,
"rstrip": true,
"single_word": false,
"special": true
},
"100330": {
"content": "<|dummy_66|>",
"lstrip": true,
"normalized": false,
"rstrip": true,
"single_word": false,
"special": true
},
"100331": {
"content": "<|dummy_67|>",
"lstrip": true,
"normalized": false,
"rstrip": true,
"single_word": false,
"special": true
},
"100332": {
"content": "<|dummy_68|>",
"lstrip": true,
"normalized": false,
"rstrip": true,
"single_word": false,
"special": true
},
"100333": {
"content": "<|dummy_69|>",
"lstrip": true,
"normalized": false,
"rstrip": true,
"single_word": false,
"special": true
},
"100334": {
"content": "<|dummy_70|>",
"lstrip": true,
"normalized": false,
"rstrip": true,
"single_word": false,
"special": true
},
"100335": {
"content": "<|dummy_71|>",
"lstrip": true,
"normalized": false,
"rstrip": true,
"single_word": false,
"special": true
},
"100336": {
"content": "<|dummy_72|>",
"lstrip": true,
"normalized": false,
"rstrip": true,
"single_word": false,
"special": true
},
"100337": {
"content": "<|dummy_73|>",
"lstrip": true,
"normalized": false,
"rstrip": true,
"single_word": false,
"special": true
},
"100338": {
"content": "<|dummy_74|>",
"lstrip": true,
"normalized": false,
"rstrip": true,
"single_word": false,
"special": true
},
"100339": {
"content": "<|dummy_75|>",
"lstrip": true,
"normalized": false,
"rstrip": true,
"single_word": false,
"special": true
},
"100340": {
"content": "<|dummy_76|>",
"lstrip": true,
"normalized": false,
"rstrip": true,
"single_word": false,
"special": true
},
"100341": {
"content": "<|dummy_77|>",
"lstrip": true,
"normalized": false,
"rstrip": true,
"single_word": false,
"special": true
},
"100342": {
"content": "<|dummy_78|>",
"lstrip": true,
"normalized": false,
"rstrip": true,
"single_word": false,
"special": true
},
"100343": {
"content": "<|dummy_79|>",
"lstrip": true,
"normalized": false,
"rstrip": true,
"single_word": false,
"special": true
},
"100344": {
"content": "<|dummy_80|>",
"lstrip": true,
"normalized": false,
"rstrip": true,
"single_word": false,
"special": true
},
"100345": {
"content": "<|dummy_81|>",
"lstrip": true,
"normalized": false,
"rstrip": true,
"single_word": false,
"special": true
},
"100346": {
"content": "<|dummy_82|>",
"lstrip": true,
"normalized": false,
"rstrip": true,
"single_word": false,
"special": true
},
"100347": {
"content": "<|dummy_83|>",
"lstrip": true,
"normalized": false,
"rstrip": true,
"single_word": false,
"special": true
},
"100348": {
"content": "<|dummy_84|>",
"lstrip": true,
"normalized": false,
"rstrip": true,
"single_word": false,
"special": true
},
"100349": {
"content": "<|dummy_85|>",
"lstrip": true,
"normalized": false,
"rstrip": true,
"single_word": false,
"special": true
},
"100350": {
"content": "<|dummy_86|>",
"lstrip": true,
"normalized": false,
"rstrip": true,
"single_word": false,
"special": true
},
"100351": {
"content": "<|dummy_87|>",
"lstrip": true,
"normalized": false,
"rstrip": true,
"single_word": false,
"special": true
}
},
"bos_token": "<|endoftext|>",
"chat_template": "{% for message in messages %}{% if (message['role'] == 'system') %}{{'<|im_start|>system<|im_sep|>' + message['content'] + '<|im_end|>'}}{% elif (message['role'] == 'user') %}{{'<|im_start|>user<|im_sep|>' + message['content'] + '<|im_end|>'}}{% elif (message['role'] == 'assistant') %}{{'<|im_start|>assistant<|im_sep|>' + message['content'] + '<|im_end|>'}}{% endif %}{% endfor %}{% if add_generation_prompt %}{{ '<|im_start|>assistant<|im_sep|>' }}{% endif %}",
"clean_up_tokenization_spaces": false,
"eos_token": "<|im_end|>",
"extra_special_tokens": {},
"model_max_length": 16384,
"pad_token": "<|dummy_87|>",
"padding_side": "left",
"tokenizer_class": "GPT2Tokenizer",
"unk_token": "�"
}
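The `chat_template` field above is a Jinja template that wraps each message as `<|im_start|>{role}<|im_sep|>{content}<|im_end|>` and, when `add_generation_prompt` is set, appends an open assistant header. A pure-Python rendering of the same logic (a sketch mirroring the template, not the tokenizer's own implementation):

```python
def render_chat(messages, add_generation_prompt=False):
    """Render messages the way the chat_template above does:
    <|im_start|>{role}<|im_sep|>{content}<|im_end|> per message,
    plus an open assistant header when generation is requested."""
    out = []
    for m in messages:
        # The template only handles these three roles; others are dropped.
        if m["role"] in ("system", "user", "assistant"):
            out.append(f"<|im_start|>{m['role']}<|im_sep|>{m['content']}<|im_end|>")
    if add_generation_prompt:
        out.append("<|im_start|>assistant<|im_sep|>")
    return "".join(out)

prompt = render_chat(
    [{"role": "system", "content": "You are helpful."},
     {"role": "user", "content": "Hi"}],
    add_generation_prompt=True,
)
print(prompt)
# -> <|im_start|>system<|im_sep|>You are helpful.<|im_end|><|im_start|>user<|im_sep|>Hi<|im_end|><|im_start|>assistant<|im_sep|>
```

With the real tokenizer, the equivalent call is `tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)`.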

1
vocab.json Normal file

File diff suppressed because one or more lines are too long