Initialize project; model provided by the ModelHub XC community

Model: OpenBuddy/openbuddy-llemma-34b-v13.1
Source: Original Platform
Commit 00d9bc9307 by ModelHub XC, 2026-04-29 10:59:11 +08:00
17 changed files with 101314 additions and 0 deletions

.gitattributes (vendored, new file, 35 lines)

@@ -0,0 +1,35 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
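The rules above route large binaries through Git LFS. A rough illustration of how a path can be checked against such patterns (a sketch using Python's `fnmatch`, which only approximates Git's wildmatch semantics — for example, `*` here crosses `/` while a gitattributes `*` does not):

```python
from fnmatch import fnmatch

# A subset of the LFS patterns from the .gitattributes file above.
LFS_PATTERNS = [
    "*.bin", "*.safetensors", "*.pt", "*.pth",
    "*tfevents*", "saved_model/**/*",
]

def is_lfs_tracked(path: str) -> bool:
    """Rough check: does a path match any LFS-tracked pattern?

    Note: fnmatch is only an approximation of Git's attribute
    matching; this is illustrative, not Git's actual matcher.
    """
    return any(fnmatch(path, pat) for pat in LFS_PATTERNS)

print(is_lfs_tracked("pytorch_model-00001-of-00007.bin"))  # True
print(is_lfs_tracked("config.json"))                       # False
```

Small text files such as `config.json` fall through to regular Git storage; only the matched binaries become LFS pointers like the ones later in this commit.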

Evaluation.txt (new file, 1644 lines)

File diff suppressed because it is too large.

README.md (new file, 55 lines)

@@ -0,0 +1,55 @@
---
language:
- zh
- en
- fr
- de
- ja
- ko
- it
- ru
pipeline_tag: text-generation
inference: false
library_name: transformers
license: llama2
---
# OpenBuddy - Open Multilingual Chatbot
GitHub and Usage Guide: [https://github.com/OpenBuddy/OpenBuddy](https://github.com/OpenBuddy/OpenBuddy)
Website and Demo: [https://openbuddy.ai](https://openbuddy.ai)
Evaluation result of this model: [Evaluation.txt](Evaluation.txt)
![Demo](https://raw.githubusercontent.com/OpenBuddy/OpenBuddy/main/media/demo.png)
# Copyright Notice
Base model: https://huggingface.co/EleutherAI/llemma_34b
License: llama2
This model is built upon Meta's LLaMA series of models and is subject to Meta's licensing agreement.
This model is intended for use only by individuals who have obtained approval from Meta and are eligible to download LLaMA.
If you have not obtained approval from Meta, you must visit the https://ai.meta.com/llama/ page, read and agree to the model's licensing agreement, submit an application, and wait for approval from Meta before downloading the model from this page.
## Disclaimer
All OpenBuddy models have inherent limitations and may potentially produce outputs that are erroneous, harmful, offensive, or otherwise undesirable. Users should not use these models in critical or high-stakes situations that may lead to personal injury, property damage, or significant losses. Examples of such scenarios include, but are not limited to, the medical field, controlling software and hardware systems that may cause harm, and making important financial or legal decisions.
OpenBuddy is provided "as-is" without any warranty of any kind, either express or implied, including, but not limited to, the implied warranties of merchantability, fitness for a particular purpose, and non-infringement. In no event shall the authors, contributors, or copyright holders be liable for any claim, damages, or other liabilities, whether in an action of contract, tort, or otherwise, arising from, out of, or in connection with the software or the use or other dealings in the software.
By using OpenBuddy, you agree to these terms and conditions, and acknowledge that you understand the potential risks associated with its use. You also agree to indemnify and hold harmless the authors, contributors, and copyright holders from any claims, damages, or liabilities arising from your use of OpenBuddy.

config.json (new file, 28 lines)

@@ -0,0 +1,28 @@
{
"_name_or_path": "openbuddy-llemma-34b-v13.1",
"architectures": [
"LlamaForCausalLM"
],
"attention_bias": false,
"bos_token_id": 1,
"eos_token_id": 2,
"hidden_act": "silu",
"hidden_size": 8192,
"initializer_range": 0.02,
"intermediate_size": 22016,
"max_position_embeddings": 4096,
"model_type": "llama",
"num_attention_heads": 64,
"num_hidden_layers": 48,
"num_key_value_heads": 8,
"pad_token_id": 0,
"pretraining_tp": 1,
"rms_norm_eps": 1e-05,
"rope_scaling": null,
"rope_theta": 1000000,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.35.0.dev0",
"use_cache": true,
"vocab_size": 37632
}
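The geometry in `config.json` pins down the parameter count. As a sanity check, the count can be derived by hand — a sketch assuming the standard Llama-style parameterization that the config itself indicates (no attention biases per `attention_bias: false`, grouped-query attention per `num_key_value_heads: 8`, untied embeddings per `tie_word_embeddings: false`):

```python
# Values copied from the config.json above.
cfg = {
    "hidden_size": 8192,
    "intermediate_size": 22016,
    "num_hidden_layers": 48,
    "num_attention_heads": 64,
    "num_key_value_heads": 8,
    "vocab_size": 37632,
}

h = cfg["hidden_size"]
head_dim = h // cfg["num_attention_heads"]       # 128
kv_dim = cfg["num_key_value_heads"] * head_dim   # 1024 (grouped-query attention)

attn = 2 * h * h + 2 * h * kv_dim                # q/o projections + k/v projections
mlp = 3 * h * cfg["intermediate_size"]           # gate, up, down projections
norms = 2 * h                                    # input + post-attention RMSNorm
per_layer = attn + mlp + norms

embed = cfg["vocab_size"] * h                    # untied, so embed and lm_head both count
total = cfg["num_hidden_layers"] * per_layer + 2 * embed + h  # + final norm

print(total)      # 33836244992 parameters (~33.8B)
print(total * 2)  # bfloat16 bytes: 67672489984
```

The byte count (2 bytes per bfloat16 parameter) lands exactly on the `total_size` of 67672489984 reported by the weight-shard index in this same commit, which is a useful cross-check that the config and checkpoint agree.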

generation_config.json (new file, 7 lines)

@@ -0,0 +1,7 @@
{
"_from_model_config": true,
"bos_token_id": 1,
"eos_token_id": 2,
"pad_token_id": 0,
"transformers_version": "4.35.0.dev0"
}

(new Git LFS pointer file; filename not shown in this listing)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:4097481cc1dfc96cb8cbbac1d79946d1e6dab5ea159ccfe613f67c64d09ff535
size 9944912185
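Each of these pointer stubs is what Git stores in place of a multi-gigabyte shard; the real bytes live in LFS storage, addressed by the SHA-256 `oid`. The format is three `key value` lines, which a minimal sketch can parse (illustrative helper, not part of any Git tooling):

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a Git LFS pointer file into its key/value fields."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

# The first pointer from this commit, verbatim.
pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:4097481cc1dfc96cb8cbbac1d79946d1e6dab5ea159ccfe613f67c64d09ff535
size 9944912185
"""

info = parse_lfs_pointer(pointer)
algo, digest = info["oid"].split(":", 1)
size_gib = int(info["size"]) / 2**30  # roughly 9.3 GiB for this shard
```

When LFS downloads the object, it verifies the payload against `digest` and `size`, so a corrupted or truncated shard is detected before checkout completes.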

(new Git LFS pointer file; filename not shown in this listing)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:befbc3402791de1b97edd99bcc121b159111734256934f50a06953e26bfb5feb
size 9689093137

(new Git LFS pointer file; filename not shown in this listing)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:af6041c24c3f1de0136cc0d2a506536767ab3f2b0cf4028ba94bc833abf7ace6
size 9689093137

(new Git LFS pointer file; filename not shown in this listing)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8341a648a1f7e417bf6cfa3e24b9e2ba957e257d0388d9eb562a79f53614c73c
size 9689093137

(new Git LFS pointer file; filename not shown in this listing)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:68449cacfa3f3dedfd4abe48d3a76db816eea3bd4d7931cfa8c58c95c34d31e6
size 9689093137

(new Git LFS pointer file; filename not shown in this listing)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:3b321079d64e1f0496a765de2b4d39bcabb80c3a3046aed91667007f3c538143
size 9689093137

(new Git LFS pointer file; filename not shown in this listing)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a830d39bf76270011a5a3967e3dcb40c18c78b901caac10c8a346c1c5e756ef7
size 9282260633

(new file, 442 lines; model weight shard index — filename not shown in this listing)

@@ -0,0 +1,442 @@
{
"metadata": {
"total_size": 67672489984
},
"weight_map": {
"lm_head.weight": "pytorch_model-00007-of-00007.bin",
"model.embed_tokens.weight": "pytorch_model-00001-of-00007.bin",
"model.layers.0.input_layernorm.weight": "pytorch_model-00001-of-00007.bin",
"model.layers.0.mlp.down_proj.weight": "pytorch_model-00001-of-00007.bin",
"model.layers.0.mlp.gate_proj.weight": "pytorch_model-00001-of-00007.bin",
"model.layers.0.mlp.up_proj.weight": "pytorch_model-00001-of-00007.bin",
"model.layers.0.post_attention_layernorm.weight": "pytorch_model-00001-of-00007.bin",
"model.layers.0.self_attn.k_proj.weight": "pytorch_model-00001-of-00007.bin",
"model.layers.0.self_attn.o_proj.weight": "pytorch_model-00001-of-00007.bin",
"model.layers.0.self_attn.q_proj.weight": "pytorch_model-00001-of-00007.bin",
"model.layers.0.self_attn.v_proj.weight": "pytorch_model-00001-of-00007.bin",
"model.layers.1.input_layernorm.weight": "pytorch_model-00001-of-00007.bin",
"model.layers.1.mlp.down_proj.weight": "pytorch_model-00001-of-00007.bin",
"model.layers.1.mlp.gate_proj.weight": "pytorch_model-00001-of-00007.bin",
"model.layers.1.mlp.up_proj.weight": "pytorch_model-00001-of-00007.bin",
"model.layers.1.post_attention_layernorm.weight": "pytorch_model-00001-of-00007.bin",
"model.layers.1.self_attn.k_proj.weight": "pytorch_model-00001-of-00007.bin",
"model.layers.1.self_attn.o_proj.weight": "pytorch_model-00001-of-00007.bin",
"model.layers.1.self_attn.q_proj.weight": "pytorch_model-00001-of-00007.bin",
"model.layers.1.self_attn.v_proj.weight": "pytorch_model-00001-of-00007.bin",
"model.layers.10.input_layernorm.weight": "pytorch_model-00002-of-00007.bin",
"model.layers.10.mlp.down_proj.weight": "pytorch_model-00002-of-00007.bin",
"model.layers.10.mlp.gate_proj.weight": "pytorch_model-00002-of-00007.bin",
"model.layers.10.mlp.up_proj.weight": "pytorch_model-00002-of-00007.bin",
"model.layers.10.post_attention_layernorm.weight": "pytorch_model-00002-of-00007.bin",
"model.layers.10.self_attn.k_proj.weight": "pytorch_model-00002-of-00007.bin",
"model.layers.10.self_attn.o_proj.weight": "pytorch_model-00002-of-00007.bin",
"model.layers.10.self_attn.q_proj.weight": "pytorch_model-00002-of-00007.bin",
"model.layers.10.self_attn.v_proj.weight": "pytorch_model-00002-of-00007.bin",
"model.layers.11.input_layernorm.weight": "pytorch_model-00002-of-00007.bin",
"model.layers.11.mlp.down_proj.weight": "pytorch_model-00002-of-00007.bin",
"model.layers.11.mlp.gate_proj.weight": "pytorch_model-00002-of-00007.bin",
"model.layers.11.mlp.up_proj.weight": "pytorch_model-00002-of-00007.bin",
"model.layers.11.post_attention_layernorm.weight": "pytorch_model-00002-of-00007.bin",
"model.layers.11.self_attn.k_proj.weight": "pytorch_model-00002-of-00007.bin",
"model.layers.11.self_attn.o_proj.weight": "pytorch_model-00002-of-00007.bin",
"model.layers.11.self_attn.q_proj.weight": "pytorch_model-00002-of-00007.bin",
"model.layers.11.self_attn.v_proj.weight": "pytorch_model-00002-of-00007.bin",
"model.layers.12.input_layernorm.weight": "pytorch_model-00002-of-00007.bin",
"model.layers.12.mlp.down_proj.weight": "pytorch_model-00002-of-00007.bin",
"model.layers.12.mlp.gate_proj.weight": "pytorch_model-00002-of-00007.bin",
"model.layers.12.mlp.up_proj.weight": "pytorch_model-00002-of-00007.bin",
"model.layers.12.post_attention_layernorm.weight": "pytorch_model-00002-of-00007.bin",
"model.layers.12.self_attn.k_proj.weight": "pytorch_model-00002-of-00007.bin",
"model.layers.12.self_attn.o_proj.weight": "pytorch_model-00002-of-00007.bin",
"model.layers.12.self_attn.q_proj.weight": "pytorch_model-00002-of-00007.bin",
"model.layers.12.self_attn.v_proj.weight": "pytorch_model-00002-of-00007.bin",
"model.layers.13.input_layernorm.weight": "pytorch_model-00003-of-00007.bin",
"model.layers.13.mlp.down_proj.weight": "pytorch_model-00003-of-00007.bin",
"model.layers.13.mlp.gate_proj.weight": "pytorch_model-00002-of-00007.bin",
"model.layers.13.mlp.up_proj.weight": "pytorch_model-00002-of-00007.bin",
"model.layers.13.post_attention_layernorm.weight": "pytorch_model-00003-of-00007.bin",
"model.layers.13.self_attn.k_proj.weight": "pytorch_model-00002-of-00007.bin",
"model.layers.13.self_attn.o_proj.weight": "pytorch_model-00002-of-00007.bin",
"model.layers.13.self_attn.q_proj.weight": "pytorch_model-00002-of-00007.bin",
"model.layers.13.self_attn.v_proj.weight": "pytorch_model-00002-of-00007.bin",
"model.layers.14.input_layernorm.weight": "pytorch_model-00003-of-00007.bin",
"model.layers.14.mlp.down_proj.weight": "pytorch_model-00003-of-00007.bin",
"model.layers.14.mlp.gate_proj.weight": "pytorch_model-00003-of-00007.bin",
"model.layers.14.mlp.up_proj.weight": "pytorch_model-00003-of-00007.bin",
"model.layers.14.post_attention_layernorm.weight": "pytorch_model-00003-of-00007.bin",
"model.layers.14.self_attn.k_proj.weight": "pytorch_model-00003-of-00007.bin",
"model.layers.14.self_attn.o_proj.weight": "pytorch_model-00003-of-00007.bin",
"model.layers.14.self_attn.q_proj.weight": "pytorch_model-00003-of-00007.bin",
"model.layers.14.self_attn.v_proj.weight": "pytorch_model-00003-of-00007.bin",
"model.layers.15.input_layernorm.weight": "pytorch_model-00003-of-00007.bin",
"model.layers.15.mlp.down_proj.weight": "pytorch_model-00003-of-00007.bin",
"model.layers.15.mlp.gate_proj.weight": "pytorch_model-00003-of-00007.bin",
"model.layers.15.mlp.up_proj.weight": "pytorch_model-00003-of-00007.bin",
"model.layers.15.post_attention_layernorm.weight": "pytorch_model-00003-of-00007.bin",
"model.layers.15.self_attn.k_proj.weight": "pytorch_model-00003-of-00007.bin",
"model.layers.15.self_attn.o_proj.weight": "pytorch_model-00003-of-00007.bin",
"model.layers.15.self_attn.q_proj.weight": "pytorch_model-00003-of-00007.bin",
"model.layers.15.self_attn.v_proj.weight": "pytorch_model-00003-of-00007.bin",
"model.layers.16.input_layernorm.weight": "pytorch_model-00003-of-00007.bin",
"model.layers.16.mlp.down_proj.weight": "pytorch_model-00003-of-00007.bin",
"model.layers.16.mlp.gate_proj.weight": "pytorch_model-00003-of-00007.bin",
"model.layers.16.mlp.up_proj.weight": "pytorch_model-00003-of-00007.bin",
"model.layers.16.post_attention_layernorm.weight": "pytorch_model-00003-of-00007.bin",
"model.layers.16.self_attn.k_proj.weight": "pytorch_model-00003-of-00007.bin",
"model.layers.16.self_attn.o_proj.weight": "pytorch_model-00003-of-00007.bin",
"model.layers.16.self_attn.q_proj.weight": "pytorch_model-00003-of-00007.bin",
"model.layers.16.self_attn.v_proj.weight": "pytorch_model-00003-of-00007.bin",
"model.layers.17.input_layernorm.weight": "pytorch_model-00003-of-00007.bin",
"model.layers.17.mlp.down_proj.weight": "pytorch_model-00003-of-00007.bin",
"model.layers.17.mlp.gate_proj.weight": "pytorch_model-00003-of-00007.bin",
"model.layers.17.mlp.up_proj.weight": "pytorch_model-00003-of-00007.bin",
"model.layers.17.post_attention_layernorm.weight": "pytorch_model-00003-of-00007.bin",
"model.layers.17.self_attn.k_proj.weight": "pytorch_model-00003-of-00007.bin",
"model.layers.17.self_attn.o_proj.weight": "pytorch_model-00003-of-00007.bin",
"model.layers.17.self_attn.q_proj.weight": "pytorch_model-00003-of-00007.bin",
"model.layers.17.self_attn.v_proj.weight": "pytorch_model-00003-of-00007.bin",
"model.layers.18.input_layernorm.weight": "pytorch_model-00003-of-00007.bin",
"model.layers.18.mlp.down_proj.weight": "pytorch_model-00003-of-00007.bin",
"model.layers.18.mlp.gate_proj.weight": "pytorch_model-00003-of-00007.bin",
"model.layers.18.mlp.up_proj.weight": "pytorch_model-00003-of-00007.bin",
"model.layers.18.post_attention_layernorm.weight": "pytorch_model-00003-of-00007.bin",
"model.layers.18.self_attn.k_proj.weight": "pytorch_model-00003-of-00007.bin",
"model.layers.18.self_attn.o_proj.weight": "pytorch_model-00003-of-00007.bin",
"model.layers.18.self_attn.q_proj.weight": "pytorch_model-00003-of-00007.bin",
"model.layers.18.self_attn.v_proj.weight": "pytorch_model-00003-of-00007.bin",
"model.layers.19.input_layernorm.weight": "pytorch_model-00003-of-00007.bin",
"model.layers.19.mlp.down_proj.weight": "pytorch_model-00003-of-00007.bin",
"model.layers.19.mlp.gate_proj.weight": "pytorch_model-00003-of-00007.bin",
"model.layers.19.mlp.up_proj.weight": "pytorch_model-00003-of-00007.bin",
"model.layers.19.post_attention_layernorm.weight": "pytorch_model-00003-of-00007.bin",
"model.layers.19.self_attn.k_proj.weight": "pytorch_model-00003-of-00007.bin",
"model.layers.19.self_attn.o_proj.weight": "pytorch_model-00003-of-00007.bin",
"model.layers.19.self_attn.q_proj.weight": "pytorch_model-00003-of-00007.bin",
"model.layers.19.self_attn.v_proj.weight": "pytorch_model-00003-of-00007.bin",
"model.layers.2.input_layernorm.weight": "pytorch_model-00001-of-00007.bin",
"model.layers.2.mlp.down_proj.weight": "pytorch_model-00001-of-00007.bin",
"model.layers.2.mlp.gate_proj.weight": "pytorch_model-00001-of-00007.bin",
"model.layers.2.mlp.up_proj.weight": "pytorch_model-00001-of-00007.bin",
"model.layers.2.post_attention_layernorm.weight": "pytorch_model-00001-of-00007.bin",
"model.layers.2.self_attn.k_proj.weight": "pytorch_model-00001-of-00007.bin",
"model.layers.2.self_attn.o_proj.weight": "pytorch_model-00001-of-00007.bin",
"model.layers.2.self_attn.q_proj.weight": "pytorch_model-00001-of-00007.bin",
"model.layers.2.self_attn.v_proj.weight": "pytorch_model-00001-of-00007.bin",
"model.layers.20.input_layernorm.weight": "pytorch_model-00004-of-00007.bin",
"model.layers.20.mlp.down_proj.weight": "pytorch_model-00004-of-00007.bin",
"model.layers.20.mlp.gate_proj.weight": "pytorch_model-00003-of-00007.bin",
"model.layers.20.mlp.up_proj.weight": "pytorch_model-00003-of-00007.bin",
"model.layers.20.post_attention_layernorm.weight": "pytorch_model-00004-of-00007.bin",
"model.layers.20.self_attn.k_proj.weight": "pytorch_model-00003-of-00007.bin",
"model.layers.20.self_attn.o_proj.weight": "pytorch_model-00003-of-00007.bin",
"model.layers.20.self_attn.q_proj.weight": "pytorch_model-00003-of-00007.bin",
"model.layers.20.self_attn.v_proj.weight": "pytorch_model-00003-of-00007.bin",
"model.layers.21.input_layernorm.weight": "pytorch_model-00004-of-00007.bin",
"model.layers.21.mlp.down_proj.weight": "pytorch_model-00004-of-00007.bin",
"model.layers.21.mlp.gate_proj.weight": "pytorch_model-00004-of-00007.bin",
"model.layers.21.mlp.up_proj.weight": "pytorch_model-00004-of-00007.bin",
"model.layers.21.post_attention_layernorm.weight": "pytorch_model-00004-of-00007.bin",
"model.layers.21.self_attn.k_proj.weight": "pytorch_model-00004-of-00007.bin",
"model.layers.21.self_attn.o_proj.weight": "pytorch_model-00004-of-00007.bin",
"model.layers.21.self_attn.q_proj.weight": "pytorch_model-00004-of-00007.bin",
"model.layers.21.self_attn.v_proj.weight": "pytorch_model-00004-of-00007.bin",
"model.layers.22.input_layernorm.weight": "pytorch_model-00004-of-00007.bin",
"model.layers.22.mlp.down_proj.weight": "pytorch_model-00004-of-00007.bin",
"model.layers.22.mlp.gate_proj.weight": "pytorch_model-00004-of-00007.bin",
"model.layers.22.mlp.up_proj.weight": "pytorch_model-00004-of-00007.bin",
"model.layers.22.post_attention_layernorm.weight": "pytorch_model-00004-of-00007.bin",
"model.layers.22.self_attn.k_proj.weight": "pytorch_model-00004-of-00007.bin",
"model.layers.22.self_attn.o_proj.weight": "pytorch_model-00004-of-00007.bin",
"model.layers.22.self_attn.q_proj.weight": "pytorch_model-00004-of-00007.bin",
"model.layers.22.self_attn.v_proj.weight": "pytorch_model-00004-of-00007.bin",
"model.layers.23.input_layernorm.weight": "pytorch_model-00004-of-00007.bin",
"model.layers.23.mlp.down_proj.weight": "pytorch_model-00004-of-00007.bin",
"model.layers.23.mlp.gate_proj.weight": "pytorch_model-00004-of-00007.bin",
"model.layers.23.mlp.up_proj.weight": "pytorch_model-00004-of-00007.bin",
"model.layers.23.post_attention_layernorm.weight": "pytorch_model-00004-of-00007.bin",
"model.layers.23.self_attn.k_proj.weight": "pytorch_model-00004-of-00007.bin",
"model.layers.23.self_attn.o_proj.weight": "pytorch_model-00004-of-00007.bin",
"model.layers.23.self_attn.q_proj.weight": "pytorch_model-00004-of-00007.bin",
"model.layers.23.self_attn.v_proj.weight": "pytorch_model-00004-of-00007.bin",
"model.layers.24.input_layernorm.weight": "pytorch_model-00004-of-00007.bin",
"model.layers.24.mlp.down_proj.weight": "pytorch_model-00004-of-00007.bin",
"model.layers.24.mlp.gate_proj.weight": "pytorch_model-00004-of-00007.bin",
"model.layers.24.mlp.up_proj.weight": "pytorch_model-00004-of-00007.bin",
"model.layers.24.post_attention_layernorm.weight": "pytorch_model-00004-of-00007.bin",
"model.layers.24.self_attn.k_proj.weight": "pytorch_model-00004-of-00007.bin",
"model.layers.24.self_attn.o_proj.weight": "pytorch_model-00004-of-00007.bin",
"model.layers.24.self_attn.q_proj.weight": "pytorch_model-00004-of-00007.bin",
"model.layers.24.self_attn.v_proj.weight": "pytorch_model-00004-of-00007.bin",
"model.layers.25.input_layernorm.weight": "pytorch_model-00004-of-00007.bin",
"model.layers.25.mlp.down_proj.weight": "pytorch_model-00004-of-00007.bin",
"model.layers.25.mlp.gate_proj.weight": "pytorch_model-00004-of-00007.bin",
"model.layers.25.mlp.up_proj.weight": "pytorch_model-00004-of-00007.bin",
"model.layers.25.post_attention_layernorm.weight": "pytorch_model-00004-of-00007.bin",
"model.layers.25.self_attn.k_proj.weight": "pytorch_model-00004-of-00007.bin",
"model.layers.25.self_attn.o_proj.weight": "pytorch_model-00004-of-00007.bin",
"model.layers.25.self_attn.q_proj.weight": "pytorch_model-00004-of-00007.bin",
"model.layers.25.self_attn.v_proj.weight": "pytorch_model-00004-of-00007.bin",
"model.layers.26.input_layernorm.weight": "pytorch_model-00004-of-00007.bin",
"model.layers.26.mlp.down_proj.weight": "pytorch_model-00004-of-00007.bin",
"model.layers.26.mlp.gate_proj.weight": "pytorch_model-00004-of-00007.bin",
"model.layers.26.mlp.up_proj.weight": "pytorch_model-00004-of-00007.bin",
"model.layers.26.post_attention_layernorm.weight": "pytorch_model-00004-of-00007.bin",
"model.layers.26.self_attn.k_proj.weight": "pytorch_model-00004-of-00007.bin",
"model.layers.26.self_attn.o_proj.weight": "pytorch_model-00004-of-00007.bin",
"model.layers.26.self_attn.q_proj.weight": "pytorch_model-00004-of-00007.bin",
"model.layers.26.self_attn.v_proj.weight": "pytorch_model-00004-of-00007.bin",
"model.layers.27.input_layernorm.weight": "pytorch_model-00005-of-00007.bin",
"model.layers.27.mlp.down_proj.weight": "pytorch_model-00005-of-00007.bin",
"model.layers.27.mlp.gate_proj.weight": "pytorch_model-00004-of-00007.bin",
"model.layers.27.mlp.up_proj.weight": "pytorch_model-00004-of-00007.bin",
"model.layers.27.post_attention_layernorm.weight": "pytorch_model-00005-of-00007.bin",
"model.layers.27.self_attn.k_proj.weight": "pytorch_model-00004-of-00007.bin",
"model.layers.27.self_attn.o_proj.weight": "pytorch_model-00004-of-00007.bin",
"model.layers.27.self_attn.q_proj.weight": "pytorch_model-00004-of-00007.bin",
"model.layers.27.self_attn.v_proj.weight": "pytorch_model-00004-of-00007.bin",
"model.layers.28.input_layernorm.weight": "pytorch_model-00005-of-00007.bin",
"model.layers.28.mlp.down_proj.weight": "pytorch_model-00005-of-00007.bin",
"model.layers.28.mlp.gate_proj.weight": "pytorch_model-00005-of-00007.bin",
"model.layers.28.mlp.up_proj.weight": "pytorch_model-00005-of-00007.bin",
"model.layers.28.post_attention_layernorm.weight": "pytorch_model-00005-of-00007.bin",
"model.layers.28.self_attn.k_proj.weight": "pytorch_model-00005-of-00007.bin",
"model.layers.28.self_attn.o_proj.weight": "pytorch_model-00005-of-00007.bin",
"model.layers.28.self_attn.q_proj.weight": "pytorch_model-00005-of-00007.bin",
"model.layers.28.self_attn.v_proj.weight": "pytorch_model-00005-of-00007.bin",
"model.layers.29.input_layernorm.weight": "pytorch_model-00005-of-00007.bin",
"model.layers.29.mlp.down_proj.weight": "pytorch_model-00005-of-00007.bin",
"model.layers.29.mlp.gate_proj.weight": "pytorch_model-00005-of-00007.bin",
"model.layers.29.mlp.up_proj.weight": "pytorch_model-00005-of-00007.bin",
"model.layers.29.post_attention_layernorm.weight": "pytorch_model-00005-of-00007.bin",
"model.layers.29.self_attn.k_proj.weight": "pytorch_model-00005-of-00007.bin",
"model.layers.29.self_attn.o_proj.weight": "pytorch_model-00005-of-00007.bin",
"model.layers.29.self_attn.q_proj.weight": "pytorch_model-00005-of-00007.bin",
"model.layers.29.self_attn.v_proj.weight": "pytorch_model-00005-of-00007.bin",
"model.layers.3.input_layernorm.weight": "pytorch_model-00001-of-00007.bin",
"model.layers.3.mlp.down_proj.weight": "pytorch_model-00001-of-00007.bin",
"model.layers.3.mlp.gate_proj.weight": "pytorch_model-00001-of-00007.bin",
"model.layers.3.mlp.up_proj.weight": "pytorch_model-00001-of-00007.bin",
"model.layers.3.post_attention_layernorm.weight": "pytorch_model-00001-of-00007.bin",
"model.layers.3.self_attn.k_proj.weight": "pytorch_model-00001-of-00007.bin",
"model.layers.3.self_attn.o_proj.weight": "pytorch_model-00001-of-00007.bin",
"model.layers.3.self_attn.q_proj.weight": "pytorch_model-00001-of-00007.bin",
"model.layers.3.self_attn.v_proj.weight": "pytorch_model-00001-of-00007.bin",
"model.layers.30.input_layernorm.weight": "pytorch_model-00005-of-00007.bin",
"model.layers.30.mlp.down_proj.weight": "pytorch_model-00005-of-00007.bin",
"model.layers.30.mlp.gate_proj.weight": "pytorch_model-00005-of-00007.bin",
"model.layers.30.mlp.up_proj.weight": "pytorch_model-00005-of-00007.bin",
"model.layers.30.post_attention_layernorm.weight": "pytorch_model-00005-of-00007.bin",
"model.layers.30.self_attn.k_proj.weight": "pytorch_model-00005-of-00007.bin",
"model.layers.30.self_attn.o_proj.weight": "pytorch_model-00005-of-00007.bin",
"model.layers.30.self_attn.q_proj.weight": "pytorch_model-00005-of-00007.bin",
"model.layers.30.self_attn.v_proj.weight": "pytorch_model-00005-of-00007.bin",
"model.layers.31.input_layernorm.weight": "pytorch_model-00005-of-00007.bin",
"model.layers.31.mlp.down_proj.weight": "pytorch_model-00005-of-00007.bin",
"model.layers.31.mlp.gate_proj.weight": "pytorch_model-00005-of-00007.bin",
"model.layers.31.mlp.up_proj.weight": "pytorch_model-00005-of-00007.bin",
"model.layers.31.post_attention_layernorm.weight": "pytorch_model-00005-of-00007.bin",
"model.layers.31.self_attn.k_proj.weight": "pytorch_model-00005-of-00007.bin",
"model.layers.31.self_attn.o_proj.weight": "pytorch_model-00005-of-00007.bin",
"model.layers.31.self_attn.q_proj.weight": "pytorch_model-00005-of-00007.bin",
"model.layers.31.self_attn.v_proj.weight": "pytorch_model-00005-of-00007.bin",
"model.layers.32.input_layernorm.weight": "pytorch_model-00005-of-00007.bin",
"model.layers.32.mlp.down_proj.weight": "pytorch_model-00005-of-00007.bin",
"model.layers.32.mlp.gate_proj.weight": "pytorch_model-00005-of-00007.bin",
"model.layers.32.mlp.up_proj.weight": "pytorch_model-00005-of-00007.bin",
"model.layers.32.post_attention_layernorm.weight": "pytorch_model-00005-of-00007.bin",
"model.layers.32.self_attn.k_proj.weight": "pytorch_model-00005-of-00007.bin",
"model.layers.32.self_attn.o_proj.weight": "pytorch_model-00005-of-00007.bin",
"model.layers.32.self_attn.q_proj.weight": "pytorch_model-00005-of-00007.bin",
"model.layers.32.self_attn.v_proj.weight": "pytorch_model-00005-of-00007.bin",
"model.layers.33.input_layernorm.weight": "pytorch_model-00005-of-00007.bin",
"model.layers.33.mlp.down_proj.weight": "pytorch_model-00005-of-00007.bin",
"model.layers.33.mlp.gate_proj.weight": "pytorch_model-00005-of-00007.bin",
"model.layers.33.mlp.up_proj.weight": "pytorch_model-00005-of-00007.bin",
"model.layers.33.post_attention_layernorm.weight": "pytorch_model-00005-of-00007.bin",
"model.layers.33.self_attn.k_proj.weight": "pytorch_model-00005-of-00007.bin",
"model.layers.33.self_attn.o_proj.weight": "pytorch_model-00005-of-00007.bin",
"model.layers.33.self_attn.q_proj.weight": "pytorch_model-00005-of-00007.bin",
"model.layers.33.self_attn.v_proj.weight": "pytorch_model-00005-of-00007.bin",
"model.layers.34.input_layernorm.weight": "pytorch_model-00006-of-00007.bin",
"model.layers.34.mlp.down_proj.weight": "pytorch_model-00006-of-00007.bin",
"model.layers.34.mlp.gate_proj.weight": "pytorch_model-00005-of-00007.bin",
"model.layers.34.mlp.up_proj.weight": "pytorch_model-00005-of-00007.bin",
"model.layers.34.post_attention_layernorm.weight": "pytorch_model-00006-of-00007.bin",
"model.layers.34.self_attn.k_proj.weight": "pytorch_model-00005-of-00007.bin",
"model.layers.34.self_attn.o_proj.weight": "pytorch_model-00005-of-00007.bin",
"model.layers.34.self_attn.q_proj.weight": "pytorch_model-00005-of-00007.bin",
"model.layers.34.self_attn.v_proj.weight": "pytorch_model-00005-of-00007.bin",
"model.layers.35.input_layernorm.weight": "pytorch_model-00006-of-00007.bin",
"model.layers.35.mlp.down_proj.weight": "pytorch_model-00006-of-00007.bin",
"model.layers.35.mlp.gate_proj.weight": "pytorch_model-00006-of-00007.bin",
"model.layers.35.mlp.up_proj.weight": "pytorch_model-00006-of-00007.bin",
"model.layers.35.post_attention_layernorm.weight": "pytorch_model-00006-of-00007.bin",
"model.layers.35.self_attn.k_proj.weight": "pytorch_model-00006-of-00007.bin",
"model.layers.35.self_attn.o_proj.weight": "pytorch_model-00006-of-00007.bin",
"model.layers.35.self_attn.q_proj.weight": "pytorch_model-00006-of-00007.bin",
"model.layers.35.self_attn.v_proj.weight": "pytorch_model-00006-of-00007.bin",
"model.layers.36.input_layernorm.weight": "pytorch_model-00006-of-00007.bin",
"model.layers.36.mlp.down_proj.weight": "pytorch_model-00006-of-00007.bin",
"model.layers.36.mlp.gate_proj.weight": "pytorch_model-00006-of-00007.bin",
"model.layers.36.mlp.up_proj.weight": "pytorch_model-00006-of-00007.bin",
"model.layers.36.post_attention_layernorm.weight": "pytorch_model-00006-of-00007.bin",
"model.layers.36.self_attn.k_proj.weight": "pytorch_model-00006-of-00007.bin",
"model.layers.36.self_attn.o_proj.weight": "pytorch_model-00006-of-00007.bin",
"model.layers.36.self_attn.q_proj.weight": "pytorch_model-00006-of-00007.bin",
"model.layers.36.self_attn.v_proj.weight": "pytorch_model-00006-of-00007.bin",
"model.layers.37.input_layernorm.weight": "pytorch_model-00006-of-00007.bin",
"model.layers.37.mlp.down_proj.weight": "pytorch_model-00006-of-00007.bin",
"model.layers.37.mlp.gate_proj.weight": "pytorch_model-00006-of-00007.bin",
"model.layers.37.mlp.up_proj.weight": "pytorch_model-00006-of-00007.bin",
"model.layers.37.post_attention_layernorm.weight": "pytorch_model-00006-of-00007.bin",
"model.layers.37.self_attn.k_proj.weight": "pytorch_model-00006-of-00007.bin",
"model.layers.37.self_attn.o_proj.weight": "pytorch_model-00006-of-00007.bin",
"model.layers.37.self_attn.q_proj.weight": "pytorch_model-00006-of-00007.bin",
"model.layers.37.self_attn.v_proj.weight": "pytorch_model-00006-of-00007.bin",
"model.layers.38.input_layernorm.weight": "pytorch_model-00006-of-00007.bin",
"model.layers.38.mlp.down_proj.weight": "pytorch_model-00006-of-00007.bin",
"model.layers.38.mlp.gate_proj.weight": "pytorch_model-00006-of-00007.bin",
"model.layers.38.mlp.up_proj.weight": "pytorch_model-00006-of-00007.bin",
"model.layers.38.post_attention_layernorm.weight": "pytorch_model-00006-of-00007.bin",
"model.layers.38.self_attn.k_proj.weight": "pytorch_model-00006-of-00007.bin",
"model.layers.38.self_attn.o_proj.weight": "pytorch_model-00006-of-00007.bin",
"model.layers.38.self_attn.q_proj.weight": "pytorch_model-00006-of-00007.bin",
"model.layers.38.self_attn.v_proj.weight": "pytorch_model-00006-of-00007.bin",
"model.layers.39.input_layernorm.weight": "pytorch_model-00006-of-00007.bin",
"model.layers.39.mlp.down_proj.weight": "pytorch_model-00006-of-00007.bin",
"model.layers.39.mlp.gate_proj.weight": "pytorch_model-00006-of-00007.bin",
"model.layers.39.mlp.up_proj.weight": "pytorch_model-00006-of-00007.bin",
"model.layers.39.post_attention_layernorm.weight": "pytorch_model-00006-of-00007.bin",
"model.layers.39.self_attn.k_proj.weight": "pytorch_model-00006-of-00007.bin",
"model.layers.39.self_attn.o_proj.weight": "pytorch_model-00006-of-00007.bin",
"model.layers.39.self_attn.q_proj.weight": "pytorch_model-00006-of-00007.bin",
"model.layers.39.self_attn.v_proj.weight": "pytorch_model-00006-of-00007.bin",
"model.layers.4.input_layernorm.weight": "pytorch_model-00001-of-00007.bin",
"model.layers.4.mlp.down_proj.weight": "pytorch_model-00001-of-00007.bin",
"model.layers.4.mlp.gate_proj.weight": "pytorch_model-00001-of-00007.bin",
"model.layers.4.mlp.up_proj.weight": "pytorch_model-00001-of-00007.bin",
"model.layers.4.post_attention_layernorm.weight": "pytorch_model-00001-of-00007.bin",
"model.layers.4.self_attn.k_proj.weight": "pytorch_model-00001-of-00007.bin",
"model.layers.4.self_attn.o_proj.weight": "pytorch_model-00001-of-00007.bin",
"model.layers.4.self_attn.q_proj.weight": "pytorch_model-00001-of-00007.bin",
"model.layers.4.self_attn.v_proj.weight": "pytorch_model-00001-of-00007.bin",
"model.layers.40.input_layernorm.weight": "pytorch_model-00006-of-00007.bin",
"model.layers.40.mlp.down_proj.weight": "pytorch_model-00006-of-00007.bin",
"model.layers.40.mlp.gate_proj.weight": "pytorch_model-00006-of-00007.bin",
"model.layers.40.mlp.up_proj.weight": "pytorch_model-00006-of-00007.bin",
"model.layers.40.post_attention_layernorm.weight": "pytorch_model-00006-of-00007.bin",
"model.layers.40.self_attn.k_proj.weight": "pytorch_model-00006-of-00007.bin",
"model.layers.40.self_attn.o_proj.weight": "pytorch_model-00006-of-00007.bin",
"model.layers.40.self_attn.q_proj.weight": "pytorch_model-00006-of-00007.bin",
"model.layers.40.self_attn.v_proj.weight": "pytorch_model-00006-of-00007.bin",
"model.layers.41.input_layernorm.weight": "pytorch_model-00007-of-00007.bin",
"model.layers.41.mlp.down_proj.weight": "pytorch_model-00007-of-00007.bin",
"model.layers.41.mlp.gate_proj.weight": "pytorch_model-00006-of-00007.bin",
"model.layers.41.mlp.up_proj.weight": "pytorch_model-00006-of-00007.bin",
"model.layers.41.post_attention_layernorm.weight": "pytorch_model-00007-of-00007.bin",
"model.layers.41.self_attn.k_proj.weight": "pytorch_model-00006-of-00007.bin",
"model.layers.41.self_attn.o_proj.weight": "pytorch_model-00006-of-00007.bin",
"model.layers.41.self_attn.q_proj.weight": "pytorch_model-00006-of-00007.bin",
"model.layers.41.self_attn.v_proj.weight": "pytorch_model-00006-of-00007.bin",
"model.layers.42.input_layernorm.weight": "pytorch_model-00007-of-00007.bin",
"model.layers.42.mlp.down_proj.weight": "pytorch_model-00007-of-00007.bin",
"model.layers.42.mlp.gate_proj.weight": "pytorch_model-00007-of-00007.bin",
"model.layers.42.mlp.up_proj.weight": "pytorch_model-00007-of-00007.bin",
"model.layers.42.post_attention_layernorm.weight": "pytorch_model-00007-of-00007.bin",
"model.layers.42.self_attn.k_proj.weight": "pytorch_model-00007-of-00007.bin",
"model.layers.42.self_attn.o_proj.weight": "pytorch_model-00007-of-00007.bin",
"model.layers.42.self_attn.q_proj.weight": "pytorch_model-00007-of-00007.bin",
"model.layers.42.self_attn.v_proj.weight": "pytorch_model-00007-of-00007.bin",
"model.layers.43.input_layernorm.weight": "pytorch_model-00007-of-00007.bin",
"model.layers.43.mlp.down_proj.weight": "pytorch_model-00007-of-00007.bin",
"model.layers.43.mlp.gate_proj.weight": "pytorch_model-00007-of-00007.bin",
"model.layers.43.mlp.up_proj.weight": "pytorch_model-00007-of-00007.bin",
"model.layers.43.post_attention_layernorm.weight": "pytorch_model-00007-of-00007.bin",
"model.layers.43.self_attn.k_proj.weight": "pytorch_model-00007-of-00007.bin",
"model.layers.43.self_attn.o_proj.weight": "pytorch_model-00007-of-00007.bin",
"model.layers.43.self_attn.q_proj.weight": "pytorch_model-00007-of-00007.bin",
"model.layers.43.self_attn.v_proj.weight": "pytorch_model-00007-of-00007.bin",
"model.layers.44.input_layernorm.weight": "pytorch_model-00007-of-00007.bin",
"model.layers.44.mlp.down_proj.weight": "pytorch_model-00007-of-00007.bin",
"model.layers.44.mlp.gate_proj.weight": "pytorch_model-00007-of-00007.bin",
"model.layers.44.mlp.up_proj.weight": "pytorch_model-00007-of-00007.bin",
"model.layers.44.post_attention_layernorm.weight": "pytorch_model-00007-of-00007.bin",
"model.layers.44.self_attn.k_proj.weight": "pytorch_model-00007-of-00007.bin",
"model.layers.44.self_attn.o_proj.weight": "pytorch_model-00007-of-00007.bin",
"model.layers.44.self_attn.q_proj.weight": "pytorch_model-00007-of-00007.bin",
"model.layers.44.self_attn.v_proj.weight": "pytorch_model-00007-of-00007.bin",
"model.layers.45.input_layernorm.weight": "pytorch_model-00007-of-00007.bin",
"model.layers.45.mlp.down_proj.weight": "pytorch_model-00007-of-00007.bin",
"model.layers.45.mlp.gate_proj.weight": "pytorch_model-00007-of-00007.bin",
"model.layers.45.mlp.up_proj.weight": "pytorch_model-00007-of-00007.bin",
"model.layers.45.post_attention_layernorm.weight": "pytorch_model-00007-of-00007.bin",
"model.layers.45.self_attn.k_proj.weight": "pytorch_model-00007-of-00007.bin",
"model.layers.45.self_attn.o_proj.weight": "pytorch_model-00007-of-00007.bin",
"model.layers.45.self_attn.q_proj.weight": "pytorch_model-00007-of-00007.bin",
"model.layers.45.self_attn.v_proj.weight": "pytorch_model-00007-of-00007.bin",
"model.layers.46.input_layernorm.weight": "pytorch_model-00007-of-00007.bin",
"model.layers.46.mlp.down_proj.weight": "pytorch_model-00007-of-00007.bin",
"model.layers.46.mlp.gate_proj.weight": "pytorch_model-00007-of-00007.bin",
"model.layers.46.mlp.up_proj.weight": "pytorch_model-00007-of-00007.bin",
"model.layers.46.post_attention_layernorm.weight": "pytorch_model-00007-of-00007.bin",
"model.layers.46.self_attn.k_proj.weight": "pytorch_model-00007-of-00007.bin",
"model.layers.46.self_attn.o_proj.weight": "pytorch_model-00007-of-00007.bin",
"model.layers.46.self_attn.q_proj.weight": "pytorch_model-00007-of-00007.bin",
"model.layers.46.self_attn.v_proj.weight": "pytorch_model-00007-of-00007.bin",
"model.layers.47.input_layernorm.weight": "pytorch_model-00007-of-00007.bin",
"model.layers.47.mlp.down_proj.weight": "pytorch_model-00007-of-00007.bin",
"model.layers.47.mlp.gate_proj.weight": "pytorch_model-00007-of-00007.bin",
"model.layers.47.mlp.up_proj.weight": "pytorch_model-00007-of-00007.bin",
"model.layers.47.post_attention_layernorm.weight": "pytorch_model-00007-of-00007.bin",
"model.layers.47.self_attn.k_proj.weight": "pytorch_model-00007-of-00007.bin",
"model.layers.47.self_attn.o_proj.weight": "pytorch_model-00007-of-00007.bin",
"model.layers.47.self_attn.q_proj.weight": "pytorch_model-00007-of-00007.bin",
"model.layers.47.self_attn.v_proj.weight": "pytorch_model-00007-of-00007.bin",
"model.layers.5.input_layernorm.weight": "pytorch_model-00001-of-00007.bin",
"model.layers.5.mlp.down_proj.weight": "pytorch_model-00001-of-00007.bin",
"model.layers.5.mlp.gate_proj.weight": "pytorch_model-00001-of-00007.bin",
"model.layers.5.mlp.up_proj.weight": "pytorch_model-00001-of-00007.bin",
"model.layers.5.post_attention_layernorm.weight": "pytorch_model-00001-of-00007.bin",
"model.layers.5.self_attn.k_proj.weight": "pytorch_model-00001-of-00007.bin",
"model.layers.5.self_attn.o_proj.weight": "pytorch_model-00001-of-00007.bin",
"model.layers.5.self_attn.q_proj.weight": "pytorch_model-00001-of-00007.bin",
"model.layers.5.self_attn.v_proj.weight": "pytorch_model-00001-of-00007.bin",
"model.layers.6.input_layernorm.weight": "pytorch_model-00002-of-00007.bin",
"model.layers.6.mlp.down_proj.weight": "pytorch_model-00002-of-00007.bin",
"model.layers.6.mlp.gate_proj.weight": "pytorch_model-00001-of-00007.bin",
"model.layers.6.mlp.up_proj.weight": "pytorch_model-00001-of-00007.bin",
"model.layers.6.post_attention_layernorm.weight": "pytorch_model-00002-of-00007.bin",
"model.layers.6.self_attn.k_proj.weight": "pytorch_model-00001-of-00007.bin",
"model.layers.6.self_attn.o_proj.weight": "pytorch_model-00001-of-00007.bin",
"model.layers.6.self_attn.q_proj.weight": "pytorch_model-00001-of-00007.bin",
"model.layers.6.self_attn.v_proj.weight": "pytorch_model-00001-of-00007.bin",
"model.layers.7.input_layernorm.weight": "pytorch_model-00002-of-00007.bin",
"model.layers.7.mlp.down_proj.weight": "pytorch_model-00002-of-00007.bin",
"model.layers.7.mlp.gate_proj.weight": "pytorch_model-00002-of-00007.bin",
"model.layers.7.mlp.up_proj.weight": "pytorch_model-00002-of-00007.bin",
"model.layers.7.post_attention_layernorm.weight": "pytorch_model-00002-of-00007.bin",
"model.layers.7.self_attn.k_proj.weight": "pytorch_model-00002-of-00007.bin",
"model.layers.7.self_attn.o_proj.weight": "pytorch_model-00002-of-00007.bin",
"model.layers.7.self_attn.q_proj.weight": "pytorch_model-00002-of-00007.bin",
"model.layers.7.self_attn.v_proj.weight": "pytorch_model-00002-of-00007.bin",
"model.layers.8.input_layernorm.weight": "pytorch_model-00002-of-00007.bin",
"model.layers.8.mlp.down_proj.weight": "pytorch_model-00002-of-00007.bin",
"model.layers.8.mlp.gate_proj.weight": "pytorch_model-00002-of-00007.bin",
"model.layers.8.mlp.up_proj.weight": "pytorch_model-00002-of-00007.bin",
"model.layers.8.post_attention_layernorm.weight": "pytorch_model-00002-of-00007.bin",
"model.layers.8.self_attn.k_proj.weight": "pytorch_model-00002-of-00007.bin",
"model.layers.8.self_attn.o_proj.weight": "pytorch_model-00002-of-00007.bin",
"model.layers.8.self_attn.q_proj.weight": "pytorch_model-00002-of-00007.bin",
"model.layers.8.self_attn.v_proj.weight": "pytorch_model-00002-of-00007.bin",
"model.layers.9.input_layernorm.weight": "pytorch_model-00002-of-00007.bin",
"model.layers.9.mlp.down_proj.weight": "pytorch_model-00002-of-00007.bin",
"model.layers.9.mlp.gate_proj.weight": "pytorch_model-00002-of-00007.bin",
"model.layers.9.mlp.up_proj.weight": "pytorch_model-00002-of-00007.bin",
"model.layers.9.post_attention_layernorm.weight": "pytorch_model-00002-of-00007.bin",
"model.layers.9.self_attn.k_proj.weight": "pytorch_model-00002-of-00007.bin",
"model.layers.9.self_attn.o_proj.weight": "pytorch_model-00002-of-00007.bin",
"model.layers.9.self_attn.q_proj.weight": "pytorch_model-00002-of-00007.bin",
"model.layers.9.self_attn.v_proj.weight": "pytorch_model-00002-of-00007.bin",
"model.norm.weight": "pytorch_model-00007-of-00007.bin"
}
}
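
The `weight_map` above is the core of `pytorch_model.bin.index.json`: it assigns every parameter tensor to one of the seven shard files, and a loader reads this index first to know which shards to open. A minimal sketch of that lookup, using a small hypothetical excerpt of the map (note that a single layer, such as layer 41, can legitimately straddle a shard boundary):

```python
# Sketch: resolving which shard files are needed for a set of parameters,
# given a weight_map like the one in pytorch_model.bin.index.json.
# The excerpt below is a hypothetical subset for illustration.
index = {
    "weight_map": {
        "model.layers.41.input_layernorm.weight": "pytorch_model-00007-of-00007.bin",
        "model.layers.41.mlp.gate_proj.weight": "pytorch_model-00006-of-00007.bin",
        "model.layers.41.self_attn.q_proj.weight": "pytorch_model-00006-of-00007.bin",
        "model.norm.weight": "pytorch_model-00007-of-00007.bin",
    }
}

def shards_for(weight_map, param_names):
    """Return the set of shard files that must be opened to load the
    given parameters."""
    return {weight_map[name] for name in param_names}

needed = shards_for(
    index["weight_map"],
    ["model.layers.41.mlp.gate_proj.weight",
     "model.layers.41.input_layernorm.weight"],
)
# Layer 41 is split across two shards, so both files are required.
print(sorted(needed))
# → ['pytorch_model-00006-of-00007.bin', 'pytorch_model-00007-of-00007.bin']
```

This is why partial loading (e.g. streaming one shard at a time) works: the index makes every tensor's location explicit without opening any `.bin` file.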

23
special_tokens_map.json Normal file
View File

@@ -0,0 +1,23 @@
{
"bos_token": {
"content": "<s>",
"lstrip": false,
"normalized": true,
"rstrip": false,
"single_word": false
},
"eos_token": {
"content": "</s>",
"lstrip": false,
"normalized": true,
"rstrip": false,
"single_word": false
},
"unk_token": {
"content": "<unk>",
"lstrip": false,
"normalized": true,
"rstrip": false,
"single_word": false
}
}
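
`special_tokens_map.json` declares `<s>`, `</s>`, and `<unk>` as the BOS, EOS, and unknown tokens. A minimal sketch of how those roles frame a sequence; `wrap_sequence` is a hypothetical helper for illustration, not a library API:

```python
# Sketch: how the special-token map translates into sequence framing.
# The dict mirrors the token contents from special_tokens_map.json.
special_tokens = {
    "bos_token": "<s>",
    "eos_token": "</s>",
    "unk_token": "<unk>",
}

def wrap_sequence(text, tokens, add_bos=True, add_eos=True):
    """Frame raw text with the declared BOS/EOS strings (hypothetical)."""
    parts = []
    if add_bos:
        parts.append(tokens["bos_token"])
    parts.append(text)
    if add_eos:
        parts.append(tokens["eos_token"])
    return "".join(parts)

print(wrap_sequence("Hello", special_tokens))
# → <s>Hello</s>
```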

99023
tokenizer.json Normal file

File diff suppressed because it is too large Load Diff

3
tokenizer.model Normal file
View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f440c53d2cc6f14a7ed7124dea5f5a7402fb4fc95bccb5d8be6d0f7e74d327ed
size 568229
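
The three lines above are a Git LFS pointer, not the tokenizer binary itself: the repository stores only the `oid` (SHA-256 of the real file) and its `size`, and `git lfs` fetches the blob on checkout. A small sketch that parses such a pointer into its fields:

```python
# Sketch: parsing a Git LFS pointer file (version / oid / size lines,
# each a key and a value separated by a single space).
pointer_text = """\
version https://git-lfs.github.com/spec/v1
oid sha256:f440c53d2cc6f14a7ed7124dea5f5a7402fb4fc95bccb5d8be6d0f7e74d327ed
size 568229
"""

def parse_lfs_pointer(text):
    """Split each pointer line at the first space into a key/value dict."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

ptr = parse_lfs_pointer(pointer_text)
print(ptr["size"])   # → 568229
print(ptr["oid"])    # the sha256-prefixed object id
```

A tree containing only this pointer (e.g. a shallow clone without `git lfs pull`) will fail to load the tokenizer, which is a common symptom worth checking for.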

33
tokenizer_config.json Normal file
View File

@@ -0,0 +1,33 @@
{
"add_bos_token": true,
"add_eos_token": false,
"bos_token": {
"__type": "AddedToken",
"content": "<s>",
"lstrip": false,
"normalized": true,
"rstrip": false,
"single_word": false
},
"clean_up_tokenization_spaces": false,
"eos_token": {
"__type": "AddedToken",
"content": "</s>",
"lstrip": false,
"normalized": true,
"rstrip": false,
"single_word": false
},
"model_max_length": 1000000000000000019884624838656,
"pad_token": null,
"sp_model_kwargs": {},
"tokenizer_class": "LlamaTokenizer",
"unk_token": {
"__type": "AddedToken",
"content": "<unk>",
"lstrip": false,
"normalized": true,
"rstrip": false,
"single_word": false
}
}
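
Note the asymmetry in the config above: `add_bos_token` is `true` while `add_eos_token` is `false`, the usual LLaMA-family setup where BOS is prepended on encode and EOS is left for the model to emit during generation. A minimal sketch of that behaviour; `simulate_encode` is a hypothetical stand-in using toy ids (1 and 2 are the conventional LLaMA ids for `<s>` and `</s>`), not the real `LlamaTokenizer`:

```python
# Sketch: how add_bos_token / add_eos_token from tokenizer_config.json
# shape an encoded id sequence (hypothetical simulation, toy ids).
config = {"add_bos_token": True, "add_eos_token": False}
BOS_ID, EOS_ID = 1, 2  # conventional LLaMA ids for <s> and </s>

def simulate_encode(token_ids, cfg):
    """Prepend/append special ids according to the tokenizer config."""
    out = []
    if cfg["add_bos_token"]:
        out.append(BOS_ID)
    out.extend(token_ids)
    if cfg["add_eos_token"]:
        out.append(EOS_ID)
    return out

# With this config only BOS is prepended; no EOS is appended.
print(simulate_encode([15043, 3186], config))
# → [1, 15043, 3186]
```

The enormous `model_max_length` value is the library's "no explicit limit" sentinel; the effective context length comes from the model config, not this file.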