Initialize project; model provided by the ModelHub XC community

Model: allenai/tulu-v2.5-dpo-13b-hh-rlhf-60k
Source: Original Platform
ModelHub XC
2026-05-04 20:03:37 +08:00
commit bf8ce88081
15 changed files with 587 additions and 0 deletions

.gitattributes vendored Normal file (41 lines)

@@ -0,0 +1,41 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
model-00001-of-00006.safetensors filter=lfs diff=lfs merge=lfs -text
model-00002-of-00006.safetensors filter=lfs diff=lfs merge=lfs -text
model-00003-of-00006.safetensors filter=lfs diff=lfs merge=lfs -text
model-00004-of-00006.safetensors filter=lfs diff=lfs merge=lfs -text
model-00005-of-00006.safetensors filter=lfs diff=lfs merge=lfs -text
model-00006-of-00006.safetensors filter=lfs diff=lfs merge=lfs -text

README.md Normal file (86 lines)

@@ -0,0 +1,86 @@
---
model-index:
- name: tulu-v2.5-dpo-13b-hh-rlhf-60k
results: []
datasets:
- allenai/tulu-2.5-preference-data
- allenai/tulu-v2-sft-mixture
language:
- en
base_model: allenai/tulu-2-13b
license: apache-2.0
---
<center>
<img src="https://huggingface.co/datasets/allenai/blog-images/resolve/main/tulu-2.5/tulu_25_banner.png" alt="Tulu 2.5 banner image" width="800px"/>
</center>
# Model Card for Tulu V2.5 DPO 13B - HH-RLHF 60k
Tulu is a series of language models that are trained to act as helpful assistants.
Tulu V2.5 is a series of models trained using DPO and PPO starting from the [Tulu 2 suite](https://huggingface.co/collections/allenai/tulu-v2-suite-6551b56e743e6349aab45101).
This model is trained on a 60k random subsample of the HH-RLHF dataset using DPO.
For more details, read the paper:
[Unpacking DPO and PPO: Disentangling Best Practices for Learning from Preference Feedback](https://arxiv.org/abs/2406.09279).
## Model description
- **Model type:** One model in a suite of RLHF-tuned chat models trained on a mix of publicly available, synthetic, and human-created datasets.
- **Language(s) (NLP):** English
- **License:** Apache 2.0.
- **Finetuned from model:** [meta-llama/Llama-2-13b-hf](https://huggingface.co/meta-llama/Llama-2-13b-hf)
### Model Sources
- **Repository:** https://github.com/allenai/open-instruct
- **Dataset:** Data used to train this model can be found [here](https://huggingface.co/datasets/allenai/tulu-2.5-preference-data) - specifically the `hh_rlhf_60k` split.
- **Model Family:** The collection of related models can be found [here](https://huggingface.co/collections/allenai/tulu-v25-suite-66676520fd578080e126f618).
## Input Format
The model is trained to use the following format (note the newlines):
```
<|user|>
Your message here!
<|assistant|>
```
For best results, format all inputs in this manner. **Make sure to include a newline after `<|assistant|>`, as this can affect generation quality quite a bit.**
We have included a [chat template](https://huggingface.co/docs/transformers/main/en/chat_templating) in the tokenizer that implements this format.
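For example, the template can be applied with Hugging Face `transformers` like so (a minimal sketch; it assumes this repository's Hub id and a recent `transformers` release with chat-template support):
```
from transformers import AutoTokenizer

# Assumed model id: this repository on the Hugging Face Hub.
tokenizer = AutoTokenizer.from_pretrained("allenai/tulu-v2.5-dpo-13b-hh-rlhf-60k")

messages = [{"role": "user", "content": "Your message here!"}]
# add_generation_prompt=True appends the final "<|assistant|>" turn,
# followed by the newline the note above asks for.
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)
```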
## Intended uses & limitations
The model was initially fine-tuned on a filtered and preprocessed version of the [Tulu V2 mix dataset](https://huggingface.co/datasets/allenai/tulu-v2-sft-mixture), which contains a diverse range of human-created instructions and synthetic dialogues generated primarily by other LLMs.
We then further aligned the model with a [Jax DPO trainer](https://github.com/hamishivi/EasyLM/blob/main/EasyLM/models/llama/llama_train_dpo.py) built on [EasyLM](https://github.com/young-geng/EasyLM) on the dataset mentioned above.
## Bias, Risks, and Limitations
The Tulu models have not been aligned to generate safe completions within the RLHF phase or deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so).
The size and composition of the corpus used to train the base Llama 2 models are unknown, but it likely included a mix of web data and technical sources such as books and code. See the [Falcon 180B model card](https://huggingface.co/tiiuae/falcon-180B#training-data) for an example of this.
### Training hyperparameters
The following hyperparameters were used during DPO training:
- learning_rate: 5e-07
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3.0
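For context, the objective these hyperparameters drive is the standard DPO loss. Below is an illustrative PyTorch sketch of that loss, not the EasyLM/Jax trainer actually used; `beta` is a DPO-specific coefficient that is not listed above:
```
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """Standard DPO loss over summed per-response log-probabilities.

    The policy is pushed to widen the gap between chosen and rejected
    responses, measured relative to a frozen reference model.
    """
    chosen_reward = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_reward = beta * (policy_rejected_logps - ref_rejected_logps)
    return -F.logsigmoid(chosen_reward - rejected_reward).mean()
```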
## Citation
If you find Tulu 2.5 useful in your work, please cite it with:
```
@misc{ivison2024unpacking,
title={{Unpacking DPO and PPO: Disentangling Best Practices for Learning from Preference Feedback}},
author={Hamish Ivison and Yizhong Wang and Jiacheng Liu and Ellen Wu and Valentina Pyatkin and Nathan Lambert and Yejin Choi and Noah A. Smith and Hannaneh Hajishirzi},
year={2024},
eprint={2406.09279},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```

config.json Normal file (27 lines)

@@ -0,0 +1,27 @@
{
"architectures": [
"LlamaForCausalLM"
],
"attention_bias": false,
"attention_dropout": 0.0,
"bos_token_id": 1,
"eos_token_id": 2,
"hidden_act": "silu",
"hidden_size": 5120,
"initializer_range": 0.02,
"intermediate_size": 13824,
"max_position_embeddings": 4096,
"model_type": "llama",
"num_attention_heads": 40,
"num_hidden_layers": 40,
"num_key_value_heads": 40,
"pretraining_tp": 1,
"rms_norm_eps": 1e-06,
"rope_scaling": null,
"rope_theta": 10000.0,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.38.2",
"use_cache": true,
"vocab_size": 32000
}
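A quick cross-check of these values against the checkpoint size (a back-of-the-envelope sketch; the shard index below reports total_size 26031728640 bytes, i.e. 13,015,864,320 bfloat16 parameters at 2 bytes each):
```
# Parameter count implied by config.json (LLaMA architecture, untied embeddings).
V, H, I, L = 32000, 5120, 13824, 40   # vocab, hidden, intermediate, layers

embed = V * H                          # model.embed_tokens
head = V * H                           # lm_head (tie_word_embeddings is false)
attn = 4 * H * H                       # q/k/v/o projections (no GQA: 40 kv heads)
mlp = 3 * H * I                        # gate/up/down projections
norms = 2 * H                          # input + post-attention RMSNorm
total = embed + head + L * (attn + mlp + norms) + H   # + final model.norm
print(total)                           # 13015864320, matching the index exactly
```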

configuration.json Normal file (1 line)

@@ -0,0 +1 @@
{"framework": "pytorch", "task": "text-generation", "allow_remote": true}

generation_config.json Normal file (6 lines)

@@ -0,0 +1,6 @@
{
"_from_model_config": true,
"bos_token_id": 1,
"eos_token_id": 2,
"transformers_version": "4.38.2"
}
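These defaults are read automatically by `transformers` at generation time; they can also be confirmed directly (assuming this repository's Hub id):
```
from transformers import GenerationConfig

gen_cfg = GenerationConfig.from_pretrained("allenai/tulu-v2.5-dpo-13b-hh-rlhf-60k")
print(gen_cfg.bos_token_id, gen_cfg.eos_token_id)  # 1 2
```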

model-00001-of-00006.safetensors Normal file (LFS, 3 lines)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:0d381a457675a8b21dc2f5697f84534d1bbf3fd26aca8404fd0ec06fbbffd772
size 4978265800

model-00002-of-00006.safetensors Normal file (LFS, 3 lines)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:54a5fcaf382f23874782394f996890ba388fda02da735f30f747284c256b3207
size 4970422232

model-00003-of-00006.safetensors Normal file (LFS, 3 lines)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:392140635a2b85af1b72128fe2dbe50eb24cd9d05ce54a51e7591dc3789656b6
size 4970422256

model-00004-of-00006.safetensors Normal file (LFS, 3 lines)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:be42fa67a527038a7f2c60b0396e115c1dd75cc77f9e2e1bc989b155522561f8
size 4933701504

model-00005-of-00006.safetensors Normal file (LFS, 3 lines)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:66691fa54e2a066172739e43a28bcb423de875e386bea47a258616ecff6b6710
size 4933722216

model-00006-of-00006.safetensors Normal file (LFS, 3 lines)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f560418365c29d594bbebe4a6ca6c843b3bda586e0d446642ddf2ee586aee82d
size 1245236920

model.safetensors.index.json Normal file (370 lines)

@@ -0,0 +1,370 @@
{
"metadata": {
"total_size": 26031728640
},
"weight_map": {
"lm_head.weight": "model-00006-of-00006.safetensors",
"model.embed_tokens.weight": "model-00001-of-00006.safetensors",
"model.layers.0.input_layernorm.weight": "model-00001-of-00006.safetensors",
"model.layers.0.mlp.down_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.0.mlp.gate_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.0.mlp.up_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.0.post_attention_layernorm.weight": "model-00001-of-00006.safetensors",
"model.layers.0.self_attn.k_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.0.self_attn.o_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.0.self_attn.q_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.0.self_attn.v_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.1.input_layernorm.weight": "model-00001-of-00006.safetensors",
"model.layers.1.mlp.down_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.1.mlp.gate_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.1.mlp.up_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.1.post_attention_layernorm.weight": "model-00001-of-00006.safetensors",
"model.layers.1.self_attn.k_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.1.self_attn.o_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.1.self_attn.q_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.1.self_attn.v_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.10.input_layernorm.weight": "model-00002-of-00006.safetensors",
"model.layers.10.mlp.down_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.10.mlp.gate_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.10.mlp.up_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.10.post_attention_layernorm.weight": "model-00002-of-00006.safetensors",
"model.layers.10.self_attn.k_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.10.self_attn.o_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.10.self_attn.q_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.10.self_attn.v_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.11.input_layernorm.weight": "model-00002-of-00006.safetensors",
"model.layers.11.mlp.down_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.11.mlp.gate_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.11.mlp.up_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.11.post_attention_layernorm.weight": "model-00002-of-00006.safetensors",
"model.layers.11.self_attn.k_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.11.self_attn.o_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.11.self_attn.q_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.11.self_attn.v_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.12.input_layernorm.weight": "model-00002-of-00006.safetensors",
"model.layers.12.mlp.down_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.12.mlp.gate_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.12.mlp.up_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.12.post_attention_layernorm.weight": "model-00002-of-00006.safetensors",
"model.layers.12.self_attn.k_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.12.self_attn.o_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.12.self_attn.q_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.12.self_attn.v_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.13.input_layernorm.weight": "model-00002-of-00006.safetensors",
"model.layers.13.mlp.down_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.13.mlp.gate_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.13.mlp.up_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.13.post_attention_layernorm.weight": "model-00002-of-00006.safetensors",
"model.layers.13.self_attn.k_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.13.self_attn.o_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.13.self_attn.q_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.13.self_attn.v_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.14.input_layernorm.weight": "model-00002-of-00006.safetensors",
"model.layers.14.mlp.down_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.14.mlp.gate_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.14.mlp.up_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.14.post_attention_layernorm.weight": "model-00002-of-00006.safetensors",
"model.layers.14.self_attn.k_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.14.self_attn.o_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.14.self_attn.q_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.14.self_attn.v_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.15.input_layernorm.weight": "model-00003-of-00006.safetensors",
"model.layers.15.mlp.down_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.15.mlp.gate_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.15.mlp.up_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.15.post_attention_layernorm.weight": "model-00003-of-00006.safetensors",
"model.layers.15.self_attn.k_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.15.self_attn.o_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.15.self_attn.q_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.15.self_attn.v_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.16.input_layernorm.weight": "model-00003-of-00006.safetensors",
"model.layers.16.mlp.down_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.16.mlp.gate_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.16.mlp.up_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.16.post_attention_layernorm.weight": "model-00003-of-00006.safetensors",
"model.layers.16.self_attn.k_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.16.self_attn.o_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.16.self_attn.q_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.16.self_attn.v_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.17.input_layernorm.weight": "model-00003-of-00006.safetensors",
"model.layers.17.mlp.down_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.17.mlp.gate_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.17.mlp.up_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.17.post_attention_layernorm.weight": "model-00003-of-00006.safetensors",
"model.layers.17.self_attn.k_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.17.self_attn.o_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.17.self_attn.q_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.17.self_attn.v_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.18.input_layernorm.weight": "model-00003-of-00006.safetensors",
"model.layers.18.mlp.down_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.18.mlp.gate_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.18.mlp.up_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.18.post_attention_layernorm.weight": "model-00003-of-00006.safetensors",
"model.layers.18.self_attn.k_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.18.self_attn.o_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.18.self_attn.q_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.18.self_attn.v_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.19.input_layernorm.weight": "model-00003-of-00006.safetensors",
"model.layers.19.mlp.down_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.19.mlp.gate_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.19.mlp.up_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.19.post_attention_layernorm.weight": "model-00003-of-00006.safetensors",
"model.layers.19.self_attn.k_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.19.self_attn.o_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.19.self_attn.q_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.19.self_attn.v_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.2.input_layernorm.weight": "model-00001-of-00006.safetensors",
"model.layers.2.mlp.down_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.2.mlp.gate_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.2.mlp.up_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.2.post_attention_layernorm.weight": "model-00001-of-00006.safetensors",
"model.layers.2.self_attn.k_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.2.self_attn.o_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.2.self_attn.q_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.2.self_attn.v_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.20.input_layernorm.weight": "model-00003-of-00006.safetensors",
"model.layers.20.mlp.down_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.20.mlp.gate_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.20.mlp.up_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.20.post_attention_layernorm.weight": "model-00003-of-00006.safetensors",
"model.layers.20.self_attn.k_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.20.self_attn.o_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.20.self_attn.q_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.20.self_attn.v_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.21.input_layernorm.weight": "model-00003-of-00006.safetensors",
"model.layers.21.mlp.down_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.21.mlp.gate_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.21.mlp.up_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.21.post_attention_layernorm.weight": "model-00003-of-00006.safetensors",
"model.layers.21.self_attn.k_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.21.self_attn.o_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.21.self_attn.q_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.21.self_attn.v_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.22.input_layernorm.weight": "model-00003-of-00006.safetensors",
"model.layers.22.mlp.down_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.22.mlp.gate_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.22.mlp.up_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.22.post_attention_layernorm.weight": "model-00003-of-00006.safetensors",
"model.layers.22.self_attn.k_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.22.self_attn.o_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.22.self_attn.q_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.22.self_attn.v_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.23.input_layernorm.weight": "model-00004-of-00006.safetensors",
"model.layers.23.mlp.down_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.23.mlp.gate_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.23.mlp.up_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.23.post_attention_layernorm.weight": "model-00004-of-00006.safetensors",
"model.layers.23.self_attn.k_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.23.self_attn.o_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.23.self_attn.q_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.23.self_attn.v_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.24.input_layernorm.weight": "model-00004-of-00006.safetensors",
"model.layers.24.mlp.down_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.24.mlp.gate_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.24.mlp.up_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.24.post_attention_layernorm.weight": "model-00004-of-00006.safetensors",
"model.layers.24.self_attn.k_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.24.self_attn.o_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.24.self_attn.q_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.24.self_attn.v_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.25.input_layernorm.weight": "model-00004-of-00006.safetensors",
"model.layers.25.mlp.down_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.25.mlp.gate_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.25.mlp.up_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.25.post_attention_layernorm.weight": "model-00004-of-00006.safetensors",
"model.layers.25.self_attn.k_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.25.self_attn.o_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.25.self_attn.q_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.25.self_attn.v_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.26.input_layernorm.weight": "model-00004-of-00006.safetensors",
"model.layers.26.mlp.down_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.26.mlp.gate_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.26.mlp.up_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.26.post_attention_layernorm.weight": "model-00004-of-00006.safetensors",
"model.layers.26.self_attn.k_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.26.self_attn.o_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.26.self_attn.q_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.26.self_attn.v_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.27.input_layernorm.weight": "model-00004-of-00006.safetensors",
"model.layers.27.mlp.down_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.27.mlp.gate_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.27.mlp.up_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.27.post_attention_layernorm.weight": "model-00004-of-00006.safetensors",
"model.layers.27.self_attn.k_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.27.self_attn.o_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.27.self_attn.q_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.27.self_attn.v_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.28.input_layernorm.weight": "model-00004-of-00006.safetensors",
"model.layers.28.mlp.down_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.28.mlp.gate_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.28.mlp.up_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.28.post_attention_layernorm.weight": "model-00004-of-00006.safetensors",
"model.layers.28.self_attn.k_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.28.self_attn.o_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.28.self_attn.q_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.28.self_attn.v_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.29.input_layernorm.weight": "model-00004-of-00006.safetensors",
"model.layers.29.mlp.down_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.29.mlp.gate_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.29.mlp.up_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.29.post_attention_layernorm.weight": "model-00004-of-00006.safetensors",
"model.layers.29.self_attn.k_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.29.self_attn.o_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.29.self_attn.q_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.29.self_attn.v_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.3.input_layernorm.weight": "model-00001-of-00006.safetensors",
"model.layers.3.mlp.down_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.3.mlp.gate_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.3.mlp.up_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.3.post_attention_layernorm.weight": "model-00001-of-00006.safetensors",
"model.layers.3.self_attn.k_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.3.self_attn.o_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.3.self_attn.q_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.3.self_attn.v_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.30.input_layernorm.weight": "model-00005-of-00006.safetensors",
"model.layers.30.mlp.down_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.30.mlp.gate_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.30.mlp.up_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.30.post_attention_layernorm.weight": "model-00005-of-00006.safetensors",
"model.layers.30.self_attn.k_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.30.self_attn.o_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.30.self_attn.q_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.30.self_attn.v_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.31.input_layernorm.weight": "model-00005-of-00006.safetensors",
"model.layers.31.mlp.down_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.31.mlp.gate_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.31.mlp.up_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.31.post_attention_layernorm.weight": "model-00005-of-00006.safetensors",
"model.layers.31.self_attn.k_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.31.self_attn.o_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.31.self_attn.q_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.31.self_attn.v_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.32.input_layernorm.weight": "model-00005-of-00006.safetensors",
"model.layers.32.mlp.down_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.32.mlp.gate_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.32.mlp.up_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.32.post_attention_layernorm.weight": "model-00005-of-00006.safetensors",
"model.layers.32.self_attn.k_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.32.self_attn.o_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.32.self_attn.q_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.32.self_attn.v_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.33.input_layernorm.weight": "model-00005-of-00006.safetensors",
"model.layers.33.mlp.down_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.33.mlp.gate_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.33.mlp.up_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.33.post_attention_layernorm.weight": "model-00005-of-00006.safetensors",
"model.layers.33.self_attn.k_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.33.self_attn.o_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.33.self_attn.q_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.33.self_attn.v_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.34.input_layernorm.weight": "model-00005-of-00006.safetensors",
"model.layers.34.mlp.down_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.34.mlp.gate_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.34.mlp.up_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.34.post_attention_layernorm.weight": "model-00005-of-00006.safetensors",
"model.layers.34.self_attn.k_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.34.self_attn.o_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.34.self_attn.q_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.34.self_attn.v_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.35.input_layernorm.weight": "model-00005-of-00006.safetensors",
"model.layers.35.mlp.down_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.35.mlp.gate_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.35.mlp.up_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.35.post_attention_layernorm.weight": "model-00005-of-00006.safetensors",
"model.layers.35.self_attn.k_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.35.self_attn.o_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.35.self_attn.q_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.35.self_attn.v_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.36.input_layernorm.weight": "model-00005-of-00006.safetensors",
"model.layers.36.mlp.down_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.36.mlp.gate_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.36.mlp.up_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.36.post_attention_layernorm.weight": "model-00005-of-00006.safetensors",
"model.layers.36.self_attn.k_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.36.self_attn.o_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.36.self_attn.q_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.36.self_attn.v_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.37.input_layernorm.weight": "model-00005-of-00006.safetensors",
"model.layers.37.mlp.down_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.37.mlp.gate_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.37.mlp.up_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.37.post_attention_layernorm.weight": "model-00005-of-00006.safetensors",
"model.layers.37.self_attn.k_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.37.self_attn.o_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.37.self_attn.q_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.37.self_attn.v_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.38.input_layernorm.weight": "model-00006-of-00006.safetensors",
"model.layers.38.mlp.down_proj.weight": "model-00006-of-00006.safetensors",
"model.layers.38.mlp.gate_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.38.mlp.up_proj.weight": "model-00006-of-00006.safetensors",
"model.layers.38.post_attention_layernorm.weight": "model-00006-of-00006.safetensors",
"model.layers.38.self_attn.k_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.38.self_attn.o_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.38.self_attn.q_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.38.self_attn.v_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.39.input_layernorm.weight": "model-00006-of-00006.safetensors",
"model.layers.39.mlp.down_proj.weight": "model-00006-of-00006.safetensors",
"model.layers.39.mlp.gate_proj.weight": "model-00006-of-00006.safetensors",
"model.layers.39.mlp.up_proj.weight": "model-00006-of-00006.safetensors",
"model.layers.39.post_attention_layernorm.weight": "model-00006-of-00006.safetensors",
"model.layers.39.self_attn.k_proj.weight": "model-00006-of-00006.safetensors",
"model.layers.39.self_attn.o_proj.weight": "model-00006-of-00006.safetensors",
"model.layers.39.self_attn.q_proj.weight": "model-00006-of-00006.safetensors",
"model.layers.39.self_attn.v_proj.weight": "model-00006-of-00006.safetensors",
"model.layers.4.input_layernorm.weight": "model-00001-of-00006.safetensors",
"model.layers.4.mlp.down_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.4.mlp.gate_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.4.mlp.up_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.4.post_attention_layernorm.weight": "model-00001-of-00006.safetensors",
"model.layers.4.self_attn.k_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.4.self_attn.o_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.4.self_attn.q_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.4.self_attn.v_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.5.input_layernorm.weight": "model-00001-of-00006.safetensors",
"model.layers.5.mlp.down_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.5.mlp.gate_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.5.mlp.up_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.5.post_attention_layernorm.weight": "model-00001-of-00006.safetensors",
"model.layers.5.self_attn.k_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.5.self_attn.o_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.5.self_attn.q_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.5.self_attn.v_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.6.input_layernorm.weight": "model-00001-of-00006.safetensors",
"model.layers.6.mlp.down_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.6.mlp.gate_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.6.mlp.up_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.6.post_attention_layernorm.weight": "model-00001-of-00006.safetensors",
"model.layers.6.self_attn.k_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.6.self_attn.o_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.6.self_attn.q_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.6.self_attn.v_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.7.input_layernorm.weight": "model-00002-of-00006.safetensors",
"model.layers.7.mlp.down_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.7.mlp.gate_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.7.mlp.up_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.7.post_attention_layernorm.weight": "model-00002-of-00006.safetensors",
"model.layers.7.self_attn.k_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.7.self_attn.o_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.7.self_attn.q_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.7.self_attn.v_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.8.input_layernorm.weight": "model-00002-of-00006.safetensors",
"model.layers.8.mlp.down_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.8.mlp.gate_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.8.mlp.up_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.8.post_attention_layernorm.weight": "model-00002-of-00006.safetensors",
"model.layers.8.self_attn.k_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.8.self_attn.o_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.8.self_attn.q_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.8.self_attn.v_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.9.input_layernorm.weight": "model-00002-of-00006.safetensors",
"model.layers.9.mlp.down_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.9.mlp.gate_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.9.mlp.up_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.9.post_attention_layernorm.weight": "model-00002-of-00006.safetensors",
"model.layers.9.self_attn.k_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.9.self_attn.o_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.9.self_attn.q_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.9.self_attn.v_proj.weight": "model-00002-of-00006.safetensors",
"model.norm.weight": "model-00006-of-00006.safetensors"
}
}
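The `weight_map` above lets a loader pull individual tensors without reading every shard. A minimal sketch using the `safetensors` library, assuming the index and shards have been downloaded to the working directory:
```
import json
from safetensors import safe_open

with open("model.safetensors.index.json") as f:
    index = json.load(f)

name = "model.layers.0.self_attn.q_proj.weight"
shard = index["weight_map"][name]        # "model-00001-of-00006.safetensors"
with safe_open(shard, framework="pt") as st:
    tensor = st.get_tensor(name)         # reads only this tensor from the shard
print(tensor.shape)                      # torch.Size([5120, 5120])
```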

special_tokens_map.json Normal file (1 line)

@@ -0,0 +1 @@
{"bos_token": {"content": "<s>", "lstrip": false, "normalized": true, "rstrip": false, "single_word": false}, "eos_token": {"content": "</s>", "lstrip": false, "normalized": true, "rstrip": false, "single_word": false}, "unk_token": {"content": "<unk>", "lstrip": false, "normalized": true, "rstrip": false, "single_word": false}}

tokenizer.model Normal file (LFS, 3 lines)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:9e556afd44213b6bd1be2b850ebbbd98f5481437a8021afaf58ee7fb1818d347
size 499723

tokenizer_config.json Normal file (34 lines)

@@ -0,0 +1,34 @@
{
"add_bos_token": true,
"add_eos_token": false,
"model_max_length": 4096,
"pad_token": null,
"sp_model_kwargs": {},
"tokenizer_class": "LlamaTokenizer",
"clean_up_tokenization_spaces": false,
"bos_token": {
"__type": "AddedToken",
"content": "<s>",
"lstrip": false,
"normalized": true,
"rstrip": false,
"single_word": false
},
"eos_token": {
"__type": "AddedToken",
"content": "</s>",
"lstrip": false,
"normalized": true,
"rstrip": false,
"single_word": false
},
"unk_token": {
"__type": "AddedToken",
"content": "<unk>",
"lstrip": false,
"normalized": true,
"rstrip": false,
"single_word": false
},
"chat_template": "{% for message in messages %}\n{% if message['role'] == 'user' %}\n{{ '<|user|>\n' + message['content'] }}\n{% elif message['role'] == 'assistant' %}\n{{ '<|assistant|>\n' + message['content'] + eos_token }}\n{% endif %}\n{% if loop.last and add_generation_prompt %}\n{{ '<|assistant|>' }}\n{% endif %}\n{% endfor %}"
}