Initialize the project; model provided by the ModelHub XC community
Model: vilm/Quyen-Pro-v0.1 Source: Original Platform
.gitattributes (vendored, new file, 35 lines)
@@ -0,0 +1,35 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
README.md (new file, 63 lines)
@@ -0,0 +1,63 @@
---
library_name: transformers
license: other
datasets:
- teknium/OpenHermes-2.5
- LDJnr/Capybara
- Intel/orca_dpo_pairs
- argilla/distilabel-capybara-dpo-7k-binarized
language:
- en
pipeline_tag: text-generation
---

# Quyen

<img src="quyen.webp" width="512" height="512" alt="Quyen">

# Model Description

Quyen is our first flagship LLM series based on the Qwen1.5 family. We introduce six versions:

- **Quyen-SE (0.5B)**
- **Quyen-Mini (1.8B)**
- **Quyen (4B)**
- **Quyen-Plus (7B)**
- **Quyen-Pro (14B)**
- **Quyen-Pro-Max (72B)**

All models were trained with SFT and DPO on the following datasets:

- *OpenHermes-2.5* by **Teknium**
- *Capybara* by **LDJ**
- *argilla/distilabel-capybara-dpo-7k-binarized* by **argilla**
- *orca_dpo_pairs* by **Intel**
- and private data by **Ontocord** & **BEE-spoke-data**

# Prompt Template

- All Quyen models use ChatML as the default template:

```
<|im_start|>system
You are a sentient, superintelligent artificial general intelligence, here to teach and assist me.<|im_end|>
<|im_start|>user
Hello world.<|im_end|>
<|im_start|>assistant
```

- You can also use `apply_chat_template`:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("vilm/Quyen-Pro-v0.1")
model = AutoModelForCausalLM.from_pretrained("vilm/Quyen-Pro-v0.1")
messages = [
    {"role": "system", "content": "You are a sentient, superintelligent artificial general intelligence, here to teach and assist me."},
    {"role": "user", "content": "Hello world."}
]
# add_generation_prompt=True appends the "<|im_start|>assistant\n" header
gen_input = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
model.generate(gen_input)
```

# Benchmarks

- Coming soon! We will update the benchmarks later.

# Acknowledgement

- We're incredibly grateful to **Tensoic** and **Ontocord** for their generous support with compute and data preparation.
- Special thanks to the Qwen team for letting us access the models early for these amazing finetunes.
added_tokens.json (new file, 5 lines)
@@ -0,0 +1,5 @@
{
  "<|endoftext|>": 151643,
  "<|im_end|>": 151645,
  "<|im_start|>": 151644
}
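These entries map the ChatML control tokens to ids just above the base BPE vocabulary (config.json's `vocab_size` of 152064 leaves padding headroom beyond them). A minimal sketch that reads the file from a local checkout and inverts the mapping:

```python
# Minimal sketch: read added_tokens.json (local path) and invert it.
import json

with open("added_tokens.json", "r", encoding="utf-8") as f:
    added = json.load(f)

id_to_token = {v: k for k, v in added.items()}
print(id_to_token[151644])  # <|im_start|>
print(id_to_token[151645])  # <|im_end|>
```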
config.json (new file, 27 lines)
@@ -0,0 +1,27 @@
{
  "_name_or_path": "Qwen/Qwen2-beta-14B",
  "architectures": [
    "Qwen2ForCausalLM"
  ],
  "attention_dropout": 0.0,
  "eos_token_id": 151645,
  "hidden_act": "silu",
  "hidden_size": 5120,
  "initializer_range": 0.02,
  "intermediate_size": 13696,
  "max_position_embeddings": 32768,
  "max_window_layers": 35,
  "model_type": "qwen2",
  "num_attention_heads": 40,
  "num_hidden_layers": 40,
  "num_key_value_heads": 40,
  "rms_norm_eps": 1e-06,
  "rope_theta": 1000000.0,
  "sliding_window": 4096,
  "tie_word_embeddings": false,
  "torch_dtype": "bfloat16",
  "transformers_version": "4.38.0.dev0",
  "use_cache": false,
  "use_sliding_window": false,
  "vocab_size": 152064
}
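A few quantities follow directly from these values: 40 attention heads over a hidden size of 5120 give a head dimension of 128, and `num_key_value_heads` equals `num_attention_heads`, so this checkpoint uses full multi-head attention rather than grouped-query attention (a sliding window is declared but disabled). A minimal sketch, assuming the `transformers` library and the repo id shown at the top of this commit:

```python
# Minimal sketch: derive attention geometry from the config above.
from transformers import AutoConfig

config = AutoConfig.from_pretrained("vilm/Quyen-Pro-v0.1")
head_dim = config.hidden_size // config.num_attention_heads
print(head_dim)                                                   # 5120 // 40 = 128
print(config.num_key_value_heads == config.num_attention_heads)   # True: MHA, not GQA
print(config.use_sliding_window)                                  # False
```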
generation_config.json (new file, 9 lines)
@@ -0,0 +1,9 @@
{
  "bos_token_id": 151643,
  "do_sample": true,
  "eos_token_id": 151645,
  "max_length": 4096,
  "temperature": 0.7,
  "top_p": 0.8,
  "transformers_version": "4.38.0.dev0"
}
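These are the sampling defaults that `model.generate()` picks up when the caller passes no overrides: nucleus sampling with `top_p` 0.8 at temperature 0.7, stopping at `<|im_end|>` (id 151645). A minimal sketch, assuming `transformers`:

```python
# Minimal sketch: inspect the shipped generation defaults.
from transformers import GenerationConfig

gen_config = GenerationConfig.from_pretrained("vilm/Quyen-Pro-v0.1")
print(gen_config.do_sample, gen_config.temperature, gen_config.top_p)
# Per-call overrides win, e.g.: model.generate(input_ids, temperature=0.2)
```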
merges.txt (new file, 151388 lines)
File diff suppressed because it is too large.
model-00001-of-00006.safetensors (new file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8f83da3ded1241b6237e90dc4b6a2b94dfb399a96dbc29e25a06517e6ed26c21
size 4919426680
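Each weight shard is stored in Git LFS, so the repository itself carries only this three-line pointer recording the payload's SHA-256 and byte size (here about 4.9 GB); the actual file is fetched at checkout. A minimal sketch of a pointer parser, assuming a local checkout where the pointer has not yet been smudged into the real file:

```python
# Minimal sketch: parse a Git LFS pointer file of the form shown above.
def parse_lfs_pointer(path: str) -> dict:
    fields = {}
    with open(path, "r", encoding="utf-8") as f:
        for line in f:
            key, _, value = line.strip().partition(" ")
            fields[key] = value
    return fields

pointer = parse_lfs_pointer("model-00001-of-00006.safetensors")
print(pointer["oid"])   # sha256:8f83da3d...
print(pointer["size"])  # 4919426680
```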
model-00002-of-00006.safetensors (new file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:439ecbb9e8e4c9a6480356dfe26d0faae1a4afb66d5e64f3503d57b519626c05
size 4991642256
model-00003-of-00006.safetensors (new file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:1a9cac20c8355c173154c5a05148db5c961b88c31637eb90995614a7da4cc668
size 4991631960
model-00004-of-00006.safetensors (new file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:80f2f5fd709a02a1af183ce779e7ec4e0669367c4cd05f143ed17cbe66c07c97
size 4991631960
model-00005-of-00006.safetensors (new file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:bdc2103fc58c09236055a9e578574bfdc65714d03ace25a3550cd557d275e7fb
size 4991631960
model-00006-of-00006.safetensors (new file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a5a03958696e608540f5cf0aae996f742b4a12696bc55502f68a70bf2799b73d
size 3448672536
model.safetensors.index.json (new file, 490 lines)
@@ -0,0 +1,490 @@
{
  "metadata": {
    "total_size": 28334581760
  },
  "weight_map": {
    "lm_head.weight": "model-00006-of-00006.safetensors",
    "model.embed_tokens.weight": "model-00001-of-00006.safetensors",
    "model.layers.0.input_layernorm.weight": "model-00001-of-00006.safetensors",
    "model.layers.0.mlp.down_proj.weight": "model-00001-of-00006.safetensors",
    "model.layers.0.mlp.gate_proj.weight": "model-00001-of-00006.safetensors",
    "model.layers.0.mlp.up_proj.weight": "model-00001-of-00006.safetensors",
    "model.layers.0.post_attention_layernorm.weight": "model-00001-of-00006.safetensors",
    "model.layers.0.self_attn.k_proj.bias": "model-00001-of-00006.safetensors",
    "model.layers.0.self_attn.k_proj.weight": "model-00001-of-00006.safetensors",
    "model.layers.0.self_attn.o_proj.weight": "model-00001-of-00006.safetensors",
    "model.layers.0.self_attn.q_proj.bias": "model-00001-of-00006.safetensors",
    "model.layers.0.self_attn.q_proj.weight": "model-00001-of-00006.safetensors",
    "model.layers.0.self_attn.v_proj.bias": "model-00001-of-00006.safetensors",
    "model.layers.0.self_attn.v_proj.weight": "model-00001-of-00006.safetensors",
    "model.layers.1.input_layernorm.weight": "model-00001-of-00006.safetensors",
    "model.layers.1.mlp.down_proj.weight": "model-00001-of-00006.safetensors",
    "model.layers.1.mlp.gate_proj.weight": "model-00001-of-00006.safetensors",
    "model.layers.1.mlp.up_proj.weight": "model-00001-of-00006.safetensors",
    "model.layers.1.post_attention_layernorm.weight": "model-00001-of-00006.safetensors",
    "model.layers.1.self_attn.k_proj.bias": "model-00001-of-00006.safetensors",
    "model.layers.1.self_attn.k_proj.weight": "model-00001-of-00006.safetensors",
    "model.layers.1.self_attn.o_proj.weight": "model-00001-of-00006.safetensors",
    "model.layers.1.self_attn.q_proj.bias": "model-00001-of-00006.safetensors",
    "model.layers.1.self_attn.q_proj.weight": "model-00001-of-00006.safetensors",
    "model.layers.1.self_attn.v_proj.bias": "model-00001-of-00006.safetensors",
    "model.layers.1.self_attn.v_proj.weight": "model-00001-of-00006.safetensors",
    "model.layers.10.input_layernorm.weight": "model-00002-of-00006.safetensors",
    "model.layers.10.mlp.down_proj.weight": "model-00002-of-00006.safetensors",
    "model.layers.10.mlp.gate_proj.weight": "model-00002-of-00006.safetensors",
    "model.layers.10.mlp.up_proj.weight": "model-00002-of-00006.safetensors",
    "model.layers.10.post_attention_layernorm.weight": "model-00002-of-00006.safetensors",
    "model.layers.10.self_attn.k_proj.bias": "model-00002-of-00006.safetensors",
    "model.layers.10.self_attn.k_proj.weight": "model-00002-of-00006.safetensors",
    "model.layers.10.self_attn.o_proj.weight": "model-00002-of-00006.safetensors",
    "model.layers.10.self_attn.q_proj.bias": "model-00002-of-00006.safetensors",
    "model.layers.10.self_attn.q_proj.weight": "model-00002-of-00006.safetensors",
    "model.layers.10.self_attn.v_proj.bias": "model-00002-of-00006.safetensors",
    "model.layers.10.self_attn.v_proj.weight": "model-00002-of-00006.safetensors",
    "model.layers.11.input_layernorm.weight": "model-00002-of-00006.safetensors",
    "model.layers.11.mlp.down_proj.weight": "model-00002-of-00006.safetensors",
    "model.layers.11.mlp.gate_proj.weight": "model-00002-of-00006.safetensors",
    "model.layers.11.mlp.up_proj.weight": "model-00002-of-00006.safetensors",
    "model.layers.11.post_attention_layernorm.weight": "model-00002-of-00006.safetensors",
    "model.layers.11.self_attn.k_proj.bias": "model-00002-of-00006.safetensors",
    "model.layers.11.self_attn.k_proj.weight": "model-00002-of-00006.safetensors",
    "model.layers.11.self_attn.o_proj.weight": "model-00002-of-00006.safetensors",
    "model.layers.11.self_attn.q_proj.bias": "model-00002-of-00006.safetensors",
    "model.layers.11.self_attn.q_proj.weight": "model-00002-of-00006.safetensors",
    "model.layers.11.self_attn.v_proj.bias": "model-00002-of-00006.safetensors",
    "model.layers.11.self_attn.v_proj.weight": "model-00002-of-00006.safetensors",
    "model.layers.12.input_layernorm.weight": "model-00002-of-00006.safetensors",
    "model.layers.12.mlp.down_proj.weight": "model-00002-of-00006.safetensors",
    "model.layers.12.mlp.gate_proj.weight": "model-00002-of-00006.safetensors",
    "model.layers.12.mlp.up_proj.weight": "model-00002-of-00006.safetensors",
    "model.layers.12.post_attention_layernorm.weight": "model-00002-of-00006.safetensors",
    "model.layers.12.self_attn.k_proj.bias": "model-00002-of-00006.safetensors",
    "model.layers.12.self_attn.k_proj.weight": "model-00002-of-00006.safetensors",
    "model.layers.12.self_attn.o_proj.weight": "model-00002-of-00006.safetensors",
    "model.layers.12.self_attn.q_proj.bias": "model-00002-of-00006.safetensors",
    "model.layers.12.self_attn.q_proj.weight": "model-00002-of-00006.safetensors",
    "model.layers.12.self_attn.v_proj.bias": "model-00002-of-00006.safetensors",
    "model.layers.12.self_attn.v_proj.weight": "model-00002-of-00006.safetensors",
    "model.layers.13.input_layernorm.weight": "model-00003-of-00006.safetensors",
    "model.layers.13.mlp.down_proj.weight": "model-00003-of-00006.safetensors",
    "model.layers.13.mlp.gate_proj.weight": "model-00003-of-00006.safetensors",
    "model.layers.13.mlp.up_proj.weight": "model-00003-of-00006.safetensors",
    "model.layers.13.post_attention_layernorm.weight": "model-00003-of-00006.safetensors",
    "model.layers.13.self_attn.k_proj.bias": "model-00002-of-00006.safetensors",
    "model.layers.13.self_attn.k_proj.weight": "model-00002-of-00006.safetensors",
    "model.layers.13.self_attn.o_proj.weight": "model-00003-of-00006.safetensors",
    "model.layers.13.self_attn.q_proj.bias": "model-00002-of-00006.safetensors",
    "model.layers.13.self_attn.q_proj.weight": "model-00002-of-00006.safetensors",
    "model.layers.13.self_attn.v_proj.bias": "model-00002-of-00006.safetensors",
    "model.layers.13.self_attn.v_proj.weight": "model-00002-of-00006.safetensors",
    "model.layers.14.input_layernorm.weight": "model-00003-of-00006.safetensors",
    "model.layers.14.mlp.down_proj.weight": "model-00003-of-00006.safetensors",
    "model.layers.14.mlp.gate_proj.weight": "model-00003-of-00006.safetensors",
    "model.layers.14.mlp.up_proj.weight": "model-00003-of-00006.safetensors",
    "model.layers.14.post_attention_layernorm.weight": "model-00003-of-00006.safetensors",
    "model.layers.14.self_attn.k_proj.bias": "model-00003-of-00006.safetensors",
    "model.layers.14.self_attn.k_proj.weight": "model-00003-of-00006.safetensors",
    "model.layers.14.self_attn.o_proj.weight": "model-00003-of-00006.safetensors",
    "model.layers.14.self_attn.q_proj.bias": "model-00003-of-00006.safetensors",
    "model.layers.14.self_attn.q_proj.weight": "model-00003-of-00006.safetensors",
    "model.layers.14.self_attn.v_proj.bias": "model-00003-of-00006.safetensors",
    "model.layers.14.self_attn.v_proj.weight": "model-00003-of-00006.safetensors",
    "model.layers.15.input_layernorm.weight": "model-00003-of-00006.safetensors",
    "model.layers.15.mlp.down_proj.weight": "model-00003-of-00006.safetensors",
    "model.layers.15.mlp.gate_proj.weight": "model-00003-of-00006.safetensors",
    "model.layers.15.mlp.up_proj.weight": "model-00003-of-00006.safetensors",
    "model.layers.15.post_attention_layernorm.weight": "model-00003-of-00006.safetensors",
    "model.layers.15.self_attn.k_proj.bias": "model-00003-of-00006.safetensors",
    "model.layers.15.self_attn.k_proj.weight": "model-00003-of-00006.safetensors",
    "model.layers.15.self_attn.o_proj.weight": "model-00003-of-00006.safetensors",
    "model.layers.15.self_attn.q_proj.bias": "model-00003-of-00006.safetensors",
    "model.layers.15.self_attn.q_proj.weight": "model-00003-of-00006.safetensors",
    "model.layers.15.self_attn.v_proj.bias": "model-00003-of-00006.safetensors",
    "model.layers.15.self_attn.v_proj.weight": "model-00003-of-00006.safetensors",
    "model.layers.16.input_layernorm.weight": "model-00003-of-00006.safetensors",
    "model.layers.16.mlp.down_proj.weight": "model-00003-of-00006.safetensors",
    "model.layers.16.mlp.gate_proj.weight": "model-00003-of-00006.safetensors",
    "model.layers.16.mlp.up_proj.weight": "model-00003-of-00006.safetensors",
    "model.layers.16.post_attention_layernorm.weight": "model-00003-of-00006.safetensors",
    "model.layers.16.self_attn.k_proj.bias": "model-00003-of-00006.safetensors",
    "model.layers.16.self_attn.k_proj.weight": "model-00003-of-00006.safetensors",
    "model.layers.16.self_attn.o_proj.weight": "model-00003-of-00006.safetensors",
    "model.layers.16.self_attn.q_proj.bias": "model-00003-of-00006.safetensors",
    "model.layers.16.self_attn.q_proj.weight": "model-00003-of-00006.safetensors",
    "model.layers.16.self_attn.v_proj.bias": "model-00003-of-00006.safetensors",
    "model.layers.16.self_attn.v_proj.weight": "model-00003-of-00006.safetensors",
    "model.layers.17.input_layernorm.weight": "model-00003-of-00006.safetensors",
    "model.layers.17.mlp.down_proj.weight": "model-00003-of-00006.safetensors",
    "model.layers.17.mlp.gate_proj.weight": "model-00003-of-00006.safetensors",
    "model.layers.17.mlp.up_proj.weight": "model-00003-of-00006.safetensors",
    "model.layers.17.post_attention_layernorm.weight": "model-00003-of-00006.safetensors",
    "model.layers.17.self_attn.k_proj.bias": "model-00003-of-00006.safetensors",
    "model.layers.17.self_attn.k_proj.weight": "model-00003-of-00006.safetensors",
    "model.layers.17.self_attn.o_proj.weight": "model-00003-of-00006.safetensors",
    "model.layers.17.self_attn.q_proj.bias": "model-00003-of-00006.safetensors",
    "model.layers.17.self_attn.q_proj.weight": "model-00003-of-00006.safetensors",
    "model.layers.17.self_attn.v_proj.bias": "model-00003-of-00006.safetensors",
    "model.layers.17.self_attn.v_proj.weight": "model-00003-of-00006.safetensors",
    "model.layers.18.input_layernorm.weight": "model-00003-of-00006.safetensors",
    "model.layers.18.mlp.down_proj.weight": "model-00003-of-00006.safetensors",
    "model.layers.18.mlp.gate_proj.weight": "model-00003-of-00006.safetensors",
    "model.layers.18.mlp.up_proj.weight": "model-00003-of-00006.safetensors",
    "model.layers.18.post_attention_layernorm.weight": "model-00003-of-00006.safetensors",
    "model.layers.18.self_attn.k_proj.bias": "model-00003-of-00006.safetensors",
    "model.layers.18.self_attn.k_proj.weight": "model-00003-of-00006.safetensors",
    "model.layers.18.self_attn.o_proj.weight": "model-00003-of-00006.safetensors",
    "model.layers.18.self_attn.q_proj.bias": "model-00003-of-00006.safetensors",
    "model.layers.18.self_attn.q_proj.weight": "model-00003-of-00006.safetensors",
    "model.layers.18.self_attn.v_proj.bias": "model-00003-of-00006.safetensors",
    "model.layers.18.self_attn.v_proj.weight": "model-00003-of-00006.safetensors",
    "model.layers.19.input_layernorm.weight": "model-00003-of-00006.safetensors",
    "model.layers.19.mlp.down_proj.weight": "model-00003-of-00006.safetensors",
    "model.layers.19.mlp.gate_proj.weight": "model-00003-of-00006.safetensors",
    "model.layers.19.mlp.up_proj.weight": "model-00003-of-00006.safetensors",
    "model.layers.19.post_attention_layernorm.weight": "model-00003-of-00006.safetensors",
    "model.layers.19.self_attn.k_proj.bias": "model-00003-of-00006.safetensors",
    "model.layers.19.self_attn.k_proj.weight": "model-00003-of-00006.safetensors",
    "model.layers.19.self_attn.o_proj.weight": "model-00003-of-00006.safetensors",
    "model.layers.19.self_attn.q_proj.bias": "model-00003-of-00006.safetensors",
    "model.layers.19.self_attn.q_proj.weight": "model-00003-of-00006.safetensors",
    "model.layers.19.self_attn.v_proj.bias": "model-00003-of-00006.safetensors",
    "model.layers.19.self_attn.v_proj.weight": "model-00003-of-00006.safetensors",
    "model.layers.2.input_layernorm.weight": "model-00001-of-00006.safetensors",
    "model.layers.2.mlp.down_proj.weight": "model-00001-of-00006.safetensors",
    "model.layers.2.mlp.gate_proj.weight": "model-00001-of-00006.safetensors",
    "model.layers.2.mlp.up_proj.weight": "model-00001-of-00006.safetensors",
    "model.layers.2.post_attention_layernorm.weight": "model-00001-of-00006.safetensors",
    "model.layers.2.self_attn.k_proj.bias": "model-00001-of-00006.safetensors",
    "model.layers.2.self_attn.k_proj.weight": "model-00001-of-00006.safetensors",
    "model.layers.2.self_attn.o_proj.weight": "model-00001-of-00006.safetensors",
    "model.layers.2.self_attn.q_proj.bias": "model-00001-of-00006.safetensors",
    "model.layers.2.self_attn.q_proj.weight": "model-00001-of-00006.safetensors",
    "model.layers.2.self_attn.v_proj.bias": "model-00001-of-00006.safetensors",
    "model.layers.2.self_attn.v_proj.weight": "model-00001-of-00006.safetensors",
    "model.layers.20.input_layernorm.weight": "model-00003-of-00006.safetensors",
    "model.layers.20.mlp.down_proj.weight": "model-00003-of-00006.safetensors",
    "model.layers.20.mlp.gate_proj.weight": "model-00003-of-00006.safetensors",
    "model.layers.20.mlp.up_proj.weight": "model-00003-of-00006.safetensors",
    "model.layers.20.post_attention_layernorm.weight": "model-00003-of-00006.safetensors",
    "model.layers.20.self_attn.k_proj.bias": "model-00003-of-00006.safetensors",
    "model.layers.20.self_attn.k_proj.weight": "model-00003-of-00006.safetensors",
    "model.layers.20.self_attn.o_proj.weight": "model-00003-of-00006.safetensors",
    "model.layers.20.self_attn.q_proj.bias": "model-00003-of-00006.safetensors",
    "model.layers.20.self_attn.q_proj.weight": "model-00003-of-00006.safetensors",
    "model.layers.20.self_attn.v_proj.bias": "model-00003-of-00006.safetensors",
    "model.layers.20.self_attn.v_proj.weight": "model-00003-of-00006.safetensors",
    "model.layers.21.input_layernorm.weight": "model-00004-of-00006.safetensors",
    "model.layers.21.mlp.down_proj.weight": "model-00004-of-00006.safetensors",
    "model.layers.21.mlp.gate_proj.weight": "model-00004-of-00006.safetensors",
    "model.layers.21.mlp.up_proj.weight": "model-00004-of-00006.safetensors",
    "model.layers.21.post_attention_layernorm.weight": "model-00004-of-00006.safetensors",
    "model.layers.21.self_attn.k_proj.bias": "model-00003-of-00006.safetensors",
    "model.layers.21.self_attn.k_proj.weight": "model-00003-of-00006.safetensors",
    "model.layers.21.self_attn.o_proj.weight": "model-00004-of-00006.safetensors",
    "model.layers.21.self_attn.q_proj.bias": "model-00003-of-00006.safetensors",
    "model.layers.21.self_attn.q_proj.weight": "model-00003-of-00006.safetensors",
    "model.layers.21.self_attn.v_proj.bias": "model-00004-of-00006.safetensors",
    "model.layers.21.self_attn.v_proj.weight": "model-00004-of-00006.safetensors",
    "model.layers.22.input_layernorm.weight": "model-00004-of-00006.safetensors",
    "model.layers.22.mlp.down_proj.weight": "model-00004-of-00006.safetensors",
    "model.layers.22.mlp.gate_proj.weight": "model-00004-of-00006.safetensors",
    "model.layers.22.mlp.up_proj.weight": "model-00004-of-00006.safetensors",
    "model.layers.22.post_attention_layernorm.weight": "model-00004-of-00006.safetensors",
    "model.layers.22.self_attn.k_proj.bias": "model-00004-of-00006.safetensors",
    "model.layers.22.self_attn.k_proj.weight": "model-00004-of-00006.safetensors",
    "model.layers.22.self_attn.o_proj.weight": "model-00004-of-00006.safetensors",
    "model.layers.22.self_attn.q_proj.bias": "model-00004-of-00006.safetensors",
    "model.layers.22.self_attn.q_proj.weight": "model-00004-of-00006.safetensors",
    "model.layers.22.self_attn.v_proj.bias": "model-00004-of-00006.safetensors",
    "model.layers.22.self_attn.v_proj.weight": "model-00004-of-00006.safetensors",
    "model.layers.23.input_layernorm.weight": "model-00004-of-00006.safetensors",
    "model.layers.23.mlp.down_proj.weight": "model-00004-of-00006.safetensors",
    "model.layers.23.mlp.gate_proj.weight": "model-00004-of-00006.safetensors",
    "model.layers.23.mlp.up_proj.weight": "model-00004-of-00006.safetensors",
    "model.layers.23.post_attention_layernorm.weight": "model-00004-of-00006.safetensors",
    "model.layers.23.self_attn.k_proj.bias": "model-00004-of-00006.safetensors",
    "model.layers.23.self_attn.k_proj.weight": "model-00004-of-00006.safetensors",
    "model.layers.23.self_attn.o_proj.weight": "model-00004-of-00006.safetensors",
    "model.layers.23.self_attn.q_proj.bias": "model-00004-of-00006.safetensors",
    "model.layers.23.self_attn.q_proj.weight": "model-00004-of-00006.safetensors",
    "model.layers.23.self_attn.v_proj.bias": "model-00004-of-00006.safetensors",
    "model.layers.23.self_attn.v_proj.weight": "model-00004-of-00006.safetensors",
    "model.layers.24.input_layernorm.weight": "model-00004-of-00006.safetensors",
    "model.layers.24.mlp.down_proj.weight": "model-00004-of-00006.safetensors",
    "model.layers.24.mlp.gate_proj.weight": "model-00004-of-00006.safetensors",
    "model.layers.24.mlp.up_proj.weight": "model-00004-of-00006.safetensors",
    "model.layers.24.post_attention_layernorm.weight": "model-00004-of-00006.safetensors",
    "model.layers.24.self_attn.k_proj.bias": "model-00004-of-00006.safetensors",
    "model.layers.24.self_attn.k_proj.weight": "model-00004-of-00006.safetensors",
    "model.layers.24.self_attn.o_proj.weight": "model-00004-of-00006.safetensors",
    "model.layers.24.self_attn.q_proj.bias": "model-00004-of-00006.safetensors",
    "model.layers.24.self_attn.q_proj.weight": "model-00004-of-00006.safetensors",
    "model.layers.24.self_attn.v_proj.bias": "model-00004-of-00006.safetensors",
    "model.layers.24.self_attn.v_proj.weight": "model-00004-of-00006.safetensors",
    "model.layers.25.input_layernorm.weight": "model-00004-of-00006.safetensors",
    "model.layers.25.mlp.down_proj.weight": "model-00004-of-00006.safetensors",
    "model.layers.25.mlp.gate_proj.weight": "model-00004-of-00006.safetensors",
    "model.layers.25.mlp.up_proj.weight": "model-00004-of-00006.safetensors",
    "model.layers.25.post_attention_layernorm.weight": "model-00004-of-00006.safetensors",
    "model.layers.25.self_attn.k_proj.bias": "model-00004-of-00006.safetensors",
    "model.layers.25.self_attn.k_proj.weight": "model-00004-of-00006.safetensors",
    "model.layers.25.self_attn.o_proj.weight": "model-00004-of-00006.safetensors",
    "model.layers.25.self_attn.q_proj.bias": "model-00004-of-00006.safetensors",
    "model.layers.25.self_attn.q_proj.weight": "model-00004-of-00006.safetensors",
    "model.layers.25.self_attn.v_proj.bias": "model-00004-of-00006.safetensors",
    "model.layers.25.self_attn.v_proj.weight": "model-00004-of-00006.safetensors",
    "model.layers.26.input_layernorm.weight": "model-00004-of-00006.safetensors",
    "model.layers.26.mlp.down_proj.weight": "model-00004-of-00006.safetensors",
    "model.layers.26.mlp.gate_proj.weight": "model-00004-of-00006.safetensors",
    "model.layers.26.mlp.up_proj.weight": "model-00004-of-00006.safetensors",
    "model.layers.26.post_attention_layernorm.weight": "model-00004-of-00006.safetensors",
    "model.layers.26.self_attn.k_proj.bias": "model-00004-of-00006.safetensors",
    "model.layers.26.self_attn.k_proj.weight": "model-00004-of-00006.safetensors",
    "model.layers.26.self_attn.o_proj.weight": "model-00004-of-00006.safetensors",
    "model.layers.26.self_attn.q_proj.bias": "model-00004-of-00006.safetensors",
    "model.layers.26.self_attn.q_proj.weight": "model-00004-of-00006.safetensors",
    "model.layers.26.self_attn.v_proj.bias": "model-00004-of-00006.safetensors",
    "model.layers.26.self_attn.v_proj.weight": "model-00004-of-00006.safetensors",
    "model.layers.27.input_layernorm.weight": "model-00004-of-00006.safetensors",
    "model.layers.27.mlp.down_proj.weight": "model-00004-of-00006.safetensors",
    "model.layers.27.mlp.gate_proj.weight": "model-00004-of-00006.safetensors",
    "model.layers.27.mlp.up_proj.weight": "model-00004-of-00006.safetensors",
    "model.layers.27.post_attention_layernorm.weight": "model-00004-of-00006.safetensors",
    "model.layers.27.self_attn.k_proj.bias": "model-00004-of-00006.safetensors",
    "model.layers.27.self_attn.k_proj.weight": "model-00004-of-00006.safetensors",
    "model.layers.27.self_attn.o_proj.weight": "model-00004-of-00006.safetensors",
    "model.layers.27.self_attn.q_proj.bias": "model-00004-of-00006.safetensors",
    "model.layers.27.self_attn.q_proj.weight": "model-00004-of-00006.safetensors",
    "model.layers.27.self_attn.v_proj.bias": "model-00004-of-00006.safetensors",
    "model.layers.27.self_attn.v_proj.weight": "model-00004-of-00006.safetensors",
    "model.layers.28.input_layernorm.weight": "model-00004-of-00006.safetensors",
    "model.layers.28.mlp.down_proj.weight": "model-00004-of-00006.safetensors",
    "model.layers.28.mlp.gate_proj.weight": "model-00004-of-00006.safetensors",
    "model.layers.28.mlp.up_proj.weight": "model-00004-of-00006.safetensors",
    "model.layers.28.post_attention_layernorm.weight": "model-00004-of-00006.safetensors",
    "model.layers.28.self_attn.k_proj.bias": "model-00004-of-00006.safetensors",
    "model.layers.28.self_attn.k_proj.weight": "model-00004-of-00006.safetensors",
    "model.layers.28.self_attn.o_proj.weight": "model-00004-of-00006.safetensors",
    "model.layers.28.self_attn.q_proj.bias": "model-00004-of-00006.safetensors",
    "model.layers.28.self_attn.q_proj.weight": "model-00004-of-00006.safetensors",
    "model.layers.28.self_attn.v_proj.bias": "model-00004-of-00006.safetensors",
    "model.layers.28.self_attn.v_proj.weight": "model-00004-of-00006.safetensors",
    "model.layers.29.input_layernorm.weight": "model-00005-of-00006.safetensors",
    "model.layers.29.mlp.down_proj.weight": "model-00005-of-00006.safetensors",
    "model.layers.29.mlp.gate_proj.weight": "model-00005-of-00006.safetensors",
    "model.layers.29.mlp.up_proj.weight": "model-00005-of-00006.safetensors",
    "model.layers.29.post_attention_layernorm.weight": "model-00005-of-00006.safetensors",
    "model.layers.29.self_attn.k_proj.bias": "model-00005-of-00006.safetensors",
    "model.layers.29.self_attn.k_proj.weight": "model-00005-of-00006.safetensors",
    "model.layers.29.self_attn.o_proj.weight": "model-00005-of-00006.safetensors",
    "model.layers.29.self_attn.q_proj.bias": "model-00004-of-00006.safetensors",
    "model.layers.29.self_attn.q_proj.weight": "model-00004-of-00006.safetensors",
    "model.layers.29.self_attn.v_proj.bias": "model-00005-of-00006.safetensors",
    "model.layers.29.self_attn.v_proj.weight": "model-00005-of-00006.safetensors",
    "model.layers.3.input_layernorm.weight": "model-00001-of-00006.safetensors",
    "model.layers.3.mlp.down_proj.weight": "model-00001-of-00006.safetensors",
    "model.layers.3.mlp.gate_proj.weight": "model-00001-of-00006.safetensors",
    "model.layers.3.mlp.up_proj.weight": "model-00001-of-00006.safetensors",
    "model.layers.3.post_attention_layernorm.weight": "model-00001-of-00006.safetensors",
    "model.layers.3.self_attn.k_proj.bias": "model-00001-of-00006.safetensors",
    "model.layers.3.self_attn.k_proj.weight": "model-00001-of-00006.safetensors",
    "model.layers.3.self_attn.o_proj.weight": "model-00001-of-00006.safetensors",
    "model.layers.3.self_attn.q_proj.bias": "model-00001-of-00006.safetensors",
    "model.layers.3.self_attn.q_proj.weight": "model-00001-of-00006.safetensors",
    "model.layers.3.self_attn.v_proj.bias": "model-00001-of-00006.safetensors",
    "model.layers.3.self_attn.v_proj.weight": "model-00001-of-00006.safetensors",
    "model.layers.30.input_layernorm.weight": "model-00005-of-00006.safetensors",
    "model.layers.30.mlp.down_proj.weight": "model-00005-of-00006.safetensors",
    "model.layers.30.mlp.gate_proj.weight": "model-00005-of-00006.safetensors",
    "model.layers.30.mlp.up_proj.weight": "model-00005-of-00006.safetensors",
    "model.layers.30.post_attention_layernorm.weight": "model-00005-of-00006.safetensors",
    "model.layers.30.self_attn.k_proj.bias": "model-00005-of-00006.safetensors",
    "model.layers.30.self_attn.k_proj.weight": "model-00005-of-00006.safetensors",
    "model.layers.30.self_attn.o_proj.weight": "model-00005-of-00006.safetensors",
    "model.layers.30.self_attn.q_proj.bias": "model-00005-of-00006.safetensors",
    "model.layers.30.self_attn.q_proj.weight": "model-00005-of-00006.safetensors",
    "model.layers.30.self_attn.v_proj.bias": "model-00005-of-00006.safetensors",
    "model.layers.30.self_attn.v_proj.weight": "model-00005-of-00006.safetensors",
    "model.layers.31.input_layernorm.weight": "model-00005-of-00006.safetensors",
    "model.layers.31.mlp.down_proj.weight": "model-00005-of-00006.safetensors",
    "model.layers.31.mlp.gate_proj.weight": "model-00005-of-00006.safetensors",
    "model.layers.31.mlp.up_proj.weight": "model-00005-of-00006.safetensors",
    "model.layers.31.post_attention_layernorm.weight": "model-00005-of-00006.safetensors",
    "model.layers.31.self_attn.k_proj.bias": "model-00005-of-00006.safetensors",
    "model.layers.31.self_attn.k_proj.weight": "model-00005-of-00006.safetensors",
    "model.layers.31.self_attn.o_proj.weight": "model-00005-of-00006.safetensors",
    "model.layers.31.self_attn.q_proj.bias": "model-00005-of-00006.safetensors",
    "model.layers.31.self_attn.q_proj.weight": "model-00005-of-00006.safetensors",
    "model.layers.31.self_attn.v_proj.bias": "model-00005-of-00006.safetensors",
    "model.layers.31.self_attn.v_proj.weight": "model-00005-of-00006.safetensors",
    "model.layers.32.input_layernorm.weight": "model-00005-of-00006.safetensors",
    "model.layers.32.mlp.down_proj.weight": "model-00005-of-00006.safetensors",
    "model.layers.32.mlp.gate_proj.weight": "model-00005-of-00006.safetensors",
    "model.layers.32.mlp.up_proj.weight": "model-00005-of-00006.safetensors",
    "model.layers.32.post_attention_layernorm.weight": "model-00005-of-00006.safetensors",
    "model.layers.32.self_attn.k_proj.bias": "model-00005-of-00006.safetensors",
    "model.layers.32.self_attn.k_proj.weight": "model-00005-of-00006.safetensors",
    "model.layers.32.self_attn.o_proj.weight": "model-00005-of-00006.safetensors",
    "model.layers.32.self_attn.q_proj.bias": "model-00005-of-00006.safetensors",
    "model.layers.32.self_attn.q_proj.weight": "model-00005-of-00006.safetensors",
    "model.layers.32.self_attn.v_proj.bias": "model-00005-of-00006.safetensors",
    "model.layers.32.self_attn.v_proj.weight": "model-00005-of-00006.safetensors",
    "model.layers.33.input_layernorm.weight": "model-00005-of-00006.safetensors",
    "model.layers.33.mlp.down_proj.weight": "model-00005-of-00006.safetensors",
    "model.layers.33.mlp.gate_proj.weight": "model-00005-of-00006.safetensors",
    "model.layers.33.mlp.up_proj.weight": "model-00005-of-00006.safetensors",
    "model.layers.33.post_attention_layernorm.weight": "model-00005-of-00006.safetensors",
    "model.layers.33.self_attn.k_proj.bias": "model-00005-of-00006.safetensors",
    "model.layers.33.self_attn.k_proj.weight": "model-00005-of-00006.safetensors",
    "model.layers.33.self_attn.o_proj.weight": "model-00005-of-00006.safetensors",
    "model.layers.33.self_attn.q_proj.bias": "model-00005-of-00006.safetensors",
    "model.layers.33.self_attn.q_proj.weight": "model-00005-of-00006.safetensors",
    "model.layers.33.self_attn.v_proj.bias": "model-00005-of-00006.safetensors",
    "model.layers.33.self_attn.v_proj.weight": "model-00005-of-00006.safetensors",
    "model.layers.34.input_layernorm.weight": "model-00005-of-00006.safetensors",
    "model.layers.34.mlp.down_proj.weight": "model-00005-of-00006.safetensors",
    "model.layers.34.mlp.gate_proj.weight": "model-00005-of-00006.safetensors",
    "model.layers.34.mlp.up_proj.weight": "model-00005-of-00006.safetensors",
    "model.layers.34.post_attention_layernorm.weight": "model-00005-of-00006.safetensors",
    "model.layers.34.self_attn.k_proj.bias": "model-00005-of-00006.safetensors",
    "model.layers.34.self_attn.k_proj.weight": "model-00005-of-00006.safetensors",
    "model.layers.34.self_attn.o_proj.weight": "model-00005-of-00006.safetensors",
    "model.layers.34.self_attn.q_proj.bias": "model-00005-of-00006.safetensors",
    "model.layers.34.self_attn.q_proj.weight": "model-00005-of-00006.safetensors",
    "model.layers.34.self_attn.v_proj.bias": "model-00005-of-00006.safetensors",
    "model.layers.34.self_attn.v_proj.weight": "model-00005-of-00006.safetensors",
    "model.layers.35.input_layernorm.weight": "model-00005-of-00006.safetensors",
    "model.layers.35.mlp.down_proj.weight": "model-00005-of-00006.safetensors",
    "model.layers.35.mlp.gate_proj.weight": "model-00005-of-00006.safetensors",
    "model.layers.35.mlp.up_proj.weight": "model-00005-of-00006.safetensors",
    "model.layers.35.post_attention_layernorm.weight": "model-00005-of-00006.safetensors",
    "model.layers.35.self_attn.k_proj.bias": "model-00005-of-00006.safetensors",
    "model.layers.35.self_attn.k_proj.weight": "model-00005-of-00006.safetensors",
    "model.layers.35.self_attn.o_proj.weight": "model-00005-of-00006.safetensors",
    "model.layers.35.self_attn.q_proj.bias": "model-00005-of-00006.safetensors",
    "model.layers.35.self_attn.q_proj.weight": "model-00005-of-00006.safetensors",
    "model.layers.35.self_attn.v_proj.bias": "model-00005-of-00006.safetensors",
    "model.layers.35.self_attn.v_proj.weight": "model-00005-of-00006.safetensors",
    "model.layers.36.input_layernorm.weight": "model-00005-of-00006.safetensors",
    "model.layers.36.mlp.down_proj.weight": "model-00005-of-00006.safetensors",
    "model.layers.36.mlp.gate_proj.weight": "model-00005-of-00006.safetensors",
    "model.layers.36.mlp.up_proj.weight": "model-00005-of-00006.safetensors",
    "model.layers.36.post_attention_layernorm.weight": "model-00005-of-00006.safetensors",
    "model.layers.36.self_attn.k_proj.bias": "model-00005-of-00006.safetensors",
    "model.layers.36.self_attn.k_proj.weight": "model-00005-of-00006.safetensors",
    "model.layers.36.self_attn.o_proj.weight": "model-00005-of-00006.safetensors",
    "model.layers.36.self_attn.q_proj.bias": "model-00005-of-00006.safetensors",
    "model.layers.36.self_attn.q_proj.weight": "model-00005-of-00006.safetensors",
    "model.layers.36.self_attn.v_proj.bias": "model-00005-of-00006.safetensors",
    "model.layers.36.self_attn.v_proj.weight": "model-00005-of-00006.safetensors",
    "model.layers.37.input_layernorm.weight": "model-00006-of-00006.safetensors",
    "model.layers.37.mlp.down_proj.weight": "model-00006-of-00006.safetensors",
    "model.layers.37.mlp.gate_proj.weight": "model-00006-of-00006.safetensors",
    "model.layers.37.mlp.up_proj.weight": "model-00006-of-00006.safetensors",
    "model.layers.37.post_attention_layernorm.weight": "model-00006-of-00006.safetensors",
    "model.layers.37.self_attn.k_proj.bias": "model-00006-of-00006.safetensors",
    "model.layers.37.self_attn.k_proj.weight": "model-00006-of-00006.safetensors",
    "model.layers.37.self_attn.o_proj.weight": "model-00006-of-00006.safetensors",
    "model.layers.37.self_attn.q_proj.bias": "model-00006-of-00006.safetensors",
    "model.layers.37.self_attn.q_proj.weight": "model-00006-of-00006.safetensors",
    "model.layers.37.self_attn.v_proj.bias": "model-00006-of-00006.safetensors",
    "model.layers.37.self_attn.v_proj.weight": "model-00006-of-00006.safetensors",
    "model.layers.38.input_layernorm.weight": "model-00006-of-00006.safetensors",
    "model.layers.38.mlp.down_proj.weight": "model-00006-of-00006.safetensors",
    "model.layers.38.mlp.gate_proj.weight": "model-00006-of-00006.safetensors",
    "model.layers.38.mlp.up_proj.weight": "model-00006-of-00006.safetensors",
    "model.layers.38.post_attention_layernorm.weight": "model-00006-of-00006.safetensors",
    "model.layers.38.self_attn.k_proj.bias": "model-00006-of-00006.safetensors",
    "model.layers.38.self_attn.k_proj.weight": "model-00006-of-00006.safetensors",
    "model.layers.38.self_attn.o_proj.weight": "model-00006-of-00006.safetensors",
    "model.layers.38.self_attn.q_proj.bias": "model-00006-of-00006.safetensors",
    "model.layers.38.self_attn.q_proj.weight": "model-00006-of-00006.safetensors",
    "model.layers.38.self_attn.v_proj.bias": "model-00006-of-00006.safetensors",
    "model.layers.38.self_attn.v_proj.weight": "model-00006-of-00006.safetensors",
    "model.layers.39.input_layernorm.weight": "model-00006-of-00006.safetensors",
    "model.layers.39.mlp.down_proj.weight": "model-00006-of-00006.safetensors",
    "model.layers.39.mlp.gate_proj.weight": "model-00006-of-00006.safetensors",
    "model.layers.39.mlp.up_proj.weight": "model-00006-of-00006.safetensors",
    "model.layers.39.post_attention_layernorm.weight": "model-00006-of-00006.safetensors",
    "model.layers.39.self_attn.k_proj.bias": "model-00006-of-00006.safetensors",
    "model.layers.39.self_attn.k_proj.weight": "model-00006-of-00006.safetensors",
    "model.layers.39.self_attn.o_proj.weight": "model-00006-of-00006.safetensors",
    "model.layers.39.self_attn.q_proj.bias": "model-00006-of-00006.safetensors",
    "model.layers.39.self_attn.q_proj.weight": "model-00006-of-00006.safetensors",
    "model.layers.39.self_attn.v_proj.bias": "model-00006-of-00006.safetensors",
    "model.layers.39.self_attn.v_proj.weight": "model-00006-of-00006.safetensors",
    "model.layers.4.input_layernorm.weight": "model-00001-of-00006.safetensors",
    "model.layers.4.mlp.down_proj.weight": "model-00001-of-00006.safetensors",
    "model.layers.4.mlp.gate_proj.weight": "model-00001-of-00006.safetensors",
    "model.layers.4.mlp.up_proj.weight": "model-00001-of-00006.safetensors",
    "model.layers.4.post_attention_layernorm.weight": "model-00001-of-00006.safetensors",
    "model.layers.4.self_attn.k_proj.bias": "model-00001-of-00006.safetensors",
    "model.layers.4.self_attn.k_proj.weight": "model-00001-of-00006.safetensors",
    "model.layers.4.self_attn.o_proj.weight": "model-00001-of-00006.safetensors",
    "model.layers.4.self_attn.q_proj.bias": "model-00001-of-00006.safetensors",
    "model.layers.4.self_attn.q_proj.weight": "model-00001-of-00006.safetensors",
    "model.layers.4.self_attn.v_proj.bias": "model-00001-of-00006.safetensors",
    "model.layers.4.self_attn.v_proj.weight": "model-00001-of-00006.safetensors",
    "model.layers.5.input_layernorm.weight": "model-00002-of-00006.safetensors",
    "model.layers.5.mlp.down_proj.weight": "model-00002-of-00006.safetensors",
    "model.layers.5.mlp.gate_proj.weight": "model-00002-of-00006.safetensors",
    "model.layers.5.mlp.up_proj.weight": "model-00002-of-00006.safetensors",
    "model.layers.5.post_attention_layernorm.weight": "model-00002-of-00006.safetensors",
    "model.layers.5.self_attn.k_proj.bias": "model-00001-of-00006.safetensors",
    "model.layers.5.self_attn.k_proj.weight": "model-00001-of-00006.safetensors",
    "model.layers.5.self_attn.o_proj.weight": "model-00001-of-00006.safetensors",
    "model.layers.5.self_attn.q_proj.bias": "model-00001-of-00006.safetensors",
    "model.layers.5.self_attn.q_proj.weight": "model-00001-of-00006.safetensors",
    "model.layers.5.self_attn.v_proj.bias": "model-00001-of-00006.safetensors",
    "model.layers.5.self_attn.v_proj.weight": "model-00001-of-00006.safetensors",
    "model.layers.6.input_layernorm.weight": "model-00002-of-00006.safetensors",
    "model.layers.6.mlp.down_proj.weight": "model-00002-of-00006.safetensors",
    "model.layers.6.mlp.gate_proj.weight": "model-00002-of-00006.safetensors",
    "model.layers.6.mlp.up_proj.weight": "model-00002-of-00006.safetensors",
    "model.layers.6.post_attention_layernorm.weight": "model-00002-of-00006.safetensors",
    "model.layers.6.self_attn.k_proj.bias": "model-00002-of-00006.safetensors",
    "model.layers.6.self_attn.k_proj.weight": "model-00002-of-00006.safetensors",
    "model.layers.6.self_attn.o_proj.weight": "model-00002-of-00006.safetensors",
    "model.layers.6.self_attn.q_proj.bias": "model-00002-of-00006.safetensors",
    "model.layers.6.self_attn.q_proj.weight": "model-00002-of-00006.safetensors",
    "model.layers.6.self_attn.v_proj.bias": "model-00002-of-00006.safetensors",
    "model.layers.6.self_attn.v_proj.weight": "model-00002-of-00006.safetensors",
    "model.layers.7.input_layernorm.weight": "model-00002-of-00006.safetensors",
    "model.layers.7.mlp.down_proj.weight": "model-00002-of-00006.safetensors",
    "model.layers.7.mlp.gate_proj.weight": "model-00002-of-00006.safetensors",
    "model.layers.7.mlp.up_proj.weight": "model-00002-of-00006.safetensors",
    "model.layers.7.post_attention_layernorm.weight": "model-00002-of-00006.safetensors",
    "model.layers.7.self_attn.k_proj.bias": "model-00002-of-00006.safetensors",
    "model.layers.7.self_attn.k_proj.weight": "model-00002-of-00006.safetensors",
    "model.layers.7.self_attn.o_proj.weight": "model-00002-of-00006.safetensors",
    "model.layers.7.self_attn.q_proj.bias": "model-00002-of-00006.safetensors",
    "model.layers.7.self_attn.q_proj.weight": "model-00002-of-00006.safetensors",
    "model.layers.7.self_attn.v_proj.bias": "model-00002-of-00006.safetensors",
    "model.layers.7.self_attn.v_proj.weight": "model-00002-of-00006.safetensors",
    "model.layers.8.input_layernorm.weight": "model-00002-of-00006.safetensors",
    "model.layers.8.mlp.down_proj.weight": "model-00002-of-00006.safetensors",
    "model.layers.8.mlp.gate_proj.weight": "model-00002-of-00006.safetensors",
    "model.layers.8.mlp.up_proj.weight": "model-00002-of-00006.safetensors",
    "model.layers.8.post_attention_layernorm.weight": "model-00002-of-00006.safetensors",
    "model.layers.8.self_attn.k_proj.bias": "model-00002-of-00006.safetensors",
    "model.layers.8.self_attn.k_proj.weight": "model-00002-of-00006.safetensors",
    "model.layers.8.self_attn.o_proj.weight": "model-00002-of-00006.safetensors",
    "model.layers.8.self_attn.q_proj.bias": "model-00002-of-00006.safetensors",
    "model.layers.8.self_attn.q_proj.weight": "model-00002-of-00006.safetensors",
    "model.layers.8.self_attn.v_proj.bias": "model-00002-of-00006.safetensors",
    "model.layers.8.self_attn.v_proj.weight": "model-00002-of-00006.safetensors",
    "model.layers.9.input_layernorm.weight": "model-00002-of-00006.safetensors",
    "model.layers.9.mlp.down_proj.weight": "model-00002-of-00006.safetensors",
    "model.layers.9.mlp.gate_proj.weight": "model-00002-of-00006.safetensors",
    "model.layers.9.mlp.up_proj.weight": "model-00002-of-00006.safetensors",
    "model.layers.9.post_attention_layernorm.weight": "model-00002-of-00006.safetensors",
    "model.layers.9.self_attn.k_proj.bias": "model-00002-of-00006.safetensors",
    "model.layers.9.self_attn.k_proj.weight": "model-00002-of-00006.safetensors",
    "model.layers.9.self_attn.o_proj.weight": "model-00002-of-00006.safetensors",
    "model.layers.9.self_attn.q_proj.bias": "model-00002-of-00006.safetensors",
    "model.layers.9.self_attn.q_proj.weight": "model-00002-of-00006.safetensors",
    "model.layers.9.self_attn.v_proj.bias": "model-00002-of-00006.safetensors",
    "model.layers.9.self_attn.v_proj.weight": "model-00002-of-00006.safetensors",
    "model.norm.weight": "model-00006-of-00006.safetensors"
  }
}
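The index maps every tensor name to the shard that contains it; `total_size` (28,334,581,760 bytes, roughly 28.3 GB of bfloat16 tensors) is consistent with the six shard files above. A minimal sketch, assuming a local copy of the file:

```python
# Minimal sketch: look up which shard holds a given tensor.
import json

with open("model.safetensors.index.json", "r", encoding="utf-8") as f:
    index = json.load(f)

print(index["metadata"]["total_size"])        # 28334581760
print(index["weight_map"]["lm_head.weight"])  # model-00006-of-00006.safetensors
```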
quyen.webp (binary, new file, 258 KiB)
Binary file not shown.
special_tokens_map.json (new file, 20 lines)
@@ -0,0 +1,20 @@
{
  "additional_special_tokens": [
    "<|im_start|>",
    "<|im_end|>"
  ],
  "eos_token": {
    "content": "<|im_end|>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "pad_token": {
    "content": "<|endoftext|>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  }
}
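Note that `eos_token` is `<|im_end|>` rather than `<|endoftext|>`, matching `eos_token_id` 151645 in config.json, so generation stops at the end of an assistant turn; `<|endoftext|>` serves only as the padding token. A minimal check, assuming `transformers` and the repo id shown at the top of this commit:

```python
# Minimal sketch: confirm the eos/pad assignment the map above declares.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("vilm/Quyen-Pro-v0.1")
print(tokenizer.eos_token, tokenizer.eos_token_id)  # <|im_end|> 151645
print(tokenizer.pad_token, tokenizer.pad_token_id)  # <|endoftext|> 151643
```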
tokenizer.json (new file, 303111 lines)
File diff suppressed because it is too large.
tokenizer_config.json (new file, 43 lines)
@@ -0,0 +1,43 @@
{
  "add_prefix_space": false,
  "added_tokens_decoder": {
    "151643": {
      "content": "<|endoftext|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151644": {
      "content": "<|im_start|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": false
    },
    "151645": {
      "content": "<|im_end|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    }
  },
  "additional_special_tokens": [
    "<|im_start|>",
    "<|im_end|>"
  ],
  "bos_token": null,
  "chat_template": "{% for message in messages %}{{'<|im_start|>' + message['role'] + '\n' + message['content'] + '<|im_end|>' + '\n'}}{% endfor %}{% if add_generation_prompt %}{{ '<|im_start|>assistant\n' }}{% endif %}",
  "clean_up_tokenization_spaces": false,
  "eos_token": "<|im_end|>",
  "errors": "replace",
  "model_max_length": 32768,
  "pad_token": "<|endoftext|>",
  "split_special_tokens": false,
  "tokenizer_class": "Qwen2Tokenizer",
  "unk_token": null
}
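The `chat_template` value is a Jinja template; `apply_chat_template` renders it over the message list. A minimal sketch that renders the same template directly with the `jinja2` package (an illustration of the template's output, not of how `transformers` invokes it internally):

```python
# Minimal sketch: render the ChatML template above by hand.
from jinja2 import Template

chat_template = (
    "{% for message in messages %}"
    "{{'<|im_start|>' + message['role'] + '\\n' + message['content'] + '<|im_end|>' + '\\n'}}"
    "{% endfor %}"
    "{% if add_generation_prompt %}{{ '<|im_start|>assistant\\n' }}{% endif %}"
)
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello world."},
]
print(Template(chat_template).render(messages=messages, add_generation_prompt=True))
# Prints each turn as "<|im_start|>{role}\n{content}<|im_end|>", then the
# open "<|im_start|>assistant" header for the model to complete.
```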
vocab.json (new file, 1 line)
File diff suppressed because one or more lines are too long.