Initialize the project; model provided by the ModelHub XC community
Model: haoranxu/ALMA-13B Source: Original Platform

.gitattributes (vendored)
@@ -0,0 +1,35 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text

README.md
@@ -0,0 +1,78 @@
---
license: mit
---

**ALMA** (**A**dvanced **L**anguage **M**odel-based tr**A**nslator) is an LLM-based translation model, which adopts a new translation model paradigm: it begins with fine-tuning on monolingual data and is further optimized using high-quality parallel data. This two-step fine-tuning process ensures strong translation performance.

Please find more details in our [paper](https://arxiv.org/abs/2309.11674).

```
@misc{xu2023paradigm,
      title={A Paradigm Shift in Machine Translation: Boosting Translation Performance of Large Language Models},
      author={Haoran Xu and Young Jin Kim and Amr Sharaf and Hany Hassan Awadalla},
      year={2023},
      eprint={2309.11674},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```

**[ALMA-R](https://arxiv.org/abs/2401.08417) (NEW!) is released now!** ALMA-R builds upon the ALMA models, with further LoRA fine-tuning using our proposed **Contrastive Preference Optimization (CPO)** in place of the supervised fine-tuning used in ALMA. CPO fine-tuning requires our [triplet preference data](https://huggingface.co/datasets/haoranxu/ALMA-R-Preference) for preference learning. ALMA-R can now match or even exceed GPT-4 and the WMT competition winners!

```
@misc{xu2024contrastive,
      title={Contrastive Preference Optimization: Pushing the Boundaries of LLM Performance in Machine Translation},
      author={Haoran Xu and Amr Sharaf and Yunmo Chen and Weiting Tan and Lingfeng Shen and Benjamin Van Durme and Kenton Murray and Young Jin Kim},
      year={2024},
      eprint={2401.08417},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```

We release six translation models presented in the paper:

- **ALMA-7B**: Full-weight fine-tune LLaMA-2-7B on 20B monolingual tokens, then **full-weight** fine-tune on human-written parallel data
- **ALMA-7B-LoRA**: Full-weight fine-tune LLaMA-2-7B on 20B monolingual tokens, then **LoRA** fine-tune on human-written parallel data
- **ALMA-7B-R (NEW!)**: Further LoRA fine-tuning upon ALMA-7B-LoRA with contrastive preference optimization.
- **ALMA-13B**: Full-weight fine-tune LLaMA-2-13B on 12B monolingual tokens, then **full-weight** fine-tune on human-written parallel data
- **ALMA-13B-LoRA** (our best system): Full-weight fine-tune LLaMA-2-13B on 12B monolingual tokens, then **LoRA** fine-tune on human-written parallel data
- **ALMA-13B-R (NEW!)**: Further LoRA fine-tuning upon ALMA-13B-LoRA with contrastive preference optimization.

Model checkpoints are released on Hugging Face:

| Models | Base Model Link | LoRA Link |
|:-------------:|:---------------:|:---------:|
| ALMA-7B | [haoranxu/ALMA-7B](https://huggingface.co/haoranxu/ALMA-7B) | - |
| ALMA-7B-LoRA | [haoranxu/ALMA-7B-Pretrain](https://huggingface.co/haoranxu/ALMA-7B-Pretrain) | [haoranxu/ALMA-7B-Pretrain-LoRA](https://huggingface.co/haoranxu/ALMA-7B-Pretrain-LoRA) |
| **ALMA-7B-R (NEW!)** | [haoranxu/ALMA-7B-R (LoRA merged)](https://huggingface.co/haoranxu/ALMA-7B-R) | - |
| ALMA-13B | [haoranxu/ALMA-13B](https://huggingface.co/haoranxu/ALMA-13B) | - |
| ALMA-13B-LoRA | [haoranxu/ALMA-13B-Pretrain](https://huggingface.co/haoranxu/ALMA-13B-Pretrain) | [haoranxu/ALMA-13B-Pretrain-LoRA](https://huggingface.co/haoranxu/ALMA-13B-Pretrain-LoRA) |
| **ALMA-13B-R (NEW!)** | [haoranxu/ALMA-13B-R (LoRA merged)](https://huggingface.co/haoranxu/ALMA-13B-R) | - |

**Note that `ALMA-7B-Pretrain` and `ALMA-13B-Pretrain` are NOT translation models. They only undergo stage-1 monolingual fine-tuning (20B tokens for the 7B model and 12B tokens for the 13B model), and should be used in conjunction with their LoRA models.**
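
The R models above already have their LoRA weights merged in, so no PEFT step is needed to use them; a minimal sketch (loading arguments mirror the quick start below):

```
import torch
from transformers import AutoModelForCausalLM, LlamaTokenizer

# ALMA-13B-R ships with the LoRA weights merged, so it loads like any causal LM.
model = AutoModelForCausalLM.from_pretrained("haoranxu/ALMA-13B-R", torch_dtype=torch.float16, device_map="auto")
tokenizer = LlamaTokenizer.from_pretrained("haoranxu/ALMA-13B-R", padding_side="left")
```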

Datasets used by ALMA and ALMA-R are now also released on Hugging Face (NEW!):

| Datasets | Train / Validation | Test |
|:-------------:|:---------------:|:---------:|
| Human-Written Parallel Data (ALMA) | [train and validation](https://huggingface.co/datasets/haoranxu/ALMA-Human-Parallel) | [WMT'22](https://huggingface.co/datasets/haoranxu/WMT22-Test) |
| Triplet Preference Data | [train](https://huggingface.co/datasets/haoranxu/ALMA-R-Preference) | [WMT'22](https://huggingface.co/datasets/haoranxu/WMT22-Test) and [WMT'23](https://huggingface.co/datasets/haoranxu/WMT23-Test) |
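
These can be pulled with the `datasets` library; a minimal sketch (the `zh-en` config name is an assumption — check each dataset card for the available language pairs and splits):

```
from datasets import load_dataset

# Stage-2 human-written parallel data; assumed to be organized per language pair.
parallel = load_dataset("haoranxu/ALMA-Human-Parallel", "zh-en")
# WMT'22 test sets used for evaluation.
wmt22 = load_dataset("haoranxu/WMT22-Test", "zh-en")
print(parallel)
```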

A quick start for using our best system, ALMA-13B-LoRA, for translation. An example of translating "我爱机器翻译。" into English:

```
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM
from transformers import LlamaTokenizer

# Load base model and LoRA weights
model = AutoModelForCausalLM.from_pretrained("haoranxu/ALMA-13B-Pretrain", torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(model, "haoranxu/ALMA-13B-Pretrain-LoRA")
tokenizer = LlamaTokenizer.from_pretrained("haoranxu/ALMA-13B-Pretrain", padding_side='left')

# Add the source sentence into the prompt template
prompt="Translate this from Chinese to English:\nChinese: 我爱机器翻译。\nEnglish:"
input_ids = tokenizer(prompt, return_tensors="pt", padding=True, max_length=40, truncation=True).input_ids.cuda()

# Translation
with torch.no_grad():
    generated_ids = model.generate(input_ids=input_ids, num_beams=5, max_new_tokens=20, do_sample=True, temperature=0.6, top_p=0.9)
outputs = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
print(outputs)
```

Please find more details in our [GitHub repository](https://github.com/fe1ixxu/ALMA).

config.json
@@ -0,0 +1,27 @@
{
  "_name_or_path": "./",
  "architectures": [
    "LlamaForCausalLM"
  ],
  "bos_token_id": 1,
  "eos_token_id": 2,
  "hidden_act": "silu",
  "hidden_size": 5120,
  "initializer_range": 0.02,
  "intermediate_size": 13824,
  "max_length": 512,
  "max_position_embeddings": 4096,
  "model_type": "llama",
  "num_attention_heads": 40,
  "num_hidden_layers": 40,
  "num_key_value_heads": 40,
  "pad_token_id": 0,
  "pretraining_tp": 1,
  "rms_norm_eps": 1e-05,
  "rope_scaling": null,
  "tie_word_embeddings": false,
  "torch_dtype": "float32",
  "transformers_version": "4.30.0.dev0",
  "use_cache": true,
  "vocab_size": 32000
}
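
These hyperparameters describe a standard 13B LLaMA-2 architecture (40 layers, 40 attention heads, hidden size 5120, 32000-token vocabulary). A minimal sketch of inspecting them without downloading the weights:

```
from transformers import AutoConfig

config = AutoConfig.from_pretrained("haoranxu/ALMA-13B")
# Values come from the config.json above.
print(config.num_hidden_layers, config.hidden_size, config.vocab_size)
```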

generation_config.json
@@ -0,0 +1,10 @@
{
  "bos_token_id": 1,
  "eos_token_id": 2,
  "max_length": 512,
  "pad_token_id": 0,
  "temperature": 0.9,
  "top_p": 0.6,
  "do_sample": true,
  "transformers_version": "4.30.0.dev0"
}
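
These repository-level defaults (sampling enabled, temperature 0.9, top-p 0.6, max length 512) apply whenever `generate()` is called without explicit arguments; the README's quick start overrides them per call. A minimal sketch of inspecting them:

```
from transformers import GenerationConfig

gen_config = GenerationConfig.from_pretrained("haoranxu/ALMA-13B")
# Defaults from generation_config.json; keywords passed to model.generate() take precedence.
print(gen_config.do_sample, gen_config.temperature, gen_config.top_p)
```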

pytorch_model-00001-of-00006.bin
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:786cdad44fe92bbee5b923d8363117163e9411b94ec1d8ee124aff40dcb7457b
size 9956543883

pytorch_model-00002-of-00006.bin
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7e21a9b14a4c4d4d5cd758454cdd5a57a1825d66786bc203c4c932bd1ad38eb1
size 9940856385

pytorch_model-00003-of-00006.bin
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a88ff5ccb30d0b40c14e61e2a54a26810dee5eab04803c27a80edda4e9286fe3
size 9940856943

pytorch_model-00004-of-00006.bin
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:2c0deb589bef9936a6f1d75db618f526bbba34bac5df2ca6a0de3280e1a28324
size 9867415289

pytorch_model-00005-of-00006.bin
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:b9f24be125c47cdd3424a89cb7c2ea55c7fa33935cd7775c67092b60d7fc8c10
size 9867456961

pytorch_model-00006-of-00006.bin
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:9e448a9ba7f97e335a52cbc044abc0912327836c860c994686c5199eea3c134e
size 2490476207
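
Each weight shard above is committed as a Git LFS pointer: the repository stores only the object's sha256 and byte size, while the ~52 GB of actual weights live in LFS storage (the `.gitattributes` patterns at the top route `*.bin` through LFS). A minimal sketch of verifying a downloaded shard against its pointer (the local file path is an assumption):

```
import hashlib

def lfs_sha256(path, chunk_size=1 << 20):
    """Stream a file and return its sha256 hex digest, as recorded in the LFS pointer."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Should print the oid recorded above: 786cdad44fe9...
print(lfs_sha256("pytorch_model-00001-of-00006.bin"))
```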

pytorch_model.bin.index.json
@@ -0,0 +1,410 @@
{
  "metadata": {
    "total_size": 52063467520
  },
  "weight_map": {
    "lm_head.weight": "pytorch_model-00006-of-00006.bin",
    "model.embed_tokens.weight": "pytorch_model-00001-of-00006.bin",
    "model.layers.0.input_layernorm.weight": "pytorch_model-00001-of-00006.bin",
    "model.layers.0.mlp.down_proj.weight": "pytorch_model-00001-of-00006.bin",
    "model.layers.0.mlp.gate_proj.weight": "pytorch_model-00001-of-00006.bin",
    "model.layers.0.mlp.up_proj.weight": "pytorch_model-00001-of-00006.bin",
    "model.layers.0.post_attention_layernorm.weight": "pytorch_model-00001-of-00006.bin",
    "model.layers.0.self_attn.k_proj.weight": "pytorch_model-00001-of-00006.bin",
    "model.layers.0.self_attn.o_proj.weight": "pytorch_model-00001-of-00006.bin",
    "model.layers.0.self_attn.q_proj.weight": "pytorch_model-00001-of-00006.bin",
    "model.layers.0.self_attn.rotary_emb.inv_freq": "pytorch_model-00001-of-00006.bin",
    "model.layers.0.self_attn.v_proj.weight": "pytorch_model-00001-of-00006.bin",
    "model.layers.1.input_layernorm.weight": "pytorch_model-00001-of-00006.bin",
    "model.layers.1.mlp.down_proj.weight": "pytorch_model-00001-of-00006.bin",
    "model.layers.1.mlp.gate_proj.weight": "pytorch_model-00001-of-00006.bin",
    "model.layers.1.mlp.up_proj.weight": "pytorch_model-00001-of-00006.bin",
    "model.layers.1.post_attention_layernorm.weight": "pytorch_model-00001-of-00006.bin",
    "model.layers.1.self_attn.k_proj.weight": "pytorch_model-00001-of-00006.bin",
    "model.layers.1.self_attn.o_proj.weight": "pytorch_model-00001-of-00006.bin",
    "model.layers.1.self_attn.q_proj.weight": "pytorch_model-00001-of-00006.bin",
    "model.layers.1.self_attn.rotary_emb.inv_freq": "pytorch_model-00001-of-00006.bin",
    "model.layers.1.self_attn.v_proj.weight": "pytorch_model-00001-of-00006.bin",
    "model.layers.10.input_layernorm.weight": "pytorch_model-00002-of-00006.bin",
    "model.layers.10.mlp.down_proj.weight": "pytorch_model-00002-of-00006.bin",
    "model.layers.10.mlp.gate_proj.weight": "pytorch_model-00002-of-00006.bin",
    "model.layers.10.mlp.up_proj.weight": "pytorch_model-00002-of-00006.bin",
    "model.layers.10.post_attention_layernorm.weight": "pytorch_model-00002-of-00006.bin",
    "model.layers.10.self_attn.k_proj.weight": "pytorch_model-00002-of-00006.bin",
    "model.layers.10.self_attn.o_proj.weight": "pytorch_model-00002-of-00006.bin",
    "model.layers.10.self_attn.q_proj.weight": "pytorch_model-00002-of-00006.bin",
    "model.layers.10.self_attn.rotary_emb.inv_freq": "pytorch_model-00002-of-00006.bin",
    "model.layers.10.self_attn.v_proj.weight": "pytorch_model-00002-of-00006.bin",
    "model.layers.11.input_layernorm.weight": "pytorch_model-00002-of-00006.bin",
    "model.layers.11.mlp.down_proj.weight": "pytorch_model-00002-of-00006.bin",
    "model.layers.11.mlp.gate_proj.weight": "pytorch_model-00002-of-00006.bin",
    "model.layers.11.mlp.up_proj.weight": "pytorch_model-00002-of-00006.bin",
    "model.layers.11.post_attention_layernorm.weight": "pytorch_model-00002-of-00006.bin",
    "model.layers.11.self_attn.k_proj.weight": "pytorch_model-00002-of-00006.bin",
    "model.layers.11.self_attn.o_proj.weight": "pytorch_model-00002-of-00006.bin",
    "model.layers.11.self_attn.q_proj.weight": "pytorch_model-00002-of-00006.bin",
    "model.layers.11.self_attn.rotary_emb.inv_freq": "pytorch_model-00002-of-00006.bin",
    "model.layers.11.self_attn.v_proj.weight": "pytorch_model-00002-of-00006.bin",
    "model.layers.12.input_layernorm.weight": "pytorch_model-00002-of-00006.bin",
    "model.layers.12.mlp.down_proj.weight": "pytorch_model-00002-of-00006.bin",
    "model.layers.12.mlp.gate_proj.weight": "pytorch_model-00002-of-00006.bin",
    "model.layers.12.mlp.up_proj.weight": "pytorch_model-00002-of-00006.bin",
    "model.layers.12.post_attention_layernorm.weight": "pytorch_model-00002-of-00006.bin",
    "model.layers.12.self_attn.k_proj.weight": "pytorch_model-00002-of-00006.bin",
    "model.layers.12.self_attn.o_proj.weight": "pytorch_model-00002-of-00006.bin",
    "model.layers.12.self_attn.q_proj.weight": "pytorch_model-00002-of-00006.bin",
    "model.layers.12.self_attn.rotary_emb.inv_freq": "pytorch_model-00002-of-00006.bin",
    "model.layers.12.self_attn.v_proj.weight": "pytorch_model-00002-of-00006.bin",
    "model.layers.13.input_layernorm.weight": "pytorch_model-00002-of-00006.bin",
    "model.layers.13.mlp.down_proj.weight": "pytorch_model-00002-of-00006.bin",
    "model.layers.13.mlp.gate_proj.weight": "pytorch_model-00002-of-00006.bin",
    "model.layers.13.mlp.up_proj.weight": "pytorch_model-00002-of-00006.bin",
    "model.layers.13.post_attention_layernorm.weight": "pytorch_model-00002-of-00006.bin",
    "model.layers.13.self_attn.k_proj.weight": "pytorch_model-00002-of-00006.bin",
    "model.layers.13.self_attn.o_proj.weight": "pytorch_model-00002-of-00006.bin",
    "model.layers.13.self_attn.q_proj.weight": "pytorch_model-00002-of-00006.bin",
    "model.layers.13.self_attn.rotary_emb.inv_freq": "pytorch_model-00002-of-00006.bin",
    "model.layers.13.self_attn.v_proj.weight": "pytorch_model-00002-of-00006.bin",
    "model.layers.14.input_layernorm.weight": "pytorch_model-00002-of-00006.bin",
    "model.layers.14.mlp.down_proj.weight": "pytorch_model-00002-of-00006.bin",
    "model.layers.14.mlp.gate_proj.weight": "pytorch_model-00002-of-00006.bin",
    "model.layers.14.mlp.up_proj.weight": "pytorch_model-00002-of-00006.bin",
    "model.layers.14.post_attention_layernorm.weight": "pytorch_model-00002-of-00006.bin",
    "model.layers.14.self_attn.k_proj.weight": "pytorch_model-00002-of-00006.bin",
    "model.layers.14.self_attn.o_proj.weight": "pytorch_model-00002-of-00006.bin",
    "model.layers.14.self_attn.q_proj.weight": "pytorch_model-00002-of-00006.bin",
    "model.layers.14.self_attn.rotary_emb.inv_freq": "pytorch_model-00002-of-00006.bin",
    "model.layers.14.self_attn.v_proj.weight": "pytorch_model-00002-of-00006.bin",
    "model.layers.15.input_layernorm.weight": "pytorch_model-00003-of-00006.bin",
    "model.layers.15.mlp.down_proj.weight": "pytorch_model-00003-of-00006.bin",
    "model.layers.15.mlp.gate_proj.weight": "pytorch_model-00003-of-00006.bin",
    "model.layers.15.mlp.up_proj.weight": "pytorch_model-00003-of-00006.bin",
    "model.layers.15.post_attention_layernorm.weight": "pytorch_model-00003-of-00006.bin",
    "model.layers.15.self_attn.k_proj.weight": "pytorch_model-00002-of-00006.bin",
    "model.layers.15.self_attn.o_proj.weight": "pytorch_model-00003-of-00006.bin",
    "model.layers.15.self_attn.q_proj.weight": "pytorch_model-00002-of-00006.bin",
    "model.layers.15.self_attn.rotary_emb.inv_freq": "pytorch_model-00003-of-00006.bin",
    "model.layers.15.self_attn.v_proj.weight": "pytorch_model-00003-of-00006.bin",
    "model.layers.16.input_layernorm.weight": "pytorch_model-00003-of-00006.bin",
    "model.layers.16.mlp.down_proj.weight": "pytorch_model-00003-of-00006.bin",
    "model.layers.16.mlp.gate_proj.weight": "pytorch_model-00003-of-00006.bin",
    "model.layers.16.mlp.up_proj.weight": "pytorch_model-00003-of-00006.bin",
    "model.layers.16.post_attention_layernorm.weight": "pytorch_model-00003-of-00006.bin",
    "model.layers.16.self_attn.k_proj.weight": "pytorch_model-00003-of-00006.bin",
    "model.layers.16.self_attn.o_proj.weight": "pytorch_model-00003-of-00006.bin",
    "model.layers.16.self_attn.q_proj.weight": "pytorch_model-00003-of-00006.bin",
    "model.layers.16.self_attn.rotary_emb.inv_freq": "pytorch_model-00003-of-00006.bin",
    "model.layers.16.self_attn.v_proj.weight": "pytorch_model-00003-of-00006.bin",
    "model.layers.17.input_layernorm.weight": "pytorch_model-00003-of-00006.bin",
    "model.layers.17.mlp.down_proj.weight": "pytorch_model-00003-of-00006.bin",
    "model.layers.17.mlp.gate_proj.weight": "pytorch_model-00003-of-00006.bin",
    "model.layers.17.mlp.up_proj.weight": "pytorch_model-00003-of-00006.bin",
    "model.layers.17.post_attention_layernorm.weight": "pytorch_model-00003-of-00006.bin",
    "model.layers.17.self_attn.k_proj.weight": "pytorch_model-00003-of-00006.bin",
    "model.layers.17.self_attn.o_proj.weight": "pytorch_model-00003-of-00006.bin",
    "model.layers.17.self_attn.q_proj.weight": "pytorch_model-00003-of-00006.bin",
    "model.layers.17.self_attn.rotary_emb.inv_freq": "pytorch_model-00003-of-00006.bin",
    "model.layers.17.self_attn.v_proj.weight": "pytorch_model-00003-of-00006.bin",
    "model.layers.18.input_layernorm.weight": "pytorch_model-00003-of-00006.bin",
    "model.layers.18.mlp.down_proj.weight": "pytorch_model-00003-of-00006.bin",
    "model.layers.18.mlp.gate_proj.weight": "pytorch_model-00003-of-00006.bin",
    "model.layers.18.mlp.up_proj.weight": "pytorch_model-00003-of-00006.bin",
    "model.layers.18.post_attention_layernorm.weight": "pytorch_model-00003-of-00006.bin",
    "model.layers.18.self_attn.k_proj.weight": "pytorch_model-00003-of-00006.bin",
    "model.layers.18.self_attn.o_proj.weight": "pytorch_model-00003-of-00006.bin",
    "model.layers.18.self_attn.q_proj.weight": "pytorch_model-00003-of-00006.bin",
    "model.layers.18.self_attn.rotary_emb.inv_freq": "pytorch_model-00003-of-00006.bin",
    "model.layers.18.self_attn.v_proj.weight": "pytorch_model-00003-of-00006.bin",
    "model.layers.19.input_layernorm.weight": "pytorch_model-00003-of-00006.bin",
    "model.layers.19.mlp.down_proj.weight": "pytorch_model-00003-of-00006.bin",
    "model.layers.19.mlp.gate_proj.weight": "pytorch_model-00003-of-00006.bin",
    "model.layers.19.mlp.up_proj.weight": "pytorch_model-00003-of-00006.bin",
    "model.layers.19.post_attention_layernorm.weight": "pytorch_model-00003-of-00006.bin",
    "model.layers.19.self_attn.k_proj.weight": "pytorch_model-00003-of-00006.bin",
    "model.layers.19.self_attn.o_proj.weight": "pytorch_model-00003-of-00006.bin",
    "model.layers.19.self_attn.q_proj.weight": "pytorch_model-00003-of-00006.bin",
    "model.layers.19.self_attn.rotary_emb.inv_freq": "pytorch_model-00003-of-00006.bin",
    "model.layers.19.self_attn.v_proj.weight": "pytorch_model-00003-of-00006.bin",
    "model.layers.2.input_layernorm.weight": "pytorch_model-00001-of-00006.bin",
    "model.layers.2.mlp.down_proj.weight": "pytorch_model-00001-of-00006.bin",
    "model.layers.2.mlp.gate_proj.weight": "pytorch_model-00001-of-00006.bin",
    "model.layers.2.mlp.up_proj.weight": "pytorch_model-00001-of-00006.bin",
    "model.layers.2.post_attention_layernorm.weight": "pytorch_model-00001-of-00006.bin",
    "model.layers.2.self_attn.k_proj.weight": "pytorch_model-00001-of-00006.bin",
    "model.layers.2.self_attn.o_proj.weight": "pytorch_model-00001-of-00006.bin",
    "model.layers.2.self_attn.q_proj.weight": "pytorch_model-00001-of-00006.bin",
    "model.layers.2.self_attn.rotary_emb.inv_freq": "pytorch_model-00001-of-00006.bin",
    "model.layers.2.self_attn.v_proj.weight": "pytorch_model-00001-of-00006.bin",
    "model.layers.20.input_layernorm.weight": "pytorch_model-00003-of-00006.bin",
    "model.layers.20.mlp.down_proj.weight": "pytorch_model-00003-of-00006.bin",
    "model.layers.20.mlp.gate_proj.weight": "pytorch_model-00003-of-00006.bin",
    "model.layers.20.mlp.up_proj.weight": "pytorch_model-00003-of-00006.bin",
    "model.layers.20.post_attention_layernorm.weight": "pytorch_model-00003-of-00006.bin",
    "model.layers.20.self_attn.k_proj.weight": "pytorch_model-00003-of-00006.bin",
    "model.layers.20.self_attn.o_proj.weight": "pytorch_model-00003-of-00006.bin",
    "model.layers.20.self_attn.q_proj.weight": "pytorch_model-00003-of-00006.bin",
    "model.layers.20.self_attn.rotary_emb.inv_freq": "pytorch_model-00003-of-00006.bin",
    "model.layers.20.self_attn.v_proj.weight": "pytorch_model-00003-of-00006.bin",
    "model.layers.21.input_layernorm.weight": "pytorch_model-00003-of-00006.bin",
    "model.layers.21.mlp.down_proj.weight": "pytorch_model-00003-of-00006.bin",
    "model.layers.21.mlp.gate_proj.weight": "pytorch_model-00003-of-00006.bin",
    "model.layers.21.mlp.up_proj.weight": "pytorch_model-00003-of-00006.bin",
    "model.layers.21.post_attention_layernorm.weight": "pytorch_model-00003-of-00006.bin",
    "model.layers.21.self_attn.k_proj.weight": "pytorch_model-00003-of-00006.bin",
    "model.layers.21.self_attn.o_proj.weight": "pytorch_model-00003-of-00006.bin",
    "model.layers.21.self_attn.q_proj.weight": "pytorch_model-00003-of-00006.bin",
    "model.layers.21.self_attn.rotary_emb.inv_freq": "pytorch_model-00003-of-00006.bin",
    "model.layers.21.self_attn.v_proj.weight": "pytorch_model-00003-of-00006.bin",
    "model.layers.22.input_layernorm.weight": "pytorch_model-00003-of-00006.bin",
    "model.layers.22.mlp.down_proj.weight": "pytorch_model-00003-of-00006.bin",
    "model.layers.22.mlp.gate_proj.weight": "pytorch_model-00003-of-00006.bin",
    "model.layers.22.mlp.up_proj.weight": "pytorch_model-00003-of-00006.bin",
    "model.layers.22.post_attention_layernorm.weight": "pytorch_model-00003-of-00006.bin",
    "model.layers.22.self_attn.k_proj.weight": "pytorch_model-00003-of-00006.bin",
    "model.layers.22.self_attn.o_proj.weight": "pytorch_model-00003-of-00006.bin",
    "model.layers.22.self_attn.q_proj.weight": "pytorch_model-00003-of-00006.bin",
    "model.layers.22.self_attn.rotary_emb.inv_freq": "pytorch_model-00003-of-00006.bin",
    "model.layers.22.self_attn.v_proj.weight": "pytorch_model-00003-of-00006.bin",
    "model.layers.23.input_layernorm.weight": "pytorch_model-00004-of-00006.bin",
    "model.layers.23.mlp.down_proj.weight": "pytorch_model-00004-of-00006.bin",
    "model.layers.23.mlp.gate_proj.weight": "pytorch_model-00004-of-00006.bin",
    "model.layers.23.mlp.up_proj.weight": "pytorch_model-00004-of-00006.bin",
    "model.layers.23.post_attention_layernorm.weight": "pytorch_model-00004-of-00006.bin",
    "model.layers.23.self_attn.k_proj.weight": "pytorch_model-00004-of-00006.bin",
    "model.layers.23.self_attn.o_proj.weight": "pytorch_model-00004-of-00006.bin",
    "model.layers.23.self_attn.q_proj.weight": "pytorch_model-00004-of-00006.bin",
    "model.layers.23.self_attn.rotary_emb.inv_freq": "pytorch_model-00004-of-00006.bin",
    "model.layers.23.self_attn.v_proj.weight": "pytorch_model-00004-of-00006.bin",
    "model.layers.24.input_layernorm.weight": "pytorch_model-00004-of-00006.bin",
    "model.layers.24.mlp.down_proj.weight": "pytorch_model-00004-of-00006.bin",
    "model.layers.24.mlp.gate_proj.weight": "pytorch_model-00004-of-00006.bin",
    "model.layers.24.mlp.up_proj.weight": "pytorch_model-00004-of-00006.bin",
    "model.layers.24.post_attention_layernorm.weight": "pytorch_model-00004-of-00006.bin",
    "model.layers.24.self_attn.k_proj.weight": "pytorch_model-00004-of-00006.bin",
    "model.layers.24.self_attn.o_proj.weight": "pytorch_model-00004-of-00006.bin",
    "model.layers.24.self_attn.q_proj.weight": "pytorch_model-00004-of-00006.bin",
    "model.layers.24.self_attn.rotary_emb.inv_freq": "pytorch_model-00004-of-00006.bin",
    "model.layers.24.self_attn.v_proj.weight": "pytorch_model-00004-of-00006.bin",
    "model.layers.25.input_layernorm.weight": "pytorch_model-00004-of-00006.bin",
    "model.layers.25.mlp.down_proj.weight": "pytorch_model-00004-of-00006.bin",
    "model.layers.25.mlp.gate_proj.weight": "pytorch_model-00004-of-00006.bin",
    "model.layers.25.mlp.up_proj.weight": "pytorch_model-00004-of-00006.bin",
    "model.layers.25.post_attention_layernorm.weight": "pytorch_model-00004-of-00006.bin",
    "model.layers.25.self_attn.k_proj.weight": "pytorch_model-00004-of-00006.bin",
    "model.layers.25.self_attn.o_proj.weight": "pytorch_model-00004-of-00006.bin",
    "model.layers.25.self_attn.q_proj.weight": "pytorch_model-00004-of-00006.bin",
    "model.layers.25.self_attn.rotary_emb.inv_freq": "pytorch_model-00004-of-00006.bin",
    "model.layers.25.self_attn.v_proj.weight": "pytorch_model-00004-of-00006.bin",
    "model.layers.26.input_layernorm.weight": "pytorch_model-00004-of-00006.bin",
    "model.layers.26.mlp.down_proj.weight": "pytorch_model-00004-of-00006.bin",
    "model.layers.26.mlp.gate_proj.weight": "pytorch_model-00004-of-00006.bin",
    "model.layers.26.mlp.up_proj.weight": "pytorch_model-00004-of-00006.bin",
    "model.layers.26.post_attention_layernorm.weight": "pytorch_model-00004-of-00006.bin",
    "model.layers.26.self_attn.k_proj.weight": "pytorch_model-00004-of-00006.bin",
    "model.layers.26.self_attn.o_proj.weight": "pytorch_model-00004-of-00006.bin",
    "model.layers.26.self_attn.q_proj.weight": "pytorch_model-00004-of-00006.bin",
    "model.layers.26.self_attn.rotary_emb.inv_freq": "pytorch_model-00004-of-00006.bin",
    "model.layers.26.self_attn.v_proj.weight": "pytorch_model-00004-of-00006.bin",
    "model.layers.27.input_layernorm.weight": "pytorch_model-00004-of-00006.bin",
    "model.layers.27.mlp.down_proj.weight": "pytorch_model-00004-of-00006.bin",
    "model.layers.27.mlp.gate_proj.weight": "pytorch_model-00004-of-00006.bin",
    "model.layers.27.mlp.up_proj.weight": "pytorch_model-00004-of-00006.bin",
    "model.layers.27.post_attention_layernorm.weight": "pytorch_model-00004-of-00006.bin",
    "model.layers.27.self_attn.k_proj.weight": "pytorch_model-00004-of-00006.bin",
    "model.layers.27.self_attn.o_proj.weight": "pytorch_model-00004-of-00006.bin",
    "model.layers.27.self_attn.q_proj.weight": "pytorch_model-00004-of-00006.bin",
    "model.layers.27.self_attn.rotary_emb.inv_freq": "pytorch_model-00004-of-00006.bin",
    "model.layers.27.self_attn.v_proj.weight": "pytorch_model-00004-of-00006.bin",
    "model.layers.28.input_layernorm.weight": "pytorch_model-00004-of-00006.bin",
    "model.layers.28.mlp.down_proj.weight": "pytorch_model-00004-of-00006.bin",
    "model.layers.28.mlp.gate_proj.weight": "pytorch_model-00004-of-00006.bin",
    "model.layers.28.mlp.up_proj.weight": "pytorch_model-00004-of-00006.bin",
    "model.layers.28.post_attention_layernorm.weight": "pytorch_model-00004-of-00006.bin",
    "model.layers.28.self_attn.k_proj.weight": "pytorch_model-00004-of-00006.bin",
    "model.layers.28.self_attn.o_proj.weight": "pytorch_model-00004-of-00006.bin",
    "model.layers.28.self_attn.q_proj.weight": "pytorch_model-00004-of-00006.bin",
    "model.layers.28.self_attn.rotary_emb.inv_freq": "pytorch_model-00004-of-00006.bin",
    "model.layers.28.self_attn.v_proj.weight": "pytorch_model-00004-of-00006.bin",
    "model.layers.29.input_layernorm.weight": "pytorch_model-00004-of-00006.bin",
    "model.layers.29.mlp.down_proj.weight": "pytorch_model-00004-of-00006.bin",
    "model.layers.29.mlp.gate_proj.weight": "pytorch_model-00004-of-00006.bin",
    "model.layers.29.mlp.up_proj.weight": "pytorch_model-00004-of-00006.bin",
    "model.layers.29.post_attention_layernorm.weight": "pytorch_model-00004-of-00006.bin",
    "model.layers.29.self_attn.k_proj.weight": "pytorch_model-00004-of-00006.bin",
    "model.layers.29.self_attn.o_proj.weight": "pytorch_model-00004-of-00006.bin",
    "model.layers.29.self_attn.q_proj.weight": "pytorch_model-00004-of-00006.bin",
    "model.layers.29.self_attn.rotary_emb.inv_freq": "pytorch_model-00004-of-00006.bin",
    "model.layers.29.self_attn.v_proj.weight": "pytorch_model-00004-of-00006.bin",
    "model.layers.3.input_layernorm.weight": "pytorch_model-00001-of-00006.bin",
    "model.layers.3.mlp.down_proj.weight": "pytorch_model-00001-of-00006.bin",
    "model.layers.3.mlp.gate_proj.weight": "pytorch_model-00001-of-00006.bin",
    "model.layers.3.mlp.up_proj.weight": "pytorch_model-00001-of-00006.bin",
    "model.layers.3.post_attention_layernorm.weight": "pytorch_model-00001-of-00006.bin",
    "model.layers.3.self_attn.k_proj.weight": "pytorch_model-00001-of-00006.bin",
    "model.layers.3.self_attn.o_proj.weight": "pytorch_model-00001-of-00006.bin",
    "model.layers.3.self_attn.q_proj.weight": "pytorch_model-00001-of-00006.bin",
    "model.layers.3.self_attn.rotary_emb.inv_freq": "pytorch_model-00001-of-00006.bin",
    "model.layers.3.self_attn.v_proj.weight": "pytorch_model-00001-of-00006.bin",
    "model.layers.30.input_layernorm.weight": "pytorch_model-00005-of-00006.bin",
    "model.layers.30.mlp.down_proj.weight": "pytorch_model-00004-of-00006.bin",
    "model.layers.30.mlp.gate_proj.weight": "pytorch_model-00004-of-00006.bin",
    "model.layers.30.mlp.up_proj.weight": "pytorch_model-00005-of-00006.bin",
    "model.layers.30.post_attention_layernorm.weight": "pytorch_model-00005-of-00006.bin",
    "model.layers.30.self_attn.k_proj.weight": "pytorch_model-00004-of-00006.bin",
    "model.layers.30.self_attn.o_proj.weight": "pytorch_model-00004-of-00006.bin",
    "model.layers.30.self_attn.q_proj.weight": "pytorch_model-00004-of-00006.bin",
    "model.layers.30.self_attn.rotary_emb.inv_freq": "pytorch_model-00004-of-00006.bin",
    "model.layers.30.self_attn.v_proj.weight": "pytorch_model-00004-of-00006.bin",
    "model.layers.31.input_layernorm.weight": "pytorch_model-00005-of-00006.bin",
    "model.layers.31.mlp.down_proj.weight": "pytorch_model-00005-of-00006.bin",
    "model.layers.31.mlp.gate_proj.weight": "pytorch_model-00005-of-00006.bin",
    "model.layers.31.mlp.up_proj.weight": "pytorch_model-00005-of-00006.bin",
    "model.layers.31.post_attention_layernorm.weight": "pytorch_model-00005-of-00006.bin",
    "model.layers.31.self_attn.k_proj.weight": "pytorch_model-00005-of-00006.bin",
    "model.layers.31.self_attn.o_proj.weight": "pytorch_model-00005-of-00006.bin",
    "model.layers.31.self_attn.q_proj.weight": "pytorch_model-00005-of-00006.bin",
    "model.layers.31.self_attn.rotary_emb.inv_freq": "pytorch_model-00005-of-00006.bin",
    "model.layers.31.self_attn.v_proj.weight": "pytorch_model-00005-of-00006.bin",
    "model.layers.32.input_layernorm.weight": "pytorch_model-00005-of-00006.bin",
    "model.layers.32.mlp.down_proj.weight": "pytorch_model-00005-of-00006.bin",
    "model.layers.32.mlp.gate_proj.weight": "pytorch_model-00005-of-00006.bin",
    "model.layers.32.mlp.up_proj.weight": "pytorch_model-00005-of-00006.bin",
    "model.layers.32.post_attention_layernorm.weight": "pytorch_model-00005-of-00006.bin",
    "model.layers.32.self_attn.k_proj.weight": "pytorch_model-00005-of-00006.bin",
    "model.layers.32.self_attn.o_proj.weight": "pytorch_model-00005-of-00006.bin",
    "model.layers.32.self_attn.q_proj.weight": "pytorch_model-00005-of-00006.bin",
    "model.layers.32.self_attn.rotary_emb.inv_freq": "pytorch_model-00005-of-00006.bin",
    "model.layers.32.self_attn.v_proj.weight": "pytorch_model-00005-of-00006.bin",
    "model.layers.33.input_layernorm.weight": "pytorch_model-00005-of-00006.bin",
    "model.layers.33.mlp.down_proj.weight": "pytorch_model-00005-of-00006.bin",
    "model.layers.33.mlp.gate_proj.weight": "pytorch_model-00005-of-00006.bin",
    "model.layers.33.mlp.up_proj.weight": "pytorch_model-00005-of-00006.bin",
    "model.layers.33.post_attention_layernorm.weight": "pytorch_model-00005-of-00006.bin",
    "model.layers.33.self_attn.k_proj.weight": "pytorch_model-00005-of-00006.bin",
    "model.layers.33.self_attn.o_proj.weight": "pytorch_model-00005-of-00006.bin",
    "model.layers.33.self_attn.q_proj.weight": "pytorch_model-00005-of-00006.bin",
    "model.layers.33.self_attn.rotary_emb.inv_freq": "pytorch_model-00005-of-00006.bin",
    "model.layers.33.self_attn.v_proj.weight": "pytorch_model-00005-of-00006.bin",
    "model.layers.34.input_layernorm.weight": "pytorch_model-00005-of-00006.bin",
    "model.layers.34.mlp.down_proj.weight": "pytorch_model-00005-of-00006.bin",
    "model.layers.34.mlp.gate_proj.weight": "pytorch_model-00005-of-00006.bin",
    "model.layers.34.mlp.up_proj.weight": "pytorch_model-00005-of-00006.bin",
    "model.layers.34.post_attention_layernorm.weight": "pytorch_model-00005-of-00006.bin",
    "model.layers.34.self_attn.k_proj.weight": "pytorch_model-00005-of-00006.bin",
    "model.layers.34.self_attn.o_proj.weight": "pytorch_model-00005-of-00006.bin",
    "model.layers.34.self_attn.q_proj.weight": "pytorch_model-00005-of-00006.bin",
    "model.layers.34.self_attn.rotary_emb.inv_freq": "pytorch_model-00005-of-00006.bin",
    "model.layers.34.self_attn.v_proj.weight": "pytorch_model-00005-of-00006.bin",
    "model.layers.35.input_layernorm.weight": "pytorch_model-00005-of-00006.bin",
    "model.layers.35.mlp.down_proj.weight": "pytorch_model-00005-of-00006.bin",
    "model.layers.35.mlp.gate_proj.weight": "pytorch_model-00005-of-00006.bin",
    "model.layers.35.mlp.up_proj.weight": "pytorch_model-00005-of-00006.bin",
    "model.layers.35.post_attention_layernorm.weight": "pytorch_model-00005-of-00006.bin",
    "model.layers.35.self_attn.k_proj.weight": "pytorch_model-00005-of-00006.bin",
    "model.layers.35.self_attn.o_proj.weight": "pytorch_model-00005-of-00006.bin",
    "model.layers.35.self_attn.q_proj.weight": "pytorch_model-00005-of-00006.bin",
    "model.layers.35.self_attn.rotary_emb.inv_freq": "pytorch_model-00005-of-00006.bin",
    "model.layers.35.self_attn.v_proj.weight": "pytorch_model-00005-of-00006.bin",
    "model.layers.36.input_layernorm.weight": "pytorch_model-00005-of-00006.bin",
    "model.layers.36.mlp.down_proj.weight": "pytorch_model-00005-of-00006.bin",
    "model.layers.36.mlp.gate_proj.weight": "pytorch_model-00005-of-00006.bin",
    "model.layers.36.mlp.up_proj.weight": "pytorch_model-00005-of-00006.bin",
    "model.layers.36.post_attention_layernorm.weight": "pytorch_model-00005-of-00006.bin",
    "model.layers.36.self_attn.k_proj.weight": "pytorch_model-00005-of-00006.bin",
    "model.layers.36.self_attn.o_proj.weight": "pytorch_model-00005-of-00006.bin",
    "model.layers.36.self_attn.q_proj.weight": "pytorch_model-00005-of-00006.bin",
    "model.layers.36.self_attn.rotary_emb.inv_freq": "pytorch_model-00005-of-00006.bin",
    "model.layers.36.self_attn.v_proj.weight": "pytorch_model-00005-of-00006.bin",
    "model.layers.37.input_layernorm.weight": "pytorch_model-00005-of-00006.bin",
    "model.layers.37.mlp.down_proj.weight": "pytorch_model-00005-of-00006.bin",
    "model.layers.37.mlp.gate_proj.weight": "pytorch_model-00005-of-00006.bin",
    "model.layers.37.mlp.up_proj.weight": "pytorch_model-00005-of-00006.bin",
    "model.layers.37.post_attention_layernorm.weight": "pytorch_model-00005-of-00006.bin",
    "model.layers.37.self_attn.k_proj.weight": "pytorch_model-00005-of-00006.bin",
    "model.layers.37.self_attn.o_proj.weight": "pytorch_model-00005-of-00006.bin",
    "model.layers.37.self_attn.q_proj.weight": "pytorch_model-00005-of-00006.bin",
    "model.layers.37.self_attn.rotary_emb.inv_freq": "pytorch_model-00005-of-00006.bin",
    "model.layers.37.self_attn.v_proj.weight": "pytorch_model-00005-of-00006.bin",
    "model.layers.38.input_layernorm.weight": "pytorch_model-00006-of-00006.bin",
    "model.layers.38.mlp.down_proj.weight": "pytorch_model-00006-of-00006.bin",
    "model.layers.38.mlp.gate_proj.weight": "pytorch_model-00005-of-00006.bin",
    "model.layers.38.mlp.up_proj.weight": "pytorch_model-00006-of-00006.bin",
    "model.layers.38.post_attention_layernorm.weight": "pytorch_model-00006-of-00006.bin",
    "model.layers.38.self_attn.k_proj.weight": "pytorch_model-00005-of-00006.bin",
    "model.layers.38.self_attn.o_proj.weight": "pytorch_model-00005-of-00006.bin",
    "model.layers.38.self_attn.q_proj.weight": "pytorch_model-00005-of-00006.bin",
    "model.layers.38.self_attn.rotary_emb.inv_freq": "pytorch_model-00005-of-00006.bin",
    "model.layers.38.self_attn.v_proj.weight": "pytorch_model-00005-of-00006.bin",
    "model.layers.39.input_layernorm.weight": "pytorch_model-00006-of-00006.bin",
    "model.layers.39.mlp.down_proj.weight": "pytorch_model-00006-of-00006.bin",
    "model.layers.39.mlp.gate_proj.weight": "pytorch_model-00006-of-00006.bin",
    "model.layers.39.mlp.up_proj.weight": "pytorch_model-00006-of-00006.bin",
    "model.layers.39.post_attention_layernorm.weight": "pytorch_model-00006-of-00006.bin",
    "model.layers.39.self_attn.k_proj.weight": "pytorch_model-00006-of-00006.bin",
    "model.layers.39.self_attn.o_proj.weight": "pytorch_model-00006-of-00006.bin",
    "model.layers.39.self_attn.q_proj.weight": "pytorch_model-00006-of-00006.bin",
    "model.layers.39.self_attn.rotary_emb.inv_freq": "pytorch_model-00006-of-00006.bin",
    "model.layers.39.self_attn.v_proj.weight": "pytorch_model-00006-of-00006.bin",
    "model.layers.4.input_layernorm.weight": "pytorch_model-00001-of-00006.bin",
    "model.layers.4.mlp.down_proj.weight": "pytorch_model-00001-of-00006.bin",
    "model.layers.4.mlp.gate_proj.weight": "pytorch_model-00001-of-00006.bin",
    "model.layers.4.mlp.up_proj.weight": "pytorch_model-00001-of-00006.bin",
    "model.layers.4.post_attention_layernorm.weight": "pytorch_model-00001-of-00006.bin",
    "model.layers.4.self_attn.k_proj.weight": "pytorch_model-00001-of-00006.bin",
    "model.layers.4.self_attn.o_proj.weight": "pytorch_model-00001-of-00006.bin",
    "model.layers.4.self_attn.q_proj.weight": "pytorch_model-00001-of-00006.bin",
    "model.layers.4.self_attn.rotary_emb.inv_freq": "pytorch_model-00001-of-00006.bin",
    "model.layers.4.self_attn.v_proj.weight": "pytorch_model-00001-of-00006.bin",
    "model.layers.5.input_layernorm.weight": "pytorch_model-00001-of-00006.bin",
    "model.layers.5.mlp.down_proj.weight": "pytorch_model-00001-of-00006.bin",
    "model.layers.5.mlp.gate_proj.weight": "pytorch_model-00001-of-00006.bin",
    "model.layers.5.mlp.up_proj.weight": "pytorch_model-00001-of-00006.bin",
    "model.layers.5.post_attention_layernorm.weight": "pytorch_model-00001-of-00006.bin",
    "model.layers.5.self_attn.k_proj.weight": "pytorch_model-00001-of-00006.bin",
    "model.layers.5.self_attn.o_proj.weight": "pytorch_model-00001-of-00006.bin",
    "model.layers.5.self_attn.q_proj.weight": "pytorch_model-00001-of-00006.bin",
    "model.layers.5.self_attn.rotary_emb.inv_freq": "pytorch_model-00001-of-00006.bin",
    "model.layers.5.self_attn.v_proj.weight": "pytorch_model-00001-of-00006.bin",
    "model.layers.6.input_layernorm.weight": "pytorch_model-00001-of-00006.bin",
    "model.layers.6.mlp.down_proj.weight": "pytorch_model-00001-of-00006.bin",
    "model.layers.6.mlp.gate_proj.weight": "pytorch_model-00001-of-00006.bin",
    "model.layers.6.mlp.up_proj.weight": "pytorch_model-00001-of-00006.bin",
    "model.layers.6.post_attention_layernorm.weight": "pytorch_model-00001-of-00006.bin",
    "model.layers.6.self_attn.k_proj.weight": "pytorch_model-00001-of-00006.bin",
    "model.layers.6.self_attn.o_proj.weight": "pytorch_model-00001-of-00006.bin",
    "model.layers.6.self_attn.q_proj.weight": "pytorch_model-00001-of-00006.bin",
    "model.layers.6.self_attn.rotary_emb.inv_freq": "pytorch_model-00001-of-00006.bin",
    "model.layers.6.self_attn.v_proj.weight": "pytorch_model-00001-of-00006.bin",
    "model.layers.7.input_layernorm.weight": "pytorch_model-00002-of-00006.bin",
    "model.layers.7.mlp.down_proj.weight": "pytorch_model-00002-of-00006.bin",
    "model.layers.7.mlp.gate_proj.weight": "pytorch_model-00002-of-00006.bin",
    "model.layers.7.mlp.up_proj.weight": "pytorch_model-00002-of-00006.bin",
    "model.layers.7.post_attention_layernorm.weight": "pytorch_model-00002-of-00006.bin",
    "model.layers.7.self_attn.k_proj.weight": "pytorch_model-00001-of-00006.bin",
    "model.layers.7.self_attn.o_proj.weight": "pytorch_model-00001-of-00006.bin",
    "model.layers.7.self_attn.q_proj.weight": "pytorch_model-00001-of-00006.bin",
    "model.layers.7.self_attn.rotary_emb.inv_freq": "pytorch_model-00001-of-00006.bin",
    "model.layers.7.self_attn.v_proj.weight": "pytorch_model-00001-of-00006.bin",
    "model.layers.8.input_layernorm.weight": "pytorch_model-00002-of-00006.bin",
    "model.layers.8.mlp.down_proj.weight": "pytorch_model-00002-of-00006.bin",
    "model.layers.8.mlp.gate_proj.weight": "pytorch_model-00002-of-00006.bin",
    "model.layers.8.mlp.up_proj.weight": "pytorch_model-00002-of-00006.bin",
    "model.layers.8.post_attention_layernorm.weight": "pytorch_model-00002-of-00006.bin",
    "model.layers.8.self_attn.k_proj.weight": "pytorch_model-00002-of-00006.bin",
    "model.layers.8.self_attn.o_proj.weight": "pytorch_model-00002-of-00006.bin",
    "model.layers.8.self_attn.q_proj.weight": "pytorch_model-00002-of-00006.bin",
    "model.layers.8.self_attn.rotary_emb.inv_freq": "pytorch_model-00002-of-00006.bin",
    "model.layers.8.self_attn.v_proj.weight": "pytorch_model-00002-of-00006.bin",
    "model.layers.9.input_layernorm.weight": "pytorch_model-00002-of-00006.bin",
    "model.layers.9.mlp.down_proj.weight": "pytorch_model-00002-of-00006.bin",
    "model.layers.9.mlp.gate_proj.weight": "pytorch_model-00002-of-00006.bin",
    "model.layers.9.mlp.up_proj.weight": "pytorch_model-00002-of-00006.bin",
    "model.layers.9.post_attention_layernorm.weight": "pytorch_model-00002-of-00006.bin",
    "model.layers.9.self_attn.k_proj.weight": "pytorch_model-00002-of-00006.bin",
    "model.layers.9.self_attn.o_proj.weight": "pytorch_model-00002-of-00006.bin",
    "model.layers.9.self_attn.q_proj.weight": "pytorch_model-00002-of-00006.bin",
    "model.layers.9.self_attn.rotary_emb.inv_freq": "pytorch_model-00002-of-00006.bin",
    "model.layers.9.self_attn.v_proj.weight": "pytorch_model-00002-of-00006.bin",
    "model.norm.weight": "pytorch_model-00006-of-00006.bin"
  }
}
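
This index is what lets `from_pretrained` handle the sharded checkpoint: `metadata.total_size` records the combined ~52 GB, and `weight_map` maps every parameter name to the shard file that holds it. A minimal sketch of resolving tensors to shards by hand (assumes the files sit in the current directory):

```
import json

with open("pytorch_model.bin.index.json") as f:
    index = json.load(f)

# Which shard holds a given parameter, per the weight_map above.
print(index["weight_map"]["model.layers.15.self_attn.o_proj.weight"])  # pytorch_model-00003-of-00006.bin

# Group parameter names by shard, e.g. to process one shard at a time.
shards = {}
for name, shard in index["weight_map"].items():
    shards.setdefault(shard, []).append(name)
print({shard: len(names) for shard, names in sorted(shards.items())})
```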

special_tokens_map.json
@@ -0,0 +1,12 @@
{
  "bos_token": "<s>",
  "eos_token": "</s>",
  "pad_token": "<unk>",
  "unk_token": {
    "content": "<unk>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  }
}

tokenizer.model
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:9e556afd44213b6bd1be2b850ebbbd98f5481437a8021afaf58ee7fb1818d347
size 499723

tokenizer_config.json
@@ -0,0 +1,36 @@
{
  "add_bos_token": true,
  "add_eos_token": false,
  "bos_token": {
    "__type": "AddedToken",
    "content": "<s>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "clean_up_tokenization_spaces": false,
  "eos_token": {
    "__type": "AddedToken",
    "content": "</s>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "legacy": false,
  "model_max_length": 1000000000000000019884624838656,
  "pad_token": null,
  "padding_side": "left",
  "sp_model_kwargs": {},
  "tokenizer_class": "LlamaTokenizer",
  "unk_token": {
    "__type": "AddedToken",
    "content": "<unk>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "use_fast": true
}
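
Taken together, the tokenizer files configure a standard LLaMA SentencePiece tokenizer with left-side padding and `<unk>` serving as the pad token, which is what the README quick start's left-padded generation relies on. A minimal sketch:

```
from transformers import LlamaTokenizer

tokenizer = LlamaTokenizer.from_pretrained("haoranxu/ALMA-13B")
print(tokenizer.padding_side)  # "left", from tokenizer_config.json
print(tokenizer.pad_token)     # "<unk>", from special_tokens_map.json
batch = tokenizer(["Hello", "A longer sentence."], return_tensors="pt", padding=True)
print(batch.input_ids)  # the shorter sequence is padded on the left
```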