Initialize the project; model provided by the ModelHub XC community

Model: AdaptLLM/medicine-LLM-13B
Source: Original Platform
This commit is contained in:
ModelHub XC
2026-04-12 12:49:57 +08:00
commit 19b09979ce
22 changed files with 1133 additions and 0 deletions

.gitattributes vendored Normal file (47 lines)

@@ -0,0 +1,47 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
model-00001-of-00006.safetensors filter=lfs diff=lfs merge=lfs -text
model-00002-of-00006.safetensors filter=lfs diff=lfs merge=lfs -text
model-00003-of-00006.safetensors filter=lfs diff=lfs merge=lfs -text
model-00004-of-00006.safetensors filter=lfs diff=lfs merge=lfs -text
model-00005-of-00006.safetensors filter=lfs diff=lfs merge=lfs -text
model-00006-of-00006.safetensors filter=lfs diff=lfs merge=lfs -text
pytorch_model-00001-of-00006.bin filter=lfs diff=lfs merge=lfs -text
pytorch_model-00002-of-00006.bin filter=lfs diff=lfs merge=lfs -text
pytorch_model-00003-of-00006.bin filter=lfs diff=lfs merge=lfs -text
pytorch_model-00004-of-00006.bin filter=lfs diff=lfs merge=lfs -text
pytorch_model-00005-of-00006.bin filter=lfs diff=lfs merge=lfs -text
pytorch_model-00006-of-00006.bin filter=lfs diff=lfs merge=lfs -text

README.md Normal file (139 lines)

@@ -0,0 +1,139 @@
---
language:
- en
datasets:
- Open-Orca/OpenOrca
- GAIR/lima
- WizardLM/WizardLM_evol_instruct_V2_196k
- EleutherAI/pile
metrics:
- accuracy
pipeline_tag: text-generation
tags:
- biology
- medical
license: apache-2.0
---
# Adapting LLMs to Domains via Continual Pre-Training (ICLR 2024)
This repo contains the domain-specific base model developed from **LLaMA-1-13B**, using the method in our paper [Adapting Large Language Models via Reading Comprehension](https://huggingface.co/papers/2309.09530).
We explore **continued pre-training on domain-specific corpora** for large language models. While this approach enriches LLMs with domain knowledge, it significantly hurts their prompting ability for question answering. Inspired by human learning via reading comprehension, we propose a simple method to **transform large-scale pre-training corpora into reading comprehension texts**, consistently improving prompting performance across tasks in biomedicine, finance, and law domains. **Our 7B model competes with much larger domain-specific models like BloombergGPT-50B**.
### [2024/11/29] 🤗 Introducing the multimodal version of AdaptLLM at [AdaMLLM](https://huggingface.co/papers/2411.19930) for adapting MLLMs to domains 🤗
**************************** **Updates** ****************************
* 2024/11/29: Released [AdaMLLM](https://huggingface.co/AdaptLLM/Adapt-MLLM-to-Domains) for adapting MLLMs to domains
* 2024/9/20: Our [research paper for Instruction-Pretrain](https://huggingface.co/papers/2406.14491) has been accepted by EMNLP 2024
* 2024/8/29: Updated [guidelines](https://huggingface.co/datasets/AdaptLLM/finance-tasks) on evaluating any 🤗Huggingface models on the domain-specific tasks
* 2024/6/22: Released the [benchmarking code](https://github.com/microsoft/LMOps/tree/main/adaptllm)
* 2024/6/21: Released the general version of AdaptLLM at [Instruction-Pretrain](https://huggingface.co/instruction-pretrain)
* 2024/4/2: Released the [raw data splits (train and test)](https://huggingface.co/datasets/AdaptLLM/ConvFinQA) of all the evaluation datasets
* 2024/1/16: Our [research paper for AdaptLLM](https://huggingface.co/papers/2309.09530) has been accepted by ICLR 2024
* 2023/12/19: Released our [13B base models](https://huggingface.co/AdaptLLM/law-LLM-13B) developed from LLaMA-1-13B
* 2023/12/8: Released our [chat models](https://huggingface.co/AdaptLLM/law-chat) developed from LLaMA-2-Chat-7B
* 2023/9/18: Released our [paper](https://huggingface.co/papers/2309.09530), [code](https://github.com/microsoft/LMOps), [data](https://huggingface.co/datasets/AdaptLLM/law-tasks), and [base models](https://huggingface.co/AdaptLLM/law-LLM) developed from LLaMA-1-7B
## 1. Domain-Specific Models
### LLaMA-1-7B
In our paper, we develop three domain-specific models from LLaMA-1-7B, which are also available on Huggingface: [Biomedicine-LLM](https://huggingface.co/AdaptLLM/medicine-LLM), [Finance-LLM](https://huggingface.co/AdaptLLM/finance-LLM) and [Law-LLM](https://huggingface.co/AdaptLLM/law-LLM). The performance of AdaptLLM compared with other domain-specific LLMs is shown below:
<p align='center'>
<img src="https://cdn-uploads.huggingface.co/production/uploads/650801ced5578ef7e20b33d4/6efPwitFgy-pLTzvccdcP.png" width="700">
</p>
### LLaMA-1-13B
Moreover, we scale up our base model to LLaMA-1-13B to see whether **our method is similarly effective for larger-scale models**, and the results are consistently positive: [Biomedicine-LLM-13B](https://huggingface.co/AdaptLLM/medicine-LLM-13B), [Finance-LLM-13B](https://huggingface.co/AdaptLLM/finance-LLM-13B) and [Law-LLM-13B](https://huggingface.co/AdaptLLM/law-LLM-13B).
### LLaMA-2-Chat
Our method is also effective for aligned models! LLaMA-2-Chat requires a [specific data format](https://huggingface.co/blog/llama2#how-to-prompt-llama-2), and our **reading comprehension texts fit this format perfectly** once transformed into multi-turn conversations. We have also open-sourced chat models in different domains: [Biomedicine-Chat](https://huggingface.co/AdaptLLM/medicine-chat), [Finance-Chat](https://huggingface.co/AdaptLLM/finance-chat) and [Law-Chat](https://huggingface.co/AdaptLLM/law-chat).
For example, to chat with the biomedicine model (💗 An amazing [usage example](https://huggingface.co/AdaptLLM/medicine-LLM-13B/discussions/2)):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("AdaptLLM/medicine-LLM-13B")
tokenizer = AutoTokenizer.from_pretrained("AdaptLLM/medicine-LLM-13B", use_fast=False)
# Put your input here:
user_input = '''Question: Which of the following is an example of monosomy?
Options:
- 46,XX
- 47,XXX
- 69,XYY
- 45,X
Please provide your choice first and then provide explanations if possible.'''
# Simply use your input as the prompt for base models
prompt = user_input
inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).input_ids.to(model.device)
outputs = model.generate(input_ids=inputs, max_length=2048)[0]
answer_start = int(inputs.shape[-1])
pred = tokenizer.decode(outputs[answer_start:], skip_special_tokens=True)
print(pred)
```
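For the chat variants above, the prompt must instead follow the LLaMA-2 chat template described in the linked formatting guide. A minimal sketch, assuming the single-turn template with the user message wrapped in `[INST] ... [/INST]` (the question text is reused from the example above):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("AdaptLLM/medicine-chat")
tokenizer = AutoTokenizer.from_pretrained("AdaptLLM/medicine-chat", use_fast=False)

# LLaMA-2 chat format: the user turn goes inside [INST] ... [/INST];
# an optional system message would sit inside <<SYS>> ... <</SYS>> tags.
user_input = "Question: Which of the following is an example of monosomy? ..."
prompt = f"[INST] {user_input} [/INST]"

inputs = tokenizer(prompt, return_tensors="pt").input_ids.to(model.device)
outputs = model.generate(input_ids=inputs, max_length=2048)[0]
pred = tokenizer.decode(outputs[inputs.shape[-1]:], skip_special_tokens=True)
print(pred)
```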
### LLaMA-3-8B (💡New!)
In our recent research on [Instruction-Pretrain](https://huggingface.co/papers/2406.14491), we developed a context-based instruction synthesizer to augment the raw corpora with instruction-response pairs, **enabling Llama3-8B to be comparable to or even outperform Llama3-70B**: [Finance-Llama3-8B](https://huggingface.co/instruction-pretrain/finance-Llama3-8B), [Biomedicine-Llama3-8B](https://huggingface.co/instruction-pretrain/medicine-Llama3-8B).
## 2. Domain-Specific Tasks
### Pre-templatized Testing Splits
To easily reproduce our prompting results, we have uploaded the filled-in zero/few-shot input instructions and output completions for the test set of each domain-specific task: [biomedicine-tasks](https://huggingface.co/datasets/AdaptLLM/medicine-tasks), [finance-tasks](https://huggingface.co/datasets/AdaptLLM/finance-tasks), and [law-tasks](https://huggingface.co/datasets/AdaptLLM/law-tasks).
Note: these filled-in instructions are specifically tailored for models before alignment and do NOT fit the specific data format required by chat models.
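These splits can be pulled directly with the 🤗 `datasets` library. A minimal sketch; the task name `"USMLE"` and the `test` split are assumptions here, so check the dataset page for the exact configuration names:
```python
from datasets import load_dataset

# Task/config name is illustrative; see the dataset card for the real list.
ds = load_dataset("AdaptLLM/medicine-tasks", "USMLE")
sample = ds["test"][0]  # assuming a pre-templatized 'test' split
print(sample)           # filled-in zero/few-shot instruction + gold completion
```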
### Evaluating Any Huggingface LMs on Domain-Specific Tasks (💡New!)
You can use the following script to reproduce our results and evaluate any other Huggingface models on domain-specific tasks. Note that the script is NOT applicable to models that require specific prompt templates (e.g., Llama2-chat, Llama3-Instruct).
1). **Set Up Dependencies**
```bash
git clone https://github.com/microsoft/LMOps
cd LMOps/adaptllm
pip install -r requirements.txt
```
2). **Evaluate the Model**
```bash
# Select the domain from ['biomedicine', 'finance', 'law']
DOMAIN='biomedicine'
# Specify any Huggingface model name (Not applicable to chat models)
MODEL='AdaptLLM/medicine-LLM-13B'
# Model parallelization:
# - Set MODEL_PARALLEL=False if the model fits on a single GPU.
# We observe that LMs smaller than 10B always meet this requirement.
# - Set MODEL_PARALLEL=True if the model is too large and encounters OOM on a single GPU.
MODEL_PARALLEL=True
# Choose the number of GPUs from [1, 2, 4, 8]
N_GPU=2
# Whether to add a BOS token at the beginning of the prompt input:
# - Set to False for AdaptLLM.
# - Set to True for instruction-pretrain models.
# If unsure, we recommend setting it to False, as this is suitable for most LMs.
add_bos_token=False
# Run the evaluation script
bash scripts/inference.sh ${DOMAIN} ${MODEL} ${add_bos_token} ${MODEL_PARALLEL} ${N_GPU}
```
## Citation
If you find our work helpful, please cite us:
```bibtex
@inproceedings{
cheng2024adapting,
title={Adapting Large Language Models via Reading Comprehension},
author={Daixuan Cheng and Shaohan Huang and Furu Wei},
booktitle={The Twelfth International Conference on Learning Representations},
year={2024},
url={https://openreview.net/forum?id=y886UXPEZ0}
}
```

config.json Normal file (24 lines)

@@ -0,0 +1,24 @@
{
"_name_or_path": "/home/sgugger/tmp/llama/llama-13b/",
"architectures": [
"LlamaForCausalLM"
],
"bos_token_id": 1,
"eos_token_id": 2,
"hidden_act": "silu",
"hidden_size": 5120,
"initializer_range": 0.02,
"intermediate_size": 13824,
"max_position_embeddings": 2048,
"max_sequence_length": 2048,
"model_type": "llama",
"num_attention_heads": 40,
"num_hidden_layers": 40,
"pad_token_id": 32000,
"rms_norm_eps": 1e-06,
"tie_word_embeddings": false,
"torch_dtype": "float16",
"transformers_version": "4.28.0.dev0",
"use_cache": true,
"vocab_size": 32001
}
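As a sanity check, the fields above imply a parameter count that matches the 13B label. A back-of-the-envelope computation (our own, not part of the release):
```python
# Parameter count implied by config.json above.
hidden, inter, layers, vocab = 5120, 13824, 40, 32001

embed = vocab * hidden              # input embeddings
head  = vocab * hidden              # lm_head (tie_word_embeddings is false)
attn  = 4 * hidden * hidden         # q/k/v/o projections, per layer
mlp   = 3 * hidden * inter          # gate/up/down projections, per layer
norms = 2 * hidden                  # two RMSNorm weights, per layer

total = embed + head + layers * (attn + mlp + norms) + hidden  # + final norm
print(f"{total / 1e9:.2f}B parameters")  # ~13.02B
```
At 4 bytes per parameter this comes to roughly 52 GB, matching the `total_size` of 52,063,508,480 bytes in the weight indices below; the shards therefore appear to be stored in float32 even though `torch_dtype` is float16.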

configuration.json Normal file (1 line)

@@ -0,0 +1 @@
{"framework": "pytorch", "task": "text-generation", "allow_remote": true}

generation_config.json Normal file (7 lines)

@@ -0,0 +1,7 @@
{
"_from_model_config": true,
"bos_token_id": 1,
"eos_token_id": 2,
"pad_token_id": 32000,
"transformers_version": "4.28.0.dev0"
}
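These defaults are what `model.generate` falls back on, and they can be inspected without loading the weights. A minimal sketch:
```python
from transformers import GenerationConfig

gen_cfg = GenerationConfig.from_pretrained("AdaptLLM/medicine-LLM-13B")
print(gen_cfg.bos_token_id, gen_cfg.eos_token_id, gen_cfg.pad_token_id)  # 1 2 32000
# pad_token_id 32000 is the extra token that brings vocab_size to 32001 in config.json.
```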

model-00001-of-00006.safetensors Normal file (3 lines)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a4a9a4f1fb332d5f1c89d0fec66b87d19b37931dec721683db1842cfd4ddcece
size 9956547112
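Each weight shard appears in this commit only as a Git LFS pointer like the one above: three text lines giving the spec version, the SHA-256 of the real blob, and its size in bytes. A minimal sketch of reading one (the file name is assumed):
```python
# Parse a Git LFS pointer file; the path is illustrative.
def read_lfs_pointer(path: str) -> dict:
    fields = {}
    with open(path) as f:
        for line in f:
            key, _, value = line.strip().partition(" ")
            fields[key] = value
    return fields

ptr = read_lfs_pointer("model-00001-of-00006.safetensors")
print(ptr["oid"], int(ptr["size"]))  # sha256:a4a9..., 9956547112
```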

model-00002-of-00006.safetensors Normal file (3 lines)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a0e49c7bc5b7d2b13f92a077ad6deb47819ea56b0c8ff4e891bc5061232af3fd
size 9940838856

model-00003-of-00006.safetensors Normal file (3 lines)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8c2e3f2ba7edcfb59c73287cce706328311ff43d2445cd0093742968e5603366
size 9940839248

model-00004-of-00006.safetensors Normal file (3 lines)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:2a56a41395d8548d322fddc18fb23eb9271ec5cec3b6b3323b0f440df88ace0c
size 9867397864

model-00005-of-00006.safetensors Normal file (3 lines)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:fb1898ee6676cf7997cbc4142d93fec53d6311ba065fe9550728bedebc633d9c
size 9867439048

model-00006-of-00006.safetensors Normal file (3 lines)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f0701de6801e1833e2f2a6fd0579e592b3b32dcce48c526510ddb1714079768a
size 2490492968

model.safetensors.index.json Normal file (410 lines)

@@ -0,0 +1,410 @@
{
"metadata": {
"total_size": 52063508480
},
"weight_map": {
"lm_head.weight": "model-00006-of-00006.safetensors",
"model.embed_tokens.weight": "model-00001-of-00006.safetensors",
"model.layers.0.input_layernorm.weight": "model-00001-of-00006.safetensors",
"model.layers.0.mlp.down_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.0.mlp.gate_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.0.mlp.up_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.0.post_attention_layernorm.weight": "model-00001-of-00006.safetensors",
"model.layers.0.self_attn.k_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.0.self_attn.o_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.0.self_attn.q_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.0.self_attn.rotary_emb.inv_freq": "model-00001-of-00006.safetensors",
"model.layers.0.self_attn.v_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.1.input_layernorm.weight": "model-00001-of-00006.safetensors",
"model.layers.1.mlp.down_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.1.mlp.gate_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.1.mlp.up_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.1.post_attention_layernorm.weight": "model-00001-of-00006.safetensors",
"model.layers.1.self_attn.k_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.1.self_attn.o_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.1.self_attn.q_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.1.self_attn.rotary_emb.inv_freq": "model-00001-of-00006.safetensors",
"model.layers.1.self_attn.v_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.10.input_layernorm.weight": "model-00002-of-00006.safetensors",
"model.layers.10.mlp.down_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.10.mlp.gate_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.10.mlp.up_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.10.post_attention_layernorm.weight": "model-00002-of-00006.safetensors",
"model.layers.10.self_attn.k_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.10.self_attn.o_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.10.self_attn.q_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.10.self_attn.rotary_emb.inv_freq": "model-00002-of-00006.safetensors",
"model.layers.10.self_attn.v_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.11.input_layernorm.weight": "model-00002-of-00006.safetensors",
"model.layers.11.mlp.down_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.11.mlp.gate_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.11.mlp.up_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.11.post_attention_layernorm.weight": "model-00002-of-00006.safetensors",
"model.layers.11.self_attn.k_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.11.self_attn.o_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.11.self_attn.q_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.11.self_attn.rotary_emb.inv_freq": "model-00002-of-00006.safetensors",
"model.layers.11.self_attn.v_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.12.input_layernorm.weight": "model-00002-of-00006.safetensors",
"model.layers.12.mlp.down_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.12.mlp.gate_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.12.mlp.up_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.12.post_attention_layernorm.weight": "model-00002-of-00006.safetensors",
"model.layers.12.self_attn.k_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.12.self_attn.o_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.12.self_attn.q_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.12.self_attn.rotary_emb.inv_freq": "model-00002-of-00006.safetensors",
"model.layers.12.self_attn.v_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.13.input_layernorm.weight": "model-00002-of-00006.safetensors",
"model.layers.13.mlp.down_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.13.mlp.gate_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.13.mlp.up_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.13.post_attention_layernorm.weight": "model-00002-of-00006.safetensors",
"model.layers.13.self_attn.k_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.13.self_attn.o_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.13.self_attn.q_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.13.self_attn.rotary_emb.inv_freq": "model-00002-of-00006.safetensors",
"model.layers.13.self_attn.v_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.14.input_layernorm.weight": "model-00002-of-00006.safetensors",
"model.layers.14.mlp.down_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.14.mlp.gate_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.14.mlp.up_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.14.post_attention_layernorm.weight": "model-00002-of-00006.safetensors",
"model.layers.14.self_attn.k_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.14.self_attn.o_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.14.self_attn.q_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.14.self_attn.rotary_emb.inv_freq": "model-00002-of-00006.safetensors",
"model.layers.14.self_attn.v_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.15.input_layernorm.weight": "model-00003-of-00006.safetensors",
"model.layers.15.mlp.down_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.15.mlp.gate_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.15.mlp.up_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.15.post_attention_layernorm.weight": "model-00003-of-00006.safetensors",
"model.layers.15.self_attn.k_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.15.self_attn.o_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.15.self_attn.q_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.15.self_attn.rotary_emb.inv_freq": "model-00003-of-00006.safetensors",
"model.layers.15.self_attn.v_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.16.input_layernorm.weight": "model-00003-of-00006.safetensors",
"model.layers.16.mlp.down_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.16.mlp.gate_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.16.mlp.up_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.16.post_attention_layernorm.weight": "model-00003-of-00006.safetensors",
"model.layers.16.self_attn.k_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.16.self_attn.o_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.16.self_attn.q_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.16.self_attn.rotary_emb.inv_freq": "model-00003-of-00006.safetensors",
"model.layers.16.self_attn.v_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.17.input_layernorm.weight": "model-00003-of-00006.safetensors",
"model.layers.17.mlp.down_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.17.mlp.gate_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.17.mlp.up_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.17.post_attention_layernorm.weight": "model-00003-of-00006.safetensors",
"model.layers.17.self_attn.k_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.17.self_attn.o_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.17.self_attn.q_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.17.self_attn.rotary_emb.inv_freq": "model-00003-of-00006.safetensors",
"model.layers.17.self_attn.v_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.18.input_layernorm.weight": "model-00003-of-00006.safetensors",
"model.layers.18.mlp.down_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.18.mlp.gate_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.18.mlp.up_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.18.post_attention_layernorm.weight": "model-00003-of-00006.safetensors",
"model.layers.18.self_attn.k_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.18.self_attn.o_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.18.self_attn.q_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.18.self_attn.rotary_emb.inv_freq": "model-00003-of-00006.safetensors",
"model.layers.18.self_attn.v_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.19.input_layernorm.weight": "model-00003-of-00006.safetensors",
"model.layers.19.mlp.down_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.19.mlp.gate_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.19.mlp.up_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.19.post_attention_layernorm.weight": "model-00003-of-00006.safetensors",
"model.layers.19.self_attn.k_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.19.self_attn.o_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.19.self_attn.q_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.19.self_attn.rotary_emb.inv_freq": "model-00003-of-00006.safetensors",
"model.layers.19.self_attn.v_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.2.input_layernorm.weight": "model-00001-of-00006.safetensors",
"model.layers.2.mlp.down_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.2.mlp.gate_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.2.mlp.up_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.2.post_attention_layernorm.weight": "model-00001-of-00006.safetensors",
"model.layers.2.self_attn.k_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.2.self_attn.o_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.2.self_attn.q_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.2.self_attn.rotary_emb.inv_freq": "model-00001-of-00006.safetensors",
"model.layers.2.self_attn.v_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.20.input_layernorm.weight": "model-00003-of-00006.safetensors",
"model.layers.20.mlp.down_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.20.mlp.gate_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.20.mlp.up_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.20.post_attention_layernorm.weight": "model-00003-of-00006.safetensors",
"model.layers.20.self_attn.k_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.20.self_attn.o_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.20.self_attn.q_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.20.self_attn.rotary_emb.inv_freq": "model-00003-of-00006.safetensors",
"model.layers.20.self_attn.v_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.21.input_layernorm.weight": "model-00003-of-00006.safetensors",
"model.layers.21.mlp.down_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.21.mlp.gate_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.21.mlp.up_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.21.post_attention_layernorm.weight": "model-00003-of-00006.safetensors",
"model.layers.21.self_attn.k_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.21.self_attn.o_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.21.self_attn.q_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.21.self_attn.rotary_emb.inv_freq": "model-00003-of-00006.safetensors",
"model.layers.21.self_attn.v_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.22.input_layernorm.weight": "model-00003-of-00006.safetensors",
"model.layers.22.mlp.down_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.22.mlp.gate_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.22.mlp.up_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.22.post_attention_layernorm.weight": "model-00003-of-00006.safetensors",
"model.layers.22.self_attn.k_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.22.self_attn.o_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.22.self_attn.q_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.22.self_attn.rotary_emb.inv_freq": "model-00003-of-00006.safetensors",
"model.layers.22.self_attn.v_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.23.input_layernorm.weight": "model-00004-of-00006.safetensors",
"model.layers.23.mlp.down_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.23.mlp.gate_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.23.mlp.up_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.23.post_attention_layernorm.weight": "model-00004-of-00006.safetensors",
"model.layers.23.self_attn.k_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.23.self_attn.o_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.23.self_attn.q_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.23.self_attn.rotary_emb.inv_freq": "model-00004-of-00006.safetensors",
"model.layers.23.self_attn.v_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.24.input_layernorm.weight": "model-00004-of-00006.safetensors",
"model.layers.24.mlp.down_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.24.mlp.gate_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.24.mlp.up_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.24.post_attention_layernorm.weight": "model-00004-of-00006.safetensors",
"model.layers.24.self_attn.k_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.24.self_attn.o_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.24.self_attn.q_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.24.self_attn.rotary_emb.inv_freq": "model-00004-of-00006.safetensors",
"model.layers.24.self_attn.v_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.25.input_layernorm.weight": "model-00004-of-00006.safetensors",
"model.layers.25.mlp.down_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.25.mlp.gate_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.25.mlp.up_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.25.post_attention_layernorm.weight": "model-00004-of-00006.safetensors",
"model.layers.25.self_attn.k_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.25.self_attn.o_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.25.self_attn.q_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.25.self_attn.rotary_emb.inv_freq": "model-00004-of-00006.safetensors",
"model.layers.25.self_attn.v_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.26.input_layernorm.weight": "model-00004-of-00006.safetensors",
"model.layers.26.mlp.down_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.26.mlp.gate_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.26.mlp.up_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.26.post_attention_layernorm.weight": "model-00004-of-00006.safetensors",
"model.layers.26.self_attn.k_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.26.self_attn.o_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.26.self_attn.q_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.26.self_attn.rotary_emb.inv_freq": "model-00004-of-00006.safetensors",
"model.layers.26.self_attn.v_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.27.input_layernorm.weight": "model-00004-of-00006.safetensors",
"model.layers.27.mlp.down_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.27.mlp.gate_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.27.mlp.up_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.27.post_attention_layernorm.weight": "model-00004-of-00006.safetensors",
"model.layers.27.self_attn.k_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.27.self_attn.o_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.27.self_attn.q_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.27.self_attn.rotary_emb.inv_freq": "model-00004-of-00006.safetensors",
"model.layers.27.self_attn.v_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.28.input_layernorm.weight": "model-00004-of-00006.safetensors",
"model.layers.28.mlp.down_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.28.mlp.gate_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.28.mlp.up_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.28.post_attention_layernorm.weight": "model-00004-of-00006.safetensors",
"model.layers.28.self_attn.k_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.28.self_attn.o_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.28.self_attn.q_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.28.self_attn.rotary_emb.inv_freq": "model-00004-of-00006.safetensors",
"model.layers.28.self_attn.v_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.29.input_layernorm.weight": "model-00004-of-00006.safetensors",
"model.layers.29.mlp.down_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.29.mlp.gate_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.29.mlp.up_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.29.post_attention_layernorm.weight": "model-00004-of-00006.safetensors",
"model.layers.29.self_attn.k_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.29.self_attn.o_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.29.self_attn.q_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.29.self_attn.rotary_emb.inv_freq": "model-00004-of-00006.safetensors",
"model.layers.29.self_attn.v_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.3.input_layernorm.weight": "model-00001-of-00006.safetensors",
"model.layers.3.mlp.down_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.3.mlp.gate_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.3.mlp.up_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.3.post_attention_layernorm.weight": "model-00001-of-00006.safetensors",
"model.layers.3.self_attn.k_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.3.self_attn.o_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.3.self_attn.q_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.3.self_attn.rotary_emb.inv_freq": "model-00001-of-00006.safetensors",
"model.layers.3.self_attn.v_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.30.input_layernorm.weight": "model-00005-of-00006.safetensors",
"model.layers.30.mlp.down_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.30.mlp.gate_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.30.mlp.up_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.30.post_attention_layernorm.weight": "model-00005-of-00006.safetensors",
"model.layers.30.self_attn.k_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.30.self_attn.o_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.30.self_attn.q_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.30.self_attn.rotary_emb.inv_freq": "model-00004-of-00006.safetensors",
"model.layers.30.self_attn.v_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.31.input_layernorm.weight": "model-00005-of-00006.safetensors",
"model.layers.31.mlp.down_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.31.mlp.gate_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.31.mlp.up_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.31.post_attention_layernorm.weight": "model-00005-of-00006.safetensors",
"model.layers.31.self_attn.k_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.31.self_attn.o_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.31.self_attn.q_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.31.self_attn.rotary_emb.inv_freq": "model-00005-of-00006.safetensors",
"model.layers.31.self_attn.v_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.32.input_layernorm.weight": "model-00005-of-00006.safetensors",
"model.layers.32.mlp.down_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.32.mlp.gate_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.32.mlp.up_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.32.post_attention_layernorm.weight": "model-00005-of-00006.safetensors",
"model.layers.32.self_attn.k_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.32.self_attn.o_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.32.self_attn.q_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.32.self_attn.rotary_emb.inv_freq": "model-00005-of-00006.safetensors",
"model.layers.32.self_attn.v_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.33.input_layernorm.weight": "model-00005-of-00006.safetensors",
"model.layers.33.mlp.down_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.33.mlp.gate_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.33.mlp.up_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.33.post_attention_layernorm.weight": "model-00005-of-00006.safetensors",
"model.layers.33.self_attn.k_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.33.self_attn.o_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.33.self_attn.q_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.33.self_attn.rotary_emb.inv_freq": "model-00005-of-00006.safetensors",
"model.layers.33.self_attn.v_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.34.input_layernorm.weight": "model-00005-of-00006.safetensors",
"model.layers.34.mlp.down_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.34.mlp.gate_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.34.mlp.up_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.34.post_attention_layernorm.weight": "model-00005-of-00006.safetensors",
"model.layers.34.self_attn.k_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.34.self_attn.o_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.34.self_attn.q_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.34.self_attn.rotary_emb.inv_freq": "model-00005-of-00006.safetensors",
"model.layers.34.self_attn.v_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.35.input_layernorm.weight": "model-00005-of-00006.safetensors",
"model.layers.35.mlp.down_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.35.mlp.gate_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.35.mlp.up_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.35.post_attention_layernorm.weight": "model-00005-of-00006.safetensors",
"model.layers.35.self_attn.k_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.35.self_attn.o_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.35.self_attn.q_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.35.self_attn.rotary_emb.inv_freq": "model-00005-of-00006.safetensors",
"model.layers.35.self_attn.v_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.36.input_layernorm.weight": "model-00005-of-00006.safetensors",
"model.layers.36.mlp.down_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.36.mlp.gate_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.36.mlp.up_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.36.post_attention_layernorm.weight": "model-00005-of-00006.safetensors",
"model.layers.36.self_attn.k_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.36.self_attn.o_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.36.self_attn.q_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.36.self_attn.rotary_emb.inv_freq": "model-00005-of-00006.safetensors",
"model.layers.36.self_attn.v_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.37.input_layernorm.weight": "model-00005-of-00006.safetensors",
"model.layers.37.mlp.down_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.37.mlp.gate_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.37.mlp.up_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.37.post_attention_layernorm.weight": "model-00005-of-00006.safetensors",
"model.layers.37.self_attn.k_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.37.self_attn.o_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.37.self_attn.q_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.37.self_attn.rotary_emb.inv_freq": "model-00005-of-00006.safetensors",
"model.layers.37.self_attn.v_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.38.input_layernorm.weight": "model-00006-of-00006.safetensors",
"model.layers.38.mlp.down_proj.weight": "model-00006-of-00006.safetensors",
"model.layers.38.mlp.gate_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.38.mlp.up_proj.weight": "model-00006-of-00006.safetensors",
"model.layers.38.post_attention_layernorm.weight": "model-00006-of-00006.safetensors",
"model.layers.38.self_attn.k_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.38.self_attn.o_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.38.self_attn.q_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.38.self_attn.rotary_emb.inv_freq": "model-00005-of-00006.safetensors",
"model.layers.38.self_attn.v_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.39.input_layernorm.weight": "model-00006-of-00006.safetensors",
"model.layers.39.mlp.down_proj.weight": "model-00006-of-00006.safetensors",
"model.layers.39.mlp.gate_proj.weight": "model-00006-of-00006.safetensors",
"model.layers.39.mlp.up_proj.weight": "model-00006-of-00006.safetensors",
"model.layers.39.post_attention_layernorm.weight": "model-00006-of-00006.safetensors",
"model.layers.39.self_attn.k_proj.weight": "model-00006-of-00006.safetensors",
"model.layers.39.self_attn.o_proj.weight": "model-00006-of-00006.safetensors",
"model.layers.39.self_attn.q_proj.weight": "model-00006-of-00006.safetensors",
"model.layers.39.self_attn.rotary_emb.inv_freq": "model-00006-of-00006.safetensors",
"model.layers.39.self_attn.v_proj.weight": "model-00006-of-00006.safetensors",
"model.layers.4.input_layernorm.weight": "model-00001-of-00006.safetensors",
"model.layers.4.mlp.down_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.4.mlp.gate_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.4.mlp.up_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.4.post_attention_layernorm.weight": "model-00001-of-00006.safetensors",
"model.layers.4.self_attn.k_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.4.self_attn.o_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.4.self_attn.q_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.4.self_attn.rotary_emb.inv_freq": "model-00001-of-00006.safetensors",
"model.layers.4.self_attn.v_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.5.input_layernorm.weight": "model-00001-of-00006.safetensors",
"model.layers.5.mlp.down_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.5.mlp.gate_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.5.mlp.up_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.5.post_attention_layernorm.weight": "model-00001-of-00006.safetensors",
"model.layers.5.self_attn.k_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.5.self_attn.o_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.5.self_attn.q_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.5.self_attn.rotary_emb.inv_freq": "model-00001-of-00006.safetensors",
"model.layers.5.self_attn.v_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.6.input_layernorm.weight": "model-00001-of-00006.safetensors",
"model.layers.6.mlp.down_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.6.mlp.gate_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.6.mlp.up_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.6.post_attention_layernorm.weight": "model-00001-of-00006.safetensors",
"model.layers.6.self_attn.k_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.6.self_attn.o_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.6.self_attn.q_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.6.self_attn.rotary_emb.inv_freq": "model-00001-of-00006.safetensors",
"model.layers.6.self_attn.v_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.7.input_layernorm.weight": "model-00002-of-00006.safetensors",
"model.layers.7.mlp.down_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.7.mlp.gate_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.7.mlp.up_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.7.post_attention_layernorm.weight": "model-00002-of-00006.safetensors",
"model.layers.7.self_attn.k_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.7.self_attn.o_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.7.self_attn.q_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.7.self_attn.rotary_emb.inv_freq": "model-00001-of-00006.safetensors",
"model.layers.7.self_attn.v_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.8.input_layernorm.weight": "model-00002-of-00006.safetensors",
"model.layers.8.mlp.down_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.8.mlp.gate_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.8.mlp.up_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.8.post_attention_layernorm.weight": "model-00002-of-00006.safetensors",
"model.layers.8.self_attn.k_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.8.self_attn.o_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.8.self_attn.q_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.8.self_attn.rotary_emb.inv_freq": "model-00002-of-00006.safetensors",
"model.layers.8.self_attn.v_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.9.input_layernorm.weight": "model-00002-of-00006.safetensors",
"model.layers.9.mlp.down_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.9.mlp.gate_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.9.mlp.up_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.9.post_attention_layernorm.weight": "model-00002-of-00006.safetensors",
"model.layers.9.self_attn.k_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.9.self_attn.o_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.9.self_attn.q_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.9.self_attn.rotary_emb.inv_freq": "model-00002-of-00006.safetensors",
"model.layers.9.self_attn.v_proj.weight": "model-00002-of-00006.safetensors",
"model.norm.weight": "model-00006-of-00006.safetensors"
}
}
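The `weight_map` above tells loaders which shard holds each tensor; `transformers` resolves it automatically, but a single tensor can also be fetched by hand. A minimal sketch, assuming the index and shards sit in the current directory:
```python
import json
from safetensors import safe_open

with open("model.safetensors.index.json") as f:
    index = json.load(f)

name = "model.layers.0.self_attn.q_proj.weight"
shard = index["weight_map"][name]   # -> "model-00001-of-00006.safetensors"
with safe_open(shard, framework="pt") as fh:
    tensor = fh.get_tensor(name)    # reads only this tensor from the shard
print(tensor.shape)                 # torch.Size([5120, 5120])
```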

pytorch_model-00001-of-00006.bin Normal file (3 lines)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:1d7d4cb8863ebfac0dd416816e2ca360f72aaca7bd5e8a55020196d63e4f4087
size 9956564363

pytorch_model-00002-of-00006.bin Normal file (3 lines)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:dbee8e21af733486eafc0bb60e2fe4f50f08704c8cd94f270d1f05741f675544
size 9940856385

pytorch_model-00003-of-00006.bin Normal file (3 lines)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d99f009ce74193a86aa037ca3c71c9946ecd1431eda94a594eecdec8cacc6779
size 9940856943

pytorch_model-00004-of-00006.bin Normal file (3 lines)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:42f5301aab0a7596dc1a7f5ebcde87854a644840f1eeb1cd04c91443fc4a466d
size 9867415289

pytorch_model-00005-of-00006.bin Normal file (3 lines)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e212a12031939cfb4db3c06b77f07859356cf92e8aa57a7e0fb57c5b85a5c53b
size 9867456961

pytorch_model-00006-of-00006.bin Normal file (3 lines)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:5866636a093aa84792f5b35c389915e262d7d9e983b6196aa315920efc00a130
size 2490496687

pytorch_model.bin.index.json Normal file (410 lines)

@@ -0,0 +1,410 @@
{
"metadata": {
"total_size": 52063508480
},
"weight_map": {
"lm_head.weight": "pytorch_model-00006-of-00006.bin",
"model.embed_tokens.weight": "pytorch_model-00001-of-00006.bin",
"model.layers.0.input_layernorm.weight": "pytorch_model-00001-of-00006.bin",
"model.layers.0.mlp.down_proj.weight": "pytorch_model-00001-of-00006.bin",
"model.layers.0.mlp.gate_proj.weight": "pytorch_model-00001-of-00006.bin",
"model.layers.0.mlp.up_proj.weight": "pytorch_model-00001-of-00006.bin",
"model.layers.0.post_attention_layernorm.weight": "pytorch_model-00001-of-00006.bin",
"model.layers.0.self_attn.k_proj.weight": "pytorch_model-00001-of-00006.bin",
"model.layers.0.self_attn.o_proj.weight": "pytorch_model-00001-of-00006.bin",
"model.layers.0.self_attn.q_proj.weight": "pytorch_model-00001-of-00006.bin",
"model.layers.0.self_attn.rotary_emb.inv_freq": "pytorch_model-00001-of-00006.bin",
"model.layers.0.self_attn.v_proj.weight": "pytorch_model-00001-of-00006.bin",
"model.layers.1.input_layernorm.weight": "pytorch_model-00001-of-00006.bin",
"model.layers.1.mlp.down_proj.weight": "pytorch_model-00001-of-00006.bin",
"model.layers.1.mlp.gate_proj.weight": "pytorch_model-00001-of-00006.bin",
"model.layers.1.mlp.up_proj.weight": "pytorch_model-00001-of-00006.bin",
"model.layers.1.post_attention_layernorm.weight": "pytorch_model-00001-of-00006.bin",
"model.layers.1.self_attn.k_proj.weight": "pytorch_model-00001-of-00006.bin",
"model.layers.1.self_attn.o_proj.weight": "pytorch_model-00001-of-00006.bin",
"model.layers.1.self_attn.q_proj.weight": "pytorch_model-00001-of-00006.bin",
"model.layers.1.self_attn.rotary_emb.inv_freq": "pytorch_model-00001-of-00006.bin",
"model.layers.1.self_attn.v_proj.weight": "pytorch_model-00001-of-00006.bin",
"model.layers.10.input_layernorm.weight": "pytorch_model-00002-of-00006.bin",
"model.layers.10.mlp.down_proj.weight": "pytorch_model-00002-of-00006.bin",
"model.layers.10.mlp.gate_proj.weight": "pytorch_model-00002-of-00006.bin",
"model.layers.10.mlp.up_proj.weight": "pytorch_model-00002-of-00006.bin",
"model.layers.10.post_attention_layernorm.weight": "pytorch_model-00002-of-00006.bin",
"model.layers.10.self_attn.k_proj.weight": "pytorch_model-00002-of-00006.bin",
"model.layers.10.self_attn.o_proj.weight": "pytorch_model-00002-of-00006.bin",
"model.layers.10.self_attn.q_proj.weight": "pytorch_model-00002-of-00006.bin",
"model.layers.10.self_attn.rotary_emb.inv_freq": "pytorch_model-00002-of-00006.bin",
"model.layers.10.self_attn.v_proj.weight": "pytorch_model-00002-of-00006.bin",
"model.layers.11.input_layernorm.weight": "pytorch_model-00002-of-00006.bin",
"model.layers.11.mlp.down_proj.weight": "pytorch_model-00002-of-00006.bin",
"model.layers.11.mlp.gate_proj.weight": "pytorch_model-00002-of-00006.bin",
"model.layers.11.mlp.up_proj.weight": "pytorch_model-00002-of-00006.bin",
"model.layers.11.post_attention_layernorm.weight": "pytorch_model-00002-of-00006.bin",
"model.layers.11.self_attn.k_proj.weight": "pytorch_model-00002-of-00006.bin",
"model.layers.11.self_attn.o_proj.weight": "pytorch_model-00002-of-00006.bin",
"model.layers.11.self_attn.q_proj.weight": "pytorch_model-00002-of-00006.bin",
"model.layers.11.self_attn.rotary_emb.inv_freq": "pytorch_model-00002-of-00006.bin",
"model.layers.11.self_attn.v_proj.weight": "pytorch_model-00002-of-00006.bin",
"model.layers.12.input_layernorm.weight": "pytorch_model-00002-of-00006.bin",
"model.layers.12.mlp.down_proj.weight": "pytorch_model-00002-of-00006.bin",
"model.layers.12.mlp.gate_proj.weight": "pytorch_model-00002-of-00006.bin",
"model.layers.12.mlp.up_proj.weight": "pytorch_model-00002-of-00006.bin",
"model.layers.12.post_attention_layernorm.weight": "pytorch_model-00002-of-00006.bin",
"model.layers.12.self_attn.k_proj.weight": "pytorch_model-00002-of-00006.bin",
"model.layers.12.self_attn.o_proj.weight": "pytorch_model-00002-of-00006.bin",
"model.layers.12.self_attn.q_proj.weight": "pytorch_model-00002-of-00006.bin",
"model.layers.12.self_attn.rotary_emb.inv_freq": "pytorch_model-00002-of-00006.bin",
"model.layers.12.self_attn.v_proj.weight": "pytorch_model-00002-of-00006.bin",
"model.layers.13.input_layernorm.weight": "pytorch_model-00002-of-00006.bin",
"model.layers.13.mlp.down_proj.weight": "pytorch_model-00002-of-00006.bin",
"model.layers.13.mlp.gate_proj.weight": "pytorch_model-00002-of-00006.bin",
"model.layers.13.mlp.up_proj.weight": "pytorch_model-00002-of-00006.bin",
"model.layers.13.post_attention_layernorm.weight": "pytorch_model-00002-of-00006.bin",
"model.layers.13.self_attn.k_proj.weight": "pytorch_model-00002-of-00006.bin",
"model.layers.13.self_attn.o_proj.weight": "pytorch_model-00002-of-00006.bin",
"model.layers.13.self_attn.q_proj.weight": "pytorch_model-00002-of-00006.bin",
"model.layers.13.self_attn.rotary_emb.inv_freq": "pytorch_model-00002-of-00006.bin",
"model.layers.13.self_attn.v_proj.weight": "pytorch_model-00002-of-00006.bin",
"model.layers.14.input_layernorm.weight": "pytorch_model-00002-of-00006.bin",
"model.layers.14.mlp.down_proj.weight": "pytorch_model-00002-of-00006.bin",
"model.layers.14.mlp.gate_proj.weight": "pytorch_model-00002-of-00006.bin",
"model.layers.14.mlp.up_proj.weight": "pytorch_model-00002-of-00006.bin",
"model.layers.14.post_attention_layernorm.weight": "pytorch_model-00002-of-00006.bin",
"model.layers.14.self_attn.k_proj.weight": "pytorch_model-00002-of-00006.bin",
"model.layers.14.self_attn.o_proj.weight": "pytorch_model-00002-of-00006.bin",
"model.layers.14.self_attn.q_proj.weight": "pytorch_model-00002-of-00006.bin",
"model.layers.14.self_attn.rotary_emb.inv_freq": "pytorch_model-00002-of-00006.bin",
"model.layers.14.self_attn.v_proj.weight": "pytorch_model-00002-of-00006.bin",
"model.layers.15.input_layernorm.weight": "pytorch_model-00003-of-00006.bin",
"model.layers.15.mlp.down_proj.weight": "pytorch_model-00003-of-00006.bin",
"model.layers.15.mlp.gate_proj.weight": "pytorch_model-00003-of-00006.bin",
"model.layers.15.mlp.up_proj.weight": "pytorch_model-00003-of-00006.bin",
"model.layers.15.post_attention_layernorm.weight": "pytorch_model-00003-of-00006.bin",
"model.layers.15.self_attn.k_proj.weight": "pytorch_model-00002-of-00006.bin",
"model.layers.15.self_attn.o_proj.weight": "pytorch_model-00003-of-00006.bin",
"model.layers.15.self_attn.q_proj.weight": "pytorch_model-00002-of-00006.bin",
"model.layers.15.self_attn.rotary_emb.inv_freq": "pytorch_model-00003-of-00006.bin",
"model.layers.15.self_attn.v_proj.weight": "pytorch_model-00003-of-00006.bin",
"model.layers.16.input_layernorm.weight": "pytorch_model-00003-of-00006.bin",
"model.layers.16.mlp.down_proj.weight": "pytorch_model-00003-of-00006.bin",
"model.layers.16.mlp.gate_proj.weight": "pytorch_model-00003-of-00006.bin",
"model.layers.16.mlp.up_proj.weight": "pytorch_model-00003-of-00006.bin",
"model.layers.16.post_attention_layernorm.weight": "pytorch_model-00003-of-00006.bin",
"model.layers.16.self_attn.k_proj.weight": "pytorch_model-00003-of-00006.bin",
"model.layers.16.self_attn.o_proj.weight": "pytorch_model-00003-of-00006.bin",
"model.layers.16.self_attn.q_proj.weight": "pytorch_model-00003-of-00006.bin",
"model.layers.16.self_attn.rotary_emb.inv_freq": "pytorch_model-00003-of-00006.bin",
"model.layers.16.self_attn.v_proj.weight": "pytorch_model-00003-of-00006.bin",
"model.layers.17.input_layernorm.weight": "pytorch_model-00003-of-00006.bin",
"model.layers.17.mlp.down_proj.weight": "pytorch_model-00003-of-00006.bin",
"model.layers.17.mlp.gate_proj.weight": "pytorch_model-00003-of-00006.bin",
"model.layers.17.mlp.up_proj.weight": "pytorch_model-00003-of-00006.bin",
"model.layers.17.post_attention_layernorm.weight": "pytorch_model-00003-of-00006.bin",
"model.layers.17.self_attn.k_proj.weight": "pytorch_model-00003-of-00006.bin",
"model.layers.17.self_attn.o_proj.weight": "pytorch_model-00003-of-00006.bin",
"model.layers.17.self_attn.q_proj.weight": "pytorch_model-00003-of-00006.bin",
"model.layers.17.self_attn.rotary_emb.inv_freq": "pytorch_model-00003-of-00006.bin",
"model.layers.17.self_attn.v_proj.weight": "pytorch_model-00003-of-00006.bin",
"model.layers.18.input_layernorm.weight": "pytorch_model-00003-of-00006.bin",
"model.layers.18.mlp.down_proj.weight": "pytorch_model-00003-of-00006.bin",
"model.layers.18.mlp.gate_proj.weight": "pytorch_model-00003-of-00006.bin",
"model.layers.18.mlp.up_proj.weight": "pytorch_model-00003-of-00006.bin",
"model.layers.18.post_attention_layernorm.weight": "pytorch_model-00003-of-00006.bin",
"model.layers.18.self_attn.k_proj.weight": "pytorch_model-00003-of-00006.bin",
"model.layers.18.self_attn.o_proj.weight": "pytorch_model-00003-of-00006.bin",
"model.layers.18.self_attn.q_proj.weight": "pytorch_model-00003-of-00006.bin",
"model.layers.18.self_attn.rotary_emb.inv_freq": "pytorch_model-00003-of-00006.bin",
"model.layers.18.self_attn.v_proj.weight": "pytorch_model-00003-of-00006.bin",
"model.layers.19.input_layernorm.weight": "pytorch_model-00003-of-00006.bin",
"model.layers.19.mlp.down_proj.weight": "pytorch_model-00003-of-00006.bin",
"model.layers.19.mlp.gate_proj.weight": "pytorch_model-00003-of-00006.bin",
"model.layers.19.mlp.up_proj.weight": "pytorch_model-00003-of-00006.bin",
"model.layers.19.post_attention_layernorm.weight": "pytorch_model-00003-of-00006.bin",
"model.layers.19.self_attn.k_proj.weight": "pytorch_model-00003-of-00006.bin",
"model.layers.19.self_attn.o_proj.weight": "pytorch_model-00003-of-00006.bin",
"model.layers.19.self_attn.q_proj.weight": "pytorch_model-00003-of-00006.bin",
"model.layers.19.self_attn.rotary_emb.inv_freq": "pytorch_model-00003-of-00006.bin",
"model.layers.19.self_attn.v_proj.weight": "pytorch_model-00003-of-00006.bin",
"model.layers.2.input_layernorm.weight": "pytorch_model-00001-of-00006.bin",
"model.layers.2.mlp.down_proj.weight": "pytorch_model-00001-of-00006.bin",
"model.layers.2.mlp.gate_proj.weight": "pytorch_model-00001-of-00006.bin",
"model.layers.2.mlp.up_proj.weight": "pytorch_model-00001-of-00006.bin",
"model.layers.2.post_attention_layernorm.weight": "pytorch_model-00001-of-00006.bin",
"model.layers.2.self_attn.k_proj.weight": "pytorch_model-00001-of-00006.bin",
"model.layers.2.self_attn.o_proj.weight": "pytorch_model-00001-of-00006.bin",
"model.layers.2.self_attn.q_proj.weight": "pytorch_model-00001-of-00006.bin",
"model.layers.2.self_attn.rotary_emb.inv_freq": "pytorch_model-00001-of-00006.bin",
"model.layers.2.self_attn.v_proj.weight": "pytorch_model-00001-of-00006.bin",
"model.layers.20.input_layernorm.weight": "pytorch_model-00003-of-00006.bin",
"model.layers.20.mlp.down_proj.weight": "pytorch_model-00003-of-00006.bin",
"model.layers.20.mlp.gate_proj.weight": "pytorch_model-00003-of-00006.bin",
"model.layers.20.mlp.up_proj.weight": "pytorch_model-00003-of-00006.bin",
"model.layers.20.post_attention_layernorm.weight": "pytorch_model-00003-of-00006.bin",
"model.layers.20.self_attn.k_proj.weight": "pytorch_model-00003-of-00006.bin",
"model.layers.20.self_attn.o_proj.weight": "pytorch_model-00003-of-00006.bin",
"model.layers.20.self_attn.q_proj.weight": "pytorch_model-00003-of-00006.bin",
"model.layers.20.self_attn.rotary_emb.inv_freq": "pytorch_model-00003-of-00006.bin",
"model.layers.20.self_attn.v_proj.weight": "pytorch_model-00003-of-00006.bin",
"model.layers.21.input_layernorm.weight": "pytorch_model-00003-of-00006.bin",
"model.layers.21.mlp.down_proj.weight": "pytorch_model-00003-of-00006.bin",
"model.layers.21.mlp.gate_proj.weight": "pytorch_model-00003-of-00006.bin",
"model.layers.21.mlp.up_proj.weight": "pytorch_model-00003-of-00006.bin",
"model.layers.21.post_attention_layernorm.weight": "pytorch_model-00003-of-00006.bin",
"model.layers.21.self_attn.k_proj.weight": "pytorch_model-00003-of-00006.bin",
"model.layers.21.self_attn.o_proj.weight": "pytorch_model-00003-of-00006.bin",
"model.layers.21.self_attn.q_proj.weight": "pytorch_model-00003-of-00006.bin",
"model.layers.21.self_attn.rotary_emb.inv_freq": "pytorch_model-00003-of-00006.bin",
"model.layers.21.self_attn.v_proj.weight": "pytorch_model-00003-of-00006.bin",
"model.layers.22.input_layernorm.weight": "pytorch_model-00003-of-00006.bin",
"model.layers.22.mlp.down_proj.weight": "pytorch_model-00003-of-00006.bin",
"model.layers.22.mlp.gate_proj.weight": "pytorch_model-00003-of-00006.bin",
"model.layers.22.mlp.up_proj.weight": "pytorch_model-00003-of-00006.bin",
"model.layers.22.post_attention_layernorm.weight": "pytorch_model-00003-of-00006.bin",
"model.layers.22.self_attn.k_proj.weight": "pytorch_model-00003-of-00006.bin",
"model.layers.22.self_attn.o_proj.weight": "pytorch_model-00003-of-00006.bin",
"model.layers.22.self_attn.q_proj.weight": "pytorch_model-00003-of-00006.bin",
"model.layers.22.self_attn.rotary_emb.inv_freq": "pytorch_model-00003-of-00006.bin",
"model.layers.22.self_attn.v_proj.weight": "pytorch_model-00003-of-00006.bin",
"model.layers.23.input_layernorm.weight": "pytorch_model-00004-of-00006.bin",
"model.layers.23.mlp.down_proj.weight": "pytorch_model-00004-of-00006.bin",
"model.layers.23.mlp.gate_proj.weight": "pytorch_model-00004-of-00006.bin",
"model.layers.23.mlp.up_proj.weight": "pytorch_model-00004-of-00006.bin",
"model.layers.23.post_attention_layernorm.weight": "pytorch_model-00004-of-00006.bin",
"model.layers.23.self_attn.k_proj.weight": "pytorch_model-00004-of-00006.bin",
"model.layers.23.self_attn.o_proj.weight": "pytorch_model-00004-of-00006.bin",
"model.layers.23.self_attn.q_proj.weight": "pytorch_model-00004-of-00006.bin",
"model.layers.23.self_attn.rotary_emb.inv_freq": "pytorch_model-00004-of-00006.bin",
"model.layers.23.self_attn.v_proj.weight": "pytorch_model-00004-of-00006.bin",
"model.layers.24.input_layernorm.weight": "pytorch_model-00004-of-00006.bin",
"model.layers.24.mlp.down_proj.weight": "pytorch_model-00004-of-00006.bin",
"model.layers.24.mlp.gate_proj.weight": "pytorch_model-00004-of-00006.bin",
"model.layers.24.mlp.up_proj.weight": "pytorch_model-00004-of-00006.bin",
"model.layers.24.post_attention_layernorm.weight": "pytorch_model-00004-of-00006.bin",
"model.layers.24.self_attn.k_proj.weight": "pytorch_model-00004-of-00006.bin",
"model.layers.24.self_attn.o_proj.weight": "pytorch_model-00004-of-00006.bin",
"model.layers.24.self_attn.q_proj.weight": "pytorch_model-00004-of-00006.bin",
"model.layers.24.self_attn.rotary_emb.inv_freq": "pytorch_model-00004-of-00006.bin",
"model.layers.24.self_attn.v_proj.weight": "pytorch_model-00004-of-00006.bin",
"model.layers.25.input_layernorm.weight": "pytorch_model-00004-of-00006.bin",
"model.layers.25.mlp.down_proj.weight": "pytorch_model-00004-of-00006.bin",
"model.layers.25.mlp.gate_proj.weight": "pytorch_model-00004-of-00006.bin",
"model.layers.25.mlp.up_proj.weight": "pytorch_model-00004-of-00006.bin",
"model.layers.25.post_attention_layernorm.weight": "pytorch_model-00004-of-00006.bin",
"model.layers.25.self_attn.k_proj.weight": "pytorch_model-00004-of-00006.bin",
"model.layers.25.self_attn.o_proj.weight": "pytorch_model-00004-of-00006.bin",
"model.layers.25.self_attn.q_proj.weight": "pytorch_model-00004-of-00006.bin",
"model.layers.25.self_attn.rotary_emb.inv_freq": "pytorch_model-00004-of-00006.bin",
"model.layers.25.self_attn.v_proj.weight": "pytorch_model-00004-of-00006.bin",
"model.layers.26.input_layernorm.weight": "pytorch_model-00004-of-00006.bin",
"model.layers.26.mlp.down_proj.weight": "pytorch_model-00004-of-00006.bin",
"model.layers.26.mlp.gate_proj.weight": "pytorch_model-00004-of-00006.bin",
"model.layers.26.mlp.up_proj.weight": "pytorch_model-00004-of-00006.bin",
"model.layers.26.post_attention_layernorm.weight": "pytorch_model-00004-of-00006.bin",
"model.layers.26.self_attn.k_proj.weight": "pytorch_model-00004-of-00006.bin",
"model.layers.26.self_attn.o_proj.weight": "pytorch_model-00004-of-00006.bin",
"model.layers.26.self_attn.q_proj.weight": "pytorch_model-00004-of-00006.bin",
"model.layers.26.self_attn.rotary_emb.inv_freq": "pytorch_model-00004-of-00006.bin",
"model.layers.26.self_attn.v_proj.weight": "pytorch_model-00004-of-00006.bin",
"model.layers.27.input_layernorm.weight": "pytorch_model-00004-of-00006.bin",
"model.layers.27.mlp.down_proj.weight": "pytorch_model-00004-of-00006.bin",
"model.layers.27.mlp.gate_proj.weight": "pytorch_model-00004-of-00006.bin",
"model.layers.27.mlp.up_proj.weight": "pytorch_model-00004-of-00006.bin",
"model.layers.27.post_attention_layernorm.weight": "pytorch_model-00004-of-00006.bin",
"model.layers.27.self_attn.k_proj.weight": "pytorch_model-00004-of-00006.bin",
"model.layers.27.self_attn.o_proj.weight": "pytorch_model-00004-of-00006.bin",
"model.layers.27.self_attn.q_proj.weight": "pytorch_model-00004-of-00006.bin",
"model.layers.27.self_attn.rotary_emb.inv_freq": "pytorch_model-00004-of-00006.bin",
"model.layers.27.self_attn.v_proj.weight": "pytorch_model-00004-of-00006.bin",
"model.layers.28.input_layernorm.weight": "pytorch_model-00004-of-00006.bin",
"model.layers.28.mlp.down_proj.weight": "pytorch_model-00004-of-00006.bin",
"model.layers.28.mlp.gate_proj.weight": "pytorch_model-00004-of-00006.bin",
"model.layers.28.mlp.up_proj.weight": "pytorch_model-00004-of-00006.bin",
"model.layers.28.post_attention_layernorm.weight": "pytorch_model-00004-of-00006.bin",
"model.layers.28.self_attn.k_proj.weight": "pytorch_model-00004-of-00006.bin",
"model.layers.28.self_attn.o_proj.weight": "pytorch_model-00004-of-00006.bin",
"model.layers.28.self_attn.q_proj.weight": "pytorch_model-00004-of-00006.bin",
"model.layers.28.self_attn.rotary_emb.inv_freq": "pytorch_model-00004-of-00006.bin",
"model.layers.28.self_attn.v_proj.weight": "pytorch_model-00004-of-00006.bin",
"model.layers.29.input_layernorm.weight": "pytorch_model-00004-of-00006.bin",
"model.layers.29.mlp.down_proj.weight": "pytorch_model-00004-of-00006.bin",
"model.layers.29.mlp.gate_proj.weight": "pytorch_model-00004-of-00006.bin",
"model.layers.29.mlp.up_proj.weight": "pytorch_model-00004-of-00006.bin",
"model.layers.29.post_attention_layernorm.weight": "pytorch_model-00004-of-00006.bin",
"model.layers.29.self_attn.k_proj.weight": "pytorch_model-00004-of-00006.bin",
"model.layers.29.self_attn.o_proj.weight": "pytorch_model-00004-of-00006.bin",
"model.layers.29.self_attn.q_proj.weight": "pytorch_model-00004-of-00006.bin",
"model.layers.29.self_attn.rotary_emb.inv_freq": "pytorch_model-00004-of-00006.bin",
"model.layers.29.self_attn.v_proj.weight": "pytorch_model-00004-of-00006.bin",
"model.layers.3.input_layernorm.weight": "pytorch_model-00001-of-00006.bin",
"model.layers.3.mlp.down_proj.weight": "pytorch_model-00001-of-00006.bin",
"model.layers.3.mlp.gate_proj.weight": "pytorch_model-00001-of-00006.bin",
"model.layers.3.mlp.up_proj.weight": "pytorch_model-00001-of-00006.bin",
"model.layers.3.post_attention_layernorm.weight": "pytorch_model-00001-of-00006.bin",
"model.layers.3.self_attn.k_proj.weight": "pytorch_model-00001-of-00006.bin",
"model.layers.3.self_attn.o_proj.weight": "pytorch_model-00001-of-00006.bin",
"model.layers.3.self_attn.q_proj.weight": "pytorch_model-00001-of-00006.bin",
"model.layers.3.self_attn.rotary_emb.inv_freq": "pytorch_model-00001-of-00006.bin",
"model.layers.3.self_attn.v_proj.weight": "pytorch_model-00001-of-00006.bin",
"model.layers.30.input_layernorm.weight": "pytorch_model-00005-of-00006.bin",
"model.layers.30.mlp.down_proj.weight": "pytorch_model-00004-of-00006.bin",
"model.layers.30.mlp.gate_proj.weight": "pytorch_model-00004-of-00006.bin",
"model.layers.30.mlp.up_proj.weight": "pytorch_model-00005-of-00006.bin",
"model.layers.30.post_attention_layernorm.weight": "pytorch_model-00005-of-00006.bin",
"model.layers.30.self_attn.k_proj.weight": "pytorch_model-00004-of-00006.bin",
"model.layers.30.self_attn.o_proj.weight": "pytorch_model-00004-of-00006.bin",
"model.layers.30.self_attn.q_proj.weight": "pytorch_model-00004-of-00006.bin",
"model.layers.30.self_attn.rotary_emb.inv_freq": "pytorch_model-00004-of-00006.bin",
"model.layers.30.self_attn.v_proj.weight": "pytorch_model-00004-of-00006.bin",
"model.layers.31.input_layernorm.weight": "pytorch_model-00005-of-00006.bin",
"model.layers.31.mlp.down_proj.weight": "pytorch_model-00005-of-00006.bin",
"model.layers.31.mlp.gate_proj.weight": "pytorch_model-00005-of-00006.bin",
"model.layers.31.mlp.up_proj.weight": "pytorch_model-00005-of-00006.bin",
"model.layers.31.post_attention_layernorm.weight": "pytorch_model-00005-of-00006.bin",
"model.layers.31.self_attn.k_proj.weight": "pytorch_model-00005-of-00006.bin",
"model.layers.31.self_attn.o_proj.weight": "pytorch_model-00005-of-00006.bin",
"model.layers.31.self_attn.q_proj.weight": "pytorch_model-00005-of-00006.bin",
"model.layers.31.self_attn.rotary_emb.inv_freq": "pytorch_model-00005-of-00006.bin",
"model.layers.31.self_attn.v_proj.weight": "pytorch_model-00005-of-00006.bin",
"model.layers.32.input_layernorm.weight": "pytorch_model-00005-of-00006.bin",
"model.layers.32.mlp.down_proj.weight": "pytorch_model-00005-of-00006.bin",
"model.layers.32.mlp.gate_proj.weight": "pytorch_model-00005-of-00006.bin",
"model.layers.32.mlp.up_proj.weight": "pytorch_model-00005-of-00006.bin",
"model.layers.32.post_attention_layernorm.weight": "pytorch_model-00005-of-00006.bin",
"model.layers.32.self_attn.k_proj.weight": "pytorch_model-00005-of-00006.bin",
"model.layers.32.self_attn.o_proj.weight": "pytorch_model-00005-of-00006.bin",
"model.layers.32.self_attn.q_proj.weight": "pytorch_model-00005-of-00006.bin",
"model.layers.32.self_attn.rotary_emb.inv_freq": "pytorch_model-00005-of-00006.bin",
"model.layers.32.self_attn.v_proj.weight": "pytorch_model-00005-of-00006.bin",
"model.layers.33.input_layernorm.weight": "pytorch_model-00005-of-00006.bin",
"model.layers.33.mlp.down_proj.weight": "pytorch_model-00005-of-00006.bin",
"model.layers.33.mlp.gate_proj.weight": "pytorch_model-00005-of-00006.bin",
"model.layers.33.mlp.up_proj.weight": "pytorch_model-00005-of-00006.bin",
"model.layers.33.post_attention_layernorm.weight": "pytorch_model-00005-of-00006.bin",
"model.layers.33.self_attn.k_proj.weight": "pytorch_model-00005-of-00006.bin",
"model.layers.33.self_attn.o_proj.weight": "pytorch_model-00005-of-00006.bin",
"model.layers.33.self_attn.q_proj.weight": "pytorch_model-00005-of-00006.bin",
"model.layers.33.self_attn.rotary_emb.inv_freq": "pytorch_model-00005-of-00006.bin",
"model.layers.33.self_attn.v_proj.weight": "pytorch_model-00005-of-00006.bin",
"model.layers.34.input_layernorm.weight": "pytorch_model-00005-of-00006.bin",
"model.layers.34.mlp.down_proj.weight": "pytorch_model-00005-of-00006.bin",
"model.layers.34.mlp.gate_proj.weight": "pytorch_model-00005-of-00006.bin",
"model.layers.34.mlp.up_proj.weight": "pytorch_model-00005-of-00006.bin",
"model.layers.34.post_attention_layernorm.weight": "pytorch_model-00005-of-00006.bin",
"model.layers.34.self_attn.k_proj.weight": "pytorch_model-00005-of-00006.bin",
"model.layers.34.self_attn.o_proj.weight": "pytorch_model-00005-of-00006.bin",
"model.layers.34.self_attn.q_proj.weight": "pytorch_model-00005-of-00006.bin",
"model.layers.34.self_attn.rotary_emb.inv_freq": "pytorch_model-00005-of-00006.bin",
"model.layers.34.self_attn.v_proj.weight": "pytorch_model-00005-of-00006.bin",
"model.layers.35.input_layernorm.weight": "pytorch_model-00005-of-00006.bin",
"model.layers.35.mlp.down_proj.weight": "pytorch_model-00005-of-00006.bin",
"model.layers.35.mlp.gate_proj.weight": "pytorch_model-00005-of-00006.bin",
"model.layers.35.mlp.up_proj.weight": "pytorch_model-00005-of-00006.bin",
"model.layers.35.post_attention_layernorm.weight": "pytorch_model-00005-of-00006.bin",
"model.layers.35.self_attn.k_proj.weight": "pytorch_model-00005-of-00006.bin",
"model.layers.35.self_attn.o_proj.weight": "pytorch_model-00005-of-00006.bin",
"model.layers.35.self_attn.q_proj.weight": "pytorch_model-00005-of-00006.bin",
"model.layers.35.self_attn.rotary_emb.inv_freq": "pytorch_model-00005-of-00006.bin",
"model.layers.35.self_attn.v_proj.weight": "pytorch_model-00005-of-00006.bin",
"model.layers.36.input_layernorm.weight": "pytorch_model-00005-of-00006.bin",
"model.layers.36.mlp.down_proj.weight": "pytorch_model-00005-of-00006.bin",
"model.layers.36.mlp.gate_proj.weight": "pytorch_model-00005-of-00006.bin",
"model.layers.36.mlp.up_proj.weight": "pytorch_model-00005-of-00006.bin",
"model.layers.36.post_attention_layernorm.weight": "pytorch_model-00005-of-00006.bin",
"model.layers.36.self_attn.k_proj.weight": "pytorch_model-00005-of-00006.bin",
"model.layers.36.self_attn.o_proj.weight": "pytorch_model-00005-of-00006.bin",
"model.layers.36.self_attn.q_proj.weight": "pytorch_model-00005-of-00006.bin",
"model.layers.36.self_attn.rotary_emb.inv_freq": "pytorch_model-00005-of-00006.bin",
"model.layers.36.self_attn.v_proj.weight": "pytorch_model-00005-of-00006.bin",
"model.layers.37.input_layernorm.weight": "pytorch_model-00005-of-00006.bin",
"model.layers.37.mlp.down_proj.weight": "pytorch_model-00005-of-00006.bin",
"model.layers.37.mlp.gate_proj.weight": "pytorch_model-00005-of-00006.bin",
"model.layers.37.mlp.up_proj.weight": "pytorch_model-00005-of-00006.bin",
"model.layers.37.post_attention_layernorm.weight": "pytorch_model-00005-of-00006.bin",
"model.layers.37.self_attn.k_proj.weight": "pytorch_model-00005-of-00006.bin",
"model.layers.37.self_attn.o_proj.weight": "pytorch_model-00005-of-00006.bin",
"model.layers.37.self_attn.q_proj.weight": "pytorch_model-00005-of-00006.bin",
"model.layers.37.self_attn.rotary_emb.inv_freq": "pytorch_model-00005-of-00006.bin",
"model.layers.37.self_attn.v_proj.weight": "pytorch_model-00005-of-00006.bin",
"model.layers.38.input_layernorm.weight": "pytorch_model-00006-of-00006.bin",
"model.layers.38.mlp.down_proj.weight": "pytorch_model-00006-of-00006.bin",
"model.layers.38.mlp.gate_proj.weight": "pytorch_model-00005-of-00006.bin",
"model.layers.38.mlp.up_proj.weight": "pytorch_model-00006-of-00006.bin",
"model.layers.38.post_attention_layernorm.weight": "pytorch_model-00006-of-00006.bin",
"model.layers.38.self_attn.k_proj.weight": "pytorch_model-00005-of-00006.bin",
"model.layers.38.self_attn.o_proj.weight": "pytorch_model-00005-of-00006.bin",
"model.layers.38.self_attn.q_proj.weight": "pytorch_model-00005-of-00006.bin",
"model.layers.38.self_attn.rotary_emb.inv_freq": "pytorch_model-00005-of-00006.bin",
"model.layers.38.self_attn.v_proj.weight": "pytorch_model-00005-of-00006.bin",
"model.layers.39.input_layernorm.weight": "pytorch_model-00006-of-00006.bin",
"model.layers.39.mlp.down_proj.weight": "pytorch_model-00006-of-00006.bin",
"model.layers.39.mlp.gate_proj.weight": "pytorch_model-00006-of-00006.bin",
"model.layers.39.mlp.up_proj.weight": "pytorch_model-00006-of-00006.bin",
"model.layers.39.post_attention_layernorm.weight": "pytorch_model-00006-of-00006.bin",
"model.layers.39.self_attn.k_proj.weight": "pytorch_model-00006-of-00006.bin",
"model.layers.39.self_attn.o_proj.weight": "pytorch_model-00006-of-00006.bin",
"model.layers.39.self_attn.q_proj.weight": "pytorch_model-00006-of-00006.bin",
"model.layers.39.self_attn.rotary_emb.inv_freq": "pytorch_model-00006-of-00006.bin",
"model.layers.39.self_attn.v_proj.weight": "pytorch_model-00006-of-00006.bin",
"model.layers.4.input_layernorm.weight": "pytorch_model-00001-of-00006.bin",
"model.layers.4.mlp.down_proj.weight": "pytorch_model-00001-of-00006.bin",
"model.layers.4.mlp.gate_proj.weight": "pytorch_model-00001-of-00006.bin",
"model.layers.4.mlp.up_proj.weight": "pytorch_model-00001-of-00006.bin",
"model.layers.4.post_attention_layernorm.weight": "pytorch_model-00001-of-00006.bin",
"model.layers.4.self_attn.k_proj.weight": "pytorch_model-00001-of-00006.bin",
"model.layers.4.self_attn.o_proj.weight": "pytorch_model-00001-of-00006.bin",
"model.layers.4.self_attn.q_proj.weight": "pytorch_model-00001-of-00006.bin",
"model.layers.4.self_attn.rotary_emb.inv_freq": "pytorch_model-00001-of-00006.bin",
"model.layers.4.self_attn.v_proj.weight": "pytorch_model-00001-of-00006.bin",
"model.layers.5.input_layernorm.weight": "pytorch_model-00001-of-00006.bin",
"model.layers.5.mlp.down_proj.weight": "pytorch_model-00001-of-00006.bin",
"model.layers.5.mlp.gate_proj.weight": "pytorch_model-00001-of-00006.bin",
"model.layers.5.mlp.up_proj.weight": "pytorch_model-00001-of-00006.bin",
"model.layers.5.post_attention_layernorm.weight": "pytorch_model-00001-of-00006.bin",
"model.layers.5.self_attn.k_proj.weight": "pytorch_model-00001-of-00006.bin",
"model.layers.5.self_attn.o_proj.weight": "pytorch_model-00001-of-00006.bin",
"model.layers.5.self_attn.q_proj.weight": "pytorch_model-00001-of-00006.bin",
"model.layers.5.self_attn.rotary_emb.inv_freq": "pytorch_model-00001-of-00006.bin",
"model.layers.5.self_attn.v_proj.weight": "pytorch_model-00001-of-00006.bin",
"model.layers.6.input_layernorm.weight": "pytorch_model-00001-of-00006.bin",
"model.layers.6.mlp.down_proj.weight": "pytorch_model-00001-of-00006.bin",
"model.layers.6.mlp.gate_proj.weight": "pytorch_model-00001-of-00006.bin",
"model.layers.6.mlp.up_proj.weight": "pytorch_model-00001-of-00006.bin",
"model.layers.6.post_attention_layernorm.weight": "pytorch_model-00001-of-00006.bin",
"model.layers.6.self_attn.k_proj.weight": "pytorch_model-00001-of-00006.bin",
"model.layers.6.self_attn.o_proj.weight": "pytorch_model-00001-of-00006.bin",
"model.layers.6.self_attn.q_proj.weight": "pytorch_model-00001-of-00006.bin",
"model.layers.6.self_attn.rotary_emb.inv_freq": "pytorch_model-00001-of-00006.bin",
"model.layers.6.self_attn.v_proj.weight": "pytorch_model-00001-of-00006.bin",
"model.layers.7.input_layernorm.weight": "pytorch_model-00002-of-00006.bin",
"model.layers.7.mlp.down_proj.weight": "pytorch_model-00002-of-00006.bin",
"model.layers.7.mlp.gate_proj.weight": "pytorch_model-00002-of-00006.bin",
"model.layers.7.mlp.up_proj.weight": "pytorch_model-00002-of-00006.bin",
"model.layers.7.post_attention_layernorm.weight": "pytorch_model-00002-of-00006.bin",
"model.layers.7.self_attn.k_proj.weight": "pytorch_model-00001-of-00006.bin",
"model.layers.7.self_attn.o_proj.weight": "pytorch_model-00001-of-00006.bin",
"model.layers.7.self_attn.q_proj.weight": "pytorch_model-00001-of-00006.bin",
"model.layers.7.self_attn.rotary_emb.inv_freq": "pytorch_model-00001-of-00006.bin",
"model.layers.7.self_attn.v_proj.weight": "pytorch_model-00001-of-00006.bin",
"model.layers.8.input_layernorm.weight": "pytorch_model-00002-of-00006.bin",
"model.layers.8.mlp.down_proj.weight": "pytorch_model-00002-of-00006.bin",
"model.layers.8.mlp.gate_proj.weight": "pytorch_model-00002-of-00006.bin",
"model.layers.8.mlp.up_proj.weight": "pytorch_model-00002-of-00006.bin",
"model.layers.8.post_attention_layernorm.weight": "pytorch_model-00002-of-00006.bin",
"model.layers.8.self_attn.k_proj.weight": "pytorch_model-00002-of-00006.bin",
"model.layers.8.self_attn.o_proj.weight": "pytorch_model-00002-of-00006.bin",
"model.layers.8.self_attn.q_proj.weight": "pytorch_model-00002-of-00006.bin",
"model.layers.8.self_attn.rotary_emb.inv_freq": "pytorch_model-00002-of-00006.bin",
"model.layers.8.self_attn.v_proj.weight": "pytorch_model-00002-of-00006.bin",
"model.layers.9.input_layernorm.weight": "pytorch_model-00002-of-00006.bin",
"model.layers.9.mlp.down_proj.weight": "pytorch_model-00002-of-00006.bin",
"model.layers.9.mlp.gate_proj.weight": "pytorch_model-00002-of-00006.bin",
"model.layers.9.mlp.up_proj.weight": "pytorch_model-00002-of-00006.bin",
"model.layers.9.post_attention_layernorm.weight": "pytorch_model-00002-of-00006.bin",
"model.layers.9.self_attn.k_proj.weight": "pytorch_model-00002-of-00006.bin",
"model.layers.9.self_attn.o_proj.weight": "pytorch_model-00002-of-00006.bin",
"model.layers.9.self_attn.q_proj.weight": "pytorch_model-00002-of-00006.bin",
"model.layers.9.self_attn.rotary_emb.inv_freq": "pytorch_model-00002-of-00006.bin",
"model.layers.9.self_attn.v_proj.weight": "pytorch_model-00002-of-00006.bin",
"model.norm.weight": "pytorch_model-00006-of-00006.bin"
}
}
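The weight_map above pairs every parameter tensor with the shard file that stores it, so a loader can fetch only the shards it actually needs. A minimal sketch of that lookup in Python — in practice transformers' from_pretrained resolves this index automatically; the tensor name below is taken straight from the map above:

import json
import torch

# Load the index that maps each parameter name to its shard file.
with open("pytorch_model.bin.index.json") as f:
    index = json.load(f)

weight_map = index["weight_map"]

# Find which of the six shards holds a given tensor, then load only
# that shard instead of the full set of weights.
name = "model.layers.17.self_attn.q_proj.weight"
shard = weight_map[name]                # "pytorch_model-00003-of-00006.bin"
state_dict = torch.load(shard, map_location="cpu")
print(state_dict[name].shape)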

23
special_tokens_map.json Normal file
View File

@@ -0,0 +1,23 @@
{
"bos_token": {
"content": "<s>",
"lstrip": false,
"normalized": true,
"rstrip": false,
"single_word": false
},
"eos_token": {
"content": "</s>",
"lstrip": false,
"normalized": true,
"rstrip": false,
"single_word": false
},
"unk_token": {
"content": "<unk>",
"lstrip": false,
"normalized": true,
"rstrip": false,
"single_word": false
}
}
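special_tokens_map.json declares the BOS/EOS/UNK tokens the LLaMA tokenizer exposes. A quick check, assuming the repo has been downloaded under its Hub id:

from transformers import AutoTokenizer

# The tokenizer picks up special_tokens_map.json together with
# tokenizer_config.json when the repo is loaded.
tok = AutoTokenizer.from_pretrained("AdaptLLM/medicine-LLM-13B")

print(tok.bos_token)  # "<s>"
print(tok.eos_token)  # "</s>"
print(tok.unk_token)  # "<unk>"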

3
tokenizer.model Normal file
View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:fa299a0662fc3bf7ada4d816b1cb9fdeb472e9edf6c2ffbc7f00e1b5ff5ff968
size 499739
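The three lines above are a Git LFS pointer, not the SentencePiece model itself; `git lfs pull` replaces the pointer with the real file. A minimal integrity check against the pointer's oid and size fields:

import hashlib
import os

# The downloaded file should match the pointer's sha256 oid and byte size.
path = "tokenizer.model"
with open(path, "rb") as f:
    digest = hashlib.sha256(f.read()).hexdigest()

assert os.path.getsize(path) == 499739
assert digest == "fa299a0662fc3bf7ada4d816b1cb9fdeb472e9edf6c2ffbc7f00e1b5ff5ff968"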

33
tokenizer_config.json Normal file
View File

@@ -0,0 +1,33 @@
{
"add_bos_token": true,
"add_eos_token": false,
"bos_token": {
"__type": "AddedToken",
"content": "<s>",
"lstrip": false,
"normalized": true,
"rstrip": false,
"single_word": false
},
"clean_up_tokenization_spaces": false,
"eos_token": {
"__type": "AddedToken",
"content": "</s>",
"lstrip": false,
"normalized": true,
"rstrip": false,
"single_word": false
},
"model_max_length": 2048,
"pad_token": "<pad>",
"sp_model_kwargs": {},
"tokenizer_class": "LlamaTokenizer",
"unk_token": {
"__type": "AddedToken",
"content": "<unk>",
"lstrip": false,
"normalized": true,
"rstrip": false,
"single_word": false
}
}
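tokenizer_config.json sets add_bos_token=true and add_eos_token=false, so encoded inputs start with <s> but get no trailing </s>, and model_max_length caps sequences at 2048 tokens. A short sketch of the resulting behavior (the token ids shown are illustrative):

from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("AdaptLLM/medicine-LLM-13B")

# add_bos_token=true, add_eos_token=false: a leading <s> is prepended,
# no trailing </s> is appended.
ids = tok("Hello").input_ids
print(ids)                      # e.g. [1, 15043] -- BOS first, no EOS

# model_max_length=2048: truncate long inputs to the configured limit.
long_text = "word " * 5000
enc = tok(long_text, truncation=True, max_length=tok.model_max_length)
print(len(enc.input_ids))       # 2048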