Initialize project; model provided by the ModelHub XC community
Model: hamxea/Llama-2-7b-chat-hf-activity-fine-tuned-v3 Source: Original Platform
35
.gitattributes
vendored
Normal file
@@ -0,0 +1,35 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
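These patterns route the large binary artifacts in this commit through Git LFS while small text files stay in plain git. As an illustration (not part of the repo), simple `*.ext` attribute patterns behave like shell globs, so `fnmatch` approximates which files they capture:

```python
from fnmatch import fnmatch

# Illustrative subset of the 35 LFS patterns above.
lfs_patterns = ["*.safetensors", "*.bin", "*.pt", "*.h5"]

# Files from this commit: configs stay in plain git, weight shards go to LFS.
files = ["config.json", "README.md",
         "model-00001-of-00003.safetensors",
         "pytorch_model-00001-of-00003.bin"]

for f in files:
    tracked = any(fnmatch(f, p) for p in lfs_patterns)
    print(f, "-> LFS" if tracked else "-> plain git")
```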
181
README.md
Normal file
@@ -0,0 +1,181 @@
---
license: other
language:
- en
library_name: transformers
tags:
- medical
- text-generation-inference
---

# 🦙 Llama for Huggingface Transformers

Llama-7B, converted from the official [Llama-7B](https://github.com/facebookresearch/Llama/blob/main/MODEL_CARD.md) weights to a Huggingface model via [HF's conversion script](https://github.com/huggingface/transformers/blob/main/src/transformers/models/llama/convert_llama_weights_to_hf.py), so it works with Transformers/HuggingFace. This is under a special license; please see the LICENSE file for details.

This is updated from [decapoda-research/llama-7b-hf](https://huggingface.co/decapoda-research/Llama-7b-hf) (since many pull requests are not yet merged in decapoda's repo, I opened a new repo here). It includes:

(1) The naming changes (LLaMA -> Llama) to fit the `transformers` naming convention, in both `LlamaForCausalLM` and `LlamaTokenizer`. This works with `transformers>=4.28.0`.

(2) The model checkpoints are saved in 2 shards (instead of 33 shards in [decapoda-research/Llama-7b-hf](https://huggingface.co/decapoda-research/Llama-7b-hf)). Fewer shards accelerate loading from disk.
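
Given the `LlamaForCausalLM`/`LlamaTokenizer` naming above, a minimal loading sketch for `transformers>=4.28.0` (the repo id is the one listed in this commit's header; the prompt and generation length are illustrative):

```python
from transformers import LlamaForCausalLM, LlamaTokenizer

model_id = "hamxea/Llama-2-7b-chat-hf-activity-fine-tuned-v3"

tokenizer = LlamaTokenizer.from_pretrained(model_id)
model = LlamaForCausalLM.from_pretrained(model_id, torch_dtype="auto")

# Sampling defaults (do_sample, temperature, top_p) come from the repo's
# generation_config.json unless overridden here.
inputs = tokenizer("Hello, how are you?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```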

---
license: other
---

# Llama Model Card

## Model details

**Organization developing the model**
The FAIR team of Meta AI.

**Model date**
Llama was trained between December 2022 and February 2023.

**Model version**
This is version 1 of the model.

**Model type**
Llama is an auto-regressive language model, based on the transformer architecture. The model comes in different sizes: 7B, 13B, 33B and 65B parameters.

**Paper or resources for more information**
More information can be found in the paper "Llama: Open and Efficient Foundation Language Models", available at https://research.facebook.com/publications/Llama-open-and-efficient-foundation-language-models/.

**Citation details**
https://research.facebook.com/publications/Llama-open-and-efficient-foundation-language-models/

**License**
Non-commercial bespoke license

**Where to send questions or comments about the model**
Questions and comments about Llama can be sent via the [GitHub repository](https://github.com/facebookresearch/Llama) of the project, by opening an issue.

## Intended use

**Primary intended uses**
The primary use of Llama is research on large language models, including:

- exploring potential applications such as question answering, natural language understanding or reading comprehension,
- understanding capabilities and limitations of current language models, and developing techniques to improve those,
- evaluating and mitigating biases, risks, toxic and harmful content generations, hallucinations.

**Primary intended users**
The primary intended users of the model are researchers in natural language processing, machine learning and artificial intelligence.

**Out-of-scope use cases**
Llama is a base, or foundational, model. As such, it should not be used in downstream applications without further risk evaluation and mitigation. In particular, our model has not been trained with human feedback, and can thus generate toxic or offensive content, incorrect information or generally unhelpful answers.

## Factors

**Relevant factors**
One of the most relevant factors for which model performance may vary is which language is used. Although we included 20 languages in the training data, most of our dataset is made of English text, and we thus expect the model to perform better for English than for other languages. Relatedly, it has been shown in previous studies that performance might vary for different dialects, and we expect that this will be the case for our model.

**Evaluation factors**
As our model is trained on data from the Web, we expect that it reflects biases from this source. We thus evaluated on RAI datasets to measure biases exhibited by the model for gender, religion, race, sexual orientation, age, nationality, disability, physical appearance and socio-economic status. We also measure the toxicity of model generations, depending on the toxicity of the context used to prompt the model.

## Metrics

**Model performance measures**
We use the following measures to evaluate the model:

- Accuracy for common sense reasoning, reading comprehension, natural language understanding (MMLU), BIG-bench hard, WinoGender and CrowS-Pairs,
- Exact match for question answering,
- The toxicity score from Perspective API on RealToxicityPrompts.

**Decision thresholds**
Not applicable.

**Approaches to uncertainty and variability**
Due to the high computational requirements of training LLMs, we trained only one model of each size, and thus could not evaluate the variability of pre-training.

## Evaluation datasets
The model was evaluated on the following benchmarks: BoolQ, PIQA, SIQA, HellaSwag, WinoGrande, ARC, OpenBookQA, NaturalQuestions, TriviaQA, RACE, MMLU, BIG-bench hard, GSM8k, RealToxicityPrompts, WinoGender, CrowS-Pairs.

## Training dataset
The model was trained using the following sources of data: CCNet [67%], C4 [15%], GitHub [4.5%], Wikipedia [4.5%], Books [4.5%], ArXiv [2.5%], Stack Exchange [2%]. The Wikipedia and Books domains include data in the following languages: bg, ca, cs, da, de, en, es, fr, hr, hu, it, nl, pl, pt, ro, ru, sl, sr, sv, uk. See the paper for more details about the training set and the corresponding preprocessing.

## Quantitative analysis
Hyperparameters for the model architecture:

| Number of parameters | dimension | n heads | n layers | Learning rate | Batch size | n tokens |
| -------------------- | --------- | ------- | -------- | ------------- | ---------- | -------- |
| 7B                   | 4096      | 32      | 32       | 3.0E-04       | 4M         | 1T       |
| 13B                  | 5120      | 40      | 40       | 3.0E-04       | 4M         | 1T       |
| 33B                  | 6656      | 52      | 60       | 1.5E-04       | 4M         | 1.4T     |
| 65B                  | 8192      | 64      | 80       | 1.5E-04       | 4M         | 1.4T     |

*Table 1 - Summary of Llama Model Hyperparameters*
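
As a sanity check on the 7B row, a short sketch (ours, not from the model card) that estimates the implied parameter count, assuming the standard Llama block layout and the `intermediate_size` of 11008 from this repo's config.json:

```python
# Rough parameter count for the 7B row of Table 1. Assumes the standard
# Llama block: q/k/v/o projections of d x d (num_key_value_heads equals
# num_attention_heads here, i.e. plain multi-head attention) plus a SwiGLU
# MLP with gate/up/down projections of d x d_ff.
d, n_layers, d_ff, vocab = 4096, 32, 11008, 32000

attn = 4 * d * d                  # q_proj, k_proj, v_proj, o_proj
mlp = 3 * d * d_ff                # gate_proj, up_proj, down_proj
norms = 2 * d                     # input_layernorm, post_attention_layernorm
per_layer = attn + mlp + norms

total = n_layers * per_layer + 2 * vocab * d + d  # + embeddings, lm_head, final norm
print(f"{total / 1e9:.2f}B parameters")  # ~6.74B, i.e. the "7B" model
```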

We present our results on eight standard common sense reasoning benchmarks in the table below.

| Number of parameters | BoolQ | PIQA | SIQA | HellaSwag | WinoGrande | ARC-e | ARC-c | OBQA | COPA |
| -------------------- | ----- | ---- | ---- | --------- | ---------- | ----- | ----- | ---- | ---- |
| 7B                   | 76.5  | 79.8 | 48.9 | 76.1      | 70.1       | 76.7  | 47.6  | 57.2 | 93   |
| 13B                  | 78.1  | 80.1 | 50.4 | 79.2      | 73         | 78.1  | 52.7  | 56.4 | 94   |
| 33B                  | 83.1  | 82.3 | 50.4 | 82.8      | 76         | 81.4  | 57.8  | 58.6 | 92   |
| 65B                  | 85.3  | 82.8 | 52.3 | 84.2      | 77         | 81.5  | 56    | 60.2 | 94   |

*Table 2 - Summary of Llama Model Performance on Reasoning tasks*

We present our results on bias in the table below. Note that lower values are better, indicating lower bias.

| No  | Category             | FAIR LLM |
| --- | -------------------- | -------- |
| 1   | Gender               | 70.6     |
| 2   | Religion             | 79       |
| 3   | Race/Color           | 57       |
| 4   | Sexual orientation   | 81       |
| 5   | Age                  | 70.1     |
| 6   | Nationality          | 64.2     |
| 7   | Disability           | 66.7     |
| 8   | Physical appearance  | 77.8     |
| 9   | Socioeconomic status | 71.5     |
|     | Llama Average        | 66.6     |

*Table 3 - Summary of the bias in our model output.*

## Ethical considerations

**Data**
The data used to train the model is collected from various sources, mostly from the Web. As such, it contains offensive, harmful and biased content. We thus expect the model to exhibit such biases from the training data.

**Human life**
The model is not intended to inform decisions about matters central to human life, and should not be used in such a way.

**Mitigations**
We filtered the data from the Web based on its proximity to Wikipedia text and references. For this, we used a Kneser-Ney language model and a fastText linear classifier.
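
For intuition, a minimal sketch of a fastText-style quality filter in this spirit (our illustration, not Meta's actual pipeline; `train.txt` is a hypothetical file of labeled Wikipedia-like vs. generic Web text):

```python
# Train a linear classifier to separate Wikipedia-like text from random Web
# text, then keep only pages scored as Wikipedia-like.
import fasttext

# train.txt: one example per line, "__label__wiki <text>" or "__label__web <text>"
model = fasttext.train_supervised(input="train.txt", lr=0.5, epoch=5, wordNgrams=2)

def keep(page_text: str, threshold: float = 0.5) -> bool:
    # predict() expects a single line, so strip newlines first.
    labels, probs = model.predict(page_text.replace("\n", " "))
    return labels[0] == "__label__wiki" and probs[0] >= threshold
```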

**Risks and harms**
Risks and harms of large language models include the generation of harmful, offensive or biased content. These models are often prone to generating incorrect information, sometimes referred to as hallucinations. We do not expect our model to be an exception in this regard.

**Use cases**
Llama is a foundational model, and as such, it should not be used for downstream applications without further investigation and mitigation of risks. These risks and potentially fraught use cases include, but are not limited to: generation of misinformation and generation of harmful, biased or offensive content.
26
config.json
Normal file
@@ -0,0 +1,26 @@
{
  "_name_or_path": "meta-llama/Llama-2-7b-chat-hf",
  "architectures": [
    "LlamaForCausalLM"
  ],
  "bos_token_id": 1,
  "eos_token_id": 2,
  "hidden_act": "silu",
  "hidden_size": 4096,
  "initializer_range": 0.02,
  "intermediate_size": 11008,
  "max_position_embeddings": 4096,
  "model_type": "llama",
  "num_attention_heads": 32,
  "num_hidden_layers": 32,
  "num_key_value_heads": 32,
  "pad_token_id": 0,
  "pretraining_tp": 1,
  "rms_norm_eps": 1e-05,
  "rope_scaling": null,
  "tie_word_embeddings": false,
  "torch_dtype": "float16",
  "transformers_version": "4.31.0",
  "use_cache": true,
  "vocab_size": 32000
}
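The config above describes a standard Llama-2-7B architecture (32 layers, hidden size 4096, 32 attention heads, 32k vocabulary). Assuming the repo is reachable under the id in this commit's header, it can be inspected without downloading the weights:

```python
from transformers import AutoConfig

# Fetches only config.json, not the multi-GB weight shards.
config = AutoConfig.from_pretrained("hamxea/Llama-2-7b-chat-hf-activity-fine-tuned-v3")
print(config.model_type, config.hidden_size, config.num_hidden_layers)  # llama 4096 32
```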
10
generation_config.json
Normal file
@@ -0,0 +1,10 @@
{
  "bos_token_id": 1,
  "do_sample": true,
  "eos_token_id": 2,
  "max_length": 4096,
  "pad_token_id": 0,
  "temperature": 0.6,
  "top_p": 0.9,
  "transformers_version": "4.31.0"
}
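These are the sampling defaults that `generate()` applies when no overrides are passed. A quick way to confirm what a checkpoint ships with (repo id as above, assumed reachable):

```python
from transformers import GenerationConfig

gen_cfg = GenerationConfig.from_pretrained("hamxea/Llama-2-7b-chat-hf-activity-fine-tuned-v3")
print(gen_cfg.do_sample, gen_cfg.temperature, gen_cfg.top_p)  # True 0.6 0.9
```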
3
model-00001-of-00003.safetensors
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:0d5a54fab6558c4b5fc4616a02700eb040897c41e43d084834b5604168e4d1d9
size 4938989552
3
model-00002-of-00003.safetensors
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:db908fb2877650a5c587660d1afed7f86687214f13d00ab85c5ef5814170eb99
size 4947395088
3
model-00003-of-00003.safetensors
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:506cdf7115f143ad51de321f4881f6a666b99807e9c67bb60e455dfe49e8ea18
size 3590491616
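The three `.safetensors` entries above are Git LFS pointer files: the repo stores only the `version`/`oid`/`size` stub, and `git lfs pull` fetches the actual shards. A small sketch to verify a downloaded shard against its pointer:

```python
import hashlib
import os

def verify(path: str, oid: str, size: int) -> bool:
    """Check a downloaded shard against the oid/size from its LFS pointer."""
    if os.path.getsize(path) != size:
        return False
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # 1 MiB at a time
            h.update(chunk)
    return h.hexdigest() == oid

print(verify("model-00001-of-00003.safetensors",
             "0d5a54fab6558c4b5fc4616a02700eb040897c41e43d084834b5604168e4d1d9",
             4938989552))
```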
330
model.safetensors.index.json
Normal file
@@ -0,0 +1,330 @@
{
  "metadata": {
    "total_size": 13476839424
  },
  "weight_map": {
    "lm_head.weight": "model-00003-of-00003.safetensors",
    "model.embed_tokens.weight": "model-00001-of-00003.safetensors",
    "model.layers.0.input_layernorm.weight": "model-00001-of-00003.safetensors",
    "model.layers.0.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.0.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.0.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.0.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
    "model.layers.0.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.0.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.0.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.0.self_attn.rotary_emb.inv_freq": "model-00001-of-00003.safetensors",
    "model.layers.0.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.1.input_layernorm.weight": "model-00001-of-00003.safetensors",
    "model.layers.1.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.1.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.1.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.1.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
    "model.layers.1.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.1.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.1.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.1.self_attn.rotary_emb.inv_freq": "model-00001-of-00003.safetensors",
    "model.layers.1.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.10.input_layernorm.weight": "model-00001-of-00003.safetensors",
    "model.layers.10.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.10.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.10.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.10.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
    "model.layers.10.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.10.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.10.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.10.self_attn.rotary_emb.inv_freq": "model-00001-of-00003.safetensors",
    "model.layers.10.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.11.input_layernorm.weight": "model-00002-of-00003.safetensors",
    "model.layers.11.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.11.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.11.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.11.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
    "model.layers.11.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.11.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.11.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.11.self_attn.rotary_emb.inv_freq": "model-00001-of-00003.safetensors",
    "model.layers.11.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.12.input_layernorm.weight": "model-00002-of-00003.safetensors",
    "model.layers.12.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.12.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.12.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.12.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
    "model.layers.12.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.12.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.12.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.12.self_attn.rotary_emb.inv_freq": "model-00002-of-00003.safetensors",
    "model.layers.12.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.13.input_layernorm.weight": "model-00002-of-00003.safetensors",
    "model.layers.13.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.13.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.13.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.13.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
    "model.layers.13.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.13.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.13.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.13.self_attn.rotary_emb.inv_freq": "model-00002-of-00003.safetensors",
    "model.layers.13.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.14.input_layernorm.weight": "model-00002-of-00003.safetensors",
    "model.layers.14.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.14.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.14.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.14.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
    "model.layers.14.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.14.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.14.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.14.self_attn.rotary_emb.inv_freq": "model-00002-of-00003.safetensors",
    "model.layers.14.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.15.input_layernorm.weight": "model-00002-of-00003.safetensors",
    "model.layers.15.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.15.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.15.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.15.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
    "model.layers.15.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.15.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.15.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.15.self_attn.rotary_emb.inv_freq": "model-00002-of-00003.safetensors",
    "model.layers.15.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.16.input_layernorm.weight": "model-00002-of-00003.safetensors",
    "model.layers.16.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.16.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.16.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.16.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
    "model.layers.16.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.16.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.16.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.16.self_attn.rotary_emb.inv_freq": "model-00002-of-00003.safetensors",
    "model.layers.16.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.17.input_layernorm.weight": "model-00002-of-00003.safetensors",
    "model.layers.17.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.17.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.17.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.17.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
    "model.layers.17.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.17.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.17.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.17.self_attn.rotary_emb.inv_freq": "model-00002-of-00003.safetensors",
    "model.layers.17.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.18.input_layernorm.weight": "model-00002-of-00003.safetensors",
    "model.layers.18.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.18.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.18.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.18.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
    "model.layers.18.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.18.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.18.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.18.self_attn.rotary_emb.inv_freq": "model-00002-of-00003.safetensors",
    "model.layers.18.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.19.input_layernorm.weight": "model-00002-of-00003.safetensors",
    "model.layers.19.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.19.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.19.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.19.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
    "model.layers.19.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.19.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.19.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.19.self_attn.rotary_emb.inv_freq": "model-00002-of-00003.safetensors",
    "model.layers.19.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.2.input_layernorm.weight": "model-00001-of-00003.safetensors",
    "model.layers.2.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.2.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.2.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.2.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
    "model.layers.2.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.2.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.2.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.2.self_attn.rotary_emb.inv_freq": "model-00001-of-00003.safetensors",
    "model.layers.2.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.20.input_layernorm.weight": "model-00002-of-00003.safetensors",
    "model.layers.20.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.20.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.20.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.20.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
    "model.layers.20.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.20.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.20.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.20.self_attn.rotary_emb.inv_freq": "model-00002-of-00003.safetensors",
    "model.layers.20.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.21.input_layernorm.weight": "model-00002-of-00003.safetensors",
    "model.layers.21.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.21.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.21.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.21.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
    "model.layers.21.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.21.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.21.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.21.self_attn.rotary_emb.inv_freq": "model-00002-of-00003.safetensors",
    "model.layers.21.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.22.input_layernorm.weight": "model-00002-of-00003.safetensors",
    "model.layers.22.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.22.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.22.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.22.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
    "model.layers.22.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.22.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.22.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.22.self_attn.rotary_emb.inv_freq": "model-00002-of-00003.safetensors",
    "model.layers.22.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.23.input_layernorm.weight": "model-00003-of-00003.safetensors",
    "model.layers.23.mlp.down_proj.weight": "model-00003-of-00003.safetensors",
    "model.layers.23.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.23.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.23.post_attention_layernorm.weight": "model-00003-of-00003.safetensors",
    "model.layers.23.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.23.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.23.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.23.self_attn.rotary_emb.inv_freq": "model-00002-of-00003.safetensors",
    "model.layers.23.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.24.input_layernorm.weight": "model-00003-of-00003.safetensors",
    "model.layers.24.mlp.down_proj.weight": "model-00003-of-00003.safetensors",
    "model.layers.24.mlp.gate_proj.weight": "model-00003-of-00003.safetensors",
    "model.layers.24.mlp.up_proj.weight": "model-00003-of-00003.safetensors",
    "model.layers.24.post_attention_layernorm.weight": "model-00003-of-00003.safetensors",
    "model.layers.24.self_attn.k_proj.weight": "model-00003-of-00003.safetensors",
    "model.layers.24.self_attn.o_proj.weight": "model-00003-of-00003.safetensors",
    "model.layers.24.self_attn.q_proj.weight": "model-00003-of-00003.safetensors",
    "model.layers.24.self_attn.rotary_emb.inv_freq": "model-00003-of-00003.safetensors",
    "model.layers.24.self_attn.v_proj.weight": "model-00003-of-00003.safetensors",
    "model.layers.25.input_layernorm.weight": "model-00003-of-00003.safetensors",
    "model.layers.25.mlp.down_proj.weight": "model-00003-of-00003.safetensors",
    "model.layers.25.mlp.gate_proj.weight": "model-00003-of-00003.safetensors",
    "model.layers.25.mlp.up_proj.weight": "model-00003-of-00003.safetensors",
    "model.layers.25.post_attention_layernorm.weight": "model-00003-of-00003.safetensors",
    "model.layers.25.self_attn.k_proj.weight": "model-00003-of-00003.safetensors",
    "model.layers.25.self_attn.o_proj.weight": "model-00003-of-00003.safetensors",
    "model.layers.25.self_attn.q_proj.weight": "model-00003-of-00003.safetensors",
    "model.layers.25.self_attn.rotary_emb.inv_freq": "model-00003-of-00003.safetensors",
    "model.layers.25.self_attn.v_proj.weight": "model-00003-of-00003.safetensors",
    "model.layers.26.input_layernorm.weight": "model-00003-of-00003.safetensors",
    "model.layers.26.mlp.down_proj.weight": "model-00003-of-00003.safetensors",
    "model.layers.26.mlp.gate_proj.weight": "model-00003-of-00003.safetensors",
    "model.layers.26.mlp.up_proj.weight": "model-00003-of-00003.safetensors",
    "model.layers.26.post_attention_layernorm.weight": "model-00003-of-00003.safetensors",
    "model.layers.26.self_attn.k_proj.weight": "model-00003-of-00003.safetensors",
    "model.layers.26.self_attn.o_proj.weight": "model-00003-of-00003.safetensors",
    "model.layers.26.self_attn.q_proj.weight": "model-00003-of-00003.safetensors",
    "model.layers.26.self_attn.rotary_emb.inv_freq": "model-00003-of-00003.safetensors",
    "model.layers.26.self_attn.v_proj.weight": "model-00003-of-00003.safetensors",
    "model.layers.27.input_layernorm.weight": "model-00003-of-00003.safetensors",
    "model.layers.27.mlp.down_proj.weight": "model-00003-of-00003.safetensors",
    "model.layers.27.mlp.gate_proj.weight": "model-00003-of-00003.safetensors",
    "model.layers.27.mlp.up_proj.weight": "model-00003-of-00003.safetensors",
    "model.layers.27.post_attention_layernorm.weight": "model-00003-of-00003.safetensors",
    "model.layers.27.self_attn.k_proj.weight": "model-00003-of-00003.safetensors",
    "model.layers.27.self_attn.o_proj.weight": "model-00003-of-00003.safetensors",
    "model.layers.27.self_attn.q_proj.weight": "model-00003-of-00003.safetensors",
    "model.layers.27.self_attn.rotary_emb.inv_freq": "model-00003-of-00003.safetensors",
    "model.layers.27.self_attn.v_proj.weight": "model-00003-of-00003.safetensors",
    "model.layers.28.input_layernorm.weight": "model-00003-of-00003.safetensors",
    "model.layers.28.mlp.down_proj.weight": "model-00003-of-00003.safetensors",
    "model.layers.28.mlp.gate_proj.weight": "model-00003-of-00003.safetensors",
    "model.layers.28.mlp.up_proj.weight": "model-00003-of-00003.safetensors",
    "model.layers.28.post_attention_layernorm.weight": "model-00003-of-00003.safetensors",
    "model.layers.28.self_attn.k_proj.weight": "model-00003-of-00003.safetensors",
    "model.layers.28.self_attn.o_proj.weight": "model-00003-of-00003.safetensors",
    "model.layers.28.self_attn.q_proj.weight": "model-00003-of-00003.safetensors",
    "model.layers.28.self_attn.rotary_emb.inv_freq": "model-00003-of-00003.safetensors",
    "model.layers.28.self_attn.v_proj.weight": "model-00003-of-00003.safetensors",
    "model.layers.29.input_layernorm.weight": "model-00003-of-00003.safetensors",
    "model.layers.29.mlp.down_proj.weight": "model-00003-of-00003.safetensors",
    "model.layers.29.mlp.gate_proj.weight": "model-00003-of-00003.safetensors",
    "model.layers.29.mlp.up_proj.weight": "model-00003-of-00003.safetensors",
    "model.layers.29.post_attention_layernorm.weight": "model-00003-of-00003.safetensors",
    "model.layers.29.self_attn.k_proj.weight": "model-00003-of-00003.safetensors",
    "model.layers.29.self_attn.o_proj.weight": "model-00003-of-00003.safetensors",
    "model.layers.29.self_attn.q_proj.weight": "model-00003-of-00003.safetensors",
    "model.layers.29.self_attn.rotary_emb.inv_freq": "model-00003-of-00003.safetensors",
    "model.layers.29.self_attn.v_proj.weight": "model-00003-of-00003.safetensors",
    "model.layers.3.input_layernorm.weight": "model-00001-of-00003.safetensors",
    "model.layers.3.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.3.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.3.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.3.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
    "model.layers.3.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.3.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.3.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.3.self_attn.rotary_emb.inv_freq": "model-00001-of-00003.safetensors",
    "model.layers.3.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.30.input_layernorm.weight": "model-00003-of-00003.safetensors",
    "model.layers.30.mlp.down_proj.weight": "model-00003-of-00003.safetensors",
    "model.layers.30.mlp.gate_proj.weight": "model-00003-of-00003.safetensors",
    "model.layers.30.mlp.up_proj.weight": "model-00003-of-00003.safetensors",
    "model.layers.30.post_attention_layernorm.weight": "model-00003-of-00003.safetensors",
    "model.layers.30.self_attn.k_proj.weight": "model-00003-of-00003.safetensors",
    "model.layers.30.self_attn.o_proj.weight": "model-00003-of-00003.safetensors",
    "model.layers.30.self_attn.q_proj.weight": "model-00003-of-00003.safetensors",
    "model.layers.30.self_attn.rotary_emb.inv_freq": "model-00003-of-00003.safetensors",
    "model.layers.30.self_attn.v_proj.weight": "model-00003-of-00003.safetensors",
    "model.layers.31.input_layernorm.weight": "model-00003-of-00003.safetensors",
    "model.layers.31.mlp.down_proj.weight": "model-00003-of-00003.safetensors",
    "model.layers.31.mlp.gate_proj.weight": "model-00003-of-00003.safetensors",
    "model.layers.31.mlp.up_proj.weight": "model-00003-of-00003.safetensors",
    "model.layers.31.post_attention_layernorm.weight": "model-00003-of-00003.safetensors",
    "model.layers.31.self_attn.k_proj.weight": "model-00003-of-00003.safetensors",
    "model.layers.31.self_attn.o_proj.weight": "model-00003-of-00003.safetensors",
    "model.layers.31.self_attn.q_proj.weight": "model-00003-of-00003.safetensors",
    "model.layers.31.self_attn.rotary_emb.inv_freq": "model-00003-of-00003.safetensors",
    "model.layers.31.self_attn.v_proj.weight": "model-00003-of-00003.safetensors",
    "model.layers.4.input_layernorm.weight": "model-00001-of-00003.safetensors",
    "model.layers.4.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.4.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.4.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.4.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
    "model.layers.4.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.4.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.4.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.4.self_attn.rotary_emb.inv_freq": "model-00001-of-00003.safetensors",
    "model.layers.4.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.5.input_layernorm.weight": "model-00001-of-00003.safetensors",
    "model.layers.5.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.5.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.5.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.5.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
    "model.layers.5.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.5.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.5.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.5.self_attn.rotary_emb.inv_freq": "model-00001-of-00003.safetensors",
    "model.layers.5.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.6.input_layernorm.weight": "model-00001-of-00003.safetensors",
    "model.layers.6.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.6.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.6.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.6.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
    "model.layers.6.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.6.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.6.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.6.self_attn.rotary_emb.inv_freq": "model-00001-of-00003.safetensors",
    "model.layers.6.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.7.input_layernorm.weight": "model-00001-of-00003.safetensors",
    "model.layers.7.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.7.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.7.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.7.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
    "model.layers.7.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.7.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.7.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.7.self_attn.rotary_emb.inv_freq": "model-00001-of-00003.safetensors",
    "model.layers.7.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.8.input_layernorm.weight": "model-00001-of-00003.safetensors",
    "model.layers.8.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.8.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.8.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.8.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
    "model.layers.8.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.8.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.8.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.8.self_attn.rotary_emb.inv_freq": "model-00001-of-00003.safetensors",
    "model.layers.8.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.9.input_layernorm.weight": "model-00001-of-00003.safetensors",
    "model.layers.9.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.9.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.9.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.9.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
    "model.layers.9.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.9.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.9.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.9.self_attn.rotary_emb.inv_freq": "model-00001-of-00003.safetensors",
    "model.layers.9.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
    "model.norm.weight": "model-00003-of-00003.safetensors"
  }
}
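`model.safetensors.index.json` tells `from_pretrained` which shard holds each tensor; `metadata.total_size` (13476839424 bytes, roughly 12.6 GiB) roughly matches the sum of the three shard sizes above (each shard file adds a small safetensors header). A quick inspection sketch:

```python
import json
from collections import Counter

# Count how many tensors live in each shard according to the index above.
with open("model.safetensors.index.json") as f:
    index = json.load(f)

print(index["metadata"]["total_size"])        # 13476839424
print(Counter(index["weight_map"].values()))  # tensors per shard file
```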
3
pytorch_model-00001-of-00003.bin
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:cc7459766bfbb1e79eae49a78d639a6c6806e191ffd9f8fc3fd90a71f8abcb56
size 4939016253
3
pytorch_model-00002-of-00003.bin
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:6986cf6449d76c4ac74c193232113926484b097217c56d3f24f5947d4681212c
size 4947422741
3
pytorch_model-00003-of-00003.bin
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:bd991382e18ce0afb776f6764da586e061cf503137fe94c469a5faa30587da5a
size 3590510948
330
pytorch_model.bin.index.json
Normal file
@@ -0,0 +1,330 @@
{
  "metadata": {
    "total_size": 13476839424
  },
  "weight_map": {
    "lm_head.weight": "pytorch_model-00003-of-00003.bin",
    "model.embed_tokens.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.0.input_layernorm.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.0.mlp.down_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.0.mlp.gate_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.0.mlp.up_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.0.post_attention_layernorm.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.0.self_attn.k_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.0.self_attn.o_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.0.self_attn.q_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.0.self_attn.rotary_emb.inv_freq": "pytorch_model-00001-of-00003.bin",
    "model.layers.0.self_attn.v_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.1.input_layernorm.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.1.mlp.down_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.1.mlp.gate_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.1.mlp.up_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.1.post_attention_layernorm.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.1.self_attn.k_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.1.self_attn.o_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.1.self_attn.q_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.1.self_attn.rotary_emb.inv_freq": "pytorch_model-00001-of-00003.bin",
    "model.layers.1.self_attn.v_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.10.input_layernorm.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.10.mlp.down_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.10.mlp.gate_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.10.mlp.up_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.10.post_attention_layernorm.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.10.self_attn.k_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.10.self_attn.o_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.10.self_attn.q_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.10.self_attn.rotary_emb.inv_freq": "pytorch_model-00001-of-00003.bin",
    "model.layers.10.self_attn.v_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.11.input_layernorm.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.11.mlp.down_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.11.mlp.gate_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.11.mlp.up_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.11.post_attention_layernorm.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.11.self_attn.k_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.11.self_attn.o_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.11.self_attn.q_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.11.self_attn.rotary_emb.inv_freq": "pytorch_model-00001-of-00003.bin",
    "model.layers.11.self_attn.v_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.12.input_layernorm.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.12.mlp.down_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.12.mlp.gate_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.12.mlp.up_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.12.post_attention_layernorm.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.12.self_attn.k_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.12.self_attn.o_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.12.self_attn.q_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.12.self_attn.rotary_emb.inv_freq": "pytorch_model-00002-of-00003.bin",
    "model.layers.12.self_attn.v_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.13.input_layernorm.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.13.mlp.down_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.13.mlp.gate_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.13.mlp.up_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.13.post_attention_layernorm.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.13.self_attn.k_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.13.self_attn.o_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.13.self_attn.q_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.13.self_attn.rotary_emb.inv_freq": "pytorch_model-00002-of-00003.bin",
    "model.layers.13.self_attn.v_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.14.input_layernorm.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.14.mlp.down_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.14.mlp.gate_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.14.mlp.up_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.14.post_attention_layernorm.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.14.self_attn.k_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.14.self_attn.o_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.14.self_attn.q_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.14.self_attn.rotary_emb.inv_freq": "pytorch_model-00002-of-00003.bin",
    "model.layers.14.self_attn.v_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.15.input_layernorm.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.15.mlp.down_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.15.mlp.gate_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.15.mlp.up_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.15.post_attention_layernorm.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.15.self_attn.k_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.15.self_attn.o_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.15.self_attn.q_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.15.self_attn.rotary_emb.inv_freq": "pytorch_model-00002-of-00003.bin",
    "model.layers.15.self_attn.v_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.16.input_layernorm.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.16.mlp.down_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.16.mlp.gate_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.16.mlp.up_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.16.post_attention_layernorm.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.16.self_attn.k_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.16.self_attn.o_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.16.self_attn.q_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.16.self_attn.rotary_emb.inv_freq": "pytorch_model-00002-of-00003.bin",
    "model.layers.16.self_attn.v_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.17.input_layernorm.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.17.mlp.down_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.17.mlp.gate_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.17.mlp.up_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.17.post_attention_layernorm.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.17.self_attn.k_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.17.self_attn.o_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.17.self_attn.q_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.17.self_attn.rotary_emb.inv_freq": "pytorch_model-00002-of-00003.bin",
    "model.layers.17.self_attn.v_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.18.input_layernorm.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.18.mlp.down_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.18.mlp.gate_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.18.mlp.up_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.18.post_attention_layernorm.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.18.self_attn.k_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.18.self_attn.o_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.18.self_attn.q_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.18.self_attn.rotary_emb.inv_freq": "pytorch_model-00002-of-00003.bin",
    "model.layers.18.self_attn.v_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.19.input_layernorm.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.19.mlp.down_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.19.mlp.gate_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.19.mlp.up_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.19.post_attention_layernorm.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.19.self_attn.k_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.19.self_attn.o_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.19.self_attn.q_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.19.self_attn.rotary_emb.inv_freq": "pytorch_model-00002-of-00003.bin",
    "model.layers.19.self_attn.v_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.2.input_layernorm.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.2.mlp.down_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.2.mlp.gate_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.2.mlp.up_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.2.post_attention_layernorm.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.2.self_attn.k_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.2.self_attn.o_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.2.self_attn.q_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.2.self_attn.rotary_emb.inv_freq": "pytorch_model-00001-of-00003.bin",
    "model.layers.2.self_attn.v_proj.weight": "pytorch_model-00001-of-00003.bin",
    "model.layers.20.input_layernorm.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.20.mlp.down_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.20.mlp.gate_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.20.mlp.up_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.20.post_attention_layernorm.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.20.self_attn.k_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.20.self_attn.o_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.20.self_attn.q_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.20.self_attn.rotary_emb.inv_freq": "pytorch_model-00002-of-00003.bin",
    "model.layers.20.self_attn.v_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.21.input_layernorm.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.21.mlp.down_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.21.mlp.gate_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.21.mlp.up_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.21.post_attention_layernorm.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.21.self_attn.k_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.21.self_attn.o_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.21.self_attn.q_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.21.self_attn.rotary_emb.inv_freq": "pytorch_model-00002-of-00003.bin",
    "model.layers.21.self_attn.v_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.22.input_layernorm.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.22.mlp.down_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.22.mlp.gate_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.22.mlp.up_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.22.post_attention_layernorm.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.22.self_attn.k_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.22.self_attn.o_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.22.self_attn.q_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.22.self_attn.rotary_emb.inv_freq": "pytorch_model-00002-of-00003.bin",
    "model.layers.22.self_attn.v_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.23.input_layernorm.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.23.mlp.down_proj.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.23.mlp.gate_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.23.mlp.up_proj.weight": "pytorch_model-00002-of-00003.bin",
    "model.layers.23.post_attention_layernorm.weight": "pytorch_model-00003-of-00003.bin",
    "model.layers.23.self_attn.k_proj.weight": "pytorch_model-00002-of-00003.bin",
|
||||||
|
"model.layers.23.self_attn.o_proj.weight": "pytorch_model-00002-of-00003.bin",
|
||||||
|
"model.layers.23.self_attn.q_proj.weight": "pytorch_model-00002-of-00003.bin",
|
||||||
|
"model.layers.23.self_attn.rotary_emb.inv_freq": "pytorch_model-00002-of-00003.bin",
|
||||||
|
"model.layers.23.self_attn.v_proj.weight": "pytorch_model-00002-of-00003.bin",
|
||||||
|
"model.layers.24.input_layernorm.weight": "pytorch_model-00003-of-00003.bin",
|
||||||
|
"model.layers.24.mlp.down_proj.weight": "pytorch_model-00003-of-00003.bin",
|
||||||
|
"model.layers.24.mlp.gate_proj.weight": "pytorch_model-00003-of-00003.bin",
|
||||||
|
"model.layers.24.mlp.up_proj.weight": "pytorch_model-00003-of-00003.bin",
|
||||||
|
"model.layers.24.post_attention_layernorm.weight": "pytorch_model-00003-of-00003.bin",
|
||||||
|
"model.layers.24.self_attn.k_proj.weight": "pytorch_model-00003-of-00003.bin",
|
||||||
|
"model.layers.24.self_attn.o_proj.weight": "pytorch_model-00003-of-00003.bin",
|
||||||
|
"model.layers.24.self_attn.q_proj.weight": "pytorch_model-00003-of-00003.bin",
|
||||||
|
"model.layers.24.self_attn.rotary_emb.inv_freq": "pytorch_model-00003-of-00003.bin",
|
||||||
|
"model.layers.24.self_attn.v_proj.weight": "pytorch_model-00003-of-00003.bin",
|
||||||
|
"model.layers.25.input_layernorm.weight": "pytorch_model-00003-of-00003.bin",
|
||||||
|
"model.layers.25.mlp.down_proj.weight": "pytorch_model-00003-of-00003.bin",
|
||||||
|
"model.layers.25.mlp.gate_proj.weight": "pytorch_model-00003-of-00003.bin",
|
||||||
|
"model.layers.25.mlp.up_proj.weight": "pytorch_model-00003-of-00003.bin",
|
||||||
|
"model.layers.25.post_attention_layernorm.weight": "pytorch_model-00003-of-00003.bin",
|
||||||
|
"model.layers.25.self_attn.k_proj.weight": "pytorch_model-00003-of-00003.bin",
|
||||||
|
"model.layers.25.self_attn.o_proj.weight": "pytorch_model-00003-of-00003.bin",
|
||||||
|
"model.layers.25.self_attn.q_proj.weight": "pytorch_model-00003-of-00003.bin",
|
||||||
|
"model.layers.25.self_attn.rotary_emb.inv_freq": "pytorch_model-00003-of-00003.bin",
|
||||||
|
"model.layers.25.self_attn.v_proj.weight": "pytorch_model-00003-of-00003.bin",
|
||||||
|
"model.layers.26.input_layernorm.weight": "pytorch_model-00003-of-00003.bin",
|
||||||
|
"model.layers.26.mlp.down_proj.weight": "pytorch_model-00003-of-00003.bin",
|
||||||
|
"model.layers.26.mlp.gate_proj.weight": "pytorch_model-00003-of-00003.bin",
|
||||||
|
"model.layers.26.mlp.up_proj.weight": "pytorch_model-00003-of-00003.bin",
|
||||||
|
"model.layers.26.post_attention_layernorm.weight": "pytorch_model-00003-of-00003.bin",
|
||||||
|
"model.layers.26.self_attn.k_proj.weight": "pytorch_model-00003-of-00003.bin",
|
||||||
|
"model.layers.26.self_attn.o_proj.weight": "pytorch_model-00003-of-00003.bin",
|
||||||
|
"model.layers.26.self_attn.q_proj.weight": "pytorch_model-00003-of-00003.bin",
|
||||||
|
"model.layers.26.self_attn.rotary_emb.inv_freq": "pytorch_model-00003-of-00003.bin",
|
||||||
|
"model.layers.26.self_attn.v_proj.weight": "pytorch_model-00003-of-00003.bin",
|
||||||
|
"model.layers.27.input_layernorm.weight": "pytorch_model-00003-of-00003.bin",
|
||||||
|
"model.layers.27.mlp.down_proj.weight": "pytorch_model-00003-of-00003.bin",
|
||||||
|
"model.layers.27.mlp.gate_proj.weight": "pytorch_model-00003-of-00003.bin",
|
||||||
|
"model.layers.27.mlp.up_proj.weight": "pytorch_model-00003-of-00003.bin",
|
||||||
|
"model.layers.27.post_attention_layernorm.weight": "pytorch_model-00003-of-00003.bin",
|
||||||
|
"model.layers.27.self_attn.k_proj.weight": "pytorch_model-00003-of-00003.bin",
|
||||||
|
"model.layers.27.self_attn.o_proj.weight": "pytorch_model-00003-of-00003.bin",
|
||||||
|
"model.layers.27.self_attn.q_proj.weight": "pytorch_model-00003-of-00003.bin",
|
||||||
|
"model.layers.27.self_attn.rotary_emb.inv_freq": "pytorch_model-00003-of-00003.bin",
|
||||||
|
"model.layers.27.self_attn.v_proj.weight": "pytorch_model-00003-of-00003.bin",
|
||||||
|
"model.layers.28.input_layernorm.weight": "pytorch_model-00003-of-00003.bin",
|
||||||
|
"model.layers.28.mlp.down_proj.weight": "pytorch_model-00003-of-00003.bin",
|
||||||
|
"model.layers.28.mlp.gate_proj.weight": "pytorch_model-00003-of-00003.bin",
|
||||||
|
"model.layers.28.mlp.up_proj.weight": "pytorch_model-00003-of-00003.bin",
|
||||||
|
"model.layers.28.post_attention_layernorm.weight": "pytorch_model-00003-of-00003.bin",
|
||||||
|
"model.layers.28.self_attn.k_proj.weight": "pytorch_model-00003-of-00003.bin",
|
||||||
|
"model.layers.28.self_attn.o_proj.weight": "pytorch_model-00003-of-00003.bin",
|
||||||
|
"model.layers.28.self_attn.q_proj.weight": "pytorch_model-00003-of-00003.bin",
|
||||||
|
"model.layers.28.self_attn.rotary_emb.inv_freq": "pytorch_model-00003-of-00003.bin",
|
||||||
|
"model.layers.28.self_attn.v_proj.weight": "pytorch_model-00003-of-00003.bin",
|
||||||
|
"model.layers.29.input_layernorm.weight": "pytorch_model-00003-of-00003.bin",
|
||||||
|
"model.layers.29.mlp.down_proj.weight": "pytorch_model-00003-of-00003.bin",
|
||||||
|
"model.layers.29.mlp.gate_proj.weight": "pytorch_model-00003-of-00003.bin",
|
||||||
|
"model.layers.29.mlp.up_proj.weight": "pytorch_model-00003-of-00003.bin",
|
||||||
|
"model.layers.29.post_attention_layernorm.weight": "pytorch_model-00003-of-00003.bin",
|
||||||
|
"model.layers.29.self_attn.k_proj.weight": "pytorch_model-00003-of-00003.bin",
|
||||||
|
"model.layers.29.self_attn.o_proj.weight": "pytorch_model-00003-of-00003.bin",
|
||||||
|
"model.layers.29.self_attn.q_proj.weight": "pytorch_model-00003-of-00003.bin",
|
||||||
|
"model.layers.29.self_attn.rotary_emb.inv_freq": "pytorch_model-00003-of-00003.bin",
|
||||||
|
"model.layers.29.self_attn.v_proj.weight": "pytorch_model-00003-of-00003.bin",
|
||||||
|
"model.layers.3.input_layernorm.weight": "pytorch_model-00001-of-00003.bin",
|
||||||
|
"model.layers.3.mlp.down_proj.weight": "pytorch_model-00001-of-00003.bin",
|
||||||
|
"model.layers.3.mlp.gate_proj.weight": "pytorch_model-00001-of-00003.bin",
|
||||||
|
"model.layers.3.mlp.up_proj.weight": "pytorch_model-00001-of-00003.bin",
|
||||||
|
"model.layers.3.post_attention_layernorm.weight": "pytorch_model-00001-of-00003.bin",
|
||||||
|
"model.layers.3.self_attn.k_proj.weight": "pytorch_model-00001-of-00003.bin",
|
||||||
|
"model.layers.3.self_attn.o_proj.weight": "pytorch_model-00001-of-00003.bin",
|
||||||
|
"model.layers.3.self_attn.q_proj.weight": "pytorch_model-00001-of-00003.bin",
|
||||||
|
"model.layers.3.self_attn.rotary_emb.inv_freq": "pytorch_model-00001-of-00003.bin",
|
||||||
|
"model.layers.3.self_attn.v_proj.weight": "pytorch_model-00001-of-00003.bin",
|
||||||
|
"model.layers.30.input_layernorm.weight": "pytorch_model-00003-of-00003.bin",
|
||||||
|
"model.layers.30.mlp.down_proj.weight": "pytorch_model-00003-of-00003.bin",
|
||||||
|
"model.layers.30.mlp.gate_proj.weight": "pytorch_model-00003-of-00003.bin",
|
||||||
|
"model.layers.30.mlp.up_proj.weight": "pytorch_model-00003-of-00003.bin",
|
||||||
|
"model.layers.30.post_attention_layernorm.weight": "pytorch_model-00003-of-00003.bin",
|
||||||
|
"model.layers.30.self_attn.k_proj.weight": "pytorch_model-00003-of-00003.bin",
|
||||||
|
"model.layers.30.self_attn.o_proj.weight": "pytorch_model-00003-of-00003.bin",
|
||||||
|
"model.layers.30.self_attn.q_proj.weight": "pytorch_model-00003-of-00003.bin",
|
||||||
|
"model.layers.30.self_attn.rotary_emb.inv_freq": "pytorch_model-00003-of-00003.bin",
|
||||||
|
"model.layers.30.self_attn.v_proj.weight": "pytorch_model-00003-of-00003.bin",
|
||||||
|
"model.layers.31.input_layernorm.weight": "pytorch_model-00003-of-00003.bin",
|
||||||
|
"model.layers.31.mlp.down_proj.weight": "pytorch_model-00003-of-00003.bin",
|
||||||
|
"model.layers.31.mlp.gate_proj.weight": "pytorch_model-00003-of-00003.bin",
|
||||||
|
"model.layers.31.mlp.up_proj.weight": "pytorch_model-00003-of-00003.bin",
|
||||||
|
"model.layers.31.post_attention_layernorm.weight": "pytorch_model-00003-of-00003.bin",
|
||||||
|
"model.layers.31.self_attn.k_proj.weight": "pytorch_model-00003-of-00003.bin",
|
||||||
|
"model.layers.31.self_attn.o_proj.weight": "pytorch_model-00003-of-00003.bin",
|
||||||
|
"model.layers.31.self_attn.q_proj.weight": "pytorch_model-00003-of-00003.bin",
|
||||||
|
"model.layers.31.self_attn.rotary_emb.inv_freq": "pytorch_model-00003-of-00003.bin",
|
||||||
|
"model.layers.31.self_attn.v_proj.weight": "pytorch_model-00003-of-00003.bin",
|
||||||
|
"model.layers.4.input_layernorm.weight": "pytorch_model-00001-of-00003.bin",
|
||||||
|
"model.layers.4.mlp.down_proj.weight": "pytorch_model-00001-of-00003.bin",
|
||||||
|
"model.layers.4.mlp.gate_proj.weight": "pytorch_model-00001-of-00003.bin",
|
||||||
|
"model.layers.4.mlp.up_proj.weight": "pytorch_model-00001-of-00003.bin",
|
||||||
|
"model.layers.4.post_attention_layernorm.weight": "pytorch_model-00001-of-00003.bin",
|
||||||
|
"model.layers.4.self_attn.k_proj.weight": "pytorch_model-00001-of-00003.bin",
|
||||||
|
"model.layers.4.self_attn.o_proj.weight": "pytorch_model-00001-of-00003.bin",
|
||||||
|
"model.layers.4.self_attn.q_proj.weight": "pytorch_model-00001-of-00003.bin",
|
||||||
|
"model.layers.4.self_attn.rotary_emb.inv_freq": "pytorch_model-00001-of-00003.bin",
|
||||||
|
"model.layers.4.self_attn.v_proj.weight": "pytorch_model-00001-of-00003.bin",
|
||||||
|
"model.layers.5.input_layernorm.weight": "pytorch_model-00001-of-00003.bin",
|
||||||
|
"model.layers.5.mlp.down_proj.weight": "pytorch_model-00001-of-00003.bin",
|
||||||
|
"model.layers.5.mlp.gate_proj.weight": "pytorch_model-00001-of-00003.bin",
|
||||||
|
"model.layers.5.mlp.up_proj.weight": "pytorch_model-00001-of-00003.bin",
|
||||||
|
"model.layers.5.post_attention_layernorm.weight": "pytorch_model-00001-of-00003.bin",
|
||||||
|
"model.layers.5.self_attn.k_proj.weight": "pytorch_model-00001-of-00003.bin",
|
||||||
|
"model.layers.5.self_attn.o_proj.weight": "pytorch_model-00001-of-00003.bin",
|
||||||
|
"model.layers.5.self_attn.q_proj.weight": "pytorch_model-00001-of-00003.bin",
|
||||||
|
"model.layers.5.self_attn.rotary_emb.inv_freq": "pytorch_model-00001-of-00003.bin",
|
||||||
|
"model.layers.5.self_attn.v_proj.weight": "pytorch_model-00001-of-00003.bin",
|
||||||
|
"model.layers.6.input_layernorm.weight": "pytorch_model-00001-of-00003.bin",
|
||||||
|
"model.layers.6.mlp.down_proj.weight": "pytorch_model-00001-of-00003.bin",
|
||||||
|
"model.layers.6.mlp.gate_proj.weight": "pytorch_model-00001-of-00003.bin",
|
||||||
|
"model.layers.6.mlp.up_proj.weight": "pytorch_model-00001-of-00003.bin",
|
||||||
|
"model.layers.6.post_attention_layernorm.weight": "pytorch_model-00001-of-00003.bin",
|
||||||
|
"model.layers.6.self_attn.k_proj.weight": "pytorch_model-00001-of-00003.bin",
|
||||||
|
"model.layers.6.self_attn.o_proj.weight": "pytorch_model-00001-of-00003.bin",
|
||||||
|
"model.layers.6.self_attn.q_proj.weight": "pytorch_model-00001-of-00003.bin",
|
||||||
|
"model.layers.6.self_attn.rotary_emb.inv_freq": "pytorch_model-00001-of-00003.bin",
|
||||||
|
"model.layers.6.self_attn.v_proj.weight": "pytorch_model-00001-of-00003.bin",
|
||||||
|
"model.layers.7.input_layernorm.weight": "pytorch_model-00001-of-00003.bin",
|
||||||
|
"model.layers.7.mlp.down_proj.weight": "pytorch_model-00001-of-00003.bin",
|
||||||
|
"model.layers.7.mlp.gate_proj.weight": "pytorch_model-00001-of-00003.bin",
|
||||||
|
"model.layers.7.mlp.up_proj.weight": "pytorch_model-00001-of-00003.bin",
|
||||||
|
"model.layers.7.post_attention_layernorm.weight": "pytorch_model-00001-of-00003.bin",
|
||||||
|
"model.layers.7.self_attn.k_proj.weight": "pytorch_model-00001-of-00003.bin",
|
||||||
|
"model.layers.7.self_attn.o_proj.weight": "pytorch_model-00001-of-00003.bin",
|
||||||
|
"model.layers.7.self_attn.q_proj.weight": "pytorch_model-00001-of-00003.bin",
|
||||||
|
"model.layers.7.self_attn.rotary_emb.inv_freq": "pytorch_model-00001-of-00003.bin",
|
||||||
|
"model.layers.7.self_attn.v_proj.weight": "pytorch_model-00001-of-00003.bin",
|
||||||
|
"model.layers.8.input_layernorm.weight": "pytorch_model-00001-of-00003.bin",
|
||||||
|
"model.layers.8.mlp.down_proj.weight": "pytorch_model-00001-of-00003.bin",
|
||||||
|
"model.layers.8.mlp.gate_proj.weight": "pytorch_model-00001-of-00003.bin",
|
||||||
|
"model.layers.8.mlp.up_proj.weight": "pytorch_model-00001-of-00003.bin",
|
||||||
|
"model.layers.8.post_attention_layernorm.weight": "pytorch_model-00001-of-00003.bin",
|
||||||
|
"model.layers.8.self_attn.k_proj.weight": "pytorch_model-00001-of-00003.bin",
|
||||||
|
"model.layers.8.self_attn.o_proj.weight": "pytorch_model-00001-of-00003.bin",
|
||||||
|
"model.layers.8.self_attn.q_proj.weight": "pytorch_model-00001-of-00003.bin",
|
||||||
|
"model.layers.8.self_attn.rotary_emb.inv_freq": "pytorch_model-00001-of-00003.bin",
|
||||||
|
"model.layers.8.self_attn.v_proj.weight": "pytorch_model-00001-of-00003.bin",
|
||||||
|
"model.layers.9.input_layernorm.weight": "pytorch_model-00001-of-00003.bin",
|
||||||
|
"model.layers.9.mlp.down_proj.weight": "pytorch_model-00001-of-00003.bin",
|
||||||
|
"model.layers.9.mlp.gate_proj.weight": "pytorch_model-00001-of-00003.bin",
|
||||||
|
"model.layers.9.mlp.up_proj.weight": "pytorch_model-00001-of-00003.bin",
|
||||||
|
"model.layers.9.post_attention_layernorm.weight": "pytorch_model-00001-of-00003.bin",
|
||||||
|
"model.layers.9.self_attn.k_proj.weight": "pytorch_model-00001-of-00003.bin",
|
||||||
|
"model.layers.9.self_attn.o_proj.weight": "pytorch_model-00001-of-00003.bin",
|
||||||
|
"model.layers.9.self_attn.q_proj.weight": "pytorch_model-00001-of-00003.bin",
|
||||||
|
"model.layers.9.self_attn.rotary_emb.inv_freq": "pytorch_model-00001-of-00003.bin",
|
||||||
|
"model.layers.9.self_attn.v_proj.weight": "pytorch_model-00001-of-00003.bin",
|
||||||
|
"model.norm.weight": "pytorch_model-00003-of-00003.bin"
|
||||||
|
}
|
||||||
|
}
|
||||||
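The two closing braces above end the `weight_map` object and `pytorch_model.bin.index.json` itself: every parameter name maps to the shard file that stores it (note layer 23, whose tensors straddle shards 2 and 3 at the shard boundary). A minimal sketch of how such an index can be consumed directly, assuming the index and the three shard files sit in the working directory; `AutoModelForCausalLM.from_pretrained` performs the same resolution automatically:

```python
import json

import torch

# Rebuild a full state dict from the sharded checkpoint described by the index.
with open("pytorch_model.bin.index.json") as f:
    index = json.load(f)

weight_map = index["weight_map"]  # parameter name -> shard filename
state_dict = {}
for shard in sorted(set(weight_map.values())):
    # Each shard is an ordinary PyTorch checkpoint holding a subset of tensors.
    state_dict.update(torch.load(shard, map_location="cpu"))

assert set(state_dict) == set(weight_map)  # every indexed tensor was loaded
```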
23
special_tokens_map.json
Normal file
@@ -0,0 +1,23 @@
{
"bos_token": {
"content": "<s>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
},
"eos_token": {
"content": "</s>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
},
"unk_token": {
"content": "<unk>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
}
}
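These entries register `<s>`, `</s>`, and `<unk>` as the BOS/EOS/UNK tokens; the flags (`lstrip`, `normalized`, `rstrip`, `single_word`) control how each token is matched during tokenization. A quick sanity check that the tokenizer picks them up, with a placeholder path standing in for wherever this repo is downloaded:

```python
from transformers import AutoTokenizer

# "path/to/this/repo" is a placeholder for a local clone or the hub id.
tok = AutoTokenizer.from_pretrained("path/to/this/repo")
print(tok.bos_token, tok.eos_token, tok.unk_token)  # expected: <s> </s> <unk>
```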
93391
tokenizer.json
Normal file
File diff suppressed because it is too large
34
tokenizer_config.json
Normal file
@@ -0,0 +1,34 @@
{
"bos_token": {
"__type": "AddedToken",
"content": "<s>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
},
"chat_template": "{% if messages[0]['role'] == 'system' %}{% set loop_messages = messages[1:] %}{% set system_message = messages[0]['content'] %}{% else %}{% set loop_messages = messages %}{% set system_message = false %}{% endif %}{% for message in loop_messages %}{% if (message['role'] == 'user') != (loop.index0 % 2 == 0) %}{{ raise_exception('Conversation roles must alternate user/assistant/user/assistant/...') }}{% endif %}{% if loop.index0 == 0 and system_message != false %}{% set content = '<<SYS>>\\n' + system_message + '\\n<</SYS>>\\n\\n' + message['content'] %}{% else %}{% set content = message['content'] %}{% endif %}{% if message['role'] == 'user' %}{{ bos_token + '[INST] ' + content.strip() + ' [/INST]' }}{% elif message['role'] == 'assistant' %}{{ ' ' + content.strip() + ' ' + eos_token }}{% endif %}{% endfor %}",
"clean_up_tokenization_spaces": false,
"eos_token": {
"__type": "AddedToken",
"content": "</s>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
},
"legacy": false,
"model_max_length": 1000000000000000019884624838656,
"pad_token": null,
"padding_side": "right",
"sp_model_kwargs": {},
"tokenizer_class": "LlamaTokenizer",
"unk_token": {
"__type": "AddedToken",
"content": "<unk>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
}
}
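The `chat_template` field above encodes the standard Llama-2 chat format: an optional system prompt is wrapped in `<<SYS>> ... <</SYS>>` and folded into the first user turn, and each user message is wrapped in `[INST] ... [/INST]`. Also note `pad_token: null` (callers typically set `pad_token = eos_token` before batched generation) and the sentinel `model_max_length`, which simply means no explicit length limit was recorded. A sketch of rendering a conversation through the template, assuming a transformers version recent enough to ship `apply_chat_template` and the same placeholder path as above:

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("path/to/this/repo")  # placeholder path

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Suggest light activities for recovery."},
]

# apply_chat_template renders the Jinja template above into a prompt like:
# "<s>[INST] <<SYS>>\n...\n<</SYS>>\n\n... [/INST]"
prompt = tok.apply_chat_template(messages, tokenize=False)
print(prompt)
```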