Initialize project; model provided by the ModelHub XC community

Model: elinas/chronos-33b
Source: Original Platform
Author: ModelHub XC
Date: 2026-05-02 16:46:08 +08:00
Commit: d3d2950362
16 changed files with 94332 additions and 0 deletions

.gitattributes (vendored, new file, +34 lines)

@@ -0,0 +1,34 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text

README.md (new file, +193 lines)

@@ -0,0 +1,193 @@
---
license: other
tags:
- llama
- pytorch
- chatbot
- storywriting
---
# chronos-33b
Update: Safetensors added; more to come? Follow for updates.
This is the fp16 PyTorch / HF version of **chronos-33b** - if you need another version, GGUF and GPTQ versions are linked below.
This model is primarily focused on chat, roleplay, and storywriting, but can accomplish other tasks such as simple reasoning and coding.
Chronos generates very long outputs with coherent text, largely due to the human inputs it was trained on.
This model uses Alpaca formatting, so for optimal model performance, use:
```
### Instruction:
Your instruction or question here.
### Response:
```
[GGUFs provided by @mradermacher!](https://huggingface.co/mradermacher/chronos-33b-GGUF)
[4bit GPTQ Version provided by @TheBloke](https://huggingface.co/TheBloke/chronos-33b-GPTQ)
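Below is a minimal loading-and-generation sketch (not part of the original card), assuming the standard `transformers` API; the instruction text and sampling settings are illustrative, not recommended values.

```python
# Minimal sketch: load the fp16 weights and prompt in Alpaca format.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "elinas/chronos-33b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # matches "torch_dtype": "float16" in config.json
    device_map="auto",          # requires accelerate; shards across available GPUs
)

prompt = (
    "### Instruction:\n"
    "Write a short story about a lighthouse keeper.\n\n"  # illustrative instruction
    "### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=512, do_sample=True, temperature=0.8)
# Decode only the newly generated tokens, skipping the echoed prompt.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```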
<!--**Support My Development of New Models**
<a href='https://ko-fi.com/Q5Q6MB734' target='_blank'><img height='36' style='border:0px;height:36px;'
src='https://storage.ko-fi.com/cdn/kofi1.png?v=3' border='0' alt='Support Development' /></a>-->
---
license: other
---
# LLaMA Model Card
## Model details
**Organization developing the model**
The FAIR team of Meta AI.
**Model date**
LLaMA was trained between December 2022 and February 2023.
**Model version**
This is version 1 of the model.
**Model type**
LLaMA is an auto-regressive language model, based on the transformer architecture. The model comes in different sizes: 7B, 13B, 33B and 65B parameters.
**Paper or resources for more information**
More information can be found in the paper “LLaMA: Open and Efficient Foundation Language Models”, available at https://research.facebook.com/publications/llama-open-and-efficient-foundation-language-models/.
**Citations details**
https://research.facebook.com/publications/llama-open-and-efficient-foundation-language-models/
**License**
Non-commercial bespoke license
**Where to send questions or comments about the model**
Questions and comments about LLaMA can be sent via the [GitHub repository](https://github.com/facebookresearch/llama) of the project, by opening an issue.
## Intended use
**Primary intended uses**
The primary use of LLaMA is research on large language models, including:
- exploring potential applications such as question answering, natural language understanding or reading comprehension,
- understanding the capabilities and limitations of current language models, and developing techniques to improve them,
- evaluating and mitigating biases, risks, toxic and harmful content generation, and hallucinations.
**Primary intended users**
The primary intended users of the model are researchers in natural language processing, machine learning and artificial intelligence.
**Out-of-scope use cases**
LLaMA is a base, or foundational, model. As such, it should not be used on downstream applications without further risk evaluation and mitigation. In particular, our model has not been trained with human feedback, and can thus generate toxic or offensive content, incorrect information or generally unhelpful answers.
## Factors
**Relevant factors**
One of the most relevant factors for which model performance may vary is which language is used. Although we included 20 languages in the training data, most of our dataset is made up of English text, and we thus expect the model to perform better for English than for other languages. Relatedly, previous studies have shown that performance can vary across dialects, and we expect this to be the case for our model as well.
**Evaluation factors**
As our model is trained on data from the Web, we expect that it reflects biases from this source. We thus evaluated on RAI datasets to measure biases exhibited by the model for gender, religion, race, sexual orientation, age, nationality, disability, physical appearance and socio-economic status. We also measure the toxicity of model generations, depending on the toxicity of the context used to prompt the model.
## Metrics
**Model performance measures**
We use the following measures to evaluate the model:
- Accuracy for common sense reasoning, reading comprehension, natural language understanding (MMLU), BIG-bench hard, WinoGender and CrowS-Pairs,
- Exact match for question answering,
- The toxicity score from Perspective API on RealToxicityPrompts.
**Decision thresholds**
Not applicable.
**Approaches to uncertainty and variability**
Due to the high computational requirements of training LLMs, we trained only one model of each size, and thus could not evaluate variability of pre-training.
## Evaluation datasets
The model was evaluated on the following benchmarks: BoolQ, PIQA, SIQA, HellaSwag, WinoGrande, ARC, OpenBookQA, NaturalQuestions, TriviaQA, RACE, MMLU, BIG-bench hard, GSM8k, RealToxicityPrompts, WinoGender, CrowS-Pairs.
## Training dataset
The model was trained using the following sources of data: CCNet [67%], C4 [15%], GitHub [4.5%], Wikipedia [4.5%], Books [4.5%], ArXiv [2.5%], Stack Exchange [2%]. The Wikipedia and Books domains include data in the following languages: bg, ca, cs, da, de, en, es, fr, hr, hu, it, nl, pl, pt, ro, ru, sl, sr, sv, uk. See the paper for more details about the training set and corresponding preprocessing.
## Quantitative analysis
Hyperparameters for the model architecture:

| Number of parameters | dimension | n heads | n layers | Learning rate | Batch size | n tokens |
| -------------------- | --------- | ------- | -------- | ------------- | ---------- | -------- |
| 7B                   | 4096      | 32      | 32       | 3.0E-04       | 4M         | 1T       |
| 13B                  | 5120      | 40      | 40       | 3.0E-04       | 4M         | 1T       |
| 33B                  | 6656      | 52      | 60       | 1.5E-04       | 4M         | 1.4T     |
| 65B                  | 8192      | 64      | 80       | 1.5E-04       | 4M         | 1.4T     |

*Table 1 - Summary of LLaMA model hyperparameters*
We present our results on eight standard common sense reasoning benchmarks in the table below.

| Number of parameters | BoolQ | PIQA | SIQA | HellaSwag | WinoGrande | ARC-e | ARC-c | OBQA | COPA |
| -------------------- | ----- | ---- | ---- | --------- | ---------- | ----- | ----- | ---- | ---- |
| 7B                   | 76.5  | 79.8 | 48.9 | 76.1      | 70.1       | 76.7  | 47.6  | 57.2 | 93   |
| 13B                  | 78.1  | 80.1 | 50.4 | 79.2      | 73         | 78.1  | 52.7  | 56.4 | 94   |
| 33B                  | 83.1  | 82.3 | 50.4 | 82.8      | 76         | 81.4  | 57.8  | 58.6 | 92   |
| 65B                  | 85.3  | 82.8 | 52.3 | 84.2      | 77         | 81.5  | 56    | 60.2 | 94   |

*Table 2 - Summary of LLaMA model performance on reasoning tasks*
We present our results on bias in the table below. Note that a lower value is better, indicating lower bias.
| No | Category | FAIR LLM |
| --- | -------------------- | -------- |
| 1 | Gender | 70.6 |
| 2 | Religion | 79 |
| 3 | Race/Color | 57 |
| 4 | Sexual orientation | 81 |
| 5 | Age | 70.1 |
| 6 | Nationality | 64.2 |
| 7 | Disability | 66.7 |
| 8 | Physical appearance | 77.8 |
| 9 | Socioeconomic status | 71.5 |
| | LLaMA Average | 66.6 |
*Table 3 - Summary of bias in our model output*
## Ethical considerations
**Data**
The data used to train the model is collected from various sources, mostly from the Web. As such, it contains offensive, harmful and biased content. We thus expect the model to exhibit such biases from the training data.
**Human life**
The model is not intended to inform decisions about matters central to human life, and should not be used in such a way.
**Mitigations**
We filtered the data from the Web based on its proximity to Wikipedia text and references. For this, we used a Kneser-Ney language model and a fastText linear classifier.
**Risks and harms**
Risks and harms of large language models include the generation of harmful, offensive or biased content. These models are often prone to generating incorrect information, sometimes referred to as hallucinations. We do not expect our model to be an exception in this regard.
**Use cases**
LLaMA is a foundational model, and as such, it should not be used for downstream applications without further investigation and mitigation of risks. These risks and potentially fraught use cases include, but are not limited to: generation of misinformation and generation of harmful, biased or offensive content.

config.json (new file, +23 lines)

@@ -0,0 +1,23 @@
{
"_name_or_path": "elinas/llama-30b-hf-transformers-4.29",
"architectures": [
"LlamaForCausalLM"
],
"bos_token_id": 1,
"eos_token_id": 2,
"hidden_act": "silu",
"hidden_size": 6656,
"initializer_range": 0.02,
"intermediate_size": 17920,
"max_position_embeddings": 2048,
"model_type": "llama",
"num_attention_heads": 52,
"num_hidden_layers": 60,
"pad_token_id": 0,
"rms_norm_eps": 1e-06,
"tie_word_embeddings": false,
"torch_dtype": "float16",
"transformers_version": "4.28.1",
"use_cache": true,
"vocab_size": 32000
}
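As a sanity check (not part of the repo), the fields above are enough to estimate the parameter count and fp16 footprint. The sketch below assumes the standard LLaMA layer layout (q/k/v/o attention projections, gate/up/down MLP projections, two RMSNorms per layer, untied input/output embeddings).

```python
# Back-of-the-envelope estimate from the config.json fields above.
h, layers, inter, vocab = 6656, 60, 17920, 32000  # hidden_size, num_hidden_layers, intermediate_size, vocab_size

per_layer = (
    4 * h * h        # q/k/v/o attention projections
    + 3 * h * inter  # gate/up/down MLP projections
    + 2 * h          # input + post-attention RMSNorm weights
)
# embeddings + untied lm_head ("tie_word_embeddings": false) + final norm
total = layers * per_layer + 2 * vocab * h + h

print(f"~{total / 1e9:.1f}B parameters")  # ~32.5B
print(f"~{2 * total / 1e9:.0f} GB fp16")  # ~65 GB, consistent with the safetensors shards below
```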

generation_config.json (new file, +7 lines)

@@ -0,0 +1,7 @@
{
"_from_model_config": true,
"bos_token_id": 1,
"eos_token_id": 2,
"pad_token_id": 0,
"transformers_version": "4.28.1"
}
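Note that this file only pins the special-token ids; no sampling parameters are baked in, so they must be supplied at `generate()` time. A small sketch, assuming the standard `transformers` `GenerationConfig` API:

```python
from transformers import GenerationConfig

gen = GenerationConfig.from_pretrained("elinas/chronos-33b")
print(gen.bos_token_id, gen.eos_token_id, gen.pad_token_id)  # 1 2 0
# Sampling settings (temperature, top_p, ...) are not set here; pass them
# explicitly, e.g. model.generate(..., do_sample=True, temperature=0.8)
```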

model-00001-of-00007.safetensors (LFS pointer, new file, +3 lines)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:0a686721bad092eab49e70c7c532f009dfaea8830a1a1d91213bd87348d1da42
size 9818304504

model-00002-of-00007.safetensors (LFS pointer, new file, +3 lines)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:6138f7538d9429648e911b2ab7327940ab72c4cae352318fab3c6ad98ed3ec76
size 9958081264

model-00003-of-00007.safetensors (LFS pointer, new file, +3 lines)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:0f42663880638e7a698c728fc1fbc447e57565dc60a345000062b1260d478eeb
size 9896713208

model-00004-of-00007.safetensors (LFS pointer, new file, +3 lines)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:4f75a5c357f09332e771d532a87e944b498be10c1aab22b23ea8a8481928e8d8
size 9869449640

model-00005-of-00007.safetensors (LFS pointer, new file, +3 lines)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:55020d3a77a52761ff93144d6bd064d95104f509f41f71419170e4260d8d3c40
size 9869449640

model-00006-of-00007.safetensors (LFS pointer, new file, +3 lines)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c6749da2e4e9270832ddb66ec53f2ec7df3fe870ee29139e0096cd3d878c354c
size 9958081280

model-00007-of-00007.safetensors (LFS pointer, new file, +3 lines)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c2d5cb74da53fe9c0b5e72cc47e68f1a52947e6cbee297b9dc7309e054a4490e
size 5687891864
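Each pointer above records the SHA-256 and byte size of its blob (per the Git LFS v1 spec), which is enough to verify a downloaded shard out of band. A sketch of that check, with the shard filename inferred from the index that follows:

```python
# Verify a downloaded blob against its Git LFS pointer (oid + size).
import hashlib
from pathlib import Path

def verify_lfs(blob_path: str, expected_oid: str, expected_size: int) -> bool:
    p = Path(blob_path)
    if p.stat().st_size != expected_size:  # cheap check first
        return False
    digest = hashlib.sha256()
    with p.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # 1 MiB chunks
            digest.update(chunk)
    return digest.hexdigest() == expected_oid

# Values from the last pointer above; filename inferred from the weight index.
print(verify_lfs("model-00007-of-00007.safetensors",
                 "c2d5cb74da53fe9c0b5e72cc47e68f1a52947e6cbee297b9dc7309e054a4490e",
                 5687891864))
```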

model.safetensors.index.json (new file, +610 lines)

@@ -0,0 +1,610 @@
{
"metadata": {
"total_size": 65057902592
},
"weight_map": {
"lm_head.weight": "model-00007-of-00007.safetensors",
"model.embed_tokens.weight": "model-00001-of-00007.safetensors",
"model.layers.0.input_layernorm.weight": "model-00001-of-00007.safetensors",
"model.layers.0.mlp.down_proj.weight": "model-00001-of-00007.safetensors",
"model.layers.0.mlp.gate_proj.weight": "model-00001-of-00007.safetensors",
"model.layers.0.mlp.up_proj.weight": "model-00001-of-00007.safetensors",
"model.layers.0.post_attention_layernorm.weight": "model-00001-of-00007.safetensors",
"model.layers.0.self_attn.k_proj.weight": "model-00001-of-00007.safetensors",
"model.layers.0.self_attn.o_proj.weight": "model-00001-of-00007.safetensors",
"model.layers.0.self_attn.q_proj.weight": "model-00001-of-00007.safetensors",
"model.layers.0.self_attn.rotary_emb.inv_freq": "model-00001-of-00007.safetensors",
"model.layers.0.self_attn.v_proj.weight": "model-00001-of-00007.safetensors",
"model.layers.1.input_layernorm.weight": "model-00001-of-00007.safetensors",
"model.layers.1.mlp.down_proj.weight": "model-00001-of-00007.safetensors",
"model.layers.1.mlp.gate_proj.weight": "model-00001-of-00007.safetensors",
"model.layers.1.mlp.up_proj.weight": "model-00001-of-00007.safetensors",
"model.layers.1.post_attention_layernorm.weight": "model-00001-of-00007.safetensors",
"model.layers.1.self_attn.k_proj.weight": "model-00001-of-00007.safetensors",
"model.layers.1.self_attn.o_proj.weight": "model-00001-of-00007.safetensors",
"model.layers.1.self_attn.q_proj.weight": "model-00001-of-00007.safetensors",
"model.layers.1.self_attn.rotary_emb.inv_freq": "model-00001-of-00007.safetensors",
"model.layers.1.self_attn.v_proj.weight": "model-00001-of-00007.safetensors",
"model.layers.10.input_layernorm.weight": "model-00002-of-00007.safetensors",
"model.layers.10.mlp.down_proj.weight": "model-00002-of-00007.safetensors",
"model.layers.10.mlp.gate_proj.weight": "model-00002-of-00007.safetensors",
"model.layers.10.mlp.up_proj.weight": "model-00002-of-00007.safetensors",
"model.layers.10.post_attention_layernorm.weight": "model-00002-of-00007.safetensors",
"model.layers.10.self_attn.k_proj.weight": "model-00002-of-00007.safetensors",
"model.layers.10.self_attn.o_proj.weight": "model-00002-of-00007.safetensors",
"model.layers.10.self_attn.q_proj.weight": "model-00002-of-00007.safetensors",
"model.layers.10.self_attn.rotary_emb.inv_freq": "model-00002-of-00007.safetensors",
"model.layers.10.self_attn.v_proj.weight": "model-00002-of-00007.safetensors",
"model.layers.11.input_layernorm.weight": "model-00002-of-00007.safetensors",
"model.layers.11.mlp.down_proj.weight": "model-00002-of-00007.safetensors",
"model.layers.11.mlp.gate_proj.weight": "model-00002-of-00007.safetensors",
"model.layers.11.mlp.up_proj.weight": "model-00002-of-00007.safetensors",
"model.layers.11.post_attention_layernorm.weight": "model-00002-of-00007.safetensors",
"model.layers.11.self_attn.k_proj.weight": "model-00002-of-00007.safetensors",
"model.layers.11.self_attn.o_proj.weight": "model-00002-of-00007.safetensors",
"model.layers.11.self_attn.q_proj.weight": "model-00002-of-00007.safetensors",
"model.layers.11.self_attn.rotary_emb.inv_freq": "model-00002-of-00007.safetensors",
"model.layers.11.self_attn.v_proj.weight": "model-00002-of-00007.safetensors",
"model.layers.12.input_layernorm.weight": "model-00002-of-00007.safetensors",
"model.layers.12.mlp.down_proj.weight": "model-00002-of-00007.safetensors",
"model.layers.12.mlp.gate_proj.weight": "model-00002-of-00007.safetensors",
"model.layers.12.mlp.up_proj.weight": "model-00002-of-00007.safetensors",
"model.layers.12.post_attention_layernorm.weight": "model-00002-of-00007.safetensors",
"model.layers.12.self_attn.k_proj.weight": "model-00002-of-00007.safetensors",
"model.layers.12.self_attn.o_proj.weight": "model-00002-of-00007.safetensors",
"model.layers.12.self_attn.q_proj.weight": "model-00002-of-00007.safetensors",
"model.layers.12.self_attn.rotary_emb.inv_freq": "model-00002-of-00007.safetensors",
"model.layers.12.self_attn.v_proj.weight": "model-00002-of-00007.safetensors",
"model.layers.13.input_layernorm.weight": "model-00002-of-00007.safetensors",
"model.layers.13.mlp.down_proj.weight": "model-00002-of-00007.safetensors",
"model.layers.13.mlp.gate_proj.weight": "model-00002-of-00007.safetensors",
"model.layers.13.mlp.up_proj.weight": "model-00002-of-00007.safetensors",
"model.layers.13.post_attention_layernorm.weight": "model-00002-of-00007.safetensors",
"model.layers.13.self_attn.k_proj.weight": "model-00002-of-00007.safetensors",
"model.layers.13.self_attn.o_proj.weight": "model-00002-of-00007.safetensors",
"model.layers.13.self_attn.q_proj.weight": "model-00002-of-00007.safetensors",
"model.layers.13.self_attn.rotary_emb.inv_freq": "model-00002-of-00007.safetensors",
"model.layers.13.self_attn.v_proj.weight": "model-00002-of-00007.safetensors",
"model.layers.14.input_layernorm.weight": "model-00002-of-00007.safetensors",
"model.layers.14.mlp.down_proj.weight": "model-00002-of-00007.safetensors",
"model.layers.14.mlp.gate_proj.weight": "model-00002-of-00007.safetensors",
"model.layers.14.mlp.up_proj.weight": "model-00002-of-00007.safetensors",
"model.layers.14.post_attention_layernorm.weight": "model-00002-of-00007.safetensors",
"model.layers.14.self_attn.k_proj.weight": "model-00002-of-00007.safetensors",
"model.layers.14.self_attn.o_proj.weight": "model-00002-of-00007.safetensors",
"model.layers.14.self_attn.q_proj.weight": "model-00002-of-00007.safetensors",
"model.layers.14.self_attn.rotary_emb.inv_freq": "model-00002-of-00007.safetensors",
"model.layers.14.self_attn.v_proj.weight": "model-00002-of-00007.safetensors",
"model.layers.15.input_layernorm.weight": "model-00002-of-00007.safetensors",
"model.layers.15.mlp.down_proj.weight": "model-00002-of-00007.safetensors",
"model.layers.15.mlp.gate_proj.weight": "model-00002-of-00007.safetensors",
"model.layers.15.mlp.up_proj.weight": "model-00002-of-00007.safetensors",
"model.layers.15.post_attention_layernorm.weight": "model-00002-of-00007.safetensors",
"model.layers.15.self_attn.k_proj.weight": "model-00002-of-00007.safetensors",
"model.layers.15.self_attn.o_proj.weight": "model-00002-of-00007.safetensors",
"model.layers.15.self_attn.q_proj.weight": "model-00002-of-00007.safetensors",
"model.layers.15.self_attn.rotary_emb.inv_freq": "model-00002-of-00007.safetensors",
"model.layers.15.self_attn.v_proj.weight": "model-00002-of-00007.safetensors",
"model.layers.16.input_layernorm.weight": "model-00002-of-00007.safetensors",
"model.layers.16.mlp.down_proj.weight": "model-00002-of-00007.safetensors",
"model.layers.16.mlp.gate_proj.weight": "model-00002-of-00007.safetensors",
"model.layers.16.mlp.up_proj.weight": "model-00002-of-00007.safetensors",
"model.layers.16.post_attention_layernorm.weight": "model-00002-of-00007.safetensors",
"model.layers.16.self_attn.k_proj.weight": "model-00002-of-00007.safetensors",
"model.layers.16.self_attn.o_proj.weight": "model-00002-of-00007.safetensors",
"model.layers.16.self_attn.q_proj.weight": "model-00002-of-00007.safetensors",
"model.layers.16.self_attn.rotary_emb.inv_freq": "model-00002-of-00007.safetensors",
"model.layers.16.self_attn.v_proj.weight": "model-00002-of-00007.safetensors",
"model.layers.17.input_layernorm.weight": "model-00002-of-00007.safetensors",
"model.layers.17.mlp.down_proj.weight": "model-00002-of-00007.safetensors",
"model.layers.17.mlp.gate_proj.weight": "model-00002-of-00007.safetensors",
"model.layers.17.mlp.up_proj.weight": "model-00002-of-00007.safetensors",
"model.layers.17.post_attention_layernorm.weight": "model-00002-of-00007.safetensors",
"model.layers.17.self_attn.k_proj.weight": "model-00002-of-00007.safetensors",
"model.layers.17.self_attn.o_proj.weight": "model-00002-of-00007.safetensors",
"model.layers.17.self_attn.q_proj.weight": "model-00002-of-00007.safetensors",
"model.layers.17.self_attn.rotary_emb.inv_freq": "model-00002-of-00007.safetensors",
"model.layers.17.self_attn.v_proj.weight": "model-00002-of-00007.safetensors",
"model.layers.18.input_layernorm.weight": "model-00003-of-00007.safetensors",
"model.layers.18.mlp.down_proj.weight": "model-00003-of-00007.safetensors",
"model.layers.18.mlp.gate_proj.weight": "model-00003-of-00007.safetensors",
"model.layers.18.mlp.up_proj.weight": "model-00003-of-00007.safetensors",
"model.layers.18.post_attention_layernorm.weight": "model-00003-of-00007.safetensors",
"model.layers.18.self_attn.k_proj.weight": "model-00003-of-00007.safetensors",
"model.layers.18.self_attn.o_proj.weight": "model-00003-of-00007.safetensors",
"model.layers.18.self_attn.q_proj.weight": "model-00002-of-00007.safetensors",
"model.layers.18.self_attn.rotary_emb.inv_freq": "model-00003-of-00007.safetensors",
"model.layers.18.self_attn.v_proj.weight": "model-00003-of-00007.safetensors",
"model.layers.19.input_layernorm.weight": "model-00003-of-00007.safetensors",
"model.layers.19.mlp.down_proj.weight": "model-00003-of-00007.safetensors",
"model.layers.19.mlp.gate_proj.weight": "model-00003-of-00007.safetensors",
"model.layers.19.mlp.up_proj.weight": "model-00003-of-00007.safetensors",
"model.layers.19.post_attention_layernorm.weight": "model-00003-of-00007.safetensors",
"model.layers.19.self_attn.k_proj.weight": "model-00003-of-00007.safetensors",
"model.layers.19.self_attn.o_proj.weight": "model-00003-of-00007.safetensors",
"model.layers.19.self_attn.q_proj.weight": "model-00003-of-00007.safetensors",
"model.layers.19.self_attn.rotary_emb.inv_freq": "model-00003-of-00007.safetensors",
"model.layers.19.self_attn.v_proj.weight": "model-00003-of-00007.safetensors",
"model.layers.2.input_layernorm.weight": "model-00001-of-00007.safetensors",
"model.layers.2.mlp.down_proj.weight": "model-00001-of-00007.safetensors",
"model.layers.2.mlp.gate_proj.weight": "model-00001-of-00007.safetensors",
"model.layers.2.mlp.up_proj.weight": "model-00001-of-00007.safetensors",
"model.layers.2.post_attention_layernorm.weight": "model-00001-of-00007.safetensors",
"model.layers.2.self_attn.k_proj.weight": "model-00001-of-00007.safetensors",
"model.layers.2.self_attn.o_proj.weight": "model-00001-of-00007.safetensors",
"model.layers.2.self_attn.q_proj.weight": "model-00001-of-00007.safetensors",
"model.layers.2.self_attn.rotary_emb.inv_freq": "model-00001-of-00007.safetensors",
"model.layers.2.self_attn.v_proj.weight": "model-00001-of-00007.safetensors",
"model.layers.20.input_layernorm.weight": "model-00003-of-00007.safetensors",
"model.layers.20.mlp.down_proj.weight": "model-00003-of-00007.safetensors",
"model.layers.20.mlp.gate_proj.weight": "model-00003-of-00007.safetensors",
"model.layers.20.mlp.up_proj.weight": "model-00003-of-00007.safetensors",
"model.layers.20.post_attention_layernorm.weight": "model-00003-of-00007.safetensors",
"model.layers.20.self_attn.k_proj.weight": "model-00003-of-00007.safetensors",
"model.layers.20.self_attn.o_proj.weight": "model-00003-of-00007.safetensors",
"model.layers.20.self_attn.q_proj.weight": "model-00003-of-00007.safetensors",
"model.layers.20.self_attn.rotary_emb.inv_freq": "model-00003-of-00007.safetensors",
"model.layers.20.self_attn.v_proj.weight": "model-00003-of-00007.safetensors",
"model.layers.21.input_layernorm.weight": "model-00003-of-00007.safetensors",
"model.layers.21.mlp.down_proj.weight": "model-00003-of-00007.safetensors",
"model.layers.21.mlp.gate_proj.weight": "model-00003-of-00007.safetensors",
"model.layers.21.mlp.up_proj.weight": "model-00003-of-00007.safetensors",
"model.layers.21.post_attention_layernorm.weight": "model-00003-of-00007.safetensors",
"model.layers.21.self_attn.k_proj.weight": "model-00003-of-00007.safetensors",
"model.layers.21.self_attn.o_proj.weight": "model-00003-of-00007.safetensors",
"model.layers.21.self_attn.q_proj.weight": "model-00003-of-00007.safetensors",
"model.layers.21.self_attn.rotary_emb.inv_freq": "model-00003-of-00007.safetensors",
"model.layers.21.self_attn.v_proj.weight": "model-00003-of-00007.safetensors",
"model.layers.22.input_layernorm.weight": "model-00003-of-00007.safetensors",
"model.layers.22.mlp.down_proj.weight": "model-00003-of-00007.safetensors",
"model.layers.22.mlp.gate_proj.weight": "model-00003-of-00007.safetensors",
"model.layers.22.mlp.up_proj.weight": "model-00003-of-00007.safetensors",
"model.layers.22.post_attention_layernorm.weight": "model-00003-of-00007.safetensors",
"model.layers.22.self_attn.k_proj.weight": "model-00003-of-00007.safetensors",
"model.layers.22.self_attn.o_proj.weight": "model-00003-of-00007.safetensors",
"model.layers.22.self_attn.q_proj.weight": "model-00003-of-00007.safetensors",
"model.layers.22.self_attn.rotary_emb.inv_freq": "model-00003-of-00007.safetensors",
"model.layers.22.self_attn.v_proj.weight": "model-00003-of-00007.safetensors",
"model.layers.23.input_layernorm.weight": "model-00003-of-00007.safetensors",
"model.layers.23.mlp.down_proj.weight": "model-00003-of-00007.safetensors",
"model.layers.23.mlp.gate_proj.weight": "model-00003-of-00007.safetensors",
"model.layers.23.mlp.up_proj.weight": "model-00003-of-00007.safetensors",
"model.layers.23.post_attention_layernorm.weight": "model-00003-of-00007.safetensors",
"model.layers.23.self_attn.k_proj.weight": "model-00003-of-00007.safetensors",
"model.layers.23.self_attn.o_proj.weight": "model-00003-of-00007.safetensors",
"model.layers.23.self_attn.q_proj.weight": "model-00003-of-00007.safetensors",
"model.layers.23.self_attn.rotary_emb.inv_freq": "model-00003-of-00007.safetensors",
"model.layers.23.self_attn.v_proj.weight": "model-00003-of-00007.safetensors",
"model.layers.24.input_layernorm.weight": "model-00003-of-00007.safetensors",
"model.layers.24.mlp.down_proj.weight": "model-00003-of-00007.safetensors",
"model.layers.24.mlp.gate_proj.weight": "model-00003-of-00007.safetensors",
"model.layers.24.mlp.up_proj.weight": "model-00003-of-00007.safetensors",
"model.layers.24.post_attention_layernorm.weight": "model-00003-of-00007.safetensors",
"model.layers.24.self_attn.k_proj.weight": "model-00003-of-00007.safetensors",
"model.layers.24.self_attn.o_proj.weight": "model-00003-of-00007.safetensors",
"model.layers.24.self_attn.q_proj.weight": "model-00003-of-00007.safetensors",
"model.layers.24.self_attn.rotary_emb.inv_freq": "model-00003-of-00007.safetensors",
"model.layers.24.self_attn.v_proj.weight": "model-00003-of-00007.safetensors",
"model.layers.25.input_layernorm.weight": "model-00003-of-00007.safetensors",
"model.layers.25.mlp.down_proj.weight": "model-00003-of-00007.safetensors",
"model.layers.25.mlp.gate_proj.weight": "model-00003-of-00007.safetensors",
"model.layers.25.mlp.up_proj.weight": "model-00003-of-00007.safetensors",
"model.layers.25.post_attention_layernorm.weight": "model-00003-of-00007.safetensors",
"model.layers.25.self_attn.k_proj.weight": "model-00003-of-00007.safetensors",
"model.layers.25.self_attn.o_proj.weight": "model-00003-of-00007.safetensors",
"model.layers.25.self_attn.q_proj.weight": "model-00003-of-00007.safetensors",
"model.layers.25.self_attn.rotary_emb.inv_freq": "model-00003-of-00007.safetensors",
"model.layers.25.self_attn.v_proj.weight": "model-00003-of-00007.safetensors",
"model.layers.26.input_layernorm.weight": "model-00003-of-00007.safetensors",
"model.layers.26.mlp.down_proj.weight": "model-00003-of-00007.safetensors",
"model.layers.26.mlp.gate_proj.weight": "model-00003-of-00007.safetensors",
"model.layers.26.mlp.up_proj.weight": "model-00003-of-00007.safetensors",
"model.layers.26.post_attention_layernorm.weight": "model-00003-of-00007.safetensors",
"model.layers.26.self_attn.k_proj.weight": "model-00003-of-00007.safetensors",
"model.layers.26.self_attn.o_proj.weight": "model-00003-of-00007.safetensors",
"model.layers.26.self_attn.q_proj.weight": "model-00003-of-00007.safetensors",
"model.layers.26.self_attn.rotary_emb.inv_freq": "model-00003-of-00007.safetensors",
"model.layers.26.self_attn.v_proj.weight": "model-00003-of-00007.safetensors",
"model.layers.27.input_layernorm.weight": "model-00004-of-00007.safetensors",
"model.layers.27.mlp.down_proj.weight": "model-00004-of-00007.safetensors",
"model.layers.27.mlp.gate_proj.weight": "model-00004-of-00007.safetensors",
"model.layers.27.mlp.up_proj.weight": "model-00004-of-00007.safetensors",
"model.layers.27.post_attention_layernorm.weight": "model-00004-of-00007.safetensors",
"model.layers.27.self_attn.k_proj.weight": "model-00003-of-00007.safetensors",
"model.layers.27.self_attn.o_proj.weight": "model-00003-of-00007.safetensors",
"model.layers.27.self_attn.q_proj.weight": "model-00003-of-00007.safetensors",
"model.layers.27.self_attn.rotary_emb.inv_freq": "model-00003-of-00007.safetensors",
"model.layers.27.self_attn.v_proj.weight": "model-00003-of-00007.safetensors",
"model.layers.28.input_layernorm.weight": "model-00004-of-00007.safetensors",
"model.layers.28.mlp.down_proj.weight": "model-00004-of-00007.safetensors",
"model.layers.28.mlp.gate_proj.weight": "model-00004-of-00007.safetensors",
"model.layers.28.mlp.up_proj.weight": "model-00004-of-00007.safetensors",
"model.layers.28.post_attention_layernorm.weight": "model-00004-of-00007.safetensors",
"model.layers.28.self_attn.k_proj.weight": "model-00004-of-00007.safetensors",
"model.layers.28.self_attn.o_proj.weight": "model-00004-of-00007.safetensors",
"model.layers.28.self_attn.q_proj.weight": "model-00004-of-00007.safetensors",
"model.layers.28.self_attn.rotary_emb.inv_freq": "model-00004-of-00007.safetensors",
"model.layers.28.self_attn.v_proj.weight": "model-00004-of-00007.safetensors",
"model.layers.29.input_layernorm.weight": "model-00004-of-00007.safetensors",
"model.layers.29.mlp.down_proj.weight": "model-00004-of-00007.safetensors",
"model.layers.29.mlp.gate_proj.weight": "model-00004-of-00007.safetensors",
"model.layers.29.mlp.up_proj.weight": "model-00004-of-00007.safetensors",
"model.layers.29.post_attention_layernorm.weight": "model-00004-of-00007.safetensors",
"model.layers.29.self_attn.k_proj.weight": "model-00004-of-00007.safetensors",
"model.layers.29.self_attn.o_proj.weight": "model-00004-of-00007.safetensors",
"model.layers.29.self_attn.q_proj.weight": "model-00004-of-00007.safetensors",
"model.layers.29.self_attn.rotary_emb.inv_freq": "model-00004-of-00007.safetensors",
"model.layers.29.self_attn.v_proj.weight": "model-00004-of-00007.safetensors",
"model.layers.3.input_layernorm.weight": "model-00001-of-00007.safetensors",
"model.layers.3.mlp.down_proj.weight": "model-00001-of-00007.safetensors",
"model.layers.3.mlp.gate_proj.weight": "model-00001-of-00007.safetensors",
"model.layers.3.mlp.up_proj.weight": "model-00001-of-00007.safetensors",
"model.layers.3.post_attention_layernorm.weight": "model-00001-of-00007.safetensors",
"model.layers.3.self_attn.k_proj.weight": "model-00001-of-00007.safetensors",
"model.layers.3.self_attn.o_proj.weight": "model-00001-of-00007.safetensors",
"model.layers.3.self_attn.q_proj.weight": "model-00001-of-00007.safetensors",
"model.layers.3.self_attn.rotary_emb.inv_freq": "model-00001-of-00007.safetensors",
"model.layers.3.self_attn.v_proj.weight": "model-00001-of-00007.safetensors",
"model.layers.30.input_layernorm.weight": "model-00004-of-00007.safetensors",
"model.layers.30.mlp.down_proj.weight": "model-00004-of-00007.safetensors",
"model.layers.30.mlp.gate_proj.weight": "model-00004-of-00007.safetensors",
"model.layers.30.mlp.up_proj.weight": "model-00004-of-00007.safetensors",
"model.layers.30.post_attention_layernorm.weight": "model-00004-of-00007.safetensors",
"model.layers.30.self_attn.k_proj.weight": "model-00004-of-00007.safetensors",
"model.layers.30.self_attn.o_proj.weight": "model-00004-of-00007.safetensors",
"model.layers.30.self_attn.q_proj.weight": "model-00004-of-00007.safetensors",
"model.layers.30.self_attn.rotary_emb.inv_freq": "model-00004-of-00007.safetensors",
"model.layers.30.self_attn.v_proj.weight": "model-00004-of-00007.safetensors",
"model.layers.31.input_layernorm.weight": "model-00004-of-00007.safetensors",
"model.layers.31.mlp.down_proj.weight": "model-00004-of-00007.safetensors",
"model.layers.31.mlp.gate_proj.weight": "model-00004-of-00007.safetensors",
"model.layers.31.mlp.up_proj.weight": "model-00004-of-00007.safetensors",
"model.layers.31.post_attention_layernorm.weight": "model-00004-of-00007.safetensors",
"model.layers.31.self_attn.k_proj.weight": "model-00004-of-00007.safetensors",
"model.layers.31.self_attn.o_proj.weight": "model-00004-of-00007.safetensors",
"model.layers.31.self_attn.q_proj.weight": "model-00004-of-00007.safetensors",
"model.layers.31.self_attn.rotary_emb.inv_freq": "model-00004-of-00007.safetensors",
"model.layers.31.self_attn.v_proj.weight": "model-00004-of-00007.safetensors",
"model.layers.32.input_layernorm.weight": "model-00004-of-00007.safetensors",
"model.layers.32.mlp.down_proj.weight": "model-00004-of-00007.safetensors",
"model.layers.32.mlp.gate_proj.weight": "model-00004-of-00007.safetensors",
"model.layers.32.mlp.up_proj.weight": "model-00004-of-00007.safetensors",
"model.layers.32.post_attention_layernorm.weight": "model-00004-of-00007.safetensors",
"model.layers.32.self_attn.k_proj.weight": "model-00004-of-00007.safetensors",
"model.layers.32.self_attn.o_proj.weight": "model-00004-of-00007.safetensors",
"model.layers.32.self_attn.q_proj.weight": "model-00004-of-00007.safetensors",
"model.layers.32.self_attn.rotary_emb.inv_freq": "model-00004-of-00007.safetensors",
"model.layers.32.self_attn.v_proj.weight": "model-00004-of-00007.safetensors",
"model.layers.33.input_layernorm.weight": "model-00004-of-00007.safetensors",
"model.layers.33.mlp.down_proj.weight": "model-00004-of-00007.safetensors",
"model.layers.33.mlp.gate_proj.weight": "model-00004-of-00007.safetensors",
"model.layers.33.mlp.up_proj.weight": "model-00004-of-00007.safetensors",
"model.layers.33.post_attention_layernorm.weight": "model-00004-of-00007.safetensors",
"model.layers.33.self_attn.k_proj.weight": "model-00004-of-00007.safetensors",
"model.layers.33.self_attn.o_proj.weight": "model-00004-of-00007.safetensors",
"model.layers.33.self_attn.q_proj.weight": "model-00004-of-00007.safetensors",
"model.layers.33.self_attn.rotary_emb.inv_freq": "model-00004-of-00007.safetensors",
"model.layers.33.self_attn.v_proj.weight": "model-00004-of-00007.safetensors",
"model.layers.34.input_layernorm.weight": "model-00004-of-00007.safetensors",
"model.layers.34.mlp.down_proj.weight": "model-00004-of-00007.safetensors",
"model.layers.34.mlp.gate_proj.weight": "model-00004-of-00007.safetensors",
"model.layers.34.mlp.up_proj.weight": "model-00004-of-00007.safetensors",
"model.layers.34.post_attention_layernorm.weight": "model-00004-of-00007.safetensors",
"model.layers.34.self_attn.k_proj.weight": "model-00004-of-00007.safetensors",
"model.layers.34.self_attn.o_proj.weight": "model-00004-of-00007.safetensors",
"model.layers.34.self_attn.q_proj.weight": "model-00004-of-00007.safetensors",
"model.layers.34.self_attn.rotary_emb.inv_freq": "model-00004-of-00007.safetensors",
"model.layers.34.self_attn.v_proj.weight": "model-00004-of-00007.safetensors",
"model.layers.35.input_layernorm.weight": "model-00004-of-00007.safetensors",
"model.layers.35.mlp.down_proj.weight": "model-00004-of-00007.safetensors",
"model.layers.35.mlp.gate_proj.weight": "model-00004-of-00007.safetensors",
"model.layers.35.mlp.up_proj.weight": "model-00004-of-00007.safetensors",
"model.layers.35.post_attention_layernorm.weight": "model-00004-of-00007.safetensors",
"model.layers.35.self_attn.k_proj.weight": "model-00004-of-00007.safetensors",
"model.layers.35.self_attn.o_proj.weight": "model-00004-of-00007.safetensors",
"model.layers.35.self_attn.q_proj.weight": "model-00004-of-00007.safetensors",
"model.layers.35.self_attn.rotary_emb.inv_freq": "model-00004-of-00007.safetensors",
"model.layers.35.self_attn.v_proj.weight": "model-00004-of-00007.safetensors",
"model.layers.36.input_layernorm.weight": "model-00005-of-00007.safetensors",
"model.layers.36.mlp.down_proj.weight": "model-00005-of-00007.safetensors",
"model.layers.36.mlp.gate_proj.weight": "model-00004-of-00007.safetensors",
"model.layers.36.mlp.up_proj.weight": "model-00005-of-00007.safetensors",
"model.layers.36.post_attention_layernorm.weight": "model-00005-of-00007.safetensors",
"model.layers.36.self_attn.k_proj.weight": "model-00004-of-00007.safetensors",
"model.layers.36.self_attn.o_proj.weight": "model-00004-of-00007.safetensors",
"model.layers.36.self_attn.q_proj.weight": "model-00004-of-00007.safetensors",
"model.layers.36.self_attn.rotary_emb.inv_freq": "model-00004-of-00007.safetensors",
"model.layers.36.self_attn.v_proj.weight": "model-00004-of-00007.safetensors",
"model.layers.37.input_layernorm.weight": "model-00005-of-00007.safetensors",
"model.layers.37.mlp.down_proj.weight": "model-00005-of-00007.safetensors",
"model.layers.37.mlp.gate_proj.weight": "model-00005-of-00007.safetensors",
"model.layers.37.mlp.up_proj.weight": "model-00005-of-00007.safetensors",
"model.layers.37.post_attention_layernorm.weight": "model-00005-of-00007.safetensors",
"model.layers.37.self_attn.k_proj.weight": "model-00005-of-00007.safetensors",
"model.layers.37.self_attn.o_proj.weight": "model-00005-of-00007.safetensors",
"model.layers.37.self_attn.q_proj.weight": "model-00005-of-00007.safetensors",
"model.layers.37.self_attn.rotary_emb.inv_freq": "model-00005-of-00007.safetensors",
"model.layers.37.self_attn.v_proj.weight": "model-00005-of-00007.safetensors",
"model.layers.38.input_layernorm.weight": "model-00005-of-00007.safetensors",
"model.layers.38.mlp.down_proj.weight": "model-00005-of-00007.safetensors",
"model.layers.38.mlp.gate_proj.weight": "model-00005-of-00007.safetensors",
"model.layers.38.mlp.up_proj.weight": "model-00005-of-00007.safetensors",
"model.layers.38.post_attention_layernorm.weight": "model-00005-of-00007.safetensors",
"model.layers.38.self_attn.k_proj.weight": "model-00005-of-00007.safetensors",
"model.layers.38.self_attn.o_proj.weight": "model-00005-of-00007.safetensors",
"model.layers.38.self_attn.q_proj.weight": "model-00005-of-00007.safetensors",
"model.layers.38.self_attn.rotary_emb.inv_freq": "model-00005-of-00007.safetensors",
"model.layers.38.self_attn.v_proj.weight": "model-00005-of-00007.safetensors",
"model.layers.39.input_layernorm.weight": "model-00005-of-00007.safetensors",
"model.layers.39.mlp.down_proj.weight": "model-00005-of-00007.safetensors",
"model.layers.39.mlp.gate_proj.weight": "model-00005-of-00007.safetensors",
"model.layers.39.mlp.up_proj.weight": "model-00005-of-00007.safetensors",
"model.layers.39.post_attention_layernorm.weight": "model-00005-of-00007.safetensors",
"model.layers.39.self_attn.k_proj.weight": "model-00005-of-00007.safetensors",
"model.layers.39.self_attn.o_proj.weight": "model-00005-of-00007.safetensors",
"model.layers.39.self_attn.q_proj.weight": "model-00005-of-00007.safetensors",
"model.layers.39.self_attn.rotary_emb.inv_freq": "model-00005-of-00007.safetensors",
"model.layers.39.self_attn.v_proj.weight": "model-00005-of-00007.safetensors",
"model.layers.4.input_layernorm.weight": "model-00001-of-00007.safetensors",
"model.layers.4.mlp.down_proj.weight": "model-00001-of-00007.safetensors",
"model.layers.4.mlp.gate_proj.weight": "model-00001-of-00007.safetensors",
"model.layers.4.mlp.up_proj.weight": "model-00001-of-00007.safetensors",
"model.layers.4.post_attention_layernorm.weight": "model-00001-of-00007.safetensors",
"model.layers.4.self_attn.k_proj.weight": "model-00001-of-00007.safetensors",
"model.layers.4.self_attn.o_proj.weight": "model-00001-of-00007.safetensors",
"model.layers.4.self_attn.q_proj.weight": "model-00001-of-00007.safetensors",
"model.layers.4.self_attn.rotary_emb.inv_freq": "model-00001-of-00007.safetensors",
"model.layers.4.self_attn.v_proj.weight": "model-00001-of-00007.safetensors",
"model.layers.40.input_layernorm.weight": "model-00005-of-00007.safetensors",
"model.layers.40.mlp.down_proj.weight": "model-00005-of-00007.safetensors",
"model.layers.40.mlp.gate_proj.weight": "model-00005-of-00007.safetensors",
"model.layers.40.mlp.up_proj.weight": "model-00005-of-00007.safetensors",
"model.layers.40.post_attention_layernorm.weight": "model-00005-of-00007.safetensors",
"model.layers.40.self_attn.k_proj.weight": "model-00005-of-00007.safetensors",
"model.layers.40.self_attn.o_proj.weight": "model-00005-of-00007.safetensors",
"model.layers.40.self_attn.q_proj.weight": "model-00005-of-00007.safetensors",
"model.layers.40.self_attn.rotary_emb.inv_freq": "model-00005-of-00007.safetensors",
"model.layers.40.self_attn.v_proj.weight": "model-00005-of-00007.safetensors",
"model.layers.41.input_layernorm.weight": "model-00005-of-00007.safetensors",
"model.layers.41.mlp.down_proj.weight": "model-00005-of-00007.safetensors",
"model.layers.41.mlp.gate_proj.weight": "model-00005-of-00007.safetensors",
"model.layers.41.mlp.up_proj.weight": "model-00005-of-00007.safetensors",
"model.layers.41.post_attention_layernorm.weight": "model-00005-of-00007.safetensors",
"model.layers.41.self_attn.k_proj.weight": "model-00005-of-00007.safetensors",
"model.layers.41.self_attn.o_proj.weight": "model-00005-of-00007.safetensors",
"model.layers.41.self_attn.q_proj.weight": "model-00005-of-00007.safetensors",
"model.layers.41.self_attn.rotary_emb.inv_freq": "model-00005-of-00007.safetensors",
"model.layers.41.self_attn.v_proj.weight": "model-00005-of-00007.safetensors",
"model.layers.42.input_layernorm.weight": "model-00005-of-00007.safetensors",
"model.layers.42.mlp.down_proj.weight": "model-00005-of-00007.safetensors",
"model.layers.42.mlp.gate_proj.weight": "model-00005-of-00007.safetensors",
"model.layers.42.mlp.up_proj.weight": "model-00005-of-00007.safetensors",
"model.layers.42.post_attention_layernorm.weight": "model-00005-of-00007.safetensors",
"model.layers.42.self_attn.k_proj.weight": "model-00005-of-00007.safetensors",
"model.layers.42.self_attn.o_proj.weight": "model-00005-of-00007.safetensors",
"model.layers.42.self_attn.q_proj.weight": "model-00005-of-00007.safetensors",
"model.layers.42.self_attn.rotary_emb.inv_freq": "model-00005-of-00007.safetensors",
"model.layers.42.self_attn.v_proj.weight": "model-00005-of-00007.safetensors",
"model.layers.43.input_layernorm.weight": "model-00005-of-00007.safetensors",
"model.layers.43.mlp.down_proj.weight": "model-00005-of-00007.safetensors",
"model.layers.43.mlp.gate_proj.weight": "model-00005-of-00007.safetensors",
"model.layers.43.mlp.up_proj.weight": "model-00005-of-00007.safetensors",
"model.layers.43.post_attention_layernorm.weight": "model-00005-of-00007.safetensors",
"model.layers.43.self_attn.k_proj.weight": "model-00005-of-00007.safetensors",
"model.layers.43.self_attn.o_proj.weight": "model-00005-of-00007.safetensors",
"model.layers.43.self_attn.q_proj.weight": "model-00005-of-00007.safetensors",
"model.layers.43.self_attn.rotary_emb.inv_freq": "model-00005-of-00007.safetensors",
"model.layers.43.self_attn.v_proj.weight": "model-00005-of-00007.safetensors",
"model.layers.44.input_layernorm.weight": "model-00005-of-00007.safetensors",
"model.layers.44.mlp.down_proj.weight": "model-00005-of-00007.safetensors",
"model.layers.44.mlp.gate_proj.weight": "model-00005-of-00007.safetensors",
"model.layers.44.mlp.up_proj.weight": "model-00005-of-00007.safetensors",
"model.layers.44.post_attention_layernorm.weight": "model-00005-of-00007.safetensors",
"model.layers.44.self_attn.k_proj.weight": "model-00005-of-00007.safetensors",
"model.layers.44.self_attn.o_proj.weight": "model-00005-of-00007.safetensors",
"model.layers.44.self_attn.q_proj.weight": "model-00005-of-00007.safetensors",
"model.layers.44.self_attn.rotary_emb.inv_freq": "model-00005-of-00007.safetensors",
"model.layers.44.self_attn.v_proj.weight": "model-00005-of-00007.safetensors",
"model.layers.45.input_layernorm.weight": "model-00006-of-00007.safetensors",
"model.layers.45.mlp.down_proj.weight": "model-00005-of-00007.safetensors",
"model.layers.45.mlp.gate_proj.weight": "model-00005-of-00007.safetensors",
"model.layers.45.mlp.up_proj.weight": "model-00006-of-00007.safetensors",
"model.layers.45.post_attention_layernorm.weight": "model-00006-of-00007.safetensors",
"model.layers.45.self_attn.k_proj.weight": "model-00005-of-00007.safetensors",
"model.layers.45.self_attn.o_proj.weight": "model-00005-of-00007.safetensors",
"model.layers.45.self_attn.q_proj.weight": "model-00005-of-00007.safetensors",
"model.layers.45.self_attn.rotary_emb.inv_freq": "model-00005-of-00007.safetensors",
"model.layers.45.self_attn.v_proj.weight": "model-00005-of-00007.safetensors",
"model.layers.46.input_layernorm.weight": "model-00006-of-00007.safetensors",
"model.layers.46.mlp.down_proj.weight": "model-00006-of-00007.safetensors",
"model.layers.46.mlp.gate_proj.weight": "model-00006-of-00007.safetensors",
"model.layers.46.mlp.up_proj.weight": "model-00006-of-00007.safetensors",
"model.layers.46.post_attention_layernorm.weight": "model-00006-of-00007.safetensors",
"model.layers.46.self_attn.k_proj.weight": "model-00006-of-00007.safetensors",
"model.layers.46.self_attn.o_proj.weight": "model-00006-of-00007.safetensors",
"model.layers.46.self_attn.q_proj.weight": "model-00006-of-00007.safetensors",
"model.layers.46.self_attn.rotary_emb.inv_freq": "model-00006-of-00007.safetensors",
"model.layers.46.self_attn.v_proj.weight": "model-00006-of-00007.safetensors",
"model.layers.47.input_layernorm.weight": "model-00006-of-00007.safetensors",
"model.layers.47.mlp.down_proj.weight": "model-00006-of-00007.safetensors",
"model.layers.47.mlp.gate_proj.weight": "model-00006-of-00007.safetensors",
"model.layers.47.mlp.up_proj.weight": "model-00006-of-00007.safetensors",
"model.layers.47.post_attention_layernorm.weight": "model-00006-of-00007.safetensors",
"model.layers.47.self_attn.k_proj.weight": "model-00006-of-00007.safetensors",
"model.layers.47.self_attn.o_proj.weight": "model-00006-of-00007.safetensors",
"model.layers.47.self_attn.q_proj.weight": "model-00006-of-00007.safetensors",
"model.layers.47.self_attn.rotary_emb.inv_freq": "model-00006-of-00007.safetensors",
"model.layers.47.self_attn.v_proj.weight": "model-00006-of-00007.safetensors",
"model.layers.48.input_layernorm.weight": "model-00006-of-00007.safetensors",
"model.layers.48.mlp.down_proj.weight": "model-00006-of-00007.safetensors",
"model.layers.48.mlp.gate_proj.weight": "model-00006-of-00007.safetensors",
"model.layers.48.mlp.up_proj.weight": "model-00006-of-00007.safetensors",
"model.layers.48.post_attention_layernorm.weight": "model-00006-of-00007.safetensors",
"model.layers.48.self_attn.k_proj.weight": "model-00006-of-00007.safetensors",
"model.layers.48.self_attn.o_proj.weight": "model-00006-of-00007.safetensors",
"model.layers.48.self_attn.q_proj.weight": "model-00006-of-00007.safetensors",
"model.layers.48.self_attn.rotary_emb.inv_freq": "model-00006-of-00007.safetensors",
"model.layers.48.self_attn.v_proj.weight": "model-00006-of-00007.safetensors",
"model.layers.49.input_layernorm.weight": "model-00006-of-00007.safetensors",
"model.layers.49.mlp.down_proj.weight": "model-00006-of-00007.safetensors",
"model.layers.49.mlp.gate_proj.weight": "model-00006-of-00007.safetensors",
"model.layers.49.mlp.up_proj.weight": "model-00006-of-00007.safetensors",
"model.layers.49.post_attention_layernorm.weight": "model-00006-of-00007.safetensors",
"model.layers.49.self_attn.k_proj.weight": "model-00006-of-00007.safetensors",
"model.layers.49.self_attn.o_proj.weight": "model-00006-of-00007.safetensors",
"model.layers.49.self_attn.q_proj.weight": "model-00006-of-00007.safetensors",
"model.layers.49.self_attn.rotary_emb.inv_freq": "model-00006-of-00007.safetensors",
"model.layers.49.self_attn.v_proj.weight": "model-00006-of-00007.safetensors",
"model.layers.5.input_layernorm.weight": "model-00001-of-00007.safetensors",
"model.layers.5.mlp.down_proj.weight": "model-00001-of-00007.safetensors",
"model.layers.5.mlp.gate_proj.weight": "model-00001-of-00007.safetensors",
"model.layers.5.mlp.up_proj.weight": "model-00001-of-00007.safetensors",
"model.layers.5.post_attention_layernorm.weight": "model-00001-of-00007.safetensors",
"model.layers.5.self_attn.k_proj.weight": "model-00001-of-00007.safetensors",
"model.layers.5.self_attn.o_proj.weight": "model-00001-of-00007.safetensors",
"model.layers.5.self_attn.q_proj.weight": "model-00001-of-00007.safetensors",
"model.layers.5.self_attn.rotary_emb.inv_freq": "model-00001-of-00007.safetensors",
"model.layers.5.self_attn.v_proj.weight": "model-00001-of-00007.safetensors",
"model.layers.50.input_layernorm.weight": "model-00006-of-00007.safetensors",
"model.layers.50.mlp.down_proj.weight": "model-00006-of-00007.safetensors",
"model.layers.50.mlp.gate_proj.weight": "model-00006-of-00007.safetensors",
"model.layers.50.mlp.up_proj.weight": "model-00006-of-00007.safetensors",
"model.layers.50.post_attention_layernorm.weight": "model-00006-of-00007.safetensors",
"model.layers.50.self_attn.k_proj.weight": "model-00006-of-00007.safetensors",
"model.layers.50.self_attn.o_proj.weight": "model-00006-of-00007.safetensors",
"model.layers.50.self_attn.q_proj.weight": "model-00006-of-00007.safetensors",
"model.layers.50.self_attn.rotary_emb.inv_freq": "model-00006-of-00007.safetensors",
"model.layers.50.self_attn.v_proj.weight": "model-00006-of-00007.safetensors",
"model.layers.51.input_layernorm.weight": "model-00006-of-00007.safetensors",
"model.layers.51.mlp.down_proj.weight": "model-00006-of-00007.safetensors",
"model.layers.51.mlp.gate_proj.weight": "model-00006-of-00007.safetensors",
"model.layers.51.mlp.up_proj.weight": "model-00006-of-00007.safetensors",
"model.layers.51.post_attention_layernorm.weight": "model-00006-of-00007.safetensors",
"model.layers.51.self_attn.k_proj.weight": "model-00006-of-00007.safetensors",
"model.layers.51.self_attn.o_proj.weight": "model-00006-of-00007.safetensors",
"model.layers.51.self_attn.q_proj.weight": "model-00006-of-00007.safetensors",
"model.layers.51.self_attn.rotary_emb.inv_freq": "model-00006-of-00007.safetensors",
"model.layers.51.self_attn.v_proj.weight": "model-00006-of-00007.safetensors",
"model.layers.52.input_layernorm.weight": "model-00006-of-00007.safetensors",
"model.layers.52.mlp.down_proj.weight": "model-00006-of-00007.safetensors",
"model.layers.52.mlp.gate_proj.weight": "model-00006-of-00007.safetensors",
"model.layers.52.mlp.up_proj.weight": "model-00006-of-00007.safetensors",
"model.layers.52.post_attention_layernorm.weight": "model-00006-of-00007.safetensors",
"model.layers.52.self_attn.k_proj.weight": "model-00006-of-00007.safetensors",
"model.layers.52.self_attn.o_proj.weight": "model-00006-of-00007.safetensors",
"model.layers.52.self_attn.q_proj.weight": "model-00006-of-00007.safetensors",
"model.layers.52.self_attn.rotary_emb.inv_freq": "model-00006-of-00007.safetensors",
"model.layers.52.self_attn.v_proj.weight": "model-00006-of-00007.safetensors",
"model.layers.53.input_layernorm.weight": "model-00006-of-00007.safetensors",
"model.layers.53.mlp.down_proj.weight": "model-00006-of-00007.safetensors",
"model.layers.53.mlp.gate_proj.weight": "model-00006-of-00007.safetensors",
"model.layers.53.mlp.up_proj.weight": "model-00006-of-00007.safetensors",
"model.layers.53.post_attention_layernorm.weight": "model-00006-of-00007.safetensors",
"model.layers.53.self_attn.k_proj.weight": "model-00006-of-00007.safetensors",
"model.layers.53.self_attn.o_proj.weight": "model-00006-of-00007.safetensors",
"model.layers.53.self_attn.q_proj.weight": "model-00006-of-00007.safetensors",
"model.layers.53.self_attn.rotary_emb.inv_freq": "model-00006-of-00007.safetensors",
"model.layers.53.self_attn.v_proj.weight": "model-00006-of-00007.safetensors",
"model.layers.54.input_layernorm.weight": "model-00006-of-00007.safetensors",
"model.layers.54.mlp.down_proj.weight": "model-00006-of-00007.safetensors",
"model.layers.54.mlp.gate_proj.weight": "model-00006-of-00007.safetensors",
"model.layers.54.mlp.up_proj.weight": "model-00006-of-00007.safetensors",
"model.layers.54.post_attention_layernorm.weight": "model-00006-of-00007.safetensors",
"model.layers.54.self_attn.k_proj.weight": "model-00006-of-00007.safetensors",
"model.layers.54.self_attn.o_proj.weight": "model-00006-of-00007.safetensors",
"model.layers.54.self_attn.q_proj.weight": "model-00006-of-00007.safetensors",
"model.layers.54.self_attn.rotary_emb.inv_freq": "model-00006-of-00007.safetensors",
"model.layers.54.self_attn.v_proj.weight": "model-00006-of-00007.safetensors",
"model.layers.55.input_layernorm.weight": "model-00007-of-00007.safetensors",
"model.layers.55.mlp.down_proj.weight": "model-00007-of-00007.safetensors",
"model.layers.55.mlp.gate_proj.weight": "model-00007-of-00007.safetensors",
"model.layers.55.mlp.up_proj.weight": "model-00007-of-00007.safetensors",
"model.layers.55.post_attention_layernorm.weight": "model-00007-of-00007.safetensors",
"model.layers.55.self_attn.k_proj.weight": "model-00007-of-00007.safetensors",
"model.layers.55.self_attn.o_proj.weight": "model-00007-of-00007.safetensors",
"model.layers.55.self_attn.q_proj.weight": "model-00006-of-00007.safetensors",
"model.layers.55.self_attn.rotary_emb.inv_freq": "model-00007-of-00007.safetensors",
"model.layers.55.self_attn.v_proj.weight": "model-00007-of-00007.safetensors",
"model.layers.56.input_layernorm.weight": "model-00007-of-00007.safetensors",
"model.layers.56.mlp.down_proj.weight": "model-00007-of-00007.safetensors",
"model.layers.56.mlp.gate_proj.weight": "model-00007-of-00007.safetensors",
"model.layers.56.mlp.up_proj.weight": "model-00007-of-00007.safetensors",
"model.layers.56.post_attention_layernorm.weight": "model-00007-of-00007.safetensors",
"model.layers.56.self_attn.k_proj.weight": "model-00007-of-00007.safetensors",
"model.layers.56.self_attn.o_proj.weight": "model-00007-of-00007.safetensors",
"model.layers.56.self_attn.q_proj.weight": "model-00007-of-00007.safetensors",
"model.layers.56.self_attn.rotary_emb.inv_freq": "model-00007-of-00007.safetensors",
"model.layers.56.self_attn.v_proj.weight": "model-00007-of-00007.safetensors",
"model.layers.57.input_layernorm.weight": "model-00007-of-00007.safetensors",
"model.layers.57.mlp.down_proj.weight": "model-00007-of-00007.safetensors",
"model.layers.57.mlp.gate_proj.weight": "model-00007-of-00007.safetensors",
"model.layers.57.mlp.up_proj.weight": "model-00007-of-00007.safetensors",
"model.layers.57.post_attention_layernorm.weight": "model-00007-of-00007.safetensors",
"model.layers.57.self_attn.k_proj.weight": "model-00007-of-00007.safetensors",
"model.layers.57.self_attn.o_proj.weight": "model-00007-of-00007.safetensors",
"model.layers.57.self_attn.q_proj.weight": "model-00007-of-00007.safetensors",
"model.layers.57.self_attn.rotary_emb.inv_freq": "model-00007-of-00007.safetensors",
"model.layers.57.self_attn.v_proj.weight": "model-00007-of-00007.safetensors",
"model.layers.58.input_layernorm.weight": "model-00007-of-00007.safetensors",
"model.layers.58.mlp.down_proj.weight": "model-00007-of-00007.safetensors",
"model.layers.58.mlp.gate_proj.weight": "model-00007-of-00007.safetensors",
"model.layers.58.mlp.up_proj.weight": "model-00007-of-00007.safetensors",
"model.layers.58.post_attention_layernorm.weight": "model-00007-of-00007.safetensors",
"model.layers.58.self_attn.k_proj.weight": "model-00007-of-00007.safetensors",
"model.layers.58.self_attn.o_proj.weight": "model-00007-of-00007.safetensors",
"model.layers.58.self_attn.q_proj.weight": "model-00007-of-00007.safetensors",
"model.layers.58.self_attn.rotary_emb.inv_freq": "model-00007-of-00007.safetensors",
"model.layers.58.self_attn.v_proj.weight": "model-00007-of-00007.safetensors",
"model.layers.59.input_layernorm.weight": "model-00007-of-00007.safetensors",
"model.layers.59.mlp.down_proj.weight": "model-00007-of-00007.safetensors",
"model.layers.59.mlp.gate_proj.weight": "model-00007-of-00007.safetensors",
"model.layers.59.mlp.up_proj.weight": "model-00007-of-00007.safetensors",
"model.layers.59.post_attention_layernorm.weight": "model-00007-of-00007.safetensors",
"model.layers.59.self_attn.k_proj.weight": "model-00007-of-00007.safetensors",
"model.layers.59.self_attn.o_proj.weight": "model-00007-of-00007.safetensors",
"model.layers.59.self_attn.q_proj.weight": "model-00007-of-00007.safetensors",
"model.layers.59.self_attn.rotary_emb.inv_freq": "model-00007-of-00007.safetensors",
"model.layers.59.self_attn.v_proj.weight": "model-00007-of-00007.safetensors",
"model.layers.6.input_layernorm.weight": "model-00001-of-00007.safetensors",
"model.layers.6.mlp.down_proj.weight": "model-00001-of-00007.safetensors",
"model.layers.6.mlp.gate_proj.weight": "model-00001-of-00007.safetensors",
"model.layers.6.mlp.up_proj.weight": "model-00001-of-00007.safetensors",
"model.layers.6.post_attention_layernorm.weight": "model-00001-of-00007.safetensors",
"model.layers.6.self_attn.k_proj.weight": "model-00001-of-00007.safetensors",
"model.layers.6.self_attn.o_proj.weight": "model-00001-of-00007.safetensors",
"model.layers.6.self_attn.q_proj.weight": "model-00001-of-00007.safetensors",
"model.layers.6.self_attn.rotary_emb.inv_freq": "model-00001-of-00007.safetensors",
"model.layers.6.self_attn.v_proj.weight": "model-00001-of-00007.safetensors",
"model.layers.7.input_layernorm.weight": "model-00001-of-00007.safetensors",
"model.layers.7.mlp.down_proj.weight": "model-00001-of-00007.safetensors",
"model.layers.7.mlp.gate_proj.weight": "model-00001-of-00007.safetensors",
"model.layers.7.mlp.up_proj.weight": "model-00001-of-00007.safetensors",
"model.layers.7.post_attention_layernorm.weight": "model-00001-of-00007.safetensors",
"model.layers.7.self_attn.k_proj.weight": "model-00001-of-00007.safetensors",
"model.layers.7.self_attn.o_proj.weight": "model-00001-of-00007.safetensors",
"model.layers.7.self_attn.q_proj.weight": "model-00001-of-00007.safetensors",
"model.layers.7.self_attn.rotary_emb.inv_freq": "model-00001-of-00007.safetensors",
"model.layers.7.self_attn.v_proj.weight": "model-00001-of-00007.safetensors",
"model.layers.8.input_layernorm.weight": "model-00002-of-00007.safetensors",
"model.layers.8.mlp.down_proj.weight": "model-00001-of-00007.safetensors",
"model.layers.8.mlp.gate_proj.weight": "model-00001-of-00007.safetensors",
"model.layers.8.mlp.up_proj.weight": "model-00002-of-00007.safetensors",
"model.layers.8.post_attention_layernorm.weight": "model-00002-of-00007.safetensors",
"model.layers.8.self_attn.k_proj.weight": "model-00001-of-00007.safetensors",
"model.layers.8.self_attn.o_proj.weight": "model-00001-of-00007.safetensors",
"model.layers.8.self_attn.q_proj.weight": "model-00001-of-00007.safetensors",
"model.layers.8.self_attn.rotary_emb.inv_freq": "model-00001-of-00007.safetensors",
"model.layers.8.self_attn.v_proj.weight": "model-00001-of-00007.safetensors",
"model.layers.9.input_layernorm.weight": "model-00002-of-00007.safetensors",
"model.layers.9.mlp.down_proj.weight": "model-00002-of-00007.safetensors",
"model.layers.9.mlp.gate_proj.weight": "model-00002-of-00007.safetensors",
"model.layers.9.mlp.up_proj.weight": "model-00002-of-00007.safetensors",
"model.layers.9.post_attention_layernorm.weight": "model-00002-of-00007.safetensors",
"model.layers.9.self_attn.k_proj.weight": "model-00002-of-00007.safetensors",
"model.layers.9.self_attn.o_proj.weight": "model-00002-of-00007.safetensors",
"model.layers.9.self_attn.q_proj.weight": "model-00002-of-00007.safetensors",
"model.layers.9.self_attn.rotary_emb.inv_freq": "model-00002-of-00007.safetensors",
"model.layers.9.self_attn.v_proj.weight": "model-00002-of-00007.safetensors",
"model.norm.weight": "model-00007-of-00007.safetensors"
}
}
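A sketch of how a loader uses this index, assuming the `safetensors` library's `safe_open` API: `weight_map` names the shard that holds each tensor, so a single tensor can be read without touching the other six files.

```python
# Look up one tensor's shard via the index, then read just that tensor.
import json
from safetensors import safe_open

with open("model.safetensors.index.json") as f:
    index = json.load(f)

name = "model.layers.59.mlp.down_proj.weight"
shard = index["weight_map"][name]       # -> "model-00007-of-00007.safetensors"
with safe_open(shard, framework="pt") as f:
    tensor = f.get_tensor(name)         # reads only this tensor, not the whole shard
print(tensor.shape)                     # torch.Size([6656, 17920])
```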

special_tokens_map.json (new file, +23 lines)

@@ -0,0 +1,23 @@
{
"bos_token": {
"content": "<s>",
"lstrip": false,
"normalized": true,
"rstrip": false,
"single_word": false
},
"eos_token": {
"content": "</s>",
"lstrip": false,
"normalized": true,
"rstrip": false,
"single_word": false
},
"unk_token": {
"content": "<unk>",
"lstrip": false,
"normalized": true,
"rstrip": false,
"single_word": false
}
}

tokenizer.json (new file, +93385 lines)

File diff suppressed because it is too large.

tokenizer.model (LFS pointer, new file, +3 lines)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:9e556afd44213b6bd1be2b850ebbbd98f5481437a8021afaf58ee7fb1818d347
size 499723

tokenizer_config.json (new file, +33 lines)

@@ -0,0 +1,33 @@
{
"add_bos_token": true,
"add_eos_token": false,
"bos_token": {
"__type": "AddedToken",
"content": "<s>",
"lstrip": false,
"normalized": true,
"rstrip": false,
"single_word": false
},
"clean_up_tokenization_spaces": false,
"eos_token": {
"__type": "AddedToken",
"content": "</s>",
"lstrip": false,
"normalized": true,
"rstrip": false,
"single_word": false
},
"model_max_length": 1000000000000000019884624838656,
"pad_token": null,
"sp_model_kwargs": {},
"tokenizer_class": "LlamaTokenizer",
"unk_token": {
"__type": "AddedToken",
"content": "<unk>",
"lstrip": false,
"normalized": true,
"rstrip": false,
"single_word": false
}
}
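In practical terms, `"add_bos_token": true` with `"add_eos_token": false` means encoding prepends `<s>` (id 1) but never appends `</s>` (id 2). A small sketch of checking this, assuming the standard `transformers` tokenizer API:

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("elinas/chronos-33b")
ids = tok("### Instruction:").input_ids
print(ids[0] == tok.bos_token_id)   # True  -- BOS prepended
print(ids[-1] == tok.eos_token_id)  # False -- no EOS appended
```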