Initialize the project; model provided by the ModelHub XC community

Model: sharpbai/Llama-2-7b-chat
Source: Original Platform
ModelHub XC
2026-04-19 10:19:51 +08:00
commit 131c786acd
43 changed files with 94109 additions and 0 deletions

35
.gitattributes vendored Normal file

@@ -0,0 +1,35 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
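
Each pattern above routes matching files through Git LFS instead of plain Git. As a quick illustrative check (Python's `fnmatch` only approximates `.gitattributes` glob semantics, so treat this as a sketch):

```python
# Illustrative: which repo files the LFS patterns above would route to LFS.
# fnmatch only approximates .gitattributes glob semantics, so this is a sketch.
from fnmatch import fnmatch

patterns = ["*.bin", "*.model", "*.safetensors", "*tfevents*"]
files = ["pytorch_model-00001-of-00034.bin", "tokenizer.model", "config.json"]

for f in files:
    routed = any(fnmatch(f, p) for p in patterns)
    print(f, "-> Git LFS" if routed else "-> plain git")
```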

147
README.md Normal file

@@ -0,0 +1,147 @@
---
extra_gated_heading: Access Llama 2 on Hugging Face
extra_gated_description: >-
This is a form to enable access to Llama 2 on Hugging Face after you have been
granted access from Meta. Please visit the [Meta website](https://ai.meta.com/resources/models-and-libraries/llama-downloads) and accept our
license terms and acceptable use policy before submitting this form. Requests
will be processed in 1-2 days.
extra_gated_prompt: "**Your Hugging Face account email address MUST match the email you provide on the Meta website, or your request will not be approved.**"
extra_gated_button_content: Submit
extra_gated_fields:
I agree to share my name, email address and username with Meta and confirm that I have already been granted download access on the Meta website: checkbox
language:
- en
pipeline_tag: text-generation
inference: false
tags:
- facebook
- meta
- pytorch
- llama
- llama-2
---
# Llama-2-7b-chat
*The weight files are split into 405 MB chunks for convenient, fast parallel downloads.*
A 405 MB split-weight version of [meta-llama/Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf).
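
Despite the re-chunking, the repo loads like any standard Transformers checkpoint. A minimal usage sketch (assumes `transformers` is installed; `device_map="auto"` additionally requires `accelerate` and is optional):

```python
# Minimal usage sketch (assumes transformers >= 4.30; device_map="auto"
# additionally requires the accelerate package).
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "sharpbai/Llama-2-7b-chat"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype="auto",   # config.json declares float16 weights
    device_map="auto",    # downloads the 34 shards and spreads them over devices
)

inputs = tokenizer("Hello!", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```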
The original model card follows below.
-----------------------------------------
# **Llama 2**
Llama 2 is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. This is the repository for the 7B fine-tuned model, optimized for dialogue use cases and converted for the Hugging Face Transformers format. Links to other models can be found in the index at the bottom.
## Model Details
*Note: Use of this model is governed by the Meta license. In order to download the model weights and tokenizer, please visit the [website](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) and accept our License before requesting access here.*
Meta developed and publicly released the Llama 2 family of large language models (LLMs), a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. Our fine-tuned LLMs, called Llama-2-Chat, are optimized for dialogue use cases. Llama-2-Chat models outperform open-source chat models on most benchmarks we tested, and in our human evaluations for helpfulness and safety, are on par with some popular closed-source models like ChatGPT and PaLM.
**Model Developers** Meta
**Variations** Llama 2 comes in a range of parameter sizes — 7B, 13B, and 70B — as well as pretrained and fine-tuned variations.
**Input** Models input text only.
**Output** Models generate text only.
**Model Architecture** Llama 2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align to human preferences for helpfulness and safety.
||Training Data|Params|Content Length|GQA|Tokens|LR|
|---|---|---|---|---|---|---|
|Llama 2|*A new mix of publicly available online data*|7B|4k|&#10007;|2.0T|3.0 x 10<sup>-4</sup>|
|Llama 2|*A new mix of publicly available online data*|13B|4k|&#10007;|2.0T|3.0 x 10<sup>-4</sup>|
|Llama 2|*A new mix of publicly available online data*|70B|4k|&#10004;|2.0T|1.5 x 10<sup>-4</sup>|
*Llama 2 family of models.* Token counts refer to pretraining data only. All models are trained with a global batch size of 4M tokens. The bigger 70B model uses Grouped-Query Attention (GQA) for improved inference scalability.
**Model Dates** Llama 2 was trained between January 2023 and July 2023.
**Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.
**License** A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/)
## Intended Use
**Intended Use Cases** Llama 2 is intended for commercial and research use in English. Tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks.
To get the expected features and performance for the chat versions, specific formatting needs to be followed, including the `INST` and `<<SYS>>` tags, `BOS` and `EOS` tokens, and the whitespace and line breaks in between (we recommend calling `strip()` on inputs to avoid double spaces). See our reference code on GitHub for details: [`chat_completion`](https://github.com/facebookresearch/llama/blob/main/llama/generation.py#L212).
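
For illustration, a minimal sketch of that template for a single-turn prompt, following the tags used in the reference code (multi-turn dialogs interleave further `[INST] … [/INST]` segments between `BOS`/`EOS` pairs; the `build_prompt` helper is hypothetical, written only for this sketch):

```python
# Sketch of the Llama 2 chat template for a single-turn prompt, following the
# reference chat_completion code linked above. BOS/EOS tokens are added by the
# tokenizer / generation loop, not embedded in this string.
B_INST, E_INST = "[INST]", "[/INST]"
B_SYS, E_SYS = "<<SYS>>\n", "\n<</SYS>>\n\n"

def build_prompt(user_message, system_prompt=None):
    content = user_message.strip()
    if system_prompt is not None:
        content = B_SYS + system_prompt.strip() + E_SYS + content
    return f"{B_INST} {content} {E_INST}"

print(build_prompt("Write a haiku about llamas.", "You are a helpful assistant."))
```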
**Out-of-scope Uses** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Llama 2.
## Hardware and Software
**Training Factors** We used custom training libraries, Meta's Research Super Cluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.
**Carbon Footprint** Pretraining utilized a cumulative 3.3M GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W). Estimated total emissions were 539 tCO2eq, 100% of which were offset by Meta's sustainability program.
||Time (GPU hours)|Power Consumption (W)|Carbon Emitted (tCO<sub>2</sub>eq)|
|---|---|---|---|
|Llama 2 7B|184320|400|31.22|
|Llama 2 13B|368640|400|62.44|
|Llama 2 70B|1720320|400|291.42|
|Total|3311616||539.00|
**CO<sub>2</sub> emissions during pretraining.** Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.
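
As a rough cross-check of the table, the reported figures imply a carbon intensity of about 0.423 kgCO<sub>2</sub>eq per kWh (illustrative arithmetic only, not an upstream number):

```python
# Illustrative cross-check of the emissions table above.
gpu_hours = {"7B": 184_320, "13B": 368_640, "70B": 1_720_320}
power_w = 400
intensity = 0.423  # kgCO2eq per kWh, implied by the reported figures

for name, hours in gpu_hours.items():
    mwh = hours * power_w / 1e6
    print(f"Llama 2 {name}: {mwh:.0f} MWh ~ {mwh * intensity:.1f} tCO2eq")
```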
## Training Data
**Overview** Llama 2 was pretrained on 2 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over one million new human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.
**Data Freshness** The pretraining data has a cutoff of September 2022, but some tuning data is more recent, up to July 2023.
## Evaluation Results
In this section, we report the results for the Llama 1 and Llama 2 models on standard academic benchmarks. For all the evaluations, we use our internal evaluations library.
|Model|Size|Code|Commonsense Reasoning|World Knowledge|Reading Comprehension|Math|MMLU|BBH|AGI Eval|
|---|---|---|---|---|---|---|---|---|---|
|Llama 1|7B|14.1|60.8|46.2|58.5|6.95|35.1|30.3|23.9|
|Llama 1|13B|18.9|66.1|52.6|62.3|10.9|46.9|37.0|33.9|
|Llama 1|33B|26.0|70.0|58.4|67.6|21.4|57.8|39.8|41.7|
|Llama 1|65B|30.7|70.7|60.5|68.6|30.8|63.4|43.5|47.6|
|Llama 2|7B|16.8|63.9|48.9|61.3|14.6|45.3|32.6|29.3|
|Llama 2|13B|24.5|66.9|55.4|65.8|28.7|54.8|39.4|39.1|
|Llama 2|70B|**37.5**|**71.9**|**63.6**|**69.4**|**35.2**|**68.9**|**51.2**|**54.2**|
**Overall performance on grouped academic benchmarks.** *Code:* We report the average pass@1 scores of our models on HumanEval and MBPP. *Commonsense Reasoning:* We report the average of PIQA, SIQA, HellaSwag, WinoGrande, ARC easy and challenge, OpenBookQA, and CommonsenseQA. We report 7-shot results for CommonSenseQA and 0-shot results for all other benchmarks. *World Knowledge:* We evaluate the 5-shot performance on NaturalQuestions and TriviaQA and report the average. *Reading Comprehension:* For reading comprehension, we report the 0-shot average on SQuAD, QuAC, and BoolQ. *MATH:* We report the average of the GSM8K (8 shot) and MATH (4 shot) benchmarks at top 1.
|Model|Size|TruthfulQA|ToxiGen|
|---|---|---|---|
|Llama 1|7B|27.42|23.00|
|Llama 1|13B|41.74|23.08|
|Llama 1|33B|44.19|22.57|
|Llama 1|65B|48.71|21.77|
|Llama 2|7B|33.29|**21.25**|
|Llama 2|13B|41.86|26.10|
|Llama 2|70B|**50.18**|24.60|
**Evaluation of pretrained LLMs on automatic safety benchmarks.** For TruthfulQA, we present the percentage of generations that are both truthful and informative (the higher the better). For ToxiGen, we present the percentage of toxic generations (the smaller the better).
|Model|Size|TruthfulQA|ToxiGen|
|---|---|---|---|
|Llama-2-Chat|7B|57.04|**0.00**|
|Llama-2-Chat|13B|62.18|**0.00**|
|Llama-2-Chat|70B|**64.14**|0.01|
**Evaluation of fine-tuned LLMs on different safety datasets.** Same metric definitions as above.
## Ethical Considerations and Limitations
Llama 2 is a new technology that carries risks with use. Testing conducted to date has been in English and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 2's potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2, developers should perform safety testing and tuning tailored to their specific applications of the model.
Please see the Responsible Use Guide available at [https://ai.meta.com/llama/responsible-use-guide/](https://ai.meta.com/llama/responsible-use-guide/)
## Reporting Issues
Please report any software “bug,” or other problems with the models through one of the following means:
- Reporting issues with the model: [github.com/facebookresearch/llama](http://github.com/facebookresearch/llama)
- Reporting problematic content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)
- Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)
## Llama Model Index
|Model|Llama2|Llama2-hf|Llama2-chat|Llama2-chat-hf|
|---|---|---|---|---|
|7B| [Link](https://huggingface.co/meta-llama/Llama-2-7b) | [Link](https://huggingface.co/meta-llama/Llama-2-7b-hf) | [Link](https://huggingface.co/meta-llama/Llama-2-7b-chat) | [Link](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf)|
|13B| [Link](https://huggingface.co/meta-llama/Llama-2-13b) | [Link](https://huggingface.co/meta-llama/Llama-2-13b-hf) | [Link](https://huggingface.co/meta-llama/Llama-2-13b-chat) | [Link](https://huggingface.co/meta-llama/Llama-2-13b-chat-hf)|
|70B| [Link](https://huggingface.co/meta-llama/Llama-2-70b) | [Link](https://huggingface.co/meta-llama/Llama-2-70b-hf) | [Link](https://huggingface.co/meta-llama/Llama-2-70b-chat) | [Link](https://huggingface.co/meta-llama/Llama-2-70b-chat-hf)|

26
config.json Normal file

@@ -0,0 +1,26 @@
{
"_name_or_path": "sharpbai/Llama-2-7b-chat",
"architectures": [
"LlamaForCausalLM"
],
"bos_token_id": 1,
"eos_token_id": 2,
"hidden_act": "silu",
"hidden_size": 4096,
"initializer_range": 0.02,
"intermediate_size": 11008,
"max_position_embeddings": 4096,
"model_type": "llama",
"num_attention_heads": 32,
"num_hidden_layers": 32,
"num_key_value_heads": 32,
"pad_token_id": 0,
"pretraining_tp": 1,
"rms_norm_eps": 1e-06,
"rope_scaling": null,
"tie_word_embeddings": false,
"torch_dtype": "float16",
"transformers_version": "4.30.2",
"use_cache": true,
"vocab_size": 32000
}
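
As a sanity check, the hyperparameters above reproduce the advertised parameter count (illustrative arithmetic, not part of the upstream files):

```python
# Illustrative: rough parameter count implied by the config above.
hidden, inter, layers, vocab = 4096, 11008, 32, 32000

attn = 4 * hidden * hidden          # q_proj, k_proj, v_proj, o_proj
mlp = 3 * hidden * inter            # gate_proj, up_proj, down_proj
norms = 2 * hidden                  # input + post-attention RMSNorm
embeddings = 2 * vocab * hidden     # embed_tokens + lm_head (untied here)

total = layers * (attn + mlp + norms) + embeddings + hidden  # + final norm
print(f"{total / 1e9:.2f}B parameters")  # ~6.74B, i.e. the "7B" model
```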

10
generation_config.json Normal file

@@ -0,0 +1,10 @@
{
"_from_model_config": true,
"bos_token_id": 1,
"eos_token_id": 2,
"max_length": 4096,
"pad_token_id": 0,
"temperature": 0.9,
"top_p": 0.6,
"transformers_version": "4.30.2"
}
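
These sampling defaults (`temperature=0.9`, `top_p=0.6`) travel with the model and are applied by `generate()`; a hedged sketch of setting them explicitly per call, reusing `model` and `tokenizer` from the README example above:

```python
# Sketch: the defaults above are loaded with the model but can be overridden
# per call. Reuses `model` and `tokenizer` from the README example.
inputs = tokenizer("[INST] Tell me a joke. [/INST]", return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    do_sample=True,    # sampling must be on for temperature/top_p to take effect
    temperature=0.9,   # matches generation_config.json
    top_p=0.6,         # matches generation_config.json
    max_new_tokens=128,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```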

3
pytorch_model-00001-of-00034.bin Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:2b567345b37554b3e84c84587d3d66b97f1d4e2954399dae288cb9fe92db7e0e
size 396364479
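
Each of the 34 `pytorch_model-*-of-00034.bin` entries in this commit is a Git LFS pointer rather than the weights themselves: three lines carrying the LFS spec version, the SHA-256 of the real blob, and its size in bytes. A minimal illustrative parser for that format (the `parse_lfs_pointer` helper is hypothetical, written only for this sketch):

```python
# Minimal sketch: parse a Git LFS pointer file of the three-line form above.
def parse_lfs_pointer(text: str) -> dict:
    fields = dict(line.split(" ", 1) for line in text.strip().splitlines())
    return {
        "version": fields["version"],                     # LFS spec URL
        "sha256": fields["oid"].removeprefix("sha256:"),  # hash of the real blob
        "size": int(fields["size"]),                      # blob size in bytes
    }

pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:2b567345b37554b3e84c84587d3d66b97f1d4e2954399dae288cb9fe92db7e0e
size 396364479"""
print(parse_lfs_pointer(pointer))  # ~396 MB first shard
```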

3
pytorch_model-00002-of-00034.bin Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:040b32d42253cfd2f94a000ea4563a12d9bcb2fe9b30b71ccb03d6b37ecd2b6d
size 404770755

3
pytorch_model-00003-of-00034.bin Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:0c5f81d4a90d026761a95a9a95883e630bdaa6c4f3a3ae4fdb2c314d8da6e552
size 404770755

3
pytorch_model-00004-of-00034.bin Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:6469de41625275e6fbc42e2b7e1c13c62777694e28321d7d4bb2c5c0c668af30
size 404770755

3
pytorch_model-00005-of-00034.bin Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:44f87b5e389c534ab7687511f218afa794a0c1eb6cd10cce413fbdc3b82b53ad
size 404770755

3
pytorch_model-00006-of-00034.bin Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ef32773b02b2bac442b16dcc75ce8518816e9200330f5c4c9d2684a662e33a2c
size 404770755

3
pytorch_model-00007-of-00034.bin Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c5b3b34f6c7533cefc5eec6dfb41d27a15db68279c72322c9f2658a63f86761e
size 404770755

3
pytorch_model-00008-of-00034.bin Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:9742a8f3b8d673369c39c2f1b919ee38fb9a947ad046771c35bd04754b63e12d
size 404770755

3
pytorch_model-00009-of-00034.bin Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:1ddfefe003828b77b667494671432b83258be6a5ad4e22e46f6e149cc2d755fd
size 404770755

3
pytorch_model-00010-of-00034.bin Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:1d4d082b37ab603d1101a5483fbb3f89ab3ca9390bbcaa03b9d0b2fb47754100
size 404770755

3
pytorch_model-00011-of-00034.bin Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:1510f74784fae8ec105fb21d6ccdd965eb8651ea31801f7303e52f821f883820
size 404770755

3
pytorch_model-00012-of-00034.bin Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:821dd9641581709bd132fcaa2f182950030a06431760f567134e12a6d26d407b
size 404770755

3
pytorch_model-00013-of-00034.bin Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e9a7f78aa05cf623ea67404b1dcfcce60e263c558fdaf5261d001f073480659c
size 404770755

3
pytorch_model-00014-of-00034.bin Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:6c9d5742c113fcd492289fc487d072e7ccbc55662303165c0dbdb1ded84445d1
size 404770755

3
pytorch_model-00015-of-00034.bin Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e37c1654eef3ba8381a9e23a5d737e4a87a9730fe627d92fc03470b5dde81ea4
size 404770755

3
pytorch_model-00016-of-00034.bin Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f76680ba82e80bb4d301106f0f788c8250e67c4c30137d7ede867a56f558f138
size 404770755

3
pytorch_model-00017-of-00034.bin Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:52a8e15c1b40613e41b54933af1c1a4ecb454739d6f212cebb58a9b972eb9d95
size 404770755

3
pytorch_model-00018-of-00034.bin Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:1c9b39cfdfc2c1a24f0c89b99f70058fc5bc0ee34ae2ff9ef12ccbc36341dd5e
size 404770755

3
pytorch_model-00019-of-00034.bin Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:1c12e8c6480efef5ac094d3cc02bdca5d19796d931676f26776cbf2e678d6ad1
size 404770755

3
pytorch_model-00020-of-00034.bin Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:087b16b7d96a97a4ca23c4e6ba213a46db5310a3b510112366d268ce0db4de9f
size 404770755

3
pytorch_model-00021-of-00034.bin Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:1d01fc9179c646fd2105fa96a31ed81b906a2aa0eb8ab5f545ceffe7d3e459b9
size 404770755

3
pytorch_model-00022-of-00034.bin Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:b43e8290f4de9cd832a38955ebcfcedc15e88a6159ab94d6806f6442594aa5d9
size 404770755

3
pytorch_model-00023-of-00034.bin Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ef72145ac2dee9b13a376cf8b4bb73639b80662505b3fa65bb53fb0c9af4dd4c
size 404770755

3
pytorch_model-00024-of-00034.bin Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:2a28ef5e9ca41670e6af69197e3bc1d13c14cfb7525f8d3eea8bb22b1d4b1055
size 404770755

3
pytorch_model-00025-of-00034.bin Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:fcc1a666e55284023cde1dc4f2bcbade5ef13633590a398ef116c67103780d99
size 404770755

3
pytorch_model-00026-of-00034.bin Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:36e3dd91f25ee2668c8fa8ac55ce625a8a17a1a9aed380f8acc0651712362e4c
size 404770755

3
pytorch_model-00027-of-00034.bin Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:bb803caeebceae2e4d83cfef61571253ae1edf5de48b4c8523a12a66ce3205d2
size 404770755

3
pytorch_model-00028-of-00034.bin Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:79271e20de9ec578c931a68f8e3455ff684a31acb6f82ed3eac5a2ca9821baff
size 404770755

3
pytorch_model-00029-of-00034.bin Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:33dfa6c15f0ca81a2e0d8b7e733790fd4dc164ad62440baf45e745da3a8d03d8
size 404770755

3
pytorch_model-00030-of-00034.bin Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:919fc5b03034f78134a50cde7a0cb4493baf8c0f5ad7dcdeba5313059f852538
size 404770755

3
pytorch_model-00031-of-00034.bin Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:bb6f00d3914045fccf9c04e5bf16796509af3641cf233457d1da531dab783a91
size 404770755

3
pytorch_model-00032-of-00034.bin Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d1ec1d12eb55e8cd66197aa46f09076ef1e0af09a1ab5d8743e4362bb59234c1
size 404770755

3
pytorch_model-00033-of-00034.bin Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:42d5976ecd4c3cd7500ffdad1d9660f46a4830b09739cefddae50db070153da0
size 270559679

3
pytorch_model-00034-of-00034.bin Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c379ef15678b8f07be84d699f9970e1be9cc99135b32c56420f1ab3363e3deee
size 262144938

330
pytorch_model.bin.index.json Normal file

@@ -0,0 +1,330 @@
{
"metadata": {
"total_size": 13476839424
},
"weight_map": {
"lm_head.weight": "pytorch_model-00034-of-00034.bin",
"model.embed_tokens.weight": "pytorch_model-00001-of-00034.bin",
"model.layers.0.input_layernorm.weight": "pytorch_model-00002-of-00034.bin",
"model.layers.0.mlp.down_proj.weight": "pytorch_model-00002-of-00034.bin",
"model.layers.0.mlp.gate_proj.weight": "pytorch_model-00002-of-00034.bin",
"model.layers.0.mlp.up_proj.weight": "pytorch_model-00002-of-00034.bin",
"model.layers.0.post_attention_layernorm.weight": "pytorch_model-00002-of-00034.bin",
"model.layers.0.self_attn.k_proj.weight": "pytorch_model-00001-of-00034.bin",
"model.layers.0.self_attn.o_proj.weight": "pytorch_model-00001-of-00034.bin",
"model.layers.0.self_attn.q_proj.weight": "pytorch_model-00001-of-00034.bin",
"model.layers.0.self_attn.rotary_emb.inv_freq": "pytorch_model-00001-of-00034.bin",
"model.layers.0.self_attn.v_proj.weight": "pytorch_model-00001-of-00034.bin",
"model.layers.1.input_layernorm.weight": "pytorch_model-00003-of-00034.bin",
"model.layers.1.mlp.down_proj.weight": "pytorch_model-00003-of-00034.bin",
"model.layers.1.mlp.gate_proj.weight": "pytorch_model-00003-of-00034.bin",
"model.layers.1.mlp.up_proj.weight": "pytorch_model-00003-of-00034.bin",
"model.layers.1.post_attention_layernorm.weight": "pytorch_model-00003-of-00034.bin",
"model.layers.1.self_attn.k_proj.weight": "pytorch_model-00002-of-00034.bin",
"model.layers.1.self_attn.o_proj.weight": "pytorch_model-00002-of-00034.bin",
"model.layers.1.self_attn.q_proj.weight": "pytorch_model-00002-of-00034.bin",
"model.layers.1.self_attn.rotary_emb.inv_freq": "pytorch_model-00002-of-00034.bin",
"model.layers.1.self_attn.v_proj.weight": "pytorch_model-00002-of-00034.bin",
"model.layers.10.input_layernorm.weight": "pytorch_model-00012-of-00034.bin",
"model.layers.10.mlp.down_proj.weight": "pytorch_model-00012-of-00034.bin",
"model.layers.10.mlp.gate_proj.weight": "pytorch_model-00012-of-00034.bin",
"model.layers.10.mlp.up_proj.weight": "pytorch_model-00012-of-00034.bin",
"model.layers.10.post_attention_layernorm.weight": "pytorch_model-00012-of-00034.bin",
"model.layers.10.self_attn.k_proj.weight": "pytorch_model-00011-of-00034.bin",
"model.layers.10.self_attn.o_proj.weight": "pytorch_model-00011-of-00034.bin",
"model.layers.10.self_attn.q_proj.weight": "pytorch_model-00011-of-00034.bin",
"model.layers.10.self_attn.rotary_emb.inv_freq": "pytorch_model-00011-of-00034.bin",
"model.layers.10.self_attn.v_proj.weight": "pytorch_model-00011-of-00034.bin",
"model.layers.11.input_layernorm.weight": "pytorch_model-00013-of-00034.bin",
"model.layers.11.mlp.down_proj.weight": "pytorch_model-00013-of-00034.bin",
"model.layers.11.mlp.gate_proj.weight": "pytorch_model-00013-of-00034.bin",
"model.layers.11.mlp.up_proj.weight": "pytorch_model-00013-of-00034.bin",
"model.layers.11.post_attention_layernorm.weight": "pytorch_model-00013-of-00034.bin",
"model.layers.11.self_attn.k_proj.weight": "pytorch_model-00012-of-00034.bin",
"model.layers.11.self_attn.o_proj.weight": "pytorch_model-00012-of-00034.bin",
"model.layers.11.self_attn.q_proj.weight": "pytorch_model-00012-of-00034.bin",
"model.layers.11.self_attn.rotary_emb.inv_freq": "pytorch_model-00012-of-00034.bin",
"model.layers.11.self_attn.v_proj.weight": "pytorch_model-00012-of-00034.bin",
"model.layers.12.input_layernorm.weight": "pytorch_model-00014-of-00034.bin",
"model.layers.12.mlp.down_proj.weight": "pytorch_model-00014-of-00034.bin",
"model.layers.12.mlp.gate_proj.weight": "pytorch_model-00014-of-00034.bin",
"model.layers.12.mlp.up_proj.weight": "pytorch_model-00014-of-00034.bin",
"model.layers.12.post_attention_layernorm.weight": "pytorch_model-00014-of-00034.bin",
"model.layers.12.self_attn.k_proj.weight": "pytorch_model-00013-of-00034.bin",
"model.layers.12.self_attn.o_proj.weight": "pytorch_model-00013-of-00034.bin",
"model.layers.12.self_attn.q_proj.weight": "pytorch_model-00013-of-00034.bin",
"model.layers.12.self_attn.rotary_emb.inv_freq": "pytorch_model-00013-of-00034.bin",
"model.layers.12.self_attn.v_proj.weight": "pytorch_model-00013-of-00034.bin",
"model.layers.13.input_layernorm.weight": "pytorch_model-00015-of-00034.bin",
"model.layers.13.mlp.down_proj.weight": "pytorch_model-00015-of-00034.bin",
"model.layers.13.mlp.gate_proj.weight": "pytorch_model-00015-of-00034.bin",
"model.layers.13.mlp.up_proj.weight": "pytorch_model-00015-of-00034.bin",
"model.layers.13.post_attention_layernorm.weight": "pytorch_model-00015-of-00034.bin",
"model.layers.13.self_attn.k_proj.weight": "pytorch_model-00014-of-00034.bin",
"model.layers.13.self_attn.o_proj.weight": "pytorch_model-00014-of-00034.bin",
"model.layers.13.self_attn.q_proj.weight": "pytorch_model-00014-of-00034.bin",
"model.layers.13.self_attn.rotary_emb.inv_freq": "pytorch_model-00014-of-00034.bin",
"model.layers.13.self_attn.v_proj.weight": "pytorch_model-00014-of-00034.bin",
"model.layers.14.input_layernorm.weight": "pytorch_model-00016-of-00034.bin",
"model.layers.14.mlp.down_proj.weight": "pytorch_model-00016-of-00034.bin",
"model.layers.14.mlp.gate_proj.weight": "pytorch_model-00016-of-00034.bin",
"model.layers.14.mlp.up_proj.weight": "pytorch_model-00016-of-00034.bin",
"model.layers.14.post_attention_layernorm.weight": "pytorch_model-00016-of-00034.bin",
"model.layers.14.self_attn.k_proj.weight": "pytorch_model-00015-of-00034.bin",
"model.layers.14.self_attn.o_proj.weight": "pytorch_model-00015-of-00034.bin",
"model.layers.14.self_attn.q_proj.weight": "pytorch_model-00015-of-00034.bin",
"model.layers.14.self_attn.rotary_emb.inv_freq": "pytorch_model-00015-of-00034.bin",
"model.layers.14.self_attn.v_proj.weight": "pytorch_model-00015-of-00034.bin",
"model.layers.15.input_layernorm.weight": "pytorch_model-00017-of-00034.bin",
"model.layers.15.mlp.down_proj.weight": "pytorch_model-00017-of-00034.bin",
"model.layers.15.mlp.gate_proj.weight": "pytorch_model-00017-of-00034.bin",
"model.layers.15.mlp.up_proj.weight": "pytorch_model-00017-of-00034.bin",
"model.layers.15.post_attention_layernorm.weight": "pytorch_model-00017-of-00034.bin",
"model.layers.15.self_attn.k_proj.weight": "pytorch_model-00016-of-00034.bin",
"model.layers.15.self_attn.o_proj.weight": "pytorch_model-00016-of-00034.bin",
"model.layers.15.self_attn.q_proj.weight": "pytorch_model-00016-of-00034.bin",
"model.layers.15.self_attn.rotary_emb.inv_freq": "pytorch_model-00016-of-00034.bin",
"model.layers.15.self_attn.v_proj.weight": "pytorch_model-00016-of-00034.bin",
"model.layers.16.input_layernorm.weight": "pytorch_model-00018-of-00034.bin",
"model.layers.16.mlp.down_proj.weight": "pytorch_model-00018-of-00034.bin",
"model.layers.16.mlp.gate_proj.weight": "pytorch_model-00018-of-00034.bin",
"model.layers.16.mlp.up_proj.weight": "pytorch_model-00018-of-00034.bin",
"model.layers.16.post_attention_layernorm.weight": "pytorch_model-00018-of-00034.bin",
"model.layers.16.self_attn.k_proj.weight": "pytorch_model-00017-of-00034.bin",
"model.layers.16.self_attn.o_proj.weight": "pytorch_model-00017-of-00034.bin",
"model.layers.16.self_attn.q_proj.weight": "pytorch_model-00017-of-00034.bin",
"model.layers.16.self_attn.rotary_emb.inv_freq": "pytorch_model-00017-of-00034.bin",
"model.layers.16.self_attn.v_proj.weight": "pytorch_model-00017-of-00034.bin",
"model.layers.17.input_layernorm.weight": "pytorch_model-00019-of-00034.bin",
"model.layers.17.mlp.down_proj.weight": "pytorch_model-00019-of-00034.bin",
"model.layers.17.mlp.gate_proj.weight": "pytorch_model-00019-of-00034.bin",
"model.layers.17.mlp.up_proj.weight": "pytorch_model-00019-of-00034.bin",
"model.layers.17.post_attention_layernorm.weight": "pytorch_model-00019-of-00034.bin",
"model.layers.17.self_attn.k_proj.weight": "pytorch_model-00018-of-00034.bin",
"model.layers.17.self_attn.o_proj.weight": "pytorch_model-00018-of-00034.bin",
"model.layers.17.self_attn.q_proj.weight": "pytorch_model-00018-of-00034.bin",
"model.layers.17.self_attn.rotary_emb.inv_freq": "pytorch_model-00018-of-00034.bin",
"model.layers.17.self_attn.v_proj.weight": "pytorch_model-00018-of-00034.bin",
"model.layers.18.input_layernorm.weight": "pytorch_model-00020-of-00034.bin",
"model.layers.18.mlp.down_proj.weight": "pytorch_model-00020-of-00034.bin",
"model.layers.18.mlp.gate_proj.weight": "pytorch_model-00020-of-00034.bin",
"model.layers.18.mlp.up_proj.weight": "pytorch_model-00020-of-00034.bin",
"model.layers.18.post_attention_layernorm.weight": "pytorch_model-00020-of-00034.bin",
"model.layers.18.self_attn.k_proj.weight": "pytorch_model-00019-of-00034.bin",
"model.layers.18.self_attn.o_proj.weight": "pytorch_model-00019-of-00034.bin",
"model.layers.18.self_attn.q_proj.weight": "pytorch_model-00019-of-00034.bin",
"model.layers.18.self_attn.rotary_emb.inv_freq": "pytorch_model-00019-of-00034.bin",
"model.layers.18.self_attn.v_proj.weight": "pytorch_model-00019-of-00034.bin",
"model.layers.19.input_layernorm.weight": "pytorch_model-00021-of-00034.bin",
"model.layers.19.mlp.down_proj.weight": "pytorch_model-00021-of-00034.bin",
"model.layers.19.mlp.gate_proj.weight": "pytorch_model-00021-of-00034.bin",
"model.layers.19.mlp.up_proj.weight": "pytorch_model-00021-of-00034.bin",
"model.layers.19.post_attention_layernorm.weight": "pytorch_model-00021-of-00034.bin",
"model.layers.19.self_attn.k_proj.weight": "pytorch_model-00020-of-00034.bin",
"model.layers.19.self_attn.o_proj.weight": "pytorch_model-00020-of-00034.bin",
"model.layers.19.self_attn.q_proj.weight": "pytorch_model-00020-of-00034.bin",
"model.layers.19.self_attn.rotary_emb.inv_freq": "pytorch_model-00020-of-00034.bin",
"model.layers.19.self_attn.v_proj.weight": "pytorch_model-00020-of-00034.bin",
"model.layers.2.input_layernorm.weight": "pytorch_model-00004-of-00034.bin",
"model.layers.2.mlp.down_proj.weight": "pytorch_model-00004-of-00034.bin",
"model.layers.2.mlp.gate_proj.weight": "pytorch_model-00004-of-00034.bin",
"model.layers.2.mlp.up_proj.weight": "pytorch_model-00004-of-00034.bin",
"model.layers.2.post_attention_layernorm.weight": "pytorch_model-00004-of-00034.bin",
"model.layers.2.self_attn.k_proj.weight": "pytorch_model-00003-of-00034.bin",
"model.layers.2.self_attn.o_proj.weight": "pytorch_model-00003-of-00034.bin",
"model.layers.2.self_attn.q_proj.weight": "pytorch_model-00003-of-00034.bin",
"model.layers.2.self_attn.rotary_emb.inv_freq": "pytorch_model-00003-of-00034.bin",
"model.layers.2.self_attn.v_proj.weight": "pytorch_model-00003-of-00034.bin",
"model.layers.20.input_layernorm.weight": "pytorch_model-00022-of-00034.bin",
"model.layers.20.mlp.down_proj.weight": "pytorch_model-00022-of-00034.bin",
"model.layers.20.mlp.gate_proj.weight": "pytorch_model-00022-of-00034.bin",
"model.layers.20.mlp.up_proj.weight": "pytorch_model-00022-of-00034.bin",
"model.layers.20.post_attention_layernorm.weight": "pytorch_model-00022-of-00034.bin",
"model.layers.20.self_attn.k_proj.weight": "pytorch_model-00021-of-00034.bin",
"model.layers.20.self_attn.o_proj.weight": "pytorch_model-00021-of-00034.bin",
"model.layers.20.self_attn.q_proj.weight": "pytorch_model-00021-of-00034.bin",
"model.layers.20.self_attn.rotary_emb.inv_freq": "pytorch_model-00021-of-00034.bin",
"model.layers.20.self_attn.v_proj.weight": "pytorch_model-00021-of-00034.bin",
"model.layers.21.input_layernorm.weight": "pytorch_model-00023-of-00034.bin",
"model.layers.21.mlp.down_proj.weight": "pytorch_model-00023-of-00034.bin",
"model.layers.21.mlp.gate_proj.weight": "pytorch_model-00023-of-00034.bin",
"model.layers.21.mlp.up_proj.weight": "pytorch_model-00023-of-00034.bin",
"model.layers.21.post_attention_layernorm.weight": "pytorch_model-00023-of-00034.bin",
"model.layers.21.self_attn.k_proj.weight": "pytorch_model-00022-of-00034.bin",
"model.layers.21.self_attn.o_proj.weight": "pytorch_model-00022-of-00034.bin",
"model.layers.21.self_attn.q_proj.weight": "pytorch_model-00022-of-00034.bin",
"model.layers.21.self_attn.rotary_emb.inv_freq": "pytorch_model-00022-of-00034.bin",
"model.layers.21.self_attn.v_proj.weight": "pytorch_model-00022-of-00034.bin",
"model.layers.22.input_layernorm.weight": "pytorch_model-00024-of-00034.bin",
"model.layers.22.mlp.down_proj.weight": "pytorch_model-00024-of-00034.bin",
"model.layers.22.mlp.gate_proj.weight": "pytorch_model-00024-of-00034.bin",
"model.layers.22.mlp.up_proj.weight": "pytorch_model-00024-of-00034.bin",
"model.layers.22.post_attention_layernorm.weight": "pytorch_model-00024-of-00034.bin",
"model.layers.22.self_attn.k_proj.weight": "pytorch_model-00023-of-00034.bin",
"model.layers.22.self_attn.o_proj.weight": "pytorch_model-00023-of-00034.bin",
"model.layers.22.self_attn.q_proj.weight": "pytorch_model-00023-of-00034.bin",
"model.layers.22.self_attn.rotary_emb.inv_freq": "pytorch_model-00023-of-00034.bin",
"model.layers.22.self_attn.v_proj.weight": "pytorch_model-00023-of-00034.bin",
"model.layers.23.input_layernorm.weight": "pytorch_model-00025-of-00034.bin",
"model.layers.23.mlp.down_proj.weight": "pytorch_model-00025-of-00034.bin",
"model.layers.23.mlp.gate_proj.weight": "pytorch_model-00025-of-00034.bin",
"model.layers.23.mlp.up_proj.weight": "pytorch_model-00025-of-00034.bin",
"model.layers.23.post_attention_layernorm.weight": "pytorch_model-00025-of-00034.bin",
"model.layers.23.self_attn.k_proj.weight": "pytorch_model-00024-of-00034.bin",
"model.layers.23.self_attn.o_proj.weight": "pytorch_model-00024-of-00034.bin",
"model.layers.23.self_attn.q_proj.weight": "pytorch_model-00024-of-00034.bin",
"model.layers.23.self_attn.rotary_emb.inv_freq": "pytorch_model-00024-of-00034.bin",
"model.layers.23.self_attn.v_proj.weight": "pytorch_model-00024-of-00034.bin",
"model.layers.24.input_layernorm.weight": "pytorch_model-00026-of-00034.bin",
"model.layers.24.mlp.down_proj.weight": "pytorch_model-00026-of-00034.bin",
"model.layers.24.mlp.gate_proj.weight": "pytorch_model-00026-of-00034.bin",
"model.layers.24.mlp.up_proj.weight": "pytorch_model-00026-of-00034.bin",
"model.layers.24.post_attention_layernorm.weight": "pytorch_model-00026-of-00034.bin",
"model.layers.24.self_attn.k_proj.weight": "pytorch_model-00025-of-00034.bin",
"model.layers.24.self_attn.o_proj.weight": "pytorch_model-00025-of-00034.bin",
"model.layers.24.self_attn.q_proj.weight": "pytorch_model-00025-of-00034.bin",
"model.layers.24.self_attn.rotary_emb.inv_freq": "pytorch_model-00025-of-00034.bin",
"model.layers.24.self_attn.v_proj.weight": "pytorch_model-00025-of-00034.bin",
"model.layers.25.input_layernorm.weight": "pytorch_model-00027-of-00034.bin",
"model.layers.25.mlp.down_proj.weight": "pytorch_model-00027-of-00034.bin",
"model.layers.25.mlp.gate_proj.weight": "pytorch_model-00027-of-00034.bin",
"model.layers.25.mlp.up_proj.weight": "pytorch_model-00027-of-00034.bin",
"model.layers.25.post_attention_layernorm.weight": "pytorch_model-00027-of-00034.bin",
"model.layers.25.self_attn.k_proj.weight": "pytorch_model-00026-of-00034.bin",
"model.layers.25.self_attn.o_proj.weight": "pytorch_model-00026-of-00034.bin",
"model.layers.25.self_attn.q_proj.weight": "pytorch_model-00026-of-00034.bin",
"model.layers.25.self_attn.rotary_emb.inv_freq": "pytorch_model-00026-of-00034.bin",
"model.layers.25.self_attn.v_proj.weight": "pytorch_model-00026-of-00034.bin",
"model.layers.26.input_layernorm.weight": "pytorch_model-00028-of-00034.bin",
"model.layers.26.mlp.down_proj.weight": "pytorch_model-00028-of-00034.bin",
"model.layers.26.mlp.gate_proj.weight": "pytorch_model-00028-of-00034.bin",
"model.layers.26.mlp.up_proj.weight": "pytorch_model-00028-of-00034.bin",
"model.layers.26.post_attention_layernorm.weight": "pytorch_model-00028-of-00034.bin",
"model.layers.26.self_attn.k_proj.weight": "pytorch_model-00027-of-00034.bin",
"model.layers.26.self_attn.o_proj.weight": "pytorch_model-00027-of-00034.bin",
"model.layers.26.self_attn.q_proj.weight": "pytorch_model-00027-of-00034.bin",
"model.layers.26.self_attn.rotary_emb.inv_freq": "pytorch_model-00027-of-00034.bin",
"model.layers.26.self_attn.v_proj.weight": "pytorch_model-00027-of-00034.bin",
"model.layers.27.input_layernorm.weight": "pytorch_model-00029-of-00034.bin",
"model.layers.27.mlp.down_proj.weight": "pytorch_model-00029-of-00034.bin",
"model.layers.27.mlp.gate_proj.weight": "pytorch_model-00029-of-00034.bin",
"model.layers.27.mlp.up_proj.weight": "pytorch_model-00029-of-00034.bin",
"model.layers.27.post_attention_layernorm.weight": "pytorch_model-00029-of-00034.bin",
"model.layers.27.self_attn.k_proj.weight": "pytorch_model-00028-of-00034.bin",
"model.layers.27.self_attn.o_proj.weight": "pytorch_model-00028-of-00034.bin",
"model.layers.27.self_attn.q_proj.weight": "pytorch_model-00028-of-00034.bin",
"model.layers.27.self_attn.rotary_emb.inv_freq": "pytorch_model-00028-of-00034.bin",
"model.layers.27.self_attn.v_proj.weight": "pytorch_model-00028-of-00034.bin",
"model.layers.28.input_layernorm.weight": "pytorch_model-00030-of-00034.bin",
"model.layers.28.mlp.down_proj.weight": "pytorch_model-00030-of-00034.bin",
"model.layers.28.mlp.gate_proj.weight": "pytorch_model-00030-of-00034.bin",
"model.layers.28.mlp.up_proj.weight": "pytorch_model-00030-of-00034.bin",
"model.layers.28.post_attention_layernorm.weight": "pytorch_model-00030-of-00034.bin",
"model.layers.28.self_attn.k_proj.weight": "pytorch_model-00029-of-00034.bin",
"model.layers.28.self_attn.o_proj.weight": "pytorch_model-00029-of-00034.bin",
"model.layers.28.self_attn.q_proj.weight": "pytorch_model-00029-of-00034.bin",
"model.layers.28.self_attn.rotary_emb.inv_freq": "pytorch_model-00029-of-00034.bin",
"model.layers.28.self_attn.v_proj.weight": "pytorch_model-00029-of-00034.bin",
"model.layers.29.input_layernorm.weight": "pytorch_model-00031-of-00034.bin",
"model.layers.29.mlp.down_proj.weight": "pytorch_model-00031-of-00034.bin",
"model.layers.29.mlp.gate_proj.weight": "pytorch_model-00031-of-00034.bin",
"model.layers.29.mlp.up_proj.weight": "pytorch_model-00031-of-00034.bin",
"model.layers.29.post_attention_layernorm.weight": "pytorch_model-00031-of-00034.bin",
"model.layers.29.self_attn.k_proj.weight": "pytorch_model-00030-of-00034.bin",
"model.layers.29.self_attn.o_proj.weight": "pytorch_model-00030-of-00034.bin",
"model.layers.29.self_attn.q_proj.weight": "pytorch_model-00030-of-00034.bin",
"model.layers.29.self_attn.rotary_emb.inv_freq": "pytorch_model-00030-of-00034.bin",
"model.layers.29.self_attn.v_proj.weight": "pytorch_model-00030-of-00034.bin",
"model.layers.3.input_layernorm.weight": "pytorch_model-00005-of-00034.bin",
"model.layers.3.mlp.down_proj.weight": "pytorch_model-00005-of-00034.bin",
"model.layers.3.mlp.gate_proj.weight": "pytorch_model-00005-of-00034.bin",
"model.layers.3.mlp.up_proj.weight": "pytorch_model-00005-of-00034.bin",
"model.layers.3.post_attention_layernorm.weight": "pytorch_model-00005-of-00034.bin",
"model.layers.3.self_attn.k_proj.weight": "pytorch_model-00004-of-00034.bin",
"model.layers.3.self_attn.o_proj.weight": "pytorch_model-00004-of-00034.bin",
"model.layers.3.self_attn.q_proj.weight": "pytorch_model-00004-of-00034.bin",
"model.layers.3.self_attn.rotary_emb.inv_freq": "pytorch_model-00004-of-00034.bin",
"model.layers.3.self_attn.v_proj.weight": "pytorch_model-00004-of-00034.bin",
"model.layers.30.input_layernorm.weight": "pytorch_model-00032-of-00034.bin",
"model.layers.30.mlp.down_proj.weight": "pytorch_model-00032-of-00034.bin",
"model.layers.30.mlp.gate_proj.weight": "pytorch_model-00032-of-00034.bin",
"model.layers.30.mlp.up_proj.weight": "pytorch_model-00032-of-00034.bin",
"model.layers.30.post_attention_layernorm.weight": "pytorch_model-00032-of-00034.bin",
"model.layers.30.self_attn.k_proj.weight": "pytorch_model-00031-of-00034.bin",
"model.layers.30.self_attn.o_proj.weight": "pytorch_model-00031-of-00034.bin",
"model.layers.30.self_attn.q_proj.weight": "pytorch_model-00031-of-00034.bin",
"model.layers.30.self_attn.rotary_emb.inv_freq": "pytorch_model-00031-of-00034.bin",
"model.layers.30.self_attn.v_proj.weight": "pytorch_model-00031-of-00034.bin",
"model.layers.31.input_layernorm.weight": "pytorch_model-00033-of-00034.bin",
"model.layers.31.mlp.down_proj.weight": "pytorch_model-00033-of-00034.bin",
"model.layers.31.mlp.gate_proj.weight": "pytorch_model-00033-of-00034.bin",
"model.layers.31.mlp.up_proj.weight": "pytorch_model-00033-of-00034.bin",
"model.layers.31.post_attention_layernorm.weight": "pytorch_model-00033-of-00034.bin",
"model.layers.31.self_attn.k_proj.weight": "pytorch_model-00032-of-00034.bin",
"model.layers.31.self_attn.o_proj.weight": "pytorch_model-00032-of-00034.bin",
"model.layers.31.self_attn.q_proj.weight": "pytorch_model-00032-of-00034.bin",
"model.layers.31.self_attn.rotary_emb.inv_freq": "pytorch_model-00032-of-00034.bin",
"model.layers.31.self_attn.v_proj.weight": "pytorch_model-00032-of-00034.bin",
"model.layers.4.input_layernorm.weight": "pytorch_model-00006-of-00034.bin",
"model.layers.4.mlp.down_proj.weight": "pytorch_model-00006-of-00034.bin",
"model.layers.4.mlp.gate_proj.weight": "pytorch_model-00006-of-00034.bin",
"model.layers.4.mlp.up_proj.weight": "pytorch_model-00006-of-00034.bin",
"model.layers.4.post_attention_layernorm.weight": "pytorch_model-00006-of-00034.bin",
"model.layers.4.self_attn.k_proj.weight": "pytorch_model-00005-of-00034.bin",
"model.layers.4.self_attn.o_proj.weight": "pytorch_model-00005-of-00034.bin",
"model.layers.4.self_attn.q_proj.weight": "pytorch_model-00005-of-00034.bin",
"model.layers.4.self_attn.rotary_emb.inv_freq": "pytorch_model-00005-of-00034.bin",
"model.layers.4.self_attn.v_proj.weight": "pytorch_model-00005-of-00034.bin",
"model.layers.5.input_layernorm.weight": "pytorch_model-00007-of-00034.bin",
"model.layers.5.mlp.down_proj.weight": "pytorch_model-00007-of-00034.bin",
"model.layers.5.mlp.gate_proj.weight": "pytorch_model-00007-of-00034.bin",
"model.layers.5.mlp.up_proj.weight": "pytorch_model-00007-of-00034.bin",
"model.layers.5.post_attention_layernorm.weight": "pytorch_model-00007-of-00034.bin",
"model.layers.5.self_attn.k_proj.weight": "pytorch_model-00006-of-00034.bin",
"model.layers.5.self_attn.o_proj.weight": "pytorch_model-00006-of-00034.bin",
"model.layers.5.self_attn.q_proj.weight": "pytorch_model-00006-of-00034.bin",
"model.layers.5.self_attn.rotary_emb.inv_freq": "pytorch_model-00006-of-00034.bin",
"model.layers.5.self_attn.v_proj.weight": "pytorch_model-00006-of-00034.bin",
"model.layers.6.input_layernorm.weight": "pytorch_model-00008-of-00034.bin",
"model.layers.6.mlp.down_proj.weight": "pytorch_model-00008-of-00034.bin",
"model.layers.6.mlp.gate_proj.weight": "pytorch_model-00008-of-00034.bin",
"model.layers.6.mlp.up_proj.weight": "pytorch_model-00008-of-00034.bin",
"model.layers.6.post_attention_layernorm.weight": "pytorch_model-00008-of-00034.bin",
"model.layers.6.self_attn.k_proj.weight": "pytorch_model-00007-of-00034.bin",
"model.layers.6.self_attn.o_proj.weight": "pytorch_model-00007-of-00034.bin",
"model.layers.6.self_attn.q_proj.weight": "pytorch_model-00007-of-00034.bin",
"model.layers.6.self_attn.rotary_emb.inv_freq": "pytorch_model-00007-of-00034.bin",
"model.layers.6.self_attn.v_proj.weight": "pytorch_model-00007-of-00034.bin",
"model.layers.7.input_layernorm.weight": "pytorch_model-00009-of-00034.bin",
"model.layers.7.mlp.down_proj.weight": "pytorch_model-00009-of-00034.bin",
"model.layers.7.mlp.gate_proj.weight": "pytorch_model-00009-of-00034.bin",
"model.layers.7.mlp.up_proj.weight": "pytorch_model-00009-of-00034.bin",
"model.layers.7.post_attention_layernorm.weight": "pytorch_model-00009-of-00034.bin",
"model.layers.7.self_attn.k_proj.weight": "pytorch_model-00008-of-00034.bin",
"model.layers.7.self_attn.o_proj.weight": "pytorch_model-00008-of-00034.bin",
"model.layers.7.self_attn.q_proj.weight": "pytorch_model-00008-of-00034.bin",
"model.layers.7.self_attn.rotary_emb.inv_freq": "pytorch_model-00008-of-00034.bin",
"model.layers.7.self_attn.v_proj.weight": "pytorch_model-00008-of-00034.bin",
"model.layers.8.input_layernorm.weight": "pytorch_model-00010-of-00034.bin",
"model.layers.8.mlp.down_proj.weight": "pytorch_model-00010-of-00034.bin",
"model.layers.8.mlp.gate_proj.weight": "pytorch_model-00010-of-00034.bin",
"model.layers.8.mlp.up_proj.weight": "pytorch_model-00010-of-00034.bin",
"model.layers.8.post_attention_layernorm.weight": "pytorch_model-00010-of-00034.bin",
"model.layers.8.self_attn.k_proj.weight": "pytorch_model-00009-of-00034.bin",
"model.layers.8.self_attn.o_proj.weight": "pytorch_model-00009-of-00034.bin",
"model.layers.8.self_attn.q_proj.weight": "pytorch_model-00009-of-00034.bin",
"model.layers.8.self_attn.rotary_emb.inv_freq": "pytorch_model-00009-of-00034.bin",
"model.layers.8.self_attn.v_proj.weight": "pytorch_model-00009-of-00034.bin",
"model.layers.9.input_layernorm.weight": "pytorch_model-00011-of-00034.bin",
"model.layers.9.mlp.down_proj.weight": "pytorch_model-00011-of-00034.bin",
"model.layers.9.mlp.gate_proj.weight": "pytorch_model-00011-of-00034.bin",
"model.layers.9.mlp.up_proj.weight": "pytorch_model-00011-of-00034.bin",
"model.layers.9.post_attention_layernorm.weight": "pytorch_model-00011-of-00034.bin",
"model.layers.9.self_attn.k_proj.weight": "pytorch_model-00010-of-00034.bin",
"model.layers.9.self_attn.o_proj.weight": "pytorch_model-00010-of-00034.bin",
"model.layers.9.self_attn.q_proj.weight": "pytorch_model-00010-of-00034.bin",
"model.layers.9.self_attn.rotary_emb.inv_freq": "pytorch_model-00010-of-00034.bin",
"model.layers.9.self_attn.v_proj.weight": "pytorch_model-00010-of-00034.bin",
"model.norm.weight": "pytorch_model-00033-of-00034.bin"
}
}
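
The index above maps every parameter name to the shard that stores it; `total_size` (13,476,839,424 bytes) matches the ~6.74B float16 parameters plus the small float32 `rotary_emb.inv_freq` buffers. A hedged sketch of consuming the index directly:

```python
# Sketch: group parameters by shard using the weight_map above; this is the
# same information Transformers consults when loading a sharded checkpoint.
import json
from collections import defaultdict

with open("pytorch_model.bin.index.json") as f:
    index = json.load(f)

shards = defaultdict(list)
for param_name, shard_file in index["weight_map"].items():
    shards[shard_file].append(param_name)

print(f"{len(shards)} shards, {len(index['weight_map'])} tensors, "
      f"{index['metadata']['total_size'] / 2**30:.1f} GiB")
# Each shard can then be loaded on its own, e.g. with
# torch.load(shard_file, map_location="cpu").
```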

24
special_tokens_map.json Normal file

@@ -0,0 +1,24 @@
{
"bos_token": {
"content": "<s>",
"lstrip": false,
"normalized": true,
"rstrip": false,
"single_word": false
},
"eos_token": {
"content": "</s>",
"lstrip": false,
"normalized": true,
"rstrip": false,
"single_word": false
},
"pad_token": "<unk>",
"unk_token": {
"content": "<unk>",
"lstrip": false,
"normalized": true,
"rstrip": false,
"single_word": false
}
}

93400
tokenizer.json Normal file

File diff suppressed because it is too large

3
tokenizer.model Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:9e556afd44213b6bd1be2b850ebbbd98f5481437a8021afaf58ee7fb1818d347
size 499723

32
tokenizer_config.json Normal file

@@ -0,0 +1,32 @@
{
"bos_token": {
"__type": "AddedToken",
"content": "<s>",
"lstrip": false,
"normalized": true,
"rstrip": false,
"single_word": false
},
"clean_up_tokenization_spaces": false,
"eos_token": {
"__type": "AddedToken",
"content": "</s>",
"lstrip": false,
"normalized": true,
"rstrip": false,
"single_word": false
},
"legacy": false,
"model_max_length": 1000000000000000019884624838656,
"pad_token": null,
"sp_model_kwargs": {},
"tokenizer_class": "LlamaTokenizer",
"unk_token": {
"__type": "AddedToken",
"content": "<unk>",
"lstrip": false,
"normalized": true,
"rstrip": false,
"single_word": false
}
}
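
Putting the tokenizer files together: `tokenizer.model` holds the SentencePiece model, while the two JSON files above define the special tokens. A minimal sketch (note `pad_token` is `null` here, while `special_tokens_map.json` maps it to `<unk>`):

```python
# Sketch: load the tokenizer defined by the files above.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("sharpbai/Llama-2-7b-chat")

ids = tokenizer("Hello, llama!")["input_ids"]
print(ids[0])                 # 1: <s> is prepended (bos_token_id in config.json)
print(tokenizer.decode(ids))  # round-trips, with the leading <s> made explicit
```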