Initialize project; model provided by the ModelHub XC community

Model: beomi/llama-2-ko-7b
Source: Original Platform
ModelHub XC
2026-04-30 20:03:51 +08:00
commit 7b51a7d680
40 changed files with 125501 additions and 0 deletions

35
.gitattributes vendored Normal file

@@ -0,0 +1,35 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
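
The patterns above route every matching file through Git LFS, so a plain `git clone` without LFS installed fetches only pointer stubs for the weight shards. A minimal fetch sketch, assuming the upstream Hugging Face repo is reachable (a mirror may use a different id or endpoint), that pulls just the safetensors shards and the configs:

```python
# Minimal sketch: fetch the LFS-backed weights without a full git clone.
# Assumes the upstream repo id "beomi/llama-2-ko-7b"; adjust for a mirror.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="beomi/llama-2-ko-7b",
    allow_patterns=["*.safetensors", "*.json", "tokenizer*"],  # skip the duplicate .bin shards
)
print(local_dir)
```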

149
LICENSE Normal file

@@ -0,0 +1,149 @@
Llama-2-Ko 7b MIT License under LLAMA 2 COMMUNITY LICENSE AGREEMENT
Copyright (c) 2023 L. Junbum (Beomi)
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
---
LLAMA 2 COMMUNITY LICENSE AGREEMENT
Llama 2 Version Release Date: July 18, 2023
"Agreement" means the terms and conditions for use, reproduction, distribution and
modification of the Llama Materials set forth herein.
"Documentation" means the specifications, manuals and documentation
accompanying Llama 2 distributed by Meta at ai.meta.com/resources/models-and-
libraries/llama-downloads/.
"Licensee" or "you" means you, or your employer or any other person or entity (if
you are entering into this Agreement on such person or entity's behalf), of the age
required under applicable laws, rules or regulations to provide legal consent and that
has legal authority to bind your employer or such other person or entity if you are
entering in this Agreement on their behalf.
"Llama 2" means the foundational large language models and software and
algorithms, including machine-learning model code, trained model weights,
inference-enabling code, training-enabling code, fine-tuning enabling code and other
elements of the foregoing distributed by Meta at ai.meta.com/resources/models-and-
libraries/llama-downloads/.
"Llama Materials" means, collectively, Meta's proprietary Llama 2 and
Documentation (and any portion thereof) made available under this Agreement.
"Meta" or "we" means Meta Platforms Ireland Limited (if you are located in or, if you
are an entity, your principal place of business is in the EEA or Switzerland) and Meta
Platforms, Inc. (if you are located outside of the EEA or Switzerland).
By clicking "I Accept" below or by using or distributing any portion or element of the
Llama Materials, you agree to be bound by this Agreement.
1. License Rights and Redistribution.
a. Grant of Rights. You are granted a non-exclusive, worldwide, non-
transferable and royalty-free limited license under Meta's intellectual property or
other rights owned by Meta embodied in the Llama Materials to use, reproduce,
distribute, copy, create derivative works of, and make modifications to the Llama
Materials.
b. Redistribution and Use.
i. If you distribute or make the Llama Materials, or any derivative works
thereof, available to a third party, you shall provide a copy of this Agreement to such
third party.
ii. If you receive Llama Materials, or any derivative works thereof, from
a Licensee as part of an integrated end user product, then Section 2 of this
Agreement will not apply to you.
iii. You must retain in all copies of the Llama Materials that you
distribute the following attribution notice within a "Notice" text file distributed as a
part of such copies: "Llama 2 is licensed under the LLAMA 2 Community License,
Copyright (c) Meta Platforms, Inc. All Rights Reserved."
iv. Your use of the Llama Materials must comply with applicable laws
and regulations (including trade compliance laws and regulations) and adhere to the
Acceptable Use Policy for the Llama Materials (available at
https://ai.meta.com/llama/use-policy), which is hereby incorporated by reference into
this Agreement.
v. You will not use the Llama Materials or any output or results of the
Llama Materials to improve any other large language model (excluding Llama 2 or
derivative works thereof).
2. Additional Commercial Terms. If, on the Llama 2 version release date, the
monthly active users of the products or services made available by or for Licensee,
or Licensee's affiliates, is greater than 700 million monthly active users in the
preceding calendar month, you must request a license from Meta, which Meta may
grant to you in its sole discretion, and you are not authorized to exercise any of the
rights under this Agreement unless or until Meta otherwise expressly grants you
such rights.
3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE
LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM ARE
PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND,
EITHER EXPRESS OR IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY
WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR
FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE
FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING
THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR
USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS.
4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE
LIABLE UNDER ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT,
NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS
AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL,
CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN
IF META OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF
ANY OF THE FOREGOING.
5. Intellectual Property.
a. No trademark licenses are granted under this Agreement, and in
connection with the Llama Materials, neither Meta nor Licensee may use any name
or mark owned by or associated with the other or any of its affiliates, except as
required for reasonable and customary use in describing and redistributing the
Llama Materials.
b. Subject to Meta's ownership of Llama Materials and derivatives made by or
for Meta, with respect to any derivative works and modifications of the Llama
Materials that are made by you, as between you and Meta, you are and will be the
owner of such derivative works and modifications.
c. If you institute litigation or other proceedings against Meta or any entity
(including a cross-claim or counterclaim in a lawsuit) alleging that the Llama
Materials or Llama 2 outputs or results, or any portion of any of the foregoing,
constitutes infringement of intellectual property or other rights owned or licensable
by you, then any licenses granted to you under this Agreement shall terminate as of
the date such litigation or claim is filed or instituted. You will indemnify and hold
harmless Meta from and against any claim by any third party arising out of or related
to your use or distribution of the Llama Materials.
6. Term and Termination. The term of this Agreement will commence upon your
acceptance of this Agreement or access to the Llama Materials and will continue in
full force and effect until terminated in accordance with the terms and conditions
herein. Meta may terminate this Agreement if you are in breach of any term or
condition of this Agreement. Upon termination of this Agreement, you shall delete
and cease use of the Llama Materials. Sections 3, 4 and 7 shall survive the
termination of this Agreement.
7. Governing Law and Jurisdiction. This Agreement will be governed and
construed under the laws of the State of California without regard to choice of law
principles, and the UN Convention on Contracts for the International Sale of Goods
does not apply to this Agreement. The courts of California shall have exclusive
jurisdiction of any dispute arising out of this Agreement.

207
README.md Normal file

@@ -0,0 +1,207 @@
---
language:
- en
- ko
pipeline_tag: text-generation
inference: false
tags:
- facebook
- meta
- pytorch
- llama
- llama-2
- kollama
- llama-2-ko
---
**Update Log**
- 2023.12.27
- New model is here! Trained only on an openly accessible Korean text corpus: https://huggingface.co/beomi/open-llama-2-ko-7b
- 2023.10.19
- Fixed a tokenizer bug (space not applied when decoding) that appears with `transformers>=4.34.0`
# **Llama-2-Ko** 🦙🇰🇷
Llama-2-Ko serves as an advanced iteration of Llama 2, benefiting from an expanded vocabulary and the inclusion of a Korean corpus in its further pretraining. Just like its predecessor, Llama-2-Ko operates within the broad range of generative text models that stretch from 7 billion to 70 billion parameters. This repository focuses on the 7B pretrained version, which is tailored to fit the Hugging Face Transformers format. For access to the other models, feel free to consult the index provided below.
## Model Details
**Model Developers** Junbum Lee (Beomi)
**Variations** Llama-2-Ko will come in a range of parameter sizes — 7B, 13B, and 70B — as well as pretrained and fine-tuned variations.
**Input** Models input text only.
**Output** Models generate text only.
**Model Architecture**
Llama-2-Ko is an auto-regressive language model that uses an optimized transformer architecture based on Llama-2.
||Training Data|Params|Content Length|GQA|Tokens|LR|
|---|---|---|---|---|---|---|
|Llama-2-Ko|*A new mix of Korean online data*|7B|4k|&#10007;|>40B*|1e<sup>-5</sup>|
*Planned to train up to 200B tokens
**Vocab Expansion**
| Model Name | Vocabulary Size | Description |
| --- | --- | --- |
| Original Llama-2 | 32000 | SentencePiece BPE |
| **Expanded Llama-2-Ko** | 46336 | SentencePiece BPE; added Korean vocab and merges |
**Tokenizing "안녕하세요, 오늘은 날씨가 좋네요."**
| Model | Tokens |
| --- | --- |
| Llama-2 | `['▁', '안', '<0xEB>', '<0x85>', '<0x95>', '하', '세', '요', ',', '▁', '오', '<0xEB>', '<0x8A>', '<0x98>', '은', '▁', '<0xEB>', '<0x82>', '<0xA0>', '씨', '가', '▁', '<0xEC>', '<0xA2>', '<0x8B>', '<0xEB>', '<0x84>', '<0xA4>', '요']` |
| Llama-2-Ko | `['▁안녕', '하세요', ',', '▁오늘은', '▁날', '씨가', '▁좋네요']` |
**Tokenizing "Llama 2: Open Foundation and Fine-Tuned Chat Models"**
| Model | Tokens |
| --- | --- |
| Llama-2 | `['▁L', 'l', 'ama', '▁', '2', ':', '▁Open', '▁Foundation', '▁and', '▁Fine', '-', 'T', 'un', 'ed', '▁Ch', 'at', '▁Mod', 'els']` |
| Llama-2-Ko | `['▁L', 'l', 'ama', '▁', '2', ':', '▁Open', '▁Foundation', '▁and', '▁Fine', '-', 'T', 'un', 'ed', '▁Ch', 'at', '▁Mod', 'els']` |
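A small sketch to reproduce the comparison above, assuming both repos are reachable (`meta-llama/Llama-2-7b-hf` is gated and needs an accepted license and access token):

```python
# Compare tokenization of the expanded Korean tokenizer vs. the original Llama-2 one.
from transformers import AutoTokenizer

ko = AutoTokenizer.from_pretrained("beomi/llama-2-ko-7b", use_fast=True)
base = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")  # gated repo

text = "안녕하세요, 오늘은 날씨가 좋네요."
print(ko.tokenize(text))    # ['▁안녕', '하세요', ',', '▁오늘은', ...]
print(base.tokenize(text))  # mostly byte fallbacks ('<0xEB>', ...) for Hangul
```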
# **Model Benchmark**
## LM Eval Harness - Korean (polyglot branch)
- Used EleutherAI's lm-evaluation-harness https://github.com/EleutherAI/lm-evaluation-harness/tree/polyglot
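A hedged sketch of one evaluation run, assuming the 2023-era harness's Python entry point (`lm_eval.evaluator.simple_evaluate`) and the Korean task ids of the polyglot branch (e.g. `kobest_copa`), both of which may differ by revision:

```python
# Sketch of a single few-shot evaluation on a Korean task.
from lm_eval import evaluator

results = evaluator.simple_evaluate(
    model="hf-causal",                            # HF causal-LM adapter in the old harness
    model_args="pretrained=beomi/llama-2-ko-7b",
    tasks=["kobest_copa"],                        # assumed task id on the polyglot branch
    num_fewshot=5,
)
print(results["results"])
```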
### NSMC (Acc) - full 50,000-sample test set
TBD
### COPA (F1)
<img src=https://user-images.githubusercontent.com/11323660/255575809-c037bc6e-0566-436a-a6c1-2329ac92187a.png style="max-width: 700px; width: 100%" />
| Model | 0-shot | 5-shot | 10-shot | 50-shot |
| --- | --- | --- | --- | --- |
| https://huggingface.co/skt/ko-gpt-trinity-1.2B-v0.5 | 0.6696 | 0.6477 | 0.6419 | 0.6514 |
| https://huggingface.co/kakaobrain/kogpt | 0.7345 | 0.7287 | 0.7277 | 0.7479 |
| https://huggingface.co/facebook/xglm-7.5B | 0.6723 | 0.6731 | 0.6769 | 0.7119 |
| https://huggingface.co/EleutherAI/polyglot-ko-1.3b | 0.7196 | 0.7193 | 0.7204 | 0.7206 |
| https://huggingface.co/EleutherAI/polyglot-ko-3.8b | 0.7595 | 0.7608 | 0.7638 | 0.7788 |
| https://huggingface.co/EleutherAI/polyglot-ko-5.8b | 0.7745 | 0.7676 | 0.7775 | 0.7887 |
| https://huggingface.co/EleutherAI/polyglot-ko-12.8b | 0.7937 | 0.8108 | 0.8037 | 0.8369 |
| Llama-2 Original 7B* | 0.562033 | 0.575982 | 0.576216 | 0.595532 |
| Llama-2-Ko-7b 20B (10k) | 0.738780 | 0.762639 | 0.780761 | 0.797863 |
| Llama-2-Ko-7b 40B (20k) | 0.743630 | 0.792716 | 0.803746 | 0.825944 |
*Llama-2 Original 7B used https://huggingface.co/meta-llama/Llama-2-7b-hf (without the updated tokenizer)
### HellaSwag (F1)
<img src=https://user-images.githubusercontent.com/11323660/255576090-a2bfc1ae-d117-44b7-9f7b-262e41179ec1.png style="max-width: 700px; width: 100%" />
| Model | 0-shot | 5-shot | 10-shot | 50-shot |
| --- | --- | --- | --- | --- |
| https://huggingface.co/skt/ko-gpt-trinity-1.2B-v0.5 | 0.5243 | 0.5272 | 0.5166 | 0.5352 |
| https://huggingface.co/kakaobrain/kogpt | 0.5590 | 0.5833 | 0.5828 | 0.5907 |
| https://huggingface.co/facebook/xglm-7.5B | 0.5665 | 0.5689 | 0.5565 | 0.5622 |
| https://huggingface.co/EleutherAI/polyglot-ko-1.3b | 0.5247 | 0.5260 | 0.5278 | 0.5427 |
| https://huggingface.co/EleutherAI/polyglot-ko-3.8b | 0.5707 | 0.5830 | 0.5670 | 0.5787 |
| https://huggingface.co/EleutherAI/polyglot-ko-5.8b | 0.5976 | 0.5998 | 0.5979 | 0.6208 |
| https://huggingface.co/EleutherAI/polyglot-ko-12.8b | 0.5954 | 0.6306 | 0.6098 | 0.6118 |
| Llama-2 Original 7B* | 0.415390 | 0.431382 | 0.421342 | 0.442003 |
| Llama-2-Ko-7b 20B (10k) | 0.451757 | 0.466751 | 0.472607 | 0.482776 |
| Llama-2-Ko-7b 40B (20k) | 0.456246 | 0.465665 | 0.469810 | 0.477374 |
*Llama-2 Original 7B used https://huggingface.co/meta-llama/Llama-2-7b-hf (without the updated tokenizer)
### BoolQ (F1)
<img src=https://user-images.githubusercontent.com/11323660/255576343-5d847a6f-3b6a-41a7-af37-0f11940a5ea4.png style="max-width: 700px; width: 100%" />
| Model | 0-shot | 5-shot | 10-shot | 50-shot |
| --- | --- | --- | --- | --- |
| https://huggingface.co/skt/ko-gpt-trinity-1.2B-v0.5 | 0.3356 | 0.4014 | 0.3640 | 0.3560 |
| https://huggingface.co/kakaobrain/kogpt | 0.4514 | 0.5981 | 0.5499 | 0.5202 |
| https://huggingface.co/facebook/xglm-7.5B | 0.4464 | 0.3324 | 0.3324 | 0.3324 |
| https://huggingface.co/EleutherAI/polyglot-ko-1.3b | 0.3552 | 0.4751 | 0.4109 | 0.4038 |
| https://huggingface.co/EleutherAI/polyglot-ko-3.8b | 0.4320 | 0.5263 | 0.4930 | 0.4038 |
| https://huggingface.co/EleutherAI/polyglot-ko-5.8b | 0.4356 | 0.5698 | 0.5187 | 0.5236 |
| https://huggingface.co/EleutherAI/polyglot-ko-12.8b | 0.4818 | 0.6041 | 0.6289 | 0.6448 |
| Llama-2 Original 7B* | 0.352050 | 0.563238 | 0.474788 | 0.419222 |
| Llama-2-Ko-7b 20B (10k) | 0.360656 | 0.679743 | 0.680109 | 0.662152 |
| Llama-2-Ko-7b 40B (20k) | 0.578640 | 0.697747 | 0.708358 | 0.714423 |
*Llama-2 Original 7B used https://huggingface.co/meta-llama/Llama-2-7b-hf (without the updated tokenizer)
### SentiNeg (F1)
<img src=https://user-images.githubusercontent.com/11323660/255576572-b005a81d-fa4d-4709-b48a-f0fe4eed17a3.png style="max-width: 700px; width: 100%" />
| Model | 0-shot | 5-shot | 10-shot | 50-shot |
| --- | --- | --- | --- | --- |
| https://huggingface.co/skt/ko-gpt-trinity-1.2B-v0.5 | 0.6065 | 0.6878 | 0.7280 | 0.8413 |
| https://huggingface.co/kakaobrain/kogpt | 0.3747 | 0.8942 | 0.9294 | 0.9698 |
| https://huggingface.co/facebook/xglm-7.5B | 0.3578 | 0.4471 | 0.3964 | 0.5271 |
| https://huggingface.co/EleutherAI/polyglot-ko-1.3b | 0.6790 | 0.6257 | 0.5514 | 0.7851 |
| https://huggingface.co/EleutherAI/polyglot-ko-3.8b | 0.4858 | 0.7950 | 0.7320 | 0.7851 |
| https://huggingface.co/EleutherAI/polyglot-ko-5.8b | 0.3394 | 0.8841 | 0.8808 | 0.9521 |
| https://huggingface.co/EleutherAI/polyglot-ko-12.8b | 0.9117 | 0.9015 | 0.9345 | 0.9723 |
| Llama-2 Original 7B* | 0.347502 | 0.529124 | 0.480641 | 0.788457 |
| Llama-2-Ko-7b 20B (10k) | 0.485546 | 0.829503 | 0.871141 | 0.851253 |
| Llama-2-Ko-7b 40B (20k) | 0.459447 | 0.761079 | 0.727611 | 0.936988 |
*Llama-2 Original 7B used https://huggingface.co/meta-llama/Llama-2-7b-hf (without the updated tokenizer)
## Note for oobabooga/text-generation-webui
Remove the `ValueError` catch in the `load_tokenizer` function (around line 109) of `modules/models.py`:
```diff
diff --git a/modules/models.py b/modules/models.py
index 232d5fa..de5b7a0 100644
--- a/modules/models.py
+++ b/modules/models.py
@@ -106,7 +106,7 @@ def load_tokenizer(model_name, model):
trust_remote_code=shared.args.trust_remote_code,
use_fast=False
)
- except ValueError:
+ except:
tokenizer = AutoTokenizer.from_pretrained(
path_to_model,
trust_remote_code=shared.args.trust_remote_code,
```
Since Llama-2-Ko uses the fast tokenizer provided by the HF `tokenizers` library, NOT the `sentencepiece` package,
you must pass `use_fast=True` when initializing the tokenizer.
Apple Silicon does not support BF16 computation; use the CPU instead. (BF16 is supported on NVIDIA GPUs.)
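Putting the two notes together, a minimal loading sketch: fast tokenizer enforced, BF16 only when an NVIDIA GPU is present, CPU/float32 otherwise (including Apple Silicon):

```python
# Minimal loading sketch following the notes above.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "beomi/llama-2-ko-7b"
tokenizer = AutoTokenizer.from_pretrained(repo, use_fast=True)  # fast tokenizer is required

use_cuda = torch.cuda.is_available()
model = AutoModelForCausalLM.from_pretrained(
    repo,
    torch_dtype=torch.bfloat16 if use_cuda else torch.float32,  # BF16 only on NVIDIA GPUs
    device_map="auto" if use_cuda else None,  # device_map requires `accelerate`
)
```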
## Citation
```
@misc {l._junbum_2023,
author = { {L. Junbum} },
title = { llama-2-ko-7b (Revision 4a9993e) },
year = 2023,
url = { https://huggingface.co/beomi/llama-2-ko-7b },
doi = { 10.57967/hf/1098 },
publisher = { Hugging Face }
}
```
## Acknowledgement
Training was supported by the [TPU Research Cloud](https://sites.research.google/trc/) program.
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_beomi__llama-2-ko-7b)
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 39.43 |
| ARC (25-shot) | 48.46 |
| HellaSwag (10-shot) | 75.28 |
| MMLU (5-shot) | 39.56 |
| TruthfulQA (0-shot) | 34.49 |
| Winogrande (5-shot) | 72.14 |
| GSM8K (5-shot) | 1.97 |
| DROP (3-shot) | 4.1 |

26
config.json Normal file

@@ -0,0 +1,26 @@
{
"architectures": [
"LlamaForCausalLM"
],
"bos_token_id": 1,
"eos_token_id": 2,
"hidden_act": "silu",
"hidden_size": 4096,
"initializer_range": 0.02,
"intermediate_size": 11008,
"max_length": 4096,
"max_position_embeddings": 2048,
"model_type": "llama",
"num_attention_heads": 32,
"num_hidden_layers": 32,
"num_key_value_heads": 32,
"pad_token_id": 0,
"pretraining_tp": 1,
"rms_norm_eps": 1e-05,
"rope_scaling": null,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.28.0.dev0",
"use_cache": true,
"vocab_size": 46336
}
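
As a quick sanity check, the fields above can be read back through `transformers.AutoConfig`; this sketch just confirms the expanded 46336-token vocabulary described in the README:

```python
# Sketch: verify the on-disk config against the model card.
from transformers import AutoConfig

cfg = AutoConfig.from_pretrained("beomi/llama-2-ko-7b")
assert cfg.vocab_size == 46336  # expanded vocab (original Llama-2: 32000)
print(cfg.model_type, cfg.num_hidden_layers, cfg.max_position_embeddings)
```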

8
generation_config.json Normal file

@@ -0,0 +1,8 @@
{
"_from_model_config": true,
"bos_token_id": 1,
"eos_token_id": 2,
"max_length": 4096,
"pad_token_id": 0,
"transformers_version": "4.28.0.dev0"
}
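
These defaults can likewise be read back with `transformers.GenerationConfig` (available since transformers 4.26; the file itself was written by 4.28.0.dev0):

```python
# Sketch: inspect the shipped generation defaults.
from transformers import GenerationConfig

gen = GenerationConfig.from_pretrained("beomi/llama-2-ko-7b")
print(gen.bos_token_id, gen.eos_token_id, gen.pad_token_id)  # 1 2 0
```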


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ff9c36f7fa88101794c114772b44c84093eea0679ccd6bf5afc1a0eaee63a721
size 918571296
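
The three-line entries in this commit are Git LFS pointer stubs standing in for the weight shards: a spec URL, the SHA-256 of the actual blob, and its size in bytes. A tiny parsing sketch (the helper name is illustrative):

```python
# Sketch: split a Git LFS pointer stub into its three fields.
def parse_lfs_pointer(text: str) -> dict:
    fields = dict(line.split(" ", 1) for line in text.strip().splitlines())
    return {
        "spec": fields["version"],
        "sha256": fields["oid"].removeprefix("sha256:"),  # Python 3.9+
        "size_bytes": int(fields["size"]),
    }

stub = """version https://git-lfs.github.com/spec/v1
oid sha256:ff9c36f7fa88101794c114772b44c84093eea0679ccd6bf5afc1a0eaee63a721
size 918571296"""
print(parse_lfs_pointer(stub))
```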


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8f61295a25767eb80c47fe16e36d1b249c4d2b0cc5968c84e42c3b605bc07a9a
size 989891520


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e6309c59c025ece6ad4b127ec2f5a96f31df46adc660ff8ff8907ea98a7bff3f
size 966839576


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f23231d7c88bc3130694787bc410de0c6ed50bee74e9ef8427ea4a22192b9372
size 966823328


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:2b033c31c166cab4254a0beac00e45b8cc3a81b0464a30fd673f897fddc1c89a
size 989908144


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:114f3151eece4c44c5085c8feab6a04fc69b5094dc794540945b8d97ab8afb65
size 943754792


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ceaedeb4b7ece45ba4185b30d1ccee709ea8132fe27fd3fb2585a1f5ff9decf7
size 989891544


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c6111b434eac97233c81224fe712fbb74ee01b46395e4bf974f636730f1dbc51
size 966839600


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f6986fbe739e40bcf83ce679c687b98aa9dce2b0621604e70e80f76ad5e13ada
size 966823352


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:824354aba3909363d96c7958c4d1ef2d46cfd0150ad7dbf4a12ae7e6388dc8ab
size 989908160


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c1e84c1c922beb3a14a987d67a281490f33aebf7084cd2532425f717486ed1c6
size 943754792


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:89b127dcb3c85427a3c9224224c60f344d45eacdb729e4352a42224bdbed76ba
size 989891544


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:5dc7f7c52371bacb5d92a2ca5bc16ea10c8306a8e4d673b0385e8cb879d9e56f
size 966839600


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:2ba3c4093ef0648818030617103646e294eff74d181433225ce51aef304e9877
size 742435432


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:6bdd5a478fcb8941874cd27a76eb4ddaa260bcfb15060aa1996868a099292100
size 379584640


@@ -0,0 +1,330 @@
{
"metadata": {
"total_size": 13711720448
},
"weight_map": {
"lm_head.weight": "model-00015-of-00015.safetensors",
"model.embed_tokens.weight": "model-00001-of-00015.safetensors",
"model.layers.0.input_layernorm.weight": "model-00001-of-00015.safetensors",
"model.layers.0.mlp.down_proj.weight": "model-00001-of-00015.safetensors",
"model.layers.0.mlp.gate_proj.weight": "model-00001-of-00015.safetensors",
"model.layers.0.mlp.up_proj.weight": "model-00001-of-00015.safetensors",
"model.layers.0.post_attention_layernorm.weight": "model-00001-of-00015.safetensors",
"model.layers.0.self_attn.k_proj.weight": "model-00001-of-00015.safetensors",
"model.layers.0.self_attn.o_proj.weight": "model-00001-of-00015.safetensors",
"model.layers.0.self_attn.q_proj.weight": "model-00001-of-00015.safetensors",
"model.layers.0.self_attn.rotary_emb.inv_freq": "model-00001-of-00015.safetensors",
"model.layers.0.self_attn.v_proj.weight": "model-00001-of-00015.safetensors",
"model.layers.1.input_layernorm.weight": "model-00002-of-00015.safetensors",
"model.layers.1.mlp.down_proj.weight": "model-00002-of-00015.safetensors",
"model.layers.1.mlp.gate_proj.weight": "model-00002-of-00015.safetensors",
"model.layers.1.mlp.up_proj.weight": "model-00002-of-00015.safetensors",
"model.layers.1.post_attention_layernorm.weight": "model-00002-of-00015.safetensors",
"model.layers.1.self_attn.k_proj.weight": "model-00001-of-00015.safetensors",
"model.layers.1.self_attn.o_proj.weight": "model-00001-of-00015.safetensors",
"model.layers.1.self_attn.q_proj.weight": "model-00001-of-00015.safetensors",
"model.layers.1.self_attn.rotary_emb.inv_freq": "model-00001-of-00015.safetensors",
"model.layers.1.self_attn.v_proj.weight": "model-00001-of-00015.safetensors",
"model.layers.10.input_layernorm.weight": "model-00005-of-00015.safetensors",
"model.layers.10.mlp.down_proj.weight": "model-00005-of-00015.safetensors",
"model.layers.10.mlp.gate_proj.weight": "model-00005-of-00015.safetensors",
"model.layers.10.mlp.up_proj.weight": "model-00005-of-00015.safetensors",
"model.layers.10.post_attention_layernorm.weight": "model-00005-of-00015.safetensors",
"model.layers.10.self_attn.k_proj.weight": "model-00005-of-00015.safetensors",
"model.layers.10.self_attn.o_proj.weight": "model-00005-of-00015.safetensors",
"model.layers.10.self_attn.q_proj.weight": "model-00005-of-00015.safetensors",
"model.layers.10.self_attn.rotary_emb.inv_freq": "model-00005-of-00015.safetensors",
"model.layers.10.self_attn.v_proj.weight": "model-00005-of-00015.safetensors",
"model.layers.11.input_layernorm.weight": "model-00006-of-00015.safetensors",
"model.layers.11.mlp.down_proj.weight": "model-00006-of-00015.safetensors",
"model.layers.11.mlp.gate_proj.weight": "model-00006-of-00015.safetensors",
"model.layers.11.mlp.up_proj.weight": "model-00006-of-00015.safetensors",
"model.layers.11.post_attention_layernorm.weight": "model-00006-of-00015.safetensors",
"model.layers.11.self_attn.k_proj.weight": "model-00006-of-00015.safetensors",
"model.layers.11.self_attn.o_proj.weight": "model-00006-of-00015.safetensors",
"model.layers.11.self_attn.q_proj.weight": "model-00006-of-00015.safetensors",
"model.layers.11.self_attn.rotary_emb.inv_freq": "model-00006-of-00015.safetensors",
"model.layers.11.self_attn.v_proj.weight": "model-00006-of-00015.safetensors",
"model.layers.12.input_layernorm.weight": "model-00006-of-00015.safetensors",
"model.layers.12.mlp.down_proj.weight": "model-00006-of-00015.safetensors",
"model.layers.12.mlp.gate_proj.weight": "model-00006-of-00015.safetensors",
"model.layers.12.mlp.up_proj.weight": "model-00006-of-00015.safetensors",
"model.layers.12.post_attention_layernorm.weight": "model-00006-of-00015.safetensors",
"model.layers.12.self_attn.k_proj.weight": "model-00006-of-00015.safetensors",
"model.layers.12.self_attn.o_proj.weight": "model-00006-of-00015.safetensors",
"model.layers.12.self_attn.q_proj.weight": "model-00006-of-00015.safetensors",
"model.layers.12.self_attn.rotary_emb.inv_freq": "model-00006-of-00015.safetensors",
"model.layers.12.self_attn.v_proj.weight": "model-00006-of-00015.safetensors",
"model.layers.13.input_layernorm.weight": "model-00007-of-00015.safetensors",
"model.layers.13.mlp.down_proj.weight": "model-00007-of-00015.safetensors",
"model.layers.13.mlp.gate_proj.weight": "model-00007-of-00015.safetensors",
"model.layers.13.mlp.up_proj.weight": "model-00007-of-00015.safetensors",
"model.layers.13.post_attention_layernorm.weight": "model-00007-of-00015.safetensors",
"model.layers.13.self_attn.k_proj.weight": "model-00006-of-00015.safetensors",
"model.layers.13.self_attn.o_proj.weight": "model-00006-of-00015.safetensors",
"model.layers.13.self_attn.q_proj.weight": "model-00006-of-00015.safetensors",
"model.layers.13.self_attn.rotary_emb.inv_freq": "model-00006-of-00015.safetensors",
"model.layers.13.self_attn.v_proj.weight": "model-00006-of-00015.safetensors",
"model.layers.14.input_layernorm.weight": "model-00007-of-00015.safetensors",
"model.layers.14.mlp.down_proj.weight": "model-00007-of-00015.safetensors",
"model.layers.14.mlp.gate_proj.weight": "model-00007-of-00015.safetensors",
"model.layers.14.mlp.up_proj.weight": "model-00007-of-00015.safetensors",
"model.layers.14.post_attention_layernorm.weight": "model-00007-of-00015.safetensors",
"model.layers.14.self_attn.k_proj.weight": "model-00007-of-00015.safetensors",
"model.layers.14.self_attn.o_proj.weight": "model-00007-of-00015.safetensors",
"model.layers.14.self_attn.q_proj.weight": "model-00007-of-00015.safetensors",
"model.layers.14.self_attn.rotary_emb.inv_freq": "model-00007-of-00015.safetensors",
"model.layers.14.self_attn.v_proj.weight": "model-00007-of-00015.safetensors",
"model.layers.15.input_layernorm.weight": "model-00008-of-00015.safetensors",
"model.layers.15.mlp.down_proj.weight": "model-00007-of-00015.safetensors",
"model.layers.15.mlp.gate_proj.weight": "model-00007-of-00015.safetensors",
"model.layers.15.mlp.up_proj.weight": "model-00008-of-00015.safetensors",
"model.layers.15.post_attention_layernorm.weight": "model-00008-of-00015.safetensors",
"model.layers.15.self_attn.k_proj.weight": "model-00007-of-00015.safetensors",
"model.layers.15.self_attn.o_proj.weight": "model-00007-of-00015.safetensors",
"model.layers.15.self_attn.q_proj.weight": "model-00007-of-00015.safetensors",
"model.layers.15.self_attn.rotary_emb.inv_freq": "model-00007-of-00015.safetensors",
"model.layers.15.self_attn.v_proj.weight": "model-00007-of-00015.safetensors",
"model.layers.16.input_layernorm.weight": "model-00008-of-00015.safetensors",
"model.layers.16.mlp.down_proj.weight": "model-00008-of-00015.safetensors",
"model.layers.16.mlp.gate_proj.weight": "model-00008-of-00015.safetensors",
"model.layers.16.mlp.up_proj.weight": "model-00008-of-00015.safetensors",
"model.layers.16.post_attention_layernorm.weight": "model-00008-of-00015.safetensors",
"model.layers.16.self_attn.k_proj.weight": "model-00008-of-00015.safetensors",
"model.layers.16.self_attn.o_proj.weight": "model-00008-of-00015.safetensors",
"model.layers.16.self_attn.q_proj.weight": "model-00008-of-00015.safetensors",
"model.layers.16.self_attn.rotary_emb.inv_freq": "model-00008-of-00015.safetensors",
"model.layers.16.self_attn.v_proj.weight": "model-00008-of-00015.safetensors",
"model.layers.17.input_layernorm.weight": "model-00008-of-00015.safetensors",
"model.layers.17.mlp.down_proj.weight": "model-00008-of-00015.safetensors",
"model.layers.17.mlp.gate_proj.weight": "model-00008-of-00015.safetensors",
"model.layers.17.mlp.up_proj.weight": "model-00008-of-00015.safetensors",
"model.layers.17.post_attention_layernorm.weight": "model-00008-of-00015.safetensors",
"model.layers.17.self_attn.k_proj.weight": "model-00008-of-00015.safetensors",
"model.layers.17.self_attn.o_proj.weight": "model-00008-of-00015.safetensors",
"model.layers.17.self_attn.q_proj.weight": "model-00008-of-00015.safetensors",
"model.layers.17.self_attn.rotary_emb.inv_freq": "model-00008-of-00015.safetensors",
"model.layers.17.self_attn.v_proj.weight": "model-00008-of-00015.safetensors",
"model.layers.18.input_layernorm.weight": "model-00009-of-00015.safetensors",
"model.layers.18.mlp.down_proj.weight": "model-00009-of-00015.safetensors",
"model.layers.18.mlp.gate_proj.weight": "model-00009-of-00015.safetensors",
"model.layers.18.mlp.up_proj.weight": "model-00009-of-00015.safetensors",
"model.layers.18.post_attention_layernorm.weight": "model-00009-of-00015.safetensors",
"model.layers.18.self_attn.k_proj.weight": "model-00008-of-00015.safetensors",
"model.layers.18.self_attn.o_proj.weight": "model-00009-of-00015.safetensors",
"model.layers.18.self_attn.q_proj.weight": "model-00008-of-00015.safetensors",
"model.layers.18.self_attn.rotary_emb.inv_freq": "model-00009-of-00015.safetensors",
"model.layers.18.self_attn.v_proj.weight": "model-00009-of-00015.safetensors",
"model.layers.19.input_layernorm.weight": "model-00009-of-00015.safetensors",
"model.layers.19.mlp.down_proj.weight": "model-00009-of-00015.safetensors",
"model.layers.19.mlp.gate_proj.weight": "model-00009-of-00015.safetensors",
"model.layers.19.mlp.up_proj.weight": "model-00009-of-00015.safetensors",
"model.layers.19.post_attention_layernorm.weight": "model-00009-of-00015.safetensors",
"model.layers.19.self_attn.k_proj.weight": "model-00009-of-00015.safetensors",
"model.layers.19.self_attn.o_proj.weight": "model-00009-of-00015.safetensors",
"model.layers.19.self_attn.q_proj.weight": "model-00009-of-00015.safetensors",
"model.layers.19.self_attn.rotary_emb.inv_freq": "model-00009-of-00015.safetensors",
"model.layers.19.self_attn.v_proj.weight": "model-00009-of-00015.safetensors",
"model.layers.2.input_layernorm.weight": "model-00002-of-00015.safetensors",
"model.layers.2.mlp.down_proj.weight": "model-00002-of-00015.safetensors",
"model.layers.2.mlp.gate_proj.weight": "model-00002-of-00015.safetensors",
"model.layers.2.mlp.up_proj.weight": "model-00002-of-00015.safetensors",
"model.layers.2.post_attention_layernorm.weight": "model-00002-of-00015.safetensors",
"model.layers.2.self_attn.k_proj.weight": "model-00002-of-00015.safetensors",
"model.layers.2.self_attn.o_proj.weight": "model-00002-of-00015.safetensors",
"model.layers.2.self_attn.q_proj.weight": "model-00002-of-00015.safetensors",
"model.layers.2.self_attn.rotary_emb.inv_freq": "model-00002-of-00015.safetensors",
"model.layers.2.self_attn.v_proj.weight": "model-00002-of-00015.safetensors",
"model.layers.20.input_layernorm.weight": "model-00010-of-00015.safetensors",
"model.layers.20.mlp.down_proj.weight": "model-00010-of-00015.safetensors",
"model.layers.20.mlp.gate_proj.weight": "model-00009-of-00015.safetensors",
"model.layers.20.mlp.up_proj.weight": "model-00010-of-00015.safetensors",
"model.layers.20.post_attention_layernorm.weight": "model-00010-of-00015.safetensors",
"model.layers.20.self_attn.k_proj.weight": "model-00009-of-00015.safetensors",
"model.layers.20.self_attn.o_proj.weight": "model-00009-of-00015.safetensors",
"model.layers.20.self_attn.q_proj.weight": "model-00009-of-00015.safetensors",
"model.layers.20.self_attn.rotary_emb.inv_freq": "model-00009-of-00015.safetensors",
"model.layers.20.self_attn.v_proj.weight": "model-00009-of-00015.safetensors",
"model.layers.21.input_layernorm.weight": "model-00010-of-00015.safetensors",
"model.layers.21.mlp.down_proj.weight": "model-00010-of-00015.safetensors",
"model.layers.21.mlp.gate_proj.weight": "model-00010-of-00015.safetensors",
"model.layers.21.mlp.up_proj.weight": "model-00010-of-00015.safetensors",
"model.layers.21.post_attention_layernorm.weight": "model-00010-of-00015.safetensors",
"model.layers.21.self_attn.k_proj.weight": "model-00010-of-00015.safetensors",
"model.layers.21.self_attn.o_proj.weight": "model-00010-of-00015.safetensors",
"model.layers.21.self_attn.q_proj.weight": "model-00010-of-00015.safetensors",
"model.layers.21.self_attn.rotary_emb.inv_freq": "model-00010-of-00015.safetensors",
"model.layers.21.self_attn.v_proj.weight": "model-00010-of-00015.safetensors",
"model.layers.22.input_layernorm.weight": "model-00010-of-00015.safetensors",
"model.layers.22.mlp.down_proj.weight": "model-00010-of-00015.safetensors",
"model.layers.22.mlp.gate_proj.weight": "model-00010-of-00015.safetensors",
"model.layers.22.mlp.up_proj.weight": "model-00010-of-00015.safetensors",
"model.layers.22.post_attention_layernorm.weight": "model-00010-of-00015.safetensors",
"model.layers.22.self_attn.k_proj.weight": "model-00010-of-00015.safetensors",
"model.layers.22.self_attn.o_proj.weight": "model-00010-of-00015.safetensors",
"model.layers.22.self_attn.q_proj.weight": "model-00010-of-00015.safetensors",
"model.layers.22.self_attn.rotary_emb.inv_freq": "model-00010-of-00015.safetensors",
"model.layers.22.self_attn.v_proj.weight": "model-00010-of-00015.safetensors",
"model.layers.23.input_layernorm.weight": "model-00011-of-00015.safetensors",
"model.layers.23.mlp.down_proj.weight": "model-00011-of-00015.safetensors",
"model.layers.23.mlp.gate_proj.weight": "model-00011-of-00015.safetensors",
"model.layers.23.mlp.up_proj.weight": "model-00011-of-00015.safetensors",
"model.layers.23.post_attention_layernorm.weight": "model-00011-of-00015.safetensors",
"model.layers.23.self_attn.k_proj.weight": "model-00011-of-00015.safetensors",
"model.layers.23.self_attn.o_proj.weight": "model-00011-of-00015.safetensors",
"model.layers.23.self_attn.q_proj.weight": "model-00011-of-00015.safetensors",
"model.layers.23.self_attn.rotary_emb.inv_freq": "model-00011-of-00015.safetensors",
"model.layers.23.self_attn.v_proj.weight": "model-00011-of-00015.safetensors",
"model.layers.24.input_layernorm.weight": "model-00011-of-00015.safetensors",
"model.layers.24.mlp.down_proj.weight": "model-00011-of-00015.safetensors",
"model.layers.24.mlp.gate_proj.weight": "model-00011-of-00015.safetensors",
"model.layers.24.mlp.up_proj.weight": "model-00011-of-00015.safetensors",
"model.layers.24.post_attention_layernorm.weight": "model-00011-of-00015.safetensors",
"model.layers.24.self_attn.k_proj.weight": "model-00011-of-00015.safetensors",
"model.layers.24.self_attn.o_proj.weight": "model-00011-of-00015.safetensors",
"model.layers.24.self_attn.q_proj.weight": "model-00011-of-00015.safetensors",
"model.layers.24.self_attn.rotary_emb.inv_freq": "model-00011-of-00015.safetensors",
"model.layers.24.self_attn.v_proj.weight": "model-00011-of-00015.safetensors",
"model.layers.25.input_layernorm.weight": "model-00012-of-00015.safetensors",
"model.layers.25.mlp.down_proj.weight": "model-00012-of-00015.safetensors",
"model.layers.25.mlp.gate_proj.weight": "model-00012-of-00015.safetensors",
"model.layers.25.mlp.up_proj.weight": "model-00012-of-00015.safetensors",
"model.layers.25.post_attention_layernorm.weight": "model-00012-of-00015.safetensors",
"model.layers.25.self_attn.k_proj.weight": "model-00011-of-00015.safetensors",
"model.layers.25.self_attn.o_proj.weight": "model-00011-of-00015.safetensors",
"model.layers.25.self_attn.q_proj.weight": "model-00011-of-00015.safetensors",
"model.layers.25.self_attn.rotary_emb.inv_freq": "model-00011-of-00015.safetensors",
"model.layers.25.self_attn.v_proj.weight": "model-00011-of-00015.safetensors",
"model.layers.26.input_layernorm.weight": "model-00012-of-00015.safetensors",
"model.layers.26.mlp.down_proj.weight": "model-00012-of-00015.safetensors",
"model.layers.26.mlp.gate_proj.weight": "model-00012-of-00015.safetensors",
"model.layers.26.mlp.up_proj.weight": "model-00012-of-00015.safetensors",
"model.layers.26.post_attention_layernorm.weight": "model-00012-of-00015.safetensors",
"model.layers.26.self_attn.k_proj.weight": "model-00012-of-00015.safetensors",
"model.layers.26.self_attn.o_proj.weight": "model-00012-of-00015.safetensors",
"model.layers.26.self_attn.q_proj.weight": "model-00012-of-00015.safetensors",
"model.layers.26.self_attn.rotary_emb.inv_freq": "model-00012-of-00015.safetensors",
"model.layers.26.self_attn.v_proj.weight": "model-00012-of-00015.safetensors",
"model.layers.27.input_layernorm.weight": "model-00013-of-00015.safetensors",
"model.layers.27.mlp.down_proj.weight": "model-00012-of-00015.safetensors",
"model.layers.27.mlp.gate_proj.weight": "model-00012-of-00015.safetensors",
"model.layers.27.mlp.up_proj.weight": "model-00013-of-00015.safetensors",
"model.layers.27.post_attention_layernorm.weight": "model-00013-of-00015.safetensors",
"model.layers.27.self_attn.k_proj.weight": "model-00012-of-00015.safetensors",
"model.layers.27.self_attn.o_proj.weight": "model-00012-of-00015.safetensors",
"model.layers.27.self_attn.q_proj.weight": "model-00012-of-00015.safetensors",
"model.layers.27.self_attn.rotary_emb.inv_freq": "model-00012-of-00015.safetensors",
"model.layers.27.self_attn.v_proj.weight": "model-00012-of-00015.safetensors",
"model.layers.28.input_layernorm.weight": "model-00013-of-00015.safetensors",
"model.layers.28.mlp.down_proj.weight": "model-00013-of-00015.safetensors",
"model.layers.28.mlp.gate_proj.weight": "model-00013-of-00015.safetensors",
"model.layers.28.mlp.up_proj.weight": "model-00013-of-00015.safetensors",
"model.layers.28.post_attention_layernorm.weight": "model-00013-of-00015.safetensors",
"model.layers.28.self_attn.k_proj.weight": "model-00013-of-00015.safetensors",
"model.layers.28.self_attn.o_proj.weight": "model-00013-of-00015.safetensors",
"model.layers.28.self_attn.q_proj.weight": "model-00013-of-00015.safetensors",
"model.layers.28.self_attn.rotary_emb.inv_freq": "model-00013-of-00015.safetensors",
"model.layers.28.self_attn.v_proj.weight": "model-00013-of-00015.safetensors",
"model.layers.29.input_layernorm.weight": "model-00013-of-00015.safetensors",
"model.layers.29.mlp.down_proj.weight": "model-00013-of-00015.safetensors",
"model.layers.29.mlp.gate_proj.weight": "model-00013-of-00015.safetensors",
"model.layers.29.mlp.up_proj.weight": "model-00013-of-00015.safetensors",
"model.layers.29.post_attention_layernorm.weight": "model-00013-of-00015.safetensors",
"model.layers.29.self_attn.k_proj.weight": "model-00013-of-00015.safetensors",
"model.layers.29.self_attn.o_proj.weight": "model-00013-of-00015.safetensors",
"model.layers.29.self_attn.q_proj.weight": "model-00013-of-00015.safetensors",
"model.layers.29.self_attn.rotary_emb.inv_freq": "model-00013-of-00015.safetensors",
"model.layers.29.self_attn.v_proj.weight": "model-00013-of-00015.safetensors",
"model.layers.3.input_layernorm.weight": "model-00003-of-00015.safetensors",
"model.layers.3.mlp.down_proj.weight": "model-00002-of-00015.safetensors",
"model.layers.3.mlp.gate_proj.weight": "model-00002-of-00015.safetensors",
"model.layers.3.mlp.up_proj.weight": "model-00003-of-00015.safetensors",
"model.layers.3.post_attention_layernorm.weight": "model-00003-of-00015.safetensors",
"model.layers.3.self_attn.k_proj.weight": "model-00002-of-00015.safetensors",
"model.layers.3.self_attn.o_proj.weight": "model-00002-of-00015.safetensors",
"model.layers.3.self_attn.q_proj.weight": "model-00002-of-00015.safetensors",
"model.layers.3.self_attn.rotary_emb.inv_freq": "model-00002-of-00015.safetensors",
"model.layers.3.self_attn.v_proj.weight": "model-00002-of-00015.safetensors",
"model.layers.30.input_layernorm.weight": "model-00014-of-00015.safetensors",
"model.layers.30.mlp.down_proj.weight": "model-00014-of-00015.safetensors",
"model.layers.30.mlp.gate_proj.weight": "model-00014-of-00015.safetensors",
"model.layers.30.mlp.up_proj.weight": "model-00014-of-00015.safetensors",
"model.layers.30.post_attention_layernorm.weight": "model-00014-of-00015.safetensors",
"model.layers.30.self_attn.k_proj.weight": "model-00013-of-00015.safetensors",
"model.layers.30.self_attn.o_proj.weight": "model-00014-of-00015.safetensors",
"model.layers.30.self_attn.q_proj.weight": "model-00013-of-00015.safetensors",
"model.layers.30.self_attn.rotary_emb.inv_freq": "model-00014-of-00015.safetensors",
"model.layers.30.self_attn.v_proj.weight": "model-00014-of-00015.safetensors",
"model.layers.31.input_layernorm.weight": "model-00014-of-00015.safetensors",
"model.layers.31.mlp.down_proj.weight": "model-00014-of-00015.safetensors",
"model.layers.31.mlp.gate_proj.weight": "model-00014-of-00015.safetensors",
"model.layers.31.mlp.up_proj.weight": "model-00014-of-00015.safetensors",
"model.layers.31.post_attention_layernorm.weight": "model-00014-of-00015.safetensors",
"model.layers.31.self_attn.k_proj.weight": "model-00014-of-00015.safetensors",
"model.layers.31.self_attn.o_proj.weight": "model-00014-of-00015.safetensors",
"model.layers.31.self_attn.q_proj.weight": "model-00014-of-00015.safetensors",
"model.layers.31.self_attn.rotary_emb.inv_freq": "model-00014-of-00015.safetensors",
"model.layers.31.self_attn.v_proj.weight": "model-00014-of-00015.safetensors",
"model.layers.4.input_layernorm.weight": "model-00003-of-00015.safetensors",
"model.layers.4.mlp.down_proj.weight": "model-00003-of-00015.safetensors",
"model.layers.4.mlp.gate_proj.weight": "model-00003-of-00015.safetensors",
"model.layers.4.mlp.up_proj.weight": "model-00003-of-00015.safetensors",
"model.layers.4.post_attention_layernorm.weight": "model-00003-of-00015.safetensors",
"model.layers.4.self_attn.k_proj.weight": "model-00003-of-00015.safetensors",
"model.layers.4.self_attn.o_proj.weight": "model-00003-of-00015.safetensors",
"model.layers.4.self_attn.q_proj.weight": "model-00003-of-00015.safetensors",
"model.layers.4.self_attn.rotary_emb.inv_freq": "model-00003-of-00015.safetensors",
"model.layers.4.self_attn.v_proj.weight": "model-00003-of-00015.safetensors",
"model.layers.5.input_layernorm.weight": "model-00003-of-00015.safetensors",
"model.layers.5.mlp.down_proj.weight": "model-00003-of-00015.safetensors",
"model.layers.5.mlp.gate_proj.weight": "model-00003-of-00015.safetensors",
"model.layers.5.mlp.up_proj.weight": "model-00003-of-00015.safetensors",
"model.layers.5.post_attention_layernorm.weight": "model-00003-of-00015.safetensors",
"model.layers.5.self_attn.k_proj.weight": "model-00003-of-00015.safetensors",
"model.layers.5.self_attn.o_proj.weight": "model-00003-of-00015.safetensors",
"model.layers.5.self_attn.q_proj.weight": "model-00003-of-00015.safetensors",
"model.layers.5.self_attn.rotary_emb.inv_freq": "model-00003-of-00015.safetensors",
"model.layers.5.self_attn.v_proj.weight": "model-00003-of-00015.safetensors",
"model.layers.6.input_layernorm.weight": "model-00004-of-00015.safetensors",
"model.layers.6.mlp.down_proj.weight": "model-00004-of-00015.safetensors",
"model.layers.6.mlp.gate_proj.weight": "model-00004-of-00015.safetensors",
"model.layers.6.mlp.up_proj.weight": "model-00004-of-00015.safetensors",
"model.layers.6.post_attention_layernorm.weight": "model-00004-of-00015.safetensors",
"model.layers.6.self_attn.k_proj.weight": "model-00003-of-00015.safetensors",
"model.layers.6.self_attn.o_proj.weight": "model-00004-of-00015.safetensors",
"model.layers.6.self_attn.q_proj.weight": "model-00003-of-00015.safetensors",
"model.layers.6.self_attn.rotary_emb.inv_freq": "model-00004-of-00015.safetensors",
"model.layers.6.self_attn.v_proj.weight": "model-00004-of-00015.safetensors",
"model.layers.7.input_layernorm.weight": "model-00004-of-00015.safetensors",
"model.layers.7.mlp.down_proj.weight": "model-00004-of-00015.safetensors",
"model.layers.7.mlp.gate_proj.weight": "model-00004-of-00015.safetensors",
"model.layers.7.mlp.up_proj.weight": "model-00004-of-00015.safetensors",
"model.layers.7.post_attention_layernorm.weight": "model-00004-of-00015.safetensors",
"model.layers.7.self_attn.k_proj.weight": "model-00004-of-00015.safetensors",
"model.layers.7.self_attn.o_proj.weight": "model-00004-of-00015.safetensors",
"model.layers.7.self_attn.q_proj.weight": "model-00004-of-00015.safetensors",
"model.layers.7.self_attn.rotary_emb.inv_freq": "model-00004-of-00015.safetensors",
"model.layers.7.self_attn.v_proj.weight": "model-00004-of-00015.safetensors",
"model.layers.8.input_layernorm.weight": "model-00005-of-00015.safetensors",
"model.layers.8.mlp.down_proj.weight": "model-00005-of-00015.safetensors",
"model.layers.8.mlp.gate_proj.weight": "model-00004-of-00015.safetensors",
"model.layers.8.mlp.up_proj.weight": "model-00005-of-00015.safetensors",
"model.layers.8.post_attention_layernorm.weight": "model-00005-of-00015.safetensors",
"model.layers.8.self_attn.k_proj.weight": "model-00004-of-00015.safetensors",
"model.layers.8.self_attn.o_proj.weight": "model-00004-of-00015.safetensors",
"model.layers.8.self_attn.q_proj.weight": "model-00004-of-00015.safetensors",
"model.layers.8.self_attn.rotary_emb.inv_freq": "model-00004-of-00015.safetensors",
"model.layers.8.self_attn.v_proj.weight": "model-00004-of-00015.safetensors",
"model.layers.9.input_layernorm.weight": "model-00005-of-00015.safetensors",
"model.layers.9.mlp.down_proj.weight": "model-00005-of-00015.safetensors",
"model.layers.9.mlp.gate_proj.weight": "model-00005-of-00015.safetensors",
"model.layers.9.mlp.up_proj.weight": "model-00005-of-00015.safetensors",
"model.layers.9.post_attention_layernorm.weight": "model-00005-of-00015.safetensors",
"model.layers.9.self_attn.k_proj.weight": "model-00005-of-00015.safetensors",
"model.layers.9.self_attn.o_proj.weight": "model-00005-of-00015.safetensors",
"model.layers.9.self_attn.q_proj.weight": "model-00005-of-00015.safetensors",
"model.layers.9.self_attn.rotary_emb.inv_freq": "model-00005-of-00015.safetensors",
"model.layers.9.self_attn.v_proj.weight": "model-00005-of-00015.safetensors",
"model.norm.weight": "model-00014-of-00015.safetensors"
}
}
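
The `weight_map` above is what sharded loaders consult: parameter name in, shard filename out. A sketch of a single-tensor lookup, assuming the shards sit in a local directory:

```python
# Sketch: resolve one parameter to its shard and load only that tensor.
import json
from safetensors import safe_open

model_dir = "./llama-2-ko-7b"  # hypothetical local checkout of the repo
with open(f"{model_dir}/model.safetensors.index.json") as f:
    weight_map = json.load(f)["weight_map"]

name = "model.layers.0.self_attn.q_proj.weight"
shard = weight_map[name]  # -> "model-00001-of-00015.safetensors"
with safe_open(f"{model_dir}/{shard}", framework="pt") as sf:
    tensor = sf.get_tensor(name)
print(tensor.shape)
```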


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f0af62f384154d4e1c1c8aee834acdd00708e15264a1f07e56ce99822063687e
size 918575087


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:bf45c96a59d280c62aa5a20897b06880feab737d6823d359169752c68abd3b53
size 989896539


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e22feb1ea86cb4d9d4f8b5778d3b5eaf1b1d4ca3f6a4d5ac0e462f713f483040
size 966845201


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:37d23d9d306bb236885bac296af2ea8d4373d660fd987ef9f47df027b8d5c912
size 966828735


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:6c5b72d89af911f376e885d5e0ffa850220e81860db1497a6d174341fc904c1f
size 989913535


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:3a5c9f3a9b85f37257d05cbcd3d3f0334b44eb8c70cf54dbc4f67a58b4b61a89
size 943760401


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:9c9e5032a493764b0044ab2a66a76e51e34f8ae0c73699a23cdd4548d6d11329
size 989896539


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e5fbe84ade25b11afd7f1282ff5ceab357dc693d177aac59bfd5e6842beca5d4
size 966845201


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f720fa710126cc21e1da1c39dc6437ac78f96ceedad4e8ba1da1738387ba85c0
size 966828799


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ebeaa24b28bc22e843aa2c086c8ecb3120c595e4a1a63825f47c53f2bf8d8ced
size 989913535


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:951565ee409ae885830287f8b06e07d0b3553853513dbdb9c9196cfc9ddd6aa6
size 943760401


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:48a7fb3485138521ea17a243ac2b8878e7d614f7da05610acc17b7345a1e0728
size 989896539


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:0800e4946b434fe0c4f477c1b7bea47fbd2d482c9e97238af6150c6350238a96
size 966845201


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:de56542a3464c1f988120524f5af891440b7696c84a3246855e9ec84119a94c0
size 742439845


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d05bef079707f958c8c2689f4d8d945a9e58d5555761df3cb6b0ea21943cf517
size 379585450


@@ -0,0 +1,330 @@
{
"metadata": {
"total_size": 13711720448
},
"weight_map": {
"lm_head.weight": "pytorch_model-00015-of-00015.bin",
"model.embed_tokens.weight": "pytorch_model-00001-of-00015.bin",
"model.layers.0.input_layernorm.weight": "pytorch_model-00001-of-00015.bin",
"model.layers.0.mlp.down_proj.weight": "pytorch_model-00001-of-00015.bin",
"model.layers.0.mlp.gate_proj.weight": "pytorch_model-00001-of-00015.bin",
"model.layers.0.mlp.up_proj.weight": "pytorch_model-00001-of-00015.bin",
"model.layers.0.post_attention_layernorm.weight": "pytorch_model-00001-of-00015.bin",
"model.layers.0.self_attn.k_proj.weight": "pytorch_model-00001-of-00015.bin",
"model.layers.0.self_attn.o_proj.weight": "pytorch_model-00001-of-00015.bin",
"model.layers.0.self_attn.q_proj.weight": "pytorch_model-00001-of-00015.bin",
"model.layers.0.self_attn.rotary_emb.inv_freq": "pytorch_model-00001-of-00015.bin",
"model.layers.0.self_attn.v_proj.weight": "pytorch_model-00001-of-00015.bin",
"model.layers.1.input_layernorm.weight": "pytorch_model-00002-of-00015.bin",
"model.layers.1.mlp.down_proj.weight": "pytorch_model-00002-of-00015.bin",
"model.layers.1.mlp.gate_proj.weight": "pytorch_model-00002-of-00015.bin",
"model.layers.1.mlp.up_proj.weight": "pytorch_model-00002-of-00015.bin",
"model.layers.1.post_attention_layernorm.weight": "pytorch_model-00002-of-00015.bin",
"model.layers.1.self_attn.k_proj.weight": "pytorch_model-00001-of-00015.bin",
"model.layers.1.self_attn.o_proj.weight": "pytorch_model-00001-of-00015.bin",
"model.layers.1.self_attn.q_proj.weight": "pytorch_model-00001-of-00015.bin",
"model.layers.1.self_attn.rotary_emb.inv_freq": "pytorch_model-00001-of-00015.bin",
"model.layers.1.self_attn.v_proj.weight": "pytorch_model-00001-of-00015.bin",
"model.layers.10.input_layernorm.weight": "pytorch_model-00005-of-00015.bin",
"model.layers.10.mlp.down_proj.weight": "pytorch_model-00005-of-00015.bin",
"model.layers.10.mlp.gate_proj.weight": "pytorch_model-00005-of-00015.bin",
"model.layers.10.mlp.up_proj.weight": "pytorch_model-00005-of-00015.bin",
"model.layers.10.post_attention_layernorm.weight": "pytorch_model-00005-of-00015.bin",
"model.layers.10.self_attn.k_proj.weight": "pytorch_model-00005-of-00015.bin",
"model.layers.10.self_attn.o_proj.weight": "pytorch_model-00005-of-00015.bin",
"model.layers.10.self_attn.q_proj.weight": "pytorch_model-00005-of-00015.bin",
"model.layers.10.self_attn.rotary_emb.inv_freq": "pytorch_model-00005-of-00015.bin",
"model.layers.10.self_attn.v_proj.weight": "pytorch_model-00005-of-00015.bin",
"model.layers.11.input_layernorm.weight": "pytorch_model-00006-of-00015.bin",
"model.layers.11.mlp.down_proj.weight": "pytorch_model-00006-of-00015.bin",
"model.layers.11.mlp.gate_proj.weight": "pytorch_model-00006-of-00015.bin",
"model.layers.11.mlp.up_proj.weight": "pytorch_model-00006-of-00015.bin",
"model.layers.11.post_attention_layernorm.weight": "pytorch_model-00006-of-00015.bin",
"model.layers.11.self_attn.k_proj.weight": "pytorch_model-00006-of-00015.bin",
"model.layers.11.self_attn.o_proj.weight": "pytorch_model-00006-of-00015.bin",
"model.layers.11.self_attn.q_proj.weight": "pytorch_model-00006-of-00015.bin",
"model.layers.11.self_attn.rotary_emb.inv_freq": "pytorch_model-00006-of-00015.bin",
"model.layers.11.self_attn.v_proj.weight": "pytorch_model-00006-of-00015.bin",
"model.layers.12.input_layernorm.weight": "pytorch_model-00006-of-00015.bin",
"model.layers.12.mlp.down_proj.weight": "pytorch_model-00006-of-00015.bin",
"model.layers.12.mlp.gate_proj.weight": "pytorch_model-00006-of-00015.bin",
"model.layers.12.mlp.up_proj.weight": "pytorch_model-00006-of-00015.bin",
"model.layers.12.post_attention_layernorm.weight": "pytorch_model-00006-of-00015.bin",
"model.layers.12.self_attn.k_proj.weight": "pytorch_model-00006-of-00015.bin",
"model.layers.12.self_attn.o_proj.weight": "pytorch_model-00006-of-00015.bin",
"model.layers.12.self_attn.q_proj.weight": "pytorch_model-00006-of-00015.bin",
"model.layers.12.self_attn.rotary_emb.inv_freq": "pytorch_model-00006-of-00015.bin",
"model.layers.12.self_attn.v_proj.weight": "pytorch_model-00006-of-00015.bin",
"model.layers.13.input_layernorm.weight": "pytorch_model-00007-of-00015.bin",
"model.layers.13.mlp.down_proj.weight": "pytorch_model-00007-of-00015.bin",
"model.layers.13.mlp.gate_proj.weight": "pytorch_model-00007-of-00015.bin",
"model.layers.13.mlp.up_proj.weight": "pytorch_model-00007-of-00015.bin",
"model.layers.13.post_attention_layernorm.weight": "pytorch_model-00007-of-00015.bin",
"model.layers.13.self_attn.k_proj.weight": "pytorch_model-00006-of-00015.bin",
"model.layers.13.self_attn.o_proj.weight": "pytorch_model-00006-of-00015.bin",
"model.layers.13.self_attn.q_proj.weight": "pytorch_model-00006-of-00015.bin",
"model.layers.13.self_attn.rotary_emb.inv_freq": "pytorch_model-00006-of-00015.bin",
"model.layers.13.self_attn.v_proj.weight": "pytorch_model-00006-of-00015.bin",
"model.layers.14.input_layernorm.weight": "pytorch_model-00007-of-00015.bin",
"model.layers.14.mlp.down_proj.weight": "pytorch_model-00007-of-00015.bin",
"model.layers.14.mlp.gate_proj.weight": "pytorch_model-00007-of-00015.bin",
"model.layers.14.mlp.up_proj.weight": "pytorch_model-00007-of-00015.bin",
"model.layers.14.post_attention_layernorm.weight": "pytorch_model-00007-of-00015.bin",
"model.layers.14.self_attn.k_proj.weight": "pytorch_model-00007-of-00015.bin",
"model.layers.14.self_attn.o_proj.weight": "pytorch_model-00007-of-00015.bin",
"model.layers.14.self_attn.q_proj.weight": "pytorch_model-00007-of-00015.bin",
"model.layers.14.self_attn.rotary_emb.inv_freq": "pytorch_model-00007-of-00015.bin",
"model.layers.14.self_attn.v_proj.weight": "pytorch_model-00007-of-00015.bin",
"model.layers.15.input_layernorm.weight": "pytorch_model-00008-of-00015.bin",
"model.layers.15.mlp.down_proj.weight": "pytorch_model-00007-of-00015.bin",
"model.layers.15.mlp.gate_proj.weight": "pytorch_model-00007-of-00015.bin",
"model.layers.15.mlp.up_proj.weight": "pytorch_model-00008-of-00015.bin",
"model.layers.15.post_attention_layernorm.weight": "pytorch_model-00008-of-00015.bin",
"model.layers.15.self_attn.k_proj.weight": "pytorch_model-00007-of-00015.bin",
"model.layers.15.self_attn.o_proj.weight": "pytorch_model-00007-of-00015.bin",
"model.layers.15.self_attn.q_proj.weight": "pytorch_model-00007-of-00015.bin",
"model.layers.15.self_attn.rotary_emb.inv_freq": "pytorch_model-00007-of-00015.bin",
"model.layers.15.self_attn.v_proj.weight": "pytorch_model-00007-of-00015.bin",
"model.layers.16.input_layernorm.weight": "pytorch_model-00008-of-00015.bin",
"model.layers.16.mlp.down_proj.weight": "pytorch_model-00008-of-00015.bin",
"model.layers.16.mlp.gate_proj.weight": "pytorch_model-00008-of-00015.bin",
"model.layers.16.mlp.up_proj.weight": "pytorch_model-00008-of-00015.bin",
"model.layers.16.post_attention_layernorm.weight": "pytorch_model-00008-of-00015.bin",
"model.layers.16.self_attn.k_proj.weight": "pytorch_model-00008-of-00015.bin",
"model.layers.16.self_attn.o_proj.weight": "pytorch_model-00008-of-00015.bin",
"model.layers.16.self_attn.q_proj.weight": "pytorch_model-00008-of-00015.bin",
"model.layers.16.self_attn.rotary_emb.inv_freq": "pytorch_model-00008-of-00015.bin",
"model.layers.16.self_attn.v_proj.weight": "pytorch_model-00008-of-00015.bin",
"model.layers.17.input_layernorm.weight": "pytorch_model-00008-of-00015.bin",
"model.layers.17.mlp.down_proj.weight": "pytorch_model-00008-of-00015.bin",
"model.layers.17.mlp.gate_proj.weight": "pytorch_model-00008-of-00015.bin",
"model.layers.17.mlp.up_proj.weight": "pytorch_model-00008-of-00015.bin",
"model.layers.17.post_attention_layernorm.weight": "pytorch_model-00008-of-00015.bin",
"model.layers.17.self_attn.k_proj.weight": "pytorch_model-00008-of-00015.bin",
"model.layers.17.self_attn.o_proj.weight": "pytorch_model-00008-of-00015.bin",
"model.layers.17.self_attn.q_proj.weight": "pytorch_model-00008-of-00015.bin",
"model.layers.17.self_attn.rotary_emb.inv_freq": "pytorch_model-00008-of-00015.bin",
"model.layers.17.self_attn.v_proj.weight": "pytorch_model-00008-of-00015.bin",
"model.layers.18.input_layernorm.weight": "pytorch_model-00009-of-00015.bin",
"model.layers.18.mlp.down_proj.weight": "pytorch_model-00009-of-00015.bin",
"model.layers.18.mlp.gate_proj.weight": "pytorch_model-00009-of-00015.bin",
"model.layers.18.mlp.up_proj.weight": "pytorch_model-00009-of-00015.bin",
"model.layers.18.post_attention_layernorm.weight": "pytorch_model-00009-of-00015.bin",
"model.layers.18.self_attn.k_proj.weight": "pytorch_model-00008-of-00015.bin",
"model.layers.18.self_attn.o_proj.weight": "pytorch_model-00009-of-00015.bin",
"model.layers.18.self_attn.q_proj.weight": "pytorch_model-00008-of-00015.bin",
"model.layers.18.self_attn.rotary_emb.inv_freq": "pytorch_model-00009-of-00015.bin",
"model.layers.18.self_attn.v_proj.weight": "pytorch_model-00009-of-00015.bin",
"model.layers.19.input_layernorm.weight": "pytorch_model-00009-of-00015.bin",
"model.layers.19.mlp.down_proj.weight": "pytorch_model-00009-of-00015.bin",
"model.layers.19.mlp.gate_proj.weight": "pytorch_model-00009-of-00015.bin",
"model.layers.19.mlp.up_proj.weight": "pytorch_model-00009-of-00015.bin",
"model.layers.19.post_attention_layernorm.weight": "pytorch_model-00009-of-00015.bin",
"model.layers.19.self_attn.k_proj.weight": "pytorch_model-00009-of-00015.bin",
"model.layers.19.self_attn.o_proj.weight": "pytorch_model-00009-of-00015.bin",
"model.layers.19.self_attn.q_proj.weight": "pytorch_model-00009-of-00015.bin",
"model.layers.19.self_attn.rotary_emb.inv_freq": "pytorch_model-00009-of-00015.bin",
"model.layers.19.self_attn.v_proj.weight": "pytorch_model-00009-of-00015.bin",
"model.layers.2.input_layernorm.weight": "pytorch_model-00002-of-00015.bin",
"model.layers.2.mlp.down_proj.weight": "pytorch_model-00002-of-00015.bin",
"model.layers.2.mlp.gate_proj.weight": "pytorch_model-00002-of-00015.bin",
"model.layers.2.mlp.up_proj.weight": "pytorch_model-00002-of-00015.bin",
"model.layers.2.post_attention_layernorm.weight": "pytorch_model-00002-of-00015.bin",
"model.layers.2.self_attn.k_proj.weight": "pytorch_model-00002-of-00015.bin",
"model.layers.2.self_attn.o_proj.weight": "pytorch_model-00002-of-00015.bin",
"model.layers.2.self_attn.q_proj.weight": "pytorch_model-00002-of-00015.bin",
"model.layers.2.self_attn.rotary_emb.inv_freq": "pytorch_model-00002-of-00015.bin",
"model.layers.2.self_attn.v_proj.weight": "pytorch_model-00002-of-00015.bin",
"model.layers.20.input_layernorm.weight": "pytorch_model-00010-of-00015.bin",
"model.layers.20.mlp.down_proj.weight": "pytorch_model-00010-of-00015.bin",
"model.layers.20.mlp.gate_proj.weight": "pytorch_model-00009-of-00015.bin",
"model.layers.20.mlp.up_proj.weight": "pytorch_model-00010-of-00015.bin",
"model.layers.20.post_attention_layernorm.weight": "pytorch_model-00010-of-00015.bin",
"model.layers.20.self_attn.k_proj.weight": "pytorch_model-00009-of-00015.bin",
"model.layers.20.self_attn.o_proj.weight": "pytorch_model-00009-of-00015.bin",
"model.layers.20.self_attn.q_proj.weight": "pytorch_model-00009-of-00015.bin",
"model.layers.20.self_attn.rotary_emb.inv_freq": "pytorch_model-00009-of-00015.bin",
"model.layers.20.self_attn.v_proj.weight": "pytorch_model-00009-of-00015.bin",
"model.layers.21.input_layernorm.weight": "pytorch_model-00010-of-00015.bin",
"model.layers.21.mlp.down_proj.weight": "pytorch_model-00010-of-00015.bin",
"model.layers.21.mlp.gate_proj.weight": "pytorch_model-00010-of-00015.bin",
"model.layers.21.mlp.up_proj.weight": "pytorch_model-00010-of-00015.bin",
"model.layers.21.post_attention_layernorm.weight": "pytorch_model-00010-of-00015.bin",
"model.layers.21.self_attn.k_proj.weight": "pytorch_model-00010-of-00015.bin",
"model.layers.21.self_attn.o_proj.weight": "pytorch_model-00010-of-00015.bin",
"model.layers.21.self_attn.q_proj.weight": "pytorch_model-00010-of-00015.bin",
"model.layers.21.self_attn.rotary_emb.inv_freq": "pytorch_model-00010-of-00015.bin",
"model.layers.21.self_attn.v_proj.weight": "pytorch_model-00010-of-00015.bin",
"model.layers.22.input_layernorm.weight": "pytorch_model-00010-of-00015.bin",
"model.layers.22.mlp.down_proj.weight": "pytorch_model-00010-of-00015.bin",
"model.layers.22.mlp.gate_proj.weight": "pytorch_model-00010-of-00015.bin",
"model.layers.22.mlp.up_proj.weight": "pytorch_model-00010-of-00015.bin",
"model.layers.22.post_attention_layernorm.weight": "pytorch_model-00010-of-00015.bin",
"model.layers.22.self_attn.k_proj.weight": "pytorch_model-00010-of-00015.bin",
"model.layers.22.self_attn.o_proj.weight": "pytorch_model-00010-of-00015.bin",
"model.layers.22.self_attn.q_proj.weight": "pytorch_model-00010-of-00015.bin",
"model.layers.22.self_attn.rotary_emb.inv_freq": "pytorch_model-00010-of-00015.bin",
"model.layers.22.self_attn.v_proj.weight": "pytorch_model-00010-of-00015.bin",
"model.layers.23.input_layernorm.weight": "pytorch_model-00011-of-00015.bin",
"model.layers.23.mlp.down_proj.weight": "pytorch_model-00011-of-00015.bin",
"model.layers.23.mlp.gate_proj.weight": "pytorch_model-00011-of-00015.bin",
"model.layers.23.mlp.up_proj.weight": "pytorch_model-00011-of-00015.bin",
"model.layers.23.post_attention_layernorm.weight": "pytorch_model-00011-of-00015.bin",
"model.layers.23.self_attn.k_proj.weight": "pytorch_model-00011-of-00015.bin",
"model.layers.23.self_attn.o_proj.weight": "pytorch_model-00011-of-00015.bin",
"model.layers.23.self_attn.q_proj.weight": "pytorch_model-00011-of-00015.bin",
"model.layers.23.self_attn.rotary_emb.inv_freq": "pytorch_model-00011-of-00015.bin",
"model.layers.23.self_attn.v_proj.weight": "pytorch_model-00011-of-00015.bin",
"model.layers.24.input_layernorm.weight": "pytorch_model-00011-of-00015.bin",
"model.layers.24.mlp.down_proj.weight": "pytorch_model-00011-of-00015.bin",
"model.layers.24.mlp.gate_proj.weight": "pytorch_model-00011-of-00015.bin",
"model.layers.24.mlp.up_proj.weight": "pytorch_model-00011-of-00015.bin",
"model.layers.24.post_attention_layernorm.weight": "pytorch_model-00011-of-00015.bin",
"model.layers.24.self_attn.k_proj.weight": "pytorch_model-00011-of-00015.bin",
"model.layers.24.self_attn.o_proj.weight": "pytorch_model-00011-of-00015.bin",
"model.layers.24.self_attn.q_proj.weight": "pytorch_model-00011-of-00015.bin",
"model.layers.24.self_attn.rotary_emb.inv_freq": "pytorch_model-00011-of-00015.bin",
"model.layers.24.self_attn.v_proj.weight": "pytorch_model-00011-of-00015.bin",
"model.layers.25.input_layernorm.weight": "pytorch_model-00012-of-00015.bin",
"model.layers.25.mlp.down_proj.weight": "pytorch_model-00012-of-00015.bin",
"model.layers.25.mlp.gate_proj.weight": "pytorch_model-00012-of-00015.bin",
"model.layers.25.mlp.up_proj.weight": "pytorch_model-00012-of-00015.bin",
"model.layers.25.post_attention_layernorm.weight": "pytorch_model-00012-of-00015.bin",
"model.layers.25.self_attn.k_proj.weight": "pytorch_model-00011-of-00015.bin",
"model.layers.25.self_attn.o_proj.weight": "pytorch_model-00011-of-00015.bin",
"model.layers.25.self_attn.q_proj.weight": "pytorch_model-00011-of-00015.bin",
"model.layers.25.self_attn.rotary_emb.inv_freq": "pytorch_model-00011-of-00015.bin",
"model.layers.25.self_attn.v_proj.weight": "pytorch_model-00011-of-00015.bin",
"model.layers.26.input_layernorm.weight": "pytorch_model-00012-of-00015.bin",
"model.layers.26.mlp.down_proj.weight": "pytorch_model-00012-of-00015.bin",
"model.layers.26.mlp.gate_proj.weight": "pytorch_model-00012-of-00015.bin",
"model.layers.26.mlp.up_proj.weight": "pytorch_model-00012-of-00015.bin",
"model.layers.26.post_attention_layernorm.weight": "pytorch_model-00012-of-00015.bin",
"model.layers.26.self_attn.k_proj.weight": "pytorch_model-00012-of-00015.bin",
"model.layers.26.self_attn.o_proj.weight": "pytorch_model-00012-of-00015.bin",
"model.layers.26.self_attn.q_proj.weight": "pytorch_model-00012-of-00015.bin",
"model.layers.26.self_attn.rotary_emb.inv_freq": "pytorch_model-00012-of-00015.bin",
"model.layers.26.self_attn.v_proj.weight": "pytorch_model-00012-of-00015.bin",
"model.layers.27.input_layernorm.weight": "pytorch_model-00013-of-00015.bin",
"model.layers.27.mlp.down_proj.weight": "pytorch_model-00012-of-00015.bin",
"model.layers.27.mlp.gate_proj.weight": "pytorch_model-00012-of-00015.bin",
"model.layers.27.mlp.up_proj.weight": "pytorch_model-00013-of-00015.bin",
"model.layers.27.post_attention_layernorm.weight": "pytorch_model-00013-of-00015.bin",
"model.layers.27.self_attn.k_proj.weight": "pytorch_model-00012-of-00015.bin",
"model.layers.27.self_attn.o_proj.weight": "pytorch_model-00012-of-00015.bin",
"model.layers.27.self_attn.q_proj.weight": "pytorch_model-00012-of-00015.bin",
"model.layers.27.self_attn.rotary_emb.inv_freq": "pytorch_model-00012-of-00015.bin",
"model.layers.27.self_attn.v_proj.weight": "pytorch_model-00012-of-00015.bin",
"model.layers.28.input_layernorm.weight": "pytorch_model-00013-of-00015.bin",
"model.layers.28.mlp.down_proj.weight": "pytorch_model-00013-of-00015.bin",
"model.layers.28.mlp.gate_proj.weight": "pytorch_model-00013-of-00015.bin",
"model.layers.28.mlp.up_proj.weight": "pytorch_model-00013-of-00015.bin",
"model.layers.28.post_attention_layernorm.weight": "pytorch_model-00013-of-00015.bin",
"model.layers.28.self_attn.k_proj.weight": "pytorch_model-00013-of-00015.bin",
"model.layers.28.self_attn.o_proj.weight": "pytorch_model-00013-of-00015.bin",
"model.layers.28.self_attn.q_proj.weight": "pytorch_model-00013-of-00015.bin",
"model.layers.28.self_attn.rotary_emb.inv_freq": "pytorch_model-00013-of-00015.bin",
"model.layers.28.self_attn.v_proj.weight": "pytorch_model-00013-of-00015.bin",
"model.layers.29.input_layernorm.weight": "pytorch_model-00013-of-00015.bin",
"model.layers.29.mlp.down_proj.weight": "pytorch_model-00013-of-00015.bin",
"model.layers.29.mlp.gate_proj.weight": "pytorch_model-00013-of-00015.bin",
"model.layers.29.mlp.up_proj.weight": "pytorch_model-00013-of-00015.bin",
"model.layers.29.post_attention_layernorm.weight": "pytorch_model-00013-of-00015.bin",
"model.layers.29.self_attn.k_proj.weight": "pytorch_model-00013-of-00015.bin",
"model.layers.29.self_attn.o_proj.weight": "pytorch_model-00013-of-00015.bin",
"model.layers.29.self_attn.q_proj.weight": "pytorch_model-00013-of-00015.bin",
"model.layers.29.self_attn.rotary_emb.inv_freq": "pytorch_model-00013-of-00015.bin",
"model.layers.29.self_attn.v_proj.weight": "pytorch_model-00013-of-00015.bin",
"model.layers.3.input_layernorm.weight": "pytorch_model-00003-of-00015.bin",
"model.layers.3.mlp.down_proj.weight": "pytorch_model-00002-of-00015.bin",
"model.layers.3.mlp.gate_proj.weight": "pytorch_model-00002-of-00015.bin",
"model.layers.3.mlp.up_proj.weight": "pytorch_model-00003-of-00015.bin",
"model.layers.3.post_attention_layernorm.weight": "pytorch_model-00003-of-00015.bin",
"model.layers.3.self_attn.k_proj.weight": "pytorch_model-00002-of-00015.bin",
"model.layers.3.self_attn.o_proj.weight": "pytorch_model-00002-of-00015.bin",
"model.layers.3.self_attn.q_proj.weight": "pytorch_model-00002-of-00015.bin",
"model.layers.3.self_attn.rotary_emb.inv_freq": "pytorch_model-00002-of-00015.bin",
"model.layers.3.self_attn.v_proj.weight": "pytorch_model-00002-of-00015.bin",
"model.layers.30.input_layernorm.weight": "pytorch_model-00014-of-00015.bin",
"model.layers.30.mlp.down_proj.weight": "pytorch_model-00014-of-00015.bin",
"model.layers.30.mlp.gate_proj.weight": "pytorch_model-00014-of-00015.bin",
"model.layers.30.mlp.up_proj.weight": "pytorch_model-00014-of-00015.bin",
"model.layers.30.post_attention_layernorm.weight": "pytorch_model-00014-of-00015.bin",
"model.layers.30.self_attn.k_proj.weight": "pytorch_model-00013-of-00015.bin",
"model.layers.30.self_attn.o_proj.weight": "pytorch_model-00014-of-00015.bin",
"model.layers.30.self_attn.q_proj.weight": "pytorch_model-00013-of-00015.bin",
"model.layers.30.self_attn.rotary_emb.inv_freq": "pytorch_model-00014-of-00015.bin",
"model.layers.30.self_attn.v_proj.weight": "pytorch_model-00014-of-00015.bin",
"model.layers.31.input_layernorm.weight": "pytorch_model-00014-of-00015.bin",
"model.layers.31.mlp.down_proj.weight": "pytorch_model-00014-of-00015.bin",
"model.layers.31.mlp.gate_proj.weight": "pytorch_model-00014-of-00015.bin",
"model.layers.31.mlp.up_proj.weight": "pytorch_model-00014-of-00015.bin",
"model.layers.31.post_attention_layernorm.weight": "pytorch_model-00014-of-00015.bin",
"model.layers.31.self_attn.k_proj.weight": "pytorch_model-00014-of-00015.bin",
"model.layers.31.self_attn.o_proj.weight": "pytorch_model-00014-of-00015.bin",
"model.layers.31.self_attn.q_proj.weight": "pytorch_model-00014-of-00015.bin",
"model.layers.31.self_attn.rotary_emb.inv_freq": "pytorch_model-00014-of-00015.bin",
"model.layers.31.self_attn.v_proj.weight": "pytorch_model-00014-of-00015.bin",
"model.layers.4.input_layernorm.weight": "pytorch_model-00003-of-00015.bin",
"model.layers.4.mlp.down_proj.weight": "pytorch_model-00003-of-00015.bin",
"model.layers.4.mlp.gate_proj.weight": "pytorch_model-00003-of-00015.bin",
"model.layers.4.mlp.up_proj.weight": "pytorch_model-00003-of-00015.bin",
"model.layers.4.post_attention_layernorm.weight": "pytorch_model-00003-of-00015.bin",
"model.layers.4.self_attn.k_proj.weight": "pytorch_model-00003-of-00015.bin",
"model.layers.4.self_attn.o_proj.weight": "pytorch_model-00003-of-00015.bin",
"model.layers.4.self_attn.q_proj.weight": "pytorch_model-00003-of-00015.bin",
"model.layers.4.self_attn.rotary_emb.inv_freq": "pytorch_model-00003-of-00015.bin",
"model.layers.4.self_attn.v_proj.weight": "pytorch_model-00003-of-00015.bin",
"model.layers.5.input_layernorm.weight": "pytorch_model-00003-of-00015.bin",
"model.layers.5.mlp.down_proj.weight": "pytorch_model-00003-of-00015.bin",
"model.layers.5.mlp.gate_proj.weight": "pytorch_model-00003-of-00015.bin",
"model.layers.5.mlp.up_proj.weight": "pytorch_model-00003-of-00015.bin",
"model.layers.5.post_attention_layernorm.weight": "pytorch_model-00003-of-00015.bin",
"model.layers.5.self_attn.k_proj.weight": "pytorch_model-00003-of-00015.bin",
"model.layers.5.self_attn.o_proj.weight": "pytorch_model-00003-of-00015.bin",
"model.layers.5.self_attn.q_proj.weight": "pytorch_model-00003-of-00015.bin",
"model.layers.5.self_attn.rotary_emb.inv_freq": "pytorch_model-00003-of-00015.bin",
"model.layers.5.self_attn.v_proj.weight": "pytorch_model-00003-of-00015.bin",
"model.layers.6.input_layernorm.weight": "pytorch_model-00004-of-00015.bin",
"model.layers.6.mlp.down_proj.weight": "pytorch_model-00004-of-00015.bin",
"model.layers.6.mlp.gate_proj.weight": "pytorch_model-00004-of-00015.bin",
"model.layers.6.mlp.up_proj.weight": "pytorch_model-00004-of-00015.bin",
"model.layers.6.post_attention_layernorm.weight": "pytorch_model-00004-of-00015.bin",
"model.layers.6.self_attn.k_proj.weight": "pytorch_model-00003-of-00015.bin",
"model.layers.6.self_attn.o_proj.weight": "pytorch_model-00004-of-00015.bin",
"model.layers.6.self_attn.q_proj.weight": "pytorch_model-00003-of-00015.bin",
"model.layers.6.self_attn.rotary_emb.inv_freq": "pytorch_model-00004-of-00015.bin",
"model.layers.6.self_attn.v_proj.weight": "pytorch_model-00004-of-00015.bin",
"model.layers.7.input_layernorm.weight": "pytorch_model-00004-of-00015.bin",
"model.layers.7.mlp.down_proj.weight": "pytorch_model-00004-of-00015.bin",
"model.layers.7.mlp.gate_proj.weight": "pytorch_model-00004-of-00015.bin",
"model.layers.7.mlp.up_proj.weight": "pytorch_model-00004-of-00015.bin",
"model.layers.7.post_attention_layernorm.weight": "pytorch_model-00004-of-00015.bin",
"model.layers.7.self_attn.k_proj.weight": "pytorch_model-00004-of-00015.bin",
"model.layers.7.self_attn.o_proj.weight": "pytorch_model-00004-of-00015.bin",
"model.layers.7.self_attn.q_proj.weight": "pytorch_model-00004-of-00015.bin",
"model.layers.7.self_attn.rotary_emb.inv_freq": "pytorch_model-00004-of-00015.bin",
"model.layers.7.self_attn.v_proj.weight": "pytorch_model-00004-of-00015.bin",
"model.layers.8.input_layernorm.weight": "pytorch_model-00005-of-00015.bin",
"model.layers.8.mlp.down_proj.weight": "pytorch_model-00005-of-00015.bin",
"model.layers.8.mlp.gate_proj.weight": "pytorch_model-00004-of-00015.bin",
"model.layers.8.mlp.up_proj.weight": "pytorch_model-00005-of-00015.bin",
"model.layers.8.post_attention_layernorm.weight": "pytorch_model-00005-of-00015.bin",
"model.layers.8.self_attn.k_proj.weight": "pytorch_model-00004-of-00015.bin",
"model.layers.8.self_attn.o_proj.weight": "pytorch_model-00004-of-00015.bin",
"model.layers.8.self_attn.q_proj.weight": "pytorch_model-00004-of-00015.bin",
"model.layers.8.self_attn.rotary_emb.inv_freq": "pytorch_model-00004-of-00015.bin",
"model.layers.8.self_attn.v_proj.weight": "pytorch_model-00004-of-00015.bin",
"model.layers.9.input_layernorm.weight": "pytorch_model-00005-of-00015.bin",
"model.layers.9.mlp.down_proj.weight": "pytorch_model-00005-of-00015.bin",
"model.layers.9.mlp.gate_proj.weight": "pytorch_model-00005-of-00015.bin",
"model.layers.9.mlp.up_proj.weight": "pytorch_model-00005-of-00015.bin",
"model.layers.9.post_attention_layernorm.weight": "pytorch_model-00005-of-00015.bin",
"model.layers.9.self_attn.k_proj.weight": "pytorch_model-00005-of-00015.bin",
"model.layers.9.self_attn.o_proj.weight": "pytorch_model-00005-of-00015.bin",
"model.layers.9.self_attn.q_proj.weight": "pytorch_model-00005-of-00015.bin",
"model.layers.9.self_attn.rotary_emb.inv_freq": "pytorch_model-00005-of-00015.bin",
"model.layers.9.self_attn.v_proj.weight": "pytorch_model-00005-of-00015.bin",
"model.norm.weight": "pytorch_model-00014-of-00015.bin"
}
}
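The weight_map above tells a loader which of the 15 shard files holds each parameter, so a single tensor can be fetched without reading the whole checkpoint. A minimal sketch of how the index can be used directly (assuming the index and shard files sit in the current directory; this is an illustrative stand-in for what transformers does internally, not a library API):

import json
import torch

# Read the index file whose weight_map is shown above.
with open("pytorch_model.bin.index.json") as f:
    index = json.load(f)
weight_map = index["weight_map"]

# Look up which shard holds a given parameter, then load only that shard.
name = "model.layers.13.self_attn.q_proj.weight"
shard_file = weight_map[name]  # per the map above: "pytorch_model-00006-of-00015.bin"
shard = torch.load(shard_file, map_location="cpu")  # dict of tensors in that shard
tensor = shard[name]
print(name, tuple(tensor.shape))

Note that a layer's tensors need not share a shard (e.g. layer 13's self_attn weights land in shard 00006 while its mlp weights land in 00007); shards are split by cumulative size, so any per-tensor lookup should go through the map rather than assume one-shard-per-layer.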

23
special_tokens_map.json Normal file
View File

@@ -0,0 +1,23 @@
{
"bos_token": {
"content": "<s>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
},
"eos_token": {
"content": "</s>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
},
"unk_token": {
"content": "<unk>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
}
}

124264
tokenizer.json Normal file

File diff suppressed because it is too large

39
tokenizer_config.json Normal file
View File

@@ -0,0 +1,39 @@
{
"bos_token": {
"__type": "AddedToken",
"content": "<s>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
},
"clean_up_tokenization_spaces": false,
"eos_token": {
"__type": "AddedToken",
"content": "</s>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
},
"legacy": false,
"model_max_length": 1000000000000000019884624838656,
"pad_token": {
"__type": "AddedToken",
"content": "</s>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
},
"sp_model_kwargs": {},
"tokenizer_class": "LlamaTokenizer",
"unk_token": {
"__type": "AddedToken",
"content": "<unk>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
}
}
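Together, special_tokens_map.json and tokenizer_config.json configure a LlamaTokenizer whose pad token reuses </s> and whose model_max_length is left at the transformers "very large" sentinel (about 1e30, i.e. no effective truncation limit). A minimal sketch of loading and inspecting it (assuming the files above are saved to a local directory named ./llama-2-ko-7b and that transformers and sentencepiece are installed):

from transformers import AutoTokenizer

# Picks up tokenizer.json, tokenizer_config.json and special_tokens_map.json
# from the repository directory.
tok = AutoTokenizer.from_pretrained("./llama-2-ko-7b")

print(tok.bos_token, tok.eos_token, tok.unk_token, tok.pad_token)  # <s> </s> <unk> </s>
print(tok.model_max_length)  # the ~1e30 sentinel from tokenizer_config.json

# LlamaTokenizer prepends <s> by default; </s> is not appended automatically.
ids = tok("안녕하세요").input_ids
print(ids)

Reusing </s> as the pad token is a common choice for Llama-family checkpoints that ship without a dedicated pad token; anything that depends on padding (batched generation, attention masks) should treat this as a convention of the config rather than a separate vocabulary entry.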