Initialize the project; model provided by the ModelHub XC community

Model: EleutherAI/gpt-neox-20b
Source: Original Platform
This commit is contained in:
ModelHub XC
2026-05-01 08:48:02 +08:00
commit e5cb672ed9
102 changed files with 151785 additions and 0 deletions

74
.gitattributes vendored Normal file

@@ -0,0 +1,74 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bin.* filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zstandard filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
model-00010-of-00046.safetensors filter=lfs diff=lfs merge=lfs -text
model-00012-of-00046.safetensors filter=lfs diff=lfs merge=lfs -text
model-00015-of-00046.safetensors filter=lfs diff=lfs merge=lfs -text
model-00031-of-00046.safetensors filter=lfs diff=lfs merge=lfs -text
model-00029-of-00046.safetensors filter=lfs diff=lfs merge=lfs -text
model-00005-of-00046.safetensors filter=lfs diff=lfs merge=lfs -text
model-00037-of-00046.safetensors filter=lfs diff=lfs merge=lfs -text
model-00025-of-00046.safetensors filter=lfs diff=lfs merge=lfs -text
model-00027-of-00046.safetensors filter=lfs diff=lfs merge=lfs -text
model-00018-of-00046.safetensors filter=lfs diff=lfs merge=lfs -text
model-00044-of-00046.safetensors filter=lfs diff=lfs merge=lfs -text
model-00021-of-00046.safetensors filter=lfs diff=lfs merge=lfs -text
model-00017-of-00046.safetensors filter=lfs diff=lfs merge=lfs -text
model-00023-of-00046.safetensors filter=lfs diff=lfs merge=lfs -text
model-00040-of-00046.safetensors filter=lfs diff=lfs merge=lfs -text
model-00034-of-00046.safetensors filter=lfs diff=lfs merge=lfs -text
model-00046-of-00046.safetensors filter=lfs diff=lfs merge=lfs -text
model-00022-of-00046.safetensors filter=lfs diff=lfs merge=lfs -text
model-00033-of-00046.safetensors filter=lfs diff=lfs merge=lfs -text
model-00035-of-00046.safetensors filter=lfs diff=lfs merge=lfs -text
model-00002-of-00046.safetensors filter=lfs diff=lfs merge=lfs -text
model-00020-of-00046.safetensors filter=lfs diff=lfs merge=lfs -text
model-00026-of-00046.safetensors filter=lfs diff=lfs merge=lfs -text
model-00042-of-00046.safetensors filter=lfs diff=lfs merge=lfs -text
model-00013-of-00046.safetensors filter=lfs diff=lfs merge=lfs -text
model-00019-of-00046.safetensors filter=lfs diff=lfs merge=lfs -text
model-00008-of-00046.safetensors filter=lfs diff=lfs merge=lfs -text
model-00016-of-00046.safetensors filter=lfs diff=lfs merge=lfs -text
model-00014-of-00046.safetensors filter=lfs diff=lfs merge=lfs -text
model-00030-of-00046.safetensors filter=lfs diff=lfs merge=lfs -text
model-00028-of-00046.safetensors filter=lfs diff=lfs merge=lfs -text
model-00045-of-00046.safetensors filter=lfs diff=lfs merge=lfs -text
model-00024-of-00046.safetensors filter=lfs diff=lfs merge=lfs -text
model-00009-of-00046.safetensors filter=lfs diff=lfs merge=lfs -text
model-00036-of-00046.safetensors filter=lfs diff=lfs merge=lfs -text
model-00039-of-00046.safetensors filter=lfs diff=lfs merge=lfs -text
model-00006-of-00046.safetensors filter=lfs diff=lfs merge=lfs -text
model-00011-of-00046.safetensors filter=lfs diff=lfs merge=lfs -text
model-00007-of-00046.safetensors filter=lfs diff=lfs merge=lfs -text
model-00032-of-00046.safetensors filter=lfs diff=lfs merge=lfs -text
model-00041-of-00046.safetensors filter=lfs diff=lfs merge=lfs -text
model-00001-of-00046.safetensors filter=lfs diff=lfs merge=lfs -text
model-00043-of-00046.safetensors filter=lfs diff=lfs merge=lfs -text
model-00038-of-00046.safetensors filter=lfs diff=lfs merge=lfs -text
model-00004-of-00046.safetensors filter=lfs diff=lfs merge=lfs -text
model-00003-of-00046.safetensors filter=lfs diff=lfs merge=lfs -text
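The patterns above route large binaries (archives, checkpoints, serialized tensors) through Git LFS instead of storing them in the repository directly. As a rough illustration of how a filename is matched against them (gitattributes uses gitignore-style globs; Python's `fnmatch` is only an approximation, and only a subset of the patterns is shown):

```python
from fnmatch import fnmatch

# A few of the glob patterns from the .gitattributes above.
lfs_patterns = ["*.bin", "*.onnx", "*.zip", "*tfevents*"]

def routed_to_lfs(filename: str) -> bool:
    """Approximate check: does any of the listed LFS patterns match this filename?"""
    return any(fnmatch(filename, pattern) for pattern in lfs_patterns)

print(routed_to_lfs("pytorch_model.bin"))  # True
print(routed_to_lfs("config.json"))        # False
```

Note that the safetensors shards are not covered by a `*.safetensors` glob here; each shard is listed explicitly in the `.gitattributes` above.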

196
README.md Normal file

@@ -0,0 +1,196 @@
---
language:
- en
tags:
- pytorch
- causal-lm
license: apache-2.0
datasets:
- EleutherAI/pile
---
GPT-NeoX-20B is a 20 billion parameter autoregressive language model trained
on [the Pile](https://pile.eleuther.ai/) using the [GPT-NeoX
library](https://github.com/EleutherAI/gpt-neox). Its architecture intentionally
resembles that of GPT-3, and is almost identical to that of [GPT-J-
6B](https://huggingface.co/EleutherAI/gpt-j-6B). Its training dataset contains
a multitude of English-language texts, reflecting the general-purpose nature
of this model. See the [accompanying paper](https://arxiv.org/abs/2204.06745)
for details about model architecture (including how it differs from GPT-3),
training procedure, and additional evaluations.
### Model details
- Developed by: [EleutherAI](http://eleuther.ai)
- Model type: Transformer-based Language Model
- Language: English
- Learn more: [GPT-NeoX-20B: An Open-Source Autoregressive Language
Model](https://arxiv.org/abs/2204.06745). For details about the training dataset,
see [the Pile paper](https://arxiv.org/abs/2101.00027), and [its data
sheet](https://arxiv.org/abs/2201.07311).
- License: Apache 2.0
- Contact: to ask questions about this model, join the [EleutherAI
Discord](https://discord.gg/zBGx3azzUn), and post them in `#release-discussion`.
Please read the existing GPT-NeoX-20B documentation before asking about the model
on Discord. For general correspondence:
[contact@eleuther.ai](mailto:contact@eleuther.ai).
<figure style="width:30em">
| Hyperparameter | Value |
| ---------------------- | ----------- |
| n<sub>parameters</sub> | 20554567680 |
| n<sub>layers</sub> | 44 |
| d<sub>model</sub> | 6144 |
| n<sub>heads</sub> | 64 |
| d<sub>head</sub> | 96 |
| n<sub>vocab</sub> | 50257 |
| Sequence Length | 2048 |
| Learning Rate          | 0.97 x 10<sup>-4</sup> |
| Positional Encoding | [Rotary Position Embedding (RoPE)](https://arxiv.org/abs/2104.09864) |
</figure>
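The hyperparameters above pin down the parameter count exactly. A quick sanity check (using the padded vocabulary of 50432 from `config.json`, since the embedding matrices are sized to the padded vocab and the input/output embeddings are untied):

```python
n_layers, d_model = 44, 6144
vocab_padded = 50432  # config.json vocab_size; n_vocab = 50257 is the tokenizer size

# Per layer: QKV projection (3*d^2 + 3d), attention output projection (d^2 + d),
# MLP up/down projections (4*d^2 + 4d and 4*d^2 + d), two LayerNorms (4d total).
per_layer = 12 * d_model**2 + 13 * d_model

# Untied input and output embeddings, plus the final LayerNorm (weight + bias).
total = n_layers * per_layer + 2 * vocab_padded * d_model + 2 * d_model

print(total)  # 20554567680, matching n_parameters in the table
```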
### Uses and limitations
#### Intended use
GPT-NeoX-20B was developed primarily for research purposes. It learns an inner
representation of the English language that can be used to extract features
useful for downstream tasks.
In addition to scientific uses, you may also further fine-tune and adapt
GPT-NeoX-20B for deployment, as long as your use is in accordance with the
Apache 2.0 license. This model works with the [Transformers
Library](https://huggingface.co/docs/transformers/index). If you decide to use
pre-trained GPT-NeoX-20B as a basis for your fine-tuned model, please note that
you need to conduct your own risk and bias assessment.
#### Out-of-scope use
GPT-NeoX-20B is **not** intended for deployment as-is. It is not a product
and cannot be used for human-facing interactions without supervision.
GPT-NeoX-20B has not been fine-tuned for downstream tasks for which language
models are commonly deployed, such as writing genre prose, or commercial
chatbots. This means GPT-NeoX-20B will likely **not** respond to a given prompt
the way products such as ChatGPT do. This is because, unlike GPT-NeoX-20B,
ChatGPT was fine-tuned using methods such as Reinforcement Learning from Human
Feedback (RLHF) to better “understand” human instructions and dialogue.
This model is English-language only, and thus cannot be used for translation
or generating text in other languages.
#### Limitations and biases
The core functionality of GPT-NeoX-20B is to take a string of text and predict
the next token. Remember that the statistically most likely next token need
not result in the most “accurate” text. Never rely on GPT-NeoX-20B to produce
factually accurate output.
This model was trained on [the Pile](https://pile.eleuther.ai/), a dataset
known to contain profanity and texts that are lewd or otherwise offensive.
See [Section 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a
discussion of documented biases with regards to gender, religion, and race.
GPT-NeoX-20B may produce socially unacceptable or undesirable text, *even if*
the prompt itself does not include anything explicitly offensive.
We recommend curating the outputs of this model before presenting them to a human
reader. Please inform your audience that you are using artificially generated
text.
#### How to use
If you simply want to try out some prompts, check out [this
playground](https://20b.eleuther.ai/).
GPT-NeoX-20B can be loaded using the `AutoModelForCausalLM` functionality:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neox-20b")
```
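Once loaded, text can be sampled with `generate`. A minimal sketch (in practice the 20B weights need roughly 40GB of memory; `torch_dtype=torch.float16` and `device_map="auto"`, which requires the `accelerate` package, are reasonable defaults but are assumptions here, not part of the original snippet):

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")
model = AutoModelForCausalLM.from_pretrained(
    "EleutherAI/gpt-neox-20b",
    torch_dtype=torch.float16,  # half precision to fit in ~40GB
    device_map="auto",          # shard across available devices (needs accelerate)
)

prompt = "GPT-NeoX-20B is a 20 billion parameter autoregressive language model"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
tokens = model.generate(**inputs, do_sample=True, temperature=0.9, max_new_tokens=64)
print(tokenizer.decode(tokens[0]))
```

Because the model is sampled (`do_sample=True`), the continuation will differ between runs.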
### Training
#### Training dataset
The Pile is an 825GiB general-purpose dataset in English. It was created by
EleutherAI specifically for training large language models. It contains texts
from 22 diverse sources, roughly broken down into five categories: academic
writing (e.g. arXiv), internet (e.g. CommonCrawl), prose (e.g. Project
Gutenberg), dialogue (e.g. YouTube subtitles), and miscellaneous (e.g. GitHub,
Enron Emails). See [the Pile paper](https://arxiv.org/abs/2101.00027) for
a breakdown of all data sources, methodology, and a discussion of ethical
implications. Consult [the datasheet](https://arxiv.org/abs/2201.07311) for
more detailed documentation about the Pile and its component datasets. The
Pile can be downloaded from the [official website](https://pile.eleuther.ai/),
or from a [community mirror](https://the-eye.eu/public/AI/pile/).
The Pile was **not** deduplicated before being used to train GPT-NeoX-20B.
#### Training procedure
GPT-NeoX-20B was trained with a batch size of approximately 3.15M tokens
(1538 sequences of 2048 tokens each), for a total of 150,000 steps. Tensor
parallelism and pipeline parallelism were used to distribute the model across
GPUs. Additional details about the training procedure are in [Section 3 of
the accompanying paper](https://arxiv.org/abs/2204.06745).
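The quoted batch size and step count can be checked directly (the token total is an upper-bound approximation; it ignores schedule details):

```python
seq_per_batch, seq_len, steps = 1538, 2048, 150_000

tokens_per_batch = seq_per_batch * seq_len
total_tokens = tokens_per_batch * steps

print(tokens_per_batch)  # 3149824, i.e. ~3.15M tokens per batch as stated
print(total_tokens)      # 472473600000, ~472B tokens seen over training
```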
### Evaluations
<figure style="width:55em">
| Model | OpenAI's LAMBADA | SciQ | PIQA | TriviaQA | ARC (Challenge) |
| ------------- | :--------------: | :-----------: | :-----------: | :-----------: | :-------------: |
| GPT-J-6B | 0.683 ± 0.006 | 0.910 ± 0.009 | 0.752 ± 0.010 | 0.170 ± 0.004 | 0.340 ± 0.014 |
| FairSeq 6.7B | 0.673 ± 0.007 | 0.895 ± 0.010 | 0.762 ± 0.010 | 0.221 ± 0.004 | 0.329 ± 0.014 |
| GPT-3 Curie | 0.693 ± 0.006 | 0.918 ± 0.009 | 0.767 ± 0.010 | 0.196 ± 0.004 | 0.334 ± 0.014 |
| FairSeq 13B | 0.709 ± 0.006 | 0.910 ± 0.009 | 0.769 ± 0.010 | 0.270 ± 0.004 | 0.345 ± 0.014 |
| GPT-NeoX-20B | 0.720 ± 0.006 | 0.928 ± 0.008 | 0.779 ± 0.010 | 0.259 ± 0.004 | 0.380 ± 0.014 |
| GPT-3 DaVinci | 0.752 ± 0.006 | 0.949 ± 0.007 | 0.791 ± 0.009 | 0.409 ± 0.005 | 0.435 ± 0.014 |
<figcaption>Zero-shot performance on selected natural language tasks.</figcaption>
</figure>
This is a heavily abridged version of the evaluation results. Appendix D of the
[GPT-NeoX-20B paper](https://arxiv.org/abs/2204.06745) compares more model
sizes, and contains additional evaluations, including on: zero and five-shot
natural language tasks, zero and five-shot Basic Arithmetic and MATH,
and zero-shot Hendrycks tasks.
### BibTeX
To cite the GPT-NeoX-20B paper:
```
@misc{https://doi.org/10.48550/arxiv.2204.06745,
doi = {10.48550/ARXIV.2204.06745},
url = {https://arxiv.org/abs/2204.06745},
author = {Black, Sid and Biderman, Stella and Hallahan, Eric and Anthony, Quentin and Gao, Leo and Golding, Laurence and He, Horace and Leahy, Connor and McDonell, Kyle and Phang, Jason and Pieler, Michael and Prashanth, USVSN Sai and Purohit, Shivanshu and Reynolds, Laria and Tow, Jonathan and Wang, Ben and Weinbach, Samuel},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {GPT-NeoX-20B: An Open-Source Autoregressive Language Model},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_EleutherAI__gpt-neox-20b)
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 36.02 |
| ARC (25-shot) | 45.73 |
| HellaSwag (10-shot) | 73.45 |
| MMLU (5-shot) | 25.0 |
| TruthfulQA (0-shot) | 31.61 |
| Winogrande (5-shot) | 68.9 |
| GSM8K (5-shot) | 2.43 |
| DROP (3-shot) | 5.04 |
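The Avg. row is simply the mean of the seven benchmark scores, which is easy to verify:

```python
scores = {
    "ARC (25-shot)": 45.73,
    "HellaSwag (10-shot)": 73.45,
    "MMLU (5-shot)": 25.0,
    "TruthfulQA (0-shot)": 31.61,
    "Winogrande (5-shot)": 68.9,
    "GSM8K (5-shot)": 2.43,
    "DROP (3-shot)": 5.04,
}
avg = sum(scores.values()) / len(scores)
print(round(avg, 2))  # 36.02, matching the Avg. row in the table
```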

25
config.json Normal file

@@ -0,0 +1,25 @@
{
"architectures": [
"GPTNeoXForCausalLM"
],
"attention_probs_dropout_prob": 0,
"bos_token_id": 0,
"eos_token_id": 0,
"hidden_act": "gelu_fast",
"hidden_dropout_prob": 0,
"hidden_size": 6144,
"initializer_range": 0.02,
"intermediate_size": 24576,
"layer_norm_eps": 1e-05,
"max_position_embeddings": 2048,
"model_type": "gpt_neox",
"num_attention_heads": 64,
"num_hidden_layers": 44,
"rotary_emb_base": 10000,
"rotary_pct": 0.25,
"tie_word_embeddings": false,
"torch_dtype": "float16",
"transformers_version": "4.19.0.dev0",
"use_cache": true,
"vocab_size": 50432
}
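Several fields in `config.json` are linked to the hyperparameter table above: the MLP width is 4x the hidden size, and the per-head dimension is `hidden_size / num_attention_heads` = 96 (the d<sub>head</sub> from the table). A quick consistency check, with the relevant config fields reproduced inline as a plain dict:

```python
# Relevant fields copied from the config.json above.
config = {
    "hidden_size": 6144,
    "intermediate_size": 24576,
    "num_attention_heads": 64,
    "num_hidden_layers": 44,
    "rotary_pct": 0.25,
}

# MLP expansion factor: intermediate_size = 4 * hidden_size.
assert config["intermediate_size"] == 4 * config["hidden_size"]

head_dim = config["hidden_size"] // config["num_attention_heads"]
print(head_dim)  # 96, the d_head from the model card table

# Rotary embeddings cover the first 25% of each head's dimensions.
print(int(config["rotary_pct"] * head_dim))  # 24
```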

50009
merges.txt Normal file

File diff suppressed because it is too large


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:24a2fc18ab96693eac02e833bf7373c78fcd4b12ae5f01651e9b75fd854dbcf1
size 925992334
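Each of the LFS entries in this commit is a small pointer file in the `key value` line format shown above; the actual bytes live in LFS storage, addressed by the sha256 oid. Parsing a pointer is straightforward (a sketch following the git-lfs pointer format):

```python
# The pointer file content shown above, reproduced verbatim.
pointer_text = """\
version https://git-lfs.github.com/spec/v1
oid sha256:24a2fc18ab96693eac02e833bf7373c78fcd4b12ae5f01651e9b75fd854dbcf1
size 925992334
"""

def parse_lfs_pointer(text: str) -> dict:
    """Split each non-empty line on the first space into a key/value pair."""
    return dict(line.split(" ", 1) for line in text.splitlines() if line)

pointer = parse_lfs_pointer(pointer_text)
print(pointer["version"])     # the LFS spec URL
print(int(pointer["size"]))   # 925992334 bytes, roughly 883 MiB
```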


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:eb326a6fc1840dabfa3a30c57c306f89f57e5f0917a351c8d045e79539a6bb1b
size 910325486


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:95effe91209682aff2935db6cbb84ff1d64083b468699cc2ccdc164343582814
size 910325486


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a0ac9466a9c1f3f954a678c5dc3dea2c883f54ae199fa70e5e221b1fe8c93dca
size 910325486


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:9d6613a915480e7fa00853c0ead97ff165073e9098eb28cdb3b0f52a5b30e3ac
size 910325486


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:cddbc8a13a7797a58f4e34a23276cb67ea3b411e6f6a0d571809ae323ab64985
size 910325486


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:097ac87764658bb0d2a1fc15424b7bcfbb039a5f6fa7f8dd6c31dd7e5f41609c
size 910325486


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:47ec7084cea6c8444ece2ad3baf03907b708c4b203d69aac285b6888834b14c8
size 910325486


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:01fdfabdb95d133ebbb1c604a03c30b8da1e69cfe0b6f7a0a6d4efa9ca3fc7a6
size 910325486


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:bcd3060ee2fbf970c8076a7e3280fbe02face84a973231011b617a7c95fb454a
size 910325486


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:fd1a57dfabafd6a99705ece673301e6bb682f8e01e67034e52f497f1bc5b922e
size 910325491


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c85e72751dffa26e549de07479f6df2e3bbbd3de0ec744d3868ba6b0944057fe
size 910325501


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:179423abc852fc930d0e6e2e794afd930960146214a3e28eb6442a055b915c95
size 910325501


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8dcc73bab4328c069f81436d1b5d900fc40420a27b13a484b0a3be817ffebdb2
size 910325501


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:1df6529dfc089ea0f68cae3de062d83e412fc88d0b2ff824acfdb1bbcc570cb7
size 910325501


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:2b1a3727b4800e51ddadbfbcbc77a62d6e6ddf47479106220270ca2e3008b793
size 910325501


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:5860f6ef89f5377550a0b547ecbc6612a615d181869d47fa184ec37f84b98e5f
size 910325501


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:62d8b2094835d04506fa677dfb0d1d44c8af7104613717e8a24a69330f9979bc
size 910325501


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:06cd4a326cd54d565d2d17385d927eaadad57ffaaad0dcb963379132c5006e70
size 910325501


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8d192a4a220ba4bfe5d76fb681e28901a51164f59fdc9b84e3ad32c6678426c8
size 910325501


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:24c8388deca5726d11c07810017aaf178d36454df375b45fd4ab6db20b0fcddc
size 910325501


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:dd821f6f05df926a12879560660278d6e32d33b84adf289f7f72a2daebfee9a2
size 910325501


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:21ac34fc590ebb62ea8374213873e8ecf1209687c279742f993357045704a05d
size 910325501


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:506b22eebc595a64dc2da52758130f63e544c75cdf93a5060c380bdf224b8043
size 910325501


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:eed055bbd7cd16e384b0ac9499192df8c33dee0109d503008d813ea077209d2f
size 910325501


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:b25b6edd4b9d50c7e6ea35b38ce52fa9e8849ebdafb92d718726549aad431a71
size 910325501


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:cf9c0cf13f55dc236b4cd5039df3eb49c521e371074ccaa901a71ce7462ec4a6
size 910325501


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:793c91ab150af3a3c6176abbc3cd54a20137527f150bcdce6b6f01e1b27bd1f4
size 910325501


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f4b1bcc99c22204bcf88b44fc212cdc9aa3de8138b3ef767cc1342620e4cea15
size 910325501


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:bf342ac358e299f553f15b35fb6f30a657c5e697237d60bd79c917d885e7bbda
size 910325501


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:713341d88368000cfab1ebe8ab47121f4e79bd67418cfd3f4372110547678473
size 910325501


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d26e8d4c5811c42d8172eb0624f941b747fdfa22add2f42c6d2eb93be137db43
size 910325501


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:757880b98e70fb826e27333f580194ccd85e99e40b19c993cfb1c4a6c2b21b3c
size 910325501


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:808c213e7e6e0bcae8cf4d6ddde3bcec96f7f73f57d7958ba05e0b06dbc7a0ab
size 910325501


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:11113c4a99b37f86bd8054a23b922710350ffb6ef9969921be87647c554812dc
size 910325501


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:cd5649b0348af719ae73c2c7312d1abcc5e1924805a69dd300a7f1d54bc04654
size 910325501


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:aaf2e9c33501b372547d5da9a60fedf689f225b22091ec1856b94337f3226784
size 910325501


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a815e95afd13b93ab246b32d6fe75af72fb35a52336f47961d027a135e8266e7
size 910325501


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:cbcc362c5028e969f2d0907d183130e47cfe269dbe8815a60751140a8e1d4cca
size 910325501


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:1a24e2b3a2befb91162ac54e201dd49a8acd282980c8b8997753a6cf1fe17c27
size 910325501


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d8e26c19107464eeb3acf15578402c4d51c56df2721c939ad09f5b4465356fc3
size 910325501


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7172ff989ec0219a3d764d55751c9551e64d4d3451ea285988725aae7bb96da3
size 910325501


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:0266b06e509f6b6566961d8131b9cea3fe4706ade362d8cafef4c8d532efc064
size 910325501


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:0ae11994b83961d8e5f634e4ca93561f66ea8db1a36151501de3a03ddf9a86df
size 910325501


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:70ae0ac80dd9dd8a5dbef6000ae7051fe9609557c1262bf31585896ba06d7325
size 604066469


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:01499aa37fdaace2112368ea52cccd955d6a0efbf4f4a2934129f789db79a87f
size 619708541


@@ -0,0 +1,671 @@
{
"metadata": {
"total_size": 41293685880
},
"weight_map": {
"embed_out.weight": "model-00046-of-00046.safetensors",
"gpt_neox.embed_in.weight": "model-00001-of-00046.safetensors",
"gpt_neox.final_layer_norm.bias": "model-00045-of-00046.safetensors",
"gpt_neox.final_layer_norm.weight": "model-00045-of-00046.safetensors",
"gpt_neox.layers.0.attention.bias": "model-00001-of-00046.safetensors",
"gpt_neox.layers.0.attention.dense.bias": "model-00001-of-00046.safetensors",
"gpt_neox.layers.0.attention.dense.weight": "model-00001-of-00046.safetensors",
"gpt_neox.layers.0.attention.masked_bias": "model-00001-of-00046.safetensors",
"gpt_neox.layers.0.attention.query_key_value.bias": "model-00001-of-00046.safetensors",
"gpt_neox.layers.0.attention.query_key_value.weight": "model-00001-of-00046.safetensors",
"gpt_neox.layers.0.attention.rotary_emb.inv_freq": "model-00001-of-00046.safetensors",
"gpt_neox.layers.0.input_layernorm.bias": "model-00001-of-00046.safetensors",
"gpt_neox.layers.0.input_layernorm.weight": "model-00001-of-00046.safetensors",
"gpt_neox.layers.0.mlp.dense_4h_to_h.bias": "model-00002-of-00046.safetensors",
"gpt_neox.layers.0.mlp.dense_4h_to_h.weight": "model-00002-of-00046.safetensors",
"gpt_neox.layers.0.mlp.dense_h_to_4h.bias": "model-00002-of-00046.safetensors",
"gpt_neox.layers.0.mlp.dense_h_to_4h.weight": "model-00002-of-00046.safetensors",
"gpt_neox.layers.0.post_attention_layernorm.bias": "model-00001-of-00046.safetensors",
"gpt_neox.layers.0.post_attention_layernorm.weight": "model-00001-of-00046.safetensors",
"gpt_neox.layers.1.attention.bias": "model-00002-of-00046.safetensors",
"gpt_neox.layers.1.attention.dense.bias": "model-00002-of-00046.safetensors",
"gpt_neox.layers.1.attention.dense.weight": "model-00002-of-00046.safetensors",
"gpt_neox.layers.1.attention.masked_bias": "model-00002-of-00046.safetensors",
"gpt_neox.layers.1.attention.query_key_value.bias": "model-00002-of-00046.safetensors",
"gpt_neox.layers.1.attention.query_key_value.weight": "model-00002-of-00046.safetensors",
"gpt_neox.layers.1.attention.rotary_emb.inv_freq": "model-00002-of-00046.safetensors",
"gpt_neox.layers.1.input_layernorm.bias": "model-00002-of-00046.safetensors",
"gpt_neox.layers.1.input_layernorm.weight": "model-00002-of-00046.safetensors",
"gpt_neox.layers.1.mlp.dense_4h_to_h.bias": "model-00003-of-00046.safetensors",
"gpt_neox.layers.1.mlp.dense_4h_to_h.weight": "model-00003-of-00046.safetensors",
"gpt_neox.layers.1.mlp.dense_h_to_4h.bias": "model-00003-of-00046.safetensors",
"gpt_neox.layers.1.mlp.dense_h_to_4h.weight": "model-00003-of-00046.safetensors",
"gpt_neox.layers.1.post_attention_layernorm.bias": "model-00002-of-00046.safetensors",
"gpt_neox.layers.1.post_attention_layernorm.weight": "model-00002-of-00046.safetensors",
"gpt_neox.layers.10.attention.bias": "model-00011-of-00046.safetensors",
"gpt_neox.layers.10.attention.dense.bias": "model-00011-of-00046.safetensors",
"gpt_neox.layers.10.attention.dense.weight": "model-00011-of-00046.safetensors",
"gpt_neox.layers.10.attention.masked_bias": "model-00011-of-00046.safetensors",
"gpt_neox.layers.10.attention.query_key_value.bias": "model-00011-of-00046.safetensors",
"gpt_neox.layers.10.attention.query_key_value.weight": "model-00011-of-00046.safetensors",
"gpt_neox.layers.10.attention.rotary_emb.inv_freq": "model-00011-of-00046.safetensors",
"gpt_neox.layers.10.input_layernorm.bias": "model-00011-of-00046.safetensors",
"gpt_neox.layers.10.input_layernorm.weight": "model-00011-of-00046.safetensors",
"gpt_neox.layers.10.mlp.dense_4h_to_h.bias": "model-00012-of-00046.safetensors",
"gpt_neox.layers.10.mlp.dense_4h_to_h.weight": "model-00012-of-00046.safetensors",
"gpt_neox.layers.10.mlp.dense_h_to_4h.bias": "model-00012-of-00046.safetensors",
"gpt_neox.layers.10.mlp.dense_h_to_4h.weight": "model-00012-of-00046.safetensors",
"gpt_neox.layers.10.post_attention_layernorm.bias": "model-00011-of-00046.safetensors",
"gpt_neox.layers.10.post_attention_layernorm.weight": "model-00011-of-00046.safetensors",
"gpt_neox.layers.11.attention.bias": "model-00012-of-00046.safetensors",
"gpt_neox.layers.11.attention.dense.bias": "model-00012-of-00046.safetensors",
"gpt_neox.layers.11.attention.dense.weight": "model-00012-of-00046.safetensors",
"gpt_neox.layers.11.attention.masked_bias": "model-00012-of-00046.safetensors",
"gpt_neox.layers.11.attention.query_key_value.bias": "model-00012-of-00046.safetensors",
"gpt_neox.layers.11.attention.query_key_value.weight": "model-00012-of-00046.safetensors",
"gpt_neox.layers.11.attention.rotary_emb.inv_freq": "model-00012-of-00046.safetensors",
"gpt_neox.layers.11.input_layernorm.bias": "model-00012-of-00046.safetensors",
"gpt_neox.layers.11.input_layernorm.weight": "model-00012-of-00046.safetensors",
"gpt_neox.layers.11.mlp.dense_4h_to_h.bias": "model-00013-of-00046.safetensors",
"gpt_neox.layers.11.mlp.dense_4h_to_h.weight": "model-00013-of-00046.safetensors",
"gpt_neox.layers.11.mlp.dense_h_to_4h.bias": "model-00013-of-00046.safetensors",
"gpt_neox.layers.11.mlp.dense_h_to_4h.weight": "model-00013-of-00046.safetensors",
"gpt_neox.layers.11.post_attention_layernorm.bias": "model-00012-of-00046.safetensors",
"gpt_neox.layers.11.post_attention_layernorm.weight": "model-00012-of-00046.safetensors",
"gpt_neox.layers.12.attention.bias": "model-00013-of-00046.safetensors",
"gpt_neox.layers.12.attention.dense.bias": "model-00013-of-00046.safetensors",
"gpt_neox.layers.12.attention.dense.weight": "model-00013-of-00046.safetensors",
"gpt_neox.layers.12.attention.masked_bias": "model-00013-of-00046.safetensors",
"gpt_neox.layers.12.attention.query_key_value.bias": "model-00013-of-00046.safetensors",
"gpt_neox.layers.12.attention.query_key_value.weight": "model-00013-of-00046.safetensors",
"gpt_neox.layers.12.attention.rotary_emb.inv_freq": "model-00013-of-00046.safetensors",
"gpt_neox.layers.12.input_layernorm.bias": "model-00013-of-00046.safetensors",
"gpt_neox.layers.12.input_layernorm.weight": "model-00013-of-00046.safetensors",
"gpt_neox.layers.12.mlp.dense_4h_to_h.bias": "model-00014-of-00046.safetensors",
"gpt_neox.layers.12.mlp.dense_4h_to_h.weight": "model-00014-of-00046.safetensors",
"gpt_neox.layers.12.mlp.dense_h_to_4h.bias": "model-00014-of-00046.safetensors",
"gpt_neox.layers.12.mlp.dense_h_to_4h.weight": "model-00014-of-00046.safetensors",
"gpt_neox.layers.12.post_attention_layernorm.bias": "model-00013-of-00046.safetensors",
"gpt_neox.layers.12.post_attention_layernorm.weight": "model-00013-of-00046.safetensors",
"gpt_neox.layers.13.attention.bias": "model-00014-of-00046.safetensors",
"gpt_neox.layers.13.attention.dense.bias": "model-00014-of-00046.safetensors",
"gpt_neox.layers.13.attention.dense.weight": "model-00014-of-00046.safetensors",
"gpt_neox.layers.13.attention.masked_bias": "model-00014-of-00046.safetensors",
"gpt_neox.layers.13.attention.query_key_value.bias": "model-00014-of-00046.safetensors",
"gpt_neox.layers.13.attention.query_key_value.weight": "model-00014-of-00046.safetensors",
"gpt_neox.layers.13.attention.rotary_emb.inv_freq": "model-00014-of-00046.safetensors",
"gpt_neox.layers.13.input_layernorm.bias": "model-00014-of-00046.safetensors",
"gpt_neox.layers.13.input_layernorm.weight": "model-00014-of-00046.safetensors",
"gpt_neox.layers.13.mlp.dense_4h_to_h.bias": "model-00015-of-00046.safetensors",
"gpt_neox.layers.13.mlp.dense_4h_to_h.weight": "model-00015-of-00046.safetensors",
"gpt_neox.layers.13.mlp.dense_h_to_4h.bias": "model-00015-of-00046.safetensors",
"gpt_neox.layers.13.mlp.dense_h_to_4h.weight": "model-00015-of-00046.safetensors",
"gpt_neox.layers.13.post_attention_layernorm.bias": "model-00014-of-00046.safetensors",
"gpt_neox.layers.13.post_attention_layernorm.weight": "model-00014-of-00046.safetensors",
"gpt_neox.layers.14.attention.bias": "model-00015-of-00046.safetensors",
"gpt_neox.layers.14.attention.dense.bias": "model-00015-of-00046.safetensors",
"gpt_neox.layers.14.attention.dense.weight": "model-00015-of-00046.safetensors",
"gpt_neox.layers.14.attention.masked_bias": "model-00015-of-00046.safetensors",
"gpt_neox.layers.14.attention.query_key_value.bias": "model-00015-of-00046.safetensors",
"gpt_neox.layers.14.attention.query_key_value.weight": "model-00015-of-00046.safetensors",
"gpt_neox.layers.14.attention.rotary_emb.inv_freq": "model-00015-of-00046.safetensors",
"gpt_neox.layers.14.input_layernorm.bias": "model-00015-of-00046.safetensors",
"gpt_neox.layers.14.input_layernorm.weight": "model-00015-of-00046.safetensors",
"gpt_neox.layers.14.mlp.dense_4h_to_h.bias": "model-00016-of-00046.safetensors",
"gpt_neox.layers.14.mlp.dense_4h_to_h.weight": "model-00016-of-00046.safetensors",
"gpt_neox.layers.14.mlp.dense_h_to_4h.bias": "model-00016-of-00046.safetensors",
"gpt_neox.layers.14.mlp.dense_h_to_4h.weight": "model-00016-of-00046.safetensors",
"gpt_neox.layers.14.post_attention_layernorm.bias": "model-00015-of-00046.safetensors",
"gpt_neox.layers.14.post_attention_layernorm.weight": "model-00015-of-00046.safetensors",
"gpt_neox.layers.15.attention.bias": "model-00016-of-00046.safetensors",
"gpt_neox.layers.15.attention.dense.bias": "model-00016-of-00046.safetensors",
"gpt_neox.layers.15.attention.dense.weight": "model-00016-of-00046.safetensors",
"gpt_neox.layers.15.attention.masked_bias": "model-00016-of-00046.safetensors",
"gpt_neox.layers.15.attention.query_key_value.bias": "model-00016-of-00046.safetensors",
"gpt_neox.layers.15.attention.query_key_value.weight": "model-00016-of-00046.safetensors",
"gpt_neox.layers.15.attention.rotary_emb.inv_freq": "model-00016-of-00046.safetensors",
"gpt_neox.layers.15.input_layernorm.bias": "model-00016-of-00046.safetensors",
"gpt_neox.layers.15.input_layernorm.weight": "model-00016-of-00046.safetensors",
"gpt_neox.layers.15.mlp.dense_4h_to_h.bias": "model-00017-of-00046.safetensors",
"gpt_neox.layers.15.mlp.dense_4h_to_h.weight": "model-00017-of-00046.safetensors",
"gpt_neox.layers.15.mlp.dense_h_to_4h.bias": "model-00017-of-00046.safetensors",
"gpt_neox.layers.15.mlp.dense_h_to_4h.weight": "model-00017-of-00046.safetensors",
"gpt_neox.layers.15.post_attention_layernorm.bias": "model-00016-of-00046.safetensors",
"gpt_neox.layers.15.post_attention_layernorm.weight": "model-00016-of-00046.safetensors",
"gpt_neox.layers.16.attention.bias": "model-00017-of-00046.safetensors",
"gpt_neox.layers.16.attention.dense.bias": "model-00017-of-00046.safetensors",
"gpt_neox.layers.16.attention.dense.weight": "model-00017-of-00046.safetensors",
"gpt_neox.layers.16.attention.masked_bias": "model-00017-of-00046.safetensors",
"gpt_neox.layers.16.attention.query_key_value.bias": "model-00017-of-00046.safetensors",
"gpt_neox.layers.16.attention.query_key_value.weight": "model-00017-of-00046.safetensors",
"gpt_neox.layers.16.attention.rotary_emb.inv_freq": "model-00017-of-00046.safetensors",
"gpt_neox.layers.16.input_layernorm.bias": "model-00017-of-00046.safetensors",
"gpt_neox.layers.16.input_layernorm.weight": "model-00017-of-00046.safetensors",
"gpt_neox.layers.16.mlp.dense_4h_to_h.bias": "model-00018-of-00046.safetensors",
"gpt_neox.layers.16.mlp.dense_4h_to_h.weight": "model-00018-of-00046.safetensors",
"gpt_neox.layers.16.mlp.dense_h_to_4h.bias": "model-00018-of-00046.safetensors",
"gpt_neox.layers.16.mlp.dense_h_to_4h.weight": "model-00018-of-00046.safetensors",
"gpt_neox.layers.16.post_attention_layernorm.bias": "model-00017-of-00046.safetensors",
"gpt_neox.layers.16.post_attention_layernorm.weight": "model-00017-of-00046.safetensors",
"gpt_neox.layers.17.attention.bias": "model-00018-of-00046.safetensors",
"gpt_neox.layers.17.attention.dense.bias": "model-00018-of-00046.safetensors",
"gpt_neox.layers.17.attention.dense.weight": "model-00018-of-00046.safetensors",
"gpt_neox.layers.17.attention.masked_bias": "model-00018-of-00046.safetensors",
"gpt_neox.layers.17.attention.query_key_value.bias": "model-00018-of-00046.safetensors",
"gpt_neox.layers.17.attention.query_key_value.weight": "model-00018-of-00046.safetensors",
"gpt_neox.layers.17.attention.rotary_emb.inv_freq": "model-00018-of-00046.safetensors",
"gpt_neox.layers.17.input_layernorm.bias": "model-00018-of-00046.safetensors",
"gpt_neox.layers.17.input_layernorm.weight": "model-00018-of-00046.safetensors",
"gpt_neox.layers.17.mlp.dense_4h_to_h.bias": "model-00019-of-00046.safetensors",
"gpt_neox.layers.17.mlp.dense_4h_to_h.weight": "model-00019-of-00046.safetensors",
"gpt_neox.layers.17.mlp.dense_h_to_4h.bias": "model-00019-of-00046.safetensors",
"gpt_neox.layers.17.mlp.dense_h_to_4h.weight": "model-00019-of-00046.safetensors",
"gpt_neox.layers.17.post_attention_layernorm.bias": "model-00018-of-00046.safetensors",
"gpt_neox.layers.17.post_attention_layernorm.weight": "model-00018-of-00046.safetensors",
"gpt_neox.layers.18.attention.bias": "model-00019-of-00046.safetensors",
"gpt_neox.layers.18.attention.dense.bias": "model-00019-of-00046.safetensors",
"gpt_neox.layers.18.attention.dense.weight": "model-00019-of-00046.safetensors",
"gpt_neox.layers.18.attention.masked_bias": "model-00019-of-00046.safetensors",
"gpt_neox.layers.18.attention.query_key_value.bias": "model-00019-of-00046.safetensors",
"gpt_neox.layers.18.attention.query_key_value.weight": "model-00019-of-00046.safetensors",
"gpt_neox.layers.18.attention.rotary_emb.inv_freq": "model-00019-of-00046.safetensors",
"gpt_neox.layers.18.input_layernorm.bias": "model-00019-of-00046.safetensors",
"gpt_neox.layers.18.input_layernorm.weight": "model-00019-of-00046.safetensors",
"gpt_neox.layers.18.mlp.dense_4h_to_h.bias": "model-00020-of-00046.safetensors",
"gpt_neox.layers.18.mlp.dense_4h_to_h.weight": "model-00020-of-00046.safetensors",
"gpt_neox.layers.18.mlp.dense_h_to_4h.bias": "model-00020-of-00046.safetensors",
"gpt_neox.layers.18.mlp.dense_h_to_4h.weight": "model-00020-of-00046.safetensors",
"gpt_neox.layers.18.post_attention_layernorm.bias": "model-00019-of-00046.safetensors",
"gpt_neox.layers.18.post_attention_layernorm.weight": "model-00019-of-00046.safetensors",
"gpt_neox.layers.19.attention.bias": "model-00020-of-00046.safetensors",
"gpt_neox.layers.19.attention.dense.bias": "model-00020-of-00046.safetensors",
"gpt_neox.layers.19.attention.dense.weight": "model-00020-of-00046.safetensors",
"gpt_neox.layers.19.attention.masked_bias": "model-00020-of-00046.safetensors",
"gpt_neox.layers.19.attention.query_key_value.bias": "model-00020-of-00046.safetensors",
"gpt_neox.layers.19.attention.query_key_value.weight": "model-00020-of-00046.safetensors",
"gpt_neox.layers.19.attention.rotary_emb.inv_freq": "model-00020-of-00046.safetensors",
"gpt_neox.layers.19.input_layernorm.bias": "model-00020-of-00046.safetensors",
"gpt_neox.layers.19.input_layernorm.weight": "model-00020-of-00046.safetensors",
"gpt_neox.layers.19.mlp.dense_4h_to_h.bias": "model-00021-of-00046.safetensors",
"gpt_neox.layers.19.mlp.dense_4h_to_h.weight": "model-00021-of-00046.safetensors",
"gpt_neox.layers.19.mlp.dense_h_to_4h.bias": "model-00021-of-00046.safetensors",
"gpt_neox.layers.19.mlp.dense_h_to_4h.weight": "model-00021-of-00046.safetensors",
"gpt_neox.layers.19.post_attention_layernorm.bias": "model-00020-of-00046.safetensors",
"gpt_neox.layers.19.post_attention_layernorm.weight": "model-00020-of-00046.safetensors",
"gpt_neox.layers.2.attention.bias": "model-00003-of-00046.safetensors",
"gpt_neox.layers.2.attention.dense.bias": "model-00003-of-00046.safetensors",
"gpt_neox.layers.2.attention.dense.weight": "model-00003-of-00046.safetensors",
"gpt_neox.layers.2.attention.masked_bias": "model-00003-of-00046.safetensors",
"gpt_neox.layers.2.attention.query_key_value.bias": "model-00003-of-00046.safetensors",
"gpt_neox.layers.2.attention.query_key_value.weight": "model-00003-of-00046.safetensors",
"gpt_neox.layers.2.attention.rotary_emb.inv_freq": "model-00003-of-00046.safetensors",
"gpt_neox.layers.2.input_layernorm.bias": "model-00003-of-00046.safetensors",
"gpt_neox.layers.2.input_layernorm.weight": "model-00003-of-00046.safetensors",
"gpt_neox.layers.2.mlp.dense_4h_to_h.bias": "model-00004-of-00046.safetensors",
"gpt_neox.layers.2.mlp.dense_4h_to_h.weight": "model-00004-of-00046.safetensors",
"gpt_neox.layers.2.mlp.dense_h_to_4h.bias": "model-00004-of-00046.safetensors",
"gpt_neox.layers.2.mlp.dense_h_to_4h.weight": "model-00004-of-00046.safetensors",
"gpt_neox.layers.2.post_attention_layernorm.bias": "model-00003-of-00046.safetensors",
"gpt_neox.layers.2.post_attention_layernorm.weight": "model-00003-of-00046.safetensors",
"gpt_neox.layers.20.attention.bias": "model-00021-of-00046.safetensors",
"gpt_neox.layers.20.attention.dense.bias": "model-00021-of-00046.safetensors",
"gpt_neox.layers.20.attention.dense.weight": "model-00021-of-00046.safetensors",
"gpt_neox.layers.20.attention.masked_bias": "model-00021-of-00046.safetensors",
"gpt_neox.layers.20.attention.query_key_value.bias": "model-00021-of-00046.safetensors",
"gpt_neox.layers.20.attention.query_key_value.weight": "model-00021-of-00046.safetensors",
"gpt_neox.layers.20.attention.rotary_emb.inv_freq": "model-00021-of-00046.safetensors",
"gpt_neox.layers.20.input_layernorm.bias": "model-00021-of-00046.safetensors",
"gpt_neox.layers.20.input_layernorm.weight": "model-00021-of-00046.safetensors",
"gpt_neox.layers.20.mlp.dense_4h_to_h.bias": "model-00022-of-00046.safetensors",
"gpt_neox.layers.20.mlp.dense_4h_to_h.weight": "model-00022-of-00046.safetensors",
"gpt_neox.layers.20.mlp.dense_h_to_4h.bias": "model-00022-of-00046.safetensors",
"gpt_neox.layers.20.mlp.dense_h_to_4h.weight": "model-00022-of-00046.safetensors",
"gpt_neox.layers.20.post_attention_layernorm.bias": "model-00021-of-00046.safetensors",
"gpt_neox.layers.20.post_attention_layernorm.weight": "model-00021-of-00046.safetensors",
"gpt_neox.layers.21.attention.bias": "model-00022-of-00046.safetensors",
"gpt_neox.layers.21.attention.dense.bias": "model-00022-of-00046.safetensors",
"gpt_neox.layers.21.attention.dense.weight": "model-00022-of-00046.safetensors",
"gpt_neox.layers.21.attention.masked_bias": "model-00022-of-00046.safetensors",
"gpt_neox.layers.21.attention.query_key_value.bias": "model-00022-of-00046.safetensors",
"gpt_neox.layers.21.attention.query_key_value.weight": "model-00022-of-00046.safetensors",
"gpt_neox.layers.21.attention.rotary_emb.inv_freq": "model-00022-of-00046.safetensors",
"gpt_neox.layers.21.input_layernorm.bias": "model-00022-of-00046.safetensors",
"gpt_neox.layers.21.input_layernorm.weight": "model-00022-of-00046.safetensors",
"gpt_neox.layers.21.mlp.dense_4h_to_h.bias": "model-00023-of-00046.safetensors",
"gpt_neox.layers.21.mlp.dense_4h_to_h.weight": "model-00023-of-00046.safetensors",
"gpt_neox.layers.21.mlp.dense_h_to_4h.bias": "model-00023-of-00046.safetensors",
"gpt_neox.layers.21.mlp.dense_h_to_4h.weight": "model-00023-of-00046.safetensors",
"gpt_neox.layers.21.post_attention_layernorm.bias": "model-00022-of-00046.safetensors",
"gpt_neox.layers.21.post_attention_layernorm.weight": "model-00022-of-00046.safetensors",
"gpt_neox.layers.22.attention.bias": "model-00023-of-00046.safetensors",
"gpt_neox.layers.22.attention.dense.bias": "model-00023-of-00046.safetensors",
"gpt_neox.layers.22.attention.dense.weight": "model-00023-of-00046.safetensors",
"gpt_neox.layers.22.attention.masked_bias": "model-00023-of-00046.safetensors",
"gpt_neox.layers.22.attention.query_key_value.bias": "model-00023-of-00046.safetensors",
"gpt_neox.layers.22.attention.query_key_value.weight": "model-00023-of-00046.safetensors",
"gpt_neox.layers.22.attention.rotary_emb.inv_freq": "model-00023-of-00046.safetensors",
"gpt_neox.layers.22.input_layernorm.bias": "model-00023-of-00046.safetensors",
"gpt_neox.layers.22.input_layernorm.weight": "model-00023-of-00046.safetensors",
"gpt_neox.layers.22.mlp.dense_4h_to_h.bias": "model-00024-of-00046.safetensors",
"gpt_neox.layers.22.mlp.dense_4h_to_h.weight": "model-00024-of-00046.safetensors",
"gpt_neox.layers.22.mlp.dense_h_to_4h.bias": "model-00024-of-00046.safetensors",
"gpt_neox.layers.22.mlp.dense_h_to_4h.weight": "model-00024-of-00046.safetensors",
"gpt_neox.layers.22.post_attention_layernorm.bias": "model-00023-of-00046.safetensors",
"gpt_neox.layers.22.post_attention_layernorm.weight": "model-00023-of-00046.safetensors",
"gpt_neox.layers.23.attention.bias": "model-00024-of-00046.safetensors",
"gpt_neox.layers.23.attention.dense.bias": "model-00024-of-00046.safetensors",
"gpt_neox.layers.23.attention.dense.weight": "model-00024-of-00046.safetensors",
"gpt_neox.layers.23.attention.masked_bias": "model-00024-of-00046.safetensors",
"gpt_neox.layers.23.attention.query_key_value.bias": "model-00024-of-00046.safetensors",
"gpt_neox.layers.23.attention.query_key_value.weight": "model-00024-of-00046.safetensors",
"gpt_neox.layers.23.attention.rotary_emb.inv_freq": "model-00024-of-00046.safetensors",
"gpt_neox.layers.23.input_layernorm.bias": "model-00024-of-00046.safetensors",
"gpt_neox.layers.23.input_layernorm.weight": "model-00024-of-00046.safetensors",
"gpt_neox.layers.23.mlp.dense_4h_to_h.bias": "model-00025-of-00046.safetensors",
"gpt_neox.layers.23.mlp.dense_4h_to_h.weight": "model-00025-of-00046.safetensors",
"gpt_neox.layers.23.mlp.dense_h_to_4h.bias": "model-00025-of-00046.safetensors",
"gpt_neox.layers.23.mlp.dense_h_to_4h.weight": "model-00025-of-00046.safetensors",
"gpt_neox.layers.23.post_attention_layernorm.bias": "model-00024-of-00046.safetensors",
"gpt_neox.layers.23.post_attention_layernorm.weight": "model-00024-of-00046.safetensors",
"gpt_neox.layers.24.attention.bias": "model-00025-of-00046.safetensors",
"gpt_neox.layers.24.attention.dense.bias": "model-00025-of-00046.safetensors",
"gpt_neox.layers.24.attention.dense.weight": "model-00025-of-00046.safetensors",
"gpt_neox.layers.24.attention.masked_bias": "model-00025-of-00046.safetensors",
"gpt_neox.layers.24.attention.query_key_value.bias": "model-00025-of-00046.safetensors",
"gpt_neox.layers.24.attention.query_key_value.weight": "model-00025-of-00046.safetensors",
"gpt_neox.layers.24.attention.rotary_emb.inv_freq": "model-00025-of-00046.safetensors",
"gpt_neox.layers.24.input_layernorm.bias": "model-00025-of-00046.safetensors",
"gpt_neox.layers.24.input_layernorm.weight": "model-00025-of-00046.safetensors",
"gpt_neox.layers.24.mlp.dense_4h_to_h.bias": "model-00026-of-00046.safetensors",
"gpt_neox.layers.24.mlp.dense_4h_to_h.weight": "model-00026-of-00046.safetensors",
"gpt_neox.layers.24.mlp.dense_h_to_4h.bias": "model-00026-of-00046.safetensors",
"gpt_neox.layers.24.mlp.dense_h_to_4h.weight": "model-00026-of-00046.safetensors",
"gpt_neox.layers.24.post_attention_layernorm.bias": "model-00025-of-00046.safetensors",
"gpt_neox.layers.24.post_attention_layernorm.weight": "model-00025-of-00046.safetensors",
"gpt_neox.layers.25.attention.bias": "model-00026-of-00046.safetensors",
"gpt_neox.layers.25.attention.dense.bias": "model-00026-of-00046.safetensors",
"gpt_neox.layers.25.attention.dense.weight": "model-00026-of-00046.safetensors",
"gpt_neox.layers.25.attention.masked_bias": "model-00026-of-00046.safetensors",
"gpt_neox.layers.25.attention.query_key_value.bias": "model-00026-of-00046.safetensors",
"gpt_neox.layers.25.attention.query_key_value.weight": "model-00026-of-00046.safetensors",
"gpt_neox.layers.25.attention.rotary_emb.inv_freq": "model-00026-of-00046.safetensors",
"gpt_neox.layers.25.input_layernorm.bias": "model-00026-of-00046.safetensors",
"gpt_neox.layers.25.input_layernorm.weight": "model-00026-of-00046.safetensors",
"gpt_neox.layers.25.mlp.dense_4h_to_h.bias": "model-00027-of-00046.safetensors",
"gpt_neox.layers.25.mlp.dense_4h_to_h.weight": "model-00027-of-00046.safetensors",
"gpt_neox.layers.25.mlp.dense_h_to_4h.bias": "model-00027-of-00046.safetensors",
"gpt_neox.layers.25.mlp.dense_h_to_4h.weight": "model-00027-of-00046.safetensors",
"gpt_neox.layers.25.post_attention_layernorm.bias": "model-00026-of-00046.safetensors",
"gpt_neox.layers.25.post_attention_layernorm.weight": "model-00026-of-00046.safetensors",
"gpt_neox.layers.26.attention.bias": "model-00027-of-00046.safetensors",
"gpt_neox.layers.26.attention.dense.bias": "model-00027-of-00046.safetensors",
"gpt_neox.layers.26.attention.dense.weight": "model-00027-of-00046.safetensors",
"gpt_neox.layers.26.attention.masked_bias": "model-00027-of-00046.safetensors",
"gpt_neox.layers.26.attention.query_key_value.bias": "model-00027-of-00046.safetensors",
"gpt_neox.layers.26.attention.query_key_value.weight": "model-00027-of-00046.safetensors",
"gpt_neox.layers.26.attention.rotary_emb.inv_freq": "model-00027-of-00046.safetensors",
"gpt_neox.layers.26.input_layernorm.bias": "model-00027-of-00046.safetensors",
"gpt_neox.layers.26.input_layernorm.weight": "model-00027-of-00046.safetensors",
"gpt_neox.layers.26.mlp.dense_4h_to_h.bias": "model-00028-of-00046.safetensors",
"gpt_neox.layers.26.mlp.dense_4h_to_h.weight": "model-00028-of-00046.safetensors",
"gpt_neox.layers.26.mlp.dense_h_to_4h.bias": "model-00028-of-00046.safetensors",
"gpt_neox.layers.26.mlp.dense_h_to_4h.weight": "model-00028-of-00046.safetensors",
"gpt_neox.layers.26.post_attention_layernorm.bias": "model-00027-of-00046.safetensors",
"gpt_neox.layers.26.post_attention_layernorm.weight": "model-00027-of-00046.safetensors",
"gpt_neox.layers.27.attention.bias": "model-00028-of-00046.safetensors",
"gpt_neox.layers.27.attention.dense.bias": "model-00028-of-00046.safetensors",
"gpt_neox.layers.27.attention.dense.weight": "model-00028-of-00046.safetensors",
"gpt_neox.layers.27.attention.masked_bias": "model-00028-of-00046.safetensors",
"gpt_neox.layers.27.attention.query_key_value.bias": "model-00028-of-00046.safetensors",
"gpt_neox.layers.27.attention.query_key_value.weight": "model-00028-of-00046.safetensors",
"gpt_neox.layers.27.attention.rotary_emb.inv_freq": "model-00028-of-00046.safetensors",
"gpt_neox.layers.27.input_layernorm.bias": "model-00028-of-00046.safetensors",
"gpt_neox.layers.27.input_layernorm.weight": "model-00028-of-00046.safetensors",
"gpt_neox.layers.27.mlp.dense_4h_to_h.bias": "model-00029-of-00046.safetensors",
"gpt_neox.layers.27.mlp.dense_4h_to_h.weight": "model-00029-of-00046.safetensors",
"gpt_neox.layers.27.mlp.dense_h_to_4h.bias": "model-00029-of-00046.safetensors",
"gpt_neox.layers.27.mlp.dense_h_to_4h.weight": "model-00029-of-00046.safetensors",
"gpt_neox.layers.27.post_attention_layernorm.bias": "model-00028-of-00046.safetensors",
"gpt_neox.layers.27.post_attention_layernorm.weight": "model-00028-of-00046.safetensors",
"gpt_neox.layers.28.attention.bias": "model-00029-of-00046.safetensors",
"gpt_neox.layers.28.attention.dense.bias": "model-00029-of-00046.safetensors",
"gpt_neox.layers.28.attention.dense.weight": "model-00029-of-00046.safetensors",
"gpt_neox.layers.28.attention.masked_bias": "model-00029-of-00046.safetensors",
"gpt_neox.layers.28.attention.query_key_value.bias": "model-00029-of-00046.safetensors",
"gpt_neox.layers.28.attention.query_key_value.weight": "model-00029-of-00046.safetensors",
"gpt_neox.layers.28.attention.rotary_emb.inv_freq": "model-00029-of-00046.safetensors",
"gpt_neox.layers.28.input_layernorm.bias": "model-00029-of-00046.safetensors",
"gpt_neox.layers.28.input_layernorm.weight": "model-00029-of-00046.safetensors",
"gpt_neox.layers.28.mlp.dense_4h_to_h.bias": "model-00030-of-00046.safetensors",
"gpt_neox.layers.28.mlp.dense_4h_to_h.weight": "model-00030-of-00046.safetensors",
"gpt_neox.layers.28.mlp.dense_h_to_4h.bias": "model-00030-of-00046.safetensors",
"gpt_neox.layers.28.mlp.dense_h_to_4h.weight": "model-00030-of-00046.safetensors",
"gpt_neox.layers.28.post_attention_layernorm.bias": "model-00029-of-00046.safetensors",
"gpt_neox.layers.28.post_attention_layernorm.weight": "model-00029-of-00046.safetensors",
"gpt_neox.layers.29.attention.bias": "model-00030-of-00046.safetensors",
"gpt_neox.layers.29.attention.dense.bias": "model-00030-of-00046.safetensors",
"gpt_neox.layers.29.attention.dense.weight": "model-00030-of-00046.safetensors",
"gpt_neox.layers.29.attention.masked_bias": "model-00030-of-00046.safetensors",
"gpt_neox.layers.29.attention.query_key_value.bias": "model-00030-of-00046.safetensors",
"gpt_neox.layers.29.attention.query_key_value.weight": "model-00030-of-00046.safetensors",
"gpt_neox.layers.29.attention.rotary_emb.inv_freq": "model-00030-of-00046.safetensors",
"gpt_neox.layers.29.input_layernorm.bias": "model-00030-of-00046.safetensors",
"gpt_neox.layers.29.input_layernorm.weight": "model-00030-of-00046.safetensors",
"gpt_neox.layers.29.mlp.dense_4h_to_h.bias": "model-00031-of-00046.safetensors",
"gpt_neox.layers.29.mlp.dense_4h_to_h.weight": "model-00031-of-00046.safetensors",
"gpt_neox.layers.29.mlp.dense_h_to_4h.bias": "model-00031-of-00046.safetensors",
"gpt_neox.layers.29.mlp.dense_h_to_4h.weight": "model-00031-of-00046.safetensors",
"gpt_neox.layers.29.post_attention_layernorm.bias": "model-00030-of-00046.safetensors",
"gpt_neox.layers.29.post_attention_layernorm.weight": "model-00030-of-00046.safetensors",
"gpt_neox.layers.3.attention.bias": "model-00004-of-00046.safetensors",
"gpt_neox.layers.3.attention.dense.bias": "model-00004-of-00046.safetensors",
"gpt_neox.layers.3.attention.dense.weight": "model-00004-of-00046.safetensors",
"gpt_neox.layers.3.attention.masked_bias": "model-00004-of-00046.safetensors",
"gpt_neox.layers.3.attention.query_key_value.bias": "model-00004-of-00046.safetensors",
"gpt_neox.layers.3.attention.query_key_value.weight": "model-00004-of-00046.safetensors",
"gpt_neox.layers.3.attention.rotary_emb.inv_freq": "model-00004-of-00046.safetensors",
"gpt_neox.layers.3.input_layernorm.bias": "model-00004-of-00046.safetensors",
"gpt_neox.layers.3.input_layernorm.weight": "model-00004-of-00046.safetensors",
"gpt_neox.layers.3.mlp.dense_4h_to_h.bias": "model-00005-of-00046.safetensors",
"gpt_neox.layers.3.mlp.dense_4h_to_h.weight": "model-00005-of-00046.safetensors",
"gpt_neox.layers.3.mlp.dense_h_to_4h.bias": "model-00005-of-00046.safetensors",
"gpt_neox.layers.3.mlp.dense_h_to_4h.weight": "model-00005-of-00046.safetensors",
"gpt_neox.layers.3.post_attention_layernorm.bias": "model-00004-of-00046.safetensors",
"gpt_neox.layers.3.post_attention_layernorm.weight": "model-00004-of-00046.safetensors",
"gpt_neox.layers.30.attention.bias": "model-00031-of-00046.safetensors",
"gpt_neox.layers.30.attention.dense.bias": "model-00031-of-00046.safetensors",
"gpt_neox.layers.30.attention.dense.weight": "model-00031-of-00046.safetensors",
"gpt_neox.layers.30.attention.masked_bias": "model-00031-of-00046.safetensors",
"gpt_neox.layers.30.attention.query_key_value.bias": "model-00031-of-00046.safetensors",
"gpt_neox.layers.30.attention.query_key_value.weight": "model-00031-of-00046.safetensors",
"gpt_neox.layers.30.attention.rotary_emb.inv_freq": "model-00031-of-00046.safetensors",
"gpt_neox.layers.30.input_layernorm.bias": "model-00031-of-00046.safetensors",
"gpt_neox.layers.30.input_layernorm.weight": "model-00031-of-00046.safetensors",
"gpt_neox.layers.30.mlp.dense_4h_to_h.bias": "model-00032-of-00046.safetensors",
"gpt_neox.layers.30.mlp.dense_4h_to_h.weight": "model-00032-of-00046.safetensors",
"gpt_neox.layers.30.mlp.dense_h_to_4h.bias": "model-00032-of-00046.safetensors",
"gpt_neox.layers.30.mlp.dense_h_to_4h.weight": "model-00032-of-00046.safetensors",
"gpt_neox.layers.30.post_attention_layernorm.bias": "model-00031-of-00046.safetensors",
"gpt_neox.layers.30.post_attention_layernorm.weight": "model-00031-of-00046.safetensors",
"gpt_neox.layers.31.attention.bias": "model-00032-of-00046.safetensors",
"gpt_neox.layers.31.attention.dense.bias": "model-00032-of-00046.safetensors",
"gpt_neox.layers.31.attention.dense.weight": "model-00032-of-00046.safetensors",
"gpt_neox.layers.31.attention.masked_bias": "model-00032-of-00046.safetensors",
"gpt_neox.layers.31.attention.query_key_value.bias": "model-00032-of-00046.safetensors",
"gpt_neox.layers.31.attention.query_key_value.weight": "model-00032-of-00046.safetensors",
"gpt_neox.layers.31.attention.rotary_emb.inv_freq": "model-00032-of-00046.safetensors",
"gpt_neox.layers.31.input_layernorm.bias": "model-00032-of-00046.safetensors",
"gpt_neox.layers.31.input_layernorm.weight": "model-00032-of-00046.safetensors",
"gpt_neox.layers.31.mlp.dense_4h_to_h.bias": "model-00033-of-00046.safetensors",
"gpt_neox.layers.31.mlp.dense_4h_to_h.weight": "model-00033-of-00046.safetensors",
"gpt_neox.layers.31.mlp.dense_h_to_4h.bias": "model-00033-of-00046.safetensors",
"gpt_neox.layers.31.mlp.dense_h_to_4h.weight": "model-00033-of-00046.safetensors",
"gpt_neox.layers.31.post_attention_layernorm.bias": "model-00032-of-00046.safetensors",
"gpt_neox.layers.31.post_attention_layernorm.weight": "model-00032-of-00046.safetensors",
"gpt_neox.layers.32.attention.bias": "model-00033-of-00046.safetensors",
"gpt_neox.layers.32.attention.dense.bias": "model-00033-of-00046.safetensors",
"gpt_neox.layers.32.attention.dense.weight": "model-00033-of-00046.safetensors",
"gpt_neox.layers.32.attention.masked_bias": "model-00033-of-00046.safetensors",
"gpt_neox.layers.32.attention.query_key_value.bias": "model-00033-of-00046.safetensors",
"gpt_neox.layers.32.attention.query_key_value.weight": "model-00033-of-00046.safetensors",
"gpt_neox.layers.32.attention.rotary_emb.inv_freq": "model-00033-of-00046.safetensors",
"gpt_neox.layers.32.input_layernorm.bias": "model-00033-of-00046.safetensors",
"gpt_neox.layers.32.input_layernorm.weight": "model-00033-of-00046.safetensors",
"gpt_neox.layers.32.mlp.dense_4h_to_h.bias": "model-00034-of-00046.safetensors",
"gpt_neox.layers.32.mlp.dense_4h_to_h.weight": "model-00034-of-00046.safetensors",
"gpt_neox.layers.32.mlp.dense_h_to_4h.bias": "model-00034-of-00046.safetensors",
"gpt_neox.layers.32.mlp.dense_h_to_4h.weight": "model-00034-of-00046.safetensors",
"gpt_neox.layers.32.post_attention_layernorm.bias": "model-00033-of-00046.safetensors",
"gpt_neox.layers.32.post_attention_layernorm.weight": "model-00033-of-00046.safetensors",
"gpt_neox.layers.33.attention.bias": "model-00034-of-00046.safetensors",
"gpt_neox.layers.33.attention.dense.bias": "model-00034-of-00046.safetensors",
"gpt_neox.layers.33.attention.dense.weight": "model-00034-of-00046.safetensors",
"gpt_neox.layers.33.attention.masked_bias": "model-00034-of-00046.safetensors",
"gpt_neox.layers.33.attention.query_key_value.bias": "model-00034-of-00046.safetensors",
"gpt_neox.layers.33.attention.query_key_value.weight": "model-00034-of-00046.safetensors",
"gpt_neox.layers.33.attention.rotary_emb.inv_freq": "model-00034-of-00046.safetensors",
"gpt_neox.layers.33.input_layernorm.bias": "model-00034-of-00046.safetensors",
"gpt_neox.layers.33.input_layernorm.weight": "model-00034-of-00046.safetensors",
"gpt_neox.layers.33.mlp.dense_4h_to_h.bias": "model-00035-of-00046.safetensors",
"gpt_neox.layers.33.mlp.dense_4h_to_h.weight": "model-00035-of-00046.safetensors",
"gpt_neox.layers.33.mlp.dense_h_to_4h.bias": "model-00035-of-00046.safetensors",
"gpt_neox.layers.33.mlp.dense_h_to_4h.weight": "model-00035-of-00046.safetensors",
"gpt_neox.layers.33.post_attention_layernorm.bias": "model-00034-of-00046.safetensors",
"gpt_neox.layers.33.post_attention_layernorm.weight": "model-00034-of-00046.safetensors",
"gpt_neox.layers.34.attention.bias": "model-00035-of-00046.safetensors",
"gpt_neox.layers.34.attention.dense.bias": "model-00035-of-00046.safetensors",
"gpt_neox.layers.34.attention.dense.weight": "model-00035-of-00046.safetensors",
"gpt_neox.layers.34.attention.masked_bias": "model-00035-of-00046.safetensors",
"gpt_neox.layers.34.attention.query_key_value.bias": "model-00035-of-00046.safetensors",
"gpt_neox.layers.34.attention.query_key_value.weight": "model-00035-of-00046.safetensors",
"gpt_neox.layers.34.attention.rotary_emb.inv_freq": "model-00035-of-00046.safetensors",
"gpt_neox.layers.34.input_layernorm.bias": "model-00035-of-00046.safetensors",
"gpt_neox.layers.34.input_layernorm.weight": "model-00035-of-00046.safetensors",
"gpt_neox.layers.34.mlp.dense_4h_to_h.bias": "model-00036-of-00046.safetensors",
"gpt_neox.layers.34.mlp.dense_4h_to_h.weight": "model-00036-of-00046.safetensors",
"gpt_neox.layers.34.mlp.dense_h_to_4h.bias": "model-00036-of-00046.safetensors",
"gpt_neox.layers.34.mlp.dense_h_to_4h.weight": "model-00036-of-00046.safetensors",
"gpt_neox.layers.34.post_attention_layernorm.bias": "model-00035-of-00046.safetensors",
"gpt_neox.layers.34.post_attention_layernorm.weight": "model-00035-of-00046.safetensors",
"gpt_neox.layers.35.attention.bias": "model-00036-of-00046.safetensors",
"gpt_neox.layers.35.attention.dense.bias": "model-00036-of-00046.safetensors",
"gpt_neox.layers.35.attention.dense.weight": "model-00036-of-00046.safetensors",
"gpt_neox.layers.35.attention.masked_bias": "model-00036-of-00046.safetensors",
"gpt_neox.layers.35.attention.query_key_value.bias": "model-00036-of-00046.safetensors",
"gpt_neox.layers.35.attention.query_key_value.weight": "model-00036-of-00046.safetensors",
"gpt_neox.layers.35.attention.rotary_emb.inv_freq": "model-00036-of-00046.safetensors",
"gpt_neox.layers.35.input_layernorm.bias": "model-00036-of-00046.safetensors",
"gpt_neox.layers.35.input_layernorm.weight": "model-00036-of-00046.safetensors",
"gpt_neox.layers.35.mlp.dense_4h_to_h.bias": "model-00037-of-00046.safetensors",
"gpt_neox.layers.35.mlp.dense_4h_to_h.weight": "model-00037-of-00046.safetensors",
"gpt_neox.layers.35.mlp.dense_h_to_4h.bias": "model-00037-of-00046.safetensors",
"gpt_neox.layers.35.mlp.dense_h_to_4h.weight": "model-00037-of-00046.safetensors",
"gpt_neox.layers.35.post_attention_layernorm.bias": "model-00036-of-00046.safetensors",
"gpt_neox.layers.35.post_attention_layernorm.weight": "model-00036-of-00046.safetensors",
"gpt_neox.layers.36.attention.bias": "model-00037-of-00046.safetensors",
"gpt_neox.layers.36.attention.dense.bias": "model-00037-of-00046.safetensors",
"gpt_neox.layers.36.attention.dense.weight": "model-00037-of-00046.safetensors",
"gpt_neox.layers.36.attention.masked_bias": "model-00037-of-00046.safetensors",
"gpt_neox.layers.36.attention.query_key_value.bias": "model-00037-of-00046.safetensors",
"gpt_neox.layers.36.attention.query_key_value.weight": "model-00037-of-00046.safetensors",
"gpt_neox.layers.36.attention.rotary_emb.inv_freq": "model-00037-of-00046.safetensors",
"gpt_neox.layers.36.input_layernorm.bias": "model-00037-of-00046.safetensors",
"gpt_neox.layers.36.input_layernorm.weight": "model-00037-of-00046.safetensors",
"gpt_neox.layers.36.mlp.dense_4h_to_h.bias": "model-00038-of-00046.safetensors",
"gpt_neox.layers.36.mlp.dense_4h_to_h.weight": "model-00038-of-00046.safetensors",
"gpt_neox.layers.36.mlp.dense_h_to_4h.bias": "model-00038-of-00046.safetensors",
"gpt_neox.layers.36.mlp.dense_h_to_4h.weight": "model-00038-of-00046.safetensors",
"gpt_neox.layers.36.post_attention_layernorm.bias": "model-00037-of-00046.safetensors",
"gpt_neox.layers.36.post_attention_layernorm.weight": "model-00037-of-00046.safetensors",
"gpt_neox.layers.37.attention.bias": "model-00038-of-00046.safetensors",
"gpt_neox.layers.37.attention.dense.bias": "model-00038-of-00046.safetensors",
"gpt_neox.layers.37.attention.dense.weight": "model-00038-of-00046.safetensors",
"gpt_neox.layers.37.attention.masked_bias": "model-00038-of-00046.safetensors",
"gpt_neox.layers.37.attention.query_key_value.bias": "model-00038-of-00046.safetensors",
"gpt_neox.layers.37.attention.query_key_value.weight": "model-00038-of-00046.safetensors",
"gpt_neox.layers.37.attention.rotary_emb.inv_freq": "model-00038-of-00046.safetensors",
"gpt_neox.layers.37.input_layernorm.bias": "model-00038-of-00046.safetensors",
"gpt_neox.layers.37.input_layernorm.weight": "model-00038-of-00046.safetensors",
"gpt_neox.layers.37.mlp.dense_4h_to_h.bias": "model-00039-of-00046.safetensors",
"gpt_neox.layers.37.mlp.dense_4h_to_h.weight": "model-00039-of-00046.safetensors",
"gpt_neox.layers.37.mlp.dense_h_to_4h.bias": "model-00039-of-00046.safetensors",
"gpt_neox.layers.37.mlp.dense_h_to_4h.weight": "model-00039-of-00046.safetensors",
"gpt_neox.layers.37.post_attention_layernorm.bias": "model-00038-of-00046.safetensors",
"gpt_neox.layers.37.post_attention_layernorm.weight": "model-00038-of-00046.safetensors",
"gpt_neox.layers.38.attention.bias": "model-00039-of-00046.safetensors",
"gpt_neox.layers.38.attention.dense.bias": "model-00039-of-00046.safetensors",
"gpt_neox.layers.38.attention.dense.weight": "model-00039-of-00046.safetensors",
"gpt_neox.layers.38.attention.masked_bias": "model-00039-of-00046.safetensors",
"gpt_neox.layers.38.attention.query_key_value.bias": "model-00039-of-00046.safetensors",
"gpt_neox.layers.38.attention.query_key_value.weight": "model-00039-of-00046.safetensors",
"gpt_neox.layers.38.attention.rotary_emb.inv_freq": "model-00039-of-00046.safetensors",
"gpt_neox.layers.38.input_layernorm.bias": "model-00039-of-00046.safetensors",
"gpt_neox.layers.38.input_layernorm.weight": "model-00039-of-00046.safetensors",
"gpt_neox.layers.38.mlp.dense_4h_to_h.bias": "model-00040-of-00046.safetensors",
"gpt_neox.layers.38.mlp.dense_4h_to_h.weight": "model-00040-of-00046.safetensors",
"gpt_neox.layers.38.mlp.dense_h_to_4h.bias": "model-00040-of-00046.safetensors",
"gpt_neox.layers.38.mlp.dense_h_to_4h.weight": "model-00040-of-00046.safetensors",
"gpt_neox.layers.38.post_attention_layernorm.bias": "model-00039-of-00046.safetensors",
"gpt_neox.layers.38.post_attention_layernorm.weight": "model-00039-of-00046.safetensors",
"gpt_neox.layers.39.attention.bias": "model-00040-of-00046.safetensors",
"gpt_neox.layers.39.attention.dense.bias": "model-00040-of-00046.safetensors",
"gpt_neox.layers.39.attention.dense.weight": "model-00040-of-00046.safetensors",
"gpt_neox.layers.39.attention.masked_bias": "model-00040-of-00046.safetensors",
"gpt_neox.layers.39.attention.query_key_value.bias": "model-00040-of-00046.safetensors",
"gpt_neox.layers.39.attention.query_key_value.weight": "model-00040-of-00046.safetensors",
"gpt_neox.layers.39.attention.rotary_emb.inv_freq": "model-00040-of-00046.safetensors",
"gpt_neox.layers.39.input_layernorm.bias": "model-00040-of-00046.safetensors",
"gpt_neox.layers.39.input_layernorm.weight": "model-00040-of-00046.safetensors",
"gpt_neox.layers.39.mlp.dense_4h_to_h.bias": "model-00041-of-00046.safetensors",
"gpt_neox.layers.39.mlp.dense_4h_to_h.weight": "model-00041-of-00046.safetensors",
"gpt_neox.layers.39.mlp.dense_h_to_4h.bias": "model-00041-of-00046.safetensors",
"gpt_neox.layers.39.mlp.dense_h_to_4h.weight": "model-00041-of-00046.safetensors",
"gpt_neox.layers.39.post_attention_layernorm.bias": "model-00040-of-00046.safetensors",
"gpt_neox.layers.39.post_attention_layernorm.weight": "model-00040-of-00046.safetensors",
"gpt_neox.layers.4.attention.bias": "model-00005-of-00046.safetensors",
"gpt_neox.layers.4.attention.dense.bias": "model-00005-of-00046.safetensors",
"gpt_neox.layers.4.attention.dense.weight": "model-00005-of-00046.safetensors",
"gpt_neox.layers.4.attention.masked_bias": "model-00005-of-00046.safetensors",
"gpt_neox.layers.4.attention.query_key_value.bias": "model-00005-of-00046.safetensors",
"gpt_neox.layers.4.attention.query_key_value.weight": "model-00005-of-00046.safetensors",
"gpt_neox.layers.4.attention.rotary_emb.inv_freq": "model-00005-of-00046.safetensors",
"gpt_neox.layers.4.input_layernorm.bias": "model-00005-of-00046.safetensors",
"gpt_neox.layers.4.input_layernorm.weight": "model-00005-of-00046.safetensors",
"gpt_neox.layers.4.mlp.dense_4h_to_h.bias": "model-00006-of-00046.safetensors",
"gpt_neox.layers.4.mlp.dense_4h_to_h.weight": "model-00006-of-00046.safetensors",
"gpt_neox.layers.4.mlp.dense_h_to_4h.bias": "model-00006-of-00046.safetensors",
"gpt_neox.layers.4.mlp.dense_h_to_4h.weight": "model-00006-of-00046.safetensors",
"gpt_neox.layers.4.post_attention_layernorm.bias": "model-00005-of-00046.safetensors",
"gpt_neox.layers.4.post_attention_layernorm.weight": "model-00005-of-00046.safetensors",
"gpt_neox.layers.40.attention.bias": "model-00041-of-00046.safetensors",
"gpt_neox.layers.40.attention.dense.bias": "model-00041-of-00046.safetensors",
"gpt_neox.layers.40.attention.dense.weight": "model-00041-of-00046.safetensors",
"gpt_neox.layers.40.attention.masked_bias": "model-00041-of-00046.safetensors",
"gpt_neox.layers.40.attention.query_key_value.bias": "model-00041-of-00046.safetensors",
"gpt_neox.layers.40.attention.query_key_value.weight": "model-00041-of-00046.safetensors",
"gpt_neox.layers.40.attention.rotary_emb.inv_freq": "model-00041-of-00046.safetensors",
"gpt_neox.layers.40.input_layernorm.bias": "model-00041-of-00046.safetensors",
"gpt_neox.layers.40.input_layernorm.weight": "model-00041-of-00046.safetensors",
"gpt_neox.layers.40.mlp.dense_4h_to_h.bias": "model-00042-of-00046.safetensors",
"gpt_neox.layers.40.mlp.dense_4h_to_h.weight": "model-00042-of-00046.safetensors",
"gpt_neox.layers.40.mlp.dense_h_to_4h.bias": "model-00042-of-00046.safetensors",
"gpt_neox.layers.40.mlp.dense_h_to_4h.weight": "model-00042-of-00046.safetensors",
"gpt_neox.layers.40.post_attention_layernorm.bias": "model-00041-of-00046.safetensors",
"gpt_neox.layers.40.post_attention_layernorm.weight": "model-00041-of-00046.safetensors",
"gpt_neox.layers.41.attention.bias": "model-00042-of-00046.safetensors",
"gpt_neox.layers.41.attention.dense.bias": "model-00042-of-00046.safetensors",
"gpt_neox.layers.41.attention.dense.weight": "model-00042-of-00046.safetensors",
"gpt_neox.layers.41.attention.masked_bias": "model-00042-of-00046.safetensors",
"gpt_neox.layers.41.attention.query_key_value.bias": "model-00042-of-00046.safetensors",
"gpt_neox.layers.41.attention.query_key_value.weight": "model-00042-of-00046.safetensors",
"gpt_neox.layers.41.attention.rotary_emb.inv_freq": "model-00042-of-00046.safetensors",
"gpt_neox.layers.41.input_layernorm.bias": "model-00042-of-00046.safetensors",
"gpt_neox.layers.41.input_layernorm.weight": "model-00042-of-00046.safetensors",
"gpt_neox.layers.41.mlp.dense_4h_to_h.bias": "model-00043-of-00046.safetensors",
"gpt_neox.layers.41.mlp.dense_4h_to_h.weight": "model-00043-of-00046.safetensors",
"gpt_neox.layers.41.mlp.dense_h_to_4h.bias": "model-00043-of-00046.safetensors",
"gpt_neox.layers.41.mlp.dense_h_to_4h.weight": "model-00043-of-00046.safetensors",
"gpt_neox.layers.41.post_attention_layernorm.bias": "model-00042-of-00046.safetensors",
"gpt_neox.layers.41.post_attention_layernorm.weight": "model-00042-of-00046.safetensors",
"gpt_neox.layers.42.attention.bias": "model-00043-of-00046.safetensors",
"gpt_neox.layers.42.attention.dense.bias": "model-00043-of-00046.safetensors",
"gpt_neox.layers.42.attention.dense.weight": "model-00043-of-00046.safetensors",
"gpt_neox.layers.42.attention.masked_bias": "model-00043-of-00046.safetensors",
"gpt_neox.layers.42.attention.query_key_value.bias": "model-00043-of-00046.safetensors",
"gpt_neox.layers.42.attention.query_key_value.weight": "model-00043-of-00046.safetensors",
"gpt_neox.layers.42.attention.rotary_emb.inv_freq": "model-00043-of-00046.safetensors",
"gpt_neox.layers.42.input_layernorm.bias": "model-00043-of-00046.safetensors",
"gpt_neox.layers.42.input_layernorm.weight": "model-00043-of-00046.safetensors",
"gpt_neox.layers.42.mlp.dense_4h_to_h.bias": "model-00044-of-00046.safetensors",
"gpt_neox.layers.42.mlp.dense_4h_to_h.weight": "model-00044-of-00046.safetensors",
"gpt_neox.layers.42.mlp.dense_h_to_4h.bias": "model-00044-of-00046.safetensors",
"gpt_neox.layers.42.mlp.dense_h_to_4h.weight": "model-00044-of-00046.safetensors",
"gpt_neox.layers.42.post_attention_layernorm.bias": "model-00043-of-00046.safetensors",
"gpt_neox.layers.42.post_attention_layernorm.weight": "model-00043-of-00046.safetensors",
"gpt_neox.layers.43.attention.bias": "model-00044-of-00046.safetensors",
"gpt_neox.layers.43.attention.dense.bias": "model-00044-of-00046.safetensors",
"gpt_neox.layers.43.attention.dense.weight": "model-00044-of-00046.safetensors",
"gpt_neox.layers.43.attention.masked_bias": "model-00044-of-00046.safetensors",
"gpt_neox.layers.43.attention.query_key_value.bias": "model-00044-of-00046.safetensors",
"gpt_neox.layers.43.attention.query_key_value.weight": "model-00044-of-00046.safetensors",
"gpt_neox.layers.43.attention.rotary_emb.inv_freq": "model-00044-of-00046.safetensors",
"gpt_neox.layers.43.input_layernorm.bias": "model-00044-of-00046.safetensors",
"gpt_neox.layers.43.input_layernorm.weight": "model-00044-of-00046.safetensors",
"gpt_neox.layers.43.mlp.dense_4h_to_h.bias": "model-00045-of-00046.safetensors",
"gpt_neox.layers.43.mlp.dense_4h_to_h.weight": "model-00045-of-00046.safetensors",
"gpt_neox.layers.43.mlp.dense_h_to_4h.bias": "model-00045-of-00046.safetensors",
"gpt_neox.layers.43.mlp.dense_h_to_4h.weight": "model-00045-of-00046.safetensors",
"gpt_neox.layers.43.post_attention_layernorm.bias": "model-00044-of-00046.safetensors",
"gpt_neox.layers.43.post_attention_layernorm.weight": "model-00044-of-00046.safetensors",
"gpt_neox.layers.5.attention.bias": "model-00006-of-00046.safetensors",
"gpt_neox.layers.5.attention.dense.bias": "model-00006-of-00046.safetensors",
"gpt_neox.layers.5.attention.dense.weight": "model-00006-of-00046.safetensors",
"gpt_neox.layers.5.attention.masked_bias": "model-00006-of-00046.safetensors",
"gpt_neox.layers.5.attention.query_key_value.bias": "model-00006-of-00046.safetensors",
"gpt_neox.layers.5.attention.query_key_value.weight": "model-00006-of-00046.safetensors",
"gpt_neox.layers.5.attention.rotary_emb.inv_freq": "model-00006-of-00046.safetensors",
"gpt_neox.layers.5.input_layernorm.bias": "model-00006-of-00046.safetensors",
"gpt_neox.layers.5.input_layernorm.weight": "model-00006-of-00046.safetensors",
"gpt_neox.layers.5.mlp.dense_4h_to_h.bias": "model-00007-of-00046.safetensors",
"gpt_neox.layers.5.mlp.dense_4h_to_h.weight": "model-00007-of-00046.safetensors",
"gpt_neox.layers.5.mlp.dense_h_to_4h.bias": "model-00007-of-00046.safetensors",
"gpt_neox.layers.5.mlp.dense_h_to_4h.weight": "model-00007-of-00046.safetensors",
"gpt_neox.layers.5.post_attention_layernorm.bias": "model-00006-of-00046.safetensors",
"gpt_neox.layers.5.post_attention_layernorm.weight": "model-00006-of-00046.safetensors",
"gpt_neox.layers.6.attention.bias": "model-00007-of-00046.safetensors",
"gpt_neox.layers.6.attention.dense.bias": "model-00007-of-00046.safetensors",
"gpt_neox.layers.6.attention.dense.weight": "model-00007-of-00046.safetensors",
"gpt_neox.layers.6.attention.masked_bias": "model-00007-of-00046.safetensors",
"gpt_neox.layers.6.attention.query_key_value.bias": "model-00007-of-00046.safetensors",
"gpt_neox.layers.6.attention.query_key_value.weight": "model-00007-of-00046.safetensors",
"gpt_neox.layers.6.attention.rotary_emb.inv_freq": "model-00007-of-00046.safetensors",
"gpt_neox.layers.6.input_layernorm.bias": "model-00007-of-00046.safetensors",
"gpt_neox.layers.6.input_layernorm.weight": "model-00007-of-00046.safetensors",
"gpt_neox.layers.6.mlp.dense_4h_to_h.bias": "model-00008-of-00046.safetensors",
"gpt_neox.layers.6.mlp.dense_4h_to_h.weight": "model-00008-of-00046.safetensors",
"gpt_neox.layers.6.mlp.dense_h_to_4h.bias": "model-00008-of-00046.safetensors",
"gpt_neox.layers.6.mlp.dense_h_to_4h.weight": "model-00008-of-00046.safetensors",
"gpt_neox.layers.6.post_attention_layernorm.bias": "model-00007-of-00046.safetensors",
"gpt_neox.layers.6.post_attention_layernorm.weight": "model-00007-of-00046.safetensors",
"gpt_neox.layers.7.attention.bias": "model-00008-of-00046.safetensors",
"gpt_neox.layers.7.attention.dense.bias": "model-00008-of-00046.safetensors",
"gpt_neox.layers.7.attention.dense.weight": "model-00008-of-00046.safetensors",
"gpt_neox.layers.7.attention.masked_bias": "model-00008-of-00046.safetensors",
"gpt_neox.layers.7.attention.query_key_value.bias": "model-00008-of-00046.safetensors",
"gpt_neox.layers.7.attention.query_key_value.weight": "model-00008-of-00046.safetensors",
"gpt_neox.layers.7.attention.rotary_emb.inv_freq": "model-00008-of-00046.safetensors",
"gpt_neox.layers.7.input_layernorm.bias": "model-00008-of-00046.safetensors",
"gpt_neox.layers.7.input_layernorm.weight": "model-00008-of-00046.safetensors",
"gpt_neox.layers.7.mlp.dense_4h_to_h.bias": "model-00009-of-00046.safetensors",
"gpt_neox.layers.7.mlp.dense_4h_to_h.weight": "model-00009-of-00046.safetensors",
"gpt_neox.layers.7.mlp.dense_h_to_4h.bias": "model-00009-of-00046.safetensors",
"gpt_neox.layers.7.mlp.dense_h_to_4h.weight": "model-00009-of-00046.safetensors",
"gpt_neox.layers.7.post_attention_layernorm.bias": "model-00008-of-00046.safetensors",
"gpt_neox.layers.7.post_attention_layernorm.weight": "model-00008-of-00046.safetensors",
"gpt_neox.layers.8.attention.bias": "model-00009-of-00046.safetensors",
"gpt_neox.layers.8.attention.dense.bias": "model-00009-of-00046.safetensors",
"gpt_neox.layers.8.attention.dense.weight": "model-00009-of-00046.safetensors",
"gpt_neox.layers.8.attention.masked_bias": "model-00009-of-00046.safetensors",
"gpt_neox.layers.8.attention.query_key_value.bias": "model-00009-of-00046.safetensors",
"gpt_neox.layers.8.attention.query_key_value.weight": "model-00009-of-00046.safetensors",
"gpt_neox.layers.8.attention.rotary_emb.inv_freq": "model-00009-of-00046.safetensors",
"gpt_neox.layers.8.input_layernorm.bias": "model-00009-of-00046.safetensors",
"gpt_neox.layers.8.input_layernorm.weight": "model-00009-of-00046.safetensors",
"gpt_neox.layers.8.mlp.dense_4h_to_h.bias": "model-00010-of-00046.safetensors",
"gpt_neox.layers.8.mlp.dense_4h_to_h.weight": "model-00010-of-00046.safetensors",
"gpt_neox.layers.8.mlp.dense_h_to_4h.bias": "model-00010-of-00046.safetensors",
"gpt_neox.layers.8.mlp.dense_h_to_4h.weight": "model-00010-of-00046.safetensors",
"gpt_neox.layers.8.post_attention_layernorm.bias": "model-00009-of-00046.safetensors",
"gpt_neox.layers.8.post_attention_layernorm.weight": "model-00009-of-00046.safetensors",
"gpt_neox.layers.9.attention.bias": "model-00010-of-00046.safetensors",
"gpt_neox.layers.9.attention.dense.bias": "model-00010-of-00046.safetensors",
"gpt_neox.layers.9.attention.dense.weight": "model-00010-of-00046.safetensors",
"gpt_neox.layers.9.attention.masked_bias": "model-00010-of-00046.safetensors",
"gpt_neox.layers.9.attention.query_key_value.bias": "model-00010-of-00046.safetensors",
"gpt_neox.layers.9.attention.query_key_value.weight": "model-00010-of-00046.safetensors",
"gpt_neox.layers.9.attention.rotary_emb.inv_freq": "model-00010-of-00046.safetensors",
"gpt_neox.layers.9.input_layernorm.bias": "model-00010-of-00046.safetensors",
"gpt_neox.layers.9.input_layernorm.weight": "model-00010-of-00046.safetensors",
"gpt_neox.layers.9.mlp.dense_4h_to_h.bias": "model-00011-of-00046.safetensors",
"gpt_neox.layers.9.mlp.dense_4h_to_h.weight": "model-00011-of-00046.safetensors",
"gpt_neox.layers.9.mlp.dense_h_to_4h.bias": "model-00011-of-00046.safetensors",
"gpt_neox.layers.9.mlp.dense_h_to_4h.weight": "model-00011-of-00046.safetensors",
"gpt_neox.layers.9.post_attention_layernorm.bias": "model-00010-of-00046.safetensors",
"gpt_neox.layers.9.post_attention_layernorm.weight": "model-00010-of-00046.safetensors"
}
}
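The `weight_map` above is the standard safetensors sharded-checkpoint index: each tensor name maps to the shard file that stores it, so a loader only needs to open the shards containing the tensors it wants. A minimal sketch (not part of this repo; the dict below is a tiny excerpt of the full index) of how such a map is typically inverted to plan shard reads:

```python
from collections import defaultdict

# Small excerpt of the weight_map from model.safetensors.index.json above.
weight_map = {
    "gpt_neox.layers.4.input_layernorm.weight": "model-00005-of-00046.safetensors",
    "gpt_neox.layers.4.mlp.dense_h_to_4h.weight": "model-00006-of-00046.safetensors",
    "gpt_neox.layers.5.attention.dense.weight": "model-00006-of-00046.safetensors",
}

def tensors_by_shard(wmap):
    """Invert the index: shard filename -> list of tensor names stored in it."""
    by_shard = defaultdict(list)
    for tensor, shard in wmap.items():
        by_shard[shard].append(tensor)
    return dict(by_shard)

shards = tensors_by_shard(weight_map)
print(len(shards["model-00006-of-00046.safetensors"]))  # 2
```

In practice a framework loader (e.g. `transformers`) performs this grouping internally so each of the 46 shard files is opened at most once.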

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:91a6926dd4e27c801194ab9b697e6b739d4312cbd7d081ee020f5ca75607743c
size 925994625
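Each three-line stub like the one above is a Git LFS pointer: the repo stores only the `version`, the `oid` (a sha256 of the real content), and the `size` in bytes, while the actual shard lives on the LFS server. A minimal sketch (not part of this repo) of parsing a pointer into its fields:

```python
def parse_lfs_pointer(text):
    """Parse a Git LFS pointer file into a dict of its key/value fields."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

# The first pointer in this diff, verbatim.
pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:91a6926dd4e27c801194ab9b697e6b739d4312cbd7d081ee020f5ca75607743c
size 925994625
"""
info = parse_lfs_pointer(pointer)
print(info["size"])   # byte size of the real shard file
print(info["oid"])    # algorithm-prefixed content hash
```

After download, a client can verify integrity by hashing the blob and comparing it against the `oid` value.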

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a96a36c8efc68a1d6c5de03b8c78625422edbf3a0b3d7c88650a8cd409ebbd73
size 910328184

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:582e2bbbbd7c8b91de2c8fe2b292610e6b5a74966a7e2398645a38dbda2ed71f
size 910328184

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:dad2df6d880f4a64ee2c530c0b51c6be4600f2463fa69d06d668b18904e3d5a3
size 910328184

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:2dd1111e7f207d4a7720a8723cf5d2c2dabb77e509a57ce41a28ee630b0e8353
size 910328184

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c7de215cb446cb678dc7e4e7df5b18cc16b79fdf08e41a2738491ff9cc57e37a
size 910328184

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:53b30b05245b1ad580dd2d304a760fd93bb80cd6e67e3e497b0140abb23c8f84
size 910328184

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f4b0850c82bc82ada8e092e7bd9a782737391faa8a844c32f1d206c9d5cc1665
size 910328184

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:19b9c1d811efc0361c8607bd3a52f0474478b5871bbf70746af8dc8cc7b7569d
size 910328184

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:343cfe1a395d55c148034e13cb3a0470baa68e4d0711412a2d6ad22574e2e9ea
size 910328184

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:089530ad755e6ec58b10d4a977b960bff4ade4346c0b0b9ebf2063cde42af0fd
size 910328184

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7040b3aadd06bfc8b4175f8273d86f22a8eb7c7b1a5cd9991c2678290b610a36
size 910328184

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:50d7a789188146fc4368c0f4b1574dd7933ec8cc4ecb809aebca90a70c0fca91
size 910328184

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:4ba306de96a20abccb2e99048614debd63557228c3f9787ca7d6cc445397a9c2
size 910328184

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:5105b09e42b2eacaf1985ce37811913dca02446169cff6ece20cbf8cd6803a78
size 910328184

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:41c5345d12c491aede73bd79dcda8093badddd1ab393413de1be377aed044c34
size 910328184

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:73a3f74f1e0ce412f6b06c172b068fd9dcefbfdff60a1e3242d1a56a58441e8a
size 910328184

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c17a90b84c957246da45835e6b88cc496430e24af18f74d5219cebfb6ca89bf5
size 910328184

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ce0eab7c8271f3b267eb84cc90d0e6667a40060fc5c751f0bbe7edd2c2f1e32f
size 910328184

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:4a3e63161295d36203577c2226e217dffdbf3bbbb99f883ff64e11e7b2cb1fb9
size 910328184

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:260f41230c97ce26e787650aa225cc65d381b4e17485cbf6e2cef31e99d0f2dc
size 910328184

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:43392e9228f83d32455920c0517f3f288049340236fc9cb7d660f61de0705d6d
size 910328184

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:11fb9cefd82632103450852c85eef713d9dfdd0c14451c84a909e3b335db5ddc
size 910328184

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:0ff5af200c2682206e7076337820a576ad125f97cad398513c73c0febac61eb7
size 910328184

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e38d80739d11771d6d2ac50574f5c7c5a980da47dd204ff320af548e3305aed3
size 910328184

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ead5458cf979a9e311e1d1187605caa0d630b39f2bce616f1a7e43b70a565915
size 910328184

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:34cfb1757f7e4fbc933fe56d442f297fe64763ee95576da09e1a9e6718e7de22
size 910328184

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:4090aa141fb5d6a00cefc15d5f55cee135db352ebcca1ce6829da65acdefacbc
size 910328184

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:55b81d9a2491ff2054b2e502207e96009308259b25da2efab9e3073ae3fef9fb
size 910328184

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:eaab6b1b9b16640dedca882c7eba8a3991a769462fc267215499241bc48d3732
size 910328184

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:9ad862519bb27c0417b0ee46b9ffa47a7b7c219034a254881e054cc79b7d3b2a
size 910328184

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:23752ed93b429d111dc4d75759731887cddb9f1a6717190778e22f37d6c1ef8d
size 910328184

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:9d0842b355f997dc12d5c9c90297936919ed2b1223fd8fd5f00afbfcd28db979
size 910328184

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7275a1541dd2a22297f878bdb5dff0cd41c327e61445e8ba6fd7f4b8ed51b6f9
size 910328184

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d7b1e3ead65e8867953b00685531c42b3968fb317eb21286a4489f2d33c9424f
size 910328184

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:92801ef5b1dd5f959f94b79233c461343319bd7fd881cc5d1642cd97675352b4
size 910328184

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:080b0ab69fdc85af7f0fac13bf89d402980ee1e93812545e6ec85253040fc9ea
size 910328184

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:603b045908ca1044109a3c13da7ce8f9f633a015b7717adc2ca72db186beb58e
size 910328184

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:9823f13162ca5b21532930863497d82a79788a2842c77d0538a36b8d86ec9573
size 910328184

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a231cae597ce6e732429e9f589d2b1face5755df083b999eb2688481757cb4af
size 910328184

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:bc909def52bbd51a6b6bb34f29bad67383a5ce0a5947a2fc26a9afcf5f705471
size 910328184

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:2ac94d1b42f8ffd9b34121a797dc8371e405d3b4bbc1ad8003560da82a13205a
size 910328184

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:36259a4252eae0d31d14513bb6b1e9e408aa416abda2347f397b333ae4ee0a41
size 910328184

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c2c40045b5342d2e4643cea8b7db300f529064df796a08f38b4b7df08ca67f96
size 910328184

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a7baf394716643e2873b74cc823f39cc563ad1cdaa918b4b91cc7f1b42d7b68b
size 604067735

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:925dc7a0c53a3487686cea905b52f2117d14fb3ff18e8fe1b544895a9c69fce4
size 619709163

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:044586635a1c8d8057638652ef2efe621b98bc982e04549cba70414306b45182
size 57712

special_tokens_map.json Normal file
@@ -0,0 +1 @@
{"bos_token": "<|endoftext|>", "eos_token": "<|endoftext|>", "unk_token": "<|endoftext|>"}
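The special tokens map above is a one-line JSON file consumed by tokenizer loaders. A minimal sketch (not part of this repo) of reading it directly, which also makes explicit that GPT-NeoX reuses a single token for BOS, EOS, and UNK:

```python
import json

# The special_tokens_map.json added in this commit, verbatim.
raw = '{"bos_token": "<|endoftext|>", "eos_token": "<|endoftext|>", "unk_token": "<|endoftext|>"}'
tokens = json.loads(raw)

# GPT-NeoX uses one token for all three roles.
assert tokens["bos_token"] == tokens["eos_token"] == tokens["unk_token"]
print(tokens["eos_token"])  # <|endoftext|>
```

In normal use this file is not read by hand; `AutoTokenizer.from_pretrained` picks it up alongside `tokenizer.json`.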

tokenizer.json Normal file

File diff suppressed because it is too large

Some files were not shown because too many files have changed in this diff