Initialize project; model provided by the ModelHub XC community

Model: TheBloke/CodeLlama-13B-Instruct-fp16
Source: Original Platform
ModelHub XC
2026-05-05 11:35:37 +08:00
commit cadf9c55d2
18 changed files with 95310 additions and 0 deletions

35
.gitattributes vendored Normal file

@@ -0,0 +1,35 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text

1
LICENSE Normal file

@@ -0,0 +1 @@
Please refer to license: https://github.com/facebookresearch/llama/blob/main/LICENSE

52
MODEL_CARD.md Normal file

@@ -0,0 +1,52 @@
# Code Llama
## **Model Details**
**Model Developers** Meta AI
**Variations** Code Llama comes in three model sizes, and three variants:
1) Code Llama: our base models designed for general code synthesis and understanding
2) Code Llama - Python: designed specifically for Python
3) Code Llama - Instruct: for instruction following and safer deployment
All variants are available in sizes of 7B, 13B and 34B parameters.
**Input** Models input text only.
**Output** Models output text only.
**Model Architecture** Code Llama and its variants are autoregressive language models using optimized transformer architectures. Code Llama 7B and 13B additionally support infilling text generation. All models were fine-tuned with up to 16K tokens, and support up to 100K tokens at inference time.
**Model Dates** Code Llama and its variants have been trained between January 2023 and July 2023.
**Status** This is a static model trained on an offline dataset. Future versions of Code Llama - Instruct will be released as we improve model safety with community feedback.
**Licence** A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/).
**Research Paper** More information can be found in the paper "[Code Llama: Open Foundation Models for Code](https://ai.meta.com/research/publications/code-llama-open-foundation-models-for-code/)".
**Where to send comments** Instructions on how to provide feedback or comments on the model can be found in the model [README](README.md), or by opening an issue in the GitHub repository ([https://github.com/facebookresearch/codellama/](https://github.com/facebookresearch/codellama/)).
## **Intended Use**
**Intended Use Cases** Code Llama and its variants are intended for commercial and research use in English and relevant programming languages. The base model Code Llama can be adapted for a variety of code synthesis and understanding tasks, Code Llama - Python is designed specifically to handle the Python programming language, and Code Llama - Instruct is intended to be safer to use for code assistant and generation applications.
**Out-of-Scope Uses** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Code Llama and its variants.
## **Hardware and Software**
**Training Factors**
We used custom training libraries. The training and fine-tuning of the released models have been performed on Meta's Research Super Cluster.
**Carbon Footprint** In aggregate, training all 9 Code Llama models required 400K GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W). Estimated total emissions were 65.3 tCO2eq, 100% of which were offset by Meta's sustainability program.
**Training data**
All experiments reported here and the released models have been trained and fine-tuned using the same data as Llama 2 with different weights (see Section 2 and Table 1 in the [research paper](https://ai.meta.com/research/publications/code-llama-open-foundation-models-for-code/) for details).
Code Llama - Instruct uses additional instruction fine-tuning data.
**Evaluation Results**
See evaluations for the main models and detailed ablations in Section 3 and safety evaluations in Section 4 of the research paper.
## **Ethical Considerations and Limitations**
Code Llama and its variants are a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Code Llama's potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate or objectionable responses to user prompts. Therefore, before deploying any applications of Code Llama, developers should perform safety testing and tuning tailored to their specific applications of the model.
Please see the Responsible Use Guide available at [https://ai.meta.com/llama/responsible-user-guide](https://ai.meta.com/llama/responsible-user-guide).

126
README.md Normal file

@@ -0,0 +1,126 @@
---
license: llama2
tags:
- llama-2
- codellama
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# CodeLlama 13B-Instruct fp16
- Model creator: [Meta](https://ai.meta.com/llama/)
## Description
These are the Transformers/HF format fp16 weights for CodeLlama 13B-Instruct. They are the result of downloading CodeLlama 13B-Instruct from [Meta](https://ai.meta.com/blog/code-llama-large-language-model-coding/) and converting to HF using `convert_llama_weights_to_hf.py`.
Quantisations will be coming shortly.
Please note that due to a change in the RoPE Theta value, you must load these fp16 models with `trust_remote_code=True` for correct results.
Credit to @emozilla for creating the necessary modelling code to achieve this!
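For example, a minimal loading sketch (assuming `transformers`, `torch` and, for `device_map="auto"`, `accelerate` are installed; the prompt below is only a placeholder, since the prompt template is TBC):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TheBloke/CodeLlama-13B-Instruct-fp16"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    trust_remote_code=True,  # required: picks up the repo's modeling code with the new RoPE Theta
    device_map="auto",
)

# Prompt template for this model is TBC; a plain instruction is used for illustration.
prompt = "Write a Python function that reverses a string."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```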
## Prompt template: TBC
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Sam, theTransient, Jonathan Leane, Steven Wood, webtim, Johann-Peter Hartmann, Geoffrey Montalvo, Gabriel Tamborski, Willem Michiel, John Villwock, Derek Yates, Mesiah Bishop, Eugene Pentland, Pieter, Chadd, Stephen Murray, Daniel P. Andersen, terasurfer, Brandon Frisco, Thomas Belote, Sid, Nathan LeClaire, Magnesian, Alps Aficionado, Stanislav Ovsiannikov, Alex, Joseph William Delisle, Nikolai Manek, Michael Davis, Junyu Yang, K, J, Spencer Kim, Stefan Sabev, Olusegun Samson, transmissions 11, Michael Levine, Cory Kujawski, Rainer Wilmers, zynix, Kalila, Luke @flexchar, Ajan Kanaga, Mandus, vamX, Ai Maven, Mano Prime, Matthew Berman, subjectnull, Vitor Caleffi, Clay Pascal, biorpg, alfie_i, 阿明, Jeffrey Morgan, ya boyyy, Raymond Fosdick, knownsqashed, Olakabola, Leonard Tan, ReadyPlayerEmma, Enrico Ros, Dave, Talal Aujan, Illia Dulskyi, Sean Connelly, senxiiz, Artur Olbinski, Elle, Raven Klaugh, Fen Risland, Deep Realms, Imad Khwaja, Fred von Graf, Will Dee, usrbinkat, SuperWojo, Alexandros Triantafyllidis, Swaroop Kallakuri, Dan Guido, John Detwiler, Pedro Madruga, Iucharbius, Viktor Bowallius, Asp the Wyvern, Edmond Seymore, Trenton Dambrowitz, Space Cruiser, Spiking Neurons AB, Pyrater, LangChain4j, Tony Hughes, Kacper Wikieł, Rishabh Srivastava, David Ziegler, Luke Pendergrass, Andrey, Gabriel Puliatti, Lone Striker, Sebastain Graf, Pierre Kircher, Randy H, NimbleBox.ai, Vadim, danny, Deo Leter
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card
# Code Llama
## **Model Details**
**Model Developers** Meta AI
**Variations** Code Llama comes in three model sizes, and three variants:
1) Code Llama: our base models designed for general code synthesis and understanding
2) Code Llama - Python: designed specifically for Python
3) Code Llama - Instruct: for instruction following and safer deployment
All variants are available in sizes of 7B, 13B and 34B parameters.
**Input** Models input text only.
**Output** Models output text only.
**Model Architecture** Code Llama and its variants are autoregressive language models using optimized transformer architectures. Code Llama 7B and 13B additionally support infilling text generation. All models were fine-tuned with up to 16K tokens, and support up to 100K tokens at inference time.
**Model Dates** Code Llama and its variants have been trained between January 2023 and July 2023.
**Status** This is a static model trained on an offline dataset. Future versions of Code Llama - Instruct will be released as we improve model safety with community feedback.
**Licence** A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/).
**Research Paper** More information can be found in the paper "[Code Llama: Open Foundation Models for Code](https://ai.meta.com/research/publications/code-llama-open-foundation-models-for-code/)".
**Where to send comments** Instructions on how to provide feedback or comments on the model can be found in the model [README](README.md), or by opening an issue in the GitHub repository ([https://github.com/facebookresearch/codellama/](https://github.com/facebookresearch/codellama/)).
## **Intended Use**
**Intended Use Cases** Code Llama and its variants are intended for commercial and research use in English and relevant programming languages. The base model Code Llama can be adapted for a variety of code synthesis and understanding tasks, Code Llama - Python is designed specifically to handle the Python programming language, and Code Llama - Instruct is intended to be safer to use for code assistant and generation applications.
**Out-of-Scope Uses** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Code Llama and its variants.
## **Hardware and Software**
**Training Factors**
We used custom training libraries. The training and fine-tuning of the released models have been performed on Meta's Research Super Cluster.
**Carbon Footprint** In aggregate, training all 9 Code Llama models required 400K GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W). Estimated total emissions were 65.3 tCO2eq, 100% of which were offset by Meta's sustainability program.
**Training data**
All experiments reported here and the released models have been trained and fine-tuned using the same data as Llama 2 with different weights (see Section 2 and Table 1 in the [research paper](https://ai.meta.com/research/publications/code-llama-open-foundation-models-for-code/) for details).
Code Llama - Instruct uses additional instruction fine-tuning data.
**Evaluation Results**
See evaluations for the main models and detailed ablations in Section 3 and safety evaluations in Section 4 of the research paper.
## **Ethical Considerations and Limitations**
Code Llama and its variants are a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Code Llama's potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate or objectionable responses to user prompts. Therefore, before deploying any applications of Code Llama, developers should perform safety testing and tuning tailored to their specific applications of the model.
Please see the Responsible Use Guide available at [https://ai.meta.com/llama/responsible-user-guide](https://ai.meta.com/llama/responsible-user-guide).

1
USE_POLICY.md Normal file

@@ -0,0 +1 @@
Please refer to acceptable use policy: https://github.com/facebookresearch/llama/blob/main/USE_POLICY.md

32
config.json Normal file

@@ -0,0 +1,32 @@
{
"architectures": [
"LlamaForCausalLM"
],
"bos_token_id": 1,
"eos_token_id": 2,
"hidden_act": "silu",
"hidden_size": 5120,
"initializer_range": 0.02,
"intermediate_size": 13824,
"max_position_embeddings": 16384,
"model_type": "llama",
"num_attention_heads": 40,
"num_hidden_layers": 40,
"num_key_value_heads": 40,
"pretraining_tp": 1,
"rms_norm_eps": 1e-05,
"rope_scaling": null,
"tie_word_embeddings": false,
"torch_dtype": "float16",
"transformers_version": "4.32.0",
"use_cache": true,
"vocab_size": 32016,
"auto_map": {
"AutoConfig": "configuration_llama.LlamaConfig",
"AutoModel": "modeling_llama.LlamaModel",
"AutoModelForCausalLM": "modeling_llama.LlamaForCausalLM",
"AutoModelForSequenceClassification": "modeling_llama.LlamaForSequenceClassification"
},
"rope_theta": 1000000,
"pad_token_id": 0
}
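A small sketch (assuming `transformers` is installed) of inspecting this config without downloading the weights; `trust_remote_code=True` lets `AutoConfig` resolve the `auto_map` entry above to the repo's `configuration_llama.py`:

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained(
    "TheBloke/CodeLlama-13B-Instruct-fp16",
    trust_remote_code=True,
)
print(config.rope_theta)               # 1000000 (Code Llama RoPE base)
print(config.max_position_embeddings)  # 16384
print(config.vocab_size)               # 32016
```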

1
configuration.json Normal file

@@ -0,0 +1 @@
{"framework": "pytorch", "task": "text-generation", "allow_remote": true}

176
configuration_llama.py Normal file

@@ -0,0 +1,176 @@
# coding=utf-8
# Copyright 2022 EleutherAI and the HuggingFace Inc. team. All rights reserved.
#
# This code is based on EleutherAI's GPT-NeoX library and the GPT-NeoX
# and OPT implementations in this library. It has been modified from its
# original forms to accommodate minor architectural differences compared
# to GPT-NeoX and OPT used by the Meta AI team that trained the model.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
""" LLaMA model configuration"""
from transformers.configuration_utils import PretrainedConfig
from transformers.utils import logging
logger = logging.get_logger(__name__)
LLAMA_PRETRAINED_CONFIG_ARCHIVE_MAP = {}
class LlamaConfig(PretrainedConfig):
r"""
This is the configuration class to store the configuration of a [`LlamaModel`]. It is used to instantiate an LLaMA
model according to the specified arguments, defining the model architecture. Instantiating a configuration with the
defaults will yield a similar configuration to that of the LLaMA-7B.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
vocab_size (`int`, *optional*, defaults to 32000):
Vocabulary size of the LLaMA model. Defines the number of different tokens that can be represented by the
`inputs_ids` passed when calling [`LlamaModel`]
hidden_size (`int`, *optional*, defaults to 4096):
Dimension of the hidden representations.
intermediate_size (`int`, *optional*, defaults to 11008):
Dimension of the MLP representations.
num_hidden_layers (`int`, *optional*, defaults to 32):
Number of hidden layers in the Transformer encoder.
num_attention_heads (`int`, *optional*, defaults to 32):
Number of attention heads for each attention layer in the Transformer encoder.
num_key_value_heads (`int`, *optional*):
This is the number of key_value heads that should be used to implement Grouped Query Attention. If
`num_key_value_heads=num_attention_heads`, the model will use Multi Head Attention (MHA), if
`num_key_value_heads=1` the model will use Multi Query Attention (MQA), otherwise GQA is used. When
converting a multi-head checkpoint to a GQA checkpoint, each group key and value head should be constructed
by meanpooling all the original heads within that group. For more details checkout [this
paper](https://arxiv.org/pdf/2305.13245.pdf). If it is not specified, will default to
`num_attention_heads`.
pretraining_tp (`int`, *optional*, defaults to `1`):
Experimental feature. Tensor parallelism rank used during pretraining. Please refer to [this
document](https://huggingface.co/docs/transformers/parallelism) to understand more about it. This value is
necessary to ensure exact reproducibility of the pretraining results. Please refer to [this
issue](https://github.com/pytorch/pytorch/issues/76232).
hidden_act (`str` or `function`, *optional*, defaults to `"silu"`):
The non-linear activation function (function or string) in the decoder.
max_position_embeddings (`int`, *optional*, defaults to 2048):
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048).
initializer_range (`float`, *optional*, defaults to 0.02):
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
rms_norm_eps (`float`, *optional*, defaults to 1e-6):
The epsilon used by the rms normalization layers.
use_cache (`bool`, *optional*, defaults to `True`):
Whether or not the model should return the last key/values attentions (not used by all models). Only
relevant if `config.is_decoder=True`.
tie_word_embeddings(`bool`, *optional*, defaults to `False`):
Whether to tie weight embeddings
rope_scaling (`Dict`, *optional*):
Dictionary containing the scaling configuration for the RoPE embeddings. Currently supports two scaling
strategies: linear and dynamic. Their scaling factor must be a float greater than 1. The expected format
is `{"type": strategy name, "factor": scaling factor}`. When using this flag, don't update
`max_position_embeddings` to the expected new maximum. See the following thread for more information on how
these scaling strategies behave:
https://www.reddit.com/r/LocalLLaMA/comments/14mrgpr/dynamically_scaled_rope_further_increases/. This is an
experimental feature, subject to breaking API changes in future versions.
Example:
```python
>>> from transformers import LlamaModel, LlamaConfig
>>> # Initializing a LLaMA llama-7b style configuration
>>> configuration = LlamaConfig()
>>> # Initializing a model from the llama-7b style configuration
>>> model = LlamaModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
```"""
model_type = "llama"
keys_to_ignore_at_inference = ["past_key_values"]
def __init__(
self,
vocab_size=32000,
hidden_size=4096,
intermediate_size=11008,
num_hidden_layers=32,
num_attention_heads=32,
num_key_value_heads=None,
hidden_act="silu",
max_position_embeddings=2048,
initializer_range=0.02,
rms_norm_eps=1e-6,
use_cache=True,
pad_token_id=None,
bos_token_id=1,
eos_token_id=2,
pretraining_tp=1,
tie_word_embeddings=False,
rope_scaling=None,
rope_theta=10000,
**kwargs,
):
self.vocab_size = vocab_size
self.max_position_embeddings = max_position_embeddings
self.hidden_size = hidden_size
self.intermediate_size = intermediate_size
self.num_hidden_layers = num_hidden_layers
self.num_attention_heads = num_attention_heads
# for backward compatibility
if num_key_value_heads is None:
num_key_value_heads = num_attention_heads
self.num_key_value_heads = num_key_value_heads
self.hidden_act = hidden_act
self.initializer_range = initializer_range
self.rms_norm_eps = rms_norm_eps
self.pretraining_tp = pretraining_tp
self.use_cache = use_cache
self.rope_scaling = rope_scaling
self._rope_scaling_validation()
self.rope_theta = rope_theta
super().__init__(
pad_token_id=pad_token_id,
bos_token_id=bos_token_id,
eos_token_id=eos_token_id,
tie_word_embeddings=tie_word_embeddings,
**kwargs,
)
def _rope_scaling_validation(self):
"""
Validate the `rope_scaling` configuration.
"""
if self.rope_scaling is None:
return
if not isinstance(self.rope_scaling, dict) or len(self.rope_scaling) != 2:
raise ValueError(
"`rope_scaling` must be a dictionary with with two fields, `name` and `factor`, "
f"got {self.rope_scaling}"
)
rope_scaling_type = self.rope_scaling.get("type", None)
rope_scaling_factor = self.rope_scaling.get("factor", None)
if rope_scaling_type is None or rope_scaling_type not in ["linear", "dynamic"]:
raise ValueError(
f"`rope_scaling`'s name field must be one of ['linear', 'dynamic'], got {rope_scaling_type}"
)
if rope_scaling_factor is None or not isinstance(rope_scaling_factor, float) or rope_scaling_factor <= 1.0:
raise ValueError(f"`rope_scaling`'s factor field must be an float > 1, got {rope_scaling_factor}")

7
generation_config.json Normal file

@@ -0,0 +1,7 @@
{
"_from_model_config": true,
"pad_token_id": 0,
"bos_token_id": 1,
"eos_token_id": 2,
"transformers_version": "4.32.0"
}

3
model-00001-of-00003.safetensors Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:3aca61c007ac3bc3fa081b72dd60278df7ddfe0357a30da23e547378ac928b7c
size 9948851728

3
model-00002-of-00003.safetensors Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a168786b4845c7cf4e9740f80c799734c9b18f12ad06fa22bc77fa99552a021c
size 9904123616

3
model-00003-of-00003.safetensors Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d25e6e8a2fb5964142a145d16a7cfea08676a140b30749c13a85023d19213d52
size 6179122880

370
model.safetensors.index.json Normal file

@@ -0,0 +1,370 @@
{
"metadata": {
"total_size": 26032056320
},
"weight_map": {
"lm_head.weight": "model-00003-of-00003.safetensors",
"model.embed_tokens.weight": "model-00001-of-00003.safetensors",
"model.layers.0.input_layernorm.weight": "model-00001-of-00003.safetensors",
"model.layers.0.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.0.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.0.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.0.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
"model.layers.0.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.0.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.0.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.0.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.1.input_layernorm.weight": "model-00001-of-00003.safetensors",
"model.layers.1.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.1.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.1.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.1.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
"model.layers.1.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.1.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.1.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.1.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.10.input_layernorm.weight": "model-00001-of-00003.safetensors",
"model.layers.10.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.10.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.10.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.10.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
"model.layers.10.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.10.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.10.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.10.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.11.input_layernorm.weight": "model-00001-of-00003.safetensors",
"model.layers.11.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.11.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.11.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.11.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
"model.layers.11.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.11.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.11.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.11.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.12.input_layernorm.weight": "model-00001-of-00003.safetensors",
"model.layers.12.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.12.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.12.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.12.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
"model.layers.12.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.12.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.12.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.12.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.13.input_layernorm.weight": "model-00001-of-00003.safetensors",
"model.layers.13.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.13.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.13.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.13.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
"model.layers.13.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.13.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.13.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.13.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.14.input_layernorm.weight": "model-00001-of-00003.safetensors",
"model.layers.14.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.14.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.14.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.14.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
"model.layers.14.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.14.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.14.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.14.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.15.input_layernorm.weight": "model-00002-of-00003.safetensors",
"model.layers.15.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.15.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.15.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.15.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
"model.layers.15.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.15.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.15.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.15.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.16.input_layernorm.weight": "model-00002-of-00003.safetensors",
"model.layers.16.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.16.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.16.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.16.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
"model.layers.16.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.16.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.16.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.16.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.17.input_layernorm.weight": "model-00002-of-00003.safetensors",
"model.layers.17.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.17.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.17.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.17.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
"model.layers.17.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.17.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.17.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.17.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.18.input_layernorm.weight": "model-00002-of-00003.safetensors",
"model.layers.18.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.18.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.18.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.18.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
"model.layers.18.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.18.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.18.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.18.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.19.input_layernorm.weight": "model-00002-of-00003.safetensors",
"model.layers.19.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.19.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.19.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.19.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
"model.layers.19.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.19.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.19.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.19.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.2.input_layernorm.weight": "model-00001-of-00003.safetensors",
"model.layers.2.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.2.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.2.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.2.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
"model.layers.2.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.2.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.2.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.2.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.20.input_layernorm.weight": "model-00002-of-00003.safetensors",
"model.layers.20.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.20.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.20.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.20.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
"model.layers.20.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.20.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.20.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.20.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.21.input_layernorm.weight": "model-00002-of-00003.safetensors",
"model.layers.21.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.21.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.21.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.21.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
"model.layers.21.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.21.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.21.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.21.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.22.input_layernorm.weight": "model-00002-of-00003.safetensors",
"model.layers.22.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.22.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.22.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.22.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
"model.layers.22.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.22.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.22.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.22.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.23.input_layernorm.weight": "model-00002-of-00003.safetensors",
"model.layers.23.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.23.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.23.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.23.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
"model.layers.23.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.23.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.23.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.23.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.24.input_layernorm.weight": "model-00002-of-00003.safetensors",
"model.layers.24.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.24.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.24.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.24.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
"model.layers.24.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.24.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.24.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.24.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.25.input_layernorm.weight": "model-00002-of-00003.safetensors",
"model.layers.25.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.25.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.25.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.25.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
"model.layers.25.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.25.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.25.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.25.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.26.input_layernorm.weight": "model-00002-of-00003.safetensors",
"model.layers.26.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.26.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.26.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.26.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
"model.layers.26.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.26.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.26.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.26.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.27.input_layernorm.weight": "model-00002-of-00003.safetensors",
"model.layers.27.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.27.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.27.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.27.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
"model.layers.27.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.27.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.27.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.27.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.28.input_layernorm.weight": "model-00002-of-00003.safetensors",
"model.layers.28.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.28.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.28.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.28.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
"model.layers.28.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.28.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.28.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.28.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.29.input_layernorm.weight": "model-00002-of-00003.safetensors",
"model.layers.29.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.29.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.29.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.29.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
"model.layers.29.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.29.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.29.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.29.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.3.input_layernorm.weight": "model-00001-of-00003.safetensors",
"model.layers.3.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.3.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.3.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.3.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
"model.layers.3.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.3.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.3.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.3.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.30.input_layernorm.weight": "model-00003-of-00003.safetensors",
"model.layers.30.mlp.down_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.30.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.30.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.30.post_attention_layernorm.weight": "model-00003-of-00003.safetensors",
"model.layers.30.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.30.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.30.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.30.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
"model.layers.31.input_layernorm.weight": "model-00003-of-00003.safetensors",
"model.layers.31.mlp.down_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.31.mlp.gate_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.31.mlp.up_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.31.post_attention_layernorm.weight": "model-00003-of-00003.safetensors",
"model.layers.31.self_attn.k_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.31.self_attn.o_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.31.self_attn.q_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.31.self_attn.v_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.32.input_layernorm.weight": "model-00003-of-00003.safetensors",
"model.layers.32.mlp.down_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.32.mlp.gate_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.32.mlp.up_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.32.post_attention_layernorm.weight": "model-00003-of-00003.safetensors",
"model.layers.32.self_attn.k_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.32.self_attn.o_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.32.self_attn.q_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.32.self_attn.v_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.33.input_layernorm.weight": "model-00003-of-00003.safetensors",
"model.layers.33.mlp.down_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.33.mlp.gate_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.33.mlp.up_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.33.post_attention_layernorm.weight": "model-00003-of-00003.safetensors",
"model.layers.33.self_attn.k_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.33.self_attn.o_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.33.self_attn.q_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.33.self_attn.v_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.34.input_layernorm.weight": "model-00003-of-00003.safetensors",
"model.layers.34.mlp.down_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.34.mlp.gate_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.34.mlp.up_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.34.post_attention_layernorm.weight": "model-00003-of-00003.safetensors",
"model.layers.34.self_attn.k_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.34.self_attn.o_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.34.self_attn.q_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.34.self_attn.v_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.35.input_layernorm.weight": "model-00003-of-00003.safetensors",
"model.layers.35.mlp.down_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.35.mlp.gate_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.35.mlp.up_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.35.post_attention_layernorm.weight": "model-00003-of-00003.safetensors",
"model.layers.35.self_attn.k_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.35.self_attn.o_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.35.self_attn.q_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.35.self_attn.v_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.36.input_layernorm.weight": "model-00003-of-00003.safetensors",
"model.layers.36.mlp.down_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.36.mlp.gate_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.36.mlp.up_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.36.post_attention_layernorm.weight": "model-00003-of-00003.safetensors",
"model.layers.36.self_attn.k_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.36.self_attn.o_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.36.self_attn.q_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.36.self_attn.v_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.37.input_layernorm.weight": "model-00003-of-00003.safetensors",
"model.layers.37.mlp.down_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.37.mlp.gate_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.37.mlp.up_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.37.post_attention_layernorm.weight": "model-00003-of-00003.safetensors",
"model.layers.37.self_attn.k_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.37.self_attn.o_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.37.self_attn.q_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.37.self_attn.v_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.38.input_layernorm.weight": "model-00003-of-00003.safetensors",
"model.layers.38.mlp.down_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.38.mlp.gate_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.38.mlp.up_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.38.post_attention_layernorm.weight": "model-00003-of-00003.safetensors",
"model.layers.38.self_attn.k_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.38.self_attn.o_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.38.self_attn.q_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.38.self_attn.v_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.39.input_layernorm.weight": "model-00003-of-00003.safetensors",
"model.layers.39.mlp.down_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.39.mlp.gate_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.39.mlp.up_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.39.post_attention_layernorm.weight": "model-00003-of-00003.safetensors",
"model.layers.39.self_attn.k_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.39.self_attn.o_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.39.self_attn.q_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.39.self_attn.v_proj.weight": "model-00003-of-00003.safetensors",
"model.layers.4.input_layernorm.weight": "model-00001-of-00003.safetensors",
"model.layers.4.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.4.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.4.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.4.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
"model.layers.4.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.4.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.4.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.4.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.5.input_layernorm.weight": "model-00001-of-00003.safetensors",
"model.layers.5.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.5.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.5.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.5.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
"model.layers.5.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.5.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.5.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.5.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.6.input_layernorm.weight": "model-00001-of-00003.safetensors",
"model.layers.6.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.6.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.6.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.6.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
"model.layers.6.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.6.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.6.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.6.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.7.input_layernorm.weight": "model-00001-of-00003.safetensors",
"model.layers.7.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.7.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.7.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.7.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
"model.layers.7.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.7.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.7.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.7.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.8.input_layernorm.weight": "model-00001-of-00003.safetensors",
"model.layers.8.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.8.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.8.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.8.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
"model.layers.8.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.8.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.8.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.8.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.9.input_layernorm.weight": "model-00001-of-00003.safetensors",
"model.layers.9.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.9.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.9.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.9.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
"model.layers.9.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.9.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.9.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
"model.layers.9.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
"model.norm.weight": "model-00003-of-00003.safetensors"
}
}
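A brief sketch (assuming the repo is cloned locally with the LFS shards fetched and `safetensors` installed) of using this index to locate and load a single tensor:

```python
import json
from safetensors import safe_open

# The index maps each parameter name to the shard file that stores it.
with open("model.safetensors.index.json") as f:
    index = json.load(f)

name = "model.layers.0.self_attn.q_proj.weight"
shard = index["weight_map"][name]       # "model-00001-of-00003.safetensors"
print(index["metadata"]["total_size"])  # 26032056320 bytes across the three shards

with safe_open(shard, framework="pt") as st:
    tensor = st.get_tensor(name)
print(tensor.shape, tensor.dtype)       # torch.Size([5120, 5120]) torch.float16
```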

1020
modeling_llama.py Normal file

File diff suppressed because it is too large.

23
special_tokens_map.json Normal file

@@ -0,0 +1,23 @@
{
"bos_token": {
"content": "<s>",
"lstrip": false,
"normalized": true,
"rstrip": false,
"single_word": false
},
"eos_token": {
"content": "</s>",
"lstrip": false,
"normalized": true,
"rstrip": false,
"single_word": false
},
"unk_token": {
"content": "<unk>",
"lstrip": false,
"normalized": true,
"rstrip": false,
"single_word": false
}
}

93418
tokenizer.json Normal file

File diff suppressed because it is too large.

3
tokenizer.model Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:45ccb9c8b6b561889acea59191d66986d314e7cbd6a78abc6e49b139ca91c1e6
size 500058

36
tokenizer_config.json Normal file

@@ -0,0 +1,36 @@
{
"add_bos_token": true,
"add_eos_token": false,
"bos_token": {
"__type": "AddedToken",
"content": "<s>",
"lstrip": false,
"normalized": true,
"rstrip": false,
"single_word": false
},
"clean_up_tokenization_spaces": false,
"eos_token": {
"__type": "AddedToken",
"content": "</s>",
"lstrip": false,
"normalized": true,
"rstrip": false,
"single_word": false
},
"legacy": null,
"model_max_length": 1000000000000000019884624838656,
"pad_token": null,
"sp_model_kwargs": {},
"spaces_between_special_tokens": false,
"tokenizer_class": "LlamaTokenizer",
"unk_token": {
"__type": "AddedToken",
"content": "<unk>",
"lstrip": false,
"normalized": true,
"rstrip": false,
"single_word": false
},
"use_default_system_prompt": true
}
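A short sketch (assuming `transformers` is installed) showing the effect of `add_bos_token: true` and `add_eos_token: false` above:

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("TheBloke/CodeLlama-13B-Instruct-fp16")

ids = tok("def fib(n):").input_ids
print(ids[0] == tok.bos_token_id)   # True: <s> is prepended (add_bos_token)
print(ids[-1] == tok.eos_token_id)  # False: </s> is not appended (add_eos_token)
print(tok.decode(ids))              # "<s> def fib(n):"
```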