Initialize the project; model provided by the ModelHub XC community

Model: NousResearch/Yarn-Mistral-7b-128k
Source: Original Platform
ModelHub XC
2026-05-05 20:27:46 +08:00
commit 398a1cf4d2
15 changed files with 93300 additions and 0 deletions

37
.gitattributes vendored Normal file

@@ -0,0 +1,37 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
pytorch_model-00001-of-00002.bin filter=lfs diff=lfs merge=lfs -text
pytorch_model-00002-of-00002.bin filter=lfs diff=lfs merge=lfs -text
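These patterns route large binary artifacts (checkpoints, archives, the sentencepiece model, the sharded PyTorch weights) through Git LFS, so the repository itself only stores small pointer files. A rough sketch of which paths the patterns above would catch, using Python's `fnmatch` as an approximation of gitattributes glob matching; the path list below is purely illustrative:

```python
from fnmatch import fnmatch

# A representative subset of the LFS patterns listed above.
lfs_patterns = ["*.bin", "*.safetensors", "*.model", "*.gz", "*tfevents*"]

# Illustrative paths only, not an exhaustive listing of the repository.
paths = ["pytorch_model-00001-of-00002.bin", "tokenizer.model", "config.json", "README.md"]

for path in paths:
    matched = any(fnmatch(path, pattern) for pattern in lfs_patterns)
    print(f"{path:40s} -> {'Git LFS' if matched else 'plain git'}")
```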

62
README.md Normal file

@@ -0,0 +1,62 @@
---
datasets:
- emozilla/yarn-train-tokenized-16k-mistral
metrics:
- perplexity
library_name: transformers
license: apache-2.0
language:
- en
---
# Model Card: Nous-Yarn-Mistral-7b-128k
[Preprint (arXiv)](https://arxiv.org/abs/2309.00071)
[GitHub](https://github.com/jquesnelle/yarn)
![yarn](https://raw.githubusercontent.com/jquesnelle/yarn/mistral/data/proofpile-long-small-mistral.csv.png)
## Model Description
Nous-Yarn-Mistral-7b-128k is a state-of-the-art language model for long context, further pretrained on long context data for 1500 steps using the YaRN extension method.
It is an extension of [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) and supports a 128k token context window.
To use, pass `trust_remote_code=True` when loading the model, for example:
```python
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("NousResearch/Yarn-Mistral-7b-128k",
                                              use_flash_attention_2=True,
                                              torch_dtype=torch.bfloat16,
                                              device_map="auto",
                                              trust_remote_code=True)
```
In addition, you will need to use the latest development version of `transformers` (until 4.35 is released):
```sh
pip install git+https://github.com/huggingface/transformers
```
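Once the model is loaded, generation works through the usual `transformers` API. A minimal follow-on sketch (the prompt and decoding settings are placeholders, and the tokenizer is fetched with the standard `AutoTokenizer` call):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("NousResearch/Yarn-Mistral-7b-128k")

# Placeholder prompt; in practice this is where a long (up to 128k-token) context goes.
prompt = "The YaRN method extends the context window of a pretrained language model by"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Greedy decoding of a short continuation; tune max_new_tokens as needed.
output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```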
## Benchmarks

Long context benchmarks:

| Model | Context Window | 8k PPL | 16k PPL | 32k PPL | 64k PPL | 128k PPL |
|-------|---------------:|-------:|--------:|--------:|--------:|---------:|
| [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) | 8k | 2.96 | - | - | - | - |
| [Yarn-Mistral-7b-64k](https://huggingface.co/NousResearch/Yarn-Mistral-7b-64k) | 64k | 3.04 | 2.65 | 2.44 | 2.20 | - |
| [Yarn-Mistral-7b-128k](https://huggingface.co/NousResearch/Yarn-Mistral-7b-128k) | 128k | 3.08 | 2.68 | 2.47 | 2.24 | 2.19 |

Short context benchmarks showing that quality degradation is minimal:

| Model | Context Window | ARC-c | Hellaswag | MMLU | Truthful QA |
|-------|---------------:|------:|----------:|-----:|------------:|
| [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) | 8k | 59.98 | 83.31 | 64.16 | 42.15 |
| [Yarn-Mistral-7b-64k](https://huggingface.co/NousResearch/Yarn-Mistral-7b-64k) | 64k | 59.38 | 81.21 | 61.32 | 42.50 |
| [Yarn-Mistral-7b-128k](https://huggingface.co/NousResearch/Yarn-Mistral-7b-128k) | 128k | 58.87 | 80.58 | 60.64 | 42.46 |
## Collaborators
- [bloc97](https://github.com/bloc97): Methods, paper and evals
- [@theemozilla](https://twitter.com/theemozilla): Methods, paper, model training, and evals
- [@EnricoShippole](https://twitter.com/EnricoShippole): Model training
- [honglu2875](https://github.com/honglu2875): Paper and evals
The authors would like to thank LAION AI for their support of compute for this model.
It was trained on the [JUWELS](https://www.fz-juelich.de/en/ias/jsc/systems/supercomputers/juwels) supercomputer.

5
added_tokens.json Normal file

@@ -0,0 +1,5 @@
{
"</s>": 2,
"<s>": 1,
"<unk>": 0
}

36
config.json Normal file

@@ -0,0 +1,36 @@
{
"_name_or_path": "NousResearch/Yarn-Mistral-7b-128k",
"architectures": [
"MistralForCausalLM"
],
"auto_map": {
"AutoConfig": "configuration_mistral.MistralConfig",
"AutoModelForCausalLM": "modeling_mistral_yarn.MistralForCausalLM"
},
"bos_token_id": 1,
"eos_token_id": 2,
"hidden_act": "silu",
"hidden_size": 4096,
"initializer_range": 0.02,
"intermediate_size": 14336,
"max_position_embeddings": 32768,
"max_sequence_length": 131072,
"model_type": "mistral",
"num_attention_heads": 32,
"num_hidden_layers": 32,
"num_key_value_heads": 8,
"rms_norm_eps": 1e-05,
"rope_scaling": {
"factor": 16.0,
"finetuned": true,
"original_max_position_embeddings": 8192,
"type": "yarn"
},
"rope_theta": 10000.0,
"sliding_window": 131072,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.35.0.dev0",
"use_cache": true,
"vocab_size": 32000
}
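The `rope_scaling` block is what stretches the 8k-token base model to a 128k context: YaRN interpolation with `factor` 16.0 over `original_max_position_embeddings` 8192 yields the 131072-token window reported in `max_sequence_length` and `sliding_window`. A small sketch of that arithmetic, reading the config back through the standard `AutoConfig` API:

```python
from transformers import AutoConfig

# trust_remote_code is required because the config class lives in configuration_mistral.py.
config = AutoConfig.from_pretrained("NousResearch/Yarn-Mistral-7b-128k", trust_remote_code=True)

scaling = config.rope_scaling
extended_window = int(scaling["factor"] * scaling["original_max_position_embeddings"])
print(extended_window)                           # 16.0 * 8192 = 131072
print(extended_window == config.sliding_window)  # True
```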

1
configuration.json Normal file

@@ -0,0 +1 @@
{"framework": "pytorch", "task": "text-generation", "allow_remote": true}

181
configuration_mistral.py Normal file

@@ -0,0 +1,181 @@
# coding=utf-8
# Copyright 2023 Mistral AI and the HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
""" Mistral model configuration"""
from transformers.configuration_utils import PretrainedConfig
from transformers.utils import logging
logger = logging.get_logger(__name__)
MISTRAL_PRETRAINED_CONFIG_ARCHIVE_MAP = {
"mistralai/Mistral-7B-v0.1": "https://huggingface.co/mistralai/Mistral-7B-v0.1/resolve/main/config.json",
"mistralai/Mistral-7B-Instruct-v0.1": "https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1/resolve/main/config.json",
}
class MistralConfig(PretrainedConfig):
r"""
This is the configuration class to store the configuration of a [`MistralModel`]. It is used to instantiate an
Mistral model according to the specified arguments, defining the model architecture. Instantiating a configuration
with the defaults will yield a similar configuration to that of the Mistral-7B-v0.1 or Mistral-7B-Instruct-v0.1.
[mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)
[mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1)
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
vocab_size (`int`, *optional*, defaults to 32000):
Vocabulary size of the Mistral model. Defines the number of different tokens that can be represented by the
`inputs_ids` passed when calling [`MistralModel`]
hidden_size (`int`, *optional*, defaults to 4096):
Dimension of the hidden representations.
intermediate_size (`int`, *optional*, defaults to 14336):
Dimension of the MLP representations.
num_hidden_layers (`int`, *optional*, defaults to 32):
Number of hidden layers in the Transformer encoder.
num_attention_heads (`int`, *optional*, defaults to 32):
Number of attention heads for each attention layer in the Transformer encoder.
num_key_value_heads (`int`, *optional*, defaults to 8):
This is the number of key_value heads that should be used to implement Grouped Query Attention. If
`num_key_value_heads=num_attention_heads`, the model will use Multi Head Attention (MHA), if
`num_key_value_heads=1 the model will use Multi Query Attention (MQA) otherwise GQA is used. When
converting a multi-head checkpoint to a GQA checkpoint, each group key and value head should be constructed
by meanpooling all the original heads within that group. For more details checkout [this
paper](https://arxiv.org/pdf/2305.13245.pdf). If it is not specified, will default to `8`.
hidden_act (`str` or `function`, *optional*, defaults to `"silu"`):
The non-linear activation function (function or string) in the decoder.
max_position_embeddings (`int`, *optional*, defaults to `4096*32`):
The maximum sequence length that this model might ever be used with. Mistral's sliding window attention
allows sequence of up to 4096*32 tokens.
initializer_range (`float`, *optional*, defaults to 0.02):
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
rms_norm_eps (`float`, *optional*, defaults to 1e-06):
The epsilon used by the rms normalization layers.
use_cache (`bool`, *optional*, defaults to `True`):
Whether or not the model should return the last key/values attentions (not used by all models). Only
relevant if `config.is_decoder=True`.
pad_token_id (`int`, *optional*):
The id of the padding token.
bos_token_id (`int`, *optional*, defaults to 1):
The id of the "beginning-of-sequence" token.
eos_token_id (`int`, *optional*, defaults to 2):
The id of the "end-of-sequence" token.
tie_word_embeddings (`bool`, *optional*, defaults to `False`):
Whether the model's input and output word embeddings should be tied.
rope_scaling (`Dict`, *optional*):
Dictionary containing the scaling configuration for the RoPE embeddings. Currently supports three scaling
strategies: linear and dynamic. Their scaling factor must be an float greater than 1. The expected format
is `{"type": strategy name, "factor": scaling factor}`.
rope_theta (`float`, *optional*, defaults to 10000.0):
The base period of the RoPE embeddings.
sliding_window (`int`, *optional*, defaults to 4096):
Sliding window attention window size. If not specified, will default to `4096`.
```python
>>> from transformers import MistralModel, MistralConfig
>>> # Initializing a Mistral 7B style configuration
>>> configuration = MistralConfig()
>>> # Initializing a model from the Mistral 7B style configuration
>>> model = MistralModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
```"""
    model_type = "mistral"
    keys_to_ignore_at_inference = ["past_key_values"]

    def __init__(
        self,
        vocab_size=32000,
        hidden_size=4096,
        intermediate_size=14336,
        num_hidden_layers=32,
        num_attention_heads=32,
        num_key_value_heads=8,
        hidden_act="silu",
        max_position_embeddings=4096 * 32,
        initializer_range=0.02,
        rms_norm_eps=1e-6,
        use_cache=True,
        pad_token_id=None,
        bos_token_id=1,
        eos_token_id=2,
        tie_word_embeddings=False,
        rope_scaling=None,
        rope_theta=10000.0,
        sliding_window=4096,
        **kwargs,
    ):
        self.vocab_size = vocab_size
        self.max_position_embeddings = max_position_embeddings
        self.hidden_size = hidden_size
        self.intermediate_size = intermediate_size
        self.num_hidden_layers = num_hidden_layers
        self.num_attention_heads = num_attention_heads
        self.sliding_window = sliding_window

        # for backward compatibility
        if num_key_value_heads is None:
            num_key_value_heads = num_attention_heads

        self.num_key_value_heads = num_key_value_heads
        self.hidden_act = hidden_act
        self.initializer_range = initializer_range
        self.rms_norm_eps = rms_norm_eps
        self.use_cache = use_cache
        self.rope_scaling = rope_scaling
        self._rope_scaling_validation()
        self.rope_theta = rope_theta

        super().__init__(
            pad_token_id=pad_token_id,
            bos_token_id=bos_token_id,
            eos_token_id=eos_token_id,
            tie_word_embeddings=tie_word_embeddings,
            **kwargs,
        )
    def _rope_scaling_validation(self):
        """
        Validate the `rope_scaling` configuration.
        """
        if self.rope_scaling is None:
            return

        if not isinstance(self.rope_scaling, dict):
            raise ValueError(
                "`rope_scaling` must be a dictionary, "
                f"got {self.rope_scaling}"
            )
        rope_scaling_type = self.rope_scaling.get("type", None)
        rope_scaling_factor = self.rope_scaling.get("factor", None)
        if rope_scaling_type is None or rope_scaling_type not in ["linear", "dynamic", "yarn", "dynamic-yarn"]:
            raise ValueError(
                f"`rope_scaling`'s type field must be one of ['linear', 'dynamic', 'yarn', 'dynamic-yarn'], got {rope_scaling_type}"
            )
        if rope_scaling_factor is None or not isinstance(rope_scaling_factor, float) or rope_scaling_factor <= 1.0:
            raise ValueError(f"`rope_scaling`'s factor field must be a float > 1, got {rope_scaling_factor}")
        if rope_scaling_type == "yarn" or rope_scaling_type == "dynamic-yarn":
            original_max_position_embeddings = self.rope_scaling.get("original_max_position_embeddings", None)
            if original_max_position_embeddings is None or not isinstance(original_max_position_embeddings, int):
                raise ValueError("`rope_scaling.original_max_position_embeddings` must be set to an int when using `yarn` or `dynamic-yarn`")

6
generation_config.json Normal file

@@ -0,0 +1,6 @@
{
"_from_model_config": true,
"bos_token_id": 1,
"eos_token_id": 2,
"transformers_version": "4.35.0.dev0"
}
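This file only pins the BOS/EOS ids that `generate()` falls back to; a quick sketch of reading it back through the standard `GenerationConfig` API:

```python
from transformers import GenerationConfig

gen_config = GenerationConfig.from_pretrained("NousResearch/Yarn-Mistral-7b-128k")
print(gen_config.bos_token_id, gen_config.eos_token_id)  # 1 2
```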

1489
modeling_mistral_yarn.py Normal file

File diff suppressed because it is too large

3
pytorch_model-00001-of-00002.bin Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ac92349639a9b81ff8d3f4af00a36729895fb46c54d014f1e95d9c2f64ee1539
size 9943028044

3
pytorch_model-00002-of-00002.bin Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:2a8faed6f7a31940b678ce98d986ed5d5fbfb8c1bc8e7dc00159bae92e019757
size 4540535647
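
The two diffs above are Git LFS pointer files for the sharded PyTorch checkpoint: only the `version`/`oid`/`size` stanza is committed, while the actual binaries (roughly 9.9 GB and 4.5 GB) live in LFS storage. A minimal sketch of parsing such a pointer; the helper below is illustrative and not part of the repository:

```python
def parse_lfs_pointer(text: str) -> dict:
    """Split a Git LFS pointer file into its key/value fields."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:2a8faed6f7a31940b678ce98d986ed5d5fbfb8c1bc8e7dc00159bae92e019757
size 4540535647"""

info = parse_lfs_pointer(pointer)
print(info["oid"])                  # sha256:2a8faed6...
print(int(info["size"]) / 1024**3)  # ~4.23 GiB
```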

298
pytorch_model.bin.index.json Normal file

@@ -0,0 +1,298 @@
{
"metadata": {
"total_size": 14483464192
},
"weight_map": {
"lm_head.weight": "pytorch_model-00002-of-00002.bin",
"model.embed_tokens.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.0.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.0.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.0.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.0.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.0.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.0.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.0.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.0.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.0.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.1.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.1.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.1.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.1.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.1.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.1.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.1.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.1.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.1.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.10.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.10.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.10.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.10.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.10.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.10.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.10.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.10.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.10.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.11.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.11.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.11.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.11.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.11.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.11.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.11.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.11.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.11.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.12.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.12.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.12.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.12.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.12.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.12.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.12.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.12.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.12.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.13.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.13.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.13.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.13.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.13.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.13.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.13.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.13.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.13.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.14.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.14.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.14.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.14.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.14.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.14.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.14.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.14.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.14.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.15.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.15.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.15.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.15.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.15.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.15.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.15.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.15.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.15.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.16.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.16.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.16.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.16.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.16.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.16.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.16.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.16.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.16.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.17.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.17.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.17.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.17.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.17.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.17.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.17.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.17.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.17.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.18.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.18.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.18.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.18.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.18.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.18.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.18.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.18.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.18.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.19.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.19.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.19.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.19.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.19.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.19.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.19.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.19.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.19.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.2.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.2.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.2.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.2.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.2.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.2.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.2.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.2.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.2.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.20.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.20.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.20.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.20.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.20.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.20.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.20.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.20.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.20.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.21.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.21.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.21.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.21.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.21.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.21.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.21.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.21.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.21.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.22.input_layernorm.weight": "pytorch_model-00002-of-00002.bin",
"model.layers.22.mlp.down_proj.weight": "pytorch_model-00002-of-00002.bin",
"model.layers.22.mlp.gate_proj.weight": "pytorch_model-00002-of-00002.bin",
"model.layers.22.mlp.up_proj.weight": "pytorch_model-00002-of-00002.bin",
"model.layers.22.post_attention_layernorm.weight": "pytorch_model-00002-of-00002.bin",
"model.layers.22.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.22.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.22.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.22.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.23.input_layernorm.weight": "pytorch_model-00002-of-00002.bin",
"model.layers.23.mlp.down_proj.weight": "pytorch_model-00002-of-00002.bin",
"model.layers.23.mlp.gate_proj.weight": "pytorch_model-00002-of-00002.bin",
"model.layers.23.mlp.up_proj.weight": "pytorch_model-00002-of-00002.bin",
"model.layers.23.post_attention_layernorm.weight": "pytorch_model-00002-of-00002.bin",
"model.layers.23.self_attn.k_proj.weight": "pytorch_model-00002-of-00002.bin",
"model.layers.23.self_attn.o_proj.weight": "pytorch_model-00002-of-00002.bin",
"model.layers.23.self_attn.q_proj.weight": "pytorch_model-00002-of-00002.bin",
"model.layers.23.self_attn.v_proj.weight": "pytorch_model-00002-of-00002.bin",
"model.layers.24.input_layernorm.weight": "pytorch_model-00002-of-00002.bin",
"model.layers.24.mlp.down_proj.weight": "pytorch_model-00002-of-00002.bin",
"model.layers.24.mlp.gate_proj.weight": "pytorch_model-00002-of-00002.bin",
"model.layers.24.mlp.up_proj.weight": "pytorch_model-00002-of-00002.bin",
"model.layers.24.post_attention_layernorm.weight": "pytorch_model-00002-of-00002.bin",
"model.layers.24.self_attn.k_proj.weight": "pytorch_model-00002-of-00002.bin",
"model.layers.24.self_attn.o_proj.weight": "pytorch_model-00002-of-00002.bin",
"model.layers.24.self_attn.q_proj.weight": "pytorch_model-00002-of-00002.bin",
"model.layers.24.self_attn.v_proj.weight": "pytorch_model-00002-of-00002.bin",
"model.layers.25.input_layernorm.weight": "pytorch_model-00002-of-00002.bin",
"model.layers.25.mlp.down_proj.weight": "pytorch_model-00002-of-00002.bin",
"model.layers.25.mlp.gate_proj.weight": "pytorch_model-00002-of-00002.bin",
"model.layers.25.mlp.up_proj.weight": "pytorch_model-00002-of-00002.bin",
"model.layers.25.post_attention_layernorm.weight": "pytorch_model-00002-of-00002.bin",
"model.layers.25.self_attn.k_proj.weight": "pytorch_model-00002-of-00002.bin",
"model.layers.25.self_attn.o_proj.weight": "pytorch_model-00002-of-00002.bin",
"model.layers.25.self_attn.q_proj.weight": "pytorch_model-00002-of-00002.bin",
"model.layers.25.self_attn.v_proj.weight": "pytorch_model-00002-of-00002.bin",
"model.layers.26.input_layernorm.weight": "pytorch_model-00002-of-00002.bin",
"model.layers.26.mlp.down_proj.weight": "pytorch_model-00002-of-00002.bin",
"model.layers.26.mlp.gate_proj.weight": "pytorch_model-00002-of-00002.bin",
"model.layers.26.mlp.up_proj.weight": "pytorch_model-00002-of-00002.bin",
"model.layers.26.post_attention_layernorm.weight": "pytorch_model-00002-of-00002.bin",
"model.layers.26.self_attn.k_proj.weight": "pytorch_model-00002-of-00002.bin",
"model.layers.26.self_attn.o_proj.weight": "pytorch_model-00002-of-00002.bin",
"model.layers.26.self_attn.q_proj.weight": "pytorch_model-00002-of-00002.bin",
"model.layers.26.self_attn.v_proj.weight": "pytorch_model-00002-of-00002.bin",
"model.layers.27.input_layernorm.weight": "pytorch_model-00002-of-00002.bin",
"model.layers.27.mlp.down_proj.weight": "pytorch_model-00002-of-00002.bin",
"model.layers.27.mlp.gate_proj.weight": "pytorch_model-00002-of-00002.bin",
"model.layers.27.mlp.up_proj.weight": "pytorch_model-00002-of-00002.bin",
"model.layers.27.post_attention_layernorm.weight": "pytorch_model-00002-of-00002.bin",
"model.layers.27.self_attn.k_proj.weight": "pytorch_model-00002-of-00002.bin",
"model.layers.27.self_attn.o_proj.weight": "pytorch_model-00002-of-00002.bin",
"model.layers.27.self_attn.q_proj.weight": "pytorch_model-00002-of-00002.bin",
"model.layers.27.self_attn.v_proj.weight": "pytorch_model-00002-of-00002.bin",
"model.layers.28.input_layernorm.weight": "pytorch_model-00002-of-00002.bin",
"model.layers.28.mlp.down_proj.weight": "pytorch_model-00002-of-00002.bin",
"model.layers.28.mlp.gate_proj.weight": "pytorch_model-00002-of-00002.bin",
"model.layers.28.mlp.up_proj.weight": "pytorch_model-00002-of-00002.bin",
"model.layers.28.post_attention_layernorm.weight": "pytorch_model-00002-of-00002.bin",
"model.layers.28.self_attn.k_proj.weight": "pytorch_model-00002-of-00002.bin",
"model.layers.28.self_attn.o_proj.weight": "pytorch_model-00002-of-00002.bin",
"model.layers.28.self_attn.q_proj.weight": "pytorch_model-00002-of-00002.bin",
"model.layers.28.self_attn.v_proj.weight": "pytorch_model-00002-of-00002.bin",
"model.layers.29.input_layernorm.weight": "pytorch_model-00002-of-00002.bin",
"model.layers.29.mlp.down_proj.weight": "pytorch_model-00002-of-00002.bin",
"model.layers.29.mlp.gate_proj.weight": "pytorch_model-00002-of-00002.bin",
"model.layers.29.mlp.up_proj.weight": "pytorch_model-00002-of-00002.bin",
"model.layers.29.post_attention_layernorm.weight": "pytorch_model-00002-of-00002.bin",
"model.layers.29.self_attn.k_proj.weight": "pytorch_model-00002-of-00002.bin",
"model.layers.29.self_attn.o_proj.weight": "pytorch_model-00002-of-00002.bin",
"model.layers.29.self_attn.q_proj.weight": "pytorch_model-00002-of-00002.bin",
"model.layers.29.self_attn.v_proj.weight": "pytorch_model-00002-of-00002.bin",
"model.layers.3.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.3.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.3.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.3.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.3.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.3.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.3.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.3.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.3.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.30.input_layernorm.weight": "pytorch_model-00002-of-00002.bin",
"model.layers.30.mlp.down_proj.weight": "pytorch_model-00002-of-00002.bin",
"model.layers.30.mlp.gate_proj.weight": "pytorch_model-00002-of-00002.bin",
"model.layers.30.mlp.up_proj.weight": "pytorch_model-00002-of-00002.bin",
"model.layers.30.post_attention_layernorm.weight": "pytorch_model-00002-of-00002.bin",
"model.layers.30.self_attn.k_proj.weight": "pytorch_model-00002-of-00002.bin",
"model.layers.30.self_attn.o_proj.weight": "pytorch_model-00002-of-00002.bin",
"model.layers.30.self_attn.q_proj.weight": "pytorch_model-00002-of-00002.bin",
"model.layers.30.self_attn.v_proj.weight": "pytorch_model-00002-of-00002.bin",
"model.layers.31.input_layernorm.weight": "pytorch_model-00002-of-00002.bin",
"model.layers.31.mlp.down_proj.weight": "pytorch_model-00002-of-00002.bin",
"model.layers.31.mlp.gate_proj.weight": "pytorch_model-00002-of-00002.bin",
"model.layers.31.mlp.up_proj.weight": "pytorch_model-00002-of-00002.bin",
"model.layers.31.post_attention_layernorm.weight": "pytorch_model-00002-of-00002.bin",
"model.layers.31.self_attn.k_proj.weight": "pytorch_model-00002-of-00002.bin",
"model.layers.31.self_attn.o_proj.weight": "pytorch_model-00002-of-00002.bin",
"model.layers.31.self_attn.q_proj.weight": "pytorch_model-00002-of-00002.bin",
"model.layers.31.self_attn.v_proj.weight": "pytorch_model-00002-of-00002.bin",
"model.layers.4.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.4.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.4.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.4.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.4.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.4.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.4.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.4.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.4.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.5.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.5.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.5.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.5.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.5.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.5.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.5.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.5.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.5.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.6.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.6.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.6.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.6.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.6.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.6.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.6.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.6.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.6.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.7.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.7.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.7.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.7.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.7.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.7.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.7.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.7.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.7.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.8.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.8.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.8.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.8.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.8.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.8.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.8.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.8.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.8.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.9.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.9.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.9.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.9.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.9.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.9.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.9.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.9.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.layers.9.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
"model.norm.weight": "pytorch_model-00002-of-00002.bin"
}
}
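The index maps every parameter name to the shard that stores it (14,483,464,192 bytes in total across the two `.bin` files). A sketch of using the index to locate a single tensor and load only the shard that contains it; file names are the ones listed above:

```python
import json
import torch

# Read the shard index written alongside the checkpoint.
with open("pytorch_model.bin.index.json") as f:
    index = json.load(f)

weight_map = index["weight_map"]
name = "model.layers.0.self_attn.q_proj.weight"

# Find the shard that holds this tensor, then load just that shard.
shard_file = weight_map[name]                      # "pytorch_model-00001-of-00002.bin"
shard = torch.load(shard_file, map_location="cpu")
print(name, tuple(shard[name].shape))              # (4096, 4096) for this 4096-dim, 32-head model
```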

10
special_tokens_map.json Normal file

@@ -0,0 +1,10 @@
{
"additional_special_tokens": [
"<unk>",
"<s>",
"</s>"
],
"bos_token": "<s>",
"eos_token": "</s>",
"unk_token": "<unk>"
}

91122
tokenizer.json Normal file

File diff suppressed because it is too large

BIN
tokenizer.model (Stored with Git LFS) Normal file

Binary file not shown.

44
tokenizer_config.json Normal file

@@ -0,0 +1,44 @@
{
"added_tokens_decoder": {
"0": {
"content": "<unk>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"1": {
"content": "<s>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"2": {
"content": "</s>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
}
},
"additional_special_tokens": [
"<unk>",
"<s>",
"</s>"
],
"bos_token": "<s>",
"clean_up_tokenization_spaces": false,
"eos_token": "</s>",
"legacy": true,
"model_max_length": 1000000000000000019884624838656,
"pad_token": null,
"sp_model_kwargs": {},
"spaces_between_special_tokens": false,
"tokenizer_class": "LlamaTokenizer",
"unk_token": "<unk>",
"use_default_system_prompt": true
}
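Together with `special_tokens_map.json` and `added_tokens.json`, this configuration pins `<unk>`, `<s>` and `</s>` to ids 0, 1 and 2 and leaves the pad token unset. A short sketch of inspecting those settings through the standard `AutoTokenizer` API:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("NousResearch/Yarn-Mistral-7b-128k")

print(tokenizer.bos_token, tokenizer.bos_token_id)  # <s> 1
print(tokenizer.eos_token, tokenizer.eos_token_id)  # </s> 2
print(tokenizer.unk_token, tokenizer.unk_token_id)  # <unk> 0
print(tokenizer.pad_token)                          # None, since pad_token is null in the config
```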