Update LM-Studio guide

This commit is contained in:
Cherrytest
2025-03-18 05:36:41 +00:00
parent ae1d939204
commit eb16a8aa4f
28 changed files with 315059 additions and 41 deletions

317
README.md
View File

@@ -1,47 +1,282 @@
---
#model-type:
##e.g. gpt, phi, llama, chatglm, baichuan, etc.
#- gpt
#domain:
##e.g. nlp, cv, audio, multi-modal
#- nlp
#language:
##language code list: https://help.aliyun.com/document_detail/215387.html?spm=a2c4g.11186623.0.0.9f8d7467kni6Aa
#- cn
#metrics:
##e.g. CIDEr, BLEU, ROUGE, etc.
#- CIDEr
#tags:
##custom tags, including training methods such as pretrained, fine-tuned, instruction-tuned, RL-tuned, and others
#- pretrained
#tools:
##e.g. vllm, fastchat, llamacpp, AdaSeq, etc.
#- vllm
base_model: LGAI-EXAONE/EXAONE-3.5-32B-Instruct
base_model_relation: finetune
license: other
license_name: exaone
license_link: LICENSE
language:
- en
- ko
tags:
- lg-ai
- exaone
- exaone-deep
pipeline_tag: text-generation
library_name: transformers
---
### The contributor of this model has not provided a more detailed model introduction. The model files and weights are available on the "Model Files" page.
#### You can download the model using the git clone command or the ModelScope SDK shown below
SDK download
```bash
# Install ModelScope
pip install modelscope
```
<p align="center">
<img src="assets/EXAONE_Symbol+BI_3d.png", width="300", style="margin: 40 auto;">
<br>
# EXAONE-Deep-32B
## Introduction
We introduce EXAONE Deep, a family of models ranging from 2.4B to 32B parameters developed and released by LG AI Research, which exhibits superior capabilities on various reasoning tasks including math and coding benchmarks. Evaluation results show that 1) EXAONE Deep **2.4B** outperforms other models of comparable size, 2) EXAONE Deep **7.8B** outperforms not only open-weight models of comparable scale but also the proprietary reasoning model OpenAI o1-mini, and 3) EXAONE Deep **32B** demonstrates competitive performance against leading open-weight models.
For more details, please refer to our [documentation](https://arxiv.org/abs/2503.12524), [blog](https://www.lgresearch.ai/news/view?seq=543) and [GitHub](https://github.com/LG-AI-EXAONE/EXAONE-Deep).
<p align="center">
<img src="assets/exaone_deep_overall_performance.png", width="100%", style="margin: 40 auto;">
This repository contains the reasoning 32B language model with the following features (a short sketch after the list cross-checks them against the shipped `config.json`):
- Number of Parameters (without embeddings): 30.95B
- Number of Layers: 64
- Number of Attention Heads: GQA with 40 Q-heads and 8 KV-heads
- Vocab Size: 102,400
- Context Length: 32,768 tokens
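As a quick cross-check, the sketch below (illustrative, not part of the official usage) loads the configuration shipped with this repository and verifies the numbers above; the attribute names follow `ExaoneConfig` and `config.json`.
```python
# Hedged sketch: verify the architecture numbers listed above against config.json.
# Assumes the checkpoint id below resolves to this repository.
from transformers import AutoConfig

config = AutoConfig.from_pretrained(
    "LGAI-EXAONE/EXAONE-Deep-32B",
    trust_remote_code=True,  # ExaoneConfig is defined in configuration_exaone.py
)

assert config.num_layers == 64
assert config.num_attention_heads == 40   # Q-heads (GQA)
assert config.num_key_value_heads == 8    # KV-heads (GQA)
assert config.vocab_size == 102400
assert config.max_position_embeddings == 32768
# 40 query heads x head_dim 128 = hidden_size 5120 (head_dim comes straight from config.json)
assert config.num_attention_heads * config.head_dim == config.hidden_size
```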
## Quickstart
We recommend using `transformers` v4.43.1 or later.
SDK download
```python
# SDK model download
from modelscope import snapshot_download
model_dir = snapshot_download('LGAI-EXAONE/EXAONE-Deep-32B')
```
Git download
```bash
# Git model download
git clone https://www.modelscope.cn/LGAI-EXAONE/EXAONE-Deep-32B.git
```
Here is the code snippet to run conversational inference with the model:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, TextIteratorStreamer
from threading import Thread
model_name = "LGAI-EXAONE/EXAONE-Deep-32B"
streaming = True # choose the streaming option
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

messages = [
    {"role": "user", "content": "How many golf balls can fit in a school bus?"}
]
input_ids = tokenizer.apply_chat_template(
    messages,
    tokenize=True,
    add_generation_prompt=True,
    return_tensors="pt"
)

if streaming:
    # Stream tokens to stdout as they are generated
    streamer = TextIteratorStreamer(tokenizer)
    thread = Thread(target=model.generate, kwargs=dict(
        input_ids=input_ids.to("cuda"),
        eos_token_id=tokenizer.eos_token_id,
        max_new_tokens=32768,
        do_sample=True,
        temperature=0.6,
        top_p=0.95,
        streamer=streamer
    ))
    thread.start()

    for text in streamer:
        print(text, end="", flush=True)
else:
    # Generate the full completion, then decode it at once
    output = model.generate(
        input_ids.to("cuda"),
        eos_token_id=tokenizer.eos_token_id,
        max_new_tokens=32768,
        do_sample=True,
        temperature=0.6,
        top_p=0.95,
    )
    print(tokenizer.decode(output[0]))
```
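The generated text wraps the model's reasoning in `<thought>\n...\n</thought>` (see the [Usage Guideline](#usage-guideline) below). The following is a minimal sketch for separating the reasoning from the final answer, assuming the tags appear literally in the decoded output and reusing `output` from the non-streaming branch above; the `split_reasoning` helper is hypothetical.
```python
import re

def split_reasoning(decoded: str):
    # Hypothetical helper: split a decoded completion into (reasoning, final answer),
    # assuming the reasoning is wrapped in literal <thought>\n...\n</thought> tags.
    match = re.search(r"<thought>\n(.*?)\n</thought>", decoded, flags=re.DOTALL)
    if match is None:
        return "", decoded.strip()
    return match.group(1), decoded[match.end():].strip()

reasoning, answer = split_reasoning(tokenizer.decode(output[0]))
print(answer)
```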
<p style="color: lightgrey;">如果您是本模型的贡献者,我们邀请您根据<a href="https://modelscope.cn/docs/ModelScope%E6%A8%A1%E5%9E%8B%E6%8E%A5%E5%85%A5%E6%B5%81%E7%A8%8B%E6%A6%82%E8%A7%88" style="color: lightgrey; text-decoration: underline;">模型贡献文档</a>,及时完善模型卡片内容。</p>
> ### Note
> The EXAONE Deep models are trained with an optimized configuration,
> so we recommend following the [Usage Guideline](#usage-guideline) section to achieve optimal performance.
## Evaluation
The following table shows the evaluation results on reasoning tasks such as math and coding; a short sketch after the table illustrates how a cons@N-style majority-vote score can be computed. The full evaluation results can be found in the [documentation](https://arxiv.org/abs/2503.12524).
<table>
<tr>
<th>Models</th>
<th>MATH-500 (pass@1)</th>
<th>AIME 2024 (pass@1 / cons@64)</th>
<th>AIME 2025 (pass@1 / cons@64)</th>
<th>CSAT Math 2025 (pass@1)</th>
<th>GPQA Diamond (pass@1)</th>
<th>Live Code Bench (pass@1)</th>
</tr>
<tr>
<td>EXAONE Deep 32B</td>
<td>95.7</td>
<td>72.1 / <strong>90.0</strong></td>
<td>65.8 / <strong>80.0</strong></td>
<td><strong>94.5</strong></td>
<td>66.1</td>
<td>59.5</td>
</tr>
<tr>
<td>DeepSeek-R1-Distill-Qwen-32B</td>
<td>94.3</td>
<td>72.6 / 83.3</td>
<td>55.2 / 73.3</td>
<td>84.1</td>
<td>62.1</td>
<td>57.2</td>
</tr>
<tr>
<td>QwQ-32B</td>
<td>95.5</td>
<td><strong>79.5</strong> / 86.7</td>
<td><strong>67.1</strong> / 76.7</td>
<td>94.4</td>
<td>63.3</td>
<td>63.4</td>
</tr>
<tr>
<td>DeepSeek-R1-Distill-Llama-70B</td>
<td>94.5</td>
<td>70.0 / 86.7</td>
<td>53.9 / 66.7</td>
<td>88.8</td>
<td>65.2</td>
<td>57.5</td>
</tr>
<tr>
<td>DeepSeek-R1 (671B)</td>
<td><strong>97.3</strong></td>
<td>79.8 / 86.7</td>
<td>66.8 / <strong>80.0</strong></td>
<td>89.9</td>
<td><strong>71.5</strong></td>
<td><strong>65.9</strong></td>
</tr>
<tr>
<th colspan="7" height="30px"></th>
</tr>
<tr>
<td>EXAONE Deep 7.8B</td>
<td><strong>94.8</strong></td>
<td><strong>70.0</strong> / <strong>83.3</strong></td>
<td><strong>59.6</strong> / <strong>76.7</strong></td>
<td><strong>89.9</strong></td>
<td><strong>62.6</strong></td>
<td><strong>55.2</strong></td>
</tr>
<tr>
<td>DeepSeek-R1-Distill-Qwen-7B</td>
<td>92.8</td>
<td>55.5 / <strong>83.3</strong></td>
<td>38.5 / 56.7</td>
<td>79.7</td>
<td>49.1</td>
<td>37.6</td>
</tr>
<tr>
<td>DeepSeek-R1-Distill-Llama-8B</td>
<td>89.1</td>
<td>50.4 / 80.0</td>
<td>33.6 / 53.3</td>
<td>74.1</td>
<td>49.0</td>
<td>39.6</td>
</tr>
<tr>
<td>OpenAI o1-mini</td>
<td>90.0</td>
<td>63.6 / 80.0</td>
<td>54.8 / 66.7</td>
<td>84.4</td>
<td>60.0</td>
<td>53.8</td>
</tr>
<tr>
<th colspan="7" height="30px"></th>
</tr>
<tr>
<td>EXAONE Deep 2.4B</td>
<td><strong>92.3</strong></td>
<td><strong>52.5</strong> / <strong>76.7</strong></td>
<td><strong>47.9</strong> / <strong>73.3</strong></td>
<td><strong>79.2</strong></td>
<td><strong>54.3</strong></td>
<td><strong>46.6</strong></td>
</tr>
<tr>
<td>DeepSeek-R1-Distill-Qwen-1.5B</td>
<td>83.9</td>
<td>28.9 / 52.7</td>
<td>23.9 / 36.7</td>
<td>65.6</td>
<td>33.8</td>
<td>16.9</td>
</tr>
</table>
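Here, pass@1 is accuracy with a single sampled answer per problem, while cons@64 is commonly computed as a majority vote over 64 sampled answers per problem (our reading of the metric; please refer to the [documentation](https://arxiv.org/abs/2503.12524) for the exact protocol). A minimal sketch of such a majority-vote score:
```python
# Hedged sketch: a cons@N-style score via majority voting over N sampled answers.
# Assumes final answers have already been extracted (e.g. from \boxed{...}) per problem.
from collections import Counter

def majority_vote(sampled_answers):
    """Return the most common answer among the N samples for one problem."""
    return Counter(sampled_answers).most_common(1)[0][0]

def cons_at_n(per_problem_samples, references):
    """Fraction of problems whose majority-vote answer matches the reference."""
    hits = sum(
        majority_vote(samples) == ref
        for samples, ref in zip(per_problem_samples, references)
    )
    return hits / len(references)

# Example: 2 problems, 3 samples each (N=3 instead of 64 for brevity)
print(cons_at_n([["12", "12", "15"], ["7", "8", "8"]], ["12", "8"]))  # -> 1.0
```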
## Deployment
EXAONE Deep models can be run with various inference frameworks, such as:
- `TensorRT-LLM`
- `vLLM`
- `SGLang`
- `llama.cpp`
- `Ollama`
- `LM-Studio`
Please refer to our [EXAONE Deep GitHub](https://github.com/LG-AI-EXAONE/EXAONE-Deep) for more details about the inference frameworks.
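Several of these frameworks (e.g. vLLM, SGLang, Ollama, and LM-Studio) can expose an OpenAI-compatible endpoint. Below is a minimal client sketch, assuming you already have such a server running locally on port 8000 and that it serves the model under the name shown; both are assumptions for illustration, not instructions from this repository.
```python
# Hedged sketch: call an OpenAI-compatible endpoint served by one of the frameworks above.
# base_url, api_key, and the model name are placeholders; adjust them to your deployment.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="LGAI-EXAONE/EXAONE-Deep-32B",
    messages=[{"role": "user", "content": "How many golf balls can fit in a school bus?"}],
    temperature=0.6,   # sampling settings recommended in the Usage Guideline
    top_p=0.95,
)
print(response.choices[0].message.content)
```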
## Quantization
We are working on quantized versions of EXAONE Deep models in both **AWQ** and **GGUF** formats. We will update this section with detailed instructions upon release.
## Usage Guideline
To achieve the expected performance, we recommend using the following configurations:
1. Ensure the model starts its reasoning with `<thought>\n`; the model's output quality may degrade if you omit it. You can easily apply this by calling `tokenizer.apply_chat_template()` with `add_generation_prompt=True`. Please check the example code in the [Quickstart](#quickstart) section.
2. The reasoning steps of EXAONE Deep models, enclosed in `<thought>\n...\n</thought>`, usually contain many tokens, so previous reasoning steps may need to be removed in multi-turn conversations. The provided tokenizer handles this automatically (see the multi-turn sketch after this list).
3. Avoid using a system prompt; build the instruction into the user prompt.
4. For math problems, include **"Please reason step by step, and put your final answer within \boxed{}."** in your prompt.
5. In our evaluation, we use `temperature=0.6` and `top_p=0.95` for generation.
6. When evaluating the models, it is recommended to test multiple times to assess the expected performance accurately.
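A minimal multi-turn sketch following the points above; it reuses `model` and `tokenizer` from the [Quickstart](#quickstart), and the questions are only illustrative.
```python
# Hedged sketch: two-turn math conversation with no system prompt, the recommended
# math instruction in the user turn, and the recommended sampling settings.
def ask(messages):
    input_ids = tokenizer.apply_chat_template(
        messages, tokenize=True, add_generation_prompt=True, return_tensors="pt"
    )
    output = model.generate(
        input_ids.to("cuda"),
        eos_token_id=tokenizer.eos_token_id,
        max_new_tokens=32768,
        do_sample=True,
        temperature=0.6,
        top_p=0.95,
    )
    # Decode only the tokens generated for this turn
    return tokenizer.decode(output[0][input_ids.shape[1]:])

messages = [{"role": "user", "content":
             "What is 15 * 17? Please reason step by step, and put your final answer within \\boxed{}."}]
first_answer = ask(messages)

# The chat template is expected to drop the previous <thought> block from the history
# automatically (point 2 above), so the full answer can be appended as-is.
messages += [{"role": "assistant", "content": first_answer},
             {"role": "user", "content":
              "Now multiply that result by 3. Please reason step by step, and put your final answer within \\boxed{}."}]
print(ask(messages))
```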
## Limitation
The EXAONE language model has certain limitations and may occasionally generate inappropriate responses. The language model generates responses based on the output probability of tokens, which is determined during training on the training data. While we have made every effort to exclude personal, harmful, and biased information from the training data, some problematic content may still be included, potentially leading to undesirable responses. Please note that the text generated by the EXAONE language model does not reflect the views of LG AI Research.
- Inappropriate answers may be generated, which contain personal, harmful or other inappropriate information.
- Biased responses may be generated, which are associated with age, gender, race, and so on.
- The generated responses rely heavily on statistics from the training data, which can result in the generation of
semantically or syntactically incorrect sentences.
- Since the model does not reflect the latest information, the responses may be false or contradictory.
LG AI Research strives to reduce potential risks that may arise from EXAONE language models. Users are not allowed
to engage in any malicious activities (e.g., entering illegal information) that may induce the creation of inappropriate
outputs violating LG AI's ethical principles when using EXAONE language models.
## License
The model is licensed under the [EXAONE AI Model License Agreement 1.1 - NC](./LICENSE).
## Citation
```
@article{exaone-deep,
title={EXAONE Deep: Reasoning Enhanced Language Models},
author={{LG AI Research}},
journal={arXiv preprint arXiv:2503.12524},
year={2025}
}
```
## Contact
LG AI Research Technical Support: contact_us@lgresearch.ai

Binary file not shown.

After

Width:  |  Height:  |  Size: 243 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 926 KiB

40
config.json Normal file
View File

@@ -0,0 +1,40 @@
{
"activation_function": "silu",
"architectures": [
"ExaoneForCausalLM"
],
"attention_dropout": 0.0,
"auto_map": {
"AutoConfig": "configuration_exaone.ExaoneConfig",
"AutoModelForCausalLM": "modeling_exaone.ExaoneForCausalLM",
"AutoModelForSequenceClassification": "modeling_exaone.ExaoneForSequenceClassification"
},
"bos_token_id": 1,
"embed_dropout": 0.0,
"eos_token_id": 361,
"head_dim": 128,
"hidden_size": 5120,
"initializer_range": 0.02,
"intermediate_size": 27392,
"layer_norm_epsilon": 1e-05,
"ln_no_scale": false,
"max_position_embeddings": 32768,
"model_type": "exaone",
"num_attention_heads": 40,
"num_key_value_heads": 8,
"num_layers": 64,
"pad_token_id": 0,
"rope_scaling": {
"factor": 8.0,
"high_freq_factor": 4.0,
"low_freq_factor": 1.0,
"original_max_position_embeddings": 8192,
"rope_type": "llama3"
},
"rope_theta": 1000000.0,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.43.1",
"use_cache": true,
"vocab_size": 102400
}

1
configuration.json Normal file
View File

@@ -0,0 +1 @@
{"framework": "pytorch", "task": "text-generation", "allow_remote": true}

183
configuration_exaone.py Normal file
View File

@@ -0,0 +1,183 @@
# coding=utf-8
# Copyright 2021 The LG AI Research EXAONE Lab. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""EXAONE model configuration"""
from transformers.configuration_utils import PretrainedConfig
from transformers.utils import logging
logger = logging.get_logger(__name__)
EXAONE_PRETRAINED_CONFIG_ARCHIVE_MAP = {}
class ExaoneConfig(PretrainedConfig):
r"""
This is the configuration class to store the configuration of a [`ExaoneModel`]. It is used to
instantiate an EXAONE model according to the specified arguments, defining the model architecture. Instantiating a
configuration with the defaults will yield a configuration similar to that of EXAONE-3.0-7.8B-Instruct [LGAI-EXAONE/EXAONE-3.0-7.8B-Instruct](https://huggingface.co/LGAI-EXAONE/EXAONE-3.0-7.8B-Instruct).
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model
outputs. Read the documentation from [`PretrainedConfig`] for more information.
Args:
vocab_size (`int`, *optional*, defaults to 102400):
Vocabulary size of the EXAONE model. Defines the number of different tokens that can be represented by the
`inputs_ids` passed when calling [`ExaoneModel`].
max_position_embeddings (`int`, *optional*, defaults to 2048):
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048).
hidden_size (`int`, *optional*, defaults to 2048):
Dimensionality of the encoder layers and the pooler layer.
num_layers (`int`, *optional*, defaults to 32):
Number of hidden layers in the Transformer encoder.
num_attention_heads (`int`, *optional*, defaults to 32):
Number of attention heads for each attention layer in the Transformer decoder.
num_key_value_heads (`int`, *optional*):
This is the number of key_value heads that should be used to implement Grouped Query Attention. If
`num_key_value_heads=num_attention_heads`, the model will use Multi Head Attention (MHA), if
`num_key_value_heads=1` the model will use Multi Query Attention (MQA), otherwise GQA is used. When
converting a multi-head checkpoint to a GQA checkpoint, each group key and value head should be constructed
by meanpooling all the original heads within that group. For more details checkout [this
paper](https://arxiv.org/pdf/2305.13245.pdf). If it is not specified, will default to
`num_attention_heads`.
intermediate_size (`int`, *optional*, defaults to `hidden_size * 4`):
Dimensionality of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder.
activation_function (`str` or `function`, *optional*, defaults to `"silu"`):
The non-linear activation function (function or string) in the decoder.
rope_theta (`float`, *optional*, defaults to 10000.0):
The base period of the RoPE embeddings.
rope_scaling (`Dict`, *optional*):
Dictionary containing the scaling configuration for the RoPE embeddings. NOTE: if you apply a new rope type
and expect the model to work with a longer `max_position_embeddings`, we recommend updating this value
accordingly.
Expected contents:
`rope_type` (`str`):
The sub-variant of RoPE to use. Can be one of ['default', 'linear', 'dynamic', 'yarn', 'longrope',
'llama3'], with 'default' being the original RoPE implementation.
`factor` (`float`, *optional*):
Used with all rope types except 'default'. The scaling factor to apply to the RoPE embeddings. In
most scaling types, a `factor` of x will enable the model to handle sequences of length x *
original maximum pre-trained length.
`original_max_position_embeddings` (`int`, *optional*):
Used with 'dynamic', 'longrope' and 'llama3'. The original max position embeddings used during
pretraining.
`attention_factor` (`float`, *optional*):
Used with 'yarn' and 'longrope'. The scaling factor to be applied on the attention
computation. If unspecified, it defaults to value recommended by the implementation, using the
`factor` field to infer the suggested value.
`beta_fast` (`float`, *optional*):
Only used with 'yarn'. Parameter to set the boundary for extrapolation (only) in the linear
ramp function. If unspecified, it defaults to 32.
`beta_slow` (`float`, *optional*):
Only used with 'yarn'. Parameter to set the boundary for interpolation (only) in the linear
ramp function. If unspecified, it defaults to 1.
`short_factor` (`List[float]`, *optional*):
Only used with 'longrope'. The scaling factor to be applied to short contexts (<
`original_max_position_embeddings`). Must be a list of numbers with the same length as the hidden
size divided by the number of attention heads divided by 2
`long_factor` (`List[float]`, *optional*):
Only used with 'longrope'. The scaling factor to be applied to long contexts (>
`original_max_position_embeddings`). Must be a list of numbers with the same length as the hidden
size divided by the number of attention heads divided by 2
`low_freq_factor` (`float`, *optional*):
Only used with 'llama3'. Scaling factor applied to low frequency components of the RoPE
`high_freq_factor` (`float`, *optional*):
Only used with 'llama3'. Scaling factor applied to high frequency components of the RoPE
embed_dropout (`float`, *optional*, defaults to 0.0):
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_dropout (`float`, *optional*, defaults to 0.0):
The dropout ratio for the attention probabilities.
layer_norm_epsilon (`float`, *optional*, defaults to 1e-05):
The epsilon used by the layer normalization layers.
initializer_range (`float`, *optional*, defaults to 0.02):
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
use_cache (`bool`, *optional*, defaults to `True`):
Whether or not the model should return the last key/values attentions (not used by all models). Only
relevant if ``config.is_decoder=True``.
bos_token_id (`int`, *optional*, defaults to 0):
Beginning of stream token id.
eos_token_id (`int`, *optional*, defaults to 2):
End of stream token id.
Example:
```python
>>> from transformers import ExaoneModel, ExaoneConfig
>>> # Initializing an EXAONE configuration
>>> configuration = ExaoneConfig()
>>> # Initializing a model from the configuration
>>> model = ExaoneModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
```"""
model_type = "exaone"
keys_to_ignore_at_inference = ["past_key_values"]
attribute_map = {"num_hidden_layers": "num_layers"}
def __init__(
self,
vocab_size=102400,
max_position_embeddings=2048,
hidden_size=2048,
num_layers=32,
num_attention_heads=32,
num_key_value_heads=None,
intermediate_size=None,
activation_function="silu",
rope_theta=10000.0,
rope_scaling=None,
embed_dropout=0.0,
attention_dropout=0.0,
layer_norm_epsilon=1e-5,
initializer_range=0.02,
use_cache=True,
bos_token_id=0,
eos_token_id=2,
**kwargs,
):
self.vocab_size = vocab_size
self.max_position_embeddings = max_position_embeddings
self.hidden_size = hidden_size
self.num_layers = num_layers
self.num_attention_heads = num_attention_heads
if num_key_value_heads is None:
num_key_value_heads = num_attention_heads
self.num_key_value_heads = num_key_value_heads
if intermediate_size:
self.intermediate_size = intermediate_size
else:
self.intermediate_size = hidden_size * 4
self.activation_function = activation_function
self.embed_dropout = embed_dropout
self.attention_dropout = attention_dropout
self.layer_norm_epsilon = layer_norm_epsilon
self.initializer_range = initializer_range
self.use_cache = use_cache
self.rope_theta = rope_theta
self.rope_scaling = rope_scaling
self.bos_token_id = bos_token_id
self.eos_token_id = eos_token_id
super().__init__(bos_token_id=bos_token_id, eos_token_id=eos_token_id, **kwargs)

11
generation_config.json Normal file
View File

@@ -0,0 +1,11 @@
{
"_from_model_config": true,
"bos_token_id": 1,
"do_sample": true,
"eos_token_id": 361,
"pad_token_id": 0,
"repetition_penalty": 1.0,
"temperature": 0.6,
"top_p": 0.95,
"transformers_version": "4.43.1"
}

101783
merges.txt Normal file

File diff suppressed because it is too large Load Diff

View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7330c2d3592cbb466374795a6ba086fe33028d26991e192b9c3dc73e7b6a2860
size 135

View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:b0b0a68faa7ac3c75d48e6e7de88395fc8145b9b5dd77b50447d50669b3b78cf
size 135

View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d200c2dee592440a7db08afe1e0e0e5508f48a6ce6c34b98190cbd217c66e098
size 135

View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8d5f4b1dee79ed25704c62a2ef713026c0685d7e43a0904570245bdb00158681
size 135

View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:800e6b7f981a6692c92d4e76e622fe56ffabf05bc082210e89c85a7377351773
size 135

View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c1204baf05dfc562f695c672f430ea47b5ee66b0380b0163249bbc7448e807f2
size 135

View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:1c2c456b834b195b1bdae4c7fa41e4b23b474a0dfa9cfeb5e16aee7866e28b2f
size 135

View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:00c597a6c9529044f87b8591fe60a032e672036c0bcac82941109f1b44c0e850
size 135

View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:4ebe76e325508896681e5190a28dfd62b79030e16b37638e3183ffcb165c2cc3
size 135

View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:78e91e5e0c7b2c387dc7b500c853edaf184e63acde919c93ced8dc883240f964
size 135

View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:4864bb1e13541d3f468ce1f6d8b56882fa342cdab8c7ff25a09af79d2288cc47
size 135

View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a0933928688cd21e76c358556b3cb001a34da4e331cbd080d19d73c96c30e73f
size 135

View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:245344e2510f759faa98c174a9dd6071fec5ffab6651c14eae27d74a5de59818
size 135

View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8b11d04162513a652f2b39c6ccb72957af7a2add82291fdc68e42fd571584fa0
size 135

View File

@@ -0,0 +1,586 @@
{
"metadata": {
"total_size": 64006400000
},
"weight_map": {
"lm_head.weight": "model-00014-of-00014.safetensors",
"transformer.h.0.attn.attention.k_proj.weight": "model-00001-of-00014.safetensors",
"transformer.h.0.attn.attention.out_proj.weight": "model-00001-of-00014.safetensors",
"transformer.h.0.attn.attention.q_proj.weight": "model-00001-of-00014.safetensors",
"transformer.h.0.attn.attention.v_proj.weight": "model-00001-of-00014.safetensors",
"transformer.h.0.ln_1.weight": "model-00001-of-00014.safetensors",
"transformer.h.0.ln_2.weight": "model-00001-of-00014.safetensors",
"transformer.h.0.mlp.c_fc_0.weight": "model-00001-of-00014.safetensors",
"transformer.h.0.mlp.c_fc_1.weight": "model-00001-of-00014.safetensors",
"transformer.h.0.mlp.c_proj.weight": "model-00001-of-00014.safetensors",
"transformer.h.1.attn.attention.k_proj.weight": "model-00001-of-00014.safetensors",
"transformer.h.1.attn.attention.out_proj.weight": "model-00001-of-00014.safetensors",
"transformer.h.1.attn.attention.q_proj.weight": "model-00001-of-00014.safetensors",
"transformer.h.1.attn.attention.v_proj.weight": "model-00001-of-00014.safetensors",
"transformer.h.1.ln_1.weight": "model-00001-of-00014.safetensors",
"transformer.h.1.ln_2.weight": "model-00001-of-00014.safetensors",
"transformer.h.1.mlp.c_fc_0.weight": "model-00001-of-00014.safetensors",
"transformer.h.1.mlp.c_fc_1.weight": "model-00001-of-00014.safetensors",
"transformer.h.1.mlp.c_proj.weight": "model-00001-of-00014.safetensors",
"transformer.h.10.attn.attention.k_proj.weight": "model-00003-of-00014.safetensors",
"transformer.h.10.attn.attention.out_proj.weight": "model-00003-of-00014.safetensors",
"transformer.h.10.attn.attention.q_proj.weight": "model-00003-of-00014.safetensors",
"transformer.h.10.attn.attention.v_proj.weight": "model-00003-of-00014.safetensors",
"transformer.h.10.ln_1.weight": "model-00003-of-00014.safetensors",
"transformer.h.10.ln_2.weight": "model-00003-of-00014.safetensors",
"transformer.h.10.mlp.c_fc_0.weight": "model-00003-of-00014.safetensors",
"transformer.h.10.mlp.c_fc_1.weight": "model-00003-of-00014.safetensors",
"transformer.h.10.mlp.c_proj.weight": "model-00003-of-00014.safetensors",
"transformer.h.11.attn.attention.k_proj.weight": "model-00003-of-00014.safetensors",
"transformer.h.11.attn.attention.out_proj.weight": "model-00003-of-00014.safetensors",
"transformer.h.11.attn.attention.q_proj.weight": "model-00003-of-00014.safetensors",
"transformer.h.11.attn.attention.v_proj.weight": "model-00003-of-00014.safetensors",
"transformer.h.11.ln_1.weight": "model-00003-of-00014.safetensors",
"transformer.h.11.ln_2.weight": "model-00003-of-00014.safetensors",
"transformer.h.11.mlp.c_fc_0.weight": "model-00003-of-00014.safetensors",
"transformer.h.11.mlp.c_fc_1.weight": "model-00003-of-00014.safetensors",
"transformer.h.11.mlp.c_proj.weight": "model-00003-of-00014.safetensors",
"transformer.h.12.attn.attention.k_proj.weight": "model-00003-of-00014.safetensors",
"transformer.h.12.attn.attention.out_proj.weight": "model-00003-of-00014.safetensors",
"transformer.h.12.attn.attention.q_proj.weight": "model-00003-of-00014.safetensors",
"transformer.h.12.attn.attention.v_proj.weight": "model-00003-of-00014.safetensors",
"transformer.h.12.ln_1.weight": "model-00003-of-00014.safetensors",
"transformer.h.12.ln_2.weight": "model-00003-of-00014.safetensors",
"transformer.h.12.mlp.c_fc_0.weight": "model-00003-of-00014.safetensors",
"transformer.h.12.mlp.c_fc_1.weight": "model-00003-of-00014.safetensors",
"transformer.h.12.mlp.c_proj.weight": "model-00003-of-00014.safetensors",
"transformer.h.13.attn.attention.k_proj.weight": "model-00003-of-00014.safetensors",
"transformer.h.13.attn.attention.out_proj.weight": "model-00003-of-00014.safetensors",
"transformer.h.13.attn.attention.q_proj.weight": "model-00003-of-00014.safetensors",
"transformer.h.13.attn.attention.v_proj.weight": "model-00003-of-00014.safetensors",
"transformer.h.13.ln_1.weight": "model-00003-of-00014.safetensors",
"transformer.h.13.ln_2.weight": "model-00003-of-00014.safetensors",
"transformer.h.13.mlp.c_fc_0.weight": "model-00003-of-00014.safetensors",
"transformer.h.13.mlp.c_fc_1.weight": "model-00003-of-00014.safetensors",
"transformer.h.13.mlp.c_proj.weight": "model-00003-of-00014.safetensors",
"transformer.h.14.attn.attention.k_proj.weight": "model-00003-of-00014.safetensors",
"transformer.h.14.attn.attention.out_proj.weight": "model-00003-of-00014.safetensors",
"transformer.h.14.attn.attention.q_proj.weight": "model-00003-of-00014.safetensors",
"transformer.h.14.attn.attention.v_proj.weight": "model-00003-of-00014.safetensors",
"transformer.h.14.ln_1.weight": "model-00003-of-00014.safetensors",
"transformer.h.14.ln_2.weight": "model-00003-of-00014.safetensors",
"transformer.h.14.mlp.c_fc_0.weight": "model-00004-of-00014.safetensors",
"transformer.h.14.mlp.c_fc_1.weight": "model-00004-of-00014.safetensors",
"transformer.h.14.mlp.c_proj.weight": "model-00004-of-00014.safetensors",
"transformer.h.15.attn.attention.k_proj.weight": "model-00004-of-00014.safetensors",
"transformer.h.15.attn.attention.out_proj.weight": "model-00004-of-00014.safetensors",
"transformer.h.15.attn.attention.q_proj.weight": "model-00004-of-00014.safetensors",
"transformer.h.15.attn.attention.v_proj.weight": "model-00004-of-00014.safetensors",
"transformer.h.15.ln_1.weight": "model-00004-of-00014.safetensors",
"transformer.h.15.ln_2.weight": "model-00004-of-00014.safetensors",
"transformer.h.15.mlp.c_fc_0.weight": "model-00004-of-00014.safetensors",
"transformer.h.15.mlp.c_fc_1.weight": "model-00004-of-00014.safetensors",
"transformer.h.15.mlp.c_proj.weight": "model-00004-of-00014.safetensors",
"transformer.h.16.attn.attention.k_proj.weight": "model-00004-of-00014.safetensors",
"transformer.h.16.attn.attention.out_proj.weight": "model-00004-of-00014.safetensors",
"transformer.h.16.attn.attention.q_proj.weight": "model-00004-of-00014.safetensors",
"transformer.h.16.attn.attention.v_proj.weight": "model-00004-of-00014.safetensors",
"transformer.h.16.ln_1.weight": "model-00004-of-00014.safetensors",
"transformer.h.16.ln_2.weight": "model-00004-of-00014.safetensors",
"transformer.h.16.mlp.c_fc_0.weight": "model-00004-of-00014.safetensors",
"transformer.h.16.mlp.c_fc_1.weight": "model-00004-of-00014.safetensors",
"transformer.h.16.mlp.c_proj.weight": "model-00004-of-00014.safetensors",
"transformer.h.17.attn.attention.k_proj.weight": "model-00004-of-00014.safetensors",
"transformer.h.17.attn.attention.out_proj.weight": "model-00004-of-00014.safetensors",
"transformer.h.17.attn.attention.q_proj.weight": "model-00004-of-00014.safetensors",
"transformer.h.17.attn.attention.v_proj.weight": "model-00004-of-00014.safetensors",
"transformer.h.17.ln_1.weight": "model-00004-of-00014.safetensors",
"transformer.h.17.ln_2.weight": "model-00004-of-00014.safetensors",
"transformer.h.17.mlp.c_fc_0.weight": "model-00004-of-00014.safetensors",
"transformer.h.17.mlp.c_fc_1.weight": "model-00004-of-00014.safetensors",
"transformer.h.17.mlp.c_proj.weight": "model-00004-of-00014.safetensors",
"transformer.h.18.attn.attention.k_proj.weight": "model-00004-of-00014.safetensors",
"transformer.h.18.attn.attention.out_proj.weight": "model-00004-of-00014.safetensors",
"transformer.h.18.attn.attention.q_proj.weight": "model-00004-of-00014.safetensors",
"transformer.h.18.attn.attention.v_proj.weight": "model-00004-of-00014.safetensors",
"transformer.h.18.ln_1.weight": "model-00004-of-00014.safetensors",
"transformer.h.18.ln_2.weight": "model-00004-of-00014.safetensors",
"transformer.h.18.mlp.c_fc_0.weight": "model-00004-of-00014.safetensors",
"transformer.h.18.mlp.c_fc_1.weight": "model-00004-of-00014.safetensors",
"transformer.h.18.mlp.c_proj.weight": "model-00004-of-00014.safetensors",
"transformer.h.19.attn.attention.k_proj.weight": "model-00004-of-00014.safetensors",
"transformer.h.19.attn.attention.out_proj.weight": "model-00004-of-00014.safetensors",
"transformer.h.19.attn.attention.q_proj.weight": "model-00004-of-00014.safetensors",
"transformer.h.19.attn.attention.v_proj.weight": "model-00004-of-00014.safetensors",
"transformer.h.19.ln_1.weight": "model-00004-of-00014.safetensors",
"transformer.h.19.ln_2.weight": "model-00004-of-00014.safetensors",
"transformer.h.19.mlp.c_fc_0.weight": "model-00005-of-00014.safetensors",
"transformer.h.19.mlp.c_fc_1.weight": "model-00005-of-00014.safetensors",
"transformer.h.19.mlp.c_proj.weight": "model-00005-of-00014.safetensors",
"transformer.h.2.attn.attention.k_proj.weight": "model-00001-of-00014.safetensors",
"transformer.h.2.attn.attention.out_proj.weight": "model-00001-of-00014.safetensors",
"transformer.h.2.attn.attention.q_proj.weight": "model-00001-of-00014.safetensors",
"transformer.h.2.attn.attention.v_proj.weight": "model-00001-of-00014.safetensors",
"transformer.h.2.ln_1.weight": "model-00001-of-00014.safetensors",
"transformer.h.2.ln_2.weight": "model-00001-of-00014.safetensors",
"transformer.h.2.mlp.c_fc_0.weight": "model-00001-of-00014.safetensors",
"transformer.h.2.mlp.c_fc_1.weight": "model-00001-of-00014.safetensors",
"transformer.h.2.mlp.c_proj.weight": "model-00001-of-00014.safetensors",
"transformer.h.20.attn.attention.k_proj.weight": "model-00005-of-00014.safetensors",
"transformer.h.20.attn.attention.out_proj.weight": "model-00005-of-00014.safetensors",
"transformer.h.20.attn.attention.q_proj.weight": "model-00005-of-00014.safetensors",
"transformer.h.20.attn.attention.v_proj.weight": "model-00005-of-00014.safetensors",
"transformer.h.20.ln_1.weight": "model-00005-of-00014.safetensors",
"transformer.h.20.ln_2.weight": "model-00005-of-00014.safetensors",
"transformer.h.20.mlp.c_fc_0.weight": "model-00005-of-00014.safetensors",
"transformer.h.20.mlp.c_fc_1.weight": "model-00005-of-00014.safetensors",
"transformer.h.20.mlp.c_proj.weight": "model-00005-of-00014.safetensors",
"transformer.h.21.attn.attention.k_proj.weight": "model-00005-of-00014.safetensors",
"transformer.h.21.attn.attention.out_proj.weight": "model-00005-of-00014.safetensors",
"transformer.h.21.attn.attention.q_proj.weight": "model-00005-of-00014.safetensors",
"transformer.h.21.attn.attention.v_proj.weight": "model-00005-of-00014.safetensors",
"transformer.h.21.ln_1.weight": "model-00005-of-00014.safetensors",
"transformer.h.21.ln_2.weight": "model-00005-of-00014.safetensors",
"transformer.h.21.mlp.c_fc_0.weight": "model-00005-of-00014.safetensors",
"transformer.h.21.mlp.c_fc_1.weight": "model-00005-of-00014.safetensors",
"transformer.h.21.mlp.c_proj.weight": "model-00005-of-00014.safetensors",
"transformer.h.22.attn.attention.k_proj.weight": "model-00005-of-00014.safetensors",
"transformer.h.22.attn.attention.out_proj.weight": "model-00005-of-00014.safetensors",
"transformer.h.22.attn.attention.q_proj.weight": "model-00005-of-00014.safetensors",
"transformer.h.22.attn.attention.v_proj.weight": "model-00005-of-00014.safetensors",
"transformer.h.22.ln_1.weight": "model-00005-of-00014.safetensors",
"transformer.h.22.ln_2.weight": "model-00005-of-00014.safetensors",
"transformer.h.22.mlp.c_fc_0.weight": "model-00005-of-00014.safetensors",
"transformer.h.22.mlp.c_fc_1.weight": "model-00005-of-00014.safetensors",
"transformer.h.22.mlp.c_proj.weight": "model-00005-of-00014.safetensors",
"transformer.h.23.attn.attention.k_proj.weight": "model-00005-of-00014.safetensors",
"transformer.h.23.attn.attention.out_proj.weight": "model-00005-of-00014.safetensors",
"transformer.h.23.attn.attention.q_proj.weight": "model-00005-of-00014.safetensors",
"transformer.h.23.attn.attention.v_proj.weight": "model-00005-of-00014.safetensors",
"transformer.h.23.ln_1.weight": "model-00005-of-00014.safetensors",
"transformer.h.23.ln_2.weight": "model-00005-of-00014.safetensors",
"transformer.h.23.mlp.c_fc_0.weight": "model-00005-of-00014.safetensors",
"transformer.h.23.mlp.c_fc_1.weight": "model-00005-of-00014.safetensors",
"transformer.h.23.mlp.c_proj.weight": "model-00005-of-00014.safetensors",
"transformer.h.24.attn.attention.k_proj.weight": "model-00005-of-00014.safetensors",
"transformer.h.24.attn.attention.out_proj.weight": "model-00005-of-00014.safetensors",
"transformer.h.24.attn.attention.q_proj.weight": "model-00005-of-00014.safetensors",
"transformer.h.24.attn.attention.v_proj.weight": "model-00005-of-00014.safetensors",
"transformer.h.24.ln_1.weight": "model-00005-of-00014.safetensors",
"transformer.h.24.ln_2.weight": "model-00005-of-00014.safetensors",
"transformer.h.24.mlp.c_fc_0.weight": "model-00006-of-00014.safetensors",
"transformer.h.24.mlp.c_fc_1.weight": "model-00006-of-00014.safetensors",
"transformer.h.24.mlp.c_proj.weight": "model-00006-of-00014.safetensors",
"transformer.h.25.attn.attention.k_proj.weight": "model-00006-of-00014.safetensors",
"transformer.h.25.attn.attention.out_proj.weight": "model-00006-of-00014.safetensors",
"transformer.h.25.attn.attention.q_proj.weight": "model-00006-of-00014.safetensors",
"transformer.h.25.attn.attention.v_proj.weight": "model-00006-of-00014.safetensors",
"transformer.h.25.ln_1.weight": "model-00006-of-00014.safetensors",
"transformer.h.25.ln_2.weight": "model-00006-of-00014.safetensors",
"transformer.h.25.mlp.c_fc_0.weight": "model-00006-of-00014.safetensors",
"transformer.h.25.mlp.c_fc_1.weight": "model-00006-of-00014.safetensors",
"transformer.h.25.mlp.c_proj.weight": "model-00006-of-00014.safetensors",
"transformer.h.26.attn.attention.k_proj.weight": "model-00006-of-00014.safetensors",
"transformer.h.26.attn.attention.out_proj.weight": "model-00006-of-00014.safetensors",
"transformer.h.26.attn.attention.q_proj.weight": "model-00006-of-00014.safetensors",
"transformer.h.26.attn.attention.v_proj.weight": "model-00006-of-00014.safetensors",
"transformer.h.26.ln_1.weight": "model-00006-of-00014.safetensors",
"transformer.h.26.ln_2.weight": "model-00006-of-00014.safetensors",
"transformer.h.26.mlp.c_fc_0.weight": "model-00006-of-00014.safetensors",
"transformer.h.26.mlp.c_fc_1.weight": "model-00006-of-00014.safetensors",
"transformer.h.26.mlp.c_proj.weight": "model-00006-of-00014.safetensors",
"transformer.h.27.attn.attention.k_proj.weight": "model-00006-of-00014.safetensors",
"transformer.h.27.attn.attention.out_proj.weight": "model-00006-of-00014.safetensors",
"transformer.h.27.attn.attention.q_proj.weight": "model-00006-of-00014.safetensors",
"transformer.h.27.attn.attention.v_proj.weight": "model-00006-of-00014.safetensors",
"transformer.h.27.ln_1.weight": "model-00006-of-00014.safetensors",
"transformer.h.27.ln_2.weight": "model-00006-of-00014.safetensors",
"transformer.h.27.mlp.c_fc_0.weight": "model-00006-of-00014.safetensors",
"transformer.h.27.mlp.c_fc_1.weight": "model-00006-of-00014.safetensors",
"transformer.h.27.mlp.c_proj.weight": "model-00006-of-00014.safetensors",
"transformer.h.28.attn.attention.k_proj.weight": "model-00006-of-00014.safetensors",
"transformer.h.28.attn.attention.out_proj.weight": "model-00006-of-00014.safetensors",
"transformer.h.28.attn.attention.q_proj.weight": "model-00006-of-00014.safetensors",
"transformer.h.28.attn.attention.v_proj.weight": "model-00006-of-00014.safetensors",
"transformer.h.28.ln_1.weight": "model-00006-of-00014.safetensors",
"transformer.h.28.ln_2.weight": "model-00006-of-00014.safetensors",
"transformer.h.28.mlp.c_fc_0.weight": "model-00006-of-00014.safetensors",
"transformer.h.28.mlp.c_fc_1.weight": "model-00006-of-00014.safetensors",
"transformer.h.28.mlp.c_proj.weight": "model-00006-of-00014.safetensors",
"transformer.h.29.attn.attention.k_proj.weight": "model-00006-of-00014.safetensors",
"transformer.h.29.attn.attention.out_proj.weight": "model-00006-of-00014.safetensors",
"transformer.h.29.attn.attention.q_proj.weight": "model-00006-of-00014.safetensors",
"transformer.h.29.attn.attention.v_proj.weight": "model-00006-of-00014.safetensors",
"transformer.h.29.ln_1.weight": "model-00006-of-00014.safetensors",
"transformer.h.29.ln_2.weight": "model-00006-of-00014.safetensors",
"transformer.h.29.mlp.c_fc_0.weight": "model-00007-of-00014.safetensors",
"transformer.h.29.mlp.c_fc_1.weight": "model-00007-of-00014.safetensors",
"transformer.h.29.mlp.c_proj.weight": "model-00007-of-00014.safetensors",
"transformer.h.3.attn.attention.k_proj.weight": "model-00001-of-00014.safetensors",
"transformer.h.3.attn.attention.out_proj.weight": "model-00001-of-00014.safetensors",
"transformer.h.3.attn.attention.q_proj.weight": "model-00001-of-00014.safetensors",
"transformer.h.3.attn.attention.v_proj.weight": "model-00001-of-00014.safetensors",
"transformer.h.3.ln_1.weight": "model-00001-of-00014.safetensors",
"transformer.h.3.ln_2.weight": "model-00001-of-00014.safetensors",
"transformer.h.3.mlp.c_fc_0.weight": "model-00001-of-00014.safetensors",
"transformer.h.3.mlp.c_fc_1.weight": "model-00001-of-00014.safetensors",
"transformer.h.3.mlp.c_proj.weight": "model-00001-of-00014.safetensors",
"transformer.h.30.attn.attention.k_proj.weight": "model-00007-of-00014.safetensors",
"transformer.h.30.attn.attention.out_proj.weight": "model-00007-of-00014.safetensors",
"transformer.h.30.attn.attention.q_proj.weight": "model-00007-of-00014.safetensors",
"transformer.h.30.attn.attention.v_proj.weight": "model-00007-of-00014.safetensors",
"transformer.h.30.ln_1.weight": "model-00007-of-00014.safetensors",
"transformer.h.30.ln_2.weight": "model-00007-of-00014.safetensors",
"transformer.h.30.mlp.c_fc_0.weight": "model-00007-of-00014.safetensors",
"transformer.h.30.mlp.c_fc_1.weight": "model-00007-of-00014.safetensors",
"transformer.h.30.mlp.c_proj.weight": "model-00007-of-00014.safetensors",
"transformer.h.31.attn.attention.k_proj.weight": "model-00007-of-00014.safetensors",
"transformer.h.31.attn.attention.out_proj.weight": "model-00007-of-00014.safetensors",
"transformer.h.31.attn.attention.q_proj.weight": "model-00007-of-00014.safetensors",
"transformer.h.31.attn.attention.v_proj.weight": "model-00007-of-00014.safetensors",
"transformer.h.31.ln_1.weight": "model-00007-of-00014.safetensors",
"transformer.h.31.ln_2.weight": "model-00007-of-00014.safetensors",
"transformer.h.31.mlp.c_fc_0.weight": "model-00007-of-00014.safetensors",
"transformer.h.31.mlp.c_fc_1.weight": "model-00007-of-00014.safetensors",
"transformer.h.31.mlp.c_proj.weight": "model-00007-of-00014.safetensors",
"transformer.h.32.attn.attention.k_proj.weight": "model-00007-of-00014.safetensors",
"transformer.h.32.attn.attention.out_proj.weight": "model-00007-of-00014.safetensors",
"transformer.h.32.attn.attention.q_proj.weight": "model-00007-of-00014.safetensors",
"transformer.h.32.attn.attention.v_proj.weight": "model-00007-of-00014.safetensors",
"transformer.h.32.ln_1.weight": "model-00007-of-00014.safetensors",
"transformer.h.32.ln_2.weight": "model-00007-of-00014.safetensors",
"transformer.h.32.mlp.c_fc_0.weight": "model-00007-of-00014.safetensors",
"transformer.h.32.mlp.c_fc_1.weight": "model-00007-of-00014.safetensors",
"transformer.h.32.mlp.c_proj.weight": "model-00007-of-00014.safetensors",
"transformer.h.33.attn.attention.k_proj.weight": "model-00007-of-00014.safetensors",
"transformer.h.33.attn.attention.out_proj.weight": "model-00007-of-00014.safetensors",
"transformer.h.33.attn.attention.q_proj.weight": "model-00007-of-00014.safetensors",
"transformer.h.33.attn.attention.v_proj.weight": "model-00007-of-00014.safetensors",
"transformer.h.33.ln_1.weight": "model-00007-of-00014.safetensors",
"transformer.h.33.ln_2.weight": "model-00007-of-00014.safetensors",
"transformer.h.33.mlp.c_fc_0.weight": "model-00007-of-00014.safetensors",
"transformer.h.33.mlp.c_fc_1.weight": "model-00007-of-00014.safetensors",
"transformer.h.33.mlp.c_proj.weight": "model-00007-of-00014.safetensors",
"transformer.h.34.attn.attention.k_proj.weight": "model-00007-of-00014.safetensors",
"transformer.h.34.attn.attention.out_proj.weight": "model-00007-of-00014.safetensors",
"transformer.h.34.attn.attention.q_proj.weight": "model-00007-of-00014.safetensors",
"transformer.h.34.attn.attention.v_proj.weight": "model-00007-of-00014.safetensors",
"transformer.h.34.ln_1.weight": "model-00007-of-00014.safetensors",
"transformer.h.34.ln_2.weight": "model-00007-of-00014.safetensors",
"transformer.h.34.mlp.c_fc_0.weight": "model-00008-of-00014.safetensors",
"transformer.h.34.mlp.c_fc_1.weight": "model-00008-of-00014.safetensors",
"transformer.h.34.mlp.c_proj.weight": "model-00008-of-00014.safetensors",
"transformer.h.35.attn.attention.k_proj.weight": "model-00008-of-00014.safetensors",
"transformer.h.35.attn.attention.out_proj.weight": "model-00008-of-00014.safetensors",
"transformer.h.35.attn.attention.q_proj.weight": "model-00008-of-00014.safetensors",
"transformer.h.35.attn.attention.v_proj.weight": "model-00008-of-00014.safetensors",
"transformer.h.35.ln_1.weight": "model-00008-of-00014.safetensors",
"transformer.h.35.ln_2.weight": "model-00008-of-00014.safetensors",
"transformer.h.35.mlp.c_fc_0.weight": "model-00008-of-00014.safetensors",
"transformer.h.35.mlp.c_fc_1.weight": "model-00008-of-00014.safetensors",
"transformer.h.35.mlp.c_proj.weight": "model-00008-of-00014.safetensors",
"transformer.h.36.attn.attention.k_proj.weight": "model-00008-of-00014.safetensors",
"transformer.h.36.attn.attention.out_proj.weight": "model-00008-of-00014.safetensors",
"transformer.h.36.attn.attention.q_proj.weight": "model-00008-of-00014.safetensors",
"transformer.h.36.attn.attention.v_proj.weight": "model-00008-of-00014.safetensors",
"transformer.h.36.ln_1.weight": "model-00008-of-00014.safetensors",
"transformer.h.36.ln_2.weight": "model-00008-of-00014.safetensors",
"transformer.h.36.mlp.c_fc_0.weight": "model-00008-of-00014.safetensors",
"transformer.h.36.mlp.c_fc_1.weight": "model-00008-of-00014.safetensors",
"transformer.h.36.mlp.c_proj.weight": "model-00008-of-00014.safetensors",
"transformer.h.37.attn.attention.k_proj.weight": "model-00008-of-00014.safetensors",
"transformer.h.37.attn.attention.out_proj.weight": "model-00008-of-00014.safetensors",
"transformer.h.37.attn.attention.q_proj.weight": "model-00008-of-00014.safetensors",
"transformer.h.37.attn.attention.v_proj.weight": "model-00008-of-00014.safetensors",
"transformer.h.37.ln_1.weight": "model-00008-of-00014.safetensors",
"transformer.h.37.ln_2.weight": "model-00008-of-00014.safetensors",
"transformer.h.37.mlp.c_fc_0.weight": "model-00008-of-00014.safetensors",
"transformer.h.37.mlp.c_fc_1.weight": "model-00008-of-00014.safetensors",
"transformer.h.37.mlp.c_proj.weight": "model-00008-of-00014.safetensors",
"transformer.h.38.attn.attention.k_proj.weight": "model-00008-of-00014.safetensors",
"transformer.h.38.attn.attention.out_proj.weight": "model-00008-of-00014.safetensors",
"transformer.h.38.attn.attention.q_proj.weight": "model-00008-of-00014.safetensors",
"transformer.h.38.attn.attention.v_proj.weight": "model-00008-of-00014.safetensors",
"transformer.h.38.ln_1.weight": "model-00008-of-00014.safetensors",
"transformer.h.38.ln_2.weight": "model-00008-of-00014.safetensors",
"transformer.h.38.mlp.c_fc_0.weight": "model-00008-of-00014.safetensors",
"transformer.h.38.mlp.c_fc_1.weight": "model-00008-of-00014.safetensors",
"transformer.h.38.mlp.c_proj.weight": "model-00008-of-00014.safetensors",
"transformer.h.39.attn.attention.k_proj.weight": "model-00008-of-00014.safetensors",
"transformer.h.39.attn.attention.out_proj.weight": "model-00008-of-00014.safetensors",
"transformer.h.39.attn.attention.q_proj.weight": "model-00008-of-00014.safetensors",
"transformer.h.39.attn.attention.v_proj.weight": "model-00008-of-00014.safetensors",
"transformer.h.39.ln_1.weight": "model-00008-of-00014.safetensors",
"transformer.h.39.ln_2.weight": "model-00008-of-00014.safetensors",
"transformer.h.39.mlp.c_fc_0.weight": "model-00009-of-00014.safetensors",
"transformer.h.39.mlp.c_fc_1.weight": "model-00009-of-00014.safetensors",
"transformer.h.39.mlp.c_proj.weight": "model-00009-of-00014.safetensors",
"transformer.h.4.attn.attention.k_proj.weight": "model-00001-of-00014.safetensors",
"transformer.h.4.attn.attention.out_proj.weight": "model-00002-of-00014.safetensors",
"transformer.h.4.attn.attention.q_proj.weight": "model-00001-of-00014.safetensors",
"transformer.h.4.attn.attention.v_proj.weight": "model-00001-of-00014.safetensors",
"transformer.h.4.ln_1.weight": "model-00001-of-00014.safetensors",
"transformer.h.4.ln_2.weight": "model-00002-of-00014.safetensors",
"transformer.h.4.mlp.c_fc_0.weight": "model-00002-of-00014.safetensors",
"transformer.h.4.mlp.c_fc_1.weight": "model-00002-of-00014.safetensors",
"transformer.h.4.mlp.c_proj.weight": "model-00002-of-00014.safetensors",
"transformer.h.40.attn.attention.k_proj.weight": "model-00009-of-00014.safetensors",
"transformer.h.40.attn.attention.out_proj.weight": "model-00009-of-00014.safetensors",
"transformer.h.40.attn.attention.q_proj.weight": "model-00009-of-00014.safetensors",
"transformer.h.40.attn.attention.v_proj.weight": "model-00009-of-00014.safetensors",
"transformer.h.40.ln_1.weight": "model-00009-of-00014.safetensors",
"transformer.h.40.ln_2.weight": "model-00009-of-00014.safetensors",
"transformer.h.40.mlp.c_fc_0.weight": "model-00009-of-00014.safetensors",
"transformer.h.40.mlp.c_fc_1.weight": "model-00009-of-00014.safetensors",
"transformer.h.40.mlp.c_proj.weight": "model-00009-of-00014.safetensors",
"transformer.h.41.attn.attention.k_proj.weight": "model-00009-of-00014.safetensors",
"transformer.h.41.attn.attention.out_proj.weight": "model-00009-of-00014.safetensors",
"transformer.h.41.attn.attention.q_proj.weight": "model-00009-of-00014.safetensors",
"transformer.h.41.attn.attention.v_proj.weight": "model-00009-of-00014.safetensors",
"transformer.h.41.ln_1.weight": "model-00009-of-00014.safetensors",
"transformer.h.41.ln_2.weight": "model-00009-of-00014.safetensors",
"transformer.h.41.mlp.c_fc_0.weight": "model-00009-of-00014.safetensors",
"transformer.h.41.mlp.c_fc_1.weight": "model-00009-of-00014.safetensors",
"transformer.h.41.mlp.c_proj.weight": "model-00009-of-00014.safetensors",
"transformer.h.42.attn.attention.k_proj.weight": "model-00009-of-00014.safetensors",
"transformer.h.42.attn.attention.out_proj.weight": "model-00009-of-00014.safetensors",
"transformer.h.42.attn.attention.q_proj.weight": "model-00009-of-00014.safetensors",
"transformer.h.42.attn.attention.v_proj.weight": "model-00009-of-00014.safetensors",
"transformer.h.42.ln_1.weight": "model-00009-of-00014.safetensors",
"transformer.h.42.ln_2.weight": "model-00009-of-00014.safetensors",
"transformer.h.42.mlp.c_fc_0.weight": "model-00009-of-00014.safetensors",
"transformer.h.42.mlp.c_fc_1.weight": "model-00009-of-00014.safetensors",
"transformer.h.42.mlp.c_proj.weight": "model-00009-of-00014.safetensors",
"transformer.h.43.attn.attention.k_proj.weight": "model-00009-of-00014.safetensors",
"transformer.h.43.attn.attention.out_proj.weight": "model-00009-of-00014.safetensors",
"transformer.h.43.attn.attention.q_proj.weight": "model-00009-of-00014.safetensors",
"transformer.h.43.attn.attention.v_proj.weight": "model-00009-of-00014.safetensors",
"transformer.h.43.ln_1.weight": "model-00009-of-00014.safetensors",
"transformer.h.43.ln_2.weight": "model-00009-of-00014.safetensors",
"transformer.h.43.mlp.c_fc_0.weight": "model-00009-of-00014.safetensors",
"transformer.h.43.mlp.c_fc_1.weight": "model-00009-of-00014.safetensors",
"transformer.h.43.mlp.c_proj.weight": "model-00009-of-00014.safetensors",
"transformer.h.44.attn.attention.k_proj.weight": "model-00009-of-00014.safetensors",
"transformer.h.44.attn.attention.out_proj.weight": "model-00009-of-00014.safetensors",
"transformer.h.44.attn.attention.q_proj.weight": "model-00009-of-00014.safetensors",
"transformer.h.44.attn.attention.v_proj.weight": "model-00009-of-00014.safetensors",
"transformer.h.44.ln_1.weight": "model-00009-of-00014.safetensors",
"transformer.h.44.ln_2.weight": "model-00009-of-00014.safetensors",
"transformer.h.44.mlp.c_fc_0.weight": "model-00010-of-00014.safetensors",
"transformer.h.44.mlp.c_fc_1.weight": "model-00010-of-00014.safetensors",
"transformer.h.44.mlp.c_proj.weight": "model-00010-of-00014.safetensors",
"transformer.h.45.attn.attention.k_proj.weight": "model-00010-of-00014.safetensors",
"transformer.h.45.attn.attention.out_proj.weight": "model-00010-of-00014.safetensors",
"transformer.h.45.attn.attention.q_proj.weight": "model-00010-of-00014.safetensors",
"transformer.h.45.attn.attention.v_proj.weight": "model-00010-of-00014.safetensors",
"transformer.h.45.ln_1.weight": "model-00010-of-00014.safetensors",
"transformer.h.45.ln_2.weight": "model-00010-of-00014.safetensors",
"transformer.h.45.mlp.c_fc_0.weight": "model-00010-of-00014.safetensors",
"transformer.h.45.mlp.c_fc_1.weight": "model-00010-of-00014.safetensors",
"transformer.h.45.mlp.c_proj.weight": "model-00010-of-00014.safetensors",
"transformer.h.46.attn.attention.k_proj.weight": "model-00010-of-00014.safetensors",
"transformer.h.46.attn.attention.out_proj.weight": "model-00010-of-00014.safetensors",
"transformer.h.46.attn.attention.q_proj.weight": "model-00010-of-00014.safetensors",
"transformer.h.46.attn.attention.v_proj.weight": "model-00010-of-00014.safetensors",
"transformer.h.46.ln_1.weight": "model-00010-of-00014.safetensors",
"transformer.h.46.ln_2.weight": "model-00010-of-00014.safetensors",
"transformer.h.46.mlp.c_fc_0.weight": "model-00010-of-00014.safetensors",
"transformer.h.46.mlp.c_fc_1.weight": "model-00010-of-00014.safetensors",
"transformer.h.46.mlp.c_proj.weight": "model-00010-of-00014.safetensors",
"transformer.h.47.attn.attention.k_proj.weight": "model-00010-of-00014.safetensors",
"transformer.h.47.attn.attention.out_proj.weight": "model-00010-of-00014.safetensors",
"transformer.h.47.attn.attention.q_proj.weight": "model-00010-of-00014.safetensors",
"transformer.h.47.attn.attention.v_proj.weight": "model-00010-of-00014.safetensors",
"transformer.h.47.ln_1.weight": "model-00010-of-00014.safetensors",
"transformer.h.47.ln_2.weight": "model-00010-of-00014.safetensors",
"transformer.h.47.mlp.c_fc_0.weight": "model-00010-of-00014.safetensors",
"transformer.h.47.mlp.c_fc_1.weight": "model-00010-of-00014.safetensors",
"transformer.h.47.mlp.c_proj.weight": "model-00010-of-00014.safetensors",
"transformer.h.48.attn.attention.k_proj.weight": "model-00010-of-00014.safetensors",
"transformer.h.48.attn.attention.out_proj.weight": "model-00010-of-00014.safetensors",
"transformer.h.48.attn.attention.q_proj.weight": "model-00010-of-00014.safetensors",
"transformer.h.48.attn.attention.v_proj.weight": "model-00010-of-00014.safetensors",
"transformer.h.48.ln_1.weight": "model-00010-of-00014.safetensors",
"transformer.h.48.ln_2.weight": "model-00010-of-00014.safetensors",
"transformer.h.48.mlp.c_fc_0.weight": "model-00010-of-00014.safetensors",
"transformer.h.48.mlp.c_fc_1.weight": "model-00010-of-00014.safetensors",
"transformer.h.48.mlp.c_proj.weight": "model-00010-of-00014.safetensors",
"transformer.h.49.attn.attention.k_proj.weight": "model-00010-of-00014.safetensors",
"transformer.h.49.attn.attention.out_proj.weight": "model-00010-of-00014.safetensors",
"transformer.h.49.attn.attention.q_proj.weight": "model-00010-of-00014.safetensors",
"transformer.h.49.attn.attention.v_proj.weight": "model-00010-of-00014.safetensors",
"transformer.h.49.ln_1.weight": "model-00010-of-00014.safetensors",
"transformer.h.49.ln_2.weight": "model-00010-of-00014.safetensors",
"transformer.h.49.mlp.c_fc_0.weight": "model-00011-of-00014.safetensors",
"transformer.h.49.mlp.c_fc_1.weight": "model-00011-of-00014.safetensors",
"transformer.h.49.mlp.c_proj.weight": "model-00011-of-00014.safetensors",
"transformer.h.5.attn.attention.k_proj.weight": "model-00002-of-00014.safetensors",
"transformer.h.5.attn.attention.out_proj.weight": "model-00002-of-00014.safetensors",
"transformer.h.5.attn.attention.q_proj.weight": "model-00002-of-00014.safetensors",
"transformer.h.5.attn.attention.v_proj.weight": "model-00002-of-00014.safetensors",
"transformer.h.5.ln_1.weight": "model-00002-of-00014.safetensors",
"transformer.h.5.ln_2.weight": "model-00002-of-00014.safetensors",
"transformer.h.5.mlp.c_fc_0.weight": "model-00002-of-00014.safetensors",
"transformer.h.5.mlp.c_fc_1.weight": "model-00002-of-00014.safetensors",
"transformer.h.5.mlp.c_proj.weight": "model-00002-of-00014.safetensors",
"transformer.h.50.attn.attention.k_proj.weight": "model-00011-of-00014.safetensors",
"transformer.h.50.attn.attention.out_proj.weight": "model-00011-of-00014.safetensors",
"transformer.h.50.attn.attention.q_proj.weight": "model-00011-of-00014.safetensors",
"transformer.h.50.attn.attention.v_proj.weight": "model-00011-of-00014.safetensors",
"transformer.h.50.ln_1.weight": "model-00011-of-00014.safetensors",
"transformer.h.50.ln_2.weight": "model-00011-of-00014.safetensors",
"transformer.h.50.mlp.c_fc_0.weight": "model-00011-of-00014.safetensors",
"transformer.h.50.mlp.c_fc_1.weight": "model-00011-of-00014.safetensors",
"transformer.h.50.mlp.c_proj.weight": "model-00011-of-00014.safetensors",
"transformer.h.51.attn.attention.k_proj.weight": "model-00011-of-00014.safetensors",
"transformer.h.51.attn.attention.out_proj.weight": "model-00011-of-00014.safetensors",
"transformer.h.51.attn.attention.q_proj.weight": "model-00011-of-00014.safetensors",
"transformer.h.51.attn.attention.v_proj.weight": "model-00011-of-00014.safetensors",
"transformer.h.51.ln_1.weight": "model-00011-of-00014.safetensors",
"transformer.h.51.ln_2.weight": "model-00011-of-00014.safetensors",
"transformer.h.51.mlp.c_fc_0.weight": "model-00011-of-00014.safetensors",
"transformer.h.51.mlp.c_fc_1.weight": "model-00011-of-00014.safetensors",
"transformer.h.51.mlp.c_proj.weight": "model-00011-of-00014.safetensors",
"transformer.h.52.attn.attention.k_proj.weight": "model-00011-of-00014.safetensors",
"transformer.h.52.attn.attention.out_proj.weight": "model-00011-of-00014.safetensors",
"transformer.h.52.attn.attention.q_proj.weight": "model-00011-of-00014.safetensors",
"transformer.h.52.attn.attention.v_proj.weight": "model-00011-of-00014.safetensors",
"transformer.h.52.ln_1.weight": "model-00011-of-00014.safetensors",
"transformer.h.52.ln_2.weight": "model-00011-of-00014.safetensors",
"transformer.h.52.mlp.c_fc_0.weight": "model-00011-of-00014.safetensors",
"transformer.h.52.mlp.c_fc_1.weight": "model-00011-of-00014.safetensors",
"transformer.h.52.mlp.c_proj.weight": "model-00011-of-00014.safetensors",
"transformer.h.53.attn.attention.k_proj.weight": "model-00011-of-00014.safetensors",
"transformer.h.53.attn.attention.out_proj.weight": "model-00011-of-00014.safetensors",
"transformer.h.53.attn.attention.q_proj.weight": "model-00011-of-00014.safetensors",
"transformer.h.53.attn.attention.v_proj.weight": "model-00011-of-00014.safetensors",
"transformer.h.53.ln_1.weight": "model-00011-of-00014.safetensors",
"transformer.h.53.ln_2.weight": "model-00011-of-00014.safetensors",
"transformer.h.53.mlp.c_fc_0.weight": "model-00011-of-00014.safetensors",
"transformer.h.53.mlp.c_fc_1.weight": "model-00011-of-00014.safetensors",
"transformer.h.53.mlp.c_proj.weight": "model-00011-of-00014.safetensors",
"transformer.h.54.attn.attention.k_proj.weight": "model-00011-of-00014.safetensors",
"transformer.h.54.attn.attention.out_proj.weight": "model-00011-of-00014.safetensors",
"transformer.h.54.attn.attention.q_proj.weight": "model-00011-of-00014.safetensors",
"transformer.h.54.attn.attention.v_proj.weight": "model-00011-of-00014.safetensors",
"transformer.h.54.ln_1.weight": "model-00011-of-00014.safetensors",
"transformer.h.54.ln_2.weight": "model-00011-of-00014.safetensors",
"transformer.h.54.mlp.c_fc_0.weight": "model-00012-of-00014.safetensors",
"transformer.h.54.mlp.c_fc_1.weight": "model-00012-of-00014.safetensors",
"transformer.h.54.mlp.c_proj.weight": "model-00012-of-00014.safetensors",
"transformer.h.55.attn.attention.k_proj.weight": "model-00012-of-00014.safetensors",
"transformer.h.55.attn.attention.out_proj.weight": "model-00012-of-00014.safetensors",
"transformer.h.55.attn.attention.q_proj.weight": "model-00012-of-00014.safetensors",
"transformer.h.55.attn.attention.v_proj.weight": "model-00012-of-00014.safetensors",
"transformer.h.55.ln_1.weight": "model-00012-of-00014.safetensors",
"transformer.h.55.ln_2.weight": "model-00012-of-00014.safetensors",
"transformer.h.55.mlp.c_fc_0.weight": "model-00012-of-00014.safetensors",
"transformer.h.55.mlp.c_fc_1.weight": "model-00012-of-00014.safetensors",
"transformer.h.55.mlp.c_proj.weight": "model-00012-of-00014.safetensors",
"transformer.h.56.attn.attention.k_proj.weight": "model-00012-of-00014.safetensors",
"transformer.h.56.attn.attention.out_proj.weight": "model-00012-of-00014.safetensors",
"transformer.h.56.attn.attention.q_proj.weight": "model-00012-of-00014.safetensors",
"transformer.h.56.attn.attention.v_proj.weight": "model-00012-of-00014.safetensors",
"transformer.h.56.ln_1.weight": "model-00012-of-00014.safetensors",
"transformer.h.56.ln_2.weight": "model-00012-of-00014.safetensors",
"transformer.h.56.mlp.c_fc_0.weight": "model-00012-of-00014.safetensors",
"transformer.h.56.mlp.c_fc_1.weight": "model-00012-of-00014.safetensors",
"transformer.h.56.mlp.c_proj.weight": "model-00012-of-00014.safetensors",
"transformer.h.57.attn.attention.k_proj.weight": "model-00012-of-00014.safetensors",
"transformer.h.57.attn.attention.out_proj.weight": "model-00012-of-00014.safetensors",
"transformer.h.57.attn.attention.q_proj.weight": "model-00012-of-00014.safetensors",
"transformer.h.57.attn.attention.v_proj.weight": "model-00012-of-00014.safetensors",
"transformer.h.57.ln_1.weight": "model-00012-of-00014.safetensors",
"transformer.h.57.ln_2.weight": "model-00012-of-00014.safetensors",
"transformer.h.57.mlp.c_fc_0.weight": "model-00012-of-00014.safetensors",
"transformer.h.57.mlp.c_fc_1.weight": "model-00012-of-00014.safetensors",
"transformer.h.57.mlp.c_proj.weight": "model-00012-of-00014.safetensors",
"transformer.h.58.attn.attention.k_proj.weight": "model-00012-of-00014.safetensors",
"transformer.h.58.attn.attention.out_proj.weight": "model-00012-of-00014.safetensors",
"transformer.h.58.attn.attention.q_proj.weight": "model-00012-of-00014.safetensors",
"transformer.h.58.attn.attention.v_proj.weight": "model-00012-of-00014.safetensors",
"transformer.h.58.ln_1.weight": "model-00012-of-00014.safetensors",
"transformer.h.58.ln_2.weight": "model-00012-of-00014.safetensors",
"transformer.h.58.mlp.c_fc_0.weight": "model-00012-of-00014.safetensors",
"transformer.h.58.mlp.c_fc_1.weight": "model-00012-of-00014.safetensors",
"transformer.h.58.mlp.c_proj.weight": "model-00012-of-00014.safetensors",
"transformer.h.59.attn.attention.k_proj.weight": "model-00012-of-00014.safetensors",
"transformer.h.59.attn.attention.out_proj.weight": "model-00012-of-00014.safetensors",
"transformer.h.59.attn.attention.q_proj.weight": "model-00012-of-00014.safetensors",
"transformer.h.59.attn.attention.v_proj.weight": "model-00012-of-00014.safetensors",
"transformer.h.59.ln_1.weight": "model-00012-of-00014.safetensors",
"transformer.h.59.ln_2.weight": "model-00012-of-00014.safetensors",
"transformer.h.59.mlp.c_fc_0.weight": "model-00013-of-00014.safetensors",
"transformer.h.59.mlp.c_fc_1.weight": "model-00013-of-00014.safetensors",
"transformer.h.59.mlp.c_proj.weight": "model-00013-of-00014.safetensors",
"transformer.h.6.attn.attention.k_proj.weight": "model-00002-of-00014.safetensors",
"transformer.h.6.attn.attention.out_proj.weight": "model-00002-of-00014.safetensors",
"transformer.h.6.attn.attention.q_proj.weight": "model-00002-of-00014.safetensors",
"transformer.h.6.attn.attention.v_proj.weight": "model-00002-of-00014.safetensors",
"transformer.h.6.ln_1.weight": "model-00002-of-00014.safetensors",
"transformer.h.6.ln_2.weight": "model-00002-of-00014.safetensors",
"transformer.h.6.mlp.c_fc_0.weight": "model-00002-of-00014.safetensors",
"transformer.h.6.mlp.c_fc_1.weight": "model-00002-of-00014.safetensors",
"transformer.h.6.mlp.c_proj.weight": "model-00002-of-00014.safetensors",
"transformer.h.60.attn.attention.k_proj.weight": "model-00013-of-00014.safetensors",
"transformer.h.60.attn.attention.out_proj.weight": "model-00013-of-00014.safetensors",
"transformer.h.60.attn.attention.q_proj.weight": "model-00013-of-00014.safetensors",
"transformer.h.60.attn.attention.v_proj.weight": "model-00013-of-00014.safetensors",
"transformer.h.60.ln_1.weight": "model-00013-of-00014.safetensors",
"transformer.h.60.ln_2.weight": "model-00013-of-00014.safetensors",
"transformer.h.60.mlp.c_fc_0.weight": "model-00013-of-00014.safetensors",
"transformer.h.60.mlp.c_fc_1.weight": "model-00013-of-00014.safetensors",
"transformer.h.60.mlp.c_proj.weight": "model-00013-of-00014.safetensors",
"transformer.h.61.attn.attention.k_proj.weight": "model-00013-of-00014.safetensors",
"transformer.h.61.attn.attention.out_proj.weight": "model-00013-of-00014.safetensors",
"transformer.h.61.attn.attention.q_proj.weight": "model-00013-of-00014.safetensors",
"transformer.h.61.attn.attention.v_proj.weight": "model-00013-of-00014.safetensors",
"transformer.h.61.ln_1.weight": "model-00013-of-00014.safetensors",
"transformer.h.61.ln_2.weight": "model-00013-of-00014.safetensors",
"transformer.h.61.mlp.c_fc_0.weight": "model-00013-of-00014.safetensors",
"transformer.h.61.mlp.c_fc_1.weight": "model-00013-of-00014.safetensors",
"transformer.h.61.mlp.c_proj.weight": "model-00013-of-00014.safetensors",
"transformer.h.62.attn.attention.k_proj.weight": "model-00013-of-00014.safetensors",
"transformer.h.62.attn.attention.out_proj.weight": "model-00013-of-00014.safetensors",
"transformer.h.62.attn.attention.q_proj.weight": "model-00013-of-00014.safetensors",
"transformer.h.62.attn.attention.v_proj.weight": "model-00013-of-00014.safetensors",
"transformer.h.62.ln_1.weight": "model-00013-of-00014.safetensors",
"transformer.h.62.ln_2.weight": "model-00013-of-00014.safetensors",
"transformer.h.62.mlp.c_fc_0.weight": "model-00013-of-00014.safetensors",
"transformer.h.62.mlp.c_fc_1.weight": "model-00013-of-00014.safetensors",
"transformer.h.62.mlp.c_proj.weight": "model-00013-of-00014.safetensors",
"transformer.h.63.attn.attention.k_proj.weight": "model-00013-of-00014.safetensors",
"transformer.h.63.attn.attention.out_proj.weight": "model-00013-of-00014.safetensors",
"transformer.h.63.attn.attention.q_proj.weight": "model-00013-of-00014.safetensors",
"transformer.h.63.attn.attention.v_proj.weight": "model-00013-of-00014.safetensors",
"transformer.h.63.ln_1.weight": "model-00013-of-00014.safetensors",
"transformer.h.63.ln_2.weight": "model-00013-of-00014.safetensors",
"transformer.h.63.mlp.c_fc_0.weight": "model-00013-of-00014.safetensors",
"transformer.h.63.mlp.c_fc_1.weight": "model-00013-of-00014.safetensors",
"transformer.h.63.mlp.c_proj.weight": "model-00013-of-00014.safetensors",
"transformer.h.7.attn.attention.k_proj.weight": "model-00002-of-00014.safetensors",
"transformer.h.7.attn.attention.out_proj.weight": "model-00002-of-00014.safetensors",
"transformer.h.7.attn.attention.q_proj.weight": "model-00002-of-00014.safetensors",
"transformer.h.7.attn.attention.v_proj.weight": "model-00002-of-00014.safetensors",
"transformer.h.7.ln_1.weight": "model-00002-of-00014.safetensors",
"transformer.h.7.ln_2.weight": "model-00002-of-00014.safetensors",
"transformer.h.7.mlp.c_fc_0.weight": "model-00002-of-00014.safetensors",
"transformer.h.7.mlp.c_fc_1.weight": "model-00002-of-00014.safetensors",
"transformer.h.7.mlp.c_proj.weight": "model-00002-of-00014.safetensors",
"transformer.h.8.attn.attention.k_proj.weight": "model-00002-of-00014.safetensors",
"transformer.h.8.attn.attention.out_proj.weight": "model-00002-of-00014.safetensors",
"transformer.h.8.attn.attention.q_proj.weight": "model-00002-of-00014.safetensors",
"transformer.h.8.attn.attention.v_proj.weight": "model-00002-of-00014.safetensors",
"transformer.h.8.ln_1.weight": "model-00002-of-00014.safetensors",
"transformer.h.8.ln_2.weight": "model-00002-of-00014.safetensors",
"transformer.h.8.mlp.c_fc_0.weight": "model-00002-of-00014.safetensors",
"transformer.h.8.mlp.c_fc_1.weight": "model-00002-of-00014.safetensors",
"transformer.h.8.mlp.c_proj.weight": "model-00002-of-00014.safetensors",
"transformer.h.9.attn.attention.k_proj.weight": "model-00002-of-00014.safetensors",
"transformer.h.9.attn.attention.out_proj.weight": "model-00002-of-00014.safetensors",
"transformer.h.9.attn.attention.q_proj.weight": "model-00002-of-00014.safetensors",
"transformer.h.9.attn.attention.v_proj.weight": "model-00002-of-00014.safetensors",
"transformer.h.9.ln_1.weight": "model-00002-of-00014.safetensors",
"transformer.h.9.ln_2.weight": "model-00002-of-00014.safetensors",
"transformer.h.9.mlp.c_fc_0.weight": "model-00003-of-00014.safetensors",
"transformer.h.9.mlp.c_fc_1.weight": "model-00003-of-00014.safetensors",
"transformer.h.9.mlp.c_proj.weight": "model-00003-of-00014.safetensors",
"transformer.ln_f.weight": "model-00013-of-00014.safetensors",
"transformer.wte.weight": "model-00001-of-00014.safetensors"
}
}

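The weight map above is the `model.safetensors.index.json` entry that tells `transformers` which of the 14 shard files holds each tensor. A minimal sketch of how a single tensor can be located and read directly from its shard, assuming the index and shard files have been downloaded to a local `EXAONE-Deep-32B` directory (the directory path and the chosen tensor name are only examples):

```python
import json
from safetensors import safe_open

model_dir = "EXAONE-Deep-32B"  # assumed local download directory

# The index maps every tensor name to the shard file that stores it.
with open(f"{model_dir}/model.safetensors.index.json") as f:
    index = json.load(f)

tensor_name = "transformer.h.63.mlp.c_proj.weight"  # example entry from the map above
shard_file = index["weight_map"][tensor_name]       # e.g. "model-00013-of-00014.safetensors"

# Open only that shard and read the single tensor, without loading the full model.
with safe_open(f"{model_dir}/{shard_file}", framework="pt") as shard:
    weight = shard.get_tensor(tensor_name)

print(tensor_name, tuple(weight.shape), str(weight.dtype))
```

In normal use none of this is needed: `AutoModelForCausalLM.from_pretrained` reads the same index and resolves the shards automatically.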
1394
modeling_exaone.py Normal file

File diff suppressed because it is too large

30
special_tokens_map.json Normal file
View File

@@ -0,0 +1,30 @@
{
"bos_token": {
"content": "[BOS]",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
},
"eos_token": {
"content": "[|endofturn|]",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
},
"pad_token": {
"content": "[PAD]",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
},
"unk_token": {
"content": "[UNK]",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
}
}

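`special_tokens_map.json` pins the tokenizer's special tokens: `[BOS]` for beginning of sequence, `[|endofturn|]` as the end-of-sequence/end-of-turn token, `[PAD]` for padding, and `[UNK]` for unknowns. A small sketch, assuming the tokenizer is loaded from this repository by its model ID, of how these values surface as tokenizer attributes:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("LGAI-EXAONE/EXAONE-Deep-32B")

# Values below come from special_tokens_map.json.
print(tokenizer.bos_token)  # "[BOS]"
print(tokenizer.eos_token)  # "[|endofturn|]"
print(tokenizer.pad_token)  # "[PAD]"
print(tokenizer.unk_token)  # "[UNK]"

# tokenizer.eos_token_id is what generation passes as the stopping token,
# so decoding ends at the close of the assistant turn.
eos_id = tokenizer.eos_token_id
```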
207491
tokenizer.json Normal file

File diff suppressed because it is too large

3221
tokenizer_config.json Normal file

File diff suppressed because it is too large

1
vocab.json Normal file

File diff suppressed because one or more lines are too long