Initialize project; model provided by the ModelHub XC community
Model: daekeun-ml/phi-2-ko-v0.1 Source: Original Platform
.gitattributes (vendored, Normal file, 35 lines)
@@ -0,0 +1,35 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
README.md (Normal file, 159 lines)
@@ -0,0 +1,159 @@
---
library_name: transformers
license: cc-by-sa-3.0
datasets:
- wikimedia/wikipedia
- maywell/korean_textbooks
- nampdn-ai/tiny-codes
- Open-Orca/OpenOrca
language:
- ko
- en
inference: false
---

# phi-2-ko-v0.1

## Model Details
This model is a Korean-specific model built by continued training of phi-2 with an added Korean tokenizer and Korean data. (English is also available.)
Although phi-2 performs very well, it does not support Korean and its tokenizer was not trained on a Korean corpus, so tokenizing Korean text consumes many times more tokens than equivalent English text.

To overcome these limitations, I trained the model on an open-license Korean corpus together with some English corpus. The reasons for including the English corpus are as follows:

1. To preserve the strong performance of the base model by preventing catastrophic forgetting.
2. Mixing English and Korean prompts usually produces better results than using Korean-only prompts.

Since my role is not that of a working developer but of a solutions architect helping customers with quick PoCs/prototypes, and I was limited by the AWS GPU resources available, I trained on only 5GB of data rather than hundreds of GB of massive data.
### Vocab Expansion

| Model Name | Vocabulary Size | Description |
| --- | --- | --- |
| Original phi-2 | 50,295 | BBPE (Byte-level BPE) |
| **phi-2-ko** | 66,676 | BBPE; added Korean vocab and merges |

**Tokenizing "아마존 세이지메이커"**

| Model | # of tokens | Tokens |
| --- | --- | --- |
| Original phi-2 | 25 | `[168, 243, 226, 167, 100, 230, 168, 94, 112, 23821, 226, 116, 35975, 112, 168, 100, 222, 167, 102, 242, 35975, 112, 168, 119, 97]` |
| **phi-2-ko** | 6 | `[57974, 51299, 50617, 51005, 52027, 51446]` |
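The comparison above can be reproduced with a few lines. This is a minimal sketch; it assumes both repos are reachable on the Hugging Face Hub and that `trust_remote_code=True` is acceptable for this repo's custom code.

```python
from transformers import AutoTokenizer

text = "아마존 세이지메이커"  # "Amazon SageMaker" in Korean

# Base tokenizer (no Korean-specific vocab) vs. the expanded tokenizer in this repo
base_tok = AutoTokenizer.from_pretrained("microsoft/phi-2")
ko_tok = AutoTokenizer.from_pretrained("daekeun-ml/phi-2-ko-v0.1", trust_remote_code=True)

for name, tok in [("phi-2", base_tok), ("phi-2-ko", ko_tok)]:
    ids = tok(text, add_special_tokens=False)["input_ids"]
    print(f"{name}: {len(ids)} tokens -> {ids}")
```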
### Continued pre-training

The datasets used for training are listed below; a rough loading sketch follows the list. To prevent catastrophic forgetting, I included some English corpus in the training data.

- Wikipedia Korean dataset (https://huggingface.co/datasets/wikimedia/wikipedia)
- Massive Korean synthetic dataset (https://huggingface.co/datasets/maywell/korean_textbooks)
- Tiny code dataset (https://huggingface.co/datasets/nampdn-ai/tiny-codes)
- OpenOrca dataset (https://huggingface.co/datasets/Open-Orca/OpenOrca)
- Some of my own writing (personal blog, chat, etc.)
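The Korean/English mixing described above can be sketched with the `datasets` library. This is only an illustrative sketch: the Wikipedia config name, the OpenOrca column names, and the 80/20 sampling probabilities are assumptions, not the exact recipe used for this model.

```python
from datasets import load_dataset, interleave_datasets

# Korean corpus plus some English to limit catastrophic forgetting.
# Config/column names and sampling probabilities are illustrative assumptions.
ko_wiki = load_dataset("wikimedia/wikipedia", "20231101.ko", split="train")
en_orca = load_dataset("Open-Orca/OpenOrca", split="train")

mixed = interleave_datasets(
    [
        ko_wiki.select_columns(["text"]),
        en_orca.rename_column("response", "text").select_columns(["text"]),
    ],
    probabilities=[0.8, 0.2],  # oversample Korean
    seed=42,
)
print(mixed[0]["text"][:100])
```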
Note that performance is not guaranteed, since only a small number of datasets were used for the experiment. The training set contains only around 5 million samples after tokenization.
For distributed training, all weights were trained without adapter techniques, and sharded data parallelism was performed with ZeRO-2. The DeepSpeed preset is shown below.

Since this model has not been fine-tuned, it is recommended to perform fine-tuning (e.g., instruction tuning or alignment tuning) according to your use case.
```json
{
  "fp16": {
    "enabled": "auto",
    "loss_scale": 0,
    "loss_scale_window": 1000,
    "initial_scale_power": 16,
    "hysteresis": 2,
    "min_loss_scale": 1
  },
  "bf16": {
    "enabled": "auto"
  },
  "optimizer": {
    "type": "AdamW",
    "params": {
      "lr": "auto",
      "betas": "auto",
      "eps": "auto",
      "weight_decay": "auto"
    }
  },
  "scheduler": {
    "type": "WarmupLR",
    "params": {
      "warmup_min_lr": "auto",
      "warmup_max_lr": "auto",
      "warmup_num_steps": "auto"
    }
  },
  "zero_optimization": {
    "stage": 2,
    "allgather_partitions": true,
    "allgather_bucket_size": 2e8,
    "overlap_comm": true,
    "reduce_scatter": true,
    "reduce_bucket_size": 2e8,
    "contiguous_gradients": true,
    "cpu_offload": true
  },
  "gradient_accumulation_steps": "auto",
  "gradient_clipping": "auto",
  "train_batch_size": "auto",
  "train_micro_batch_size_per_gpu": "auto"
}
```
Some hyperparameters are listed below.
```
batch_size: 2
num_epochs: 1
learning_rate: 3e-4
gradient_accumulation_steps: 8
lr_scheduler_type: "linear"
group_by_length: False
```
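The `"auto"` fields in the preset above are filled in from the training arguments at launch time. With the hyperparameters listed here, each GPU processes an effective batch of 2 × 8 = 16 sequences per optimizer step (times the number of data-parallel ranks under ZeRO-2). A minimal sketch of wiring the two together with the Hugging Face `Trainer`; the file name `ds_config.json` and the output directory are illustrative assumptions:

```python
from transformers import TrainingArguments

# Sketch: the DeepSpeed JSON above saved as ds_config.json, plus the listed
# hyperparameters. Launch the script with the `deepspeed` launcher.
training_args = TrainingArguments(
    output_dir="phi-2-ko-cpt",            # assumed name
    per_device_train_batch_size=2,
    gradient_accumulation_steps=8,
    learning_rate=3e-4,
    num_train_epochs=1,
    lr_scheduler_type="linear",
    group_by_length=False,
    deepspeed="ds_config.json",            # hands the preset to DeepSpeed
)
```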
## How to Get Started with the Model
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

torch.set_default_device("cuda")

# Load model and tokenizer. trust_remote_code lets transformers use the
# configuration/modeling code shipped in this repo; recent transformers
# versions also include native Phi support.
model = AutoModelForCausalLM.from_pretrained("daekeun-ml/phi-2-ko-v0.1", torch_dtype="auto", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("daekeun-ml/phi-2-ko-v0.1", trust_remote_code=True)

# Korean
inputs = tokenizer("머신러닝은 ", return_tensors="pt", return_attention_mask=False)
outputs = model.generate(**inputs, max_length=200)
text = tokenizer.batch_decode(outputs)[0]
print(text)

# English
inputs = tokenizer('''def print_prime(n):
    """
    Print all primes between 1 and n
    """''', return_tensors="pt", return_attention_mask=False)
outputs = model.generate(**inputs, max_length=200)
text = tokenizer.batch_decode(outputs)[0]
print(text)
```
### References
- Base model: [microsoft/phi-2](https://huggingface.co/microsoft/phi-2)
## Notes

### License
cc-by-sa-3.0. The license of phi-2 is MIT, but I chose cc-by-sa-3.0 in consideration of the licensing of the datasets used for training.

### Caution
This model was created as a personal experiment, unrelated to the organization I work for. The model may not operate correctly because separate verification was not performed. Please be careful unless it is for personal experimentation or PoC (Proof of Concept)!
config.json (Normal file, 34 lines)
@@ -0,0 +1,34 @@
{
  "_name_or_path": "/home/ec2-user/SageMaker/phi-2-ko",
  "architectures": [
    "PhiForCausalLM"
  ],
  "attention_dropout": 0.0,
  "auto_map": {
    "AutoConfig": "configuration_phi.PhiConfig",
    "AutoModelForCausalLM": "modeling_phi.PhiForCausalLM"
  },
  "bos_token_id": 50256,
  "embd_pdrop": 0.0,
  "eos_token_id": 50256,
  "hidden_act": "gelu_new",
  "hidden_size": 2560,
  "initializer_range": 0.02,
  "intermediate_size": 10240,
  "layer_norm_eps": 1e-05,
  "max_position_embeddings": 2048,
  "model_type": "phi",
  "num_attention_heads": 32,
  "num_hidden_layers": 32,
  "num_key_value_heads": 32,
  "partial_rotary_factor": 0.4,
  "qk_layernorm": false,
  "resid_pdrop": 0.1,
  "rope_scaling": null,
  "rope_theta": 10000.0,
  "tie_word_embeddings": false,
  "torch_dtype": "bfloat16",
  "transformers_version": "4.37.2",
  "use_cache": false,
  "vocab_size": 66676
}
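To confirm these values programmatically, the config can be loaded from the Hub and inspected. A minimal sketch, assuming the repo id from the README:

```python
from transformers import AutoConfig

cfg = AutoConfig.from_pretrained("daekeun-ml/phi-2-ko-v0.1", trust_remote_code=True)
print(cfg.model_type, cfg.hidden_size, cfg.num_hidden_layers, cfg.vocab_size)
# expected: phi 2560 32 66676
```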
configuration_phi.py (Normal file, 193 lines)
@@ -0,0 +1,193 @@
# coding=utf-8
# Copyright 2023 Microsoft and the HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

""" Phi model configuration"""


from transformers.configuration_utils import PretrainedConfig
from transformers.utils import logging


logger = logging.get_logger(__name__)

PHI_PRETRAINED_CONFIG_ARCHIVE_MAP = {
    "microsoft/phi-2": "https://huggingface.co/microsoft/phi-2/resolve/main/config.json",
}


class PhiConfig(PretrainedConfig):
    r"""
    This is the configuration class to store the configuration of a [`PhiModel`]. It is used to instantiate a Phi
    model according to the specified arguments, defining the model architecture. Instantiating a configuration with
    the defaults will yield a similar configuration to that of the Phi
    [microsoft/phi-1](https://huggingface.co/microsoft/phi-1).

    Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
    documentation from [`PretrainedConfig`] for more information.

    Args:
        vocab_size (`int`, *optional*, defaults to 51200):
            Vocabulary size of the Phi model. Defines the number of different tokens that can be represented by the
            `inputs_ids` passed when calling [`PhiModel`].
        hidden_size (`int`, *optional*, defaults to 2048):
            Dimension of the hidden representations.
        intermediate_size (`int`, *optional*, defaults to 8192):
            Dimension of the MLP representations.
        num_hidden_layers (`int`, *optional*, defaults to 24):
            Number of hidden layers in the Transformer decoder.
        num_attention_heads (`int`, *optional*, defaults to 32):
            Number of attention heads for each attention layer in the Transformer decoder.
        num_key_value_heads (`int`, *optional*):
            This is the number of key_value heads that should be used to implement Grouped Query Attention. If
            `num_key_value_heads=num_attention_heads`, the model will use Multi Head Attention (MHA), if
            `num_key_value_heads=1` the model will use Multi Query Attention (MQA); otherwise GQA is used. When
            converting a multi-head checkpoint to a GQA checkpoint, each group key and value head should be
            constructed by mean-pooling all the original heads within that group. For more details, check out [this
            paper](https://arxiv.org/pdf/2305.13245.pdf). If it is not specified, it will default to
            `num_attention_heads`.
        resid_pdrop (`float`, *optional*, defaults to 0.0):
            Dropout probability for MLP outputs.
        embd_pdrop (`int`, *optional*, defaults to 0.0):
            The dropout ratio for the embeddings.
        attention_dropout (`float`, *optional*, defaults to 0.0):
            The dropout ratio after computing the attention scores.
        hidden_act (`str` or `function`, *optional*, defaults to `"gelu_new"`):
            The non-linear activation function (function or string) in the decoder.
        max_position_embeddings (`int`, *optional*, defaults to 2048):
            The maximum sequence length that this model might ever be used with. Phi-1 and Phi-1.5 support up to 2048
            tokens.
        initializer_range (`float`, *optional*, defaults to 0.02):
            The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
        layer_norm_eps (`float`, *optional*, defaults to 1e-05):
            The epsilon used by the layer normalization layers.
        use_cache (`bool`, *optional*, defaults to `True`):
            Whether or not the model should return the last key/values attentions (not used by all models). Only
            relevant if `config.is_decoder=True`.
        tie_word_embeddings (`bool`, *optional*, defaults to `False`):
            Whether to tie weight embeddings.
        rope_theta (`float`, *optional*, defaults to 10000.0):
            The base period of the RoPE embeddings.
        rope_scaling (`Dict`, *optional*):
            Dictionary containing the scaling configuration for the RoPE embeddings. Currently supports two scaling
            strategies: linear and dynamic. Their scaling factor must be a float greater than 1. The expected format
            is `{"type": strategy name, "factor": scaling factor}`. When using this flag, don't update
            `max_position_embeddings` to the expected new maximum. See the following thread for more information on
            how these scaling strategies behave:
            https://www.reddit.com/r/LocalPersimmon/comments/14mrgpr/dynamically_scaled_rope_further_increases/. This
            is an experimental feature, subject to breaking API changes in future versions.
        partial_rotary_factor (`float`, *optional*, defaults to 0.5):
            Percentage of the query and keys which will have rotary embedding.
        qk_layernorm (`bool`, *optional*, defaults to `False`):
            Whether or not to normalize the Queries and Keys after projecting the hidden states.
        bos_token_id (`int`, *optional*, defaults to 1):
            Denotes the beginning-of-sequence token id.
        eos_token_id (`int`, *optional*, defaults to 2):
            Denotes the end-of-sequence token id.

    Example:

    ```python
    >>> from transformers import PhiModel, PhiConfig

    >>> # Initializing a Phi-1 style configuration
    >>> configuration = PhiConfig.from_pretrained("microsoft/phi-1")

    >>> # Initializing a model from the configuration
    >>> model = PhiModel(configuration)

    >>> # Accessing the model configuration
    >>> configuration = model.config
    ```"""

    model_type = "phi"
    keys_to_ignore_at_inference = ["past_key_values"]

    def __init__(
        self,
        vocab_size=51200,
        hidden_size=2048,
        intermediate_size=8192,
        num_hidden_layers=24,
        num_attention_heads=32,
        num_key_value_heads=None,
        resid_pdrop=0.0,
        embd_pdrop=0.0,
        attention_dropout=0.0,
        hidden_act="gelu_new",
        max_position_embeddings=2048,
        initializer_range=0.02,
        layer_norm_eps=1e-5,
        use_cache=True,
        tie_word_embeddings=False,
        rope_theta=10000.0,
        rope_scaling=None,
        partial_rotary_factor=0.5,
        qk_layernorm=False,
        bos_token_id=1,
        eos_token_id=2,
        **kwargs,
    ):
        self.vocab_size = vocab_size
        self.hidden_size = hidden_size
        self.intermediate_size = intermediate_size
        self.num_hidden_layers = num_hidden_layers
        self.num_attention_heads = num_attention_heads

        if num_key_value_heads is None:
            num_key_value_heads = num_attention_heads

        self.num_key_value_heads = num_key_value_heads
        self.resid_pdrop = resid_pdrop
        self.embd_pdrop = embd_pdrop
        self.attention_dropout = attention_dropout
        self.hidden_act = hidden_act
        self.max_position_embeddings = max_position_embeddings
        self.initializer_range = initializer_range
        self.layer_norm_eps = layer_norm_eps
        self.use_cache = use_cache
        self.rope_theta = rope_theta
        self.rope_scaling = rope_scaling
        self.partial_rotary_factor = partial_rotary_factor
        self.qk_layernorm = qk_layernorm
        self._rope_scaling_validation()

        super().__init__(
            bos_token_id=bos_token_id,
            eos_token_id=eos_token_id,
            tie_word_embeddings=tie_word_embeddings,
            **kwargs,
        )

    # Copied from transformers.models.llama.configuration_llama.LlamaConfig._rope_scaling_validation
    def _rope_scaling_validation(self):
        """
        Validate the `rope_scaling` configuration.
        """
        if self.rope_scaling is None:
            return

        if not isinstance(self.rope_scaling, dict) or len(self.rope_scaling) != 2:
            raise ValueError(
                "`rope_scaling` must be a dictionary with two fields, `type` and `factor`, "
                f"got {self.rope_scaling}"
            )
        rope_scaling_type = self.rope_scaling.get("type", None)
        rope_scaling_factor = self.rope_scaling.get("factor", None)
        if rope_scaling_type is None or rope_scaling_type not in ["linear", "dynamic"]:
            raise ValueError(
                f"`rope_scaling`'s type field must be one of ['linear', 'dynamic'], got {rope_scaling_type}"
            )
        if rope_scaling_factor is None or not isinstance(rope_scaling_factor, float) or rope_scaling_factor <= 1.0:
            raise ValueError(f"`rope_scaling`'s factor field must be a float > 1, got {rope_scaling_factor}")
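Tying this file back to the repo's config.json above: instantiating `PhiConfig` with that file's overrides should reproduce the checkpoint's architecture. A minimal sketch, assuming this file is importable from the working directory (values copied from config.json):

```python
from configuration_phi import PhiConfig  # assumed import path

cfg = PhiConfig(
    vocab_size=66676,            # expanded Korean vocab
    hidden_size=2560,
    intermediate_size=10240,
    num_hidden_layers=32,
    num_attention_heads=32,
    partial_rotary_factor=0.4,
    bos_token_id=50256,
    eos_token_id=50256,
)
print(cfg.num_key_value_heads)   # not set explicitly, so it falls back to num_attention_heads: 32
```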
generation_config.json (Normal file, 6 lines)
@@ -0,0 +1,6 @@
{
  "_from_model_config": true,
  "bos_token_id": 50256,
  "eos_token_id": 50256,
  "transformers_version": "4.37.2"
}
merges.txt (Normal file, 66530 lines)
File diff suppressed because it is too large
model-00001-of-00002.safetensors (Normal file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8dbe878142640c197bf05fff86dc42131e0a0daba45412ec98d3f6603a5ebd00
size 4956815344
model-00002-of-00002.safetensors (Normal file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:944464dc9e8ea4ad06e9c4ebf36dabb8f0f56250bd7ffffe05feec675f2e9943
size 761107696
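These two entries are Git LFS pointer files, not the weights themselves; the `oid` is the SHA-256 of the real shard. After downloading the actual safetensors files, their integrity can be verified against the recorded oid. A minimal sketch, assuming the shard sits in the current directory:

```python
import hashlib

# Stream the shard in 1 MiB chunks to avoid loading ~5 GB into memory
h = hashlib.sha256()
with open("model-00001-of-00002.safetensors", "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):
        h.update(chunk)
print(h.hexdigest())  # should equal the oid above: 8dbe8781...
```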
model.safetensors.index.json (Normal file, 460 lines)
@@ -0,0 +1,460 @@
{
  "metadata": {
    "total_size": 5717872872
  },
  "weight_map": {
    "lm_head.bias": "model-00002-of-00002.safetensors",
    "lm_head.weight": "model-00002-of-00002.safetensors",
    "model.embed_tokens.weight": "model-00001-of-00002.safetensors",
    "model.final_layernorm.bias": "model-00002-of-00002.safetensors",
    "model.final_layernorm.weight": "model-00002-of-00002.safetensors",
    "model.layers.0.input_layernorm.bias": "model-00001-of-00002.safetensors",
    "model.layers.0.input_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.0.mlp.fc1.bias": "model-00001-of-00002.safetensors",
    "model.layers.0.mlp.fc1.weight": "model-00001-of-00002.safetensors",
    "model.layers.0.mlp.fc2.bias": "model-00001-of-00002.safetensors",
    "model.layers.0.mlp.fc2.weight": "model-00001-of-00002.safetensors",
    "model.layers.0.self_attn.dense.bias": "model-00001-of-00002.safetensors",
    "model.layers.0.self_attn.dense.weight": "model-00001-of-00002.safetensors",
    "model.layers.0.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
    "model.layers.0.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.0.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
    "model.layers.0.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.0.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
    "model.layers.0.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.1.input_layernorm.bias": "model-00001-of-00002.safetensors",
    "model.layers.1.input_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.1.mlp.fc1.bias": "model-00001-of-00002.safetensors",
    "model.layers.1.mlp.fc1.weight": "model-00001-of-00002.safetensors",
    "model.layers.1.mlp.fc2.bias": "model-00001-of-00002.safetensors",
    "model.layers.1.mlp.fc2.weight": "model-00001-of-00002.safetensors",
    "model.layers.1.self_attn.dense.bias": "model-00001-of-00002.safetensors",
    "model.layers.1.self_attn.dense.weight": "model-00001-of-00002.safetensors",
    "model.layers.1.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
    "model.layers.1.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.1.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
    "model.layers.1.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.1.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
    "model.layers.1.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.10.input_layernorm.bias": "model-00001-of-00002.safetensors",
    "model.layers.10.input_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.10.mlp.fc1.bias": "model-00001-of-00002.safetensors",
    "model.layers.10.mlp.fc1.weight": "model-00001-of-00002.safetensors",
    "model.layers.10.mlp.fc2.bias": "model-00001-of-00002.safetensors",
    "model.layers.10.mlp.fc2.weight": "model-00001-of-00002.safetensors",
    "model.layers.10.self_attn.dense.bias": "model-00001-of-00002.safetensors",
    "model.layers.10.self_attn.dense.weight": "model-00001-of-00002.safetensors",
    "model.layers.10.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
    "model.layers.10.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.10.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
    "model.layers.10.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.10.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
    "model.layers.10.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.11.input_layernorm.bias": "model-00001-of-00002.safetensors",
    "model.layers.11.input_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.11.mlp.fc1.bias": "model-00001-of-00002.safetensors",
    "model.layers.11.mlp.fc1.weight": "model-00001-of-00002.safetensors",
    "model.layers.11.mlp.fc2.bias": "model-00001-of-00002.safetensors",
    "model.layers.11.mlp.fc2.weight": "model-00001-of-00002.safetensors",
    "model.layers.11.self_attn.dense.bias": "model-00001-of-00002.safetensors",
    "model.layers.11.self_attn.dense.weight": "model-00001-of-00002.safetensors",
    "model.layers.11.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
    "model.layers.11.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.11.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
    "model.layers.11.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.11.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
    "model.layers.11.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.12.input_layernorm.bias": "model-00001-of-00002.safetensors",
    "model.layers.12.input_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.12.mlp.fc1.bias": "model-00001-of-00002.safetensors",
    "model.layers.12.mlp.fc1.weight": "model-00001-of-00002.safetensors",
    "model.layers.12.mlp.fc2.bias": "model-00001-of-00002.safetensors",
    "model.layers.12.mlp.fc2.weight": "model-00001-of-00002.safetensors",
    "model.layers.12.self_attn.dense.bias": "model-00001-of-00002.safetensors",
    "model.layers.12.self_attn.dense.weight": "model-00001-of-00002.safetensors",
    "model.layers.12.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
    "model.layers.12.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.12.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
    "model.layers.12.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.12.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
    "model.layers.12.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.13.input_layernorm.bias": "model-00001-of-00002.safetensors",
    "model.layers.13.input_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.13.mlp.fc1.bias": "model-00001-of-00002.safetensors",
    "model.layers.13.mlp.fc1.weight": "model-00001-of-00002.safetensors",
    "model.layers.13.mlp.fc2.bias": "model-00001-of-00002.safetensors",
    "model.layers.13.mlp.fc2.weight": "model-00001-of-00002.safetensors",
    "model.layers.13.self_attn.dense.bias": "model-00001-of-00002.safetensors",
    "model.layers.13.self_attn.dense.weight": "model-00001-of-00002.safetensors",
    "model.layers.13.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
    "model.layers.13.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.13.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
    "model.layers.13.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.13.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
    "model.layers.13.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.14.input_layernorm.bias": "model-00001-of-00002.safetensors",
    "model.layers.14.input_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.14.mlp.fc1.bias": "model-00001-of-00002.safetensors",
    "model.layers.14.mlp.fc1.weight": "model-00001-of-00002.safetensors",
    "model.layers.14.mlp.fc2.bias": "model-00001-of-00002.safetensors",
    "model.layers.14.mlp.fc2.weight": "model-00001-of-00002.safetensors",
    "model.layers.14.self_attn.dense.bias": "model-00001-of-00002.safetensors",
    "model.layers.14.self_attn.dense.weight": "model-00001-of-00002.safetensors",
    "model.layers.14.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
    "model.layers.14.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.14.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
    "model.layers.14.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.14.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
    "model.layers.14.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.15.input_layernorm.bias": "model-00001-of-00002.safetensors",
    "model.layers.15.input_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.15.mlp.fc1.bias": "model-00001-of-00002.safetensors",
    "model.layers.15.mlp.fc1.weight": "model-00001-of-00002.safetensors",
    "model.layers.15.mlp.fc2.bias": "model-00001-of-00002.safetensors",
    "model.layers.15.mlp.fc2.weight": "model-00001-of-00002.safetensors",
    "model.layers.15.self_attn.dense.bias": "model-00001-of-00002.safetensors",
    "model.layers.15.self_attn.dense.weight": "model-00001-of-00002.safetensors",
    "model.layers.15.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
    "model.layers.15.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.15.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
    "model.layers.15.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.15.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
    "model.layers.15.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.16.input_layernorm.bias": "model-00001-of-00002.safetensors",
    "model.layers.16.input_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.16.mlp.fc1.bias": "model-00001-of-00002.safetensors",
    "model.layers.16.mlp.fc1.weight": "model-00001-of-00002.safetensors",
    "model.layers.16.mlp.fc2.bias": "model-00001-of-00002.safetensors",
    "model.layers.16.mlp.fc2.weight": "model-00001-of-00002.safetensors",
    "model.layers.16.self_attn.dense.bias": "model-00001-of-00002.safetensors",
    "model.layers.16.self_attn.dense.weight": "model-00001-of-00002.safetensors",
    "model.layers.16.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
    "model.layers.16.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.16.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
    "model.layers.16.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.16.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
    "model.layers.16.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.17.input_layernorm.bias": "model-00001-of-00002.safetensors",
    "model.layers.17.input_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.17.mlp.fc1.bias": "model-00001-of-00002.safetensors",
    "model.layers.17.mlp.fc1.weight": "model-00001-of-00002.safetensors",
    "model.layers.17.mlp.fc2.bias": "model-00001-of-00002.safetensors",
    "model.layers.17.mlp.fc2.weight": "model-00001-of-00002.safetensors",
    "model.layers.17.self_attn.dense.bias": "model-00001-of-00002.safetensors",
    "model.layers.17.self_attn.dense.weight": "model-00001-of-00002.safetensors",
    "model.layers.17.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
    "model.layers.17.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.17.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
    "model.layers.17.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.17.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
    "model.layers.17.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.18.input_layernorm.bias": "model-00001-of-00002.safetensors",
    "model.layers.18.input_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.18.mlp.fc1.bias": "model-00001-of-00002.safetensors",
    "model.layers.18.mlp.fc1.weight": "model-00001-of-00002.safetensors",
    "model.layers.18.mlp.fc2.bias": "model-00001-of-00002.safetensors",
    "model.layers.18.mlp.fc2.weight": "model-00001-of-00002.safetensors",
    "model.layers.18.self_attn.dense.bias": "model-00001-of-00002.safetensors",
    "model.layers.18.self_attn.dense.weight": "model-00001-of-00002.safetensors",
    "model.layers.18.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
    "model.layers.18.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.18.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
    "model.layers.18.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.18.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
    "model.layers.18.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.19.input_layernorm.bias": "model-00001-of-00002.safetensors",
    "model.layers.19.input_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.19.mlp.fc1.bias": "model-00001-of-00002.safetensors",
    "model.layers.19.mlp.fc1.weight": "model-00001-of-00002.safetensors",
    "model.layers.19.mlp.fc2.bias": "model-00001-of-00002.safetensors",
    "model.layers.19.mlp.fc2.weight": "model-00001-of-00002.safetensors",
    "model.layers.19.self_attn.dense.bias": "model-00001-of-00002.safetensors",
    "model.layers.19.self_attn.dense.weight": "model-00001-of-00002.safetensors",
    "model.layers.19.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
    "model.layers.19.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.19.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
    "model.layers.19.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.19.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
    "model.layers.19.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.2.input_layernorm.bias": "model-00001-of-00002.safetensors",
    "model.layers.2.input_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.2.mlp.fc1.bias": "model-00001-of-00002.safetensors",
    "model.layers.2.mlp.fc1.weight": "model-00001-of-00002.safetensors",
    "model.layers.2.mlp.fc2.bias": "model-00001-of-00002.safetensors",
    "model.layers.2.mlp.fc2.weight": "model-00001-of-00002.safetensors",
    "model.layers.2.self_attn.dense.bias": "model-00001-of-00002.safetensors",
    "model.layers.2.self_attn.dense.weight": "model-00001-of-00002.safetensors",
    "model.layers.2.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
    "model.layers.2.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.2.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
    "model.layers.2.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.2.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
    "model.layers.2.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.20.input_layernorm.bias": "model-00001-of-00002.safetensors",
    "model.layers.20.input_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.20.mlp.fc1.bias": "model-00001-of-00002.safetensors",
    "model.layers.20.mlp.fc1.weight": "model-00001-of-00002.safetensors",
    "model.layers.20.mlp.fc2.bias": "model-00001-of-00002.safetensors",
    "model.layers.20.mlp.fc2.weight": "model-00001-of-00002.safetensors",
    "model.layers.20.self_attn.dense.bias": "model-00001-of-00002.safetensors",
    "model.layers.20.self_attn.dense.weight": "model-00001-of-00002.safetensors",
    "model.layers.20.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
    "model.layers.20.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.20.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
    "model.layers.20.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.20.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
    "model.layers.20.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.21.input_layernorm.bias": "model-00001-of-00002.safetensors",
    "model.layers.21.input_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.21.mlp.fc1.bias": "model-00001-of-00002.safetensors",
    "model.layers.21.mlp.fc1.weight": "model-00001-of-00002.safetensors",
    "model.layers.21.mlp.fc2.bias": "model-00001-of-00002.safetensors",
    "model.layers.21.mlp.fc2.weight": "model-00001-of-00002.safetensors",
    "model.layers.21.self_attn.dense.bias": "model-00001-of-00002.safetensors",
    "model.layers.21.self_attn.dense.weight": "model-00001-of-00002.safetensors",
    "model.layers.21.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
    "model.layers.21.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.21.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
    "model.layers.21.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.21.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
    "model.layers.21.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.22.input_layernorm.bias": "model-00001-of-00002.safetensors",
    "model.layers.22.input_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.22.mlp.fc1.bias": "model-00001-of-00002.safetensors",
    "model.layers.22.mlp.fc1.weight": "model-00001-of-00002.safetensors",
    "model.layers.22.mlp.fc2.bias": "model-00001-of-00002.safetensors",
    "model.layers.22.mlp.fc2.weight": "model-00001-of-00002.safetensors",
    "model.layers.22.self_attn.dense.bias": "model-00001-of-00002.safetensors",
    "model.layers.22.self_attn.dense.weight": "model-00001-of-00002.safetensors",
    "model.layers.22.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
    "model.layers.22.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.22.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
    "model.layers.22.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.22.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
    "model.layers.22.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.23.input_layernorm.bias": "model-00001-of-00002.safetensors",
    "model.layers.23.input_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.23.mlp.fc1.bias": "model-00001-of-00002.safetensors",
    "model.layers.23.mlp.fc1.weight": "model-00001-of-00002.safetensors",
    "model.layers.23.mlp.fc2.bias": "model-00001-of-00002.safetensors",
    "model.layers.23.mlp.fc2.weight": "model-00001-of-00002.safetensors",
    "model.layers.23.self_attn.dense.bias": "model-00001-of-00002.safetensors",
    "model.layers.23.self_attn.dense.weight": "model-00001-of-00002.safetensors",
    "model.layers.23.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
    "model.layers.23.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.23.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
    "model.layers.23.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.23.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
    "model.layers.23.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.24.input_layernorm.bias": "model-00001-of-00002.safetensors",
    "model.layers.24.input_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.24.mlp.fc1.bias": "model-00001-of-00002.safetensors",
    "model.layers.24.mlp.fc1.weight": "model-00001-of-00002.safetensors",
    "model.layers.24.mlp.fc2.bias": "model-00001-of-00002.safetensors",
    "model.layers.24.mlp.fc2.weight": "model-00001-of-00002.safetensors",
    "model.layers.24.self_attn.dense.bias": "model-00001-of-00002.safetensors",
    "model.layers.24.self_attn.dense.weight": "model-00001-of-00002.safetensors",
    "model.layers.24.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
    "model.layers.24.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.24.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
    "model.layers.24.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.24.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
    "model.layers.24.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.25.input_layernorm.bias": "model-00001-of-00002.safetensors",
    "model.layers.25.input_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.25.mlp.fc1.bias": "model-00001-of-00002.safetensors",
    "model.layers.25.mlp.fc1.weight": "model-00001-of-00002.safetensors",
    "model.layers.25.mlp.fc2.bias": "model-00001-of-00002.safetensors",
    "model.layers.25.mlp.fc2.weight": "model-00001-of-00002.safetensors",
    "model.layers.25.self_attn.dense.bias": "model-00001-of-00002.safetensors",
    "model.layers.25.self_attn.dense.weight": "model-00001-of-00002.safetensors",
    "model.layers.25.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
    "model.layers.25.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.25.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
    "model.layers.25.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.25.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
    "model.layers.25.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.26.input_layernorm.bias": "model-00001-of-00002.safetensors",
    "model.layers.26.input_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.26.mlp.fc1.bias": "model-00001-of-00002.safetensors",
    "model.layers.26.mlp.fc1.weight": "model-00001-of-00002.safetensors",
    "model.layers.26.mlp.fc2.bias": "model-00001-of-00002.safetensors",
    "model.layers.26.mlp.fc2.weight": "model-00001-of-00002.safetensors",
    "model.layers.26.self_attn.dense.bias": "model-00001-of-00002.safetensors",
    "model.layers.26.self_attn.dense.weight": "model-00001-of-00002.safetensors",
    "model.layers.26.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
    "model.layers.26.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.26.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
    "model.layers.26.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.26.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
    "model.layers.26.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.27.input_layernorm.bias": "model-00001-of-00002.safetensors",
    "model.layers.27.input_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.27.mlp.fc1.bias": "model-00001-of-00002.safetensors",
    "model.layers.27.mlp.fc1.weight": "model-00001-of-00002.safetensors",
    "model.layers.27.mlp.fc2.bias": "model-00001-of-00002.safetensors",
    "model.layers.27.mlp.fc2.weight": "model-00001-of-00002.safetensors",
    "model.layers.27.self_attn.dense.bias": "model-00001-of-00002.safetensors",
    "model.layers.27.self_attn.dense.weight": "model-00001-of-00002.safetensors",
    "model.layers.27.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
    "model.layers.27.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.27.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
    "model.layers.27.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.27.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
    "model.layers.27.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.28.input_layernorm.bias": "model-00001-of-00002.safetensors",
    "model.layers.28.input_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.28.mlp.fc1.bias": "model-00001-of-00002.safetensors",
    "model.layers.28.mlp.fc1.weight": "model-00001-of-00002.safetensors",
    "model.layers.28.mlp.fc2.bias": "model-00001-of-00002.safetensors",
    "model.layers.28.mlp.fc2.weight": "model-00001-of-00002.safetensors",
    "model.layers.28.self_attn.dense.bias": "model-00001-of-00002.safetensors",
    "model.layers.28.self_attn.dense.weight": "model-00001-of-00002.safetensors",
    "model.layers.28.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
    "model.layers.28.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.28.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
    "model.layers.28.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.28.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
    "model.layers.28.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.29.input_layernorm.bias": "model-00002-of-00002.safetensors",
    "model.layers.29.input_layernorm.weight": "model-00002-of-00002.safetensors",
    "model.layers.29.mlp.fc1.bias": "model-00002-of-00002.safetensors",
    "model.layers.29.mlp.fc1.weight": "model-00002-of-00002.safetensors",
    "model.layers.29.mlp.fc2.bias": "model-00002-of-00002.safetensors",
    "model.layers.29.mlp.fc2.weight": "model-00002-of-00002.safetensors",
    "model.layers.29.self_attn.dense.bias": "model-00001-of-00002.safetensors",
    "model.layers.29.self_attn.dense.weight": "model-00001-of-00002.safetensors",
    "model.layers.29.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
    "model.layers.29.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.29.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
    "model.layers.29.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.29.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
    "model.layers.29.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.3.input_layernorm.bias": "model-00001-of-00002.safetensors",
    "model.layers.3.input_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.3.mlp.fc1.bias": "model-00001-of-00002.safetensors",
    "model.layers.3.mlp.fc1.weight": "model-00001-of-00002.safetensors",
    "model.layers.3.mlp.fc2.bias": "model-00001-of-00002.safetensors",
    "model.layers.3.mlp.fc2.weight": "model-00001-of-00002.safetensors",
    "model.layers.3.self_attn.dense.bias": "model-00001-of-00002.safetensors",
    "model.layers.3.self_attn.dense.weight": "model-00001-of-00002.safetensors",
    "model.layers.3.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
    "model.layers.3.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.3.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
    "model.layers.3.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.3.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
    "model.layers.3.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.30.input_layernorm.bias": "model-00002-of-00002.safetensors",
    "model.layers.30.input_layernorm.weight": "model-00002-of-00002.safetensors",
    "model.layers.30.mlp.fc1.bias": "model-00002-of-00002.safetensors",
    "model.layers.30.mlp.fc1.weight": "model-00002-of-00002.safetensors",
    "model.layers.30.mlp.fc2.bias": "model-00002-of-00002.safetensors",
    "model.layers.30.mlp.fc2.weight": "model-00002-of-00002.safetensors",
    "model.layers.30.self_attn.dense.bias": "model-00002-of-00002.safetensors",
    "model.layers.30.self_attn.dense.weight": "model-00002-of-00002.safetensors",
    "model.layers.30.self_attn.k_proj.bias": "model-00002-of-00002.safetensors",
    "model.layers.30.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.30.self_attn.q_proj.bias": "model-00002-of-00002.safetensors",
    "model.layers.30.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.30.self_attn.v_proj.bias": "model-00002-of-00002.safetensors",
    "model.layers.30.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.31.input_layernorm.bias": "model-00002-of-00002.safetensors",
    "model.layers.31.input_layernorm.weight": "model-00002-of-00002.safetensors",
    "model.layers.31.mlp.fc1.bias": "model-00002-of-00002.safetensors",
    "model.layers.31.mlp.fc1.weight": "model-00002-of-00002.safetensors",
    "model.layers.31.mlp.fc2.bias": "model-00002-of-00002.safetensors",
    "model.layers.31.mlp.fc2.weight": "model-00002-of-00002.safetensors",
    "model.layers.31.self_attn.dense.bias": "model-00002-of-00002.safetensors",
    "model.layers.31.self_attn.dense.weight": "model-00002-of-00002.safetensors",
    "model.layers.31.self_attn.k_proj.bias": "model-00002-of-00002.safetensors",
    "model.layers.31.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.31.self_attn.q_proj.bias": "model-00002-of-00002.safetensors",
    "model.layers.31.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.31.self_attn.v_proj.bias": "model-00002-of-00002.safetensors",
    "model.layers.31.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.4.input_layernorm.bias": "model-00001-of-00002.safetensors",
    "model.layers.4.input_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.4.mlp.fc1.bias": "model-00001-of-00002.safetensors",
    "model.layers.4.mlp.fc1.weight": "model-00001-of-00002.safetensors",
    "model.layers.4.mlp.fc2.bias": "model-00001-of-00002.safetensors",
    "model.layers.4.mlp.fc2.weight": "model-00001-of-00002.safetensors",
    "model.layers.4.self_attn.dense.bias": "model-00001-of-00002.safetensors",
    "model.layers.4.self_attn.dense.weight": "model-00001-of-00002.safetensors",
    "model.layers.4.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
    "model.layers.4.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.4.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
    "model.layers.4.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.4.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
    "model.layers.4.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.5.input_layernorm.bias": "model-00001-of-00002.safetensors",
    "model.layers.5.input_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.5.mlp.fc1.bias": "model-00001-of-00002.safetensors",
    "model.layers.5.mlp.fc1.weight": "model-00001-of-00002.safetensors",
    "model.layers.5.mlp.fc2.bias": "model-00001-of-00002.safetensors",
    "model.layers.5.mlp.fc2.weight": "model-00001-of-00002.safetensors",
    "model.layers.5.self_attn.dense.bias": "model-00001-of-00002.safetensors",
    "model.layers.5.self_attn.dense.weight": "model-00001-of-00002.safetensors",
    "model.layers.5.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
    "model.layers.5.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.5.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
    "model.layers.5.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.5.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
    "model.layers.5.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.6.input_layernorm.bias": "model-00001-of-00002.safetensors",
    "model.layers.6.input_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.6.mlp.fc1.bias": "model-00001-of-00002.safetensors",
    "model.layers.6.mlp.fc1.weight": "model-00001-of-00002.safetensors",
    "model.layers.6.mlp.fc2.bias": "model-00001-of-00002.safetensors",
    "model.layers.6.mlp.fc2.weight": "model-00001-of-00002.safetensors",
    "model.layers.6.self_attn.dense.bias": "model-00001-of-00002.safetensors",
    "model.layers.6.self_attn.dense.weight": "model-00001-of-00002.safetensors",
    "model.layers.6.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
    "model.layers.6.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.6.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
    "model.layers.6.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.6.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
    "model.layers.6.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.7.input_layernorm.bias": "model-00001-of-00002.safetensors",
    "model.layers.7.input_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.7.mlp.fc1.bias": "model-00001-of-00002.safetensors",
    "model.layers.7.mlp.fc1.weight": "model-00001-of-00002.safetensors",
    "model.layers.7.mlp.fc2.bias": "model-00001-of-00002.safetensors",
    "model.layers.7.mlp.fc2.weight": "model-00001-of-00002.safetensors",
    "model.layers.7.self_attn.dense.bias": "model-00001-of-00002.safetensors",
    "model.layers.7.self_attn.dense.weight": "model-00001-of-00002.safetensors",
    "model.layers.7.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
    "model.layers.7.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.7.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
    "model.layers.7.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.7.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
    "model.layers.7.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.8.input_layernorm.bias": "model-00001-of-00002.safetensors",
    "model.layers.8.input_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.8.mlp.fc1.bias": "model-00001-of-00002.safetensors",
    "model.layers.8.mlp.fc1.weight": "model-00001-of-00002.safetensors",
    "model.layers.8.mlp.fc2.bias": "model-00001-of-00002.safetensors",
    "model.layers.8.mlp.fc2.weight": "model-00001-of-00002.safetensors",
    "model.layers.8.self_attn.dense.bias": "model-00001-of-00002.safetensors",
    "model.layers.8.self_attn.dense.weight": "model-00001-of-00002.safetensors",
    "model.layers.8.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
    "model.layers.8.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.8.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
    "model.layers.8.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.8.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
    "model.layers.8.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.9.input_layernorm.bias": "model-00001-of-00002.safetensors",
    "model.layers.9.input_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.9.mlp.fc1.bias": "model-00001-of-00002.safetensors",
    "model.layers.9.mlp.fc1.weight": "model-00001-of-00002.safetensors",
    "model.layers.9.mlp.fc2.bias": "model-00001-of-00002.safetensors",
    "model.layers.9.mlp.fc2.weight": "model-00001-of-00002.safetensors",
    "model.layers.9.self_attn.dense.bias": "model-00001-of-00002.safetensors",
    "model.layers.9.self_attn.dense.weight": "model-00001-of-00002.safetensors",
    "model.layers.9.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
    "model.layers.9.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.9.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
    "model.layers.9.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.9.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
    "model.layers.9.self_attn.v_proj.weight": "model-00001-of-00002.safetensors"
  }
}
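This index is what `from_pretrained` consults to pull each tensor from the correct shard. A small sketch to inspect the mapping and declared size locally, assuming the index file sits in the current directory:

```python
import json
from collections import Counter

with open("model.safetensors.index.json") as f:
    index = json.load(f)

# Count how many tensors live in each shard
print(Counter(index["weight_map"].values()))
print("declared total size:", index["metadata"]["total_size"], "bytes")
```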
modeling_phi.py (Normal file, 1369 lines)
File diff suppressed because it is too large
special_tokens_map.json (Normal file, 24 lines)
@@ -0,0 +1,24 @@
{
  "bos_token": {
    "content": "<|endoftext|>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "eos_token": {
    "content": "<|endoftext|>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "pad_token": "!",
  "unk_token": {
    "content": "<|endoftext|>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  }
}
tokenizer.json (Normal file, 133262 lines)
File diff suppressed because it is too large
tokenizer_config.json (Normal file, 28 lines)
@@ -0,0 +1,28 @@
{
  "add_prefix_space": false,
  "added_tokens_decoder": {
    "0": {
      "content": "!",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "50256": {
      "content": "<|endoftext|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    }
  },
  "bos_token": "<|endoftext|>",
  "clean_up_tokenization_spaces": true,
  "eos_token": "<|endoftext|>",
  "model_max_length": 2048,
  "pad_token": "!",
  "tokenizer_class": "CodeGenTokenizer",
  "unk_token": "<|endoftext|>"
}
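One quirk worth knowing from the two tokenizer configs above: the pad token is the literal `"!"` character (token id 0) rather than a dedicated `<pad>` token, so padded batches should rely on the attention mask. A quick check, assuming the repo id from the README:

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("daekeun-ml/phi-2-ko-v0.1", trust_remote_code=True)
print(tok.pad_token, tok.pad_token_id)  # expected: ! 0
print(tok.eos_token, tok.eos_token_id)  # expected: <|endoftext|> 50256
```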
vocab.json (Normal file, 1 line)
File diff suppressed because one or more lines are too long