Initialize the project; model provided by the ModelHub XC community
Model: BramVanroy/GEITje-7B-ultra-sft
Source: Original Platform
35
.gitattributes
vendored
Normal file
@@ -0,0 +1,35 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
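These are standard gitattributes glob patterns that route matching files through Git LFS. A rough sketch of which filenames they catch, using Python's `fnmatch` (which approximates gitattributes matching for simple suffix patterns, though not for path patterns like `saved_model/**/*`):

```python
from fnmatch import fnmatch

# A few of the suffix patterns from the .gitattributes above.
lfs_patterns = ["*.safetensors", "*.bin", "*.gz", "*tfevents*"]

def routed_through_lfs(filename: str) -> bool:
    """True if any LFS pattern matches the file name."""
    return any(fnmatch(filename, pattern) for pattern in lfs_patterns)

print(routed_through_lfs("model-00001-of-00003.safetensors"))  # → True
print(routed_through_lfs("config.json"))                       # → False
```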
191
README.md
Normal file
@@ -0,0 +1,191 @@
---
license: cc-by-nc-4.0
base_model: Rijgersberg/GEITje-7B
tags:
- alignment-handbook
- trl
- sft
- geitje
- conversational
datasets:
- BramVanroy/ultrachat_200k_dutch
- BramVanroy/stackoverflow-chat-dutch
- BramVanroy/alpaca-cleaned-dutch
- BramVanroy/dolly-15k-dutch
- BramVanroy/no_robots_dutch
model-index:
- name: GEITje-ultra-sft
  results: []
pipeline_tag: text-generation
language:
- nl
---

# GEITje-ultra-sft

This model is a fine-tuned version of [Rijgersberg/GEITje-7B](https://huggingface.co/Rijgersberg/GEITje-7B) on a number of synthetic datasets including gpt-3.5-turbo and gpt-4-turbo data, multi- and single-turn conversations, and code. The training set consists of around 240M tokens. The model was trained with a context length of 8192.

> [!WARNING]
> Note that this model has not been aligned with DPO or other techniques. In practice, it is therefore recommended to use the [DPO variant](https://huggingface.co/BramVanroy/GEITje-7B-ultra) of this model.

## Citation

If you use GEITje 7B Ultra (SFT) or any of its derivatives or quantizations, please cite the following paper:

```bibtex
@misc{vanroy2024geitje7bultraconversational,
      title={GEITje 7B Ultra: A Conversational Model for Dutch},
      author={Bram Vanroy},
      year={2024},
      eprint={2412.04092},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2412.04092},
}
```

## Model description

This model is an SFT (chat-tuned) version of [Rijgersberg/GEITje-7B](https://huggingface.co/Rijgersberg/GEITje-7B), which in turn is based on Mistral 7B and further pretrained on Dutch data.

## Usage

```python
from transformers import pipeline, Conversation

# load_in_8bit: lower precision but saves a lot of memory
# device_map=auto: loads the model across multiple GPUs
chatbot = pipeline("conversational", model="BramVanroy/GEITje-ultra-sft", model_kwargs={"load_in_8bit": True}, device_map="auto")

start_messages = [
    {"role": "system", "content": "Je bent een grappige chatbot die Bert heet. Je maakt vaak mopjes."},
    {"role": "user", "content": "Hallo, ik ben Bram. Ik wil vanavond graag een film kijken. Heb je enkele suggesties? Liefst een Disney-film."}
]
conversation = Conversation(start_messages)
conversation = chatbot(conversation)
response = conversation.messages[-1]["content"]
print(response)
# Hallo Bram! Wat leuk dat je vanavond een film wilt kijken. Als je van Disney-films houdt, heb ik een paar suggesties voor je.
# Een klassieker is "The Lion King", die is altijd een hit. Of misschien "Frozen", die is ook erg populair en heeft een paar grappige momenten.
# Of als je iets nieuws wilt proberen, "Raya and the Last Dragon" is een spannende avonturenfilm met een hartverwarmend verhaal. Welke film spreekt jou het meest aan?
```

## Intended uses & limitations

This model was only trained on (synthetic) chat data and not specifically aligned through reinforcement learning. The model can generate wrong, misleading, and potentially even offensive content. Use at your own risk.

Because the model was trained on synthetic data created with OpenAI/Azure services, this model cannot be used for commercial purposes.

## Training and evaluation data

The training data consists of older datasets that were translated to Dutch with OpenAI's gpt-3.5-turbo (alpaca, dolly, stackoverflow) and newer ones that were generated with gpt-4-turbo via Azure (no robots, ultrachat). In the case of no robots, the original English prompt (and, optionally, the system message) was translated, and new answers were then generated with gpt-4-turbo. The case of UltraChat is perhaps more interesting: multi-turn conversations were generated in one go. Through prompt engineering, we provide the model with the original English first user message and ask it to create a conversation between a user and an assistant in a single response. Additionally, and in my opinion excitingly, I created multiple personas that were randomly selected from. The user messages in the dataset are written "as if" they were created by one of these personas, in the hope that the model learns to respond well to different types of users. Personas include language learners, a direct conversationalist, someone who loves details, someone who is critical, a child, an expert in the field, a joyful chaotic mind, a generalist, and "an average user". This is described in more detail [in the dataset](https://huggingface.co/datasets/BramVanroy/ultrachat_200k_dutch).

The training set (`train_sft`) consists of 240,527,565 tokens (calculated prior to applying a chat template). The test sets (`test_sft` in the datasets) account for 26,397,086 tokens, which is around 10.97% of the training set.
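As a quick sanity check of the split sizes quoted above:

```python
# Token counts from the model card.
train_tokens = 240_527_565
test_tokens = 26_397_086

ratio = test_tokens / train_tokens * 100
print(f"{ratio:.2f}%")  # → 10.97%
```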

Here is a breakdown of the training set (some data pages may not be available yet, *but they definitely will be in the near future*).

- [BramVanroy/ultrachat_200k_dutch](https://huggingface.co/datasets/BramVanroy/ultrachat_200k_dutch) (gpt-4-turbo; multi-turn; generated): 85.42%
- [BramVanroy/no_robots_dutch](https://huggingface.co/datasets/BramVanroy/no_robots_dutch) (gpt-4-turbo; prompt translated, answer generated; some items have system messages): 2.20%
- [BramVanroy/stackoverflow-chat-dutch](https://huggingface.co/datasets/BramVanroy/stackoverflow-chat-dutch) (gpt-3.5-turbo; multi-turn; code; translated; only 50% used): 8.38%
- [BramVanroy/alpaca-cleaned-dutch](https://huggingface.co/datasets/BramVanroy/alpaca-cleaned-dutch) (gpt-3.5-turbo; translated): 2.62%
- [BramVanroy/dolly-15k-dutch](https://huggingface.co/datasets/BramVanroy/dolly-15k-dutch) (gpt-3.5-turbo; translated): 1.39%

## Training procedure

The great [alignment handbook](https://github.com/huggingface/alignment-handbook/) was used for training, with a custom slurm script for compatibility with our cluster. The model was trained in full, without LoRA or other adapters.

The model was trained in bfloat16 with flash attention 2 and a context length of 8192, on two nodes of four A100 80GB GPUs each, for around 2.5 hours. I thank the [Flemish Super Computer](https://www.vscentrum.be/compute) for their compute. You can find the [wandb logs here](https://wandb.ai/bramvanroy/sft-geitje-ultra).

For conversational usage, the model relies on the Zephyr chat template, which is compatible with system messages. A small portion of the data contained system messages, so it is assumed that the model can handle system messages at least to some extent.
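As a sketch, the Zephyr format produced by the `chat_template` in the recipe can be approximated by hand (assuming `</s>` as the eos token, per the Mistral tokenizer; exact whitespace may differ slightly from the Jinja rendering):

```python
# Hand-rolled approximation of the Zephyr chat template used by this model.
# Assumption: eos_token is "</s>" (Mistral tokenizer default).
def render_zephyr(messages, eos_token="</s>", add_generation_prompt=True):
    parts = []
    for m in messages:
        if m["role"] in ("user", "system", "assistant"):
            parts.append(f"<|{m['role']}|>\n{m['content']}{eos_token}")
    if add_generation_prompt:
        parts.append("<|assistant|>")  # prompt the model to answer
    return "\n".join(parts)

prompt = render_zephyr([
    {"role": "system", "content": "Je bent een behulpzame assistent."},
    {"role": "user", "content": "Hallo!"},
])
print(prompt)
```

In practice you would call `tokenizer.apply_chat_template(messages, add_generation_prompt=True)` instead, which renders the same template directly.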

Recipe used with the handbook:

```yaml
# Model arguments
model_name_or_path: Rijgersberg/GEITje-7B
model_revision: main
torch_dtype: bfloat16
use_flash_attention_2: true

# Data training arguments
# Zephyr chat template
chat_template: "{% for message in messages %}\n{% if message['role'] == 'user' %}\n{{ '<|user|>\n' + message['content'] + eos_token }}\n{% elif message['role'] == 'system' %}\n{{ '<|system|>\n' + message['content'] + eos_token }}\n{% elif message['role'] == 'assistant' %}\n{{ '<|assistant|>\n' + message['content'] + eos_token }}\n{% endif %}\n{% if loop.last and add_generation_prompt %}\n{{ '<|assistant|>' }}\n{% endif %}\n{% endfor %}"
dataset_mixer:
  BramVanroy/ultrachat_200k_dutch: 1.0
  BramVanroy/stackoverflow-chat-dutch: 0.5
  BramVanroy/alpaca-cleaned-dutch: 1.0
  BramVanroy/dolly-15k-dutch: 1.0
  BramVanroy/no_robots_dutch: 1.0
dataset_splits:
- train_sft
- test_sft
preprocessing_num_workers: 8

# SFT trainer config
bf16: true
do_eval: true
evaluation_strategy: epoch
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_checkpointing_kwargs:
  use_reentrant: False
hub_model_id: GEITje-ultra-sft
hub_strategy: every_save
learning_rate: 2.0e-05
log_level: info
logging_steps: 5
logging_strategy: steps
lr_scheduler_type: cosine
max_seq_length: 8192
max_steps: -1
num_train_epochs: 1
output_dir: data/GEITje-ultra-sft
overwrite_output_dir: true
per_device_eval_batch_size: 8
per_device_train_batch_size: 16
push_to_hub: true
remove_unused_columns: true
report_to:
- wandb
save_strategy: "steps"
save_steps: 100
save_total_limit: 1
seed: 42
warmup_ratio: 0.1
```

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
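The total batch sizes follow from the per-device settings, as a quick check of the run configuration above:

```python
# Per-device settings from the hyperparameter list above.
per_device_train, per_device_eval = 4, 4
num_devices, grad_accum = 8, 4

total_train = per_device_train * num_devices * grad_accum
total_eval = per_device_eval * num_devices
print(total_train, total_eval)  # → 128 32
```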

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.8632        | 1.0   | 238  | 0.8563          |
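As a rough consistency check (assuming sequences are packed to the full 8192-token context): 238 optimizer steps at an effective batch of 128 sequences cover about 250M tokens, in line with the ~240M training tokens quoted earlier.

```python
# Step count and effective batch from the table and hyperparameters above.
steps, effective_batch, context = 238, 128, 8192
tokens_seen = steps * effective_batch * context
print(f"{tokens_seen:,}")  # → 249,561,088
```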

### Framework versions

- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.0
13
all_results.json
Normal file
@@ -0,0 +1,13 @@
{
  "epoch": 1.0,
  "eval_loss": 0.8563192486763,
  "eval_runtime": 465.44,
  "eval_samples": 34116,
  "eval_samples_per_second": 7.531,
  "eval_steps_per_second": 0.236,
  "train_loss": 0.5069183211366669,
  "train_runtime": 9492.9381,
  "train_samples": 285533,
  "train_samples_per_second": 3.208,
  "train_steps_per_second": 0.025
}
||||
27
config.json
Normal file
@@ -0,0 +1,27 @@
{
  "_name_or_path": "Rijgersberg/GEITje-7B",
  "architectures": [
    "MistralForCausalLM"
  ],
  "attention_dropout": 0.0,
  "bos_token_id": 1,
  "eos_token_id": 2,
  "hidden_act": "silu",
  "hidden_size": 4096,
  "initializer_range": 0.02,
  "intermediate_size": 14336,
  "max_position_embeddings": 32768,
  "model_type": "mistral",
  "num_attention_heads": 32,
  "num_hidden_layers": 32,
  "num_key_value_heads": 8,
  "pad_token_id": 2,
  "rms_norm_eps": 1e-05,
  "rope_theta": 10000.0,
  "sliding_window": 4096,
  "tie_word_embeddings": false,
  "torch_dtype": "bfloat16",
  "transformers_version": "4.36.2",
  "use_cache": true,
  "vocab_size": 32000
}
||||
8
eval_results.json
Normal file
@@ -0,0 +1,8 @@
{
  "epoch": 1.0,
  "eval_loss": 0.8563192486763,
  "eval_runtime": 465.44,
  "eval_samples": 34116,
  "eval_samples_per_second": 7.531,
  "eval_steps_per_second": 0.236
}
|
||||
6
generation_config.json
Normal file
@@ -0,0 +1,6 @@
{
  "_from_model_config": true,
  "bos_token_id": 1,
  "eos_token_id": 2,
  "transformers_version": "4.36.2"
}
|
||||
3
model-00001-of-00003.safetensors
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:197822d10c9e9fc3617f21232cdcc3dcdeeb49ef5bbc05bdfc4551ce60d0f0ac
size 4943162336
||||
3
model-00002-of-00003.safetensors
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:69441f26ce855c92a107157abcdea51f62ae9262a25cb61b24aacd7072913c75
size 4999819336
||||
3
model-00003-of-00003.safetensors
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:4de4dd7bd5fcb7cbc83b583683c656852d32713243e8167a82417cab67c56ee7
size 4540516344
||||
298
model.safetensors.index.json
Normal file
@@ -0,0 +1,298 @@
{
  "metadata": {
    "total_size": 14483464192
  },
  "weight_map": {
    "lm_head.weight": "model-00003-of-00003.safetensors",
    "model.embed_tokens.weight": "model-00001-of-00003.safetensors",
    "model.layers.0.input_layernorm.weight": "model-00001-of-00003.safetensors",
    "model.layers.0.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.0.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.0.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.0.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
    "model.layers.0.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.0.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.0.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.0.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.1.input_layernorm.weight": "model-00001-of-00003.safetensors",
    "model.layers.1.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.1.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.1.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.1.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
    "model.layers.1.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.1.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.1.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.1.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.10.input_layernorm.weight": "model-00002-of-00003.safetensors",
    "model.layers.10.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.10.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.10.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.10.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
    "model.layers.10.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.10.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.10.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.10.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.11.input_layernorm.weight": "model-00002-of-00003.safetensors",
    "model.layers.11.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.11.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.11.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.11.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
    "model.layers.11.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.11.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.11.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.11.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.12.input_layernorm.weight": "model-00002-of-00003.safetensors",
    "model.layers.12.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.12.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.12.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.12.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
    "model.layers.12.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.12.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.12.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.12.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.13.input_layernorm.weight": "model-00002-of-00003.safetensors",
    "model.layers.13.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.13.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.13.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.13.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
    "model.layers.13.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.13.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.13.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.13.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.14.input_layernorm.weight": "model-00002-of-00003.safetensors",
    "model.layers.14.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.14.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.14.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.14.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
    "model.layers.14.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.14.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.14.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.14.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.15.input_layernorm.weight": "model-00002-of-00003.safetensors",
    "model.layers.15.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.15.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.15.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.15.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
    "model.layers.15.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.15.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.15.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.15.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.16.input_layernorm.weight": "model-00002-of-00003.safetensors",
    "model.layers.16.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.16.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.16.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.16.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
    "model.layers.16.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.16.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.16.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.16.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.17.input_layernorm.weight": "model-00002-of-00003.safetensors",
    "model.layers.17.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.17.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.17.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.17.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
    "model.layers.17.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.17.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.17.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.17.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.18.input_layernorm.weight": "model-00002-of-00003.safetensors",
    "model.layers.18.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.18.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.18.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.18.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
    "model.layers.18.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.18.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.18.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.18.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.19.input_layernorm.weight": "model-00002-of-00003.safetensors",
    "model.layers.19.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.19.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.19.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.19.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
    "model.layers.19.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.19.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.19.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.19.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.2.input_layernorm.weight": "model-00001-of-00003.safetensors",
    "model.layers.2.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.2.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.2.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.2.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
    "model.layers.2.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.2.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.2.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.2.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.20.input_layernorm.weight": "model-00002-of-00003.safetensors",
    "model.layers.20.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.20.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.20.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.20.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
    "model.layers.20.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.20.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.20.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.20.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.21.input_layernorm.weight": "model-00002-of-00003.safetensors",
    "model.layers.21.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.21.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.21.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.21.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
    "model.layers.21.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.21.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.21.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.21.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.22.input_layernorm.weight": "model-00003-of-00003.safetensors",
    "model.layers.22.mlp.down_proj.weight": "model-00003-of-00003.safetensors",
    "model.layers.22.mlp.gate_proj.weight": "model-00003-of-00003.safetensors",
    "model.layers.22.mlp.up_proj.weight": "model-00003-of-00003.safetensors",
    "model.layers.22.post_attention_layernorm.weight": "model-00003-of-00003.safetensors",
    "model.layers.22.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.22.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.22.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.22.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
    "model.layers.23.input_layernorm.weight": "model-00003-of-00003.safetensors",
    "model.layers.23.mlp.down_proj.weight": "model-00003-of-00003.safetensors",
    "model.layers.23.mlp.gate_proj.weight": "model-00003-of-00003.safetensors",
    "model.layers.23.mlp.up_proj.weight": "model-00003-of-00003.safetensors",
    "model.layers.23.post_attention_layernorm.weight": "model-00003-of-00003.safetensors",
    "model.layers.23.self_attn.k_proj.weight": "model-00003-of-00003.safetensors",
    "model.layers.23.self_attn.o_proj.weight": "model-00003-of-00003.safetensors",
    "model.layers.23.self_attn.q_proj.weight": "model-00003-of-00003.safetensors",
    "model.layers.23.self_attn.v_proj.weight": "model-00003-of-00003.safetensors",
    "model.layers.24.input_layernorm.weight": "model-00003-of-00003.safetensors",
    "model.layers.24.mlp.down_proj.weight": "model-00003-of-00003.safetensors",
    "model.layers.24.mlp.gate_proj.weight": "model-00003-of-00003.safetensors",
    "model.layers.24.mlp.up_proj.weight": "model-00003-of-00003.safetensors",
    "model.layers.24.post_attention_layernorm.weight": "model-00003-of-00003.safetensors",
    "model.layers.24.self_attn.k_proj.weight": "model-00003-of-00003.safetensors",
    "model.layers.24.self_attn.o_proj.weight": "model-00003-of-00003.safetensors",
    "model.layers.24.self_attn.q_proj.weight": "model-00003-of-00003.safetensors",
    "model.layers.24.self_attn.v_proj.weight": "model-00003-of-00003.safetensors",
    "model.layers.25.input_layernorm.weight": "model-00003-of-00003.safetensors",
    "model.layers.25.mlp.down_proj.weight": "model-00003-of-00003.safetensors",
    "model.layers.25.mlp.gate_proj.weight": "model-00003-of-00003.safetensors",
    "model.layers.25.mlp.up_proj.weight": "model-00003-of-00003.safetensors",
    "model.layers.25.post_attention_layernorm.weight": "model-00003-of-00003.safetensors",
    "model.layers.25.self_attn.k_proj.weight": "model-00003-of-00003.safetensors",
    "model.layers.25.self_attn.o_proj.weight": "model-00003-of-00003.safetensors",
    "model.layers.25.self_attn.q_proj.weight": "model-00003-of-00003.safetensors",
    "model.layers.25.self_attn.v_proj.weight": "model-00003-of-00003.safetensors",
    "model.layers.26.input_layernorm.weight": "model-00003-of-00003.safetensors",
    "model.layers.26.mlp.down_proj.weight": "model-00003-of-00003.safetensors",
    "model.layers.26.mlp.gate_proj.weight": "model-00003-of-00003.safetensors",
    "model.layers.26.mlp.up_proj.weight": "model-00003-of-00003.safetensors",
    "model.layers.26.post_attention_layernorm.weight": "model-00003-of-00003.safetensors",
    "model.layers.26.self_attn.k_proj.weight": "model-00003-of-00003.safetensors",
    "model.layers.26.self_attn.o_proj.weight": "model-00003-of-00003.safetensors",
    "model.layers.26.self_attn.q_proj.weight": "model-00003-of-00003.safetensors",
    "model.layers.26.self_attn.v_proj.weight": "model-00003-of-00003.safetensors",
    "model.layers.27.input_layernorm.weight": "model-00003-of-00003.safetensors",
    "model.layers.27.mlp.down_proj.weight": "model-00003-of-00003.safetensors",
    "model.layers.27.mlp.gate_proj.weight": "model-00003-of-00003.safetensors",
    "model.layers.27.mlp.up_proj.weight": "model-00003-of-00003.safetensors",
    "model.layers.27.post_attention_layernorm.weight": "model-00003-of-00003.safetensors",
    "model.layers.27.self_attn.k_proj.weight": "model-00003-of-00003.safetensors",
    "model.layers.27.self_attn.o_proj.weight": "model-00003-of-00003.safetensors",
    "model.layers.27.self_attn.q_proj.weight": "model-00003-of-00003.safetensors",
    "model.layers.27.self_attn.v_proj.weight": "model-00003-of-00003.safetensors",
    "model.layers.28.input_layernorm.weight": "model-00003-of-00003.safetensors",
    "model.layers.28.mlp.down_proj.weight": "model-00003-of-00003.safetensors",
    "model.layers.28.mlp.gate_proj.weight": "model-00003-of-00003.safetensors",
    "model.layers.28.mlp.up_proj.weight": "model-00003-of-00003.safetensors",
    "model.layers.28.post_attention_layernorm.weight": "model-00003-of-00003.safetensors",
    "model.layers.28.self_attn.k_proj.weight": "model-00003-of-00003.safetensors",
    "model.layers.28.self_attn.o_proj.weight": "model-00003-of-00003.safetensors",
    "model.layers.28.self_attn.q_proj.weight": "model-00003-of-00003.safetensors",
    "model.layers.28.self_attn.v_proj.weight": "model-00003-of-00003.safetensors",
    "model.layers.29.input_layernorm.weight": "model-00003-of-00003.safetensors",
    "model.layers.29.mlp.down_proj.weight": "model-00003-of-00003.safetensors",
    "model.layers.29.mlp.gate_proj.weight": "model-00003-of-00003.safetensors",
    "model.layers.29.mlp.up_proj.weight": "model-00003-of-00003.safetensors",
    "model.layers.29.post_attention_layernorm.weight": "model-00003-of-00003.safetensors",
    "model.layers.29.self_attn.k_proj.weight": "model-00003-of-00003.safetensors",
    "model.layers.29.self_attn.o_proj.weight": "model-00003-of-00003.safetensors",
    "model.layers.29.self_attn.q_proj.weight": "model-00003-of-00003.safetensors",
    "model.layers.29.self_attn.v_proj.weight": "model-00003-of-00003.safetensors",
    "model.layers.3.input_layernorm.weight": "model-00001-of-00003.safetensors",
    "model.layers.3.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.3.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.3.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.3.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
    "model.layers.3.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.3.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.3.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.3.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
    "model.layers.30.input_layernorm.weight": "model-00003-of-00003.safetensors",
    "model.layers.30.mlp.down_proj.weight": "model-00003-of-00003.safetensors",
    "model.layers.30.mlp.gate_proj.weight": "model-00003-of-00003.safetensors",
    "model.layers.30.mlp.up_proj.weight": "model-00003-of-00003.safetensors",
    "model.layers.30.post_attention_layernorm.weight": "model-00003-of-00003.safetensors",
    "model.layers.30.self_attn.k_proj.weight": "model-00003-of-00003.safetensors",
|
||||
"model.layers.30.self_attn.o_proj.weight": "model-00003-of-00003.safetensors",
|
||||
"model.layers.30.self_attn.q_proj.weight": "model-00003-of-00003.safetensors",
|
||||
"model.layers.30.self_attn.v_proj.weight": "model-00003-of-00003.safetensors",
|
||||
"model.layers.31.input_layernorm.weight": "model-00003-of-00003.safetensors",
|
||||
"model.layers.31.mlp.down_proj.weight": "model-00003-of-00003.safetensors",
|
||||
"model.layers.31.mlp.gate_proj.weight": "model-00003-of-00003.safetensors",
|
||||
"model.layers.31.mlp.up_proj.weight": "model-00003-of-00003.safetensors",
|
||||
"model.layers.31.post_attention_layernorm.weight": "model-00003-of-00003.safetensors",
|
||||
"model.layers.31.self_attn.k_proj.weight": "model-00003-of-00003.safetensors",
|
||||
"model.layers.31.self_attn.o_proj.weight": "model-00003-of-00003.safetensors",
|
||||
"model.layers.31.self_attn.q_proj.weight": "model-00003-of-00003.safetensors",
|
||||
"model.layers.31.self_attn.v_proj.weight": "model-00003-of-00003.safetensors",
|
||||
"model.layers.4.input_layernorm.weight": "model-00001-of-00003.safetensors",
|
||||
"model.layers.4.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
|
||||
"model.layers.4.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
|
||||
"model.layers.4.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
|
||||
"model.layers.4.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
|
||||
"model.layers.4.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
|
||||
"model.layers.4.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
|
||||
"model.layers.4.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
|
||||
"model.layers.4.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
|
||||
"model.layers.5.input_layernorm.weight": "model-00001-of-00003.safetensors",
|
||||
"model.layers.5.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
|
||||
"model.layers.5.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
|
||||
"model.layers.5.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
|
||||
"model.layers.5.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
|
||||
"model.layers.5.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
|
||||
"model.layers.5.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
|
||||
"model.layers.5.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
|
||||
"model.layers.5.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
|
||||
"model.layers.6.input_layernorm.weight": "model-00001-of-00003.safetensors",
|
||||
"model.layers.6.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
|
||||
"model.layers.6.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
|
||||
"model.layers.6.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
|
||||
"model.layers.6.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
|
||||
"model.layers.6.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
|
||||
"model.layers.6.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
|
||||
"model.layers.6.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
|
||||
"model.layers.6.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
|
||||
"model.layers.7.input_layernorm.weight": "model-00001-of-00003.safetensors",
|
||||
"model.layers.7.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
|
||||
"model.layers.7.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
|
||||
"model.layers.7.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
|
||||
"model.layers.7.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
|
||||
"model.layers.7.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
|
||||
"model.layers.7.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
|
||||
"model.layers.7.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
|
||||
"model.layers.7.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
|
||||
"model.layers.8.input_layernorm.weight": "model-00001-of-00003.safetensors",
|
||||
"model.layers.8.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
|
||||
"model.layers.8.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
|
||||
"model.layers.8.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
|
||||
"model.layers.8.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
|
||||
"model.layers.8.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
|
||||
"model.layers.8.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
|
||||
"model.layers.8.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
|
||||
"model.layers.8.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
|
||||
"model.layers.9.input_layernorm.weight": "model-00001-of-00003.safetensors",
|
||||
"model.layers.9.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
|
||||
"model.layers.9.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
|
||||
"model.layers.9.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
|
||||
"model.layers.9.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
|
||||
"model.layers.9.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
|
||||
"model.layers.9.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
|
||||
"model.layers.9.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
|
||||
"model.layers.9.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
|
||||
"model.norm.weight": "model-00003-of-00003.safetensors"
|
||||
}
|
||||
}
|
||||
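The weight_map above assigns each tensor name to one of three safetensors shards. A minimal sketch of how a loader could group tensor names by shard from such an index — shown here with a toy in-memory dict mirroring the structure of model.safetensors.index.json, not by reading the real file:

```python
from collections import defaultdict

def tensors_by_shard(index):
    """Group tensor names by the shard file that stores them."""
    shards = defaultdict(list)
    for tensor_name, shard_file in index["weight_map"].items():
        shards[shard_file].append(tensor_name)
    return dict(shards)

# Toy index mirroring a few entries of model.safetensors.index.json
index = {
    "weight_map": {
        "model.layers.26.mlp.up_proj.weight": "model-00003-of-00003.safetensors",
        "model.layers.3.input_layernorm.weight": "model-00001-of-00003.safetensors",
        "model.norm.weight": "model-00003-of-00003.safetensors",
    }
}
grouped = tensors_by_shard(index)
```

Grouping this way lets a loader open each shard once and read all of its tensors together, instead of reopening files per tensor.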
30
special_tokens_map.json
Normal file
@@ -0,0 +1,30 @@
{
  "bos_token": {
    "content": "<s>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "eos_token": {
    "content": "</s>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "pad_token": {
    "content": "</s>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "unk_token": {
    "content": "<unk>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  }
}
91122
tokenizer.json
Normal file
File diff suppressed because it is too large
BIN
tokenizer.model
(Stored with Git LFS)
Normal file
Binary file not shown.
43
tokenizer_config.json
Normal file
@@ -0,0 +1,43 @@
{
  "add_bos_token": true,
  "add_eos_token": false,
  "added_tokens_decoder": {
    "0": {
      "content": "<unk>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "1": {
      "content": "<s>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "2": {
      "content": "</s>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    }
  },
  "additional_special_tokens": [],
  "bos_token": "<s>",
  "chat_template": "{% for message in messages %}\n{% if message['role'] == 'user' %}\n{{ '<|user|>\n' + message['content'] + eos_token }}\n{% elif message['role'] == 'system' %}\n{{ '<|system|>\n' + message['content'] + eos_token }}\n{% elif message['role'] == 'assistant' %}\n{{ '<|assistant|>\n' + message['content'] + eos_token }}\n{% endif %}\n{% if loop.last and add_generation_prompt %}\n{{ '<|assistant|>' }}\n{% endif %}\n{% endfor %}",
  "clean_up_tokenization_spaces": false,
  "eos_token": "</s>",
  "legacy": true,
  "model_max_length": 8192,
  "pad_token": "</s>",
  "sp_model_kwargs": {},
  "spaces_between_special_tokens": false,
  "tokenizer_class": "LlamaTokenizer",
  "unk_token": "<unk>",
  "use_default_system_prompt": true
}
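The chat_template field in tokenizer_config.json is a Jinja template that wraps each turn in a `<|role|>` header terminated by the eos token, and optionally appends a bare `<|assistant|>` header to prompt generation. A minimal pure-Python sketch of that formatting logic — an illustration only, since exact whitespace in the real Jinja rendering may differ slightly:

```python
EOS_TOKEN = "</s>"  # matches eos_token in tokenizer_config.json

def format_chat(messages, add_generation_prompt=False):
    """Mirror the chat_template: each turn becomes
    <|role|>\\n{content}</s>, optionally followed by a bare
    <|assistant|> header when add_generation_prompt is set."""
    parts = []
    for message in messages:
        if message["role"] in ("user", "system", "assistant"):
            parts.append(f"<|{message['role']}|>\n{message['content']}{EOS_TOKEN}")
    if add_generation_prompt:
        parts.append("<|assistant|>")
    return "\n".join(parts)

prompt = format_chat(
    [{"role": "system", "content": "Je bent een behulpzame assistent."},
     {"role": "user", "content": "Hallo!"}],
    add_generation_prompt=True,
)
```

In practice you would not hand-roll this: `tokenizer.apply_chat_template(messages, add_generation_prompt=True)` in transformers renders the stored template for you.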
8
train_results.json
Normal file
@@ -0,0 +1,8 @@
{
  "epoch": 1.0,
  "train_loss": 0.5069183211366669,
  "train_runtime": 9492.9381,
  "train_samples": 285533,
  "train_samples_per_second": 3.208,
  "train_steps_per_second": 0.025
}
326
trainer_state.json
Normal file
@@ -0,0 +1,326 @@
{
  "best_metric": null,
  "best_model_checkpoint": null,
  "epoch": 1.0,
  "eval_steps": 500,
  "global_step": 238,
  "is_hyper_param_search": false,
  "is_local_process_zero": true,
  "is_world_process_zero": true,
  "log_history": [
    { "epoch": 0.0, "learning_rate": 8.333333333333333e-07, "loss": 4.1909, "step": 1 },
    { "epoch": 0.02, "learning_rate": 4.166666666666667e-06, "loss": 3.6566, "step": 5 },
    { "epoch": 0.04, "learning_rate": 8.333333333333334e-06, "loss": 2.263, "step": 10 },
    { "epoch": 0.06, "learning_rate": 1.25e-05, "loss": 1.5891, "step": 15 },
    { "epoch": 0.08, "learning_rate": 1.6666666666666667e-05, "loss": 1.279, "step": 20 },
    { "epoch": 0.11, "learning_rate": 1.9998922457512608e-05, "loss": 1.1573, "step": 25 },
    { "epoch": 0.13, "learning_rate": 1.996123284790336e-05, "loss": 1.0848, "step": 30 },
    { "epoch": 0.15, "learning_rate": 1.9869898108633834e-05, "loss": 1.039, "step": 35 },
    { "epoch": 0.17, "learning_rate": 1.972541011294959e-05, "loss": 1.0085, "step": 40 },
    { "epoch": 0.19, "learning_rate": 1.952854698514318e-05, "loss": 0.9853, "step": 45 },
    { "epoch": 0.21, "learning_rate": 1.9280368910050943e-05, "loss": 0.97, "step": 50 },
    { "epoch": 0.23, "learning_rate": 1.898221242354353e-05, "loss": 0.9575, "step": 55 },
    { "epoch": 0.25, "learning_rate": 1.8635683214758213e-05, "loss": 0.9491, "step": 60 },
    { "epoch": 0.27, "learning_rate": 1.8242647478835717e-05, "loss": 0.9285, "step": 65 },
    { "epoch": 0.29, "learning_rate": 1.780522186673046e-05, "loss": 0.9286, "step": 70 },
    { "epoch": 0.32, "learning_rate": 1.7325762086218415e-05, "loss": 0.9181, "step": 75 },
    { "epoch": 0.34, "learning_rate": 1.680685021549063e-05, "loss": 0.9192, "step": 80 },
    { "epoch": 0.36, "learning_rate": 1.6251280797653606e-05, "loss": 0.9165, "step": 85 },
    { "epoch": 0.38, "learning_rate": 1.566204579102317e-05, "loss": 0.9042, "step": 90 },
    { "epoch": 0.4, "learning_rate": 1.5042318456260305e-05, "loss": 0.9056, "step": 95 },
    { "epoch": 0.42, "learning_rate": 1.4395436267123017e-05, "loss": 0.901, "step": 100 },
    { "epoch": 0.44, "learning_rate": 1.3724882936866596e-05, "loss": 0.895, "step": 105 },
    { "epoch": 0.46, "learning_rate": 1.3034269657086993e-05, "loss": 0.8956, "step": 110 },
    { "epoch": 0.48, "learning_rate": 1.2327315650043605e-05, "loss": 0.8878, "step": 115 },
    { "epoch": 0.5, "learning_rate": 1.1607828139194683e-05, "loss": 0.8908, "step": 120 },
    { "epoch": 0.53, "learning_rate": 1.0879681845811964e-05, "loss": 0.8854, "step": 125 },
    { "epoch": 0.55, "learning_rate": 1.0146798122093167e-05, "loss": 0.8826, "step": 130 },
    { "epoch": 0.57, "learning_rate": 9.41312383314878e-06, "loss": 0.8782, "step": 135 },
    { "epoch": 0.59, "learning_rate": 8.682610101591813e-06, "loss": 0.8836, "step": 140 },
    { "epoch": 0.61, "learning_rate": 7.95919102919926e-06, "loss": 0.8751, "step": 145 },
    { "epoch": 0.63, "learning_rate": 7.246762510237404e-06, "loss": 0.8808, "step": 150 },
    { "epoch": 0.65, "learning_rate": 6.549161250549474e-06, "loss": 0.8793, "step": 155 },
    { "epoch": 0.67, "learning_rate": 5.8701441053961185e-06, "loss": 0.8729, "step": 160 },
    { "epoch": 0.69, "learning_rate": 5.213367847322408e-06, "loss": 0.8726, "step": 165 },
    { "epoch": 0.71, "learning_rate": 4.58236947300939e-06, "loss": 0.8675, "step": 170 },
    { "epoch": 0.74, "learning_rate": 3.980547155165429e-06, "loss": 0.8663, "step": 175 },
    { "epoch": 0.76, "learning_rate": 3.4111419420388904e-06, "loss": 0.8727, "step": 180 },
    { "epoch": 0.78, "learning_rate": 2.877220303107373e-06, "loss": 0.8727, "step": 185 },
    { "epoch": 0.8, "learning_rate": 2.381657614941858e-06, "loss": 0.8613, "step": 190 },
    { "epoch": 0.82, "learning_rate": 1.927122676180756e-06, "loss": 0.8697, "step": 195 },
    { "epoch": 0.84, "learning_rate": 1.516063335006851e-06, "loss": 0.8652, "step": 200 },
    { "epoch": 0.86, "learning_rate": 1.1506933065287062e-06, "loss": 0.8623, "step": 205 },
    { "epoch": 0.88, "learning_rate": 8.329802510601559e-07, "loss": 0.8606, "step": 210 },
    { "epoch": 0.9, "learning_rate": 5.646351775009617e-07, "loss": 0.8685, "step": 215 },
    { "epoch": 0.92, "learning_rate": 3.471032288855869e-07, "loss": 0.8676, "step": 220 },
    { "epoch": 0.95, "learning_rate": 1.8155589972348453e-07, "loss": 0.8646, "step": 225 },
    { "epoch": 0.97, "learning_rate": 6.888472704359661e-08, "loss": 0.8628, "step": 230 },
    { "epoch": 0.99, "learning_rate": 9.696489119221942e-09, "loss": 0.8632, "step": 235 },
    { "epoch": 1.0, "eval_loss": 0.8563192486763, "eval_runtime": 482.3257, "eval_samples_per_second": 7.267, "eval_steps_per_second": 0.228, "step": 238 },
    { "epoch": 1.0, "step": 238, "total_flos": 397821345792000.0, "train_loss": 0.5069183211366669, "train_runtime": 9492.9381, "train_samples_per_second": 3.208, "train_steps_per_second": 0.025 }
  ],
  "logging_steps": 5,
  "max_steps": 238,
  "num_input_tokens_seen": 0,
  "num_train_epochs": 1,
  "save_steps": 100,
  "total_flos": 397821345792000.0,
  "train_batch_size": 4,
  "trial_name": null,
  "trial_params": null
}
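The throughput figures in trainer_state.json are internally consistent: 238 optimizer steps over the 9492.9-second run reproduce the logged 0.025 steps per second, and the ratio of logged samples/s to steps/s implies roughly 128 samples per optimizer step (train_batch_size 4 times gradient accumulation and data parallelism; the exact split is not recorded here). A quick sanity check:

```python
# Values taken directly from trainer_state.json
train_runtime = 9492.9381      # seconds
global_step = 238
samples_per_second = 3.208
logged_steps_per_second = 0.025

# Recompute steps/s from runtime and step count
steps_per_second = round(global_step / train_runtime, 3)

# Samples per optimizer step implied by the two throughput numbers
effective_batch = samples_per_second / logged_steps_per_second
```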
3
training_args.bin
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:6b55610e9856316c203b39952bbc82bf51d24cc54569f3e0c71466ae6344d882
size 5880