Initialize project; model provided by the ModelHub XC community
Model: W-61/llama-3-8b-base-r-dpo-ultrafeedback-4xh200-batch-128-20260426-105614 Source: Original Platform
36
.gitattributes
vendored
Normal file
@@ -0,0 +1,36 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
tokenizer.json filter=lfs diff=lfs merge=lfs -text
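These rules route the repository's large binary artifacts (weight shards, archives, tokenizer.json) through Git LFS. As a rough sanity check — a sketch only, since `fnmatch` merely approximates gitattributes glob semantics — the patterns can be tested against file names that appear later in this commit:

```python
from fnmatch import fnmatch

# A few of the patterns from the .gitattributes above (basename globs).
PATTERNS = ["*.safetensors", "*.bin", "tokenizer.json", "*tfevents*"]

def tracked_by_lfs(basename: str) -> bool:
    """Approximate gitattributes matching: test the basename against each glob."""
    return any(fnmatch(basename, p) for p in PATTERNS)

assert tracked_by_lfs("model-00001-of-00007.safetensors")  # weight shard -> LFS
assert tracked_by_lfs("tokenizer.json")                    # matched by exact name
assert not tracked_by_lfs("config.json")                   # small JSON stays in plain git
assert not tracked_by_lfs("README.md")
```

Note that real gitattributes matching has extra rules (e.g. `saved_model/**/*` matches paths, not basenames), so this is only a first-order check.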
78
README.md
Normal file
@@ -0,0 +1,78 @@
---
library_name: transformers
base_model: W-61/llama-3-8b-base-sft-ultrachat-8xh200
tags:
- alignment-handbook
- r-dpo
- generated_from_trainer
datasets:
- HuggingFaceH4/ultrafeedback_binarized
model-index:
- name: llama-3-8b-base-r-dpo-ultrafeedback-4xh200-batch-128-20260426-105614
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# llama-3-8b-base-r-dpo-ultrafeedback-4xh200-batch-128-20260426-105614

This model is a fine-tuned version of [W-61/llama-3-8b-base-sft-ultrachat-8xh200](https://huggingface.co/W-61/llama-3-8b-base-sft-ultrachat-8xh200) on the HuggingFaceH4/ultrafeedback_binarized dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5338
- R Dpo/chosen Len: 286.9760
- R Dpo/rejected Len: 246.0880
- R Dpo/length Delta: 40.8880
- R Dpo/regularization Term: 0.0
- Logps/chosen: -406.7590
- Logps/rejected: -443.3631
- Logps/ref Chosen: -288.6415
- Logps/ref Rejected: -265.9616
- Logits/chosen: -0.8653
- Logits/rejected: -0.8488

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- total_eval_batch_size: 8
- optimizer: AdamW (torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
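The effective batch size in the run name (batch-128) follows from the per-device batch size, device count, and gradient accumulation. A quick arithmetic check (the steps-per-epoch figure is an inference from the dataset size in all_results.json, not stated in the card):

```python
train_batch_size = 4             # per device
num_devices = 4                  # 4x H200
gradient_accumulation_steps = 8

total = train_batch_size * num_devices * gradient_accumulation_steps
assert total == 128  # matches total_train_batch_size above

# With 61,135 training pairs (see all_results.json), one epoch is:
steps_per_epoch = 61135 // total
assert steps_per_epoch == 477  # consistent with evals logged at steps 200 and 400
```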
### Training results

| Training Loss | Epoch | Step | Validation Loss | R Dpo/chosen Len | R Dpo/rejected Len | R Dpo/length Delta | R Dpo/regularization Term | Logps/chosen | Logps/rejected | Logps/ref Chosen | Logps/ref Rejected | Logits/chosen | Logits/rejected |
|:-------------:|:------:|:----:|:---------------:|:----------------:|:------------------:|:------------------:|:-------------------------:|:------------:|:--------------:|:----------------:|:------------------:|:-------------:|:---------------:|
| 4.4915 | 0.4188 | 200 | 0.5614 | 286.9760 | 246.0880 | 40.8880 | 0.0 | -383.4086 | -408.7636 | -288.6415 | -265.9616 | -0.9071 | -0.8845 |
| 4.2712 | 0.8377 | 400 | 0.5338 | 286.9760 | 246.0880 | 40.8880 | 0.0 | -406.7590 | -443.3631 | -288.6415 | -265.9616 | -0.8653 | -0.8488 |

### Framework versions

- Transformers 4.51.0
- Pytorch 2.3.1+cu121
- Datasets 2.21.0
- Tokenizers 0.21.4
24
all_results.json
Normal file
@@ -0,0 +1,24 @@
{
    "epoch": 0.9989528795811519,
    "eval_logits/chosen": -0.8717418909072876,
    "eval_logits/rejected": -0.8549114465713501,
    "eval_logps/chosen": -415.0469055175781,
    "eval_logps/ref_chosen": -288.6414794921875,
    "eval_logps/ref_rejected": -265.96160888671875,
    "eval_logps/rejected": -453.6796875,
    "eval_loss": 0.5324965119361877,
    "eval_r_dpo/chosen_len": 286.97601318359375,
    "eval_r_dpo/length_delta": 40.88800048828125,
    "eval_r_dpo/regularization_term": 0.0,
    "eval_r_dpo/rejected_len": 246.08799743652344,
    "eval_runtime": 77.9812,
    "eval_samples": 2000,
    "eval_samples_per_second": 25.647,
    "eval_steps_per_second": 3.206,
    "total_flos": 0.0,
    "train_loss": 4.588983364824979,
    "train_runtime": 6140.251,
    "train_samples": 61135,
    "train_samples_per_second": 9.956,
    "train_steps_per_second": 0.078
}
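Several of these fields are redundant, which makes the log easy to cross-check. A small consistency sketch using values copied from the JSON above:

```python
# Values copied from all_results.json above.
train_samples, train_runtime = 61135, 6140.251
eval_samples, eval_runtime = 2000, 77.9812
chosen_len, rejected_len = 286.97601318359375, 246.08799743652344

assert round(train_samples / train_runtime, 3) == 9.956   # train_samples_per_second
assert round(eval_samples / eval_runtime, 3) == 25.647    # eval_samples_per_second
assert round(chosen_len - rejected_len, 3) == 40.888      # eval_r_dpo/length_delta
```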
29
config.json
Normal file
@@ -0,0 +1,29 @@
{
  "architectures": [
    "LlamaForCausalLM"
  ],
  "attention_bias": false,
  "attention_dropout": 0.0,
  "bos_token_id": 128000,
  "eos_token_id": 128001,
  "head_dim": 128,
  "hidden_act": "silu",
  "hidden_size": 4096,
  "initializer_range": 0.02,
  "intermediate_size": 14336,
  "max_position_embeddings": 8192,
  "mlp_bias": false,
  "model_type": "llama",
  "num_attention_heads": 32,
  "num_hidden_layers": 32,
  "num_key_value_heads": 8,
  "pretraining_tp": 1,
  "rms_norm_eps": 1e-05,
  "rope_scaling": null,
  "rope_theta": 500000.0,
  "tie_word_embeddings": false,
  "torch_dtype": "float32",
  "transformers_version": "4.51.0",
  "use_cache": true,
  "vocab_size": 128256
}
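The geometry in this config pins down the parameter count exactly. A sketch of the arithmetic (standard Llama shapes, with untied embeddings per `tie_word_embeddings: false`), which also explains the `total_size` recorded in model.safetensors.index.json:

```python
# Shape arithmetic from config.json above (Llama-3-8B geometry).
hidden, inter, layers, vocab = 4096, 14336, 32, 128256
n_heads, n_kv_heads = 32, 8

assert hidden // n_heads == 128               # head_dim
kv_dim = n_kv_heads * (hidden // n_heads)     # 1024: grouped-query attention (4 query heads per KV head)

attn = hidden * hidden * 2 + hidden * kv_dim * 2  # q,o projections + k,v projections
mlp = hidden * inter * 3                          # gate, up, down
norms = 2 * hidden                                # two RMSNorms per layer
per_layer = attn + mlp + norms

embeddings = 2 * vocab * hidden                   # input embeddings + untied lm_head
total = layers * per_layer + embeddings + hidden  # + final norm

assert total == 8_030_261_248       # ~8.03B parameters
assert total * 4 == 32_121_044_992  # float32 bytes; equals the safetensors index total_size
```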
18
eval_results.json
Normal file
@@ -0,0 +1,18 @@
{
    "epoch": 0.9989528795811519,
    "eval_logits/chosen": -0.8717418909072876,
    "eval_logits/rejected": -0.8549114465713501,
    "eval_logps/chosen": -415.0469055175781,
    "eval_logps/ref_chosen": -288.6414794921875,
    "eval_logps/ref_rejected": -265.96160888671875,
    "eval_logps/rejected": -453.6796875,
    "eval_loss": 0.5324965119361877,
    "eval_r_dpo/chosen_len": 286.97601318359375,
    "eval_r_dpo/length_delta": 40.88800048828125,
    "eval_r_dpo/regularization_term": 0.0,
    "eval_r_dpo/rejected_len": 246.08799743652344,
    "eval_runtime": 77.9812,
    "eval_samples": 2000,
    "eval_samples_per_second": 25.647,
    "eval_steps_per_second": 3.206
}
9
generation_config.json
Normal file
@@ -0,0 +1,9 @@
{
  "bos_token_id": 128000,
  "do_sample": true,
  "eos_token_id": 128001,
  "max_length": 4096,
  "temperature": 0.6,
  "top_p": 0.9,
  "transformers_version": "4.51.0"
}
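This enables nucleus sampling with temperature 0.6 and top_p 0.9. A minimal pure-Python sketch of what that filtering step does — illustrative only, not the transformers implementation:

```python
import math

def sample_filter(logits, temperature=0.6, top_p=0.9):
    """Temperature-scale the logits, then keep the smallest set of tokens
    whose cumulative probability mass reaches top_p (nucleus sampling)."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(l - m) for l in scaled]   # stable softmax
    z = sum(exps)
    probs = [e / z for e in exps]
    order = sorted(range(len(probs)), key=lambda i: -probs[i])
    kept, mass = [], 0.0
    for i in order:
        kept.append(i)
        mass += probs[i]
        if mass >= top_p:
            break
    return sorted(kept)  # indices the sampler may draw from

# Lower temperature sharpens the distribution, so fewer tokens survive:
assert sample_filter([2.0, 1.0, 0.0, -1.0]) == [0, 1]
assert sample_filter([2.0, 1.0, 0.0, -1.0], temperature=0.1) == [0]
```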
3
model-00001-of-00007.safetensors
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:6e324731d6af899b43cdf4245b70d18309071d8a5e7dda8c5c4d3df58bbe38d4
size 4886466168
3
model-00002-of-00007.safetensors
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ed5416a71ba4edbcd35d285bca485f1faaf0bc6332bde0a3fd307be5ea2a153d
size 4832007448
3
model-00003-of-00007.safetensors
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:b0778cba89bf7c7e06815d509124797dacbf0c34e1cb917c63b7d1a949a6fb8e
size 4999813112
3
model-00004-of-00007.safetensors
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:81b92e155ffaf9d865c87a142742579248be08db8a2d12de2c738807e1e14655
size 4999813128
3
model-00005-of-00007.safetensors
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:aedfb7817eaba6f5dc176d59924dd42069bb461a3ac87773042b5e727e9d8bae
size 4832007496
3
model-00006-of-00007.safetensors
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d6eb30daf7d85969fc617c797ebe86c84d7fb296b3c4ac36179c64d028a2e82e
size 4999813120
3
model-00007-of-00007.safetensors
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:4eb3c0fb8323569c77ff250ce04ed31f6a14c40a2c40aa150dc29d2830338e04
size 2571158184
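The seven LFS pointers record on-disk shard sizes that can be sanity-checked against the model's expected footprint. A sketch using the sizes listed above (the small surplus over raw tensor bytes is expected, since each safetensors file carries a JSON header):

```python
# On-disk sizes of the seven shards, from the LFS pointers above.
shard_sizes = [
    4886466168, 4832007448, 4999813112, 4999813128,
    4832007496, 4999813120, 2571158184,
]
total_bytes = sum(shard_sizes)
assert total_bytes == 32_121_078_656  # ~32.1 GB on disk

# Tensor bytes alone (metadata.total_size in model.safetensors.index.json)
# are slightly smaller; the difference is the per-file safetensors headers.
tensor_bytes = 32_121_044_992
assert 0 < total_bytes - tensor_bytes < 7 * 100_000

# At 4 bytes per float32 value, this is ~8.03B parameters.
assert round(total_bytes / 4 / 1e9, 2) == 8.03
```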
298
model.safetensors.index.json
Normal file
@@ -0,0 +1,298 @@
{
  "metadata": {
    "total_size": 32121044992
  },
  "weight_map": {
    "lm_head.weight": "model-00007-of-00007.safetensors",
    "model.embed_tokens.weight": "model-00001-of-00007.safetensors",
    "model.layers.0.input_layernorm.weight": "model-00001-of-00007.safetensors",
    "model.layers.0.mlp.down_proj.weight": "model-00001-of-00007.safetensors",
    "model.layers.0.mlp.gate_proj.weight": "model-00001-of-00007.safetensors",
    "model.layers.0.mlp.up_proj.weight": "model-00001-of-00007.safetensors",
    "model.layers.0.post_attention_layernorm.weight": "model-00001-of-00007.safetensors",
    "model.layers.0.self_attn.k_proj.weight": "model-00001-of-00007.safetensors",
    "model.layers.0.self_attn.o_proj.weight": "model-00001-of-00007.safetensors",
    "model.layers.0.self_attn.q_proj.weight": "model-00001-of-00007.safetensors",
    "model.layers.0.self_attn.v_proj.weight": "model-00001-of-00007.safetensors",
    "model.layers.1.input_layernorm.weight": "model-00001-of-00007.safetensors",
    "model.layers.1.mlp.down_proj.weight": "model-00001-of-00007.safetensors",
    "model.layers.1.mlp.gate_proj.weight": "model-00001-of-00007.safetensors",
    "model.layers.1.mlp.up_proj.weight": "model-00001-of-00007.safetensors",
    "model.layers.1.post_attention_layernorm.weight": "model-00001-of-00007.safetensors",
    "model.layers.1.self_attn.k_proj.weight": "model-00001-of-00007.safetensors",
    "model.layers.1.self_attn.o_proj.weight": "model-00001-of-00007.safetensors",
    "model.layers.1.self_attn.q_proj.weight": "model-00001-of-00007.safetensors",
    "model.layers.1.self_attn.v_proj.weight": "model-00001-of-00007.safetensors",
    "model.layers.10.input_layernorm.weight": "model-00003-of-00007.safetensors",
    "model.layers.10.mlp.down_proj.weight": "model-00003-of-00007.safetensors",
    "model.layers.10.mlp.gate_proj.weight": "model-00003-of-00007.safetensors",
    "model.layers.10.mlp.up_proj.weight": "model-00003-of-00007.safetensors",
    "model.layers.10.post_attention_layernorm.weight": "model-00003-of-00007.safetensors",
    "model.layers.10.self_attn.k_proj.weight": "model-00003-of-00007.safetensors",
    "model.layers.10.self_attn.o_proj.weight": "model-00003-of-00007.safetensors",
    "model.layers.10.self_attn.q_proj.weight": "model-00003-of-00007.safetensors",
    "model.layers.10.self_attn.v_proj.weight": "model-00003-of-00007.safetensors",
    "model.layers.11.input_layernorm.weight": "model-00003-of-00007.safetensors",
    "model.layers.11.mlp.down_proj.weight": "model-00003-of-00007.safetensors",
    "model.layers.11.mlp.gate_proj.weight": "model-00003-of-00007.safetensors",
    "model.layers.11.mlp.up_proj.weight": "model-00003-of-00007.safetensors",
    "model.layers.11.post_attention_layernorm.weight": "model-00003-of-00007.safetensors",
    "model.layers.11.self_attn.k_proj.weight": "model-00003-of-00007.safetensors",
    "model.layers.11.self_attn.o_proj.weight": "model-00003-of-00007.safetensors",
    "model.layers.11.self_attn.q_proj.weight": "model-00003-of-00007.safetensors",
    "model.layers.11.self_attn.v_proj.weight": "model-00003-of-00007.safetensors",
    "model.layers.12.input_layernorm.weight": "model-00003-of-00007.safetensors",
    "model.layers.12.mlp.down_proj.weight": "model-00003-of-00007.safetensors",
    "model.layers.12.mlp.gate_proj.weight": "model-00003-of-00007.safetensors",
    "model.layers.12.mlp.up_proj.weight": "model-00003-of-00007.safetensors",
    "model.layers.12.post_attention_layernorm.weight": "model-00003-of-00007.safetensors",
    "model.layers.12.self_attn.k_proj.weight": "model-00003-of-00007.safetensors",
    "model.layers.12.self_attn.o_proj.weight": "model-00003-of-00007.safetensors",
    "model.layers.12.self_attn.q_proj.weight": "model-00003-of-00007.safetensors",
    "model.layers.12.self_attn.v_proj.weight": "model-00003-of-00007.safetensors",
    "model.layers.13.input_layernorm.weight": "model-00003-of-00007.safetensors",
    "model.layers.13.mlp.down_proj.weight": "model-00003-of-00007.safetensors",
    "model.layers.13.mlp.gate_proj.weight": "model-00003-of-00007.safetensors",
    "model.layers.13.mlp.up_proj.weight": "model-00003-of-00007.safetensors",
    "model.layers.13.post_attention_layernorm.weight": "model-00003-of-00007.safetensors",
    "model.layers.13.self_attn.k_proj.weight": "model-00003-of-00007.safetensors",
    "model.layers.13.self_attn.o_proj.weight": "model-00003-of-00007.safetensors",
    "model.layers.13.self_attn.q_proj.weight": "model-00003-of-00007.safetensors",
    "model.layers.13.self_attn.v_proj.weight": "model-00003-of-00007.safetensors",
    "model.layers.14.input_layernorm.weight": "model-00004-of-00007.safetensors",
    "model.layers.14.mlp.down_proj.weight": "model-00004-of-00007.safetensors",
    "model.layers.14.mlp.gate_proj.weight": "model-00003-of-00007.safetensors",
    "model.layers.14.mlp.up_proj.weight": "model-00004-of-00007.safetensors",
    "model.layers.14.post_attention_layernorm.weight": "model-00004-of-00007.safetensors",
    "model.layers.14.self_attn.k_proj.weight": "model-00003-of-00007.safetensors",
    "model.layers.14.self_attn.o_proj.weight": "model-00003-of-00007.safetensors",
    "model.layers.14.self_attn.q_proj.weight": "model-00003-of-00007.safetensors",
    "model.layers.14.self_attn.v_proj.weight": "model-00003-of-00007.safetensors",
    "model.layers.15.input_layernorm.weight": "model-00004-of-00007.safetensors",
    "model.layers.15.mlp.down_proj.weight": "model-00004-of-00007.safetensors",
    "model.layers.15.mlp.gate_proj.weight": "model-00004-of-00007.safetensors",
    "model.layers.15.mlp.up_proj.weight": "model-00004-of-00007.safetensors",
    "model.layers.15.post_attention_layernorm.weight": "model-00004-of-00007.safetensors",
    "model.layers.15.self_attn.k_proj.weight": "model-00004-of-00007.safetensors",
    "model.layers.15.self_attn.o_proj.weight": "model-00004-of-00007.safetensors",
    "model.layers.15.self_attn.q_proj.weight": "model-00004-of-00007.safetensors",
    "model.layers.15.self_attn.v_proj.weight": "model-00004-of-00007.safetensors",
    "model.layers.16.input_layernorm.weight": "model-00004-of-00007.safetensors",
    "model.layers.16.mlp.down_proj.weight": "model-00004-of-00007.safetensors",
    "model.layers.16.mlp.gate_proj.weight": "model-00004-of-00007.safetensors",
    "model.layers.16.mlp.up_proj.weight": "model-00004-of-00007.safetensors",
    "model.layers.16.post_attention_layernorm.weight": "model-00004-of-00007.safetensors",
    "model.layers.16.self_attn.k_proj.weight": "model-00004-of-00007.safetensors",
    "model.layers.16.self_attn.o_proj.weight": "model-00004-of-00007.safetensors",
    "model.layers.16.self_attn.q_proj.weight": "model-00004-of-00007.safetensors",
    "model.layers.16.self_attn.v_proj.weight": "model-00004-of-00007.safetensors",
    "model.layers.17.input_layernorm.weight": "model-00004-of-00007.safetensors",
    "model.layers.17.mlp.down_proj.weight": "model-00004-of-00007.safetensors",
    "model.layers.17.mlp.gate_proj.weight": "model-00004-of-00007.safetensors",
    "model.layers.17.mlp.up_proj.weight": "model-00004-of-00007.safetensors",
    "model.layers.17.post_attention_layernorm.weight": "model-00004-of-00007.safetensors",
    "model.layers.17.self_attn.k_proj.weight": "model-00004-of-00007.safetensors",
    "model.layers.17.self_attn.o_proj.weight": "model-00004-of-00007.safetensors",
    "model.layers.17.self_attn.q_proj.weight": "model-00004-of-00007.safetensors",
    "model.layers.17.self_attn.v_proj.weight": "model-00004-of-00007.safetensors",
    "model.layers.18.input_layernorm.weight": "model-00004-of-00007.safetensors",
    "model.layers.18.mlp.down_proj.weight": "model-00004-of-00007.safetensors",
    "model.layers.18.mlp.gate_proj.weight": "model-00004-of-00007.safetensors",
    "model.layers.18.mlp.up_proj.weight": "model-00004-of-00007.safetensors",
    "model.layers.18.post_attention_layernorm.weight": "model-00004-of-00007.safetensors",
    "model.layers.18.self_attn.k_proj.weight": "model-00004-of-00007.safetensors",
    "model.layers.18.self_attn.o_proj.weight": "model-00004-of-00007.safetensors",
    "model.layers.18.self_attn.q_proj.weight": "model-00004-of-00007.safetensors",
    "model.layers.18.self_attn.v_proj.weight": "model-00004-of-00007.safetensors",
    "model.layers.19.input_layernorm.weight": "model-00004-of-00007.safetensors",
    "model.layers.19.mlp.down_proj.weight": "model-00004-of-00007.safetensors",
    "model.layers.19.mlp.gate_proj.weight": "model-00004-of-00007.safetensors",
    "model.layers.19.mlp.up_proj.weight": "model-00004-of-00007.safetensors",
    "model.layers.19.post_attention_layernorm.weight": "model-00004-of-00007.safetensors",
    "model.layers.19.self_attn.k_proj.weight": "model-00004-of-00007.safetensors",
    "model.layers.19.self_attn.o_proj.weight": "model-00004-of-00007.safetensors",
    "model.layers.19.self_attn.q_proj.weight": "model-00004-of-00007.safetensors",
    "model.layers.19.self_attn.v_proj.weight": "model-00004-of-00007.safetensors",
    "model.layers.2.input_layernorm.weight": "model-00001-of-00007.safetensors",
    "model.layers.2.mlp.down_proj.weight": "model-00001-of-00007.safetensors",
    "model.layers.2.mlp.gate_proj.weight": "model-00001-of-00007.safetensors",
    "model.layers.2.mlp.up_proj.weight": "model-00001-of-00007.safetensors",
    "model.layers.2.post_attention_layernorm.weight": "model-00001-of-00007.safetensors",
    "model.layers.2.self_attn.k_proj.weight": "model-00001-of-00007.safetensors",
    "model.layers.2.self_attn.o_proj.weight": "model-00001-of-00007.safetensors",
    "model.layers.2.self_attn.q_proj.weight": "model-00001-of-00007.safetensors",
    "model.layers.2.self_attn.v_proj.weight": "model-00001-of-00007.safetensors",
    "model.layers.20.input_layernorm.weight": "model-00005-of-00007.safetensors",
    "model.layers.20.mlp.down_proj.weight": "model-00005-of-00007.safetensors",
    "model.layers.20.mlp.gate_proj.weight": "model-00005-of-00007.safetensors",
    "model.layers.20.mlp.up_proj.weight": "model-00005-of-00007.safetensors",
    "model.layers.20.post_attention_layernorm.weight": "model-00005-of-00007.safetensors",
    "model.layers.20.self_attn.k_proj.weight": "model-00004-of-00007.safetensors",
    "model.layers.20.self_attn.o_proj.weight": "model-00004-of-00007.safetensors",
    "model.layers.20.self_attn.q_proj.weight": "model-00004-of-00007.safetensors",
    "model.layers.20.self_attn.v_proj.weight": "model-00004-of-00007.safetensors",
    "model.layers.21.input_layernorm.weight": "model-00005-of-00007.safetensors",
    "model.layers.21.mlp.down_proj.weight": "model-00005-of-00007.safetensors",
    "model.layers.21.mlp.gate_proj.weight": "model-00005-of-00007.safetensors",
    "model.layers.21.mlp.up_proj.weight": "model-00005-of-00007.safetensors",
    "model.layers.21.post_attention_layernorm.weight": "model-00005-of-00007.safetensors",
    "model.layers.21.self_attn.k_proj.weight": "model-00005-of-00007.safetensors",
    "model.layers.21.self_attn.o_proj.weight": "model-00005-of-00007.safetensors",
    "model.layers.21.self_attn.q_proj.weight": "model-00005-of-00007.safetensors",
    "model.layers.21.self_attn.v_proj.weight": "model-00005-of-00007.safetensors",
    "model.layers.22.input_layernorm.weight": "model-00005-of-00007.safetensors",
    "model.layers.22.mlp.down_proj.weight": "model-00005-of-00007.safetensors",
    "model.layers.22.mlp.gate_proj.weight": "model-00005-of-00007.safetensors",
    "model.layers.22.mlp.up_proj.weight": "model-00005-of-00007.safetensors",
    "model.layers.22.post_attention_layernorm.weight": "model-00005-of-00007.safetensors",
    "model.layers.22.self_attn.k_proj.weight": "model-00005-of-00007.safetensors",
    "model.layers.22.self_attn.o_proj.weight": "model-00005-of-00007.safetensors",
    "model.layers.22.self_attn.q_proj.weight": "model-00005-of-00007.safetensors",
    "model.layers.22.self_attn.v_proj.weight": "model-00005-of-00007.safetensors",
    "model.layers.23.input_layernorm.weight": "model-00005-of-00007.safetensors",
    "model.layers.23.mlp.down_proj.weight": "model-00005-of-00007.safetensors",
    "model.layers.23.mlp.gate_proj.weight": "model-00005-of-00007.safetensors",
    "model.layers.23.mlp.up_proj.weight": "model-00005-of-00007.safetensors",
    "model.layers.23.post_attention_layernorm.weight": "model-00005-of-00007.safetensors",
    "model.layers.23.self_attn.k_proj.weight": "model-00005-of-00007.safetensors",
    "model.layers.23.self_attn.o_proj.weight": "model-00005-of-00007.safetensors",
    "model.layers.23.self_attn.q_proj.weight": "model-00005-of-00007.safetensors",
    "model.layers.23.self_attn.v_proj.weight": "model-00005-of-00007.safetensors",
    "model.layers.24.input_layernorm.weight": "model-00005-of-00007.safetensors",
    "model.layers.24.mlp.down_proj.weight": "model-00005-of-00007.safetensors",
    "model.layers.24.mlp.gate_proj.weight": "model-00005-of-00007.safetensors",
    "model.layers.24.mlp.up_proj.weight": "model-00005-of-00007.safetensors",
    "model.layers.24.post_attention_layernorm.weight": "model-00005-of-00007.safetensors",
    "model.layers.24.self_attn.k_proj.weight": "model-00005-of-00007.safetensors",
    "model.layers.24.self_attn.o_proj.weight": "model-00005-of-00007.safetensors",
    "model.layers.24.self_attn.q_proj.weight": "model-00005-of-00007.safetensors",
    "model.layers.24.self_attn.v_proj.weight": "model-00005-of-00007.safetensors",
    "model.layers.25.input_layernorm.weight": "model-00006-of-00007.safetensors",
    "model.layers.25.mlp.down_proj.weight": "model-00006-of-00007.safetensors",
    "model.layers.25.mlp.gate_proj.weight": "model-00005-of-00007.safetensors",
    "model.layers.25.mlp.up_proj.weight": "model-00005-of-00007.safetensors",
    "model.layers.25.post_attention_layernorm.weight": "model-00006-of-00007.safetensors",
    "model.layers.25.self_attn.k_proj.weight": "model-00005-of-00007.safetensors",
    "model.layers.25.self_attn.o_proj.weight": "model-00005-of-00007.safetensors",
    "model.layers.25.self_attn.q_proj.weight": "model-00005-of-00007.safetensors",
    "model.layers.25.self_attn.v_proj.weight": "model-00005-of-00007.safetensors",
    "model.layers.26.input_layernorm.weight": "model-00006-of-00007.safetensors",
    "model.layers.26.mlp.down_proj.weight": "model-00006-of-00007.safetensors",
    "model.layers.26.mlp.gate_proj.weight": "model-00006-of-00007.safetensors",
    "model.layers.26.mlp.up_proj.weight": "model-00006-of-00007.safetensors",
    "model.layers.26.post_attention_layernorm.weight": "model-00006-of-00007.safetensors",
    "model.layers.26.self_attn.k_proj.weight": "model-00006-of-00007.safetensors",
    "model.layers.26.self_attn.o_proj.weight": "model-00006-of-00007.safetensors",
    "model.layers.26.self_attn.q_proj.weight": "model-00006-of-00007.safetensors",
    "model.layers.26.self_attn.v_proj.weight": "model-00006-of-00007.safetensors",
    "model.layers.27.input_layernorm.weight": "model-00006-of-00007.safetensors",
    "model.layers.27.mlp.down_proj.weight": "model-00006-of-00007.safetensors",
    "model.layers.27.mlp.gate_proj.weight": "model-00006-of-00007.safetensors",
    "model.layers.27.mlp.up_proj.weight": "model-00006-of-00007.safetensors",
    "model.layers.27.post_attention_layernorm.weight": "model-00006-of-00007.safetensors",
    "model.layers.27.self_attn.k_proj.weight": "model-00006-of-00007.safetensors",
    "model.layers.27.self_attn.o_proj.weight": "model-00006-of-00007.safetensors",
    "model.layers.27.self_attn.q_proj.weight": "model-00006-of-00007.safetensors",
    "model.layers.27.self_attn.v_proj.weight": "model-00006-of-00007.safetensors",
    "model.layers.28.input_layernorm.weight": "model-00006-of-00007.safetensors",
    "model.layers.28.mlp.down_proj.weight": "model-00006-of-00007.safetensors",
    "model.layers.28.mlp.gate_proj.weight": "model-00006-of-00007.safetensors",
    "model.layers.28.mlp.up_proj.weight": "model-00006-of-00007.safetensors",
    "model.layers.28.post_attention_layernorm.weight": "model-00006-of-00007.safetensors",
    "model.layers.28.self_attn.k_proj.weight": "model-00006-of-00007.safetensors",
    "model.layers.28.self_attn.o_proj.weight": "model-00006-of-00007.safetensors",
    "model.layers.28.self_attn.q_proj.weight": "model-00006-of-00007.safetensors",
    "model.layers.28.self_attn.v_proj.weight": "model-00006-of-00007.safetensors",
    "model.layers.29.input_layernorm.weight": "model-00006-of-00007.safetensors",
    "model.layers.29.mlp.down_proj.weight": "model-00006-of-00007.safetensors",
    "model.layers.29.mlp.gate_proj.weight": "model-00006-of-00007.safetensors",
    "model.layers.29.mlp.up_proj.weight": "model-00006-of-00007.safetensors",
    "model.layers.29.post_attention_layernorm.weight": "model-00006-of-00007.safetensors",
    "model.layers.29.self_attn.k_proj.weight": "model-00006-of-00007.safetensors",
    "model.layers.29.self_attn.o_proj.weight": "model-00006-of-00007.safetensors",
    "model.layers.29.self_attn.q_proj.weight": "model-00006-of-00007.safetensors",
    "model.layers.29.self_attn.v_proj.weight": "model-00006-of-00007.safetensors",
    "model.layers.3.input_layernorm.weight": "model-00002-of-00007.safetensors",
    "model.layers.3.mlp.down_proj.weight": "model-00002-of-00007.safetensors",
    "model.layers.3.mlp.gate_proj.weight": "model-00002-of-00007.safetensors",
    "model.layers.3.mlp.up_proj.weight": "model-00002-of-00007.safetensors",
    "model.layers.3.post_attention_layernorm.weight": "model-00002-of-00007.safetensors",
    "model.layers.3.self_attn.k_proj.weight": "model-00001-of-00007.safetensors",
    "model.layers.3.self_attn.o_proj.weight": "model-00001-of-00007.safetensors",
    "model.layers.3.self_attn.q_proj.weight": "model-00001-of-00007.safetensors",
    "model.layers.3.self_attn.v_proj.weight": "model-00001-of-00007.safetensors",
    "model.layers.30.input_layernorm.weight": "model-00006-of-00007.safetensors",
    "model.layers.30.mlp.down_proj.weight": "model-00006-of-00007.safetensors",
    "model.layers.30.mlp.gate_proj.weight": "model-00006-of-00007.safetensors",
    "model.layers.30.mlp.up_proj.weight": "model-00006-of-00007.safetensors",
    "model.layers.30.post_attention_layernorm.weight": "model-00006-of-00007.safetensors",
    "model.layers.30.self_attn.k_proj.weight": "model-00006-of-00007.safetensors",
    "model.layers.30.self_attn.o_proj.weight": "model-00006-of-00007.safetensors",
    "model.layers.30.self_attn.q_proj.weight": "model-00006-of-00007.safetensors",
    "model.layers.30.self_attn.v_proj.weight": "model-00006-of-00007.safetensors",
    "model.layers.31.input_layernorm.weight": "model-00007-of-00007.safetensors",
    "model.layers.31.mlp.down_proj.weight": "model-00007-of-00007.safetensors",
    "model.layers.31.mlp.gate_proj.weight": "model-00006-of-00007.safetensors",
    "model.layers.31.mlp.up_proj.weight": "model-00007-of-00007.safetensors",
    "model.layers.31.post_attention_layernorm.weight": "model-00007-of-00007.safetensors",
    "model.layers.31.self_attn.k_proj.weight": "model-00006-of-00007.safetensors",
    "model.layers.31.self_attn.o_proj.weight": "model-00006-of-00007.safetensors",
    "model.layers.31.self_attn.q_proj.weight": "model-00006-of-00007.safetensors",
    "model.layers.31.self_attn.v_proj.weight": "model-00006-of-00007.safetensors",
    "model.layers.4.input_layernorm.weight": "model-00002-of-00007.safetensors",
    "model.layers.4.mlp.down_proj.weight": "model-00002-of-00007.safetensors",
    "model.layers.4.mlp.gate_proj.weight": "model-00002-of-00007.safetensors",
    "model.layers.4.mlp.up_proj.weight": "model-00002-of-00007.safetensors",
    "model.layers.4.post_attention_layernorm.weight": "model-00002-of-00007.safetensors",
    "model.layers.4.self_attn.k_proj.weight": "model-00002-of-00007.safetensors",
    "model.layers.4.self_attn.o_proj.weight": "model-00002-of-00007.safetensors",
    "model.layers.4.self_attn.q_proj.weight": "model-00002-of-00007.safetensors",
    "model.layers.4.self_attn.v_proj.weight": "model-00002-of-00007.safetensors",
    "model.layers.5.input_layernorm.weight": "model-00002-of-00007.safetensors",
    "model.layers.5.mlp.down_proj.weight": "model-00002-of-00007.safetensors",
    "model.layers.5.mlp.gate_proj.weight": "model-00002-of-00007.safetensors",
    "model.layers.5.mlp.up_proj.weight": "model-00002-of-00007.safetensors",
    "model.layers.5.post_attention_layernorm.weight": "model-00002-of-00007.safetensors",
    "model.layers.5.self_attn.k_proj.weight": "model-00002-of-00007.safetensors",
    "model.layers.5.self_attn.o_proj.weight": "model-00002-of-00007.safetensors",
    "model.layers.5.self_attn.q_proj.weight": "model-00002-of-00007.safetensors",
    "model.layers.5.self_attn.v_proj.weight": "model-00002-of-00007.safetensors",
    "model.layers.6.input_layernorm.weight": "model-00002-of-00007.safetensors",
    "model.layers.6.mlp.down_proj.weight": "model-00002-of-00007.safetensors",
    "model.layers.6.mlp.gate_proj.weight": "model-00002-of-00007.safetensors",
    "model.layers.6.mlp.up_proj.weight": "model-00002-of-00007.safetensors",
    "model.layers.6.post_attention_layernorm.weight": "model-00002-of-00007.safetensors",
    "model.layers.6.self_attn.k_proj.weight": "model-00002-of-00007.safetensors",
    "model.layers.6.self_attn.o_proj.weight": "model-00002-of-00007.safetensors",
    "model.layers.6.self_attn.q_proj.weight": "model-00002-of-00007.safetensors",
    "model.layers.6.self_attn.v_proj.weight": "model-00002-of-00007.safetensors",
    "model.layers.7.input_layernorm.weight": "model-00002-of-00007.safetensors",
    "model.layers.7.mlp.down_proj.weight": "model-00002-of-00007.safetensors",
    "model.layers.7.mlp.gate_proj.weight": "model-00002-of-00007.safetensors",
    "model.layers.7.mlp.up_proj.weight": "model-00002-of-00007.safetensors",
    "model.layers.7.post_attention_layernorm.weight": "model-00002-of-00007.safetensors",
    "model.layers.7.self_attn.k_proj.weight": "model-00002-of-00007.safetensors",
    "model.layers.7.self_attn.o_proj.weight": "model-00002-of-00007.safetensors",
    "model.layers.7.self_attn.q_proj.weight": "model-00002-of-00007.safetensors",
    "model.layers.7.self_attn.v_proj.weight": "model-00002-of-00007.safetensors",
    "model.layers.8.input_layernorm.weight": "model-00003-of-00007.safetensors",
    "model.layers.8.mlp.down_proj.weight": "model-00003-of-00007.safetensors",
    "model.layers.8.mlp.gate_proj.weight": "model-00002-of-00007.safetensors",
|
||||
"model.layers.8.mlp.up_proj.weight": "model-00002-of-00007.safetensors",
|
||||
"model.layers.8.post_attention_layernorm.weight": "model-00003-of-00007.safetensors",
|
||||
"model.layers.8.self_attn.k_proj.weight": "model-00002-of-00007.safetensors",
|
||||
"model.layers.8.self_attn.o_proj.weight": "model-00002-of-00007.safetensors",
|
||||
"model.layers.8.self_attn.q_proj.weight": "model-00002-of-00007.safetensors",
|
||||
"model.layers.8.self_attn.v_proj.weight": "model-00002-of-00007.safetensors",
|
||||
"model.layers.9.input_layernorm.weight": "model-00003-of-00007.safetensors",
|
||||
"model.layers.9.mlp.down_proj.weight": "model-00003-of-00007.safetensors",
|
||||
"model.layers.9.mlp.gate_proj.weight": "model-00003-of-00007.safetensors",
|
||||
"model.layers.9.mlp.up_proj.weight": "model-00003-of-00007.safetensors",
|
||||
"model.layers.9.post_attention_layernorm.weight": "model-00003-of-00007.safetensors",
|
||||
"model.layers.9.self_attn.k_proj.weight": "model-00003-of-00007.safetensors",
|
||||
"model.layers.9.self_attn.o_proj.weight": "model-00003-of-00007.safetensors",
|
||||
"model.layers.9.self_attn.q_proj.weight": "model-00003-of-00007.safetensors",
|
||||
"model.layers.9.self_attn.v_proj.weight": "model-00003-of-00007.safetensors",
|
||||
"model.norm.weight": "model-00007-of-00007.safetensors"
|
||||
}
|
||||
}
|
||||
23
special_tokens_map.json
Normal file
@@ -0,0 +1,23 @@
{
"bos_token": {
"content": "<|begin_of_text|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
},
"eos_token": {
"content": "<|end_of_text|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
},
"pad_token": {
"content": "<|end_of_text|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
}
}
3
tokenizer.json
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:3c5cf44023714fb39b05e71e425f8d7b92805ff73f7988b083b8c87f0bf87393
size 17209961
2064
tokenizer_config.json
Normal file
File diff suppressed because it is too large
9
train_results.json
Normal file
@@ -0,0 +1,9 @@
{
"epoch": 0.9989528795811519,
"total_flos": 0.0,
"train_loss": 4.588983364824979,
"train_runtime": 6140.251,
"train_samples": 61135,
"train_samples_per_second": 9.956,
"train_steps_per_second": 0.078
}
895
trainer_state.json
Normal file
@@ -0,0 +1,895 @@
{
"best_global_step": null,
"best_metric": null,
"best_model_checkpoint": null,
"epoch": 0.9989528795811519,
"eval_steps": 200,
"global_step": 477,
"is_hyper_param_search": false,
"is_local_process_zero": true,
"is_world_process_zero": true,
"log_history": [
{
"epoch": 0.0020942408376963353,
"grad_norm": 28.59265899658203,
"learning_rate": 0.0,
"logits/chosen": -0.5995081663131714,
"logits/rejected": -0.6144353747367859,
"logps/chosen": -267.5272216796875,
"logps/ref_chosen": -267.5935363769531,
"logps/ref_rejected": -204.2306671142578,
"logps/rejected": -204.23907470703125,
"loss": 5.5463,
"r_dpo/chosen_len": 257.75,
"r_dpo/length_delta": 47.875,
"r_dpo/regularization_term": 0.0,
"r_dpo/rejected_len": 209.875,
"step": 1
},
{
"epoch": 0.020942408376963352,
"grad_norm": 26.517539978027344,
"learning_rate": 9.375e-08,
"logits/chosen": -0.6324708461761475,
"logits/rejected": -0.6374377012252808,
"logps/chosen": -296.6641845703125,
"logps/ref_chosen": -296.63226318359375,
"logps/ref_rejected": -258.9539489746094,
"logps/rejected": -258.9976501464844,
"loss": 5.5453,
"r_dpo/chosen_len": 291.8680419921875,
"r_dpo/length_delta": 49.76388931274414,
"r_dpo/regularization_term": 0.0,
"r_dpo/rejected_len": 242.1041717529297,
"step": 10
},
{
"epoch": 0.041884816753926704,
"grad_norm": 29.77713966369629,
"learning_rate": 1.9791666666666664e-07,
"logits/chosen": -0.5966575145721436,
"logits/rejected": -0.6271787881851196,
"logps/chosen": -297.93768310546875,
"logps/ref_chosen": -297.9349365234375,
"logps/ref_rejected": -256.9902648925781,
"logps/rejected": -257.028564453125,
"loss": 5.543,
"r_dpo/chosen_len": 291.29998779296875,
"r_dpo/length_delta": 52.89374923706055,
"r_dpo/regularization_term": 0.0,
"r_dpo/rejected_len": 238.40625,
"step": 20
},
{
"epoch": 0.06282722513089005,
"grad_norm": 28.96179962158203,
"learning_rate": 3.020833333333333e-07,
"logits/chosen": -0.6152126789093018,
"logits/rejected": -0.6072013974189758,
"logps/chosen": -278.40167236328125,
"logps/ref_chosen": -278.64752197265625,
"logps/ref_rejected": -249.309814453125,
"logps/rejected": -249.2447052001953,
"loss": 5.5384,
"r_dpo/chosen_len": 270.8812561035156,
"r_dpo/length_delta": 25.228124618530273,
"r_dpo/regularization_term": 0.0,
"r_dpo/rejected_len": 245.6531219482422,
"step": 30
},
{
"epoch": 0.08376963350785341,
"grad_norm": 27.361820220947266,
"learning_rate": 4.0625e-07,
"logits/chosen": -0.6175118088722229,
"logits/rejected": -0.6425970196723938,
"logps/chosen": -282.63714599609375,
"logps/ref_chosen": -283.49981689453125,
"logps/ref_rejected": -265.32733154296875,
"logps/rejected": -265.02880859375,
"loss": 5.5221,
"r_dpo/chosen_len": 281.43438720703125,
"r_dpo/length_delta": 33.34375,
"r_dpo/regularization_term": 0.0,
"r_dpo/rejected_len": 248.0906219482422,
"step": 40
},
{
"epoch": 0.10471204188481675,
"grad_norm": 27.90146827697754,
"learning_rate": 4.999932966293553e-07,
"logits/chosen": -0.6317179799079895,
"logits/rejected": -0.6722968220710754,
"logps/chosen": -278.4624938964844,
"logps/ref_chosen": -280.224365234375,
"logps/ref_rejected": -274.3541259765625,
"logps/rejected": -273.77716064453125,
"loss": 5.4963,
"r_dpo/chosen_len": 290.32501220703125,
"r_dpo/length_delta": 35.11249923706055,
"r_dpo/regularization_term": 0.0,
"r_dpo/rejected_len": 255.21249389648438,
"step": 50
},
{
"epoch": 0.1256544502617801,
"grad_norm": 28.248376846313477,
"learning_rate": 4.991893270335525e-07,
"logits/chosen": -0.6456497311592102,
"logits/rejected": -0.6589030027389526,
"logps/chosen": -278.4886474609375,
"logps/ref_chosen": -281.12664794921875,
"logps/ref_rejected": -259.86456298828125,
"logps/rejected": -259.68756103515625,
"loss": 5.4449,
"r_dpo/chosen_len": 273.953125,
"r_dpo/length_delta": 29.084375381469727,
"r_dpo/regularization_term": 0.0,
"r_dpo/rejected_len": 244.86874389648438,
"step": 60
},
{
"epoch": 0.14659685863874344,
"grad_norm": 28.587810516357422,
"learning_rate": 4.970496218214204e-07,
"logits/chosen": -0.7054015398025513,
"logits/rejected": -0.710257887840271,
"logps/chosen": -283.9915466308594,
"logps/ref_chosen": -287.71063232421875,
"logps/ref_rejected": -276.839599609375,
"logps/rejected": -277.05084228515625,
"loss": 5.3872,
"r_dpo/chosen_len": 267.4937438964844,
"r_dpo/length_delta": 14.484375,
"r_dpo/regularization_term": 0.0,
"r_dpo/rejected_len": 253.00936889648438,
"step": 70
},
{
"epoch": 0.16753926701570682,
"grad_norm": 28.91149139404297,
"learning_rate": 4.935856505068998e-07,
"logits/chosen": -0.6922556757926941,
"logits/rejected": -0.6884378790855408,
"logps/chosen": -276.6068115234375,
"logps/ref_chosen": -280.123046875,
"logps/ref_rejected": -258.8989562988281,
"logps/rejected": -260.3894958496094,
"loss": 5.316,
"r_dpo/chosen_len": 267.4781188964844,
"r_dpo/length_delta": 32.46562576293945,
"r_dpo/regularization_term": 0.0,
"r_dpo/rejected_len": 235.0124969482422,
"step": 80
},
{
"epoch": 0.18848167539267016,
"grad_norm": 29.948001861572266,
"learning_rate": 4.8881598109976e-07,
"logits/chosen": -0.7148661017417908,
"logits/rejected": -0.7195515632629395,
"logps/chosen": -277.6590270996094,
"logps/ref_chosen": -278.02545166015625,
"logps/ref_rejected": -251.0922393798828,
"logps/rejected": -258.99139404296875,
"loss": 5.256,
"r_dpo/chosen_len": 274.20623779296875,
"r_dpo/length_delta": 44.97187423706055,
"r_dpo/regularization_term": 0.0,
"r_dpo/rejected_len": 229.234375,
"step": 90
},
{
"epoch": 0.2094240837696335,
"grad_norm": 35.5301628112793,
"learning_rate": 4.827661805750437e-07,
"logits/chosen": -0.7245436906814575,
"logits/rejected": -0.740323543548584,
"logps/chosen": -277.5360412597656,
"logps/ref_chosen": -274.0089416503906,
"logps/ref_rejected": -274.14447021484375,
"logps/rejected": -289.0398254394531,
"loss": 5.1787,
"r_dpo/chosen_len": 275.3343811035156,
"r_dpo/length_delta": 21.912500381469727,
"r_dpo/regularization_term": 0.0,
"r_dpo/rejected_len": 253.421875,
"step": 100
},
{
"epoch": 0.23036649214659685,
"grad_norm": 34.80721664428711,
"learning_rate": 4.75468677825789e-07,
"logits/chosen": -0.7716718912124634,
"logits/rejected": -0.787286102771759,
"logps/chosen": -280.61236572265625,
"logps/ref_chosen": -273.23333740234375,
"logps/ref_rejected": -263.88787841796875,
"logps/rejected": -287.048583984375,
"loss": 5.0023,
"r_dpo/chosen_len": 283.43438720703125,
"r_dpo/length_delta": 50.34375,
"r_dpo/regularization_term": 0.0,
"r_dpo/rejected_len": 233.0906219482422,
"step": 110
},
{
"epoch": 0.2513089005235602,
"grad_norm": 42.601844787597656,
"learning_rate": 4.669625898336438e-07,
"logits/chosen": -0.8156879544258118,
"logits/rejected": -0.8101061582565308,
"logps/chosen": -291.2214660644531,
"logps/ref_chosen": -269.77142333984375,
"logps/ref_rejected": -272.7685546875,
"logps/rejected": -310.86175537109375,
"loss": 4.9998,
"r_dpo/chosen_len": 264.7593688964844,
"r_dpo/length_delta": 13.840624809265137,
"r_dpo/regularization_term": 0.0,
"r_dpo/rejected_len": 250.9187469482422,
"step": 120
},
{
"epoch": 0.27225130890052357,
"grad_norm": 55.5117301940918,
"learning_rate": 4.5729351198915705e-07,
"logits/chosen": -0.8489806056022644,
"logits/rejected": -0.8312622904777527,
"logps/chosen": -301.61578369140625,
"logps/ref_chosen": -275.03448486328125,
"logps/ref_rejected": -276.39862060546875,
"logps/rejected": -325.17193603515625,
"loss": 4.8754,
"r_dpo/chosen_len": 266.625,
"r_dpo/length_delta": 18.668750762939453,
"r_dpo/regularization_term": 0.0,
"r_dpo/rejected_len": 247.9562530517578,
"step": 130
},
{
"epoch": 0.2931937172774869,
"grad_norm": 54.120121002197266,
"learning_rate": 4.4651327368569684e-07,
"logits/chosen": -0.8442527651786804,
"logits/rejected": -0.842776894569397,
"logps/chosen": -307.4860534667969,
"logps/ref_chosen": -276.0029602050781,
"logps/ref_rejected": -255.9320526123047,
"logps/rejected": -313.87664794921875,
"loss": 4.8385,
"r_dpo/chosen_len": 261.46875,
"r_dpo/length_delta": 22.375,
"r_dpo/regularization_term": 0.0,
"r_dpo/rejected_len": 239.09375,
"step": 140
},
{
"epoch": 0.31413612565445026,
"grad_norm": 64.18093872070312,
"learning_rate": 4.346796604970912e-07,
"logits/chosen": -0.8901342153549194,
"logits/rejected": -0.8738521337509155,
"logps/chosen": -332.3876037597656,
"logps/ref_chosen": -298.2093505859375,
"logps/ref_rejected": -254.8907012939453,
"logps/rejected": -322.8716125488281,
"loss": 4.7144,
"r_dpo/chosen_len": 283.84375,
"r_dpo/length_delta": 48.359375,
"r_dpo/regularization_term": 0.0,
"r_dpo/rejected_len": 235.484375,
"step": 150
},
{
"epoch": 0.33507853403141363,
"grad_norm": 61.34012985229492,
"learning_rate": 4.218561044282098e-07,
"logits/chosen": -0.8931280374526978,
"logits/rejected": -0.8777166604995728,
"logps/chosen": -336.1731262207031,
"logps/ref_chosen": -281.94189453125,
"logps/ref_rejected": -255.5653533935547,
"logps/rejected": -352.1561584472656,
"loss": 4.4414,
"r_dpo/chosen_len": 267.828125,
"r_dpo/length_delta": 41.368751525878906,
"r_dpo/regularization_term": 0.0,
"r_dpo/rejected_len": 226.45938110351562,
"step": 160
},
{
"epoch": 0.35602094240837695,
"grad_norm": 99.01383209228516,
"learning_rate": 4.081113438988443e-07,
"logits/chosen": -0.8554754257202148,
"logits/rejected": -0.836098313331604,
"logps/chosen": -345.37591552734375,
"logps/ref_chosen": -288.2863464355469,
"logps/ref_rejected": -239.758056640625,
"logps/rejected": -336.4610595703125,
"loss": 4.4666,
"r_dpo/chosen_len": 285.203125,
"r_dpo/length_delta": 46.396873474121094,
"r_dpo/regularization_term": 0.0,
"r_dpo/rejected_len": 238.80624389648438,
"step": 170
},
{
"epoch": 0.3769633507853403,
"grad_norm": 119.2037353515625,
"learning_rate": 3.935190552834828e-07,
"logits/chosen": -0.8174031376838684,
"logits/rejected": -0.8205310702323914,
"logps/chosen": -338.1886291503906,
"logps/ref_chosen": -286.17889404296875,
"logps/ref_rejected": -249.9820098876953,
"logps/rejected": -345.1977233886719,
"loss": 4.5145,
"r_dpo/chosen_len": 266.09063720703125,
"r_dpo/length_delta": 40.12812423706055,
"r_dpo/regularization_term": 0.0,
"r_dpo/rejected_len": 225.96249389648438,
"step": 180
},
{
"epoch": 0.39790575916230364,
"grad_norm": 115.86841583251953,
"learning_rate": 3.781574579820464e-07,
"logits/chosen": -0.8579525947570801,
"logits/rejected": -0.8599478602409363,
"logps/chosen": -350.3121032714844,
"logps/ref_chosen": -280.9278259277344,
"logps/ref_rejected": -254.3533477783203,
"logps/rejected": -377.564208984375,
"loss": 4.3438,
"r_dpo/chosen_len": 276.33123779296875,
"r_dpo/length_delta": 41.993751525878906,
"r_dpo/regularization_term": 0.0,
"r_dpo/rejected_len": 234.33749389648438,
"step": 190
},
{
"epoch": 0.418848167539267,
"grad_norm": 108.07948303222656,
"learning_rate": 3.621088951385353e-07,
"logits/chosen": -0.8816668391227722,
"logits/rejected": -0.8807156682014465,
"logps/chosen": -321.8008728027344,
"logps/ref_chosen": -253.1712188720703,
"logps/ref_rejected": -241.90478515625,
"logps/rejected": -354.93450927734375,
"loss": 4.4915,
"r_dpo/chosen_len": 248.0749969482422,
"r_dpo/length_delta": 28.131250381469727,
"r_dpo/regularization_term": 0.0,
"r_dpo/rejected_len": 219.94375610351562,
"step": 200
},
{
"epoch": 0.418848167539267,
"eval_logits/chosen": -0.9071072340011597,
"eval_logits/rejected": -0.884462296962738,
"eval_logps/chosen": -383.4085998535156,
"eval_logps/ref_chosen": -288.6414794921875,
"eval_logps/ref_rejected": -265.96160888671875,
"eval_logps/rejected": -408.76361083984375,
"eval_loss": 0.5614430904388428,
"eval_r_dpo/chosen_len": 286.97601318359375,
"eval_r_dpo/length_delta": 40.88800048828125,
"eval_r_dpo/regularization_term": 0.0,
"eval_r_dpo/rejected_len": 246.08799743652344,
"eval_runtime": 78.5514,
"eval_samples_per_second": 25.461,
"eval_steps_per_second": 3.183,
"step": 200
},
{
"epoch": 0.4397905759162304,
"grad_norm": 98.39983367919922,
"learning_rate": 3.454593922550693e-07,
"logits/chosen": -0.844610869884491,
"logits/rejected": -0.8483532667160034,
"logps/chosen": -389.2551574707031,
"logps/ref_chosen": -287.9228210449219,
"logps/ref_rejected": -263.35595703125,
"logps/rejected": -412.36932373046875,
"loss": 4.5587,
"r_dpo/chosen_len": 280.3125,
"r_dpo/length_delta": 36.68437576293945,
"r_dpo/regularization_term": 0.0,
"r_dpo/rejected_len": 243.6281280517578,
"step": 210
},
{
"epoch": 0.4607329842931937,
"grad_norm": 88.01953125,
"learning_rate": 3.2829819606729477e-07,
"logits/chosen": -0.860802948474884,
"logits/rejected": -0.8526037335395813,
"logps/chosen": -362.9411315917969,
"logps/ref_chosen": -282.3331604003906,
"logps/ref_rejected": -272.5645446777344,
"logps/rejected": -409.726318359375,
"loss": 4.3191,
"r_dpo/chosen_len": 261.359375,
"r_dpo/length_delta": 17.865625381469727,
"r_dpo/regularization_term": 0.0,
"r_dpo/rejected_len": 243.49374389648438,
"step": 220
},
{
"epoch": 0.4816753926701571,
"grad_norm": 80.2542953491211,
"learning_rate": 3.1071729615293424e-07,
"logits/chosen": -0.8298615217208862,
"logits/rejected": -0.8124361038208008,
"logps/chosen": -361.07958984375,
"logps/ref_chosen": -276.1485595703125,
"logps/ref_rejected": -252.81198120117188,
"logps/rejected": -392.0174255371094,
"loss": 4.2918,
"r_dpo/chosen_len": 264.43438720703125,
"r_dpo/length_delta": 31.256250381469727,
"r_dpo/regularization_term": 0.0,
"r_dpo/rejected_len": 233.17813110351562,
"step": 230
},
{
"epoch": 0.5026178010471204,
"grad_norm": 102.94511413574219,
"learning_rate": 2.9281093183781403e-07,
"logits/chosen": -0.8141033053398132,
"logits/rejected": -0.8255136609077454,
"logps/chosen": -350.80035400390625,
"logps/ref_chosen": -270.52520751953125,
"logps/ref_rejected": -254.83334350585938,
"logps/rejected": -386.9250183105469,
"loss": 4.3624,
"r_dpo/chosen_len": 271.81561279296875,
"r_dpo/length_delta": 37.099998474121094,
"r_dpo/regularization_term": 0.0,
"r_dpo/rejected_len": 234.7156219482422,
"step": 240
},
{
"epoch": 0.5235602094240838,
"grad_norm": 99.23353576660156,
"learning_rate": 2.7467508704251135e-07,
"logits/chosen": -0.8481950759887695,
"logits/rejected": -0.8352023959159851,
"logps/chosen": -369.6294860839844,
"logps/ref_chosen": -289.6054992675781,
"logps/ref_rejected": -265.0482482910156,
"logps/rejected": -404.7892761230469,
"loss": 4.3335,
"r_dpo/chosen_len": 277.50311279296875,
"r_dpo/length_delta": 41.103126525878906,
"r_dpo/regularization_term": 0.0,
"r_dpo/rejected_len": 236.39999389648438,
"step": 250
},
{
"epoch": 0.5445026178010471,
"grad_norm": 103.33648681640625,
"learning_rate": 2.5640697577740815e-07,
"logits/chosen": -0.8530448079109192,
"logits/rejected": -0.8350385427474976,
"logps/chosen": -389.0526123046875,
"logps/ref_chosen": -288.6393737792969,
"logps/ref_rejected": -265.315673828125,
"logps/rejected": -423.25390625,
"loss": 4.3842,
"r_dpo/chosen_len": 271.48126220703125,
"r_dpo/length_delta": 24.390625,
"r_dpo/regularization_term": 0.0,
"r_dpo/rejected_len": 247.0906219482422,
"step": 260
},
{
"epoch": 0.5654450261780105,
"grad_norm": 79.62167358398438,
"learning_rate": 2.381045210440644e-07,
"logits/chosen": -0.8272823095321655,
"logits/rejected": -0.8247928619384766,
"logps/chosen": -396.57269287109375,
"logps/ref_chosen": -280.1373596191406,
"logps/ref_rejected": -264.84295654296875,
"logps/rejected": -442.4415588378906,
"loss": 4.2579,
"r_dpo/chosen_len": 272.2875061035156,
"r_dpo/length_delta": 19.956249237060547,
"r_dpo/regularization_term": 0.0,
"r_dpo/rejected_len": 252.3312530517578,
"step": 270
},
{
"epoch": 0.5863874345549738,
"grad_norm": 73.61250305175781,
"learning_rate": 2.1986582993616925e-07,
"logits/chosen": -0.8542989492416382,
"logits/rejected": -0.8387068510055542,
"logps/chosen": -406.976806640625,
"logps/ref_chosen": -301.7547912597656,
"logps/ref_rejected": -254.6543731689453,
"logps/rejected": -424.46075439453125,
"loss": 4.2315,
"r_dpo/chosen_len": 285.44061279296875,
"r_dpo/length_delta": 52.962501525878906,
"r_dpo/regularization_term": 0.0,
"r_dpo/rejected_len": 232.47811889648438,
"step": 280
},
{
"epoch": 0.6073298429319371,
"grad_norm": 103.07135009765625,
"learning_rate": 2.0178866775369774e-07,
"logits/chosen": -0.8455541729927063,
"logits/rejected": -0.8156248927116394,
"logps/chosen": -421.5577087402344,
"logps/ref_chosen": -302.79217529296875,
"logps/ref_rejected": -292.9220275878906,
"logps/rejected": -465.0528259277344,
"loss": 4.4571,
"r_dpo/chosen_len": 294.90625,
"r_dpo/length_delta": 20.774999618530273,
"r_dpo/regularization_term": 0.0,
"r_dpo/rejected_len": 274.1312561035156,
"step": 290
},
{
"epoch": 0.6282722513089005,
"grad_norm": 91.10374450683594,
"learning_rate": 1.839699339491937e-07,
"logits/chosen": -0.8590185046195984,
"logits/rejected": -0.8340511322021484,
"logps/chosen": -382.68487548828125,
"logps/ref_chosen": -275.8238220214844,
"logps/ref_rejected": -264.05743408203125,
"logps/rejected": -428.8130798339844,
"loss": 4.2559,
"r_dpo/chosen_len": 266.859375,
"r_dpo/length_delta": 20.734375,
"r_dpo/regularization_term": 0.0,
"r_dpo/rejected_len": 246.125,
"step": 300
},
{
"epoch": 0.6492146596858639,
"grad_norm": 82.53608703613281,
"learning_rate": 1.6650514271527465e-07,
"logits/chosen": -0.8300539255142212,
"logits/rejected": -0.8278457522392273,
"logps/chosen": -413.59539794921875,
"logps/ref_chosen": -296.6716003417969,
"logps/ref_rejected": -278.68426513671875,
"logps/rejected": -453.2350158691406,
"loss": 4.1651,
"r_dpo/chosen_len": 292.91876220703125,
"r_dpo/length_delta": 32.55937576293945,
"r_dpo/regularization_term": 0.0,
"r_dpo/rejected_len": 260.359375,
"step": 310
},
{
"epoch": 0.6701570680628273,
"grad_norm": 90.0842056274414,
"learning_rate": 1.4948791099758052e-07,
"logits/chosen": -0.847748875617981,
"logits/rejected": -0.849408745765686,
"logps/chosen": -404.92718505859375,
"logps/ref_chosen": -284.1717529296875,
"logps/ref_rejected": -261.2606506347656,
"logps/rejected": -445.50567626953125,
"loss": 4.1314,
"r_dpo/chosen_len": 279.90313720703125,
"r_dpo/length_delta": 44.537498474121094,
"r_dpo/regularization_term": 0.0,
"r_dpo/rejected_len": 235.36563110351562,
"step": 320
},
{
"epoch": 0.6910994764397905,
"grad_norm": 117.30803680419922,
"learning_rate": 1.3300945667758012e-07,
"logits/chosen": -0.8595107793807983,
"logits/rejected": -0.8555922508239746,
"logps/chosen": -411.070556640625,
"logps/ref_chosen": -283.40338134765625,
"logps/ref_rejected": -271.27569580078125,
"logps/rejected": -461.5215759277344,
"loss": 4.2619,
"r_dpo/chosen_len": 267.67498779296875,
"r_dpo/length_delta": 13.015625,
"r_dpo/regularization_term": 0.0,
"r_dpo/rejected_len": 254.6593780517578,
"step": 330
},
{
"epoch": 0.7120418848167539,
"grad_norm": 119.9719009399414,
"learning_rate": 1.1715810961514072e-07,
"logits/chosen": -0.8573703765869141,
"logits/rejected": -0.841023325920105,
"logps/chosen": -396.1806945800781,
"logps/ref_chosen": -259.7261962890625,
"logps/ref_rejected": -243.4088897705078,
"logps/rejected": -444.3870544433594,
"loss": 4.2292,
"r_dpo/chosen_len": 256.11248779296875,
"r_dpo/length_delta": 32.546875,
"r_dpo/regularization_term": 0.0,
"r_dpo/rejected_len": 223.5656280517578,
"step": 340
},
{
"epoch": 0.7329842931937173,
"grad_norm": 92.5477066040039,
"learning_rate": 1.0201883817182949e-07,
"logits/chosen": -0.8849331140518188,
"logits/rejected": -0.8748366236686707,
"logps/chosen": -426.97198486328125,
"logps/ref_chosen": -298.24725341796875,
"logps/ref_rejected": -272.657958984375,
"logps/rejected": -465.6683044433594,
"loss": 4.2977,
"r_dpo/chosen_len": 281.4624938964844,
"r_dpo/length_delta": 45.275001525878906,
"r_dpo/regularization_term": 0.0,
"r_dpo/rejected_len": 236.1875,
"step": 350
},
{
"epoch": 0.7539267015706806,
"grad_norm": 116.8237533569336,
"learning_rate": 8.76727937529367e-08,
"logits/chosen": -0.8409330248832703,
"logits/rejected": -0.8409973382949829,
"logps/chosen": -399.4936828613281,
"logps/ref_chosen": -281.881103515625,
"logps/ref_rejected": -265.4746398925781,
"logps/rejected": -450.76092529296875,
"loss": 4.3479,
"r_dpo/chosen_len": 272.64373779296875,
"r_dpo/length_delta": 30.071874618530273,
"r_dpo/regularization_term": 0.0,
"r_dpo/rejected_len": 242.57186889648438,
"step": 360
},
{
"epoch": 0.774869109947644,
"grad_norm": 76.65868377685547,
"learning_rate": 7.419687580962222e-08,
"logits/chosen": -0.8691250681877136,
"logits/rejected": -0.8575793504714966,
"logps/chosen": -415.9898376464844,
"logps/ref_chosen": -302.17822265625,
"logps/ref_rejected": -265.92877197265625,
"logps/rejected": -448.15936279296875,
"loss": 4.0997,
"r_dpo/chosen_len": 273.88751220703125,
"r_dpo/length_delta": 33.759376525878906,
"r_dpo/regularization_term": 0.0,
"r_dpo/rejected_len": 240.1281280517578,
"step": 370
},
{
"epoch": 0.7958115183246073,
"grad_norm": 115.7716293334961,
"learning_rate": 6.166331963291519e-08,
"logits/chosen": -0.8324747085571289,
"logits/rejected": -0.8221855163574219,
"logps/chosen": -414.8014221191406,
"logps/ref_chosen": -301.2120361328125,
"logps/ref_rejected": -266.4872741699219,
"logps/rejected": -437.1184997558594,
"loss": 4.3057,
"r_dpo/chosen_len": 286.75311279296875,
"r_dpo/length_delta": 33.53125,
"r_dpo/regularization_term": 0.0,
"r_dpo/rejected_len": 253.2218780517578,
"step": 380
},
{
"epoch": 0.8167539267015707,
"grad_norm": 72.40222930908203,
"learning_rate": 5.013930914912476e-08,
"logits/chosen": -0.8503654599189758,
"logits/rejected": -0.8345378041267395,
"logps/chosen": -410.3568420410156,
"logps/ref_chosen": -296.6472473144531,
"logps/ref_rejected": -278.953857421875,
"logps/rejected": -457.876220703125,
"loss": 4.1891,
"r_dpo/chosen_len": 287.91876220703125,
"r_dpo/length_delta": 30.081249237060547,
"r_dpo/regularization_term": 0.0,
"r_dpo/rejected_len": 257.8374938964844,
"step": 390
},
{
"epoch": 0.837696335078534,
"grad_norm": 100.89118957519531,
"learning_rate": 3.968661679220467e-08,
"logits/chosen": -0.8497657775878906,
"logits/rejected": -0.8488407135009766,
"logps/chosen": -412.8147888183594,
"logps/ref_chosen": -296.6556091308594,
"logps/ref_rejected": -256.9266662597656,
"logps/rejected": -432.18450927734375,
"loss": 4.2712,
"r_dpo/chosen_len": 278.96875,
"r_dpo/length_delta": 39.60625076293945,
"r_dpo/regularization_term": 0.0,
"r_dpo/rejected_len": 239.3625030517578,
"step": 400
},
{
"epoch": 0.837696335078534,
"eval_logits/chosen": -0.8653351068496704,
"eval_logits/rejected": -0.8488379716873169,
"eval_logps/chosen": -406.7590026855469,
"eval_logps/ref_chosen": -288.6414794921875,
"eval_logps/ref_rejected": -265.96160888671875,
"eval_logps/rejected": -443.36309814453125,
"eval_loss": 0.5338261723518372,
"eval_r_dpo/chosen_len": 286.97601318359375,
"eval_r_dpo/length_delta": 40.88800048828125,
"eval_r_dpo/regularization_term": 0.0,
"eval_r_dpo/rejected_len": 246.08799743652344,
"eval_runtime": 78.6291,
"eval_samples_per_second": 25.436,
"eval_steps_per_second": 3.179,
"step": 400
},
{
|
||||
"epoch": 0.8586387434554974,
|
||||
"grad_norm": 83.20862579345703,
|
||||
"learning_rate": 3.036127238347164e-08,
|
||||
"logits/chosen": -0.8332167863845825,
|
||||
"logits/rejected": -0.8190832138061523,
|
||||
"logps/chosen": -414.7737731933594,
|
||||
"logps/ref_chosen": -289.9568786621094,
|
||||
"logps/ref_rejected": -272.4674377441406,
|
||||
"logps/rejected": -451.9229431152344,
|
||||
"loss": 4.1574,
|
||||
"r_dpo/chosen_len": 282.40625,
|
||||
"r_dpo/length_delta": 26.265625,
|
||||
"r_dpo/regularization_term": 0.0,
|
||||
"r_dpo/rejected_len": 256.140625,
|
||||
"step": 410
|
||||
},
|
||||
{
|
||||
"epoch": 0.8795811518324608,
|
||||
"grad_norm": 116.7166976928711,
|
||||
"learning_rate": 2.2213262793589482e-08,
|
||||
"logits/chosen": -0.8510443568229675,
|
||||
"logits/rejected": -0.8338598012924194,
|
||||
"logps/chosen": -428.4990234375,
|
||||
"logps/ref_chosen": -307.40240478515625,
|
||||
"logps/ref_rejected": -279.85760498046875,
|
||||
"logps/rejected": -461.937255859375,
|
||||
"loss": 4.1325,
|
||||
"r_dpo/chosen_len": 296.8343811035156,
|
||||
"r_dpo/length_delta": 37.165626525878906,
|
||||
"r_dpo/regularization_term": 0.0,
|
||||
"r_dpo/rejected_len": 259.66876220703125,
|
||||
"step": 420
|
||||
},
|
||||
{
|
||||
"epoch": 0.900523560209424,
|
||||
"grad_norm": 93.6956787109375,
|
||||
"learning_rate": 1.5286263996730026e-08,
|
||||
"logits/chosen": -0.8407491445541382,
|
||||
"logits/rejected": -0.8321961164474487,
|
||||
"logps/chosen": -418.92791748046875,
|
||||
"logps/ref_chosen": -297.7133483886719,
|
||||
"logps/ref_rejected": -266.862060546875,
|
||||
"logps/rejected": -456.95721435546875,
|
||||
"loss": 4.0672,
|
||||
"r_dpo/chosen_len": 290.703125,
|
||||
"r_dpo/length_delta": 48.868751525878906,
|
||||
"r_dpo/regularization_term": 0.0,
|
||||
"r_dpo/rejected_len": 241.83438110351562,
|
||||
"step": 430
|
||||
},
|
||||
{
|
||||
"epoch": 0.9214659685863874,
|
||||
"grad_norm": 90.70089721679688,
|
||||
"learning_rate": 9.617406953185136e-09,
|
||||
"logits/chosen": -0.8413969278335571,
|
||||
"logits/rejected": -0.8441200256347656,
|
||||
"logps/chosen": -419.41925048828125,
|
||||
"logps/ref_chosen": -293.67095947265625,
|
||||
"logps/ref_rejected": -289.4698791503906,
|
||||
"logps/rejected": -471.10052490234375,
|
||||
"loss": 4.2715,
|
||||
"r_dpo/chosen_len": 285.3656311035156,
|
||||
"r_dpo/length_delta": 12.868749618530273,
|
||||
"r_dpo/regularization_term": 0.0,
|
||||
"r_dpo/rejected_len": 272.49688720703125,
|
||||
"step": 440
|
||||
},
|
||||
{
|
||||
"epoch": 0.9424083769633508,
|
||||
"grad_norm": 93.14322662353516,
|
||||
"learning_rate": 5.2370785753763356e-09,
|
||||
"logits/chosen": -0.8536258935928345,
|
||||
"logits/rejected": -0.8501097559928894,
|
||||
"logps/chosen": -415.6376037597656,
|
||||
"logps/ref_chosen": -296.9415283203125,
|
||||
"logps/ref_rejected": -262.6710510253906,
|
||||
"logps/rejected": -445.570556640625,
|
||||
"loss": 4.0987,
|
||||
"r_dpo/chosen_len": 282.81561279296875,
|
||||
"r_dpo/length_delta": 40.631248474121094,
|
||||
"r_dpo/regularization_term": 0.0,
|
||||
"r_dpo/rejected_len": 242.1843719482422,
|
||||
"step": 450
|
||||
},
|
||||
{
|
||||
"epoch": 0.9633507853403142,
|
||||
"grad_norm": 86.69686126708984,
|
||||
"learning_rate": 2.168758844148272e-09,
|
||||
"logits/chosen": -0.8644816279411316,
|
||||
"logits/rejected": -0.8618124723434448,
|
||||
"logps/chosen": -437.73199462890625,
|
||||
"logps/ref_chosen": -312.42291259765625,
|
||||
"logps/ref_rejected": -278.7356262207031,
|
||||
"logps/rejected": -466.789306640625,
|
||||
"loss": 4.2414,
|
||||
"r_dpo/chosen_len": 288.9125061035156,
|
||||
"r_dpo/length_delta": 43.23125076293945,
|
||||
"r_dpo/regularization_term": 0.0,
|
||||
"r_dpo/rejected_len": 245.68124389648438,
|
||||
"step": 460
|
||||
},
|
||||
{
|
||||
"epoch": 0.9842931937172775,
|
||||
"grad_norm": 72.70208740234375,
|
||||
"learning_rate": 4.288949484559934e-10,
|
||||
"logits/chosen": -0.8316763043403625,
|
||||
"logits/rejected": -0.8185717463493347,
|
||||
"logps/chosen": -396.99578857421875,
|
||||
"logps/ref_chosen": -278.0654602050781,
|
||||
"logps/ref_rejected": -256.5596618652344,
|
||||
"logps/rejected": -439.91888427734375,
|
||||
"loss": 4.1528,
|
||||
"r_dpo/chosen_len": 268.8687438964844,
|
||||
"r_dpo/length_delta": 26.262500762939453,
|
||||
"r_dpo/regularization_term": 0.0,
|
||||
"r_dpo/rejected_len": 242.6062469482422,
|
||||
"step": 470
|
||||
},
|
||||
{
|
||||
"epoch": 0.9989528795811519,
|
||||
"step": 477,
|
||||
"total_flos": 0.0,
|
||||
"train_loss": 4.588983364824979,
|
||||
"train_runtime": 6140.251,
|
||||
"train_samples_per_second": 9.956,
|
||||
"train_steps_per_second": 0.078
|
||||
}
|
||||
],
|
||||
"logging_steps": 10,
|
||||
"max_steps": 477,
|
||||
"num_input_tokens_seen": 0,
|
||||
"num_train_epochs": 1,
|
||||
"save_steps": 50,
|
||||
"stateful_callbacks": {
|
||||
"TrainerControl": {
|
||||
"args": {
|
||||
"should_epoch_stop": false,
|
||||
"should_evaluate": false,
|
||||
"should_log": false,
|
||||
"should_save": false,
|
||||
"should_training_stop": false
|
||||
},
|
||||
"attributes": {}
|
||||
}
|
||||
},
|
||||
"total_flos": 0.0,
|
||||
"train_batch_size": 4,
|
||||
"trial_name": null,
|
||||
"trial_params": null
|
||||
}