Initialize project; model provided by the ModelHub XC community
Model: jackf857/llama-3-8b-base-r-dpo-ultrafeedback-4xh200 Source: Original Platform
36  .gitattributes  vendored  Normal file
@@ -0,0 +1,36 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
tokenizer.json filter=lfs diff=lfs merge=lfs -text
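The .gitattributes patterns above route large binary artifacts (weights, archives, serialized tensors) through Git LFS so the repository itself stays small. As a rough illustration of how those glob patterns apply to filenames, here is a sketch using Python's `fnmatch`; note this is only an approximation, since real gitattributes matching has extra rules (e.g. `**` and path-relative patterns) that `fnmatch` does not implement.

```python
from fnmatch import fnmatch

# Illustrative subset of the LFS patterns from .gitattributes above.
LFS_PATTERNS = ["*.safetensors", "*.bin", "*.pt", "*.gz", "tokenizer.json", "*tfevents*"]

def tracked_by_lfs(filename: str) -> bool:
    """True if any LFS pattern matches the filename.

    Approximation only: gitattributes matching differs from fnmatch
    for directory patterns such as 'saved_model/**/*'.
    """
    return any(fnmatch(filename, pat) for pat in LFS_PATTERNS)

print(tracked_by_lfs("model-00001-of-00007.safetensors"))  # True
print(tracked_by_lfs("config.json"))                       # False
```

So the seven shard files below are stored as LFS pointers, while small JSON configs are committed as plain text.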
78  README.md  Normal file
@@ -0,0 +1,78 @@
---
library_name: transformers
base_model: W-61/llama-3-8b-base-sft-ultrachat-8xh200
tags:
- alignment-handbook
- r-dpo
- generated_from_trainer
datasets:
- HuggingFaceH4/ultrafeedback_binarized
model-index:
- name: llama-3-8b-base-r-dpo-ultrafeedback-4xh200
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# llama-3-8b-base-r-dpo-ultrafeedback-4xh200

This model is a fine-tuned version of [W-61/llama-3-8b-base-sft-ultrachat-8xh200](https://huggingface.co/W-61/llama-3-8b-base-sft-ultrachat-8xh200) on the HuggingFaceH4/ultrafeedback_binarized dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5080
- R Dpo/chosen Len: 291.2620
- R Dpo/rejected Len: 248.3960
- R Dpo/length Delta: 42.8660
- R Dpo/regularization Term: 0.0
- Logps/chosen: -288.0679
- Logps/rejected: -272.9751
- Logps/ref Chosen: -289.1346
- Logps/ref Rejected: -264.7782
- Logits/chosen: -0.7442
- Logits/rejected: -0.7460

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- total_eval_batch_size: 16
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1

### Training results

| Training Loss | Epoch  | Step | Validation Loss | R Dpo/chosen Len | R Dpo/rejected Len | R Dpo/length Delta | R Dpo/regularization Term | Logps/chosen | Logps/rejected | Logps/ref Chosen | Logps/ref Rejected | Logits/chosen | Logits/rejected |
|:-------------:|:------:|:----:|:---------------:|:----------------:|:------------------:|:------------------:|:-------------------------:|:------------:|:--------------:|:----------------:|:------------------:|:-------------:|:---------------:|
| 4.178         | 0.4188 | 200  | 0.5266          | 291.2620         | 248.3960           | 42.8660            | 0.0                       | -287.2393    | -271.3152      | -289.1346        | -264.7782          | -0.7479       | -0.7490         |
| 4.0423        | 0.8377 | 400  | 0.5080          | 291.2620         | 248.3960           | 42.8660            | 0.0                       | -288.0679    | -272.9751      | -289.1346        | -264.7782          | -0.7442       | -0.7460         |

### Framework versions

- Transformers 4.51.0
- Pytorch 2.3.1+cu121
- Datasets 2.21.0
- Tokenizers 0.21.4
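The R-DPO metrics reported in the card (chosen/rejected lengths, length delta, regularization term) come from length-regularized DPO, which subtracts a response-length penalty from the usual DPO preference margin. A minimal sketch of the per-pair loss follows, assuming the common formulation `loss = -log sigmoid(beta * (margin_chosen - margin_rejected) - alpha * (len_chosen - len_rejected))`; the exact definition used by the trainer, and its `beta`, are not stated in the card, so the values below are illustrative assumptions. A reported regularization term of 0.0 is consistent with `alpha = 0`, in which case this reduces to plain DPO.

```python
import math

def r_dpo_loss(logp_chosen, logp_ref_chosen, logp_rejected, logp_ref_rejected,
               len_chosen, len_rejected, beta=0.01, alpha=0.0):
    """Length-regularized DPO loss for one preference pair (illustrative form).

    margin = beta * ((logp_w - logp_ref_w) - (logp_l - logp_ref_l))
             - alpha * (len_w - len_l)
    loss   = -log(sigmoid(margin))
    Note: beta and alpha here are assumed values, not taken from the card.
    """
    margin = beta * ((logp_chosen - logp_ref_chosen)
                     - (logp_rejected - logp_ref_rejected))
    margin -= alpha * (len_chosen - len_rejected)
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# With alpha = 0 this is plain DPO; plugging in the evaluation-set averages
# from the card (these are dataset means, so this is not the reported loss):
loss = r_dpo_loss(-288.0679, -289.1346, -272.9751, -264.7782, 291.262, 248.396)
```

Note that the chosen completions are on average ~43 tokens longer than the rejected ones, which is exactly the verbosity bias the length penalty in R-DPO is designed to control.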
24  all_results.json  Normal file
@@ -0,0 +1,24 @@
{
    "epoch": 0.9989528795811519,
    "eval_logits/chosen": -0.7394673824310303,
    "eval_logits/rejected": -0.7407819628715515,
    "eval_logps/chosen": -288.1478271484375,
    "eval_logps/ref_chosen": -289.1346435546875,
    "eval_logps/ref_rejected": -264.7782287597656,
    "eval_logps/rejected": -273.09014892578125,
    "eval_loss": 0.5070626139640808,
    "eval_r_dpo/chosen_len": 291.2619934082031,
    "eval_r_dpo/length_delta": 42.86600112915039,
    "eval_r_dpo/regularization_term": 0.0,
    "eval_r_dpo/rejected_len": 248.39599609375,
    "eval_runtime": 81.9848,
    "eval_samples": 2000,
    "eval_samples_per_second": 24.395,
    "eval_steps_per_second": 1.525,
    "total_flos": 0.0,
    "train_loss": 4.32008807264284,
    "train_runtime": 6099.6278,
    "train_samples": 61135,
    "train_samples_per_second": 10.023,
    "train_steps_per_second": 0.078
}
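Several of the reported numbers are derived from one another, so they can be cross-checked. A small sketch, with the values copied from the JSON above:

```python
# Values copied from all_results.json above.
metrics = {
    "eval_runtime": 81.9848,
    "eval_samples": 2000,
    "eval_samples_per_second": 24.395,
    "eval_r_dpo/chosen_len": 291.2619934082031,
    "eval_r_dpo/rejected_len": 248.39599609375,
    "eval_r_dpo/length_delta": 42.86600112915039,
    "train_samples": 61135,
    "train_runtime": 6099.6278,
    "train_samples_per_second": 10.023,
}

# samples/sec is samples divided by wall-clock runtime
assert round(metrics["eval_samples"] / metrics["eval_runtime"], 3) \
    == metrics["eval_samples_per_second"]
assert round(metrics["train_samples"] / metrics["train_runtime"], 3) \
    == metrics["train_samples_per_second"]

# length delta is mean chosen length minus mean rejected length
assert abs((metrics["eval_r_dpo/chosen_len"] - metrics["eval_r_dpo/rejected_len"])
           - metrics["eval_r_dpo/length_delta"]) < 1e-3
```

All three identities hold, which is a quick sanity check that the logged metrics are internally consistent.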
29  config.json  Normal file
@@ -0,0 +1,29 @@
{
    "architectures": [
        "LlamaForCausalLM"
    ],
    "attention_bias": false,
    "attention_dropout": 0.0,
    "bos_token_id": 128000,
    "eos_token_id": 128001,
    "head_dim": 128,
    "hidden_act": "silu",
    "hidden_size": 4096,
    "initializer_range": 0.02,
    "intermediate_size": 14336,
    "max_position_embeddings": 8192,
    "mlp_bias": false,
    "model_type": "llama",
    "num_attention_heads": 32,
    "num_hidden_layers": 32,
    "num_key_value_heads": 8,
    "pretraining_tp": 1,
    "rms_norm_eps": 1e-05,
    "rope_scaling": null,
    "rope_theta": 500000.0,
    "tie_word_embeddings": false,
    "torch_dtype": "float32",
    "transformers_version": "4.51.0",
    "use_cache": true,
    "vocab_size": 128256
}
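The shapes in config.json determine the parameter count, and since `torch_dtype` is `float32` (4 bytes per parameter) the total can be checked against the `total_size` reported in model.safetensors.index.json. A sketch of the arithmetic:

```python
# Parameter count for the Llama architecture described by config.json.
hidden, inter, layers = 4096, 14336, 32
heads, kv_heads, head_dim = 32, 8, 128
vocab = 128256

embed = vocab * hidden                      # input embeddings
lm_head = vocab * hidden                    # untied output head (tie_word_embeddings: false)
attn = hidden * (heads * head_dim)          # q_proj
attn += 2 * hidden * (kv_heads * head_dim)  # k_proj + v_proj (grouped-query attention)
attn += (heads * head_dim) * hidden         # o_proj
mlp = 3 * hidden * inter                    # gate_proj, up_proj, down_proj
norms = 2 * hidden                          # input + post-attention RMSNorm per layer

total = embed + lm_head + layers * (attn + mlp + norms) + hidden  # + final norm
print(total)      # 8030261248 (~8.03B parameters)
print(total * 4)  # 32121044992 bytes in float32
```

The float32 byte count matches the index's `total_size` of 32121044992 exactly, which also explains why an 8B model needs ~32 GB across seven shards here rather than the ~16 GB a bfloat16 export would take.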
18  eval_results.json  Normal file
@@ -0,0 +1,18 @@
{
    "epoch": 0.9989528795811519,
    "eval_logits/chosen": -0.7394673824310303,
    "eval_logits/rejected": -0.7407819628715515,
    "eval_logps/chosen": -288.1478271484375,
    "eval_logps/ref_chosen": -289.1346435546875,
    "eval_logps/ref_rejected": -264.7782287597656,
    "eval_logps/rejected": -273.09014892578125,
    "eval_loss": 0.5070626139640808,
    "eval_r_dpo/chosen_len": 291.2619934082031,
    "eval_r_dpo/length_delta": 42.86600112915039,
    "eval_r_dpo/regularization_term": 0.0,
    "eval_r_dpo/rejected_len": 248.39599609375,
    "eval_runtime": 81.9848,
    "eval_samples": 2000,
    "eval_samples_per_second": 24.395,
    "eval_steps_per_second": 1.525
}
9  generation_config.json  Normal file
@@ -0,0 +1,9 @@
{
    "bos_token_id": 128000,
    "do_sample": true,
    "eos_token_id": 128001,
    "max_length": 4096,
    "temperature": 0.6,
    "top_p": 0.9,
    "transformers_version": "4.51.0"
}
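These generation defaults (`do_sample` with temperature 0.6 and top_p 0.9) mean nucleus sampling: logits are temperature-scaled, then sampling is restricted to the smallest set of tokens whose cumulative probability reaches 0.9. A minimal sketch of the filtering step follows; it is an illustration of the idea, not the transformers implementation.

```python
import math

def top_p_filter(logits, temperature=0.6, top_p=0.9):
    """Return renormalized probabilities over the smallest set of tokens
    whose cumulative probability reaches top_p (nucleus sampling)."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)                          # stable softmax
    exps = [math.exp(s - m) for s in scaled]
    z = sum(exps)
    probs = [e / z for e in exps]
    # sort tokens by probability, keep until cumulative mass >= top_p
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept, cum = {}, 0.0
    for i in order:
        kept[i] = probs[i]
        cum += probs[i]
        if cum >= top_p:
            break
    z = sum(kept.values())
    return {i: p / z for i, p in kept.items()}

dist = top_p_filter([2.0, 1.0, 0.1, -3.0])  # only the top tokens survive
```

At temperature 0.6 the distribution is sharpened before the nucleus cut, so low-probability tail tokens are doubly suppressed.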
3  model-00001-of-00007.safetensors  Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e4af25c24cbbbec408efbf3243a7870e030d3c563899066bef1ab333eca8cde2
size 4886466168

3  model-00002-of-00007.safetensors  Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:2fb1c3ca3e1629e1f2be0b945ae20667861b4d53fb6cc1b9cadcef7fff48fab8
size 4832007448

3  model-00003-of-00007.safetensors  Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ef1bb58a5a53c29f1b245acaf023b18f3078b096c6d0a0f8eadd6573e9629ab3
size 4999813112

3  model-00004-of-00007.safetensors  Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e084916b38d2156579312b165da8c8ec42a4036b78e292d9a1f29c702fe6c8b1
size 4999813128

3  model-00005-of-00007.safetensors  Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7a809250d30eab837e1d0dce893711087df73ac9e3511fe2319af1991948ad29
size 4832007496

3  model-00006-of-00007.safetensors  Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:6aee84b0d0695de5e61d419e9497cb4452ccf94fb837d7942a3f163103e85c0b
size 4999813120

3  model-00007-of-00007.safetensors  Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7c91d53c4580564ae5626f15f8416938532482cd8493bc7996c92074068f8793
size 2571158184
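Each of the *.safetensors entries above is a Git LFS pointer file (spec v1), not the weights themselves: three `key value` lines giving the spec version, the sha256 of the real blob, and its byte size. A small sketch parsing one of these pointers (using the shard-7 pointer text from above):

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a git-lfs pointer file into its key/value fields.

    Pointer files are small 'key value' line files; 'oid' carries the
    sha256 of the real blob and 'size' its byte length.
    """
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    fields["size"] = int(fields["size"])
    return fields

pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:7c91d53c4580564ae5626f15f8416938532482cd8493bc7996c92074068f8793
size 2571158184
"""
info = parse_lfs_pointer(pointer)
print(info["size"])  # 2571158184
```

After an `lfs` checkout, the pointer is replaced by the actual blob, and the sha256 lets the client verify the downloaded shard.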
298  model.safetensors.index.json  Normal file
@@ -0,0 +1,298 @@
{
    "metadata": {
        "total_size": 32121044992
    },
    "weight_map": {
        "lm_head.weight": "model-00007-of-00007.safetensors",
        "model.embed_tokens.weight": "model-00001-of-00007.safetensors",
        "model.layers.0.input_layernorm.weight": "model-00001-of-00007.safetensors",
        "model.layers.0.mlp.down_proj.weight": "model-00001-of-00007.safetensors",
        "model.layers.0.mlp.gate_proj.weight": "model-00001-of-00007.safetensors",
        "model.layers.0.mlp.up_proj.weight": "model-00001-of-00007.safetensors",
        "model.layers.0.post_attention_layernorm.weight": "model-00001-of-00007.safetensors",
        "model.layers.0.self_attn.k_proj.weight": "model-00001-of-00007.safetensors",
        "model.layers.0.self_attn.o_proj.weight": "model-00001-of-00007.safetensors",
        "model.layers.0.self_attn.q_proj.weight": "model-00001-of-00007.safetensors",
        "model.layers.0.self_attn.v_proj.weight": "model-00001-of-00007.safetensors",
        "model.layers.1.input_layernorm.weight": "model-00001-of-00007.safetensors",
        "model.layers.1.mlp.down_proj.weight": "model-00001-of-00007.safetensors",
        "model.layers.1.mlp.gate_proj.weight": "model-00001-of-00007.safetensors",
        "model.layers.1.mlp.up_proj.weight": "model-00001-of-00007.safetensors",
        "model.layers.1.post_attention_layernorm.weight": "model-00001-of-00007.safetensors",
        "model.layers.1.self_attn.k_proj.weight": "model-00001-of-00007.safetensors",
        "model.layers.1.self_attn.o_proj.weight": "model-00001-of-00007.safetensors",
        "model.layers.1.self_attn.q_proj.weight": "model-00001-of-00007.safetensors",
        "model.layers.1.self_attn.v_proj.weight": "model-00001-of-00007.safetensors",
        "model.layers.10.input_layernorm.weight": "model-00003-of-00007.safetensors",
        "model.layers.10.mlp.down_proj.weight": "model-00003-of-00007.safetensors",
        "model.layers.10.mlp.gate_proj.weight": "model-00003-of-00007.safetensors",
        "model.layers.10.mlp.up_proj.weight": "model-00003-of-00007.safetensors",
        "model.layers.10.post_attention_layernorm.weight": "model-00003-of-00007.safetensors",
        "model.layers.10.self_attn.k_proj.weight": "model-00003-of-00007.safetensors",
        "model.layers.10.self_attn.o_proj.weight": "model-00003-of-00007.safetensors",
        "model.layers.10.self_attn.q_proj.weight": "model-00003-of-00007.safetensors",
        "model.layers.10.self_attn.v_proj.weight": "model-00003-of-00007.safetensors",
        "model.layers.11.input_layernorm.weight": "model-00003-of-00007.safetensors",
        "model.layers.11.mlp.down_proj.weight": "model-00003-of-00007.safetensors",
        "model.layers.11.mlp.gate_proj.weight": "model-00003-of-00007.safetensors",
        "model.layers.11.mlp.up_proj.weight": "model-00003-of-00007.safetensors",
        "model.layers.11.post_attention_layernorm.weight": "model-00003-of-00007.safetensors",
        "model.layers.11.self_attn.k_proj.weight": "model-00003-of-00007.safetensors",
        "model.layers.11.self_attn.o_proj.weight": "model-00003-of-00007.safetensors",
        "model.layers.11.self_attn.q_proj.weight": "model-00003-of-00007.safetensors",
        "model.layers.11.self_attn.v_proj.weight": "model-00003-of-00007.safetensors",
        "model.layers.12.input_layernorm.weight": "model-00003-of-00007.safetensors",
        "model.layers.12.mlp.down_proj.weight": "model-00003-of-00007.safetensors",
        "model.layers.12.mlp.gate_proj.weight": "model-00003-of-00007.safetensors",
        "model.layers.12.mlp.up_proj.weight": "model-00003-of-00007.safetensors",
        "model.layers.12.post_attention_layernorm.weight": "model-00003-of-00007.safetensors",
        "model.layers.12.self_attn.k_proj.weight": "model-00003-of-00007.safetensors",
        "model.layers.12.self_attn.o_proj.weight": "model-00003-of-00007.safetensors",
        "model.layers.12.self_attn.q_proj.weight": "model-00003-of-00007.safetensors",
        "model.layers.12.self_attn.v_proj.weight": "model-00003-of-00007.safetensors",
        "model.layers.13.input_layernorm.weight": "model-00003-of-00007.safetensors",
        "model.layers.13.mlp.down_proj.weight": "model-00003-of-00007.safetensors",
        "model.layers.13.mlp.gate_proj.weight": "model-00003-of-00007.safetensors",
        "model.layers.13.mlp.up_proj.weight": "model-00003-of-00007.safetensors",
        "model.layers.13.post_attention_layernorm.weight": "model-00003-of-00007.safetensors",
        "model.layers.13.self_attn.k_proj.weight": "model-00003-of-00007.safetensors",
        "model.layers.13.self_attn.o_proj.weight": "model-00003-of-00007.safetensors",
        "model.layers.13.self_attn.q_proj.weight": "model-00003-of-00007.safetensors",
        "model.layers.13.self_attn.v_proj.weight": "model-00003-of-00007.safetensors",
        "model.layers.14.input_layernorm.weight": "model-00004-of-00007.safetensors",
        "model.layers.14.mlp.down_proj.weight": "model-00004-of-00007.safetensors",
        "model.layers.14.mlp.gate_proj.weight": "model-00003-of-00007.safetensors",
        "model.layers.14.mlp.up_proj.weight": "model-00004-of-00007.safetensors",
        "model.layers.14.post_attention_layernorm.weight": "model-00004-of-00007.safetensors",
        "model.layers.14.self_attn.k_proj.weight": "model-00003-of-00007.safetensors",
        "model.layers.14.self_attn.o_proj.weight": "model-00003-of-00007.safetensors",
        "model.layers.14.self_attn.q_proj.weight": "model-00003-of-00007.safetensors",
        "model.layers.14.self_attn.v_proj.weight": "model-00003-of-00007.safetensors",
        "model.layers.15.input_layernorm.weight": "model-00004-of-00007.safetensors",
        "model.layers.15.mlp.down_proj.weight": "model-00004-of-00007.safetensors",
        "model.layers.15.mlp.gate_proj.weight": "model-00004-of-00007.safetensors",
        "model.layers.15.mlp.up_proj.weight": "model-00004-of-00007.safetensors",
        "model.layers.15.post_attention_layernorm.weight": "model-00004-of-00007.safetensors",
        "model.layers.15.self_attn.k_proj.weight": "model-00004-of-00007.safetensors",
        "model.layers.15.self_attn.o_proj.weight": "model-00004-of-00007.safetensors",
        "model.layers.15.self_attn.q_proj.weight": "model-00004-of-00007.safetensors",
        "model.layers.15.self_attn.v_proj.weight": "model-00004-of-00007.safetensors",
        "model.layers.16.input_layernorm.weight": "model-00004-of-00007.safetensors",
        "model.layers.16.mlp.down_proj.weight": "model-00004-of-00007.safetensors",
        "model.layers.16.mlp.gate_proj.weight": "model-00004-of-00007.safetensors",
        "model.layers.16.mlp.up_proj.weight": "model-00004-of-00007.safetensors",
        "model.layers.16.post_attention_layernorm.weight": "model-00004-of-00007.safetensors",
        "model.layers.16.self_attn.k_proj.weight": "model-00004-of-00007.safetensors",
        "model.layers.16.self_attn.o_proj.weight": "model-00004-of-00007.safetensors",
        "model.layers.16.self_attn.q_proj.weight": "model-00004-of-00007.safetensors",
        "model.layers.16.self_attn.v_proj.weight": "model-00004-of-00007.safetensors",
        "model.layers.17.input_layernorm.weight": "model-00004-of-00007.safetensors",
        "model.layers.17.mlp.down_proj.weight": "model-00004-of-00007.safetensors",
        "model.layers.17.mlp.gate_proj.weight": "model-00004-of-00007.safetensors",
        "model.layers.17.mlp.up_proj.weight": "model-00004-of-00007.safetensors",
        "model.layers.17.post_attention_layernorm.weight": "model-00004-of-00007.safetensors",
        "model.layers.17.self_attn.k_proj.weight": "model-00004-of-00007.safetensors",
        "model.layers.17.self_attn.o_proj.weight": "model-00004-of-00007.safetensors",
        "model.layers.17.self_attn.q_proj.weight": "model-00004-of-00007.safetensors",
        "model.layers.17.self_attn.v_proj.weight": "model-00004-of-00007.safetensors",
        "model.layers.18.input_layernorm.weight": "model-00004-of-00007.safetensors",
        "model.layers.18.mlp.down_proj.weight": "model-00004-of-00007.safetensors",
        "model.layers.18.mlp.gate_proj.weight": "model-00004-of-00007.safetensors",
        "model.layers.18.mlp.up_proj.weight": "model-00004-of-00007.safetensors",
        "model.layers.18.post_attention_layernorm.weight": "model-00004-of-00007.safetensors",
        "model.layers.18.self_attn.k_proj.weight": "model-00004-of-00007.safetensors",
        "model.layers.18.self_attn.o_proj.weight": "model-00004-of-00007.safetensors",
        "model.layers.18.self_attn.q_proj.weight": "model-00004-of-00007.safetensors",
        "model.layers.18.self_attn.v_proj.weight": "model-00004-of-00007.safetensors",
        "model.layers.19.input_layernorm.weight": "model-00004-of-00007.safetensors",
        "model.layers.19.mlp.down_proj.weight": "model-00004-of-00007.safetensors",
        "model.layers.19.mlp.gate_proj.weight": "model-00004-of-00007.safetensors",
        "model.layers.19.mlp.up_proj.weight": "model-00004-of-00007.safetensors",
        "model.layers.19.post_attention_layernorm.weight": "model-00004-of-00007.safetensors",
        "model.layers.19.self_attn.k_proj.weight": "model-00004-of-00007.safetensors",
        "model.layers.19.self_attn.o_proj.weight": "model-00004-of-00007.safetensors",
        "model.layers.19.self_attn.q_proj.weight": "model-00004-of-00007.safetensors",
        "model.layers.19.self_attn.v_proj.weight": "model-00004-of-00007.safetensors",
        "model.layers.2.input_layernorm.weight": "model-00001-of-00007.safetensors",
        "model.layers.2.mlp.down_proj.weight": "model-00001-of-00007.safetensors",
        "model.layers.2.mlp.gate_proj.weight": "model-00001-of-00007.safetensors",
        "model.layers.2.mlp.up_proj.weight": "model-00001-of-00007.safetensors",
        "model.layers.2.post_attention_layernorm.weight": "model-00001-of-00007.safetensors",
        "model.layers.2.self_attn.k_proj.weight": "model-00001-of-00007.safetensors",
        "model.layers.2.self_attn.o_proj.weight": "model-00001-of-00007.safetensors",
        "model.layers.2.self_attn.q_proj.weight": "model-00001-of-00007.safetensors",
        "model.layers.2.self_attn.v_proj.weight": "model-00001-of-00007.safetensors",
        "model.layers.20.input_layernorm.weight": "model-00005-of-00007.safetensors",
        "model.layers.20.mlp.down_proj.weight": "model-00005-of-00007.safetensors",
        "model.layers.20.mlp.gate_proj.weight": "model-00005-of-00007.safetensors",
        "model.layers.20.mlp.up_proj.weight": "model-00005-of-00007.safetensors",
        "model.layers.20.post_attention_layernorm.weight": "model-00005-of-00007.safetensors",
        "model.layers.20.self_attn.k_proj.weight": "model-00004-of-00007.safetensors",
        "model.layers.20.self_attn.o_proj.weight": "model-00004-of-00007.safetensors",
        "model.layers.20.self_attn.q_proj.weight": "model-00004-of-00007.safetensors",
        "model.layers.20.self_attn.v_proj.weight": "model-00004-of-00007.safetensors",
        "model.layers.21.input_layernorm.weight": "model-00005-of-00007.safetensors",
        "model.layers.21.mlp.down_proj.weight": "model-00005-of-00007.safetensors",
        "model.layers.21.mlp.gate_proj.weight": "model-00005-of-00007.safetensors",
        "model.layers.21.mlp.up_proj.weight": "model-00005-of-00007.safetensors",
        "model.layers.21.post_attention_layernorm.weight": "model-00005-of-00007.safetensors",
        "model.layers.21.self_attn.k_proj.weight": "model-00005-of-00007.safetensors",
        "model.layers.21.self_attn.o_proj.weight": "model-00005-of-00007.safetensors",
        "model.layers.21.self_attn.q_proj.weight": "model-00005-of-00007.safetensors",
        "model.layers.21.self_attn.v_proj.weight": "model-00005-of-00007.safetensors",
        "model.layers.22.input_layernorm.weight": "model-00005-of-00007.safetensors",
        "model.layers.22.mlp.down_proj.weight": "model-00005-of-00007.safetensors",
        "model.layers.22.mlp.gate_proj.weight": "model-00005-of-00007.safetensors",
        "model.layers.22.mlp.up_proj.weight": "model-00005-of-00007.safetensors",
        "model.layers.22.post_attention_layernorm.weight": "model-00005-of-00007.safetensors",
        "model.layers.22.self_attn.k_proj.weight": "model-00005-of-00007.safetensors",
        "model.layers.22.self_attn.o_proj.weight": "model-00005-of-00007.safetensors",
        "model.layers.22.self_attn.q_proj.weight": "model-00005-of-00007.safetensors",
        "model.layers.22.self_attn.v_proj.weight": "model-00005-of-00007.safetensors",
        "model.layers.23.input_layernorm.weight": "model-00005-of-00007.safetensors",
        "model.layers.23.mlp.down_proj.weight": "model-00005-of-00007.safetensors",
        "model.layers.23.mlp.gate_proj.weight": "model-00005-of-00007.safetensors",
        "model.layers.23.mlp.up_proj.weight": "model-00005-of-00007.safetensors",
        "model.layers.23.post_attention_layernorm.weight": "model-00005-of-00007.safetensors",
        "model.layers.23.self_attn.k_proj.weight": "model-00005-of-00007.safetensors",
        "model.layers.23.self_attn.o_proj.weight": "model-00005-of-00007.safetensors",
        "model.layers.23.self_attn.q_proj.weight": "model-00005-of-00007.safetensors",
        "model.layers.23.self_attn.v_proj.weight": "model-00005-of-00007.safetensors",
        "model.layers.24.input_layernorm.weight": "model-00005-of-00007.safetensors",
        "model.layers.24.mlp.down_proj.weight": "model-00005-of-00007.safetensors",
        "model.layers.24.mlp.gate_proj.weight": "model-00005-of-00007.safetensors",
        "model.layers.24.mlp.up_proj.weight": "model-00005-of-00007.safetensors",
        "model.layers.24.post_attention_layernorm.weight": "model-00005-of-00007.safetensors",
        "model.layers.24.self_attn.k_proj.weight": "model-00005-of-00007.safetensors",
        "model.layers.24.self_attn.o_proj.weight": "model-00005-of-00007.safetensors",
        "model.layers.24.self_attn.q_proj.weight": "model-00005-of-00007.safetensors",
        "model.layers.24.self_attn.v_proj.weight": "model-00005-of-00007.safetensors",
        "model.layers.25.input_layernorm.weight": "model-00006-of-00007.safetensors",
        "model.layers.25.mlp.down_proj.weight": "model-00006-of-00007.safetensors",
        "model.layers.25.mlp.gate_proj.weight": "model-00005-of-00007.safetensors",
        "model.layers.25.mlp.up_proj.weight": "model-00005-of-00007.safetensors",
        "model.layers.25.post_attention_layernorm.weight": "model-00006-of-00007.safetensors",
        "model.layers.25.self_attn.k_proj.weight": "model-00005-of-00007.safetensors",
        "model.layers.25.self_attn.o_proj.weight": "model-00005-of-00007.safetensors",
        "model.layers.25.self_attn.q_proj.weight": "model-00005-of-00007.safetensors",
        "model.layers.25.self_attn.v_proj.weight": "model-00005-of-00007.safetensors",
        "model.layers.26.input_layernorm.weight": "model-00006-of-00007.safetensors",
        "model.layers.26.mlp.down_proj.weight": "model-00006-of-00007.safetensors",
        "model.layers.26.mlp.gate_proj.weight": "model-00006-of-00007.safetensors",
        "model.layers.26.mlp.up_proj.weight": "model-00006-of-00007.safetensors",
        "model.layers.26.post_attention_layernorm.weight": "model-00006-of-00007.safetensors",
        "model.layers.26.self_attn.k_proj.weight": "model-00006-of-00007.safetensors",
        "model.layers.26.self_attn.o_proj.weight": "model-00006-of-00007.safetensors",
        "model.layers.26.self_attn.q_proj.weight": "model-00006-of-00007.safetensors",
        "model.layers.26.self_attn.v_proj.weight": "model-00006-of-00007.safetensors",
        "model.layers.27.input_layernorm.weight": "model-00006-of-00007.safetensors",
        "model.layers.27.mlp.down_proj.weight": "model-00006-of-00007.safetensors",
        "model.layers.27.mlp.gate_proj.weight": "model-00006-of-00007.safetensors",
        "model.layers.27.mlp.up_proj.weight": "model-00006-of-00007.safetensors",
        "model.layers.27.post_attention_layernorm.weight": "model-00006-of-00007.safetensors",
        "model.layers.27.self_attn.k_proj.weight": "model-00006-of-00007.safetensors",
        "model.layers.27.self_attn.o_proj.weight": "model-00006-of-00007.safetensors",
        "model.layers.27.self_attn.q_proj.weight": "model-00006-of-00007.safetensors",
        "model.layers.27.self_attn.v_proj.weight": "model-00006-of-00007.safetensors",
        "model.layers.28.input_layernorm.weight": "model-00006-of-00007.safetensors",
        "model.layers.28.mlp.down_proj.weight": "model-00006-of-00007.safetensors",
        "model.layers.28.mlp.gate_proj.weight": "model-00006-of-00007.safetensors",
        "model.layers.28.mlp.up_proj.weight": "model-00006-of-00007.safetensors",
        "model.layers.28.post_attention_layernorm.weight": "model-00006-of-00007.safetensors",
        "model.layers.28.self_attn.k_proj.weight": "model-00006-of-00007.safetensors",
        "model.layers.28.self_attn.o_proj.weight": "model-00006-of-00007.safetensors",
        "model.layers.28.self_attn.q_proj.weight": "model-00006-of-00007.safetensors",
        "model.layers.28.self_attn.v_proj.weight": "model-00006-of-00007.safetensors",
        "model.layers.29.input_layernorm.weight": "model-00006-of-00007.safetensors",
        "model.layers.29.mlp.down_proj.weight": "model-00006-of-00007.safetensors",
        "model.layers.29.mlp.gate_proj.weight": "model-00006-of-00007.safetensors",
        "model.layers.29.mlp.up_proj.weight": "model-00006-of-00007.safetensors",
        "model.layers.29.post_attention_layernorm.weight": "model-00006-of-00007.safetensors",
        "model.layers.29.self_attn.k_proj.weight": "model-00006-of-00007.safetensors",
        "model.layers.29.self_attn.o_proj.weight": "model-00006-of-00007.safetensors",
        "model.layers.29.self_attn.q_proj.weight": "model-00006-of-00007.safetensors",
        "model.layers.29.self_attn.v_proj.weight": "model-00006-of-00007.safetensors",
        "model.layers.3.input_layernorm.weight": "model-00002-of-00007.safetensors",
        "model.layers.3.mlp.down_proj.weight": "model-00002-of-00007.safetensors",
        "model.layers.3.mlp.gate_proj.weight": "model-00002-of-00007.safetensors",
        "model.layers.3.mlp.up_proj.weight": "model-00002-of-00007.safetensors",
        "model.layers.3.post_attention_layernorm.weight": "model-00002-of-00007.safetensors",
        "model.layers.3.self_attn.k_proj.weight": "model-00001-of-00007.safetensors",
        "model.layers.3.self_attn.o_proj.weight": "model-00001-of-00007.safetensors",
        "model.layers.3.self_attn.q_proj.weight": "model-00001-of-00007.safetensors",
        "model.layers.3.self_attn.v_proj.weight": "model-00001-of-00007.safetensors",
        "model.layers.30.input_layernorm.weight": "model-00006-of-00007.safetensors",
        "model.layers.30.mlp.down_proj.weight": "model-00006-of-00007.safetensors",
        "model.layers.30.mlp.gate_proj.weight": "model-00006-of-00007.safetensors",
        "model.layers.30.mlp.up_proj.weight": "model-00006-of-00007.safetensors",
        "model.layers.30.post_attention_layernorm.weight": "model-00006-of-00007.safetensors",
        "model.layers.30.self_attn.k_proj.weight": "model-00006-of-00007.safetensors",
        "model.layers.30.self_attn.o_proj.weight": "model-00006-of-00007.safetensors",
        "model.layers.30.self_attn.q_proj.weight": "model-00006-of-00007.safetensors",
        "model.layers.30.self_attn.v_proj.weight": "model-00006-of-00007.safetensors",
        "model.layers.31.input_layernorm.weight": "model-00007-of-00007.safetensors",
        "model.layers.31.mlp.down_proj.weight": "model-00007-of-00007.safetensors",
        "model.layers.31.mlp.gate_proj.weight": "model-00006-of-00007.safetensors",
        "model.layers.31.mlp.up_proj.weight": "model-00007-of-00007.safetensors",
        "model.layers.31.post_attention_layernorm.weight": "model-00007-of-00007.safetensors",
        "model.layers.31.self_attn.k_proj.weight": "model-00006-of-00007.safetensors",
        "model.layers.31.self_attn.o_proj.weight": "model-00006-of-00007.safetensors",
        "model.layers.31.self_attn.q_proj.weight": "model-00006-of-00007.safetensors",
        "model.layers.31.self_attn.v_proj.weight": "model-00006-of-00007.safetensors",
        "model.layers.4.input_layernorm.weight": "model-00002-of-00007.safetensors",
        "model.layers.4.mlp.down_proj.weight": "model-00002-of-00007.safetensors",
        "model.layers.4.mlp.gate_proj.weight": "model-00002-of-00007.safetensors",
        "model.layers.4.mlp.up_proj.weight": "model-00002-of-00007.safetensors",
        "model.layers.4.post_attention_layernorm.weight": "model-00002-of-00007.safetensors",
        "model.layers.4.self_attn.k_proj.weight": "model-00002-of-00007.safetensors",
        "model.layers.4.self_attn.o_proj.weight": "model-00002-of-00007.safetensors",
        "model.layers.4.self_attn.q_proj.weight": "model-00002-of-00007.safetensors",
        "model.layers.4.self_attn.v_proj.weight": "model-00002-of-00007.safetensors",
        "model.layers.5.input_layernorm.weight": "model-00002-of-00007.safetensors",
        "model.layers.5.mlp.down_proj.weight": "model-00002-of-00007.safetensors",
        "model.layers.5.mlp.gate_proj.weight": "model-00002-of-00007.safetensors",
        "model.layers.5.mlp.up_proj.weight": "model-00002-of-00007.safetensors",
        "model.layers.5.post_attention_layernorm.weight": "model-00002-of-00007.safetensors",
        "model.layers.5.self_attn.k_proj.weight": "model-00002-of-00007.safetensors",
        "model.layers.5.self_attn.o_proj.weight": "model-00002-of-00007.safetensors",
        "model.layers.5.self_attn.q_proj.weight": "model-00002-of-00007.safetensors",
        "model.layers.5.self_attn.v_proj.weight": "model-00002-of-00007.safetensors",
        "model.layers.6.input_layernorm.weight": "model-00002-of-00007.safetensors",
        "model.layers.6.mlp.down_proj.weight": "model-00002-of-00007.safetensors",
        "model.layers.6.mlp.gate_proj.weight": "model-00002-of-00007.safetensors",
        "model.layers.6.mlp.up_proj.weight": "model-00002-of-00007.safetensors",
        "model.layers.6.post_attention_layernorm.weight": "model-00002-of-00007.safetensors",
        "model.layers.6.self_attn.k_proj.weight": "model-00002-of-00007.safetensors",
        "model.layers.6.self_attn.o_proj.weight": "model-00002-of-00007.safetensors",
        "model.layers.6.self_attn.q_proj.weight": "model-00002-of-00007.safetensors",
        "model.layers.6.self_attn.v_proj.weight": "model-00002-of-00007.safetensors",
        "model.layers.7.input_layernorm.weight": "model-00002-of-00007.safetensors",
        "model.layers.7.mlp.down_proj.weight": "model-00002-of-00007.safetensors",
        "model.layers.7.mlp.gate_proj.weight": "model-00002-of-00007.safetensors",
        "model.layers.7.mlp.up_proj.weight": "model-00002-of-00007.safetensors",
        "model.layers.7.post_attention_layernorm.weight": "model-00002-of-00007.safetensors",
        "model.layers.7.self_attn.k_proj.weight": "model-00002-of-00007.safetensors",
        "model.layers.7.self_attn.o_proj.weight": "model-00002-of-00007.safetensors",
        "model.layers.7.self_attn.q_proj.weight": "model-00002-of-00007.safetensors",
        "model.layers.7.self_attn.v_proj.weight": "model-00002-of-00007.safetensors",
        "model.layers.8.input_layernorm.weight": "model-00003-of-00007.safetensors",
        "model.layers.8.mlp.down_proj.weight": "model-00003-of-00007.safetensors",
        "model.layers.8.mlp.gate_proj.weight": "model-00002-of-00007.safetensors",
        "model.layers.8.mlp.up_proj.weight": "model-00002-of-00007.safetensors",
|
||||
"model.layers.8.post_attention_layernorm.weight": "model-00003-of-00007.safetensors",
|
||||
"model.layers.8.self_attn.k_proj.weight": "model-00002-of-00007.safetensors",
|
||||
"model.layers.8.self_attn.o_proj.weight": "model-00002-of-00007.safetensors",
|
||||
"model.layers.8.self_attn.q_proj.weight": "model-00002-of-00007.safetensors",
|
||||
"model.layers.8.self_attn.v_proj.weight": "model-00002-of-00007.safetensors",
|
||||
"model.layers.9.input_layernorm.weight": "model-00003-of-00007.safetensors",
|
||||
"model.layers.9.mlp.down_proj.weight": "model-00003-of-00007.safetensors",
|
||||
"model.layers.9.mlp.gate_proj.weight": "model-00003-of-00007.safetensors",
|
||||
"model.layers.9.mlp.up_proj.weight": "model-00003-of-00007.safetensors",
|
||||
"model.layers.9.post_attention_layernorm.weight": "model-00003-of-00007.safetensors",
|
||||
"model.layers.9.self_attn.k_proj.weight": "model-00003-of-00007.safetensors",
|
||||
"model.layers.9.self_attn.o_proj.weight": "model-00003-of-00007.safetensors",
|
||||
"model.layers.9.self_attn.q_proj.weight": "model-00003-of-00007.safetensors",
|
||||
"model.layers.9.self_attn.v_proj.weight": "model-00003-of-00007.safetensors",
|
||||
"model.norm.weight": "model-00007-of-00007.safetensors"
|
||||
}
|
||||
}
|
||||
23
special_tokens_map.json
Normal file
@@ -0,0 +1,23 @@
{
  "bos_token": {
    "content": "<|begin_of_text|>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "eos_token": {
    "content": "<|end_of_text|>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "pad_token": {
    "content": "<|end_of_text|>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  }
}
3
tokenizer.json
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:3c5cf44023714fb39b05e71e425f8d7b92805ff73f7988b083b8c87f0bf87393
size 17209961
2064
tokenizer_config.json
Normal file
File diff suppressed because it is too large
9
train_results.json
Normal file
@@ -0,0 +1,9 @@
{
  "epoch": 0.9989528795811519,
  "total_flos": 0.0,
  "train_loss": 4.32008807264284,
  "train_runtime": 6099.6278,
  "train_samples": 61135,
  "train_samples_per_second": 10.023,
  "train_steps_per_second": 0.078
}
895
trainer_state.json
Normal file
@@ -0,0 +1,895 @@
{
|
||||
"best_global_step": null,
|
||||
"best_metric": null,
|
||||
"best_model_checkpoint": null,
|
||||
"epoch": 0.9989528795811519,
|
||||
"eval_steps": 200,
|
||||
"global_step": 477,
|
||||
"is_hyper_param_search": false,
|
||||
"is_local_process_zero": true,
|
||||
"is_world_process_zero": true,
|
||||
"log_history": [
|
||||
{
|
||||
"epoch": 0.0020942408376963353,
|
||||
"grad_norm": 286.114501953125,
|
||||
"learning_rate": 0.0,
|
||||
"logits/chosen": -0.5995081663131714,
|
||||
"logits/rejected": -0.6144353747367859,
|
||||
"logps/chosen": -267.5272216796875,
|
||||
"logps/ref_chosen": -267.5935363769531,
|
||||
"logps/ref_rejected": -204.2306671142578,
|
||||
"logps/rejected": -204.23907470703125,
|
||||
"loss": 5.5602,
|
||||
"r_dpo/chosen_len": 257.75,
|
||||
"r_dpo/length_delta": 47.875,
|
||||
"r_dpo/regularization_term": 0.0,
|
||||
"r_dpo/rejected_len": 209.875,
|
||||
"step": 1
|
||||
},
|
||||
{
|
||||
"epoch": 0.020942408376963352,
|
||||
"grad_norm": 265.7362365722656,
|
||||
"learning_rate": 9.375e-08,
|
||||
"logits/chosen": -0.6309213042259216,
|
||||
"logits/rejected": -0.63545161485672,
|
||||
"logps/chosen": -296.74920654296875,
|
||||
"logps/ref_chosen": -296.72576904296875,
|
||||
"logps/ref_rejected": -258.9681396484375,
|
||||
"logps/rejected": -259.0067443847656,
|
||||
"loss": 5.5477,
|
||||
"r_dpo/chosen_len": 291.8680419921875,
|
||||
"r_dpo/length_delta": 49.76388931274414,
|
||||
"r_dpo/regularization_term": 0.0,
|
||||
"r_dpo/rejected_len": 242.1041717529297,
|
||||
"step": 10
|
||||
},
|
||||
{
|
||||
"epoch": 0.041884816753926704,
|
||||
"grad_norm": 296.5692138671875,
|
||||
"learning_rate": 1.9791666666666664e-07,
|
||||
"logits/chosen": -0.597561240196228,
|
||||
"logits/rejected": -0.6285912990570068,
|
||||
"logps/chosen": -297.9436340332031,
|
||||
"logps/ref_chosen": -297.9349365234375,
|
||||
"logps/ref_rejected": -256.9902648925781,
|
||||
"logps/rejected": -256.98876953125,
|
||||
"loss": 5.5369,
|
||||
"r_dpo/chosen_len": 291.29998779296875,
|
||||
"r_dpo/length_delta": 52.89374923706055,
|
||||
"r_dpo/regularization_term": 0.0,
|
||||
"r_dpo/rejected_len": 238.40625,
|
||||
"step": 20
|
||||
},
|
||||
{
|
||||
"epoch": 0.06282722513089005,
|
||||
"grad_norm": 279.9869384765625,
|
||||
"learning_rate": 3.020833333333333e-07,
|
||||
"logits/chosen": -0.6136946678161621,
|
||||
"logits/rejected": -0.605636477470398,
|
||||
"logps/chosen": -278.4833679199219,
|
||||
"logps/ref_chosen": -278.64752197265625,
|
||||
"logps/ref_rejected": -249.309814453125,
|
||||
"logps/rejected": -249.22158813476562,
|
||||
"loss": 5.495,
|
||||
"r_dpo/chosen_len": 270.8812561035156,
|
||||
"r_dpo/length_delta": 25.228124618530273,
|
||||
"r_dpo/regularization_term": 0.0,
|
||||
"r_dpo/rejected_len": 245.6531219482422,
|
||||
"step": 30
|
||||
},
|
||||
{
|
||||
"epoch": 0.08376963350785341,
|
||||
"grad_norm": 258.0906677246094,
|
||||
"learning_rate": 4.0625e-07,
|
||||
"logits/chosen": -0.6181729435920715,
|
||||
"logits/rejected": -0.6429071426391602,
|
||||
"logps/chosen": -282.7382507324219,
|
||||
"logps/ref_chosen": -283.49981689453125,
|
||||
"logps/ref_rejected": -265.32733154296875,
|
||||
"logps/rejected": -265.03875732421875,
|
||||
"loss": 5.3372,
|
||||
"r_dpo/chosen_len": 281.43438720703125,
|
||||
"r_dpo/length_delta": 33.34375,
|
||||
"r_dpo/regularization_term": 0.0,
|
||||
"r_dpo/rejected_len": 248.0906219482422,
|
||||
"step": 40
|
||||
},
|
||||
{
|
||||
"epoch": 0.10471204188481675,
|
||||
"grad_norm": 267.5690002441406,
|
||||
"learning_rate": 4.999932966293553e-07,
|
||||
"logits/chosen": -0.6292977333068848,
|
||||
"logits/rejected": -0.6698824763298035,
|
||||
"logps/chosen": -278.5817565917969,
|
||||
"logps/ref_chosen": -280.224365234375,
|
||||
"logps/ref_rejected": -274.3541259765625,
|
||||
"logps/rejected": -273.8267517089844,
|
||||
"loss": 5.1553,
|
||||
"r_dpo/chosen_len": 290.32501220703125,
|
||||
"r_dpo/length_delta": 35.11249923706055,
|
||||
"r_dpo/regularization_term": 0.0,
|
||||
"r_dpo/rejected_len": 255.21249389648438,
|
||||
"step": 50
|
||||
},
|
||||
{
|
||||
"epoch": 0.1256544502617801,
|
||||
"grad_norm": 258.2756652832031,
|
||||
"learning_rate": 4.991893270335525e-07,
|
||||
"logits/chosen": -0.6408439874649048,
|
||||
"logits/rejected": -0.6542555093765259,
|
||||
"logps/chosen": -278.864501953125,
|
||||
"logps/ref_chosen": -281.12664794921875,
|
||||
"logps/ref_rejected": -259.86456298828125,
|
||||
"logps/rejected": -259.77728271484375,
|
||||
"loss": 4.9283,
|
||||
"r_dpo/chosen_len": 273.953125,
|
||||
"r_dpo/length_delta": 29.084375381469727,
|
||||
"r_dpo/regularization_term": 0.0,
|
||||
"r_dpo/rejected_len": 244.86874389648438,
|
||||
"step": 60
|
||||
},
|
||||
{
|
||||
"epoch": 0.14659685863874344,
|
||||
"grad_norm": 233.6326446533203,
|
||||
"learning_rate": 4.970496218214204e-07,
|
||||
"logits/chosen": -0.7015193700790405,
|
||||
"logits/rejected": -0.7072200179100037,
|
||||
"logps/chosen": -284.4013977050781,
|
||||
"logps/ref_chosen": -287.71063232421875,
|
||||
"logps/ref_rejected": -276.839599609375,
|
||||
"logps/rejected": -276.4761047363281,
|
||||
"loss": 4.76,
|
||||
"r_dpo/chosen_len": 267.4937438964844,
|
||||
"r_dpo/length_delta": 14.484375,
|
||||
"r_dpo/regularization_term": 0.0,
|
||||
"r_dpo/rejected_len": 253.00936889648438,
|
||||
"step": 70
|
||||
},
|
||||
{
|
||||
"epoch": 0.16753926701570682,
|
||||
"grad_norm": 234.1904296875,
|
||||
"learning_rate": 4.935856505068998e-07,
|
||||
"logits/chosen": -0.6775287389755249,
|
||||
"logits/rejected": -0.6742324829101562,
|
||||
"logps/chosen": -275.77288818359375,
|
||||
"logps/ref_chosen": -280.123046875,
|
||||
"logps/ref_rejected": -258.8989562988281,
|
||||
"logps/rejected": -258.0046691894531,
|
||||
"loss": 4.6236,
|
||||
"r_dpo/chosen_len": 267.4781188964844,
|
||||
"r_dpo/length_delta": 32.46562576293945,
|
||||
"r_dpo/regularization_term": 0.0,
|
||||
"r_dpo/rejected_len": 235.0124969482422,
|
||||
"step": 80
|
||||
},
|
||||
{
|
||||
"epoch": 0.18848167539267016,
|
||||
"grad_norm": 249.3949737548828,
|
||||
"learning_rate": 4.8881598109976e-07,
|
||||
"logits/chosen": -0.6876357197761536,
|
||||
"logits/rejected": -0.694031834602356,
|
||||
"logps/chosen": -273.43939208984375,
|
||||
"logps/ref_chosen": -278.02545166015625,
|
||||
"logps/ref_rejected": -251.0922393798828,
|
||||
"logps/rejected": -251.4302978515625,
|
||||
"loss": 4.5727,
|
||||
"r_dpo/chosen_len": 274.20623779296875,
|
||||
"r_dpo/length_delta": 44.97187423706055,
|
||||
"r_dpo/regularization_term": 0.0,
|
||||
"r_dpo/rejected_len": 229.234375,
|
||||
"step": 90
|
||||
},
|
||||
{
|
||||
"epoch": 0.2094240837696335,
|
||||
"grad_norm": 247.84217834472656,
|
||||
"learning_rate": 4.827661805750437e-07,
|
||||
"logits/chosen": -0.6776241660118103,
|
||||
"logits/rejected": -0.6956905126571655,
|
||||
"logps/chosen": -269.8210754394531,
|
||||
"logps/ref_chosen": -274.0089416503906,
|
||||
"logps/ref_rejected": -274.14447021484375,
|
||||
"logps/rejected": -275.0433349609375,
|
||||
"loss": 4.6252,
|
||||
"r_dpo/chosen_len": 275.3343811035156,
|
||||
"r_dpo/length_delta": 21.912500381469727,
|
||||
"r_dpo/regularization_term": 0.0,
|
||||
"r_dpo/rejected_len": 253.421875,
|
||||
"step": 100
|
||||
},
|
||||
{
|
||||
"epoch": 0.23036649214659685,
|
||||
"grad_norm": 227.6590576171875,
|
||||
"learning_rate": 4.75468677825789e-07,
|
||||
"logits/chosen": -0.701612114906311,
|
||||
"logits/rejected": -0.7261066436767578,
|
||||
"logps/chosen": -269.59173583984375,
|
||||
"logps/ref_chosen": -273.23333740234375,
|
||||
"logps/ref_rejected": -263.88787841796875,
|
||||
"logps/rejected": -266.48272705078125,
|
||||
"loss": 4.2996,
|
||||
"r_dpo/chosen_len": 283.43438720703125,
|
||||
"r_dpo/length_delta": 50.34375,
|
||||
"r_dpo/regularization_term": 0.0,
|
||||
"r_dpo/rejected_len": 233.0906219482422,
|
||||
"step": 110
|
||||
},
|
||||
{
|
||||
"epoch": 0.2513089005235602,
|
||||
"grad_norm": 243.6549835205078,
|
||||
"learning_rate": 4.669625898336438e-07,
|
||||
"logits/chosen": -0.7124736905097961,
|
||||
"logits/rejected": -0.7152305841445923,
|
||||
"logps/chosen": -267.23974609375,
|
||||
"logps/ref_chosen": -269.54656982421875,
|
||||
"logps/ref_rejected": -272.7981872558594,
|
||||
"logps/rejected": -275.8104553222656,
|
||||
"loss": 4.5034,
|
||||
"r_dpo/chosen_len": 264.7593688964844,
|
||||
"r_dpo/length_delta": 13.840624809265137,
|
||||
"r_dpo/regularization_term": 0.0,
|
||||
"r_dpo/rejected_len": 250.9187469482422,
|
||||
"step": 120
|
||||
},
|
||||
{
|
||||
"epoch": 0.27225130890052357,
|
||||
"grad_norm": 274.58111572265625,
|
||||
"learning_rate": 4.5729351198915705e-07,
|
||||
"logits/chosen": -0.7257631421089172,
|
||||
"logits/rejected": -0.7154949903488159,
|
||||
"logps/chosen": -272.1502990722656,
|
||||
"logps/ref_chosen": -275.03448486328125,
|
||||
"logps/ref_rejected": -276.39862060546875,
|
||||
"logps/rejected": -279.466796875,
|
||||
"loss": 4.445,
|
||||
"r_dpo/chosen_len": 266.625,
|
||||
"r_dpo/length_delta": 18.668750762939453,
|
||||
"r_dpo/regularization_term": 0.0,
|
||||
"r_dpo/rejected_len": 247.9562530517578,
|
||||
"step": 130
|
||||
},
|
||||
{
|
||||
"epoch": 0.2931937172774869,
|
||||
"grad_norm": 247.52589416503906,
|
||||
"learning_rate": 4.4651327368569684e-07,
|
||||
"logits/chosen": -0.7425049543380737,
|
||||
"logits/rejected": -0.7479813098907471,
|
||||
"logps/chosen": -273.00714111328125,
|
||||
"logps/ref_chosen": -276.0029602050781,
|
||||
"logps/ref_rejected": -255.9320526123047,
|
||||
"logps/rejected": -259.60784912109375,
|
||||
"loss": 4.4958,
|
||||
"r_dpo/chosen_len": 261.46875,
|
||||
"r_dpo/length_delta": 22.375,
|
||||
"r_dpo/regularization_term": 0.0,
|
||||
"r_dpo/rejected_len": 239.09375,
|
||||
"step": 140
|
||||
},
|
||||
{
|
||||
"epoch": 0.31413612565445026,
|
||||
"grad_norm": 264.3229675292969,
|
||||
"learning_rate": 4.346796604970912e-07,
|
||||
"logits/chosen": -0.7595945596694946,
|
||||
"logits/rejected": -0.7660830616950989,
|
||||
"logps/chosen": -294.779541015625,
|
||||
"logps/ref_chosen": -298.2093505859375,
|
||||
"logps/ref_rejected": -254.8907012939453,
|
||||
"logps/rejected": -258.84454345703125,
|
||||
"loss": 4.2489,
|
||||
"r_dpo/chosen_len": 283.84375,
|
||||
"r_dpo/length_delta": 48.359375,
|
||||
"r_dpo/regularization_term": 0.0,
|
||||
"r_dpo/rejected_len": 235.484375,
|
||||
"step": 150
|
||||
},
|
||||
{
|
||||
"epoch": 0.33507853403141363,
|
||||
"grad_norm": 212.1317138671875,
|
||||
"learning_rate": 4.218561044282098e-07,
|
||||
"logits/chosen": -0.7484012246131897,
|
||||
"logits/rejected": -0.7548493146896362,
|
||||
"logps/chosen": -280.19281005859375,
|
||||
"logps/ref_chosen": -281.94189453125,
|
||||
"logps/ref_rejected": -255.5653533935547,
|
||||
"logps/rejected": -261.9532470703125,
|
||||
"loss": 4.0494,
|
||||
"r_dpo/chosen_len": 267.828125,
|
||||
"r_dpo/length_delta": 41.368751525878906,
|
||||
"r_dpo/regularization_term": 0.0,
|
||||
"r_dpo/rejected_len": 226.45938110351562,
|
||||
"step": 160
|
||||
},
|
||||
{
|
||||
"epoch": 0.35602094240837695,
|
||||
"grad_norm": 263.6576843261719,
|
||||
"learning_rate": 4.081113438988443e-07,
|
||||
"logits/chosen": -0.7331063151359558,
|
||||
"logits/rejected": -0.7357569932937622,
|
||||
"logps/chosen": -287.6741638183594,
|
||||
"logps/ref_chosen": -288.32232666015625,
|
||||
"logps/ref_rejected": -239.85415649414062,
|
||||
"logps/rejected": -246.4853973388672,
|
||||
"loss": 4.1229,
|
||||
"r_dpo/chosen_len": 285.203125,
|
||||
"r_dpo/length_delta": 46.396873474121094,
|
||||
"r_dpo/regularization_term": 0.0,
|
||||
"r_dpo/rejected_len": 238.80624389648438,
|
||||
"step": 170
|
||||
},
|
||||
{
|
||||
"epoch": 0.3769633507853403,
|
||||
"grad_norm": 204.8552703857422,
|
||||
"learning_rate": 3.935190552834828e-07,
|
||||
"logits/chosen": -0.7205232977867126,
|
||||
"logits/rejected": -0.7394840121269226,
|
||||
"logps/chosen": -285.03363037109375,
|
||||
"logps/ref_chosen": -286.17889404296875,
|
||||
"logps/ref_rejected": -249.9820098876953,
|
||||
"logps/rejected": -256.336669921875,
|
||||
"loss": 4.2693,
|
||||
"r_dpo/chosen_len": 266.09063720703125,
|
||||
"r_dpo/length_delta": 40.12812423706055,
|
||||
"r_dpo/regularization_term": 0.0,
|
||||
"r_dpo/rejected_len": 225.96249389648438,
|
||||
"step": 180
|
||||
},
|
||||
{
|
||||
"epoch": 0.39790575916230364,
|
||||
"grad_norm": 227.1675567626953,
|
||||
"learning_rate": 3.781574579820464e-07,
|
||||
"logits/chosen": -0.7226096987724304,
|
||||
"logits/rejected": -0.7422462701797485,
|
||||
"logps/chosen": -279.1224365234375,
|
||||
"logps/ref_chosen": -280.9278259277344,
|
||||
"logps/ref_rejected": -254.3533477783203,
|
||||
"logps/rejected": -262.07611083984375,
|
||||
"loss": 4.0622,
|
||||
"r_dpo/chosen_len": 276.33123779296875,
|
||||
"r_dpo/length_delta": 41.993751525878906,
|
||||
"r_dpo/regularization_term": 0.0,
|
||||
"r_dpo/rejected_len": 234.33749389648438,
|
||||
"step": 190
|
||||
},
|
||||
{
|
||||
"epoch": 0.418848167539267,
|
||||
"grad_norm": 257.41168212890625,
|
||||
"learning_rate": 3.621088951385353e-07,
|
||||
"logits/chosen": -0.741692841053009,
|
||||
"logits/rejected": -0.7598770260810852,
|
||||
"logps/chosen": -250.05776977539062,
|
||||
"logps/ref_chosen": -253.1712188720703,
|
||||
"logps/ref_rejected": -241.90478515625,
|
||||
"logps/rejected": -246.9326629638672,
|
||||
"loss": 4.178,
|
||||
"r_dpo/chosen_len": 248.0749969482422,
|
||||
"r_dpo/length_delta": 28.131250381469727,
|
||||
"r_dpo/regularization_term": 0.0,
|
||||
"r_dpo/rejected_len": 219.94375610351562,
|
||||
"step": 200
|
||||
},
|
||||
{
|
||||
"epoch": 0.418848167539267,
|
||||
"eval_logits/chosen": -0.7478832006454468,
|
||||
"eval_logits/rejected": -0.7490243911743164,
|
||||
"eval_logps/chosen": -287.2392883300781,
|
||||
"eval_logps/ref_chosen": -289.1346435546875,
|
||||
"eval_logps/ref_rejected": -264.7782287597656,
|
||||
"eval_logps/rejected": -271.315185546875,
|
||||
"eval_loss": 0.5266098380088806,
|
||||
"eval_r_dpo/chosen_len": 291.2619934082031,
|
||||
"eval_r_dpo/length_delta": 42.86600112915039,
|
||||
"eval_r_dpo/regularization_term": 0.0,
|
||||
"eval_r_dpo/rejected_len": 248.39599609375,
|
||||
"eval_runtime": 81.9324,
|
||||
"eval_samples_per_second": 24.41,
|
||||
"eval_steps_per_second": 1.526,
|
||||
"step": 200
|
||||
},
|
||||
{
|
||||
"epoch": 0.4397905759162304,
|
||||
"grad_norm": 272.2992858886719,
|
||||
"learning_rate": 3.454593922550693e-07,
|
||||
"logits/chosen": -0.7123077511787415,
|
||||
"logits/rejected": -0.7341128587722778,
|
||||
"logps/chosen": -285.9898376464844,
|
||||
"logps/ref_chosen": -287.9228210449219,
|
||||
"logps/ref_rejected": -263.35595703125,
|
||||
"logps/rejected": -270.154541015625,
|
||||
"loss": 4.2219,
|
||||
"r_dpo/chosen_len": 280.3125,
|
||||
"r_dpo/length_delta": 36.68437576293945,
|
||||
"r_dpo/regularization_term": 0.0,
|
||||
"r_dpo/rejected_len": 243.6281280517578,
|
||||
"step": 210
|
||||
},
|
||||
{
|
||||
"epoch": 0.4607329842931937,
|
||||
"grad_norm": 244.5552520751953,
|
||||
"learning_rate": 3.2829819606729477e-07,
|
||||
"logits/chosen": -0.7547723054885864,
|
||||
"logits/rejected": -0.7528045773506165,
|
||||
"logps/chosen": -280.8878479003906,
|
||||
"logps/ref_chosen": -282.3331604003906,
|
||||
"logps/ref_rejected": -272.5645446777344,
|
||||
"logps/rejected": -279.7627868652344,
|
||||
"loss": 4.0545,
|
||||
"r_dpo/chosen_len": 261.359375,
|
||||
"r_dpo/length_delta": 17.865625381469727,
|
||||
"r_dpo/regularization_term": 0.0,
|
||||
"r_dpo/rejected_len": 243.49374389648438,
|
||||
"step": 220
|
||||
},
|
||||
{
|
||||
"epoch": 0.4816753926701571,
|
||||
"grad_norm": 290.22955322265625,
|
||||
"learning_rate": 3.1071729615293424e-07,
|
||||
"logits/chosen": -0.7329878211021423,
|
||||
"logits/rejected": -0.7301102876663208,
|
||||
"logps/chosen": -276.18194580078125,
|
||||
"logps/ref_chosen": -276.1485595703125,
|
||||
"logps/ref_rejected": -252.81198120117188,
|
||||
"logps/rejected": -262.63323974609375,
|
||||
"loss": 3.8953,
|
||||
"r_dpo/chosen_len": 264.43438720703125,
|
||||
"r_dpo/length_delta": 31.256250381469727,
|
||||
"r_dpo/regularization_term": 0.0,
|
||||
"r_dpo/rejected_len": 233.17813110351562,
|
||||
"step": 230
|
||||
},
|
||||
{
|
||||
"epoch": 0.5026178010471204,
|
||||
"grad_norm": 242.89169311523438,
|
||||
"learning_rate": 2.9281093183781403e-07,
|
||||
"logits/chosen": -0.7076805233955383,
|
||||
"logits/rejected": -0.729147732257843,
|
||||
"logps/chosen": -270.7227783203125,
|
||||
"logps/ref_chosen": -270.52520751953125,
|
||||
"logps/ref_rejected": -254.83334350585938,
|
||||
"logps/rejected": -263.5503234863281,
|
||||
"loss": 4.1412,
|
||||
"r_dpo/chosen_len": 271.81561279296875,
|
||||
"r_dpo/length_delta": 37.099998474121094,
|
||||
"r_dpo/regularization_term": 0.0,
|
||||
"r_dpo/rejected_len": 234.7156219482422,
|
||||
"step": 240
|
||||
},
|
||||
{
|
||||
"epoch": 0.5235602094240838,
|
||||
"grad_norm": 281.21990966796875,
|
||||
"learning_rate": 2.7467508704251135e-07,
|
||||
"logits/chosen": -0.7120658159255981,
|
||||
"logits/rejected": -0.7247544527053833,
|
||||
"logps/chosen": -288.3058166503906,
|
||||
"logps/ref_chosen": -288.9503173828125,
|
||||
"logps/ref_rejected": -265.0694580078125,
|
||||
"logps/rejected": -273.70220947265625,
|
||||
"loss": 4.2125,
|
||||
"r_dpo/chosen_len": 277.50311279296875,
|
||||
"r_dpo/length_delta": 41.103126525878906,
|
||||
"r_dpo/regularization_term": 0.0,
|
||||
"r_dpo/rejected_len": 236.39999389648438,
|
||||
"step": 250
|
||||
},
|
||||
{
|
||||
"epoch": 0.5445026178010471,
|
||||
"grad_norm": 305.6956787109375,
|
||||
"learning_rate": 2.5640697577740815e-07,
|
||||
"logits/chosen": -0.7282508611679077,
|
||||
"logits/rejected": -0.7263582944869995,
|
||||
"logps/chosen": -287.1412353515625,
|
||||
"logps/ref_chosen": -288.6393737792969,
|
||||
"logps/ref_rejected": -265.315673828125,
|
||||
"logps/rejected": -273.0039367675781,
|
||||
"loss": 4.111,
|
||||
"r_dpo/chosen_len": 271.48126220703125,
|
||||
"r_dpo/length_delta": 24.390625,
|
||||
"r_dpo/regularization_term": 0.0,
|
||||
"r_dpo/rejected_len": 247.0906219482422,
|
||||
"step": 260
|
||||
},
|
||||
{
|
||||
"epoch": 0.5654450261780105,
|
||||
"grad_norm": 248.96470642089844,
|
||||
"learning_rate": 2.381045210440644e-07,
|
||||
"logits/chosen": -0.7059388756752014,
|
||||
"logits/rejected": -0.7142220735549927,
|
||||
"logps/chosen": -279.7881774902344,
|
||||
"logps/ref_chosen": -280.1373596191406,
|
||||
"logps/ref_rejected": -264.84295654296875,
|
||||
"logps/rejected": -273.27484130859375,
|
||||
"loss": 4.011,
|
||||
"r_dpo/chosen_len": 272.2875061035156,
|
||||
"r_dpo/length_delta": 19.956249237060547,
|
||||
"r_dpo/regularization_term": 0.0,
|
||||
"r_dpo/rejected_len": 252.3312530517578,
|
||||
"step": 270
|
||||
},
|
||||
{
|
||||
"epoch": 0.5863874345549738,
|
||||
"grad_norm": 216.84811401367188,
|
||||
"learning_rate": 2.1986582993616925e-07,
|
||||
"logits/chosen": -0.7371554374694824,
|
||||
"logits/rejected": -0.7457467317581177,
|
||||
"logps/chosen": -300.7189636230469,
|
||||
"logps/ref_chosen": -301.8018798828125,
|
||||
"logps/ref_rejected": -254.75112915039062,
|
||||
"logps/rejected": -263.86175537109375,
|
||||
"loss": 3.9179,
|
||||
"r_dpo/chosen_len": 285.44061279296875,
|
||||
"r_dpo/length_delta": 52.962501525878906,
|
||||
"r_dpo/regularization_term": 0.0,
|
||||
"r_dpo/rejected_len": 232.47811889648438,
|
||||
"step": 280
|
||||
},
|
||||
{
|
||||
"epoch": 0.6073298429319371,
|
||||
"grad_norm": 258.0394592285156,
|
||||
"learning_rate": 2.0178866775369774e-07,
|
||||
"logits/chosen": -0.7233304977416992,
|
||||
"logits/rejected": -0.7041932344436646,
|
||||
"logps/chosen": -303.56158447265625,
|
||||
"logps/ref_chosen": -302.79217529296875,
|
||||
"logps/ref_rejected": -292.9220275878906,
|
||||
"logps/rejected": -301.3799743652344,
|
||||
"loss": 4.2767,
|
||||
"r_dpo/chosen_len": 294.90625,
|
||||
"r_dpo/length_delta": 20.774999618530273,
|
||||
"r_dpo/regularization_term": 0.0,
|
||||
"r_dpo/rejected_len": 274.1312561035156,
|
||||
"step": 290
|
||||
},
|
||||
{
|
||||
"epoch": 0.6282722513089005,
|
||||
"grad_norm": 259.0774230957031,
|
||||
"learning_rate": 1.839699339491937e-07,
|
||||
"logits/chosen": -0.7432774305343628,
|
||||
"logits/rejected": -0.7294503450393677,
|
||||
"logps/chosen": -273.68316650390625,
|
||||
"logps/ref_chosen": -275.8238220214844,
|
||||
"logps/ref_rejected": -264.05743408203125,
|
||||
"logps/rejected": -269.7857666015625,
|
||||
"loss": 4.1741,
|
||||
"r_dpo/chosen_len": 266.859375,
|
||||
"r_dpo/length_delta": 20.734375,
|
||||
"r_dpo/regularization_term": 0.0,
|
||||
"r_dpo/rejected_len": 246.125,
|
||||
"step": 300
|
||||
},
|
||||
{
|
||||
"epoch": 0.6492146596858639,
|
||||
"grad_norm": 330.3496398925781,
|
||||
"learning_rate": 1.6650514271527465e-07,
|
||||
"logits/chosen": -0.7020508646965027,
|
||||
"logits/rejected": -0.7177096605300903,
|
||||
"logps/chosen": -294.5872802734375,
|
||||
"logps/ref_chosen": -296.6716003417969,
|
||||
"logps/ref_rejected": -278.68426513671875,
|
||||
"logps/rejected": -285.25396728515625,
|
||||
"loss": 4.015,
|
||||
"r_dpo/chosen_len": 292.91876220703125,
|
||||
"r_dpo/length_delta": 32.55937576293945,
|
||||
"r_dpo/regularization_term": 0.0,
|
||||
"r_dpo/rejected_len": 260.359375,
|
||||
"step": 310
|
||||
},
|
||||
{
|
||||
"epoch": 0.6701570680628273,
|
||||
"grad_norm": 258.3315734863281,
|
||||
"learning_rate": 1.4948791099758052e-07,
|
||||
"logits/chosen": -0.7137759923934937,
|
||||
"logits/rejected": -0.7293487191200256,
|
||||
"logps/chosen": -283.85919189453125,
|
||||
"logps/ref_chosen": -284.1717529296875,
|
||||
"logps/ref_rejected": -261.2606506347656,
|
||||
"logps/rejected": -270.0918884277344,
|
||||
"loss": 4.014,
|
||||
"r_dpo/chosen_len": 279.90313720703125,
|
||||
"r_dpo/length_delta": 44.537498474121094,
|
||||
"r_dpo/regularization_term": 0.0,
|
||||
"r_dpo/rejected_len": 235.36563110351562,
|
||||
"step": 320
|
||||
},
|
||||
{
|
||||
"epoch": 0.6910994764397905,
|
||||
"grad_norm": 268.5577087402344,
|
||||
"learning_rate": 1.3300945667758012e-07,
|
||||
"logits/chosen": -0.7268847227096558,
|
||||
"logits/rejected": -0.7360986471176147,
|
||||
"logps/chosen": -284.22406005859375,
|
||||
"logps/ref_chosen": -283.40338134765625,
|
||||
"logps/ref_rejected": -271.27569580078125,
|
||||
"logps/rejected": -280.9209899902344,
|
||||
"loss": 4.0598,
|
||||
"r_dpo/chosen_len": 267.67498779296875,
|
||||
"r_dpo/length_delta": 13.015625,
|
||||
"r_dpo/regularization_term": 0.0,
|
||||
"r_dpo/rejected_len": 254.6593780517578,
|
||||
"step": 330
|
||||
},
|
||||
{
|
||||
"epoch": 0.7120418848167539,
|
||||
"grad_norm": 247.73794555664062,
|
||||
"learning_rate": 1.1715810961514072e-07,
|
||||
"logits/chosen": -0.7288721203804016,
|
||||
"logits/rejected": -0.7343642115592957,
|
||||
"logps/chosen": -263.1542053222656,
|
||||
"logps/ref_chosen": -259.82537841796875,
|
||||
"logps/ref_rejected": -243.50222778320312,
|
||||
"logps/rejected": -255.48574829101562,
|
||||
"loss": 4.0065,
|
||||
"r_dpo/chosen_len": 256.11248779296875,
|
||||
"r_dpo/length_delta": 32.546875,
|
||||
"r_dpo/regularization_term": 0.0,
|
||||
"r_dpo/rejected_len": 223.5656280517578,
|
||||
"step": 340
|
||||
},
|
||||
{
|
||||
"epoch": 0.7329842931937173,
|
||||
"grad_norm": 270.27557373046875,
|
||||
"learning_rate": 1.0201883817182949e-07,
|
||||
"logits/chosen": -0.7445515990257263,
|
||||
"logits/rejected": -0.7549839615821838,
|
||||
"logps/chosen": -299.8614501953125,
|
||||
"logps/ref_chosen": -298.24725341796875,
|
||||
"logps/ref_rejected": -272.657958984375,
|
||||
"logps/rejected": -283.16290283203125,
|
||||
"loss": 4.0568,
|
||||
"r_dpo/chosen_len": 281.4624938964844,
|
||||
"r_dpo/length_delta": 45.275001525878906,
|
||||
"r_dpo/regularization_term": 0.0,
|
||||
"r_dpo/rejected_len": 236.1875,
|
||||
"step": 350
|
||||
},
|
||||
{
|
||||
"epoch": 0.7539267015706806,
|
||||
"grad_norm": 239.69332885742188,
|
||||
"learning_rate": 8.76727937529367e-08,
|
||||
"logits/chosen": -0.7207680344581604,
|
||||
"logits/rejected": -0.7342051863670349,
|
||||
"logps/chosen": -282.36883544921875,
|
||||
"logps/ref_chosen": -281.881103515625,
|
||||
"logps/ref_rejected": -265.4746398925781,
|
||||
"logps/rejected": -275.1003112792969,
|
||||
"loss": 4.1996,
|
||||
"r_dpo/chosen_len": 272.64373779296875,
|
||||
"r_dpo/length_delta": 30.071874618530273,
|
||||
"r_dpo/regularization_term": 0.0,
|
||||
"r_dpo/rejected_len": 242.57186889648438,
|
||||
"step": 360
|
||||
},
|
||||
{
|
||||
"epoch": 0.774869109947644,
|
||||
"grad_norm": 198.85562133789062,
|
||||
"learning_rate": 7.419687580962222e-08,
|
||||
"logits/chosen": -0.7548901438713074,
|
||||
"logits/rejected": -0.7558341026306152,
|
||||
"logps/chosen": -302.01898193359375,
|
||||
"logps/ref_chosen": -302.17822265625,
|
||||
"logps/ref_rejected": -265.92877197265625,
|
||||
"logps/rejected": -275.6526794433594,
|
||||
"loss": 3.8979,
|
||||
"r_dpo/chosen_len": 273.88751220703125,
|
||||
"r_dpo/length_delta": 33.759376525878906,
|
||||
"r_dpo/regularization_term": 0.0,
|
||||
"r_dpo/rejected_len": 240.1281280517578,
|
||||
"step": 370
|
||||
},
|
||||
{
|
||||
"epoch": 0.7958115183246073,
|
||||
"grad_norm": 286.9093933105469,
|
||||
"learning_rate": 6.166331963291519e-08,
|
||||
"logits/chosen": -0.7173658013343811,
|
||||
"logits/rejected": -0.7228809595108032,
|
||||
"logps/chosen": -300.5129699707031,
|
||||
"logps/ref_chosen": -301.07965087890625,
|
||||
"logps/ref_rejected": -266.7089538574219,
|
||||
"logps/rejected": -274.7568359375,
|
||||
"loss": 4.1149,
|
||||
"r_dpo/chosen_len": 286.75311279296875,
|
||||
"r_dpo/length_delta": 33.53125,
|
||||
"r_dpo/regularization_term": 0.0,
|
||||
"r_dpo/rejected_len": 253.2218780517578,
|
||||
"step": 380
|
||||
},
|
||||
{
|
||||
"epoch": 0.8167539267015707,
|
||||
"grad_norm": 256.02996826171875,
|
||||
"learning_rate": 5.013930914912476e-08,
|
||||
"logits/chosen": -0.7370638251304626,
|
||||
"logits/rejected": -0.7369734048843384,
|
||||
"logps/chosen": -295.3925476074219,
|
||||
"logps/ref_chosen": -296.6472473144531,
|
||||
"logps/ref_rejected": -278.953857421875,
|
||||
"logps/rejected": -287.25274658203125,
|
||||
"loss": 4.0032,
|
||||
"r_dpo/chosen_len": 287.91876220703125,
|
||||
"r_dpo/length_delta": 30.081249237060547,
|
||||
"r_dpo/regularization_term": 0.0,
|
||||
"r_dpo/rejected_len": 257.8374938964844,
|
||||
"step": 390
|
||||
},
|
||||
{
|
||||
"epoch": 0.837696335078534,
|
||||
"grad_norm": 229.88453674316406,
|
||||
"learning_rate": 3.968661679220467e-08,
|
||||
"logits/chosen": -0.726976752281189,
|
||||
"logits/rejected": -0.7373378872871399,
|
||||
"logps/chosen": -295.95574951171875,
|
||||
"logps/ref_chosen": -296.6556091308594,
|
||||
"logps/ref_rejected": -256.9266662597656,
|
||||
"logps/rejected": -264.715576171875,
|
||||
"loss": 4.0423,
|
||||
"r_dpo/chosen_len": 278.96875,
|
||||
"r_dpo/length_delta": 39.60625076293945,
|
||||
"r_dpo/regularization_term": 0.0,
|
||||
"r_dpo/rejected_len": 239.3625030517578,
|
||||
"step": 400
|
||||
},
|
    {
      "epoch": 0.837696335078534,
      "eval_logits/chosen": -0.7442260980606079,
      "eval_logits/rejected": -0.7460020780563354,
      "eval_logps/chosen": -288.06787109375,
      "eval_logps/ref_chosen": -289.1346435546875,
      "eval_logps/ref_rejected": -264.7782287597656,
      "eval_logps/rejected": -272.9751281738281,
      "eval_loss": 0.5080092549324036,
      "eval_r_dpo/chosen_len": 291.2619934082031,
      "eval_r_dpo/length_delta": 42.86600112915039,
      "eval_r_dpo/regularization_term": 0.0,
      "eval_r_dpo/rejected_len": 248.39599609375,
      "eval_runtime": 82.0059,
      "eval_samples_per_second": 24.388,
      "eval_steps_per_second": 1.524,
      "step": 400
    },
    {
      "epoch": 0.8586387434554974,
      "grad_norm": 257.00933837890625,
      "learning_rate": 3.036127238347164e-08,
      "logits/chosen": -0.7183164358139038,
      "logits/rejected": -0.7075697183609009,
      "logps/chosen": -290.19342041015625,
      "logps/ref_chosen": -290.08154296875,
      "logps/ref_rejected": -272.56988525390625,
      "logps/rejected": -280.45965576171875,
      "loss": 4.0132,
      "r_dpo/chosen_len": 282.40625,
      "r_dpo/length_delta": 26.265625,
      "r_dpo/regularization_term": 0.0,
      "r_dpo/rejected_len": 256.140625,
      "step": 410
    },
    {
      "epoch": 0.8795811518324608,
      "grad_norm": 258.52166748046875,
      "learning_rate": 2.2213262793589482e-08,
      "logits/chosen": -0.7189906239509583,
      "logits/rejected": -0.7180750966072083,
      "logps/chosen": -306.213623046875,
      "logps/ref_chosen": -307.40240478515625,
      "logps/ref_rejected": -279.85760498046875,
      "logps/rejected": -287.7927551269531,
      "loss": 3.9865,
      "r_dpo/chosen_len": 296.8343811035156,
      "r_dpo/length_delta": 37.165626525878906,
      "r_dpo/regularization_term": 0.0,
      "r_dpo/rejected_len": 259.66876220703125,
      "step": 420
    },
    {
      "epoch": 0.900523560209424,
      "grad_norm": 225.66854858398438,
      "learning_rate": 1.5286263996730026e-08,
      "logits/chosen": -0.7201143503189087,
      "logits/rejected": -0.7314451932907104,
      "logps/chosen": -297.2125244140625,
      "logps/ref_chosen": -297.7133483886719,
      "logps/ref_rejected": -266.862060546875,
      "logps/rejected": -276.1941223144531,
      "loss": 4.0823,
      "r_dpo/chosen_len": 290.703125,
      "r_dpo/length_delta": 48.868751525878906,
      "r_dpo/regularization_term": 0.0,
      "r_dpo/rejected_len": 241.83438110351562,
      "step": 430
    },
    {
      "epoch": 0.9214659685863874,
      "grad_norm": 296.7327880859375,
      "learning_rate": 9.617406953185136e-09,
      "logits/chosen": -0.7148677706718445,
      "logits/rejected": -0.7244837880134583,
      "logps/chosen": -293.4369201660156,
      "logps/ref_chosen": -293.84521484375,
      "logps/ref_rejected": -289.6698303222656,
      "logps/rejected": -297.15863037109375,
      "loss": 4.2486,
      "r_dpo/chosen_len": 285.3656311035156,
      "r_dpo/length_delta": 12.868749618530273,
      "r_dpo/regularization_term": 0.0,
      "r_dpo/rejected_len": 272.49688720703125,
      "step": 440
    },
    {
      "epoch": 0.9424083769633508,
      "grad_norm": 255.14761352539062,
      "learning_rate": 5.2370785753763356e-09,
      "logits/chosen": -0.735411524772644,
      "logits/rejected": -0.742519199848175,
      "logps/chosen": -295.8509826660156,
      "logps/ref_chosen": -297.1932373046875,
      "logps/ref_rejected": -262.9091796875,
      "logps/rejected": -271.23907470703125,
      "loss": 3.8827,
      "r_dpo/chosen_len": 282.81561279296875,
      "r_dpo/length_delta": 40.631248474121094,
      "r_dpo/regularization_term": 0.0,
      "r_dpo/rejected_len": 242.1843719482422,
      "step": 450
    },
    {
      "epoch": 0.9633507853403142,
      "grad_norm": 254.89694213867188,
      "learning_rate": 2.168758844148272e-09,
      "logits/chosen": -0.7343727946281433,
      "logits/rejected": -0.7490428686141968,
      "logps/chosen": -311.859619140625,
      "logps/ref_chosen": -312.42291259765625,
      "logps/ref_rejected": -278.7356262207031,
      "logps/rejected": -286.7757263183594,
      "loss": 4.1604,
      "r_dpo/chosen_len": 288.9125061035156,
      "r_dpo/length_delta": 43.23125076293945,
      "r_dpo/regularization_term": 0.0,
      "r_dpo/rejected_len": 245.68124389648438,
      "step": 460
    },
    {
      "epoch": 0.9842931937172775,
      "grad_norm": 231.78506469726562,
      "learning_rate": 4.288949484559934e-10,
      "logits/chosen": -0.7186932563781738,
      "logits/rejected": -0.7156038284301758,
      "logps/chosen": -276.20965576171875,
      "logps/ref_chosen": -278.13385009765625,
      "logps/ref_rejected": -256.70648193359375,
      "logps/rejected": -264.2013244628906,
      "loss": 3.9555,
      "r_dpo/chosen_len": 268.8687438964844,
      "r_dpo/length_delta": 26.262500762939453,
      "r_dpo/regularization_term": 0.0,
      "r_dpo/rejected_len": 242.6062469482422,
      "step": 470
    },
    {
      "epoch": 0.9989528795811519,
      "step": 477,
      "total_flos": 0.0,
      "train_loss": 4.32008807264284,
      "train_runtime": 6099.6278,
      "train_samples_per_second": 10.023,
      "train_steps_per_second": 0.078
    }
  ],
  "logging_steps": 10,
  "max_steps": 477,
  "num_input_tokens_seen": 0,
  "num_train_epochs": 1,
  "save_steps": 50,
  "stateful_callbacks": {
    "TrainerControl": {
      "args": {
        "should_epoch_stop": false,
        "should_evaluate": false,
        "should_log": false,
        "should_save": false,
        "should_training_stop": false
      },
      "attributes": {}
    }
  },
  "total_flos": 0.0,
  "train_batch_size": 4,
  "trial_name": null,
  "trial_params": null
}