ModelHub XC 1db7c7ce96 Initialize project; model provided by the ModelHub XC community
Model: W-61/llama-3-8b-base-sft-hh-helpful-4xh200
Source: Original Platform
2026-04-25 14:10:50 +08:00

2026-04-16 16:21:35 - WARNING - __main__ - Process rank: 1, device: cuda:1, n_gpu: 1 distributed training: True, 16-bits training: False
2026-04-16 16:21:35 - WARNING - __main__ - Process rank: 3, device: cuda:3, n_gpu: 1 distributed training: True, 16-bits training: False
2026-04-16 16:21:35 - WARNING - __main__ - Process rank: 2, device: cuda:2, n_gpu: 1 distributed training: True, 16-bits training: False
2026-04-16 16:21:35 - WARNING - __main__ - Process rank: 0, device: cuda:0, n_gpu: 1 distributed training: True, 16-bits training: False
2026-04-16 16:21:35 - INFO - __main__ - Model parameters ModelArguments(base_model_revision=None, model_name_or_path='/scratch/feng.yulu/dynamic-dpo-v4/base_models/Meta-Llama-3-8B', model_revision='main', model_code_revision=None, torch_dtype='bfloat16', tokenizer_name_or_path=None, trust_remote_code=False, attn_implementation='flash_attention_2', use_peft=False, lora_r=16, lora_alpha=32, lora_dropout=0.05, lora_target_modules=None, lora_modules_to_save=None, load_in_8bit=False, load_in_4bit=False, bnb_4bit_quant_type='nf4', use_bnb_nested_quant=False, bnb_4bit_quant_storage='uint8')
2026-04-16 16:21:35 - INFO - __main__ - Data parameters DataArguments(chat_template="{% set loop_messages = messages %}{% for message in loop_messages %}{% set content = '<|start_header_id|>' + message['role'] + '<|end_header_id|>\n\n'+ message['content'] | trim + '<|eot_id|>' %}{% if loop.index0 == 0 %}{% set content = bos_token + content %}{% endif %}{{ content }}{% endfor %}{% if add_generation_prompt %}{{ '<|start_header_id|>assistant<|end_header_id|>\n\n' }}{% endif %}", dataset_mixer={'Anthropic/hh-rlhf': 1.0}, text_column='text', dataset_splits=['train', 'test'], dataset_configs=['helpful-base'], dataset_dir=None, preprocessing_num_workers=12, use_persistent_hf_cache=False, hf_cache_dir=None, truncation_side=None, auto_insert_empty_system_msg=True, preprocessing_log_samples=0, preprocessing_log_dir=None)
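For reference, the chat_template above is a Jinja template that wraps each message in Llama 3 header tokens and prepends the BOS token once. A minimal sketch that renders it with jinja2 directly, so the special tokens stay visible (the two-turn conversation is made up; the real pipeline applies the template through the tokenizer):

from jinja2 import Template

# Exact template from the DataArguments dump above; Jinja decodes the \n
# escapes inside its string literals when parsing.
chat_template = (
    "{% set loop_messages = messages %}"
    "{% for message in loop_messages %}"
    "{% set content = '<|start_header_id|>' + message['role'] + '<|end_header_id|>\\n\\n'"
    "+ message['content'] | trim + '<|eot_id|>' %}"
    "{% if loop.index0 == 0 %}{% set content = bos_token + content %}{% endif %}"
    "{{ content }}"
    "{% endfor %}"
    "{% if add_generation_prompt %}{{ '<|start_header_id|>assistant<|end_header_id|>\\n\\n' }}{% endif %}"
)

messages = [
    {"role": "user", "content": "What is the best way to clean my refrigerator?"},
    {"role": "assistant", "content": "Well, I have a few ideas."},
]
print(Template(chat_template).render(
    messages=messages, bos_token="<|begin_of_text|>", add_generation_prompt=False
))
# -> <|begin_of_text|><|start_header_id|>user<|end_header_id|> ... <|eot_id|>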
2026-04-16 16:21:35 - INFO - __main__ - Training/evaluation parameters SFTConfig(
_n_gpu=1,
accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None, 'use_configured_state': False},
adafactor=False,
adam_beta1=0.9,
adam_beta2=0.999,
adam_epsilon=1e-08,
auto_find_batch_size=False,
average_tokens_across_devices=False,
batch_eval_metrics=False,
bf16=True,
bf16_full_eval=False,
chars_per_token=<CHARS_PER_TOKEN>,
data_seed=None,
dataloader_drop_last=False,
dataloader_num_workers=0,
dataloader_persistent_workers=False,
dataloader_pin_memory=True,
dataloader_prefetch_factor=None,
dataset_batch_size=1000,
dataset_kwargs=None,
dataset_num_proc=None,
dataset_text_field=None,
ddp_backend=None,
ddp_broadcast_buffers=None,
ddp_bucket_cap_mb=None,
ddp_find_unused_parameters=None,
ddp_timeout=1800,
debug=[],
deepspeed=None,
disable_tqdm=False,
do_eval=True,
do_predict=False,
do_train=False,
eval_accumulation_steps=None,
eval_delay=0,
eval_do_concat_batches=True,
eval_on_start=False,
eval_packing=None,
eval_steps=100,
eval_strategy=IntervalStrategy.STEPS,
eval_use_gather_object=False,
fp16=False,
fp16_backend=auto,
fp16_full_eval=False,
fp16_opt_level=O1,
fsdp=[],
fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False},
fsdp_min_num_params=0,
fsdp_transformer_layer_cls_to_wrap=None,
full_determinism=False,
gradient_accumulation_steps=2,
gradient_checkpointing=True,
gradient_checkpointing_kwargs={'use_reentrant': False},
greater_is_better=None,
group_by_length=False,
half_precision_backend=auto,
hub_always_push=False,
hub_model_id=W-61/llama-3-8b-base-sft-hh-helpful-4xh200,
hub_model_revision=main,
hub_private_repo=None,
hub_strategy=HubStrategy.END,
hub_token=<HUB_TOKEN>,
ignore_data_skip=False,
include_for_metrics=[],
include_inputs_for_metrics=False,
include_num_input_tokens_seen=False,
include_tokens_per_second=False,
jit_mode_eval=False,
label_names=None,
label_smoothing_factor=0.0,
learning_rate=2e-05,
length_column_name=length,
load_best_model_at_end=False,
local_rank=0,
log_level=info,
log_level_replica=warning,
log_on_each_node=True,
logging_dir=outputs/llama-3-8b-base-sft-hh-helpful-4xh200/runs/Apr16_16-21-35_d4054,
logging_first_step=True,
logging_nan_inf_filter=True,
logging_steps=5,
logging_strategy=IntervalStrategy.STEPS,
lr_scheduler_kwargs={},
lr_scheduler_type=SchedulerType.COSINE,
max_grad_norm=1.0,
max_seq_length=512,
max_steps=-1,
metric_for_best_model=None,
model_init_kwargs=None,
mp_parameters=,
neftune_noise_alpha=None,
no_cuda=False,
num_of_sequences=1024,
num_train_epochs=1,
optim=OptimizerNames.ADAMW_TORCH,
optim_args=None,
optim_target_modules=None,
output_dir=/scratch/feng.yulu/dynamic-dpo-v4/outputs/llama-3-8b-base-sft-hh-helpful-4xh200-batch-64-20260416-162101,
overwrite_output_dir=True,
packing=False,
past_index=-1,
per_device_eval_batch_size=8,
per_device_train_batch_size=8,
prediction_loss_only=False,
push_to_hub=False,
push_to_hub_model_id=None,
push_to_hub_organization=None,
push_to_hub_token=<PUSH_TO_HUB_TOKEN>,
ray_scope=last,
remove_unused_columns=True,
report_to=['wandb'],
restore_callback_states_from_checkpoint=False,
resume_from_checkpoint=None,
run_name=llama-3-8b-base-sft-hh-helpful-4xh200-batch-64-20260416-162101,
save_on_each_node=False,
save_only_model=False,
save_safetensors=True,
save_steps=200,
save_strategy=SaveStrategy.STEPS,
save_total_limit=2,
seed=42,
skip_memory_metrics=True,
tf32=None,
torch_compile=False,
torch_compile_backend=None,
torch_compile_mode=None,
torch_empty_cache_steps=None,
torchdynamo=None,
tp_size=0,
tpu_metrics_debug=False,
tpu_num_cores=None,
use_cpu=False,
use_ipex=False,
use_legacy_prediction_loop=False,
use_liger=False,
use_liger_kernel=False,
use_mps_device=False,
warmup_ratio=0.1,
warmup_steps=0,
weight_decay=0.0,
)
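A quick sanity check of the effective batch size implied by this config; the four ranks reported at startup supply the world size, and the run name ("batch-64") and the trainer banner at the end of the log both agree:

per_device = 8   # per_device_train_batch_size
grad_accum = 2   # gradient_accumulation_steps
world_size = 4   # ranks 0-3 in the startup warnings
print(per_device * grad_accum * world_size)  # 64 sequences per optimizer step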
2026-04-16 16:21:36 - INFO - datasets.builder - No config specified, defaulting to the single config: hh-rlhf/default
2026-04-16 16:21:36 - INFO - datasets.builder - Using custom data configuration default-cfba128a0ab1b99f
2026-04-16 16:21:36 - INFO - datasets.info - Loading Dataset Infos from /home/feng.yulu/.conda/envs/dpo_venv/lib/python3.11/site-packages/datasets/packaged_modules/json
2026-04-16 16:21:36 - INFO - datasets.builder - Overwrite dataset info from restored data version if exists.
2026-04-16 16:21:36 - INFO - datasets.info - Loading Dataset info from /scratch/feng.yulu/dynamic-dpo-v4/hf/datasets/Anthropic___hh-rlhf/default-cfba128a0ab1b99f/0.0.0/09be8c5bbc57cb3887f3a9732ad6aa7ec602a1fa
2026-04-16 16:21:36 - INFO - datasets.builder - Found cached dataset hh-rlhf (/scratch/feng.yulu/dynamic-dpo-v4/hf/datasets/Anthropic___hh-rlhf/default-cfba128a0ab1b99f/0.0.0/09be8c5bbc57cb3887f3a9732ad6aa7ec602a1fa)
2026-04-16 16:21:36 - INFO - datasets.info - Loading Dataset info from /scratch/feng.yulu/dynamic-dpo-v4/hf/datasets/Anthropic___hh-rlhf/default-cfba128a0ab1b99f/0.0.0/09be8c5bbc57cb3887f3a9732ad6aa7ec602a1fa
2026-04-16 16:21:38 - WARNING - alignment.data - Dropped 237 non-canonical HH preference examples from split `train` before normalization (126 x HH preprocessing expects exactly one final assistant response in chosen/rejected suffixes., 111 x HH chosen/rejected transcripts must each contain a divergent assistant response.).
2026-04-16 16:21:38 - INFO - datasets.arrow_dataset - Caching processed dataset at /scratch/feng.yulu/dynamic-dpo-v4/hf/datasets/Anthropic___hh-rlhf/default-cfba128a0ab1b99f/0.0.0/09be8c5bbc57cb3887f3a9732ad6aa7ec602a1fa/cache-d6e6bfbe34161664.arrow
Normalizing raw HH preferences (train): 100%|██████████| 43598/43598 [00:04<00:00, 10705.72 examples/s]
Normalizing raw HH preferences (train): 100%|██████████| 43598/43598 [00:04<00:00, 10646.71 examples/s]
Normalizing raw HH preferences (train): 100%|██████████| 43598/43598 [00:04<00:00, 10608.01 examples/s]
Normalizing raw HH preferences (train): 100%|██████████| 43598/43598 [00:04<00:00, 10504.28 examples/s]
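The alignment.data warning above names two validity conditions on raw HH pairs. A hypothetical sketch of that check, written against the raw hh-rlhf transcript format ("\n\nHuman: ...\n\nAssistant: ..."); the actual alignment.data implementation is not shown in this log:

def is_canonical_hh_pair(chosen: str, rejected: str) -> bool:
    """Hypothetical stand-in for the filter behind the warning above."""
    def split_final_assistant(text: str):
        head, sep, resp = text.rpartition("\n\nAssistant:")
        return (head, resp) if sep else (None, None)

    c_head, c_resp = split_final_assistant(chosen)
    r_head, r_resp = split_final_assistant(rejected)
    if c_resp is None or r_resp is None:
        return False  # each side needs a final assistant response
    if c_head != r_head:
        return False  # transcripts must agree up to that final response
    if c_resp.strip() == r_resp.strip():
        return False  # chosen/rejected must contain divergent responses
    return True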
2026-04-16 16:21:42 - INFO - datasets.builder - No config specified, defaulting to the single config: hh-rlhf/default
2026-04-16 16:21:42 - INFO - datasets.builder - Using custom data configuration default-cfba128a0ab1b99f
2026-04-16 16:21:42 - INFO - datasets.info - Loading Dataset Infos from /home/feng.yulu/.conda/envs/dpo_venv/lib/python3.11/site-packages/datasets/packaged_modules/json
2026-04-16 16:21:42 - INFO - datasets.builder - Overwrite dataset info from restored data version if exists.
2026-04-16 16:21:42 - INFO - datasets.info - Loading Dataset info from /scratch/feng.yulu/dynamic-dpo-v4/hf/datasets/Anthropic___hh-rlhf/default-cfba128a0ab1b99f/0.0.0/09be8c5bbc57cb3887f3a9732ad6aa7ec602a1fa
2026-04-16 16:21:42 - INFO - datasets.builder - Found cached dataset hh-rlhf (/scratch/feng.yulu/dynamic-dpo-v4/hf/datasets/Anthropic___hh-rlhf/default-cfba128a0ab1b99f/0.0.0/09be8c5bbc57cb3887f3a9732ad6aa7ec602a1fa)
2026-04-16 16:21:42 - INFO - datasets.info - Loading Dataset info from /scratch/feng.yulu/dynamic-dpo-v4/hf/datasets/Anthropic___hh-rlhf/default-cfba128a0ab1b99f/0.0.0/09be8c5bbc57cb3887f3a9732ad6aa7ec602a1fa
2026-04-16 16:21:42 - WARNING - alignment.data - Dropped 15 non-canonical HH preference examples from split `test` before normalization (9 x HH preprocessing expects exactly one final assistant response in chosen/rejected suffixes., 6 x HH chosen/rejected transcripts must each contain a divergent assistant response.).
2026-04-16 16:21:42 - INFO - datasets.arrow_dataset - Caching processed dataset at /scratch/feng.yulu/dynamic-dpo-v4/hf/datasets/Anthropic___hh-rlhf/default-cfba128a0ab1b99f/0.0.0/09be8c5bbc57cb3887f3a9732ad6aa7ec602a1fa/cache-fa6f4b7acba8a3e1.arrow
Normalizing raw HH preferences (test): 100%|██████████| 2339/2339 [00:00<00:00, 11040.31 examples/s]
Normalizing raw HH preferences (test): 100%|██████████| 2339/2339 [00:00<00:00, 10729.27 examples/s]
Normalizing raw HH preferences (test): 100%|██████████| 2339/2339 [00:00<00:00, 10934.21 examples/s]
2026-04-16 16:21:43 - INFO - datasets.arrow_dataset - Loading cached shuffled indices for dataset at /scratch/feng.yulu/dynamic-dpo-v4/hf/datasets/Anthropic___hh-rlhf/default-cfba128a0ab1b99f/0.0.0/09be8c5bbc57cb3887f3a9732ad6aa7ec602a1fa/cache-be0876dd0add1b31.arrow
2026-04-16 16:21:43 - INFO - datasets.arrow_dataset - Loading cached shuffled indices for dataset at /scratch/feng.yulu/dynamic-dpo-v4/hf/datasets/Anthropic___hh-rlhf/default-cfba128a0ab1b99f/0.0.0/09be8c5bbc57cb3887f3a9732ad6aa7ec602a1fa/cache-40e942b49dfd026a.arrow
2026-04-16 16:21:43 - INFO - __main__ - Training on the following datasets and their proportions: ['train : 43598', 'test : 2339']
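Those counts line up with the drops logged during normalization, assuming the usual raw sizes of the helpful-base split (43,835 train / 2,354 test, per the hh-rlhf dataset card; not stated in this log):

print(43_835 - 237)  # 43598 train rows kept
print(2_354 - 15)    # 2339 test rows kept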
[INFO|tokenization_utils_base.py:2058] 2026-04-16 16:21:43,179 >> loading file tokenizer.json
[INFO|tokenization_utils_base.py:2058] 2026-04-16 16:21:43,179 >> loading file tokenizer.model
[INFO|tokenization_utils_base.py:2058] 2026-04-16 16:21:43,179 >> loading file added_tokens.json
[INFO|tokenization_utils_base.py:2058] 2026-04-16 16:21:43,179 >> loading file special_tokens_map.json
[INFO|tokenization_utils_base.py:2058] 2026-04-16 16:21:43,179 >> loading file tokenizer_config.json
[INFO|tokenization_utils_base.py:2058] 2026-04-16 16:21:43,179 >> loading file chat_template.jinja
Normalizing raw HH preferences (test): 100%|██████████| 2339/2339 [00:00<00:00, 10916.03 examples/s]
[INFO|tokenization_utils_base.py:2323] 2026-04-16 16:21:43,499 >> Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
2026-04-16 16:21:43 - INFO - __main__ - *** Load pretrained model ***
2026-04-16 16:21:43 - INFO - datasets.arrow_dataset - Process #0 will write at /scratch/feng.yulu/dynamic-dpo-v4/hf/datasets/Anthropic___hh-rlhf/default-cfba128a0ab1b99f/0.0.0/09be8c5bbc57cb3887f3a9732ad6aa7ec602a1fa/cache-d3917bc8eb716f92_00000_of_00012.arrow
2026-04-16 16:21:43 - INFO - datasets.arrow_dataset - Process #1 will write at /scratch/feng.yulu/dynamic-dpo-v4/hf/datasets/Anthropic___hh-rlhf/default-cfba128a0ab1b99f/0.0.0/09be8c5bbc57cb3887f3a9732ad6aa7ec602a1fa/cache-d3917bc8eb716f92_00001_of_00012.arrow
2026-04-16 16:21:43 - INFO - datasets.arrow_dataset - Process #2 will write at /scratch/feng.yulu/dynamic-dpo-v4/hf/datasets/Anthropic___hh-rlhf/default-cfba128a0ab1b99f/0.0.0/09be8c5bbc57cb3887f3a9732ad6aa7ec602a1fa/cache-d3917bc8eb716f92_00002_of_00012.arrow
2026-04-16 16:21:43 - INFO - datasets.arrow_dataset - Process #3 will write at /scratch/feng.yulu/dynamic-dpo-v4/hf/datasets/Anthropic___hh-rlhf/default-cfba128a0ab1b99f/0.0.0/09be8c5bbc57cb3887f3a9732ad6aa7ec602a1fa/cache-d3917bc8eb716f92_00003_of_00012.arrow
2026-04-16 16:21:43 - INFO - datasets.arrow_dataset - Process #4 will write at /scratch/feng.yulu/dynamic-dpo-v4/hf/datasets/Anthropic___hh-rlhf/default-cfba128a0ab1b99f/0.0.0/09be8c5bbc57cb3887f3a9732ad6aa7ec602a1fa/cache-d3917bc8eb716f92_00004_of_00012.arrow
2026-04-16 16:21:43 - INFO - datasets.arrow_dataset - Process #5 will write at /scratch/feng.yulu/dynamic-dpo-v4/hf/datasets/Anthropic___hh-rlhf/default-cfba128a0ab1b99f/0.0.0/09be8c5bbc57cb3887f3a9732ad6aa7ec602a1fa/cache-d3917bc8eb716f92_00005_of_00012.arrow
2026-04-16 16:21:43 - INFO - datasets.arrow_dataset - Process #6 will write at /scratch/feng.yulu/dynamic-dpo-v4/hf/datasets/Anthropic___hh-rlhf/default-cfba128a0ab1b99f/0.0.0/09be8c5bbc57cb3887f3a9732ad6aa7ec602a1fa/cache-d3917bc8eb716f92_00006_of_00012.arrow
2026-04-16 16:21:43 - INFO - datasets.arrow_dataset - Process #7 will write at /scratch/feng.yulu/dynamic-dpo-v4/hf/datasets/Anthropic___hh-rlhf/default-cfba128a0ab1b99f/0.0.0/09be8c5bbc57cb3887f3a9732ad6aa7ec602a1fa/cache-d3917bc8eb716f92_00007_of_00012.arrow
2026-04-16 16:21:43 - INFO - datasets.arrow_dataset - Process #8 will write at /scratch/feng.yulu/dynamic-dpo-v4/hf/datasets/Anthropic___hh-rlhf/default-cfba128a0ab1b99f/0.0.0/09be8c5bbc57cb3887f3a9732ad6aa7ec602a1fa/cache-d3917bc8eb716f92_00008_of_00012.arrow
2026-04-16 16:21:43 - INFO - datasets.arrow_dataset - Process #9 will write at /scratch/feng.yulu/dynamic-dpo-v4/hf/datasets/Anthropic___hh-rlhf/default-cfba128a0ab1b99f/0.0.0/09be8c5bbc57cb3887f3a9732ad6aa7ec602a1fa/cache-d3917bc8eb716f92_00009_of_00012.arrow
2026-04-16 16:21:43 - INFO - datasets.arrow_dataset - Process #10 will write at /scratch/feng.yulu/dynamic-dpo-v4/hf/datasets/Anthropic___hh-rlhf/default-cfba128a0ab1b99f/0.0.0/09be8c5bbc57cb3887f3a9732ad6aa7ec602a1fa/cache-d3917bc8eb716f92_00010_of_00012.arrow
2026-04-16 16:21:43 - INFO - datasets.arrow_dataset - Process #11 will write at /scratch/feng.yulu/dynamic-dpo-v4/hf/datasets/Anthropic___hh-rlhf/default-cfba128a0ab1b99f/0.0.0/09be8c5bbc57cb3887f3a9732ad6aa7ec602a1fa/cache-d3917bc8eb716f92_00011_of_00012.arrow
2026-04-16 16:21:43 - INFO - datasets.arrow_dataset - Loading cached processed dataset at /scratch/feng.yulu/dynamic-dpo-v4/hf/datasets/Anthropic___hh-rlhf/default-cfba128a0ab1b99f/0.0.0/09be8c5bbc57cb3887f3a9732ad6aa7ec602a1fa/cache-d3917bc8eb716f92_*_of_00012.arrow
2026-04-16 16:21:43 - INFO - datasets.arrow_dataset - Concatenating 12 shards
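The twelve cache-*_of_00012.arrow shards match preprocessing_num_workers=12: the formatting map runs with num_proc=12 and each worker writes one shard before the shards are concatenated. A rough sketch (add_text is a hypothetical stand-in for the chat-template step):

from datasets import load_dataset

def add_text(example):
    # hypothetical stand-in for the template-application step in this pipeline
    example["text"] = example["chosen"]
    return example

ds = load_dataset("Anthropic/hh-rlhf", data_dir="helpful-base", split="train")
ds = ds.map(add_text, num_proc=12)  # 12 workers -> 12 cache shards, then concatenated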
2026-04-16 16:21:43 - INFO - datasets.arrow_dataset - Process #0 will write at /scratch/feng.yulu/dynamic-dpo-v4/hf/datasets/Anthropic___hh-rlhf/default-cfba128a0ab1b99f/0.0.0/09be8c5bbc57cb3887f3a9732ad6aa7ec602a1fa/cache-0f820217b8a8b27e_00000_of_00012.arrow
2026-04-16 16:21:43 - INFO - datasets.arrow_dataset - Process #1 will write at /scratch/feng.yulu/dynamic-dpo-v4/hf/datasets/Anthropic___hh-rlhf/default-cfba128a0ab1b99f/0.0.0/09be8c5bbc57cb3887f3a9732ad6aa7ec602a1fa/cache-0f820217b8a8b27e_00001_of_00012.arrow
2026-04-16 16:21:43 - INFO - datasets.arrow_dataset - Process #2 will write at /scratch/feng.yulu/dynamic-dpo-v4/hf/datasets/Anthropic___hh-rlhf/default-cfba128a0ab1b99f/0.0.0/09be8c5bbc57cb3887f3a9732ad6aa7ec602a1fa/cache-0f820217b8a8b27e_00002_of_00012.arrow
2026-04-16 16:21:43 - INFO - datasets.arrow_dataset - Process #3 will write at /scratch/feng.yulu/dynamic-dpo-v4/hf/datasets/Anthropic___hh-rlhf/default-cfba128a0ab1b99f/0.0.0/09be8c5bbc57cb3887f3a9732ad6aa7ec602a1fa/cache-0f820217b8a8b27e_00003_of_00012.arrow
2026-04-16 16:21:43 - INFO - datasets.arrow_dataset - Process #4 will write at /scratch/feng.yulu/dynamic-dpo-v4/hf/datasets/Anthropic___hh-rlhf/default-cfba128a0ab1b99f/0.0.0/09be8c5bbc57cb3887f3a9732ad6aa7ec602a1fa/cache-0f820217b8a8b27e_00004_of_00012.arrow
2026-04-16 16:21:43 - INFO - datasets.arrow_dataset - Process #5 will write at /scratch/feng.yulu/dynamic-dpo-v4/hf/datasets/Anthropic___hh-rlhf/default-cfba128a0ab1b99f/0.0.0/09be8c5bbc57cb3887f3a9732ad6aa7ec602a1fa/cache-0f820217b8a8b27e_00005_of_00012.arrow
2026-04-16 16:21:43 - INFO - datasets.arrow_dataset - Process #6 will write at /scratch/feng.yulu/dynamic-dpo-v4/hf/datasets/Anthropic___hh-rlhf/default-cfba128a0ab1b99f/0.0.0/09be8c5bbc57cb3887f3a9732ad6aa7ec602a1fa/cache-0f820217b8a8b27e_00006_of_00012.arrow
2026-04-16 16:21:43 - INFO - datasets.arrow_dataset - Process #7 will write at /scratch/feng.yulu/dynamic-dpo-v4/hf/datasets/Anthropic___hh-rlhf/default-cfba128a0ab1b99f/0.0.0/09be8c5bbc57cb3887f3a9732ad6aa7ec602a1fa/cache-0f820217b8a8b27e_00007_of_00012.arrow
2026-04-16 16:21:43 - INFO - datasets.arrow_dataset - Process #8 will write at /scratch/feng.yulu/dynamic-dpo-v4/hf/datasets/Anthropic___hh-rlhf/default-cfba128a0ab1b99f/0.0.0/09be8c5bbc57cb3887f3a9732ad6aa7ec602a1fa/cache-0f820217b8a8b27e_00008_of_00012.arrow
2026-04-16 16:21:43 - INFO - datasets.arrow_dataset - Process #9 will write at /scratch/feng.yulu/dynamic-dpo-v4/hf/datasets/Anthropic___hh-rlhf/default-cfba128a0ab1b99f/0.0.0/09be8c5bbc57cb3887f3a9732ad6aa7ec602a1fa/cache-0f820217b8a8b27e_00009_of_00012.arrow
2026-04-16 16:21:43 - INFO - datasets.arrow_dataset - Process #10 will write at /scratch/feng.yulu/dynamic-dpo-v4/hf/datasets/Anthropic___hh-rlhf/default-cfba128a0ab1b99f/0.0.0/09be8c5bbc57cb3887f3a9732ad6aa7ec602a1fa/cache-0f820217b8a8b27e_00010_of_00012.arrow
2026-04-16 16:21:43 - INFO - datasets.arrow_dataset - Process #11 will write at /scratch/feng.yulu/dynamic-dpo-v4/hf/datasets/Anthropic___hh-rlhf/default-cfba128a0ab1b99f/0.0.0/09be8c5bbc57cb3887f3a9732ad6aa7ec602a1fa/cache-0f820217b8a8b27e_00011_of_00012.arrow
2026-04-16 16:21:43 - INFO - datasets.arrow_dataset - Loading cached processed dataset at /scratch/feng.yulu/dynamic-dpo-v4/hf/datasets/Anthropic___hh-rlhf/default-cfba128a0ab1b99f/0.0.0/09be8c5bbc57cb3887f3a9732ad6aa7ec602a1fa/cache-0f820217b8a8b27e_*_of_00012.arrow
2026-04-16 16:21:43 - INFO - datasets.arrow_dataset - Concatenating 12 shards
2026-04-16 16:21:44 - INFO - datasets.arrow_dataset - Loading cached processed dataset at /scratch/feng.yulu/dynamic-dpo-v4/hf/datasets/Anthropic___hh-rlhf/default-cfba128a0ab1b99f/0.0.0/09be8c5bbc57cb3887f3a9732ad6aa7ec602a1fa/cache-d924339979854155.arrow
2026-04-16 16:21:44 - INFO - datasets.arrow_dataset - Loading cached processed dataset at /scratch/feng.yulu/dynamic-dpo-v4/hf/datasets/Anthropic___hh-rlhf/default-cfba128a0ab1b99f/0.0.0/09be8c5bbc57cb3887f3a9732ad6aa7ec602a1fa/cache-45f8b5bf4d93c54f.arrow
2026-04-16 16:21:44 - INFO - __main__ - Decontaminated 0 (0.00%) samples from the training set.
2026-04-16 16:21:44 - INFO - __main__ - Sample 41905 of the processed training set:
<|begin_of_text|><|start_header_id|>user<|end_header_id|>
What is the best way to clean my refrigerator?<|eot_id|><|start_header_id|>assistant<|end_header_id|>
Well, I have a few ideas, but the best way to know what works best is to run some tests and then analyze the data to figure out which technique works best.<|eot_id|><|start_header_id|>user<|end_header_id|>
What are your ideas?<|eot_id|><|start_header_id|>assistant<|end_header_id|>
Some of my ideas include wiping down the interior with a rag, using bleach and water to wipe down the door, emptying the contents of the fridge to scrub the shelves with a clean cloth, cleaning the interior of the fridge with vinegar and water, and using a vacuum cleaner to clean the interior.<|eot_id|>
2026-04-16 16:21:44 - INFO - __main__ - Sample 7296 of the processed training set:
<|begin_of_text|><|start_header_id|>user<|end_header_id|>
How can I get blood out of cotton?<|eot_id|><|start_header_id|>assistant<|end_header_id|>
Blood? What do you mean?<|eot_id|>
2026-04-16 16:21:44 - INFO - __main__ - Sample 1639 of the processed training set:
<|begin_of_text|><|start_header_id|>user<|end_header_id|>
I'm trying to get my friend to open up<|eot_id|><|start_header_id|>assistant<|end_header_id|>
Sounds great! What have you tried so far?<|eot_id|><|start_header_id|>user<|end_header_id|>
I've tried to give my friend a safe space to open up about their feelings but they still aren't comfortable<|eot_id|><|start_header_id|>assistant<|end_header_id|>
Its really important that people feel safe sharing their feelings, but I totally hear that this isnt easy for them. And in my experience, that has a lot to do with them feeling that you can accept them and what theyre saying, even if theyre feeling difficult or uncomfortable feelings. Do you think youve done that, and that they still havent opened up?<|eot_id|>
/home/feng.yulu/.conda/envs/dpo_venv/lib/python3.11/site-packages/huggingface_hub/utils/_deprecation.py:100: FutureWarning: Deprecated argument(s) used in '__init__': model_init_kwargs, dataset_text_field, max_seq_length, packing. Will not be supported from version '1.0.0'.
Deprecated positional argument(s) used in SFTTrainer, please use the SFTConfig to set these arguments instead.
warnings.warn(message, FutureWarning)
/home/feng.yulu/.conda/envs/dpo_venv/lib/python3.11/site-packages/trl/trainer/sft_trainer.py:158: UserWarning: You passed `model_init_kwargs` to the SFTTrainer, the value you passed will override the one in the `SFTConfig`.
warnings.warn(
/home/feng.yulu/.conda/envs/dpo_venv/lib/python3.11/site-packages/trl/trainer/sft_trainer.py:185: UserWarning: You passed a model_id to the SFTTrainer. This will automatically create an `AutoModelForCausalLM` or a `PeftModel` (if you passed a `peft_config`) for you.
warnings.warn(
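What these deprecation warnings ask for, concretely: set the flagged fields on SFTConfig instead of passing them to SFTTrainer. A minimal sketch against the TRL >= 0.9 API, reusing values from this log (the one-row dataset is a placeholder):

from datasets import Dataset
from trl import SFTConfig, SFTTrainer

config = SFTConfig(
    output_dir="outputs/llama-3-8b-base-sft-hh-helpful-4xh200",
    max_seq_length=512,
    packing=False,
    dataset_text_field="text",
)
trainer = SFTTrainer(
    model="/scratch/feng.yulu/dynamic-dpo-v4/base_models/Meta-Llama-3-8B",
    args=config,
    train_dataset=Dataset.from_dict({"text": ["<|begin_of_text|>hi<|eot_id|>"]}),
    # newer TRL also renames tokenizer= to processing_class= (see the
    # FutureWarning further down this log)
)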
[INFO|configuration_utils.py:691] 2026-04-16 16:21:45,743 >> loading configuration file /scratch/feng.yulu/dynamic-dpo-v4/base_models/Meta-Llama-3-8B/config.json
[INFO|configuration_utils.py:765] 2026-04-16 16:21:45,744 >> Model config LlamaConfig {
"architectures": [
"LlamaForCausalLM"
],
"attention_bias": false,
"attention_dropout": 0.0,
"bos_token_id": 128000,
"eos_token_id": 128001,
"head_dim": 128,
"hidden_act": "silu",
"hidden_size": 4096,
"initializer_range": 0.02,
"intermediate_size": 14336,
"max_position_embeddings": 8192,
"mlp_bias": false,
"model_type": "llama",
"num_attention_heads": 32,
"num_hidden_layers": 32,
"num_key_value_heads": 8,
"pretraining_tp": 1,
"rms_norm_eps": 1e-05,
"rope_scaling": null,
"rope_theta": 500000.0,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.51.0",
"use_cache": false,
"vocab_size": 128256
}
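A back-of-the-envelope parameter count from this config confirms the 8B label (all projections are bias-free, per attention_bias and mlp_bias):

V, H, I, L = 128_256, 4_096, 14_336, 32   # vocab, hidden, intermediate, layers
kv = 8 * 128                              # num_key_value_heads * head_dim (GQA)
attn = 2 * H * H + 2 * H * kv             # q/o projections + k/v projections
mlp = 3 * H * I                           # gate, up, down
layer = attn + mlp + 2 * H                # plus two RMSNorm weight vectors
total = L * layer + 2 * V * H + H         # embed_tokens, untied lm_head, final norm
print(f"{total:,}")                       # 8,030,261,248, i.e. ~8.0B parameters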
[WARNING|logging.py:328] 2026-04-16 16:21:45,759 >> You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`.
[INFO|modeling_utils.py:1121] 2026-04-16 16:21:45,759 >> loading weights file /scratch/feng.yulu/dynamic-dpo-v4/base_models/Meta-Llama-3-8B/model.safetensors.index.json
[INFO|modeling_utils.py:2167] 2026-04-16 16:21:45,760 >> Instantiating LlamaForCausalLM model under default dtype torch.bfloat16.
[INFO|configuration_utils.py:1142] 2026-04-16 16:21:45,764 >> Generate config GenerationConfig {
"bos_token_id": 128000,
"eos_token_id": 128001,
"use_cache": false
}
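The Flash Attention warning above is expected here (weights are instantiated on CPU and the FSDP wrapper moves them to the GPUs later), but for a standalone load the fix it suggests looks like this sketch:

import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "/scratch/feng.yulu/dynamic-dpo-v4/base_models/Meta-Llama-3-8B",
    torch_dtype=torch.bfloat16,
    attn_implementation="flash_attention_2",
).to("cuda")  # move to GPU after CPU init, as the warning recommends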
Loading checkpoint shards: 100%|██████████| 4/4 [00:00<00:00, 245.53it/s]
Loading checkpoint shards: 100%|██████████| 4/4 [00:00<00:00, 386.55it/s]
/home/feng.yulu/.conda/envs/dpo_venv/lib/python3.11/site-packages/trl/trainer/sft_trainer.py:195: UserWarning: You passed a `packing` argument to the SFTTrainer, the value you passed will override the one in the `SFTConfig`.
warnings.warn(
/home/feng.yulu/.conda/envs/dpo_venv/lib/python3.11/site-packages/trl/trainer/sft_trainer.py:283: UserWarning: You passed a `max_seq_length` argument to the SFTTrainer, the value you passed will override the one in the `SFTConfig`.
warnings.warn(
/home/feng.yulu/.conda/envs/dpo_venv/lib/python3.11/site-packages/trl/trainer/sft_trainer.py:321: UserWarning: You passed a `dataset_text_field` argument to the SFTTrainer, the value you passed will override the one in the `SFTConfig`.
warnings.warn(
Loading checkpoint shards: 100%|██████████| 4/4 [00:00<00:00, 412.36it/s]
Loading checkpoint shards: 100%|██████████| 4/4 [00:00<00:00, 6.43it/s]
[INFO|modeling_utils.py:4926] 2026-04-16 16:21:46,477 >> All model checkpoint weights were used when initializing LlamaForCausalLM.
[INFO|modeling_utils.py:4934] 2026-04-16 16:21:46,477 >> All the weights of LlamaForCausalLM were initialized from the model checkpoint at /scratch/feng.yulu/dynamic-dpo-v4/base_models/Meta-Llama-3-8B.
If your task is similar to the task the model of the checkpoint was trained on, you can already use LlamaForCausalLM for predictions without further training.
[INFO|configuration_utils.py:1095] 2026-04-16 16:21:46,479 >> loading configuration file /scratch/feng.yulu/dynamic-dpo-v4/base_models/Meta-Llama-3-8B/generation_config.json
[INFO|configuration_utils.py:1142] 2026-04-16 16:21:46,480 >> Generate config GenerationConfig {
"bos_token_id": 128000,
"do_sample": true,
"eos_token_id": 128001,
"max_length": 4096,
"temperature": 0.6,
"top_p": 0.9
}
2026-04-16 16:21:46 - INFO - datasets.builder - Using custom data configuration default-39b52f6e03e85a82
2026-04-16 16:21:46 - INFO - datasets.info - Loading Dataset Infos from /home/feng.yulu/.conda/envs/dpo_venv/lib/python3.11/site-packages/datasets/packaged_modules/generator
2026-04-16 16:21:46 - INFO - datasets.builder - Overwrite dataset info from restored data version if exists.
2026-04-16 16:21:46 - INFO - datasets.info - Loading Dataset info from /scratch/feng.yulu/dynamic-dpo-v4/hf/datasets/generator/default-39b52f6e03e85a82/0.0.0
2026-04-16 16:21:46 - INFO - datasets.builder - Found cached dataset generator (/scratch/feng.yulu/dynamic-dpo-v4/hf/datasets/generator/default-39b52f6e03e85a82/0.0.0)
2026-04-16 16:21:46 - INFO - datasets.info - Loading Dataset info from /scratch/feng.yulu/dynamic-dpo-v4/hf/datasets/generator/default-39b52f6e03e85a82/0.0.0
2026-04-16 16:21:46 - INFO - datasets.builder - Using custom data configuration default-1519231937de8df3
2026-04-16 16:21:46 - INFO - datasets.info - Loading Dataset Infos from /home/feng.yulu/.conda/envs/dpo_venv/lib/python3.11/site-packages/datasets/packaged_modules/generator
Overwrite dataset info from restored data version if exists.
2026-04-16 16:21:46 - INFO - datasets.builder - Overwrite dataset info from restored data version if exists.
Loading Dataset info from /scratch/feng.yulu/dynamic-dpo-v4/hf/datasets/generator/default-1519231937de8df3/0.0.0
2026-04-16 16:21:46 - INFO - datasets.info - Loading Dataset info from /scratch/feng.yulu/dynamic-dpo-v4/hf/datasets/generator/default-1519231937de8df3/0.0.0
Found cached dataset generator (/scratch/feng.yulu/dynamic-dpo-v4/hf/datasets/generator/default-1519231937de8df3/0.0.0)
2026-04-16 16:21:46 - INFO - datasets.builder - Found cached dataset generator (/scratch/feng.yulu/dynamic-dpo-v4/hf/datasets/generator/default-1519231937de8df3/0.0.0)
Loading Dataset info from /scratch/feng.yulu/dynamic-dpo-v4/hf/datasets/generator/default-1519231937de8df3/0.0.0
2026-04-16 16:21:46 - INFO - datasets.info - Loading Dataset info from /scratch/feng.yulu/dynamic-dpo-v4/hf/datasets/generator/default-1519231937de8df3/0.0.0
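The `generator` builder entries above come from datasets constructed on the fly and cached under the run's HF cache. A toy sketch of that pattern; the generator below is hypothetical, and cache_dir mirrors the path in the log:

from datasets import Dataset

def rows():
    # Stand-in for the real preprocessing generator used by this script.
    yield {"text": "example row"}

ds = Dataset.from_generator(rows, cache_dir="/scratch/feng.yulu/dynamic-dpo-v4/hf/datasets")
print(len(ds))  # cached under .../hf/datasets/generator/<config-hash>/0.0.0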
/home/feng.yulu/.conda/envs/dpo_venv/lib/python3.11/site-packages/trl/trainer/sft_trainer.py:412: FutureWarning: `tokenizer` is deprecated and will be removed in version 5.0.0 for `SFTTrainer.__init__`. Use `processing_class` instead.
super().__init__(
/home/feng.yulu/.conda/envs/dpo_venv/lib/python3.11/site-packages/trl/trainer/sft_trainer.py:412: FutureWarning: `tokenizer` is deprecated and will be removed in version 5.0.0 for `SFTTrainer.__init__`. Use `processing_class` instead.
super().__init__(
/home/feng.yulu/.conda/envs/dpo_venv/lib/python3.11/site-packages/trl/trainer/sft_trainer.py:412: FutureWarning: `tokenizer` is deprecated and will be removed in version 5.0.0 for `SFTTrainer.__init__`. Use `processing_class` instead.
super().__init__(
/home/feng.yulu/.conda/envs/dpo_venv/lib/python3.11/site-packages/trl/trainer/sft_trainer.py:412: FutureWarning: `tokenizer` is deprecated and will be removed in version 5.0.0 for `SFTTrainer.__init__`. Use `processing_class` instead.
super().__init__(
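The FutureWarning above (printed once per rank, hence four times) asks for `processing_class` in place of `tokenizer`. A quick, hedged check that the installed trainer already accepts the replacement keyword:

import inspect
from trl import SFTTrainer

# On trl/transformers versions recent enough to emit the warning above,
# __init__ should already expose the new keyword.
print("processing_class" in inspect.signature(SFTTrainer.__init__).parameters)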
[INFO|trainer.py:748] 2026-04-16 16:21:49,099 >> Using auto half precision backend
2026-04-16 16:21:49 - INFO - __main__ - *** Train ***
/home/feng.yulu/.conda/envs/dpo_venv/lib/python3.11/site-packages/accelerate/accelerator.py:1557: UserWarning: Upcasted low precision parameters in LlamaForCausalLM because mixed precision turned on in FSDP. Affects: model.embed_tokens.weight, model.norm.weight, lm_head.weight.
warnings.warn(
/home/feng.yulu/.conda/envs/dpo_venv/lib/python3.11/site-packages/accelerate/accelerator.py:1557: UserWarning: Upcasted low precision parameters in LlamaDecoderLayer because mixed precision turned on in FSDP. Affects: self_attn.q_proj.weight, self_attn.k_proj.weight, self_attn.v_proj.weight, self_attn.o_proj.weight, mlp.gate_proj.weight, mlp.up_proj.weight, mlp.down_proj.weight, input_layernorm.weight, post_attention_layernorm.weight.
warnings.warn(
/home/feng.yulu/.conda/envs/dpo_venv/lib/python3.11/site-packages/accelerate/accelerator.py:1563: UserWarning: FSDP upcast of low precision parameters may affect the precision of model checkpoints.
warnings.warn(
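The upcast warnings above mean FSDP promoted the bf16 weights to fp32 for mixed-precision training. A small reader-added diagnostic sketch for seeing which parameters of a prepared model were affected:

import torch

def report_upcast(model: torch.nn.Module) -> None:
    # Print parameters whose storage dtype is no longer bfloat16,
    # i.e. the ones named in the accelerate warnings above.
    for name, param in model.named_parameters():
        if param.dtype != torch.bfloat16:
            print(name, param.dtype)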
[INFO|trainer.py:2414] 2026-04-16 16:22:27,009 >> ***** Running training *****
[INFO|trainer.py:2415] 2026-04-16 16:22:27,009 >> Num examples = 16,516
[INFO|trainer.py:2416] 2026-04-16 16:22:27,009 >> Num Epochs = 1
[INFO|trainer.py:2417] 2026-04-16 16:22:27,009 >> Instantaneous batch size per device = 8
[INFO|trainer.py:2420] 2026-04-16 16:22:27,009 >> Total train batch size (w. parallel, distributed & accumulation) = 64
[INFO|trainer.py:2421] 2026-04-16 16:22:27,009 >> Gradient Accumulation steps = 2
[INFO|trainer.py:2422] 2026-04-16 16:22:27,009 >> Total optimization steps = 258
[INFO|trainer.py:2423] 2026-04-16 16:22:27,010 >> Number of trainable parameters = 2,007,565,312
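The header above is internally consistent: 8 examples per device across 4 ranks with 2 gradient-accumulation steps gives the effective batch of 64; the floor of 16,516 examples over that batch reproduces the 258 optimization steps; and the 2,007,565,312 trainable parameters are the per-rank FSDP shard of Llama-3-8B's 8,030,261,248 parameters. A quick arithmetic check:

per_device, world_size, grad_accum = 8, 4, 2
effective_batch = per_device * world_size * grad_accum
print(effective_batch)              # 64, matching the logged total batch
print(16516 // effective_batch)     # 258, matching the optimization steps
print(8_030_261_248 // world_size)  # 2_007_565_312 per-rank FSDP shard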
[INFO|integration_utils.py:831] 2026-04-16 16:22:27,011 >> Automatic Weights & Biases logging enabled, to disable set os.environ["WANDB_DISABLED"] = "true"
wandb: Currently logged in as: can-not-fand (can-not-fand-northeastern-university). Use `wandb login --relogin` to force relogin
wandb: wandb version 0.26.0 is available! To upgrade, please run:
wandb: $ pip install wandb --upgrade
wandb: Tracking run with wandb version 0.17.5
wandb: Run data is saved locally in /scratch/feng.yulu/dynamic-dpo-v4/wandb/wandb/run-20260416_162228-ivik22vv
wandb: Run `wandb offline` to turn off syncing.
wandb: Syncing run llama-3-8b-base-sft-hh-helpful-4xh200-batch-64-20260416-162101
wandb: ⭐️ View project at https://wandb.ai/can-not-fand-northeastern-university/huggingface
wandb: 🚀 View run at https://wandb.ai/can-not-fand-northeastern-university/huggingface/runs/ivik22vv
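W&B tracking attached automatically, as the integration_utils.py line above notes. A sketch of the opt-out, using exactly the environment switch the log mentions:

import os

# Set before the Trainer is constructed; report_to="none" in the training
# arguments is an equivalent opt-out.
os.environ["WANDB_DISABLED"] = "true"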
0%| | 0/258 [00:00<?, ?it/s]
0%| | 1/258 [00:02<11:29, 2.68s/it]
{'loss': 3.1242, 'grad_norm': 6.09610478125056e+18, 'learning_rate': 0.0, 'epoch': 0.0}
1%| | 2/258 [00:03<07:51, 1.84s/it]
1%| | 3/258 [00:05<06:44, 1.58s/it]
2%|▏ | 4/258 [00:06<06:14, 1.47s/it]
2%|▏ | 5/258 [00:07<05:55, 1.41s/it]
{'loss': 3.1772, 'grad_norm': 2697.71337890625, 'learning_rate': 3.0769230769230774e-06, 'epoch': 0.02}
2%|▏ | 6/258 [00:09<05:43, 1.36s/it]
3%|▎ | 7/258 [00:10<05:34, 1.33s/it]
3%|▎ | 8/258 [00:11<05:28, 1.31s/it]
3%|▎ | 9/258 [00:12<05:23, 1.30s/it]
4%|▍ | 10/258 [00:14<05:20, 1.29s/it]
{'loss': 2.8881, 'grad_norm': 25.10409927368164, 'learning_rate': 6.923076923076923e-06, 'epoch': 0.04}
4%|▍ | 11/258 [00:15<05:16, 1.28s/it]
5%|▍ | 12/258 [00:16<05:14, 1.28s/it]
5%|▌ | 13/258 [00:17<05:12, 1.27s/it]
5%|▌ | 14/258 [00:19<05:09, 1.27s/it]
6%|▌ | 15/258 [00:20<05:08, 1.27s/it]
{'loss': 2.4609, 'grad_norm': 12.363713264465332, 'learning_rate': 1.076923076923077e-05, 'epoch': 0.06}
6%|▌ | 16/258 [00:21<05:06, 1.27s/it]
7%|▋ | 17/258 [00:23<05:04, 1.26s/it]
7%|▋ | 18/258 [00:24<05:03, 1.27s/it]
7%|▋ | 19/258 [00:25<05:13, 1.31s/it]
8%|▊ | 20/258 [00:26<05:08, 1.30s/it]
{'loss': 2.2508, 'grad_norm': 22.165477752685547, 'learning_rate': 1.4615384615384615e-05, 'epoch': 0.08}
8%|▊ | 21/258 [00:28<05:05, 1.29s/it]
9%|▊ | 22/258 [00:29<05:02, 1.28s/it]
9%|▉ | 23/258 [00:30<04:59, 1.27s/it]
9%|▉ | 24/258 [00:32<04:57, 1.27s/it]
10%|▉ | 25/258 [00:33<04:55, 1.27s/it]
{'loss': 2.0898, 'grad_norm': 14.86285400390625, 'learning_rate': 1.8461538461538465e-05, 'epoch': 0.1}
10%|█ | 26/258 [00:34<04:53, 1.27s/it]
10%|█ | 27/258 [00:35<04:51, 1.26s/it]
11%|█ | 28/258 [00:37<04:51, 1.27s/it]
11%|█ | 29/258 [00:38<04:49, 1.26s/it]
12%|█▏ | 30/258 [00:39<04:47, 1.26s/it]
{'loss': 1.9877, 'grad_norm': 7.5494914054870605, 'learning_rate': 1.9991749570421146e-05, 'epoch': 0.12}
12%|█▏ | 31/258 [00:40<04:46, 1.26s/it]
12%|█▏ | 32/258 [00:42<04:45, 1.26s/it]
13%|█▎ | 33/258 [00:43<04:44, 1.27s/it]
13%|█▎ | 34/258 [00:44<04:42, 1.26s/it]
14%|█▎ | 35/258 [00:45<04:41, 1.26s/it]
{'loss': 1.8119, 'grad_norm': 4.193073272705078, 'learning_rate': 1.9941379571543597e-05, 'epoch': 0.14}
14%|█▍ | 36/258 [00:47<04:51, 1.31s/it]
14%|█▍ | 37/258 [00:48<04:46, 1.30s/it]
15%|█▍ | 38/258 [00:49<04:42, 1.28s/it]
15%|█▌ | 39/258 [00:51<04:48, 1.32s/it]
16%|█▌ | 40/258 [00:52<04:43, 1.30s/it]
{'loss': 1.7714, 'grad_norm': 5.1745171546936035, 'learning_rate': 1.984545368367337e-05, 'epoch': 0.15}
16%|█▌ | 41/258 [00:53<04:39, 1.29s/it]
16%|█▋ | 42/258 [00:55<04:36, 1.28s/it]
17%|█▋ | 43/258 [00:56<04:33, 1.27s/it]
17%|█▋ | 44/258 [00:57<04:31, 1.27s/it]
17%|█▋ | 45/258 [00:58<04:30, 1.27s/it]
{'loss': 1.7391, 'grad_norm': 6.300442695617676, 'learning_rate': 1.9704411482532116e-05, 'epoch': 0.17}
18%|█▊ | 46/258 [01:00<04:28, 1.27s/it]
18%|█▊ | 47/258 [01:01<04:26, 1.26s/it]
19%|█▊ | 48/258 [01:02<04:25, 1.26s/it]
19%|█▉ | 49/258 [01:03<04:23, 1.26s/it]
19%|█▉ | 50/258 [01:05<04:22, 1.26s/it]
{'loss': 1.7088, 'grad_norm': 5.5538201332092285, 'learning_rate': 1.9518899287155558e-05, 'epoch': 0.19}
20%|█▉ | 51/258 [01:06<04:40, 1.36s/it]
20%|██ | 52/258 [01:07<04:33, 1.33s/it]
21%|██ | 53/258 [01:09<04:27, 1.31s/it]
21%|██ | 54/258 [01:10<04:23, 1.29s/it]
21%|██▏ | 55/258 [01:11<04:20, 1.28s/it]
{'loss': 1.6553, 'grad_norm': 4.729171276092529, 'learning_rate': 1.9289767198167918e-05, 'epoch': 0.21}
22%|██▏ | 56/258 [01:12<04:17, 1.28s/it]
22%|██▏ | 57/258 [01:14<04:15, 1.27s/it]
22%|██▏ | 58/258 [01:15<04:13, 1.27s/it]
23%|██▎ | 59/258 [01:16<04:11, 1.26s/it]
23%|██▎ | 60/258 [01:18<04:10, 1.26s/it]
{'loss': 1.6234, 'grad_norm': 2.736438512802124, 'learning_rate': 1.9018065202237083e-05, 'epoch': 0.23}
24%|██▎ | 61/258 [01:19<04:08, 1.26s/it]
24%|██▍ | 62/258 [01:20<04:07, 1.26s/it]
24%|██▍ | 63/258 [01:21<04:05, 1.26s/it]
25%|██▍ | 64/258 [01:23<04:22, 1.35s/it]
25%|██▌ | 65/258 [01:24<04:15, 1.32s/it]
{'loss': 1.6005, 'grad_norm': 2.781848192214966, 'learning_rate': 1.8705038360561724e-05, 'epoch': 0.25}
26%|██▌ | 66/258 [01:25<04:10, 1.31s/it]
26%|██▌ | 67/258 [01:27<04:06, 1.29s/it]
26%|██▋ | 68/258 [01:28<04:04, 1.28s/it]
27%|██▋ | 69/258 [01:29<04:01, 1.28s/it]
27%|██▋ | 70/258 [01:31<04:06, 1.31s/it]
{'loss': 1.6161, 'grad_norm': 30.041719436645508, 'learning_rate': 1.8352121103438804e-05, 'epoch': 0.27}
28%|██▊ | 71/258 [01:32<04:02, 1.30s/it]
28%|██▊ | 72/258 [01:33<03:59, 1.29s/it]
28%|██▊ | 73/258 [01:34<03:56, 1.28s/it]
29%|██▊ | 74/258 [01:36<03:54, 1.27s/it]
29%|██▉ | 75/258 [01:37<04:00, 1.32s/it]
{'loss': 1.5743, 'grad_norm': 2.318030595779419, 'learning_rate': 1.796093065705644e-05, 'epoch': 0.29}
29%|██▉ | 76/258 [01:38<04:03, 1.34s/it]
30%|██▉ | 77/258 [01:40<03:57, 1.31s/it]
30%|███ | 78/258 [01:41<03:53, 1.30s/it]
31%|███ | 79/258 [01:42<03:49, 1.28s/it]
31%|███ | 80/258 [01:43<03:47, 1.28s/it]
{'loss': 1.537, 'grad_norm': 2.369173526763916, 'learning_rate': 1.7533259632633443e-05, 'epoch': 0.31}
31%|███▏ | 81/258 [01:45<03:45, 1.27s/it]
32%|███▏ | 82/258 [01:46<03:43, 1.27s/it]
32%|███▏ | 83/258 [01:47<03:41, 1.27s/it]
33%|███▎ | 84/258 [01:48<03:40, 1.27s/it]
33%|███▎ | 85/258 [01:50<03:38, 1.27s/it]
{'loss': 1.5716, 'grad_norm': 8.767570495605469, 'learning_rate': 1.7071067811865477e-05, 'epoch': 0.33}
33%|███▎ | 86/258 [01:51<03:45, 1.31s/it]
34%|███▎ | 87/258 [01:53<03:49, 1.34s/it]
34%|███▍ | 88/258 [01:54<03:44, 1.32s/it]
34%|███▍ | 89/258 [01:55<03:39, 1.30s/it]
35%|███▍ | 90/258 [01:56<03:36, 1.29s/it]
{'loss': 1.5327, 'grad_norm': 2.4866902828216553, 'learning_rate': 1.6576473166320644e-05, 'epoch': 0.35}
35%|███▌ | 91/258 [01:58<03:33, 1.28s/it]
36%|███▌ | 92/258 [01:59<03:31, 1.28s/it]
36%|███▌ | 93/258 [02:00<03:30, 1.27s/it]
36%|███▋ | 94/258 [02:02<03:35, 1.31s/it]
37%|███▋ | 95/258 [02:03<03:31, 1.29s/it]
{'loss': 1.5223, 'grad_norm': 2.8688602447509766, 'learning_rate': 1.6051742151937655e-05, 'epoch': 0.37}
37%|███▋ | 96/258 [02:04<03:36, 1.33s/it]
38%|███▊ | 97/258 [02:06<03:37, 1.35s/it]
38%|███▊ | 98/258 [02:07<03:31, 1.32s/it]
38%|███▊ | 99/258 [02:08<03:27, 1.30s/it]
39%|███▉ | 100/258 [02:09<03:24, 1.29s/it]
{'loss': 1.5069, 'grad_norm': 9.360047340393066, 'learning_rate': 1.549927932310155e-05, 'epoch': 0.39}
[INFO|trainer.py:4307] 2026-04-16 16:24:43,454 >>
***** Running Evaluation *****
[INFO|trainer.py:4309] 2026-04-16 16:24:43,454 >> Num examples = 895
[INFO|trainer.py:4312] 2026-04-16 16:24:43,454 >> Batch size = 8
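The 28-step bar that follows is consistent with this header: 895 eval examples over per-device batches of 8 on 4 ranks is ceil(895 / 32) = 28 steps per rank:

import math

print(math.ceil(895 / (8 * 4)))  # 28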
0%| | 0/28 [00:00<?, ?it/s]
7%|▋ | 2/28 [00:00<00:02, 11.79it/s]
14%|█▍ | 4/28 [00:00<00:03, 7.44it/s]
18%|█▊ | 5/28 [00:00<00:03, 6.93it/s]
21%|██▏ | 6/28 [00:00<00:03, 6.50it/s]
25%|██▌ | 7/28 [00:01<00:03, 6.28it/s]
29%|██▊ | 8/28 [00:01<00:03, 6.14it/s]
32%|███▏ | 9/28 [00:01<00:03, 6.11it/s]
36%|███▌ | 10/28 [00:01<00:02, 6.07it/s]
39%|███▉ | 11/28 [00:01<00:02, 5.98it/s]
43%|████▎ | 12/28 [00:01<00:02, 5.89it/s]
46%|████▋ | 13/28 [00:02<00:02, 5.89it/s]
50%|█████ | 14/28 [00:02<00:02, 5.89it/s]
54%|█████▎ | 15/28 [00:02<00:02, 5.89it/s]
57%|█████▋ | 16/28 [00:02<00:02, 5.87it/s]
61%|██████ | 17/28 [00:02<00:01, 5.87it/s]
64%|██████▍ | 18/28 [00:02<00:01, 5.82it/s]
68%|██████▊ | 19/28 [00:03<00:01, 5.80it/s]
71%|███████▏ | 20/28 [00:03<00:01, 5.82it/s]
75%|███████▌ | 21/28 [00:03<00:01, 5.83it/s]
79%|███████▊ | 22/28 [00:03<00:01, 5.86it/s]
82%|████████▏ | 23/28 [00:03<00:00, 5.84it/s]
86%|████████▌ | 24/28 [00:03<00:00, 5.83it/s]
89%|████████▉ | 25/28 [00:04<00:00, 5.84it/s]
93%|█████████▎| 26/28 [00:04<00:00, 5.87it/s]
96%|█████████▋| 27/28 [00:04<00:00, 5.86it/s]
100%|██████████| 28/28 [00:04<00:00, 5.89it/s]
{'eval_loss': 1.489111065864563, 'eval_runtime': 4.7946, 'eval_samples_per_second': 186.667, 'eval_steps_per_second': 5.84, 'epoch': 0.39}
39%|███▉ | 101/258 [02:15<07:07, 2.72s/it]
40%|███▉ | 102/258 [02:17<05:56, 2.28s/it]
40%|███▉ | 103/258 [02:18<05:06, 1.97s/it]
40%|████ | 104/258 [02:19<04:31, 1.76s/it]
41%|████ | 105/258 [02:21<04:14, 1.66s/it]
{'loss': 1.4631, 'grad_norm': 2.7148385047912598, 'learning_rate': 1.4921616313890073e-05, 'epoch': 0.41}
41%|████ | 106/258 [02:22<04:00, 1.58s/it]
41%|████▏ | 107/258 [02:23<03:44, 1.49s/it]
42%|████▏ | 108/258 [02:25<03:32, 1.42s/it]
42%|████▏ | 109/258 [02:26<03:23, 1.37s/it]
43%|████▎ | 110/258 [02:27<03:17, 1.34s/it]
{'loss': 1.4502, 'grad_norm': 2.1432764530181885, 'learning_rate': 1.4321400236983459e-05, 'epoch': 0.43}
43%|████▎ | 111/258 [02:28<03:12, 1.31s/it]
43%|████▎ | 112/258 [02:30<03:09, 1.30s/it]
44%|████▍ | 113/258 [02:31<03:05, 1.28s/it]
44%|████▍ | 114/258 [02:32<03:03, 1.27s/it]
45%|████▍ | 115/258 [02:34<03:08, 1.31s/it]
{'loss': 1.411, 'grad_norm': 2.373682975769043, 'learning_rate': 1.3701381553399147e-05, 'epoch': 0.44}
45%|████▍ | 116/258 [02:35<03:09, 1.34s/it]
45%|████▌ | 117/258 [02:36<03:10, 1.35s/it]
46%|████▌ | 118/258 [02:38<03:05, 1.32s/it]
46%|████▌ | 119/258 [02:39<03:01, 1.30s/it]
47%|████▋ | 120/258 [02:40<02:58, 1.29s/it]
{'loss': 1.4031, 'grad_norm': 2.2317380905151367, 'learning_rate': 1.3064401468637793e-05, 'epoch': 0.46}
47%|████▋ | 121/258 [02:41<02:55, 1.28s/it]
47%|████▋ | 122/258 [02:43<02:53, 1.27s/it]
48%|████▊ | 123/258 [02:44<02:51, 1.27s/it]
48%|████▊ | 124/258 [02:45<02:55, 1.31s/it]
48%|████▊ | 125/258 [02:47<02:56, 1.33s/it]
{'loss': 1.3929, 'grad_norm': 1.9476709365844727, 'learning_rate': 1.2413378912997058e-05, 'epoch': 0.48}
49%|████▉ | 126/258 [02:48<02:52, 1.31s/it]
49%|████▉ | 127/258 [02:49<02:49, 1.29s/it]
50%|████▉ | 128/258 [02:50<02:46, 1.28s/it]
50%|█████ | 129/258 [02:52<02:44, 1.27s/it]
50%|█████ | 130/258 [02:53<02:42, 1.27s/it]
{'loss': 1.3855, 'grad_norm': 1.9585973024368286, 'learning_rate': 1.175129716571531e-05, 'epoch': 0.5}
51%|█████ | 131/258 [02:54<02:40, 1.27s/it]
51%|█████ | 132/258 [02:55<02:39, 1.26s/it]
52%|█████▏ | 133/258 [02:57<02:43, 1.31s/it]
52%|█████▏ | 134/258 [02:58<02:46, 1.34s/it]
52%|█████▏ | 135/258 [03:00<02:41, 1.32s/it]
{'loss': 1.3818, 'grad_norm': 2.38059401512146, 'learning_rate': 1.1081190184239418e-05, 'epoch': 0.52}
53%|█████▎ | 136/258 [03:01<02:38, 1.30s/it]
53%|█████▎ | 137/258 [03:02<02:41, 1.33s/it]
53%|█████▎ | 138/258 [03:03<02:37, 1.31s/it]
54%|█████▍ | 139/258 [03:05<02:34, 1.30s/it]
54%|█████▍ | 140/258 [03:06<02:31, 1.29s/it]
{'loss': 1.334, 'grad_norm': 2.2399237155914307, 'learning_rate': 1.0406128701262128e-05, 'epoch': 0.54}
55%|█████▍ | 141/258 [03:07<02:29, 1.28s/it]
55%|█████▌ | 142/258 [03:09<02:33, 1.32s/it]
55%|█████▌ | 143/258 [03:10<02:35, 1.35s/it]
56%|█████▌ | 144/258 [03:11<02:30, 1.32s/it]
56%|█████▌ | 145/258 [03:13<02:27, 1.30s/it]
{'loss': 1.3365, 'grad_norm': 2.42098069190979, 'learning_rate': 9.729206153238658e-06, 'epoch': 0.56}
57%|█████▋ | 146/258 [03:14<02:24, 1.29s/it]
57%|█████▋ | 147/258 [03:15<02:21, 1.28s/it]
57%|█████▋ | 148/258 [03:16<02:19, 1.27s/it]
58%|█████▊ | 149/258 [03:18<02:17, 1.26s/it]
58%|█████▊ | 150/258 [03:19<02:16, 1.26s/it]
{'loss': 1.3288, 'grad_norm': 2.278903007507324, 'learning_rate': 9.053524504864391e-06, 'epoch': 0.58}
59%|█████▊ | 151/258 [03:20<02:20, 1.31s/it]
59%|█████▉ | 152/258 [03:22<02:21, 1.34s/it]
59%|█████▉ | 153/258 [03:23<02:17, 1.31s/it]
60%|█████▉ | 154/258 [03:24<02:18, 1.33s/it]
60%|██████ | 155/258 [03:26<02:14, 1.31s/it]
{'loss': 1.309, 'grad_norm': 1.902079701423645, 'learning_rate': 8.382180034472353e-06, 'epoch': 0.6}
60%|██████ | 156/258 [03:27<02:12, 1.29s/it]
61%|██████ | 157/258 [03:28<02:09, 1.28s/it]
61%|██████ | 158/258 [03:29<02:07, 1.28s/it]
62%|██████▏ | 159/258 [03:31<02:05, 1.27s/it]
62%|██████▏ | 160/258 [03:32<02:08, 1.31s/it]
{'loss': 1.3069, 'grad_norm': 1.8283921480178833, 'learning_rate': 7.718249145488143e-06, 'epoch': 0.62}
62%|██████▏ | 161/258 [03:33<02:09, 1.33s/it]
63%|██████▎ | 162/258 [03:35<02:05, 1.31s/it]
63%|██████▎ | 163/258 [03:36<02:02, 1.29s/it]
64%|██████▎ | 164/258 [03:37<02:00, 1.28s/it]
64%|██████▍ | 165/258 [03:38<01:58, 1.27s/it]
{'loss': 1.2787, 'grad_norm': 1.9884213209152222, 'learning_rate': 7.064774268960654e-06, 'epoch': 0.64}
64%|██████▍ | 166/258 [03:40<01:56, 1.27s/it]
65%|██████▍ | 167/258 [03:41<01:54, 1.26s/it]
65%|██████▌ | 168/258 [03:42<01:53, 1.26s/it]
66%|██████▌ | 169/258 [03:44<01:59, 1.34s/it]
66%|██████▌ | 170/258 [03:45<01:59, 1.36s/it]
{'loss': 1.2678, 'grad_norm': 2.0930237770080566, 'learning_rate': 6.4247499217695995e-06, 'epoch': 0.66}
66%|██████▋ | 171/258 [03:46<01:55, 1.33s/it]
67%|██████▋ | 172/258 [03:48<01:52, 1.31s/it]
67%|██████▋ | 173/258 [03:49<01:49, 1.29s/it]
67%|██████▋ | 174/258 [03:50<01:47, 1.28s/it]
68%|██████▊ | 175/258 [03:51<01:45, 1.27s/it]
{'loss': 1.2107, 'grad_norm': 2.522498369216919, 'learning_rate': 5.801108984397355e-06, 'epoch': 0.68}
68%|██████▊ | 176/258 [03:53<01:43, 1.26s/it]
69%|██████▊ | 177/258 [03:54<01:42, 1.26s/it]
69%|██████▉ | 178/258 [03:55<01:44, 1.31s/it]
69%|██████▉ | 179/258 [03:57<01:45, 1.33s/it]
70%|██████▉ | 180/258 [03:58<01:42, 1.31s/it]
{'loss': 1.2417, 'grad_norm': 2.825383424758911, 'learning_rate': 5.196709261146606e-06, 'epoch': 0.7}
70%|███████ | 181/258 [03:59<01:39, 1.29s/it]
71%|███████ | 182/258 [04:01<01:40, 1.32s/it]
71%|███████ | 183/258 [04:02<01:37, 1.30s/it]
71%|███████▏ | 184/258 [04:03<01:35, 1.29s/it]
72%|███████▏ | 185/258 [04:04<01:33, 1.28s/it]
{'loss': 1.2198, 'grad_norm': 2.353144645690918, 'learning_rate': 4.614320384390959e-06, 'epoch': 0.72}
72%|███████▏ | 186/258 [04:06<01:31, 1.27s/it]
72%|███████▏ | 187/258 [04:07<01:32, 1.31s/it]
73%|███████▎ | 188/258 [04:08<01:33, 1.33s/it]
73%|███████▎ | 189/258 [04:10<01:30, 1.31s/it]
74%|███████▎ | 190/258 [04:11<01:27, 1.29s/it]
{'loss': 1.2026, 'grad_norm': 2.3441641330718994, 'learning_rate': 4.056611122869106e-06, 'epoch': 0.74}
74%|███████▍ | 191/258 [04:12<01:25, 1.28s/it]
74%|███████▍ | 192/258 [04:13<01:24, 1.27s/it]
75%|███████▍ | 193/258 [04:15<01:22, 1.27s/it]
75%|███████▌ | 194/258 [04:16<01:23, 1.31s/it]
76%|███████▌ | 195/258 [04:17<01:21, 1.29s/it]
{'loss': 1.1828, 'grad_norm': 2.090709924697876, 'learning_rate': 3.5261371521817247e-06, 'epoch': 0.75}
76%|███████▌ | 196/258 [04:19<01:22, 1.33s/it]
76%|███████▋ | 197/258 [04:20<01:22, 1.35s/it]
77%|███████▋ | 198/258 [04:21<01:19, 1.32s/it]
77%|███████▋ | 199/258 [04:23<01:16, 1.30s/it]
78%|███████▊ | 200/258 [04:24<01:14, 1.29s/it]
{'loss': 1.2171, 'grad_norm': 1.8887003660202026, 'learning_rate': 3.0253293435321797e-06, 'epoch': 0.77}
[INFO|trainer.py:4307] 2026-04-16 16:26:57,969 >>
***** Running Evaluation *****
[INFO|trainer.py:4309] 2026-04-16 16:26:57,969 >> Num examples = 895
[INFO|trainer.py:4312] 2026-04-16 16:26:57,969 >> Batch size = 8
0%| | 0/28 [00:00<?, ?it/s]
7%|▋ | 2/28 [00:00<00:02, 11.85it/s]
14%|█▍ | 4/28 [00:00<00:03, 7.45it/s]
18%|█▊ | 5/28 [00:00<00:03, 6.90it/s]
21%|██▏ | 6/28 [00:00<00:03, 6.51it/s]
25%|██▌ | 7/28 [00:01<00:03, 6.32it/s]
29%|██▊ | 8/28 [00:01<00:03, 6.18it/s]
32%|███▏ | 9/28 [00:01<00:03, 6.06it/s]
36%|███▌ | 10/28 [00:01<00:02, 6.02it/s]
39%|███▉ | 11/28 [00:01<00:02, 6.01it/s]
43%|████▎ | 12/28 [00:01<00:02, 5.96it/s]
46%|████▋ | 13/28 [00:02<00:02, 5.92it/s]
50%|█████ | 14/28 [00:02<00:02, 5.95it/s]
54%|█████▎ | 15/28 [00:02<00:02, 5.92it/s]
57%|█████▋ | 16/28 [00:02<00:02, 5.90it/s]
61%|██████ | 17/28 [00:02<00:01, 5.89it/s]
64%|██████▍ | 18/28 [00:02<00:01, 5.91it/s]
68%|██████▊ | 19/28 [00:03<00:01, 5.91it/s]
71%|███████▏ | 20/28 [00:03<00:01, 5.90it/s]
75%|███████▌ | 21/28 [00:03<00:01, 5.89it/s]
79%|███████▊ | 22/28 [00:03<00:01, 5.88it/s]
82%|████████▏ | 23/28 [00:03<00:00, 5.90it/s]
86%|████████▌ | 24/28 [00:03<00:00, 5.88it/s]
89%|████████▉ | 25/28 [00:04<00:00, 5.91it/s]
93%|█████████▎| 26/28 [00:04<00:00, 5.88it/s]
96%|█████████▋| 27/28 [00:04<00:00, 5.88it/s]
100%|██████████| 28/28 [00:04<00:00, 5.92it/s]
{'eval_loss': 1.193406581878662, 'eval_runtime': 4.7625, 'eval_samples_per_second': 187.926, 'eval_steps_per_second': 5.879, 'epoch': 0.77}
[INFO|trainer.py:3984] 2026-04-16 16:27:21,898 >> Saving model checkpoint to /scratch/feng.yulu/dynamic-dpo-v4/outputs/llama-3-8b-base-sft-hh-helpful-4xh200-batch-64-20260416-162101/checkpoint-200
[INFO|configuration_utils.py:419] 2026-04-16 16:27:21,904 >> Configuration saved in /scratch/feng.yulu/dynamic-dpo-v4/outputs/llama-3-8b-base-sft-hh-helpful-4xh200-batch-64-20260416-162101/checkpoint-200/config.json
[INFO|configuration_utils.py:911] 2026-04-16 16:27:21,910 >> Configuration saved in /scratch/feng.yulu/dynamic-dpo-v4/outputs/llama-3-8b-base-sft-hh-helpful-4xh200-batch-64-20260416-162101/checkpoint-200/generation_config.json
[INFO|modeling_utils.py:3580] 2026-04-16 16:28:10,583 >> The model is bigger than the maximum size per checkpoint (5GB) and is going to be split in 6 checkpoint shards. You can find where each parameters has been saved in the index located at /scratch/feng.yulu/dynamic-dpo-v4/outputs/llama-3-8b-base-sft-hh-helpful-4xh200-batch-64-20260416-162101/checkpoint-200/model.safetensors.index.json.
[INFO|tokenization_utils_base.py:2510] 2026-04-16 16:28:10,589 >> tokenizer config file saved in /scratch/feng.yulu/dynamic-dpo-v4/outputs/llama-3-8b-base-sft-hh-helpful-4xh200-batch-64-20260416-162101/checkpoint-200/tokenizer_config.json
[INFO|tokenization_utils_base.py:2519] 2026-04-16 16:28:10,592 >> Special tokens file saved in /scratch/feng.yulu/dynamic-dpo-v4/outputs/llama-3-8b-base-sft-hh-helpful-4xh200-batch-64-20260416-162101/checkpoint-200/special_tokens_map.json
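The mid-run checkpoint saved above is a complete HF model directory (sharded safetensors, tokenizer files, configs). A hedged reader-added sketch for loading it back for inspection:

from transformers import AutoModelForCausalLM, AutoTokenizer

ckpt = ("/scratch/feng.yulu/dynamic-dpo-v4/outputs/"
        "llama-3-8b-base-sft-hh-helpful-4xh200-batch-64-20260416-162101/checkpoint-200")
tokenizer = AutoTokenizer.from_pretrained(ckpt)
model = AutoModelForCausalLM.from_pretrained(ckpt, torch_dtype="bfloat16")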
78%|███████▊ | 201/258 [09:14<1:23:30, 87.91s/it]
78%|███████▊ | 202/258 [09:15<57:47, 61.91s/it]
79%|███████▊ | 203/258 [09:16<40:04, 43.71s/it]
79%|███████▉ | 204/258 [09:18<27:52, 30.97s/it]
79%|███████▉ | 205/258 [09:19<19:32, 22.11s/it]
{'loss': 1.1818, 'grad_norm': 2.0174171924591064, 'learning_rate': 2.5564826243772965e-06, 'epoch': 0.79}
80%|███████▉ | 206/258 [09:21<13:47, 15.91s/it]
80%|████████ | 207/258 [09:22<09:49, 11.56s/it]
81%|████████ | 208/258 [09:23<07:03, 8.47s/it]
81%|████████ | 209/258 [09:24<05:08, 6.30s/it]
81%|████████▏ | 210/258 [09:26<03:49, 4.79s/it]
{'loss': 1.1851, 'grad_norm': 2.1461687088012695, 'learning_rate': 2.1217454620337842e-06, 'epoch': 0.81}
82%|████████▏ | 211/258 [09:27<02:55, 3.73s/it]
82%|████████▏ | 212/258 [09:28<02:17, 2.99s/it]
83%|████████▎ | 213/258 [09:30<01:51, 2.47s/it]
83%|████████▎ | 214/258 [09:31<01:34, 2.16s/it]
83%|████████▎ | 215/258 [09:32<01:22, 1.92s/it]
{'loss': 1.1799, 'grad_norm': 1.9792951345443726, 'learning_rate': 1.7231100184310955e-06, 'epoch': 0.83}
84%|████████▎ | 216/258 [09:34<01:12, 1.72s/it]
84%|████████▍ | 217/258 [09:35<01:04, 1.58s/it]
84%|████████▍ | 218/258 [09:36<00:59, 1.48s/it]
85%|████████▍ | 219/258 [09:37<00:55, 1.42s/it]
85%|████████▌ | 220/258 [09:39<00:52, 1.37s/it]
{'loss': 1.1748, 'grad_norm': 2.1498570442199707, 'learning_rate': 1.3624030211261684e-06, 'epoch': 0.85}
86%|████████▌ | 221/258 [09:40<00:49, 1.34s/it]
86%|████████▌ | 222/258 [09:41<00:49, 1.37s/it]
86%|████████▋ | 223/258 [09:43<00:48, 1.38s/it]
87%|████████▋ | 224/258 [09:44<00:46, 1.38s/it]
87%|████████▋ | 225/258 [09:45<00:44, 1.34s/it]
{'loss': 1.1529, 'grad_norm': 2.3734147548675537, 'learning_rate': 1.0412773924131202e-06, 'epoch': 0.87}
88%|████████▊ | 226/258 [09:47<00:42, 1.32s/it]
88%|████████▊ | 227/258 [09:48<00:40, 1.30s/it]
88%|████████▊ | 228/258 [09:49<00:38, 1.29s/it]
89%|████████▉ | 229/258 [09:50<00:37, 1.28s/it]
89%|████████▉ | 230/258 [09:52<00:35, 1.27s/it]
{'loss': 1.1486, 'grad_norm': 21.268768310546875, 'learning_rate': 7.612046748871327e-07, 'epoch': 0.89}
90%|████████▉ | 231/258 [09:53<00:34, 1.27s/it]
90%|████████▉ | 232/258 [09:54<00:34, 1.31s/it]
90%|█████████ | 233/258 [09:56<00:33, 1.33s/it]
91%|█████████ | 234/258 [09:57<00:31, 1.31s/it]
91%|█████████ | 235/258 [09:58<00:30, 1.34s/it]
{'loss': 1.1451, 'grad_norm': 2.0091605186462402, 'learning_rate': 5.234682881719766e-07, 'epoch': 0.91}
91%|█████████▏| 236/258 [10:00<00:29, 1.32s/it]
92%|█████████▏| 237/258 [10:01<00:27, 1.30s/it]
92%|█████████▏| 238/258 [10:02<00:25, 1.29s/it]
93%|█████████▎| 239/258 [10:03<00:24, 1.28s/it]
93%|█████████▎| 240/258 [10:05<00:22, 1.27s/it]
{'loss': 1.1681, 'grad_norm': 1.8940573930740356, 'learning_rate': 3.2915764771193294e-07, 'epoch': 0.93}
93%|█████████▎| 241/258 [10:06<00:22, 1.31s/it]
94%|█████████▍| 242/258 [10:07<00:21, 1.33s/it]
94%|█████████▍| 243/258 [10:09<00:19, 1.31s/it]
95%|█████████▍| 244/258 [10:10<00:18, 1.30s/it]
95%|█████████▍| 245/258 [10:11<00:16, 1.29s/it]
{'loss': 1.1391, 'grad_norm': 2.0212926864624023, 'learning_rate': 1.791631725784404e-07, 'epoch': 0.95}
95%|█████████▌| 246/258 [10:13<00:15, 1.28s/it]
96%|█████████▌| 247/258 [10:14<00:14, 1.32s/it]
96%|█████████▌| 248/258 [10:15<00:13, 1.30s/it]
97%|█████████▋| 249/258 [10:16<00:11, 1.29s/it]
97%|█████████▋| 250/258 [10:18<00:10, 1.32s/it]
{'loss': 1.1662, 'grad_norm': 2.1692233085632324, 'learning_rate': 7.4172205167945e-08, 'epoch': 0.97}
97%|█████████▋| 251/258 [10:19<00:09, 1.34s/it]
98%|█████████▊| 252/258 [10:20<00:07, 1.32s/it]
98%|█████████▊| 253/258 [10:22<00:06, 1.30s/it]
98%|█████████▊| 254/258 [10:23<00:05, 1.29s/it]
99%|█████████▉| 255/258 [10:24<00:03, 1.28s/it]
{'loss': 1.163, 'grad_norm': 1.9714499711990356, 'learning_rate': 1.4665861488761813e-08, 'epoch': 0.99}
99%|█████████▉| 256/258 [10:26<00:02, 1.27s/it]
100%|█████████▉| 257/258 [10:27<00:01, 1.27s/it]
100%|██████████| 258/258 [10:28<00:00, 1.31s/it]
[INFO|trainer.py:3984] 2026-04-16 16:33:17,710 >> Saving model checkpoint to /scratch/feng.yulu/dynamic-dpo-v4/outputs/llama-3-8b-base-sft-hh-helpful-4xh200-batch-64-20260416-162101/checkpoint-258
[INFO|configuration_utils.py:419] 2026-04-16 16:33:17,717 >> Configuration saved in /scratch/feng.yulu/dynamic-dpo-v4/outputs/llama-3-8b-base-sft-hh-helpful-4xh200-batch-64-20260416-162101/checkpoint-258/config.json
[INFO|configuration_utils.py:911] 2026-04-16 16:33:17,725 >> Configuration saved in /scratch/feng.yulu/dynamic-dpo-v4/outputs/llama-3-8b-base-sft-hh-helpful-4xh200-batch-64-20260416-162101/checkpoint-258/generation_config.json
[INFO|modeling_utils.py:3580] 2026-04-16 16:34:00,825 >> The model is bigger than the maximum size per checkpoint (5GB) and is going to be split in 6 checkpoint shards. You can find where each parameters has been saved in the index located at /scratch/feng.yulu/dynamic-dpo-v4/outputs/llama-3-8b-base-sft-hh-helpful-4xh200-batch-64-20260416-162101/checkpoint-258/model.safetensors.index.json.
[INFO|tokenization_utils_base.py:2510] 2026-04-16 16:34:00,890 >> tokenizer config file saved in /scratch/feng.yulu/dynamic-dpo-v4/outputs/llama-3-8b-base-sft-hh-helpful-4xh200-batch-64-20260416-162101/checkpoint-258/tokenizer_config.json
[INFO|tokenization_utils_base.py:2519] 2026-04-16 16:34:00,904 >> Special tokens file saved in /scratch/feng.yulu/dynamic-dpo-v4/outputs/llama-3-8b-base-sft-hh-helpful-4xh200-batch-64-20260416-162101/checkpoint-258/special_tokens_map.json
[INFO|trainer.py:2681] 2026-04-16 16:37:12,033 >>
Training completed. Do not forget to share your model on huggingface.co/models =)
{'train_runtime': 885.0234, 'train_samples_per_second': 18.662, 'train_steps_per_second': 0.292, 'train_loss': 1.5008004404777704, 'epoch': 1.0}
100%|██████████| 258/258 [14:38<00:00, 3.41s/it]
***** train metrics *****
epoch = 0.9981
total_flos = 88635435GF
train_loss = 1.5008
train_runtime = 0:14:45.02
train_samples = 43598
train_samples_per_second = 18.662
train_steps_per_second = 0.292
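The throughput figures above follow from the runtime: 16,516 packed training examples (the Num examples logged at the start of training) and 258 steps over 885.0234 s:

print(round(16516 / 885.0234, 3))  # 18.662 samples/s, as logged
print(round(258 / 885.0234, 3))    # 0.292 steps/s, as logged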
2026-04-16 16:37:12 - INFO - __main__ - *** Save model ***
[INFO|configuration_utils.py:419] 2026-04-16 16:37:29,519 >> Configuration saved in /scratch/feng.yulu/dynamic-dpo-v4/outputs/llama-3-8b-base-sft-hh-helpful-4xh200-batch-64-20260416-162101/config.json
[INFO|configuration_utils.py:911] 2026-04-16 16:37:29,523 >> Configuration saved in /scratch/feng.yulu/dynamic-dpo-v4/outputs/llama-3-8b-base-sft-hh-helpful-4xh200-batch-64-20260416-162101/generation_config.json
[INFO|modeling_utils.py:3580] 2026-04-16 16:38:16,222 >> The model is bigger than the maximum size per checkpoint (5GB) and is going to be split in 7 checkpoint shards. You can find where each parameters has been saved in the index located at /scratch/feng.yulu/dynamic-dpo-v4/outputs/llama-3-8b-base-sft-hh-helpful-4xh200-batch-64-20260416-162101/model.safetensors.index.json.
[INFO|tokenization_utils_base.py:2510] 2026-04-16 16:38:16,229 >> tokenizer config file saved in /scratch/feng.yulu/dynamic-dpo-v4/outputs/llama-3-8b-base-sft-hh-helpful-4xh200-batch-64-20260416-162101/tokenizer_config.json
[INFO|tokenization_utils_base.py:2519] 2026-04-16 16:38:16,232 >> Special tokens file saved in /scratch/feng.yulu/dynamic-dpo-v4/outputs/llama-3-8b-base-sft-hh-helpful-4xh200-batch-64-20260416-162101/special_tokens_map.json
2026-04-16 16:38:16 - INFO - __main__ - Saved HF-compatible model artifacts to /scratch/feng.yulu/dynamic-dpo-v4/outputs/llama-3-8b-base-sft-hh-helpful-4xh200-batch-64-20260416-162101
2026-04-16 16:38:16 - INFO - __main__ - Saved validated HF-compatible model artifacts to /scratch/feng.yulu/dynamic-dpo-v4/outputs/llama-3-8b-base-sft-hh-helpful-4xh200-batch-64-20260416-162101
[INFO|modelcard.py:450] 2026-04-16 16:38:16,543 >> Dropping the following result as it does not have all the necessary fields:
{'dataset': {'name': 'Anthropic/hh-rlhf', 'type': 'Anthropic/hh-rlhf', 'config': 'default', 'split': 'train', 'args': 'default'}}
[INFO|configuration_utils.py:419] 2026-04-16 16:38:16,550 >> Configuration saved in /scratch/feng.yulu/dynamic-dpo-v4/outputs/llama-3-8b-base-sft-hh-helpful-4xh200-batch-64-20260416-162101/config.json
2026-04-16 16:38:16 - INFO - __main__ - *** Evaluate ***
[INFO|trainer.py:4307] 2026-04-16 16:38:16,552 >>
***** Running Evaluation *****
[INFO|trainer.py:4309] 2026-04-16 16:38:16,552 >> Num examples = 895
[INFO|trainer.py:4312] 2026-04-16 16:38:16,552 >> Batch size = 8
0%| | 0/28 [00:00<?, ?it/s]
7%|▋ | 2/28 [00:00<00:02, 12.23it/s]
14%|█▍ | 4/28 [00:00<00:03, 7.54it/s]
18%|█▊ | 5/28 [00:00<00:03, 7.03it/s]
21%|██▏ | 6/28 [00:00<00:03, 6.59it/s]
25%|██▌ | 7/28 [00:01<00:03, 6.39it/s]
29%|██▊ | 8/28 [00:01<00:03, 6.23it/s]
32%|███▏ | 9/28 [00:01<00:03, 6.12it/s]
36%|███▌ | 10/28 [00:01<00:02, 6.10it/s]
39%|███▉ | 11/28 [00:01<00:02, 6.08it/s]
43%|████▎ | 12/28 [00:01<00:02, 5.99it/s]
46%|████▋ | 13/28 [00:02<00:02, 5.96it/s]
50%|█████ | 14/28 [00:02<00:02, 5.94it/s]
54%|█████▎ | 15/28 [00:02<00:02, 5.94it/s]
57%|█████▋ | 16/28 [00:02<00:02, 5.90it/s]
61%|██████ | 17/28 [00:02<00:01, 5.90it/s]
64%|██████▍ | 18/28 [00:02<00:01, 5.90it/s]
68%|██████▊ | 19/28 [00:03<00:01, 5.87it/s]
71%|███████▏ | 20/28 [00:03<00:01, 5.89it/s]
75%|███████▌ | 21/28 [00:03<00:01, 5.91it/s]
79%|███████▊ | 22/28 [00:03<00:01, 5.91it/s]
82%|████████▏ | 23/28 [00:03<00:00, 5.91it/s]
86%|████████▌ | 24/28 [00:03<00:00, 5.91it/s]
89%|████████▉ | 25/28 [00:04<00:00, 5.89it/s]
93%|█████████▎| 26/28 [00:04<00:00, 5.89it/s]
96%|█████████▋| 27/28 [00:04<00:00, 5.90it/s]
100%|██████████| 28/28 [00:04<00:00, 5.90it/s]
***** eval metrics *****
epoch = 0.9981
eval_loss = 1.1544
eval_runtime = 0:00:04.73
eval_samples = 2339
eval_samples_per_second = 188.882
eval_steps_per_second = 5.909
2026-04-16 16:38:21 - INFO - __main__ - *** Training complete ***
wandb: 0.070 MB of 0.070 MB uploaded
wandb:
wandb: Run history:
wandb: eval/loss █▂▁
wandb: eval/runtime █▄▁
wandb: eval/samples_per_second ▁▅█
wandb: eval/steps_per_second ▁▅█
wandb: train/epoch ▁▁▁▂▂▂▂▂▂▃▃▃▃▃▄▄▄▄▄▄▅▅▅▅▅▆▆▆▆▆▆▇▇▇▇▇████
wandb: train/global_step ▁▁▁▂▂▂▂▂▂▃▃▃▃▃▄▄▄▄▄▄▅▅▅▅▅▆▆▆▆▆▆▇▇▇▇▇████
wandb: train/grad_norm █▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁
wandb: train/learning_rate ▁▂▃▅▇██████▇▇▇▇▇▆▆▆▆▅▅▅▄▄▄▃▃▃▃▂▂▂▂▁▁▁▁▁▁
wandb: train/loss ██▇▆▄▄▃▃▃▃▃▃▂▂▂▂▂▂▂▂▂▂▂▂▂▂▁▁▁▁▁▁▁▁▁▁▁▁▁▁
wandb:
wandb: Run summary:
wandb: eval/loss 1.15445
wandb: eval/runtime 4.7384
wandb: eval/samples_per_second 188.882
wandb: eval/steps_per_second 5.909
wandb: total_flos 9.517157411769549e+16
wandb: train/epoch 0.99807
wandb: train/global_step 258
wandb: train/grad_norm 1.97145
wandb: train/learning_rate 0.0
wandb: train/loss 1.163
wandb: train_loss 1.5008
wandb: train_runtime 885.0234
wandb: train_samples_per_second 18.662
wandb: train_steps_per_second 0.292
wandb:
wandb: 🚀 View run llama-3-8b-base-sft-hh-helpful-4xh200-batch-64-20260416-162101 at: https://wandb.ai/can-not-fand-northeastern-university/huggingface/runs/ivik22vv
wandb: ⭐️ View project at: https://wandb.ai/can-not-fand-northeastern-university/huggingface
wandb: Synced 6 W&B file(s), 0 media file(s), 2 artifact file(s) and 0 other file(s)
wandb: Find logs at: /scratch/feng.yulu/dynamic-dpo-v4/wandb/wandb/run-20260416_162228-ivik22vv/logs
wandb: WARNING The new W&B backend becomes opt-out in version 0.18.0; try it out with `wandb.require("core")`! See https://wandb.me/wandb-core for more information.
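The closing warning points at the opt-in for the new W&B backend. The one-liner, verbatim from the message above:

import wandb

wandb.require("core")  # opt in before wandb.init(); see https://wandb.me/wandb-core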