/home/feng.yulu/.conda/envs/dpo_venv/lib/python3.11/site-packages/transformers/utils/hub.py:105: FutureWarning: Using `TRANSFORMERS_CACHE` is deprecated and will be removed in v5 of Transformers. Use `HF_HOME` instead.
  warnings.warn(
[message repeated 9 times across the launched processes]
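The warning above means the run's environment still exports `TRANSFORMERS_CACHE`. A minimal fix, sketched under the assumption that the launch script can set environment variables before `transformers` is imported (the path mirrors the scratch cache root that appears later in this log):

```python
import os

# Point the consolidated Hugging Face cache at the scratch directory this
# run already uses; HF_HOME replaces the deprecated TRANSFORMERS_CACHE.
os.environ["HF_HOME"] = "/scratch/feng.yulu/dynamic-dpo-v4/hf"
os.environ.pop("TRANSFORMERS_CACHE", None)  # retire the deprecated variable
```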
2026-04-10 13:38:21 - WARNING - __main__ - Process rank: 2, device: cuda:2, n_gpu: 1 distributed training: True, 16-bits training: False
2026-04-10 13:38:21 - WARNING - __main__ - Process rank: 7, device: cuda:7, n_gpu: 1 distributed training: True, 16-bits training: False
2026-04-10 13:38:21 - WARNING - __main__ - Process rank: 1, device: cuda:1, n_gpu: 1 distributed training: True, 16-bits training: False
2026-04-10 13:38:21 - WARNING - __main__ - Process rank: 0, device: cuda:0, n_gpu: 1 distributed training: True, 16-bits training: False
2026-04-10 13:38:21 - INFO - __main__ - Model parameters ModelArguments(base_model_revision=None, model_name_or_path='/scratch/feng.yulu/dynamic-dpo-v4/base_models/Meta-Llama-3-8B', model_revision='main', model_code_revision=None, torch_dtype='bfloat16', tokenizer_name_or_path=None, trust_remote_code=False, attn_implementation='flash_attention_2', use_peft=False, lora_r=16, lora_alpha=32, lora_dropout=0.05, lora_target_modules=None, lora_modules_to_save=None, load_in_8bit=False, load_in_4bit=False, bnb_4bit_quant_type='nf4', use_bnb_nested_quant=False, bnb_4bit_quant_storage='uint8')
2026-04-10 13:38:21 - INFO - __main__ - Data parameters DataArguments(chat_template="{% set loop_messages = messages %}{% for message in loop_messages %}{% set content = '<|start_header_id|>' + message['role'] + '<|end_header_id|>\n\n'+ message['content'] | trim + '<|eot_id|>' %}{% if loop.index0 == 0 %}{% set content = bos_token + content %}{% endif %}{{ content }}{% endfor %}{% if add_generation_prompt %}{{ '<|start_header_id|>assistant<|end_header_id|>\n\n' }}{% endif %}", dataset_mixer={'Anthropic/hh-rlhf': 1.0}, text_column='text', dataset_splits=['train', 'test'], dataset_configs=['helpful-base'], dataset_dir=None, preprocessing_num_workers=12, use_persistent_hf_cache=False, hf_cache_dir=None, truncation_side=None, auto_insert_empty_system_msg=True, preprocessing_log_samples=0, preprocessing_log_dir=None)
2026-04-10 13:38:21 - INFO - __main__ - Training/evaluation parameters SFTConfig(
_n_gpu=1,
accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None, 'use_configured_state': False},
adafactor=False,
adam_beta1=0.9,
adam_beta2=0.999,
adam_epsilon=1e-08,
auto_find_batch_size=False,
average_tokens_across_devices=False,
batch_eval_metrics=False,
bf16=True,
bf16_full_eval=False,
chars_per_token=,
data_seed=None,
dataloader_drop_last=False,
dataloader_num_workers=0,
dataloader_persistent_workers=False,
dataloader_pin_memory=True,
dataloader_prefetch_factor=None,
dataset_batch_size=1000,
dataset_kwargs=None,
dataset_num_proc=None,
dataset_text_field=None,
ddp_backend=None,
ddp_broadcast_buffers=None,
ddp_bucket_cap_mb=None,
ddp_find_unused_parameters=None,
ddp_timeout=1800,
debug=[],
deepspeed=None,
disable_tqdm=False,
do_eval=True,
do_predict=False,
do_train=False,
eval_accumulation_steps=None,
eval_delay=0,
eval_do_concat_batches=True,
eval_on_start=False,
eval_packing=None,
eval_steps=100,
eval_strategy=IntervalStrategy.STEPS,
eval_use_gather_object=False,
fp16=False,
fp16_backend=auto,
fp16_full_eval=False,
fp16_opt_level=O1,
fsdp=[],
fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False},
fsdp_min_num_params=0,
fsdp_transformer_layer_cls_to_wrap=None,
full_determinism=False,
gradient_accumulation_steps=1,
gradient_checkpointing=True,
gradient_checkpointing_kwargs={'use_reentrant':
False},
greater_is_better=None,
group_by_length=False,
half_precision_backend=auto,
hub_always_push=False,
hub_model_id=W-61/llama-3-8b-base-sft-hh-helpful-4xh200,
hub_model_revision=main,
hub_private_repo=None,
hub_strategy=HubStrategy.END,
hub_token=,
ignore_data_skip=False,
include_for_metrics=[],
include_inputs_for_metrics=False,
include_num_input_tokens_seen=False,
include_tokens_per_second=False,
jit_mode_eval=False,
label_names=None,
label_smoothing_factor=0.0,
learning_rate=2e-05,
length_column_name=length,
load_best_model_at_end=False,
local_rank=0,
log_level=info,
log_level_replica=warning,
log_on_each_node=True,
logging_dir=outputs/llama-3-8b-base-sft-hh-helpful-4xh200/runs/Apr10_13-38-20_d4054,
logging_first_step=True,
logging_nan_inf_filter=True,
logging_steps=5,
logging_strategy=IntervalStrategy.STEPS,
lr_scheduler_kwargs={},
lr_scheduler_type=SchedulerType.COSINE,
max_grad_norm=1.0,
max_seq_length=512,
max_steps=-1,
metric_for_best_model=None,
model_init_kwargs=None,
mp_parameters=,
neftune_noise_alpha=None,
no_cuda=False,
num_of_sequences=1024,
num_train_epochs=1,
optim=OptimizerNames.ADAMW_TORCH,
optim_args=None,
optim_target_modules=None,
output_dir=/scratch/feng.yulu/dynamic-dpo-v4/outputs/llama-3-8b-base-sft-hh-helpful-8xh200-20260410-133758,
overwrite_output_dir=True,
packing=False,
past_index=-1,
per_device_eval_batch_size=16,
per_device_train_batch_size=16,
prediction_loss_only=False,
push_to_hub=False,
push_to_hub_model_id=None,
push_to_hub_organization=None,
push_to_hub_token=,
ray_scope=last,
remove_unused_columns=True,
report_to=['wandb'],
restore_callback_states_from_checkpoint=False,
resume_from_checkpoint=None,
run_name=llama-3-8b-base-sft-hh-helpful-8xh200-20260410-133758,
save_on_each_node=False,
save_only_model=False,
save_safetensors=True,
save_steps=200,
save_strategy=SaveStrategy.STEPS,
save_total_limit=2,
seed=42,
skip_memory_metrics=True,
tf32=None,
torch_compile=False,
torch_compile_backend=None,
torch_compile_mode=None,
torch_empty_cache_steps=None,
torchdynamo=None,
tp_size=0,
tpu_metrics_debug=False,
tpu_num_cores=None,
use_cpu=False,
use_ipex=False,
use_legacy_prediction_loop=False,
use_liger=False,
use_liger_kernel=False,
use_mps_device=False,
warmup_ratio=0.1,
warmup_steps=0,
weight_decay=0.0,
)
2026-04-10 13:38:21 - WARNING - __main__ - Process rank: 4, device: cuda:4, n_gpu: 1 distributed training: True, 16-bits training: False
2026-04-10 13:38:21 - WARNING - __main__ - Process rank: 5, device: cuda:5, n_gpu: 1 distributed training: True, 16-bits training: False
2026-04-10 13:38:21 - WARNING - __main__ - Process rank: 3, device: cuda:3, n_gpu: 1 distributed training: True, 16-bits training: False
2026-04-10 13:38:21 - WARNING - __main__ - Process rank: 6, device: cuda:6, n_gpu: 1 distributed training: True, 16-bits training: False
Downloading readme: 0%| | 0.00/5.77k [00:00<…]
[INFO|tokenization_utils_base.py:2058] 2026-04-10 13:38:31,308 >> loading file tokenizer.json
[INFO|tokenization_utils_base.py:2058] 2026-04-10 13:38:31,308 >> loading file tokenizer.model
[INFO|tokenization_utils_base.py:2058] 2026-04-10 13:38:31,308 >> loading file added_tokens.json
[INFO|tokenization_utils_base.py:2058] 2026-04-10 13:38:31,308 >> loading file special_tokens_map.json
[INFO|tokenization_utils_base.py:2058] 2026-04-10 13:38:31,308 >> loading file tokenizer_config.json
[INFO|tokenization_utils_base.py:2058] 2026-04-10 13:38:31,308 >> loading file chat_template.jinja
Normalizing raw HH preferences (test): 100%|██████████| 2339/2339 [00:00<00:00, 11399.82 examples/s]
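For reference, this is how the `chat_template` from the DataArguments dump above renders a conversation. A sketch only: the template string is copied from the log, the message list is illustrative, and the rendered text matches the samples printed further down.

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained(
    "/scratch/feng.yulu/dynamic-dpo-v4/base_models/Meta-Llama-3-8B"
)
# Template copied from DataArguments.chat_template above.
tok.chat_template = (
    "{% set loop_messages = messages %}{% for message in loop_messages %}"
    "{% set content = '<|start_header_id|>' + message['role'] + '<|end_header_id|>\n\n'"
    "+ message['content'] | trim + '<|eot_id|>' %}"
    "{% if loop.index0 == 0 %}{% set content = bos_token + content %}{% endif %}"
    "{{ content }}{% endfor %}"
    "{% if add_generation_prompt %}{{ '<|start_header_id|>assistant<|end_header_id|>\n\n' }}{% endif %}"
)
text = tok.apply_chat_template(
    [{"role": "user", "content": "What is the best way to clean my refrigerator?"}],
    tokenize=False,
    add_generation_prompt=True,
)
# -> "<|begin_of_text|><|start_header_id|>user<|end_header_id|>\n\nWhat is the best way..."
```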
[INFO|tokenization_utils_base.py:2323] 2026-04-10 13:38:31,605 >> Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
2026-04-10 13:38:31 - INFO - __main__ - *** Load pretrained model ***
2026-04-10 13:38:31 - INFO - datasets.arrow_dataset - Process #0 will write at /scratch/feng.yulu/dynamic-dpo-v4/hf/datasets/Anthropic___hh-rlhf/default-cfba128a0ab1b99f/0.0.0/09be8c5bbc57cb3887f3a9732ad6aa7ec602a1fa/cache-d3917bc8eb716f92_00000_of_00012.arrow
2026-04-10 13:38:31 - INFO - datasets.arrow_dataset - Process #1 will write at /scratch/feng.yulu/dynamic-dpo-v4/hf/datasets/Anthropic___hh-rlhf/default-cfba128a0ab1b99f/0.0.0/09be8c5bbc57cb3887f3a9732ad6aa7ec602a1fa/cache-d3917bc8eb716f92_00001_of_00012.arrow
2026-04-10 13:38:31 - INFO - datasets.arrow_dataset - Process #2 will write at /scratch/feng.yulu/dynamic-dpo-v4/hf/datasets/Anthropic___hh-rlhf/default-cfba128a0ab1b99f/0.0.0/09be8c5bbc57cb3887f3a9732ad6aa7ec602a1fa/cache-d3917bc8eb716f92_00002_of_00012.arrow
2026-04-10 13:38:31 - INFO - datasets.arrow_dataset - Process #3 will write at /scratch/feng.yulu/dynamic-dpo-v4/hf/datasets/Anthropic___hh-rlhf/default-cfba128a0ab1b99f/0.0.0/09be8c5bbc57cb3887f3a9732ad6aa7ec602a1fa/cache-d3917bc8eb716f92_00003_of_00012.arrow
2026-04-10 13:38:31 - INFO - datasets.arrow_dataset - Process #4 will write at /scratch/feng.yulu/dynamic-dpo-v4/hf/datasets/Anthropic___hh-rlhf/default-cfba128a0ab1b99f/0.0.0/09be8c5bbc57cb3887f3a9732ad6aa7ec602a1fa/cache-d3917bc8eb716f92_00004_of_00012.arrow
2026-04-10 13:38:31 - INFO - datasets.arrow_dataset - Process #5 will write at /scratch/feng.yulu/dynamic-dpo-v4/hf/datasets/Anthropic___hh-rlhf/default-cfba128a0ab1b99f/0.0.0/09be8c5bbc57cb3887f3a9732ad6aa7ec602a1fa/cache-d3917bc8eb716f92_00005_of_00012.arrow
2026-04-10 13:38:31 - INFO - datasets.arrow_dataset - Process #6 will write at /scratch/feng.yulu/dynamic-dpo-v4/hf/datasets/Anthropic___hh-rlhf/default-cfba128a0ab1b99f/0.0.0/09be8c5bbc57cb3887f3a9732ad6aa7ec602a1fa/cache-d3917bc8eb716f92_00006_of_00012.arrow
2026-04-10 13:38:31 - INFO - datasets.arrow_dataset - Process #7 will write at /scratch/feng.yulu/dynamic-dpo-v4/hf/datasets/Anthropic___hh-rlhf/default-cfba128a0ab1b99f/0.0.0/09be8c5bbc57cb3887f3a9732ad6aa7ec602a1fa/cache-d3917bc8eb716f92_00007_of_00012.arrow
2026-04-10 13:38:31 - INFO - datasets.arrow_dataset - Process #8 will write at /scratch/feng.yulu/dynamic-dpo-v4/hf/datasets/Anthropic___hh-rlhf/default-cfba128a0ab1b99f/0.0.0/09be8c5bbc57cb3887f3a9732ad6aa7ec602a1fa/cache-d3917bc8eb716f92_00008_of_00012.arrow
2026-04-10 13:38:31 - INFO - datasets.arrow_dataset - Process #9 will write at /scratch/feng.yulu/dynamic-dpo-v4/hf/datasets/Anthropic___hh-rlhf/default-cfba128a0ab1b99f/0.0.0/09be8c5bbc57cb3887f3a9732ad6aa7ec602a1fa/cache-d3917bc8eb716f92_00009_of_00012.arrow
2026-04-10 13:38:31 - INFO - datasets.arrow_dataset - Process #10 will write at /scratch/feng.yulu/dynamic-dpo-v4/hf/datasets/Anthropic___hh-rlhf/default-cfba128a0ab1b99f/0.0.0/09be8c5bbc57cb3887f3a9732ad6aa7ec602a1fa/cache-d3917bc8eb716f92_00010_of_00012.arrow
2026-04-10 13:38:31 - INFO - datasets.arrow_dataset - Process #11 will write at /scratch/feng.yulu/dynamic-dpo-v4/hf/datasets/Anthropic___hh-rlhf/default-cfba128a0ab1b99f/0.0.0/09be8c5bbc57cb3887f3a9732ad6aa7ec602a1fa/cache-d3917bc8eb716f92_00011_of_00012.arrow
Applying chat template (num_proc=12): 0%| | 0/43598 [00:00<…]
2026-04-10 13:39:23 - INFO - __main__ - Sample […] of the processed training set: <|begin_of_text|><|start_header_id|>user<|end_header_id|> What is the best way to clean my refrigerator?<|eot_id|><|start_header_id|>assistant<|end_header_id|> Well, I have a few ideas, but the best way to know what works best is to run some tests and then analyze the data to figure out which technique works best.<|eot_id|><|start_header_id|>user<|end_header_id|> What are your ideas?<|eot_id|><|start_header_id|>assistant<|end_header_id|> Some of my ideas include wiping down the interior with a rag, using bleach and water to wipe down the door, emptying the contents of the fridge to scrub the shelves with a clean cloth, cleaning the interior of the fridge with vinegar and water, and using a vacuum cleaner to clean the interior.<|eot_id|>
2026-04-10 13:39:23 - INFO - __main__ - Sample 7296 of the processed training set: <|begin_of_text|><|start_header_id|>user<|end_header_id|> How can I get blood out of cotton?<|eot_id|><|start_header_id|>assistant<|end_header_id|> Blood? What do you mean?<|eot_id|>
2026-04-10 13:39:23 - INFO - __main__ - Sample 1639 of the processed training set: <|begin_of_text|><|start_header_id|>user<|end_header_id|> I'm trying to get my friend to open up<|eot_id|><|start_header_id|>assistant<|end_header_id|> Sounds great! What have you tried so far?<|eot_id|><|start_header_id|>user<|end_header_id|> I've tried to give my friend a safe space to open up about their feelings but they still aren't comfortable<|eot_id|><|start_header_id|>assistant<|end_header_id|> It’s really important that people feel safe sharing their feelings, but I totally hear that this isn’t easy for them. And in my experience, that has a lot to do with them feeling that you can accept them and what they’re saying, even if they’re feeling difficult or uncomfortable feelings. Do you think you’ve done that, and that they still haven’t opened up?<|eot_id|>
Filter: 100%|██████████| 2339/2339 [00:02<00:00, 1093.93 examples/s]
[similar Filter progress lines repeated 10 times across the worker processes, 1068–1108 examples/s]
/home/feng.yulu/.conda/envs/dpo_venv/lib/python3.11/site-packages/huggingface_hub/utils/_deprecation.py:100: FutureWarning: Deprecated argument(s) used in '__init__': model_init_kwargs, dataset_text_field, max_seq_length, packing. Will not be supported from version '1.0.0'. Deprecated positional argument(s) used in SFTTrainer, please use the SFTConfig to set these arguments instead.
  warnings.warn(message, FutureWarning)
[message repeated 8 times, once per rank]
/home/feng.yulu/.conda/envs/dpo_venv/lib/python3.11/site-packages/trl/trainer/sft_trainer.py:158: UserWarning: You passed `model_init_kwargs` to the SFTTrainer, the value you passed will override the one in the `SFTConfig`.
  warnings.warn(
/home/feng.yulu/.conda/envs/dpo_venv/lib/python3.11/site-packages/trl/trainer/sft_trainer.py:185: UserWarning: You passed a model_id to the SFTTrainer. This will automatically create an `AutoModelForCausalLM` or a `PeftModel` (if you passed a `peft_config`) for you.
  warnings.warn(
[both messages repeated once per rank]
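All of the warnings above stem from passing trainer options directly to `SFTTrainer` instead of through `SFTConfig`. A hedged sketch of the requested migration (values taken from the config dump earlier in this log; the `train_dataset`/`eval_dataset` names are illustrative, and `packing=True` is an inference from the override warning and the packed example counts later on):

```python
from trl import SFTConfig, SFTTrainer

training_args = SFTConfig(
    output_dir="/scratch/feng.yulu/dynamic-dpo-v4/outputs/llama-3-8b-base-sft-hh-helpful-8xh200-20260410-133758",
    max_seq_length=512,          # previously passed straight to SFTTrainer
    packing=True,                # assumption: the override warning suggests packing was enabled at the call site
    dataset_text_field="text",
    model_init_kwargs={"torch_dtype": "bfloat16", "attn_implementation": "flash_attention_2"},
)
trainer = SFTTrainer(
    model="/scratch/feng.yulu/dynamic-dpo-v4/base_models/Meta-Llama-3-8B",
    args=training_args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
)
```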
[INFO|configuration_utils.py:691] 2026-04-10 13:39:26,149 >> loading configuration file /scratch/feng.yulu/dynamic-dpo-v4/base_models/Meta-Llama-3-8B/config.json
[INFO|configuration_utils.py:765] 2026-04-10 13:39:26,151 >> Model config LlamaConfig {
  "architectures": [
    "LlamaForCausalLM"
  ],
  "attention_bias": false,
  "attention_dropout": 0.0,
  "bos_token_id": 128000,
  "eos_token_id": 128001,
  "head_dim": 128,
  "hidden_act": "silu",
  "hidden_size": 4096,
  "initializer_range": 0.02,
  "intermediate_size": 14336,
  "max_position_embeddings": 8192,
  "mlp_bias": false,
  "model_type": "llama",
  "num_attention_heads": 32,
  "num_hidden_layers": 32,
  "num_key_value_heads": 8,
  "pretraining_tp": 1,
  "rms_norm_eps": 1e-05,
  "rope_scaling": null,
  "rope_theta": 500000.0,
  "tie_word_embeddings": false,
  "torch_dtype": "bfloat16",
  "transformers_version": "4.51.0",
  "use_cache": false,
  "vocab_size": 128256
}
[INFO|modeling_utils.py:1121] 2026-04-10 13:39:26,165 >> loading weights file /scratch/feng.yulu/dynamic-dpo-v4/base_models/Meta-Llama-3-8B/model.safetensors.index.json
[WARNING|logging.py:328] 2026-04-10 13:39:26,166 >> You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`.
[INFO|modeling_utils.py:2167] 2026-04-10 13:39:26,166 >> Instantiating LlamaForCausalLM model under default dtype torch.bfloat16.
[the Flash Attention warning is repeated by the remaining 7 ranks]
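The Flash Attention warning appears because `from_pretrained` builds the model on CPU before the FSDP wrapper moves it to the GPUs, so it is typically benign under `accelerate`. For a plain single-GPU load, the pattern the message asks for looks like this (a sketch; only the model path is taken from the log):

```python
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "/scratch/feng.yulu/dynamic-dpo-v4/base_models/Meta-Llama-3-8B",
    torch_dtype=torch.bfloat16,              # FA2 requires fp16/bf16 weights
    attn_implementation="flash_attention_2",
)
model.to("cuda")  # move to GPU so the FA2 kernels receive CUDA tensors
```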
[INFO|configuration_utils.py:1142] 2026-04-10 13:39:26,171 >> Generate config GenerationConfig {
  "bos_token_id": 128000,
  "eos_token_id": 128001,
  "use_cache": false
}
Loading checkpoint shards: 0%| | 0/4 [00:00<…]
All model checkpoint weights were used when initializing LlamaForCausalLM.
[INFO|modeling_utils.py:4934] 2026-04-10 13:39:26,262 >> All the weights of LlamaForCausalLM were initialized from the model checkpoint at /scratch/feng.yulu/dynamic-dpo-v4/base_models/Meta-Llama-3-8B. If your task is similar to the task the model of the checkpoint was trained on, you can already use LlamaForCausalLM for predictions without further training.
[INFO|configuration_utils.py:1095] 2026-04-10 13:39:26,264 >> loading configuration file /scratch/feng.yulu/dynamic-dpo-v4/base_models/Meta-Llama-3-8B/generation_config.json
[INFO|configuration_utils.py:1142] 2026-04-10 13:39:26,265 >> Generate config GenerationConfig {
  "bos_token_id": 128000,
  "do_sample": true,
  "eos_token_id": 128001,
  "max_length": 4096,
  "temperature": 0.6,
  "top_p": 0.9
}
/home/feng.yulu/.conda/envs/dpo_venv/lib/python3.11/site-packages/trl/trainer/sft_trainer.py:195: UserWarning: You passed a `packing` argument to the SFTTrainer, the value you passed will override the one in the `SFTConfig`.
  warnings.warn(
/home/feng.yulu/.conda/envs/dpo_venv/lib/python3.11/site-packages/trl/trainer/sft_trainer.py:283: UserWarning: You passed a `max_seq_length` argument to the SFTTrainer, the value you passed will override the one in the `SFTConfig`.
  warnings.warn(
/home/feng.yulu/.conda/envs/dpo_venv/lib/python3.11/site-packages/trl/trainer/sft_trainer.py:321: UserWarning: You passed a `dataset_text_field` argument to the SFTTrainer, the value you passed will override the one in the `SFTConfig`.
  warnings.warn(
2026-04-10 13:39:26 - INFO - datasets.builder - Using custom data configuration default-39b52f6e03e85a82
2026-04-10 13:39:26 - INFO - datasets.info - Loading Dataset Infos from /home/feng.yulu/.conda/envs/dpo_venv/lib/python3.11/site-packages/datasets/packaged_modules/generator
2026-04-10 13:39:26 - INFO - datasets.builder - Generating dataset generator (/scratch/feng.yulu/dynamic-dpo-v4/hf/datasets/generator/default-39b52f6e03e85a82/0.0.0)
2026-04-10 13:39:26 - INFO - datasets.builder - Downloading and preparing dataset generator/default to /scratch/feng.yulu/dynamic-dpo-v4/hf/datasets/generator/default-39b52f6e03e85a82/0.0.0...
2026-04-10 13:39:26 - INFO - datasets.builder - Generating train split
Generating train split: 16516 examples [00:15, 1064.79 examples/s]
[intermediate tqdm refreshes elided]
2026-04-10 13:39:41 - INFO - datasets.utils.info_utils - Unable to verify splits sizes.
2026-04-10 13:39:41 - INFO - datasets.builder - Dataset generator downloaded and prepared to /scratch/feng.yulu/dynamic-dpo-v4/hf/datasets/generator/default-39b52f6e03e85a82/0.0.0. Subsequent calls will reuse this data.
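The generator split above holds 16,516 examples built from 43,598 rendered conversations, consistent with packing the `text` column into fixed 512-token blocks. A hedged sketch of that step using TRL's `ConstantLengthDataset` (the import path and some defaults vary across TRL versions; `tok` and `train_dataset` continue the earlier sketches):

```python
from trl.trainer.utils import ConstantLengthDataset

packed_train = ConstantLengthDataset(
    tok,                         # tokenizer loaded earlier
    train_dataset,               # dataset with the rendered "text" column
    dataset_text_field="text",
    seq_length=512,              # max_seq_length from the config dump
    infinite=False,
)
```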
2026-04-10 13:39:42 - INFO - datasets.builder - Using custom data configuration default-1519231937de8df3
2026-04-10 13:39:42 - INFO - datasets.info - Loading Dataset Infos from /home/feng.yulu/.conda/envs/dpo_venv/lib/python3.11/site-packages/datasets/packaged_modules/generator
2026-04-10 13:39:42 - INFO - datasets.builder - Generating dataset generator (/scratch/feng.yulu/dynamic-dpo-v4/hf/datasets/generator/default-1519231937de8df3/0.0.0)
2026-04-10 13:39:42 - INFO - datasets.builder - Downloading and preparing dataset generator/default to /scratch/feng.yulu/dynamic-dpo-v4/hf/datasets/generator/default-1519231937de8df3/0.0.0...
2026-04-10 13:39:42 - INFO - datasets.builder - Generating train split
Generating train split: 895 examples [00:00, 1099.18 examples/s]
2026-04-10 13:39:42 - INFO - datasets.utils.info_utils - Unable to verify splits sizes.
2026-04-10 13:39:43 - INFO - datasets.builder - Dataset generator downloaded and prepared to /scratch/feng.yulu/dynamic-dpo-v4/hf/datasets/generator/default-1519231937de8df3/0.0.0. Subsequent calls will reuse this data.
/home/feng.yulu/.conda/envs/dpo_venv/lib/python3.11/site-packages/trl/trainer/sft_trainer.py:412: FutureWarning: `tokenizer` is deprecated and will be removed in version 5.0.0 for `SFTTrainer.__init__`. Use `processing_class` instead.
  super().__init__(
[message repeated 8 times, once per rank]
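The `tokenizer` deprecation above is resolved on newer TRL/Transformers releases by handing the tokenizer over as `processing_class`. A sketch against the same trainer call (variable names continue the earlier sketches; whether the kwarg is accepted depends on the installed TRL version):

```python
trainer = SFTTrainer(
    model="/scratch/feng.yulu/dynamic-dpo-v4/base_models/Meta-Llama-3-8B",
    args=training_args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    processing_class=tok,   # was: tokenizer=tok
)
```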
[INFO|trainer.py:748] 2026-04-10 13:39:44,030 >> Using auto half precision backend
2026-04-10 13:39:44 - INFO - __main__ - *** Train ***
/home/feng.yulu/.conda/envs/dpo_venv/lib/python3.11/site-packages/accelerate/accelerator.py:1557: UserWarning: Upcasted low precision parameters in LlamaForCausalLM because mixed precision turned on in FSDP. Affects: model.embed_tokens.weight, model.norm.weight, lm_head.weight.
  warnings.warn(
/home/feng.yulu/.conda/envs/dpo_venv/lib/python3.11/site-packages/accelerate/accelerator.py:1557: UserWarning: Upcasted low precision parameters in LlamaDecoderLayer because mixed precision turned on in FSDP. Affects: self_attn.q_proj.weight, self_attn.k_proj.weight, self_attn.v_proj.weight, self_attn.o_proj.weight, mlp.gate_proj.weight, mlp.up_proj.weight, mlp.down_proj.weight, input_layernorm.weight, post_attention_layernorm.weight.
  warnings.warn(
/home/feng.yulu/.conda/envs/dpo_venv/lib/python3.11/site-packages/accelerate/accelerator.py:1563: UserWarning: FSDP upcast of low precision parameters may affect the precision of model checkpoints.
  warnings.warn(
[INFO|trainer.py:2414] 2026-04-10 13:39:48,781 >> ***** Running training *****
[INFO|trainer.py:2415] 2026-04-10 13:39:48,781 >> Num examples = 16,516
[INFO|trainer.py:2416] 2026-04-10 13:39:48,781 >> Num Epochs = 1
[INFO|trainer.py:2417] 2026-04-10 13:39:48,781 >> Instantaneous batch size per device = 16
[INFO|trainer.py:2420] 2026-04-10 13:39:48,781 >> Total train batch size (w. parallel, distributed & accumulation) = 128
[INFO|trainer.py:2421] 2026-04-10 13:39:48,781 >> Gradient Accumulation steps = 1
[INFO|trainer.py:2422] 2026-04-10 13:39:48,781 >> Total optimization steps = 130
[INFO|trainer.py:2423] 2026-04-10 13:39:48,781 >> Number of trainable parameters = 1,003,782,656
[INFO|integration_utils.py:831] 2026-04-10 13:39:48,782 >> Automatic Weights & Biases logging enabled, to disable set os.environ["WANDB_DISABLED"] = "true"
wandb: Currently logged in as: can-not-fand (can-not-fand-northeastern-university). Use `wandb login --relogin` to force relogin
wandb: wandb version 0.25.1 is available! To upgrade, please run:
wandb: $ pip install wandb --upgrade
wandb: Tracking run with wandb version 0.17.5
wandb: Run data is saved locally in /scratch/feng.yulu/dynamic-dpo-v4/wandb/wandb/run-20260410_133950-cqdvywmp
wandb: Run `wandb offline` to turn off syncing.
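The totals in the "Running training" block above are internally consistent; a quick arithmetic cross-check (the 8-way split of the parameter count is an inference from the 8 FSDP ranks seen earlier):

```python
import math

per_device_batch = 16      # per_device_train_batch_size
world_size = 8             # ranks 0-7 in the log
grad_accum = 1             # gradient_accumulation_steps
global_batch = per_device_batch * world_size * grad_accum  # = 128
steps = math.ceil(16_516 / global_batch)                   # = 130 optimization steps
shard_params = 8_030_261_248 // world_size                 # = 1,003,782,656 trainable params per rank
```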
wandb: Syncing run llama-3-8b-base-sft-hh-helpful-8xh200-20260410-133758
wandb: ⭐️ View project at https://wandb.ai/can-not-fand-northeastern-university/huggingface
wandb: 🚀 View run at https://wandb.ai/can-not-fand-northeastern-university/huggingface/runs/cqdvywmp
0%| | 0/130 [00:00<…]
[per-step progress and the logging_steps=5 loss lines were lost in capture]
[INFO|trainer.py:4307] 2026-04-10 13:41:49,893 >> ***** Running Evaluation *****
[INFO|trainer.py:4309] 2026-04-10 13:41:49,893 >> Num examples = 895
[INFO|trainer.py:4312] 2026-04-10 13:41:49,893 >> Batch size = 16
0%| | 0/7 [00:00<…]
Saving model checkpoint to /scratch/feng.yulu/dynamic-dpo-v4/outputs/llama-3-8b-base-sft-hh-helpful-8xh200-20260410-133758/checkpoint-130
[INFO|configuration_utils.py:419] 2026-04-10 13:42:42,671 >> Configuration saved in /scratch/feng.yulu/dynamic-dpo-v4/outputs/llama-3-8b-base-sft-hh-helpful-8xh200-20260410-133758/checkpoint-130/config.json
[INFO|configuration_utils.py:911] 2026-04-10 13:42:42,675 >> Configuration saved in /scratch/feng.yulu/dynamic-dpo-v4/outputs/llama-3-8b-base-sft-hh-helpful-8xh200-20260410-133758/checkpoint-130/generation_config.json
[INFO|modeling_utils.py:3580] 2026-04-10 13:43:25,431 >> The model is bigger than the maximum size per checkpoint (5GB) and is going to be split in 6 checkpoint shards. You can find where each parameters has been saved in the index located at /scratch/feng.yulu/dynamic-dpo-v4/outputs/llama-3-8b-base-sft-hh-helpful-8xh200-20260410-133758/checkpoint-130/model.safetensors.index.json.
[INFO|tokenization_utils_base.py:2510] 2026-04-10 13:43:25,448 >> tokenizer config file saved in /scratch/feng.yulu/dynamic-dpo-v4/outputs/llama-3-8b-base-sft-hh-helpful-8xh200-20260410-133758/checkpoint-130/tokenizer_config.json
[INFO|tokenization_utils_base.py:2519] 2026-04-10 13:43:25,451 >> Special tokens file saved in /scratch/feng.yulu/dynamic-dpo-v4/outputs/llama-3-8b-base-sft-hh-helpful-8xh200-20260410-133758/checkpoint-130/special_tokens_map.json
[INFO|trainer.py:2681] 2026-04-10 13:46:32,794 >> Training completed. Do not forget to share your model on huggingface.co/models =)
{'train_runtime': 404.0129, 'train_samples_per_second': 40.88, 'train_steps_per_second': 0.322, 'train_loss': 1.8291644793290358, 'epoch': 1.0}
100%|██████████| 130/130 [06:39<00:00, 1.16s/it]
100%|██████████| 130/130 [06:39<00:00, 3.07s/it]
***** train metrics *****
  epoch                    =        1.0
  total_flos               = 44661265GF
  train_loss               =     1.8292
  train_runtime            = 0:06:44.01
  train_samples            =      43598
  train_samples_per_second =      40.88
  train_steps_per_second   =      0.322
2026-04-10 13:46:32 - INFO - __main__ - *** Save model ***
[INFO|configuration_utils.py:419] 2026-04-10 13:46:51,065 >> Configuration saved in /scratch/feng.yulu/dynamic-dpo-v4/outputs/llama-3-8b-base-sft-hh-helpful-8xh200-20260410-133758/config.json
[INFO|configuration_utils.py:911] 2026-04-10 13:46:51,069 >> Configuration saved in /scratch/feng.yulu/dynamic-dpo-v4/outputs/llama-3-8b-base-sft-hh-helpful-8xh200-20260410-133758/generation_config.json
[INFO|modeling_utils.py:3580] 2026-04-10 13:47:37,204 >> The model is bigger than the maximum size per checkpoint (5GB) and is going to be split in 7 checkpoint shards. You can find where each parameters has been saved in the index located at /scratch/feng.yulu/dynamic-dpo-v4/outputs/llama-3-8b-base-sft-hh-helpful-8xh200-20260410-133758/model.safetensors.index.json.
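Note that `train_samples_per_second` in the metrics above is computed over the 16,516 packed examples, not the 43,598 raw `train_samples`:

```python
print(16_516 / 404.0129)  # ≈ 40.88 -> train_samples_per_second
print(130 / 404.0129)     # ≈ 0.322 -> train_steps_per_second
```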
[INFO|tokenization_utils_base.py:2510] 2026-04-10 13:47:37,213 >> tokenizer config file saved in /scratch/feng.yulu/dynamic-dpo-v4/outputs/llama-3-8b-base-sft-hh-helpful-8xh200-20260410-133758/tokenizer_config.json
[INFO|tokenization_utils_base.py:2519] 2026-04-10 13:47:37,217 >> Special tokens file saved in /scratch/feng.yulu/dynamic-dpo-v4/outputs/llama-3-8b-base-sft-hh-helpful-8xh200-20260410-133758/special_tokens_map.json
2026-04-10 13:47:37 - INFO - __main__ - Saved HF-compatible model artifacts to /scratch/feng.yulu/dynamic-dpo-v4/outputs/llama-3-8b-base-sft-hh-helpful-8xh200-20260410-133758
2026-04-10 13:47:37 - INFO - __main__ - Saved validated HF-compatible model artifacts to /scratch/feng.yulu/dynamic-dpo-v4/outputs/llama-3-8b-base-sft-hh-helpful-8xh200-20260410-133758
[INFO|modelcard.py:450] 2026-04-10 13:47:37,519 >> Dropping the following result as it does not have all the necessary fields: {'dataset': {'name': 'Anthropic/hh-rlhf', 'type': 'Anthropic/hh-rlhf', 'config': 'default', 'split': 'train', 'args': 'default'}}
[INFO|configuration_utils.py:419] 2026-04-10 13:47:37,528 >> Configuration saved in /scratch/feng.yulu/dynamic-dpo-v4/outputs/llama-3-8b-base-sft-hh-helpful-8xh200-20260410-133758/config.json
2026-04-10 13:47:37 - INFO - __main__ - *** Evaluate ***
[INFO|trainer.py:4307] 2026-04-10 13:47:37,531 >> ***** Running Evaluation *****
[INFO|trainer.py:4309] 2026-04-10 13:47:37,531 >> Num examples = 895
[INFO|trainer.py:4312] 2026-04-10 13:47:37,531 >> Batch size = 16
0%| | 0/7 [00:00
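The capture ends during the final evaluation pass. Once the run finishes, the artifacts saved to the output directory above can be loaded back for inference; a minimal sketch:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

out_dir = "/scratch/feng.yulu/dynamic-dpo-v4/outputs/llama-3-8b-base-sft-hh-helpful-8xh200-20260410-133758"
model = AutoModelForCausalLM.from_pretrained(out_dir, torch_dtype=torch.bfloat16)
tok = AutoTokenizer.from_pretrained(out_dir)
```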