---
library_name: transformers
license: mit
base_model: gpt2
tags:
- generated_from_trainer
model-index:
- name: gpt2-poems-finetuned-v1
  results: []
---

# gpt2-poems-finetuned-v1

This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset. It achieves the following results on the evaluation set:

- Loss: 3.8988
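
A cross-entropy loss of 3.8988 corresponds to a perplexity of roughly 49.3, since perplexity is simply the exponential of the loss:

```python
import math

# Perplexity is the exponential of the cross-entropy loss.
eval_loss = 3.8988
print(round(math.exp(eval_loss), 1))  # 49.3
```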

## Model description

More information needed

## Intended uses & limitations

More information needed
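
Although the card leaves this section blank, the model is a GPT-2 checkpoint fine-tuned on poems, so text generation is the natural use. A minimal sketch follows; the repo id is an assumption based on the model name and may need adjusting to wherever the checkpoint is actually hosted:

```python
from transformers import pipeline

# Minimal generation sketch; the repo id below is assumed from the model
# name, not confirmed by the card.
generator = pipeline("text-generation", model="kriteekathapa/gpt2-poems-finetuned-v1")
out = generator("The moon above the quiet sea", max_new_tokens=60, do_sample=True)
print(out[0]["generated_text"])
```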

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):

- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: AdamW (torch fused) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
- mixed_precision_training: Native AMP
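
As a rough reproduction aid, these settings map onto the `transformers` `Trainer` API roughly as below. The original training script is not published, so treat this as an approximation; the output directory is a placeholder and `fp16=True` is an assumption standing in for "Native AMP":

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="gpt2-poems-finetuned-v1",  # placeholder, not from the card
    learning_rate=2e-05,
    per_device_train_batch_size=2,   # train_batch_size: 2
    per_device_eval_batch_size=2,    # eval_batch_size: 2
    seed=42,
    gradient_accumulation_steps=8,   # 2 * 8 = total_train_batch_size of 16
    optim="adamw_torch_fused",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=5,
    fp16=True,                       # assumption for "Native AMP"
)
```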

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 4.2281        | 0.3193 | 500  | 4.0769          |
| 4.1077        | 0.6385 | 1000 | 4.0222          |
| 4.0667        | 0.9578 | 1500 | 3.9898          |
| 4.0905        | 1.2765 | 2000 | 3.9738          |
| 4.008         | 1.5957 | 2500 | 3.9561          |
| 4.0071        | 1.9150 | 3000 | 3.9448          |
| 3.9696        | 2.2337 | 3500 | 3.9376          |
| 3.9984        | 2.5530 | 4000 | 3.9289          |
| 3.9279        | 2.8722 | 4500 | 3.9212          |
| 3.9569        | 3.1909 | 5000 | 3.9181          |
| 3.9598        | 3.5102 | 5500 | 3.9154          |
| 3.9403        | 3.8294 | 6000 | 3.9126          |
| 3.908         | 4.1481 | 6500 | 3.9117          |
| 3.9344        | 4.4674 | 7000 | 3.9109          |
| 3.9645        | 4.7867 | 7500 | 3.9105          |

### Framework versions

- Transformers 4.57.6
- Pytorch 2.9.1+cu126
- Datasets 4.5.0
- Tokenizers 0.22.2
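
Exact version matches are rarely required for inference, though they can shift loss numbers slightly. A quick check of the local environment against the versions above:

```python
import transformers, torch, datasets, tokenizers

# Print installed versions to compare against the card's reported ones.
for name, mod in [("transformers", transformers), ("torch", torch),
                  ("datasets", datasets), ("tokenizers", tokenizers)]:
    print(f"{name}: {mod.__version__}")
```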