---
library_name: transformers
license: mit
base_model: gpt2
tags:
- generated_from_trainer
model-index:
- name: math-gpt2-simple
  results: []
---

# math-gpt2-simple

This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset. It achieves the following results on the evaluation set:

- Loss: 1.7100
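For intuition, an evaluation cross-entropy loss of 1.7100 corresponds to a perplexity of roughly 5.5, using the standard relation that perplexity is the exponential of the mean loss:

```python
import math

# Perplexity is the exponential of the mean cross-entropy loss.
eval_loss = 1.7100
perplexity = math.exp(eval_loss)
print(f"perplexity ≈ {perplexity:.2f}")  # ≈ 5.53
```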

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch implementation) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- num_epochs: 5
- mixed_precision_training: Native AMP
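The hyperparameters above can be sketched with the `transformers` `TrainingArguments` API. This is a minimal reconstruction, not the original training script; the `output_dir` value is an assumption:

```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the hyperparameters listed above.
training_args = TrainingArguments(
    output_dir="math-gpt2-simple",  # assumption: actual output dir unknown
    learning_rate=3e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch",            # AdamW, torch implementation
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    num_train_epochs=5,
    fp16=True,                      # "Native AMP" mixed precision
)
```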

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.272         | 0.7937 | 50   | 2.0902          |
| 2.1191        | 1.5873 | 100  | 1.8907          |
| 1.9521        | 2.3810 | 150  | 1.7958          |
| 1.9319        | 3.1746 | 200  | 1.7425          |
| 1.8994        | 3.9683 | 250  | 1.7167          |
| 1.8937        | 4.7619 | 300  | 1.7100          |
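The epoch and step columns above are mutually consistent: step 50 at epoch 0.7937 implies about 63 optimizer steps per epoch, which, with a train batch size of 8 (assuming a single device and no gradient accumulation), suggests a training set of roughly 500 examples. A quick back-of-the-envelope check:

```python
# Infer steps per epoch from the first logged (epoch, step) pair.
steps_per_epoch = round(50 / 0.7937)
print(steps_per_epoch)                  # 63

# Cross-check against the final row: step 300 should land at epoch ~4.7619.
print(round(300 / steps_per_epoch, 4))  # 4.7619

# With train_batch_size=8 and no gradient accumulation (assumption),
# this implies roughly 63 * 8 = 504 training examples.
print(steps_per_epoch * 8)              # 504
```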

### Framework versions

- Transformers 4.48.3
- Pytorch 2.5.1+cu124
- Datasets 3.3.2
- Tokenizers 0.21.0