Model: Bialy17/qwen-finetuned-Reasoning-Socratic-QandA
Source: Original Platform
2026-05-04 09:19:02 +08:00


---
base_model: unsloth/Qwen2.5-7B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
license: apache-2.0
language:
- en
datasets:
- Bialy17/Reasoning-Socratic-QandA
---

Uploaded finetuned model

  • Developed by: Bialy17
  • License: apache-2.0
  • Finetuned from model: unsloth/Qwen2.5-7B-Instruct

This qwen2 model was trained 2x faster with Unsloth and Huggingface's TRL library.


This is the same model as Bialy17/qwen-finetuned-Reasoning-Socratic-QandA-unsloth, but published as a pure standalone model.
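Because the weights are standalone (no separate adapter to attach), the model can be prompted with the standard Qwen2 ChatML template. The sketch below builds such a prompt by hand for illustration; the system message is a hypothetical placeholder, and in practice `tokenizer.apply_chat_template` from transformers produces this format for you.

```python
# Build a Qwen2-style ChatML prompt by hand (the format
# tokenizer.apply_chat_template emits for this model family).
def build_chatml_prompt(system: str, user: str) -> str:
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"  # generation continues from here
    )

prompt = build_chatml_prompt(
    "You are a Socratic tutor: answer with guiding questions.",  # placeholder system prompt
    "Why does ice float on water?",
)
print(prompt)
```

The string returned here would be tokenized and passed to `model.generate`; the assistant header is left open so the model completes from that point.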


Fine-tuned on RunPod using an RTX 4090 and the Unsloth template (docker.io/unsloth/unsloth:latest).

Trained with the below LoRA configuration:

  • r = 64
  • lora_alpha = 128
  • lora_dropout = 0
  • max_seq_length = 2048
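As a rough sanity check on what r = 64 implies, the snippet below estimates the number of trainable LoRA parameters. It assumes the Unsloth template's usual target modules (all attention and MLP projections) and the published Qwen2.5-7B dimensions (28 layers, hidden size 3584, intermediate size 18944, GQA with 512-wide k/v projections); none of these are stated in this card, so treat the figure as an estimate.

```python
# Estimate trainable LoRA parameters: each adapted linear layer of shape
# (d_out, d_in) gains r * (d_in + d_out) parameters (the A and B matrices).
r = 64
num_layers = 28                        # assumed Qwen2.5-7B depth
hidden, inter, kv = 3584, 18944, 512   # assumed Qwen2.5-7B dims

# (d_in, d_out) per adapted projection, assuming all 7 usual targets
projections = [
    (hidden, hidden),  # q_proj
    (hidden, kv),      # k_proj
    (hidden, kv),      # v_proj
    (hidden, hidden),  # o_proj
    (hidden, inter),   # gate_proj
    (hidden, inter),   # up_proj
    (inter, hidden),   # down_proj
]
per_layer = sum(r * (d_in + d_out) for d_in, d_out in projections)
total = per_layer * num_layers
print(f"{total:,} trainable LoRA parameters")  # ~161M under these assumptions
```

That is only about 2% of the 7B base parameters, which is what makes single-GPU fine-tuning on an RTX 4090 feasible.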

Trained for 1875 steps (≈2 epochs).

```
TrainOutput(global_step=1875, training_loss=0.6351530222256978, metrics={'train_runtime': 9419.7211, 'train_samples_per_second': 3.185, 'train_steps_per_second': 0.199, 'total_flos': 9.825842690769101e+17, 'train_loss': 0.6351530222256978, 'epoch': 1.9994666666666667})
```
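The reported metrics are internally consistent, which a little arithmetic confirms; the effective batch size below is derived from the logs rather than stated in the card, so the exact batch/grad-accumulation split is an assumption.

```python
# Derive throughput figures from the TrainOutput metrics above.
train_runtime = 9419.7211      # seconds
global_step = 1875
train_samples_per_second = 3.185

steps_per_sec = global_step / train_runtime
print(round(steps_per_sec, 3))  # matches the reported train_steps_per_second of 0.199

# samples/sec divided by steps/sec gives the effective batch size
eff_batch = train_samples_per_second / steps_per_sec
print(round(eff_batch))         # 16 per optimizer step (split between batch size
                                # and gradient accumulation is not stated)
```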