Model: Bialy17/qwen-finetuned-Reasoning-Socratic-QandA
| base_model | tags | license | language | datasets |
|---|---|---|---|---|
| unsloth/Qwen2.5-7B-Instruct | | apache-2.0 | | |
Uploaded finetuned model
- Developed by: Bialy17
- License: apache-2.0
- Finetuned from model: unsloth/Qwen2.5-7B-Instruct
This Qwen2 model was trained 2x faster with Unsloth and Hugging Face's TRL library.
This is the same model as 'Bialy17/qwen-finetuned-Reasoning-Socratic-QandA-unsloth', but published as a pure standalone model.
It was fine-tuned on RunPod using an RTX 4090 and the Unsloth template image (docker.io/unsloth/unsloth:latest).
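Because the repository ships standalone weights, it should load like any other `transformers` causal LM. A minimal usage sketch; the prompt and generation settings below are illustrative, not taken from the training setup:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Bialy17/qwen-finetuned-Reasoning-Socratic-QandA"

# Load the standalone fine-tuned weights and the Qwen2.5 chat template
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# Illustrative question; any chat-formatted message works
messages = [{"role": "user", "content": "Why do heavier objects not fall faster than lighter ones?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```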
Trained with the following LoRA configuration:
- r = 64
- lora_alpha = 128
- lora_dropout = 0
- max_seq_length = 2048
Trained for 1875 steps.
```
TrainOutput(global_step=1875, training_loss=0.6351530222256978, metrics={'train_runtime': 9419.7211, 'train_samples_per_second': 3.185, 'train_steps_per_second': 0.199, 'total_flos': 9.825842690769101e+17, 'train_loss': 0.6351530222256978, 'epoch': 1.9994666666666667})
```
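For reference, a minimal sketch of how such a run is set up with Unsloth and TRL, in the style of the Unsloth notebooks. The LoRA hyperparameters, sequence length, and step count come from this card; everything else is an assumption: the dataset path is a hypothetical placeholder, the target modules are the usual Unsloth defaults, and the batch size / gradient accumulation are inferred (3.185 samples/s against 0.199 steps/s implies an effective batch of about 16), with the learning rate taken from the Unsloth template default:

```python
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer
from unsloth import FastLanguageModel

# Base model and sequence length as stated on this card
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Qwen2.5-7B-Instruct",
    max_seq_length=2048,
)

# LoRA configuration from this card; target_modules are the usual Unsloth defaults (assumption)
model = FastLanguageModel.get_peft_model(
    model,
    r=64,
    lora_alpha=128,
    lora_dropout=0,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    bias="none",
    use_gradient_checkpointing="unsloth",
)

# Hypothetical placeholder for the Socratic reasoning Q&A data; the card does not publish the dataset
dataset = load_dataset("json", data_files="socratic_qa.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,   # assumption; effective batch ~16 with accumulation
        gradient_accumulation_steps=8,
        max_steps=1875,                  # matches the reported global_step
        learning_rate=2e-4,              # assumption (Unsloth template default)
        output_dir="outputs",
    ),
)

trainer.train()  # returns a TrainOutput like the one shown above
```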