---
base_model:
- unsloth/Qwen2.5-7B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
license: apache-2.0
language:
- en
datasets:
- Bialy17/Reasoning-Socratic-QandA
---
# Uploaded finetuned model
- **Developed by:** Bialy17
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Qwen2.5-7B-Instruct
This Qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
---
## This is the same model as 'Bialy17/qwen-finetuned-Reasoning-Socratic-QandA-unsloth', but as a pure standalone model
---
# Fine-tuned on RunPod using an RTX 4090 and the Unsloth template (docker.io/unsloth/unsloth:latest)
# Trained with the following LoRA configuration

- r = 64
- lora_alpha = 128
- lora_dropout = 0
- max_seq_length = 2048
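As a quick sketch, the hyperparameters above can be collected into a plain dict; the `scaling` line shows the effective multiplier (alpha / r) that standard LoRA applies to the adapter update. The actual training presumably used Unsloth's `FastLanguageModel.get_peft_model` with these values; the target modules are not stated on this card.

```python
# LoRA hyperparameters as reported above (plain-dict sketch, not the
# exact training script, which is not included in this card).
lora_config = {
    "r": 64,              # LoRA rank
    "lora_alpha": 128,    # LoRA alpha
    "lora_dropout": 0.0,  # no dropout on the adapter layers
    "max_seq_length": 2048,
}

# Standard LoRA scales the adapter update by alpha / r, so this
# configuration applies a 2x multiplier to the low-rank delta.
scaling = lora_config["lora_alpha"] / lora_config["r"]
print(scaling)  # 2.0
```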
# Trained for 1875 steps
```
TrainOutput(global_step=1875, training_loss=0.6351530222256978, metrics={'train_runtime': 9419.7211, 'train_samples_per_second': 3.185, 'train_steps_per_second': 0.199, 'total_flos': 9.825842690769101e+17, 'train_loss': 0.6351530222256978, 'epoch': 1.9994666666666667})
```
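The reported metrics are internally consistent, and a little arithmetic recovers figures the card does not state directly. The per-device batch size and gradient-accumulation split are unknown, so only the effective batch size and an approximate dataset size can be inferred:

```python
# Sanity-check the TrainOutput metrics reported above.
global_step = 1875
train_runtime = 9419.7211          # seconds (~2.6 hours)
samples_per_second = 3.185
steps_per_second = 0.199
epochs = 1.9994666666666667

# Reported steps/s matches global_step / runtime.
assert round(global_step / train_runtime, 3) == steps_per_second

# Effective batch size = samples processed per optimizer step (inferred).
effective_batch = round(samples_per_second / steps_per_second)
print(effective_batch)  # 16

# Approximate dataset size: total samples seen divided by epochs (~15,000).
dataset_size = round(global_step * effective_batch / epochs)
print(dataset_size)
```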