Initialize project; model provided by the ModelHub XC community

Model: CorticalStack/gemma-7b-ultrachat-sft
Source: Original Platform
ModelHub XC
2026-04-16 14:56:04 +08:00
commit c403cebb05
14 changed files with 500 additions and 0 deletions


@@ -0,0 +1,28 @@
---
license: apache-2.0
---
# gemma-7b-ultrachat-sft
gemma-7b-ultrachat-sft is a supervised fine-tuned (SFT) version of [google/gemma-7b](https://huggingface.co/google/gemma-7b), trained on the [stingning/ultrachat](https://huggingface.co/datasets/stingning/ultrachat) dataset.
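The snippet below is a minimal inference sketch using Hugging Face `transformers`. The repo id comes from this card, but the dtype, device placement, prompt, and generation settings are illustrative assumptions rather than recommendations from the card.

```python
# Minimal inference sketch with Hugging Face transformers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "CorticalStack/gemma-7b-ultrachat-sft"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumption: bf16 on a recent GPU
    device_map="auto",
)

prompt = "Explain what supervised fine-tuning does in one paragraph."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```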
## Fine-tuning configuration
### LoRA
- r: 8
- LoRA alpha: 16
- LoRA dropout: 0.1
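Expressed as a `peft.LoraConfig`, these settings look roughly like the sketch below. The card does not list target modules, so the projection names used here are an assumption based on common Gemma LoRA setups.

```python
# The LoRA settings above expressed as a peft.LoraConfig.
from peft import LoraConfig

lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.1,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumption
)
```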
### Training arguments
- Epochs: 1
- Batch size: 4
- Gradient accumulation steps: 6
- Optimizer: paged_adamw_32bit
- Max steps: 100
- Learning rate: 0.0002
- Weight decay: 0.001
- Learning rate scheduler type: constant
- Max seq length: 2048
Trained with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
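For reference, here is a minimal sketch of how these hyperparameters might be wired into TRL's `SFTTrainer`. The card's actual run used Unsloth's loader; plain `transformers` loading is shown here for brevity. The `output_dir`, the dataset's `"data"` field, and the text formatting are assumptions, and `SFTTrainer` argument names vary across TRL versions.

```python
# Sketch of the listed training arguments wired into TRL's SFTTrainer.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import SFTTrainer

base_id = "google/gemma-7b"
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)

# assumption: ultrachat rows keep the dialogue turns as a list under "data";
# join them into a single plain-text field for SFT
dataset = load_dataset("stingning/ultrachat", split="train")
dataset = dataset.map(lambda ex: {"text": "\n".join(ex["data"])})

training_args = TrainingArguments(
    output_dir="gemma-7b-ultrachat-sft",  # assumption
    num_train_epochs=1,
    per_device_train_batch_size=4,
    gradient_accumulation_steps=6,
    optim="paged_adamw_32bit",
    max_steps=100,  # max_steps takes precedence over num_train_epochs
    learning_rate=2e-4,
    weight_decay=0.001,
    lr_scheduler_type="constant",
)

trainer = SFTTrainer(
    model=model,
    args=training_args,
    train_dataset=dataset,
    peft_config=lora_config,  # the LoraConfig sketched above
    dataset_text_field="text",
    max_seq_length=2048,
    tokenizer=tokenizer,
)
trainer.train()
```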
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)