Initialize project; model provided by the ModelHub XC community
Model: CorticalStack/gemma-7b-ultrachat-sft Source: Original Platform
---
license: apache-2.0
---

# gemma-7b-ultrachat-sft

gemma-7b-ultrachat-sft is an SFT fine-tuned version of [google/gemma-7b](https://huggingface.co/google/gemma-7b) using the [stingning/ultrachat](https://huggingface.co/datasets/stingning/ultrachat) dataset.
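
A minimal usage sketch with the Hugging Face `transformers` API is shown below. The repo id, dtype/device settings, and generation parameters are assumptions for illustration, not part of the original card; it also assumes the tokenizer ships a Gemma chat template.

```python
# Minimal usage sketch, assuming the model is published as
# CorticalStack/gemma-7b-ultrachat-sft and that the tokenizer
# defines a chat template; adjust dtype/device to your hardware.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "CorticalStack/gemma-7b-ultrachat-sft"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Explain LoRA fine-tuning in one paragraph."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```
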
## Fine-tuning configuration

### LoRA

- LoRA r: 8
- LoRA alpha: 16
- LoRA dropout: 0.1
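
Expressed with the `peft` library, these values correspond to a configuration like the sketch below; the target modules are an assumption (typical attention projections for Gemma) and are not stated in the card.

```python
# LoRA settings from the list above, as a peft LoraConfig sketch.
# target_modules is an assumption; the card does not specify it.
from peft import LoraConfig

lora_config = LoraConfig(
    r=8,              # LoRA rank
    lora_alpha=16,    # scaling factor
    lora_dropout=0.1,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)
```
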
### Training arguments

- Epochs: 1
- Batch size: 4
- Gradient accumulation steps: 6
- Optimizer: paged_adamw_32bit
- Max steps: 100
- Learning rate: 0.0002
- Weight decay: 0.001
- Learning rate scheduler type: constant
- Max seq length: 2048
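
As a sketch, assuming training went through `transformers.TrainingArguments` (for example via trl's `SFTTrainer`), the list above maps to roughly the following; the output directory is a placeholder and the batch size is taken as per-device.

```python
# Training arguments from the list above, as a transformers sketch.
# output_dir is a placeholder; max seq length (2048) would be passed
# to the SFT trainer (e.g. trl's SFTTrainer), not TrainingArguments.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="gemma-7b-ultrachat-sft",
    num_train_epochs=1,
    per_device_train_batch_size=4,   # per-device is an assumption
    gradient_accumulation_steps=6,
    optim="paged_adamw_32bit",       # requires bitsandbytes
    max_steps=100,
    learning_rate=2e-4,
    weight_decay=0.001,
    lr_scheduler_type="constant",
)
```
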