---
license: apache-2.0
tags:
- dpo
base_model:
- mistralai/Mistral-7B-v0.1
datasets:
- mlabonne/distilabel-truthy-dpo-v0.1
---

# mistral-7b-distilabel-truthy-dpo

mistral-7b-distilabel-truthy-dpo is a DPO fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) using the [mlabonne/distilabel-truthy-dpo-v0.1](https://huggingface.co/datasets/mlabonne/distilabel-truthy-dpo-v0.1) dataset.
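
### Usage

A minimal inference sketch with Transformers. The repo id below is an assumption (adjust it to wherever the weights are actually hosted), and the generation settings are illustrative only:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed repo id; not confirmed by the card.
model_id = "mlabonne/mistral-7b-distilabel-truthy-dpo"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Mistral-7B-v0.1 is a base model, so plain completion prompting applies.
prompt = "Is the Great Wall of China visible from space with the naked eye?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```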

### LoRA

- r: 16
- LoRA alpha: 16
- LoRA dropout: 0.05
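
These values drop straight into peft's `LoraConfig`. A minimal sketch; `target_modules` is an assumption, since the card does not list which projections were adapted:

```python
from peft import LoraConfig

peft_config = LoraConfig(
    r=16,              # rank of the low-rank update matrices
    lora_alpha=16,     # scaling factor; alpha / r = 1.0 here
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed, not stated on the card
)
```

With alpha equal to r, the LoRA update is applied at a scale of 1.0, a common default for DPO fine-tunes.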

### Training arguments

- Batch size: 4
- Gradient accumulation steps: 4
- Optimizer: paged_adamw_32bit
- Max steps: 100
- Learning rate: 5e-05
- Learning rate scheduler type: cosine
- Beta: 0.1
- Max prompt length: 1024
- Max length: 1536
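
The arguments above map onto TRL's `DPOTrainer`. The sketch below is a reconstruction rather than the author's script: model loading, padding setup, and the keyword layout (TRL 0.7-style, where `beta` and the length limits are passed directly to the trainer) are assumptions:

```python
import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

base_model = "mistralai/Mistral-7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token  # Mistral has no pad token by default

model = AutoModelForCausalLM.from_pretrained(base_model, torch_dtype=torch.bfloat16)

# Preference pairs with prompt / chosen / rejected columns.
dataset = load_dataset("mlabonne/distilabel-truthy-dpo-v0.1", split="train")

peft_config = LoraConfig(
    r=16,
    lora_alpha=16,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed
)

training_args = TrainingArguments(
    per_device_train_batch_size=4,
    gradient_accumulation_steps=4,   # effective batch size of 16
    optim="paged_adamw_32bit",       # requires bitsandbytes
    max_steps=100,
    learning_rate=5e-5,
    lr_scheduler_type="cosine",
    output_dir="./mistral-7b-distilabel-truthy-dpo",
)

trainer = DPOTrainer(
    model,                  # with a peft_config, the reference model is handled internally
    args=training_args,
    beta=0.1,               # strength of the implicit KL penalty in the DPO loss
    train_dataset=dataset,
    tokenizer=tokenizer,
    peft_config=peft_config,
    max_prompt_length=1024,
    max_length=1536,
)
trainer.train()
```

The paged optimizer offloads optimizer state under memory pressure, which helps since DPO runs forward passes for both the policy and the reference model.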