---
library_name: transformers
license: cc-by-nc-4.0
tags:
- merge
- automerger
---
# UltraMerge-7B
This model is an experimental DPO fine-tune of [automerger/YamShadow-7B](https://huggingface.co/automerger/YamShadow-7B) on the following datasets:
- mlabonne/truthy-dpo-v0.1
- mlabonne/distilabel-intel-orca-dpo-pairs
- mlabonne/chatml-OpenHermes2.5-dpo-binarized-alpha
- mlabonne/ultrafeedback-binarized-preferences-cleaned
I'm not sure which chat template works best for this model; it's probably Mistral-Instruct or ChatML.
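
If you want to try ChatML, the sketch below shows what a ChatML prompt looks like. The `to_chatml` helper is illustrative only (not part of this repository); in practice you would rely on the tokenizer's built-in `apply_chat_template` if the repo ships a chat template.

```python
def to_chatml(messages):
    """Format a list of {"role", "content"} dicts as a ChatML prompt string.

    Illustrative helper only: with transformers, prefer
    tokenizer.apply_chat_template(messages, add_generation_prompt=True)
    if the model repo defines a chat template.
    """
    prompt = ""
    for m in messages:
        prompt += f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n"
    # Open the assistant turn so the model generates the reply
    prompt += "<|im_start|>assistant\n"
    return prompt


messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
]
print(to_chatml(messages))
```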