
---
library_name: transformers
license: cc-by-nc-4.0
tags:
- merge
- automerger
---

# UltraMerge-7B

This model is an experimental DPO fine-tune of automerger/YamShadow-7B on the following datasets:

- mlabonne/truthy-dpo-v0.1
- mlabonne/distilabel-intel-orca-dpo-pairs
- mlabonne/chatml-OpenHermes2.5-dpo-binarized-alpha
- mlabonne/ultrafeedback-binarized-preferences-cleaned

I'm not sure which chat template works best for this model; Mistral-Instruct or ChatML are the most likely candidates.
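If you want to try ChatML, here is a minimal sketch of how that format wraps conversation turns. The template choice is an assumption (the card itself is unsure), and the helper below is illustrative, not an official API:

```python
# Sketch of the ChatML prompt format -- an assumption, since the best
# chat template for this model is not confirmed by the card.
def to_chatml(messages):
    """Render a list of {"role", "content"} dicts as a ChatML prompt."""
    parts = [
        f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>"
        for m in messages
    ]
    # Leave the assistant turn open so the model generates the reply.
    parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)

prompt = to_chatml([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is DPO fine-tuning?"},
])
print(prompt)
```

In practice, `tokenizer.apply_chat_template` from `transformers` handles this automatically if the tokenizer ships a chat template; the sketch above only shows what the rendered ChatML prompt looks like.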
