This is a model released with the preprint *SimPO: Simple Preference Optimization with a Reference-Free Reward*. Please refer to our repository for more details.

Description
Model synced from source: princeton-nlp/Llama-3-Base-8B-SFT-ORPO