This is a model released with the preprint *SimPO: Simple Preference Optimization with a Reference-Free Reward*. Please refer to our repository for more details.

Description
Model synced from source: princeton-nlp/Llama-3-Instruct-8B-KTO