Model: princeton-nlp/Llama-3-Base-8B-SFT-ORPO
Source: Original Platform
This model was released with the preprint *SimPO: Simple Preference Optimization with a Reference-Free Reward*. Please refer to our repository for more details.
Description