Model: princeton-nlp/Llama-3-Base-8B-SFT-RRHF Source: Original Platform
This model was released with the preprint SimPO: Simple Preference Optimization with a Reference-Free Reward. Please refer to our repository for more details.
Description