Model: princeton-nlp/Llama-3-Instruct-8B-RRHF-v0.2
Source: Original Platform
Description
This model was released with the preprint SimPO: Simple Preference Optimization with a Reference-Free Reward. Please refer to our repository for more details.