Model: princeton-nlp/Llama-3-Instruct-8B-RDPO-v0.2

This model was released with the preprint *SimPO: Simple Preference Optimization with a Reference-Free Reward*. Please refer to our repository for more details.