Model: princeton-nlp/Llama-3-Base-8B-SFT-ORPO
2026-04-30 22:10:08 +08:00

This is a model released from the preprint *[SimPO: Simple Preference Optimization with a Reference-Free Reward](https://arxiv.org/abs/2405.14734)*. Please refer to our [repository](https://github.com/princeton-nlp/SimPO) for more details.