Initialize project; model provided by the ModelHub XC community

Model: princeton-nlp/Llama-3-Base-8B-SFT-ORPO
Source: Original Platform
This commit is contained in:
ModelHub XC
2026-04-30 22:10:08 +08:00
commit 0ac11a8e02
13 changed files with 2484 additions and 0 deletions

README.md Normal file

@@ -0,0 +1 @@
This is a model released from the preprint *[SimPO: Simple Preference Optimization with a Reference-Free Reward](https://arxiv.org/abs/2405.14734)*. Please refer to our [repository](https://github.com/princeton-nlp/SimPO) for more details.