Initialize the project; model provided by the ModelHub XC community
Model: yunconglong/13B_MATH_DPO
Source: Original Platform
README.md (new file, 13 additions)
@@ -0,0 +1,13 @@
---
license: other
tags:
- moe
- DPO
- RL-TUNED
---
* Fine-tuned with the [DPO Trainer](https://huggingface.co/docs/trl/main/en/dpo_trainer) on the kyujinpy/orca_math_dpo dataset to improve [yunconglong/MoE_13B_DPO]; a minimal training sketch follows the excerpt below.
```
DPO Trainer
TRL supports the DPO Trainer for training language models from preference data, as described in the paper Direct Preference Optimization: Your Language Model is Secretly a Reward Model by Rafailov et al., 2023.
```
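
A minimal sketch of how such a DPO run can be wired up with TRL's `DPOTrainer`. The hyperparameters, the assumption that kyujinpy/orca_math_dpo already exposes prompt/chosen/rejected columns, and the `processing_class` argument name (older TRL releases take `tokenizer` instead) are illustrative assumptions, not the exact recipe behind this checkpoint.

```python
# Illustrative DPO fine-tuning sketch with TRL; all settings are assumptions,
# not the exact configuration used to produce 13B_MATH_DPO.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base_model = "yunconglong/MoE_13B_DPO"  # base model named in this README
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model)

# Preference data referenced above; DPOTrainer expects prompt/chosen/rejected
# columns, so remap column names first if the dataset uses a different schema.
train_dataset = load_dataset("kyujinpy/orca_math_dpo", split="train")

config = DPOConfig(
    output_dir="13B_MATH_DPO",
    beta=0.1,                        # DPO temperature; illustrative value
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    learning_rate=5e-7,
    num_train_epochs=1,
)

trainer = DPOTrainer(
    model=model,
    args=config,
    train_dataset=train_dataset,
    processing_class=tokenizer,      # older TRL releases use tokenizer= instead
)
trainer.train()
```

No explicit reference model is passed, so the trainer clones a frozen copy of the policy model internally to score the preference pairs.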