---
library_name: transformers
license: mit
datasets:
- HuggingFaceH4/ultrafeedback_binarized
language:
- en
---
# Zephyr-7B-DICE-Iter2

The model files and weights are available on the "Model Files" page. You can download the model with the ModelScope SDK or with the `git clone` command shown below.

SDK download
```bash
# Install ModelScope
pip install modelscope
```
```python
# Download the model with the ModelScope SDK
from modelscope import snapshot_download
model_dir = snapshot_download('sail/Zephyr-7B-DICE-Iter2')
```
Git download
```bash
# Download the model repository via git
git clone https://www.modelscope.cn/sail/Zephyr-7B-DICE-Iter2.git
```
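If the cloned checkout contains only small pointer files instead of the weights, Git LFS is missing. Large weight files on ModelScope are typically tracked with LFS (an assumption about this particular repo, but standard practice), so run this one-time setup before cloning:

```bash
# One-time setup so git fetches LFS-tracked weight files during clone
git lfs install
```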
This model was developed using [Bootstrapping Language Models with DPO Implicit Rewards](https://arxiv.org/abs/2406.09760) (DICE) at iteration 2, with [HuggingFaceH4/zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) as the starting point.
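For background, DICE bootstraps preference data from the reward that DPO training already defines implicitly: after a DPO round, the policy itself scores candidate responses, and those scores label preference pairs for the next round. Up to a term that depends only on the prompt x, that implicit reward is the scaled log-ratio between the trained policy and its reference (notation as in the DPO paper; shown here for context, not taken from this card):

```latex
% DPO's implicit reward: beta-scaled log-ratio of the trained policy
% pi_theta to the reference policy pi_ref. The prompt-only partition
% term cancels when comparing two responses to the same prompt.
r_\theta(x, y) = \beta \log \frac{\pi_\theta(y \mid x)}{\pi_{\mathrm{ref}}(y \mid x)}
```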
## Links to Other Models
- [Zephyr-7B-DICE-Iter1](https://huggingface.co/sail/Zephyr-7B-DICE-Iter1)
- [Zephyr-7B-DICE-Iter2](https://huggingface.co/sail/Zephyr-7B-DICE-Iter2)
## Model Description
- Model type: A 7B parameter GPT-like model fine-tuned on synthetic datasets.
- Language(s) (NLP): Primarily English
- License: MIT
- Fine-tuned from model: HuggingFaceH4/zephyr-7b-beta
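Given `library_name: transformers` in the front matter, a standard chat-style generation call should work. The following is a minimal, unofficial sketch: it assumes the tokenizer ships a chat template (the zephyr-7b-beta base does) and a CUDA-capable GPU; the `model` argument also accepts the local `model_dir` returned by `snapshot_download` above.

```python
import torch
from transformers import pipeline

# The hub id or the local directory returned by snapshot_download both work here.
pipe = pipeline(
    "text-generation",
    model="sail/Zephyr-7B-DICE-Iter2",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [{"role": "user", "content": "Explain DPO in two sentences."}]
# Render the conversation with the tokenizer's chat template before generating.
prompt = pipe.tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
out = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7)
print(out[0]["generated_text"])
```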
## [AlpacaEval Leaderboard Evaluation Results](https://tatsu-lab.github.io/alpaca_eval/)
| Model | LC Win Rate (%) | Win Rate (%) |
|---------------------------------------------------------------------------|:---------------:|:------------:|
| [Zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta)     | 12.69           | 10.71        |
| [Zephyr-7B-DICE-Iter1](https://huggingface.co/sail/Zephyr-7B-DICE-Iter1)  | 19.03           | 17.67        |
| [Zephyr-7B-DICE-Iter2](https://huggingface.co/sail/Zephyr-7B-DICE-Iter2)  | **20.71**       | **20.16**    |

Both columns are win rates (%) against the leaderboard's reference outputs; "LC" is the length-controlled variant, which corrects for the advantage longer responses tend to have.
## Citation
```bibtex
@article{chen2024bootstrapping,
  title={Bootstrapping Language Models with DPO Implicit Rewards},
  author={Chen, Changyu and Liu, Zichen and Du, Chao and Pang, Tianyu and Liu, Qian and Sinha, Arunesh and Varakantham, Pradeep and Lin, Min},
  journal={arXiv preprint arXiv:2406.09760},
  year={2024}
}
```