Initialize the project; model provided by the ModelHub XC community
Model: LLM-Research/tulu-v2.5-ppo-13b-uf-mean-70b-mix-rm Source: Original Platform
---
model-index:
- name: tulu-v2.5-ppo-13b-uf-mean-70b-mix-rm
  results: []
datasets:
- allenai/tulu-2.5-preference-data
- allenai/tulu-v2-sft-mixture
language:
- en
base_model: allenai/tulu-2-13b
license: apache-2.0
---
<center>
<img src="https://huggingface.co/datasets/allenai/blog-images/resolve/main/tulu-2.5/tulu_25_banner.png" alt="Tulu 2.5 banner image" width="800px"/>
</center>

# Model Card for Tulu V2.5 PPO 13B - UltraFeedback Mean w. 70B mixture RM

Tulu is a series of language models that are trained to act as helpful assistants.
Tulu V2.5 is a series of models trained using DPO and PPO starting from the [Tulu 2 suite](https://huggingface.co/collections/allenai/tulu-v2-suite-6551b56e743e6349aab45101).
This model is trained using PPO.
We used a 70B RM trained on our preference data mix, and then used the UltraFeedback prompts during PPO training.

For more details, read the paper:
[Unpacking DPO and PPO: Disentangling Best Practices for Learning from Preference Feedback](https://arxiv.org/abs/2406.09279).

## Model description

- **Model type:** One model belonging to a suite of RLHF-tuned chat models trained on a mix of publicly available, synthetic, and human-created datasets.
- **Language(s) (NLP):** English
- **License:** Apache 2.0.
- **Finetuned from model:** [meta-llama/Llama-2-13b-hf](https://huggingface.co/meta-llama/Llama-2-13b-hf)

### Model Sources

- **Repository:** https://github.com/allenai/open-instruct
- **Dataset:** Data used to train this model can be found [here](https://huggingface.co/datasets/allenai/tulu-2.5-preference-data) - specifically the `ultrafeedback_mean_aspects` split. Only the prompts were used.
- **Model Family:** The collection of related models can be found [here](https://huggingface.co/collections/allenai/tulu-v25-suite-66676520fd578080e126f618).
- **Reward Model:** The reward model used during PPO training can be found [here](https://huggingface.co/allenai/tulu-v2.5-70b-preference-mix-rm), and the data used to train it [here](https://huggingface.co/datasets/allenai/tulu-2.5-preference-data) - specifically the `preference_big_mixture` split.
- **Value Model:** The value model produced during PPO training can be found [here](https://huggingface.co/allenai/tulu-v2.5-ppo-13b-uf-mean-70b-mix-rm-value).

## Input Format

The model is trained to use the following format (note the newlines):
```
<|user|>
Your message here!
<|assistant|>
```

For best results, format all inputs in this manner. **Make sure to include a newline after `<|assistant|>`; it can affect generation quality quite a bit.**
We have included a [chat template](https://huggingface.co/docs/transformers/main/en/chat_templating) in the tokenizer implementing this template.
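
If you use the Hugging Face `transformers` library, the bundled chat template applies this format for you. The snippet below is a minimal, untested sketch: the repository id `allenai/tulu-v2.5-ppo-13b-uf-mean-70b-mix-rm`, the dtype, and the generation settings are assumptions rather than recommendations from the original card.

```python
# Minimal sketch of chatting with the model via transformers (assumed repo id and settings).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "allenai/tulu-v2.5-ppo-13b-uf-mean-70b-mix-rm"  # assumed HF repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "Your message here!"}]
# The chat template emits "<|user|>\n...\n<|assistant|>\n", including the trailing newline.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```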

## Intended uses & limitations

The model was initially fine-tuned on a filtered and preprocessed version of the [Tulu V2 mix dataset](https://huggingface.co/datasets/allenai/tulu-v2-sft-mixture), which contains a diverse range of human-created instructions and synthetic dialogues generated primarily by other LLMs.
We then further aligned the model with a [Jax DPO trainer](https://github.com/hamishivi/EasyLM/blob/main/EasyLM/models/llama/llama_train_dpo.py) built on [EasyLM](https://github.com/young-geng/EasyLM) on the dataset mentioned above.
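
For reference, the objective such a DPO trainer optimizes has a simple closed form. The PyTorch sketch below is the textbook formulation from the DPO paper, not the exact EasyLM implementation; the function name and the `beta` default are illustrative placeholders.

```python
# Textbook DPO loss (sketch); names and beta are illustrative, not the EasyLM implementation.
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """Per-batch DPO loss given summed log-probs of chosen/rejected responses."""
    # Log-ratio of policy vs. reference for each response in the preference pair.
    chosen_logratio = policy_chosen_logps - ref_chosen_logps
    rejected_logratio = policy_rejected_logps - ref_rejected_logps
    # Push the chosen response above the rejected one, with the margin scaled by beta.
    return -F.logsigmoid(beta * (chosen_logratio - rejected_logratio)).mean()
```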

## Bias, Risks, and Limitations

The Tulu models have not been aligned to generate safe completions within the RLHF phase or deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so).
The size and composition of the corpus used to train the base Llama 2 models are also unknown; however, it likely included a mix of web data and technical sources such as books and code. See the [Falcon 180B model card](https://huggingface.co/tiiuae/falcon-180B#training-data) for an example of this.

### Training hyperparameters

The following hyperparameters were used during PPO training (the sketch after this list shows how the KL penalty coefficient enters the PPO reward):
- learning_rate: 1e-06
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1.0
- KL penalty coefficient: 0.0325 (we found larger RMs benefited from a lower KL penalty coefficient)
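
As context for the KL penalty coefficient, the sketch below shows the standard RLHF reward shaping in which such a coefficient appears: the policy-vs-reference log-probability gap is penalized at every token and the reward-model score is added on the final token. This is the common InstructGPT-style formulation, offered as an assumption about how the coefficient is applied rather than the exact open-instruct code.

```python
# Standard KL-shaped PPO reward (sketch); not the exact open-instruct implementation.
import torch

KL_COEF = 0.0325  # coefficient reported above

def shaped_rewards(policy_logprobs: torch.Tensor,   # (seq_len,) log-probs of sampled tokens under the policy
                   ref_logprobs: torch.Tensor,      # (seq_len,) log-probs under the frozen reference model
                   rm_score: float) -> torch.Tensor:
    """Per-token rewards fed to PPO: a KL penalty at every token, RM score on the last token."""
    rewards = -KL_COEF * (policy_logprobs - ref_logprobs)  # penalize drift from the reference model
    rewards[-1] = rewards[-1] + rm_score                   # reward model score applied at sequence end
    return rewards
```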

## Citation

If you find Tulu 2.5 useful in your work, please cite it with:

```
@misc{ivison2024unpacking,
      title={{Unpacking DPO and PPO: Disentangling Best Practices for Learning from Preference Feedback}},
      author={Hamish Ivison and Yizhong Wang and Jiacheng Liu and Ellen Wu and Valentina Pyatkin and Nathan Lambert and Yejin Choi and Noah A. Smith and Hannaneh Hajishirzi},
      year={2024},
      eprint={2406.09279},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```