Initialize project; model provided by the ModelHub XC community

Model: VladShash/mistral-7B-lean-prover-dpo-deepseek
Source: Original Platform
This commit is contained in:
ModelHub XC
2026-05-03 21:04:45 +08:00
commit 1701405462
13 changed files with 282398 additions and 0 deletions

README.md

@@ -0,0 +1,69 @@
---
base_model: formalmathatepfl/mistral-7B-v0.3-finetuned
library_name: transformers
model_name: mistral-7B-lean-prover-dpo-deepseek
tags:
- generated_from_trainer
- dpo
- trl
license: license
---
# Model Card for mistral-7B-lean-prover-dpo-deepseek
This model is a fine-tuned version of [formalmathatepfl/mistral-7B-v0.3-finetuned](https://huggingface.co/formalmathatepfl/mistral-7B-v0.3-finetuned).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"

# Build a chat-style text-generation pipeline on GPU.
generator = pipeline("text-generation", model="VladShash/mistral-7B-lean-prover-dpo-deepseek", device="cuda")

# Pass the prompt in chat format and keep only the newly generated text.
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
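If you prefer to manage tokenization and generation yourself rather than using the `pipeline` helper, the lower-level calls look roughly like the sketch below. The dtype, device placement, prompt, and decoding settings are illustrative assumptions, not values taken from this card.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "VladShash/mistral-7B-lean-prover-dpo-deepseek"

# Load tokenizer and model; bfloat16 and device_map="auto" are assumptions for a single-GPU setup.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

# Format the prompt with the model's chat template (assumes one was saved with the tokenizer).
messages = [{"role": "user", "content": "Prove that addition on the natural numbers is commutative."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)

# Greedy decoding with a modest token budget; increase max_new_tokens for longer outputs.
outputs = model.generate(inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```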
## Training procedure
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
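For reference, a DPO run with TRL typically follows the minimal sketch below: a preference dataset with `prompt`, `chosen`, and `rejected` columns is fed to `DPOTrainer` together with the base policy model. The dataset name and hyperparameters here are placeholders, not the settings used to train this checkpoint.

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

# Base policy model and tokenizer (this card's base is formalmathatepfl/mistral-7B-v0.3-finetuned).
model = AutoModelForCausalLM.from_pretrained("formalmathatepfl/mistral-7B-v0.3-finetuned")
tokenizer = AutoTokenizer.from_pretrained("formalmathatepfl/mistral-7B-v0.3-finetuned")

# Preference data with "prompt", "chosen", and "rejected" columns; the dataset name is a placeholder.
train_dataset = load_dataset("your-org/lean-proof-preferences", split="train")

# beta controls how far the policy may drift from the reference model; 0.1 is the TRL default.
training_args = DPOConfig(output_dir="mistral-7B-lean-prover-dpo-deepseek", beta=0.1)

trainer = DPOTrainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    processing_class=tokenizer,
)
trainer.train()
```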
### Framework versions
- TRL: 1.2.0
- Transformers: 4.57.0
- Pytorch: 2.10.0+default
- Datasets: 4.8.4
- Tokenizers: 0.22.2
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@software{vonwerra2020trl,
title = {{TRL: Transformers Reinforcement Learning}},
author = {von Werra, Leandro and Belkada, Younes and Tunstall, Lewis and Beeching, Edward and Thrush, Tristan and Lambert, Nathan and Huang, Shengyi and Rasul, Kashif and Gallouédec, Quentin},
license = {Apache-2.0},
url = {https://github.com/huggingface/trl},
year = {2020}
}
```