---
license: apache-2.0
library_name: transformers
pipeline_tag: text-generation
---

# Hybrid Policy Distillation: Qwen2.5-1.5B Student
This repository contains a **Qwen2.5-1.5B student model** distilled from **Qwen2.5-7B-Thinking** using **Hybrid Policy Distillation (HPD)**, as presented in the paper [Hybrid Policy Distillation for LLMs](https://huggingface.co/papers/2604.20244).
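
A minimal loading sketch is shown below. It assumes the standard `transformers` text-generation pipeline; the Hub id `wh-zhu/qwen2.5-1.5B-longcot-reasoning-HPD` is an assumption and should be replaced if this model is published under a different name.

```python
# Hedged sketch: load the distilled student with the Transformers
# text-generation pipeline. The model id below is assumed; adjust it
# if this repository lives under another name.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="wh-zhu/qwen2.5-1.5B-longcot-reasoning-HPD",
    torch_dtype="auto",   # pick bf16/fp16 automatically when available
    device_map="auto",    # place the model on GPU if one is present
)

prompt = "Solve step by step: what is the sum of the first 100 positive integers?"
result = generator(prompt, max_new_tokens=512, do_sample=False)
print(result[0]["generated_text"])
```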
## Overview
Knowledge distillation (KD) is a powerful paradigm for compressing large language models (LLMs). Hybrid Policy Distillation (HPD) is a framework designed to make policy distillation more stable and efficient for reasoning-oriented models. It combines the complementary strengths of forward and reverse KL to balance mode-covering and mode-seeking behavior, and it mixes off-policy data with lightweight, approximate on-policy sampling; a minimal sketch of such a hybrid objective is given after the links below.

- **Paper:** [Hybrid Policy Distillation for LLMs](https://huggingface.co/papers/2604.20244)
- **Repository:** [zwhong714/Hybrid-Policy-Distillation](https://github.com/zwhong714/Hybrid-Policy-Distillation)
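
As an illustration of the idea (not the paper's exact objective), the sketch below mixes a forward KL term (teacher → student, mode-covering) with a reverse KL term (student → teacher, mode-seeking) over the token distributions; the same loss could be applied to both teacher-generated (off-policy) and student-sampled (approximately on-policy) completions. The function name, `alpha` weight, and tensor shapes are assumptions.

```python
import torch.nn.functional as F

def hybrid_kl_loss(student_logits, teacher_logits, mask, alpha=0.5):
    """Illustrative hybrid forward/reverse KL distillation loss (assumed form).

    student_logits, teacher_logits: (batch, seq_len, vocab) token logits
    mask: (batch, seq_len), 1 for tokens that should contribute to the loss
    alpha: interpolation weight between the two KL directions (assumption)
    """
    student_logp = F.log_softmax(student_logits, dim=-1)
    teacher_logp = F.log_softmax(teacher_logits, dim=-1)
    teacher_p = teacher_logp.exp()
    student_p = student_logp.exp()

    # Forward KL(teacher || student): mode-covering; penalizes the student
    # for assigning too little probability where the teacher puts mass.
    forward_kl = (teacher_p * (teacher_logp - student_logp)).sum(dim=-1)

    # Reverse KL(student || teacher): mode-seeking; penalizes the student
    # for placing probability where the teacher does not.
    reverse_kl = (student_p * (student_logp - teacher_logp)).sum(dim=-1)

    per_token = alpha * forward_kl + (1.0 - alpha) * reverse_kl
    mask = mask.to(per_token.dtype)
    return (per_token * mask).sum() / mask.sum().clamp(min=1.0)
```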
## Benchmark Performance
The following table shows the performance of the distilled student model compared to the teacher model across various reasoning benchmarks:

| Model | AIME24 | AIME25 | AMC | MATH | OlympiadMath | GPQA |
| --- | ---: | ---: | ---: | ---: | ---: | ---: |
| Qwen2.5-7B-Thinking (Teacher) | 28.13 | 27.19 | 71.72 | 87.48 | 58.50 | 43.43 |
| **Qwen2.5-1.5B-Thinking (Student)** | **7.71** | **9.89** | **39.84** | **63.40** | **32.53** | **28.09** |
## Citation
If you find this model or the HPD framework useful in your research, please cite the following work:
```bibtex
@article{hong2024hybrid,
  title={Hybrid Policy Distillation for LLMs},
  author={Hong, Zhang-Wei and others},
  journal={arXiv preprint arXiv:2604.20244},
  year={2024}
}
```