---
license: apache-2.0
library_name: transformers
pipeline_tag: text-generation
---

# Hybrid Policy Distillation: Qwen2.5-1.5B Student

This repository (`wh-zhu/qwen2.5-1.5B-longcot-reasoning-HPD`) contains a Qwen2.5-1.5B student model distilled from Qwen2.5-7B-Thinking using Hybrid Policy Distillation (HPD), as presented in the paper *Hybrid Policy Distillation for LLMs*.

## Overview

Knowledge distillation (KD) is a powerful paradigm for compressing large language models (LLMs). Hybrid Policy Distillation (HPD) is a framework designed to make policy distillation more stable and efficient for reasoning-oriented models. It integrates the complementary advantages of forward and reverse KL divergence to balance mode-covering and mode-seeking behavior, and combines off-policy data with lightweight, approximate on-policy sampling.
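As a rough illustration of the forward/reverse KL trade-off described above, here is a minimal sketch of a hybrid objective over discrete token distributions. The function names and the mixing weight `alpha` are hypothetical illustrations, not the paper's actual loss or hyperparameters:

```python
import math

def kl(p, q):
    """KL(p || q) for discrete distributions given as probability lists."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def hybrid_kl(teacher, student, alpha=0.5):
    """Convex combination of forward KL (mode-covering: student must put
    mass wherever the teacher does) and reverse KL (mode-seeking: student
    concentrates on the teacher's dominant modes)."""
    forward = kl(teacher, student)   # KL(teacher || student)
    reverse = kl(student, teacher)   # KL(student || teacher)
    return alpha * forward + (1 - alpha) * reverse

# Toy next-token distributions over a 3-symbol vocabulary.
teacher = [0.7, 0.2, 0.1]
student = [0.5, 0.3, 0.2]
print(hybrid_kl(teacher, student))
```

Setting `alpha=1.0` recovers pure forward KL (standard distillation); `alpha=0.0` recovers pure reverse KL (policy-style distillation); intermediate values blend the two behaviors.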

## Benchmark Performance

The following table shows the performance of the distilled student model compared to the teacher model across various reasoning benchmarks:

| Model | AIME24 | AIME25 | AMC | MATH | OlympiadMath | GPQA |
|---|---|---|---|---|---|---|
| Qwen2.5-7B-Thinking (Teacher) | 28.13 | 27.19 | 71.72 | 87.48 | 58.50 | 43.43 |
| Qwen2.5-1.5B-Thinking (Student) | 7.71 | 9.89 | 39.84 | 63.40 | 32.53 | 28.09 |

## Citation

If you find this model or the HPD framework useful in your research, please cite the following work:

```bibtex
@article{hong2024hybrid,
  title={Hybrid Policy Distillation for LLMs},
  author={Hong, Zhang-Wei and others},
  journal={arXiv preprint arXiv:2604.20244},
  year={2024}
}
```