Update README.md

ai-modelscope
2025-04-22 08:58:53 +08:00
parent 259693af7b
commit 102c50e4d6
6 changed files with 34 additions and 58 deletions


@@ -4,52 +4,53 @@ license: apache-2.0
## 📖 Introduction

# DistilQwen2.5-DS3-0324 Series: Fast-Thinking Reasoning Models

## Overview

In response to the industry challenge of balancing efficient reasoning with cognitive capability, the DistilQwen2.5-DS3-0324 series transfers the fast-thinking ability of DeepSeekV3-0324 into lightweight models. Through a two-stage distillation framework, the series maintains high performance while delivering:

- **Enhanced Reasoning Speed**: reduces output tokens by 60-80% compared to slow-thinking models
- **Reduced Resource Consumption**: suitable for edge-computing deployment
- **Elimination of Cognitive Bias**: via a purpose-built trajectory alignment technique

## Core Innovations

### 1. Fast-Thinking Distillation Framework

- **Stage 1: Fast-Thinking CoT Data Collection**
  - **Long-to-Short Rewriting**: extracts the key reasoning steps from DeepSeek-R1
  - **Teacher Model Distillation**: captures the rapid reasoning trajectories of DeepSeekV3-0324
- **Stage 2: CoT Trajectory Cognitive Alignment**
  - **Dynamic Difficulty Grading** (Easy/Medium/Hard): LLM-as-a-Judge rates how comprehensible each chain is to the small model
    - Easy chains are expanded with the necessary intermediate steps
    - Hard chains are simplified by removing high-level logical leaps
  - **Validation Mechanism**: rewriting iterates until every sample reaches a "Medium" rating (see the sketch after this list)
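
To make the Stage 2 loop concrete, here is a minimal sketch of how the grading and rewriting could fit together. All names (`judge_difficulty`, `expand_steps`, `simplify_steps`) are hypothetical stand-ins for LLM-backed components, not the released pipeline:

```python
# Hypothetical sketch of the Stage 2 CoT trajectory cognitive alignment loop.
# judge_difficulty, expand_steps, and simplify_steps are assumed callables
# backed by LLMs; they are illustrations, not part of this repository.

def align_trajectory(question, cot, judge_difficulty, expand_steps,
                     simplify_steps, max_rounds=5):
    """Iteratively rewrite a CoT until the judge rates it 'Medium'."""
    for _ in range(max_rounds):
        rating = judge_difficulty(question, cot)  # 'Easy' | 'Medium' | 'Hard'
        if rating == "Medium":
            return cot  # comprehensible to the student model, not trivial
        if rating == "Easy":
            cot = expand_steps(question, cot)     # add missing intermediate steps
        else:
            cot = simplify_steps(question, cot)   # remove high-level logical leaps
    return cot  # fall back to the last rewrite if no 'Medium' rating is reached

# Usage (with concrete judge/rewriter implementations):
# aligned = [align_trajectory(q, c, judge, expand, simplify) for q, c in dataset]
```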
### 2. Performance Breakthroughs

- The **32B model** approaches the performance of closed-source models with 10x the parameters on the GPQA Diamond benchmark
- **Significantly improved reasoning efficiency** (see the comparison table below)

| Model | MMLU_PRO Tokens | AIME2024 Tokens | Speed Gain |
|--------------------------------------|------|-------|------|
| DistilQwen2.5-R1-32B (Slow-Thinking) | 4198 | 12178 | 1x   |
| DistilQwen2.5-DS3-0324-32B           | 690  | 4177  | 5-8x |

## Technical Advantages

- **Two-Stage Distillation**: first compresses reasoning length, then aligns cognitive trajectories
- **Dynamic Data Optimization**: adaptive difficulty adjustment ensures the knowledge is transferable
- **Open-Source Compatibility**: fine-tuned from the Qwen2.5 base models
## 🚀 Quick Start

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"  # the device to load the model onto

model = AutoModelForCausalLM.from_pretrained(
    "alibaba-pai/DistilQwen2.5-DS3-0324-7B",
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("alibaba-pai/DistilQwen2.5-DS3-0324-7B")

prompt = "Give me a short introduction to large language models."
messages = [
@@ -72,4 +73,5 @@ generated_ids = [
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
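
The hunk above elides the middle of the example. For orientation, here is a minimal sketch of the standard Qwen2.5 chat-template flow that typically fills that gap, reusing `prompt`, `tokenizer`, `model`, and `device` from the block above; the system prompt and `max_new_tokens` value are illustrative assumptions, not taken from the original README:

```python
# Assumed continuation of the Quick Start (standard Qwen2.5 chat flow)
messages = [
    {"role": "system", "content": "You are a helpful assistant."},  # assumed
    {"role": "user", "content": prompt},
]

# Render the chat messages into a single prompt string
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)
model_inputs = tokenizer([text], return_tensors="pt").to(device)

generated_ids = model.generate(model_inputs.input_ids, max_new_tokens=512)  # budget assumed
# Strip the prompt tokens so only the completion is decoded
generated_ids = [
    output_ids[len(input_ids):]
    for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```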


@@ -1,14 +0,0 @@
{
"bos_token_id": 151643,
"do_sample": true,
"eos_token_id": [
151645,
151643
],
"pad_token_id": 151643,
"repetition_penalty": 1.05,
"temperature": 0.7,
"top_k": 20,
"top_p": 0.8,
"transformers_version": "4.46.1"
}
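
The file above is the model's generation config (evidently `generation_config.json`), removed in full by this commit. To keep its sampling defaults after the removal, they can be passed explicitly at generation time; a minimal sketch using transformers' `GenerationConfig`, reusing `model` and `model_inputs` from the Quick Start, with `max_new_tokens` as an assumption:

```python
from transformers import GenerationConfig

# Sampling defaults copied from the removed generation config
gen_config = GenerationConfig(
    do_sample=True,
    temperature=0.7,
    top_k=20,
    top_p=0.8,
    repetition_penalty=1.05,
    bos_token_id=151643,
    eos_token_id=[151645, 151643],
    pad_token_id=151643,
)

# `model` and `model_inputs` as in the Quick Start; token budget is illustrative
generated_ids = model.generate(
    model_inputs.input_ids,
    generation_config=gen_config,
    max_new_tokens=512,
)
```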


@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:2936da2eb6666f60def42e354b4e36a9bad19ebdf5090cbc02c0834f92c27df5
size 4877660776


@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:11aab5fc3313e10f1aaa9438a2d74cfe5c59538dd8e6506ec216926887cc0820
size 4932751008


@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:2053eee73faf67773f7e8e042b2f3e9bf599edb6bca4e77a2cd0d83d375852de
size 4330865200


@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:30be9d72da5b630c956bcd0fa3725246d1c5891df7f9d0bec00f65ea7145114a
size 1089994880