[Feat] Add reasoning parser for Qwen/Qwen3-235B-A22B-Thinking-2507 (#8363)
@@ -97,14 +97,23 @@
"\n",
"#### Enabling Model Thinking/Reasoning\n",
"\n",
"You can use `chat_template_kwargs` to enable or disable the model's internal thinking or reasoning process output. Set `\"enable_thinking\": True` within `chat_template_kwargs` to include the reasoning steps in the response. This requires launching the server with a compatible reasoning parser (e.g., `--reasoning-parser qwen3` for Qwen3 models).\n",
"You can use `chat_template_kwargs` to enable or disable the model's internal thinking or reasoning process output. Set `\"enable_thinking\": True` within `chat_template_kwargs` to include the reasoning steps in the response. This requires launching the server with a compatible reasoning parser.\n",
"\n",
"**Reasoning Parser Options:**\n",
"- `--reasoning-parser deepseek-r1`: For DeepSeek-R1 family models (R1, R1-0528, R1-Distill)\n",
"- `--reasoning-parser qwen3`: For standard Qwen3 models that support `enable_thinking` parameter\n",
"- `--reasoning-parser qwen3-thinking`: For Qwen3-Thinking models (e.g., Qwen/Qwen3-235B-A22B-Thinking-2507) that always generate thinking content\n",
"- `--reasoning-parser kimi`: For Kimi thinking models\n",
"\n",
"Here's an example demonstrating how to enable thinking and retrieve the reasoning content separately (using `separate_reasoning: True`):\n",
"\n",
"```python\n",
"# Ensure the server is launched with a compatible reasoning parser, e.g.:\n",
"# For standard Qwen3 models with enable_thinking support:\n",
"# python3 -m sglang.launch_server --model-path QwQ/Qwen3-32B-250415 --reasoning-parser qwen3 ...\n",
"\n",
"# For Qwen3-Thinking models that always think:\n",
"# python3 -m sglang.launch_server --model-path Qwen/Qwen3-235B-A22B-Thinking-2507 --reasoning-parser qwen3-thinking ...\n",
"\n",
"from openai import OpenAI\n",
"\n",
"# Modify OpenAI's API key and API base to use SGLang's API server.\n",
@@ -123,7 +132,7 @@
"    model=model,\n",
"    messages=messages,\n",
"    extra_body={\n",
"        \"chat_template_kwargs\": {\"enable_thinking\": True},\n",
"        \"chat_template_kwargs\": {\"enable_thinking\": True},  # Only for standard Qwen3 models\n",
"        \"separate_reasoning\": True\n",
"    }\n",
")\n",
@@ -149,6 +158,8 @@
"\n",
"Setting `\"enable_thinking\": False` (or omitting it) will result in `reasoning_content` being `None`.\n",
"\n",
"**Note for Qwen3-Thinking models:** These models always generate thinking content and do not support the `enable_thinking` parameter. When using `--reasoning-parser qwen3-thinking`, the model will always produce reasoning content regardless of the `enable_thinking` setting.\n",
"\n",
"Here is an example of a detailed chat completion request using standard OpenAI parameters:"
]
},
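For reference, the request body assembled in the hunk above boils down to a small dict. The sketch below shows just the SGLang-specific fields; the model name is a placeholder and `build_reasoning_request` is a hypothetical helper, not part of SGLang's API:

```python
# Minimal sketch of the chat-completion request body with SGLang's
# reasoning-related extra_body fields (hypothetical helper for illustration).

def build_reasoning_request(model: str, messages: list, enable_thinking: bool = True) -> dict:
    """Assemble a request body for an SGLang OpenAI-compatible server."""
    return {
        "model": model,
        "messages": messages,
        # Only standard Qwen3 models honor enable_thinking; Qwen3-Thinking
        # models always think regardless of this flag.
        "chat_template_kwargs": {"enable_thinking": enable_thinking},
        # Ask the server to return reasoning separately from normal content.
        "separate_reasoning": True,
    }

body = build_reasoning_request("qwen3-model", [{"role": "user", "content": "Hi"}])
```

With the OpenAI client, these fields would be passed via `extra_body=` as shown in the diff above.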
@@ -6,14 +6,27 @@
"source": [
"# Reasoning Parser\n",
"\n",
"SGLang supports parsing reasoning content our from \"normal\" content for reasoning models such as [DeepSeek R1](https://huggingface.co/deepseek-ai/DeepSeek-R1).\n",
"SGLang supports parsing reasoning content out from \"normal\" content for reasoning models such as [DeepSeek R1](https://huggingface.co/deepseek-ai/DeepSeek-R1).\n",
"\n",
"## Supported Models & Parsers\n",
"\n",
"| Model | Reasoning tags | Parser |\n",
"|---------|-----------------------------|------------------|\n",
"| [DeepSeek‑R1 series](https://huggingface.co/collections/deepseek-ai/deepseek-r1-678e1e131c0169c0bc89728d) | `<think>` … `</think>` | `deepseek-r1` |\n",
"| [Qwen3 and QwQ series](https://huggingface.co/collections/Qwen/qwen3-67dd247413f0e2e4f653967f) | `<think>` … `</think>` | `qwen3` |"
"| Model | Reasoning tags | Parser | Notes |\n",
"|---------|-----------------------------|------------------|-------|\n",
"| [DeepSeek‑R1 series](https://huggingface.co/collections/deepseek-ai/deepseek-r1-678e1e131c0169c0bc89728d) | `<think>` … `</think>` | `deepseek-r1` | Supports all variants (R1, R1-0528, R1-Distill) |\n",
"| [Standard Qwen3 models](https://huggingface.co/collections/Qwen/qwen3-67dd247413f0e2e4f653967f) | `<think>` … `</think>` | `qwen3` | Supports `enable_thinking` parameter |\n",
"| [Qwen3-Thinking models](https://huggingface.co/Qwen/Qwen3-235B-A22B-Thinking-2507) | `<think>` … `</think>` | `qwen3-thinking` | Always generates thinking content |\n",
"| [Kimi models](https://huggingface.co/collections/MoonshotAI/kimi-675e30c072b7ba7e79833be7) | `◁think▷` … `◁/think▷` | `kimi` | Uses special thinking delimiters |\n",
"\n",
"### Model-Specific Behaviors\n",
"\n",
"**DeepSeek-R1 Family:**\n",
"- DeepSeek-R1: No `<think>` start tag, jumps directly to thinking content\n",
"- DeepSeek-R1-0528: Generates both `<think>` start and `</think>` end tags\n",
"- Both are handled by the same `deepseek-r1` parser\n",
"\n",
"**Qwen3 Family:**\n",
"- Standard Qwen3 (e.g., Qwen3-2507): Use `qwen3` parser, supports `enable_thinking` in chat templates\n",
"- Qwen3-Thinking (e.g., Qwen3-235B-A22B-Thinking-2507): Use `qwen3-thinking` parser, always thinks"
]
},
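The tag-splitting behavior described in the table and notes above can be sketched in a few lines of standalone Python. This is a simplified illustration, not SGLang's actual detector classes; `split_reasoning` is a hypothetical helper name:

```python
# Simplified sketch of reasoning-tag splitting. force_reasoning mirrors the
# deepseek-r1 / qwen3-thinking case, where output may begin mid-reasoning
# without a <think> start tag.

def split_reasoning(text: str, start: str = "<think>", end: str = "</think>",
                    force_reasoning: bool = False):
    """Return (reasoning_text, normal_text) from a complete model output."""
    if start in text:
        text = text.split(start, 1)[1]   # drop everything before <think>
    elif not force_reasoning:
        return "", text                  # no tags at all: everything is normal
    if end in text:
        reasoning, normal = text.split(end, 1)
        return reasoning.strip(), normal.strip()
    return text.strip(), ""              # unterminated reasoning: no normal text

# Qwen3 with enable_thinking=True emits both tags:
print(split_reasoning("<think>plan</think>answer"))                    # ('plan', 'answer')
# DeepSeek-R1 / Qwen3-Thinking may omit the start tag:
print(split_reasoning("plan</think>answer", force_reasoning=True))     # ('plan', 'answer')
```

The one boolean (`force_reasoning`) is what distinguishes the `qwen3` parser (tags optional) from `qwen3-thinking` and `deepseek-r1` (output assumed to start inside reasoning), matching the constructor arguments shown in the diff below.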
{
@@ -353,36 +366,61 @@
"```python\n",
"class DeepSeekR1Detector(BaseReasoningFormatDetector):\n",
"    \"\"\"\n",
"    Detector for DeepSeek-R1 model.\n",
"    Assumes reasoning format:\n",
"      (<think>)*(.*)</think>\n",
"    Returns all the text before the </think> tag as `reasoning_text`\n",
"    and the rest of the text as `normal_text`.\n",
"\n",
"    Args:\n",
"        stream_reasoning (bool): If False, accumulates reasoning content until the end tag.\n",
"            If True, streams reasoning content as it arrives.\n",
"    Detector for DeepSeek-R1 family models.\n",
"    \n",
"    Supported models:\n",
"    - DeepSeek-R1: Always generates thinking content without <think> start tag\n",
"    - DeepSeek-R1-0528: Generates thinking content with <think> start tag\n",
"    \n",
"    This detector handles both patterns automatically.\n",
"    \"\"\"\n",
"\n",
"    def __init__(self, stream_reasoning: bool = False):\n",
"        # DeepSeek-R1 is assumed to be reasoning until `</think>` token\n",
"        super().__init__(\"<think>\", \"</think>\", True, stream_reasoning=stream_reasoning)\n",
"        # https://github.com/sgl-project/sglang/pull/3202#discussion_r1950153599\n",
"    def __init__(self, stream_reasoning: bool = True):\n",
"        super().__init__(\"<think>\", \"</think>\", force_reasoning=True, stream_reasoning=stream_reasoning)\n",
"\n",
"\n",
"class Qwen3Detector(BaseReasoningFormatDetector):\n",
"    \"\"\"\n",
"    Detector for standard Qwen3 models that support enable_thinking parameter.\n",
"    \n",
"    These models can switch between thinking and non-thinking modes:\n",
"    - enable_thinking=True: Generates <think>...</think> tags\n",
"    - enable_thinking=False: No thinking content generated\n",
"    \"\"\"\n",
"\n",
"    def __init__(self, stream_reasoning: bool = True):\n",
"        super().__init__(\"<think>\", \"</think>\", force_reasoning=False, stream_reasoning=stream_reasoning)\n",
"\n",
"\n",
"class Qwen3ThinkingDetector(BaseReasoningFormatDetector):\n",
"    \"\"\"\n",
"    Detector for Qwen3-Thinking models (e.g., Qwen3-235B-A22B-Thinking-2507).\n",
"    \n",
"    These models always generate thinking content without <think> start tag.\n",
"    They do not support the enable_thinking parameter.\n",
"    \"\"\"\n",
"\n",
"    def __init__(self, stream_reasoning: bool = True):\n",
"        super().__init__(\"<think>\", \"</think>\", force_reasoning=True, stream_reasoning=stream_reasoning)\n",
"\n",
"\n",
"class ReasoningParser:\n",
"    \"\"\"\n",
"    Parser that handles both streaming and non-streaming scenarios for extracting\n",
"    reasoning content from model outputs.\n",
"\n",
"    Args:\n",
"        model_type (str): Type of model to parse reasoning from\n",
"        stream_reasoning (bool): If False, accumulates reasoning content until complete.\n",
"            If True, streams reasoning content as it arrives.\n",
"    Parser that handles both streaming and non-streaming scenarios.\n",
"    \n",
"    Usage:\n",
"        # For standard Qwen3 models with enable_thinking support\n",
"        parser = ReasoningParser(\"qwen3\")\n",
"        \n",
"        # For Qwen3-Thinking models that always think\n",
"        parser = ReasoningParser(\"qwen3-thinking\")\n",
"    \"\"\"\n",
"\n",
"    DetectorMap: Dict[str, BaseReasoningFormatDetector] = {\n",
"        \"deepseek-r1\": DeepSeekR1Detector\n",
"    DetectorMap: Dict[str, Type[BaseReasoningFormatDetector]] = {\n",
"        \"deepseek-r1\": DeepSeekR1Detector,\n",
"        \"qwen3\": Qwen3Detector,\n",
"        \"qwen3-thinking\": Qwen3ThinkingDetector,\n",
"        \"kimi\": KimiDetector,\n",
"    }\n",
"\n",
"    def __init__(self, model_type: str = None, stream_reasoning: bool = True):\n",
@@ -395,13 +433,13 @@
"\n",
"        self.detector = detector_class(stream_reasoning=stream_reasoning)\n",
"\n",
"    def parse_non_stream(self, full_text: str) -> StreamingParseResult:\n",
"        \"\"\"Non-streaming call: one-time parsing\"\"\"\n",
"    def parse_non_stream(self, full_text: str) -> Tuple[str, str]:\n",
"        \"\"\"Returns (reasoning_text, normal_text)\"\"\"\n",
"        ret = self.detector.detect_and_parse(full_text)\n",
"        return ret.reasoning_text, ret.normal_text\n",
"\n",
"    def parse_stream_chunk(self, chunk_text: str) -> StreamingParseResult:\n",
"        \"\"\"Streaming call: incremental parsing\"\"\"\n",
"    def parse_stream_chunk(self, chunk_text: str) -> Tuple[str, str]:\n",
"        \"\"\"Returns (reasoning_text, normal_text) for the current chunk\"\"\"\n",
"        ret = self.detector.parse_streaming_increment(chunk_text)\n",
"        return ret.reasoning_text, ret.normal_text\n",
"```"
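The streaming contract above (each chunk yields a `(reasoning_text, normal_text)` pair) can be illustrated with a self-contained toy. `ToyStreamParser` is a hypothetical name, and this sketch only models the accumulate-until-end-tag case (`stream_reasoning=False`), not SGLang's full incremental implementation:

```python
# Toy illustration of the parse_stream_chunk interface: buffer reasoning
# until </think> appears, then pass normal text through. Buffering also
# handles an end tag that is split across chunk boundaries.

class ToyStreamParser:
    END = "</think>"

    def __init__(self):
        self.in_reasoning = True  # force_reasoning: output starts as thinking
        self.buffer = ""

    def parse_stream_chunk(self, chunk: str):
        """Return (reasoning_text, normal_text) for this chunk."""
        if not self.in_reasoning:
            return "", chunk          # past the end tag: everything is normal
        self.buffer += chunk
        if self.END not in self.buffer:
            return "", ""             # still thinking: hold the buffer
        reasoning, normal = self.buffer.split(self.END, 1)
        self.in_reasoning = False
        return reasoning, normal

p = ToyStreamParser()
chunks = ["thinking ", "hard</th", "ink>the ", "answer"]
out = [p.parse_stream_chunk(c) for c in chunks]
# Reasoning is released only once the tag completes in chunk 3.
```

Note how the partial tag `</th` in the second chunk is neither emitted as reasoning nor as normal text; it stays buffered until the tag either completes or is ruled out, which is the main subtlety any incremental detector has to handle.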