[Feat] Add reasoning parser for Qwen/Qwen3-235B-A22B-Thinking-2507 (#8363)
@@ -97,14 +97,23 @@
"\n",
"#### Enabling Model Thinking/Reasoning\n",
"\n",
"You can use `chat_template_kwargs` to enable or disable the model's internal thinking or reasoning process output. Set `\"enable_thinking\": True` within `chat_template_kwargs` to include the reasoning steps in the response. This requires launching the server with a compatible reasoning parser.\n",
"\n",
"**Reasoning Parser Options:**\n",
"- `--reasoning-parser deepseek-r1`: For DeepSeek-R1 family models (R1, R1-0528, R1-Distill)\n",
"- `--reasoning-parser qwen3`: For standard Qwen3 models that support the `enable_thinking` parameter\n",
"- `--reasoning-parser qwen3-thinking`: For Qwen3-Thinking models (e.g., Qwen/Qwen3-235B-A22B-Thinking-2507) that always generate thinking content\n",
"- `--reasoning-parser kimi`: For Kimi thinking models\n",
"\n",
"Here's an example demonstrating how to enable thinking and retrieve the reasoning content separately (using `separate_reasoning: True`):\n",
"\n",
"```python\n",
"# Ensure the server is launched with a compatible reasoning parser, e.g.:\n",
"# For standard Qwen3 models with enable_thinking support:\n",
"# python3 -m sglang.launch_server --model-path Qwen/Qwen3-32B --reasoning-parser qwen3 ...\n",
"\n",
"# For Qwen3-Thinking models that always think:\n",
"# python3 -m sglang.launch_server --model-path Qwen/Qwen3-235B-A22B-Thinking-2507 --reasoning-parser qwen3-thinking ...\n",
"\n",
"from openai import OpenAI\n",
"\n",
"# Modify OpenAI's API key and API base to use SGLang's API server.\n",
@@ -123,7 +132,7 @@
" model=model,\n",
" messages=messages,\n",
" extra_body={\n",
" \"chat_template_kwargs\": {\"enable_thinking\": True}, # Only for standard Qwen3 models\n",
" \"separate_reasoning\": True\n",
" }\n",
")\n",
@@ -149,6 +158,8 @@
"\n",
"Setting `\"enable_thinking\": False` (or omitting it) will result in `reasoning_content` being `None`.\n",
"\n",
"**Note for Qwen3-Thinking models:** These models always generate thinking content and do not support the `enable_thinking` parameter. When using `--reasoning-parser qwen3-thinking`, the model will always produce reasoning content regardless of the `enable_thinking` setting.\n",
"\n",
"Here is an example of a detailed chat completion request using standard OpenAI parameters:"
]
},
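The request in the diff above is truncated at the hunk boundaries. As a hedged, self-contained sketch (the model path, prompt, and field values here are illustrative assumptions, not taken from the commit), this is roughly the JSON body the OpenAI client assembles and POSTs to SGLang's `/v1/chat/completions` endpoint when `extra_body` carries the SGLang-specific fields:

```python
import json

# Sketch under assumptions: a server launched with, e.g.,
#   python3 -m sglang.launch_server --model-path Qwen/Qwen3-32B --reasoning-parser qwen3
# Standard OpenAI fields and the SGLang-specific extra_body fields travel
# together in one JSON request body.
body = {
    "model": "Qwen/Qwen3-32B",  # illustrative model path
    "messages": [{"role": "user", "content": "What is 2 + 2?"}],
    # SGLang-specific fields (passed via extra_body in the OpenAI client):
    "chat_template_kwargs": {"enable_thinking": True},  # standard Qwen3 only
    "separate_reasoning": True,  # split reasoning out of the main content
}
print(json.dumps(body, indent=2))
```

With `separate_reasoning` set to `True`, the server returns the thinking tokens in the response's `reasoning_content` field rather than inline in `content`; for Qwen3-Thinking models the `enable_thinking` key should be dropped, since (per the note above) they always think regardless of it.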