Commit Graph

2074 Commits

| Author | SHA1 | Message | Date |
| --- | --- | --- | --- |
| Yineng Zhang | 8441baad6e | fix: update model runner (#5934) | 2025-04-30 19:49:26 -07:00 |
| mlmz | 256c4c2519 | fix: correct stream response when enable_thinking is set to false (#5881) | 2025-04-30 19:44:37 -07:00 |
| Qiaolin Yu | 7bcd8b1cb2 | Fix lora batch processing when input lora_path contains None (#5930) | 2025-04-30 19:42:42 -07:00 |
| Ying Sheng | 11383cec3c | [PP] Add pipeline parallelism (#5724) | 2025-04-30 18:18:07 -07:00 |
| XinyuanTong | e97e57e699 | Remove unused method calculate_num_image_tokens from qwen2_vl.py (#5783) | 2025-04-30 17:46:59 -07:00 |
| Yineng Zhang | 9a6ad8916d | chore: upgrade sgl-kernel 0.1.1 (#5933) | 2025-04-30 16:13:30 -07:00 |
| laixin | e330f2b86c | [qwen3] support qwen3 ep moe (#5917) (Co-authored-by: sleepcoo <sleepcoo@gmail.com>) | 2025-04-30 09:15:21 -07:00 |
| liwenju0 | 8fefdd32c7 | [Feature] add support kimi vl model (#5383) (Co-authored-by: wenju.li <wenju.li@deepctr.cn>) | 2025-04-29 21:31:19 -07:00 |
| lambert0312 | 1698e94e67 | Add A800 fused moe config for qwen3 235b (#5900) | 2025-04-29 20:18:11 -07:00 |
| Qiaolin Yu | 58195dd588 | [Fix] Unload lora in HF_Runner if needed (#5899) | 2025-04-29 20:17:42 -07:00 |
| Baizhou Zhang | 799789afed | Bump Flashinfer to 0.2.5 (#5870) (Co-authored-by: Yuhao Chen <yxckeis8@gmail.com>) | 2025-04-29 19:50:57 -07:00 |
| ybyang | cc4a80caf6 | [PD] Fix Assertion failed: /DeepEP/csrc/kernels/internode.cu:483, condition: ibgda_get_state()->num_rc_per_pe >= num_channels #134 (#5830) | 2025-04-29 19:38:54 -07:00 |
| lambert0312 | 3c8a52311a | Fix check_env script (#5901) | 2025-04-29 18:54:54 -07:00 |
| Chang Su | 28b26dbf48 | [Bugfix]: fix missing queue_time_start for requests from grammar_queue (#5696) | 2025-04-29 17:31:44 -07:00 |
| Chang Su | 2b06484bd1 | feat: support pythonic tool call and index in tool call streaming (#5725) | 2025-04-29 17:30:44 -07:00 |
| JieXin Liang | e4b6133b78 | [fix] relax mem_fraction_static for h200 (#5893) (Co-authored-by: alcanerian <alcanerian@gmail.com>) | 2025-04-29 17:01:12 -07:00 |
| Ke Bao | dd408ee481 | Auto set draft model path for MTP (#5793) | 2025-04-29 16:25:40 -07:00 |
| lambert0312 | 91dda4cd06 | Add A800 fused moe config for qwen3 30b (#5880) | 2025-04-29 02:02:24 -07:00 |
| pengcuo | 8e5a6d3441 | [Fix] Fix a bug for flashmla to run R1 model (#5875) (Co-authored-by: pengcuo <dgpengcuo@gmail.com>) | 2025-04-29 01:03:13 -07:00 |
| XinyuanTong | 8465f035d1 | Add qwen3 30b fused moe config (#5859) | 2025-04-29 00:24:00 -07:00 |
| Qiaolin Yu | 8c0cfca87d | Feat: support cuda graph for LoRA (#4115) (Co-authored-by: Beichen Ma <mabeichen12@gmail.com>) | 2025-04-28 23:30:44 -07:00 |
| woodx | 2c3ea29476 | [Feature] support auto chat template (#4949) | 2025-04-28 22:34:18 -07:00 |
| Trevor Morris | 8d463fe351 | Cutlass MLA decode - fix dtype error (#5868) | 2025-04-28 21:12:58 -07:00 |
| Lianmin Zheng | 26fc32d168 | [CI] tune the test order to warmup the server (#5860) | 2025-04-28 19:27:37 -07:00 |
| Xiaoyu Zhang | 1cc326032d | simplify fused_moe config logging (#5801) | 2025-04-28 17:04:54 -07:00 |
| Chang Su | 05ee219286 | Support max_completion_tokens for OpenAIChatCompletions (#5857) | 2025-04-28 13:50:13 -07:00 |
| Yineng Zhang | dcae1fb2cd | chore: bump v0.4.6.post1 (#5845) | 2025-04-28 12:57:08 -07:00 |
| Yi Zhang | a0251a3fd6 | add fused moe config for qwen3moe fp8/bf16 (#5849) | 2025-04-28 11:55:52 -07:00 |
| Yineng Zhang | 663037a7a0 | feat: update is_fa3_default_architecture (#5854) | 2025-04-28 11:53:22 -07:00 |
| XTY | f4a9f60cbd | [Fix] Missing bootstrap_port field (#5823) | 2025-04-28 11:13:04 -07:00 |
| HAI | d364b9b0f2 | ROCm: update AITER (#5816) | 2025-04-28 11:01:20 -07:00 |
| Lianmin Zheng | 849c83a0c0 | [CI] test chunked prefill more (#5798) | 2025-04-28 10:57:17 -07:00 |
| JiLi | d73ddeb196 | feat: Add fused moe triton config for qwen3-30b-fp8 moe on h20 (#5850) | 2025-04-28 10:49:33 -07:00 |
| ybyang | 74cb12a878 | [config] qwen3moe_tune_h20 fp8 tp4 (#5846) | 2025-04-28 10:21:06 -07:00 |
| ybyang | c6c6264073 | [PD] support pd fake transfer for warmup (#5726) | 2025-04-29 00:33:20 +08:00 |
| yhyang201 | 92ab0a2055 | feat: Add fused moe triton config for qwen3bf16 moe on h20 (#5839) | 2025-04-28 09:30:59 -07:00 |
| XinyuanTong | 0045f4b2af | feat: Add fused moe triton config for qwen3 moe on h100 (#5833) | 2025-04-28 08:37:13 -07:00 |
| mlmz | 8601300beb | fix: fix the error where the content is None when reasoning and tool … (#5838) | 2025-04-28 08:36:08 -07:00 |
| mlmz | 6fa6f38ed3 | Feat: add support for thinking mode via chat_template_kwargs.enable_t… (#5551) (Co-authored-by: shuaills <shishuaiuoe@gmail.com>; Chayenne <zhaochen20@outlook.com>; Lianmin Zheng <lianminzheng@gmail.com>; Yineng Zhang <me@zhyncs.com>) | 2025-04-28 07:07:45 -07:00 |
| Lianmin Zheng | 693723d1f7 | Revert "Tiny refactor DefaultModelLoader.Source" (#5825) | 2025-04-28 01:18:57 -07:00 |
| fzyzcjy | 644ed409d1 | Tiny refactor DefaultModelLoader.Source (#5482) | 2025-04-28 00:35:51 -07:00 |
| Lianmin Zheng | 3029889cb4 | Turn on overlap scheduler for multimodal models (#5771) | 2025-04-27 23:45:09 -07:00 |
| Yineng Zhang | 41ac0c6d48 | chore: upgrade sgl-kernel 0.1.0 (#5690) | 2025-04-27 21:00:50 -07:00 |
| Trevor Morris | 84810da4ae | Add Cutlass MLA attention backend (#5390) | 2025-04-27 20:58:53 -07:00 |
| Liangsheng Yin | 40d9b8acce | Improve overlap scheduling (#5788) | 2025-04-28 11:19:16 +08:00 |
| Lianmin Zheng | daed453e84 | [CI] Improve github summary & enable fa3 for more models (#5796) | 2025-04-27 15:29:46 -07:00 |
| Baizhou Zhang | 84022c0e56 | Release v0.4.6 (#5795) | 2025-04-27 14:07:05 -07:00 |
| Lianmin Zheng | a38f6932cc | [CI] Fix test case (#5790) | 2025-04-27 08:55:35 -07:00 |
| Liangsheng Yin | beb65c7433 | [PD]Reduce kv transfer threads (#5791) | 2025-04-27 23:03:30 +08:00 |
| Lianmin Zheng | 621e96bf9b | [CI] Fix ci tests (#5769) | 2025-04-27 07:18:10 -07:00 |