Commit Graph

3531 Commits

Author SHA1 Message Date
Trevor Morris
c7e85f5378 fix: flashinfer_cutlass_moe: Use max of global expert scales instead of local for input scale (#10296) 2025-09-11 20:19:17 -07:00
Shu Wang
3df05f4d6a [NVIDIA] [3/N] Nvfp4 Masked Gemm: Add flashinfer grouped_gemm_nt_masked (#9199) 2025-09-11 20:18:43 -07:00
Lianmin Zheng
144ee5f37c [Auto Sync] Update server_args.py (20250912) (#10347) 2025-09-11 19:18:07 -07:00
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Co-authored-by: Kan Wu <wukanustc@gmail.com>
Yineng Zhang
b0d25e72c4 chore: bump v0.5.2 (#10221) 2025-09-11 16:09:20 -07:00
gongwei-130
a2424068ec add try catch for quant config hf download (#10340) 2025-09-11 15:00:21 -07:00
zk-lover
c5d2b01cea [LongCat] Optimize zero_experts_compute_triton by changing mask (#10303) 2025-09-11 14:56:25 -07:00
eigen
70c0c1f926 fix: trtllm-gen attention take zero-init workspace (#10330) 2025-09-11 14:35:23 -07:00
Yi Zhang
ab795ae840 add h20 qwen3 next config (#10264) 2025-09-11 14:02:24 -07:00
Co-authored-by: cao1zhg <114661107+cao1zhg@users.noreply.github.com>
Stefan He
6c18ab46a2 [Qwen3-Next] switch to triton and cache conv states to accelerate MTP from 300 tok/s to 341 tok/s (#10335) 2025-09-11 11:59:48 -07:00
Co-authored-by: Binyao Jiang <byjiang1996@gmail.com>
cao1zhg
4a0e0be2a2 [bugfix] fix norm type error in qwen3_next model (#10322) 2025-09-12 00:05:59 +08:00
Co-authored-by: caoyizhong.cyz <caoyizhong.cyz@alibaba-inc.com>
Co-authored-by: Yi Zhang <1109276519@qq.com>
Lianmin Zheng
64f296f8e6 [Minor] Improve the style of server args (#10328) 2025-09-11 07:06:29 -07:00
Lianmin Zheng
956d805dde [Auto Sync] Update parallel_state.py (20250911) (#10326) 2025-09-11 06:36:29 -07:00
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Co-authored-by: Byron Hsu <byronhsu1230@gmail.com>
Yi Zhang
30c6e1f569 Qwen3-Next support (#10233) 2025-09-11 04:11:49 -07:00
Co-authored-by: cao1zhg <114661107+cao1zhg@users.noreply.github.com>
Co-authored-by: ispobock <ispobaoke@gmail.com>
Co-authored-by: Binyao Jiang <byjiang1996@gmail.com>
Co-authored-by: hebiao064 <hebiaobuaa@gmail.com>
Co-authored-by: Lifu Huang <lifu.hlf@gmail.com>
Co-authored-by: qingquansong <ustcsqq@gmail.com>
Co-authored-by: Yaoyao Ding <dingyaoyao.cs@gmail.com>
Co-authored-by: Ke Bao <ISPObaoke@163.com>
Co-authored-by: Minglei Zhu <mingleizhu1122@gmail.com>
Yineng Zhang
bfe01a5eef chore: upgrade v0.3.9.post2 sgl-kernel (#10297) 2025-09-11 04:10:29 -07:00
Yineng Zhang
de15d1405a Revert "Fix flashinfer version in sgl-kernel (#10135)" (#10310) 2025-09-11 01:27:58 -07:00
Xiaoyu Zhang
37367da639 [fix CI] Fix logical condition in fused MoE layer for compressed tensor quantization (#10299) 2025-09-10 23:54:09 -07:00
Zaili Wang
ef959d7b85 [CPU] fix OOM when mem-fraction is not set (#9090) 2025-09-10 23:52:22 -07:00
Yi Zhang
dc491b399d add flash linear attention triton kernel (#10239) 2025-09-10 21:47:20 -07:00
Even Zhou
5b64f006ec [Feature] Support DeepEP normal & Redundant Experts on NPU (#9881) 2025-09-10 20:35:26 -07:00
Yineng Zhang
6d55f60e77 Revert "[1/2] Optimizations and refactors about quant kernel (#9534)" (#10292) 2025-09-10 18:24:23 -07:00
Lianmin Zheng
033b75f559 [Auto Sync] Update serving_base.py, serving_chat.py, servin... (20250910) (#10282) 2025-09-10 16:58:59 -07:00
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Co-authored-by: cctry <shiyang@x.ai>
Xinyuan Tong
f3b5db6ee8 Feat: support disable tool parser (#10184) 2025-09-10 14:03:55 -07:00
Rain Jiang
2286e85e77 pass a_scale from fp8 quant result instead of hard code to 1.0f (#10241) 2025-09-10 12:56:05 -07:00
Co-authored-by: Yichen Wang <yichen.wang@bytedance.com>
Co-authored-by: Jinwu Guo <641876696@qq.com>
Hubert Lu
91b3555d2d Add tests to AMD CI for MI35x (#9662) 2025-09-10 12:50:05 -07:00
Co-authored-by: Sai Enduri <saimanas.enduri@amd.com>
Yi Zhang
9e2f7252db add dual stream for qwen2_moe (#10252) 2025-09-10 12:49:43 -07:00
Pavani Majety
21176b0093 [Bugfix] Fix Weightloading for the original nvidia/Deepseek-R1-FP4 checkpoint (#9940) 2025-09-10 12:00:23 -07:00
Signed-off-by: Pavani Majety <pmajety@nvidia.com>
Co-authored-by: Yineng Zhang <me@zhyncs.com>
Co-authored-by: fzyzcjy <5236035+fzyzcjy@users.noreply.github.com>
Lifu Huang
941002945b [1/2] Refactor LoRA to support backend-specific batch preprocessing. (#10251) 2025-09-10 09:58:37 -07:00
Lianmin Zheng
27760fc1b6 [Auto Sync] Update io_struct.py (20250910) (#10262) 2025-09-10 00:16:37 -07:00
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Co-authored-by: Kan Wu <wukanustc@gmail.com>
Seunggeun Cho
0ac809de33 Fix assertion typo in tp_worker.py (#9954) 2025-09-10 13:43:50 +08:00
Yiyu Liu
737d73ed5b Fix: the default choice is wrong for flashinfer mxfp4 moe precision (#10253) 2025-09-10 12:10:38 +08:00
ryang
dccf52f9c8 [UT for RL] Add UT to cover release/resume memory case for moe model (#8803) 2025-09-09 19:25:12 -07:00
Lianmin Zheng
676a7b51bd make --speculative-draft-model an alias of --speculative-draft-model-path (#10246) 2025-09-09 19:12:24 -07:00
Kevin Tuan
15f993472c refactor(InternVL): Use gpu to preprocess the input image (#9795) 2025-09-09 19:09:04 -07:00
Lianmin Zheng
bcf1955f7e Revert "chore: upgrade v0.3.9 sgl-kernel" (#10245) 2025-09-09 19:05:20 -07:00
Lianmin Zheng
a06bf66425 [Auto Sync] Update collector.py, startup_func_log_and_timer... (20250910) (#10242) 2025-09-09 18:05:16 -07:00
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Co-authored-by: cctry <shiyang@x.ai>
Lianmin Zheng
bf72b80122 [Auto Sync] Update io_struct.py (20250909) (#10236) 2025-09-09 14:15:21 -07:00
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Co-authored-by: cctry <shiyang@x.ai>
Teng Ma
8471e5e616 [HiCache] feat: add mooncake backend extra config (#10213) 2025-09-09 12:50:00 -07:00
Lianmin Zheng
4582931ac3 Revert "Revert the changes on NCCL symmetric memory" (#10238) 2025-09-09 12:11:49 -07:00
Lianmin Zheng
d352c29aa0 Revert the changes on NCCL symmetric memory (#10210) 2025-09-09 11:01:33 -07:00
Co-authored-by: Yineng Zhang <me@zhyncs.com>
Yineng Zhang
d3ee70985f chore: upgrade v0.3.9 sgl-kernel (#10220) 2025-09-09 03:16:25 -07:00
Rain H
71fc7b7fad [Fix] KV-cache eviction mismatch across PP ranks in DeepSeek V3/R1 (#10214) 2025-09-09 02:07:13 -07:00
shaharmor98
9ab72f9895 add variable TP Decode > Prefill size support (#9960) 2025-09-09 16:47:26 +08:00
Signed-off-by: Shahar Mor <smor@nvidia.com>
Lianmin Zheng
71133a0426 [Auto Sync] Update sampling_batch_info.py (20250909) (#10212) 2025-09-09 01:29:52 -07:00
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Co-authored-by: cctry <shiyang@x.ai>
Shangming Cai
f5f6b3b4b5 Refactor fused_add_rmsnorm import logic (#10207) 2025-09-09 00:23:58 -07:00
Signed-off-by: Shangming Cai <csmthu@gmail.com>
Yineng Zhang
94fb4e9e54 feat: support fa cute in sgl-kernel (#10205) 2025-09-09 00:14:39 -07:00
Co-authored-by: cicirori <32845984+cicirori@users.noreply.github.com>
blzheng
d1d4074c4e [CPU] Add gelu_and_mul kernel in sgl-kernel and add ut (#9300) 2025-09-08 23:23:13 -07:00
DarkSharpness
948b01a04c [Refactor] Remove Hicache Load & Write threads (#10127) 2025-09-08 22:18:50 -07:00
Co-authored-by: Zhiqiang Xie <xiezhq@stanford.edu>
wenhuipeng
16ff3d4b05 Support opt model (#10165) 2025-09-09 12:45:00 +08:00
Liangsheng Yin
83d55ac51f [1/N]DP refactor: Improve dp rank scheduling in PD disaggregation mode. (#10169) 2025-09-09 12:27:55 +08:00
blzheng
97fff98c68 [CPU] Fix phi4-mm prompt issue in bench_serving (#9900) 2025-09-08 20:12:32 -07:00