Commit Graph

2441 Commits

Author SHA1 Message Date
Baizhou Zhang
a979daac3b Fallback to lower triton version for unfound fused moe configs (#7013) 2025-06-09 15:41:03 -07:00
ishandhanani
f1569876d5 feat: add direct routing strategy to DP worker (#6884) 2025-06-09 11:44:05 -07:00
fzyzcjy
e58423b2b9 Fix cutlass MLA gets almost zero accuracy (#6998) 2025-06-09 10:16:29 -07:00
Yineng Zhang
56ccd3c22c chore: upgrade flashinfer v0.2.6.post1 jit (#6958)
Co-authored-by: alcanderian <alcanderian@gmail.com>
Co-authored-by: Qiaolin Yu <qy254@cornell.edu>
Co-authored-by: Baizhou Zhang <sobereddiezhang@gmail.com>
Co-authored-by: Mick <mickjagger19@icloud.com>
Co-authored-by: ispobock <ispobaoke@gmail.com>
2025-06-09 09:22:39 -07:00
Yueyang Pan
98c00a2df1 Fix torch profiler bugs for bench_offline_throughput.py (#6557) 2025-06-09 20:33:41 +08:00
Pan Lyu
451ffe74d9 support qwen3 embedding (#6990) 2025-06-09 01:32:49 -07:00
Lifu Huang
b1e5a33ae3 Eliminate stream sync to speed up LoRA batch init (#6960) 2025-06-09 00:22:45 -07:00
Lianmin Zheng
9d5fa68b90 Use torch.compile to fuse flash attention decode metadata preparation (#6973) 2025-06-08 23:05:40 -07:00
fzyzcjy
de1350ea20 Minor remove one kernel for DeepSeek (#6977) 2025-06-08 17:41:35 -07:00
fzyzcjy
86fe943bc3 Fix expert distribution dumping causes OOM (#6967) 2025-06-08 17:41:14 -07:00
Lianmin Zheng
0c1f03a23d Sync cuda graph runners (#6976) 2025-06-08 16:12:25 -07:00
Xiaoyu Zhang
3712abfaf9 Fuse routed scaling factor in deepseek (#6970) 2025-06-08 15:24:24 -07:00
Baizhou Zhang
971a0dfa32 Extend cuda graph capture bs for B200 (#6937) 2025-06-08 05:13:22 -07:00
fzyzcjy
2fc1299562 Remove unnecessary kernels of num_token_non_padded (#6965) 2025-06-08 05:09:17 -07:00
Lianmin Zheng
20d3ad3b58 Fix CI and triton moe Configs (#6974) 2025-06-08 05:06:46 -07:00
Xiaoyu Zhang
fa3592cfeb rebase h20 fused_moe config (#6966) 2025-06-08 05:01:34 -07:00
Lianmin Zheng
608668e143 Slightly improve the sampler to skip unnecessary steps (#6956) 2025-06-08 03:18:54 -07:00
Yineng Zhang
1fb76ebb93 Revert "Fuse routed scaling factor in topk_reduce kernel (#6220)" (#6968) 2025-06-07 21:02:49 -07:00
Pavani Majety
c2c4f57f63 [DeepseekR1-FP4] Add Support for nvidia/DeepSeekR1-FP4 model (#6853)
Signed-off-by: Pavani Majety <pmajety@nvidia.com>
2025-06-07 17:24:35 -07:00
Yineng Zhang
23881fa60c chore: upgrade sgl-kernel v0.1.6.post1 (#6957) 2025-06-07 17:18:55 -07:00
Elfie Guo
3e56f557fd Add a CUDA kernel for fusing mapping and weighted sum for MoE. (#6916)
Co-authored-by: Elfie Guo <elfiegxf@gmail.com>
2025-06-07 15:24:39 -07:00
Xu Wenqing
62fec60d81 Add H20 fused MoE kernel tuning configs for DeepSeek-R1/V3 (#6885)
Signed-off-by: Xu Wenqing <xuwq1993@qq.com>
2025-06-07 15:17:34 -07:00
JieXin Liang
e7759778e5 [misc] add is_cpu() (#6950) 2025-06-07 15:13:45 -07:00
Sai Enduri
77e928d00e Update server timeout time in AMD CI. (#6953) 2025-06-07 15:10:27 -07:00
Xiaoyu Zhang
515ef4facb Fuse routed scaling factor in topk_reduce kernel (#6220) 2025-06-07 11:06:50 -07:00
fzyzcjy
f5599ef124 Refactor global_server_args_dict (#6866) 2025-06-07 03:10:35 -07:00
fzyzcjy
c499591ac8 Add canary for EPLB rebalancing (#6895) 2025-06-07 03:09:33 -07:00
Swipe4057
e1ce44cdb1 Disabling mixed chunked prefill when eagle is enabled (#6874) 2025-06-07 03:06:58 -07:00
JieXin Liang
6153f2ff6e chore: upgrade sgl-kernel v0.1.6 (#6945) 2025-06-07 02:53:26 -07:00
Xiaoyu Zhang
8b5f83ed3b reduce torch.zeros overhead in moe align block size kernel (#6369) 2025-06-07 02:47:36 -07:00
Xiaoyu Zhang
2a413829f4 Add triton version as a fused_moe_triton config search key to avoid performance decrease in different Triton versions (#5955) 2025-06-07 02:43:50 -07:00
fzyzcjy
d5c097a2f9 Tiny re-introduce profile id logging (#6912) 2025-06-07 02:32:50 -07:00
Swipe4057
9736cd3b7d [Bugfix] pipeline parallelism and Eagle Qwen2 (#6910) 2025-06-07 01:58:50 -07:00
fzyzcjy
2f715f51cc Minor compile fused topk (#6944) 2025-06-07 01:40:38 -07:00
JieXin Liang
22fe787852 [sgl-kernel] update deepgemm (#6942) 2025-06-06 23:24:41 -07:00
Baizhou Zhang
c4ffbeca19 Add triton fused moe kernel config for E=257 on B200 (#6939) 2025-06-06 23:15:01 -07:00
miter
f8eaaab817 [fix] logical_to_all_physical_map index 256 is out of bounds in EP parallel. (#6767)
Signed-off-by: miter <miterv@outlook.com>
2025-06-06 21:32:33 -07:00
Xinyuan Tong
697b0f71f0 [Refactor] image data processing in bench_serving (#6879)
Signed-off-by: Xinyuan Tong <justinning0323@outlook.com>
2025-06-06 21:11:17 -07:00
shangmingc
132dad874d [PD] Optimize transfer queue forward logic for dummy rank (#6922)
Signed-off-by: Shangming Cai <caishangming@linux.alibaba.com>
2025-06-06 18:26:14 -07:00
Lianmin Zheng
60fdad7cf3 Sync the changes on cuda graph runners (#6932) 2025-06-06 18:23:52 -07:00
fzyzcjy
61ce91ed28 Tiny support customize DeepEP max dispatch tokens per rank (#6934) 2025-06-06 17:18:35 -07:00
Lianmin Zheng
e6b7053b60 Fix a bug in abort & Improve docstrings for abort (#6931) 2025-06-06 14:35:45 -07:00
Jianan Ji
5f91c82526 [Feature] Support Flashinfer fmha on Blackwell (#6930) 2025-06-06 12:57:50 -07:00
HAI
b819381fec AITER backend extension and workload optimizations (#6838)
Co-authored-by: wunhuang <wunhuang@amd.com>
Co-authored-by: Hubert Lu <Hubert.Lu@amd.com>
2025-06-05 23:00:18 -07:00
Zaili Wang
562f279a2d [CPU] enable CI for PRs, add Dockerfile and auto build task (#6458)
Co-authored-by: diwei sun <diwei.sun@intel.com>
Co-authored-by: Yineng Zhang <me@zhyncs.com>
2025-06-05 13:43:54 -07:00
Chang Su
8b2474898b bugfix(OAI): Fix image_data processing for jinja chat templates (#6877) 2025-06-05 13:37:01 -07:00
Pavani Majety
0df6765c83 [CUTLASS-FP4-MOE] Introduce CutlassMoEParams class for easy initialization of Cutlass Grouped Gemms Metadata (#6887)
Signed-off-by: Pavani Majety <pmajety@nvidia.com>
2025-06-05 13:13:14 -07:00
fzyzcjy
35b65cf0ca Use deepgemm instead of triton for fused_qkv_a_proj_with_mqa (#6890) 2025-06-05 11:37:05 -07:00
shangmingc
dd1012fcbe [PD] Fix potential perf spike caused by tracker gc and optimize doc (#6764)
Signed-off-by: Shangming Cai <caishangming@linux.alibaba.com>
2025-06-05 10:56:02 -07:00
Ravi Theja
44aab7f91c oai: fix openAI client error with single request via batch api (#6170)
Co-authored-by: Ravi Theja Desetty <ravitheja@Ravis-MacBook-Pro.local>
2025-06-05 18:21:47 +08:00