Commit Graph

2414 Commits

Author SHA1 Message Date
Swipe4057
e1ce44cdb1 Disabling mixed chunked prefill when eagle is enabled (#6874) 2025-06-07 03:06:58 -07:00
JieXin Liang
6153f2ff6e chore: upgrade sgl-kernel v0.1.6 (#6945) 2025-06-07 02:53:26 -07:00
Xiaoyu Zhang
8b5f83ed3b reduce torch.zeros overhead in moe align block size kernel (#6369) 2025-06-07 02:47:36 -07:00
Xiaoyu Zhang
2a413829f4 Add triton version as a fused_moe_triton config search key to avoid performance decrease across different Triton versions (#5955) 2025-06-07 02:43:50 -07:00
fzyzcjy
d5c097a2f9 Tiny re-introduce profile id logging (#6912) 2025-06-07 02:32:50 -07:00
Swipe4057
9736cd3b7d [Bugfix] pipeline parallelism and Eagle Qwen2 (#6910) 2025-06-07 01:58:50 -07:00
fzyzcjy
2f715f51cc Minor compile fused topk (#6944) 2025-06-07 01:40:38 -07:00
JieXin Liang
22fe787852 [sgl-kernel] update deepgemm (#6942) 2025-06-06 23:24:41 -07:00
Baizhou Zhang
c4ffbeca19 Add triton fused moe kernel config for E=257 on B200 (#6939) 2025-06-06 23:15:01 -07:00
miter
f8eaaab817 [fix] logical_to_all_physical_map index 256 is out of bounds in EP parallel. (#6767) 2025-06-06 21:32:33 -07:00
Signed-off-by: miter <miterv@outlook.com>
Xinyuan Tong
697b0f71f0 [Refactor] image data process in bench_serving (#6879) 2025-06-06 21:11:17 -07:00
Signed-off-by: Xinyuan Tong <justinning0323@outlook.com>
shangmingc
132dad874d [PD] Optimize transfer queue forward logic for dummy rank (#6922) 2025-06-06 18:26:14 -07:00
Signed-off-by: Shangming Cai <caishangming@linux.alibaba.com>
Lianmin Zheng
60fdad7cf3 Sync the changes on cuda graph runners (#6932) 2025-06-06 18:23:52 -07:00
fzyzcjy
61ce91ed28 Tiny support customize DeepEP max dispatch tokens per rank (#6934) 2025-06-06 17:18:35 -07:00
Lianmin Zheng
e6b7053b60 Fix a bug in abort & Improve docstrings for abort (#6931) 2025-06-06 14:35:45 -07:00
Jianan Ji
5f91c82526 [Feature] Support Flashinfer fmha on Blackwell (#6930) 2025-06-06 12:57:50 -07:00
HAI
b819381fec AITER backend extension and workload optimizations (#6838) 2025-06-05 23:00:18 -07:00
Co-authored-by: wunhuang <wunhuang@amd.com>
Co-authored-by: Hubert Lu <Hubert.Lu@amd.com>
Zaili Wang
562f279a2d [CPU] enable CI for PRs, add Dockerfile and auto build task (#6458) 2025-06-05 13:43:54 -07:00
Co-authored-by: diwei sun <diwei.sun@intel.com>
Co-authored-by: Yineng Zhang <me@zhyncs.com>
Chang Su
8b2474898b bugfix(OAI): Fix image_data processing for jinja chat templates (#6877) 2025-06-05 13:37:01 -07:00
Pavani Majety
0df6765c83 [CUTLASS-FP4-MOE] Introduce CutlassMoEParams class for easy initialization of Cutlass Grouped Gems Metadata (#6887) 2025-06-05 13:13:14 -07:00
Signed-off-by: Pavani Majety <pmajety@nvidia.com>
fzyzcjy
35b65cf0ca Use deepgemm instead of triton for fused_qkv_a_proj_with_mqa (#6890) 2025-06-05 11:37:05 -07:00
shangmingc
dd1012fcbe [PD] Fix potential perf spike caused by tracker gc and optimize doc (#6764) 2025-06-05 10:56:02 -07:00
Signed-off-by: Shangming Cai <caishangming@linux.alibaba.com>
Ravi Theja
44aab7f91c oai: fix openAI client error with single request via batch api (#6170) 2025-06-05 18:21:47 +08:00
Co-authored-by: Ravi Theja Desetty <ravitheja@Ravis-MacBook-Pro.local>
fzyzcjy
bcf66ef3e1 Tiny allow profiler API to auto create directory (#6865) 2025-06-05 00:07:03 -07:00
fzyzcjy
0de5e7d40f Support layerwise rebalancing experts (#6851) 2025-06-05 00:05:52 -07:00
fzyzcjy
72a110f664 Tiny update error hints (#6846) 2025-06-05 00:05:28 -07:00
fzyzcjy
5aff1e9392 Fix Qwen3MoE missing token padding optimization (#6820) 2025-06-05 00:04:59 -07:00
zyksir
8e3797be1c support 1 shot allreduce in 1-node and 2-node using mscclpp (#6277) 2025-06-04 22:11:24 -07:00
Lifu Huang
4474eaf552 Support LoRA in TestOpenAIVisionServer and fix fused kv_proj loading bug. (#6861) 2025-06-04 22:08:30 -07:00
Cheng Wan
499f5e620c Fix one missing arg in DeepEP (#6878) 2025-06-04 19:14:47 -07:00
Cheng Wan
81964328b7 Set num_fused_shared_experts as num_shared_experts when shared_experts fusion is not disabled (#6736) 2025-06-04 15:53:22 -07:00
ishandhanani
f0f84975f4 feat: add dp-rank to KV events (#6852) 2025-06-04 15:29:34 -07:00
Chanh Nguyen
3f1e433903 Decoder-only Scoring API (#6460) 2025-06-04 14:14:54 -07:00
Co-authored-by: Chanh Nguyen <cnguyen@linkedin.com>
Xinyuan Tong
cf9815ba69 [Refactor] Multimodal data processing for VLM (#6659) 2025-06-04 11:22:33 -07:00
Signed-off-by: Xinyuan Tong <justinning0323@outlook.com>
JieXin Liang
180ff5eecc [fix] recover auto-dispatch for rmsnorm and rope (#6745) 2025-06-03 21:44:20 -07:00
Marc Sun
37f1547587 [FEAT] Add transformers backend support (#5929) 2025-06-03 21:05:29 -07:00
Cheng Wan
8a5480528d [Refactor] Rename n_share_experts_fusion as num_fused_shared_experts (#6735) 2025-06-03 17:48:24 -07:00
fzyzcjy
b6d0ce9f78 Minor add metrics to expert location updater (#6816) 2025-06-02 23:59:11 -07:00
fzyzcjy
0ea330ca34 Fix wrong weight reference in dynamic EPLB (#6818) 2025-06-02 23:26:04 -07:00
pansicheng
27e327b415 fix new_page_count_next_decode (#6671) 2025-06-02 22:48:52 -07:00
Pavani Majety
eb38c7d1ca [1/2] Add Kernel support for Cutlass based Fused FP4 MoE (#6093) 2025-06-02 13:48:03 -07:00
Signed-off-by: Pavani Majety <pmajety@nvidia.com>
fzyzcjy
df7f61ee7d Speed up rebalancing when using non-static dispatch algorithms (#6812) 2025-06-02 11:18:17 -07:00
fzyzcjy
ef21729c1d Fix profiles do not have consistent names (#6811) 2025-06-02 11:17:22 -07:00
fzyzcjy
f5159315b2 Add simple utility to dump tensors for debugging (#6815) 2025-06-02 11:15:31 -07:00
fzyzcjy
6d7b6696d4 Tiny fix EPLB assertion about rebalancing period and recorder window size (#6813) 2025-06-02 11:13:33 -07:00
fzyzcjy
6376b632eb Tiny log prefill time (#6780) 2025-06-02 10:28:27 -07:00
fzyzcjy
e05e29d178 Refactor CustomOp to avoid confusing bugs (#5382) 2025-06-02 10:27:36 -07:00
Ke Bao
a2cb5913a0 Add draft extend CUDA graph for flashinfer backend (#6805) 2025-06-02 01:51:26 -07:00
Lianmin Zheng
20fd53b8f6 Correctly abort the failed grammar requests & Improve the handling of abort (#6803) 2025-06-01 19:00:07 -07:00
Baizhou Zhang
6a47b73024 Remove contiguous before Flashinfer groupwise fp8 gemm (#6804) 2025-06-01 18:30:54 -07:00