Commit Graph

2400 Commits

Author SHA1 Message Date
Lianmin Zheng
e6b7053b60 Fix a bug in abort & Improve docstrings for abort (#6931) 2025-06-06 14:35:45 -07:00
Jianan Ji
5f91c82526 [Feature] Support Flashinfer fmha on Blackwell (#6930) 2025-06-06 12:57:50 -07:00
HAI
b819381fec AITER backend extension and workload optimizations (#6838) 2025-06-05 23:00:18 -07:00
    Co-authored-by: wunhuang <wunhuang@amd.com>
    Co-authored-by: Hubert Lu <Hubert.Lu@amd.com>
Zaili Wang
562f279a2d [CPU] enable CI for PRs, add Dockerfile and auto build task (#6458) 2025-06-05 13:43:54 -07:00
    Co-authored-by: diwei sun <diwei.sun@intel.com>
    Co-authored-by: Yineng Zhang <me@zhyncs.com>
Chang Su
8b2474898b bugfix(OAI): Fix image_data processing for jinja chat templates (#6877) 2025-06-05 13:37:01 -07:00
Pavani Majety
0df6765c83 [CUTLASS-FP4-MOE] Introduce CutlassMoEParams class for easy initialization of Cutlass Grouped Gems Metadata (#6887) 2025-06-05 13:13:14 -07:00
    Signed-off-by: Pavani Majety <pmajety@nvidia.com>
fzyzcjy
35b65cf0ca Use deepgemm instead of triton for fused_qkv_a_proj_with_mqa (#6890) 2025-06-05 11:37:05 -07:00
shangmingc
dd1012fcbe [PD] Fix potential perf spike caused by tracker gc and optimize doc (#6764) 2025-06-05 10:56:02 -07:00
    Signed-off-by: Shangming Cai <caishangming@linux.alibaba.com>
Ravi Theja
44aab7f91c oai: fix openAI client error with single request via batch api (#6170) 2025-06-05 18:21:47 +08:00
    Co-authored-by: Ravi Theja Desetty <ravitheja@Ravis-MacBook-Pro.local>
fzyzcjy
bcf66ef3e1 Tiny allow profiler API to auto create directory (#6865) 2025-06-05 00:07:03 -07:00
fzyzcjy
0de5e7d40f Support layerwise rebalancing experts (#6851) 2025-06-05 00:05:52 -07:00
fzyzcjy
72a110f664 Tiny update error hints (#6846) 2025-06-05 00:05:28 -07:00
fzyzcjy
5aff1e9392 Fix Qwen3MoE missing token padding optimization (#6820) 2025-06-05 00:04:59 -07:00
zyksir
8e3797be1c support 1 shot allreduce in 1-node and 2-node using mscclpp (#6277) 2025-06-04 22:11:24 -07:00
Lifu Huang
4474eaf552 Support LoRA in TestOpenAIVisionServer and fix fused kv_proj loading bug. (#6861) 2025-06-04 22:08:30 -07:00
Cheng Wan
499f5e620c Fix one missing arg in DeepEP (#6878) 2025-06-04 19:14:47 -07:00
Cheng Wan
81964328b7 Set num_fused_shared_experts as num_shared_experts when shared_experts fusion is not disabled (#6736) 2025-06-04 15:53:22 -07:00
ishandhanani
f0f84975f4 feat: add dp-rank to KV events (#6852) 2025-06-04 15:29:34 -07:00
Chanh Nguyen
3f1e433903 Decoder-only Scoring API (#6460) 2025-06-04 14:14:54 -07:00
    Co-authored-by: Chanh Nguyen <cnguyen@linkedin.com>
Xinyuan Tong
cf9815ba69 [Refactor] Multimodal data processing for VLM (#6659) 2025-06-04 11:22:33 -07:00
    Signed-off-by: Xinyuan Tong <justinning0323@outlook.com>
JieXin Liang
180ff5eecc [fix] recover auto-dispatch for rmsnorm and rope (#6745) 2025-06-03 21:44:20 -07:00
Marc Sun
37f1547587 [FEAT] Add transformers backend support (#5929) 2025-06-03 21:05:29 -07:00
Cheng Wan
8a5480528d [Refactor] Rename n_share_experts_fusion as num_fused_shared_experts (#6735) 2025-06-03 17:48:24 -07:00
fzyzcjy
b6d0ce9f78 Minor add metrics to expert location updater (#6816) 2025-06-02 23:59:11 -07:00
fzyzcjy
0ea330ca34 Fix wrong weight reference in dynamic EPLB (#6818) 2025-06-02 23:26:04 -07:00
pansicheng
27e327b415 fix new_page_count_next_decode (#6671) 2025-06-02 22:48:52 -07:00
Pavani Majety
eb38c7d1ca [1/2] Add Kernel support for Cutlass based Fused FP4 MoE (#6093) 2025-06-02 13:48:03 -07:00
    Signed-off-by: Pavani Majety <pmajety@nvidia.com>
fzyzcjy
df7f61ee7d Speed up rebalancing when using non-static dispatch algorithms (#6812) 2025-06-02 11:18:17 -07:00
fzyzcjy
ef21729c1d Fix profiles do not have consistent names (#6811) 2025-06-02 11:17:22 -07:00
fzyzcjy
f5159315b2 Add simple utility to dump tensors for debugging (#6815) 2025-06-02 11:15:31 -07:00
fzyzcjy
6d7b6696d4 Tiny fix EPLB assertion about rebalancing period and recorder window size (#6813) 2025-06-02 11:13:33 -07:00
fzyzcjy
6376b632eb Tiny log prefill time (#6780) 2025-06-02 10:28:27 -07:00
fzyzcjy
e05e29d178 Refactor CustomOp to avoid confusing bugs (#5382) 2025-06-02 10:27:36 -07:00
Ke Bao
a2cb5913a0 Add draft extend CUDA graph for flashinfer backend (#6805) 2025-06-02 01:51:26 -07:00
Lianmin Zheng
20fd53b8f6 Correctly abort the failed grammar requests & Improve the handling of abort (#6803) 2025-06-01 19:00:07 -07:00
Baizhou Zhang
6a47b73024 Remove contiguous before Flashinfer groupwise fp8 gemm (#6804) 2025-06-01 18:30:54 -07:00
Lifu Huang
0a9bfc20ab [Minor] Always append newline after image token when parsing chat message (#6797) 2025-05-31 20:50:33 -07:00
Yineng Zhang
34c63731fc chore: upgrade sgl-kernel v0.1.5 (#6795) 2025-05-31 18:32:00 -07:00
Lianmin Zheng
2d72fc47cf Improve profiler and integrate profiler in bench_one_batch_server (#6787) 2025-05-31 15:53:55 -07:00
Qiaolin Yu
7dc0e39442 Bump torch to 2.7.0 (#6788) 2025-05-31 14:43:12 -07:00
Yikai Zhang
fb507b7b10 [FIX] mmmu bench serving result display error (#6525) (#6791) 2025-05-31 13:48:06 -07:00
storyicon
f90945c45a fix(PD-disaggregation): Can not get local ip (#6792) 2025-05-31 13:47:14 -07:00
    Signed-off-by: storyicon <storyicon@foxmail.com>
Lifu Huang
094fbdacd5 Fix incorrect LoRA weight loading for fused gate_up_proj (#6734) 2025-05-31 13:41:44 -07:00
YanbingJiang
888cb175a6 Add intel_amx backend for Radix Attention for CPU (#6408) 2025-05-30 21:37:42 -07:00
    Co-authored-by: Chunyuan WU <chunyuan.wu@intel.com>
    Co-authored-by: Thien Tran <gau.nernst@yahoo.com.sg>
Cheng Wan
ced3c07afe Support token-level quantization for EP MoE (#6782) 2025-05-30 17:26:30 -07:00
Chang Su
f18b068f15 feat(tool call): Enhance Llama32Detector for improved JSON parsing in non-stream (#6784) 2025-05-30 17:05:17 -07:00
Chao Yang
4fac524b14 update llama4 chat template and pythonic parser (#6679) 2025-05-30 17:01:22 -07:00
    Co-authored-by: Chang Su <chang.s.su@oracle.com>
Cheng Wan
b581b22504 Fix one bug in the grouped-gemm triton kernel (#6772) 2025-05-30 01:42:08 -07:00
Li Hui
69dd878b51 Fix shared experts fusion error (#6289) 2025-05-30 01:16:11 -07:00
Jianan Ji
22630ca242 Support sliding window in triton backend (#6509) 2025-05-30 01:11:53 -07:00