Commit Graph

4406 Commits

Author SHA1 Message Date
Yineng Zhang
02bc1c7d80 chore: bump sgl-kernel v0.3.1 (#8771) 2025-08-04 13:18:54 -07:00
Qiaolin Yu
fc8c8e5041 Integrate triton_kernels in sgl-kernel (#8762) 2025-08-04 12:12:14 -07:00
Trevor Morris
9bd4872a34 [bugfix] Fix typo in modelopt quant: 'FusedMoE' object has no attribute 'local_num_experts' (#8768) 2025-08-04 11:08:08 -07:00
Simo Lin
2fa0462c39 [router] introduce dp worker abstraction (#8639) 2025-08-04 06:42:20 -07:00
azhurkevich
915140fd18 [NVIDIA] Add Low Latency NVFP4 decode kernels from Flashinfer (#8552) 2025-08-04 03:10:02 -07:00
Co-authored-by: Cheng Wan <cwan@x.ai>
Baron Liu
36fc9260a2 [bugfix] fix import path in HiCacheController (#8749) 2025-08-03 22:19:15 -07:00
Even Zhou
fee0ab0fba [CI] Ascend NPU CI enhancement (#8294) 2025-08-03 22:16:38 -07:00
Co-authored-by: ronnie_zheng <zl19940307@163.com>
Xiaoyu Zhang
f57d2dc162 [sgl-kernel] avoid per_token_quant_fp8.cu hardcode sm_count (#8738) 2025-08-04 12:55:57 +08:00
Baizhou Zhang
f2d68ded6d Rename lora_path to lora_id in batches (#8437) 2025-08-03 21:08:28 -07:00
Yuan Luo
3b87a9e8ae Fix bug of refactoring TopKOutput in w4afp8 (#8745) 2025-08-03 20:05:02 -07:00
Co-authored-by: luoyuan.luo <luoyuan.luo@antgroup.com>
YyWangCS
f024795e57 Replace torch.jit.script with torch.compile in get_masked_input_and_mask to fix benchmark underreporting (#8733) 2025-08-03 19:02:51 -07:00
Cheng Wan
b102353f8f [MoE] Enable renormalize=False in Triton kernels (#8735) 2025-08-03 17:03:04 -07:00
Liangsheng Yin
7a27e798ca [CI] Do not trigger pd-disaggregation CI in draft PR (#8737) 2025-08-04 05:12:20 +08:00
huangtingwei
76ba5bbe12 fix args typo in memory_pool_host (#8662) 2025-08-03 13:47:29 -07:00
Yingchun Lai
ed6f7597b3 Fix the missing 'lof' choice of --schedule-policy server args (#7114) 2025-08-03 12:29:42 -07:00
tql.99
e67276ecb3 feat: support cutlass_moe_fp8 kernel for fusedmoe in sm90 (#8678) 2025-08-03 10:47:15 -07:00
Ke Bao
0242bb9c74 Fix triton kernels topk with keyword arguments (#8732) 2025-08-03 10:45:15 -07:00
Yuxuan Zhang
760286e3d3 use fp32 for e_score_correction_bias in GLM-4.5 (#8729) 2025-08-03 10:43:40 -07:00
Zilin Zhu
3435a24e81 [RL] fix update weight for FusedMoE with EP (#8676) 2025-08-03 10:20:39 -07:00
yhyang201
00da906584 feat: Support DP Attention for step3_vl (#8699) 2025-08-03 19:35:26 +08:00
Yineng Zhang
8cd344586e chore: bump v0.4.10.post2 (#8727) 2025-08-03 03:43:29 -07:00
Cheng Wan
0e0eef00ce [DP] fix the compatibility issue between DP attention and --attention-backend triton (#8723) 2025-08-03 03:06:57 -07:00
Cheng Wan
cb099d2095 [CUDA Graph] save cuda graph memory by using next_token_logits_buffer (#8579) 2025-08-03 03:06:47 -07:00
Cheng Wan
7a91330149 Save cuda graph memory for fa3 (#8567) 2025-08-03 03:06:31 -07:00
Yineng Zhang
5ce5093b97 chore: bump sgl-kernel 0.3.0 with torch 2.8.0 (#8718) 2025-08-03 02:31:50 -07:00
ybyang
6f9baf1002 [Improvements] Merge health check route (#8444) 2025-08-03 01:59:06 -07:00
Signed-off-by: ybyang <ybyang7@iflytek.com>
Co-authored-by: Lianmin Zheng <lianminzheng@gmail.com>
Co-authored-by: Kan Wu <wukanustc@gmail.com>
Jasper James
a31b7a7024 feat: Add new moe triton for NVIDIA RTX 6000 Ada (#8547) 2025-08-03 00:57:35 -07:00
Varun Vinayak Shenoy
7ed8e51bc3 [fix] Fix divide by zero error for llama4. (#8683) 2025-08-03 00:55:55 -07:00
Trevor Morris
32f2815451 Do layernorm before allgather for DP attention (#8631) 2025-08-03 00:53:08 -07:00
Guanhua Wang
f7b2853ff8 [feat] support minimum token load balance in dp attention (#7379) 2025-08-03 00:46:47 -07:00
Zhiqiang Xie
b0add2da00 HiCache storage, style change and bug fix (#8719) 2025-08-03 15:05:04 +08:00
Wenxuan Tan
0305c5053f Reduce memory accumulation in long-running server (#8306) 2025-08-03 15:03:16 +08:00
Co-authored-by: Liangsheng Yin <hnyls2002@gmail.com>
Lifu Huang
8675bdf246 Support limiting max loaded loras in CPU. (#8650) 2025-08-03 00:02:23 -07:00
Cheng Wan
a437aa9987 [hotfix] fix mixtral with tensor-level compressed-tensor quantization (#8721) 2025-08-02 22:59:25 -07:00
fzyzcjy
0e612dbf12 Tiny fix CI pytest error (#8524) 2025-08-02 22:48:42 -07:00
Liangsheng Yin
9f47d686e5 Fix fused MoE when routed_scaling_factor is None (#8709) 2025-08-03 12:42:01 +08:00
Qi Yuhang
d9def43dcd [Perf] Use Cooperative Schedule for H100 & H200 & H800 in fp8_blockwise_scaled_grouped_mm (#8722) 2025-08-02 21:13:47 -07:00
DarkSharpness
e273aa6dcf [Feature] Radix Tree in C++ (#7369) 2025-08-02 19:50:14 -07:00
Simo Lin
828a4fe944 [router] Implement HTTP Dependency Injection Pattern for Router System (#8714) 2025-08-02 19:16:47 -07:00
fzyzcjy
8ada1ab6c7 Fix triton moe error caused by TopK refactor (#8705) 2025-08-02 18:49:47 -07:00
Lianmin Zheng
e314b084c5 [FIX] Fix the nightly CI by disabling swa mem pool for gemma2 (#8693) 2025-08-02 18:43:14 -07:00
fzyzcjy
403566bcca Remove assertions about per group quant fp8 (#8717) 2025-08-02 17:08:40 -07:00
Yineng Zhang
0a56b721d5 chore: bump sgl-kernel v0.2.9 (#8713) 2025-08-02 16:21:56 -07:00
Liangsheng Yin
603f5ce020 [Bug] fix green context's incompatibility with cuda < 12.4 (#8701) 2025-08-02 15:23:11 -07:00
Simo Lin
6d4fd8826e [router] minor code clean up and refactoring (#8711) 2025-08-02 13:46:31 -07:00
Liangsheng Yin
f9f0138f80 Revert "[1/2] sgl-kernel: Fuse routed scaling factor into select_experts" (#8706) 2025-08-02 20:14:30 +08:00
PGFLMG
ac6962ccd6 [Doc] Polish sgl-kernel readme for cu126 build error (#8704) 2025-08-02 17:03:07 +08:00
Stefan He
4ca43b061c Add tensor.detach() back to update weight util (#8691) 2025-08-02 00:41:05 -07:00
Wenchen Lo
ea93079b30 model: adapt mllama4 to VisionAttention (#8512) 2025-08-02 00:39:40 -07:00
Co-authored-by: root <mickjagger19@icloud.com>
Yusong Gao
4bec99ecd0 Fix: resolve prefill of retracted request out-of-memory issue when ignore_eos is enabled (#7434) 2025-08-02 14:43:45 +08:00