Commit Graph

2970 Commits

Author SHA1 Message Date
Guanhua Wang
f7b2853ff8 [feat] support minimum token load balance in dp attention (#7379) 2025-08-03 00:46:47 -07:00
Zhiqiang Xie
b0add2da00 HiCache storage, style change and bug fix (#8719) 2025-08-03 15:05:04 +08:00
Wenxuan Tan
0305c5053f Reduce memory accumulation in long-running server (#8306)
Co-authored-by: Liangsheng Yin <hnyls2002@gmail.com>
2025-08-03 15:03:16 +08:00
Lifu Huang
8675bdf246 Support limiting max loaded loras in CPU. (#8650) 2025-08-03 00:02:23 -07:00
Cheng Wan
a437aa9987 [hotfix] fix mixtral with tensor-level compressed-tensor quantization (#8721) 2025-08-02 22:59:25 -07:00
fzyzcjy
0e612dbf12 Tiny fix CI pytest error (#8524) 2025-08-02 22:48:42 -07:00
Liangsheng Yin
9f47d686e5 Fix fused MoE when routed_scaling_factor is None (#8709) 2025-08-03 12:42:01 +08:00
DarkSharpness
e273aa6dcf [Feature] Radix Tree in C++ (#7369) 2025-08-02 19:50:14 -07:00
fzyzcjy
8ada1ab6c7 Fix triton moe error caused by TopK refactor (#8705) 2025-08-02 18:49:47 -07:00
Lianmin Zheng
e314b084c5 [FIX] Fix the nightly CI by disabling swa mem pool for gemma2 (#8693) 2025-08-02 18:43:14 -07:00
fzyzcjy
403566bcca Remove assertions about per group quant fp8 (#8717) 2025-08-02 17:08:40 -07:00
Stefan He
4ca43b061c Add tensor.detach() back to update weight util (#8691) 2025-08-02 00:41:05 -07:00
Wenchen Lo
ea93079b30 model: adapt mllama4 to VisionAttention (#8512)
Co-authored-by: root <mickjagger19@icloud.com>
2025-08-02 00:39:40 -07:00
Yusong Gao
4bec99ecd0 Fix: resolve prefill of retracted request out-of-memory issue when ignore_eos is enabled (#7434) 2025-08-02 14:43:45 +08:00
Trevor Morris
89caf7a3c6 [bugfix] Apply routed scaling factor to cutlass_fused_experts_fp8 (#8688) 2025-08-01 19:00:24 -07:00
Nicolas Castet
82e6c3a65a Add support for NCCL symmetric memory for TP allreduces (#8238) 2025-08-01 23:30:55 +00:00
Baron Liu
b89d37cb11 [bugfix] Add 'disaggregation_mode' parameter to warmup function when compile deep_gemm manually (#8618) 2025-08-01 16:02:53 -07:00
Swipe4057
5deab1283a upgrade xgrammar 0.1.22 (#8522) 2025-08-01 15:59:15 -07:00
hzh0425
d1c4d51c08 bugfix(hicache): Fix 'MooncakeStore' not defined error. (#8668) 2025-08-01 15:58:17 -07:00
Ke Bao
e252192679 Fix deepgemm masked grouped gemm jit compile (#8679) 2025-08-01 15:37:59 -07:00
Trevor Morris
6a7528e623 [bugfix] Fix page size for create_flashmla_kv_indices_triton() for cutlass mla (#8685) 2025-08-01 14:28:04 -07:00
Minglei Zhu
2ae95d17e8 Disable tp for shared experts under expert parallelism for GLM4.5 model (#8647)
Co-authored-by: Stefan He <hebiaobuaa@gmail.com>
Co-authored-by: Cheng Wan <54331508+ch-wan@users.noreply.github.com>
2025-08-01 12:02:35 -07:00
萝卜菜
2d401bd99d [fix] fix pd disagg error of vlms (#8094) 2025-08-02 02:16:29 +08:00
Cheng Wan
6c88f6c8d9 [5/N] MoE Refactor: Update MoE parallelism arguments (#8658) 2025-08-01 01:20:03 -07:00
Binyao Jiang
c8d3a402c1 Bug: apply final_hidden_states*=self.routed_scaling_factor at MoE lay… (#8511)
Co-authored-by: Cheng Wan <54331508+ch-wan@users.noreply.github.com>
2025-08-01 00:07:41 -07:00
Xinyuan Tong
7e831efee8 Fix chat template handling for OpenAI serving (#8635)
Signed-off-by: Xinyuan Tong <justinning0323@outlook.com>
Signed-off-by: Xinyuan Tong <xinyuantong.cs@gmail.com>
2025-07-31 21:49:45 -07:00
pansicheng
20b5563eda Add hf3fs_utils.cpp to package-data (#8653) 2025-08-01 12:41:09 +08:00
Ke Bao
33f0de337d chore: bump v0.4.10.post1 (#8652) 2025-08-01 12:07:30 +08:00
Baizhou Zhang
e7e5a3050a Update batch size limitation of dsv3_router_gemm kernel to 16 (#8051) 2025-08-01 11:53:31 +08:00
Zhiqiang Xie
dd7ca00601 Interface change for kvcache io to support page first layout (#8318) 2025-08-01 11:37:49 +08:00
Zhiqiang Xie
9305ea6c2d HiCache, fixing hash value indexing (#8636) 2025-08-01 11:29:51 +08:00
Kaixi Hou
aa4c66b564 [NVIDIA] Enable Flashinfer MoE blockscale fp8 backend for TP MoE (#8450)
Co-authored-by: kushanam <42385577+kushanam@users.noreply.github.com>
2025-07-31 19:56:34 -07:00
Even Zhou
99795d61e6 [Bugfix] fix w8a8_int8 load issue (#8308)
Co-authored-by: ronnie_zheng <zl19940307@163.com>
2025-07-31 17:30:16 -07:00
yrk111222
04913430c6 Feature/modelscope model download (#8083)
Co-authored-by: ronnie_zheng <zl19940307@163.com>
2025-07-31 17:29:31 -07:00
Yineng Zhang
0ad098b494 Revert "Fix nan value generated after custom all reduce (#8532)" (#8642) 2025-07-31 17:26:49 -07:00
kk
4a6e7a66a0 Fix nan value generated after custom all reduce (#8532) 2025-07-31 16:15:43 -07:00
Faraz
4b04998d38 TRTLLM Gen MLA Decode Kernel Integration (same as #7938) (#8632)
Signed-off-by: Faraz Khoubsirat <58580514+farazkh80@users.noreply.github.com>
2025-07-31 16:03:40 -07:00
pansicheng
3dde86194a Conditionally import HiCacheHF3FS (#8598)
Co-authored-by: Zhiqiang Xie <xiezhq@stanford.edu>
2025-07-31 14:59:29 -07:00
Trevor Morris
b7170cc820 [bugfix] Fix flashinfer cutlass EP moe after MoE refactor (#8630) 2025-07-31 13:57:08 -07:00
Simo Lin
5c14515fec [bug] remove pdlb from minilb since it's no longer available (#8634) 2025-07-31 13:54:02 -07:00
Vishwanath Venkatesan
2cd2e27f80 SGLang HiCache NIXL Connector (#8488)
Signed-off-by: Vishwanath Venkatesan <vvenkatesan@nvidia.com>
Co-authored-by: Moein Khazraee <moein@nvidia.com>
Co-authored-by: Zhiqiang Xie <xiezhq@stanford.edu>
2025-07-31 13:09:42 -07:00
Chang Su
743638bc03 misc: Remove debug print to logger.info (#8633) 2025-07-31 12:56:52 -07:00
Brayden Zhong
4acf690206 [Optimization][Perf] Disable the GC during CUDA graph capture to speed up by up to 3x (#8577) 2025-07-31 11:31:21 -07:00
Ke Bao
8fbcfd0723 Update step3v default config (#8626) 2025-08-01 00:49:26 +08:00
Ke Bao
3c307dc057 Fix hf3fs_fuse import error (#8623) 2025-07-31 22:42:31 +08:00
Shangming Cai
016fd25127 [PD] Use batch transfer for rdma transport and add notes for mnnvl usage (#8595)
Signed-off-by: Shangming Cai <caishangming@linux.alibaba.com>
2025-07-31 21:29:34 +08:00
Yineng Zhang
023288645b chore: bump v0.4.10 (#8608) 2025-07-31 20:50:17 +08:00
Cheng Wan
7a1f7fc504 [Feature] Hybrid EP and TP (#8590) 2025-07-31 02:53:25 -07:00
Chang Su
51c38163c1 model: support Step3V (#8583)
Signed-off-by: Xinyuan Tong <justinning0323@outlook.com>
Co-authored-by: nnnobody-code <nnnobody@foxmail.com>
Co-authored-by: ispobock <ispobaoke@gmail.com>
Co-authored-by: Qiaolin-Yu <qy254@cornell.edu>
Co-authored-by: Qiaolin-Yu <liin1211@outlook.com>
Co-authored-by: Xinyuan Tong <justinning0323@outlook.com>
Co-authored-by: Xinyuan Tong <115166877+JustinTong0323@users.noreply.github.com>
2025-07-31 02:41:00 -07:00
Cheng Wan
32fa1e9cc2 [4/N] MoE Refactor: Unified Triton Kernel for FusedMoE and EPMoE (#8515) 2025-07-31 02:34:02 -07:00