Commit Graph

3006 Commits

Author | SHA1 | Message | Date
kk | 32d9e39a29 | Fix potential memory fault issue and ncclSystemError in CI test (#8681) | 2025-08-05 12:19:37 -07:00
    Co-authored-by: wunhuang <wunhuang@amd.com>
Yineng Zhang | 4f4e0e4162 | chore: upgrade flashinfer 0.2.10 (#8827) | 2025-08-05 12:04:01 -07:00
Yineng Zhang | 901ab758ec | chore: upgrade transformers 4.55.0 (#8823) | 2025-08-05 11:37:21 -07:00
    Co-authored-by: hebiao064 <hebiaobuaa@gmail.com>
Yuxuan Zhang | a4b0d5c9e5 | GLM-4.5 and GLM-4.5-Air both support (#8804) | 2025-08-05 03:29:20 -07:00
eigen | 40e3b2beeb | feat: add trtllm-gen mha from direct call (#8782) | 2025-08-05 03:28:39 -07:00
    Co-authored-by: Baizhou Zhang <sobereddiezhang@gmail.com>
Yineng Zhang | 5e91fed1c5 | Revert "[NVIDIA]Fix local_num_experts for EP (#8779)" (#8797) | 2025-08-04 23:30:43 -07:00
Yuhao Yao | 873f384a51 | [feat] Add detail in image_data (#8596) | 2025-08-05 14:01:38 +08:00
Shu Wang | b01eeb80f8 | [NVIDIA]Fix local_num_experts for EP (#8779) | 2025-08-04 22:01:14 -07:00
Yineng Zhang | 1ea94d3b92 | chore: upgrade flashinfer v0.2.9 (#8780) | 2025-08-04 21:59:18 -07:00
Shangming Cai | d98a4913ea | [PD] Refactor parallel sizes and add pp support for mooncake (#8571) | 2025-08-04 20:18:11 -07:00
    Signed-off-by: Shangming Cai <caishangming@linux.alibaba.com>
kk | d4bf5a8524 | Support OCP MXFP4 quantization on AMD GPUs (#8255) | 2025-08-04 18:14:52 -07:00
    Co-authored-by: wunhuang <wunhuang@amd.com>
    Co-authored-by: Hubert Lu <Hubert.Lu@amd.com>
Lifu Huang | 7cb20754fa | [Fix] Fix several issues preventing gemma3n LoRA support. (#8776) | 2025-08-04 17:11:46 -07:00
Kaixi Hou | 6d0646da11 | [NVIDIA] Fix breakage of using trtllm-gen fp8 moe (#8773) | 2025-08-04 16:30:13 -07:00
Trevor Morris | 9bd4872a34 | [bugfix] Fix typo in modelopt quant: 'FusedMoE' object has no attribute 'local_num_experts' (#8768) | 2025-08-04 11:08:08 -07:00
azhurkevich | 915140fd18 | [NVIDIA] Add Low Latency NVFP4 decode kernels from Flashinfer (#8552) | 2025-08-04 03:10:02 -07:00
    Co-authored-by: Cheng Wan <cwan@x.ai>
Baron Liu | 36fc9260a2 | [bugfix] fix import path in HiCacheController (#8749) | 2025-08-03 22:19:15 -07:00
Even Zhou | fee0ab0fba | [CI] Ascend NPU CI enhancement (#8294) | 2025-08-03 22:16:38 -07:00
    Co-authored-by: ronnie_zheng <zl19940307@163.com>
Baizhou Zhang | f2d68ded6d | Rename lora_path to lora_id in batches (#8437) | 2025-08-03 21:08:28 -07:00
Yuan Luo | 3b87a9e8ae | Fix bug of refactoring TopKOutput in w4afp8 (#8745) | 2025-08-03 20:05:02 -07:00
    Co-authored-by: luoyuan.luo <luoyuan.luo@antgroup.com>
YyWangCS | f024795e57 | Replace torch.jit.script with torch.compile in get_masked_input_and_mask to fix benchmark underreporting (#8733) | 2025-08-03 19:02:51 -07:00
Cheng Wan | b102353f8f | [MoE] Enable renormalize=False in Triton kernels (#8735) | 2025-08-03 17:03:04 -07:00
huangtingwei | 76ba5bbe12 | fix args typo in memory_pool_host (#8662) | 2025-08-03 13:47:29 -07:00
Yingchun Lai | ed6f7597b3 | Fix the missing 'lof' choice of --schedule-policy server args (#7114) | 2025-08-03 12:29:42 -07:00
tql.99 | e67276ecb3 | feat: support cutlass_moe_fp8 kernel for fusedmoe in sm90 (#8678) | 2025-08-03 10:47:15 -07:00
Ke Bao | 0242bb9c74 | Fix triton kernels topk with keyword arguments (#8732) | 2025-08-03 10:45:15 -07:00
Yuxuan Zhang | 760286e3d3 | use fp32 for e_score_correction_bias in GLM-4.5 (#8729) | 2025-08-03 10:43:40 -07:00
Zilin Zhu | 3435a24e81 | [RL] fix update weight for FusedMoE with EP (#8676) | 2025-08-03 10:20:39 -07:00
yhyang201 | 00da906584 | feat: Support DP Attention for step3_vl (#8699) | 2025-08-03 19:35:26 +08:00
Yineng Zhang | 8cd344586e | chore: bump v0.4.10.post2 (#8727) | 2025-08-03 03:43:29 -07:00
Cheng Wan | 0e0eef00ce | [DP] fix the compatibility issue between DP attention and --attention-backend triton (#8723) | 2025-08-03 03:06:57 -07:00
Cheng Wan | cb099d2095 | [CUDA Graph] save cuda graph memory by using next_token_logits_buffer (#8579) | 2025-08-03 03:06:47 -07:00
Cheng Wan | 7a91330149 | Save cuda graph memory for fa3 (#8567) | 2025-08-03 03:06:31 -07:00
ybyang | 6f9baf1002 | [Improvements] Merge health check route (#8444) | 2025-08-03 01:59:06 -07:00
    Signed-off-by: ybyang <ybyang7@iflytek.com>
    Co-authored-by: Lianmin Zheng <lianminzheng@gmail.com>
    Co-authored-by: Kan Wu <wukanustc@gmail.com>
Jasper James | a31b7a7024 | feat: Add new moe triton for NVIDIA RTX 6000 Ada (#8547) | 2025-08-03 00:57:35 -07:00
Varun Vinayak Shenoy | 7ed8e51bc3 | [fix] Fix divide by zero error for llama4. (#8683) | 2025-08-03 00:55:55 -07:00
Trevor Morris | 32f2815451 | Do layernorm before allgather for DP attention (#8631) | 2025-08-03 00:53:08 -07:00
Guanhua Wang | f7b2853ff8 | [feat] support minimum token load balance in dp attention (#7379) | 2025-08-03 00:46:47 -07:00
Zhiqiang Xie | b0add2da00 | HiCache storage, style change and bug fix (#8719) | 2025-08-03 15:05:04 +08:00
Wenxuan Tan | 0305c5053f | Reduce memory accumulation in long-running server (#8306) | 2025-08-03 15:03:16 +08:00
    Co-authored-by: Liangsheng Yin <hnyls2002@gmail.com>
Lifu Huang | 8675bdf246 | Support limiting max loaded loras in CPU. (#8650) | 2025-08-03 00:02:23 -07:00
Cheng Wan | a437aa9987 | [hotfix] fix mixtral with tensor-level compressed-tensor quantization (#8721) | 2025-08-02 22:59:25 -07:00
fzyzcjy | 0e612dbf12 | Tiny fix CI pytest error (#8524) | 2025-08-02 22:48:42 -07:00
Liangsheng Yin | 9f47d686e5 | Fix fused MoE when routed_scaling_factor is None (#8709) | 2025-08-03 12:42:01 +08:00
DarkSharpness | e273aa6dcf | [Feature] Radix Tree in C++ (#7369) | 2025-08-02 19:50:14 -07:00
fzyzcjy | 8ada1ab6c7 | Fix triton moe error caused by TopK refactor (#8705) | 2025-08-02 18:49:47 -07:00
Lianmin Zheng | e314b084c5 | [FIX] Fix the nightly CI by disabling swa mem pool for gemma2 (#8693) | 2025-08-02 18:43:14 -07:00
fzyzcjy | 403566bcca | Remove assertions about per group quant fp8 (#8717) | 2025-08-02 17:08:40 -07:00
Stefan He | 4ca43b061c | Add tensor.detach() back to update weight util (#8691) | 2025-08-02 00:41:05 -07:00
Wenchen Lo | ea93079b30 | model: adapt mllama4 to VisionAttention (#8512) | 2025-08-02 00:39:40 -07:00
    Co-authored-by: root <mickjagger19@icloud.com>
Yusong Gao | 4bec99ecd0 | Fix: resolve prefill of retracted request out-of-memory issue when ignore_eos is enabled (#7434) | 2025-08-02 14:43:45 +08:00