Ke Bao | 399e7ec8b3 | Refine naming (#8868) | 2025-08-06 21:37:02 +08:00
Ying Sheng | 168033d5fb | Support mxfp4 for GPT-OSS (#8843) | 2025-08-06 00:05:25 -07:00
    Co-authored-by: fzyzcjy <ch271828n@outlook.com>
    Co-authored-by: fzyzcjy <5236035+fzyzcjy@users.noreply.github.com>
    Co-authored-by: zhuofan1123 <zhuofanl@nvidia.com>
    Co-authored-by: liz-badada <jinyanc@nvidia.com>
    Co-authored-by: xutizhou <xutingz@nvidia.com>
    Co-authored-by: linhu-nv <linhu@nvidia.com>
Stefan He | cbbb738371 | [2/3] Optimize Slime update weights: avoid GPU-to-CPU device sync when updating expert weights (#8753) | 2025-08-05 22:09:52 -07:00
Stefan He | 89588179cf | [1/3] Optimize Slime update weights: remove Qwen3MoE load-weight overhead (#8751) | 2025-08-05 22:07:54 -07:00
HouseWest | ca47e24f5d | [Feature] Improve TBO: two-chunk overlap (#8144) | 2025-08-05 21:11:01 -07:00
Praneth Paruchuri | d26ca84f39 | Support Bailing MoE (#8680) | 2025-08-05 20:40:34 -07:00
Ke Bao | 8128e08d36 | Turn off hybrid cache by default (#8839) | 2025-08-06 09:53:45 +08:00
Yineng Zhang | 3ae8e3ea8f | chore: upgrade torch to 2.8.0 (#8836) | 2025-08-05 17:32:01 -07:00
Ying Sheng | c1d2061f97 | Add initial support for gpt-oss (#8824) | 2025-08-05 13:42:01 -07:00
Yineng Zhang | 556e4143f0 | fix: remove unused import (#8809) | 2025-08-05 13:40:22 -07:00
kk | 32d9e39a29 | Fix potential memory-fault issue and ncclSystemError in CI tests (#8681) | 2025-08-05 12:19:37 -07:00
    Co-authored-by: wunhuang <wunhuang@amd.com>
Yineng Zhang | 4f4e0e4162 | chore: upgrade flashinfer to 0.2.10 (#8827) | 2025-08-05 12:04:01 -07:00
Yineng Zhang | 901ab758ec | chore: upgrade transformers to 4.55.0 (#8823) | 2025-08-05 11:37:21 -07:00
    Co-authored-by: hebiao064 <hebiaobuaa@gmail.com>
Yuxuan Zhang | a4b0d5c9e5 | Support both GLM-4.5 and GLM-4.5-Air (#8804) | 2025-08-05 03:29:20 -07:00
eigen | 40e3b2beeb | feat: add trtllm-gen MHA via direct call (#8782) | 2025-08-05 03:28:39 -07:00
    Co-authored-by: Baizhou Zhang <sobereddiezhang@gmail.com>
Yineng Zhang | 5e91fed1c5 | Revert "[NVIDIA] Fix local_num_experts for EP (#8779)" (#8797) | 2025-08-04 23:30:43 -07:00
Yuhao Yao | 873f384a51 | [feat] Add detail in image_data (#8596) | 2025-08-05 14:01:38 +08:00
Shu Wang | b01eeb80f8 | [NVIDIA] Fix local_num_experts for EP (#8779) | 2025-08-04 22:01:14 -07:00
Yineng Zhang | 1ea94d3b92 | chore: upgrade flashinfer to v0.2.9 (#8780) | 2025-08-04 21:59:18 -07:00
Shangming Cai | d98a4913ea | [PD] Refactor parallel sizes and add PP support for Mooncake (#8571) | 2025-08-04 20:18:11 -07:00
    Signed-off-by: Shangming Cai <caishangming@linux.alibaba.com>
kk | d4bf5a8524 | Support OCP MXFP4 quantization on AMD GPUs (#8255) | 2025-08-04 18:14:52 -07:00
    Co-authored-by: wunhuang <wunhuang@amd.com>
    Co-authored-by: Hubert Lu <Hubert.Lu@amd.com>
Lifu Huang | 7cb20754fa | [Fix] Fix several issues preventing gemma3n LoRA support (#8776) | 2025-08-04 17:11:46 -07:00
Kaixi Hou | 6d0646da11 | [NVIDIA] Fix breakage when using trtllm-gen fp8 MoE (#8773) | 2025-08-04 16:30:13 -07:00
Trevor Morris | 9bd4872a34 | [bugfix] Fix typo in modelopt quant: 'FusedMoE' object has no attribute 'local_num_experts' (#8768) | 2025-08-04 11:08:08 -07:00
azhurkevich | 915140fd18 | [NVIDIA] Add low-latency NVFP4 decode kernels from FlashInfer (#8552) | 2025-08-04 03:10:02 -07:00
    Co-authored-by: Cheng Wan <cwan@x.ai>
Baron Liu | 36fc9260a2 | [bugfix] Fix import path in HiCacheController (#8749) | 2025-08-03 22:19:15 -07:00
Even Zhou | fee0ab0fba | [CI] Ascend NPU CI enhancement (#8294) | 2025-08-03 22:16:38 -07:00
    Co-authored-by: ronnie_zheng <zl19940307@163.com>
Baizhou Zhang | f2d68ded6d | Rename lora_path to lora_id in batches (#8437) | 2025-08-03 21:08:28 -07:00
Yuan Luo | 3b87a9e8ae | Fix bug from refactoring TopKOutput in w4afp8 (#8745) | 2025-08-03 20:05:02 -07:00
    Co-authored-by: luoyuan.luo <luoyuan.luo@antgroup.com>
YyWangCS | f024795e57 | Replace torch.jit.script with torch.compile in get_masked_input_and_mask to fix benchmark underreporting (#8733) | 2025-08-03 19:02:51 -07:00
Cheng Wan | b102353f8f | [MoE] Enable renormalize=False in Triton kernels (#8735) | 2025-08-03 17:03:04 -07:00
huangtingwei | 76ba5bbe12 | Fix args typo in memory_pool_host (#8662) | 2025-08-03 13:47:29 -07:00
Yingchun Lai | ed6f7597b3 | Fix the missing 'lof' choice for the --schedule-policy server arg (#7114) | 2025-08-03 12:29:42 -07:00
tql.99 | e67276ecb3 | feat: support cutlass_moe_fp8 kernel for FusedMoE on sm90 (#8678) | 2025-08-03 10:47:15 -07:00
Ke Bao | 0242bb9c74 | Fix Triton kernels topk with keyword arguments (#8732) | 2025-08-03 10:45:15 -07:00
Yuxuan Zhang | 760286e3d3 | Use fp32 for e_score_correction_bias in GLM-4.5 (#8729) | 2025-08-03 10:43:40 -07:00
Zilin Zhu | 3435a24e81 | [RL] Fix weight update for FusedMoE with EP (#8676) | 2025-08-03 10:20:39 -07:00
yhyang201 | 00da906584 | feat: support DP attention for step3_vl (#8699) | 2025-08-03 19:35:26 +08:00
Yineng Zhang | 8cd344586e | chore: bump v0.4.10.post2 (#8727) | 2025-08-03 03:43:29 -07:00
Cheng Wan | 0e0eef00ce | [DP] Fix the compatibility issue between DP attention and --attention-backend triton (#8723) | 2025-08-03 03:06:57 -07:00
Cheng Wan | cb099d2095 | [CUDA Graph] Save CUDA graph memory by using next_token_logits_buffer (#8579) | 2025-08-03 03:06:47 -07:00
Cheng Wan | 7a91330149 | Save CUDA graph memory for FA3 (#8567) | 2025-08-03 03:06:31 -07:00
ybyang | 6f9baf1002 | [Improvements] Merge health-check routes (#8444) | 2025-08-03 01:59:06 -07:00
    Signed-off-by: ybyang <ybyang7@iflytek.com>
    Co-authored-by: Lianmin Zheng <lianminzheng@gmail.com>
    Co-authored-by: Kan Wu <wukanustc@gmail.com>
Jasper James | a31b7a7024 | feat: add new MoE Triton config for NVIDIA RTX 6000 Ada (#8547) | 2025-08-03 00:57:35 -07:00
Varun Vinayak Shenoy | 7ed8e51bc3 | [fix] Fix divide-by-zero error for llama4 (#8683) | 2025-08-03 00:55:55 -07:00
Trevor Morris | 32f2815451 | Do layernorm before allgather for DP attention (#8631) | 2025-08-03 00:53:08 -07:00
Guanhua Wang | f7b2853ff8 | [feat] Support minimum-token load balance in DP attention (#7379) | 2025-08-03 00:46:47 -07:00
Zhiqiang Xie | b0add2da00 | HiCache storage: style changes and bug fixes (#8719) | 2025-08-03 15:05:04 +08:00
Wenxuan Tan | 0305c5053f | Reduce memory accumulation in long-running server (#8306) | 2025-08-03 15:03:16 +08:00
    Co-authored-by: Liangsheng Yin <hnyls2002@gmail.com>
Lifu Huang | 8675bdf246 | Support limiting max loaded LoRAs in CPU (#8650) | 2025-08-03 00:02:23 -07:00