Commit Graph

4446 Commits

Author SHA1 Message Date
Yineng Zhang
cbbd685a46 chore: use torch 2.8 stable (#8880) 2025-08-06 15:51:40 -07:00
Cheng Wan
78aad91037 [CI] fix pip upgrade (#8881) 2025-08-06 15:02:32 -07:00
Shu Wang
288ae41f7a [NVIDIA] Fix num_experts in modelopt_quant (#8811) 2025-08-06 14:35:07 -07:00
Mick
01c99a9959 chore: update Dockerfile (#8872)
Co-authored-by: zhyncs <me@zhyncs.com>
2025-08-06 09:30:33 -07:00
fzyzcjy
b114a8105b Support B200 in CI (#8861) 2025-08-06 21:42:44 +08:00
Ke Bao
0475448ee3 Optimize triton swa kernel by skipping computation (#8860) 2025-08-06 21:37:50 +08:00
Ke Bao
399e7ec8b3 Refine naming (#8868) 2025-08-06 21:37:02 +08:00
Yuan Luo
1bd5316873 fix benchmark fp8 blockwise group gemm (#8815) 2025-08-06 21:02:21 +08:00
Yineng Zhang
aeac900ca2 fix: resolve ci issue (#8859) 2025-08-06 02:28:14 -07:00
Ke Bao
4fc5f2f977 Add unit test for triton swa kernel (#8853) 2025-08-06 16:10:38 +08:00
Ying Sheng
168033d5fb Support mxfp4 for GPT-OSS (#8843)
Co-authored-by: fzyzcjy <ch271828n@outlook.com>
Co-authored-by: fzyzcjy <5236035+fzyzcjy@users.noreply.github.com>
Co-authored-by: zhuofan1123 <zhuofanl@nvidia.com>
Co-authored-by: liz-badada <jinyanc@nvidia.com>
Co-authored-by: xutizhou <xutingz@nvidia.com>
Co-authored-by: linhu-nv <linhu@nvidia.com>
2025-08-06 00:05:25 -07:00
Stefan He
cbbb738371 [2/3] Optimize Slime Update Weights: Avoid GPU-to-CPU Device Sync when update expert weights (#8753) 2025-08-05 22:09:52 -07:00
Stefan He
89588179cf [1/3] Optimize Slime Update Weights: Remove QWen3MOE Load Weight Overhead (#8751) 2025-08-05 22:07:54 -07:00
Simo Lin
8c7bb39dfb [router] PD Router Simplification and Reorganization (#8838) 2025-08-05 21:20:38 -07:00
HouseWest
ca47e24f5d [Feature] improve TBO: two chunk overlap (#8144) 2025-08-05 21:11:01 -07:00
Praneth Paruchuri
d26ca84f39 Support bailing moe (#8680) 2025-08-05 20:40:34 -07:00
Ke Bao
8128e08d36 Turn off hybrid cache by default (#8839) 2025-08-06 09:53:45 +08:00
Simo Lin
5d62b56f7e [router] complete router oai spec (#8828) 2025-08-05 18:30:19 -07:00
Yineng Zhang
3ae8e3ea8f chore: upgrade torch 2.8.0 (#8836) 2025-08-05 17:32:01 -07:00
Ying Sheng
c1d2061f97 Add initial support for gpt-oss (#8824) 2025-08-05 13:42:01 -07:00
Yineng Zhang
556e4143f0 fix: remove unused import (#8809) 2025-08-05 13:40:22 -07:00
Yineng Zhang
4ef47839ae feat: use py312 (#8832) 2025-08-05 13:38:22 -07:00
kk
32d9e39a29 Fix potential memory fault issue and ncclSystemError in CI test (#8681)
Co-authored-by: wunhuang <wunhuang@amd.com>
2025-08-05 12:19:37 -07:00
Yineng Zhang
4f4e0e4162 chore: upgrade flashinfer 0.2.10 (#8827) 2025-08-05 12:04:01 -07:00
Yineng Zhang
901ab758ec chore: upgrade transformers 4.55.0 (#8823)
Co-authored-by: hebiao064 <hebiaobuaa@gmail.com>
2025-08-05 11:37:21 -07:00
Yineng Zhang
8e8545caf6 fix: update cmake (#8817) 2025-08-05 09:38:30 -07:00
Yuxuan Zhang
a4b0d5c9e5 GLM-4.5 and GLM-4.5-Air both support (#8804) 2025-08-05 03:29:20 -07:00
eigen
40e3b2beeb feat: add trtllm-gen mha from direct call (#8782)
Co-authored-by: Baizhou Zhang <sobereddiezhang@gmail.com>
2025-08-05 03:28:39 -07:00
Yineng Zhang
75df31b60e chore: bump sgl-kernel v0.3.2 (#8802) 2025-08-05 02:35:20 -07:00
Yineng Zhang
194561f27a feat: support sgl-kernel cu129 (#8800) 2025-08-05 02:33:47 -07:00
Yineng Zhang
5e91fed1c5 Revert "[NVIDIA]Fix local_num_experts for EP (#8779)" (#8797) 2025-08-04 23:30:43 -07:00
Yuhao Yao
873f384a51 [feat] Add detail in image_data (#8596) 2025-08-05 14:01:38 +08:00
Shu Wang
b01eeb80f8 [NVIDIA]Fix local_num_experts for EP (#8779) 2025-08-04 22:01:14 -07:00
Yineng Zhang
1ea94d3b92 chore: upgrade flashinfer v0.2.9 (#8780) 2025-08-04 21:59:18 -07:00
Simo Lin
354ac43555 [pd-router] Add Configurable Retry Logic for reduce backend pressure (#8744) 2025-08-04 20:42:07 -07:00
Shangming Cai
d98a4913ea [PD] Refactor parallel sizes and add pp support for mooncake (#8571)
Signed-off-by: Shangming Cai <caishangming@linux.alibaba.com>
2025-08-04 20:18:11 -07:00
Chunyuan WU
08f8f49016 [CPU][sgl-kernel] biased_grouped_topk: fix correction_bias dtype to float32 (#8212)
Co-authored-by: jianan-gu <jianan.gu@intel.com>
Co-authored-by: YanbingJiang <yanbing.jiang@intel.com>
2025-08-04 18:28:31 -07:00
kk
d4bf5a8524 Support OCP MXFP4 quantization on AMD GPUs (#8255)
Co-authored-by: wunhuang <wunhuang@amd.com>
Co-authored-by: Hubert Lu <Hubert.Lu@amd.com>
2025-08-04 18:14:52 -07:00
Lifu Huang
7cb20754fa [Fix] Fix several issues preventing gemma3n LoRA support. (#8776) 2025-08-04 17:11:46 -07:00
Kaixi Hou
6d0646da11 [NVIDIA] Fix breakage of using trtllm-gen fp8 moe (#8773) 2025-08-04 16:30:13 -07:00
Yineng Zhang
02bc1c7d80 chore: bump sgl-kernel v0.3.1 (#8771) 2025-08-04 13:18:54 -07:00
Qiaolin Yu
fc8c8e5041 Integrate triton_kernels in sgl-kernel (#8762) 2025-08-04 12:12:14 -07:00
Trevor Morris
9bd4872a34 [bugfix] Fix typo in modelopt quant: 'FusedMoE' object has no attribute 'local_num_experts' (#8768) 2025-08-04 11:08:08 -07:00
Simo Lin
2fa0462c39 [router] introduce dp worker abstraction (#8639) 2025-08-04 06:42:20 -07:00
azhurkevich
915140fd18 [NVIDIA] Add Low Latency NVFP4 decode kernels from Flashinfer (#8552)
Co-authored-by: Cheng Wan <cwan@x.ai>
2025-08-04 03:10:02 -07:00
Baron Liu
36fc9260a2 [bugfix] fix import path in HiCacheController (#8749) 2025-08-03 22:19:15 -07:00
Even Zhou
fee0ab0fba [CI] Ascend NPU CI enhancement (#8294)
Co-authored-by: ronnie_zheng <zl19940307@163.com>
2025-08-03 22:16:38 -07:00
Xiaoyu Zhang
f57d2dc162 [sgl-kernel] avoid per_token_quant_fp8.cu hardcode sm_count (#8738) 2025-08-04 12:55:57 +08:00
Baizhou Zhang
f2d68ded6d Rename lora_path to lora_id in batches (#8437) 2025-08-03 21:08:28 -07:00
Yuan Luo
3b87a9e8ae Fix bug of refactoring TopKOutput in w4afp8 (#8745)
Co-authored-by: luoyuan.luo <luoyuan.luo@antgroup.com>
2025-08-03 20:05:02 -07:00
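
An Author / SHA1 / Message / Date listing like the one above can be regenerated from any local clone with `git log` pretty-format placeholders. The sketch below builds a throwaway demo repository so it is self-contained; the repository path, author name, and commit message are hypothetical placeholders, not taken from this log.

```shell
# Create a disposable demo repository (placeholder data, for illustration only)
repo=$(mktemp -d)
git -C "$repo" init -q
git -C "$repo" -c user.name='Jane Dev' -c user.email='jane@example.com' \
    commit -q --allow-empty -m 'chore: example commit (#1234)'

# %an = author name, %h = abbreviated SHA1, %s = subject line, %ad = author date
git -C "$repo" log --date=iso --pretty=format:'%an%n%h %s %ad'
```

Run against a real clone (dropping the demo-repo setup), this prints each commit as an author line followed by a "SHA message date" line, matching the layout of the entries above; trailers such as `Co-authored-by:` live in the commit body and can be shown with `%b` or `git log --format=full`.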