Commit Graph

4473 Commits

Author · SHA1 · Message · Date

Stefan He · d3be97104b · correct the tp_plan logic (#8850) · 2025-08-07 16:53:34 -07:00
Xinyuan Tong · 3e7ff1ab1f · fix: reasoning parser when request have enable_thinking flag (#8933) · 2025-08-07 15:52:06 -07:00
    Signed-off-by: Xinyuan Tong <xinyuantong.cs@gmail.com>
Stefan He · aaf0ad8cdf · remove vllm fp8quant from fp8.py (#8937) · 2025-08-07 15:50:52 -07:00
Yineng Zhang · 361379b52b · docs: update README (#8929) · 2025-08-07 14:28:35 -07:00
Yineng Zhang · 1ac16add8b · chore: support blackwell cu129 image (#8928) · 2025-08-07 14:24:57 -07:00
Zhiyu · c3a5fb3b28 · codeowner updates for modelopt related files (#8925) · 2025-08-07 14:21:41 -07:00
Yineng Zhang · 4bf6e5a6b0 · fix: use openai 1.99.1 (#8927) · 2025-08-07 14:20:35 -07:00
Xiaoyu Zhang · 3ae33fcd0a · Fix hopper launch gpt-oss model illegal memory (#8908) · 2025-08-07 10:02:40 -07:00
Simo Lin · 500b15c960 · [router] upgrade router version to 0.1.9 (#8844) · 2025-08-07 09:29:12 -07:00
Simo Lin · 16a4c66d25 · [router] update pd router ci summary step with new threshold (#8916) · 2025-08-07 07:15:38 -07:00
Simo Lin · 89e6521c61 · [router] re-enable pd router benchmark CI (#8912) · 2025-08-07 06:29:36 -07:00
Tien Nguyen · fd05b56750 · refactor(sgl-router): Replace once_cell with LazyLock in worker.rs and remove once_cell dependency from Cargo.toml (#8698) · 2025-08-07 06:14:03 -07:00
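The refactor in fd05b56750 replaces the `once_cell` crate with the standard library's `std::sync::LazyLock`, stabilized in Rust 1.80. A minimal sketch of that migration pattern; the static name and initializer below are illustrative, not taken from worker.rs:

```rust
use std::sync::LazyLock;

// Before: `static LABEL: once_cell::sync::Lazy<String> = Lazy::new(|| ...);`
// After: the same thread-safe, one-time lazy initialization, no external crate.
// `LABEL` and its initializer are hypothetical, not copied from the PR.
static LABEL: LazyLock<String> = LazyLock::new(|| format!("worker-{}", 0));

fn main() {
    // The first dereference runs the closure; later ones reuse the cached value.
    assert_eq!(LABEL.as_str(), "worker-0");
    println!("{}", *LABEL);
}
```

Since `LazyLock` mirrors `once_cell::sync::Lazy`'s API, the swap is usually mechanical and lets `once_cell` be dropped from Cargo.toml entirely.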
fzyzcjy · 482c3db29f · Fix sgl-kernel arch and missing package in CI (#8869) · 2025-08-07 02:08:15 -07:00
Xiaoyu Zhang · 47824c1488 · [Perf] Auto enable best flashinfer mxfp4 kernel in b200 (#8898) · 2025-08-07 01:08:41 -07:00
Xinyuan Tong · c36a6693f3 · Disable gemma3 for SWA due to CUDA illegal memory access error (#8895) · 2025-08-07 00:44:44 -07:00
    Signed-off-by: Xinyuan Tong <xinyuantong.cs@gmail.com>
blzheng · 62f8eb48b1 · [CPU] Fix fallback allgather issue (#8041) · 2025-08-07 00:08:18 -07:00
PGFLMG · b7cd743038 · [Feat] QWen-1M context support[2/2]: Update block sparse attention backend (#5949) · 2025-08-06 23:49:36 -07:00
Simo Lin · a69b637014 · [router] fix req handling order, improve serialization, remove retry (#8888) · 2025-08-06 23:24:39 -07:00
Zheng Wengang · 2d120f8b18 · [Feature][Multimodal] Implement LRU cache for multimodal embeddings (#8292) · 2025-08-06 23:21:40 -07:00
    Signed-off-by: Xinyuan Tong <xinyuantong.cs@gmail.com>
    Co-authored-by: Xinyuan Tong <115166877+JustinTong0323@users.noreply.github.com>
    Co-authored-by: Xinyuan Tong <xinyuantong.cs@gmail.com>
michael-amd · 4f2e1490c3 · [AMD] Pull latest SGLang version for AMD CI (#8787) · 2025-08-06 20:20:26 -07:00
Xinyuan Tong · 3fa3c6cd6a · Enables force reasoning based on chat template for Qwen3-Thinking (#8369) · 2025-08-06 20:02:47 -07:00
    Signed-off-by: Xinyuan Tong <xinyuantong.cs@gmail.com>
    Signed-off-by: Xinyuan Tong <justinning0323@outlook.com>
    Co-authored-by: Chang Su <csu272@usc.edu>
Lifu Huang · 6210e2c4f0 · Support GPU pinning for LoRA (#8697) · 2025-08-06 19:39:45 -07:00
eigen · 6ad6c8c9e6 · feat: openai oss attention sink support with trtllm-gen backend #8825 (#8834) · 2025-08-06 19:18:27 -07:00
    Co-authored-by: averyhuang <averyh@nvidia.com>
Cheng Wan · 5b6acc1495 · fix glm4 moe (#8883) · 2025-08-06 18:02:31 -07:00
Xiaoyu Zhang · 4373df5525 · add flashinfer mxfp4 (#8847) · 2025-08-06 16:23:41 -07:00
Trevor Morris · c0e84297c2 · Use reduce scatter for DP (#8539) · 2025-08-06 16:21:26 -07:00
Chang Su · 92cc32d9fc · Support v1/responses and use harmony in serving_chat (#8837) · 2025-08-06 16:20:34 -07:00
    Signed-off-by: Xinyuan Tong <justinning0323@outlook.com>
    Signed-off-by: Xinyuan Tong <xinyuantong.cs@gmail.com>
    Co-authored-by: Xinyuan Tong <justinning0323@outlook.com>
    Co-authored-by: Xinyuan Tong <xinyuantong.cs@gmail.com>
Yineng Zhang · cbbd685a46 · chore: use torch 2.8 stable (#8880) · 2025-08-06 15:51:40 -07:00
Cheng Wan · 78aad91037 · [CI] fix pip upgrade (#8881) · 2025-08-06 15:02:32 -07:00
Shu Wang · 288ae41f7a · [NVIDIA] Fix num_experts in modelopt_quant (#8811) · 2025-08-06 14:35:07 -07:00
Mick · 01c99a9959 · chore: update Dockerfile (#8872) · 2025-08-06 09:30:33 -07:00
    Co-authored-by: zhyncs <me@zhyncs.com>
fzyzcjy · b114a8105b · Support B200 in CI (#8861) · 2025-08-06 21:42:44 +08:00
Ke Bao · 0475448ee3 · Optimize triton swa kernel by skipping computation (#8860) · 2025-08-06 21:37:50 +08:00
Ke Bao · 399e7ec8b3 · Refine naming (#8868) · 2025-08-06 21:37:02 +08:00
Yuan Luo · 1bd5316873 · fix benchmark fp8 blockwise group gemm (#8815) · 2025-08-06 21:02:21 +08:00
Yineng Zhang · aeac900ca2 · fix: resolve ci issue (#8859) · 2025-08-06 02:28:14 -07:00
Ke Bao · 4fc5f2f977 · Add unit test for triton swa kernel (#8853) · 2025-08-06 16:10:38 +08:00
Ying Sheng · 168033d5fb · Support mxfp4 for GPT-OSS (#8843) · 2025-08-06 00:05:25 -07:00
    Co-authored-by: fzyzcjy <ch271828n@outlook.com>
    Co-authored-by: fzyzcjy <5236035+fzyzcjy@users.noreply.github.com>
    Co-authored-by: zhuofan1123 <zhuofanl@nvidia.com>
    Co-authored-by: liz-badada <jinyanc@nvidia.com>
    Co-authored-by: xutizhou <xutingz@nvidia.com>
    Co-authored-by: linhu-nv <linhu@nvidia.com>
Stefan He · cbbb738371 · [2/3] Optimize Slime Update Weights: Avoid GPU-to-CPU Device Sync when update expert weights (#8753) · 2025-08-05 22:09:52 -07:00
Stefan He · 89588179cf · [1/3] Optimize Slime Update Weights: Remove QWen3MOE Load Weight Overhead (#8751) · 2025-08-05 22:07:54 -07:00
Simo Lin · 8c7bb39dfb · [router] PD Router Simplification and Reorganization (#8838) · 2025-08-05 21:20:38 -07:00
HouseWest · ca47e24f5d · [Feature] improve TBO: two chunk overlap (#8144) · 2025-08-05 21:11:01 -07:00
Praneth Paruchuri · d26ca84f39 · Support bailing moe (#8680) · 2025-08-05 20:40:34 -07:00
Ke Bao · 8128e08d36 · Turn off hybrid cache by default (#8839) · 2025-08-06 09:53:45 +08:00
Simo Lin · 5d62b56f7e · [router] complete router oai spec (#8828) · 2025-08-05 18:30:19 -07:00
Yineng Zhang · 3ae8e3ea8f · chore: upgrade torch 2.8.0 (#8836) · 2025-08-05 17:32:01 -07:00
Ying Sheng · c1d2061f97 · Add initial support for gpt-oss (#8824) · 2025-08-05 13:42:01 -07:00
Yineng Zhang · 556e4143f0 · fix: remove unused import (#8809) · 2025-08-05 13:40:22 -07:00
Yineng Zhang · 4ef47839ae · feat: use py312 (#8832) · 2025-08-05 13:38:22 -07:00
kk · 32d9e39a29 · Fix potential memory fault issue and ncclSystemError in CI test (#8681) · 2025-08-05 12:19:37 -07:00
    Co-authored-by: wunhuang <wunhuang@amd.com>