Commit Graph

4485 Commits

| Author | SHA1 | Message | Date |
|---|---|---|---|
| Kaixi Hou | b4c9f38a76 | [NVIDIA] Fix missing get_col_major_tma_aligned_tensor for Blackwell deepgemm in EpMoE (#8955) | 2025-08-08 01:12:33 -07:00 |
| Wenbo Yang | 1132547496 | Add ernie4.py for ERNIE-4.5 (#7657) | 2025-08-08 00:55:48 -07:00 |
| Cheng Wan | 1d24db8348 | Expert Parallelism for GPT-OSS (#8944) | 2025-08-08 00:46:42 -07:00 |
| triple-mu | 444013585d | Fix typos and unify size(s)/stride(s) API calls (#8799) | 2025-08-08 00:18:08 -07:00 |
| eigen | 9c7e392465 | bench: add attention sink op benchmark, triton and trtllm-gen [B200] (#8932) (Co-authored-by: averyhuang <averyh@nvidia.com>) | 2025-08-08 00:16:23 -07:00 |
| eigen | 08fab2b0c4 | minor: global workspace buffer for trtllm-gen mha from flashinfer (#8952) | 2025-08-08 00:12:12 -07:00 |
| Xiaoyu Zhang | 0d1e27a0c5 | Better optimization log for gpt-oss model (#8953) | 2025-08-08 00:11:48 -07:00 |
| fzyzcjy | 774b47f3f1 | Reduce scheduler recv requests overhead (#8947) | 2025-08-08 00:10:05 -07:00 |
| Xiaoyu Zhang | 76915d68a8 | Fix enable flashinfer mxfp4 moe bf16 check (#8950) | 2025-08-07 22:52:09 -07:00 |
| Hongbo Xu | 39fd178831 | refactor: Move scalar_types.py to sgl-kernel to avoid circular import (#8720) | 2025-08-07 19:22:16 -07:00 |
| Zaili Wang | ed0a3dd54a | Enhancements for bench_one_batch (#8703) (Co-authored-by: root <root@gnr630186.jf.intel.com>) | 2025-08-07 19:00:31 -07:00 |
| Simo Lin | 2e901e892f | [router] dedicated prefill HTTP client and request-path optimizations (#8923) | 2025-08-07 17:31:45 -07:00 |
| Stefan He | d3be97104b | correct the tp_plan logic (#8850) | 2025-08-07 16:53:34 -07:00 |
| Xinyuan Tong | 3e7ff1ab1f | fix: reasoning parser when request have enable_thinking flag (#8933) (Signed-off-by: Xinyuan Tong <xinyuantong.cs@gmail.com>) | 2025-08-07 15:52:06 -07:00 |
| Stefan He | aaf0ad8cdf | remove vllm fp8quant from fp8.py (#8937) | 2025-08-07 15:50:52 -07:00 |
| Yineng Zhang | 361379b52b | docs: update README (#8929) | 2025-08-07 14:28:35 -07:00 |
| Yineng Zhang | 1ac16add8b | chore: support blackwell cu129 image (#8928) | 2025-08-07 14:24:57 -07:00 |
| Zhiyu | c3a5fb3b28 | codeowner updates for modelopt related files (#8925) | 2025-08-07 14:21:41 -07:00 |
| Yineng Zhang | 4bf6e5a6b0 | fix: use openai 1.99.1 (#8927) | 2025-08-07 14:20:35 -07:00 |
| Xiaoyu Zhang | 3ae33fcd0a | Fix hopper launch gpt-oss model illegal memory (#8908) | 2025-08-07 10:02:40 -07:00 |
| Simo Lin | 500b15c960 | [router] upgrade router version to 0.1.9 (#8844) | 2025-08-07 09:29:12 -07:00 |
| Simo Lin | 16a4c66d25 | [router] update pd router ci summary step with new threshold (#8916) | 2025-08-07 07:15:38 -07:00 |
| Simo Lin | 89e6521c61 | [router] re-enable pd router benchmark CI (#8912) | 2025-08-07 06:29:36 -07:00 |
| Tien Nguyen | fd05b56750 | refactor(sgl-router): Replace once_cell with LazyLock in worker.rs and remove once_cell dependency from Cargo.toml (#8698) | 2025-08-07 06:14:03 -07:00 |
| fzyzcjy | 482c3db29f | Fix sgl-kernel arch and missing package in CI (#8869) | 2025-08-07 02:08:15 -07:00 |
| Xiaoyu Zhang | 47824c1488 | [Perf] Auto enable best flashinfer mxfp4 kernel in b200 (#8898) | 2025-08-07 01:08:41 -07:00 |
| Xinyuan Tong | c36a6693f3 | Disable gemma3 for SWA due to CUDA illegal memory access error (#8895) (Signed-off-by: Xinyuan Tong <xinyuantong.cs@gmail.com>) | 2025-08-07 00:44:44 -07:00 |
| blzheng | 62f8eb48b1 | [CPU] Fix fallback allgather issue (#8041) | 2025-08-07 00:08:18 -07:00 |
| PGFLMG | b7cd743038 | [Feat] QWen-1M context support[2/2]: Update block sparse attention backend (#5949) | 2025-08-06 23:49:36 -07:00 |
| Simo Lin | a69b637014 | [router] fix req handling order, improve serialization, remove retry (#8888) | 2025-08-06 23:24:39 -07:00 |
| Zheng Wengang | 2d120f8b18 | [Feature][Multimodal] Implement LRU cache for multimodal embeddings (#8292) (Signed-off-by: Xinyuan Tong <xinyuantong.cs@gmail.com>; Co-authored-by: Xinyuan Tong <115166877+JustinTong0323@users.noreply.github.com>; Co-authored-by: Xinyuan Tong <xinyuantong.cs@gmail.com>) | 2025-08-06 23:21:40 -07:00 |
| michael-amd | 4f2e1490c3 | [AMD] Pull latest SGLang version for AMD CI (#8787) | 2025-08-06 20:20:26 -07:00 |
| Xinyuan Tong | 3fa3c6cd6a | Enables force reasoning based on chat template for Qwen3-Thinking (#8369) (Signed-off-by: Xinyuan Tong <xinyuantong.cs@gmail.com>; Signed-off-by: Xinyuan Tong <justinning0323@outlook.com>; Co-authored-by: Chang Su <csu272@usc.edu>) | 2025-08-06 20:02:47 -07:00 |
| Lifu Huang | 6210e2c4f0 | Support GPU pinning for LoRA (#8697) | 2025-08-06 19:39:45 -07:00 |
| eigen | 6ad6c8c9e6 | feat: openai oss attention sink support with trtllm-gen backend #8825 (#8834) (Co-authored-by: averyhuang <averyh@nvidia.com>) | 2025-08-06 19:18:27 -07:00 |
| Cheng Wan | 5b6acc1495 | fix glm4 moe (#8883) | 2025-08-06 18:02:31 -07:00 |
| Xiaoyu Zhang | 4373df5525 | add flashinfer mxfp4 (#8847) | 2025-08-06 16:23:41 -07:00 |
| Trevor Morris | c0e84297c2 | Use reduce scatter for DP (#8539) | 2025-08-06 16:21:26 -07:00 |
| Chang Su | 92cc32d9fc | Support v1/responses and use harmony in serving_chat (#8837) (Signed-off-by: Xinyuan Tong <justinning0323@outlook.com>; Signed-off-by: Xinyuan Tong <xinyuantong.cs@gmail.com>; Co-authored-by: Xinyuan Tong <justinning0323@outlook.com>; Co-authored-by: Xinyuan Tong <xinyuantong.cs@gmail.com>) | 2025-08-06 16:20:34 -07:00 |
| Yineng Zhang | cbbd685a46 | chore: use torch 2.8 stable (#8880) | 2025-08-06 15:51:40 -07:00 |
| Cheng Wan | 78aad91037 | [CI] fix pip upgrade (#8881) | 2025-08-06 15:02:32 -07:00 |
| Shu Wang | 288ae41f7a | [NVIDIA] Fix num_experts in modelopt_quant (#8811) | 2025-08-06 14:35:07 -07:00 |
| Mick | 01c99a9959 | chore: update Dockerfile (#8872) (Co-authored-by: zhyncs <me@zhyncs.com>) | 2025-08-06 09:30:33 -07:00 |
| fzyzcjy | b114a8105b | Support B200 in CI (#8861) | 2025-08-06 21:42:44 +08:00 |
| Ke Bao | 0475448ee3 | Optimize triton swa kernel by skipping computation (#8860) | 2025-08-06 21:37:50 +08:00 |
| Ke Bao | 399e7ec8b3 | Refine naming (#8868) | 2025-08-06 21:37:02 +08:00 |
| Yuan Luo | 1bd5316873 | fix benchmark fp8 blockwise group gemm (#8815) | 2025-08-06 21:02:21 +08:00 |
| Yineng Zhang | aeac900ca2 | fix: resolve ci issue (#8859) | 2025-08-06 02:28:14 -07:00 |
| Ke Bao | 4fc5f2f977 | Add unit test for triton swa kernel (#8853) | 2025-08-06 16:10:38 +08:00 |
| Ying Sheng | 168033d5fb | Support mxfp4 for GPT-OSS (#8843) (Co-authored-by: fzyzcjy <ch271828n@outlook.com>; Co-authored-by: fzyzcjy <5236035+fzyzcjy@users.noreply.github.com>; Co-authored-by: zhuofan1123 <zhuofanl@nvidia.com>; Co-authored-by: liz-badada <jinyanc@nvidia.com>; Co-authored-by: xutizhou <xutingz@nvidia.com>; Co-authored-by: linhu-nv <linhu@nvidia.com>) | 2025-08-06 00:05:25 -07:00 |