Commit Graph

3066 Commits

Author SHA1 Message Date
fzyzcjy
442534aa44 Add CI for gpt-oss model on hopper (#8851) 2025-08-09 00:34:23 -07:00
ishandhanani
de8b8b6e5c chore(deps): update minimum python to 3.10 (#8984) 2025-08-09 00:30:23 -07:00
tql.99
3f2e315f6e optimize: reduce shuffle and quantization overhead in cutlass_moe sm90 (#8962)
Co-authored-by: 戚余航 <qiyuhang@bytedance.com>
2025-08-09 00:29:12 -07:00
Lifu Huang
6e2151183b Fix incorrect default get_hidden_dim logic (#8987) 2025-08-09 00:25:38 -07:00
Cheng Wan
a47baff12c [hotfix] use the original implementation in 8785 (#8994) 2025-08-08 21:47:25 -07:00
Cheng Wan
fd7e15b76d Revert "[bug fix] Ensure local token and global token buffers are pointing to different storage" (#8993) 2025-08-08 21:34:17 -07:00
DarkSharpness
fc42ff7b63 [Fix] Fix wrong backend chosen in hybrid backend (#8989) 2025-08-08 21:21:17 -07:00
Lianmin Zheng
706bd69cc5 Clean up server_args.py to have a dedicated function for model specific adjustments (#8983) 2025-08-08 19:56:50 -07:00
Trevor Morris
a60f88b5a4 Add unit test for flashinfer fp4 moe (#8330)
Co-authored-by: Yineng Zhang <me@zhyncs.com>
2025-08-08 17:55:37 -07:00
Trevor Morris
591c232f7c [1/2][resubmit] sgl-kernel: Fuse routed scaling factor into moe_fused_gate (select_experts) (#8770) 2025-08-08 17:55:06 -07:00
Lianmin Zheng
f352b793be Minor Optimizations in Schedule Batch (#8724)
Co-authored-by: Suruchi Shah <surshah@linkedin.com>
2025-08-08 16:10:16 -07:00
Lianmin Zheng
67a7d1f699 Create cancel-all-pr-test-runs (#8986) 2025-08-08 15:53:51 -07:00
Elfie Guo
92cbef59ec [bug fix] Ensure local token and global token buffers are pointing to different storage (#8785) 2025-08-08 15:13:32 -07:00
maocheng23
b3359dc9bf Update qwen3_coder_detector.py for streaming (#8371) 2025-08-08 14:51:03 -07:00
ishandhanani
4e7f025219 chore(gb200): update to CUDA 12.9 and improve build process (#8772) 2025-08-08 13:42:47 -07:00
Lianmin Zheng
91e2f902db Fix kimi k2 function call format (#8968) 2025-08-08 13:25:14 -07:00
valarLip
53f7874ae6 refine aiter_backend for mtp (#7279)
Co-authored-by: HAI <hixiao@gmail.com>
2025-08-08 11:06:02 -07:00
Yineng Zhang
9020f7fc32 chore: bump v0.5.0rc0 (#8959) 2025-08-08 09:16:18 -07:00
Zilin Zhu
dd650e0e21 [RL] fix skip_server_warmup and rl health_generate logic (#8757) 2025-08-08 04:34:38 -07:00
Lianmin Zheng
a947154286 Revert "Support Multi Process Tokenizer Manager" (#8960) 2025-08-08 02:28:27 -07:00
pansicheng
e2fd2b9c7e Simple prefetch policy (#8692) 2025-08-08 02:09:28 -07:00
ybyang
7490e3f67d Support Multi Process Tokenizer Manager (#6555)
Signed-off-by: ybyang <ybyang7@iflytek.com>
Signed-off-by: huanglong <huanglong@linux.alibaba.com>
Co-authored-by: lw9527 <952799980@qq.com>
Co-authored-by: huanglong <huanglong@linux.alibaba.com>
Co-authored-by: Huang Long <121648372+LLLL114@users.noreply.github.com>
2025-08-08 01:45:50 -07:00
Minglei Zhu
6ee6619b7a add zai-org/GLM-4.5-Air-FP8 model into nightly CI (#8894) 2025-08-08 01:44:19 -07:00
Kaixi Hou
b4c9f38a76 [NVIDIA] Fix missing get_col_major_tma_aligned_tensor for Blackwell deepgemm in EpMoE (#8955) 2025-08-08 01:12:33 -07:00
Wenbo Yang
1132547496 Add ernie4.py for ERNIE-4.5 (#7657) 2025-08-08 00:55:48 -07:00
Cheng Wan
1d24db8348 Expert Parallelism for GPT-OSS (#8944) 2025-08-08 00:46:42 -07:00
eigen
08fab2b0c4 minor: global workspace buffer for trtllm-gen mha from flashinfer (#8952) 2025-08-08 00:12:12 -07:00
Xiaoyu Zhang
0d1e27a0c5 Better optimization log for gpt-oss model (#8953) 2025-08-08 00:11:48 -07:00
fzyzcjy
774b47f3f1 Reduce scheduler recv requests overhead (#8947) 2025-08-08 00:10:05 -07:00
Xiaoyu Zhang
76915d68a8 Fix enable flashinfer mxfp4 moe bf16 check (#8950) 2025-08-07 22:52:09 -07:00
Zaili Wang
ed0a3dd54a Enhancements for bench_one_batch (#8703)
Co-authored-by: root <root@gnr630186.jf.intel.com>
2025-08-07 19:00:31 -07:00
Stefan He
d3be97104b correct the tp_plan logic (#8850) 2025-08-07 16:53:34 -07:00
Xinyuan Tong
3e7ff1ab1f fix: reasoning parser when request has enable_thinking flag (#8933)
Signed-off-by: Xinyuan Tong <xinyuantong.cs@gmail.com>
2025-08-07 15:52:06 -07:00
Stefan He
aaf0ad8cdf remove vllm fp8quant from fp8.py (#8937) 2025-08-07 15:50:52 -07:00
Yineng Zhang
4bf6e5a6b0 fix: use openai 1.99.1 (#8927) 2025-08-07 14:20:35 -07:00
Xiaoyu Zhang
3ae33fcd0a Fix hopper launch gpt-oss model illegal memory (#8908) 2025-08-07 10:02:40 -07:00
Xiaoyu Zhang
47824c1488 [Perf] Auto enable best flashinfer mxfp4 kernel in b200 (#8898) 2025-08-07 01:08:41 -07:00
Xinyuan Tong
c36a6693f3 Disable gemma3 for SWA due to CUDA illegal memory access error (#8895)
Signed-off-by: Xinyuan Tong <xinyuantong.cs@gmail.com>
2025-08-07 00:44:44 -07:00
blzheng
62f8eb48b1 [CPU] Fix fallback allgather issue (#8041) 2025-08-07 00:08:18 -07:00
PGFLMG
b7cd743038 [Feat] QWen-1M context support[2/2]: Update block sparse attention backend (#5949) 2025-08-06 23:49:36 -07:00
Zheng Wengang
2d120f8b18 [Feature][Multimodal] Implement LRU cache for multimodal embeddings (#8292)
Signed-off-by: Xinyuan Tong <xinyuantong.cs@gmail.com>
Co-authored-by: Xinyuan Tong <115166877+JustinTong0323@users.noreply.github.com>
Co-authored-by: Xinyuan Tong <xinyuantong.cs@gmail.com>
2025-08-06 23:21:40 -07:00
Xinyuan Tong
3fa3c6cd6a Enables force reasoning based on chat template for Qwen3-Thinking (#8369)
Signed-off-by: Xinyuan Tong <xinyuantong.cs@gmail.com>
Signed-off-by: Xinyuan Tong <justinning0323@outlook.com>
Co-authored-by: Chang Su <csu272@usc.edu>
2025-08-06 20:02:47 -07:00
Lifu Huang
6210e2c4f0 Support GPU pinning for LoRA (#8697) 2025-08-06 19:39:45 -07:00
eigen
6ad6c8c9e6 feat: openai oss attention sink support with trtllm-gen backend #8825 (#8834)
Co-authored-by: averyhuang <averyh@nvidia.com>
2025-08-06 19:18:27 -07:00
Cheng Wan
5b6acc1495 fix glm4 moe (#8883) 2025-08-06 18:02:31 -07:00
Xiaoyu Zhang
4373df5525 add flashinfer mxfp4 (#8847) 2025-08-06 16:23:41 -07:00
Trevor Morris
c0e84297c2 Use reduce scatter for DP (#8539) 2025-08-06 16:21:26 -07:00
Chang Su
92cc32d9fc Support v1/responses and use harmony in serving_chat (#8837)
Signed-off-by: Xinyuan Tong <justinning0323@outlook.com>
Signed-off-by: Xinyuan Tong <xinyuantong.cs@gmail.com>
Co-authored-by: Xinyuan Tong <justinning0323@outlook.com>
Co-authored-by: Xinyuan Tong <xinyuantong.cs@gmail.com>
2025-08-06 16:20:34 -07:00
Shu Wang
288ae41f7a [NVIDIA] Fix num_experts in modelopt_quant (#8811) 2025-08-06 14:35:07 -07:00
Ke Bao
0475448ee3 Optimize triton swa kernel by skipping computation (#8860) 2025-08-06 21:37:50 +08:00