Commit Graph

3051 Commits

Author · SHA1 · Message · Date
Lianmin Zheng
91e2f902db Fix kimi k2 function call format (#8968) 2025-08-08 13:25:14 -07:00
valarLip
53f7874ae6 refine aiter_backend for mtp (#7279)
Co-authored-by: HAI <hixiao@gmail.com>
2025-08-08 11:06:02 -07:00
Yineng Zhang
9020f7fc32 chore: bump v0.5.0rc0 (#8959) 2025-08-08 09:16:18 -07:00
Zilin Zhu
dd650e0e21 [RL] fix skip_server_warmup and rl health_generate logic (#8757) 2025-08-08 04:34:38 -07:00
Lianmin Zheng
a947154286 Revert "Support Multi Process Tokenizer Manager" (#8960) 2025-08-08 02:28:27 -07:00
pansicheng
e2fd2b9c7e Simple prefetch policy (#8692) 2025-08-08 02:09:28 -07:00
ybyang
7490e3f67d Support Multi Process Tokenizer Manager (#6555)
Signed-off-by: ybyang <ybyang7@iflytek.com>
Signed-off-by: huanglong <huanglong@linux.alibaba.com>
Co-authored-by: lw9527 <952799980@qq.com>
Co-authored-by: huanglong <huanglong@linux.alibaba.com>
Co-authored-by: Huang Long <121648372+LLLL114@users.noreply.github.com>
2025-08-08 01:45:50 -07:00
Minglei Zhu
6ee6619b7a add zai-org/GLM-4.5-Air-FP8 model into nightly CI (#8894) 2025-08-08 01:44:19 -07:00
Kaixi Hou
b4c9f38a76 [NVIDIA] Fix missing get_col_major_tma_aligned_tensor for Blackwell deepgemm in EpMoE (#8955) 2025-08-08 01:12:33 -07:00
Wenbo Yang
1132547496 Add ernie4.py for ERNIE-4.5 (#7657) 2025-08-08 00:55:48 -07:00
Cheng Wan
1d24db8348 Expert Parallelism for GPT-OSS (#8944) 2025-08-08 00:46:42 -07:00
eigen
08fab2b0c4 minor: global workspace buffer for trtllm-gen mha from flashinfer (#8952) 2025-08-08 00:12:12 -07:00
Xiaoyu Zhang
0d1e27a0c5 Better optimization log for gpt-oss model (#8953) 2025-08-08 00:11:48 -07:00
fzyzcjy
774b47f3f1 Reduce scheduler recv requests overhead (#8947) 2025-08-08 00:10:05 -07:00
Xiaoyu Zhang
76915d68a8 Fix enable flashinfer mxfp4 moe bf16 check (#8950) 2025-08-07 22:52:09 -07:00
Zaili Wang
ed0a3dd54a Enhancements for bench_one_batch (#8703)
Co-authored-by: root <root@gnr630186.jf.intel.com>
2025-08-07 19:00:31 -07:00
Stefan He
d3be97104b correct the tp_plan logic (#8850) 2025-08-07 16:53:34 -07:00
Xinyuan Tong
3e7ff1ab1f fix: reasoning parser when request have enable_thinking flag (#8933)
Signed-off-by: Xinyuan Tong <xinyuantong.cs@gmail.com>
2025-08-07 15:52:06 -07:00
Stefan He
aaf0ad8cdf remove vllm fp8quant from fp8.py (#8937) 2025-08-07 15:50:52 -07:00
Yineng Zhang
4bf6e5a6b0 fix: use openai 1.99.1 (#8927) 2025-08-07 14:20:35 -07:00
Xiaoyu Zhang
3ae33fcd0a Fix hopper launch gpt-oss model illegal memory (#8908) 2025-08-07 10:02:40 -07:00
Xiaoyu Zhang
47824c1488 [Perf] Auto enable best flashinfer mxfp4 kernel in b200 (#8898) 2025-08-07 01:08:41 -07:00
Xinyuan Tong
c36a6693f3 Disable gemma3 for SWA due to CUDA illegal memory access error (#8895)
Signed-off-by: Xinyuan Tong <xinyuantong.cs@gmail.com>
2025-08-07 00:44:44 -07:00
blzheng
62f8eb48b1 [CPU] Fix fallback allgather issue (#8041) 2025-08-07 00:08:18 -07:00
PGFLMG
b7cd743038 [Feat] QWen-1M context support[2/2]: Update block sparse attention backend (#5949) 2025-08-06 23:49:36 -07:00
Zheng Wengang
2d120f8b18 [Feature][Multimodal] Implement LRU cache for multimodal embeddings (#8292)
Signed-off-by: Xinyuan Tong <xinyuantong.cs@gmail.com>
Co-authored-by: Xinyuan Tong <115166877+JustinTong0323@users.noreply.github.com>
Co-authored-by: Xinyuan Tong <xinyuantong.cs@gmail.com>
2025-08-06 23:21:40 -07:00
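The LRU-cache commit above names a standard technique: keep recently used multimodal embeddings in memory and evict the least recently used entry when capacity is exceeded. As a generic illustration only (the class and method names below are invented for this sketch and are not taken from PR #8292), an LRU cache can be built on `collections.OrderedDict`:

```python
from collections import OrderedDict

class LRUEmbeddingCache:
    """Minimal LRU cache keyed by content hash.

    Illustrative sketch only: names and capacity handling here are
    generic, not the implementation from the PR referenced above.
    """

    def __init__(self, capacity: int = 128):
        self.capacity = capacity
        self._store = OrderedDict()

    def get(self, key):
        if key not in self._store:
            return None
        self._store.move_to_end(key)  # mark as most recently used
        return self._store[key]

    def put(self, key, value):
        if key in self._store:
            self._store.move_to_end(key)
        self._store[key] = value
        if len(self._store) > self.capacity:
            self._store.popitem(last=False)  # evict least recently used

cache = LRUEmbeddingCache(capacity=2)
cache.put("a", [0.1])
cache.put("b", [0.2])
cache.get("a")          # touch "a" so "b" becomes the LRU entry
cache.put("c", [0.3])   # capacity exceeded: evicts "b"
```

The `move_to_end` call on every hit is what makes eviction order track recency of use rather than insertion order.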
Xinyuan Tong
3fa3c6cd6a Enables force reasoning based on chat template for Qwen3-Thinking (#8369)
Signed-off-by: Xinyuan Tong <xinyuantong.cs@gmail.com>
Signed-off-by: Xinyuan Tong <justinning0323@outlook.com>
Co-authored-by: Chang Su <csu272@usc.edu>
2025-08-06 20:02:47 -07:00
Lifu Huang
6210e2c4f0 Support GPU pinning for LoRA (#8697) 2025-08-06 19:39:45 -07:00
eigen
6ad6c8c9e6 feat: openai oss attention sink support with trtllm-gen backend #8825 (#8834)
Co-authored-by: averyhuang <averyh@nvidia.com>
2025-08-06 19:18:27 -07:00
Cheng Wan
5b6acc1495 fix glm4 moe (#8883) 2025-08-06 18:02:31 -07:00
Xiaoyu Zhang
4373df5525 add flashinfer mxfp4 (#8847) 2025-08-06 16:23:41 -07:00
Trevor Morris
c0e84297c2 Use reduce scatter for DP (#8539) 2025-08-06 16:21:26 -07:00
Chang Su
92cc32d9fc Support v1/responses and use harmony in serving_chat (#8837)
Signed-off-by: Xinyuan Tong <justinning0323@outlook.com>
Signed-off-by: Xinyuan Tong <xinyuantong.cs@gmail.com>
Co-authored-by: Xinyuan Tong <justinning0323@outlook.com>
Co-authored-by: Xinyuan Tong <xinyuantong.cs@gmail.com>
2025-08-06 16:20:34 -07:00
Shu Wang
288ae41f7a [NVIDIA] Fix num_experts in modelopt_quant (#8811) 2025-08-06 14:35:07 -07:00
Ke Bao
0475448ee3 Optimize triton swa kernel by skipping computation (#8860) 2025-08-06 21:37:50 +08:00
Ke Bao
399e7ec8b3 Refine naming (#8868) 2025-08-06 21:37:02 +08:00
Ying Sheng
168033d5fb Support mxfp4 for GPT-OSS (#8843)
Co-authored-by: fzyzcjy <ch271828n@outlook.com>
Co-authored-by: fzyzcjy <5236035+fzyzcjy@users.noreply.github.com>
Co-authored-by: zhuofan1123 <zhuofanl@nvidia.com>
Co-authored-by: liz-badada <jinyanc@nvidia.com>
Co-authored-by: xutizhou <xutingz@nvidia.com>
Co-authored-by: linhu-nv <linhu@nvidia.com>
2025-08-06 00:05:25 -07:00
Stefan He
cbbb738371 [2/3] Optimize Slime Update Weights: Avoid GPU-to-CPU Device Sync when update expert weights (#8753) 2025-08-05 22:09:52 -07:00
Stefan He
89588179cf [1/3] Optimize Slime Update Weights: Remove QWen3MOE Load Weight Overhead (#8751) 2025-08-05 22:07:54 -07:00
HouseWest
ca47e24f5d [Feature] improve TBO: two chunk overlap (#8144) 2025-08-05 21:11:01 -07:00
Praneth Paruchuri
d26ca84f39 Support bailing moe (#8680) 2025-08-05 20:40:34 -07:00
Ke Bao
8128e08d36 Turn off hybrid cache by default (#8839) 2025-08-06 09:53:45 +08:00
Yineng Zhang
3ae8e3ea8f chore: upgrade torch 2.8.0 (#8836) 2025-08-05 17:32:01 -07:00
Ying Sheng
c1d2061f97 Add initial support for gpt-oss (#8824) 2025-08-05 13:42:01 -07:00
Yineng Zhang
556e4143f0 fix: remove unused import (#8809) 2025-08-05 13:40:22 -07:00
kk
32d9e39a29 Fix potential memory fault issue and ncclSystemError in CI test (#8681)
Co-authored-by: wunhuang <wunhuang@amd.com>
2025-08-05 12:19:37 -07:00
Yineng Zhang
4f4e0e4162 chore: upgrade flashinfer 0.2.10 (#8827) 2025-08-05 12:04:01 -07:00
Yineng Zhang
901ab758ec chore: upgrade transformers 4.55.0 (#8823)
Co-authored-by: hebiao064 <hebiaobuaa@gmail.com>
2025-08-05 11:37:21 -07:00
Yuxuan Zhang
a4b0d5c9e5 GLM-4.5 and GLM-4.5-Air both support (#8804) 2025-08-05 03:29:20 -07:00
eigen
40e3b2beeb feat: add trtllm-gen mha from direct call (#8782)
Co-authored-by: Baizhou Zhang <sobereddiezhang@gmail.com>
2025-08-05 03:28:39 -07:00