Commit Graph

3088 Commits

Author SHA1 Message Date
JiLi
6b847a9a05 Optimize: Cache CUDA device to reduce redundant calls during tensor l… (#8996) 2025-08-10 00:32:57 -07:00
DarkSharpness
7ba5ad5766 [Fix] Fix flashinfer cpu <-> gpu synchronization (#8340) 2025-08-10 03:11:40 +00:00
DarkSharpness
19bc77f05c [Fix] Fix hicache backend (#8991) 2025-08-09 17:16:25 -07:00
huangtingwei
86497d99f2 fix page first per layer pf2lf kernel (#8915) 2025-08-09 17:16:11 -07:00
    Co-authored-by: Zhiqiang Xie <xiezhq@stanford.edu>
cctry
5c31b35db2 [hicache] Optimization for DMA copy (#8245) 2025-08-09 17:16:07 -07:00
Lianmin Zheng
ef48d5547e Fix CI (#9013) 2025-08-09 16:00:10 -07:00
Xiaoyu Zhang
a886564a18 fix flashinfer allreduce fusion import bug (#9007) 2025-08-09 13:47:05 -07:00
Lianmin Zheng
9a44b643c6 Fix CI (#9012) 2025-08-09 13:33:42 -07:00
Mick
41d71ca488 fix: fix obsolete qwen-audio processor arg (#9003) 2025-08-09 13:18:36 -07:00
JieXin Liang
20cfc5a251 [perf] add kimi-k2 b200 fused moe config (#9010) 2025-08-09 12:40:49 -07:00
Chaitanya Sri Krishna Lolla
323bc2f51a Enable TBO on ROCm (#8329) 2025-08-09 01:59:55 -07:00
Even Zhou
137e75daa1 [Feature] Optimize DeepSeek's DeepEP on Ascend NPU (#8355) 2025-08-09 01:35:00 -07:00
    Co-authored-by: ronnie_zheng <zl19940307@163.com>
    Co-authored-by: Hexq0210 <hexq0809521@gmail.com>
Trevor Morris
52e1f52f32 [bugfix] Fix missing args in bench one batch (#8877) 2025-08-09 01:34:03 -07:00
Cheng Wan
5018809222 [DP] fix: engine crash when decode batch is padded (#8995) 2025-08-09 01:29:29 -07:00
Yineng Zhang
326a901df4 chore: upgrade sgl-kernel 0.3.3 (#8998) 2025-08-09 01:22:01 -07:00
Zhiqiang Xie
6e0b646832 HiCache Storage tp fix (#8878) 2025-08-09 01:16:51 -07:00
Brayden Zhong
4a9f3eef90 Tiny Llama4 type error in constructor (#6752) 2025-08-09 01:03:59 -07:00
hzh0425
1b7afad0dd feature(hicache): Support hf3fs-hicache reusing kvcache across different instances (#8673) 2025-08-09 01:03:00 -07:00
    Co-authored-by: Zhiqiang Xie <xiezhq@stanford.edu>
Binyao Jiang
f29aba8c6e Support glm4.1v and glm4.5v (#8798) 2025-08-09 00:59:13 -07:00
    Signed-off-by: Xinyuan Tong <justinning0323@outlook.com>
    Signed-off-by: Xinyuan Tong <xinyuantong.cs@gmail.com>
    Co-authored-by: Xinyuan Tong <justinning0323@outlook.com>
    Co-authored-by: Xinyuan Tong <115166877+JustinTong0323@users.noreply.github.com>
    Co-authored-by: Xinyuan Tong <xinyuantong.cs@gmail.com>
    Co-authored-by: zRzRzRzRzRzRzR <2448370773@qq.com>
    Co-authored-by: Minglei Zhu <mingleizhu1122@gmail.com>
    Co-authored-by: Chang Su <csu272@usc.edu>
eigen
faa25df1ae feat: update flashinfer ar oneshot params (#8687) 2025-08-09 00:51:27 -07:00
Binyao Jiang
7b81f956eb Fix qwen2 audio not working bug (#8600) 2025-08-09 00:42:29 -07:00
fzyzcjy
d3e67deb1b Fix redundant kernel in sink dtype conversion (#8966) 2025-08-09 00:34:49 -07:00
fzyzcjy
442534aa44 Add CI for gpt-oss model on hopper (#8851) 2025-08-09 00:34:23 -07:00
ishandhanani
de8b8b6e5c chore(deps): update minimum python to 3.10 (#8984) 2025-08-09 00:30:23 -07:00
tql.99
3f2e315f6e optimize: reduce shuffle and quantization overhead in cutlass_moe sm90 (#8962) 2025-08-09 00:29:12 -07:00
    Co-authored-by: 戚余航 <qiyuhang@bytedance.com>
Lifu Huang
6e2151183b Fix incorrect default get_hidden_dim logic (#8987) 2025-08-09 00:25:38 -07:00
Cheng Wan
a47baff12c [hotfix] use the original implementation in 8785 (#8994) 2025-08-08 21:47:25 -07:00
Cheng Wan
fd7e15b76d Revert "[bug fix] Ensure local token and global token buffers are pointing to different storage" (#8993) 2025-08-08 21:34:17 -07:00
DarkSharpness
fc42ff7b63 [Fix] Fix wrong backend chosen in hybrid backend (#8989) 2025-08-08 21:21:17 -07:00
Lianmin Zheng
706bd69cc5 Clean up server_args.py to have a dedicated function for model specific adjustments (#8983) 2025-08-08 19:56:50 -07:00
Trevor Morris
a60f88b5a4 Add unit test for flashinfer fp4 moe (#8330) 2025-08-08 17:55:37 -07:00
    Co-authored-by: Yineng Zhang <me@zhyncs.com>
Trevor Morris
591c232f7c [1/2][resubmit] sgl-kernel: Fuse routed scaling factor into moe_fused_gate (select_experts) (#8770) 2025-08-08 17:55:06 -07:00
Lianmin Zheng
f352b793be Minor Optimizations in Schedule Batch (#8724) 2025-08-08 16:10:16 -07:00
    Co-authored-by: Suruchi Shah <surshah@linkedin.com>
Lianmin Zheng
67a7d1f699 Create cancel-all-pr-test-runs (#8986) 2025-08-08 15:53:51 -07:00
Elfie Guo
92cbef59ec [bug fix] Ensure local token and global token buffers are pointing to different storage (#8785) 2025-08-08 15:13:32 -07:00
maocheng23
b3359dc9bf Update qwen3_coder_detector.py for streaming (#8371) 2025-08-08 14:51:03 -07:00
ishandhanani
4e7f025219 chore(gb200): update to CUDA 12.9 and improve build process (#8772) 2025-08-08 13:42:47 -07:00
Lianmin Zheng
91e2f902db Fix kimi k2 function call format (#8968) 2025-08-08 13:25:14 -07:00
valarLip
53f7874ae6 refine aiter_backend for mtp (#7279) 2025-08-08 11:06:02 -07:00
    Co-authored-by: HAI <hixiao@gmail.com>
Yineng Zhang
9020f7fc32 chore: bump v0.5.0rc0 (#8959) 2025-08-08 09:16:18 -07:00
Zilin Zhu
dd650e0e21 [RL] fix skip_server_warmup and rl health_generate logic (#8757) 2025-08-08 04:34:38 -07:00
Lianmin Zheng
a947154286 Revert "Support Multi Process Tokenizer Manager" (#8960) 2025-08-08 02:28:27 -07:00
pansicheng
e2fd2b9c7e Simple prefetch policy (#8692) 2025-08-08 02:09:28 -07:00
ybyang
7490e3f67d Support Multi Process Tokenizer Manager (#6555) 2025-08-08 01:45:50 -07:00
    Signed-off-by: ybyang <ybyang7@iflytek.com>
    Signed-off-by: huanglong <huanglong@linux.alibaba.com>
    Co-authored-by: lw9527 <952799980@qq.com>
    Co-authored-by: huanglong <huanglong@linux.alibaba.com>
    Co-authored-by: Huang Long <121648372+LLLL114@users.noreply.github.com>
Minglei Zhu
6ee6619b7a add zai-org/GLM-4.5-Air-FP8 model into nightly CI (#8894) 2025-08-08 01:44:19 -07:00
Kaixi Hou
b4c9f38a76 [NVIDIA] Fix missing get_col_major_tma_aligned_tensor for Blackwell deepgemm in EpMoE (#8955) 2025-08-08 01:12:33 -07:00
Wenbo Yang
1132547496 Add ernie4.py for ERNIE-4.5 (#7657) 2025-08-08 00:55:48 -07:00
Cheng Wan
1d24db8348 Expert Parallelism for GPT-OSS (#8944) 2025-08-08 00:46:42 -07:00
eigen
08fab2b0c4 minor: global workspace buffer for trtllm-gen mha from flashinfer (#8952) 2025-08-08 00:12:12 -07:00
Xiaoyu Zhang
0d1e27a0c5 Better optimization log for gpt-oss model (#8953) 2025-08-08 00:11:48 -07:00