Commit Graph

2709 Commits

Author SHA1 Message Date
Brayden Zhong
a37e1247c1 [Multimodal][Perf] Use pybase64 instead of base64 (#7724) 2025-07-08 14:00:58 -07:00
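The pybase64 swap above is a common perf trick: pybase64 is a SIMD-accelerated drop-in replacement exposing the same b64encode/b64decode API as the stdlib base64 module. A minimal sketch of the pattern (with a stdlib fallback, assuming pybase64 may not be installed):

```python
import base64

# pybase64 mirrors the stdlib base64 API, only faster; fall back to
# the stdlib module when it is not available.
try:
    import pybase64 as b64
except ImportError:
    b64 = base64

payload = b"multimodal image bytes"
encoded = b64.b64encode(payload)

# The encoded output is byte-identical to the stdlib encoder.
assert encoded == base64.b64encode(payload)
assert b64.b64decode(encoded) == payload
```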
Xinyuan Tong
136c6e0431 fix: Handles input_embeds in GenerateReqInput when n>1 (#7830) 2025-07-08 14:00:42 -07:00
Signed-off-by: Xinyuan Tong <justinning0323@outlook.com>
Xinyuan Tong
43e20c0647 Support Mimo-VL (#7579) 2025-07-08 14:00:25 -07:00
Signed-off-by: Xinyuan Tong <justinning0323@outlook.com>
Xinyuan Tong
4bab50a6b5 Fix llama4 vision (#7840) 2025-07-08 14:00:03 -07:00
Signed-off-by: Xinyuan Tong <justinning0323@outlook.com>
Xiaoyu Zhang
2e7ab862e3 Fix illegal memory in trtllm allreduce fusion (#7864) 2025-07-08 11:47:17 -07:00
kk
653b873b91 Fix cache modules of triton import error (#7832) 2025-07-08 02:50:09 -07:00
Shangming Cai
d379bda4fa [Bugfix] Fix two batch overlap with auto DeepEP Dispatch (#7853) 2025-07-08 02:49:32 -07:00
Signed-off-by: Shangming Cai <caishangming@linux.alibaba.com>
Zhiyu
659907e32b Enable ModelOpt Llama4 fp8 checkpoint deployment in SGLang (#7129) 2025-07-08 00:19:50 -07:00
SijiaYang
cb9d91ea8a feat: support DeepSeek-R1-W4AFP8 model with ep-moe mode (#7762) 2025-07-07 14:47:21 -07:00
Signed-off-by: yangsijia.614 <yangsijia.614@bytedance.com>
Haohui Mai
076313bd09 [AMD] Fail gracefully when AITER is unavailable on gfx90a GPUs (#7187) 2025-07-07 09:09:58 +00:00
Ziming Huang
9abe1163ac fix duplicate args in schedule_batch (#7816) 2025-07-07 01:31:03 -07:00
Zhiqiang Xie
2fc824b84c Kernels for efficient KV cache IO (#7313) 2025-07-06 22:53:36 -07:00
Yuan Luo
253454de9b Integrate triton moe kernel (#7689) 2025-07-06 20:05:49 -07:00
Co-authored-by: luoyuan.luo <luoyuan.luo@antgroup.com>
yuhsuan-t
8d4a01cbd7 Log the timestamps of each prefill/decode iteration (#6094) 2025-07-07 01:57:27 +00:00
Co-authored-by: yuhsuan-t <12108766+yuhsaun-t@users.noreply.github.com>
Nan Jiang
ba69c153f6 [RL]: Fix error tagging in multi-stage wake up (#7812) 2025-07-06 16:51:29 -07:00
Co-authored-by: hebiao064 <hebiaobuaa@gmail.com>
Stefan He
3589aa79b0 [RL] Fix illegal memory for _import_static_state (#7733) 2025-07-06 16:25:21 -07:00
Co-authored-by: nanjiangwill <willjiang2018@gmail.com>
Lifu Huang
ea4bf12286 Fix division-by-zero bug in LoRA triton kernels. (#7785) 2025-07-06 00:45:29 -07:00
fzyzcjy
a291439a59 Support logprobs in two-batch overlap (#7709) 2025-07-05 19:05:32 -07:00
JieXin Liang
54411f6afa fix: disable dsv3_router_gemm in dsv3_nextn (#7793) 2025-07-05 19:01:01 -07:00
Yineng Zhang
ec5f9c6269 chore: bump v0.4.9 (#7802) 2025-07-05 17:40:29 -07:00
Yineng Zhang
62f5522ffe chore: upgrade sgl-kernel v0.2.4 (#7801) 2025-07-05 17:37:40 -07:00
Lianmin Zheng
5589b75024 Add treemask mode to build_eagle_tree & release sgl-kernel 0.2.3 (#7756) 2025-07-05 12:17:05 -07:00
Co-authored-by: Pranjal Shankhdhar <pranjal.ssh@gmail.com>
JieXin Liang
c04a8a820b [fix] fix misusing of is_cuda (#7790) 2025-07-05 04:02:14 -07:00
Cheng Wan
6c903611ca Fix incorrect spec_num_draft_tokens in draft_extend (#7757) 2025-07-05 02:18:16 -07:00
Yineng Zhang
77cfea689d chore: upgrade sgl-kernel v0.2.3 (#7786) 2025-07-05 01:55:55 -07:00
Cheng Wan
8fc910db03 DP Attention with Auto DeepEP Dispatch (#7222) 2025-07-05 01:54:24 -07:00
Gang Chen
ef8a29c429 Embedding parallel by attn_tp (#7623) 2025-07-04 23:21:56 -07:00
Leng Yue
8364608930 add model: qwen2-audio (#7596) 2025-07-04 21:13:10 -07:00
Cheng Wan
cb432f1770 saving hidden_states.clone() (#7705) 2025-07-04 20:07:42 -07:00
Ximingwang-09
1964c325de [feat] Support EAGLE3 for Qwen (#7745) 2025-07-04 19:50:28 -07:00
Co-authored-by: 纬杭 <ximing.wxm@antgroup.com>
Co-authored-by: zyksir <zyksir@outlook.com>
Caproni
af5647748a [Fix] Alloc return type error (#7778) 2025-07-04 19:00:40 -07:00
Signed-off-by: Capronir <839972205@qq.com>
Zilin Zhu
af46f299f9 [RL] add pause and continue generation for async rl training (#7419) 2025-07-04 18:49:49 -07:00
Zilin Zhu
16a6b1d83a [RL] Add --nccl-port to prevent port conflict (#7418) 2025-07-04 18:48:57 -07:00
Lianmin Zheng
14229ccf8f Move mem_fraction_static adjustment for multimodal models to server_args.py & Fix session control & Other cleanups (#7748) 2025-07-04 16:33:33 -07:00
Yi Zhang
8c298031d5 refactor llama4 dp attention logic (#7729) 2025-07-03 22:48:11 -07:00
YanbingJiang
4de0395343 Add V2-lite model test (#7390) 2025-07-03 22:25:50 -07:00
Co-authored-by: DiweiSun <105627594+DiweiSun@users.noreply.github.com>
Ke Bao
8b1942c6cc Remove type conversion and fix id map in topk (#7759) 2025-07-03 18:13:32 -07:00
Yi Zhang
489934be0a fuse renormal into moe topk softmax kernel python code (#7751) 2025-07-03 16:22:14 -07:00
Co-authored-by: ispobock <ispobaoke@gmail.com>
Co-authored-by: zhyncs <me@zhyncs.com>
JieXin Liang
6840a7bbb2 [fix] put cpu in the first priority in get_device() (#7752) 2025-07-03 11:49:32 -07:00
yilian49
c01a1df588 [Bug] add flashinfer bool check for fusedmoe in Qwen moe models (#7723) 2025-07-03 11:32:11 -07:00
TianyuZhang1214
0099172327 feat: use D2D instead of H2H in pp (#7673) 2025-07-03 10:58:50 -07:00
Co-authored-by: alpha-baby <fujianhao1997@qq.com>
Yi Zhang
264dc6e744 [optimize] add two stream norm for qwen3 (#7740) 2025-07-03 09:59:17 -07:00
Co-authored-by: ispobock <ispobaoke@gmail.com>
Yi Zhang
646cef2e2e support qwen3 dense model dp attention (#7681) 2025-07-03 09:58:20 -07:00
Chunyuan WU
1dce6c480f [CPU] support the case where num_attention_heads or intermediate_size is not divisible by the TP size (#6771) 2025-07-03 09:51:38 -07:00
Chunyuan WU
9fcc9a80e7 [CPU] refine CPU integration code (#7647) 2025-07-03 09:51:09 -07:00
JieXin Liang
ac49dac009 [fix] fix dsv3_router_gemm filter (#7750) 2025-07-03 09:25:32 -07:00
ronnie_zheng
1e0e549766 Ascend attention backend (PA&MLA) (#7722) 2025-07-03 09:23:19 -07:00
Co-authored-by: Maksim <makcum888e@mail.ru>
Co-authored-by: VDV1985 <vladdv85@mail.ru>
AniZpZ
b58226510f fix dsv3 fused proj check (#7738) 2025-07-03 01:52:44 -07:00
Shangming Cai
2ff572e28c [CI][Router] Fix bench_one_batch_server for pd router test (#7731) 2025-07-02 23:18:24 -07:00
Signed-off-by: Shangming Cai <caishangming@linux.alibaba.com>
AniZpZ
84f2e4a0f8 fix awq and dsv3 fused gemm compatible (#7735) 2025-07-02 22:56:57 -07:00