Commit Graph

3956 Commits

Author | SHA1 | Message | Date
JieXin Liang | c04a8a820b | [fix] fix misusing of is_cuda (#7790) | 2025-07-05 04:02:14 -07:00
Cheng Wan | 6c903611ca | Fix incorrect spec_num_draft_tokens in draft_extend (#7757) | 2025-07-05 02:18:16 -07:00
Yineng Zhang | 77cfea689d | chore: upgrade sgl-kernel v0.2.3 (#7786) | 2025-07-05 01:55:55 -07:00
Cheng Wan | 8fc910db03 | DP Attention with Auto DeepEP Dispatch (#7222) | 2025-07-05 01:54:24 -07:00
Yineng Zhang | 75354d9ae9 | fix: use nvidia-nccl-cu12 2.27.5 (#7787) | 2025-07-05 01:28:21 -07:00
Yineng Zhang | 4fece12be9 | chore: bump sgl-kernel v0.2.3 (#7784) | 2025-07-05 00:05:45 -07:00
Mick | c797322280 | fix: fix apply_shuffle_mul_sum (#7444) | 2025-07-04 23:23:30 -07:00
Gang Chen | ef8a29c429 | Embedding parallel by attn_tp (#7623) | 2025-07-04 23:21:56 -07:00
Qi Yuhang | 8e9fb43d82 | Optimize Hopper CUTLASS FP8 Blockwise Grouped GEMM Kernel in Small K Scenario (#7782) | 2025-07-04 22:25:49 -07:00
Leng Yue | 8364608930 | add model: qwen2-audio (#7596) | 2025-07-04 21:13:10 -07:00
SijiaYang | da3890e82a | [1/n]: add cutlass W4A8 moe kernel for hopper architecture (#7772) | 2025-07-04 20:50:12 -07:00
    Signed-off-by: yangsijia.614 <yangsijia.614@bytedance.com>
    Co-authored-by: yicwang <yichen.wang@bytedance.com>
Cheng Wan | cb432f1770 | saving hidden_states.clone() (#7705) | 2025-07-04 20:07:42 -07:00
Ximingwang-09 | 1964c325de | [feat] Support EAGLE3 for Qwen (#7745) | 2025-07-04 19:50:28 -07:00
    Co-authored-by: 纬杭 <ximing.wxm@antgroup.com>
    Co-authored-by: zyksir <zyksir@outlook.com>
Caproni | af5647748a | [Fix] Alloc return type error (#7778) | 2025-07-04 19:00:40 -07:00
    Signed-off-by: Capronir <839972205@qq.com>
Zilin Zhu | af46f299f9 | [RL] add pause and continue generation for async rl training (#7419) | 2025-07-04 18:49:49 -07:00
Zilin Zhu | 16a6b1d83a | [RL] Add --nccl-port to prevent port conflict (#7418) | 2025-07-04 18:48:57 -07:00
Lianmin Zheng | 14229ccf8f | Move mem_fraction_static adjustment for multimodal models to server_args.py & Fix session control & Other cleanups (#7748) | 2025-07-04 16:33:33 -07:00
Kay Yan | 975a5ec69c | [fix] update bench_speculative.py for compatibility (#7764) | 2025-07-04 16:32:54 +08:00
    Signed-off-by: Kay Yan <kay.yan@daocloud.io>
Yuchen Cheng | 1e3e3add3d | fix(docs): fix the broken link in docs/references/production_metrics.md (#7741) | 2025-07-03 23:46:07 -07:00
    Signed-off-by: rudeigerc <rudeigerc@gmail.com>
Yi Zhang | 8c298031d5 | refactor llama4 dp attention logic (#7729) | 2025-07-03 22:48:11 -07:00
YanbingJiang | 4de0395343 | Add V2-lite model test (#7390) | 2025-07-03 22:25:50 -07:00
    Co-authored-by: DiweiSun <105627594+DiweiSun@users.noreply.github.com>
Ke Bao | 8b1942c6cc | Remove type conversion and fix id map in topk (#7759) | 2025-07-03 18:13:32 -07:00
Yi Zhang | 489934be0a | fuse renormal into moe topk softmax kernel python code (#7751) | 2025-07-03 16:22:14 -07:00
    Co-authored-by: ispobock <ispobaoke@gmail.com>
    Co-authored-by: zhyncs <me@zhyncs.com>
Xinyuan Tong | 43f93f632c | fix CI: update native api ipynb (#7754) | 2025-07-03 15:25:00 -07:00
    Signed-off-by: Xinyuan Tong <justinning0323@outlook.com>
Yineng Zhang | aca1101a13 | chore: bump sgl-kernel 0.2.2 (#7755) | 2025-07-03 12:49:10 -07:00
Yi Zhang | 2998c4bdf4 | [optimize] fuse renormalize into moe_topk_softmax (#7744) | 2025-07-03 12:42:44 -07:00
    Co-authored-by: ispobock <ispobaoke@gmail.com>
JieXin Liang | 6840a7bbb2 | [fix] put cpu in the first priority in get_device() (#7752) | 2025-07-03 11:49:32 -07:00
yilian49 | c01a1df588 | [Bug] add flashinfer bool check for fusedmoe in Qwen moe models (#7723) | 2025-07-03 11:32:11 -07:00
TianyuZhang1214 | 0099172327 | feat: use D2D instead of H2H in pp (#7673) | 2025-07-03 10:58:50 -07:00
    Co-authored-by: alpha-baby <fujianhao1997@qq.com>
Yi Zhang | 264dc6e744 | [optimize] add two stream norm for qwen3 (#7740) | 2025-07-03 09:59:17 -07:00
    Co-authored-by: ispobock <ispobaoke@gmail.com>
Yi Zhang | 646cef2e2e | support qwen3 dense model dp attention (#7681) | 2025-07-03 09:58:20 -07:00
Chunyuan WU | 1dce6c480f | [CPU] support the case where num_attention_heads or intermediate_size is not divisible by the TP size (#6771) | 2025-07-03 09:51:38 -07:00
Chunyuan WU | 9fcc9a80e7 | [CPU] refine CPU integration code (#7647) | 2025-07-03 09:51:09 -07:00
JieXin Liang | ac49dac009 | [fix] fix dsv3_router_gemm filter (#7750) | 2025-07-03 09:25:32 -07:00
ronnie_zheng | 1e0e549766 | Ascend attention backend(PA&MLA) (#7722) | 2025-07-03 09:23:19 -07:00
    Co-authored-by: Maksim <makcum888e@mail.ru>
    Co-authored-by: VDV1985 <vladdv85@mail.ru>
AniZpZ | b58226510f | fix dsv3 fused proj check (#7738) | 2025-07-03 01:52:44 -07:00
ayrnb | 2c4feaf308 | Add CUTLASS FP8 Blockscale MoE kernel for Hopper architecture (#7278) | 2025-07-02 23:27:03 -07:00
    Co-authored-by: HydraQYH <QYH820@Outlook.com>
    Co-authored-by: TianQiLin666666 <1834987979@qq.com>
Shangming Cai | 2ff572e28c | [CI][Router] Fix bench_one_batch_server for pd router test (#7731) | 2025-07-02 23:18:24 -07:00
    Signed-off-by: Shangming Cai <caishangming@linux.alibaba.com>
AniZpZ | 84f2e4a0f8 | fix awq and dsv3 fused gemm compatible (#7735) | 2025-07-02 22:56:57 -07:00
Chunyuan WU | 8f844db699 | [CPU] fix all_reduce and all_gather (#6770) | 2025-07-02 22:39:45 -07:00
    Co-authored-by: blzheng <beilei.zheng@intel.com>
Chunyuan WU | 36cc3ffdc7 | [CPU] [sgl-kernel] set dispatch key of initialize to CatchAll (#7734) | 2025-07-02 22:39:24 -07:00
Ziming Huang | 1bebd3154e | Fix num_tokens_pre_allocated in disaggregation log (#7714) | 2025-07-02 22:31:49 -07:00
Albert | d3c275b117 | Support updating weights at once by stopping all requests (#6698) | 2025-07-02 22:26:06 -07:00
    Signed-off-by: Tianyu Zhou <albert.zty@antgroup.com>
    Co-authored-by: Zilin Zhu <zhuzilinallen@gmail.com>
YanbingJiang | b044400dd3 | Support non-contiguous query input for extend/decode attention (#7462) | 2025-07-02 19:59:45 -07:00
Chunyuan WU | 40e5cb7a9c | [CPU] Bind threads and numa node for each TP rank (#6549) | 2025-07-02 19:57:59 -07:00
    Co-authored-by: srinarayan-srikanthan <srinarayan.srikanthan@intel.com>
Xiaoyu Zhang | 8e64140e35 | [b200] support trt-llm allreduce fuse rms_norm_add kernel (#7621) | 2025-07-02 19:36:20 -07:00
Zilin Zhu | 82f021e22e | [router] add --log-level to sgl-router (#6512) | 2025-07-02 19:33:04 -07:00
Zilin Zhu | 0626f678de | [RL] support update_weights_from_distributed with different group and multiple weights (#7292) | 2025-07-02 19:29:11 -07:00
Zilin Zhu | 09e699bba4 | [RL] add --skip-warmup (#7416) | 2025-07-02 18:50:43 -07:00
Hubert Lu | b116b21a46 | [AMD] Temporarily disable test_no_overlap_scheduler and test_vision_chunked_prefill (#7717) | 2025-07-02 12:39:18 -07:00