Commit Graph

2680 Commits

Each entry below lists the author, the abbreviated commit SHA1 and message, any Signed-off-by / Co-authored-by trailers, and the commit date.
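A listing in this Author / SHA1 / Message / Date shape can be regenerated from any local checkout of the repository with `git log` pretty-format placeholders (`%an` author name, `%h` abbreviated SHA1, `%s` subject line, `%ad` author date); this is a sketch, not the exact command the hosting page uses:

```shell
# Sketch: print the five most recent commits as
#   author
#   <short sha> <subject>
#   <date>
# --date=format: applies strftime to the author date; note %z prints
# the offset as -0700 rather than the -07:00 shown on the web page.
git log -n 5 \
    --date=format:'%Y-%m-%d %H:%M:%S %z' \
    --pretty=format:'%an%n%h %s%n%ad%n'
```

Trailers such as Signed-off-by and Co-authored-by live in the commit message body; they can be included by adding the `%(trailers)` placeholder to the format string.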
Ximingwang-09
1964c325de [feat] Support EAGLE3 for Qwen (#7745)
Co-authored-by: 纬杭 <ximing.wxm@antgroup.com>
Co-authored-by: zyksir <zyksir@outlook.com>
2025-07-04 19:50:28 -07:00

Caproni
af5647748a [Fix] Alloc return type error (#7778)
Signed-off-by: Capronir <839972205@qq.com>
2025-07-04 19:00:40 -07:00

Zilin Zhu
af46f299f9 [RL] add pause and continue generation for async rl training (#7419)
2025-07-04 18:49:49 -07:00

Zilin Zhu
16a6b1d83a [RL] Add --nccl-port to prevent port conflict (#7418)
2025-07-04 18:48:57 -07:00

Lianmin Zheng
14229ccf8f Move mem_fraction_static adjustment for multimodal models to server_args.py & Fix session control & Other cleanups (#7748)
2025-07-04 16:33:33 -07:00

Yi Zhang
8c298031d5 refactor llama4 dp attention logic (#7729)
2025-07-03 22:48:11 -07:00

YanbingJiang
4de0395343 Add V2-lite model test (#7390)
Co-authored-by: DiweiSun <105627594+DiweiSun@users.noreply.github.com>
2025-07-03 22:25:50 -07:00

Ke Bao
8b1942c6cc Remove type conversion and fix id map in topk (#7759)
2025-07-03 18:13:32 -07:00

Yi Zhang
489934be0a fuse renormal into moe topk softmax kernel python code (#7751)
Co-authored-by: ispobock <ispobaoke@gmail.com>
Co-authored-by: zhyncs <me@zhyncs.com>
2025-07-03 16:22:14 -07:00

JieXin Liang
6840a7bbb2 [fix] put cpu in the first priority in get_device() (#7752)
2025-07-03 11:49:32 -07:00

yilian49
c01a1df588 [Bug] add flashinfer bool check for fusedmoe in Qwen moe models (#7723)
2025-07-03 11:32:11 -07:00

TianyuZhang1214
0099172327 feat: use D2D instead of H2H in pp (#7673)
Co-authored-by: alpha-baby <fujianhao1997@qq.com>
2025-07-03 10:58:50 -07:00

Yi Zhang
264dc6e744 [optimize] add two stream norm for qwen3 (#7740)
Co-authored-by: ispobock <ispobaoke@gmail.com>
2025-07-03 09:59:17 -07:00
Yi Zhang
646cef2e2e support qwen3 dense model dp attention (#7681)
2025-07-03 09:58:20 -07:00

Chunyuan WU
1dce6c480f [CPU] support the case where num_attention_heads or intermediate_size is not divisible by the TP size (#6771)
2025-07-03 09:51:38 -07:00

Chunyuan WU
9fcc9a80e7 [CPU] refine CPU integration code (#7647)
2025-07-03 09:51:09 -07:00

JieXin Liang
ac49dac009 [fix] fix dsv3_router_gemm filter (#7750)
2025-07-03 09:25:32 -07:00

ronnie_zheng
1e0e549766 Ascend attention backend(PA&MLA) (#7722)
Co-authored-by: Maksim <makcum888e@mail.ru>
Co-authored-by: VDV1985 <vladdv85@mail.ru>
2025-07-03 09:23:19 -07:00

AniZpZ
b58226510f fix dsv3 fused proj check (#7738)
2025-07-03 01:52:44 -07:00

Shangming Cai
2ff572e28c [CI][Router] Fix bench_one_batch_server for pd router test (#7731)
Signed-off-by: Shangming Cai <caishangming@linux.alibaba.com>
2025-07-02 23:18:24 -07:00

AniZpZ
84f2e4a0f8 fix awq and dsv3 fused gemm compatible (#7735)
2025-07-02 22:56:57 -07:00

Chunyuan WU
8f844db699 [CPU] fix all_reduce and all_gather (#6770)
Co-authored-by: blzheng <beilei.zheng@intel.com>
2025-07-02 22:39:45 -07:00

Ziming Huang
1bebd3154e Fix num_tokens_pre_allocated in disaggregation log (#7714)
2025-07-02 22:31:49 -07:00

Albert
d3c275b117 Support updating weights at once by stopping all requests (#6698)
Signed-off-by: Tianyu Zhou <albert.zty@antgroup.com>
Co-authored-by: Zilin Zhu <zhuzilinallen@gmail.com>
2025-07-02 22:26:06 -07:00

Chunyuan WU
40e5cb7a9c [CPU] Bind threads and numa node for each TP rank (#6549)
Co-authored-by: srinarayan-srikanthan <srinarayan.srikanthan@intel.com>
2025-07-02 19:57:59 -07:00

Xiaoyu Zhang
8e64140e35 [b200] support trt-llm allreduce fuse rms_norm_add kernel (#7621)
2025-07-02 19:36:20 -07:00

Zilin Zhu
0626f678de [RL] support update_weights_from_distributed with different group and multiple weights (#7292)
2025-07-02 19:29:11 -07:00

Zilin Zhu
09e699bba4 [RL] add --skip-warmup (#7416)
2025-07-02 18:50:43 -07:00

Baizhou Zhang
88f484ce4c Apply dsv3 router gemm kernel for deepseek-r1 fp4 (#7677)
2025-07-02 12:30:18 -07:00
AniZpZ
8e03b641ba [1/n] apply wna16marlin kernel in moe weight only quantization (#7683)
Co-authored-by: 晟海 <huangtingwei.htw@antgroup.com>
Co-authored-by: yych0745 <1398089567@qq.com>
Co-authored-by: HandH1998 <1335248067@qq.com>
Co-authored-by: 弋云 <yiyun.wyt@antgroup.com>
Co-authored-by: walker-ai <2398833647@qq.com>
2025-07-01 23:21:25 -07:00

Kyungmin Lee
b3fa5dc3c8 Fix GPTQMarlinMoE (#7697)
2025-07-01 22:34:43 -07:00

Ke Bao
00aec6ad6c Apply dsv3_fused_a_gemm kernel (#7635)
2025-07-01 22:32:05 -07:00

Lifu Huang
1a08358aed Improve error handling for requests with unloaded LoRA path(s) (#7642)
2025-07-01 20:05:34 -07:00

Yineng Zhang
f18a8fddd4 chore: upgrade flashinfer v0.2.7.post1 (#7698)
2025-07-01 14:05:57 -07:00

Simon_CQK
a7efbb2757 fix(model loader): use safe_open to prevent file handle leaks. (#7684)
2025-07-01 13:18:35 -07:00

Zhiqiang Xie
f9eb04ddb2 upgrade sgl kernel to 0.2.1 for main (#7676)
2025-07-01 00:00:13 -07:00

Xinyuan Tong
3a911b854d Refactor mm processors and Enable mixed modality processing (#7629)
Signed-off-by: Xinyuan Tong <justinning0323@outlook.com>
2025-06-30 23:14:48 -07:00

lukec
886d344964 support llama4 eagle3 (#6985)
Co-authored-by: shuaills <shishuaiuoe@gmail.com>
Co-authored-by: Shenggui Li <somerlee.9@gmail.com>
Co-authored-by: Yingyi Huang <yingyihuang2000@outlook.com>
Co-authored-by: yizhang2077 <1109276519@qq.com>
2025-06-30 22:34:10 -07:00

narutolhy
3e34e9004f Fix: sync prepare_fp8_layer_for_marlin with latest vllm changes (#7648)
2025-06-30 21:51:01 -07:00

Yineng Zhang
392e441ad1 chore: upgrade flashinfer v0.2.7 jit (#7663)
2025-06-30 13:26:26 -07:00

Lianmin Zheng
22352d47a9 Improve streaming, log_level, memory report, weight loading, and benchmark script (#7632)
Co-authored-by: Kan Wu <wukanustc@gmail.com>
2025-06-29 23:16:19 -07:00
Chunyuan WU
c5131f7a2f [CPU] add c++ kernel to bind CPU cores and memory node (#7524)
2025-06-29 19:45:25 -07:00

Lianmin Zheng
78700893ee [EAGLE] remove a wrong adjustment for page_size > 1 & topk > 1 in server_args.py (#7643)
2025-06-29 19:25:28 -07:00

JieXin Liang
b691dcc490 [misc] reduce weird rope_scaling_factor warning (#7176)
2025-06-29 15:42:45 -07:00

fzyzcjy
0c9c6c75a8 Move files related to EPLB (#7580)
2025-06-29 15:39:38 -07:00

Xinyuan Tong
8f335b5bd6 Fix stream reasoning parser and Adds Kimi reasoning parser (#7432)
Signed-off-by: Xinyuan Tong <justinning0323@outlook.com>
2025-06-29 14:39:05 -07:00

Lianmin Zheng
071a1f51ae [Minor] clean up multimodal processor and tokenizer manager (#7624)
2025-06-29 02:50:14 -07:00

Xinyuan Tong
c45e49d817 oai: Adds support for OpenAI chat completions API in bench_serving (#7036)
Signed-off-by: Xinyuan Tong <justinning0323@outlook.com>
Co-authored-by: yhyang201 <47235274+yhyang201@users.noreply.github.com>
Co-authored-by: Mick <mickjagger19@icloud.com>
2025-06-28 22:59:20 +00:00

fzyzcjy
00c7b1ad07 Let EP prefill support new DeepGEMM (#7310)
2025-06-28 01:45:30 -07:00

fzyzcjy
82eccae44e Let ep_scatter support arbitrary strides / ue8m0 format (#7309)
2025-06-28 01:38:33 -07:00