Commit Graph

2906 Commits

Author SHA1 Message Date
Qi Yuhang
c268c11c71 [feat] Support fusion kernel for constructing quant input and scale factor for fp8_blockwise_scaled_grouped_mm (#8023) 2025-07-15 00:02:44 -07:00
杨睿
9b560c3e1c fix: modality length mismatch with image_data (#7887) 2025-07-15 14:27:54 +08:00
ehuaa
64e78bb31b prevent server crash from potential invalid grammar (#7897) 2025-07-15 11:21:45 +08:00
hzh0425
7c39e8a198 Fix Bug 'get_cpu_copy not Implemented' in pd offloading mode (#7982) 2025-07-14 14:57:10 -07:00
Lifu Huang
d969504d9a Fix flaky CI: test_vlm_models (#8006) 2025-07-14 14:56:41 -07:00
ykcombat
d4d0c7c367 [Feature] TP Group Switching for PD-Multiplexing (#7653) 2025-07-15 02:35:46 +08:00
Lianmin Zheng
8d2cf38c79 [Minor] Remove redundant print (#8005) 2025-07-14 10:55:13 -07:00
Hank Han
2117f82def [ci] CI supports using cached models (#7874) 2025-07-14 11:42:21 +00:00
Yusong Gao
c07f647c9f perf: add kimi k2 fused_moe tuning config for h30_3e (#8021) 2025-07-14 02:56:11 -07:00
Co-authored-by: yudian0504 <yudian.zy@antgroup.com>
Chunyuan WU
07452cbe8e [CPU] fix no attribute 'can_fuse_mlp_allreduce' error (#8010) 2025-07-14 01:32:43 -07:00
mqhc2020
a562c8a35c [Dockerfile] Multi-arch support for ROCm (#7902) 2025-07-14 06:13:09 +00:00
Co-authored-by: Lin, Soga <soga.lin@amd.com>
Co-authored-by: HaiShaw <hixiao@gmail.com>
Praneth Paruchuri
cb736df854 Support for Phi-1.5 & Phi-2 models (#7862) 2025-07-13 18:43:40 -07:00
Lifu Huang
e2ed9d049a Refactor dynamic LoRA update to fix incorrect handling of variant weight shapes (#7844) 2025-07-13 18:36:01 -07:00
Hanming Lu
9379da77de SWA Prefix Cache (#7367) 2025-07-13 12:31:07 -07:00
Co-authored-by: Ying Sheng <sqy1415@gmail.com>
ehuaa
0c55cbcfc5 [BugFix] add verify logit_bias to avoid crash because of IndexError (#7749) 2025-07-14 02:44:12 +08:00
fzyzcjy
c46e069d34 Tiny fix mooncake log warning wrong output (#7952) 2025-07-12 21:22:44 -07:00
Ying Sheng
42fc44100a [minor] Add server_args check for Llama4 with hybrid (#7988) 2025-07-12 20:13:40 -07:00
Morpheus Guo
5f6756b038 [BugFix] fix pre_reorder_triton_kernel default int32 issue (#7814) 2025-07-12 13:42:36 -07:00
Cheng Wan
98aa836bbf Overlap the gating function with shared experts in DeepSeek (#7978) 2025-07-12 13:41:50 -07:00
Ying Sheng
ccfa084125 [script] update loogle test (#7975) 2025-07-12 00:06:17 -07:00
Ying Sheng
bcc5ba94b4 [minor fix] SWA missing methods (#7972) 2025-07-11 23:57:02 -07:00
Ying Sheng
cee9f329c4 [minor fix] llama4 hybrid memory (#7950) 2025-07-11 23:11:36 -07:00
Yineng Zhang
eb118d88c4 chore: bump v0.4.9.post2 (#7963) 2025-07-11 21:11:20 -07:00
Yineng Zhang
732fc8e405 chore: upgrade sgl-kernel 0.2.5 (#7971) 2025-07-11 20:35:06 -07:00
fzyzcjy
2a2d3478af Fix wrong gemm branch causing 250us slowdown (#7969) 2025-07-11 19:45:09 -07:00
Xiaoyu Zhang
aa2056091a delete useless code caused by fuse allreduce+add_rmsnorm PR (#7970) 2025-07-11 19:43:38 -07:00
Yineng Zhang
61bb285827 chore: upgrade xgrammar 0.1.21 (#7962) 2025-07-11 19:26:52 -07:00
fzyzcjy
880221bd3b Revert "[PD Disaggregation] replace transfer with batch transfer for better performance (#7236)" (#7968) 2025-07-11 19:03:01 -07:00
Peng Zhang
191d836ff6 fix: minor fix for modelopt weight load compatibility (#7953) 2025-07-11 14:20:58 -07:00
ronnie_zheng
86044712c6 [feature] kv transfer support of ascend npu (#7795) 2025-07-11 00:07:51 -07:00
Co-authored-by: liupeng <liupeng374@huawei.com>
Atream
615553079d Support Kimi K2 (#7940) 2025-07-11 00:02:21 -07:00
Xiaoyu Zhang
49a5915f53 [ready b200] fuse allreduce+add_rmsnorm in prepare_attention + mlp module (#7775) 2025-07-10 15:12:39 -07:00
ronnie_zheng
766392c6bd [feature] Ascend quantization support (#7791) 2025-07-10 09:17:37 -07:00
Co-authored-by: ichernob <ichernobnn@gmail.com>
Co-authored-by: liupeng <liupeng374@huawei.com>
likesen-alibaba
4a0d19198b Fix bug of deepseek-v3 under DP+EP mode with large batchsize/seqlen (#6449) 2025-07-10 01:19:56 -07:00
Zaili Wang
5748241549 add sentencepiece as dependency explicitly (#7922) 2025-07-10 01:06:27 -07:00
Binyao Jiang
2d54d4bb64 Feat: Support Phi-3.5-MoE in SGLang (#7907) 2025-07-09 23:51:33 -07:00
Mick
b5e3d6031c vlm: support video as an input modality (#5888) 2025-07-09 23:48:35 -07:00
kyleliang-nv
dd445a41f5 [feature] Add start step profile argument in /start_profile (#7608) 2025-07-09 18:42:15 -07:00
almaslof
f9df11ae86 Remove unused imports (#7898) 2025-07-09 22:36:48 +08:00
jianan-gu
d389bedf72 [CPU][Qwen3 MoE] Enable fused_topk CPU fusion and enhance FP8 TP padding (#7838) 2025-07-09 02:04:21 -07:00
Cheng Wan
d487555f84 [CI] Add deepep tests to CI (#7872) 2025-07-09 01:49:47 -07:00
Xinyuan Tong
e5888eddda Fixes typo in assertion message (#7895) 2025-07-09 01:47:14 -07:00
Signed-off-by: Xinyuan Tong <justinning0323@outlook.com>
Yineng Zhang
066f4ec91f chore: bump v0.4.9.post1 (#7882) 2025-07-09 00:28:17 -07:00
Yineng Zhang
b6b6268ccf Revert "Embedding parallel by attn_tp (#7623)" (#7880) 2025-07-08 22:03:09 -07:00
Shangming Cai
64c5907e12 [PD] Add guidance for prefill bootstrap timeout (#7846) 2025-07-08 21:00:34 -07:00
Signed-off-by: Shangming Cai <caishangming@linux.alibaba.com>
Chunyuan WU
128f16a817 [CPU] convert topk_weights to fp32 for INT8 and FP8 paths (for llama4) and fix LmHead weight pack (#7818) 2025-07-08 19:27:24 -07:00
ybyang
4986104618 Bump xgrammar's version to 0.1.20 (#7866) 2025-07-08 17:55:30 -07:00
Brayden Zhong
a37e1247c1 [Multimodal][Perf] Use pybase64 instead of base64 (#7724) 2025-07-08 14:00:58 -07:00
Xinyuan Tong
136c6e0431 fix: Handles input_embeds in GenerateReqInput when n>1 (#7830) 2025-07-08 14:00:42 -07:00
Signed-off-by: Xinyuan Tong <justinning0323@outlook.com>
Xinyuan Tong
43e20c0647 Support Mimo-VL (#7579) 2025-07-08 14:00:25 -07:00
Signed-off-by: Xinyuan Tong <justinning0323@outlook.com>