Commit Graph

2740 Commits

Ying Sheng
42fc44100a [minor] Add server_args check for Llama4 with hybrid (#7988) 2025-07-12 20:13:40 -07:00
Morpheus Guo
5f6756b038 [BugFix] fix pre_reorder_triton_kernel default int32 issue (#7814) 2025-07-12 13:42:36 -07:00
Cheng Wan
98aa836bbf Overlap the gating function with shared experts in DeepSeek (#7978) 2025-07-12 13:41:50 -07:00
Ying Sheng
ccfa084125 [script] update loogle test (#7975) 2025-07-12 00:06:17 -07:00
Ying Sheng
bcc5ba94b4 [minor fix] SWA missing methods (#7972) 2025-07-11 23:57:02 -07:00
Ying Sheng
cee9f329c4 [minor fix] llama4 hybrid memory (#7950) 2025-07-11 23:11:36 -07:00
Yineng Zhang
eb118d88c4 chore: bump v0.4.9.post2 (#7963) 2025-07-11 21:11:20 -07:00
Yineng Zhang
732fc8e405 chore: upgrade sgl-kernel 0.2.5 (#7971) 2025-07-11 20:35:06 -07:00
fzyzcjy
2a2d3478af Fix wrong gemm branch causing 250us slowdown (#7969) 2025-07-11 19:45:09 -07:00
Xiaoyu Zhang
aa2056091a delete useless code caused by fuse allreduce+add_rmsnorm PR (#7970) 2025-07-11 19:43:38 -07:00
Yineng Zhang
61bb285827 chore: upgrade xgrammar 0.1.21 (#7962) 2025-07-11 19:26:52 -07:00
fzyzcjy
880221bd3b Revert "[PD Disaggregation] replace transfer with batch transfer for better performance (#7236)" (#7968) 2025-07-11 19:03:01 -07:00
Peng Zhang
191d836ff6 fix: minor fix for modelopt weight load compatibility (#7953) 2025-07-11 14:20:58 -07:00
ronnie_zheng
86044712c6 [feature] kv transfer support of ascend npu (#7795)
Co-authored-by: liupeng <liupeng374@huawei.com>
2025-07-11 00:07:51 -07:00
Atream
615553079d Support Kimi K2 (#7940) 2025-07-11 00:02:21 -07:00
Xiaoyu Zhang
49a5915f53 [ready b200] fuse allreduce+add_rmsnorm in prepare_attention + mlp module (#7775) 2025-07-10 15:12:39 -07:00
ronnie_zheng
766392c6bd [feature] Ascend quantization support (#7791)
Co-authored-by: ichernob <ichernobnn@gmail.com>
Co-authored-by: liupeng <liupeng374@huawei.com>
2025-07-10 09:17:37 -07:00
likesen-alibaba
4a0d19198b Fix bug of deepseek-v3 under DP+EP mode with large batchsize/seqlen (#6449) 2025-07-10 01:19:56 -07:00
Zaili Wang
5748241549 add sentencepiece as dependency explicitly (#7922) 2025-07-10 01:06:27 -07:00
Binyao Jiang
2d54d4bb64 Feat: Support Phi-3.5-MoE in SGLang (#7907) 2025-07-09 23:51:33 -07:00
Mick
b5e3d6031c vlm: support video as an input modality (#5888) 2025-07-09 23:48:35 -07:00
kyleliang-nv
dd445a41f5 [feature] Add start step profile argument in /start_profile (#7608) 2025-07-09 18:42:15 -07:00
almaslof
f9df11ae86 Remove unused imports (#7898) 2025-07-09 22:36:48 +08:00
jianan-gu
d389bedf72 [CPU][Qwen3 MoE] Enable fused_topk CPU fusion and enhance FP8 TP padding (#7838) 2025-07-09 02:04:21 -07:00
Cheng Wan
d487555f84 [CI] Add deepep tests to CI (#7872) 2025-07-09 01:49:47 -07:00
Xinyuan Tong
e5888eddda Fixes typo in assertion message (#7895)
Signed-off-by: Xinyuan Tong <justinning0323@outlook.com>
2025-07-09 01:47:14 -07:00
Yineng Zhang
066f4ec91f chore: bump v0.4.9.post1 (#7882) 2025-07-09 00:28:17 -07:00
Yineng Zhang
b6b6268ccf Revert "Embedding parallel by attn_tp (#7623)" (#7880) 2025-07-08 22:03:09 -07:00
Shangming Cai
64c5907e12 [PD] Add guidance for prefill bootstrap timeout (#7846)
Signed-off-by: Shangming Cai <caishangming@linux.alibaba.com>
2025-07-08 21:00:34 -07:00
Chunyuan WU
128f16a817 [CPU]convert topk_weights to fp32 for INT8 and FP8 paths (for llama4) and fix LmHead weight pack (#7818) 2025-07-08 19:27:24 -07:00
ybyang
4986104618 Bump xgrammar's version to 0.1.20 (#7866) 2025-07-08 17:55:30 -07:00
Brayden Zhong
a37e1247c1 [Multimodal][Perf] Use pybase64 instead of base64 (#7724) 2025-07-08 14:00:58 -07:00
Xinyuan Tong
136c6e0431 fix: Handles input_embeds in GenerateReqInput when n>1 (#7830)
Signed-off-by: Xinyuan Tong <justinning0323@outlook.com>
2025-07-08 14:00:42 -07:00
Xinyuan Tong
43e20c0647 Support Mimo-VL (#7579)
Signed-off-by: Xinyuan Tong <justinning0323@outlook.com>
2025-07-08 14:00:25 -07:00
Xinyuan Tong
4bab50a6b5 Fix llama4 vision (#7840)
Signed-off-by: Xinyuan Tong <justinning0323@outlook.com>
2025-07-08 14:00:03 -07:00
Xiaoyu Zhang
2e7ab862e3 Fix illegal memory in trtllm allreduce fusion (#7864) 2025-07-08 11:47:17 -07:00
kk
653b873b91 Fix cache modules of triton import error (#7832) 2025-07-08 02:50:09 -07:00
Shangming Cai
d379bda4fa [Bugfix] Fix two batch overlap with auto DeepEP Dispatch (#7853)
Signed-off-by: Shangming Cai <caishangming@linux.alibaba.com>
2025-07-08 02:49:32 -07:00
Zhiyu
659907e32b Enable ModelOpt Llama4 fp8 checkpoint deployment in SGLang (#7129) 2025-07-08 00:19:50 -07:00
SijiaYang
cb9d91ea8a feat: support DeepSeek-R1-W4AFP8 model with ep-moe mode (#7762)
Signed-off-by: yangsijia.614 <yangsijia.614@bytedance.com>
2025-07-07 14:47:21 -07:00
Haohui Mai
076313bd09 [AMD] Fail gracefully when AITER is unavailable on gfx90a GPUs (#7187) 2025-07-07 09:09:58 +00:00
Ziming Huang
9abe1163ac fix duplicate args in schedule_batch (#7816) 2025-07-07 01:31:03 -07:00
Zhiqiang Xie
2fc824b84c Kernels for efficient KV cache IO (#7313) 2025-07-06 22:53:36 -07:00
Yuan Luo
253454de9b Integrate triton moe kernel (#7689)
Co-authored-by: luoyuan.luo <luoyuan.luo@antgroup.com>
2025-07-06 20:05:49 -07:00
yuhsuan-t
8d4a01cbd7 Log the timestamps of each prefill/decode iteration (#6094)
Co-authored-by: yuhsuan-t <12108766+yuhsaun-t@users.noreply.github.com>
2025-07-07 01:57:27 +00:00
Nan Jiang
ba69c153f6 [RL]: Fix error tagging in multi-stage wake up (#7812)
Co-authored-by: hebiao064 <hebiaobuaa@gmail.com>
2025-07-06 16:51:29 -07:00
Stefan He
3589aa79b0 [RL] Fix illegal memory for _import_static_state (#7733)
Co-authored-by: nanjiangwill <willjiang2018@gmail.com>
2025-07-06 16:25:21 -07:00
Lifu Huang
ea4bf12286 Fix division-by-zero bug in LoRA triton kernels. (#7785) 2025-07-06 00:45:29 -07:00
fzyzcjy
a291439a59 Support logprobs in two-batch overlap (#7709) 2025-07-05 19:05:32 -07:00
JieXin Liang
54411f6afa fix: disable dsv3_router_gemm in dsv3_nextn (#7793) 2025-07-05 19:01:01 -07:00