Commit Graph

2782 Commits

Author | SHA1 | Message | Date
yilian49 | 8aa5ae6b04 | load draft model fix (#7506) | 2025-07-17 21:41:32 -07:00
Minglei Zhu | 8a32355704 | Feat: Support Granite 3.0 MoE in SGLang (#7959) | 2025-07-17 20:56:03 -07:00
Mick | e1020dc588 | refactor: simply MultimodalTokens logic (#7924) | 2025-07-17 17:59:15 -07:00
Zhao Chen | 3586b4cef2 | feat: add production metric for retracted requests due to insufficient kvcache (#7030) | 2025-07-17 11:59:05 -07:00
    Signed-off-by: Zhao Chen <zhaochen.zju@gmail.com>
Asher | 4296021499 | [Hunyuan]: Fix Dense Model Support (#8117) | 2025-07-17 10:00:11 -07:00
    Signed-off-by: Asher Zhang <asherszhang@tencent.com>
Ziqi Fan | 01857fab61 | fix: update HostKVCache init to report correct msg when available memory is not enough (#8102) | 2025-07-17 21:24:34 +08:00
fzyzcjy | 519ff5c8e6 | Super tiny fix typo (#8046) | 2025-07-17 21:15:51 +08:00
Cheng Wan | 49b8777460 | Refactor: move all quantization-related code to srt/layer/quantization (#7989) | 2025-07-17 00:47:07 -07:00
hzh0425 | 5c08a36cbf | [Fix] ensure DeepGEMM is only enabled for FP8_W8A8 models (#8110) | 2025-07-16 21:33:29 -07:00
Cheng Wan | 9069884b51 | [ci] disable memory imbalance check for draft worker (#8108) | 2025-07-16 20:41:47 -07:00
Yingchun Lai | 795668dc73 | feat: add tp_rank, pp_rank and dp_rank labels for scheduler metrics (#7597) | 2025-07-16 17:55:59 -07:00
    Co-authored-by: Stefan He <hebiaobuaa@gmail.com>
Mick | 4395c87a9b | refactor: unify names of the feature field of MultimodalDataItem (#8075) | 2025-07-16 17:52:38 -07:00
Peng Zhang | c28ad1990d | [1/n] chore: decouple quantization implementation from vLLM dependency (#7992) | 2025-07-16 15:56:26 -07:00
Xiaoze Fan | 570d33437b | [Feature] Layer-wise Prefill (#7634) | 2025-07-17 01:57:46 +08:00
    Signed-off-by: jason-fxz <jason341132@qq.com>
    Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
YanbingJiang | b188a89a5d | Fix CI xeon test with triton 3.3.1 (#8086) | 2025-07-16 02:12:23 -07:00
Mick | 497efe747d | Revert "feat: replace Decord with video_reader-rs" (#8077) | 2025-07-15 20:04:56 -07:00
Qiaolin Yu | 69f453e5a4 | Use device_group for all_gather when disabling overlap scheduling (#8001) | 2025-07-15 19:38:58 -07:00
Qiaolin Yu | 3bc43c683e | Fix different device type adjustment in PP (#7760) | 2025-07-15 19:37:14 -07:00
Xinyuan Tong | 7498522f7d | update transformers to 4.53.2 (#8029) | 2025-07-15 18:24:39 -07:00
    Signed-off-by: Xinyuan Tong <justinning0323@outlook.com>
strgrb | 194841e329 | remove kv_a.congigous in DeepseekV2AttentionMLA (#8058) | 2025-07-15 18:20:41 -07:00
    Co-authored-by: Zhang Kaihong <zhangkaihong.zkh@alibaba-inc.com>
kozo | ebff5fcb06 | feat: replace Decord with video_reader-rs (#5163) | 2025-07-15 18:17:34 -07:00
    Signed-off-by: Xinyuan Tong <justinning0323@outlook.com>
    Co-authored-by: Xinyuan Tong <justinning0323@outlook.com>
yhang | 14f1f1514b | H20 tune config for Kimi (#8047) | 2025-07-15 13:48:31 -07:00
Albert | 38216cf049 | concurrently load weights of DeepseekV2ForCausalLM (#7943) | 2025-07-15 13:41:19 -07:00
    Signed-off-by: Tianyu Zhou <albert.zty@antgroup.com>
jiawei | f1f1d1d40d | Fix the input tools format and history tool_calls in OpenAI API (#6556) | 2025-07-15 00:58:55 -07:00
Xinyuan Tong | 9120e83d03 | fix: remove redundant rotary embedding cache recomputation in MiniCPM (#8022) | 2025-07-15 00:12:45 -07:00
Xinyuan Tong | 6e923dbd30 | feat: update multimodal data handling in engine entrypoint (#8002) | 2025-07-15 00:12:22 -07:00
    Signed-off-by: Xinyuan Tong <justinning0323@outlook.com>
Qi Yuhang | c268c11c71 | [feat] Support fusion kernel for constructing quant input and scale factor for fp8_blockwise_scaled_grouped_mm (#8023) | 2025-07-15 00:02:44 -07:00
杨睿 | 9b560c3e1c | fix: modality length mismatch with image_data (#7887) | 2025-07-15 14:27:54 +08:00
ehuaa | 64e78bb31b | prevent server crash from potential invalid grammar (#7897) | 2025-07-15 11:21:45 +08:00
hzh0425 | 7c39e8a198 | Fix Bug 'get_cpu_copy not Implemented' in pd offloading mode (#7982) | 2025-07-14 14:57:10 -07:00
Lifu Huang | d969504d9a | Fix flaky CI: test_vlm_models (#8006) | 2025-07-14 14:56:41 -07:00
ykcombat | d4d0c7c367 | [Feature] TP Group Switching for PD-Multiplexing (#7653) | 2025-07-15 02:35:46 +08:00
Lianmin Zheng | 8d2cf38c79 | [Minor] Remove redundant print (#8005) | 2025-07-14 10:55:13 -07:00
Hank Han | 2117f82def | [ci] CI supports use cached models (#7874) | 2025-07-14 11:42:21 +00:00
Yusong Gao | c07f647c9f | perf: add kimi k2 fused_moe tuning config for h30_3e (#8021) | 2025-07-14 02:56:11 -07:00
    Co-authored-by: yudian0504 <yudian.zy@antgroup.com>
Chunyuan WU | 07452cbe8e | [CPU] fix no attribute 'can_fuse_mlp_allreduce' error (#8010) | 2025-07-14 01:32:43 -07:00
mqhc2020 | a562c8a35c | [Dockerfile] Multi-arch support for ROCm (#7902) | 2025-07-14 06:13:09 +00:00
    Co-authored-by: Lin, Soga <soga.lin@amd.com>
    Co-authored-by: HaiShaw <hixiao@gmail.com>
Praneth Paruchuri | cb736df854 | Support for Phi-1.5 & Phi-2 models (#7862) | 2025-07-13 18:43:40 -07:00
Lifu Huang | e2ed9d049a | Refactor dynamic LoRA update to fix incorrect handling of variant weight shapes (#7844) | 2025-07-13 18:36:01 -07:00
Hanming Lu | 9379da77de | SWA Prefix Cache (#7367) | 2025-07-13 12:31:07 -07:00
    Co-authored-by: Ying Sheng <sqy1415@gmail.com>
ehuaa | 0c55cbcfc5 | [BugFix] add verify logit_bias to avoid crash because of IndexError (#7749) | 2025-07-14 02:44:12 +08:00
fzyzcjy | c46e069d34 | Tiny fix mooncake log warning wrong output (#7952) | 2025-07-12 21:22:44 -07:00
Ying Sheng | 42fc44100a | [minor] Add server_args check for Llama4 with hybrid (#7988) | 2025-07-12 20:13:40 -07:00
Morpheus Guo | 5f6756b038 | [BugFix] fix pre_reorder_triton_kernel default int32 issue (#7814) | 2025-07-12 13:42:36 -07:00
Cheng Wan | 98aa836bbf | Overlap the gating function with shared experts in DeepSeek (#7978) | 2025-07-12 13:41:50 -07:00
Ying Sheng | ccfa084125 | [script] update loogle test (#7975) | 2025-07-12 00:06:17 -07:00
Ying Sheng | bcc5ba94b4 | [minor fix] SWA missing methods (#7972) | 2025-07-11 23:57:02 -07:00
Ying Sheng | cee9f329c4 | [minor fix] llama4 hybrid memory (#7950) | 2025-07-11 23:11:36 -07:00
Yineng Zhang | eb118d88c4 | chore: bump v0.4.9.post2 (#7963) | 2025-07-11 21:11:20 -07:00
Yineng Zhang | 732fc8e405 | chore: upgrade sgl-kernel 0.2.5 (#7971) | 2025-07-11 20:35:06 -07:00