Commit Graph

3194 Commits

Author SHA1 Message Date
Zhiyu
2256d62d36 Modelopt quant config adaptation (#8829) 2025-08-18 11:27:30 -07:00
Lianmin Zheng
c480a3f6ea Minor style fixes for sgl-kernel (#9289) 2025-08-18 09:38:35 -07:00
fzyzcjy
4c0bb411e5 Further fix memory pool leak error (#9298) 2025-08-18 00:58:06 -07:00
b8zhong
716e682721 [Fix] Add undefined update_tensor_inplace function (#6307) 2025-08-18 11:11:00 +08:00
zifeitong
84b30d9e00 Set the default attention backend for GLM-4.5v to fa3 (#9245) 2025-08-17 16:34:19 -07:00
blzheng
ebbb75e917 [CPU] Fix TP padding issue on Phi-4 (#8289) 2025-08-17 16:25:26 -07:00
fzyzcjy
b498cd21d7 Tiny make fp4 moe method parameters more static (#8520) 2025-08-17 13:26:02 -07:00
kousakawang
0fc54b971e [fix]: fix cutlass moe ut and opt H20 cutlass groupGemm performance (#9272)
Co-authored-by: wanghanpei <wanghanpei@bytedance.com>
2025-08-17 13:09:49 -07:00
fzyzcjy
b3c1f2e4f2 Fix memory pool leak error (#9271) 2025-08-17 12:53:34 -07:00
Ke Bao
be1a3cd9b4 Fix swa eagle verify accuracy for Triton backend (#9279) 2025-08-17 12:52:02 -07:00
Lifu Huang
4b74c3fcca [chore] Clean up redundant lora_weight_names concept to simplify code (#9131) 2025-08-17 12:36:58 -07:00
Netanel Haber
3d77a31885 from python.sglang.srt -> from sglang.srt (#9268) 2025-08-17 02:45:45 -07:00
Netanel Haber
845d12a979 model: support nvidia/Llama-3_3-Nemotron-Super-49B-v1 (#9067)
Co-authored-by: Kyle Huang <kylhuang@nvidia.com>
2025-08-17 01:48:15 -07:00
Stefan He
e47800e176 Quick Fix GLM (#9264) 2025-08-16 23:43:41 -07:00
Mick
1df84ff414 ci: simplify multi-modality tests by using mixins (#9006) 2025-08-16 22:25:02 -07:00
Binyao Jiang
66d6be0874 Bug fix: use correct mm_items in embed_mm_inputs (#8893) 2025-08-16 19:55:56 -07:00
kk
1c1f8a118e Combine fp4.py and mxfp4.py into one file and support dynamic mxfp4 quantization in mxfp4.py (#9049)
Co-authored-by: wunhuang <wunhuang@amd.com>
2025-08-16 19:01:54 -07:00
Shangming Cai
384f8ab5ce [PD] Support PD disaggregation with Prefill PP (#8846)
Signed-off-by: Shangming Cai <caishangming@linux.alibaba.com>
Signed-off-by: Shangming Cai <csmthu@gmail.com>
Co-authored-by: root <huzhiyuan@xiaohongshu.com>
Co-authored-by: Ying Sheng <sqy1415@gmail.com>
Co-authored-by: Francis <38564764+ssssnow@users.noreply.github.com>
Co-authored-by: zitto <zhjc1124@gmail.com>
2025-08-16 18:31:31 -07:00
zyksir
6a9d6ca33c fix unexpected answer in EAGLE mode (#9252) 2025-08-16 17:45:36 -07:00
VDV1985
94371dbbd6 [feature] Ascend NPU graph support (#8027)
Co-authored-by: ronnie_zheng <zl19940307@163.com>
Co-authored-by: yezhifeng (D) <y00897525@china.huawei.com>
Co-authored-by: anon189Ty <Stari_Falcon@outlook.com>
Co-authored-by: Maksim <makcum888e@mail.ru>
Co-authored-by: ssshinigami <44640852+ssshinigami@users.noreply.github.com>
2025-08-16 17:25:17 -07:00
Hank Han
81da16f6d3 [CI] add deepseek w4a8 test on h20 ci (#7758) 2025-08-16 01:54:13 -07:00
Brayden Zhong
bc938ea13f Fix DP load for embedding (#9165) 2025-08-15 23:58:44 -07:00
Trevor Morris
eff4eb3fdd Add fp4 quantize before all-gather for Flashinfer cutlass MoE DP (max throughput) (#7667) 2025-08-15 22:08:11 -07:00
kk
983aa4967b Fix nan value generated after custom all reduce (#8663)
Co-authored-by: wunhuang <wunhuang@amd.com>
2025-08-15 12:33:54 -07:00
Hubert Lu
9c3e95d98b [AMD] Expand test coverage for AMD CI and enable apply_token_bitmask_inplace_cuda in sgl-kernel (#8268) 2025-08-15 12:32:51 -07:00
Cheng Wan
84b006b278 Cleanup MoE Refactor (#9223) 2025-08-15 02:28:33 -07:00
Xuchun Shang
189af90896 [Eagle Warning fix] replace the deprecated 'and' with & (#9215)
Signed-off-by: Xuchun Shang <xuchun.shang@linux.alibaba.com>
2025-08-15 15:43:36 +08:00
Cheng Wan
e3e75a786a Fix the deprecation warning for enable_flashinfer_mxfp4_moe (#9214) 2025-08-14 23:59:35 -07:00
shilinlee
d4db9b028b fix: the store_dtype typo for ascend mla (#9208)
Signed-off-by: shilinlee_com <836160610@qq.com>
2025-08-14 23:58:42 -07:00
hzh0425
f7dd651dbd feat(hicache-3fs): 3FS-SGLang Hierarchical Cache Deployment Guide (#9213) 2025-08-14 23:54:31 -07:00
Cheng Wan
295895120d [6/N] MoE Refactor: Cleanup MoE-related configs (#8849) 2025-08-14 21:14:53 -07:00
Mick
584e1ab2d0 fix: fix unsupported palette mode of images in bench_serving for mmmu (#9206) 2025-08-14 18:44:46 -07:00
Philo
004f7f1972 [typo fix] Fix a typo in communicator.py (#9183)
Signed-off-by: Philo <lul16@foxmail.com>
2025-08-14 17:29:38 -07:00
zixuanzhang226
d2fbf2de0c feat: add fused moe config for Qwen3-235B-A22B-FP8 on B200 (#9204) 2025-08-14 17:21:30 -07:00
Yineng Zhang
fab0f6e77d chore: bump v0.5.0rc2 (#9203) 2025-08-14 16:11:16 -07:00
Yineng Zhang
27985c27aa feat: update model config (#9202) 2025-08-14 15:15:27 -07:00
Yineng Zhang
ac474869d4 chore: upgrade transformers 4.55.2 (#9197) 2025-08-14 13:51:02 -07:00
Adarsh Shirawalmath
0b1e04f083 [VLM] Improving multimodal tensor hash kernel (#9008) 2025-08-14 13:45:55 -07:00
Chengxing Xie
c1c7dc4534 feat: Add model version tracking with API endpoints and response metadata (#8795) 2025-08-14 12:13:46 -07:00
Hongbo Xu
2cc9eeab01 [4/n] decouple quantization implementation from vLLM dependency (#9191)
Co-authored-by: AniZpZ <aniz1905@gmail.com>
Co-authored-by: Yineng Zhang <me@zhyncs.com>
2025-08-14 12:05:46 -07:00
Xiaoyu Zhang
63d82a776a refine mxfp4 shuffling log (#9194) 2025-08-14 10:57:29 -07:00
Peng Zhang
5aa1ebd242 [2/n] decouple quantization implementation from vLLM dependency (#8112)
Co-authored-by: walker-ai <yiyun.wyt@antgroup.com>
Co-authored-by: leoneo <1320612015@qq.com>
2025-08-14 03:19:03 -07:00
eigen
4dbf43601d fix: zero_init buffer (#9065)
Co-authored-by: Yineng Zhang <me@zhyncs.com>
2025-08-14 02:39:09 -07:00
lukec
3d6be1fbce add w8a8-fp8-block-wise H20-3e triton config (#8018) 2025-08-13 23:15:09 -07:00
Jun Liu
4063234c1a Add H200 fused MoE kernel configs for DeepSeek-V3 in triton 3.3.1 (#7687) 2025-08-13 23:14:09 -07:00
Tommy Yang
83feef5b2c Add H20 fused MoE kernel configs for Dpsk & Qwen3 (#7631) 2025-08-13 23:13:22 -07:00
Brayden Zhong
2871eacc05 Add Triton Fused MoE kernel config for E=16 on B200 (#7004) 2025-08-13 23:12:27 -07:00
forestlee95
ac15bdc194 Add H200 fused MoE kernel tuning configs for Qwen3-Coder-480B-A35B-Instruct (#8852) 2025-08-13 23:11:11 -07:00
Li Hui
d6451c3f65 Add A800 fused MoE kernel tuning configs for GLM4.5 and GLM4.5-Air (#8808) 2025-08-13 23:03:17 -07:00
pansicheng
733446dd36 fix io group (#9154)
Co-authored-by: Zhiqiang Xie <xiezhq@stanford.edu>
2025-08-14 12:46:42 +08:00