Commit Graph

3227 Commits

Author SHA1 Message Date
Cao E
c674bf9c6b Fix biased_grouped_topk_cpu (#9420) 2025-08-20 19:18:48 -07:00
Qiaolin Yu
af1973b871 Fix max_seq_len_k in trtllm_mha attention backend (#9416) 2025-08-20 19:17:13 -07:00
strgrb
88fbc31b50 Support trtllm_allreduce_fusion in flashinfer for cuda<12.8 (#9339)
Co-authored-by: Zhang Kaihong <zhangkaihong.zkh@alibaba-inc.com>
2025-08-20 16:54:30 -07:00
nathan
8f5b9910c1 Add support for Qwen3-seq-cls (#9357) 2025-08-20 16:51:56 -07:00
Xinyuan Tong
84719b527a fix: InternS1 doesn't recognize images; update image token for InternVL processor (#9381)
Signed-off-by: Xinyuan Tong <xinyuantong.cs@gmail.com>
2025-08-20 16:43:03 -07:00
jiapingW
e99729c9f3 Fixed the issue where eagle3 TPOT was not as good as without eagle3. (#9404) 2025-08-20 16:42:01 -07:00
Nicolas Castet
c10b8e6a0f Support DP attention with GPT-OSS (#9359) 2025-08-20 16:36:31 -07:00
Lifu Huang
d4bce29721 Fix incorrect logic in chat template handling. (#9336)
Signed-off-by: Xinyuan Tong <xinyuantong.cs@gmail.com>
Co-authored-by: Xinyuan Tong <xinyuantong.cs@gmail.com>
2025-08-20 16:25:36 -07:00
Lifu Huang
b0980af89f Support pinning adapter via server args. (#9249) 2025-08-20 16:25:01 -07:00
Nathan Wang
24eaebeb4b Fix FlashInfer GPU <-> CPU sync (#9409) 2025-08-20 15:26:12 -07:00
Trevor Morris
a91e90d9a3 [2/2] Fuse routed scaling factor into select_experts (#8690) 2025-08-20 15:10:16 -07:00
Xiaoyu Zhang
f96413c444 Refactor allreduce add rmsnorm pattern (#9278) 2025-08-20 02:03:08 -07:00
Liangsheng Yin
08ebdf79d0 Fix the --allow-auto-truncate argument in tokenizer manager. (#9391)
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
2025-08-20 16:56:47 +08:00
fzyzcjy
42c8704560 Add PDL support for quant kernel and rope kernel (#9106) 2025-08-20 01:56:29 -07:00
Even Zhou
de2dd73831 Revert "[feature] Rework Ascend NPU graph support" (#9385) 2025-08-20 00:35:10 -07:00
Lianmin Zheng
1ec9769753 [Docs] Update contribution guide (#9383) 2025-08-19 23:37:45 -07:00
Lianmin Zheng
f20b6a3f2b [minor] Sync style changes (#9376) 2025-08-19 21:35:01 -07:00
Even Zhou
3680d6f88b [feature] Rework Ascend NPU graph support (#9350)
Co-authored-by: ronnie_zheng <zl19940307@163.com>
Co-authored-by: yezhifeng (D) <y00897525@china.huawei.com>
Co-authored-by: anon189Ty <Stari_Falcon@outlook.com>
Co-authored-by: Maksim <makcum888e@mail.ru>
Co-authored-by: ssshinigami <44640852+ssshinigami@users.noreply.github.com>
2025-08-19 20:32:27 -07:00
Keyang Ru
f515449582 Fix gpt-oss response api streaming issue (#9368) 2025-08-19 20:19:42 -07:00
Ke Bao
e0ce171d79 Fix triton backend eagle illegal memory access (#9344) 2025-08-19 20:16:26 -07:00
fzyzcjy
fe43e889f8 Fix mini lb timeout issue (#9369) 2025-08-19 20:15:16 -07:00
Even Zhou
f4fafacc5d Revert "[feature] Ascend NPU graph support (#8027)" (#9348) 2025-08-19 10:11:23 -07:00
chenxu140
01d47a27b6 [Bugfix] fix kv buffer register & dp attention & deepepmoe (#9327) 2025-08-19 10:09:48 -07:00
Enrique Shockwave
e483ab6d20 enable marlin fp8 blockwise (#8990) 2025-08-18 18:53:15 -07:00
Jiaqi Gu
3c2c9f6c9e [Bug] Fix input arguments of flashinfer_trtllm_moe (#9317) 2025-08-18 18:03:19 -07:00
zxy
a31ea44824 support for interns1-mini (#9299) 2025-08-18 17:56:04 -07:00
fzyzcjy
5626e20b2b Tiny fix CI (#9306) 2025-08-18 16:54:36 -07:00
Binyao Jiang
c2fbf60f39 [GLM4.1V and GLM4.5V] Add vision transformer num_dummy_head support: max tp=4 -> max tp=8 (#9059) 2025-08-18 14:40:13 -07:00
datdo-msft
98b44e9e56 [PD] Propagate internal server errors from aborted requests to clients instead of blindly returning 200's (#8936) 2025-08-18 14:23:46 -07:00
Swipe4057
6805f6da40 upgrade xgrammar 0.1.23 and openai-harmony 0.0.4 (#9284) 2025-08-18 14:02:00 -07:00
江家瑋
ca533580f2 [Docs] Correct and clarify notes in Engine docstring (#9313)
Signed-off-by: JiangJiaWei1103 <waynechuang97@gmail.com>
2025-08-18 13:24:19 -07:00
Keyang Ru
886454e8e7 [MISC] use dynamic choices for tool-call-parser argument (#9316) 2025-08-18 13:02:10 -07:00
gongwei-130
0cf3fbeb18 should return invalid request for empty prompt (#9315) 2025-08-18 11:44:11 -07:00
Zhiyu
2256d62d36 Modelopt quant config adaptation (#8829) 2025-08-18 11:27:30 -07:00
Lianmin Zheng
c480a3f6ea Minor style fixes for sgl-kernel (#9289) 2025-08-18 09:38:35 -07:00
fzyzcjy
4c0bb411e5 Further fix memory pool leak error (#9298) 2025-08-18 00:58:06 -07:00
b8zhong
716e682721 [Fix] Add undefined update_tensor_inplace function (#6307) 2025-08-18 11:11:00 +08:00
zifeitong
84b30d9e00 Set the default attention backend for GLM-4.5v to fa3 (#9245) 2025-08-17 16:34:19 -07:00
blzheng
ebbb75e917 [CPU] Fix TP padding issue on Phi-4 (#8289) 2025-08-17 16:25:26 -07:00
fzyzcjy
b498cd21d7 Tiny make fp4 moe method parameters more static (#8520) 2025-08-17 13:26:02 -07:00
kousakawang
0fc54b971e [fix]: fix cutlass moe ut and Opt H20 cutlass groupGemm performance (#9272)
Co-authored-by: wanghanpei <wanghanpei@bytedance.com>
2025-08-17 13:09:49 -07:00
fzyzcjy
b3c1f2e4f2 Fix memory pool leak error (#9271) 2025-08-17 12:53:34 -07:00
Ke Bao
be1a3cd9b4 Fix swa eagle verify accuracy for Triton backend (#9279) 2025-08-17 12:52:02 -07:00
Lifu Huang
4b74c3fcca [chore] Clean up redundant lora_weight_names concept to simplify code (#9131) 2025-08-17 12:36:58 -07:00
Netanel Haber
3d77a31885 from python.sglang.srt -> from sglang.srt (#9268) 2025-08-17 02:45:45 -07:00
Netanel Haber
845d12a979 model: support nvidia/Llama-3_3-Nemotron-Super-49B-v1 (#9067)
Co-authored-by: Kyle Huang <kylhuang@nvidia.com>
2025-08-17 01:48:15 -07:00
Stefan He
e47800e176 Quick Fix GLM (#9264) 2025-08-16 23:43:41 -07:00
Mick
1df84ff414 ci: simplify multi-modality tests by using mixins (#9006) 2025-08-16 22:25:02 -07:00
Binyao Jiang
66d6be0874 Bug fix: use correct mm_items in embed_mm_inputs (#8893) 2025-08-16 19:55:56 -07:00
kk
1c1f8a118e Combine fp4.py and mxfp4.py into one file and support dynamic mxfp4 quantization in mxfp4.py (#9049)
Co-authored-by: wunhuang <wunhuang@amd.com>
2025-08-16 19:01:54 -07:00