Commit Graph

4977 Commits

Author SHA1 Message Date
Wenxuan Tan
0f587e80d3 Use Tensor Core Decode when gqa group size >= 4 (#8624)
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
2025-08-22 23:25:15 +08:00
huangtingwei
6078d5fcc0 [HiCacheStorage] backup optimization for MLA model (#8865)
Co-authored-by: Zhiqiang Xie <xiezhq@stanford.edu>
2025-08-22 18:03:51 +08:00
pansicheng
70cf4abccc 3fs zerocopy (#9109)
Co-authored-by: Zhiqiang Xie <xiezhq@stanford.edu>
2025-08-22 17:56:38 +08:00
Xuchun Shang
cebf45994b [bugfix] Make --enable-hierarchical-cache and --disable-radix-cache mutually exclusive (#9452)
Signed-off-by: Xuchun Shang <xuchun.shang@linux.alibaba.com>
2025-08-22 17:49:52 +08:00
Qiaolin Yu
9c0c1e30b2 Disable torch.compile for get_last_loc_large_page_size_large_top_k (#9507)
Co-authored-by: ispobock <ispobaoke@gmail.com>
2025-08-22 02:05:02 -07:00
Mick
a1f011d09a minor: determine mm attn backend based on platforms (#9303) 2025-08-22 01:08:41 -07:00
Qiaolin Yu
9ec314c6ac Support speculative decoding in the trtllm_mha attention backend (#9331)
Co-authored-by: ispobock <ispobaoke@gmail.com>
2025-08-21 23:53:35 -07:00
Xinyuan Tong
fedfe91c1a [Docs] Add doc and quick demo for gpt-oss responses api & built-in tools (#9497)
Signed-off-by: Xinyuan Tong <xinyuantong.cs@gmail.com>
2025-08-21 23:51:52 -07:00
kk
988accbc1e Update docker file for supporting PD-Disaggregation on MI300x (#9494)
Co-authored-by: wunhuang <wunhuang@amd.com>
Co-authored-by: Colin Wang <kangwang@amd.com>
2025-08-21 23:48:40 -07:00
Yineng Zhang
b6b2287e4b chore: bump sgl-kernel v0.3.6.post2 (#9475) 2025-08-21 23:02:08 -07:00
Elfie Guo
243e745d07 Add trtllm_mla and cutlass_mla for ragged fmha for chunked prefill (#9480) 2025-08-21 23:01:36 -07:00
timmy-feng
61a0e600df torch.compile() mrope (#9487) 2025-08-21 23:01:08 -07:00
Simo Lin
0f8cee8cd3 [router] fix router load guard tracking for streaming (#9491) 2025-08-21 22:48:29 -07:00
Chang Su
816c4c8572 [router] add tool parser base structure and partial json parser (#9482) 2025-08-21 22:08:56 -07:00
Xinyuan Tong
13ec8d427e [Docs] Update reasoning parser doc & fix outdated link (#9492)
Signed-off-by: Xinyuan Tong <xinyuantong.cs@gmail.com>
2025-08-21 22:08:28 -07:00
Chayenne
05bd789791 [docs]: fix reasoning context in docs (#9483) 2025-08-21 20:04:12 -07:00
kousakawang
5fd311d33e [code clean] add H20 cutlass groupGemm default config (#9333)
Co-authored-by: wanghanpei <wanghanpei@bytedance.com>
2025-08-21 19:23:29 -07:00
Chang Su
53e2cd464f [router] remove all tokenizer metrics for performance (#9474) 2025-08-21 18:35:24 -07:00
Yongfei Xu
9708d353b7 Support MHA with chunked prefix cache for flashinfer/flashmla backend, support page size > 1 for MHA chunked prefix (#8616)
Co-authored-by: xuyongfei.xyf <xuyongfei.xyf@antgroup.com>
2025-08-21 18:19:44 -07:00
Hubert Lu
704ced1b2e [AMD] Remove the deprecated C10_WARP_SIZE (#9356) 2025-08-21 18:16:35 -07:00
Pavani Majety
3cc3d9b950 Add Support for Page Size greater than 1 for Flashinfer MLA Backend (#8593)
Signed-off-by: Pavani Majety <pmajety@nvidia.com>
2025-08-21 18:15:06 -07:00
Xinyuan Tong
0b3a5b1151 Update reasoning parser doc (#9468)
Signed-off-by: Xinyuan Tong <xinyuantong.cs@gmail.com>
2025-08-21 17:25:30 -07:00
Xinyuan Tong
6c855db82c Revert "bugfix: Fix output_ids extraction in detokenizer_manager" (#9467) 2025-08-21 17:24:25 -07:00
Yineng Zhang
0f9318f7d0 feat: update auto_choose_speculative_params (#9470) 2025-08-21 17:12:12 -07:00
Yineng Zhang
849957bc76 fix: tmp revert gpt oss tp sharding on hopper (#9469) 2025-08-21 17:03:21 -07:00
Stefan He
cded039b57 [FA3] Init Spec Page Table only when Spec is enabled to save ~40MB (#9455) 2025-08-21 15:11:38 -07:00
zixuanzhang226
275f9df381 feat: add fused moe config for GLM-4.5-Air-FP8 on B200 (#9463) 2025-08-21 15:10:20 -07:00
Xinyuan Tong
e8449ab515 Add deepseek v3.1 thinking parser support and update docs (#9464)
Signed-off-by: Xinyuan Tong <xinyuantong.cs@gmail.com>
2025-08-21 15:09:40 -07:00
Yineng Zhang
4746aaea41 fix: support fb fp8 (#9462) 2025-08-21 14:31:43 -07:00
gongwei-130
10d34f74e2 fix: should return an invalid request response when schema is missing (#9461) 2025-08-21 14:06:50 -07:00
gongwei-130
9ba7253094 accommodate reasoning_effort set in chat_template_kwargs (#9458) 2025-08-21 13:22:03 -07:00
Hongbo Xu
9c8e4f69c3 [5/n]decouple quantization implementation from vLLM dependency (#9454) 2025-08-21 12:52:07 -07:00
Simo Lin
78ae175866 [router] add tokenizer benchmark (#9427) 2025-08-21 11:09:39 -07:00
hlu1
dae9a80f43 [fix] Fix mxfp4 weight loading bug with TP sharding in GPT-OSS (#9433)
Signed-off-by: Hao Lu <14827759+hlu1@users.noreply.github.com>
Signed-off-by: Xinyuan Tong <xinyuantong.cs@gmail.com>
Co-authored-by: Xinyuan Tong <xinyuantong.cs@gmail.com>
2025-08-21 03:50:51 -07:00
fzyzcjy
e85cb1ce9d Fix quant kernel test errors and benchmark wrong output speeds (#7604) 2025-08-21 03:48:41 -07:00
fzyzcjy
55d336cb08 Refactor weight offloading logic (#8521) 2025-08-21 03:48:13 -07:00
Yuhao Yao
de4990a5b2 [Bug] Fix w4afp8 moe kernel (#9392) 2025-08-21 03:45:18 -07:00
DiweiSun
029e0af31d ci: enhance xeon ci (#9395) 2025-08-21 03:35:17 -07:00
pranavm-nvidia
64574ef8c0 Enables speculative decoding for the trtllm_mla attention backend (#9238) 2025-08-21 01:18:21 -07:00
Kaixi Hou
18da2c96ec [NVIDIA] Fix trtllm fp4 moe backend when used in MTP (#9384) 2025-08-21 00:54:01 -07:00
Liangsheng Yin
9b5f0f64f5 Fix tiny misalign with previous truncation setting in tokenizer_manager (#9430) 2025-08-21 14:05:35 +08:00
Azure
70bb066ee4 Fix FP4 inference corruption issue in glm4.5-air model (#9346) 2025-08-20 22:13:47 -07:00
VDV1985
2c4b4b786b [feature] Ascend NPU graph support (#9399)
Co-authored-by: ronnie_zheng <zl19940307@163.com>
Co-authored-by: yezhifeng (D) <y00897525@china.huawei.com>
Co-authored-by: anon189Ty <Stari_Falcon@outlook.com>
Co-authored-by: Maksim <makcum888e@mail.ru>
Co-authored-by: ssshinigami <44640852+ssshinigami@users.noreply.github.com>
2025-08-20 21:13:27 -07:00
Martin Vit
7cd2ee06d7 feat: Add Triton fallback option and SM120 MoE configs for FP8 models (#9251) 2025-08-20 19:33:15 -07:00
Liangsheng Yin
eb19ccadae [bug] fix errors related to context length in SD (#9388) 2025-08-21 10:32:34 +08:00
Shangming Cai
25ef53f05f [PD] Fix nvlink transport accuracy through transferring metadata with tcp (#9261)
Signed-off-by: Shangming Cai <csmthu@gmail.com>
2025-08-20 19:29:10 -07:00
Cao E
c674bf9c6b Fix biased_grouped_topk_cpu (#9420) 2025-08-20 19:18:48 -07:00
Qiaolin Yu
af1973b871 Fix max_seq_len_k in trtllm_mha attention backend (#9416) 2025-08-20 19:17:13 -07:00
Chang Su
5cfbb4c136 [router] add glm and step3 reasoning parser (#9415) 2025-08-20 18:33:10 -07:00
Chang Su
e65231022f [router] add tokenizer integration test with real mini tokenizer (#9413) 2025-08-20 17:56:23 -07:00