275f9df381  zixuanzhang226  2025-08-21 15:10:20 -07:00
    feat: add fused moe config for GLM-4.5-Air-FP8 on B200 (#9463)

e8449ab515  Xinyuan Tong  2025-08-21 15:09:40 -07:00
    Add deepseek v3.1 thinking parser support and update docs (#9464)
    Signed-off-by: Xinyuan Tong <xinyuantong.cs@gmail.com>

4746aaea41  Yineng Zhang  2025-08-21 14:31:43 -07:00
    fix: support fb fp8 (#9462)
10d34f74e2  gongwei-130  2025-08-21 14:06:50 -07:00
    fix: should return an invalid request response when schema is missing (#9461)

9ba7253094  gongwei-130  2025-08-21 13:22:03 -07:00
    Accommodate reasoning_effort set in chat_template_kwargs (#9458)
9c8e4f69c3  Hongbo Xu  2025-08-21 12:52:07 -07:00
    [5/n] decouple quantization implementation from vLLM dependency (#9454)

dae9a80f43  hlu1  2025-08-21 03:50:51 -07:00
    [fix] Fix mxfp4 weight loading bug with TP sharding in GPT-OSS (#9433)
    Signed-off-by: Hao Lu <14827759+hlu1@users.noreply.github.com>
    Signed-off-by: Xinyuan Tong <xinyuantong.cs@gmail.com>
    Co-authored-by: Xinyuan Tong <xinyuantong.cs@gmail.com>

e85cb1ce9d  fzyzcjy  2025-08-21 03:48:41 -07:00
    Fix quant kernel test errors and benchmark wrong output speeds (#7604)

55d336cb08  fzyzcjy  2025-08-21 03:48:13 -07:00
    Refactor weight offloading logic (#8521)

029e0af31d  DiweiSun  2025-08-21 03:35:17 -07:00
    ci: enhance xeon ci (#9395)

64574ef8c0  pranavm-nvidia  2025-08-21 01:18:21 -07:00
    Enables speculative decoding for the trtllm_mla attention backend (#9238)

18da2c96ec  Kaixi Hou  2025-08-21 00:54:01 -07:00
    [NVIDIA] Fix trtllm fp4 moe backend when used in MTP (#9384)
9b5f0f64f5  Liangsheng Yin  2025-08-21 14:05:35 +08:00
    Fix tiny misalign with previous truncation setting in tokenizer_manager (#9430)

2c4b4b786b  VDV1985  2025-08-20 21:13:27 -07:00
    [feature] Ascend NPU graph support (#9399)
    Co-authored-by: ronnie_zheng <zl19940307@163.com>
    Co-authored-by: yezhifeng (D) <y00897525@china.huawei.com>
    Co-authored-by: anon189Ty <Stari_Falcon@outlook.com>
    Co-authored-by: Maksim <makcum888e@mail.ru>
    Co-authored-by: ssshinigami <44640852+ssshinigami@users.noreply.github.com>

7cd2ee06d7  Martin Vit  2025-08-20 19:33:15 -07:00
    feat: Add Triton fallback option and SM120 MoE configs for FP8 models (#9251)

eb19ccadae  Liangsheng Yin  2025-08-21 10:32:34 +08:00
    [bug] fix errors related to context length in SD (#9388)

25ef53f05f  Shangming Cai  2025-08-20 19:29:10 -07:00
    [PD] Fix nvlink transport accuracy through transferring metadata with tcp (#9261)
    Signed-off-by: Shangming Cai <csmthu@gmail.com>

c674bf9c6b  Cao E  2025-08-20 19:18:48 -07:00
    Fix biased_grouped_topk_cpu (#9420)

af1973b871  Qiaolin Yu  2025-08-20 19:17:13 -07:00
    Fix max_seq_len_k in trtllm_mha attention backend (#9416)

88fbc31b50  strgrb  2025-08-20 16:54:30 -07:00
    Support trtllm_allreduce_fusion in flashinfer for cuda<12.8 (#9339)
    Co-authored-by: Zhang Kaihong <zhangkaihong.zkh@alibaba-inc.com>
8f5b9910c1  nathan  2025-08-20 16:51:56 -07:00
    Add support for Qwen3-seq-cls (#9357)

84719b527a  Xinyuan Tong  2025-08-20 16:43:03 -07:00
    fix: InternS1 doesn't recognize images; update image token for InternVL processor (#9381)
    Signed-off-by: Xinyuan Tong <xinyuantong.cs@gmail.com>

e99729c9f3  jiapingW  2025-08-20 16:42:01 -07:00
    Fixed the issue where eagle3 TPOT was not as good as without eagle3. (#9404)

c10b8e6a0f  Nicolas Castet  2025-08-20 16:36:31 -07:00
    Support DP attention with GPT-OSS (#9359)

d4bce29721  Lifu Huang  2025-08-20 16:25:36 -07:00
    Fix incorrect logic in chat template handling. (#9336)
    Signed-off-by: Xinyuan Tong <xinyuantong.cs@gmail.com>
    Co-authored-by: Xinyuan Tong <xinyuantong.cs@gmail.com>

b0980af89f  Lifu Huang  2025-08-20 16:25:01 -07:00
    Support pinning adapter via server args. (#9249)

24eaebeb4b  Nathan Wang  2025-08-20 15:26:12 -07:00
    Fix FlashInfer GPU <-> CPU sync (#9409)

a91e90d9a3  Trevor Morris  2025-08-20 15:10:16 -07:00
    [2/2] Fuse routed scaling factor into select_experts (#8690)
f96413c444  Xiaoyu Zhang  2025-08-20 02:03:08 -07:00
    Refactor allreduce add rmsnorm pattern (#9278)

08ebdf79d0  Liangsheng Yin  2025-08-20 16:56:47 +08:00
    Fix the --allow-auto-truncate argument in tokenizer manager. (#9391)
    Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>

42c8704560  fzyzcjy  2025-08-20 01:56:29 -07:00
    Add PDL support for quant kernel and rope kernel (#9106)

de2dd73831  Even Zhou  2025-08-20 00:35:10 -07:00
    Revert "[feature] Rework Ascend NPU graph support" (#9385)

1ec9769753  Lianmin Zheng  2025-08-19 23:37:45 -07:00
    [Docs] Update contribution guide (#9383)

f20b6a3f2b  Lianmin Zheng  2025-08-19 21:35:01 -07:00
    [minor] Sync style changes (#9376)

3680d6f88b  Even Zhou  2025-08-19 20:32:27 -07:00
    [feature] Rework Ascend NPU graph support (#9350)
    Co-authored-by: ronnie_zheng <zl19940307@163.com>
    Co-authored-by: yezhifeng (D) <y00897525@china.huawei.com>
    Co-authored-by: anon189Ty <Stari_Falcon@outlook.com>
    Co-authored-by: Maksim <makcum888e@mail.ru>
    Co-authored-by: ssshinigami <44640852+ssshinigami@users.noreply.github.com>
f515449582  Keyang Ru  2025-08-19 20:19:42 -07:00
    Fix gpt-oss response api streaming issue (#9368)

e0ce171d79  Ke Bao  2025-08-19 20:16:26 -07:00
    Fix triton backend eagle illegal memory access (#9344)

fe43e889f8  fzyzcjy  2025-08-19 20:15:16 -07:00
    Fix mini lb timeout issue (#9369)

f4fafacc5d  Even Zhou  2025-08-19 10:11:23 -07:00
    Revert "[feature] Ascend NPU graph support (#8027)" (#9348)

01d47a27b6  chenxu140  2025-08-19 10:09:48 -07:00
    [Bugfix] fix kv buffer register & dp attention & deepepmoe (#9327)

e483ab6d20  Enrique Shockwave  2025-08-18 18:53:15 -07:00
    enable marlin fp8 blockwise (#8990)

3c2c9f6c9e  Jiaqi Gu  2025-08-18 18:03:19 -07:00
    [Bug] Fix input arguments of flashinfer_trtllm_moe (#9317)

a31ea44824  zxy  2025-08-18 17:56:04 -07:00
    support for interns1-mini (#9299)
5626e20b2b  fzyzcjy  2025-08-18 16:54:36 -07:00
    Tiny fix CI (#9306)

c2fbf60f39  Binyao Jiang  2025-08-18 14:40:13 -07:00
    [GLM4.1V and GLM4.5V] Add vision transformer num_dummy_head support: max tp=4 -> max tp=8 (#9059)

98b44e9e56  datdo-msft  2025-08-18 14:23:46 -07:00
    [PD] Propagate internal server errors from aborted requests to clients instead of blindly returning 200's (#8936)

6805f6da40  Swipe4057  2025-08-18 14:02:00 -07:00
    upgrade xgrammar 0.1.23 and openai-harmony 0.0.4 (#9284)

ca533580f2  江家瑋  2025-08-18 13:24:19 -07:00
    [Docs] Correct and clarify notes in Engine docstring (#9313)
    Signed-off-by: JiangJiaWei1103 <waynechuang97@gmail.com>

886454e8e7  Keyang Ru  2025-08-18 13:02:10 -07:00
    [MISC] use dynamic choices for tool-call-parser argument (#9316)
0cf3fbeb18  gongwei-130  2025-08-18 11:44:11 -07:00
    should return invalid request for empty prompt (#9315)