ed6f7597b3 | 2025-08-03 12:29:42 -07:00 | Yingchun Lai | Fix the missing 'lof' choice of --schedule-policy server args (#7114)
f7b2853ff8 | 2025-08-03 00:46:47 -07:00 | Guanhua Wang | [feat] support minimum token load balance in dp attention (#7379)
8675bdf246 | 2025-08-03 00:02:23 -07:00 | Lifu Huang | Support limiting max loaded loras in CPU. (#8650)
e314b084c5 | 2025-08-02 18:43:14 -07:00 | Lianmin Zheng | [FIX] Fix the nightly CI by disabling swa mem pool for gemma2 (#8693)
82e6c3a65a | 2025-08-01 23:30:55 +00:00 | Nicolas Castet | Add support for NCCL symmetric memory for TP allreduces (#8238)
6c88f6c8d9 | 2025-08-01 01:20:03 -07:00 | Cheng Wan | [5/N] MoE Refactor: Update MoE parallelism arguments (#8658)
dd7ca00601 | 2025-08-01 11:37:49 +08:00 | Zhiqiang Xie | Interface change for kvcache io to support page first layout (#8318)
aa4c66b564 | 2025-07-31 19:56:34 -07:00 | Kaixi Hou | [NVIDIA] Enable Flashinfer MoE blockscale fp8 backend for TP MoE (#8450)
    Co-authored-by: kushanam <42385577+kushanam@users.noreply.github.com>
4b04998d38 | 2025-07-31 16:03:40 -07:00 | Faraz | TRTLLM Gen MLA Decode Kernel Integration (same as #7938) (#8632)
    Signed-off-by: Faraz Khoubsirat <58580514+farazkh80@users.noreply.github.com>
b7170cc820 | 2025-07-31 13:57:08 -07:00 | Trevor Morris | [bugfix] Fix flashinfer cutlass EP moe after MoE refactor (#8630)
2cd2e27f80 | 2025-07-31 13:09:42 -07:00 | Vishwanath Venkatesan | SGLang HiCache NIXL Connector (#8488)
    Signed-off-by: Vishwanath Venkatesan <vvenkatesan@nvidia.com>
    Co-authored-by: Moein Khazraee <moein@nvidia.com>
    Co-authored-by: Zhiqiang Xie <xiezhq@stanford.edu>
4acf690206 | 2025-07-31 11:31:21 -07:00 | Brayden Zhong | [Optimization][Perf] Disable the GC during CUDA graph capture to speed up by up to 3x (#8577)
7a1f7fc504 | 2025-07-31 02:53:25 -07:00 | Cheng Wan | [Feature] Hybrid EP and TP (#8590)
51c38163c1 | 2025-07-31 02:41:00 -07:00 | Chang Su | model: support Step3V (#8583)
    Signed-off-by: Xinyuan Tong <justinning0323@outlook.com>
    Co-authored-by: nnnobody-code <nnnobody@foxmail.com>
    Co-authored-by: ispobock <ispobaoke@gmail.com>
    Co-authored-by: Qiaolin-Yu <qy254@cornell.edu>
    Co-authored-by: Qiaolin-Yu <liin1211@outlook.com>
    Co-authored-by: Xinyuan Tong <justinning0323@outlook.com>
    Co-authored-by: Xinyuan Tong <115166877+JustinTong0323@users.noreply.github.com>
d904959233 | 2025-07-30 23:15:51 -07:00 | huangtingwei | Support l3 cache (mooncake store) for hiradix cache (#7211)
    Co-authored-by: Zhiqiang Xie <xiezhq@stanford.edu>
    Co-authored-by: AniZpZ <zhuangsen.zp@antgroup.com>
    Co-authored-by: zuoyuan <zhangzuo21@mails.tsinghua.edu.cn>
    Co-authored-by: @wangyueneng.wyn <wangyueneng.wyn@antgroup.com>
    Co-authored-by: JinYan Su <jinyansu792@gmail.com>
299803343d | 2025-07-30 17:42:41 -07:00 | pansicheng | Add hf3fs support for hicache storage (based on #7704) (#7280)
    Co-authored-by: Zhiqiang Xie <xiezhq@stanford.edu>
a4c3b121d8 | 2025-07-29 12:46:50 -07:00 | Lianmin Zheng | Split the scheduler into multiple mixin classes to reduce the file size (#8483)
747dd45077 | 2025-07-28 22:32:33 +08:00 | harrisonlimh | feat: throttle requests at scheduler based on --max_queued_requests (#7565)
6d6a8bc278 | 2025-07-27 22:54:07 -07:00 | Yuxuan Zhang | GLM-4.5 Model Support (#8224)
    Co-authored-by: Lifu Huang <lifu.hlf@gmail.com>
    Co-authored-by: Binyao Jiang <byjiang1996@gmail.com>
    Co-authored-by: Stefan He <hebiaobuaa@gmail.com>
2810338401 | 2025-07-28 11:42:29 +08:00 | Qiaolin Yu | [feat] Support different attention backends for prefill and decode (#6338)
    Co-authored-by: tianqilin.99 <tianqilin.99@bytedance.com>
    Co-authored-by: Baizhou Zhang <sobereddiezhang@gmail.com>
85486b6f6f | 2025-07-27 00:34:41 -07:00 | Kaixi Hou | [NVIDIA] Add Flashinfer MoE blockscale fp8 backend (#8036)
da0c026084 | 2025-07-26 03:20:39 -07:00 | fzyzcjy | Tiny assert EPLB is used together with expert parallel (#8381)
58c468f404 | 2025-07-25 16:40:23 -07:00 | Trevor Morris | Fix FP4 MoE accuracy from missing routed_scaling_factor (#8333)
ed2e313eb6 | 2025-07-25 14:14:51 -07:00 | Lianmin Zheng | Clean up server_args, triton cache manager (#8332)
f8260f2539 | 2025-07-25 12:03:16 -07:00 | Chang Su | [Bugfix][Feat] Add XML-ish grammar in EBNFComposer and fix misc bugs in Qwen3 detector (#8357)
8abd3e77fe | 2025-07-23 00:32:16 -07:00 | Lifu Huang | Introduce Stable LoRA ID System for Overlapped Updates and Prefix Caching (#8261)
0dfe2491ac | 2025-07-23 06:49:38 +08:00 | yhyang201 | Preliminary Support for Qwen3XMLDetector (#8260)
    Co-authored-by: Chayenne <zhaochen20@outlook.com>
4e3defe5a7 | 2025-07-19 15:38:09 -07:00 | Lifu Huang | Support start up LoRA server without initial adapters (#8019)
bb0e8a32b5 | 2025-07-19 11:32:52 -07:00 | Lianmin Zheng | Clean up server args (#8161)
d918ab7985 | 2025-07-18 19:59:39 -07:00 | Haohui Mai | Support NVFP4 quantized dense models on AMD CDNA2/CDNA3 GPUs (#7302)
    Co-authored-by: HAI <hixiao@gmail.com>
    Co-authored-by: Sai Enduri <saimanas.enduri@amd.com>
3964b352c3 | 2025-07-18 17:19:27 -07:00 | Mick | chore: tune mem fraction static for vlm (#6881)
9d33fcfb8e | 2025-07-18 15:20:19 +08:00 | Zhiqiang Xie | Hicache Storage Layer Prototype (#7704)
795668dc73 | 2025-07-16 17:55:59 -07:00 | Yingchun Lai | feat: add tp_rank, pp_rank and dp_rank labels for scheduler metrics (#7597)
    Co-authored-by: Stefan He <hebiaobuaa@gmail.com>
d4d0c7c367 | 2025-07-15 02:35:46 +08:00 | ykcombat | [Feature] TP Group Switching for PD-Multiplexing (#7653)
e2ed9d049a | 2025-07-13 18:36:01 -07:00 | Lifu Huang | Refactor dynamic LoRA update to fix incorrect handling of variant weight shapes (#7844)
9379da77de | 2025-07-13 12:31:07 -07:00 | Hanming Lu | SWA Prefix Cache (#7367)
    Co-authored-by: Ying Sheng <sqy1415@gmail.com>
42fc44100a | 2025-07-12 20:13:40 -07:00 | Ying Sheng | [minor] Add server_args check for Llama4 with hybrid (#7988)
86044712c6 | 2025-07-11 00:07:51 -07:00 | ronnie_zheng | [feature] kv transfer support of ascend npu (#7795)
    Co-authored-by: liupeng <liupeng374@huawei.com>
615553079d | 2025-07-11 00:02:21 -07:00 | Atream | Support Kimi K2 (#7940)
e5888eddda | 2025-07-09 01:47:14 -07:00 | Xinyuan Tong | Fix typo in assertion message (#7895)
    Signed-off-by: Xinyuan Tong <justinning0323@outlook.com>
cb9d91ea8a | 2025-07-07 14:47:21 -07:00 | SijiaYang | feat: support DeepSeek-R1-W4AFP8 model with ep-moe mode (#7762)
    Signed-off-by: yangsijia.614 <yangsijia.614@bytedance.com>
2fc824b84c | 2025-07-06 22:53:36 -07:00 | Zhiqiang Xie | Kernels for efficient KV cache IO (#7313)
253454de9b | 2025-07-06 20:05:49 -07:00 | Yuan Luo | Integrate triton moe kernel (#7689)
    Co-authored-by: luoyuan.luo <luoyuan.luo@antgroup.com>
8fc910db03 | 2025-07-05 01:54:24 -07:00 | Cheng Wan | DP Attention with Auto DeepEP Dispatch (#7222)
16a6b1d83a | 2025-07-04 18:48:57 -07:00 | Zilin Zhu | [RL] Add --nccl-port to prevent port conflict (#7418)
14229ccf8f | 2025-07-04 16:33:33 -07:00 | Lianmin Zheng | Move mem_fraction_static adjustment for multimodal models to server_args.py & Fix session control & Other cleanups (#7748)
1e0e549766 | 2025-07-03 09:23:19 -07:00 | ronnie_zheng | Ascend attention backend (PA & MLA) (#7722)
    Co-authored-by: Maksim <makcum888e@mail.ru>
    Co-authored-by: VDV1985 <vladdv85@mail.ru>
8e64140e35 | 2025-07-02 19:36:20 -07:00 | Xiaoyu Zhang | [b200] support trt-llm allreduce fuse rms_norm_add kernel (#7621)
09e699bba4 | 2025-07-02 18:50:43 -07:00 | Zilin Zhu | [RL] add --skip-warmup (#7416)
1a08358aed | 2025-07-01 20:05:34 -07:00 | Lifu Huang | Improve error handling for requests with unloaded LoRA path(s) (#7642)