Commit Graph

618 Commits

Author SHA1 Message Date
Netanel Haber
845d12a979 model: support nvidia/Llama-3_3-Nemotron-Super-49B-v1 (#9067) 2025-08-17 01:48:15 -07:00
    Co-authored-by: Kyle Huang <kylhuang@nvidia.com>
Cheng Wan
295895120d [6/N] MoE Refactor: Cleanup MoE-related configs (#8849) 2025-08-14 21:14:53 -07:00
Yineng Zhang
fab0f6e77d chore: bump v0.5.0rc2 (#9203) 2025-08-14 16:11:16 -07:00
Chengxing Xie
c1c7dc4534 feat: Add model version tracking with API endpoints and response metadata (#8795) 2025-08-14 12:13:46 -07:00
Lianmin Zheng
9e426466af Clean up allocators (#9134) 2025-08-13 13:56:04 -07:00
    Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
Zhihao Liu
65736dc524 [Model] Support Qwen3ForSequenceClassification for Qwen3-Embed Model (#7957) 2025-08-13 11:14:54 -07:00
jacky.cheng
25caa7a8a9 [AMD] Support Wave attention backend with AMD GPU optimizations (#8660) 2025-08-12 13:49:11 -07:00
    Signed-off-by: Stanley Winata <stanley.winata@amd.com>
    Signed-off-by: Harsh Menon <harsh@nod-labs.com>
    Signed-off-by: nithinsubbiah <nithinsubbiah@gmail.com>
    Signed-off-by: Ivan Butygin <ivan.butygin@gmail.com>
    Signed-off-by: xintin <gaurav.verma@amd.com>
    Co-authored-by: Harsh Menon <harsh@nod-labs.com>
    Co-authored-by: Stanley Winata <stanley.winata@amd.com>
    Co-authored-by: Stanley Winata <68087699+raikonenfnu@users.noreply.github.com>
    Co-authored-by: Stanley Winata <stanley@nod-labs.com>
    Co-authored-by: Ivan Butygin <ivan.butygin@gmail.com>
    Co-authored-by: nithinsubbiah <nithinsubbiah@gmail.com>
    Co-authored-by: Nithin Meganathan <18070964+nithinsubbiah@users.noreply.github.com>
    Co-authored-by: Ivan Butygin <ibutygin@amd.com>
Hangzhi
03d114496f Fix typos in supported models documentation (#9119) 2025-08-12 13:35:24 -07:00
li chaoran
2ecbd8b8bf [feat] add ascend readme and docker release (#8700) 2025-08-12 13:25:42 -07:00
    Signed-off-by: mywaaagh_admin <pkwarcraft@gmail.com>
    Signed-off-by: lichaoran <pkwarcraft@gmail.com>
    Co-authored-by: Even Zhou <even.y.zhou@outlook.com>
    Co-authored-by: ronnie_zheng <zl19940307@163.com>
Simo Lin
1ce30dd13e [router] update router documentation (#9121) 2025-08-12 13:16:34 -07:00
Yichao Cheng
fcc11e5ed5 update support new models doc (#9096) 2025-08-12 01:21:02 -07:00
    Signed-off-by: Xinyuan Tong <xinyuantong.cs@gmail.com>
    Co-authored-by: Xinyuan Tong <xinyuantong.cs@gmail.com>
Zhiqiang Xie
0eec4cb6cc HiCache, add bench long context plus minor fixs (#9086) 2025-08-11 16:54:52 -07:00
    Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
Faraz
f508cd3cb7 TRTLLM-MLA FP8 path (#8638) 2025-08-11 14:02:13 -07:00
    Signed-off-by: Faraz Khoubsirat <58580514+farazkh80@users.noreply.github.com>
Lianmin Zheng
8c07fabda7 Update hyperparameter_tuning.md (#9083) 2025-08-11 13:44:11 -07:00
Liangsheng Yin
f9afa7dceb Fix docs for clip max new tokens (#9082) 2025-08-11 13:15:21 -07:00
Jimmy
0d9e89ec69 [PD]decode: add CLIP_MAX_NEW_TOKEN for pop_preallocated (#8866) 2025-08-11 13:08:11 -07:00
Hangzhi
3d64fda376 Fix broken Kimi models HuggingFace link (#9080) 2025-08-11 12:15:00 -07:00
Baizhou Zhang
75e6a7cde1 Support radix cache for Lora feature (#7216) 2025-08-11 10:14:11 -07:00
Lianmin Zheng
2e8e7e353b Improve docs and developer guide (#9044) 2025-08-10 21:05:18 -07:00
Lianmin Zheng
2449a0afe2 Refactor the docs (#9031) 2025-08-10 19:49:45 -07:00
Lifu Huang
f8a173bb50 Improve LoRA Perf by Deprecating FlashInfer and Eliminating Redundant Tensor Ops (#8940) 2025-08-10 01:04:45 -07:00
Binyao Jiang
f29aba8c6e Support glm4.1v and glm4.5v (#8798) 2025-08-09 00:59:13 -07:00
    Signed-off-by: Xinyuan Tong <justinning0323@outlook.com>
    Signed-off-by: Xinyuan Tong <xinyuantong.cs@gmail.com>
    Co-authored-by: Xinyuan Tong <justinning0323@outlook.com>
    Co-authored-by: Xinyuan Tong <115166877+JustinTong0323@users.noreply.github.com>
    Co-authored-by: Xinyuan Tong <xinyuantong.cs@gmail.com>
    Co-authored-by: zRzRzRzRzRzRzR <2448370773@qq.com>
    Co-authored-by: Minglei Zhu <mingleizhu1122@gmail.com>
    Co-authored-by: Chang Su <csu272@usc.edu>
Lianmin Zheng
706bd69cc5 Clean up server_args.py to have a dedicated function for model specific adjustments (#8983) 2025-08-08 19:56:50 -07:00
Yineng Zhang
9020f7fc32 chore: bump v0.5.0rc0 (#8959) 2025-08-08 09:16:18 -07:00
Wenbo Yang
1132547496 Add ernie4.py for ERNIE-4.5 (#7657) 2025-08-08 00:55:48 -07:00
Xinyuan Tong
3fa3c6cd6a Enables force reasoning based on chat template for Qwen3-Thinking (#8369) 2025-08-06 20:02:47 -07:00
    Signed-off-by: Xinyuan Tong <xinyuantong.cs@gmail.com>
    Signed-off-by: Xinyuan Tong <justinning0323@outlook.com>
    Co-authored-by: Chang Su <csu272@usc.edu>
Lifu Huang
6210e2c4f0 Support GPU pinning for LoRA (#8697) 2025-08-06 19:39:45 -07:00
HouseWest
ca47e24f5d [Feature] improve TBO: two chunk overlap (#8144) 2025-08-05 21:11:01 -07:00
Praneth Paruchuri
d26ca84f39 Support bailing moe (#8680) 2025-08-05 20:40:34 -07:00
Yineng Zhang
8cd344586e chore: bump v0.4.10.post2 (#8727) 2025-08-03 03:43:29 -07:00
Guanhua Wang
f7b2853ff8 [feat] support minimum token load balance in dp attention (#7379) 2025-08-03 00:46:47 -07:00
Lifu Huang
8675bdf246 Support limiting max loaded loras in CPU. (#8650) 2025-08-03 00:02:23 -07:00
Nicolas Castet
82e6c3a65a Add support for NCCL symmetric memory for TP allreduces (#8238) 2025-08-01 23:30:55 +00:00
Zac
b17c5b0118 fix arg typo for --disaggregation-transfer-backend (#8664) 2025-08-01 10:00:47 -07:00
Cheng Wan
6c88f6c8d9 [5/N] MoE Refactor: Update MoE parallelism arguments (#8658) 2025-08-01 01:20:03 -07:00
Ke Bao
33f0de337d chore: bump v0.4.10.post1 (#8652) 2025-08-01 12:07:30 +08:00
Faraz
4b04998d38 TRTLLM Gen MLA Decode Kernel Integration (same as #7938) (#8632) 2025-07-31 16:03:40 -07:00
    Signed-off-by: Faraz Khoubsirat <58580514+farazkh80@users.noreply.github.com>
Yineng Zhang
023288645b chore: bump v0.4.10 (#8608) 2025-07-31 20:50:17 +08:00
Chang Su
51c38163c1 model: support Step3V (#8583) 2025-07-31 02:41:00 -07:00
    Signed-off-by: Xinyuan Tong <justinning0323@outlook.com>
    Co-authored-by: nnnobody-code <nnnobody@foxmail.com>
    Co-authored-by: ispobock <ispobaoke@gmail.com>
    Co-authored-by: Qiaolin-Yu <qy254@cornell.edu>
    Co-authored-by: Qiaolin-Yu <liin1211@outlook.com>
    Co-authored-by: Xinyuan Tong <justinning0323@outlook.com>
    Co-authored-by: Xinyuan Tong <115166877+JustinTong0323@users.noreply.github.com>
Adarsh Shirawalmath
ec5f944271 [Model] Add support for Arcee Foundational Model (#8154) 2025-07-30 10:45:25 -07:00
Rui Chen
a730ce8162 [feature] [sgl-router] Add a dp-aware routing strategy (#6869) 2025-07-30 05:58:48 -07:00
Yineng Zhang
6478831be9 chore: bump v0.4.9.post6 (#8517) 2025-07-29 02:30:07 -07:00
Kaixi Hou
134fa43e19 [NVIDIA] Change to use num_local_experts (#8453) 2025-07-28 10:38:19 -07:00
Yineng Zhang
45bc170b36 chore: bump v0.4.9.post5 (#8458) 2025-07-28 02:11:06 -07:00
Qiaolin Yu
484d0e021d doc: add bench_one_batch_server in the benchmark doc (#8441) 2025-07-27 23:07:54 -07:00
Qiaolin Yu
2810338401 [feat] Support different attention backends for prefill and decode (#6338) 2025-07-28 11:42:29 +08:00
    Co-authored-by: tianqilin.99 <tianqilin.99@bytedance.com>
    Co-authored-by: Baizhou Zhang <sobereddiezhang@gmail.com>
Kevin Xiang Li
44d600cd67 Support precomputed_embeddings for Llama 4 (#8156) 2025-07-27 01:14:49 -07:00
    Signed-off-by: Xinyuan Tong <xinyuantong.cs@gmail.com>
    Co-authored-by: Xiang (Kevin) Li <lik@nvidia.com>
    Co-authored-by: Xinyuan Tong <115166877+JustinTong0323@users.noreply.github.com>
    Co-authored-by: Xinyuan Tong <xinyuantong.cs@gmail.com>
Yineng Zhang
2272c2a5b5 chore: bump v0.4.9.post4 (#8305) 2025-07-25 17:12:47 -07:00
Chang Su
d8ee15643b [Feat] Add reasoning parser for Qwen/Qwen3-235B-A22B-Thinking-2507 (#8363) 2025-07-25 14:59:42 -07:00
Xiaoyu Zhang
9045cc1eb8 [torch.compile bug] avoid biased_grouped_topk_impl func repeatedly triggering torch.compile in forward pass (#8353) 2025-07-25 21:17:47 +08:00