Commit Graph

127 Commits

Netanel Haber
a98496834b  Feature/nano v2 offline modelopt fp8 and nvfp4 (#12018)
Signed-off-by: Netanel Haber <58652339+netanel-haber@users.noreply.github.com>
2025-10-23 11:16:46 -07:00

Liangsheng Yin
6c18addb6f  Revert "Support nvidia/NVIDIA-Nemotron-Nano-9B-v2-FP8/NVFP4" (#12015)
2025-10-23 21:27:58 +08:00

Netanel Haber
d6fee73d1f  Support nvidia/NVIDIA-Nemotron-Nano-9B-v2-FP8/NVFP4 (#11866)
2025-10-23 17:29:02 +08:00

Shane A
d383e6616e  [Model] Add Olmo 3 model support (#11396)
2025-10-19 23:59:16 -07:00

b8zhong
f4f8a1b4d8  ci: update lmms-eval to speed up multimodal CI (#11000)
2025-10-19 02:51:19 +08:00

Shangming Cai
1de3924b18  [CI] Add GLM4MoE model test (#11706)
Signed-off-by: Shangming Cai <csmthu@gmail.com>
2025-10-16 16:25:58 +08:00

Lianmin Zheng
61055cb309  Reorder PD disagg CI tests (#11438)
2025-10-10 17:56:49 -07:00

Netanel Haber
d6837aea4d  model: Support Hybrid Mamba2 NemotronHForCausalLM (nvidia/NVIDIA-Nemotron-Nano-9B-v2) (#10909)
Signed-off-by: Netanel Haber <nhaber@nvidia.com>
2025-10-09 00:37:38 +08:00

Liangsheng Yin
4726c9197f  [minor] fix the lint (#11198)
2025-10-04 01:04:58 +08:00

vikram singh shekhawat
586e81a28a  [Test] Initialize mem_fraction_static in setUpClass to fix pytest VLM test crashes. (#10859)
Co-authored-by: svc_repro_tool <svc_repro_tool@habana.ai>
2025-10-04 00:14:48 +08:00

ilyasch2
083629c235  [model] Add mamba2 and Falcon-H1 support. (#10988)
Co-authored-by: Younes Belkada <younes.belkada@tii.ae>
Co-authored-by: Younes B <49240599+younesbelkada@users.noreply.github.com>
2025-10-02 19:15:36 +08:00

fzyzcjy
3b25dc127a  [1/2] Speed up trtllm_mla attention backend (>10% e2e) (#10473)
2025-09-15 11:53:21 -07:00

Praneth Paruchuri
a45d9a4ee8  model: support solar (#8189)
2025-09-16 02:21:13 +08:00

Jintao Zhang
f9ee6ae17a  [router]: Add Embedding routing logic (#10129)
Signed-off-by: Jintao Zhang <zhangjintao9020@gmail.com>
Co-authored-by: Waël Boukhobza <wawa_wael@live.fr>
2025-09-14 18:44:35 -07:00

Yi Zhang
fe6cdf8972  add qwen3-next ut (#10355)
2025-09-12 18:06:48 +08:00

EduardDurech
46d8fb1c98  model: support Apertus (#9774)
2025-09-11 20:49:10 -07:00

wenhuipeng
16ff3d4b05  Support opt model (#10165)
2025-09-09 12:45:00 +08:00

tc-mb
03dbf1aa8e  [model] support MiniCPM-V 4.0 (#8747)
Signed-off-by: tc-mb <caitianchi@modelbest.cn>
Co-authored-by: Xinyuan Tong <115166877+JustinTong0323@users.noreply.github.com>
2025-09-02 15:33:03 -07:00

Netanel Haber
4cd08dc592  model: Support nvidia/Llama-3_1-Nemotron-Ultra-253B-v1 (#9301)
2025-08-26 15:33:40 +08:00

Netanel Haber
845d12a979  model: support nvidia/Llama-3_3-Nemotron-Super-49B-v1 (#9067)
Co-authored-by: Kyle Huang <kylhuang@nvidia.com>
2025-08-17 01:48:15 -07:00

Lianmin Zheng
2c7f01bc89  Reorganize CI and test files (#9027)
2025-08-10 12:30:06 -07:00

Zheng Wengang
2d120f8b18  [Feature][Multimodal] Implement LRU cache for multimodal embeddings (#8292)
Signed-off-by: Xinyuan Tong <xinyuantong.cs@gmail.com>
Co-authored-by: Xinyuan Tong <115166877+JustinTong0323@users.noreply.github.com>
Co-authored-by: Xinyuan Tong <xinyuantong.cs@gmail.com>
2025-08-06 23:21:40 -07:00

Lifu Huang
6210e2c4f0  Support GPU pinning for LoRA (#8697)
2025-08-06 19:39:45 -07:00

Praneth Paruchuri
d26ca84f39  Support bailing moe (#8680)
2025-08-05 20:40:34 -07:00

Lifu Huang
8675bdf246  Support limiting max loaded loras in CPU. (#8650)
2025-08-03 00:02:23 -07:00

Lifu Huang
46e9d1c7c1  Increase tolerance to address CI failures (#8643)
2025-08-01 02:32:10 -07:00

Lifu Huang
67e53b16f5  Bump transfomers to 4.54.1 to fix Gemma cache issue. (#8541)
2025-07-30 19:50:54 -07:00

Stefan He
4ad9737045  chore: bump transformer to 4.54.0 (#8416)
Co-authored-by: Binyao Jiang <byjiang1996@gmail.com>
Co-authored-by: Lifu Huang <lifu.hlf@gmail.com>
2025-07-27 21:27:25 -07:00

Lifu Huang
8abd3e77fe  Introduce Stable LoRA ID System for Overlapped Updates and Prefix Caching (#8261)
2025-07-23 00:32:16 -07:00

Praneth Paruchuri
83c104b188  Feat: Support for Persimmon Model (#7983)
2025-07-19 23:07:47 -07:00

Pavel Logachev
877e35d775  Add get_hidden_dim to qwen3.py for correct lora (#7312)
2025-07-19 19:31:16 -07:00

Clay
cbdfb77123  Enable FlashInfer support encoder models and add head_dim padding workaround (#6230)
2025-07-19 19:30:16 -07:00

Lifu Huang
4e3defe5a7  Support start up LoRA server without initial adapters (#8019)
2025-07-19 15:38:09 -07:00

Lifu Huang
3de617a75b  Fix LoRA buffer contamination during adapter eviction (#8103)
2025-07-19 13:14:08 -07:00

Lianmin Zheng
bb0e8a32b5  Clean up server args (#8161)
2025-07-19 11:32:52 -07:00

Praneth Paruchuri
cb736df854  Support for Phi-1.5 & Phi-2 models (#7862)
2025-07-13 18:43:40 -07:00

Lifu Huang
e2ed9d049a  Refactor dynamic LoRA update to fix incorrect handling of variant weight shapes (#7844)
2025-07-13 18:36:01 -07:00

Binyao Jiang
2d54d4bb64  Feat: Support Phi-3.5-MoE in SGLang (#7907)
2025-07-09 23:51:33 -07:00

Lifu Huang
01f9873048  Fix CI test OOM issue. (#7799)
2025-07-05 15:11:02 -07:00

YanbingJiang
4de0395343  Add V2-lite model test (#7390)
Co-authored-by: DiweiSun <105627594+DiweiSun@users.noreply.github.com>
2025-07-03 22:25:50 -07:00

Lifu Huang
1a08358aed  Improve error handling for requests with unloaded LoRA path(s) (#7642)
2025-07-01 20:05:34 -07:00

Lifu Huang
49538d111b  Support dynamic LoRA loading / unloading in engine/server API (#7446)
2025-06-27 21:00:27 -07:00

Lifu Huang
2373faa317  Fix flakiness in LoRA batch test. (#7552)
2025-06-27 19:51:43 -07:00

woodx
e30ef368ab  Feat/support rerank (#6058)
2025-06-16 10:50:01 -07:00

Baizhou Zhang
3b014bc13d  Fix test_lora.py CI (#7061)
2025-06-10 12:24:46 -07:00

Pan Lyu
451ffe74d9  support qwen3 emebedding (#6990)
2025-06-09 01:32:49 -07:00

Marc Sun
37f1547587  [FEAT] Add transformers backend support (#5929)
2025-06-03 21:05:29 -07:00

Ravi Theja
c6a0cacc35  Update CI tests for Llama4 models (#6421)
2025-06-01 11:52:15 +08:00

ryang
a6ae3af15e  Support XiaomiMiMo inference with mtp (#6059)
2025-05-22 14:14:49 -07:00

HAI
5c0b38f369  aiter attention-backend (default enabled on AMD/ROCm) (#6381)
2025-05-20 22:52:41 -07:00