Commit Graph

550 Commits

| Author | SHA1 | Message | Date |
|---|---|---|---|
| Zhiqiang Xie | 70645f4d7d | upstream hicache fixes (#5570) | 2025-04-20 23:08:30 -07:00 |
| Qingquan Song | 188f0955fa | Add Speculative Decoding Eagle3 topk > 1 (#5318)<br>Co-authored-by: Stefan He <hebiaobuaa@gmail.com><br>Co-authored-by: Yubo Wang <yubowang2019@gmail.com> | 2025-04-20 22:58:28 -07:00 |
| kyle-pena-kuzco | 9f3bd2ad39 | Feat: Implement JSON Mode (response_format.type="json_object") (#4733)<br>Co-authored-by: Kyle Pena <kylepena@kyles-macbook-pro.turkey-marlin.ts.net> | 2025-04-20 17:41:22 -07:00 |
| Adarsh Shirawalmath | 8b39274e34 | [Feature] Prefill assistant response - add continue_final_message parameter (#4226)<br>Co-authored-by: Chayenne <zhaochen20@outlook.com> | 2025-04-20 17:37:18 -07:00 |
| Baizhou Zhang | 5156d5a413 | Add test config yamls for Deepseek v3 (#5433) | 2025-04-20 17:28:52 -07:00 |
| Xiaoyu Zhang | bf86c5e990 | restruct compressed_tensors_w8a8_fp8 (#5475) | 2025-04-19 04:52:15 -07:00 |
| woodx | 3bface15e6 | Feat/support encoder model (like bert) (#4887) | 2025-04-17 01:50:48 -07:00 |
| eigen | 8f783c1943 | [Model Support] unsloth/Phi-4-mini bnb model (#4982)<br>Co-authored-by: yhyang201 <yhyang201@gmail.com><br>Co-authored-by: Liangsheng Yin <hnyls2002@gmail.com><br>Co-authored-by: Chayenne <zhaochen20@outlook.com><br>Co-authored-by: Yineng Zhang <me@zhyncs.com> | 2025-04-16 19:58:20 -07:00 |
| Lianmin Zheng | 177320a582 | Clean up imports (#5467) | 2025-04-16 15:26:49 -07:00 |
| Baizhou Zhang | a42736bbb8 | Support MHA with chunked prefix cache for DeepSeek chunked prefill (#5113) | 2025-04-15 22:01:22 -07:00 |
| ryang | bc24205b32 | Support BNB quantization for llama/mllama (#5038)<br>Co-authored-by: Yuhao Yang <yyh073@foxmail.com> | 2025-04-15 18:00:31 -07:00 |
| Chang Su | 27a009bb00 | Fix ignore_eos parameter when loading a chat template (#5264) | 2025-04-15 17:09:45 -07:00 |
| JieXin Liang | f88f7e1943 | [misc] fix ci flaky case (#5352) | 2025-04-15 01:37:16 -07:00 |
| Yineng Zhang | ac5b78baf6 | fix: update test config (#5392) | 2025-04-14 17:39:47 -07:00 |
| yhyang201 | 072df75354 | Support for Qwen2.5-VL Model in bitsandbytes Format (#5003) | 2025-04-14 02:03:40 -07:00 |
| fzyzcjy | defede5073 | Fix DeepSeek DP Attention + torch compile (#5367)<br>Co-authored-by: ispobock <ispobaoke@163.com> | 2025-04-14 01:07:58 -07:00 |
| Yineng Zhang | 39d90449f3 | feat: update experiment_runner (#5360) | 2025-04-13 15:37:05 -07:00 |
| tianlian yi | bc92107b03 | Support server based rollout in Verlengine (#4848)<br>Co-authored-by: Jin Pan <jpan236@wisc.edu><br>Co-authored-by: Chayenne <zhaochen20@outlook.com><br>Co-authored-by: Jinn <47354855+jhinpan@users.noreply.github.com> | 2025-04-12 10:07:52 -07:00 |
| Ke Bao | 5ad0571903 | Adjust ci test threshold (#5271) | 2025-04-11 22:03:37 -07:00 |
| Ke Bao | 1078396f47 | Update deps for mllama4 (#5215) | 2025-04-10 09:12:44 -07:00 |
| saienduri | 7f875f1293 | update grok test (#5171) | 2025-04-09 11:09:47 -07:00 |
| Mick | fbebcb7aa4 | model: support mllama4 (#5144) | 2025-04-09 09:28:44 -07:00 |
| Xiaoyu Zhang | 87eddedfa2 | [ci] fix ci test fused_moe op (#5102) | 2025-04-09 08:52:46 -07:00 |
| HandH1998 | 4065248214 | Support Llama4 fp8 inference (#5194)<br>Co-authored-by: laixinn <xielx@shanghaitech.edu.cn><br>Co-authored-by: sleepcoo <sleepcoo@gmail.com><br>Co-authored-by: zhyncs <me@zhyncs.com> | 2025-04-09 20:14:34 +08:00 |
| fzyzcjy | 39efad4fbc | Tiny disable model that does not work (#5175) | 2025-04-08 18:42:37 -07:00 |
| XinyuanTong | d09a51f1f6 | [feat&refactor] Enhance multimodal input support with refactor io_struct (#4938)<br>Signed-off-by: Xinyuan Tong <justinning0323@outlook.com> | 2025-04-08 14:48:07 -07:00 |
| Yubo Wang | fd5a55cfd3 | Use public model for FA3 speculative decode testing (#5152) | 2025-04-08 00:08:25 -07:00 |
| Yubo Wang | 804d9f2e4c | Add unit test on page_size > 1 and mla and integration test for Flash Attention 3 (#4760) | 2025-04-07 23:20:51 -07:00 |
| Yun Dai | 9731eca77b | [modelopt] automatically inspect if model is ModelOpt quantized and set quantization method (#5145) | 2025-04-07 22:12:11 -07:00 |
| Baizhou Zhang | efbae697b3 | [Revision] Replace enable_flashinfer_mla argument with attention_backend (#5052) | 2025-04-05 01:23:02 -07:00 |
| AniZpZ | d95269f9b3 | [2/3] fix dsv3 awq issue (#4625)<br>Co-authored-by: 晟海 <huangtingwei.htw@antgroup.com><br>Co-authored-by: laixinn <xielx@shanghaitech.edu.cn> | 2025-04-03 17:36:39 -07:00 |
| Lianmin Zheng | 74885a848b | Revert "Replace enable_flashinfer_mla argument with attention_backend" (#5048) | 2025-04-03 13:30:56 -07:00 |
| Baizhou Zhang | e8999b13b7 | Replace enable_flashinfer_mla argument with attention_backend (#5005) | 2025-04-03 02:53:58 -07:00 |
| Zhiqiang Xie | e119f04215 | Large page size aligned hierarchical caching (#4581) | 2025-04-01 22:38:15 -07:00 |
| Mick | 5cb552b1d4 | refactor: multimodal data (#4754) | 2025-03-31 09:57:51 -07:00 |
| Zhiqiang Xie | a169b9f813 | Fix oom error for large page size (#4913)<br>Co-authored-by: Lianmin Zheng <lianminzheng@gmail.com> | 2025-03-30 21:34:21 -07:00 |
| Baizhou Zhang | 42873eac09 | [Fix] Improve Lora tests and reduce CI runtime (#4925) | 2025-03-30 19:40:14 -07:00 |
| Lianmin Zheng | 9adf178cc2 | Fix 2-gpu CI test and suppress some warnings (#4930) | 2025-03-30 12:51:44 -07:00 |
| Lianmin Zheng | 4ede6770cd | Fix retract for page size > 1 (#4914) | 2025-03-30 02:57:15 -07:00 |
| Lianmin Zheng | b26bc86b36 | Support page size > 1 + eagle (#4908) | 2025-03-30 00:46:23 -07:00 |
| Lianmin Zheng | 74e0ac1dbd | Clean up import vllm in quantization/__init__.py (#4834) | 2025-03-28 10:34:10 -07:00 |
| chaobo jia | ef9a378a20 | [Feature] add multi-rank support for Lora (#4492)<br>Co-authored-by: rudy152 <czh1137892874@gmail.com> | 2025-03-28 09:38:44 -07:00 |
| Lianmin Zheng | 47e6628aae | Fix CI tests (#4853) | 2025-03-28 00:28:35 -07:00 |
| Juwan Yoo | 7907f9eb20 | test: reduce mem_fraction_static for gemma3 vision test (#4840) | 2025-03-27 23:20:10 -07:00 |
| vikram singh shekhawat | 6dbf99982f | Fix missing arguments in SchedulePolicy and RadixCache initialization in tests. (#4712) | 2025-03-27 22:23:51 -07:00 |
| Vincent | e2e2ab70e0 | IPv6 support (#3949)<br>Signed-off-by: Brayden Zhong <b8zhong@uwaterloo.ca> | 2025-03-27 21:42:13 -07:00 |
| fzyzcjy | 0d3e3072ee | Fix CI of test_patch_torch (#4844) | 2025-03-27 21:22:45 -07:00 |
| fzyzcjy | 62dd95870c | Remove retry in nightly tests (#4846) | 2025-03-27 21:18:29 -07:00 |
| Qiaolin Yu | 9fdc6d6abc | Fix the lora adapter when lora path is none (#4799)<br>Co-authored-by: Beichen Ma <mabeichen12@gmail.com> | 2025-03-27 21:03:08 -07:00 |
| Jon Durbin | 04eb6062e4 | Include context length in /v1/models response. (#4809) | 2025-03-27 20:23:18 -07:00 |