eigen | 8f783c1943 | [Model Support] unsloth/Phi-4-mini bnb model (#4982) | 2025-04-16 19:58:20 -07:00
    Co-authored-by: yhyang201 <yhyang201@gmail.com>
    Co-authored-by: Liangsheng Yin <hnyls2002@gmail.com>
    Co-authored-by: Chayenne <zhaochen20@outlook.com>
    Co-authored-by: Yineng Zhang <me@zhyncs.com>

Lianmin Zheng | 177320a582 | Clean up imports (#5467) | 2025-04-16 15:26:49 -07:00

Baizhou Zhang | a42736bbb8 | Support MHA with chunked prefix cache for DeepSeek chunked prefill (#5113) | 2025-04-15 22:01:22 -07:00

ryang | bc24205b32 | Support BNB quantization for llama/mllama (#5038) | 2025-04-15 18:00:31 -07:00
    Co-authored-by: Yuhao Yang <yyh073@foxmail.com>

Chang Su | 27a009bb00 | Fix ignore_eos parameter when loading a chat template (#5264) | 2025-04-15 17:09:45 -07:00

JieXin Liang | f88f7e1943 | [misc] fix ci flaky case (#5352) | 2025-04-15 01:37:16 -07:00

Yineng Zhang | ac5b78baf6 | fix: update test config (#5392) | 2025-04-14 17:39:47 -07:00

yhyang201 | 072df75354 | Support for Qwen2.5-VL Model in bitsandbytes Format (#5003) | 2025-04-14 02:03:40 -07:00

fzyzcjy | defede5073 | Fix DeepSeek DP Attention + torch compile (#5367) | 2025-04-14 01:07:58 -07:00
    Co-authored-by: ispobock <ispobaoke@163.com>

Yineng Zhang | 39d90449f3 | feat: update experiment_runner (#5360) | 2025-04-13 15:37:05 -07:00

tianlian yi | bc92107b03 | Support server based rollout in Verlengine (#4848) | 2025-04-12 10:07:52 -07:00
    Co-authored-by: Jin Pan <jpan236@wisc.edu>
    Co-authored-by: Chayenne <zhaochen20@outlook.com>
    Co-authored-by: Jinn <47354855+jhinpan@users.noreply.github.com>

Ke Bao | 5ad0571903 | Adjust ci test threshold (#5271) | 2025-04-11 22:03:37 -07:00

Ke Bao | 1078396f47 | Update deps for mllama4 (#5215) | 2025-04-10 09:12:44 -07:00

saienduri | 7f875f1293 | update grok test (#5171) | 2025-04-09 11:09:47 -07:00

Mick | fbebcb7aa4 | model: support mllama4 (#5144) | 2025-04-09 09:28:44 -07:00

Xiaoyu Zhang | 87eddedfa2 | [ci] fix ci test fused_moe op (#5102) | 2025-04-09 08:52:46 -07:00

HandH1998 | 4065248214 | Support Llama4 fp8 inference (#5194) | 2025-04-09 20:14:34 +08:00
    Co-authored-by: laixinn <xielx@shanghaitech.edu.cn>
    Co-authored-by: sleepcoo <sleepcoo@gmail.com>
    Co-authored-by: zhyncs <me@zhyncs.com>

fzyzcjy | 39efad4fbc | Tiny disable model that does not work (#5175) | 2025-04-08 18:42:37 -07:00

XinyuanTong | d09a51f1f6 | [feat&refactor] Enhance multimodal input support with refactor io_struct (#4938) | 2025-04-08 14:48:07 -07:00
    Signed-off-by: Xinyuan Tong <justinning0323@outlook.com>

Yubo Wang | fd5a55cfd3 | Use public model for FA3 speculative decode testing (#5152) | 2025-04-08 00:08:25 -07:00

Yubo Wang | 804d9f2e4c | Add unit test on page_size > 1 and mla and integration test for Flash Attention 3 (#4760) | 2025-04-07 23:20:51 -07:00

Yun Dai | 9731eca77b | [modelopt] automatically inspect if model is ModelOpt quantized and set quantization method (#5145) | 2025-04-07 22:12:11 -07:00

Baizhou Zhang | efbae697b3 | [Revision] Replace enable_flashinfer_mla argument with attention_backend (#5052) | 2025-04-05 01:23:02 -07:00

AniZpZ | d95269f9b3 | [2/3] fix dsv3 awq issue (#4625) | 2025-04-03 17:36:39 -07:00
    Co-authored-by: 晟海 <huangtingwei.htw@antgroup.com>
    Co-authored-by: laixinn <xielx@shanghaitech.edu.cn>

Lianmin Zheng | 74885a848b | Revert "Replace enable_flashinfer_mla argument with attention_backend" (#5048) | 2025-04-03 13:30:56 -07:00

Baizhou Zhang | e8999b13b7 | Replace enable_flashinfer_mla argument with attention_backend (#5005) | 2025-04-03 02:53:58 -07:00

Zhiqiang Xie | e119f04215 | Large page size aligned hierarchical caching (#4581) | 2025-04-01 22:38:15 -07:00

Mick | 5cb552b1d4 | refactor: multimodal data (#4754) | 2025-03-31 09:57:51 -07:00

Zhiqiang Xie | a169b9f813 | Fix oom error for large page size (#4913) | 2025-03-30 21:34:21 -07:00
    Co-authored-by: Lianmin Zheng <lianminzheng@gmail.com>

Baizhou Zhang | 42873eac09 | [Fix] Improve Lora tests and reduce CI runtime (#4925) | 2025-03-30 19:40:14 -07:00

Lianmin Zheng | 9adf178cc2 | Fix 2-gpu CI test and suppress some warnings (#4930) | 2025-03-30 12:51:44 -07:00

Lianmin Zheng | 4ede6770cd | Fix retract for page size > 1 (#4914) | 2025-03-30 02:57:15 -07:00

Lianmin Zheng | b26bc86b36 | Support page size > 1 + eagle (#4908) | 2025-03-30 00:46:23 -07:00

Lianmin Zheng | 74e0ac1dbd | Clean up import vllm in quantization/__init__.py (#4834) | 2025-03-28 10:34:10 -07:00

chaobo jia | ef9a378a20 | [Feature] add multi-rank support for Lora (#4492) | 2025-03-28 09:38:44 -07:00
    Co-authored-by: rudy152 <czh1137892874@gmail.com>

Lianmin Zheng | 47e6628aae | Fix CI tests (#4853) | 2025-03-28 00:28:35 -07:00

Juwan Yoo | 7907f9eb20 | test: reduce mem_fraction_static for gemma3 vision test (#4840) | 2025-03-27 23:20:10 -07:00

vikram singh shekhawat | 6dbf99982f | Fix missing arguments in SchedulePolicy and RadixCache initialization in tests. (#4712) | 2025-03-27 22:23:51 -07:00

Vincent | e2e2ab70e0 | IPv6 support (#3949) | 2025-03-27 21:42:13 -07:00
    Signed-off-by: Brayden Zhong <b8zhong@uwaterloo.ca>

fzyzcjy | 0d3e3072ee | Fix CI of test_patch_torch (#4844) | 2025-03-27 21:22:45 -07:00

fzyzcjy | 62dd95870c | Remove retry in nightly tests (#4846) | 2025-03-27 21:18:29 -07:00

Qiaolin Yu | 9fdc6d6abc | Fix the lora adapter when lora path is none (#4799) | 2025-03-27 21:03:08 -07:00
    Co-authored-by: Beichen Ma <mabeichen12@gmail.com>

Jon Durbin | 04eb6062e4 | Include context length in /v1/models response. (#4809) | 2025-03-27 20:23:18 -07:00

tarinkk | 7f19e083c1 | Support (1 <= dp < tp) in the dp attention in DeepEP (#4770) | 2025-03-27 17:09:35 -07:00
    Co-authored-by: Cheng Wan <cwan39@gatech.edu>

Lianmin Zheng | 2a882e8f3a | Fix the nightly eval by lowering the threshold of neuralmagic/gemma-2-2b-it-FP8 (#4830) | 2025-03-27 16:09:49 -07:00

fzyzcjy | 92bb49a7f9 | Patch PyTorch's bug that cross-process tensor transfer will lead to wrong device (#4565) | 2025-03-27 00:22:33 -07:00

Pan Lyu | c913ed4046 | support clip embedding model (#4506) | 2025-03-27 00:18:15 -07:00

Xihuai Wang | 1afe3d0798 | Align finish reason and stream mode in openai api (#4388) | 2025-03-27 00:16:52 -07:00

Xiaoyu Zhang | 04e3ff6975 | Support compressed tensors fp8w8a8 (#4743) | 2025-03-26 13:21:25 -07:00

fzyzcjy | 26f07294f1 | Warn users when release_memory_occupation is called without memory saver enabled (#4566) | 2025-03-26 00:18:14 -07:00