Commit Graph

521 Commits

Author SHA1 Message Date
Baizhou Zhang
efbae697b3 [Revision] Replace enable_flashinfer_mla argument with attention_backend (#5052) 2025-04-05 01:23:02 -07:00
AniZpZ
d95269f9b3 [2/3] fix dsv3 awq issue (#4625) 2025-04-03 17:36:39 -07:00
Co-authored-by: 晟海 <huangtingwei.htw@antgroup.com>
Co-authored-by: laixinn <xielx@shanghaitech.edu.cn>
Lianmin Zheng
74885a848b Revert "Replace enable_flashinfer_mla argument with attention_backend" (#5048) 2025-04-03 13:30:56 -07:00
Baizhou Zhang
e8999b13b7 Replace enable_flashinfer_mla argument with attention_backend (#5005) 2025-04-03 02:53:58 -07:00
Zhiqiang Xie
e119f04215 Large page size aligned hierarchical caching (#4581) 2025-04-01 22:38:15 -07:00
Mick
5cb552b1d4 refactor: multimodal data (#4754) 2025-03-31 09:57:51 -07:00
Zhiqiang Xie
a169b9f813 Fix oom error for large page size (#4913) 2025-03-30 21:34:21 -07:00
Co-authored-by: Lianmin Zheng <lianminzheng@gmail.com>
Baizhou Zhang
42873eac09 [Fix] Improve Lora tests and reduce CI runtime (#4925) 2025-03-30 19:40:14 -07:00
Lianmin Zheng
9adf178cc2 Fix 2-gpu CI test and suppress some warnings (#4930) 2025-03-30 12:51:44 -07:00
Lianmin Zheng
4ede6770cd Fix retract for page size > 1 (#4914) 2025-03-30 02:57:15 -07:00
Lianmin Zheng
b26bc86b36 Support page size > 1 + eagle (#4908) 2025-03-30 00:46:23 -07:00
Lianmin Zheng
74e0ac1dbd Clean up import vllm in quantization/__init__.py (#4834) 2025-03-28 10:34:10 -07:00
chaobo jia
ef9a378a20 [Feature] add multi-rank support for Lora (#4492) 2025-03-28 09:38:44 -07:00
Co-authored-by: rudy152 <czh1137892874@gmail.com>
Lianmin Zheng
47e6628aae Fix CI tests (#4853) 2025-03-28 00:28:35 -07:00
Juwan Yoo
7907f9eb20 test: reduce mem_fraction_static for gemma3 vision test (#4840) 2025-03-27 23:20:10 -07:00
vikram singh shekhawat
6dbf99982f Fix missing arguments in SchedulePolicy and RadixCache initialization in tests. (#4712) 2025-03-27 22:23:51 -07:00
Vincent
e2e2ab70e0 IPv6 support (#3949) 2025-03-27 21:42:13 -07:00
Signed-off-by: Brayden Zhong <b8zhong@uwaterloo.ca>
fzyzcjy
0d3e3072ee Fix CI of test_patch_torch (#4844) 2025-03-27 21:22:45 -07:00
fzyzcjy
62dd95870c Remove retry in nightly tests (#4846) 2025-03-27 21:18:29 -07:00
Qiaolin Yu
9fdc6d6abc Fix the lora adapter when lora path is none (#4799) 2025-03-27 21:03:08 -07:00
Co-authored-by: Beichen Ma <mabeichen12@gmail.com>
Jon Durbin
04eb6062e4 Include context length in /v1/models response. (#4809) 2025-03-27 20:23:18 -07:00
tarinkk
7f19e083c1 Support (1 <= dp < tp) in the dp attention in DeepEP (#4770) 2025-03-27 17:09:35 -07:00
Co-authored-by: Cheng Wan <cwan39@gatech.edu>
Lianmin Zheng
2a882e8f3a Fix the nightly eval by lowering the threshold of neuralmagic/gemma-2-2b-it-FP8 (#4830) 2025-03-27 16:09:49 -07:00
fzyzcjy
92bb49a7f9 Patch PyTorch's bug that cross-process tensor transfer will lead to wrong device (#4565) 2025-03-27 00:22:33 -07:00
Pan Lyu
c913ed4046 support clip embedding model (#4506) 2025-03-27 00:18:15 -07:00
Xihuai Wang
1afe3d0798 Align finish reason and stream mode in openai api (#4388) 2025-03-27 00:16:52 -07:00
Xiaoyu Zhang
04e3ff6975 Support compressed tensors fp8w8a8 (#4743) 2025-03-26 13:21:25 -07:00
fzyzcjy
26f07294f1 Warn users when release_memory_occupation is called without memory saver enabled (#4566) 2025-03-26 00:18:14 -07:00
fzyzcjy
15ddd84322 Add retry for flaky tests in CI (#4755) 2025-03-25 16:53:12 -07:00
fzyzcjy
eb934bdf3b Fix test_expert_distribution failure (#4752) 2025-03-25 01:17:03 -07:00
DarkSharpness
ac3fae8445 [Feature] Support "strict" in function calling (#4310) 2025-03-24 22:15:25 -07:00
HandH1998
2d1b83e57a add dsv3 int8 test (#4705) 2025-03-24 21:57:58 -07:00
yuhsaun-t
199bb01d00 Add endpoints to dump selected expert ids (#4435) 2025-03-24 21:34:19 -07:00
Co-authored-by: Cheng Wan <54331508+ch-wan@users.noreply.github.com>
Mick
1e86457c90 model: Minicpmo (#3023) 2025-03-24 20:08:40 -07:00
Ximingwang-09
22c3702e1e [Model] Support Qwen2ForSequenceClassification (#4609) 2025-03-24 19:13:44 -07:00
Co-authored-by: ximing.wxm <ximing.wxm@antgroup.com>
Alex Sun
af6535e7aa [ROCm] Enable MTP (NextN) on AMD GPU (#4631) 2025-03-23 22:58:05 -07:00
Mick
11577cedb7 refactor: bug fixes and refactor for vlm (#4661) 2025-03-22 22:48:49 -07:00
Yi Zhang
3c09548d1f close gemma2 in test_verl_engine.py temporarily (#4685) 2025-03-22 16:36:46 -07:00
Yineng Zhang
e7a8610d51 fix flaky ut (#4670) 2025-03-22 12:36:50 -07:00
Adarsh Shirawalmath
a2cc62a6db [CI fix] test skipping modelopt on AMD (#4677) 2025-03-22 12:36:02 -07:00
Yun Dai
8cd4250401 [quantization] fix channelwise conversion with scalar weight scale (#4596) 2025-03-22 00:47:52 -07:00
JieXin Liang
9e93ef3f8e [fix] fix illegal mem access and clean up triton attention backend (#4571) 2025-03-20 02:01:52 -07:00
Jinyan Chen
f44db16c8e [Feature] Integrate DeepEP into SGLang (#4232) 2025-03-19 08:16:31 -07:00
Co-authored-by: Cheng Wan <cwan39@gatech.edu>
Co-authored-by: Xuting Zhou <xutingz@nvidia.com>
JieXin Liang
c0e9a36c5f Optimize Triton decoding kernel for dynamic workload (#4553) 2025-03-18 21:25:38 -07:00
aoshen524
588865f0e0 [Feature] Support Tensor Parallelism and Weight Slicing for Lora (#4274) 2025-03-18 20:33:07 -07:00
Co-authored-by: ShenAo1111 <1377693092@qq.com>
Co-authored-by: Baizhou Zhang <sobereddiezhang@gmail.com>
Cheng Wan
3196999f63 Reduce computation and communication in DP attention (#4521) 2025-03-18 13:41:36 -07:00
James Liu
9e0186f352 [Feature] Support EAGLE 3 (#4247) 2025-03-18 07:35:23 -07:00
Yineng Zhang
c787298547 use sgl custom all reduce (#4441) 2025-03-18 00:46:41 -07:00
Ke Bao
45212ce18b Add deepseek v2 torch compile pr test (#4538) 2025-03-18 00:29:24 -07:00
Mick
d373a48c98 fix: second_per_grid_ts should be used to get mrope position (#3682) 2025-03-17 18:12:38 -07:00