Commit Graph

162 Commits

Author SHA1 Message Date
Chang Su
41ba767f0c feat: Add warnings for invalid tool_choice and UTs (#6582) 2025-05-27 16:53:19 -07:00
Junrong Lin
2103b80607 [CI] update verlengine ci to 4-gpu test (#6007) 2025-05-27 14:32:23 -07:00
Xinyuan Tong
681fdc264b Refactor vlm embedding routine to use precomputed feature (#6543) 2025-05-24 18:39:21 -07:00
    Signed-off-by: Xinyuan Tong <justinning0323@outlook.com>
Chang Su
ed0c3035cd feat(Tool Calling): Support required and specific function mode (#6550) 2025-05-23 21:00:37 -07:00
Byron Hsu
d2e0881a34 [PD] support spec decode (#6507) 2025-05-23 12:03:05 -07:00
    Co-authored-by: SangBin Cho <rkooo567@gmail.com>
Yineng Zhang
0b07c4a99f chore: upgrade sgl-kernel v0.1.4 (#6532) 2025-05-22 13:28:16 -07:00
fzyzcjy
f11481b921 Add 4-GPU runner tests and split existing tests (#6383) 2025-05-18 11:56:51 -07:00
Sai Enduri
73eb67c087 Enable unit tests for AMD CI. (#6283) 2025-05-14 12:55:36 -07:00
shangmingc
f1c896007a [PD] Add support for different TP sizes per DP rank (#5922) 2025-05-12 13:55:42 -07:00
    Signed-off-by: Shangming Cai <caishangming@linux.alibaba.com>
shangmingc
3ee40ff919 [CI] Re-enable pd disaggregation test (#6231) 2025-05-12 10:09:12 -07:00
    Signed-off-by: Shangming Cai <caishangming@linux.alibaba.com>
Lianmin Zheng
fba8eccd7e Log if cuda graph is used & extend cuda graph capture to cuda-graph-max-bs (#6201) 2025-05-12 00:17:33 -07:00
    Co-authored-by: SangBin Cho <rkooo567@gmail.com>
Lianmin Zheng
03227c5fa6 [CI] Reorganize the 8 gpu tests (#6192) 2025-05-11 10:55:06 -07:00
Lianmin Zheng
17c36c5511 [CI] Disabled deepep tests temporarily because it takes too much time. (#6186) 2025-05-10 23:40:50 -07:00
shangmingc
31d1f6e7f4 [PD] Add simple unit test for disaggregation feature (#5654) 2025-05-11 13:35:27 +08:00
    Signed-off-by: Shangming Cai <caishangming@linux.alibaba.com>
Yineng Zhang
66fc63d6b1 Revert "feat: add thinking_budget (#6089)" (#6181) 2025-05-10 16:07:45 -07:00
thyecust
63484f9fd6 feat: add thinking_budget (#6089) 2025-05-09 08:22:09 -07:00
Stefan He
24c13ca950 Clean up fa3 test from 8 gpus (#6105) 2025-05-07 18:38:40 -07:00
Jinyan Chen
8a828666a3 Add DeepEP to CI PR Test (#5655) 2025-05-06 17:36:03 -07:00
    Co-authored-by: Jinyan Chen <jinyanc@nvidia.com>
Baizhou Zhang
bdd17998e6 [Fix] Fix and rename flashmla CI test (#6045) 2025-05-06 13:25:15 -07:00
Huapeng Zhou
b8559764f6 [Test] Add flashmla attention backend test (#5587) 2025-05-05 10:32:02 -07:00
mlmz
256c4c2519 fix: correct stream response when enable_thinking is set to false (#5881) 2025-04-30 19:44:37 -07:00
Ying Sheng
11383cec3c [PP] Add pipeline parallelism (#5724) 2025-04-30 18:18:07 -07:00
saienduri
e3a5304475 Add AMD MI300x Nightly Testing. (#5861) 2025-04-29 17:34:32 -07:00
Chang Su
2b06484bd1 feat: support pythonic tool call and index in tool call streaming (#5725) 2025-04-29 17:30:44 -07:00
Chang Su
9419e75d60 [CI] Add test_function_calling.py to run_suite.py (#5896) 2025-04-29 15:54:53 -07:00
Qiaolin Yu
8c0cfca87d Feat: support cuda graph for LoRA (#4115) 2025-04-28 23:30:44 -07:00
    Co-authored-by: Beichen Ma <mabeichen12@gmail.com>
Lianmin Zheng
daed453e84 [CI] Improve github summary & enable fa3 for more models (#5796) 2025-04-27 15:29:46 -07:00
Baizhou Zhang
f9fb33efc3 Add 8-GPU Test for Deepseek-V3 (#5691) 2025-04-27 12:46:12 -07:00
    Co-authored-by: Lianmin Zheng <lianminzheng@gmail.com>
Lianmin Zheng
621e96bf9b [CI] Fix ci tests (#5769) 2025-04-27 07:18:10 -07:00
Lianmin Zheng
35ca04d2fa [CI] fix port conflicts (#5789) 2025-04-27 05:17:44 -07:00
Lianmin Zheng
3c4e0ee64d [CI] Tune threshold (#5787) 2025-04-27 04:10:22 -07:00
Lianmin Zheng
4d23ba08f5 Simplify FA3 tests (#5779) 2025-04-27 01:30:17 -07:00
Baizhou Zhang
a45a4b239d Split local attention test from fa3 test (#5774) 2025-04-27 01:03:31 -07:00
Lianmin Zheng
981a2619d5 Fix eagle test case (#5776) 2025-04-27 01:00:54 -07:00
Stefan He
408ba02218 Add Llama 4 to FA3 test (#5509) 2025-04-26 19:49:31 -07:00
Mick
02723e1b0d CI: rewrite test_vision_chunked_prefill to speedup (#5682) 2025-04-26 18:33:13 -07:00
Ravi Theja
7d9679b74d Add MMMU benchmark results (#4491) 2025-04-25 15:23:53 +08:00
    Co-authored-by: Ravi Theja Desetty <ravitheja@Ravis-MacBook-Pro.local>
Mick
c998d04b46 vlm: enable radix cache for qwen-vl models (#5349) 2025-04-23 20:35:05 -07:00
    Co-authored-by: Xinyuan Tong <justinning0323@outlook.com>
Qingquan Song
188f0955fa Add Speculative Decoding Eagle3 topk > 1 (#5318) 2025-04-20 22:58:28 -07:00
    Co-authored-by: Stefan He <hebiaobuaa@gmail.com>
    Co-authored-by: Yubo Wang <yubowang2019@gmail.com>
Xiaoyu Zhang
bf86c5e990 restruct compressed_tensors_w8a8_fp8 (#5475) 2025-04-19 04:52:15 -07:00
yhyang201
072df75354 Support for Qwen2.5-VL Model in bitsandbytes Format (#5003) 2025-04-14 02:03:40 -07:00
Xiaoyu Zhang
87eddedfa2 [ci] fix ci test fused_moe op (#5102) 2025-04-09 08:52:46 -07:00
HandH1998
4065248214 Support Llama4 fp8 inference (#5194) 2025-04-09 20:14:34 +08:00
    Co-authored-by: laixinn <xielx@shanghaitech.edu.cn>
    Co-authored-by: sleepcoo <sleepcoo@gmail.com>
    Co-authored-by: zhyncs <me@zhyncs.com>
Yubo Wang
804d9f2e4c Add unit test on page_size > 1 and mla and integration test for Flash Attention 3 (#4760) 2025-04-07 23:20:51 -07:00
Lianmin Zheng
9adf178cc2 Fix 2-gpu CI test and suppress some warnings (#4930) 2025-03-30 12:51:44 -07:00
Lianmin Zheng
4ede6770cd Fix retract for page size > 1 (#4914) 2025-03-30 02:57:15 -07:00
Lianmin Zheng
b26bc86b36 Support page size > 1 + eagle (#4908) 2025-03-30 00:46:23 -07:00
chaobo jia
ef9a378a20 [Feature] add multi-rank support for Lora (#4492) 2025-03-28 09:38:44 -07:00
    Co-authored-by: rudy152 <czh1137892874@gmail.com>
fzyzcjy
0d3e3072ee Fix CI of test_patch_torch (#4844) 2025-03-27 21:22:45 -07:00
fzyzcjy
92bb49a7f9 Patch PyTorch's bug that cross-process tensor transfer will lead to wrong device (#4565) 2025-03-27 00:22:33 -07:00
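A listing in the "Author / SHA message date" shape above can be reproduced from any local clone with `git log` format placeholders. A minimal sketch follows; the throwaway repository is created only so the example is self-contained (the author name and commit message are borrowed from the first entry above, and the exact SHA and date in the output will of course differ):

```shell
# Sketch: reproduce an "Author / SHA message date" listing with git log.
# A throwaway repo makes this self-contained; in a real clone you would
# run only the final git log command.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q

# One empty commit standing in for an entry from the log above.
git -c user.name="Chang Su" -c user.email="chang@example.com" \
    commit -q --allow-empty \
    -m "feat: Add warnings for invalid tool_choice and UTs (#6582)"

# %an = author name, %h = abbreviated SHA, %s = subject line,
# %ad = author date, formatted to match the listing above.
git log --date=format:'%Y-%m-%d %H:%M:%S %z' \
    --pretty=format:'%an%n%h %s %ad'
```

The Signed-off-by and Co-authored-by lines shown under some entries are ordinary commit-message trailers, so they appear when the full message body (`%b`) is printed rather than only the subject (`%s`).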