Commit Graph

186 Commits

Author SHA1 Message Date
Lifu Huang
2373faa317 Fix flakiness in LoRA batch test. (#7552) 2025-06-27 19:51:43 -07:00
Ata Fatahi
031f64aa1b Add e2e test for multi instance multi stage memory release/resume occupation (#7208)
Signed-off-by: Ata Fatahi <immrata@gmail.com>
2025-06-26 17:40:38 -07:00
Chang Su
fa42e41962 ci: Revert openai_server related tests in AMD suites (#7449) 2025-06-23 15:28:22 -07:00
Chang Su
b7a2df0a44 refactor(test): reorganize OpenAI test file structure (#7408) 2025-06-21 19:37:48 -07:00
Chang Su
72676cd6c0 feat(oai refactor): Replace openai_api with entrypoints/openai (#7351)
Co-authored-by: Jin Pan <jpan236@wisc.edu>
2025-06-21 13:21:06 -07:00
Ata Fatahi
1ab6be1b26 Purge VerlEngine (#7326)
Signed-off-by: Ata Fatahi <immrata@gmail.com>
2025-06-19 23:47:21 -07:00
Stefan He
3774f07825 Multi-Stage Awake: Support Resume and Pause KV Cache and Weights separately (#7099) 2025-06-19 00:56:37 -07:00
Jinn
ffd1a26e09 Add more refactored openai test & in CI (#7284) 2025-06-18 13:52:55 -07:00
YanbingJiang
094c116f7d Update python API of activation, topk, norm and rope and remove vllm dependency (#6614)
Co-authored-by: Wu, Chunyuan <chunyuan.wu@intel.com>
Co-authored-by: jianan-gu <jianan.gu@intel.com>
Co-authored-by: sdp <sdp@gnr799219.jf.intel.com>
2025-06-17 22:11:50 -07:00
woodx
e30ef368ab Feat/support rerank (#6058) 2025-06-16 10:50:01 -07:00
Lianmin Zheng
ba589b88fc Improve test cases for eagle infer (#7173) 2025-06-13 22:25:13 -07:00
Lianmin Zheng
0fc3d992bb Split the eagle test into two files (#7170) 2025-06-13 20:14:26 -07:00
Baizhou Zhang
2a5f0100e0 Fix GGUF and add back test_gguf.py (#7067) 2025-06-10 21:07:20 -07:00
kyle-pena-kuzco
b56de8f943 OpenAI API hidden states (#6716) 2025-06-10 14:37:29 -07:00
Yineng Zhang
2f58445531 Revert "Add sanity checks when a test file is not added to CI (#6947)" (#7063) 2025-06-10 12:43:25 -07:00
fzyzcjy
fe55947acd Add sanity checks when a test file is not added to CI (#6947) 2025-06-10 12:34:57 -07:00
Yineng Zhang
56ccd3c22c chore: upgrade flashinfer v0.2.6.post1 jit (#6958)
Co-authored-by: alcanderian <alcanderian@gmail.com>
Co-authored-by: Qiaolin Yu <qy254@cornell.edu>
Co-authored-by: Baizhou Zhang <sobereddiezhang@gmail.com>
Co-authored-by: Mick <mickjagger19@icloud.com>
Co-authored-by: ispobock <ispobaoke@gmail.com>
2025-06-09 09:22:39 -07:00
Sai Enduri
2c18642502 Enable more unit tests for AMD CI. (#6983) 2025-06-08 19:41:55 -07:00
Hubert Lu
4740288303 [AMD] Add more tests to per-commit-amd (#6926) 2025-06-08 01:08:37 -07:00
Zaili Wang
562f279a2d [CPU] enable CI for PRs, add Dockerfile and auto build task (#6458)
Co-authored-by: diwei sun <diwei.sun@intel.com>
Co-authored-by: Yineng Zhang <me@zhyncs.com>
2025-06-05 13:43:54 -07:00
Chang Su
8b2474898b bugfix(OAI): Fix image_data processing for jinja chat templates (#6877) 2025-06-05 13:37:01 -07:00
Marc Sun
37f1547587 [FEAT] Add transformers backend support (#5929) 2025-06-03 21:05:29 -07:00
Lianmin Zheng
2d72fc47cf Improve profiler and integrate profiler in bench_one_batch_server (#6787) 2025-05-31 15:53:55 -07:00
Jianan Ji
22630ca242 Support sliding window in triton backend (#6509) 2025-05-30 01:11:53 -07:00
Chang Su
41ba767f0c feat: Add warnings for invalid tool_choice and UTs (#6582) 2025-05-27 16:53:19 -07:00
Junrong Lin
2103b80607 [CI] update verlengine ci to 4-gpu test (#6007) 2025-05-27 14:32:23 -07:00
Xinyuan Tong
681fdc264b Refactor vlm embedding routine to use precomputed feature (#6543)
Signed-off-by: Xinyuan Tong <justinning0323@outlook.com>
2025-05-24 18:39:21 -07:00
Chang Su
ed0c3035cd feat(Tool Calling): Support required and specific function mode (#6550) 2025-05-23 21:00:37 -07:00
Byron Hsu
d2e0881a34 [PD] support spec decode (#6507)
Co-authored-by: SangBin Cho <rkooo567@gmail.com>
2025-05-23 12:03:05 -07:00
Yineng Zhang
0b07c4a99f chore: upgrade sgl-kernel v0.1.4 (#6532) 2025-05-22 13:28:16 -07:00
fzyzcjy
f11481b921 Add 4-GPU runner tests and split existing tests (#6383) 2025-05-18 11:56:51 -07:00
Sai Enduri
73eb67c087 Enable unit tests for AMD CI. (#6283) 2025-05-14 12:55:36 -07:00
shangmingc
f1c896007a [PD] Add support for different TP sizes per DP rank (#5922)
Signed-off-by: Shangming Cai <caishangming@linux.alibaba.com>
2025-05-12 13:55:42 -07:00
shangmingc
3ee40ff919 [CI] Re-enable pd disaggregation test (#6231)
Signed-off-by: Shangming Cai <caishangming@linux.alibaba.com>
2025-05-12 10:09:12 -07:00
Lianmin Zheng
fba8eccd7e Log if cuda graph is used & extend cuda graph capture to cuda-graph-max-bs (#6201)
Co-authored-by: SangBin Cho <rkooo567@gmail.com>
2025-05-12 00:17:33 -07:00
Lianmin Zheng
03227c5fa6 [CI] Reorganize the 8 gpu tests (#6192) 2025-05-11 10:55:06 -07:00
Lianmin Zheng
17c36c5511 [CI] Disabled deepep tests temporarily because it takes too much time. (#6186) 2025-05-10 23:40:50 -07:00
shangmingc
31d1f6e7f4 [PD] Add simple unit test for disaggregation feature (#5654)
Signed-off-by: Shangming Cai <caishangming@linux.alibaba.com>
2025-05-11 13:35:27 +08:00
Yineng Zhang
66fc63d6b1 Revert "feat: add thinking_budget (#6089)" (#6181) 2025-05-10 16:07:45 -07:00
thyecust
63484f9fd6 feat: add thinking_budget (#6089) 2025-05-09 08:22:09 -07:00
Stefan He
24c13ca950 Clean up fa3 test from 8 gpus (#6105) 2025-05-07 18:38:40 -07:00
Jinyan Chen
8a828666a3 Add DeepEP to CI PR Test (#5655)
Co-authored-by: Jinyan Chen <jinyanc@nvidia.com>
2025-05-06 17:36:03 -07:00
Baizhou Zhang
bdd17998e6 [Fix] Fix and rename flashmla CI test (#6045) 2025-05-06 13:25:15 -07:00
Huapeng Zhou
b8559764f6 [Test] Add flashmla attention backend test (#5587) 2025-05-05 10:32:02 -07:00
mlmz
256c4c2519 fix: correct stream response when enable_thinking is set to false (#5881) 2025-04-30 19:44:37 -07:00
Ying Sheng
11383cec3c [PP] Add pipeline parallelism (#5724) 2025-04-30 18:18:07 -07:00
saienduri
e3a5304475 Add AMD MI300x Nightly Testing. (#5861) 2025-04-29 17:34:32 -07:00
Chang Su
2b06484bd1 feat: support pythonic tool call and index in tool call streaming (#5725) 2025-04-29 17:30:44 -07:00
Chang Su
9419e75d60 [CI] Add test_function_calling.py to run_suite.py (#5896) 2025-04-29 15:54:53 -07:00
Qiaolin Yu
8c0cfca87d Feat: support cuda graph for LoRA (#4115)
Co-authored-by: Beichen Ma <mabeichen12@gmail.com>
2025-04-28 23:30:44 -07:00