Commit Graph

208 Commits

Author SHA1 Message Date
Lianmin Zheng
86fc0d79d0 Add a watch dog thread (#1816) 2024-10-27 02:00:50 -07:00
Lianmin Zheng
2b80978859 Provide an argument to set the maximum batch size for cuda graph (#1809) 2024-10-26 15:09:33 -07:00
Lianmin Zheng
6aa94b967c Update ci workflows (#1804) 2024-10-26 04:32:36 -07:00
Lianmin Zheng
fb99aaa527 [Fix] Fix --skip-tokenizer-init (#1798) 2024-10-25 18:51:59 -07:00
Lianmin Zheng
e646c5901e Fix logprob in the overlapped mode (#1795) 2024-10-25 11:06:57 -07:00
Lianmin Zheng
c555ce2ca2 Revert "Fix memory leak when doing chunked prefill" (#1797) 2024-10-25 10:24:44 -07:00
Lianmin Zheng
40900baea7 [Fix] Fix the log parsing in chunked prefill unit tests (#1794) 2024-10-25 08:31:08 -07:00
Liangsheng Yin
a2f5e7555f Fix memory leak when doing chunked prefill (#1787) 2024-10-25 08:01:17 -07:00
Lianmin Zheng
1701b0db31 Enhance the test case for chunked prefill (#1785) 2024-10-24 21:23:09 -07:00
Lianmin Zheng
05b3bf5e8e Crash the server on warnings in CI (#1772) 2024-10-23 16:27:13 -07:00
Ying Sheng
2fce449b1c [API] add get memory pool size (#1760)
Co-authored-by: Byron Hsu <byronhsu1230@gmail.com>
2024-10-23 07:02:29 +00:00
Lianmin Zheng
ad4125d1a9 Fuse more ops & Simplify token mapping (#1758) 2024-10-22 23:20:43 -07:00
Liangsheng Yin
94cde10920 Llama3.2 vision model support (#1551) 2024-10-21 15:01:21 -07:00
Lianmin Zheng
00611286a1 Fix sliding window attention and gemma-2 unit tests in CI (#1746) 2024-10-21 13:47:12 -07:00
Lianmin Zheng
cf470fea32 Make token mapping non-blocking in the overlapped mode (#1740) 2024-10-20 23:25:14 -07:00
sixgod
45d5af2416 Add GLM-4 TextGeneration Model support for SGLang (#1736) 2024-10-21 04:08:30 +00:00
yizhang2077
554fbf93cd [Bugfix] qwen2vl forward_extend (#1727) 2024-10-20 02:38:35 -07:00
Lianmin Zheng
b48edff67f Split the overlapped version of TpModelWorkerClient into a separate file (#1726) 2024-10-20 00:29:29 -07:00
Lianmin Zheng
593b19f29d Temporarily skip this test_mixed_batch for QWen2VL (#1725) 2024-10-20 00:05:45 -07:00
Yineng Zhang
cbbc82b7b8 Support qwen2 vl model (#1721)
Co-authored-by: yizhang2077 <1109276519@qq.com>
Co-authored-by: ispobock <ISPObaoke@163.com>
2024-10-19 21:44:38 -07:00
Yineng Zhang
8bee20f80b Update vllm to 0.6.3 (#1711) (#1720)
Co-authored-by: Ke Bao <ISPObaoke@163.com>
2024-10-19 20:45:41 -07:00
Gleb Drozdov
a95d5589c3 Add matched_stop token or str to distinguish between eos or stop str finish_reason generation (#1684) 2024-10-17 18:06:52 +00:00
Lianmin Zheng
d17d19e5b8 Fix mixed batch for multi modal models (#1702) 2024-10-17 10:27:26 -07:00
Lianmin Zheng
dd3809fad8 Fix engine unit test (#1701) 2024-10-17 09:53:32 -07:00
Lianmin Zheng
7feba41584 Fix failed ci tests on long prompts; Better error messages for embedding models (#1700) 2024-10-17 09:23:29 -07:00
Lianmin Zheng
30ee36305e Fix the failed unit tests (#1699) 2024-10-17 08:13:29 -07:00
havetc
ecb8bad276 Returning a per request metric for number of cached_tokens read (#1599) 2024-10-16 11:49:22 -07:00
Lianmin Zheng
9116b2896f Add a new event loop (#1677) 2024-10-16 01:33:20 -07:00
Jani Monoses
a5114b6f91 Add OLMo model (#1676) 2024-10-16 00:11:18 -07:00
Shuo Yang
061e546313 Support double sparsity (#1459) 2024-10-14 02:00:41 -07:00
Lianmin Zheng
0c1e87964b Move filter_batch out of stream_output (#1663) 2024-10-14 01:15:34 -07:00
Lianmin Zheng
869f1c02c4 Add a test case to test retract (#1662) 2024-10-13 20:32:37 -07:00
Lianmin Zheng
dafb6a5266 [Fix] Fix the style of test_large_max_new_tokens.py (#1638) 2024-10-11 16:05:58 -07:00
Byron Hsu
862cd265e5 [engine] support async and streaming (#1614) 2024-10-11 15:26:25 -07:00
Lianmin Zheng
5d09ca5735 Fix constrained decoding (#1634) 2024-10-11 06:26:20 -07:00
Lianmin Zheng
aba9eae4c6 Fix the correctness test in bench_latency.py when tp > 1 and test_generation_models.py (#1631) 2024-10-11 05:03:20 -07:00
Byron Hsu
e8613df071 [Engine] Fix generate hanging issue after the first call (#1606) 2024-10-08 04:26:56 +00:00
Ke Bao
68f8b60d22 Fix chunked prefill condition (#1594) 2024-10-07 06:34:14 +00:00
Byron Hsu
551a3a9d38 Provide an offline engine API (#1567) 2024-10-06 20:27:03 -07:00
Byron Hsu
17e998f1a8 Test consistency for single and batch separately (#1590) 2024-10-06 22:02:27 +00:00
Ying Sheng
c98e84c21e [Minor, Performance] Use torch.argmax for greedy sampling (#1589) 2024-10-06 13:15:05 -07:00
Lianmin Zheng
9244f27f0a [Minor] Improve the style and fix flaky tests (#1584) 2024-10-06 00:10:48 -07:00
Byron Hsu
2422de5193 Support min_tokens in sgl.gen (#1573) 2024-10-05 21:51:12 -07:00
Ying Sheng
04b262cd91 [Fix] Fix major performance bug in certain cases (#1563)
Co-authored-by: hnyls2002 <hnyls2002@gmail.com>
2024-10-04 08:51:11 +00:00
Lianmin Zheng
32eb6e96f2 Organize sampling batch info better (#1562) 2024-10-03 18:29:49 -07:00
Minsang Song
e6852b0dd2 [Fix] Fix AttributeError in Qwen2.5 LoRA: 'Qwen2ForCausalLM' object has no attribute 'get_hidden_dim' (#1536)
Co-authored-by: Ying Sheng <sqy1415@gmail.com>
2024-10-02 20:41:15 -07:00
Theresa Barton
2c7d0a5b8b [Fix] Fix all the Huggingface paths (#1553) 2024-10-02 10:12:07 -07:00
Liangsheng Yin
99ec439da4 Organize Attention Backends (#1547) 2024-09-30 15:54:18 -07:00
Ying Sheng
0f4fb19bc8 [Fix, LoRA] fix LoRA with updates in main (#1545) 2024-09-30 10:06:08 -07:00
Lianmin Zheng
3f0fe08d37 Let ModelRunner take InputMetadata as input, instead of ScheduleBatch (#1541) 2024-09-29 20:28:45 -07:00
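A listing in the Author / SHA1 / Message / Date shape shown above can be reproduced locally from a checkout of the repository. A minimal sketch using `git log` pretty-format placeholders (`%an` author name, `%h` abbreviated SHA, `%s` subject, `%ad` author date); the exact date format of the web UI is an assumption:

```shell
# Emit two lines per commit: the author name, then the
# abbreviated SHA, commit subject, and author date.
git log \
    --date=format:'%Y-%m-%d %H:%M:%S %z' \
    --pretty=format:'%an%n%h %s %ad'
```

Pipe the output through `head` to limit it to the most recent commits, or add `--reverse` to walk the history oldest-first.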