Commit Graph

857 Commits

| Author | SHA1 | Message | Date |
|---|---|---|---|
| Byron Hsu | `c26507484f` | fix int conversion for SGLANG_CPU_COUNT (#1803) | 2024-10-26 00:09:44 -07:00 |
| Liangsheng Yin | `07bf2e846a` | Allow consecutive ports when launching multiple sglang servers. (#1802) | 2024-10-26 06:43:24 +00:00 |
| Liangsheng Yin | `a628dd8e31` | Set ZMQ buffer size heuristic (#1801) | 2024-10-25 23:15:56 -07:00 |
| Liangsheng Yin | `1e8903414a` | Fix possible ZMQ hanging (#1800) | 2024-10-25 23:07:07 -07:00 |
| Hui Liu | `9ce8e1a93c` | move max_position_embeddings to the last (#1799) | 2024-10-25 19:30:50 -07:00 |
| Lianmin Zheng | `fb99aaa527` | [Fix] Fix --skip-tokenizer-init (#1798) | 2024-10-25 18:51:59 -07:00 |
| DarkSharpness | `b77a02cdfd` | [Performance] Support both xgrammar and outlines for constrained decoding (#1752) | 2024-10-25 21:47:02 +00:00 |
| Lianmin Zheng | `30643fed7f` | Release v0.3.4.post2 (#1796)<br>Co-authored-by: DarkSharpness | 2024-10-25 11:07:19 -07:00 |
| Lianmin Zheng | `e646c5901e` | Fix logprob in the overlapped mode (#1795) | 2024-10-25 11:06:57 -07:00 |
| Lianmin Zheng | `c555ce2ca2` | Revert "Fix memory leak when doing chunked prefill" (#1797) | 2024-10-25 10:24:44 -07:00 |
| Lianmin Zheng | `40900baea7` | [Fix] Fix the log parsing in chunked prefill uni tests (#1794) | 2024-10-25 08:31:08 -07:00 |
| Liangsheng Yin | `a2f5e7555f` | Fix memory leak when doing chunked prefill (#1787) | 2024-10-25 08:01:17 -07:00 |
| Lianmin Zheng | `2148914e1b` | Fix log parsing in the chunked prefill unit tests (#1793) | 2024-10-25 08:00:55 -07:00 |
| yizhang2077 | `def55bc876` | Qwen2vl support cuda graph and disable radix cache (#1780) | 2024-10-25 10:45:17 -04:00 |
| Lianmin Zheng | `86a2c473b7` | [Fix] Fix seq_lens_sum for cuda graph runner in padded cases (#1789) | 2024-10-24 21:26:05 -07:00 |
| Lianmin Zheng | `1701b0db31` | Enhance the test case for chunked prefill (#1785) | 2024-10-24 21:23:09 -07:00 |
| Lianmin Zheng | `384d85ba35` | Re-introduce get_cuda_graph_seq_len_fill_value (#1783) | 2024-10-24 13:30:11 -07:00 |
| Xiaoyu Zhang | `605972195b` | check user-specified model_max_len with hf derived max_model_len (#1778) | 2024-10-24 12:40:36 -07:00 |
| Lianmin Zheng | `fc82f5a743` | [Fix] Fix cuda graph padding for triton attention backend (#1782) | 2024-10-24 12:33:15 -07:00 |
| Lianmin Zheng | `0089c4bc96` | [Fix] Fix NaN issues by fixing the cuda graph padding values for flashinfer (#1779) | 2024-10-24 04:16:59 -07:00 |
| zolinthecow | `72e7b57a75` | [Bug] Catch any errors caused by parsing json schema (#1776) | 2024-10-24 01:54:53 -07:00 |
| Lianmin Zheng | `87a7cfa080` | Fix MockTokenizer in the unit tests (#1774) | 2024-10-23 17:47:05 -07:00 |
| Lianmin Zheng | `8f8f96a621` | Fix the perf regression due to additional_stop_token_ids (#1773) | 2024-10-23 16:45:21 -07:00 |
| Lianmin Zheng | `05b3bf5e8e` | Crash the server on warnings in CI (#1772) | 2024-10-23 16:27:13 -07:00 |
| Liangsheng Yin | `3f5ac88d02` | Fix out of memory message. (#1771) | 2024-10-23 15:20:39 -07:00 |
| Lianmin Zheng | `0d800090b4` | Fix missing additional_stop_token_ids (#1769) | 2024-10-23 12:18:59 -07:00 |
| Lianmin Zheng | `80a905475d` | Fix stop condition for `<\|eom_id\|>` (#1766) | 2024-10-23 10:47:12 -07:00 |
| Lianmin Zheng | `9af7b88e3c` | [Fix] Fix abort in dp (#1767) | 2024-10-23 10:46:29 -07:00 |
| Lianmin Zheng | `fbcbb26327` | Fix perf regression for set_kv_buffer (#1765) | 2024-10-23 09:57:08 -07:00 |
| Ying Sheng | `2fce449b1c` | [API] add get memory pool size (#1760)<br>Co-authored-by: Byron Hsu | 2024-10-23 07:02:29 +00:00 |
| Lianmin Zheng | `ad4125d1a9` | Fuse more ops & Simplify token mapping (#1758) | 2024-10-22 23:20:43 -07:00 |
| Byron Hsu | `17536e7e3d` | Fix edge case for truncated (#1747) | 2024-10-23 00:00:25 -04:00 |
| Lianmin Zheng | `1f26e8b8e4` | Release v0.3.4.post1 (#1749) | 2024-10-21 21:16:43 -07:00 |
| Liangsheng Yin | `5e1558f1f2` | Update max_req_len and max_req_input_len (#1748) | 2024-10-21 16:12:04 -07:00 |
| Liangsheng Yin | `94cde10920` | Llama3.2 vision model support (#1551) | 2024-10-21 15:01:21 -07:00 |
| Lianmin Zheng | `00611286a1` | Fix sliding window attention and gemma-2 unit tests in CI (#1746) | 2024-10-21 13:47:12 -07:00 |
| Lianmin Zheng | `7ce3606891` | Faster overlap mode scheduler (#1738) | 2024-10-21 04:30:52 -07:00 |
| Liangsheng Yin | `efb099cdee` | Fix prefill oom (#1743) | 2024-10-21 03:54:35 -07:00 |
| Lianmin Zheng | `09603c6dc9` | Maintain seq_lens_sum to make more FlashInfer operations non-blocking (#1741) | 2024-10-21 01:43:16 -07:00 |
| Lianmin Zheng | `cf470fea32` | Make token mapping non-blocking in the overlapped mode (#1740) | 2024-10-20 23:25:14 -07:00 |
| sixgod | `45d5af2416` | Add GLM-4 TextGeneration Model support for SGLang (#1736) | 2024-10-21 04:08:30 +00:00 |
| Lianmin Zheng | `b121bc03a3` | Simplify batch result resolution (#1735) | 2024-10-20 19:47:14 -07:00 |
| Lianmin Zheng | `e12358dc91` | Simplify the usage of device (#1734) | 2024-10-20 18:17:41 -07:00 |
| yizhang2077 | `554fbf93cd` | [Bugfix] qwen2vl forward_extend (#1727) | 2024-10-20 02:38:35 -07:00 |
| Lianmin Zheng | `b48edff67f` | Split the overlapped version of TpModelWorkerClient into a separate file (#1726) | 2024-10-20 00:29:29 -07:00 |
| Lianmin Zheng | `59cbf47626` | Unify the memory pool api and tp worker API (#1724) | 2024-10-19 23:19:26 -07:00 |
| Yineng Zhang | `cbbc82b7b8` | Support qwen2 vl model (#1721)<br>Co-authored-by: yizhang2077<br>Co-authored-by: ispobock | 2024-10-19 21:44:38 -07:00 |
| Yineng Zhang | `8bee20f80b` | Update vllm to 0.6.3 (#1711) (#1720)<br>Co-authored-by: Ke Bao | 2024-10-19 20:45:41 -07:00 |
| Lianmin Zheng | `12cad0feae` | Simplify the interface of tp_worker (#1718) | 2024-10-19 17:39:38 -07:00 |
| Lianmin Zheng | `b6cd903604` | Update readme and workflow (#1716) | 2024-10-19 13:01:44 -07:00 |