yizhang2077 | d04899d7ca | stop_str of qwen2-vl template should be a tuple not a str (#1834) | Co-authored-by: Byron Hsu <byronhsu1230@gmail.com> | 2024-10-29 20:30:41 +00:00
Yanyi Liu | 5e6c32657e | Support setting use_thread in the run_program for easier debugging. (#1823) | Co-authored-by: Byron Hsu <byronhsu1230@gmail.com> | 2024-10-29 06:51:47 +00:00
Byron Hsu | 680cad2023 | fix get_memory_pool_size deadlock for DP (#1830) | 2024-10-28 23:07:14 -07:00
Byron Hsu | 0a24eb850a | Fix update_weights deadlock for DP (#1825) | 2024-10-28 12:02:23 -07:00
Byron Hsu | 6fcd6d7d6d | Support token ids in engine.generate (#1820) | 2024-10-27 14:02:34 -07:00
Ke Bao | c77762d57f | Fix Triton decode kernel & ut (#1819) | 2024-10-27 10:54:38 -07:00
Lianmin Zheng | eaade87a42 | Fix unit tests (#1817) | 2024-10-27 03:04:54 -07:00
Lianmin Zheng | 86fc0d79d0 | Add a watch dog thread (#1816) | 2024-10-27 02:00:50 -07:00
Lianmin Zheng | 86e0dde555 | Improve the user control of new_token_ratio (#1811) | 2024-10-26 16:39:41 -07:00
Lianmin Zheng | 2b80978859 | Provide an argument to set the maximum batch size for cuda graph (#1809) | 2024-10-26 15:09:33 -07:00
Chayenne | ced362f7c6 | Simplify our docs with complicated functions into utils (#1807) | Co-authored-by: Chayenne <zhaochenyang@ucla.edu> | 2024-10-26 17:44:11 +00:00
Lianmin Zheng | 9084a86445 | Update links (#1805) | 2024-10-26 04:46:01 -07:00
Byron Hsu | c26507484f | fix int conversion for SGLANG_CPU_COUNT (#1803) | 2024-10-26 00:09:44 -07:00
Liangsheng Yin | 07bf2e846a | Allow consecutive ports when launching multiple sglang servers. (#1802) | 2024-10-26 06:43:24 +00:00
Liangsheng Yin | a628dd8e31 | Set ZMQ buffer size heuristic (#1801) | 2024-10-25 23:15:56 -07:00
Liangsheng Yin | 1e8903414a | Fix possible ZMQ hanging (#1800) | 2024-10-25 23:07:07 -07:00
Hui Liu | 9ce8e1a93c | move max_position_embeddings to the last (#1799) | 2024-10-25 19:30:50 -07:00
Lianmin Zheng | fb99aaa527 | [Fix] Fix --skip-tokenizer-init (#1798) | 2024-10-25 18:51:59 -07:00
DarkSharpness | b77a02cdfd | [Performance] Support both xgrammar and outlines for constrained decoding (#1752) | 2024-10-25 21:47:02 +00:00
Lianmin Zheng | 30643fed7f | Release v0.3.4.post2 (#1796) | Co-authored-by: DarkSharpness <76582120+DarkSharpness@users.noreply.github.com> | 2024-10-25 11:07:19 -07:00
Lianmin Zheng | e646c5901e | Fix logprob in the overlapped mode (#1795) | 2024-10-25 11:06:57 -07:00
Lianmin Zheng | c555ce2ca2 | Revert "Fix memory leak when doing chunked prefill" (#1797) | 2024-10-25 10:24:44 -07:00
Lianmin Zheng | 40900baea7 | [Fix] Fix the log parsing in chunked prefill uni tests (#1794) | 2024-10-25 08:31:08 -07:00
Liangsheng Yin | a2f5e7555f | Fix memory leak when doing chunked prefill (#1787) | 2024-10-25 08:01:17 -07:00
Lianmin Zheng | 2148914e1b | Fix log parsing in the chunked prefill unit tests (#1793) | 2024-10-25 08:00:55 -07:00
yizhang2077 | def55bc876 | Qwen2vl support cuda graph and disable radix cache (#1780) | 2024-10-25 10:45:17 -04:00
Lianmin Zheng | 86a2c473b7 | [Fix] Fix seq_lens_sum for cuda graph runner in padded cases (#1789) | 2024-10-24 21:26:05 -07:00
Lianmin Zheng | 1701b0db31 | Enhance the test case for chunked prefill (#1785) | 2024-10-24 21:23:09 -07:00
Lianmin Zheng | 384d85ba35 | Re-introduce get_cuda_graph_seq_len_fill_value (#1783) | 2024-10-24 13:30:11 -07:00
Xiaoyu Zhang | 605972195b | check user-specified model_max_len with hf derived max_model_len (#1778) | 2024-10-24 12:40:36 -07:00
Lianmin Zheng | fc82f5a743 | [Fix] Fix cuda graph padding for triton attention backend (#1782) | 2024-10-24 12:33:15 -07:00
Lianmin Zheng | 0089c4bc96 | [Fix] Fix NaN issues by fixing the cuda graph padding values for flashinfer (#1779) | 2024-10-24 04:16:59 -07:00
zolinthecow | 72e7b57a75 | [Bug] Catch any errors caused by parsing json schema (#1776) | 2024-10-24 01:54:53 -07:00
Lianmin Zheng | 87a7cfa080 | Fix MockTokenizer in the unit tests (#1774) | 2024-10-23 17:47:05 -07:00
Lianmin Zheng | 8f8f96a621 | Fix the perf regression due to additional_stop_token_ids (#1773) | 2024-10-23 16:45:21 -07:00
Lianmin Zheng | 05b3bf5e8e | Crash the server on warnings in CI (#1772) | 2024-10-23 16:27:13 -07:00
Liangsheng Yin | 3f5ac88d02 | Fix out of memory message. (#1771) | 2024-10-23 15:20:39 -07:00
Lianmin Zheng | 0d800090b4 | Fix missing additional_stop_token_ids (#1769) | 2024-10-23 12:18:59 -07:00
Lianmin Zheng | 80a905475d | Fix stop condition for <|eom_id|> (#1766) | 2024-10-23 10:47:12 -07:00
Lianmin Zheng | 9af7b88e3c | [Fix] Fix abort in dp (#1767) | 2024-10-23 10:46:29 -07:00
Lianmin Zheng | fbcbb26327 | Fix perf regression for set_kv_buffer (#1765) | 2024-10-23 09:57:08 -07:00
Ying Sheng | 2fce449b1c | [API] add get memory pool size (#1760) | Co-authored-by: Byron Hsu <byronhsu1230@gmail.com> | 2024-10-23 07:02:29 +00:00
Lianmin Zheng | ad4125d1a9 | Fuse more ops & Simplify token mapping (#1758) | 2024-10-22 23:20:43 -07:00
Byron Hsu | 17536e7e3d | Fix edge case for truncated (#1747) | 2024-10-23 00:00:25 -04:00
Lianmin Zheng | 1f26e8b8e4 | Release v0.3.4.post1 (#1749) | 2024-10-21 21:16:43 -07:00
Liangsheng Yin | 5e1558f1f2 | Update max_req_len and max_req_input_len (#1748) | 2024-10-21 16:12:04 -07:00
Liangsheng Yin | 94cde10920 | Llama3.2 vision model support (#1551) | 2024-10-21 15:01:21 -07:00
Lianmin Zheng | 00611286a1 | Fix sliding window attention and gemma-2 unit tests in CI (#1746) | 2024-10-21 13:47:12 -07:00
Lianmin Zheng | 7ce3606891 | Faster overlap mode scheduler (#1738) | 2024-10-21 04:30:52 -07:00
Liangsheng Yin | efb099cdee | Fix prefill oom (#1743) | 2024-10-21 03:54:35 -07:00