Commit Graph

886 Commits

Author | SHA1 | Message | Date
Chayenne | 6aed0445ed | turn off log (#1895) | 2024-11-03 00:19:12 -07:00
Ran Chen | 146f613405 | Fix incorrect context length for llama3.2-11b (#1873) | 2024-11-02 00:04:50 -07:00
Ke Bao | 16eb33ffe2 | Update vocab embedding deps and add TP switch (#1856) | 2024-10-31 20:13:07 -07:00
Liangsheng Yin | b9fd178f1b | Fix retraction + overlap (#1860) (Co-authored-by: Lianmin Zheng <lianminzheng@gmail.com>) | 2024-10-31 18:27:42 -07:00
HAI | d8e9d61f86 | [Build, ROCm] Dockerfile.rocm for Instinct GPUs, with package updates (#1861) | 2024-10-31 16:38:16 -07:00
Lianmin Zheng | a2e0424abf | Fix memory leak for chunked prefill 2 (#1858) (Co-authored-by: Liangsheng Yin <hnyls2002@gmail.com>) | 2024-10-31 14:51:51 -07:00
geeker-smallwhite | 8ce202a493 | delete unused character (#1855) | 2024-10-31 19:33:55 +08:00
Byron Hsu | 438526a814 | Refactor tokenizer manager (#1846) | 2024-10-30 21:32:18 -07:00
Lianmin Zheng | f7102fbd2b | Fix mixed chunked prefill (#1850) | 2024-10-30 21:20:41 -07:00
Byron Hsu | a7a0a6886b | Make decode log interval configurable (#1847) | 2024-10-30 19:59:20 -07:00
HAI | 2d4ce1b792 | [Performance, Triton Kernel Args] _decode_grouped_softmax_reducev_fwd… (#1845) | 2024-10-30 17:33:36 -07:00
HAI | 5f65e2b830 | [Performance, Hardware] MoE weights padding to AMD MI300x GPUs (#1836) | 2024-10-30 12:17:32 -07:00
Ying Sheng | 4e2af03cfa | [Production] Drain requests before exit when receive SIGTERM (#1838) | 2024-10-30 10:22:56 -07:00
Lianmin Zheng | b548801ddb | Update docs (#1839) | 2024-10-30 02:49:08 -07:00
Chayenne | 539df95d2c | Imporve openai api documents (#1827) (Co-authored-by: Chayenne <zhaochenyang@g.ucla.edu>) | 2024-10-30 00:39:41 -07:00
DanielC12321 | 5e00ddebc0 | Add new model: Gpt2 (#1833) | 2024-10-29 17:52:33 -07:00
HAI | 54dd3ea122 | [FP8 KV Cache, Mixtral] Avoid KeyError at loading pre-quantized FP8 m… (#1835) | 2024-10-29 13:58:03 -07:00
yizhang2077 | d04899d7ca | stop_str of qwen2-vl template should be a tuple not a str (#1834) (Co-authored-by: Byron Hsu <byronhsu1230@gmail.com>) | 2024-10-29 20:30:41 +00:00
Yanyi Liu | 5e6c32657e | Support setting use_thread in the run_program for easier debugging. (#1823) (Co-authored-by: Byron Hsu <byronhsu1230@gmail.com>) | 2024-10-29 06:51:47 +00:00
Byron Hsu | 680cad2023 | fix get_memory_pool_size deadlock for DP (#1830) | 2024-10-28 23:07:14 -07:00
Byron Hsu | 0a24eb850a | Fix update_weights deadlock for DP (#1825) | 2024-10-28 12:02:23 -07:00
Byron Hsu | 6fcd6d7d6d | Support token ids in engine.generate (#1820) | 2024-10-27 14:02:34 -07:00
Ke Bao | c77762d57f | Fix Triton decode kernel & ut (#1819) | 2024-10-27 10:54:38 -07:00
Lianmin Zheng | eaade87a42 | Fix unit tests (#1817) | 2024-10-27 03:04:54 -07:00
Lianmin Zheng | 86fc0d79d0 | Add a watch dog thread (#1816) | 2024-10-27 02:00:50 -07:00
Lianmin Zheng | 86e0dde555 | Improve the user control of new_token_ratio (#1811) | 2024-10-26 16:39:41 -07:00
Lianmin Zheng | 2b80978859 | Provide an argument to set the maximum batch size for cuda graph (#1809) | 2024-10-26 15:09:33 -07:00
Chayenne | ced362f7c6 | Simplify our docs with complicated functions into utils (#1807) (Co-authored-by: Chayenne <zhaochenyang@ucla.edu>) | 2024-10-26 17:44:11 +00:00
Lianmin Zheng | 9084a86445 | Update links (#1805) | 2024-10-26 04:46:01 -07:00
Byron Hsu | c26507484f | fix int conversion for SGLANG_CPU_COUNT (#1803) | 2024-10-26 00:09:44 -07:00
Liangsheng Yin | 07bf2e846a | Allow consecutive ports when launching multiple sglang servers. (#1802) | 2024-10-26 06:43:24 +00:00
Liangsheng Yin | a628dd8e31 | Set ZMQ buffer size heuristic (#1801) | 2024-10-25 23:15:56 -07:00
Liangsheng Yin | 1e8903414a | Fix possible ZMQ hanging (#1800) | 2024-10-25 23:07:07 -07:00
Hui Liu | 9ce8e1a93c | move max_position_embeddings to the last (#1799) | 2024-10-25 19:30:50 -07:00
Lianmin Zheng | fb99aaa527 | [Fix] Fix --skip-tokenizer-init (#1798) | 2024-10-25 18:51:59 -07:00
DarkSharpness | b77a02cdfd | [Performance] Support both xgrammar and outlines for constrained decoding (#1752) | 2024-10-25 21:47:02 +00:00
Lianmin Zheng | 30643fed7f | Release v0.3.4.post2 (#1796) (Co-authored-by: DarkSharpness <76582120+DarkSharpness@users.noreply.github.com>) | 2024-10-25 11:07:19 -07:00
Lianmin Zheng | e646c5901e | Fix logprob in the overlapped mode (#1795) | 2024-10-25 11:06:57 -07:00
Lianmin Zheng | c555ce2ca2 | Revert "Fix memory leak when doing chunked prefill" (#1797) | 2024-10-25 10:24:44 -07:00
Lianmin Zheng | 40900baea7 | [Fix] Fix the log parsing in chunked prefill uni tests (#1794) | 2024-10-25 08:31:08 -07:00
Liangsheng Yin | a2f5e7555f | Fix memory leak when doing chunked prefill (#1787) | 2024-10-25 08:01:17 -07:00
Lianmin Zheng | 2148914e1b | Fix log parsing in the chunked prefill unit tests (#1793) | 2024-10-25 08:00:55 -07:00
yizhang2077 | def55bc876 | Qwen2vl support cuda graph and disable radix cache (#1780) | 2024-10-25 10:45:17 -04:00
Lianmin Zheng | 86a2c473b7 | [Fix] Fix seq_lens_sum for cuda graph runner in padded cases (#1789) | 2024-10-24 21:26:05 -07:00
Lianmin Zheng | 1701b0db31 | Enhance the test case for chunked prefill (#1785) | 2024-10-24 21:23:09 -07:00
Lianmin Zheng | 384d85ba35 | Re-introduce get_cuda_graph_seq_len_fill_value (#1783) | 2024-10-24 13:30:11 -07:00
Xiaoyu Zhang | 605972195b | check user-specified model_max_len with hf derived max_model_len (#1778) | 2024-10-24 12:40:36 -07:00
Lianmin Zheng | fc82f5a743 | [Fix] Fix cuda graph padding for triton attention backend (#1782) | 2024-10-24 12:33:15 -07:00
Lianmin Zheng | 0089c4bc96 | [Fix] Fix NaN issues by fixing the cuda graph padding values for flashinfer (#1779) | 2024-10-24 04:16:59 -07:00
zolinthecow | 72e7b57a75 | [Bug] Catch any errors caused by parsing json schema (#1776) | 2024-10-24 01:54:53 -07:00