Commit Graph

348 Commits

Author | SHA1 | Message | Date
Ying Sheng | 83d2b30d75 | format | 2024-07-24 10:53:07 +00:00
Ying Sheng | 4367f4bb8d | Fix prefill size (#711) | 2024-07-24 03:41:15 -07:00
Lianmin Zheng | 00e4baa728 | Update schedule_heuristic.py | 2024-07-24 01:22:30 -07:00
Liangsheng Yin | 4cd64b8ee6 | Auto adjust new ratio (#708) | 2024-07-23 22:06:02 -07:00
Lianmin Zheng | 01d66ae2e8 | Fix multi-node deadlock (#709) | 2024-07-23 21:53:36 -07:00
Mingyi | a523a3c13a | Reduce hardcoded logic of kernel usage (#707) | 2024-07-23 16:42:21 -07:00
Ying Sheng | 9f94728f5a | bump version to 0.1.23 (#706) | 2024-07-23 13:53:19 -07:00
Ying Sheng | 444a02441a | Update vllm version to support llama3.1 (#705) | 2024-07-23 13:49:34 -07:00
zhyncs | fa7ccb3316 | feat: add e2e latency (#704) | 2024-07-24 05:51:10 +10:00
Liangsheng Yin | 268684439b | Use min new token ratio at start (#701) | 2024-07-23 11:52:50 -07:00
Ke Bao | 824a77d04d | Fix hf config loading (#702) | 2024-07-23 11:39:08 -07:00
Ying Sheng | cf99eab7d5 | Fix flashinfer (#700) | 2024-07-23 01:27:01 -07:00
zhyncs | 9fdea29d05 | misc: fix typo (#698) | 2024-07-23 02:00:27 +10:00
Ying Sheng | df7c4c19b4 | Fix trt benchmark (#697) | 2024-07-22 23:32:41 +10:00
Ying Sheng | c3f1aac811 | Tune params (#696) | 2024-07-22 03:19:24 -07:00
zhyncs | d198791fe8 | misc: update output token logic (#695) | 2024-07-22 19:34:05 +10:00
zhyncs | c07526e46c | fix: update bench serving (#694) | 2024-07-22 18:23:33 +10:00
Ke Bao | 5303c1ed22 | Support Mistral-Nemo (#691) | 2024-07-22 03:36:53 +10:00
zhyncs | 65bd13386b | misc: recommend to use chat model for benchmark (#690) | 2024-07-22 00:13:33 +10:00
Liangsheng Yin | eedc12e12e | Support Deepseek MoE Model (#689) | 2024-07-21 03:09:29 -07:00
zhyncs | 6a846bb1fd | misc: update output file logic (#686) | 2024-07-21 18:07:30 +10:00
zhyncs | 0fdb3127a1 | feat: update bench serving (#685) | 2024-07-21 16:46:58 +10:00
Max Shawabkeh | 5ad033a070 | Fix StreamExecutor.fork() losing the current role start index. (#684) | 2024-07-20 23:32:11 -07:00
Lianmin Zheng | 77e592e8e0 | support non-streaming benchmark (#682) | 2024-07-20 18:36:42 -07:00
Liangsheng Yin | caaad53b52 | Support gpt-bigcode model class (#681) | 2024-07-20 18:34:37 -07:00
Liangsheng Yin | 69d19188fc | Decouple kv (#679) | 2024-07-20 14:16:45 -07:00
zhyncs | 4b4a67f814 | feat: support TRT LLM benchmark and multiple benchmarks (#670) | 2024-07-20 11:05:35 -07:00
Ke Bao | 0ac94c36cb | Fallback when sampling failed (#678) | 2024-07-20 10:44:54 -07:00
Ying Sheng | 2b4c646277 | Update version to 0.1.22 (#677) | 2024-07-20 03:39:50 -07:00
Liangsheng Yin | f424e76d96 | Fix illegal tokens during sampling (#676) | 2024-07-20 03:11:15 -07:00
Lianmin Zheng | 490a1f39dd | Fix cuda graph with flashinfer (#675) | 2024-07-20 02:43:55 -07:00
Ying Sheng | 06487f126e | refactor model loader: initial refactor (#664) | 2024-07-20 02:18:22 -07:00
Liangsheng Yin | 39c57317e1 | Revert "Temporary fix invalid sample results" (#673) | 2024-07-20 02:06:31 -07:00
Lianmin Zheng | 9592a1f3bd | Fix random dataset (#671) | 2024-07-20 01:57:43 -07:00
Lianmin Zheng | 35759efa91 | Support random dataset in bench_serving.py (#669) | 2024-07-20 01:06:43 -07:00
Liangsheng Yin | 8f4b1559e7 | Temporary fix invalid sample results (#668) | 2024-07-20 00:51:05 -07:00
Mingyi | e3046ea3a8 | Update OpenAI API (#667) | 2024-07-19 23:20:54 -07:00
yichuan~ | 49c5e0eca9 | Add support for OpenAI API parallel sampling (#640) | 2024-07-19 23:10:01 -07:00
Ke Bao | ec2150b294 | Fix kill process util (#666) | 2024-07-19 21:43:11 -07:00
Liangsheng Yin | 7620cd37dd | Fix jump forward when streaming (#665) | 2024-07-19 16:42:06 -07:00
Ying Sheng | 11c8efff73 | Add benchmark instructions (#663) | 2024-07-19 11:12:23 -07:00
Ying Sheng | e87c7fd501 | Improve docs (#662) | 2024-07-19 10:58:03 -07:00
zhyncs | 630479c3a6 | feat: update check env (#661) | 2024-07-19 09:54:15 -07:00
Ying Sheng | 51fda1439f | Update Readme (#660) (Co-authored-by: Lianmin Zheng <lianminzheng@gmail.com>) | 2024-07-19 09:54:01 -07:00
zhyncs | dc4e4a6acc | misc: update SGLang package description (#659) | 2024-07-19 09:27:39 -07:00
Ying Sheng | 2d96da813e | refactor model loader [unreachable code]: initial refactor (#655) | 2024-07-19 09:27:06 -07:00
zhyncs | c126a6ccba | feat: add benchmark serving (#657) | 2024-07-19 09:15:21 -07:00
zhyncs | ac971ff633 | perf: reduce ttft and itl with stream_interval 1 (#658) | 2024-07-19 09:14:22 -07:00
Lianmin Zheng | e1792cca24 | Remove cached triton launcher (#656) | 2024-07-18 23:28:40 -07:00
shrirajh | 1b7adbb5a0 | TokenizerManager.context_len should inherit from `server_args.conte… (#654) | 2024-07-18 21:55:29 -07:00