Commit Graph

979 commits (most recent 50 shown)

| Author | SHA1 | Message | Date |
| --- | --- | --- | --- |
| James Xu | f6f713797b | Add support for Qwen2-VL-based embedding models (#2055) | 2024-11-21 14:24:25 -08:00 |
| HAI | f35cb46cc3 | ROCm: Fix MoE padding for none FP8 cases (#2111) | 2024-11-21 12:23:21 -08:00 |
| Jerry Zhang | 7f8fcd39cd | Turn off autotune for scaled mm for fp8 dynamic quant in torchao (#2116) | 2024-11-21 12:19:49 -08:00 |
| Jerry Zhang | 5c6a41facf | Error out when torchao-config option is not recognized (#2107) | 2024-11-20 17:37:28 -08:00 |
| Lianmin Zheng | 722530fa01 | Enable overlap scheduler by default for the triton attention backend (#2105) | 2024-11-20 02:58:35 -08:00 |
| Lianmin Zheng | 3295cd8af2 | Allow skipping warmup in bench_offline_throughput.py (#2103) | 2024-11-20 01:25:21 -08:00 |
| Ying Sheng | 5942dfc00a | [feat] Add session control (#2073) | 2024-11-20 00:36:53 -08:00 |
| Lianmin Zheng | 7d671e4ad2 | Enable overlap by default (#2067) | 2024-11-19 22:07:58 -08:00 |
| Ke Bao | 699384cb01 | Set schedule policy more conservative for DP attention (#2096) | 2024-11-19 20:57:18 -08:00 |
| Lianmin Zheng | ffd20fcd03 | Make constrained decoding work for overlap scheduler (#2095) | 2024-11-19 15:04:43 -08:00 |
| Yineng Zhang | 55bd97f3e5 | minor: add dataset dump and questions shuffle (#2093) | 2024-11-19 14:07:27 -08:00 |
| HAI | e57c3e12b8 | Use native fp8 format on MI300X (#2094) | 2024-11-19 14:06:29 -08:00 |
| Alexander Waitz | 929c7621af | Fix: incorrect top_logprobs in chat completion (#2088) | 2024-11-19 12:21:36 +00:00 |
| Lianmin Zheng | b7a065eae3 | Use cuda event wait and synchronization instead of busy waiting (#2089) | 2024-11-19 00:21:46 -08:00 |
| Lianmin Zheng | b110453802 | Simplify logits penalizer (#2086) | 2024-11-18 17:48:28 -08:00 |
| Lianmin Zheng | 3b44bbeecf | Allow passing extra request body to bench_offline_throughput.py (#2085) | 2024-11-18 14:59:15 -08:00 |
| Lianmin Zheng | 80e2c4a8de | Fix chunked prefill with output logprob (#2083) | 2024-11-18 13:16:28 -08:00 |
| Jani Monoses | 66318ffe96 | Rename layer_idx to layer_id for consistency (#2078) | 2024-11-18 13:00:02 -08:00 |
| Yineng Zhang | 766192610e | feat: update torch 2.5.1 (#2069) | 2024-11-18 21:29:13 +08:00 |
| yukavio | 2a3992b6f1 | support set role as 'tool' (#2075) — Co-authored-by: kavioyu \<kavioyu@tencent.com\> | 2024-11-18 01:06:59 -08:00 |
| Lianmin Zheng | 4af3f889fc | Simplify flashinfer indices update for prefill (#2074) — Co-authored-by: kavioyu \<kavioyu@tencent.com\>, kavioyu \<kavioyu@gmail.com\> | 2024-11-18 00:02:36 -08:00 |
| Lianmin Zheng | df7fe4521a | Crash the CI jobs on model import errors (#2072) | 2024-11-17 22:18:11 -08:00 |
| Lianmin Zheng | 116685337e | Fix cuda illegal memory access in overlap mode (#2070) | 2024-11-17 21:29:30 -08:00 |
| Lianmin Zheng | a9e90b4bce | [Minor] Fix styles for overlap mode (#2068) | 2024-11-17 19:49:20 -08:00 |
| Tanjiro | 8c280cee55 | add phi-3 small support (#2062) — Co-authored-by: Tushar Goel \<114812108+AI-Tushar@users.noreply.github.com\> | 2024-11-17 18:47:43 -08:00 |
| DarkSharpness | 9c745d078e | [Performance] Update xgrammar-related constrained decoding (#2056) | 2024-11-17 16:58:49 -08:00 |
| Lianmin Zheng | ebaa2f3199 | Rename arguments --disable-nan-detection to --enable-nan-detection (#2066) | 2024-11-17 16:53:44 -08:00 |
| Ke Bao | 62832bb272 | Support cuda graph for DP attention (#2061) | 2024-11-17 16:29:20 -08:00 |
| Lianmin Zheng | 11f881d173 | Deprecate --disable-flashinfer and --disable-flashinfer-sampling (#2065) | 2024-11-17 16:20:58 -08:00 |
| Lianmin Zheng | 38625e2139 | Remove monkey_patch_vllm_dummy_weight_loader (#2064) | 2024-11-17 15:48:12 -08:00 |
| Lianmin Zheng | c1f401fc58 | Revert "chore: update torch v2.5.1" (#2063) | 2024-11-17 15:29:38 -08:00 |
| Yineng Zhang | 3b878863f7 | chore: update torch v2.5.1 (#1849) | 2024-11-18 00:06:00 +08:00 |
| Lianmin Zheng | f719d9aebc | Launch dp ranks in parallel (#2053) — Co-authored-by: Haotian Liu \<6631389+haotian-liu@users.noreply.github.com\> | 2024-11-16 17:39:39 -08:00 |
| Lianmin Zheng | edad373135 | Fix illegal memory access in overlap mode & Use more fused triton kernels for building meta data (#2051) | 2024-11-16 16:14:23 -08:00 |
| Ke Bao | 976bc302e5 | Support DP MLA (#1970) | 2024-11-16 09:01:43 +00:00 |
| Lianmin Zheng | 2f2e07439c | Fix weight update for data parallelism (#2050) | 2024-11-16 00:30:39 -08:00 |
| HAI | 2ffe0a7363 | Add get_amdgpu_memory_capacity() (#2049) | 2024-11-15 22:51:48 -08:00 |
| Ke Wen | cf2489762b | Add Tensor Parallel to torch_native_llama (#1876) | 2024-11-15 21:26:00 -08:00 |
| HAI | e5c6715003 | Fix core (MI300X) with --enable-overlap (#2048) | 2024-11-15 21:24:42 -08:00 |
| Lianmin Zheng | 32c9a7ec11 | Release v0.3.5.post2 (#2046) | 2024-11-15 06:54:00 -08:00 |
| Lianmin Zheng | b01df48cf2 | [Fix] Adjust default chunked prefill size and cuda graph max bs according to GPU memory capacity (#2044) | 2024-11-15 06:21:57 -08:00 |
| Lianmin Zheng | c29b98e043 | Fix json benchmark (#2043) | 2024-11-15 05:33:43 -08:00 |
| Lianmin Zheng | 2558d6a675 | Fix the default arguments of bench_offline_throughput.py & simplify detokenizer manager (#2042) | 2024-11-15 05:02:44 -08:00 |
| ws | 29ebe3dff4 | fix: align enable_overlap_scheduler naming between code and docs (#2038) | 2024-11-15 03:39:10 -08:00 |
| zolinthecow | f6dd648620 | Offline LLM Engine Benchmark Throughput (#1968) — Co-authored-by: ByronHsu \<byronhsu1230@gmail.com\> | 2024-11-14 21:59:33 -08:00 |
| Lianmin Zheng | ea53c63bad | Expose no_stop_trim and skip_special_tokens in openai api (#2039) | 2024-11-14 19:09:21 -08:00 |
| Lianmin Zheng | a10d530943 | Fix outlines version (#2036) | 2024-11-14 12:52:40 -08:00 |
| Lianmin Zheng | aae5434bdf | Fix unit tests (#2034) | 2024-11-14 11:08:37 -08:00 |
| Lianmin Zheng | c3eac1b010 | Fix torch.compile for MoE (#2033) | 2024-11-14 01:30:24 -08:00 |
| Patrick Yi | 13ce3e4b5d | Add download_dir ServerArgs property (#2027) | 2024-11-13 23:26:56 -08:00 |