Commit Graph

960 Commits

Author | SHA1 | Message | Date

yukavio | 2a3992b6f1 | support set role as 'tool' (#2075) | 2024-11-18 01:06:59 -08:00
    Co-authored-by: kavioyu <kavioyu@tencent.com>
Lianmin Zheng | 4af3f889fc | Simplify flashinfer indices update for prefill (#2074) | 2024-11-18 00:02:36 -08:00
    Co-authored-by: kavioyu <kavioyu@tencent.com>
    Co-authored-by: kavioyu <kavioyu@gmail.com>
Lianmin Zheng | df7fe4521a | Crash the CI jobs on model import errors (#2072) | 2024-11-17 22:18:11 -08:00
Lianmin Zheng | 116685337e | Fix cuda illegal memory access in overlap mode (#2070) | 2024-11-17 21:29:30 -08:00
Lianmin Zheng | a9e90b4bce | [Minor] Fix styles for overlap mode (#2068) | 2024-11-17 19:49:20 -08:00
Tanjiro | 8c280cee55 | add phi-3 small support (#2062) | 2024-11-17 18:47:43 -08:00
    Co-authored-by: Tushar Goel <114812108+AI-Tushar@users.noreply.github.com>
DarkSharpness | 9c745d078e | [Performance] Update xgrammar-related constrained decoding (#2056) | 2024-11-17 16:58:49 -08:00
Lianmin Zheng | ebaa2f3199 | Rename arguments --disable-nan-detection to --enable-nan-detection (#2066) | 2024-11-17 16:53:44 -08:00
Ke Bao | 62832bb272 | Support cuda graph for DP attention (#2061) | 2024-11-17 16:29:20 -08:00
Lianmin Zheng | 11f881d173 | Deprecate --disable-flashinfer and --disable-flashinfer-sampling (#2065) | 2024-11-17 16:20:58 -08:00
Lianmin Zheng | 38625e2139 | Remove monkey_patch_vllm_dummy_weight_loader (#2064) | 2024-11-17 15:48:12 -08:00
Lianmin Zheng | c1f401fc58 | Revert "chore: update torch v2.5.1" (#2063) | 2024-11-17 15:29:38 -08:00
Yineng Zhang | 3b878863f7 | chore: update torch v2.5.1 (#1849) | 2024-11-18 00:06:00 +08:00
Lianmin Zheng | f719d9aebc | Launch dp ranks in parallel (#2053) | 2024-11-16 17:39:39 -08:00
    Co-authored-by: Haotian Liu <6631389+haotian-liu@users.noreply.github.com>
Lianmin Zheng | edad373135 | Fix illegal memory access in overlap mode & Use more fused triton kernels for building meta data (#2051) | 2024-11-16 16:14:23 -08:00
Ke Bao | 976bc302e5 | Support DP MLA (#1970) | 2024-11-16 09:01:43 +00:00
Lianmin Zheng | 2f2e07439c | Fix weight update for data parallelism (#2050) | 2024-11-16 00:30:39 -08:00
HAI | 2ffe0a7363 | Add get_amdgpu_memory_capacity() (#2049) | 2024-11-15 22:51:48 -08:00
Ke Wen | cf2489762b | Add Tensor Parallel to torch_native_llama (#1876) | 2024-11-15 21:26:00 -08:00
HAI | e5c6715003 | Fix core (MI300X) with --enable-overlap (#2048) | 2024-11-15 21:24:42 -08:00
Lianmin Zheng | 32c9a7ec11 | Release v0.3.5.post2 (#2046) | 2024-11-15 06:54:00 -08:00
Lianmin Zheng | b01df48cf2 | [Fix] Adjust default chunked prefill size and cuda graph max bs according to GPU memory capacity (#2044) | 2024-11-15 06:21:57 -08:00
Lianmin Zheng | c29b98e043 | Fix json benchmark (#2043) | 2024-11-15 05:33:43 -08:00
Lianmin Zheng | 2558d6a675 | Fix the default arguments of bench_offline_throughput.py & simplify detokenizer manager (#2042) | 2024-11-15 05:02:44 -08:00
ws | 29ebe3dff4 | fix: align enable_overlap_scheduler naming between code and docs (#2038) | 2024-11-15 03:39:10 -08:00
zolinthecow | f6dd648620 | Offline LLM Engine Benchmark Throughput (#1968) | 2024-11-14 21:59:33 -08:00
    Co-authored-by: ByronHsu <byronhsu1230@gmail.com>
Lianmin Zheng | ea53c63bad | Expose no_stop_trim and skip_special_tokens in openai api (#2039) | 2024-11-14 19:09:21 -08:00
Lianmin Zheng | a10d530943 | Fix outlines version (#2036) | 2024-11-14 12:52:40 -08:00
Lianmin Zheng | aae5434bdf | Fix unit tests (#2034) | 2024-11-14 11:08:37 -08:00
Lianmin Zheng | c3eac1b010 | Fix torch.compile for MoE (#2033) | 2024-11-14 01:30:24 -08:00
Patrick Yi | 13ce3e4b5d | Add download_dir ServerArgs property (#2027) | 2024-11-13 23:26:56 -08:00
chottolabs | fb9fb3518b | set content to empty string (#2026) | 2024-11-14 01:06:02 +00:00
Lianmin Zheng | c722d9bdc3 | Fix dependency and error message for xgrammar (#2024) | 2024-11-13 14:04:25 -08:00
Lianmin Zheng | 218ab3611d | Do not let invalid grammar crash the server (#2023) | 2024-11-13 11:39:16 -08:00
Lianmin Zheng | f407fcf9ef | Release v0.3.5.post1 (#2022) | 2024-11-13 10:27:12 -08:00
Lianmin Zheng | 54479d6f30 | Fix grammar backend for tensor parallelism (#2020) | 2024-11-13 01:49:45 -08:00
Lianmin Zheng | ba069a24d3 | Fix grammar backend (#2018) | 2024-11-12 21:17:38 -08:00
DarkSharpness | 125b1199c5 | support parallel grammar preprocessing (#1996) | 2024-11-12 08:45:28 -08:00
    Co-authored-by: Lianmin Zheng <lianminzheng@gmail.com>
Xiaoyu Zhang | a1bd719031 | fix a bug in v1_embeeding_request (#2014) | 2024-11-12 16:49:45 +08:00
Lianmin Zheng | 78c1d6445f | Fix finish reason (#2013) | 2024-11-11 23:24:41 -08:00
Xiaoyu Zhang | 027e65248f | support echo=true and logprobs in openai api when logprobs=1 in lm-evaluation-harness (#1998) | 2024-11-11 23:21:20 -08:00
Ke Bao | b808a38365 | Filter empty prompt in random bench serving (#2011) | 2024-11-12 14:53:41 +08:00
Lianmin Zheng | 530ae1bdc8 | Fix weight loading for tied word embedding when TP > 1 (#2009) | 2024-11-11 17:52:42 -08:00
Lianmin Zheng | befc6beb86 | Fix a typo in io_struct.py (#2008) | 2024-11-11 16:34:10 -08:00
Lianmin Zheng | 59a5ba9be0 | [Minor] Remove unused imports (#2006) | 2024-11-11 15:36:14 -08:00
RangiLyu | f18b9c7252 | support internlm2-reward (#1994) | 2024-11-11 15:09:58 -08:00
James Xu | ddeb9d42de | Add engine encode (#1995) | 2024-11-11 11:48:17 -08:00
    Co-authored-by: Byron Hsu <byronhsu1230@gmail.com>
HAI | 087ab83223 | [Performance, Triton] Optimize over mask compute to tl.load in fused_moe_kernel (#1980) | 2024-11-10 18:54:43 -08:00
Byron Hsu | 8169c6f4ef | Add gen-shared-prefix dataset in bench_serving (#1990) | 2024-11-11 08:39:56 +08:00
yizhang2077 | a8aad9357d | qwen2vl fix bug for #1971 #1897 (#1984) | 2024-11-10 08:10:45 -08:00