Commit Graph

989 Commits

| Author | SHA1 | Message | Date |
|---|---|---|---|
| Henry Hyeonmok Ko | c35cd1f8c7 | Expose max total num tokens from Runtime & Engine API (#2092) | 2024-11-22 15:10:10 -08:00 |
| Xuehai Pan | 62a4a339eb | docs: fix module docstrings and copyright headers (#2077) | 2024-11-22 22:16:53 +08:00 |
| Yineng Zhang | 2797bc3422 | fix: add xgrammar dependency (#2126) | 2024-11-22 20:53:11 +08:00 |
| Yineng Zhang | 9a00e6f453 | chore: bump v0.3.6 (#2120) | 2024-11-22 19:27:30 +08:00 |
| Yineng Zhang | 4f8c3aeafc | minor: update gsm8k threshold (#2125) | 2024-11-22 19:23:58 +08:00 |
| Lianmin Zheng | 2369e88209 | [minor] Clean up unused imports (#2122) (Co-authored-by: rinrin32 <rinrin.int@gmail.com>) | 2024-11-22 01:50:42 -08:00 |
| bjmsong | ad30d5cf9a | Benchmark with Pytorch Profiler easily (#2110) (Co-authored-by: root <bjmsong@126.com>) | 2024-11-21 23:29:50 -08:00 |
| Lianmin Zheng | dfec7fca06 | Rename sglang.bench_latency to sglang.bench_one_batch (#2118) | 2024-11-21 20:07:48 -08:00 |
| Jake Poznanski | 8048c28c11 | Fix #2037 - Context length check does not take into out pad tokens for visual models (#2106) | 2024-11-21 19:05:41 -08:00 |
| Byron Hsu | 30af7dfb34 | [router] add base_gpu_id server args & merged radix tree python reference (#2115) | 2024-11-21 17:13:33 -08:00 |
| James Xu | f6f713797b | Add support for Qwen2-VL-based embedding models (#2055) | 2024-11-21 14:24:25 -08:00 |
| HAI | f35cb46cc3 | ROCm: Fix MoE padding for none FP8 cases (#2111) | 2024-11-21 12:23:21 -08:00 |
| Jerry Zhang | 7f8fcd39cd | Turn off autotune for scaled mm for fp8 dynamic quant in torchao (#2116) | 2024-11-21 12:19:49 -08:00 |
| Jerry Zhang | 5c6a41facf | Error out when torchao-config option is not recognized (#2107) | 2024-11-20 17:37:28 -08:00 |
| Lianmin Zheng | 722530fa01 | Enable overlap scheduler by default for the triton attention backend (#2105) | 2024-11-20 02:58:35 -08:00 |
| Lianmin Zheng | 3295cd8af2 | Allow skipping warmup in bench_offline_throughput.py (#2103) | 2024-11-20 01:25:21 -08:00 |
| Ying Sheng | 5942dfc00a | [feat] Add session control (#2073) | 2024-11-20 00:36:53 -08:00 |
| Lianmin Zheng | 7d671e4ad2 | Enable overlap by default (#2067) | 2024-11-19 22:07:58 -08:00 |
| Ke Bao | 699384cb01 | Set schedule policy more conservative for DP attention (#2096) | 2024-11-19 20:57:18 -08:00 |
| Lianmin Zheng | ffd20fcd03 | Make constrained decoding work for overlap scheduler (#2095) | 2024-11-19 15:04:43 -08:00 |
| Yineng Zhang | 55bd97f3e5 | minor: add dataset dump and questions shuffle (#2093) | 2024-11-19 14:07:27 -08:00 |
| HAI | e57c3e12b8 | Use native fp8 format on MI300X (#2094) | 2024-11-19 14:06:29 -08:00 |
| Alexander Waitz | 929c7621af | Fix: incorrect top_logprobs in chat completion (#2088) | 2024-11-19 12:21:36 +00:00 |
| Lianmin Zheng | b7a065eae3 | Use cuda event wait and synchronization instead of busy waiting (#2089) | 2024-11-19 00:21:46 -08:00 |
| Lianmin Zheng | b110453802 | Simplify logits penalizer (#2086) | 2024-11-18 17:48:28 -08:00 |
| Lianmin Zheng | 3b44bbeecf | Allow passing extra request body to bench_offline_throughput.py (#2085) | 2024-11-18 14:59:15 -08:00 |
| Lianmin Zheng | 80e2c4a8de | Fix chunked prefill with output logprob (#2083) | 2024-11-18 13:16:28 -08:00 |
| Jani Monoses | 66318ffe96 | Rename layer_idx to layer_id for consistency (#2078) | 2024-11-18 13:00:02 -08:00 |
| Yineng Zhang | 766192610e | feat: update torch 2.5.1 (#2069) | 2024-11-18 21:29:13 +08:00 |
| yukavio | 2a3992b6f1 | support set role as 'tool' (#2075) (Co-authored-by: kavioyu <kavioyu@tencent.com>) | 2024-11-18 01:06:59 -08:00 |
| Lianmin Zheng | 4af3f889fc | Simplify flashinfer indices update for prefill (#2074) (Co-authored-by: kavioyu <kavioyu@tencent.com>, kavioyu <kavioyu@gmail.com>) | 2024-11-18 00:02:36 -08:00 |
| Lianmin Zheng | df7fe4521a | Crash the CI jobs on model import errors (#2072) | 2024-11-17 22:18:11 -08:00 |
| Lianmin Zheng | 116685337e | Fix cuda illegal memory access in overlap mode (#2070) | 2024-11-17 21:29:30 -08:00 |
| Lianmin Zheng | a9e90b4bce | [Minor] Fix styles for overlap mode (#2068) | 2024-11-17 19:49:20 -08:00 |
| Tanjiro | 8c280cee55 | add phi-3 small support (#2062) (Co-authored-by: Tushar Goel <114812108+AI-Tushar@users.noreply.github.com>) | 2024-11-17 18:47:43 -08:00 |
| DarkSharpness | 9c745d078e | [Performance] Update xgrammar-related constrained decoding (#2056) | 2024-11-17 16:58:49 -08:00 |
| Lianmin Zheng | ebaa2f3199 | Rename arguments --disable-nan-detection to --enable-nan-detection (#2066) | 2024-11-17 16:53:44 -08:00 |
| Ke Bao | 62832bb272 | Support cuda graph for DP attention (#2061) | 2024-11-17 16:29:20 -08:00 |
| Lianmin Zheng | 11f881d173 | Deprecate --disable-flashinfer and --disable-flashinfer-sampling (#2065) | 2024-11-17 16:20:58 -08:00 |
| Lianmin Zheng | 38625e2139 | Remove monkey_patch_vllm_dummy_weight_loader (#2064) | 2024-11-17 15:48:12 -08:00 |
| Lianmin Zheng | c1f401fc58 | Revert "chore: update torch v2.5.1" (#2063) | 2024-11-17 15:29:38 -08:00 |
| Yineng Zhang | 3b878863f7 | chore: update torch v2.5.1 (#1849) | 2024-11-18 00:06:00 +08:00 |
| Lianmin Zheng | f719d9aebc | Launch dp ranks in parallel (#2053) (Co-authored-by: Haotian Liu <6631389+haotian-liu@users.noreply.github.com>) | 2024-11-16 17:39:39 -08:00 |
| Lianmin Zheng | edad373135 | Fix illegal memory access in overlap mode & Use more fused triton kernels for building meta data (#2051) | 2024-11-16 16:14:23 -08:00 |
| Ke Bao | 976bc302e5 | Support DP MLA (#1970) | 2024-11-16 09:01:43 +00:00 |
| Lianmin Zheng | 2f2e07439c | Fix weight update for data parallelism (#2050) | 2024-11-16 00:30:39 -08:00 |
| HAI | 2ffe0a7363 | Add get_amdgpu_memory_capacity() (#2049) | 2024-11-15 22:51:48 -08:00 |
| Ke Wen | cf2489762b | Add Tensor Parallel to torch_native_llama (#1876) | 2024-11-15 21:26:00 -08:00 |
| HAI | e5c6715003 | Fix core (MI300X) with --enable-overlap (#2048) | 2024-11-15 21:24:42 -08:00 |
| Lianmin Zheng | 32c9a7ec11 | Release v0.3.5.post2 (#2046) | 2024-11-15 06:54:00 -08:00 |