Commit Graph

1348 Commits

| Author | SHA1 | Message | Date |
| --- | --- | --- | --- |
| Wen-Heng (Jack) Chung | d9eb9358cc | Tune paged attention parameters for AMD GPU. (#3255) | 2025-02-01 17:29:45 -08:00 |
| Yineng Zhang | 959dca4fc7 | use srt VocabParallelEmbedding (#3252) | 2025-02-01 22:23:09 +08:00 |
| Yineng Zhang | 8db776f049 | support QuickGELU (#3250) | 2025-02-01 19:31:47 +08:00 |
| Yineng Zhang | 4eb4b401cc | update and simplify CustomOp (#3249) | 2025-02-01 18:56:44 +08:00 |
| Yineng Zhang | 34e405e01f | update sgl-kernel version for sglang (#3238) | 2025-02-01 02:14:41 +08:00 |
| Ke Bao | 1ebe1d6de5 | Optimize MoE topk with torch compile (#3236) | 2025-02-01 01:36:50 +08:00 |
| Yineng Zhang | 7811bfdaa7 | compatible with flashinfer v0.2 (#3235) | 2025-02-01 01:32:18 +08:00 |
| Yineng Zhang | cf0f7eafe6 | chore: bump v0.4.2.post1 (#3233) | 2025-01-31 20:35:55 +08:00 |
| Ke Bao | c02e313914 | Fix block wise fp8 torch compile (#3232) | 2025-01-31 19:56:02 +08:00 |
| Byron Hsu | 734daedd8f | [fix] Clamp logprob with dtype min to prevent -inf (#3224) | 2025-01-31 17:04:04 +08:00 |
| Mick | 9f635ea50d | [Fix] Address remaining issues of supporting MiniCPMV (#2977) | 2025-01-28 00:22:13 -08:00 |
| Byron Hsu | 988d0a4bfc | [kernel] Use sgl_kernel rope (#3169) (Co-authored-by: zhyncs <me@zhyncs.com>) | 2025-01-28 14:33:11 +08:00 |
| Jhin | 7b9b4f4426 | Docs fix about EAGLE and streaming output (#3166) (Co-authored-by: Chayenne <zhaochenyang@ucla.edu>, Chayenne <zhaochen20@outlook.com>, Jhin <jhinpan@umich.edu>) | 2025-01-27 18:10:45 -08:00 |
| Zhiqiang Xie | 08104b56de | Sanity check to prevent performance regression (#3171) (Co-authored-by: Lianmin Zheng <lianminzheng@gmail.com>) | 2025-01-27 12:28:17 -08:00 |
| Yineng Zhang | 4ab43cfb3e | chore: bump v0.4.2 (#3180) | 2025-01-27 21:42:05 +08:00 |
| Yineng Zhang | 2f79f58873 | feat: use sgl-kernel 0.0.3 in sglang (#3179) | 2025-01-27 21:39:52 +08:00 |
| Lianmin Zheng | 53cef81587 | Improve weight loading and code style (#3174) | 2025-01-27 03:00:41 -08:00 |
| yigex | 351a72d40b | add dsv3 mi300 triton config for block scale (#3146) | 2025-01-27 17:25:53 +08:00 |
| Lianmin Zheng | 52c03f16b9 | Add activation parameters to fused_moe (#3170) | 2025-01-27 00:23:37 -08:00 |
| YAMY | b045841bae | Feature/function calling update (#2700) (Co-authored-by: Mingyuan Ma <mamingyuan2001@berkeley.edu>, Chayenne <zhaochen20@outlook.com>, shuaills <shishuaiuoe@gmail.com>) | 2025-01-26 09:57:51 -08:00 |
| Lianmin Zheng | 1dda8c5e4c | Return more infos for computing average acceptance length (#3152) | 2025-01-26 04:51:54 -08:00 |
| Yineng Zhang | 7e0976133c | udpate sgl-kernel version for srt (#3150) | 2025-01-26 20:22:34 +08:00 |
| Lianmin Zheng | d1a0863251 | Add a test case for cached_tokens (#3145) | 2025-01-26 01:39:28 -08:00 |
| Hubert Lu | f8b28e461a | Add CPU affinity setting to latency benchmark (#3085) | 2025-01-25 23:52:05 -08:00 |
| Lianmin Zheng | 4f118a39d7 | Fix repetition penalty (#3139) | 2025-01-25 21:48:58 -08:00 |
| yigex | 66283dbc0c | [Fix] Not skip NVML Check on AMD Platform (#3135) | 2025-01-25 21:33:51 -08:00 |
| Hui Liu | 8e48ca8cc1 | enable kv_scale for Gemma2 (#3113) | 2025-01-25 18:29:14 -08:00 |
| Lianmin Zheng | 27acf63bbd | Use torch.compile for scaling penalty (#3133) | 2025-01-25 18:27:33 -08:00 |
| Lianmin Zheng | ea535dc574 | Revert "disable custom allreduce on HIP" (#3067) | 2025-01-22 21:33:35 -08:00 |
| Ke Wen | 862bcff833 | Support loading of larger models with on-the-fly quantization (#3061) | 2025-01-22 21:33:17 -08:00 |
| Lianmin Zheng | 8b84e69f25 | Fix tp token sync for dp attention (#3062) | 2025-01-22 18:51:40 -08:00 |
| Lianmin Zheng | 022614d26e | Add some flags to allow sync token ids across TP ranks (#3060) | 2025-01-22 15:05:51 -08:00 |
| lukec | b8ab989ff4 | Fix the FP8 E4M3 parsing offline scales failure bug (#3045) | 2025-01-22 14:19:33 -08:00 |
| Hui Liu | ddc2001fb0 | disable custom allreduce on HIP (#3058) | 2025-01-22 13:57:22 -08:00 |
| nstream-ai-devx | 0d2148efaa | fix rotary_embedding rope_scaling for phi (#3055) | 2025-01-23 02:15:32 +08:00 |
| Lianmin Zheng | 3d8f1c9bcf | Use int64 as indices for set_kv_buffer (#3039) | 2025-01-21 19:46:09 -08:00 |
| Lianmin Zheng | a4331cd260 | Add accuracy and latency tests of eagle into CI (#3027) | 2025-01-21 02:55:14 -08:00 |
| Lianmin Zheng | 287d07a669 | Misc fixes for eagle (flush_cache, CPU overhead) (#3014) | 2025-01-20 20:27:38 -08:00 |
| Hui Liu | d2571dd5c7 | Enable Cohere2 Models (#3018) | 2025-01-20 19:21:41 -08:00 |
| 996_icu | b730aa6b9e | [EAGLE] Fix some boundary situation when retract reqs and req's max token = 1 (#2939) (Co-authored-by: josephyou <josephyou@tencent.com>) | 2025-01-20 17:46:43 -08:00 |
| Lianmin Zheng | 60b2a44a80 | Fix flaky tests in test_programs.py (#3022) | 2025-01-20 16:50:39 -08:00 |
| Hongpeng Guo | 949b3fbfce | [Doc] Update doc of custom logit processor (#3021) (Signed-off-by: Hongpeng Guo <hpguo@anyscale.com>) | 2025-01-20 16:50:25 -08:00 |
| Hui Liu | da4e8b3892 | enable kv_scale remap (#3017) | 2025-01-20 14:40:45 -08:00 |
| Enrique Shockwave | af6c5357d5 | deepseek v3 and r1 chat template (#3015) | 2025-01-20 14:40:12 -08:00 |
| Yineng Zhang | e94fb7cb10 | chore: bump v0.4.1.post7 (#3009) | 2025-01-20 21:50:55 +08:00 |
| Lianmin Zheng | 73401fd016 | Sync distributed package from vllm 0.6.4.post1 (#3010) | 2025-01-20 04:57:14 -08:00 |
| Lianmin Zheng | 89cd923581 | Roll back to use vllm custom allreduce (#3006) | 2025-01-20 04:03:15 -08:00 |
| Lianmin Zheng | dc1881326f | Fix perf regression on small batch sizes (#3008) | 2025-01-20 03:39:49 -08:00 |
| Hongpeng Guo | 583697cd71 | [Enhancement] Custom Logit Processor Improvement (#2998) (Signed-off-by: Hongpeng Guo <hpguo@anyscale.com>) | 2025-01-20 02:00:35 -08:00 |
| Lianmin Zheng | 09bcbe0123 | Update TypeBasedDispatcher and balance CI tests (#3001) | 2025-01-19 23:37:27 -08:00 |