Commit Graph

1363 Commits

Author SHA1 Message Date
Ke Bao
de5533341e Update Triton extend backend interface (#3309) 2025-02-05 18:12:22 +08:00
Yineng Zhang
7aad8d1854 chore: bump v0.4.2.post2 (#3313) 2025-02-05 17:35:02 +08:00
Baizhou Zhang
76fa2d152c Fix lora flashinfer import bug on ROCM (#3312) 2025-02-05 16:36:49 +08:00
Wen-Heng (Jack) Chung
7ab84948d8 [ROCm] Logic to decide whether to use manually unrolled kernel. (#3306) 2025-02-04 19:12:20 -08:00
kk
4885b90802 Use forward_cuda to execute custom op for hip platform (#3305)
Co-authored-by: wunhuang <wunhuang@amd.com>
2025-02-05 02:58:17 +00:00
Wen-Heng (Jack) Chung
c2723a42a5 [ROCm] Manually unroll _w8a8_block_fp8_matmul kernel on AMD GPU. (#3299) 2025-02-05 07:15:40 +08:00
Wen-Heng (Jack) Chung
c7256ca836 [ROCm] Add tuning configs for AMD Radeon Graphics. (#3294) 2025-02-04 10:34:57 -08:00
Ke Bao
a07364ccc5 Update Triton decode backend interface (#3292) 2025-02-04 23:26:04 +08:00
HAI
2c1a695ff1 ROCm: sgl-kernel enablement starting with sgl_moe_align_block (#3287) 2025-02-04 21:44:44 +08:00
Yineng Zhang
d39899e85c upgrade flashinfer v0.2.0.post2 (#3288)
Co-authored-by: pankajroark <pankajroark@users.noreply.github.com>
2025-02-04 21:41:40 +08:00
Baizhou Zhang
70817a7eae [Feature] Define backends and add Triton backend for Lora (#3161)
Co-authored-by: Ying Sheng <sqy1415@gmail.com>
2025-02-03 22:09:13 -08:00
kushanam
d54cee1441 adding Triton configs for DeepSeekV3 on Blackwell (#3272) 2025-02-04 04:12:09 +08:00
Yineng Zhang
013021b6a1 refactor EAGLE 2 (#3269)
Co-authored-by: Ying Sheng <sqy1415@gmail.com>
Co-authored-by: merrymercy <lianminzheng@gmail.com>
Co-authored-by: Ying1123 <sqy1415@gmail.com>
2025-02-03 20:52:30 +08:00
zifeitong
28b0a62bb3 Bug: Fix min_p sampling crash when using flashinfer backend (#3207)
Co-authored-by: zhaochenyang20 <zhaochen20@outlook.com>
2025-02-02 15:36:07 -08:00
HAI
566d61d90f ROCm: bump 6.3.0 (#3259) 2025-02-03 04:13:40 +08:00
Wen-Heng (Jack) Chung
d9eb9358cc Tune paged attention parameters for AMD GPU. (#3255) 2025-02-01 17:29:45 -08:00
Yineng Zhang
959dca4fc7 use srt VocabParallelEmbedding (#3252) 2025-02-01 22:23:09 +08:00
Yineng Zhang
8db776f049 support QuickGELU (#3250) 2025-02-01 19:31:47 +08:00
Yineng Zhang
4eb4b401cc update and simplify CustomOp (#3249) 2025-02-01 18:56:44 +08:00
Yineng Zhang
34e405e01f update sgl-kernel version for sglang (#3238) 2025-02-01 02:14:41 +08:00
Ke Bao
1ebe1d6de5 Optimize MoE topk with torch compile (#3236) 2025-02-01 01:36:50 +08:00
Yineng Zhang
7811bfdaa7 compatible with flashinfer v0.2 (#3235) 2025-02-01 01:32:18 +08:00
Yineng Zhang
cf0f7eafe6 chore: bump v0.4.2.post1 (#3233) 2025-01-31 20:35:55 +08:00
Ke Bao
c02e313914 Fix block wise fp8 torch compile (#3232) 2025-01-31 19:56:02 +08:00
Byron Hsu
734daedd8f [fix] Clamp logprob with dtype min to prevent -inf (#3224) 2025-01-31 17:04:04 +08:00
Mick
9f635ea50d [Fix] Address remaining issues of supporting MiniCPMV (#2977) 2025-01-28 00:22:13 -08:00
Byron Hsu
988d0a4bfc [kernel] Use sgl_kernel rope (#3169)
Co-authored-by: zhyncs <me@zhyncs.com>
2025-01-28 14:33:11 +08:00
Jhin
7b9b4f4426 Docs fix about EAGLE and streaming output (#3166)
Co-authored-by: Chayenne <zhaochenyang@ucla.edu>
Co-authored-by: Chayenne <zhaochen20@outlook.com>
Co-authored-by: Jhin <jhinpan@umich.edu>
2025-01-27 18:10:45 -08:00
Zhiqiang Xie
08104b56de Sanity check to prevent performance regression (#3171)
Co-authored-by: Lianmin Zheng <lianminzheng@gmail.com>
2025-01-27 12:28:17 -08:00
Yineng Zhang
4ab43cfb3e chore: bump v0.4.2 (#3180) 2025-01-27 21:42:05 +08:00
Yineng Zhang
2f79f58873 feat: use sgl-kernel 0.0.3 in sglang (#3179) 2025-01-27 21:39:52 +08:00
Lianmin Zheng
53cef81587 Improve weight loading and code style (#3174) 2025-01-27 03:00:41 -08:00
yigex
351a72d40b add dsv3 mi300 triton config for block scale (#3146) 2025-01-27 17:25:53 +08:00
Lianmin Zheng
52c03f16b9 Add activation parameters to fused_moe (#3170) 2025-01-27 00:23:37 -08:00
YAMY
b045841bae Feature/function calling update (#2700)
Co-authored-by: Mingyuan Ma <mamingyuan2001@berkeley.edu>
Co-authored-by: Chayenne <zhaochen20@outlook.com>
Co-authored-by: shuaills <shishuaiuoe@gmail.com>
2025-01-26 09:57:51 -08:00
Lianmin Zheng
1dda8c5e4c Return more info for computing average acceptance length (#3152) 2025-01-26 04:51:54 -08:00
Yineng Zhang
7e0976133c update sgl-kernel version for srt (#3150) 2025-01-26 20:22:34 +08:00
Lianmin Zheng
d1a0863251 Add a test case for cached_tokens (#3145) 2025-01-26 01:39:28 -08:00
Hubert Lu
f8b28e461a Add CPU affinity setting to latency benchmark (#3085) 2025-01-25 23:52:05 -08:00
Lianmin Zheng
4f118a39d7 Fix repetition penalty (#3139) 2025-01-25 21:48:58 -08:00
yigex
66283dbc0c [Fix] Not skip NVML Check on AMD Platform (#3135) 2025-01-25 21:33:51 -08:00
Hui Liu
8e48ca8cc1 enable kv_scale for Gemma2 (#3113) 2025-01-25 18:29:14 -08:00
Lianmin Zheng
27acf63bbd Use torch.compile for scaling penalty (#3133) 2025-01-25 18:27:33 -08:00
Lianmin Zheng
ea535dc574 Revert "disable custom allreduce on HIP" (#3067) 2025-01-22 21:33:35 -08:00
Ke Wen
862bcff833 Support loading of larger models with on-the-fly quantization (#3061) 2025-01-22 21:33:17 -08:00
Lianmin Zheng
8b84e69f25 Fix tp token sync for dp attention (#3062) 2025-01-22 18:51:40 -08:00
Lianmin Zheng
022614d26e Add some flags to allow sync token ids across TP ranks (#3060) 2025-01-22 15:05:51 -08:00
lukec
b8ab989ff4 Fix the FP8 E4M3 parsing offline scales failure bug (#3045) 2025-01-22 14:19:33 -08:00
Hui Liu
ddc2001fb0 disable custom allreduce on HIP (#3058) 2025-01-22 13:57:22 -08:00
nstream-ai-devx
0d2148efaa fix rotary_embedding rope_scaling for phi (#3055) 2025-01-23 02:15:32 +08:00