Commit Graph

155 Commits

Author SHA1 Message Date
fzyzcjy
bd7f882142 Support copying tensor from cpu to gpu without using copy engines (#10007) 2025-09-05 20:07:19 +08:00
fzyzcjy
339f8eef09 [1/2] Optimizations and refactors about quant kernel (#9534) 2025-09-05 18:45:08 +08:00
Yineng Zhang
a96c5b5c14 chore: bump v0.3.8 sgl-kernel (#9907) 2025-09-02 01:27:26 -07:00
Yineng Zhang
c5082f0f73 chore: fix cuda driver api issue and bump sgl-kernel 0.3.7.post1 (#9746) 2025-08-30 02:01:54 -07:00
Kaixi Hou
5c34b4f1c7 [NVIDIA] [2/N] Optimize silu_and_mul_scaled_fp4_grouped_quant perf (#9556) 2025-08-29 17:17:03 -07:00
Hubert Lu
711390a971 [AMD] Support Hierarchical Caching on AMD GPUs (#8236) 2025-08-28 15:27:07 -07:00
PGFLMG
aa3eba8eb4 [sgl-kernel] misc: update deepgemm version for sgl-kernel (#9340) 2025-08-27 12:01:30 -07:00
Co-authored-by: Yineng Zhang <me@zhyncs.com>
Co-authored-by: fzyzcjy <ch271828n@outlook.com>
Kaixi Hou
e5638573c1 [NVIDA] [1/N] Nvfp4 Masked Gemm: Add quant op for the flashinfer grouped gemm (#9200) 2025-08-22 12:19:45 -07:00
Yineng Zhang
b6b2287e4b chore: bump sgl-kernel v0.3.6.post2 (#9475) 2025-08-21 23:02:08 -07:00
Azure
70bb066ee4 Fix FP4 inference corruption issue in glm4.5-air model (#9346) 2025-08-20 22:13:47 -07:00
fzyzcjy
42c8704560 Add PDL support for quant kernel and rope kernel (#9106) 2025-08-20 01:56:29 -07:00
Yichen Yan
c9bf3877a0 Reduce overhead for fa by not calling heavy CUDA property check (#7375) 2025-08-20 16:26:28 +08:00
Lianmin Zheng
ecc9f3e47a [Minor] Fix the style of sgl-kernel (#9332) 2025-08-18 23:45:00 -07:00
JieXin Liang
6cdcbcc674 [fix] fix enable_pdl for blackwell (#9011) 2025-08-19 01:16:08 +08:00
Lianmin Zheng
c480a3f6ea Minor style fixes for sgl-kernel (#9289) 2025-08-18 09:38:35 -07:00
Yineng Zhang
a1c7f742f9 chore: bump sgl-kernel v0.3.6.post1 (#9286) 2025-08-17 16:26:17 -07:00
Yineng Zhang
87dab54824 Revert "chore: bump sgl-kernel v0.3.6 (#9220)" (#9247) 2025-08-15 17:24:36 -07:00
Liangsheng Yin
0c8594e67d Optional extension for green context (#9231) 2025-08-15 21:33:52 +08:00
Yineng Zhang
c186feed7f chore: bump sgl-kernel v0.3.6 (#9220) 2025-08-15 02:50:50 -07:00
Yuan Luo
53dcc750b6 [sgl-kernel] Support FlashInfer top_k_top_p_sampling_from_logits (#9060) 2025-08-14 10:56:36 -07:00
Co-authored-by: luoyuan.luo <luoyuan.luo@antgroup.com>
Yineng Zhang
1fea998a45 chore: bump sgl-kernel v0.3.5 (#9185) 2025-08-14 03:20:48 -07:00
Peng Zhang
5aa1ebd242 [2/n]decouple quantization implementation from vLLM dependency (#8112) 2025-08-14 03:19:03 -07:00
Co-authored-by: walker-ai <yiyun.wyt@antgroup.com>
Co-authored-by: leoneo <1320612015@qq.com>
Yineng Zhang
71fb8c9527 feat: update fa3 (#9126) 2025-08-13 20:07:08 +08:00
Ke Bao
94f44b88d1 Update fa3 interface and add unit test (#9150) 2025-08-13 20:05:02 +08:00
Trevor Morris
13c48dcf88 [1/2][resubmit again] sgl-kernel: Fuse routed scaling factor into moe_fused_gate (#9088) 2025-08-12 20:12:38 -07:00
DarkSharpness
86a0be65d8 [Feature] Support custom set kv buffer kernel (#8884) 2025-08-12 16:56:51 -07:00
Liangsheng Yin
445f9dca6e Runtime check CUDA driver version to avoid unresolved green context symbols (#9021) 2025-08-12 09:26:10 -07:00
Yineng Zhang
3a9afe2a42 chore: bump sgl-kernel v0.3.4 (#9103) 2025-08-12 01:48:47 -07:00
fzyzcjy
9aea255522 Fuse writing KV buffer into rope kernel (part 1: sgl-kernel) (#9077) 2025-08-12 01:46:40 -07:00
Yineng Zhang
dd949ace23 Revert "[1/2][resubmit] sgl-kernel: Fuse routed scaling factor into m… (#9035) 2025-08-10 17:34:54 -07:00
huangtingwei
86497d99f2 fix page first per layer pf2lf kernel (#8915) 2025-08-09 17:16:11 -07:00
Co-authored-by: Zhiqiang Xie <xiezhq@stanford.edu>
Trevor Morris
591c232f7c [1/2][resubmit] sgl-kernel: Fuse routed scaling factor into moe_fused_gate (select_experts) (#8770) 2025-08-08 17:55:06 -07:00
Yineng Zhang
54ea57f245 chore: bump sgl-kernel v0.3.3 (#8957) 2025-08-08 01:35:37 -07:00
Hongbo Xu
39fd178831 refactor: Move scalar_types.py to sgl-kernel to avoid circular import (#8720) 2025-08-07 19:22:16 -07:00
Yineng Zhang
75df31b60e chore: bump sgl-kernel v0.3.2 (#8802) 2025-08-05 02:35:20 -07:00
Yineng Zhang
02bc1c7d80 chore: bump sgl-kernel v0.3.1 (#8771) 2025-08-04 13:18:54 -07:00
Yineng Zhang
5ce5093b97 chore: bump sgl-kernel 0.3.0 with torch 2.8.0 (#8718) 2025-08-03 02:31:50 -07:00
Yineng Zhang
0a56b721d5 chore: bump sgl-kernel v0.2.9 (#8713) 2025-08-02 16:21:56 -07:00
Liangsheng Yin
603f5ce020 [Bug] fix green context's incompatibility with cuda < 12.4 (#8701) 2025-08-02 15:23:11 -07:00
Liangsheng Yin
f9f0138f80 Revert "[1/2] sgl-kernel: Fuse routed scaling factor into select_experts" (#8706) 2025-08-02 20:14:30 +08:00
Trevor Morris
f642524fd9 [1/2] sgl-kernel: Fuse routed scaling factor into select_experts (#8364) 2025-08-01 18:14:24 -07:00
Yineng Zhang
43118f5f2a chore: bump sgl-kernel v0.2.8 (#8599) 2025-07-30 22:23:52 -07:00
Cheng Wan
a5f5ab4030 update sgl-kernel for EP: kernel part (#8514) 2025-07-30 22:19:55 -07:00
Co-authored-by: Xiaoyu Zhang <35585791+BBuf@users.noreply.github.com>
Co-authored-by: Ke Bao <ispobaoke@gmail.com>
Hubert Lu
af4b9bae95 [AMD] Add silu_and_mul, gelu_and_mul, gelu_tanh_and_mul, and gelu_quick kernels for AMD GPUs (#7135) 2025-07-24 23:44:28 -07:00
Co-authored-by: yiakwy-xpu-ml-framework-team <961186938@qq.com>
Co-authored-by: HAI <hixiao@gmail.com>
li haoyang
28d4d47280 [Feature] Integrate quick allreduce and select the best allreduce implementation (#6619) 2025-07-24 20:48:42 -07:00
Signed-off-by: Haoyang Li <Haoyang.Li@amd.com>
Co-authored-by: ilmarkov <imarkov@redhat.com>
Zhiqiang Xie
d40846d456 breakdown kernel update (#8334) 2025-07-25 08:33:17 +08:00
Yineng Zhang
4c605235aa fix: workaround for deepgemm warmup issue (#8302) 2025-07-23 12:01:51 -07:00
Zhiqiang Xie
b43263307f Hicache IO kernel refactoring (#8264) 2025-07-23 16:49:03 +08:00
Yineng Zhang
429bb0efa2 chore: bump sgl-kernel v0.2.6.post1 (#8200) 2025-07-20 19:50:28 -07:00
Baizhou Zhang
282eb59ff3 Add bf16 output option for dsv3_router_gemm kernel (#7999) 2025-07-20 09:49:37 +08:00