Commit Graph

169 Commits

Author SHA1 Message Date
Yineng Zhang
5207424014 chore: bump v0.3.10 sgl-kernel (#10478) 2025-09-15 15:20:09 -07:00
fzyzcjy
3b25dc127a [1/2] Speed up trtllm_mla attention backend (>10% e2e) (#10473) 2025-09-15 11:53:21 -07:00
fzyzcjy
ca63f075b7 Revert "Fix FA4 import cause moe_fused_gate output be illegal memory" (#10432) 2025-09-14 19:03:27 -07:00
Lianmin Zheng
c9ec4cae5b Fix the style of sgl kernel (#10398) 2025-09-12 22:20:21 -07:00
fzyzcjy
3a77c80b26 Fix FA4 import cause moe_fused_gate output be illegal memory (#10368) 2025-09-12 03:21:26 -07:00
Yineng Zhang
532f998b0f chore: bump sgl-kernel 0.3.9.post2 (#10311) 2025-09-11 01:29:50 -07:00
Yineng Zhang
5b7448de77 chore: bump sgl-kernel 0.3.9.post1 (#10294) 2025-09-10 18:26:34 -07:00
Yineng Zhang
6d55f60e77 Revert "[1/2] Optimizations and refactors about quant kernel (#9534)" (#10292) 2025-09-10 18:24:23 -07:00
huangtingwei
5be8c2f7f7 Page first direct IO kernel (#10060) 2025-09-10 13:35:34 +08:00
Co-authored-by: Zhiqiang Xie <xiezhq@stanford.edu>
Yi Zhang
8cbe1538ef Add mamba kernel (#10234) 2025-09-09 12:58:43 -07:00
Yineng Zhang
f3817cb0b2 chore: bump v0.3.9 sgl-kernel (#10208) 2025-09-09 01:40:05 -07:00
Yineng Zhang
94fb4e9e54 feat: support fa cute in sgl-kernel (#10205) 2025-09-09 00:14:39 -07:00
Co-authored-by: cicirori <32845984+cicirori@users.noreply.github.com>
fzyzcjy
0096798ed6 [1/2] Speed up prefill mla attention (#10156) 2025-09-08 09:00:33 -07:00
hlu1
5f1eb20484 [chore] Remove unused ep_moe cuda kernels (#9956) 2025-09-06 01:35:50 -07:00
fzyzcjy
bd7f882142 Support copying tensor from cpu to gpu without using copy engines (#10007) 2025-09-05 20:07:19 +08:00
fzyzcjy
339f8eef09 [1/2] Optimizations and refactors about quant kernel (#9534) 2025-09-05 18:45:08 +08:00
Yineng Zhang
a96c5b5c14 chore: bump v0.3.8 sgl-kernel (#9907) 2025-09-02 01:27:26 -07:00
Yineng Zhang
c5082f0f73 chore: fix cuda driver api issue and bump sgl-kernel 0.3.7.post1 (#9746) 2025-08-30 02:01:54 -07:00
Kaixi Hou
5c34b4f1c7 [NVIDIA] [2/N] Optimize silu_and_mul_scaled_fp4_grouped_quant perf (#9556) 2025-08-29 17:17:03 -07:00
Hubert Lu
711390a971 [AMD] Support Hierarchical Caching on AMD GPUs (#8236) 2025-08-28 15:27:07 -07:00
PGFLMG
aa3eba8eb4 [sgl-kernel] misc: update deepgemm version for sgl-kernel (#9340) 2025-08-27 12:01:30 -07:00
Co-authored-by: Yineng Zhang <me@zhyncs.com>
Co-authored-by: fzyzcjy <ch271828n@outlook.com>
Kaixi Hou
e5638573c1 [NVIDA] [1/N] Nvfp4 Masked Gemm: Add quant op for the flashinfer grouped gemm (#9200) 2025-08-22 12:19:45 -07:00
Yineng Zhang
b6b2287e4b chore: bump sgl-kernel v0.3.6.post2 (#9475) 2025-08-21 23:02:08 -07:00
Azure
70bb066ee4 Fix FP4 inference corruption issue in glm4.5-air model (#9346) 2025-08-20 22:13:47 -07:00
fzyzcjy
42c8704560 Add PDL support for quant kernel and rope kernel (#9106) 2025-08-20 01:56:29 -07:00
Yichen Yan
c9bf3877a0 Reduce overhead for fa by not calling heavy CUDA property check (#7375) 2025-08-20 16:26:28 +08:00
Lianmin Zheng
ecc9f3e47a [Minor] Fix the style of sgl-kernel (#9332) 2025-08-18 23:45:00 -07:00
JieXin Liang
6cdcbcc674 [fix] fix enable_pdl for blackwell (#9011) 2025-08-19 01:16:08 +08:00
Lianmin Zheng
c480a3f6ea Minor style fixes for sgl-kernel (#9289) 2025-08-18 09:38:35 -07:00
Yineng Zhang
a1c7f742f9 chore: bump sgl-kernel v0.3.6.post1 (#9286) 2025-08-17 16:26:17 -07:00
Yineng Zhang
87dab54824 Revert "chore: bump sgl-kernel v0.3.6 (#9220)" (#9247) 2025-08-15 17:24:36 -07:00
Liangsheng Yin
0c8594e67d Optional extension for green context (#9231) 2025-08-15 21:33:52 +08:00
Yineng Zhang
c186feed7f chore: bump sgl-kernel v0.3.6 (#9220) 2025-08-15 02:50:50 -07:00
Yuan Luo
53dcc750b6 [sgl-kernel] Support FlashInfer top_k_top_p_sampling_from_logits (#9060) 2025-08-14 10:56:36 -07:00
Co-authored-by: luoyuan.luo <luoyuan.luo@antgroup.com>
Yineng Zhang
1fea998a45 chore: bump sgl-kernel v0.3.5 (#9185) 2025-08-14 03:20:48 -07:00
Peng Zhang
5aa1ebd242 [2/n] decouple quantization implementation from vLLM dependency (#8112) 2025-08-14 03:19:03 -07:00
Co-authored-by: walker-ai <yiyun.wyt@antgroup.com>
Co-authored-by: leoneo <1320612015@qq.com>
Yineng Zhang
71fb8c9527 feat: update fa3 (#9126) 2025-08-13 20:07:08 +08:00
Ke Bao
94f44b88d1 Update fa3 interface and add unit test (#9150) 2025-08-13 20:05:02 +08:00
Trevor Morris
13c48dcf88 [1/2][resubmit again] sgl-kernel: Fuse routed scaling factor into moe_fused_gate (#9088) 2025-08-12 20:12:38 -07:00
DarkSharpness
86a0be65d8 [Feature] Support custom set kv buffer kernel (#8884) 2025-08-12 16:56:51 -07:00
Liangsheng Yin
445f9dca6e Runtime check CUDA driver version to avoid unresolved green context symbols (#9021) 2025-08-12 09:26:10 -07:00
Yineng Zhang
3a9afe2a42 chore: bump sgl-kernel v0.3.4 (#9103) 2025-08-12 01:48:47 -07:00
fzyzcjy
9aea255522 Fuse writing KV buffer into rope kernel (part 1: sgl-kernel) (#9077) 2025-08-12 01:46:40 -07:00
Yineng Zhang
dd949ace23 Revert "[1/2][resubmit] sgl-kernel: Fuse routed scaling factor into m… (#9035) 2025-08-10 17:34:54 -07:00
huangtingwei
86497d99f2 fix page first per layer pf2lf kernel (#8915) 2025-08-09 17:16:11 -07:00
Co-authored-by: Zhiqiang Xie <xiezhq@stanford.edu>
Trevor Morris
591c232f7c [1/2][resubmit] sgl-kernel: Fuse routed scaling factor into moe_fused_gate (select_experts) (#8770) 2025-08-08 17:55:06 -07:00
Yineng Zhang
54ea57f245 chore: bump sgl-kernel v0.3.3 (#8957) 2025-08-08 01:35:37 -07:00
Hongbo Xu
39fd178831 refactor: Move scalar_types.py to sgl-kernel to avoid circular import (#8720) 2025-08-07 19:22:16 -07:00
Yineng Zhang
75df31b60e chore: bump sgl-kernel v0.3.2 (#8802) 2025-08-05 02:35:20 -07:00
Yineng Zhang
02bc1c7d80 chore: bump sgl-kernel v0.3.1 (#8771) 2025-08-04 13:18:54 -07:00