Commit Graph

522 Commits

Author SHA1 Message Date
Hubert Lu
fe68c1486f Fix errors of hicache kernels in sgl-kernel for ROCm (#10339) 2025-09-11 14:54:34 -07:00
Yineng Zhang
532f998b0f chore: bump sgl-kernel 0.3.9.post2 (#10311) 2025-09-11 01:29:50 -07:00
Yineng Zhang
de15d1405a Revert "Fix flashinfer version in sgl-kernel (#10135)" (#10310) 2025-09-11 01:27:58 -07:00
Yineng Zhang
5b7448de77 chore: bump sgl-kernel 0.3.9.post1 (#10294) 2025-09-10 18:26:34 -07:00
Yineng Zhang
6d55f60e77 Revert "[1/2] Optimizations and refactors about quant kernel (#9534)" (#10292) 2025-09-10 18:24:23 -07:00
Rain Jiang
2286e85e77 pass a_scale from the fp8 quant result instead of hard-coding it to 1.0f (#10241) 2025-09-10 12:56:05 -07:00
    Co-authored-by: Yichen Wang <yichen.wang@bytedance.com>
    Co-authored-by: Jinwu Guo <641876696@qq.com>
huangtingwei
5be8c2f7f7 Page first direct IO kernel (#10060) 2025-09-10 13:35:34 +08:00
    Co-authored-by: Zhiqiang Xie <xiezhq@stanford.edu>
Yi Zhang
8cbe1538ef Add mamba kernel (#10234) 2025-09-09 12:58:43 -07:00
Yineng Zhang
f3817cb0b2 chore: bump v0.3.9 sgl-kernel (#10208) 2025-09-09 01:40:05 -07:00
Yineng Zhang
94fb4e9e54 feat: support fa cute in sgl-kernel (#10205) 2025-09-09 00:14:39 -07:00
    Co-authored-by: cicirori <32845984+cicirori@users.noreply.github.com>
blzheng
d1d4074c4e [CPU] Add gelu_and_mul kernel in sgl-kernel and add ut (#9300) 2025-09-08 23:23:13 -07:00
Keyang Ru
718f25ae6e Explicitly export CMAKE_BUILD_PARALLEL_LEVEL (#10193) 2025-09-08 22:35:27 -07:00
Yineng Zhang
cdc56ef6c1 feat: use sgl-kernel cu129 as default (#10188) 2025-09-08 22:01:17 -07:00
fzyzcjy
0096798ed6 [1/2] Speed up prefill mla attention (#10156) 2025-09-08 09:00:33 -07:00
Yuhao Yao
ee0b3c5bad [1/N][Bug] Fix w4afp8 MoE NaN issue (sgl-kernel, fixed) (#10108) 2025-09-07 21:39:07 -07:00
Rain Jiang
6049ca209e move compile threads to an option to avoid OOM on low memory host (#10123) 2025-09-07 21:36:14 -07:00
Cao E
7577f0e40f Add graph runner support with torch compile on CPU (#7843) 2025-09-07 21:33:58 -07:00
Lianmin Zheng
76a2c86b88 Fix flashinfer version in sgl-kernel (#10135) 2025-09-07 12:54:07 -07:00
Qi Yuhang
85ed8e0a5e Optimize nvfp4 block scaled gemm kernel when M is small. (#10101) 2025-09-06 22:31:00 -07:00
Jianying
dd1e268938 CUTLASS fp8 blockwise gemm support of sm120 (#9969) 2025-09-06 22:28:54 -07:00
hlu1
5f1eb20484 [chore] Remove unused ep_moe cuda kernels (#9956) 2025-09-06 01:35:50 -07:00
hlu1
039cef76aa Remove non-accelerated targets (100 and up) from cmake (#10041) 2025-09-06 01:35:28 -07:00
    Signed-off-by: Hao Lu <14827759+hlu1@users.noreply.github.com>
hlu1
4c22ebe2e8 Disable kernel cutlass_mla_decode on SM103 (#10058) 2025-09-06 01:35:18 -07:00
    Signed-off-by: Hao Lu <14827759+hlu1@users.noreply.github.com>
DevashishLal-CB
dbb1235d58 [Fix] illegal sync based on undefined behaviour (#9620) 2025-09-06 11:54:48 +08:00
    Signed-off-by: Devashish Lal <devashish@rivosinc.com>
    Co-authored-by: Xiaoyu Zhang <35585791+BBuf@users.noreply.github.com>
Yineng Zhang
0e78c63c0e Revert "[1/N][Bug] Fix w4afp8 MoE NaN issue (sgl-kernel) (#9953)" (#10097) 2025-09-05 19:57:53 -07:00
fzyzcjy
bd7f882142 Support copying tensor from cpu to gpu without using copy engines (#10007) 2025-09-05 20:07:19 +08:00
fzyzcjy
339f8eef09 [1/2] Optimizations and refactors about quant kernel (#9534) 2025-09-05 18:45:08 +08:00
Yuhao Yao
f78b7fd16d [1/N][Bug] Fix w4afp8 MoE NaN issue (sgl-kernel) (#9953) 2025-09-03 18:28:27 +08:00
Lianmin Zheng
d631290e32 Remove annoying warnings in sgl kernel build (#9905) 2025-09-02 20:18:25 -07:00
Yineng Zhang
8766b3aca8 fix: update router deps (#9921) 2025-09-02 03:28:58 -07:00
Yineng Zhang
a96c5b5c14 chore: bump v0.3.8 sgl-kernel (#9907) 2025-09-02 01:27:26 -07:00
Lifu Huang
1fbfdebe6b [chore] fix dead links in doc (#9913) 2025-09-02 00:28:26 -07:00
chenxj
d4a938417d [feat] Support tp mode for DeepSeek-R1-W4AFP8 (#8118) 2025-09-01 22:17:26 -07:00
    Co-authored-by: yuhyao <827623970@qq.com>
PGFLMG
7fe89f7cdb [sgl-kernel] fix: fix missing FetchContent_Populate for fmt (#9826) 2025-08-30 12:57:42 -07:00
Yineng Zhang
c5082f0f73 chore: fix cuda driver api issue and bump sgl-kernel 0.3.7.post1 (#9746) 2025-08-30 02:01:54 -07:00
hlu1
1e85589dc5 Make fp4_quantize kernels work on sm103 (#9807) 2025-08-29 21:15:08 -07:00
    Signed-off-by: Hao Lu <14827759+hlu1@users.noreply.github.com>
Kaixi Hou
5c34b4f1c7 [NVIDIA] [2/N] Optimize silu_and_mul_scaled_fp4_grouped_quant perf (#9556) 2025-08-29 17:17:03 -07:00
hlu1
7a16db9bd9 Make sm100 fp8 kernels available on sm103 (#9789) 2025-08-28 23:47:29 -07:00
    Signed-off-by: Hao Lu <14827759+hlu1@users.noreply.github.com>
hlu1
a7d825fccc Skip some tests on Blackwell (#9777) 2025-08-28 20:00:32 -07:00
    Signed-off-by: Hao Lu <14827759+hlu1@users.noreply.github.com>
Ma Mingfei
5ad296bda1 Optimize prefill performance on cpu backend (#8750) 2025-08-28 17:21:55 -07:00
Hubert Lu
711390a971 [AMD] Support Hierarchical Caching on AMD GPUs (#8236) 2025-08-28 15:27:07 -07:00
Rain Jiang
6b39f9cf8c Support compile sgl-kernel on cuda 13.0 (#9721) 2025-08-28 10:18:03 -07:00
PGFLMG
aa3eba8eb4 [sgl-kernel] misc: update deepgemm version for sgl-kernel (#9340) 2025-08-27 12:01:30 -07:00
    Co-authored-by: Yineng Zhang <me@zhyncs.com>
    Co-authored-by: fzyzcjy <ch271828n@outlook.com>
Rain Jiang
79e6a8a6ac support cuda 13.0 and trtllm kernel by Aug 25 2025 (#9495) 2025-08-26 23:13:27 -07:00
Qi Yuhang
fda4792620 Update CUTLASS 4.2 & Enable K-Major Scale Factor for SM90 FP8 Blockwise Group GEMM (#9559) 2025-08-24 23:24:43 -07:00
Kaixi Hou
e5638573c1 [NVIDIA] [1/N] Nvfp4 Masked Gemm: Add quant op for the flashinfer grouped gemm (#9200) 2025-08-22 12:19:45 -07:00
Yineng Zhang
b6b2287e4b chore: bump sgl-kernel v0.3.6.post2 (#9475) 2025-08-21 23:02:08 -07:00
kousakawang
5fd311d33e [code clean] add H20 cutlass groupGemm default config (#9333) 2025-08-21 19:23:29 -07:00
    Co-authored-by: wanghanpei <wanghanpei@bytedance.com>
Hubert Lu
704ced1b2e [AMD] Remove the deprecated C10_WARP_SIZE (#9356) 2025-08-21 18:16:35 -07:00
fzyzcjy
e85cb1ce9d Fix quant kernel test errors and benchmark wrong output speeds (#7604) 2025-08-21 03:48:41 -07:00
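Two of the build-related commits above, "Explicitly export CMAKE_BUILD_PARALLEL_LEVEL" (#10193) and "move compile threads to an option to avoid OOM on low memory host" (#10123), both revolve around capping parallel compile jobs. A minimal sketch of the idea, assuming a standard CMake-driven build (the actual sgl-kernel build entry point and option names are not shown in this log, and the value 4 is only illustrative):

```shell
# Cap parallel compile jobs so a low-memory host does not run out of memory
# during the build. CMAKE_BUILD_PARALLEL_LEVEL is a standard CMake environment
# variable honored by `cmake --build`.
export CMAKE_BUILD_PARALLEL_LEVEL=4
echo "building with $CMAKE_BUILD_PARALLEL_LEVEL parallel compile jobs"
```

Lowering this value trades build time for peak memory, since each parallel compiler process holds its own working set.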