Commit Graph

123 Commits

Hubert Lu
af4b9bae95 [AMD] Add silu_and_mul, gelu_and_mul, gelu_tanh_and_mul, and gelu_quick kernels for AMD GPUs (#7135)
Co-authored-by: yiakwy-xpu-ml-framework-team <961186938@qq.com>
Co-authored-by: HAI <hixiao@gmail.com>
2025-07-24 23:44:28 -07:00
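Note: these fused activation kernels follow the convention (shared with vLLM) that the last dimension of the input holds the gate and up projections side by side; the kernel applies the activation to the first half and multiplies by the second, in a single pass. A minimal PyTorch sketch of the expected semantics (function names taken from the commit title; the HIP kernels fuse this, the sketch is unfused):

```python
import torch
import torch.nn.functional as F

def silu_and_mul_ref(x: torch.Tensor) -> torch.Tensor:
    # Split the last dim in half: SiLU-gate the first half with the second.
    d = x.shape[-1] // 2
    return F.silu(x[..., :d]) * x[..., d:]

def gelu_tanh_and_mul_ref(x: torch.Tensor) -> torch.Tensor:
    d = x.shape[-1] // 2
    return F.gelu(x[..., :d], approximate="tanh") * x[..., d:]

def gelu_quick_ref(x: torch.Tensor) -> torch.Tensor:
    # QuickGELU is elementwise (no gate split): x * sigmoid(1.702 * x).
    return x * torch.sigmoid(1.702 * x)
```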
li haoyang
28d4d47280 [Feature] Integrate quick allreduce and select the best allreduce implementation (#6619)
Signed-off-by: Haoyang Li <Haoyang.Li@amd.com>
Co-authored-by: ilmarkov <imarkov@redhat.com>
2025-07-24 20:48:42 -07:00
Yuan Luo
0c8dab9e67 [sgl-kernel] Opt per_token_quant_fp8 with warp reduce (#8130)
Co-authored-by: luoyuan.luo <luoyuan.luo@antgroup.com>
2025-07-23 21:22:59 +08:00
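Note: per-token FP8 quantization derives one scale per row from that row's absolute maximum; the warp-reduce optimization only changes how the absmax is reduced (shuffle intrinsics instead of shared memory), not the math. A hedged PyTorch sketch of the semantics (the 1e-10 floor is an assumption to avoid division by zero):

```python
import torch

FP8_E4M3_MAX = 448.0  # largest finite value of float8_e4m3fn

def per_token_quant_fp8_ref(x: torch.Tensor):
    # One scale per row, chosen so the row's absmax maps to FP8 max.
    absmax = x.abs().amax(dim=-1, keepdim=True).clamp(min=1e-10)
    scale = absmax / FP8_E4M3_MAX
    q = (x / scale).clamp(-FP8_E4M3_MAX, FP8_E4M3_MAX).to(torch.float8_e4m3fn)
    return q, scale.squeeze(-1)
```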
Zhiqiang Xie
b43263307f Hicache IO kernel refactoring (#8264) 2025-07-23 16:49:03 +08:00
Baizhou Zhang
282eb59ff3 Add bf16 output option for dsv3_router_gemm kernel (#7999) 2025-07-20 09:49:37 +08:00
Peng Zhang
719b29f218 feat: enhance green context stream creation robustness with backward compatibility (#8136) 2025-07-18 02:45:03 -07:00
Qi Yuhang
6e92da8fca [Fix][Ready] Fix register spilling in cutlass nvfp4 gemm kernel on Blackwell (#8127) 2025-07-17 20:49:36 -07:00
Yuan Luo
af1cc8fe2d [kernel] opt moe align block kernel by block/warp scan algorithm (#7884) 2025-07-17 19:33:02 +08:00
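Note: moe_align_block_size groups the routed token slots by expert and pads each expert's group up to a multiple of the GEMM block size, so every block is owned by exactly one expert; the block/warp-scan change parallelizes the prefix-sum step. A hedged Python sketch of the output contract (pad_id is conventionally topk_ids.numel(); the kernel replaces the serial cumsum and fill loop below):

```python
import torch

def moe_align_block_size_ref(topk_ids: torch.Tensor, block_size: int,
                             num_experts: int, pad_id: int):
    flat = topk_ids.flatten()
    counts = torch.bincount(flat, minlength=num_experts)
    # Round each expert's token count up to a multiple of block_size.
    padded = (counts + block_size - 1) // block_size * block_size
    num_tokens_post_pad = int(padded.sum())
    sorted_ids = torch.full((num_tokens_post_pad,), pad_id, dtype=torch.int64)
    # Which expert owns each block of sorted_ids.
    expert_ids = torch.repeat_interleave(
        torch.arange(num_experts), padded // block_size)
    offsets = torch.cumsum(padded, 0) - padded  # start slot of each expert
    fill = offsets.clone()
    for slot, e in enumerate(flat.tolist()):
        sorted_ids[fill[e]] = slot  # slot indexes the flattened (token, k) pairs
        fill[e] += 1
    return sorted_ids, expert_ids, num_tokens_post_pad
```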
Peng Zhang
6dc4af4937 fix greenctx stream compatibility (#8090) 2025-07-16 07:08:46 -07:00
ykcombat
1ebec1a8b0 [Feature] CUDA Green Context Support (#7649) 2025-07-15 02:49:16 +08:00
likesen-alibaba
4a0d19198b Fix bug of deepseek-v3 under DP+EP mode with large batchsize/seqlen (#6449) 2025-07-10 01:19:56 -07:00
Chunyuan WU
ac80f4da57 [CPU] [FP8] set SGLANG_CPU_FP8_CVT_FTZ in CMakeLists.txt (#7885) 2025-07-09 01:53:53 -07:00
Chunyuan WU
128f16a817 [CPU] convert topk_weights to fp32 for INT8 and FP8 paths (for llama4) and fix LmHead weight pack (#7818) 2025-07-08 19:27:24 -07:00
Ke Bao
a3398d8478 Optimize moe align block size kernel (#7794) 2025-07-07 09:20:30 +08:00
Lianmin Zheng
5589b75024 Add treemask mode to build_eagle_tree & release sgl-kernel 0.2.3 (#7756)
Co-authored-by: Pranjal Shankhdhar <pranjal.ssh@gmail.com>
2025-07-05 12:17:05 -07:00
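Note: in EAGLE-style speculative decoding the draft tokens form a tree, and each draft token may attend only to itself and its ancestors; a tree-mask mode materializes that ancestor relation as an attention mask. A hedged sketch of the idea from parent pointers (the actual layout produced by build_eagle_tree is not shown in this log):

```python
import torch

def tree_attention_mask(parents: list[int]) -> torch.Tensor:
    """parents[i] is the parent index of draft token i (-1 for the root).
    mask[i, j] is True iff token i may attend to token j, i.e. j is i
    itself or an ancestor of i in the draft tree."""
    n = len(parents)
    mask = torch.zeros(n, n, dtype=torch.bool)
    for i in range(n):
        j = i
        while j != -1:
            mask[i, j] = True
            j = parents[j]
    return mask
```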
Mick
c797322280 fix: fix apply_shuffle_mul_sum (#7444) 2025-07-04 23:23:30 -07:00
Qi Yuhang
8e9fb43d82 Optimize Hopper CUTLASS FP8 Blockwise Grouped GEMM Kernel in Small K Scenario (#7782) 2025-07-04 22:25:49 -07:00
SijiaYang
da3890e82a [1/n]: add cutlass W4A8 moe kernel for hopper architecture (#7772)
Signed-off-by: yangsijia.614 <yangsijia.614@bytedance.com>
Co-authored-by: yicwang <yichen.wang@bytedance.com>
2025-07-04 20:50:12 -07:00
Yi Zhang
2998c4bdf4 [optimize] fuse renormalize into moe_topk_softmax (#7744)
Co-authored-by: ispobock <ispobaoke@gmail.com>
2025-07-03 12:42:44 -07:00
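Note: MoE routing computes a softmax over the expert logits, keeps the top-k probabilities, and (when renormalization is enabled) rescales the kept weights to sum to 1. Before this commit the rescale ran as a separate op; fusing it into the topk-softmax kernel saves a launch. Unfused reference semantics:

```python
import torch

def moe_topk_softmax_renorm_ref(gating_logits: torch.Tensor, topk: int):
    probs = torch.softmax(gating_logits, dim=-1)
    weights, ids = torch.topk(probs, topk, dim=-1)
    # Renormalize the selected top-k weights to sum to 1 per token.
    weights = weights / weights.sum(dim=-1, keepdim=True)
    return weights, ids
```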
ayrnb
2c4feaf308 Add CUTLASS FP8 Blockscale MoE kernel for Hopper architecture (#7278)
Co-authored-by: HydraQYH <QYH820@Outlook.com>
Co-authored-by: TianQiLin666666 <1834987979@qq.com>
2025-07-02 23:27:03 -07:00
Chunyuan WU
36cc3ffdc7 [CPU] [sgl-kernel] set dispatch key of initialize to CatchAll (#7734) 2025-07-02 22:39:24 -07:00
YanbingJiang
b044400dd3 Support non-contiguous query input for extend/decode attention (#7462) 2025-07-02 19:59:45 -07:00
AniZpZ
8e03b641ba [1/n] apply wna16marlin kernel in moe weight only quantization (#7683)
Co-authored-by: 晟海 <huangtingwei.htw@antgroup.com>
Co-authored-by: yych0745 <1398089567@qq.com>
Co-authored-by: HandH1998 <1335248067@qq.com>
Co-authored-by: 弋云 <yiyun.wyt@antgroup.com>
Co-authored-by: walker-ai <2398833647@qq.com>
2025-07-01 23:21:25 -07:00
Chunyuan WU
6005eceee3 [CPU] remove process_group from inputs of shm_allreduce and shm_allgather (#7486) 2025-06-30 21:54:11 -07:00
Baizhou Zhang
7248272ccc Add dsv3 router gemm kernel (#7627) 2025-06-29 23:31:55 -07:00
Chunyuan WU
c5131f7a2f [CPU] add c++ kernel to bind CPU cores and memory node (#7524) 2025-06-29 19:45:25 -07:00
Ke Bao
04b35190e2 Add dsv3 fused a gemm to sgl-kernel (#7630) 2025-06-29 02:52:24 -07:00
Chunyuan WU
7eb47b0f3d [CPU] [BF16] Call fused_experts_cpu, weight_packed_linear and bmm_cpu kernels in DeepSeek model (#6641)
Co-authored-by: Thien Tran <gau.nernst@yahoo.com.sg>
2025-06-25 01:43:33 -07:00
Ke Bao
57ab776910 Fuse sorted_token_ids padding to moe_align_block_size kernel (#7437) 2025-06-24 17:44:27 -07:00
Zhiqiang Xie
34c3f9b2d3 kvcache io kernels and test case (#7382) 2025-06-23 11:58:59 -07:00
AniZpZ
3eb4a800e8 Fix AWQ Dequant and Weight Loading of deepseek v2 (#6842) 2025-06-17 13:45:10 -07:00
Lianmin Zheng
cfceb83d05 Fix sampling for speculative decoding & simplify kernels (#7207) 2025-06-16 03:28:30 -07:00
JieXin Liang
ab1a4fa5cb [fix] fix cutlass_mla_backend with cuda_graph and add sm_scale for sgl-kernel cutlass_mla (#7184) 2025-06-14 12:45:41 -07:00
fzyzcjy
5c66c4424f Support new DeepGEMM format in per token group quant (#7146) 2025-06-13 02:00:22 -07:00
fzyzcjy
aa46ed34d2 Remove 200us slow concat kernel (part 1: kernel) (#7145) 2025-06-13 01:58:29 -07:00
Yuan Luo
84727a5139 [sgl-kernel] Add cuda kernel for moe_ep_silu_and_mul (#6919)
Co-authored-by: luoyuan.luo <luoyuan.luo@antgroup.com>
2025-06-11 20:43:08 -07:00
fzyzcjy
19995dd78e Tiny fix cutlass_mla_get_workspace_size stub incorrect signature (#7057) 2025-06-10 12:27:57 -07:00
YanbingJiang
fcde67b016 CPU: map changes from development branch in sgl-kernel (#6833)
Co-authored-by: mingfeima <mingfei.ma@intel.com>
2025-06-10 01:08:15 -07:00
JieXin Liang
18efb5e8e0 [perf][sgl-kernel] extend cutlass_mla_decode to support num_head < 128 (#6929) 2025-06-08 19:37:34 -07:00
Elfie Guo
3e56f557fd Add a CUDA kernel for fusing mapping and weighted sum for MoE. (#6916)
Co-authored-by: Elfie Guo <elfiegxf@gmail.com>
2025-06-07 15:24:39 -07:00
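Note: after the grouped expert GEMMs, each token's top-k expert outputs live in a permuted layout and must be gathered back and combined with the routing weights; this commit fuses the gather (mapping) and the weighted reduction into one kernel. A hedged sketch of the unfused semantics (tensor names are illustrative, not the kernel's signature):

```python
import torch

def moe_weighted_sum_ref(expert_out: torch.Tensor,  # [num_tokens * topk, hidden]
                         mapping: torch.Tensor,     # [num_tokens, topk] rows into expert_out
                         weights: torch.Tensor):    # [num_tokens, topk] routing weights
    # Gather each token's top-k expert outputs, then weighted-sum over k.
    gathered = expert_out[mapping]                  # [num_tokens, topk, hidden]
    return (gathered * weights.unsqueeze(-1)).sum(dim=1)
```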
Xiaoyu Zhang
8b5f83ed3b reduce torch.zeros overhead in moe align block size kernel (#6369) 2025-06-07 02:47:36 -07:00
Yuan Luo
43baba649e [EP] Add cuda kernel for moe_ep_post_reorder (#6837)
Co-authored-by: luoyuan.luo <luoyuan.luo@antgroup.com>
2025-06-05 00:33:47 -07:00
zyksir
8e3797be1c support one-shot allreduce in 1-node and 2-node setups using mscclpp (#6277) 2025-06-04 22:11:24 -07:00
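Note: a one-shot allreduce finishes in a single communication round: every rank pushes its buffer to all peers and reduces locally, which beats ring/two-shot algorithms at the small message sizes typical of tensor-parallel decoding; mscclpp supplies the peer-to-peer transport. Semantically it is just an all-gather plus a local sum, as in this hedged torch.distributed sketch:

```python
import torch
import torch.distributed as dist

def one_shot_allreduce_ref(x: torch.Tensor) -> torch.Tensor:
    # Semantic reference only: one round of communication (all-gather),
    # then each rank reduces locally. The mscclpp kernel achieves the same
    # result with direct peer-to-peer writes instead of a collective.
    world = dist.get_world_size()
    bufs = [torch.empty_like(x) for _ in range(world)]
    dist.all_gather(bufs, x)
    return torch.stack(bufs).sum(0)
```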
Cheng Wan
81964328b7 Set num_fused_shared_experts as num_shared_experts when shared_experts fusion is not disabled (#6736) 2025-06-04 15:53:22 -07:00
Xiaoyu Zhang
bd75690f4e fix ep_moe_reorder kernel bugs (#6858)
Co-authored-by: JieXin Liang <Alcanderian@users.noreply.github.com>
2025-06-04 19:13:59 +08:00
Cheng Wan
8a5480528d [Refactor] Rename n_share_experts_fusion as num_fused_shared_experts (#6735) 2025-06-03 17:48:24 -07:00
jianan-gu
ff00895c46 Add CPU optimized kernels for topk and rope fusions (#6456) 2025-06-02 17:37:34 -07:00
Pavani Majety
eb38c7d1ca [1/2] Add Kernel support for Cutlass based Fused FP4 MoE (#6093)
Signed-off-by: Pavani Majety <pmajety@nvidia.com>
2025-06-02 13:48:03 -07:00
Yuan Luo
55444ed667 [EP] Add cuda kernel for moe_ep_pre_reorder (#6699)
Co-authored-by: luoyuan.luo <luoyuan.luo@antgroup.com>
2025-06-01 20:49:01 -07:00
Chunyuan WU
3ded6235c9 Add fp8 fused_experts kernel for CPU in sgl-kernel and add UT (#6404) 2025-05-23 02:01:55 -07:00