EngineX-Hygon/sglang
sglang/sgl-kernel/csrc/moe (at commit 528bd1ed856e4a9225eef3a4e9eeddff41c8a940)
Latest commit: af1cc8fe2d "[kernel] opt moe align block kernel by block/warp scan algorithm" (#7884), Yuan Luo, 2025-07-17 19:33:02 +08:00
| Name | Last commit | Date |
| --- | --- | --- |
| cutlass_moe/w4a8 | [1/n]: add cutlass W4A8 moe kernel for hopper architecture (#7772) | 2025-07-04 |
| marlin_moe_wna16 | [1/n] apply wna16marlin kernel in moe weight only quantization (#7683) | 2025-07-01 |
| cutlass_moe_helper.cu | [1/2] Add FP8 Blockscale MoE CUTLASS kernel for Blackwell (#5281) | 2025-04-22 |
| ep_moe_reorder_kernel.cu | [EP] Add cuda kernel for moe_ep_post_reorder (#6837) | 2025-06-05 |
| ep_moe_silu_and_mul_kernel.cu | [sgl-kernel] Add cuda kernel for moe_ep_silu_and_mul (#6919) | 2025-06-11 |
| fp8_blockwise_moe_kernel.cu | Optimize Hopper CUTLASS FP8 Blockwise Grouped GEMM Kernel in Small K Scenario (#7782) | 2025-07-04 |
| moe_align_kernel.cu | [kernel] opt moe align block kernel by block/warp scan algorithm (#7884) | 2025-07-17 |
| moe_fused_gate.cu | Set num_fused_shared_experts as num_shared_experts when shared_experts fusion is not disabled (#6736) | 2025-06-04 |
| moe_topk_softmax_kernels.cu | [optimize] fuse renormalize into moe_topk_softmax (#7744) | 2025-07-03 |
| nvfp4_blockwise_moe.cu | [1/2] Add Kernel support for Cutlass based Fused FP4 MoE (#6093) | 2025-06-02 |
| prepare_moe_input.cu | fix: fix apply_shuffle_mul_sum (#7444) | 2025-07-04 |