EngineX-Hygon/sglang
sglang/sgl-kernel/csrc/moe at commit 2373faa3171ca6cb00fd8cf8b422ec0017b0bc4a
Latest commit: Ke Bao, 57ab776910, Fuse sorted_token_ids padding to moe_align_block_size kernel (#7437), 2025-06-24 17:44:27 -07:00
cutlass_moe_helper.cu
[1/2] Add FP8 Blockscale MoE CUTLASS kernel for Blackwell (#5281)
2025-04-22 22:28:20 -07:00
ep_moe_reorder_kernel.cu
[EP] Add cuda kernel for moe_ep_post_reorder (#6837)
2025-06-05 00:33:47 -07:00
ep_moe_silu_and_mul_kernel.cu
[sgl-kernel] Add cuda kernel for moe_ep_silu_and_mul (#6919)
2025-06-11 20:43:08 -07:00
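The `ep_moe_silu_and_mul_kernel.cu` entry fuses the SiLU gate activation with the elementwise multiply against the up-projection, a common step in MoE expert FFNs. A minimal NumPy sketch of the operation (not the CUDA implementation; the layout with gate in the first half and up-projection in the second half of the last dimension is an assumption):

```python
import numpy as np

def silu_and_mul(x):
    # x: (num_tokens, 2*d); assumed layout: [gate | up].
    d = x.shape[-1] // 2
    gate, up = x[..., :d], x[..., d:]
    # SiLU(gate) * up, computed elementwise.
    return gate / (1.0 + np.exp(-gate)) * up
```

The fused CUDA kernel avoids materializing the intermediate SiLU result in global memory; the sketch only shows the math being fused.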
fp8_blockwise_moe_kernel.cu
Add a CUDA kernel for fusing mapping and weighted sum for MoE. (#6916)
2025-06-07 15:24:39 -07:00
moe_align_kernel.cu
Fuse sorted_token_ids padding to moe_align_block_size kernel (#7437)
2025-06-24 17:44:27 -07:00
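The `moe_align_kernel.cu` entry implements `moe_align_block_size`: token indices are grouped by their assigned expert, and each expert's group is padded to a multiple of the GEMM block size so the grouped expert GEMM can launch uniform tiles. A hedged host-side sketch (the sentinel value for pad slots and the exact output layout are assumptions, not the kernel's contract):

```python
import numpy as np

def moe_align_block_size(topk_ids, num_experts, block_size):
    # topk_ids: flat array of per-token expert assignments.
    counts = np.bincount(topk_ids, minlength=num_experts)
    # Round each expert's token count up to a multiple of block_size.
    padded = ((counts + block_size - 1) // block_size) * block_size
    sentinel = topk_ids.size  # assumed pad value: one past the last token id
    sorted_ids = np.full(padded.sum(), sentinel, dtype=np.int64)
    offsets = np.concatenate(([0], np.cumsum(padded)[:-1]))
    fill = offsets.copy()
    for tok, e in enumerate(topk_ids):
        sorted_ids[fill[e]] = tok
        fill[e] += 1
    # One expert id per block of block_size slots.
    expert_ids = np.repeat(np.arange(num_experts), padded // block_size)
    return sorted_ids, expert_ids, int(padded.sum())
```

The commit message above refers to fusing the padding of `sorted_token_ids` (here, the sentinel fill) into this same kernel instead of a separate pass.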
moe_fused_gate.cu
Set num_fused_shared_experts as num_shared_experts when shared_experts fusion is not disabled (#6736)
2025-06-04 15:53:22 -07:00
moe_topk_softmax_kernels.cu
Add moe topk softmax templated from vllm (#4302)
2025-03-14 12:03:33 -07:00
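The `moe_topk_softmax_kernels.cu` entry (templated from vLLM) computes the MoE router: a softmax over the gating logits followed by a top-k selection of expert weights and indices per token. A minimal NumPy sketch of that routing step (function name and return convention are illustrative assumptions):

```python
import numpy as np

def topk_softmax(gating_logits, top_k):
    # gating_logits: (num_tokens, num_experts).
    # Numerically stable softmax over the expert dimension.
    z = gating_logits - gating_logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    probs = e / e.sum(axis=-1, keepdims=True)
    # Indices of the top_k largest probabilities per token.
    idx = np.argsort(-probs, axis=-1)[:, :top_k]
    weights = np.take_along_axis(probs, idx, axis=-1)
    return weights, idx
```

The CUDA kernel fuses both steps in one pass per token; the sketch only fixes the semantics being fused.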
nvfp4_blockwise_moe.cu
[1/2] Add Kernel support for Cutlass based Fused FP4 MoE (#6093)
2025-06-02 13:48:03 -07:00
prepare_moe_input.cu
Add a CUDA kernel for fusing mapping and weighted sum for MoE. (#6916)
2025-06-07 15:24:39 -07:00