Commit Graph

76 Commits

Author SHA1 Message Date
Yineng Zhang
8db3ac55a9 chore: bump sgl-kernel v0.1.6.post1 (#6955) 2025-06-07 15:25:46 -07:00
Elfie Guo
3e56f557fd Add a CUDA kernel for fusing mapping and weighted sum for MoE. (#6916) 2025-06-07 15:24:39 -07:00
Co-authored-by: Elfie Guo <elfiegxf@gmail.com>
Yineng Zhang
d664ca18f2 chore: bump sgl-kernel v0.1.6 (#6943) 2025-06-07 00:25:22 -07:00
Pavani Majety
0df6765c83 [CUTLASS-FP4-MOE] Introduce CutlassMoEParams class for easy initialization of Cutlass Grouped Gems Metadata (#6887) 2025-06-05 13:13:14 -07:00
Signed-off-by: Pavani Majety <pmajety@nvidia.com>
Yuan Luo
43baba649e [EP] Add cuda kernel for moe_ep_post_reorder (#6837) 2025-06-05 00:33:47 -07:00
Co-authored-by: luoyuan.luo <luoyuan.luo@antgroup.com>
zyksir
8e3797be1c support 1 shot allreduce in 1-node and 2-node using mscclpp (#6277) 2025-06-04 22:11:24 -07:00
Cheng Wan
81964328b7 Set num_fused_shared_experts as num_shared_experts when shared_experts fusion is not disabled (#6736) 2025-06-04 15:53:22 -07:00
Cheng Wan
8a5480528d [Refactor] Rename n_share_experts_fusion as num_fused_shared_experts (#6735) 2025-06-03 17:48:24 -07:00
Pavani Majety
eb38c7d1ca [1/2] Add Kernel support for Cutlass based Fused FP4 MoE (#6093) 2025-06-02 13:48:03 -07:00
Signed-off-by: Pavani Majety <pmajety@nvidia.com>
Yuan Luo
55444ed667 [EP] Add cuda kernel for moe_ep_pre_reorder (#6699) 2025-06-01 20:49:01 -07:00
Co-authored-by: luoyuan.luo <luoyuan.luo@antgroup.com>
Wenxuan Tan
c429919def misc: cache is_hopper_arch (#6799) 2025-06-01 15:28:31 -07:00
Huapeng Zhou
2f7420bc84 [Feat] Enable PDL automatically on Hopper architecture (#5981) 2025-06-01 12:30:17 -07:00
Yineng Zhang
b520d02888 chore: bump sgl-kernel v0.1.5 (#6794) 2025-05-31 14:54:00 -07:00
Yineng Zhang
d71f3f0a2a chore: bump sgl-kernel v0.1.4 (#6522) 2025-05-22 09:47:42 -07:00
HandH1998
4d643f6c7a [1/2] Support Qserve (#6457) 2025-05-21 19:48:59 -07:00
Co-authored-by: yych0745 <1398089567@qq.com>
Co-authored-by: sleepcoo <sleepcoo@gmail.com>
Yineng Zhang
3d7f7a43c8 chore: bump sgl-kernel v0.1.3 (#6368) 2025-05-17 00:15:55 -07:00
Elfie Guo
6fc9357503 [2/2] Add python wrapper for CUTLASS FP8 Blockscale MoE Kernel. (#5694) 2025-05-16 13:14:07 -07:00
Lianmin Zheng
e8e18dcdcc Revert "fix some typos" (#6244) 2025-05-12 12:53:26 -07:00
applesaucethebun
d738ab52f8 fix some typos (#6209) 2025-05-13 01:42:38 +08:00
Co-authored-by: Brayden Zhong <b8zhong@uwaterloo.ca>
Yineng Zhang
45b4dcf037 chore: bump sgl-kernel v0.1.2.post1 (#6195) 2025-05-11 02:24:10 -07:00
applesaucethebun
2ce8793519 Add typo checker in pre-commit (#6179) 2025-05-11 12:55:00 +08:00
Co-authored-by: Brayden Zhong <b8zhong@uwaterloo.ca>
Yineng Zhang
6578cf27de chore: bump sgl-kernel 0.1.2 (#6131) 2025-05-08 15:16:28 -07:00
Stefan He
087751a8f2 Remove unecessary is_fa3_supported check (#6112) 2025-05-08 14:45:33 -07:00
Yineng Zhang
d353d08b4e chore: bump sgl-kernel 0.1.1 (#5932) 2025-04-30 14:01:49 -07:00
PGFLMG
08acdb5c3d [Feat] Scale up fa3 kernel to sm8x arch (#5912) 2025-04-30 13:59:36 -07:00
Co-authored-by: zhyncs <me@zhyncs.com>
Johnny
2c7dbb7cc2 [FEATURE] Enhance platform compatibility for ARM (#5746) 2025-04-29 15:06:16 -07:00
PGFLMG
ee71ed8a41 [Feat] QWen-1M context support[1/2]: Update block sparse attention backend utils kernel (#5847) 2025-04-28 11:03:17 -07:00
Co-authored-by: sighingnow <sighingnow@gmail.com>
Trevor Morris
84810da4ae Add Cutlass MLA attention backend (#5390) 2025-04-27 20:58:53 -07:00
Yineng Zhang
7d0edf3cae chore: bump sgl-kernel 0.1.0 (#5688) 2025-04-23 14:23:59 -07:00
Yineng Zhang
15fabcc07f fix sgl-kernel unit tests (#5666) 2025-04-23 01:18:30 -07:00
Elfie Guo
e62c49557d [1/2] Add FP8 Blockscale MoE CUTLASS kernel for Blackwell (#5281) 2025-04-22 22:28:20 -07:00
Yubo Wang
20f1c8e374 Fix sampler nan check when calling top_k_top_p_sampling_from_probs (#5546) 2025-04-19 21:47:23 -07:00
Yineng Zhang
f28d82997a chore: bump sgl-kernel 0.0.9.post2 (#5518) 2025-04-17 23:42:39 -07:00
Xiaoyu Zhang
8e09b37077 Sgl kernel fused_moe_gate support n_shared_experts (#5440) 2025-04-17 23:05:15 -07:00
PGFLMG
c08a717c77 [Feat] Update sgl-kernel flashinfer to latest main version (#5500) 2025-04-17 12:43:23 -07:00
Co-authored-by: zhyncs <me@zhyncs.com>
Trevor Morris
e8f62b20ca BLackwell cutlass mla: Add check for bad page size/block num combinations (#5431) 2025-04-15 14:07:42 -07:00
Yineng Zhang
6f509d5503 chore: bump sgl-kernel v0.0.9.post1 (#5430) 2025-04-15 11:00:21 -07:00
Yineng Zhang
e940dc4f06 chore: bump sgl-kernel 0.0.9 (#5400) 2025-04-14 21:34:04 -07:00
DefTruth
388e15c0db kernel: support slightly faster merge_state_v2 cuda kernel (#5381) 2025-04-14 21:28:23 -07:00
Yineng Zhang
b62e7e99b8 feat: adapt merge_state (#5337) 2025-04-12 21:14:04 -07:00
Yineng Zhang
b371f7cd36 chore: bump sgl-kernel v0.0.8.post3 (#5332) 2025-04-12 12:53:37 -07:00
PGFLMG
4879e50c6d [Feat] Add sparse attn to sgl-kernel (#5327) 2025-04-12 11:36:36 -07:00
Yineng Zhang
115ae2e728 chore: bump sgl-kernel v0.0.8.post2 (#5317) 2025-04-11 23:42:03 -07:00
Baizhou Zhang
e4155e96d0 Add flash_attn_varlen_func to sgl-kernel (#5315) 2025-04-11 23:36:36 -07:00
Trevor Morris
f65b8d5c89 Blackwell Cutlass MLA kernel (#5142) 2025-04-11 22:16:51 -07:00
Yineng Zhang
4f288113ce fix: update flash attn (#5308) 2025-04-11 16:23:09 -07:00
Yineng Zhang
136b8e6afb fix: remove cublas_grouped_gemm (#5307) 2025-04-11 16:22:37 -07:00
Yineng Zhang
c163bf4ff1 chore: bump sgl-kernel v0.0.8.post1 (#5289) 2025-04-11 02:11:53 -07:00
Yineng Zhang
496dde8491 bump sgl-kernel 0.0.8 (#5089) 2025-04-05 14:28:04 -07:00
Yi Zhang
bcbbf519f9 sgl-kernel transfer custom allreduce from trt kernel to vllm kernel (#5079) 2025-04-05 14:23:20 -07:00