Author | Commit | Message | Date
Peng Zhang | c28ad1990d | [1/n] chore: decouple quantization implementation from vLLM dependency (#7992) | 2025-07-16 15:56:26 -07:00
Qi Yuhang | c268c11c71 | [feat]Support fusion kernel for constructing quant input and scale factor for fp8_blockwise_scaled_grouped_mm (#8023) | 2025-07-15 00:02:44 -07:00
ykcombat | 1ebec1a8b0 | [Feature] CUDA Green Context Support (#7649) | 2025-07-15 02:49:16 +08:00
Qi Yuhang | 26118a133d | [fix]Update unitest for fp8_blockwise_scaled_grouped_mm kernel (#7932) | 2025-07-11 14:29:13 -07:00
SijiaYang | da3890e82a | [1/n]: add cutlass W4A8 moe kernel for hopper architecture (#7772) (Signed-off-by: yangsijia.614 <yangsijia.614@bytedance.com>; Co-authored-by: yicwang <yichen.wang@bytedance.com>) | 2025-07-04 20:50:12 -07:00
Yi Zhang | 2998c4bdf4 | [optimize] fuse renormalize into moe_topk_softmax (#7744) (Co-authored-by: ispobock <ispobaoke@gmail.com>) | 2025-07-03 12:42:44 -07:00
ayrnb | 2c4feaf308 | Add CUTLASS FP8 Blockscale MoE kernel for Hopper architecture (#7278) (Co-authored-by: HydraQYH <QYH820@Outlook.com>, TianQiLin666666 <1834987979@qq.com>) | 2025-07-02 23:27:03 -07:00
AniZpZ | 8e03b641ba | [1/n] apply wna16marlin kernel in moe weight only quantization (#7683) (Co-authored-by: 晟海 <huangtingwei.htw@antgroup.com>, yych0745 <1398089567@qq.com>, HandH1998 <1335248067@qq.com>, 弋云 <yiyun.wyt@antgroup.com>, walker-ai <2398833647@qq.com>) | 2025-07-01 23:21:25 -07:00
Baizhou Zhang | 7248272ccc | Add dsv3 router gemm kernel (#7627) | 2025-06-29 23:31:55 -07:00
Ke Bao | 04b35190e2 | Add dsv3 fused a gemm to sgl-kernel (#7630) | 2025-06-29 02:52:24 -07:00
Ke Bao | 57ab776910 | Fuse sorted_token_ids padding to moe_align_block_size kernel (#7437) | 2025-06-24 17:44:27 -07:00
Zhiqiang Xie | 34c3f9b2d3 | kvcache io kernels and test case (#7382) | 2025-06-23 11:58:59 -07:00
xutizhou | 506c4928f5 | feat: integrate deepgemm into EPMoE (#6821) (Co-authored-by: tianqilin.99 <tianqilin.99@bytedance.com>, TianQiLin666666 <1834987979@qq.com>, Cheng Wan <54331508+ch-wan@users.noreply.github.com>) | 2025-06-23 01:38:58 -07:00
Yineng Zhang | 0650e5176f | fix: only enable flash_attn test on sm80 sm90 (#7289) | 2025-06-17 16:56:41 -07:00
AniZpZ | 3eb4a800e8 | Fix AWQ Dequant and Weight Loading of deepseek v2 (#6842) | 2025-06-17 13:45:10 -07:00
Lianmin Zheng | cfceb83d05 | Fix sampling for speculative decoding & simplify kernels (#7207) | 2025-06-16 03:28:30 -07:00
JieXin Liang | ab1a4fa5cb | [fix] fix cutlass_mla_backend with cuda_graph and add sm_scale for sgl-kernel cutlass_mla (#7184) | 2025-06-14 12:45:41 -07:00
fzyzcjy | 5c66c4424f | Support new DeepGEMM format in per token group quant (#7146) | 2025-06-13 02:00:22 -07:00
fzyzcjy | aa46ed34d2 | Remove 200us slow concat kernel (part 1: kernel) (#7145) | 2025-06-13 01:58:29 -07:00
Yuan Luo | 84727a5139 | [sgl-kernel] Add cuda kernel for moe_ep_silu_and_mul (#6919) (Co-authored-by: luoyuan.luo <luoyuan.luo@antgroup.com>) | 2025-06-11 20:43:08 -07:00
JieXin Liang | 18efb5e8e0 | [perf][sgl-kernel] extend cutlass_mla_decode to support num_head < 128 (#6929) | 2025-06-08 19:37:34 -07:00
Yuan Luo | 43baba649e | [EP] Add cuda kernel for moe_ep_post_reorder (#6837) (Co-authored-by: luoyuan.luo <luoyuan.luo@antgroup.com>) | 2025-06-05 00:33:47 -07:00
zyksir | 8e3797be1c | support 1 shot allreduce in 1-node and 2-node using mscclpp (#6277) | 2025-06-04 22:11:24 -07:00
Cheng Wan | 81964328b7 | Set num_fused_shared_experts as num_shared_experts when shared_experts fusion is not disabled (#6736) | 2025-06-04 15:53:22 -07:00
Xiaoyu Zhang | bd75690f4e | fix ep_moe_reorder kernel bugs (#6858) (Co-authored-by: JieXin Liang <Alcanderian@users.noreply.github.com>) | 2025-06-04 19:13:59 +08:00
Cheng Wan | 8a5480528d | [Refactor] Rename n_share_experts_fusion as num_fused_shared_experts (#6735) | 2025-06-03 17:48:24 -07:00
Brayden Zhong | 006ead9dcb | [FA][Test] Fix Sparse FA test (#6306) | 2025-05-26 01:27:48 -07:00
Yuan Luo | 121f92c583 | Add main for merge state tests (#6492) (Co-authored-by: luoyuan.luo <luoyuan.luo@antgroup.com>) | 2025-05-21 21:56:25 -07:00
HandH1998 | 4d643f6c7a | [1/2] Support Qserve (#6457) (Co-authored-by: yych0745 <1398089567@qq.com>, sleepcoo <sleepcoo@gmail.com>) | 2025-05-21 19:48:59 -07:00
Elfie Guo | 6fc9357503 | [2/2] Add python wrapper for CUTLASS FP8 Blockscale MoE Kernel. (#5694) | 2025-05-16 13:14:07 -07:00
Lianmin Zheng | e8e18dcdcc | Revert "fix some typos" (#6244) | 2025-05-12 12:53:26 -07:00
applesaucethebun | d738ab52f8 | fix some typos (#6209) (Co-authored-by: Brayden Zhong <b8zhong@uwaterloo.ca>) | 2025-05-13 01:42:38 +08:00
applesaucethebun | 2ce8793519 | Add typo checker in pre-commit (#6179) (Co-authored-by: Brayden Zhong <b8zhong@uwaterloo.ca>) | 2025-05-11 12:55:00 +08:00
Trevor Morris | 0ab3f437ab | Cutlass MLA: Disable split kv due to https://github.com/NVIDIA/cutlass/issues/2274 (#6101) | 2025-05-08 18:44:30 -07:00
PGFLMG | 08acdb5c3d | [Feat] Scale up fa3 kernel to sm8x arch (#5912) (Co-authored-by: zhyncs <me@zhyncs.com>) | 2025-04-30 13:59:36 -07:00
PGFLMG | ee71ed8a41 | [Feat] QWen-1M context support[1/2]: Update block sparse attention backend utils kernel (#5847) (Co-authored-by: sighingnow <sighingnow@gmail.com>) | 2025-04-28 11:03:17 -07:00
Yineng Zhang | 15fabcc07f | fix sgl-kernel unit tests (#5666) | 2025-04-23 01:18:30 -07:00
Elfie Guo | e62c49557d | [1/2] Add FP8 Blockscale MoE CUTLASS kernel for Blackwell (#5281) | 2025-04-22 22:28:20 -07:00
Xiaoyu Zhang | 8e09b37077 | Sgl kernel fused_moe_gate support n_shared_experts (#5440) | 2025-04-17 23:05:15 -07:00
PGFLMG | c08a717c77 | [Feat] Update sgl-kernel flashinfer to latest main version (#5500) (Co-authored-by: zhyncs <me@zhyncs.com>) | 2025-04-17 12:43:23 -07:00
Baizhou Zhang | 81c891111f | Add test for flash_attn_varlen_func kernel (#5484) | 2025-04-17 01:42:56 -07:00
Trevor Morris | e8f62b20ca | BLackwell cutlass mla: Add check for bad page size/block num combinations (#5431) | 2025-04-15 14:07:42 -07:00
DefTruth | 388e15c0db | kernel: support slightly faster merge_state_v2 cuda kernel (#5381) | 2025-04-14 21:28:23 -07:00
Yineng Zhang | b62e7e99b8 | feat: adapt merge_state (#5337) | 2025-04-12 21:14:04 -07:00
PGFLMG | 4879e50c6d | [Feat] Add sparse attn to sgl-kernel (#5327) | 2025-04-12 11:36:36 -07:00
Zhaoyi Li | 3c9740d200 | update variable naming and comments for rocm (#5299) | 2025-04-11 23:15:05 -07:00
Trevor Morris | f65b8d5c89 | Blackwell Cutlass MLA kernel (#5142) | 2025-04-11 22:16:51 -07:00
Yineng Zhang | 136b8e6afb | fix: remove cublas_grouped_gemm (#5307) | 2025-04-11 16:22:37 -07:00
PGFLMG | ed01b4515e | [Misc] Clean sgl-kernel test (#5216) | 2025-04-10 11:28:41 -07:00