dd949ace23  Yineng Zhang  2025-08-10 17:34:54 -07:00
  Revert "[1/2][resubmit] sgl-kernel: Fuse routed scaling factor into m… (#9035)

86497d99f2  huangtingwei  2025-08-09 17:16:11 -07:00
  fix page first per layer pf2lf kernel (#8915)
  Co-authored-by: Zhiqiang Xie <xiezhq@stanford.edu>

5c31b35db2  cctry  2025-08-09 17:16:07 -07:00
  [hicache] Optimization for DMA copy (#8245)

591c232f7c  Trevor Morris  2025-08-08 17:55:06 -07:00
  [1/2][resubmit] sgl-kernel: Fuse routed scaling factor into moe_fused_gate (select_experts) (#8770)

444013585d  triple-mu  2025-08-08 00:18:08 -07:00
  Fix typos and unify size(s)/stride(s) API calls (#8799)

08f8f49016  Chunyuan WU  2025-08-04 18:28:31 -07:00
  [CPU][sgl-kernel] biased_grouped_topk: fix correction_bias dtype to float32 (#8212)
  Co-authored-by: jianan-gu <jianan.gu@intel.com>
  Co-authored-by: YanbingJiang <yanbing.jiang@intel.com>

f57d2dc162  Xiaoyu Zhang  2025-08-04 12:55:57 +08:00
  [sgl-kernel] avoid per_token_quant_fp8.cu hardcode sm_count (#8738)

d9def43dcd  Qi Yuhang  2025-08-02 21:13:47 -07:00
  [Perf]Use Cooperative Schedule for H100 & H200 & H800 in fp8_blockwise_scaled_grouped_mm (#8722)

603f5ce020  Liangsheng Yin  2025-08-02 15:23:11 -07:00
  [Bug] fix green context's incompatibility with cuda < 12.4 (#8701)

f9f0138f80  Liangsheng Yin  2025-08-02 20:14:30 +08:00
  Revert "[1/2] sgl-kernel: Fuse routed scaling factor into select_experts" (#8706)
f642524fd9  Trevor Morris  2025-08-01 18:14:24 -07:00
  [1/2] sgl-kernel: Fuse routed scaling factor into select_experts (#8364)

1fe691a429  YanbingJiang  2025-08-01 15:57:19 -07:00
  Fix FP8 block quantization when N or K is not multiples of 128 (#8648)

db7343c992  Stefan He  2025-08-01 09:27:18 -07:00
  fix per token cuda kernel hidden dim cannot divide by 16 (#8543)

6bdd27861b  Peter Pan  2025-08-01 22:01:24 +08:00
  [Kimi K2] dsv3_router_gemm supports NUM_EXPERTS == 384 (#8013)

5d15fb8c9d  Tao He  2025-07-31 22:41:39 +08:00
  [bugifx] QWen-1M context support[2/3] using current cuda stream in the DCA's kernel for bugfix. (#8611)
  Signed-off-by: Tao He <linzhu.ht@alibaba-inc.com>
  Co-authored-by: sa-buc <linzhu.ht@w32d09270.cloud.sqa.na131>

a5f5ab4030  Cheng Wan  2025-07-30 22:19:55 -07:00
  update sgl-kernel for EP: kernel part (#8514)
  Co-authored-by: Xiaoyu Zhang <35585791+BBuf@users.noreply.github.com>
  Co-authored-by: Ke Bao <ispobaoke@gmail.com>

9b9e82539b  Qi Yuhang  2025-07-30 19:49:35 -07:00
  [Fix]Fix index oob in get_group_gemm_starts kernel. (#8564)

3bdcdd134b  Yuan Luo  2025-07-31 00:28:32 +08:00
  [Hot-Fix] moe_aligned_block_size CI failed in AMD (#8461)
  Co-authored-by: luoyuan.luo <luoyuan.luo@antgroup.com>
  Co-authored-by: Xiaoyu Zhang <35585791+BBuf@users.noreply.github.com>
  Co-authored-by: JieXin Liang <Alcanderian@users.noreply.github.com>

7a4309cc8a  Xiaoyu Zhang  2025-07-29 23:31:54 +08:00
  [sgl-kernel performace] fix fp8 quant kernels dispatch __nv_fp8_e4m3 bug to improve performance 10%-20% (#8499)
  Co-authored-by: Ke Bao <ispobaoke@gmail.com>
2262369905  Xiaoyu Zhang  2025-07-28 01:35:43 -07:00
  Revert "[kernel] opt moe align block kernel by block/warp scan algorithm" (#8457)

fb4ce17de6  strgrb  2025-07-28 01:32:46 -07:00
  Fix per_token_group_quant_8bit when hidden_dim // group_size is not divided by 4. (#8449)
  Co-authored-by: Zhang Kaihong <zhangkaihong.zkh@alibaba-inc.com>

91e3d1542e  Baizhou Zhang  2025-07-27 00:36:15 -07:00
  Update Cutlass in sgl-kernel to v4.1 (#8392)

af4b9bae95  Hubert Lu  2025-07-24 23:44:28 -07:00
  [AMD] Add silu_and_mul, gelu_and_mul, gelu_tanh_and_mul, and gelu_quick kernels for AMD GPUs (#7135)
  Co-authored-by: yiakwy-xpu-ml-framework-team <961186938@qq.com>
  Co-authored-by: HAI <hixiao@gmail.com>

28d4d47280  li haoyang  2025-07-24 20:48:42 -07:00
  [Feature] Integrate quick allreduce and select the best allreduce implementation (#6619)
  Signed-off-by: Haoyang Li <Haoyang.Li@amd.com>
  Co-authored-by: ilmarkov <imarkov@redhat.com>

0c8dab9e67  Yuan Luo  2025-07-23 21:22:59 +08:00
  [sgl-kernel] Opt per_token_quant_fp8 with warp reduce (#8130)
  Co-authored-by: luoyuan.luo <luoyuan.luo@antgroup.com>

b43263307f  Zhiqiang Xie  2025-07-23 16:49:03 +08:00
  Hicache IO kernel refactoring (#8264)

282eb59ff3  Baizhou Zhang  2025-07-20 09:49:37 +08:00
  Add bf16 output option for dsv3_router_gemm kernel (#7999)

719b29f218  Peng Zhang  2025-07-18 02:45:03 -07:00
  feat: enchance green context stream creation robust with backward compatibility (#8136)
6e92da8fca  Qi Yuhang  2025-07-17 20:49:36 -07:00
  [Fix][Ready]Fix register spilling in cutlass nvfp4 gemm kernel on Blackwell (#8127)

af1cc8fe2d  Yuan Luo  2025-07-17 19:33:02 +08:00
  [kernel] opt moe align block kernel by block/warp scan algorithm (#7884)

6dc4af4937  Peng Zhang  2025-07-16 07:08:46 -07:00
  fix greenctx stream compability (#8090)

1ebec1a8b0  ykcombat  2025-07-15 02:49:16 +08:00
  [Feature] CUDA Green Context Support (#7649)

4a0d19198b  likesen-alibaba  2025-07-10 01:19:56 -07:00
  Fix bug of deepseek-v3 under DP+EP mode with large batchsize/seqlen (#6449)

ac80f4da57  Chunyuan WU  2025-07-09 01:53:53 -07:00
  [CPU] [FP8] set SGLANG_CPU_FP8_CVT_FTZ in CMakeLists.txt (#7885)

128f16a817  Chunyuan WU  2025-07-08 19:27:24 -07:00
  [CPU]convert topk_weights to fp32 for INT8 and FP8 paths (for llama4) and fix LmHead weight pack (#7818)

a3398d8478  Ke Bao  2025-07-07 09:20:30 +08:00
  Optimize moe align block size kernel (#7794)

5589b75024  Lianmin Zheng  2025-07-05 12:17:05 -07:00
  Add treemask mode to build_eagle_tree & release sgl-kernel 0.2.3 (#7756)
  Co-authored-by: Pranjal Shankhdhar <pranjal.ssh@gmail.com>

c797322280  Mick  2025-07-04 23:23:30 -07:00
  fix: fix apply_shuffle_mul_sum (#7444)

8e9fb43d82  Qi Yuhang  2025-07-04 22:25:49 -07:00
  Optimize Hopper CUTLASS FP8 Blockwise Grouped GEMM Kernel in Small K Scenario (#7782)
da3890e82a  SijiaYang  2025-07-04 20:50:12 -07:00
  [1/n]: add cutlass W4A8 moe kernel for hopper architecture (#7772)
  Signed-off-by: yangsijia.614 <yangsijia.614@bytedance.com>
  Co-authored-by: yicwang <yichen.wang@bytedance.com>

2998c4bdf4  Yi Zhang  2025-07-03 12:42:44 -07:00
  [optimize] fuse renormalize into moe_topk_softmax (#7744)
  Co-authored-by: ispobock <ispobaoke@gmail.com>

2c4feaf308  ayrnb  2025-07-02 23:27:03 -07:00
  Add CUTLASS FP8 Blockscale MoE kernel for Hopper architecture (#7278)
  Co-authored-by: HydraQYH <QYH820@Outlook.com>
  Co-authored-by: TianQiLin666666 <1834987979@qq.com>

36cc3ffdc7  Chunyuan WU  2025-07-02 22:39:24 -07:00
  [CPU] [sgl-kernel] set dispatch key of initialize to CatchAll (#7734)

b044400dd3  YanbingJiang  2025-07-02 19:59:45 -07:00
  Support non-contiguous query input for extend/decode attention (#7462)

8e03b641ba  AniZpZ  2025-07-01 23:21:25 -07:00
  [1/n] apply wna16marlin kernel in moe weight only quantization (#7683)
  Co-authored-by: 晟海 <huangtingwei.htw@antgroup.com>
  Co-authored-by: yych0745 <1398089567@qq.com>
  Co-authored-by: HandH1998 <1335248067@qq.com>
  Co-authored-by: 弋云 <yiyun.wyt@antgroup.com>
  Co-authored-by: walker-ai <2398833647@qq.com>

6005eceee3  Chunyuan WU  2025-06-30 21:54:11 -07:00
  [CPU] remove process_group from inputs of shm_allreduce and shm_allgather (#7486)

7248272ccc  Baizhou Zhang  2025-06-29 23:31:55 -07:00
  Add dsv3 router gemm kernel (#7627)

c5131f7a2f  Chunyuan WU  2025-06-29 19:45:25 -07:00
  [CPU] add c++ kernel to bind CPU cores and memory node (#7524)

04b35190e2  Ke Bao  2025-06-29 02:52:24 -07:00
  Add dsv3 fused a gemm to sgl-kernel (#7630)

7eb47b0f3d  Chunyuan WU  2025-06-25 01:43:33 -07:00
  [CPU] [BF16] Call fused_experts_cpu, weight_packed_linear and bmm_cpu kernel in DeepSeek model (#6641)
  Co-authored-by: Thien Tran <gau.nernst@yahoo.com.sg>