| Author | Commit | Message | Date |
|---|---|---|---|
| Zhengyi Lai | 81fd2b0ee0 | fix(deepep): resolve benchmark failure on 4×IB-card setup by aligning tuning config with DeepEP commit bdd119f8 (#11965) | 2025-10-22 21:20:54 -07:00 |
| Liangsheng Yin | 9d61205dac | [lint] improve ruff check (#11922) · Co-authored-by: Xiaoyu Zhang <35585791+BBuf@users.noreply.github.com> | 2025-10-22 11:32:50 +08:00 |
| Cheng Wan | 5b214b50b6 | [Refactor] move deep_gemm_wrapper out of quantization (#11784) | 2025-10-17 18:57:54 -07:00 |
| Cheng Wan | 3c06b673af | [8/N] MoE Refactor: deprecate EPMoE (#11211) | 2025-10-07 21:51:41 -07:00 |
| Yuan Luo | 4f42c8cd3e | [sgl-kernel] Support float64 moe_sum_reduce cuda kernel (#11068) · Co-authored-by: luoyuan.luo <luoyuan.luo@antgroup.com> | 2025-10-07 14:31:11 +00:00 |
| Yuan Luo | 590f2da052 | [Feat] Support Torch Symm Mem AllReduce (#10571) · Co-authored-by: luoyuan.luo <luoyuan.luo@antgroup.com> | 2025-10-05 13:55:19 -07:00 |
| Yuan Luo | 42245551ef | [sgl-kernel] Optimize concat_mla_k kernel (#10543) · Co-authored-by: luoyuan.luo <luoyuan.luo@antgroup.com> · Co-authored-by: PGFLMG <1106310035@qq.com> | 2025-09-28 23:04:22 +08:00 |
| lukec | 77830a265e | Add fuse_moe per-channel tune (#10915) | 2025-09-25 21:12:09 +08:00 |
| Xiaoyu Zhang | c4e314f986 | Restruct sgl-kernel benchmark (#10861) | 2025-09-25 07:45:25 +08:00 |
| Yiakwy | 984730b732 | add tunning files for QWEN-3-NEXT (#10794) | 2025-09-23 12:46:30 -07:00 |
| Yuan Luo | 616a3e20df | [sgl-kernel] Support moe_sum_reduce cuda kernel (#10321) · Co-authored-by: luoyuan.luo <luoyuan.luo@antgroup.com> · Co-authored-by: Xiaoyu Zhang <35585791+BBuf@users.noreply.github.com> | 2025-09-19 14:12:09 +08:00 |
| strgrb | fac07c9b08 | Support LingV2 model (#10359) · Co-authored-by: 羽癫 <yudian.zy@antgroup.com> · Co-authored-by: guoyuhong <yuhong.gyh@antgroup.com> | 2025-09-11 23:53:52 -07:00 |
| Yuan Luo | cb3918a091 | Optimize moe_sum_reduce_kernel (#9477) · Co-authored-by: luoyuan.luo <luoyuan.luo@antgroup.com> · Co-authored-by: Xiaoyu Zhang <35585791+BBuf@users.noreply.github.com> | 2025-09-07 09:16:18 +08:00 |
| Xiaoyu Zhang | b1fb7e458c | [benchmark] add flashinfer_allreduce_fusion benchmark (#9937) | 2025-09-03 16:31:01 +08:00 |
| Kaixi Hou | 5c34b4f1c7 | [NVIDIA] [2/N] Optimize silu_and_mul_scaled_fp4_grouped_quant perf (#9556) | 2025-08-29 17:17:03 -07:00 |
| ehuaa | 8f7b1c31e8 | Add A100 fused MoE kernel configs for Dpsk (#9677) | 2025-08-26 20:49:48 -07:00 |
| Yineng Zhang | f8b757bcac | fix: resolve tuning fused moe issue (#9587) | 2025-08-25 01:41:15 -07:00 |
| Even Zhou | de2dd73831 | Revert "[feature] Rework Ascend NPU graph support" (#9385) | 2025-08-20 00:35:10 -07:00 |
| Even Zhou | 3680d6f88b | [feature] Rework Ascend NPU graph support (#9350) · Co-authored-by: ronnie_zheng <zl19940307@163.com> · Co-authored-by: yezhifeng (D) <y00897525@china.huawei.com> · Co-authored-by: anon189Ty <Stari_Falcon@outlook.com> · Co-authored-by: Maksim <makcum888e@mail.ru> · Co-authored-by: ssshinigami <44640852+ssshinigami@users.noreply.github.com> | 2025-08-19 20:32:27 -07:00 |
| Chang Su | 46fe8b8cb2 | [CI] Fix lint issues (#9361) | 2025-08-19 13:05:36 -07:00 |
| mpashkovskiy | a3b810ebdb | fix: enable multi-GPU Triton fused MoE tuning (#6295) | 2025-08-19 10:16:58 -07:00 |
| Even Zhou | f4fafacc5d | Revert "[feature] Ascend NPU graph support (#8027)" (#9348) | 2025-08-19 10:11:23 -07:00 |
| Yuan Luo | 968e181826 | Fix triton_fused_moe unit test and benchmark (#9276) · Co-authored-by: luoyuan.luo <luoyuan.luo@antgroup.com> | 2025-08-18 00:54:33 -07:00 |
| VDV1985 | 94371dbbd6 | [feature] Ascend NPU graph support (#8027) · Co-authored-by: ronnie_zheng <zl19940307@163.com> · Co-authored-by: yezhifeng (D) <y00897525@china.huawei.com> · Co-authored-by: anon189Ty <Stari_Falcon@outlook.com> · Co-authored-by: Maksim <makcum888e@mail.ru> · Co-authored-by: ssshinigami <44640852+ssshinigami@users.noreply.github.com> | 2025-08-16 17:25:17 -07:00 |
| Cheng Wan | 295895120d | [6/N] MoE Refactor: Cleanup MoE-related configs (#8849) | 2025-08-14 21:14:53 -07:00 |
| Ke Bao | 0475448ee3 | Optimize triton swa kernel by skipping computation (#8860) | 2025-08-06 21:37:50 +08:00 |
| Yineng Zhang | 1466c1b896 | feat: support glm4 tuning (#8473) | 2025-07-28 14:32:58 -07:00 |
| Yuxuan Zhang | 6d6a8bc278 | GLM-4.5 Model Support (#8224) · Co-authored-by: Lifu Huang <lifu.hlf@gmail.com> · Co-authored-by: Binyao Jiang <byjiang1996@gmail.com> · Co-authored-by: Stefan He <hebiaobuaa@gmail.com> | 2025-07-27 22:54:07 -07:00 |
| Cheng Wan | abda2542d5 | Fix tuning_fused_moe_triton.py (#8175) | 2025-07-19 17:33:50 -07:00 |
| Yuan Luo | 253454de9b | Integrate triton moe kernel (#7689) · Co-authored-by: luoyuan.luo <luoyuan.luo@antgroup.com> | 2025-07-06 20:05:49 -07:00 |
| Xiaoyu Zhang | 0ae1e9a755 | refine fused_moe benchmark (#7221) | 2025-06-15 21:21:32 -07:00 |
| Quanfeng Li | ef32677444 | Fix positional argument (#7093) | 2025-06-11 18:31:13 -07:00 |
| Xiaoyu Zhang | 3712abfaf9 | Fuse routed scaling factor in deepseek (#6970) | 2025-06-08 15:24:24 -07:00 |
| Xiaoyu Zhang | fa3592cfeb | rebase h20 fused_moe config (#6966) | 2025-06-08 05:01:34 -07:00 |
| Yineng Zhang | 1fb76ebb93 | Revert "Fuse routed scaling factor in topk_reduce kernel (#6220)" (#6968) | 2025-06-07 21:02:49 -07:00 |
| Xiaoyu Zhang | 515ef4facb | Fuse routed scaling factor in topk_reduce kernel (#6220) | 2025-06-07 11:06:50 -07:00 |
| zyksir | 8e3797be1c | support 1 shot allreduce in 1-node and 2-node using mscclpp (#6277) | 2025-06-04 22:11:24 -07:00 |
| Cheng Wan | 81964328b7 | Set num_fused_shared_experts as num_shared_experts when shared_experts fusion is not disabled (#6736) | 2025-06-04 15:53:22 -07:00 |
| Cheng Wan | 8a5480528d | [Refactor] Rename n_share_experts_fusion as num_fused_shared_experts (#6735) | 2025-06-03 17:48:24 -07:00 |
| JieXin Liang | d9d35def3d | [test] add ut and bm for get_last_loc (#6746) | 2025-05-29 11:47:21 -07:00 |
| fzyzcjy | 6df81e8a39 | Support tuning DeepEP configs (#6742) | 2025-05-29 08:12:22 -07:00 |
| ChangyiYang | 485a023bd8 | refactor apply_w8a8_block_fp8_linear in fp (#6545) | 2025-05-29 00:15:11 -07:00 |
| Xiaoyu Zhang | 076103535c | fix log_info_on_rank0 error when run benchmark (#6260) | 2025-05-28 00:20:01 -07:00 |
| Yuan Luo | c087ddd686 | Refine pre_reorder_triton_kernel slightly to improve performance (#6627) · Co-authored-by: luoyuan.luo <luoyuan.luo@antgroup.com> | 2025-05-28 00:15:23 -07:00 |
| fzyzcjy | ef8ec07b2c | Support tuning moe for llama 4 model (#6042) | 2025-05-12 15:47:01 -07:00 |
| Lianmin Zheng | e8e18dcdcc | Revert "fix some typos" (#6244) | 2025-05-12 12:53:26 -07:00 |
| applesaucethebun | d738ab52f8 | fix some typos (#6209) · Co-authored-by: Brayden Zhong <b8zhong@uwaterloo.ca> | 2025-05-13 01:42:38 +08:00 |
| Lifu Huang | 6e2da51561 | Replace time.time() to time.perf_counter() for benchmarking. (#6178) · Signed-off-by: Lifu Huang <lifu.hlf@gmail.com> | 2025-05-11 14:32:49 -07:00 |
| Xiaoyu Zhang | 1cc326032d | simplify fused_moe config logging (#5801) | 2025-04-28 17:04:54 -07:00 |
| Yi Zhang | a0251a3fd6 | add fused moe config for qwen3moe fp8/bf16 (#5849) | 2025-04-28 11:55:52 -07:00 |