[Bugfix] fix bmm_transpose op for new CANN versions (#4653)

### What this PR does / why we need it?
Due to a CANN version upgrade, this custom op cannot be used on newer CANN
versions. On the newer versions, the kernel is launched with redundant
vector (AIV) cores even though it only uses cube (AIC) cores, which
misaligns the data copy from UB memory to global memory. This PR restricts
the kernel so that it runs on cube cores only.
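
For context, the pattern the fix uses is to declare the kernel's core type at
the top of the kernel body, so the runtime does not start the unused vector
cores. Below is a minimal sketch, assuming the Ascend C toolchain; the kernel
name and arguments are illustrative, while `KERNEL_TASK_TYPE_DEFAULT` and
`KERNEL_TYPE_AIC_ONLY` are the macros actually added in the diff below:

```cpp
// Minimal sketch of a cube-only Ascend C kernel (kernel name and arguments
// are illustrative, not part of this PR).
extern "C" __global__ __aicore__ void my_cube_only_kernel(GM_ADDR gm_a, GM_ADDR gm_b, GM_ADDR gm_c)
{
    // Declare the kernel as AIC-only so the runtime schedules it on cube
    // cores and does not start redundant vector (AIV) cores, avoiding the
    // misaligned UB -> GM copies seen on newer CANN versions.
    KERNEL_TASK_TYPE_DEFAULT(KERNEL_TYPE_AIC_ONLY);
    // ... cube-core matmul body ...
}
```
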
### Does this PR introduce _any_ user-facing change?
No

- vLLM version: v0.12.0
- vLLM main: ad32e3e19c

---------

Signed-off-by: hust17yixuan <303660421@qq.com>
Co-authored-by: wangxiyuan <wangxiyuan1007@gmail.com>

@@ -658,6 +658,7 @@ private:
extern "C" __global__ __aicore__ void batch_matmul_transpose(GM_ADDR gm_a, GM_ADDR gm_b, GM_ADDR gm_c, extern "C" __global__ __aicore__ void batch_matmul_transpose(GM_ADDR gm_a, GM_ADDR gm_b, GM_ADDR gm_c,
GM_ADDR gm_tiling_data) GM_ADDR gm_tiling_data)
{ {
KERNEL_TASK_TYPE_DEFAULT(KERNEL_TYPE_AIC_ONLY);
PpMatmulEinSum<0, false, false, half, half, DataFormat::ND> PpMatmulEinSum<0, false, false, half, half, DataFormat::ND>
einsum_0_n_fp16_nd; // swizzleDir[0] transA[0] transB[0] DtypeA[001] DtypeB[001] DtypeC[001] DataFormatA[0] einsum_0_n_fp16_nd; // swizzleDir[0] transA[0] transB[0] DtypeA[001] DtypeB[001] DtypeC[001] DataFormatA[0]
// DataFormatB[0] // DataFormatB[0]