[OPS] add bmm_transpose ops (#3990)
### What this PR does / why we need it?
Add a new fused op to custom_op that combines torch.bmm() and transpose to achieve better performance. This op is used in mla_v1 to replace the separate bmm and transpose calls.

### Does this PR introduce _any_ user-facing change?
No

### How was this patch tested?
- vLLM version: v0.11.2

---------

Signed-off-by: hust17yixuan <303660421@qq.com>
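For reference, a minimal sketch of the semantics the fused kernel replaces. The shapes and the transposed axes here are assumptions for illustration (the actual axes come from the mla_v1 call site); the point is that the unfused path materializes the bmm result before transposing, while the fused op does both in one kernel launch.

```python
import torch

# Assumed shapes for illustration only.
a = torch.randn(8, 4, 16)   # (batch, M, K)
b = torch.randn(8, 16, 32)  # (batch, K, N)

# Unfused path previously used in mla_v1: a batched matmul followed by
# a transpose, which allocates an intermediate (batch, M, N) tensor.
out_unfused = torch.bmm(a, b).transpose(0, 1)  # -> (M, batch, N)

# The fused bmm_transpose custom op is intended to produce the same
# result without materializing the intermediate.
print(tuple(out_unfused.shape))
```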
@@ -158,4 +158,13 @@ namespace vllm_ascend {
     void* tiling,
     const uint32_t block_dim
 );
+
+extern void batch_matmul_transpose_impl(
+    void* stream,
+    void* gm_a,
+    void* gm_b,
+    void* gm_c,
+    void* gm_tiling_data,
+    const uint32_t block_dim
+);
 }