[Ops][Misc] Refactor and optimize CausalConv1d for Ascend (#7495)
### What this PR does / why we need it?
During the prefill phase of Qwen3-Next and Qwen3.5, the
`torch.ops._C_ascend.causal_conv1d_fn` operator is a significant
performance bottleneck. To address this, we re-implemented it as
`torch.ops._C_ascend.npu_causal_conv1d_custom`.
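For context, the semantics both operators implement is a causal depthwise conv1d, which can be sketched in plain PyTorch. This is an illustrative reference only, not the Ascend kernel and not the test's `causal_conv1d_fn_pytorch` helper; the function name, shapes, and `silu` handling are assumptions made for the sketch:

```python
import torch
import torch.nn.functional as F

def causal_conv1d_ref(x, weight, bias=None, activation=None):
    """Illustrative causal depthwise conv1d.

    x: (dim, seq_len), weight: (dim, width). Each output position
    depends only on the current and past input positions.
    """
    dim, seq_len = x.shape
    width = weight.shape[1]
    # Left-pad by width - 1 so the convolution never sees the future.
    x_pad = F.pad(x, (width - 1, 0))
    # Depthwise convolution: one filter per channel via groups=dim.
    out = F.conv1d(x_pad.unsqueeze(0), weight.unsqueeze(1), bias,
                   groups=dim).squeeze(0)
    if activation == "silu":
        out = F.silu(out)
    return out

x = torch.randn(4, 8)
w = torch.randn(4, 3)
y = causal_conv1d_ref(x, w)
assert y.shape == (4, 8)
```

Causality can be checked directly: zeroing inputs at positions >= t leaves outputs at positions < t unchanged.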
### Does this PR introduce _any_ user-facing change?
No.
### How was this patch tested?
1. Accuracy test:
```
[2026-03-20 16:44:22,961] [ais_bench] [INFO] Start launch task state board ...
+-----------------------------+-----------+------------+-------------+----------+-------------------------------------------+---------------------+
| Task Name                   | Process   | Progress   | Time Cost   | Status   | Log Path                                  | Extend Parameters   |
+=============================+===========+============+=============+==========+===========================================+=====================+
| vllm-api-general-chat/gsm8k | 2918978   | NA         | 0:00:01     | finish   | logs/eval/vllm-api-general-chat/gsm8k.out | None                |
+-----------------------------+-----------+------------+-------------+----------+-------------------------------------------+---------------------+
[2026-03-20 16:44:34,284] [ais_bench] [INFO] Evaluation tasks completed.
[2026-03-20 16:44:34,287] [ais_bench] [INFO] Summarizing evaluation results...
dataset    version    metric    mode    vllm-api-general-chat
---------  ---------  --------  ------  -----------------------
gsm8k      271d0b     accuracy  gen     96.21
```
2. Updated unit test:
`pytest -sv /home/c30006096/vllm-ascend/tests/e2e/nightly/single_node/ops/singlecard_ops/triton/test_causal_conv1d.py::test_ascend_causal_conv1d`
- vLLM version: v0.17.0
- vLLM main: 8b6325758c
Signed-off-by: wenba0 <3054239545@qq.com>
Signed-off-by: jiaojiao <56385650+wenba0@users.noreply.github.com>
```diff
@@ -157,6 +157,11 @@ def causal_conv1d_fn_pytorch(
     out_ref_tensor = torch.cat(out_ref, dim=0)
     return out_ref_tensor
 
+def to_int64_tuple(t):
+    t = t.to(torch.int64)
+    if t.dim() == 0:
+        return (t.item(),)
+    return tuple(t.tolist())
 
 @pytest.mark.parametrize('has_initial_state', [False, True])
 @pytest.mark.parametrize('itype', [torch.bfloat16])
@@ -227,16 +232,19 @@ def test_ascend_causal_conv1d(dim, width, extra_state_len, seq_len, has_bias,
     x_origin=x.transpose(-1, -2)
     weight_origin=weight.transpose(-1, -2)
     conv_states_origin=conv_states.transpose(-1, -2)
-    out = torch.ops._C_ascend.causal_conv1d_fn(
+    activation_num = 1 if activation else 0
+    out = torch.ops._C_ascend.npu_causal_conv1d_custom(
         x_origin,
         weight_origin,
-        bias,
-        activation=activation,
-        conv_state=conv_states_origin,
-        has_initial_state=has_initial_state_tensor,
-        non_spec_state_indices_tensor=cache_indices,
-        non_spec_query_start_loc=query_start_loc,
+        bias_opt=bias,
+        query_start_loc_opt=to_int64_tuple(query_start_loc),
+        cache_indices_opt=to_int64_tuple(cache_indices),
+        initial_state_mode_opt=to_int64_tuple(has_initial_state_tensor),
+        num_accepted_tokens_opt=[],
+        activation_mode=activation_num,
+        pad_slot_id=PAD_SLOT_ID,
+        run_mode=0
     ).transpose(-1, -2)
     validate_cmp(out, out_ref, itype)
     validate_cmp(conv_states, conv_states_ref, itype)
 
```
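The `to_int64_tuple` helper introduced in the first hunk is small enough to exercise standalone. Here is a copy of it with illustrative inputs (the example tensors are assumptions, not from the test suite):

```python
import torch

def to_int64_tuple(t):
    # Convert a tensor (scalar or 1-D) into a tuple of Python ints,
    # as required by the *_opt arguments of npu_causal_conv1d_custom.
    t = t.to(torch.int64)
    if t.dim() == 0:
        return (t.item(),)
    return tuple(t.tolist())

print(to_int64_tuple(torch.tensor([0, 3, 7])))  # (0, 3, 7)
print(to_int64_tuple(torch.tensor(5)))          # (5,)
```

Note that a boolean tensor such as `has_initial_state_tensor` is coerced to 0/1 integers by the `to(torch.int64)` cast.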