[Ops][Misc] Refactor and optimize CausalConv1d for Ascend (#7495)

### What this PR does / why we need it?
During the prefill phase of Qwen3-Next and Qwen3.5, the
`torch.ops._C_ascend.causal_conv1d_fn` operator is a significant
performance bottleneck. To address this, we re-implemented it as the
optimized `torch.ops._C_ascend.npu_causal_conv1d_custom` operator.
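
For context, the semantics the operator implements can be sketched in pure Python. This is a hypothetical reference for a depthwise causal 1-D convolution (as used in Mamba-style layers), not the NPU kernel itself; the function name and argument layout are illustrative only.

```python
import math

def causal_conv1d_ref(x, weight, bias=None, activation=None):
    """Pure-Python reference for a depthwise causal 1-D convolution.

    x:      per-channel input, list of lists, shape (dim, seqlen)
    weight: per-channel taps,  list of lists, shape (dim, kernel_size)
    Output position t only sees x[..., t-K+1 : t+1] (zero left-padding),
    so no future token leaks into the current position.
    """
    out = []
    for c, (xc, wc) in enumerate(zip(x, weight)):
        K = len(wc)
        row = []
        for t in range(len(xc)):
            acc = 0.0
            for k in range(K):
                i = t - (K - 1) + k  # causal index: tap k reads position i <= t
                if i >= 0:           # positions before the sequence are zeros
                    acc += wc[k] * xc[i]
            if bias is not None:
                acc += bias[c]
            if activation == "silu":
                acc = acc / (1.0 + math.exp(-acc))  # SiLU: x * sigmoid(x)
            row.append(acc)
        out.append(row)
    return out

# A kernel of [0, 1] taps only the current position, so output == input:
print(causal_conv1d_ref([[1.0, 2.0, 3.0]], [[0.0, 1.0]]))  # [[1.0, 2.0, 3.0]]
```

A kernel of `[1, 0]` instead taps only the previous position, shifting the sequence right by one with a leading zero, which makes the causal padding visible.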

### Does this PR introduce _any_ user-facing change?
No.

### How was this patch tested?
1. Accuracy test:
```
[2026-03-20 16:44:22,961] [ais_bench] [INFO] Start launch task state board ...
+-----------------------------+-----------+------------+-------------+----------+-------------------------------------------+---------------------+
| Task Name                   |   Process | Progress   | Time Cost   | Status   | Log Path                                  | Extend Parameters   |
+=============================+===========+============+=============+==========+===========================================+=====================+
| vllm-api-general-chat/gsm8k |   2918978 | NA         | 0:00:01     | finish   | logs/eval/vllm-api-general-chat/gsm8k.out | None                |
+-----------------------------+-----------+------------+-------------+----------+-------------------------------------------+---------------------+
[2026-03-20 16:44:34,284] [ais_bench] [INFO] Evaluation tasks completed.
[2026-03-20 16:44:34,287] [ais_bench] [INFO] Summarizing evaluation results...
dataset    version    metric    mode      vllm-api-general-chat
---------  ---------  --------  ------  -----------------------
gsm8k      271d0b     accuracy  gen                       96.21
```
2. Modified unit test:
```
pytest -sv /home/c30006096/vllm-ascend/tests/e2e/nightly/single_node/ops/singlecard_ops/triton/test_causal_conv1d.py::test_ascend_causal_conv1d
```

- vLLM version: v0.17.0
- vLLM main: 8b6325758c

Signed-off-by: wenba0 <3054239545@qq.com>
Signed-off-by: jiaojiao <56385650+wenba0@users.noreply.github.com>
Committed by jiaojiao via GitHub on 2026-03-24 00:07:12 +08:00
parent e942b62d74
commit 1de805ce0a
16 changed files with 907 additions and 554 deletions


```diff
@@ -485,19 +485,21 @@ npu_copy_and_expand_eagle_inputs_meta(
         out_new_token_indices, out_hidden_state_mapping};
 }
-at::Tensor causal_conv1d_fn_meta(
-    const at::Tensor& mixed_qkv_non_spec_T,
-    const at::Tensor& conv_weights,
-    const c10::optional<at::Tensor>& bias_opt,
-    c10::string_view activation,
+at::Tensor npu_causal_conv1d_custom_meta(
+    const at::Tensor& x,
+    const at::Tensor& weight,
     const at::Tensor& conv_state,
     const at::Tensor& has_initial_state,
-    const at::Tensor& non_spec_state_indices_tensor,
-    const at::Tensor& non_spec_query_start_loc,
-    int64_t pad_slot_id)
+    const c10::optional<at::Tensor>& bias_opt,
+    at::IntArrayRef query_start_loc_opt,
+    at::IntArrayRef cache_indices_opt,
+    at::IntArrayRef initial_state_mode_opt,
+    at::IntArrayRef num_accepted_tokens_opt,
+    int64_t activation_mode,
+    int64_t pad_slot_id,
+    int64_t run_mode)
 {
-    at::Tensor output = at::empty_symint(mixed_qkv_non_spec_T.sym_sizes(), mixed_qkv_non_spec_T.options());
+    at::Tensor output = at::empty_symint(x.sym_sizes(), x.options());
     return output;
 }
```
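
The meta implementation above never touches device memory: its only job is to tell the tracing machinery the output's sizes and dtype (here, identical to the input's), so graphs can be compiled without an NPU. A toy pure-Python analogue of that contract, with all names hypothetical:

```python
class FakeTensor:
    """Minimal stand-in for a shape-only tensor: carries sizes and dtype,
    but no data, mirroring what a 'Meta'-dispatch kernel works with."""
    def __init__(self, shape, dtype="float16"):
        self.shape = tuple(shape)
        self.dtype = dtype

def npu_causal_conv1d_custom_fake(x, *args, **kwargs):
    # Like the C++ meta kernel's at::empty_symint(x.sym_sizes(), x.options()):
    # the output simply inherits the input's sizes and options.
    return FakeTensor(x.shape, x.dtype)

# Trace a prefill-shaped input (dim x seqlen) through the fake kernel:
x = FakeTensor((4096, 512))
out = npu_causal_conv1d_custom_fake(x)
assert out.shape == (4096, 512) and out.dtype == "float16"
```

Keeping the meta function shape-only is what lets the custom op participate in shape inference and `torch.compile`-style tracing even though the real computation runs only on Ascend hardware.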
```diff
@@ -611,7 +613,7 @@ TORCH_LIBRARY_IMPL_EXPAND(CONCAT(_C, _ascend), Meta, ops) {
     // CopyAndExpandEagleInputs
     ops.impl("npu_copy_and_expand_eagle_inputs", &vllm_ascend::meta::npu_copy_and_expand_eagle_inputs_meta);
     // causal_conv1d_fn
-    ops.impl("causal_conv1d_fn", &vllm_ascend::meta::causal_conv1d_fn_meta);
+    ops.impl("npu_causal_conv1d_custom", &vllm_ascend::meta::npu_causal_conv1d_custom_meta);
     // moe_grouped_matmul
     ops.impl("moe_grouped_matmul", &vllm_ascend::meta::moe_grouped_matmul_meta);
     // Lightning indexer quant
```