[Refactor]refactor 310p attention impl and add ut (#6579)

### What this PR does / why we need it?
This pull request significantly refactors the attention mechanism for
the Ascend 310P hardware, enhancing its architecture by separating mask
generation concerns from the core attention implementation. It
introduces a dedicated mask builder class capable of handling various
mask types, including causal, splitfuse, and sliding window attention
masks, all optimized for the NPU's fractal data format. This change not
only cleans up the codebase but also lays the groundwork for more robust
and feature-rich attention operations on Ascend devices, backed by new,
extensive unit tests.
### Does this PR introduce _any_ user-facing change?
No
### How was this patch tested?
E2E test with qwen3 and qwen3-moe
- vLLM version: v0.15.0
- vLLM main: d7e17aaacd

---------

Signed-off-by: pu-zhe <zpuaa@outlook.com>
pu-zhe
2026-02-07 09:26:26 +08:00
committed by GitHub
parent 23524f2ca4
commit 4f33e25046
5 changed files with 487 additions and 135 deletions


@@ -1,5 +1,5 @@
 #
-# Copyright (c) 2025 Huawei Technologies Co., Ltd. All Rights Reserved.
+# Copyright (c) 2026 Huawei Technologies Co., Ltd. All Rights Reserved.
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.
@@ -15,19 +15,26 @@
 # This file is a part of the vllm-ascend project.
 #
 from __future__ import annotations
 from typing import Any
 import torch
 from vllm.config import VllmConfig
 from vllm.v1.kv_cache_interface import AttentionSpec
-from vllm_ascend._310p.attention.attention_mask import AttentionMaskBuilder
-from vllm_ascend.attention.attention_v1 import AscendAttentionMetadataBuilder as _BaseBuilder
+from vllm_ascend._310p.attention.attention_mask import AttentionMaskBuilder310
+from vllm_ascend.attention.attention_v1 import AscendAttentionMetadataBuilder
-class AscendAttentionMetadataBuilder310P(_BaseBuilder):
+class AscendAttentionMetadataBuilder310(AscendAttentionMetadataBuilder):
+    """
+    Metadata builder specialized for the Huawei Ascend 310P NPU.
+
+    This class extends the base Ascend attention metadata builder to use
+    the 310P-specific attention mask builder, ensuring that masks are
+    generated in the correct format (FRACTAL_NZ) and with the logic
+    required by the 310P hardware.
+    """
+
     def __init__(
         self,
         kv_cache_spec: AttentionSpec,
@@ -35,6 +42,16 @@ class AscendAttentionMetadataBuilder310P(_BaseBuilder):
         vllm_config: VllmConfig,
         device: torch.device,
     ):
+        """
+        Initializes the metadata builder and the 310P-specific mask builder.
+
+        Args:
+            kv_cache_spec (AttentionSpec): Specification for the KV cache (block size, etc.).
+            layer_names (list[str]): List of layer names in the model.
+            vllm_config (VllmConfig): Global vLLM configuration object.
+            device (torch.device): The device (NPU) to run operations on.
+        """
         super().__init__(kv_cache_spec, layer_names, vllm_config, device)
-        self.attn_mask_builder: Any = AttentionMaskBuilder(self.device)
+        # Override the mask builder with the 310P-specific version
+        self.attn_mask_builder: Any = AttentionMaskBuilder310(self.device)
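The core change in the diff is small: the 310P builder runs the full base initialization, then swaps in its own mask builder. A minimal sketch of that pattern with stand-in classes (all names here are hypothetical stand-ins for the vllm-ascend classes in the diff):

```python
class BaseMaskBuilder:
    """Stand-in for the generic Ascend attention mask builder."""
    def __init__(self, device: str):
        self.device = device

class MaskBuilder310(BaseMaskBuilder):
    """Stand-in for the 310P-specific mask builder (e.g. FRACTAL_NZ output)."""
    pass

class BaseMetadataBuilder:
    """Stand-in for AscendAttentionMetadataBuilder."""
    def __init__(self, device: str):
        self.device = device
        self.attn_mask_builder = BaseMaskBuilder(device)

class MetadataBuilder310(BaseMetadataBuilder):
    """Stand-in for the new 310P metadata builder."""
    def __init__(self, device: str):
        super().__init__(device)  # full base initialization first
        # then override only the mask builder, mirroring the PR's approach
        self.attn_mask_builder = MaskBuilder310(self.device)

builder = MetadataBuilder310("npu:0")
```

One trade-off of this override-after-`super().__init__()` style: the base mask builder is briefly constructed and discarded, in exchange for keeping the base class unchanged and the subclass a two-line diff.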