[1/N][Draft][Refactor] torchair pangu_moe modeling refactor (#2437)

### What this PR does / why we need it?

1. Similar to #2384, this PR adds a torchair-specific modeling for
pangu.
2. Fixes a bug introduced by the `routed_scaling_factor` change in
#2675 (a hedged sketch of the usual scaling pattern follows this list).
3. Removes the eager test case for pangu, since a torchair test case
already exists.
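
The PR text does not describe the `routed_scaling_factor` bug itself, so as a point of reference, here is a minimal sketch of how a routed scaling factor is typically applied in a MoE forward pass. All names here (`moe_forward`, `experts`, `top_k`) are hypothetical illustrations, not taken from the pangu code:

```python
import torch


def moe_forward(hidden_states: torch.Tensor,
                router_logits: torch.Tensor,
                experts,  # hypothetical callable: (hidden, ids, weights) -> Tensor
                top_k: int = 2,
                routed_scaling_factor: float = 1.0) -> torch.Tensor:
    # Pick the top-k experts per token and renormalize their weights.
    topk_weights, topk_ids = torch.topk(router_logits.softmax(dim=-1),
                                        top_k, dim=-1)
    topk_weights = topk_weights / topk_weights.sum(dim=-1, keepdim=True)
    out = experts(hidden_states, topk_ids, topk_weights)
    # The factor must be applied to the routed output exactly once;
    # applying it both here and inside the expert kernel (or not at all)
    # is the classic failure mode a fix like this targets.
    return out * routed_scaling_factor
```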

### Does this PR introduce _any_ user-facing change?

No.

### How was this patch tested?


- vLLM version: v0.10.1.1
- vLLM main:
6997a25ac6

---------

Signed-off-by: zengyanjia <z00883269@china.huawei.com>
Signed-off-by: Angazenn <supperccell@163.com>
Co-authored-by: zengyanjia <z00883269@china.huawei.com>
Author: Angazenn
Date: 2025-09-04 10:39:21 +08:00 (committed by GitHub)
Parent: a58013440a
Commit: e7409e95ee
6 changed files with 1185 additions and 55 deletions

```diff
@@ -57,7 +57,6 @@ from vllm.model_executor.sampling_metadata import SamplingMetadata
 from vllm.model_executor.utils import set_weight_attrs
 from vllm.sequence import IntermediateTensors
-from vllm_ascend.ascend_config import get_ascend_config
 from vllm_ascend.utils import ACL_FORMAT_FRACTAL_NZ, is_310p
 
 _ROUTER_SCALE = None
```

```diff
@@ -612,9 +611,6 @@ class PanguProMoEAttention(nn.Module):
             prefix=f"{prefix}.attn",
         )
-        ascend_config = get_ascend_config()
-        self.torchair_graph_enabled = ascend_config.torchair_graph_config.enabled
-
     def forward(
         self,
         positions: torch.Tensor,
```

```diff
@@ -625,18 +621,7 @@
         qkv, _ = self.qkv_proj(hidden_states)
         q, k, v = qkv.split([self.q_size, self.kv_size, self.kv_size], dim=-1)
         q, k = self.rotary_emb(positions, q, k)
-        if self.torchair_graph_enabled:
-            forward_kwargs = {'trace_flag': False}
-            output_shape = q.shape
-            attn_output = torch.empty(output_shape,
-                                      dtype=q.dtype,
-                                      device=q.device)
-            forward_kwargs['output'] = attn_output
-            attn_output = self.attn.impl.forward(self.attn, q, k, v, kv_cache,
-                                                 attn_metadata,
-                                                 **forward_kwargs)
-        else:
-            attn_output = self.attn(q, k, v)
+        attn_output = self.attn(q, k, v)
         output, _ = self.o_proj(attn_output)
         return output
```
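
For reference, the torchair branch deleted above presumably moves into the new torchair-specific modeling file. Below is a minimal sketch of that path reconstructed from the removed lines; the method signature and surrounding class are assumed, and only the buffer/`impl.forward` pattern comes from the diff itself:

```python
import torch


def forward(self, positions, hidden_states, kv_cache, attn_metadata):
    # Sketch of the removed torchair path; names mirror the deleted code.
    qkv, _ = self.qkv_proj(hidden_states)
    q, k, v = qkv.split([self.q_size, self.kv_size, self.kv_size], dim=-1)
    q, k = self.rotary_emb(positions, q, k)
    # Pre-allocate the output buffer so the graph-mode kernel writes into
    # it in place rather than allocating inside the captured graph.
    attn_output = torch.empty(q.shape, dtype=q.dtype, device=q.device)
    # Call the attention implementation directly instead of the layer's
    # __call__; trace_flag=False is what the removed code passed,
    # presumably because torchair drives the graph capture itself.
    attn_output = self.attn.impl.forward(self.attn, q, k, v, kv_cache,
                                         attn_metadata,
                                         output=attn_output,
                                         trace_flag=False)
    output, _ = self.o_proj(attn_output)
    return output
```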