[refactor] replace scattered business kwargs with typed request objects and explicit stage boundaries (#7024)

### What this PR does / why we need it?
Refactor `vllm_ascend/ops/fused_moe` to replace scattered MoE business
`**kwargs` with typed request objects and explicit stage boundaries.

- Prepare, dispatch, MLP, and quant stages now have clearer ownership.
- Main MoE path no longer depends on business `kwargs.get(...)` lookups.
- Comm and dispatcher interfaces are request-only on the main path.
- UTs can assert stage-level fields directly instead of inferring
behavior indirectly.
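As a hedged sketch of the "typed request object" idea (names such as `MoEDispatchRequest` and its fields are illustrative, not the PR's actual API), a stage boundary might look like this:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class MoEDispatchRequest:
    # Hypothetical stage request: inputs that used to arrive as scattered
    # **kwargs become explicit, typed fields that UTs can assert directly.
    num_experts: int
    top_k: int
    expert_map: Optional[dict] = None

def dispatch(req: MoEDispatchRequest) -> str:
    # Stage code reads explicit fields instead of kwargs.get("top_k", ...).
    return f"dispatch to {req.top_k}/{req.num_experts} experts"

print(dispatch(MoEDispatchRequest(num_experts=64, top_k=8)))
# dispatch to 8/64 experts
```

A frozen dataclass also makes the request immutable across stages, so a later stage cannot silently mutate an earlier stage's inputs.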

### Does this PR introduce _any_ user-facing change?
No.

### How was this patch tested?
CI passed.

---------

Signed-off-by: linfeng-yuan <1102311262@qq.com>
linfeng-yuan
2026-03-20 23:23:57 +08:00
committed by GitHub
parent c860535246
commit 88d03a783f
33 changed files with 2146 additions and 947 deletions


```diff
@@ -16,24 +16,30 @@
 #
 """Ascend quantization module.
 
 This module provides quantization support for Ascend NPU.
 
 Supported quantization tools:
 - ModelSlim: Use AscendModelSlimConfig
 - LLM-Compressor (compressed_tensors): Use AscendCompressedTensorsConfig
 
 Public API:
 - Config classes: AscendModelSlimConfig, AscendCompressedTensorsConfig
 - For scheme implementations, import from vllm_ascend.quantization.methods
+
+This module intentionally avoids eager imports so that importing lightweight
+submodules (for example ``quant_type``) does not trigger heavy registration
+paths and circular imports during startup.
 """
 
-# LLM-Compressor (compressed_tensors) quantization config
-from .compressed_tensors_config import AscendCompressedTensorsConfig
+from typing import TYPE_CHECKING, Any
 
-# ModelSlim quantization config
-from .modelslim_config import AscendModelSlimConfig
+if TYPE_CHECKING:
+    from .compressed_tensors_config import AscendCompressedTensorsConfig
+    from .modelslim_config import AscendModelSlimConfig
 
 __all__ = [
     "AscendModelSlimConfig",
     "AscendCompressedTensorsConfig",
 ]
+
+
+def __getattr__(name: str) -> Any:
+    if name == "AscendModelSlimConfig":
+        from .modelslim_config import AscendModelSlimConfig
+        return AscendModelSlimConfig
+    if name == "AscendCompressedTensorsConfig":
+        from .compressed_tensors_config import AscendCompressedTensorsConfig
+        return AscendCompressedTensorsConfig
+    raise AttributeError(f"module {__name__!r} has no attribute {name!r}")
```
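The lazy-export pattern in this hunk relies on module-level `__getattr__` (PEP 562, Python 3.7+): the config classes are imported only on first attribute access. A minimal self-contained sketch of the same idea, using a synthetic module (`demo_quant` and `HeavyConfig` are made-up names) so it runs outside the package:

```python
import sys
import types

# Throwaway module to demonstrate PEP 562 lazy exports; in the real file this
# lives at package top level and defers the heavy config-class imports.
mod = types.ModuleType("demo_quant")
mod.__all__ = ["HeavyConfig"]

def _module_getattr(name: str):
    if name == "HeavyConfig":
        class HeavyConfig:  # stand-in for the deferred config class
            pass
        # Cache in the module dict so later lookups skip __getattr__.
        setattr(mod, "HeavyConfig", HeavyConfig)
        return HeavyConfig
    raise AttributeError(f"module {mod.__name__!r} has no attribute {name!r}")

mod.__getattr__ = _module_getattr
sys.modules["demo_quant"] = mod

import demo_quant
cfg = demo_quant.HeavyConfig   # first access triggers __getattr__
assert cfg is demo_quant.HeavyConfig  # cached afterwards
print(cfg.__name__)
# HeavyConfig
```

Note the `TYPE_CHECKING` block in the diff keeps the names visible to static type checkers even though the runtime imports are deferred.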