[Platform][BugFix] Preserve hybrid block size on Ascend (#7528)

### What this PR does / why we need it
This PR fixes a startup regression for Ascend hybrid attention + mamba
models after upgrading to vLLM `0.18.0`.

For hybrid models, the block size is computed by model-specific config
logic. After the `0.18.0` upgrade, however, worker initialization still
calls the generic platform hook:
- `current_platform.update_block_size_for_backend(vllm_config)`

Its generic block-size fallback overwrites the hybrid-computed value and
breaks startup.
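The sketch below illustrates the failing call order. It is a hypothetical,
self-contained simulation: `generic_block_size_fallback` is a stand-in for
the upstream fallback, the config objects are `SimpleNamespace` stand-ins,
and the concrete block-size values are made up.

```python
from types import SimpleNamespace


def generic_block_size_fallback(vllm_config):
    # Stand-in for the upstream block-size fallback: when the user did not
    # pass --block-size, it overwrites block_size with a generic default.
    if not vllm_config.cache_config.user_specified_block_size:
        vllm_config.cache_config.block_size = 16  # hypothetical default


vllm_config = SimpleNamespace(
    cache_config=SimpleNamespace(user_specified_block_size=False,
                                 block_size=None))

# 1. Hybrid model-specific config logic picks its block size first.
vllm_config.cache_config.block_size = 128  # hypothetical hybrid value

# 2. Worker init then runs the generic platform hook, clobbering it.
generic_block_size_fallback(vllm_config)
print(vllm_config.cache_config.block_size)  # 16, hybrid sizing lost
```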

### How this PR fixes it

This PR keeps the fix strictly inside `vllm-ascend`.

It adds an Ascend override for
`NPUPlatform.update_block_size_for_backend()`:

- for hybrid models, skip the generic upstream block-size fallback and
preserve the block size already computed by the hybrid model-specific
config logic
- for non-hybrid models, keep the original upstream behavior unchanged
(see the sketch after this list)
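A minimal sketch (not part of this PR) of the override's early-return
paths, using `types.SimpleNamespace` stand-ins for the config objects; the
import path `vllm_ascend.platform` and the block-size values are
assumptions:

```python
from types import SimpleNamespace

from vllm_ascend.platform import NPUPlatform  # assumed import path

# Hybrid model: the model-specific logic already chose a block size.
hybrid = SimpleNamespace(
    cache_config=SimpleNamespace(user_specified_block_size=False,
                                 block_size=128),  # hypothetical value
    model_config=SimpleNamespace(is_hybrid=True),
)
NPUPlatform.update_block_size_for_backend(hybrid)
assert hybrid.cache_config.block_size == 128  # preserved, fallback skipped

# User passed --block-size explicitly: also returned untouched.
user = SimpleNamespace(
    cache_config=SimpleNamespace(user_specified_block_size=True,
                                 block_size=64),
    model_config=SimpleNamespace(is_hybrid=False),
)
NPUPlatform.update_block_size_for_backend(user)
assert user.cache_config.block_size == 64
```

For non-hybrid models without a user-specified size, the override simply
delegates to `super().update_block_size_for_backend(vllm_config)`, so
upstream behavior is unchanged there.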

- vLLM version: v0.18.0
- vLLM main: 8b6325758c
---------
Signed-off-by: maoxx241 <maomaoyu870@gmail.com>
Signed-off-by: Mengqing Cao <cmq0113@163.com>
Co-authored-by: Mengqing Cao <cmq0113@163.com>


```diff
@@ -172,6 +172,20 @@ class NPUPlatform(Platform):
     def inference_mode(cls):
         return torch.inference_mode()
 
+    @classmethod
+    def update_block_size_for_backend(cls, vllm_config: VllmConfig) -> None:
+        cache_config = vllm_config.cache_config
+        if cache_config.user_specified_block_size:
+            # User specified --block-size; keep it.
+            return
+
+        model_config = vllm_config.model_config
+        if model_config is not None and model_config.is_hybrid:
+            # Hybrid attention+mamba models rely on the model-specific sizing
+            # logic rather than the generic platform default.
+            return
+        super().update_block_size_for_backend(vllm_config)
 
     @classmethod
     def set_device(cls, device: torch.device):
         torch.npu.set_device(device)
```