[Main2Main] Upgrade vLLM to 0303 (#6944)
### What this PR does / why we need it?
Breaking changes from upstream:
- https://github.com/vllm-project/vllm/pull/34102
The `disable_full` param was replaced with the `valid_modes`/`invalid_modes` API
- https://github.com/vllm-project/vllm/pull/35503
A float `compilation_time` must now be returned
- https://github.com/vllm-project/vllm/pull/35564
A new `sequence_lengths` param was added
- https://github.com/vllm-project/vllm/pull/33807
A check was added (`if runner_backend != "auto"`)
- https://github.com/vllm-project/vllm/pull/34861
`BaseDeviceCommunicator` now accesses PyTorch's internal `pg_map` to
check process group state
- https://github.com/vllm-project/vllm/pull/35274
**Important change:**
- https://github.com/vllm-project/vllm/pull/28672
`matcher_utils` accesses `torch.ops._C.*` directly at import time. On Ascend,
some of these ops are not registered, so the attribute lookup raises
`AttributeError` and e2e initialization fails (see the repro sketch after this list).
https://github.com/vllm-project/vllm-ascend/actions/runs/22607260487/job/65502047131#step:10:2323
https://github.com/vllm-project/vllm/blob/main/vllm/compilation/passes/fusion/matcher_utils.py#L29
This PR adds temporary compatibility placeholders (`rms_norm`,
`fused_add_rms_norm`, `rotary_embedding`, static/dynamic fp8 quant,
`silu_and_mul`) in
`vllm_ascend/patch/platform/patch_fusion_matcher_compat_ops.py` so that
nothing crashes during the import phase. A proper upstream fix will be
considered later.
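
For context, a minimal repro sketch of the failure mode. This is an illustrative snippet, not the actual upstream code path; the real access happens inside `matcher_utils` while fusion patterns are registered:

```python
import torch

try:
    # matcher_utils resolves custom ops like this at module import time.
    _ = torch.ops._C.rms_norm
except AttributeError as exc:
    # On Ascend builds the corresponding custom op is not registered, so the
    # attribute lookup itself fails and aborts vLLM / e2e initialization.
    print(f"import-time failure: {exc}")
```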
### Does this PR introduce _any_ user-facing change?
### How was this patch tested?
- vLLM version: v0.16.0
- vLLM main: 15d76f74e2
---------
Signed-off-by: MrZ20 <2609716663@qq.com>
Signed-off-by: gcanlin <canlinguosdu@gmail.com>
Co-authored-by: Meihan-chen <jcccx.cmh@gmail.com>
Co-authored-by: Claude Code <noreply@anthropic.com>
Co-authored-by: gcanlin <canlinguosdu@gmail.com>
```diff
@@ -17,6 +17,7 @@
 import os
 
 import vllm_ascend.patch.platform.patch_distributed  # noqa
+import vllm_ascend.patch.platform.patch_fusion_matcher_compat_ops  # noqa
 import vllm_ascend.patch.platform.patch_mamba_config  # noqa
 import vllm_ascend.patch.platform.patch_sched_yield  # noqa
 
```
```diff
@@ -0,0 +1,24 @@
+import torch
+
+
+class _MissingOp:
+    def __init__(self, op_name: str):
+        self.op_name = op_name
+        self.default = self
+
+    def __call__(self, *args, **kwargs):
+        raise RuntimeError(f"Missing upstream op `{self.op_name}` was invoked.")
+
+
+def _set_missing(namespace, op_name: str, full_name: str) -> None:
+    if not hasattr(namespace, op_name):
+        setattr(namespace, op_name, _MissingOp(full_name))
+
+
+_set_missing(torch.ops._C, "rms_norm", "torch.ops._C.rms_norm")
+_set_missing(torch.ops._C, "fused_add_rms_norm", "torch.ops._C.fused_add_rms_norm")
+_set_missing(torch.ops._C, "rotary_embedding", "torch.ops._C.rotary_embedding")
+_set_missing(torch.ops._C, "static_scaled_fp8_quant", "torch.ops._C.static_scaled_fp8_quant")
+_set_missing(torch.ops._C, "dynamic_scaled_fp8_quant", "torch.ops._C.dynamic_scaled_fp8_quant")
+_set_missing(torch.ops._C, "dynamic_per_token_scaled_fp8_quant", "torch.ops._C.dynamic_per_token_scaled_fp8_quant")
+_set_missing(torch.ops._C, "silu_and_mul", "torch.ops._C.silu_and_mul")
```
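
A rough usage sketch of how these placeholders behave once the patch module is imported (assuming an environment where `torch.ops._C.rms_norm` is not registered natively):

```python
import torch

import vllm_ascend.patch.platform.patch_fusion_matcher_compat_ops  # noqa

op = torch.ops._C.rms_norm  # attribute access now succeeds at import-time pattern registration
assert op.default is op     # matcher code can also reference `.default` safely

try:
    op()  # actually invoking a missing op still fails loudly at runtime
except RuntimeError as exc:
    print(exc)  # Missing upstream op `torch.ops._C.rms_norm` was invoked.
```

The placeholders only make import-time pattern registration succeed; any pass that would actually call one of these ops still fails with a clear `RuntimeError` instead of silently doing the wrong thing.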