[main2main] upgrade vllm main 0202 (#6560)

### What this PR does / why we need it?
1. Fix `TypeError: FusedMoEParallelConfig.__init__() missing 1 required
positional argument: 'is_sequence_parallel'` due to
https://github.com/vllm-project/vllm/pull/32567
2. Fix `TypeError: '>' not supported between instances of 'MagicMock'
and 'int'` due to https://github.com/vllm-project/vllm/pull/33035
3. Fix `TypeError: Can't instantiate abstract class AscendMLAImpl with
abstract methods forward_mha, forward_mqa` and `AttributeError: 'bool'
object has no attribute 'process_weights_after_loading'` due to
https://github.com/vllm-project/vllm/pull/33284 (a minimal mock sketch
follows this list)
4. Fix `'AscendSharedFusedMoE' object has no attribute
'_routed_input_transform'` due to
https://github.com/vllm-project/vllm/pull/32790
5. Fix `NPUModelRunner._dummy_run() got an unexpected keyword argument
'num_active_loras'` due to
https://github.com/vllm-project/vllm/pull/32005
6. Fix the problem caused by `'tuple' object has no attribute 'job_id'`
due to https://github.com/vllm-project/vllm/pull/27492
7. Fix the problem that `all_moe_layers` does not match
`vllm.moe_forward` and `vllm.moe_forward_shared`, due to
https://github.com/vllm-project/vllm/pull/33184
8. Add a patch to fix `got multiple values for keyword argument
'add_special_tokens'` due to
https://github.com/vllm-project/vllm/pull/32863
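
For item 3, the test-side change (shown in the diff below) is simply to hand the patched `MLAAttention` a mock that exposes the attributes the new interface touches, instead of a bare `True`. A minimal, self-contained sketch of that mock shape (the no-argument call at the end is illustrative only, not the real call site):

```python
from unittest.mock import MagicMock

# The attention wrapper now reaches into attn.impl.process_weights_after_loading,
# so the patched MLAAttention must return an object exposing these attributes.
mock_mla_attn = MagicMock()
mock_mla_attn.process_weights_after_loading = MagicMock()
mock_mla_attn.impl = MagicMock()
mock_mla_attn.impl.process_weights_after_loading = MagicMock()

# With the old return_value=True, this attribute access is what raised
# AttributeError: 'bool' object has no attribute 'process_weights_after_loading'.
mock_mla_attn.impl.process_weights_after_loading()
```
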
### Does this PR introduce _any_ user-facing change?

### How was this patch tested?

- vLLM version: v0.15.0
- vLLM main: https://github.com/vllm-project/vllm/commit/v0.15.0

---------

Signed-off-by: wangxiyuan <wangxiyuan1007@gmail.com>
Signed-off-by: Meihan-chen <jcccx.cmh@gmail.com>
Signed-off-by: hfadzxy <starmoon_zhang@163.com>
Co-authored-by: wangxiyuan <wangxiyuan1007@gmail.com>
Co-authored-by: hfadzxy <starmoon_zhang@163.com>

@@ -82,8 +82,13 @@ class TestAscendMultiHeadLatentAttention(TestBase):
     @patch("vllm_ascend.ops.mla.get_tensor_model_parallel_world_size")
     def test_initialization(self, mock_tp_size, mock_ascend_config,
                             mock_get_vllm_config):
+        # Create a proper mock for MLAAttention that has the required attributes
+        mock_mla_attn = MagicMock()
+        mock_mla_attn.process_weights_after_loading = MagicMock()
+        mock_mla_attn.impl = MagicMock()
+        mock_mla_attn.impl.process_weights_after_loading = MagicMock()
-        with patch("vllm_ascend.ops.mla.MLAAttention", return_value=True):
+        with patch("vllm_ascend.ops.mla.MLAAttention", return_value=mock_mla_attn):
             mock_tp_size.return_value = 2
             mock_ascend_config.return_value.enable_shared_expert_dp = True
             mock_vllm_config = MagicMock(spec=VllmConfig)
@@ -126,7 +131,14 @@ class TestAscendMultiHeadLatentAttention(TestBase):
             num_hidden_layers=32, first_k_dense_replace=False)
         mock_get_vllm_config.return_value = mock_vllm_config
         mock_vllm_config.compilation_config = CompilationConfig()
-        with patch("vllm_ascend.ops.mla.MLAAttention", return_value=True):
+        # Create a proper mock for MLAAttention that has the required attributes
+        mock_mla_attn = MagicMock()
+        mock_mla_attn.process_weights_after_loading = MagicMock()
+        mock_mla_attn.impl = MagicMock()
+        mock_mla_attn.impl.process_weights_after_loading = MagicMock()
+        with patch("vllm_ascend.ops.mla.MLAAttention", return_value=mock_mla_attn):
             attn = AscendMultiHeadLatentAttention(
                 hidden_size=self.hidden_size,
                 num_heads=self.num_heads,