[main2main] upgrade vllm main 0202 (#6560)
### What this PR does / why we need it?
1. Fix `TypeError: FusedMoEParallelConfig.__init__() missing 1 required positional argument: 'is_sequence_parallel'` due to https://github.com/vllm-project/vllm/pull/32567 (a sketch of this breakage pattern follows this description).
2. Fix `TypeError: '>' not supported between instances of 'MagicMock' and 'int'` due to https://github.com/vllm-project/vllm/pull/33035 (see the MagicMock sketch below).
3. Fix `TypeError: Can't instantiate abstract class AscendMLAImpl with abstract methods forward_mha, forward_mqa` and `AttributeError: 'bool' object has no attribute 'process_weights_after_loading'` due to https://github.com/vllm-project/vllm/pull/33284.
4. Fix `'AscendSharedFusedMoE' object has no attribute '_routed_input_transform'` due to https://github.com/vllm-project/vllm/pull/32790.
5. Fix `NPUModelRunner._dummy_run() got an unexpected keyword argument 'num_active_loras'` due to https://github.com/vllm-project/vllm/pull/32005.
6. Fix the problem caused by `'tuple' object has no attribute 'job_id'` due to https://github.com/vllm-project/vllm/pull/27492.
7. Fix the problem that `all_moe_layers` does not match `vllm.moe_forward` / `vllm.moe_forward_shared` due to https://github.com/vllm-project/vllm/pull/33184.
8. Add a patch to fix the `got multiple values for keyword argument 'add_special_tokens'` error due to https://github.com/vllm-project/vllm/pull/32863.

### Does this PR introduce _any_ user-facing change?

### How was this patch tested?
- vLLM version: v0.15.0
- vLLM main: https://github.com/vllm-project/vllm/commit/v0.15.0

---------

Signed-off-by: wangxiyuan <wangxiyuan1007@gmail.com>
Signed-off-by: Meihan-chen <jcccx.cmh@gmail.com>
Signed-off-by: hfadzxy <starmoon_zhang@163.com>
Co-authored-by: wangxiyuan <wangxiyuan1007@gmail.com>
Co-authored-by: hfadzxy <starmoon_zhang@163.com>
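Context for item 1: the error message indicates the upstream change added a new required `is_sequence_parallel` argument to `FusedMoEParallelConfig`, so construction sites that predate the change fail in `__init__`. Below is a minimal sketch of that breakage pattern and the usual fix (passing the new argument). Only `is_sequence_parallel` is taken from the actual error; the class body and other field names are simplified stand-ins, not the real vLLM signature.

```python
from dataclasses import dataclass


# Simplified stand-in for vLLM's FusedMoEParallelConfig; field names other
# than is_sequence_parallel are illustrative only.
@dataclass
class FusedMoEParallelConfig:
    tp_size: int
    ep_size: int
    is_sequence_parallel: bool  # new required field added upstream


# Old call site -> TypeError: __init__() missing 1 required positional
# argument: 'is_sequence_parallel'
# cfg = FusedMoEParallelConfig(tp_size=2, ep_size=1)

# Updated call site: pass the new argument explicitly.
cfg = FusedMoEParallelConfig(tp_size=2, ep_size=1, is_sequence_parallel=False)
print(cfg)
```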
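Context for item 2: `MagicMock` returns `NotImplemented` from its ordering hooks, so once upstream code started comparing a mocked attribute against an `int` with `>`, tests that mock that object raise exactly the `TypeError` quoted above. A minimal, self-contained sketch of the failure and the usual fix (giving the mocked attribute a concrete value); the attribute name here is hypothetical, not the one used in the vllm-ascend tests.

```python
from unittest.mock import MagicMock

config = MagicMock()

# Fails: MagicMock.__gt__ returns NotImplemented and int cannot compare
# against a MagicMock either, so Python raises
#   TypeError: '>' not supported between instances of 'MagicMock' and 'int'
try:
    if config.max_num_tokens > 0:  # 'max_num_tokens' is a hypothetical attribute
        pass
except TypeError as exc:
    print(exc)

# Fix pattern: give the mocked attribute a real int before the comparison runs.
config.max_num_tokens = 1024
assert config.max_num_tokens > 0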
@@ -1450,6 +1450,28 @@ class AscendMLAImpl(MLAAttentionImpl):
    def get_num_actual_tokens(self, attn_metadata: M):
        return attn_metadata.num_actual_tokens

    def forward_mha(
        self,
        layer_name: str,
        hidden_states: torch.Tensor,
        kv_cache: tuple[torch.Tensor],
        attn_metadata: M,
        need_gather_q_kv: bool = False,
        output: torch.Tensor | None = None,
    ) -> torch.Tensor:
        raise NotImplementedError("forward_mha is not supported for MLA attention. Use forward() instead.")

    def forward_mqa(
        self,
        layer_name: str,
        hidden_states: torch.Tensor,
        kv_cache: tuple[torch.Tensor],
        attn_metadata: M,
        need_gather_q_kv: bool = False,
        output: torch.Tensor | None = None,
    ) -> torch.Tensor:
        raise NotImplementedError("forward_mqa is not supported for MLA attention. Use forward() instead.")

    def forward(
        self,
        layer_name,
@@ -1062,3 +1062,24 @@ class AscendSFAImpl(MLAAttentionImpl):
        torch.distributed.all_to_all_single(attn_output, send, group=get_tp_group().device_group)

        return attn_output, True

    def forward_mha(
        self,
        q: torch.Tensor,
        kv_c_normed: torch.Tensor,
        k_pe: torch.Tensor,
        kv_c_and_k_pe_cache: torch.Tensor,
        attn_metadata: M,
        k_scale: torch.Tensor,
        output: torch.Tensor,
    ) -> None:
        raise NotImplementedError("forward_mha is not supported for SFA attention. Use forward() instead.")

    def forward_mqa(
        self,
        q: torch.Tensor | tuple[torch.Tensor, torch.Tensor],
        kv_c_and_k_pe_cache: torch.Tensor,
        attn_metadata: M,
        layer,
    ) -> tuple[torch.Tensor, torch.Tensor | None]:
        raise NotImplementedError("forward_mqa is not supported for SFA attention. Use forward() instead.")
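Why the stub methods above resolve item 3: Python's `abc` machinery refuses to instantiate a class while any abstract method declared by its base remains unimplemented, which is exactly the `TypeError: Can't instantiate abstract class AscendMLAImpl with abstract methods forward_mha, forward_mqa` error. Defining the methods, even as `NotImplementedError` stubs that point callers back to `forward()`, clears the abstract set and makes the class instantiable again. A minimal, self-contained illustration of that mechanism follows; the class names are toy stand-ins, not the vLLM interfaces.

```python
from abc import ABC, abstractmethod


class AttentionImpl(ABC):  # stand-in for the upstream abstract interface
    @abstractmethod
    def forward_mha(self): ...

    @abstractmethod
    def forward_mqa(self): ...


class Broken(AttentionImpl):  # defines neither abstract method
    pass


class Fixed(AttentionImpl):  # stubs are enough to satisfy abc
    def forward_mha(self):
        raise NotImplementedError("use forward() instead")

    def forward_mqa(self):
        raise NotImplementedError("use forward() instead")


try:
    Broken()
except TypeError as exc:
    print(exc)  # TypeError: Can't instantiate abstract class Broken ...

Fixed()  # instantiates fine; the stubs only raise if actually called
```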