[CustomOp] Register AscendApplyRotaryEmb CustomOp and remove related patch (#4667)

### What this PR does / why we need it?

Following https://github.com/vllm-project/vllm/pull/29873, register the
`AscendApplyRotaryEmb` CustomOp and remove the related patch.
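
For context, this follows vLLM's out-of-tree `CustomOp` pattern: the Ascend class provides a `forward_oot` path and is registered under the upstream op name, so the previous model-file patch is no longer needed. The sketch below is illustrative only; the base class, the torch_npu kernel call, and the registration call are assumptions rather than the exact code in this commit.

```python
# Illustrative sketch of the out-of-tree CustomOp pattern (not the exact code
# in this commit; the torch_npu kernel and registration names are assumptions).
import torch
from vllm.model_executor.custom_op import CustomOp


class AscendApplyRotaryEmb(CustomOp):
    """Ascend override that routes rotary embedding to an NPU kernel."""

    def forward_oot(self, x: torch.Tensor, cos: torch.Tensor,
                    sin: torch.Tensor) -> torch.Tensor:
        # Out-of-tree platforms dispatch CustomOp.forward to forward_oot,
        # so an NPU-fused kernel replaces the pure-PyTorch fallback here.
        import torch_npu  # kernel name below is an assumption
        return torch_npu.npu_rotary_mul(x, cos, sin)


# Register the override under the upstream op name so vLLM picks it up on
# the Ascend platform without monkey-patching model code.
CustomOp.register_oot(_decorated_op_cls=AscendApplyRotaryEmb,
                      name="ApplyRotaryEmb")
```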

### Does this PR introduce _any_ user-facing change?

### How was this patch tested?

#### Test Qwen2.5-VL

Run:

```bash
vllm serve /root/.cache/modelscope/hub/models/Qwen/Qwen2.5-VL-7B-Instruct \
--max_model_len 16384
```
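
The actual request used for this check is not included here; a request of roughly the following shape (the image URL and prompt are placeholders) exercises the multimodal path:

```bash
# Hypothetical request; the image URL and prompt are placeholders, not the
# exact inputs that produced the output below.
curl -s http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "/root/.cache/modelscope/hub/models/Qwen/Qwen2.5-VL-7B-Instruct",
    "messages": [{
      "role": "user",
      "content": [
        {"type": "image_url", "image_url": {"url": "https://example.com/logo.png"}},
        {"type": "text", "text": "What is the text in the illustration?"}
      ]
    }]
  }'
```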

Output:

```
{"id":"chatcmpl-b02c1ff3415d2462","object":"chat.completion","created":1766129265,"model":"/root/.cache/modelscope/hub/models/Qwen/Qwen2.5-VL-7B-In struct","choices":[{"index":0,"message":{"role":"assistant","content":"The text in the illustration is \"TONGYI Qwen.\" The word \"TONGYI\" is writ  ten in blue, and \"Qwen\" is written in gray. The text appears to be part of a logo or branding design.","refusal":null,"annotations":null,"audio":   null,"function_call":null,"tool_calls":[],"reasoning":null,"reasoning_content":null},"logprobs":null,"finish_reason":"stop","stop_reason":null,"tok    en_ids":null}],"service_tier":null,"system_fingerprint":null,"usage":{"prompt_tokens":78,"total_tokens":129,"completion_tokens":51,"prompt_tokens_d
```

#### Test Qwen3-VL

Run:

```bash
vllm serve /root/.cache/modelscope/hub/models/Qwen/Qwen3-VL-8B-Instruct \
--max_model_len 16384
```

Output:

```
{"id":"chatcmpl-a3a7de5a900a9321","object":"chat.completion","created":1766129586,"model":"/root/.cache/modelscope/hub/models/Qwen/Qwen3-VL-8B-Instruct","choices":[{"index":0,"message":{"role":"assistant","content":"The text in the illustration is **“TONGYI Qwen”**.\n\n### How it looks:\n- **“TONGYI”** is written in **uppercase letters** in a **bold, modern sans-serif font**, colored **blue**.\n- **“Qwen”** is written in **lowercase letters** in a **slightly thinner, elegant sans-serif font**, colored **dark gray**.\n- The two lines of text are stacked vertically, with “TONG","refusal":null,"annotations":null,"audio":null,"function_call":null,"tool_calls":[],"reasoning":null,"reasoning_content":null},"logprobs":null,"finish_reason":"length","stop_reason":null,"token_ids":null}],"service_tier":null,"system_fingerprint":null,"usage":{"prompt_tokens":112,"total_tokens":212,"completion_tokens":100,"prompt_tokens_details":null},"prompt_logprobs":null,"prompt_token_ids":null,"kv_transfer_params":null}
```

- vLLM version: v0.12.0
- vLLM main: ad32e3e19c

---------

Signed-off-by: shen-shanshan <467638484@qq.com>
Author: Shanshan Shen
Date: 2025-12-23 10:04:37 +08:00
Committed by: GitHub
Parent: 35dbdbb398
Commit: 6c478531f8
7 changed files with 71 additions and 260 deletions


@@ -160,53 +160,7 @@
# Future Plan:
# Identify this pattern in torch-npu and remove this patch.
#
-# ** 5. File: worker/patch_qwen2_5_omni.py**
-# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-# 1. `vllm.model_executor.models.qwen2_5_omni_thinker.Qwen2_5OmniThinkerForConditionalGeneration`
-# Why:
-# we have ascend forward context which doesn't work with upstream.
-# How
-# override forward_context in the model file
-# Related PR (if no, explain why):
-# This is a bug by Ascend only. we should drop set_ascend_forward_context
-# Future Plan:
-# Remove this patch once forward_context is refactor.
-#
-# ** 6. File: worker/patch_qwen2_5_vl.py**
-# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-# 1. `vllm.model_executor.models.qwen2_5_vl.Qwen2_5_VLForConditionalGeneration`
-# Why:
-# we have ascend forward context which doesn't work with upstream.
-# How
-# override forward_context in the model file
-# Related PR (if no, explain why):
-# This is a bug by Ascend only. we should drop set_ascend_forward_context
-# Future Plan:
-# Remove this patch once forward_context is refactor.
-#
-# 2. `vllm.model_executor.models.qwen2_vl.Qwen2VisionAttention.forward`
-# Why:
-# the attention is not custom ops
-# How
-# make it to custom ops and pluggable
-# Related PR (if no, explain why):
-# https://github.com/vllm-project/vllm/pull/30125
-# Future Plan:
-# Remove this patch one the PR is merged into vLLM.
-#
-# ** 7. File: worker/patch_qwen3_vl.py**
-# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-# 1. `vllm.model_executor.models.qwen3_vl.Qwen3_VisionTransformer.forward`
-# Why:
-# the attention is not custom ops
-# How
-# make it to custom ops and pluggable
-# Related PR (if no, explain why):
-# https://github.com/vllm-project/vllm/pull/30125
-# Future Plan:
-# Remove this patch one the PR is merged into vLLM.
-#
-# ** 8. File: worker/patch_roberta.py **
+# ** 5. File: worker/patch_roberta.py **
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# 1. `vllm.model_executor.models.bert `
# Why:
@@ -218,7 +172,7 @@
# Future Plan:
# Revert this when CANN support shift aclnn operation
#
-# ** 9. File: worker/patch_triton.py**
+# ** 6. File: worker/patch_triton.py**
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# 1. `vllm.model_executor.layers.mamba.ops`, `vllm.model_executor.layers.fla.ops`
# Why:
@@ -230,7 +184,7 @@
# Future Plan:
# Remove this patch when vLLM support the dispatch function.
#
-# ** 10. File: worker/patch_weight_loader.py**
+# ** 7. File: worker/patch_weight_loader.py**
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# 1. `vllm.model_executor.layers.linear.UnquantizedLinearMethod`
# Why:
@@ -242,7 +196,7 @@
# Future Plan:
# Remove this patch when the bug is fixed.
#
-# ** 11. File: worker/patch_qwen3_next_mtp.py**
+# ** 8. File: worker/patch_qwen3_next_mtp.py**
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# 1. `vllm.v1.worker.utils.bind_kv_cache`
# Why:
@@ -255,7 +209,7 @@
# Future Plan:
# Remove this patch after discussing with vllm community and adapting bind_kv_cache to npu.
#
-# ** 12. File: worker/patch_module.py**
+# ** 9. File: worker/patch_module.py**
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# 1. `vllm.v1.attention.backends.gdn_attn.torch.argsort`
# Why:
@@ -271,7 +225,7 @@
# Remove this patch when bool is supported in 'torch.argsort' func of npu.
# Make 'torch.argsort' in `vllm.v1.attention.backends.gdn_attn` be stable.
#
-# ** 13. File: worker/patch_rejection_sampler.py**
+# ** 10. File: worker/patch_rejection_sampler.py**
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# 1. `vllm.v1.sample.rejection_sampler`
# Why:
@@ -287,7 +241,7 @@
# to override them, then delete the patch file `worker/patch_rejection_sampler.py`.
# 2. make these functions as costom op, then remove AscendRejectionSampler
#
-# ** 14.File: worker/patch_qwen3_next.py**
+# ** 11.File: worker/patch_qwen3_next.py**
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# 1. `vllm.model_executor.models.qwen3_next.Qwen3NextGatedDeltaNet.forward`
# Why:
@@ -299,7 +253,7 @@
# Future Plan:
# Remove this patch when vLLM support these operators.
#
-# ** 15. File: worker/patch_qwen3_next.py**
+# ** 12. File: worker/patch_qwen3_next.py**
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# 1. `vllm.model_executor.models.qwen3_next.Qwen3NextGatedDeltaNet._forward_core`
# Why: