upgrade to vllm 0.11.2 (#4400)

Bump vLLM version to v0.11.2

What's broken and changed by vLLM:
1. structured_output is broken by
https://github.com/vllm-project/vllm/pull/26866
2. get_mrope_input_positions is broken by
https://github.com/vllm-project/vllm/pull/28399
3. graph mode is broken by
https://github.com/vllm-project/vllm/pull/25110; we'll upgrade torch to
2.8 later to fix this
4. embedding is broken by
https://github.com/vllm-project/vllm/pull/27583
5. `get_attn_backend_cls` and the attention backend are broken by
https://github.com/vllm-project/vllm/pull/28534
6. spec decode is broken by
https://github.com/vllm-project/vllm/pull/28771
7. sp feature is broken by
https://github.com/vllm-project/vllm/pull/27126
8. mtp is broken by https://github.com/vllm-project/vllm/pull/27922
9. lora is broken by https://github.com/vllm-project/vllm/pull/21068
10. execute_model is broken by
https://github.com/vllm-project/vllm/pull/26866
11. `VLLM_DISABLE_SHARED_EXPERTS_STREAM` env is broken by
https://github.com/vllm-project/vllm/pull/28159
12. kv cache is broken by https://github.com/vllm-project/vllm/pull/27753
13. dp is broken by https://github.com/vllm-project/vllm/pull/25110

 
What's broken and changed on our side:
1. qwen vl is broken by https://github.com/vllm-project/vllm/pull/28455.
We'll remove the model files in the future to avoid this kind of error.
2. Engine core is broken by
https://github.com/vllm-project/vllm/pull/23691. We'll remove the patch
file in the future.
3. Ascend scheduler is broken by
https://github.com/vllm-project/vllm/pull/28733. We'll remove the Ascend
scheduler later.
4. qwen3-next is broken by
https://github.com/vllm-project/vllm/pull/28083. We'll remove the model
files in the future to avoid this kind of error.
5. qwen vl is broken by https://github.com/vllm-project/vllm/pull/27764.
We'll remove the model files in the future.

Known issues:
1. ray doesn't work.
2. the accuracy of qwen3-next is incorrect.
3. qwen3-vl is broken.
4. prefix cache + ascend scheduler + deepseek v2 lite is broken.
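
For anyone reproducing this bump locally, below is a minimal sketch of pinning the matching vLLM source before installing vllm-ascend. This is a sketch only, assuming a from-source setup; the `VLLM_TARGET_DEVICE=empty` install path and the directory layout are assumptions and may differ in your environment:

```bash
# Minimal sketch (assumptions: from-source install, VLLM_TARGET_DEVICE=empty supported).
git clone --depth 1 --branch v0.11.2 https://github.com/vllm-project/vllm.git
cd vllm
VLLM_TARGET_DEVICE=empty pip install -v .   # install vLLM without building device kernels
cd ..
git clone https://github.com/vllm-project/vllm-ascend.git
cd vllm-ascend
pip install -v .                            # build vllm-ascend against the pinned vLLM
```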

Co-authored-by: MengqingCao <cmq0113@163.com>
Co-authored-by: hfadzxy <starmoon_zhang@163.com>
Co-authored-by: leo-pony <nengjunma@outlook.com>
Co-authored-by: 22dimensions <waitingwind@foxmail.com>
Co-authored-by: shen-shanshan <467638484@qq.com>


- vLLM version: v0.11.2

---------

Signed-off-by: wangxiyuan <wangxiyuan1007@gmail.com>
Signed-off-by: MengqingCao <cmq0113@163.com>
Signed-off-by: hfadzxy <starmoon_zhang@163.com>
Signed-off-by: leo-pony <nengjunma@outlook.com>
Co-authored-by: MengqingCao <cmq0113@163.com>
Co-authored-by: hfadzxy <starmoon_zhang@163.com>
Co-authored-by: leo-pony <nengjunma@outlook.com>
Author: wangxiyuan
Date: 2025-11-26 11:48:58 +08:00
Committed by: GitHub
Parent: d5f77f14d0
Commit: bc69d7cfe1
54 changed files with 744 additions and 437 deletions

@@ -32,7 +32,7 @@ on:
description: how many pods will be pulled up via lws.yaml, indicates number of nodes we need
vllm_version:
required: false
default: "2918c1b49c88c29783c86f78d2c4221cb9622379"
default: "v0.11.2"
type: string
description: vllm version to use
vllm_ascend_remote_url:
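
For context on this input, a hypothetical caller sketch follows: the workflow path and job name are illustrative only and not part of this change; it just shows how `vllm_version` can be overridden, falling back to the new `v0.11.2` default when omitted.

```yaml
# Illustrative only: overriding the vllm_version input of a reusable workflow.
on: workflow_dispatch
jobs:
  multi-node-test:
    uses: ./.github/workflows/multi_node_test.yaml   # hypothetical path
    with:
      vllm_version: "v0.11.2"   # omit to use the default declared above
```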

@@ -36,7 +36,7 @@ jobs:
- name: Get vLLM version
run: |
VLLM_COMMIT=2918c1b49c88c29783c86f78d2c4221cb9622379
VLLM_COMMIT=v0.11.2
echo "VLLM_COMMIT=https://github.com/vllm-project/vllm/commit/$VLLM_COMMIT" >> $GITHUB_ENV
- name: Checkout repository

@@ -51,7 +51,7 @@ jobs:
strategy:
matrix:
include:
- vllm_branch: 2918c1b49c88c29783c86f78d2c4221cb9622379
- vllm_branch: v0.11.2
vllm_ascend_branch: main
max-parallel: 1
container:

@@ -42,7 +42,7 @@ jobs:
lint:
uses: ./.github/workflows/pre-commit.yml
with:
vllm: 2918c1b49c88c29783c86f78d2c4221cb9622379
vllm: v0.11.2
changes:
runs-on: ubuntu-latest
outputs:
@@ -83,7 +83,7 @@ jobs:
VLLM_USE_MODELSCOPE: True
strategy:
matrix:
vllm_version: [2918c1b49c88c29783c86f78d2c4221cb9622379]
vllm_version: [v0.11.2]
steps:
- name: Install packages
run: |
@@ -121,7 +121,10 @@ jobs:
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/Ascend/ascend-toolkit/latest/x86_64-linux/devlib
pytest -sv --cov --cov-report=xml:unittests-coverage.xml tests/ut \
--ignore tests/ut/torchair/models/test_torchair_deepseek_mtp.py \
- --ignore tests/ut/torchair/models/test_torchair_deepseek_v2.py
+ --ignore tests/ut/torchair/models/test_torchair_deepseek_v2.py \
+ --ignore tests/ut/models/test_qwen2_vl.py \
+ --ignore tests/ut/models/test_qwen2_5_vl.py \
+ --ignore tests/ut/models/test_qwen2_5_vl_without_padding.py
- name: Upload coverage to Codecov
# only upload coverage when commits merged
@@ -138,7 +141,7 @@ jobs:
name: e2e-light
strategy:
matrix:
vllm_version: [2918c1b49c88c29783c86f78d2c4221cb9622379]
vllm_version: [v0.11.2]
# Note (yikun): If CI resource are limited we can split job into two chain jobs
needs: [lint, changes]
# only trigger e2e test after lint passed and the change is e2e related with pull request.

@@ -69,7 +69,7 @@ jobs:
name: e2e-full
strategy:
matrix:
vllm_version: [2918c1b49c88c29783c86f78d2c4221cb9622379]
vllm_version: [v0.11.2]
needs: [changes]
if: ${{ needs.changes.outputs.e2e_tracker == 'true' }}
uses: ./.github/workflows/_e2e_test.yaml

@@ -86,7 +86,7 @@ jobs:
tests: tests/e2e/nightly/ops
uses: ./.github/workflows/_e2e_nightly_single_node.yaml
with:
vllm: 2918c1b49c88c29783c86f78d2c4221cb9622379
vllm: v0.11.2
runner: ${{ matrix.test_config.os }}
tests: ${{ matrix.test_config.tests }}
image: 'swr.cn-southwest-2.myhuaweicloud.com/base_image/ascend-ci/vllm-ascend:nightly-a2'
@@ -125,7 +125,7 @@ jobs:
- Qwen3-Next-80B-A3B-Instruct
uses: ./.github/workflows/_e2e_nightly_single_node_models.yaml
with:
vllm: 2918c1b49c88c29783c86f78d2c4221cb9622379
vllm: v0.11.2
runner: ${{ matrix.test_config.os }}
model_list: ${{ toJson(matrix.test_config.model_list) }}
image: swr.cn-southwest-2.myhuaweicloud.com/base_image/ascend-ci/cann:8.2.rc1-910b-ubuntu22.04-py3.11

@@ -136,7 +136,7 @@ jobs:
tests: tests/e2e/nightly/models/test_deepseek_v3_2_exp_w8a8.py
uses: ./.github/workflows/_e2e_nightly_single_node.yaml
with:
vllm: 2918c1b49c88c29783c86f78d2c4221cb9622379
vllm: v0.11.2
runner: ${{ matrix.test_config.os }}
image: 'swr.cn-southwest-2.myhuaweicloud.com/base_image/ascend-ci/vllm-ascend:nightly-a3'
tests: ${{ matrix.test_config.tests }}

@@ -72,7 +72,7 @@ jobs:
- DeepSeek-V2-Lite
uses: ./.github/workflows/_e2e_nightly_single_node_models.yaml
with:
vllm: 2918c1b49c88c29783c86f78d2c4221cb9622379
vllm: v0.11.2
runner: ${{ matrix.runner }}
image: swr.cn-southwest-2.myhuaweicloud.com/base_image/ascend-ci/cann:8.3.rc1-910b-ubuntu22.04-py3.11
model_list: ${{ toJson(matrix.model_list) }}