Update corresponding vLLM commit ID to 12/29 (#5475)

### What this PR does / why we need it?
- Fixes breakage caused by the following vLLM change (see the sketch below):
  1. [[BugFix] register quant scale tensors as buffer #31395](https://github.com/vllm-project/vllm/pull/31395)
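
The upstream PR registers quantization scale tensors as module buffers while the model is being constructed, which appears to require an active vLLM config to be in scope during weight loading; hence the `set_current_vllm_config` wrapper added to `worker.py` below. A minimal sketch of the pattern (hedged: `runner` and `profile_context` are placeholder names for illustration, not the real vLLM Ascend objects):

```python
from contextlib import nullcontext

from vllm.config import VllmConfig, set_current_vllm_config


def load_model(runner, vllm_config: VllmConfig, profile_context=None):
    # Fall back to a no-op context when no memory-profiling context is
    # available, mirroring the nullcontext fallback in NPUWorker.
    context = profile_context if profile_context is not None else nullcontext()
    # Model construction may consult the active vLLM config (for example
    # when registering quant scale tensors as buffers), so keep the config
    # current for the duration of the load.
    with context, set_current_vllm_config(vllm_config):
        runner.load_model()
```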

### Does this PR introduce _any_ user-facing change?

### How was this patch tested?

- vLLM version: v0.13.0
- vLLM main: 5326c89803

---------

Signed-off-by: leo-pony <nengjunma@outlook.com>
Author: Nengjun Ma
Date: 2025-12-29 22:48:05 +08:00
Committed by: GitHub
Commit: 5e96f94d2a (parent: 51da5ea543)
5 changed files with 9 additions and 8 deletions


```diff
@@ -34,7 +34,7 @@ jobs:
     steps:
       - name: Get vLLM version
         run: |
-          VLLM_COMMIT=5326c89803566a131c928f7fdd2100b75c981a42
+          VLLM_COMMIT=45c1ca1ca1ee8fa06df263c8715e8a412ff408d4
           echo "VLLM_COMMIT=https://github.com/vllm-project/vllm/commit/$VLLM_COMMIT" >> $GITHUB_ENV
       - name: Checkout repository
```


```diff
@@ -74,7 +74,7 @@ jobs:
     name: e2e-full
     strategy:
       matrix:
-        vllm_version: [5326c89803566a131c928f7fdd2100b75c981a42, v0.13.0]
+        vllm_version: [45c1ca1ca1ee8fa06df263c8715e8a412ff408d4, v0.13.0]
     needs: [changes]
     if: ${{ needs.changes.outputs.e2e_tracker == 'true' }}
     uses: ./.github/workflows/_e2e_test.yaml
```


```diff
@@ -42,7 +42,7 @@ jobs:
   lint:
     uses: ./.github/workflows/_pre_commit.yml
     with:
-      vllm: 5326c89803566a131c928f7fdd2100b75c981a42
+      vllm: 45c1ca1ca1ee8fa06df263c8715e8a412ff408d4
   changes:
     runs-on: linux-aarch64-a2-0
     outputs:
@@ -90,7 +90,7 @@ jobs:
       SOC_VERSION: ascend910b1
     strategy:
       matrix:
-        vllm_version: [5326c89803566a131c928f7fdd2100b75c981a42, v0.13.0]
+        vllm_version: [45c1ca1ca1ee8fa06df263c8715e8a412ff408d4, v0.13.0]
     steps:
       - name: Free up disk space
@@ -160,7 +160,7 @@ jobs:
     name: e2e-light
     strategy:
       matrix:
-        vllm_version: [5326c89803566a131c928f7fdd2100b75c981a42, v0.13.0]
+        vllm_version: [45c1ca1ca1ee8fa06df263c8715e8a412ff408d4, v0.13.0]
     # Note (yikun): If CI resource are limited we can split job into two chain jobs
     needs: [lint, changes]
     # only trigger e2e test after lint passed and the change is e2e related with pull request.
```


```diff
@@ -51,7 +51,7 @@ If you're using v0.7.3, don't forget to install [mindie-turbo](https://pypi.org/
 For main branch of vLLM Ascend, we usually make it compatible with the latest vLLM release and a newer commit hash of vLLM. Please note that this table is usually updated. Please check it regularly.
 
 | vLLM Ascend | vLLM | Python | Stable CANN | PyTorch/torch_npu |
 |-------------|--------------|------------------|-------------|--------------------|
-| main | 5326c89803566a131c928f7fdd2100b75c981a42, v0.13.0 tag | >= 3.10, < 3.12 | 8.3.RC2 | 2.8.0 / 2.8.0 |
+| main | 45c1ca1ca1ee8fa06df263c8715e8a412ff408d4, v0.13.0 tag | >= 3.10, < 3.12 | 8.3.RC2 | 2.8.0 / 2.8.0 |
 
 ## Release cadence
```


```diff
@@ -27,7 +27,7 @@ import torch_npu
 import vllm.envs as envs_vllm
 from torch_npu.op_plugin.atb._atb_ops import _register_atb_extensions
 from torch_npu.profiler import dynamic_profile as dp
-from vllm.config import VllmConfig
+from vllm.config import VllmConfig, set_current_vllm_config
 from vllm.distributed import (ensure_model_parallel_initialized,
                               init_distributed_environment)
 from vllm.distributed.ec_transfer import ensure_ec_transfer_initialized
@@ -351,7 +351,8 @@ class NPUWorker(WorkerBase):
         else:
             from contextlib import nullcontext
             context = nullcontext()  # type: ignore
-        with context:
+        with context, set_current_vllm_config(self.vllm_config):
             self.model_runner.load_model()
 
     def compile_or_warm_up_model(self) -> None:
```
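
Stacking both context managers in a single `with` statement keeps the memory-profiling context (or its `nullcontext` fallback) and the config context active for the whole of `load_model()`. For reference, a short sketch of the `set_current_vllm_config` semantics the fix relies on (hedged: the fallback behavior outside the context may differ across vLLM versions; `my_config` is a placeholder you construct yourself):

```python
from vllm.config import VllmConfig, get_current_vllm_config, set_current_vllm_config

my_config = VllmConfig()  # placeholder config for illustration

# Inside the context manager, lookups see the config we set; outside it,
# recent vLLM versions fall back to a default config (typically with a
# warning), which is presumably what tripped up the new buffer
# registration during model load.
with set_current_vllm_config(my_config):
    assert get_current_vllm_config() is my_config
```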