[CI] add codespell CI and fix format.sh (#827)

1. Fix the format check error so that format.sh works.
2. Add a codespell check CI job.
3. Add the missing required package (`einops`) for vllm-ascend.

Signed-off-by: wangxiyuan <wangxiyuan1007@gmail.com>
Author: wangxiyuan
Committed: 2025-05-12 22:04:48 +08:00 (committed by GitHub)
Commit: 6193ba679b (parent: 5998704c08)
11 changed files with 60 additions and 8 deletions

.github/workflows/codespell.yml (new file, 47 lines)

@@ -0,0 +1,47 @@
#
# Copyright 2023 The vLLM team.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Adapted from vllm-project/vllm/blob/main/.github
#
name: codespell

on:
  pull_request:
    branches:
      - 'main'
      - '*-dev'

jobs:
  codespell:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        python-version: ["3.12"]
    steps:
      - uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
      - name: Set up Python ${{ matrix.python-version }}
        uses: actions/setup-python@a26af69be951a213d495a4c3e4e4022e16d87065 # v5.6.0
        with:
          python-version: ${{ matrix.python-version }}
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install -r requirements-lint.txt
      - name: Run codespell check
        run: |
          CODESPELL_EXCLUDES=('--skip' 'tests/prompts/**,./benchmarks/sonnet.txt,*tests/lora/data/**,build/**,./vllm_ascend.egg-info/**')
          CODESPELL_IGNORE_WORDS=('-L' 'CANN,cann,NNAL,nnal,ASCEND,ascend,EnQue')
          codespell --toml pyproject.toml "${CODESPELL_EXCLUDES[@]}" "${CODESPELL_IGNORE_WORDS[@]}"
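
To reproduce this check locally before pushing, the same invocation can be mirrored in a short script. A minimal sketch, assuming codespell has already been installed via `pip install -r requirements-lint.txt` and the script runs from the repository root:

```python
# Local mirror of the CI codespell step (a sketch, not part of this commit).
import subprocess

excludes = [
    "--skip",
    "tests/prompts/**,./benchmarks/sonnet.txt,"
    "*tests/lora/data/**,build/**,./vllm_ascend.egg-info/**",
]
# Ascend-specific terms that codespell would otherwise flag as misspellings.
ignore_words = ["-L", "CANN,cann,NNAL,nnal,ASCEND,ascend,EnQue"]

subprocess.run(
    ["codespell", "--toml", "pyproject.toml", *excludes, *ignore_words],
    check=True,  # fail loudly on findings, like the CI job does
)
```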

@@ -74,7 +74,7 @@ Usually, each minor version of vLLM (such as 0.7) will correspond to a vLLM Asce
For main branch, vLLM Ascend should works with vLLM main branch and latest 1 or 2 release version. So to ensure the backward compatibility, we will do the following:
- Both main branch and target vLLM release is tested by Ascend E2E CI. For example, currently, vLLM main branch and vLLM 0.8.4 are tested now.
-- For code changes, we will make sure that the changes are compatible with the latest 1 or 2 vLLM release version as well. In this case, vLLM Ascend introduced a version check machinism inner the code. It'll check the version of installed vLLM pacakge first to decide which code logic to use. If users hit the `InvalidVersion` error, it sometimes means that they have installed an dev/editable version of vLLM package. In this case, we provide the env variable `VLLM_VERSION` to let users specify the version of vLLM package to use.
+- For code changes, we will make sure that the changes are compatible with the latest 1 or 2 vLLM release version as well. In this case, vLLM Ascend introduced a version check machinism inner the code. It'll check the version of installed vLLM package first to decide which code logic to use. If users hit the `InvalidVersion` error, it sometimes means that they have installed an dev/editable version of vLLM package. In this case, we provide the env variable `VLLM_VERSION` to let users specify the version of vLLM package to use.
- For documentation changes, we will make sure that the changes are compatible with the latest 1 or 2 vLLM release version as well. Note should be added if there are any breaking changes.
## Document Branch Policy

@@ -84,7 +84,7 @@ Currently, only 1P1D is supported by vllm. For vllm-ascend, it'll be done by [th
### 10. Does vllm-ascend support quantization method?
-Currently, w8a8 quantization is already supported by vllm-ascend originally on v0.8.4rc2 or heigher, If you're using vllm 0.7.3 version, w8a8 quantization is supporeted with the integration of vllm-ascend and mindie-turbo, please use `pip install vllm-ascend[mindie-turbo]`.
+Currently, w8a8 quantization is already supported by vllm-ascend originally on v0.8.4rc2 or higher, If you're using vllm 0.7.3 version, w8a8 quantization is supporeted with the integration of vllm-ascend and mindie-turbo, please use `pip install vllm-ascend[mindie-turbo]`.
### 11. How to run w8a8 DeepSeek model?

@@ -2,7 +2,7 @@
## Run docker container:
:::{note}
-w8a8 quantization feature is supported by v0.8.4rc2 or highter
+w8a8 quantization feature is supported by v0.8.4rc2 or higher
:::
```{code-block} bash

@@ -33,8 +33,8 @@ This is the second release candidate of v0.8.4 for vllm-ascend. Please follow th
- DeepSeek V3/R1 works with DP, TP and MTP now. Please note that it's still in experimental status. Let us know if you hit any problem. [#429](https://github.com/vllm-project/vllm-ascend/pull/429) [#585](https://github.com/vllm-project/vllm-ascend/pull/585) [#626](https://github.com/vllm-project/vllm-ascend/pull/626) [#636](https://github.com/vllm-project/vllm-ascend/pull/636) [#671](https://github.com/vllm-project/vllm-ascend/pull/671)
### Core
-- ACLGraph feature is supported with V1 engine now. It's disabled by default because this feature rely on CANN 8.1 release. We'll make it avaiable by default in the next release [#426](https://github.com/vllm-project/vllm-ascend/pull/426)
-- Upgrade PyTorch to 2.5.1. vLLM Ascend no longer relies on the dev version of torch-npu now. Now users don't need to install the torch-npu by hand. The 2.5.1 version of torch-npu will be installed automaticlly. [#661](https://github.com/vllm-project/vllm-ascend/pull/661)
+- ACLGraph feature is supported with V1 engine now. It's disabled by default because this feature rely on CANN 8.1 release. We'll make it available by default in the next release [#426](https://github.com/vllm-project/vllm-ascend/pull/426)
+- Upgrade PyTorch to 2.5.1. vLLM Ascend no longer relies on the dev version of torch-npu now. Now users don't need to install the torch-npu by hand. The 2.5.1 version of torch-npu will be installed automatically. [#661](https://github.com/vllm-project/vllm-ascend/pull/661)
### Other
- MiniCPM model works now. [#645](https://github.com/vllm-project/vllm-ascend/pull/645)

@@ -105,7 +105,7 @@ def metadata_collect_trigger(poller, router_socket):
        start_time = time.time()
        socks = dict(poller.poll(timeout=500))  # timeout in 500ms
        if socks:
-           logger.debug("receive socks from moniter threads: ", socks)
+           logger.debug("receive socks from monitor threads: ", socks)
            if router_socket in socks:
                messages = router_socket.recv_multipart()
                try:
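
The fragment above polls a ZMQ ROUTER socket with a 500 ms timeout before draining multipart messages. A self-contained sketch of that polling pattern (the endpoint and single-socket setup are illustrative assumptions, not this repo's actual wiring):

```python
# Minimal pyzmq mirror of the polling pattern in metadata_collect_trigger.
import zmq

ctx = zmq.Context.instance()
router_socket = ctx.socket(zmq.ROUTER)
router_socket.bind("tcp://127.0.0.1:5555")

poller = zmq.Poller()
poller.register(router_socket, zmq.POLLIN)

socks = dict(poller.poll(timeout=500))  # wait up to 500 ms, as above
if router_socket in socks:
    # ROUTER sockets prepend an identity frame to every message.
    identity, *frames = router_socket.recv_multipart()
    print(identity, frames)
else:
    print("no metadata arrived within the timeout")
```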

@@ -3,6 +3,7 @@
requires = [
    "cmake>=3.26",
    "decorator",
+    "einops",
    "numpy<2.0.0",
    "packaging",
    "pip",

@@ -1,8 +1,11 @@
-r requirements-lint.txt
-r requirements.txt
modelscope
openai
pytest >= 6.0
pytest-asyncio
lm-eval
ray
+types-jsonschema
+xgrammar
+zmq

@@ -1,6 +1,7 @@
# Should be mirrored in pyporject.toml
cmake>=3.26
decorator
+einops
numpy<2.0.0
packaging
pip

@@ -199,7 +199,7 @@ class SimpleBuffer(KVLookupBufferBase):
            hidden = hidden.view(num_tokens, self.hidden_size)
        except Exception as e:
            logger.warning(
-                f"Faile to receive kv cache and hidden states of request: {orig_req_id} "
+                f"Fail to receive kv cache and hidden states of request: {orig_req_id} "
                f"Error is {str(e)}")
            return [None, None, None, None]
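
This hunk treats any failure while fetching the KV cache or hidden states as a cache miss. A stripped-down sketch of that fallback pattern (the `recv` call and tensor shapes are hypothetical, for illustration only):

```python
# Guarded-receive sketch: degrade to an all-None "miss" instead of raising.
import logging
from typing import Any, List, Optional

logger = logging.getLogger(__name__)

def receive_or_miss(buffer: Any, orig_req_id: str,
                    num_tokens: int, hidden_size: int) -> List[Optional[Any]]:
    try:
        key, value, hidden = buffer.recv(orig_req_id)  # hypothetical API
        hidden = hidden.view(num_tokens, hidden_size)  # reshape as in the diff
        return [key, value, hidden, orig_req_id]
    except Exception as e:
        # Mirror the diff above: log the failure and signal a miss.
        logger.warning(
            f"Fail to receive kv cache and hidden states of request: "
            f"{orig_req_id} Error is {str(e)}")
        return [None, None, None, None]
```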

@@ -113,7 +113,7 @@ def vllm_version_is(target_vllm_version: str):
        raise ValueError(
            f"Invalid vllm version {vllm_version} found. A dev version of vllm "
            "is installed probably. Set the environment variable VLLM_VERSION "
-            "to control it by hand. And please make sure the vaule follows the "
+            "to control it by hand. And please make sure the value follows the "
            "format of x.y.z.")