[TEST][DOC] Fix doctest and add system package installation (#1375)
### What this PR does / why we need it?
- Fix [doctest](https://github.com/vllm-project/vllm-ascend/actions/workflows/vllm_ascend_doctest.yaml?query=event%3Aschedule)
- Add system package installation
- Add docs for running doctests
- Clean up all extra steps in .github/workflows/vllm_ascend_doctest.yaml
- Change the schedule job interval from 4 to 12 hours

### Does this PR introduce _any_ user-facing change?
No

### How was this patch tested?
- doctest CI passed
- Local test with `/vllm-workspace/vllm-ascend/tests/e2e/run_doctests.sh`

Signed-off-by: Yikun Jiang <yikunkero@gmail.com>
.github/workflows/vllm_ascend_doctest.yaml (vendored, 25 lines changed)
````diff
@@ -30,8 +30,8 @@ on:
       - 'tests/e2e/common.sh'
       - 'tests/e2e/run_doctests.sh'
   schedule:
-    # Runs every 4 hours
-    - cron: '0 */4 * * *'
+    # Runs every 12 hours
+    - cron: '0 */12 * * *'
 
 # Bash shells do not use ~/.profile or ~/.bashrc so these shells need to be explicitly
 # declared as "shell: bash -el {0}" on steps that need to be properly activated.
````
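For reference, `0 */12 * * *` fires at minute 0 of every hour divisible by 12 (00:00 and 12:00 UTC). A minimal shell sketch of that hour filter (the function name is illustrative, not part of the workflow):

```shell
# Sketch of the `*/12` hour field in the new cron schedule:
# an hour matches when it is divisible by 12.
matches_every_12h() {
    [ $(( $1 % 12 )) -eq 0 ]
}

for hour in 0 6 12 18; do
    if matches_every_12h "$hour"; then
        echo "hour $hour: run"
    else
        echo "hour $hour: skip"
    fi
done
# prints: hour 0: run, hour 6: skip, hour 12: run, hour 18: skip
```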
````diff
@@ -65,37 +65,18 @@ jobs:
           cd /vllm-workspace/vllm
           git --no-pager log -1 || true
 
-      - name: Config OS mirrors - Ubuntu
-        if: ${{ !endsWith(matrix.vllm_verison, '-openeuler') }}
-        run: |
-          sed -i 's|ports.ubuntu.com|mirrors.tuna.tsinghua.edu.cn|g' /etc/apt/sources.list
-          apt-get update -y
-          apt install -y gcc g++ libnuma-dev git curl jq
-
-      - name: Config OS mirrors - openEuler
-        if: ${{ endsWith(matrix.vllm_verison, '-openeuler') }}
-        run: |
-          yum update -y
-          yum install -y gcc g++ numactl-devel git curl jq
-
-      - name: Config pip mirrors
-        run: |
-          pip config set global.index-url https://mirrors.tuna.tsinghua.edu.cn/pypi/web/simple
-
       - name: Checkout vllm-project/vllm-ascend repo
         uses: actions/checkout@v4
 
       - name: Run vllm-ascend/tests/e2e/run_doctests.sh
         run: |
           # PWD: /__w/vllm-ascend/vllm-ascend
           # Make sure e2e tests are latest
           echo "Replacing /vllm-workspace/vllm-ascend/tests/e2e ..."
           rm -rf /vllm-workspace/vllm-ascend/tests/e2e
           mkdir -p /vllm-workspace/vllm-ascend/tests
           cp -r tests/e2e /vllm-workspace/vllm-ascend/tests/
 
           # TODO(yikun): Remove this after conf.py merged
           cp docs/source/conf.py /vllm-workspace/vllm-ascend/docs/source/
 
           # Simulate container to enter directory
           cd /workspace
````
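The refresh step above (`rm -rf`, then `mkdir -p`, then `cp -r`) can be sketched in isolation; temp directories stand in for the real checkout and `/vllm-workspace` paths:

```shell
# Sketch of the "make sure e2e tests are latest" step, using temp dirs
# as stand-ins for the fresh checkout and the container's copy.
src=$(mktemp -d)   # stand-in for the checked-out repo
dst=$(mktemp -d)   # stand-in for /vllm-workspace/vllm-ascend

mkdir -p "$src/tests/e2e"
echo "latest" > "$src/tests/e2e/run_doctests.sh"
mkdir -p "$dst/tests/e2e"
echo "stale" > "$dst/tests/e2e/run_doctests.sh"

rm -rf "$dst/tests/e2e"          # drop whatever the image shipped with
mkdir -p "$dst/tests"
cp -r "$src/tests/e2e" "$dst/tests/"

cat "$dst/tests/e2e/run_doctests.sh"   # prints: latest
```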
````diff
@@ -80,6 +80,41 @@ pip install -r requirements-dev.txt
 pytest tests/
 ```
 
+### Run doctest
+
+vllm-ascend provides a `vllm-ascend/tests/e2e/run_doctests.sh` command to run all doctests in the doc files.
+The doctest is a good way to make sure the docs are up to date and the examples are executable; you can run it locally as follows:
+
+```{code-block} bash
+   :substitutions:
+
+# Update DEVICE according to your device (/dev/davinci[0-7])
+export DEVICE=/dev/davinci0
+# Update the vllm-ascend image
+export IMAGE=quay.io/ascend/vllm-ascend:|vllm_ascend_version|
+docker run --rm \
+    --name vllm-ascend \
+    --device $DEVICE \
+    --device /dev/davinci_manager \
+    --device /dev/devmm_svm \
+    --device /dev/hisi_hdc \
+    -v /usr/local/dcmi:/usr/local/dcmi \
+    -v /usr/local/bin/npu-smi:/usr/local/bin/npu-smi \
+    -v /usr/local/Ascend/driver/lib64/:/usr/local/Ascend/driver/lib64/ \
+    -v /usr/local/Ascend/driver/version.info:/usr/local/Ascend/driver/version.info \
+    -v /etc/ascend_install.info:/etc/ascend_install.info \
+    -v /root/.cache:/root/.cache \
+    -p 8000:8000 \
+    -it $IMAGE bash
+
+# Run doctest
+/vllm-workspace/vllm-ascend/tests/e2e/run_doctests.sh
+```
+
+This will reproduce the same environment as the CI: [vllm_ascend_doctest.yaml](https://github.com/vllm-project/vllm-ascend/blob/main/.github/workflows/vllm_ascend_doctest.yaml).
+
 ## DCO and Signed-off-by
 
 When contributing changes to this project, you must agree to the DCO. Commits must include a `Signed-off-by:` header which certifies agreement with the terms of the DCO.
````
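One sanity check worth doing before the `docker run` in the added docs: confirm that `DEVICE` actually names one of the expected NPU nodes. A small hedged sketch of such a check (the glob mirrors the `/dev/davinci[0-7]` comment; it is not part of the docs themselves):

```shell
# Validate DEVICE against the /dev/davinci[0-7] pattern before launching.
DEVICE=/dev/davinci0
case "$DEVICE" in
    /dev/davinci[0-7]) echo "DEVICE looks valid: $DEVICE" ;;
    *) echo "unexpected DEVICE: $DEVICE" >&2; exit 1 ;;
esac
# prints: DEVICE looks valid: /dev/davinci0
```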
````diff
@@ -116,21 +116,22 @@ Once it's done, you can start to set up `vllm` and `vllm-ascend`.
 :selected:
 :sync: pip
 
-First install system dependencies:
+First install system dependencies and config pip mirror:
 
 ```bash
-apt update -y
-apt install -y gcc g++ cmake libnuma-dev wget git
+# Using apt-get with mirror
+sed -i 's|ports.ubuntu.com|mirrors.tuna.tsinghua.edu.cn|g' /etc/apt/sources.list
+apt-get update -y && apt-get install -y gcc g++ cmake libnuma-dev wget git curl jq
+# Or using yum
+# yum update -y && yum install -y gcc g++ cmake numactl-devel wget git curl jq
+# Config pip mirror
+pip config set global.index-url https://mirrors.tuna.tsinghua.edu.cn/pypi/web/simple
 ```
 
 **[Optional]** Then config the extra-index of `pip` if you are working on a x86 machine or using torch-npu dev version:
 
 ```bash
-# For x86 machine
-pip config set global.extra-index-url https://download.pytorch.org/whl/cpu/
-# For torch-npu dev version
-pip config set global.extra-index-url https://mirrors.huaweicloud.com/ascend/repos/pypi
-# For x86 torch-npu dev version
+# For torch-npu dev version or x86 machine
 pip config set global.extra-index-url "https://download.pytorch.org/whl/cpu/ https://mirrors.huaweicloud.com/ascend/repos/pypi"
 ```
````
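For reference, the two `pip config set` calls in the updated docs just persist keys into pip's config file. A hand-rolled sketch of the resulting file, using a temp path as a stand-in for the real pip config location:

```shell
# What the `pip config set` calls above end up writing, hand-rolled into
# a temp file instead of pip's real config location.
conf="$(mktemp -d)/pip.conf"
cat > "$conf" <<'EOF'
[global]
index-url = https://mirrors.tuna.tsinghua.edu.cn/pypi/web/simple
extra-index-url = https://download.pytorch.org/whl/cpu/ https://mirrors.huaweicloud.com/ascend/repos/pypi
EOF
grep -c 'index-url' "$conf"   # prints: 2
```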
````diff
@@ -32,6 +32,8 @@ docker run --rm \
     -v /root/.cache:/root/.cache \
     -p 8000:8000 \
     -it $IMAGE bash
+# Install curl
+apt-get update -y && apt-get install -y curl
 ```
 ::::
````
````diff
@@ -58,6 +60,8 @@ docker run --rm \
     -v /root/.cache:/root/.cache \
     -p 8000:8000 \
     -it $IMAGE bash
+# Install curl
+yum update -y && yum install -y curl
 ```
 ::::
 :::::
````
````diff
@@ -16,6 +16,16 @@
 # limitations under the License.
 # This file is a part of the vllm-ascend project.
 #
+function install_system_packages() {
+    if command -v apt-get >/dev/null; then
+        sed -i 's|ports.ubuntu.com|mirrors.tuna.tsinghua.edu.cn|g' /etc/apt/sources.list
+        apt-get update -y && apt-get install -y curl
+    elif command -v yum >/dev/null; then
+        yum update -y && yum install -y curl
+    else
+        echo "Unknown package manager. Please install curl manually."
+    fi
+}
 
 function simple_test() {
     # Do real import test
````
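The detection idiom in `install_system_packages` can be factored into a standalone sketch (the function name here is illustrative, not part of the change):

```shell
# Standalone sketch of the package-manager probe used above:
# `command -v` succeeds only when the tool is on PATH.
detect_pkg_manager() {
    if command -v apt-get >/dev/null 2>&1; then
        echo "apt-get"
    elif command -v yum >/dev/null 2>&1; then
        echo "yum"
    else
        echo "unknown"
    fi
}

detect_pkg_manager   # prints apt-get, yum, or unknown depending on the host
```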
````diff
@@ -28,6 +38,7 @@ function quickstart_offline_test() {
 }
 
 function quickstart_online_test() {
+    install_system_packages
     vllm serve Qwen/Qwen2.5-0.5B-Instruct &
     wait_url_ready "vllm serve" "localhost:8000/v1/models"
     # Do real curl test
````
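`wait_url_ready` polls until the served URL responds. A hedged sketch of that pattern, with the readiness check parameterized so the sketch needs no live server (the real helper lives in the e2e suite; names, retry budget, and the check command here are illustrative):

```shell
# Hedged sketch of the wait_url_ready pattern: poll a readiness check
# until it succeeds or the retry budget runs out.
wait_ready_sketch() {
    name=$1; check=$2; tries=0
    until eval "$check" >/dev/null 2>&1; do
        tries=$((tries + 1))
        if [ "$tries" -ge 5 ]; then
            echo "$name: not ready"
            return 1
        fi
        sleep 1
    done
    echo "$name: ready"
}

# Stand-in for `curl localhost:8000/v1/models`: a marker file that a
# background job creates one second after the "server" starts.
marker=$(mktemp -u)
( sleep 1; : > "$marker" ) &
wait_ready_sketch "demo server" "test -e $marker"   # prints: demo server: ready
```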
````diff
@@ -18,14 +18,34 @@
 #
 trap clean_venv EXIT
 
+function install_system_packages() {
+    if command -v apt-get >/dev/null; then
+        sed -i 's|ports.ubuntu.com|mirrors.tuna.tsinghua.edu.cn|g' /etc/apt/sources.list
+        apt-get update -y && apt-get install -y gcc g++ cmake libnuma-dev wget git curl jq
+    elif command -v yum >/dev/null; then
+        yum update -y && yum install -y gcc g++ cmake numactl-devel wget git curl jq
+    else
+        echo "Unknown package manager. Please install gcc, g++, numactl-devel, git, curl, and jq manually."
+    fi
+}
+
+function config_pip_mirror() {
+    pip config set global.index-url https://mirrors.tuna.tsinghua.edu.cn/pypi/web/simple
+}
+
 function install_binary_test() {
+    install_system_packages
+    config_pip_mirror
     create_vllm_venv
 
     PIP_VLLM_VERSION=$(get_version pip_vllm_version)
     PIP_VLLM_ASCEND_VERSION=$(get_version pip_vllm_ascend_version)
     _info "====> Install vllm==${PIP_VLLM_VERSION} and vllm-ascend ${PIP_VLLM_ASCEND_VERSION}"
 
+    # Setup extra-index-url for x86 & torch_npu dev version
+    pip config set global.extra-index-url "https://download.pytorch.org/whl/cpu/ https://mirrors.huaweicloud.com/ascend/repos/pypi"
+
     pip install vllm=="$(get_version pip_vllm_version)"
     pip install vllm-ascend=="$(get_version pip_vllm_ascend_version)"
````
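The `get_version` helper used above is defined elsewhere in the e2e suite. As a hypothetical stand-in, it can be thought of as a key-to-version lookup; the env-var fallback and default strings below are invented for illustration:

```shell
# Hypothetical stand-in for get_version: a key -> version lookup reading
# env vars. The real helper and its version sources live in the e2e
# suite; the keys mirror the calls above, the defaults are made up.
get_version() {
    case "$1" in
        pip_vllm_version)        echo "${PIP_VLLM_VERSION:-0.0.0}" ;;
        pip_vllm_ascend_version) echo "${PIP_VLLM_ASCEND_VERSION:-0.0.0}" ;;
        *) echo "unknown version key: $1" >&2; return 1 ;;
    esac
}

PIP_VLLM_VERSION=1.2.3
get_version pip_vllm_version   # prints: 1.2.3
```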