From 7cc024a2d3ccb1bfa293d3093faafc1489c2a376 Mon Sep 17 00:00:00 2001
From: Yikun Jiang
Date: Mon, 17 Feb 2025 22:12:07 +0800
Subject: [PATCH] [Docs] Refactor installation doc (#78)

### What this PR does / why we need it?
Refactor the installation doc.

### Does this PR introduce _any_ user-facing change?
No

### How was this patch tested?
CI, preview

Signed-off-by: Yikun Jiang
---
 README.md                   |   3 +-
 README.zh.md                |   2 +-
 docs/source/installation.md | 175 +++++++++++++++++-------------------
 3 files changed, 85 insertions(+), 95 deletions(-)

diff --git a/README.md b/README.md
index 0bd2437..7d09f4c 100644
--- a/README.md
+++ b/README.md
@@ -68,7 +68,8 @@ Run the following command to start the vLLM server with the [Qwen/Qwen2.5-0.5B-I
 vllm serve Qwen/Qwen2.5-0.5B-Instruct
 curl http://localhost:8000/v1/models
 ```
-**Please refer to [official docs](https://vllm-ascend.readthedocs.io/en/latest/) for more details.**
+
+Please refer to [QuickStart](https://vllm-ascend.readthedocs.io/en/latest/quick_start.html) and [Installation](https://vllm-ascend.readthedocs.io/en/latest/installation.html) for more details.
 
 ## Contributing
 See [CONTRIBUTING](docs/source/developer_guide/contributing.md) for more details, which is a step-by-step guide to help you set up development environment, build and test.

diff --git a/README.zh.md b/README.zh.md
index 701b556..7092dc1 100644
--- a/README.zh.md
+++ b/README.zh.md
@@ -69,7 +69,7 @@
 vllm serve Qwen/Qwen2.5-0.5B-Instruct
 curl http://localhost:8000/v1/models
 ```
 
-**请参阅 [官方文档](https://vllm-ascend.readthedocs.io/en/latest/)以获取更多详细信息**
+请查看[快速开始](https://vllm-ascend.readthedocs.io/en/latest/quick_start.html)和[安装指南](https://vllm-ascend.readthedocs.io/en/latest/installation.html)了解更多。
 
 ## 分支

diff --git a/docs/source/installation.md b/docs/source/installation.md
index c740aa1..d159eaa 100644
--- a/docs/source/installation.md
+++ b/docs/source/installation.md
@@ -17,43 +17,30 @@ This document describes how to install vllm-ascend manually.
 ## Configure a new environment
 
-Before installing the package, you need to make sure firmware/driver and CANN is installed correctly.
+Before installing, you need to make sure the firmware/driver and CANN are installed correctly.
 
 ### Install firmwares and drivers
 
-To verify that the Ascend NPU firmware and driver were correctly installed, run `npu-smi info`.
-
-> Tips: Refer to [Ascend Environment Setup Guide](https://ascend.github.io/docs/sources/ascend/quick_install.html) for more details.
-
-### Install CANN (optional)
-
-The installation of CANN wouldn’t be necessary if you are using a CANN container image, you can skip this step.If you want to install vllm-ascend on a bare environment by hand, you need install CANN first.
+To verify that the Ascend NPU firmware and driver were correctly installed, run:
 
 ```bash
-# Create a virtual environment
-python -m venv vllm-ascend-env
-source vllm-ascend-env/bin/activate
-
-# Install required python packages.
-pip3 install -i https://pypi.tuna.tsinghua.edu.cn/simple attrs numpy==1.24.0 decorator sympy cffi pyyaml pathlib2 psutil protobuf scipy requests absl-py wheel typing_extensions
-
-# Download and install the CANN package.
-wget https://ascend-repo.obs.cn-east-2.myhuaweicloud.com/CANN/CANN%208.0.0/Ascend-cann-toolkit_8.0.0_linux-aarch64.run
-sh Ascend-cann-toolkit_8.0.0_linux-aarch64.run --full
-wget https://ascend-repo.obs.cn-east-2.myhuaweicloud.com/CANN/CANN%208.0.0/Ascend-cann-kernels-910b_8.0.0_linux-aarch64.run
-sh Ascend-cann-kernels-910b_8.0.0_linux-aarch64.run --full
+npu-smi info
 ```
 
-Once it's done, you can read either **Set up using Python** or **Set up using Docker** section to install and use vllm-ascend.
+Refer to [Ascend Environment Setup Guide](https://ascend.github.io/docs/sources/ascend/quick_install.html) for more details.
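The `npu-smi info` check that this hunk introduces can also be scripted so a setup pipeline fails fast. A minimal sketch — the `require_cmd` helper is an illustrative assumption, not part of the patch or of `npu-smi`:

```shell
# Sketch: verify a required CLI tool exists before continuing with setup.
# require_cmd is a hypothetical helper, not from the vllm-ascend docs.
require_cmd() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "ok: $1"
  else
    echo "missing: $1"
    return 1
  fi
}

# On a real Ascend host you would then run:
# require_cmd npu-smi && npu-smi info
```

On machines without the driver installed, this prints `missing: npu-smi` and exits non-zero instead of failing later with a less obvious error.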
-## Set up using Python
+### Install CANN
 
-> Notes: If you are installing vllm-ascend on an arch64 machine, The `-f https://download.pytorch.org/whl/torch/` command parameter in this section can be omitted. It's only used for find torch package on x86 machine.
+:::::{tab-set}
+:sync-group: install
 
-Please make sure that CANN is installed. It can be done by **Configure a new environment** step. Or by using an CANN container directly:
+::::{tab-item} Using pip
+:selected:
+:sync: pip
+
+The easiest way to prepare your CANN environment is to use a container directly:
 
 ```bash
-# Setup a CANN container using docker
 # Update DEVICE according to your device (/dev/davinci[0-7])
 DEVICE=/dev/davinci7
@@ -71,58 +58,75 @@ docker run --rm \
   -it quay.io/ascend/cann:8.0.0.beta1-910b-ubuntu22.04-py3.10 bash
 ```
 
-Then you can install vllm-ascend from **pre-built wheel** or **source code**.
-
-### Install from Pre-built wheels (Not support yet)
-
-1. Install vllm
-
-   Since vllm on pypi is not compatible with cpu, we need to install vllm from source code.
-
-   ```bash
-   git clone --depth 1 --branch v0.7.1 https://github.com/vllm-project/vllm
-   cd vllm
-   VLLM_TARGET_DEVICE=empty pip install . -f https://download.pytorch.org/whl/torch/
-   ```
-
-2. Install vllm-ascend
-
-   ```bash
-   pip install vllm-ascend -f https://download.pytorch.org/whl/torch/
-   ```
-
-### Install from source code
-
-1. Install vllm
-
-   ```bash
-   git clone https://github.com/vllm-project/vllm
-   cd vllm
-   VLLM_TARGET_DEVICE=empty pip install . -f https://download.pytorch.org/whl/torch/
-   ```
-
-2. Install vllm-ascend
-
-   ```bash
-   git clone https://github.com/vllm-project/vllm-ascend.git
-   cd vllm-ascend
-   pip install -e . -f https://download.pytorch.org/whl/torch/
-   ```
-
-## Set up using Docker
-
-> Tips: CANN, torch, torch_npu, vllm and vllm_ascend are pre-installed in the Docker image already.
-
-### Pre-built images (Not support yet)
-
-Just pull the image and run it with bash.
+You can also install CANN manually:
 
 ```bash
-docker pull quay.io/ascend/vllm-ascend:latest
+# Create a virtual environment
+python -m venv vllm-ascend-env
+source vllm-ascend-env/bin/activate
+
+# Install required python packages.
+pip3 install -i https://pypi.tuna.tsinghua.edu.cn/simple attrs numpy==1.24.0 decorator sympy cffi pyyaml pathlib2 psutil protobuf scipy requests absl-py wheel typing_extensions
+
+# Download and install the CANN package.
+wget https://ascend-repo.obs.cn-east-2.myhuaweicloud.com/CANN/CANN%208.0.0/Ascend-cann-toolkit_8.0.0_linux-aarch64.run
+sh Ascend-cann-toolkit_8.0.0_linux-aarch64.run --full
+wget https://ascend-repo.obs.cn-east-2.myhuaweicloud.com/CANN/CANN%208.0.0/Ascend-cann-kernels-910b_8.0.0_linux-aarch64.run
+sh Ascend-cann-kernels-910b_8.0.0_linux-aarch64.run --full
+```
+
+::::
+
+::::{tab-item} Using Docker
+:sync: docker
+No extra steps are needed if you are using the `vllm-ascend` image.
+::::
+:::::
+
+Once it's done, you can start to set up `vllm` and `vllm-ascend`.
+
+## Set up vllm and vllm-ascend
+
+:::::{tab-set}
+:sync-group: install
+
+::::{tab-item} Using pip
+:selected:
+:sync: pip
+
+You can install `vllm` and `vllm-ascend` from **pre-built wheels**:
+
+```bash
+pip install vllm vllm-ascend -f https://download.pytorch.org/whl/torch/
+```
+
+or build them from **source code**:
+
+```bash
+git clone https://github.com/vllm-project/vllm
+cd vllm
+VLLM_TARGET_DEVICE=empty pip install . -f https://download.pytorch.org/whl/torch/
+
+git clone https://github.com/vllm-project/vllm-ascend.git
+cd vllm-ascend
+pip install -e . -f https://download.pytorch.org/whl/torch/
+```
+
+::::
+
+::::{tab-item} Using docker
+:sync: docker
+
+You can just pull the **pre-built image** and run it with bash.
+
+```bash
+
 # Update DEVICE according to your device (/dev/davinci[0-7])
 DEVICE=/dev/davinci7
-
+# Update the vllm-ascend image
+IMAGE=quay.io/ascend/vllm-ascend:main
+docker pull $IMAGE
 docker run --rm \
   --name vllm-ascend-env \
   --device $DEVICE \
   --device /dev/davinci_manager \
   --device /dev/devmm_svm \
   --device /dev/hisi_hdc \
   -v /usr/local/dcmi:/usr/local/dcmi \
   -v /usr/local/bin/npu-smi:/usr/local/bin/npu-smi \
@@ -134,36 +138,21 @@ docker run --rm \
   -v /usr/local/Ascend/driver/lib64/:/usr/local/Ascend/driver/lib64/ \
   -v /usr/local/Ascend/driver/version.info:/usr/local/Ascend/driver/version.info \
   -v /etc/ascend_install.info:/etc/ascend_install.info \
-  -it quay.io/ascend/vllm-ascend:0.7.1rc1 bash
+  -it $IMAGE bash
 ```
 
-### Build image from source
-
-If you want to build the docker image from main branch, you can do it by following steps:
+or build the image from **source code**:
 
 ```bash
 git clone https://github.com/vllm-project/vllm-ascend.git
 cd vllm-ascend
-
 docker build -t vllm-ascend-dev-image:latest -f ./Dockerfile .
-
-# Update DEVICE according to your device (/dev/davinci[0-7])
-DEVICE=/dev/davinci7
-
-docker run --rm \
-  --name vllm-ascend-env \
-  --device $DEVICE \
-  --device /dev/davinci_manager \
-  --device /dev/devmm_svm \
-  --device /dev/hisi_hdc \
-  -v /usr/local/dcmi:/usr/local/dcmi \
-  -v /usr/local/bin/npu-smi:/usr/local/bin/npu-smi \
-  -v /usr/local/Ascend/driver/lib64/:/usr/local/Ascend/driver/lib64/ \
-  -v /usr/local/Ascend/driver/version.info:/usr/local/Ascend/driver/version.info \
-  -v /etc/ascend_install.info:/etc/ascend_install.info \
-  -it vllm-ascend-dev-image:latest bash
 ```
+
+::::
+
+:::::
+
 ## Extra information
 
 ### Verify installation
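The patch ends at the `### Verify installation` section. A quick companion sketch for that step — the `probe` helper is an illustrative assumption, not from the docs, and the module names are taken from the two projects in this patch:

```shell
# Sketch: report whether the packages installed above resolve in the
# current Python environment. probe is a hypothetical helper.
probe() {
  if python3 -c "import $1" >/dev/null 2>&1; then
    echo "$1: found"
  else
    echo "$1: NOT installed"
  fi
}

# Module names assumed from the projects in this patch.
probe vllm
probe vllm_ascend
```

Running this after either the pip or the docker setup gives a one-line answer per package instead of a stack trace on first use.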