diff --git a/docs/source/installation.md b/docs/source/installation.md
index 4d49fa1..bcee060 100644
--- a/docs/source/installation.md
+++ b/docs/source/installation.md
@@ -123,10 +123,15 @@ apt update -y
 apt install -y gcc g++ cmake libnuma-dev wget git
 ```
 
-**[Optional]** Config the extra-index of `pip` if you are working on a **x86** machine, so that the torch with cpu could be found:
+**[Optional]** Then configure the extra index of `pip` if you are working on an **x86** machine or using a torch-npu dev version:
 
 ```bash
+# For an x86 machine
 pip config set global.extra-index-url https://download.pytorch.org/whl/cpu/
+# For a torch-npu dev version
+pip config set global.extra-index-url https://mirrors.huaweicloud.com/ascend/repos/pypi
+# For a torch-npu dev version on an x86 machine
+pip config set global.extra-index-url "https://download.pytorch.org/whl/cpu/ https://mirrors.huaweicloud.com/ascend/repos/pypi"
 ```
 
 Then you can install `vllm` and `vllm-ascend` from **pre-built wheel**:
@@ -156,7 +161,6 @@ cd ..
 # Install vLLM Ascend
 git clone --depth 1 --branch |vllm_ascend_version| https://github.com/vllm-project/vllm-ascend.git
 cd vllm-ascend
-export PIP_EXTRA_INDEX_URL=https://mirrors.huaweicloud.com/ascend/repos/pypi
 pip install -v -e .
 cd ..
 ```
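
The removed `export PIP_EXTRA_INDEX_URL=...` line and the new `pip config set` commands are two routes to the same pip option, which is why the diff can drop the export once the config step exists. A minimal sketch of the difference, assuming standard pip behavior (the mirror URL is the one from the diff above):

```shell
# Option 1 -- what the removed line did: set the extra index per shell
# session via an environment variable; it only affects pip runs in this shell.
export PIP_EXTRA_INDEX_URL=https://mirrors.huaweicloud.com/ascend/repos/pypi

# Option 2 -- what the updated docs recommend: write the extra index into
# the user-level pip configuration, picked up by every later pip invocation.
pip config set global.extra-index-url https://mirrors.huaweicloud.com/ascend/repos/pypi

# Inspect the effective configuration. Note that when both are set, the
# environment variable takes precedence over the config file.
pip config list
```

Removing the `export` line from the install steps therefore also avoids silently overriding an extra index the user already configured earlier in the guide.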