### What this PR does / why we need it?
Bump the torch_npu version to dev20250308.3 to fix a performance regression in
the multi-stream case: e04c580d07.
### Does this PR introduce _any_ user-facing change?
NO
### How was this patch tested?
CI passed
Signed-off-by: Yikun Jiang <yikunkero@gmail.com>
Update the torch-npu version to fix the torch_npu `exponential_` accuracy
issue. With this update, the precision issue when setting `temperature > 0` is
fixed.
---------
Signed-off-by: Mengqing Cao <cmq0113@163.com>
### What this PR does / why we need it?
Add a dispatch job that assigns jobs to devices dynamically, in the two stages
below. The dispatch job spends roughly an extra `10s * parallel number + 30s`
waiting for other jobs to launch their containers and release the lock.
- **Stage 1: Acquire lock**
Add a dispatch job that uses `lockfile` to acquire a lock and then picks a
device number dynamically (see the sketch after this list).
- **Stage 2.1: Launch container with dynamic device**
Pass the device number via the job output and start the container job with the
dynamic device.
- **Stage 2.2: Release lock**
Once the container job has started, release the lock.
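A minimal sketch of the lock flow, assuming one lock file per device under
`/tmp` and the `lockfile` utility from procmail (paths, device count, and
variable names are illustrative, not the actual implementation):

```bash
# Hypothetical dispatch step: claim the first free device. `lockfile -r 0`
# fails immediately when the lock is already held, so we can try the next one.
for i in 0 1 2 3 4 5 6 7; do
    if lockfile -r 0 "/tmp/davinci${i}.lock" 2>/dev/null; then
        # Stage 2.1: expose the claimed device to the container job.
        echo "device=/dev/davinci${i}" >> "$GITHUB_OUTPUT"
        break
    fi
done
# Stage 2.2 (in a later step, once the container job has started):
# rm -f "/tmp/davinci${i}.lock"
```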
On the backend, we use multiple runner directories to set up multiple
self-hosted runners as a load balancer:
```
$ pwd
/home/action
$ ll | grep actions
drwx------ 6 action action 4096 Mar 7 08:55 actions-runner-01
drwx------ 6 action action 4096 Mar 7 08:55 actions-runner-02
drwx------ 6 action action 4096 Mar 7 08:55 actions-runner-03
drwx------ 6 action action 4096 Mar 7 08:56 actions-runner-04
drwx------ 4 action action 4096 Jan 24 22:08 actions-runner-05
drwx------ 4 action action 4096 Jan 24 22:08 actions-runner-06
```
```
adduser -G docker action           # create the `action` user in the `docker` group
su action
pip3 install docker prettytable    # Docker SDK and table output for the dispatch scripts
sudo yum install procmail          # procmail provides the `lockfile` command
```
### Does this PR introduce _any_ user-facing change?
NO
### How was this patch tested?
- CI passed
- E2E tested manually, triggered 3 jobs in parallel:
  - [1st job](https://github.com/vllm-project/vllm-ascend/actions/runs/13711345757/job/38348309297) dispatched to /dev/davinci2
  - [2nd job](https://github.com/vllm-project/vllm-ascend/actions/runs/13711348739/job/38348316250) dispatched to /dev/davinci3
  - [3rd job](https://github.com/vllm-project/vllm-ascend/actions/runs/13711351493/job/38348324551) dispatched to /dev/davinci4
Signed-off-by: Yikun Jiang <yikunkero@gmail.com>
### What this PR does / why we need it?
Recover vllm-ascend dev image
### Does this PR introduce _any_ user-facing change?
No
### How was this patch tested?
CI passed
Signed-off-by: Yikun Jiang <yikunkero@gmail.com>
### What this PR does / why we need it?
Add `HF_TOKEN` for downloading models that require access rights from the
Hugging Face Hub. This will fix the CI errors in #123 and #76.
Signed-off-by: MengqingCao <cmq0113@163.com>
Enable CI on all branches.
Install torch-npu 2.5.1.dev20250218 so that we can enable CI on all branches
and prepare for merging 0.7.1-dev into main.
---------
Signed-off-by: MengqingCao <cmq0113@163.com>
### What this PR does / why we need it?
Add container image build CI:
- Enable branch and tag docker image publishing
  - branch images: `vllm-ascend:main`, `vllm-ascend:v0.7.1-dev`
  - tag image: `vllm-ascend:v0.7.1rc1`
- Enable PR docker image build check
- Other changes:
  - Prepare a lowercase `REPO_OWNER`, because ghcr requires lowercase image
    names (see the sketch after this list)
  - Add a `Free up disk space` step to avoid `No space left on device`, as in
    https://github.com/vllm-project/vllm-ascend/issues/27
  - Set up QEMU with a pinned image to resolve
    https://github.com/docker/setup-qemu-action/issues/198
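A minimal sketch of the lowercase step (variable names are assumed, not taken
from the actual workflow):

```bash
# Hypothetical workflow step: ghcr.io image names must be lowercase, but
# GITHUB_REPOSITORY_OWNER may contain uppercase characters (e.g. "Yikun").
echo "REPO_OWNER=$(echo "${GITHUB_REPOSITORY_OWNER}" | tr '[:upper:]' '[:lower:]')" >> "$GITHUB_ENV"
```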
### Does this PR introduce _any_ user-facing change?
NO
### How was this patch tested?
Build: CI passed with [push false](https://github.com/vllm-project/vllm-ascend/actions/runs/13347017608/job/37278724158?pr=64).
Notes for the test cases:
1. Merge commits to the `main` and `v0.7.1-dev` branches:
   - ✅ main: https://github.com/Yikun/vllm-ascend/actions/runs/13347238961 --> ghcr.io/yikun/vllm-ascend:main OK
   - ✅ v0.7.1-dev: https://github.com/Yikun/vllm-ascend/actions/runs/13347229912 --> ghcr.io/yikun/vllm-ascend:v0.7.1-dev OK
2. Create PEP 440 tags (e.g. v0.7.1rc1, v0.7.1, v0.7.1rc1.dev1) from GitHub
   releases; a final release also gets `latest` (see the sketch after this list):
   - ✅ v0.7.5 --> v0.7.5, latest
   - ✅ v0.7.5rc1 --> v0.7.5rc1
   - ✅ v0.7.5rc1.dev1 --> v0.7.5rc1.dev1
   - v0.7.5rc1.post1 --> v0.7.5rc1.post1 (no latest, add a TODO here)
3. Create an unknown tag from a GitHub release:
   - ✅ Creating 0.7.1 on v0.7.1-dev does not trigger a build (only tags
     prefixed with `v` trigger one)
4. Create a tag from git:
   - ✅ Also works: `git tag v0.7.99; git push origin v0.7.99` triggers
     publish-image
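A sketch of the `latest` rule these cases exercise (the regex and variable
names are assumptions, not the actual workflow code): as tested above, only a
plain `X.Y.Z` release also gets `latest`:

```bash
# Hypothetical check: strip the "v" prefix, then add `latest` only for final
# releases (v0.7.5 -> latest; rc/dev/post suffixes -> version tag only).
TAG="${GITHUB_REF#refs/tags/v}"
if [[ "$TAG" =~ ^[0-9]+\.[0-9]+\.[0-9]+$ ]]; then
    echo "tags=$TAG,latest" >> "$GITHUB_OUTPUT"
else
    echo "tags=$TAG" >> "$GITHUB_OUTPUT"
fi
```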
Signed-off-by: Yikun Jiang <yikunkero@gmail.com>
### What this PR does / why we need it?
Switch to the latest CANN version.
### Does this PR introduce _any_ user-facing change?
No
### How was this patch tested?
CI passed
Signed-off-by: Yikun Jiang <yikunkero@gmail.com>
### What this PR does / why we need it?
This patch enables the doc build for vllm-ascend:
- Add sphinx build for vllm-ascend
- Enable readthedocs for vllm-ascend
- Fix CI:
  - Exclude vllm-empty/tests/mistral_tool_use to skip the `You need to agree
    to share your contact information to access this model` error introduced
    in 314cfade02
  - Install the test requirements to fix
    https://github.com/vllm-project/vllm-ascend/actions/runs/13304112758/job/37151690770
    (see the sketch after this list):
```
vllm-empty/tests/mistral_tool_use/conftest.py:4: in <module>
    import pytest_asyncio
E   ModuleNotFoundError: No module named 'pytest_asyncio'
```
  - Exclude docs PRs
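The missing-module fix amounts to installing vLLM's test requirements before
running the tests; a sketch, assuming the upstream checkout lives in
`vllm-empty` and keeps the standard `requirements-test.txt` name:

```bash
# Hypothetical CI step: install vLLM's test dependencies (including
# pytest_asyncio) so the conftest.py imports resolve.
pip install -r vllm-empty/requirements-test.txt
```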
### Does this PR introduce _any_ user-facing change?
No
### How was this patch tested?
1. Tested locally:
```bash
# Install dependencies.
pip install -r requirements-docs.txt
# Build the docs and preview
make clean; make html; python -m http.server -d build/html/
```
Launch a browser and open http://localhost:8000/.
2. CI passed with preview:
https://vllm-ascend--55.org.readthedocs.build/en/55/
Signed-off-by: Yikun Jiang <yikunkero@gmail.com>
### What this PR does / why we need it?
Use `pytest.ini` to manage the vLLM native tests.
This converts the original test-script whitelist into a blacklist, so that
newly added upstream vLLM test scripts are not missed.
**Note**: _we do **not** manage the vLLM-Ascend test scripts in `pytest.ini`,
because doing so would cause conflicts between vLLM's and vLLM-Ascend's
`conftest.py`._
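A minimal sketch of what the blacklist could look like (the ignored path is
illustrative, and the heredoc is only for self-containment, not part of the
patch):

```bash
# Hypothetical pytest.ini: run everything under vLLM's test tree by default
# and blacklist specific scripts, so new upstream tests are picked up.
cat > pytest.ini <<'EOF'
[pytest]
addopts = --ignore=vllm-empty/tests/mistral_tool_use
EOF
```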
### Does this PR introduce _any_ user-facing change?
N/A
### How was this patch tested?
CI passed with the existing tests.
Signed-off-by: MengqingCao <cmq0113@163.com>
### What this PR does / why we need it?
vLLM Ascend plugin (vllm-ascend) is a backend plugin for running vLLM on
the Ascend NPU.
This plugin is the recommended approach for supporting the Ascend backend
within the vLLM community. It adheres to the principles outlined in the
[RFC]: Hardware pluggable, providing a hardware-pluggable interface that
decouples the integration of the Ascend NPU from vLLM.
This patch also includes changes to make CI work and to use caches to speed up
the e2e tests, including:
1. Change the push (post-merge CI) and pull_request (PR CI) trigger branch to
   main
2. Make mypy work by ignoring base_communicator and clearing unused deps
3. Several improvements for vllm_ascend_test:
   - Use caches (pip, ModelScope, Hugging Face) to speed up the e2e tests
     (25 mins --> 5 mins); see the sketch after this list
   - Switch the `git clone` command to `actions/checkout` to speed up the
     checkout
   - Enable `-sv` for pytest for a better info dump
   - Remove the host network to resolve `docker: conflicting options: cannot
     attach both user-defined and non-user-defined network-modes`, which is a
     problem on docker 1.45 but not on 1.39
4. Adapt the MLA decode optimizations:
cabaf4eff3
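A sketch of the cache reuse from item 3 (mount paths and image name are
placeholders, not the real CI configuration): persisting the pip/ModelScope/
Hugging Face caches on the host and mounting them into the test container is
what cuts the e2e time:

```bash
# Hypothetical e2e container launch with host-side caches mounted in, so pip
# wheels and downloaded models survive between CI runs.
docker run --rm \
  -v "$HOME/.cache/pip:/root/.cache/pip" \
  -v "$HOME/.cache/modelscope:/root/.cache/modelscope" \
  -v "$HOME/.cache/huggingface:/root/.cache/huggingface" \
  vllm-ascend-test:latest \
  pytest -sv tests/
```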
### Does this PR introduce _any_ user-facing change?
Yes, this is the initial PR.
### How was this patch tested?
- This is the first PR to make the Ascend NPU work on vLLM. All code is tested
on Ascend hardware with the vLLM V0 engine.
- CI passed
---------
Signed-off-by: wangxiyuan <wangxiyuan1007@gmail.com>
Signed-off-by: Yikun Jiang <yikunkero@gmail.com>
Co-authored-by: wangxiyuan <wangxiyuan1007@gmail.com>
Co-authored-by: MengqingCao <cmq0113@163.com>
Co-authored-by: wangshuai09 <391746016@qq.com>
Co-authored-by: Shanshan Shen <467638484@qq.com>
Co-authored-by: wangli <wangli858794774@gmail.com>