wemaster 339d6894f6 [CI/UT][bugfix] fix v0 spec decode (#1321)
### What this PR does / why we need it?
1. [PR913](https://github.com/vllm-project/vllm-ascend/pull/913)
introduced an error that caused V0's spec decode function to fail.
[PR1109](https://github.com/vllm-project/vllm-ascend/pull/1109)
attempted to fix that problem but unfortunately broke the ngram
function in the process; this PR fixes the ngram function. **PS**: Q:
Why was the ngram breakage not caught when PR1109 was merged? A: The
newly introduced problem only appears when tp>1, and the test cases on
CI all run with tp=1 (a repro sketch follows this list).
2. In versions after 0.7.3, vllm-ascend removed some spec decode UTs,
including the eagle speculative UTs, to keep CI time down, which left
CI unable to cover the eagle function. This PR adds them back
(`test_eagle_correctness.py`).
3. Because of the coverage gap described in 2, the current version of
eagle had a problem. I located and fixed it: vllm's
`draft_model_runner.py` had changed and vllm-ascend was not
synchronized in time.
4. Previously, the UTs of V0 and V1 were mixed in the spec_decode
directory. This PR splits them into two directories: spec_decode_v0
and spec_decode_v1.
5. I found that
`vllm.spec_decode.multi_step_worker.MultiStepWorker.set_include_gpu_probs_tensor`
and
`vllm.spec_decode.multi_step_worker.MultiStepWorker.set_should_modify_greedy_probs_inplace`
have changed in vllm, so this PR removes them.
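
For reference, here is a minimal sketch of an offline run that exercises the fixed path: ngram spec decode with tp>1. The model name and parameter values are placeholders, the `speculative_config` keys assume the vLLM 0.9-era API, and selecting the V0 engine via `VLLM_USE_V1=0` is an assumption based on this PR's context.

```python
# Hypothetical reproduction sketch: ngram spec decode with tp=2.
# Run with VLLM_USE_V1=0 to select the V0 engine (assumption).
from vllm import LLM, SamplingParams

llm = LLM(
    model="Qwen/Qwen2.5-7B-Instruct",  # placeholder model
    tensor_parallel_size=2,            # the regression only appears when tp > 1
    speculative_config={
        "method": "ngram",
        "num_speculative_tokens": 5,
        "prompt_lookup_max": 4,
    },
)

outputs = llm.generate(
    ["The future of AI is"],
    SamplingParams(temperature=0.0, max_tokens=64),
)
print(outputs[0].outputs[0].text)
```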

### Does this PR introduce _any_ user-facing change?
This PR fixes the ngram and eagle spec decode functions in the V0
engine.

### How was this patch tested?
Tested by CI.

Signed-off-by: mengwei805 <mengwei25@huawei.com>
2025-06-23 09:05:13 +08:00


vLLM Ascend Plugin

| About Ascend | Documentation | #sig-ascend | Users Forum | Weekly Meeting |

English | 中文




Overview

vLLM Ascend (vllm-ascend) is a community-maintained hardware plugin for running vLLM seamlessly on the Ascend NPU.

It is the recommended approach for supporting the Ascend backend within the vLLM community. It adheres to the principles outlined in the [RFC]: Hardware pluggable, providing a hardware-pluggable interface that decouples the integration of the Ascend NPU with vLLM.

By using the vLLM Ascend plugin, popular open-source models, including Transformer-like, Mixture-of-Experts, Embedding, and Multi-modal LLMs, can run seamlessly on the Ascend NPU.

Prerequisites

  • Hardware: Atlas 800I A2 Inference series, Atlas A2 Training series
  • OS: Linux
  • Software:
    • Python >= 3.9, < 3.12
    • CANN >= 8.1.RC1
    • PyTorch >= 2.5.1, torch-npu >= 2.5.1.post1.dev20250619
    • vLLM (the same version as vllm-ascend)
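
As a quick, illustrative sanity check of this stack (not an official verification script; the version threshold below restates the list above), the following sketch confirms that torch-npu can see the device. `torch.npu` is registered by importing torch_npu:

```python
# Illustrative environment check for the prerequisites listed above.
import sys

import torch
import torch_npu  # noqa: F401  # importing torch_npu registers the NPU backend

assert sys.version_info >= (3, 9), "Python >= 3.9 is required"
print("torch version:", torch.__version__)
print("NPU available:", torch.npu.is_available())
```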

Getting Started

Please refer to QuickStart and Installation for more details.
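
With vllm-ascend installed, vLLM discovers the plugin automatically, so the standard vLLM offline API runs unchanged on the NPU. A minimal sketch (the model name is a placeholder):

```python
# Minimal offline inference sketch; vLLM dispatches to the Ascend NPU
# via the vllm-ascend plugin. The model name is a placeholder.
from vllm import LLM, SamplingParams

llm = LLM(model="Qwen/Qwen2.5-7B-Instruct")  # placeholder model
outputs = llm.generate(
    ["Hello, my name is"],
    SamplingParams(temperature=0.8, max_tokens=32),
)
print(outputs[0].outputs[0].text)
```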

Contributing

See CONTRIBUTING for more details; it is a step-by-step guide that helps you set up the development environment, build, and test.

We welcome and value any contributions and collaborations.

Branch

vllm-ascend has a main branch and dev branches.

  • main: the main branch, which corresponds to the vLLM main branch and is continuously monitored for quality through Ascend CI.
  • vX.Y.Z-dev: development branches, created alongside selected vLLM releases. For example, v0.7.3-dev is the dev branch for vLLM v0.7.3.

Below are the maintained branches:

| Branch     | Status       | Note                                                     |
|------------|--------------|----------------------------------------------------------|
| main       | Maintained   | CI commitment for vLLM main branch and vLLM 0.9.x branch |
| v0.7.1-dev | Unmaintained | Only doc fixes are allowed                               |
| v0.7.3-dev | Maintained   | CI commitment for vLLM v0.7.3 version                    |

Please refer to Versioning policy for more details.


License

Apache License 2.0, as found in the LICENSE file.
