Chenguang Li 202b39a38c Ray Worker Ops Optimization (#136)
### What this PR does / why we need it?
When `backend = ray`, only the main process completes the `forward_oot` call,
while the other worker processes fall back to `forward_native`. (This bug
should also exist when `backend = mp`.)
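
For context, here is a minimal sketch of the failure mode. This is illustrative only, not vLLM's actual dispatch code; `_oot_registered` is a hypothetical stand-in for per-process platform registration state:

```python
import torch


class CustomOpSketch(torch.nn.Module):
    # Hypothetical flag standing in for "the out-of-tree (oot) platform was
    # registered in this process". In the buggy case it ends up True only in
    # the main process, never in the Ray (or mp) worker processes.
    _oot_registered = False

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Workers never saw the registration, so they silently fall back
        # to the portable reference implementation.
        if self._oot_registered:
            return self.forward_oot(x)
        return self.forward_native(x)

    def forward_native(self, x: torch.Tensor) -> torch.Tensor:
        return x  # portable PyTorch reference path

    def forward_oot(self, x: torch.Tensor) -> torch.Tensor:
        return x  # Ascend-optimized path provided by vllm-ascend
```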

### Does this PR introduce _any_ user-facing change?
no.

### How was this patch tested?
**Environment:**

- CANN: 8.0.0
- PyTorch: 2.5.1
- Torch: 2.5.1rc1
- Python: 3.10
- vllm: main branch
- vllm-ascend: main branch

The current implementation avoids the Ray worker initialization issue
addressed in [PR #92](https://github.com/vllm-project/vllm-ascend/pull/92).
For verification, a log line is then emitted on each `forward_oot` call.
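
The verification change amounts to a temporary log line inside the oot override, sketched below; `_real_forward_oot` is a hypothetical placeholder for the actual computation:

```python
def forward_oot(self, *args, **kwargs):
    # Temporary debug output for this test only; not part of the final patch.
    print("forward_oot run. " + "#" * 45)
    return self._real_forward_oot(*args, **kwargs)  # hypothetical real work
```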

**Script:**

```bash
python examples/offline_distributed_inference_npu.py
```
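
For reference, the script exercises roughly the following (a paraphrased sketch, not the authoritative file; the model name and `tensor_parallel_size=2` are assumptions):

```python
from vllm import LLM, SamplingParams

prompts = [
    "Hello, my name is",
    "The president of the United States is",
    "The capital of France is",
    "The future of AI is",
]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95)

# distributed_executor_backend="ray" spawns the NPURayWorkerWrapper
# processes whose forward_oot dispatch this PR fixes.
llm = LLM(model="Qwen/Qwen2.5-0.5B-Instruct",
          tensor_parallel_size=2,
          distributed_executor_backend="ray")

outputs = llm.generate(prompts, sampling_params)
for output in outputs:
    print(f"Prompt: {output.prompt!r}, "
          f"Generated text: {output.outputs[0].text!r}")
```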

**Result:**
```bash
(NPURayWorkerWrapper pid=3984223) forward_oot run. #############################################
(NPURayWorkerWrapper pid=3984223) forward_oot run. #############################################
(NPURayWorkerWrapper pid=3984223) forward_oot run. #############################################
(NPURayWorkerWrapper pid=3984223) forward_oot run. #############################################
(NPURayWorkerWrapper pid=3984223) forward_oot run. #############################################
forward_oot run. #############################################
forward_oot run. #############################################
Processed prompts: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████| 4/4 [00:07<00:00,  1.96s/it, est. speed input: 2.80 toks/s, output: 51.00 toks/s]
Prompt: 'Hello, my name is', Generated text: ' Alex and I am a 16 year old male. I have been diagnosed with a rare genetic disorder called X-linked recessive. I have been told that I will not be able to have children. I have been told that I will not be able to have children because of the X-linked recessive disorder. I have been told that I will not be able to have children because of the X-linked recessive disorder. I have been told that I will not be able to have children because of'
Prompt: 'The president of the United States is', Generated text: ' Statesman. He is the leader of the country. He is the one who makes the decisions. He is the one who makes the laws. He is the one who makes the rules. He is the one who makes the country strong. He is the one who makes the country happy. He is the one who makes the country safe. He is the one who makes the country free. He is the one who makes the country beautiful. He is the one who makes the country great. He is'
Prompt: 'The capital of France is', Generated text: ' the city of Paris. It is the largest city in France and the second largest city in Europe. It is located in the center of the country, in the south of the country. It is situated on the banks of the Seine River, which flows through the city. The city is surrounded by the Alps and the Pyrenees mountains. The city is also surrounded by the Mediterranean Sea. The city is known for its beautiful architecture, its museums, its parks, and its food. Paris is'
Prompt: 'The future of AI is', Generated text: ' following the path of the internet, and the internet is following the path of the web. The web is a network of interconnected web pages, and the internet is a network of interconnected computers. The web is a network of interconnected computers, and the internet is a network of interconnected computers. The web is a network of interconnected computers, and the internet is a network of interconnected computers. The web is a network of interconnected computers, and the internet is a network of interconnected computers. The web is a network'
```

---------

Signed-off-by: Chenguang Li <757486878@qq.com>

vLLM Ascend Plugin

| About Ascend | Documentation | Developer Slack (#sig-ascend) |

English | 中文


Overview

vLLM Ascend plugin (vllm-ascend) is a backend plugin for running vLLM on the Ascend NPU.

This plugin is the recommended approach for supporting the Ascend backend within the vLLM community. It adheres to the principles outlined in the [RFC]: Hardware pluggable, providing a hardware-pluggable interface that decouples the Ascend NPU integration from vLLM.
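
As a rough illustration of that pluggable interface: vLLM discovers out-of-tree platforms through the "vllm.platform_plugins" entry point, and the plugin package exposes a registration hook along these lines (module and class names here follow this repo's layout as an assumption; check the source for the authoritative version):

```python
# vllm_ascend/__init__.py (sketch)
def register() -> str:
    """Hook called by vLLM via the "vllm.platform_plugins" entry point.

    Returns the import path of the platform class that vLLM should load.
    """
    return "vllm_ascend.platform.NPUPlatform"
```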

By using the vLLM Ascend plugin, popular open-source models, including Transformer-like, Mixture-of-Experts, Embedding, and Multi-modal LLMs, can run seamlessly on the Ascend NPU.

Prerequisites

  • Hardware: Atlas 800I A2 Inference series, Atlas A2 Training series
  • Software:
    • Python >= 3.9
    • CANN >= 8.0.RC2
    • PyTorch >= 2.4.0, torch-npu >= 2.4.0
    • vLLM (the same version as vllm-ascend)

Find out more about how to set up your environment, step by step, here.

Getting Started

Note

Currently, we are actively collaborating with the vLLM community to support the Ascend backend plugin. Once it is supported, you will be able to complete the installation with a single command: `pip install vllm vllm-ascend`.

Installation from source code:

```bash
# Install the vLLM main branch, following:
# https://docs.vllm.ai/en/latest/getting_started/installation/cpu/index.html#build-wheel-from-source
git clone --depth 1 https://github.com/vllm-project/vllm.git
cd vllm
pip install -r requirements-build.txt
VLLM_TARGET_DEVICE=empty pip install .

# Install the vllm-ascend main branch
git clone https://github.com/vllm-project/vllm-ascend.git
cd vllm-ascend
pip install -e .
```

Run the following command to start the vLLM server with the Qwen/Qwen2.5-0.5B-Instruct model:

```bash
# export VLLM_USE_MODELSCOPE=true to speed up download
vllm serve Qwen/Qwen2.5-0.5B-Instruct
curl http://localhost:8000/v1/models
```
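
The same check can be made from Python (a sketch assuming the `openai` client package; any non-empty `api_key` works when the server is started without `--api-key`):

```python
from openai import OpenAI

# Point the standard OpenAI client at the local vLLM server.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

completion = client.completions.create(
    model="Qwen/Qwen2.5-0.5B-Instruct",
    prompt="The capital of France is",
    max_tokens=32,
)
print(completion.choices[0].text)
```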

Please refer to QuickStart and Installation for more details.

Contributing

See CONTRIBUTING for more details; it is a step-by-step guide to help you set up the development environment, and build and test the project.

We welcome and value any contributions and collaborations:

  • Please feel free to comment here about your usage of the vLLM Ascend plugin.
  • Please let us know if you encounter a bug by filing an issue.

Branch

vllm-ascend has a main branch and dev branches.

  • main: the main branch, which corresponds to the vLLM main branch and is continuously monitored for quality through Ascend CI.
  • vX.Y.Z-dev: development branches, created for selected new releases of vLLM. For example, v0.7.1-dev is the dev branch for vLLM v0.7.1.

Below are the maintained branches:

| Branch     | Status     | Note                                   |
|------------|------------|----------------------------------------|
| main       | Maintained | CI commitment for vLLM main branch     |
| v0.7.3-dev | Maintained | CI commitment for vLLM v0.7.3 version  |

Please refer to Versioning policy for more details.

License

Apache License 2.0, as found in the LICENSE file.
