yangqinghao-cmss 99fa0ac882 [BugFix] update the kv transfer config (#2121)
### What this PR does / why we need it?
KVTransferConfig.from_cli and the AscendHcclConnector connector are missing in the latest vLLM version. To resolve this, this PR changes the kv_connector to LLMDataDistCMgrConnector, which depends on [PR #2079](https://github.com/vllm-project/vllm-ascend/pull/2079).
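
For reference, a minimal sketch of what the updated configuration could look like, constructing KVTransferConfig directly instead of going through the removed from_cli helper. The model name and the exact field set are illustrative and may differ across vLLM versions:

```python
# Minimal sketch, assuming vLLM's KVTransferConfig still accepts these fields;
# the model name below is illustrative only.
from vllm import LLM
from vllm.config import KVTransferConfig

ktc = KVTransferConfig(
    kv_connector="LLMDataDistCMgrConnector",  # replaces the removed AscendHcclConnector
    kv_role="kv_producer",                    # use "kv_consumer" on the decode node
)

llm = LLM(
    model="Qwen/Qwen2.5-0.5B-Instruct",  # hypothetical model for illustration
    kv_transfer_config=ktc,
)
```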

### Does this PR introduce _any_ user-facing change?

### How was this patch tested?
vllm:main
vllm-ascend:main
results:
```bash
Adding requests: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 4/4 [00:00<00:00, 374.27it/s]
Processed prompts: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████| 4/4 [00:00<00:00, 66.06it/s, est. speed input: 449.08 toks/s, output: 66.51 toks/s]
Prefill node is finished.
INFO 07-31 09:18:30 [model_runner_v1.py:2282] Graph capturing finished in 36 secs, took 0.21 GiB
INFO 07-31 09:18:30 [core.py:201] init engine (profile, create kv cache, warmup model) took 52.49 seconds
INFO 07-31 09:18:30 [factory.py:74] Creating v1 connector with name: LLMDataDistCMgrConnector and engine_id: 28c8ced8-575c-4f87-840a-48d04d0edf7e
INFO 07-31 09:18:30 [platform.py:157] PIECEWISE compilation enabled on NPU. use_inductor not supported - using only ACL Graph mode
INFO 07-31 09:18:30 [utils.py:333] Calculated maximum supported batch sizes for ACL graph: 76
INFO 07-31 09:18:30 [utils.py:359] No adjustment needed for ACL graph batch sizes: Qwen2ForCausalLM model (layers: 24) with 67 sizes
INFO 07-31 09:18:30 [llm.py:293] Supported_tasks: ['generate']
Waiting for prefill node to finish...
Adding requests: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 4/4 [00:00<00:00, 709.70it/s]
Processed prompts: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████| 4/4 [00:00<00:00, 16.23it/s, est. speed input: 109.70 toks/s, output: 260.01 toks/s]
Prompt: 'Hello, how are you today?', Generated text: " I'm a computer program, so I don't have feelings. But I can"
Prompt: 'Hi, what is your name?', Generated text: ' I am a computer programmer. I have a question about the programming language I am'
Prompt: 'Tell me a very long story.', Generated text: ' I want to read it. I want to read it. I want to read'
Prompt: 'what is your favourite book?', Generated text: " I'm sorry, but as an AI language model, I don't have personal"
Cleanup prefill resources
All process done
```

- vLLM version: v0.10.0
- vLLM main: 9cb497bfa3

Signed-off-by: yangqinghao-cmss <yangqinghao_yewu@cmss.chinamobile.com>
2025-08-01 08:56:55 +08:00


vLLM Ascend Plugin

| About Ascend | Documentation | #sig-ascend | Users Forum | Weekly Meeting |



Latest News 🔥

  • [2025/06] User stories page is now live! It kicks off with LLaMA-Factory, verl, TRL, and GPUStack to demonstrate how vLLM Ascend assists Ascend users in enhancing their experience across fine-tuning, evaluation, reinforcement learning (RL), and deployment scenarios.
  • [2025/06] Contributors page is now live! All contributions deserve to be recorded, thanks for all contributors.
  • [2025/05] We've released the first official version, v0.7.3! We collaborated with the vLLM community to publish a blog post sharing our practice: Introducing vLLM Hardware Plugin, Best Practice from Ascend NPU.
  • [2025/03] We hosted the vLLM Beijing Meetup with vLLM team! Please find the meetup slides here.
  • [2025/02] vLLM community officially created vllm-project/vllm-ascend repo for running vLLM seamlessly on the Ascend NPU.
  • [2024/12] We are working with the vLLM community to support [RFC]: Hardware pluggable.

Overview

vLLM Ascend (vllm-ascend) is a community maintained hardware plugin for running vLLM seamlessly on the Ascend NPU.

It is the recommended approach for supporting the Ascend backend within the vLLM community. It adheres to the principles outlined in the [RFC]: Hardware pluggable, providing a hardware-pluggable interface that decouples the integration of the Ascend NPU with vLLM.

By using the vLLM Ascend plugin, popular open-source models, including Transformer-like, Mixture-of-Experts, embedding, and multi-modal LLMs, can run seamlessly on the Ascend NPU.
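
As a rough illustration of the hardware-pluggable mechanism described in the RFC (not the exact vllm-ascend source), a plugin package typically exposes an entry point in the vllm.platform_plugins group that returns the import path of its Platform implementation:

```python
# Sketch of platform-plugin registration, assuming the entry-point mechanism
# from the hardware-pluggable RFC; symbol names are illustrative.
#
# The plugin package's pyproject.toml would declare something like:
# [project.entry-points."vllm.platform_plugins"]
# ascend = "vllm_ascend:register"

def register() -> str:
    """Return the fully qualified name of the Platform class for the NPU backend."""
    return "vllm_ascend.platform.NPUPlatform"
```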

Prerequisites

  • Hardware: Atlas 800I A2 Inference series, Atlas A2 Training series
  • OS: Linux
  • Software:
    • Python >= 3.9, < 3.12
    • CANN >= 8.2.rc1
    • PyTorch >= 2.5.1, torch-npu >= 2.5.1.post1.dev20250619
    • vLLM (the same version as vllm-ascend)
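
A small sanity-check script can confirm the environment matches these requirements; the package names below are the common distribution names and are an assumption, so adjust them if your setup differs:

```python
# Quick environment check against the prerequisites above; package names may
# need adjusting for your environment.
import sys
import importlib.metadata as md

assert (3, 9) <= sys.version_info[:2] < (3, 12), "Python must be >= 3.9 and < 3.12"

for pkg in ("torch", "torch-npu", "vllm", "vllm-ascend"):
    try:
        print(f"{pkg}: {md.version(pkg)}")
    except md.PackageNotFoundError:
        print(f"{pkg}: not installed")
```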

Getting Started

Please use the following recommended versions to get started quickly:

| Version | Release type | Doc |
|---|---|---|
| v0.9.2rc1 | Latest release candidate | QuickStart and Installation for more details |
| v0.7.3.post1 | Latest stable version | QuickStart and Installation for more details |
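
Once a matching vLLM / vllm-ascend pair is installed, a basic offline-inference run looks like standard vLLM usage; the plugin is discovered automatically on an Ascend NPU environment, and the model name below is only an example:

```python
# Minimal offline inference sketch; model name is illustrative.
from vllm import LLM, SamplingParams

prompts = ["Hello, how are you today?"]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

llm = LLM(model="Qwen/Qwen2.5-0.5B-Instruct")
for output in llm.generate(prompts, sampling_params):
    print(output.outputs[0].text)
```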

Contributing

See CONTRIBUTING for more details, which is a step-by-step guide to help you set up development environment, build and test.

We welcome and value any contributions and collaborations.

Branch

vllm-ascend has a main branch and dev branches.

  • main: the main branch, corresponding to the vLLM main branch; it is continuously monitored for quality through Ascend CI.
  • vX.Y.Z-dev: a development branch, created alongside specific vLLM releases. For example, v0.7.3-dev is the dev branch for vLLM v0.7.3.

Below are the maintained branches:

| Branch | Status | Note |
|---|---|---|
| main | Maintained | CI commitment for vLLM main branch and vLLM 0.9.x branch |
| v0.7.1-dev | Unmaintained | Only documentation fixes are allowed |
| v0.7.3-dev | Maintained | CI commitment for vLLM v0.7.3; only bug fixes are allowed and no new release tags will be created |
| v0.9.1-dev | Maintained | CI commitment for vLLM v0.9.1 |

Please refer to Versioning policy for more details.

Weekly Meeting

License

Apache License 2.0, as found in the LICENSE file.
