xc-llm-ascend/docs/source/community/user_stories/index.md
lilinsiman adee9dd3b1 [Info][main] Correct the mistake in information documents (#4157)
### What this PR does / why we need it?
Correct the mistake in information documents

### Does this PR introduce _any_ user-facing change?
no

### How was this patch tested?
ut

- vLLM version: v0.11.0
- vLLM main:
2918c1b49c

---------

Signed-off-by: lilinsiman <lilinsiman@gmail.com>
2025-11-13 15:53:58 +08:00


User Stories

Read case studies on how users and developers solve real, everyday problems with vLLM Ascend.

  • LLaMA-Factory is an easy-to-use and efficient platform for training and fine-tuning large language models. It has supported vLLM Ascend for accelerated inference since LLaMA-Factory#7739, achieving roughly a 2x inference speedup.

  • Huggingface/trl is a cutting-edge library designed for post-training foundation models using advanced techniques like SFT, PPO, and DPO. It has used vLLM Ascend since v0.17.0 to support RLHF on Ascend NPUs.

  • MindIE Turbo is an LLM inference-engine acceleration plugin library developed by Huawei for Ascend hardware. It includes Huawei's self-developed LLM optimization algorithms along with optimizations related to the inference-engine framework, and it has supported vLLM Ascend since 2.0rc1.

  • GPUStack is an open-source GPU cluster manager for running AI models. It has supported vLLM Ascend since v0.6.2. See this link for GPUStack performance evaluation details.

  • verl is a flexible, efficient, and production-ready RL training library for LLMs. It has used vLLM Ascend since v0.4.0. See the verl x Ascend Quickstart for more information.

:::{toctree}
:caption: More details
:maxdepth: 1

llamafactory
:::