[Docs] Add official doc index (#29)
Add official doc index. Move the release content to the right place.

Signed-off-by: wangxiyuan <wangxiyuan1007@gmail.com>
README.md
@@ -31,20 +31,11 @@ This plugin is the recommended approach for supporting the Ascend backend within
By using the vLLM Ascend plugin, popular open-source models, including Transformer-like, Mixture-of-Expert, Embedding, and Multi-modal LLMs, can run seamlessly on the Ascend NPU.
## Prerequisites
### Supported Devices
- Atlas A2 Training series (Atlas 800T A2, Atlas 900 A2 PoD, Atlas 200T A2 Box16, Atlas 300T A2)
- Atlas 800I A2 Inference series (Atlas 800I A2)
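To confirm that the host can actually see one of these devices before going further, you can query the Ascend driver. A minimal sketch, assuming the Ascend NPU driver and its `npu-smi` utility are already installed:

```bash
# List the NPUs detected by the Ascend driver
npu-smi info
```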
### Dependencies
| Requirement | Supported version | Recommended version | Note |
|-------------|-------------------| ----------- |------------------------------------------|
| vLLM | main | main | Required for vllm-ascend |
| Python | >= 3.9 | [3.10](https://www.python.org/downloads/) | Required for vllm |
| CANN | >= 8.0.RC2 | [8.0.RC3](https://www.hiascend.com/developer/download/community/result?module=cann&cann=8.0.0.beta1) | Required for vllm-ascend and torch-npu |
| torch-npu | >= 2.4.0 | [2.5.1rc1](https://gitee.com/ascend/pytorch/releases/tag/v6.0.0.alpha001-pytorch2.5.1) | Required for vllm-ascend |
| torch | >= 2.4.0 | [2.5.1](https://github.com/pytorch/pytorch/releases/tag/v2.5.1) | Required for torch-npu and vllm |
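As a quick sanity check against the version table above, you can print what is currently installed. A minimal sketch, assuming Python, torch, and torch-npu are already present:

```bash
# Compare the installed versions with the table above
python3 --version
python3 -c "import torch; print('torch', torch.__version__)"
python3 -c "import torch_npu; print('torch-npu', torch_npu.__version__)"
```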
Find out more about how to set up your environment in [environment.md](docs/environment.md), or follow the step-by-step guide in [installation.md](docs/installation.md).
## Getting Started
@@ -73,78 +64,14 @@ Run the following command to start the vLLM server with the [Qwen/Qwen2.5-0.5B-I
```bash
vllm serve Qwen/Qwen2.5-0.5B-Instruct
curl http://localhost:8000/v1/models
```
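With the server running, you can also send it an OpenAI-compatible completion request, for example:

```bash
# Send a completion request to the server started above
curl http://localhost:8000/v1/completions \
    -H "Content-Type: application/json" \
    -d '{
        "model": "Qwen/Qwen2.5-0.5B-Instruct",
        "prompt": "Hello, my name is",
        "max_tokens": 32
    }'
```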
Please refer to [vLLM Quickstart](https://docs.vllm.ai/en/latest/getting_started/quickstart.html) for more details.
## Building
### Build Python package from source
```bash
git clone https://github.com/vllm-project/vllm-ascend.git
cd vllm-ascend
pip install -e .
```
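After the editable install completes, a quick import smoke test confirms the package is visible to Python. A minimal sketch; the top-level module name `vllm_ascend` is an assumption here:

```bash
# Smoke-test the install (assumes the package exposes a vllm_ascend module)
python3 -c "import vllm_ascend; print('vllm-ascend import OK')"
```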
### Build container image from source
```bash
git clone https://github.com/vllm-project/vllm-ascend.git
cd vllm-ascend
docker build -t vllm-ascend-dev-image -f ./Dockerfile .
```
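To try out the freshly built image, you can start a container with the NPU devices and the host driver mounted in. A sketch only; the device nodes and driver path below are assumptions and vary with the host's Ascend driver installation:

```bash
# Run the dev image with one NPU exposed; adjust device/driver paths to your host
docker run --rm -it \
    --device /dev/davinci0 \
    --device /dev/davinci_manager \
    --device /dev/devmm_svm \
    --device /dev/hisi_hdc \
    -v /usr/local/Ascend/driver:/usr/local/Ascend/driver:ro \
    vllm-ascend-dev-image bash
```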
See [Building and Testing](./CONTRIBUTING.md) for more details; it is a step-by-step guide that helps you set up the development environment, build, and test.
## Feature Support Matrix
| Feature | Supported | Note |
|---------|-----------|------|
| Chunked Prefill | ✗ | Planned for 2025 Q1 |
| Automatic Prefix Caching | ✅ | Performance improvements planned for 2025 Q1 |
| LoRA | ✗ | Planned for 2025 Q1 |
| Prompt adapter | ✅ ||
| Speculative decoding | ✅ | Accuracy improvements planned for 2025 Q1 |
| Pooling | ✗ | Planned for 2025 Q1 |
| Enc-dec | ✗ | Planned for 2025 Q1 |
| Multi Modality | ✅ (LLaVA/Qwen2-VL/Qwen2-Audio/InternVL) | More model support planned for 2025 Q1 |
| LogProbs | ✅ ||
| Prompt LogProbs | ✅ ||
| Async output | ✅ ||
| Multi-step scheduler | ✅ ||
| Best of | ✅ ||
| Beam search | ✅ ||
| Guided Decoding | ✗ | Planned for 2025 Q1 |
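Several supported features are exercised directly through the OpenAI-compatible API. For instance, per-token log probabilities (the LogProbs row above) can be requested as follows, assuming the Qwen2.5 server from Getting Started is still running:

```bash
# Ask for the top-5 log probabilities of each generated token
curl http://localhost:8000/v1/completions \
    -H "Content-Type: application/json" \
    -d '{
        "model": "Qwen/Qwen2.5-0.5B-Instruct",
        "prompt": "The capital of France is",
        "max_tokens": 8,
        "logprobs": 5
    }'
```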
## Model Support Matrix
The list here is a subset of the supported models. See [supported_models](docs/supported_models.md) for more details:

| Model | Supported | Note |
|---------|-----------|------|
| Qwen 2.5 | ✅ ||
| Mistral | | Needs testing |
| DeepSeek v2.5 | | Needs testing |
| Llama 3.1/3.2 | ✅ ||
| Gemma-2 | | Needs testing |
| Baichuan | | Needs testing |
| MiniCPM | | Needs testing |
| InternLM | ✅ ||
| ChatGLM | ✅ ||
| InternVL 2.5 | ✅ ||
| Qwen2-VL | ✅ ||
| GLM-4v | | Needs testing |
| Molmo | ✅ ||
| LLaVA 1.5 | ✅ ||
| Mllama | | Needs testing |
| LLaVA-Next | | Needs testing |
| LLaVA-Next-Video | | Needs testing |
| Phi-3-Vision/Phi-3.5-Vision | | Needs testing |
| Ultravox | | Needs testing |
| Qwen2-Audio | ✅ ||
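Serving any of the checked models follows the same pattern as in Getting Started. As a multi-modal example (the checkpoint name below is one public Qwen2-VL variant, chosen for illustration):

```bash
# Serve a supported multi-modal model; other ✅ models work the same way
vllm serve Qwen/Qwen2-VL-2B-Instruct
```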
**Please refer to [Official Docs](./docs/index.md) for more details.**
## Contributing
See [CONTRIBUTING](./CONTRIBUTING.md) for more details; it is a step-by-step guide that helps you set up the development environment, build, and test.
We welcome and value any contributions and collaborations:
- Please feel free to comment [here](https://github.com/vllm-project/vllm-ascend/issues/19) about your usage of the vLLM Ascend plugin.
- Please let us know if you encounter a bug by [filing an issue](https://github.com/vllm-project/vllm-ascend/issues).
- Please see the guidance on how to contribute in [CONTRIBUTING.md](./CONTRIBUTING.md).
## License