Followup fix on official doc update (#34)
### What this PR does / why we need it?
- Fix typos: vllm-ascned --> vllm-ascend
- For version info

### Does this PR introduce _any_ user-facing change?
No

### How was this patch tested?
Preview.

Signed-off-by: Yikun Jiang <yikunkero@gmail.com>
@@ -33,7 +33,11 @@ By using vLLM Ascend plugin, popular open-source models, including Transformer-l
 ## Prerequisites

 - Hardware: Atlas 800I A2 Inference series, Atlas A2 Training series
-- Software: vLLM (the same version as vllm-ascned), Python >= 3.9, CANN >= 8.0.RC2, PyTorch >= 2.4.0, torch-npu >= 2.4.0
+- Software:
+  * Python >= 3.9
+  * CANN >= 8.0.RC2
+  * PyTorch >= 2.4.0, torch-npu >= 2.4.0
+  * vLLM (the same version as vllm-ascend)

 Find more about how to setup your environment step by step in [here](docs/installation.md).
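As a quick sanity check against the Python-side prerequisites in the hunk above, here is a minimal sketch (an illustration, not part of this PR; the version thresholds mirror the README list, and CANN is deliberately skipped since it is installed via the Ascend toolkit rather than pip):

```python
import sys
from importlib import metadata

def parse_version(text):
    """Extract leading numeric components, e.g. '2.4.0.post2' -> (2, 4, 0)."""
    parts = []
    for piece in text.split("."):
        if not piece.isdigit():
            break
        parts.append(int(piece))
    return tuple(parts)

def check(pkg, minimum):
    """Return a human-readable status for one prerequisite package."""
    try:
        found = parse_version(metadata.version(pkg))
    except metadata.PackageNotFoundError:
        return f"{pkg}: not installed"
    wanted = ".".join(map(str, minimum))
    return f"{pkg}: OK" if found >= minimum else f"{pkg}: older than {wanted}"

if __name__ == "__main__":
    # Thresholds taken from the README prerequisites above.
    print("Python: OK" if sys.version_info >= (3, 9) else "Python: needs >= 3.9")
    for pkg, minimum in [("torch", (2, 4, 0)), ("torch-npu", (2, 4, 0))]:
        print(check(pkg, minimum))
```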
@@ -64,7 +68,7 @@ Run the following command to start the vLLM server with the [Qwen/Qwen2.5-0.5B-I
 vllm serve Qwen/Qwen2.5-0.5B-Instruct
 curl http://localhost:8000/v1/models
 ```
-**Please refer to [Official Docs](./docs/index.md) for more details.**
+**Please refer to [official docs](./docs/index.md) for more details.**

 ## Contributing
 See [CONTRIBUTING](./CONTRIBUTING.md) for more details, which is a step-by-step guide to help you set up development environment, build and test.