[Doc][Misc] Comprehensive documentation cleanup and grammatical fixes (#8073)
What this PR does / why we need it?

This pull request performs a comprehensive cleanup of the vLLM Ascend documentation. It fixes numerous typos, grammatical errors, and phrasing issues across community guidelines, developer documents, hardware tutorials, and feature guides. Key improvements include correcting hardware names (e.g., Atlas 300I), fixing broken links, cleaning up code examples (removing duplicate flags and trailing commas), and improving the clarity of technical explanations. These changes ensure the documentation is professional, accurate, and easy for users to follow.

Does this PR introduce any user-facing change?

No, this PR contains documentation-only updates.

How was this patch tested?

The changes were manually reviewed for accuracy and grammatical correctness. No functional code changes were introduced.

Signed-off-by: herizhen <1270637059@qq.com>
Signed-off-by: herizhen <59841270+herizhen@users.noreply.github.com>
@@ -14,7 +14,7 @@ vLLM Ascend is an open-source project under the vLLM community, where the author
 - Contributor:
-**Responsibility:** Help new contributors on boarding, handle and respond to community questions, review RFCs and code.
+**Responsibility:** Help new contributors onboarding, handle and respond to community questions, review RFCs and code.
 **Requirements:** Complete at least 1 contribution. A contributor is someone who consistently and actively participates in a project, including but not limited to issue/review/commits/community involvement.
@@ -37,7 +37,7 @@ Maintainers will be granted write access to the [vllm-project/vllm-ascend](https
 ### The Principles
-- Membership in vLLM Ascend is given to individuals on merit basis after they demonstrate their strong expertise in vLLM/vLLM Ascend through contributions, reviews, and discussions.
+- Membership in vLLM Ascend is given to individuals on a merit basis after they demonstrate their strong expertise in vLLM/vLLM Ascend through contributions, reviews, and discussions.
 - For membership in the maintainer group, individuals have to demonstrate strong and continued alignment with the overall vLLM/vLLM Ascend principles.
@@ -10,7 +10,7 @@ Read case studies on how users and developers solve real, everyday problems with
 - [GPUStack](https://github.com/gpustack/gpustack) is an open-source GPU cluster manager for running AI models. It supports vLLM Ascend since [v0.6.2](https://github.com/gpustack/gpustack/releases/tag/v0.6.2). See more GPUStack performance evaluation information at [this link](https://mp.weixin.qq.com/s/pkytJVjcH9_OnffnsFGaew).
-- [verl](https://github.com/volcengine/verl) is a flexible, efficient, and production-ready RL training library for LLMs. It uses vLLM Ascend since [v0.4.0](https://github.com/volcengine/verl/releases/tag/v0.4.0). See more information on [verl x Ascend Quickstart](https://verl.readthedocs.io/en/latest/ascend_tutorial/ascend_quick_start.html).
+- [verl](https://github.com/volcengine/verl) is a flexible, efficient, and production-ready RL training library for LLMs. It uses vLLM Ascend since [v0.4.0](https://github.com/volcengine/verl/releases/tag/v0.4.0). See more information on [verl x Ascend Quickstart](https://verl.readthedocs.io/en/latest/ascend_tutorial/quick_start/ascend_quick_start.html).
 :::{toctree}
 :caption: More details
@@ -4,7 +4,7 @@
 [LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory) is an easy-to-use and efficient platform for training and fine-tuning large language models. With LLaMA-Factory, you can fine-tune hundreds of pre-trained models locally without writing any code.
-LLaMA-Factory users need to evaluate and inference the model after fine-tuning.
+LLaMA-Factory users need to evaluate the model and perform inference after fine-tuning.
 **Business challenge**
@@ -174,4 +174,4 @@ Notes:
 - `torch-npu`: Ascend Extension for PyTorch (torch-npu) releases a stable version to [PyPi](https://pypi.org/project/torch-npu)
 every 3 months, a development version (aka the POC version) every month, and a nightly version every day.
 The PyPi stable version **CAN** be used in vLLM Ascend final version, the monthly dev version **ONLY CAN** be used in
-vLLM Ascend RC version for rapid iteration, and the nightly version **CANNOT** be used in vLLM Ascend any version and branch.
+vLLM Ascend RC version for rapid iteration, and the nightly version **CANNOT** be used in vLLM Ascend any version or branch.