# vLLM Ascend Plugin documents
Live doc: https://docs.vllm.ai/projects/ascend
## Build the docs

```bash
# Install dependencies.
pip install -r requirements-docs.txt

# Build the docs.
make clean
make html

# Build the docs with translation.
make intl

# Serve the built docs locally.
python -m http.server -d _build/html/
```
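
The `make intl` target builds the translated (zh_CN) pages. If you also need to refresh the translation catalogs themselves, the standard Sphinx i18n workflow looks roughly like the sketch below; the exact steps wrapped by `make intl` may differ, so treat these commands as an assumption rather than the project's canonical recipe.

```bash
# Hypothetical sketch of a standard Sphinx i18n refresh; the project's
# Makefile may wrap different or additional steps.
pip install sphinx-intl

# Extract translatable strings into .pot templates.
make gettext

# Update the Chinese (.po) catalogs from the extracted templates.
sphinx-intl update -p _build/gettext -l zh_CN

# Build the translated HTML alongside the English pages.
sphinx-build -b html -D language=zh_CN . _build/html/zh_CN
```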
Launch your browser and open:
- English version: http://localhost:8000
- Chinese version: http://localhost:8000/zh_CN
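
If you want to confirm the local server is serving both language trees without opening a browser, you can probe it from another terminal (the paths assume the layout above):

```bash
# Quick sanity check from another terminal; expect HTTP 200 responses.
curl -I http://localhost:8000/
curl -I http://localhost:8000/zh_CN/
```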