From 4447e53d7ad5edcda978ca6b0a3a26a73c604de0 Mon Sep 17 00:00:00 2001
From: xleoken
Date: Mon, 23 Jun 2025 09:01:00 +0800
Subject: [PATCH] [Doc] Change not to no in faqs.md (#1357)

### What this PR does / why we need it?
Change not to no in faqs.md.

### Does this PR introduce _any_ user-facing change?
No.

### How was this patch tested?
Local Test

Signed-off-by: xleoken
---
 docs/source/faqs.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/source/faqs.md b/docs/source/faqs.md
index 5f4f16b..7099546 100644
--- a/docs/source/faqs.md
+++ b/docs/source/faqs.md
@@ -86,7 +86,7 @@ Currently, w8a8 quantization is already supported by vllm-ascend originally on v
 
 Please following the [quantization inferencing tutorail](https://vllm-ascend.readthedocs.io/en/main/tutorials/multi_npu_quantization.html) and replace model to DeepSeek.
 
-### 12. There is not output in log when loading models using vllm-ascend, How to solve it?
+### 12. There is no output in log when loading models using vllm-ascend, How to solve it?
 
 If you're using vllm 0.7.3 version, this is a known progress bar display issue in VLLM, which has been resolved in [this PR](https://github.com/vllm-project/vllm/pull/12428), please cherry-pick it locally by yourself. Otherwise, please fill up an issue.