[Doc] Change not to no in faqs.md (#1357)

### What this PR does / why we need it?

Change `not` to `no` in faqs.md.

### Does this PR introduce _any_ user-facing change?
No.

### How was this patch tested?

Tested locally.

Signed-off-by: xleoken <xleoken@163.com>
This commit is contained in:
xleoken
2025-06-23 09:01:00 +08:00
committed by GitHub
parent a95afc011e
commit 4447e53d7a


@@ -86,7 +86,7 @@ Currently, w8a8 quantization is already supported by vllm-ascend originally on v
 Please following the [quantization inferencing tutorail](https://vllm-ascend.readthedocs.io/en/main/tutorials/multi_npu_quantization.html) and replace model to DeepSeek.
-### 12. There is not output in log when loading models using vllm-ascend, How to solve it?
+### 12. There is no output in log when loading models using vllm-ascend, How to solve it?
 If you're using vllm 0.7.3 version, this is a known progress bar display issue in VLLM, which has been resolved in [this PR](https://github.com/vllm-project/vllm/pull/12428), please cherry-pick it locally by yourself. Otherwise, please fill up an issue.