From 00de2ee6adb8cc59561482c3bf104ef59e2c4345 Mon Sep 17 00:00:00 2001
From: hfadzxy <59153331+hfadzxy@users.noreply.github.com>
Date: Wed, 16 Apr 2025 16:07:08 +0800
Subject: [PATCH] [Doc] update faq about progress bar display issue (#538)

### What this PR does / why we need it?
update faq about progress bar display issue

Signed-off-by: hfadzxy
---
 docs/source/faqs.md | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/docs/source/faqs.md b/docs/source/faqs.md
index d246aa4..6b052a4 100644
--- a/docs/source/faqs.md
+++ b/docs/source/faqs.md
@@ -113,3 +113,7 @@ Currently, running on v0.7.3, we should run w8a8 with vllm + vllm-ascend + mindi
 1. Quantize bf16 DeepSeek, e.g. [unsloth/DeepSeek-R1-BF16](https://modelscope.cn/models/unsloth/DeepSeek-R1-BF16), with msModelSlim to get w8a8 DeepSeek. Find more details in [msModelSlim doc](https://gitee.com/ascend/msit/tree/master/msmodelslim/msmodelslim/pytorch/llm_ptq)
 2. Copy the content of `quant_model_description_w8a8_dynamic.json` into the `quantization_config` of `config.json` of the quantized model files.
 3. Reference with the quantized DeepSeek model.
+
+### 13. There is no output in the log when loading models using vllm-ascend. How can I solve it?
+
+If you are using vLLM v0.7.3, this is a known progress bar display issue in vLLM, which has been resolved in [this PR](https://github.com/vllm-project/vllm/pull/12428); please cherry-pick it locally. Otherwise, please file an issue.
\ No newline at end of file