# Qwen3-8B-W4A8

## Run Docker Container

:::{note}
The W4A8 quantization feature is supported in v0.9.1rc2 and later.
:::

```{code-block} bash
   :substitutions:
# Update the vllm-ascend image
export IMAGE=m.daocloud.io/quay.io/ascend/vllm-ascend:|vllm_ascend_version|
docker run --rm \
--name vllm-ascend \
--shm-size=1g \
--device /dev/davinci0 \
--device /dev/davinci_manager \
--device /dev/devmm_svm \
--device /dev/hisi_hdc \
-v /usr/local/dcmi:/usr/local/dcmi \
-v /usr/local/bin/npu-smi:/usr/local/bin/npu-smi \
-v /usr/local/Ascend/driver/lib64/:/usr/local/Ascend/driver/lib64/ \
-v /usr/local/Ascend/driver/version.info:/usr/local/Ascend/driver/version.info \
-v /etc/ascend_install.info:/etc/ascend_install.info \
-v /root/.cache:/root/.cache \
-p 8000:8000 \
-it $IMAGE bash
```
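If you want to confirm that the NPU has been passed through correctly, you can run `npu-smi` inside the container. This is an optional check; the device listed depends on which `/dev/davinciX` you mapped:

```bash
# Optional check inside the container: the mapped NPU should appear in the device list
npu-smi info
```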

## Install modelslim and Convert Model

:::{note}
You can convert the model yourself, or use the quantized model we uploaded: https://www.modelscope.cn/models/vllm-ascend/Qwen3-8B-W4A8
:::
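If you would rather skip the conversion entirely, the following sketch downloads the pre-quantized weights from ModelScope. It assumes the `modelscope` command-line tool is installed; the local directory is just an example path:

```bash
# Hypothetical alternative: fetch the ready-made W4A8 weights instead of converting locally
pip install modelscope
modelscope download --model vllm-ascend/Qwen3-8B-W4A8 --local_dir /home/models/Qwen3-8B-W4A8
```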

```bash
# This branch (br_release_MindStudio_8.1.RC2_TR5_20260624) has been verified
git clone -b br_release_MindStudio_8.1.RC2_TR5_20260624 https://gitcode.com/Ascend/msit

cd msit/msmodelslim

# Install msmodelslim by running the install script
bash install.sh
pip install accelerate

cd example/Qwen
# Original weight path; replace with your local model path
MODEL_PATH=/home/models/Qwen3-8B
# Path to save the converted weights; replace with your local path
SAVE_PATH=/home/models/Qwen3-8B-w4a8
# Select an idle NPU card
export ASCEND_RT_VISIBLE_DEVICES=0

python quant_qwen.py \
          --model_path $MODEL_PATH \
          --save_directory $SAVE_PATH \
          --device_type npu \
          --model_type qwen3 \
          --calib_file None \
          --anti_method m6 \
          --anti_calib_file ./calib_data/mix_dataset.json \
          --w_bit 4 \
          --a_bit 8 \
          --is_lowbit True \
          --open_outlier False \
          --group_size 256 \
          --is_dynamic True \
          --trust_remote_code True \
          --w_method HQQ
```

## Verify the Quantized Model

The converted model files look like:

```bash
.
|-- config.json
|-- configuration.json
|-- generation_config.json
|-- merges.txt
|-- quant_model_description.json
|-- quant_model_weight_w4a8_dynamic-00001-of-00003.safetensors
|-- quant_model_weight_w4a8_dynamic-00002-of-00003.safetensors
|-- quant_model_weight_w4a8_dynamic-00003-of-00003.safetensors
|-- quant_model_weight_w4a8_dynamic.safetensors.index.json
|-- README.md
|-- tokenizer.json
`-- tokenizer_config.json
```
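As an optional sanity check, you can pretty-print `quant_model_description.json`, which records how the weights were quantized. The exact fields depend on the msmodelslim version, so treat this as illustrative:

```bash
# Inspect the first few entries of the quantization description
python3 -m json.tool $SAVE_PATH/quant_model_description.json | head -n 20
```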

Run the following script to start the vLLM server with the quantized model:

```bash
export VLLM_USE_MODELSCOPE=true
export MODEL_PATH=vllm-ascend/Qwen3-8B-W4A8
vllm serve ${MODEL_PATH} --served-model-name "qwen3-8b-w4a8" --max-model-len 4096 --quantization ascend
```
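The model can take a while to load. You can optionally poll the OpenAI-compatible `/v1/models` endpoint; once it returns `qwen3-8b-w4a8`, the server is ready:

```bash
# Returns the served model list once the server is ready
curl http://localhost:8000/v1/models
```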

Once your server is started, you can query the model with input prompts.

```bash
curl http://localhost:8000/v1/completions \
    -H "Content-Type: application/json" \
    -d '{
        "model": "qwen3-8b-w4a8",
        "prompt": "what is large language model?",
        "max_tokens": 128,
        "top_p": 0.95,
        "top_k": 40,
        "temperature": 0.0
    }'
```
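You can also use the `/v1/chat/completions` endpoint, which applies the model's chat template. The request below is a minimal sketch with an example prompt:

```bash
curl http://localhost:8000/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{
        "model": "qwen3-8b-w4a8",
        "messages": [{"role": "user", "content": "What is a large language model?"}],
        "max_tokens": 128,
        "temperature": 0
    }'
```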

Run the following script to execute offline inference on a single NPU with the quantized model:

:::{note}
To enable quantization on Ascend, the quantization method must be set to "ascend".
:::


```python
from vllm import LLM, SamplingParams

prompts = [
    "Hello, my name is",
    "The future of AI is",
]
sampling_params = SamplingParams(temperature=0.6, top_p=0.95, top_k=40)

# Load the locally converted W4A8 weights with the Ascend quantization method
llm = LLM(model="/home/models/Qwen3-8B-w4a8",
          max_model_len=4096,
          quantization="ascend")

outputs = llm.generate(prompts, sampling_params)
for output in outputs:
    prompt = output.prompt
    generated_text = output.outputs[0].text
    print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
```