### What this PR does / why we need it?
When weights are quantized with the LLM Compressor tool from the vLLM community, the vLLM Ascend engine needs to be adapted to support the resulting compressed-tensors quantization format. This PR:
1. Supports W8A8 int8 dynamic weights for MoE models.
2. Specifies the W4A16 quantization configuration.
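For context on item 1, "W8A8 int8 dynamic" usually means weights are quantized per output channel offline while activations get a fresh per-token scale at runtime. The sketch below illustrates that scheme in plain NumPy; it is a simplified illustration of the general technique, not code from this PR or from vLLM Ascend.

```python
import numpy as np

def quantize_per_channel_int8(w):
    # Offline, symmetric per-output-channel weight quantization to int8.
    scale = np.abs(w).max(axis=1, keepdims=True) / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def quantize_per_token_int8(x):
    # At runtime, each token row gets its own scale ("dynamic" activation quant).
    scale = np.abs(x).max(axis=1, keepdims=True) / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

rng = np.random.default_rng(0)
w = rng.standard_normal((8, 16)).astype(np.float32)   # [out_features, in_features]
x = rng.standard_normal((4, 16)).astype(np.float32)   # [tokens, in_features]

qw, sw = quantize_per_channel_int8(w)
qx, sx = quantize_per_token_int8(x)

# int8 matmul accumulated in int32, then rescaled back to float.
y_int8 = (qx.astype(np.int32) @ qw.T.astype(np.int32)) * (sx * sw.T)
y_ref = x @ w.T  # full-precision reference; quantization error stays small
```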
### Does this PR introduce _any_ user-facing change?
No
- vLLM version: v0.13.0
- vLLM main: 2f4e6548ef
---------
Signed-off-by: LHXuuu <scut_xlh@163.com>
Signed-off-by: menogrey <1299267905@qq.com>
Signed-off-by: Wang Kunpeng <1289706727@qq.com>
Co-authored-by: menogrey <1299267905@qq.com>
Co-authored-by: Wang Kunpeng <1289706727@qq.com>
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

from llmcompressor import oneshot
from llmcompressor.modifiers.quantization import QuantizationModifier

MODEL_ID = "Qwen/Qwen3-30B-A3B-Instruct-2507"

model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, dtype=torch.bfloat16, trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)

# Quantize all Linear layers to int8, skipping lm_head and the MoE router gates.
recipe = QuantizationModifier(
    targets="Linear",
    scheme="INT8",
    ignore=["lm_head", "re:.*mlp.gate$"],
)

oneshot(
    model=model,
    recipe=recipe,
    trust_remote_code_model=True,
)

# Save to disk in compressed-tensors format.
SAVE_DIR = MODEL_ID.rstrip("/").split("/")[-1] + "-INT8_W8A8"
model.save_pretrained(SAVE_DIR, save_compressed=True)
tokenizer.save_pretrained(SAVE_DIR)
```