[quantization] Support w8a8 quantization (#580)
### What this PR does / why we need it?
Add a `VLLMAscendQuantizer` to support w8a8 quantization: static on linear
layers (W8A8) and dynamic on linear and MoE layers (W8A8_DYNAMIC). The
quantizer is enabled when a model has a [quantize
field](https://huggingface.co/vllm-ascend/Qwen2.5-0.5B-Instruct-w8a8/blob/main/config.json#L27)
in its config (see the sketch after this list). If MindIE Turbo is
installed, the MindIE Turbo Quantizer is applied; otherwise the
VLLMAscendQuantizer is used directly.
- This patch fixes the installation docs so that installation works
- This patch enables norm quantization by patching `RMSNorm.__init__`,
`RMSNorm.forward_oot`, and `NPUModelRunnerBase.load_model`
- Add `AscendW8A8LinearMethod` for W8A8
- Add `AscendW8A8DynamicLinearMethod` and
`AscendW8A8DynamicFusedMoEMethod` for W8A8_DYNAMIC
- Add an e2e test for `vllm-ascend/Qwen2.5-0.5B-Instruct-w8a8`
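A minimal sketch of the trigger condition (the `quantize` field name comes
from the linked config; the surrounding code and exact value are assumptions
for illustration only, not the PR's actual detection logic):
```python
import json

# Hypothetical check: the quantizer is enabled when the checkpoint's
# config.json carries a "quantize" field.
with open("config.json") as f:
    cfg = json.load(f)
if cfg.get("quantize"):  # e.g. "w8a8" for the Qwen2.5-0.5B-Instruct-w8a8 checkpoint
    print("quantized checkpoint detected:", cfg["quantize"])
```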
### Does this PR introduce _any_ user-facing change?
Yes, w8a8 quantization is now supported. With this patch, users can run
w8a8 models with a command such as:
```
vllm serve /root/.cache/modelscope/hub/Qwen/Qwen2.5-7B-Instruct-w8a8 --served-model-name "qwen2.5-7B"
```
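Offline inference works the same way; a minimal sketch using the standard
vLLM Python API (the model path is illustrative — substitute a local w8a8
checkpoint):
```python
from vllm import LLM, SamplingParams

# Illustrative offline counterpart of the serve command above.
llm = LLM(model="vllm-ascend/Qwen2.5-0.5B-Instruct-w8a8")
outputs = llm.generate(["Hello, my name is"],
                       SamplingParams(temperature=0.8, max_tokens=32))
for out in outputs:
    print(out.outputs[0].text)
```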
### How was this patch tested?
0. CI passed: an e2e test for `vllm-ascend/Qwen2.5-0.5B-Instruct-w8a8` was added
1. From @Yikun:
I tested Qwen2.5-0.5B-Instruct-w8a8 functionally and all is well; please
refer to
https://github.com/vllm-project/vllm-ascend/pull/580#issuecomment-2816747613
2. From @dingdingchaomian:
Tested with qwen2.5-72b-instruct and deepseek-v2-lite-chat; both models
were quantized using Ascend's msmodelslim tool:
- Qwen2.5-72b-instruct was tested twice, once with w8a8 static and once
with w8a8 dynamic.
- Deepseek-v2-lite-chat was tested once because its quantization uses
both static and dynamic w8a8.
Both models were tested with offline inference and online serving, and
both work well. The inference code is exactly the same as the examples in
https://vllm-ascend.readthedocs.io/en/latest/quick_start.html, with only
the model path and tensor parallel size changed.
---------
Signed-off-by: dingdingchaomian <wangce21@huawei.com>
Signed-off-by: Yikun Jiang <yikunkero@gmail.com>
Co-authored-by: dingdingchaomian <wangce21@huawei.com>
Co-authored-by: Angazenn <zengyanjia@huawei.com>
Co-authored-by: liujiaxu <liujiaxu4@huawei.com>
Co-authored-by: ApsarasX <apsarax@outlook.com>
Co-authored-by: ganyi1996ppo <pleaplusone.gy@gmail.com>

#
# Copyright (c) 2025 Huawei Technologies Co., Ltd. All Rights Reserved.
# This file is a part of the vllm-ascend project.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

from typing import Any, Callable, Dict, List, Optional
import torch
import torch.distributed as dist
import torch_npu
from vllm.distributed import GroupCoordinator

import vllm_ascend.envs as envs_ascend
from vllm_ascend.distributed.parallel_state import get_ep_group
from vllm_ascend.ops.fused_moe import select_experts

VLLM_ENABLE_MC2: bool = envs_ascend.VLLM_ENABLE_MC2
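# VLLM_ENABLE_MC2 mirrors the VLLM_ENABLE_MC2 environment toggle from
# vllm_ascend.envs; when set, callers are expected to take the
# npu_moe_distribute_dispatch/combine (MC2) path below instead of the
# all2all fallback. This is a hedged reading of the file's structure --
# the actual dispatch site is outside this excerpt.
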
def apply_mlp(hidden_states_wrapper: List[torch.Tensor],
              w1: torch.Tensor,
              w1_scale: torch.Tensor,
              w2: torch.Tensor,
              w2_scale: torch.Tensor,
              group_list: torch.Tensor,
              dynamic_scale: Optional[torch.Tensor] = None,
              group_list_type: int = 1) -> torch.Tensor:
    """
    apply MLP: gate_up_proj -> swiglu -> down_proj

    Args:
        hidden_states_wrapper: wrapper of input hidden states with shape
            (num_tokens, hidden_size).
        w1: expert weights1 with shape
            (num_experts, hidden_size, intermediate_size * 2)
        w1_scale: weights1 scale with shape (num_experts, intermediate_size * 2)
        w2: expert weights2 with shape
            (num_experts, intermediate_size, hidden_size)
        w2_scale: weights2 scale with shape (num_experts, hidden_size)
        group_list: number of tokens for each expert, as a cumsum or as
            per-expert counts depending on group_list_type, with shape
            (num_experts).
        dynamic_scale: per-token input scale; computed on the fly via
            npu_dynamic_quant when not provided.
        group_list_type: layout of group_list passed to npu_grouped_matmul.

    Note: weights are expected pre-transposed:
        w1: (num_experts, intermediate_size * 2, hidden_size) ->
            (num_experts, hidden_size, intermediate_size * 2)
        w2: (num_experts, hidden_size, intermediate_size) ->
            (num_experts, intermediate_size, hidden_size)

    Returns:
        hidden_states: output hidden states after MLP.
    """

    assert len(hidden_states_wrapper) == 1
    hidden_states = hidden_states_wrapper.pop()
    if dynamic_scale is None:
        hidden_states, pertoken_scale = torch_npu.npu_dynamic_quant(
            hidden_states)
    else:
        pertoken_scale = dynamic_scale

    # gmm1: gate_up_proj
    hidden_states = torch_npu.npu_grouped_matmul(
        x=[hidden_states],
        weight=[w1],
        scale=[w1_scale],
        per_token_scale=[pertoken_scale],
        split_item=2,
        group_list_type=group_list_type,
        group_type=0,
        group_list=group_list,
        output_dtype=w2_scale.dtype)[0]

    # act_fn: swiglu
    hidden_states = torch_npu.npu_swiglu(hidden_states)
    hidden_states, swiglu_out_scale = torch_npu.npu_dynamic_quant(
        hidden_states)

    # gmm2: down_proj
    hidden_states = torch_npu.npu_grouped_matmul(
        x=[hidden_states],
        weight=[w2],
        scale=[w2_scale],
        per_token_scale=[swiglu_out_scale],
        split_item=2,
        group_list_type=group_list_type,
        group_type=0,
        group_list=group_list,
        output_dtype=w2_scale.dtype)[0]

    return hidden_states
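
# A note on the wrapper convention used above: callers hand the activation
# tensor to apply_mlp inside a single-element list so the function can pop it
# and drop the last outside reference, letting the buffer be freed early.
# Hedged usage sketch (names are illustrative; real inputs come from the
# routing/dispatch code below):
#
#     wrapper = [expanded_hidden_states]
#     del expanded_hidden_states          # caller releases its reference
#     out = apply_mlp(wrapper, w1, w1_scale, w2, w2_scale, group_list)
#     assert not wrapper                  # apply_mlp popped the tensor
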
def fused_experts_with_mc2(
    hidden_states: torch.Tensor,
    w1: torch.Tensor,
    w2: torch.Tensor,
    w1_scale: torch.Tensor,
    w2_scale: torch.Tensor,
    topk_weights: torch.Tensor,
    topk_ids: torch.Tensor,
    top_k: int,
    expert_map: Optional[torch.Tensor] = None,
    moe_all_to_all_group_name: str = "",
) -> torch.Tensor:
    global_bs = 0
    moe_expert_num = len(expert_map)
    # hidden_states = hidden_states.bfloat16()
    kwargs = {
        "x": hidden_states,
        "expert_ids": topk_ids,
        "expert_shard_type": 0,
        "shared_expert_rank_num": 0,
        "moe_expert_num": moe_expert_num,
        "global_bs": global_bs,
    }

    rank = torch.distributed.get_rank()

    quant_mode = 2
    ep_group = get_ep_group().device_group
    local_rank = torch.distributed.get_rank(group=ep_group)
    all_to_all_group_size = torch.distributed.get_world_size(ep_group)

    world_size = torch.distributed.get_world_size()
    tp_size = world_size // all_to_all_group_size
    tp_rank = rank % tp_size

    stage1_kwargs = {
        "scales": None,
        "quant_mode": quant_mode,
        "group_ep": moe_all_to_all_group_name,
        "ep_world_size": all_to_all_group_size,
        "ep_rank_id": local_rank,
        # "group_tp": self.moe_rs_group_name,
        "group_tp": moe_all_to_all_group_name,
        "tp_world_size": tp_size,
        "tp_rank_id": tp_rank,
    }
    kwargs.update(stage1_kwargs)

    output = torch_npu.npu_moe_distribute_dispatch(**kwargs)
    # comm_stream.wait_stream(torch.npu.current_stream())
    expand_x, dynamic_scale, expand_idx, expert_token_nums, ep_recv_counts = output[
        0:5]

    if quant_mode == 0:
        dynamic_scale = None

    # place hidden_states in a list to transfer its ownership into `apply_mlp`
    hidden_states_wrapper = [expand_x]
    del expand_x

    down_out_list = apply_mlp(hidden_states_wrapper,
                              w1,
                              w1_scale,
                              w2,
                              w2_scale,
                              expert_token_nums,
                              dynamic_scale=dynamic_scale)

    # moeCombine
    kwargs = {
        "expand_x": down_out_list,
        "expert_ids": topk_ids,
        "expand_idx": expand_idx,
        "expert_scales": topk_weights.to(torch.float32),
        "expert_shard_type": 0,
        "shared_expert_rank_num": 0,
        "moe_expert_num": moe_expert_num,
        "global_bs": 0,
    }
    tp_recv_counts = torch.empty(1,
                                 dtype=torch.int32,
                                 device=hidden_states.device)
    stage3_kwargs = {
        "ep_send_counts": ep_recv_counts,
        "group_ep": moe_all_to_all_group_name,
        "ep_world_size": all_to_all_group_size,
        "ep_rank_id": local_rank,
        "tp_send_counts": tp_recv_counts,
        # "group_tp": self.moe_rs_group_name,
        "group_tp": moe_all_to_all_group_name,
        "tp_world_size": tp_size,
        "tp_rank_id": tp_rank,
    }
    kwargs.update(stage3_kwargs)

    hidden_states = torch_npu.npu_moe_distribute_combine(**kwargs)

    return hidden_states
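
# Rank bookkeeping in fused_experts_with_mc2, worked through with
# illustrative numbers (not from any real deployment): with a global world
# size of 8 and an EP group of size 4, tp_size = 8 // 4 = 2, and global
# rank 5 maps to tp_rank = 5 % 2 = 1.
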
# The current all2all-based implementation of expert parallelism
# is under-optimized.
def fused_experts_with_all2all(
    hidden_states: torch.Tensor,
    w1: torch.Tensor,
    w1_scale: torch.Tensor,
    w2: torch.Tensor,
    w2_scale: torch.Tensor,
    topk_weights: torch.Tensor,
    topk_ids: torch.Tensor,
    top_k: int,
    expert_map: Optional[torch.Tensor] = None,
    ep_group: GroupCoordinator = None,
):
    original_shape = hidden_states.shape
    if len(original_shape) == 3:
        hidden_states = hidden_states.view(-1, hidden_states.shape[-1])

    num_tokens, _ = hidden_states.shape
    num_experts = w1.shape[0]
    device = hidden_states.device

    if expert_map is not None:
        global_num_experts = len(expert_map)
        local_num_experts = global_num_experts // ep_group.world_size
        row_idx_len = num_tokens * top_k
        row_idx = (torch.arange(0,
                                row_idx_len,
                                dtype=torch.int32,
                                device=device).view(top_k, -1).permute(
                                    1, 0).contiguous())
        hidden_states, expanded_row_idx, expanded_expert_idx = torch_npu.npu_moe_init_routing(
            hidden_states,
            row_idx=row_idx,
            expert_idx=topk_ids,
            active_num=num_tokens)

        global_expert_tokens = torch.bincount(expanded_expert_idx,
                                              minlength=global_num_experts)
        scatter_sizes = global_expert_tokens.view(ep_group.world_size,
                                                  -1).sum(-1)

        gather_sizes = torch.empty_like(scatter_sizes)
        dist.all_to_all_single(gather_sizes,
                               scatter_sizes,
                               group=ep_group.device_group)
        scatter_size_list = scatter_sizes.cpu().tolist()
        gather_size_list = gather_sizes.cpu().tolist()

        expanded_expert_idx = expanded_expert_idx % local_num_experts
        hidden_states = ep_group.all_to_all(hidden_states, 0, 0,
                                            scatter_size_list,
                                            gather_size_list)
        local_expert_idx = ep_group.all_to_all(expanded_expert_idx, 0, 0,
                                               scatter_size_list,
                                               gather_size_list)

        sorted_local_expert_idx, sorted_idx = torch.sort(local_expert_idx)

        expert_tokens = torch_npu.npu_moe_compute_expert_tokens(
            sorted_local_expert_idx, local_num_experts).to(torch.int64)

        hidden_states = hidden_states[sorted_idx]
        group_list_type = 0
    else:
        row_idx_len = num_tokens * top_k
        row_idx = torch.arange(0,
                               row_idx_len,
                               dtype=torch.int32,
                               device=topk_weights.device).view(
                                   top_k, -1).permute(1, 0).contiguous()
        hidden_states, expanded_row_idx, expanded_expert_idx = torch_npu.npu_moe_init_routing(
            hidden_states,
            row_idx=row_idx,
            expert_idx=topk_ids,
            active_num=num_tokens)

        expert_tokens = torch_npu.npu_moe_compute_expert_tokens(
            expanded_expert_idx, num_experts)
        expert_tokens = expert_tokens.to(torch.int64)
        group_list_type = 0

    # place hidden_states in a list to transfer its ownership into `apply_mlp`
    hidden_states_wrapper = [hidden_states]
    del hidden_states

    hidden_states = apply_mlp(hidden_states_wrapper,
                              w1,
                              w1_scale,
                              w2,
                              w2_scale,
                              expert_tokens,
                              group_list_type=group_list_type)

    if expert_map is not None:
        resorted_idx = torch.argsort(sorted_idx)
        hidden_states = hidden_states[resorted_idx]
        hidden_states = ep_group.all_to_all(hidden_states, 0, 0,
                                            gather_size_list,
                                            scatter_size_list)

        final_hidden_states = torch_npu.npu_moe_finalize_routing(
            hidden_states,
            skip1=None,
            skip2=None,
            bias=None,
            scales=topk_weights,
            expanded_src_to_dst_row=expanded_row_idx,
            export_for_source_row=topk_ids,
        )
    else:
        # TODO: device memory is reordered twice here; replace the current
        # implementation when suitable operators become available.
        final_hidden_states = torch_npu.npu_moe_finalize_routing(
            hidden_states,
            skip1=None,
            skip2=None,
            bias=None,
            scales=topk_weights,
            expanded_src_to_dst_row=expanded_row_idx,
            export_for_source_row=topk_ids,
        )
    if len(original_shape) == 3:
        final_hidden_states = final_hidden_states.view(original_shape)
    return final_hidden_states
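
# How the scatter sizes above fall out of per-expert token counts when the
# global experts are sharded evenly across EP ranks -- a worked example with
# illustrative numbers (2 ranks, 2 local experts each):
#
#     global_expert_tokens = [3, 1, 2, 4]        # tokens per global expert
#     scatter_sizes = [3 + 1, 2 + 4] = [4, 6]    # rows sent to rank 0 / 1
#
# Each rank then learns how many rows it will receive by exchanging these
# counts through dist.all_to_all_single before the payload all_to_all.
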
def fused_experts(hidden_states: torch.Tensor,
                  w1: torch.Tensor,
                  w1_scale: torch.Tensor,
                  w2: torch.Tensor,
                  w2_scale: torch.Tensor,
                  topk_weights: torch.Tensor,
                  topk_ids: torch.Tensor,
                  top_k: int,
                  expert_map: Optional[torch.Tensor] = None):
    original_shape = hidden_states.shape
    if len(original_shape) == 3:
        hidden_states = hidden_states.view(-1, hidden_states.shape[-1])

    num_tokens, _ = hidden_states.shape
    num_experts = w1.shape[0]
    dtype = hidden_states.dtype
    device = hidden_states.device

    if expert_map is not None:
        # Generate token indices and flatten
        token_indices = (torch.arange(num_tokens,
                                      device=device,
                                      dtype=torch.int64).unsqueeze(1).expand(
                                          -1, top_k).reshape(-1))

        # Flatten token-to-expert mappings and map to local experts
        weights_flat = topk_weights.view(-1)
        experts_flat = topk_ids.view(-1)
        local_experts_flat = expert_map[experts_flat]

        # Filter valid token-expert pairs
        mask = local_experts_flat != -1
        filtered_weights = torch.where(
            mask, weights_flat, torch.zeros_like(weights_flat)).to(dtype)
        filtered_experts = torch.where(
            mask, local_experts_flat,
            torch.full_like(local_experts_flat,
                            num_experts)).to(topk_ids.dtype)

        # Sort by local expert IDs
        sort_indices = torch.argsort(filtered_experts)
        sorted_token_indices = token_indices[sort_indices]
        sorted_weights = filtered_weights[sort_indices]

        # Compute per-expert token counts; the extra bucket collects tokens
        # carrying the masked-out sentinel id. This is equivalent to but
        # faster than:
        # >>> token_counts = torch.bincount(filtered_experts, minlength=num_experts + 1)
        token_counts = torch.zeros(num_experts + 1,
                                   device=device,
                                   dtype=torch.int64)
        ones = torch.ones_like(filtered_experts, dtype=torch.int64)
        token_counts.scatter_add_(0, filtered_experts.to(torch.int64), ones)
        expert_tokens = token_counts[:num_experts]
        # Rearrange hidden_states
        hidden_states = hidden_states[sorted_token_indices]
        group_list_type = 1
    else:
        row_idx_len = num_tokens * top_k
        row_idx = torch.arange(0,
                               row_idx_len,
                               dtype=torch.int32,
                               device=topk_weights.device).view(
                                   top_k, -1).permute(1, 0).contiguous()
        hidden_states, expanded_row_idx, expanded_expert_idx = torch_npu.npu_moe_init_routing(
            hidden_states,
            row_idx=row_idx,
            expert_idx=topk_ids,
            active_num=num_tokens)

        expert_tokens = torch_npu.npu_moe_compute_expert_tokens(
            expanded_expert_idx, num_experts)
        expert_tokens = expert_tokens.to(torch.int64)
        group_list_type = 0

    # place hidden_states in a list to transfer its ownership into `apply_mlp`
    hidden_states_wrapper = [hidden_states]
    del hidden_states

    hidden_states = apply_mlp(hidden_states_wrapper,
                              w1,
                              w1_scale,
                              w2,
                              w2_scale,
                              expert_tokens,
                              group_list_type=group_list_type)

    if expert_map is not None:
        hidden_states.mul_(sorted_weights.unsqueeze(1))
        final_hidden_states = torch.zeros(*original_shape,
                                          device=device,
                                          dtype=dtype)

        num_valid_tokens = mask.sum()
        valid_token_mask = torch.arange(
            0, sorted_token_indices.shape[0],
            device=device).unsqueeze(1) < num_valid_tokens
        hidden_states = hidden_states.masked_fill_(~valid_token_mask,
                                                   0).to(dtype)
        final_hidden_states.index_add_(0, sorted_token_indices, hidden_states)
    else:
        # TODO: device memory is reordered twice here; replace the current
        # implementation when suitable operators become available.
        final_hidden_states = torch_npu.npu_moe_finalize_routing(
            hidden_states,
            skip1=None,
            skip2=None,
            bias=None,
            scales=topk_weights,
            expanded_src_to_dst_row=expanded_row_idx,
            export_for_source_row=topk_ids,
        )

    if len(original_shape) == 3:
        final_hidden_states = final_hidden_states.view(original_shape)
    return final_hidden_states
|
2025-05-09 15:09:37 +08:00
|
|
|
|
[quantization] Support w8a8 quantization (#580)
### What this PR does / why we need it?
Add a `VLLMAscendQuantizer` to support w8a8 static (W8A8) and dynamic on
linear and moe (W8A8_DYNAMIC), the quantizer will be enable if a model
has [quantize
filed](https://huggingface.co/vllm-ascend/Qwen2.5-0.5B-Instruct-w8a8/blob/main/config.json#L27).
If MindIE Turbo is installed, the MindIE Turbo Quantizer will apply,
otherwise will use VLLMAscendQuantizer directly.
- This patch fix installation docs to make installation work
- This patch enable norm quantization by patch `RMSNorm.__init__`,
`RMSNorm.forward_oot`, `NPUModelRunnerBase.load_model`
- Add `AscendW8A8LinearMethod` for W8A8
- Add `AscendW8A8DynamicLinearMethod` and
`AscendW8A8DynamicFusedMoEMethod` for W8A8_DYNAMIC
- Add a e2e test for `vllm-ascend/Qwen2.5-0.5B-Instruct-w8a8`
### Does this PR introduce _any_ user-facing change?
Yes, support w8a8 quantization. After this patch supported, users can
use below commands to run w8a8 models:
```
vllm serve /root/.cache/modelscope/hub/Qwen/Qwen2.5-7B-Instruct-w8a8 --served-model-name "qwen2.5-7B"
```
### How was this patch tested?
0. CI passed: add e2e test for `vllm-ascend/Qwen2.5-0.5B-Instruct-w8a8`
1. From @Yikun:
I test Qwen2.5-0.5B-Instruct-w8a8 for functional test all is well, pls
refer to
https://github.com/vllm-project/vllm-ascend/pull/580#issuecomment-2816747613
2. From @dingdingchaomian :
Use qwen2.5-72b-instruct model and deepseek-v2-lite-chat tested, both
models were quantized using Ascend's msmodelslim tool:
- Qwen2.5-72b-instruct were tested twice, one for w8a8 static and one
for w8a8 dynamic.
- Deepseek-v2-lite-chat were tested once because its quantization used
both static and dynamic w8a8.
Models were tested using both off line inference and online serving, and
both work well. The inference codes are exactly the same with the
examples in
https://vllm-ascend.readthedocs.io/en/latest/quick_start.html, with
model path and tensor parallel number changed.
---------
Signed-off-by: dingdingchaomian <wangce21@huawei.com>
Signed-off-by: Yikun Jiang <yikunkero@gmail.com>
Co-authored-by: dingdingchaomian <wangce21@huawei.com>
Co-authored-by: Angazenn <zengyanjia@huawei.com>
Co-authored-by: liujiaxu <liujiaxu4@huawei.com>
Co-authored-by: ApsarasX <apsarax@outlook.com>
Co-authored-by: ganyi1996ppo <pleaplusone.gy@gmail.com>
2025-04-20 18:14:05 +08:00
|
|
|
if len(original_shape) == 3:
|
|
|
|
|
final_hidden_states = final_hidden_states.view(original_shape)
|
|
|
|
|
return final_hidden_states
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
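

# A minimal pure-PyTorch sketch of the combine step that
# npu_moe_finalize_routing performs above: each token's top-k expert
# outputs are scaled by their router weights and summed back into a
# single hidden state. All names and shapes here are illustrative
# assumptions, not the NPU kernel's actual layout.
def _example_finalize_routing_sketch():
    import torch
    num_tokens, top_k, hidden = 4, 2, 8
    # expert outputs, one row per (token, routed expert) pair
    expert_out = torch.randn(num_tokens * top_k, hidden)
    topk_weights = torch.rand(num_tokens, top_k)
    combined = (expert_out.view(num_tokens, top_k, hidden) *
                topk_weights.unsqueeze(-1)).sum(dim=1)
    return combined  # (num_tokens, hidden)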


class AscendW8A8DynamicLinearMethod:
    """Linear method for Ascend W8A8_DYNAMIC."""

    def __init__(self):
        self.transpose_weight = True

    @staticmethod
    def get_weight(input_size: int, output_size: int,
                   params_dtype: torch.dtype) -> Dict[str, Any]:
        params_dict = {
            "weight": torch.empty(output_size, input_size, dtype=torch.int8)
        }
        return params_dict

    @staticmethod
    def get_pertensor_param(params_dtype: torch.dtype) -> Dict[str, Any]:
        return {}

    @staticmethod
    def get_perchannel_param(
        output_size: int,
        params_dtype: torch.dtype,
    ) -> Dict[str, Any]:
        params_dict = {}
        params_dict["weight_scale"] = torch.empty(output_size,
                                                  1,
                                                  dtype=params_dtype)
        params_dict["weight_offset"] = torch.empty(output_size,
                                                   1,
                                                   dtype=params_dtype)
        return params_dict

    @staticmethod
    def apply(
        layer: torch.nn.Module,
        x: torch.Tensor,
        bias: Optional[torch.Tensor] = None,
        tp_rank: Optional[int] = 0,
    ) -> torch.Tensor:
        original_dtype = x.dtype
        # quantize activations on the fly, one scale per token (ATB
        # dynamic quantization)
        quant_out, dynamic_scale = torch_npu.npu_dynamic_quant(x)
        return torch_npu.npu_quant_matmul(
            quant_out,
            layer.weight,
            layer.weight_scale,
            pertoken_scale=dynamic_scale,
            bias=bias,
            output_dtype=original_dtype,
        )

    def process_weights_after_loading(self, layer):
        if self.transpose_weight:
            layer.weight.data = layer.weight.data.transpose(0, 1).contiguous()
        # cast quantized weight tensors to NZ format (ACL format 29) for
        # higher inference speed
        layer.weight.data = torch_npu.npu_format_cast(layer.weight.data, 29)
        layer.weight_scale.data = layer.weight_scale.data.flatten()
        layer.weight_scale_fp32 = layer.weight_scale.data.to(torch.float32)
        layer.weight_offset.data = layer.weight_offset.data.flatten()
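

# A minimal sketch of the math in AscendW8A8DynamicLinearMethod.apply,
# written in plain PyTorch so the per-token dynamic scheme is visible:
# activations are quantized to int8 with one scale per token, multiplied
# against the int8 weight, then de-quantized with both the per-token and
# per-channel scales. The symmetric int8 scheme is an assumption for
# illustration; the real work is done by the NPU kernels above.
def _example_dynamic_w8a8_linear_sketch():
    import torch
    x = torch.randn(4, 16)                       # activations
    w_int8 = torch.randint(-128, 128, (8, 16), dtype=torch.int8)
    weight_scale = torch.rand(8) * 0.01          # one scale per output channel
    # dynamic per-token activation quantization (symmetric int8)
    act_scale = x.abs().amax(dim=-1, keepdim=True) / 127.0
    x_int8 = (x / act_scale).round().clamp(-128, 127)
    # int8 x int8 matmul (emulated here in fp32), then de-quantize
    y = (x_int8 @ w_int8.to(torch.float32).t()) * act_scale * weight_scale
    return y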


class AscendW8A8DynamicFusedMoEMethod:
    """FusedMoE method for Ascend W8A8_DYNAMIC."""

    def __init__(self):
        self.transpose_weight = True

        self.ep_group = get_ep_group()

        try:
            device_group = self.ep_group.device_group
            # TODO: Try local_rank = ep_group.rank_in_group
            local_rank = torch.distributed.get_rank(group=device_group)
            backend = device_group._get_backend(torch.device("npu"))
            self.moe_all_to_all_group_name = backend.get_hccl_comm_name(
                local_rank)
        except AttributeError:
            self.moe_all_to_all_group_name = ""
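
    # Shape reference for the allocators below (illustrative numbers, not
    # from the original source): with num_experts=64,
    # intermediate_size_per_partition=1408 and hidden_sizes=2048,
    # get_weight() allocates
    #   w13_weight: (64, 2 * 1408, 2048) int8  (fused gate/up projections)
    #   w2_weight:  (64, 2048, 1408) int8      (down projection)
    # and get_dynamic_quant_param() allocates matching per-channel
    # scale/offset tensors whose trailing dimension is 1.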

    @staticmethod
    def get_weight(num_experts: int, intermediate_size_per_partition: int,
                   hidden_sizes: int,
                   params_dtype: torch.dtype) -> Dict[str, Any]:
        param_dict = {}
        param_dict["w13_weight"] = torch.empty(
            num_experts,
            2 * intermediate_size_per_partition,
            hidden_sizes,
            dtype=torch.int8)
        param_dict["w2_weight"] = torch.empty(num_experts,
                                              hidden_sizes,
                                              intermediate_size_per_partition,
                                              dtype=torch.int8)
        return param_dict

    @staticmethod
    def get_dynamic_quant_param(num_experts: int,
                                intermediate_size_per_partition: int,
                                hidden_sizes: int,
                                params_dtype: torch.dtype) -> Dict[str, Any]:
        param_dict = {}
        param_dict["w13_weight_scale"] = torch.empty(
            num_experts,
            2 * intermediate_size_per_partition,
            1,
            dtype=params_dtype)
        param_dict["w13_weight_offset"] = torch.empty(
            num_experts,
            2 * intermediate_size_per_partition,
            1,
            dtype=params_dtype)
        param_dict["w2_weight_scale"] = torch.empty(num_experts,
                                                    hidden_sizes,
                                                    1,
                                                    dtype=params_dtype)
        param_dict["w2_weight_offset"] = torch.empty(num_experts,
                                                     hidden_sizes,
                                                     1,
                                                     dtype=params_dtype)
        return param_dict
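
    # Dispatch summary for apply() below: tokens are routed either by
    # npu_moe_gating_top_k (the 256-expert pattern) or by select_experts,
    # then computed through one of three paths: MC2 for decode when
    # VLLM_ENABLE_MC2 is set, plain fused_experts when dp_size == 1, and
    # all-to-all based fused experts otherwise.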
    def apply(
        self,
        layer: torch.nn.Module,
        x: torch.Tensor,
        router_logits: torch.Tensor,
        top_k: int,
        renormalize: bool,
        use_grouped_topk: bool = False,
        global_num_experts: int = -1,
        expert_map: Optional[torch.Tensor] = None,
        topk_group: Optional[int] = None,
        num_expert_group: Optional[int] = None,
        custom_routing_function: Optional[Callable] = None,
        scoring_func: str = "softmax",
        e_score_correction_bias: Optional[torch.Tensor] = None,
        is_prefill: bool = True,
        enable_force_load_balance: bool = True,
        dp_size: int = 1,
        **kwargs,
    ) -> torch.Tensor:
        assert router_logits.shape[
            1] == global_num_experts, "Number of global experts mismatch"

        # NOTE: npu_moe_gating_top_k currently only supports the
        # `group_count=256` pattern
        if global_num_experts == 256:
            topk_weights, topk_ids, _ = torch_npu.npu_moe_gating_top_k(
                router_logits,
                k=top_k,  # top_k is currently fixed to 8
                bias=e_score_correction_bias,
                k_group=topk_group,  # fixed: 4
                group_count=num_expert_group,  # fixed: 8
                group_select_mode=1,  # 0: max in group; 1: topk2.sum (fixed)
                renorm=0,  # 0: softmax->topk (fixed); 1: topk->softmax
                norm_type=1,  # 0: softmax; 1: sigmoid (fixed)
                # out_flag=False,  # TODO new api; whether to emit the third output
                # y2_flag=False,  # old api; whether to emit the third output
                routed_scaling_factor=1,
                eps=float(1e-20))
        else:
            topk_weights, topk_ids = select_experts(
                hidden_states=x,
                router_logits=router_logits,
                top_k=top_k,
                use_grouped_topk=use_grouped_topk,
                renormalize=renormalize,
                topk_group=topk_group,
                num_expert_group=num_expert_group,
                custom_routing_function=custom_routing_function,
                scoring_func=scoring_func,
                e_score_correction_bias=e_score_correction_bias,
            )

        # This is a naive implementation of expert load balancing, meant to
        # avoid accumulating too many tokens on a single rank. It is
        # currently only activated during profile runs.
        if enable_force_load_balance:
            topk_ids = torch.randint_like(topk_ids, 0, global_num_experts)

        if VLLM_ENABLE_MC2 and not is_prefill:
            return fused_experts_with_mc2(
                hidden_states=x,
                w1=layer.w13_weight,
                w2=layer.w2_weight,
                w1_scale=layer.w13_weight_scale,
                w2_scale=layer.w2_weight_scale,
                topk_weights=topk_weights,
                topk_ids=topk_ids,
                top_k=top_k,
                expert_map=expert_map,
                moe_all_to_all_group_name=self.moe_all_to_all_group_name)
        elif dp_size == 1:
            return fused_experts(hidden_states=x,
                                 w1=layer.w13_weight,
                                 w1_scale=layer.w13_weight_scale,
                                 w2=layer.w2_weight,
                                 w2_scale=layer.w2_weight_scale,
                                 topk_weights=topk_weights,
                                 topk_ids=topk_ids,
                                 top_k=top_k,
                                 expert_map=expert_map)
        else:
            return fused_experts_with_all2all(hidden_states=x,
                                              w1=layer.w13_weight,
                                              w1_scale=layer.w13_weight_scale,
                                              w2=layer.w2_weight,
                                              w2_scale=layer.w2_weight_scale,
                                              topk_weights=topk_weights,
                                              topk_ids=topk_ids,
                                              top_k=top_k,
                                              expert_map=expert_map,
                                              ep_group=self.ep_group)

    def process_weights_after_loading(self, layer):
        if self.transpose_weight:
            layer.w13_weight.data = layer.w13_weight.data.transpose(
                1, 2).contiguous()
            layer.w2_weight.data = layer.w2_weight.data.transpose(
                1, 2).contiguous()
        layer.w13_weight_scale.data = layer.w13_weight_scale.data.view(
            layer.w13_weight_scale.data.shape[0], -1)
        layer.w13_weight_offset.data = layer.w13_weight_offset.data.view(
            layer.w13_weight_offset.data.shape[0], -1)
        layer.w2_weight_scale.data = layer.w2_weight_scale.data.view(
            layer.w2_weight_scale.data.shape[0], -1)
        layer.w2_weight_offset.data = layer.w2_weight_offset.data.view(
            layer.w2_weight_offset.data.shape[0], -1)
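

# A minimal pure-PyTorch sketch of the routing that select_experts performs
# on the non-grouped softmax path above: softmax over the router logits,
# pick the top-k experts per token, and optionally renormalize the selected
# weights. This is an illustration under assumptions, not the vLLM helper
# itself.
def _example_softmax_topk_routing_sketch():
    import torch
    num_tokens, num_experts, top_k = 4, 16, 2
    router_logits = torch.randn(num_tokens, num_experts)
    scores = router_logits.softmax(dim=-1)
    topk_weights, topk_ids = scores.topk(top_k, dim=-1)
    # renormalize so the selected weights sum to 1 per token
    topk_weights = topk_weights / topk_weights.sum(dim=-1, keepdim=True)
    return topk_weights, topk_ids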