[quantization] Support w8a8 quantization (#580)
### What this PR does / why we need it?
Add a `VLLMAscendQuantizer` to support w8a8 static (W8A8) and dynamic
(W8A8_DYNAMIC) quantization on linear and MoE layers. The quantizer is
enabled if a model has a [quantize
field](https://huggingface.co/vllm-ascend/Qwen2.5-0.5B-Instruct-w8a8/blob/main/config.json#L27).
If MindIE Turbo is installed, its quantizer is applied; otherwise
`VLLMAscendQuantizer` is used directly.
- This patch fixes the installation docs so installation works
- This patch enables norm quantization by patching `RMSNorm.__init__`,
`RMSNorm.forward_oot`, and `NPUModelRunnerBase.load_model`
- Add `AscendW8A8LinearMethod` for W8A8
- Add `AscendW8A8DynamicLinearMethod` and
`AscendW8A8DynamicFusedMoEMethod` for W8A8_DYNAMIC
- Add an e2e test for `vllm-ascend/Qwen2.5-0.5B-Instruct-w8a8`
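The MindIE Turbo fallback described above can be sketched as follows (a minimal sketch only; the `mindie_turbo` import path and `MindIETurboQuantizer` name are placeholders, not the actual API):

```python
class VLLMAscendQuantizer:
    """Stand-in for the built-in vllm-ascend quantizer."""


def select_quantizer():
    # Prefer the MindIE Turbo quantizer when the package is installed;
    # otherwise fall back to the built-in VLLMAscendQuantizer.
    try:
        from mindie_turbo import MindIETurboQuantizer  # hypothetical import path
        return MindIETurboQuantizer
    except ImportError:
        return VLLMAscendQuantizer
```

On a machine without MindIE Turbo, `select_quantizer()` returns `VLLMAscendQuantizer`.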
### Does this PR introduce _any_ user-facing change?
Yes, this adds w8a8 quantization support. After this patch, users can
run w8a8 models with commands such as:
```
vllm serve /root/.cache/modelscope/hub/Qwen/Qwen2.5-7B-Instruct-w8a8 --served-model-name "qwen2.5-7B"
```
### How was this patch tested?
0. CI passed: added an e2e test for `vllm-ascend/Qwen2.5-0.5B-Instruct-w8a8`
1. From @Yikun:
I ran a functional test of Qwen2.5-0.5B-Instruct-w8a8 and all is well; please
refer to
https://github.com/vllm-project/vllm-ascend/pull/580#issuecomment-2816747613
2. From @dingdingchaomian :
Tested with the qwen2.5-72b-instruct and deepseek-v2-lite-chat models; both
models were quantized using Ascend's msmodelslim tool:
- Qwen2.5-72b-instruct was tested twice, once for w8a8 static and once
for w8a8 dynamic.
- Deepseek-v2-lite-chat was tested once because its quantization uses
both static and dynamic w8a8.
Models were tested using both offline inference and online serving, and
both work well. The inference code is exactly the same as the
examples in
https://vllm-ascend.readthedocs.io/en/latest/quick_start.html, with
only the model path and tensor parallel size changed.
---------
Signed-off-by: dingdingchaomian <wangce21@huawei.com>
Signed-off-by: Yikun Jiang <yikunkero@gmail.com>
Co-authored-by: dingdingchaomian <wangce21@huawei.com>
Co-authored-by: Angazenn <zengyanjia@huawei.com>
Co-authored-by: liujiaxu <liujiaxu4@huawei.com>
Co-authored-by: ApsarasX <apsarax@outlook.com>
Co-authored-by: ganyi1996ppo <pleaplusone.gy@gmail.com>
2025-04-20 18:14:05 +08:00
#
# Copyright (c) 2025 Huawei Technologies Co., Ltd. All Rights Reserved.
# This file is a part of the vllm-ascend project.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
Support multistream of shared experts in FusedMoE (#997)
Contains #1111 for completeness.
### What this PR does / why we need it?
Implement multi-stream parallelism for MoE layers with shared experts,
where computation of the shared experts is overlapped with expert token
dispatch and combine. Also, when multi-stream is enabled, the weights of
shared experts are forced to be replicated across all cards, regardless
of any tensor parallelism configuration, to avoid AllReduce operations.
The expected overlapping is:
```
| shared gate_up | shared act | | shared down |
| dispatch | routed gate_up, act, down | combine |
```
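As a rough analogy (a sketch only; the real implementation overlaps work on a second NPU stream, not OS threads, and the expert paths below are stubbed arithmetic), the shared-expert path can run concurrently with dispatch/combine of the routed path:

```python
from concurrent.futures import ThreadPoolExecutor


def shared_expert_path(tokens):
    # shared gate_up -> shared act -> shared down, stubbed as arithmetic
    return [t * 2 for t in tokens]


def routed_expert_path(tokens):
    # dispatch -> routed gate_up, act, down -> combine, stubbed
    return [t + 1 for t in tokens]


def moe_forward(tokens):
    with ThreadPoolExecutor(max_workers=1) as pool:
        # Launch the shared experts "on another stream" ...
        shared = pool.submit(shared_expert_path, tokens)
        # ... while the routed-expert path runs on the main one.
        routed = routed_expert_path(tokens)
        # Combine both contributions at the end.
        return [r + s for r, s in zip(routed, shared.result())]
```

For `[1, 2]` this yields `[4, 7]`: the two paths compute independently and are only synchronized at the final combine, which is the property the multi-stream scheduling exploits.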
### Does this PR introduce _any_ user-facing change?
No.
### How was this patch tested?
Tested on a 1x16 910 node, with a tailored 2-layer DSKv2.
---------
Signed-off-by: sdmyzlp <lrwei2@petalmail.com>
2025-06-11 09:18:38 +08:00
from typing import Any, Callable, Dict, Optional, Tuple, Union

import torch
import torch.distributed as dist
import torch_npu
from vllm.distributed import GroupCoordinator, get_ep_group
from vllm.forward_context import get_forward_context

import vllm_ascend.envs as envs
from vllm_ascend.ascend_config import get_ascend_config
from vllm_ascend.ascend_forward_context import FusedMoEState
from vllm_ascend.distributed.parallel_state import get_mc2_group
from vllm_ascend.ops.fused_moe import select_experts
from vllm_ascend.torchair.utils import npu_stream_switch, npu_wait_tensor
from vllm_ascend.utils import (ACL_FORMAT_FRACTAL_NZ, AscendSocVersion,
                               dispose_tensor, get_ascend_soc_version)

def apply_mlp_decode(hidden_states: torch.Tensor,
                     w1: torch.Tensor,
                     w1_scale: torch.Tensor,
                     w2: torch.Tensor,
                     w2_scale: torch.Tensor,
                     group_list: torch.Tensor,
                     dynamic_scale: torch.Tensor = None,
                     group_list_type: int = 1) -> torch.Tensor:
    """
    apply MLP: gate_up_proj -> swiglu -> down_proj

    Args:
        hidden_states: input hidden states with shape (num_tokens, hidden_size).
        w1: expert weights1 with shape
            (num_experts, hidden_size, intermediate_size * 2)
        w1_scale: weights1 scale with shape (num_experts, intermediate_size * 2)
        w2: expert weights2 with shape
            (num_experts, intermediate_size, hidden_size)
        w2_scale: weights2 scale with shape (num_experts, hidden_size)
        group_list: number of tokens for each expert, follows cumsum mode, with
            shape (num_experts).
        transpose_weight:
            w1: (num_experts, intermediate_size * 2, hidden_size) ->
                (num_experts, hidden_size, intermediate_size * 2)
            w2: (num_experts, hidden_size, intermediate_size) ->
                (num_experts, intermediate_size, hidden_size)

    Returns:
        hidden_states: output hidden states after MLP.
    """

    if dynamic_scale is None:
        unquantized_hidden_states = hidden_states
        hidden_states, pertoken_scale = torch_npu.npu_dynamic_quant(
            hidden_states)
        # Dispose the original unquantized hidden states
        # to save npu memory because they're no longer used.
        dispose_tensor(unquantized_hidden_states)
    else:
        pertoken_scale = dynamic_scale

    # gmm1: gate_up_proj
    hidden_states = torch_npu.npu_grouped_matmul(
        x=[hidden_states],
        weight=[w1],
        split_item=3,
        group_list_type=group_list_type,
        group_type=0,
        group_list=group_list,
        output_dtype=torch.int32)[0]

    # act_fn: swiglu
    hidden_states, swiglu_out_scale = torch_npu.npu_dequant_swiglu_quant(
        x=hidden_states,
        weight_scale=w1_scale,
        activation_scale=pertoken_scale,
        bias=None,
        quant_scale=None,
        quant_offset=None,
        group_index=group_list,
        activate_left=True,
        quant_mode=1,
    )

    # gmm2: down_proj
    hidden_states = torch_npu.npu_grouped_matmul(
        x=[hidden_states],
        weight=[w2],
        scale=[w2_scale],
        per_token_scale=[swiglu_out_scale],
        split_item=2,
        group_list_type=group_list_type,
        group_type=0,
        group_list=group_list,
        output_dtype=w2_scale.dtype)[0]
    return hidden_states

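For intuition, the `npu_dynamic_quant` call above performs symmetric per-token int8 quantization, returning the quantized values plus a per-token scale. A minimal pure-Python sketch of the idea (an illustration, not the NPU kernel):

```python
def dynamic_quant_row(row):
    # Symmetric per-token int8 quantization: pick a scale so the row's
    # largest magnitude maps to 127, then round each value to an integer.
    scale = max(abs(v) for v in row) / 127.0
    quantized = [round(v / scale) for v in row]
    return quantized, scale


def dequant_row(quantized, scale):
    # Multiply back by the per-token scale to approximate the input.
    return [q * scale for q in quantized]
```

Because the scale is computed per token (per row), outlier tokens do not degrade the precision of other tokens, which is what makes dynamic w8a8 practical for activations.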
def apply_mlp(hidden_states: torch.Tensor,
              w1: torch.Tensor,
              w1_scale: torch.Tensor,
              w2: torch.Tensor,
              w2_scale: torch.Tensor,
              group_list: torch.Tensor,
              dynamic_scale: torch.Tensor = None,
              group_list_type: int = 1,
              w1_scale_bias: torch.Tensor = None,
              w2_scale_bias: torch.Tensor = None) -> torch.Tensor:
    """
    apply MLP: gate_up_proj -> swiglu -> down_proj

    Args:
        hidden_states: input hidden states with shape (num_tokens, hidden_size).
        w1: expert weights1 with shape
            (num_experts, hidden_size, intermediate_size * 2)
        w1_scale: weights1 scale with shape (num_experts, intermediate_size * 2)
        w2: expert weights2 with shape
            (num_experts, intermediate_size, hidden_size)
        w2_scale: weights2 scale with shape (num_experts, hidden_size)
        group_list: number of tokens for each expert, follows cumsum mode, with
            shape (num_experts).
        transpose_weight:
            w1: (num_experts, intermediate_size * 2, hidden_size) ->
                (num_experts, hidden_size, intermediate_size * 2)
            w2: (num_experts, hidden_size, intermediate_size) ->
                (num_experts, intermediate_size, hidden_size)

    Returns:
        hidden_states: output hidden states after MLP.
    """

    if dynamic_scale is None:
        unquantized_hidden_states = hidden_states
        hidden_states, pertoken_scale = torch_npu.npu_dynamic_quant(
            hidden_states)
        # Dispose the original unquantized hidden states
        # to save npu memory because they're no longer used.
        dispose_tensor(unquantized_hidden_states)
    else:
        pertoken_scale = dynamic_scale

    bias1, bias2 = None, None
    _output_dtype = w2_scale.dtype

    if w1_scale_bias is not None:
        if group_list_type == 0:
            group_list = torch.cat(
                [group_list[:1], torch.diff(group_list, dim=0)])
            group_list_type = 1
        bias1 = [w1_scale_bias]
        bias2 = [w2_scale_bias]
        # TODO w4a8 scene: dynamic acquisition of dtype in the future
        _output_dtype = torch.bfloat16

    # gmm1: gate_up_proj
    hidden_states = torch_npu.npu_grouped_matmul(
        x=[hidden_states],
        weight=[w1],
        scale=[w1_scale],
        bias=bias1,
        per_token_scale=[pertoken_scale],
        split_item=2,
        group_list_type=group_list_type,
        group_type=0,
        group_list=group_list,
        output_dtype=_output_dtype)[0]

    # act_fn: swiglu
    hidden_states = torch_npu.npu_swiglu(hidden_states)
    hidden_states, swiglu_out_scale = torch_npu.npu_dynamic_quant(
        hidden_states)

    # gmm2: down_proj
    hidden_states = torch_npu.npu_grouped_matmul(
        x=[hidden_states],
        weight=[w2],
        scale=[w2_scale],
        bias=bias2,
        per_token_scale=[swiglu_out_scale],
        split_item=2,
        group_list_type=group_list_type,
        group_type=0,
        group_list=group_list,
        output_dtype=_output_dtype)[0]

    return hidden_states

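When `w1_scale_bias` is set and `group_list_type == 0`, `apply_mlp` converts a cumulative `group_list` into per-expert token counts via `torch.cat([group_list[:1], torch.diff(group_list, dim=0)])`. In plain Python the same conversion is:

```python
def cumsum_to_counts(cumulative):
    # The first expert keeps its cumulative value; every later expert
    # gets the difference from its predecessor.
    return [cumulative[0]] + [
        b - a for a, b in zip(cumulative, cumulative[1:])
    ]
```

For example, a cumulative list `[3, 5, 9]` (experts 0..2 own tokens up to offsets 3, 5, 9) becomes per-expert counts `[3, 2, 4]`.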
def fused_experts_with_mc2(
    hidden_states: torch.Tensor,
    w1: torch.Tensor,
    w2: torch.Tensor,
    w1_scale: torch.Tensor,
    w2_scale: torch.Tensor,
    topk_weights: torch.Tensor,
    topk_ids: torch.Tensor,
    top_k: int,
    expert_map: torch.Tensor = None,
    moe_all_to_all_group_name: str = "",
    log2phy: torch.Tensor = None,
    global_redundant_expert_num: int = 0,
    shared_experts: Optional[Any] = None,
    is_torchair: bool = False,
    quantized_x_for_share: Optional[Any] = None,
    dynamic_scale_for_share: Optional[Any] = None,
    mc2_mask: Optional[torch.Tensor] = None,
    shared_gate_up: Optional[Any] = None,
    shared_dequant_scale: Optional[Any] = None,
    w1_scale_bias: torch.Tensor = None,
    w2_scale_bias: torch.Tensor = None,
) -> Union[torch.Tensor, Tuple[torch.Tensor, torch.Tensor]]:
    assert mc2_mask is not None
static EPLB fix bug, add unit test (#1186)
### What this PR does / why we need it?
1. Add a static EPLB unit test.
2. Fix a bug: a Tensor cannot be used directly as an `if` condition.
### Does this PR introduce _any_ user-facing change?
### How was this patch tested?
Run the unit test.
---------
Signed-off-by: songshanhu07 <1763685535@qq.com>
2025-06-18 19:46:56 +08:00
```python
if log2phy is not None:
    topk_ids = log2phy[topk_ids]
quant_mode = 2
ep_group = get_mc2_group()
ep_rank_id = ep_group.rank_in_group
ep_world_size = ep_group.world_size

# NOTE: Currently, when in A3 or in torchair graph, we need to pass in some extra param into dispatch & combine
need_extra_args = (get_ascend_soc_version() == AscendSocVersion.A3
                   or is_torchair)

# NOTE: Currently, when in A3, we need to pass in some extra param into dispatch & combine
a3_need_extra_args = get_ascend_soc_version() == AscendSocVersion.A3

enable_dispatch_v2 = hasattr(torch_npu, "npu_moe_distribute_dispatch_v2")

if expert_map is not None:
    moe_expert_num = len(expert_map) + global_redundant_expert_num
else:
    moe_expert_num = global_redundant_expert_num
kwargs_mc2 = {
    "x": hidden_states,
    "expert_ids": topk_ids,
    "expert_shard_type": 0,
    "shared_expert_rank_num": 0,
    "moe_expert_num": moe_expert_num,
    "global_bs": 0,
}

stage1_kwargs = {
    "scales": None,
    "quant_mode": quant_mode,
    "group_ep": moe_all_to_all_group_name,
    "ep_world_size": ep_world_size,
    "ep_rank_id": ep_rank_id,
}
if need_extra_args:
    stage1_kwargs.update({
        "group_tp": moe_all_to_all_group_name,
        "tp_world_size": 1,
        "tp_rank_id": 0,
    })
if a3_need_extra_args and enable_dispatch_v2:
    stage1_kwargs.update({
        "x_active_mask": mc2_mask,
    })
kwargs_mc2.update(stage1_kwargs)

output = torch_npu.npu_moe_distribute_dispatch_v2(
    **kwargs_mc2
) if enable_dispatch_v2 else torch_npu.npu_moe_distribute_dispatch(
    **kwargs_mc2)
expand_x, dynamic_scale, assist_info_for_combine, expert_token_nums, ep_recv_counts = output[
    0:5]
```
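The `enable_dispatch_v2` flag above implements a feature-detection fallback: probe the installed `torch_npu` with `hasattr` and pick the v2 entry point only when it exists. A minimal sketch of the same pattern, with a hypothetical `fake_npu` module stand-in (all names here are illustrative, not real `torch_npu` APIs):

```python
from types import SimpleNamespace

# Hypothetical module stand-in: only the v1 entry point exists here.
fake_npu = SimpleNamespace(
    moe_dispatch=lambda **kw: ("v1", kw),
)

# Feature detection: use v2 only if the installed module provides it.
enable_dispatch_v2 = hasattr(fake_npu, "moe_dispatch_v2")

kwargs = {"x": [1.0, 2.0], "expert_ids": [0, 1]}

# Same select-then-call shape as the snippet above.
output = (fake_npu.moe_dispatch_v2(**kwargs)
          if enable_dispatch_v2 else fake_npu.moe_dispatch(**kwargs))

print(output[0])  # v1
```

The same dict of kwargs is passed to either entry point, so only the selection expression changes between torch_npu versions.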
Support multistream of shared experts in FusedMoE (#997)
Contains #1111 for completeness.
### What this PR does / why we need it?
Implement multi-stream parallelism for MoE layers with shared experts,
so that the shared experts' computation overlaps with expert token
dispatch and combine. Also, when multi-stream is enabled, the weights of
the shared experts are forced to be replicated across all cards,
regardless of any tensor parallelism configuration, to avoid AllReduce
operations.
The expected overlapping is:
```
| shared gate_up | shared act                | | shared down |
| dispatch       | routed gate_up, act, down | | combine     |
```
### Does this PR introduce _any_ user-facing change?
No.
### How was this patch tested?
Tested on a 1x16 910 node, with a tailored 2-layer DSKv2.
---------
Signed-off-by: sdmyzlp <lrwei2@petalmail.com>
2025-06-11 09:18:38 +08:00
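The overlap schedule above runs the shared experts on a secondary NPU stream while the main stream performs dispatch and combine. As a rough CPU analogy only (threads standing in for streams; timings and names are illustrative), the intended interleaving can be sketched as:

```python
import threading
import time

events = []


def routed_path():
    # Stands in for the main stream: dispatch -> routed experts -> combine.
    events.append("dispatch")
    time.sleep(0.05)  # routed gate_up, act, down
    events.append("combine")


def shared_path():
    # Stands in for the secondary stream running the shared experts.
    time.sleep(0.01)
    events.append("shared_act")


worker = threading.Thread(target=shared_path)
worker.start()   # "secondary stream" launched
routed_path()    # "main stream" proceeds concurrently
worker.join()

# Typically ['dispatch', 'shared_act', 'combine']: the shared experts'
# work lands inside the dispatch/combine window, i.e. the paths overlap.
print(events)
```

On the NPU the synchronization is done with `npu_wait_tensor` instead of a join, so the secondary stream waits only on the specific tensors it consumes.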
```python
if shared_experts is not None:
    with npu_stream_switch("moe_secondary", 0):
        npu_wait_tensor(shared_gate_up, expand_x)
        shared_act_out = shared_experts.act_fn(
            (shared_gate_up, shared_dequant_scale))
        shared_act, swiglu_out_scale = shared_act_out[0], shared_act_out[1]

# `expand_x` will be disposed in the `apply_mlp` function
if w1_scale_bias is None:
    down_out_list = apply_mlp_decode(expand_x,
                                     w1,
                                     w1_scale,
                                     w2,
                                     w2_scale,
                                     expert_token_nums,
                                     dynamic_scale=dynamic_scale)
else:
    # w4a8 scene, cannot use apply_mlp_decode because the operator is not supported
    down_out_list = apply_mlp(expand_x,
                              w1,
                              w1_scale,
                              w2,
                              w2_scale,
                              expert_token_nums,
                              dynamic_scale=dynamic_scale,
                              w1_scale_bias=w1_scale_bias,
                              w2_scale_bias=w2_scale_bias)

# moeCombine
kwargs_mc2 = {
    "expand_x": down_out_list,
    "expert_ids": topk_ids,
    "expert_scales": topk_weights.to(torch.float32),
    "expert_shard_type": 0,
    "shared_expert_rank_num": 0,
    "moe_expert_num": moe_expert_num,
    "global_bs": 0,
}
tp_recv_counts = torch.empty(1,
                             dtype=torch.int32,
                             device=hidden_states.device)
stage3_kwargs = {
    "ep_send_counts": ep_recv_counts,
    "group_ep": moe_all_to_all_group_name,
    "ep_world_size": ep_world_size,
    "ep_rank_id": ep_rank_id,
}
if enable_dispatch_v2:
    stage3_kwargs.update({
        "assist_info_for_combine": assist_info_for_combine,
    })
else:
    stage3_kwargs.update({
        "expand_idx": assist_info_for_combine,
    })
if need_extra_args:
    stage3_kwargs.update({
        "tp_send_counts": tp_recv_counts,
        "group_tp": moe_all_to_all_group_name,
        "tp_world_size": 1,
        "tp_rank_id": 0,
    })
if a3_need_extra_args and enable_dispatch_v2:
    stage3_kwargs.update({
        "x_active_mask": mc2_mask,
    })
kwargs_mc2.update(stage3_kwargs)

hidden_states = torch_npu.npu_moe_distribute_combine_v2(
    **kwargs_mc2
) if enable_dispatch_v2 else torch_npu.npu_moe_distribute_combine(
    **kwargs_mc2)

if shared_experts is None:
    return hidden_states
else:
    with npu_stream_switch("moe_secondary", 0):
        npu_wait_tensor(shared_act, down_out_list)
        shared_output, _ = shared_experts.down_proj(
            (shared_act, swiglu_out_scale))
    return hidden_states, shared_output
```
```python
def init_routing_quant(hidden_states, top_k, topk_ids, global_num_experts):
    num_tokens, _ = hidden_states.shape
    row_idx_len = num_tokens * top_k
    row_idx = (torch.arange(0,
                            row_idx_len,
                            dtype=torch.int32,
                            device=hidden_states.device).view(
                                top_k, -1).permute(1, 0).contiguous())
    hidden_states, expanded_row_idx, expanded_expert_idx = torch_npu.npu_moe_init_routing(
        hidden_states,
        row_idx=row_idx,
        expert_idx=topk_ids,
        active_num=num_tokens)

    expanded_row_idx = (expanded_row_idx.view(top_k, -1).permute(
        1, 0).contiguous().view(-1))
    global_expert_tokens = torch.bincount(expanded_expert_idx,
                                          minlength=global_num_experts)
    global_expert_tokens = global_expert_tokens.to(torch.int32)
    quantized_tokens, token_scales = torch_npu.npu_dynamic_quant(hidden_states)
    return quantized_tokens, expanded_row_idx, global_expert_tokens, token_scales
```
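The `bincount` in `init_routing_quant` turns the flattened `(token, expert)` assignments into a per-expert token count, which later drives the all-to-all size exchange. A pure-Python sketch of that counting step (small illustrative values, not real routing data):

```python
# Every (token, expert) pair contributes one entry to its expert's bucket.
top_k = 2
topk_ids = [[0, 2], [1, 2], [0, 0]]   # 3 tokens, top-2 experts each
global_num_experts = 4

flat_ids = [e for row in topk_ids for e in row]

# Equivalent of torch.bincount(flat_ids, minlength=global_num_experts).
global_expert_tokens = [0] * global_num_experts
for e in flat_ids:
    global_expert_tokens[e] += 1

print(global_expert_tokens)  # [3, 1, 2, 0]
```

The total across buckets is always `num_tokens * top_k`, since each token contributes exactly `top_k` pairs.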
```python
# currently expert parallelism implemented with all2all
# is under-optimized.
def fused_experts_with_all2all(
    hidden_states: torch.Tensor,
    w1: torch.Tensor,
    w1_scale: torch.Tensor,
    w2: torch.Tensor,
    w2_scale: torch.Tensor,
    topk_weights: torch.Tensor,
    topk_ids: torch.Tensor,
    top_k: int,
    expert_map: torch.Tensor = None,
    ep_group: GroupCoordinator = None,
    log2phy: torch.Tensor = None,
    global_redundant_expert_num: int = 0,
    w1_scale_bias: torch.Tensor = None,
    w2_scale_bias: torch.Tensor = None,
):
    if log2phy is not None:
        topk_ids = log2phy[topk_ids]
    original_shape = hidden_states.shape
    if len(original_shape) == 3:
        hidden_states = hidden_states.view(-1, hidden_states.shape[-1])

    num_tokens, _ = hidden_states.shape
    num_experts = w1.shape[0]

    if expert_map is not None:
        global_num_experts = len(expert_map) + global_redundant_expert_num
        if hasattr(torch_npu, "npu_moe_init_routing_quant"):
            quantized_tokens, expanded_row_idx, global_expert_tokens, _, token_scales = torch_npu.npu_moe_init_routing_quant(
                hidden_states,
                expert_idx=topk_ids.to(torch.int32),
                active_num=0,
                expert_capacity=0,
                expert_num=global_num_experts,
                drop_pad_mode=0,
                expert_tokens_num_mode=2,
                expert_tokens_before_capacity_flag=False,
                quant_mode=1,
            )
        else:
            quantized_tokens, expanded_row_idx, global_expert_tokens, token_scales = init_routing_quant(
                hidden_states, top_k, topk_ids, global_num_experts)

        gather_sizes = global_expert_tokens.new_empty(
            global_expert_tokens.shape[0])
        dist.all_to_all_single(gather_sizes, global_expert_tokens)

        token_counts_combined = torch.stack(
            [gather_sizes, global_expert_tokens], dim=0)
        token_counts_combined = token_counts_combined.view(
            2, ep_group.world_size, -1).sum(dim=2)
        token_counts_combined_cpu = token_counts_combined.to(
            torch.device("cpu"), non_blocking=True).numpy()
        all_tokens = gather_sizes.sum()

        gathered_tokens = quantized_tokens.new_empty(all_tokens.item(),
                                                     quantized_tokens.shape[1])
        dynamic_scale = token_scales.new_empty(gathered_tokens.shape[0])
        gather_size_list = token_counts_combined_cpu[1]
        scatter_size_list = token_counts_combined_cpu[0]

        dist.all_to_all_single(gathered_tokens, quantized_tokens,
                               scatter_size_list, gather_size_list)
        dist.all_to_all_single(dynamic_scale, token_scales, scatter_size_list,
                               gather_size_list)

        hidden_states, dynamic_scale, inverse_indices, expert_tokens = torch_npu.npu_moe_re_routing(
            gathered_tokens,
            gather_sizes.view(ep_group.world_size, -1),
            per_token_scales=dynamic_scale)
        expert_tokens = expert_tokens.to(torch.int64)
        group_list_type = 1
    else:
        row_idx_len = num_tokens * top_k
        row_idx = torch.arange(0,
                               row_idx_len,
                               dtype=torch.int32,
                               device=topk_weights.device).view(
                                   top_k, -1).permute(1, 0).contiguous()
        hidden_states, expanded_row_idx, expanded_expert_idx = torch_npu.npu_moe_init_routing(
            hidden_states,
            row_idx=row_idx,
            expert_idx=topk_ids,
            active_num=num_tokens)

        expert_tokens = torch_npu.npu_moe_compute_expert_tokens(
            expanded_expert_idx, num_experts)
        expert_tokens = expert_tokens.to(torch.int64)
        group_list_type = 0
        dynamic_scale = None

    # `hidden_states` will be disposed in the `apply_mlp` function
    hidden_states = apply_mlp(
        hidden_states,
        w1,
        w1_scale,
        w2,
        w2_scale,
        expert_tokens,
        dynamic_scale=dynamic_scale,
        group_list_type=group_list_type,
        w1_scale_bias=w1_scale_bias,
        w2_scale_bias=w2_scale_bias)

    if expert_map is not None:
        reordered_outputs = torch.index_select(
            hidden_states,
            dim=0,
            # Workaround: Convert to float so that argsort runs on AI Core instead of slower AICPU
            index=inverse_indices.to(torch.float32).argsort().to(torch.int32))

        hidden_states = reordered_outputs.new_empty(*quantized_tokens.shape)
        dist.all_to_all_single(hidden_states, reordered_outputs,
                               gather_size_list, scatter_size_list)

        final_hidden_states = torch_npu.npu_moe_finalize_routing(
            hidden_states,
            skip1=None,
            skip2=None,
            bias=None,
            scales=topk_weights,
            expanded_src_to_dst_row=expanded_row_idx,
            export_for_source_row=None,
            drop_pad_mode=2)
    else:
        # TODO: Reorder device memory 2 times here, replace the current
        # implementation here when suitable operators become available.
        final_hidden_states = torch_npu.npu_moe_finalize_routing(
            hidden_states,
            skip1=None,
            skip2=None,
            bias=None,
            scales=topk_weights,
            expanded_src_to_dst_row=expanded_row_idx,
            export_for_source_row=topk_ids,
        )

    if len(original_shape) == 3:
        final_hidden_states = final_hidden_states.view(original_shape)

    return final_hidden_states
```
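Before exchanging tokens, the function above first all-to-alls the per-expert counts so each rank knows its send and receive sizes (`scatter_size_list` / `gather_size_list`). A pure-Python simulation of that size exchange for two ranks (illustrative numbers):

```python
# send_counts[r][p]: how many tokens rank r sends to rank p.
send_counts = [
    [2, 1],   # rank 0 sends 2 tokens to rank 0, 1 token to rank 1
    [0, 3],   # rank 1 sends 0 tokens to rank 0, 3 tokens to rank 1
]

world_size = len(send_counts)

# The all-to-all of counts is just a transpose: what every peer sends to
# rank d is what rank d will receive.
recv_counts = [[send_counts[src][dst] for src in range(world_size)]
               for dst in range(world_size)]

print(recv_counts)  # [[2, 0], [1, 3]]
```

With these sizes, each rank can pre-allocate its receive buffer (as `gathered_tokens` is above) before the real token all-to-all runs.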
```python
def fused_experts_with_allgather(hidden_states: torch.Tensor,
                                 w1: torch.Tensor,
                                 w1_scale: torch.Tensor,
                                 w2: torch.Tensor,
                                 w2_scale: torch.Tensor,
                                 topk_weights: torch.Tensor,
                                 topk_ids: torch.Tensor,
                                 top_k: int,
                                 expert_map: torch.Tensor = None):
    original_shape = hidden_states.shape
    if len(original_shape) == 3:
        hidden_states = hidden_states.view(-1, hidden_states.shape[-1])
    num_tokens = hidden_states.shape[0]
    batch_size, hidden_size = hidden_states.shape
    topk_weights = topk_weights.to(hidden_states.dtype)

    ep_group = get_ep_group().device_group
    ep_rank = torch.distributed.get_rank(group=ep_group)
    ep_size = torch.distributed.get_world_size(ep_group)

    global_num_experts = len(expert_map)
    local_num_experts = global_num_experts // ep_size

    hidden_states, pertoken_scale = torch_npu.npu_dynamic_quant(hidden_states)

    hidden_states, expanded_x_idx, expert_tokens, pertoken_scale = torch_npu.npu_moe_init_routing_v2(
        hidden_states,
        topk_ids,
        scale=pertoken_scale,
        offset=None,
        active_num=num_tokens * top_k,
        expert_num=global_num_experts,
        expert_tokens_num_type=1,
        expert_tokens_num_flag=True,
        active_expert_range=[
            ep_rank * local_num_experts, (ep_rank + 1) * local_num_experts
        ],
        quant_mode=-1,
        row_idx_type=1)
    group_list_type = 1

    sorted_topk_weight = torch.index_select(topk_weights.view(-1), 0,
                                            expanded_x_idx)
    row_index = expanded_x_idx // topk_ids.shape[-1]
    row_index = row_index.to(torch.int64)
    share_input = torch.zeros((batch_size, hidden_size),
                              dtype=torch.bfloat16,
                              device="npu")

    hidden_states = torch_npu.npu_grouped_matmul(
        x=[hidden_states],
        weight=[w1],
        split_item=3,
        group_list_type=group_list_type,
        group_type=0,
        group_list=expert_tokens,
        output_dtype=torch.int32)[0]

    # act_fn: swiglu
    hidden_states, pertoken_scale = torch_npu.npu_dequant_swiglu_quant(
        x=hidden_states,
        weight_scale=w1_scale.to(torch.float32),
        activation_scale=pertoken_scale,
        bias=None,
        quant_scale=None,
        quant_offset=None,
        group_index=expert_tokens,
        activate_left=True,
        quant_mode=1,
    )

    final_hidden_states = torch_npu.npu_grouped_matmul_finalize_routing(
        hidden_states,
        w2,
        scale=w2_scale.to(torch.float32),
        bias=None,
        pertoken_scale=pertoken_scale.view(-1),
        group_list=expert_tokens,
        shared_input=share_input,
        logit=sorted_topk_weight.to(torch.float32),
        row_index=row_index,
        output_bs=batch_size).to(torch.bfloat16)

    if len(original_shape) == 3:
        final_hidden_states = final_hidden_states.view(original_shape)

    return final_hidden_states
```
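The `active_expert_range` argument above restricts routing to the contiguous slice of experts this EP rank owns: only `(token, expert)` pairs whose expert id falls in the half-open range `[lo, hi)` are kept locally. A pure-Python sketch of that selection (illustrative values):

```python
# Each EP rank owns a contiguous slice of the global expert table.
global_num_experts = 8
ep_size = 2
ep_rank = 1
local_num_experts = global_num_experts // ep_size
lo, hi = ep_rank * local_num_experts, (ep_rank + 1) * local_num_experts

topk_ids = [[1, 5], [4, 7], [0, 3]]   # 3 tokens, top-2 experts each

# Keep only pairs routed to this rank's experts.
local_pairs = [(tok, e) for tok, row in enumerate(topk_ids)
               for e in row if lo <= e < hi]

print((lo, hi), local_pairs)  # (4, 8) [(0, 5), (1, 4), (1, 7)]
```

Expert ids outside the range are handled by the other ranks, so after the allgather every pair is processed exactly once across the group.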
[quantization] Support w8a8 quantization (#580)
### What this PR does / why we need it?
Add a `VLLMAscendQuantizer` to support w8a8 static (W8A8) and dynamic
(W8A8_DYNAMIC) quantization on linear and MoE layers. The quantizer is
enabled when a model has a [quantize
field](https://huggingface.co/vllm-ascend/Qwen2.5-0.5B-Instruct-w8a8/blob/main/config.json#L27)
in its config. If MindIE Turbo is installed, the MindIE Turbo quantizer
is applied; otherwise `VLLMAscendQuantizer` is used directly.
- Fix the installation docs so that installation works
- Enable norm quantization by patching `RMSNorm.__init__`,
`RMSNorm.forward_oot`, and `NPUModelRunnerBase.load_model`
- Add `AscendW8A8LinearMethod` for W8A8
- Add `AscendW8A8DynamicLinearMethod` and
`AscendW8A8DynamicFusedMoEMethod` for W8A8_DYNAMIC
- Add an e2e test for `vllm-ascend/Qwen2.5-0.5B-Instruct-w8a8`
### Does this PR introduce _any_ user-facing change?
Yes, w8a8 quantization is now supported. After this patch, users can run
w8a8 models with:
```
vllm serve /root/.cache/modelscope/hub/Qwen/Qwen2.5-7B-Instruct-w8a8 --served-model-name "qwen2.5-7B"
```
### How was this patch tested?
0. CI passed: added an e2e test for `vllm-ascend/Qwen2.5-0.5B-Instruct-w8a8`.
1. From @Yikun: functional test of Qwen2.5-0.5B-Instruct-w8a8, all is
well; see
https://github.com/vllm-project/vllm-ascend/pull/580#issuecomment-2816747613
2. From @dingdingchaomian: tested with qwen2.5-72b-instruct and
deepseek-v2-lite-chat, both quantized using Ascend's msmodelslim tool:
- Qwen2.5-72b-instruct was tested twice, once with w8a8 static and once
with w8a8 dynamic.
- Deepseek-v2-lite-chat was tested once, since its quantization uses
both static and dynamic w8a8.
Both models were tested with offline inference and online serving, and
both work well. The inference code is exactly the same as the examples
in https://vllm-ascend.readthedocs.io/en/latest/quick_start.html, with
only the model path and tensor parallel size changed.
---------
Signed-off-by: dingdingchaomian <wangce21@huawei.com>
Signed-off-by: Yikun Jiang <yikunkero@gmail.com>
Co-authored-by: dingdingchaomian <wangce21@huawei.com>
Co-authored-by: Angazenn <zengyanjia@huawei.com>
Co-authored-by: liujiaxu <liujiaxu4@huawei.com>
Co-authored-by: ApsarasX <apsarax@outlook.com>
Co-authored-by: ganyi1996ppo <pleaplusone.gy@gmail.com>
2025-04-20 18:14:05 +08:00
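At its core, w8a8 stores weights and activations as int8 plus a scale. A minimal symmetric per-tensor round trip sketches the arithmetic (this is an illustration only, not the msmodelslim recipe or the Ascend kernel path):

```python
# Symmetric per-tensor int8: scale = max|x| / 127, clamp to [-128, 127].
def quantize(xs):
    scale = max(abs(v) for v in xs) / 127.0
    q = [max(-128, min(127, round(v / scale))) for v in xs]
    return q, scale


def dequantize(q, scale):
    return [v * scale for v in q]


weights = [0.5, -1.27, 0.02]
q, scale = quantize(weights)
restored = dequantize(q, scale)

print(q)  # [50, -127, 2]
# Round-trip error is bounded by the quantization step (the scale).
print(max(abs(a - b) for a, b in zip(weights, restored)) < scale)  # True
```

"Static" W8A8 precomputes the activation scale offline during calibration; "dynamic" W8A8 computes a per-token activation scale at runtime (as `npu_dynamic_quant` does in the kernels below), trading a little compute for robustness to activation outliers.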
|
|
|
def fused_experts(hidden_states: torch.Tensor,
|
|
|
|
|
w1: torch.Tensor,
|
|
|
|
|
w1_scale: torch.Tensor,
|
|
|
|
|
w2: torch.Tensor,
|
|
|
|
|
w2_scale: torch.Tensor,
|
|
|
|
|
topk_weights: torch.Tensor,
|
|
|
|
|
topk_ids: torch.Tensor,
|
|
|
|
|
top_k: int,
|
|
|
|
|
expert_map: torch.Tensor = None):
|
|
|
|
|
original_shape = hidden_states.shape
|
|
|
|
|
if len(original_shape) == 3:
|
|
|
|
|
hidden_states = hidden_states.view(-1, hidden_states.shape[-1])
|
|
|
|
|
|
|
|
|
|
num_tokens, _ = hidden_states.shape
|
|
|
|
|
num_experts = w1.shape[0]
|
|
|
|
|
dtype = hidden_states.dtype
|
|
|
|
|
device = hidden_states.device
|
|
|
|
|
|
|
|
|
|
if expert_map is not None:
|
|
|
|
|
# Generate token indices and flatten
|
|
|
|
|
token_indices = (torch.arange(num_tokens,
|
|
|
|
|
device=device,
|
|
|
|
|
dtype=torch.int64).unsqueeze(1).expand(
|
|
|
|
|
-1, top_k).reshape(-1))
|
|
|
|
|
|
|
|
|
|
# Flatten token-to-expert mappings and map to local experts
|
|
|
|
|
weights_flat = topk_weights.view(-1)
|
|
|
|
|
experts_flat = topk_ids.view(-1)
|
|
|
|
|
local_experts_flat = expert_map[experts_flat]
|
|
|
|
|
|
|
|
|
|
# Filter valid token-expert pairs
|
|
|
|
|
mask = local_experts_flat != -1
|
|
|
|
|
filtered_weights = torch.where(
|
|
|
|
|
mask, weights_flat, torch.zeros_like(weights_flat)).to(dtype)
|
|
|
|
|
filtered_experts = torch.where(
|
|
|
|
|
mask, local_experts_flat,
|
|
|
|
|
torch.full_like(local_experts_flat,
|
|
|
|
|
num_experts)).to(topk_ids.dtype)
|
|
|
|
|
|
|
|
|
|
# Sort by local expert IDs
|
|
|
|
|
sort_indices = torch.argsort(filtered_experts)
|
|
|
|
|
sorted_token_indices = token_indices[sort_indices]
|
|
|
|
|
sorted_weights = filtered_weights[sort_indices]
|
|
|
|
|
|
|
|
|
|
# Compute token counts with minlength of num_experts
|
|
|
|
|
# This is equivalent to but faster than:
|
|
|
|
|
# >>> token_counts = torch.bincount(filtered_experts, minlength=num_experts)[:-1]
|
|
|
|
|
token_counts = torch.zeros(num_experts + 1,
|
|
|
|
|
device=device,
|
|
|
|
|
dtype=torch.int64)
|
|
|
|
|
ones = torch.ones_like(filtered_experts, dtype=torch.int64)
|
|
|
|
|
token_counts.scatter_add_(0, filtered_experts.to(torch.int64), ones)
|
2025-04-23 16:23:25 +08:00
|
|
|
expert_tokens = token_counts[:num_experts]
|
[quantization] Support w8a8 quantization (#580)
### What this PR does / why we need it?
Add a `VLLMAscendQuantizer` to support w8a8 static (W8A8) and dynamic on
linear and moe (W8A8_DYNAMIC), the quantizer will be enable if a model
has [quantize
filed](https://huggingface.co/vllm-ascend/Qwen2.5-0.5B-Instruct-w8a8/blob/main/config.json#L27).
If MindIE Turbo is installed, the MindIE Turbo Quantizer will apply,
otherwise will use VLLMAscendQuantizer directly.
- This patch fix installation docs to make installation work
- This patch enable norm quantization by patch `RMSNorm.__init__`,
`RMSNorm.forward_oot`, `NPUModelRunnerBase.load_model`
- Add `AscendW8A8LinearMethod` for W8A8
- Add `AscendW8A8DynamicLinearMethod` and
`AscendW8A8DynamicFusedMoEMethod` for W8A8_DYNAMIC
- Add a e2e test for `vllm-ascend/Qwen2.5-0.5B-Instruct-w8a8`
### Does this PR introduce _any_ user-facing change?
Yes, support w8a8 quantization. After this patch supported, users can
use below commands to run w8a8 models:
```
vllm serve /root/.cache/modelscope/hub/Qwen/Qwen2.5-7B-Instruct-w8a8 --served-model-name "qwen2.5-7B"
```
### How was this patch tested?
0. CI passed: add e2e test for `vllm-ascend/Qwen2.5-0.5B-Instruct-w8a8`
1. From @Yikun:
I test Qwen2.5-0.5B-Instruct-w8a8 for functional test all is well, pls
refer to
https://github.com/vllm-project/vllm-ascend/pull/580#issuecomment-2816747613
2. From @dingdingchaomian :
Use qwen2.5-72b-instruct model and deepseek-v2-lite-chat tested, both
models were quantized using Ascend's msmodelslim tool:
- Qwen2.5-72b-instruct were tested twice, one for w8a8 static and one
for w8a8 dynamic.
- Deepseek-v2-lite-chat were tested once because its quantization used
both static and dynamic w8a8.
Models were tested using both off line inference and online serving, and
both work well. The inference codes are exactly the same with the
examples in
https://vllm-ascend.readthedocs.io/en/latest/quick_start.html, with
model path and tensor parallel number changed.
---------
Signed-off-by: dingdingchaomian <wangce21@huawei.com>
Signed-off-by: Yikun Jiang <yikunkero@gmail.com>
Co-authored-by: dingdingchaomian <wangce21@huawei.com>
Co-authored-by: Angazenn <zengyanjia@huawei.com>
Co-authored-by: liujiaxu <liujiaxu4@huawei.com>
Co-authored-by: ApsarasX <apsarax@outlook.com>
Co-authored-by: ganyi1996ppo <pleaplusone.gy@gmail.com>
2025-04-20 18:14:05 +08:00
|
|
|
# Rearrange hidden_states
|
2025-05-09 15:09:37 +08:00
|
|
|
hidden_states = hidden_states[sorted_token_indices]
|
2025-04-23 16:23:25 +08:00
|
|
|
group_list_type = 1
|
[quantization] Support w8a8 quantization (#580)
### What this PR does / why we need it?
Add a `VLLMAscendQuantizer` to support w8a8 static (W8A8) and dynamic on
linear and moe (W8A8_DYNAMIC), the quantizer will be enable if a model
has [quantize
filed](https://huggingface.co/vllm-ascend/Qwen2.5-0.5B-Instruct-w8a8/blob/main/config.json#L27).
If MindIE Turbo is installed, the MindIE Turbo Quantizer will apply,
otherwise will use VLLMAscendQuantizer directly.
- This patch fix installation docs to make installation work
- This patch enable norm quantization by patch `RMSNorm.__init__`,
`RMSNorm.forward_oot`, `NPUModelRunnerBase.load_model`
- Add `AscendW8A8LinearMethod` for W8A8
- Add `AscendW8A8DynamicLinearMethod` and
`AscendW8A8DynamicFusedMoEMethod` for W8A8_DYNAMIC
- Add a e2e test for `vllm-ascend/Qwen2.5-0.5B-Instruct-w8a8`
### Does this PR introduce _any_ user-facing change?
Yes, support w8a8 quantization. After this patch supported, users can
use below commands to run w8a8 models:
```
vllm serve /root/.cache/modelscope/hub/Qwen/Qwen2.5-7B-Instruct-w8a8 --served-model-name "qwen2.5-7B"
```
### How was this patch tested?
0. CI passed: add e2e test for `vllm-ascend/Qwen2.5-0.5B-Instruct-w8a8`
1. From @Yikun:
I test Qwen2.5-0.5B-Instruct-w8a8 for functional test all is well, pls
refer to
https://github.com/vllm-project/vllm-ascend/pull/580#issuecomment-2816747613
2. From @dingdingchaomian :
Use qwen2.5-72b-instruct model and deepseek-v2-lite-chat tested, both
models were quantized using Ascend's msmodelslim tool:
- Qwen2.5-72b-instruct were tested twice, one for w8a8 static and one
for w8a8 dynamic.
- Deepseek-v2-lite-chat were tested once because its quantization used
both static and dynamic w8a8.
Models were tested using both off line inference and online serving, and
both work well. The inference codes are exactly the same with the
examples in
https://vllm-ascend.readthedocs.io/en/latest/quick_start.html, with
model path and tensor parallel number changed.
---------
Signed-off-by: dingdingchaomian <wangce21@huawei.com>
Signed-off-by: Yikun Jiang <yikunkero@gmail.com>
Co-authored-by: dingdingchaomian <wangce21@huawei.com>
Co-authored-by: Angazenn <zengyanjia@huawei.com>
Co-authored-by: liujiaxu <liujiaxu4@huawei.com>
Co-authored-by: ApsarasX <apsarax@outlook.com>
Co-authored-by: ganyi1996ppo <pleaplusone.gy@gmail.com>
2025-04-20 18:14:05 +08:00
|
|
|
else:
|
|
|
|
|
row_idx_len = num_tokens * top_k
|
|
|
|
|
row_idx = torch.arange(0,
|
|
|
|
|
row_idx_len,
|
|
|
|
|
dtype=torch.int32,
|
|
|
|
|
device=topk_weights.device).view(
|
|
|
|
|
top_k, -1).permute(1, 0).contiguous()
|
2025-05-09 15:09:37 +08:00
|
|
|
hidden_states, expanded_row_idx, expanded_expert_idx = torch_npu.npu_moe_init_routing(
|
            hidden_states,
            row_idx=row_idx,
            expert_idx=topk_ids,
            active_num=num_tokens)

        expert_tokens = torch_npu.npu_moe_compute_expert_tokens(
            expanded_expert_idx, num_experts)
        expert_tokens = expert_tokens.to(torch.int64)
        group_list_type = 0
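The strided `row_idx` built above maps each (token, expert-slot) pair to a routing row. A pure-Python sketch of the same `arange(...).view(top_k, -1).permute(1, 0)` layout (helper name and values are illustrative, no `torch_npu` needed):

```python
def build_row_idx(num_tokens: int, top_k: int) -> list:
    """Sketch of arange(num_tokens * top_k).view(top_k, -1).permute(1, 0)."""
    flat = list(range(num_tokens * top_k))
    # view(top_k, -1): top_k rows of length num_tokens
    rows = [flat[i * num_tokens:(i + 1) * num_tokens] for i in range(top_k)]
    # permute(1, 0): transpose to shape (num_tokens, top_k), so token t gets
    # the strided indices [t, t + num_tokens, t + 2 * num_tokens, ...]
    return [[rows[k][t] for k in range(top_k)] for t in range(num_tokens)]

print(build_row_idx(3, 2))  # [[0, 3], [1, 4], [2, 5]]
```

Each token's column is strided by `num_tokens`, which is the layout `npu_moe_init_routing` expects for its `row_idx` argument.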
    # `hidden_states` will be disposed in the `apply_mlp` function
    hidden_states = apply_mlp(hidden_states,
                              w1,
                              w1_scale,
                              w2,
                              w2_scale,
                              expert_tokens,
                              group_list_type=group_list_type)
    if expert_map is not None:
        hidden_states.mul_(sorted_weights.unsqueeze(1))
        final_hidden_states = torch.zeros(*original_shape,
                                          device=device,
                                          dtype=dtype)

        num_valid_tokens = mask.sum()
        valid_token_mask = torch.arange(
            0, sorted_token_indices.shape[0],
            device=device).unsqueeze(1) < num_valid_tokens
        hidden_states = hidden_states.masked_fill_(~valid_token_mask,
                                                   0).to(dtype)
        final_hidden_states.index_add_(0, sorted_token_indices, hidden_states)
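The masked scatter above zeroes rows beyond `num_valid_tokens` and then accumulates each surviving expanded row back into its token's slot via `index_add_`. A minimal pure-Python sketch with hypothetical values:

```python
def masked_index_add(dest: list, indices: list, src: list,
                     num_valid: int) -> list:
    """Sketch of masked_fill_(~valid_token_mask, 0) + index_add_(0, ...)."""
    out = list(dest)
    for pos, (idx, val) in enumerate(zip(indices, src)):
        if pos < num_valid:  # plays the role of valid_token_mask
            out[idx] += val  # accumulate into the destination token row
    return out

# Two expanded rows map back to token 0, one to token 1; the last is padding.
print(masked_index_add([0.0, 0.0], [0, 1, 0, 1], [1.0, 2.0, 3.0, 9.0], 3))
# [4.0, 2.0]
```

Padding rows contribute nothing because they are masked out before the scatter, exactly as the in-place `masked_fill_` guarantees above.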
    else:
        # TODO: Reorder device memory 2 times here, replace the current
        # implementation here when suitable operators become available.
        final_hidden_states = torch_npu.npu_moe_finalize_routing(
            hidden_states,
            skip1=None,
            skip2=None,
            bias=None,
            scales=topk_weights,
            expanded_src_to_dst_row=expanded_row_idx,
            export_for_source_row=topk_ids,
        )
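Conceptually, the finalize step gathers each token's per-expert outputs back together and weights them by `topk_weights`. A hedged pure-Python sketch of the assumed semantics (index layout follows the strided `row_idx` above; the real operator runs fused on NPU):

```python
def finalize_routing(expanded: list, src_to_dst: list,
                     weights: list, top_k: int) -> list:
    """Assumed semantics of the finalize step: weighted sum over experts."""
    num_tokens = len(weights)
    out = [0.0] * num_tokens
    for t in range(num_tokens):
        for k in range(top_k):
            src = t + k * num_tokens  # expanded row for (token t, slot k)
            out[t] += weights[t][k] * expanded[src_to_dst[src]]
    return out

# One token routed to two experts with weights 0.7 / 0.3.
y = finalize_routing([2.0, 4.0], [0, 1], [[0.7, 0.3]], top_k=2)
```

Values and the `finalize_routing` name are illustrative; only the weighted-sum structure is taken from the call above.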
    if len(original_shape) == 3:
        final_hidden_states = final_hidden_states.view(original_shape)
    return final_hidden_states


class AscendW8A8DynamicLinearMethod:
    """Linear method for Ascend W8A8_DYNAMIC.
    """

    def __init__(self):
        self.transpose_weight = True

    @staticmethod
    def get_weight(input_size: int, output_size: int,
                   params_dtype: torch.dtype) -> Dict[str, Any]:
        params_dict = {
            "weight": torch.empty(output_size, input_size, dtype=torch.int8)
        }
        return params_dict

    @staticmethod
    def get_pertensor_param(params_dtype: torch.dtype) -> Dict[str, Any]:
        return {}

    @staticmethod
    def get_perchannel_param(
        output_size: int,
        params_dtype: torch.dtype,
    ) -> Dict[str, Any]:
        params_dict = {}
        params_dict["weight_scale"] = torch.empty(output_size,
                                                  1,
                                                  dtype=params_dtype)
        params_dict["weight_offset"] = torch.empty(output_size,
                                                   1,
                                                   dtype=params_dtype)
        return params_dict

    def get_pergroup_param(self, input_size: int, output_size: int,
                           params_dtype: torch.dtype) -> Dict[str, Any]:
        return {}
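The per-channel parameters registered above hold one scale (and offset) per output channel of the int8 weight. A hedged sketch of what a symmetric per-channel scale typically encodes, with illustrative values rather than real checkpoint data:

```python
def perchannel_quantize(weight: list):
    """Symmetric int8 per-channel quantization: scale = max(|row|) / 127."""
    scales, q_rows = [], []
    for row in weight:  # one row per output channel
        scale = max(abs(v) for v in row) / 127 or 1.0  # guard all-zero rows
        scales.append(scale)
        q_rows.append([round(v / scale) for v in row])
    return q_rows, scales

q, s = perchannel_quantize([[0.5, -1.27], [2.0, -127.0]])
print(q)  # [[50, -127], [2, -127]]
```

Dequantization multiplies each int8 row back by its channel scale, which is what the `weight_scale` tensor is consumed for at matmul time.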
    @staticmethod
    def apply(
        layer: torch.nn.Module,
Support multistream of shared experts in FusedMoE (#997)
Contains on #1111 for completeness.
<!-- Thanks for sending a pull request!
BEFORE SUBMITTING, PLEASE READ
https://docs.vllm.ai/en/latest/contributing/overview.html
-->
### What this PR does / why we need it?
Implement multi-stream parallelism for MoE layers with shared experts,
where computation of shared experts will be overlapped with expert token
dispatch and combine. Also, when multi-stream is enabled, weights of
shared experts will be force to replicate across all cards, regardless
of any tensor parallelism configurations, to avoid AllReduce operations.
With the expected overlaping being:
```
| shared gate_up | shared act | | shared down |
| dispatch | routed gate_up, act, down | combine |
```
<!--
- Please clarify what changes you are proposing. The purpose of this
section is to outline the changes and how this PR fixes the issue.
If possible, please consider writing useful notes for better and faster
reviews in your PR.
- Please clarify why the changes are needed. For instance, the use case
and bug description.
- Fixes #
-->
### Does this PR introduce _any_ user-facing change?
No.
<!--
Note that it means *any* user-facing change including all aspects such
as API, interface or other behavior changes.
Documentation-only updates are not considered user-facing changes.
-->
### How was this patch tested?
Tested on 1x16 910 node, with tailored 2 layer DSKv2.
<!--
CI passed with new added/existing test.
If it was tested in a way different from regular unit tests, please
clarify how you tested step by step, ideally copy and paste-able, so
that other reviewers can test and check, and descendants can verify in
the future.
If tests were not added, please describe why they were not added and/or
why it was difficult to add.
-->
---------
Signed-off-by: sdmyzlp <lrwei2@petalmail.com>
2025-06-11 09:18:38 +08:00
        x: Union[torch.Tensor, Tuple[torch.Tensor, torch.Tensor]],
        bias: Optional[torch.Tensor] = None,
        tp_rank: Optional[int] = 0,
    ) -> torch.Tensor:
Support multistream of shared experts in FusedMoE (#997)
Contains #1111 for completeness.
### What this PR does / why we need it?
Implement multi-stream parallelism for MoE layers with shared experts,
where the computation of the shared experts is overlapped with expert token
dispatch and combine. Also, when multi-stream is enabled, the weights of the
shared experts are forced to replicate across all cards, regardless
of any tensor parallelism configuration, to avoid AllReduce operations.
The expected overlapping is:
```
| shared gate_up | shared act | | shared down |
| dispatch | routed gate_up, act, down | combine |
```
### Does this PR introduce _any_ user-facing change?
No.
### How was this patch tested?
Tested on a 1x16 910 node, with a tailored 2-layer DSKv2.
---------
Signed-off-by: sdmyzlp <lrwei2@petalmail.com>
2025-06-11 09:18:38 +08:00
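The overlap schedule above can be sketched conceptually. The snippet below is purely illustrative: it uses two Python threads as a stand-in for the two device streams, and none of these helper names (`routed_path`, `shared_experts`, `moe_forward`) exist in the codebase. The point is only that the shared-expert computation runs concurrently with dispatch/combine of the routed path, and the results are summed at the end.

```python
# Conceptual sketch of dual-stream MoE overlap (NOT the real NPU implementation).
from concurrent.futures import ThreadPoolExecutor


def routed_path(tokens):
    dispatched = [t for t in tokens]         # stand-in for all-to-all dispatch
    processed = [t * 2 for t in dispatched]  # routed gate_up, act, down
    return processed                         # stand-in for combine


def shared_experts(tokens):
    return [t + 1 for t in tokens]           # shared gate_up, act, down


def moe_forward(tokens):
    # The two submissions run concurrently, mimicking the two streams.
    with ThreadPoolExecutor(max_workers=2) as pool:
        routed = pool.submit(routed_path, tokens)
        shared = pool.submit(shared_experts, tokens)
        # Final output sums the routed and shared contributions.
        return [r + s for r, s in zip(routed.result(), shared.result())]


out = moe_forward([1.0, 2.0])  # routed [2.0, 4.0] + shared [2.0, 3.0]
```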
        config = getattr(layer, "_ascend_quant_config", {})
        if not isinstance(x, tuple):
            output_dtype = config.get("output_dtype", x.dtype)
            quantized_x, dynamic_scale = torch_npu.npu_dynamic_quant(x)
        else:
            assert "output_dtype" in config.keys(), (
                f"DynamicLinearMethod needs explicitly specified `output_dtype` "
                f"for pre-quantized input, got config [{config}]")
            output_dtype = config["output_dtype"]
            quantized_x, dynamic_scale = x
        pertoken_scale = (dynamic_scale
                          if config.get("pertoken_scale", True) else None)

        output = torch_npu.npu_quant_matmul(
            quantized_x,
            layer.weight,
            layer.weight_scale,
            pertoken_scale=pertoken_scale,
            bias=bias,
            output_dtype=output_dtype,
        )
        return ((output, dynamic_scale)
                if config.get("return_scale", False) else output)
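The per-token dynamic quantization that `torch_npu.npu_dynamic_quant` performs can be sketched in plain Python. This is a conceptual model only, assuming symmetric int8 quantization: each token row gets its own scale so that the row's max magnitude maps to 127. The real operator runs on the NPU and its exact rounding behavior may differ; the function name below is illustrative.

```python
# Minimal sketch of symmetric per-token int8 dynamic quantization (assumption:
# this mirrors what npu_dynamic_quant does conceptually, not its exact kernel).
def dynamic_quant_per_token(x):
    """x: list of token rows (lists of floats).

    Returns (q, scales): int8-range integers and one float scale per row.
    """
    q, scales = [], []
    for row in x:
        amax = max(abs(v) for v in row) or 1.0  # guard against all-zero rows
        scale = amax / 127.0                    # row max maps to int8 max
        q.append([round(v / scale) for v in row])
        scales.append(scale)
    return q, scales


q, s = dynamic_quant_per_token([[0.5, -1.0], [2.0, 4.0]])
# Each row's largest-magnitude entry quantizes to +/-127; dequantization is
# q[i][j] * s[i].
```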
    def process_weights_after_loading(self, layer):
        if self.transpose_weight:
            layer.weight.data = layer.weight.data.transpose(0, 1).contiguous()
        # cast quantized weight tensors in NZ format (29) for higher inference speed
        layer.weight.data = torch_npu.npu_format_cast(layer.weight.data, 29)
        layer.weight_scale.data = layer.weight_scale.data.flatten()
        layer.weight_scale_fp32 = layer.weight_scale.data.to(torch.float32)
        layer.weight_offset.data = layer.weight_offset.data.flatten()


class AscendW8A8DynamicFusedMoEMethod:
    """FusedMoe method for Ascend W8A8_DYNAMIC."""

    def __init__(self):
        self.transpose_weight = True

        self.ep_group = get_ep_group()

        ascend_config = get_ascend_config()
        self.torchair_graph_enabled = ascend_config.torchair_graph_config.enabled

        try:
            device_group = get_mc2_group().device_group
            # TODO: Try local_rank = ep_group.rank_in_group
            local_rank = torch.distributed.get_rank(group=device_group)
            backend = device_group._get_backend(torch.device("npu"))
            self.moe_all_to_all_group_name = backend.get_hccl_comm_name(
                local_rank)
        except AttributeError:
            self.moe_all_to_all_group_name = ""
    @staticmethod
    def get_weight(num_experts: int, intermediate_size_per_partition: int,
                   hidden_sizes: int,
                   params_dtype: torch.dtype) -> Dict[str, Any]:
        param_dict = {}
        param_dict["w13_weight"] = torch.empty(num_experts,
                                               2 *
                                               intermediate_size_per_partition,
                                               hidden_sizes,
                                               dtype=torch.int8)
        param_dict["w2_weight"] = torch.empty(num_experts,
                                              hidden_sizes,
                                              intermediate_size_per_partition,
                                              dtype=torch.int8)
        return param_dict
    @staticmethod
    def get_dynamic_quant_param(num_experts: int,
                                intermediate_size_per_partition: int,
                                hidden_sizes: int,
                                params_dtype: torch.dtype) -> Dict[str, Any]:
        param_dict = {}
        param_dict["w13_weight_scale"] = torch.empty(
            num_experts,
            2 * intermediate_size_per_partition,
            1,
            dtype=params_dtype)
        param_dict["w13_weight_offset"] = torch.empty(
            num_experts,
            2 * intermediate_size_per_partition,
            1,
            dtype=params_dtype)
        param_dict["w2_weight_scale"] = torch.empty(num_experts,
                                                    hidden_sizes,
                                                    1,
                                                    dtype=params_dtype)
        param_dict["w2_weight_offset"] = torch.empty(num_experts,
                                                     hidden_sizes,
                                                     1,
                                                     dtype=params_dtype)
        return param_dict
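The shape bookkeeping above can be made explicit with a small helper (illustrative only; `w8a8_dynamic_moe_shapes` is not part of the codebase): `w13` fuses the gate and up projections, hence the factor of 2 on the intermediate size, and every per-channel scale/offset tensor carries one value per output channel (the trailing 1).

```python
# Hypothetical summary of the MoE parameter shapes allocated by get_weight and
# get_dynamic_quant_param above (names and function are assumptions for clarity).
def w8a8_dynamic_moe_shapes(num_experts, intermediate, hidden):
    return {
        "w13_weight": (num_experts, 2 * intermediate, hidden),   # int8
        "w2_weight": (num_experts, hidden, intermediate),        # int8
        "w13_weight_scale": (num_experts, 2 * intermediate, 1),
        "w13_weight_offset": (num_experts, 2 * intermediate, 1),
        "w2_weight_scale": (num_experts, hidden, 1),
        "w2_weight_offset": (num_experts, hidden, 1),
    }


# Example with made-up sizes resembling a small MoE model:
shapes = w8a8_dynamic_moe_shapes(num_experts=64, intermediate=1408, hidden=2048)
```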
    def apply(
        self,
        layer: torch.nn.Module,
        x: torch.Tensor,
        router_logits: torch.Tensor,
        top_k: int,
        renormalize: bool,
        use_grouped_topk: bool = False,
        global_num_experts: int = -1,
        expert_map: Optional[torch.Tensor] = None,
        topk_group: Optional[int] = None,
        num_expert_group: Optional[int] = None,
[quantization] Support w8a8 quantization (#580)
### What this PR does / why we need it?
Add a `VLLMAscendQuantizer` to support w8a8 static (W8A8) and dynamic (W8A8_DYNAMIC) quantization on linear and moe layers. The quantizer is enabled if a model has a [quantize field](https://huggingface.co/vllm-ascend/Qwen2.5-0.5B-Instruct-w8a8/blob/main/config.json#L27) in its config. If MindIE Turbo is installed, its quantizer is applied; otherwise the VLLMAscendQuantizer is used directly.
- This patch fixes the installation docs so that installation works
- This patch enables norm quantization by patching `RMSNorm.__init__`,
`RMSNorm.forward_oot`, and `NPUModelRunnerBase.load_model`
- Add `AscendW8A8LinearMethod` for W8A8
- Add `AscendW8A8DynamicLinearMethod` and
`AscendW8A8DynamicFusedMoEMethod` for W8A8_DYNAMIC
- Add an e2e test for `vllm-ascend/Qwen2.5-0.5B-Instruct-w8a8`
### Does this PR introduce _any_ user-facing change?
Yes, w8a8 quantization is supported. After this patch, users can run w8a8 models with commands such as:
```
vllm serve /root/.cache/modelscope/hub/Qwen/Qwen2.5-7B-Instruct-w8a8 --served-model-name "qwen2.5-7B"
```
### How was this patch tested?
0. CI passed: added an e2e test for `vllm-ascend/Qwen2.5-0.5B-Instruct-w8a8`
1. From @Yikun:
Functional testing of Qwen2.5-0.5B-Instruct-w8a8 all went well; please refer to
https://github.com/vllm-project/vllm-ascend/pull/580#issuecomment-2816747613
2. From @dingdingchaomian:
Tested with the qwen2.5-72b-instruct and deepseek-v2-lite-chat models, both quantized using Ascend's msmodelslim tool:
- Qwen2.5-72b-instruct was tested twice, once for w8a8 static and once for w8a8 dynamic.
- Deepseek-v2-lite-chat was tested once because its quantization uses both static and dynamic w8a8.
Models were tested using both offline inference and online serving, and both work well. The inference code is exactly the same as the examples in
https://vllm-ascend.readthedocs.io/en/latest/quick_start.html, with the model path and tensor parallel number changed.
---------
Signed-off-by: dingdingchaomian <wangce21@huawei.com>
Signed-off-by: Yikun Jiang <yikunkero@gmail.com>
Co-authored-by: dingdingchaomian <wangce21@huawei.com>
Co-authored-by: Angazenn <zengyanjia@huawei.com>
Co-authored-by: liujiaxu <liujiaxu4@huawei.com>
Co-authored-by: ApsarasX <apsarax@outlook.com>
Co-authored-by: ganyi1996ppo <pleaplusone.gy@gmail.com>
2025-04-20 18:14:05 +08:00
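The quantizer selection described above can be sketched as follows. This is an illustrative outline only, assuming hypothetical names (`select_quantizer`, the returned backend strings, and the `mindie_turbo_installed` flag are not the actual vllm-ascend API): the quantizer is engaged only when the model config carries a `quantize` field, and MindIE Turbo's quantizer takes precedence when that package is available.

```python
from typing import Optional


def select_quantizer(model_config: dict,
                     mindie_turbo_installed: bool) -> Optional[str]:
    """Return the quantizer backend to use, or None if the model config
    has no "quantize" field (i.e. the model is not quantized)."""
    if model_config.get("quantize") is None:
        return None
    # MindIE Turbo's quantizer takes precedence when the package is installed;
    # otherwise fall back to the built-in VLLMAscendQuantizer.
    if mindie_turbo_installed:
        return "MindIETurboQuantizer"
    return "VLLMAscendQuantizer"
```

For example, a config containing `"quantize": "w8a8"` selects the Ascend quantizer when MindIE Turbo is absent, while an unquantized config selects none.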
        custom_routing_function: Optional[Callable] = None,
        scoring_func: str = "softmax",
        e_score_correction_bias: Optional[torch.Tensor] = None,
        is_prefill: bool = True,
        enable_force_load_balance: bool = True,
        log2phy: Optional[torch.Tensor] = None,
        global_redundant_expert_num: int = 0,
Support multistream of shared experts in FusedMoE (#997)
Contains #1111 for completeness.
### What this PR does / why we need it?
Implement multi-stream parallelism for MoE layers with shared experts, where the computation of shared experts is overlapped with expert token dispatch and combine. Also, when multi-stream is enabled, the weights of shared experts are forced to replicate across all cards, regardless of any tensor parallelism configuration, to avoid AllReduce operations.
The expected overlapping is:
```
| shared gate_up | shared act |                            | shared down |
|    dispatch    | routed gate_up, act, down               |   combine   |
```
### Does this PR introduce _any_ user-facing change?
No.
### How was this patch tested?
Tested on a 1x16 910 node, with a tailored 2-layer DSKv2.
---------
Signed-off-by: sdmyzlp <lrwei2@petalmail.com>
2025-06-11 09:18:38 +08:00
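The overlap schedule above can be illustrated with a toy sketch. This is not the vllm-ascend implementation: on the NPU the overlap is realized with `npu_stream_switch` and `npu_wait_tensor` on a secondary hardware stream, while here plain Python threads stand in for the two streams to show that the shared-expert stages run concurrently with dispatch/routed-experts/combine.

```python
import threading

timeline = []          # records the order stages complete in
lock = threading.Lock()


def run(stages):
    """Execute a sequence of named stages, appending each to the timeline."""
    for stage in stages:
        with lock:
            timeline.append(stage)


# Main "stream": token dispatch, routed expert computation, combine.
main = threading.Thread(
    target=run, args=(["dispatch", "routed gate_up/act/down", "combine"],))
# Secondary "stream": shared expert computation, overlapped with the above.
secondary = threading.Thread(
    target=run, args=(["shared gate_up", "shared act", "shared down"],))

main.start(); secondary.start()
main.join(); secondary.join()
```

All six stages complete, with the two sequences interleaved rather than serialized, which is the point of placing the shared experts on their own stream.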
        shared_experts: Optional[Any] = None,
        quantized_x_for_share: Optional[Any] = None,
        dynamic_scale_for_share: Optional[Any] = None,
        **kwargs,
    ) -> torch.Tensor:
        assert router_logits.shape[
            1] == global_num_experts, "Number of global experts mismatch"
        is_deepseek_v3_r1 = global_num_experts == 256

        # NOTE: npu_moe_gating_top_k currently only supports the `group_count=256` pattern
        if is_deepseek_v3_r1:
            topk_weights, topk_ids, _ = torch_npu.npu_moe_gating_top_k(
                router_logits,
                k=top_k,  # topk is currently fixed at 8
                bias=e_score_correction_bias,
                k_group=topk_group,  # fixed: 4
                group_count=num_expert_group,  # fixed: 8
                group_select_mode=1,  # 0: max within group; 1: topk2.sum (fixed)
                renorm=0,  # 0: softmax -> topk (fixed); 1: topk -> softmax
                norm_type=1,  # 0: softmax; 1: sigmoid (fixed)
                # out_flag=False,  # TODO, new API: whether to output the third output
                # y2_flag=False,  # old API: whether to output the third output
                routed_scaling_factor=1,
                eps=float(1e-20))
        else:
            topk_weights, topk_ids = select_experts(
                hidden_states=x,
                router_logits=router_logits,
                top_k=top_k,
                use_grouped_topk=use_grouped_topk,
                renormalize=renormalize,
                topk_group=topk_group,
                num_expert_group=num_expert_group,
                custom_routing_function=custom_routing_function,
                scoring_func=scoring_func,
                e_score_correction_bias=e_score_correction_bias,
            )
        fused_moe_state = get_forward_context().fused_moe_state
        shared_gate_up, shared_dequant_scale = None, None
        if shared_experts is not None and fused_moe_state == FusedMoEState.MC2:
            with npu_stream_switch("moe_secondary", 0):
                npu_wait_tensor(quantized_x_for_share, router_logits)
                share_up_out, _ = shared_experts.gate_up_proj(
                    (quantized_x_for_share, dynamic_scale_for_share))
                shared_gate_up, shared_dequant_scale = share_up_out[
                    0], share_up_out[1]
        # This is a naive implementation of expert load balancing that avoids
        # accumulating too many tokens on a single rank. It is currently only
        # activated during profile runs.
        if enable_force_load_balance:
            topk_ids = torch.randint_like(topk_ids, 0, global_num_experts)
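The forced load balancing above can be illustrated with a small standalone sketch (pure Python instead of `torch.randint_like`, with a hypothetical expert count): drawing expert ids uniformly at random spreads profile-run tokens roughly evenly across experts, so no single rank accumulates a disproportionate share.

```python
import random
from collections import Counter

random.seed(0)  # deterministic for this illustration

num_experts = 8     # illustrative value, not tied to any real model
num_tokens = 8000

# Uniformly random expert assignment, mimicking torch.randint_like(topk_ids,
# 0, global_num_experts): each expert receives ~num_tokens / num_experts
# tokens in expectation (about 1000 here).
draws = [random.randrange(num_experts) for _ in range(num_tokens)]
counts = Counter(draws)
```

Every expert ends up with a token count close to the uniform share, which is exactly the balance property the profile run relies on.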
        topk_weights = topk_weights.to(x.dtype)

        if fused_moe_state == FusedMoEState.AllGatherEP:
            return fused_experts_with_allgather(
                hidden_states=x,
                w1=layer.w13_weight,
                w1_scale=layer.w13_weight_scale,
                w2=layer.w2_weight,
                w2_scale=layer.w2_weight_scale,
                topk_weights=topk_weights,
                topk_ids=topk_ids,
                top_k=top_k,
                expert_map=expert_map)
        elif fused_moe_state == FusedMoEState.MC2:
            return fused_experts_with_mc2(
                hidden_states=x,
                w1=layer.w13_weight,
                w2=layer.w2_weight,
                w1_scale=layer.w13_weight_scale_fp32,
                w2_scale=layer.w2_weight_scale,
                topk_weights=topk_weights,
                topk_ids=topk_ids,
                top_k=top_k,
                expert_map=expert_map,
                moe_all_to_all_group_name=self.moe_all_to_all_group_name,
                log2phy=log2phy,
                global_redundant_expert_num=global_redundant_expert_num,
                shared_experts=shared_experts,
                is_torchair=self.torchair_graph_enabled,
                mc2_mask=kwargs.get("mc2_mask", None),
                shared_gate_up=shared_gate_up,
                shared_dequant_scale=shared_dequant_scale)
        elif fused_moe_state in [
                FusedMoEState.AllGather, FusedMoEState.NaiveMulticast
        ]:
            return fused_experts(hidden_states=x,
                                 w1=layer.w13_weight,
                                 w1_scale=layer.w13_weight_scale,
                                 w2=layer.w2_weight,
                                 w2_scale=layer.w2_weight_scale,
                                 topk_weights=topk_weights,
                                 topk_ids=topk_ids,
                                 top_k=top_k,
                                 expert_map=expert_map)
        else:
            # The current deepseek moe implementation splits hidden_states
            # according to tp_size before they are fed into the fused_moe
            # module. Therefore, all2all is needed regardless of the dp/tp
            # configuration in order to dispatch/combine tokens.
            return fused_experts_with_all2all(
                hidden_states=x,
                w1=layer.w13_weight,
                w1_scale=layer.w13_weight_scale,
                w2=layer.w2_weight,
                w2_scale=layer.w2_weight_scale,
                topk_weights=topk_weights,
                topk_ids=topk_ids,
                top_k=top_k,
                expert_map=expert_map,
                ep_group=self.ep_group,
                log2phy=log2phy,
                global_redundant_expert_num=global_redundant_expert_num,
            )
    def process_weights_after_loading(self, layer):
        if self.transpose_weight:
            layer.w13_weight.data = layer.w13_weight.data.transpose(
                1, 2).contiguous()
            layer.w2_weight.data = layer.w2_weight.data.transpose(
                1, 2).contiguous()
        if envs.VLLM_ENABLE_FUSED_EXPERTS_ALLGATHER_EP:
            torch_npu.npu_format_cast_(layer.w2_weight, ACL_FORMAT_FRACTAL_NZ)
        layer.w13_weight_scale.data = layer.w13_weight_scale.data.view(
            layer.w13_weight_scale.data.shape[0], -1)
        layer.w13_weight_scale_fp32 = layer.w13_weight_scale.data.to(
            torch.float32)
        layer.w13_weight_offset.data = layer.w13_weight_offset.data.view(
            layer.w13_weight_offset.data.shape[0], -1)
        layer.w2_weight_scale.data = layer.w2_weight_scale.data.view(
            layer.w2_weight_scale.data.shape[0], -1)
        layer.w2_weight_offset.data = layer.w2_weight_offset.data.view(
            layer.w2_weight_offset.data.shape[0], -1)