[refactor] refactor weight trans nz and transpose (#4878)
### What this PR does / why we need it?
`VLLM_ASCEND_ENABLE_NZ` now has three options:
- `0`: disable NZ;
- `1`: enable NZ only for quantized weights;
- `2`: enable NZ whenever possible.

`VLLM_ASCEND_ENABLE_NZ=1` is the default.
All cases are shown in the table below (a sketch of the gating logic follows it):
| | W4A4 | W4A8 | W8A8 | fp16/bf16 | fp32 |
|---|---|---|---|---|---|
| trans NZ | can't support NZ | trans NZ by default | trans NZ by default | trans NZ when `VLLM_ASCEND_ENABLE_NZ` is `2` | can't support NZ |
| transpose | only supports the non-transposed case | only supports the transposed case | only supports the transposed case | linear: only supports the non-transposed case<br>gmm: only supports the transposed case | same as fp16/bf16 |
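To make the gating concrete, here is a minimal sketch of how the three values could drive the weight-format cast. This is not the actual vllm-ascend code path: the helper name `maybe_cast_to_nz`, the `is_quantized` flag, and the `ACL_FORMAT_FRACTAL_NZ = 29` constant are assumptions, and formats that can never use NZ (W4A4, fp32) are assumed to be filtered out before this point.

```python
import os

import torch
import torch_npu  # Ascend extension for PyTorch; provides npu_format_cast

# Assumed constant: ACL format id commonly used for the fractal NZ layout.
ACL_FORMAT_FRACTAL_NZ = 29


def maybe_cast_to_nz(weight: torch.Tensor, is_quantized: bool) -> torch.Tensor:
    """Hypothetical helper mirroring the table above.

    VLLM_ASCEND_ENABLE_NZ: 0 -> never cast, 1 -> cast quantized weights
    only (the default), 2 -> cast whenever the format supports it.
    """
    enable_nz = int(os.getenv("VLLM_ASCEND_ENABLE_NZ", "1"))
    if enable_nz == 2 or (enable_nz == 1 and is_quantized):
        # Convert the on-device weight tensor to the fractal NZ layout.
        return torch_npu.npu_format_cast(weight, ACL_FORMAT_FRACTAL_NZ)
    return weight
```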
Some exceptional cases:
1. The MLAPO op needs to do some additional processing on the weights, including trans NZ. If the MLAPO op is used, some weights will be forcibly converted to NZ;
2. MLA/SFA's weight `W_UV` is consumed by the op `torch.ops._C_ascend.batch_matmul_transpose`, which currently can't support NZ.
### Does this PR introduce _any_ user-facing change?
fp16/bf16 weights are no longer converted to NZ by default.
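To restore the previous behavior for fp16/bf16 weights, set `VLLM_ASCEND_ENABLE_NZ=2` before the engine starts loading weights, either in the shell or programmatically, for example:

```python
import os

# Opt back in to NZ conversion for fp16/bf16 weights; must be set before
# vLLM initializes and loads the model weights.
os.environ["VLLM_ASCEND_ENABLE_NZ"] = "2"
```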
### How was this patch tested?
- vLLM version: v0.12.0
- vLLM main: ad32e3e19c
Signed-off-by: zzzzwwjj <1183291235@qq.com>
@@ -86,7 +86,6 @@ class AscendW4A4FlatQuantDynamicLinearMethod:

    input_size = 0

    def __init__(self):
        self.transpose_weight = False
        self.sym = True

    @staticmethod

@@ -176,9 +175,8 @@ class AscendW4A4FlatQuantDynamicLinearMethod:

        return output

    def process_weights_after_loading(self, layer):
        # NOTE: Currently, w4a4 can't support weight nz
        weight_packed = pack_int4_weights(layer.weight.data)
        if self.transpose_weight:
            weight_packed = weight_packed.transpose(0, 1).contiguous()
        layer.register_parameter(
            'weight_packed',
            torch.nn.Parameter(weight_packed, requires_grad=False))