[refactor] refactor weight trans nz and transpose (#4878)

### What this PR does / why we need it?

`VLLM_ASCEND_ENABLE_NZ` now has three options:
- `0`: disable NZ;
- `1`: enable NZ only for quantized weights;
- `2`: enable NZ whenever possible.

`VLLM_ASCEND_ENABLE_NZ=1` is the default.
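A minimal sketch of how the switch might be read, assuming a plain environment lookup (the `get_nz_mode` helper and its validation are illustrative, not the PR's actual code):

```python
import os

def get_nz_mode() -> int:
    """Read VLLM_ASCEND_ENABLE_NZ: 0 = disable NZ, 1 = NZ for quantized
    weights only (default), 2 = NZ whenever possible."""
    mode = int(os.getenv("VLLM_ASCEND_ENABLE_NZ", "1"))
    if mode not in (0, 1, 2):
        raise ValueError(
            f"VLLM_ASCEND_ENABLE_NZ must be 0, 1 or 2, got {mode}")
    return mode
```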

All cases are shown in the table below (a decision sketch follows it):

|  | W4A4 | W4A8 | W8A8 | fp16/bf16 | fp32 |
|---|---|---|---|---|---|
| trans nz | can't support NZ | trans NZ by default | trans NZ by default | trans NZ when `VLLM_ASCEND_ENABLE_NZ` is 2 | can't support NZ |
| transpose | only supports the non-transpose case | only supports the transpose case | only supports the transpose case | linear: only supports the non-transpose case<br>gmm: only supports the transpose case | same as fp16/bf16 |
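As a rough illustration of the "trans nz" row, a decision helper might look like the sketch below; `quant_type` is a hypothetical label, and the real logic is spread across the per-method quantization code:

```python
from typing import Optional

import torch

def should_trans_nz(weight_dtype: torch.dtype,
                    quant_type: Optional[str],
                    nz_mode: int) -> bool:
    """Illustrative encoding of the 'trans nz' row of the table above.

    quant_type is 'w4a4', 'w4a8', 'w8a8', or None for float weights;
    nz_mode is the parsed value of VLLM_ASCEND_ENABLE_NZ.
    """
    if quant_type == "w4a4" or weight_dtype == torch.float32:
        return False  # NZ is not supported for W4A4 or fp32 at all
    if quant_type in ("w4a8", "w8a8"):
        return nz_mode >= 1  # quantized weights: NZ by default
    return nz_mode == 2  # fp16/bf16: NZ only in the aggressive mode
```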

Some exceptional cases (see the sketch after this list):
1. The MLAPO op needs to do some additional processing on the weights, including the NZ transform. When the MLAPO op is used, some weights are forcibly transformed to NZ.
2. The MLA/SFA weight `W_UV` is consumed by the op `torch.ops._C_ascend.batch_matmul_transpose`, which currently does not support NZ.
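A sketch of how these two exceptions might be applied, assuming the usual `torch_npu.npu_format_cast` conversion to the FRACTAL_NZ format; the function name and flags below are illustrative assumptions:

```python
import torch
import torch_npu

ACL_FORMAT_FRACTAL_NZ = 29  # ACL format id for FRACTAL_NZ

def apply_weight_exceptions(weight: torch.Tensor,
                            used_by_mlapo: bool,
                            is_w_uv: bool) -> torch.Tensor:
    """Illustrative handling of the two exceptional cases above."""
    if is_w_uv:
        # batch_matmul_transpose cannot consume NZ yet, so W_UV stays ND.
        return weight
    if used_by_mlapo:
        # MLAPO preprocessing forces NZ regardless of VLLM_ASCEND_ENABLE_NZ.
        return torch_npu.npu_format_cast(weight, ACL_FORMAT_FRACTAL_NZ)
    return weight
```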

### Does this PR introduce _any_ user-facing change?
Yes. fp16/bf16 weights are no longer transformed to NZ by default; they are only transformed when `VLLM_ASCEND_ENABLE_NZ=2`.

### How was this patch tested?

- vLLM version: v0.12.0
- vLLM main: ad32e3e19c

Signed-off-by: zzzzwwjj <1183291235@qq.com>

@@ -199,7 +199,6 @@ class TestW4A4FlatQuantDynamic(unittest.TestCase):
                                     (self.output_size, self.input_size // 8),
                                     dtype=torch.int32)
         mock_pack_weights.return_value = mock_packed
-        self.method.transpose_weight = False
         self.method.process_weights_after_loading(layer)
         mock_pack_weights.assert_called_once()
         self.assertFalse(hasattr(layer, 'weight'))
@@ -212,35 +211,6 @@ class TestW4A4FlatQuantDynamic(unittest.TestCase):
         self.assertEqual(layer.left_trans.shape, (24, 24))
         self.assertTrue(layer.left_trans.is_contiguous())
 
-    @patch('vllm_ascend.quantization.w4a4_flatquant_dynamic.pack_int4_weights')
-    def test_process_weights_after_loading_with_transpose(
-            self, mock_pack_weights):
-        """Tests weight processing after loading, with transpose."""
-        layer = nn.Module()
-        layer.weight = torch.randint(-8,
-                                     7, (self.output_size, self.input_size),
-                                     dtype=torch.int8)
-        layer.weight_scale = torch.randn(self.output_size,
-                                         1,
-                                         dtype=torch.bfloat16)
-        layer.weight_offset = torch.randn(self.output_size,
-                                          1,
-                                          dtype=torch.bfloat16)
-        layer.left_trans = torch.randn(24, 24)
-        layer.right_trans = torch.randn(32, 32)
-        layer.clip_ratio = torch.tensor([0.9])
-        mock_packed = torch.randint(0,
-                                    100,
-                                    (self.output_size, self.input_size // 8),
-                                    dtype=torch.int32)
-        mock_pack_weights.return_value = mock_packed
-        self.method.transpose_weight = True
-        self.method.process_weights_after_loading(layer)
-        self.assertTrue(hasattr(layer, 'weight_packed'))
-        self.assertEqual(layer.weight_packed.shape,
-                         (self.input_size // 8, self.output_size))
-        self.assertTrue(layer.weight_packed.is_contiguous())
-
 
 if __name__ == '__main__':
     unittest.main(argv=['first-arg-is-ignored'], exit=False)