[v0.11.0][bugfix] Add 'layer_type' param to get_pergroup_param() for compatibility (#3684)

Resolves a `TypeError: got an unexpected keyword argument 'layer_type'`.

A recent change (PR #3311) started passing a `layer_type` argument
when calling `get_pergroup_param()`. This implementation's signature
does not accept that parameter, which caused the error.

This patch adds `layer_type=None` to the method signature to maintain
API compatibility and ignore the unused argument.

Signed-off-by: SlightwindSec <slightwindsec@gmail.com>
Author: Slightwind
Date: 2025-10-23 21:26:50 +08:00
Committed by: GitHub
Parent: f3ea657e93
Commit: d2d19a4c3c

@@ -130,8 +130,11 @@ class AscendW4A4FlatQuantDynamicLinearMethod:
                               dtype=torch.float32)
         return params_dict
 
-    def get_pergroup_param(self, input_size: int, output_size: int,
-                           params_dtype: torch.dtype) -> Dict[str, Any]:
+    def get_pergroup_param(self,
+                           input_size: int,
+                           output_size: int,
+                           params_dtype: torch.dtype,
+                           layer_type: Optional[str] = None) -> Dict[str, Any]:
         return {}
 
     @staticmethod
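The compatibility pattern the patch relies on can be sketched as below. This is a minimal illustration, not the actual vllm-ascend code: the class and function names are hypothetical, and a plain string stands in for `torch.dtype` to keep the sketch dependency-free. The point is that an implementation which has no use for `layer_type` still accepts it with a `None` default, so callers that pass the keyword do not raise `TypeError`.

```python
from typing import Any, Dict, Optional


class QuantMethodWithoutLayerType:
    """Hypothetical implementation that does not use layer_type."""

    def get_pergroup_param(self,
                           input_size: int,
                           output_size: int,
                           params_dtype: str,
                           layer_type: Optional[str] = None) -> Dict[str, Any]:
        # layer_type is accepted purely for API compatibility and ignored.
        return {}


# Caller side, as introduced by the newer change: always passes layer_type.
def build_params(method: QuantMethodWithoutLayerType,
                 layer_type: str) -> Dict[str, Any]:
    return method.get_pergroup_param(input_size=128,
                                     output_size=256,
                                     params_dtype="float32",
                                     layer_type=layer_type)
```

Without the `layer_type=None` default in the signature, the keyword call in `build_params` would fail with exactly the `TypeError: got an unexpected keyword argument 'layer_type'` this commit resolves; older call sites that omit the argument keep working unchanged.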