[Bugfix] fix custom op GmmSwigluQuantWeightNzTensorList (#4593)

### What this PR does / why we need it?

1. Fixes the environment path used to locate the custom op shared libraries.
2. Allocates op output tensors with `at::empty` instead of `at::zeros`,
avoiding an unnecessary zero-fill of buffers that the kernel fully
overwrites.
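
The zeros-vs-empty trade-off can be sketched with an analogous NumPy example (illustrative only; the PR itself changes ATen's `at::zeros`/`at::empty`, and the names below are hypothetical stand-ins):

```python
import numpy as np

m, n = 4, 8

# zeros() writes 0 into every element up front -- an extra pass over memory.
out_zeroed = np.zeros((m, n // 2), dtype=np.int8)

# empty() only reserves memory; contents are arbitrary until written.
# This is safe only because the op overwrites every element before reading.
out_empty = np.empty((m, n // 2), dtype=np.int8)
out_empty[:] = 1  # stand-in for the kernel fully overwriting the buffer
```

The optimization is valid precisely because `EXEC_NPU_CMD` writes every element of `output`, `output_scale`, and `output_offset`; an uninitialized buffer would otherwise leak garbage values.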



- vLLM version: v0.11.2
- vLLM main: https://github.com/vllm-project/vllm/commit/v0.11.2

---------

Signed-off-by: QianChenxi <chenxi.qian.cq@outlook.com>
Commit: 4588cdac02 (parent b84c9afbf5)
Author: Chenxi Qian
Date: 2025-12-02 22:02:04 +08:00
Committed by: GitHub
6 changed files with 18 additions and 27 deletions
@@ -568,9 +568,9 @@ std::tuple<at::Tensor, at::Tensor, at::Tensor> grouped_matmul_swiglu_quant_weigh
     int m = x_size[0];
     int k = x_size[1];
-    at::Tensor output = at::zeros({m, n/2}, x.options().dtype(at::kChar));
-    at::Tensor output_scale = at::zeros({m}, x.options().dtype(at::kFloat));
-    at::Tensor output_offset = at::zeros({m}, x.options().dtype(at::kFloat));
+    at::Tensor output = at::empty({m, n/2}, x.options().dtype(at::kChar));
+    at::Tensor output_scale = at::empty({m}, x.options().dtype(at::kFloat));
+    at::Tensor output_offset = at::empty({m}, x.options().dtype(at::kFloat));
     EXEC_NPU_CMD(
         aclnnGroupedMatmulSwigluQuantWeightNzTensorList,