Fix a data conversion bug introduced by commit 3b7eb51 in main#4655 (#5115)

### What this PR does / why we need it?

Fix a data conversion bug introduced by [main#4655](3b7eb5179f).

### Does this PR introduce _any_ user-facing change?

### How was this patch tested?

- vLLM version: v0.12.0
- vLLM main: ad32e3e19c

Signed-off-by: tongyuzhou <tongyuzhou1@huawei.com>
Co-authored-by: tongyuzhou <tongyuzhou1@huawei.com>
Co-authored-by: weijinqian0 <1184188277@qq.com>
Author: Yuzhou Tong
Date: 2025-12-17 20:19:02 +08:00
Committed by: GitHub
Parent: 7f1e93f185
Commit: 7671ce1bf1


@@ -3079,7 +3079,7 @@ class NPUModelRunner(GPUModelRunner):
             (2 * self.pcp_size)).astype(np.int32) * (2 * self.pcp_size)
         num_padded_scheduled_tokens[:num_decode_reqs] = (
             tokens[:num_decode_reqs] * self.pcp_size)
-        self.num_pcp_pads = num_padded_scheduled_tokens - tokens
+        self.num_pcp_pads = torch.tensor(num_padded_scheduled_tokens - tokens)
         cu_padded_tokens, pcp_padded_arange = \
             self._get_cumsum_and_arange(num_padded_scheduled_tokens)
         unpad_mask = torch.from_numpy(
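The one-line fix above wraps the subtraction result in `torch.tensor(...)` because subtracting two NumPy arrays yields an `np.ndarray`, while code consuming `self.num_pcp_pads` presumably expects a `torch.Tensor`. A minimal sketch of the type mismatch (array contents and shapes here are hypothetical, and the torch side is shown only in comments to keep the snippet NumPy-only):

```python
import numpy as np

# Hypothetical per-request token counts mirroring the PR's variables.
tokens = np.array([3, 5, 2], dtype=np.int32)
num_padded_scheduled_tokens = np.array([4, 8, 4], dtype=np.int32)

# Before the fix: plain NumPy subtraction produces an np.ndarray,
# so any downstream torch-only operation on num_pcp_pads would fail.
num_pcp_pads = num_padded_scheduled_tokens - tokens
print(type(num_pcp_pads).__name__)  # → ndarray
print(num_pcp_pads.tolist())        # → [1, 3, 2]

# After the fix (conceptually):
#   self.num_pcp_pads = torch.tensor(num_padded_scheduled_tokens - tokens)
# which converts the NumPy result into a torch.Tensor up front.
```

`torch.from_numpy` (used a few lines later in the same method) would also work and shares memory with the source array, whereas `torch.tensor` copies; either way the attribute ends up as a tensor rather than a raw NumPy array.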