### What this PR does / why we need it?

Fix a data conversion bug introduced by [main#4655](3b7eb5179f).

### Does this PR introduce _any_ user-facing change?

### How was this patch tested?

- vLLM version: v0.12.0
- vLLM main: ad32e3e19c

Signed-off-by: tongyuzhou <tongyuzhou1@huawei.com>
Co-authored-by: tongyuzhou <tongyuzhou1@huawei.com>
Co-authored-by: weijinqian0 <1184188277@qq.com>
```diff
@@ -3079,7 +3079,7 @@ class NPUModelRunner(GPUModelRunner):
                 (2 * self.pcp_size)).astype(np.int32) * (2 * self.pcp_size)
             num_padded_scheduled_tokens[:num_decode_reqs] = (
                 tokens[:num_decode_reqs] * self.pcp_size)
-        self.num_pcp_pads = num_padded_scheduled_tokens - tokens
+        self.num_pcp_pads = torch.tensor(num_padded_scheduled_tokens - tokens)
         cu_padded_tokens, pcp_padded_arange = \
             self._get_cumsum_and_arange(num_padded_scheduled_tokens)
         unpad_mask = torch.from_numpy(
```
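The bug being fixed is a type mismatch: `num_padded_scheduled_tokens - tokens` is NumPy arithmetic, so `self.num_pcp_pads` was a NumPy array rather than the torch tensor downstream code expects; the fix wraps the difference in `torch.tensor(...)`. A minimal sketch of the surrounding padding arithmetic, with a hypothetical `pcp_size` and per-request token counts (values chosen for illustration only):

```python
import numpy as np

pcp_size = 2                  # assumed prefill-context-parallel world size
tokens = np.array([5, 7, 3])  # hypothetical scheduled tokens per request

# Round each request's token count up to a multiple of 2 * pcp_size,
# mirroring the `.astype(np.int32) * (2 * self.pcp_size)` expression
# in the diff above.
num_padded = (np.ceil(tokens / (2 * pcp_size)).astype(np.int32)
              * (2 * pcp_size))

# Per-request pad counts. This difference is a NumPy array; the fix
# passes it through torch.tensor(...) so torch ops consuming
# num_pcp_pads receive a tensor instead of an ndarray.
num_pcp_pads = num_padded - tokens

print(num_padded.tolist())    # [8, 8, 4]
print(num_pcp_pads.tolist())  # [3, 1, 1]
```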