wangx700 3b7eb5179f [Bugfix] fix the incorrect use of python's sum on tensors. (#4655)
### What this PR does / why we need it?
Fix the incorrect use of Python's built-in sum() on PyTorch tensors.
1. Calling Python's sum() on the tensor self.num_pcp_pads took ~6 ms,
because it iterates the tensor element by element.
Optimization: replacing it with torch.sum() reduced execution time
to ~474 µs.
2. scheduler_output.scheduled_spec_decode_tokens was looped over
repeatedly even when speculative decoding is not in use.

Optimization: added conditional logic to skip these loops when
speculative decoding is disabled, eliminating unnecessary computational
overhead.
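A minimal sketch of the two optimizations described above. The variable names `num_pcp_pads` and `scheduled_spec_decode_tokens` come from the PR description; the surrounding setup and the loop body are illustrative assumptions, not the actual vllm-ascend code.

```python
import torch

# Illustrative stand-in for self.num_pcp_pads.
num_pcp_pads = torch.zeros(1024, dtype=torch.int64)

# Before: Python's built-in sum() iterates the tensor element by
# element, dispatching one tiny add op per element (~6 ms in the
# PR's measurement).
slow_total = sum(num_pcp_pads)

# After: torch.sum() performs the reduction in a single kernel
# (~474 µs in the PR's measurement).
fast_total = torch.sum(num_pcp_pads)

assert slow_total.item() == fast_total.item()

# Optimization 2 (sketch): when speculative decoding is disabled,
# the mapping of request id -> draft token ids is empty, so the
# per-request loop can be skipped entirely with one truthiness check.
scheduled_spec_decode_tokens: dict[str, list[int]] = {}
if scheduled_spec_decode_tokens:  # skip the loop when spec decode is off
    for req_id, draft_tokens in scheduled_spec_decode_tokens.items():
        pass  # per-request draft-token processing would go here
```

Both fixes follow the same pattern: avoid Python-level per-element work (interpreter loops, per-op dispatch) on hot paths by delegating to a single vectorized call or skipping the path outright.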


- vLLM version: 86e178f7c4d8c3b0eaf3c8e3f810a83f63b90e24
- vLLM main:
86e178f7c4

Signed-off-by: wangx700 <wangxin700@huawei.com>
Co-authored-by: weijinqian0 <1184188277@qq.com>
2025-12-15 19:22:40 +08:00