[bugfix] limit graph replay sync (#5761)

### What this PR does / why we need it?
When the graph mode is piecewise, guarding every replay with a blocking synchronize hurts
performance: the synchronize alone costs roughly 250 us per replay.

![123](https://github.com/user-attachments/assets/04d2a1f3-1f57-4dbb-85ce-b250f2ee7ff0)
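
The ~250 us figure can be ballparked with a micro-benchmark along these lines (a rough sketch, assuming an Ascend NPU with `torch_npu` installed; the measured latency depends on the device, driver, and how busy the stream is):

```python
# Rough sketch: estimate the host-side cost of torch.npu.synchronize().
# Assumes torch_npu is installed; numbers vary by device, driver, and load.
import time

import torch
import torch_npu  # noqa: F401  (registers the "npu" device with torch)

x = torch.randn(1024, 1024, device="npu")
for _ in range(10):  # warm up the device queue
    x = x @ x
torch.npu.synchronize()

iters = 1000
start = time.perf_counter()
for _ in range(iters):
    torch.npu.synchronize()  # mostly idle queue: measures sync overhead
elapsed = time.perf_counter() - start
print(f"avg synchronize latency: {elapsed / iters * 1e6:.1f} us")
```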

### Does this PR introduce _any_ user-facing change?
No. The synchronize before replay is now performed only when the graph mode contains FULL mode; piecewise-only graphs skip it.
### How was this patch tested?

- vLLM version: v0.13.0
- vLLM main: 2f4e6548ef

---------

Signed-off-by: wangyongjun <wangyongjun7@huawei.com>

@@ -192,11 +192,12 @@ class ACLGraphWrapper:
                  f"got {new_input_addresses}")
         logger.info_once("Replaying aclgraph")
-        # In async scheduling or multi-threaded (MT) scenarios, it is possible that
+        # In async scheduling or multi-threaded (MT) scenarios when graph mode is FULL, it is possible that
         # the CPU's record event (from update_attn_params) for the iteration i completes
         # before the graph replay of iteration i-1.
         # To ensure proper ordering, we must call synchronize here before replaying,
         # so that update_attn_params only executes after the previous graph replay has fully completed.
-        torch.npu.synchronize()
+        if self.runtime_mode == CUDAGraphMode.FULL:
+            torch.npu.synchronize()
         entry.aclgraph.replay()
         return entry.output
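
For readers skimming the diff, the resulting replay path looks roughly like the standalone sketch below. The names `ACLGraphWrapper`, `entry.aclgraph`, `runtime_mode`, and `CUDAGraphMode` are taken from the diff above; the surrounding input-address validation is elided, and the import path for `CUDAGraphMode` is an assumption.

```python
# Simplified sketch of the guarded replay path after this PR; the real
# ACLGraphWrapper validates input addresses before reaching this point.
import torch
import torch_npu  # noqa: F401  (registers the "npu" device with torch)
from vllm.config import CUDAGraphMode  # assumed import path


def replay(self, entry):
    # FULL graphs update attention params on the CPU between replays, so a
    # sync is needed to keep iteration i's update after iteration i-1's
    # replay. PIECEWISE graphs have no such race and skip the ~250 us sync.
    if self.runtime_mode == CUDAGraphMode.FULL:
        torch.npu.synchronize()
    entry.aclgraph.replay()
    return entry.output
```

The trade-off: the device-wide sync is kept only where it is needed for correctness (FULL-mode graphs, whose attention params are mutated on the CPU between replays), while piecewise graphs no longer pay it on every step.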