[Bugfix] fix hang in async scheduling (#4233)

### What this PR does / why we need it?

After https://github.com/vllm-project/vllm-ascend/pull/4113, there is no
synchronization between steps. However, in async scheduling with
aclgraph, it is possible that the CPU's record event for the current
iteration completes before the previous iteration's graph execution has
finished.

If the CPU is fast enough, the device will hang on the event_wait of iteration i+1 (assuming event_record executes immediately on the device's update stream):
<img width="1812" height="489" alt="image"
src="https://github.com/user-attachments/assets/373fe655-afe5-4d7d-807e-b0aacf24a543"
/>
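The failure mode can be sketched with a toy model (all names here are illustrative, not vLLM or ACL APIs): suppose the device event behaves like a one-shot flag that event_wait consumes, so if a fast CPU lands two records before the first wait runs, one signal is lost and the next wait blocks forever.

```python
import threading


class OneShotEvent:
    """Toy stand-in for a device event whose wait() consumes the recorded
    flag. This consume-on-wait semantics is an assumption made purely to
    illustrate the reordering hang described above."""

    def __init__(self):
        self._cond = threading.Condition()
        self._flag = False

    def record(self):
        with self._cond:
            self._flag = True          # a later record overwrites a pending one
            self._cond.notify()

    def wait(self, timeout=None):
        with self._cond:
            ok = self._cond.wait_for(lambda: self._flag, timeout)
            if ok:
                self._flag = False     # wait consumes the flag
            return ok


def run_iterations(sync_before_replay):
    """Return True if both iterations' event_waits complete.

    Without synchronization, a fast CPU issues record(i) and record(i+1)
    back-to-back before the device reaches wait(i), so wait(i+1) blocks
    forever (modeled here as a timeout)."""
    ev = OneShotEvent()
    if sync_before_replay:
        # synchronized: record and wait alternate, as after the fix
        ev.record()
        ok1 = ev.wait(timeout=0.1)
        ev.record()
        ok2 = ev.wait(timeout=0.1)
    else:
        # fast CPU: both records land before the first wait runs
        ev.record()
        ev.record()
        ok1 = ev.wait(timeout=0.1)
        ok2 = ev.wait(timeout=0.1)     # never signaled again: the hang
    return ok1 and ok2


print(run_iterations(sync_before_replay=False))  # False: iteration i+1 hangs
print(run_iterations(sync_before_replay=True))   # True: ordering restored
```

The fix in this PR plays the role of `sync_before_replay=True`: the host waits for the previous replay to finish before issuing the next record.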

After adding the synchronization, the record is launched only after the graph replay:
<img width="1803" height="466" alt="image"
src="https://github.com/user-attachments/assets/a8a68053-bd7d-49f5-a79c-9a26ef1285cc"
/>

The bubble time caused by the synchronization is about 85 us on G8600:
<img width="1491" height="804" alt="image"
src="https://github.com/user-attachments/assets/968611ee-f39a-4329-8150-1c4adba25dd1"
/>

### Does this PR introduce _any_ user-facing change?
No

### How was this patch tested?

- vLLM version: v0.11.0
- vLLM main:
2918c1b49c

---------

Signed-off-by: realliujiaxu <realliujiaxu@163.com>
Co-authored-by: hwhaokun <haokun0405@163.com>
Commit 1cdf9ffa73 (parent 91b6ba8ffe) by realliujiaxu on 2025-11-19 14:47:19 +08:00, committed by GitHub.
2 changed files with 29 additions and 1 deletion


@@ -128,7 +128,7 @@ def test_chunked_prefill_with_scheduler_dynamic_batch(
 )
-def test_async_scheduling() -> None:
+def test_async_scheduling_eager() -> None:
     prompts = [
         "Hello, my name is",
         "The president of the United States is",
@@ -148,3 +148,25 @@ def test_async_scheduling() -> None:
         async_scheduling=True,
     ) as vllm_model:
         vllm_model.generate(prompts, sampling_params=sampling_params)
+
+
+def test_async_scheduling_with_full_graph() -> None:
+    prompts = [
+        "Hello, my name is",
+        "The president of the United States is",
+        "The capital of France is",
+        "The future of AI is",
+    ] * 10
+    sampling_params = SamplingParams(temperature=0.2,
+                                     max_tokens=10,
+                                     stop_token_ids=None)
+    with VllmRunner("Qwen/Qwen3-8B",
+                    max_model_len=4096,
+                    max_num_seqs=50,
+                    dtype="bfloat16",
+                    gpu_memory_utilization=0.9,
+                    async_scheduling=True,
+                    compilation_config={"cudagraph_mode":
+                                        "FULL"}) as vllm_model:
+        vllm_model.generate(prompts, sampling_params=sampling_params)


@@ -186,6 +186,12 @@ class ACLGraphWrapper:
                     f"got {new_input_addresses}")
         logger.info_once("Replaying aclgraph")
+        # In async scheduling or multi-threaded (MT) scenarios, it is possible that
+        # the CPU's record event (from update_attn_params) for iteration i completes
+        # before the graph replay of iteration i-1.
+        # To ensure proper ordering, we must call synchronize here before replaying,
+        # so that update_attn_params only executes after the previous graph replay
+        # has fully completed.
+        torch.npu.synchronize()
         entry.aclgraph.replay()
         return entry.output