[Fix] Prevent memory leak in MLA decode graph (#3743)

### What this PR does / why we need it?
The cache for MLA decode graph parameters was holding strong references
to tensors, preventing them from being garbage collected and leading to
increased memory usage.

This change wraps the cached tensors in weak references, allowing them
to be deallocated when no longer in use and reducing overall memory
pressure.
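
As a point of reference, here is a minimal sketch of the caching pattern described above. The cache and function names are hypothetical (this is not the actual `update_graph_params_workspaces` implementation), and Python's standard `weakref` module stands in for vLLM's `weak_ref_tensors` helper.

```python
# Minimal sketch of the strong-ref vs. weak-ref caching pattern.
# The cache and function names here are hypothetical, not vllm-ascend APIs;
# weakref.ref stands in for vLLM's weak_ref_tensors helper.
from typing import Optional
import weakref

import torch

_workspace_cache: dict[int, object] = {}


def cache_workspace_strong(num_tokens: int, workspace: torch.Tensor) -> None:
    # Storing the tensor itself keeps its backing memory alive for as long
    # as the cache entry exists, even after the caller is done with it.
    _workspace_cache[num_tokens] = workspace


def cache_workspace_weak(num_tokens: int, workspace: torch.Tensor) -> None:
    # Storing only a weak reference lets the buffer be reclaimed once all
    # other owners of `workspace` release it; the cache no longer pins it.
    _workspace_cache[num_tokens] = weakref.ref(workspace)


def get_workspace(num_tokens: int) -> Optional[torch.Tensor]:
    entry = _workspace_cache.get(num_tokens)
    if isinstance(entry, weakref.ref):
        entry = entry()  # None if the workspace has already been freed
    return entry
```

In the change itself, the workspace handed to `update_graph_params_workspaces` is wrapped with vLLM's `weak_ref_tensors` helper (visible in the diff below), which serves the same purpose for device tensors: the graph-params cache keeps a view of the data without keeping the original allocation alive.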

### Does this PR introduce _any_ user-facing change?
None.

### How was this patch tested?
None.

- vLLM version: v0.11.0rc3
- vLLM main: c9461e05a4

---------

Signed-off-by: Yizhou Liu <liu_yizhou@outlook.com>
Author: Yizhou
Date: 2025-10-25 20:37:33 +08:00
Committed by: GitHub
Parent: afc58184ec
Commit: 8ab8111fde
4 changed files with 29 additions and 19 deletions

@@ -562,7 +562,8 @@ class AscendAttentionBackendImpl(AttentionImpl):
                 block_table=attn_metadata.block_tables,
                 context_lens=attn_metadata.seq_lens,
                 out=output)
-            update_graph_params_workspaces(num_tokens, workspace)
+            update_graph_params_workspaces(
+                num_tokens, weak_ref_tensors(workspace))
 
             # Handle graph capturing mode
             stream = torch_npu.npu.current_stream()
@@ -578,7 +579,7 @@ class AscendAttentionBackendImpl(AttentionImpl):
                 self.num_kv_heads,
                 self.num_heads,
                 self.scale,
-                weak_ref_tensors(attn_metadata.block_tables),
+                attn_metadata.block_tables,
                 attn_metadata.seq_lens,
                 weak_ref_tensors(output),
             ))