Chen Chen a2daacbd71 [perf] Fix MLAPO weight disposal for KV-consumer MLA in PD-mix deploy... (#5192)
### What this PR does / why we need it?

- Problem: In MLA+MLAPO, KV-consumer deployments keep
fused_qkv_a_proj/q_proj weights and quant params even though MLAPO uses
the prepacked buffers, increasing memory footprint on decode nodes.
- Fix: Conditionally drop those tensors only when
`kv_transfer_config.is_kv_consumer`, reclaiming memory on decode nodes
(consistent with the SFA behavior in #4774).
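
The disposal logic described above can be sketched as follows. This is a minimal illustration, not the actual `vllm_ascend` code: the helper name `maybe_free_mla_proj_weights`, the `KVTransferConfig` stand-in, and the attribute names `weight_scale`/`weight_offset` are assumptions for the example; only `fused_qkv_a_proj`, `q_proj`, and `is_kv_consumer` come from the PR description.

```python
from dataclasses import dataclass
from types import SimpleNamespace


@dataclass
class KVTransferConfig:
    # Hypothetical stand-in for vLLM's kv_transfer_config.
    is_kv_consumer: bool = False


def maybe_free_mla_proj_weights(layer, kv_transfer_config):
    """On KV-consumer (decode) nodes, MLAPO reads from prepacked fused
    buffers, so the original projection weights and quant params are dead
    memory; drop the references so the allocator can reclaim them."""
    if kv_transfer_config is None or not kv_transfer_config.is_kv_consumer:
        return []  # prefill / mixed nodes still need the original weights
    freed = []
    for name in ("fused_qkv_a_proj", "q_proj"):
        module = getattr(layer, name, None)
        if module is None:
            continue
        # Attribute names below (scale/offset) are illustrative quant params.
        for attr in ("weight", "weight_scale", "weight_offset"):
            if getattr(module, attr, None) is not None:
                setattr(module, attr, None)
        freed.append(name)
    return freed


# Usage with a dummy layer object standing in for an MLA attention layer.
layer = SimpleNamespace(
    fused_qkv_a_proj=SimpleNamespace(weight=[1.0], weight_scale=[0.1], weight_offset=None),
    q_proj=SimpleNamespace(weight=[2.0], weight_scale=None, weight_offset=None),
)
freed = maybe_free_mla_proj_weights(layer, KVTransferConfig(is_kv_consumer=True))
```

The key design point from the PR is the guard: the weights are only dropped when the node is strictly a KV consumer, since prefill (KV-producer) and mixed nodes still execute the original projections.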

### Does this PR introduce _any_ user-facing change?

### How was this patch tested?

- vLLM version: v0.12.0
- vLLM main:
ad32e3e19c

Signed-off-by: Chen Chen <0109chenchen@gmail.com>
2026-01-05 21:29:45 +08:00