panchao-hub 42774df744 [Bugfix] Fix weight transpose in RL scenarios (#5567)
### What this PR does / why we need it?
In the training-inference switching (RL) scenario, the model weights must not
be restored during KV cache resumption, because the weights the trainer has
just pushed are already in the inference format and reloading the cached copy
would cause a weight-format mismatch.

### Does this PR introduce _any_ user-facing change?
No

### How was this patch tested?

- vLLM version: v0.13.0
- vLLM main: 7157596103

Signed-off-by: p00465316 <panchao13@huawei.com>
Co-authored-by: p00465316 <panchao13@huawei.com>
2026-01-05 09:17:26 +08:00