xc-llm-ascend/vllm_ascend
jiangmengyu18 74699877c9 [v0.18.0][BugFix] fix the weightsmapper bug of qwen3-vl (#7868)
### What this PR does / why we need it?
This PR fixes a weight loading error in the Qwen3-VL model.
The bug was introduced upstream in vLLM: in vLLM's `qwen3-vl.py`, the prefix of
the `lm_head` layer is hardcoded as `"lm_head"`, while `hf_to_vllm_mapper`
remaps the weight name from `lm_head` to `language_model.lm_head`.
The resulting mismatch between the keys in the checkpoint and the prefix of
the `lm_head` layer causes a weight loading error.
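A minimal sketch of the failure mode, using a hypothetical simplified mapper (not vLLM's actual `WeightsMapper` implementation): checkpoint weight names are rewritten by prefix, so a layer whose prefix is hardcoded to `"lm_head"` no longer matches the remapped names.

```python
def remap_name(name: str, prefix_map: dict[str, str]) -> str:
    """Apply the first matching prefix substitution, in the spirit of
    hf_to_vllm_mapper (simplified illustration, not vLLM's real API)."""
    for old, new in prefix_map.items():
        if name.startswith(old):
            return new + name[len(old):]
    return name

# Mapping rule described in this PR: lm_head -> language_model.lm_head
prefix_map = {"lm_head.": "language_model.lm_head."}

remapped = remap_name("lm_head.weight", prefix_map)
print(remapped)  # language_model.lm_head.weight

# A loader that looks up weights under a hardcoded "lm_head" prefix
# no longer finds them after remapping:
hardcoded_prefix = "lm_head"
print(remapped.startswith(hardcoded_prefix + "."))  # False -> load error

# The fix is to derive the layer prefix through the same mapper:
fixed_prefix = remap_name(hardcoded_prefix + ".", prefix_map).rstrip(".")
print(remapped.startswith(fixed_prefix + "."))  # True
```

This mirrors the mismatch the PR describes: the checkpoint keys carry the `language_model.` prefix after remapping, so the layer's lookup prefix must be passed through the same mapping.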
### Does this PR introduce _any_ user-facing change?
### How was this patch tested?
- [x] Ran the Qwen3-VL dense model with the fusion operator and verified correct
output

Signed-off-by: betta18 <jiangmengyu1@huawei.com>
Co-authored-by: betta18 <jiangmengyu1@huawei.com>
2026-04-02 12:56:08 +08:00