[Bugfix][LoRA] Fix the bug when running Qwen3-Reranker-0.6B with LoRA. (#7156)

### What this PR does / why we need it?
Fix the error reported while initializing the Qwen3-Reranker-0.6B model
with `--enable-lora`, and add a test case to verify the fix.

- vLLM version: v0.17.0
- vLLM main: 4034c3d32e
---------
Signed-off-by: paulyu12 <507435917@qq.com>
Co-authored-by: Mengqing Cao <cmq0113@163.com>
Author: yupeng
Date: 2026-03-15 17:55:42 +08:00
Committed by: GitHub
Parent: 7daccf4b64
Commit: 29f195a91c
6 changed files with 108 additions and 4 deletions


```diff
@@ -734,11 +734,12 @@ def get_parallel_op(disable_tp, prefix, layer, direct):
         return None, get_tp_group().rank_in_group, get_tp_group().world_size


-def get_replicated_op(disable_tp, prefix, layer) -> CustomReplicatedOp | None:
+def get_replicated_op(disable_tp, prefix, layer) -> tuple[CustomReplicatedOp | None, int | None, int | None]:
     if disable_tp:
-        return None
+        return None, None, None
-    return CustomReplicatedOp(layer)
+    custom_op = CustomReplicatedOp(layer)
+    return custom_op, custom_op.tp_rank, custom_op.tp_size


 def is_moe_layer(prefix: str) -> bool:
```
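The core of the fix is that `get_replicated_op` now returns a `(op, tp_rank, tp_size)` tuple instead of a bare op, matching the tuple shape that `get_parallel_op` already returns, so callers can unpack it uniformly on the LoRA path. A minimal, self-contained sketch of the new return contract follows; the `CustomReplicatedOp` stub and its `tp_rank`/`tp_size` values here are placeholders for illustration, not vLLM's or vllm-ascend's real class:

```python
# Sketch of the patched return contract (assumed stub, not the real class).
class CustomReplicatedOp:
    def __init__(self, layer):
        self.layer = layer
        # A replicated op is not sharded across tensor parallelism, so we
        # model it as rank 0 of a size-1 group for this illustration.
        self.tp_rank = 0
        self.tp_size = 1


def get_replicated_op(disable_tp, prefix, layer):
    """Return (op, tp_rank, tp_size); all None when TP is disabled."""
    if disable_tp:
        # Keep the tuple shape even in the disabled case, so callers can
        # always unpack three values without special-casing.
        return None, None, None
    custom_op = CustomReplicatedOp(layer)
    return custom_op, custom_op.tp_rank, custom_op.tp_size


op, rank, size = get_replicated_op(False, "model.layers.0", layer=object())
print(rank, size)  # prints: 0 1
```

Returning a consistently shaped tuple is what lets the caller write `op, rank, size = get_replicated_op(...)` on both branches, which is the unpacking the pre-fix single-value return broke.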