[BugFix] Fix mlapo accuracy problem related to weight processing. (#3850)
This PR fixes an mlapo accuracy problem related to weight processing.
It also adds back the mlapo-related e2e test that uses a quantized
DeepSeek model.
- vLLM version: v0.11.0rc3
- vLLM main: 83f478bb19
Signed-off-by: whx-sjtu <2952154980@qq.com>
```diff
@@ -826,9 +826,9 @@ class AscendMLAImpl(MLAAttentionImpl):
             ..., self.q_lora_rank:].contiguous()
         q_a_proj_wt = self.fused_qkv_a_proj.weight.data[
             ..., :self.q_lora_rank].contiguous()
-        kv_a_proj_wt = kv_a_proj_wt.contiguous()
+        kv_a_proj_wt = kv_a_proj_wt.t().contiguous()
         kv_a_proj_wt = trans_rope_weight(kv_a_proj_wt, self.qk_rope_head_dim)
-        kv_a_proj_wt = kv_a_proj_wt.contiguous()
+        kv_a_proj_wt = kv_a_proj_wt.t().contiguous()
         wd_qkv = torch.cat((kv_a_proj_wt, q_a_proj_wt), dim=-1)
         wd_qkv = wd_qkv.t().contiguous()
         wd_qkv = transdata(wd_qkv,
```
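For context, here is a minimal sketch of why the added transposes matter. It assumes (as the name suggests, but the real vllm-ascend helper may differ in detail) that `trans_rope_weight` de-interleaves the rope portion along the last dimension of the tensor it is given; `trans_rope_weight_sketch` and all shapes below are hypothetical illustrations, not the actual implementation. Since an `nn.Linear` weight is stored as `[out_features, in_features]` and the rope sub-dimension lives on the output axis, the weight must be transposed before the reordering is applied, or the permutation scrambles input features instead:

```python
import torch

def trans_rope_weight_sketch(w: torch.Tensor, rope_dim: int) -> torch.Tensor:
    # Hypothetical stand-in for trans_rope_weight: de-interleave the last
    # `rope_dim` entries of the LAST dimension (pairs -> [evens | odds]).
    # The real helper in vllm-ascend may differ in detail.
    nope, rope = w[..., :-rope_dim], w[..., -rope_dim:]
    return torch.cat((nope, rope[..., 0::2], rope[..., 1::2]), dim=-1)

# Illustrative shapes only: a [out_features, in_features] linear weight
# whose trailing output dims hold the rope channels.
w = torch.randn(576, 7168)

# Without the transpose the permutation lands on input features:
wrong = trans_rope_weight_sketch(w, 64)
# With the transpose (as in the fix) it lands on output features:
right = trans_rope_weight_sketch(w.t().contiguous(), 64).t().contiguous()

assert wrong.shape == right.shape and not torch.equal(wrong, right)
```

The doubled `.t().contiguous()` in the diff follows this pattern: move the output axis last, apply the rope reordering, then restore the original layout before `kv_a_proj_wt` is concatenated with `q_a_proj_wt` into `wd_qkv`.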