[Bugfix] Fix the bug of incorrect precision (#2479)

### What this PR does / why we need it?
Fixes an incorrect-precision bug: the quantizer patched `forward_oot` on `vllm.model_executor.layers.layernorm.RMSNorm`, but on Ascend the active layernorm class is `vllm_ascend.ops.layernorm.AscendRMSNorm`, whose own `forward_oot` override meant the quantized wrapper was never invoked. This PR repoints the patch at `AscendRMSNorm`.

- vLLM version: v0.10.0
- vLLM main: 53415653ff

---------

Signed-off-by: weiguihua2 <weiguihua2@huawei.com>
Author: weiguihua2
Date: 2025-08-22 17:08:56 +08:00
Committed by: GitHub
Parent: f0be3eed84
Commit: dd04a96ee3

```diff
@@ -75,8 +75,8 @@ class VLLMAscendQuantizer:
             "vllm.model_executor.layers.layernorm.RMSNorm", "__init__",
             [wrapper_rmsnorm_init])
         VLLMAscendQuantizer.apply_patch(
-            "vllm.model_executor.layers.layernorm.RMSNorm",
-            "forward_oot", [wrapper_rmsnorm_forward_oot])
+            "vllm_ascend.ops.layernorm.AscendRMSNorm", "forward_oot",
+            [wrapper_rmsnorm_forward_oot])
         VLLMAscendQuantizer.apply_patch(
             "vllm.model_executor.layers.vocab_parallel_embedding.VocabParallelEmbedding",
             "__init__", [wrapper_vocab_parallel_embedding_init])
```
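The fix hinges on how this style of method patching interacts with subclass overrides. The sketch below is a minimal, hypothetical reimplementation (the real `VLLMAscendQuantizer.apply_patch` signature and wrapper contract are assumptions, not taken from the vllm-ascend source): it resolves a dotted class path, wraps a method in place, and demonstrates why patching the base `RMSNorm` never affects an `AscendRMSNorm` that defines its own `forward_oot`.

```python
import importlib
import sys
import types


def apply_patch(target_path: str, method_name: str, wrappers: list) -> None:
    """Hypothetical stand-in for VLLMAscendQuantizer.apply_patch.

    Resolves a dotted class path, then replaces `method_name` on that class
    with the result of threading the original through each wrapper.
    """
    module_path, class_name = target_path.rsplit(".", 1)
    cls = getattr(importlib.import_module(module_path), class_name)
    method = getattr(cls, method_name)
    for wrapper in wrappers:
        method = wrapper(method)
    setattr(cls, method_name, method)


# Demo module mirroring the RMSNorm / AscendRMSNorm relationship.
demo = types.ModuleType("demo_layers")


class RMSNorm:
    def forward_oot(self, x):
        return x


class AscendRMSNorm(RMSNorm):
    # Overrides the base method, so a patch on RMSNorm is shadowed here.
    def forward_oot(self, x):
        return x


demo.RMSNorm = RMSNorm
demo.AscendRMSNorm = AscendRMSNorm
sys.modules["demo_layers"] = demo


def wrapper_rmsnorm_forward_oot(orig):
    # Stand-in for the quantized forward: +1 marks that the wrapper ran.
    def patched(self, x):
        return orig(self, x) + 1
    return patched


# Pre-fix behavior: patching the base class leaves the subclass untouched.
apply_patch("demo_layers.RMSNorm", "forward_oot",
            [wrapper_rmsnorm_forward_oot])
assert AscendRMSNorm().forward_oot(0) == 0  # wrapper never runs -> wrong precision

# The fix: patch the class that is actually used on Ascend.
apply_patch("demo_layers.AscendRMSNorm", "forward_oot",
            [wrapper_rmsnorm_forward_oot])
assert AscendRMSNorm().forward_oot(0) == 1  # wrapper now takes effect
```

This is why the one-line target change in the diff is sufficient: Python attribute lookup finds the subclass override first, so the patch must land on `AscendRMSNorm` itself.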