[Feature] implement eagle spec decoding for model runner v2 (#5840)
### What this PR does / why we need it?

This PR implements Eagle spec decoding for model runner v2. See RFC https://github.com/vllm-project/vllm-ascend/issues/5208.

### Does this PR introduce _any_ user-facing change?

No

### How was this patch tested?

vLLM version: v0.13.0

---------

Signed-off-by: Ronald1995 <ronaldautomobile@163.com>
@@ -174,7 +174,8 @@
 #
 # ** 6. File: worker/patch_triton.py**
 # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-# 1. `vllm.model_executor.layers.mamba.ops`, `vllm.model_executor.layers.fla.ops`
+# 1. `vllm.model_executor.layers.mamba.ops`, `vllm.model_executor.layers.fla.ops`,
+# `vllm.v1.worker.gpu.sample.gumbel.gumbel_sample`
 # Why:
 # Triton ops in vLLM perform poorly on NPU, and there is no dispatch mechanism for Triton ops.
 # How:
@@ -263,3 +264,15 @@
 # Future Plan:
 # Remove this patch when vLLM supports these operators.
 #
+# ** 12. File: worker/patch_v2_eagle.py**
+# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+# 1. `vllm.v1.worker.gpu.spec_decode.eagle.EagleSpeculator.propose`
+# Why:
+# The `propose` method uses torch.gather, but the gather operator
+# pollutes the arguments passed to it. The bug has been reported to the
+# Huawei CANN team but is not fixed yet.
+# How:
+# Clone the `out` attribute ahead of the gather call to avoid the bug.
+# Future Plan:
+# Remove this patch when CANN fixes the gather bug.
+#
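The patch above follows the same monkeypatch-with-defensive-copy pattern used throughout vllm-ascend's patch modules: wrap the buggy method and hand the backend op a clone so the caller's buffer survives. A minimal sketch of that pattern, using made-up stand-ins (`BuggyBackend`, `Speculator`) rather than vLLM's real classes:

```python
class BuggyBackend:
    @staticmethod
    def gather(out, indices):
        """Simulates a backend op that corrupts its input in place
        (analogous to the reported NPU gather bug)."""
        result = [out[i] for i in indices]
        out[0] = None  # side effect: pollutes the caller's buffer
        return result


class Speculator:
    def __init__(self):
        self.out = [10, 20, 30]

    def propose(self, indices):
        # Original (buggy) path: passes self.out directly to gather,
        # so the backend's side effect corrupts self.out.
        return BuggyBackend.gather(self.out, indices)


def patched_propose(self, indices):
    # Workaround: clone the buffer ahead of the gather call, so only
    # the throwaway copy gets polluted. (For torch tensors this would
    # be `self.out.clone()`.)
    return BuggyBackend.gather(list(self.out), indices)


# Monkeypatch, as the worker/patch_v2_eagle.py module does for the
# real EagleSpeculator.propose.
Speculator.propose = patched_propose

s = Speculator()
assert s.propose([2, 0]) == [30, 10]
assert s.out == [10, 20, 30]  # buffer not polluted by the gather
```

The cost of the workaround is one extra copy per `propose` call, which is why the patch is documented as temporary: once CANN fixes the gather operator, the clone can be dropped.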