[Bugfix] Fixed an accuracy problem of sp with eagle3 (#5816)
### What this PR does / why we need it?
Fixes an accuracy problem when using eagle3 with sp. The problem is
described in
https://github.com/vllm-project/vllm-ascend/issues/5825.
This PR also adds a more precise check for whether the drafter should
use `sp`, and changes the drafter's `eager` mode to a real frontend
`eager` mode to avoid an `fx-graph` problem.
### Does this PR introduce _any_ user-facing change?
N/A
### How was this patch tested?
For simplicity, we test it as in
https://github.com/vllm-project/vllm-ascend/issues/5825,
and we get the same results as `eagle3` with `sp` disabled.
```text
--------------------------------------------------
total_num_output_tokens: 1000
num_drafts: 437
num_draft_tokens: 1311
num_accepted_tokens: 564
mean acceptance length: 2.29
--------------------------------------------------
acceptance at token 0: 0.62
acceptance at token 1: 0.40
acceptance at token 2: 0.27
acceptance at token 3: 0.00
acceptance at token 4: 0.00
acceptance at token 5: 0.00
```
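The reported mean acceptance length is consistent with the raw counters above, assuming the usual convention that the mean acceptance length counts the one verified target token per step plus the accepted draft tokens averaged over drafts. A quick sanity check in plain Python:

```python
# Sanity-check the speculative-decoding statistics reported above.
num_drafts = 437
num_draft_tokens = 1311
num_accepted_tokens = 564

# Each draft proposes the same number of tokens (1311 / 437 = 3 here).
tokens_per_draft = num_draft_tokens / num_drafts

# Mean acceptance length = 1 (the verified target token emitted per step)
# + accepted draft tokens averaged over drafts.
mean_acceptance_length = 1 + num_accepted_tokens / num_drafts

print(tokens_per_draft)                  # 3.0
print(round(mean_acceptance_length, 2))  # 2.29
```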
* vLLM version: v0.13.0
* vLLM main:
2f4e6548ef
Signed-off-by: drslark <slarksblood@qq.com>
```diff
@@ -275,7 +275,9 @@ class TestEagleProposerDummyRun(TestBase):
         self.mock_cpugpubuffer.stop()
         self.mock_supports_multimodal_inputs.stop()
 
-    @patch("vllm_ascend.spec_decode.eagle_proposer.get_forward_context")
+    # cpu does not support parallel-group, let alone `sp`
+    @patch("vllm_ascend.spec_decode.eagle_proposer.get_forward_context",
+           **{"return_value.sp_enabled": False})
     @patch("vllm_ascend.spec_decode.eagle_proposer.set_ascend_forward_context")
     def test_dummy_run_basic(self, mock_context, mock_get_context):
         num_tokens = 32
@@ -288,7 +290,9 @@ class TestEagleProposerDummyRun(TestBase):
         self.assertTrue(self.proposer.model.call_count == 4)
 
-    @patch("vllm_ascend.spec_decode.eagle_proposer.get_forward_context")
+    # cpu does not support parallel-group, let alone `sp`
+    @patch("vllm_ascend.spec_decode.eagle_proposer.get_forward_context",
+           **{"return_value.sp_enabled": False})
     @patch("vllm_ascend.spec_decode.eagle_proposer.set_ascend_forward_context")
     def test_dummy_run_with_prefill(self, mock_context, mock_get_context):
         mock_context.return_value.__enter__.return_value = None
@@ -306,6 +310,8 @@ class TestEagleProposerDummyRun(TestBase):
         mock_return_context = MagicMock()
         mock_return_context.cudagraph_runtime_mode = CUDAGraphMode.FULL
         mock_return_context.capturing = True
+        # cpu does not support parallel-group, let alone `sp`
+        mock_return_context.sp_enabled = False
         mock_get_context.return_value = mock_return_context
         self.proposer.use_cuda_graph = True
         # cpu does not support `torch.ops.vllm.maybe_pad_and_reduce`
@@ -326,6 +332,8 @@ class TestEagleProposerDummyRun(TestBase):
         mock_return_context = MagicMock()
         mock_return_context.cudagraph_runtime_mode = CUDAGraphMode.FULL
         mock_return_context.capturing = False
+        # cpu does not support parallel-group, let alone `sp`
+        mock_return_context.sp_enabled = False
         mock_get_context.return_value = mock_return_context
         self.proposer.use_cuda_graph = True
         # cpu does not support `torch.ops.vllm.maybe_pad_and_reduce`
```