[Feat] Support running MTP in full graph mode (#3892)
### What this PR does / why we need it?
Currently, the MTP model still runs in eager mode even when full graph mode
is enabled. This PR adapts MTP to full-graph capture and execution: when the
graph mode is set to `FULL_DECODE_ONLY`, MTP runs as a full graph to improve
performance.
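For context, here is a minimal launch sketch of the mode this PR targets; the model name and the exact `speculative_config` keys are illustrative assumptions, not taken from this PR:

```python
# Sketch: enable MTP speculative decoding with full-graph decode capture.
# Model name and speculative-config keys below are assumptions for illustration.
from vllm import LLM, SamplingParams

llm = LLM(
    model="deepseek-ai/DeepSeek-V3",           # assumed: a model with an MTP head
    speculative_config={
        "method": "deepseek_mtp",              # assumed MTP drafter method name
        "num_speculative_tokens": 1,
    },
    compilation_config={
        "cudagraph_mode": "FULL_DECODE_ONLY",  # run decode steps as one captured graph
    },
)
out = llm.generate(["Hello"], SamplingParams(max_tokens=16))
print(out[0].outputs[0].text)
```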
The changes, which cover both the `disable_padded_drafter_batch=True` and
`disable_padded_drafter_batch=False` cases, include:
1. Add `_mtp_graph_params` in `acl_graph.py` to keep the graph data of the
MTP model separate from that of the main model (see the sketch after this
list).
2. Pad some attention metadata in `mla_v1.py` when running in full-graph mode.
3. Keep the addresses of the essential tensors used in `model.forward` fixed,
so that graph replay reads the intended data.
4. Adapt to the ACL graph capture framework:
    1). Rebuild the MTP model with `ACLGraphWrapper`.
    2). Build common attention metadata when capture starts in the MTP `dummy_run`.
    3). Update the common attention metadata in MTP.
    4). Adapt the data updates when `num_speculative_tokens > 1`.
5. Add an MTP patch to adapt to vLLM v0.11.0.
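As a reference for change 1, a minimal sketch of the module-level MTP graph-parameter store; the field layout of `GraphParams` is an assumption inferred from the new unit tests below, not a verbatim copy of `acl_graph.py`:

```python
# Sketch of the MTP-specific graph-parameter store (field layout is an
# assumption inferred from the tests; the actual acl_graph.py may differ).
from dataclasses import dataclass
from typing import Any, Optional


@dataclass
class GraphParams:
    # Per-capture-size state, keyed by the padded token count of each graph.
    events: dict[int, list[Any]]
    workspaces: dict[int, Any]
    handles: dict[int, list[Any]]
    attn_params: dict[int, list[Any]]


# Kept separate from the main model's graph params so that MTP capture and
# replay never alias the main model's workspaces or events.
_mtp_graph_params: Optional[GraphParams] = None


def set_mtp_graph_params(capture_sizes: list[int]) -> None:
    """Initialize one empty slot per MTP capture size."""
    global _mtp_graph_params
    _mtp_graph_params = GraphParams(
        events={s: [] for s in capture_sizes},
        workspaces={s: None for s in capture_sizes},
        handles={s: [] for s in capture_sizes},
        attn_params={s: [] for s in capture_sizes},
    )


def get_mtp_graph_params() -> Optional[GraphParams]:
    return _mtp_graph_params


def update_mtp_graph_params_workspaces(size: int, workspace: Any) -> None:
    # Replace the workspace recorded for a given capture size.
    if _mtp_graph_params is not None:
        _mtp_graph_params.workspaces[size] = workspace
```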
Existing Issues:
1. When `disable_padded_drafter_batch=True` and running in full-graph mode,
the MTP data for first-round requests is abnormal. We still need to identify
the root cause.
2. When `disable_padded_drafter_batch=False` and running in full-graph mode,
the acceptance rate of the second and third tokens decreases (for example,
with `num_speculative_tokens=3`, the acceptance rate of the first token is
90%, but the second drops to 50% from 60% and the third drops to 20% from
30%). The cause is a mismatch in the data processed after the model runs,
introduced by another PR: it works fine in eager and PIECEWISE modes but
misbehaves in full-graph mode. We will submit a bugfix once we have a
solution.
### Does this PR introduce _any_ user-facing change?
### How was this patch tested?
- vLLM version: v0.11.0
- vLLM main:
2918c1b49c
---------
Signed-off-by: anon189Ty <Stari_Falcon@outlook.com>
```diff
@@ -21,7 +21,9 @@ from vllm.config import CUDAGraphMode, VllmConfig
 from vllm.forward_context import BatchDescriptor, ForwardContext
 
 from tests.ut.base import TestBase
-from vllm_ascend.compilation.acl_graph import ACLGraphEntry, ACLGraphWrapper
+from vllm_ascend.compilation.acl_graph import (
+    ACLGraphEntry, ACLGraphWrapper, get_mtp_graph_params, set_mtp_graph_params,
+    update_mtp_graph_params_workspaces)
 
 
 class TestACLGraphEntry(TestBase):
@@ -718,3 +720,24 @@ class TestACLGraphWrapper(TestBase):
 
         unwrapped = wrapper.unwrap()
         self.assertEqual(unwrapped, self.mock_runnable)
+
+
+class TestMTPGraphParams(TestBase):
+
+    def test_set_mtp_graph_params(self):
+        with patch('vllm_ascend.compilation.acl_graph._mtp_graph_params',
+                   new=None):
+            set_mtp_graph_params([4])
+            from vllm_ascend.compilation.acl_graph import _mtp_graph_params
+            self.assertIsNotNone(_mtp_graph_params)
+
+    @patch('vllm_ascend.compilation.acl_graph._mtp_graph_params')
+    def test_update_mtp_graph_params_workspaces(self, mtp_graph_params_mock):
+        mtp_graph_params_mock.workspaces = {4: 5}
+        update_mtp_graph_params_workspaces(4, 6)
+        self.assertEqual(mtp_graph_params_mock.workspaces[4], 6)
+
+    @patch('vllm_ascend.compilation.acl_graph._mtp_graph_params')
+    def test_get_mtp_graph_params(self, mtp_graph_params_mock):
+        graph_params = get_mtp_graph_params()
+        self.assertIs(mtp_graph_params_mock, graph_params)
```
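The new tests can be run on their own; the file path below is an assumption based on the `tests.ut.base` import, as the diff does not show the file name:

```python
# Hypothetical local run of the new MTP graph-param tests
# (the test file path is assumed; adjust to the repository layout).
import pytest

pytest.main(["-v", "-k", "MTPGraphParams",
             "tests/ut/compilation/test_acl_graph.py"])
```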