[Bugfix] Add missing draft_attn_metadatas parameter to fix MTP test (#6232)
### What this PR does / why we need it?

Fix the MTP test failure caused by accessing the non-existent attribute `forward_context.draft_attn_metadatas`.

**Root cause:** In `AscendAttentionBackendImpl.update_graph_params`, the code incorrectly accessed `forward_context.draft_attn_metadatas`, but the `ForwardContext` class does not have this attribute. The original code passed this value via a function parameter.

**Fix:** Add a `draft_attn_metadatas` parameter through the entire call chain:

- the `update_full_graph_params` function in `acl_graph.py`
- all `update_graph_params` methods in the attention backends
- pass the parameter correctly in `eagle_proposer.py`

Also applied Gemini's suggestion to default `vllm_config=None` in `AscendAttentionCPImpl.update_graph_params` for API consistency. Related to item 9 in #5463.

### Does this PR introduce _any_ user-facing change?

No.

### How was this patch tested?

This fixes the CI test failure: `test_deepseek_mtp_correctness[True-FULL_DECODE_ONLY-2-wemaster/deepseek_mtp_main_random_bf16]`

Signed-off-by: lico67373 <918688502@qq.com>
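The root cause above can be reduced to a small, self-contained sketch. `ForwardContext` and the `update_graph_params` signature here are simplified stand-ins for the real vllm-ascend classes, kept only to show why reading an undeclared attribute fails while an explicit parameter works:

```python
from dataclasses import dataclass


@dataclass
class ForwardContext:
    # Note: no `draft_attn_metadatas` attribute is defined here,
    # mirroring the real class described in the bug report.
    is_draft_model: bool = False


def update_graph_params_buggy(forward_context):
    # Buggy version: reads an attribute that ForwardContext never defines.
    return forward_context.draft_attn_metadatas  # raises AttributeError


def update_graph_params_fixed(forward_context, draft_attn_metadatas=None):
    # Fixed version: the caller passes the value explicitly,
    # as the original code did before the regression.
    return draft_attn_metadatas


ctx = ForwardContext(is_draft_model=True)
try:
    update_graph_params_buggy(ctx)
except AttributeError:
    print("buggy path: AttributeError")
print(update_graph_params_fixed(ctx, draft_attn_metadatas=[{"q": 1}]))
```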
```diff
@@ -379,6 +379,7 @@ class AscendAttentionBackendImpl(AttentionImpl):
         vllm_config,
         speculative_config=None,
         num_dcp_pcp_tokens=None,
+        draft_attn_metadatas=None,
     ):
         if using_paged_attention(num_tokens, vllm_config):
             # Paged Attention update logic
@@ -436,7 +437,7 @@ class AscendAttentionBackendImpl(AttentionImpl):
         # FIA update logic
         if forward_context.is_draft_model:
             graph_params = get_draft_graph_params()
-            attn_metadata = forward_context.draft_attn_metadatas
+            attn_metadata = draft_attn_metadatas
             attn_keys = list(attn_metadata[0].keys())
         else:
             graph_params = get_graph_params()
```
```diff
@@ -281,9 +281,10 @@ class AscendAttentionCPImpl(AscendAttentionBackendImpl):
         update_stream,
         forward_context,
         num_tokens,
-        vllm_config,
+        vllm_config=None,
         speculative_config=None,
         num_dcp_pcp_tokens=None,
+        draft_attn_metadatas=None,
     ):
         graph_params = get_graph_params()
         # FIXME: Behold! We are using a temporary hack here to update the args
```
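The `vllm_config` change in the `AscendAttentionCPImpl` hunk is about keeping override signatures uniform across backends. A hypothetical sketch (class and method names mirror the diff, but the bodies are illustrative only) of why a required positional parameter in one subclass breaks callers that invoke every backend the same way:

```python
class Base:
    def update_graph_params(self, num_tokens, vllm_config=None,
                            draft_attn_metadatas=None):
        return ("base", vllm_config, draft_attn_metadatas)


class ChildInconsistent(Base):
    # Before the fix: vllm_config is required here, so a caller that
    # omits it (relying on the base default) fails only for this subclass.
    def update_graph_params(self, num_tokens, vllm_config,
                            draft_attn_metadatas=None):
        return ("child", vllm_config, draft_attn_metadatas)


class ChildConsistent(Base):
    # After the fix: same defaults as the base, so all backends can be
    # called with an identical argument list.
    def update_graph_params(self, num_tokens, vllm_config=None,
                            draft_attn_metadatas=None):
        return ("child", vllm_config, draft_attn_metadatas)


for impl in (Base(), ChildConsistent()):
    impl.update_graph_params(num_tokens=8)  # works with defaults

try:
    ChildInconsistent().update_graph_params(num_tokens=8)
except TypeError as e:
    print("inconsistent subclass:", e)
```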
```diff
@@ -292,6 +292,7 @@ class AscendMlaCPImpl(AscendMLAImpl):
         vllm_config=None,
         speculative_config=None,
         num_dcp_pcp_tokens=None,
+        draft_attn_metadatas=None,
     ):
         if forward_context.is_draft_model:
             graph_params = get_draft_graph_params()
```
```diff
@@ -733,6 +733,7 @@ class AscendMLAImpl(MLAAttentionImpl):
         vllm_config=None,
         speculative_config=None,
         num_dcp_pcp_tokens=None,
+        draft_attn_metadatas=None,
     ):
         if forward_context.is_draft_model:
             graph_params = get_draft_graph_params()
```
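All of the hunks above apply the same pattern: thread `draft_attn_metadatas` through the call chain instead of reading it off the context. An illustrative sketch of that chain (function names mirror the files named in the PR description, but the signatures and bodies are simplified assumptions, not the real vllm-ascend code):

```python
def update_graph_params(forward_context, draft_attn_metadatas=None):
    # Backend level: select metadata from the explicit parameter,
    # not from an attribute on forward_context.
    if forward_context["is_draft_model"]:
        return draft_attn_metadatas
    return forward_context["attn_metadata"]


def update_full_graph_params(forward_context, draft_attn_metadatas=None):
    # Middle of the chain (acl_graph.py in the diff): forward the
    # parameter unchanged to the backend.
    return update_graph_params(forward_context,
                               draft_attn_metadatas=draft_attn_metadatas)


# Top of the chain (eagle_proposer.py in the diff): the proposer owns
# the draft metadata and passes it down explicitly.
ctx = {"is_draft_model": True, "attn_metadata": None}
result = update_full_graph_params(ctx, draft_attn_metadatas=[{"seq_lens": [4]}])
```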