[FEAT] Support DeepSeek-V3.2 with FULL_DECODE_ONLY mode (#4706)
### What this PR does / why we need it?
The first commit supports `FULL_DECODE_ONLY`:
- Update `AscendSFAMetadataBuilder` to use `num_input_tokens` for
slicing slots and positions, ensuring fixed tensor shapes.
- Implement padding logic for `query_start_loc` in `NPUModelRunner` to
support uniform decode in full graph mode, aligning with GPU runner
behavior.
- Adjust MLA cosine cache allocation to occur independently of graph
mode and switch to using device-resident sequence lengths for attention
metadata.
- Remove redundant slicing of hidden states and outputs in
`AscendSFAImpl` and optimize `sin`/`cos` cache updates.
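The fixed-shape slicing described above can be illustrated with a pure-Python sketch (the function and parameter names are hypothetical; the real builder slices pre-allocated device tensors rather than Python lists):

```python
def pad_to_num_input_tokens(positions, slot_mapping, num_input_tokens,
                            pad_slot=-1):
    """Pad or slice per-token metadata to a fixed length so the tensor
    shapes seen by a captured graph never vary between steps.

    Hypothetical sketch: padded slots use a sentinel value so writes to
    them are harmless, and padded positions default to 0.
    """
    n = len(positions)
    if n >= num_input_tokens:
        # More tokens than the capture size: slice down to it.
        return (positions[:num_input_tokens],
                slot_mapping[:num_input_tokens])
    pad = num_input_tokens - n
    return (positions + [0] * pad,
            slot_mapping + [pad_slot] * pad)
```

The point is that the builder keys on `num_input_tokens` (the padded count) instead of the number of actually scheduled tokens, so every decode step presents identical shapes to the captured graph.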
The second commit takes MTP (multi-token prediction) into account, applying the same fixed-shape slicing and `query_start_loc` padding to the speculative-decoding path.
The remaining commits are bugfixes.
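The `query_start_loc` padding for uniform decode can be sketched in pure Python (names are illustrative, not the actual vllm-ascend API):

```python
def pad_query_start_loc(query_start_loc, max_num_reqs):
    """Pad the cumulative query offsets to a fixed request count by
    repeating the final offset, so padded requests contribute
    zero-length queries. Mirrors the GPU runner's uniform-decode
    handling; a hypothetical sketch, not the real implementation.
    """
    num_reqs = len(query_start_loc) - 1
    pad = max(0, max_num_reqs - num_reqs)
    return query_start_loc + [query_start_loc[-1]] * pad
```

For example, three single-token decode requests with a fixed capture size of five requests would go from `[0, 1, 2, 3]` to `[0, 1, 2, 3, 3, 3]`: the repeated trailing offset makes the padded requests empty while keeping the tensor length constant.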
### Does this PR introduce _any_ user-facing change?
None.
### How was this patch tested?
Test cases needed.
- vLLM version: v0.12.0
- vLLM main: ad32e3e19c
---------
Signed-off-by: Yizhou Liu <liu_yizhou@outlook.com>
```diff
@@ -124,6 +124,9 @@ class TestAscendSFAMetadataBuilder(TestBase):
         common_attn_metadata.attn_mask = None
         common_attn_metadata.attn_state = AscendAttentionState.ChunkedPrefill
         common_attn_metadata.block_table_tensor = torch.randn(100, 4)
+        common_attn_metadata.cos = None
+        common_attn_metadata.sin = None
+        common_attn_metadata.num_input_tokens = 100

         model = MagicMock()
         model.model.layers = [MagicMock() for _ in range(10)]
@@ -166,6 +169,9 @@ class TestAscendSFAMetadataBuilder(TestBase):
         common_attn_metadata.attn_mask = None
         common_attn_metadata.attn_state = AscendAttentionState.ChunkedPrefill
         common_attn_metadata.block_table_tensor = torch.randn(100, 4)
+        common_attn_metadata.cos = None
+        common_attn_metadata.sin = None
+        common_attn_metadata.num_input_tokens = 100

         model = MagicMock()
         model.model.layers = [MagicMock() for _ in range(10)]
```