[Feat] support basic pcp&dcp for qwen3next (#6091)
### What this PR does / why we need it?
This PR implements Context Parallelism (CP) support for the Qwen3-Next
model, covering both PCP (Prefill Context Parallelism) and DCP
(Decode Context Parallelism).
- vLLM version: v0.15.0
- vLLM main:
f176443446
---------
Signed-off-by: SunnyLee219 <3294305115@qq.com>
Signed-off-by: Jingchun Gao <gaojingchun1@huawei.com>
Signed-off-by: 白永斌 <baiyongbin3@h-partners.com>
Signed-off-by: Bai Yongbin <845473182@qq.com>
Co-authored-by: SunnyLee219 <3294305115@qq.com>
Co-authored-by: Jingchun Gao <gaojingchun1@huawei.com>
Co-authored-by: 白永斌 <baiyongbin3@h-partners.com>
Co-authored-by: Mengqing Cao <cmq0113@163.com>
```diff
@@ -25,7 +25,12 @@ class AscendPCPMetadata:
     head_attn_nomask_seqlens: torch.Tensor = None
     tail_attn_nomask_seqlens: torch.Tensor = None
     q_full_idx: torch.Tensor = None
     pcp_use_hybrid_attn: bool = False
     pcp_unpad_mask: torch.Tensor = None
     pcp_allgather_restore_idx: list[int] | None = None
     pcp_fa_query_idx: torch.Tensor = None
     pcp_padded_tokens_fla: int = 0
     pcp_enter_fa_restore_idx: torch.Tensor = None
     block_table_cp: torch.Tensor = None
     valid_block_ids: torch.Tensor = None
     prefill_q_cum_seqlens: torch.Tensor = None
```
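For context, a minimal sketch of how context parallelism typically shards a sequence, assuming the common load-balanced ("zigzag") layout for causal attention: each CP rank receives a head chunk and a mirrored tail chunk so that attention work is roughly equal across ranks. This is an illustrative sketch only, not the PR's actual implementation; `zigzag_shard` and `restore_idx` are hypothetical helpers (the latter plays a role analogous in spirit to `pcp_allgather_restore_idx` above), and plain Python lists stand in for tensors.

```python
# Hedged sketch of zigzag context-parallel sharding (NOT the PR's code).
# The sequence is cut into 2 * cp_size chunks; rank r gets chunk r (head)
# and chunk 2*cp_size - 1 - r (tail), balancing causal-attention cost.

def zigzag_shard(tokens: list, cp_size: int) -> list[list]:
    """Return one (head + tail) shard per CP rank."""
    n = len(tokens)
    assert n % (2 * cp_size) == 0, "sequence must be padded first"
    chunk = n // (2 * cp_size)
    chunks = [tokens[i * chunk:(i + 1) * chunk] for i in range(2 * cp_size)]
    return [chunks[r] + chunks[2 * cp_size - 1 - r] for r in range(cp_size)]

def restore_idx(seq_len: int, cp_size: int) -> list[int]:
    """Index list that undoes the shard after concatenating (all-gathering)
    the per-rank shards back into one buffer."""
    # Chunk order as it appears in the gathered buffer: head_0, tail_0, ...
    order = []
    for r in range(cp_size):
        order.append(r)
        order.append(2 * cp_size - 1 - r)
    chunk = seq_len // (2 * cp_size)
    pos = {c: i for i, c in enumerate(order)}  # chunk id -> gathered slot
    idx = []
    for c in range(2 * cp_size):  # emit chunks in original order
        base = pos[c] * chunk
        idx.extend(range(base, base + chunk))
    return idx

# Example: 8 tokens over 2 CP ranks.
shards = zigzag_shard(list(range(8)), cp_size=2)
gathered = shards[0] + shards[1]
restored = [gathered[i] for i in restore_idx(8, cp_size=2)]
```

Here `shards` is `[[0, 1, 6, 7], [2, 3, 4, 5]]` and `restored` recovers the original token order, which is exactly the job a gather-restore index performs after the all-gather step.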