[BugFix] Add async communication check for capturing mode (#8149)
### What this PR does / why we need it?

Introduce a check that avoids asynchronous communication in the `enable_dsa_cp_with_layer_shard` branch while in capturing mode. This change prevents potential stream and event issues when operating in graph/capturing mode, ensuring safer communication behavior.

### Does this PR introduce _any_ user-facing change?

No.

### How was this patch tested?

E2E test with dsv32 + FC1 + FULL_DECODE_ONLY + kv_transfer_config(kv_both)

---------

Signed-off-by: chenchuw886 <chenchuw@huawei.com>
Co-authored-by: chenchuw886 <chenchuw@huawei.com>
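For illustration, here is a minimal Python sketch of the kind of guard described above. It is not the actual vllm-ascend code, and all names below are hypothetical placeholders:

```python
# Minimal sketch of the guard this PR describes (hypothetical names, not the
# actual vllm-ascend code): fall back to synchronous communication while the
# graph is being captured, since side streams and events are unsafe to use there.

def _all_gather_sync(x):
    """Placeholder for a blocking collective issued on the default stream."""
    return x

def _all_gather_async(x):
    """Placeholder for a collective issued on a side stream with events."""
    return x

def communicate(x, *, dsa_cp_with_layer_shard: bool, is_graph_capturing: bool):
    # The async path is taken only on the layer-shard branch and only in eager
    # mode; during graph capture the plain synchronous path is used instead.
    if dsa_cp_with_layer_shard and not is_graph_capturing:
        return _all_gather_async(x)
    return _all_gather_sync(x)
```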
@@ -43,7 +43,7 @@ The following table lists additional configuration options available in vLLM Asc
| `enable_npugraph_ex` | bool | `False` | Whether to enable npugraph_ex graph mode. |
| `pa_shape_list` | list | `[]` | The custom shape list of page attention ops. |
| `enable_kv_nz` | bool | `False` | Whether to enable KV cache NZ layout. This option only takes effect on models using MLA (e.g., DeepSeek). |
-| `layer_sharding` | dict | `{}` | Configuration options for Layer Sharding Linear |
+| `layer_sharding` | dict | `{}` | Configuration options for Layer Sharding Linear. In PD-disaggregated deployments, it is supported only on P nodes with `kv_role="kv_producer"`. |
| `enable_sparse_c8` | bool | `False` | Whether to enable KV cache C8 in DSA models (e.g., DeepSeekV3.2 and GLM5). Not supported on A5 devices yet. |
| `enable_mc2_hierarchy_comm` | bool | `False` | Whether to enable dispatch/combine op inter-node communication over RoCE. |
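For reference, a hedged sketch of how a few of the options above could be assembled into a JSON string (assuming the `--additional-config` flag used elsewhere in these docs; the values are illustrative and the `layer_sharding` schema is only a placeholder):

```python
# Hedged sketch: build a JSON string for a few of the options listed above.
# Assumes the options are passed through vLLM's --additional-config flag;
# the values are illustrative and layer_sharding's schema is a placeholder.
import json

additional_config = {
    "enable_kv_nz": True,       # KV cache NZ layout (MLA models such as DeepSeek)
    "enable_sparse_c8": False,  # KV cache C8 for DSA models
    "layer_sharding": {},       # see the Layer Shard Linear doc for the real schema
}

print(json.dumps(additional_config))  # pass the printed string to --additional-config
```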
@@ -37,11 +37,15 @@ To enable **Layer Shard Linear**, specify the target linear layers using the `--
}'
```

+> **Restriction**
+> In PD-disaggregated deployments, Layer Sharding can only be enabled on the **P node** with `kv_role="kv_producer"`.
+> `kv_role="kv_consumer"` and `kv_role="kv_both"` are not supported.
+
---

## Supported Scenarios

-This feature can be enabled in any scenario, but delivers the greatest benefit in the following cases:
+This feature delivers the greatest benefit in the following cases:

### FlashComm2-enabled
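As a rough illustration of the `kv_role` restriction above, a producer-side `kv_transfer_config` might be built as follows (a sketch only, assuming vLLM's `--kv-transfer-config` JSON flag; the connector name is a placeholder and field names may differ across versions):

```python
# Hedged sketch: build a kv_transfer_config JSON string for the P (prefill) node.
# The connector name is a placeholder; per the restriction above, only
# kv_role="kv_producer" is compatible with Layer Sharding in PD-disaggregated setups.
import json

kv_transfer_config = {
    "kv_connector": "SomeAscendConnector",  # placeholder, use your deployment's connector
    "kv_role": "kv_producer",               # kv_consumer / kv_both are not supported here
}

print(json.dumps(kv_transfer_config))  # pass the printed string to --kv-transfer-config
```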
@@ -62,6 +66,8 @@ vllm serve \
With [DSA-CP](https://github.com/vllm-project/vllm-ascend/pull/4702), both `q_b_proj` and `o_proj` layers require large weight matrices to be stored per layer. Sharding these layers across NPUs helps fit extremely deep models (e.g., 61-layer architectures) into limited device memory.

+In PD-disaggregated deployments, this mode is supported only on the **P node** with `kv_role="kv_producer"`.
+
**Example configuration:**

```bash