vllm-ascend support chunked prefill (#1172)

### What this PR does / why we need it?
vllm-ascend now supports chunked prefill for MLA.


---------

Signed-off-by: fems14 <1804143737@qq.com>
fems14
2025-06-14 22:31:16 +08:00
committed by GitHub
parent a3b5af8307
commit ab5d110fcc
5 changed files with 303 additions and 20 deletions


@@ -31,6 +31,7 @@ The following table lists the additional configuration options available in vLLM
| `expert_tensor_parallel_size` | str | `0` | Expert tensor parallel size for the model to use. |
| `refresh` | bool | `false` | Whether to refresh the global ascend config content. Typically used in RLHF scenarios. |
| `expert_map_path` | str | None | When using expert load balancing for a MoE model, the path to the expert map file must be passed in. |
| `chunked_prefill_for_mla` | bool | `False` | Whether to enable the fused-operator-like chunked prefill for MLA. |
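
As a minimal sketch of how options from this table are passed in practice: vLLM accepts a platform-specific `additional_config` dict when constructing the engine. The model name below is a hypothetical example (MLA is used by DeepSeek-family models), and the `LLM` call is left commented so the snippet stays self-contained without vllm-ascend installed.

```python
# Build the additional config dict consumed by vLLM Ascend.
additional_config = {
    "chunked_prefill_for_mla": True,  # enable chunked prefill for MLA (this PR)
    "refresh": False,                 # do not refresh the global ascend config
}

# Assumed usage on an Ascend deployment (model name is illustrative):
# from vllm import LLM
# llm = LLM(model="deepseek-ai/DeepSeek-V2-Lite",
#           additional_config=additional_config)

print(additional_config["chunked_prefill_for_mla"])
```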

The details of each config option are as follows: