### What this PR does / why we need it?
- Delete the environment variable
`VLLM_ASCEND_ENABLE_FLASHCOMM2_OSHARED`.
- Introduce `layer_sharding` as a configurable feature in
`additional_config`.
- Revise the term "shared weight" to "shard weight."

Configuration: the feature is opt-in via the `--additional-config`
argument:
```
--additional-config '{
"layer_sharding": ["o_proj", "q_b_proj"]
}'
```
This is orthogonal to standard tensor parallelism and weight replication
strategies and is treated as a separate, explicit feature. It can be used
in any scenario, combined with the flashcomm2 feature
(https://github.com/vllm-project/vllm-ascend/pull/3232) or the ShardedCP
feature (#4702), to achieve significant performance gains.
- vLLM version: v0.12.0
- vLLM main: ad32e3e19c
---------
Signed-off-by: zzhx1 <zzh_201018@outlook.com>
Signed-off-by: zzhxx <zhangzihang23@mails.ucas.ac.cn>
Signed-off-by: chenxiao <Jaychou1620@Gmail.com>
Co-authored-by: clrs97 <524936896@qq.com>
Co-authored-by: Levi-JQ <yujinqi2@huawei.com>
Co-authored-by: chenxiao <Jaychou1620@Gmail.com>
# Feature Guide
This section provides a detailed usage guide for vLLM Ascend features.
:::{toctree}
:caption: Feature Guide
:maxdepth: 1
graph_mode
quantization
sleep_mode
structured_output
lora
eplb_swift_balancer
netloader
Multi_Token_Prediction
dynamic_batch
kv_pool
external_dp
large_scale_ep
ucm_deployment
Fine_grained_TP
layer_sharding
speculative_decoding
context_parallel
:::