【feat】switch for fusion op gmmswigluquant (#5992)
### What this PR does / why we need it?
Add an additional config parameter to control whether the gmmswigluquant
fusion operator is enabled; it defaults to True. When enabled on a
deployment with a small number of devices, the gmmswigluquant fused
operator can cause some performance degradation, so this switch allows it
to be turned off.
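For reference, a minimal sketch of how such a switch could be passed through `additional_config` when constructing the engine. The exact key name under `ascend_fusion_config` (here `enable_gmmswigluquant`) and the model path are assumptions for illustration, not taken from this PR:

```python
# Hypothetical usage sketch: key name and model path are placeholders.
from vllm import LLM

llm = LLM(
    model="path/to/GLM-4.6-w8a8",  # placeholder model path
    additional_config={
        "ascend_fusion_config": {
            # Set to False to fall back to the unfused path on small deployments.
            "enable_gmmswigluquant": False,  # hypothetical key name
        },
    },
)
```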
### Does this PR introduce _any_ user-facing change?
Yes. It adds an `ascend_fusion_config` option in `additional_config` that lets users toggle the gmmswigluquant fused operator; the operator remains enabled by default.
### How was this patch tested?
- vLLM version: v0.13.0
- vLLM main: 2c24bc6996
#### Perf
Test model: GLM-4.6 (w8a8)
- single A3 node (ep16, tp16), async-scheduling, mtp, FULL_DECODE_ONLY
- bs=1, input_lens=32000, output_lens=1024

Without this PR: TPOT 32.22 ms
With this PR: TPOT 30.23 ms
---------
Signed-off-by: zjks98 <zhangjiakang4@huawei.com>
Co-authored-by: zjks98 <zhangjiakang4@huawei.com>
```diff
@@ -206,6 +206,13 @@ class NPUPlatform(Platform):
         elif model_config and hasattr(model_config.hf_text_config, "index_topk"):
             vllm_config.cache_config.cache_dtype = str(model_config.dtype).replace("torch.", "")

+        ascend_fusion_config = ascend_config.ascend_fusion_config
+        if ascend_fusion_config:
+            vllm_config.additional_config.setdefault("ascend_fusion_config", {}).update(
+                vars(ascend_fusion_config) if not isinstance(ascend_fusion_config, dict) else ascend_fusion_config
+            )
+
         if model_config is None:
             logger.warning("Model config is missing. This may indicate that we are running a test case")
             enforce_eager = False
```
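The hunk above folds the Ascend fusion settings into `additional_config` without clobbering keys the user already set. Below is a self-contained sketch of that merge pattern, using an illustrative stand-in dataclass (the real `ascend_fusion_config` class lives in vllm-ascend and may differ):

```python
# Sketch of the setdefault/update merge pattern from the diff above.
from dataclasses import dataclass


@dataclass
class AscendFusionConfig:  # illustrative stand-in for the real config class
    enable_gmmswigluquant: bool = True


additional_config: dict = {}  # stand-in for vllm_config.additional_config
ascend_fusion_config = AscendFusionConfig()

if ascend_fusion_config:
    # Accept either a dataclass-like object (converted via vars()) or a plain dict,
    # and merge it into any ascend_fusion_config entry the user already provided.
    additional_config.setdefault("ascend_fusion_config", {}).update(
        vars(ascend_fusion_config)
        if not isinstance(ascend_fusion_config, dict)
        else ascend_fusion_config
    )

print(additional_config)  # {'ascend_fusion_config': {'enable_gmmswigluquant': True}}
```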