[npugraph_ex] Enable npugraph_ex by default (#6664)
### What this PR does / why we need it?
This pull request enables the `npugraph_ex` backend by default to improve performance on Ascend NPUs, as proposed in the [RFC](https://github.com/vllm-project/vllm-ascend/issues/6214).

### Does this PR introduce _any_ user-facing change?
Yes. `npugraph_ex` is now enabled by default. Users can disable it by setting `enable: false` in the `npugraph_ex_config` section of the `additional_config`.

### How was this patch tested?
CI passed. The changes are covered by existing and new E2E tests (`test_aclgraph_accuracy.py`) and unit tests (`test_ascend_config.py`) that have been updated to reflect the new default behavior. The tests verify correctness and consistency with `npugraph_ex` enabled and disabled, as well as with the new static kernel option.

Signed-off-by: huyuanquan1 <huyuanquan1@huawei.com>
Co-authored-by: huyuanquan1 <huyuanquan1@huawei.com>
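The opt-out described above follows the usual default-plus-override config pattern. The sketch below illustrates that pattern with a minimal stand-alone merge function; `resolve_config` and `DEFAULTS` are hypothetical names for illustration and are not the actual vllm-ascend implementation.

```python
# Hypothetical sketch: a default-on flag that users can override via
# additional_config. Only the config keys mirror the PR; the merge
# logic itself is an assumption, not vllm-ascend's real code.

DEFAULTS = {"npugraph_ex_config": {"enable": True}}  # enabled by default per this PR


def resolve_config(additional_config=None):
    """Merge a user-supplied additional_config over the defaults."""
    merged = {section: dict(values) for section, values in DEFAULTS.items()}
    for section, values in (additional_config or {}).items():
        merged.setdefault(section, {}).update(values)
    return merged


# Default behavior: npugraph_ex is enabled.
assert resolve_config()["npugraph_ex_config"]["enable"] is True

# User opts out, as described in the PR description.
user_cfg = {"npugraph_ex_config": {"enable": False}}
assert resolve_config(user_cfg)["npugraph_ex_config"]["enable"] is False
```

Copying each defaults section before updating keeps `DEFAULTS` itself immutable across calls, so one request's override cannot leak into the next.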
@@ -249,3 +249,17 @@
# Make unquantized_gemm a customop.
# Future Plan:
#   Remove this patch when vLLM supports the operator as a customop.
#
# ** 13. File: worker/patch_npugraph_ex_triton.py**
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# 1. `torchair.core._concrete_graph.ValuePack`,
#    `torchair.npu_fx_compiler._unpack_meta`,
#    `torchair.npu_fx_compiler._NpuGraphConverter._unpack_npu`
#    Why:
#      In the Triton scenario, the npugraph_ex backend needs to process the value pack of the input parameters.
#    How:
#      Supplement the relevant processing logic through patches.
#    Related PR (if no, explain why):
#      https://gitcode.com/Ascend/torchair/pull/2575
#    Future Plan:
#      Remove this patch when the PTA version used by vllm-ascend has been upgraded.
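The patch file documented above replaces torchair internals with wrappers that add the missing value-pack handling. The sketch below shows that general monkey-patch pattern on a stand-in namespace; the module object, function bodies, and the `value_pack` key are all hypothetical stand-ins, not the real torchair internals.

```python
# Hypothetical sketch of the monkey-patch pattern: save the original
# attribute, install a wrapper that supplements the input handling,
# and delegate to the original. Names here are stand-ins for the
# torchair functions listed in the patch documentation.
import types

fake_torchair = types.SimpleNamespace()


def _unpack_meta(pack):
    # Stand-in for torchair.npu_fx_compiler._unpack_meta: only
    # understands a plain dict with a "meta" entry.
    return pack["meta"]


fake_torchair._unpack_meta = _unpack_meta

# --- patch: keep a reference to the original, then replace it ---
_orig_unpack_meta = fake_torchair._unpack_meta


def patched_unpack_meta(pack):
    # Supplement handling for a wrapped "value pack" shape (assumed
    # for illustration) before delegating to the original function.
    if "value_pack" in pack:
        pack = pack["value_pack"]
    return _orig_unpack_meta(pack)


fake_torchair._unpack_meta = patched_unpack_meta

# The patched function handles both the old and the new input shape.
assert fake_torchair._unpack_meta({"meta": 3}) == 3
assert fake_torchair._unpack_meta({"value_pack": {"meta": 1}}) == 1
```

Keeping `_orig_unpack_meta` around is what makes such a patch removable later, which matches the stated Future Plan of dropping it once the upgraded PTA version ships the fix upstream.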