enable npugraph_ex (#5120)
### What this PR does / why we need it?
This PR exposes the enable switch for npugraph_ex so that subsequent optimization work can build on it.
### Does this PR introduce _any_ user-facing change?
Previously, setting the `enable_npugraph_ex` switch raised an error; that error has now been removed so that subsequent optimization work can proceed.
Basic functionality is available in the Q3 releases of CANN and torch_npu, while advanced optimizations will depend on the Q4 release.
### How was this patch tested?
llm = LLM(
    model=model,
    enforce_eager=False,
    additional_config={
        "enable_npugraph_ex": True,
    },
    compilation_config={
        "cudagraph_mode": "FULL_DECODE_ONLY",
        "cudagraph_capture_sizes": [16],
    },
)
- vLLM version: v0.12.0
- vLLM main:
ad32e3e19c
---------
Signed-off-by: p00465316 <panchao13@huawei.com>
Co-authored-by: p00465316 <panchao13@huawei.com>
Co-authored-by: weijinqian0 <1184188277@qq.com>
@@ -89,13 +89,13 @@ def npugraph_ex_compile(
        tuple,
        args=([return_value], ))
    output_node.args = (tuple_node, )
    fx_graph.recompile()
    graph.recompile()

import torchair

# TODO: use a better way to lazy register replacement, instead of import one by one
# As an example, we directly import here to register replacement.
import vllm_ascend.compilation.npugraph_ex_passes.add_rms_norm_quant  # noqa
# import vllm_ascend.compilation.npugraph_ex_passes.add_rms_norm_quant  # noqa

torch.npu.set_compile_mode(jit_compile=False)
config = torchair.CompilerConfig()
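The hunk above wraps an FX graph's single return value in a tuple node and then recompiles. A minimal, self-contained sketch of that output-node rewrite is shown below; the module `M` and input `x` are illustrative assumptions, not the PR's actual code, and the NPU-specific parts (torchair, `torch.npu`) are omitted so it runs on plain PyTorch.

```python
import torch
import torch.fx as fx


class M(torch.nn.Module):
    def forward(self, x):
        return x + 1


fx_graph = fx.symbolic_trace(M())
graph = fx_graph.graph

# Locate the graph's output node and the value it currently returns.
output_node = next(n for n in graph.nodes if n.op == "output")
return_value = output_node.args[0]

# Insert a node calling tuple([return_value]) so the graph always
# returns a tuple-shaped output, then point the output node at it.
with graph.inserting_before(output_node):
    tuple_node = graph.call_function(tuple, args=([return_value],))
output_node.args = (tuple_node,)

# Regenerate the module's forward() from the modified graph.
fx_graph.recompile()

out = fx_graph(torch.zeros(2))
```

After the rewrite, the traced module returns `(x + 1,)` as a one-element tuple instead of a bare tensor, which lets downstream passes rely on a uniform output shape.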