enable npugraph_ex (#5120)

### What this PR does / why we need it?
This PR exposes the enable switch for npugraph_ex so that subsequent
optimization work can build on it.

### Does this PR introduce _any_ user-facing change?
Previously, setting the `enable_npugraph_ex` switch raised an error; this
PR removes that error so the switch can actually be enabled, which better
supports subsequent optimization efforts.
Basic functionality is available in the CANN and torch_npu Q3 releases,
while advanced optimizations will depend on the Q4 release.
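
To illustrate the behavior change, here is a minimal self-contained sketch of the config-reading pattern the diff below exercises. The `*Stub` class and function names are hypothetical stand-ins; only the `additional_config` / `enable_npugraph_ex` field names and the `init_ascend_config`-style entry point mirror what appears in this PR's test diff.

```python
# Hypothetical stand-ins for VllmConfig / init_ascend_config: the flag is
# now read from additional_config (defaulting to False) instead of
# raising NotImplementedError when set.
class VllmConfigStub:
    def __init__(self):
        self.additional_config = {}


class AscendConfigStub:
    def __init__(self, vllm_config):
        ac = vllm_config.additional_config or {}
        self.enable_npugraph_ex = bool(ac.get("enable_npugraph_ex", False))


def init_ascend_config_stub(vllm_config):
    # Previously this path would raise for enable_npugraph_ex=True;
    # after this PR the flag is simply recorded on the config object.
    return AscendConfigStub(vllm_config)


cfg = VllmConfigStub()
cfg.additional_config = {"enable_npugraph_ex": True}
ascend = init_ascend_config_stub(cfg)
print(ascend.enable_npugraph_ex)  # True
```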

### How was this patch tested?
```python
llm = LLM(
    model=model,
    enforce_eager=False,
    additional_config={
        "enable_npugraph_ex": True,
    },
    compilation_config={
        "cudagraph_mode": "FULL_DECODE_ONLY",
        "cudagraph_capture_sizes": [16],
    },
)
```


- vLLM version: v0.12.0
- vLLM main: ad32e3e19c

---------

Signed-off-by: p00465316 <panchao13@huawei.com>
Co-authored-by: p00465316 <panchao13@huawei.com>
Co-authored-by: weijinqian0 <1184188277@qq.com>
This commit is contained in:
panchao-hub, 2025-12-18 09:08:40 +08:00, committed by GitHub
parent 39bdd4cfaa, commit 8069442b41
4 changed files with 107 additions and 13 deletions


```diff
@@ -64,13 +64,13 @@ class TestAscendConfig(TestBase):
     @_clean_up_ascend_config
     def test_init_ascend_config_enable_npugraph_ex(self):
-        with self.assertRaises(NotImplementedError):
-            test_vllm_config = VllmConfig()
-            test_vllm_config.additional_config = {
-                "enable_npugraph_ex": True,
-                "refresh": True,
-            }
-            init_ascend_config(test_vllm_config)
+        test_vllm_config = VllmConfig()
+        test_vllm_config.additional_config = {
+            "enable_npugraph_ex": True,
+            "refresh": True,
+        }
+        ascend_config = init_ascend_config(test_vllm_config)
+        self.assertTrue(ascend_config.enable_npugraph_ex)

     @_clean_up_ascend_config
     def test_get_ascend_config(self):
```