[bugfix] fix test_camem failed with triton-ascend (#5492)

### What this PR does / why we need it?
This fixes a bug that occurred when running `test_camem.py` in the
triton-ascend environment, which raised `NPU function error:
aclrtGetMemInfo(ACL_HBM_MEM, &device_free, &device_total)`.

- vLLM version: v0.13.0
- vLLM main:
5326c89803

---------

Signed-off-by: Meihan-chen <jcccx.cmh@gmail.com>
Author: meihanc
Date: 2026-01-05 20:10:54 +08:00 (committed by GitHub)
Commit: 16b1bee804 (parent 58e8d19c35)
5 changed files with 10 additions and 21 deletions


@@ -88,6 +88,11 @@ class NPUWorker(WorkerBase):
         # register patch for vllm
         from vllm_ascend.utils import adapt_patch
         adapt_patch()
+        # Import _inductor for graph mode execution with triton
+        # This lazy import avoids torch_npu re-initialization in patch
+        from vllm.triton_utils import HAS_TRITON
+        if HAS_TRITON:
+            import torch_npu._inductor  # noqa: F401
         # Register ops when worker init.
         from vllm_ascend import ops
         ops.register_dummy_fusion_op()
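The fix relies on a lazy, guarded import: the optional backend module is only imported when its feature flag is set and the dependency is actually installed, so any import-time side effects (such as the torch_npu re-initialization mentioned above) are deferred until worker init. A minimal sketch of this pattern, using a hypothetical `lazy_import` helper rather than the PR's actual code:

```python
import importlib
import importlib.util


def lazy_import(enabled: bool, module_name: str):
    """Import module_name only when enabled and installed; return None otherwise.

    Deferring the import like this avoids triggering the module's
    import-time side effects unless the feature is actually in use.
    """
    if not enabled:
        return None
    # find_spec lets us probe availability without importing, so a missing
    # optional dependency does not raise ImportError here.
    if importlib.util.find_spec(module_name) is None:
        return None
    return importlib.import_module(module_name)
```

In the worker above, the same idea appears inline: `torch_npu._inductor` is imported only under `if HAS_TRITON:`, inside the method body rather than at module top level.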