天垓100 run error #1

Open
opened 2025-09-25 17:05:15 +08:00 by lukeyan · 1 comment
  1. Below is the information for my 天垓100:

![image.png](/attachments/57974d52-2e3d-4546-8fba-b1976d16357b)

  2. Run the following command:

```bash
docker run -it --rm -p 8000:80 \
    --name vllm-iluvatar \
    -v ~/models/Qwen2.5-7B-Instruct:/model:ro \
    --privileged \
    -e TENSOR_PARALLEL_SIZE=1 \
    -e PREFIX_CACHING=true \
    -e MAX_MODEL_LEN=10000 \
    enginex-iluvatar-vllm:bi100
```
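Before chasing the server error, it may help to first confirm that the container can see the devices at all. A minimal sanity check, assuming the image ships the Iluvatar `ixsmi` status tool (the CoreX counterpart of `nvidia-smi`; its name and availability inside this particular image are assumptions):

```bash
# Bypass the image's startup script and only query the devices.
# If this prints no GPUs, the failure is in device passthrough or the
# host driver, not in vLLM itself.
docker run --rm --privileged --entrypoint ixsmi enginex-iluvatar-vllm:bi100
```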

The error output is as follows:

```
declare -x CLASSPATH=".:/root/apps/jdk1.8.0_411/lib/dt.jar:/root/apps/jdk1.8.0_411/lib/tools.jar:/root/apps/apache-jmeter-5.6.3/lib/ext/ApacheJMeter_core.jar:/root/apps/apache-jmeter-5.6.3/lib/jorphan.jar:/root/apps/apache-jmeter-5.6.3/lib/logkit-2.0.jar:"
declare -x COREX_VERSION="3.2.1"
declare -x DEBIAN_FRONTEND="noninteractive"
declare -x HOME="/root"
declare -x HOSTNAME="373baf50c141"
declare -x ILUVATAR_VISIBLE_DEVICES="GPU-4e7c3600-3229-5ab6-9191-a6737d9c3f99,GPU-ea53618c-8b25-5196-8220-4d9b783a5fce,GPU-7b2fbbea-ce1c-5e68-9f4d-7da6ea071243,GPU-c8d8c226-1559-5a7b-9b77-20cd755ad7fb,GPU-d1c87501-19d4-51eb-822e-0125d4cb3642,GPU-06a4f8d6-24bd-52ae-80dd-6f92bf6cff8c,GPU-26c45656-b77a-554b-9424-d0a1600ddc5f,GPU-78e9d822-5ab3-5b2d-9096-6072221b38fa"
declare -x JAVA_HOME="/root/apps/jdk1.8.0_411"
declare -x JMETER_HOME="/root/apps/apache-jmeter-5.6.3"
declare -x JRE_HOME="/root/apps/jdk1.8.0_411/jre"
declare -x KUBERNETES_PORT="tcp://10.43.0.1:443"
declare -x KUBERNETES_PORT_443_TCP="tcp://10.43.0.1:443"
declare -x KUBERNETES_PORT_443_TCP_ADDR="10.43.0.1"
declare -x KUBERNETES_PORT_443_TCP_PORT="443"
declare -x KUBERNETES_PORT_443_TCP_PROTO="tcp"
declare -x KUBERNETES_SERVICE_HOST="10.43.0.1"
declare -x KUBERNETES_SERVICE_PORT="443"
declare -x KUBERNETES_SERVICE_PORT_HTTPS="443"
declare -x LANG="en_US.utf8"
declare -x LC_ALL="en_US.utf8"
declare -x LD_LIBRARY_PATH="/usr/local/corex/lib64:/usr/local/openmpi/lib"
declare -x MAX_MODEL_LEN="10000"
declare -x OLDPWD
declare -x PATH="/root/apps/apache-jmeter-5.6.3/bin:/root/apps/jdk1.8.0_411/bin:/usr/local/corex/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/local/corex/lib64/python3/dist-packages/bin:/usr/local/openmpi/bin"
declare -x PREFIX_CACHING="true"
declare -x PWD="/workspace"
declare -x PYTHONPATH="/usr/local/corex/lib64/python3/dist-packages"
declare -x RPC_CLIENT_PATH="/usr/local/iluvatar/bin/"
declare -x SHLVL="1"
declare -x TENSOR_PARALLEL_SIZE="1"
declare -x TERM="xterm"
Thu 25 Sep 2025 08:57:55 AM UTC
--------------------------------------------------
Starting VLLM OpenAI API Server...
Using effective arguments:
Host (--host): 0.0.0.0
Port (--port): 80
Enforce Eager (--enforce-eager): Enabled
Disable Log Req (--disable-log-requests): Enabled
Served Model Name (--served-model-name): llm
Model Path (--model): /model
Max Model Length (--max-model-len): 10000
Tensor Parallel Size (--tensor-parallel-size): 1
Max Num Seqs (--max-num-seqs): 64

--------------------------------------------------
Full cmd:
python3 -m vllm.entrypoints.openai.api_server --host 0.0.0.0 --port 80 --enforce-eager --disable-log-requests --enable-prefix-caching --served-model-name llm --model /model --max-model-len 10000 --tensor-parallel-size 1 --max-num-seqs 64
--------------------------------------------------
CUDA Error:
File: /opt/apps/ixformer/src/impl/ixformer/cuda/cuda_functions.cpp
Line: 7
Error code: 100
Error text: no CUDA-capable device is detected
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/runpy.py", line 187, in _run_module_as_main
    mod_name, mod_spec, code = _get_module_details(mod_name, _Error)
  File "/usr/local/lib/python3.10/runpy.py", line 110, in _get_module_details
    __import__(pkg_name)
  File "/usr/local/lib/python3.10/site-packages/vllm/__init__.py", line 3, in <module>
    from vllm.engine.arg_utils import AsyncEngineArgs, EngineArgs
  File "/usr/local/lib/python3.10/site-packages/vllm/engine/arg_utils.py", line 11, in <module>
    from vllm.config import (CacheConfig, ConfigFormat, DecodingConfig,
  File "/usr/local/lib/python3.10/site-packages/vllm/config.py", line 12, in <module>
    from vllm.model_executor.layers.quantization import QUANTIZATION_METHODS
  File "/usr/local/lib/python3.10/site-packages/vllm/model_executor/__init__.py", line 1, in <module>
    from vllm.model_executor.parameter import (BasevLLMParameter,
  File "/usr/local/lib/python3.10/site-packages/vllm/model_executor/parameter.py", line 7, in <module>
    from vllm.distributed import get_tensor_model_parallel_rank
  File "/usr/local/lib/python3.10/site-packages/vllm/distributed/__init__.py", line 1, in <module>
    from .communication_op import *
  File "/usr/local/lib/python3.10/site-packages/vllm/distributed/communication_op.py", line 6, in <module>
    from .parallel_state import get_tp_group
  File "/usr/local/lib/python3.10/site-packages/vllm/distributed/parallel_state.py", line 40, in <module>
    from ixformer.distributed import all_reduce
  File "/usr/local/corex/lib64/python3/dist-packages/ixformer/__init__.py", line 15, in <module>
    from .autograd import enable_grad, no_grad
  File "/usr/local/corex/lib64/python3/dist-packages/ixformer/autograd/__init__.py", line 2, in <module>
    from .function import Function, FunctionCtx, compatible_torch_function
  File "/usr/local/corex/lib64/python3/dist-packages/ixformer/autograd/function.py", line 7, in <module>
    from ixformer.contrib.torch.function import compatible_torch_function
  File "/usr/local/corex/lib64/python3/dist-packages/ixformer/contrib/torch/__init__.py", line 13, in <module>
    from .compatiable_mode import *
  File "/usr/local/corex/lib64/python3/dist-packages/ixformer/contrib/torch/compatiable_mode.py", line 30, in <module>
    refresh_memory_allocator()
  File "/usr/local/corex/lib64/python3/dist-packages/ixformer/contrib/torch/compatiable_mode.py", line 26, in refresh_memory_allocator
    _ORIGIN_MEMORY_ALLOCATOR_PTR = _C.get_memory_allocator()
RuntimeError: CUDA_CHECK ERROR
```
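Error code 100 maps to `cudaErrorNoDevice` in the CUDA-compatible runtime ("no CUDA-capable device is detected"): importing `ixformer` triggers device initialization, and the runtime cannot enumerate any GPU inside the container, even though `ILUVATAR_VISIBLE_DEVICES` lists eight devices. A hedged host-side check, assuming `ixsmi` is installed alongside the host driver:

```bash
# If this also fails on the host, the kernel driver is not loaded;
# if it succeeds on the host but not inside the container, suspect
# a driver/SDK version mismatch (see the comment below).
ixsmi
```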

Owner

This is likely caused by a mismatched Iluvatar driver version; update the driver to 3.2.1.
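To confirm the mismatch, compare the host driver version against the SDK version baked into the image. A sketch, with the assumption that `ixsmi` reports the host driver version:

```bash
# SDK version inside the image (the environment dump above shows 3.2.1).
docker run --rm --entrypoint /bin/bash enginex-iluvatar-vllm:bi100 -c 'echo "$COREX_VERSION"'

# Host driver version; it should match the 3.2.1 SDK.
ixsmi
```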