[Doc] Add --shm-size option to Docker command for qwen3 vl 235B (#3519)
### What this PR does / why we need it?
Adds a shared-memory size option (`--shm-size`) to the Docker run command. If `--shm-size` is not specified, Docker defaults to a 64 MB `/dev/shm`; under heavy workload the vLLM `EngineCore` process may then crash with a core dump.

### Does this PR introduce _any_ user-facing change?
No

### How was this patch tested?
Done

Closes: https://github.com/vllm-project/vllm-ascend/issues/3513

- vLLM version: v0.11.0rc3
- vLLM main: https://github.com/vllm-project/vllm/commit/v0.11.0

---------

Signed-off-by: likeful <irayki@gmail.com>
Signed-off-by: leijie2015 <irayki@gmail.com>
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
```diff
@@ -37,6 +37,7 @@ docker run --rm \
     --device /dev/davinci_manager \
     --device /dev/devmm_svm \
     --device /dev/hisi_hdc \
+    --shm-size 256g \
     -v /usr/local/dcmi:/usr/local/dcmi \
     -v /usr/local/Ascend/driver/tools/hccn_tool:/usr/local/Ascend/driver/tools/hccn_tool \
     -v /usr/local/bin/npu-smi:/usr/local/bin/npu-smi \
```
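As a sanity check, the effective shared-memory size can be inspected from inside the container: Docker mounts `/dev/shm` as a tmpfs sized by `--shm-size`, so `df` should report roughly 256G after this change (and only 64M without it). This is a minimal sketch; the image name `vllm-ascend` is a placeholder, not taken from the patch.

```shell
# Inspect the shared-memory mount inside a container.
# With --shm-size 256g the Size column should show ~256G;
# without the flag, Docker's default is a 64 MB tmpfs.
docker run --rm --shm-size 256g vllm-ascend df -h /dev/shm

# The same command also works on the host or in any running
# container to verify the current /dev/shm capacity:
df -h /dev/shm
```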