xc-llm-ascend/examples/disaggregated_prefill_v1/run_server.sh
linfeng-yuan e0757dc376 [0.11.0]fix the configuration conflicts in documentation (#4824)
### What this PR does / why we need it?
Fix configuration errors in our documentation.

### Does this PR introduce _any_ user-facing change?
No.

### How was this patch tested?
NA.

Signed-off-by: linfeng-yuan <1102311262@qq.com>
2025-12-09 15:37:06 +08:00


# Network configuration for this node: HCCL_IF_IP is the host IP used for HCCL
# communication; the *_SOCKET_IFNAME variables pin GLOO/TP/HCCL traffic to a
# specific NIC.
export HCCL_IF_IP=141.61.39.117
export GLOO_SOCKET_IFNAME="enp48s3u1u1"
export TP_SOCKET_IFNAME="enp48s3u1u1"
export HCCL_SOCKET_IFNAME="enp48s3u1u1"

# Rank table describing the prefill/decode instances for disaggregated prefill.
export DISAGGREGATED_PREFILL_RANK_TABLE_PATH=path-to-rank-table

# OpenMP thread settings; VLLM_USE_V1 selects the vLLM V1 engine.
export OMP_PROC_BIND=false
export OMP_NUM_THREADS=10
export VLLM_USE_V1=1
vllm serve model_path \
--host 0.0.0.0 \
--port 20002 \
--tensor-parallel-size 1 \
--seed 1024 \
--served-model-name dsv3 \
--max-model-len 2000 \
--max-num-batched-tokens 2000 \
--trust-remote-code \
--gpu-memory-utilization 0.9 \
--kv-transfer-config \
'{"kv_connector": "LLMDataDistCMgrConnector",
"kv_buffer_device": "npu",
"kv_role": "kv_consumer",
"kv_parallel_size": 1,
"kv_port": "20001",
"engine_id": 0,
"kv_connector_module_path": "vllm_ascend.distributed.llmdatadist_connector_v1_a3"
}' \
--additional-config \
'{"enable_graph_mode": "True"}'\