Files
xc-llm-ascend/examples/external_online_dp/run_dp_template.sh
whx a5554b6661 [Feat][Doc] Add a load_balance_dp_proxy in examples and external dp doc. (#4265)
### What this PR does / why we need it?
This PR adds a load-balance DP proxy server that can be used in the
external DP scenario when Disaggregated Prefill is not enabled. It also
adds documentation for external DP and the load-balance DP proxy server.

### Does this PR introduce _any_ user-facing change?
No

### How was this patch tested?
See the new doc.

- vLLM version: v0.11.0
- vLLM main:
2918c1b49c

---------

Signed-off-by: whx-sjtu <2952154980@qq.com>
2025-11-21 16:33:23 +08:00

32 lines
1001 B
Bash

export HCCL_IF_IP=your_ip_here
export GLOO_SOCKET_IFNAME=your_socket_ifname_here
export TP_SOCKET_IFNAME=your_socket_ifname_here
export HCCL_SOCKET_IFNAME=your_socket_ifname_here
export VLLM_LOGGING_LEVEL="info"
export OMP_PROC_BIND=false
export OMP_NUM_THREADS=10
export PYTORCH_NPU_ALLOC_CONF=expandable_segments:True
export HCCL_DETERMINISTIC=True
export HCCL_BUFFSIZE=1024
export TASK_QUEUE_ENABLE=1
export ASCEND_RT_VISIBLE_DEVICES=$1
vllm serve model_path \
--host 0.0.0.0 \
--port $2 \
--data-parallel-size $3 \
--data-parallel-rank $4 \
--data-parallel-address $5 \
--data-parallel-rpc-port $6 \
--tensor-parallel-size $7 \
--enable-expert-parallel \
--seed 1024 \
--served-model-name dsv3 \
--max-model-len 8192 \
--max-num-batched-tokens 2048 \
--max-num-seqs 16 \
--trust-remote-code \
--gpu-memory-utilization 0.9 \
--quantization ascend \
--speculative-config '{"num_speculative_tokens": 1, "method":"deepseek_mtp"}'
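
The template takes seven positional arguments: visible devices, HTTP port, DP size, DP rank, DP master address, DP RPC port, and TP size. A minimal launcher sketch below shows one way to drive it for a single node; the model path, IP address, ports, and device layout are all assumptions for illustration (replace `echo` with an actual invocation, e.g. `bash`, to launch).

```shell
#!/bin/bash
# Hypothetical launcher for run_dp_template.sh; every value here is an
# assumption, not part of the original template.
TEMPLATE=./run_dp_template.sh
DP_SIZE=2                 # --data-parallel-size
TP_SIZE=4                 # --tensor-parallel-size per DP rank
DP_ADDR=192.168.0.10      # --data-parallel-address (master node IP)
RPC_PORT=13389            # --data-parallel-rpc-port
BASE_PORT=8000            # rank i serves HTTP on BASE_PORT + i

for rank in $(seq 0 $((DP_SIZE - 1))); do
  # Give each rank its own contiguous block of NPUs:
  # rank 0 -> 0,1,2,3 and rank 1 -> 4,5,6,7 when TP_SIZE=4.
  first=$((rank * TP_SIZE))
  devices=$(seq -s, "$first" $((first + TP_SIZE - 1)))
  port=$((BASE_PORT + rank))
  echo bash "$TEMPLATE" "$devices" "$port" "$DP_SIZE" "$rank" "$DP_ADDR" "$RPC_PORT" "$TP_SIZE"
done
```

Printing the commands first makes it easy to verify the argument order against the template before launching; in a real multi-node setup each rank's command would instead be run on its own node.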