### What this PR does / why we need it?
This PR adopts `LLMDataDist` for KV cache registration and a `pull_blocks`-style disaggregated prefill implementation. The interface implementation mainly follows the design of the NIXL PR:
https://github.com/vllm-project/vllm/pull/17751/files#diff-7eaad0b7dee0626bf29d10081b0f0c5e3ea15a4af97e7b182a4e0d35f8346953
This PR can be tested with the following steps:
- Generate the rank table for all machines.
- Execute `toy_proxy.py` to launch the disaggregated prefill proxy server, specifying the prefill IP/port and the decode IP/port.
- Run the prefill server and the decode server.
- Send requests to the disaggregated prefill proxy.
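
Once the proxy is up, a request can be sent to it with the usual OpenAI-compatible completions body. A minimal sketch (the proxy address and port are placeholders for whatever you passed to `toy_proxy.py`; the model name `dsv3` matches the `--served-model-name` used in the serve command below):

```python
import json

# Hypothetical request payload for the disaggregated prefill proxy.
# "dsv3" must match the --served-model-name of the prefill/decode servers.
payload = {
    "model": "dsv3",
    "prompt": "The future of AI is",
    "max_tokens": 64,
    "temperature": 0.0,
}

# The proxy forwards standard OpenAI-compatible JSON bodies unchanged.
body = json.dumps(payload)
print(body)
```

For example, post `body` to `http://<proxy-ip>:<proxy-port>/v1/completions` with `Content-Type: application/json` (e.g. via `curl -d`).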
### Does this PR introduce _any_ user-facing change?
### How was this patch tested?
- vLLM version: v0.9.2
- vLLM main: 8d0a01a5f2
---------
Signed-off-by: ganyi <pleaplusone.gy@gmail.com>
Signed-off-by: machenglong <machenglong_yewu@cmss.chinamobile.com>
Signed-off-by: liziyu179 <3475441767@qq.com>
Signed-off-by: underfitc <hucong24@huawei.com>
Signed-off-by: zouyida2052 <zouyida@huawei.com>
Signed-off-by: liziyu <liziyu16@huawei.com>
Signed-off-by: underfituu <hzhucong@163.com>
Co-authored-by: machenglong <machenglong_yewu@cmss.chinamobile.com>
Co-authored-by: liziyu179 <3475441767@qq.com>
Co-authored-by: underfitc <hucong24@huawei.com>
Co-authored-by: zouyida2052 <zouyida@huawei.com>
Co-authored-by: liziyu <liziyu16@huawei.com>
Co-authored-by: underfituu <hzhucong@163.com>
Example decode-side launch script:

```bash
export HCCL_IF_IP=141.61.39.117
export GLOO_SOCKET_IFNAME="enp48s3u1u1"
export TP_SOCKET_IFNAME="enp48s3u1u1"
export HCCL_SOCKET_IFNAME="enp48s3u1u1"
export DISAGGREGATED_PREFILL_RANK_TABLE_PATH=path-to-rank-table

export OMP_PROC_BIND=false
export OMP_NUM_THREADS=100

export VLLM_USE_V1=1

vllm serve model_path \
  --host 0.0.0.0 \
  --port 20002 \
  --tensor-parallel-size 1 \
  --seed 1024 \
  --served-model-name dsv3 \
  --max-model-len 2000 \
  --max-num-batched-tokens 2000 \
  --trust-remote-code \
  --gpu-memory-utilization 0.9 \
  --kv-transfer-config \
  '{"kv_connector": "LLMDataDistCMgrConnector",
    "kv_buffer_device": "npu",
    "kv_role": "kv_consumer",
    "kv_parallel_size": 1,
    "kv_port": "20001",
    "engine_id": 0,
    "kv_connector_module_path": "vllm_ascend.distributed.llmdatadist_connector_v1_a3"
  }' \
  --additional-config \
  '{"enable_graph_mode": "True"}'
```
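
The script above configures the decode side (`"kv_role": "kv_consumer"`). A prefill server would use the mirrored producer role; a sketch of just the connector config fragment, assuming the port and engine id are adjusted per deployment:

```bash
# Prefill-side counterpart (illustrative values; the key difference is kv_role):
--kv-transfer-config \
'{"kv_connector": "LLMDataDistCMgrConnector",
  "kv_buffer_device": "npu",
  "kv_role": "kv_producer",
  "kv_parallel_size": 1,
  "kv_port": "20001",
  "engine_id": 0,
  "kv_connector_module_path": "vllm_ascend.distributed.llmdatadist_connector_v1_a3"
}'
```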