From 565868a2a621ee30c0d017d8f3dc2d97c4371a9f Mon Sep 17 00:00:00 2001
From: LoganJane <42287016+LoganJane@users.noreply.github.com>
Date: Wed, 18 Mar 2026 17:16:35 +0800
Subject: [PATCH] [doc] add doc for Kimi-K2.5.md (#7371)

### What this PR does / why we need it?

Upload doc for Kimi-K2.5 on Ascend, based on vllm-ascend:v0.17.0rc1.

- vLLM version: v0.17.0
- vLLM main: https://github.com/vllm-project/vllm/commit/4034c3d32e30d01639459edd3ab486f56993876d

---------

Signed-off-by: g00887675/loganJane
Signed-off-by: LoganJane
Co-authored-by: g00887675/loganJane
---
 docs/source/tutorials/models/Kimi-K2.5.md | 706 ++++++++++++++++++++++
 docs/source/tutorials/models/index.md     |   1 +
 2 files changed, 707 insertions(+)
 create mode 100644 docs/source/tutorials/models/Kimi-K2.5.md

diff --git a/docs/source/tutorials/models/Kimi-K2.5.md b/docs/source/tutorials/models/Kimi-K2.5.md
new file mode 100644
index 00000000..5180e5c3
--- /dev/null
+++ b/docs/source/tutorials/models/Kimi-K2.5.md
@@ -0,0 +1,706 @@
+# Kimi-K2.5
+
+## Introduction
+
+Kimi K2.5 is an open-source, native multimodal agentic model built through continual pretraining on approximately 15 trillion mixed visual and text tokens atop Kimi-K2-Base. It seamlessly integrates vision and language understanding with advanced agentic capabilities, instant and thinking modes, as well as conversational and agentic paradigms.
+
+The `Kimi-K2.5` model is first supported in `vllm-ascend:v0.17.0rc1`.
+
+This document walks through the main verification steps for the model, including supported features, feature configuration, environment preparation, single-node and multi-node deployment, and accuracy and performance evaluation.
+
+## Supported Features
+
+Refer to [supported features](../../user_guide/support_matrix/supported_models.md) for the model's supported feature matrix.
+
+Refer to [feature guide](../../user_guide/feature_guide/index.md) for each feature's configuration.
+
+## Environment Preparation
+
+### Model Weight
+
+- `Kimi-K2.5-w4a8` (w4a8 quantized version): [Download model weight](https://modelscope.cn/models/Eco-Tech/Kimi-K2.5-W4A8).
+
+It is recommended to download the model weight to a directory shared by all nodes, such as `/root/.cache/`.
+
+### Verify Multi-node Communication (Optional)
+
+If you want to deploy a multi-node environment, verify multi-node communication according to [verify multi-node communication environment](../../installation.md#verify-multi-node-communication).
+
+### Installation
+
+You can use our official docker image to run `Kimi-K2.5` directly.
+
+Select an image based on your machine type and start it on your node; refer to [using docker](../../installation.md#set-up-using-docker).
+
+```{code-block} bash
+   :substitutions:
+# Update --device according to your device (Atlas A2: /dev/davinci[0-7], Atlas A3: /dev/davinci[0-15]).
+# Update the vllm-ascend image according to your environment.
+# Note: you should download the weight to /root/.cache in advance.
+export IMAGE=m.daocloud.io/quay.io/ascend/vllm-ascend:|vllm_ascend_version|
+export NAME=vllm-ascend
+
+# Run the container using the defined variables.
+# Note: If you are running docker with a bridge network, please expose the ports needed for multi-node communication in advance.
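+# Tip: once the container is up, you can run `npu-smi info` inside it to confirm that
+# all NPUs are visible; the npu-smi binary is mounted into the container below.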
+docker run --rm \
+--name $NAME \
+--net=host \
+--shm-size=1g \
+--device /dev/davinci0 \
+--device /dev/davinci1 \
+--device /dev/davinci2 \
+--device /dev/davinci3 \
+--device /dev/davinci4 \
+--device /dev/davinci5 \
+--device /dev/davinci6 \
+--device /dev/davinci7 \
+--device /dev/davinci_manager \
+--device /dev/devmm_svm \
+--device /dev/hisi_hdc \
+-v /usr/local/dcmi:/usr/local/dcmi \
+-v /usr/local/Ascend/driver/tools/hccn_tool:/usr/local/Ascend/driver/tools/hccn_tool \
+-v /usr/local/bin/npu-smi:/usr/local/bin/npu-smi \
+-v /usr/local/Ascend/driver/lib64/:/usr/local/Ascend/driver/lib64/ \
+-v /usr/local/Ascend/driver/version.info:/usr/local/Ascend/driver/version.info \
+-v /etc/ascend_install.info:/etc/ascend_install.info \
+-v /root/.cache:/root/.cache \
+-it $IMAGE bash
+```
+
+If you want to deploy a multi-node environment, you need to set up the environment on each node.
+
+## Deployment
+
+### Single-node Deployment
+
+- The quantized model `Kimi-K2.5-w4a8` can be deployed on 1 Atlas 800 A3 (64G × 16).
+
+Run the following script to start online inference.
+
+```shell
+#!/bin/sh
+# nic_name is the network interface name corresponding to local_ip of the current node.
+# Both values can be obtained through ifconfig.
+nic_name="xxxx"
+local_ip="xxxx"
+
+# [Optional] jemalloc
+# jemalloc gives better performance; if `libjemalloc.so` is installed on your machine, you can turn it on.
+# export LD_PRELOAD=/usr/lib/aarch64-linux-gnu/libjemalloc.so.2:$LD_PRELOAD
+
+# AIV
+export HCCL_OP_EXPANSION_MODE="AIV"
+
+export HCCL_IF_IP=$local_ip
+export GLOO_SOCKET_IFNAME=$nic_name
+export TP_SOCKET_IFNAME=$nic_name
+export HCCL_SOCKET_IFNAME=$nic_name
+export VLLM_ASCEND_BALANCE_SCHEDULING=1
+export PYTORCH_NPU_ALLOC_CONF=expandable_segments:True
+export OMP_PROC_BIND=false
+export OMP_NUM_THREADS=1
+export TASK_QUEUE_ENABLE=1
+export VLLM_ASCEND_ENABLE_MLAPO=1
+
+vllm serve /weights/Kimi-K2.5-w4a8 \
+--host 0.0.0.0 \
+--port 8015 \
+--data-parallel-size 4 \
+--tensor-parallel-size 4 \
+--quantization ascend \
+--seed 1024 \
+--served-model-name kimi_k25 \
+--enable-expert-parallel \
+--async-scheduling \
+--max-num-seqs 16 \
+--max-model-len 16384 \
+--max-num-batched-tokens 4096 \
+--trust-remote-code \
+--no-enable-prefix-caching \
+--gpu-memory-utilization 0.9 \
+--compilation-config '{"cudagraph_capture_sizes":[1,2,4,8,16], "cudagraph_mode": "FULL_DECODE_ONLY"}' \
+--additional-config '{"multistream_overlap_shared_expert":true}' \
+--mm-processor-cache-type shm \
+--mm-encoder-tp-mode data
+```
+
+**Notice:**
+The parameters are explained as follows:
+
+- Setting the environment variable `VLLM_ASCEND_BALANCE_SCHEDULING=1` enables balance scheduling. This may help increase output throughput and reduce TPOT with the v1 scheduler, but TTFT may degrade in some scenarios. Enabling this feature is not recommended in prefill-decode disaggregated scenarios.
+- For single-node deployment, we recommend using `dp4tp4` instead of `dp2tp8`.
+- `--max-model-len` specifies the maximum context length, that is, the sum of input and output tokens for a single request. For performance testing with an input length of 3.5K and an output length of 1.5K, a value of `16384` is sufficient; for accuracy testing, please set it to at least `35000`.
+- `--no-enable-prefix-caching` disables prefix caching. To enable prefix caching, remove this option.
+- If you use the w4a8 weight, more memory is available for the KV cache, so you can try raising concurrency to achieve greater throughput.
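+
+As a quick sanity check of the multimodal path, you can send an OpenAI-style chat request with an image once the server above is up. This is a minimal sketch, assuming the single-node deployment above (port `8015`, served model name `kimi_k25`); the image URL is only a placeholder.
+
+```shell
+# Minimal multimodal smoke test against the single-node server above (host/port assumed).
+curl http://localhost:8015/v1/chat/completions \
+  -H "Content-Type: application/json" \
+  -d '{
+    "model": "kimi_k25",
+    "messages": [
+      {
+        "role": "user",
+        "content": [
+          {"type": "image_url", "image_url": {"url": "https://example.com/demo.jpg"}},
+          {"type": "text", "text": "Describe this image briefly."}
+        ]
+      }
+    ],
+    "max_tokens": 128,
+    "temperature": 0
+  }'
+```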
+
+### Multi-node Deployment
+
+- `Kimi-K2.5-w4a8`: requires at least 2 Atlas 800 A2 (64G × 8).
+
+Run the following scripts on the two nodes respectively.
+
+**Node 0**
+
+```shell
+#!/bin/sh
+
+# nic_name is the network interface name corresponding to local_ip of the current node.
+# Both values can be obtained through ifconfig.
+nic_name="xxxx"
+local_ip="xxxx"
+
+# The value of node0_ip must be consistent with the value of local_ip set on node0 (master node).
+node0_ip="xxxx"
+
+# [Optional] jemalloc
+# jemalloc gives better performance; if `libjemalloc.so` is installed on your machine, you can turn it on.
+# export LD_PRELOAD=/usr/lib/aarch64-linux-gnu/libjemalloc.so.2:$LD_PRELOAD
+
+export HCCL_IF_IP=$local_ip
+export GLOO_SOCKET_IFNAME=$nic_name
+export TP_SOCKET_IFNAME=$nic_name
+export HCCL_SOCKET_IFNAME=$nic_name
+export OMP_PROC_BIND=false
+export OMP_NUM_THREADS=1
+export HCCL_BUFFSIZE=1024
+export PYTORCH_NPU_ALLOC_CONF=expandable_segments:True
+export VLLM_ASCEND_BALANCE_SCHEDULING=1
+export HCCL_INTRA_PCIE_ENABLE=1
+export HCCL_INTRA_ROCE_ENABLE=0
+export TASK_QUEUE_ENABLE=1
+export VLLM_ASCEND_ENABLE_MLAPO=1
+
+vllm serve /weights/Kimi-K2.5-w4a8 \
+--host 0.0.0.0 \
+--port 8004 \
+--data-parallel-size 4 \
+--data-parallel-size-local 2 \
+--data-parallel-address $node0_ip \
+--data-parallel-rpc-port 13389 \
+--tensor-parallel-size 4 \
+--quantization ascend \
+--seed 1024 \
+--served-model-name kimi_k25 \
+--enable-expert-parallel \
+--async-scheduling \
+--max-num-seqs 16 \
+--max-model-len 16384 \
+--max-num-batched-tokens 4096 \
+--trust-remote-code \
+--no-enable-prefix-caching \
+--gpu-memory-utilization 0.9 \
+--compilation-config '{"cudagraph_capture_sizes":[1,2,4,8,16], "cudagraph_mode": "FULL_DECODE_ONLY"}' \
+--additional-config '{"multistream_overlap_shared_expert":true}' \
+--mm-processor-cache-type shm \
+--mm-encoder-tp-mode data
+```
+
+**Node 1**
+
+```shell
+#!/bin/sh
+
+# nic_name is the network interface name corresponding to local_ip of the current node.
+# Both values can be obtained through ifconfig.
+nic_name="xxx"
+local_ip="xxx"
+
+# The value of node0_ip must be consistent with the value of local_ip set on node0 (master node).
+node0_ip="xxxx"
+
+# [Optional] jemalloc
+# jemalloc gives better performance; if `libjemalloc.so` is installed on your machine, you can turn it on.
+# export LD_PRELOAD=/usr/lib/aarch64-linux-gnu/libjemalloc.so.2:$LD_PRELOAD
+
+export HCCL_IF_IP=$local_ip
+export GLOO_SOCKET_IFNAME=$nic_name
+export TP_SOCKET_IFNAME=$nic_name
+export HCCL_SOCKET_IFNAME=$nic_name
+export OMP_PROC_BIND=false
+export OMP_NUM_THREADS=1
+export HCCL_BUFFSIZE=1024
+export PYTORCH_NPU_ALLOC_CONF=expandable_segments:True
+export VLLM_ASCEND_BALANCE_SCHEDULING=1
+export HCCL_INTRA_PCIE_ENABLE=1
+export HCCL_INTRA_ROCE_ENABLE=0
+export TASK_QUEUE_ENABLE=1
+export VLLM_ASCEND_ENABLE_MLAPO=1
+
+vllm serve /weights/Kimi-K2.5-w4a8 \
+--host 0.0.0.0 \
+--port 8004 \
+--headless \
+--data-parallel-size 4 \
+--data-parallel-size-local 2 \
+--data-parallel-start-rank 2 \
+--data-parallel-address $node0_ip \
+--data-parallel-rpc-port 13389 \
+--tensor-parallel-size 4 \
+--quantization ascend \
+--seed 1024 \
+--served-model-name kimi_k25 \
+--enable-expert-parallel \
+--async-scheduling \
+--max-num-seqs 16 \
+--max-model-len 16384 \
+--max-num-batched-tokens 4096 \
+--trust-remote-code \
+--no-enable-prefix-caching \
+--gpu-memory-utilization 0.9 \
+--compilation-config '{"cudagraph_capture_sizes":[1,2,4,8,16], "cudagraph_mode": "FULL_DECODE_ONLY"}' \
+--additional-config '{"multistream_overlap_shared_expert":true}' \
+--mm-processor-cache-type shm \
+--mm-encoder-tp-mode data
+```
+
+### Prefill-Decode Disaggregation
+
+We recommend using Mooncake for deployment: [Mooncake](../features/pd_disaggregation_mooncake_multi_node.md).
+
+Take Atlas 800 A3 (64G × 16) as an example: we recommend deploying 2P1D (4 nodes) rather than 1P1D (2 nodes), because there is not enough NPU memory to serve high concurrency in the 1P1D case.
+
+- `Kimi-K2.5-w4a8 2P1D Layerwise` requires 4 Atlas 800 A3 (64G × 16).
+
+To run the vllm-ascend `Prefill-Decode Disaggregation` service, you need to deploy a `launch_online_dp.py` script and a `run_dp_template.sh` script on each node, and a `proxy.sh` script on the prefill master node to forward requests.
+
+1. `launch_online_dp.py` launches the external-DP vLLM servers:
+[launch\_online\_dp.py](https://github.com/vllm-project/vllm-ascend/blob/main/examples/external_online_dp/launch_online_dp.py)
+
+2. Prefill Node 0 `run_dp_template.sh` script
+
+```shell
+# Positional parameters ($1-$7) are filled in by the launch script:
+# $1 visible NPUs, $2 serve port, $3 dp size, $4 dp rank, $5 dp master address, $6 dp rpc port, $7 tp size.
+
+# nic_name is the network interface name corresponding to local_ip of the current node.
+# Both values can be obtained through ifconfig.
+nic_name="xxx"
+local_ip="141.xx.xx.1"
+
+# The value of node0_ip must be consistent with the value of local_ip set on node0 (master node).
+node0_ip="xxxx"
+
+# [Optional] jemalloc
+# jemalloc gives better performance; if `libjemalloc.so` is installed on your machine, you can turn it on.
+# export LD_PRELOAD=/usr/lib/aarch64-linux-gnu/libjemalloc.so.2:$LD_PRELOAD
+
+export HCCL_IF_IP=$local_ip
+export GLOO_SOCKET_IFNAME=$nic_name
+export TP_SOCKET_IFNAME=$nic_name
+export HCCL_SOCKET_IFNAME=$nic_name
+
+export VLLM_RPC_TIMEOUT=3600000
+export VLLM_EXECUTE_MODEL_TIMEOUT_SECONDS=30000
+export HCCL_EXEC_TIMEOUT=204
+export HCCL_CONNECT_TIMEOUT=120
+
+export OMP_PROC_BIND=false
+export OMP_NUM_THREADS=1
+export PYTORCH_NPU_ALLOC_CONF=expandable_segments:True
+export HCCL_BUFFSIZE=256
+export TASK_QUEUE_ENABLE=1
+export HCCL_OP_EXPANSION_MODE="AIV"
+export ASCEND_RT_VISIBLE_DEVICES=$1
+export ASCEND_BUFFER_POOL=4:8
+export LD_LIBRARY_PATH=/usr/local/Ascend/ascend-toolkit/latest/python/site-packages/mooncake:$LD_LIBRARY_PATH
+
+vllm serve /weights/Kimi-K2.5-w4a8 \
+    --host 0.0.0.0 \
+    --port $2 \
+    --data-parallel-size $3 \
+    --data-parallel-rank $4 \
+    --data-parallel-address $5 \
+    --data-parallel-rpc-port $6 \
+    --tensor-parallel-size $7 \
+    --enable-expert-parallel \
+    --seed 1024 \
+    --served-model-name kimi_k25 \
+    --max-model-len 65536 \
+    --max-num-batched-tokens 16384 \
+    --max-num-seqs 8 \
+    --enforce-eager \
+    --trust-remote-code \
+    --gpu-memory-utilization 0.9 \
+    --quantization ascend \
+    --no-enable-prefix-caching \
+    --additional-config '{"recompute_scheduler_enable":true}' \
+    --mm-processor-cache-type shm \
+    --mm-encoder-tp-mode data \
+    --kv-transfer-config \
+    '{"kv_connector": "MooncakeConnectorV1",
+      "kv_role": "kv_producer",
+      "kv_port": "30000",
+      "engine_id": "0",
+      "kv_connector_extra_config": {
+        "prefill": {
+          "dp_size": 2,
+          "tp_size": 8
+        },
+        "decode": {
+          "dp_size": 32,
+          "tp_size": 1
+        }
+      }
+    }'
+```
+
+3. Prefill Node 1 `run_dp_template.sh` script
+
+```shell
+# nic_name is the network interface name corresponding to local_ip of the current node.
+# Both values can be obtained through ifconfig.
+nic_name="xxx"
+local_ip="141.xx.xx.2"
+
+# The value of node0_ip must be consistent with the value of local_ip set on node0 (master node).
+node0_ip="xxxx"
+
+# [Optional] jemalloc
+# jemalloc gives better performance; if `libjemalloc.so` is installed on your machine, you can turn it on.
+# export LD_PRELOAD=/usr/lib/aarch64-linux-gnu/libjemalloc.so.2:$LD_PRELOAD
+
+export HCCL_IF_IP=$local_ip
+export GLOO_SOCKET_IFNAME=$nic_name
+export TP_SOCKET_IFNAME=$nic_name
+export HCCL_SOCKET_IFNAME=$nic_name
+
+export VLLM_RPC_TIMEOUT=3600000
+export VLLM_EXECUTE_MODEL_TIMEOUT_SECONDS=30000
+export HCCL_EXEC_TIMEOUT=204
+export HCCL_CONNECT_TIMEOUT=120
+
+export OMP_PROC_BIND=false
+export OMP_NUM_THREADS=1
+export PYTORCH_NPU_ALLOC_CONF=expandable_segments:True
+export HCCL_BUFFSIZE=256
+export TASK_QUEUE_ENABLE=1
+export HCCL_OP_EXPANSION_MODE="AIV"
+export ASCEND_RT_VISIBLE_DEVICES=$1
+export ASCEND_BUFFER_POOL=4:8
+export LD_LIBRARY_PATH=/usr/local/Ascend/ascend-toolkit/latest/python/site-packages/mooncake:$LD_LIBRARY_PATH
+
+vllm serve /weights/Kimi-K2.5-w4a8 \
+    --host 0.0.0.0 \
+    --port $2 \
+    --data-parallel-size $3 \
+    --data-parallel-rank $4 \
+    --data-parallel-address $5 \
+    --data-parallel-rpc-port $6 \
+    --tensor-parallel-size $7 \
+    --enable-expert-parallel \
+    --seed 1024 \
+    --served-model-name kimi_k25 \
+    --max-model-len 65536 \
+    --max-num-batched-tokens 16384 \
+    --max-num-seqs 8 \
+    --enforce-eager \
+    --trust-remote-code \
+    --gpu-memory-utilization 0.9 \
+    --quantization ascend \
+    --no-enable-prefix-caching \
+    --additional-config '{"recompute_scheduler_enable":true}' \
+    --mm-processor-cache-type shm \
+    --mm-encoder-tp-mode data \
+    --kv-transfer-config \
+    '{"kv_connector": "MooncakeConnectorV1",
+      "kv_role": "kv_producer",
+      "kv_port": "30100",
+      "engine_id": "1",
+      "kv_connector_extra_config": {
+        "prefill": {
+          "dp_size": 2,
+          "tp_size": 8
+        },
+        "decode": {
+          "dp_size": 32,
+          "tp_size": 1
+        }
+      }
+    }'
+```
+
+4. Decode Node 0 `run_dp_template.sh` script
+
+```shell
+# nic_name is the network interface name corresponding to local_ip of the current node.
+# Both values can be obtained through ifconfig.
+nic_name="xxx"
+local_ip="141.xx.xx.3"
+
+# The value of node0_ip must be consistent with the value of local_ip set on node0 (master node).
+node0_ip="xxxx"
+
+# [Optional] jemalloc
+# jemalloc gives better performance; if `libjemalloc.so` is installed on your machine, you can turn it on.
+# export LD_PRELOAD=/usr/lib/aarch64-linux-gnu/libjemalloc.so.2:$LD_PRELOAD
+
+export HCCL_IF_IP=$local_ip
+export GLOO_SOCKET_IFNAME=$nic_name
+export TP_SOCKET_IFNAME=$nic_name
+export HCCL_SOCKET_IFNAME=$nic_name
+
+export VLLM_RPC_TIMEOUT=3600000
+export VLLM_EXECUTE_MODEL_TIMEOUT_SECONDS=30000
+export HCCL_EXEC_TIMEOUT=204
+export HCCL_CONNECT_TIMEOUT=120
+
+export OMP_PROC_BIND=false
+export OMP_NUM_THREADS=1
+export PYTORCH_NPU_ALLOC_CONF=expandable_segments:True
+export HCCL_BUFFSIZE=1100
+export TASK_QUEUE_ENABLE=1
+export HCCL_OP_EXPANSION_MODE="AIV"
+export ASCEND_RT_VISIBLE_DEVICES=$1
+export ASCEND_BUFFER_POOL=4:8
+export LD_LIBRARY_PATH=/usr/local/Ascend/ascend-toolkit/latest/python/site-packages/mooncake:$LD_LIBRARY_PATH
+export VLLM_ASCEND_ENABLE_MLAPO=1
+
+vllm serve /weights/Kimi-K2.5-w4a8 \
+    --host 0.0.0.0 \
+    --port $2 \
+    --data-parallel-size $3 \
+    --data-parallel-rank $4 \
+    --data-parallel-address $5 \
+    --data-parallel-rpc-port $6 \
+    --tensor-parallel-size $7 \
+    --enable-expert-parallel \
+    --seed 1024 \
+    --served-model-name kimi_k25 \
+    --max-model-len 65536 \
+    --max-num-batched-tokens 256 \
+    --max-num-seqs 28 \
+    --trust-remote-code \
+    --gpu-memory-utilization 0.92 \
+    --quantization ascend \
+    --no-enable-prefix-caching \
+    --async-scheduling \
+    --compilation-config '{"cudagraph_mode": "FULL_DECODE_ONLY", "cudagraph_capture_sizes":[2, 4, 8, 16, 24, 32, 48, 56]}' \
+    --additional-config '{"recompute_scheduler_enable":true,"multistream_overlap_shared_expert": true,"finegrained_tp_config": {"lmhead_tensor_parallel_size":8}}' \
+    --kv-transfer-config \
+    '{"kv_connector": "MooncakeConnectorV1",
+      "kv_role": "kv_consumer",
+      "kv_port": "30200",
+      "engine_id": "2",
+      "kv_connector_extra_config": {
+        "prefill": {
+          "dp_size": 2,
+          "tp_size": 8
+        },
+        "decode": {
+          "dp_size": 32,
+          "tp_size": 1
+        }
+      }
+    }'
+```
+
+5. Decode Node 1 `run_dp_template.sh` script
+
+```shell
+# nic_name is the network interface name corresponding to local_ip of the current node.
+# Both values can be obtained through ifconfig.
+nic_name="xxx"
+local_ip="141.xx.xx.4"
+
+# The value of node0_ip must be consistent with the value of local_ip set on node0 (master node).
+node0_ip="xxxx"
+
+# [Optional] jemalloc
+# jemalloc gives better performance; if `libjemalloc.so` is installed on your machine, you can turn it on.
+# export LD_PRELOAD=/usr/lib/aarch64-linux-gnu/libjemalloc.so.2:$LD_PRELOAD
+
+export HCCL_IF_IP=$local_ip
+export GLOO_SOCKET_IFNAME=$nic_name
+export TP_SOCKET_IFNAME=$nic_name
+export HCCL_SOCKET_IFNAME=$nic_name
+
+export VLLM_RPC_TIMEOUT=3600000
+export VLLM_EXECUTE_MODEL_TIMEOUT_SECONDS=30000
+export HCCL_EXEC_TIMEOUT=204
+export HCCL_CONNECT_TIMEOUT=120
+
+export OMP_PROC_BIND=false
+export OMP_NUM_THREADS=1
+export PYTORCH_NPU_ALLOC_CONF=expandable_segments:True
+export HCCL_BUFFSIZE=1100
+export TASK_QUEUE_ENABLE=1
+export HCCL_OP_EXPANSION_MODE="AIV"
+export ASCEND_RT_VISIBLE_DEVICES=$1
+export ASCEND_BUFFER_POOL=4:8
+export LD_LIBRARY_PATH=/usr/local/Ascend/ascend-toolkit/latest/python/site-packages/mooncake:$LD_LIBRARY_PATH
+export VLLM_ASCEND_ENABLE_MLAPO=1
+
+vllm serve /weights/Kimi-K2.5-w4a8 \
+    --host 0.0.0.0 \
+    --port $2 \
+    --data-parallel-size $3 \
+    --data-parallel-rank $4 \
+    --data-parallel-address $5 \
+    --data-parallel-rpc-port $6 \
+    --tensor-parallel-size $7 \
+    --enable-expert-parallel \
+    --seed 1024 \
+    --served-model-name kimi_k25 \
+    --max-model-len 65536 \
+    --max-num-batched-tokens 256 \
+    --max-num-seqs 28 \
+    --trust-remote-code \
+    --gpu-memory-utilization 0.92 \
+    --quantization ascend \
+    --no-enable-prefix-caching \
+    --async-scheduling \
+    --compilation-config '{"cudagraph_mode": "FULL_DECODE_ONLY", "cudagraph_capture_sizes":[2, 4, 8, 16, 24, 32, 48, 56]}' \
+    --additional-config '{"recompute_scheduler_enable":true,"multistream_overlap_shared_expert": true,"finegrained_tp_config": {"lmhead_tensor_parallel_size":8}}' \
+    --kv-transfer-config \
+    '{"kv_connector": "MooncakeConnectorV1",
+      "kv_role": "kv_consumer",
+      "kv_port": "30200",
+      "engine_id": "2",
+      "kv_connector_extra_config": {
+        "prefill": {
+          "dp_size": 2,
+          "tp_size": 8
+        },
+        "decode": {
+          "dp_size": 32,
+          "tp_size": 1
+        }
+      }
+    }'
+```
+
+**Notice:**
+The parameters are explained as follows:
+
+- `VLLM_ASCEND_ENABLE_MLAPO=1`: enables the MLAPO fusion operator, which can significantly improve performance but consumes more NPU memory. In the prefill-decode (PD) disaggregation scenario, enable MLAPO only on decode nodes.
+- `--async-scheduling`: enables asynchronous scheduling. When Multi-Token Prediction (MTP) is enabled, operator dispatch can be scheduled asynchronously to overlap the dispatch latency.
+- `cudagraph_capture_sizes`: the recommended values are `n x (mtp + 1)`, with `n` ranging from 1 up to `max-num-seqs`. For the values in between, it is recommended to use the batch sizes that occur most frequently on the decode (D) nodes. See the sketch after this list for how the values configured above can be derived.
+- `recompute_scheduler_enable: true`: enables the recomputation scheduler. When the KV cache of a decode node is insufficient, requests are sent back to a prefill node to recompute the KV cache. In the PD disaggregation scenario, it is recommended to enable this configuration on both prefill and decode nodes.
+- `multistream_overlap_shared_expert: true`: when the tensor parallelism (TP) size is 1 or `enable_shared_expert_dp: true`, an additional stream is used to overlap the computation of the shared experts for better efficiency.
+- `lmhead_tensor_parallel_size: 8`: when the TP size of the decode nodes is 1, this parameter allows the TP size of the LM head embedding layer to be greater than 1, reducing the LM head computation on each card.
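+
+The capture list configured on the decode nodes above can be reproduced with the `n x (mtp + 1)` rule. This is a minimal sketch that assumes one MTP (speculative) token per step, i.e. `mtp=1`; adjust `mtp` and the `n` values to your own configuration.
+
+```shell
+# n ranges over frequently used batch sizes, from 1 up to --max-num-seqs (28 on the decode nodes above).
+mtp=1
+for n in 1 2 4 8 12 16 24 28; do
+  printf '%d ' $((n * (mtp + 1)))
+done; echo
+# prints: 2 4 8 16 24 32 48 56  (the cudagraph_capture_sizes used above)
+```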
+
+6. Run the launch script on each node to start the vLLM servers:
+
+```shell
+# p0
+python launch_online_dp.py --dp-size 2 --tp-size 8 --dp-size-local 2 --dp-rank-start 0 --dp-address 141.xx.xx.1 --dp-rpc-port 12321 --vllm-start-port 7100
+# p1
+python launch_online_dp.py --dp-size 2 --tp-size 8 --dp-size-local 2 --dp-rank-start 0 --dp-address 141.xx.xx.2 --dp-rpc-port 12321 --vllm-start-port 7100
+# d0
+python launch_online_dp.py --dp-size 32 --tp-size 1 --dp-size-local 16 --dp-rank-start 0 --dp-address 141.xx.xx.3 --dp-rpc-port 12321 --vllm-start-port 7100
+# d1
+python launch_online_dp.py --dp-size 32 --tp-size 1 --dp-size-local 16 --dp-rank-start 16 --dp-address 141.xx.xx.3 --dp-rpc-port 12321 --vllm-start-port 7100
+```
+
+7. Run the `proxy.sh` script on the prefill master node
+
+Run a proxy server on the same node as the prefill service instance. You can get the proxy program from the repository's examples: [load\_balance\_proxy\_server\_example.py](https://github.com/vllm-project/vllm-ascend/blob/main/examples/disaggregated_prefill_v1/load_balance_proxy_server_example.py)
+
+```shell
+python load_balance_proxy_server_example.py \
+    --port 1999 \
+    --host 141.xx.xx.1 \
+    --prefiller-hosts \
+    141.xx.xx.1 \
+    141.xx.xx.1 \
+    141.xx.xx.2 \
+    141.xx.xx.2 \
+    --prefiller-ports \
+    7100 7101 7100 7101 \
+    --decoder-hosts \
+    141.xx.xx.3 \
+    141.xx.xx.3 \
+    141.xx.xx.3 \
+    141.xx.xx.3 \
+    141.xx.xx.3 \
+    141.xx.xx.3 \
+    141.xx.xx.3 \
+    141.xx.xx.3 \
+    141.xx.xx.3 \
+    141.xx.xx.3 \
+    141.xx.xx.3 \
+    141.xx.xx.3 \
+    141.xx.xx.3 \
+    141.xx.xx.3 \
+    141.xx.xx.3 \
+    141.xx.xx.3 \
+    141.xx.xx.4 \
+    141.xx.xx.4 \
+    141.xx.xx.4 \
+    141.xx.xx.4 \
+    141.xx.xx.4 \
+    141.xx.xx.4 \
+    141.xx.xx.4 \
+    141.xx.xx.4 \
+    141.xx.xx.4 \
+    141.xx.xx.4 \
+    141.xx.xx.4 \
+    141.xx.xx.4 \
+    141.xx.xx.4 \
+    141.xx.xx.4 \
+    141.xx.xx.4 \
+    141.xx.xx.4 \
+    --decoder-ports \
+    7100 7101 7102 7103 7104 7105 7106 7107 7108 7109 7110 7111 7112 7113 7114 7115 \
+    7100 7101 7102 7103 7104 7105 7106 7107 7108 7109 7110 7111 7112 7113 7114 7115
+```
+
+If the command above has been wrapped into the `proxy.sh` script mentioned earlier, you can run it directly:
+
+```shell
+cd vllm-ascend/examples/disaggregated_prefill_v1/
+bash proxy.sh
+```
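+
+Before sending traffic through the proxy, you can check that every vLLM server has come up. This is a minimal sketch, assuming the decode master node address and port range used above; vLLM's OpenAI-compatible server exposes a `/health` endpoint that returns 200 once the engine is ready.
+
+```shell
+# Wait until all 16 decode servers on node d0 report healthy (adjust host/ports for the other nodes).
+for port in $(seq 7100 7115); do
+  until curl -sf http://141.xx.xx.3:${port}/health > /dev/null; do sleep 5; done
+  echo "vLLM server on port ${port} is ready"
+done
+```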
+
+## Functional Verification
+
+Once your server is started, you can query the model with input prompts:
+
+```shell
+# Replace <host> and <port> with the address of your vllm serve instance.
+# For the prefill-decode disaggregated deployment, use the proxy address and port instead (e.g. 141.xx.xx.1:1999).
+curl http://<host>:<port>/v1/completions \
+    -H "Content-Type: application/json" \
+    -d '{
+        "model": "kimi_k25",
+        "prompt": "The future of AI is",
+        "max_tokens": 50,
+        "temperature": 0
+    }'
+```
+
+## Accuracy Evaluation
+
+Accuracy can be evaluated with AISBench.
+
+### Using AISBench
+
+1. Refer to [Using AISBench](../../developer_guide/evaluation/using_ais_bench.md) for details.
+
+2. After execution you can get the result. The following result of `Kimi-K2.5-w4a8` on `vllm-ascend:v0.17.0rc1` is for reference only.
+
+| dataset | version | metric | mode | vllm-api-general-chat | note |
+| ----- | ----- | ----- | ----- | ----- | ----- |
+| gsm8k | - | accuracy | gen | 94.62 | 1 Atlas 800 A3 (64G × 16) |
+| textvqa | - | accuracy | gen | 80.29 | 1 Atlas 800 A3 (64G × 16) |
+
+## Performance
+
+### Using AISBench
+
+Refer to [Using AISBench for performance evaluation](../../developer_guide/evaluation/using_ais_bench.md#execute-performance-evaluation) for details.
+
+### Using vLLM Benchmark
+
+Run the performance evaluation of `Kimi-K2.5-w4a8` as an example.
+
+Refer to [vllm benchmark](https://docs.vllm.ai/en/latest/contributing/benchmarks.html) for more details.
+
+There are three `vllm bench` subcommands:
+
+- `latency`: Benchmark the latency of a single batch of requests.
+- `serve`: Benchmark the online serving throughput.
+- `throughput`: Benchmark offline inference throughput.
+
+Take `serve` as an example and run the command as follows.
+
+```shell
+export VLLM_USE_MODELSCOPE=true
+vllm bench serve --model Eco-Tech/Kimi-K2.5-w4a8 --dataset-name random --random-input 1024 --num-prompts 200 --request-rate 1 --save-result --result-dir ./
+```
+
+After several minutes, you will get the performance evaluation result.
diff --git a/docs/source/tutorials/models/index.md b/docs/source/tutorials/models/index.md
index 787a8a2a..28c6df87 100644
--- a/docs/source/tutorials/models/index.md
+++ b/docs/source/tutorials/models/index.md
@@ -30,5 +30,6 @@ DeepSeek-R1.md
 GLM4.x.md
 GLM5.md
 Kimi-K2-Thinking.md
+Kimi-K2.5.md
 PaddleOCR-VL.md
 :::