[Doc] Update docs of Kimi-K2.5 for 0.18.0rc1 (#7931)

### What this PR does / why we need it?
Update docs of Kimi-K2.5 for 0.18.0rc1
backport of #7901
---------
Signed-off-by: LoganJane <loganJane73@hotmail.com>
LoganJane authored 2026-04-02 14:15:12 +08:00, committed by GitHub
parent 74699877c9
commit 829957b53f


@@ -19,6 +19,7 @@ Refer to [feature guide](../../user_guide/feature_guide/index.md) to get the fea
### Model Weight
- `Kimi-K2.5-w4a8` (Quantized version for w4a8): [Download model weight](https://modelscope.cn/models/Eco-Tech/Kimi-K2.5-W4A8).
- `kimi-k2.5-eagle3` (Eagle3 MTP draft model for accelerating inference of Kimi-K2.5): [Download model weight](https://huggingface.co/lightseekorg/kimi-k2.5-eagle3).
It is recommended to download the model weights to a directory shared by all nodes, such as `/root/.cache/`.
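If the weights are not already in the shared directory, you can fetch them ahead of time. Below is a minimal sketch using the ModelScope and Hugging Face CLIs (assuming both tools are installed; adjust the target paths to your environment):

```shell
# Download the w4a8 weights from ModelScope into the shared cache directory.
modelscope download --model Eco-Tech/Kimi-K2.5-W4A8 --local_dir /root/.cache/Kimi-K2.5-W4A8

# Download the Eagle3 MTP draft model from Hugging Face.
huggingface-cli download lightseekorg/kimi-k2.5-eagle3 --local-dir /root/.cache/kimi-k2.5-eagle3
```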
@@ -32,42 +33,93 @@ You can use our official docker image to run `Kimi-K2.5` directly.
Select an image based on your machine type and start the docker image on your node, refer to [using docker](../../installation.md#set-up-using-docker).
:::::{tab-set}
:sync-group: install
::::{tab-item} A3 series
:sync: A3
Start the docker image on each node.
```{code-block} bash
:substitutions:
# Update the vllm-ascend image according to your environment.
export IMAGE=quay.io/ascend/vllm-ascend:|vllm_ascend_version|-a3
# Note: If you are running bridge network with docker, please expose available ports for multiple nodes communication in advance.
docker run --rm \
--name vllm-ascend \
--shm-size=1g \
--net=host \
--device /dev/davinci0 \
--device /dev/davinci1 \
--device /dev/davinci2 \
--device /dev/davinci3 \
--device /dev/davinci4 \
--device /dev/davinci5 \
--device /dev/davinci6 \
--device /dev/davinci7 \
--device /dev/davinci8 \
--device /dev/davinci9 \
--device /dev/davinci10 \
--device /dev/davinci11 \
--device /dev/davinci12 \
--device /dev/davinci13 \
--device /dev/davinci14 \
--device /dev/davinci15 \
--device /dev/davinci_manager \
--device /dev/devmm_svm \
--device /dev/hisi_hdc \
-v /usr/local/dcmi:/usr/local/dcmi \
-v /usr/local/Ascend/driver/tools/hccn_tool:/usr/local/Ascend/driver/tools/hccn_tool \
-v /usr/local/bin/npu-smi:/usr/local/bin/npu-smi \
-v /usr/local/Ascend/driver/lib64/:/usr/local/Ascend/driver/lib64/ \
-v /usr/local/Ascend/driver/version.info:/usr/local/Ascend/driver/version.info \
-v /etc/ascend_install.info:/etc/ascend_install.info \
-v /root/.cache:/root/.cache \
-it $IMAGE bash
```
::::
::::{tab-item} A2 series
:sync: A2
Start the docker image on each node.
```{code-block} bash
:substitutions:
export IMAGE=quay.io/ascend/vllm-ascend:|vllm_ascend_version|
docker run --rm \
--name vllm-ascend \
--shm-size=1g \
--net=host \
--device /dev/davinci0 \
--device /dev/davinci1 \
--device /dev/davinci2 \
--device /dev/davinci3 \
--device /dev/davinci4 \
--device /dev/davinci5 \
--device /dev/davinci6 \
--device /dev/davinci7 \
--device /dev/davinci_manager \
--device /dev/devmm_svm \
--device /dev/hisi_hdc \
-v /usr/local/dcmi:/usr/local/dcmi \
-v /usr/local/Ascend/driver/tools/hccn_tool:/usr/local/Ascend/driver/tools/hccn_tool \
-v /usr/local/bin/npu-smi:/usr/local/bin/npu-smi \
-v /usr/local/Ascend/driver/lib64/:/usr/local/Ascend/driver/lib64/ \
-v /usr/local/Ascend/driver/version.info:/usr/local/Ascend/driver/version.info \
-v /etc/ascend_install.info:/etc/ascend_install.info \
-v /root/.cache:/root/.cache \
-it $IMAGE bash
```
::::
:::::
In addition, if you don't want to use the docker image above, you can also build everything from source:
- Install `vllm-ascend` from source, refer to [installation](../../installation.md).
If you want to deploy a multi-node environment, you need to set up the environment on each node.
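Before a multi-node deployment, it is also worth confirming that each NPU NIC has an IP and a healthy link. A quick check with the Ascend `hccn_tool` (the exact device count and tool path depend on your machine) might look like:

```shell
# Query the NIC IP and link status of every NPU on this node (Atlas A3: devices 0-15).
for i in $(seq 0 15); do
    hccn_tool -i $i -ip -g
    hccn_tool -i $i -link -g
done
```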
## Deployment
@@ -80,49 +132,45 @@ Run the following script to execute online inference.
```shell
#!/bin/sh

# [Optional] jemalloc
# jemalloc is for better performance, if `libjemalloc.so` is installed on your machine, you can turn it on.
export LD_PRELOAD=/usr/lib/aarch64-linux-gnu/libjemalloc.so.2:$LD_PRELOAD

echo performance | tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
sysctl -w vm.swappiness=0
sysctl -w kernel.numa_balancing=0
sysctl -w kernel.sched_migration_cost_ns=50000

export HCCL_OP_EXPANSION_MODE="AIV"
export PYTORCH_NPU_ALLOC_CONF=expandable_segments:True
export OMP_PROC_BIND=false
export OMP_NUM_THREADS=1
export TASK_QUEUE_ENABLE=1

export HCCL_BUFFSIZE=800
export VLLM_ASCEND_ENABLE_MLAPO=1
export VLLM_ASCEND_ENABLE_FLASHCOMM1=1
export VLLM_ASCEND_BALANCE_SCHEDULING=1

vllm serve Eco-Tech/Kimi-K2.5-W4A8 \
--host 0.0.0.0 \
--port 8088 \
--quantization ascend \
--served-model-name kimi_k25 \
--allowed-local-media-path / \
--trust-remote-code \
--no-enable-prefix-caching \
--seed 1024 \
--tensor-parallel-size 4 \
--data-parallel-size 4 \
--enable-expert-parallel \
--async-scheduling \
--max-num-seqs 64 \
--max-model-len 32768 \
--max-num-batched-tokens 16384 \
--gpu-memory-utilization 0.9 \
--compilation-config '{"cudagraph_capture_sizes":[4,8,16,32,64,128,256], "cudagraph_mode":"FULL_DECODE_ONLY"}' \
--speculative-config '{"method":"eagle3", "model":"lightseekorg/kimi-k2.5-eagle3", "num_speculative_tokens":3}' \
--mm-encoder-tp-mode data
```
**Notice:**
@@ -132,6 +180,7 @@ The parameters are explained as follows:
- For single-node deployment, we recommend using `dp4tp4` instead of `dp2tp8`.
- `--max-model-len` specifies the maximum context length, that is, the sum of input and output tokens for a single request. For performance testing with an input length of 3.5K and an output length of 1.5K, a value of `16384` is sufficient; for precision testing, please set it to at least `35000`.
- `--no-enable-prefix-caching` indicates that prefix caching is disabled. To enable it, remove this option.
- `--mm-encoder-tp-mode` indicates how multi-modal encoder inference is parallelized with tensor parallelism (TP). If you want to test multimodal inputs, we recommend using `data`.
- If you use the w4a8 weight, more memory can be allocated to the KV cache, so you can try to increase concurrency to achieve greater throughput.
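Once the server finishes loading, a quick sanity check against the OpenAI-compatible endpoint confirms that the model is served under the expected name (using the port `8088` configured above):

```shell
# The response should list "kimi_k25" as an available model.
curl http://localhost:8088/v1/models
```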
### Multi-node Deployment
@@ -153,47 +202,55 @@ local_ip="xxxx"
# The value of node0_ip must be consistent with the value of local_ip set in node0 (master node)
node0_ip="xxxx"

export HCCL_IF_IP=$local_ip
export GLOO_SOCKET_IFNAME=$nic_name
export TP_SOCKET_IFNAME=$nic_name
export HCCL_SOCKET_IFNAME=$nic_name
export HCCL_INTRA_PCIE_ENABLE=1
export HCCL_INTRA_ROCE_ENABLE=0

# [Optional] jemalloc
# jemalloc is for better performance, if `libjemalloc.so` is installed on your machine, you can turn it on.
export LD_PRELOAD=/usr/lib/aarch64-linux-gnu/libjemalloc.so.2:$LD_PRELOAD

echo performance | tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
sysctl -w vm.swappiness=0
sysctl -w kernel.numa_balancing=0
sysctl -w kernel.sched_migration_cost_ns=50000

export HCCL_OP_EXPANSION_MODE="AIV"
export PYTORCH_NPU_ALLOC_CONF=expandable_segments:True
export OMP_PROC_BIND=false
export OMP_NUM_THREADS=1
export TASK_QUEUE_ENABLE=1

export HCCL_BUFFSIZE=1024
export VLLM_ASCEND_ENABLE_MLAPO=1
export VLLM_ASCEND_ENABLE_FLASHCOMM1=1
export VLLM_ASCEND_BALANCE_SCHEDULING=1

vllm serve Eco-Tech/Kimi-K2.5-W4A8 \
--host 0.0.0.0 \
--port 8088 \
--quantization ascend \
--served-model-name kimi_k25 \
--allowed-local-media-path / \
--trust-remote-code \
--no-enable-prefix-caching \
--seed 1024 \
--data-parallel-size 4 \
--data-parallel-size-local 2 \
--data-parallel-address $node0_ip \
--data-parallel-rpc-port 13389 \
--tensor-parallel-size 4 \
--enable-expert-parallel \
--async-scheduling \
--max-num-seqs 16 \
--max-model-len 32768 \
--max-num-batched-tokens 16384 \
--gpu-memory-utilization 0.9 \
--compilation-config '{"cudagraph_capture_sizes":[4,8,16,32,64], "cudagraph_mode":"FULL_DECODE_ONLY"}' \
--speculative-config '{"method":"eagle3", "model":"lightseekorg/kimi-k2.5-eagle3", "num_speculative_tokens":3}' \
--mm-encoder-tp-mode data
```
**Node 1**
@@ -209,49 +266,57 @@ local_ip="xxx"
# The value of node0_ip must be consistent with the value of local_ip set in node0 (master node)
node0_ip="xxxx"

export HCCL_IF_IP=$local_ip
export GLOO_SOCKET_IFNAME=$nic_name
export TP_SOCKET_IFNAME=$nic_name
export HCCL_SOCKET_IFNAME=$nic_name
export HCCL_INTRA_PCIE_ENABLE=1
export HCCL_INTRA_ROCE_ENABLE=0

# [Optional] jemalloc
# jemalloc is for better performance, if `libjemalloc.so` is installed on your machine, you can turn it on.
export LD_PRELOAD=/usr/lib/aarch64-linux-gnu/libjemalloc.so.2:$LD_PRELOAD

echo performance | tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
sysctl -w vm.swappiness=0
sysctl -w kernel.numa_balancing=0
sysctl -w kernel.sched_migration_cost_ns=50000

export HCCL_OP_EXPANSION_MODE="AIV"
export PYTORCH_NPU_ALLOC_CONF=expandable_segments:True
export OMP_PROC_BIND=false
export OMP_NUM_THREADS=1
export TASK_QUEUE_ENABLE=1

export HCCL_BUFFSIZE=1024
export VLLM_ASCEND_ENABLE_MLAPO=1
export VLLM_ASCEND_ENABLE_FLASHCOMM1=1
export VLLM_ASCEND_BALANCE_SCHEDULING=1

vllm serve Eco-Tech/Kimi-K2.5-W4A8 \
--host 0.0.0.0 \
--port 8088 \
--quantization ascend \
--served-model-name kimi_k25 \
--allowed-local-media-path / \
--trust-remote-code \
--no-enable-prefix-caching \
--seed 1024 \
--headless \
--data-parallel-size 4 \
--data-parallel-size-local 2 \
--data-parallel-start-rank 2 \
--data-parallel-address $node0_ip \
--data-parallel-rpc-port 13389 \
--tensor-parallel-size 4 \
--enable-expert-parallel \
--async-scheduling \
--max-num-seqs 16 \
--max-model-len 32768 \
--max-num-batched-tokens 16384 \
--gpu-memory-utilization 0.9 \
--compilation-config '{"cudagraph_capture_sizes":[4,8,16,32,64], "cudagraph_mode":"FULL_DECODE_ONLY"}' \
--speculative-config '{"method":"eagle3", "model":"lightseekorg/kimi-k2.5-eagle3", "num_speculative_tokens":3}' \
--mm-encoder-tp-mode data
```
### Prefill-Decode Disaggregation
@@ -260,49 +325,52 @@ We recommend using Mooncake for deployment: [Mooncake](../features/pd_disaggrega
Take Atlas 800 A3 (64G × 16) for example, we recommend deploying 2P1D (4 nodes) rather than 1P1D (2 nodes), because there is not enough NPU memory to serve high concurrency in the 1P1D case.
- `Kimi-K2.5-w4a8 2P1D` requires 4 Atlas 800 A3 (64G × 16).
To run the vllm-ascend `Prefill-Decode Disaggregation` service, you need to deploy a `launch_online_dp.py` script and a `run_dp_template.sh` script on each node, and deploy a `proxy.sh` script on the prefill master node to forward requests.
1. `launch_online_dp.py` to launch external dp vllm servers.
[launch\_online\_dp.py](https://github.com/vllm-project/vllm-ascend/blob/main/examples/external_online_dp/launch_online_dp.py)
2. Prefill Node 0 `run_dp_template.sh` script
```shell
# local_ip is obtained via ifconfig
# nic_name is the network interface name corresponding to local_ip of the current node
nic_name="xxx"
local_ip="141.xx.xx.1"
# The value of node0_ip must be consistent with the value of local_ip set in node0 (master node)
node0_ip="xxxx"

export HCCL_IF_IP=$local_ip
export GLOO_SOCKET_IFNAME=$nic_name
export TP_SOCKET_IFNAME=$nic_name
export HCCL_SOCKET_IFNAME=$nic_name

# [Optional] jemalloc
# jemalloc is for better performance, if `libjemalloc.so` is installed on your machine, you can turn it on.
export LD_PRELOAD=/usr/lib/aarch64-linux-gnu/libjemalloc.so.2:$LD_PRELOAD

echo performance | tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
sysctl -w vm.swappiness=0
sysctl -w kernel.numa_balancing=0
sysctl kernel.sched_migration_cost_ns=50000

export VLLM_RPC_TIMEOUT=3600000
export VLLM_EXECUTE_MODEL_TIMEOUT_SECONDS=30000

export HCCL_OP_EXPANSION_MODE="AIV"
export PYTORCH_NPU_ALLOC_CONF=expandable_segments:True
export OMP_PROC_BIND=false
export OMP_NUM_THREADS=1
export TASK_QUEUE_ENABLE=1
export ASCEND_BUFFER_POOL=4:8
export LD_LIBRARY_PATH=/usr/local/Ascend/ascend-toolkit/latest/python/site-packages/mooncake:$LD_LIBRARY_PATH

export HCCL_BUFFSIZE=256
export VLLM_ASCEND_ENABLE_FLASHCOMM1=1
export ASCEND_RT_VISIBLE_DEVICES=$1

vllm serve Eco-Tech/Kimi-K2.5-W4A8 \
--host 0.0.0.0 \
--port $2 \
--data-parallel-size $3 \
@@ -312,17 +380,17 @@ vllm serve /weights/Kimi-K2.5-w4a8 \
--tensor-parallel-size $7 \
--enable-expert-parallel \
--seed 1024 \
--quantization ascend \
--served-model-name kimi_k25 \
--trust-remote-code \
--max-num-seqs 8 \
--max-model-len 32768 \
--max-num-batched-tokens 16384 \
--no-enable-prefix-caching \
--gpu-memory-utilization 0.8 \
--enforce-eager \
--speculative-config '{"method": "eagle3", "model":"lightseekorg/kimi-k2.5-eagle3", "num_speculative_tokens": 3}' \
--additional-config '{"recompute_scheduler_enable":true}' \
--mm-encoder-tp-mode data \
--kv-transfer-config \
'{"kv_connector": "MooncakeConnectorV1",
@@ -340,44 +408,47 @@ vllm serve /weights/Kimi-K2.5-w4a8 \
}
}
}'
```
3. Prefill Node 1 `run_dp_template.sh` script
```shell
# local_ip is obtained via ifconfig
# nic_name is the network interface name corresponding to local_ip of the current node
nic_name="xxx"
local_ip="141.xx.xx.2"
# The value of node0_ip must be consistent with the value of local_ip set in node0 (master node)
node0_ip="xxxx"

export HCCL_IF_IP=$local_ip
export GLOO_SOCKET_IFNAME=$nic_name
export TP_SOCKET_IFNAME=$nic_name
export HCCL_SOCKET_IFNAME=$nic_name

# [Optional] jemalloc
# jemalloc is for better performance, if `libjemalloc.so` is installed on your machine, you can turn it on.
export LD_PRELOAD=/usr/lib/aarch64-linux-gnu/libjemalloc.so.2:$LD_PRELOAD

echo performance | tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
sysctl -w vm.swappiness=0
sysctl -w kernel.numa_balancing=0
sysctl kernel.sched_migration_cost_ns=50000

export VLLM_RPC_TIMEOUT=3600000
export VLLM_EXECUTE_MODEL_TIMEOUT_SECONDS=30000

export HCCL_OP_EXPANSION_MODE="AIV"
export PYTORCH_NPU_ALLOC_CONF=expandable_segments:True
export OMP_PROC_BIND=false
export OMP_NUM_THREADS=1
export TASK_QUEUE_ENABLE=1
export ASCEND_BUFFER_POOL=4:8
export LD_LIBRARY_PATH=/usr/local/Ascend/ascend-toolkit/latest/python/site-packages/mooncake:$LD_LIBRARY_PATH

export HCCL_BUFFSIZE=256
export VLLM_ASCEND_ENABLE_FLASHCOMM1=1
export ASCEND_RT_VISIBLE_DEVICES=$1

vllm serve Eco-Tech/Kimi-K2.5-W4A8 \
--host 0.0.0.0 \
--port $2 \
--data-parallel-size $3 \
@@ -387,17 +458,17 @@ vllm serve /weights/Kimi-K2.5-w4a8 \
--tensor-parallel-size $7 \
--enable-expert-parallel \
--seed 1024 \
--quantization ascend \
--served-model-name kimi_k25 \
--trust-remote-code \
--max-num-seqs 8 \
--max-model-len 32768 \
--max-num-batched-tokens 16384 \
--no-enable-prefix-caching \
--gpu-memory-utilization 0.8 \
--enforce-eager \
--speculative-config '{"method": "eagle3", "model":"lightseekorg/kimi-k2.5-eagle3", "num_speculative_tokens": 3}' \
--additional-config '{"recompute_scheduler_enable":true}' \
--mm-encoder-tp-mode data \
--kv-transfer-config \
'{"kv_connector": "MooncakeConnectorV1",
@@ -415,45 +486,47 @@ vllm serve /weights/Kimi-K2.5-w4a8 \
}
}
}'
```
4. Decode Node 0 `run_dp_template.sh` script
```shell
# local_ip is obtained via ifconfig
# nic_name is the network interface name corresponding to local_ip of the current node
nic_name="xxx"
local_ip="141.xx.xx.3"
# The value of node0_ip must be consistent with the value of local_ip set in node0 (master node)
node0_ip="xxxx"

export HCCL_IF_IP=$local_ip
export GLOO_SOCKET_IFNAME=$nic_name
export TP_SOCKET_IFNAME=$nic_name
export HCCL_SOCKET_IFNAME=$nic_name

# [Optional] jemalloc
# jemalloc is for better performance, if `libjemalloc.so` is installed on your machine, you can turn it on.
export LD_PRELOAD=/usr/lib/aarch64-linux-gnu/libjemalloc.so.2:$LD_PRELOAD

echo performance | tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
sysctl -w vm.swappiness=0
sysctl -w kernel.numa_balancing=0
sysctl kernel.sched_migration_cost_ns=50000

export VLLM_RPC_TIMEOUT=3600000
export VLLM_EXECUTE_MODEL_TIMEOUT_SECONDS=30000

export HCCL_OP_EXPANSION_MODE="AIV"
export PYTORCH_NPU_ALLOC_CONF=expandable_segments:True
export OMP_PROC_BIND=false
export OMP_NUM_THREADS=1
export TASK_QUEUE_ENABLE=1
export ASCEND_BUFFER_POOL=4:8
export LD_LIBRARY_PATH=/usr/local/Ascend/ascend-toolkit/latest/python/site-packages/mooncake:$LD_LIBRARY_PATH

export HCCL_BUFFSIZE=1100
export VLLM_ASCEND_ENABLE_MLAPO=1
export ASCEND_RT_VISIBLE_DEVICES=$1

vllm serve Eco-Tech/Kimi-K2.5-W4A8 \
--host 0.0.0.0 \
--port $2 \
--data-parallel-size $3 \
@@ -463,17 +536,17 @@ vllm serve /weights/Kimi-K2.5-w4a8 \
--tensor-parallel-size $7 \
--enable-expert-parallel \
--seed 1024 \
--quantization ascend \
--served-model-name kimi_k25 \
--trust-remote-code \
--max-num-seqs 48 \
--max-model-len 32768 \
--max-num-batched-tokens 256 \
--no-enable-prefix-caching \
--gpu-memory-utilization 0.95 \
--compilation-config '{"cudagraph_mode": "FULL_DECODE_ONLY", "cudagraph_capture_sizes":[4,8,16,32,48,64,80,96,112,128,144,160]}' \
--additional-config '{"recompute_scheduler_enable":true,"multistream_overlap_shared_expert": false}' \
--speculative-config '{"method": "eagle3", "model":"lightseekorg/kimi-k2.5-eagle3", "num_speculative_tokens": 3}' \
--kv-transfer-config \
'{"kv_connector": "MooncakeConnectorV1",
"kv_role": "kv_consumer",
@@ -490,45 +563,47 @@ vllm serve /weights/Kimi-K2.5-w4a8 \
}
}
}'
```
5. Decode Node 1 `run_dp_template.sh` script
```shell
# local_ip is obtained via ifconfig
# nic_name is the network interface name corresponding to local_ip of the current node
nic_name="xxx"
local_ip="141.xx.xx.4"
# The value of node0_ip must be consistent with the value of local_ip set in node0 (master node)
node0_ip="xxxx"

export HCCL_IF_IP=$local_ip
export GLOO_SOCKET_IFNAME=$nic_name
export TP_SOCKET_IFNAME=$nic_name
export HCCL_SOCKET_IFNAME=$nic_name

# [Optional] jemalloc
# jemalloc is for better performance, if `libjemalloc.so` is installed on your machine, you can turn it on.
export LD_PRELOAD=/usr/lib/aarch64-linux-gnu/libjemalloc.so.2:$LD_PRELOAD

echo performance | tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
sysctl -w vm.swappiness=0
sysctl -w kernel.numa_balancing=0
sysctl kernel.sched_migration_cost_ns=50000

export VLLM_RPC_TIMEOUT=3600000
export VLLM_EXECUTE_MODEL_TIMEOUT_SECONDS=30000

export HCCL_OP_EXPANSION_MODE="AIV"
export PYTORCH_NPU_ALLOC_CONF=expandable_segments:True
export OMP_PROC_BIND=false
export OMP_NUM_THREADS=1
export TASK_QUEUE_ENABLE=1
export ASCEND_BUFFER_POOL=4:8
export LD_LIBRARY_PATH=/usr/local/Ascend/ascend-toolkit/latest/python/site-packages/mooncake:$LD_LIBRARY_PATH

export HCCL_BUFFSIZE=1100
export VLLM_ASCEND_ENABLE_MLAPO=1
export ASCEND_RT_VISIBLE_DEVICES=$1

vllm serve Eco-Tech/Kimi-K2.5-W4A8 \
--host 0.0.0.0 \
--port $2 \
--data-parallel-size $3 \
@@ -538,17 +613,17 @@ vllm serve /weights/Kimi-K2.5-w4a8 \
--tensor-parallel-size $7 \
--enable-expert-parallel \
--seed 1024 \
--quantization ascend \
--served-model-name kimi_k25 \
--trust-remote-code \
--max-num-seqs 48 \
--max-model-len 32768 \
--max-num-batched-tokens 256 \
--no-enable-prefix-caching \
--gpu-memory-utilization 0.95 \
--compilation-config '{"cudagraph_mode": "FULL_DECODE_ONLY", "cudagraph_capture_sizes":[4,8,16,32,48,64,80,96,112,128,144,160]}' \
--additional-config '{"recompute_scheduler_enable":true,"multistream_overlap_shared_expert": false}' \
--speculative-config '{"method": "eagle3", "model":"lightseekorg/kimi-k2.5-eagle3", "num_speculative_tokens": 3}' \
--kv-transfer-config \
'{"kv_connector": "MooncakeConnectorV1",
"kv_role": "kv_consumer",
@@ -565,32 +640,32 @@ vllm serve /weights/Kimi-K2.5-w4a8 \
}
}
}'
```
**Notice:**
The parameters are explained as follows:
- `VLLM_ASCEND_ENABLE_FLASHCOMM1=1`: enables the communication optimization function on the prefill nodes.
- `VLLM_ASCEND_ENABLE_MLAPO=1`: enables the fusion operator, which can significantly improve performance but consumes more NPU memory. In the Prefill-Decode (PD) separation scenario, enable MLAPO only on decode nodes.
- `--async-scheduling`: enables the asynchronous scheduling function. When Multi-Token Prediction (MTP) is enabled, asynchronous scheduling of operator delivery can overlap the operator delivery latency.
- `cudagraph_capture_sizes`: the recommended values are `n x (mtp + 1)`, where `n` ranges from 1 to `max-num-seqs`. For the remaining values, it is recommended to use the batch sizes that occur most frequently on the Decode (D) node (see the sketch after this list).
- `recompute_scheduler_enable: true`: enables the recomputation scheduler. When the Key-Value Cache (KV Cache) of the decode node is insufficient, requests will be sent to the prefill node to recompute the KV Cache. In the PD separation scenario, it is recommended to enable this configuration on both prefill and decode nodes simultaneously.
- `multistream_overlap_shared_expert: true`: when the Tensor Parallelism (TP) size is 1 or `enable_shared_expert_dp: true`, an additional stream is enabled to overlap the computation of shared experts for improved efficiency.
- `lmhead_tensor_parallel_size: 8`: When the Tensor Parallelism (TP) size of the decode node is 1, this parameter allows the TP size of the LMHead embedding layer to be greater than 1, which is used to reduce the computational load of each card on the LMHead embedding layer.
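As an illustration of the `n x (mtp + 1)` rule above, the following sketch (values are examples only) generates a capture-size list for `num_speculative_tokens = 3`:

```shell
mtp=3
sizes=""
# Each captured batch size is a multiple of (mtp + 1); pick n values up to max-num-seqs.
for n in 1 2 4 8 16 32 48; do
    sizes="${sizes}$((n * (mtp + 1))),"
done
echo "cudagraph_capture_sizes: [${sizes%,}]"   # -> [4,8,16,32,64,128,192]
```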
1. Run the server on each node:
```shell
# p0
python launch_online_dp.py --dp-size 2 --tp-size 8 --dp-size-local 2 --dp-rank-start 0 --dp-address 141.xx.xx.1 --dp-rpc-port 12321 --vllm-start-port 7100
# p1
python launch_online_dp.py --dp-size 2 --tp-size 8 --dp-size-local 2 --dp-rank-start 0 --dp-address 141.xx.xx.2 --dp-rpc-port 12321 --vllm-start-port 7100
# d0
python launch_online_dp.py --dp-size 32 --tp-size 1 --dp-size-local 16 --dp-rank-start 0 --dp-address 141.xx.xx.3 --dp-rpc-port 12321 --vllm-start-port 7100
# d1
python launch_online_dp.py --dp-size 32 --tp-size 1 --dp-size-local 16 --dp-rank-start 16 --dp-address 141.xx.xx.3 --dp-rpc-port 12321 --vllm-start-port 7100
```
2. Run the `proxy.sh` script on the prefill master node
Run a proxy server on the same node as the prefiller service instance. You can get the proxy program from the repository's examples: [load\_balance\_proxy\_server\_example.py](https://github.com/vllm-project/vllm-ascend/blob/main/examples/disaggregated_prefill_v1/load_balance_proxy_server_example.py)
@@ -653,13 +728,21 @@ bash proxy.sh
Once your server is started, you can query the model with input prompts:
```shell
curl http://<node0_ip>:<port>/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{
        "model": "kimi_k25",
        "messages": [{
            "role": "user",
            "content": [
            {
                "type": "text",
                "text": "The future of AI is"
            }]
        }],
        "max_tokens": 1024,
        "temperature": 1.0,
        "top_p": 0.95
    }'
```
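Because the server is started with `--allowed-local-media-path /` and `--mm-encoder-tp-mode data`, multimodal requests can also be sent through the same endpoint. A hedged example (the image path is a placeholder; the `image_url` content part follows the OpenAI-compatible API):

```shell
curl http://<node0_ip>:<port>/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{
        "model": "kimi_k25",
        "messages": [{
            "role": "user",
            "content": [
                {"type": "image_url", "image_url": {"url": "file:///path/to/your/image.jpg"}},
                {"type": "text", "text": "Describe this image."}
            ]
        }],
        "max_tokens": 512
    }'
```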
@@ -671,12 +754,14 @@ Here are two accuracy evaluation methods.
1. Refer to [Using AISBench](../../developer_guide/evaluation/using_ais_bench.md) for details.
2. After execution, you can get the result. Here is the result of `Kimi-K2.5-w4a8` on `vllm-ascend:v0.18.0rc1`, for reference only.

| dataset | version | metric | mode | vllm-api-general-chat | note |
|----- | ----- | ----- | ----- | -----| ----- |
| GSM8K | - | accuracy | gen | 96.07 | 1 Atlas 800 A3 (64G × 16) |
| AIME2025 | - | accuracy | gen | 90.00 | 1 Atlas 800 A3 (64G × 16) |
| GPQA | - | accuracy | gen | 84.85 | 1 Atlas 800 A3 (64G × 16) |
| TextVQA | - | accuracy | gen | 80.29 | 1 Atlas 800 A3 (64G × 16) |
## Performance
@@ -704,3 +789,28 @@ vllm bench serve --model Eco-Tech/Kimi-K2.5-w4a8 --dataset-name random --random-
```
After several minutes, you can get the performance evaluation result.
## Best Practices
In this chapter, we recommend best practices for three scenarios:
- Long-context: for long sequences with low concurrency (≤ 4), use `dp1 tp16`; for long sequences with high concurrency (> 4), use `dp2 tp8`.
- Low-latency: for short sequences where latency matters most, we recommend `dp2 tp8`.
- High-throughput: for short sequences where throughput matters most, we recommend `dp4 tp4` (flag combinations are sketched after the notice below).
**Notice:**
`max-model-len` and `max-num-seqs` need to be set according to the actual usage scenario. For other settings, please refer to the **[Deployment](#deployment)** chapter.
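Assuming a single Atlas 800 A3 node (16 NPUs), the scenarios above map to the following parallelism settings. The sketch shows the high-throughput preset, with the other combinations noted in comments (all other flags as in the [Deployment](#deployment) chapter):

```shell
# High-throughput preset (dp4 tp4); tune --max-model-len and --max-num-seqs to your workload.
vllm serve Eco-Tech/Kimi-K2.5-W4A8 \
--quantization ascend \
--served-model-name kimi_k25 \
--trust-remote-code \
--enable-expert-parallel \
--data-parallel-size 4 \
--tensor-parallel-size 4

# Long-context, low concurrency (dp1 tp16):               --data-parallel-size 1 --tensor-parallel-size 16
# Low-latency / long-context, high concurrency (dp2 tp8): --data-parallel-size 2 --tensor-parallel-size 8
```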
## FAQ
- **Q: Why is the TPOT performance poor in the Long-context test?**
A: Please ensure that the FIA operator replacement script has been executed successfully so that the FIA operators are replaced. Here are the scripts: [A2](../../../../tools/install_flash_infer_attention_score_ops_a2.sh) and [A3](../../../../tools/install_flash_infer_attention_score_ops_a3.sh)
- **Q: Startup fails with HCCL port conflicts (address already bound). What should I do?**
A: Clean up old processes and restart: `pkill -f VLLM*`.
- **Q: How to handle OOM or unstable startup?**
A: Reduce `--max-num-seqs` and `--max-model-len` first. If needed, reduce concurrency and load-testing pressure (e.g., `max-concurrency` / `num-prompts`).
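For reference, a more conservative single-node launch (values are illustrative only) simply lowers the limits discussed above:

```shell
# Reduced limits to lower peak NPU memory usage; tune upward once the server is stable.
vllm serve Eco-Tech/Kimi-K2.5-W4A8 \
--quantization ascend \
--served-model-name kimi_k25 \
--trust-remote-code \
--tensor-parallel-size 4 \
--data-parallel-size 4 \
--enable-expert-parallel \
--max-num-seqs 16 \
--max-model-len 16384 \
--max-num-batched-tokens 4096 \
--gpu-memory-utilization 0.85
```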