### What this PR does / why we need it?

This PR addresses https://github.com/vllm-project/vllm-ascend/issues/3241 by providing an in-house solution for offloading KV cache data from GPU memory to another medium (in particular, CPU memory). Previous solutions relied on third-party components, which suffered from compatibility issues between different versions.

### How was this patch tested?

Tested with the following script:

```bash
export CUDA_VISIBLE_DEVICES=0
export TP=1
export MODEL_PATH=/model/Qwen3-14B
export MODEL_NAME=Qwen3-14B
export PORT=10000
#export ASCEND_LAUNCH_BLOCKING=1
#export ASCEND_SLOG_PRINT_TO_STDOUT=1

python3 -m vllm.entrypoints.openai.api_server \
    --host 0.0.0.0 \
    --port ${PORT} \
    --dtype bfloat16 \
    --model ${MODEL_PATH} \
    --served-model-name ${MODEL_NAME} \
    --tensor-parallel-size ${TP} \
    --gpu-memory-utilization 0.7 \
    --max-model-len 32768 \
    --trust-remote-code \
    --disable-log-requests \
    --block-size 128 \
    --kv-transfer-config '{"kv_connector":"OffloadingConnector","kv_role":"kv_both","kv_connector_extra_config":{"block_size": 128, "num_cpu_blocks": 1000, "spec_name":"NPUOffloadingSpec", "spec_module_path": "vllm_ascend.kv_offload.npu"}}'
```

- vLLM version: v0.11.0rc3
- vLLM main: https://github.com/vllm-project/vllm/commit/v0.11.0

---------

Signed-off-by: HF-001 <1670186653@qq.com>
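For reference, the same offloading configuration can also be exercised through vLLM's offline `LLM` API instead of the OpenAI-compatible server. The sketch below is illustrative only and simply mirrors the serving command above: the `KVTransferConfig` fields follow the upstream vLLM connector examples, the model path and block size are taken from the test script, and exact parameter names may differ across vLLM / vllm-ascend versions. Running it requires the vllm-ascend plugin on Ascend hardware.

```python
# Illustrative sketch (unverified): offline-inference equivalent of the
# serving command above, assuming the upstream vLLM KVTransferConfig API.
from vllm import LLM, SamplingParams
from vllm.config import KVTransferConfig

# Same connector settings as the --kv-transfer-config JSON in the test script.
ktc = KVTransferConfig(
    kv_connector="OffloadingConnector",
    kv_role="kv_both",
    kv_connector_extra_config={
        "block_size": 128,
        "num_cpu_blocks": 1000,
        "spec_name": "NPUOffloadingSpec",
        "spec_module_path": "vllm_ascend.kv_offload.npu",
    },
)

llm = LLM(
    model="/model/Qwen3-14B",          # MODEL_PATH from the test script
    tensor_parallel_size=1,
    gpu_memory_utilization=0.7,
    max_model_len=32768,
    block_size=128,
    trust_remote_code=True,
    kv_transfer_config=ktc,
)

# Minimal smoke test: generate a short completion with offloading enabled.
outputs = llm.generate(["Hello, my name is"], SamplingParams(max_tokens=32))
for out in outputs:
    print(out.outputs[0].text)
```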