diff --git a/docs/source/user_guide/feature_guide/index.md b/docs/source/user_guide/feature_guide/index.md
index 8a2e9c41..3dcbfd71 100644
--- a/docs/source/user_guide/feature_guide/index.md
+++ b/docs/source/user_guide/feature_guide/index.md
@@ -17,4 +17,5 @@ dynamic_batch
 kv_pool
 external_dp
 large_scale_ep
+ucm_deployment
 :::
diff --git a/docs/source/user_guide/feature_guide/ucm_deployment.md b/docs/source/user_guide/feature_guide/ucm_deployment.md
new file mode 100644
index 00000000..774aae2f
--- /dev/null
+++ b/docs/source/user_guide/feature_guide/ucm_deployment.md
@@ -0,0 +1,141 @@
+# UCM-Enhanced Prefix Caching Deployment Guide
+
+## Overview
+
+Unified Cache Management (UCM) provides an external KV-cache storage layer designed for prefix-caching scenarios in vLLM/vLLM-Ascend. Unlike KV Pooling, which expands prefix-cache capacity only by aggregating device memory and therefore remains limited by HBM/DRAM size and lacks persistence, UCM decouples compute from storage and adopts a tiered design. Each node uses local DRAM as a fast cache, while a shared backend (such as 3FS or enterprise-grade storage) serves as the persistent KV store. This approach removes the capacity ceiling imposed by device memory, enables durable and reliable prefix caching, and allows cache capacity to scale with the storage system rather than with compute resources.
+
+## Prerequisites
+
+* OS: Linux
+* Hardware with Ascend NPUs, typically the Atlas 800 A2 series.
+* **vLLM: main branch**
+* **vLLM Ascend: main branch**
+
+## UCM Installation
+
+**Please refer to the [official UCM installation guide for Ascend NPU](https://ucm.readthedocs.io/en/latest/getting-started/quickstart_vllm_ascend.html).**
+
+## Configure UCM for Prefix Caching
+
+Modify the UCM configuration file to specify which UCM connector to use and where KV blocks should be stored.
+You may directly edit the example file at:
+
+`unified-cache-management/examples/ucm_config_example.yaml`
+
+**For updated configuration options, please refer to the [official UCM documentation for prefix caching](https://ucm.readthedocs.io/en/latest/user-guide/prefix-cache/nfs_store.html).**
+
+A minimal configuration looks like this:
+
+```yaml
+ucm_connectors:
+  - ucm_connector_name: "UcmNfsStore"
+    ucm_connector_config:
+      storage_backends: "/mnt/test"
+      use_direct: false
+
+load_only_first_rank: false
+```
+
+Explanation:
+
+* `ucm_connector_name`:
+  Specifies `UcmNfsStore` as the UCM connector.
+
+* `storage_backends`:
+  Specifies the directory used for storing KV blocks. It can be a local directory or an NFS-mounted path.
+  **⚠️ Make sure to replace `"/mnt/test"` with your actual storage directory.**
+
+* `use_direct`:
+  Whether to enable direct I/O (optional). Default is `false`.
+
+* `load_only_first_rank`:
+  Controls whether only rank 0 loads the KV cache and broadcasts it to other ranks.
+  This feature is currently not supported on Ascend, so it must be set to `false` (all ranks load/dump independently).
+
+## Launching Inference
+
+In this guide, we describe **online inference** using vLLM with the UCM connector, deployed as an OpenAI-compatible server. For best performance with UCM, it is recommended to set `block_size` to 128.
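Before launching, the minimal YAML above can be sanity-checked programmatically. The sketch below is illustrative and not part of UCM: `validate_ucm_config` is a hypothetical helper, and the field names are simply taken from the example config above.

```python
# Hypothetical sanity check for a parsed UCM config (not a UCM API).
# Field names mirror the example YAML in this guide; adjust to your deployment.
def validate_ucm_config(cfg: dict) -> list:
    """Return a list of human-readable problems found in the config."""
    problems = []
    connectors = cfg.get("ucm_connectors") or []
    if not connectors:
        problems.append("ucm_connectors must list at least one connector")
    for connector in connectors:
        if "ucm_connector_name" not in connector:
            problems.append("each connector needs a ucm_connector_name")
        backend = connector.get("ucm_connector_config", {}).get("storage_backends")
        if not backend or backend == "/mnt/test":
            problems.append(
                "storage_backends must be set to a real directory (not the /mnt/test placeholder)")
    # Ascend does not support first-rank-only loading, so this must stay false.
    if cfg.get("load_only_first_rank", False):
        problems.append("load_only_first_rank must be false on Ascend")
    return problems

# The example config from this guide, expressed as a Python dict:
example = {
    "ucm_connectors": [{
        "ucm_connector_name": "UcmNfsStore",
        "ucm_connector_config": {"storage_backends": "/mnt/test",
                                 "use_direct": False},
    }],
    "load_only_first_rank": False,
}
print(validate_ucm_config(example))
# → ['storage_backends must be set to a real directory (not the /mnt/test placeholder)']
```

Running the check against the unmodified example flags the placeholder storage directory, which is the most common deployment mistake.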
+
+To start the vLLM server with the Qwen/Qwen2.5-14B-Instruct model, run:
+
+```bash
+vllm serve Qwen/Qwen2.5-14B-Instruct \
+--max-model-len 20000 \
+--tensor-parallel-size 2 \
+--gpu_memory_utilization 0.87 \
+--block_size 128 \
+--trust-remote-code \
+--port 7800 \
+--enforce-eager \
+--no-enable-prefix-caching \
+--kv-transfer-config \
+'{
+    "kv_connector": "UCMConnector",
+    "kv_role": "kv_both",
+    "kv_connector_extra_config": {"UCM_CONFIG_FILE": "/vllm-workspace/unified-cache-management/examples/ucm_config_example.yaml"}
+}'
+```
+
+**⚠️ Make sure to replace `"/vllm-workspace/unified-cache-management/examples/ucm_config_example.yaml"` with your actual config file path.**
+
+If you see logs like the following:
+
+```bash
+INFO: Started server process [1049932]
+INFO: Waiting for application startup.
+INFO: Application startup complete.
+```
+
+congratulations, you have successfully started the vLLM server with the UCM connector!
+
+## Evaluating UCM Prefix Caching Performance
+
+After launching the vLLM server with `UCMConnector` enabled, the easiest way to observe the prefix-caching effect is to run the built-in `vllm bench` CLI. Executing the following command **twice** in a separate terminal shows the improvement clearly.
+```bash
+vllm bench serve \
+--backend vllm \
+--model Qwen/Qwen2.5-14B-Instruct \
+--host 127.0.0.1 \
+--port 7800 \
+--dataset-name random \
+--num-prompts 12 \
+--random-input-len 16000 \
+--random-output-len 2 \
+--request-rate inf \
+--seed 123456 \
+--percentile-metrics "ttft,tpot,itl,e2el" \
+--metric-percentiles "90,99" \
+--ignore-eos
+```
+
+### After the first execution
+
+The `vllm bench` terminal prints the benchmark result:
+
+```
+---------------Time to First Token----------------
+Mean TTFT (ms): 15323.87
+```
+
+Inspecting the vLLM server logs reveals entries like:
+
+```
+INFO ucm_connector.py:228: request_id: xxx, total_blocks_num: 125, hit hbm: 0, hit external: 0
+```
+
+This indicates that for the first inference request, UCM did not hit any cached KV blocks. As a result, the full 16K-token prefill must be computed, leading to a relatively large TTFT.
+
+### After the second execution
+
+Running the same benchmark again produces:
+
+```
+---------------Time to First Token----------------
+Mean TTFT (ms): 1920.68
+```
+
+The vLLM server logs now contain entries like:
+
+```
+INFO ucm_connector.py:228: request_id: xxx, total_blocks_num: 125, hit hbm: 0, hit external: 125
+```
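As an aside, these hit counters are easy to scrape when scripting regression checks against a UCM deployment. A minimal sketch, assuming only the log format shown in the excerpts above (the regex and `hit_ratio` helper are illustrative, not part of UCM):

```python
import re

# Pattern inferred from the sample UCM log lines in this guide (illustrative only).
HIT_RE = re.compile(r"total_blocks_num: (\d+), hit hbm: (\d+), hit external: (\d+)")

def hit_ratio(log_line: str) -> float:
    """Fraction of a request's KV blocks served from HBM or external storage."""
    match = HIT_RE.search(log_line)
    if match is None:
        raise ValueError("no UCM hit counters found in line")
    total, hbm, external = map(int, match.groups())
    return (hbm + external) / total

cold = "INFO ucm_connector.py:228: request_id: xxx, total_blocks_num: 125, hit hbm: 0, hit external: 0"
warm = "INFO ucm_connector.py:228: request_id: xxx, total_blocks_num: 125, hit hbm: 0, hit external: 125"
print(hit_ratio(cold), hit_ratio(warm))  # → 0.0 1.0
```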
diff --git a/mypy.ini b/mypy.ini index 7cee0bc5..b11cfba1 100644 --- a/mypy.ini +++ b/mypy.ini @@ -29,4 +29,7 @@ ignore_missing_imports = True allow_untyped_imports = True [mypy-xlite.*] +ignore_missing_imports = True + +[mypy-ucm.*] ignore_missing_imports = True \ No newline at end of file diff --git a/requirements-dev.txt b/requirements-dev.txt index 87fa8a66..aa4701b7 100644 --- a/requirements-dev.txt +++ b/requirements-dev.txt @@ -21,4 +21,5 @@ pytest_mock msserviceprofiler>=1.2.2 mindstudio-probe>=8.3.0 arctic-inference==0.1.1 -xlite \ No newline at end of file +xlite +uc-manager \ No newline at end of file diff --git a/vllm_ascend/distributed/__init__.py b/vllm_ascend/distributed/__init__.py index 618c5752..91743a8f 100644 --- a/vllm_ascend/distributed/__init__.py +++ b/vllm_ascend/distributed/__init__.py @@ -38,3 +38,7 @@ def register_connector(): "MooncakeLayerwiseConnector", "vllm_ascend.distributed.mooncake_layerwise_connector", "MooncakeLayerwiseConnector") + + KVConnectorFactory.register_connector( + "UCMConnector", "vllm_ascend.distributed.ucm_connector", + "UCMConnectorV1") diff --git a/vllm_ascend/distributed/ucm_connector.py b/vllm_ascend/distributed/ucm_connector.py new file mode 100644 index 00000000..f44e5ee2 --- /dev/null +++ b/vllm_ascend/distributed/ucm_connector.py @@ -0,0 +1,237 @@ +# SPDX-License-Identifier: Apache-2.0 +from typing import TYPE_CHECKING, Any, Optional + +import torch +from ucm.integration.vllm.ucm_connector import UCMConnector +from vllm.config import VllmConfig +from vllm.distributed.kv_transfer.kv_connector.v1.base import ( + KVConnectorBase_V1, KVConnectorMetadata, KVConnectorRole) +from vllm.logger import init_logger +from vllm.v1.core.sched.output import SchedulerOutput + +logger = init_logger(__name__) + +if TYPE_CHECKING: + from vllm.attention.backends.abstract import AttentionMetadata + from vllm.distributed.kv_transfer.kv_connector.v1.metrics import ( + KVConnectorPromMetrics, KVConnectorStats, PromMetric, 
PromMetricT) + from vllm.forward_context import ForwardContext + from vllm.v1.core.kv_cache_manager import KVCacheBlocks + from vllm.v1.kv_cache_interface import KVCacheConfig + from vllm.v1.request import Request + + +class UCMConnectorV1(KVConnectorBase_V1): + + def __init__( + self, + vllm_config: "VllmConfig", + role: KVConnectorRole, + kv_cache_config: "KVCacheConfig", + ): + super().__init__(vllm_config=vllm_config, + role=role, + kv_cache_config=kv_cache_config) + assert vllm_config.kv_transfer_config is not None + + ImplCls = UCMConnector + self._ucm_engine = ImplCls(vllm_config, role) + + # ============================== + # Worker-side methods + # ============================== + def start_load_kv(self, forward_context: "ForwardContext", + **kwargs: Any) -> None: + """ + Start loading the KV cache from the connector to vLLM's paged + KV buffer. This is called from the forward context before the + forward pass to enable async loading during model execution. + + Args: + forward_context (ForwardContext): the forward context. + **kwargs: additional arguments for the load operation + + Note: + The number of elements in kv_caches and layer_names should be + the same. + + """ + self._ucm_engine.start_load_kv(forward_context, **kwargs) + + def wait_for_layer_load(self, layer_name: str) -> None: + """ + Block until the KV for a specific layer is loaded into vLLM's + paged buffer. This is called from within attention layer to ensure + async copying from start_load_kv is complete. + + This interface will be useful for layer-by-layer pipelining. + + Args: + layer_name: the name of that layer + """ + self._ucm_engine.wait_for_layer_load(layer_name) + + def save_kv_layer( + self, + layer_name: str, + kv_layer: torch.Tensor, + attn_metadata: "AttentionMetadata", + **kwargs: Any, + ) -> None: + """ + Start saving the a layer of KV cache from vLLM's paged buffer + to the connector. This is called from within attention layer to + enable async copying during execution. 
+ + Args: + layer_name (str): the name of the layer. + kv_layer (torch.Tensor): the paged KV buffer of the current + layer in vLLM. + attn_metadata (AttentionMetadata): the attention metadata. + **kwargs: additional arguments for the save operation. + """ + self._ucm_engine.save_kv_layer(layer_name, kv_layer, attn_metadata, + **kwargs) + + def wait_for_save(self) -> None: + """ + Block until all the save operations is done. This is called + as the forward context exits to ensure that the async saving + from save_kv_layer is complete before finishing the forward. + + This prevents overwrites of paged KV buffer before saving done. + """ + self._ucm_engine.wait_for_save() + + def clear_connector_metadata(self) -> None: + """Clear the connector metadata. + + This function should be called by the model runner every time + after the model execution. + """ + self._ucm_engine.clear_connector_metadata() + + def bind_connector_metadata( + self, connector_metadata: KVConnectorMetadata) -> None: + """Set the connector metadata from the scheduler. + + This function should be called by the model runner every time + before the model execution. The metadata will be used for runtime + KV cache loading and saving. + + Args: + connector_metadata (dict): the connector metadata. + """ + self._ucm_engine.bind_connector_metadata(connector_metadata) + + def get_block_ids_with_load_errors(self) -> set[int]: + """ + Get the set of block IDs that failed to load. + + Returns: + Set of block IDs that encountered load errors. + Empty set if no load errors occurred. + """ + return self._ucm_engine.get_block_ids_with_load_errors() + + # ============================== + # Scheduler-side methods + # ============================== + def get_num_new_matched_tokens( + self, + request: "Request", + num_computed_tokens: int, + ) -> tuple[int | None, bool]: + """ + Get number of new tokens that can be loaded from the + external KV cache beyond the num_computed_tokens. 
+ + Args: + request (Request): the request object. + num_computed_tokens (int): the number of locally + computed tokens for this request + + Returns: + the number of tokens that can be loaded from the + external KV cache beyond what is already computed. + """ + return self._ucm_engine.get_num_new_matched_tokens( + request, num_computed_tokens) + + def update_state_after_alloc(self, request: "Request", + blocks: "KVCacheBlocks", + num_external_tokens: int) -> None: + """ + Update KVConnector state after block allocation. + """ + self._ucm_engine.update_state_after_alloc(request, blocks, + num_external_tokens) + + def build_connector_meta( + self, scheduler_output: SchedulerOutput) -> KVConnectorMetadata: + """ + Build the connector metadata for this step. + + This function should NOT modify fields in the scheduler_output. + Also, calling this function will reset the state of the connector. + + Args: + scheduler_output (SchedulerOutput): the scheduler output object. + """ + return self._ucm_engine.build_connector_meta(scheduler_output) + + def request_finished( + self, + request: "Request", + block_ids: list[int], + ) -> tuple[bool, dict[str, Any] | None]: + """ + Called when a request has finished, before its blocks are freed. + + Returns: + True if the request is being saved/sent asynchronously and blocks + should not be freed until the request_id is returned from + get_finished(). + Optional KVTransferParams to be included in the request outputs + returned by the engine. + """ + return self._ucm_engine.request_finished(request, block_ids) + + # ============================== + # Metrics & Stats + # ============================== + + @classmethod + def build_kv_connector_stats( + cls, + data: dict[str, Any] | None = None + ) -> Optional["KVConnectorStats"]: + """ + KVConnectorStats resolution method. This method allows dynamically + registered connectors to return their own KVConnectorStats object, + which can implement custom aggregation logic on the data dict. 
+ """ + return UCMConnector.build_kv_connector_stats(data) + + @classmethod + def build_prom_metrics( + cls, + vllm_config: "VllmConfig", + metric_types: dict[type["PromMetric"], type["PromMetricT"]], + labelnames: list[str], + per_engine_labelvalues: dict[int, list[object]], + ) -> Optional["KVConnectorPromMetrics"]: + """ + Create a KVConnectorPromMetrics subclass which should register + per-connector Prometheus metrics and implement observe() to + expose connector transfer stats via Prometheus. + + This implementation forwards the call to the underlying + UCMConnector engine. + """ + return UCMConnector.build_prom_metrics( + vllm_config, + metric_types, + labelnames, + per_engine_labelvalues, + )