### What this PR does / why we need it?

This PR introduces a new model loader, Netloader, which leverages high-bandwidth P2P direct transfer between NPU cards to load model weights. Netloader is implemented as a plugin through the `register_model_loader` function newly added in vLLM 0.10. It loads weights by sending them from an already-loaded model (the server) to the empty model of a newly started instance (the client). The server runs concurrently with normal inference tasks via sub-threads and vLLM's `stateless_init_torch_distributed_process_group`. The client initiates a transfer request after verifying that its model and partitioning method match the server's, then uses HCCL collective communication (send/recv) to load the weights in the order they are stored in the model.

Application scenarios:

1. **Significantly reduces inference instance startup time.** By reusing the weights of already-loaded instances and transferring them at high speed directly between computing cards, Netloader reduces model loading latency compared to traditional remote/local pull methods.
2. **Reduces network and storage pressure.** It avoids repeatedly downloading weight files from remote repositories, reducing the load on centralized storage and on network traffic, thereby improving overall system stability and service quality.
3. **Improves resource utilization and reduces costs.** Faster loading reduces reliance on redundant computing pools, allowing computing resources to be elastically scaled and reclaimed as needed.
4. **Enhances business continuity and high availability.** In fault-recovery scenarios, new instances can quickly take over existing services, avoiding prolonged business interruptions and improving the system's availability and user experience.

### Does this PR introduce _any_ user-facing change?

Netloader is activated through the existing `--load-format=netloader` and `--model-loader-extra-config` options; the extra config must be passed as a JSON string (its current format). Afterwards, you can verify correctness by checking that the outputs for the same sentence are consistent when the temperature is set to 0. See the sketches below.

Signed-off-by: destinysky <kangrui10@126.com>

- vLLM version: v0.11.0rc3
- vLLM main: https://github.com/vllm-project/vllm/commit/v0.11.0

---------

Signed-off-by: destinysky <kangrui10@126.com>
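For orientation, here is a minimal sketch of how a loader like Netloader plugs into vLLM's model-loader plugin interface via `register_model_loader`; the class name and method bodies are placeholders for illustration, not this PR's actual implementation:

```python
# A minimal sketch, assuming vLLM >= 0.10's model-loader plugin interface;
# the class body is a placeholder, not the real Netloader implementation.
from torch import nn

from vllm.config import ModelConfig
from vllm.model_executor.model_loader import register_model_loader
from vllm.model_executor.model_loader.base_loader import BaseModelLoader


@register_model_loader("netloader")
class NetloaderSketch(BaseModelLoader):

    def download_model(self, model_config: ModelConfig) -> None:
        # Nothing to download: weights arrive over P2P from a running server.
        pass

    def load_weights(self, model: nn.Module,
                     model_config: ModelConfig) -> None:
        # The real loader pulls weights from a peer instance over HCCL
        # send/recv, e.g. via the elastic_load() helper shown further down.
        ...
```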
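And a hedged activation sketch for the user-facing side; the model path and extra-config keys here are assumptions for illustration, since the authoritative schema is whatever the plugin parses:

```python
# Illustrative activation via the offline API; equivalent to passing
# --load-format netloader --model-loader-extra-config '<JSON string>' on
# the CLI. The extra-config keys are assumed, not a documented schema.
from vllm import LLM, SamplingParams

llm = LLM(
    model="/path/to/model",
    load_format="netloader",
    model_loader_extra_config={
        "sources": [{"device_id": 0, "sources": ["..."]}],
    },
)

# Consistency check from the description above: with temperature=0, the
# same prompt should produce identical output before and after the P2P
# weight transfer.
print(llm.generate(["Hello"], SamplingParams(temperature=0)))
```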
#
# Copyright (c) 2025 Huawei Technologies Co., Ltd. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

import time

from vllm.logger import logger

from .executor.elastic_load import P2PLoad
from .interaction.elastic import ElasticClient


def elastic_load(
    model,
    device_id: int,
    model_path: str,
    sources: list,
    tp: int,
    pp: int,
):
    """
    Load a model using elastic loading across multiple devices.

    Parameters:
    - model: The model instance to be loaded.
    - device_id: The ID of the current device (i.e. the global rank).
    - model_path: The path to the model file.
    - sources: A list of source configurations, each containing a device_id
      and the sources available to that device.
    - tp: Tensor parallel size, indicating the number of devices used for
      tensor parallelism.
    - pp: Pipeline parallel size, indicating the number of devices used for
      pipeline parallelism.

    Returns:
    - The loaded model if successful, otherwise None.
    """
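    # Illustrative only: the exact ``sources`` schema is defined by the
    # Netloader plugin. The shape below is an assumption inferred from the
    # filter logic that follows (one dict per global rank):
    #
    #     sources = [
    #         {"device_id": 0, "sources": [...]},
    #         {"device_id": 1, "sources": [...]},
    #     ]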
    # Filter sources for the current device
    sources_this_device = []
    for s in sources:
        # Guard with .get() so entries missing "device_id" or "sources"
        # are skipped instead of raising KeyError.
        if (isinstance(s, dict) and s.get("device_id") == device_id
                and isinstance(s.get("sources"), list)):
            sources_this_device += s["sources"]
    if len(sources_this_device) == 0:
        return None

    try:
        # Initialize the interaction layer with the ElasticClient
        with ElasticClient(sources_this_device, device_id, model_path, tp,
                           pp) as client_interaction_layer:
            if (client_interaction_layer.s is None
                    or client_interaction_layer.server_addr is None):
                raise RuntimeError(
                    "Failed to initialize ElasticClient: socket or "
                    "server_addr is None")
            ack = client_interaction_layer.ack
            if ack is None:
                raise RuntimeError(
                    "ElasticClient.register did not return ack")

            # Pull weights from the server over P2P and time the transfer.
            t0 = time.perf_counter()
            elastic_loader = P2PLoad(ack[0],
                                     client_interaction_layer.server_addr,
                                     ack[1])
            model_loaded = elastic_loader.load(model=model)
            if model_loaded is None:
                logger.error("Failed to load model")
                return None
            logger.info("Finish elastic load (duration: %ss)",
                        time.perf_counter() - t0)
            return model_loaded
    except Exception as e:
        logger.error("elastic_load error: %s", e)
        return None
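
# Hypothetical call site (illustrative only): the Netloader client would
# invoke elastic_load() after constructing the empty model, with tp/pp
# matching the serving instance, and fall back to the default loader when
# it returns None:
#
#     loaded = elastic_load(model, device_id=rank, model_path=path,
#                           sources=extra_config["sources"], tp=tp, pp=pp)
#     if loaded is None:
#         ...  # fall back to the normal weight-loading path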