Add pyhccl (#503)

This is the first step toward supporting `trl vllm serve` on Ascend NPU
(https://github.com/vllm-project/vllm-ascend/issues/459).
This PR works properly only once
https://github.com/vllm-project/vllm/pull/16464 is merged into vLLM.

---------

Signed-off-by: hzji210@gmail.com <hzji210@gmail.com>
Author: Huazhong Ji
Date: 2025-04-17 14:57:52 +08:00
Committed by: GitHub
Parent: 64fdf4cbef
Commit: c3d1a3782a
8 changed files with 589 additions and 1 deletions


@@ -30,5 +30,6 @@ class NPUCommunicator(DeviceCommunicatorBase):
device_group: Optional[ProcessGroup] = None,
unique_name: str = ""):
super().__init__(cpu_group, device, device_group, unique_name)
        # TODO(hz): refer to CudaCommunicator's implementation to integrate
        # PyHcclCommunicator here.
        # Initialize the device according to the current rank.
        self.device = torch.npu.current_device()
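The TODO above points at the pattern vLLM's CudaCommunicator uses with PyNcclCommunicator: construct a library-backed communicator, and fall back to the regular process-group collective whenever that communicator is disabled (for example, when the shared library cannot be loaded). The sketch below illustrates only that fallback pattern; `PyHcclCommunicator`'s real implementation lives in this PR and wraps HCCL via ctypes, while the class here is a simplified stand-in (the `library_available` flag and `NPUCommunicatorSketch` name are illustrative, not actual vllm-ascend API).

```python
from typing import Any, Optional


class PyHcclCommunicator:
    """Simplified stand-in for the HCCL wrapper added by this PR.

    The real wrapper loads the HCCL shared library via ctypes and issues
    collectives directly; here we only track whether it is usable so the
    fallback logic below can be demonstrated."""

    def __init__(self, group: Any, device: Any, library_available: bool = False):
        self.group = group
        self.device = device
        # Disabled when the HCCL shared library is unavailable.
        self.disabled = not library_available

    def all_reduce(self, tensor):
        if self.disabled:
            # Signal the caller to fall back to the process group.
            return None
        # A real implementation would invoke hcclAllReduce here.
        return tensor


class NPUCommunicatorSketch:
    """Sketch of the CudaCommunicator-style integration the TODO refers to:
    prefer the library-backed communicator, otherwise use the fallback."""

    def __init__(self, group: Any, device: Any, library_available: bool = False):
        self.pyhccl_comm: Optional[PyHcclCommunicator] = PyHcclCommunicator(
            group, device, library_available)

    def all_reduce(self, tensor):
        comm = self.pyhccl_comm
        if comm is not None and not comm.disabled:
            out = comm.all_reduce(tensor)
            if out is not None:
                return out
        # Fallback path: real code would call
        # torch.distributed.all_reduce(tensor, group=...) here.
        return tensor
```

With this structure, callers never need to know whether HCCL was available; the communicator transparently chooses the fast path or the fallback.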