Files
xc-llm-ascend/csrc/utils/inc/kernel/data_copy.h
shiro-zzzz 0617d7d394 [Kernel] add custom moe ops for prefill (#4194)
### What this PR does / why we need it?
1. Add the implementation of the normal Aclnn operators: MoeCombineNormal,
MoeDispatchNormal, NotifyDispatch, and DispatchLayout.

- MoeCombineNormal: Implements the combine logic within MoE operations.
- MoeDispatchNormal: Implements the dispatch logic within MoE
operations.
- NotifyDispatch: Exchanges topk_idx information among different ranks
to calculate the device memory required for the dispatch stage.
- DispatchLayout: Calculates the device memory layout information for the
dispatch stage.

2. Provide PyTorch interfaces for the normal operators (get_dispatch_layout,
dispatch_prefill, and combine_prefill) to be used for MoE communication
during the prefill stage in vLLM; a usage sketch follows the list below.

- get_dispatch_layout: Calculates information related to the device
memory layout for the dispatch operator, and is called before
dispatch_prefill.
- dispatch_prefill: Initiates the dispatch operation.
- combine_prefill: Initiates the combine operation.
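
A minimal sketch of the intended call order for the prefill path. The `ops` handle, the function signatures, and all argument names here are assumptions for illustration rather than the actual bindings; the only ordering stated above is that get_dispatch_layout runs before dispatch_prefill.

```python
# Hypothetical sketch of the prefill-stage MoE flow; `ops` stands in for the
# module exposing the three interfaces, and every signature is an assumption.
def moe_prefill_step(ops, hidden_states, topk_idx, topk_weights, num_experts):
    # 1. Compute device-memory layout information for the dispatch stage
    #    (must run before dispatch_prefill).
    layout = ops.get_dispatch_layout(topk_idx, num_experts)

    # 2. Dispatch tokens to the ranks that host their selected experts.
    dispatched, handle = ops.dispatch_prefill(
        hidden_states, topk_idx, topk_weights, layout)

    # 3. Run the local experts on the received tokens (placeholder).
    expert_out = dispatched

    # 4. Combine: gather the expert outputs back to their original ranks.
    return ops.combine_prefill(expert_out, handle)
```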

### Does this PR introduce _any_ user-facing change?
No

### How was this patch tested?
The functionality has already been validated using the local Qwen model.
Test cases will be added after support for multi-NPU use cases in the CI
pipeline is finalized.


- vLLM version: v0.12.0
- vLLM main: ad32e3e19c

Signed-off-by: shiro-zzzz <zhangdianhao@huawei.com>
2025-12-08 19:11:58 +08:00

68 lines · 2.1 KiB · C++

#ifndef CAM_DATACOPY_GM2GM_H
#define CAM_DATACOPY_GM2GM_H
#include <type_traits>
#include "comm_args.h"
using namespace AscendC;
using namespace Moe;
// Configure the atomic mode applied to subsequent copies into global memory.
template <typename T>
FORCE_INLINE_AICORE void SetAtomicOpType(int op)
{
    switch (op) {
        case ADD:
            AscendC::SetAtomicAdd<T>();
            break;
        case MUL:
            // Ignore setting the atomic register when performing mul
            break;
        case MAX:
            AscendC::SetAtomicMax<T>();
            break;
        case MIN:
            AscendC::SetAtomicMin<T>();
            break;
        default:
            AscendC::SetAtomicNone();
    }
}

// Copy `size` bytes from a unified buffer (UB) address to global memory (GM)
// using DataCopyPad.
template <typename T>
FORCE_INLINE_AICORE void CpUB2GM(__gm__ T *gmAddr, __ubuf__ T *ubAddr, uint32_t size)
{
    LocalTensor<uint8_t> ubTensor;
    GlobalTensor<uint8_t> gmTensor;
    DataCopyExtParams dataCopyParams(1, size, 0, 0, 0);
    ubTensor.address_.logicPos = static_cast<uint8_t>(TPosition::VECIN);
    ubTensor.address_.bufferAddr = reinterpret_cast<uint64_t>(ubAddr);
    gmTensor.SetGlobalBuffer(reinterpret_cast<__gm__ uint8_t *>(gmAddr));
    DataCopyPad(gmTensor, ubTensor, dataCopyParams);
}

// Copy `size` bytes from global memory (GM) into a unified buffer (UB) address
// using DataCopyPad.
template <typename T>
FORCE_INLINE_AICORE void CpGM2UB(__ubuf__ T *ubAddr, __gm__ T *gmAddr, uint32_t size)
{
    LocalTensor<uint8_t> ubTensor;
    GlobalTensor<uint8_t> gmTensor;
    DataCopyExtParams dataCopyParams(1, size, 0, 0, 0);
    ubTensor.address_.logicPos = static_cast<uint8_t>(TPosition::VECIN);
    ubTensor.address_.bufferAddr = reinterpret_cast<uint64_t>(ubAddr);
    gmTensor.SetGlobalBuffer(reinterpret_cast<__gm__ uint8_t *>(gmAddr));
    DataCopyPadExtParams<uint8_t> padParams;
    DataCopyPad(ubTensor, gmTensor, dataCopyParams, padParams);
}

// Copy `calCount` elements of type T between two unified buffer addresses.
template <typename T>
FORCE_INLINE_AICORE void CopyUB2UB(__ubuf__ T *dst, __ubuf__ T *src, const uint32_t calCount)
{
    LocalTensor<T> srcTensor;
    LocalTensor<T> dstTensor;
    TBuffAddr srcAddr, dstAddr;
    srcAddr.bufferAddr = reinterpret_cast<uint64_t>(src);
    dstAddr.bufferAddr = reinterpret_cast<uint64_t>(dst);
    srcTensor.SetAddr(srcAddr);
    dstTensor.SetAddr(dstAddr);
    DataCopy(dstTensor, srcTensor, calCount);
}

#endif // CAM_DATACOPY_GM2GM_H