Files
xc-llm-ascend/csrc/aclnn_torch_adapter/NPUBridge.h
Slightwind 9fdabb7b60 [feature] Add Custom Op grouped_matmul_swiglu_quant (#4431)
This PR introduces the `EXEC_NPU_CMD` macro, serving as an adapter layer
to simplify the invocation of `aclnn` operators on Ascend NPUs.

**Key Changes:**
* **Adapter Layer:** Added `EXEC_NPU_CMD` macro and related dependencies
to standardize `aclnn` calls.
* **Operator Support:** Integrated `grouped_matmul_swiglu_quant` as a
reference implementation to demonstrate the usage of the new macro.

---


- vLLM version: v0.11.2

---------

Signed-off-by: SlightwindSec <slightwindsec@gmail.com>
2025-11-27 21:56:18 +08:00

// Copyright (c) 2020, Huawei Technologies Co., Ltd
// All rights reserved.
//
// This source code is licensed under the BSD-style license found in the
// LICENSE file in the root directory of this source tree.
#pragma once
#include <c10/core/StorageImpl.h>
#include "NPUStorageImpl.h"
namespace vllm_ascend
{
class NPUBridge
{
public:
// at::Tensor to NPUStorageImpl
static NPUStorageImpl *GetNpuStorageImpl(const at::Tensor &tensor);
// c10::StorageImpl to NPUStorageImpl
static NPUStorageImpl *GetNpuStorageImpl(c10::StorageImpl *storageImpl);
// c10::Storage to NPUStorageImpl
static NPUStorageImpl *GetNpuStorageImpl(c10::Storage &&storage);
// at::Tensor to NPUStorageDesc
static NPUStorageDesc &GetNpuStorageImplDesc(const at::Tensor &tensor);
};
} // namespace vllm_ascend