Refactor the ops PyTorch adapter, clean up csrc/torch_binding.cpp (#6732)

### What this PR does / why we need it?
Refactor the ops PyTorch adapter and clean up csrc/torch_binding.cpp. For more details, see
https://github.com/vllm-project/vllm-ascend/issues/6486

### Does this PR introduce _any_ user-facing change?
No

### How was this patch tested?
Installed the new package to test the modification; here is the result:


- vLLM version: v0.15.0
- vLLM main:
9562912cea

---------

Signed-off-by: liziyu <liziyu16@huawei.com>
Signed-off-by: wangxiaoteng <wangxiaoteng@huawei.com>
Signed-off-by: luomin2005 <luomin2005@huawei.com>
Co-authored-by: liziyu <56102866+liziyu179@users.noreply.github.com>
Co-authored-by: wangxiaoteng <wangxiaoteng@huawei.com>
Commit f41eeeb11e by luomin2005 on 2026-02-24 09:12:43 +08:00, committed by GitHub
Parent commit: f0caeeadcb
15 changed files with 1037 additions and 735 deletions


@@ -0,0 +1,40 @@
/*
* Copyright (c) Huawei Technologies Co., Ltd. 2026. All rights reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
#ifndef APPLY_TOP_K_TOP_P_CUSTOM_TORCH_ADPT_H
#define APPLY_TOP_K_TOP_P_CUSTOM_TORCH_ADPT_H

#include <ATen/ATen.h>
#include <c10/util/Optional.h>
// EXEC_NPU_CMD is assumed to come from the torch_npu / op-plugin aclnn adapter
// headers (e.g. pytorch_npu_helper.hpp); the exact include path depends on the
// build setup and is not shown in this hunk.

namespace vllm_ascend {

// Adapter that forwards a top-k/top-p filtering request on `logits` to the
// custom aclnn kernel. `p` and `k` are optional, but at least one must be set.
// Defined inline so the header can be included from multiple translation units
// without violating the one-definition rule.
inline at::Tensor npu_apply_top_k_top_p(
    const at::Tensor& logits,
    const c10::optional<at::Tensor>& p,
    const c10::optional<at::Tensor>& k)
{
    TORCH_CHECK(p.has_value() || k.has_value(),
        "apply_top_k_top_p: p and k cannot be None at the same time.");
    // Output buffer matching the shape, dtype, and device of the input logits.
    at::Tensor out = at::empty_like(logits);
    // Dispatch the custom aclnn operator to the NPU.
    EXEC_NPU_CMD(aclnnApplyTopKTopPCustom, logits, p, k, out);
    return out;
}

} // namespace vllm_ascend

#endif // APPLY_TOP_K_TOP_P_CUSTOM_TORCH_ADPT_H
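
For context, here is a minimal sketch of how such an adapter is typically wired up in a binding file like csrc/torch_binding.cpp. The fragment name `_C`, the schema string, and the header path are assumptions for illustration, not code from this PR; `PrivateUse1` is the dispatch key torch_npu uses for Ascend devices.

```cpp
// A minimal registration sketch, NOT code from this PR: the "_C" namespace,
// schema string, and header path below are assumptions about how the binding
// file exposes the adapter.
#include <torch/library.h>
#include "apply_top_k_top_p_custom_torch_adpt.h" // hypothetical header path

// Declare the operator schema; optional tensors are spelled "Tensor?".
TORCH_LIBRARY_FRAGMENT(_C, ops) {
  ops.def("apply_top_k_top_p(Tensor logits, Tensor? p, Tensor? k) -> Tensor");
}

// Bind the NPU implementation. torch_npu exposes Ascend devices through the
// PrivateUse1 dispatch key, so NPU tensors route to this kernel.
TORCH_LIBRARY_IMPL(_C, PrivateUse1, ops) {
  ops.impl("apply_top_k_top_p", &vllm_ascend::npu_apply_top_k_top_p);
}
```

Under these assumed names, the op would then be reachable from Python as `torch.ops._C.apply_top_k_top_p(logits, p, k)` on NPU tensors.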