[Misc] Move lora patch file into lora module (#2797)

Clean up a useless file in the patch module. The LoRA support list is correct in vLLM Ascend as-is, so there is no need to patch vLLM.


- vLLM version: v0.10.1.1
- vLLM main:
f4962a6d55

Signed-off-by: wangxiyuan <wangxiyuan1007@gmail.com>
This commit is contained in:
wangxiyuan
2025-09-08 21:42:12 +08:00
committed by GitHub
parent 85d989a3b9
commit 7d6d9449a8
10 changed files with 64 additions and 72 deletions

@@ -559,9 +559,8 @@ class TestNPUPlatform(TestBase):
     def test_get_punica_wrapper(self):
         result = self.platform.get_punica_wrapper()
-        self.assertEqual(
-            result,
-            "vllm_ascend.lora.punica_wrapper.punica_npu.PunicaWrapperNPU")
+        self.assertEqual(result,
+                         "vllm_ascend.lora.punica_npu.PunicaWrapperNPU")
 
     @patch("torch.npu.reset_peak_memory_stats")
     @patch("torch.npu.max_memory_allocated")
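
The test above checks the dotted path string that `get_punica_wrapper()` returns; vLLM later resolves such a string to the actual wrapper class by importing the module and looking up the attribute. A minimal sketch of that resolution step, assuming the plugin mechanism works roughly like this (the helper name `resolve_by_qualname` is illustrative, not vLLM's actual API):

```python
import importlib


def resolve_by_qualname(qualname: str):
    """Resolve a dotted path like "pkg.module.ClassName" to the object it names."""
    # Split off the final attribute, import the module part, then fetch the attribute.
    module_name, _, attr = qualname.rpartition(".")
    module = importlib.import_module(module_name)
    return getattr(module, attr)


# Hypothetical usage mirroring the test's expectation after the module move:
# wrapper_cls = resolve_by_qualname(
#     "vllm_ascend.lora.punica_npu.PunicaWrapperNPU")
```

Because resolution happens lazily from the string, moving the file only requires updating the path returned by the platform hook, which is exactly what the updated test asserts.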