[2/4][Refactor] Refactor torchair utils (#1892)
There is a lot of torchair-specific logic in the common code, which makes it hard to maintain. We will create a new torchair module and move torchair-related logic there. I plan to add 4 PRs:
1. Refactor worker
2. Refactor utils (this PR)
- a simple change that moves all torchair-related util functions to the torchair module
3. Refactor model_runner
4. Refactor attention
- vLLM version: v0.9.2
- vLLM main:
8188196a1c
Signed-off-by: wangxiyuan <wangxiyuan1007@gmail.com>
@@ -26,9 +26,9 @@ from vllm.distributed.parallel_state import get_ep_group
 import vllm_ascend.envs as envs
 from vllm_ascend.ascend_config import get_ascend_config
 from vllm_ascend.ops.fused_moe import select_experts
+from vllm_ascend.torchair.utils import npu_stream_switch, npu_wait_tensor
 from vllm_ascend.utils import (ACL_FORMAT_FRACTAL_NZ, FusedMoEState,
-                               dispose_tensor, get_fused_moe_state,
-                               npu_stream_switch, npu_wait_tensor)
+                               dispose_tensor, get_fused_moe_state)
 
 
 def apply_mlp(hidden_states: torch.Tensor,
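For downstream code, a move like this amounts to re-pointing imports from the common utils module to the new torchair module. The sketch below illustrates the general pattern with hypothetical stand-in modules; the function bodies are placeholders (the real npu_stream_switch / npu_wait_tensor require Ascend NPU hardware), and the deprecation shim is an assumption shown only for illustration -- this PR does a clean move and updates importers instead of shimming.

```python
import sys
import types
import warnings

# Stand-in for the new torchair module. The body is a placeholder
# (assumption): the real helper lives in vllm_ascend.torchair.utils.
torchair_utils = types.ModuleType("torchair_utils")
torchair_utils.npu_stream_switch = lambda stream: f"switch:{stream}"
sys.modules["torchair_utils"] = torchair_utils

# A deprecated re-export in the old location is one way to keep existing
# call sites working during such a split (not what this PR does).
common_utils = types.ModuleType("common_utils")

def _moved(name):
    def shim(*args, **kwargs):
        warnings.warn(f"{name} has moved to torchair_utils",
                      DeprecationWarning, stacklevel=2)
        return getattr(sys.modules["torchair_utils"], name)(*args, **kwargs)
    return shim

common_utils.npu_stream_switch = _moved("npu_stream_switch")
sys.modules["common_utils"] = common_utils

# New call sites import from the torchair module directly:
from torchair_utils import npu_stream_switch
print(npu_stream_switch("s0"))  # switch:s0

# Old call sites still run, but emit a DeprecationWarning when called:
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    from common_utils import npu_stream_switch as old_switch
    print(old_switch("s0"))  # switch:s0
```

The same idea scales to the remaining PRs in the series: each moves a slice of torchair logic behind the new module boundary while callers switch to the new import path.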