[Refactor] 310p_e2e test case update (#6539)

### What this PR does / why we need it?
This pull request extends the test suite with new end-to-end test cases
for Qwen3 models on the 310P hardware platform. The goal is to verify
the stability and correctness of these models under diverse operational
conditions, including various parallelism strategies, data types, and
quantization methods.
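As a rough illustration of how such a test matrix is typically built (the concrete model names, parallel sizes, and quantization methods below are hypothetical placeholders, not taken from this PR), the configuration axes can be crossed into a flat list of cases:

```python
from itertools import product

# Hypothetical configuration axes; the actual suite defines its own.
MODELS = ["Qwen3-0.6B", "Qwen3-8B"]
TP_SIZES = [1, 2]                # tensor-parallel degrees
DTYPES = ["float16", "bfloat16"]
QUANT_METHODS = [None, "w8a8"]   # None = unquantized baseline

def build_cases():
    """Cross every axis into one flat list of test-case dicts."""
    return [
        {"model": m, "tp": tp, "dtype": dt, "quant": q}
        for m, tp, dt, q in product(MODELS, TP_SIZES, DTYPES, QUANT_METHODS)
    ]

cases = build_cases()
print(len(cases))  # 2 * 2 * 2 * 2 = 16 combinations
```

Each dict would then drive one e2e run (e.g. via a pytest parameterization), so every combination of parallelism, dtype, and quantization gets exercised.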

### Does this PR introduce _any_ user-facing change?

No

### How was this patch tested?

E2E test
- vLLM version: v0.15.0
- vLLM main: https://github.com/vllm-project/vllm/commit/v0.15.0

---------

Signed-off-by: pu-zhe <zpuaa@outlook.com>
Author: pu-zhe
Date: 2026-02-07 09:28:37 +08:00 (committed by GitHub)
Commit: 1cc225711d (parent c3db1aca2f)
8 changed files with 144 additions and 142 deletions


```diff
@@ -251,9 +251,11 @@ class AscendSharedFusedMoE310(SharedFusedMoE, AscendFusedMoE310):
         shared_experts: torch.nn.Module,
         gate: torch.nn.Module | None = None,
         use_overlapped: bool = True,
+        routed_input_transform: torch.nn.Module | None = None,
         **kwargs,
     ):
         AscendFusedMoE310.__init__(self, **kwargs)
+        self._routed_input_transform = routed_input_transform
         self._shared_experts = shared_experts
         self.use_overlapped = use_overlapped
         self.shared_expert_stream = None
```
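This hunk threads an optional `routed_input_transform` module through the constructor. A minimal, framework-free sketch of the same "optional hook" pattern follows; plain Python callables stand in for `torch.nn.Module`, and the class and forward logic here are illustrative, not the actual vllm-ascend implementation:

```python
from typing import Callable, Optional

class SharedFusedMoESketch:
    """Illustrative stand-in for the optional-transform wiring above."""

    def __init__(
        self,
        shared_experts: Callable[[list[float]], list[float]],
        routed_input_transform: Optional[Callable[[list[float]], list[float]]] = None,
    ):
        self._shared_experts = shared_experts
        self._routed_input_transform = routed_input_transform

    def forward(self, hidden: list[float]) -> list[float]:
        # Apply the transform only when one was provided; otherwise the
        # input passes through unchanged, preserving existing behavior.
        routed = hidden
        if self._routed_input_transform is not None:
            routed = self._routed_input_transform(routed)
        return self._shared_experts(routed)

moe = SharedFusedMoESketch(
    shared_experts=lambda xs: [x + 1.0 for x in xs],
    routed_input_transform=lambda xs: [2.0 * x for x in xs],
)
print(moe.forward([1.0, 2.0]))  # [3.0, 5.0]
```

Defaulting the hook to `None` keeps callers that never pass a transform fully backward compatible, which matches the diff adding the parameter with a `| None = None`-style default.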


```diff
@@ -25,9 +25,7 @@ from vllm_ascend.worker.worker import NPUWorker, init_workspace_manager
 class NPUWorker310(NPUWorker):
     def init_device(self):
         self.device = self._init_device()
-        # TODO: There is accuracy issue when jit_compile is disabled currently.
-        torch_npu.npu.set_compile_mode(jit_compile=True)
+        torch_npu.npu.set_compile_mode(jit_compile=False)
         init_workspace_manager(self.device, num_ubatches=1)
```
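To make the resulting `init_device` sequence on 310P concrete, here is a sketch using hypothetical stubs in place of `torch_npu` and the workspace manager (only the call order and the `jit_compile=False` setting mirror the diff; everything else is invented for illustration):

```python
# Recorded call log so the initialization order can be checked.
calls = []

class FakeNPU:
    """Stub standing in for torch_npu.npu."""
    @staticmethod
    def set_compile_mode(jit_compile: bool):
        calls.append(("set_compile_mode", jit_compile))

def init_workspace_manager(device, num_ubatches):
    """Stub standing in for the real workspace-manager initializer."""
    calls.append(("init_workspace_manager", device, num_ubatches))

class NPUWorker310Sketch:
    def _init_device(self):
        calls.append(("init_device",))
        return "npu:0"

    def init_device(self):
        self.device = self._init_device()
        # After this patch, JIT compilation is disabled on 310P.
        FakeNPU.set_compile_mode(jit_compile=False)
        init_workspace_manager(self.device, num_ubatches=1)

NPUWorker310Sketch().init_device()
print(calls)
```

The key behavioral point is that the device is initialized first, the compile mode is fixed before any kernels run, and the workspace manager is set up last.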