[Refactor]310p_e2e test case update (#6539)
### What this PR does / why we need it?
This pull request enhances the test suite by adding new end-to-end test cases for Qwen3 models on the 310P hardware platform. The goal is to verify the stability and correctness of these models under diverse operational conditions, including various parallelism strategies, data types, and quantization methods.

### Does this PR introduce _any_ user-facing change?
No

### How was this patch tested?
E2E test

- vLLM version: v0.15.0
- vLLM main: https://github.com/vllm-project/vllm/commit/v0.15.0

---------

Signed-off-by: pu-zhe <zpuaa@outlook.com>
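The description above mentions covering combinations of parallelism strategies, data types, and quantization methods. A hypothetical sketch of how such a test matrix could be enumerated (the names `PARALLEL_SIZES`, `DTYPES`, `QUANT`, and `e2e_configs` are illustrative assumptions, not the actual test code in this PR):

```python
from itertools import product

# Hypothetical parameter matrix; the real tests live under tests/e2e/310p/.
PARALLEL_SIZES = [1, 2, 4]      # tensor-parallel degrees exercised on 310P
DTYPES = ["float16"]            # data types under test
QUANT = [None, "w8a8"]          # unquantized vs. W8A8 quantization

def e2e_configs():
    """Yield every (tp_size, dtype, quantization) combination to exercise."""
    yield from product(PARALLEL_SIZES, DTYPES, QUANT)

configs = list(e2e_configs())
```

In a pytest suite, a matrix like this would typically be fed to `@pytest.mark.parametrize` so each combination runs as its own test case.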
.github/workflows/_e2e_test.yaml (vendored): 6 lines changed
@@ -403,7 +403,7 @@ jobs:
       PYTORCH_NPU_ALLOC_CONF: max_split_size_mb:256
       VLLM_WORKER_MULTIPROC_METHOD: spawn
     run: |
-      pytest -sv --durations=0 tests/e2e/310p/test_offline_inference_310p.py
+      pytest -sv --durations=0 tests/e2e/310p/singlecard/test_dense_model_singlecard.py
 
   e2e_310p-4cards:
     name: 310p multicards 4cards
@@ -462,5 +462,5 @@ jobs:
       VLLM_WORKER_MULTIPROC_METHOD: spawn
     run: |
       pytest -sv --durations=0 \
-        tests/e2e/310p/test_offline_inference_parallel_310p.py \
-        tests/e2e/310p/test_offline_inference_w8a8_310p.py
+        tests/e2e/310p/multicard/test_dense_model_multicard.py \
+        tests/e2e/310p/multicard/test_moe_model_multicard.py