Files
xc-llm-ascend/tests/e2e/310p/test_offline_inference_parallel_310p.py
wangxiyuan a25209252f [CI] Add 310p e2e test back (#5797)
This PR adds the 310P e2e test back to ensure related PRs will be tested on 310P.
1. For light e2e, we'll run the 310P test if the changed files are located
in `vllm_ascend/_310p`.
2. For full e2e, we'll always run the 310P test.
3. For the main2main test, we'll stop running the 310P test.
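
The gating rules above can be sketched as a small predicate. This is a hypothetical illustration, not the actual CI implementation; the helper name `should_run_310p` and the mode strings are assumptions.

```python
def should_run_310p(mode: str, changed_files: list[str]) -> bool:
    """Decide whether the 310P e2e test runs for a given CI mode (sketch)."""
    if mode == "full":
        # Full e2e always runs the 310P test.
        return True
    if mode == "light":
        # Light e2e runs it only when 310P-related files changed.
        return any(f.startswith("vllm_ascend/_310p") for f in changed_files)
    # main2main (and any other mode) skips the 310P test.
    return False
```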

- vLLM version: v0.13.0
- vLLM main:
2f4e6548ef

Signed-off-by: wangxiyuan <wangxiyuan1007@gmail.com>
2026-01-15 15:47:13 +08:00


import pytest

from tests.e2e.conftest import VllmRunner


@pytest.mark.parametrize("dtype", ["float16"])
@pytest.mark.parametrize("max_tokens", [5])
@pytest.mark.skip(reason="310p does not support parallel inference now. Fix me")
def test_models(dtype: str, max_tokens: int) -> None:
    example_prompts = [
        "Hello, my name is",
        "The future of AI is",
    ]
    with VllmRunner("Qwen/Qwen3-0.6B",
                    tensor_parallel_size=4,
                    dtype=dtype,
                    max_model_len=2048,
                    enforce_eager=True) as vllm_model:
        vllm_model.generate_greedy(example_prompts, max_tokens)