[TEST]Add sending request with and without chat (#5286)

### What this PR does / why we need it?
This PR adds helper methods for sending requests both with and without the chat endpoint; we need them to cover the following test cases.
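As a rough illustration of what the new chat-request helper does, here is a minimal sketch. The real implementation lives in `tools/send_request.py`; this version is hypothetical and only assumes an OpenAI-compatible server exposing `POST /v1/chat/completions` (the `base_url` parameter and `build_chat_payload` helper are illustrative, not the PR's actual signatures).

```python
# Hypothetical sketch of a chat-request helper like send_v1_chat_completions.
# Assumption: the target server speaks the OpenAI-compatible chat API.
import json
import urllib.request
from typing import Optional


def build_chat_payload(prompt: str, model: str,
                       request_args: Optional[dict] = None) -> dict:
    """Wrap a plain prompt in the chat-completions message format."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    # Extra keyword arguments (e.g. max_tokens) are merged into the payload.
    payload.update(request_args or {})
    return payload


def send_v1_chat_completions(prompt: str, model: str, base_url: str,
                             request_args: Optional[dict] = None) -> dict:
    """POST the prompt to /v1/chat/completions and return the parsed JSON reply."""
    body = json.dumps(build_chat_payload(prompt, model, request_args)).encode()
    req = urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```

A non-chat ("text") request would differ only in the endpoint (`/v1/completions`) and in sending a bare `prompt` field instead of a `messages` list.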

### Does this PR introduce _any_ user-facing change?
No

### How was this patch tested?
By running the updated e2e test.

- vLLM version: release/v0.13.0
- vLLM main: ad32e3e19c

---------

Signed-off-by: jiangyunfan1 <jiangyunfan1@h-partners.com>
Commit: 48854aef5c (parent 0dfdfa9526)
Author: jiangyunfan1
Date: 2025-12-26 18:04:17 +08:00
Committed by: GitHub
2 changed files with 33 additions and 19 deletions


```diff
@@ -20,7 +20,7 @@ from vllm.utils.network_utils import get_open_port
 from tests.e2e.conftest import RemoteOpenAIServer
 from tools.aisbench import run_aisbench_cases
-from tools.send_request import send_text_request
+from tools.send_request import send_v1_chat_completions
 MODELS = [
     "vllm-ascend/Qwen3-32B-W8A8",
@@ -90,9 +90,9 @@ async def test_models(model: str, tp_size: int) -> None:
                            server_port=port,
                            env_dict=env_dict,
                            auto_port=False) as server:
-        send_text_request(prompts[0],
-                          model,
-                          server,
-                          request_args=api_keyword_args)
+        send_v1_chat_completions(prompts[0],
+                                 model,
+                                 server,
+                                 request_args=api_keyword_args)
         # aisbench test
         run_aisbench_cases(model, port, aisbench_cases)
```