[ModelRunner] Revert "[Fix] Pads query_start_loc to satisfy FIA/TND constraint (#6459)
This reverts commit 56f5d3bd49.

### What this PR does / why we need it?
The patch https://github.com/vllm-project/vllm-ascend/pull/6357 breaks functionality in the spec_decode scenario; let's revert it first to make CI happy.

### Does this PR introduce _any_ user-facing change?

### How was this patch tested?
- vLLM version: v0.14.1
- vLLM main: dc917cceb8

Signed-off-by: wangli <wangli858794774@gmail.com>
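For readers without the reverted patch at hand, here is a minimal, hypothetical sketch of what padding `query_start_loc` to a fixed bucket size can look like, and why it can trip up speculative decoding: a consumer that infers the request count from the tensor's length suddenly sees phantom zero-length requests. The helper name `pad_query_start_loc` and the `padded_num_reqs` argument are illustrative, not the actual code from #6357.

```python
import torch

def pad_query_start_loc(query_start_loc: torch.Tensor,
                        padded_num_reqs: int) -> torch.Tensor:
    # query_start_loc holds cumulative token offsets, one entry per
    # request plus a leading 0, e.g. [0, 3, 7] for requests of 3 and 4 tokens.
    num_reqs = query_start_loc.shape[0] - 1
    if num_reqs >= padded_num_reqs:
        return query_start_loc
    # Repeat the final offset so the padded slots describe zero-length
    # requests, keeping the tensor monotonically non-decreasing.
    pad = query_start_loc[-1].expand(padded_num_reqs - num_reqs)
    return torch.cat([query_start_loc, pad])

qsl = pad_query_start_loc(torch.tensor([0, 3, 7]), padded_num_reqs=4)
print(qsl)               # tensor([0, 3, 7, 7, 7])
print(qsl.shape[0] - 1)  # 4 "requests", but only 2 are real
```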
@@ -44,29 +44,6 @@ CASE_DS_ACLGRAPH = LLMTestCase(
     ],
 )
 
-CASE_QWEN_FULL = LLMTestCase(
-    model="Qwen/Qwen3-0.6B",
-    prompts=PROMPTS_SHORT,
-    golden_answers=[
-        " Lina. I'm a 22-year-old student from China. I'm interested in studying in the US. I want to know if there are any",
-        ' the same as the president of the United Nations. This is because the president of the United States is the same as the president of the United Nations. The president',
-        ' Paris. The capital of France is also the capital of the Republic of France. The capital of France is also the capital of the European Union. The capital of',
-        ' not just a technological frontier but a profound transformation of how we live, work, and interact with the world. As we stand at the intersection of artificial intelligence and'
-    ],
-)
-
-CASE_DS_FULL = LLMTestCase(
-    model="vllm-ascend/DeepSeek-V2-Lite-W8A8",
-    quantization="ascend",
-    prompts=PROMPTS_SHORT,
-    golden_answers=[
-        '\nI am a 20 year old female, and I have been suffering from depression for 3 years now. I have been on medication for 2',
-        ' a man who has been in the public eye for decades. He has been a senator, a governor, and a businessman. He has also been married to the',
-        ' Paris, which is also the largest city in the country. The city is located on the River Seine and is known for its beautiful architecture, museums, and art',
-        ' here, and it’s not what you think.\nThe future of AI is here, and it’s not what you think.\nThe future of'
-    ],
-)
-
 CASE_QWEN_FULL_DECODE_ONLY = LLMTestCase(
     model="Qwen/Qwen3-0.6B",
     prompts=PROMPTS_LONG,
@@ -117,23 +94,6 @@ def test_piecewise_res_consistency(cur_case: LLMTestCase):
                   sampling_params=cur_case.sampling_params,
                   golden_answers=cur_case.golden_answers)
 
-@pytest.mark.parametrize(
-    "cur_case", [CASE_QWEN_FULL, CASE_DS_FULL])
-def test_full_res_consistency(cur_case: LLMTestCase, monkeypatch):
-    monkeypatch.delenv("HCCL_OP_EXPANSION_MODE", raising=False)
-    runner_kwargs = {
-        "model_name": cur_case.model,
-        "max_model_len": 1024,
-        "compilation_config": {
-            "cudagraph_capture_sizes": [4, 8, 32, 64],
-            "cudagraph_mode": "FULL_DECODE_ONLY"
-        },
-        "quantization": cur_case.quantization,
-    }
-    gen_and_valid(runner_kwargs=runner_kwargs,
-                  prompts=cur_case.prompts,
-                  sampling_params=cur_case.sampling_params,
-                  golden_answers=cur_case.golden_answers)
 
 @pytest.mark.parametrize(
     "cur_case", [CASE_QWEN_FULL_DECODE_ONLY, CASE_DS_FULL_DECODE_ONLY])