[CORE] concurrent partial prefills (#2372)

# What this PR does / why we need it?

When processing a mix of large and small requests, this change significantly reduces the TTFT of responses. Please refer to
https://github.com/vllm-project/vllm/pull/10235, which achieves the same
effect by simply limiting the number of concurrent prefills for long
requests. That solution applies to both AscendScheduler (V0) and the vLLM
Scheduler (V1), and tests show that TTFT improves significantly when
handling such mixed requests. However, this capability is currently
missing when AscendScheduler is enabled.
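The core idea can be sketched in plain Python (function and parameter names here are hypothetical illustrations, not the PR's actual code): while filling the token budget in FCFS order, any request at or above `long_prefill_token_threshold` counts as "long", and at most `max_long_partial_prefills` long requests may prefill concurrently; extra long requests are skipped so the short requests queued behind them can still be scheduled.

```python
def select_prefills(waiting_lens, token_budget, long_threshold, max_long):
    """Greedy FCFS prefill selection with a cap on concurrent long prefills.

    waiting_lens: prompt lengths (in tokens) of waiting requests, FCFS order.
    Returns the lengths chosen for this scheduling step.
    """
    scheduled, long_count = [], 0
    for n_tokens in waiting_lens:
        if n_tokens > token_budget:
            break  # token budget exhausted for this step
        if n_tokens >= long_threshold:
            if long_count >= max_long:
                continue  # cap reached: skip this long request, let short ones pass
            long_count += 1
        scheduled.append(n_tokens)
        token_budget -= n_tokens
    return scheduled
```

With a budget of 22000 and only one long prefill allowed, a second medium request is held back while the small requests behind it still get scheduled in the same step.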

This benchmark used the Qwen3-8B model, with a context length of 128K,
running on a single card.

Regarding dataset selection, the sharegpt_clean dataset is used, with
its content concatenated and cropped. Small requests of 50 tokens and
medium requests of 10240 tokens were constructed (there were also large
requests of 102400 tokens, but these were ignored because, under the
Prefill First scheduling strategy, max_num_batched_tokens would never be
set that large). When loading vLLM, max_num_batched_tokens was set to
22000. This budget can accommodate two medium requests plus some short
requests, reflecting an extreme scenario where the budget is almost
entirely occupied by longer requests.
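The dataset preparation step can be sketched as follows; the helper name is hypothetical, and a whitespace split stands in for the model tokenizer (an actual run would tokenize with the Qwen3-8B tokenizer):

```python
def concat_and_crop(texts, n_tokens):
    """Concatenate corpus texts and crop to a fixed token count.

    A whitespace split stands in for the real tokenizer here.
    """
    tokens = " ".join(texts).split()
    return " ".join(tokens[:n_tokens])

# Placeholder corpus standing in for sharegpt_clean content.
corpus = ["sample sharegpt text"] * 10000
small_prompt = concat_and_crop(corpus, 50)      # small request: 50 tokens
medium_prompt = concat_and_crop(corpus, 10240)  # medium request: 10240 tokens
```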

Next, we mix 990 small requests and 100 medium requests into one load
scenario (hereinafter referred to as 10%), and similarly generate load
scenarios with 5% and 1% medium requests.
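Generating these mixes can be sketched with a small helper (hypothetical, not part of the PR); a fixed seed keeps the interleaving reproducible across runs:

```python
import random

def build_mix(n_small, n_medium, small_len=50, medium_len=10240, seed=0):
    """Interleave small and medium request lengths into one workload."""
    lens = [small_len] * n_small + [medium_len] * n_medium
    random.Random(seed).shuffle(lens)  # reproducible shuffle
    return lens

mix_10 = build_mix(990, 100)  # ~10% medium requests
```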

Performance tests were conducted separately with the vLLM Scheduler,
AscendScheduler, and AscendScheduler with long-prompt concurrency set to 1.
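A launch sketch for the benchmark configuration is shown below. The flag names assume current vLLM engine arguments (verify with `vllm serve --help` on your version), and the 512-token threshold is illustrative, taken from the unit tests rather than stated for the benchmark:

```shell
# Serve Qwen3-8B with a 22000-token batch budget and at most one
# concurrent long partial prefill (flag names assumed, not from this PR).
vllm serve Qwen/Qwen3-8B \
  --max-num-batched-tokens 22000 \
  --max-long-partial-prefills 1 \
  --long-prefill-token-threshold 512
```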

- vLLM version: v0.10.2
- vLLM main:
1dfea5f4a9

---------

Signed-off-by: Csrayz <jover@cmbchina.com>
This commit is contained in:
Csrayz
2025-09-24 17:12:55 +08:00
committed by GitHub
parent 2d885869c5
commit 80524f5711
5 changed files with 73 additions and 1 deletions


@@ -50,6 +50,8 @@ class TestAscendSchedulerConfig(TestBase):
                scheduler_cls="vllm_ascend.core.scheduler.AscendScheduler",
                max_num_batched_tokens=2048,
                max_model_len=2048,
                max_long_partial_prefills=1,
                long_prefill_token_threshold=512,
            ),
        )
        self.assertEqual(ascend_config.enable_chunked_prefill, False)
@@ -58,6 +60,8 @@ class TestAscendSchedulerConfig(TestBase):
                         "vllm_ascend.core.scheduler.AscendScheduler")
        self.assertEqual(ascend_config.max_num_batched_tokens, 2048)
        self.assertEqual(ascend_config.encoder_cache_size, 2048)
        self.assertEqual(ascend_config.max_long_partial_prefills, 1)
        self.assertEqual(ascend_config.long_prefill_token_threshold, 512)

    def test_not_implemented_policy(self):
        with self.assertRaises(NotImplementedError) as context:


@@ -221,7 +221,7 @@ class TestAscendScheduler(TestBase):
                         len(requests) - i - 1)

    def test_schedule(self):
        '''Test scheduling.
        Two cases: default APC/no prompt logprobs; APC=True + prompt logprobs
        '''
        scheduler = self.create_scheduler()
@@ -279,6 +279,27 @@ class TestAscendScheduler(TestBase):
        for i, request in enumerate(requests):
            self.assertEqual(scheduler.running[i], request)

    def test_concurrent_partial_prefills_schedule(self):
        '''Test concurrent partial prefills scheduling.
        Total requests = 10; each request has 20 tokens.
        With long_prefill_token_threshold = 1, the scheduler can
        schedule at most max_long_partial_prefills long requests.
        '''
        scheduler = self.create_scheduler()
        scheduler.scheduler_config.chunked_prefill_enabled = False
        scheduler.scheduler_config.max_long_partial_prefills = 2
        scheduler.scheduler_config.long_prefill_token_threshold = 1
        requests = create_requests(num_requests=10, num_tokens=20)
        for request in requests:
            scheduler.add_request(request)
        # Test initial scheduling
        output = scheduler.schedule()
        self.assertEqual(len(output.scheduled_new_reqs),
                         scheduler.scheduler_config.max_long_partial_prefills)
        self.assertEqual(output.scheduled_cached_reqs.num_reqs, 0)
        self.assertEqual(len(output.finished_req_ids), 0)

    def test_schedule_enable_prefix_caching(self):
        '''Test scheduling.
        Two cases: default APC/no prompt logprobs; APC=True + prompt logprobs