[Eagle] Fix attn_mask index out of range in high concurrency situations (#3187)

### What this PR does / why we need it?
- Fixes a bug where many concurrent requests (sometimes >100) to eagle3-qwen3-8b often trigger an "attn_mask index out of range" error

### Does this PR introduce _any_ user-facing change?
N/A

### How was this patch tested?
```
python -m vllm.entrypoints.openai.api_server \
    --host 0.0.0.0 --port 8000 \
    --served-model-name Eagle3 \
    --model Qwen/Qwen3-8B \
    --seed 42 -tp 1 \
    --speculative_config '{"model": "Tengyunw/qwen3_8b_eagle3", "draft_tensor_parallel_size": 1, "num_speculative_tokens": 5, "method": "eagle3"}'
```

Co-authored-by: liuruijin17 <ricklrj@outlook.com>
- vLLM version: v0.10.2
- vLLM main: 52d0cb8458

Signed-off-by: Icey <1790571317@qq.com>
Author: Icey
Date: 2025-09-28 18:09:26 +08:00
Committed by: GitHub
Parent: 1705501ae2
Commit: 68c5401ad6

```
@@ -1,5 +1,4 @@
 # SPDX-License-Identifier: Apache-2.0
-import os
 from typing import Optional
 import numpy as np
@@ -72,8 +71,7 @@ class EagleProposer(Proposer):
             1,
             device=device,
             dtype=torch.int32)
-        attn_mask_len = min(self.vllm_config.model_config.max_model_len,
-                            int(os.getenv("PAGED_ATTENTION_MASK_LEN", 10000)))
+        attn_mask_len = self.vllm_config.model_config.max_model_len
         self.attn_mask_builder = AttentionMaskBuilder(
             attn_mask_len, self.vllm_config.model_config.dtype)
```
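The failure mode the diff removes can be sketched with a toy stand-in: if the attention mask is pre-built with a length capped below `max_model_len` (the old `min(max_model_len, 10000)` path), any request whose token position grows past the cap indexes off the end of the mask. The sketch below is a minimal illustration, not vLLM's actual `AttentionMaskBuilder`; the class name, sizes, and positions are hypothetical.

```
# Toy illustration (not vLLM code) of why capping the mask length
# below max_model_len can overflow under long or concurrent requests.
import numpy as np

class ToyMaskBuilder:
    """Hypothetical stand-in for AttentionMaskBuilder: a fixed-size causal mask."""
    def __init__(self, mask_len: int):
        # Lower-triangular causal mask, pre-built at mask_len x mask_len.
        self.mask = np.tril(np.ones((mask_len, mask_len), dtype=bool))

    def row(self, position: int) -> np.ndarray:
        # Fails once a request's position exceeds the pre-built size.
        if position >= self.mask.shape[0]:
            raise IndexError("attn_mask index out of range")
        return self.mask[position]

max_model_len = 4096                         # toy model limit
capped = ToyMaskBuilder(min(max_model_len, 1000))  # old-style cap
full = ToyMaskBuilder(max_model_len)               # sized to max_model_len

full.row(1200)       # fine: position is within max_model_len
try:
    capped.row(1200) # position past the cap: raises IndexError
except IndexError as e:
    print(e)
```

Sizing the mask to `max_model_len` directly, as the patch does, guarantees every valid position has a mask row, at the cost of a larger pre-built mask.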