[CI] Fix CI by addressing max_split_size_mb config (#3258)

### What this PR does / why we need it?
Fix CI by removing the module-level `PYTORCH_NPU_ALLOC_CONF` (`max_split_size_mb:256`) overrides from the e2e test modules, where they were set at import time.
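
For background, `PYTORCH_NPU_ALLOC_CONF` is torch_npu's counterpart to PyTorch's `PYTORCH_CUDA_ALLOC_CONF`; the `max_split_size_mb` option caps how large a cached block the allocator may split, trading fragmentation against block reuse. A minimal sketch of applying it once at process start, before any NPU allocation (the helper below is illustrative only, not part of this repo):

```python
import os

def set_npu_alloc_conf(max_split_size_mb: int = 256) -> None:
    """Illustrative helper: must run before the NPU caching allocator
    initializes, i.e. before the first tensor lands on an NPU device."""
    # setdefault lets a CI-level export of the variable take precedence.
    os.environ.setdefault(
        "PYTORCH_NPU_ALLOC_CONF", f"max_split_size_mb:{max_split_size_mb}"
    )

set_npu_alloc_conf()  # then import/init torch_npu and run the workload
```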

### Does this PR introduce _any_ user-facing change?
No, test only.

### How was this patch tested?
Full CI passed, especially the eagle one.


- vLLM version: v0.10.2
- vLLM main:
https://github.com/vllm-project/vllm/commit/releases/v0.11.0

Signed-off-by: wangxiyuan <wangxiyuan1007@gmail.com>
Author: wangxiyuan
Date: 2025-09-29 14:05:12 +08:00
Committed by: GitHub
Parent: 69cc99d004
Commit: c73dd8fecb
6 changed files with 4 additions and 19 deletions


@@ -17,7 +17,6 @@
 # limitations under the License.
 #
 import json
-import os
 from typing import Any, Dict
 import jsonschema
@@ -35,7 +34,6 @@ from vllm.outputs import RequestOutput
 from tests.e2e.conftest import VllmRunner
-os.environ["PYTORCH_NPU_ALLOC_CONF"] = "max_split_size_mb:256"
 MODEL_NAME = "Qwen/Qwen3-0.6B"
 GuidedDecodingBackend = ["xgrammar", "guidance", "outlines"]
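
As an aside, if a single test really needed this override, pytest's `monkeypatch` fixture would scope it to that test instead of mutating `os.environ` on module import. Note the caveat: the allocator typically reads the variable only once at initialization, so a per-test override only takes effect if the NPU caching allocator has not yet been created in the process. A sketch under those assumptions, not the approach taken by this PR (which simply removes the override):

```python
import pytest

def test_with_small_split_size(monkeypatch: pytest.MonkeyPatch) -> None:
    # Scoped override: monkeypatch restores the previous value after the
    # test, unlike the module-level assignment removed in this PR.
    monkeypatch.setenv("PYTORCH_NPU_ALLOC_CONF", "max_split_size_mb:256")
    # ... construct VllmRunner / issue requests under the patched config ...
```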