Main2main upgrade to vllm 0317 afternoon (#7409)
### What this PR does / why we need it?
1.fix "TypeError: get_attn_backend() remove variable": [Refactor
`check_and_update_config`](https://github.com/vllm-project/vllm/pull/35122)
2.fix [Rename `compile_ranges_split_points` to
`compile_ranges_endpoints`](https://github.com/vllm-project/vllm/pull/36027)
3.fix "RuntimeError: device_allocator not a DeviceAllocator":[Replace
memory related torch.cuda
APIs"](https://github.com/vllm-project/vllm/pull/37031)
4.fix [Support multiple KV groups in OffloadingSpec
](https://github.com/vllm-project/vllm/pull/36610) removed
self.offloaded_block_size and changed self.gpu_block_size from a scalar
to a tuple of per-group block sizes, adding block_size_factor.
5.fix [Consolidate
SupportsEagle](https://github.com/vllm-project/vllm/pull/36063) renamed
get_eagle3_aux_hidden_state_layers() to
get_eagle3_default_aux_hidden_state_layers() and added a
supports_eagle3() guard before calling it.
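For item 4, a minimal sketch of the shape change, assuming `block_size_factor`
is derived as the ratio of offloaded block size to GPU block size; only the
attribute names come from the upstream PR, the class itself is hypothetical:

```python
# Hypothetical sketch of the OffloadingSpec shape change (item 4). Only the
# attribute names gpu_block_size and block_size_factor come from the upstream
# PR; the class, constructor, and derivation below are illustrative guesses.
class OffloadingSpecSketch:

    def __init__(self, gpu_block_sizes: tuple[int, ...],
                 offloaded_block_size: int) -> None:
        # Before: gpu_block_size was a single int, and offloaded_block_size
        # was stored as its own attribute.
        # After: one GPU block size per KV group; offloaded_block_size is no
        # longer stored, and a per-group factor relates the two sizes.
        self.gpu_block_size = gpu_block_sizes
        self.block_size_factor = tuple(offloaded_block_size // size
                                       for size in gpu_block_sizes)


# Two KV groups with 16-token GPU blocks and 64-token offloaded blocks.
spec = OffloadingSpecSketch(gpu_block_sizes=(16, 16), offloaded_block_size=64)
assert spec.block_size_factor == (4, 4)
```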
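For item 5, a caller-side sketch of the new guard; the import path and the
helper function are assumptions, only `supports_eagle3()` and the renamed hook
come from the PR:

```python
# Sketch of the EAGLE3 guard (item 5). The import path and this helper are
# assumptions for illustration; only supports_eagle3() and the renamed
# get_eagle3_default_aux_hidden_state_layers() come from the upstream PR.
from vllm.model_executor.models.interfaces import supports_eagle3


def aux_hidden_state_layers(model) -> tuple[int, ...]:
    # Previously get_eagle3_aux_hidden_state_layers() was called directly;
    # the renamed hook must now sit behind supports_eagle3(), since only
    # models implementing the EAGLE3 interface provide it.
    if supports_eagle3(model):
        return model.get_eagle3_default_aux_hidden_state_layers()
    return ()
```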
### Does this PR introduce _any_ user-facing change?
No.
### How was this patch tested?
E2E tests.
- vLLM version: v0.17.0
- vLLM main: 8a680463fa
---------
Signed-off-by: leo-pony <nengjunma@outlook.com>
Co-authored-by: Claude Code <noreply@anthropic.com>
```diff
@@ -32,6 +32,12 @@ TENSOR_PARALLELS = [1]
 @pytest.mark.parametrize("model", MODELS)
 @pytest.mark.parametrize("tp_size", TENSOR_PARALLELS)
 async def test_models(model: str, tp_size: int) -> None:
+    from vllm_ascend.utils import vllm_version_is
+
+    if not vllm_version_is("0.17.0"):
+        pytest.skip(
+            "EPLB output is different without EPLB, see issue: https://github.com/vllm-project/vllm-ascend/issues/7408",
+        )
     encode_port = get_open_port()
     pd_port = get_open_port()
     vllm_server_args = [
```
```diff
@@ -76,6 +76,12 @@ def test_qwen3_moe_distributed_aiv_tp2():
 
 @pytest.mark.asyncio
 async def test_qwen3_moe_w8a8_distributed_tp2_ep_dynamic_eplb():
+    from vllm_ascend.utils import vllm_version_is
+
+    if not vllm_version_is("0.17.0"):
+        pytest.skip(
+            "EPLB output is different without EPLB, see issue: https://github.com/vllm-project/vllm-ascend/issues/7408",
+        )
     model = "vllm-ascend/Qwen3-30B-A3B-W8A8"
     port = get_open_port()
     compilation_config = json.dumps({"cudagraph_capture_sizes": [8]})
```