[EPLB] EPLB Config Renaming (#5533)

### What this PR does / why we need it?
1. Rename `num_iterations_eplb_update` to `expert_heat_collection_interval`.
2. Rename `num_wait_worker_iterations` to `algorithm_execution_interval`.
3. Rename `init_redundancy_expert` to `num_redundant_experts`, matching the name of the variable with the same meaning in vLLM.
4. Delete `gate_eplb`, since this feature is no longer needed.
5. Move the EPLB options into an `eplb_config` dict inside `additional_config`.
6. Depends on pr5817.

### Does this PR introduce _any_ user-facing change?

Before this PR:
`--additional-config '{"dynamic_eplb": true,
"num_iterations_eplb_update": 4000, "num_wait_worker_iterations": 150,
"init_redundancy_expert": 16, "expert_map_path": "xxx.json"}'`

After this PR:
`--additional-config
'{"eplb_config": {"dynamic_eplb": true, "expert_heat_collection_interval": 4000,
"algorithm_execution_interval": 150, "num_redundant_experts": 16,
"expert_map_path": "xxx.json"}}'`
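For users migrating existing launch scripts, the key renames and the move into `eplb_config` can be applied mechanically. The sketch below is a hypothetical helper, not part of this PR; it only encodes the old-to-new mapping described above.

```python
# Hypothetical migration helper (not part of the PR): rewrites a pre-#5533
# additional-config dict into the new nested eplb_config schema.
OLD_TO_NEW = {
    "num_iterations_eplb_update": "expert_heat_collection_interval",
    "num_wait_worker_iterations": "algorithm_execution_interval",
    "init_redundancy_expert": "num_redundant_experts",
}
# Keys that belong under eplb_config after this PR.
EPLB_KEYS = {"dynamic_eplb", "expert_map_path", *OLD_TO_NEW}

def migrate_additional_config(old: dict) -> dict:
    # Keep non-EPLB options at the top level; drop the removed gate_eplb flag.
    new = {k: v for k, v in old.items()
           if k not in EPLB_KEYS and k != "gate_eplb"}
    eplb = {OLD_TO_NEW.get(k, k): v for k, v in old.items() if k in EPLB_KEYS}
    if eplb:
        new["eplb_config"] = eplb
    return new
```

Non-EPLB options such as `multistream_overlap_shared_expert` stay at the top level, as in the deployment configs touched by this diff.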

### How was this patch tested?

#### Test: Qwen3-235B with EPLB, num_redundant_experts=16

Without pr5817:
| dataset | version | metric | mode | vllm-api-general-chat |
|----- | ----- | ----- | ----- | -----|
| aime2024 | 604a78 | accuracy | gen | 83.33 |

With pr5817:
| dataset | version | metric | mode | vllm-api-general-chat |
|----- | ----- | ----- | ----- | -----|
| aime2024 | 604a78 | accuracy | gen | 86.67 |

- vLLM version: v0.13.0
- vLLM main: 45c1ca1ca1

Signed-off-by: shenchuxiaofugui <1311027364@qq.com>
Author: LI SHENGYONG
Date: 2026-01-15 10:26:44 +08:00
Committed by: GitHub
Parent: ea01aeaab7
Commit: da958ee386
21 changed files with 174 additions and 349 deletions


@@ -55,7 +55,7 @@ deployment:
}
}'
--additional-config
-'{"dynamic_eplb":true,"num_iterations_eplb_update":2048,"num_wait_worker_iterations":200}'
+'{"enable_prefill_optimizations":true,"enable_weight_nz_layout":true,"eplb_config": {"dynamic_eplb":true,"expert_heat_collection_interval":2048,"algorithm_execution_interval":200}}'
-
server_cmd: >
@@ -92,7 +92,7 @@ deployment:
}
}'
--additional-config
-'{"dynamic_eplb":true,"num_iterations_eplb_update":2048,"num_wait_worker_iterations":200}'
+'{"enable_prefill_optimizations":true,"enable_weight_nz_layout":true,"eplb_config": {"dynamic_eplb":true,"expert_heat_collection_interval":2048,"algorithm_execution_interval":200}}'
-
server_cmd: >
vllm serve vllm-ascend/DeepSeek-R1-0528-W8A8
@@ -130,7 +130,7 @@ deployment:
}
}'
--additional-config
-'{"multistream_overlap_shared_expert":true,"dynamic_eplb":true,"num_iterations_eplb_update":2048,"num_wait_worker_iterations":200}'
+'{"multistream_overlap_shared_expert":true,"dynamic_eplb":true,"expert_heat_collection_interval":2048,"algorithm_execution_interval":200}'
-
server_cmd: >
vllm serve vllm-ascend/DeepSeek-R1-0528-W8A8
@@ -167,7 +167,7 @@ deployment:
}
}'
--additional-config
-'{"multistream_overlap_shared_expert":true,"dynamic_eplb":true,"num_iterations_eplb_update":2048,"num_wait_worker_iterations":200}'
+'{"multistream_overlap_shared_expert":true,"eplb_config": {"dynamic_eplb":true,"expert_heat_collection_interval":2048,"algorithm_execution_interval":200}}'
benchmarks:
perf:
case_type: performance


@@ -51,7 +51,7 @@ deployment:
}
}'
--additional-config
-'{"dynamic_eplb":true,"num_iterations_eplb_update":2048,"num_wait_worker_iterations":200}'
+'{"eplb_config": {"dynamic_eplb":true,"expert_heat_collection_interval":2048,"algorithm_execution_interval":200}}'
-
server_cmd: >
@@ -87,5 +87,5 @@ deployment:
}
}'
--additional-config
-'{"dynamic_eplb":true,"num_iterations_eplb_update":2048,"num_wait_worker_iterations":200}'
+'{"eplb_config": {"dynamic_eplb":true,"expert_heat_collection_interval":2048,"algorithm_execution_interval":200}}'
benchmarks:


@@ -70,11 +70,12 @@ async def test_models(model: str) -> None:
additional_config: dict[str, Any] = {
"enable_shared_expert_dp": False,
"multistream_overlap_shared_expert": False,
-    "dynamic_eplb": True,
-    "num_iterations_eplb_update": 14000,
-    "num_wait_worker_iterations": 30,
-    "init_redundancy_expert": 0,
-    "gate_eplb": False
+    "eplb_config": {
+        "dynamic_eplb": True,
+        "expert_heat_collection_interval": 512,
+        "algorithm_execution_interval": 100,
+        "num_redundant_experts": 0
+    }
}
server_args = [
"--quantization", "ascend", "--seed", "1024",


@@ -70,13 +70,13 @@ async def test_models(model: str) -> None:
"8192", "--max-num-seqs", "12", "--trust-remote-code",
"--gpu-memory-utilization", "0.9"
]
env_dict["EXPERT_MAP_RECORD"] = "true"
env_dict["DYNAMIC_EPLB"] = "true"
-additional_config["dynamic_eplb"] = True
-additional_config["num_iterations_eplb_update"] = 14000
-additional_config["num_wait_worker_iterations"] = 30
-additional_config["init_redundancy_expert"] = 0
-additional_config["gate_eplb"] = False
+additional_config["eplb_config"] = {
+    "dynamic_eplb": True,
+    "expert_heat_collection_interval": 512,
+    "algorithm_execution_interval": 100,
+    "num_redundant_experts": 0
+}
server_args.extend(
["--compilation-config",
json.dumps(compilation_config)])