[EPLB] EPLB config renaming (#5533)
### What this PR does / why we need it?
1. Rename `num_iterations_eplb_update` to `expert_heat_collection_interval`.
2. Rename `num_wait_worker_iterations` to `algorithm_execution_interval`.
3. Rename `init_redundancy_expert` to `num_redundant_experts`, matching the name of the variable with the same meaning in vLLM.
4. Delete `gate_eplb`, since this feature is no longer needed.
5. Move the EPLB options into an `eplb_config` dict inside `additional_config`.
6. Depends on pr5817.
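For reference, the renamed options can be pictured as a small config object. This is only a sketch of the field layout implied by the renames above; the class name, defaults, and `from_dict` helper are illustrative, not the actual vLLM-Ascend implementation:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class EplbConfig:
    """Sketch of the EPLB sub-config after this PR's renames.

    Defaults are taken from the example invocation below and are
    illustrative only.
    """
    dynamic_eplb: bool = False
    # was: num_iterations_eplb_update
    expert_heat_collection_interval: int = 4000
    # was: num_wait_worker_iterations
    algorithm_execution_interval: int = 150
    # was: init_redundancy_expert
    num_redundant_experts: int = 0
    expert_map_path: Optional[str] = None

    @classmethod
    def from_dict(cls, d: dict) -> "EplbConfig":
        # Build the config from the "eplb_config" dict in additional_config.
        return cls(**d)
```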
### Does this PR introduce _any_ user-facing change?
Yes. Before this PR:

```
--additional-config '{"dynamic_eplb": true, "num_iterations_eplb_update": 4000, "num_wait_worker_iterations": 150, "init_redundancy_expert": 16, "expert_map_path": "xxx.json"}'
```

After this PR:

```
--additional-config '{"eplb_config": {"dynamic_eplb": true, "expert_heat_collection_interval": 4000, "algorithm_execution_interval": 150, "num_redundant_experts": 16, "expert_map_path": "xxx.json"}}'
```
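Users migrating an old flat `additional-config` can mechanically translate it to the new nested layout. The helper below is a hypothetical sketch (not part of this PR) that applies the key renames listed above and wraps the EPLB keys in `eplb_config`:

```python
# Key renames introduced by this PR (old name -> new name).
_RENAMES = {
    "num_iterations_eplb_update": "expert_heat_collection_interval",
    "num_wait_worker_iterations": "algorithm_execution_interval",
    "init_redundancy_expert": "num_redundant_experts",
}
# Keys that move into the nested "eplb_config" dict.
_EPLB_KEYS = {"dynamic_eplb", "expert_map_path", *_RENAMES}


def migrate_additional_config(old: dict) -> dict:
    """Translate a pre-PR flat additional-config dict to the new layout."""
    eplb = {_RENAMES.get(k, k): v for k, v in old.items() if k in _EPLB_KEYS}
    rest = {k: v for k, v in old.items() if k not in _EPLB_KEYS}
    # "gate_eplb" was deleted by this PR; drop it if present.
    rest.pop("gate_eplb", None)
    return {**rest, "eplb_config": eplb}
```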
### How was this patch tested?
#### Test: Qwen3-235B EPLB with `num_redundant_experts=16`

Without pr5817:

| dataset | version | metric | mode | vllm-api-general-chat |
| ----- | ----- | ----- | ----- | ----- |
| aime2024 | 604a78 | accuracy | gen | 83.33 |

With pr5817:

| dataset | version | metric | mode | vllm-api-general-chat |
| ----- | ----- | ----- | ----- | ----- |
| aime2024 | 604a78 | accuracy | gen | 86.67 |
- vLLM version: v0.13.0
- vLLM main:
45c1ca1ca1
Signed-off-by: shenchuxiaofugui <1311027364@qq.com>
```diff
@@ -91,7 +91,6 @@ from vllm_ascend.compilation.acl_graph import (ACLGraphWrapper,
 from vllm_ascend.eplb.adaptor.vllm_adaptor import VllmEplbAdaptor
 from vllm_ascend.eplb.core.eplb_device_transfer_loader import \
     D2DExpertWeightLoader
-from vllm_ascend.eplb.core.eplb_utils import EPLBParamUtils
 from vllm_ascend.eplb.core.eplb_worker import EplbProcess
 from vllm_ascend.eplb.eplb_updator import EplbUpdator
 from vllm_ascend.eplb.utils import model_register
@@ -290,13 +289,11 @@ class NPUModelRunner(GPUModelRunner):
 
         self.use_aclgraph = self._use_aclgraph()
 
-        self.dynamic_eplb = self.ascend_config.dynamic_eplb or self.ascend_config.expert_map_record_path
+        eplb_config = self.ascend_config.eplb_config
+        self.dynamic_eplb = eplb_config.dynamic_eplb
         if self.dynamic_eplb:
-            EPLBParamUtils.check_dynamic_eplb(self.ascend_config.dynamic_eplb)
-            EPLBParamUtils.check_expert_map_record_path(
-                self.ascend_config.expert_map_record_path)
             self.is_eplb_warmuped = False
-            self.policy_type = self.ascend_config.eplb_policy_type
+            self.policy_type = eplb_config.eplb_policy_type
             self.eplb_loader = D2DExpertWeightLoader()
             self.manager = Manager()
             self.shared_dict = self.manager.dict({
@@ -308,8 +305,7 @@ class NPUModelRunner(GPUModelRunner):
                 policy_type=self.policy_type,
                 enable_d2d=True)
             self.process = self.eplb_process._launch_process()
-            ascend_config = get_ascend_config()
-            self.eplb_updator = EplbUpdator(ascend_config, self.eplb_loader,
+            self.eplb_updator = EplbUpdator(eplb_config, self.eplb_loader,
                                             self.eplb_process, self.process)
         # Input Batch
         # NOTE(Chen): Ideally, we should initialize the input batch inside
```
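The call-site change in the diff narrows `EplbUpdator`'s dependency from the whole ascend config to just the EPLB sub-config. A minimal sketch of that pattern, with illustrative class internals and plain-dict configs standing in for the real config objects:

```python
class EplbUpdator:
    """Sketch: the updator now consumes only the EPLB sub-config."""

    def __init__(self, eplb_config: dict):
        # Read the renamed interval fields directly from the sub-config,
        # instead of reaching through the full ascend config.
        self.collect_interval = eplb_config["expert_heat_collection_interval"]
        self.exec_interval = eplb_config["algorithm_execution_interval"]


# Hypothetical usage: pass only the nested eplb_config dict.
updator = EplbUpdator({"expert_heat_collection_interval": 4000,
                       "algorithm_execution_interval": 150})
```

Passing the sub-config keeps the updator decoupled from unrelated ascend-config fields, which is the motivation for moving the EPLB options into their own dict.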