[BugFix] Fix ascend config check (#1092)
Fix the ascend config check logic:
1. refactor check_ascend_config to make it clearer:
   1. torchair graph mode should not work with enforce_eager=True
   2. aclgraph should not work with torchair graph mode
2. add a refresh option to the config for the rlhf case
3. fix a typo in the model runner
4. change the expert_tensor_parallel_size default to 0 to keep the same behavior as before
Signed-off-by: wangxiyuan <wangxiyuan1007@gmail.com>
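The two compatibility rules from item 1 can be sketched as follows. This is a minimal illustration of the check described in the commit message, not vLLM Ascend's actual implementation; the class and function names mirror the commit but the signatures are assumptions.

```python
# Hypothetical sketch of the refactored check: torchair graph mode is
# incompatible with enforce_eager=True, and aclgraph is incompatible
# with torchair graph mode. Names are illustrative only.
from dataclasses import dataclass, field


@dataclass
class TorchairGraphConfig:
    enabled: bool = False


@dataclass
class AscendConfig:
    torchair_graph_config: TorchairGraphConfig = field(
        default_factory=TorchairGraphConfig)


def check_ascend_config(ascend_config: AscendConfig,
                        enforce_eager: bool,
                        use_aclgraph: bool) -> None:
    if ascend_config.torchair_graph_config.enabled:
        # Rule 1: torchair graph mode requires graph compilation, so it
        # cannot run in eager-only mode.
        if enforce_eager:
            raise RuntimeError(
                "Torchair graph mode is incompatible with enforce_eager=True.")
        # Rule 2: only one graph capture mechanism can be active at a time.
        if use_aclgraph:
            raise RuntimeError(
                "ACL graph cannot be enabled together with torchair graph mode.")
```

When torchair graph mode is disabled, neither check applies, so eager mode and aclgraph remain usable on their own.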
@@ -28,7 +28,8 @@ The following table lists the additional configuration options available in vLLM
 | Name | Type | Default | Description |
 | ---- | ---- | ------- | ----------- |
 | `torchair_graph_config` | dict | `{}` | The config options for torchair graph mode |
 | `ascend_scheduler_config` | dict | `{}` | The config options for ascend scheduler |
-| `expert_tensor_parallel_size` | str | `1` | Expert tensor parallel size the model to use. |
+| `expert_tensor_parallel_size` | str | `0` | Expert tensor parallel size the model to use. |
+| `refresh` | bool | `false` | Whether to refresh global ascend config content. This value is usually used by rlhf case. |

 The details of each config option are as follows:
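The `refresh` option in the table above implies a module-level config that a second in-process engine initialization (the rlhf case) may need to rebuild. A minimal sketch of that pattern, with hypothetical names not taken from vLLM Ascend's API:

```python
# Hypothetical module-level ascend config singleton. Without refresh,
# a repeated init keeps the existing config; with refresh=True (the
# rlhf case), the config is rebuilt from the new additional_config.
_ASCEND_CONFIG = None


def init_ascend_config(additional_config: dict) -> dict:
    global _ASCEND_CONFIG
    refresh = additional_config.get("refresh", False)
    if _ASCEND_CONFIG is None or refresh:
        _ASCEND_CONFIG = dict(additional_config)
    return _ASCEND_CONFIG


def get_ascend_config() -> dict:
    if _ASCEND_CONFIG is None:
        raise RuntimeError("Ascend config is not initialized.")
    return _ASCEND_CONFIG
```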
@@ -40,6 +41,7 @@ The details of each config option are as follows:
 | `use_cached_graph` | bool | `False` | Whether to use cached graph |
 | `graph_batch_sizes` | list[int] | `[]` | The batch size for torchair graph cache |
 | `graph_batch_sizes_init` | bool | `False` | Init graph batch size dynamically if `graph_batch_sizes` is empty |
+| `enable_multistream_shared_expert`| bool | `False` | Whether to enable multistream shared expert |

 **ascend_scheduler_config**
@@ -59,12 +61,14 @@ A full example of additional configuration is as follows:
         "enabled": true,
         "use_cached_graph": true,
         "graph_batch_sizes": [1, 2, 4, 8],
-        "graph_batch_sizes_init": true
+        "graph_batch_sizes_init": false,
+        "enable_multistream_shared_expert": false
     },
     "ascend_scheduler_config": {
         "enabled": true,
         "chunked_prefill_enabled": true,
     },
-    "expert_tensor_parallel_size": 1
+    "expert_tensor_parallel_size": 1,
+    "refresh": false,
 }
 ```
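A minimal sketch of how a dict like the example above could be parsed into typed sub-configs. The dataclass and function names here are illustrative assumptions, not vLLM Ascend's actual classes:

```python
# Hypothetical parsing of additional_config into typed config objects,
# with defaults matching the tables above.
from dataclasses import dataclass, field


@dataclass
class TorchairGraphConfig:
    enabled: bool = False
    use_cached_graph: bool = False
    graph_batch_sizes: list = field(default_factory=list)
    graph_batch_sizes_init: bool = False
    enable_multistream_shared_expert: bool = False


@dataclass
class AscendSchedulerConfig:
    enabled: bool = False
    chunked_prefill_enabled: bool = False


def parse_additional_config(additional_config: dict):
    # Unknown keys in the sub-dicts would raise a TypeError, which
    # doubles as a basic validity check on the user-supplied config.
    return (
        TorchairGraphConfig(**additional_config.get("torchair_graph_config", {})),
        AscendSchedulerConfig(**additional_config.get("ascend_scheduler_config", {})),
        int(additional_config.get("expert_tensor_parallel_size", 0)),
        bool(additional_config.get("refresh", False)),
    )
```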