[Nightly] Avoid max_model_len being smaller than the decoder prompt to prevent single-node-accuracy-tests from failing (#5174)
### What this PR does / why we need it?
[Nightly] Set max_model_len explicitly in the accuracy-test configs so it is never smaller than the decoder prompt, which was causing single-node-accuracy-tests to fail.
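The constraint the config change enforces is that `max_model_len` must be at least as large as the longest decoder prompt (in tokens), otherwise vLLM rejects the request and the accuracy run fails. A minimal sketch of that check, using a hypothetical helper (the function name and signature are illustrative, not vLLM API):

```python
# Hypothetical sketch of the guard this PR applies via config:
# max_model_len must cover the longest decoder prompt, or the
# request is rejected and the accuracy test fails.

def pick_max_model_len(prompt_token_counts, floor=8192):
    """Return a max_model_len that covers every decoder prompt.

    `floor` mirrors the 8192 value set in the test configs; this
    helper is illustrative only and not part of vLLM.
    """
    longest = max(prompt_token_counts, default=0)
    if longest > floor:
        raise ValueError(
            f"longest prompt ({longest} tokens) exceeds max_model_len ({floor})"
        )
    return floor

print(pick_max_model_len([1024, 4096, 7500]))  # -> 8192, covers all prompts
```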
### Does this PR introduce _any_ user-facing change?
No
### How was this patch tested?
- vLLM version: v0.12.0
- vLLM main: ad32e3e19c
---------
Signed-off-by: ZT-AIA <1028681969@qq.com>
Signed-off-by: ZT-AIA <63220130+ZT-AIA@users.noreply.github.com>
```diff
@@ -6,6 +6,7 @@ tasks:
 metrics:
   - name: "acc,none"
     value: 0.58
+max_model_len: 8192
 tensor_parallel_size: 2
 gpu_memory_utilization: 0.7
 enable_expert_parallel: True
```
```diff
@@ -6,5 +6,6 @@ tasks:
 metrics:
   - name: "acc,none"
     value: 0.55
+max_model_len: 8192
 batch_size: 32
 gpu_memory_utilization: 0.7
```