### What this PR does / why we need it?
Support prompt logprobs in V1. This also enables lm_eval to run accuracy tests on V1.
### Does this PR introduce _any_ user-facing change?
Yes, prompt logprobs output is now supported.
### How was this patch tested?
CI passed with the accuracy test.
Tested with lm_eval, which uses prompt logprobs to compute accuracy:
```bash
VLLM_USE_V1=1 lm_eval \
--model vllm \
--model_args pretrained=Qwen/Qwen2.5-7B-Instruct,max_model_len=4096,block_size=4 \
--tasks ceval-valid_computer_network \
--batch_size 8
```
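For context, harnesses like lm_eval score multiple-choice tasks by summing the per-token prompt logprobs over each candidate answer and picking the highest-scoring one, which is why prompt logprobs support is needed for these accuracy tests. Below is a minimal, self-contained sketch of that scoring idea; it is illustrative only (the function names and data layout are assumptions, not vLLM or lm_eval APIs).

```python
# Toy sketch (not vLLM/lm_eval code): scoring multiple-choice candidates
# by summed prompt logprobs. All names here are illustrative assumptions.

def sequence_loglikelihood(prompt_logprobs, answer_span):
    """Sum per-token logprobs over the answer portion of the prompt."""
    start, end = answer_span
    return sum(prompt_logprobs[start:end])

def pick_answer(candidates):
    """candidates: list of (label, prompt_logprobs, answer_span) tuples.
    Returns the label whose answer tokens have the highest total logprob."""
    return max(
        candidates,
        key=lambda c: sequence_loglikelihood(c[1], c[2]),
    )[0]

# Two fake candidates; answer tokens occupy positions 2..3 of the prompt.
candidates = [
    ("A", [-0.1, -0.2, -0.3, -0.4], (2, 4)),  # answer logprob sum = -0.7
    ("B", [-0.1, -0.2, -1.5, -2.0], (2, 4)),  # answer logprob sum = -3.5
]
print(pick_answer(candidates))  # -> A
```

In the real pipeline, the per-token logprobs come from the engine's prompt logprobs output rather than hard-coded lists; the selection logic is the same.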
After this PR, the accuracy test result of `Qwen/Qwen2.5-7B-Instruct`
on V1 is:
```bash
| Tasks |Version|Filter|n-shot| Metric | |Value | |Stderr|
|----------------------------|------:|------|-----:|--------|---|-----:|---|-----:|
|ceval-valid_computer_network| 2|none | 0|acc |↑ |0.7368|± |0.1038|
| | |none | 0|acc_norm|↑ |0.7368|± |0.1038|
```
Closes: https://github.com/vllm-project/vllm-ascend/issues/1043
Signed-off-by: MengqingCao <cmq0113@163.com>