### What this PR does / why we need it?
Fixes https://github.com/vllm-project/vllm-ascend/issues/2865. lm-eval [published an official release last
month](https://github.com/EleutherAI/lm-evaluation-harness/releases/tag/v0.4.9.2),
so this PR bumps the pinned version to 0.4.9.2.
### How was this patch tested?
- vLLM version: v0.13.0
- vLLM main: 2f4e6548ef
Signed-off-by: wangli <wangli858794774@gmail.com>
-r requirements-lint.txt
-r requirements.txt
modelscope
openai
pytest >= 6.0,<9.0.0
pytest-asyncio
pytest-mock
lm-eval==0.4.9.2
types-jsonschema
xgrammar
zmq
types-psutil
pytest-cov
regex
sentence_transformers
ray>=2.47.1,<=2.48.0
protobuf>3.20.0
librosa
soundfile
pytest_mock
msserviceprofiler>=1.2.2
mindstudio-probe>=8.3.0
arctic-inference==0.1.1
xlite==0.1.0rc0
uc-manager
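Several of the pins above are range constraints rather than exact pins (e.g. `ray>=2.47.1,<=2.48.0`). As an illustrative sketch only (not part of this PR, and deliberately ignoring pre-release tags like `0.1.0rc0`, which real resolvers handle per PEP 440), such a range check amounts to a tuple comparison of the dotted version parts:

```python
def parse(v: str) -> tuple[int, ...]:
    # Naive parse: split a plain dotted version into integer parts.
    # Does NOT handle pre-release suffixes such as "0.1.0rc0".
    return tuple(int(p) for p in v.split("."))

def in_range(installed: str, low: str, high: str) -> bool:
    # Models a combined constraint like 'ray>=2.47.1,<=2.48.0':
    # the installed version must fall inclusively between the bounds.
    return parse(low) <= parse(installed) <= parse(high)

print(in_range("2.47.5", "2.47.1", "2.48.0"))  # within the ray bounds
print(in_range("2.49.0", "2.47.1", "2.48.0"))  # above the upper bound
```

Real installers (pip, uv) use the full PEP 440 specifier grammar for this; the sketch only shows why tuple ordering of the numeric parts gives the expected comparison for simple versions.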