### What this PR does / why we need it?
vllm-ascend needs to dump data during model execution to debug precision problems. msprobe provides the corresponding abilities, so msprobe will be integrated into vllm-ascend to make debugging easier.
### Does this PR introduce _any_ user-facing change?
Yes, a new `dump_config` option is added:
```
'dump_config': '/path/to/config.json'
```
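As an illustration, the file referenced by `dump_config` would be an msprobe-style dump configuration. The sketch below writes one out; the field names (`task`, `dump_path`, `level`, `rank`, `step`) follow msprobe's documented statistics-dump mode and are assumptions here, not something this PR specifies.

```python
import json
import os
import tempfile

# Hypothetical msprobe-style dump config; field names are illustrative
# assumptions based on msprobe's statistics-dump mode.
dump_config = {
    "task": "statistics",      # collect tensor statistics rather than full tensors
    "dump_path": "./msprobe_dump",
    "level": "L1",             # dump granularity level
    "rank": [],                # empty list = all ranks
    "step": [0],               # only dump the first step
}

# Write it to the path that would then be passed as 'dump_config'.
config_path = os.path.join(tempfile.mkdtemp(), "config.json")
with open(config_path, "w") as f:
    json.dump(dump_config, f, indent=2)
```

The resulting `config_path` is what a user would supply as the `'dump_config'` value.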
- vLLM version: v0.11.0
- vLLM main: 2918c1b49c
---------
Signed-off-by: Tjh-UKN <2559659915@qq.com>
```
-r requirements-lint.txt
-r requirements.txt
modelscope
openai
pytest >= 6.0,<9.0.0
pytest-asyncio
pytest-mock
lm-eval[api] @ git+https://github.com/EleutherAI/lm-evaluation-harness.git@206b7722158f58c35b7ffcd53b035fdbdda5126d
types-jsonschema
xgrammar
zmq
types-psutil
pytest-cov
regex
sentence_transformers
ray>=2.47.1,<=2.48.0
protobuf>3.20.0
librosa
soundfile
pytest_mock
msserviceprofiler>=1.2.2
mindstudio-probe>=8.3.0
```