### What this PR does / why we need it?
Support pipeline parallelism with the Ray backend in the V1 Engine.
Fixes #1751
### Does this PR introduce _any_ user-facing change?
Users can now specify `ray` as the distributed executor backend when running inference with pipeline parallelism.
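A minimal usage sketch of what this enables (the model name is a placeholder; flag spellings follow the standard vLLM CLI and may vary by version):

```shell
# Serve a model with pipeline parallelism across 2 stages,
# using Ray as the distributed executor backend (V1 Engine).
vllm serve Qwen/Qwen2.5-7B-Instruct \
  --pipeline-parallel-size 2 \
  --distributed-executor-backend ray
```

The same pair of options can be passed to the offline `LLM(...)` constructor as `pipeline_parallel_size` and `distributed_executor_backend`.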
### How was this patch tested?
CI passed with the newly added test.
- vLLM version: v0.9.2
- vLLM main: 32142b3c62
---------
Signed-off-by: MengqingCao <cmq0113@163.com>
Plaintext · 18 lines · 223 B
```
-r requirements-lint.txt
-r requirements.txt
modelscope
openai
pytest >= 6.0
pytest-asyncio
pytest-mock
lm-eval
types-jsonschema
xgrammar
zmq
types-psutil
pytest-cov
regex
sentence_transformers
ray>=2.47.1
protobuf==4.25.6
```