### What this PR does / why we need it?
Add support for the V1 Engine.
Please note that this is only an initial version; some parts may still
need to be fixed or optimized in the future. Feel free to leave comments
for us.
### Does this PR introduce _any_ user-facing change?
To use the V1 Engine on an NPU device, you need to set the environment
variables shown below:
```bash
export VLLM_USE_V1=1
export VLLM_WORKER_MULTIPROC_METHOD=spawn
```
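If you prefer to enable these switches from Python instead of the shell, the same variables can be set in-process; a minimal sketch (set them before importing vllm so they are visible when the engine starts):

```python
# Equivalent to the shell exports above. These must be set before
# vllm is imported, since the engine reads them at startup.
import os

os.environ["VLLM_USE_V1"] = "1"
os.environ["VLLM_WORKER_MULTIPROC_METHOD"] = "spawn"
```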
If you are using vllm for offline inference, you must add a `__main__`
guard like:
```python
if __name__ == '__main__':
    llm = vllm.LLM(...)
```
Find more details
[here](https://docs.vllm.ai/en/latest/getting_started/troubleshooting.html#python-multiprocessing).
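The guard is needed because the `spawn` start method re-imports your main module in every worker process, so any top-level code (such as constructing `vllm.LLM`) would run again in each worker. A minimal illustration of the same mechanism using the standard `multiprocessing` module rather than vllm itself:

```python
# spawn re-executes the main module on import in each worker, so the
# code that creates workers must be protected by a __main__ guard.
import multiprocessing as mp

def square(x):
    return x * x

if __name__ == "__main__":
    # Without this guard, spawn-based workers would re-run the Pool
    # creation below when they import this module.
    ctx = mp.get_context("spawn")
    with ctx.Pool(2) as pool:
        results = pool.map(square, [1, 2, 3])
    print(results)  # [1, 4, 9]
```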
### How was this patch tested?
I have tested the online serving with `Qwen2.5-7B-Instruct` using this
command:
```bash
vllm serve Qwen/Qwen2.5-7B-Instruct --max_model_len 26240
```
Query the model with input prompts:
```bash
curl http://localhost:8000/v1/completions \
-H "Content-Type: application/json" \
-d '{
"model": "Qwen/Qwen2.5-7B-Instruct",
"prompt": "The future of AI is",
"max_tokens": 7,
"temperature": 0
}'
```
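The same query can also be issued from Python with only the standard library; a sketch mirroring the curl command above (the actual request is left commented out, since it requires the server from the previous step to be running):

```python
# Build the same completion request as the curl command above.
import json
import urllib.request

payload = {
    "model": "Qwen/Qwen2.5-7B-Instruct",
    "prompt": "The future of AI is",
    "max_tokens": 7,
    "temperature": 0,
}
req = urllib.request.Request(
    "http://localhost:8000/v1/completions",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
# Uncomment once the server is up:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["text"])
```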
---------
Signed-off-by: shen-shanshan <467638484@qq.com>
Co-authored-by: didongli182 <didongli@huawei.com>
### Feature Support
| Feature | Supported | CI Coverage | Guidance Document | Current Status | Next Step |
|---|---|---|---|---|---|
| Chunked Prefill | ❌ | NA | | | Plan in 2025.03.30 |
| Automatic Prefix Caching | ❌ | NA | | | Plan in 2025.03.30 |
| LoRA | ❌ | NA | | | Plan in 2025.06.30 |
| Prompt adapter | ❌ | NA | | | Plan in 2025.06.30 |
| Speculative decoding | ✅ | | | Basic functions available | Needs full testing |
| Pooling | ✅ | | | Basic functions available (BERT) | Needs full testing and support for more models |
| Enc-dec | ❌ | NA | | | Plan in 2025.06.30 |
| Multi Modality | ✅ | ✅ | | Basic functions available (LLaVA/Qwen2-vl/Qwen2-audio/internVL) | Improve performance and add support for more models |
| LogProbs | ✅ | | | Basic functions available | Needs full testing |
| Prompt logProbs | ✅ | | | Basic functions available | Needs full testing |
| Async output | ✅ | | | Basic functions available | Needs full testing |
| Multi step scheduler | ✅ | | | Basic functions available | Needs full testing |
| Best of | ✅ | | | Basic functions available | Needs full testing |
| Beam search | ✅ | | | Basic functions available | Needs full testing |
| Guided Decoding | ✅ | | | Basic functions available | Find more details in the related issue |
| Tensor Parallel | ✅ | | | Basic functions available | Needs full testing |
| Pipeline Parallel | ✅ | | | Basic functions available | Needs full testing |