[Doc] add qwen3 reranker (#5086)

### What this PR does / why we need it?
Add Qwen3 reranker tutorials.
### Does this PR introduce _any_ user-facing change?

### How was this patch tested?

- vLLM version: v0.12.0

---------

Signed-off-by: TingW09 <944713709@qq.com>
TingW09 committed 2025-12-18 10:54:07 +08:00 (committed by GitHub)
parent 8069442b41
commit 879ec2d1c4
4 changed files with 248 additions and 40 deletions

@@ -1,56 +1,46 @@
# Qwen3-Embedding
## Introduction
The Qwen3 Embedding model series is the latest proprietary model of the Qwen family, specifically designed for text embedding and ranking tasks. Building upon the dense foundation models of the Qwen3 series, it provides a comprehensive range of text embedding and reranking models in various sizes (0.6B, 4B, and 8B). This guide describes how to run the model with vLLM Ascend. Note that only vLLM Ascend v0.9.2rc1 and higher versions support the model.
## Supported Features
Refer to [supported features](../user_guide/support_matrix/supported_models.md) to get the model's supported feature matrix.
## Environment Preparation
### Model Weight
- `Qwen3-Embedding-8B` [Download model weight](https://www.modelscope.cn/models/Qwen/Qwen3-Embedding-8B)
- `Qwen3-Embedding-4B` [Download model weight](https://www.modelscope.cn/models/Qwen/Qwen3-Embedding-4B)
- `Qwen3-Embedding-0.6B` [Download model weight](https://www.modelscope.cn/models/Qwen/Qwen3-Embedding-0.6B)
It is recommended to download the model weights to a directory shared across your nodes, such as `/root/.cache/`, for example with the ModelScope SDK as shown in the sketch below.
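For reference, here is a minimal sketch that fetches the weights with the ModelScope Python SDK; the cache directory is illustrative, and `snapshot_download` requires `pip install modelscope`:
```python
# Sketch: fetch Qwen3-Embedding-8B weights via the ModelScope SDK.
# Requires `pip install modelscope`; the cache_dir below is illustrative.
from modelscope import snapshot_download

model_dir = snapshot_download(
    "Qwen/Qwen3-Embedding-8B",
    cache_dir="/root/.cache",  # shared directory mounted into the container
)
print(model_dir)  # local path where the weights were downloaded
```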
### Installation
You can use our official docker image to run `Qwen3-Embedding` series models.
- Start the docker image on your node; refer to [using docker](../installation.md#set-up-using-docker).
If you don't want to use the docker image as above, you can also build everything from source:
- Install `vllm-ascend` from source; refer to [installation](../installation.md).
## Deployment
Using the Qwen3-Embedding-8B model as an example, first run the docker container with the following command:
```{code-block} bash
:substitutions:
# Update the vllm-ascend image
export IMAGE=quay.io/ascend/vllm-ascend:|vllm_ascend_version|
docker run --rm \
--name vllm-ascend \
--shm-size=1g \
--device /dev/davinci0 \
--device /dev/davinci_manager \
--device /dev/devmm_svm \
--device /dev/hisi_hdc \
-v /usr/local/dcmi:/usr/local/dcmi \
-v /usr/local/bin/npu-smi:/usr/local/bin/npu-smi \
-v /usr/local/Ascend/driver/lib64/:/usr/local/Ascend/driver/lib64/ \
-v /usr/local/Ascend/driver/version.info:/usr/local/Ascend/driver/version.info \
-v /etc/ascend_install.info:/etc/ascend_install.info \
-v /root/.cache:/root/.cache \
-p 8000:8000 \
-it $IMAGE bash
```
Set up environment variables:
```bash
# Load model from ModelScope to speed up download
export VLLM_USE_MODELSCOPE=True
# Set `max_split_size_mb` to reduce memory fragmentation and avoid out-of-memory errors
export PYTORCH_NPU_ALLOC_CONF=max_split_size_mb:256
```
### Online Inference
```bash
vllm serve Qwen/Qwen3-Embedding-8B --task embed --host 127.0.0.1 --port 8888
```
Once your server is started, you can query the model with input prompts.
```bash
curl http://127.0.0.1:8888/v1/embeddings -H "Content-Type: application/json" -d '{
  "model": "Qwen/Qwen3-Embedding-8B",
  "input": [
    "The capital of China is Beijing.",
    "Gravity is a force that attracts two bodies towards each other. It gives weight to physical objects and is responsible for the movement of planets around the sun."
  ]
}'
```
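The same request can be issued from Python. Below is a minimal sketch using the `openai` client package (an assumption, not part of this PR); the API key is a placeholder, since the server does not enforce one by default:
```python
# Sketch: query the vLLM embeddings endpoint with the OpenAI Python client.
# Assumes `pip install openai`; "EMPTY" is a placeholder API key.
from openai import OpenAI

client = OpenAI(base_url="http://127.0.0.1:8888/v1", api_key="EMPTY")

response = client.embeddings.create(
    model="Qwen/Qwen3-Embedding-8B",
    input=[
        "The capital of China is Beijing.",
        "Gravity is a force that attracts two bodies towards each other.",
    ],
)
# One embedding vector per input string
print([len(item.embedding) for item in response.data])
```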
@@ -81,7 +71,7 @@ if __name__=="__main__":
    input_texts = queries + documents
    model = LLM(model="Qwen/Qwen3-Embedding-8B",
                task="embed",
                distributed_executor_backend="mp")
    outputs = model.embed(input_texts)
@@ -98,3 +88,31 @@ Processed prompts: 0%|
Processed prompts: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 4/4 [00:00<00:00, 31.95it/s, est. speed input: 0.00 toks/s, output: 0.00 toks/s]
[[0.7477798461914062, 0.07548339664936066], [0.0886271521449089, 0.6311039924621582]]
```
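The hunk above shows only a fragment of the offline script. For readers who want a self-contained version, here is a sketch of the same flow; the query/document texts, the instruction template, and the similarity step are illustrative assumptions modeled on the output above, not the exact code from this PR:
```python
# Sketch of a complete offline embedding run (illustrative, not the exact
# script from this PR): embed 2 queries and 2 documents, then score them.
import torch
from vllm import LLM

def get_instruct(task: str, query: str) -> str:
    # Hypothetical instruction template for query-side inputs
    return f"Instruct: {task}\nQuery: {query}"

if __name__ == "__main__":
    task = "Given a web search query, retrieve relevant passages that answer the query"
    queries = [
        get_instruct(task, "What is the capital of China?"),
        get_instruct(task, "Explain gravity"),
    ]
    documents = [
        "The capital of China is Beijing.",
        "Gravity is a force that attracts two bodies towards each other.",
    ]

    input_texts = queries + documents
    model = LLM(model="Qwen/Qwen3-Embedding-8B",
                task="embed",
                distributed_executor_backend="mp")
    outputs = model.embed(input_texts)

    # Normalize so the dot product equals cosine similarity
    emb = torch.tensor([o.outputs.embedding for o in outputs])
    emb = torch.nn.functional.normalize(emb, p=2, dim=1)
    scores = emb[:2] @ emb[2:].T  # 2 queries x 2 documents
    print(scores.tolist())
```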
## Performance
Run a performance benchmark of `Qwen3-Embedding-8B` as an example.
Refer to [vllm benchmark](https://docs.vllm.ai/en/latest/contributing/) for more details.
Take the `serve` benchmark as an example and run the following command:
```bash
vllm bench serve --model Qwen3-Embedding-8B --backend openai-embeddings --dataset-name random --host 127.0.0.1 --port 8888 --endpoint /v1/embeddings --tokenizer /root/.cache/Qwen3-Embedding-8B --random-input-len 200 --save-result --result-dir ./
```
After a few minutes, you can get the performance evaluation result. With this tutorial, the performance result is:
```bash
============ Serving Benchmark Result ============
Successful requests: 1000
Failed requests: 0
Benchmark duration (s): 6.78
Total input tokens: 108032
Request throughput (req/s): 31.11
Total Token throughput (tok/s): 15929.35
----------------End-to-end Latency----------------
Mean E2EL (ms): 4422.79
Median E2EL (ms): 4412.58
P99 E2EL (ms): 6294.52
==================================================
```