[Doc] add embedding rerank doc (#7364)

Author: woodx
Date: 2025-06-20 12:53:54 +08:00
Committed by: GitHub
Parent: 1d6515ef2a
Commit: 97011abc8a
3 changed files with 108 additions and 2 deletions


@@ -51,3 +51,4 @@ print("Embeddings:", [x.get("embedding") for x in response.get("data", [])])
| **GTE (QwenEmbeddingModel)** | `Alibaba-NLP/gte-Qwen2-7B-instruct` | N/A | Alibaba's general text embedding model (7B), achieving state-of-the-art multilingual performance in English and Chinese. |
| **GME (MultimodalEmbedModel)** | `Alibaba-NLP/gme-Qwen2-VL-2B-Instruct` | `gme-qwen2-vl` | Multimodal embedding model (2B) based on Qwen2-VL, encoding image + text into a unified vector space for cross-modal retrieval. |
| **CLIP (CLIPEmbeddingModel)** | `openai/clip-vit-large-patch14-336` | N/A | OpenAI's CLIP model (ViT-L/14) for embedding images (and text) into a joint latent space; widely used for image similarity search. |
| **BGE (BgeEmbeddingModel)** | `BAAI/bge-large-en-v1.5` | N/A | Currently supports only the `triton` and `torch_native` attention backends. BAAI's BGE embedding models, optimized for retrieval and reranking tasks. |


@@ -0,0 +1,49 @@
# Rerank Models
SGLang offers comprehensive support for rerank models by combining an optimized serving framework with a flexible programming interface. This enables efficient processing of cross-encoder reranking tasks, improving the accuracy and relevance of search result ordering. SGLang's design ensures high throughput and low latency when deploying reranker models, making it well suited for semantic result refinement in large-scale retrieval systems.
```{important}
Rerank models must be launched with `--is-embedding`, and some may also require `--trust-remote-code`.
```
## Example Launch Command
```shell
python3 -m sglang.launch_server \
    --model-path BAAI/bge-reranker-v2-m3 \
    --host 0.0.0.0 \
    --disable-radix-cache \
    --chunked-prefill-size -1 \
    --attention-backend triton \
    --is-embedding \
    --port 30000
```
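
A request to the `/v1/rerank` endpoint carries a JSON body with `model`, `query`, and `documents` fields. A small helper for building such a body can avoid malformed requests; this is a hypothetical sketch (`make_rerank_payload` is not part of SGLang):

```python
def make_rerank_payload(model, query, documents):
    """Build a JSON-serializable body for the /v1/rerank endpoint.

    `documents` is the list of candidate texts to score against `query`.
    """
    if not documents:
        raise ValueError("documents must be a non-empty list")
    return {"model": model, "query": query, "documents": list(documents)}
```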
## Example Client Request
```python
import requests

url = "http://127.0.0.1:30000/v1/rerank"

payload = {
    "model": "BAAI/bge-reranker-v2-m3",
    "query": "what is panda?",
    "documents": [
        "hi",
        "The giant panda (Ailuropoda melanoleuca), sometimes called a panda bear or simply panda, is a bear species endemic to China."
    ]
}

response = requests.post(url, json=payload)
response.raise_for_status()

for item in response.json():
    print(f"Score: {item['score']:.2f} - Document: '{item['document']}'")
```
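
The response is a list of scored documents, which may not arrive in relevance order. A minimal sketch for ordering them by score (assuming each item has the `score` and `document` keys shown above; the mock scores are illustrative, not real model output):

```python
def rank_documents(results):
    """Return rerank results sorted by descending relevance score."""
    return sorted(results, key=lambda item: item["score"], reverse=True)

# Illustrative mock data mirroring the response shape:
mock_results = [
    {"score": 0.02, "document": "hi"},
    {"score": 0.97, "document": "The giant panda ... is a bear species endemic to China."},
]

ranked = rank_documents(mock_results)
print(ranked[0]["score"])  # → 0.97 (the panda sentence ranks first)
```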
## Supported Rerank Models
| Model Family (Rerank) | Example HuggingFace Identifier | Chat Template | Description |
|------------------------------------------------|--------------------------------------|---------------|----------------------------------------------------------------------------------------------------------------------------------|
| **BGE-Reranker (BgeRerankModel)** | `BAAI/bge-reranker-v2-m3` | N/A | Currently supports only the `triton` and `torch_native` attention backends. High-performance cross-encoder reranker from BAAI, suitable for reranking search results by semantic relevance. |