[Doc] add embedding rerank doc (#7364)
@@ -51,3 +51,4 @@ print("Embeddings:", [x.get("embedding") for x in response.get("data", [])])
| **GTE (QwenEmbeddingModel)** | `Alibaba-NLP/gte-Qwen2-7B-instruct` | N/A | Alibaba’s general text embedding model (7B), achieving state‑of‑the‑art multilingual performance in English and Chinese. |
| **GME (MultimodalEmbedModel)** | `Alibaba-NLP/gme-Qwen2-VL-2B-Instruct` | `gme-qwen2-vl` | Multimodal embedding model (2B) based on Qwen2‑VL, encoding image + text into a unified vector space for cross‑modal retrieval. |
| **CLIP (CLIPEmbeddingModel)** | `openai/clip-vit-large-patch14-336` | N/A | OpenAI’s CLIP model (ViT‑L/14) for embedding images (and text) into a joint latent space; widely used for image similarity search. |
| **BGE (BgeEmbeddingModel)** | `BAAI/bge-large-en-v1.5` | N/A | Currently supports only the `triton` and `torch_native` attention backends. BAAI's BGE embedding models, optimized for retrieval and reranking tasks. |
docs/supported_models/rerank_models.md (new file, 49 lines)
@@ -0,0 +1,49 @@
# Rerank Models
SGLang offers comprehensive support for rerank models, pairing an optimized serving framework with a flexible programming interface. This enables efficient processing of cross-encoder reranking tasks, improving the accuracy and relevance of search result ordering. SGLang’s design delivers high throughput and low latency when serving reranker models, making it well suited to semantic result refinement in large-scale retrieval systems.
```{important}
These models are launched with `--is-embedding`, and some may require `--trust-remote-code`.
```
## Example Launch Command
```shell
python3 -m sglang.launch_server \
    --model-path BAAI/bge-reranker-v2-m3 \
    --host 0.0.0.0 \
    --disable-radix-cache \
    --chunked-prefill-size -1 \
    --attention-backend triton \
    --is-embedding \
    --port 30000
```
## Example Client Request
```python
import requests

url = "http://127.0.0.1:30000/v1/rerank"

payload = {
    "model": "BAAI/bge-reranker-v2-m3",
    "query": "what is panda?",
    "documents": [
        "hi",
        "The giant panda (Ailuropoda melanoleuca), sometimes called a panda bear or simply panda, is a bear species endemic to China.",
    ],
}

response = requests.post(url, json=payload)
response.raise_for_status()
response_json = response.json()

for item in response_json:
    print(f"Score: {item['score']:.2f} - Document: '{item['document']}'")
```
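The endpoint returns one relevance score per candidate document, so the original list can be reordered by descending score. A minimal sketch of that step (the `results` list below is a hypothetical response shape matching the loop above, with made-up scores, not actual model output):

```python
# Hypothetical /v1/rerank response items: one score per candidate document.
results = [
    {"score": 0.02, "document": "hi"},
    {"score": 0.98, "document": "The giant panda is a bear species endemic to China."},
]

# Sort candidates by relevance score, highest first.
ranked = sorted(results, key=lambda item: item["score"], reverse=True)

# The top-ranked document is the most relevant one for the query.
top_document = ranked[0]["document"]
```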
## Supported Rerank Models
| Model Family (Rerank) | Example HuggingFace Identifier | Chat Template | Description |
|-----------------------------------|---------------------------|---------------|----------------------------------------------------------------------------------------------------------------------------------|
| **BGE-Reranker (BgeRerankModel)** | `BAAI/bge-reranker-v2-m3` | N/A | Currently supports only the `triton` and `torch_native` attention backends. A high-performance cross-encoder reranker from BAAI, suitable for reranking search results by semantic relevance. |