[Docs] clean up structured outputs docs (#2654)
.github/workflows/pr-test.yml
@@ -52,7 +52,7 @@ jobs:
     runs-on: 1-gpu-runner
     strategy:
       matrix:
-        range: [0-6, 6-15, 15-23, 23-30, 30-100]
+        range: [0-6, 6-16, 16-23, 23-30, 30-100]
     steps:
       - name: Checkout code
         uses: actions/checkout@v3
@@ -1,13 +1,13 @@
 # DeepSeek V3 Support
 
-The SGLang and DeepSeek teams worked together to get DeepSeek V3 FP8 running on NVIDIA and AMD GPUs **from day one**. SGLang also has supported [MLA optimization](https://lmsys.org/blog/2024-09-04-sglang-v0-3/#deepseek-multi-head-latent-attention-mla-throughput-optimizations) and [DP attention](https://lmsys.org/blog/2024-12-04-sglang-v0-4/#data-parallelism-attention-for-deepseek-models), making SGLang one of the best open-source LLM engines for running DeepSeek models.
+The SGLang and DeepSeek teams collaborated to get DeepSeek V3 FP8 running on NVIDIA and AMD GPUs **from day one**. SGLang also supports [MLA optimization](https://lmsys.org/blog/2024-09-04-sglang-v0-3/#deepseek-multi-head-latent-attention-mla-throughput-optimizations) and [DP attention](https://lmsys.org/blog/2024-12-04-sglang-v0-4/#data-parallelism-attention-for-deepseek-models), making SGLang one of the best open-source LLM engines for running DeepSeek models. SGLang is the inference engine recommended by the official [DeepSeek team](https://github.com/deepseek-ai/DeepSeek-V3/tree/main?tab=readme-ov-file#62-inference-with-sglang-recommended).
 
 Special thanks to Meituan's Search & Recommend Platform Team and Baseten's Model Performance Team for implementing the model, and DataCrunch for providing GPU resources.
 
 ## Hardware Recommendation
 - 8 x NVIDIA H200 GPUs
 
-If you do not have GPUs with large enough memory, please try multi-node tensor parallelism ([help 1](https://github.com/sgl-project/sglang/blob/637de9e8ce91fd3e92755eb2a842860925954ab1/docs/backend/backend.md?plain=1#L88-L95) [help 2](https://github.com/sgl-project/sglang/blob/637de9e8ce91fd3e92755eb2a842860925954ab1/docs/backend/backend.md?plain=1#L152-L168)). Here is an example serving with [2 H20 node](https://github.com/sgl-project/sglang/tree/main/benchmark/deepseek_v3#example-serving-with-2-h208)
+If you do not have GPUs with large enough memory, please try multi-node tensor parallelism. There is an example serving with [2 H20 nodes](https://github.com/sgl-project/sglang/tree/main/benchmark/deepseek_v3#example-serving-with-2-h208) below.
 
 ## Installation & Launch
 
@@ -61,10 +61,10 @@ For example, there are two H20 nodes, each with 8 GPUs. The first node's IP is `
 
 ```bash
 # node 1
-GLOO_SOCKET_IFNAME=eth0 python -m sglang.launch_server --model-path deepseek-ai/DeepSeek-V3 --tp 16 --nccl-init 10.0.0.1:5000 --nnodes 2 --node-rank 0 --trust-remote-code
+python -m sglang.launch_server --model-path deepseek-ai/DeepSeek-V3 --tp 16 --nccl-init 10.0.0.1:5000 --nnodes 2 --node-rank 0 --trust-remote-code
 
 # node 2
-GLOO_SOCKET_IFNAME=eth0 python -m sglang.launch_server --model-path deepseek-ai/DeepSeek-V3 --tp 16 --nccl-init 10.0.0.1:5000 --nnodes 2 --node-rank 1 --trust-remote-code
+python -m sglang.launch_server --model-path deepseek-ai/DeepSeek-V3 --tp 16 --nccl-init 10.0.0.1:5000 --nnodes 2 --node-rank 1 --trust-remote-code
 ```
 
 If you have two H100 nodes, the usage is similar to the aforementioned H20.
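Once both ranks are up, the deployment is served as a single endpoint from node 1. For reference, a minimal sketch of querying it through the native `/generate` API, assuming the default port 30000 (no `--port` is passed above):

```python
# Minimal sketch: query the two-node DeepSeek V3 deployment from node 1.
# Assumes the default port 30000; adjust host/port to your setup.
import requests

response = requests.post(
    "http://10.0.0.1:30000/generate",
    json={
        "text": "The capital of France is",
        "sampling_params": {"max_new_tokens": 32, "temperature": 0},
    },
)
print(response.json()["text"])
```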
@@ -72,9 +72,3 @@ If you have two H100 nodes, the usage is similar to the aforementioned H20.
 ## DeepSeek V3 Optimization Plan
 
 https://github.com/sgl-project/sglang/issues/2591
-
-## Appendix
-
-SGLang is the inference engine officially recommended by the DeepSeek team.
-
-https://github.com/deepseek-ai/DeepSeek-V3/tree/main?tab=readme-ov-file#62-inference-with-sglang-recommended
@@ -159,10 +159,10 @@ python -m sglang.launch_server --model-path meta-llama/Meta-Llama-3.1-405B-Instr
 
 # Run 405B (fp16) on two nodes
 ## on the first node, replace the `172.16.4.52:20000` with your own first node ip address and port
-GLOO_SOCKET_IFNAME=eth0 python3 -m sglang.launch_server --model-path meta-llama/Meta-Llama-3.1-405B-Instruct --tp 16 --nccl-init-addr 172.16.4.52:20000 --nnodes 2 --node-rank 0 --disable-cuda-graph
+python3 -m sglang.launch_server --model-path meta-llama/Meta-Llama-3.1-405B-Instruct --tp 16 --nccl-init-addr 172.16.4.52:20000 --nnodes 2 --node-rank 0
 
 ## on the second node, replace the `172.16.4.52:20000` with your own first node ip address and port
-GLOO_SOCKET_IFNAME=eth0 python3 -m sglang.launch_server --model-path meta-llama/Meta-Llama-3.1-405B-Instruct --tp 16 --nccl-init-addr 172.16.4.52:20000 --nnodes 2 --node-rank 1 --disable-cuda-graph
+python3 -m sglang.launch_server --model-path meta-llama/Meta-Llama-3.1-405B-Instruct --tp 16 --nccl-init-addr 172.16.4.52:20000 --nnodes 2 --node-rank 1
 ```
 
 </details>
@@ -221,17 +221,15 @@
 "metadata": {},
 "source": [
 "## Structured Outputs (JSON, Regex, EBNF)\n",
-"You can specify a JSON schema, Regular Expression or [EBNF](https://en.wikipedia.org/wiki/Extended_Backus%E2%80%93Naur_form) to constrain the model output. The model output will be guaranteed to follow the given constraints. \n",
+"You can specify a JSON schema, [regular expression](https://en.wikipedia.org/wiki/Regular_expression) or [EBNF](https://en.wikipedia.org/wiki/Extended_Backus%E2%80%93Naur_form) to constrain the model output. The model output will be guaranteed to follow the given constraints. Only one constraint parameter (`json_schema`, `regex`, or `ebnf`) can be specified for a request.\n",
 "\n",
 "SGLang supports two grammar backends:\n",
 "\n",
-"- [Outlines](https://github.com/dottxt-ai/outlines) (default): Supports JSON schema and Regular Expression constraints.\n",
+"- [Outlines](https://github.com/dottxt-ai/outlines) (default): Supports JSON schema and regular expression constraints.\n",
 "- [XGrammar](https://github.com/mlc-ai/xgrammar): Supports JSON schema and EBNF constraints.\n",
 "  - XGrammar currently uses the [GGML BNF format](https://github.com/ggerganov/llama.cpp/blob/master/grammars/README.md)\n",
 "\n",
-"> 🔔 Only one constraint parameter (`json_schema`, `regex`, or `ebnf`) can be specified at a time.\n",
-"\n",
-"Initialise xgrammar backend using `--grammar-backend xgrammar` flag\n",
+"Initialize the XGrammar backend using `--grammar-backend xgrammar` flag\n",
 "```bash\n",
 "python -m sglang.launch_server --model-path meta-llama/Meta-Llama-3.1-8B-Instruct \\\n",
 "--port 30000 --host 0.0.0.0 --grammar-backend [xgrammar|outlines] # xgrammar or outlines (default: outlines)\n",
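The `regex` constraint is carried on the OpenAI-compatible request models (see the protocol changes later in this commit), so it can be passed through the OpenAI client's `extra_body`. A minimal sketch, assuming a server launched as above on port 30000; the prompt and pattern are illustrative:

```python
# Minimal sketch: constrain a chat completion with a regular expression.
# Assumes an SGLang server on port 30000; values are illustrative.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:30000/v1", api_key="None")

response = client.chat.completions.create(
    model="meta-llama/Meta-Llama-3.1-8B-Instruct",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
    temperature=0,
    max_tokens=32,
    # SGLang-specific constraint parameter, passed through extra_body
    extra_body={"regex": "(Paris|London)"},
)
print(response.choices[0].message.content)
```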
@@ -11,20 +11,22 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"With SGLang, You can define a JSON schema, EBNF or regular expression to constrain the model's output.\n",
+"## Structured Outputs (JSON, Regex, EBNF)\n",
+"You can specify a JSON schema, [regular expression](https://en.wikipedia.org/wiki/Regular_expression) or [EBNF](https://en.wikipedia.org/wiki/Extended_Backus%E2%80%93Naur_form) to constrain the model output. The model output will be guaranteed to follow the given constraints. Only one constraint parameter (`json_schema`, `regex`, or `ebnf`) can be specified for a request.\n",
 "\n",
-"[JSON Schema](https://json-schema.org/): Formats output into structured JSON objects with validation rules.\n",
+"SGLang supports two grammar backends:\n",
 "\n",
-"[EBNF (Extended Backus-Naur Form)](https://en.wikipedia.org/wiki/Extended_Backus%E2%80%93Naur_form): Defines complex syntax rules, especially for recursive patterns like nested structures.\n",
+"- [Outlines](https://github.com/dottxt-ai/outlines) (default): Supports JSON schema and regular expression constraints.\n",
+"- [XGrammar](https://github.com/mlc-ai/xgrammar): Supports JSON schema and EBNF constraints.\n",
+"  - XGrammar currently uses the [GGML BNF format](https://github.com/ggerganov/llama.cpp/blob/master/grammars/README.md)\n",
 "\n",
-"[Regular Expressions](https://en.wikipedia.org/wiki/Regular_expression): Matches text patterns for simple validation and formatting.\n",
+"Initialize the XGrammar backend using `--grammar-backend xgrammar` flag\n",
+"```bash\n",
+"python -m sglang.launch_server --model-path meta-llama/Meta-Llama-3.1-8B-Instruct \\\n",
+"--port 30000 --host 0.0.0.0 --grammar-backend [xgrammar|outlines] # xgrammar or outlines (default: outlines)\n",
+"```\n",
 "\n",
-"## Grammar Backend\n",
-"\n",
-"SGLang has two backends: [Outlines](https://github.com/dottxt-ai/outlines) (default) and [XGrammar](https://blog.mlc.ai/2024/11/22/achieving-efficient-flexible-portable-structured-generation-with-xgrammar). We suggest using XGrammar whenever possible for its better performance. For more details, see [XGrammar technical overview](https://blog.mlc.ai/2024/11/22/achieving-efficient-flexible-portable-structured-generation-with-xgrammar).\n",
-"\n",
-"* Xgrammar Backend: JSON and EBNF\n",
-"* Outlines Backend: JSON and regular expressions"
+"We suggest using XGrammar whenever possible for its better performance. For more details, see [XGrammar technical overview](https://blog.mlc.ai/2024/11/22/achieving-efficient-flexible-portable-structured-generation-with-xgrammar)."
 ]
 },
 {
@@ -208,15 +210,6 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"from sglang.utils import (\n",
-"    execute_shell_command,\n",
-"    wait_for_server,\n",
-"    terminate_process,\n",
-"    print_highlight,\n",
-")\n",
-"\n",
-"import requests\n",
-"\n",
 "server_process = execute_shell_command(\n",
 "    \"\"\"\n",
 "python3 -m sglang.launch_server --model-path meta-llama/Llama-3.2-1B-Instruct --port=30010 --grammar-backend xgrammar\n",
@@ -39,10 +39,9 @@ The `sampling_params` follows this format
 ```python
 # The maximum number of output tokens
 max_new_tokens: int = 128,
-# Stop when hitting any of the strings in this list.
+# Stop when hitting any of the strings in this list
 stop: Optional[Union[str, List[str]]] = None,
-# Stop when hitting any of the token_ids in this list. Could be useful when mixed with
-# `min_new_tokens`.
+# Stop when hitting any of the token_ids in this list
 stop_token_ids: Optional[List[int]] = [],
 # Sampling temperature
 temperature: float = 1.0,
@@ -52,26 +51,26 @@ top_p: float = 1.0,
 top_k: int = -1,
 # Min-p sampling
 min_p: float = 0.0,
-# Whether to ignore EOS token.
+# Whether to ignore EOS token
 ignore_eos: bool = False,
-# Whether to skip the special tokens during detokenization.
+# Whether to skip the special tokens during detokenization
 skip_special_tokens: bool = True,
-# Whether to add spaces between special tokens during detokenization.
+# Whether to add spaces between special tokens during detokenization
 spaces_between_special_tokens: bool = True,
 # Do parallel sampling and return `n` outputs.
 n: int = 1,
 
 ## Structured Outputs
-# Only one of the below three can be set at a time:
+# Only one of the below three can be set for a request.
 
-# Constrains the output to follow a given regular expression.
-regex: Optional[str] = None,
-# Constrains the output to follow a given JSON schema.
+# Constrain the output to follow a given JSON schema.
 json_schema: Optional[str] = None,
-# Constrains the output to follow a given EBNF Grammar.
+# Constrain the output to follow a given regular expression.
+regex: Optional[str] = None,
+# Constrain the output to follow a given EBNF grammar.
 ebnf: Optional[str] = None,
 
-## Penalties. See [Performance Implications on Penalties] section below for more informations.
+## Penalties.
 
 # Float that penalizes new tokens based on their frequency in the generated text so far.
 # Values > 0 encourage the model to use new tokens, while values < 0 encourage the model to
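For reference, a minimal sketch of passing these parameters to the native `/generate` endpoint, assuming a local server on the default port 30000; the prompt and values are illustrative:

```python
# Minimal sketch: send sampling_params to the native /generate endpoint.
# Assumes a local SGLang server on the default port 30000.
import requests

response = requests.post(
    "http://localhost:30000/generate",
    json={
        "text": "List three colors:",
        "sampling_params": {
            "max_new_tokens": 64,
            "temperature": 0.7,
            "top_p": 0.9,
            "stop": ["\n\n"],  # stop when this string is generated
        },
    },
)
print(response.json()["text"])
```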
@@ -185,17 +184,15 @@ The `image_data` can be a file name, a URL, or a base64 encoded string. See also
 Streaming is supported in a similar manner as [above](#streaming).
 
 ### Structured Outputs (JSON, Regex, EBNF)
-You can specify a JSON schema, Regular Expression or [EBNF](https://en.wikipedia.org/wiki/Extended_Backus%E2%80%93Naur_form) to constrain the model output. The model output will be guaranteed to follow the given constraints.
+You can specify a JSON schema, regular expression or [EBNF](https://en.wikipedia.org/wiki/Extended_Backus%E2%80%93Naur_form) to constrain the model output. The model output will be guaranteed to follow the given constraints. Only one constraint parameter (`json_schema`, `regex`, or `ebnf`) can be specified for a request.
 
 SGLang supports two grammar backends:
 
-- [Outlines](https://github.com/dottxt-ai/outlines) (default): Supports JSON schema and Regular Expression constraints.
+- [Outlines](https://github.com/dottxt-ai/outlines) (default): Supports JSON schema and regular expression constraints.
 - [XGrammar](https://github.com/mlc-ai/xgrammar): Supports JSON schema and EBNF constraints.
   - XGrammar currently uses the [GGML BNF format](https://github.com/ggerganov/llama.cpp/blob/master/grammars/README.md)
 
-> 🔔 Only one constraint parameter (`json_schema`, `regex`, or `ebnf`) can be specified at a time.
+Initialize the XGrammar backend using `--grammar-backend xgrammar` flag
 
-Initialise xgrammar backend using `--grammar-backend xgrammar` flag
 ```bash
 python -m sglang.launch_server --model-path meta-llama/Meta-Llama-3.1-8B-Instruct \
 --port 30000 --host 0.0.0.0 --grammar-backend [xgrammar|outlines] # xgrammar or outlines (default: outlines)
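As a companion to the section above, a sketch of a JSON-schema-constrained `/generate` request. The schema itself is illustrative; note that `json_schema` is passed as a string inside `sampling_params`:

```python
# Minimal sketch: constrain /generate output with a JSON schema.
# Assumes a local server on port 30000; the schema is illustrative.
import json
import requests

schema = json.dumps(
    {
        "type": "object",
        "properties": {
            "name": {"type": "string"},
            "population": {"type": "integer"},
        },
        "required": ["name", "population"],
    }
)

response = requests.post(
    "http://localhost:30000/generate",
    json={
        "text": "Give information about the capital of France in JSON.",
        "sampling_params": {
            "max_new_tokens": 64,
            "temperature": 0,
            # only one of json_schema / regex / ebnf per request
            "json_schema": schema,
        },
    },
)
print(response.json()["text"])
```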
@@ -171,15 +171,15 @@ class CompletionRequest(BaseModel):
     top_k: int = -1
     min_p: float = 0.0
     min_tokens: int = 0
-    regex: Optional[str] = None
     json_schema: Optional[str] = None
+    regex: Optional[str] = None
+    ebnf: Optional[str] = None
     repetition_penalty: float = 1.0
     stop_token_ids: Optional[List[int]] = None
     no_stop_trim: bool = False
     ignore_eos: bool = False
     skip_special_tokens: bool = True
     lora_path: Optional[Union[List[Optional[str]], Optional[str]]] = None
-    ebnf: Optional[str] = None
 
 
 class CompletionResponseChoice(BaseModel):
@@ -315,13 +315,13 @@ class ChatCompletionRequest(BaseModel):
     min_p: float = 0.0
     min_tokens: int = 0
     regex: Optional[str] = None
+    ebnf: Optional[str] = None
     repetition_penalty: float = 1.0
     stop_token_ids: Optional[List[int]] = None
     no_stop_trim: bool = False
     ignore_eos: bool = False
     skip_special_tokens: bool = True
     lora_path: Optional[Union[List[Optional[str]], Optional[str]]] = None
-    ebnf: Optional[str] = None
 
 
 class FunctionResponse(BaseModel):
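With `ebnf` now on both request models, an EBNF constraint can be passed to the OpenAI-compatible API as well. A minimal sketch, assuming the server runs with `--grammar-backend xgrammar`; the grammar (in GGML BNF format) and model name are illustrative:

```python
# Minimal sketch: EBNF-constrained chat completion via extra_body.
# Requires a server launched with --grammar-backend xgrammar.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:30000/v1", api_key="None")

ebnf_grammar = 'root ::= "Paris" | "London"'  # illustrative GGML BNF grammar

response = client.chat.completions.create(
    model="meta-llama/Meta-Llama-3.1-8B-Instruct",
    messages=[{"role": "user", "content": "Name a European capital."}],
    temperature=0,
    max_tokens=16,
    extra_body={"ebnf": ebnf_grammar},
)
print(response.choices[0].message.content)
```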
@@ -19,6 +19,14 @@ _SAMPLING_EPS = 1e-6
 
 
 class SamplingParams:
+    """
+    The sampling parameters.
+
+    See docs/references/sampling_params.md or
+    https://sgl-project.github.io/references/sampling_params.html
+    for the documentation.
+    """
+
     def __init__(
         self,
         max_new_tokens: int = 128,

@@ -33,9 +41,9 @@ class SamplingParams:
         repetition_penalty: float = 1.0,
         min_new_tokens: int = 0,
         spaces_between_special_tokens: bool = True,
-        regex: Optional[str] = None,
         n: int = 1,
         json_schema: Optional[str] = None,
+        regex: Optional[str] = None,
         ebnf: Optional[str] = None,
         no_stop_trim: bool = False,
         ignore_eos: bool = False,
@@ -578,6 +578,8 @@ def _set_envs_and_config(server_args: ServerArgs):
     os.environ["NCCL_NVLS_ENABLE"] = "0"
     os.environ["TORCH_NCCL_AVOID_RECORD_STREAMS"] = "1"
     os.environ["CUDA_DEVICE_MAX_CONNECTIONS"] = "4"
+    if "GLOO_SOCKET_IFNAME" not in os.environ:
+        os.environ["GLOO_SOCKET_IFNAME"] = "eth0"
 
     # Set prometheus env vars
     if server_args.enable_metrics:
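This is why the launch commands in the docs above no longer need the `GLOO_SOCKET_IFNAME=eth0` prefix: the server now defaults the variable, and a value already set in the environment still takes precedence. A sketch of overriding it when the NIC is not `eth0` (the interface name `ens5` is illustrative):

```python
# Minimal sketch: override the GLOO_SOCKET_IFNAME default before launch.
# The server only sets eth0 when the variable is unset, so an explicit
# value in the environment wins.
import os
import subprocess

os.environ["GLOO_SOCKET_IFNAME"] = "ens5"  # illustrative interface name

subprocess.run(
    [
        "python", "-m", "sglang.launch_server",
        "--model-path", "deepseek-ai/DeepSeek-V3",
        "--tp", "16", "--nccl-init", "10.0.0.1:5000",
        "--nnodes", "2", "--node-rank", "0", "--trust-remote-code",
    ],
    env=os.environ,
)
```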
@@ -42,7 +42,6 @@ class ServerArgs:
     model_path: str
     tokenizer_path: Optional[str] = None
    tokenizer_mode: str = "auto"
-    skip_tokenizer_init: bool = False
     load_format: str = "auto"
     trust_remote_code: bool = True
     dtype: str = "auto"
@@ -54,6 +53,7 @@ class ServerArgs:
     chat_template: Optional[str] = None
     is_embedding: bool = False
     revision: Optional[str] = None
+    skip_tokenizer_init: bool = False
     return_token_ids: bool = False
 
     # Port for the HTTP server
@@ -276,17 +276,6 @@ class ServerArgs:
             "tokenizer if available, and 'slow' will "
             "always use the slow tokenizer.",
         )
-        parser.add_argument(
-            "--skip-tokenizer-init",
-            action="store_true",
-            help="If set, skip init tokenizer and pass input_ids in generate request",
-        )
-        parser.add_argument(
-            "--return-token-ids",
-            action="store_true",
-            default=ServerArgs.return_token_ids,
-            help="Whether to return token IDs in the output, this may introduce additional overhead.",
-        )
         parser.add_argument(
             "--load-format",
             type=str,
@@ -394,6 +383,17 @@ class ServerArgs:
             "name, a tag name, or a commit id. If unspecified, will use "
             "the default version.",
         )
+        parser.add_argument(
+            "--skip-tokenizer-init",
+            action="store_true",
+            help="If set, skip init tokenizer and pass input_ids in generate request",
+        )
+        parser.add_argument(
+            "--return-token-ids",
+            action="store_true",
+            default=ServerArgs.return_token_ids,
+            help="Whether to return token IDs in the output, this may introduce additional overhead.",
+        )
 
         # Memory and scheduling
         parser.add_argument(
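As the relocated `--skip-tokenizer-init` help text says, the client then tokenizes locally and passes `input_ids` in the generate request. A minimal sketch, assuming a server launched with that flag on the default port 30000; the model name is illustrative:

```python
# Minimal sketch: with --skip-tokenizer-init, tokenize client-side and
# send input_ids to /generate instead of raw text.
import requests
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.2-1B-Instruct")
input_ids = tokenizer.encode("The capital of France is")

response = requests.post(
    "http://localhost:30000/generate",
    json={
        "input_ids": input_ids,
        "sampling_params": {"max_new_tokens": 16, "temperature": 0},
    },
)
print(response.json())
```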