[Docs] clean up structured outputs docs (#2654)
@@ -159,10 +159,10 @@ python -m sglang.launch_server --model-path meta-llama/Meta-Llama-3.1-405B-Instr
 # Run 405B (fp16) on two nodes
 
 ## on the first node, replace the `172.16.4.52:20000` with your own first node ip address and port
-GLOO_SOCKET_IFNAME=eth0 python3 -m sglang.launch_server --model-path meta-llama/Meta-Llama-3.1-405B-Instruct --tp 16 --nccl-init-addr 172.16.4.52:20000 --nnodes 2 --node-rank 0 --disable-cuda-graph
+python3 -m sglang.launch_server --model-path meta-llama/Meta-Llama-3.1-405B-Instruct --tp 16 --nccl-init-addr 172.16.4.52:20000 --nnodes 2 --node-rank 0
 
 ## on the second node, replace the `172.16.4.52:20000` with your own first node ip address and port
-GLOO_SOCKET_IFNAME=eth0 python3 -m sglang.launch_server --model-path meta-llama/Meta-Llama-3.1-405B-Instruct --tp 16 --nccl-init-addr 172.16.4.52:20000 --nnodes 2 --node-rank 1 --disable-cuda-graph
+python3 -m sglang.launch_server --model-path meta-llama/Meta-Llama-3.1-405B-Instruct --tp 16 --nccl-init-addr 172.16.4.52:20000 --nnodes 2 --node-rank 1
 ```
 
 </details>
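As a sanity check on the flags above: with `--tp 16` spread over `--nnodes 2`, each node hosts 8 contiguous tensor-parallel ranks, selected by `--node-rank`. The helper below is not part of SGLang, just a small illustration of that arithmetic under the assumption of a contiguous per-node split.

```python
def node_ranks(tp_size: int, nnodes: int, node_rank: int) -> list[int]:
    # Assumed layout: each node hosts tp_size // nnodes contiguous
    # tensor-parallel ranks, starting at node_rank * per_node.
    per_node = tp_size // nnodes
    start = node_rank * per_node
    return list(range(start, start + per_node))

print(node_ranks(16, 2, 0))  # ranks served by --node-rank 0
print(node_ranks(16, 2, 1))  # ranks served by --node-rank 1
```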
@@ -221,17 +221,15 @@
 "metadata": {},
 "source": [
 "## Structured Outputs (JSON, Regex, EBNF)\n",
-"You can specify a JSON schema, Regular Expression or [EBNF](https://en.wikipedia.org/wiki/Extended_Backus%E2%80%93Naur_form) to constrain the model output. The model output will be guaranteed to follow the given constraints. \n",
+"You can specify a JSON schema, [regular expression](https://en.wikipedia.org/wiki/Regular_expression) or [EBNF](https://en.wikipedia.org/wiki/Extended_Backus%E2%80%93Naur_form) to constrain the model output. The model output will be guaranteed to follow the given constraints. Only one constraint parameter (`json_schema`, `regex`, or `ebnf`) can be specified for a request.\n",
 "\n",
 "SGLang supports two grammar backends:\n",
 "\n",
-"- [Outlines](https://github.com/dottxt-ai/outlines) (default): Supports JSON schema and Regular Expression constraints.\n",
+"- [Outlines](https://github.com/dottxt-ai/outlines) (default): Supports JSON schema and regular expression constraints.\n",
 "- [XGrammar](https://github.com/mlc-ai/xgrammar): Supports JSON schema and EBNF constraints.\n",
 "    - XGrammar currently uses the [GGML BNF format](https://github.com/ggerganov/llama.cpp/blob/master/grammars/README.md)\n",
 "\n",
-"> 🔔 Only one constraint parameter (`json_schema`, `regex`, or `ebnf`) can be specified at a time.\n",
-"\n",
-"Initialise xgrammar backend using `--grammar-backend xgrammar` flag\n",
+"Initialize the XGrammar backend using `--grammar-backend xgrammar` flag\n",
 "```bash\n",
 "python -m sglang.launch_server --model-path meta-llama/Meta-Llama-3.1-8B-Instruct \\\n",
 "--port 30000 --host 0.0.0.0 --grammar-backend [xgrammar|outlines] # xgrammar or outlines (default: outlines)\n",
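The "only one constraint parameter" rule the new wording states can be sketched as a small client-side check. This helper is hypothetical (not part of SGLang's API); it only illustrates the mutual exclusivity of `json_schema`, `regex`, and `ebnf`.

```python
from typing import Optional


def validate_constraints(json_schema: Optional[str] = None,
                         regex: Optional[str] = None,
                         ebnf: Optional[str] = None) -> Optional[str]:
    # Mirrors the documented rule: at most one of the three constraint
    # parameters may be set for a single request.
    given = [name for name, value in (("json_schema", json_schema),
                                      ("regex", regex),
                                      ("ebnf", ebnf)) if value is not None]
    if len(given) > 1:
        raise ValueError(f"only one constraint parameter allowed, got {given}")
    return given[0] if given else None
```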
@@ -11,20 +11,22 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"With SGLang, You can define a JSON schema, EBNF or regular expression to constrain the model's output.\n",
-"[JSON Schema](https://json-schema.org/): Formats output into structured JSON objects with validation rules.\n",
-"[EBNF (Extended Backus-Naur Form)](https://en.wikipedia.org/wiki/Extended_Backus%E2%80%93Naur_form): Defines complex syntax rules, especially for recursive patterns like nested structures.\n",
-"[Regular Expressions](https://en.wikipedia.org/wiki/Regular_expression): Matches text patterns for simple validation and formatting.\n",
+"## Structured Outputs (JSON, Regex, EBNF)\n",
+"You can specify a JSON schema, [regular expression](https://en.wikipedia.org/wiki/Regular_expression) or [EBNF](https://en.wikipedia.org/wiki/Extended_Backus%E2%80%93Naur_form) to constrain the model output. The model output will be guaranteed to follow the given constraints. Only one constraint parameter (`json_schema`, `regex`, or `ebnf`) can be specified for a request.\n",
+"\n",
+"SGLang supports two grammar backends:\n",
+"\n",
+"- [Outlines](https://github.com/dottxt-ai/outlines) (default): Supports JSON schema and regular expression constraints.\n",
+"- [XGrammar](https://github.com/mlc-ai/xgrammar): Supports JSON schema and EBNF constraints.\n",
+"    - XGrammar currently uses the [GGML BNF format](https://github.com/ggerganov/llama.cpp/blob/master/grammars/README.md)\n",
+"\n",
+"Initialize the XGrammar backend using `--grammar-backend xgrammar` flag\n",
 "```bash\n",
 "python -m sglang.launch_server --model-path meta-llama/Meta-Llama-3.1-8B-Instruct \\\n",
 "--port 30000 --host 0.0.0.0 --grammar-backend [xgrammar|outlines] # xgrammar or outlines (default: outlines)\n",
 "```\n",
-"## Grammar Backend\n",
-"\n",
-"SGLang has two backends: [Outlines](https://github.com/dottxt-ai/outlines) (default) and [XGrammar](https://blog.mlc.ai/2024/11/22/achieving-efficient-flexible-portable-structured-generation-with-xgrammar). We suggest using XGrammar whenever possible for its better performance. For more details, see [XGrammar technical overview](https://blog.mlc.ai/2024/11/22/achieving-efficient-flexible-portable-structured-generation-with-xgrammar).\n",
-"\n",
-"* Xgrammar Backend: JSON and EBNF\n",
-"* Outlines Backend: JSON and regular expressions"
+"\n",
+"We suggest using XGrammar whenever possible for its better performance. For more details, see [XGrammar technical overview](https://blog.mlc.ai/2024/11/22/achieving-efficient-flexible-portable-structured-generation-with-xgrammar)."
 ]
 },
 {
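For the JSON-schema constraint the hunks above describe, the schema is typically serialized to a string before being passed as the `json_schema` parameter. A minimal sketch, with a made-up name/population schema (not taken from the notebook):

```python
import json

# Hypothetical schema for illustration; the server receives it as a string.
schema = {
    "type": "object",
    "properties": {
        "name": {"type": "string"},
        "population": {"type": "integer"},
    },
    "required": ["name", "population"],
}
json_schema = json.dumps(schema)

# A constrained completion is guaranteed to parse as JSON matching the schema:
completion = '{"name": "Paris", "population": 2140526}'
parsed = json.loads(completion)
```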
@@ -208,15 +210,6 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"from sglang.utils import (\n",
-"    execute_shell_command,\n",
-"    wait_for_server,\n",
-"    terminate_process,\n",
-"    print_highlight,\n",
-")\n",
-"\n",
-"import requests\n",
-"\n",
 "server_process = execute_shell_command(\n",
 "    \"\"\"\n",
 "python3 -m sglang.launch_server --model-path meta-llama/Llama-3.2-1B-Instruct --port=30010 --grammar-backend xgrammar\n",
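A request against a server like the one launched in this cell would carry the constraint inside `sampling_params`. The payload below is a sketch only: the port (30010) comes from the cell above, while the prompt and regex are made-up examples, and the actual POST is left commented out.

```python
import json

# Illustrative payload for SGLang's native /generate endpoint; the regex
# value here is a hypothetical example, not from the notebook.
payload = {
    "text": "Paris is the capital of",
    "sampling_params": {
        "max_new_tokens": 64,
        "regex": "(France|Germany|Italy)",
    },
}
body = json.dumps(payload)
# response = requests.post("http://localhost:30010/generate", json=payload)
```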
@@ -39,10 +39,9 @@ The `sampling_params` follows this format
 ```python
 # The maximum number of output tokens
 max_new_tokens: int = 128,
-# Stop when hitting any of the strings in this list.
+# Stop when hitting any of the strings in this list
 stop: Optional[Union[str, List[str]]] = None,
-# Stop when hitting any of the token_ids in this list. Could be useful when mixed with
-# `min_new_tokens`.
+# Stop when hitting any of the token_ids in this list
 stop_token_ids: Optional[List[int]] = [],
 # Sampling temperature
 temperature: float = 1.0,
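The `stop` parameter documented in this hunk behaves like truncation at the earliest stop string. A hypothetical helper (not SGLang code) sketching that semantics:

```python
def truncate_at_stop(text: str, stop: list[str]) -> str:
    # Emulates `stop`: generation ends at the earliest occurrence of any
    # stop string; the stop string itself is not included in the output.
    cut = len(text)
    for s in stop:
        i = text.find(s)
        if i != -1:
            cut = min(cut, i)
    return text[:cut]
```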
@@ -52,26 +51,26 @@ top_p: float = 1.0,
 top_k: int = -1,
 # Min-p sampling
 min_p: float = 0.0,
-# Whether to ignore EOS token.
+# Whether to ignore EOS token
 ignore_eos: bool = False,
-# Whether to skip the special tokens during detokenization.
+# Whether to skip the special tokens during detokenization
 skip_special_tokens: bool = True,
-# Whether to add spaces between special tokens during detokenization.
+# Whether to add spaces between special tokens during detokenization
 spaces_between_special_tokens: bool = True,
 # Do parallel sampling and return `n` outputs.
 n: int = 1,
 
 ## Structured Outputs
-# Only one of the below three can be set at a time:
+# Only one of the below three can be set for a request.
 
-# Constrains the output to follow a given regular expression.
-regex: Optional[str] = None,
-# Constrains the output to follow a given JSON schema.
+# Constrain the output to follow a given JSON schema.
 json_schema: Optional[str] = None,
-# Constrains the output to follow a given EBNF Grammar.
+# Constrain the output to follow a given regular expression.
+regex: Optional[str] = None,
+# Constrain the output to follow a given EBNF grammar.
 ebnf: Optional[str] = None,
 
-## Penalties. See [Performance Implications on Penalties] section below for more informations.
+## Penalties.
 
 # Float that penalizes new tokens based on their frequency in the generated text so far.
 # Values > 0 encourage the model to use new tokens, while values < 0 encourage the model to
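The frequency penalty described at the end of this hunk can be sketched in a few lines. This is a simplified toy over a token-to-logit dict, not SGLang's actual implementation, but it shows the sign convention the comments state:

```python
from collections import Counter


def apply_frequency_penalty(logits: dict[str, float],
                            generated: list[str],
                            penalty: float) -> dict[str, float]:
    # Subtract penalty * (occurrences of the token so far) from its logit:
    # penalty > 0 discourages repetition, penalty < 0 encourages it.
    counts = Counter(generated)
    return {tok: score - penalty * counts[tok] for tok, score in logits.items()}
```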
@@ -185,17 +184,15 @@ The `image_data` can be a file name, a URL, or a base64 encoded string. See also
 Streaming is supported in a similar manner as [above](#streaming).
 
 ### Structured Outputs (JSON, Regex, EBNF)
-You can specify a JSON schema, Regular Expression or [EBNF](https://en.wikipedia.org/wiki/Extended_Backus%E2%80%93Naur_form) to constrain the model output. The model output will be guaranteed to follow the given constraints.
+You can specify a JSON schema, regular expression or [EBNF](https://en.wikipedia.org/wiki/Extended_Backus%E2%80%93Naur_form) to constrain the model output. The model output will be guaranteed to follow the given constraints. Only one constraint parameter (`json_schema`, `regex`, or `ebnf`) can be specified for a request.
 
 SGLang supports two grammar backends:
 
-- [Outlines](https://github.com/dottxt-ai/outlines) (default): Supports JSON schema and Regular Expression constraints.
+- [Outlines](https://github.com/dottxt-ai/outlines) (default): Supports JSON schema and regular expression constraints.
 - [XGrammar](https://github.com/mlc-ai/xgrammar): Supports JSON schema and EBNF constraints.
     - XGrammar currently uses the [GGML BNF format](https://github.com/ggerganov/llama.cpp/blob/master/grammars/README.md)
 
-> 🔔 Only one constraint parameter (`json_schema`, `regex`, or `ebnf`) can be specified at a time.
-
-Initialise xgrammar backend using `--grammar-backend xgrammar` flag
+Initialize the XGrammar backend using `--grammar-backend xgrammar` flag
 ```bash
 python -m sglang.launch_server --model-path meta-llama/Meta-Llama-3.1-8B-Instruct \
 --port 30000 --host 0.0.0.0 --grammar-backend [xgrammar|outlines] # xgrammar or outlines (default: outlines)
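What "guaranteed to follow the given constraints" means for a regex constraint can be demonstrated offline with Python's `re`: a constrained completion always fully matches the pattern. The pattern and completion below are made-up examples.

```python
import re

# Hypothetical regex constraint and an example of a completion that a
# constrained decode would be guaranteed to produce a full match for.
pattern = r"(Paris|London) is the capital of (France|England)\."
completion = "Paris is the capital of France."
match = re.fullmatch(pattern, completion)
```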