From ad46550d25b83b828c895b41f1e1fd0f6fe70bf4 Mon Sep 17 00:00:00 2001
From: yang_zcybb
Date: Thu, 13 Mar 2025 13:12:14 +0800
Subject: [PATCH] [Doc] Fix typo in backend/sampling_params (#3835)

Co-authored-by: yangzhice.124
---
 docs/backend/sampling_params.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/backend/sampling_params.md b/docs/backend/sampling_params.md
index 5f967b986..dae45f252 100644
--- a/docs/backend/sampling_params.md
+++ b/docs/backend/sampling_params.md
@@ -8,7 +8,7 @@ If you want a high-level endpoint that can automatically handle chat templates,
 
 The `/generate` endpoint accepts the following parameters in JSON format. For in detail usage see the [native api doc](./native_api.ipynb).
 
-* `prompt: Optional[Union[List[str], str]] = None` The input prompt. Can be a single prompt or a batch of prompts.
+* `text: Optional[Union[List[str], str]] = None` The input prompt. Can be a single prompt or a batch of prompts.
 * `input_ids: Optional[Union[List[List[int]], List[int]]] = None` Alternative to `text`. Specify the input as token IDs instead of text.
 * `sampling_params: Optional[Union[List[Dict], Dict]] = None` The sampling parameters as described in the sections below.
 * `return_logprob: Optional[Union[List[bool], bool]] = None` Whether to return log probabilities for tokens.
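
The parameter rename above matters when calling the endpoint directly: the request body key is `text`, not `prompt`. A minimal sketch of a `/generate` request payload, assuming the conventions in the patched doc (the server URL and the specific sampling values are illustrative assumptions, not taken from the patch):

```python
import json

# Hypothetical request body for the /generate endpoint.
# Per the doc fix above, the input key is `text` (not `prompt`);
# `sampling_params` and `return_logprob` follow the same parameter list.
payload = {
    "text": "The capital of France is",
    "sampling_params": {"temperature": 0.0, "max_new_tokens": 16},
    "return_logprob": False,
}

# The endpoint accepts this structure serialized as JSON.
body = json.dumps(payload)
print(body)

# To actually send it (assumes a server on localhost:30000; not run here):
# import requests
# resp = requests.post("http://localhost:30000/generate", json=payload)
# print(resp.json())
```

Since `text` accepts `Union[List[str], str]`, passing a list of strings in the same field would submit a batch of prompts instead of a single one.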