Support penalty in overlap mode; return logprob with chunked prefill; improve benchmark scripts (#3988)
Co-authored-by: SangBin Cho <rkooo567@gmail.com>
Co-authored-by: dhou-xai <dhou@x.ai>
Co-authored-by: Hanming Lu <hanming_lu@berkeley.edu>
@@ -210,8 +210,7 @@
     "response = requests.post(url, json=data)\n",
     "print_highlight(response.text)\n",
     "assert response.json()[\"success\"] is True\n",
-    "assert response.json()[\"message\"] == \"Succeeded to update model weights.\"\n",
-    "assert response.json().keys() == {\"success\", \"message\"}"
+    "assert response.json()[\"message\"] == \"Succeeded to update model weights.\""
    ]
   },
   {
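The hunk above tightens the notebook's checks on the update-weights response: the strict `keys()` comparison is dropped and the `message` assertion becomes the cell's last line. As a minimal offline sketch of what the remaining asserts verify, with `payload` standing in for `response.json()` (no server involved):

```python
# Stand-in for response.json() from the update-weights request in the notebook.
payload = {"success": True, "message": "Succeeded to update model weights."}

# The two checks the notebook keeps after this change.
assert payload["success"] is True
assert payload["message"] == "Succeeded to update model weights."
```

Dropping the `keys()` equality check means the endpoint may add fields to the response later without breaking the notebook.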
@@ -411,7 +410,7 @@
     " },\n",
     ")\n",
     "output = response.json()\n",
-    "output_tokens = output[\"token_ids\"]\n",
+    "output_tokens = output[\"output_ids\"]\n",
     "\n",
     "output_text = tokenizer.decode(output_tokens, skip_special_tokens=False)\n",
     "print_highlight(f\"Tokenized Output: {output_tokens}\")\n",
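This hunk renames the response field the notebook reads from `token_ids` to `output_ids`. A client that must work across both server versions could tolerate either key; the helper below is a hypothetical sketch (not part of the commit) of that compatibility shim:

```python
def extract_output_tokens(output: dict) -> list:
    """Hypothetical helper: read generated token ids from a response dict.

    The diff above renames the field from "token_ids" to "output_ids";
    checking both keys keeps responses from older servers readable.
    """
    for key in ("output_ids", "token_ids"):
        if key in output:
            return output[key]
    raise KeyError("no output token field found in response")

print(extract_output_tokens({"output_ids": [1, 2, 3]}))  # -> [1, 2, 3]
print(extract_output_tokens({"token_ids": [4, 5]}))      # -> [4, 5]
```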
@@ -96,7 +96,6 @@ Please consult the documentation below to learn more about the parameters you ma
 * `schedule_policy`: The scheduling policy to control the processing order of waiting prefill requests in a single engine.
 * `schedule_conservativeness`: Can be used to decrease/increase the conservativeness of the server when taking new requests. Highly conservative behavior leads to starvation, but low conservativeness leads to slowed-down performance.
 * `cpu_offload_gb`: Reserve this amount of RAM in GB for offloading of model parameters to the CPU.
-* `prefill_only_one_req`: When this flag is turned on, the engine prefills only one request at a time.
 
 ## Other runtime options
 
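The documentation hunk above removes the `prefill_only_one_req` bullet, leaving the three scheduling-related options. As an illustrative sketch only, a plain dict mirroring those documented flags rendered as CLI arguments; the values are assumptions for demonstration, not the engine's actual defaults:

```python
# Illustrative values only (assumptions, not real defaults) for the three
# runtime options the doc keeps after this change.
server_args = {
    "schedule_policy": "lpm",          # processing order of waiting prefill requests
    "schedule_conservativeness": 1.0,  # higher = more conservative when taking new requests
    "cpu_offload_gb": 0,               # GB of RAM reserved for CPU parameter offload
}

# Render as command-line flags (underscores become dashes).
for name, value in server_args.items():
    print(f"--{name.replace('_', '-')}={value}")
```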