Support penalty in overlap mode; return logprob with chunked prefill; improve benchmark scripts (#3988)
Co-authored-by: SangBin Cho <rkooo567@gmail.com>
Co-authored-by: dhou-xai <dhou@x.ai>
Co-authored-by: Hanming Lu <hanming_lu@berkeley.edu>
@@ -210,8 +210,7 @@
     "response = requests.post(url, json=data)\n",
     "print_highlight(response.text)\n",
     "assert response.json()[\"success\"] is True\n",
-    "assert response.json()[\"message\"] == \"Succeeded to update model weights.\"\n",
-    "assert response.json().keys() == {\"success\", \"message\"}"
+    "assert response.json()[\"message\"] == \"Succeeded to update model weights.\""
    ]
   },
   {
@@ -411,7 +410,7 @@
     " },\n",
     ")\n",
     "output = response.json()\n",
-    "output_tokens = output[\"token_ids\"]\n",
+    "output_tokens = output[\"output_ids\"]\n",
     "\n",
     "output_text = tokenizer.decode(output_tokens, skip_special_tokens=False)\n",
     "print_highlight(f\"Tokenized Output: {output_tokens}\")\n",
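The commit title mentions supporting penalties in overlap mode. As background, the sketch below shows how OpenAI-style frequency and presence penalties adjust next-token logits based on previously generated tokens. This is an illustrative sketch only, not SGLang's actual implementation; the function name `apply_penalties` and the plain-list logits representation are assumptions for the example.

```python
# Illustrative sketch (not SGLang's implementation) of how
# frequency/presence penalties modify next-token logits.
from collections import Counter

def apply_penalties(logits, output_ids, frequency_penalty=0.0, presence_penalty=0.0):
    """Return a new logits list with penalties applied.

    Each token that has already been generated has its logit reduced by
    frequency_penalty * occurrence_count + presence_penalty.
    """
    counts = Counter(output_ids)
    penalized = list(logits)
    for token_id, count in counts.items():
        penalized[token_id] -= frequency_penalty * count + presence_penalty
    return penalized

logits = [1.0, 2.0, 3.0, 4.0]
# Token 2 was generated twice, token 3 once.
new_logits = apply_penalties(logits, [2, 2, 3], frequency_penalty=0.5, presence_penalty=0.1)
```

In a batched engine the same adjustment is applied per request to its row of the logits tensor before sampling; the point of the fix is that these per-request penalty states must also be tracked when scheduling overlaps with GPU execution.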