[Doc] Update max_tokens to max_completion_tokens in all docs (#6248)

### What this PR does / why we need it?

Fixes the following deprecation warning triggered by the examples in our docs:

```
DeprecationWarning: max_tokens is deprecated in favor of the max_completion_tokens field.
```
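The rename only affects the request field: `max_completion_tokens` takes precedence, and the old `max_tokens` field still works but emits the warning above. A minimal sketch of that resolution logic (hypothetical helper for illustration, not vLLM's actual implementation):

```python
import warnings
from typing import Optional


def resolve_max_tokens(request: dict) -> Optional[int]:
    """Pick the completion-token limit from a request dict.

    Prefers the new ``max_completion_tokens`` field; falls back to the
    deprecated ``max_tokens`` field with a DeprecationWarning.
    (Hypothetical helper, for illustration only.)
    """
    if "max_completion_tokens" in request:
        return request["max_completion_tokens"]
    if "max_tokens" in request:
        warnings.warn(
            "max_tokens is deprecated in favor of the "
            "max_completion_tokens field.",
            DeprecationWarning,
        )
        return request["max_tokens"]
    return None
```

Because both fields resolve to the same limit, swapping the key in the docs changes no behavior; it only silences the warning.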

- vLLM version: v0.14.1
- vLLM main: d68209402d

Signed-off-by: shen-shanshan <467638484@qq.com>
Authored by Shanshan Shen on 2026-01-26 11:57:40 +08:00, committed by GitHub.
Parent: 418fccf0bc · Commit: e3eefdecbd
28 changed files with 43 additions and 43 deletions


````diff
@@ -127,7 +127,7 @@ curl http://<IP>:<Port>/v1/completions \
     -d '{
         "model": "qwen-2.5-7b-instruct",
         "prompt": "Beijing is a",
-        "max_tokens": 5,
+        "max_completion_tokens": 5,
         "temperature": 0
     }'
 ```
````