Update docs (#1768)
Co-authored-by: Chayenne Zhao <zhaochenyang20@gmail.com>
Co-authored-by: Chayenne <zhaochen20@outlook.com>
@@ -6,11 +6,11 @@ Achieving a large batch size is the most important thing for attaining high thro
When the server is running at full load, look for the following in the log:
```Decode batch. #running-req: 233, #token: 370959, token usage: 0.82, gen throughput (token/s): 4594.01, #queue-req: 317```
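If you scrape these logs for monitoring, the fields can be pulled out with a regular expression. Below is a minimal sketch (not part of SGLang) that parses the metrics from a `Decode batch` line; the field names follow the log format shown above.

```python
import re

# Matches the "Decode batch" log line format shown above.
DECODE_BATCH_RE = re.compile(
    r"#running-req: (?P<running>\d+), "
    r"#token: (?P<tokens>\d+), "
    r"token usage: (?P<usage>[\d.]+), "
    r"gen throughput \(token/s\): (?P<tput>[\d.]+), "
    r"#queue-req: (?P<queued>\d+)"
)

def parse_decode_batch(line: str) -> dict | None:
    """Extract metrics from a 'Decode batch' log line, or return None."""
    m = DECODE_BATCH_RE.search(line)
    if m is None:
        return None
    return {
        "running_reqs": int(m["running"]),
        "tokens": int(m["tokens"]),
        "token_usage": float(m["usage"]),
        "gen_throughput": float(m["tput"]),
        "queue_reqs": int(m["queued"]),
    }
```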
### Tune Your Request Submission Speed
`#queue-req` indicates the number of requests in the queue. If you frequently see `#queue-req == 0`, it suggests you are bottlenecked by the request submission speed.
A healthy range for `#queue-req` is `50 - 500`.
On the other hand, do not let `#queue-req` grow too large, as this also increases the scheduling overhead on the server.
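One way to keep `#queue-req` in that range is to bound the number of in-flight requests on the client side instead of submitting one request at a time. The sketch below is illustrative, assuming the server exposes SGLang's HTTP `/generate` endpoint on `localhost:30000`; the prompt set, payload fields, and concurrency value are placeholders to adapt to your workload.

```python
import requests
from concurrent.futures import ThreadPoolExecutor

URL = "http://localhost:30000/generate"  # assumed server address
MAX_IN_FLIGHT = 256  # illustrative; pick a value that keeps #queue-req in the healthy range

def generate(prompt: str) -> str:
    payload = {
        "text": prompt,
        "sampling_params": {"max_new_tokens": 64, "temperature": 0},
    }
    return requests.post(URL, json=payload).json()["text"]

prompts = [f"Summarize document {i}." for i in range(10_000)]  # illustrative workload

# The thread pool keeps up to MAX_IN_FLIGHT requests outstanding, so the
# server queue stays populated without growing without bound.
with ThreadPoolExecutor(max_workers=MAX_IN_FLIGHT) as pool:
    results = list(pool.map(generate, prompts))
```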
### Tune `--schedule-conservativeness`
@@ -31,6 +31,10 @@ If OOM happens during prefill, try to decrease `--chunked-prefill-size` to `4096
If OOM happens during decoding, try to decrease `--max-running-requests`.
You can also try to decrease `--mem-fraction-static`, which reduces the memory usage of the KV cache memory pool and helps both prefill and decoding.
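As a concrete example, these flags can be combined on the launch command; the model path and the exact values below are placeholders, and the right numbers depend on your hardware and workload:

```
python -m sglang.launch_server --model-path <your-model> \
  --chunked-prefill-size 4096 \
  --max-running-requests 128 \
  --mem-fraction-static 0.8
```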
### Try Advanced Options
- To enable the experimental overlapped scheduler, add `--enable-overlap-scheduler`. It overlaps the CPU scheduler with GPU computation and can accelerate almost all workloads. This does not currently work for constrained decoding.
- To enable torch.compile acceleration, add `--enable-torch-compile`. It accelerates small models on small batch sizes. This does not currently work for FP8.
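Both options are plain launch flags and can be added to the same launch command (model path is a placeholder):

```
python -m sglang.launch_server --model-path <your-model> \
  --enable-overlap-scheduler --enable-torch-compile
```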
### (Minor) Tune `--schedule-policy`
If you have many shared prefixes, use the default `--schedule-policy lpm`. `lpm` stands for longest prefix match.
When you have no shared prefixes at all, or you always send requests with shared prefixes together, you can try `--schedule-policy fcfs` (first come, first served).