Update docs (#486)
@@ -359,6 +359,10 @@ python -m sglang.launch_server --model-path meta-llama/Llama-2-7b-chat-hf --port
```
python -m sglang.launch_server --model-path meta-llama/Llama-2-7b-chat-hf --port 30000 --tp 2
```
- Add `--dp 2` to enable data parallelism. It can also be used together with `--tp`. Data parallelism gives better throughput if there is enough memory.
```
python -m sglang.launch_server --model-path meta-llama/Llama-2-7b-chat-hf --port 30000 --dp 2 --tp 2
```
- If you see out-of-memory errors during serving, reduce the memory usage of the KV cache pool by setting a smaller value of `--mem-fraction-static`. The default value is `0.9`.
```
python -m sglang.launch_server --model-path meta-llama/Llama-2-7b-chat-hf --port 30000 --mem-fraction-static 0.7
docs/hyperparameter_tuning.md (new file, 31 lines)
@@ -0,0 +1,31 @@
# Guide on Hyperparameter Tuning
## Achieving Peak Throughput
Achieving a large batch size is the most important thing for attaining high throughput.
When the server is running at full load, look for the following in the log:
```[gpu_id=0] #running-req: 233, #token: 370959, token usage: 0.82, gen throughput (token/s): 4594.01, #queue-req: 417```
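If you want to track these numbers programmatically, the log line above can be parsed with a small script. This is a sketch only; the exact log format is an assumption based on the example above and may change across versions.

```python
import re

# Assumed log format, taken from the example line above; adjust the pattern
# if your server version prints a different layout.
LOG_RE = re.compile(
    r"#running-req: (?P<running>\d+), "
    r"#token: (?P<tokens>\d+), "
    r"token usage: (?P<usage>[\d.]+), "
    r"gen throughput \(token/s\): (?P<tps>[\d.]+), "
    r"#queue-req: (?P<queued>\d+)"
)

def parse_log_line(line: str) -> dict:
    """Extract the scheduler stats from one server log line."""
    m = LOG_RE.search(line)
    if m is None:
        raise ValueError(f"unrecognized log line: {line!r}")
    d = m.groupdict()
    return {
        "running": int(d["running"]),
        "tokens": int(d["tokens"]),
        "usage": float(d["usage"]),
        "tps": float(d["tps"]),
        "queued": int(d["queued"]),
    }

line = ("[gpu_id=0] #running-req: 233, #token: 370959, token usage: 0.82, "
        "gen throughput (token/s): 4594.01, #queue-req: 417")
stats = parse_log_line(line)
print(stats)
```

Watching `usage` and `queued` over time is enough to apply the tuning advice in the sections below.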
### Tune Your Request Submission Speed
`#queue-req` indicates the number of requests in the queue. If you frequently see `#queue-req == 0`, it suggests you are bottlenecked by the request submission speed.
A healthy range for `#queue-req` is `100 - 3000`.
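One common way to keep the queue full is to submit requests concurrently rather than one at a time. A minimal sketch with a thread pool follows; `send_request` is a hypothetical placeholder for the blocking HTTP call your client makes to the server, and the worker count of 256 is an example value, not a recommendation from the docs.

```python
from concurrent.futures import ThreadPoolExecutor

def send_request(prompt: str) -> str:
    # Hypothetical placeholder: in practice this would POST the prompt to
    # the running server and block until generation finishes.
    return f"completion for: {prompt}"

prompts = [f"question {i}" for i in range(1000)]

# Submit all prompts at once with a large worker pool so the server always
# has a deep queue to batch from, instead of draining it between requests.
with ThreadPoolExecutor(max_workers=256) as pool:
    results = list(pool.map(send_request, prompts))

print(len(results))
```

With a submission pattern like this, `#queue-req` should stay well above zero while the workload lasts.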
### Tune `--schedule-conservativeness`
`token usage` indicates the KV cache memory utilization of the server. `token usage > 0.9` means good utilization.
If you frequently see `token usage < 0.9` and `#queue-req > 0`, it means the server is too conservative about taking in new requests. You can decrease `--schedule-conservativeness` to a value like 0.3.
Serving can be too conservative when users send many requests with a large `max_new_tokens` but the requests stop very early due to EOS or stop strings.
On the other hand, if you see `token usage` very high and you frequently see warnings like `decode out of memory happened, #retracted_reqs: 1, #new_token_ratio: 0.9998 -> 1.0000`, you can increase `--schedule-conservativeness` to a value like 1.3.
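For example, reusing the launch command from the serving docs (the flag value here is just the starting point suggested above, not a universal setting):

```shell
# Take in new requests more aggressively (less conservative scheduling)
python -m sglang.launch_server --model-path meta-llama/Llama-2-7b-chat-hf \
    --port 30000 --schedule-conservativeness 0.3
```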
|
### Tune `--dp-size` and `--tp-size`
Data parallelism is better for throughput: when there is enough GPU memory, always favor it.
### (Minor) Tune `--schedule-heuristic`
If you have many shared prefixes, use the default `--schedule-heuristic lpm`. `lpm` stands for longest prefix match.
When you have no shared prefixes at all, or you always send the requests with the shared prefixes together, you can try `--schedule-heuristic fcfs`. `fcfs` stands for first come first serve.
### (Minor) Tune `--max-prefill-tokens`, `--mem-fraction-static`, `--max-running-requests`
If you see out-of-memory errors, you can decrease these values. Otherwise, the defaults should work well.
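For example, shrinking the KV cache pool and capping concurrency together (the specific values below are illustrative starting points, not recommendations from the docs; tune them per workload):

```shell
# Reduce memory pressure: smaller KV cache pool, fewer concurrent requests
python -m sglang.launch_server --model-path meta-llama/Llama-2-7b-chat-hf \
    --port 30000 --mem-fraction-static 0.7 --max-running-requests 128
```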