From 87a0db82b867d52e775b96e344b0e16ff60cdb67 Mon Sep 17 00:00:00 2001
From: Lianmin Zheng
Date: Thu, 15 Aug 2024 10:54:24 -0700
Subject: [PATCH] update hyperparameter guide (#1114)

---
 docs/en/hyperparameter_tuning.md | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/docs/en/hyperparameter_tuning.md b/docs/en/hyperparameter_tuning.md
index 53b92435c..02a0657c3 100644
--- a/docs/en/hyperparameter_tuning.md
+++ b/docs/en/hyperparameter_tuning.md
@@ -10,7 +10,8 @@ When the server is running at full load, look for the following in the log:
 
 ### Tune Your Request Submission Speed
 `#queue-req` indicates the number of requests in the queue. If you frequently see `#queue-req == 0`, it suggests you are bottlenecked by the request submission speed.
-A healthy range for `#queue-req` is `100 - 1000`.
+A healthy range for `#queue-req` is `50 - 1000`.
+On the other hand, do not make `#queue-req` too large because it will also increase the scheduling overhead on the server.
 
 ### Tune `--schedule-conservativeness`
 `token usage` indicates the KV cache memory utilization of the server. `token usage > 0.9` means good utilization.
@@ -19,13 +20,14 @@ The case of serving being too conservative can happen when users send many reque
 
 On the other hand, if you see `token usage` very high and you frequently see warnings like
 `decode out of memory happened, #retracted_reqs: 1, #new_token_ratio: 0.9998 -> 1.0000`, you can increase `--schedule-conservativeness` to a value like 1.3.
+If you see `decode out of memory happened` occasionally but not frequently, it is okay.
 
 ### Tune `--dp-size` and `--tp-size`
 Data parallelism is better for throughput. When there is enough GPU memory, always favor data parallelism for throughput.
 
-### (Minor) Tune `--max-prefill-tokens`, `--mem-fraction-static`, `--max-running-requests`
+### Avoid out-of-memory by tuning `--chunked-prefill-size`, `--mem-fraction-static`, `--max-running-requests`
 If you see out of memory (OOM) errors, you can decrease these parameters.
-If OOM happens during prefill, try to decrease `--max-prefill-tokens`.
+If OOM happens during prefill, try to decrease `--chunked-prefill-size` to `4096` or `2048`.
 If OOM happens during decoding, try to decrease `--max-running-requests`.
 You can also try to decrease `--mem-fraction-static`, which reduces the memory usage of the KV cache memory pool and helps both prefill and decoding.
 
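For reference, all of the flags touched by this guide are passed to the SGLang server at launch time. Below is a minimal sketch, assuming the standard `python -m sglang.launch_server` entry point; the model path and the specific flag values are illustrative placeholders rather than recommendations:

```bash
# Launch the SGLang server with the tuning knobs covered in this guide.
# Lower --chunked-prefill-size, --max-running-requests, and --mem-fraction-static
# if you hit out-of-memory errors; adjust --schedule-conservativeness based on
# the `token usage` and retraction warnings observed in the server log.
python -m sglang.launch_server \
  --model-path meta-llama/Meta-Llama-3-8B-Instruct \
  --tp-size 1 \
  --dp-size 2 \
  --mem-fraction-static 0.85 \
  --chunked-prefill-size 4096 \
  --max-running-requests 256 \
  --schedule-conservativeness 0.8
```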