## Flashinfer Mode

[flashinfer](https://github.com/flashinfer-ai/flashinfer) is a kernel library for LLM serving.
It can be used in the SGLang runtime to accelerate attention computation.

### Install flashinfer

See https://docs.flashinfer.ai/installation.html.
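
At the time of writing, flashinfer distributes prebuilt wheels through a version-specific index, so an install typically looks like the sketch below. The index URL here is only an illustrative example; consult the installation page above for the exact URL matching your CUDA and PyTorch versions.

```bash
# Illustrative only: the cu121/torch2.3 index is an example, not a fixed value.
# Pick the index listed at https://docs.flashinfer.ai/installation.html
# that matches your installed CUDA and PyTorch versions.
pip install flashinfer -i https://flashinfer.ai/whl/cu121/torch2.3/
```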

### Run a Server With Flashinfer Mode

Add the `--enable-flashinfer` argument to enable flashinfer when launching a server.

Example:

```bash
python -m sglang.launch_server --model-path meta-llama/Llama-2-7b-chat-hf --port 30000 --enable-flashinfer
```
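
Once the server is up, you can sanity-check it with a plain HTTP request. The sketch below assumes the server exposes the SGLang runtime's `/generate` endpoint on the port chosen above; adjust the prompt and sampling parameters as needed.

```bash
# Send a simple generation request to the locally running server.
curl http://localhost:30000/generate \
  -H "Content-Type: application/json" \
  -d '{"text": "Once upon a time,", "sampling_params": {"max_new_tokens": 16, "temperature": 0}}'
```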