## Flashinfer Mode
[`flashinfer`](https://github.com/flashinfer-ai/flashinfer) is a kernel library for LLM serving; we use it to accelerate attention computation.
### Install flashinfer
```bash
git submodule update --init --recursive
pip install 3rdparty/flashinfer/python
```
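
To confirm the build installed correctly, you can try importing the package before launching the server (a minimal check; it assumes the package installs under the `flashinfer` module name):

```bash
# A no-op import; any error here means the install or build failed.
python -c "import flashinfer"
```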
### Run Server With Flashinfer Mode
Enable it by passing the `--model-mode` argument on the command line.
Example:
```bash
python -m sglang.launch_server --model-path meta-llama/Llama-2-7b-chat-hf --port 30000 --model-mode flashinfer
```
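
Once the server is up, a quick request can verify that it is serving (a sketch that assumes the standard SGLang `/generate` endpoint on the chosen port):

```bash
# Send a simple generation request to the running server.
curl http://localhost:30000/generate \
  -H "Content-Type: application/json" \
  -d '{"text": "Say hello.", "sampling_params": {"max_new_tokens": 16, "temperature": 0}}'
```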