From d737da5f17ebd179fa9d6a79fb28e6d09398848d Mon Sep 17 00:00:00 2001
From: Lianmin Zheng
Date: Thu, 4 Jul 2024 00:55:40 -0700
Subject: [PATCH] Update README.md

---
 README.md | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/README.md b/README.md
index 90280f99f..c22c257b5 100644
--- a/README.md
+++ b/README.md
@@ -49,10 +49,10 @@ pip install -e "python[all]"
 pip install flashinfer -i https://flashinfer.ai/whl/cu121/torch2.3/
 ```
 
-### Method 3: Using Docker
-The docker images are vailable on Docker Hub as [lmsysorg/sglang](https://hub.docker.com/r/lmsysorg/sglang/tags).
+### Method 3: Using docker
+The docker images are available on Docker Hub as [lmsysorg/sglang](https://hub.docker.com/r/lmsysorg/sglang/tags).
 
-### Notes
+### Common Notes
 - If you see errors from the Triton compiler, please install the [Triton Nightly](https://triton-lang.org/main/getting-started/installation.html).
 - If you cannot install FlashInfer, check out its [installation](https://docs.flashinfer.ai/installation.html#) page. If you still cannot install it, you can use the slower Triton kernels by adding `--disable-flashinfer` when launching the server.
 - If you only need to use the OpenAI backend, you can avoid installing other dependencies by using `pip install "sglang[openai]"`.