diff --git a/docs/get_started/install.md b/docs/get_started/install.md
index 05e3eaefe..c14610159 100644
--- a/docs/get_started/install.md
+++ b/docs/get_started/install.md
@@ -51,6 +51,8 @@
     python3 -m sglang.launch_server --model-path meta-llama/Llama-3.1-8B-Instruct --host 0.0.0.0 --port 30000
 ```
 
+You can also find the nightly docker images [here](https://hub.docker.com/r/lmsysorg/sglang/tags?name=nightly).
+
 ## Method 4: Using Kubernetes
 
 Please check out [OME](https://github.com/sgl-project/ome), a Kubernetes operator for enterprise-grade management and serving of large language models (LLMs).
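
The added doc line points readers at the nightly image tags. A minimal sketch of how a reader might use one, assuming a tag of the form `lmsysorg/sglang:<nightly-tag>` picked from the linked Docker Hub page (the exact tag name is not stated in the diff and must be checked there; the `docker run` flags mirror the surrounding doc's example):

```shell
# Hypothetical usage sketch -- replace <nightly-tag> with a real tag
# from https://hub.docker.com/r/lmsysorg/sglang/tags?name=nightly
IMAGE="lmsysorg/sglang:<nightly-tag>"

# Pull the nightly image and launch the server, following the
# docker run invocation shown in the install doc above.
docker pull "$IMAGE"
docker run --gpus all \
    -p 30000:30000 \
    "$IMAGE" \
    python3 -m sglang.launch_server --model-path meta-llama/Llama-3.1-8B-Instruct --host 0.0.0.0 --port 30000
```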