# Modal
vLLM can be run on cloud GPUs with [Modal](https://modal.com), a serverless computing platform designed for fast auto-scaling.
For details on how to deploy vLLM on Modal, see [this tutorial in the Modal documentation](https://modal.com/docs/examples/vllm_inference).
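A deployment typically defines a Modal app that installs vLLM into a container image and exposes its OpenAI-compatible server on a GPU. The sketch below is a minimal illustration, not the tutorial's exact code; the model name, GPU type, and version pins are placeholder assumptions.

```python
# Minimal sketch of serving vLLM on Modal (illustrative, not the
# official tutorial code). Requires `pip install modal` and `modal setup`.
import modal

# Container image with vLLM installed; the pin is a placeholder.
image = modal.Image.debian_slim().pip_install("vllm")

app = modal.App("vllm-inference", image=image)

@app.function(gpu="A100", timeout=600)
@modal.web_server(port=8000)
def serve():
    import subprocess

    # Launch vLLM's OpenAI-compatible API server inside the container.
    # The model is an example; substitute any model vLLM supports.
    subprocess.Popen(
        "vllm serve Qwen/Qwen2.5-0.5B-Instruct --port 8000",
        shell=True,
    )
```

Deploying with `modal deploy` yields a public HTTPS endpoint that auto-scales with request load, including scaling to zero when idle.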