
# Apply SGLang on NVIDIA Jetson Orin

## Prerequisites

Before starting, install torch from the Jetson AI Lab PyPI index (built for JetPack 6 / CUDA 12.6):

```bash
pip install torch --index-url https://pypi.jetson-ai-lab.dev/jp6/cu126
```

## Installation

Please refer to the Installation Guide to install FlashInfer and SGLang.


## Running Inference

Launch the server:

```bash
python -m sglang.launch_server \
  --model-path deepseek-ai/DeepSeek-R1-Distill-Llama-8B \
  --device cuda \
  --dtype half \
  --attention-backend flashinfer \
  --mem-fraction-static 0.8 \
  --context-length 8192
```

The reduced precision and limited context length (`--dtype half --context-length 8192`) are needed because of the limited compute and memory of the NVIDIA Jetson kit. A detailed explanation can be found in Server Arguments.

After launching the server, refer to Chat Completions to test usability.
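As a quick smoke test, the sketch below builds a request against the server's OpenAI-compatible chat endpoint. The host and port are assumptions (30000 is SGLang's default); adjust the URL if you passed `--host` or `--port` when launching.

```python
import json
import urllib.request

# Assumes the server launched above is listening on SGLang's default
# port 30000; change the URL if you used --host / --port.
URL = "http://localhost:30000/v1/chat/completions"

payload = {
    "model": "deepseek-ai/DeepSeek-R1-Distill-Llama-8B",
    "messages": [{"role": "user", "content": "Say hello in one sentence."}],
    "max_tokens": 32,
}

req = urllib.request.Request(
    URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

# Uncomment once the server is up:
# with urllib.request.urlopen(req) as resp:
#     body = json.load(resp)
#     print(body["choices"][0]["message"]["content"])
```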


## Running quantization with TorchAO

TorchAO is recommended on NVIDIA Jetson Orin.

```bash
python -m sglang.launch_server \
    --model-path meta-llama/Meta-Llama-3.1-8B-Instruct \
    --device cuda \
    --dtype bfloat16 \
    --attention-backend flashinfer \
    --mem-fraction-static 0.8 \
    --context-length 8192 \
    --torchao-config int4wo-128
```

The flag `--torchao-config int4wo-128` enables TorchAO's int4 weight-only quantization with a group size of 128, trading a small amount of accuracy for a large reduction in weight memory.
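As a rough illustration of why this matters on a memory-constrained board, the back-of-envelope arithmetic below (illustrative numbers, not measured on device) compares fp16 weights with int4 weights plus one fp16 scale per group of 128:

```python
# Back-of-envelope weight-memory estimate for an 8B-parameter model.
params = 8e9

fp16_bytes = params * 2        # 2 bytes per weight in fp16/bf16
int4_bytes = params * 0.5      # 4 bits per weight, packed
group = 128
# int4wo-128 keeps roughly one fp16 scale per group of 128 weights
scale_bytes = (params / group) * 2

print(f"fp16 weights : {fp16_bytes / 1e9:.1f} GB")
print(f"int4 weights : {(int4_bytes + scale_bytes) / 1e9:.1f} GB")
```

The quantized weights fit in roughly a quarter of the memory, leaving more of the Jetson's unified memory for the KV cache.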


## Structured output with XGrammar

Please refer to the SGLang structured output documentation.
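As a sketch of what a schema-constrained request can look like, the payload below follows the OpenAI-style `response_format` convention; the exact field names SGLang expects are documented in the structured output guide, so treat this shape as an assumption to verify there.

```python
import json

# A JSON schema the generated output must conform to.
schema = {
    "type": "object",
    "properties": {
        "name": {"type": "string"},
        "age": {"type": "integer"},
    },
    "required": ["name", "age"],
}

# Hypothetical request body: OpenAI-style json_schema response format.
payload = {
    "model": "deepseek-ai/DeepSeek-R1-Distill-Llama-8B",
    "messages": [{"role": "user", "content": "Introduce yourself as JSON."}],
    "response_format": {
        "type": "json_schema",
        "json_schema": {"name": "person", "schema": schema},
    },
}

print(json.dumps(payload, indent=2))
```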


Thanks to the support from shahizat.
