From ce62dc73f06c6dcc37631dc1e94cc74d434e0a6d Mon Sep 17 00:00:00 2001
From: Lianmin Zheng
Date: Tue, 9 Jul 2024 01:32:46 -0700
Subject: [PATCH] Update model_support.md

---
 docs/model_support.md | 13 +++++--------
 1 file changed, 5 insertions(+), 8 deletions(-)

diff --git a/docs/model_support.md b/docs/model_support.md
index a77a3c288..08e942938 100644
--- a/docs/model_support.md
+++ b/docs/model_support.md
@@ -1,18 +1,15 @@
 ## How to Support a New Model
 
-To support a new model in SGLang, you only need to add a single file under [SGLang Models Directory](https://github.com/sgl-project/sglang/tree/main/python/sglang/srt/models).
+To support a new model in SGLang, you only need to add a single file under [SGLang Models Directory](https://github.com/sgl-project/sglang/tree/main/python/sglang/srt/models). You can learn from existing model implementations and create new files for the new models. Most models are based on the transformer architecture, making them very similar.
 
-You can learn from existing model implementations and create new files for the new models. Most models are based on the transformer architecture, making them very similar.
+Another valuable resource is the [vLLM Models Directory](https://github.com/vllm-project/vllm/tree/main/vllm/model_executor/models). vLLM has extensive coverage of models, and SGLang has reused vLLM for most parts of the model implementations. This similarity makes it easy to port many models from vLLM to SGLang.
 
-Another valuable resource is the vLLM model implementations. vLLM has extensive coverage of models, and SGLang has reused vLLM for most parts of the model implementations. This similarity makes it easy to port many models from vLLM to SGLang.
-
-1. Compare these two files [SGLang LLaMA Implementation](https://github.com/sgl-project/sglang/blob/main/python/sglang/srt/models/llama2.py) and [vLLM LLaMA Implementation](https://github.com/vllm-project/vllm/blob/main/vllm/model_executor/models/llama.py). This comparison will help you understand how to convert a model implementation from vLLM to SGLang. The major difference is the replacement of PagedAttention with RadixAttention. The other parts are almost identical. Specifically,
- - Replace `Attention` with `RadixAttention`.
+To port a model from vLLM to SGLang, you can compare these two files [SGLang LLaMA Implementation](https://github.com/sgl-project/sglang/blob/main/python/sglang/srt/models/llama2.py) and [vLLM LLaMA Implementation](https://github.com/vllm-project/vllm/blob/main/vllm/model_executor/models/llama.py). This comparison will help you understand how to convert a model implementation from vLLM to SGLang. The major difference is the replacement of PagedAttention with RadixAttention. The other parts are almost identical. Specifically,
+ - Replace vllm's `Attention` with `RadixAttention`.
  - Replace vllm's `LogitsProcessor` with SGLang's `LogitsProcessor`.
  - Remove `Sample`.
  - Change `forward()` functions, and add `input_metadata`.
  - Add `EntryClass` at the end.
- - Test correctness by comparing the final logits and outputs of two following commands:
+ - Test correctness by comparing the final logits and outputs of the two following commands:
    - `python3 playground/reference_hf.py --model [new model]`
    - `python3 -m sglang.bench_latency --model [new model] --correct --output-len 16`
-2. Convert models from vLLM to SGLang by visiting the [vLLM Models Directory](https://github.com/vllm-project/vllm/tree/main/vllm/model_executor/models).
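For context on the doc change above, the bullet steps can be sketched as the skeleton of a ported model file. This is a minimal, illustrative sketch only: `RadixAttention` and `MyNewModel` here are stand-in stubs, not the real SGLang classes, and the real `forward()` signatures carry more arguments.

```python
# Illustrative skeleton of a ported SGLang model file.
# NOTE: RadixAttention below is a stub standing in for SGLang's real
# RadixAttention layer; only the overall file shape mirrors the steps above.

class RadixAttention:
    """Stub for the attention layer that replaces vllm's Attention."""

    def forward(self, q, k, v, input_metadata):
        # The real layer performs attention against a radix-tree KV cache;
        # this stub just echoes the query to keep the sketch runnable.
        return q


class MyNewModel:
    """Hypothetical new model ported from a vLLM implementation."""

    def __init__(self):
        # Step: replace vllm's Attention with RadixAttention.
        self.attn = RadixAttention()

    def forward(self, input_ids, positions, input_metadata):
        # Step: change forward() functions and thread input_metadata through.
        return self.attn.forward(input_ids, None, None, input_metadata)


# Step: add EntryClass at the end so SGLang's loader can find the model.
EntryClass = MyNewModel
```

After a real port, correctness is checked by comparing the final logits of the two commands listed in the doc (the HF reference script vs. `sglang.bench_latency --correct`).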