# SGL Kernel

Kernel Library for SGLang
## Installation

For CUDA 11.8:

```bash
pip3 install sgl-kernel -i https://docs.sglang.ai/whl/cu118
```

For CUDA 12.1 or CUDA 12.4:

```bash
pip3 install sgl-kernel
```
## Developer Guide

### Development Environment Setup

Use Docker to set up the development environment. See the Docker setup guide.

Create and enter the development container:

```bash
docker run -itd --shm-size 32g --gpus all -v $HOME/.cache:/root/.cache --ipc=host --name sglang_zhyncs lmsysorg/sglang:dev /bin/zsh
docker exec -it sglang_zhyncs /bin/zsh
```
### Project Structure

#### Dependencies

Third-party libraries:
### Kernel Development

Steps to add a new kernel:

- Implement the kernel in `csrc`
- Expose the interface in `include/sgl_kernel_ops.h`
- Create the torch extension in `csrc/torch_extension.cc`
- Update `CMakeLists.txt` to include the new CUDA source
- Expose the Python interface in `python`
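As an illustration, the steps above touch files roughly like the following. This is a hedged sketch: the kernel name `add_one`, its signature, and the launch configuration are hypothetical examples, not part of sgl-kernel; only the file layout and the `m.def` registration style come from this guide.

```cpp
// csrc/add_one_kernel.cu -- hypothetical kernel implementation (step 1)
#include <torch/all.h>  // use <torch/all.h>, not <torch/extension.h> (see Development Tips)

__global__ void add_one_kernel(float* data, int n) {
  int i = blockIdx.x * blockDim.x + threadIdx.x;
  if (i < n) data[i] += 1.0f;
}

void add_one(torch::Tensor t) {
  int n = t.numel();
  add_one_kernel<<<(n + 255) / 256, 256>>>(t.data_ptr<float>(), n);
}

// include/sgl_kernel_ops.h -- expose the C++ interface (step 2)
void add_one(torch::Tensor t);

// csrc/torch_extension.cc -- register the torch extension (step 3)
m.def("add_one", add_one);

// CMakeLists.txt -- add the new CUDA source to the build (step 4), e.g.
//   csrc/add_one_kernel.cu
```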
### Development Tips

- When implementing kernels in `csrc`, only define pure CUDA files and C++ interfaces. If you need to use `torch::Tensor`, use `<torch/all.h>` instead of `<torch/extension.h>`. Using `<torch/extension.h>` will cause compilation errors when using SABI.
- When creating torch extensions, simply add the function definition with `m.def`:

  ```cpp
  m.def("register_graph_buffers", register_graph_buffers);
  ```

- When exposing Python interfaces, avoid using kwargs in C++ interface kernels.

  Avoid this:

  ```python
  torch.ops.sgl_kernel.apply_rope_pos_ids_cos_sin_cache.default(
      q=query.view(query.shape[0], -1, head_size),
      k=key.view(key.shape[0], -1, head_size),
      q_rope=query.view(query.shape[0], -1, head_size),
      k_rope=key.view(key.shape[0], -1, head_size),
      cos_sin_cache=cos_sin_cache,
      pos_ids=positions.long(),
      interleave=(not is_neox),
      cuda_stream=get_cuda_stream(),
  )
  ```

  Use this instead:

  ```python
  torch.ops.sgl_kernel.apply_rope_pos_ids_cos_sin_cache.default(
      query.view(query.shape[0], -1, head_size),
      key.view(key.shape[0], -1, head_size),
      query.view(query.shape[0], -1, head_size),
      key.view(key.shape[0], -1, head_size),
      cos_sin_cache,
      positions.long(),
      (not is_neox),
      get_cuda_stream(),
  )
  ```
### Build & Install

Development build:

```bash
make build
```

Note: sgl-kernel is rapidly evolving. If you experience a compilation failure, try using `make rebuild`.
### Testing & Benchmarking

- Add pytest tests in `tests/`
- Add benchmarks using Triton's benchmarking utilities in `benchmark/`
- Run the test suite
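A minimal sketch of the pytest pattern: compare the optimized implementation against a straightforward reference. The `reference_rmsnorm` and `fast_rmsnorm` functions below are pure-Python stand-ins invented for illustration; a real test in `tests/` would call the sgl-kernel op and compare it against a PyTorch reference with an appropriate tolerance.

```python
import math


def reference_rmsnorm(x, eps=1e-6):
    """Naive reference: divide each element by the RMS of the vector."""
    rms = math.sqrt(sum(v * v for v in x) / len(x) + eps)
    return [v / rms for v in x]


def fast_rmsnorm(x, eps=1e-6):
    """Stand-in for the optimized kernel: multiply by the inverse RMS."""
    inv_rms = (sum(v * v for v in x) / len(x) + eps) ** -0.5
    return [v * inv_rms for v in x]


def test_fast_matches_reference():
    # pytest discovers test_* functions; plain asserts are enough
    for x in ([1.0, 2.0, 3.0], [0.5] * 8, [-1.0, 4.0]):
        out = fast_rmsnorm(x)
        ref = reference_rmsnorm(x)
        assert all(abs(a - b) < 1e-9 for a, b in zip(out, ref))


test_fast_matches_reference()
```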
### Release new version

Update the version in `pyproject.toml` and `version.py`.