Commit Graph

2500 Commits

Author SHA1 Message Date
Shu Wang
ad4e58bf67 Support fp8 gemm for blackwell (#4558) 2025-03-20 12:40:28 -07:00
Ke Bao
bfb03c6182 Update doc for MTP and DP attention (#4622) 2025-03-20 11:31:48 -07:00
Yuhong Guo
b36ab493b3 Enable setting sglang logger from Env Variable SGLANG_LOGGING_CONFIG_PATH (#4592) 2025-03-20 02:10:32 -07:00
    Signed-off-by: Yuhong Guo <yuhong.gyh@antgroup.com>
JieXin Liang
9e93ef3f8e [fix] fix illegal mem access and clean up triton attention backend (#4571) 2025-03-20 02:01:52 -07:00
Chuyue Sun
fad86a6863 Support n in OpenAI API completions (#3446) 2025-03-20 13:46:46 +08:00
    Co-authored-by: Shan Yu <shanyu1@g.ucla.edu>
    Co-authored-by: Yineng Zhang <me@zhyncs.com>
    Co-authored-by: chuyue sun <chuyue@lmsys.us-northcentral1-a.compute.internal>
strgrb
df7014a8d2 avoid cudaStreamSynchronize in DeepSeekV2AttentionMLA (#4577) 2025-03-19 10:02:26 -07:00
    Co-authored-by: Zhang Kaihong <zhangkaihong.zkh@alibaba-inc.com>
JieXin Liang
4942074174 [fix] fix initialization of _ENABLE_TORCH_INFERENCE_MODE (#4549) 2025-03-19 09:57:59 -07:00
Hongbo Xu
ba52fd1868 Add clang-format to pre-commit config (#4583) 2025-03-19 09:50:19 -07:00
lukec
b6944f97a6 Support FlashMLA backend cuda graph (#4514) 2025-03-19 08:25:34 -07:00
    Co-authored-by: yinfan98 <1106310035@qq.com>
    Co-authored-by: Hongbosherlock <hongbosherlock@gmail.com>
    Co-authored-by: ispobock <ispobaoke@163.com>
Jinyan Chen
f44db16c8e [Feature] Integrate DeepEP into SGLang (#4232) 2025-03-19 08:16:31 -07:00
    Co-authored-by: Cheng Wan <cwan39@gatech.edu>
    Co-authored-by: Xuting Zhou <xutingz@nvidia.com>
strgrb
f9c53cbb42 Create col-major and tma-aligned x_scale for deep_gemm.gemm_fp8_fp8_bf16_nt (#4515) 2025-03-19 00:02:43 -07:00
    Co-authored-by: Zhang Kaihong <zhangkaihong.zkh@alibaba-inc.com>
Baizhou Zhang
90532b7627 [Fix] Fix raw_bs bug when using flashinfer mla and eagle (#4557) 2025-03-18 21:26:53 -07:00
JieXin Liang
c0e9a36c5f Optimize Triton decoding kernel for dynamic workload (#4553) 2025-03-18 21:25:38 -07:00
aoshen524
588865f0e0 [Feature] Support Tensor Parallelism and Weight Slicing for Lora (#4274) 2025-03-18 20:33:07 -07:00
    Co-authored-by: ShenAo1111 <1377693092@qq.com>
    Co-authored-by: Baizhou Zhang <sobereddiezhang@gmail.com>
Cheng Wan
3196999f63 Reduce computation and communication in DP attention (#4521) 2025-03-18 13:41:36 -07:00
James Liu
9e0186f352 [Feature] Support EAGLE 3 (#4247) 2025-03-18 07:35:23 -07:00
Wei Wu
8baf9a0c18 [Fix] Type annotation correction for UpdateWeightsFromTensorReqInput (#4532) 2025-03-18 00:52:47 -07:00
    Co-authored-by: Zhiqiang Xie <xiezhq@stanford.edu>
Yineng Zhang
c787298547 use sgl custom all reduce (#4441) 2025-03-18 00:46:41 -07:00
Ke Bao
45212ce18b Add deepseek v2 torch compile pr test (#4538) 2025-03-18 00:29:24 -07:00
Yineng Zhang
c16b33ccac cleanup deps 3/n (#4541) 2025-03-18 00:11:36 -07:00
Albert
2d0045125f Fix the incorrect args in benchmark_and_profiling.md (#4542) 2025-03-18 00:07:06 -07:00
    Signed-off-by: Tianyu Zhou <albert.zty@antgroup.com>
Xiaoyu Zhang
804d250a0d remove useless backend forward in rotary_embedding (#4500) 2025-03-17 23:54:00 -07:00
Xiaoyu Zhang
dd865befde [Hotfix] solve fp8 w8a8 ci test fail (#4531) 2025-03-17 23:17:04 -07:00
Mick
d373a48c98 fix: second_per_grid_ts should be used to get mrope position (#3682) 2025-03-17 18:12:38 -07:00
Mick
98be3bd306 refactor: rewrite bench-mmmu-sglang (#4458) 2025-03-17 18:11:47 -07:00
Zhiqiang Xie
a98290aea3 Unit test for Hierarchical Caching (#4486) 2025-03-17 17:45:00 -07:00
Xiaoyu Zhang
9b81f9bd34 sglang quant module remove vllm dependency (#4507) 2025-03-17 15:51:59 -07:00
Yineng Zhang
f81a27f65e upgrade sgl-kernel 0.0.5.post3 (#4522) 2025-03-17 14:49:56 -07:00
Yineng Zhang
988ab646ec bump v0.0.5.post3 (#4520) 2025-03-17 13:05:59 -07:00
Ke Bao
3ded4b215d Revert "feat: update grouped_topk to support softmax and sigmoid" (#4505) 2025-03-17 11:30:26 -07:00
Yinghai Lu
f4d7ab7a63 [sgl-router] improvement to avoid hang (#4482) 2025-03-17 10:37:50 -07:00
    Co-authored-by: Yineng Zhang <me@zhyncs.com>
    Co-authored-by: Byron Hsu <byronhsu1230@gmail.com>
Lianmin Zheng
c38ca4fc8e Update readme (#4517) 2025-03-17 08:22:42 -07:00
Lianmin Zheng
82dec1f70b Remove redundant type conversion (#4513) 2025-03-17 05:57:35 -07:00
yiakwy-xpu-ml-framework-team
5f9b2c62ff [ROCm] fix dtype (#4510) 2025-03-17 05:20:50 -07:00
Lianmin Zheng
5493c3343e Fix data parallel + tensor parallel (#4499) 2025-03-17 05:13:16 -07:00
HandH1998
f2ab37e500 [Doc] add doc for quantization w8a8_fp8 or w8a8_int8 (#4495) 2025-03-17 02:25:00 -07:00
Wei Wu
91ba98fe50 [Fix] Resolve GPU Memory Leak in update_weights_from_tensor (#4446) 2025-03-17 08:54:30 +00:00
Yinghai Lu
c614dbdf95 Nicer standalone engine interface (#4480) 2025-03-17 01:42:04 -07:00
Xihuai Wang
927ca935a7 Constraint Decoding: Tool call with text (#4067) 2025-03-17 01:06:46 -07:00
Stefan He
ef3c2dd08e Support Online Quantization for W8A8 (#4485) 2025-03-17 00:28:56 -07:00
Wenbo Yang
75b656488a Support serving DeepSeek-R1-Channel-INT8 with 32 L40S. (#4418) 2025-03-17 00:03:43 -07:00
Mick
0f52fb55ec config: Update fused moe config (#4493) 2025-03-16 23:51:58 -07:00
萝卜菜
d6d21640d3 [Feature] Support Deepseek-VL2 (#2798) 2025-03-16 23:07:59 -07:00
    Co-authored-by: Edenzzzz <wtan45@wisc.edu>
    Co-authored-by: Chayenne <zhaochen20@outlook.com>
    Co-authored-by: Yi Zhang <1109276519@qq.com>
JieXin Liang
0212d2e288 [Fix] use torch.inference_mode() instead of torch.no_grad() (#4372) 2025-03-16 22:54:16 -07:00
Byron Hsu
8cc300f536 Fix router test (#4483) 2025-03-16 22:49:47 -07:00
mlmz
452db50808 Constraint Decoding: Set xgrammar as the default grammar backend (#4386) 2025-03-16 18:53:43 -07:00
Rin Intachuen
d1112d8548 Add endpoint for file support, purely to speed up processing of input_embeds. (#2797) 2025-03-16 18:30:37 -07:00
woodx
48efec7b05 Feature: support code completion (#3612) 2025-03-16 18:26:19 -07:00
yiakwy-xpu-ml-framework-team
9b8333d992 [ROCm] enable moe topk softmax in amd (#4448) 2025-03-16 18:16:55 -07:00
Zhiqiang Xie
f5bbf6037d Fix: Complete int32 to int64 conversion (#4465) 2025-03-16 18:14:27 -07:00