Commit Graph

1781 Commits

Author SHA1 Message Date
Xiaoyu Zhang
924ca7c92c Add DeepSeek V3/R1 shared experts fusion (#4918) 2025-04-04 01:59:29 -07:00
fzyzcjy
6ff9c6a5e7 Cleanup unused resources after DeepEP operation (#4996) 2025-04-04 00:36:04 -07:00
fzyzcjy
77e929a1a2 Support async DeepEP by splitting into two stages (#4995) 2025-04-04 00:32:27 -07:00
fzyzcjy
febe21ce03 Small refactor DeepEPDispatcher into subclasses (#4994) 2025-04-04 00:24:18 -07:00
JieXin Liang
a995a773a0 [fix] remove cuda_device_count_stateless (#5060) 2025-04-04 00:18:26 -07:00
Tommy Yang
31035dda44 Add H20 fused MoE kernel tuning configs for DeepSeek V3/R1 (#5057) 2025-04-03 22:14:28 -07:00
AniZpZ
d95269f9b3 [2/3] fix dsv3 awq issue (#4625) 2025-04-03 17:36:39 -07:00
Co-authored-by: 晟海 <huangtingwei.htw@antgroup.com>
Co-authored-by: laixinn <xielx@shanghaitech.edu.cn>
Yineng Zhang
e53bf190bc upgrade sgl-kernel v0.0.7 (#5049) 2025-04-03 17:07:54 -07:00
Yineng Zhang
3289c1207d Update the retry count (#5051) 2025-04-03 17:07:38 -07:00
Ravi Theja
69df9761dd Add LlavaLlamaForCausaLM in MultiModal Processors (#5039) 2025-04-03 15:41:12 -07:00
Co-authored-by: Ravi Theja Desetty <ravitheja@Ravis-MacBook-Pro.local>
Lianmin Zheng
74885a848b Revert "Replace enable_flashinfer_mla argument with attention_backend" (#5048) 2025-04-03 13:30:56 -07:00
fzyzcjy
8e10fec9a8 Small refactor DeepEPMode to clean up code a bit (#4992) 2025-04-03 02:56:44 -07:00
Baizhou Zhang
e8999b13b7 Replace enable_flashinfer_mla argument with attention_backend (#5005) 2025-04-03 02:53:58 -07:00
saltyfish66
e41549c3d6 fix: fix illegal cuda memory access at fused_moe_kernel (#4727) 2025-04-03 00:07:32 -07:00
Co-authored-by: yuethe <yuethe@tencent.com>
Kaiyu Yang
31da75abed Update tokenizer_manager.py (#5008) 2025-04-02 13:56:19 -07:00
Qingquan Song
e983e43248 Add Eagle Speculative Decoding to FA3 Backend (#4951) 2025-04-02 13:09:02 -07:00
Co-authored-by: hebiao064 <hebiaobuaa@gmail.com>
Co-authored-by: Baizhou Zhang <sobereddiezhang@gmail.com>
Co-authored-by: zcnrex <zcnrex@gmail.com>
Xiaoyu Zhang
e9c6ce461d sgl scaled_fp8_quant support output padding (#4861) 2025-04-02 23:53:57 +08:00
Zhiqiang Xie
3fadc64793 bug fix for hicache host eviction (#4989) 2025-04-02 00:33:50 -07:00
Zhiqiang Xie
e119f04215 Large page size aligned hierarchical caching (#4581) 2025-04-01 22:38:15 -07:00
XinyuanTong
9eb49e878b [VLM RLHF] Take Image input for verl vlm rollout (#4915) 2025-04-01 20:03:17 -07:00
Signed-off-by: Xinyuan Tong <justinning0323@outlook.com>
Co-authored-by: GeLee <leege233@gmail.com>
Zhiqiang Xie
12047f5e94 Prevent memory leak of retract_decode when page_size > 1 (#4977) 2025-04-01 15:30:45 -07:00
Yineng Zhang
fda6bb78da update bench_serving (#4958) 2025-04-01 15:10:56 -07:00
Jinyan Chen
23c764b18a [Feature] Support DeepEP Low Latency (#4767) 2025-04-01 09:23:25 -07:00
Co-authored-by: sleepcoo <sleepcoo@gmail.com>
Co-authored-by: laixinn <xielx@shanghaitech.edu.cn>
Co-authored-by: ch-wan <cwan39@gatech.edu>
Yuhong Guo
87fafa0105 Revert PR 4764 & 4813 related to R1 RoPE (#4959) 2025-03-31 20:56:58 -07:00
Yineng Zhang
1c63e79756 use fa3 in sgl-kernel (#4954) 2025-03-31 16:14:49 -07:00
Mick
5cb552b1d4 refactor: multimodal data (#4754) 2025-03-31 09:57:51 -07:00
JieXin Liang
51ac297ace [feat] interface for platforms abstraction (#4928) 2025-03-31 00:04:21 -07:00
Zhiqiang Xie
a169b9f813 Fix oom error for large page size (#4913) 2025-03-30 21:34:21 -07:00
Co-authored-by: Lianmin Zheng <lianminzheng@gmail.com>
Baizhou Zhang
4a63bc32b7 [Fix] Add torch compile for torch.clamp back (#4936) 2025-03-30 20:46:07 -07:00
fzyzcjy
a303325fdb Fix DeepSeek bug causing 2.2% MMLU drop when TP!=DP (#4883) 2025-03-30 20:10:21 -07:00
Co-authored-by: ch-wan <cwan39@gatech.edu>
Baizhou Zhang
e62d60fe6d [Fix] avoid stream sync and torch compile in prefill for fa3 backend (#4932) 2025-03-30 13:53:44 -07:00
SEPLOS
032f8faaab Fix sglang frontend's incorrect dependency on torch (#4931) 2025-03-30 13:00:24 -07:00
Lianmin Zheng
9adf178cc2 Fix 2-gpu CI test and suppress some warnings (#4930) 2025-03-30 12:51:44 -07:00
Lianmin Zheng
4ede6770cd Fix retract for page size > 1 (#4914) 2025-03-30 02:57:15 -07:00
Lianmin Zheng
b26bc86b36 Support page size > 1 + eagle (#4908) 2025-03-30 00:46:23 -07:00
fzyzcjy
8690c40bb0 Improve stack trace of retry errors (#4845) 2025-03-29 08:21:31 -07:00
fzyzcjy
b1cfb4e972 Fix BadRequestError wrong arguments and remove openai dependency (#4882) 2025-03-29 08:16:21 -07:00
Yineng Zhang
19e96e5923 bump v0.4.4.post3 (#4878) 2025-03-28 23:21:24 -07:00
Yineng Zhang
d8a136a113 upgrade sgl-kernel 0.0.5.post4 (#4873) 2025-03-28 19:48:56 -07:00
Baizhou Zhang
20c90be23d [Feature] Support FA3 backend for MLA (#4831) 2025-03-28 18:30:14 -07:00
Qingquan Song
044c315970 Make torch compile configurable for biased_grouped_topk (#4749) 2025-03-28 10:57:52 -07:00
Fr4nk1in
c483377ed7 Fix wrong variable name when stopping memory profile (#4772) 2025-03-28 10:35:02 -07:00
Lianmin Zheng
74e0ac1dbd Clean up import vllm in quantization/__init__.py (#4834) 2025-03-28 10:34:10 -07:00
chaobo jia
ef9a378a20 [Feature] add multi-rank support for Lora (#4492) 2025-03-28 09:38:44 -07:00
Co-authored-by: rudy152 <czh1137892874@gmail.com>
Qingquan Song
6ffb6bd47a Fix fa3 cuda graph page_size > 1 precision and page_size=1 speed (#4855) 2025-03-28 01:35:59 -07:00
Lianmin Zheng
47e6628aae Fix CI tests (#4853) 2025-03-28 00:28:35 -07:00
fzyzcjy
8c04f0f2e1 Support with_stack and record_shapes in profiler (#4740) 2025-03-27 23:01:42 -07:00
Co-authored-by: Lianmin Zheng <lianminzheng@gmail.com>
fzyzcjy
265e756494 Super tiny remove unused code (#4750) 2025-03-27 22:32:14 -07:00
fzyzcjy
d3f71f5e19 Fix torch.cuda.MemPool() internal assertion failure (#4687) 2025-03-27 22:29:36 -07:00
Co-authored-by: Lianmin Zheng <lianminzheng@gmail.com>
Kebe
e0166f8ab4 Remove empty tool function name (#4704) 2025-03-27 22:23:30 -07:00
Signed-off-by: Kebe <mail@kebe7jun.com>