Commit Graph

3981 Commits

Author / SHA1 / Message / Date
SijiaYang
cb9d91ea8a feat: support DeepSeek-R1-W4AFP8 model with ep-moe mode (#7762) 2025-07-07 14:47:21 -07:00
    Signed-off-by: yangsijia.614 <yangsijia.614@bytedance.com>
Yineng Zhang
6a6e0bb7fd docs: update README (#7821) 2025-07-07 02:47:04 -07:00
Haohui Mai
076313bd09 [AMD] Fail gracefully when AITER is unavailable gfx90a GPUs (#7187) 2025-07-07 09:09:58 +00:00
Ziming Huang
9abe1163ac fix duplicate args in schedule_batch (#7816) 2025-07-07 01:31:03 -07:00
Simo Lin
3646f6bb3e [misc] release new router version (#7798) 2025-07-06 22:54:17 -07:00
Simo Lin
35724aa182 [docs] update router readme (#7797) 2025-07-06 22:54:11 -07:00
Zhiqiang Xie
2fc824b84c Kernels for efficient KV cache IO (#7313) 2025-07-06 22:53:36 -07:00
Yuan Luo
253454de9b Integrate triton moe kernel (#7689) 2025-07-06 20:05:49 -07:00
    Co-authored-by: luoyuan.luo <luoyuan.luo@antgroup.com>
Simo Lin
ea3e7ffec7 [bugfix] Fix sgl-router get_server_info endpoint compatibility issue (#7813) 2025-07-06 19:52:57 -07:00
yuhsuan-t
8d4a01cbd7 Log the timestamps of each prefill/decode iteration (#6094) 2025-07-07 01:57:27 +00:00
    Co-authored-by: yuhsuan-t <12108766+yuhsaun-t@users.noreply.github.com>
Ke Bao
a3398d8478 Optimize moe align block size kernel (#7794) 2025-07-07 09:20:30 +08:00
Nan Jiang
ba69c153f6 [RL]: Fix error tagging in multi-stage wake up (#7812) 2025-07-06 16:51:29 -07:00
    Co-authored-by: hebiao064 <hebiaobuaa@gmail.com>
Stefan He
3589aa79b0 [RL] Fix illegal memory for _import_static_state (#7733) 2025-07-06 16:25:21 -07:00
    Co-authored-by: nanjiangwill <willjiang2018@gmail.com>
Hubert Lu
e00715eb66 [AMD] Add test_fused_moe.py and test_rope_rocm.py to AMD CI (#5246) 2025-07-06 01:47:16 -07:00
Lifu Huang
ea4bf12286 Fix division-by-zero bug in LoRA triton kernels. (#7785) 2025-07-06 00:45:29 -07:00
fzyzcjy
a291439a59 Support logprobs in two-batch overlap (#7709) 2025-07-05 19:05:32 -07:00
JieXin Liang
54411f6afa fix: disable dsv3_router_gemm in dsv3_nextn (#7793) 2025-07-05 19:01:01 -07:00
Yineng Zhang
625018d259 fix: free disk space (#7803) 2025-07-05 18:52:25 -07:00
Simo Lin
5732d904cc [misc] remove pdlb rust (#7796) 2025-07-05 17:44:51 -07:00
Yineng Zhang
ec5f9c6269 chore: bump v0.4.9 (#7802) 2025-07-05 17:40:29 -07:00
Yineng Zhang
62f5522ffe chore: upgrade sgl-kernel v0.2.4 (#7801) 2025-07-05 17:37:40 -07:00
Lifu Huang
01f9873048 Fix CI test OOM issue. (#7799) 2025-07-05 15:11:02 -07:00
Mick
199d621845 ci: fix port args (#7792) 2025-07-05 15:06:42 -07:00
Yineng Zhang
f200af0d8c chore: bump sgl-kernel v0.2.4 (#7800) 2025-07-05 15:03:31 -07:00
Lianmin Zheng
5589b75024 Add treemask mode to build_eagle_tree & release sgl-kernel 0.2.3 (#7756) 2025-07-05 12:17:05 -07:00
    Co-authored-by: Pranjal Shankhdhar <pranjal.ssh@gmail.com>
JieXin Liang
c04a8a820b [fix] fix misusing of is_cuda (#7790) 2025-07-05 04:02:14 -07:00
Cheng Wan
6c903611ca Fix incorrect spec_num_draft_tokens in draft_extend (#7757) 2025-07-05 02:18:16 -07:00
Yineng Zhang
77cfea689d chore: upgrade sgl-kernel v0.2.3 (#7786) 2025-07-05 01:55:55 -07:00
Cheng Wan
8fc910db03 DP Attention with Auto DeepEP Dispatch (#7222) 2025-07-05 01:54:24 -07:00
Yineng Zhang
75354d9ae9 fix: use nvidia-nccl-cu12 2.27.5 (#7787) 2025-07-05 01:28:21 -07:00
Yineng Zhang
4fece12be9 chore: bump sgl-kernel v0.2.3 (#7784) 2025-07-05 00:05:45 -07:00
Mick
c797322280 fix: fix apply_shuffle_mul_sum (#7444) 2025-07-04 23:23:30 -07:00
Gang Chen
ef8a29c429 Embedding parallel by attn_tp (#7623) 2025-07-04 23:21:56 -07:00
Qi Yuhang
8e9fb43d82 Optimize Hopper CUTLASS FP8 Blockwise Grouped GEMM Kernel in Small K Scenario (#7782) 2025-07-04 22:25:49 -07:00
Leng Yue
8364608930 add model: qwen2-audio (#7596) 2025-07-04 21:13:10 -07:00
SijiaYang
da3890e82a [1/n]: add cutlass W4A8 moe kernel for hopper architecture (#7772) 2025-07-04 20:50:12 -07:00
    Signed-off-by: yangsijia.614 <yangsijia.614@bytedance.com>
    Co-authored-by: yicwang <yichen.wang@bytedance.com>
Cheng Wan
cb432f1770 saving hidden_states.clone() (#7705) 2025-07-04 20:07:42 -07:00
Ximingwang-09
1964c325de [feat] Support EAGLE3 for Qwen (#7745) 2025-07-04 19:50:28 -07:00
    Co-authored-by: 纬杭 <ximing.wxm@antgroup.com>
    Co-authored-by: zyksir <zyksir@outlook.com>
Caproni
af5647748a [Fix] Alloc return type error (#7778) 2025-07-04 19:00:40 -07:00
    Signed-off-by: Capronir <839972205@qq.com>
Zilin Zhu
af46f299f9 [RL] add pause and continue generation for async rl training (#7419) 2025-07-04 18:49:49 -07:00
Zilin Zhu
16a6b1d83a [RL] Add --nccl-port to prevent port conflict (#7418) 2025-07-04 18:48:57 -07:00
Lianmin Zheng
14229ccf8f Move mem_fraction_static adjustment for multimodal models to server_args.py & Fix session control & Other cleanups (#7748) 2025-07-04 16:33:33 -07:00
Kay Yan
975a5ec69c [fix] update bench_speculative.py for compatibility (#7764) 2025-07-04 16:32:54 +08:00
    Signed-off-by: Kay Yan <kay.yan@daocloud.io>
Yuchen Cheng
1e3e3add3d fix(docs): fix the broken link in docs/references/production_metrics.md (#7741) 2025-07-03 23:46:07 -07:00
    Signed-off-by: rudeigerc <rudeigerc@gmail.com>
Yi Zhang
8c298031d5 refactor llama4 dp attention logic (#7729) 2025-07-03 22:48:11 -07:00
YanbingJiang
4de0395343 Add V2-lite model test (#7390) 2025-07-03 22:25:50 -07:00
    Co-authored-by: DiweiSun <105627594+DiweiSun@users.noreply.github.com>
Ke Bao
8b1942c6cc Remove type conversion and fix id map in topk (#7759) 2025-07-03 18:13:32 -07:00
Yi Zhang
489934be0a fuse renormal into moe topk softmax kernel python code (#7751) 2025-07-03 16:22:14 -07:00
    Co-authored-by: ispobock <ispobaoke@gmail.com>
    Co-authored-by: zhyncs <me@zhyncs.com>
Xinyuan Tong
43f93f632c fix CI: update native api ipynb (#7754) 2025-07-03 15:25:00 -07:00
    Signed-off-by: Xinyuan Tong <justinning0323@outlook.com>
Yineng Zhang
aca1101a13 chore: bump sgl-kernel 0.2.2 (#7755) 2025-07-03 12:49:10 -07:00