Commit Graph

5246 Commits

Author SHA1 Message Date
chenge@xiaohongshu.com
1b1701f1f7 model: support dots.vlm1 model (#8778) 2025-09-12 17:38:38 +08:00
    Co-authored-by: weishi <bushou@xiaohongshu.com>
    Co-authored-by: Ezra-Yu <1105212286@qq.com>
    Co-authored-by: Jianfei Wang <905787410@qq.com>
    Co-authored-by: qianwu <wangjianfei@xiaohongshu.com>
ybyang
6d40308905 Revert add mainprocess's proctitle (#10351) 2025-09-12 16:48:30 +08:00
Yuan Luo
24dc2bee97 Fix Bailing MoE model bugs (#10362) 2025-09-12 00:36:02 -07:00
    Co-authored-by: luoyuan.luo <luoyuan.luo@antgroup.com>
    Co-authored-by: 羽癫 <yudian.zy@antgroup.com>
strgrb
fac07c9b08 Support LingV2 model (#10359) 2025-09-11 23:53:52 -07:00
    Co-authored-by: 羽癫 <yudian.zy@antgroup.com>
    Co-authored-by: guoyuhong <yuhong.gyh@antgroup.com>
Yineng Zhang
b3839a7f99 fix: resolve transfer_kv_all_layer_direct_lf_pf import error (#10360) 2025-09-11 23:53:23 -07:00
chenqianfzh
4aa39d72c4 fix the break in FlashInferFusedMoE (#10356) 2025-09-11 23:47:48 -07:00
    Co-authored-by: Ho-Ren (Jack) Chuang <horenchuang@bytedance.com>
huangtingwei
b4c2c421e9 support memory_pool_host page first direct layout (#10031) 2025-09-11 23:19:44 -07:00
    Co-authored-by: Zhiqiang Xie <xiezhq@stanford.edu>
Chang Su
53ca15529a Implement Standalone gRPC Server for SGLang Python Scheduler (#10283) 2025-09-11 20:57:17 -07:00
Keyang Ru
a23bdeaf04 [router] Basic OAI Response api (#10346) 2025-09-11 20:56:17 -07:00
Yi Zhang
27778010fc fix dual stream bug (#10352) 2025-09-11 20:53:42 -07:00
EduardDurech
46d8fb1c98 model: support Apertus (#9774) 2025-09-11 20:49:10 -07:00
Trevor Morris
c7e85f5378 fix: flashinfer_cutlass_moe: Use max of global expert scales instead of local for input scale (#10296) 2025-09-11 20:19:17 -07:00
Shu Wang
3df05f4d6a [NVIDIA] [3/N] Nvfp4 Masked Gemm: Add flashinfer grouped_gemm_nt_masked (#9199) 2025-09-11 20:18:43 -07:00
Keyang Ru
7b141f816c [router][ci] Add gpu utilization analyze with nvml (#10345) 2025-09-11 19:26:02 -07:00
Zaili Wang
7bc5fb0d78 [CPU][doc] add torch.compile param in example commands (#10349) 2025-09-11 19:22:46 -07:00
Lianmin Zheng
144ee5f37c [Auto Sync] Update server_args.py (20250912) (#10347) 2025-09-11 19:18:07 -07:00
    Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
    Co-authored-by: Kan Wu <wukanustc@gmail.com>
Yineng Zhang
b0d25e72c4 chore: bump v0.5.2 (#10221) 2025-09-11 16:09:20 -07:00
gongwei-130
a2424068ec add try catch for quant config hf download (#10340) 2025-09-11 15:00:21 -07:00
zk-lover
c5d2b01cea [LongCat] Optimize zero_experts_compute_triton by changing mask (#10303) 2025-09-11 14:56:25 -07:00
Minglei Zhu
46ccbed2cd update GLM nightly test threshold (#10331) 2025-09-11 14:54:58 -07:00
Hubert Lu
fe68c1486f Fix errors of hicache kernels in sgl-kernel for ROCm (#10339) 2025-09-11 14:54:34 -07:00
eigen
70c0c1f926 fix: trtllm-gen attention take zero-init workspace (#10330) 2025-09-11 14:35:23 -07:00
Yi Zhang
760b788a58 add qwen3-next doc (#10327) 2025-09-11 14:29:11 -07:00
Keyang Ru
1ee11df8ac [router][ci] add gpu process check and free port before start server (#10338) 2025-09-11 14:24:16 -07:00
Keyang Ru
dee197e11b [router] Add OpenAI backend support - core function (#10254) 2025-09-11 14:13:51 -07:00
Yi Zhang
ab795ae840 add h20 qwen3 next config (#10264) 2025-09-11 14:02:24 -07:00
    Co-authored-by: cao1zhg <114661107+cao1zhg@users.noreply.github.com>
Keyang Ru
480d1b8b20 [router] add benchmark for regular router and pd router (#10280) 2025-09-11 12:04:11 -07:00
Stefan He
6c18ab46a2 [Qwen3-Next] switch to triton and cache conv states to accelerate MTP from 300 tok/s to 341 tok/s (#10335) 2025-09-11 11:59:48 -07:00
    Co-authored-by: Binyao Jiang <byjiang1996@gmail.com>
cao1zhg
4a0e0be2a2 [bugfix] fix norm type error in qwen3_next model (#10322) 2025-09-12 00:05:59 +08:00
    Co-authored-by: caoyizhong.cyz <caoyizhong.cyz@alibaba-inc.com>
    Co-authored-by: Yi Zhang <1109276519@qq.com>
Lianmin Zheng
64f296f8e6 [Minor] Improve the style of server args (#10328) 2025-09-11 07:06:29 -07:00
Lianmin Zheng
956d805dde [Auto Sync] Update parallel_state.py (20250911) (#10326) 2025-09-11 06:36:29 -07:00
    Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
    Co-authored-by: Byron Hsu <byronhsu1230@gmail.com>
Yi Zhang
30c6e1f569 Qwen3-Next support (#10233) 2025-09-11 04:11:49 -07:00
    Co-authored-by: cao1zhg <114661107+cao1zhg@users.noreply.github.com>
    Co-authored-by: ispobock <ispobaoke@gmail.com>
    Co-authored-by: Binyao Jiang <byjiang1996@gmail.com>
    Co-authored-by: hebiao064 <hebiaobuaa@gmail.com>
    Co-authored-by: Lifu Huang <lifu.hlf@gmail.com>
    Co-authored-by: qingquansong <ustcsqq@gmail.com>
    Co-authored-by: Yaoyao Ding <dingyaoyao.cs@gmail.com>
    Co-authored-by: Ke Bao <ISPObaoke@163.com>
    Co-authored-by: Minglei Zhu <mingleizhu1122@gmail.com>
Yineng Zhang
bfe01a5eef chore: upgrade v0.3.9.post2 sgl-kernel (#10297) 2025-09-11 04:10:29 -07:00
Hank Han
3dd6420a4d [CI] add pyproject.toml to deepseek w4a8 ci (#10314) 2025-09-11 02:10:50 -07:00
Yineng Zhang
532f998b0f chore: bump sgl-kernel 0.3.9.post2 (#10311) 2025-09-11 01:29:50 -07:00
Yineng Zhang
de15d1405a Revert "Fix flashinfer version in sgl-kernel (#10135)" (#10310) 2025-09-11 01:27:58 -07:00
Xiaoyu Zhang
37367da639 [fix CI] Fix logical condition in fused MoE layer for compressed tensor quantization (#10299) 2025-09-10 23:54:09 -07:00
Zaili Wang
ef959d7b85 [CPU] fix OOM when mem-fraction is not set (#9090) 2025-09-10 23:52:22 -07:00
BourneSun0527
4aa1e69bc7 [chore]Add sgl-router to npu images (#10229) 2025-09-10 23:51:16 -07:00
Yi Zhang
dc491b399d add flash linear attention triton kernel (#10239) 2025-09-10 21:47:20 -07:00
Even Zhou
5b64f006ec [Feature] Support DeepEP normal & Redundant Experts on NPU (#9881) 2025-09-10 20:35:26 -07:00
Yineng Zhang
5b7448de77 chore: bump sgl-kernel 0.3.9.post1 (#10294) 2025-09-10 18:26:34 -07:00
Yineng Zhang
6d55f60e77 Revert "[1/2] Optimizations and refactors about quant kernel (#9534)" (#10292) 2025-09-10 18:24:23 -07:00
Lianmin Zheng
033b75f559 [Auto Sync] Update serving_base.py, serving_chat.py, servin... (20250910) (#10282) 2025-09-10 16:58:59 -07:00
    Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
    Co-authored-by: cctry <shiyang@x.ai>
Xinyuan Tong
f3b5db6ee8 Feat: support disable tool parser (#10184) 2025-09-10 14:03:55 -07:00
Rain Jiang
2286e85e77 pass a_scale from fp8 quant result instead of hard code to 1.0f (#10241) 2025-09-10 12:56:05 -07:00
    Co-authored-by: Yichen Wang <yichen.wang@bytedance.com>
    Co-authored-by: Jinwu Guo <641876696@qq.com>
Hubert Lu
91b3555d2d Add tests to AMD CI for MI35x (#9662) 2025-09-10 12:50:05 -07:00
    Co-authored-by: Sai Enduri <saimanas.enduri@amd.com>
Yi Zhang
9e2f7252db add dual stream for qwen2_moe (#10252) 2025-09-10 12:49:43 -07:00
Pavani Majety
21176b0093 [Bugfix] Fix Weightloading for the original nvidia/Deepseek-R1-FP4 checkpoint (#9940) 2025-09-10 12:00:23 -07:00
    Signed-off-by: Pavani Majety <pmajety@nvidia.com>
    Co-authored-by: Yineng Zhang <me@zhyncs.com>
    Co-authored-by: fzyzcjy <5236035+fzyzcjy@users.noreply.github.com>
Lifu Huang
941002945b [1/2] Refactor LoRA to support backend-specific batch preprocessing. (#10251) 2025-09-10 09:58:37 -07:00
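A flat "author, short SHA, subject, date" listing like the one above can be regenerated from any local clone of the repository with git's pretty-format placeholders. The sketch below is hedged: it builds a throwaway demo repository with a single hypothetical commit (the "demo" author and "#0000" subject are made up for illustration) purely so the command is self-contained and runnable anywhere.

```shell
#!/bin/sh
# Sketch: reproduce a one-line-per-commit listing in the same shape as the
# table above. Placeholders: %an = author name, %h = abbreviated SHA,
# %s = commit subject, %ad = author date (formatted via --date=format:...).
set -eu

# Throwaway repo with one empty commit, only to keep the example self-contained.
repo=$(mktemp -d)
git -C "$repo" init -q
git -C "$repo" -c user.name="demo" -c user.email="demo@example.com" \
    commit -q --allow-empty -m "model: support demo model (#0000)"

# The actual listing command; run the same `git log` line in a real clone
# to get "author  SHA  subject  date" rows like the ones in this log.
log=$(git -C "$repo" log \
    --date=format:'%Y-%m-%d %H:%M:%S %z' \
    --pretty=format:'%an %h %s %ad')
printf '%s\n' "$log"

rm -rf "$repo"
```

Run against a real checkout, the same `--pretty=format:` line yields rows in the Author/SHA1/Message/Date order used by this page; `%ad` honors `--date=format:` for the `+08:00`/`-07:00` style offsets seen above.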