Author | Commit | Subject | Date
Sundara Raman Ramachandran | 94d0f656fb | [Performance] Dynamic Batch Tokenizer (#9382) | 2025-09-14 01:56:04 +08:00
Binyao Jiang | 9752861002 | [Fix] Support qwen3-next MTP+DP (#10392) | 2025-09-13 17:45:04 +08:00
Yi Zhang | 297d374510 | support qwen3_next blackwell (#10403) | 2025-09-13 17:18:26 +08:00
Binyao Jiang | 31e9d3a5aa | [Fix] Init mamba related memory pools with torch.zeros (#10400) | 2025-09-13 14:16:48 +08:00
Xinyuan Tong | 6f4676ef85 | fix: tool parse in large streaming chunk beginning with normal content (#10397) | 2025-09-12 22:29:35 -07:00
    Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
narutolhy | 99757cc3e6 | fix probs name which without temp scaling name (#9984) | 2025-09-13 12:19:57 +08:00
Lianmin Zheng | cdddab056c | [Auto Sync] Update xgrammar_backend.py (20250913) (#10395) | 2025-09-12 17:46:56 -07:00
    Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Teng Ma | 49f169d53e | [HiCache] doc: update deployment in readme (#10332) | 2025-09-12 16:35:37 -07:00
    Signed-off-by: Teng Ma <sima.mt@alibaba-inc.com>
Teng Ma | 7fce2fd91a | [HiCache] fix mooncake config in different tp size (#10377) | 2025-09-12 16:34:23 -07:00
Even Zhou | 16cd550c85 | Support Qwen3-Next on Ascend NPU (#10379) | 2025-09-12 16:31:37 -07:00
Muqi Li | d5e2a37414 | Benchmark: Support API_KEY without 'bearer' (#10380) | 2025-09-12 16:29:04 -07:00
Mohammad Miadh Angkad | 321fecab74 | Add sentencepiece to project dependencies (#10386) | 2025-09-12 16:02:54 -07:00
kk | 78b7465cad | Fix GPU fault issue when run dsv3 with dp mode and enable torch-compile (#10361) | 2025-09-12 15:05:51 -07:00
    Co-authored-by: wunhuang <wunhuang@amd.com>
Lianmin Zheng | 2269cf1e2f | [Auto Sync] Update base_grammar_backend.py, llguidance_back... (20250911) (#10333) | 2025-09-12 12:55:55 -07:00
    Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Yi Zhang | 151e287d1a | fix: add fast path for function call (#9023) | 2025-09-12 10:28:54 -07:00
    Co-authored-by: tazjin <mail@tazj.in>
fzyzcjy | efedbe6ca9 | Fix global input scale incompatible with CuTe DSL moe (#10370) | 2025-09-12 03:22:49 -07:00
Shu Wang | 36acd2ff16 | Fix chunked prefix cache for nvfp4 (#10180) | 2025-09-12 03:20:30 -07:00
    Co-authored-by: Elfie Guo <elfieg@nvidia.com>
amysaq2023 | 30d20ce84f | Support loading weights from remote instance (#8215) | 2025-09-12 17:40:22 +08:00
    Signed-off-by: Anqi Shen <amy.saq@antgroup.com>
    Co-authored-by: Chayenne <74843776+zhaochenyang20@users.noreply.github.com>
chenge@xiaohongshu.com | 1b1701f1f7 | model: support dots.vlm1 model (#8778) | 2025-09-12 17:38:38 +08:00
    Co-authored-by: weishi <bushou@xiaohongshu.com>
    Co-authored-by: Ezra-Yu <1105212286@qq.com>
    Co-authored-by: Jianfei Wang <905787410@qq.com>
    Co-authored-by: qianwu <wangjianfei@xiaohongshu.com>
ybyang | 6d40308905 | Revert add mainprocess's proctitle (#10351) | 2025-09-12 16:48:30 +08:00
Yuan Luo | 24dc2bee97 | Fix Bailing MoE model bugs (#10362) | 2025-09-12 00:36:02 -07:00
    Co-authored-by: luoyuan.luo <luoyuan.luo@antgroup.com>
    Co-authored-by: 羽癫 <yudian.zy@antgroup.com>
strgrb | fac07c9b08 | Support LingV2 model (#10359) | 2025-09-11 23:53:52 -07:00
    Co-authored-by: 羽癫 <yudian.zy@antgroup.com>
    Co-authored-by: guoyuhong <yuhong.gyh@antgroup.com>
chenqianfzh | 4aa39d72c4 | fix the break in FlashInferFusedMoE (#10356) | 2025-09-11 23:47:48 -07:00
    Co-authored-by: Ho-Ren (Jack) Chuang <horenchuang@bytedance.com>
huangtingwei | b4c2c421e9 | support memory_pool_host page first direct layout (#10031) | 2025-09-11 23:19:44 -07:00
    Co-authored-by: Zhiqiang Xie <xiezhq@stanford.edu>
Chang Su | 53ca15529a | Implement Standalone gRPC Server for SGLang Python Scheduler (#10283) | 2025-09-11 20:57:17 -07:00
Yi Zhang | 27778010fc | fix dual stream bug (#10352) | 2025-09-11 20:53:42 -07:00
EduardDurech | 46d8fb1c98 | model: support Apertus (#9774) | 2025-09-11 20:49:10 -07:00
Trevor Morris | c7e85f5378 | fix: flashinfer_cutlass_moe: Use max of global expert scales instead of local for input scale (#10296) | 2025-09-11 20:19:17 -07:00
Shu Wang | 3df05f4d6a | [NVIDIA] [3/N] Nvfp4 Masked Gemm: Add flashinfer grouped_gemm_nt_masked (#9199) | 2025-09-11 20:18:43 -07:00
Lianmin Zheng | 144ee5f37c | [Auto Sync] Update server_args.py (20250912) (#10347) | 2025-09-11 19:18:07 -07:00
    Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
    Co-authored-by: Kan Wu <wukanustc@gmail.com>
Yineng Zhang | b0d25e72c4 | chore: bump v0.5.2 (#10221) | 2025-09-11 16:09:20 -07:00
gongwei-130 | a2424068ec | add try catch for quant config hf download (#10340) | 2025-09-11 15:00:21 -07:00
zk-lover | c5d2b01cea | [LongCat] Optimize zero_experts_compute_triton by changing mask (#10303) | 2025-09-11 14:56:25 -07:00
eigen | 70c0c1f926 | fix: trtllm-gen attention take zero-init workspace (#10330) | 2025-09-11 14:35:23 -07:00
Yi Zhang | ab795ae840 | add h20 qwen3 next config (#10264) | 2025-09-11 14:02:24 -07:00
    Co-authored-by: cao1zhg <114661107+cao1zhg@users.noreply.github.com>
Stefan He | 6c18ab46a2 | [Qwen3-Next] switch to triton and cache conv states to accelerate MTP from 300 tok/s to 341 tok/s (#10335) | 2025-09-11 11:59:48 -07:00
    Co-authored-by: Binyao Jiang <byjiang1996@gmail.com>
cao1zhg | 4a0e0be2a2 | [bugfix] fix norm type error in qwen3_next model (#10322) | 2025-09-12 00:05:59 +08:00
    Co-authored-by: caoyizhong.cyz <caoyizhong.cyz@alibaba-inc.com>
    Co-authored-by: Yi Zhang <1109276519@qq.com>
Lianmin Zheng | 64f296f8e6 | [Minor] Improve the style of server args (#10328) | 2025-09-11 07:06:29 -07:00
Lianmin Zheng | 956d805dde | [Auto Sync] Update parallel_state.py (20250911) (#10326) | 2025-09-11 06:36:29 -07:00
    Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
    Co-authored-by: Byron Hsu <byronhsu1230@gmail.com>
Yi Zhang | 30c6e1f569 | Qwen3-Next support (#10233) | 2025-09-11 04:11:49 -07:00
    Co-authored-by: cao1zhg <114661107+cao1zhg@users.noreply.github.com>
    Co-authored-by: ispobock <ispobaoke@gmail.com>
    Co-authored-by: Binyao Jiang <byjiang1996@gmail.com>
    Co-authored-by: hebiao064 <hebiaobuaa@gmail.com>
    Co-authored-by: Lifu Huang <lifu.hlf@gmail.com>
    Co-authored-by: qingquansong <ustcsqq@gmail.com>
    Co-authored-by: Yaoyao Ding <dingyaoyao.cs@gmail.com>
    Co-authored-by: Ke Bao <ISPObaoke@163.com>
    Co-authored-by: Minglei Zhu <mingleizhu1122@gmail.com>
Yineng Zhang | bfe01a5eef | chore: upgrade v0.3.9.post2 sgl-kernel (#10297) | 2025-09-11 04:10:29 -07:00
Yineng Zhang | de15d1405a | Revert "Fix flashinfer version in sgl-kernel (#10135)" (#10310) | 2025-09-11 01:27:58 -07:00
Xiaoyu Zhang | 37367da639 | [fix CI] Fix logical condition in fused MoE layer for compressed tensor quantization (#10299) | 2025-09-10 23:54:09 -07:00
Zaili Wang | ef959d7b85 | [CPU] fix OOM when mem-fraction is not set (#9090) | 2025-09-10 23:52:22 -07:00
Yi Zhang | dc491b399d | add flash linear attention triton kernel (#10239) | 2025-09-10 21:47:20 -07:00
Even Zhou | 5b64f006ec | [Feature] Support DeepEP normal & Redundant Experts on NPU (#9881) | 2025-09-10 20:35:26 -07:00
Yineng Zhang | 6d55f60e77 | Revert "[1/2] Optimizations and refactors about quant kernel (#9534)" (#10292) | 2025-09-10 18:24:23 -07:00
Lianmin Zheng | 033b75f559 | [Auto Sync] Update serving_base.py, serving_chat.py, servin... (20250910) (#10282) | 2025-09-10 16:58:59 -07:00
    Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
    Co-authored-by: cctry <shiyang@x.ai>
Xinyuan Tong | f3b5db6ee8 | Feat: support disable tool parser (#10184) | 2025-09-10 14:03:55 -07:00
Rain Jiang | 2286e85e77 | pass a_scale from fp8 quant result instead of hard code to 1.0f (#10241) | 2025-09-10 12:56:05 -07:00
    Co-authored-by: Yichen Wang <yichen.wang@bytedance.com>
    Co-authored-by: Jinwu Guo <641876696@qq.com>