Commit Graph

4977 Commits

Author SHA1 Message Date
Lianmin Zheng
646076b71e Update guidelines for syncing code between repos (#9831) 2025-08-30 16:10:35 -07:00
Lianmin Zheng
0d04008936 [CI] Code sync tools (#9830) 2025-08-30 16:02:29 -07:00
Lianmin Zheng
05e4787243 [CI] Fix the trigger condition for PR test workflows (#9761) 2025-08-30 15:47:10 -07:00
Lianmin Zheng
1e61b4960f [Auto Sync] Update parallel_state.py (20250830) (#9828)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
2025-08-30 14:25:39 -07:00
Yineng Zhang
300676afac chore: upgrade transformers 4.56.0 (#9827) 2025-08-30 14:07:34 -07:00
PGFLMG
7fe89f7cdb [sgl-kernel] fix: fix missing FetchContent_Populate for fmt (#9826) 2025-08-30 12:57:42 -07:00
Yineng Zhang
9970e3bf32 chore: upgrade sgl-kernel 0.3.7.post1 with deepgemm fix (#9822) 2025-08-30 04:02:25 -07:00
Mohammad Miadh Angkad
70eedb58bb Fix typo in warning message about DeepGEMM JIT (#9802) 2025-08-30 03:35:53 -07:00
Yineng Zhang
9c99949ef3 chore: update Dockerfile (#9820) 2025-08-30 03:08:14 -07:00
Yineng Zhang
c5082f0f73 chore: fix cuda driver api issue and bump sgl-kernel 0.3.7.post1 (#9746) 2025-08-30 02:01:54 -07:00
Liangsheng Yin
836873b99f Fix memory leak when aborting decode request in PD-Disagg (#9817)
Co-authored-by: Lianmin Zheng <15100009+merrymercy@users.noreply.github.com>
2025-08-30 14:36:03 +08:00
Yineng Zhang
8abe8deae6 fix: dsv3 lite q_lora_rank none (#9815) 2025-08-29 23:24:14 -07:00
hlu1
1e85589dc5 Make fp4_quantize kernels work on sm103 (#9807)
Signed-off-by: Hao Lu <14827759+hlu1@users.noreply.github.com>
2025-08-29 21:15:08 -07:00
hzh0425
c2a26e725c feature(eplb): add min-rebalancing-utilization-threshold for eplb (#8345)
Co-authored-by: yizhang2077 <1109276519@qq.com>
2025-08-30 11:24:29 +08:00
yilian49
591e6c5983 Small bug fix in transformers model implementation (#9809) 2025-08-29 18:51:44 -07:00
pranavm-nvidia
42f34437ab Adds initialize_moe_config to bench_one_batch so MOE backend is respected (#9670) 2025-08-29 17:29:32 -07:00
Kaixi Hou
5c34b4f1c7 [NVIDIA] [2/N] Optimize silu_and_mul_scaled_fp4_grouped_quant perf (#9556) 2025-08-29 17:17:03 -07:00
Faraz
ff9b561817 Fix TRTLLM MLA Cuda KV Blocks Causing accuracy drop (#9675) 2025-08-29 17:16:10 -07:00
Pavani Majety
fcd72bd100 [ModelOpt] Fix Weight Loading for DSR1-FP4 Quantization (#9712)
Signed-off-by: Pavani Majety <pmajety@nvidia.com>
2025-08-29 17:13:52 -07:00
Yineng Zhang
3d8fc43400 chore: upgrade flashinfer 0.3.0rc1 (#9793) 2025-08-29 16:24:17 -07:00
KerwinKai
87a0f7d2c2 [feat] Support EAGLE3 for Qwen2 (#9216) 2025-08-29 12:59:51 -07:00
narutolhy
839c93bd2d feat: add original logprobs to response (#8375)
Co-authored-by: Chayenne <zhaochen20@outlook.com>
Co-authored-by: luhongyu.4869 <luhongyu.4869@bytedance.com>
2025-08-29 11:43:57 -07:00
JiLi
f1e9bbaff5 feat: Add flexible validation for partial weight updates (#9663)
Co-authored-by: RichardW <rich-junwang@users.noreply.github.com>
Co-authored-by: Zhuorany <yzr1914001753@gmail.com>
Co-authored-by: Stefan He <hebiaobuaa@gmail.com>
Co-authored-by: Yineng Zhang <me@zhyncs.com>
Co-authored-by: Night <32424487+PrinsYin@users.noreply.github.com>
Co-authored-by: zhaochenyang20 <zhaochen20@outlook.com>
Co-authored-by: Liangsheng Yin <hnyls2002@gmail.com>
2025-08-29 11:19:26 -07:00
gongwei-130
3fd1431df2 support enable in the reasoning field to enable thinking for thinkin… (#9715) 2025-08-29 10:57:32 -07:00
hzh0425
161e9dc51e feat(hicache-3fs): 3FS-Store Backup Optimizations For MLA Model. (#9692) 2025-08-29 10:48:51 -07:00
Zhiqiang Xie
54e872d343 [HiCache] resolve conflict between chunked-prefill and hicache hit count (#9776) 2025-08-30 01:30:54 +08:00
Xuchun Shang
e5b29bf14e [PD] Support get_model_info interface for mini_lb (#9792)
Signed-off-by: Xuchun Shang <xuchun.shang@linux.alibaba.com>
Co-authored-by: Teng Ma <sima.mt@alibaba-inc.com>
2025-08-29 00:54:03 -07:00
gongwei-130
9a7c8842ba accommodate json schema in the "schema" field, not in "json_schema" field of response_format (#9786) 2025-08-28 23:51:50 -07:00
hlu1
7a16db9bd9 Make sm100 fp8 kernels available on sm103 (#9789)
Signed-off-by: Hao Lu <14827759+hlu1@users.noreply.github.com>
2025-08-28 23:47:29 -07:00
pansicheng
09a1df2231 add bench_mix.py (#9788) 2025-08-28 23:44:26 -07:00
sogalin
4b7034ddb0 ROCm 7.0 update (#9757) 2025-08-28 22:24:34 -07:00
Liangsheng Yin
a23c30205d Raise error when topk>1 and page>1 for paged attention backends. (#9784)
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
2025-08-29 12:47:34 +08:00
hlu1
a7d825fccc Skip some tests on Blackwell (#9777)
Signed-off-by: Hao Lu <14827759+hlu1@users.noreply.github.com>
2025-08-28 20:00:32 -07:00
hzh0425
38cd5fb1e0 bugfix(hicache): Move exists check before key suffixing (#9749) 2025-08-28 18:29:47 -07:00
Zhiqiang Xie
001f51940a [HiCache] change the default policy to write through (#9772) 2025-08-28 18:28:39 -07:00
Ma Mingfei
5ad296bda1 Optimize prefill performance on cpu backend (#8750) 2025-08-28 17:21:55 -07:00
wangyu
9f81d741a2 fix: fix MLA for ShardedModelLoader/RemoteModelLoader (#6287)
Signed-off-by: wangyu <wangyu.steph@bytedance.com>
2025-08-28 16:10:09 -07:00
wangyu
a38c149758 feat(draft_model): support draft_model for RemoteModelLoader (#6407)
Signed-off-by: wangyu <wangyu.steph@bytedance.com>
2025-08-28 16:09:52 -07:00
chenxu140
74dd4249ac [Feature] Support NPUGraph for DeepSeek on Ascend NPU (#9355)
Co-authored-by: Even Zhou <even.y.zhou@outlook.com>
2025-08-28 16:06:24 -07:00
zixuanzhang226
dc20c22f76 feat: add tuned fused moe config for GLM-4.5-Air-FP8 tp = 4 on B200 (#9770) 2025-08-28 16:00:28 -07:00
Hubert Lu
711390a971 [AMD] Support Hierarchical Caching on AMD GPUs (#8236) 2025-08-28 15:27:07 -07:00
Simo Lin
5343058875 [router] grpc router bootstraps (#9759) 2025-08-28 12:07:06 -07:00
Lianmin Zheng
fce7ae33f8 [Sync] Update server_args.py (20250828) (#9745)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2025-08-28 10:33:00 -07:00
Rain Jiang
6b39f9cf8c Support compile sgl-kernel on cuda 13.0 (#9721) 2025-08-28 10:18:03 -07:00
Simo Lin
07c9d8fba2 [router] add llama3.2 multi json streaming parser (#9735) 2025-08-28 05:57:13 -07:00
Qiaolin Yu
4a4772ae03 Support speculative decoding in hybrid attention backend (#9573) 2025-08-28 01:11:42 -07:00
yhyang201
c377923304 [feat] Reduce GPU memory overhead by using weakref (#9673) 2025-08-28 01:09:06 -07:00
Xinyuan Tong
f84b57c80e Move git clone command up from README (#9740) 2025-08-28 00:27:00 -07:00
zyksir
aee094e430 add support for nvidia/gpt-oss-120b-Eagle3 (#9739) 2025-08-28 00:20:20 -07:00
huangtingwei
55349e361d support mooncake store dp attention (#9684) 2025-08-28 12:31:31 +08:00