Bruce-x-1997 | 21e1bc475c | 2025-09-01 20:37:15 -07:00
[router] fix FunctionCallResponse proto, support arguments is null (#9875)
Co-authored-by: forestlee95 <forestlee95@foxmail.com>

Chang Su | 9a0cac1be0 | 2025-09-01 20:06:15 -07:00
[router] add grpc pd and regular router init (#9893)

Xiaoyu Zhang | b5245064f6 | 2025-09-02 11:04:27 +08:00
[code style] restruct fused_moe to avoid very long single file (#9878)

LukasBluebaum | 9d9fa9a537 | 2025-09-01 19:57:04 -07:00
[router] Fix short timeout for the prefill client (#9803)

hzh0425 | 58d06fdc95 | 2025-09-01 19:01:48 -07:00
[HiCacheStorage]: Improve 3fs kvstore's performance and resolve mla issues (#9876)

huangtingwei | cb9e0e4180 | 2025-09-01 18:59:29 -07:00
[HiCacheStorage] fix abort request host memory leaks (#9874)
Co-authored-by: Zhiqiang Xie <xiezhq@stanford.edu>
Rain Jiang | 9db8025376 | 2025-09-01 19:17:12 +00:00
support fp8 kvcache for hybrid attn backend on GPT-OSS (#9783)
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>

Chang Su | 598c0bc19d | 2025-09-01 10:40:37 -07:00
[router] add tokenizer download support from hf hub (#9882)

huangtingwei | b361750a4a | 2025-09-01 03:27:56 -07:00
Mooncake store get zero copy meta optimization (#9857)

Yineng Zhang | 16e56ea693 | 2025-09-01 03:07:36 -07:00
chore: bump v0.5.2rc0 (#9862)

Yineng Zhang | 349b491c63 | 2025-09-01 03:07:19 -07:00
chore: upgrade flashinfer 0.3.0 (#9864)

ybyang | 5f77e1292d | 2025-09-01 01:00:13 -07:00
Support Multi Process Tokenizer Manager(#6555) (#8964)
Signed-off-by: ybyang <ybyang7@iflytek.com>
Signed-off-by: huanglong <huanglong@linux.alibaba.com>
Co-authored-by: Huang Long <121648372+LLLL114@users.noreply.github.com>
Co-authored-by: huanglong <huanglong@linux.alibaba.com>
Co-authored-by: Shangming Cai <csmthu@gmail.com>
Sai Enduri | 4750cddf68 | 2025-09-01 00:37:12 -07:00
Update docker build workflows for gfx942 ROCm 7.0. (#9794)
Co-authored-by: Hubert Lu <Hubert.Lu@amd.com>

fzyzcjy | 065e523d7b | 2025-08-31 23:29:56 -07:00
Tiny allow DeepGEMM on cu12.9 (#9858)

Baizhou Zhang | 7de2ce45b2 | 2025-08-31 22:28:22 -07:00
Disable radix cache in test_lora_update.py for better stability (#9852)

hzh0425 | 8c2ffaaf0f | 2025-08-31 14:51:18 -07:00
fix(hicahce-long-bench): adjust context workload generator to use full query set (#9847)
Co-authored-by: Zhiqiang Xie <xiezhq@stanford.edu>

Pawel Kowalski | 20445327b2 | 2025-08-31 14:27:33 -07:00
fix inconsistent arguments for generated shared prefix bench (#9073)
Co-authored-by: Pawel Kowalski <pawel.kowalski@silo.ai>

Liangsheng Yin | 6d3c20cf5b | 2025-09-01 01:31:35 +08:00
fix set_interal_state API (#9850)

Zhiqiang Xie | 8b6966d020 | 2025-08-31 22:58:21 +08:00
[HiCache] Storage Refactoring (#9797)
Co-authored-by: pansicheng <27603155+pansicheng@users.noreply.github.com>
Kevin Xiang Li | a391f73adc | 2025-08-31 11:08:28 +00:00
Fuse gate_proj and up_proj in Qwen 2.5 VL's vision MLP (#9661)
Signed-off-by: Xinyuan Tong <xinyuantong.cs@gmail.com>
Co-authored-by: Xiang (Kevin) Li <lik@nvidia.com>
Co-authored-by: Xinyuan Tong <115166877+JustinTong0323@users.noreply.github.com>
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
Co-authored-by: Xinyuan Tong <xinyuantong.cs@gmail.com>

Lianmin Zheng | 25c7395934 | 2025-08-31 02:56:47 -07:00
Fix input logprob index (#9841)
Co-authored-by: Sheng Shen <sheng.s@berkeley.edu>

Teng Ma | f05c68733e | 2025-08-31 17:41:44 +08:00
[HiCache] Clear kvcache in storage backend with fastAPI (#9750)
Co-authored-by: hzh0425 <hzh0425@apache.org>

Vincent Zhong | 9a0d0b754d | 2025-08-31 17:20:50 +08:00
[Performance] Improve Qwen RMSNorm by replacing with native RMSNorm op (#9709)

VDV1985 | ba861293cf | 2025-08-31 00:25:07 -07:00
[feat] Ascend NPU Gemma-3-12b and Gemma-3-27b support (#8909)

Chang Su | c112bcc461 | 2025-08-30 23:35:39 -07:00
[router] global tool parser registry (#9840)
Guoyuan Lin | 5e194b2143 | 2025-08-30 23:29:21 -07:00
[Model] Support Meituan LongCat-Flash && LongCat-Flash-MTP (#9824)

Chang Su | fd5ce576a4 | 2025-08-30 21:08:11 -07:00
Tool parser.benchmark (#9835)

Simo Lin | 92d79646e5 | 2025-08-30 21:06:23 -07:00
[router] add reasoning parser readme (#9837)

Zhiqiang Xie | f9076a5a2c | 2025-08-30 21:01:51 -07:00
hot fix for mooncake batch set api (#9836)

Lianmin Zheng | 646076b71e | 2025-08-30 16:10:35 -07:00
Update guidelines for syncing code between repos (#9831)

Lianmin Zheng | 0d04008936 | 2025-08-30 16:02:29 -07:00
[CI] Code sync tools (#9830)

Lianmin Zheng | 05e4787243 | 2025-08-30 15:47:10 -07:00
[CI] Fix the trigger condition for PR test workflows (#9761)

Lianmin Zheng | 1e61b4960f | 2025-08-30 14:25:39 -07:00
[Auto Sync] Update parallel_state.py (20250830) (#9828)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
Yineng Zhang | 300676afac | 2025-08-30 14:07:34 -07:00
chore: upgrade transformers 4.56.0 (#9827)

PGFLMG | 7fe89f7cdb | 2025-08-30 12:57:42 -07:00
[sgl-kernel] fix: fix missing FetchContent_Populate for fmt (#9826)

Yineng Zhang | 9970e3bf32 | 2025-08-30 04:02:25 -07:00
chore: upgrade sgl-kernel 0.3.7.post1 with deepgemm fix (#9822)

Mohammad Miadh Angkad | 70eedb58bb | 2025-08-30 03:35:53 -07:00
Fix typo in warning message about DeepGEMM JIT (#9802)

Yineng Zhang | 9c99949ef3 | 2025-08-30 03:08:14 -07:00
chore: update Dockerfile (#9820)

Yineng Zhang | c5082f0f73 | 2025-08-30 02:01:54 -07:00
chore: fix cuda driver api issue and bump sgl-kernel 0.3.7.post1 (#9746)

Liangsheng Yin | 836873b99f | 2025-08-30 14:36:03 +08:00
Fix memory leak when aborting decode request in PD-Disagg (#9817)
Co-authored-by: Lianmin Zheng <15100009+merrymercy@users.noreply.github.com>
Yineng Zhang | 8abe8deae6 | 2025-08-29 23:24:14 -07:00
fix: dsv3 lite q_lora_rank none (#9815)

hlu1 | 1e85589dc5 | 2025-08-29 21:15:08 -07:00
Make fp4_quantize kernels work on sm103 (#9807)
Signed-off-by: Hao Lu <14827759+hlu1@users.noreply.github.com>

hzh0425 | c2a26e725c | 2025-08-30 11:24:29 +08:00
feature(eplb): add min-rebalancing-utilization-threshold for eplb (#8345)
Co-authored-by: yizhang2077 <1109276519@qq.com>

yilian49 | 591e6c5983 | 2025-08-29 18:51:44 -07:00
Small bug fix in transformers model implementation (#9809)

pranavm-nvidia | 42f34437ab | 2025-08-29 17:29:32 -07:00
Adds initialize_moe_config to bench_one_batch so MOE backend is respected (#9670)

Kaixi Hou | 5c34b4f1c7 | 2025-08-29 17:17:03 -07:00
[NVIDIA] [2/N] Optimize silu_and_mul_scaled_fp4_grouped_quant perf (#9556)

Faraz | ff9b561817 | 2025-08-29 17:16:10 -07:00
Fix TRTLLM MLA Cuda KV Blocks Causing accuracy drop (#9675)

Pavani Majety | fcd72bd100 | 2025-08-29 17:13:52 -07:00
[ModelOpt] Fix Weight Loading for DSR1-FP4 Quantization (#9712)
Signed-off-by: Pavani Majety <pmajety@nvidia.com>

Yineng Zhang | 3d8fc43400 | 2025-08-29 16:24:17 -07:00
chore: upgrade flashinfer 0.3.0rc1 (#9793)

KerwinKai | 87a0f7d2c2 | 2025-08-29 12:59:51 -07:00
[feat] Support EAGLE3 for Qwen2 (#9216)