4b04998d38 | 2025-07-31 16:03:40 -07:00 | Faraz
    TRTLLM Gen MLA Decode Kernel Integration (same as #7938) (#8632)
    Signed-off-by: Faraz Khoubsirat <58580514+farazkh80@users.noreply.github.com>

3dde86194a | 2025-07-31 14:59:29 -07:00 | pansicheng
    Conditionally import HiCacheHF3FS (#8598)
    Co-authored-by: Zhiqiang Xie <xiezhq@stanford.edu>

b7170cc820 | 2025-07-31 13:57:08 -07:00 | Trevor Morris
    [bugfix] Fix flashinfer cutlass EP moe after MoE refactor (#8630)

5c14515fec | 2025-07-31 13:54:02 -07:00 | Simo Lin
    [bug] remove pdlb from minilb since its no longer available (#8634)

2cd2e27f80 | 2025-07-31 13:09:42 -07:00 | Vishwanath Venkatesan
    SGLang HiCache NIXL Connector (#8488)
    Signed-off-by: Vishwanath Venkatesan <vvenkatesan@nvidia.com>
    Co-authored-by: Moein Khazraee <moein@nvidia.com>
    Co-authored-by: Zhiqiang Xie <xiezhq@stanford.edu>

743638bc03 | 2025-07-31 12:56:52 -07:00 | Chang Su
    misc: Remove debug print to logger.info (#8633)

4acf690206 | 2025-07-31 11:31:21 -07:00 | Brayden Zhong
    [Optimization][Perf] Disable the GC during CUDA graph capture to speed up by up to 3x (#8577)

8fbcfd0723 | 2025-08-01 00:49:26 +08:00 | Ke Bao
    Update step3v default config (#8626)

3c307dc057 | 2025-07-31 22:42:31 +08:00 | Ke Bao
    Fix hf3fs_fuse import error (#8623)

016fd25127 | 2025-07-31 21:29:34 +08:00 | Shangming Cai
    [PD] Use batch transfer for rdma transport and add notes for mnnvl usage (#8595)
    Signed-off-by: Shangming Cai <caishangming@linux.alibaba.com>

023288645b | 2025-07-31 20:50:17 +08:00 | Yineng Zhang
    chore: bump v0.4.10 (#8608)

7a1f7fc504 | 2025-07-31 02:53:25 -07:00 | Cheng Wan
    [Feature] Hybrid EP and TP (#8590)

51c38163c1 | 2025-07-31 02:41:00 -07:00 | Chang Su
    model: support Step3V (#8583)
    Signed-off-by: Xinyuan Tong <justinning0323@outlook.com>
    Co-authored-by: nnnobody-code <nnnobody@foxmail.com>
    Co-authored-by: ispobock <ispobaoke@gmail.com>
    Co-authored-by: Qiaolin-Yu <qy254@cornell.edu>
    Co-authored-by: Qiaolin-Yu <liin1211@outlook.com>
    Co-authored-by: Xinyuan Tong <justinning0323@outlook.com>
    Co-authored-by: Xinyuan Tong <115166877+JustinTong0323@users.noreply.github.com>

32fa1e9cc2 | 2025-07-31 02:34:02 -07:00 | Cheng Wan
    [4/N] MoE Refactor: Unified Triton Kernel for FusedMoE and EPMoE (#8515)

e179e0b797 | 2025-07-31 00:14:39 -07:00 | Cheng Wan
    update sgl-kernel for EP: python part (#8550)

d904959233 | 2025-07-30 23:15:51 -07:00 | huangtingwei
    Support l3 cache (mooncake store) for hiradix cache (#7211)
    Co-authored-by: Zhiqiang Xie <xiezhq@stanford.edu>
    Co-authored-by: AniZpZ <zhuangsen.zp@antgroup.com>
    Co-authored-by: zuoyuan <zhangzuo21@mails.tsinghua.edu.cn>
    Co-authored-by: @wangyueneng.wyn <wangyueneng.wyn@antgroup.com>
    Co-authored-by: JinYan Su <jinyansu792@gmail.com>

26c8a310bd | 2025-07-31 06:02:42 +00:00 | huangtingwei
    fix incorrect increase of hit count (#8533)
    Co-authored-by: Zhiqiang Xie <xiezhq@stanford.edu>

5963e50503 | 2025-07-31 05:47:14 +00:00 | yi wang
    [bugfix] Fix 2 minor bugs in the hicache storage layer (#8404)

59aab76f0a | 2025-07-30 21:23:09 -07:00 | Binyao Jiang
    Bug: Fix google gemma3n-mm audio input not working bug (#8365)

67e53b16f5 | 2025-07-30 19:50:54 -07:00 | Lifu Huang
    Bump transfomers to 4.54.1 to fix Gemma cache issue. (#8541)

299803343d | 2025-07-30 17:42:41 -07:00 | pansicheng
    Add hf3fs support for hicache storage (based on #7704) (#7280)
    Co-authored-by: Zhiqiang Xie <xiezhq@stanford.edu>

a79a5d7012 | 2025-07-30 13:12:05 -07:00 | Chang Su
    Revert "Fix the input tools format and history tool_calls in OpenAI API (#6556)" (#8584)

ec5f944271 | 2025-07-30 10:45:25 -07:00 | Adarsh Shirawalmath
    [Model] Add support for Arcee Foundational Model (#8154)

e3f08c77bc | 2025-07-29 23:46:34 -07:00 | Elfie Guo
    Update cutlass_moe.py (#8545)

2fbb754e1d | 2025-07-29 21:19:25 -07:00 | hzh0425
    feature(pd-hicache): Prefill instances support reusing the RemoteStorage Cache via HiCache. (#8516)
    Co-authored-by: Shangming Cai <csmthu@gmail.com>
    Co-authored-by: Zhiqiang Xie <xiezhq@stanford.edu>

a85ebf50b8 | 2025-07-29 21:18:46 -07:00 | hzh0425
    feat(hicache): support file backend reading directory config form env. (#8498)

9effeb5bdd | 2025-07-29 16:02:41 -07:00 | Cheng Wan
    Support EPLB in FusedMoE (#8448)

1992ef9ba7 | 2025-07-29 22:42:03 +00:00 | Mick
    fix: temporarily disable cuda-ipc for mm data tensor (#8431)
    Signed-off-by: Xinyuan Tong <xinyuantong.cs@gmail.com>
    Co-authored-by: Xinyuan Tong <xinyuantong.cs@gmail.com>

a4c3b121d8 | 2025-07-29 12:46:50 -07:00 | Lianmin Zheng
    Split the scheduler into multiple mixin classes to reduce the file size (#8483)

4d16c88b6e | 2025-07-29 10:49:41 -07:00 | Elfie Guo
    Update cutlass_moe.py (#8535)

6478831be9 | 2025-07-29 02:30:07 -07:00 | Yineng Zhang
    chore: bump v0.4.9.post6 (#8517)

fb16fbaf52 | 2025-07-28 22:54:50 -07:00 | Lifu Huang
    Fix incorrect KV cache allocation for MTP models. (#8482)
    Co-authored-by: Stefan He <hebiaobuaa@gmail.com>

0ce84c822b | 2025-07-28 22:51:49 -07:00 | fzyzcjy
    Support colocating requests (#7973)

59d0bf012f | 2025-07-28 22:51:38 -07:00 | fzyzcjy
    Tiny add warnings for DeepEP when it is suboptimal (#8426)

7df2c0c2db | 2025-07-28 22:51:23 -07:00 | fzyzcjy
    Reduce memory usage for fp4 moe (#8413)

8240a6b013 | 2025-07-28 16:14:01 -07:00 | Yineng Zhang
    chore: add glm 4.5 fp8 tp4 config (#8480)

3a04aa4be7 | 2025-07-28 16:08:53 -07:00 | Yineng Zhang
    chore: add glm4 fp8 tp8 config (#8478)

74e7e45710 | 2025-07-28 14:36:08 -07:00 | Stefan He
    Fix DEEPEP BF16 compatibility for Deepseek Style model like GLM 4.5 (#8469)
    Co-authored-by: Minglei Zhu <mingleizhu1122@gmail.com>

9c138a0445 | 2025-07-28 11:37:17 -07:00 | Cheng Wan
    [3/N] MoE Refactor: Simplify DeepEP Output (#8421)

c8f549d96d | 2025-07-28 11:35:14 -07:00 | Timofey
    Fix parsing ChatCompletionMessage (#7273)
    Co-authored-by: Timofey K <timosha1113@gmail.com>

134fa43e19 | 2025-07-28 10:38:19 -07:00 | Kaixi Hou
    [NVIDIA] Change to use num_local_experts (#8453)

ccfe52a057 | 2025-07-28 10:19:33 -07:00 | Yineng Zhang
    fix: update dep (#8467)

747dd45077 | 2025-07-28 22:32:33 +08:00 | harrisonlimh
    feat: throttle requests at scheduler based on --max_queued_requests (#7565)

a9dd3ec3e9 | 2025-07-28 20:36:46 +08:00 | erictanjn
    fix: reorder topk experts to ensure shared expert replaces minimal score (#8125)

45bc170b36 | 2025-07-28 02:11:06 -07:00 | Yineng Zhang
    chore: bump v0.4.9.post5 (#8458)

25f73c6cf3 | 2025-07-28 01:31:20 -07:00 | Minglei Zhu
    fix GLM4_MOE launch with compressed_tensor quant model (#8456)

581e7dcb92 | 2025-07-27 23:35:20 -07:00 | Binyao Jiang
    GLM-4.5 Model Support Follow-up (#8445)

6d6a8bc278 | 2025-07-27 22:54:07 -07:00 | Yuxuan Zhang
    GLM-4.5 Model Support (#8224)
    Co-authored-by: Lifu Huang <lifu.hlf@gmail.com>
    Co-authored-by: Binyao Jiang <byjiang1996@gmail.com>
    Co-authored-by: Stefan He <hebiaobuaa@gmail.com>

2fd5c7049f | 2025-07-27 21:48:27 -07:00 | Shangming Cai
    [PD] Fix abort_request for PD disaggregation (#8352)
    Signed-off-by: Shangming Cai <caishangming@linux.alibaba.com>
    Co-authored-by: ybyang <10629930+whybeyoung@users.noreply.github.com>

4ad9737045 | 2025-07-27 21:27:25 -07:00 | Stefan He
    chore: bump transformer to 4.54.0 (#8416)
    Co-authored-by: Binyao Jiang <byjiang1996@gmail.com>
    Co-authored-by: Lifu Huang <lifu.hlf@gmail.com>