Commit Graph

2919 Commits

Author SHA1 Message Date
huangtingwei
d904959233 Support l3 cache (mooncake store) for hiradix cache (#7211)
Co-authored-by: Zhiqiang Xie <xiezhq@stanford.edu>
Co-authored-by: AniZpZ <zhuangsen.zp@antgroup.com>
Co-authored-by: zuoyuan <zhangzuo21@mails.tsinghua.edu.cn>
Co-authored-by: @wangyueneng.wyn <wangyueneng.wyn@antgroup.com>
Co-authored-by: JinYan Su <jinyansu792@gmail.com>
2025-07-30 23:15:51 -07:00
huangtingwei
26c8a310bd fix incorrect increase of hit count (#8533)
Co-authored-by: Zhiqiang Xie <xiezhq@stanford.edu>
2025-07-31 06:02:42 +00:00
yi wang
5963e50503 [bugfix] Fix 2 minor bugs in the hicache storage layer (#8404) 2025-07-31 05:47:14 +00:00
Binyao Jiang
59aab76f0a Bug: Fix Google gemma3n-mm audio input not working (#8365) 2025-07-30 21:23:09 -07:00
Lifu Huang
67e53b16f5 Bump transformers to 4.54.1 to fix Gemma cache issue. (#8541) 2025-07-30 19:50:54 -07:00
pansicheng
299803343d Add hf3fs support for hicache storage (based on #7704) (#7280)
Co-authored-by: Zhiqiang Xie <xiezhq@stanford.edu>
2025-07-30 17:42:41 -07:00
Chang Su
a79a5d7012 Revert "Fix the input tools format and history tool_calls in OpenAI API (#6556)" (#8584) 2025-07-30 13:12:05 -07:00
Adarsh Shirawalmath
ec5f944271 [Model] Add support for Arcee Foundational Model (#8154) 2025-07-30 10:45:25 -07:00
Elfie Guo
e3f08c77bc Update cutlass_moe.py (#8545) 2025-07-29 23:46:34 -07:00
hzh0425
2fbb754e1d feature(pd-hicache): Prefill instances support reusing the RemoteStorage Cache via HiCache. (#8516)
Co-authored-by: Shangming Cai <csmthu@gmail.com>
Co-authored-by: Zhiqiang Xie <xiezhq@stanford.edu>
2025-07-29 21:19:25 -07:00
hzh0425
a85ebf50b8 feat(hicache): support file backend reading directory config from env. (#8498) 2025-07-29 21:18:46 -07:00
Cheng Wan
9effeb5bdd Support EPLB in FusedMoE (#8448) 2025-07-29 16:02:41 -07:00
Mick
1992ef9ba7 fix: temporarily disable cuda-ipc for mm data tensor (#8431)
Signed-off-by: Xinyuan Tong <xinyuantong.cs@gmail.com>
Co-authored-by: Xinyuan Tong <xinyuantong.cs@gmail.com>
2025-07-29 22:42:03 +00:00
Lianmin Zheng
a4c3b121d8 Split the scheduler into multiple mixin classes to reduce the file size (#8483) 2025-07-29 12:46:50 -07:00
Elfie Guo
4d16c88b6e Update cutlass_moe.py (#8535) 2025-07-29 10:49:41 -07:00
Yineng Zhang
6478831be9 chore: bump v0.4.9.post6 (#8517) 2025-07-29 02:30:07 -07:00
Lifu Huang
fb16fbaf52 Fix incorrect KV cache allocation for MTP models. (#8482)
Co-authored-by: Stefan He <hebiaobuaa@gmail.com>
2025-07-28 22:54:50 -07:00
fzyzcjy
0ce84c822b Support colocating requests (#7973) 2025-07-28 22:51:49 -07:00
fzyzcjy
59d0bf012f Tiny add warnings for DeepEP when it is suboptimal (#8426) 2025-07-28 22:51:38 -07:00
fzyzcjy
7df2c0c2db Reduce memory usage for fp4 moe (#8413) 2025-07-28 22:51:23 -07:00
Yineng Zhang
8240a6b013 chore: add glm 4.5 fp8 tp4 config (#8480) 2025-07-28 16:14:01 -07:00
Yineng Zhang
3a04aa4be7 chore: add glm4 fp8 tp8 config (#8478) 2025-07-28 16:08:53 -07:00
Stefan He
74e7e45710 Fix DeepEP BF16 compatibility for DeepSeek-style models like GLM 4.5 (#8469)
Co-authored-by: Minglei Zhu <mingleizhu1122@gmail.com>
2025-07-28 14:36:08 -07:00
Cheng Wan
9c138a0445 [3/N] MoE Refactor: Simplify DeepEP Output (#8421) 2025-07-28 11:37:17 -07:00
Timofey
c8f549d96d Fix parsing ChatCompletionMessage (#7273)
Co-authored-by: Timofey K <timosha1113@gmail.com>
2025-07-28 11:35:14 -07:00
Kaixi Hou
134fa43e19 [NVIDIA] Change to use num_local_experts (#8453) 2025-07-28 10:38:19 -07:00
Yineng Zhang
ccfe52a057 fix: update dep (#8467) 2025-07-28 10:19:33 -07:00
harrisonlimh
747dd45077 feat: throttle requests at scheduler based on --max_queued_requests (#7565) 2025-07-28 22:32:33 +08:00
erictanjn
a9dd3ec3e9 fix: reorder topk experts to ensure shared expert replaces minimal score (#8125) 2025-07-28 20:36:46 +08:00
Yineng Zhang
45bc170b36 chore: bump v0.4.9.post5 (#8458) 2025-07-28 02:11:06 -07:00
Minglei Zhu
25f73c6cf3 fix GLM4_MOE launch with compressed_tensor quant model (#8456) 2025-07-28 01:31:20 -07:00
Binyao Jiang
581e7dcb92 GLM-4.5 Model Support Follow-up (#8445) 2025-07-27 23:35:20 -07:00
Yuxuan Zhang
6d6a8bc278 GLM-4.5 Model Support (#8224)
Co-authored-by: Lifu Huang <lifu.hlf@gmail.com>
Co-authored-by: Binyao Jiang <byjiang1996@gmail.com>
Co-authored-by: Stefan He <hebiaobuaa@gmail.com>
2025-07-27 22:54:07 -07:00
Shangming Cai
2fd5c7049f [PD] Fix abort_request for PD disaggregation (#8352)
Signed-off-by: Shangming Cai <caishangming@linux.alibaba.com>
Co-authored-by: ybyang <10629930+whybeyoung@users.noreply.github.com>
2025-07-27 21:48:27 -07:00
Stefan He
4ad9737045 chore: bump transformers to 4.54.0 (#8416)
Co-authored-by: Binyao Jiang <byjiang1996@gmail.com>
Co-authored-by: Lifu Huang <lifu.hlf@gmail.com>
2025-07-27 21:27:25 -07:00
Qiaolin Yu
2810338401 [feat] Support different attention backends for prefill and decode (#6338)
Co-authored-by: tianqilin.99 <tianqilin.99@bytedance.com>
Co-authored-by: Baizhou Zhang <sobereddiezhang@gmail.com>
2025-07-28 11:42:29 +08:00
Chang Su
dd487e5553 bugfix: Fix XGrammar backend to use model's EOS tokens for constrained generation (#8422) 2025-07-28 10:01:02 +08:00
Chang Su
b47eda3316 bugfix: Fix multiple finish_reason chunks and tool_calls finish reason check (#8417) 2025-07-27 13:31:06 -07:00
Binyao Jiang
e983d66680 Fix: Improve test_openai_function_calling unit test and fix reasoning_parser.py think_start_token logic (#8316)
Co-authored-by: Chang Su <chang.s.su@oracle.com>
2025-07-27 13:12:59 -07:00
fzyzcjy
b58c3c285e Support ue8m0 for triton quant kernel (#7603) 2025-07-27 13:04:35 -07:00
Lifu Huang
df90645525 Support overlapped lora updates (#8213) 2025-07-27 13:00:44 -07:00
Shangming Cai
22e00eeb4a [Bugfix] Prevent PD server crash from invalid grammar (#8062)
Signed-off-by: Shangming Cai <caishangming@linux.alibaba.com>
2025-07-28 00:17:51 +08:00
Yuan Luo
b3eac168e7 Support triton kernels v3.4.0 for fused_moe (#8258)
Co-authored-by: luoyuan.luo <luoyuan.luo@antgroup.com>
Co-authored-by: Cheng Wan <cwan@x.ai>
Co-authored-by: Cheng Wan <54331508+ch-wan@users.noreply.github.com>
2025-07-27 02:28:49 -07:00
Yineng Zhang
10ee89559e chore: upgrade flashinfer v0.2.9rc2 (#8406) 2025-07-27 01:41:22 -07:00
Cheng Wan
4d921f2b79 [hotfix] fix merge conflicts in FlashInferEPMoE (#8405) 2025-07-27 01:24:10 -07:00
Kevin Xiang Li
44d600cd67 Support precomputed_embeddings for Llama 4 (#8156)
Signed-off-by: Xinyuan Tong <xinyuantong.cs@gmail.com>
Co-authored-by: Xiang (Kevin) Li <lik@nvidia.com>
Co-authored-by: Xinyuan Tong <115166877+JustinTong0323@users.noreply.github.com>
Co-authored-by: Xinyuan Tong <xinyuantong.cs@gmail.com>
2025-07-27 01:14:49 -07:00
Elfie Guo
5c9c275bc8 Use FlashInfer FP4 gemm. (#8241) 2025-07-27 01:05:22 -07:00
Cheng Wan
bf0f448fe5 [2/N] MoE Refactor: Unify weight loader and quant methods (#8397) 2025-07-27 01:00:21 -07:00
Yingchun Lai
36d6f0ba5b fix: fix the missing metrics on non-rank0 nodes (#7720) 2025-07-27 00:55:25 -07:00
Li Hui
2a1936de96 Add A800 fused MoE kernel tuning configs for Qwen3-Coder-480B-A35B-Instruct (#8351) 2025-07-27 00:46:07 -07:00