Elfie Guo      | 4d16c88b6e | Update cutlass_moe.py (#8535)                                                                              | 2025-07-29 10:49:41 -07:00
Yineng Zhang   | 6478831be9 | chore: bump v0.4.9.post6 (#8517)                                                                           | 2025-07-29 02:30:07 -07:00
Lifu Huang     | fb16fbaf52 | Fix incorrect KV cache allocation for MTP models. (#8482)                                                  | 2025-07-28 22:54:50 -07:00
    Co-authored-by: Stefan He <hebiaobuaa@gmail.com>
fzyzcjy        | 0ce84c822b | Support colocating requests (#7973)                                                                        | 2025-07-28 22:51:49 -07:00
fzyzcjy        | 59d0bf012f | Tiny add warnings for DeepEP when it is suboptimal (#8426)                                                 | 2025-07-28 22:51:38 -07:00
fzyzcjy        | 7df2c0c2db | Reduce memory usage for fp4 moe (#8413)                                                                    | 2025-07-28 22:51:23 -07:00
Yineng Zhang   | 8240a6b013 | chore: add glm 4.5 fp8 tp4 config (#8480)                                                                  | 2025-07-28 16:14:01 -07:00
Yineng Zhang   | 3a04aa4be7 | chore: add glm4 fp8 tp8 config (#8478)                                                                     | 2025-07-28 16:08:53 -07:00
Stefan He      | 74e7e45710 | Fix DEEPEP BF16 compatibility for Deepseek Style model like GLM 4.5 (#8469)                                | 2025-07-28 14:36:08 -07:00
    Co-authored-by: Minglei Zhu <mingleizhu1122@gmail.com>
Cheng Wan      | 9c138a0445 | [3/N] MoE Refactor: Simplify DeepEP Output (#8421)                                                         | 2025-07-28 11:37:17 -07:00
Timofey        | c8f549d96d | Fix parsing ChatCompletionMessage (#7273)                                                                  | 2025-07-28 11:35:14 -07:00
    Co-authored-by: Timofey K <timosha1113@gmail.com>
Kaixi Hou      | 134fa43e19 | [NVIDIA] Change to use num_local_experts (#8453)                                                           | 2025-07-28 10:38:19 -07:00
Yineng Zhang   | ccfe52a057 | fix: update dep (#8467)                                                                                    | 2025-07-28 10:19:33 -07:00
harrisonlimh   | 747dd45077 | feat: throttle requests at scheduler based on --max_queued_requests (#7565)                                | 2025-07-28 22:32:33 +08:00
erictanjn      | a9dd3ec3e9 | fix: reorder topk experts to ensure shared expert replaces minimal score (#8125)                           | 2025-07-28 20:36:46 +08:00
Yineng Zhang   | 45bc170b36 | chore: bump v0.4.9.post5 (#8458)                                                                           | 2025-07-28 02:11:06 -07:00
Minglei Zhu    | 25f73c6cf3 | fix GLM4_MOE launch with compressed_tensor quant model (#8456)                                             | 2025-07-28 01:31:20 -07:00
Binyao Jiang   | 581e7dcb92 | GLM-4.5 Model Support Follow-up (#8445)                                                                    | 2025-07-27 23:35:20 -07:00
Yuxuan Zhang   | 6d6a8bc278 | GLM-4.5 Model Support (#8224)                                                                              | 2025-07-27 22:54:07 -07:00
    Co-authored-by: Lifu Huang <lifu.hlf@gmail.com>
    Co-authored-by: Binyao Jiang <byjiang1996@gmail.com>
    Co-authored-by: Stefan He <hebiaobuaa@gmail.com>
Shangming Cai  | 2fd5c7049f | [PD] Fix abort_request for PD disaggregation (#8352)                                                       | 2025-07-27 21:48:27 -07:00
    Signed-off-by: Shangming Cai <caishangming@linux.alibaba.com>
    Co-authored-by: ybyang <10629930+whybeyoung@users.noreply.github.com>
Stefan He      | 4ad9737045 | chore: bump transformers to 4.54.0 (#8416)                                                                 | 2025-07-27 21:27:25 -07:00
    Co-authored-by: Binyao Jiang <byjiang1996@gmail.com>
    Co-authored-by: Lifu Huang <lifu.hlf@gmail.com>
Qiaolin Yu     | 2810338401 | [feat] Support different attention backends for prefill and decode (#6338)                                 | 2025-07-28 11:42:29 +08:00
    Co-authored-by: tianqilin.99 <tianqilin.99@bytedance.com>
    Co-authored-by: Baizhou Zhang <sobereddiezhang@gmail.com>
Chang Su       | dd487e5553 | bugfix: Fix XGrammar backend to use model's EOS tokens for constrained generation (#8422)                  | 2025-07-28 10:01:02 +08:00
Chang Su       | b47eda3316 | bugfix: Fix multiple finish_reason chunks and tool_calls finish reason check (#8417)                       | 2025-07-27 13:31:06 -07:00
Binyao Jiang   | e983d66680 | Fix: Improve test_openai_function_calling unit test and fix reasoning_parser.py think_start_token logic (#8316) | 2025-07-27 13:12:59 -07:00
    Co-authored-by: Chang Su <chang.s.su@oracle.com>
fzyzcjy        | b58c3c285e | Support ue8m0 for triton quant kernel (#7603)                                                              | 2025-07-27 13:04:35 -07:00
Lifu Huang     | df90645525 | Support overlapped lora updates (#8213)                                                                    | 2025-07-27 13:00:44 -07:00
Shangming Cai  | 22e00eeb4a | [Bugfix] Prevent PD server crash from invalid grammar (#8062)                                              | 2025-07-28 00:17:51 +08:00
    Signed-off-by: Shangming Cai <caishangming@linux.alibaba.com>
Yuan Luo       | b3eac168e7 | Support triton kernels v3.4.0 for fused_moe (#8258)                                                        | 2025-07-27 02:28:49 -07:00
    Co-authored-by: luoyuan.luo <luoyuan.luo@antgroup.com>
    Co-authored-by: Cheng Wan <cwan@x.ai>
    Co-authored-by: Cheng Wan <54331508+ch-wan@users.noreply.github.com>
Yineng Zhang   | 10ee89559e | chore: upgrade flashinfer v0.2.9rc2 (#8406)                                                                | 2025-07-27 01:41:22 -07:00
Cheng Wan      | 4d921f2b79 | [hotfix] fix merge conflicts in FlashInferEPMoE (#8405)                                                    | 2025-07-27 01:24:10 -07:00
Kevin Xiang Li | 44d600cd67 | Support precomputed_embeddings for Llama 4 (#8156)                                                         | 2025-07-27 01:14:49 -07:00
    Signed-off-by: Xinyuan Tong <xinyuantong.cs@gmail.com>
    Co-authored-by: Xiang (Kevin) Li <lik@nvidia.com>
    Co-authored-by: Xinyuan Tong <115166877+JustinTong0323@users.noreply.github.com>
    Co-authored-by: Xinyuan Tong <xinyuantong.cs@gmail.com>
Elfie Guo      | 5c9c275bc8 | Use FlashInfer FP4 gemm. (#8241)                                                                           | 2025-07-27 01:05:22 -07:00
Cheng Wan      | bf0f448fe5 | [2/N] MoE Refactor: Unify weight loader and quant methods (#8397)                                          | 2025-07-27 01:00:21 -07:00
Yingchun Lai   | 36d6f0ba5b | fix: fix the missing metrics on non-rank0 nodes (#7720)                                                    | 2025-07-27 00:55:25 -07:00
Li Hui         | 2a1936de96 | Add A800 fused MoE kernel tuning configs for Qwen3-Coder-480B-A35B-Instruct (#8351)                        | 2025-07-27 00:46:07 -07:00
Mick           | 0bcc195f4e | fix: minor fix TransportProxyTensor under tp (#8382)                                                       | 2025-07-27 00:38:49 -07:00
Kaixi Hou      | 85486b6f6f | [NVIDIA] Add Flashinfer MoE blockscale fp8 backend (#8036)                                                 | 2025-07-27 00:34:41 -07:00
fzyzcjy        | 62222bd27e | Minor tool for comparison of benchmark results (#7974)                                                     | 2025-07-27 00:27:50 -07:00
fzyzcjy        | ed0fdbf35b | Tool to dump and compare internal activation tensors (#7976)                                               | 2025-07-27 00:27:21 -07:00
Xinyuan Tong   | b602f42354 | Urgent Fix: intern-s1 chat-template matching (#8403)                                                       | 2025-07-27 00:22:31 -07:00
    Signed-off-by: Xinyuan Tong <justinning0323@outlook.com>
Zhiqiang Xie   | 528bd1ed85 | HiCache, check before terminate prefetching (#8372)                                                        | 2025-07-26 23:13:16 -07:00
Lifu Huang     | 761546315c | Remove slot usage in code to be backward-compatible with python 3.9 (#8396)                                | 2025-07-26 21:24:22 -07:00
Lifu Huang     | 5c705b1dce | Add perf tests for LoRA (#8314)                                                                            | 2025-07-26 14:55:22 -07:00
RunningLeon    | b7094a5ef1 | model: support intern-s1 (#8350)                                                                           | 2025-07-26 13:48:51 -07:00
    Signed-off-by: Xinyuan Tong <xinyuantong.cs@gmail.com>
    Co-authored-by: zxy <zhou0493@e.ntu.edu.sg>
    Co-authored-by: Xinyuan Tong <xinyuantong.cs@gmail.com>
    Co-authored-by: Mick <mickjagger19@icloud.com>
    Co-authored-by: Xinyuan Tong <115166877+JustinTong0323@users.noreply.github.com>
fzyzcjy        | da0c026084 | Tiny assert EPLB is used together with expert parallel (#8381)                                             | 2025-07-26 03:20:39 -07:00
Mick           | 3212c2ad3f | vlm: optimize tensor transport (#6003)                                                                     | 2025-07-26 17:41:01 +08:00
    Co-authored-by: Xinyuan Tong <115166877+JustinTong0323@users.noreply.github.com>
Mick           | 534756749a | chore: improvements on mm_utils (#7737)                                                                    | 2025-07-26 17:38:56 +08:00
Stefan He      | ce32bc2ba9 | Extract update_weights from RL Engine to SGLang to keep simplicity and fix torch reduce (#8267)            | 2025-07-26 02:00:59 -07:00
    Co-authored-by: CuiBo <82354186+SuperCB@users.noreply.github.com>
    Co-authored-by: GeLee <865038696@qq.com>
    Co-authored-by: 杨睿 <yangruipis@163.com>
Cheng Wan      | e236d8fee8 | Save peak memory in logits processor (#8343)                                                               | 2025-07-26 01:46:42 -07:00