9bd4872a34 | Trevor Morris | 2025-08-04 11:08:08 -07:00 | [bugfix] Fix typo in modelopt quant: 'FusedMoE' object has no attribute 'local_num_experts' (#8768)
915140fd18 | azhurkevich | 2025-08-04 03:10:02 -07:00 | [NVIDIA] Add Low Latency NVFP4 decode kernels from Flashinfer (#8552)
    Co-authored-by: Cheng Wan <cwan@x.ai>
36fc9260a2 | Baron Liu | 2025-08-03 22:19:15 -07:00 | [bugfix] fix import path in HiCacheController (#8749)
fee0ab0fba | Even Zhou | 2025-08-03 22:16:38 -07:00 | [CI] Ascend NPU CI enhancement (#8294)
    Co-authored-by: ronnie_zheng <zl19940307@163.com>
f2d68ded6d | Baizhou Zhang | 2025-08-03 21:08:28 -07:00 | Rename lora_path to lora_id in batches (#8437)
3b87a9e8ae | Yuan Luo | 2025-08-03 20:05:02 -07:00 | Fix bug of refactoring TopKOutput in w4afp8 (#8745)
    Co-authored-by: luoyuan.luo <luoyuan.luo@antgroup.com>
f024795e57 | YyWangCS | 2025-08-03 19:02:51 -07:00 | Replace torch.jit.script with torch.compile in get_masked_input_and_mask to fix benchmark underreporting (#8733)
b102353f8f | Cheng Wan | 2025-08-03 17:03:04 -07:00 | [MoE] Enable renormalize=False in Triton kernels (#8735)
76ba5bbe12 | huangtingwei | 2025-08-03 13:47:29 -07:00 | fix args typo in memory_pool_host (#8662)
ed6f7597b3 | Yingchun Lai | 2025-08-03 12:29:42 -07:00 | Fix the missing 'lof' choice of --schedule-policy server args (#7114)
e67276ecb3 | tql.99 | 2025-08-03 10:47:15 -07:00 | feat: support cutlass_moe_fp8 kernel for fusedmoe in sm90 (#8678)
0242bb9c74 | Ke Bao | 2025-08-03 10:45:15 -07:00 | Fix triton kernels topk with keyword arguments (#8732)
760286e3d3 | Yuxuan Zhang | 2025-08-03 10:43:40 -07:00 | use fp32 for e_score_correction_bias in GLM-4.5 (#8729)
3435a24e81 | Zilin Zhu | 2025-08-03 10:20:39 -07:00 | [RL] fix update weight for FusedMoE with EP (#8676)
00da906584 | yhyang201 | 2025-08-03 19:35:26 +08:00 | feat: Support DP Attention for step3_vl (#8699)
8cd344586e | Yineng Zhang | 2025-08-03 03:43:29 -07:00 | chore: bump v0.4.10.post2 (#8727)
0e0eef00ce | Cheng Wan | 2025-08-03 03:06:57 -07:00 | [DP] fix the compatibility issue between DP attention and --attention-backend triton (#8723)
cb099d2095 | Cheng Wan | 2025-08-03 03:06:47 -07:00 | [CUDA Graph] save cuda graph memory by using next_token_logits_buffer (#8579)
7a91330149 | Cheng Wan | 2025-08-03 03:06:31 -07:00 | Save cuda graph memory for fa3 (#8567)
6f9baf1002 | ybyang | 2025-08-03 01:59:06 -07:00 | [Improvements] Merge health check route (#8444)
    Signed-off-by: ybyang <ybyang7@iflytek.com>
    Co-authored-by: Lianmin Zheng <lianminzheng@gmail.com>
    Co-authored-by: Kan Wu <wukanustc@gmail.com>
a31b7a7024 | Jasper James | 2025-08-03 00:57:35 -07:00 | feat: Add new moe triton for NVIDIA RTX 6000 Ada (#8547)
7ed8e51bc3 | Varun Vinayak Shenoy | 2025-08-03 00:55:55 -07:00 | [fix] Fix divide by zero error for llama4. (#8683)
32f2815451 | Trevor Morris | 2025-08-03 00:53:08 -07:00 | Do layernorm before allgather for DP attention (#8631)
f7b2853ff8 | Guanhua Wang | 2025-08-03 00:46:47 -07:00 | [feat] support minimum token load balance in dp attention (#7379)
b0add2da00 | Zhiqiang Xie | 2025-08-03 15:05:04 +08:00 | HiCache storage, style change and bug fix (#8719)
0305c5053f | Wenxuan Tan | 2025-08-03 15:03:16 +08:00 | Reduce memory accumulation in long-running server (#8306)
    Co-authored-by: Liangsheng Yin <hnyls2002@gmail.com>
8675bdf246 | Lifu Huang | 2025-08-03 00:02:23 -07:00 | Support limiting max loaded loras in CPU. (#8650)
a437aa9987 | Cheng Wan | 2025-08-02 22:59:25 -07:00 | [hotfix] fix mixtral with tensor-level compressed-tensor quantization (#8721)
0e612dbf12 | fzyzcjy | 2025-08-02 22:48:42 -07:00 | Tiny fix CI pytest error (#8524)
9f47d686e5 | Liangsheng Yin | 2025-08-03 12:42:01 +08:00 | Fix fused MoE when routed_scaling_factor is None (#8709)
e273aa6dcf | DarkSharpness | 2025-08-02 19:50:14 -07:00 | [Feature] Radix Tree in C++ (#7369)
8ada1ab6c7 | fzyzcjy | 2025-08-02 18:49:47 -07:00 | Fix triton moe error caused by TopK refactor (#8705)
e314b084c5 | Lianmin Zheng | 2025-08-02 18:43:14 -07:00 | [FIX] Fix the nightly CI by disabling swa mem pool for gemma2 (#8693)
403566bcca | fzyzcjy | 2025-08-02 17:08:40 -07:00 | Remove assertions about per group quant fp8 (#8717)
4ca43b061c | Stefan He | 2025-08-02 00:41:05 -07:00 | Add tensor.detach() back to update weight util (#8691)
ea93079b30 | Wenchen Lo | 2025-08-02 00:39:40 -07:00 | model: adapt mllama4 to VisionAttention (#8512)
    Co-authored-by: root <mickjagger19@icloud.com>
4bec99ecd0 | Yusong Gao | 2025-08-02 14:43:45 +08:00 | Fix: resolve prefill of retracted request out-of-memory issue when ignore_eos is enabled (#7434)
89caf7a3c6 | Trevor Morris | 2025-08-01 19:00:24 -07:00 | [bugfix] Apply routed scaling factor to cutlass_fused_experts_fp8 (#8688)
82e6c3a65a | Nicolas Castet | 2025-08-01 23:30:55 +00:00 | Add support for NCCL symmetric memory for TP allreduces (#8238)
b89d37cb11 | Baron Liu | 2025-08-01 16:02:53 -07:00 | [bugfix] Add 'disaggregation_mode' parameter to warmup function when compile deep_gemm manually (#8618)
5deab1283a | Swipe4057 | 2025-08-01 15:59:15 -07:00 | upgrade xgrammar 0.1.22 (#8522)
d1c4d51c08 | hzh0425 | 2025-08-01 15:58:17 -07:00 | bugfix(hicache): Fix 'MooncakeStore' not defined error. (#8668)
e252192679 | Ke Bao | 2025-08-01 15:37:59 -07:00 | Fix deepgemm masked grouped gemm jit compile (#8679)
6a7528e623 | Trevor Morris | 2025-08-01 14:28:04 -07:00 | [bugfix] Fix page size for create_flashmla_kv_indices_triton() for cutlass mla (#8685)
2ae95d17e8 | Minglei Zhu | 2025-08-01 12:02:35 -07:00 | Disable tp for shared experts under expert parallelism for GLM4.5 model (#8647)
    Co-authored-by: Stefan He <hebiaobuaa@gmail.com>
    Co-authored-by: Cheng Wan <54331508+ch-wan@users.noreply.github.com>
2d401bd99d | 萝卜菜 | 2025-08-02 02:16:29 +08:00 | [fix] fix pd disagg error of vlms (#8094)
6c88f6c8d9 | Cheng Wan | 2025-08-01 01:20:03 -07:00 | [5/N] MoE Refactor: Update MoE parallelism arguments (#8658)
c8d3a402c1 | Binyao Jiang | 2025-08-01 00:07:41 -07:00 | Bug: apply final_hidden_states*=self.routed_scaling_factor at MoE lay… (#8511)
    Co-authored-by: Cheng Wan <54331508+ch-wan@users.noreply.github.com>
7e831efee8 | Xinyuan Tong | 2025-07-31 21:49:45 -07:00 | Fix chat template handling for OpenAI serving (#8635)
    Signed-off-by: Xinyuan Tong <justinning0323@outlook.com>
    Signed-off-by: Xinyuan Tong <xinyuantong.cs@gmail.com>
20b5563eda | pansicheng | 2025-08-01 12:41:09 +08:00 | Add hf3fs_utils.cpp to package-data (#8653)