Lianmin Zheng | 8ba313304d | Revert "fix: import vllm_rotary_embedding error when head_size not in 64, 128, 256, 512" (#5772) | 2025-04-26 23:26:08 -07:00
zhanweidu | 021020632a | Add switch to disable OpenAPI doc (#3744) | 2025-04-26 23:18:47 -07:00
    Signed-off-by: congcongke <zhanweidu@163.com>
Kebe | 7e944246c3 | Add memory_saver check (#4986) | 2025-04-26 20:20:05 -07:00
    Signed-off-by: Kebe <mail@kebe7jun.com>
lambert0312 | a086a11305 | Use sgl-kernel sgl_per_token_group_quant_int8 (#4971) | 2025-04-26 20:19:42 -07:00
Michał Moskal | bdbe5f816b | Update llguidance to 0.7.11; add StructTag (#4870) | 2025-04-26 20:13:57 -07:00
aoshen524 | 9ad28f639e | fix(srt): check if sample_indices is not None before usage (#5633) | 2025-04-26 19:51:01 -07:00
yan97ao | d7b1ce65a5 | Handle JSONDecodeError while processing request data (#5599) | 2025-04-26 19:50:50 -07:00
JieXin Liang | f55933e1cc | [misc] more decode step log for bench_one_batch (#5565) | 2025-04-26 19:50:28 -07:00
Stefan He | 408ba02218 | Add Llama 4 to FA3 test (#5509) | 2025-04-26 19:49:31 -07:00
vzed | 094891c01a | fix: Use is not None instead of != None for None checks (#5687) | 2025-04-26 19:26:57 -07:00
Frankey_8080 | a21ef36352 | Support the DeepSeek model by enabling streaming response parsing (#5592) | 2025-04-26 18:59:31 -07:00
JieXin Liang | 3c4dc38a9a | [fix] fix bench_one_batch_server (#5607) | 2025-04-26 18:49:45 -07:00
DavidBao | d8fbc7c096 | [feature] support for roberta embedding models (#5730) | 2025-04-26 18:47:06 -07:00
Ke Bao | 799c4bb502 | Fuse MLA set kv cache kernel (#5748) | 2025-04-26 18:42:22 -07:00
vzed | df2cf583ce | Fix non-existent access of decrypted_config_file (#5685) | 2025-04-26 18:32:37 -07:00
saltyfish66 | 133ded039a | perf: update H20 fused_moe_triton kernel config for higher prefill throughput (#5716) | 2025-04-26 18:15:07 -07:00
Yuhong Guo | f87a6ab359 | Resolve the 404 Not Found error when running compile_deep_gemm.py in multi-node setups (#5720) | 2025-04-26 18:13:13 -07:00
JieXin Liang | eebfdb9459 | [fix] fix potential bumpy throughput with deepgemm (#5722) | 2025-04-26 18:12:48 -07:00
Wenxuan Tan | dfb322642f | Use device_id in dist init to reduce NCCL communicator warmup & creation overhead (#5728) | 2025-04-26 18:11:09 -07:00
Kyungmin Lee | 63c13a2c73 | fix: import vllm_rotary_embedding error when head_size not in 64, 128, 256, 512 (#5733) | 2025-04-26 18:10:23 -07:00
liwenju0 | 4d1e52abea | Add an assertion to enhance the robustness of the operator (#5736) | 2025-04-26 18:09:12 -07:00
Yi Zhang | 1f963d7f64 | Bugfix for minicpmo vision test (#5760) | 2025-04-26 23:18:02 +08:00
ZXN | 04d0123fd9 | [Fix]: support deepseek-vl2-tiny model (#5552) | 2025-04-26 17:52:53 +08:00
    Co-authored-by: bppps <zouyu.zzx@alibaba-inc.com>
Mick | feda9b11b3 | fix: fix one more bug from merging mm_inputs (#5718) | 2025-04-25 17:28:33 -07:00
    Co-authored-by: Xinyuan Tong <justinning0323@outlook.com>
    Co-authored-by: XinyuanTong <115166877+JustinTong0323@users.noreply.github.com>
Ke Bao | c3948ba67e | Reorder loop in shared expert weight loading (#5719) | 2025-04-25 17:27:42 -07:00
Xiaoyu Zhang | 18ce468d56 | Update Triton 3.2.0 H200 fused MoE config and add a warning about fused_moe_kernel performance degradation across Triton versions (#5740) | 2025-04-25 16:24:59 -07:00
Lianmin Zheng | 21514ff5bd | Disable flaky eagle tests (#5753) | 2025-04-25 15:54:39 -07:00
Lianmin Zheng | 5641a09458 | Revert "[Model] Support ArcticForCausalLM architecture (Snowflake/snowflake-arctic-instruct)" (#5754) | 2025-04-25 15:50:28 -07:00
michael-amd | 93c6fb12c7 | Fix: deepseek forward absorb (#5723) | 2025-04-25 13:48:55 -07:00
    Co-authored-by: ispobock <ispobaoke@163.com>
IAN | 11e27d0926 | [PD]: Support Multi Prefill in one node (#5704) | 2025-04-26 00:30:47 +08:00
    Co-authored-by: shuaills <shishuaiuoe@gmail.com>
|
shangmingc
|
50eda8398e
|
[PD] Add kvargs table and thread pool for kvcache sender of mooncake (#5738)
Signed-off-by: Shangming Cai <caishangming@linux.alibaba.com>
|
2025-04-25 18:15:01 +08:00 |
|
Liangsheng Yin
|
c55550cbf0
|
[PD] Better logs (#5715)
|
2025-04-25 17:25:45 +08:00 |
|
Brayden Zhong
|
43fb95c2fa
|
[Model] Support ArcticForCausalLM architecture (Snowflake/snowflake-arctic-instruct) (#5078)
Co-authored-by: vincent-4 <vincentzhongy+githubvincent4@gmail.com>
|
2025-04-25 15:24:09 +08:00 |
|
Baizhou Zhang
|
a14654dd68
|
Fix weight loading bug for Deepseek v3+nextn (#5684)
|
2025-04-24 21:29:56 +08:00 |
|
Yuhong Guo
|
5d93a950ee
|
[BugFix] Fix combination of MTP and --n-share-experts-fusionwith R1 (#5707)
|
2025-04-24 21:13:51 +08:00 |
|
Mick
|
c998d04b46
|
vlm: enable radix cache for qwen-vl models (#5349)
Co-authored-by: Xinyuan Tong <justinning0323@outlook.com>
|
2025-04-23 20:35:05 -07:00 |
|
Yineng Zhang
|
b1f6d89b5f
|
fix: update truss bench_serving (#5683)
|
2025-04-23 13:28:35 -07:00 |
|
shangmingc
|
e0673969b9
|
[PD] Add support for dp attention with mooncake (#5530)
Signed-off-by: Shangming Cai <caishangming@linux.alibaba.com>
|
2025-04-23 17:20:27 +08:00 |
|
Cheng Wan
|
711efe7814
|
Integrating PD disaggregation with DP attention and DeepEP (#5435)
Co-authored-by: Byron Hsu <byronhsu1230@gmail.com>
|
2025-04-23 01:46:01 -07:00 |
|
Yineng Zhang
|
fbb5f229d4
|
fix awq_dequantize import (#5669)
|
2025-04-23 01:36:26 -07:00 |
|
fzyzcjy
|
71d1785f2d
|
Remove unnecessary torch.full in DeepSeek (#5601)
|
2025-04-22 21:24:29 -07:00 |
|
Baizhou Zhang
|
3f87f83116
|
Fuse q_a_proj and kv_a_proj (#5619)
|
2025-04-22 20:35:08 -07:00 |
|
Baizhou Zhang
|
ce5412b62e
|
Turn on DeepGemm By Default and Update Doc (#5628)
|
2025-04-22 16:10:08 -07:00 |
|
Yineng Zhang
|
7282ab741a
|
fix: update bench_speculative (#5649)
|
2025-04-22 16:08:15 -07:00 |
|
HAI
|
b0feda090c
|
Revert "Support aiter RMSNorm in AMD" (#5646)
|
2025-04-22 15:20:24 -07:00 |
|
Ke Bao
|
6b6e748775
|
Remove q concat in FA3 backend for DeepSeek decode (#5638)
|
2025-04-22 11:43:12 -07:00 |
|
JieXin Liang
|
917324862e
|
[fix] reduce dp capture bs (#5634)
Co-authored-by: alcanerian <alcanerian@gmail.com>
|
2025-04-22 11:08:45 -07:00 |
|
lukec
|
2ed96c7a8a
|
fix flashmla bug (#5272)
|
2025-04-22 10:36:23 -07:00 |
|
saltyfish66
|
2aa3f5e2d0
|
[feature] Add H20 fp8_w8a8 FusedMoE config for --n-share-experts-fusion=16 (#5641)
Co-authored-by: yuethe <yuethe@tencent.com>
|
2025-04-22 09:33:13 -07:00 |
|
lambert0312
|
76d17c7ecb
|
Fix shared experts fusion error without quantization (#5632)
|
2025-04-22 09:22:26 -07:00 |
|