| Author | Commit | Message | Date |
| --- | --- | --- | --- |
| HAI | d364b9b0f2 | ROCm: update AITER (#5816) | 2025-04-28 11:01:20 -07:00 |
| Lianmin Zheng | 849c83a0c0 | [CI] test chunked prefill more (#5798) | 2025-04-28 10:57:17 -07:00 |
| JiLi | d73ddeb196 | feat: Add fused moe triton config for qwen3-30b-fp8 moe on h20 (#5850) | 2025-04-28 10:49:33 -07:00 |
| ybyang | 74cb12a878 | [config] qwen3moe_tune_h20 fp8 tp4 (#5846) | 2025-04-28 10:21:06 -07:00 |
| ybyang | c6c6264073 | [PD] support pd fake transfer for warmup (#5726) | 2025-04-29 00:33:20 +08:00 |
| yhyang201 | 92ab0a2055 | feat: Add fused moe triton config for qwen3bf16 moe on h20 (#5839) | 2025-04-28 09:30:59 -07:00 |
| XinyuanTong | 0045f4b2af | feat: Add fused moe triton config for qwen3 moe on h100 (#5833) | 2025-04-28 08:37:13 -07:00 |
| mlmz | 8601300beb | fix: fix the error where the content is None when reasoning and tool … (#5838) | 2025-04-28 08:36:08 -07:00 |
| mlmz | 6fa6f38ed3 | Feat: add support for thinking mode via chat_template_kwargs.enable_t… (#5551) (Co-authored-by: shuaills <shishuaiuoe@gmail.com>, Chayenne <zhaochen20@outlook.com>, Lianmin Zheng <lianminzheng@gmail.com>, Yineng Zhang <me@zhyncs.com>) | 2025-04-28 07:07:45 -07:00 |
| Lianmin Zheng | 693723d1f7 | Revert "Tiny refactor DefaultModelLoader.Source" (#5825) | 2025-04-28 01:18:57 -07:00 |
| fzyzcjy | 644ed409d1 | Tiny refactor DefaultModelLoader.Source (#5482) | 2025-04-28 00:35:51 -07:00 |
| Lianmin Zheng | 3029889cb4 | Turn on overlap scheduler for multimodal models (#5771) | 2025-04-27 23:45:09 -07:00 |
| Yineng Zhang | 41ac0c6d48 | chore: upgrade sgl-kernel 0.1.0 (#5690) | 2025-04-27 21:00:50 -07:00 |
| Trevor Morris | 84810da4ae | Add Cutlass MLA attention backend (#5390) | 2025-04-27 20:58:53 -07:00 |
| Liangsheng Yin | 40d9b8acce | Improve overlap scheduling (#5788) | 2025-04-28 11:19:16 +08:00 |
| Lianmin Zheng | daed453e84 | [CI] Improve github summary & enable fa3 for more models (#5796) | 2025-04-27 15:29:46 -07:00 |
| Baizhou Zhang | 84022c0e56 | Release v0.4.6 (#5795) | 2025-04-27 14:07:05 -07:00 |
| Lianmin Zheng | a38f6932cc | [CI] Fix test case (#5790) | 2025-04-27 08:55:35 -07:00 |
| Liangsheng Yin | beb65c7433 | [PD]Reduce kv transfer threads (#5791) | 2025-04-27 23:03:30 +08:00 |
| Lianmin Zheng | 621e96bf9b | [CI] Fix ci tests (#5769) | 2025-04-27 07:18:10 -07:00 |
| Lianmin Zheng | 35ca04d2fa | [CI] fix port conflicts (#5789) | 2025-04-27 05:17:44 -07:00 |
| Lianmin Zheng | 9c088829ee | Revert "Use device_id in dist init to reduce NCCL communicator warmup & creation overhead" (#5786) | 2025-04-27 04:03:02 -07:00 |
| Lianmin Zheng | 005aad32ad | Revert "[fix] fix bench_one_batch_server" (#5785) | 2025-04-27 03:48:33 -07:00 |
| Lianmin Zheng | 6e313c1b8b | Revert "Revert "fix: import vllm_rotary_embedding error when head_size not in 64, 128, 256, 512"" (#5777) | 2025-04-27 01:04:15 -07:00 |
| Lianmin Zheng | 8ba313304d | Revert "fix: import vllm_rotary_embedding error when head_size not in 64, 128, 256, 512" (#5772) | 2025-04-26 23:26:08 -07:00 |
| zhanweidu | 021020632a | add switch to disable open api doc (#3744) (Signed-off-by: congcongke <zhanweidu@163.com>) | 2025-04-26 23:18:47 -07:00 |
| Kebe | 7e944246c3 | Add memory_saver check (#4986) (Signed-off-by: Kebe <mail@kebe7jun.com>) | 2025-04-26 20:20:05 -07:00 |
| lambert0312 | a086a11305 | Use sgl-kernel sgl_per_token_group_quant_int8 (#4971) | 2025-04-26 20:19:42 -07:00 |
| Michał Moskal | bdbe5f816b | update llguidance to 0.7.11; adds StructTag (#4870) | 2025-04-26 20:13:57 -07:00 |
| aoshen524 | 9ad28f639e | fix(srt): check if sample_indices is not None before usage. (#5633) | 2025-04-26 19:51:01 -07:00 |
| yan97ao | d7b1ce65a5 | Handle JSONDecodeError while processing request data (#5599) | 2025-04-26 19:50:50 -07:00 |
| JieXin Liang | f55933e1cc | [misc] more decode step log for batch_one_batch (#5565) | 2025-04-26 19:50:28 -07:00 |
| Stefan He | 408ba02218 | Add Llama 4 to FA3 test (#5509) | 2025-04-26 19:49:31 -07:00 |
| vzed | 094891c01a | fix: Use is not None instead of != None for None checks. (#5687) | 2025-04-26 19:26:57 -07:00 |
| Frankey_8080 | a21ef36352 | support for the DeepSeek model by enabling streaming response parsing (#5592) | 2025-04-26 18:59:31 -07:00 |
| JieXin Liang | 3c4dc38a9a | [fix] fix bench_one_batch_server (#5607) | 2025-04-26 18:49:45 -07:00 |
| DavidBao | d8fbc7c096 | [feature] support for roberta embedding models (#5730) | 2025-04-26 18:47:06 -07:00 |
| Ke Bao | 799c4bb502 | Fuse MLA set kv cache kernel (#5748) | 2025-04-26 18:42:22 -07:00 |
| vzed | df2cf583ce | we fix the non existent access of decrypted_config_file (#5685) | 2025-04-26 18:32:37 -07:00 |
| saltyfish66 | 133ded039a | perf: update H20 fused_moe_triton kernel config to get higher throughput during prefilling (#5716) | 2025-04-26 18:15:07 -07:00 |
| Yuhong Guo | f87a6ab359 | Resolves the 404 Not Found error when running compile_deep_gemm.py in multi-node setups (#5720) | 2025-04-26 18:13:13 -07:00 |
| JieXin Liang | eebfdb9459 | [fix] fix potential bumpy throughtput with deepgemm (#5722) | 2025-04-26 18:12:48 -07:00 |
| Wenxuan Tan | dfb322642f | Use device_id in dist init to reduce NCCL communicator warmup & creation overhead (#5728) | 2025-04-26 18:11:09 -07:00 |
| Kyungmin Lee | 63c13a2c73 | fix: import vllm_rotary_embedding error when head_size not in 64, 128, 256, 512 (#5733) | 2025-04-26 18:10:23 -07:00 |
| liwenju0 | 4d1e52abea | Add an assertion to enhance the robustness of the operator (#5736) | 2025-04-26 18:09:12 -07:00 |
| Yi Zhang | 1f963d7f64 | Bugfix for minicpmo vision test (#5760) | 2025-04-26 23:18:02 +08:00 |
| ZXN | 04d0123fd9 | [Fix]: support deepseek-vl2-tiny model (#5552) (Co-authored-by: bppps <zouyu.zzx@alibaba-inc.com>) | 2025-04-26 17:52:53 +08:00 |
| Mick | feda9b11b3 | fix: fix one more bug from merging mm_inputs (#5718) (Co-authored-by: Xinyuan Tong <justinning0323@outlook.com>, XinyuanTong <115166877+JustinTong0323@users.noreply.github.com>) | 2025-04-25 17:28:33 -07:00 |
| Ke Bao | c3948ba67e | Reorder loop in shared expert weight loading (#5719) | 2025-04-25 17:27:42 -07:00 |
| Xiaoyu Zhang | 18ce468d56 | update triton 3.2.0 h200 fused moe triton config and add warning about triton fused_moe_kernel performance degradation due to different Triton versions. (#5740) | 2025-04-25 16:24:59 -07:00 |