Commit Graph

1832 Commits

Author SHA1 Message Date
Chang Su
aee62d744b Optimize GPU memory usage in FlashAttentionBackend's strided indexing (#5262) 2025-04-11 00:34:17 -07:00
Co-authored-by: ch-wan <cwan39@gatech.edu>
fzyzcjy
cd7e32e2cb Optimize attention in llama4 (#5127) 2025-04-11 00:32:41 -07:00
HAI
8879944800 ROCm/AITER CK_MoE: update 2-stage kernels & support both Activations (#5228) 2025-04-10 18:19:57 -07:00
Richard Zou
a879811c4b Fix torch.compile cacheing (#5259) 2025-04-10 18:08:45 -07:00
Co-authored-by: zhyncs <me@zhyncs.com>
Ke Bao
1078396f47 Update deps for mllama4 (#5215) 2025-04-10 09:12:44 -07:00
Teng Ma
7e4f72dd8c [PD] Add get_contiguous_buf_infos interface for MLATokenToKVPool (#5204) 2025-04-10 20:05:34 +08:00
Teng Ma
4c31ae9f6d [PD] Support KV transfer with mooncake (#4880) 2025-04-10 14:23:23 +08:00
Signed-off-by: Shangming Cai <caishangming@linux.alibaba.com>
Co-authored-by: Shangming Cai <caishangming@linux.alibaba.com>
Co-authored-by: Xuchun Shang <xuchun.shang@linux.alibaba.com>
Co-authored-by: shangmingc <csmthu@gmail.com>
Xiaoyu Zhang
f730362ee2 reduce moe_align_block_size_kernel small batch mode overhead (#5086) 2025-04-09 17:59:35 -07:00
fzyzcjy
e3c4bd3153 Fix DeepSeek error when using DeepEP mode (#5190) 2025-04-09 17:43:22 -07:00
Stefan He
5db37c8626 [metrics] Add in queue metrics (#4444) 2025-04-09 17:19:27 -07:00
Yineng Zhang
4cb53ecd0c fix: log warning when disable cuda graph (#5209) 2025-04-09 14:16:13 -07:00
Zhaoyang Hao
456b008bd8 Add H20 dtype fp8_w8a8 fused MoE kernel tuning configs for DeepSeek V3/R1 (#5196) 2025-04-09 11:54:36 -07:00
saienduri
7f875f1293 update grok test (#5171) 2025-04-09 11:09:47 -07:00
Mick
fbebcb7aa4 model: support mllama4 (#5144) 2025-04-09 09:28:44 -07:00
HandH1998
4065248214 Support Llama4 fp8 inference (#5194) 2025-04-09 20:14:34 +08:00
Co-authored-by: laixinn <xielx@shanghaitech.edu.cn>
Co-authored-by: sleepcoo <sleepcoo@gmail.com>
Co-authored-by: zhyncs <me@zhyncs.com>
fzyzcjy
86a876d883 Optimize topk operation in llama4 (#5128) 2025-04-09 02:50:22 -07:00
kk
92823069c4 Fix ci test "test_eval_fp8_accuracy" failed (#5185) 2025-04-09 02:44:05 -07:00
Co-authored-by: wunhuang <wunhuang@amd.com>
fzyzcjy
61970b08d8 Let bench_one_batch support enable_dp_attention (#4058) 2025-04-08 23:44:25 -07:00
Cheng Wan
76c48a0913 [DeepEP] fix: import buffer error (#5179) 2025-04-08 22:12:14 -07:00
Yineng Zhang
90caf06c00 fix: use DeepEPDispatcher on CUDA (#5180) 2025-04-08 21:56:53 -07:00
Yineng Zhang
6669d12707 feat: add DeepGEMM build warning (#5176) 2025-04-08 21:16:23 -07:00
Co-authored-by: grimoire <streetyao@live.com>
Jinyan Chen
bc3f6db2dd [Fix] DeepEP Compatibility with Low Latency (#5068) 2025-04-08 20:31:31 -07:00
Co-authored-by: ch-wan <cwan39@gatech.edu>
Chang Su
aac531c53b [Bugfix] Fix index out of bounds in local attention with large sequences (#5173) 2025-04-08 18:43:13 -07:00
fzyzcjy
466899e69c Fix multimodal hashing error (#5174) 2025-04-08 18:42:26 -07:00
Trevor Morris
11d760d56a FP4 weight loading and inference (2/2) (#3972) 2025-04-08 17:26:21 -07:00
fzyzcjy
5039d54772 Support 2x8xH100 for Llama 4 (#5159) 2025-04-08 14:55:14 -07:00
XinyuanTong
d09a51f1f6 [feat&refactor] Enhance multimodal input support with refactor io_struct (#4938) 2025-04-08 14:48:07 -07:00
Signed-off-by: Xinyuan Tong <justinning0323@outlook.com>
Byron Hsu
6d3b35fae9 [PD] Simplify mini LB (#4911) 2025-04-08 09:42:34 -07:00
Co-authored-by: Liangsheng Yin <hnyls2002@gmail.com>
shangmingc
89a554181f [PD] Fix unclosed prefill connection warning of mini_lb (#5155) 2025-04-08 09:15:06 -07:00
Signed-off-by: Shangming Cai <caishangming@linux.alibaba.com>
Yun Dai
2695ab0537 Fix loading KV quantization scale; Enable modelopt kv cache (#4686) 2025-04-08 09:11:35 -07:00
Co-authored-by: qingquansong <ustcsqq@gmail.com>
kk
88d6fd9a11 Fix torch compile errors (#5158) 2025-04-08 15:04:37 +00:00
DangKai
cc88d98ab8 fix empty_cache error in pt_weights_iterator (#5151) 2025-04-08 01:22:10 -07:00
Co-authored-by: dangkai.dk <dangkai.dk@alibaba-inc.com>
Yubo Wang
804d9f2e4c Add unit test on page_size > 1 and mla and integration test for Flash Attention 3 (#4760) 2025-04-07 23:20:51 -07:00
Chunan Zeng
a7c3f74bec [FA3 Feature] Support multi modal Llama-3.2-11B-Vision-Instruct (#5103) 2025-04-07 22:58:08 -07:00
kk
5a144a8ab9 Fix run time error in ROCm platform (#5147) 2025-04-07 22:49:40 -07:00
Co-authored-by: wunhuang <wunhuang@amd.com>
Co-authored-by: root <root@dell300x-pla-t10-17.pla.dcgpu>
huangtingwei
27f8e6b9c1 fix multimodal hash feature (#5083) 2025-04-07 22:43:23 -07:00
Hubert Lu
afb752bcbe [AMD] Fix missing per_token_group_quant_fp8 for ROCm (#5140) 2025-04-07 22:38:25 -07:00
Yun Dai
9731eca77b [modelopt] automatically inspect if model is ModelOpt quantized and set quantization method (#5145) 2025-04-07 22:12:11 -07:00
mlmz
7c5658c189 feat: disable grammar restrictions within reasoning sections (#4984) 2025-04-07 21:46:47 -07:00
Co-authored-by: tianhaoyu <thy@mail.ecust.edu.cn>
Co-authored-by: DarkSharpness <2040703891@qq.com>
Stefan He
93470a1411 Refactor and Optimize FA3 Code (#5090) 2025-04-07 11:52:42 -07:00
Co-authored-by: Qingquan Song <ustcsqq@gmail.com>
Xiaoyu Zhang
db452760e5 [ci] fix llama4 ci error (#5126) 2025-04-07 21:15:46 +08:00
Yineng Zhang
57f99608f4 bump v0.4.5 (#5117) 2025-04-07 00:35:00 -07:00
HAI
819924748a Fix refactor error - fp8.py (#5106) 2025-04-07 00:34:08 -07:00
Co-authored-by: Lianmin Zheng <lianminzheng@gmail.com>
Chang Su
f04c80dc42 Add Llama4 support (#5092) 2025-04-07 00:29:36 -07:00
Co-authored-by: Cheng Wan <cwan39@gatech.edu>
Co-authored-by: fzyzcjy <ch271828n@outlook.com>
Co-authored-by: ispobock <ispobaoke@163.com>
Yineng Zhang
35e0856b90 bump v0.4.4.post4 (#5091) 2025-04-05 15:36:17 -07:00
Yi Zhang
aba5ca154d python transfer custom allreduce from trt kernel to vllm kernel (#5080) 2025-04-05 15:35:55 -07:00
Yineng Zhang
0d99adb715 upgrade transformers 4.51.0 (#5088) 2025-04-05 14:20:23 -07:00
Baizhou Zhang
efbae697b3 [Revision] Replace enable_flashinfer_mla argument with attention_backend (#5052) 2025-04-05 01:23:02 -07:00
Stefan He
ca8d02abd5 FA3 Spec Decoding to support top k = 1 and add cuda graph support (#5050) 2025-04-04 23:03:59 -07:00
Co-authored-by: Qingquan Song <ustcsqq@gmail.com>
Co-authored-by: Chunan Zeng <zcnrex@gmail.com>
inkcherry
7ed77d6b9e fix dummy-load deepseekv2 (#4535) 2025-04-04 15:22:37 -07:00