Commit Graph

1806 Commits

Author SHA1 Message Date
XinyuanTong
d09a51f1f6 [feat&refactor] Enhance multimodal input support with refactor io_struct (#4938)
Signed-off-by: Xinyuan Tong <justinning0323@outlook.com>
2025-04-08 14:48:07 -07:00

Byron Hsu
6d3b35fae9 [PD] Simplify mini LB (#4911)
Co-authored-by: Liangsheng Yin <hnyls2002@gmail.com>
2025-04-08 09:42:34 -07:00

shangmingc
89a554181f [PD] Fix unclosed prefill connection warning of mini_lb (#5155)
Signed-off-by: Shangming Cai <caishangming@linux.alibaba.com>
2025-04-08 09:15:06 -07:00

Yun Dai
2695ab0537 Fix loading KV quantization scale; Enable modelopt kv cache (#4686)
Co-authored-by: qingquansong <ustcsqq@gmail.com>
2025-04-08 09:11:35 -07:00

kk
88d6fd9a11 Fix torch compile errors (#5158)
2025-04-08 15:04:37 +00:00

DangKai
cc88d98ab8 fix empty_cache error in pt_weights_iterator (#5151)
Co-authored-by: dangkai.dk <dangkai.dk@alibaba-inc.com>
2025-04-08 01:22:10 -07:00

Yubo Wang
804d9f2e4c Add unit test on page_size > 1 and mla and integration test for Flash Attention 3 (#4760)
2025-04-07 23:20:51 -07:00

Chunan Zeng
a7c3f74bec [FA3 Feature] Support multi modal Llama-3.2-11B-Vision-Instruct (#5103)
2025-04-07 22:58:08 -07:00

kk
5a144a8ab9 Fix run time error in ROCm platform (#5147)
Co-authored-by: wunhuang <wunhuang@amd.com>
Co-authored-by: root <root@dell300x-pla-t10-17.pla.dcgpu>
2025-04-07 22:49:40 -07:00

huangtingwei
27f8e6b9c1 fix multimodal hash feature (#5083)
2025-04-07 22:43:23 -07:00

Hubert Lu
afb752bcbe [AMD] Fix missing per_token_group_quant_fp8 for ROCm (#5140)
2025-04-07 22:38:25 -07:00

Yun Dai
9731eca77b [modelopt] automatically inspect if model is ModelOpt quantized and set quantization method (#5145)
2025-04-07 22:12:11 -07:00

mlmz
7c5658c189 feat: disable grammar restrictions within reasoning sections (#4984)
Co-authored-by: tianhaoyu <thy@mail.ecust.edu.cn>
Co-authored-by: DarkSharpness <2040703891@qq.com>
2025-04-07 21:46:47 -07:00

Stefan He
93470a1411 Refactor and Optimize FA3 Code (#5090)
Co-authored-by: Qingquan Song <ustcsqq@gmail.com>
2025-04-07 11:52:42 -07:00

Xiaoyu Zhang
db452760e5 [ci] fix llama4 ci error (#5126)
2025-04-07 21:15:46 +08:00

Yineng Zhang
57f99608f4 bump v0.4.5 (#5117)
2025-04-07 00:35:00 -07:00

HAI
819924748a Fix refactor error - fp8.py (#5106)
Co-authored-by: Lianmin Zheng <lianminzheng@gmail.com>
2025-04-07 00:34:08 -07:00

Chang Su
f04c80dc42 Add Llama4 support (#5092)
Co-authored-by: Cheng Wan <cwan39@gatech.edu>
Co-authored-by: fzyzcjy <ch271828n@outlook.com>
Co-authored-by: ispobock <ispobaoke@163.com>
2025-04-07 00:29:36 -07:00

Yineng Zhang
35e0856b90 bump v0.4.4.post4 (#5091)
2025-04-05 15:36:17 -07:00

Yi Zhang
aba5ca154d python transfer custom allreduce from trt kernel to vllm kernel (#5080)
2025-04-05 15:35:55 -07:00

Yineng Zhang
0d99adb715 upgrade transformers 4.51.0 (#5088)
2025-04-05 14:20:23 -07:00

Baizhou Zhang
efbae697b3 [Revision] Replace enable_flashinfer_mla argument with attention_backend (#5052)
2025-04-05 01:23:02 -07:00

Stefan He
ca8d02abd5 FA3 Spec Decoding to support top k = 1 and add cuda graph support (#5050)
Co-authored-by: Qingquan Song <ustcsqq@gmail.com>
Co-authored-by: Chunan Zeng <zcnrex@gmail.com>
2025-04-04 23:03:59 -07:00

inkcherry
7ed77d6b9e fix dummy-load deepseekv2 (#4535)
2025-04-04 15:22:37 -07:00

Cheng Wan
4c54f44202 [deepep] fix: shared experts are not initialized when shared experts fusion is enabled (#5072)
2025-04-04 15:08:30 -07:00

Xiaoyu Zhang
924ca7c92c Add DeepSeek V3/R1 shared experts fusion (#4918)
2025-04-04 01:59:29 -07:00

fzyzcjy
6ff9c6a5e7 Cleanup unused resources after DeepEP operation (#4996)
2025-04-04 00:36:04 -07:00

fzyzcjy
77e929a1a2 Support async DeepEP by splitting into two stages (#4995)
2025-04-04 00:32:27 -07:00

fzyzcjy
febe21ce03 Small refactor DeepEPDispatcher into subclasses (#4994)
2025-04-04 00:24:18 -07:00

JieXin Liang
a995a773a0 [fix] remove cuda_device_count_stateless (#5060)
2025-04-04 00:18:26 -07:00

Tommy Yang
31035dda44 Add H20 fused MoE kernel tuning configs for DeepSeek V3/R1 (#5057)
2025-04-03 22:14:28 -07:00

AniZpZ
d95269f9b3 [2/3] fix dsv3 awq issue (#4625)
Co-authored-by: 晟海 <huangtingwei.htw@antgroup.com>
Co-authored-by: laixinn <xielx@shanghaitech.edu.cn>
2025-04-03 17:36:39 -07:00

Yineng Zhang
e53bf190bc upgrade sgl-kernel v0.0.7 (#5049)
2025-04-03 17:07:54 -07:00

Yineng Zhang
3289c1207d Update the retry count (#5051)
2025-04-03 17:07:38 -07:00

Ravi Theja
69df9761dd Add LlavaLlamaForCausaLM in MultiModal Processors (#5039)
Co-authored-by: Ravi Theja Desetty <ravitheja@Ravis-MacBook-Pro.local>
2025-04-03 15:41:12 -07:00

Lianmin Zheng
74885a848b Revert "Replace enable_flashinfer_mla argument with attention_backend" (#5048)
2025-04-03 13:30:56 -07:00

fzyzcjy
8e10fec9a8 Small refactor DeepEPMode to clean up code a bit (#4992)
2025-04-03 02:56:44 -07:00

Baizhou Zhang
e8999b13b7 Replace enable_flashinfer_mla argument with attention_backend (#5005)
2025-04-03 02:53:58 -07:00

saltyfish66
e41549c3d6 fix: fix illegal cuda memory access at fused_moe_kernel (#4727)
Co-authored-by: yuethe <yuethe@tencent.com>
2025-04-03 00:07:32 -07:00

Kaiyu Yang
31da75abed Update tokenizer_manager.py (#5008)
2025-04-02 13:56:19 -07:00

Qingquan Song
e983e43248 Add Eagle Speculative Decoding to FA3 Backend (#4951)
Co-authored-by: hebiao064 <hebiaobuaa@gmail.com>
Co-authored-by: Baizhou Zhang <sobereddiezhang@gmail.com>
Co-authored-by: zcnrex <zcnrex@gmail.com>
2025-04-02 13:09:02 -07:00

Xiaoyu Zhang
e9c6ce461d sgl scaled_fp8_quant support output padding (#4861)
2025-04-02 23:53:57 +08:00

Zhiqiang Xie
3fadc64793 bug fix for hicache host eviction (#4989)
2025-04-02 00:33:50 -07:00

Zhiqiang Xie
e119f04215 Large page size aligned hierarchical caching (#4581)
2025-04-01 22:38:15 -07:00

XinyuanTong
9eb49e878b [VLM RLHF] Take Image input for verl vlm rollout (#4915)
Signed-off-by: Xinyuan Tong <justinning0323@outlook.com>
Co-authored-by: GeLee <leege233@gmail.com>
2025-04-01 20:03:17 -07:00

Zhiqiang Xie
12047f5e94 Prevent memory leak of retract_decode when page_size > 1 (#4977)
2025-04-01 15:30:45 -07:00

Yineng Zhang
fda6bb78da update bench_serving (#4958)
2025-04-01 15:10:56 -07:00

Jinyan Chen
23c764b18a [Feature] Support DeepEP Low Latency (#4767)
Co-authored-by: sleepcoo <sleepcoo@gmail.com>
Co-authored-by: laixinn <xielx@shanghaitech.edu.cn>
Co-authored-by: ch-wan <cwan39@gatech.edu>
2025-04-01 09:23:25 -07:00

Yuhong Guo
87fafa0105 Revert PR 4764 & 4813 related to R1 RoPE (#4959)
2025-03-31 20:56:58 -07:00

Yineng Zhang
1c63e79756 use fa3 in sgl-kernel (#4954)
2025-03-31 16:14:49 -07:00