Commit Graph

1432 Commits

Author | SHA1 | Message | Date
aoshen524
e79f7420be [Fix] Fix bugs and refactor codes in lora for better scalability. (#3652) 2025-02-20 11:51:57 -08:00
Co-authored-by: ShenAo1111 <1377693092@qq.com>
Co-authored-by: zhaochenyang20 <zhaochen20@outlook.com>
chenxiaobing
d5d80ab477 [Bugfix] Fix scores mask for moe topk (#3705) 2025-02-21 02:17:23 +08:00
Ke Bao
ddcf9fe3be Optimize triton attention custom mask (#3731) 2025-02-21 00:54:41 +08:00
HAI
6252ade985 revert BLOCK and num_warps on HIP (#3722) 2025-02-20 23:30:18 +08:00
yizhang2077
1eb8eade2b add control for cutlass fp8 blockwise gemm (#3727) 2025-02-20 16:10:35 +08:00
Shi Shuai
55de40f782 [Docs]: Fix Multi-User Port Allocation Conflicts (#3601) 2025-02-19 11:15:44 -08:00
Co-authored-by: zhaochenyang20 <zhaochen20@outlook.com>
Co-authored-by: simveit <simp.veitner@gmail.com>
Cheng Wan
6b0aeb58fd [moe] optim: reduce memory consumption in fused_moe (#3692) 2025-02-20 02:25:05 +08:00
Mick
99c1b9d2ee fix: apply cache size limit of attention mask for VisionAttention (#3657) 2025-02-19 20:16:48 +08:00
who who who
634a3561ac AMD Prefill optimize (#3665) 2025-02-18 09:35:58 -08:00
Co-authored-by: AMD-dteng <dteng@amd.com>
Co-authored-by: HAI <hixiao@gmail.com>
Mick
424848d26f fix: remove dependency on latest transformers impl (#3635) 2025-02-19 01:14:11 +08:00
Ke Bao
e5ce395a6c Fix draft decode max batch size (#3676) 2025-02-18 23:03:26 +08:00
Yineng Zhang
f983213a1f update pr-test (#3663) 2025-02-18 17:23:43 +08:00
yigex
ddf39d3fce [ROCm] Optimal MOE Tuning for AMD Radeon Graphics (#3567) 2025-02-17 17:54:10 -08:00
Wen-Heng (Jack) Chung
2eab113206 [ROCm] Add additional block quant GEMM tuning configs for AMD GPUs. (#3616) 2025-02-17 15:54:18 -08:00
Co-authored-by: HAI <hixiao@gmail.com>
Yineng Zhang
058d199d4e use transformers 4.48.3 (#3650) 2025-02-18 04:40:47 +08:00
Yineng Zhang
a5375adc3a chore: bump v0.4.3.post2 (#3645) 2025-02-18 02:48:30 +08:00
Co-authored-by: pankajroark <pankajroark@users.noreply.github.com>
Yineng Zhang
75d171a9c5 chore: update flashinfer v0.2.1.post2 (#3644) 2025-02-18 02:47:42 +08:00
Yineng Zhang
714f3e6362 feat: support flashinfer mla with prefix cache (#3643) 2025-02-18 02:06:43 +08:00
Xiaoyu Zhang
c38f3aed24 support multi-gpu block-gemm tuning (#3639) 2025-02-18 00:00:35 +08:00
Yineng Zhang
e782eb7e6a chore: bump v0.4.3.post1 (#3638) 2025-02-17 21:58:19 +08:00
Yineng Zhang
5f1a485d9e Revert "[ROCm] Use tl.range() in block GEMM kernels with `num_stages` set by host." (#3632) 2025-02-17 18:01:21 +08:00
Wen-Heng (Jack) Chung
03caefeb51 [ROCm] Use tl.range() in block GEMM kernels with num_stages set by host. (#3535) 2025-02-16 01:40:38 -08:00
Co-authored-by: HAI <hixiao@gmail.com>
Mick
bcc213df61 Model: Support Qwen 2.5 vl (#3258) 2025-02-16 00:58:53 -08:00
Jiada Li
39416e394a fix lockfile and port_registry file permission error (#3598) 2025-02-15 19:14:45 -08:00
Co-authored-by: jiada li <jiada@lmsys.us-northcentral1-a.compute.internal>
Co-authored-by: zhaochenyang20 <zhaochen20@outlook.com>
Yineng Zhang
bbc47c348f fix apply_token_bitmask_inplace_cuda (#3594) 2025-02-15 23:55:08 +08:00
Yineng Zhang
dfce926921 fix high qps crash when enable mtp (#3592) 2025-02-15 23:11:28 +08:00
Co-authored-by: ispobock <ispobaoke@hotmail.com>
Mick
7711ac6ed0 doc: emphasize and notify the usage of chat_template (#3589) 2025-02-15 00:10:32 -08:00
Co-authored-by: Chayenne <zhaochen20@outlook.com>
Shi Shuai
7443197a63 [CI] Improve Docs CI Efficiency (#3587) 2025-02-14 19:57:00 -08:00
Co-authored-by: zhaochenyang20 <zhaochen20@outlook.com>
Ke Bao
862dd76c76 Support NextN (MTP) speculative decoding for DeepSeek-V3/R1 (#3582) 2025-02-15 05:28:34 +08:00
Shenggui Li
fb4c9c3a30 [fix] added support for vlm in offline inference (#3548) 2025-02-15 05:27:29 +08:00
Chuyue Sun
6cc309557a Add support for OpenAI API o1 model (#3363) 2025-02-14 11:43:00 +08:00
Co-authored-by: Shan Yu <shanyu1@g.ucla.edu>
Yineng Zhang
e0b9a423c8 chore: bump v0.4.3 (#3556) 2025-02-14 09:43:14 +08:00
Yineng Zhang
70f894b810 feat: support flashinfer mla attention for deepseek v3 (#3550) 2025-02-14 08:50:14 +08:00
Wen-Heng (Jack) Chung
871a4aa1bf [ROCm] Add ROCm tuning configs for AMD Instinct MI325X. (#3536) 2025-02-12 20:09:36 -08:00
yizhang2077
98eecbda54 integrate blockwise fp8 kernel (#3529) 2025-02-13 04:39:33 +08:00
Liangsheng Yin
8616357a97 Fix deepseek awq v3 (#3450) 2025-02-12 22:09:52 +08:00
Xiaoyu Zhang
45e3a7bc41 use sgl_per_token_group_quant_fp8 kernel (#3493) 2025-02-12 18:40:42 +08:00
Ata Fatahi
b8318aec48 Make NCCL NVLS configurable (#3502) 2025-02-12 03:25:06 +08:00
HAI
d81ac4434e MI30x: More graph captures for larger batch sizes and concurrencies (#3420) 2025-02-12 03:04:38 +08:00
Ke Bao
7e6d5fc694 Support Eagle cuda graph for Triton backend (#3500) 2025-02-12 02:27:45 +08:00
Wen-Heng (Jack) Chung
cadd5dbe6a Tune MI300X fused MoE Triton kernel JSON config. (#3492) 2025-02-11 10:27:25 -08:00
yigex
fdf04a1426 [ROCm] Add ROCm tuning config to block gemm and Re-tune for AMD Radeon Graphics (#3418) 2025-02-10 23:55:04 -08:00
Co-authored-by: Bruce Xue <yigex@xilinx.com>
Co-authored-by: HAI <hixiao@gmail.com>
Jackmin801
5f0e7de339 [Feat] Return hidden states (experimental) (#3364) 2025-02-10 15:54:37 -08:00
Co-authored-by: Chayenne <zhaochen20@outlook.com>
Xiaoyu Zhang
2f47d710ae refine some typo (#3473) 2025-02-10 23:35:44 +08:00
Ying Sheng
d23cb9a01e [Eagle] reduce one draft forward (#3468) 2025-02-10 20:21:49 +08:00
Ke Bao
2d61132374 Support Eagle2 for Triton backend (#3466) 2025-02-10 20:00:42 +08:00
Yineng Zhang
cddb1cdf8f chore: bump v0.4.2.post4 (#3459) 2025-02-10 14:12:16 +08:00
Baizhou Zhang
c45cab1c00 [Fix] Fix accuracy bug and refactor codes for lora (#3413) 2025-02-10 13:29:00 +08:00
Yineng Zhang
27c4c9cf52 remove _grouped_size_compiled_for_decode_kernels (#3453) 2025-02-10 13:01:21 +08:00
Yineng Zhang
36f6fc5093 feat: enable ragged fa3 by default on hopper 12.4+ (#3442) 2025-02-10 07:43:01 +08:00