Commit Graph

12 Commits

| Author | SHA1 | Message | Date |
|---|---|---|---|
| Elfie Guo | bebd0576e5 | Integrate trtllm ragged attention for prefill self-attention (#9801) | 2025-09-05 17:18:00 +08:00 |
| Faraz | ff9b561817 | Fix TRTLLM MLA Cuda KV Blocks Causing accuracy drop (#9675) | 2025-08-29 17:16:10 -07:00 |
| Faraz | f508cd3cb7 | TRTLLM-MLA FP8 path (#8638); Signed-off-by: Faraz Khoubsirat <58580514+farazkh80@users.noreply.github.com> | 2025-08-11 14:02:13 -07:00 |
| Faraz | 4b04998d38 | TRTLLM Gen MLA Decode Kernel Integration (same as #7938) (#8632); Signed-off-by: Faraz Khoubsirat <58580514+farazkh80@users.noreply.github.com> | 2025-07-31 16:03:40 -07:00 |
| Baizhou Zhang | 25a6a9aa22 | Fix circular import in test_prefix_chunk_info.py (#7097) | 2025-06-11 10:57:45 -07:00 |
| Lianmin Zheng | e8e18dcdcc | Revert "fix some typos" (#6244) | 2025-05-12 12:53:26 -07:00 |
| applesaucethebun | d738ab52f8 | fix some typos (#6209); Co-authored-by: Brayden Zhong <b8zhong@uwaterloo.ca> | 2025-05-13 01:42:38 +08:00 |
| Baizhou Zhang | a42736bbb8 | Support MHA with chunked prefix cache for DeepSeek chunked prefill (#5113) | 2025-04-15 22:01:22 -07:00 |
| Yubo Wang | 804d9f2e4c | Add unit test on page_size > 1 and mla and integration test for Flash Attention 3 (#4760) | 2025-04-07 23:20:51 -07:00 |
| fzyzcjy | 26f07294f1 | Warn users when release_memory_occupation is called without memory saver enabled (#4566) | 2025-03-26 00:18:14 -07:00 |
| Stefan He | 4c584fc632 | Fix circular imports in gptq.py and unblock test explorer (#4736) | 2025-03-24 18:07:08 -07:00 |
| Stefan He | 5d7edc8e55 | Support FA3 as Attention backend by using --attention-backend fa3 (#4680); Co-authored-by: qsong <qsong@linkedin.com>; Co-authored-by: qingquansong <ustcsqq@gmail.com> | 2025-03-23 23:28:11 -07:00 |