Author | Commit | Subject | Date
Xiaoyu Zhang | 87eddedfa2 | [ci] fix ci test fused_moe op (#5102) | 2025-04-09 08:52:46 -07:00
HandH1998 | 4065248214 | Support Llama4 fp8 inference (#5194) (Co-authored-by: laixinn <xielx@shanghaitech.edu.cn>, sleepcoo <sleepcoo@gmail.com>, zhyncs <me@zhyncs.com>) | 2025-04-09 20:14:34 +08:00
Yubo Wang | 804d9f2e4c | Add unit test on page_size > 1 and mla and integration test for Flash Attention 3 (#4760) | 2025-04-07 23:20:51 -07:00
Lianmin Zheng | 9adf178cc2 | Fix 2-gpu CI test and suppress some warnings (#4930) | 2025-03-30 12:51:44 -07:00
Lianmin Zheng | 4ede6770cd | Fix retract for page size > 1 (#4914) | 2025-03-30 02:57:15 -07:00
Lianmin Zheng | b26bc86b36 | Support page size > 1 + eagle (#4908) | 2025-03-30 00:46:23 -07:00
chaobo jia | ef9a378a20 | [Feature] add multi-rank support for Lora (#4492) (Co-authored-by: rudy152 <czh1137892874@gmail.com>) | 2025-03-28 09:38:44 -07:00
fzyzcjy | 0d3e3072ee | Fix CI of test_patch_torch (#4844) | 2025-03-27 21:22:45 -07:00
fzyzcjy | 92bb49a7f9 | Patch PyTorch's bug that cross-process tensor transfer will lead to wrong device (#4565) | 2025-03-27 00:22:33 -07:00
Pan Lyu | c913ed4046 | support clip embedding model (#4506) | 2025-03-27 00:18:15 -07:00
Xiaoyu Zhang | 04e3ff6975 | Support compressed tensors fp8w8a8 (#4743) | 2025-03-26 13:21:25 -07:00
HandH1998 | 2d1b83e57a | add dsv3 int8 test (#4705) | 2025-03-24 21:57:58 -07:00
yuhsaun-t | 199bb01d00 | Add endpoints to dump selected expert ids (#4435) (Co-authored-by: Cheng Wan <54331508+ch-wan@users.noreply.github.com>) | 2025-03-24 21:34:19 -07:00
Mick | d373a48c98 | fix: second_per_grid_ts should be used to get mrope position (#3682) | 2025-03-17 18:12:38 -07:00
Zhiqiang Xie | a98290aea3 | Unit test for Hierarchical Caching (#4486) | 2025-03-17 17:45:00 -07:00
woodx | 48efec7b05 | Feature: support code completion (#3612) | 2025-03-16 18:26:19 -07:00
lukec | 21d485f835 | Fix test_create_kvindices unit test (#4452) | 2025-03-15 16:01:04 -07:00
Lianmin Zheng | f0afaf5289 | Add a dummy grok test case (#4399) | 2025-03-13 15:29:48 -07:00
Lianmin Zheng | c76040e31b | Support page size > 1 (#4356) | 2025-03-12 22:22:39 -07:00
HandH1998 | 2ac189edc8 | Amd test fp8 (#4261) | 2025-03-10 10:12:09 -07:00
Lianmin Zheng | 00d25a7f5e | Fix quantization and nightly tests (#4258) | 2025-03-10 03:06:21 -07:00
Lianmin Zheng | aa957102a9 | Simplify tests & Fix trtllm custom allreduce registration (#4252) | 2025-03-10 01:24:22 -07:00
Lianmin Zheng | fbd560028a | Auto balance CI tests (#4238) | 2025-03-09 21:05:55 -07:00
Lianmin Zheng | 48473684cc | Split test_mla.py into two files (#4216) | 2025-03-08 15:40:49 -08:00
HandH1998 | c7f254468f | [Feature] DeepSeek V3/R1 INT8 Quantization (channel-wise) (#3888) (Co-authored-by: yych0745 <1398089567@qq.com>, sleepcoo <sleepcoo@gmail.com>, b0urnee <2769086541@qq.com>) | 2025-03-06 20:54:52 -08:00
Pan Lyu | 361971b859 | Add Support for Qwen2-VL Multi-modal Embedding Models (#3694) | 2025-03-06 16:46:20 -08:00
Qubitium-ModelCloud | 56a724eba3 | [QUANT] Add GPTQModel Dynamic Quantization + lm_head Quantization (#3790) (Signed-off-by and Co-authored-by: ZX-ModelCloud <zx@modelcloud.ai>) | 2025-03-05 01:11:00 -08:00
Xihuai Wang | 95575aa76a | Reasoning parser (#4000) (Co-authored-by: Lucas Pickup <lupickup@microsoft.com>) | 2025-03-03 21:16:36 -08:00
Lianmin Zheng | ac2387279e | Support penalty in overlap mode; return logprob with chunked prefill; improve benchmark scripts (#3988) (Co-authored-by: SangBin Cho <rkooo567@gmail.com>, dhou-xai <dhou@x.ai>, Hanming Lu <hanming_lu@berkeley.edu>) | 2025-03-03 00:12:04 -08:00
Baizhou Zhang | 90a4b7d98a | [Feature] Support ragged prefill in flashinfer mla backend (#3967) (Co-authored-by: Yineng Zhang <me@zhyncs.com>, pankajroark <pankajroark@users.noreply.github.com>) | 2025-02-28 18:13:56 -08:00
KCFindstr | bc20e93f2d | [feat] Add Vertex AI compatible prediction route for /generate (#3866) | 2025-02-27 19:42:15 -08:00
laixin | 1a6e97577a | Feature DeepSeek V3/R1 INT8 Quantization (block-wise) (#3730) (Co-authored-by: HandH1998 <1335248067@qq.com>) | 2025-02-24 05:43:35 -08:00
aoshen524 | e79f7420be | [Fix] Fix bugs and refactor codes in lora for better scalability. (#3652) (Co-authored-by: ShenAo1111 <1377693092@qq.com>, zhaochenyang20 <zhaochen20@outlook.com>) | 2025-02-20 11:51:57 -08:00
Yineng Zhang | e319153be8 | update unit test (#3636) | 2025-02-17 21:06:10 +08:00
Jackmin801 | 5f0e7de339 | [Feat] Return hidden states (experimental) (#3364) (Co-authored-by: Chayenne <zhaochen20@outlook.com>) | 2025-02-10 15:54:37 -08:00
Yineng Zhang | d39899e85c | upgrade flashinfer v0.2.0.post2 (#3288) (Co-authored-by: pankajroark <pankajroark@users.noreply.github.com>) | 2025-02-04 21:41:40 +08:00
Baizhou Zhang | 70817a7eae | [Feature] Define backends and add Triton backend for Lora (#3161) (Co-authored-by: Ying Sheng <sqy1415@gmail.com>) | 2025-02-03 22:09:13 -08:00
Mick | 9f635ea50d | [Fix] Address remaining issues of supporting MiniCPMV (#2977) | 2025-01-28 00:22:13 -08:00
Byron Hsu | 27aeb4b7d8 | [test] deduplicate test_session_control (#3183) | 2025-01-28 13:17:06 +08:00
yizhang2077 | 1e3e521544 | add unit test for block wise fp8 (#3156) | 2025-01-27 15:32:04 +08:00
Lianmin Zheng | d1a0863251 | Add a test case for cached_tokens (#3145) | 2025-01-26 01:39:28 -08:00
Lianmin Zheng | cd493b5afc | Improve metrics, logging, and importing orders (#2992) | 2025-01-19 18:36:59 -08:00
Enrique Shockwave | 3bcf5ecea7 | support regex in xgrammar backend (#2983) | 2025-01-20 04:34:41 +08:00
bjmsong | d3024f4fc8 | support e4m3 kvcache in qwen2 & add kv scaling facotr json (#2894) (Co-authored-by: bjmsong <bjmsong@126.com>) | 2025-01-18 11:43:22 +08:00
Ke Bao | d47c5101f1 | Add ut for qwen model (#2947) | 2025-01-18 00:03:54 +08:00
Chang Su | a8ccacc8b8 | [Frontend] Fix request length check and add option to disallow auto truncation in scheduler (#2876) | 2025-01-16 14:51:19 -08:00
yizhang2077 | 767c9dec03 | adapt custom allreduce for tensorrt llm (#2511) | 2025-01-16 04:57:35 +08:00
Ke Bao | bfbda62c8b | Add ut for w8a8 int8 quantization (#2897) | 2025-01-15 18:29:14 +08:00
fzyzcjy | 923f518337 | CUDA-graph-compatible releasing and resuming KV cache and model weight memory (#2630) | 2025-01-13 11:38:51 -08:00
Lianmin Zheng | 67008f4b32 | Use only one GPU for MLA CI tests (#2858) | 2025-01-13 03:55:33 -08:00