Xiaoyu Zhang | 87eddedfa2 | [ci] fix ci test fused_moe op (#5102) | 2025-04-09 08:52:46 -07:00
HandH1998 | 4065248214 | Support Llama4 fp8 inference (#5194) | 2025-04-09 20:14:34 +08:00
    Co-authored-by: laixinn <xielx@shanghaitech.edu.cn>
    Co-authored-by: sleepcoo <sleepcoo@gmail.com>
    Co-authored-by: zhyncs <me@zhyncs.com>
fzyzcjy | 86a876d883 | Optimize topk operation in llama4 (#5128) | 2025-04-09 02:50:22 -07:00
kk | 92823069c4 | Fix ci test "test_eval_fp8_accuracy" failed (#5185) | 2025-04-09 02:44:05 -07:00
    Co-authored-by: wunhuang <wunhuang@amd.com>
yinfan98 | d2e507df3c | [Misc] clean up vllm in sgl-kernel test (#5189) | 2025-04-09 01:22:13 -07:00
fzyzcjy | 61970b08d8 | Let bench_one_batch support enable_dp_attention (#4058) | 2025-04-08 23:44:25 -07:00
Cheng Wan | 76c48a0913 | [DeepEP] fix: import buffer error (#5179) | 2025-04-08 22:12:14 -07:00
Yineng Zhang | 90caf06c00 | fix: use DeepEPDispatcher on CUDA (#5180) | 2025-04-08 21:56:53 -07:00
Yineng Zhang | 6669d12707 | feat: add DeepGEMM build warning (#5176) | 2025-04-08 21:16:23 -07:00
    Co-authored-by: grimoire <streetyao@live.com>
Kay Yan | f2b70afde0 | docs: remove the use of Downward API for LWS_WORKER_INDEX (#5110) | 2025-04-08 20:46:11 -07:00
    Signed-off-by: Kay Yan <kay.yan@daocloud.io>
Jinyan Chen | bc3f6db2dd | [Fix] DeepEP Compatibility with Low Latency (#5068) | 2025-04-08 20:31:31 -07:00
    Co-authored-by: ch-wan <cwan39@gatech.edu>
Chang Su | aac531c53b | [Bugfix] Fix index out of bounds in local attention with large sequences (#5173) | 2025-04-08 18:43:13 -07:00
fzyzcjy | 39efad4fbc | Tiny disable model that does not work (#5175) | 2025-04-08 18:42:37 -07:00
fzyzcjy | 466899e69c | Fix multimodal hashing error (#5174) | 2025-04-08 18:42:26 -07:00
Trevor Morris | 11d760d56a | FP4 weight loading and inference (2/2) (#3972) | 2025-04-08 17:26:21 -07:00
fzyzcjy | 5039d54772 | Support 2x8xH100 for Llama 4 (#5159) | 2025-04-08 14:55:14 -07:00
XinyuanTong | d09a51f1f6 | [feat&refactor] Enhance multimodal input support with refactor io_struct (#4938) | 2025-04-08 14:48:07 -07:00
    Signed-off-by: Xinyuan Tong <justinning0323@outlook.com>
simveit | f8194b267c | Small improvement of native api docs (#5139) | 2025-04-08 12:09:26 -07:00
    Co-authored-by: zhaochenyang20 <zhaochen20@outlook.com>
Byron Hsu | 6d3b35fae9 | [PD] Simplify mini LB (#4911) | 2025-04-08 09:42:34 -07:00
    Co-authored-by: Liangsheng Yin <hnyls2002@gmail.com>
Ma Mingfei | a73c4df438 | Add optimized native kernels in sgl-kernel (#5150) | 2025-04-08 09:37:46 -07:00
    Co-authored-by: Chunyuan WU <chunyuan.wu@intel.com>
    Co-authored-by: YanbingJiang <yanbing.jiang@intel.com>
    Co-authored-by: blzheng <beilei.zheng@intel.com>
shangmingc | 89a554181f | [PD] Fix unclosed prefill connection warning of mini_lb (#5155) | 2025-04-08 09:15:06 -07:00
    Signed-off-by: Shangming Cai <caishangming@linux.alibaba.com>
Yun Dai | 2695ab0537 | Fix loading KV quantization scale; Enable modelopt kv cache (#4686) | 2025-04-08 09:11:35 -07:00
    Co-authored-by: qingquansong <ustcsqq@gmail.com>
kk | 88d6fd9a11 | Fix torch compile errors (#5158) | 2025-04-08 15:04:37 +00:00
DangKai | cc88d98ab8 | fix empty_cache error in pt_weights_iterator (#5151) | 2025-04-08 01:22:10 -07:00
    Co-authored-by: dangkai.dk <dangkai.dk@alibaba-inc.com>
saienduri | 3033c11a21 | Add dummy grok test to amd CI. (#5115) | 2025-04-08 07:44:59 +00:00
Yubo Wang | fd5a55cfd3 | Use public model for FA3 speculative decode testing (#5152) | 2025-04-08 00:08:25 -07:00
Yubo Wang | 804d9f2e4c | Add unit test on page_size > 1 and mla and integration test for Flash Attention 3 (#4760) | 2025-04-07 23:20:51 -07:00
Chunan Zeng | a7c3f74bec | [FA3 Feature] Support multi modal Llama-3.2-11B-Vision-Instruct (#5103) | 2025-04-07 22:58:08 -07:00
kk | 5a144a8ab9 | Fix run time error in ROCm platform (#5147) | 2025-04-07 22:49:40 -07:00
    Co-authored-by: wunhuang <wunhuang@amd.com>
    Co-authored-by: root <root@dell300x-pla-t10-17.pla.dcgpu>
huangtingwei | 27f8e6b9c1 | fix multimodal hash feature (#5083) | 2025-04-07 22:43:23 -07:00
Hubert Lu | afb752bcbe | [AMD] Fix missing per_token_group_quant_fp8 for ROCm (#5140) | 2025-04-07 22:38:25 -07:00
Yun Dai | 9731eca77b | [modelopt] automatically inspect if model is ModelOpt quantized and set quantization method (#5145) | 2025-04-07 22:12:11 -07:00
mlmz | 7c5658c189 | feat: disable grammar restrictions within reasoning sections (#4984) | 2025-04-07 21:46:47 -07:00
    Co-authored-by: tianhaoyu <thy@mail.ecust.edu.cn>
    Co-authored-by: DarkSharpness <2040703891@qq.com>
yinfan98 | 9798e72baa | [Misc] Use pytest.mark.skipif in sgl-kernel test (#5137) | 2025-04-07 21:35:14 -07:00
Ke Bao | ade714a67f | Add Llama4 user guide (#5133) | 2025-04-07 19:09:34 -07:00
    Co-authored-by: Cheng Wan <54331508+ch-wan@users.noreply.github.com>
Stefan He | 93470a1411 | Refactor and Optimize FA3 Code (#5090) | 2025-04-07 11:52:42 -07:00
    Co-authored-by: Qingquan Song <ustcsqq@gmail.com>
Xiaoyu Zhang | db452760e5 | [ci] fix llama4 ci error (#5126) | 2025-04-07 21:15:46 +08:00
Yineng Zhang | 57f99608f4 | bump v0.4.5 (#5117) | 2025-04-07 00:35:00 -07:00
HAI | 819924748a | Fix refactor error - fp8.py (#5106) | 2025-04-07 00:34:08 -07:00
    Co-authored-by: Lianmin Zheng <lianminzheng@gmail.com>
Chang Su | f04c80dc42 | Add Llama4 support (#5092) | 2025-04-07 00:29:36 -07:00
    Co-authored-by: Cheng Wan <cwan39@gatech.edu>
    Co-authored-by: fzyzcjy <ch271828n@outlook.com>
    Co-authored-by: ispobock <ispobaoke@163.com>
|
mlmz
|
d1bb171180
|
Fix: Reduce the number of document ci attempts to avoid long ci running (#5097)
Co-authored-by: shuaills <shishuaiuoe@gmail.com>
|
2025-04-06 00:43:48 -07:00 |
|
Yineng Zhang
|
35e0856b90
|
bump v0.4.4.post4 (#5091)
|
2025-04-05 15:36:17 -07:00 |
|
Yi Zhang
|
aba5ca154d
|
python transfer custom allreduce from trt kernel to vllm kernel (#5080)
|
2025-04-05 15:35:55 -07:00 |
|
Yineng Zhang
|
496dde8491
|
bump sgl-kernel 0.0.8 (#5089)
|
2025-04-05 14:28:04 -07:00 |
|
Yi Zhang
|
bcbbf519f9
|
sgl-kernel transfer custom allreduce from trt kernel to vllm kernel (#5079)
|
2025-04-05 14:23:20 -07:00 |
|
Yineng Zhang
|
0d99adb715
|
upgrade transformers 4.51.0 (#5088)
|
2025-04-05 14:20:23 -07:00 |
|
Baizhou Zhang
|
efbae697b3
|
[Revision] Replace enable_flashinfer_mla argument with attention_backend (#5052)
|
2025-04-05 01:23:02 -07:00 |
|
Stefan He
|
ca8d02abd5
|
FA3 Spec Decoding to support top k = 1 and add cuda graph support (#5050)
Co-authored-by: Qingquan Song <ustcsqq@gmail.com>
Co-authored-by: Chunan Zeng <zcnrex@gmail.com>
|
2025-04-04 23:03:59 -07:00 |
|
Yineng Zhang
|
3f287b8579
|
support sgl-kernel on blackwell (#5074)
|
2025-04-04 16:59:32 -07:00 |
|
inkcherry
|
7ed77d6b9e
|
fix dummy-load deepseekv2 (#4535)
|
2025-04-04 15:22:37 -07:00 |
|