Author | Commit | Message | Date
baoqian426 | 2512259944 | longcontext chunk make attention crash, fix it (#117) | 2026-01-17 18:38:23 +08:00
    Co-authored-by: root <root@rdtest-node1150.bcc-zwlt.baidu.com>
fromck | 71a5a04e0a | [Misc]Specify that DS32 only supports --kv-cache-dtype bfloat16 (#119) | 2026-01-17 16:52:02 +08:00
    * [Kernel] add kernels to torch.ops
    * [Misc]Specify that DS only supports --kv-cache-dtype bfloat16
    Co-authored-by: chengxiaokang <chengxiaokang@baidu.com>
wzh | 115eb32068 | enable int8 bmm | 2026-01-14 14:30:59 +08:00
hanhaowen | ff8ebfa208 | enable full cudagraph for deepseek | 2026-01-12 15:18:12 +08:00
baoqian426 | ee0f50e68f | [Feature] support deepseek v3/r1/v3.2 (#78) | 2026-01-05 22:55:35 +08:00
    * [Feature] support deepseek v3/r1/v3.2
    * fix gpt_oss
    * update readme
    * update readme
    Co-authored-by: hanhaowen <hanhaowen@baidu.com>
hanhaowen | b015bb76fd | remove qwen2.py llama.py fix llama output | 2025-12-31 11:39:37 +08:00
Xinyu Dong | b3c30a3cb9 | [Feature] Support XiaoMi MIMO Flash V2 (#62) | 2025-12-31 10:16:33 +08:00
    * [Feature] Support MIMO Flash V2
ldh2020 | 58c1db5073 | [Bugfix] fix the bug of the flash_attention in Qwen3-Next | 2025-12-21 10:34:43 +08:00
chenyili | 7c22d621fb | Commit the vllm 0.11.0 development branch | 2025-12-10 17:51:24 +08:00
dongxinyu03 | c728e52505 | Initial commit for vLLM-Kunlun Plugin | 2025-12-10 12:05:39 +08:00