Commit Graph

30 Commits

Author SHA1 Message Date
Xinyu Dong
bf9369f733 Migrate XTorch operations to Kunlun operations (accelerating iteration) (#177)
Signed-off-by: dongxinyu03 <dongxinyu03@baidu.com>
2026-02-12 18:13:00 +08:00
Li Wei
744719587e [Feature] Support glmx (#194)
Signed-off-by: Li Wei <liwei.109@outlook.com>
Co-authored-by: tangshiwen <tangshiwen@baidu.com>
Co-authored-by: Xinyu Dong <dongxinyu03@baidu.com>
2026-02-12 15:40:42 +08:00
fromck
6f12830839 [Kernel] add topk_per_row to optimize the calculation of topk_indexes (#168)
Signed-off-by: chengxiaokang <chengxiaokang@baidu.com>
Co-authored-by: chengxiaokang <chengxiaokang@baidu.com>
2026-02-02 11:07:49 +08:00
baoqian426
1eaa1336ac [Bugfix] remove mla patch; server args no longer need --compilation-config for ds v3.1 (#145)
Signed-off-by: baoqian426 <1354987947@qq.com>
2026-01-23 15:59:43 +08:00
fromck
0ce5f1a3f7 Add kernels to optimize RoPE and the decoding stage (#143)
Co-authored-by: chengxiaokang <chengxiaokang@baidu.com>
2026-01-23 10:29:52 +08:00
fromck
74d4f804e8 add 2 kernels and optimize the calculation of topk_indices (#134)
Co-authored-by: chengxiaokang <chengxiaokang@baidu.com>
2026-01-22 10:29:28 +08:00
youzeyu
92b40628cd delete glmGlmForCausalLM registration (#132)
Co-authored-by: hanhaowen <hanhaowen@baidu.com>
2026-01-20 19:22:33 +08:00
Li Wei
f2019b145f Revert "support glm47 in 0.11.0 version (#116)" (#123)
This reverts commit 9006e37979.
2026-01-20 10:46:11 +08:00
roger-lcc
9006e37979 support glm47 in 0.11.0 version (#116)
* support glm47 in 0.11.0 version

* support glm47 in 0.11.0 version

---------

Co-authored-by: luochencheng <luochencheng@baidu.com>
2026-01-19 20:26:26 +08:00
Li Wei
8f56cbf3ed [refactor] update Kunlun classes with monkey patch (#122)
Signed-off-by: Li Wei <liwei.109@outlook.com>
2026-01-19 20:24:19 +08:00
Shiwen Tang
8988ad08b2 [Feature] Support Mixed-Precision Quantization for MoE (#112) 2026-01-14 18:42:18 +08:00
Lidang Jiang
7ed71432ca [Bug] Fix InternVL KeyError: ((1, 1, 3), '<i8') (#108) 2026-01-13 22:36:03 +08:00
roger-lcc
0455b49519 [Bugs] fix qwen2_vl for 0.11.0 (#94)
Co-authored-by: luochencheng <luochencheng@baidu.com>
2026-01-09 15:05:40 +08:00
dongxinyu03
26b311ccf5 [Feature] DeepSeek Support MTP 2026-01-06 21:37:21 +08:00
Li Wei
9533f68e99 [fix] matmul does not support cuda graph 2026-01-06 17:32:45 +08:00
Li Wei
515a4eeda9 [dev] support compressed-tensors w8a8 quantization (#75)
* [dev] support compressed-tensors w8a8 quantization

Co-authored-by: Li Wei <liwei.109@outlook.com>

* [refact]update KunlunScaleMMKernel impl

* [rebase]resolve conflicts and remove redundant code

---------

Co-authored-by: tangshiwen <tangshiwen@baidu.com>
2026-01-06 13:51:53 +08:00
baoqian426
ee0f50e68f [Feature] support deepseek v3/r1/v3.2 (#78)
* [Feature] support deepseek v3/r1/v3.2

* fix gpt_oss

* update readme

* update readme

---------

Co-authored-by: hanhaowen <hanhaowen@baidu.com>
2026-01-05 22:55:35 +08:00
Xinyu Dong
fe666fb24f [Feature] Support gpt-oss and update model list (#71)
* [Docs] Update Support Models

* [Feature] Support gpt-oss

* [Docs] fix model support list

* Fix Moe

* Fix

* Fix moe_ep

* remove gpt oss graph support, not yet

---------

Co-authored-by: hanhaowen <hanhaowen@baidu.com>
2026-01-04 21:19:49 +08:00
Joeegin
ded24f5026 [Model] Support InternVL2_5 on v0.11.0 (#72)
Co-authored-by: v_qiaoyijin <v_qiaoyijin@baidu.com>
2026-01-04 16:38:05 +08:00
hanhaowen
b015bb76fd remove qwen2.py and llama.py; fix llama output 2025-12-31 11:39:37 +08:00
Xinyu Dong
b3c30a3cb9 [Feature] Support XiaoMi MIMO Flash V2 (#62)
* [Feature] Support MIMO Flash V2
2025-12-31 10:16:33 +08:00
Li Wei
383eb5459a [refactor] remove redundant code in linear 2025-12-24 12:02:09 +08:00
ldh2020
8261a09e2a [Kernel] Optimize the selection and update OP of ssm state 2025-12-21 15:45:32 +08:00
ldh2020
b97c781300 [Kernel] Optimize the recurrent op 2025-12-21 11:22:06 +08:00
Xinyu Dong
5a75795ade [Model] Update llama.py
Remove redundancy
2025-12-15 21:28:56 +08:00
Xinyu Dong
7c7d0326c5 [Model] register llama.py to vLLM 2025-12-15 21:21:28 +08:00
Xinyu Dong
ca059110b3 [Model] Support llama3 on v0.11.0
FULL AND PIECEWISE GRAPH ENABLED
2025-12-15 21:20:44 +08:00
chenyili
7c22d621fb Submit vllm0.11.0 development branch 2025-12-10 17:51:24 +08:00
zhaoyingzhuo
b614823125 [chore] Remove obsolete comments 2025-12-10 15:52:23 +08:00
dongxinyu03
c728e52505 Initial commit for vLLM-Kunlun Plugin 2025-12-10 12:05:39 +08:00