Xinyu Dong
|
fe666fb24f
|
[Feature] Support gpt-oss and update model list (#71)
* [Docs] Update Support Models
* [Feature] Support gpt-oss
* [Docs] fix model support list
* Fix MoE
* Fix
* Fix moe_ep
* remove gpt-oss graph support, not yet supported
---------
Co-authored-by: hanhaowen <hanhaowen@baidu.com>
|
2026-01-04 21:19:49 +08:00 |
|
Joeegin
|
ded24f5026
|
[Model] Support InternVL2_5 on v0.11.0 (#72)
Co-authored-by: v_qiaoyijin <v_qiaoyijin@baidu.com>
|
2026-01-04 16:38:05 +08:00 |
|
hanhaowen
|
b015bb76fd
|
remove qwen2.py and llama.py, fix llama output
|
2025-12-31 11:39:37 +08:00 |
|
Xinyu Dong
|
b3c30a3cb9
|
[Feature] Support XiaoMi MIMO Flash V2 (#62)
* [Feature] Support MIMO Flash V2
|
2025-12-31 10:16:33 +08:00 |
|
Li Wei
|
383eb5459a
|
[refactor] remove redundant code in linear
|
2025-12-24 12:02:09 +08:00 |
|
ldh2020
|
8261a09e2a
|
[Kernel] Optimize the selection and update OP of ssm state
|
2025-12-21 15:45:32 +08:00 |
|
ldh2020
|
b97c781300
|
[Kernel] Optimize the recurrent op
|
2025-12-21 11:22:06 +08:00 |
|
Xinyu Dong
|
5a75795ade
|
[Model] Update llama.py
Remove redundancy
|
2025-12-15 21:28:56 +08:00 |
|
Xinyu Dong
|
7c7d0326c5
|
[Model] Register llama.py to vLLM
|
2025-12-15 21:21:28 +08:00 |
|
Xinyu Dong
|
ca059110b3
|
[Model] Support llama3 on v0.11.0
FULL AND PIECEWISE GRAPH ENABLE
|
2025-12-15 21:20:44 +08:00 |
|
chenyili
|
7c22d621fb
|
Commit vllm0.11.0 development branch
|
2025-12-10 17:51:24 +08:00 |
|
zhaoyingzhuo
|
b614823125
|
[chore] Remove obsolete comments
|
2025-12-10 15:52:23 +08:00 |
|
dongxinyu03
|
c728e52505
|
Initial commit for vLLM-Kunlun Plugin
|
2025-12-10 12:05:39 +08:00 |
|