Commit Graph

127 Commits

Author SHA1 Message Date
baoqian426
c54b2d2a2d Merge pull request #80 from liwei109/aicapx-quant
[fix]matmul not support cuda graph
2026-01-06 17:49:09 +08:00
Li Wei
9533f68e99 [fix]matmul not support cuda graph 2026-01-06 17:32:45 +08:00
Li Wei
515a4eeda9 [dev] support compressed-tensors w8a8 quantization (#75)
* [dev] support compressed-tensors w8a8 quantization

Co-authored-by: Li Wei <liwei.109@outlook.com>

* [refact]update KunlunScaleMMKernel impl

* [rebase]resolve conflicts and remove redundant code

---------

Co-authored-by: tangshiwen <tangshiwen@baidu.com>
2026-01-06 13:51:53 +08:00
baoqian426
ee0f50e68f [Feature] support deepseek v3/r1/v3.2 (#78)
* [Feature] support deepseek v3/r1/v3.2

* fix gpt_oss

* update readme

* update readme

---------

Co-authored-by: hanhaowen <hanhaowen@baidu.com>
2026-01-05 22:55:35 +08:00
Xinyu Dong
07bc24a555 [Bugs] Fix moe when without bias (#76) 2026-01-05 10:51:23 +08:00
callmelaoyi
b86953acf9 [Kernel] Qwen3-next: optimize recompute_w_u_fwd & chunk_fwd_o (#74)
Co-authored-by: yuanjizhong <yuanjizhong@baidu.com>
2026-01-05 10:24:51 +08:00
Xinyu Dong
fe666fb24f [Feature] Support gpt-oss and update model list (#71)
* [Docs] Update Support Models

* [Feature] Support gpt-oss

* [Docs] fix model support list

* Fix Moe

* Fix

* Fix moe_ep

* remove gpt oss graph support, not yet

---------

Co-authored-by: hanhaowen <hanhaowen@baidu.com>
2026-01-04 21:19:49 +08:00
Joeegin
ded24f5026 [Model] Support InternVL2_5 on v0.11.0 (#72)
Co-authored-by: v_qiaoyijin <v_qiaoyijin@baidu.com>
2026-01-04 16:38:05 +08:00
baoqian426
684ce2761e Merge pull request #69 from chanzhennan/main
[Docs]: update readme.md
2025-12-31 16:44:58 +08:00
baoqian426
e48e4330e5 Merge pull request #67 from xyDong0223/main
[Docs] Update torch and ops for mimo v2
2025-12-31 16:44:42 +08:00
chanzhennan
6bc61d0dfe [Docs]: update readme.md 2025-12-31 16:41:12 +08:00
baoqian426
3290c30ec1 Merge pull request #68 from tanjunchen/main
[Docs] update readme.md
2025-12-31 15:01:49 +08:00
tanjunchen
e8f4e1337c update readme.md
Signed-off-by: tanjunchen <tanjunchen20@gmail.com>
2025-12-31 14:55:15 +08:00
Xinyu Dong
c46c46ef77 [Docs] Update torch and ops for mimo v2 2025-12-31 13:17:06 +08:00
baoqian426
cdef33dbb0 Merge pull request #66 from baoqian426/model/remove-llama-qwne2
remove qwen2.py and llama.py; fix llama output
2025-12-31 11:57:22 +08:00
hanhaowen
b015bb76fd remove qwen2.py and llama.py; fix llama output 2025-12-31 11:39:37 +08:00
Xinyu Dong
b3c30a3cb9 [Feature] Support XiaoMi MIMO Flash V2 (#62)
* [Feature] Support MIMO Flash V2
2025-12-31 10:16:33 +08:00
WeiJie_Hong
341dc7f296 [Docs] Update base image path in Installation.md (#63) 2025-12-30 19:10:41 +08:00
baoqian426
6382deb32b Merge pull request #60 from tanjunchen/main-1
[Docs] update readme.md
2025-12-29 21:24:26 +08:00
tanjunchen
8c23a955a4 update readme.md
Signed-off-by: tanjunchen <tanjunchen20@gmail.com>
2025-12-29 21:21:10 +08:00
Li Wei
9cee025f41 Merge pull request #59 from liwei109/aicapx-quant
[fix] remove weight_loader_v2 to support cuda graph
2025-12-29 19:56:24 +08:00
Xinyu Dong
7fb627c34e Merge pull request #57 from tanjunchen/main-github-action
Add foundational configuration
2025-12-29 13:18:31 +08:00
tanjunchen
6d7d7c347f Add foundational configuration
Signed-off-by: tanjunchen <tanjunchen20@gmail.com>
2025-12-28 20:28:58 +08:00
Xinyu Dong
d17ee45d4c Merge pull request #55 from tanjunchen/main-dev-01
[Docs] update readme and contributing guide
2025-12-28 17:48:14 +08:00
Xinyu Dong
1c21b07232 Merge pull request #56 from tanjunchen/main-dev-02
[Docs] add PULL_REQUEST_TEMPLATE.md and ISSUE_TEMPLATE
2025-12-28 17:47:49 +08:00
tanjunchen
99269e3ce9 add PULL_REQUEST_TEMPLATE.md and ISSUE_TEMPLATE
Signed-off-by: tanjunchen <tanjunchen20@gmail.com>
2025-12-27 22:03:56 +08:00
tanjunchen
0efa514bd9 1. add CODE_OF_CONDUCT.md to vLLM Kunlun
2. add MAINTAINERS.md to vLLM Kunlun
3. add contributing guide to vLLM Kunlun

Signed-off-by: tanjunchen <tanjunchen20@gmail.com>
2025-12-27 19:50:12 +08:00
baoqian426
45c6b8e927 Merge pull request #52 from liwei109/awq_gptq
[dev] support AWQ/GPTQ quantization for dense models
2025-12-24 17:05:26 +08:00
baoqian426
ed90690bd3 Merge pull request #50 from liwei109/quant
[refactor] remove redundant code in linear
2025-12-24 17:05:04 +08:00
Li Wei
6546323c71 [dev] support AWQ/GPTQ quantization for dense models 2025-12-24 13:46:06 +08:00
Li Wei
383eb5459a [refactor] remove redundant code in linear 2025-12-24 12:02:09 +08:00
Xinyu Dong
75d0bdae2f Merge pull request #40 from ldh2020/v0.11.0dev
[Kernel] Optimize the performance of Qwen3-Next
2025-12-22 21:50:27 +08:00
Xinyu Dong
c91134fd09 Merge pull request #39 from LiangYC1021/v0.11.0dev
[Kernel] Replace native torch solve_tril by solve_tril_fwd kernel op
2025-12-22 19:33:17 +08:00
hanhaowen
a4b9e92ca1 [Kernel] Replace native torch solve_tril by solve_tril_fwd kernel op 2025-12-22 17:37:19 +08:00
ldh2020
059988adbc Merge pull request #2 from ldh2020/ldh2020-qwen3-next
[Model] Optimize the performance of Qwen3-Next
2025-12-22 11:11:01 +08:00
ldh2020
8261a09e2a [Kernel] Optimize the selection and update OP of ssm state 2025-12-21 15:45:32 +08:00
ldh2020
b97c781300 [Kernel] Optimize the recurrent op 2025-12-21 11:22:06 +08:00
ldh2020
004e164bdb [Kernel] Optimize the recurrent op 2025-12-21 11:18:00 +08:00
ldh2020
58c1db5073 [Bugfix] fix the bug of the flash_attention in Qwen3-Next 2025-12-21 10:34:43 +08:00
Xinyu Dong
911b886e9d [Docs] Update installation.md 2025-12-20 10:16:57 +08:00
Xinyu Dong
6f96615ee3 Merge pull request #23 from ldh2020/v0.11.0dev
[Kernel] Use l2norm kernel op instead of triton op.
2025-12-19 15:26:18 +08:00
chenyili0619
92ce826ece Merge pull request #30 from baidu/28-v0110-only-enble-top-p-or-k-occur-error
[Bug] Fixed the issue where an error occurred when the request included a seed.
2025-12-18 13:05:45 +08:00
Xinyu Dong
ff7131678a Merge pull request #29 from chenyili0619/28-v0110-only-enble-top-p-or-k-occur-error
[Bug] Fixed the issue where an error occurred when the request includ…
2025-12-18 13:04:39 +08:00
chenyili0619
2e2933d217 [Bug] Fixed the issue where an error occurred when the request included a seed. 2025-12-18 13:03:34 +08:00
ldh2020
fce97df908 [Kernel] Use l2norm kernel op instead of triton op. 2025-12-16 16:24:47 +08:00
Xinyu Dong
6b5740ad0a [Docs] Fix Docs 2025-12-16 16:04:29 +08:00
Xinyu Dong
f6a73ac442 Merge pull request #21 from baidu/18-qwen3-30b-a3b-instruct-2507-tool-calling-issue
[Docs] Update installation.md, Fix Ops
2025-12-16 14:59:57 +08:00
Xinyu Dong
8fb42b1c9a [Docs] Update installation.md 2025-12-16 14:49:12 +08:00
Xinyu Dong
aa770e6946 Merge pull request #19 from xyDong0223/v0.11.0dev
[Model] Support llama3 on v0.11.0
2025-12-16 14:15:58 +08:00
Xinyu Dong
5a75795ade [Model] Update llama.py
Remove redundancy
2025-12-15 21:28:56 +08:00