zhihui96
|
f0bf384e2e
|
Merge branch 'baidu:main' into dsv31
|
2026-01-14 14:21:57 +08:00 |
|
Lidang Jiang
|
7ed71432ca
|
[Bug] Fix InternVL KeyError: ((1, 1, 3), '<i8') (#108)
|
2026-01-13 22:36:03 +08:00 |
|
roger-lcc
|
37cc307322
|
register apply_repetition_penalties_ in custom_op (#110)
* fix qwen2_vl for 0.11.0
* register apply_repetition_penalties_ in custom_op
---------
Co-authored-by: luochencheng <luochencheng@baidu.com>
|
2026-01-13 20:22:14 +08:00 |
|
baoqian426
|
fb424acca7
|
Merge pull request #106 from baoqian426/enable-full-cudagraph-deepseek
enable full cudagraph for deepseek
|
2026-01-13 09:57:56 +08:00 |
|
Jin Hanyu
|
bd90350968
|
[Bug] Fix missing apply_top_k_top_p issue. (#101)
|
2026-01-12 16:38:03 +08:00 |
|
tanjunchen
|
18fc1c006e
|
update maintainer for vllm-kunlun (#100)
Signed-off-by: tanjunchen <tanjunchen20@gmail.com>
|
2026-01-12 16:37:22 +08:00 |
|
hanhaowen
|
ff8ebfa208
|
enable full cudagraph for deepseek
|
2026-01-12 15:18:12 +08:00 |
|
Xinyu Dong
|
87a57e43ca
|
[Docs] Update URL (#98)
|
2026-01-10 06:02:10 +08:00 |
|
Xinyu Dong
|
7be26ca617
|
[Bugs] Fix Docs Build Problem (#97)
* [Bugs] Docs fixed
* Update contributing.md
* Update index.md
* fix lua to text
* fix title size
|
2026-01-10 05:55:40 +08:00 |
|
baoqian426
|
8c9cabd760
|
Merge pull request #96 from xyDong0223/main
[Docs] Fix v0.11.0 Docs config
|
2026-01-09 17:20:17 +08:00 |
|
Xinyu Dong
|
462c44e2ac
|
[Docs] Fix v0.11.0 Docs config
|
2026-01-09 17:07:18 +08:00 |
|
roger-lcc
|
0455b49519
|
[Bugs] fix qwen2_vl for 0.11.0 (#94)
Co-authored-by: luochencheng <luochencheng@baidu.com>
|
2026-01-09 15:05:40 +08:00 |
|
wzh
|
df436a47f6
|
test
|
2026-01-08 16:02:15 +08:00 |
|
baoqian426
|
2c9b176e6e
|
[Feature] use for dp (#90)
|
2026-01-08 11:05:48 +08:00 |
|
Li Wei
|
c403d921ff
|
[doc] update quantization guide doc (#88)
|
2026-01-07 15:39:51 +08:00 |
|
baoqian426
|
eb40e8a07a
|
[Bugfix] fix can not import compressed_tensors (#87)
Co-authored-by: root <root@rdtest-node1150.bcc-zwlt.baidu.com>
|
2026-01-07 11:32:10 +08:00 |
|
baoqian426
|
62a97db6ed
|
Merge pull request #85 from liwei109/liwei-dev
[fix] update compressed-tensors scheme
|
2026-01-07 09:27:23 +08:00 |
|
Li Wei
|
1c1b84d78c
|
[fix] update compressed-tensors scheme
Deepseek v3.2 is supported now
Signed-off-by: Li Wei <liwei.109@outlook.com>
|
2026-01-06 22:30:27 +08:00 |
|
baoqian426
|
9c2b908908
|
Merge pull request #84 from xyDong0223/main
[Feature] DeepSeek Support MTP
|
2026-01-06 21:56:31 +08:00 |
|
baoqian426
|
c5e4d23e3e
|
Merge pull request #82 from liwei109/quant
[fix] resolve cutlass_scaled_mm inference error
|
2026-01-06 21:42:55 +08:00 |
|
dongxinyu03
|
26b311ccf5
|
[Feature] DeepSeek Support MTP
|
2026-01-06 21:37:21 +08:00 |
|
tangshiwen
|
f811ae968a
|
[fix] resolve cutlass_scaled_mm inference error
|
2026-01-06 20:52:12 +08:00 |
|
baoqian426
|
c54b2d2a2d
|
Merge pull request #80 from liwei109/aicapx-quant
[fix] matmul does not support cuda graph
|
2026-01-06 17:49:09 +08:00 |
|
Li Wei
|
9533f68e99
|
[fix] matmul does not support cuda graph
|
2026-01-06 17:32:45 +08:00 |
|
Li Wei
|
515a4eeda9
|
[dev] support compressed-tensors w8a8 quantization (#75)
* [dev] support compressed-tensors w8a8 quantization
Co-authored-by: Li Wei <liwei.109@outlook.com>
* [refactor] update KunlunScaleMMKernel impl
* [rebase] resolve conflicts and remove redundant code
---------
Co-authored-by: tangshiwen <tangshiwen@baidu.com>
|
2026-01-06 13:51:53 +08:00 |
|
baoqian426
|
ee0f50e68f
|
[Feature] support deepseek v3/r1/v3.2 (#78)
* [Feature] support deepseek v3/r1/v3.2
* fix gpt_oss
* update readme
* update readme
---------
Co-authored-by: hanhaowen <hanhaowen@baidu.com>
|
2026-01-05 22:55:35 +08:00 |
|
Xinyu Dong
|
07bc24a555
|
[Bugs] Fix moe when without bias (#76)
|
2026-01-05 10:51:23 +08:00 |
|
callmelaoyi
|
b86953acf9
|
[Kernel] Qwen3-next: optimize recompute_w_u_fwd & chunk_fwd_o (#74)
Co-authored-by: yuanjizhong <yuanjizhong@baidu.com>
|
2026-01-05 10:24:51 +08:00 |
|
Xinyu Dong
|
fe666fb24f
|
[Feature] Support gpt-oss and update model list (#71)
* [Docs] Update Support Models
* [Feature] Support gpt-oss
* [Docs] fix model support list
* Fix Moe
* Fix
* Fix moe_ep
* remove gpt oss graph support , not yet
---------
Co-authored-by: hanhaowen <hanhaowen@baidu.com>
|
2026-01-04 21:19:49 +08:00 |
|
Joeegin
|
ded24f5026
|
[Model] Support InternVL2_5 on v0.11.0 (#72)
Co-authored-by: v_qiaoyijin <v_qiaoyijin@baidu.com>
|
2026-01-04 16:38:05 +08:00 |
|
baoqian426
|
684ce2761e
|
Merge pull request #69 from chanzhennan/main
[Docs]: update readme.md
|
2025-12-31 16:44:58 +08:00 |
|
baoqian426
|
e48e4330e5
|
Merge pull request #67 from xyDong0223/main
[Docs] Update torch and ops for mimo v2
|
2025-12-31 16:44:42 +08:00 |
|
chanzhennan
|
6bc61d0dfe
|
[Docs]: update readme.md
|
2025-12-31 16:41:12 +08:00 |
|
baoqian426
|
3290c30ec1
|
Merge pull request #68 from tanjunchen/main
[Docs] update readme.md
|
2025-12-31 15:01:49 +08:00 |
|
tanjunchen
|
e8f4e1337c
|
update readme.md
Signed-off-by: tanjunchen <tanjunchen20@gmail.com>
|
2025-12-31 14:55:15 +08:00 |
|
Xinyu Dong
|
c46c46ef77
|
[Docs] Update torch and ops for mimo v2
|
2025-12-31 13:17:06 +08:00 |
|
baoqian426
|
cdef33dbb0
|
Merge pull request #66 from baoqian426/model/remove-llama-qwne2
remove qwen2.py and llama.py; fix llama output
|
2025-12-31 11:57:22 +08:00 |
|
hanhaowen
|
b015bb76fd
|
remove qwen2.py and llama.py; fix llama output
|
2025-12-31 11:39:37 +08:00 |
|
Xinyu Dong
|
b3c30a3cb9
|
[Feature] Support XiaoMi MIMO Flash V2 (#62)
* [Feature] Support MIMO Flash V2
|
2025-12-31 10:16:33 +08:00 |
|
WeiJie_Hong
|
341dc7f296
|
[Docs] Update base image path in Installation.md (#63)
|
2025-12-30 19:10:41 +08:00 |
|
baoqian426
|
6382deb32b
|
Merge pull request #60 from tanjunchen/main-1
[Docs] update readme.md
|
2025-12-29 21:24:26 +08:00 |
|
tanjunchen
|
8c23a955a4
|
update readme.md
Signed-off-by: tanjunchen <tanjunchen20@gmail.com>
|
2025-12-29 21:21:10 +08:00 |
|
Li Wei
|
9cee025f41
|
Merge pull request #59 from liwei109/aicapx-quant
[fix] remove weight_loader_v2 to support cuda graph
|
2025-12-29 19:56:24 +08:00 |
|
Xinyu Dong
|
7fb627c34e
|
Merge pull request #57 from tanjunchen/main-github-action
Add foundational configuration
|
2025-12-29 13:18:31 +08:00 |
|
tanjunchen
|
6d7d7c347f
|
Add foundational configuration
Signed-off-by: tanjunchen <tanjunchen20@gmail.com>
|
2025-12-28 20:28:58 +08:00 |
|
Xinyu Dong
|
d17ee45d4c
|
Merge pull request #55 from tanjunchen/main-dev-01
[Docs] update readme and contributing guide
|
2025-12-28 17:48:14 +08:00 |
|
Xinyu Dong
|
1c21b07232
|
Merge pull request #56 from tanjunchen/main-dev-02
[Docs] add PULL_REQUEST_TEMPLATE.md and ISSUE_TEMPLATE
|
2025-12-28 17:47:49 +08:00 |
|
tanjunchen
|
99269e3ce9
|
add PULL_REQUEST_TEMPLATE.md and ISSUE_TEMPLATE
Signed-off-by: tanjunchen <tanjunchen20@gmail.com>
|
2025-12-27 22:03:56 +08:00 |
|
tanjunchen
|
0efa514bd9
|
1. add CODE_OF_CONDUCT.md to vLLM Kunlun
2. add MAINTAINERS.md to vLLM Kunlun
3. add contributing guide to vLLM Kunlun
Signed-off-by: tanjunchen <tanjunchen20@gmail.com>
|
2025-12-27 19:50:12 +08:00 |
|
baoqian426
|
45c6b8e927
|
Merge pull request #52 from liwei109/awq_gptq
[dev] support AWQ/GPTQ quantization for dense models
|
2025-12-24 17:05:26 +08:00 |
|