Commit Graph

51 Commits

Author SHA1 Message Date
tanjunchen
0efa514bd9 1.add CODE_OF_CONDUCT.md to vLLM Kunlun
2.add MAINTAINERS.md to vLLM Kunlun
3.add contributing guide to vLLM Kunlun

Signed-off-by: tanjunchen <tanjunchen20@gmail.com>
2025-12-27 19:50:12 +08:00
baoqian426
45c6b8e927 Merge pull request #52 from liwei109/awq_gptq
[dev] support AWQ/GPTQ quantization for dense models
2025-12-24 17:05:26 +08:00
baoqian426
ed90690bd3 Merge pull request #50 from liwei109/quant
[refactor] remove redundant code in linear
2025-12-24 17:05:04 +08:00
Li Wei
6546323c71 [dev] support AWQ/GPTQ quantization for dense models 2025-12-24 13:46:06 +08:00
Li Wei
383eb5459a [refactor] remove redundant code in linear 2025-12-24 12:02:09 +08:00
Xinyu Dong
75d0bdae2f Merge pull request #40 from ldh2020/v0.11.0dev
[Kernel] Optimize the performance of Qwen3-Next
2025-12-22 21:50:27 +08:00
Xinyu Dong
c91134fd09 Merge pull request #39 from LiangYC1021/v0.11.0dev
[Kernel] Replace native torch solve_tril by solve_tril_fwd kernel op
2025-12-22 19:33:17 +08:00
hanhaowen
a4b9e92ca1 [Kernel] Replace native torch solve_tril by solve_tril_fwd kernel op 2025-12-22 17:37:19 +08:00
ldh2020
059988adbc Merge pull request #2 from ldh2020/ldh2020-qwen3-next
[Model] Optimize the performance of Qwen3-Next
2025-12-22 11:11:01 +08:00
ldh2020
8261a09e2a [Kernel] Optimize the selection and update OP of ssm state 2025-12-21 15:45:32 +08:00
ldh2020
b97c781300 [Kernel] Optimize the recurrent op 2025-12-21 11:22:06 +08:00
ldh2020
004e164bdb [Kernel] Optimize the recurrent op 2025-12-21 11:18:00 +08:00
ldh2020
58c1db5073 [Bugfix] fix the bug of the flash_attention in Qwen3-Next 2025-12-21 10:34:43 +08:00
Xinyu Dong
911b886e9d [Docs] Update installation.md 2025-12-20 10:16:57 +08:00
Xinyu Dong
6f96615ee3 Merge pull request #23 from ldh2020/v0.11.0dev
[Kernel] Use l2norm kernel op instead of triton op.
2025-12-19 15:26:18 +08:00
chenyili0619
92ce826ece Merge pull request #30 from baidu/28-v0110-only-enble-top-p-or-k-occur-error
[Bug] Fixed the issue where an error occurred when the request included a seed.
2025-12-18 13:05:45 +08:00
Xinyu Dong
ff7131678a Merge pull request #29 from chenyili0619/28-v0110-only-enble-top-p-or-k-occur-error
[Bug] Fixed the issue where an error occurred when the request includ…
2025-12-18 13:04:39 +08:00
chenyili0619
2e2933d217 [Bug] Fixed the issue where an error occurred when the request included a seed. 2025-12-18 13:03:34 +08:00
ldh2020
fce97df908 [Kernel] Use l2norm kernel op instead of triton op. 2025-12-16 16:24:47 +08:00
Xinyu Dong
6b5740ad0a [Docs] Fix Docs 2025-12-16 16:04:29 +08:00
Xinyu Dong
f6a73ac442 [Docs] Update installation.md, Fix Ops Merge pull request #21 from baidu/18-qwen3-30b-a3b-instruct-2507-tool-calling-issue
[Docs] Update installation.md, Fix Ops
2025-12-16 14:59:57 +08:00
Xinyu Dong
8fb42b1c9a [Docs] Update installation.md 2025-12-16 14:49:12 +08:00
Xinyu Dong
aa770e6946 [Model] Support llama3 on v0.11.0 Merge pull request #19 from xyDong0223/v0.11.0dev
[Model] Support llama3 on v0.11.0
2025-12-16 14:15:58 +08:00
Xinyu Dong
5a75795ade [Model] Update llama.py
Remove redundancy
2025-12-15 21:28:56 +08:00
Xinyu Dong
7c7d0326c5 [Model] register llama.py to vLLM 2025-12-15 21:21:28 +08:00
Xinyu Dong
ca059110b3 [Model] Support llama3 on v0.11.0
FULL AND PIECEWISE GRAPH ENABLE
2025-12-15 21:20:44 +08:00
Xinyu Dong
ab23c082b8 [Kernel] Optimize the performance of causal_conv1d. Merge pull request #15 from ldh2020/v0.11.0dev
[Kernel] Optimize the performance of causal_conv1d.
2025-12-12 18:04:33 +08:00
ldh2020
a6310d36ea Merge branch 'baidu:v0.11.0dev' into v0.11.0dev 2025-12-12 17:43:52 +08:00
ldh2020
bec4ad4f91 Merge pull request #1 from ldh2020/ldh2020-causal_conv1d
[Kernel] Optimize the performance of causal_conv1d.
2025-12-12 17:43:18 +08:00
Xinyu Dong
bb3b5a52cb Merge pull request #13 from ldh2020/v0.11.0dev
[Bugfix] fix the bug of torch_solve_tril
2025-12-12 17:37:46 +08:00
ldh2020
cff4727fbb [Kernel] Optimize the performance of causal_conv1d. 2025-12-12 17:22:35 +08:00
ldh2020
9bb2ee06a4 [Bugfix] fix the bug of torch_solve_tril 2025-12-12 17:01:50 +08:00
baoqian426
fae22c2e62 Merge pull request #3 from xyDong0223/main
[Kernel] Enable fast random sample on Kunlun3 Platform
2025-12-11 11:47:30 +08:00
xyDong0223
af2cd6097f [Kernel] fix missing os import 2025-12-11 11:17:28 +08:00
xyDong0223
0b7fb2ad19 Delete docs/source/developer_guide/evaluation/accuracy_report/Qwen3-30B-A3B-coder.md 2025-12-10 21:58:27 +08:00
xyDong0223
f4bf3a6251 Delete docs/source/developer_guide/evaluation/accuracy_report/Qwen2.5-32B.md 2025-12-10 21:58:16 +08:00
xyDong0223
170e7091d1 Delete docs/source/developer_guide/evaluation/accuracy_report/Qwen3-8B.md 2025-12-10 21:58:03 +08:00
xyDong0223
670c2397b8 [Kernel] Enable fast random sample on Kunlun P 2025-12-10 21:52:48 +08:00
xyDong0223
0d4d4967cf Update README.md 2025-12-10 21:46:18 +08:00
chenyili
02c2da9c7a remove redundant files 2025-12-10 20:31:44 +08:00
xyDong0223
f109a76a39 Merge pull request #2 from WeiJie-520/main
[Doc] Update Qwen model accuracy report
2025-12-10 20:24:05 +08:00
hongweijie
bd66cfa6c2 [Doc] Update Qwen model accuracy report 2025-12-10 17:55:27 +08:00
chenyili
7c22d621fb Commit the vllm 0.11.0 development branch 2025-12-10 17:51:24 +08:00
xyDong0223
deab7dd0b6 Update README.md 2025-12-10 17:28:00 +08:00
xyDong0223
d6c0c6b126 Merge pull request #1 from caijizhuo/main
[chore] Remove obsolete comments
2025-12-10 17:09:50 +08:00
zhaoyingzhuo
b614823125 [chore] Remove obsolete comments 2025-12-10 15:52:23 +08:00
dongxinyu03
ec935627cb [Doc] Update README 2025-12-10 14:59:29 +08:00
dongxinyu03
1b343812c9 [Doc] Update docs 2025-12-10 14:46:12 +08:00
dongxinyu03
a3d11f9b73 [Doc] Update docs 2025-12-10 14:26:37 +08:00
dongxinyu03
3762e6e3ab [Doc] Update docs 2025-12-10 14:16:10 +08:00