EngineX / xc-llm-kunlun
xc-llm-kunlun / vllm_kunlun / ops / attention (commit 42c7ef2f27790cebaa3bb84ba0215c66dfb538fa)
Latest commit: 1eaa1336ac by baoqian426: [Bugfix] remove mla patch; server args no longer need --compilation-config for ds v3.1 (#145)
Signed-off-by: baoqian426 <1354987947@qq.com>
2026-01-23 15:59:43 +08:00
backends - Commit vLLM 0.11.0 development branch (2025-12-10 17:51:24 +08:00)
__init__.py - Initial commit for vLLM-Kunlun Plugin (2025-12-10 12:05:39 +08:00)
flashmla.py - Add kernels to optimize RoPE and the decoding stage (#143) (2026-01-23 10:29:52 +08:00)
layer.py - Commit vLLM 0.11.0 development branch (2025-12-10 17:51:24 +08:00)
merge_attn_states.py - Fix attention crash caused by long-context chunking (#117) (2026-01-17 18:38:23 +08:00)