EngineX / xc-llm-kunlun
Commit: c9f00c132c821e9264ca3efa23796ea3f08104ca
Path: xc-llm-kunlun/vllm_kunlun/v1/attention/backends/mla
Latest commit: longcontext chunk make attention crash, fix it (#117)
Author: baoqian426 <2512259944>
Co-authored-by: root <root@rdtest-node1150.bcc-zwlt.baidu.com>
Date: 2026-01-17 18:38:23 +08:00
__init__.py         [Feature] support deepseek v3/r1/v3.2 (#78)                              2026-01-05 22:55:35 +08:00
common.py           longcontext chunk make attention crash, fix it (#117)                    2026-01-17 18:38:23 +08:00
flashmla_sparse.py  [Misc]Specify that DS32 only supports --kv-cache-dtype bfloat16 (#119)   2026-01-17 16:52:02 +08:00
flashmla.py         enable full cudagraph for deepseek                                       2026-01-12 15:18:12 +08:00
indexer.py          [Feature] support deepseek v3/r1/v3.2 (#78)                              2026-01-05 22:55:35 +08:00