EngineX / xc-llm-kunlun
docs / source / user_guide (at commit b82b6026d65c69a2a65e4d55ffe2065c86e561dd)
Latest commit 71bd70ad6c by Li Wei: [Feature] support compressed-tensors w4a16 quantization (#154)
- native int4 kimi model inference is supported
Signed-off-by: Li Wei <liwei.109@outlook.com>
2026-01-27 19:56:22 +08:00
| Name | Last commit | Date |
|------|-------------|------|
| configuration | [Bugs] Fix Docs Build Problem (#97) | 2026-01-10 05:55:40 +08:00 |
| feature_guide | [Feature] support compressed-tensors w4a16 quantization (#154) | 2026-01-27 19:56:22 +08:00 |
| support_matrix | [doc] update quantization guide doc (#88) | 2026-01-07 15:39:51 +08:00 |
| release_notes.md | Initial commit for vLLM-Kunlun Plugin | 2025-12-10 12:05:39 +08:00 |