EngineX / xc-llm-kunlun
docs/source at commit 71bd70ad6c82519951b95b2bb898ee165191d7bf
Latest commit 71bd70ad6c by Li Wei: [Feature] support compressed-tensors w4a16 quantization (#154)
native int4 Kimi model inference is supported
Signed-off-by: Li Wei <liwei.109@outlook.com>
2026-01-27 19:56:22 +08:00
| Name | Last commit | Date |
| --- | --- | --- |
| _templates/sections | Initial commit for vLLM-Kunlun Plugin | 2025-12-10 12:05:39 +08:00 |
| community | [Bugs] Fix Docs Build Problem (#97) | 2026-01-10 05:55:40 +08:00 |
| developer_guide | [Doc] Optimize the document (#136) | 2026-01-22 14:12:44 +08:00 |
| locale/zh_CN/LC_MESSAGES | Commit the vllm0.11.0 development branch | 2025-12-10 17:51:24 +08:00 |
| logos | Initial commit for vLLM-Kunlun Plugin | 2025-12-10 12:05:39 +08:00 |
| tutorials | [Doc] Optimize the document (#136) | 2026-01-22 14:12:44 +08:00 |
| user_guide | [Feature] support compressed-tensors w4a16 quantization (#154) | 2026-01-27 19:56:22 +08:00 |
| conf.py | [Docs] Fix v0.11.0 Docs config | 2026-01-09 17:07:18 +08:00 |
| faqs.md | [Bugs] Fix Docs Build Problem (#97) | 2026-01-10 05:55:40 +08:00 |
| index.md | [Bugs] Fix Docs Build Problem (#97) | 2026-01-10 05:55:40 +08:00 |
| installation.md | [Doc] update base image url (1. Replace conda with uv; 2. Integrate xpytorch and ops into the image.) (#146) | 2026-01-23 18:55:56 +08:00 |
| quick_start.md | Commit the vllm0.11.0 development branch | 2025-12-10 17:51:24 +08:00 |