EngineX / xc-llm-kunlun
Commit: b3c30a3cb9790baaf2b7407c385a7c620d90e6e8
Path: xc-llm-kunlun / vllm_kunlun / ops / quantization
Latest commit: 6546323c71 by Li Wei, "[dev] support AWQ/GPTQ quantization for dense models", 2025-12-24 13:46:06 +08:00
__init__.py               | Initial commit for vLLM-Kunlun Plugin                    | 2025-12-10 12:05:39 +08:00
awq.py                    | [dev] support AWQ/GPTQ quantization for dense models     | 2025-12-24 13:46:06 +08:00
compressed_tensors_moe.py | Commit vllm 0.11.0 development branch                    | 2025-12-10 17:51:24 +08:00
gptq.py                   | [dev] support AWQ/GPTQ quantization for dense models     | 2025-12-24 13:46:06 +08:00