EngineX/xc-llm-ascend
xc-llm-ascend/vllm_ascend/quantization at commit a6f6e919e6928e8a23137970cf570eba27524fb1
Latest commit: Hexiang Wang, 0ad52517a1, Revert "Refactor quantization layer name mapping to leverage vLLM built-in mappers" (#7237), 2026-03-14 00:05:54 +08:00
Reverts vllm-project/vllm-ascend#7050, which breaks kimi-k2.5 and qwen-omni.
- vLLM version: v0.17.0
- vLLM main: 4034c3d32e
methods/                       [Feature] support aclgraph for model runner v2 (#7110)  2026-03-13 09:11:46 +08:00
__init__.py                    [Lint] Style: Convert vllm-ascend/ to ruff format (Batch #7) (#6023)  2026-02-06 14:56:53 +08:00
compressed_tensors_config.py   [Lint] Style: Convert vllm-ascend/ to ruff format (Batch #7) (#6023)  2026-02-06 14:56:53 +08:00
method_adapters.py             [Bugfix] Fix w4a8 weight loading failure when EP is not enabled (#7090)  2026-03-10 16:57:05 +08:00
modelslim_config.py            Revert "Refactor quantization layer name mapping to leverage vLLM built-in mappers" (#7237)  2026-03-14 00:05:54 +08:00
quant_parser.py                [misc] move mxfp_compat into device to decouple from quantization init chain (#6918)  2026-03-02 18:17:01 +08:00
utils.py                       [Feature][Quant] Reapply auto-detect quantization format and support remote model ID (#7111)  2026-03-13 22:53:25 +08:00
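
The files above make up vllm-ascend's quantization layer (ModelSlim and compressed-tensors configs, method adapters, and the quant parser). As a minimal sketch of how this code is typically reached, assuming vllm-ascend is installed on an Ascend NPU host and using a hypothetical model path, the quantization plugin is selected through vLLM's standard entry point:

    # Minimal sketch; the model path is hypothetical and quantization="ascend"
    # is the setting vllm-ascend documents for ModelSlim-quantized weights,
    # which routes loading through the configs in this directory.
    from vllm import LLM, SamplingParams

    llm = LLM(
        model="/path/to/a-modelslim-quantized-model",  # hypothetical
        quantization="ascend",
    )
    outputs = llm.generate(["Hello"], SamplingParams(max_tokens=16))
    print(outputs[0].outputs[0].text)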