NousResearch/Hermes-3-Llama-3.1-8B AWQ

**PROCESSING... ETA: 30 mins**

About AWQ

AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. It offers faster Transformers-based inference than GPTQ, with quality equivalent to or better than the most commonly used GPTQ settings.
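
As a reference, here is a minimal loading sketch. It assumes the repo id `NousResearch/Hermes-3-Llama-3.1-8B-AWQ` (a placeholder; substitute this repo's actual id) and that the AutoAWQ backend is installed (`pip install autoawq`), which recent `transformers` versions use to run AWQ checkpoints:

```python
# Minimal sketch: load and query an AWQ-quantized Hermes 3 checkpoint.
# Assumes a CUDA-capable NVIDIA GPU and the autoawq package installed;
# transformers reads the AWQ quantization config stored in the checkpoint.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "NousResearch/Hermes-3-Llama-3.1-8B-AWQ"  # placeholder repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # AWQ kernels run non-quantized ops in fp16
    device_map="auto",          # place layers on the available GPU(s)
)

# Hermes 3 uses a ChatML prompt format; apply_chat_template builds it.
messages = [{"role": "user", "content": "Explain AWQ in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```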

AWQ models are currently supported on Linux and Windows with NVIDIA GPUs only. macOS users should use GGUF models instead.

It is supported by: