---
license: mit
language:
- en
pipeline_tag: text-generation
---

# ZeroWw/Gemmasutra-Mini-2B-v1-GGUF
My own (ZeroWw) quantizations. The output and embedding tensors are quantized to f16; all other tensors are quantized to q5_k or q6_k.

Result: both the f16.q6 and f16.q5 variants are smaller than the standard q8_0 quantization, and they perform as well as pure f16.
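A mixed quantization like this can be produced with llama.cpp's `llama-quantize` tool, which accepts per-tensor type overrides for the output and token-embedding tensors. The sketch below is illustrative, not the exact command used for this repo; the input/output filenames are assumptions.

```shell
# Sketch: quantize a full-precision GGUF to q6_k while keeping the
# output and token-embedding tensors at f16 (filenames are hypothetical).
./llama-quantize \
  --output-tensor-type f16 \
  --token-embedding-type f16 \
  Gemmasutra-Mini-2B-v1.f16.gguf \
  Gemmasutra-Mini-2B-v1.f16.q6.gguf \
  Q6_K
```

Swapping `Q6_K` for `Q5_K` yields the f16.q5 variant in the same way.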
Updated on: Sat Aug 03, 17:55:22
## Description