---
license: mit
language:
- en
pipeline_tag: text-generation
---

My own (ZeroWw) quantizations. Output and embed tensors are quantized to f16; all other tensors are quantized to q5_k or q6_k.

Result: both f16.q6 and f16.q5 are smaller than the standard q8_0 quantization, and they perform as well as the pure f16.

Updated on: Thu Aug 22, 13:56:30

## Description
Model synced from source: ZeroWw/Phi-3.5-mini-instruct_Uncensored-GGUF