---
license: mit
language:
- en
pipeline_tag: text-generation
---

Model: ZeroWw/Phi-3.5-mini-instruct_Uncensored-GGUF
My own (ZeroWw) quantizations: the output and embed tensors are kept at f16, while all other tensors are quantized to q5_k or q6_k.
Result: both f16.q6 and f16.q5 are smaller than the standard q8_0 quantization, and they perform as well as the pure f16.
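For reference, below is a minimal sketch of how such a mixed-precision GGUF could be produced with llama.cpp's llama-quantize tool. The binary path and file names are hypothetical, and the flag names are assumptions based on recent llama.cpp builds, not the exact commands used for these files.

```python
# Minimal sketch (assumptions: llama-quantize binary in the current
# directory, flag names from recent llama.cpp builds, hypothetical
# file names). Keeps the output/embed tensors at f16 and quantizes
# the remaining tensors to q5_k or q6_k, as described above.
import subprocess

SRC = "Phi-3.5-mini-instruct_Uncensored.f16.gguf"  # hypothetical f16 source file

for ktype in ("q5_k", "q6_k"):
    dst = f"Phi-3.5-mini-instruct_Uncensored.f16.{ktype}.gguf"
    subprocess.run(
        [
            "./llama-quantize",
            "--token-embedding-type", "f16",  # keep embed tensor at f16
            "--output-tensor-type", "f16",    # keep output tensor at f16
            SRC,
            dst,
            ktype.upper(),  # Q5_K / Q6_K applied to all other tensors
        ],
        check=True,
    )
```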
Updated on: Thu Aug 22, 13:56:30
Description