Layer-wise & pruned quantization IQ3_M
Dolphin-Mistral-24B-Venice-Edition-pruned-IQ3_M.gguf (new file, 3 lines)
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0aef3c959bb993ff6b3436ceba8f6c84380632e35024212463d2c1b9d87c649a
+size 9551090944
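The file added by this commit is not the model weights themselves but a Git LFS pointer: three `key value` lines recording the spec version, the SHA-256 of the real object, and its size in bytes. A minimal Python sketch of how such a pointer can be parsed and a downloaded file checked against it (the helper names `parse_lfs_pointer` and `verify_lfs_object` are illustrative, not part of any repo tooling):

```python
import hashlib
import os

def parse_lfs_pointer(text):
    """Split a Git LFS pointer file into its key/value fields."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

# The pointer content from this commit.
pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:0aef3c959bb993ff6b3436ceba8f6c84380632e35024212463d2c1b9d87c649a
size 9551090944
"""

fields = parse_lfs_pointer(pointer)
expected_oid = fields["oid"].removeprefix("sha256:")
expected_size = int(fields["size"])  # ~9.55 GB for the IQ3_M quant

def verify_lfs_object(path, expected_oid, expected_size):
    """Return True if the file at `path` matches the pointer's size and hash."""
    if os.path.getsize(path) != expected_size:
        return False
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Hash in 1 MiB chunks so multi-GB files don't need to fit in memory.
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest() == expected_oid
```

Running `verify_lfs_object` on the downloaded `.gguf` file confirms the fetch was complete and uncorrupted before loading it into an inference runtime.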