Layer-wise & pruned quantization Q5_K_S
Dolphin-Mistral-24B-Venice-Edition-pruned-Q5_K_S.gguf
Normal file
3
Dolphin-Mistral-24B-Venice-Edition-pruned-Q5_K_S.gguf
Normal file
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:fab643caf3db32222fc47668880ab7fb63d6b24b62917ed65fecd052dce5de29
+size 13905712384
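The committed file is not the model weights themselves but a Git LFS pointer: three key/value lines giving the LFS spec version, a sha256 object id, and the byte size of the actual .gguf blob. A minimal sketch of reading such a pointer (the helper name `parse_lfs_pointer` is illustrative, not part of any library):

```python
# Minimal sketch: parse the key/value lines of a Git LFS pointer file,
# like the one added in this commit, into a dictionary.
# The function name is hypothetical, not from the git-lfs tooling.

def parse_lfs_pointer(text: str) -> dict:
    """Split each 'key value' line of an LFS pointer into a dict entry."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:fab643caf3db32222fc47668880ab7fb63d6b24b62917ed65fecd052dce5de29
size 13905712384
"""

info = parse_lfs_pointer(pointer)
algo, digest = info["oid"].split(":", 1)
print(algo, len(digest), int(info["size"]))  # sha256 digest is 64 hex chars
```

When git-lfs is installed, checking out this commit replaces the pointer with the real 13,905,712,384-byte .gguf file, fetched by its sha256 oid.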