Layer-wise & pruned quantization Q4_K_M
Dolphin-Mistral-24B-Venice-Edition-pruned-Q4_K_M.gguf (new file, +3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:93460770f3a0cd61c1b343128b29d07e886ccbca03788c6d57f6b5d9322558ea
size 12390356224
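The three lines in the diff are a Git LFS v1 pointer, not the model itself: the real 12 GB GGUF blob is stored in LFS and the repo tracks only its `version`, `oid` (a sha256 digest), and `size`. A minimal sketch of parsing such a pointer, assuming the standard `key value` line format (the `parse_lfs_pointer` helper is illustrative, not part of any library):

```python
def parse_lfs_pointer(text: str) -> dict:
    """Split a Git LFS v1 pointer file into its key/value fields.

    Each non-empty line has the form "<key> <value>", e.g.
    "size 12390356224".
    """
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields


# Pointer contents from the diff above.
pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:93460770f3a0cd61c1b343128b29d07e886ccbca03788c6d57f6b5d9322558ea
size 12390356224
"""

info = parse_lfs_pointer(pointer)
algo, digest = info["oid"].split(":", 1)
print(algo, len(digest), int(info["size"]))
# A sha256 hex digest is always 64 characters; the size is the
# byte count of the actual .gguf blob fetched by `git lfs pull`.
```

After downloading the blob, its integrity can be checked by comparing `hashlib.sha256` over the file against the `oid` digest.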