Layer-wise & pruned quantization IQ4_NL
Dolphin-Mistral-24B-Venice-Edition-pruned-IQ4_NL.gguf (new file, 3 lines)
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:065931827062c69435d6070cb52d0756d825f44dedada29d3e37393ffeb219b6
+size 11590161664
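For context, the file added by this commit is not the 11.6 GB GGUF itself but a Git LFS pointer stub: three `key value` lines naming the spec version, the SHA-256 object id, and the byte size of the real blob. A minimal sketch (not part of this commit) that parses such a pointer, using the exact contents shown in the diff:

```python
def parse_lfs_pointer(text: str) -> dict:
    """Split a git-lfs pointer file into its key/value fields.

    Each non-empty line has the form "key value" separated by the
    first space, e.g. "size 11590161664".
    """
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

# The pointer text exactly as added in this commit.
pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:065931827062c69435d6070cb52d0756d825f44dedada29d3e37393ffeb219b6
size 11590161664
"""

info = parse_lfs_pointer(pointer)
print(info["oid"])        # sha256:0659...19b6
print(int(info["size"]))  # 11590161664 bytes (~11.6 GB)
```

After `git lfs pull` materializes the real file, its `sha256sum` should match the `oid` field above.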