Layer-wise & pruned quantization Q8_0
Dolphin-Mistral-24B-Venice-Edition-pruned-Q8_0.gguf (new file, 3 lines)
3
Dolphin-Mistral-24B-Venice-Edition-pruned-Q8_0.gguf
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:79e735a0efaf45354e9cb9d7b38e4309a4d7c528a741fc0d1a0cb4ab817181b9
size 21883573504
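The file added by this commit is a git-lfs v1 pointer, not the model weights themselves: the ~21 GB `.gguf` blob lives in LFS storage, and git tracks only the three-line pointer (spec version, sha256 oid, byte size). A minimal sketch, using a tiny stand-in blob and hypothetical filenames (`blob.bin`, `pointer.txt`), of how a downloaded object can be checked against the oid recorded in such a pointer:

```shell
# Hypothetical filenames: blob.bin stands in for the real .gguf object,
# which `git lfs pull` would fetch from LFS storage.
set -eu
printf 'demo weights\n' > blob.bin
cat > pointer.txt <<EOF
version https://git-lfs.github.com/spec/v1
oid sha256:$(sha256sum blob.bin | awk '{print $1}')
size $(wc -c < blob.bin)
EOF
# Compare the oid recorded in the pointer with the blob's actual digest.
expected=$(awk -F'sha256:' '/^oid/ {print $2}' pointer.txt)
actual=$(sha256sum blob.bin | awk '{print $1}')
[ "$expected" = "$actual" ] && echo "oid OK"
```

In practice `git lfs pointer --file <blob>` regenerates a pointer for comparison, and git-lfs performs this integrity check itself on checkout; the sketch only makes the pointer-to-blob relationship explicit.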