Initialize project; model provided by the ModelHub XC community

Model: Lewdiculous/Eris_PrimeV4-Vision-7B-GGUF-IQ-Imatrix
Source: Original Platform
This commit is contained in:
ModelHub XC
2026-05-15 13:10:35 +08:00
commit 48fa58ab42
16 changed files with 2494 additions and 0 deletions

.gitattributes vendored Normal file

@@ -0,0 +1,48 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
Eris_PrimeV4-Vision-7B-F16.gguf filter=lfs diff=lfs merge=lfs -text
Eris_PrimeV4-Vision-7B-IQ3_M-imat.gguf filter=lfs diff=lfs merge=lfs -text
Eris_PrimeV4-Vision-7B-IQ3_S-imat.gguf filter=lfs diff=lfs merge=lfs -text
Eris_PrimeV4-Vision-7B-IQ3_XXS-imat.gguf filter=lfs diff=lfs merge=lfs -text
Eris_PrimeV4-Vision-7B-IQ4_XS-imat.gguf filter=lfs diff=lfs merge=lfs -text
Eris_PrimeV4-Vision-7B-Q4_K_M-imat.gguf filter=lfs diff=lfs merge=lfs -text
Eris_PrimeV4-Vision-7B-Q4_K_S-imat.gguf filter=lfs diff=lfs merge=lfs -text
Eris_PrimeV4-Vision-7B-Q5_K_M-imat.gguf filter=lfs diff=lfs merge=lfs -text
Eris_PrimeV4-Vision-7B-Q5_K_S-imat.gguf filter=lfs diff=lfs merge=lfs -text
Eris_PrimeV4-Vision-7B-Q6_K-imat.gguf filter=lfs diff=lfs merge=lfs -text
Eris_PrimeV4-Vision-7B-Q8_0-imat.gguf filter=lfs diff=lfs merge=lfs -text
imatrix.dat filter=lfs diff=lfs merge=lfs -text
mmproj/mmproj-model-f16.gguf filter=lfs diff=lfs merge=lfs -text


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a8b1602ccfd86cc7f72aa7bc9f27cf31a2c19be5038a6fd4deeebefcdff66f34
size 14484731648


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a7389a845a706792ff4a4926350b1e020ab7db4f5e3c65ae8f2449513759427f
size 3284891456


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:4eb35207741778b717baa47365e381b703bedcd2c94fd4751b7f59f5a38cad0b
size 3182393152


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:fd4768bc794b3be1e43b1408ef8c77089c397dff89b6b57081e026e5d2bfaff8
size 2827343680


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:2d4839c6b7b3810b106447a37fc8d0524158c906409f3b13b2f44c55273426f0
size 3907688256


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:854bbc911ee893caef7b08dd6d25e36717754a3f08fe7190488884097dc73819
size 4368439104


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:9d257ab5b4997d7cf3ef708c6bf7ea2932911902ffe272146e071875f466ad51
size 4140373824


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8b0b537aaee76452ccf44e29a853bdb75810cec7870ab272397ca469cf363435
size 5131409216


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:1a86efa6e9345fa82573ba1101b4d12999e44c68dc6ab8b001f45924f580fa2b
size 4997715776


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:4d310c7c2fb94e1c29eade17c211a1b25a40454a714e89398bef1b3290e656aa
size 5942064960


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7e53b4008a8db11ea6853057e19e5259a96000f8d4268f4e6141ea0bac3c20ac
size 7695857472

README.md Normal file

@@ -0,0 +1,61 @@
---
tags:
- experimental
- testing
- gguf
- roleplay
- quantized
- mistral
- text-generation-inference
---
**These are quants for an experimental model.**
"Q4_K_M", "Q4_K_S", "IQ4_XS", "Q5_K_M", "Q5_K_S",
"Q6_K", "Q8_0", "IQ3_M", "IQ3_S", "IQ3_XXS"
Original model weights: <br> https://huggingface.co/Nitral-AI/Eris_PrimeV4-Vision-7B
![image/png](https://cdn-uploads.huggingface.co/production/uploads/65d4cf2693a0a3744a27536c/5_Pr7t9cD4MBZRkJ4hwpF.png)
# Vision/multimodal capabilities:
<details><summary>
Click here to see how this would work in practice in a roleplay chat.
</summary>
![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/65d4cf2693a0a3744a27536c/qGO0nIfZVcyuio5J07sU-.jpeg)
</details><br>
<details><summary>
Click here to see what your SillyTavern Image Captions extension settings should look like.
</summary>
![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/65d4cf2693a0a3744a27536c/UpXOnVrzvsMRYeqMaSOaa.jpeg)
</details><br>
**If you want to use vision functionality:**
* Make sure you are using the latest version of [KoboldCpp](https://github.com/LostRuins/koboldcpp).
To use the multimodal capabilities of this model, such as **vision**, you also need to load the specified **mmproj** file. You can get it [here](https://huggingface.co/cjpais/llava-1.6-mistral-7b-gguf/blob/main/mmproj-model-f16.gguf); it is also hosted in this repository inside the **mmproj** folder.
* You can load the **mmproj** by using the corresponding section in the interface:
![image/png](https://cdn-uploads.huggingface.co/production/uploads/65d4cf2693a0a3744a27536c/UX6Ubss2EPNAT3SKGMLe0.png)
* For CLI users, you can load the **mmproj file** by adding the respective flag to your usual command:
```
--mmproj your-mmproj-file.gguf
```
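Putting the pieces together, a full launch command might look like the following. This is only an illustrative sketch assuming a local KoboldCpp checkout; the model file shown is one of the quants from this repository, and the mmproj path matches the repository's **mmproj** folder:
```
python koboldcpp.py --model Eris_PrimeV4-Vision-7B-Q4_K_M-imat.gguf --mmproj mmproj/mmproj-model-f16.gguf
```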
# Quantization information:
**Steps performed:**
```
Base⇢ GGUF(F16)⇢ Imatrix-Data(F16)⇢ GGUF(Imatrix-Quants)
```
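As a rough sketch, the steps above would correspond to the following llama.cpp commands. This is an assumption based on the tool names llama.cpp used at the time (`convert.py`, `imatrix`, `quantize`); exact script names and flags vary between versions, and the calibration file shown is the `imatrix-with-rp-data.txt` included in this repository:
```
# Base -> GGUF(F16): convert the original weights to a full-precision GGUF
python convert.py Eris_PrimeV4-Vision-7B --outtype f16 --outfile Eris_PrimeV4-Vision-7B-F16.gguf

# Imatrix-Data(F16): compute the importance matrix from calibration text
./imatrix -m Eris_PrimeV4-Vision-7B-F16.gguf -f imatrix-with-rp-data.txt -o imatrix.dat

# GGUF(Imatrix-Quants): produce an imatrix-aware quant, e.g. IQ4_XS
./quantize --imatrix imatrix.dat Eris_PrimeV4-Vision-7B-F16.gguf Eris_PrimeV4-Vision-7B-IQ4_XS-imat.gguf IQ4_XS
```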
*Using the latest llama.cpp at the time.*

imatrix-with-rp-data.txt Normal file

File diff suppressed because it is too large.

imatrix.dat Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:9a55f286cffd6ed2a3726ea186b5ce16df77f60aaa75a11b63ab938e3c9af129
size 4988126


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:00205ee8a0d7a381900cd031e43105f86aa0d8c07bf329851e85c71a26632d16
size 624451168