ModelHub XC 48fa58ab42 initial project import; model provided by the ModelHub XC community
Model: Lewdiculous/Eris_PrimeV4-Vision-7B-GGUF-IQ-Imatrix
Source: Original Platform
2026-05-15 13:10:35 +08:00

Tags:
experimental
testing
gguf
roleplay
quantized
mistral
text-generation-inference

These are quants for an experimental model.

     "Q4_K_M", "Q4_K_S", "IQ4_XS", "Q5_K_M", "Q5_K_S",
     "Q6_K", "Q8_0", "IQ3_M", "IQ3_S", "IQ3_XXS"

Original model weights:
https://huggingface.co/Nitral-AI/Eris_PrimeV4-Vision-7B


Vision/multimodal capabilities:

[Screenshot: example of how vision works in practice in a roleplay chat]


[Screenshot: recommended SillyTavern Image Captions extension settings]


If you want to use vision functionality:

  • Make sure you are using the latest version of KoboldCpp.

To use the multimodal capabilities of this model, such as vision, you also need to load the specified mmproj file. It is hosted in this repository, inside the mmproj folder.
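If you prefer the command line, a sketch of fetching the mmproj folder with `huggingface-cli` (assuming the source repository on Hugging Face; the exact filename inside `mmproj/` may differ):

```shell
# Download only the mmproj folder from the source repository.
# Requires the huggingface_hub package (`pip install huggingface_hub`).
huggingface-cli download Lewdiculous/Eris_PrimeV4-Vision-7B-GGUF-IQ-Imatrix \
  --include "mmproj/*" \
  --local-dir .
```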

  • You can load the mmproj by using the corresponding section in the interface:


  • For CLI users, you can load the mmproj file by adding the respective flag to your usual command:
--mmproj your-mmproj-file.gguf
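Putting it together, a minimal KoboldCpp launch might look like the following; the model and mmproj filenames are placeholders, so substitute the quant and mmproj files you actually downloaded:

```shell
# Hypothetical KoboldCpp invocation with vision support enabled.
# --model and --mmproj point at your downloaded GGUF files.
python koboldcpp.py \
  --model Eris_PrimeV4-Vision-7B-Q4_K_M-imat.gguf \
  --mmproj mmproj-model-f16.gguf \
  --contextsize 8192
```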

Quantization information:

Steps performed:

Base ⇢ GGUF (F16) ⇢ Imatrix-Data (F16) ⇢ GGUF (Imatrix-Quants)

Using the latest llama.cpp at the time.
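The pipeline above can be sketched with the llama.cpp tooling of that era; paths, the calibration text file, and output names below are placeholders, not the exact commands used by the author:

```shell
# 1. Convert the base HF model to an F16 GGUF.
python convert-hf-to-gguf.py ./Eris_PrimeV4-Vision-7B \
  --outtype f16 --outfile model-f16.gguf

# 2. Compute importance-matrix data against the F16 GGUF
#    using a calibration text file.
./imatrix -m model-f16.gguf -f calibration-data.txt -o imatrix.dat

# 3. Produce an imatrix-aware quant (e.g. IQ4_XS) from the F16 GGUF.
./quantize --imatrix imatrix.dat model-f16.gguf model-IQ4_XS.gguf IQ4_XS
```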
