Model: mradermacher/bloom-1b7-i1-GGUF

base_model: bigscience/bloom-1b7
language: ak, ar, as, bm, bn, ca, code, en, es, eu, fon, fr, gu, hi, id, ig, ki, kn, lg, ln, ml, mr, ne, nso, ny, or, pa, pt, rn, rw, sn, st, sw, ta, te, tn, ts, tum, tw, ur, vi, wo, xh, yo, zh, zhs, zht, zu
library_name: transformers
license: bigscience-bloom-rail-1.0
quantized_by: mradermacher

About

Weighted/imatrix quants of https://huggingface.co/bigscience/bloom-1b7.

Static quants are available at https://huggingface.co/mradermacher/bloom-1b7-GGUF.
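
For background: an imatrix ("importance matrix") quant is produced by first measuring which weights matter most on a calibration text, then letting the quantizer spend its precision budget accordingly. Below is a rough sketch of that pipeline using llama.cpp's llama-imatrix and llama-quantize tools; all file names and the calibration corpus are placeholders, not the exact setup used for this repo.

```python
# Rough sketch of the imatrix quantization pipeline, assuming llama.cpp is
# built and its llama-imatrix / llama-quantize binaries are on PATH.
# All file names here are hypothetical placeholders.
import subprocess

# 1. Collect the importance matrix from a calibration corpus.
subprocess.run(
    ["llama-imatrix",
     "-m", "bloom-1b7.f16.gguf",   # full-precision source model (hypothetical name)
     "-f", "calibration.txt",      # calibration text (hypothetical)
     "-o", "imatrix.dat"],
    check=True,
)

# 2. Quantize, weighting the rounding error by the importance matrix.
subprocess.run(
    ["llama-quantize",
     "--imatrix", "imatrix.dat",
     "bloom-1b7.f16.gguf",
     "bloom-1b7.i1-Q4_K_M.gguf",
     "Q4_K_M"],                    # target quant type
    check=True,
)
```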

Usage

If you are unsure how to use GGUF files, refer to one of TheBloke's READMEs for more details, including how to concatenate multi-part files.
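
As a concrete starting point, here is a minimal sketch that downloads one quant from this repo and runs it via the llama-cpp-python bindings; the quant file name is an assumption based on this repo's naming scheme, so check the file listing for the exact name.

```python
# Minimal sketch, assuming `pip install huggingface_hub llama-cpp-python`.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

path = hf_hub_download(
    repo_id="mradermacher/bloom-1b7-i1-GGUF",
    filename="bloom-1b7.i1-Q4_K_M.gguf",  # assumed name of the recommended quant
)

llm = Llama(model_path=path, n_ctx=2048)
out = llm("The BLOOM language model is", max_tokens=64)
print(out["choices"][0]["text"])
```

If a quant comes in multiple parts, they just need to be concatenated byte-for-byte into one file before loading; a sketch with hypothetical part names:

```python
# Concatenate split GGUF parts back into a single file (hypothetical names).
import shutil

parts = ["model.gguf.part1of2", "model.gguf.part2of2"]
with open("model.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, out)
```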

Provided Quants

(sorted by size, not necessarily quality; IQ-quants are often preferable over similar-sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|------|------|---------|-------|
| GGUF | i1-IQ1_S | 0.9 | for the desperate |
| GGUF | i1-IQ1_M | 0.9 | mostly desperate |
| GGUF | i1-IQ2_XXS | 1.0 | |
| GGUF | i1-IQ2_XS | 1.0 | |
| GGUF | i1-IQ2_S | 1.1 | |
| GGUF | i1-IQ2_M | 1.1 | |
| GGUF | i1-Q2_K_S | 1.1 | very low quality |
| GGUF | i1-Q2_K | 1.2 | IQ3_XXS probably better |
| GGUF | i1-IQ3_XXS | 1.2 | lower quality |
| GGUF | i1-IQ3_XS | 1.3 | |
| GGUF | i1-IQ3_S | 1.3 | beats Q3_K* |
| GGUF | i1-Q3_K_S | 1.3 | IQ3_XS probably better |
| GGUF | i1-IQ3_M | 1.3 | |
| GGUF | i1-Q3_K_M | 1.4 | IQ3_S probably better |
| GGUF | i1-Q3_K_L | 1.4 | IQ3_M probably better |
| GGUF | i1-IQ4_XS | 1.5 | |
| GGUF | i1-IQ4_NL | 1.5 | prefer IQ4_XS |
| GGUF | i1-Q4_0 | 1.5 | fast, low quality |
| GGUF | i1-Q4_K_S | 1.5 | optimal size/speed/quality |
| GGUF | i1-Q4_K_M | 1.6 | fast, recommended |
| GGUF | i1-Q4_1 | 1.6 | |
| GGUF | i1-Q5_K_S | 1.7 | |
| GGUF | i1-Q5_K_M | 1.8 | |
| GGUF | i1-Q6_K | 1.9 | practically like static Q6_K |
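
The sizes above are download sizes and roughly bound the memory the weights occupy at load time (KV cache and runtime overhead come on top). As a worked example, here is a small helper that picks the largest listed quant fitting a given memory budget; the sizes are copied from the table, and the selection rule is only one reasonable heuristic.

```python
# Pick the largest quant from the table that fits a memory budget (in GB).
# Sizes are the download sizes listed above; treat them as a lower bound on
# the RAM/VRAM the weights will occupy.
QUANTS = [  # (type, size in GB), a subset of the table, sorted by size
    ("i1-IQ1_S", 0.9), ("i1-IQ2_XXS", 1.0), ("i1-IQ3_XXS", 1.2),
    ("i1-IQ4_XS", 1.5), ("i1-Q4_K_M", 1.6), ("i1-Q5_K_M", 1.8),
    ("i1-Q6_K", 1.9),
]

def pick_quant(budget_gb: float) -> str:
    """Return the largest quant whose weights fit within budget_gb."""
    fitting = [q for q, size in QUANTS if size <= budget_gb]
    if not fitting:
        raise ValueError("budget too small for any listed quant")
    return fitting[-1]  # list is sorted by size, so the last fit is largest

print(pick_quant(1.7))  # -> i1-Q4_K_M
```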

ikawrakow has published a handy graph comparing some lower-quality quant types (lower is better).

And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for answers to questions you might have, or to request quants of another model.

Thanks

I thank my company, nethype GmbH, for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to @nicoboss for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.