Initialize project; model provided by the ModelHub XC community

Model: mradermacher/Llama-3.1-EstLLM-8B-Instruct-0825-i1-GGUF
Source: Original Platform
Author: ModelHub XC
Date: 2026-05-14 16:13:10 +08:00
Commit: d9512c1a79
27 changed files with 225 additions and 0 deletions

.gitattributes (vendored, new file, 60 lines added)

@@ -0,0 +1,60 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
Llama-3.1-EstLLM-8B-Instruct-0825.imatrix.gguf filter=lfs diff=lfs merge=lfs -text
Llama-3.1-EstLLM-8B-Instruct-0825.i1-Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
Llama-3.1-EstLLM-8B-Instruct-0825.i1-IQ3_M.gguf filter=lfs diff=lfs merge=lfs -text
Llama-3.1-EstLLM-8B-Instruct-0825.i1-Q4_K_S.gguf filter=lfs diff=lfs merge=lfs -text
Llama-3.1-EstLLM-8B-Instruct-0825.i1-IQ3_XXS.gguf filter=lfs diff=lfs merge=lfs -text
Llama-3.1-EstLLM-8B-Instruct-0825.i1-Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
Llama-3.1-EstLLM-8B-Instruct-0825.i1-IQ4_NL.gguf filter=lfs diff=lfs merge=lfs -text
Llama-3.1-EstLLM-8B-Instruct-0825.i1-Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
Llama-3.1-EstLLM-8B-Instruct-0825.i1-IQ2_M.gguf filter=lfs diff=lfs merge=lfs -text
Llama-3.1-EstLLM-8B-Instruct-0825.i1-Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
Llama-3.1-EstLLM-8B-Instruct-0825.i1-IQ4_XS.gguf filter=lfs diff=lfs merge=lfs -text
Llama-3.1-EstLLM-8B-Instruct-0825.i1-Q2_K_S.gguf filter=lfs diff=lfs merge=lfs -text
Llama-3.1-EstLLM-8B-Instruct-0825.i1-IQ1_M.gguf filter=lfs diff=lfs merge=lfs -text
Llama-3.1-EstLLM-8B-Instruct-0825.i1-Q3_K_S.gguf filter=lfs diff=lfs merge=lfs -text
Llama-3.1-EstLLM-8B-Instruct-0825.i1-IQ2_XXS.gguf filter=lfs diff=lfs merge=lfs -text
Llama-3.1-EstLLM-8B-Instruct-0825.i1-Q3_K_L.gguf filter=lfs diff=lfs merge=lfs -text
Llama-3.1-EstLLM-8B-Instruct-0825.i1-IQ2_XS.gguf filter=lfs diff=lfs merge=lfs -text
Llama-3.1-EstLLM-8B-Instruct-0825.i1-Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
Llama-3.1-EstLLM-8B-Instruct-0825.i1-IQ2_S.gguf filter=lfs diff=lfs merge=lfs -text
Llama-3.1-EstLLM-8B-Instruct-0825.i1-IQ1_S.gguf filter=lfs diff=lfs merge=lfs -text
Llama-3.1-EstLLM-8B-Instruct-0825.i1-Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
Llama-3.1-EstLLM-8B-Instruct-0825.i1-Q4_0.gguf filter=lfs diff=lfs merge=lfs -text
Llama-3.1-EstLLM-8B-Instruct-0825.i1-IQ3_XS.gguf filter=lfs diff=lfs merge=lfs -text
Llama-3.1-EstLLM-8B-Instruct-0825.i1-Q4_1.gguf filter=lfs diff=lfs merge=lfs -text
Llama-3.1-EstLLM-8B-Instruct-0825.i1-IQ3_S.gguf filter=lfs diff=lfs merge=lfs -text
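The `.gitattributes` entries above are standard gitattributes glob patterns that route matching files through the Git LFS filter. As a rough sketch of how such patterns select files, Python's `fnmatch` approximates (but does not exactly replicate) gitattributes matching:

```python
from fnmatch import fnmatch

# A few patterns taken from the .gitattributes above; explicit filenames
# match themselves, globs match by wildcard.
lfs_patterns = [
    "*.bin",
    "*.safetensors",
    "*tfevents*",
    "Llama-3.1-EstLLM-8B-Instruct-0825.i1-Q4_K_M.gguf",
]

def tracked_by_lfs(filename: str) -> bool:
    """Return True if the filename matches any LFS-tracked pattern."""
    return any(fnmatch(filename, pat) for pat in lfs_patterns)

print(tracked_by_lfs("Llama-3.1-EstLLM-8B-Instruct-0825.i1-Q4_K_M.gguf"))  # True
print(tracked_by_lfs("README.md"))  # False
```

In practice these lines are generated by `git lfs track`; matched files are stored in LFS and only small pointer stubs live in the Git history, as the file contents below show.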


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:4eebf661791dbe097d87152fff7b093b0df54012ddebbe1fe25808fa1fb03632
size 2161978400
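Each of the stubs in this commit is a Git LFS pointer file: three `key value` lines giving the spec version, the SHA-256 object ID, and the size of the real file in bytes. A minimal sketch of parsing one (using the pointer above):

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a Git LFS pointer file into its key/value fields."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:4eebf661791dbe097d87152fff7b093b0df54012ddebbe1fe25808fa1fb03632
size 2161978400"""

info = parse_lfs_pointer(pointer)
print(info["size"])  # 2161978400 (bytes, about 2.2 GB)
```

The filenames for the pointer stubs below were lost in extraction, so only the pointer contents remain.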


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8f1b8b239907903edef05d64be6b413a58df5ed59533a669ecd6c9bff57b7f27
size 2019634208


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7efc0437a69767e59db537209f42b3f993ed8fc88831584374b7990e2fdd253e
size 2948287520


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:16dc3098c1925946fea3c583278daeb266f7bd8aadd867779a53afa9eb8767fc
size 2758495264


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:4f2e4b34d567ae99e6acceeef6bcd1e95ed588b18df36fd3fd7dc70dd3c24a86
size 2605788192


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c3d1da0bec28d2f123d3f8e7dc74e97dce465015d342de36ca938ec5d1b2e5dd
size 2399218720


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:3a186a01f1bb4e98ec3458e0279a45f64b365c896bfadeba6fd2fbed010cf56a
size 3784829984


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:18a1533d9e096bf2e6921cd36beb5c1602fadfcd66bde9a63b21909f36e3f3c9
size 3682331680


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:b09f2ae62b8db51d953efd04712aaf2009855efd884fbd5ed2c2f0899dbf409e
size 3518753824


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:0623292974658cae86eec3378c6111232ba0d3028feb3c02676183f5b73a5198
size 3274918944


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ed8e78402abf51ea55ca8aa69c6abb99efc711f796005dd7887b8eaf93572751
size 4677995552


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:3ffb2c24f5ee3f37cf7938ed48ba0e5524bf749766a2f852bd2925c4dd24cb19
size 4447669280


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e0da84232ca85e8d1d1b2ca79f37c05a482b3b77f54a6f7be5413195f1e5a7d8
size 3179138080


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:343e3c59cc95ebd6ec977855285967a491b711ba677e668a34fc4ee88d920413
size 2988821536


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:032a33c9e1026b37c95059e226d4961550a88faf3cb12826fdf0c96c649c9c55
size 4321963040


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7c05d64d3fe91b85bb5474c3a2e13b7a50e8f9bee5969255d26cafd0aad80036
size 4018924576


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7e2092018f0e1d7baaac833385feb51461095a64324ca7cafde935ba784427c8
size 3664505888


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:776203eec46c816310afec8c81939996f1edef0592204a47284283e1d741ef76
size 4675898400


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:dbf751ea6d3e8698a8758eeeaf53600792fa5194595d09cb1095ccfc54b0dbda
size 5130259488


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:85be7751b1bb20c22614aac5744d61c078824fd4fbd94c039d282d4b8adda6c1
size 4920740896


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a482e34930dc7fe2bca349a03d1156cdf1113cf75a3d0da6c7eaf4d5785e7a3a
size 4692675616


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d9d5bb52842a29d40df2b8ce45b6c68b115db70ce0611d5e3a318a88894ce041
size 5732994080


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:88fcf46797fe7be3fdc32aa46ec1ca46964d877d4b566b673cf03507bb5fdb94
size 5599300640


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:6a1e48a2f0a5b4d012c7b386b75aed06dff97b5ae35ef25a64c50018fd2d642f
size 6596013088


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:16dd2112a736b448adb4bc8795de37ca628be2b3845ce7fe66b99ceebf1ddbc0
size 5015200

README.md (new file, 90 lines added)

@@ -0,0 +1,90 @@
---
base_model: tartuNLP/Llama-3.1-EstLLM-8B-Instruct-0825
datasets:
- nvidia/HelpSteer3
- allenai/tulu-3-sft-mixture
- utter-project/EuroBlocks-SFT-Synthetic-1124
language:
- et
- en
library_name: transformers
license: llama3.1
mradermacher:
readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
weighted/imatrix quants of https://huggingface.co/tartuNLP/Llama-3.1-EstLLM-8B-Instruct-0825
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Llama-3.1-EstLLM-8B-Instruct-0825-i1-GGUF).***
static quants are available at https://huggingface.co/mradermacher/Llama-3.1-EstLLM-8B-Instruct-0825-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
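Multi-part GGUF files are rejoined by plain byte concatenation. A minimal sketch (the filenames here are illustrative stand-ins, not the part names used by this repository):

```python
from pathlib import Path

def concat_parts(parts: list[str], output: str) -> None:
    """Join split GGUF parts by simple byte concatenation, in order."""
    with open(output, "wb") as out:
        for part in parts:
            out.write(Path(part).read_bytes())

# Tiny stand-in parts for demonstration; real parts are gigabytes each.
Path("model.gguf.part1of2").write_bytes(b"AAA")
Path("model.gguf.part2of2").write_bytes(b"BBB")
concat_parts(["model.gguf.part1of2", "model.gguf.part2of2"], "model.gguf")
print(Path("model.gguf").read_bytes())  # b'AAABBB'
```

The same operation is commonly done with `cat part1 part2 > model.gguf` on the command line.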
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-EstLLM-8B-Instruct-0825-i1-GGUF/resolve/main/Llama-3.1-EstLLM-8B-Instruct-0825.imatrix.gguf) | imatrix | 0.1 | imatrix file (for creating your own quants) |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-EstLLM-8B-Instruct-0825-i1-GGUF/resolve/main/Llama-3.1-EstLLM-8B-Instruct-0825.i1-IQ1_S.gguf) | i1-IQ1_S | 2.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-EstLLM-8B-Instruct-0825-i1-GGUF/resolve/main/Llama-3.1-EstLLM-8B-Instruct-0825.i1-IQ1_M.gguf) | i1-IQ1_M | 2.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-EstLLM-8B-Instruct-0825-i1-GGUF/resolve/main/Llama-3.1-EstLLM-8B-Instruct-0825.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-EstLLM-8B-Instruct-0825-i1-GGUF/resolve/main/Llama-3.1-EstLLM-8B-Instruct-0825.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-EstLLM-8B-Instruct-0825-i1-GGUF/resolve/main/Llama-3.1-EstLLM-8B-Instruct-0825.i1-IQ2_S.gguf) | i1-IQ2_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-EstLLM-8B-Instruct-0825-i1-GGUF/resolve/main/Llama-3.1-EstLLM-8B-Instruct-0825.i1-IQ2_M.gguf) | i1-IQ2_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-EstLLM-8B-Instruct-0825-i1-GGUF/resolve/main/Llama-3.1-EstLLM-8B-Instruct-0825.i1-Q2_K_S.gguf) | i1-Q2_K_S | 3.1 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-EstLLM-8B-Instruct-0825-i1-GGUF/resolve/main/Llama-3.1-EstLLM-8B-Instruct-0825.i1-Q2_K.gguf) | i1-Q2_K | 3.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-EstLLM-8B-Instruct-0825-i1-GGUF/resolve/main/Llama-3.1-EstLLM-8B-Instruct-0825.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-EstLLM-8B-Instruct-0825-i1-GGUF/resolve/main/Llama-3.1-EstLLM-8B-Instruct-0825.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-EstLLM-8B-Instruct-0825-i1-GGUF/resolve/main/Llama-3.1-EstLLM-8B-Instruct-0825.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-EstLLM-8B-Instruct-0825-i1-GGUF/resolve/main/Llama-3.1-EstLLM-8B-Instruct-0825.i1-IQ3_S.gguf) | i1-IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-EstLLM-8B-Instruct-0825-i1-GGUF/resolve/main/Llama-3.1-EstLLM-8B-Instruct-0825.i1-IQ3_M.gguf) | i1-IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-EstLLM-8B-Instruct-0825-i1-GGUF/resolve/main/Llama-3.1-EstLLM-8B-Instruct-0825.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.1 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-EstLLM-8B-Instruct-0825-i1-GGUF/resolve/main/Llama-3.1-EstLLM-8B-Instruct-0825.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.4 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-EstLLM-8B-Instruct-0825-i1-GGUF/resolve/main/Llama-3.1-EstLLM-8B-Instruct-0825.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-EstLLM-8B-Instruct-0825-i1-GGUF/resolve/main/Llama-3.1-EstLLM-8B-Instruct-0825.i1-Q4_0.gguf) | i1-Q4_0 | 4.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-EstLLM-8B-Instruct-0825-i1-GGUF/resolve/main/Llama-3.1-EstLLM-8B-Instruct-0825.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.8 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-EstLLM-8B-Instruct-0825-i1-GGUF/resolve/main/Llama-3.1-EstLLM-8B-Instruct-0825.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-EstLLM-8B-Instruct-0825-i1-GGUF/resolve/main/Llama-3.1-EstLLM-8B-Instruct-0825.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-EstLLM-8B-Instruct-0825-i1-GGUF/resolve/main/Llama-3.1-EstLLM-8B-Instruct-0825.i1-Q4_1.gguf) | i1-Q4_1 | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-EstLLM-8B-Instruct-0825-i1-GGUF/resolve/main/Llama-3.1-EstLLM-8B-Instruct-0825.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-EstLLM-8B-Instruct-0825-i1-GGUF/resolve/main/Llama-3.1-EstLLM-8B-Instruct-0825.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-EstLLM-8B-Instruct-0825-i1-GGUF/resolve/main/Llama-3.1-EstLLM-8B-Instruct-0825.i1-Q6_K.gguf) | i1-Q6_K | 6.7 | practically like static Q6_K |
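As a rough sanity check on the sizes above, the effective bits per weight of a quant can be estimated from its file size and the model's parameter count (taken here as roughly 8 billion for an 8B Llama; this is an illustrative approximation, not an exact measure, since GGUF files also carry metadata and mixed tensor types):

```python
def bits_per_weight(file_size_gb: float, n_params_billion: float) -> float:
    """Rough effective bits per weight: total file bits / parameter count."""
    return file_size_gb * 8 / n_params_billion

# Q4_K_M is listed at 5.0 GB; assuming ~8 billion parameters:
print(round(bits_per_weight(5.0, 8.0), 2))  # 5.0
# Q2_K at 3.3 GB works out to roughly 3.3 bits per weight.
print(round(bits_per_weight(3.3, 8.0), 2))
```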
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)
And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->