Initialize the project; model provided by the ModelHub XC community

Model: mradermacher/PA-RAG_Meta-Llama-3-8B-Instruct-i1-GGUF
Source: Original Platform
ModelHub XC
2026-04-30 05:08:51 +08:00
commit 7a7c1f80a0
27 changed files with 212 additions and 0 deletions

.gitattributes (vendored, new file, 60 lines)

@@ -0,0 +1,60 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
imatrix.dat filter=lfs diff=lfs merge=lfs -text
PA-RAG_Meta-Llama-3-8B-Instruct.i1-Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
PA-RAG_Meta-Llama-3-8B-Instruct.i1-IQ3_M.gguf filter=lfs diff=lfs merge=lfs -text
PA-RAG_Meta-Llama-3-8B-Instruct.i1-Q4_K_S.gguf filter=lfs diff=lfs merge=lfs -text
PA-RAG_Meta-Llama-3-8B-Instruct.i1-IQ3_XXS.gguf filter=lfs diff=lfs merge=lfs -text
PA-RAG_Meta-Llama-3-8B-Instruct.i1-Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
PA-RAG_Meta-Llama-3-8B-Instruct.i1-IQ4_NL.gguf filter=lfs diff=lfs merge=lfs -text
PA-RAG_Meta-Llama-3-8B-Instruct.i1-Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
PA-RAG_Meta-Llama-3-8B-Instruct.i1-IQ2_M.gguf filter=lfs diff=lfs merge=lfs -text
PA-RAG_Meta-Llama-3-8B-Instruct.i1-Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
PA-RAG_Meta-Llama-3-8B-Instruct.i1-IQ4_XS.gguf filter=lfs diff=lfs merge=lfs -text
PA-RAG_Meta-Llama-3-8B-Instruct.i1-Q2_K_S.gguf filter=lfs diff=lfs merge=lfs -text
PA-RAG_Meta-Llama-3-8B-Instruct.i1-IQ1_M.gguf filter=lfs diff=lfs merge=lfs -text
PA-RAG_Meta-Llama-3-8B-Instruct.i1-Q3_K_S.gguf filter=lfs diff=lfs merge=lfs -text
PA-RAG_Meta-Llama-3-8B-Instruct.i1-IQ2_XXS.gguf filter=lfs diff=lfs merge=lfs -text
PA-RAG_Meta-Llama-3-8B-Instruct.i1-Q3_K_L.gguf filter=lfs diff=lfs merge=lfs -text
PA-RAG_Meta-Llama-3-8B-Instruct.i1-IQ2_XS.gguf filter=lfs diff=lfs merge=lfs -text
PA-RAG_Meta-Llama-3-8B-Instruct.i1-Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
PA-RAG_Meta-Llama-3-8B-Instruct.i1-IQ2_S.gguf filter=lfs diff=lfs merge=lfs -text
PA-RAG_Meta-Llama-3-8B-Instruct.i1-IQ1_S.gguf filter=lfs diff=lfs merge=lfs -text
PA-RAG_Meta-Llama-3-8B-Instruct.i1-Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
PA-RAG_Meta-Llama-3-8B-Instruct.i1-Q4_0.gguf filter=lfs diff=lfs merge=lfs -text
PA-RAG_Meta-Llama-3-8B-Instruct.i1-IQ3_XS.gguf filter=lfs diff=lfs merge=lfs -text
PA-RAG_Meta-Llama-3-8B-Instruct.i1-Q4_1.gguf filter=lfs diff=lfs merge=lfs -text
PA-RAG_Meta-Llama-3-8B-Instruct.i1-IQ3_S.gguf filter=lfs diff=lfs merge=lfs -text
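Each line above is a glob pattern whose matching files are stored via Git LFS rather than as regular git objects; running `git lfs track "*.gguf"` appends exactly such a rule. A minimal sketch of checking a filename against a rule of this form (the sample file names are placeholders, not part of this repo):

```shell
# Write one sample rule in the same format as the lines above
printf '*.gguf filter=lfs diff=lfs merge=lfs -text\n' > sample_attrs
# Extract the glob pattern of the LFS-filtered rule
pattern=$(awk '$2 == "filter=lfs" { print $1 }' sample_attrs)
# Glob-match a candidate filename against the pattern
case "PA-RAG_Meta-Llama-3-8B-Instruct.i1-Q4_K_M.gguf" in
  $pattern) echo "stored via LFS" ;;
  *)        echo "stored as a regular git object" ;;
esac
```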


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:bb215f680d72944ad23414d75832897b1376ff8084e1146545c269ec7876977d
size 2161972960


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:6064b964559c663a5190b211294ecd40656a5d8cd80eace6631c22cbe6774598
size 2019628768


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:46b51e69078757f364a8f9a24ce23329883d500860d70c2616b2f218994e746b
size 2948282080


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:45a945610009f417594ac89ce6aca8d3379b1f48b668512f74278fd91fa15d9a
size 2758489824


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ceb943d7784f242a0c2627a98cbf26e52ae408416baa68fa4d162be1ed0a7cea
size 2605782752


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ecf1afe31dfb3330ec1e232e414e662df865eb15508657f334e88f43b0a4c648
size 2399213280


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:331e7161554050993933e218e341c1ee2d5cb524e13c10015225f900ad35db9d
size 3784824544


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c8c4e93f24f1b4739f44c77ac585c4e3932ef2ba6ebe0af8f91a020fbdfdabda
size 3682326240


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:627a672c7ec0a92c9ebbd79c7010c4cc059aef70babf34af2642a457e871600c
size 3518748384


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:5ce48b0a59f0a675d747b72131017f43ac4034ed9dbc4f408a413edfa4af0211
size 3274913504


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:1be836ffbd64a75f4c75df3442ab73860f4e16530e7b07e5857d600875af2b66
size 4677990112


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:267b0bd2efaf4076e4993fea93f098f4becac693984c6ecb617ad0005f64a1b7
size 4447663840


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ac5599a0cc4ac5f60c9997da575833f2e589d4c4d8b30fdc49ea248be258bef0
size 3179132640


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7588335e0681391c3dda849a4781d4181df07d1122aa30c9a53fbfd0a4bc6a97
size 2988816096


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:4799c0bc0a6547c1be72543b68af83f49a1119ac585f817a09368d1653307d11
size 4321957600


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ca08eb87cce408dd2b19442e7db80b9fb02ee5d855488e1ff5cd5818e8816a11
size 4018919136


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:889019ba907ec45505a578896f66349de9799c786c7260f4b254fd91ff3bad4a
size 3664500448


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:43d3b8a143f9d69f1a2f06dbc245be104840c61f75b40b932a0f2bdf3be50303
size 4675892960


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:5574c4b26ca4fdab21e54470205ee2e8c50c5e4bc081c5d8ed75ae01946bb42f
size 5130254048


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e658a12e1ad45393d689964ff1e07ecdb0a84790867afa9032f272212ea74015
size 4920735456


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:169be809a27a5c282a651aeb765424b22192b597c85bd661c6884ad8d4c78069
size 4692670176


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:708f257fcb3f0c8d58495bbbc269cd76a46b0483f0d7043058efc18bfaf69659
size 5732988640


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:05202d06de76d149ae19fa84d2d8263649f471e45bee27949e74aaac3b9a2a59
size 5599295200


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d784a808f0e00b83cde468412a04d1c3180e2ff0c810a5e48c7b6db07cd714fc
size 6596007648

README.md (new file, 77 lines)

@@ -0,0 +1,77 @@
---
base_model: wuqiong1/PA-RAG_Meta-Llama-3-8B-Instruct
datasets:
- wuqiong1/PA-RAG_training_data
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/wuqiong1/PA-RAG_Meta-Llama-3-8B-Instruct
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/PA-RAG_Meta-Llama-3-8B-Instruct-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
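For the multi-part case mentioned above, the parts are simply byte-concatenated in order. A minimal sketch with placeholder part names (the quants in this repo are single files, so this is illustrative only):

```shell
# Create two dummy "parts" standing in for a split GGUF download
printf 'first'  > model.gguf.part1
printf 'second' > model.gguf.part2
# Join the parts in order; the result is the usable GGUF file
cat model.gguf.part1 model.gguf.part2 > model.gguf
rm model.gguf.part1 model.gguf.part2
# The merged file could then be loaded with a llama.cpp build, e.g.:
#   ./llama-cli -m model.gguf -p "Hello"
```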
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/PA-RAG_Meta-Llama-3-8B-Instruct-i1-GGUF/resolve/main/PA-RAG_Meta-Llama-3-8B-Instruct.i1-IQ1_S.gguf) | i1-IQ1_S | 2.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/PA-RAG_Meta-Llama-3-8B-Instruct-i1-GGUF/resolve/main/PA-RAG_Meta-Llama-3-8B-Instruct.i1-IQ1_M.gguf) | i1-IQ1_M | 2.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/PA-RAG_Meta-Llama-3-8B-Instruct-i1-GGUF/resolve/main/PA-RAG_Meta-Llama-3-8B-Instruct.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/PA-RAG_Meta-Llama-3-8B-Instruct-i1-GGUF/resolve/main/PA-RAG_Meta-Llama-3-8B-Instruct.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/PA-RAG_Meta-Llama-3-8B-Instruct-i1-GGUF/resolve/main/PA-RAG_Meta-Llama-3-8B-Instruct.i1-IQ2_S.gguf) | i1-IQ2_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/PA-RAG_Meta-Llama-3-8B-Instruct-i1-GGUF/resolve/main/PA-RAG_Meta-Llama-3-8B-Instruct.i1-IQ2_M.gguf) | i1-IQ2_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/PA-RAG_Meta-Llama-3-8B-Instruct-i1-GGUF/resolve/main/PA-RAG_Meta-Llama-3-8B-Instruct.i1-Q2_K_S.gguf) | i1-Q2_K_S | 3.1 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/PA-RAG_Meta-Llama-3-8B-Instruct-i1-GGUF/resolve/main/PA-RAG_Meta-Llama-3-8B-Instruct.i1-Q2_K.gguf) | i1-Q2_K | 3.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/PA-RAG_Meta-Llama-3-8B-Instruct-i1-GGUF/resolve/main/PA-RAG_Meta-Llama-3-8B-Instruct.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/PA-RAG_Meta-Llama-3-8B-Instruct-i1-GGUF/resolve/main/PA-RAG_Meta-Llama-3-8B-Instruct.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/PA-RAG_Meta-Llama-3-8B-Instruct-i1-GGUF/resolve/main/PA-RAG_Meta-Llama-3-8B-Instruct.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/PA-RAG_Meta-Llama-3-8B-Instruct-i1-GGUF/resolve/main/PA-RAG_Meta-Llama-3-8B-Instruct.i1-IQ3_S.gguf) | i1-IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/PA-RAG_Meta-Llama-3-8B-Instruct-i1-GGUF/resolve/main/PA-RAG_Meta-Llama-3-8B-Instruct.i1-IQ3_M.gguf) | i1-IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/PA-RAG_Meta-Llama-3-8B-Instruct-i1-GGUF/resolve/main/PA-RAG_Meta-Llama-3-8B-Instruct.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.1 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/PA-RAG_Meta-Llama-3-8B-Instruct-i1-GGUF/resolve/main/PA-RAG_Meta-Llama-3-8B-Instruct.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.4 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/PA-RAG_Meta-Llama-3-8B-Instruct-i1-GGUF/resolve/main/PA-RAG_Meta-Llama-3-8B-Instruct.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/PA-RAG_Meta-Llama-3-8B-Instruct-i1-GGUF/resolve/main/PA-RAG_Meta-Llama-3-8B-Instruct.i1-Q4_0.gguf) | i1-Q4_0 | 4.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/PA-RAG_Meta-Llama-3-8B-Instruct-i1-GGUF/resolve/main/PA-RAG_Meta-Llama-3-8B-Instruct.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.8 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/PA-RAG_Meta-Llama-3-8B-Instruct-i1-GGUF/resolve/main/PA-RAG_Meta-Llama-3-8B-Instruct.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/PA-RAG_Meta-Llama-3-8B-Instruct-i1-GGUF/resolve/main/PA-RAG_Meta-Llama-3-8B-Instruct.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/PA-RAG_Meta-Llama-3-8B-Instruct-i1-GGUF/resolve/main/PA-RAG_Meta-Llama-3-8B-Instruct.i1-Q4_1.gguf) | i1-Q4_1 | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/PA-RAG_Meta-Llama-3-8B-Instruct-i1-GGUF/resolve/main/PA-RAG_Meta-Llama-3-8B-Instruct.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/PA-RAG_Meta-Llama-3-8B-Instruct-i1-GGUF/resolve/main/PA-RAG_Meta-Llama-3-8B-Instruct.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/PA-RAG_Meta-Llama-3-8B-Instruct-i1-GGUF/resolve/main/PA-RAG_Meta-Llama-3-8B-Instruct.i1-Q6_K.gguf) | i1-Q6_K | 6.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)
And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for answers to
common questions and for requesting quants of other models.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->

imatrix.dat (new file, 3 lines)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:adb8d1a9cea5fe741d3c62c0aff763a834d71ab4f8df94e6ee6ab0843646718f
size 4988157