Initialize the project; model provided by the ModelHub XC community
Model: bartowski/baidu_ERNIE-4.5-21B-A3B-PT-GGUF Source: Original Platform
62 .gitattributes vendored Normal file
@@ -0,0 +1,62 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
baidu_ERNIE-4.5-21B-A3B-PT-Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
baidu_ERNIE-4.5-21B-A3B-PT-Q6_K_L.gguf filter=lfs diff=lfs merge=lfs -text
baidu_ERNIE-4.5-21B-A3B-PT-Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
baidu_ERNIE-4.5-21B-A3B-PT-Q5_K_L.gguf filter=lfs diff=lfs merge=lfs -text
baidu_ERNIE-4.5-21B-A3B-PT-Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
baidu_ERNIE-4.5-21B-A3B-PT-Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
baidu_ERNIE-4.5-21B-A3B-PT-Q4_K_L.gguf filter=lfs diff=lfs merge=lfs -text
baidu_ERNIE-4.5-21B-A3B-PT-Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
baidu_ERNIE-4.5-21B-A3B-PT-Q4_K_S.gguf filter=lfs diff=lfs merge=lfs -text
baidu_ERNIE-4.5-21B-A3B-PT-Q4_1.gguf filter=lfs diff=lfs merge=lfs -text
baidu_ERNIE-4.5-21B-A3B-PT-Q4_0.gguf filter=lfs diff=lfs merge=lfs -text
baidu_ERNIE-4.5-21B-A3B-PT-IQ4_NL.gguf filter=lfs diff=lfs merge=lfs -text
baidu_ERNIE-4.5-21B-A3B-PT-IQ4_XS.gguf filter=lfs diff=lfs merge=lfs -text
baidu_ERNIE-4.5-21B-A3B-PT-Q3_K_XL.gguf filter=lfs diff=lfs merge=lfs -text
baidu_ERNIE-4.5-21B-A3B-PT-Q3_K_L.gguf filter=lfs diff=lfs merge=lfs -text
baidu_ERNIE-4.5-21B-A3B-PT-Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
baidu_ERNIE-4.5-21B-A3B-PT-IQ3_M.gguf filter=lfs diff=lfs merge=lfs -text
baidu_ERNIE-4.5-21B-A3B-PT-Q3_K_S.gguf filter=lfs diff=lfs merge=lfs -text
baidu_ERNIE-4.5-21B-A3B-PT-IQ3_XS.gguf filter=lfs diff=lfs merge=lfs -text
baidu_ERNIE-4.5-21B-A3B-PT-IQ3_XXS.gguf filter=lfs diff=lfs merge=lfs -text
baidu_ERNIE-4.5-21B-A3B-PT-Q2_K_L.gguf filter=lfs diff=lfs merge=lfs -text
baidu_ERNIE-4.5-21B-A3B-PT-Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
baidu_ERNIE-4.5-21B-A3B-PT-IQ2_M.gguf filter=lfs diff=lfs merge=lfs -text
baidu_ERNIE-4.5-21B-A3B-PT-IQ2_S.gguf filter=lfs diff=lfs merge=lfs -text
baidu_ERNIE-4.5-21B-A3B-PT-IQ2_XS.gguf filter=lfs diff=lfs merge=lfs -text
baidu_ERNIE-4.5-21B-A3B-PT-bf16.gguf filter=lfs diff=lfs merge=lfs -text
baidu_ERNIE-4.5-21B-A3B-PT.imatrix filter=lfs diff=lfs merge=lfs -text
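Each of these lines routes matching paths through Git LFS (the `filter`/`diff`/`merge` attributes plus `-text`). For reference, `git lfs track` appends an equivalent line; a minimal sketch, where the `*.gguf` wildcard is just an illustration (this repo tracks each GGUF by exact filename instead):

```
# Hypothetical: track all GGUF files with one pattern instead of per-file entries.
git lfs track "*.gguf"   # appends: *.gguf filter=lfs diff=lfs merge=lfs -text
git add .gitattributes
```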
166 README.md Normal file
@@ -0,0 +1,166 @@
---
quantized_by: bartowski
pipeline_tag: text-generation
base_model: baidu/ERNIE-4.5-21B-A3B-PT
base_model_relation: quantized
---

## Llamacpp imatrix Quantizations of ERNIE-4.5-21B-A3B-PT by baidu

Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b5924">b5924</a> for quantization.

Original model: https://huggingface.co/baidu/ERNIE-4.5-21B-A3B-PT

All quants were made using the imatrix option with a dataset from [here](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8)

Run them in [LM Studio](https://lmstudio.ai/)

Run them directly with [llama.cpp](https://github.com/ggerganov/llama.cpp), or any other llama.cpp-based project; see the example below.
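For instance, a minimal sketch (flag names can vary across llama.cpp builds, so check `llama-cli --help` for yours; the prompt is just an illustration):

```
# Hypothetical local run of the Q4_K_M quant with a 4096-token context.
./llama-cli -m ./baidu_ERNIE-4.5-21B-A3B-PT-Q4_K_M.gguf \
  -c 4096 -n 256 \
  -p "Briefly explain what an imatrix quantization is."
```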

## Prompt format

No prompt format was found; check the original model page.

## Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Split | Description |
| -------- | ---------- | --------- | ----- | ----------- |
| [ERNIE-4.5-21B-A3B-PT-bf16.gguf](https://huggingface.co/bartowski/baidu_ERNIE-4.5-21B-A3B-PT-GGUF/blob/main/baidu_ERNIE-4.5-21B-A3B-PT-bf16.gguf) | bf16 | 43.66GB | false | Full BF16 weights. |
| [ERNIE-4.5-21B-A3B-PT-Q8_0.gguf](https://huggingface.co/bartowski/baidu_ERNIE-4.5-21B-A3B-PT-GGUF/blob/main/baidu_ERNIE-4.5-21B-A3B-PT-Q8_0.gguf) | Q8_0 | 23.21GB | false | Extremely high quality, generally unneeded but max available quant. |
| [ERNIE-4.5-21B-A3B-PT-Q6_K_L.gguf](https://huggingface.co/bartowski/baidu_ERNIE-4.5-21B-A3B-PT-GGUF/blob/main/baidu_ERNIE-4.5-21B-A3B-PT-Q6_K_L.gguf) | Q6_K_L | 18.15GB | false | Uses Q8_0 for embed and output weights. Very high quality, near perfect, *recommended*. |
| [ERNIE-4.5-21B-A3B-PT-Q6_K.gguf](https://huggingface.co/bartowski/baidu_ERNIE-4.5-21B-A3B-PT-GGUF/blob/main/baidu_ERNIE-4.5-21B-A3B-PT-Q6_K.gguf) | Q6_K | 18.08GB | false | Very high quality, near perfect, *recommended*. |
| [ERNIE-4.5-21B-A3B-PT-Q5_K_L.gguf](https://huggingface.co/bartowski/baidu_ERNIE-4.5-21B-A3B-PT-GGUF/blob/main/baidu_ERNIE-4.5-21B-A3B-PT-Q5_K_L.gguf) | Q5_K_L | 15.82GB | false | Uses Q8_0 for embed and output weights. High quality, *recommended*. |
| [ERNIE-4.5-21B-A3B-PT-Q5_K_M.gguf](https://huggingface.co/bartowski/baidu_ERNIE-4.5-21B-A3B-PT-GGUF/blob/main/baidu_ERNIE-4.5-21B-A3B-PT-Q5_K_M.gguf) | Q5_K_M | 15.75GB | false | High quality, *recommended*. |
| [ERNIE-4.5-21B-A3B-PT-Q5_K_S.gguf](https://huggingface.co/bartowski/baidu_ERNIE-4.5-21B-A3B-PT-GGUF/blob/main/baidu_ERNIE-4.5-21B-A3B-PT-Q5_K_S.gguf) | Q5_K_S | 15.23GB | false | High quality, *recommended*. |
| [ERNIE-4.5-21B-A3B-PT-Q4_1.gguf](https://huggingface.co/bartowski/baidu_ERNIE-4.5-21B-A3B-PT-GGUF/blob/main/baidu_ERNIE-4.5-21B-A3B-PT-Q4_1.gguf) | Q4_1 | 13.88GB | false | Legacy format, similar performance to Q4_K_S but with improved tokens/watt on Apple silicon. |
| [ERNIE-4.5-21B-A3B-PT-Q4_K_L.gguf](https://huggingface.co/bartowski/baidu_ERNIE-4.5-21B-A3B-PT-GGUF/blob/main/baidu_ERNIE-4.5-21B-A3B-PT-Q4_K_L.gguf) | Q4_K_L | 13.56GB | false | Uses Q8_0 for embed and output weights. Good quality, *recommended*. |
| [ERNIE-4.5-21B-A3B-PT-Q4_K_M.gguf](https://huggingface.co/bartowski/baidu_ERNIE-4.5-21B-A3B-PT-GGUF/blob/main/baidu_ERNIE-4.5-21B-A3B-PT-Q4_K_M.gguf) | Q4_K_M | 13.50GB | false | Good quality, default size for most use cases, *recommended*. |
| [ERNIE-4.5-21B-A3B-PT-Q4_K_S.gguf](https://huggingface.co/bartowski/baidu_ERNIE-4.5-21B-A3B-PT-GGUF/blob/main/baidu_ERNIE-4.5-21B-A3B-PT-Q4_K_S.gguf) | Q4_K_S | 13.01GB | false | Slightly lower quality with more space savings, *recommended*. |
| [ERNIE-4.5-21B-A3B-PT-Q4_0.gguf](https://huggingface.co/bartowski/baidu_ERNIE-4.5-21B-A3B-PT-GGUF/blob/main/baidu_ERNIE-4.5-21B-A3B-PT-Q4_0.gguf) | Q4_0 | 12.78GB | false | Legacy format, offers online repacking for ARM and AVX CPU inference. |
| [ERNIE-4.5-21B-A3B-PT-IQ4_NL.gguf](https://huggingface.co/bartowski/baidu_ERNIE-4.5-21B-A3B-PT-GGUF/blob/main/baidu_ERNIE-4.5-21B-A3B-PT-IQ4_NL.gguf) | IQ4_NL | 12.60GB | false | Similar to IQ4_XS, but slightly larger. Offers online repacking for ARM CPU inference. |
| [ERNIE-4.5-21B-A3B-PT-IQ4_XS.gguf](https://huggingface.co/bartowski/baidu_ERNIE-4.5-21B-A3B-PT-GGUF/blob/main/baidu_ERNIE-4.5-21B-A3B-PT-IQ4_XS.gguf) | IQ4_XS | 11.96GB | false | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
| [ERNIE-4.5-21B-A3B-PT-Q3_K_XL.gguf](https://huggingface.co/bartowski/baidu_ERNIE-4.5-21B-A3B-PT-GGUF/blob/main/baidu_ERNIE-4.5-21B-A3B-PT-Q3_K_XL.gguf) | Q3_K_XL | 10.73GB | false | Uses Q8_0 for embed and output weights. Lower quality but usable, good for low RAM availability. |
| [ERNIE-4.5-21B-A3B-PT-Q3_K_L.gguf](https://huggingface.co/bartowski/baidu_ERNIE-4.5-21B-A3B-PT-GGUF/blob/main/baidu_ERNIE-4.5-21B-A3B-PT-Q3_K_L.gguf) | Q3_K_L | 10.66GB | false | Lower quality but usable, good for low RAM availability. |
| [ERNIE-4.5-21B-A3B-PT-Q3_K_M.gguf](https://huggingface.co/bartowski/baidu_ERNIE-4.5-21B-A3B-PT-GGUF/blob/main/baidu_ERNIE-4.5-21B-A3B-PT-Q3_K_M.gguf) | Q3_K_M | 10.30GB | false | Low quality. |
| [ERNIE-4.5-21B-A3B-PT-IQ3_M.gguf](https://huggingface.co/bartowski/baidu_ERNIE-4.5-21B-A3B-PT-GGUF/blob/main/baidu_ERNIE-4.5-21B-A3B-PT-IQ3_M.gguf) | IQ3_M | 10.29GB | false | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| [ERNIE-4.5-21B-A3B-PT-Q3_K_S.gguf](https://huggingface.co/bartowski/baidu_ERNIE-4.5-21B-A3B-PT-GGUF/blob/main/baidu_ERNIE-4.5-21B-A3B-PT-Q3_K_S.gguf) | Q3_K_S | 9.85GB | false | Low quality, not recommended. |
| [ERNIE-4.5-21B-A3B-PT-IQ3_XS.gguf](https://huggingface.co/bartowski/baidu_ERNIE-4.5-21B-A3B-PT-GGUF/blob/main/baidu_ERNIE-4.5-21B-A3B-PT-IQ3_XS.gguf) | IQ3_XS | 9.35GB | false | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| [ERNIE-4.5-21B-A3B-PT-IQ3_XXS.gguf](https://huggingface.co/bartowski/baidu_ERNIE-4.5-21B-A3B-PT-GGUF/blob/main/baidu_ERNIE-4.5-21B-A3B-PT-IQ3_XXS.gguf) | IQ3_XXS | 8.99GB | false | Lower quality, new method with decent performance, comparable to Q3 quants. |
| [ERNIE-4.5-21B-A3B-PT-Q2_K_L.gguf](https://huggingface.co/bartowski/baidu_ERNIE-4.5-21B-A3B-PT-GGUF/blob/main/baidu_ERNIE-4.5-21B-A3B-PT-Q2_K_L.gguf) | Q2_K_L | 8.16GB | false | Uses Q8_0 for embed and output weights. Very low quality but surprisingly usable. |
| [ERNIE-4.5-21B-A3B-PT-Q2_K.gguf](https://huggingface.co/bartowski/baidu_ERNIE-4.5-21B-A3B-PT-GGUF/blob/main/baidu_ERNIE-4.5-21B-A3B-PT-Q2_K.gguf) | Q2_K | 8.09GB | false | Very low quality but surprisingly usable. |
| [ERNIE-4.5-21B-A3B-PT-IQ2_M.gguf](https://huggingface.co/bartowski/baidu_ERNIE-4.5-21B-A3B-PT-GGUF/blob/main/baidu_ERNIE-4.5-21B-A3B-PT-IQ2_M.gguf) | IQ2_M | 7.16GB | false | Relatively low quality, uses SOTA techniques to be surprisingly usable. |
| [ERNIE-4.5-21B-A3B-PT-IQ2_S.gguf](https://huggingface.co/bartowski/baidu_ERNIE-4.5-21B-A3B-PT-GGUF/blob/main/baidu_ERNIE-4.5-21B-A3B-PT-IQ2_S.gguf) | IQ2_S | 6.37GB | false | Low quality, uses SOTA techniques to be usable. |
| [ERNIE-4.5-21B-A3B-PT-IQ2_XS.gguf](https://huggingface.co/bartowski/baidu_ERNIE-4.5-21B-A3B-PT-GGUF/blob/main/baidu_ERNIE-4.5-21B-A3B-PT-IQ2_XS.gguf) | IQ2_XS | 6.35GB | false | Low quality, uses SOTA techniques to be usable. |

## Embed/output weights

Some of these quants (Q3_K_XL, Q4_K_L, etc.) use the standard quantization method with the embedding and output weights quantized to Q8_0 instead of their usual default.
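For reference, a minimal sketch of how such a variant can be produced with llama.cpp's `llama-quantize`, assuming your build supports the `--token-embedding-type`/`--output-tensor-type` flags (the file paths are hypothetical):

```
# Hypothetical re-quantization: Q4_K_M body with Q8_0 embed/output tensors.
./llama-quantize --token-embedding-type q8_0 --output-tensor-type q8_0 \
  ./ERNIE-4.5-21B-A3B-PT-bf16.gguf ./ERNIE-4.5-21B-A3B-PT-Q4_K_L.gguf q4_k_m
```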

## Downloading using huggingface-cli

<details>
<summary>Click to view download instructions</summary>

First, make sure you have huggingface-cli installed:

```
pip install -U "huggingface_hub[cli]"
```

Then, you can target the specific file you want:

```
huggingface-cli download bartowski/baidu_ERNIE-4.5-21B-A3B-PT-GGUF --include "baidu_ERNIE-4.5-21B-A3B-PT-Q4_K_M.gguf" --local-dir ./
```

If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run:

```
huggingface-cli download bartowski/baidu_ERNIE-4.5-21B-A3B-PT-GGUF --include "baidu_ERNIE-4.5-21B-A3B-PT-Q8_0/*" --local-dir ./
```

You can either specify a new local-dir (baidu_ERNIE-4.5-21B-A3B-PT-Q8_0) or download them all in place (./)

</details>

## ARM/AVX information

Previously, you would download Q4_0_4_4/4_8/8_8, and these would have their weights interleaved in memory in order to improve performance on ARM and AVX machines by loading up more data in one pass.

Now, however, there is something called "online repacking" for weights; details are in [this PR](https://github.com/ggerganov/llama.cpp/pull/9921). If you use Q4_0 and your hardware would benefit from repacking weights, it will be done automatically on the fly.

As of llama.cpp build [b4282](https://github.com/ggerganov/llama.cpp/releases/tag/b4282), you will not be able to run the Q4_0_X_X files and will instead need to use Q4_0.

Additionally, if you want slightly better quality, you can use IQ4_NL thanks to [this PR](https://github.com/ggerganov/llama.cpp/pull/10541), which will also repack the weights for ARM, though only the 4_4 variant for now. The loading time may be slower, but it will result in an overall speed increase.

<details>
<summary>Click to view Q4_0_X_X information (deprecated)</summary>

I'm keeping this section to show the potential theoretical uplift in performance from using Q4_0 with online repacking.
<details>
<summary>Click to view benchmarks on an AVX2 system (EPYC7702)</summary>
| model | size | params | backend | threads | test | t/s | % (vs Q4_0) |
| ----- | ---: | -----: | ------- | ------: | ---: | --: | ----------: |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | pp512 | 204.03 ± 1.03 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | pp1024 | 282.92 ± 0.19 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | pp2048 | 259.49 ± 0.44 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | tg128 | 39.12 ± 0.27 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | tg256 | 39.31 ± 0.69 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | tg512 | 40.52 ± 0.03 | 100% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | pp512 | 301.02 ± 1.74 | 147% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | pp1024 | 287.23 ± 0.20 | 101% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | pp2048 | 262.77 ± 1.81 | 101% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | tg128 | 18.80 ± 0.99 | 48% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | tg256 | 24.46 ± 3.04 | 83% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | tg512 | 36.32 ± 3.59 | 90% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | pp512 | 271.71 ± 3.53 | 133% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | pp1024 | 279.86 ± 45.63 | 100% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | pp2048 | 320.77 ± 5.00 | 124% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | tg128 | 43.51 ± 0.05 | 111% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | tg256 | 43.35 ± 0.09 | 110% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | tg512 | 42.60 ± 0.31 | 105% |

Q4_0_8_8 offers a nice bump to prompt processing and a small bump to text generation.

</details>

</details>

## Which file should I choose?

<details>
<summary>Click here for details</summary>

A great write-up with charts showing various performance levels is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)

The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.

If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.

If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total. A quick way to compute the budget is sketched below.
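For instance (a minimal sketch; the 16GB VRAM figure is hypothetical, and the quant names and sizes come from the table above):

```
# Hypothetical 16GB GPU: leave ~2GB of headroom for context and runtime buffers.
VRAM_GB=16
BUDGET_GB=$((VRAM_GB - 2))
echo "Pick the largest quant at or under ${BUDGET_GB}GB, e.g. Q4_K_M (13.50GB)."
```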

Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.

If you don't want to think too much, grab one of the K-quants. These are in the format 'QX_K_X', like Q5_K_M.

If you want to get more into the weeds, you can check out this extremely useful feature chart:

[llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)

But basically, if you're aiming for below Q4 and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in the format IQX_X, like IQ3_M. These are newer and offer better performance for their size.

These I-quants can also be used on CPU, but will be slower than their K-quant equivalents, so speed vs. quality is a tradeoff you'll have to decide on.

</details>

## Credits

Thank you kalomaze and Dampf for assistance in creating the imatrix calibration dataset.

Thank you ZeroWw for the inspiration to experiment with embed/output weights.

Thank you to LM Studio for sponsoring my work.

Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
3 baidu_ERNIE-4.5-21B-A3B-PT-IQ2_M.gguf Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:5df377e679a1e35748f57b267a4fd0528c69440c3659a1cbeca2516e83408241
size 7164537952

3 baidu_ERNIE-4.5-21B-A3B-PT-IQ2_S.gguf Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ffc41780602c494acd580a48981b9c4328f970d2fd8bbd8f54377811159c315b
size 6369913952

3 baidu_ERNIE-4.5-21B-A3B-PT-IQ2_XS.gguf Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:69e307c57d3d3ee879477ec5c1c120e3be11aabdf753aa9ae565a4d9e9d9f01a
size 6346976352

3 baidu_ERNIE-4.5-21B-A3B-PT-IQ3_M.gguf Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f4c58b8c606bf3c3aa0f5b24c79b4017b3adff8305de5b9e02d3ae365f430b7b
size 10293595232

3 baidu_ERNIE-4.5-21B-A3B-PT-IQ3_XS.gguf Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:40990150cd70b8932cda81b08eaf417daf5b6df59e94f0c9a441b923b7e29b6b
size 9350859872

3 baidu_ERNIE-4.5-21B-A3B-PT-IQ3_XXS.gguf Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d16c689bb9acd9fb49cb032530dc2793448c1389986a1c7e35c08d71cdb5de36
size 8993094752

3 baidu_ERNIE-4.5-21B-A3B-PT-IQ4_NL.gguf Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:1f0e523fd319aa277bb458ce6439d311c1c08a43450f88a6ec99021eaed5cab9
size 12603698272

3 baidu_ERNIE-4.5-21B-A3B-PT-IQ4_XS.gguf Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:85b364eaf84cd1b77f6c7452e6a673740d4024bfafab948b3f297e7dbd9e6cc8
size 11958004832

3 baidu_ERNIE-4.5-21B-A3B-PT-Q2_K.gguf Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a40ceb8afdd27d5ec54d88638facb90fd8de86cf17a800e6d3c097f9d3ba27f4
size 8091872352

3 baidu_ERNIE-4.5-21B-A3B-PT-Q2_K_L.gguf Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:32342fdc17726cbee990405950cc80d90498ae94942e4468ce784563ecddf92f
size 8155995232

3 baidu_ERNIE-4.5-21B-A3B-PT-Q3_K_L.gguf Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:3dc1caf2eeb531b2e28e4e2eb05316d4556983dc23b78d2022d86819eef2d52b
size 10663750752

3 baidu_ERNIE-4.5-21B-A3B-PT-Q3_K_M.gguf Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d08e27abe05b3b4aff9073cd21dcd4c88f84d3c9facfb1cf0aeeac3c668debd2
size 10297855072

3 baidu_ERNIE-4.5-21B-A3B-PT-Q3_K_S.gguf Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8bfc9e88e66b9a440b591f7c4c9b6dd1ddd57e702b6afd032b86f724515db664
size 9850039392

3 baidu_ERNIE-4.5-21B-A3B-PT-Q3_K_XL.gguf Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:97ba6d43934653e8bf48c238cb312566da407528f68e1517377bfe4521406b8f
size 10727873632

3 baidu_ERNIE-4.5-21B-A3B-PT-Q4_0.gguf Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:9fe18a8b442b518c6131221fe1a6b357265d45e0475377eedf3c38a7ddf82a91
size 12782611552

3 baidu_ERNIE-4.5-21B-A3B-PT-Q4_1.gguf Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:57a74a6971738d6ee421175aff59ee0694dc36d943c4fc0148981bb473a72ce4
size 13881322592

3 baidu_ERNIE-4.5-21B-A3B-PT-Q4_K_L.gguf Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c273a35466595a9313740ad1d45685a3926982b4f1a8cff72fa0720011486b84
size 13563391072

3 baidu_ERNIE-4.5-21B-A3B-PT-Q4_K_M.gguf Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:307a5147099074f4d80195b3551c989c1f1f17a72948637baaa6ceb677b7b115
size 13499268192

3 baidu_ERNIE-4.5-21B-A3B-PT-Q4_K_S.gguf Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:bf79234dd2fc6b7ca8735f7033a9a6033832fcee1b77a5e4289ea0ab1b98a9fc
size 13012642912

3 baidu_ERNIE-4.5-21B-A3B-PT-Q5_K_L.gguf Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:96c1a43968cc399cdc97414377ed70dab58fc758d913c1d6dcdad81a6fd3c0bd
size 15815576672

3 baidu_ERNIE-4.5-21B-A3B-PT-Q5_K_M.gguf Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:da421a4a295d84bb773e240592dd25949ba98d9a5ce33d037e6d80ee9fd0c623
size 15751453792

3 baidu_ERNIE-4.5-21B-A3B-PT-Q5_K_S.gguf Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c0e2f85caaaf429c25fdef064b658f7d742d4f33e9c8584de12e34c1a8d5c154
size 15230340192

3 baidu_ERNIE-4.5-21B-A3B-PT-Q6_K.gguf Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:109e41638407bcee93900753a92e8dfcd6609d0ed3155611ed5d232461b231ca
size 18083777632

3 baidu_ERNIE-4.5-21B-A3B-PT-Q6_K_L.gguf Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:b6f87ea87336f26699cd9f9793a10d1da7dc2ad7617feb01d1c42b12128f89c5
size 18147900512

3 baidu_ERNIE-4.5-21B-A3B-PT-Q8_0.gguf Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f56187000a1796fb6723111ba77200949ed8ced376fb1ad90b26523dd2b912ad
size 23205354592

3 baidu_ERNIE-4.5-21B-A3B-PT-bf16.gguf Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f687d717f38f88ffbc6a052e3cfe4597cb65e1749737a6c09389c2c77cf2fe7c
size 43662416896

3 baidu_ERNIE-4.5-21B-A3B-PT.imatrix Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f667d712bb48b43adfcdfd0d4fc86124f1c8fa268c5b3fa965f9463dfe835e37
size 48395083