From b433466b825eb455541dd13e39d63493afa54bbb Mon Sep 17 00:00:00 2001
From: Bartowski
Date: Wed, 3 Sep 2025 18:46:22 +0000
Subject: [PATCH] Upload README.md with huggingface_hub

---
 README.md | 165 ++++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 165 insertions(+)
 create mode 100644 README.md

diff --git a/README.md b/README.md
new file mode 100644
index 0000000..bcf6a7f
--- /dev/null
+++ b/README.md
@@ -0,0 +1,165 @@
---
quantized_by: bartowski
pipeline_tag: text-generation
---

## Llamacpp imatrix Quantizations of Skyfall-31B-v4 by TheDrummer

Using llama.cpp release b6317 for quantization.

Original model: https://huggingface.co/TheDrummer/Skyfall-31B-v4

All quants made using imatrix option with dataset from [here](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8) combined with a subset of combined_all_small.parquet from Ed Addario [here](https://huggingface.co/datasets/eaddario/imatrix-calibration/blob/main/combined_all_small.parquet)

Run them in [LM Studio](https://lmstudio.ai/)

Run them directly with [llama.cpp](https://github.com/ggml-org/llama.cpp), or any other llama.cpp based project

## Prompt format

No prompt format found, check original model page

## Download a file (not the whole branch) from below:

| Filename | Quant type | File Size | Split | Description |
| -------- | ---------- | --------- | ----- | ----------- |
| [Skyfall-31B-v4-bf16.gguf](https://huggingface.co/bartowski/TheDrummer_Skyfall-31B-v4-GGUF/tree/main/TheDrummer_Skyfall-31B-v4-bf16) | bf16 | 62.71GB | true | Full BF16 weights. |
| [Skyfall-31B-v4-Q8_0.gguf](https://huggingface.co/bartowski/TheDrummer_Skyfall-31B-v4-GGUF/blob/main/TheDrummer_Skyfall-31B-v4-Q8_0.gguf) | Q8_0 | 33.32GB | false | Extremely high quality, generally unneeded but max available quant. |
| [Skyfall-31B-v4-Q6_K_L.gguf](https://huggingface.co/bartowski/TheDrummer_Skyfall-31B-v4-GGUF/blob/main/TheDrummer_Skyfall-31B-v4-Q6_K_L.gguf) | Q6_K_L | 26.05GB | false | Uses Q8_0 for embed and output weights. Very high quality, near perfect, *recommended*. |
| [Skyfall-31B-v4-Q6_K.gguf](https://huggingface.co/bartowski/TheDrummer_Skyfall-31B-v4-GGUF/blob/main/TheDrummer_Skyfall-31B-v4-Q6_K.gguf) | Q6_K | 25.73GB | false | Very high quality, near perfect, *recommended*. |
| [Skyfall-31B-v4-Q5_K_L.gguf](https://huggingface.co/bartowski/TheDrummer_Skyfall-31B-v4-GGUF/blob/main/TheDrummer_Skyfall-31B-v4-Q5_K_L.gguf) | Q5_K_L | 22.67GB | false | Uses Q8_0 for embed and output weights. High quality, *recommended*. |
| [Skyfall-31B-v4-Q5_K_M.gguf](https://huggingface.co/bartowski/TheDrummer_Skyfall-31B-v4-GGUF/blob/main/TheDrummer_Skyfall-31B-v4-Q5_K_M.gguf) | Q5_K_M | 22.25GB | false | High quality, *recommended*. |
| [Skyfall-31B-v4-Q5_K_S.gguf](https://huggingface.co/bartowski/TheDrummer_Skyfall-31B-v4-GGUF/blob/main/TheDrummer_Skyfall-31B-v4-Q5_K_S.gguf) | Q5_K_S | 21.65GB | false | High quality, *recommended*. |
| [Skyfall-31B-v4-Q4_1.gguf](https://huggingface.co/bartowski/TheDrummer_Skyfall-31B-v4-GGUF/blob/main/TheDrummer_Skyfall-31B-v4-Q4_1.gguf) | Q4_1 | 19.74GB | false | Legacy format, similar performance to Q4_K_S but with improved tokens/watt on Apple silicon. |
| [Skyfall-31B-v4-Q4_K_L.gguf](https://huggingface.co/bartowski/TheDrummer_Skyfall-31B-v4-GGUF/blob/main/TheDrummer_Skyfall-31B-v4-Q4_K_L.gguf) | Q4_K_L | 19.48GB | false | Uses Q8_0 for embed and output weights. Good quality, *recommended*. |
| [Skyfall-31B-v4-Q4_K_M.gguf](https://huggingface.co/bartowski/TheDrummer_Skyfall-31B-v4-GGUF/blob/main/TheDrummer_Skyfall-31B-v4-Q4_K_M.gguf) | Q4_K_M | 18.98GB | false | Good quality, default size for most use cases, *recommended*. |
| [Skyfall-31B-v4-Q4_K_S.gguf](https://huggingface.co/bartowski/TheDrummer_Skyfall-31B-v4-GGUF/blob/main/TheDrummer_Skyfall-31B-v4-Q4_K_S.gguf) | Q4_K_S | 17.95GB | false | Slightly lower quality with more space savings, *recommended*. |
| [Skyfall-31B-v4-Q4_0.gguf](https://huggingface.co/bartowski/TheDrummer_Skyfall-31B-v4-GGUF/blob/main/TheDrummer_Skyfall-31B-v4-Q4_0.gguf) | Q4_0 | 17.88GB | false | Legacy format, offers online repacking for ARM and AVX CPU inference. |
| [Skyfall-31B-v4-IQ4_NL.gguf](https://huggingface.co/bartowski/TheDrummer_Skyfall-31B-v4-GGUF/blob/main/TheDrummer_Skyfall-31B-v4-IQ4_NL.gguf) | IQ4_NL | 17.85GB | false | Similar to IQ4_XS, but slightly larger. Offers online repacking for ARM CPU inference. |
| [Skyfall-31B-v4-Q3_K_XL.gguf](https://huggingface.co/bartowski/TheDrummer_Skyfall-31B-v4-GGUF/blob/main/TheDrummer_Skyfall-31B-v4-Q3_K_XL.gguf) | Q3_K_XL | 17.03GB | false | Uses Q8_0 for embed and output weights. Lower quality but usable, good for low RAM availability. |
| [Skyfall-31B-v4-IQ4_XS.gguf](https://huggingface.co/bartowski/TheDrummer_Skyfall-31B-v4-GGUF/blob/main/TheDrummer_Skyfall-31B-v4-IQ4_XS.gguf) | IQ4_XS | 16.90GB | false | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
| [Skyfall-31B-v4-Q3_K_L.gguf](https://huggingface.co/bartowski/TheDrummer_Skyfall-31B-v4-GGUF/blob/main/TheDrummer_Skyfall-31B-v4-Q3_K_L.gguf) | Q3_K_L | 16.44GB | false | Lower quality but usable, good for low RAM availability. |
| [Skyfall-31B-v4-Q3_K_M.gguf](https://huggingface.co/bartowski/TheDrummer_Skyfall-31B-v4-GGUF/blob/main/TheDrummer_Skyfall-31B-v4-Q3_K_M.gguf) | Q3_K_M | 15.20GB | false | Low quality. |
| [Skyfall-31B-v4-IQ3_M.gguf](https://huggingface.co/bartowski/TheDrummer_Skyfall-31B-v4-GGUF/blob/main/TheDrummer_Skyfall-31B-v4-IQ3_M.gguf) | IQ3_M | 14.07GB | false | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| [Skyfall-31B-v4-Q3_K_S.gguf](https://huggingface.co/bartowski/TheDrummer_Skyfall-31B-v4-GGUF/blob/main/TheDrummer_Skyfall-31B-v4-Q3_K_S.gguf) | Q3_K_S | 13.74GB | false | Low quality, not recommended. |
| [Skyfall-31B-v4-IQ3_XS.gguf](https://huggingface.co/bartowski/TheDrummer_Skyfall-31B-v4-GGUF/blob/main/TheDrummer_Skyfall-31B-v4-IQ3_XS.gguf) | IQ3_XS | 13.07GB | false | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| [Skyfall-31B-v4-Q2_K_L.gguf](https://huggingface.co/bartowski/TheDrummer_Skyfall-31B-v4-GGUF/blob/main/TheDrummer_Skyfall-31B-v4-Q2_K_L.gguf) | Q2_K_L | 12.38GB | false | Uses Q8_0 for embed and output weights. Very low quality but surprisingly usable. |
| [Skyfall-31B-v4-IQ3_XXS.gguf](https://huggingface.co/bartowski/TheDrummer_Skyfall-31B-v4-GGUF/blob/main/TheDrummer_Skyfall-31B-v4-IQ3_XXS.gguf) | IQ3_XXS | 12.26GB | false | Lower quality, new method with decent performance, comparable to Q3 quants. |
| [Skyfall-31B-v4-Q2_K.gguf](https://huggingface.co/bartowski/TheDrummer_Skyfall-31B-v4-GGUF/blob/main/TheDrummer_Skyfall-31B-v4-Q2_K.gguf) | Q2_K | 11.73GB | false | Very low quality but surprisingly usable. |
| [Skyfall-31B-v4-IQ2_M.gguf](https://huggingface.co/bartowski/TheDrummer_Skyfall-31B-v4-GGUF/blob/main/TheDrummer_Skyfall-31B-v4-IQ2_M.gguf) | IQ2_M | 10.68GB | false | Relatively low quality, uses SOTA techniques to be surprisingly usable. |
| [Skyfall-31B-v4-IQ2_S.gguf](https://huggingface.co/bartowski/TheDrummer_Skyfall-31B-v4-GGUF/blob/main/TheDrummer_Skyfall-31B-v4-IQ2_S.gguf) | IQ2_S | 9.81GB | false | Low quality, uses SOTA techniques to be usable. |
| [Skyfall-31B-v4-IQ2_XS.gguf](https://huggingface.co/bartowski/TheDrummer_Skyfall-31B-v4-GGUF/blob/main/TheDrummer_Skyfall-31B-v4-IQ2_XS.gguf) | IQ2_XS | 9.48GB | false | Low quality, uses SOTA techniques to be usable. |
| [Skyfall-31B-v4-IQ2_XXS.gguf](https://huggingface.co/bartowski/TheDrummer_Skyfall-31B-v4-GGUF/blob/main/TheDrummer_Skyfall-31B-v4-IQ2_XXS.gguf) | IQ2_XXS | 8.59GB | false | Very low quality, uses SOTA techniques to be usable. |

## Embed/output weights

Some of these quants (Q3_K_XL, Q4_K_L, etc.) are the standard quantization method with the embeddings and output weights quantized to Q8_0 instead of what they would normally default to.

## Downloading using huggingface-cli

<details>
  <summary>Click to view download instructions</summary>

First, make sure you have huggingface-cli installed:

```
pip install -U "huggingface_hub[cli]"
```

Then, you can target the specific file you want:

```
huggingface-cli download bartowski/TheDrummer_Skyfall-31B-v4-GGUF --include "TheDrummer_Skyfall-31B-v4-Q4_K_M.gguf" --local-dir ./
```

If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run:

```
huggingface-cli download bartowski/TheDrummer_Skyfall-31B-v4-GGUF --include "TheDrummer_Skyfall-31B-v4-Q8_0/*" --local-dir ./
```

You can either specify a new local-dir (TheDrummer_Skyfall-31B-v4-Q8_0) or download them all in place (./)
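Optionally, on fast connections you can speed up the transfer with huggingface_hub's hf_transfer backend. This is a sketch beyond the original instructions: hf_transfer is a separate package, and the environment variable below tells huggingface_hub to use it:

```
# Install the optional Rust-based transfer backend
pip install hf_transfer

# Enable it for this download (same command as above otherwise)
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download bartowski/TheDrummer_Skyfall-31B-v4-GGUF --include "TheDrummer_Skyfall-31B-v4-Q4_K_M.gguf" --local-dir ./
```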
</details>

## ARM/AVX information

Previously, you would download Q4_0_4_4/4_8/8_8, and these would have their weights interleaved in memory in order to improve performance on ARM and AVX machines by loading up more data in one pass.

Now, however, there is something called "online repacking" for weights; details in [this PR](https://github.com/ggml-org/llama.cpp/pull/9921). If you use Q4_0 and your hardware would benefit from repacking weights, it will do it automatically on the fly.

As of llama.cpp build [b4282](https://github.com/ggml-org/llama.cpp/releases/tag/b4282), you will not be able to run the Q4_0_X_X files and will instead need to use Q4_0.

Additionally, if you want slightly better quality, you can use IQ4_NL thanks to [this PR](https://github.com/ggml-org/llama.cpp/pull/10541), which will also repack the weights for ARM, though only the 4_4 variant for now. The loading time may be slower, but it will result in an overall speed increase.
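No special flags are needed to benefit from this: repacking happens transparently when the model is loaded. As a minimal sketch, assuming a local llama.cpp build and the Q4_0 file from the table above:

```
# Plain CPU inference with the Q4_0 quant; weights are repacked
# automatically at load time if your CPU supports the optimized layout.
./llama-cli -m TheDrummer_Skyfall-31B-v4-Q4_0.gguf -p "Hello" -n 64
```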
<details>
  <summary>Click to view Q4_0_X_X information (deprecated)</summary>

I'm keeping this section to show the potential theoretical uplift in performance from using the Q4_0 with online repacking.

<details>
  <summary>Click to view benchmarks on an AVX2 system (EPYC7702)</summary>

| model | size | params | backend | threads | test | t/s | % (vs Q4_0) |
| ------------------------------ | ---------: | ---------: | ---------- | ------: | ------------: | -------------------: | -------------: |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | pp512 | 204.03 ± 1.03 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | pp1024 | 282.92 ± 0.19 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | pp2048 | 259.49 ± 0.44 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | tg128 | 39.12 ± 0.27 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | tg256 | 39.31 ± 0.69 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | tg512 | 40.52 ± 0.03 | 100% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | pp512 | 301.02 ± 1.74 | 147% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | pp1024 | 287.23 ± 0.20 | 101% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | pp2048 | 262.77 ± 1.81 | 101% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | tg128 | 18.80 ± 0.99 | 48% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | tg256 | 24.46 ± 3.04 | 83% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | tg512 | 36.32 ± 3.59 | 90% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | pp512 | 271.71 ± 3.53 | 133% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | pp1024 | 279.86 ± 45.63 | 100% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | pp2048 | 320.77 ± 5.00 | 124% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | tg128 | 43.51 ± 0.05 | 111% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | tg256 | 43.35 ± 0.09 | 110% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | tg512 | 42.60 ± 0.31 | 105% |

Q4_0_8_8 offers a nice bump to prompt processing and a small bump to text generation
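The table layout matches llama.cpp's llama-bench output (with a percentage column added). As a sketch of how to run a comparable test on your own hardware, assuming a local llama.cpp build and your chosen quant file:

```
# pp = prompt processing, tg = text generation, at the same sizes as above
./llama-bench -m TheDrummer_Skyfall-31B-v4-Q4_0.gguf -p 512,1024,2048 -n 128,256,512 -t 64
```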
+ +
+ +## Which file should I choose? + +
  <summary>Click here for details</summary>

A great write-up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)

The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.

If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.

If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total.

Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.

If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.

If you want to get more into the weeds, you can check out this extremely useful feature chart:

[llama.cpp feature matrix](https://github.com/ggml-org/llama.cpp/wiki/Feature-matrix)

But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.

These I-quants can also be used on CPU, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide.
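As a concrete sketch of the sizing rule above (an illustration, assuming an Nvidia GPU with nvidia-smi available):

```
# Report total VRAM, then leave 1-2GB of headroom when picking a quant
nvidia-smi --query-gpu=memory.total --format=csv,noheader
# Example: a 24GB card comfortably fits Q5_K_M (22.25GB) fully on GPU,
# while a 16GB card is better served by IQ3_M (14.07GB) or smaller.
```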
</details>

## Credits

Thank you kalomaze and Dampf for assistance in creating the imatrix calibration dataset.

Thank you ZeroWw for the inspiration to experiment with embed/output.

Thank you to LM Studio for sponsoring my work.

Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski