Initialize the project; model provided by the ModelHub XC community

Model: bartowski/magnum-32b-v2-GGUF
Source: Original Platform
Commit: 7290de632f
Author: ModelHub XC
Date: 2026-04-10 11:35:58 +08:00
23 changed files with 237 additions and 0 deletions

.gitattributes vendored Normal file

@@ -0,0 +1,55 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
magnum-32b-v2-Q6_K_L.gguf filter=lfs diff=lfs merge=lfs -text
magnum-32b-v2-Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
magnum-32b-v2-Q5_K_L.gguf filter=lfs diff=lfs merge=lfs -text
magnum-32b-v2-Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
magnum-32b-v2-Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
magnum-32b-v2-Q4_K_L.gguf filter=lfs diff=lfs merge=lfs -text
magnum-32b-v2-Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
magnum-32b-v2-Q4_K_S.gguf filter=lfs diff=lfs merge=lfs -text
magnum-32b-v2-IQ4_XS.gguf filter=lfs diff=lfs merge=lfs -text
magnum-32b-v2-Q3_K_XL.gguf filter=lfs diff=lfs merge=lfs -text
magnum-32b-v2-Q3_K_L.gguf filter=lfs diff=lfs merge=lfs -text
magnum-32b-v2-Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
magnum-32b-v2-IQ3_M.gguf filter=lfs diff=lfs merge=lfs -text
magnum-32b-v2-Q3_K_S.gguf filter=lfs diff=lfs merge=lfs -text
magnum-32b-v2-IQ3_XS.gguf filter=lfs diff=lfs merge=lfs -text
magnum-32b-v2-Q2_K_L.gguf filter=lfs diff=lfs merge=lfs -text
magnum-32b-v2-Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
magnum-32b-v2-IQ2_M.gguf filter=lfs diff=lfs merge=lfs -text
magnum-32b-v2-Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
magnum-32b-v2.imatrix filter=lfs diff=lfs merge=lfs -text

README.md Normal file

@@ -0,0 +1,121 @@
---
base_model: anthracite-org/magnum-32b-v2
language:
- en
- zh
license: other
license_name: tongyi-qianwen
license_link: https://huggingface.co/Qwen/Qwen2-72B-Instruct/blob/main/LICENSE
pipeline_tag: text-generation
tags:
- chat
quantized_by: bartowski
---
## Llamacpp imatrix Quantizations of magnum-32b-v2
Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b3509">b3509</a> for quantization.
Original model: https://huggingface.co/anthracite-org/magnum-32b-v2
All quants were made using the imatrix option with a dataset from [here](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8).
Run them in [LM Studio](https://lmstudio.ai/).
## Prompt format
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
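For a quick local test, this ChatML template can be passed straight to llama.cpp's `llama-cli`. A minimal sketch, assuming a llama.cpp build around b3509 (the `-e` flag expands the `\n` escapes; the system prompt is just an example):
```
./llama-cli -m magnum-32b-v2-Q4_K_M.gguf -e -n 256 \
  -p "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n<|im_start|>user\nWrite a haiku about autumn.<|im_end|>\n<|im_start|>assistant\n"
```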
## Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Split | Description |
| -------- | ---------- | --------- | ----- | ----------- |
| [magnum-32b-v2-Q8_0.gguf](https://huggingface.co/bartowski/magnum-32b-v2-GGUF/blob/main/magnum-32b-v2-Q8_0.gguf) | Q8_0 | 34.55GB | false | Extremely high quality, generally unneeded but max available quant. |
| [magnum-32b-v2-Q6_K_L.gguf](https://huggingface.co/bartowski/magnum-32b-v2-GGUF/blob/main/magnum-32b-v2-Q6_K_L.gguf) | Q6_K_L | 27.06GB | false | Uses Q8_0 for embed and output weights. Very high quality, near perfect, *recommended*. |
| [magnum-32b-v2-Q6_K.gguf](https://huggingface.co/bartowski/magnum-32b-v2-GGUF/blob/main/magnum-32b-v2-Q6_K.gguf) | Q6_K | 26.68GB | false | Very high quality, near perfect, *recommended*. |
| [magnum-32b-v2-Q5_K_L.gguf](https://huggingface.co/bartowski/magnum-32b-v2-GGUF/blob/main/magnum-32b-v2-Q5_K_L.gguf) | Q5_K_L | 23.56GB | false | Uses Q8_0 for embed and output weights. High quality, *recommended*. |
| [magnum-32b-v2-Q5_K_M.gguf](https://huggingface.co/bartowski/magnum-32b-v2-GGUF/blob/main/magnum-32b-v2-Q5_K_M.gguf) | Q5_K_M | 23.08GB | false | High quality, *recommended*. |
| [magnum-32b-v2-Q5_K_S.gguf](https://huggingface.co/bartowski/magnum-32b-v2-GGUF/blob/main/magnum-32b-v2-Q5_K_S.gguf) | Q5_K_S | 22.47GB | false | High quality, *recommended*. |
| [magnum-32b-v2-Q4_K_L.gguf](https://huggingface.co/bartowski/magnum-32b-v2-GGUF/blob/main/magnum-32b-v2-Q4_K_L.gguf) | Q4_K_L | 20.28GB | false | Uses Q8_0 for embed and output weights. Good quality, *recommended*. |
| [magnum-32b-v2-Q4_K_M.gguf](https://huggingface.co/bartowski/magnum-32b-v2-GGUF/blob/main/magnum-32b-v2-Q4_K_M.gguf) | Q4_K_M | 19.70GB | false | Good quality, default size for most use cases, *recommended*. |
| [magnum-32b-v2-Q4_K_S.gguf](https://huggingface.co/bartowski/magnum-32b-v2-GGUF/blob/main/magnum-32b-v2-Q4_K_S.gguf) | Q4_K_S | 18.64GB | false | Slightly lower quality with more space savings, *recommended*. |
| [magnum-32b-v2-Q3_K_XL.gguf](https://huggingface.co/bartowski/magnum-32b-v2-GGUF/blob/main/magnum-32b-v2-Q3_K_XL.gguf) | Q3_K_XL | 17.80GB | false | Uses Q8_0 for embed and output weights. Lower quality but usable, good for low RAM availability. |
| [magnum-32b-v2-IQ4_XS.gguf](https://huggingface.co/bartowski/magnum-32b-v2-GGUF/blob/main/magnum-32b-v2-IQ4_XS.gguf) | IQ4_XS | 17.56GB | false | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
| [magnum-32b-v2-Q3_K_L.gguf](https://huggingface.co/bartowski/magnum-32b-v2-GGUF/blob/main/magnum-32b-v2-Q3_K_L.gguf) | Q3_K_L | 17.12GB | false | Lower quality but usable, good for low RAM availability. |
| [magnum-32b-v2-Q3_K_M.gguf](https://huggingface.co/bartowski/magnum-32b-v2-GGUF/blob/main/magnum-32b-v2-Q3_K_M.gguf) | Q3_K_M | 15.82GB | false | Low quality. |
| [magnum-32b-v2-IQ3_M.gguf](https://huggingface.co/bartowski/magnum-32b-v2-GGUF/blob/main/magnum-32b-v2-IQ3_M.gguf) | IQ3_M | 14.70GB | false | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| [magnum-32b-v2-Q3_K_S.gguf](https://huggingface.co/bartowski/magnum-32b-v2-GGUF/blob/main/magnum-32b-v2-Q3_K_S.gguf) | Q3_K_S | 14.28GB | false | Low quality, not recommended. |
| [magnum-32b-v2-IQ3_XS.gguf](https://huggingface.co/bartowski/magnum-32b-v2-GGUF/blob/main/magnum-32b-v2-IQ3_XS.gguf) | IQ3_XS | 13.60GB | false | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| [magnum-32b-v2-Q2_K_L.gguf](https://huggingface.co/bartowski/magnum-32b-v2-GGUF/blob/main/magnum-32b-v2-Q2_K_L.gguf) | Q2_K_L | 12.98GB | false | Uses Q8_0 for embed and output weights. Very low quality but surprisingly usable. |
| [magnum-32b-v2-Q2_K.gguf](https://huggingface.co/bartowski/magnum-32b-v2-GGUF/blob/main/magnum-32b-v2-Q2_K.gguf) | Q2_K | 12.22GB | false | Very low quality but surprisingly usable. |
| [magnum-32b-v2-IQ2_M.gguf](https://huggingface.co/bartowski/magnum-32b-v2-GGUF/blob/main/magnum-32b-v2-IQ2_M.gguf) | IQ2_M | 11.18GB | false | Relatively low quality, uses SOTA techniques to be surprisingly usable. |
## Embed/output weights
Some of these quants (Q3_K_XL, Q4_K_L, etc.) use the standard quantization method, but with the embedding and output weights quantized to Q8_0 instead of what they would normally default to.
Some say that this improves the quality, others don't notice any difference. If you use these models, PLEASE COMMENT with your findings. I would like feedback that these are actually used and useful, so I don't keep uploading quants no one is using.
Thanks!
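For reference, variants like these can be produced with llama.cpp's `llama-quantize` by overriding the tensor types for embeddings and output. A sketch only, assuming the `--token-embedding-type`/`--output-tensor-type` flags available in llama.cpp builds of this era; the f16 input filename is hypothetical, and this is not necessarily the exact command used for these files:
```
# Q4_K_M base quant, with embedding and output weights forced to Q8_0 (-> "Q4_K_L")
./llama-quantize --imatrix magnum-32b-v2.imatrix \
  --token-embedding-type q8_0 --output-tensor-type q8_0 \
  magnum-32b-v2-f16.gguf magnum-32b-v2-Q4_K_L.gguf Q4_K_M
```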
## Credits
Thank you kalomaze and Dampf for assistance in creating the imatrix calibration dataset.
Thank you ZeroWw for the inspiration to experiment with embed/output weights.
## Downloading using huggingface-cli
First, make sure you have huggingface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Then, you can target the specific file you want:
```
huggingface-cli download bartowski/magnum-32b-v2-GGUF --include "magnum-32b-v2-Q4_K_M.gguf" --local-dir ./
```
If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run:
```
huggingface-cli download bartowski/magnum-32b-v2-GGUF --include "magnum-32b-v2-Q8_0/*" --local-dir ./
```
You can either specify a new local-dir (magnum-32b-v2-Q8_0) or download them all in place (./).
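Once downloaded, you can optionally verify the file against the sha256 recorded in this repo's LFS pointers, e.g. for Q4_K_M:
```
# Should print the oid from the matching LFS pointer file in this commit
sha256sum magnum-32b-v2-Q4_K_M.gguf
# expected: bec87f37b3c291e80eb9f0f4ea8e734f440d208b3c497edb618dc77fa3dfac70
```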
## Which file should I choose?
A great write-up with charts comparing the performance of various quants is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9).
The first thing to figure out is how big a model you can run. To do this, you'll need to know how much RAM and/or VRAM you have.
If you want your model running as FAST as possible, you'll want to fit the whole thing in your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.
If you want the absolute maximum quality, add your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total.
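On an Nvidia card, for example, you can check total VRAM from the command line (assuming `nvidia-smi` is installed) and pick a quant from the table above accordingly:
```
nvidia-smi --query-gpu=memory.total --format=csv,noheader
# e.g. "24576 MiB" (~24GB): a 22-23GB quant like Q5_K_S or Q5_K_M fits fully in VRAM
```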
Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.
If you don't want to think too much, grab one of the K-quants. These are in the format 'QX_K_X', like Q5_K_M.
If you want to get more into the weeds, you can check out this extremely useful feature chart:
[llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)
But basically, if you're aiming for below Q4 and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in the format IQX_X, like IQ3_M. These are newer and offer better performance for their size.
These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalents, so speed vs. performance is a tradeoff you'll have to decide.
The I-quants are *not* compatible with Vulkan, which also targets AMD, so if you have an AMD card, double check whether you're using the rocBLAS build or the Vulkan build. At the time of writing, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm.
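If your chosen quant doesn't fully fit in VRAM, llama.cpp can offload just part of the model to the GPU. A sketch, assuming `llama-cli` with the standard `-ngl` (GPU layers) flag; the layer count is an example to reduce if you hit out-of-memory errors:
```
# Offload as many layers as VRAM allows; lower -ngl if allocation fails
./llama-cli -m magnum-32b-v2-IQ3_M.gguf -ngl 40 -e \
  -p "<|im_start|>user\nHello!<|im_end|>\n<|im_start|>assistant\n"
```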
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski

configuration.json Normal file

@@ -0,0 +1 @@
{"framework": "pytorch", "task": "text-generation", "allow_remote": true}

magnum-32b-v2-IQ2_M.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:2b1a946883ada03e423e4dd938a6d52ca3fff1d1f19044987d2c4a77964ee4e1
size 11182682528

magnum-32b-v2-IQ3_M.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:1cbdb9e8d560deac57eea29eeb0e1b79c95a28dbb404bc55c65d8571b9d2df34
size 14700593568

magnum-32b-v2-IQ3_XS.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:90591eef32ed560cea4e64a34932c4664495437163c46d05dc20e743586ffbaf
size 13603275168

magnum-32b-v2-IQ4_XS.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:611ce100cee387607b46dc9571da30792dc365fbc049511d48aa1d56f1377e64
size 17559458208

magnum-32b-v2-Q2_K.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:53e4b71c3e06865463d2e518834aff6de3b3709eb2acf359ffb76334d638722a
size 12222001568

magnum-32b-v2-Q2_K_L.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:09d859a912ef9ac78896844633dc7ce0024e480defed1c32915ab403b41b5a05
size 12982321568

magnum-32b-v2-Q3_K_L.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:65242382d9d0791b9fc235bc3ec1dafa166b3a4d66286d03e7b3c49f2c9a4c8e
size 17117315488

magnum-32b-v2-Q3_K_M.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c2954ddbfe4043221c191e5199e1ba3d392f0890c8521d4ffb2241e13cab45cf
size 15815115168

magnum-32b-v2-Q3_K_S.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f6e121a06fce950df42671f29c4cd7fb5f101777f782441bbf2323a70e6edb2b
size 14284194208

magnum-32b-v2-Q3_K_XL.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:6a83c466d60370349cdec938846a78b2a62796fd39c42db986f11891f806fd34
size 17798562208

magnum-32b-v2-Q4_K_L.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:92e2560d02aa2d6277a611af629ead3364115d302bb8b02ec508558c89a2017e
size 20276806048

magnum-32b-v2-Q4_K_M.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:bec87f37b3c291e80eb9f0f4ea8e734f440d208b3c497edb618dc77fa3dfac70
size 19698962848

magnum-32b-v2-Q4_K_S.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:33662d7d28c12ae396d36bc4e97ce2569cf0fd7dc18a63865bb0226d3adce645
size 18641539488

magnum-32b-v2-Q5_K_L.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:3f92f246509ad4cf08ec76fbc61290f8df01ea30703e12e51b432415811f8137
size 23564091808

magnum-32b-v2-Q5_K_M.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:3efb8d339411c684ee033d7ad41ba70d5d7e56b4a88b062a4133c8d07e162ec6
size 23083569568

magnum-32b-v2-Q5_K_S.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:20e5426596995bcf3d47a100e3d51472da41b49449cccfde0eef5d820130aecf
size 22465237408

magnum-32b-v2-Q6_K.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:563f5a5a030da4070cc2b1598214d9f2f38ca28776e9ac69096199be1dc03c8e
size 26679714208

magnum-32b-v2-Q6_K_L.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:967f07edb04adb591f46ab5a2495d7bad9cba6f7f3aa625d0fe47a4f2590cf86
size 27056832928

magnum-32b-v2-Q8_0.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e6939f3a9ef70763d262b91e845b0dae70bfca4a1e90c877ef1037957e05d819
size 34553495968

magnum-32b-v2.imatrix Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:bc99e7d47853d51ae5a3f48946c21661215681ba7e10b4c05e5885c7466700dd
size 14891562