Initialize the project; model provided by the ModelHub XC community

Model: bartowski/mlabonne_Qwen3-14B-abliterated-GGUF
Source: Original Platform
ModelHub XC
2026-04-10 12:51:02 +08:00
commit 2579c5f5ca
28 changed files with 319 additions and 0 deletions

.gitattributes vendored Normal file

@@ -0,0 +1,61 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
mlabonne_Qwen3-14B-abliterated-Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
mlabonne_Qwen3-14B-abliterated-Q6_K_L.gguf filter=lfs diff=lfs merge=lfs -text
mlabonne_Qwen3-14B-abliterated-Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
mlabonne_Qwen3-14B-abliterated-Q5_K_L.gguf filter=lfs diff=lfs merge=lfs -text
mlabonne_Qwen3-14B-abliterated-Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
mlabonne_Qwen3-14B-abliterated-Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
mlabonne_Qwen3-14B-abliterated-Q4_K_L.gguf filter=lfs diff=lfs merge=lfs -text
mlabonne_Qwen3-14B-abliterated-Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
mlabonne_Qwen3-14B-abliterated-Q4_K_S.gguf filter=lfs diff=lfs merge=lfs -text
mlabonne_Qwen3-14B-abliterated-Q4_1.gguf filter=lfs diff=lfs merge=lfs -text
mlabonne_Qwen3-14B-abliterated-Q4_0.gguf filter=lfs diff=lfs merge=lfs -text
mlabonne_Qwen3-14B-abliterated-IQ4_NL.gguf filter=lfs diff=lfs merge=lfs -text
mlabonne_Qwen3-14B-abliterated-IQ4_XS.gguf filter=lfs diff=lfs merge=lfs -text
mlabonne_Qwen3-14B-abliterated-Q3_K_XL.gguf filter=lfs diff=lfs merge=lfs -text
mlabonne_Qwen3-14B-abliterated-Q3_K_L.gguf filter=lfs diff=lfs merge=lfs -text
mlabonne_Qwen3-14B-abliterated-Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
mlabonne_Qwen3-14B-abliterated-IQ3_M.gguf filter=lfs diff=lfs merge=lfs -text
mlabonne_Qwen3-14B-abliterated-Q3_K_S.gguf filter=lfs diff=lfs merge=lfs -text
mlabonne_Qwen3-14B-abliterated-IQ3_XS.gguf filter=lfs diff=lfs merge=lfs -text
mlabonne_Qwen3-14B-abliterated-IQ3_XXS.gguf filter=lfs diff=lfs merge=lfs -text
mlabonne_Qwen3-14B-abliterated-Q2_K_L.gguf filter=lfs diff=lfs merge=lfs -text
mlabonne_Qwen3-14B-abliterated-Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
mlabonne_Qwen3-14B-abliterated-IQ2_M.gguf filter=lfs diff=lfs merge=lfs -text
mlabonne_Qwen3-14B-abliterated-bf16.gguf filter=lfs diff=lfs merge=lfs -text
mlabonne_Qwen3-14B-abliterated.imatrix filter=lfs diff=lfs merge=lfs -text
mlabonne_Qwen3-14B-abliterated-IQ2_S.gguf filter=lfs diff=lfs merge=lfs -text
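The rules above are the stock Hugging Face LFS set plus one entry per uploaded GGUF. For reference, a minimal sketch of how such an entry is produced when adding a new quant (the IQ2_XS filename is hypothetical; `git lfs track` appends the matching filter line to .gitattributes):
```
# Hypothetical workflow for adding another quant to a repo like this one.
git lfs install
git lfs track "mlabonne_Qwen3-14B-abliterated-IQ2_XS.gguf"
git add .gitattributes mlabonne_Qwen3-14B-abliterated-IQ2_XS.gguf
git commit -m "Add IQ2_XS quant"
```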

README.md Normal file

@@ -0,0 +1,180 @@
---
quantized_by: bartowski
pipeline_tag: text-generation
base_model: mlabonne/Qwen3-14B-abliterated
tags:
- abliteration
- abliterated
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-14B/blob/main/LICENSE
base_model_relation: quantized
---
## Llamacpp imatrix Quantizations of Qwen3-14B-abliterated by mlabonne
Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b5415">b5415</a> for quantization.
Original model: https://huggingface.co/mlabonne/Qwen3-14B-abliterated
All quants made using imatrix option with dataset from [here](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8)
Run them in [LM Studio](https://lmstudio.ai/)
Run them directly with [llama.cpp](https://github.com/ggerganov/llama.cpp), or any other llama.cpp based project
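For the llama.cpp route, a minimal sketch of an interactive run (assumes a local llama.cpp build and that the Q4_K_M quant from the table below is in the current directory; flag values are placeholders to adjust for your hardware):
```
# Interactive chat; llama.cpp applies the model's built-in chat template.
./llama-cli \
  -m ./mlabonne_Qwen3-14B-abliterated-Q4_K_M.gguf \
  -ngl 99 \
  --ctx-size 8192 \
  -cnv
# -ngl 99     offloads as many layers as fit in VRAM
# --ctx-size  sets the context window in tokens
# -cnv        enables conversation (chat) mode
```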
## Prompt format
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
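When driving the model with a raw prompt instead of chat mode, fill the template in literally. A hedged example (the system prompt and question are placeholders):
```
./llama-cli -m ./mlabonne_Qwen3-14B-abliterated-Q4_K_M.gguf -ngl 99 -p \
'<|im_start|>system
You are a helpful assistant.<|im_end|>
<|im_start|>user
Summarize what GGUF quantization does.<|im_end|>
<|im_start|>assistant
'
```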
## What's new:
Original model updated
## Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Split | Description |
| -------- | ---------- | --------- | ----- | ----------- |
| [Qwen3-14B-abliterated-bf16.gguf](https://huggingface.co/bartowski/mlabonne_Qwen3-14B-abliterated-GGUF/blob/main/mlabonne_Qwen3-14B-abliterated-bf16.gguf) | bf16 | 29.54GB | false | Full BF16 weights. |
| [Qwen3-14B-abliterated-Q8_0.gguf](https://huggingface.co/bartowski/mlabonne_Qwen3-14B-abliterated-GGUF/blob/main/mlabonne_Qwen3-14B-abliterated-Q8_0.gguf) | Q8_0 | 15.70GB | false | Extremely high quality, generally unneeded but max available quant. |
| [Qwen3-14B-abliterated-Q6_K_L.gguf](https://huggingface.co/bartowski/mlabonne_Qwen3-14B-abliterated-GGUF/blob/main/mlabonne_Qwen3-14B-abliterated-Q6_K_L.gguf) | Q6_K_L | 12.50GB | false | Uses Q8_0 for embed and output weights. Very high quality, near perfect, *recommended*. |
| [Qwen3-14B-abliterated-Q6_K.gguf](https://huggingface.co/bartowski/mlabonne_Qwen3-14B-abliterated-GGUF/blob/main/mlabonne_Qwen3-14B-abliterated-Q6_K.gguf) | Q6_K | 12.12GB | false | Very high quality, near perfect, *recommended*. |
| [Qwen3-14B-abliterated-Q5_K_L.gguf](https://huggingface.co/bartowski/mlabonne_Qwen3-14B-abliterated-GGUF/blob/main/mlabonne_Qwen3-14B-abliterated-Q5_K_L.gguf) | Q5_K_L | 10.99GB | false | Uses Q8_0 for embed and output weights. High quality, *recommended*. |
| [Qwen3-14B-abliterated-Q5_K_M.gguf](https://huggingface.co/bartowski/mlabonne_Qwen3-14B-abliterated-GGUF/blob/main/mlabonne_Qwen3-14B-abliterated-Q5_K_M.gguf) | Q5_K_M | 10.51GB | false | High quality, *recommended*. |
| [Qwen3-14B-abliterated-Q5_K_S.gguf](https://huggingface.co/bartowski/mlabonne_Qwen3-14B-abliterated-GGUF/blob/main/mlabonne_Qwen3-14B-abliterated-Q5_K_S.gguf) | Q5_K_S | 10.26GB | false | High quality, *recommended*. |
| [Qwen3-14B-abliterated-Q4_K_L.gguf](https://huggingface.co/bartowski/mlabonne_Qwen3-14B-abliterated-GGUF/blob/main/mlabonne_Qwen3-14B-abliterated-Q4_K_L.gguf) | Q4_K_L | 9.58GB | false | Uses Q8_0 for embed and output weights. Good quality, *recommended*. |
| [Qwen3-14B-abliterated-Q4_1.gguf](https://huggingface.co/bartowski/mlabonne_Qwen3-14B-abliterated-GGUF/blob/main/mlabonne_Qwen3-14B-abliterated-Q4_1.gguf) | Q4_1 | 9.39GB | false | Legacy format, similar performance to Q4_K_S but with improved tokens/watt on Apple silicon. |
| [Qwen3-14B-abliterated-Q4_K_M.gguf](https://huggingface.co/bartowski/mlabonne_Qwen3-14B-abliterated-GGUF/blob/main/mlabonne_Qwen3-14B-abliterated-Q4_K_M.gguf) | Q4_K_M | 9.00GB | false | Good quality, default size for most use cases, *recommended*. |
| [Qwen3-14B-abliterated-Q3_K_XL.gguf](https://huggingface.co/bartowski/mlabonne_Qwen3-14B-abliterated-GGUF/blob/main/mlabonne_Qwen3-14B-abliterated-Q3_K_XL.gguf) | Q3_K_XL | 8.58GB | false | Uses Q8_0 for embed and output weights. Lower quality but usable, good for low RAM availability. |
| [Qwen3-14B-abliterated-Q4_K_S.gguf](https://huggingface.co/bartowski/mlabonne_Qwen3-14B-abliterated-GGUF/blob/main/mlabonne_Qwen3-14B-abliterated-Q4_K_S.gguf) | Q4_K_S | 8.57GB | false | Slightly lower quality with more space savings, *recommended*. |
| [Qwen3-14B-abliterated-Q4_0.gguf](https://huggingface.co/bartowski/mlabonne_Qwen3-14B-abliterated-GGUF/blob/main/mlabonne_Qwen3-14B-abliterated-Q4_0.gguf) | Q4_0 | 8.54GB | false | Legacy format, offers online repacking for ARM and AVX CPU inference. |
| [Qwen3-14B-abliterated-IQ4_NL.gguf](https://huggingface.co/bartowski/mlabonne_Qwen3-14B-abliterated-GGUF/blob/main/mlabonne_Qwen3-14B-abliterated-IQ4_NL.gguf) | IQ4_NL | 8.54GB | false | Similar to IQ4_XS, but slightly larger. Offers online repacking for ARM CPU inference. |
| [Qwen3-14B-abliterated-IQ4_XS.gguf](https://huggingface.co/bartowski/mlabonne_Qwen3-14B-abliterated-GGUF/blob/main/mlabonne_Qwen3-14B-abliterated-IQ4_XS.gguf) | IQ4_XS | 8.11GB | false | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
| [Qwen3-14B-abliterated-Q3_K_L.gguf](https://huggingface.co/bartowski/mlabonne_Qwen3-14B-abliterated-GGUF/blob/main/mlabonne_Qwen3-14B-abliterated-Q3_K_L.gguf) | Q3_K_L | 7.90GB | false | Lower quality but usable, good for low RAM availability. |
| [Qwen3-14B-abliterated-Q3_K_M.gguf](https://huggingface.co/bartowski/mlabonne_Qwen3-14B-abliterated-GGUF/blob/main/mlabonne_Qwen3-14B-abliterated-Q3_K_M.gguf) | Q3_K_M | 7.32GB | false | Low quality. |
| [Qwen3-14B-abliterated-IQ3_M.gguf](https://huggingface.co/bartowski/mlabonne_Qwen3-14B-abliterated-GGUF/blob/main/mlabonne_Qwen3-14B-abliterated-IQ3_M.gguf) | IQ3_M | 6.88GB | false | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| [Qwen3-14B-abliterated-Q3_K_S.gguf](https://huggingface.co/bartowski/mlabonne_Qwen3-14B-abliterated-GGUF/blob/main/mlabonne_Qwen3-14B-abliterated-Q3_K_S.gguf) | Q3_K_S | 6.66GB | false | Low quality, not recommended. |
| [Qwen3-14B-abliterated-Q2_K_L.gguf](https://huggingface.co/bartowski/mlabonne_Qwen3-14B-abliterated-GGUF/blob/main/mlabonne_Qwen3-14B-abliterated-Q2_K_L.gguf) | Q2_K_L | 6.51GB | false | Uses Q8_0 for embed and output weights. Very low quality but surprisingly usable. |
| [Qwen3-14B-abliterated-IQ3_XS.gguf](https://huggingface.co/bartowski/mlabonne_Qwen3-14B-abliterated-GGUF/blob/main/mlabonne_Qwen3-14B-abliterated-IQ3_XS.gguf) | IQ3_XS | 6.38GB | false | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| [Qwen3-14B-abliterated-IQ3_XXS.gguf](https://huggingface.co/bartowski/mlabonne_Qwen3-14B-abliterated-GGUF/blob/main/mlabonne_Qwen3-14B-abliterated-IQ3_XXS.gguf) | IQ3_XXS | 5.94GB | false | Lower quality, new method with decent performance, comparable to Q3 quants. |
| [Qwen3-14B-abliterated-Q2_K.gguf](https://huggingface.co/bartowski/mlabonne_Qwen3-14B-abliterated-GGUF/blob/main/mlabonne_Qwen3-14B-abliterated-Q2_K.gguf) | Q2_K | 5.75GB | false | Very low quality but surprisingly usable. |
| [Qwen3-14B-abliterated-IQ2_M.gguf](https://huggingface.co/bartowski/mlabonne_Qwen3-14B-abliterated-GGUF/blob/main/mlabonne_Qwen3-14B-abliterated-IQ2_M.gguf) | IQ2_M | 5.32GB | false | Relatively low quality, uses SOTA techniques to be surprisingly usable. |
| [Qwen3-14B-abliterated-IQ2_S.gguf](https://huggingface.co/bartowski/mlabonne_Qwen3-14B-abliterated-GGUF/blob/main/mlabonne_Qwen3-14B-abliterated-IQ2_S.gguf) | IQ2_S | 4.96GB | false | Low quality, uses SOTA techniques to be usable. |
## Embed/output weights
Some of these quants (Q3_K_XL, Q4_K_L, etc.) use the standard quantization method but with the embedding and output weights quantized to Q8_0 instead of their usual default.
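You can verify the tensor-level types yourself: the `gguf` Python package ships a `gguf-dump` tool that lists each tensor's quant type (a sketch; the grep pattern assumes the standard llama.cpp tensor names):
```
pip install gguf
# In a *_L quant, token_embd.weight and output.weight should report Q8_0
# while most block tensors stay at the base quant level.
gguf-dump mlabonne_Qwen3-14B-abliterated-Q4_K_L.gguf | grep -E "token_embd|output"
```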
## Downloading using huggingface-cli
<details>
<summary>Click to view download instructions</summary>
First, make sure you have huggingface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Then, you can target the specific file you want:
```
huggingface-cli download bartowski/mlabonne_Qwen3-14B-abliterated-GGUF --include "mlabonne_Qwen3-14B-abliterated-Q4_K_M.gguf" --local-dir ./
```
If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run:
```
huggingface-cli download bartowski/mlabonne_Qwen3-14B-abliterated-GGUF --include "mlabonne_Qwen3-14B-abliterated-Q8_0/*" --local-dir ./
```
You can either specify a new local-dir (mlabonne_Qwen3-14B-abliterated-Q8_0) or download them all in place (./).
</details>
## ARM/AVX information
Previously, you would download Q4_0_4_4/4_8/8_8, and these would have their weights interleaved in memory in order to improve performance on ARM and AVX machines by loading up more data in one pass.
Now, however, there is something called "online repacking" for weights; details are in [this PR](https://github.com/ggerganov/llama.cpp/pull/9921). If you use Q4_0 and your hardware would benefit from repacking the weights, it will be done automatically on the fly.
As of llama.cpp build [b4282](https://github.com/ggerganov/llama.cpp/releases/tag/b4282), you will not be able to run the Q4_0_X_X files and will instead need to use Q4_0.
Additionally, if you want slightly better quality, you can use IQ4_NL thanks to [this PR](https://github.com/ggerganov/llama.cpp/pull/10541), which also repacks the weights for ARM (though only the 4_4 variant for now). Loading may be slower, but it will result in an overall speed increase.
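To see what repacking buys you on your own machine, `llama-bench` (bundled with llama.cpp) can compare quants head to head; a minimal sketch, with the thread count as a placeholder:
```
# Compare prompt processing (pp) and token generation (tg) throughput
# of Q4_0 (online-repacked where supported) against IQ4_NL.
./llama-bench \
  -m mlabonne_Qwen3-14B-abliterated-Q4_0.gguf \
  -m mlabonne_Qwen3-14B-abliterated-IQ4_NL.gguf \
  -p 512 -n 128 -t 8
```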
<details>
<summary>Click to view Q4_0_X_X information (deprecated)</summary>
I'm keeping this section to show the potential theoretical uplift in performance from using the Q4_0 with online repacking.
<details>
<summary>Click to view benchmarks on an AVX2 system (EPYC7702)</summary>
| model | size | params | backend | threads | test | t/s | % (vs Q4_0) |
| ------------------------------ | ---------: | ---------: | ---------- | ------: | ------------: | -------------------: |-------------: |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | pp512 | 204.03 ± 1.03 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | pp1024 | 282.92 ± 0.19 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | pp2048 | 259.49 ± 0.44 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | tg128 | 39.12 ± 0.27 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | tg256 | 39.31 ± 0.69 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | tg512 | 40.52 ± 0.03 | 100% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | pp512 | 301.02 ± 1.74 | 147% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | pp1024 | 287.23 ± 0.20 | 101% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | pp2048 | 262.77 ± 1.81 | 101% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | tg128 | 18.80 ± 0.99 | 48% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | tg256 | 24.46 ± 3.04 | 83% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | tg512 | 36.32 ± 3.59 | 90% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | pp512 | 271.71 ± 3.53 | 133% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | pp1024 | 279.86 ± 45.63 | 100% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | pp2048 | 320.77 ± 5.00 | 124% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | tg128 | 43.51 ± 0.05 | 111% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | tg256 | 43.35 ± 0.09 | 110% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | tg512 | 42.60 ± 0.31 | 105% |
Q4_0_8_8 offers a nice bump to prompt processing and a small bump to text generation.
</details>
</details>
## Which file should I choose?
<details>
<summary>Click here for details</summary>
A great write-up with charts comparing the performance of various quants is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9).
The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.
If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.
If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total.
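As a worked example of that rule of thumb, a minimal sketch assuming an NVIDIA GPU with `nvidia-smi` on PATH (the 2GB margin mirrors the guidance above):
```
# Print a VRAM budget for fully-offloaded quants: total VRAM minus ~2GB.
vram_mib=$(nvidia-smi --query-gpu=memory.total --format=csv,noheader,nounits | head -n1)
echo "Target max quant size: $(( (vram_mib - 2048) / 1024 )) GB"
# Example: a 12288 MiB (12GB) card gives a ~10GB budget, pointing at
# Q4_K_L (9.58GB) or Q4_K_M (9.00GB) from the table above.
```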
Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.
If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.
If you want to get more into the weeds, you can check out this extremely useful feature chart:
[llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)
But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.
These I-quants can also be used on CPU, but will be slower than their K-quant equivalents, so the speed-versus-quality tradeoff is one you'll have to weigh.
</details>
## Credits
Thank you kalomaze and Dampf for assistance in creating the imatrix calibration dataset.
Thank you ZeroWw for the inspiration to experiment with embed/output.
Thank you to LM Studio for sponsoring my work.
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:5c05b774d4af43f748862331af0e5cf6b7c4f4b0ab995491fe9f954a54b80cb4
size 5322941536


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:672e0b086d8f56537a60bd108f28f819ac7cbe7f67f34fc259f09dbef7ad4ae7
size 4963312736


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:5cee38b6201e572084d13f65fe648a298b2328154e369f271640b601b4a7f176
size 6883410016


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:eea8f9e4433a903abb1e75eb0452089e425fc6dddc6d11eb21071e5fe8246eee
size 6375301216


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:9df30e4013cbf732b55d6437cfce3765d7e8f9567001aea80b3ccceb53fd698f
size 5942666336


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:1702dab3f9957cf71f601ea4a1beb48da06b6e4bdcc9f52cc3b28db4e4b9d041
size 8541363296


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8a84ad678a549b38ae0bcd285d99b62711cf808d3e1dd5d6b0b2203c8f5e7f69
size 8110730336


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:1c87c0c198e0b3f7861f868176935916ef7f487965b1a781cf76f2b04f483b15
size 5753984096


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:fee726573fb376291bbf08acdcca961bdbd4cb8b85891b3b979f4feef5c73092
size 6513664096


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a64bfd94ddd40fc7a9764bf1fdef7f9dd1a9d9cbf555f52c7458792fecbcc5eb
size 7900651616


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c449ac4b31e325be266d652ea516650c6a05e68b09d374b9e51ef652a91a376d
size 7321313376


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:43f45742bb89b6aab5d053b8df5604c06ff93b1c41f41de2338e86a276049824
size 6657106016


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:074deb9caccf667e3ddaa89de100b763d5cb4f95f605a2884b8991b06449f581
size 8581324896


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:fe68566263d6414494ffa827d1a6116b26f499b25efe67782cb102d2c800c46c
size 8543001696


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:39c1739893f283c21707fd29e52ca82346d6d4fe58cebc1c48222291275cc57d
size 9389522016


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8eaa2980edbabdd16de66da92e00e87d9bf93103b71df23b4c220e0471d8e7b8
size 9579110496


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:3fe972a7c6e847ec791453b89a7333d369fbde329cbd4cc9a4f0598854db5d54
size 9001753696


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:6b87c4038cebcc654413f63fed97c75a5b9dc9a1bb018af809bb3e3a3b5db496
size 8573475936


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d8e5182452022f17376865add82def239e6424ded3dfc2122ce756d5c4c687cf
size 10994688096


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:6ba161c229e6e6442864bea87b311ce36b62c0c664f0c8eac75cc43ee0fb619b
size 10514570336


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:1baf807508e5f3e2e9bc7e3b7587623ad35141bbd6d1a09bb62032f9084a89de
size 10263895136


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:db037b17caecc0f5be69fb8e0fd759810acce911fe64d61a0116d46029509a16
size 12121938016


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c1fa4ba6f3af071be5d70b6654a18702e5086b0783652268bed0ebc5904db707
size 12498739296


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:98244eaddf92baf49c9c820d0b1108d8c7906e5abd1575c7f8942873c578f9d8
size 15698534496


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:84a64243f08fbaf3b279e237659b61797e49623a092d21f131aab990ebdbc7d3
size 29543423808


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:0b7caf7292d6e4309bceb90b21a43321be48ad56b05c8ac665e31b920dcd0105
size 7709778