Initialize project; model provided by the ModelHub XC community

Model: bartowski/mlabonne_gemma-3-4b-it-abliterated-GGUF
Source: Original Platform
Author: ModelHub XC
Date: 2026-04-22 12:15:54 +08:00
Commit: be6a5f45f0
29 changed files with 318 additions and 0 deletions

.gitattributes (vendored, 62 lines)

@@ -0,0 +1,62 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
mlabonne_gemma-3-4b-it-abliterated-Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
mlabonne_gemma-3-4b-it-abliterated-Q6_K_L.gguf filter=lfs diff=lfs merge=lfs -text
mlabonne_gemma-3-4b-it-abliterated-Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
mlabonne_gemma-3-4b-it-abliterated-Q5_K_L.gguf filter=lfs diff=lfs merge=lfs -text
mlabonne_gemma-3-4b-it-abliterated-Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
mlabonne_gemma-3-4b-it-abliterated-Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
mlabonne_gemma-3-4b-it-abliterated-Q4_K_L.gguf filter=lfs diff=lfs merge=lfs -text
mlabonne_gemma-3-4b-it-abliterated-Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
mlabonne_gemma-3-4b-it-abliterated-Q4_K_S.gguf filter=lfs diff=lfs merge=lfs -text
mlabonne_gemma-3-4b-it-abliterated-Q4_1.gguf filter=lfs diff=lfs merge=lfs -text
mmproj-mlabonne_gemma-3-4b-it-abliterated-f32.gguf filter=lfs diff=lfs merge=lfs -text
mlabonne_gemma-3-4b-it-abliterated-Q4_0.gguf filter=lfs diff=lfs merge=lfs -text
mlabonne_gemma-3-4b-it-abliterated-IQ4_NL.gguf filter=lfs diff=lfs merge=lfs -text
mlabonne_gemma-3-4b-it-abliterated-IQ4_XS.gguf filter=lfs diff=lfs merge=lfs -text
mlabonne_gemma-3-4b-it-abliterated-Q3_K_XL.gguf filter=lfs diff=lfs merge=lfs -text
mlabonne_gemma-3-4b-it-abliterated-Q3_K_L.gguf filter=lfs diff=lfs merge=lfs -text
mlabonne_gemma-3-4b-it-abliterated-Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
mlabonne_gemma-3-4b-it-abliterated-IQ3_M.gguf filter=lfs diff=lfs merge=lfs -text
mmproj-mlabonne_gemma-3-4b-it-abliterated-f16.gguf filter=lfs diff=lfs merge=lfs -text
mlabonne_gemma-3-4b-it-abliterated-Q3_K_S.gguf filter=lfs diff=lfs merge=lfs -text
mlabonne_gemma-3-4b-it-abliterated-IQ3_XS.gguf filter=lfs diff=lfs merge=lfs -text
mlabonne_gemma-3-4b-it-abliterated-IQ3_XXS.gguf filter=lfs diff=lfs merge=lfs -text
mlabonne_gemma-3-4b-it-abliterated-Q2_K_L.gguf filter=lfs diff=lfs merge=lfs -text
mlabonne_gemma-3-4b-it-abliterated-Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
mlabonne_gemma-3-4b-it-abliterated-IQ2_M.gguf filter=lfs diff=lfs merge=lfs -text
mlabonne_gemma-3-4b-it-abliterated-bf16.gguf filter=lfs diff=lfs merge=lfs -text
mlabonne_gemma-3-4b-it-abliterated.imatrix filter=lfs diff=lfs merge=lfs -text

README.md (175 lines)

@@ -0,0 +1,175 @@
---
quantized_by: bartowski
pipeline_tag: image-text-to-text
license: gemma
base_model: mlabonne/gemma-3-4b-it-abliterated
---
## Llamacpp imatrix Quantizations of gemma-3-4b-it-abliterated by mlabonne
Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b4896">b4896</a> for quantization.
Original model: https://huggingface.co/mlabonne/gemma-3-4b-it-abliterated
All quants were made using the imatrix option with a dataset from [here](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8)
Run them in [LM Studio](https://lmstudio.ai/)
Run them directly with [llama.cpp](https://github.com/ggerganov/llama.cpp), or any other llama.cpp-based project
## Prompt format
```
<bos><start_of_turn>user
{system_prompt}
{prompt}<end_of_turn>
<start_of_turn>model
```
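As a quick, unofficial sketch of that template in use, here is a raw completion through the third-party llama-cpp-python bindings (`pip install llama-cpp-python`); the file name, prompts, and generation settings are illustrative assumptions, not part of this repo:
```
# Hedged sketch: raw completion with the Gemma 3 template via llama-cpp-python.
# Model path, prompts, and settings below are assumptions, not repo defaults.
from llama_cpp import Llama

# <bos> is omitted from the string because the tokenizer normally inserts it.
prompt = (
    "<start_of_turn>user\n"
    "You are a concise assistant.\n"
    "Explain imatrix quantization in one sentence.<end_of_turn>\n"
    "<start_of_turn>model\n"
)

llm = Llama(model_path="./mlabonne_gemma-3-4b-it-abliterated-Q4_K_M.gguf", n_ctx=4096)
out = llm(prompt, max_tokens=128, stop=["<end_of_turn>"])  # stop at the turn boundary
print(out["choices"][0]["text"])
```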
## Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Split | Description |
| -------- | ---------- | --------- | ----- | ----------- |
| [mmproj-gemma-3-4b-it-abliterated-f32.gguf](https://huggingface.co/bartowski/mlabonne_gemma-3-4b-it-abliterated-GGUF/blob/main/mmproj-mlabonne_gemma-3-4b-it-abliterated-f32.gguf) | f32 | 1.68GB | false | F32 format MMPROJ file, required for vision. |
| [mmproj-gemma-3-4b-it-abliterated-f16.gguf](https://huggingface.co/bartowski/mlabonne_gemma-3-4b-it-abliterated-GGUF/blob/main/mmproj-mlabonne_gemma-3-4b-it-abliterated-f16.gguf) | f16 | 851MB | false | F16 format MMPROJ file, required for vision. |
| [gemma-3-4b-it-abliterated-bf16.gguf](https://huggingface.co/bartowski/mlabonne_gemma-3-4b-it-abliterated-GGUF/blob/main/mlabonne_gemma-3-4b-it-abliterated-bf16.gguf) | bf16 | 7.77GB | false | Full BF16 weights. |
| [gemma-3-4b-it-abliterated-Q8_0.gguf](https://huggingface.co/bartowski/mlabonne_gemma-3-4b-it-abliterated-GGUF/blob/main/mlabonne_gemma-3-4b-it-abliterated-Q8_0.gguf) | Q8_0 | 4.13GB | false | Extremely high quality, generally unneeded but max available quant. |
| [gemma-3-4b-it-abliterated-Q6_K_L.gguf](https://huggingface.co/bartowski/mlabonne_gemma-3-4b-it-abliterated-GGUF/blob/main/mlabonne_gemma-3-4b-it-abliterated-Q6_K_L.gguf) | Q6_K_L | 3.35GB | false | Uses Q8_0 for embed and output weights. Very high quality, near perfect, *recommended*. |
| [gemma-3-4b-it-abliterated-Q6_K.gguf](https://huggingface.co/bartowski/mlabonne_gemma-3-4b-it-abliterated-GGUF/blob/main/mlabonne_gemma-3-4b-it-abliterated-Q6_K.gguf) | Q6_K | 3.19GB | false | Very high quality, near perfect, *recommended*. |
| [gemma-3-4b-it-abliterated-Q5_K_L.gguf](https://huggingface.co/bartowski/mlabonne_gemma-3-4b-it-abliterated-GGUF/blob/main/mlabonne_gemma-3-4b-it-abliterated-Q5_K_L.gguf) | Q5_K_L | 2.99GB | false | Uses Q8_0 for embed and output weights. High quality, *recommended*. |
| [gemma-3-4b-it-abliterated-Q5_K_M.gguf](https://huggingface.co/bartowski/mlabonne_gemma-3-4b-it-abliterated-GGUF/blob/main/mlabonne_gemma-3-4b-it-abliterated-Q5_K_M.gguf) | Q5_K_M | 2.83GB | false | High quality, *recommended*. |
| [gemma-3-4b-it-abliterated-Q5_K_S.gguf](https://huggingface.co/bartowski/mlabonne_gemma-3-4b-it-abliterated-GGUF/blob/main/mlabonne_gemma-3-4b-it-abliterated-Q5_K_S.gguf) | Q5_K_S | 2.76GB | false | High quality, *recommended*. |
| [gemma-3-4b-it-abliterated-Q4_K_L.gguf](https://huggingface.co/bartowski/mlabonne_gemma-3-4b-it-abliterated-GGUF/blob/main/mlabonne_gemma-3-4b-it-abliterated-Q4_K_L.gguf) | Q4_K_L | 2.65GB | false | Uses Q8_0 for embed and output weights. Good quality, *recommended*. |
| [gemma-3-4b-it-abliterated-Q4_1.gguf](https://huggingface.co/bartowski/mlabonne_gemma-3-4b-it-abliterated-GGUF/blob/main/mlabonne_gemma-3-4b-it-abliterated-Q4_1.gguf) | Q4_1 | 2.56GB | false | Legacy format, similar performance to Q4_K_S but with improved tokens/watt on Apple silicon. |
| [gemma-3-4b-it-abliterated-Q4_K_M.gguf](https://huggingface.co/bartowski/mlabonne_gemma-3-4b-it-abliterated-GGUF/blob/main/mlabonne_gemma-3-4b-it-abliterated-Q4_K_M.gguf) | Q4_K_M | 2.49GB | false | Good quality, default size for most use cases, *recommended*. |
| [gemma-3-4b-it-abliterated-Q3_K_XL.gguf](https://huggingface.co/bartowski/mlabonne_gemma-3-4b-it-abliterated-GGUF/blob/main/mlabonne_gemma-3-4b-it-abliterated-Q3_K_XL.gguf) | Q3_K_XL | 2.40GB | false | Uses Q8_0 for embed and output weights. Lower quality but usable, good for low RAM availability. |
| [gemma-3-4b-it-abliterated-Q4_K_S.gguf](https://huggingface.co/bartowski/mlabonne_gemma-3-4b-it-abliterated-GGUF/blob/main/mlabonne_gemma-3-4b-it-abliterated-Q4_K_S.gguf) | Q4_K_S | 2.38GB | false | Slightly lower quality with more space savings, *recommended*. |
| [gemma-3-4b-it-abliterated-Q4_0.gguf](https://huggingface.co/bartowski/mlabonne_gemma-3-4b-it-abliterated-GGUF/blob/main/mlabonne_gemma-3-4b-it-abliterated-Q4_0.gguf) | Q4_0 | 2.37GB | false | Legacy format, offers online repacking for ARM and AVX CPU inference. |
| [gemma-3-4b-it-abliterated-IQ4_NL.gguf](https://huggingface.co/bartowski/mlabonne_gemma-3-4b-it-abliterated-GGUF/blob/main/mlabonne_gemma-3-4b-it-abliterated-IQ4_NL.gguf) | IQ4_NL | 2.36GB | false | Similar to IQ4_XS, but slightly larger. Offers online repacking for ARM CPU inference. |
| [gemma-3-4b-it-abliterated-IQ4_XS.gguf](https://huggingface.co/bartowski/mlabonne_gemma-3-4b-it-abliterated-GGUF/blob/main/mlabonne_gemma-3-4b-it-abliterated-IQ4_XS.gguf) | IQ4_XS | 2.26GB | false | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
| [gemma-3-4b-it-abliterated-Q3_K_L.gguf](https://huggingface.co/bartowski/mlabonne_gemma-3-4b-it-abliterated-GGUF/blob/main/mlabonne_gemma-3-4b-it-abliterated-Q3_K_L.gguf) | Q3_K_L | 2.24GB | false | Lower quality but usable, good for low RAM availability. |
| [gemma-3-4b-it-abliterated-Q3_K_M.gguf](https://huggingface.co/bartowski/mlabonne_gemma-3-4b-it-abliterated-GGUF/blob/main/mlabonne_gemma-3-4b-it-abliterated-Q3_K_M.gguf) | Q3_K_M | 2.10GB | false | Low quality. |
| [gemma-3-4b-it-abliterated-IQ3_M.gguf](https://huggingface.co/bartowski/mlabonne_gemma-3-4b-it-abliterated-GGUF/blob/main/mlabonne_gemma-3-4b-it-abliterated-IQ3_M.gguf) | IQ3_M | 1.99GB | false | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| [gemma-3-4b-it-abliterated-Q3_K_S.gguf](https://huggingface.co/bartowski/mlabonne_gemma-3-4b-it-abliterated-GGUF/blob/main/mlabonne_gemma-3-4b-it-abliterated-Q3_K_S.gguf) | Q3_K_S | 1.94GB | false | Low quality, not recommended. |
| [gemma-3-4b-it-abliterated-Q2_K_L.gguf](https://huggingface.co/bartowski/mlabonne_gemma-3-4b-it-abliterated-GGUF/blob/main/mlabonne_gemma-3-4b-it-abliterated-Q2_K_L.gguf) | Q2_K_L | 1.89GB | false | Uses Q8_0 for embed and output weights. Very low quality but surprisingly usable. |
| [gemma-3-4b-it-abliterated-IQ3_XS.gguf](https://huggingface.co/bartowski/mlabonne_gemma-3-4b-it-abliterated-GGUF/blob/main/mlabonne_gemma-3-4b-it-abliterated-IQ3_XS.gguf) | IQ3_XS | 1.86GB | false | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| [gemma-3-4b-it-abliterated-Q2_K.gguf](https://huggingface.co/bartowski/mlabonne_gemma-3-4b-it-abliterated-GGUF/blob/main/mlabonne_gemma-3-4b-it-abliterated-Q2_K.gguf) | Q2_K | 1.73GB | false | Very low quality but surprisingly usable. |
| [gemma-3-4b-it-abliterated-IQ3_XXS.gguf](https://huggingface.co/bartowski/mlabonne_gemma-3-4b-it-abliterated-GGUF/blob/main/mlabonne_gemma-3-4b-it-abliterated-IQ3_XXS.gguf) | IQ3_XXS | 1.69GB | false | Lower quality, new method with decent performance, comparable to Q3 quants. |
| [gemma-3-4b-it-abliterated-IQ2_M.gguf](https://huggingface.co/bartowski/mlabonne_gemma-3-4b-it-abliterated-GGUF/blob/main/mlabonne_gemma-3-4b-it-abliterated-IQ2_M.gguf) | IQ2_M | 1.54GB | false | Relatively low quality, uses SOTA techniques to be surprisingly usable. |
## Embed/output weights
Some of these quants (Q3_K_XL, Q4_K_L, etc.) use the standard quantization method, but with the embedding and output weights quantized to Q8_0 instead of their usual default.
## Downloading using huggingface-cli
<details>
<summary>Click to view download instructions</summary>
First, make sure you have huggingface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Then, you can target the specific file you want:
```
huggingface-cli download bartowski/mlabonne_gemma-3-4b-it-abliterated-GGUF --include "mlabonne_gemma-3-4b-it-abliterated-Q4_K_M.gguf" --local-dir ./
```
If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run:
```
huggingface-cli download bartowski/mlabonne_gemma-3-4b-it-abliterated-GGUF --include "mlabonne_gemma-3-4b-it-abliterated-Q8_0/*" --local-dir ./
```
You can either specify a new local-dir (mlabonne_gemma-3-4b-it-abliterated-Q8_0) or download them all in place (./).
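If you prefer to script the download, the same file can be fetched with the huggingface_hub Python API; a minimal sketch, where the repo_id and filename mirror the CLI commands above:
```
# Sketch: programmatic single-file download via huggingface_hub.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="bartowski/mlabonne_gemma-3-4b-it-abliterated-GGUF",
    filename="mlabonne_gemma-3-4b-it-abliterated-Q4_K_M.gguf",
    local_dir=".",  # drop this to use the shared Hugging Face cache instead
)
print(path)
```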
</details>
## ARM/AVX information
Previously, you would download Q4_0_4_4/4_8/8_8, and these would have their weights interleaved in memory in order to improve performance on ARM and AVX machines by loading up more data in one pass.
Now, however, there is something called "online repacking" for weights, detailed in [this PR](https://github.com/ggerganov/llama.cpp/pull/9921). If you use Q4_0 and your hardware would benefit from repacking weights, it will do it automatically on the fly.
As of llama.cpp build [b4282](https://github.com/ggerganov/llama.cpp/releases/tag/b4282), you will not be able to run the Q4_0_X_X files and will instead need to use Q4_0.
Additionally, if you want to get slightly better quality, you can use IQ4_NL thanks to [this PR](https://github.com/ggerganov/llama.cpp/pull/10541), which will also repack the weights for ARM, though only the 4_4 for now. The loading time may be slower, but it will result in an overall speed increase.
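If you want to verify which of Q4_0 and IQ4_NL is actually faster on your hardware, a crude spot-check is easy to script. Below is a sketch using the third-party llama-cpp-python bindings; the file paths, thread count, and token counts are assumptions, and llama.cpp's own llama-bench tool will give more rigorous numbers:
```
# Rough CPU throughput comparison; indicative only, not a proper benchmark.
import time
from llama_cpp import Llama

def tokens_per_second(path, n_threads=8):
    llm = Llama(model_path=path, n_ctx=512, n_gpu_layers=0,
                n_threads=n_threads, verbose=False)
    t0 = time.time()
    out = llm("Write a short poem about autumn.", max_tokens=128)
    return out["usage"]["completion_tokens"] / (time.time() - t0)

for f in ("./mlabonne_gemma-3-4b-it-abliterated-Q4_0.gguf",
          "./mlabonne_gemma-3-4b-it-abliterated-IQ4_NL.gguf"):
    print(f, f"{tokens_per_second(f):.1f} t/s")
```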
<details>
<summary>Click to view Q4_0_X_X information (deprecated)</summary>
I'm keeping this section to show the potential theoretical uplift in performance from using the Q4_0 with online repacking.
<details>
<summary>Click to view benchmarks on an AVX2 system (EPYC7702)</summary>
| model | size | params | backend | threads | test | t/s | % (vs Q4_0) |
| ------------------------------ | ---------: | ---------: | ---------- | ------: | ------------: | -------------------: |-------------: |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | pp512 | 204.03 ± 1.03 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | pp1024 | 282.92 ± 0.19 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | pp2048 | 259.49 ± 0.44 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | tg128 | 39.12 ± 0.27 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | tg256 | 39.31 ± 0.69 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | tg512 | 40.52 ± 0.03 | 100% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | pp512 | 301.02 ± 1.74 | 147% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | pp1024 | 287.23 ± 0.20 | 101% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | pp2048 | 262.77 ± 1.81 | 101% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | tg128 | 18.80 ± 0.99 | 48% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | tg256 | 24.46 ± 3.04 | 83% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | tg512 | 36.32 ± 3.59 | 90% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | pp512 | 271.71 ± 3.53 | 133% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | pp1024 | 279.86 ± 45.63 | 100% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | pp2048 | 320.77 ± 5.00 | 124% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | tg128 | 43.51 ± 0.05 | 111% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | tg256 | 43.35 ± 0.09 | 110% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | tg512 | 42.60 ± 0.31 | 105% |
Q4_0_8_8 offers a nice bump to prompt processing and a small bump to text generation.
</details>
</details>
## Which file should I choose?
<details>
<summary>Click here for details</summary>
A great write up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)
The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.
If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.
If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total.
Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.
If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.
If you want to get more into the weeds, you can check out this extremely useful feature chart:
[llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)
But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.
These I-quants can also be used on CPU, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide.
The I-quants are *not* compatible with Vulkan, which also supports AMD, so if you have an AMD card, double check whether you're using the rocBLAS build or the Vulkan build. At the time of writing this, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm.
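To make the size arithmetic above concrete, here is an illustrative helper that encodes the "1-2GB smaller than your VRAM" rule of thumb; the sizes are copied from the table in this README, and the headroom value is an assumption you should tune:
```
# Illustrative quant picker; sizes (GB) come from the table above.
QUANT_SIZES_GB = {
    "Q8_0": 4.13, "Q6_K": 3.19, "Q5_K_M": 2.83, "Q4_K_M": 2.49,
    "IQ4_XS": 2.26, "Q3_K_M": 2.10, "IQ3_M": 1.99, "Q2_K": 1.73,
}

def pick_quant(vram_gb, headroom_gb=1.5):
    """Return the largest quant whose file fits in VRAM minus headroom."""
    budget = vram_gb - headroom_gb
    fitting = {q: s for q, s in QUANT_SIZES_GB.items() if s <= budget}
    if not fitting:
        raise ValueError("Nothing fits; consider partial offload to system RAM.")
    return max(fitting, key=fitting.get)

print(pick_quant(4.0))  # a 4GB card leaves ~2.5GB of budget, landing on Q4_K_M
```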
</details>
## Credits
Thank you kalomaze and Dampf for assistance in creating the imatrix calibration dataset.
Thank you ZeroWw for the inspiration to experiment with embed/output.
Thank you to LM Studio for sponsoring my work.
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski

mlabonne_gemma-3-4b-it-abliterated-IQ2_M.gguf (LFS)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:6132c3e1b266c5994c9341c50cd72f08ae37e15c6f897e542cc945f6c216ec5d
size 1537982624

mlabonne_gemma-3-4b-it-abliterated-IQ3_M.gguf (LFS)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:42f470af87aa025a4bbd777f3abaac8028e679c63e419c873c867915dd2f5e4c
size 1986803104

mlabonne_gemma-3-4b-it-abliterated-IQ3_XS.gguf (LFS)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8cdc699e801a0b3e8dfc9ce7c42331979c3e6c40414709f1cf09cef82c93e4fc
size 1863390624

mlabonne_gemma-3-4b-it-abliterated-IQ3_XXS.gguf (LFS)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:9f38ce4e581ed4f9c8a76cfbde258e69c84baead2f53d6f6c69b9cd99e1cf448
size 1689452704

mlabonne_gemma-3-4b-it-abliterated-IQ4_NL.gguf (LFS)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:202d27d6ac0c8f343c5d17135520fb26721fec04089bfb370398be9e95daccc1
size 2363512224

mlabonne_gemma-3-4b-it-abliterated-IQ4_XS.gguf (LFS)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e353d5bec21241025901f84bdc94eb596c11f061fea80c3dba84ac9147640e54
size 2263242144

mlabonne_gemma-3-4b-it-abliterated-Q2_K.gguf (LFS)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e5c90dc94c8a6c84cf0d359e730835bf579db649704f74c3f3e99a59b36ec404
size 1729164704

mlabonne_gemma-3-4b-it-abliterated-Q2_K_L.gguf (LFS)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:94d339ec9d4b62e232911952312c0a978dc3e02d80eb4f135943964d63b7a409
size 1891733664

mlabonne_gemma-3-4b-it-abliterated-Q3_K_L.gguf (LFS)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:53b667c1788ec7afad04e4a705a22be488f785cafa20d0664a2cd1ef8b09750d
size 2236085664

mlabonne_gemma-3-4b-it-abliterated-Q3_K_M.gguf (LFS)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:0c572eb298fab2d29039d631dcfe95b0adc02ae1543581c78c0dd997193692cd
size 2098460064

mlabonne_gemma-3-4b-it-abliterated-Q3_K_S.gguf (LFS)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:040d0d0c1c0fef58f93105b73dcfedf32737680b84b5bf72d4c03eba8fcab985
size 1937364384

mlabonne_gemma-3-4b-it-abliterated-Q3_K_XL.gguf (LFS)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7156696ab64fcd46625b510303a099b1be979ec4168de1f5f2918dfc7cc0d88e
size 2398654624

mlabonne_gemma-3-4b-it-abliterated-Q4_0.gguf (LFS)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ef36d8d18a5a6d25aba85ecf60b7be07a5cd80d884e8296e1c55a777bea806d2
size 2370065824

mlabonne_gemma-3-4b-it-abliterated-Q4_1.gguf (LFS)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f18e120eae51a08996b27f2b59c0c745efa4dfb51e895e057bc031fcac5322b2
size 2564052384

mlabonne_gemma-3-4b-it-abliterated-Q4_K_L.gguf (LFS)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:3925876a4db47481d7efe57a9c1ab2874d73c2b4ebf2395c0564e15177cd04c5
size 2652463264

mlabonne_gemma-3-4b-it-abliterated-Q4_K_M.gguf (LFS)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:1b18347ba3e998aa2fd4e21172369daa2f772aa0a228e3ed9136378346ccf3b7
size 2489894304

mlabonne_gemma-3-4b-it-abliterated-Q4_K_S.gguf (LFS)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:be0772ab610592b68f7c5364609d2ef1af726889e3cb9fc4d51c0ecf796bfe6c
size 2377930144

mlabonne_gemma-3-4b-it-abliterated-Q5_K_L.gguf (LFS)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:95a7ff4ee0bb6b8781b3855d403a98d0a8f8f96bf69288a6560f089db16c123c
size 2992267424

mlabonne_gemma-3-4b-it-abliterated-Q5_K_M.gguf (LFS)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:b987fc43a6a9392680c21b0795db82af45b2672500444e99a1e1fddbbda57d1a
size 2829698464

mlabonne_gemma-3-4b-it-abliterated-Q5_K_S.gguf (LFS)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:2378a12504fe05970bf620e83f89b31d47d9f9bd3e50448532b3486f2e82ae0d
size 2764592544

mlabonne_gemma-3-4b-it-abliterated-Q6_K.gguf (LFS)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:4fb65e3311713ed7c1fa1ad1b9bd78c93b93c1b8f2c9229142c6ac469fb489b0
size 3190740384

mlabonne_gemma-3-4b-it-abliterated-Q6_K_L.gguf (LFS)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:646d5b88b26582a2e4b807d943b700ac4603eee6760de7b961cc3688b63e631d
size 3353309344

mlabonne_gemma-3-4b-it-abliterated-Q8_0.gguf (LFS)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:0a9ba33c8a418e4d802b46acfe6bfc42daf8c0eec89cad3a97926648faa604aa
size 4130402464

mlabonne_gemma-3-4b-it-abliterated-bf16.gguf (LFS)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:61ebdbf4d2ce7002b7f11ceadd699cab78a74e358e8823073574670c323503c7
size 7767803776

mlabonne_gemma-3-4b-it-abliterated.imatrix (LFS)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:73de0ee962f52dff38215da86b09cec6cb318f0b4aeb8998935f428ba40786a2
size 3419868

mmproj-mlabonne_gemma-3-4b-it-abliterated-f16.gguf (LFS)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8c0fb064b019a6972856aaae2c7e4792858af3ca4561be2dbf649123ba6c40cb
size 851251104

mmproj-mlabonne_gemma-3-4b-it-abliterated-f32.gguf (LFS)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:804f41f3860612815bdd915ae382cd1966ec94b801dd051ecb4a96018cd97acf
size 1679290272