Initialize the project; model provided by the ModelHub XC community

Model: bartowski/XiaomiMiMo_MiMo-VL-7B-SFT-2508-GGUF
Source: Original Platform
Committed by: ModelHub XC
Date: 2026-05-07 16:58:25 +08:00
Commit: d63cab626f
29 changed files with 307 additions and 0 deletions

.gitattributes (vendored, new file, 62 lines)

@@ -0,0 +1,62 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
XiaomiMiMo_MiMo-VL-7B-SFT-2508-Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
XiaomiMiMo_MiMo-VL-7B-SFT-2508-Q6_K_L.gguf filter=lfs diff=lfs merge=lfs -text
XiaomiMiMo_MiMo-VL-7B-SFT-2508-Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
XiaomiMiMo_MiMo-VL-7B-SFT-2508-Q5_K_L.gguf filter=lfs diff=lfs merge=lfs -text
XiaomiMiMo_MiMo-VL-7B-SFT-2508-Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
XiaomiMiMo_MiMo-VL-7B-SFT-2508-Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
XiaomiMiMo_MiMo-VL-7B-SFT-2508-Q4_K_L.gguf filter=lfs diff=lfs merge=lfs -text
XiaomiMiMo_MiMo-VL-7B-SFT-2508-Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
XiaomiMiMo_MiMo-VL-7B-SFT-2508-Q4_K_S.gguf filter=lfs diff=lfs merge=lfs -text
XiaomiMiMo_MiMo-VL-7B-SFT-2508-Q4_1.gguf filter=lfs diff=lfs merge=lfs -text
XiaomiMiMo_MiMo-VL-7B-SFT-2508-Q4_0.gguf filter=lfs diff=lfs merge=lfs -text
XiaomiMiMo_MiMo-VL-7B-SFT-2508-IQ4_NL.gguf filter=lfs diff=lfs merge=lfs -text
XiaomiMiMo_MiMo-VL-7B-SFT-2508-IQ4_XS.gguf filter=lfs diff=lfs merge=lfs -text
XiaomiMiMo_MiMo-VL-7B-SFT-2508-Q3_K_XL.gguf filter=lfs diff=lfs merge=lfs -text
XiaomiMiMo_MiMo-VL-7B-SFT-2508-Q3_K_L.gguf filter=lfs diff=lfs merge=lfs -text
XiaomiMiMo_MiMo-VL-7B-SFT-2508-Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
XiaomiMiMo_MiMo-VL-7B-SFT-2508-IQ3_M.gguf filter=lfs diff=lfs merge=lfs -text
XiaomiMiMo_MiMo-VL-7B-SFT-2508-Q3_K_S.gguf filter=lfs diff=lfs merge=lfs -text
XiaomiMiMo_MiMo-VL-7B-SFT-2508-IQ3_XS.gguf filter=lfs diff=lfs merge=lfs -text
XiaomiMiMo_MiMo-VL-7B-SFT-2508-IQ3_XXS.gguf filter=lfs diff=lfs merge=lfs -text
XiaomiMiMo_MiMo-VL-7B-SFT-2508-Q2_K_L.gguf filter=lfs diff=lfs merge=lfs -text
XiaomiMiMo_MiMo-VL-7B-SFT-2508-Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
XiaomiMiMo_MiMo-VL-7B-SFT-2508-IQ2_M.gguf filter=lfs diff=lfs merge=lfs -text
XiaomiMiMo_MiMo-VL-7B-SFT-2508-bf16.gguf filter=lfs diff=lfs merge=lfs -text
XiaomiMiMo_MiMo-VL-7B-SFT-2508-imatrix.gguf filter=lfs diff=lfs merge=lfs -text
mmproj-XiaomiMiMo_MiMo-VL-7B-SFT-2508-f16.gguf filter=lfs diff=lfs merge=lfs -text
mmproj-XiaomiMiMo_MiMo-VL-7B-SFT-2508-bf16.gguf filter=lfs diff=lfs merge=lfs -text

README.md (new file, 164 lines)

@@ -0,0 +1,164 @@
---
quantized_by: bartowski
pipeline_tag: image-text-to-text
base_model_relation: quantized
base_model: XiaomiMiMo/MiMo-VL-7B-SFT-2508
---
## llama.cpp imatrix Quantizations of MiMo-VL-7B-SFT-2508 by XiaomiMiMo
Using <a href="https://github.com/ggml-org/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggml-org/llama.cpp/releases/tag/b6317">b6317</a> for quantization.
Original model: https://huggingface.co/XiaomiMiMo/MiMo-VL-7B-SFT-2508
All quants were made using the imatrix option with the dataset from [here](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8), combined with a subset of combined_all_small.parquet from Ed Addario [here](https://huggingface.co/datasets/eaddario/imatrix-calibration/blob/main/combined_all_small.parquet)
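For reference, the imatrix workflow with llama.cpp's tools looks roughly like the sketch below; the calibration file name is a placeholder, and this is not the exact invocation used for this repo:
```
# Sketch of an imatrix quantization, assuming placeholder file names.
# 1) Compute an importance matrix over a calibration dataset:
llama-imatrix -m XiaomiMiMo_MiMo-VL-7B-SFT-2508-bf16.gguf -f calibration_data.txt -o imatrix.gguf
# 2) Quantize, weighting tensors by that importance matrix:
llama-quantize --imatrix imatrix.gguf XiaomiMiMo_MiMo-VL-7B-SFT-2508-bf16.gguf XiaomiMiMo_MiMo-VL-7B-SFT-2508-Q4_K_M.gguf Q4_K_M
```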
Run them in [LM Studio](https://lmstudio.ai/)
Run them directly with [llama.cpp](https://github.com/ggml-org/llama.cpp), or any other llama.cpp-based project
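For example, a minimal vision run with llama.cpp's multimodal CLI might look like this (the mmproj file is included in this repo; the image path and prompt are placeholders):
```
# Load the quantized model plus the vision projector, then describe an image.
llama-mtmd-cli -m XiaomiMiMo_MiMo-VL-7B-SFT-2508-Q4_K_M.gguf \
  --mmproj mmproj-XiaomiMiMo_MiMo-VL-7B-SFT-2508-f16.gguf \
  --image photo.jpg -p "Describe this image."
```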
## Prompt format
No prompt format found; check the original model page
## Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Split | Description |
| -------- | ---------- | --------- | ----- | ----------- |
| [MiMo-VL-7B-SFT-2508-bf16.gguf](https://huggingface.co/bartowski/XiaomiMiMo_MiMo-VL-7B-SFT-2508-GGUF/blob/main/XiaomiMiMo_MiMo-VL-7B-SFT-2508-bf16.gguf) | bf16 | 15.25GB | false | Full BF16 weights. |
| [MiMo-VL-7B-SFT-2508-Q8_0.gguf](https://huggingface.co/bartowski/XiaomiMiMo_MiMo-VL-7B-SFT-2508-GGUF/blob/main/XiaomiMiMo_MiMo-VL-7B-SFT-2508-Q8_0.gguf) | Q8_0 | 8.11GB | false | Extremely high quality, generally unneeded but max available quant. |
| [MiMo-VL-7B-SFT-2508-Q6_K_L.gguf](https://huggingface.co/bartowski/XiaomiMiMo_MiMo-VL-7B-SFT-2508-GGUF/blob/main/XiaomiMiMo_MiMo-VL-7B-SFT-2508-Q6_K_L.gguf) | Q6_K_L | 6.56GB | false | Uses Q8_0 for embed and output weights. Very high quality, near perfect, *recommended*. |
| [MiMo-VL-7B-SFT-2508-Q6_K.gguf](https://huggingface.co/bartowski/XiaomiMiMo_MiMo-VL-7B-SFT-2508-GGUF/blob/main/XiaomiMiMo_MiMo-VL-7B-SFT-2508-Q6_K.gguf) | Q6_K | 6.26GB | false | Very high quality, near perfect, *recommended*. |
| [MiMo-VL-7B-SFT-2508-Q5_K_L.gguf](https://huggingface.co/bartowski/XiaomiMiMo_MiMo-VL-7B-SFT-2508-GGUF/blob/main/XiaomiMiMo_MiMo-VL-7B-SFT-2508-Q5_K_L.gguf) | Q5_K_L | 5.83GB | false | Uses Q8_0 for embed and output weights. High quality, *recommended*. |
| [MiMo-VL-7B-SFT-2508-Q5_K_M.gguf](https://huggingface.co/bartowski/XiaomiMiMo_MiMo-VL-7B-SFT-2508-GGUF/blob/main/XiaomiMiMo_MiMo-VL-7B-SFT-2508-Q5_K_M.gguf) | Q5_K_M | 5.45GB | false | High quality, *recommended*. |
| [MiMo-VL-7B-SFT-2508-Q5_K_S.gguf](https://huggingface.co/bartowski/XiaomiMiMo_MiMo-VL-7B-SFT-2508-GGUF/blob/main/XiaomiMiMo_MiMo-VL-7B-SFT-2508-Q5_K_S.gguf) | Q5_K_S | 5.33GB | false | High quality, *recommended*. |
| [MiMo-VL-7B-SFT-2508-Q4_K_L.gguf](https://huggingface.co/bartowski/XiaomiMiMo_MiMo-VL-7B-SFT-2508-GGUF/blob/main/XiaomiMiMo_MiMo-VL-7B-SFT-2508-Q4_K_L.gguf) | Q4_K_L | 5.15GB | false | Uses Q8_0 for embed and output weights. Good quality, *recommended*. |
| [MiMo-VL-7B-SFT-2508-Q4_1.gguf](https://huggingface.co/bartowski/XiaomiMiMo_MiMo-VL-7B-SFT-2508-GGUF/blob/main/XiaomiMiMo_MiMo-VL-7B-SFT-2508-Q4_1.gguf) | Q4_1 | 4.89GB | false | Legacy format, similar performance to Q4_K_S but with improved tokens/watt on Apple silicon. |
| [MiMo-VL-7B-SFT-2508-Q4_K_M.gguf](https://huggingface.co/bartowski/XiaomiMiMo_MiMo-VL-7B-SFT-2508-GGUF/blob/main/XiaomiMiMo_MiMo-VL-7B-SFT-2508-Q4_K_M.gguf) | Q4_K_M | 4.68GB | false | Good quality, default size for most use cases, *recommended*. |
| [MiMo-VL-7B-SFT-2508-Q3_K_XL.gguf](https://huggingface.co/bartowski/XiaomiMiMo_MiMo-VL-7B-SFT-2508-GGUF/blob/main/XiaomiMiMo_MiMo-VL-7B-SFT-2508-Q3_K_XL.gguf) | Q3_K_XL | 4.68GB | false | Uses Q8_0 for embed and output weights. Lower quality but usable, good for low RAM availability. |
| [MiMo-VL-7B-SFT-2508-Q4_K_S.gguf](https://huggingface.co/bartowski/XiaomiMiMo_MiMo-VL-7B-SFT-2508-GGUF/blob/main/XiaomiMiMo_MiMo-VL-7B-SFT-2508-Q4_K_S.gguf) | Q4_K_S | 4.48GB | false | Slightly lower quality with more space savings, *recommended*. |
| [MiMo-VL-7B-SFT-2508-Q4_0.gguf](https://huggingface.co/bartowski/XiaomiMiMo_MiMo-VL-7B-SFT-2508-GGUF/blob/main/XiaomiMiMo_MiMo-VL-7B-SFT-2508-Q4_0.gguf) | Q4_0 | 4.47GB | false | Legacy format, offers online repacking for ARM and AVX CPU inference. |
| [MiMo-VL-7B-SFT-2508-IQ4_NL.gguf](https://huggingface.co/bartowski/XiaomiMiMo_MiMo-VL-7B-SFT-2508-GGUF/blob/main/XiaomiMiMo_MiMo-VL-7B-SFT-2508-IQ4_NL.gguf) | IQ4_NL | 4.47GB | false | Similar to IQ4_XS, but slightly larger. Offers online repacking for ARM CPU inference. |
| [MiMo-VL-7B-SFT-2508-IQ4_XS.gguf](https://huggingface.co/bartowski/XiaomiMiMo_MiMo-VL-7B-SFT-2508-GGUF/blob/main/XiaomiMiMo_MiMo-VL-7B-SFT-2508-IQ4_XS.gguf) | IQ4_XS | 4.26GB | false | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
| [MiMo-VL-7B-SFT-2508-Q3_K_L.gguf](https://huggingface.co/bartowski/XiaomiMiMo_MiMo-VL-7B-SFT-2508-GGUF/blob/main/XiaomiMiMo_MiMo-VL-7B-SFT-2508-Q3_K_L.gguf) | Q3_K_L | 4.14GB | false | Lower quality but usable, good for low RAM availability. |
| [MiMo-VL-7B-SFT-2508-Q3_K_M.gguf](https://huggingface.co/bartowski/XiaomiMiMo_MiMo-VL-7B-SFT-2508-GGUF/blob/main/XiaomiMiMo_MiMo-VL-7B-SFT-2508-Q3_K_M.gguf) | Q3_K_M | 3.85GB | false | Low quality. |
| [MiMo-VL-7B-SFT-2508-Q2_K_L.gguf](https://huggingface.co/bartowski/XiaomiMiMo_MiMo-VL-7B-SFT-2508-GGUF/blob/main/XiaomiMiMo_MiMo-VL-7B-SFT-2508-Q2_K_L.gguf) | Q2_K_L | 3.68GB | false | Uses Q8_0 for embed and output weights. Very low quality but surprisingly usable. |
| [MiMo-VL-7B-SFT-2508-IQ3_M.gguf](https://huggingface.co/bartowski/XiaomiMiMo_MiMo-VL-7B-SFT-2508-GGUF/blob/main/XiaomiMiMo_MiMo-VL-7B-SFT-2508-IQ3_M.gguf) | IQ3_M | 3.65GB | false | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| [MiMo-VL-7B-SFT-2508-Q3_K_S.gguf](https://huggingface.co/bartowski/XiaomiMiMo_MiMo-VL-7B-SFT-2508-GGUF/blob/main/XiaomiMiMo_MiMo-VL-7B-SFT-2508-Q3_K_S.gguf) | Q3_K_S | 3.53GB | false | Low quality, not recommended. |
| [MiMo-VL-7B-SFT-2508-IQ3_XS.gguf](https://huggingface.co/bartowski/XiaomiMiMo_MiMo-VL-7B-SFT-2508-GGUF/blob/main/XiaomiMiMo_MiMo-VL-7B-SFT-2508-IQ3_XS.gguf) | IQ3_XS | 3.40GB | false | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| [MiMo-VL-7B-SFT-2508-IQ3_XXS.gguf](https://huggingface.co/bartowski/XiaomiMiMo_MiMo-VL-7B-SFT-2508-GGUF/blob/main/XiaomiMiMo_MiMo-VL-7B-SFT-2508-IQ3_XXS.gguf) | IQ3_XXS | 3.15GB | false | Lower quality, new method with decent performance, comparable to Q3 quants. |
| [MiMo-VL-7B-SFT-2508-Q2_K.gguf](https://huggingface.co/bartowski/XiaomiMiMo_MiMo-VL-7B-SFT-2508-GGUF/blob/main/XiaomiMiMo_MiMo-VL-7B-SFT-2508-Q2_K.gguf) | Q2_K | 3.08GB | false | Very low quality but surprisingly usable. |
| [MiMo-VL-7B-SFT-2508-IQ2_M.gguf](https://huggingface.co/bartowski/XiaomiMiMo_MiMo-VL-7B-SFT-2508-GGUF/blob/main/XiaomiMiMo_MiMo-VL-7B-SFT-2508-IQ2_M.gguf) | IQ2_M | 2.87GB | false | Relatively low quality, uses SOTA techniques to be surprisingly usable. |
## Embed/output weights
Some of these quants (Q3_K_XL, Q4_K_L, etc.) use the standard quantization method, but with the embeddings and output weights quantized to Q8_0 instead of their usual defaults.
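As a sketch, this kind of quant can be produced by overriding tensor types at quantization time with llama-quantize; the exact invocation used for this repo is an assumption:
```
# Assumed invocation: quantize to Q6_K overall, but keep the token embeddings
# and the output tensor at Q8_0 (Q6_K_L-style).
llama-quantize --imatrix imatrix.gguf \
  --token-embedding-type Q8_0 --output-tensor-type Q8_0 \
  XiaomiMiMo_MiMo-VL-7B-SFT-2508-bf16.gguf XiaomiMiMo_MiMo-VL-7B-SFT-2508-Q6_K_L.gguf Q6_K
```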
## Downloading using huggingface-cli
<details>
<summary>Click to view download instructions</summary>
First, make sure you have huggingface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Then, you can target the specific file you want:
```
huggingface-cli download bartowski/XiaomiMiMo_MiMo-VL-7B-SFT-2508-GGUF --include "XiaomiMiMo_MiMo-VL-7B-SFT-2508-Q4_K_M.gguf" --local-dir ./
```
If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run:
```
huggingface-cli download bartowski/XiaomiMiMo_MiMo-VL-7B-SFT-2508-GGUF --include "XiaomiMiMo_MiMo-VL-7B-SFT-2508-Q8_0/*" --local-dir ./
```
You can either specify a new local-dir (XiaomiMiMo_MiMo-VL-7B-SFT-2508-Q8_0) or download them all in place (./)
</details>
## ARM/AVX information
Previously, you would download Q4_0_4_4/4_8/8_8, and these would have their weights interleaved in memory in order to improve performance on ARM and AVX machines by loading up more data in one pass.
Now, however, there is something called "online repacking" for weights; details are in [this PR](https://github.com/ggml-org/llama.cpp/pull/9921). If you use Q4_0 and your hardware would benefit from repacking the weights, it will do it automatically on the fly.
As of llama.cpp build [b4282](https://github.com/ggml-org/llama.cpp/releases/tag/b4282) you will not be able to run the Q4_0_X_X files and will instead need to use Q4_0.
Additionally, if you want slightly better quality, you can use IQ4_NL thanks to [this PR](https://github.com/ggml-org/llama.cpp/pull/10541), which will also repack the weights for ARM, though only the 4_4 variant for now. Loading may be slower, but it will result in an overall speed increase.
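There is nothing to configure for this: a minimal CPU-only run with the repacking-eligible Q4_0 file might look like the sketch below (thread count and prompt are placeholders):
```
# CPU-only inference; online repacking is applied at load time if supported.
llama-cli -m XiaomiMiMo_MiMo-VL-7B-SFT-2508-Q4_0.gguf -t 8 -n 128 -p "Hello"
```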
<details>
<summary>Click to view Q4_0_X_X information (deprecated)</summary>
I'm keeping this section to show the potential theoretical uplift in performance from using the Q4_0 with online repacking.
<details>
<summary>Click to view benchmarks on an AVX2 system (EPYC7702)</summary>
| model | size | params | backend | threads | test | t/s | % (vs Q4_0) |
| ------------------------------ | ---------: | ---------: | ---------- | ------: | ------------: | -------------------: |-------------: |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | pp512 | 204.03 ± 1.03 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | pp1024 | 282.92 ± 0.19 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | pp2048 | 259.49 ± 0.44 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | tg128 | 39.12 ± 0.27 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | tg256 | 39.31 ± 0.69 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | tg512 | 40.52 ± 0.03 | 100% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | pp512 | 301.02 ± 1.74 | 147% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | pp1024 | 287.23 ± 0.20 | 101% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | pp2048 | 262.77 ± 1.81 | 101% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | tg128 | 18.80 ± 0.99 | 48% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | tg256 | 24.46 ± 3.04 | 83% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | tg512 | 36.32 ± 3.59 | 90% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | pp512 | 271.71 ± 3.53 | 133% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | pp1024 | 279.86 ± 45.63 | 100% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | pp2048 | 320.77 ± 5.00 | 124% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | tg128 | 43.51 ± 0.05 | 111% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | tg256 | 43.35 ± 0.09 | 110% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | tg512 | 42.60 ± 0.31 | 105% |
Q4_0_8_8 offers a nice bump to prompt processing and a small bump to text generation.
</details>
</details>
## Which file should I choose?
<details>
<summary>Click here for details</summary>
A great write-up with charts comparing the performance of various quants is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)
The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.
If you want your model running as FAST as possible, you'll want to fit the whole thing in your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.
If you want the absolute maximum quality, add your system RAM and your GPU's VRAM together, then grab a quant with a file size 1-2GB smaller than that total.
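As a quick sanity check before picking a file, you can print your GPU's total VRAM and compare it against the sizes in the table above; on an Nvidia system, for example:
```
# Print total VRAM; pick a quant whose file size is 1-2GB smaller than this.
nvidia-smi --query-gpu=memory.total --format=csv,noheader
```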
Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.
If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.
If you want to get more into the weeds, you can check out this extremely useful feature chart:
[llama.cpp feature matrix](https://github.com/ggml-org/llama.cpp/wiki/Feature-matrix)
But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.
These I-quants can also be used on CPU, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide.
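Whichever you pick, GPU offloading is explicit in llama.cpp; a sketch with llama-cli, where the layer count and prompt are placeholders (use as many layers as fit in your VRAM):
```
# Offload up to 99 layers to the GPU; lower this if you run out of VRAM.
llama-cli -m XiaomiMiMo_MiMo-VL-7B-SFT-2508-IQ3_M.gguf -ngl 99 -p "Hello"
```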
</details>
## Credits
Thank you kalomaze and Dampf for assistance in creating the imatrix calibration dataset.
Thank you ZeroWw for the inspiration to experiment with embed/output.
Thank you to LM Studio for sponsoring my work.
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski

XiaomiMiMo_MiMo-VL-7B-SFT-2508-IQ2_M.gguf (LFS pointer, new file)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:dcee424a0adb4254fc1739173f9b12c42371abe79a7fc7d02f87b67047fd1de7
size 2867920320

XiaomiMiMo_MiMo-VL-7B-SFT-2508-IQ3_M.gguf (LFS pointer, new file)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a14d90c0d8532cd675faad7cfba9ded55eea69ff0f7618a1811a1908a8d0abf3
size 3650063808

XiaomiMiMo_MiMo-VL-7B-SFT-2508-IQ3_XS.gguf (LFS pointer, new file)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:4ac2f9029b6650682ed6b15f123e52418c39693c33b8a5e88448fd0124e01d71
size 3396373952

XiaomiMiMo_MiMo-VL-7B-SFT-2508-IQ3_XXS.gguf (LFS pointer, new file)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ee916cd221128b3b5101d206f285007a7468a5225eba82f47364885e9b218954
size 3152543168

XiaomiMiMo_MiMo-VL-7B-SFT-2508-IQ4_NL.gguf (LFS pointer, new file)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:814632f6019e38779c1ec8eed97013650313e032aac5b565ac736830169af031
size 4474510784

XiaomiMiMo_MiMo-VL-7B-SFT-2508-IQ4_XS.gguf (LFS pointer, new file)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f91bfa0f6d12c63f0504392ffff134839d0060afca646dfbf13ee68e6dce6f6e
size 4260453824

XiaomiMiMo_MiMo-VL-7B-SFT-2508-Q2_K.gguf (LFS pointer, new file)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e1672ad62ae750fff40423992fddf441ce32abb119b4bb906e8538cf67197f76
size 3076406720

XiaomiMiMo_MiMo-VL-7B-SFT-2508-Q2_K_L.gguf (LFS pointer, new file)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:81bfedb8e5ac5cdb8ea829ce28b9c375a8f5ce63e472135972012cd303a61d89
size 3683126720

XiaomiMiMo_MiMo-VL-7B-SFT-2508-Q3_K_L.gguf (LFS pointer, new file)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ba737e0595087902b30fca2b1d9b8ce649f312600557023bac5c977d751e441a
size 4138962368

XiaomiMiMo_MiMo-VL-7B-SFT-2508-Q3_K_M.gguf (LFS pointer, new file)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:87e77978bef94afdafebcf9db17ae7dfc25a9065f65d282530c15fb269132936
size 3854011840

XiaomiMiMo_MiMo-VL-7B-SFT-2508-Q3_K_S.gguf (LFS pointer, new file)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:deef5cc6fd5b0a87a40324c0cfed10ed8c35805d49471d51c54fffc59de22a43
size 3525840320

XiaomiMiMo_MiMo-VL-7B-SFT-2508-Q3_K_XL.gguf (LFS pointer, new file)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:32b1a1e47e132948805299a8c73bd41e298512ac220412ab8a18c20a29f49066
size 4682583488

XiaomiMiMo_MiMo-VL-7B-SFT-2508-Q4_0.gguf (LFS pointer, new file)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:3d9c81be7093f14edd2f8fa20246691d0c048fe57d01e62d49f712b996cf8055
size 4466908608

XiaomiMiMo_MiMo-VL-7B-SFT-2508-Q4_1.gguf (LFS pointer, new file)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:aa58b2487914e35b5f63c74b8817a4fbb6dcbdbe14ba6f693ed1993474f7db00
size 4893187520

XiaomiMiMo_MiMo-VL-7B-SFT-2508-Q4_K_L.gguf (LFS pointer, new file)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:29dea6de719710080fde5de676a553409fde11771952817a307ed41baced357b
size 5145447872

XiaomiMiMo_MiMo-VL-7B-SFT-2508-Q4_K_M.gguf (LFS pointer, new file)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:4c8047ffa0ea97897661b0c300e41b35da64ce023fd3ea518b219fc11e9b2906
size 4684340672

XiaomiMiMo_MiMo-VL-7B-SFT-2508-Q4_K_S.gguf (LFS pointer, new file)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:01fb09366bb8c3b5e7f9a447a714c30b1ef432c9028d95b46225bfdf500e917e
size 4480277952

XiaomiMiMo_MiMo-VL-7B-SFT-2508-Q5_K_L.gguf (LFS pointer, new file)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:809ffbebe6f94afc3d258b570f71c3f57b05b88d80c73a9423865fffe634fffc
size 5832003008

XiaomiMiMo_MiMo-VL-7B-SFT-2508-Q5_K_M.gguf (LFS pointer, new file)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:31b9c22d7b1c5d315f229364cfe6e8c2ec66fef468e196539afe8035d2023378
size 5448555968

XiaomiMiMo_MiMo-VL-7B-SFT-2508-Q5_K_S.gguf (LFS pointer, new file)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:2f7c6e1daaf9c386808bb1980a74ec6c469eccac9d7d662746a6832da7742940
size 5330738624

XiaomiMiMo_MiMo-VL-7B-SFT-2508-Q6_K.gguf (LFS pointer, new file)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:90235db52916cbdfba2ae3894b365ce0ceaefb7ae1d158ab28137af912b7906e
size 6260534720

XiaomiMiMo_MiMo-VL-7B-SFT-2508-Q6_K_L.gguf (LFS pointer, new file)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:9bc450e2058cb2e1539a2a5d776de658f1c411daf6c42c8fe08181e7798843a5
size 6561467840

XiaomiMiMo_MiMo-VL-7B-SFT-2508-Q8_0.gguf (LFS pointer, new file)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:1fc786570b75444e96530bcd95e30da65852467b423fcd1f380ffefe5facc40c
size 8106511808

XiaomiMiMo_MiMo-VL-7B-SFT-2508-bf16.gguf (LFS pointer, new file)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:01c5fe29c17a7f055f6deee9bda327eba3d70df1290a3ecbbd732b98850ce899
size 15252229280

XiaomiMiMo_MiMo-VL-7B-SFT-2508-imatrix.gguf (LFS pointer, new file)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:faf1e22ca1cb5455926632c2ef04c7e3a1483124e444422841f2bfe4645da7c4
size 5162880

mmproj-XiaomiMiMo_MiMo-VL-7B-SFT-2508-bf16.gguf (LFS pointer, new file)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:b47672440b4df1a80620748409bedd65c3d7077ed8219f63371af9bca3df53db
size 1371274400

mmproj-XiaomiMiMo_MiMo-VL-7B-SFT-2508-f16.gguf (LFS pointer, new file)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:df246a4b88206b8c35ca829f1932479551a6be07c1ff1796fb8cace208c23da8
size 1368263840