Initialize the project; model provided by the ModelHub XC community

Model: bartowski/RekaAI_reka-flash-3-GGUF
Source: Original Platform
ModelHub XC committed on 2026-04-10 15:09:06 +08:00
commit 36e25d41ad
30 changed files with 322 additions and 0 deletions

63
.gitattributes vendored Normal file
View File

@@ -0,0 +1,63 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
RekaAI_reka-flash-3-Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
RekaAI_reka-flash-3-Q6_K_L.gguf filter=lfs diff=lfs merge=lfs -text
RekaAI_reka-flash-3-Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
RekaAI_reka-flash-3-Q5_K_L.gguf filter=lfs diff=lfs merge=lfs -text
RekaAI_reka-flash-3-Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
RekaAI_reka-flash-3-Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
RekaAI_reka-flash-3-Q4_K_L.gguf filter=lfs diff=lfs merge=lfs -text
RekaAI_reka-flash-3-Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
RekaAI_reka-flash-3-Q4_K_S.gguf filter=lfs diff=lfs merge=lfs -text
RekaAI_reka-flash-3-Q4_1.gguf filter=lfs diff=lfs merge=lfs -text
RekaAI_reka-flash-3-Q4_0.gguf filter=lfs diff=lfs merge=lfs -text
RekaAI_reka-flash-3-IQ4_NL.gguf filter=lfs diff=lfs merge=lfs -text
RekaAI_reka-flash-3-IQ4_XS.gguf filter=lfs diff=lfs merge=lfs -text
RekaAI_reka-flash-3-Q3_K_XL.gguf filter=lfs diff=lfs merge=lfs -text
RekaAI_reka-flash-3-Q3_K_L.gguf filter=lfs diff=lfs merge=lfs -text
RekaAI_reka-flash-3-Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
RekaAI_reka-flash-3-IQ3_M.gguf filter=lfs diff=lfs merge=lfs -text
RekaAI_reka-flash-3-Q3_K_S.gguf filter=lfs diff=lfs merge=lfs -text
RekaAI_reka-flash-3-IQ3_XS.gguf filter=lfs diff=lfs merge=lfs -text
RekaAI_reka-flash-3-IQ3_XXS.gguf filter=lfs diff=lfs merge=lfs -text
RekaAI_reka-flash-3-Q2_K_L.gguf filter=lfs diff=lfs merge=lfs -text
RekaAI_reka-flash-3-Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
RekaAI_reka-flash-3-IQ2_M.gguf filter=lfs diff=lfs merge=lfs -text
RekaAI_reka-flash-3-IQ2_S.gguf filter=lfs diff=lfs merge=lfs -text
RekaAI_reka-flash-3-bf16.gguf filter=lfs diff=lfs merge=lfs -text
RekaAI_reka-flash-3.imatrix filter=lfs diff=lfs merge=lfs -text
RekaAI_reka-flash-3-IQ2_XS.gguf filter=lfs diff=lfs merge=lfs -text
RekaAI_reka-flash-3-IQ2_XXS.gguf filter=lfs diff=lfs merge=lfs -text

175
README.md Normal file
View File

@@ -0,0 +1,175 @@
---
quantized_by: bartowski
pipeline_tag: text-generation
license: apache-2.0
base_model: RekaAI/reka-flash-3
---
## Llamacpp imatrix Quantizations of reka-flash-3 by RekaAI
Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b4867">b4867</a> for quantization.
Original model: https://huggingface.co/RekaAI/reka-flash-3
All quants made using imatrix option with dataset from [here](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8)
Run them in [LM Studio](https://lmstudio.ai/)
Run them directly with [llama.cpp](https://github.com/ggerganov/llama.cpp), or any other llama.cpp based project
## Prompt format
```
human: {system_prompt} {prompt} <sep> assistant:
```
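As a concrete illustration (a minimal sketch assuming a local llama.cpp build with `llama-cli` on your PATH and the Q4_K_M file in the current directory), a single-turn prompt in this format can be passed directly:
```
./llama-cli -m RekaAI_reka-flash-3-Q4_K_M.gguf \
  -p "human: You are a helpful assistant. What is the capital of France? <sep> assistant:" \
  -n 256
```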
## What's new:
Fix chat template
## Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Split | Description |
| -------- | ---------- | --------- | ----- | ----------- |
| [reka-flash-3-bf16.gguf](https://huggingface.co/bartowski/RekaAI_reka-flash-3-GGUF/blob/main/RekaAI_reka-flash-3-bf16.gguf) | bf16 | 41.82GB | false | Full BF16 weights. |
| [reka-flash-3-Q8_0.gguf](https://huggingface.co/bartowski/RekaAI_reka-flash-3-GGUF/blob/main/RekaAI_reka-flash-3-Q8_0.gguf) | Q8_0 | 22.22GB | false | Extremely high quality, generally unneeded but max available quant. |
| [reka-flash-3-Q6_K_L.gguf](https://huggingface.co/bartowski/RekaAI_reka-flash-3-GGUF/blob/main/RekaAI_reka-flash-3-Q6_K_L.gguf) | Q6_K_L | 18.74GB | false | Uses Q8_0 for embed and output weights. Very high quality, near perfect, *recommended*. |
| [reka-flash-3-Q6_K.gguf](https://huggingface.co/bartowski/RekaAI_reka-flash-3-GGUF/blob/main/RekaAI_reka-flash-3-Q6_K.gguf) | Q6_K | 18.44GB | false | Very high quality, near perfect, *recommended*. |
| [reka-flash-3-Q5_K_L.gguf](https://huggingface.co/bartowski/RekaAI_reka-flash-3-GGUF/blob/main/RekaAI_reka-flash-3-Q5_K_L.gguf) | Q5_K_L | 16.02GB | false | Uses Q8_0 for embed and output weights. High quality, *recommended*. |
| [reka-flash-3-Q5_K_M.gguf](https://huggingface.co/bartowski/RekaAI_reka-flash-3-GGUF/blob/main/RekaAI_reka-flash-3-Q5_K_M.gguf) | Q5_K_M | 15.64GB | false | High quality, *recommended*. |
| [reka-flash-3-Q5_K_S.gguf](https://huggingface.co/bartowski/RekaAI_reka-flash-3-GGUF/blob/main/RekaAI_reka-flash-3-Q5_K_S.gguf) | Q5_K_S | 14.79GB | false | High quality, *recommended*. |
| [reka-flash-3-Q4_K_L.gguf](https://huggingface.co/bartowski/RekaAI_reka-flash-3-GGUF/blob/main/RekaAI_reka-flash-3-Q4_K_L.gguf) | Q4_K_L | 14.07GB | false | Uses Q8_0 for embed and output weights. Good quality, *recommended*. |
| [reka-flash-3-Q4_K_M.gguf](https://huggingface.co/bartowski/RekaAI_reka-flash-3-GGUF/blob/main/RekaAI_reka-flash-3-Q4_K_M.gguf) | Q4_K_M | 13.61GB | false | Good quality, default size for most use cases, *recommended*. |
| [reka-flash-3-Q4_1.gguf](https://huggingface.co/bartowski/RekaAI_reka-flash-3-GGUF/blob/main/RekaAI_reka-flash-3-Q4_1.gguf) | Q4_1 | 13.19GB | false | Legacy format, similar performance to Q4_K_S but with improved tokens/watt on Apple silicon. |
| [reka-flash-3-Q4_K_S.gguf](https://huggingface.co/bartowski/RekaAI_reka-flash-3-GGUF/blob/main/RekaAI_reka-flash-3-Q4_K_S.gguf) | Q4_K_S | 12.63GB | false | Slightly lower quality with more space savings, *recommended*. |
| [reka-flash-3-Q4_0.gguf](https://huggingface.co/bartowski/RekaAI_reka-flash-3-GGUF/blob/main/RekaAI_reka-flash-3-Q4_0.gguf) | Q4_0 | 11.96GB | false | Legacy format, offers online repacking for ARM and AVX CPU inference. |
| [reka-flash-3-IQ4_NL.gguf](https://huggingface.co/bartowski/RekaAI_reka-flash-3-GGUF/blob/main/RekaAI_reka-flash-3-IQ4_NL.gguf) | IQ4_NL | 11.95GB | false | Similar to IQ4_XS, but slightly larger. Offers online repacking for ARM CPU inference. |
| [reka-flash-3-Q3_K_XL.gguf](https://huggingface.co/bartowski/RekaAI_reka-flash-3-GGUF/blob/main/RekaAI_reka-flash-3-Q3_K_XL.gguf) | Q3_K_XL | 11.95GB | false | Uses Q8_0 for embed and output weights. Lower quality but usable, good for low RAM availability. |
| [reka-flash-3-IQ4_XS.gguf](https://huggingface.co/bartowski/RekaAI_reka-flash-3-GGUF/blob/main/RekaAI_reka-flash-3-IQ4_XS.gguf) | IQ4_XS | 11.49GB | false | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
| [reka-flash-3-Q3_K_L.gguf](https://huggingface.co/bartowski/RekaAI_reka-flash-3-GGUF/blob/main/RekaAI_reka-flash-3-Q3_K_L.gguf) | Q3_K_L | 11.41GB | false | Lower quality but usable, good for low RAM availability. |
| [reka-flash-3-Q3_K_M.gguf](https://huggingface.co/bartowski/RekaAI_reka-flash-3-GGUF/blob/main/RekaAI_reka-flash-3-Q3_K_M.gguf) | Q3_K_M | 10.86GB | false | Low quality. |
| [reka-flash-3-IQ3_M.gguf](https://huggingface.co/bartowski/RekaAI_reka-flash-3-GGUF/blob/main/RekaAI_reka-flash-3-IQ3_M.gguf) | IQ3_M | 10.26GB | false | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| [reka-flash-3-Q3_K_S.gguf](https://huggingface.co/bartowski/RekaAI_reka-flash-3-GGUF/blob/main/RekaAI_reka-flash-3-Q3_K_S.gguf) | Q3_K_S | 9.93GB | false | Low quality, not recommended. |
| [reka-flash-3-IQ3_XS.gguf](https://huggingface.co/bartowski/RekaAI_reka-flash-3-GGUF/blob/main/RekaAI_reka-flash-3-IQ3_XS.gguf) | IQ3_XS | 9.50GB | false | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| [reka-flash-3-Q2_K_L.gguf](https://huggingface.co/bartowski/RekaAI_reka-flash-3-GGUF/blob/main/RekaAI_reka-flash-3-Q2_K_L.gguf) | Q2_K_L | 9.23GB | false | Uses Q8_0 for embed and output weights. Very low quality but surprisingly usable. |
| [reka-flash-3-IQ3_XXS.gguf](https://huggingface.co/bartowski/RekaAI_reka-flash-3-GGUF/blob/main/RekaAI_reka-flash-3-IQ3_XXS.gguf) | IQ3_XXS | 9.18GB | false | Lower quality, new method with decent performance, comparable to Q3 quants. |
| [reka-flash-3-Q2_K.gguf](https://huggingface.co/bartowski/RekaAI_reka-flash-3-GGUF/blob/main/RekaAI_reka-flash-3-Q2_K.gguf) | Q2_K | 8.63GB | false | Very low quality but surprisingly usable. |
| [reka-flash-3-IQ2_M.gguf](https://huggingface.co/bartowski/RekaAI_reka-flash-3-GGUF/blob/main/RekaAI_reka-flash-3-IQ2_M.gguf) | IQ2_M | 8.51GB | false | Relatively low quality, uses SOTA techniques to be surprisingly usable. |
| [reka-flash-3-IQ2_S.gguf](https://huggingface.co/bartowski/RekaAI_reka-flash-3-GGUF/blob/main/RekaAI_reka-flash-3-IQ2_S.gguf) | IQ2_S | 8.12GB | false | Low quality, uses SOTA techniques to be usable. |
| [reka-flash-3-IQ2_XS.gguf](https://huggingface.co/bartowski/RekaAI_reka-flash-3-GGUF/blob/main/RekaAI_reka-flash-3-IQ2_XS.gguf) | IQ2_XS | 7.83GB | false | Low quality, uses SOTA techniques to be usable. |
| [reka-flash-3-IQ2_XXS.gguf](https://huggingface.co/bartowski/RekaAI_reka-flash-3-GGUF/blob/main/RekaAI_reka-flash-3-IQ2_XXS.gguf) | IQ2_XXS | 7.39GB | false | Very low quality, uses SOTA techniques to be usable. |
## Embed/output weights
Some of these quants (Q3_K_XL, Q4_K_L, etc.) use the standard quantization method with the embedding and output weights quantized to Q8_0 instead of their usual defaults.
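For reference, here is a sketch of how such a variant can be produced with llama.cpp's `llama-quantize` tool, assuming a bf16 source GGUF and the imatrix file from this repo (the `--token-embedding-type` and `--output-tensor-type` options are available in recent llama.cpp builds):
```
# Quantize to Q4_K_M while forcing embedding and output tensors to Q8_0 (a "Q4_K_L"-style quant)
./llama-quantize --imatrix RekaAI_reka-flash-3.imatrix \
  --token-embedding-type q8_0 --output-tensor-type q8_0 \
  RekaAI_reka-flash-3-bf16.gguf RekaAI_reka-flash-3-Q4_K_L.gguf Q4_K_M
```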
## Downloading using huggingface-cli
<details>
<summary>Click to view download instructions</summary>
First, make sure you have huggingface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Then, you can target the specific file you want:
```
huggingface-cli download bartowski/RekaAI_reka-flash-3-GGUF --include "RekaAI_reka-flash-3-Q4_K_M.gguf" --local-dir ./
```
If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run:
```
huggingface-cli download bartowski/RekaAI_reka-flash-3-GGUF --include "RekaAI_reka-flash-3-Q8_0/*" --local-dir ./
```
You can either specify a new local-dir (RekaAI_reka-flash-3-Q8_0) or download them all in place (./)
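Because every weight file in this mirror is stored with Git LFS, the pointer files in this commit record each blob's sha256, so you can verify a finished download (assuming a Unix-like system with `sha256sum`). For example, for Q4_K_M, per its pointer below:
```
sha256sum RekaAI_reka-flash-3-Q4_K_M.gguf
# expected: 8a32d0c9edb872d812223e1f64695fe4fbc2fe143f07e81aa4527003c5e505e0
```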
</details>
## ARM/AVX information
Previously, you would download Q4_0_4_4/4_8/8_8, and these would have their weights interleaved in memory in order to improve performance on ARM and AVX machines by loading up more data in one pass.
Now, however, there is something called "online repacking" for weights, detailed in [this PR](https://github.com/ggerganov/llama.cpp/pull/9921). If you use Q4_0 and your hardware would benefit from repacking weights, it will do it automatically on the fly.
As of llama.cpp build [b4282](https://github.com/ggerganov/llama.cpp/releases/tag/b4282) you will not be able to run the Q4_0_X_X files and will instead need to use Q4_0.
Additionally, if you want to get slightly better quality for ARM, you can use IQ4_NL thanks to [this PR](https://github.com/ggerganov/llama.cpp/pull/10541), which will also repack the weights for ARM, though only the 4_4 for now. The loading time may be slower but it will result in an overall speed increase.
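In practice no extra step is needed. A sketch, assuming a recent llama.cpp build on an ARM or AVX2 machine: simply loading the Q4_0 file triggers the repack during model load.
```
# Q4_0 weights are repacked for the host CPU automatically at load time; no special flag is required.
./llama-cli -m RekaAI_reka-flash-3-Q4_0.gguf -t 8 \
  -p "human: Summarize Git LFS in one sentence. <sep> assistant:" -n 128
```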
<details>
<summary>Click to view Q4_0_X_X information (deprecated)</summary>
I'm keeping this section to show the potential theoretical uplift in performance from using the Q4_0 with online repacking.
<details>
<summary>Click to view benchmarks on an AVX2 system (EPYC7702)</summary>
| model | size | params | backend | threads | test | t/s | % (vs Q4_0) |
| ------------------------------ | ---------: | ---------: | ---------- | ------: | ------------: | -------------------: |-------------: |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | pp512 | 204.03 ± 1.03 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | pp1024 | 282.92 ± 0.19 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | pp2048 | 259.49 ± 0.44 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | tg128 | 39.12 ± 0.27 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | tg256 | 39.31 ± 0.69 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | tg512 | 40.52 ± 0.03 | 100% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | pp512 | 301.02 ± 1.74 | 147% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | pp1024 | 287.23 ± 0.20 | 101% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | pp2048 | 262.77 ± 1.81 | 101% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | tg128 | 18.80 ± 0.99 | 48% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | tg256 | 24.46 ± 3.04 | 83% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | tg512 | 36.32 ± 3.59 | 90% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | pp512 | 271.71 ± 3.53 | 133% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | pp1024 | 279.86 ± 45.63 | 100% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | pp2048 | 320.77 ± 5.00 | 124% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | tg128 | 43.51 ± 0.05 | 111% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | tg256 | 43.35 ± 0.09 | 110% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | tg512 | 42.60 ± 0.31 | 105% |
Q4_0_8_8 offers a nice bump to prompt processing and a small bump to text generation
</details>
</details>
## Which file should I choose?
<details>
<summary>Click here for details</summary>
A great write-up with charts showing the performance of various quants is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)
The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.
If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.
If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total.
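As a quick check (assuming an NVIDIA GPU with `nvidia-smi` available; on AMD, `rocm-smi` reports the same information), query total VRAM and pick the largest quant that leaves 1-2GB of headroom:
```
nvidia-smi --query-gpu=memory.total --format=csv,noheader
# e.g. "16384 MiB" is about 17.2GB, so Q5_K_M (15.64GB) leaves roughly 1.5GB of headroom on a 16GiB card.
```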
Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.
If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.
If you want to get more into the weeds, you can check out this extremely useful feature chart:
[llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)
But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.
These I-quants can also be used on CPU, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide.
The I-quants are *not* compatible with Vulkan, which also runs on AMD, so if you have an AMD card double check whether you're using the rocBLAS build or the Vulkan build. At the time of writing this, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm.
</details>
## Credits
Thank you kalomaze and Dampf for assistance in creating the imatrix calibration dataset.
Thank you ZeroWw for the inspiration to experiment with embed/output.
Thank you to LM Studio for sponsoring my work.
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski

3
RekaAI_reka-flash-3-IQ2_M.gguf Normal file
View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a680fd5897b4b7e56cdee854c1ed895ab10c0dc76466ee7fdc1e12b1ed1aa0f5
size 8514037504

3
RekaAI_reka-flash-3-IQ2_S.gguf Normal file
View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c1a72fbbb274475fafa8441c8b12822c5d1ed302296f03a09875f66569b5eeaa
size 8123672320

3
RekaAI_reka-flash-3-IQ2_XS.gguf Normal file
View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:cd5b685b62f0303255fe5a712b4d7c8a42de615f3587f7a451970daa0a172996
size 7827482368

3
RekaAI_reka-flash-3-IQ2_XXS.gguf Normal file
View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:b012f332ee6b78adf11618c7920d349223211638793d89b3f35d19a60b425deb
size 7385212672

3
RekaAI_reka-flash-3-IQ3_M.gguf Normal file
View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:210800c2c8ffe6ae01cfd8be2bb565502d26b578a5304e0fa61680b25d61b30a
size 10258245376

3
RekaAI_reka-flash-3-IQ3_XS.gguf Normal file
View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e8959abda19edbcbb899738826c3b5568290c6d4e4586101da82d24f63362c41
size 9501144832

3
RekaAI_reka-flash-3-IQ3_XXS.gguf Normal file
View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:72fdea8a25ccb27a2cb300814b959bcf2261f1cf175abc0c47c9557e2b587eba
size 9177982720

3
RekaAI_reka-flash-3-IQ4_NL.gguf Normal file
View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:21632450893ba7adbce8c23574b7ec6295dfd59d10fde7c0f8a7fc0b41ca7790
size 11949688576

3
RekaAI_reka-flash-3-IQ4_XS.gguf Normal file
View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a11cb3156e40b09897798ff390da304c5ad098fb383686d8b0fafcbca63f822f
size 11488151296

3
RekaAI_reka-flash-3-Q2_K.gguf Normal file
View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:50272dc5f2a48c3e3e0df716f7eca018cb0143e5d358e00b030eba6c0334a7b4
size 8630896384

3
RekaAI_reka-flash-3-Q2_K_L.gguf Normal file
View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c72fce2dee92256a1ee3dff58c44200661ea640df02b497c83e138606d9abb78
size 9233008384

3
RekaAI_reka-flash-3-Q3_K_L.gguf Normal file
View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d08dafc41eb2ab6ef57b90628725abdf0db4adb88973e840f06e54a5e061341c
size 11412285184

3
RekaAI_reka-flash-3-Q3_K_M.gguf Normal file
View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:26071a93d7e77646ae7df4becc8441965b04f5668f0275ec841f4056c4cbde98
size 10863011584

3
RekaAI_reka-flash-3-Q3_K_S.gguf Normal file
View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:2ef8e1c98e9619f2abe4c3a988f1675fcd3dec25c8fdf3c2e1f8ba6871bfb3d9
size 9934628608

3
RekaAI_reka-flash-3-Q3_K_XL.gguf Normal file
View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:9b4dc39b04ffe76a751190ef2217d5c2970029d0c4b77777bd3e91a73e392ca7
size 11951777536

3
RekaAI_reka-flash-3-Q4_0.gguf Normal file
View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8fa4f55937c4ea0c968e5c9cc3b4770f4f6e080651eb714c28eddc9d2bdec6ca
size 11961460480

3
RekaAI_reka-flash-3-Q4_1.gguf Normal file
View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:1a612c9c33f53caacb15ce59424aee33d6936a3a6e291db1977b5bad9d712eaf
size 13191759616

3
RekaAI_reka-flash-3-Q4_K_L.gguf Normal file
View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:95f7a5d2adc770aa1376d0dd09929437bdc87881d33e1b89b60d309d6888b7e0
size 14067967744

3
RekaAI_reka-flash-3-Q4_K_M.gguf Normal file
View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8a32d0c9edb872d812223e1f64695fe4fbc2fe143f07e81aa4527003c5e505e0
size 13610362624

3
RekaAI_reka-flash-3-Q4_K_S.gguf Normal file
View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c9d3a303368dc94c46e321019af7d7ff65a816370645157779da8543a9f08820
size 12627764992

3
RekaAI_reka-flash-3-Q5_K_L.gguf Normal file
View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:55106d7c7c53074ee6829ede64dcf795959a61801c1982456fcbdc41d61a96bd
size 16016008960

3
RekaAI_reka-flash-3-Q5_K_M.gguf Normal file
View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:63f06df500d7752ab557b2bdf4717ce69c8a22ded9583c247ffe4aaad15b8da4
size 15635474176

3
RekaAI_reka-flash-3-Q5_K_S.gguf Normal file
View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:0806dc45f87105eeb7a1e29ba9b408c9a4294c65a483eb308e5df825f5f56ce5
size 14791755520

3
RekaAI_reka-flash-3-Q6_K.gguf Normal file
View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:4920e3da48ec58bb4b8621030476b4b417ddf6183bd7184b26b1b4ba4d53282d
size 18440726272

3
RekaAI_reka-flash-3-Q6_K_L.gguf Normal file
View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:6e609608bf4ccc48577112e13d359bd1264be5a840469afd28c3200dd87ac9bd
size 18739373824

3
RekaAI_reka-flash-3-Q8_0.gguf Normal file
View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:9650a571bc3db4bb574272e4747d0bdb3d25b5fd39dc62561f79cb11da7d1aa2
size 22217246464

3
RekaAI_reka-flash-3-bf16.gguf Normal file
View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f83338664a2469b542c225b573b54a893b99e1ca4454e71f7f616b2ffb3568c4
size 41815623136

3
RekaAI_reka-flash-3.imatrix Normal file
View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:950839a1896d2ccf8605b0c51fedc21f93a79a2a8efa6c36f9353c87b8b72a5e
size 9956342