Initialize the project; model provided by the ModelHub XC community

Model: bartowski/open-r1_OpenR1-Qwen-7B-GGUF
Source: Original Platform
ModelHub XC
2026-04-21 21:04:33 +08:00
commit 272cdf3e1e
26 changed files with 309 additions and 0 deletions

59
.gitattributes vendored Normal file

@@ -0,0 +1,59 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
open-r1_OpenR1-Qwen-7B-Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
open-r1_OpenR1-Qwen-7B-Q6_K_L.gguf filter=lfs diff=lfs merge=lfs -text
open-r1_OpenR1-Qwen-7B-Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
open-r1_OpenR1-Qwen-7B-Q5_K_L.gguf filter=lfs diff=lfs merge=lfs -text
open-r1_OpenR1-Qwen-7B-Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
open-r1_OpenR1-Qwen-7B-Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
open-r1_OpenR1-Qwen-7B-Q4_K_L.gguf filter=lfs diff=lfs merge=lfs -text
open-r1_OpenR1-Qwen-7B-Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
open-r1_OpenR1-Qwen-7B-Q4_K_S.gguf filter=lfs diff=lfs merge=lfs -text
open-r1_OpenR1-Qwen-7B-Q4_1.gguf filter=lfs diff=lfs merge=lfs -text
open-r1_OpenR1-Qwen-7B-Q4_0.gguf filter=lfs diff=lfs merge=lfs -text
open-r1_OpenR1-Qwen-7B-IQ4_NL.gguf filter=lfs diff=lfs merge=lfs -text
open-r1_OpenR1-Qwen-7B-IQ4_XS.gguf filter=lfs diff=lfs merge=lfs -text
open-r1_OpenR1-Qwen-7B-Q3_K_XL.gguf filter=lfs diff=lfs merge=lfs -text
open-r1_OpenR1-Qwen-7B-Q3_K_L.gguf filter=lfs diff=lfs merge=lfs -text
open-r1_OpenR1-Qwen-7B-Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
open-r1_OpenR1-Qwen-7B-IQ3_M.gguf filter=lfs diff=lfs merge=lfs -text
open-r1_OpenR1-Qwen-7B-Q3_K_S.gguf filter=lfs diff=lfs merge=lfs -text
open-r1_OpenR1-Qwen-7B-IQ3_XS.gguf filter=lfs diff=lfs merge=lfs -text
open-r1_OpenR1-Qwen-7B-IQ3_XXS.gguf filter=lfs diff=lfs merge=lfs -text
open-r1_OpenR1-Qwen-7B-Q2_K_L.gguf filter=lfs diff=lfs merge=lfs -text
open-r1_OpenR1-Qwen-7B-Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
open-r1_OpenR1-Qwen-7B-IQ2_M.gguf filter=lfs diff=lfs merge=lfs -text
open-r1_OpenR1-Qwen-7B.imatrix filter=lfs diff=lfs merge=lfs -text

178
README.md Normal file

@@ -0,0 +1,178 @@
---
quantized_by: bartowski
pipeline_tag: text-generation
license: apache-2.0
base_model: open-r1/OpenR1-Qwen-7B
tags:
- generated_from_trainer
- trl
- sft
model_name: OpenR1-Qwen-7B
licence: license
datasets: open-r1/openr1-220k-math
---
## Llamacpp imatrix Quantizations of OpenR1-Qwen-7B by open-r1
Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b4688">b4688</a> for quantization.
Original model: https://huggingface.co/open-r1/OpenR1-Qwen-7B
All quants were made using the imatrix option with the dataset from [here](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8)
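For reference, an importance matrix like the `.imatrix` file shipped in this repo is typically produced with llama.cpp's llama-imatrix tool. A rough sketch, with hypothetical model and calibration file names:
```
# Sketch: compute an importance matrix over a calibration dataset
# (model and dataset paths are placeholders, not this repo's actual build inputs)
./llama-imatrix -m OpenR1-Qwen-7B-f16.gguf -f calibration_data.txt -o open-r1_OpenR1-Qwen-7B.imatrix
```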
Run them in [LM Studio](https://lmstudio.ai/)
Run them directly with [llama.cpp](https://github.com/ggerganov/llama.cpp), or any other llama.cpp based project
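For example, a minimal llama.cpp invocation might look like the following (a sketch; the model path assumes the file is in the current directory):
```
# Interactive chat with the Q4_K_M quant (-cnv uses the model's chat template; -p sets the system prompt)
./llama-cli -m open-r1_OpenR1-Qwen-7B-Q4_K_M.gguf -cnv -p "You are a helpful assistant."
# Or serve an OpenAI-compatible HTTP API instead, with a 4096-token context
./llama-server -m open-r1_OpenR1-Qwen-7B-Q4_K_M.gguf -c 4096
```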
## Prompt format
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
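For instance, a concrete prompt built from this template looks like:
```
<|im_start|>system
You are a helpful assistant.<|im_end|>
<|im_start|>user
Solve x^2 - 5x + 6 = 0.<|im_end|>
<|im_start|>assistant
```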
## Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Split | Description |
| -------- | ---------- | --------- | ----- | ----------- |
| [OpenR1-Qwen-7B-Q8_0.gguf](https://huggingface.co/bartowski/open-r1_OpenR1-Qwen-7B-GGUF/blob/main/open-r1_OpenR1-Qwen-7B-Q8_0.gguf) | Q8_0 | 8.10GB | false | Extremely high quality, generally unneeded but max available quant. |
| [OpenR1-Qwen-7B-Q6_K_L.gguf](https://huggingface.co/bartowski/open-r1_OpenR1-Qwen-7B-GGUF/blob/main/open-r1_OpenR1-Qwen-7B-Q6_K_L.gguf) | Q6_K_L | 6.52GB | false | Uses Q8_0 for embed and output weights. Very high quality, near perfect, *recommended*. |
| [OpenR1-Qwen-7B-Q6_K.gguf](https://huggingface.co/bartowski/open-r1_OpenR1-Qwen-7B-GGUF/blob/main/open-r1_OpenR1-Qwen-7B-Q6_K.gguf) | Q6_K | 6.25GB | false | Very high quality, near perfect, *recommended*. |
| [OpenR1-Qwen-7B-Q5_K_L.gguf](https://huggingface.co/bartowski/open-r1_OpenR1-Qwen-7B-GGUF/blob/main/open-r1_OpenR1-Qwen-7B-Q5_K_L.gguf) | Q5_K_L | 5.78GB | false | Uses Q8_0 for embed and output weights. High quality, *recommended*. |
| [OpenR1-Qwen-7B-Q5_K_M.gguf](https://huggingface.co/bartowski/open-r1_OpenR1-Qwen-7B-GGUF/blob/main/open-r1_OpenR1-Qwen-7B-Q5_K_M.gguf) | Q5_K_M | 5.44GB | false | High quality, *recommended*. |
| [OpenR1-Qwen-7B-Q5_K_S.gguf](https://huggingface.co/bartowski/open-r1_OpenR1-Qwen-7B-GGUF/blob/main/open-r1_OpenR1-Qwen-7B-Q5_K_S.gguf) | Q5_K_S | 5.32GB | false | High quality, *recommended*. |
| [OpenR1-Qwen-7B-Q4_K_L.gguf](https://huggingface.co/bartowski/open-r1_OpenR1-Qwen-7B-GGUF/blob/main/open-r1_OpenR1-Qwen-7B-Q4_K_L.gguf) | Q4_K_L | 5.09GB | false | Uses Q8_0 for embed and output weights. Good quality, *recommended*. |
| [OpenR1-Qwen-7B-Q4_1.gguf](https://huggingface.co/bartowski/open-r1_OpenR1-Qwen-7B-GGUF/blob/main/open-r1_OpenR1-Qwen-7B-Q4_1.gguf) | Q4_1 | 4.87GB | false | Legacy format, similar performance to Q4_K_S but with improved tokens/watt on Apple silicon. |
| [OpenR1-Qwen-7B-Q4_K_M.gguf](https://huggingface.co/bartowski/open-r1_OpenR1-Qwen-7B-GGUF/blob/main/open-r1_OpenR1-Qwen-7B-Q4_K_M.gguf) | Q4_K_M | 4.68GB | false | Good quality, default size for most use cases, *recommended*. |
| [OpenR1-Qwen-7B-Q3_K_XL.gguf](https://huggingface.co/bartowski/open-r1_OpenR1-Qwen-7B-GGUF/blob/main/open-r1_OpenR1-Qwen-7B-Q3_K_XL.gguf) | Q3_K_XL | 4.57GB | false | Uses Q8_0 for embed and output weights. Lower quality but usable, good for low RAM availability. |
| [OpenR1-Qwen-7B-Q4_K_S.gguf](https://huggingface.co/bartowski/open-r1_OpenR1-Qwen-7B-GGUF/blob/main/open-r1_OpenR1-Qwen-7B-Q4_K_S.gguf) | Q4_K_S | 4.46GB | false | Slightly lower quality with more space savings, *recommended*. |
| [OpenR1-Qwen-7B-Q4_0.gguf](https://huggingface.co/bartowski/open-r1_OpenR1-Qwen-7B-GGUF/blob/main/open-r1_OpenR1-Qwen-7B-Q4_0.gguf) | Q4_0 | 4.44GB | false | Legacy format, offers online repacking for ARM and AVX CPU inference. |
| [OpenR1-Qwen-7B-IQ4_NL.gguf](https://huggingface.co/bartowski/open-r1_OpenR1-Qwen-7B-GGUF/blob/main/open-r1_OpenR1-Qwen-7B-IQ4_NL.gguf) | IQ4_NL | 4.44GB | false | Similar to IQ4_XS, but slightly larger. Offers online repacking for ARM CPU inference. |
| [OpenR1-Qwen-7B-IQ4_XS.gguf](https://huggingface.co/bartowski/open-r1_OpenR1-Qwen-7B-GGUF/blob/main/open-r1_OpenR1-Qwen-7B-IQ4_XS.gguf) | IQ4_XS | 4.22GB | false | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
| [OpenR1-Qwen-7B-Q3_K_L.gguf](https://huggingface.co/bartowski/open-r1_OpenR1-Qwen-7B-GGUF/blob/main/open-r1_OpenR1-Qwen-7B-Q3_K_L.gguf) | Q3_K_L | 4.09GB | false | Lower quality but usable, good for low RAM availability. |
| [OpenR1-Qwen-7B-Q3_K_M.gguf](https://huggingface.co/bartowski/open-r1_OpenR1-Qwen-7B-GGUF/blob/main/open-r1_OpenR1-Qwen-7B-Q3_K_M.gguf) | Q3_K_M | 3.81GB | false | Low quality. |
| [OpenR1-Qwen-7B-IQ3_M.gguf](https://huggingface.co/bartowski/open-r1_OpenR1-Qwen-7B-GGUF/blob/main/open-r1_OpenR1-Qwen-7B-IQ3_M.gguf) | IQ3_M | 3.57GB | false | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| [OpenR1-Qwen-7B-Q2_K_L.gguf](https://huggingface.co/bartowski/open-r1_OpenR1-Qwen-7B-GGUF/blob/main/open-r1_OpenR1-Qwen-7B-Q2_K_L.gguf) | Q2_K_L | 3.55GB | false | Uses Q8_0 for embed and output weights. Very low quality but surprisingly usable. |
| [OpenR1-Qwen-7B-Q3_K_S.gguf](https://huggingface.co/bartowski/open-r1_OpenR1-Qwen-7B-GGUF/blob/main/open-r1_OpenR1-Qwen-7B-Q3_K_S.gguf) | Q3_K_S | 3.49GB | false | Low quality, not recommended. |
| [OpenR1-Qwen-7B-IQ3_XS.gguf](https://huggingface.co/bartowski/open-r1_OpenR1-Qwen-7B-GGUF/blob/main/open-r1_OpenR1-Qwen-7B-IQ3_XS.gguf) | IQ3_XS | 3.35GB | false | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| [OpenR1-Qwen-7B-IQ3_XXS.gguf](https://huggingface.co/bartowski/open-r1_OpenR1-Qwen-7B-GGUF/blob/main/open-r1_OpenR1-Qwen-7B-IQ3_XXS.gguf) | IQ3_XXS | 3.11GB | false | Lower quality, new method with decent performance, comparable to Q3 quants. |
| [OpenR1-Qwen-7B-Q2_K.gguf](https://huggingface.co/bartowski/open-r1_OpenR1-Qwen-7B-GGUF/blob/main/open-r1_OpenR1-Qwen-7B-Q2_K.gguf) | Q2_K | 3.02GB | false | Very low quality but surprisingly usable. |
| [OpenR1-Qwen-7B-IQ2_M.gguf](https://huggingface.co/bartowski/open-r1_OpenR1-Qwen-7B-GGUF/blob/main/open-r1_OpenR1-Qwen-7B-IQ2_M.gguf) | IQ2_M | 2.78GB | false | Relatively low quality, uses SOTA techniques to be surprisingly usable. |
## Embed/output weights
Some of these quants (Q3_K_XL, Q4_K_L, etc.) use the standard quantization method, but with the embedding and output weights quantized to Q8_0 instead of the type they would normally default to.
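As a hedged illustration of how such a variant could be produced, llama.cpp's llama-quantize tool exposes per-tensor type overrides; the file names below are hypothetical:
```
# Sketch: standard Q4_K_M quantization, but forcing the token-embedding and
# output tensors to Q8_0, in the spirit of the "_L" quants listed above
./llama-quantize --token-embedding-type q8_0 --output-tensor-type q8_0 \
    OpenR1-Qwen-7B-f16.gguf open-r1_OpenR1-Qwen-7B-Q4_K_L.gguf Q4_K_M
```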
## Downloading using huggingface-cli
<details>
<summary>Click to view download instructions</summary>
First, make sure you have huggingface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Then, you can target the specific file you want:
```
huggingface-cli download bartowski/open-r1_OpenR1-Qwen-7B-GGUF --include "open-r1_OpenR1-Qwen-7B-Q4_K_M.gguf" --local-dir ./
```
If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run:
```
huggingface-cli download bartowski/open-r1_OpenR1-Qwen-7B-GGUF --include "open-r1_OpenR1-Qwen-7B-Q8_0/*" --local-dir ./
```
You can either specify a new local-dir (open-r1_OpenR1-Qwen-7B-Q8_0) or download them all in place (./).
</details>
## ARM/AVX information
Previously, you would download Q4_0_4_4/4_8/8_8, and these would have their weights interleaved in memory in order to improve performance on ARM and AVX machines by loading up more data in one pass.
Now, however, there is something called "online repacking" for weights; details are in [this PR](https://github.com/ggerganov/llama.cpp/pull/9921). If you use Q4_0 and your hardware would benefit from repacking the weights, it will do so automatically on the fly.
As of llama.cpp build [b4282](https://github.com/ggerganov/llama.cpp/releases/tag/b4282), you will not be able to run the Q4_0_X_X files and will instead need to use Q4_0.
Additionally, if you want slightly better quality for ARM, you can use IQ4_NL thanks to [this PR](https://github.com/ggerganov/llama.cpp/pull/10541), which will also repack the weights for ARM, though only the 4_4 variant for now. The loading time may be slower, but it will result in an overall speed increase.
<details>
<summary>Click to view Q4_0_X_X information (deprecated)</summary>
I'm keeping this section to show the potential theoretical uplift in performance from using Q4_0 with online repacking.
<details>
<summary>Click to view benchmarks on an AVX2 system (EPYC7702)</summary>
| model | size | params | backend | threads | test | t/s | % (vs Q4_0) |
| ------------------------------ | ---------: | ---------: | ---------- | ------: | ------------: | -------------------: |-------------: |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | pp512 | 204.03 ± 1.03 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | pp1024 | 282.92 ± 0.19 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | pp2048 | 259.49 ± 0.44 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | tg128 | 39.12 ± 0.27 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | tg256 | 39.31 ± 0.69 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | tg512 | 40.52 ± 0.03 | 100% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | pp512 | 301.02 ± 1.74 | 147% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | pp1024 | 287.23 ± 0.20 | 101% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | pp2048 | 262.77 ± 1.81 | 101% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | tg128 | 18.80 ± 0.99 | 48% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | tg256 | 24.46 ± 3.04 | 62% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | tg512 | 36.32 ± 3.59 | 90% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | pp512 | 271.71 ± 3.53 | 133% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | pp1024 | 279.86 ± 45.63 | 99% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | pp2048 | 320.77 ± 5.00 | 124% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | tg128 | 43.51 ± 0.05 | 111% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | tg256 | 43.35 ± 0.09 | 110% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | tg512 | 42.60 ± 0.31 | 105% |
Q4_0_8_8 offers a nice bump to prompt processing and a small bump to text generation.
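These figures appear to come from llama.cpp's llama-bench tool; a sketch of an equivalent invocation (model path hypothetical) would be:
```
# Hypothetical llama-bench run matching the table above: prompt processing at
# 512/1024/2048 tokens and text generation at 128/256/512 tokens, on 64 threads
./llama-bench -m qwen2-3b-q4_0.gguf -p 512,1024,2048 -n 128,256,512 -t 64
```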
</details>
</details>
## Which file should I choose?
<details>
<summary>Click here for details</summary>
A great write-up with charts comparing the performance of various quants is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9).
The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.
If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.
If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total.
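As a concrete example under those rules of thumb: on a GPU with 8GB of VRAM, Q6_K at 6.25GB fits with roughly 1.75GB of headroom for context, while a 6GB card would point you toward something like Q4_K_S at 4.46GB.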
Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.
If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.
If you want to get more into the weeds, you can check out this extremely useful feature chart:
[llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)
But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.
These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide.
The I-quants are *not* compatible with Vulkan, which is also used for AMD, so if you have an AMD card, double-check whether you're using the rocBLAS build or the Vulkan build. At the time of writing this, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm.
</details>
## Credits
Thank you kalomaze and Dampf for assistance in creating the imatrix calibration dataset.
Thank you ZeroWw for the inspiration to experiment with embed/output.
Thank you to LM Studio for sponsoring my work.
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski

3
open-r1_OpenR1-Qwen-7B-IQ2_M.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:588b0e8ba0c3a7d2e158e1e5d7f2a88d79c7232228724eba90e1bd574ab3ee54
size 2780342784

3
open-r1_OpenR1-Qwen-7B-IQ3_M.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:0895a9315530737cb618def6734778af0b25f409b07b05fb654eaf6ba387111e
size 3574012416

3
open-r1_OpenR1-Qwen-7B-IQ3_XS.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:582e9d5592bf969d22d29f53a5bc699a0694fa2ee41fef02cb24085fb1d59df1
size 3346256384

3
open-r1_OpenR1-Qwen-7B-IQ3_XXS.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:36fb03ef56f94552a3b7d4c36280d2aa2c5592df4e3e2aac97e77af3977f8434
size 3114514944

3
open-r1_OpenR1-Qwen-7B-IQ4_NL.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:305d07a87137db97916131a8a38d3d41e23db10469b6a26800af4c47a724d5d4
size 4437813760

3
open-r1_OpenR1-Qwen-7B-IQ4_XS.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:5099c3a99942c8def6523ca7db7e82b2c695725c2cf181fab4fb390821874fe5
size 4218472960

3
open-r1_OpenR1-Qwen-7B-Q2_K.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:89bc9a20faa397bbe0a578ddb090e061ebb461abd16699bfda2f33f7404a9a82
size 3015940608

3
open-r1_OpenR1-Qwen-7B-Q2_K_L.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:718f4265a02dbeca7f2afc7e071971d0131a4643209a93101bae70ba771bb5c8
size 3548164608

3
open-r1_OpenR1-Qwen-7B-Q3_K_L.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:3f4e0df1cc7fdf618c1928b3f910895ef02ca18b7eb2466ec584f431f37574db
size 4088459776

3
open-r1_OpenR1-Qwen-7B-Q3_K_M.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:03a0944da9664b9f1fec75c0082b7d81e774ee341c48223638c697e5d16bda40
size 3808391680

3
open-r1_OpenR1-Qwen-7B-Q3_K_S.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:4d7dd07f0bab840187a541f1ea4e30700f2c3f14bf2314c2563fd53fe21b0149
size 3492368896

3
open-r1_OpenR1-Qwen-7B-Q3_K_XL.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f6f620fcf8a54391e6053285ebb561776778d30db48fe7f047d0f9601b132098
size 4565332480

3
open-r1_OpenR1-Qwen-7B-Q4_0.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:15127d3042e1801e765771c0001b702be10128b99709c9ea1315359eb595026f
size 4444121600

3
open-r1_OpenR1-Qwen-7B-Q4_1.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:03621b2a886ce4656cd6b00e024215ae0a0d8b171884f7b18f22da3c84313926
size 4873284096

3
open-r1_OpenR1-Qwen-7B-Q4_K_L.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d93d27a8c7ec503a32b12df7995c5e62eb68f48dcb6d67b43d1a577394c0f4bb
size 5087564288

3
open-r1_OpenR1-Qwen-7B-Q4_K_M.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d3bf99666cd19b637948ec9943044b591d3b906d0ee4f3ef1b3eb693ac8f66a6
size 4683074048

3
open-r1_OpenR1-Qwen-7B-Q4_K_S.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:0da9e418e68dd86394827ebae88abbd2555c5f7746e2883a5e83945d11426bc7
size 4457769472

3
open-r1_OpenR1-Qwen-7B-Q5_K_L.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:50cb3b20ab93f0c9d2fec80dab93282c140a1e711defc95c982915966c388265
size 5781197312

3
open-r1_OpenR1-Qwen-7B-Q5_K_M.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:371039b80d3155d2cb93453db9fe5e0e885cdf9c4511933892b42cd4b6716f67
size 5444831744

3
open-r1_OpenR1-Qwen-7B-Q5_K_S.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8e45e667d94e01c4e78c165aad1d87c2b0af3711bdc3f6222cc889aca8a7b81b
size 5315176960

3
open-r1_OpenR1-Qwen-7B-Q6_K.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:69bf2a2a477b0ab8f3e895a8ce86e4de1c660c23a2861ff0dc3b9acd092b1644
size 6254199296

3
open-r1_OpenR1-Qwen-7B-Q6_K_L.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:4f1f63bb54e9263c482558df4a89c9593ffcc2da84a5a327b78c4ff57ff1091a
size 6518182400

3
open-r1_OpenR1-Qwen-7B-Q8_0.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:97b37fbd3a3d9a64cb70c6167e14588b29f1e2a71d62a62ddc15caa62c012670
size 8098525696

3
open-r1_OpenR1-Qwen-7B.imatrix Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:964d8889403517fbefd387dcec28f342ae8d390e19227430e7df13417c1cf76a
size 4536678