Initialize project; model provided by the ModelHub XC community

Model: boxomcfoxo/YiffyEstopianMaid-13B-GGUF
Source: Original Platform
ModelHub XC
2026-05-06 00:40:50 +08:00
commit 4ef6393c55
16 changed files with 389 additions and 0 deletions

.gitattributes vendored Normal file

@@ -0,0 +1,47 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
yiffyestopianmaid-13b.Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
yiffyestopianmaid-13b.Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
yiffyestopianmaid-13b.Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
yiffyestopianmaid-13b.Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
yiffyestopianmaid-13b.Q4_K_S.gguf filter=lfs diff=lfs merge=lfs -text
yiffyestopianmaid-13b.Q5_0.gguf filter=lfs diff=lfs merge=lfs -text
yiffyestopianmaid-13b.Q4_0.gguf filter=lfs diff=lfs merge=lfs -text
yiffyestopianmaid-13b.Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
yiffyestopianmaid-13b.Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
yiffyestopianmaid-13b.Q3_K_L.gguf filter=lfs diff=lfs merge=lfs -text
yiffyestopianmaid-13b.Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
yiffyestopianmaid-13b.Q3_K_S.gguf filter=lfs diff=lfs merge=lfs -text

Notice Normal file

@@ -0,0 +1,11 @@
This model merge is derived from the following Huggingface repositories:
BlueNipples/TimeCrystal-l2-13B
cgato/Thespis-13b-DPO-v0.7
KoboldAI/LLaMA2-13B-Estopia
NeverSleep/Noromaid-13B-0.4-DPO
Doctor-Shotgun/cat-v1.0-13b
This merge should therefore be considered a derivative work of Llama 2, and thus inherits its license.
Llama 2 is licensed under the LLAMA 2 Community License, Copyright © Meta Platforms, Inc. All Rights Reserved.

README.md Normal file

@@ -0,0 +1,246 @@
---
base_model: KatyMergeTesting/YiffyEstopianMaid-13B
inference: false
language:
- en
tags:
- llama-cpp
- gguf-my-repo
- roleplay
- text-generation-inference
license: llama2
model_creator: Katy Vetteriano
model_name: YiffyEstopianMaid 13B
model_type: llama
prompt_template: 'Below is an instruction that describes a task. Write a response
that appropriately completes the request.
### Instruction:
{prompt}
### Response:
'
quantized_by: boxomcfoxo
---
# YiffyEstopianMaid 13B - GGUF
- Model creator: [Katy Vetteriano](https://huggingface.co/KatyTheCutie)
- Original model: [YiffyEstopianMaid 13B](https://huggingface.co/KatyMergeTesting/YiffyEstopianMaid-13B)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Katy Vetteriano's YiffyEstopianMaid 13B](https://huggingface.co/KatyMergeTesting/YiffyEstopianMaid-13B).
These files were quantized using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
<!-- description end -->
<!-- prompt-template start -->
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
<!-- prompt-template end -->
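In code, applying this template is a single string substitution. A minimal Python sketch (the `make_prompt` helper is illustrative, not part of any library):

```python
# Alpaca-style prompt template used by this model.
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{prompt}\n\n### Response:\n"
)

def make_prompt(instruction: str) -> str:
    # Substitute the user's instruction into the template.
    return ALPACA_TEMPLATE.format(prompt=instruction)

print(make_prompt("Write a haiku about llamas."))
```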
<!-- recommended-settings start -->
## Recommended settings
- Default preset if using SillyTavern
- Temperature: 0.7
- Min-P: 0.3
- Amount to generate: 256
- Top P: 1
- Repetition penalty: 1.10
<!-- recommended-settings end -->
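Outside SillyTavern, the settings above map roughly onto the sampler arguments of the `llama-cpp-python` bindings (covered in more detail below). This is a sketch, not a definitive mapping; it assumes a recent `llama-cpp-python` build that exposes `min_p`:

```python
from llama_cpp import Llama

llm = Llama(model_path="./yiffyestopianmaid-13b.Q4_K_M.gguf", n_ctx=4096)

prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nWrite a story about llamas.\n\n### Response:\n"
)
output = llm(
    prompt,
    temperature=0.7,     # Temperature: 0.7
    min_p=0.3,           # Min-P: 0.3
    top_p=1.0,           # Top P: 1
    repeat_penalty=1.1,  # Repetition penalty: 1.10
    max_tokens=256,      # Amount to generate: 256
)
```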
<!-- licensing start -->
## Licensing
As this model merge is based on Llama 2, it is subject to Meta's LLAMA 2 Community License terms. The appropriate license files are therefore included.
Models that were released under the Apache 2.0 license have also been used in the creation of this model merge.
Due to Apache 2.0's permissive relicensing terms, the merge inherits the LLAMA 2 Community License and is not dual licensed.
The Apache 2.0 license requires that attribution be included at the point of relicensing. This has been done by listing the models in the [Notice file](https://huggingface.co/boxomcfoxo/YiffyEstopianMaid-13B-GGUF/blob/main/Notice) alongside the LLAMA 2 Community License notice.
<!-- licensing end -->
<!-- quantization_methods start -->
## Explanation of quantization methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw).
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K, resulting in 5.5 bpw.
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw.
Refer to the Provided Files table below to see what files use which methods, and how.
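As a sanity check, the Q4_K figure can be reproduced by hand, assuming (as in the llama.cpp k-quants layout) one fp16 super-block scale and one fp16 super-block min on top of the per-block values:

```python
# Back-of-the-envelope bits-per-weight for GGML_TYPE_Q4_K, per the description above.
weights = 8 * 32              # 8 blocks of 32 weights = 256 weights per super-block
quant_bits = weights * 4      # 4-bit quantized values
scale_bits = 8 * (6 + 6)      # 6-bit scale + 6-bit min per block
super_bits = 2 * 16           # one fp16 scale + one fp16 min per super-block (assumed)
print((quant_bits + scale_bits + super_bits) / weights)  # -> 4.5 bpw
```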
</details>
<!-- quantization_methods end -->
<!-- provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [yiffyestopianmaid-13b.Q2_K.gguf](https://huggingface.co/boxomcfoxo/YiffyEstopianMaid-13B-GGUF/blob/main/yiffyestopianmaid-13b.Q2_K.gguf) | Q2_K | 2 | 4.85 GB| 7.35 GB | significant quality loss - not recommended for most purposes |
| [yiffyestopianmaid-13b.Q3_K_S.gguf](https://huggingface.co/boxomcfoxo/YiffyEstopianMaid-13B-GGUF/blob/main/yiffyestopianmaid-13b.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| 8.16 GB | very small, high quality loss |
| [yiffyestopianmaid-13b.Q3_K_M.gguf](https://huggingface.co/boxomcfoxo/YiffyEstopianMaid-13B-GGUF/blob/main/yiffyestopianmaid-13b.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| 8.84 GB | very small, high quality loss |
| [yiffyestopianmaid-13b.Q3_K_L.gguf](https://huggingface.co/boxomcfoxo/YiffyEstopianMaid-13B-GGUF/blob/main/yiffyestopianmaid-13b.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| 9.43 GB | small, substantial quality loss |
| [yiffyestopianmaid-13b.Q4_0.gguf](https://huggingface.co/boxomcfoxo/YiffyEstopianMaid-13B-GGUF/blob/main/yiffyestopianmaid-13b.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [yiffyestopianmaid-13b.Q4_K_S.gguf](https://huggingface.co/boxomcfoxo/YiffyEstopianMaid-13B-GGUF/blob/main/yiffyestopianmaid-13b.Q4_K_S.gguf) | Q4_K_S | 4 | 7.42 GB| 9.92 GB | small, greater quality loss |
| [yiffyestopianmaid-13b.Q4_K_M.gguf](https://huggingface.co/boxomcfoxo/YiffyEstopianMaid-13B-GGUF/blob/main/yiffyestopianmaid-13b.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| 10.37 GB | medium, balanced quality - recommended |
| [yiffyestopianmaid-13b.Q5_0.gguf](https://huggingface.co/boxomcfoxo/YiffyEstopianMaid-13B-GGUF/blob/main/yiffyestopianmaid-13b.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [yiffyestopianmaid-13b.Q5_K_S.gguf](https://huggingface.co/boxomcfoxo/YiffyEstopianMaid-13B-GGUF/blob/main/yiffyestopianmaid-13b.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| 11.47 GB | large, low quality loss - recommended |
| [yiffyestopianmaid-13b.Q5_K_M.gguf](https://huggingface.co/boxomcfoxo/YiffyEstopianMaid-13B-GGUF/blob/main/yiffyestopianmaid-13b.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| 11.73 GB | large, very low quality loss - recommended |
| [yiffyestopianmaid-13b.Q6_K.gguf](https://huggingface.co/boxomcfoxo/YiffyEstopianMaid-13B-GGUF/blob/main/yiffyestopianmaid-13b.Q6_K.gguf) | Q6_K | 6 | 10.68 GB| 13.18 GB | very large, extremely low quality loss |
| [yiffyestopianmaid-13b.Q8_0.gguf](https://huggingface.co/boxomcfoxo/YiffyEstopianMaid-13B-GGUF/blob/main/yiffyestopianmaid-13b.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB| 16.33 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- provided-files end -->
<!-- how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantization formats are provided, and most users only want to pick and download a single file.
### In `text-generation-webui`
Under Download Model, you can enter the model repo: boxomcfoxo/YiffyEstopianMaid-13B-GGUF and below it, a specific filename to download, such as: yiffyestopianmaid-13b.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download boxomcfoxo/YiffyEstopianMaid-13B-GGUF yiffyestopianmaid-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
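The same download can be driven from Python with `huggingface_hub`, the library behind `huggingface-cli` (a minimal sketch):

```python
from huggingface_hub import hf_hub_download

# Download a single quant file into the current directory.
path = hf_hub_download(
    repo_id="boxomcfoxo/YiffyEstopianMaid-13B-GGUF",
    filename="yiffyestopianmaid-13b.Q4_K_M.gguf",
    local_dir=".",
)
print(path)
```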
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download boxomcfoxo/YiffyEstopianMaid-13B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download boxomcfoxo/YiffyEstopianMaid-13B-GGUF yiffyestopianmaid-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- how-to-download end -->
<!-- how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m yiffyestopianmaid-13b.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`.
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md).
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; e.g. for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
    model_path="./yiffyestopianmaid-13b.Q4_K_M.gguf",  # Download the model file first
    n_ctx=4096,       # The max sequence length to use - note that longer sequence lengths require much more resources
    n_threads=8,      # The number of CPU threads to use, tailor to your system and the resulting performance
    n_gpu_layers=35,  # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
    "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:",  # Prompt
    max_tokens=512,  # Generate up to 512 tokens
    stop=["</s>"],   # Example stop token - not necessarily correct for this specific model! Please check before using.
    echo=True,       # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./yiffyestopianmaid-13b.Q4_K_M.gguf", chat_format="llama-2")  # Set chat_format according to the model you are using
llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a story writing assistant."},
        {"role": "user", "content": "Write a story about llamas."},
    ]
)
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain; a minimal sketch follows the links:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
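For a concrete starting point, here is a minimal llama-cpp-python + LangChain sketch. It assumes the `langchain-community` package; LangChain's import paths change between releases, so defer to the guides above if this does not match your version:

```python
from langchain_community.llms import LlamaCpp

llm = LlamaCpp(
    model_path="./yiffyestopianmaid-13b.Q4_K_M.gguf",
    n_ctx=4096,       # context length
    n_gpu_layers=35,  # set to 0 if you have no GPU acceleration
    temperature=0.7,
)
print(llm.invoke("Write a story about llamas."))
```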
<!-- how-to-run end -->

USE_POLICY.md Normal file

@@ -0,0 +1,49 @@
# Llama 2 Acceptable Use Policy
Meta is committed to promoting safe and fair use of its tools and features, including Llama 2. If you access or use Llama 2, you agree to this Acceptable Use Policy (“Policy”). The most recent copy of this policy can be found at [ai.meta.com/llama/use-policy](http://ai.meta.com/llama/use-policy).
## Prohibited Uses
We want everyone to use Llama 2 safely and responsibly. You agree you will not use, or allow others to use, Llama 2 to:
1. Violate the law or others' rights, including to:
1. Engage in, promote, generate, contribute to, encourage, plan, incite, or further illegal or unlawful activity or content, such as:
1. Violence or terrorism
2. Exploitation or harm to children, including the solicitation, creation, acquisition, or dissemination of child exploitative content or failure to report Child Sexual Abuse Material
3. Human trafficking, exploitation, and sexual violence
4. The illegal distribution of information or materials to minors, including obscene materials, or failure to employ legally required age-gating in connection with such information or materials.
5. Sexual solicitation
6. Any other criminal activity
2. Engage in, promote, incite, or facilitate the harassment, abuse, threatening, or bullying of individuals or groups of individuals
3. Engage in, promote, incite, or facilitate discrimination or other unlawful or harmful conduct in the provision of employment, employment benefits, credit, housing, other economic benefits, or other essential goods and services
4. Engage in the unauthorized or unlicensed practice of any profession including, but not limited to, financial, legal, medical/health, or related professional practices
5. Collect, process, disclose, generate, or infer health, demographic, or other sensitive personal or private information about individuals without rights and consents required by applicable laws
6. Engage in or facilitate any action or generate any content that infringes, misappropriates, or otherwise violates any third-party rights, including the outputs or results of any products or services using the Llama 2 Materials
7. Create, generate, or facilitate the creation of malicious code, malware, computer viruses or do anything else that could disable, overburden, interfere with or impair the proper working, integrity, operation or appearance of a website or computer system
2. Engage in, promote, incite, facilitate, or assist in the planning or development of activities that present a risk of death or bodily harm to individuals, including use of Llama 2 related to the following:
1. Military, warfare, nuclear industries or applications, espionage, use for materials or activities that are subject to the International Traffic in Arms Regulations (ITAR) maintained by the United States Department of State
2. Guns and illegal weapons (including weapon development)
3. Illegal drugs and regulated/controlled substances
4. Operation of critical infrastructure, transportation technologies, or heavy machinery
5. Self-harm or harm to others, including suicide, cutting, and eating disorders
6. Any content intended to incite or promote violence, abuse, or any infliction of bodily harm to an individual
3. Intentionally deceive or mislead others, including use of Llama 2 related to the following:
1. Generating, promoting, or furthering fraud or the creation or promotion of disinformation
2. Generating, promoting, or furthering defamatory content, including the creation of defamatory statements, images, or other content
3. Generating, promoting, or further distributing spam
4. Impersonating another individual without consent, authorization, or legal right
5. Representing that the use of Llama 2 or outputs are human-generated
6. Generating or facilitating false online engagement, including fake reviews and other means of fake online engagement
4. Fail to appropriately disclose to end users any known dangers of your AI system
Please report any violation of this Policy, software “bug,” or other problems that could lead to a violation of this Policy through one of the following means:
* Reporting issues with the model: [github.com/facebookresearch/llama](http://github.com/facebookresearch/llama)
* Reporting risky content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)
* Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)
* Reporting violations of the Acceptable Use Policy or unlicensed uses of Llama: [LlamaUseReport@meta.com](mailto:LlamaUseReport@meta.com)

yiffyestopianmaid-13b.Q2_K.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:9fc393dffffc1f8fc5a43a89bc51510750141238d20b3f13d2b97e7eb6c3fdd3
size 4854270048

yiffyestopianmaid-13b.Q3_K_L.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f64db122b93daf2cefdfdd8bd73764d31a5a45d3a827739eeb87946409d37c79
size 6929559648

yiffyestopianmaid-13b.Q3_K_M.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:536fd937fadf7af1af12de327d41ae74547ee4516479eff90f4a7e43a0a0dd3e
size 6337769568

yiffyestopianmaid-13b.Q3_K_S.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:1bfb9d7e4774d0b17050e9e2f33b223a5449689ab51218e6a72e63d463de84d2
size 5658980448

yiffyestopianmaid-13b.Q4_0.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:975eaac7884edcb2c0f2e61be1e6551f79026e10eec0f356a6a7cc7b243d9d42
size 7365834848

yiffyestopianmaid-13b.Q4_K_M.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f943811ea98a54b070e10fc1126f1cf43405a465d39a85b7e42014fa0ca84e64
size 7865956448

yiffyestopianmaid-13b.Q4_K_S.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:b0d2ff8f4b3bc4cc55acbfdc37da761c930649c627fb6bc8a92dc8d2dd3bd4a5
size 7423178848

yiffyestopianmaid-13b.Q5_0.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f4b9b2ac583eecd4de2c05b364abca6e1f2df82cc690ae63a2afce4fc603b20c
size 8972286048

yiffyestopianmaid-13b.Q5_K_M.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:0dd9df6a8db768ab87a9bd4325d70d11ce9c3e832bb4c2beb642fd5c3d080848
size 9229924448

yiffyestopianmaid-13b.Q5_K_S.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7491d88e469d6ae0b10b0e28ad727ed1c764dc3e586972ef5c28e0167522259f
size 8972286048

yiffyestopianmaid-13b.Q6_K.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:346944a9e2e312788af51a8c3d68dfcae9c388fe8265e1341446586f6c350307
size 10679140448

yiffyestopianmaid-13b.Q8_0.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ff6f1dc9bdac3ec4c1af6de225c741ef22836e5c91fd8ca47429c317388cad40
size 13831319648