Initialize the project; model provided by the ModelHub XC community

Model: mradermacher/gpt2-demo-GGUF
Source: Original Platform
ModelHub XC
2026-04-11 16:45:59 +08:00
commit 3bb33d4cd8
14 changed files with 147 additions and 0 deletions

.gitattributes (new vendored file, 47 lines)

@@ -0,0 +1,47 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
gpt2-demo.IQ4_XS.gguf filter=lfs diff=lfs merge=lfs -text
gpt2-demo.Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
gpt2-demo.Q3_K_L.gguf filter=lfs diff=lfs merge=lfs -text
gpt2-demo.Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
gpt2-demo.Q3_K_S.gguf filter=lfs diff=lfs merge=lfs -text
gpt2-demo.Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
gpt2-demo.Q4_K_S.gguf filter=lfs diff=lfs merge=lfs -text
gpt2-demo.Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
gpt2-demo.Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
gpt2-demo.Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
gpt2-demo.Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
gpt2-demo.f16.gguf filter=lfs diff=lfs merge=lfs -text
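Each of these entries tells Git to route matching files through the LFS clean/smudge filters, and the twelve `gpt2-demo.*.gguf` lines pin the quantized model files themselves to LFS. As a rough illustration (not part of this repository), here is a Python sketch that reads such a `.gitattributes` and lists which local files would be stored as LFS objects; real Git attribute matching is more nuanced than `fnmatch`:

```python
# Sketch: list files in the working tree that match the LFS patterns
# declared in .gitattributes. Illustrative only; patterns such as
# "saved_model/**/*" need full path matching, which this skips.
from fnmatch import fnmatch
from pathlib import Path

def lfs_patterns(gitattributes_path=".gitattributes"):
    patterns = []
    for line in Path(gitattributes_path).read_text().splitlines():
        parts = line.split()
        if parts and "filter=lfs" in parts[1:]:
            patterns.append(parts[0])
    return patterns

def lfs_tracked_files(root="."):
    patterns = lfs_patterns()
    for path in Path(root).rglob("*"):
        if path.is_file() and any(fnmatch(path.name, p) for p in patterns):
            yield path

if __name__ == "__main__":
    for f in lfs_tracked_files():
        print(f)
```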

README.md (new file, 64 lines)

@@ -0,0 +1,64 @@
---
base_model: demo-leaderboard/gpt2-demo
language:
- en
library_name: transformers
license: other
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
Static quants of https://huggingface.co/demo-leaderboard/gpt2-demo
<!-- provided-files -->
Weighted/imatrix quants are available at https://huggingface.co/mradermacher/gpt2-demo-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
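As a concrete illustration that is not part of the original card, one simple way to try a single quant from this repo is to fetch it with `huggingface_hub` and load it with `llama-cpp-python`; the file name below is one of the quants listed in the table that follows, and the prompt and generation settings are arbitrary:

```python
# Sketch: download one quant from this repo and run a short completion.
# Assumes `pip install huggingface_hub llama-cpp-python`.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Q4_K_M is one of the "fast, recommended" quants in the table below.
model_path = hf_hub_download(
    repo_id="mradermacher/gpt2-demo-GGUF",
    filename="gpt2-demo.Q4_K_M.gguf",
)

llm = Llama(model_path=model_path, n_ctx=512)
out = llm("Once upon a time", max_tokens=32)
print(out["choices"][0]["text"])
```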
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/gpt2-demo-GGUF/resolve/main/gpt2-demo.Q2_K.gguf) | Q2_K | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/gpt2-demo-GGUF/resolve/main/gpt2-demo.Q3_K_S.gguf) | Q3_K_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/gpt2-demo-GGUF/resolve/main/gpt2-demo.Q3_K_M.gguf) | Q3_K_M | 0.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/gpt2-demo-GGUF/resolve/main/gpt2-demo.IQ4_XS.gguf) | IQ4_XS | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/gpt2-demo-GGUF/resolve/main/gpt2-demo.Q4_K_S.gguf) | Q4_K_S | 0.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/gpt2-demo-GGUF/resolve/main/gpt2-demo.Q3_K_L.gguf) | Q3_K_L | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/gpt2-demo-GGUF/resolve/main/gpt2-demo.Q4_K_M.gguf) | Q4_K_M | 0.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/gpt2-demo-GGUF/resolve/main/gpt2-demo.Q5_K_S.gguf) | Q5_K_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/gpt2-demo-GGUF/resolve/main/gpt2-demo.Q5_K_M.gguf) | Q5_K_M | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/gpt2-demo-GGUF/resolve/main/gpt2-demo.Q6_K.gguf) | Q6_K | 0.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/gpt2-demo-GGUF/resolve/main/gpt2-demo.Q8_0.gguf) | Q8_0 | 0.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/gpt2-demo-GGUF/resolve/main/gpt2-demo.f16.gguf) | f16 | 0.4 | 16 bpw, overkill |
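Since the table is sorted by size, a sketch like the following (again illustrative, assuming `huggingface_hub` is installed) can reproduce the size column or help pick the largest quant that fits a download budget:

```python
# Sketch: list the GGUF quants in this repo with their sizes, largest first.
from huggingface_hub import HfApi

info = HfApi().model_info("mradermacher/gpt2-demo-GGUF", files_metadata=True)
ggufs = [s for s in info.siblings if s.rfilename.endswith(".gguf")]
for s in sorted(ggufs, key=lambda s: s.size or 0, reverse=True):
    size_gb = (s.size or 0) / 1e9
    print(f"{s.rfilename:30s} {size_gb:.2f} GB")
```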
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)
And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for answers to common
questions, or to request quantization of another model.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and for upgrading my workstation, which enables me to do
this work in my free time.
<!-- end -->

gpt2-demo.IQ4_XS.gguf (new file, 3 lines)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:b5efe37b8a9db755a70baa34c5f6619a4defc167eaab7b877eadfc66f828f529
size 82558784
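The three lines above are not the model itself but a Git LFS pointer: the GGUF binary lives in LFS storage, and the pointer records only the spec version, the SHA-256 object ID, and the size in bytes. A small illustrative Python sketch (not part of the repository) that parses such a pointer:

```python
# Sketch: parse a Git LFS pointer file into its version, oid, and size fields.
def parse_lfs_pointer(text: str) -> dict:
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    fields["size"] = int(fields["size"])
    return fields

pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:b5efe37b8a9db755a70baa34c5f6619a4defc167eaab7b877eadfc66f828f529
size 82558784"""
print(parse_lfs_pointer(pointer))
```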

gpt2-demo.Q2_K.gguf (new file, 3 lines)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f9433c8a4a7c9c23829307a2c8ec082edb552a87d98f59b72fa7d35fa10fd628
size 68532032

gpt2-demo.Q3_K_L.gguf (new file, 3 lines)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c01f04c28627b62df5177cb16e0ead181101ee8c3d1088d21cd2491061668a57
size 85507904

gpt2-demo.Q3_K_M.gguf (new file, 3 lines)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8efa7ee6d9abaf5e5fac5aee88d52414c86c9cda2a81d56e1a96883b0bb1df51
size 81084224

gpt2-demo.Q3_K_S.gguf (new file, 3 lines)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:b02418bf56e43dcdd642080782933cf79dcabfc3543d91069a273b2830716af6
size 73563968

gpt2-demo.Q4_K_M.gguf (new file, 3 lines)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e522b84bf13966a02d0210459d2b5ca4db34f5f2d5f70e021181a2298e98e028
size 91148096

gpt2-demo.Q4_K_S.gguf (new file, 3 lines)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:3e283dafae6b4a311e3413df390f90e22193ba0ffff546ba184eeafef54a9263
size 85139264

gpt2-demo.Q5_K_M.gguf (new file, 3 lines)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d2ca5405de990160fbbf92433f4f2426947055a244da259d30c6c862d71ed7ac
size 100161344

gpt2-demo.Q5_K_S.gguf (new file, 3 lines)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:5023a0a009486e9ebf33ce8ceb2ddd8870c8e8e0611c581843f8f5626bdabb08
size 95461184

gpt2-demo.Q6_K.gguf (new file, 3 lines)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:586c281fa681cc0cf949f4de8f6583a866c9e7a387b5a7e34e48c629bc0d805f
size 106741568

gpt2-demo.Q8_0.gguf (new file, 3 lines)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:db77100cd0486d6e0dd170a6e0aa0ae19b276e274bf027021217041287645819
size 136659488

gpt2-demo.f16.gguf (new file, 3 lines)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:1c88755ae7f4d197265c69a2c21c259319bf8e754c8d2ae9e12870f91fb7a6f0
size 252470752