commit 06d33934777fd85d1349061eaf242246e6aaa936
Author: ModelHub XC
Date:   Sat May 9 13:32:31 2026 +0800

    Initialize the project; model provided by the ModelHub XC community
    Model: mradermacher/gpt2-story-gen-GGUF
    Source: Original Platform

diff --git a/.gitattributes b/.gitattributes
new file mode 100644
index 0000000..9c36969
--- /dev/null
+++ b/.gitattributes
@@ -0,0 +1,47 @@
+*.7z filter=lfs diff=lfs merge=lfs -text
+*.arrow filter=lfs diff=lfs merge=lfs -text
+*.bin filter=lfs diff=lfs merge=lfs -text
+*.bz2 filter=lfs diff=lfs merge=lfs -text
+*.ckpt filter=lfs diff=lfs merge=lfs -text
+*.ftz filter=lfs diff=lfs merge=lfs -text
+*.gz filter=lfs diff=lfs merge=lfs -text
+*.h5 filter=lfs diff=lfs merge=lfs -text
+*.joblib filter=lfs diff=lfs merge=lfs -text
+*.lfs.* filter=lfs diff=lfs merge=lfs -text
+*.mlmodel filter=lfs diff=lfs merge=lfs -text
+*.model filter=lfs diff=lfs merge=lfs -text
+*.msgpack filter=lfs diff=lfs merge=lfs -text
+*.npy filter=lfs diff=lfs merge=lfs -text
+*.npz filter=lfs diff=lfs merge=lfs -text
+*.onnx filter=lfs diff=lfs merge=lfs -text
+*.ot filter=lfs diff=lfs merge=lfs -text
+*.parquet filter=lfs diff=lfs merge=lfs -text
+*.pb filter=lfs diff=lfs merge=lfs -text
+*.pickle filter=lfs diff=lfs merge=lfs -text
+*.pkl filter=lfs diff=lfs merge=lfs -text
+*.pt filter=lfs diff=lfs merge=lfs -text
+*.pth filter=lfs diff=lfs merge=lfs -text
+*.rar filter=lfs diff=lfs merge=lfs -text
+*.safetensors filter=lfs diff=lfs merge=lfs -text
+saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+*.tar.* filter=lfs diff=lfs merge=lfs -text
+*.tar filter=lfs diff=lfs merge=lfs -text
+*.tflite filter=lfs diff=lfs merge=lfs -text
+*.tgz filter=lfs diff=lfs merge=lfs -text
+*.wasm filter=lfs diff=lfs merge=lfs -text
+*.xz filter=lfs diff=lfs merge=lfs -text
+*.zip filter=lfs diff=lfs merge=lfs -text
+*.zst filter=lfs diff=lfs merge=lfs -text
+*tfevents* filter=lfs diff=lfs merge=lfs -text
+gpt2-story-gen.IQ4_XS.gguf filter=lfs diff=lfs merge=lfs -text
+gpt2-story-gen.Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
+gpt2-story-gen.Q3_K_L.gguf filter=lfs diff=lfs merge=lfs -text
+gpt2-story-gen.Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+gpt2-story-gen.Q3_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+gpt2-story-gen.Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+gpt2-story-gen.Q4_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+gpt2-story-gen.Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+gpt2-story-gen.Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+gpt2-story-gen.Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
+gpt2-story-gen.Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
+gpt2-story-gen.f16.gguf filter=lfs diff=lfs merge=lfs -text
diff --git a/README.md b/README.md
new file mode 100644
index 0000000..7186090
--- /dev/null
+++ b/README.md
@@ -0,0 +1,63 @@
+---
+base_model: vamsi10052005/gpt2-story-gen
+language:
+- en
+library_name: transformers
+quantized_by: mradermacher
+---
+## About
+
+Static quants of https://huggingface.co/vamsi10052005/gpt2-story-gen
+
+Weighted/imatrix quants do not appear to be available (from me) at this time. If they do not show up a week or so after the static ones, I have probably not planned them. Feel free to request them by opening a Community Discussion.
+
+## Usage
+
+If you are unsure how to use GGUF files, refer to one of [TheBloke's
+READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
+more details, including on how to concatenate multi-part files.
+
+## Provided Quants
+
+(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)
+
+| Link | Type | Size/GB | Notes |
+|:-----|:-----|--------:|:------|
+| [GGUF](https://huggingface.co/mradermacher/gpt2-story-gen-GGUF/resolve/main/gpt2-story-gen.Q2_K.gguf) | Q2_K | 0.2 | |
+| [GGUF](https://huggingface.co/mradermacher/gpt2-story-gen-GGUF/resolve/main/gpt2-story-gen.Q3_K_S.gguf) | Q3_K_S | 0.2 | |
+| [GGUF](https://huggingface.co/mradermacher/gpt2-story-gen-GGUF/resolve/main/gpt2-story-gen.Q3_K_M.gguf) | Q3_K_M | 0.2 | lower quality |
+| [GGUF](https://huggingface.co/mradermacher/gpt2-story-gen-GGUF/resolve/main/gpt2-story-gen.IQ4_XS.gguf) | IQ4_XS | 0.2 | |
+| [GGUF](https://huggingface.co/mradermacher/gpt2-story-gen-GGUF/resolve/main/gpt2-story-gen.Q4_K_S.gguf) | Q4_K_S | 0.2 | fast, recommended |
+| [GGUF](https://huggingface.co/mradermacher/gpt2-story-gen-GGUF/resolve/main/gpt2-story-gen.Q3_K_L.gguf) | Q3_K_L | 0.2 | |
+| [GGUF](https://huggingface.co/mradermacher/gpt2-story-gen-GGUF/resolve/main/gpt2-story-gen.Q4_K_M.gguf) | Q4_K_M | 0.2 | fast, recommended |
+| [GGUF](https://huggingface.co/mradermacher/gpt2-story-gen-GGUF/resolve/main/gpt2-story-gen.Q5_K_S.gguf) | Q5_K_S | 0.2 | |
+| [GGUF](https://huggingface.co/mradermacher/gpt2-story-gen-GGUF/resolve/main/gpt2-story-gen.Q5_K_M.gguf) | Q5_K_M | 0.2 | |
+| [GGUF](https://huggingface.co/mradermacher/gpt2-story-gen-GGUF/resolve/main/gpt2-story-gen.Q6_K.gguf) | Q6_K | 0.2 | very good quality |
+| [GGUF](https://huggingface.co/mradermacher/gpt2-story-gen-GGUF/resolve/main/gpt2-story-gen.Q8_0.gguf) | Q8_0 | 0.2 | fast, best quality |
+| [GGUF](https://huggingface.co/mradermacher/gpt2-story-gen-GGUF/resolve/main/gpt2-story-gen.f16.gguf) | f16 | 0.4 | 16 bpw, overkill |
+
+Here is a handy graph by ikawrakow comparing some lower-quality quant
+types (lower is better):
+
+![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)
+
+And here are Artefact2's thoughts on the matter:
+https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
+
+## FAQ / Model Request
+
+See https://huggingface.co/mradermacher/model_requests for answers to
+questions you might have and/or if you want some other model quantized.
+
+## Thanks
+
+I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
+me use its servers and providing upgrades to my workstation to enable
+this work in my free time.
+
+
diff --git a/gpt2-story-gen.IQ4_XS.gguf b/gpt2-story-gen.IQ4_XS.gguf
new file mode 100644
index 0000000..82efdf4
--- /dev/null
+++ b/gpt2-story-gen.IQ4_XS.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:85591c0d67a3b97a2868c88924d53150a2be80f458d9edffa5f712d5ce3ee1fd
+size 82558816
diff --git a/gpt2-story-gen.Q2_K.gguf b/gpt2-story-gen.Q2_K.gguf
new file mode 100644
index 0000000..0a3de08
--- /dev/null
+++ b/gpt2-story-gen.Q2_K.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c74fc20fb17b3e928b87007f66f888dfe05dcffc6129de14440307ed8bf5edd5
+size 68532064
diff --git a/gpt2-story-gen.Q3_K_L.gguf b/gpt2-story-gen.Q3_K_L.gguf
new file mode 100644
index 0000000..7ce23bf
--- /dev/null
+++ b/gpt2-story-gen.Q3_K_L.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:56590edceb31269b944d2cd76bf7689ce19c9590e82f0cd78c911939d3a6154b
+size 85507936
diff --git a/gpt2-story-gen.Q3_K_M.gguf b/gpt2-story-gen.Q3_K_M.gguf
new file mode 100644
index 0000000..08e6b2a
--- /dev/null
+++ b/gpt2-story-gen.Q3_K_M.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d1cdebc9012d06c8fe1a39a314da14787a93b93d4a603603ff410e8a93fe7b9b
+size 81084256
diff --git a/gpt2-story-gen.Q3_K_S.gguf b/gpt2-story-gen.Q3_K_S.gguf
new file mode 100644
index 0000000..95170f4
--- /dev/null
+++ b/gpt2-story-gen.Q3_K_S.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:28e72dd54418bab067b5d65f94bf5fcaa08fabd3fc582ad252049378a0bd3ed2
+size 73564000
diff --git a/gpt2-story-gen.Q4_K_M.gguf b/gpt2-story-gen.Q4_K_M.gguf
new file mode 100644
index 0000000..35e81a3
--- /dev/null
+++ b/gpt2-story-gen.Q4_K_M.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:64b95f78727442620a777680eb7fc6b9160b258551a1896a508381d19d05aa96
+size 91148128
diff --git a/gpt2-story-gen.Q4_K_S.gguf b/gpt2-story-gen.Q4_K_S.gguf
new file mode 100644
index 0000000..1f51985
--- /dev/null
+++ b/gpt2-story-gen.Q4_K_S.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:07bda9a74e08db588a0adb0c2cf5d4e6280830dcf4f4d17534d86f9fd563a51e
+size 85139296
diff --git a/gpt2-story-gen.Q5_K_M.gguf b/gpt2-story-gen.Q5_K_M.gguf
new file mode 100644
index 0000000..efe3e76
--- /dev/null
+++ b/gpt2-story-gen.Q5_K_M.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5c087b19a7fcfc4f5eb3ddf0b45910d367f714ded56ee9f70888e81e2b03891e
+size 100161376
diff --git a/gpt2-story-gen.Q5_K_S.gguf b/gpt2-story-gen.Q5_K_S.gguf
new file mode 100644
index 0000000..d013001
--- /dev/null
+++ b/gpt2-story-gen.Q5_K_S.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e22de49d3081b211b764b8b9e6d232ae8d78620879a3b18036803e86988bbf54
+size 95461216
diff --git a/gpt2-story-gen.Q6_K.gguf b/gpt2-story-gen.Q6_K.gguf
new file mode 100644
index 0000000..3b4fd51
--- /dev/null
+++ b/gpt2-story-gen.Q6_K.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:34975801b46c1f8a88a1d0fd35b348e972ed103b3cd43363ae7c7dbb89bb82c2
+size 106741600
diff --git a/gpt2-story-gen.Q8_0.gguf b/gpt2-story-gen.Q8_0.gguf
new file mode 100644
index 0000000..0c1d8e8
--- /dev/null
+++ b/gpt2-story-gen.Q8_0.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:cda8e52de86131633f59e6708714591fed223fbd492de2ffe212b40faf9744fb
+size 136659520
diff --git a/gpt2-story-gen.f16.gguf b/gpt2-story-gen.f16.gguf
new file mode 100644
index 0000000..4ab2045
--- /dev/null
+++ b/gpt2-story-gen.f16.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b60ef661c53547ef2d56925cc8a6df61555b6670f299f14723a5c72375b683b9
+size 252470784
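The README's Usage section above defers to TheBloke's GGUF guides rather than showing code. As a minimal sketch of one way to use the files this commit adds, assuming the `huggingface_hub` and `llama-cpp-python` packages (an assumption here, not something the repo prescribes), the snippet below downloads one quant and samples a short story continuation; any other GGUF-capable runtime, such as llama.cpp's `llama-cli`, can load the same file.

```python
# Minimal sketch (not part of the upstream commit): fetch one quant from this
# repo and generate a story continuation with it locally.
# Assumes: pip install huggingface_hub llama-cpp-python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download the Q4_K_M quant ("fast, recommended" in the table above).
model_path = hf_hub_download(
    repo_id="mradermacher/gpt2-story-gen-GGUF",
    filename="gpt2-story-gen.Q4_K_M.gguf",
)

# GPT-2 has a 1024-token context window, so a small n_ctx is sufficient.
llm = Llama(model_path=model_path, n_ctx=1024)

out = llm(
    "Once upon a time in a quiet mountain village,",
    max_tokens=128,
    temperature=0.8,
)
print(out["choices"][0]["text"])
```

Swapping in any other quant from the table only changes the `filename` argument; the lower-bit quants trade output quality for a smaller download and faster inference.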