commit 8ca36e43709587d572e87d5e5d3f51166985f5ad
Author: ModelHub XC
Date:   Sat May 9 03:53:48 2026 +0800

    Initialize project; model provided by the ModelHub XC community

    Model: mradermacher/R3-Qwen3-8B-LoRA-14k-GGUF
    Source: Original Platform

diff --git a/.gitattributes b/.gitattributes
new file mode 100644
index 0000000..638310f
--- /dev/null
+++ b/.gitattributes
@@ -0,0 +1,47 @@
+*.7z filter=lfs diff=lfs merge=lfs -text
+*.arrow filter=lfs diff=lfs merge=lfs -text
+*.bin filter=lfs diff=lfs merge=lfs -text
+*.bz2 filter=lfs diff=lfs merge=lfs -text
+*.ckpt filter=lfs diff=lfs merge=lfs -text
+*.ftz filter=lfs diff=lfs merge=lfs -text
+*.gz filter=lfs diff=lfs merge=lfs -text
+*.h5 filter=lfs diff=lfs merge=lfs -text
+*.joblib filter=lfs diff=lfs merge=lfs -text
+*.lfs.* filter=lfs diff=lfs merge=lfs -text
+*.mlmodel filter=lfs diff=lfs merge=lfs -text
+*.model filter=lfs diff=lfs merge=lfs -text
+*.msgpack filter=lfs diff=lfs merge=lfs -text
+*.npy filter=lfs diff=lfs merge=lfs -text
+*.npz filter=lfs diff=lfs merge=lfs -text
+*.onnx filter=lfs diff=lfs merge=lfs -text
+*.ot filter=lfs diff=lfs merge=lfs -text
+*.parquet filter=lfs diff=lfs merge=lfs -text
+*.pb filter=lfs diff=lfs merge=lfs -text
+*.pickle filter=lfs diff=lfs merge=lfs -text
+*.pkl filter=lfs diff=lfs merge=lfs -text
+*.pt filter=lfs diff=lfs merge=lfs -text
+*.pth filter=lfs diff=lfs merge=lfs -text
+*.rar filter=lfs diff=lfs merge=lfs -text
+*.safetensors filter=lfs diff=lfs merge=lfs -text
+saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+*.tar.* filter=lfs diff=lfs merge=lfs -text
+*.tar filter=lfs diff=lfs merge=lfs -text
+*.tflite filter=lfs diff=lfs merge=lfs -text
+*.tgz filter=lfs diff=lfs merge=lfs -text
+*.wasm filter=lfs diff=lfs merge=lfs -text
+*.xz filter=lfs diff=lfs merge=lfs -text
+*.zip filter=lfs diff=lfs merge=lfs -text
+*.zst filter=lfs diff=lfs merge=lfs -text
+*tfevents* filter=lfs diff=lfs merge=lfs -text
+R3-Qwen3-8B-LoRA-14k.Q4_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+R3-Qwen3-8B-LoRA-14k.Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
+R3-Qwen3-8B-LoRA-14k.f16.gguf filter=lfs diff=lfs merge=lfs -text
+R3-Qwen3-8B-LoRA-14k.Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
+R3-Qwen3-8B-LoRA-14k.Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
+R3-Qwen3-8B-LoRA-14k.Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+R3-Qwen3-8B-LoRA-14k.Q3_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+R3-Qwen3-8B-LoRA-14k.Q3_K_L.gguf filter=lfs diff=lfs merge=lfs -text
+R3-Qwen3-8B-LoRA-14k.Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+R3-Qwen3-8B-LoRA-14k.Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+R3-Qwen3-8B-LoRA-14k.Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+R3-Qwen3-8B-LoRA-14k.IQ4_XS.gguf filter=lfs diff=lfs merge=lfs -text
diff --git a/R3-Qwen3-8B-LoRA-14k.IQ4_XS.gguf b/R3-Qwen3-8B-LoRA-14k.IQ4_XS.gguf
new file mode 100644
index 0000000..cdf0af9
--- /dev/null
+++ b/R3-Qwen3-8B-LoRA-14k.IQ4_XS.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:34a787bef02747573e2e352f41343ae544616baca7406d8db67fff08dfaf0eb6
+size 4593296640
diff --git a/R3-Qwen3-8B-LoRA-14k.Q2_K.gguf b/R3-Qwen3-8B-LoRA-14k.Q2_K.gguf
new file mode 100644
index 0000000..6702aca
--- /dev/null
+++ b/R3-Qwen3-8B-LoRA-14k.Q2_K.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8424e4dc333404316d9e9056db6308925b5e8da18eb12964ac4e24c4ad3c0226
+size 3281732864
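Editor's note: the `.gitattributes` rules above route every `*.gguf` file through Git LFS, so each file section in this commit stores only a three-line pointer (spec version, sha256 oid, byte size) rather than the multi-gigabyte blob itself. The following is a minimal, illustrative Python sketch — not part of the commit — showing how such a pointer can be parsed and a separately downloaded blob checked against it; the file paths are placeholders.

```python
# Illustrative only: parse a Git LFS pointer (version / oid sha256:<hex> / size <bytes>)
# and verify a downloaded blob against it. Not part of this commit.
import hashlib
from pathlib import Path


def read_lfs_pointer(pointer_path: str) -> tuple[str, int]:
    """Return (sha256 hex digest, size in bytes) recorded in an LFS pointer file."""
    fields = dict(
        line.split(" ", 1)
        for line in Path(pointer_path).read_text().splitlines()
        if line
    )
    return fields["oid"].removeprefix("sha256:"), int(fields["size"])


def verify_blob(blob_path: str, oid: str, size: int, chunk_size: int = 1 << 20) -> bool:
    """Stream-hash the blob so multi-GB files are never loaded into memory at once."""
    digest, total = hashlib.sha256(), 0
    with open(blob_path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
            total += len(chunk)
    return total == size and digest.hexdigest() == oid


# Hypothetical usage: check a local Q2_K download against its pointer from this commit.
# oid, size = read_lfs_pointer("R3-Qwen3-8B-LoRA-14k.Q2_K.gguf")  # pointer checkout
# print(verify_blob("/path/to/R3-Qwen3-8B-LoRA-14k.Q2_K.gguf", oid, size))
```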
diff --git a/R3-Qwen3-8B-LoRA-14k.Q3_K_L.gguf b/R3-Qwen3-8B-LoRA-14k.Q3_K_L.gguf
new file mode 100644
index 0000000..989c446
--- /dev/null
+++ b/R3-Qwen3-8B-LoRA-14k.Q3_K_L.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3bd059b546b1cdbf10be34e8ae3623477a39ec34f4f82fa8524e286a26f4366a
+size 4431394048
diff --git a/R3-Qwen3-8B-LoRA-14k.Q3_K_M.gguf b/R3-Qwen3-8B-LoRA-14k.Q3_K_M.gguf
new file mode 100644
index 0000000..a1713e1
--- /dev/null
+++ b/R3-Qwen3-8B-LoRA-14k.Q3_K_M.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4aef04222c775736b9c635bf536df3c68e1a7949faac492c04c68b66fd81d862
+size 4124161280
diff --git a/R3-Qwen3-8B-LoRA-14k.Q3_K_S.gguf b/R3-Qwen3-8B-LoRA-14k.Q3_K_S.gguf
new file mode 100644
index 0000000..d99bddf
--- /dev/null
+++ b/R3-Qwen3-8B-LoRA-14k.Q3_K_S.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6842998a4a7211da137bcf0143c494d7f16e2f4ba60bf12725b145bd5787900e
+size 3769611520
diff --git a/R3-Qwen3-8B-LoRA-14k.Q4_K_M.gguf b/R3-Qwen3-8B-LoRA-14k.Q4_K_M.gguf
new file mode 100644
index 0000000..9764c9c
--- /dev/null
+++ b/R3-Qwen3-8B-LoRA-14k.Q4_K_M.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:febf14f67a888d2bfbf07231204ebf02ad28b6d36dabe49f48f1f5dfc4a41142
+size 5027783936
diff --git a/R3-Qwen3-8B-LoRA-14k.Q4_K_S.gguf b/R3-Qwen3-8B-LoRA-14k.Q4_K_S.gguf
new file mode 100644
index 0000000..d3eeb97
--- /dev/null
+++ b/R3-Qwen3-8B-LoRA-14k.Q4_K_S.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:33cf9b45ea936285af3c01fb2057b39b6a9e9d27b33cfd6d7b3815114dbe702c
+size 4802012416
diff --git a/R3-Qwen3-8B-LoRA-14k.Q5_K_M.gguf b/R3-Qwen3-8B-LoRA-14k.Q5_K_M.gguf
new file mode 100644
index 0000000..f8eac38
--- /dev/null
+++ b/R3-Qwen3-8B-LoRA-14k.Q5_K_M.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5859c17d1ce80b59f34fea762d9dff1614e4478e180492062611ee0ff14ed55c
+size 5851112704
diff --git a/R3-Qwen3-8B-LoRA-14k.Q5_K_S.gguf b/R3-Qwen3-8B-LoRA-14k.Q5_K_S.gguf
new file mode 100644
index 0000000..e5092b1
--- /dev/null
+++ b/R3-Qwen3-8B-LoRA-14k.Q5_K_S.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:30e0c23d380b51efa164ff67066fd689bda646953117b31dca340fa0329e8f1c
+size 5720761600
diff --git a/R3-Qwen3-8B-LoRA-14k.Q6_K.gguf b/R3-Qwen3-8B-LoRA-14k.Q6_K.gguf
new file mode 100644
index 0000000..62ffdb5
--- /dev/null
+++ b/R3-Qwen3-8B-LoRA-14k.Q6_K.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3fb05614b0e286e335cf180d9e8019a984ae0c26211144d4431a81b4c3db9384
+size 6725899520
diff --git a/R3-Qwen3-8B-LoRA-14k.Q8_0.gguf b/R3-Qwen3-8B-LoRA-14k.Q8_0.gguf
new file mode 100644
index 0000000..036b51f
--- /dev/null
+++ b/R3-Qwen3-8B-LoRA-14k.Q8_0.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2cf7bfa6aad6605840d099d7dffa45a98fb922654aa5592d9177064e3b7c62b5
+size 8709518592
diff --git a/R3-Qwen3-8B-LoRA-14k.f16.gguf b/R3-Qwen3-8B-LoRA-14k.f16.gguf
new file mode 100644
index 0000000..31a8594
--- /dev/null
+++ b/R3-Qwen3-8B-LoRA-14k.f16.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:afdf0e75e100660244df508636c7e2a08d711e03b5cc858281a97e9a32015e30
+size 16388044032
diff --git a/README.md b/README.md
new file mode 100644
index 0000000..2eff185
--- /dev/null
+++ b/README.md
@@ -0,0 +1,69 @@
+---
+base_model: rubricreward/R3-Qwen3-8B-LoRA-14k
+language:
+- en
+library_name: transformers
+mradermacher:
+  readme_rev: 1
+quantized_by: mradermacher
+tags: []
+---
+## About
+
+static quants of https://huggingface.co/rubricreward/R3-Qwen3-8B-LoRA-14k
+
+***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#R3-Qwen3-8B-LoRA-14k-GGUF).***
+
+weighted/imatrix quants are available at https://huggingface.co/mradermacher/R3-Qwen3-8B-LoRA-14k-i1-GGUF
+
+## Usage
+
+If you are unsure how to use GGUF files, refer to one of [TheBloke's
+READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
+more details, including how to concatenate multi-part files. (An
+illustrative download-and-load sketch is also appended after this card.)
+
+## Provided Quants
+
+(sorted by size, not necessarily quality. IQ-quants are often preferable to similarly sized non-IQ quants)
+
+| Link | Type | Size/GB | Notes |
+|:-----|:-----|--------:|:------|
+| [GGUF](https://huggingface.co/mradermacher/R3-Qwen3-8B-LoRA-14k-GGUF/resolve/main/R3-Qwen3-8B-LoRA-14k.Q2_K.gguf) | Q2_K | 3.4 | |
+| [GGUF](https://huggingface.co/mradermacher/R3-Qwen3-8B-LoRA-14k-GGUF/resolve/main/R3-Qwen3-8B-LoRA-14k.Q3_K_S.gguf) | Q3_K_S | 3.9 | |
+| [GGUF](https://huggingface.co/mradermacher/R3-Qwen3-8B-LoRA-14k-GGUF/resolve/main/R3-Qwen3-8B-LoRA-14k.Q3_K_M.gguf) | Q3_K_M | 4.2 | lower quality |
+| [GGUF](https://huggingface.co/mradermacher/R3-Qwen3-8B-LoRA-14k-GGUF/resolve/main/R3-Qwen3-8B-LoRA-14k.Q3_K_L.gguf) | Q3_K_L | 4.5 | |
+| [GGUF](https://huggingface.co/mradermacher/R3-Qwen3-8B-LoRA-14k-GGUF/resolve/main/R3-Qwen3-8B-LoRA-14k.IQ4_XS.gguf) | IQ4_XS | 4.7 | |
+| [GGUF](https://huggingface.co/mradermacher/R3-Qwen3-8B-LoRA-14k-GGUF/resolve/main/R3-Qwen3-8B-LoRA-14k.Q4_K_S.gguf) | Q4_K_S | 4.9 | fast, recommended |
+| [GGUF](https://huggingface.co/mradermacher/R3-Qwen3-8B-LoRA-14k-GGUF/resolve/main/R3-Qwen3-8B-LoRA-14k.Q4_K_M.gguf) | Q4_K_M | 5.1 | fast, recommended |
+| [GGUF](https://huggingface.co/mradermacher/R3-Qwen3-8B-LoRA-14k-GGUF/resolve/main/R3-Qwen3-8B-LoRA-14k.Q5_K_S.gguf) | Q5_K_S | 5.8 | |
+| [GGUF](https://huggingface.co/mradermacher/R3-Qwen3-8B-LoRA-14k-GGUF/resolve/main/R3-Qwen3-8B-LoRA-14k.Q5_K_M.gguf) | Q5_K_M | 6.0 | |
+| [GGUF](https://huggingface.co/mradermacher/R3-Qwen3-8B-LoRA-14k-GGUF/resolve/main/R3-Qwen3-8B-LoRA-14k.Q6_K.gguf) | Q6_K | 6.8 | very good quality |
+| [GGUF](https://huggingface.co/mradermacher/R3-Qwen3-8B-LoRA-14k-GGUF/resolve/main/R3-Qwen3-8B-LoRA-14k.Q8_0.gguf) | Q8_0 | 8.8 | fast, best quality |
+| [GGUF](https://huggingface.co/mradermacher/R3-Qwen3-8B-LoRA-14k-GGUF/resolve/main/R3-Qwen3-8B-LoRA-14k.f16.gguf) | f16 | 16.5 | 16 bpw, overkill |
+
+Here is a handy graph by ikawrakow comparing some lower-quality quant
+types (lower is better):
+
+![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)
+
+And here are Artefact2's thoughts on the matter:
+https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
+
+## FAQ / Model Request
+
+See https://huggingface.co/mradermacher/model_requests for some answers to
+questions you might have and/or if you want some other model quantized.
+
+## Thanks
+
+I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
+me use its servers and for providing upgrades to my workstation, which
+enable this work in my free time.
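
Editor's note: as a usage illustration for the quants listed above (not part of the committed README), here is a minimal sketch that downloads one quant from this repo and runs it locally. It assumes the `huggingface_hub` and `llama-cpp-python` packages are installed; the Q4_K_M filename comes from the table above, while the context size and prompt are arbitrary placeholders.

```python
# Minimal sketch: fetch one quant from this repo and run it with llama-cpp-python.
# Assumes `pip install huggingface_hub llama-cpp-python`; not part of this commit.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Q4_K_M is the "fast, recommended" entry from the Provided Quants table.
model_path = hf_hub_download(
    repo_id="mradermacher/R3-Qwen3-8B-LoRA-14k-GGUF",
    filename="R3-Qwen3-8B-LoRA-14k.Q4_K_M.gguf",
)

llm = Llama(model_path=model_path, n_ctx=4096)  # context size is an arbitrary choice
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize what a GGUF quant is in one sentence."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```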