commit f38846e533463b6446d5df1c78fb6823b65891cd
Author: ModelHub XC
Date:   Fri May 15 08:37:45 2026 +0800

    Initialize project; model provided by the ModelHub XC community
    Model: mradermacher/Llama-3-8B-Stroganoff-3.0-i1-GGUF
    Source: Original Platform

diff --git a/.gitattributes b/.gitattributes
new file mode 100644
index 0000000..ee31822
--- /dev/null
+++ b/.gitattributes
@@ -0,0 +1,57 @@
+*.7z filter=lfs diff=lfs merge=lfs -text
+*.arrow filter=lfs diff=lfs merge=lfs -text
+*.bin filter=lfs diff=lfs merge=lfs -text
+*.bz2 filter=lfs diff=lfs merge=lfs -text
+*.ckpt filter=lfs diff=lfs merge=lfs -text
+*.ftz filter=lfs diff=lfs merge=lfs -text
+*.gz filter=lfs diff=lfs merge=lfs -text
+*.h5 filter=lfs diff=lfs merge=lfs -text
+*.joblib filter=lfs diff=lfs merge=lfs -text
+*.lfs.* filter=lfs diff=lfs merge=lfs -text
+*.mlmodel filter=lfs diff=lfs merge=lfs -text
+*.model filter=lfs diff=lfs merge=lfs -text
+*.msgpack filter=lfs diff=lfs merge=lfs -text
+*.npy filter=lfs diff=lfs merge=lfs -text
+*.npz filter=lfs diff=lfs merge=lfs -text
+*.onnx filter=lfs diff=lfs merge=lfs -text
+*.ot filter=lfs diff=lfs merge=lfs -text
+*.parquet filter=lfs diff=lfs merge=lfs -text
+*.pb filter=lfs diff=lfs merge=lfs -text
+*.pickle filter=lfs diff=lfs merge=lfs -text
+*.pkl filter=lfs diff=lfs merge=lfs -text
+*.pt filter=lfs diff=lfs merge=lfs -text
+*.pth filter=lfs diff=lfs merge=lfs -text
+*.rar filter=lfs diff=lfs merge=lfs -text
+*.safetensors filter=lfs diff=lfs merge=lfs -text
+saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+*.tar.* filter=lfs diff=lfs merge=lfs -text
+*.tar filter=lfs diff=lfs merge=lfs -text
+*.tflite filter=lfs diff=lfs merge=lfs -text
+*.tgz filter=lfs diff=lfs merge=lfs -text
+*.wasm filter=lfs diff=lfs merge=lfs -text
+*.xz filter=lfs diff=lfs merge=lfs -text
+*.zip filter=lfs diff=lfs merge=lfs -text
+*.zst filter=lfs diff=lfs merge=lfs -text
+*tfevents* filter=lfs diff=lfs merge=lfs -text
+imatrix.dat filter=lfs diff=lfs merge=lfs -text
+Llama-3-8B-Stroganoff-3.0.i1-Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
+Llama-3-8B-Stroganoff-3.0.i1-Q4_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+Llama-3-8B-Stroganoff-3.0.i1-IQ3_XXS.gguf filter=lfs diff=lfs merge=lfs -text
+Llama-3-8B-Stroganoff-3.0.i1-Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+Llama-3-8B-Stroganoff-3.0.i1-Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+Llama-3-8B-Stroganoff-3.0.i1-IQ2_M.gguf filter=lfs diff=lfs merge=lfs -text
+Llama-3-8B-Stroganoff-3.0.i1-Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
+Llama-3-8B-Stroganoff-3.0.i1-IQ4_XS.gguf filter=lfs diff=lfs merge=lfs -text
+Llama-3-8B-Stroganoff-3.0.i1-Q3_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+Llama-3-8B-Stroganoff-3.0.i1-Q3_K_L.gguf filter=lfs diff=lfs merge=lfs -text
+Llama-3-8B-Stroganoff-3.0.i1-Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+Llama-3-8B-Stroganoff-3.0.i1-Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+Llama-3-8B-Stroganoff-3.0.i1-Q4_0.gguf filter=lfs diff=lfs merge=lfs -text
+Llama-3-8B-Stroganoff-3.0.i1-IQ3_XS.gguf filter=lfs diff=lfs merge=lfs -text
+Llama-3-8B-Stroganoff-3.0.i1-IQ3_S.gguf filter=lfs diff=lfs merge=lfs -text
+Llama-3-8B-Stroganoff-3.0.i1-IQ3_M.gguf filter=lfs diff=lfs merge=lfs -text
+Llama-3-8B-Stroganoff-3.0.i1-IQ2_XXS.gguf filter=lfs diff=lfs merge=lfs -text
+Llama-3-8B-Stroganoff-3.0.i1-IQ2_XS.gguf filter=lfs diff=lfs merge=lfs -text
+Llama-3-8B-Stroganoff-3.0.i1-IQ2_S.gguf filter=lfs diff=lfs merge=lfs -text
+Llama-3-8B-Stroganoff-3.0.i1-IQ1_M.gguf filter=lfs diff=lfs merge=lfs -text
+Llama-3-8B-Stroganoff-3.0.i1-IQ1_S.gguf filter=lfs diff=lfs merge=lfs -text
diff --git a/Llama-3-8B-Stroganoff-3.0.i1-IQ1_M.gguf b/Llama-3-8B-Stroganoff-3.0.i1-IQ1_M.gguf
new file mode 100644
index 0000000..d098bf3
--- /dev/null
+++ b/Llama-3-8B-Stroganoff-3.0.i1-IQ1_M.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ff975e57017bf65aeb34334aae4f54e2adaeea56ae8f9609fbbcee1bdd8599ec
+size 2161974016
diff --git a/Llama-3-8B-Stroganoff-3.0.i1-IQ1_S.gguf b/Llama-3-8B-Stroganoff-3.0.i1-IQ1_S.gguf
new file mode 100644
index 0000000..5ec4c77
--- /dev/null
+++ b/Llama-3-8B-Stroganoff-3.0.i1-IQ1_S.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b2b3d92b0575801d5de9389fb78a5f3e71f09ef05b14aab5cf32ed4241b09369
+size 2019629824
diff --git a/Llama-3-8B-Stroganoff-3.0.i1-IQ2_M.gguf b/Llama-3-8B-Stroganoff-3.0.i1-IQ2_M.gguf
new file mode 100644
index 0000000..38d2914
--- /dev/null
+++ b/Llama-3-8B-Stroganoff-3.0.i1-IQ2_M.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:df241d7b4ea71c913df6e4d6ea0d466237b2f7c5476a57cb840159426237e8d5
+size 2948283136
diff --git a/Llama-3-8B-Stroganoff-3.0.i1-IQ2_S.gguf b/Llama-3-8B-Stroganoff-3.0.i1-IQ2_S.gguf
new file mode 100644
index 0000000..35f3edc
--- /dev/null
+++ b/Llama-3-8B-Stroganoff-3.0.i1-IQ2_S.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0a62344f3f889bb53efc13985932c65ce2f3be1f54db694efa1d4597d8947cc2
+size 2758490880
diff --git a/Llama-3-8B-Stroganoff-3.0.i1-IQ2_XS.gguf b/Llama-3-8B-Stroganoff-3.0.i1-IQ2_XS.gguf
new file mode 100644
index 0000000..02fcaea
--- /dev/null
+++ b/Llama-3-8B-Stroganoff-3.0.i1-IQ2_XS.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:feb5c8509f6a28eb526f285eb37f39edac803c88188a3403fb207aac1be9d1f8
+size 2605783808
diff --git a/Llama-3-8B-Stroganoff-3.0.i1-IQ2_XXS.gguf b/Llama-3-8B-Stroganoff-3.0.i1-IQ2_XXS.gguf
new file mode 100644
index 0000000..8f24a4e
--- /dev/null
+++ b/Llama-3-8B-Stroganoff-3.0.i1-IQ2_XXS.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:68ff59f85bf55d12110222cf7674112e86eedb7020304a434b91602a002e71ed
+size 2399214336
diff --git a/Llama-3-8B-Stroganoff-3.0.i1-IQ3_M.gguf b/Llama-3-8B-Stroganoff-3.0.i1-IQ3_M.gguf
new file mode 100644
index 0000000..ce7518e
--- /dev/null
+++ b/Llama-3-8B-Stroganoff-3.0.i1-IQ3_M.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4c120bc76a4bf7b851de686249caaa282dafef6acfbd5328f9d6cb590e84b39e
+size 3784825600
diff --git a/Llama-3-8B-Stroganoff-3.0.i1-IQ3_S.gguf b/Llama-3-8B-Stroganoff-3.0.i1-IQ3_S.gguf
new file mode 100644
index 0000000..13dc7d0
--- /dev/null
+++ b/Llama-3-8B-Stroganoff-3.0.i1-IQ3_S.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d8d9b8491cf3addf7fc49b38bc99b2e0d3bb883c770c9a2c33ba9872726ade18
+size 3682327296
diff --git a/Llama-3-8B-Stroganoff-3.0.i1-IQ3_XS.gguf b/Llama-3-8B-Stroganoff-3.0.i1-IQ3_XS.gguf
new file mode 100644
index 0000000..f027b62
--- /dev/null
+++ b/Llama-3-8B-Stroganoff-3.0.i1-IQ3_XS.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0735938f3035a9bab9498906e1b10670b1d613489ae42a7fa2c62d238aa3950a
+size 3518749440
diff --git a/Llama-3-8B-Stroganoff-3.0.i1-IQ3_XXS.gguf b/Llama-3-8B-Stroganoff-3.0.i1-IQ3_XXS.gguf
new file mode 100644
index 0000000..addd7ff
--- /dev/null
+++ b/Llama-3-8B-Stroganoff-3.0.i1-IQ3_XXS.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:281d3a597328251dcb2518ac22dc0d50c8d80e8d7009e49f8c21b1d1bef057ac
+size 3274914560
diff --git a/Llama-3-8B-Stroganoff-3.0.i1-IQ4_XS.gguf b/Llama-3-8B-Stroganoff-3.0.i1-IQ4_XS.gguf
new file mode 100644
index 0000000..636b07f
--- /dev/null
+++ b/Llama-3-8B-Stroganoff-3.0.i1-IQ4_XS.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7f75947de78a5232b647e16d1ba278c6dc3fce8c32e7d2a5a5d8af9a1ba0d356
+size 4447664896
diff --git a/Llama-3-8B-Stroganoff-3.0.i1-Q2_K.gguf b/Llama-3-8B-Stroganoff-3.0.i1-Q2_K.gguf
new file mode 100644
index 0000000..1b84403
--- /dev/null
+++ b/Llama-3-8B-Stroganoff-3.0.i1-Q2_K.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b470c12c911d817b87effc701d20c5a31c15b8fc9e1244d2be54c4288a963a41
+size 3179133696
diff --git a/Llama-3-8B-Stroganoff-3.0.i1-Q3_K_L.gguf b/Llama-3-8B-Stroganoff-3.0.i1-Q3_K_L.gguf
new file mode 100644
index 0000000..2ec934d
--- /dev/null
+++ b/Llama-3-8B-Stroganoff-3.0.i1-Q3_K_L.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3c5e3eceb0b0a44013d3ca7599b9ee971a3498a5876bfa27ee9741f1eef934ee
+size 4321958656
diff --git a/Llama-3-8B-Stroganoff-3.0.i1-Q3_K_M.gguf b/Llama-3-8B-Stroganoff-3.0.i1-Q3_K_M.gguf
new file mode 100644
index 0000000..853be5f
--- /dev/null
+++ b/Llama-3-8B-Stroganoff-3.0.i1-Q3_K_M.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4be53b9953ab7c76258bb804a1965e7c194583a285f686d382b5c4dc6ed81b1c
+size 4018920192
diff --git a/Llama-3-8B-Stroganoff-3.0.i1-Q3_K_S.gguf b/Llama-3-8B-Stroganoff-3.0.i1-Q3_K_S.gguf
new file mode 100644
index 0000000..c6c4ab3
--- /dev/null
+++ b/Llama-3-8B-Stroganoff-3.0.i1-Q3_K_S.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e9606087b6529e9b41054313447d1b43f786aa9f3c52c43d1889269f2e9f9b96
+size 3664501504
diff --git a/Llama-3-8B-Stroganoff-3.0.i1-Q4_0.gguf b/Llama-3-8B-Stroganoff-3.0.i1-Q4_0.gguf
new file mode 100644
index 0000000..82bab01
--- /dev/null
+++ b/Llama-3-8B-Stroganoff-3.0.i1-Q4_0.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:facb24065a429a83b8ec62b6a82afa34d660bf1eac0bda22c9e138b0729e3044
+size 4675894016
diff --git a/Llama-3-8B-Stroganoff-3.0.i1-Q4_K_M.gguf b/Llama-3-8B-Stroganoff-3.0.i1-Q4_K_M.gguf
new file mode 100644
index 0000000..ba7f9bb
--- /dev/null
+++ b/Llama-3-8B-Stroganoff-3.0.i1-Q4_K_M.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d38f325d7ee0b9b16fcd9021008eb3d10af3e3a870eaafa97085fbcd813dd389
+size 4920736512
diff --git a/Llama-3-8B-Stroganoff-3.0.i1-Q4_K_S.gguf b/Llama-3-8B-Stroganoff-3.0.i1-Q4_K_S.gguf
new file mode 100644
index 0000000..dcb2b87
--- /dev/null
+++ b/Llama-3-8B-Stroganoff-3.0.i1-Q4_K_S.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d6dc8c140b0aa78c1069a6df757752c17800105ed5c5cb2deb93cc3515450cf7
+size 4692671232
diff --git a/Llama-3-8B-Stroganoff-3.0.i1-Q5_K_M.gguf b/Llama-3-8B-Stroganoff-3.0.i1-Q5_K_M.gguf
new file mode 100644
index 0000000..de756f8
--- /dev/null
+++ b/Llama-3-8B-Stroganoff-3.0.i1-Q5_K_M.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8478baa1e1f3b76722f8cd9e1ef4923b7c55334af61f9fc9ead720848cbd02ab
+size 5732989696
diff --git a/Llama-3-8B-Stroganoff-3.0.i1-Q5_K_S.gguf b/Llama-3-8B-Stroganoff-3.0.i1-Q5_K_S.gguf
new file mode 100644
index 0000000..a83cc1e
--- /dev/null
+++ b/Llama-3-8B-Stroganoff-3.0.i1-Q5_K_S.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2df5b0a0771d2588e9dad0ef0f6b526dabe22d3e2714ebf45010a1b00ef70f08
+size 5599296256
diff --git a/Llama-3-8B-Stroganoff-3.0.i1-Q6_K.gguf b/Llama-3-8B-Stroganoff-3.0.i1-Q6_K.gguf
new file mode 100644
index 0000000..4bc125f
--- /dev/null
+++ b/Llama-3-8B-Stroganoff-3.0.i1-Q6_K.gguf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c8c3c5e1577af5e0cda5cccab40b4c95357011311f4a5496bde0ed88e1e5fad8
+size 6596008704
diff --git a/README.md b/README.md
new file mode 100644
index 0000000..c7b57e5
--- /dev/null
+++ b/README.md
@@ -0,0 +1,80 @@
+---
+base_model: HiroseKoichi/Llama-3-8B-Stroganoff-3.0
+language:
+- en
+library_name: transformers
+license: llama3
+quantized_by: mradermacher
+tags:
+- nsfw
+- not-for-all-audiences
+- llama-3
+- text-generation-inference
+- mergekit
+- merge
+---
+## About
+
+weighted/imatrix quants of https://huggingface.co/HiroseKoichi/Llama-3-8B-Stroganoff-3.0
+
+static quants are available at https://huggingface.co/mradermacher/Llama-3-8B-Stroganoff-3.0-GGUF
+
+## Usage
+
+If you are unsure how to use GGUF files, refer to one of [TheBloke's
+READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
+more details, including how to concatenate multi-part files.
+
+## Provided Quants
+
+(sorted by size, not necessarily quality. IQ-quants are often preferable to similarly sized non-IQ quants)
+
+| Link | Type | Size/GB | Notes |
+|:-----|:-----|--------:|:------|
+| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Stroganoff-3.0-i1-GGUF/resolve/main/Llama-3-8B-Stroganoff-3.0.i1-IQ1_S.gguf) | i1-IQ1_S | 2.1 | for the desperate |
+| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Stroganoff-3.0-i1-GGUF/resolve/main/Llama-3-8B-Stroganoff-3.0.i1-IQ1_M.gguf) | i1-IQ1_M | 2.3 | mostly desperate |
+| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Stroganoff-3.0-i1-GGUF/resolve/main/Llama-3-8B-Stroganoff-3.0.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.5 | |
+| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Stroganoff-3.0-i1-GGUF/resolve/main/Llama-3-8B-Stroganoff-3.0.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.7 | |
+| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Stroganoff-3.0-i1-GGUF/resolve/main/Llama-3-8B-Stroganoff-3.0.i1-IQ2_S.gguf) | i1-IQ2_S | 2.9 | |
+| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Stroganoff-3.0-i1-GGUF/resolve/main/Llama-3-8B-Stroganoff-3.0.i1-IQ2_M.gguf) | i1-IQ2_M | 3.0 | |
+| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Stroganoff-3.0-i1-GGUF/resolve/main/Llama-3-8B-Stroganoff-3.0.i1-Q2_K.gguf) | i1-Q2_K | 3.3 | IQ3_XXS probably better |
+| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Stroganoff-3.0-i1-GGUF/resolve/main/Llama-3-8B-Stroganoff-3.0.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.4 | lower quality |
+| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Stroganoff-3.0-i1-GGUF/resolve/main/Llama-3-8B-Stroganoff-3.0.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.6 | |
+| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Stroganoff-3.0-i1-GGUF/resolve/main/Llama-3-8B-Stroganoff-3.0.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.8 | IQ3_XS probably better |
+| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Stroganoff-3.0-i1-GGUF/resolve/main/Llama-3-8B-Stroganoff-3.0.i1-IQ3_S.gguf) | i1-IQ3_S | 3.8 | beats Q3_K* |
+| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Stroganoff-3.0-i1-GGUF/resolve/main/Llama-3-8B-Stroganoff-3.0.i1-IQ3_M.gguf) | i1-IQ3_M | 3.9 | |
+| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Stroganoff-3.0-i1-GGUF/resolve/main/Llama-3-8B-Stroganoff-3.0.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.1 | IQ3_S probably better |
+| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Stroganoff-3.0-i1-GGUF/resolve/main/Llama-3-8B-Stroganoff-3.0.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.4 | IQ3_M probably better |
+| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Stroganoff-3.0-i1-GGUF/resolve/main/Llama-3-8B-Stroganoff-3.0.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.5 | |
+| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Stroganoff-3.0-i1-GGUF/resolve/main/Llama-3-8B-Stroganoff-3.0.i1-Q4_0.gguf) | i1-Q4_0 | 4.8 | fast, low quality |
+| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Stroganoff-3.0-i1-GGUF/resolve/main/Llama-3-8B-Stroganoff-3.0.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.8 | optimal size/speed/quality |
+| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Stroganoff-3.0-i1-GGUF/resolve/main/Llama-3-8B-Stroganoff-3.0.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.0 | fast, recommended |
+| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Stroganoff-3.0-i1-GGUF/resolve/main/Llama-3-8B-Stroganoff-3.0.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.7 | |
+| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Stroganoff-3.0-i1-GGUF/resolve/main/Llama-3-8B-Stroganoff-3.0.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.8 | |
+| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Stroganoff-3.0-i1-GGUF/resolve/main/Llama-3-8B-Stroganoff-3.0.i1-Q6_K.gguf) | i1-Q6_K | 6.7 | practically like static Q6_K |
+
+Here is a handy graph by ikawrakow comparing some lower-quality quant
+types (lower is better):
+
+![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)
+
+And here are Artefact2's thoughts on the matter:
+https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
+
+## FAQ / Model Requests
+
+See https://huggingface.co/mradermacher/model_requests for some answers to
+questions you might have and/or if you want some other model quantized.
+
+## Thanks
+
+I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
+me use its servers and providing upgrades to my workstation to enable
+this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
+
diff --git a/imatrix.dat b/imatrix.dat
new file mode 100644
index 0000000..8e934ec
--- /dev/null
+++ b/imatrix.dat
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:db76e69509524fa559a88a1f0d9fde7586493b57bd82b5f039ec1b0cffc7f8da
+size 4988157