Initialize the project; model provided by the ModelHub XC community

Model: mradermacher/KwaiCoder-23B-A4B-v1-i1-GGUF
Source: Original Platform
Commit 8c855883f1 by ModelHub XC, 2026-05-14 18:07:40 +08:00
26 changed files with 209 additions and 0 deletions

.gitattributes vendored Normal file (+59 lines)

@@ -0,0 +1,59 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
imatrix.dat filter=lfs diff=lfs merge=lfs -text
KwaiCoder-23B-A4B-v1.i1-Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
KwaiCoder-23B-A4B-v1.i1-IQ3_M.gguf filter=lfs diff=lfs merge=lfs -text
KwaiCoder-23B-A4B-v1.i1-Q4_K_S.gguf filter=lfs diff=lfs merge=lfs -text
KwaiCoder-23B-A4B-v1.i1-IQ3_XXS.gguf filter=lfs diff=lfs merge=lfs -text
KwaiCoder-23B-A4B-v1.i1-Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
KwaiCoder-23B-A4B-v1.i1-Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
KwaiCoder-23B-A4B-v1.i1-IQ2_M.gguf filter=lfs diff=lfs merge=lfs -text
KwaiCoder-23B-A4B-v1.i1-Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
KwaiCoder-23B-A4B-v1.i1-IQ4_XS.gguf filter=lfs diff=lfs merge=lfs -text
KwaiCoder-23B-A4B-v1.i1-Q2_K_S.gguf filter=lfs diff=lfs merge=lfs -text
KwaiCoder-23B-A4B-v1.i1-IQ1_M.gguf filter=lfs diff=lfs merge=lfs -text
KwaiCoder-23B-A4B-v1.i1-Q3_K_S.gguf filter=lfs diff=lfs merge=lfs -text
KwaiCoder-23B-A4B-v1.i1-IQ2_XXS.gguf filter=lfs diff=lfs merge=lfs -text
KwaiCoder-23B-A4B-v1.i1-Q3_K_L.gguf filter=lfs diff=lfs merge=lfs -text
KwaiCoder-23B-A4B-v1.i1-IQ2_XS.gguf filter=lfs diff=lfs merge=lfs -text
KwaiCoder-23B-A4B-v1.i1-Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
KwaiCoder-23B-A4B-v1.i1-IQ2_S.gguf filter=lfs diff=lfs merge=lfs -text
KwaiCoder-23B-A4B-v1.i1-IQ1_S.gguf filter=lfs diff=lfs merge=lfs -text
KwaiCoder-23B-A4B-v1.i1-Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
KwaiCoder-23B-A4B-v1.i1-Q4_0.gguf filter=lfs diff=lfs merge=lfs -text
KwaiCoder-23B-A4B-v1.i1-IQ3_XS.gguf filter=lfs diff=lfs merge=lfs -text
KwaiCoder-23B-A4B-v1.i1-Q4_1.gguf filter=lfs diff=lfs merge=lfs -text
KwaiCoder-23B-A4B-v1.i1-IQ3_S.gguf filter=lfs diff=lfs merge=lfs -text
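
As an illustration of what the filter rules above do, here is a small hedged Python sketch that approximates the pattern match. It uses a `fnmatch`-based approximation (real gitattributes matching has additional rules, e.g. for `**` and directory-relative paths), and `is_lfs_tracked` is a hypothetical helper name; the patterns are copied from a subset of the entries above.

```python
from fnmatch import fnmatch

# A subset of the LFS-tracked patterns declared in the .gitattributes above.
# Caveat: fnmatch only approximates gitattributes semantics; it is adequate
# for a flat repository layout like this one.
LFS_PATTERNS = [
    "*.safetensors",
    "*.bin",
    "*tfevents*",
    "imatrix.dat",
    "KwaiCoder-23B-A4B-v1.i1-Q2_K.gguf",
]

def is_lfs_tracked(path: str) -> bool:
    """True if `path` matches any LFS-tracked pattern."""
    return any(fnmatch(path, pat) for pat in LFS_PATTERNS)

print(is_lfs_tracked("imatrix.dat"))  # True
print(is_lfs_tracked("README.md"))    # False
```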


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:0e3062f0b36b99d5bc25f665724590add5b94bb0e47a655f59548b68c497d303
size 7792717856
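
The three-line stub above is what actually lives in Git: a Git LFS pointer recording the spec version, a sha256 object id, and the blob size in bytes; the real file is fetched from LFS storage by oid. A minimal parsing sketch (`parse_lfs_pointer` is a hypothetical helper name):

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a Git LFS pointer stub into its key/value fields.

    Each non-empty line is `key value`; the value may itself contain
    no spaces in the standard fields (version, oid, size).
    """
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:0e3062f0b36b99d5bc25f665724590add5b94bb0e47a655f59548b68c497d303
size 7792717856
"""
info = parse_lfs_pointer(pointer)
print(int(info["size"]) / 1e9)  # ≈ 7.79 GB of model data behind this stub
```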


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d1d256c83d18edcbc763c08571c3233f96741e3274fc9e5b175053d905e7aa73
size 7438823456


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f7ea78c6dcc7a3b99020ad10f0d41fb8be641ef10a64232acf27cb98c119510f
size 9397297184


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:647e0ac7c61396deb9423eac239fd9b5b6e384adb41036fbc449c444e8567ecf
size 8925437984


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d641fe92930cd8eb7c1091ac41f38d26521f5b7d0c1a86a8d2b80067c677e803
size 8864886816


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:4b64ad7057fdd5b9925cfa5495d35045b86b008336c2957ce204be464bcf3700
size 8382541856


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:0dc7e7b49005b3f4720f160569aa66c099ce495f6f88f72e912cb8e1abb96d88
size 11201024032


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:25f48caea4b44d4ebe1c790cf0e212344d4263b74bc1d6115ee3ebbcf267f2fc
size 11021717536


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:715c89bc6dcf8babe3670e249a026fe1defff152ad371d77f64c44bb0805b49c
size 10483011616


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c287d8e37f7668297cf907186209f9775f89196bf80a8bc138153c3994a2954f
size 10238779424


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:60eb9f7c0488426fa6be7b8736065a8bf52c6783c76d669793581c7f8e7d85ef
size 12636782624


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:27c23080d5c9284ce1e1c7339fe4b6f33fbb5d9e486ff1a66fb73d02c278680f
size 9474809888


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:830fddf57cf0ec1b4db907389e9f49058f9cd804add19d301e3b2c6a35b08b82
size 9496829984


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c01e2e57d3d287db78f86c171d499fee1cbb880c2870aa97709232577d792cdd
size 12547395616


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:53797286f7cbc5915847abac5f013510359e42a125d3ce6c87557ef80a87c06e
size 12039360544


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:0b8ec1ec5af8c460c81a7ee525e347301318e1e445ac4a43c5a93244c4b00288
size 11021717536


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7ba107baa73bc6d7e5f4a07e1b0f6d676c3312dc7c0247ce05bb1899ef30f034
size 13178814496


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:bd12e943825adbdf3047e3a7f88d13456c7a5f022dbd0486e5c867537e6a5741
size 14578499616


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:3b0f7bd35d567cd7b539293cc3ea0f823a1d241c413f34832b7bdc653b17dd8e
size 15431417888


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:4555fdd5da844fd8043e6e62a1c49f7bd55c117c21cc1d0dbf97451dac6e0b11
size 14088978464


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ac4b27ecb9f01ab64acb09487e38c6c1c19165d2820b45ba01e5527224bd3425
size 17623662624


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a485c23b77cf187357bdd3a206a24fd172b6d17e5aa5f82f89e6223c1888214a
size 16474161184


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:2cde7dea3a6a9d44f94ccd85ea86a72019177a5984bfe269aefc4d1ab9674b38
size 20840607776

README.md Normal file (+78 lines)

@@ -0,0 +1,78 @@
---
base_model: Kwaipilot/KwaiCoder-23B-A4B-v1
language:
- multilingual
library_name: transformers
license: mit
quantized_by: mradermacher
tags:
- code-generation
- transformers
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Kwaipilot/KwaiCoder-23B-A4B-v1
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/KwaiCoder-23B-A4B-v1-GGUF

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
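
For the multi-part case mentioned above, a minimal sketch, assuming a hypothetical `*.partNofM` naming scheme in which lexicographic order matches part order (the actual split convention for a given upload may differ):

```python
from pathlib import Path

def join_gguf_parts(parts: list[str], out_path: str) -> None:
    """Concatenate split GGUF part files into a single model file.

    The parts are raw byte slices of the original file, so plain
    concatenation in part order suffices; this is equivalent to
    `cat part1 part2 > model.gguf`. Sorting works for names like
    model.gguf.part1of2 with single-digit part counts.
    """
    with open(out_path, "wb") as out:
        for part in sorted(parts):
            out.write(Path(part).read_bytes())
```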

## Provided Quants

(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/KwaiCoder-23B-A4B-v1-i1-GGUF/resolve/main/KwaiCoder-23B-A4B-v1.i1-IQ1_S.gguf) | i1-IQ1_S | 7.5 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/KwaiCoder-23B-A4B-v1-i1-GGUF/resolve/main/KwaiCoder-23B-A4B-v1.i1-IQ1_M.gguf) | i1-IQ1_M | 7.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/KwaiCoder-23B-A4B-v1-i1-GGUF/resolve/main/KwaiCoder-23B-A4B-v1.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 8.5 | |
| [GGUF](https://huggingface.co/mradermacher/KwaiCoder-23B-A4B-v1-i1-GGUF/resolve/main/KwaiCoder-23B-A4B-v1.i1-IQ2_XS.gguf) | i1-IQ2_XS | 9.0 | |
| [GGUF](https://huggingface.co/mradermacher/KwaiCoder-23B-A4B-v1-i1-GGUF/resolve/main/KwaiCoder-23B-A4B-v1.i1-IQ2_S.gguf) | i1-IQ2_S | 9.0 | |
| [GGUF](https://huggingface.co/mradermacher/KwaiCoder-23B-A4B-v1-i1-GGUF/resolve/main/KwaiCoder-23B-A4B-v1.i1-IQ2_M.gguf) | i1-IQ2_M | 9.5 | |
| [GGUF](https://huggingface.co/mradermacher/KwaiCoder-23B-A4B-v1-i1-GGUF/resolve/main/KwaiCoder-23B-A4B-v1.i1-Q2_K.gguf) | i1-Q2_K | 9.6 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/KwaiCoder-23B-A4B-v1-i1-GGUF/resolve/main/KwaiCoder-23B-A4B-v1.i1-Q2_K_S.gguf) | i1-Q2_K_S | 9.6 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/KwaiCoder-23B-A4B-v1-i1-GGUF/resolve/main/KwaiCoder-23B-A4B-v1.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 10.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/KwaiCoder-23B-A4B-v1-i1-GGUF/resolve/main/KwaiCoder-23B-A4B-v1.i1-IQ3_XS.gguf) | i1-IQ3_XS | 10.6 | |
| [GGUF](https://huggingface.co/mradermacher/KwaiCoder-23B-A4B-v1-i1-GGUF/resolve/main/KwaiCoder-23B-A4B-v1.i1-IQ3_S.gguf) | i1-IQ3_S | 11.1 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/KwaiCoder-23B-A4B-v1-i1-GGUF/resolve/main/KwaiCoder-23B-A4B-v1.i1-Q3_K_S.gguf) | i1-Q3_K_S | 11.1 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/KwaiCoder-23B-A4B-v1-i1-GGUF/resolve/main/KwaiCoder-23B-A4B-v1.i1-IQ3_M.gguf) | i1-IQ3_M | 11.3 | |
| [GGUF](https://huggingface.co/mradermacher/KwaiCoder-23B-A4B-v1-i1-GGUF/resolve/main/KwaiCoder-23B-A4B-v1.i1-Q3_K_M.gguf) | i1-Q3_K_M | 12.1 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/KwaiCoder-23B-A4B-v1-i1-GGUF/resolve/main/KwaiCoder-23B-A4B-v1.i1-Q3_K_L.gguf) | i1-Q3_K_L | 12.6 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/KwaiCoder-23B-A4B-v1-i1-GGUF/resolve/main/KwaiCoder-23B-A4B-v1.i1-IQ4_XS.gguf) | i1-IQ4_XS | 12.7 | |
| [GGUF](https://huggingface.co/mradermacher/KwaiCoder-23B-A4B-v1-i1-GGUF/resolve/main/KwaiCoder-23B-A4B-v1.i1-Q4_0.gguf) | i1-Q4_0 | 13.3 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/KwaiCoder-23B-A4B-v1-i1-GGUF/resolve/main/KwaiCoder-23B-A4B-v1.i1-Q4_K_S.gguf) | i1-Q4_K_S | 14.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/KwaiCoder-23B-A4B-v1-i1-GGUF/resolve/main/KwaiCoder-23B-A4B-v1.i1-Q4_1.gguf) | i1-Q4_1 | 14.7 | |
| [GGUF](https://huggingface.co/mradermacher/KwaiCoder-23B-A4B-v1-i1-GGUF/resolve/main/KwaiCoder-23B-A4B-v1.i1-Q4_K_M.gguf) | i1-Q4_K_M | 15.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/KwaiCoder-23B-A4B-v1-i1-GGUF/resolve/main/KwaiCoder-23B-A4B-v1.i1-Q5_K_S.gguf) | i1-Q5_K_S | 16.6 | |
| [GGUF](https://huggingface.co/mradermacher/KwaiCoder-23B-A4B-v1-i1-GGUF/resolve/main/KwaiCoder-23B-A4B-v1.i1-Q5_K_M.gguf) | i1-Q5_K_M | 17.7 | |
| [GGUF](https://huggingface.co/mradermacher/KwaiCoder-23B-A4B-v1-i1-GGUF/resolve/main/KwaiCoder-23B-A4B-v1.i1-Q6_K.gguf) | i1-Q6_K | 20.9 | practically like static Q6_K |

Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for answers to
questions you might have, or if you want another model quantized.

## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and for providing upgrades to my workstation, enabling
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than would otherwise be possible.
<!-- end -->

imatrix.dat Normal file (+3 lines)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:df0f394a66779c63425a323b8eeb16ccd1cab14b53d86e26c04ca2936135a427
size 50632235