Initialize project; model provided by the ModelHub XC community

Model: mradermacher/prism-coder-14b-i1-GGUF
Source: Original Platform
commit c18cd20edd
Author: ModelHub XC
Date: 2026-05-07 16:45:35 +08:00
27 changed files with 241 additions and 0 deletions

.gitattributes

@@ -0,0 +1,60 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
prism-coder-14b.imatrix.gguf filter=lfs diff=lfs merge=lfs -text
prism-coder-14b.i1-Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
prism-coder-14b.i1-IQ3_M.gguf filter=lfs diff=lfs merge=lfs -text
prism-coder-14b.i1-Q4_K_S.gguf filter=lfs diff=lfs merge=lfs -text
prism-coder-14b.i1-IQ3_XXS.gguf filter=lfs diff=lfs merge=lfs -text
prism-coder-14b.i1-Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
prism-coder-14b.i1-IQ4_NL.gguf filter=lfs diff=lfs merge=lfs -text
prism-coder-14b.i1-Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
prism-coder-14b.i1-IQ2_M.gguf filter=lfs diff=lfs merge=lfs -text
prism-coder-14b.i1-Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
prism-coder-14b.i1-IQ4_XS.gguf filter=lfs diff=lfs merge=lfs -text
prism-coder-14b.i1-Q2_K_S.gguf filter=lfs diff=lfs merge=lfs -text
prism-coder-14b.i1-IQ1_M.gguf filter=lfs diff=lfs merge=lfs -text
prism-coder-14b.i1-Q3_K_S.gguf filter=lfs diff=lfs merge=lfs -text
prism-coder-14b.i1-IQ2_XXS.gguf filter=lfs diff=lfs merge=lfs -text
prism-coder-14b.i1-Q3_K_L.gguf filter=lfs diff=lfs merge=lfs -text
prism-coder-14b.i1-IQ2_XS.gguf filter=lfs diff=lfs merge=lfs -text
prism-coder-14b.i1-Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
prism-coder-14b.i1-IQ2_S.gguf filter=lfs diff=lfs merge=lfs -text
prism-coder-14b.i1-IQ1_S.gguf filter=lfs diff=lfs merge=lfs -text
prism-coder-14b.i1-Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
prism-coder-14b.i1-Q4_0.gguf filter=lfs diff=lfs merge=lfs -text
prism-coder-14b.i1-IQ3_XS.gguf filter=lfs diff=lfs merge=lfs -text
prism-coder-14b.i1-Q4_1.gguf filter=lfs diff=lfs merge=lfs -text
prism-coder-14b.i1-IQ3_S.gguf filter=lfs diff=lfs merge=lfs -text

README.md

@@ -0,0 +1,106 @@
---
base_model: dcostenco/prism-coder-14b
language:
- en
- es
- fr
- pt
- de
- zh
- ja
- ko
- ru
- ar
- ro
- uk
library_name: transformers
license: apache-2.0
mradermacher:
  readme_rev: 1
quantized_by: mradermacher
tags:
- qwen2
- function-calling
- tool-use
- aac
- accessibility
- prism
- synalux
- bfcl
- conversational
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
weighted/imatrix quants of https://huggingface.co/dcostenco/prism-coder-14b
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#prism-coder-14b-i1-GGUF).***
static quants are available at https://huggingface.co/mradermacher/prism-coder-14b-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files.
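When a quant is too large for a single Hub file, it is typically uploaded as numbered parts that must be joined back into one GGUF before loading. A minimal sketch of that step (the `.part1of2`-style naming is an assumption for illustration; check the actual filenames in the repo you download from):

```python
# Sketch: join multi-part GGUF downloads back into a single file.
# Assumes part names sort lexicographically into the right order
# (e.g. model.gguf.part1of2, model.gguf.part2of2) -- an assumption,
# not a guarantee of any particular repo's naming.
import shutil
from pathlib import Path

def concat_parts(parts: list[str], out_path: str) -> int:
    """Concatenate part files in sorted order; return total bytes written."""
    total = 0
    with open(out_path, "wb") as out:
        for part in sorted(parts):
            with open(part, "rb") as f:
                shutil.copyfileobj(f, out)
            total += Path(part).stat().st_size
    return total
```

This is the same effect as `cat part1 part2 > whole` on Unix, just portable and explicit about ordering.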
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/prism-coder-14b-i1-GGUF/resolve/main/prism-coder-14b.imatrix.gguf) | imatrix | 0.1 | imatrix file (for creating your own quants) |
| [GGUF](https://huggingface.co/mradermacher/prism-coder-14b-i1-GGUF/resolve/main/prism-coder-14b.i1-IQ1_S.gguf) | i1-IQ1_S | 3.7 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/prism-coder-14b-i1-GGUF/resolve/main/prism-coder-14b.i1-IQ1_M.gguf) | i1-IQ1_M | 4.0 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/prism-coder-14b-i1-GGUF/resolve/main/prism-coder-14b.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/prism-coder-14b-i1-GGUF/resolve/main/prism-coder-14b.i1-IQ2_XS.gguf) | i1-IQ2_XS | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/prism-coder-14b-i1-GGUF/resolve/main/prism-coder-14b.i1-IQ2_S.gguf) | i1-IQ2_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/prism-coder-14b-i1-GGUF/resolve/main/prism-coder-14b.i1-IQ2_M.gguf) | i1-IQ2_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/prism-coder-14b-i1-GGUF/resolve/main/prism-coder-14b.i1-Q2_K_S.gguf) | i1-Q2_K_S | 5.5 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/prism-coder-14b-i1-GGUF/resolve/main/prism-coder-14b.i1-Q2_K.gguf) | i1-Q2_K | 5.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/prism-coder-14b-i1-GGUF/resolve/main/prism-coder-14b.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 6.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/prism-coder-14b-i1-GGUF/resolve/main/prism-coder-14b.i1-IQ3_XS.gguf) | i1-IQ3_XS | 6.5 | |
| [GGUF](https://huggingface.co/mradermacher/prism-coder-14b-i1-GGUF/resolve/main/prism-coder-14b.i1-Q3_K_S.gguf) | i1-Q3_K_S | 6.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/prism-coder-14b-i1-GGUF/resolve/main/prism-coder-14b.i1-IQ3_S.gguf) | i1-IQ3_S | 6.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/prism-coder-14b-i1-GGUF/resolve/main/prism-coder-14b.i1-IQ3_M.gguf) | i1-IQ3_M | 7.0 | |
| [GGUF](https://huggingface.co/mradermacher/prism-coder-14b-i1-GGUF/resolve/main/prism-coder-14b.i1-Q3_K_M.gguf) | i1-Q3_K_M | 7.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/prism-coder-14b-i1-GGUF/resolve/main/prism-coder-14b.i1-Q3_K_L.gguf) | i1-Q3_K_L | 8.0 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/prism-coder-14b-i1-GGUF/resolve/main/prism-coder-14b.i1-IQ4_XS.gguf) | i1-IQ4_XS | 8.2 | |
| [GGUF](https://huggingface.co/mradermacher/prism-coder-14b-i1-GGUF/resolve/main/prism-coder-14b.i1-Q4_0.gguf) | i1-Q4_0 | 8.6 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/prism-coder-14b-i1-GGUF/resolve/main/prism-coder-14b.i1-IQ4_NL.gguf) | i1-IQ4_NL | 8.6 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/prism-coder-14b-i1-GGUF/resolve/main/prism-coder-14b.i1-Q4_K_S.gguf) | i1-Q4_K_S | 8.7 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/prism-coder-14b-i1-GGUF/resolve/main/prism-coder-14b.i1-Q4_K_M.gguf) | i1-Q4_K_M | 9.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/prism-coder-14b-i1-GGUF/resolve/main/prism-coder-14b.i1-Q4_1.gguf) | i1-Q4_1 | 9.5 | |
| [GGUF](https://huggingface.co/mradermacher/prism-coder-14b-i1-GGUF/resolve/main/prism-coder-14b.i1-Q5_K_S.gguf) | i1-Q5_K_S | 10.4 | |
| [GGUF](https://huggingface.co/mradermacher/prism-coder-14b-i1-GGUF/resolve/main/prism-coder-14b.i1-Q5_K_M.gguf) | i1-Q5_K_M | 10.6 | |
| [GGUF](https://huggingface.co/mradermacher/prism-coder-14b-i1-GGUF/resolve/main/prism-coder-14b.i1-Q6_K.gguf) | i1-Q6_K | 12.2 | practically like static Q6_K |
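Every link in the table follows the same `resolve/main` pattern, so the direct download URL for any quant can be built from its type alone; a small illustrative helper:

```python
# Build the direct download URL for a quant type listed in the table above.
# Repo id and basename are taken from this README; the helper itself is
# just a convenience sketch, not part of any tool.
REPO_ID = "mradermacher/prism-coder-14b-i1-GGUF"
BASENAME = "prism-coder-14b"

def quant_url(quant_type: str) -> str:
    """Return the huggingface.co resolve URL for e.g. 'i1-Q4_K_M'."""
    return (f"https://huggingface.co/{REPO_ID}"
            f"/resolve/main/{BASENAME}.{quant_type}.gguf")
```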
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)
And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:05b7b961a87b73ca30d3cd132a6e371e7f14b063b1e6bfb842724ae437af3cc7
size 3872309984
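Each blob in this commit is stored as a Git LFS pointer file like the one above: three lines giving the spec version, a sha256 object id, and the byte size. A downloaded file can be checked against its pointer with a short sketch like this (the `verify` helper is illustrative, not part of any tool here):

```python
# Parse a Git LFS pointer and verify a local file against it.
import hashlib
import os

def parse_pointer(text: str) -> tuple[str, int]:
    """Return (sha256 hex digest, size in bytes) from pointer text."""
    fields = dict(line.split(" ", 1) for line in text.strip().splitlines())
    algo, digest = fields["oid"].split(":", 1)
    assert algo == "sha256"
    return digest, int(fields["size"])

def verify(path: str, pointer_text: str) -> bool:
    """Check both byte size and sha256 digest of the file at `path`."""
    digest, size = parse_pointer(pointer_text)
    if os.path.getsize(path) != size:
        return False
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest() == digest
```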


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e25f7004b931b645032918d552d7ee58f3d669e7ab7f82c669aab2b3be446133
size 3607995104


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:034465a291d4b195036f3fbbe1e52c1630f4757e2edc56109c2bcf560a9e9d69
size 5356147424


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:4ac92414feff35c741fe0d1cbdfbcfe464de2b1787bbe0cdeff2fbf1c7a18511
size 5003727584


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:fdc4ca85d3e1086b683bfa730d027edc516cf987e2ace663707029d3854a9aa9
size 4704576224


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:4fd8caa816f142a200ccffa2ede279211a4bf2d5d726c6a2e42b3df1f4f52e86
size 4312834784


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:db62bf279e6739d2f8275b581e0eeafce8d1d6981424d47cf05c6f0785f60fc6
size 6916539104


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e4f6dbcebf0b5b4e67f57fa4730988c7f3f15a6b7378212e22ddd361c23716f8
size 6693020384


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c017342572bfc6ce89340b4dec3c42d70d15c3ba16eda3002ca6a9780104b271
size 6383362784


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:cb45ca0306eeccefd2894e2b9ed27775480fdff5495f58385b91ecab1cdc6335
size 5946708704


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8ba3e19e1c048655096e170936ee03536221e08c8d8bf36244b0025b10a6ff18
size 8549184224


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:33a910239d8dcb43e20b69aded6da7439cc90fe9d67f9334b41196ba6d0b344d
size 8119841504


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:6b987fc78db5840c9f49f595d8a5d2646ae4bd637484c55ebc48d5dfaeed83c1
size 5770498784


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:1a6daee924762b6378a80011ace6be56621c329c2a7bbc653111c1bd8aaaebf2
size 5397189344


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:62e14cf49b5ebbf435b16a2c651daad8de87e802bd8c56a1f1444d7f551bdca4
size 7924769504


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:3d10ebf6b01ff43c0059ebe97eb404a2ad75d46459525bd044713d8268983407
size 7339205344


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7b254fd9e13a8f1fd7a51dae487506249bb624c29fd25796803c9f16dca25611
size 6659597024


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:6e532ca1763d645a2647984df8c458001166513ecb9ad25681077c41d43d1a1c
size 8544269024


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:5985a35fc7ea85c3e190ab68988d133440b62f0ab45e675a3e4e8f3f2144ebdc
size 9392141024


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d3dce959863616cebaa48c25d51d40307125cfcbe8fcf51ff4cd76bd8023718c
size 8988111584


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:91134efe77f5425d02bdeca7fc61f4d3c42b22174d2b24aab00b0239dc41b5e2
size 8573432544


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:50cfd407b4f08018f7bda1c3ca69dc6635d0ceb8c57b4a3e8a1eb9900aa7b2a2
size 10508874464


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:2eaceb50c72ab5df0e8793025a4e12ad256f02f681379cfffa6944cd14a8f0ef
size 10266555104


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:4e5949906de438f0ab0f093f5807adb7af3876afff19e360df4822cd106d2397
size 12124685024


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:1f5c8bf95a86f870293d1da5dc03f33cbdaa0fc5ecd945a074c2d23256444a77
size 8604128