---
base_model: pints-ai/1.5-Pints-16K-v0.1
datasets:
extra_gated_fields:
extra_gated_prompt: >-
  Though best efforts have been made to ensure, as much as possible, that all
  texts in the training corpora are royalty free, this does not constitute a
  legal guarantee that such is the case. By using any of the models, corpora
  or part thereof, the user agrees to bear full responsibility to do the
  necessary due diligence to ensure that he / she is in compliance with their
  local copyright laws. Additionally, the user agrees to bear any damages
  arising as a direct cause (or otherwise) of using any artifacts released by
  the Pints research team, as well as full responsibility for the consequences
  of his / her usage (or implementation) of any such released artifacts. The
  user also indemnifies the Pints Research Team (and any of its members or
  agents) of any damage, related or unrelated, to the release or subsequent
  usage of any findings, artifacts or code by the team. For the avoidance of
  doubt, any artifacts released by the Pints Research team are released in
  accordance with the 'fair use' clause of Copyright Law, in hopes that this
  will aid the research community in bringing LLMs to the next frontier.
language:
library_name: transformers
license: mit
quantized_by: mradermacher
---
## About
weighted/imatrix quants of https://huggingface.co/pints-ai/1.5-Pints-16K-v0.1
static quants are available at https://huggingface.co/mradermacher/1.5-Pints-16K-v0.1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of TheBloke's READMEs for more details, including how to concatenate multi-part files.
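For a concrete starting point, here is a minimal Python sketch using huggingface_hub and llama-cpp-python. The repo id and the exact quant filename below are assumptions inferred from the naming of the static-quant repo linked above; check this repository's file list for the real names.

```python
# Minimal sketch: download one quant from this repo and run it locally.
# Requires: pip install huggingface_hub llama-cpp-python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="mradermacher/1.5-Pints-16K-v0.1-i1-GGUF",  # assumed repo id
    filename="1.5-Pints-16K-v0.1.i1-Q4_K_M.gguf",       # assumed filename
)

# n_ctx can be raised toward the model's 16K context if RAM allows.
llm = Llama(model_path=model_path, n_ctx=4096)

out = llm("Q: What is an imatrix quant?\nA:", max_tokens=64)
print(out["choices"][0]["text"])
```

Multi-part quants (not needed for a model this small) are plain byte-concatenations: write the parts, in order, into a single .gguf file before loading it.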
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants. A sketch for picking a quant by available memory follows the table.)
| Link | Type | Size/GB | Notes |
|---|---|---|---|
| GGUF | i1-IQ1_S | 0.5 | for the desperate |
| GGUF | i1-IQ1_M | 0.5 | mostly desperate |
| GGUF | i1-IQ2_XXS | 0.5 | |
| GGUF | i1-IQ2_XS | 0.6 | |
| GGUF | i1-IQ2_S | 0.6 | |
| GGUF | i1-IQ2_M | 0.7 | |
| GGUF | i1-Q2_K_S | 0.7 | very low quality |
| GGUF | i1-Q2_K | 0.7 | IQ3_XXS probably better |
| GGUF | i1-IQ3_XXS | 0.7 | lower quality |
| GGUF | i1-IQ3_XS | 0.8 | |
| GGUF | i1-Q3_K_S | 0.8 | IQ3_XS probably better |
| GGUF | i1-IQ3_S | 0.8 | beats Q3_K* |
| GGUF | i1-IQ3_M | 0.8 | |
| GGUF | i1-Q3_K_M | 0.9 | IQ3_S probably better |
| GGUF | i1-Q3_K_L | 0.9 | IQ3_M probably better |
| GGUF | i1-IQ4_XS | 1.0 | |
| GGUF | i1-IQ4_NL | 1.0 | prefer IQ4_XS |
| GGUF | i1-Q4_0 | 1.0 | fast, low quality |
| GGUF | i1-Q4_K_S | 1.0 | optimal size/speed/quality |
| GGUF | i1-Q4_K_M | 1.1 | fast, recommended |
| GGUF | i1-Q4_1 | 1.1 | |
| GGUF | i1-Q5_K_S | 1.2 | |
| GGUF | i1-Q5_K_M | 1.2 | |
| GGUF | i1-Q6_K | 1.4 | practically like static Q6_K |
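As a rough illustration of how to read the table, here is a hypothetical Python helper that picks the largest quant fitting a given memory budget. The (type, size) pairs are copied from the table above; the "largest that fits" rule is this sketch's own heuristic, not a recommendation from this README.

```python
# Hypothetical helper: choose a quant type from the table by memory budget.
# Sizes are the approximate file sizes in GB listed above; actual RAM use
# is somewhat higher once the context buffer is allocated.
QUANTS = [  # (type, size in GB), in the table's size order
    ("i1-IQ1_S", 0.5), ("i1-IQ1_M", 0.5), ("i1-IQ2_XXS", 0.5),
    ("i1-IQ2_XS", 0.6), ("i1-IQ2_S", 0.6), ("i1-IQ2_M", 0.7),
    ("i1-Q2_K_S", 0.7), ("i1-Q2_K", 0.7), ("i1-IQ3_XXS", 0.7),
    ("i1-IQ3_XS", 0.8), ("i1-Q3_K_S", 0.8), ("i1-IQ3_S", 0.8),
    ("i1-IQ3_M", 0.8), ("i1-Q3_K_M", 0.9), ("i1-Q3_K_L", 0.9),
    ("i1-IQ4_XS", 1.0), ("i1-IQ4_NL", 1.0), ("i1-Q4_0", 1.0),
    ("i1-Q4_K_S", 1.0), ("i1-Q4_K_M", 1.1), ("i1-Q4_1", 1.1),
    ("i1-Q5_K_S", 1.2), ("i1-Q5_K_M", 1.2), ("i1-Q6_K", 1.4),
]

def pick_quant(budget_gb: float) -> str:
    """Return the largest listed quant that fits, else the smallest."""
    fitting = [name for name, size in QUANTS if size <= budget_gb]
    return fitting[-1] if fitting else QUANTS[0][0]

print(pick_quant(1.0))  # -> i1-Q4_K_S ("optimal size/speed/quality")
print(pick_quant(2.0))  # -> i1-Q6_K
```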
ikawrakow has published a handy graph comparing some lower-quality quant types (lower is better).
And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, nethype GmbH, for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to @nicoboss for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
