Initialize the project; model provided by the ModelHub XC community

Model: mradermacher/1.5-Pints-16K-v0.1-i1-GGUF
Source: Original Platform
Commit f7a463d7e2 by ModelHub XC, 2026-04-11 08:25:58 +08:00
26 changed files with 243 additions and 0 deletions

.gitattributes (vendored, new file, 59 lines)
@@ -0,0 +1,59 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
1.5-Pints-16K-v0.1.i1-IQ1_M.gguf filter=lfs diff=lfs merge=lfs -text
1.5-Pints-16K-v0.1.i1-IQ1_S.gguf filter=lfs diff=lfs merge=lfs -text
1.5-Pints-16K-v0.1.i1-IQ2_M.gguf filter=lfs diff=lfs merge=lfs -text
1.5-Pints-16K-v0.1.i1-IQ2_S.gguf filter=lfs diff=lfs merge=lfs -text
1.5-Pints-16K-v0.1.i1-IQ2_XS.gguf filter=lfs diff=lfs merge=lfs -text
1.5-Pints-16K-v0.1.i1-IQ2_XXS.gguf filter=lfs diff=lfs merge=lfs -text
1.5-Pints-16K-v0.1.i1-IQ3_M.gguf filter=lfs diff=lfs merge=lfs -text
1.5-Pints-16K-v0.1.i1-IQ3_S.gguf filter=lfs diff=lfs merge=lfs -text
1.5-Pints-16K-v0.1.i1-IQ3_XS.gguf filter=lfs diff=lfs merge=lfs -text
1.5-Pints-16K-v0.1.i1-IQ3_XXS.gguf filter=lfs diff=lfs merge=lfs -text
1.5-Pints-16K-v0.1.i1-IQ4_NL.gguf filter=lfs diff=lfs merge=lfs -text
1.5-Pints-16K-v0.1.i1-IQ4_XS.gguf filter=lfs diff=lfs merge=lfs -text
1.5-Pints-16K-v0.1.i1-Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
1.5-Pints-16K-v0.1.i1-Q2_K_S.gguf filter=lfs diff=lfs merge=lfs -text
1.5-Pints-16K-v0.1.i1-Q3_K_L.gguf filter=lfs diff=lfs merge=lfs -text
1.5-Pints-16K-v0.1.i1-Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
1.5-Pints-16K-v0.1.i1-Q3_K_S.gguf filter=lfs diff=lfs merge=lfs -text
1.5-Pints-16K-v0.1.i1-Q4_0.gguf filter=lfs diff=lfs merge=lfs -text
1.5-Pints-16K-v0.1.i1-Q4_1.gguf filter=lfs diff=lfs merge=lfs -text
1.5-Pints-16K-v0.1.i1-Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
1.5-Pints-16K-v0.1.i1-Q4_K_S.gguf filter=lfs diff=lfs merge=lfs -text
1.5-Pints-16K-v0.1.i1-Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
1.5-Pints-16K-v0.1.i1-Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
1.5-Pints-16K-v0.1.i1-Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
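Each line above maps a glob pattern to the Git LFS filter, so matching files are stored as small pointer files instead of raw blobs. As a rough sketch of how such patterns route files (note: gitattributes glob syntax is close to, but not identical to, Python's `fnmatch` — e.g. `**` and path-component matching differ — so this is only an approximation over a hand-picked subset of the patterns):

```python
from fnmatch import fnmatch

# Hand-picked subset of the LFS-tracked patterns from this .gitattributes.
LFS_PATTERNS = [
    "*.bin",
    "*.safetensors",
    "*.zip",
    "1.5-Pints-16K-v0.1.i1-Q6_K.gguf",
]

def tracked_by_lfs(path: str) -> bool:
    """Return True if the file's basename matches any LFS-tracked pattern."""
    name = path.rsplit("/", 1)[-1]
    return any(fnmatch(name, pattern) for pattern in LFS_PATTERNS)
```

For example, `tracked_by_lfs("1.5-Pints-16K-v0.1.i1-Q6_K.gguf")` is true, while a plain `config.json` would be committed as a normal Git blob.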

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:6c81e587d9c812b20aa521dfaff93fb43054baba79bc23d41da668df314bdae3
size 395379648
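The three-line blocks in this diff are Git LFS pointer files: the actual GGUF blobs live on the LFS server, and the repo stores only a spec version, a `sha256` object ID, and the byte size. A minimal sketch of reading one of these pointers (using the pointer above as input):

```python
def parse_lfs_pointer(text: str) -> dict:
    """Split a Git LFS pointer file into its space-separated key/value fields."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:6c81e587d9c812b20aa521dfaff93fb43054baba79bc23d41da668df314bdae3
size 395379648
"""
info = parse_lfs_pointer(pointer)
algo, _, digest = info["oid"].partition(":")
size_mib = int(info["size"]) / 2**20  # about 377 MiB
```

After `git lfs pull`, the downloaded file's SHA-256 should equal `digest` and its length should equal `size`.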

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:3aeafa29e281e1270ac40cbc2e9e452d1c6591a360931f40600c70fcde376c44
size 365593536

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:1ee192faba5be30d44c10bd618a2861ddf93d2e18968ca0eaac8afd5a91044eb
size 553535424

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:eed242ac33945e624e963333104d09dcce2f84e438d54239eca76ae9a81ce295
size 513820608

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e923f5a8275d2d217bbfdfc98883bc92b882aab34abc4919c8682207404c212c
size 487883712

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c15f9c8173b581a6a4aecf6f0de23be85cd8da32f939d9289440a429f4a6ed76
size 445023168

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:114c90fb9e1facc1292b6075c195377765c43fa653fd4026c31235037054c3a6
size 721312704

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:79955caa5ec5fe6130f5d4d3165d63b6d6c04b2bde5c2d1f3df2d91e69207f6b
size 701258688

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ce9eeed1156d08917746d1192489e075cf6e0ba4f303ebf5842e4d4469f60ca9
size 667638720

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e057cb03d6a468d0fc6c44910470d95a78a09dc3bd0f4f73f50036cc1e7d0191
size 623527872

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:225d5b4548ad96f2bbe7f5d6f65d9bfd3bbaa9edb4686f75e8fcfa2439a449b5
size 900394944

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:cfd3c5ea7d4e9688aa730fa75542bfc505da4785320f5f4b10f13d788e260d64
size 853909440

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e2e0475ad0e33e4456ad1143bad6e6860e86c6eb3657018fb027a4aaf6fc6412
size 601298880

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a56cdd2040456b3c91975341e7c2a8026280506ee1518bbff19d69a6b0c7b20e
size 561977280

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:87596cb695d47bf10db7d055a6c33f5e3e9657e4441bf034719d45a0aa42dc02
size 832592832

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:5bd762dc94f3279e04cbf47582616e8da9093f10d16323926bddb254a93b414e
size 770333632

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a2038612306764213c2579ae9c9317022b14a8e44d80f47ecc617dd4822547c6
size 699587520

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a48dd3df0c9ab5c63ad4f07fd9da71d71d3bfebff023e854ac7be5848873feda
size 901967808

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:393992d05409f17d9645d968dad5b2d36c5bee390abc253df589d6401eed6c50
size 992579520

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:2ac49a3570b6dbc8845245d146bd650d7ff43c0beba0b9d547b4663a5e7af598
size 952348608

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:0c7157beafb27b279d7d6b1821c4aa2f6d1047b0027b719779aaddbe2541a8a9
size 905375680

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8e53491943a1725051d78a6d4ff05898cf773d25dbafe691c597e4a43d11989b
size 1113911232

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ac76c71a502374bfa9da48f399d2149f7f234d5918cb909cac583fef3faaa313
size 1086336960

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:0c6e44603d7d827b026d538ae0eb8dde9146fe27affeaa55edb090ee8b1f153d
size 1285571520

README.md (new file, 112 lines)
@@ -0,0 +1,112 @@
---
base_model: pints-ai/1.5-Pints-16K-v0.1
datasets:
- pints-ai/Expository-Prose-V1
- HuggingFaceH4/ultrachat_200k
- Open-Orca/SlimOrca-Dedup
- meta-math/MetaMathQA
- HuggingFaceH4/deita-10k-v0-sft
- WizardLM/WizardLM_evol_instruct_V2_196k
- togethercomputer/llama-instruct
- LDJnr/Capybara
- HuggingFaceH4/ultrafeedback_binarized
extra_gated_fields:
Company: text
Country: country
I agree to use this model in accordance with the aforementioned Terms of Use: checkbox
I want to use this model for:
options:
- Research
- Education
- label: Other
value: other
type: select
Specific date: date_picker
extra_gated_prompt: Though best efforts have been made to ensure, as much as possible,
that all texts in the training corpora are royalty free, this does not constitute
a legal guarantee that such is the case. **By using any of the models, corpora or
part thereof, the user agrees to bear full responsibility to do the necessary due
diligence to ensure that he / she is in compliance with their local copyright laws.
Additionally, the user agrees to bear any damages arising as a direct cause (or
otherwise) of using any artifacts released by the pints research team, as well as
full responsibility for the consequences of his / her usage (or implementation)
of any such released artifacts. The user also indemnifies Pints Research Team (and
any of its members or agents) of any damage, related or unrelated, to the release
or subsequent usage of any findings, artifacts or code by the team. For the avoidance
of doubt, any artifacts released by the Pints Research team are done so in accordance
with the 'fair use' clause of Copyright Law, in hopes that this will aid the research
community in bringing LLMs to the next frontier.
language:
- en
library_name: transformers
license: mit
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/pints-ai/1.5-Pints-16K-v0.1
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/1.5-Pints-16K-v0.1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
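All quants in this repo are single files, but very large quants are sometimes uploaded split into parts (the `part1of2` naming below is a hypothetical illustration). Rejoining them is plain byte concatenation, the same as `cat model.gguf.part* > model.gguf`:

```python
import shutil

def join_parts(part_paths, out_path):
    """Concatenate split GGUF parts, in the given order, into one file."""
    with open(out_path, "wb") as out:
        for part in part_paths:
            with open(part, "rb") as src:
                shutil.copyfileobj(src, out)

# join_parts(["model.gguf.part1of2", "model.gguf.part2of2"], "model.gguf")
```

Order matters: parts must be passed in `part1of2`, `part2of2`, ... sequence for the result to be a valid GGUF file.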
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/1.5-Pints-16K-v0.1-i1-GGUF/resolve/main/1.5-Pints-16K-v0.1.i1-IQ1_S.gguf) | i1-IQ1_S | 0.5 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/1.5-Pints-16K-v0.1-i1-GGUF/resolve/main/1.5-Pints-16K-v0.1.i1-IQ1_M.gguf) | i1-IQ1_M | 0.5 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/1.5-Pints-16K-v0.1-i1-GGUF/resolve/main/1.5-Pints-16K-v0.1.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/1.5-Pints-16K-v0.1-i1-GGUF/resolve/main/1.5-Pints-16K-v0.1.i1-IQ2_XS.gguf) | i1-IQ2_XS | 0.6 | |
| [GGUF](https://huggingface.co/mradermacher/1.5-Pints-16K-v0.1-i1-GGUF/resolve/main/1.5-Pints-16K-v0.1.i1-IQ2_S.gguf) | i1-IQ2_S | 0.6 | |
| [GGUF](https://huggingface.co/mradermacher/1.5-Pints-16K-v0.1-i1-GGUF/resolve/main/1.5-Pints-16K-v0.1.i1-IQ2_M.gguf) | i1-IQ2_M | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/1.5-Pints-16K-v0.1-i1-GGUF/resolve/main/1.5-Pints-16K-v0.1.i1-Q2_K_S.gguf) | i1-Q2_K_S | 0.7 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/1.5-Pints-16K-v0.1-i1-GGUF/resolve/main/1.5-Pints-16K-v0.1.i1-Q2_K.gguf) | i1-Q2_K | 0.7 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/1.5-Pints-16K-v0.1-i1-GGUF/resolve/main/1.5-Pints-16K-v0.1.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 0.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/1.5-Pints-16K-v0.1-i1-GGUF/resolve/main/1.5-Pints-16K-v0.1.i1-IQ3_XS.gguf) | i1-IQ3_XS | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/1.5-Pints-16K-v0.1-i1-GGUF/resolve/main/1.5-Pints-16K-v0.1.i1-Q3_K_S.gguf) | i1-Q3_K_S | 0.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/1.5-Pints-16K-v0.1-i1-GGUF/resolve/main/1.5-Pints-16K-v0.1.i1-IQ3_S.gguf) | i1-IQ3_S | 0.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/1.5-Pints-16K-v0.1-i1-GGUF/resolve/main/1.5-Pints-16K-v0.1.i1-IQ3_M.gguf) | i1-IQ3_M | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/1.5-Pints-16K-v0.1-i1-GGUF/resolve/main/1.5-Pints-16K-v0.1.i1-Q3_K_M.gguf) | i1-Q3_K_M | 0.9 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/1.5-Pints-16K-v0.1-i1-GGUF/resolve/main/1.5-Pints-16K-v0.1.i1-Q3_K_L.gguf) | i1-Q3_K_L | 0.9 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/1.5-Pints-16K-v0.1-i1-GGUF/resolve/main/1.5-Pints-16K-v0.1.i1-IQ4_XS.gguf) | i1-IQ4_XS | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/1.5-Pints-16K-v0.1-i1-GGUF/resolve/main/1.5-Pints-16K-v0.1.i1-IQ4_NL.gguf) | i1-IQ4_NL | 1.0 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/1.5-Pints-16K-v0.1-i1-GGUF/resolve/main/1.5-Pints-16K-v0.1.i1-Q4_0.gguf) | i1-Q4_0 | 1.0 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/1.5-Pints-16K-v0.1-i1-GGUF/resolve/main/1.5-Pints-16K-v0.1.i1-Q4_K_S.gguf) | i1-Q4_K_S | 1.0 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/1.5-Pints-16K-v0.1-i1-GGUF/resolve/main/1.5-Pints-16K-v0.1.i1-Q4_K_M.gguf) | i1-Q4_K_M | 1.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/1.5-Pints-16K-v0.1-i1-GGUF/resolve/main/1.5-Pints-16K-v0.1.i1-Q4_1.gguf) | i1-Q4_1 | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/1.5-Pints-16K-v0.1-i1-GGUF/resolve/main/1.5-Pints-16K-v0.1.i1-Q5_K_S.gguf) | i1-Q5_K_S | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/1.5-Pints-16K-v0.1-i1-GGUF/resolve/main/1.5-Pints-16K-v0.1.i1-Q5_K_M.gguf) | i1-Q5_K_M | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/1.5-Pints-16K-v0.1-i1-GGUF/resolve/main/1.5-Pints-16K-v0.1.i1-Q6_K.gguf) | i1-Q6_K | 1.4 | practically like static Q6_K |
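The `resolve/main` links in the table all follow one fixed pattern, so a direct-download URL for any listed quant can be assembled from the repo ID and the quant type alone:

```python
REPO = "mradermacher/1.5-Pints-16K-v0.1-i1-GGUF"

def quant_url(quant: str) -> str:
    """Build the direct-download URL for one of the quants in the table above."""
    return (
        f"https://huggingface.co/{REPO}/resolve/main/"
        f"1.5-Pints-16K-v0.1.i1-{quant}.gguf"
    )

# e.g. quant_url("Q4_K_M") reproduces the "fast, recommended" link above.
```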
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)
And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->