Initialize project; model provided by the ModelHub XC community

Model: mradermacher/1.5-Pints-2K-v0.1-GGUF
Source: Original Platform
This commit is contained in:
ModelHub XC
2026-05-05 08:10:26 +08:00
commit 9e39e200be
14 changed files with 183 additions and 0 deletions

.gitattributes vendored Normal file

@@ -0,0 +1,47 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
1.5-Pints-2K-v0.1.IQ4_XS.gguf filter=lfs diff=lfs merge=lfs -text
1.5-Pints-2K-v0.1.Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
1.5-Pints-2K-v0.1.Q3_K_L.gguf filter=lfs diff=lfs merge=lfs -text
1.5-Pints-2K-v0.1.Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
1.5-Pints-2K-v0.1.Q3_K_S.gguf filter=lfs diff=lfs merge=lfs -text
1.5-Pints-2K-v0.1.Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
1.5-Pints-2K-v0.1.Q4_K_S.gguf filter=lfs diff=lfs merge=lfs -text
1.5-Pints-2K-v0.1.Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
1.5-Pints-2K-v0.1.Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
1.5-Pints-2K-v0.1.Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
1.5-Pints-2K-v0.1.Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
1.5-Pints-2K-v0.1.f16.gguf filter=lfs diff=lfs merge=lfs -text
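Each line above maps a glob pattern (or an exact file name, for the per-quant `.gguf` entries) to the Git LFS filter, so matching files are committed as lightweight pointers instead of full blobs. As an illustrative sketch only (not Git's actual attribute-matching logic, which also honors path-relative rules), the matching can be approximated with `fnmatch`:

```python
from fnmatch import fnmatch

# A subset of the patterns from the .gitattributes above; exact file
# names such as the per-quant .gguf entries match only themselves.
LFS_PATTERNS = [
    "*.bin",
    "*.safetensors",
    "*.zip",
    "*tfevents*",
    "1.5-Pints-2K-v0.1.Q2_K.gguf",
]

def is_lfs_tracked(path: str) -> bool:
    """Return True if the file's base name matches any LFS pattern."""
    name = path.rsplit("/", 1)[-1]
    return any(fnmatch(name, pattern) for pattern in LFS_PATTERNS)
```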


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:54b6f08f1b100b3e78ec910a7d55d788fec34be9aaffd38d7ff632d1616b3678
size 861773504
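Every model file in this commit is stored as a Git LFS pointer with the same three-line shape: a spec-version URL, a `sha256` object id, and the payload size in bytes. A minimal parser for that format, fed the first pointer from this commit:

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a Git LFS pointer ('key value' lines) into a dict."""
    fields = dict(line.split(" ", 1) for line in text.strip().splitlines())
    algo, _, digest = fields["oid"].partition(":")
    return {
        "version": fields["version"],
        "oid_algo": algo,             # e.g. "sha256"
        "oid": digest,                # hex digest of the real file contents
        "size": int(fields["size"]),  # payload size in bytes
    }

# The first pointer file in this commit:
POINTER = """version https://git-lfs.github.com/spec/v1
oid sha256:54b6f08f1b100b3e78ec910a7d55d788fec34be9aaffd38d7ff632d1616b3678
size 861773504
"""
info = parse_lfs_pointer(POINTER)
```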


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:2ceea35c5755cd7c8e27d5f8dd0cc3317d447160976ca55fa173c12952c56b55
size 601298624


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f60c2dd6ff931e35e34b5a46ed4fed4a1634c075adf2a674905a632c65495676
size 832592576


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f2226dfdaa3b5276c5db207824b72e22c1287dd181e74540fd294fe81ca60862
size 770333376


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:df9ac526b48069e2c182e2aa5862bccf2649fd8685415deb3d46c7034f4f00a5
size 699587264


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:1f34d4aaaf1662e2fb16614a86b275f0caeec9998b46c13f4bd1c5e3d99464b3
size 952348352


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:1c8e6f1d8dfd14fbfd48b46433ee83a9827331f809d97287f999a24a84d528c3
size 905375424


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8f97bd811fe5a9b7e4d95518407c3b82349bc4bef0c135f618d4b265a331ef12
size 1113910976


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:3ccaa020925d5b4a8e815b41a77913959875df45909ac7bd85ccaeed4acc62b0
size 1086336704


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:00ec94d107b56481f569c1225926ccbf3f4f9cb40a037b1823a5fb874920dd89
size 1285571264


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a6514f4c3e2440110ea6934e1485820085373982d9b4c50b3926d692b5602fd8
size 1664785088


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:76d0a946d052134c0bfce2155c8a0f249f0f0614eafc9f18dd4e403a90062601
size 3132709568

README.md Normal file

@@ -0,0 +1,100 @@
---
base_model: pints-ai/1.5-Pints-2K-v0.1
datasets:
- pints-ai/Expository-Prose-V1
- HuggingFaceH4/ultrachat_200k
- Open-Orca/SlimOrca-Dedup
- meta-math/MetaMathQA
- HuggingFaceH4/deita-10k-v0-sft
- WizardLM/WizardLM_evol_instruct_V2_196k
- togethercomputer/llama-instruct
- LDJnr/Capybara
- HuggingFaceH4/ultrafeedback_binarized
extra_gated_fields:
Company: text
Country: country
I agree to use this model in accordance with the aforementioned Terms of Use: checkbox
I want to use this model for:
options:
- Research
- Education
- label: Other
value: other
type: select
Specific date: date_picker
extra_gated_prompt: Though best efforts have been made to ensure, as much as possible,
that all texts in the training corpora are royalty free, this does not constitute
a legal guarantee that such is the case. **By using any of the models, corpora or
part thereof, the user agrees to bear full responsibility to do the necessary due
diligence to ensure that he / she is in compliance with their local copyright laws.
Additionally, the user agrees to bear any damages arising as a direct consequence (or
otherwise) of using any artifacts released by the pints research team, as well as
full responsibility for the consequences of his / her usage (or implementation)
of any such released artifacts. The user also indemnifies the Pints Research Team (and
any of its members or agents) of any damage, related or unrelated, to the release
or subsequent usage of any findings, artifacts or code by the team. For the avoidance
of doubt, any artifacts released by the Pints Research team are done so in accordance
with the 'fair use' clause of Copyright Law, in hopes that this will aid the research
community in bringing LLMs to the next frontier.
language:
- en
library_name: transformers
license: mit
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/pints-ai/1.5-Pints-2K-v0.1
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/1.5-Pints-2K-v0.1-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
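When a quant exceeds the single-file upload limit and is split into parts, the parts just need to be concatenated byte-for-byte before use. A minimal sketch, assuming the part names sort lexicographically into the correct order (e.g. `model.gguf.part1of2`, `model.gguf.part2of2`; names like `part10of12` would need a numeric sort instead):

```python
import glob
import shutil

def concat_gguf_parts(pattern: str, output: str) -> int:
    """Concatenate split GGUF parts into a single file, in sorted order.

    Assumes lexicographic order is the correct order, which holds
    for naming schemes like .part1of2/.part2of2 with fewer than
    ten parts.
    """
    parts = sorted(glob.glob(pattern))
    if not parts:
        raise FileNotFoundError(f"no files match {pattern!r}")
    with open(output, "wb") as out:
        for part in parts:
            with open(part, "rb") as src:
                shutil.copyfileobj(src, out)
    return len(parts)
```

The quants in this repo are each a single file, so this only matters for larger models.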
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar-sized non-IQ quants.)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/1.5-Pints-2K-v0.1-GGUF/resolve/main/1.5-Pints-2K-v0.1.Q2_K.gguf) | Q2_K | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/1.5-Pints-2K-v0.1-GGUF/resolve/main/1.5-Pints-2K-v0.1.Q3_K_S.gguf) | Q3_K_S | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/1.5-Pints-2K-v0.1-GGUF/resolve/main/1.5-Pints-2K-v0.1.Q3_K_M.gguf) | Q3_K_M | 0.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/1.5-Pints-2K-v0.1-GGUF/resolve/main/1.5-Pints-2K-v0.1.Q3_K_L.gguf) | Q3_K_L | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/1.5-Pints-2K-v0.1-GGUF/resolve/main/1.5-Pints-2K-v0.1.IQ4_XS.gguf) | IQ4_XS | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/1.5-Pints-2K-v0.1-GGUF/resolve/main/1.5-Pints-2K-v0.1.Q4_K_S.gguf) | Q4_K_S | 1.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/1.5-Pints-2K-v0.1-GGUF/resolve/main/1.5-Pints-2K-v0.1.Q4_K_M.gguf) | Q4_K_M | 1.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/1.5-Pints-2K-v0.1-GGUF/resolve/main/1.5-Pints-2K-v0.1.Q5_K_S.gguf) | Q5_K_S | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/1.5-Pints-2K-v0.1-GGUF/resolve/main/1.5-Pints-2K-v0.1.Q5_K_M.gguf) | Q5_K_M | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/1.5-Pints-2K-v0.1-GGUF/resolve/main/1.5-Pints-2K-v0.1.Q6_K.gguf) | Q6_K | 1.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/1.5-Pints-2K-v0.1-GGUF/resolve/main/1.5-Pints-2K-v0.1.Q8_0.gguf) | Q8_0 | 1.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/1.5-Pints-2K-v0.1-GGUF/resolve/main/1.5-Pints-2K-v0.1.f16.gguf) | f16 | 3.2 | 16 bpw, overkill |
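Since f16 stores 2 bytes per weight, the f16 row pins the parameter count (~1.6B), and the Size/GB column then gives a rough bits-per-weight figure for each quant. A back-of-envelope sketch using the table's own (rounded) numbers:

```python
# Size/GB values copied from the table above (rounded, so estimates
# are rough; file headers and unquantized tensors add overhead).
TABLE_GB = {"Q2_K": 0.7, "Q4_K_M": 1.1, "Q6_K": 1.4, "Q8_0": 1.8, "f16": 3.2}

N_PARAMS = TABLE_GB["f16"] * 1e9 / 2  # f16 = 2 bytes/weight -> ~1.6e9

def bits_per_weight(quant: str) -> float:
    """Approximate storage bits per model weight for a quant type."""
    return TABLE_GB[quant] * 1e9 * 8 / N_PARAMS
```

For example, Q4_K_M comes out near 5.5 bits/weight, consistent with a ~4-bit scheme that keeps some tensors at higher precision.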
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)
And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->