ModelHub XC 9e39e200be project initialization; model provided by the ModelHub XC community
Model: mradermacher/1.5-Pints-2K-v0.1-GGUF
Source: Original Platform
2026-05-05 08:10:26 +08:00

base_model: pints-ai/1.5-Pints-2K-v0.1
datasets:
  pints-ai/Expository-Prose-V1
  HuggingFaceH4/ultrachat_200k
  Open-Orca/SlimOrca-Dedup
  meta-math/MetaMathQA
  HuggingFaceH4/deita-10k-v0-sft
  WizardLM/WizardLM_evol_instruct_V2_196k
  togethercomputer/llama-instruct
  LDJnr/Capybara
  HuggingFaceH4/ultrafeedback_binarized
extra_gated_fields:
  Company (text)
  Country (country)
  I agree to use this model in accordance with the aforementioned Terms of Use (checkbox)
  I want to use this model for (select: Research, Education, Other)
  Specific date (date_picker)
extra_gated_prompt: Though best efforts have been made to ensure, as much as possible, that all texts in the training corpora are royalty free, this does not constitute a legal guarantee that such is the case. By using any of the models, corpora or part thereof, the user agrees to bear full responsibility to do the necessary due diligence to ensure that he / she is in compliance with their local copyright laws. Additionally, the user agrees to bear any damages arising as a direct cause (or otherwise) of using any artifacts released by the Pints research team, as well as full responsibility for the consequences of his / her usage (or implementation) of any such released artifacts. The user also indemnifies the Pints Research Team (and any of its members or agents) against any damage, related or unrelated, to the release or subsequent usage of any findings, artifacts or code by the team. For the avoidance of doubt, any artifacts released by the Pints Research team are done so in accordance with the 'fair use' clause of Copyright Law, in hopes that this will aid the research community in bringing LLMs to the next frontier.
language: en
library_name: transformers
license: mit
quantized_by: mradermacher

About

static quants of https://huggingface.co/pints-ai/1.5-Pints-2K-v0.1

weighted/imatrix quants are available at https://huggingface.co/mradermacher/1.5-Pints-2K-v0.1-i1-GGUF

Usage

If you are unsure how to use GGUF files, refer to one of TheBloke's READMEs for more details, including how to concatenate multi-part files.
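
As a hedged illustration (not part of the original card), the sketch below shows one way to load a quant from this repo with llama-cpp-python. The exact .gguf filename and the 2K context size are assumptions; check the repository's file list before running it.

```python
# Minimal sketch, assuming llama-cpp-python and huggingface_hub are installed
# (pip install llama-cpp-python huggingface_hub).
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one quant from the repo; Q4_K_M is the "fast, recommended" pick
# in the table below. The filename is an assumption -- verify it in the repo.
model_path = hf_hub_download(
    repo_id="mradermacher/1.5-Pints-2K-v0.1-GGUF",
    filename="1.5-Pints-2K-v0.1.Q4_K_M.gguf",
)

# A 2048-token context window matches the "2K" in the base model's name.
llm = Llama(model_path=model_path, n_ctx=2048)
out = llm("Briefly explain what a GGUF file is.", max_tokens=64)
print(out["choices"][0]["text"])
```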

Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable over similar-sized non-IQ quants)

| Link | Type   | Size/GB | Notes               |
|------|--------|---------|---------------------|
| GGUF | Q2_K   | 0.7     |                     |
| GGUF | Q3_K_S | 0.8     |                     |
| GGUF | Q3_K_M | 0.9     | lower quality       |
| GGUF | Q3_K_L | 0.9     |                     |
| GGUF | IQ4_XS | 1.0     |                     |
| GGUF | Q4_K_S | 1.0     | fast, recommended   |
| GGUF | Q4_K_M | 1.1     | fast, recommended   |
| GGUF | Q5_K_S | 1.2     |                     |
| GGUF | Q5_K_M | 1.2     |                     |
| GGUF | Q6_K   | 1.4     | very good quality   |
| GGUF | Q8_0   | 1.8     | fast, best quality  |
| GGUF | f16    | 3.2     | 16 bpw, overkill    |
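
If you prefer to check the available files programmatically rather than reading the table above, here is a small sketch (an assumed workflow, not from the original card) using huggingface_hub:

```python
# List the .gguf files published in this repo so you can pick a quant
# from the table above by its exact filename.
from huggingface_hub import list_repo_files

files = list_repo_files("mradermacher/1.5-Pints-2K-v0.1-GGUF")
for name in sorted(f for f in files if f.endswith(".gguf")):
    print(name)
```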

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

[image: graph comparing quant type quality, by ikawrakow]

And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.

Thanks

I thank my company, nethype GmbH, for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.
