
---
base_model: DQN-Labs/dqnMath-v0.1-3B-HF
library_name: mlx
language:
- en
- fr
- es
- de
- it
- pt
- nl
- zh
- ja
- ko
- ar
license: apache-2.0
inference: true
pipeline_tag: text-generation
tags:
- mlx
---
# GGUF Files for dqnMath-v0.1-3B-HF

These are the GGUF files for DQN-Labs/dqnMath-v0.1-3B-HF.

## Downloads

| GGUF Link | Quantization | Description |
| --- | --- | --- |
| Download | Q2_K | Lowest quality |
| Download | Q3_K_S | |
| Download | IQ3_S | Integer quant, preferable over Q3_K_S |
| Download | IQ3_M | Integer quant |
| Download | Q3_K_M | |
| Download | Q3_K_L | |
| Download | IQ4_XS | Integer quant |
| Download | Q4_K_S | Fast, with good quality |
| Download | Q4_K_M | Recommended: good balance of speed and quality |
| Download | Q5_K_S | |
| Download | Q5_K_M | |
| Download | Q6_K | Very good quality |
| Download | Q8_0 | Best quality |
| Download | f16 | Full precision; don't bother, use a quant instead |
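
These files target llama.cpp-compatible runtimes. As a minimal sketch of loading one of them with the llama-cpp-python bindings (the local file name below is an assumption; point it at whichever quant you downloaded):

```python
# Minimal sketch: run a downloaded quant with llama-cpp-python.
# The file name is hypothetical -- set model_path to the quant you fetched.
from llama_cpp import Llama

llm = Llama(
    model_path="./dqnMath-v0.1-3B-HF.Q4_K_M.gguf",
    n_ctx=4096,  # context window; raise or lower to fit your memory budget
)

out = llm("What is 17 * 24?", max_tokens=64)
print(out["choices"][0]["text"])
```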

## Note from Flexan

I provide GGUF quantizations of publicly available models that do not yet have a GGUF equivalent, usually models I find interesting and want to try out.

If a quant you'd like is missing, you can request it in the community tab; the same goes for requesting conversions of other public models. If you have questions about the model itself, please refer to the original model repo.

You can find more info about me and what I do here.

# DQN-Labs/dqnMath-v0.1-3B-HF

This model was converted to MLX format from LakoMoor/Ministral-3-3B-Text-Only using mlx-lm version 0.29.1.
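
For reference, a conversion like this can be reproduced with mlx-lm's convert helper. A minimal sketch, assuming the `convert()` function exported by recent mlx-lm releases (the output directory name is illustrative, not the one used for this repo):

```python
# Minimal sketch: reproduce the HF -> MLX conversion with mlx-lm.
# mlx_path is a hypothetical local output directory.
from mlx_lm import convert

convert(
    hf_path="LakoMoor/Ministral-3-3B-Text-Only",
    mlx_path="./dqnMath-v0.1-3B-HF",
)
```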

## Use with mlx

```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

# Download (if needed) and load the model weights and tokenizer.
model, tokenizer = load("DQN-Labs/dqnMath-v0.1-3B-HF")

prompt = "hello"

# Wrap the prompt in the model's chat template when one is defined.
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
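
Given the model's apparent math focus, a more realistic call passes a math question and caps the output length. `max_tokens` is accepted by recent mlx-lm releases; treat the exact keyword as an assumption if you are on an older version:

```python
from mlx_lm import load, generate

model, tokenizer = load("DQN-Labs/dqnMath-v0.1-3B-HF")

messages = [{"role": "user", "content": "What is 17 * 24? Show your steps."}]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

# max_tokens caps the generation length (assumed keyword; check your version).
response = generate(model, tokenizer, prompt=prompt, max_tokens=256, verbose=True)
```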