ModelHub XC 42a0807df9 project initialized; model provided by the ModelHub XC community
Model: Flexan/DQN-Labs-dqnMath-v0.2-3.8B-HF-GGUF
Source: Original Platform
2026-04-11 17:12:55 +08:00

---
language:
- en
library_name: mlx
license: mit
license_link: https://huggingface.co/microsoft/Phi-4-mini-instruct-reasoning/resolve/main/LICENSE
pipeline_tag: text-generation
tags:
- nlp
- math
- code
- mlx
widget:
- messages:
  - role: user
    content: How to solve 3*x^2+4*x+5=1?
base_model: DQN-Labs/dqnMath-v0.2-3.8B-HF
---

GGUF Files for dqnMath-v0.2-3.8B-HF

These are the GGUF files for DQN-Labs/dqnMath-v0.2-3.8B-HF.

Downloads

| GGUF Link | Quantization | Description |
| --- | --- | --- |
| Download | Q2_K | Lowest quality |
| Download | Q3_K_S | |
| Download | IQ3_S | Integer quant, preferable over Q3_K_S |
| Download | IQ3_M | Integer quant |
| Download | Q3_K_M | |
| Download | Q3_K_L | |
| Download | IQ4_XS | Integer quant |
| Download | Q4_K_S | Fast with good performance |
| Download | Q4_K_M | Recommended: perfect mix of speed and performance |
| Download | Q5_K_S | |
| Download | Q5_K_M | |
| Download | Q6_K | Very good quality |
| Download | Q8_0 | Best quality |
| Download | f16 | Full precision, don't bother; use a quant |
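To help pick a quantization, the approximate download size can be estimated from the effective bits per weight of each format. This is only a rough sketch: the bits-per-weight figures below are typical values for llama.cpp quants, not measured from the files in this repository, and real files also carry some metadata overhead.

```python
# Rough on-disk size estimate for a 3.8B-parameter model at several
# quantization levels. Bits-per-weight values are typical approximations
# for llama.cpp quants, NOT measured from the files in this repo.
PARAMS = 3.8e9

BITS_PER_WEIGHT = {
    "Q2_K": 2.6,
    "Q4_K_M": 4.8,
    "Q6_K": 6.6,
    "Q8_0": 8.5,
    "f16": 16.0,
}

def est_size_gib(bpw, params=PARAMS):
    """Estimated file size in GiB: params * bits-per-weight / 8 bytes."""
    return params * bpw / 8 / 2**30

for name, bpw in BITS_PER_WEIGHT.items():
    print(f"{name:7s} ~{est_size_gib(bpw):.1f} GiB")
```

For example, f16 works out to roughly 7 GiB for a 3.8B model, while Q4_K_M lands near 2 GiB, which is why the mid-range k-quants are usually the practical choice.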

Note from Flexan

I provide GGUFs and quantizations of publicly available models that do not have a GGUF equivalent available yet, usually for models I deem interesting and wish to try out.

If any quants you'd like are missing, or you'd like a public model converted, you can request either in the community tab. For questions about the model itself, please refer to the original model repo.

You can find more info about me and what I do here.

DQN-Labs/dqnMath-v0.2-3.8B-HF

This model, DQN-Labs/dqnMath-v0.2-3.8B-HF, was converted to MLX format from microsoft/Phi-4-mini-reasoning using mlx-lm version 0.30.7.

Use with mlx

pip install mlx-lm

from mlx_lm import load, generate

# Load the model and tokenizer from the Hugging Face Hub
model, tokenizer = load("DQN-Labs/dqnMath-v0.2-3.8B-HF")

prompt = "hello"

# Wrap the raw prompt in the model's chat format, if one is defined
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_dict=False,
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
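The apply_chat_template call above turns the list of role-tagged messages into a single prompt string in the model's chat format. As a rough illustration only (the <|user|>/<|end|>/<|assistant|> markers below are assumed placeholders, not necessarily this model's actual template), the transformation works like this:

```python
# Illustrative sketch of what a chat template does: flatten role-tagged
# messages into one prompt string and append a generation marker.
# The special tokens here are assumed placeholders, not the model's
# actual chat template.
def sketch_chat_template(messages, add_generation_prompt=True):
    parts = [f"<|{m['role']}|>\n{m['content']}<|end|>" for m in messages]
    if add_generation_prompt:
        # Cue the model to produce the assistant's turn next
        parts.append("<|assistant|>")
    return "\n".join(parts)

messages = [{"role": "user", "content": "How to solve 3*x^2+4*x+5=1?"}]
print(sketch_chat_template(messages))
```

The real template is stored in the tokenizer config, which is why the example checks tokenizer.chat_template before applying it.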