ModelHub XC 25eaf0e4ce initial commit; model provided by the ModelHub XC community
Model: Pomni/OWoTGPT-1.3-GGUF
Source: Original Platform
2026-04-18 08:59:34 +08:00

---
quantized_by: Pomni
language: en
base_model: Pomni/OWoTGPT-1.3
pipeline_tag: text-generation
tags:
- gpt2
- slm
- owot
- gpt
- gguf
---

# OWoTGPT-1.3 quants

This is a repository of GGUF quants for OWoTGPT-1.3.

If you are looking for a program to run this model with, I would recommend LM Studio: it is user-friendly, has a GUI, and is very powerful.

## List of Quants

Sorry, there are too many quants for me to list here. Go to the Files page to download them.

The MXFP4_MOE and TQx_0 quants are experimental. Additionally, I would not go below F16 for a model this small. F32 is the way to go here.
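As a rough illustration of what the choice between F32, F16, and lower quants costs in disk space: file size scales roughly linearly with bits per weight. The sketch below uses exact bits-per-weight for F32/F16 and the standard 8.5 bpw for Q8_0; the 124M parameter count is a hypothetical GPT-2-sized example, not necessarily this model's actual size.

```python
# Rough GGUF tensor-data size estimate: parameters * bits-per-weight / 8.
# F32/F16 are exact; Q8_0 stores blocks of 32 weights with one fp16 scale,
# i.e. 8 + 16/32 = 8.5 bits per weight. Metadata overhead is ignored.
BPW = {"F32": 32.0, "F16": 16.0, "Q8_0": 8.5}

def estimate_size_mib(n_params: int, quant: str) -> float:
    """Approximate quantized tensor-data size in MiB."""
    return n_params * BPW[quant] / 8 / (1024 ** 2)

# Hypothetical 124M-parameter model (GPT-2-small-sized, for illustration).
print(round(estimate_size_mib(124_000_000, "F16")))  # → 237 (MiB)
```

For a model this small, the absolute savings from dropping below F16 are modest, which is why sticking with F32 or F16 is reasonable advice here.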

## Questions you may have

### What program did you use to make these quants?

I used llama.cpp b8352 on Windows x64, leveraging CUDA 12.4.

### One or more of the quants are not working for me.

Open a new discussion in the community tab about this, and I will look into the issue.
