Initialize the project; model provided by the ModelHub XC community

Model: prithivMLmods/OpenReasoning-Nemotron-1.5B-F32-GGUF
Source: Original Platform
This commit is contained in:
ModelHub XC
2026-05-07 23:04:06 +08:00
commit c0d535bfbc
17 changed files with 135 additions and 0 deletions

.gitattributes (vendored, new file, 47 lines)

@@ -0,0 +1,47 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bin.* filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zstandard filter=lfs diff=lfs merge=lfs -text
*.tfevents* filter=lfs diff=lfs merge=lfs -text
*.db* filter=lfs diff=lfs merge=lfs -text
*.ark* filter=lfs diff=lfs merge=lfs -text
**/*ckpt*data* filter=lfs diff=lfs merge=lfs -text
**/*ckpt*.meta filter=lfs diff=lfs merge=lfs -text
**/*ckpt*.index filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.gguf* filter=lfs diff=lfs merge=lfs -text
*.ggml filter=lfs diff=lfs merge=lfs -text
*.llamafile* filter=lfs diff=lfs merge=lfs -text
*.pt2 filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:61f85dede8743e4e3a0bf07ec334681dd1545a7918d77826ee9ad11bbba3c986
size 3093666784


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a62e3dd3f5ad35a34a748d6f0bc58c84d7dd2a57d4b0111b4447187624621a30
size 3093666784


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:78f5ffd22b3132f0cdc479456a70e78b7892b1e539b03e0f219c7adeb0638005
size 6180805600


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:325757983d0c588bda1195463a2113d056dac1c86cc486f27c45610a75563f84
size 676302304


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:0a862eb62f6f0307b6335d00505b3f72f613311b3cd12272631a8db904dbc53a
size 880160224


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:5a7a28b0b2133bdec0d3803e6a93b2f520adb59d4278588706963d16cf779f2a
size 824176096


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:69a53d0dff5682d163071e4a983bd08a292d6e2cb5e91b9c8e15c63dd0dc6f78
size 760942048


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:464f8cbec869c739e9b06e06d26c9b07a18b1ca985545825fa3aef3854007ea6
size 986045920


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:febabe18b16024296f604456470bedc472983de4049c077d48b9862b30a9b3a5
size 940309984


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7cd452deeeb06a62be603695e07225ccecf2d472431cbb07475def8e9b6b6325
size 1125047776


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:5354579bea1a9617407a06f5eeb0dd07185f899bf9c666fc9e376bc970531ffb
size 1098726880


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:1c065d6573fd819af5989ec889591869843e94ad289c5e8ed48cd0d89d546739
size 1272737248


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:9ec2083691c415c0799c88073b6e2bd0be712d0c986aedc852940912bb88a8af
size 1646570464

README.md (new file, 45 lines)

@@ -0,0 +1,45 @@
---
license: cc-by-4.0
language:
- en
base_model:
- nvidia/OpenReasoning-Nemotron-1.5B
pipeline_tag: text-generation
library_name: transformers
tags:
- text-generation-inference
- math
- code
- nvidia
---
# **OpenReasoning-Nemotron-1.5B-F32-GGUF**
> OpenReasoning-Nemotron-1.5B is a large language model (LLM) derived from Qwen2.5-1.5B-Instruct (the reference model). It is a reasoning model post-trained to generate solutions to math, code, and science problems, and was evaluated with up to 64K output tokens. OpenReasoning-Nemotron models can be used in a "heavy" mode by starting multiple parallel generations and combining them via generative solution selection (GenSelect). To add this skill, the original GenSelect training pipeline is followed, except that training uses the full reasoning trace of DeepSeek R1 0528 671B instead of the selection summary. The models are trained to select the best solution only for math problems, yet this capability directly generalizes to code and science questions. With this "heavy" GenSelect inference mode, the OpenReasoning-Nemotron-32B model surpasses O3 (High) on math and coding benchmarks.
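The "heavy" GenSelect mode described above amounts to a sample-then-select loop: draw several candidate solutions in parallel, then have a selector pick one. The sketch below illustrates only that control flow; `generate` and `select_best` are hypothetical stand-ins for the actual LLM sampling call and the GenSelect-trained selection pass, not part of this repo:

```python
import random

def genselect(generate, select_best, prompt, n=4, seed=0):
    """Sample n candidate solutions, then pick one with a selector.

    `generate` and `select_best` are stand-ins for the LLM sampling call
    and the GenSelect selection pass, respectively.
    """
    rng = random.Random(seed)
    candidates = [generate(prompt, rng) for _ in range(n)]
    return select_best(prompt, candidates)

# Toy stand-ins: "solutions" are numbers; the selector prefers the largest.
toy_generate = lambda prompt, rng: rng.randint(0, 100)
toy_select = lambda prompt, cands: max(cands)

best = genselect(toy_generate, toy_select, "2+2=?", n=8)
```

In real use, `n` trades compute for accuracy: more parallel generations give the selector more candidates to choose from.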
## Model File
| Quant Type | File Size | Filename |
|------------|-----------|----------|
| F32 | 6.18 GB | OpenReasoning-Nemotron-1.5B.F32.gguf |
| F16 | 3.09 GB | OpenReasoning-Nemotron-1.5B.F16.gguf |
| BF16 | 3.09 GB | OpenReasoning-Nemotron-1.5B.BF16.gguf |
| Q8_0 | 1.65 GB | OpenReasoning-Nemotron-1.5B.Q8_0.gguf |
| Q6_K | 1.27 GB | OpenReasoning-Nemotron-1.5B.Q6_K.gguf |
| Q5_K_M | 1.13 GB | OpenReasoning-Nemotron-1.5B.Q5_K_M.gguf |
| Q5_K_S | 1.1 GB | OpenReasoning-Nemotron-1.5B.Q5_K_S.gguf |
| Q4_K_M | 986 MB | OpenReasoning-Nemotron-1.5B.Q4_K_M.gguf |
| Q4_K_S | 940 MB | OpenReasoning-Nemotron-1.5B.Q4_K_S.gguf |
| Q3_K_L | 880 MB | OpenReasoning-Nemotron-1.5B.Q3_K_L.gguf |
| Q3_K_M | 824 MB | OpenReasoning-Nemotron-1.5B.Q3_K_M.gguf |
| Q3_K_S | 761 MB | OpenReasoning-Nemotron-1.5B.Q3_K_S.gguf |
| Q2_K | 676 MB | OpenReasoning-Nemotron-1.5B.Q2_K.gguf |
## Quants Usage
(Sorted by size, not necessarily quality; IQ-quants are often preferable over similar-sized non-IQ quants.)
Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):
![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)
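As a rough aid for choosing a file from the table above, the sketch below picks the largest quant that fits a given memory budget. The sizes come from the table; the size-only heuristic (ignoring quality differences between quant families) is an assumption, not guidance from the model authors:

```python
# File sizes in MB, taken from the quant table above.
QUANTS = {
    "F32": 6180, "F16": 3090, "BF16": 3090,
    "Q8_0": 1650, "Q6_K": 1270, "Q5_K_M": 1130, "Q5_K_S": 1100,
    "Q4_K_M": 986, "Q4_K_S": 940,
    "Q3_K_L": 880, "Q3_K_M": 824, "Q3_K_S": 761,
    "Q2_K": 676,
}

def pick_quant(budget_mb):
    """Return the largest quant whose file fits within budget_mb, or None."""
    fitting = {q: s for q, s in QUANTS.items() if s <= budget_mb}
    if not fitting:
        return None
    return max(fitting, key=fitting.get)

# e.g. a 2000 MB budget selects Q8_0 (1.65 GB), the largest file that fits.
choice = pick_quant(2000)
```

Note this ignores runtime overhead (KV cache, context length), so in practice you would leave headroom beyond the raw file size.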

config.json (new file, 3 lines)

@@ -0,0 +1,3 @@
{
"model_type": "qwen2"
}

configuration.json (new file, 1 line)

@@ -0,0 +1 @@
{"framework": "pytorch", "task": "text-generation", "allow_remote": true}