Model: afrideva/smol-3b-GGUF
| base_model | inference | library_name | license | model-index | model_creator | model_name | pipeline_tag | quantized_by | tags |
|---|---|---|---|---|---|---|---|---|---|
| rishiraj/smol-3b | false | peft | apache-2.0 | | rishiraj | smol-3b | text-generation | afrideva | |
# rishiraj/smol-3b-GGUF
Quantized GGUF model files for smol-3b from rishiraj
| Name | Quant method | Size |
|---|---|---|
| smol-3b.fp16.gguf | fp16 | 6.04 GB |
| smol-3b.q2_k.gguf | q2_k | 1.30 GB |
| smol-3b.q3_k_m.gguf | q3_k_m | 1.51 GB |
| smol-3b.q4_k_m.gguf | q4_k_m | 1.85 GB |
| smol-3b.q5_k_m.gguf | q5_k_m | 2.15 GB |
| smol-3b.q6_k.gguf | q6_k | 2.48 GB |
| smol-3b.q8_0.gguf | q8_0 | 3.21 GB |
## Original Model Card:

# smol-3b

See what open weights, as opposed to open source, feel like!

## Description