Model: afrideva/TinyLlama-1.1B-intermediate-step-955k-token-2T-guanaco-GGUF Source: Original Platform
2.2 KiB
| base_model | inference | model_creator | model_name | pipeline_tag | quantized_by | tags |
|---|---|---|---|---|---|---|
| jncraton/TinyLlama-1.1B-intermediate-step-955k-token-2T-guanaco | false | jncraton | TinyLlama-1.1B-intermediate-step-955k-token-2T-guanaco | text-generation | afrideva | |
# jncraton/TinyLlama-1.1B-intermediate-step-955k-token-2T-guanaco-GGUF

Quantized GGUF model files for TinyLlama-1.1B-intermediate-step-955k-token-2T-guanaco from jncraton.
| Name | Quant method | Size |
|---|---|---|
| tinyllama-1.1b-intermediate-step-955k-token-2t-guanaco.q2_k.gguf | q2_k | 482.14 MB |
| tinyllama-1.1b-intermediate-step-955k-token-2t-guanaco.q3_k_m.gguf | q3_k_m | 549.85 MB |
| tinyllama-1.1b-intermediate-step-955k-token-2t-guanaco.q4_k_m.gguf | q4_k_m | 667.81 MB |
| tinyllama-1.1b-intermediate-step-955k-token-2t-guanaco.q5_k_m.gguf | q5_k_m | 782.04 MB |
| tinyllama-1.1b-intermediate-step-955k-token-2t-guanaco.q6_k.gguf | q6_k | 903.41 MB |
| tinyllama-1.1b-intermediate-step-955k-token-2t-guanaco.q8_0.gguf | q8_0 | 1.17 GB |
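As a sketch of how one of these files might be fetched and run locally (the repo id is taken from the title above; the `llama-cli` binary name and the Guanaco-style prompt format are assumptions that depend on your llama.cpp build and may differ):

```shell
# Download a single quantized file from the Hub
# (q4_k_m is a common size/quality trade-off)
huggingface-cli download \
  afrideva/TinyLlama-1.1B-intermediate-step-955k-token-2T-guanaco-GGUF \
  tinyllama-1.1b-intermediate-step-955k-token-2t-guanaco.q4_k_m.gguf \
  --local-dir .

# Run it with llama.cpp; older builds name this binary ./main instead
./llama-cli \
  -m tinyllama-1.1b-intermediate-step-955k-token-2t-guanaco.q4_k_m.gguf \
  -p "### Human: Hello\n### Assistant:" \
  -n 128
```

Smaller quants (q2_k, q3_k_m) trade output quality for memory; q8_0 is closest to the original weights at roughly twice the size of q4_k_m.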