---
base_model: cnmoro/TinyLlama-1.1B-intermediate-1.5T-PTBR-Instruct-v1
inference: false
language:
- pt
- en
license: mit
model_creator: cnmoro
model_name: TinyLlama-1.1B-intermediate-1.5T-PTBR-Instruct-v1
pipeline_tag: text-generation
quantized_by: afrideva
tags:
- gguf
- ggml
- quantized
- q2_k
- q3_k_m
- q4_k_m
- q5_k_m
- q6_k
- q8_0
---

# cnmoro/TinyLlama-1.1B-intermediate-1.5T-PTBR-Instruct-v1-GGUF

Quantized GGUF model files for TinyLlama-1.1B-intermediate-1.5T-PTBR-Instruct-v1 from cnmoro

| Name | Quant method | Size |
| ---- | ------------ | ---- |
| tinyllama-1.1b-intermediate-1.5t-ptbr-instruct-v1.q2_k.gguf | q2_k | 482.14 MB |
| tinyllama-1.1b-intermediate-1.5t-ptbr-instruct-v1.q3_k_m.gguf | q3_k_m | 549.85 MB |
| tinyllama-1.1b-intermediate-1.5t-ptbr-instruct-v1.q4_k_m.gguf | q4_k_m | 667.81 MB |
| tinyllama-1.1b-intermediate-1.5t-ptbr-instruct-v1.q5_k_m.gguf | q5_k_m | 782.04 MB |
| tinyllama-1.1b-intermediate-1.5t-ptbr-instruct-v1.q6_k.gguf | q6_k | 903.41 MB |
| tinyllama-1.1b-intermediate-1.5t-ptbr-instruct-v1.q8_0.gguf | q8_0 | 1.17 GB |
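The files above follow one naming pattern: the lowercased model name, the quant method, and a `.gguf` suffix. A minimal sketch of building a filename from the quant method alone, so a download script does not need to hard-code each entry (the `gguf_filename` helper is illustrative, not part of this repo):

```python
# Basename shared by every file in the table above.
MODEL_BASENAME = "tinyllama-1.1b-intermediate-1.5t-ptbr-instruct-v1"


def gguf_filename(quant_method: str) -> str:
    """Return the GGUF filename for a quant method such as 'q4_k_m'."""
    return f"{MODEL_BASENAME}.{quant_method}.gguf"


# Example: the medium 4-bit quant from the table.
print(gguf_filename("q4_k_m"))
# tinyllama-1.1b-intermediate-1.5t-ptbr-instruct-v1.q4_k_m.gguf
```

The resulting name can be passed as the `filename` argument to `huggingface_hub.hf_hub_download` against this repo.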

## Original Model Card:

Fine-tuned version of PY007/TinyLlama-1.1B-intermediate-step-715k-1.5T on a Portuguese instruction dataset, using axolotl.

This is a work in progress; the final version will be v3 or v4.

Prompt format:

```python
f"Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.\n\n### Instruction:\n{instruction}\n\n### Response:\n"
```
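The template can be filled programmatically before the text is passed to the model. A minimal sketch (the `build_prompt` helper is illustrative, not part of the model card):

```python
# Prompt template copied verbatim from the model card above.
PROMPT_TEMPLATE = (
    "Below is an instruction that describes a task, paired with an input "
    "that provides further context. Write a response that appropriately "
    "completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n"
    "### Response:\n"
)


def build_prompt(instruction: str) -> str:
    """Wrap a user instruction in the format this model was fine-tuned on."""
    return PROMPT_TEMPLATE.format(instruction=instruction)


prompt = build_prompt("Explique o que e um modelo de linguagem.")
```

The model's completion is expected to follow directly after the trailing `### Response:` marker.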
