---
base_model: TinyLlama/TinyLlama-1.1B-Chat-v0.5
datasets:
- cerebras/SlimPajama-627B
- bigcode/starcoderdata
- OpenAssistant/oasst_top1_2023-08-25
inference: false
language:
- en
license: apache-2.0
model_creator: TinyLlama
model_name: TinyLlama-1.1B-Chat-v0.5
pipeline_tag: text-generation
quantized_by: afrideva
tags:
- gguf
- ggml
- quantized
- q2_k
- q3_k_m
- q4_k_m
- q5_k_m
- q6_k
- q8_0
---

# TinyLlama/TinyLlama-1.1B-Chat-v0.5-GGUF

Quantized GGUF model files for TinyLlama-1.1B-Chat-v0.5 from TinyLlama

| Name | Quant method | Size |
| ---- | ------------ | ---- |
| tinyllama-1.1b-chat-v0.5.q2_k.gguf | q2_k | 482.15 MB |
| tinyllama-1.1b-chat-v0.5.q3_k_m.gguf | q3_k_m | 549.85 MB |
| tinyllama-1.1b-chat-v0.5.q4_k_m.gguf | q4_k_m | 667.82 MB |
| tinyllama-1.1b-chat-v0.5.q5_k_m.gguf | q5_k_m | 782.05 MB |
| tinyllama-1.1b-chat-v0.5.q6_k.gguf | q6_k | 903.42 MB |
| tinyllama-1.1b-chat-v0.5.q8_0.gguf | q8_0 | 1.17 GB |
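
These files run on any GGUF-compatible runtime such as llama.cpp. Below is a minimal sketch of loading one of them with the llama-cpp-python bindings, assuming that package is installed and the q4_k_m file has been downloaded to the working directory:

```python
from llama_cpp import Llama

# Load the quantized GGUF file; n_ctx sets the context window size.
llm = Llama(
    model_path="tinyllama-1.1b-chat-v0.5.q4_k_m.gguf",
    n_ctx=2048,
)

# The chat model expects ChatML-formatted prompts (see the original card below).
prompt = (
    "<|im_start|>user\nHow to get in a good university?<|im_end|>\n"
    "<|im_start|>assistant\n"
)
output = llm(prompt, max_tokens=256, stop=["<|im_end|>"])
print(output["choices"][0]["text"])
```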

## Original Model Card:

### TinyLlama-1.1B

https://github.com/jzhang38/TinyLlama

The TinyLlama project aims to pretrain a 1.1B Llama model on 3 trillion tokens. With proper optimization, this can be achieved in a span of "just" 90 days using 16 A100-40G GPUs 🚀🚀. Training started on 2023-09-01.

We adopted exactly the same architecture and tokenizer as Llama 2, so TinyLlama can be plugged into many open-source projects built upon Llama. TinyLlama is also compact, with only 1.1B parameters, which suits applications that demand a restricted computation and memory footprint.

#### This Model

This is the chat model fine-tuned on top of TinyLlama/TinyLlama-1.1B-intermediate-step-955k-2T. The dataset used is OpenAssistant/oasst_top1_2023-08-25, formatted with the ChatML template, as sketched below.
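
For reference, ChatML wraps each conversation turn in `<|im_start|>{role}` and `<|im_end|>` markers. A minimal sketch of a single exchange in this format (the question and answer text here are illustrative only):

```python
# ChatML layout used by this model: each turn is delimited by
# <|im_start|>{role} ... <|im_end|> markers.
chatml_example = (
    "<|im_start|>user\n"
    "What is the capital of France?<|im_end|>\n"
    "<|im_start|>assistant\n"
    "The capital of France is Paris.<|im_end|>\n"
)
```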

#### How to use

You will need `transformers>=4.31`. Check the TinyLlama GitHub page for more information.

```python
from transformers import AutoTokenizer
import transformers
import torch

model = "PY007/TinyLlama-1.1B-Chat-v0.5"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Token id of <|im_end|>, the ChatML end-of-turn marker; used to stop generation.
CHAT_EOS_TOKEN_ID = 32002

prompt = "How to get in a good university?"
# Wrap the user prompt in the ChatML template the model was fine-tuned on.
formatted_prompt = (
    f"<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant\n"
)

sequences = pipeline(
    formatted_prompt,
    do_sample=True,
    top_k=50,
    top_p=0.9,
    num_return_sequences=1,
    repetition_penalty=1.1,
    max_new_tokens=1024,
    eos_token_id=CHAT_EOS_TOKEN_ID,
)

for seq in sequences:
    print(f"Result: {seq['generated_text']}")
```