base_model: TinyLlama/TinyLlama-1.1B-intermediate-step-955k-token-2T
datasets: cerebras/SlimPajama-627B, bigcode/starcoderdata
inference: false
language: en
license: apache-2.0
model_creator: TinyLlama
model_name: TinyLlama-1.1B-intermediate-step-955k-token-2T
pipeline_tag: text-generation
quantized_by: afrideva
tags: gguf, ggml, quantized, q2_k, q3_k_m, q4_k_m, q5_k_m, q6_k, q8_0

TinyLlama/TinyLlama-1.1B-intermediate-step-955k-token-2T-GGUF

Quantized GGUF model files for TinyLlama-1.1B-intermediate-step-955k-token-2T from TinyLlama

Name | Quant method | Size
tinyllama-1.1b-intermediate-step-955k-token-2t.q2_k.gguf | q2_k | 482.14 MB
tinyllama-1.1b-intermediate-step-955k-token-2t.q3_k_m.gguf | q3_k_m | 549.85 MB
tinyllama-1.1b-intermediate-step-955k-token-2t.q4_k_m.gguf | q4_k_m | 667.81 MB
tinyllama-1.1b-intermediate-step-955k-token-2t.q5_k_m.gguf | q5_k_m | 782.04 MB
tinyllama-1.1b-intermediate-step-955k-token-2t.q6_k.gguf | q6_k | 903.41 MB
tinyllama-1.1b-intermediate-step-955k-token-2t.q8_0.gguf | q8_0 | 1.17 GB
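
These files can be run locally with llama.cpp or its Python bindings. As a minimal sketch (not part of the original card; assumes the llama-cpp-python and huggingface_hub packages are installed, and that the repo id and filename below match this repository):

from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one quantized file from the table above (repo id assumed;
# verify against the hosting page).
gguf_path = hf_hub_download(
    repo_id="afrideva/TinyLlama-1.1B-intermediate-step-955k-token-2T-GGUF",
    filename="tinyllama-1.1b-intermediate-step-955k-token-2t.q4_k_m.gguf",
)

# Load the GGUF file on CPU with a 2048-token context window.
llm = Llama(model_path=gguf_path, n_ctx=2048)
out = llm("The TinyLlama project aims to", max_tokens=64)
print(out["choices"][0]["text"])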

Original Model Card:

TinyLlama-1.1B

https://github.com/jzhang38/TinyLlama

The TinyLlama project aims to pretrain a 1.1B Llama model on 3 trillion tokens. With some proper optimization, we can achieve this within a span of "just" 90 days using 16 A100-40G GPUs 🚀🚀. Training started on 2023-09-01.
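
As a quick back-of-the-envelope check (our arithmetic, not the project's), that budget implies a sustained throughput of roughly 24k tokens per second per GPU:

tokens_total = 3e12           # 3 trillion training tokens
seconds = 90 * 24 * 3600      # 90-day training window
gpus = 16                     # A100-40G count
print(f"{tokens_total / seconds / gpus:,.0f} tokens/s per GPU")  # ~24,113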

We adopted exactly the same architecture and tokenizer as Llama 2. This means TinyLlama can be plugged into many open-source projects built upon Llama as a drop-in replacement. Moreover, TinyLlama is compact, with only 1.1B parameters, which lets it serve applications that demand a restricted computation and memory footprint.
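
Because the architecture and tokenizer match Llama 2, the checkpoint loads through the standard Llama classes in transformers. A minimal sketch (our illustration, not from the card; note it downloads the full-precision weights):

from transformers import AutoConfig, AutoModelForCausalLM

model_id = "TinyLlama/TinyLlama-1.1B-intermediate-step-955k-token-2T"

# The config identifies the model as plain "llama" -- the same family
# (and hence the same modeling code) as Llama 2.
config = AutoConfig.from_pretrained(model_id)
print(config.model_type)  # llama

model = AutoModelForCausalLM.from_pretrained(model_id)
print(type(model).__name__)  # LlamaForCausalLM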

This Model

This is an intermediate checkpoint with 955K steps and 2003B tokens.
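
For scale (our arithmetic, not the card's): 2003B tokens over 955K steps works out to about 2.1M tokens per optimizer step, i.e. a roughly 2M-token global batch:

tokens = 2003e9   # tokens seen at this checkpoint
steps = 955e3     # optimizer steps so far
print(f"{tokens / steps:,.0f} tokens/step")  # ~2,097,382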

Releases Schedule

We will be rolling out intermediate checkpoints on the schedule below. We also include some baseline models for comparison.

Date | HF Checkpoint | Tokens | Step | HellaSwag Acc_norm
Baseline | StableLM-Alpha-3B | 800B | -- | 38.31
Baseline | Pythia-1B-intermediate-step-50k-105b | 105B | 50k | 42.04
Baseline | Pythia-1B | 300B | 143k | 47.16
2023-09-04 | TinyLlama-1.1B-intermediate-step-50k-105b | 105B | 50k | 43.50
2023-09-16 | -- | 500B | -- | --
2023-10-01 | -- | 1T | -- | --
2023-10-16 | -- | 1.5T | -- | --
2023-10-31 | -- | 2T | -- | --
2023-11-15 | -- | 2.5T | -- | --
2023-12-01 | -- | 3T | -- | --

How to use

You will need transformers>=4.31. Check the TinyLlama GitHub page for more information.

from transformers import AutoTokenizer
import transformers
import torch

model = "TinyLlama/TinyLlama-1.1B-intermediate-step-955k-token-2T"
tokenizer = AutoTokenizer.from_pretrained(model)

# Build a text-generation pipeline; fp16 weights and device_map="auto"
# keep this 1.1B model within a single modest GPU.
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

sequences = pipeline(
    'The TinyLlama project aims to pretrain a 1.1B Llama model on 3 trillion tokens. With some proper optimization, we can achieve this within a span of "just" 90 days using 16 A100-40G GPUs 🚀🚀. The training has started on 2023-09-01.',
    do_sample=True,
    top_k=10,
    num_return_sequences=1,
    repetition_penalty=1.5,
    eos_token_id=tokenizer.eos_token_id,
    max_length=500,  # total length in tokens, prompt included
)
for seq in sequences:
    print(f"Result: {seq['generated_text']}")