
---
base_model: TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T
datasets:
- cerebras/SlimPajama-627B
- bigcode/starcoderdata
inference: false
language:
- en
license: apache-2.0
model_creator: TinyLlama
model_name: TinyLlama-1.1B-intermediate-step-1431k-3T
pipeline_tag: text-generation
quantized_by: afrideva
tags:
- gguf
- ggml
- quantized
- q2_k
- q3_k_m
- q4_k_m
- q5_k_m
- q6_k
- q8_0
---

TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T-GGUF

Quantized GGUF model files for TinyLlama-1.1B-intermediate-step-1431k-3T from TinyLlama

| Name | Quant method | Size |
| ---- | ------------ | ---- |
| tinyllama-1.1b-intermediate-step-1431k-3t.fp16.gguf | fp16 | 2.20 GB |
| tinyllama-1.1b-intermediate-step-1431k-3t.q2_k.gguf | q2_k | 483.12 MB |
| tinyllama-1.1b-intermediate-step-1431k-3t.q3_k_m.gguf | q3_k_m | 550.82 MB |
| tinyllama-1.1b-intermediate-step-1431k-3t.q4_k_m.gguf | q4_k_m | 668.79 MB |
| tinyllama-1.1b-intermediate-step-1431k-3t.q5_k_m.gguf | q5_k_m | 783.02 MB |
| tinyllama-1.1b-intermediate-step-1431k-3t.q6_k.gguf | q6_k | 904.39 MB |
| tinyllama-1.1b-intermediate-step-1431k-3t.q8_0.gguf | q8_0 | 1.17 GB |
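
For local inference, any of these files can be fetched and run with a GGUF-compatible runtime. A minimal sketch, assuming `huggingface_hub` and `llama-cpp-python` are installed and that the files live under the quantizer's namespace (`afrideva/...`); the prompt and generation settings are illustrative, not from this card:

```python
# Minimal sketch: download one quant and run it with llama-cpp-python.
# Assumes repo id afrideva/TinyLlama-1.1B-intermediate-step-1431k-3T-GGUF;
# prompt, context size, and token budget below are illustrative choices.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# q4_k_m (668.79 MB) is a common size/quality trade-off from the table above.
model_path = hf_hub_download(
    repo_id="afrideva/TinyLlama-1.1B-intermediate-step-1431k-3T-GGUF",
    filename="tinyllama-1.1b-intermediate-step-1431k-3t.q4_k_m.gguf",
)

llm = Llama(model_path=model_path, n_ctx=2048)
out = llm("The TinyLlama project aims to", max_tokens=64)
print(out["choices"][0]["text"])
```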

Original Model Card:

TinyLlama-1.1B

https://github.com/jzhang38/TinyLlama

The TinyLlama project aims to pretrain a 1.1B Llama model on 3 trillion tokens. With some proper optimization, we can achieve this within a span of "just" 90 days using 16 A100-40G GPUs 🚀🚀. Training started on 2023-09-01.
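
As a quick sanity check (my arithmetic, not a figure from the card), that schedule implies roughly 24,000 training tokens per second per GPU:

```python
# Back-of-the-envelope check of the 3T-tokens-in-90-days claim
# (my arithmetic, not from the model card).
tokens = 3e12            # 3 trillion tokens
days = 90
gpus = 16                # A100-40G
seconds = days * 24 * 3600
per_gpu_tok_s = tokens / (seconds * gpus)
print(f"{per_gpu_tok_s:,.0f} tokens/s per GPU")  # ~24,113 tokens/s per GPU
```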

We adopted exactly the same architecture and tokenizer as Llama 2. This means TinyLlama can be plugged into many open-source projects built upon Llama. Moreover, TinyLlama is compact, with only 1.1B parameters, so it suits the many applications that demand a restricted computation and memory footprint.
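
Because the base model is a standard Llama 2 architecture, it loads through the usual transformers path. A minimal sketch, assuming `transformers` and `torch` are installed; the prompt and generation settings are illustrative:

```python
# Minimal sketch: load the base (non-GGUF) checkpoint with transformers.
# Works because TinyLlama reuses the Llama 2 architecture and tokenizer.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.float16)

inputs = tokenizer("The TinyLlama project aims to", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```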

This Collection

This collection contains all checkpoints after the 1T fix. The branch name indicates the step and the number of tokens seen.

Eval

| Model | Pretrain Tokens | HellaSwag | Obqa | WinoGrande | ARC_c | ARC_e | boolq | piqa | avg |
| ----- | --------------- | --------- | ---- | ---------- | ----- | ----- | ----- | ---- | --- |
| Pythia-1.0B | 300B | 47.16 | 31.40 | 53.43 | 27.05 | 48.99 | 60.83 | 69.21 | 48.30 |
| TinyLlama-1.1B-intermediate-step-50K-104b | 103B | 43.50 | 29.80 | 53.28 | 24.32 | 44.91 | 59.66 | 67.30 | 46.11 |
| TinyLlama-1.1B-intermediate-step-240k-503b | 503B | 49.56 | 31.40 | 55.80 | 26.54 | 48.32 | 56.91 | 69.42 | 48.28 |
| TinyLlama-1.1B-intermediate-step-480k-1007B | 1007B | 52.54 | 33.40 | 55.96 | 27.82 | 52.36 | 59.54 | 69.91 | 50.22 |
| TinyLlama-1.1B-intermediate-step-715k-1.5T | 1.5T | 53.68 | 35.20 | 58.33 | 29.18 | 51.89 | 59.08 | 71.65 | 51.29 |
| TinyLlama-1.1B-intermediate-step-955k-2T | 2T | 54.63 | 33.40 | 56.83 | 28.07 | 54.67 | 63.21 | 70.67 | 51.64 |
| TinyLlama-1.1B-intermediate-step-1195k-token-2.5T | 2.5T | 58.96 | 34.40 | 58.72 | 31.91 | 56.78 | 63.21 | 73.07 | 53.86 |
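
The card does not state which evaluation harness produced these numbers. A sketch of re-running one row with EleutherAI's lm-evaluation-harness, which is an assumption on my part; task names are that harness's ids, and scores may differ from the table depending on harness version and few-shot settings:

```python
# Sketch: re-evaluate a checkpoint on the table's seven tasks with
# lm-evaluation-harness (pip install lm-eval). Assumption: the card does not
# say this harness or these settings were used for the numbers above.
from lm_eval import simple_evaluate

results = simple_evaluate(
    model="hf",
    model_args="pretrained=TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T",
    tasks=["hellaswag", "openbookqa", "winogrande",
           "arc_challenge", "arc_easy", "boolq", "piqa"],
    batch_size=8,
)
for task, metrics in results["results"].items():
    print(task, metrics)
```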