Initialize project; model provided by the ModelHub XC community
Model: TinyLlama/TinyLlama-1.1B-intermediate-step-715k-1.5T Source: Original Platform
README.md (new file, 64 lines)
---
license: apache-2.0
datasets:
- cerebras/SlimPajama-627B
- bigcode/starcoderdata
language:
- en
---
<div align="center">

# TinyLlama-1.1B

</div>

https://github.com/jzhang38/TinyLlama
The TinyLlama project aims to **pretrain** a **1.1B Llama model on 3 trillion tokens**. With some proper optimization, we can achieve this within a span of "just" 90 days using 16 A100-40G GPUs 🚀🚀. Training started on 2023-09-01.

<div align="center">
<img src="https://huggingface.co/PY007/TinyLlama-1.1B-intermediate-step-240k-503b/resolve/main/TinyLlama_logo.png" width="300"/>
</div>
We adopted exactly the same architecture and tokenizer as Llama 2, so TinyLlama can be used as a drop-in model in many open-source projects built on Llama (see the sketch below). TinyLlama is also compact, with only 1.1B parameters, which makes it a good fit for applications that demand a restricted compute and memory footprint.
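Because the architecture and tokenizer match Llama 2, the checkpoint loads through the standard `transformers` causal-LM classes with no custom code. A minimal sketch of such drop-in use, assuming `transformers>=4.31`, `torch`, and `accelerate` (for `device_map`) are installed; the repo id is the same one used in the pipeline example further down this card:

```
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Same repo id as in the pipeline example below.
model_id = "PY007/TinyLlama-1.1B-intermediate-step-715k-1.5T"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # 1.1B params fit easily on a single GPU in fp16
    device_map="auto",
)

# Any code path written for Llama 2 checkpoints should work unchanged.
inputs = tokenizer("The TinyLlama project aims to", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```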
#### This Model

This is an intermediate checkpoint after 715K steps, trained on 1.49T tokens. **We suggest you not use this directly for inference.**
#### How to use

You will need transformers>=4.31.

Do check the [TinyLlama](https://github.com/jzhang38/TinyLlama) GitHub page for more information.
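The card pins transformers>=4.31 (the release that added Llama 2 support), so it can help to fail fast on older installs. A quick guard, assuming the `packaging` library is available (it is a dependency of `transformers` itself):

```
# Guard against silently running on an older transformers release.
from packaging import version
import transformers

if version.parse(transformers.__version__) < version.parse("4.31.0"):
    raise RuntimeError(
        f"transformers {transformers.__version__} found; this model needs >= 4.31"
    )
```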
```
import torch
import transformers
from transformers import AutoTokenizer

model = "PY007/TinyLlama-1.1B-intermediate-step-715k-1.5T"

# Load the tokenizer separately so its EOS token id can be passed to generation.
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,  # half precision keeps the 1.1B model small in memory
    device_map="auto",
)

sequences = pipeline(
    'The TinyLlama project aims to pretrain a 1.1B Llama model on 3 trillion tokens. With some proper optimization, we can achieve this within a span of "just" 90 days using 16 A100-40G GPUs 🚀🚀. The training has started on 2023-09-01.',
    do_sample=True,
    top_k=10,                # sample only from the 10 most likely next tokens
    num_return_sequences=1,
    repetition_penalty=1.5,  # discourage repeated tokens
    eos_token_id=tokenizer.eos_token_id,
    max_length=500,
)
for seq in sequences:
    print(f"Result: {seq['generated_text']}")
```
#### Eval

| Model | Pretrain Tokens | HellaSwag | Obqa | WinoGrande | ARC_c | ARC_e | boolq | piqa | avg |
|-------|-----------------|-----------|------|------------|-------|-------|-------|------|-----|
| Pythia-1.0B | 300B | 47.16 | 31.40 | 53.43 | 27.05 | 48.99 | 60.83 | 69.21 | 48.30 |
| TinyLlama-1.1B-intermediate-step-50K-104b | 103B | 43.50 | 29.80 | 53.28 | 24.32 | 44.91 | 59.66 | 67.30 | 46.11 |
| TinyLlama-1.1B-intermediate-step-240k-503b | 503B | 49.56 | 31.40 | 55.80 | 26.54 | 48.32 | 56.91 | 69.42 | 48.28 |
| TinyLlama-1.1B-intermediate-step-480k-1007B | 1007B | 52.54 | 33.40 | 55.96 | 27.82 | 52.36 | 59.54 | 69.91 | 50.22 |
| TinyLlama-1.1B-intermediate-step-715k-1.5T | 1.49T | 53.68 | 35.20 | 58.33 | 29.18 | 51.89 | 59.08 | 71.65 | 51.29 |
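The task names above correspond to tasks in EleutherAI's lm-evaluation-harness. The card does not state the harness version or few-shot settings, so the following is only a sketch of how comparable numbers might be reproduced, assuming lm-eval v0.4+ is installed:

```
# Sketch only: eval settings (harness version, few-shot counts) are not
# specified on this card, so results may not match the table exactly.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=PY007/TinyLlama-1.1B-intermediate-step-715k-1.5T,dtype=float16",
    tasks=["hellaswag", "openbookqa", "winogrande",
           "arc_challenge", "arc_easy", "boolq", "piqa"],
)
for task, metrics in results["results"].items():
    print(task, metrics)
```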