Initialize project; model provided by the ModelHub XC community
Model: RedHatAI/Llama-2-7b-pruned70-retrained Source: Original Platform
.gitattributes (vendored, new file)
@@ -0,0 +1,35 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
README.md (new file)
@@ -0,0 +1,66 @@
---
base_model: neuralmagic/Llama-2-7b-pruned50-retrained
inference: true
model_type: llama
pipeline_tag: text-generation
datasets:
- cerebras/SlimPajama-627B
tags:
- sparse
---

# Llama-2-7b-pruned70-retrained

This repo contains model files for a [Llama 2 7B](https://huggingface.co/meta-llama/Llama-2-7b-hf) model that has had 50% of its parameters pruned in one shot with [SparseGPT](https://arxiv.org/abs/2301.00774), then retrained by [Cerebras](https://huggingface.co/cerebras) on 50B tokens from SlimPajama while maintaining sparsity. It was then pruned in one shot to 70% sparsity and trained on a further 100B tokens.
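
SparseGPT selects which weights to remove using second-order (Hessian-based) information, which is not reproduced here. As a much simpler illustration of what one-shot pruning to 70% unstructured sparsity means, the following NumPy sketch uses plain magnitude pruning (all names are illustrative, not this model's actual method):

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude fraction of the weights in one shot."""
    k = int(weights.size * sparsity)  # number of weights to remove
    if k == 0:
        return weights.copy()
    threshold = np.sort(np.abs(weights), axis=None)[k - 1]
    mask = np.abs(weights) > threshold  # keep only the larger magnitudes
    return weights * mask

rng = np.random.default_rng(0)
w = rng.standard_normal((1024, 1024)).astype(np.float32)
w_pruned = magnitude_prune(w, 0.70)

sparsity = float((w_pruned == 0).mean())
print(f"achieved sparsity: {sparsity:.2%}")
```

The kept weights are unchanged; only the pruning criterion differs between this sketch and SparseGPT.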

Official model weights from [Enabling High-Sparsity Foundational Llama Models with Efficient Pretraining and Deployment](https://arxiv.org/abs/2405.03594).

**Authors**: Neural Magic, Cerebras

## Usage

Below are a few code snippets to help you get started running the model.

### Sparse Transfer

By leveraging a pre-sparsified model's structure, you can fine-tune on new data efficiently, reducing hyperparameter tuning, training time, and computational cost. Learn about this process [here](https://neuralmagic.github.io/docs-v2/get-started/transfer).
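
Sparse transfer keeps the pruned weights at exactly zero while the remaining weights are fine-tuned; Neural Magic's tooling manages this through recipes. The core idea can be sketched in plain PyTorch by re-applying the pruning mask after each optimizer step (a minimal illustration on a toy layer, not the actual training setup):

```python
import torch

layer = torch.nn.Linear(64, 64, bias=False)

# One-shot magnitude prune to ~70% sparsity and remember the mask.
with torch.no_grad():
    flat = layer.weight.abs().flatten()
    k = int(flat.numel() * 0.70)
    threshold = flat.kthvalue(k).values
    mask = layer.weight.abs() > threshold
    layer.weight.mul_(mask)

optimizer = torch.optim.SGD(layer.parameters(), lr=0.1)

# One fine-tuning step on dummy data; re-applying the mask afterwards
# keeps pruned connections at exactly zero.
x, y = torch.randn(8, 64), torch.randn(8, 64)
loss = torch.nn.functional.mse_loss(layer(x), y)
loss.backward()
optimizer.step()
with torch.no_grad():
    layer.weight.mul_(mask)

sparsity = (layer.weight == 0).float().mean().item()
print(f"sparsity after update: {sparsity:.2%}")
```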

### Running the model

This model has not been fine-tuned for instruction following, but it can be run with the `transformers` library. For accelerated inference with sparsity, deploy with [nm-vllm](https://github.com/neuralmagic/nm-vllm) or [deepsparse](https://github.com/neuralmagic/deepsparse).

```python
# pip install transformers accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("neuralmagic/Llama-2-7b-pruned70-retrained")
model = AutoModelForCausalLM.from_pretrained("neuralmagic/Llama-2-7b-pruned70-retrained", device_map="auto")

input_text = "Write me a poem about Machine Learning."
inputs = tokenizer(input_text, return_tensors="pt").to("cuda")  # assumes a CUDA device

outputs = model.generate(**inputs)
print(tokenizer.decode(outputs[0]))
```

## Evaluation Benchmark Results

Model evaluation metrics and results. [UPDATE]

| Benchmark                                       | Metric  | Llama-2-7b | Llama-2-7b-pruned70-retrained |
|-------------------------------------------------|---------|------------|-------------------------------|
| [MMLU](https://arxiv.org/abs/2009.03300)        | 5-shot  | 46.9%      | 36.5%                         |
| [HellaSwag](https://arxiv.org/abs/1905.07830)   | 10-shot | 78.6%      | 74.1%                         |
| [WinoGrande](https://arxiv.org/abs/1907.10641)  | 5-shot  | 74.0%      | 69.5%                         |
| [ARC-c](https://arxiv.org/abs/1803.05457)       | 25-shot | 53.1%      | 45.4%                         |
| [TruthfulQA](https://arxiv.org/abs/2109.07958)  | 0-shot  | 38.8%      | 36.7%                         |
| [GSM8K](https://arxiv.org/abs/2110.14168)       | 5-shot  | 14.5%      | 8.0%                          |
| [HumanEval](https://arxiv.org/abs/2107.03374)   | pass@1  | 13.4%      | 14.4%                         |

## Model Training Details

[UPDATE]

## Help

For further support, and discussions on these models and AI in general, join [Neural Magic's Slack Community](https://join.slack.com/t/discuss-neuralmagic/shared_invite/zt-q1a1cnvo-YBoICSIw3L1dmQpjBeDurQ).
arc_challenge_25shot_bs16_bf16.json (new file)
@@ -0,0 +1,25 @@
{
  "results": {
    "arc_challenge": {
      "acc": 0.431740614334471,
      "acc_stderr": 0.014474591427196206,
      "acc_norm": 0.4539249146757679,
      "acc_norm_stderr": 0.014549221105171864
    }
  },
  "versions": {
    "arc_challenge": 0
  },
  "config": {
    "model": "sparseml",
    "model_args": "pretrained=/network/alexandre/research/cerebras/llama2_7B_sparse70_retrained/checkpoint,dtype=bfloat16",
    "num_fewshot": 25,
    "batch_size": "16",
    "batch_sizes": [],
    "device": "cuda:4",
    "no_cache": true,
    "limit": null,
    "bootstrap_iters": 100000,
    "description_dict": {}
  }
}
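
The raw lm-eval output above maps directly onto the README table. A small sketch (with the result inlined rather than read from disk) converting `acc_norm` to the reported percentage:

```python
# Inlined from arc_challenge_25shot_bs16_bf16.json above.
results = {
    "arc_challenge": {
        "acc": 0.431740614334471,
        "acc_norm": 0.4539249146757679,
    }
}

# ARC-Challenge is reported via normalized accuracy in the README table.
acc_norm_pct = round(results["arc_challenge"]["acc_norm"] * 100, 1)
print(f"ARC-c (25-shot): {acc_norm_pct}%")
```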
config.json (new file)
@@ -0,0 +1,29 @@
{
  "_name_or_path": "neuralmagic/Llama-2-7b-pruned70-retrained",
  "architectures": [
    "LlamaForCausalLM"
  ],
  "attention_bias": false,
  "attention_dropout": 0.0,
  "bos_token_id": 1,
  "eos_token_id": 2,
  "hidden_act": "silu",
  "hidden_size": 4096,
  "initializer_range": 0.02,
  "intermediate_size": 11008,
  "max_position_embeddings": 4096,
  "model_type": "llama",
  "num_attention_heads": 32,
  "num_hidden_layers": 32,
  "num_key_value_heads": 32,
  "pretraining_tp": 1,
  "rms_norm_eps": 1e-05,
  "rope_scaling": null,
  "rope_theta": 10000.0,
  "tie_word_embeddings": false,
  "tokenizer_class": "LlamaTokenizerFast",
  "torch_dtype": "bfloat16",
  "transformers_version": "4.40.0",
  "use_cache": true,
  "vocab_size": 32000
}
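
The config above pins down the dense parameter count. A quick sanity check from the listed sizes (standard Llama architecture assumed; embeddings are untied per `tie_word_embeddings: false`, and `num_key_value_heads` equals `num_attention_heads`, so the k/v projections are full-size):

```python
vocab, hidden, inter, layers = 32000, 4096, 11008, 32

embed = vocab * hidden            # input embeddings
attn = 4 * hidden * hidden        # q, k, v, o projections
mlp = 3 * hidden * inter          # gate, up, down projections
norms = 2 * hidden                # input + post-attention RMSNorm
per_layer = attn + mlp + norms

# Per-layer stack + final norm + untied lm_head.
total = embed + layers * per_layer + hidden + vocab * hidden
print(f"{total:,} parameters")  # 6,738,415,616 -- the usual Llama-2-7b count
```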
configuration.json (new file)
@@ -0,0 +1 @@
{"framework": "pytorch", "task": "text-generation", "allow_remote": true}
generation_config.json (new file)
@@ -0,0 +1,6 @@
{
  "_from_model_config": true,
  "bos_token_id": 1,
  "eos_token_id": 2,
  "transformers_version": "4.40.0"
}
gsm8k_5shot_bs16_bf16.json (new file)
@@ -0,0 +1,23 @@
{
  "results": {
    "gsm8k": {
      "acc": 0.07960576194086429,
      "acc_stderr": 0.00745592433867626
    }
  },
  "versions": {
    "gsm8k": 0
  },
  "config": {
    "model": "sparseml",
    "model_args": "pretrained=/network/alexandre/research/cerebras/llama2_7B_sparse70_retrained/checkpoint,dtype=bfloat16",
    "num_fewshot": 5,
    "batch_size": "16",
    "batch_sizes": [],
    "device": "cuda:6",
    "no_cache": true,
    "limit": null,
    "bootstrap_iters": 100000,
    "description_dict": {}
  }
}
hellaswag_10shot_bs16_bf16.json (new file)
@@ -0,0 +1,25 @@
{
  "results": {
    "hellaswag": {
      "acc": 0.5493925512846046,
      "acc_stderr": 0.004965375341643134,
      "acc_norm": 0.7405895239992033,
      "acc_norm_stderr": 0.004374153847826759
    }
  },
  "versions": {
    "hellaswag": 0
  },
  "config": {
    "model": "sparseml",
    "model_args": "pretrained=/network/alexandre/research/cerebras/llama2_7B_sparse70_retrained/checkpoint,dtype=bfloat16",
    "num_fewshot": 10,
    "batch_size": "16",
    "batch_sizes": [],
    "device": "cuda:2",
    "no_cache": true,
    "limit": null,
    "bootstrap_iters": 100000,
    "description_dict": {}
  }
}
humaneval_fp16.json (new file)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a0e58b4257cba27b3176bfa1789bc72526d9d6faff315377073a2cb5b99c51e1
size 1366
mmlu_5shot_bs4_bf16.json (new file)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:2929427f3957584b28353ae43428995f13d92f3361b654864e278e4e7cff9539
size 14303
model-00001-of-00003.safetensors (new file)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:14de06ce30ff5abb553419bddfbcf4f136bf25acdc6d4b78e81ce52153dd7b0a
size 4938985352

model-00002-of-00003.safetensors (new file)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:54099b8f1e7e8ad535877bd43c645ff6bb5ea8fecc3b56e5a50235d47e13dbb2
size 4947390880

model-00003-of-00003.safetensors (new file)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:641b79a8e42d777edc8268dd4ce54bfc16d361b47b32294fc22e3c8ca46f0107
size 3590488816
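
The three LFS pointers above record the shard sizes. Summing them lines up with storing roughly 6.74B parameters in bfloat16 (2 bytes each, per `torch_dtype` in config.json), plus a small amount of safetensors header metadata:

```python
# Shard sizes from the three LFS pointer files above.
shard_bytes = [4_938_985_352, 4_947_390_880, 3_590_488_816]
total_bytes = sum(shard_bytes)

params = 6_738_415_616               # dense Llama-2-7b parameter count
overhead = total_bytes - 2 * params  # bfloat16 = 2 bytes per parameter

print(f"total: {total_bytes:,} bytes, metadata overhead: {overhead:,} bytes")
```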
model.safetensors.index.json (new file)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:091fdb60b8ee3713df38fd6f9849c3aa82d7eeeb5f6d224a77139543b1db8652
size 23950
special_tokens_map.json (new file)
@@ -0,0 +1,23 @@
{
  "bos_token": {
    "content": "<s>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "eos_token": {
    "content": "</s>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "unk_token": {
    "content": "<unk>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  }
}
tokenizer.json (new file)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:bcd04f0eadf90287bd26e1a183ac487d8a141b09b06aecb7725bbdd343640f2e
size 1842767

tokenizer.model (new file)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:9e556afd44213b6bd1be2b850ebbbd98f5481437a8021afaf58ee7fb1818d347
size 499723
tokenizer_config.json (new file)
@@ -0,0 +1,41 @@
{
  "add_bos_token": true,
  "add_eos_token": false,
  "added_tokens_decoder": {
    "0": {
      "content": "<unk>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "1": {
      "content": "<s>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "2": {
      "content": "</s>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    }
  },
  "bos_token": "<s>",
  "clean_up_tokenization_spaces": false,
  "eos_token": "</s>",
  "legacy": false,
  "model_max_length": 1000000000000000019884624838656,
  "pad_token": null,
  "padding_side": "right",
  "sp_model_kwargs": {},
  "tokenizer_class": "LlamaTokenizer",
  "unk_token": "<unk>",
  "use_default_system_prompt": false
}
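
The huge `model_max_length` above is the transformers library's "no limit" sentinel, not a real context window; the usable context comes from `max_position_embeddings` in config.json. A sketch of how a caller would typically clamp it (illustrative):

```python
# Values copied from tokenizer_config.json and config.json above.
model_max_length = 1000000000000000019884624838656  # "unlimited" sentinel
max_position_embeddings = 4096                      # the model's real context window

effective_context = min(model_max_length, max_position_embeddings)
print(effective_context)  # 4096
```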
truthfulqa_mc_0shot_bs16_bf16.json (new file)
@@ -0,0 +1,25 @@
{
  "results": {
    "truthfulqa_mc": {
      "mc1": 0.2350061199510404,
      "mc1_stderr": 0.014843061507731618,
      "mc2": 0.3668136165316701,
      "mc2_stderr": 0.013583264391841515
    }
  },
  "versions": {
    "truthfulqa_mc": 1
  },
  "config": {
    "model": "sparseml",
    "model_args": "pretrained=/network/alexandre/research/cerebras/llama2_7B_sparse70_retrained/checkpoint,dtype=bfloat16",
    "num_fewshot": 0,
    "batch_size": "16",
    "batch_sizes": [],
    "device": "cuda:4",
    "no_cache": true,
    "limit": null,
    "bootstrap_iters": 100000,
    "description_dict": {}
  }
}
winogrande_5shot_bs16_bf16.json (new file)
@@ -0,0 +1,23 @@
{
  "results": {
    "winogrande": {
      "acc": 0.6945540647198106,
      "acc_stderr": 0.012945038632552029
    }
  },
  "versions": {
    "winogrande": 0
  },
  "config": {
    "model": "sparseml",
    "model_args": "pretrained=/network/alexandre/research/cerebras/llama2_7B_sparse70_retrained/checkpoint,dtype=bfloat16",
    "num_fewshot": 5,
    "batch_size": "16",
    "batch_sizes": [],
    "device": "cuda:2",
    "no_cache": true,
    "limit": null,
    "bootstrap_iters": 100000,
    "description_dict": {}
  }
}