Initialize the project; model provided by the ModelHub XC community

Model: RedHatAI/Llama-2-7b-ultrachat200k-pruned_50
Source: Original Platform
ModelHub XC
2026-04-23 09:47:58 +08:00
commit d27239b8e3
19 changed files with 310 additions and 0 deletions

.gitattributes vendored Normal file

@@ -0,0 +1,35 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
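The rules above route large binary artifacts (weights, archives, tokenizer models) through Git LFS while leaving small text files as plain git objects. As a rough sanity check, Python's `fnmatch` approximates gitattributes glob matching on basenames (real gitattributes semantics differ in corner cases such as `**` and path-anchored patterns):

```python
from fnmatch import fnmatch

# A few of the LFS patterns from the .gitattributes above
lfs_patterns = ["*.safetensors", "*.model", "*.bin", "*tfevents*"]

def uses_lfs(filename):
    # Approximate gitattributes matching on the basename; corner cases
    # (e.g. "**", path-anchored patterns) behave differently in real git.
    return any(fnmatch(filename, pattern) for pattern in lfs_patterns)

print(uses_lfs("model-00001-of-00003.safetensors"))  # True: stored via LFS
print(uses_lfs("tokenizer.model"))                   # True
print(uses_lfs("config.json"))                       # False: plain git object
```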

README.md Normal file

@@ -0,0 +1,71 @@
---
base_model: neuralmagic/Llama-2-7b-pruned50-retrained
inference: true
model_type: llama
pipeline_tag: text-generation
datasets:
- cerebras/SlimPajama-627B
- HuggingFaceH4/ultrachat_200k
tags:
- sparse
- chat
---
# Llama-2-7b-pruned50-retrained-ultrachat
This repo contains a [50% sparse Llama 2 7B](https://huggingface.co/neuralmagic/Llama-2-7b-pruned50-retrained) finetuned for chat tasks using the [UltraChat 200k](https://huggingface.co/datasets/HuggingFaceH4/ultrachat_200k) dataset.
Official model weights from [Enabling High-Sparsity Foundational Llama Models with Efficient Pretraining and Deployment](https://arxiv.org/abs/2405.03594).
**Authors**: Neural Magic, Cerebras
## Usage
Below are code snippets showing how to quickly get started running the model.
### Sparse Transfer
By leveraging a pre-sparsified model's structure, you can efficiently fine-tune on new data, reducing hyperparameter tuning, training time, and computational cost. Learn about this process [here](https://neuralmagic.github.io/docs-v2/get-started/transfer).
### Running the model
This model may be run with the transformers library. For accelerated inference with sparsity, deploy with [nm-vllm](https://github.com/neuralmagic/nm-vllm) or [deepsparse](https://github.com/neuralmagic/deepsparse).
```python
# pip install transformers accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("neuralmagic/Llama-2-7b-pruned50-retrained-ultrachat")
model = AutoModelForCausalLM.from_pretrained("neuralmagic/Llama-2-7b-pruned50-retrained-ultrachat", device_map="auto")
input_text = "Write me a poem about Machine Learning."
messages = [{"role": "user", "content": input_text}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(outputs[0]))
```
## Evaluation Benchmark Results
Model evaluation metrics and results.
| Benchmark | Metric | Llama-2-7b-ultrachat | Llama-2-7b-pruned50-retrained-ultrachat |
|------------------------------------------------|---------------|-------------|-------------------------------|
| [MMLU](https://arxiv.org/abs/2009.03300) | 5-shot | 46.1% | 41.4% |
| [HellaSwag](https://arxiv.org/abs/1905.07830) | 0-shot | 75.9% | 73.5% |
| [WinoGrande](https://arxiv.org/abs/1907.10641) | 5-shot | 72.6% | 67.8% |
| [ARC-c](https://arxiv.org/abs/1803.05457) | 25-shot | 52.8% | 49.0% |
| [TruthfulQA](https://arxiv.org/abs/2109.07958) | 5-shot | 44.8% | 39.5% |
| [GSM8K](https://arxiv.org/abs/2110.14168) | 5-shot | 12.4% | 8.0% |
| [AlpacaEval](https://github.com/tatsu-lab/alpaca_eval) ([Llama-2-70b-chat-hf](https://huggingface.co/meta-llama/Llama-2-70b-chat-hf) evaluator) | Win rate | 57.6% | 60.1% |
| [AlpacaEval](https://github.com/tatsu-lab/alpaca_eval) (GPT-4 Turbo evaluator) | Win rate | 60.6% | 59.0% |
## Model Training Details
This model was obtained by sparse transfer of the sparse foundational model [Llama-2-7b-pruned50-retrained](https://huggingface.co/neuralmagic/Llama-2-7b-pruned50-retrained) on the [ultrachat_200k](https://huggingface.co/datasets/HuggingFaceH4/ultrachat_200k) dataset.
Training was performed for 2 epochs and used [SquareHead](https://arxiv.org/abs/2310.06927) knowledge distillation with [Llama-2-7b-ultrachat](https://huggingface.co/neuralmagic/Llama-2-7b-ultrachat) as the teacher.
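The SquareHead objective can be sketched (paraphrased from the linked paper and heavily simplified; this is not the authors' implementation) as a normalized squared error between student and teacher hidden states, accumulated over layers:

```python
# Sketch of a per-layer SquareHead-style distillation loss (paraphrased from
# arXiv:2310.06927): normalized MSE between student and teacher hidden states,
# summed over layers. Plain-Python lists stand in for hidden-state tensors.

def squarehead_layer_loss(student, teacher, eps=1e-9):
    """Normalized MSE for one layer: ||s - t||^2 / (||t||^2 + eps)."""
    num = sum((s - t) ** 2 for s, t in zip(student, teacher))
    den = sum(t ** 2 for t in teacher) + eps
    return num / den

def squarehead_loss(student_layers, teacher_layers):
    return sum(squarehead_layer_loss(s, t)
               for s, t in zip(student_layers, teacher_layers))

# Identical hidden states give zero loss; any mismatch is penalized
# relative to the teacher's scale.
print(squarehead_loss([[1.0, 2.0]], [[1.0, 2.0]]))  # 0.0
```

The per-layer normalization keeps each layer's contribution comparable regardless of activation magnitude, which is the property the paper exploits for sparse fine-tuning.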
## Help
For further support and discussion of these models and AI in general, join [Neural Magic's Slack Community](https://join.slack.com/t/discuss-neuralmagic/shared_invite/zt-q1a1cnvo-YBoICSIw3L1dmQpjBeDurQ).


@@ -0,0 +1,25 @@
{
"results": {
"arc_challenge": {
"acc": 0.4667235494880546,
"acc_stderr": 0.01457899585960581,
"acc_norm": 0.48976109215017066,
"acc_norm_stderr": 0.014608326906285019
}
},
"versions": {
"arc_challenge": 0
},
"config": {
"model": "sparseml",
"model_args": "pretrained=/network/alexandre/research/cerebras/llama2_7B_sparse50_45B_retrained/ultrachat200k/llama2_7B_45B_sparse50_LR2e-4_GC2_E2/training,dtype=bfloat16",
"num_fewshot": 25,
"batch_size": "16",
"batch_sizes": [],
"device": "cuda:1",
"no_cache": true,
"limit": null,
"bootstrap_iters": 100000,
"description_dict": {}
}
}

config.json Normal file

@@ -0,0 +1,29 @@
{
"_name_or_path": "neuralmagic/Llama-2-7b-pruned50-retrained-ultrachat",
"architectures": [
"LlamaForCausalLM"
],
"attention_bias": false,
"attention_dropout": 0.0,
"bos_token_id": 1,
"eos_token_id": 2,
"hidden_act": "silu",
"hidden_size": 4096,
"initializer_range": 0.02,
"intermediate_size": 11008,
"max_position_embeddings": 4096,
"model_type": "llama",
"num_attention_heads": 32,
"num_hidden_layers": 32,
"num_key_value_heads": 32,
"pretraining_tp": 1,
"rms_norm_eps": 1e-05,
"rope_scaling": null,
"rope_theta": 10000.0,
"tie_word_embeddings": false,
"tokenizer_class": "LlamaTokenizerFast",
"torch_dtype": "bfloat16",
"transformers_version": "4.40.0",
"use_cache": true,
"vocab_size": 32000
}
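The architecture fields in config.json fully determine the dense parameter count. A quick tally using the standard Llama layout (untied input/output embeddings, since `tie_word_embeddings` is false):

```python
# Parameter count implied by the config above (dense Llama-2-7B layout).
h, inter, layers, vocab = 4096, 11008, 32, 32000

embed = vocab * h      # input embeddings
lm_head = vocab * h    # untied output projection
attn = 4 * h * h       # q, k, v, o projections (attention_bias is false)
mlp = 3 * h * inter    # gate, up, down projections
norms = 2 * h          # two RMSNorm weight vectors per layer
per_layer = attn + mlp + norms

total = embed + lm_head + layers * per_layer + h  # + final RMSNorm
print(total)  # 6738415616, i.e. ~6.7B parameters; at 2 bytes each in
              # bfloat16 that is ~13.5 GB, consistent with the safetensors
              # shard sizes listed in this commit
```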

configuration.json Normal file

@@ -0,0 +1 @@
{"framework": "pytorch", "task": "text-generation", "allow_remote": true}

generation_config.json Normal file

@@ -0,0 +1,6 @@
{
"_from_model_config": true,
"bos_token_id": 1,
"eos_token_id": 2,
"transformers_version": "4.40.0"
}


@@ -0,0 +1,23 @@
{
"results": {
"gsm8k": {
"acc": 0.07960576194086429,
"acc_stderr": 0.007455924338676284
}
},
"versions": {
"gsm8k": 0
},
"config": {
"model": "sparseml",
"model_args": "pretrained=/network/alexandre/research/cerebras/llama2_7B_sparse50_45B_retrained/ultrachat200k/llama2_7B_45B_sparse50_LR2e-4_GC2_E2/training,dtype=bfloat16",
"num_fewshot": 5,
"batch_size": "16",
"batch_sizes": [],
"device": "cuda:6",
"no_cache": true,
"limit": null,
"bootstrap_iters": 100000,
"description_dict": {}
}
}
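The reported `acc_stderr` appears to be the usual sample standard error of a binomial accuracy; for GSM8K (1,319 test questions) it reproduces to within floating-point rounding:

```python
import math

p = 0.07960576194086429  # acc from the GSM8K results block above
n = 1319                 # GSM8K test-set size
# Sample standard error of the mean for a 0/1 accuracy metric
stderr = math.sqrt(p * (1 - p) / (n - 1))
print(stderr)  # ≈ 0.0074559, matching acc_stderr above
```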


@@ -0,0 +1,25 @@
{
"results": {
"hellaswag": {
"acc": 0.5541724756024696,
"acc_stderr": 0.004960408362133245,
"acc_norm": 0.7353116908982275,
"acc_norm_stderr": 0.00440265476726963
}
},
"versions": {
"hellaswag": 0
},
"config": {
"model": "sparseml",
"model_args": "pretrained=/network/alexandre/research/cerebras/llama2_7B_sparse50_45B_retrained/ultrachat200k/llama2_7B_45B_sparse50_LR2e-4_GC2_E2/training,dtype=bfloat16",
"num_fewshot": 10,
"batch_size": "16",
"batch_sizes": [],
"device": "cuda:6",
"no_cache": true,
"limit": null,
"bootstrap_iters": 100000,
"description_dict": {}
}
}

mmlu_5shot_bs4_bf16.json Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:1e4fa1ee19845996940f26ad8f606f028ff4bee1db0cba118079e0d4f242310d
size 14252
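Several entries in this commit are Git LFS pointer files rather than the payloads themselves; each is plain text with `version`, `oid`, and `size` keys. A minimal parser sketch (a hypothetical helper, not part of this repo):

```python
def parse_lfs_pointer(text):
    """Parse a Git LFS pointer file into its version, oid, and size fields."""
    # Each line is "key value"; split on the first space only.
    fields = dict(line.split(" ", 1) for line in text.strip().splitlines())
    return {
        "version": fields["version"],
        "oid": fields["oid"].split(":", 1)[1],  # strip the "sha256:" prefix
        "size": int(fields["size"]),
    }

# The pointer for mmlu_5shot_bs4_bf16.json shown above
pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:1e4fa1ee19845996940f26ad8f606f028ff4bee1db0cba118079e0d4f242310d
size 14252
"""
info = parse_lfs_pointer(pointer)
print(info["size"])  # 14252 bytes: only this pointer lives in git history
```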


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d55ad804981e5548cf63e29e9921cf0961dfd13f4c155e0fccfb18188f816991
size 4938985352


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a86e033c2ae1911684278dc4b72720dc125bf9dd18766aec6dfd0da517e70c04
size 4947390880


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:049ac64154b2db7e46b11d45a5f2fbe7f4af694a8af96053da5d297b79ab847b
size 3590488816


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:091fdb60b8ee3713df38fd6f9849c3aa82d7eeeb5f6d224a77139543b1db8652
size 23950

special_tokens_map.json Normal file

@@ -0,0 +1,23 @@
{
"bos_token": {
"content": "<s>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
},
"eos_token": {
"content": "</s>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
},
"unk_token": {
"content": "<unk>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
}
}

tokenizer.json Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:bcd04f0eadf90287bd26e1a183ac487d8a141b09b06aecb7725bbdd343640f2e
size 1842767

tokenizer.model Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:9e556afd44213b6bd1be2b850ebbbd98f5481437a8021afaf58ee7fb1818d347
size 499723

tokenizer_config.json Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:3c83dc4260b41a1c38bb2a26de2ca1f2c7a995aaeea7f1460eda7e0986064d06
size 1365


@@ -0,0 +1,25 @@
{
"results": {
"truthfulqa_mc": {
"mc1": 0.2631578947368421,
"mc1_stderr": 0.015415241740237012,
"mc2": 0.39511662473536746,
"mc2_stderr": 0.014932053345393062
}
},
"versions": {
"truthfulqa_mc": 1
},
"config": {
"model": "sparseml",
"model_args": "pretrained=/network/alexandre/research/cerebras/llama2_7B_sparse50_45B_retrained/ultrachat200k/llama2_7B_45B_sparse50_LR2e-4_GC2_E2/training,dtype=bfloat16",
"num_fewshot": 0,
"batch_size": "16",
"batch_sizes": [],
"device": "cuda:4",
"no_cache": true,
"limit": null,
"bootstrap_iters": 100000,
"description_dict": {}
}
}


@@ -0,0 +1,23 @@
{
"results": {
"winogrande": {
"acc": 0.6779794790844514,
"acc_stderr": 0.01313207020207106
}
},
"versions": {
"winogrande": 0
},
"config": {
"model": "sparseml",
"model_args": "pretrained=/network/alexandre/research/cerebras/llama2_7B_sparse50_45B_retrained/ultrachat200k/llama2_7B_45B_sparse50_LR2e-4_GC2_E2/training,dtype=bfloat16",
"num_fewshot": 5,
"batch_size": "16",
"batch_sizes": [],
"device": "cuda:2",
"no_cache": true,
"limit": null,
"bootstrap_iters": 100000,
"description_dict": {}
}
}