---
license: other
license_name: nvidia-open-model-license
license_link: >-
  https://developer.download.nvidia.com/licenses/nvidia-open-model-license-agreement-june-2024.pdf
library_name: transformers
pipeline_tag: text-generation
language:
- en
tags:
- nvidia
- llama-3
- pytorch
---

# Model Overview

Minitron-8B-Base is a large language model (LLM) obtained by pruning Nemotron-4 15B; specifically, we prune the model embedding size, the number of attention heads, and the MLP intermediate dimension. Following pruning, we perform continued training with distillation using 94 billion tokens to arrive at the final model; for this we use the continuous pre-training data corpus used for Nemotron-4 15B.

Deriving the Minitron 8B and 4B models from the base 15B model using our approach requires up to **40x fewer training tokens** per model compared to training from scratch; this results in **compute cost savings of 1.8x** for training the full model family (15B, 8B, and 4B). Minitron models exhibit up to a 16% improvement in MMLU scores compared to training from scratch, perform comparably to other community models such as Mistral 7B, Gemma 7B, and Llama-3 8B, and outperform state-of-the-art compression techniques from the literature. Please refer to our [arXiv paper](https://arxiv.org/abs/2407.14679) for more details.
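
For readers who want a concrete picture of the distillation step described above, below is a minimal sketch of a generic logit-distillation objective (soft teacher targets combined with the standard language-modeling loss). This is an illustrative assumption, not the Minitron training code; the actual losses, weighting, and temperature we use are described in the paper.

```python
# Illustrative sketch of logit distillation: a pruned student mimics its teacher.
# The loss weighting and temperature below are assumptions, not the paper's settings.
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, alpha=0.5, temperature=2.0):
    # Standard next-token cross-entropy on the ground-truth corpus tokens.
    lm_loss = F.cross_entropy(
        student_logits.view(-1, student_logits.size(-1)), labels.view(-1)
    )
    # KL divergence between temperature-softened teacher and student distributions.
    kd_loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * (temperature ** 2)
    # Weighted combination of the two terms.
    return alpha * lm_loss + (1.0 - alpha) * kd_loss
```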

This model is for research and development only.

**Model Developer:** NVIDIA

**Model Dates:** Minitron-8B-Base was trained between February 2024 and June 2024.

## License

Minitron-8B-Base is released under the [NVIDIA Open Model License Agreement](https://developer.download.nvidia.com/licenses/nvidia-open-model-license-agreement-june-2024.pdf).

## Model Architecture

Minitron-8B-Base uses a model embedding size of 4096, 48 attention heads, and an MLP intermediate dimension of 16384. It also uses Grouped-Query Attention (GQA) and Rotary Position Embeddings (RoPE).

**Architecture Type:** Transformer Decoder (auto-regressive language model)

**Network Architecture:** Nemotron-4

**Input Type:** Text

**Input Format:** String

**Input Parameters:** None

**Other Properties Related to Input:** None

**Output Type:** Text

**Output Format:** String

**Output Parameters:** None

**Other Properties Related to Output:** None
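
As a quick sanity check, the hyper-parameters listed above can also be read off the published configuration. The sketch below assumes the standard `transformers` `AutoConfig` API and the conventional attribute names (`hidden_size`, `num_attention_heads`, `intermediate_size`); the exact field names depend on the model's config class, and the from-source `transformers` install described under Usage below may be required.

```python
# Sketch: inspect the published config to confirm the architecture described above.
# The attribute names used here are the conventional transformers names and are
# assumptions; check the model's config class for the exact fields.
from transformers import AutoConfig

config = AutoConfig.from_pretrained("nvidia/Minitron-8B-Base")
print(config.hidden_size)          # expected: 4096 (model embedding size)
print(config.num_attention_heads)  # expected: 48
print(config.intermediate_size)    # expected: 16384 (MLP intermediate dimension)
```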

## Usage

Support for this model will be added in the upcoming `transformers` release. In the meantime, please install the library from source:

```
pip install git+https://github.com/huggingface/transformers
```

The following code provides an example of how to load the Minitron-8B model and use it to perform text generation.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the tokenizer and model
model_path = "nvidia/Minitron-8B-Base"
tokenizer = AutoTokenizer.from_pretrained(model_path)

device = 'cuda'
dtype = torch.bfloat16
model = AutoModelForCausalLM.from_pretrained(model_path, torch_dtype=dtype, device_map=device)

# Prepare the input text
prompt = "To be or not to be,"
input_ids = tokenizer.encode(prompt, return_tensors="pt").to(model.device)

# Generate the output
output_ids = model.generate(input_ids, max_length=50, num_return_sequences=1)

# Decode and print the output
output_text = tokenizer.decode(output_ids[0], skip_special_tokens=True)
print(output_text)
```
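
The same generation can also be driven through the high-level `pipeline` API. This is an optional sketch, not part of the official instructions; the `max_new_tokens` value and `device_map` choice are illustrative assumptions.

```python
# Optional sketch: the same model driven through the transformers pipeline API.
# max_new_tokens and device_map are illustrative choices, not tuned recommendations.
import torch
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="nvidia/Minitron-8B-Base",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
print(generator("To be or not to be,", max_new_tokens=30)[0]["generated_text"])
```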

## Dataset & Training

**Data Collection Method:** Hybrid

**Labeling Method:** Not Applicable

**Properties:** The training corpus for Minitron-8B-Base consists of English and multilingual text, as well as code. Our sources cover a variety of document types, such as webpages, dialogue, articles, and other written materials. The corpus spans domains including legal, math, science, finance, and more. In our continued training set, we introduce a small portion of question-answering and alignment-style data to improve model performance.

**Data Freshness:** The pretraining data has a cutoff of June 2023.

## Evaluation Results

*5-shot performance.* Language Understanding evaluated using [Massive Multitask Language Understanding](https://arxiv.org/abs/2009.03300):

| Average |
| :---- |
| 64.5 |

*Zero-shot performance.* Evaluated using select datasets from the [LM Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) with additions:

| HellaSwag | Winogrande | GSM8K | ARC-C | XLSum |
| :------------- | :------------- | :------------- | :------------- | :------------- |
| 81.6 | 80.3 | 54.2 | 49.2 | 31.1 |
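
To reproduce numbers of this kind, the harness can be invoked programmatically. Below is a hedged sketch assuming the `simple_evaluate` entry point of recent `lm-eval` releases (v0.4+); the exact API, task names, and settings may differ from the configuration used to produce the table above.

```python
# Hedged sketch: scoring the model with the LM Evaluation Harness (lm-eval >= 0.4).
# The API surface and task names vary across harness versions, and this is not
# necessarily the exact configuration used for the results reported here.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=nvidia/Minitron-8B-Base,dtype=bfloat16",
    tasks=["hellaswag", "winogrande", "arc_challenge"],
    num_fewshot=0,
    batch_size=8,
)
print(results["results"])
```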

*Code generation performance*. Evaluated using [HumanEval](https://github.com/openai/human-eval):

| p@1, 0-Shot |
| :------------- |
| 31.6 |

Please refer to our [paper](https://arxiv.org/abs/2407.14679) for the full set of results.

## Inference

**Engine:** TensorRT-LLM

**Test Hardware:** NVIDIA A100

**DType:** Float16/BFloat16

## Limitations

The model was trained on data that contains toxic language, unsafe content, and societal biases originally crawled from the internet. Therefore, the model may amplify those biases and return toxic responses, especially when prompted with toxic prompts. The model may generate answers that are inaccurate, omit key information, or include irrelevant or redundant text, producing socially unacceptable or undesirable output, even if the prompt itself does not include anything explicitly offensive.

## Ethical Considerations

NVIDIA believes Trustworthy AI is a shared responsibility, and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse. Please report security vulnerabilities or NVIDIA AI Concerns [here](https://www.nvidia.com/en-us/support/submit-security-vulnerability/).

## Citation

If you find our work helpful, please consider citing our paper:

```
@article{minitron2024,
  title={Compact Language Models via Pruning and Knowledge Distillation},
  author={Saurav Muralidharan and Sharath Turuvekere Sreenivas and Raviraj Joshi and Marcin Chochowski and Mostofa Patwary and Mohammad Shoeybi and Bryan Catanzaro and Jan Kautz and Pavlo Molchanov},
  journal={arXiv preprint arXiv:2407.14679},
  year={2024},
  url={https://arxiv.org/abs/2407.14679},
}
```