Compare commits

...

10 Commits

| Author | SHA1 | Message | Date |
| :--- | :--- | :--- | :--- |
| Pavlo Molchanov | e2c30f2d4c | Update README.md | 2025-02-14 19:04:05 +00:00 |
| Saurav Muralidharan | 70fa5997af | Update installation instructions | 2024-08-20 11:03:27 -07:00 |
| Saurav Muralidharan | c4b8f786c9 | Update nemo checkpoint | 2024-08-20 09:54:24 -07:00 |
| Saurav Muralidharan | 501d23a6ef | Add back tokenizer.model | 2024-08-17 08:41:54 -07:00 |
| Saurav Muralidharan | 0959baf2c6 | Model name change | 2024-08-16 11:57:24 -07:00 |
| Saurav Muralidharan | 6d7f9b30e1 | Update model card | 2024-08-15 12:37:28 -07:00 |
| Saurav Muralidharan | c648d1c2cb | Update with v2 model and config | 2024-08-15 12:29:36 -07:00 |
| Saurav Muralidharan | 34eef636ad | Add copyright notice | 2024-08-13 17:29:01 -07:00 |
| Saurav Muralidharan | 54057c9c74 | Typo | 2024-08-13 14:12:03 -07:00 |
| Saurav Muralidharan | 6d40552c0d | Update model name | 2024-08-13 14:11:21 -07:00 |
9 changed files with 8174 additions and 132 deletions

.gitattributes (1 line changed)

@@ -34,3 +34,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
nemo/*.nemo filter=lfs diff=lfs merge=lfs -text
tokenizer.json filter=lfs diff=lfs merge=lfs -text

NOTICE (new file, 3 lines)

@@ -0,0 +1,3 @@
Copyright (c) 2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
NVIDIA CORPORATION, its affiliates and licensors retain all intellectual property and proprietary rights in and to this material, related documentation and any modifications thereto. Any use, reproduction, disclosure or distribution of this material and related documentation without an express license agreement from NVIDIA CORPORATION or its affiliates is strictly prohibited.

README.md (240 lines changed)

@@ -1,90 +1,150 @@
---
license: other
license_name: nvidia-open-model-license
license_link: >-
https://developer.download.nvidia.com/licenses/nvidia-open-model-license-agreement-june-2024.pdf
---
# Minitron 8B Base
Minitron is a family of small language models (SLMs) obtained by pruning NVIDIA's [Nemotron-4 15B](https://arxiv.org/abs/2402.16819) model. We prune the model embedding size, attention heads, and MLP intermediate dimension; we then perform continued training with distillation to arrive at the final models.
Deriving the Minitron 8B and 4B models from the base 15B model using our approach requires up to **40x fewer training tokens** per model compared to training from scratch; this results in **compute cost savings of 1.8x** for training the full model family (15B, 8B, and 4B). Minitron models exhibit up to a 16% improvement in MMLU scores compared to training from scratch, perform comparably to other community models such as Mistral 7B, Gemma 7B and Llama-3 8B, and outperform state-of-the-art compression techniques from the literature. Please refer to our [arXiv paper](https://arxiv.org/abs/2407.14679) for more details.
Minitron models are for research and development only.
## HuggingFace Quickstart
The [PR](https://github.com/huggingface/transformers/pull/31699) to support our models in Hugging Face is under review and expected to be merged soon. In the meantime, this [branch](https://github.com/suiyoubi/transformers/tree/aot/nemotron-support) at [commit ID 63d9cb0](https://github.com/suiyoubi/transformers/commit/63d9cb0afd2bf5d4cb5431ba1b2c4e353752a937) can be used for Minitron models:
```
git clone git@github.com:suiyoubi/transformers.git
cd transformers
git checkout 63d9cb0
pip install .
```
The following code provides an example of how to load the Minitron-8B model and use it to perform text generation.
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the tokenizer and model
model_path = "nvidia/Minitron-8B-Base"
tokenizer = AutoTokenizer.from_pretrained(model_path)

device = "cuda"
dtype = torch.bfloat16
model = AutoModelForCausalLM.from_pretrained(model_path, torch_dtype=dtype, device_map=device)

# Prepare the input text
prompt = "To be or not to be,"
input_ids = tokenizer.encode(prompt, return_tensors="pt").to(model.device)

# Generate the output (max_length counts the prompt tokens as well)
output_ids = model.generate(input_ids, max_length=50, num_return_sequences=1)

# Decode and print the output
output_text = tokenizer.decode(output_ids[0], skip_special_tokens=True)
print(output_text)
```
## License
Minitron is released under the [NVIDIA Open Model License Agreement](https://developer.download.nvidia.com/licenses/nvidia-open-model-license-agreement-june-2024.pdf).
## Evaluation Results
*5-shot performance.* Language Understanding evaluated using [Massive Multitask Language Understanding](https://arxiv.org/abs/2009.03300):
| Average |
| :---- |
| 63.8 |
*Zero-shot performance.* Evaluated using select datasets from the [LM Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) with additions:
| HellaSwag | Winogrande | GSM8K | ARC-C | XLSum |
| :-------- | :--------- | :---- | :---- | :---- |
| 80.7 | 79.0 | 51.3 | 52.6 | 31.2 |
*Code generation performance*. Evaluated using [HumanEval](https://github.com/openai/human-eval):
| p@1, 0-Shot |
| :------------- |
| 31.6 |
Please refer to our [paper](https://arxiv.org/abs/2407.14679) for the full set of results.
## Citation
If you find our work helpful, please consider citing our paper:
```
@article{minitron2024,
title={Compact Language Models via Pruning and Knowledge Distillation},
author={Saurav Muralidharan and Sharath Turuvekere Sreenivas and Raviraj Joshi and Marcin Chochowski and Mostofa Patwary and Mohammad Shoeybi and Bryan Catanzaro and Jan Kautz and Pavlo Molchanov},
journal={arXiv preprint arXiv:2407.14679},
year={2024},
url={https://arxiv.org/abs/2407.14679},
}
```
---
license: other
license_name: nvidia-open-model-license
license_link: >-
https://developer.download.nvidia.com/licenses/nvidia-open-model-license-agreement-june-2024.pdf
library_name: transformers
pipeline_tag: text-generation
language:
- en
tags:
- nvidia
- llama-3
- pytorch
---
# Model Overview
Minitron-8B-Base is a large language model (LLM) obtained by pruning Nemotron-4 15B; specifically, we prune model embedding size, number of attention heads, and MLP intermediate dimension. Following pruning, we perform continued training with distillation using 94 billion tokens to arrive at the final model; we use the continuous pre-training data corpus used in Nemotron-4 15B for this purpose.
Deriving the Minitron 8B and 4B models from the base 15B model using our approach requires up to **40x fewer training tokens** per model compared to training from scratch; this results in **compute cost savings of 1.8x** for training the full model family (15B, 8B, and 4B). Minitron models exhibit up to a 16% improvement in MMLU scores compared to training from scratch, perform comparably to other community models such as Mistral 7B, Gemma 7B and Llama-3 8B, and outperform state-of-the-art compression techniques from the literature. Please refer to our [arXiv paper](https://arxiv.org/abs/2407.14679) for more details.
This model is for research and development only.
**Model Developer:** NVIDIA
**Model Dates:** Minitron-8B-Base was trained between February 2024 and June 2024.
## License
Minitron-8B-Base is released under the [NVIDIA Open Model License Agreement](https://developer.download.nvidia.com/licenses/nvidia-open-model-license-agreement-june-2024.pdf).
## Model Architecture
Minitron-8B-Base uses a model embedding size of 4096, 48 attention heads, and an MLP intermediate dimension of 16384.
It also uses Grouped-Query Attention (GQA) and Rotary Position Embeddings (RoPE).
**Architecture Type:** Transformer Decoder (auto-regressive language model)
**Network Architecture:** Nemotron-4
**Input Type:** Text
**Input Format:** String
**Input Parameters:** None
**Other Properties Related to Input:** None
**Output Type:** Text
**Output Format:** String
**Output Parameters:** None
**Other Properties Related to Output:** None
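These hyperparameters can be read back from the published configuration as a sanity check; a minimal sketch, assuming the standard Hugging Face config attribute names for this model type:
```python
from transformers import AutoConfig

# Sketch: inspect the architecture hyperparameters from the hosted config.json.
config = AutoConfig.from_pretrained("nvidia/Minitron-8B-Base")
print(config.hidden_size)          # model embedding size: 4096
print(config.num_attention_heads)  # attention heads: 48
print(config.intermediate_size)    # MLP intermediate dimension: 16384
print(config.num_key_value_heads)  # key/value heads used by GQA: 8
```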
## Usage
Support for this model will be added in the upcoming `transformers` release. In the meantime, please install the library from source:
```
pip install git+https://github.com/huggingface/transformers
```
The following code provides an example of how to load the Minitron-8B model and use it to perform text generation.
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the tokenizer and model
model_path = "nvidia/Minitron-8B-Base"
tokenizer = AutoTokenizer.from_pretrained(model_path)

device = "cuda"
dtype = torch.bfloat16
model = AutoModelForCausalLM.from_pretrained(model_path, torch_dtype=dtype, device_map=device)

# Prepare the input text
prompt = "To be or not to be,"
input_ids = tokenizer.encode(prompt, return_tensors="pt").to(model.device)

# Generate the output (max_length counts the prompt tokens as well)
output_ids = model.generate(input_ids, max_length=50, num_return_sequences=1)

# Decode and print the output
output_text = tokenizer.decode(output_ids[0], skip_special_tokens=True)
print(output_text)
```
## Dataset & Training
**Data Collection Method:** Hybrid
**Labeling Method:** Not Applicable
**Properties:** The training corpus for Minitron-8B-Base consists of English and multilingual text, as well as code. Our sources cover a variety of document types such as: webpages, dialogue, articles, and other written materials. The corpus spans domains including legal, math, science, finance, and more. In our continued training set, we introduce a small portion of question-answering, and alignment style data to improve model performance.
**Data Freshness:** The pretraining data has a cutoff of June 2023.
## Evaluation Results
*5-shot performance.* Language Understanding evaluated using [Massive Multitask Language Understanding](https://arxiv.org/abs/2009.03300):
| Average |
| :---- |
| 64.5 |
*Zero-shot performance.* Evaluated using select datasets from the [LM Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) with additions:
| HellaSwag | Winogrande | GSM8K | ARC-C | XLSum |
| :-------- | :--------- | :---- | :---- | :---- |
| 81.6 | 80.3 | 54.2 | 49.2 | 31.1 |
*Code generation performance*. Evaluated using [HumanEval](https://github.com/openai/human-eval):
| p@1, 0-Shot |
| :------------- |
| 31.6 |
Please refer to our [paper](https://arxiv.org/abs/2407.14679) for the full set of results.
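To reproduce numbers in this style, a hypothetical harness invocation is sketched below; the exact task variants, prompt formats, and harness version used for the reported scores may differ:
```
pip install lm-eval
lm_eval --model hf \
    --model_args pretrained=nvidia/Minitron-8B-Base,dtype=bfloat16 \
    --tasks hellaswag,winogrande,gsm8k,arc_challenge \
    --num_fewshot 0 \
    --batch_size 8
```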
## Inference
**Engine:** TensorRT-LLM
**Test Hardware:** NVIDIA A100
**DType:** Float16/BFloat16
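As a rough illustration of the TensorRT-LLM path, a minimal sketch using the high-level `LLM` API follows; it assumes a TensorRT-LLM release with Nemotron support, and the API surface may differ across versions:
```python
# Sketch only: assumes a TensorRT-LLM release whose high-level LLM API
# supports this checkpoint; not an officially documented recipe.
from tensorrt_llm import LLM, SamplingParams

llm = LLM(model="nvidia/Minitron-8B-Base")   # builds an engine from the checkpoint
params = SamplingParams(max_tokens=50)
outputs = llm.generate(["To be or not to be,"], params)
print(outputs[0].outputs[0].text)
```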
## Limitations
The model was trained on data that contains toxic language, unsafe content, and societal biases originally crawled from the internet. Therefore, the model may amplify those biases and return toxic responses, especially when prompted with toxic prompts. The model may generate answers that are inaccurate, omit key information, or include irrelevant or redundant text, producing socially unacceptable or undesirable output even if the prompt itself does not contain anything explicitly offensive.
## Ethical Considerations
NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse. Please report security vulnerabilities or NVIDIA AI Concerns [here](https://www.nvidia.com/en-us/support/submit-security-vulnerability/).
## Citation
If you find our work helpful, please consider citing our paper:
```
@article{minitron2024,
title={Compact Language Models via Pruning and Knowledge Distillation},
author={Saurav Muralidharan and Sharath Turuvekere Sreenivas and Raviraj Joshi and Marcin Chochowski and Mostofa Patwary and Mohammad Shoeybi and Bryan Catanzaro and Jan Kautz and Pavlo Molchanov},
journal={arXiv preprint arXiv:2407.14679},
year={2024},
url={https://arxiv.org/abs/2407.14679},
}
```

config.json

@@ -13,15 +13,14 @@
   "model_type": "nemotron",
   "num_attention_heads": 48,
   "num_hidden_layers": 32,
-  "kv_channels": 128,
   "num_key_value_heads": 8,
   "norm_eps": 1e-05,
   "rope_theta": 10000,
-  "rope_percent": 0.5,
+  "rope_scaling": null,
+  "partial_rotary_factor": 0.5,
   "tie_word_embeddings": false,
   "torch_dtype": "bfloat16",
-  "transformers_version": "4.32.0.dev0",
+  "transformers_version": "4.44.0",
   "use_cache": true,
-  "vocab_size": 256000
-}
+  "vocab_size": 256000,
+  "head_dim": 128
+}
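The `rope_percent` key appears to have been renamed to `partial_rotary_factor` with the same meaning: rotary position embeddings are applied to only a fraction of each attention head's channels. A small worked example under that reading, using the values from the config above:
```python
head_dim = 128                # "head_dim" in the updated config
partial_rotary_factor = 0.5   # formerly "rope_percent"

# RoPE is applied to the first rotary_dim channels of each head;
# the remaining channels carry no positional rotation.
rotary_dim = int(head_dim * partial_rotary_factor)
print(rotary_dim)  # 64
```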


@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:135cb73e323b91a3830e60d8d5a5bef066a0c15885935756de6ccb34d60b2809
+oid sha256:59162ca3471a17052c78518402342477d81650f3b6bc8c87fb37dec5a35c904a
 size 16554813440


@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:5d0c84c2ed78c94d103bbf81f4205206a8e022e0c7c1135c693e7574535fa470
+oid sha256:debb954a1720801fa4c054f0c761fe344758ef659419d45e5b9a5e7b10722a11
 size 16543512498
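These two hunks replace Git LFS pointer files for the large model binaries: only the `oid` (the SHA-256 of the tracked file) changes, while the pointer format stays fixed. A standard workflow to materialize the binaries locally (the clone URL is a placeholder for this repository's URL):
```
git lfs install          # one-time setup of the LFS filters
git clone <repo-url>     # placeholder; substitute this repository's clone URL
cd <repo-dir>
git lfs pull             # download the binaries referenced by the pointer files
```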

special_tokens_map.json

@@ -1,23 +1,4 @@
 {
-  "bos_token": {
-    "content": "<s>",
-    "lstrip": false,
-    "normalized": false,
-    "rstrip": false,
-    "single_word": false
-  },
-  "eos_token": {
-    "content": "</s>",
-    "lstrip": false,
-    "normalized": false,
-    "rstrip": false,
-    "single_word": false
-  },
-  "unk_token": {
-    "content": "<unk>",
-    "lstrip": false,
-    "normalized": false,
-    "rstrip": false,
-    "single_word": false
-  }
+  "bos_token": "<s>",
+  "eos_token": "</s>"
 }
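`transformers` accepts both the expanded token dictionaries and the plain-string form, so the simplified map resolves to the same special tokens. A quick sanity check, assuming the tokenizer loads from this repository:
```python
from transformers import AutoTokenizer

# Sketch: verify the simplified special-tokens map still resolves as expected.
tokenizer = AutoTokenizer.from_pretrained("nvidia/Minitron-8B-Base")
assert tokenizer.bos_token == "<s>"
assert tokenizer.eos_token == "</s>"
```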

tokenizer.json (new file, 3 lines)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:83d0648daa0467fb02ddef7ff25460321dab2fbb20c280ae0bc1ea8052f7df90
size 18143149

File diff suppressed because it is too large.