Model name change

Saurav Muralidharan
2024-08-16 11:57:24 -07:00
parent 6d7f9b30e1
commit 0959baf2c6
2 changed files with 8 additions and 8 deletions

README.md

@@ -7,7 +7,7 @@ license_link: >-
 # Model Overview
-Nemotron-4-Minitron-8B-Base is a large language model (LLM) obtained by pruning Nemotron-4 15B; specifically, we prune model embedding size, number of attention heads, and MLP intermediate dimension. Following pruning, we perform continued training with distillation using 94 billion tokens to arrive at the final model; we use the continuous pre-training data corpus used in Nemotron-4 15B for this purpose.
+Minitron-8B-Base is a large language model (LLM) obtained by pruning Nemotron-4 15B; specifically, we prune model embedding size, number of attention heads, and MLP intermediate dimension. Following pruning, we perform continued training with distillation using 94 billion tokens to arrive at the final model; we use the continuous pre-training data corpus used in Nemotron-4 15B for this purpose.
 Deriving the Minitron 8B and 4B models from the base 15B model using our approach requires up to **40x fewer training tokens** per model compared to training from scratch; this results in **compute cost savings of 1.8x** for training the full model family (15B, 8B, and 4B). Minitron models exhibit up to a 16% improvement in MMLU scores compared to training from scratch, perform comparably to other community models such as Mistral 7B, Gemma 7B and Llama-3 8B, and outperform state-of-the-art compression techniques from the literature. Please refer to our [arXiv paper](https://arxiv.org/abs/2407.14679) for more details.
@@ -15,15 +15,15 @@ This model is for research and development only.
 **Model Developer:** NVIDIA
-**Model Dates:** Nemotron-4-Minitron-8B-Base was trained between February 2024 and June 2024.
+**Model Dates:** Minitron-8B-Base was trained between February 2024 and June 2024.
 ## License
-Nemotron-4-Minitron-8B-Base is released under the [NVIDIA Open Model License Agreement](https://developer.download.nvidia.com/licenses/nvidia-open-model-license-agreement-june-2024.pdf).
+Minitron-8B-Base is released under the [NVIDIA Open Model License Agreement](https://developer.download.nvidia.com/licenses/nvidia-open-model-license-agreement-june-2024.pdf).
 ## Model Architecture
-Nemotron-4-Minitron-8B-Base uses a model embedding size of 4096, 48 attention heads, and an MLP intermediate dimension of 16384.
+Minitron-8B-Base uses a model embedding size of 4096, 48 attention heads, and an MLP intermediate dimension of 16384.
 It also uses Grouped-Query Attention (GQA) and Rotary Position Embeddings (RoPE).
 **Architecture Type:** Transformer Decoder (auto-regressive language model)
@@ -55,14 +55,14 @@ $ git clone -b aot/head_dim_rope --single-branch https://github.com/suiyoubi/tra
 $ pip install -e .
 ```
-The following code provides an example of how to load the Nemotron-4-Minitron-8B model and use it to perform text generation.
+The following code provides an example of how to load the Minitron-8B model and use it to perform text generation.
 ```python
 import torch
 from transformers import AutoTokenizer, AutoModelForCausalLM
 # Load the tokenizer and model
-model_path = "nvidia/Nemotron-4-Minitron-8B-Base"
+model_path = "nvidia/Minitron-8B-Base"
 tokenizer = AutoTokenizer.from_pretrained(model_path)
 device='cuda'
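The hunk cuts the usage snippet off at this point; the next hunk header only confirms that it ends with `print(output_text)`. Below is a minimal, self-contained sketch of what the full example looks like, assuming a transformers build that includes Nemotron support (such as the branch installed above). The prompt text, dtype, and `max_length` value are illustrative assumptions, not the card's exact values.

```python
# Illustrative sketch of the full generation example; the diff above only
# shows its first few lines. Prompt, dtype, and max_length are assumptions.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_path = "nvidia/Minitron-8B-Base"
tokenizer = AutoTokenizer.from_pretrained(model_path)

device = "cuda"
dtype = torch.bfloat16  # assumed precision

# Load the model weights onto the GPU.
model = AutoModelForCausalLM.from_pretrained(
    model_path, torch_dtype=dtype, device_map=device
)

# Encode a prompt, generate a continuation, and decode it back to text.
prompt = "Complete the paragraph: our solar system is"  # hypothetical prompt
inputs = tokenizer.encode(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_length=30)
output_text = tokenizer.decode(outputs[0])
print(output_text)
```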
@@ -87,7 +87,7 @@ print(output_text)
 **Labeling Method:** Not Applicable
-**Properties:** The training corpus for Nemotron-4-Minitron-8B-Base consists of English and multilingual text, as well as code. Our sources cover a variety of document types such as: webpages, dialogue, articles, and other written materials. The corpus spans domains including legal, math, science, finance, and more. In our continued training set, we introduce a small portion of question-answering, and alignment style data to improve model performance.
+**Properties:** The training corpus for Minitron-8B-Base consists of English and multilingual text, as well as code. Our sources cover a variety of document types such as: webpages, dialogue, articles, and other written materials. The corpus spans domains including legal, math, science, finance, and more. In our continued training set, we introduce a small portion of question-answering, and alignment style data to improve model performance.
 **Data Freshness:** The pretraining data has a cutoff of June 2023.

config.json

@@ -1,5 +1,5 @@
 {
-  "_name_or_path": "nvidia/Nemotron-4-Minitron-8B-Base",
+  "_name_or_path": "nvidia/Minitron-8B-Base",
   "architectures": [
     "NemotronForCausalLM"
   ],
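As a quick sanity check after the rename, the architecture values quoted in the card (embedding size 4096, 48 attention heads, MLP intermediate dimension 16384) can be read back from the published config without downloading the weights. A small sketch, assuming a transformers build that recognizes the Nemotron model type; the standard Hugging Face config fields `hidden_size`, `num_attention_heads`, and `intermediate_size` are assumed to correspond to those card values.

```python
# Sketch: inspect the published config for the renamed repo.
# Assumes a transformers version with Nemotron support (e.g., the branch
# installed in the README's setup step).
from transformers import AutoConfig

config = AutoConfig.from_pretrained("nvidia/Minitron-8B-Base")

print(config.architectures)        # expected: ['NemotronForCausalLM']
print(config.hidden_size)          # card states an embedding size of 4096
print(config.num_attention_heads)  # card states 48 attention heads
print(config.intermediate_size)    # card states an MLP intermediate dimension of 16384
```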