diff --git a/README.md b/README.md
index 0e83835..f8d433b 100644
--- a/README.md
+++ b/README.md
@@ -7,7 +7,7 @@ license_link: >-
 
 # Model Overview
 
-Nemotron-4-Minitron-8B-Base is a large language model (LLM) obtained by pruning Nemotron-4 15B; specifically, we prune model embedding size, number of attention heads, and MLP intermediate dimension. Following pruning, we perform continued training with distillation using 94 billion tokens to arrive at the final model; we use the continuous pre-training data corpus used in Nemotron-4 15B for this purpose.
+Minitron-8B-Base is a large language model (LLM) obtained by pruning Nemotron-4 15B; specifically, we prune model embedding size, number of attention heads, and MLP intermediate dimension. Following pruning, we perform continued training with distillation using 94 billion tokens to arrive at the final model; we use the continuous pre-training data corpus used in Nemotron-4 15B for this purpose.
 
 Deriving the Minitron 8B and 4B models from the base 15B model using our approach requires up to **40x fewer training tokens** per model compared to training from scratch; this results in **compute cost savings of 1.8x** for training the full model family (15B, 8B, and 4B). Minitron models exhibit up to a 16% improvement in MMLU scores compared to training from scratch, perform comparably to other community models such as Mistral 7B, Gemma 7B and Llama-3 8B, and outperform state-of-the-art compression techniques from the literature. Please refer to our [arXiv paper](https://arxiv.org/abs/2407.14679) for more details.
 
@@ -15,15 +15,15 @@ This model is for research and development only.
 
 **Model Developer:** NVIDIA
 
-**Model Dates:** Nemotron-4-Minitron-8B-Base was trained between February 2024 and June 2024.
+**Model Dates:** Minitron-8B-Base was trained between February 2024 and June 2024.
 
 ## License
 
-Nemotron-4-Minitron-8B-Base is released under the [NVIDIA Open Model License Agreement](https://developer.download.nvidia.com/licenses/nvidia-open-model-license-agreement-june-2024.pdf).
+Minitron-8B-Base is released under the [NVIDIA Open Model License Agreement](https://developer.download.nvidia.com/licenses/nvidia-open-model-license-agreement-june-2024.pdf).
 
 ## Model Architecture
 
-Nemotron-4-Minitron-8B-Base uses a model embedding size of 4096, 48 attention heads, and an MLP intermediate dimension of 16384.
+Minitron-8B-Base uses a model embedding size of 4096, 48 attention heads, and an MLP intermediate dimension of 16384.
 It also uses Grouped-Query Attention (GQA) and Rotary Position Embeddings (RoPE).
 
 **Architecture Type:** Transformer Decoder (auto-regressive language model)
@@ -55,14 +55,14 @@ $ git clone -b aot/head_dim_rope --single-branch https://github.com/suiyoubi/tra
 $ pip install -e .
 ```
 
-The following code provides an example of how to load the Nemotron-4-Minitron-8B model and use it to perform text generation.
+The following code provides an example of how to load the Minitron-8B model and use it to perform text generation.
 
 ```python
 import torch
 from transformers import AutoTokenizer, AutoModelForCausalLM
 
 # Load the tokenizer and model
-model_path = "nvidia/Nemotron-4-Minitron-8B-Base"
+model_path = "nvidia/Minitron-8B-Base"
 tokenizer = AutoTokenizer.from_pretrained(model_path)
 
 device='cuda'
@@ -87,7 +87,7 @@ print(output_text)
 
 **Labeling Method:** Not Applicable
 
-**Properties:** The training corpus for Nemotron-4-Minitron-8B-Base consists of English and multilingual text, as well as code. Our sources cover a variety of document types such as: webpages, dialogue, articles, and other written materials. The corpus spans domains including legal, math, science, finance, and more. In our continued training set, we introduce a small portion of question-answering, and alignment style data to improve model performance.
+**Properties:** The training corpus for Minitron-8B-Base consists of English and multilingual text, as well as code. Our sources cover a variety of document types such as: webpages, dialogue, articles, and other written materials. The corpus spans domains including legal, math, science, finance, and more. In our continued training set, we introduce a small portion of question-answering, and alignment style data to improve model performance.
 
 **Data Freshness:** The pretraining data has a cutoff of June 2023.
 
diff --git a/config.json b/config.json
index d99b7af..c455ad9 100644
--- a/config.json
+++ b/config.json
@@ -1,5 +1,5 @@
 {
-  "_name_or_path": "nvidia/Nemotron-4-Minitron-8B-Base",
+  "_name_or_path": "nvidia/Minitron-8B-Base",
   "architectures": [
     "NemotronForCausalLM"
   ],
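For reviewers who want to exercise the rename end-to-end, a minimal completion of the text-generation example touched by the README hunk above might look like the sketch below. Only the lines visible in the hunk (the imports, `model_path`, tokenizer loading, `device='cuda'`, and the final `print(output_text)`) come from the model card itself; the dtype, prompt, and generation settings are illustrative assumptions, and the snippet presumes the patched `transformers` checkout from the install step supports the `NemotronForCausalLM` architecture.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Repository name follows the "+" side of the diff.
model_path = "nvidia/Minitron-8B-Base"
tokenizer = AutoTokenizer.from_pretrained(model_path)

device = "cuda"
dtype = torch.bfloat16  # assumed precision; not visible in the hunk
model = AutoModelForCausalLM.from_pretrained(model_path, torch_dtype=dtype, device_map=device)

# Illustrative prompt; any plain-text completion prompt works for a base model.
prompt = "Complete the paragraph: our solar system is"
inputs = tokenizer(prompt, return_tensors="pt").to(device)

# Generate a short continuation and decode it.
outputs = model.generate(**inputs, max_new_tokens=64)
output_text = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(output_text)
```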
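Similarly, the architecture figures quoted in the README hunk (embedding size 4096, 48 attention heads, MLP intermediate dimension 16384) should be recoverable from the updated `config.json`. A quick sanity check, assuming the usual Hugging Face field names `hidden_size`, `num_attention_heads`, and `intermediate_size` (only `_name_or_path` and `architectures` are visible in the config.json hunk), could be:

```python
from transformers import AutoConfig

# Field names below follow standard Hugging Face conventions and are assumed here;
# the config.json hunk only shows "_name_or_path" and "architectures".
config = AutoConfig.from_pretrained("nvidia/Minitron-8B-Base")

assert config.hidden_size == 4096         # model embedding size
assert config.num_attention_heads == 48   # attention heads
assert config.intermediate_size == 16384  # MLP intermediate dimension
print(config.architectures)               # expected: ['NemotronForCausalLM']
```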