Update with v2 model and config

Saurav Muralidharan
2024-08-15 12:29:36 -07:00
parent 34eef636ad
commit c648d1c2cb
8 changed files with 8028 additions and 54 deletions

.gitattributes vendored

@@ -34,3 +34,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
 nemo/*.nemo filter=lfs diff=lfs merge=lfs -text
+tokenizer.json filter=lfs diff=lfs merge=lfs -text

README.md

@@ -5,7 +5,7 @@ license_link: >-
 https://developer.download.nvidia.com/licenses/nvidia-open-model-license-agreement-june-2024.pdf
 ---
-# Minitron 8B Base
+# Nemotron-4 Minitron 8B Base
 Minitron is a family of small language models (SLMs) obtained by pruning NVIDIA's [Nemotron-4 15B](https://arxiv.org/abs/2402.16819) model. We prune model embedding size, attention heads, and MLP intermediate dimension, following which, we perform continued training with distillation to arrive at the final models.
@@ -15,22 +15,20 @@ Minitron models are for research and development only.
 ## HuggingFace Quickstart
-The [PR](https://github.com/huggingface/transformers/pull/31699) to support our models in Hugging Face in under review and expected to be merged soon. In the meantime, this [branch](https://github.com/suiyoubi/transformers/tree/aot/nemotron-support) at [commit ID 63d9cb0](https://github.com/suiyoubi/transformers/commit/63d9cb0afd2bf5d4cb5431ba1b2c4e353752a937) can be used for Minitron models:
+The [pull request](https://github.com/huggingface/transformers/pull/32495) to support this model in Hugging Face Transformers is under review and expected to be merged soon. In the meantime, please follow the installation instructions below:
 ```
-git clone git@github.com:suiyoubi/transformers.git
-cd transformers
-git checkout 63d9cb0
-pip install .
+$ git clone -b aot/head_dim_rope --single-branch https://github.com/suiyoubi/transformers.git && cd transformers
+$ pip install -e .
 ```
-The following code provides an example of how to load the Minitron-8B model and use it to perform text generation.
+The following code provides an example of how to load the Nemotron-4-Minitron-8B model and use it to perform text generation.
 ```python
 import torch
 from transformers import AutoTokenizer, AutoModelForCausalLM
 # Load the tokenizer and model
-model_path = "nvidia/Minitron-8B-Base"
+model_path = "nvidia/Nemotron-4-Minitron-8B-Base"
 tokenizer = AutoTokenizer.from_pretrained(model_path)
 device='cuda'
@@ -59,13 +57,13 @@ Minitron is released under the [NVIDIA Open Model License Agreement](https://dev
 | Average |
 | :---- |
-| 63.8 |
+| 64.5 |
 *Zero-shot performance.* Evaluated using select datasets from the [LM Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) with additions:
 HellaSwag | Winogrande | GSM8K| ARC-C | XLSum |
 | :------------- | :------------- | :------------- | :------------- | :------------- |
-| 80.7 | 79.0 | 51.3 | 52.6 | 31.2 |
+| 81.6 | 80.3 | 54.2 | 49.2 | 31.1 |
 *Code generation performance*. Evaluated using [HumanEval](https://github.com/openai/human-eval):

config.json

@@ -13,15 +13,14 @@
 "model_type": "nemotron",
 "num_attention_heads": 48,
 "num_hidden_layers": 32,
-"kv_channels": 128,
 "num_key_value_heads": 8,
 "norm_eps": 1e-05,
 "rope_theta": 10000,
-"rope_percent": 0.5,
-"rope_scaling": null,
+"partial_rotary_factor": 0.5,
 "tie_word_embeddings": false,
 "torch_dtype": "bfloat16",
-"transformers_version": "4.32.0.dev0",
+"transformers_version": "4.44.0",
 "use_cache": true,
-"vocab_size": 256000
+"vocab_size": 256000,
+"head_dim": 128
 }
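In the updated config, the non-standard `kv_channels` and `rope_percent` fields are replaced by the Transformers-standard `head_dim` and `partial_rotary_factor`. As a hedged illustration (not Transformers internals), the snippet below shows how these fields relate: `partial_rotary_factor` gives the fraction of each head's channels that receive rotary position embeddings, and the head-count fields determine the grouped-query-attention layout.

```python
import json

# Field values taken from the updated config above.
config = json.loads("""{
  "num_attention_heads": 48,
  "num_key_value_heads": 8,
  "head_dim": 128,
  "partial_rotary_factor": 0.5,
  "rope_theta": 10000
}""")

# RoPE is applied only to a fraction of each head's channels.
rotary_dim = int(config["head_dim"] * config["partial_rotary_factor"])
print(rotary_dim)  # 64 of the 128 channels per head are rotated

# Grouped-query attention: each KV head is shared by several query heads.
queries_per_kv = config["num_attention_heads"] // config["num_key_value_heads"]
print(queries_per_kv)  # 6 query heads per key/value head
```

With `partial_rotary_factor` at 0.5, this matches the old `rope_percent: 0.5` semantics: half of each 128-channel head carries positional information.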


@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:5d0c84c2ed78c94d103bbf81f4205206a8e022e0c7c1135c693e7574535fa470
+oid sha256:debb954a1720801fa4c054f0c761fe344758ef659419d45e5b9a5e7b10722a11
 size 16543512498
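The blob above is a Git LFS pointer file, not the weights themselves: the repository stores only this small text record, and the actual ~16.5 GB object is fetched by its SHA-256 oid at checkout. A minimal sketch of parsing the pointer format (the `parse_lfs_pointer` helper is hypothetical; the `key value` line layout follows the LFS pointer spec referenced in the file itself):

```python
# The oid below is the new value from the diff above.
pointer_text = """version https://git-lfs.github.com/spec/v1
oid sha256:debb954a1720801fa4c054f0c761fe344758ef659419d45e5b9a5e7b10722a11
size 16543512498
"""

def parse_lfs_pointer(text: str) -> dict:
    """Split each 'key value' line into a dict entry; the oid keeps its 'sha256:' prefix."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

ptr = parse_lfs_pointer(pointer_text)
print(ptr["size"])  # 16543512498 (bytes, ~16.5 GB of model weights)
print(ptr["oid"].removeprefix("sha256:")[:8])  # debb954a
```

Note that the diff only swaps the oid: the byte size is unchanged, so the update replaced the weights with a new checkpoint of identical size.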

special_tokens_map.json

@@ -1,23 +1,4 @@
 {
-  "bos_token": {
-    "content": "<s>",
-    "lstrip": false,
-    "normalized": false,
-    "rstrip": false,
-    "single_word": false
-  },
-  "eos_token": {
-    "content": "</s>",
-    "lstrip": false,
-    "normalized": false,
-    "rstrip": false,
-    "single_word": false
-  },
-  "unk_token": {
-    "content": "<unk>",
-    "lstrip": false,
-    "normalized": false,
-    "rstrip": false,
-    "single_word": false
-  }
+  "bos_token": "<s>",
+  "eos_token": "</s>"
 }
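This change collapses the verbose added-token dicts into plain strings (and drops `unk_token` entirely). A small sketch, under the assumption that the dict form reduces to its `content` field when the other flags are at their defaults, showing the two shapes map the same tokens:

```python
import json

# Simplified special_tokens_map.json from the diff above.
simplified = json.loads('{"bos_token": "<s>", "eos_token": "</s>"}')

# The old verbose form stores the token string under "content" plus
# per-token flags. (unk_token omitted here, since the new file drops it.)
verbose = {
    "bos_token": {"content": "<s>", "lstrip": False, "normalized": False,
                  "rstrip": False, "single_word": False},
    "eos_token": {"content": "</s>", "lstrip": False, "normalized": False,
                  "rstrip": False, "single_word": False},
}

# Reduce each verbose entry to its bare token string.
reduced = {name: spec["content"] for name, spec in verbose.items()}
print(reduced == simplified)  # True: same tokens, default flags dropped
```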

tokenizer.json Normal file

@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:83d0648daa0467fb02ddef7ff25460321dab2fbb20c280ae0bc1ea8052f7df90
+size 18143149


@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:6dfd8b970f437002fc445214304969fe59e64d4f48500bd0b77ba55340f2d811
-size 4545602

File diff suppressed because it is too large.