Initialize the project; model provided by the ModelHub XC community

Model: nvidia/Minitron-8B-Base
Source: Original Platform
ModelHub XC
2026-04-10 11:27:56 +08:00
commit 4c0814e979
12 changed files with 8271 additions and 0 deletions

37
.gitattributes vendored Normal file

@@ -0,0 +1,37 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
nemo/*.nemo filter=lfs diff=lfs merge=lfs -text
tokenizer.json filter=lfs diff=lfs merge=lfs -text

0
LICENSE Normal file

3
NOTICE Normal file

@@ -0,0 +1,3 @@
Copyright (c) 2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
NVIDIA CORPORATION, its affiliates and licensors retain all intellectual property and proprietary rights in and to this material, related documentation and any modifications thereto. Any use, reproduction, disclosure or distribution of this material and related documentation without an express license agreement from NVIDIA CORPORATION or its affiliates is strictly prohibited.

150
README.md Normal file

@@ -0,0 +1,150 @@
---
license: other
license_name: nvidia-open-model-license
license_link: >-
  https://developer.download.nvidia.com/licenses/nvidia-open-model-license-agreement-june-2024.pdf
library_name: transformers
pipeline_tag: text-generation
language:
- en
tags:
- nvidia
- llama-3
- pytorch
---
# Model Overview
Minitron-8B-Base is a large language model (LLM) obtained by pruning Nemotron-4 15B; specifically, we prune model embedding size, number of attention heads, and MLP intermediate dimension. Following pruning, we perform continued training with distillation using 94 billion tokens to arrive at the final model; we use the continuous pre-training data corpus used in Nemotron-4 15B for this purpose.
Deriving the Minitron 8B and 4B models from the base 15B model using our approach requires up to **40x fewer training tokens** per model compared to training from scratch; this results in **compute cost savings of 1.8x** for training the full model family (15B, 8B, and 4B). Minitron models exhibit up to a 16% improvement in MMLU scores compared to training from scratch, perform comparably to other community models such as Mistral 7B, Gemma 7B and Llama-3 8B, and outperform state-of-the-art compression techniques from the literature. Please refer to our [arXiv paper](https://arxiv.org/abs/2407.14679) for more details.
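As a rough illustration of what width pruning means in practice, the sketch below shrinks the intermediate dimension of a toy MLP block by keeping the neurons with the largest weight norms. This is only a hedged toy example with placeholder sizes; it is not the importance criterion or procedure used in the Minitron paper, which also prunes embedding channels and attention heads and follows pruning with distillation.
```python
# Toy sketch of structured (width) pruning of an MLP intermediate dimension.
# NOT the Minitron procedure: placeholder sizes and a simplistic weight-norm
# importance proxy, shown only to illustrate the idea of width pruning.
import torch.nn as nn

hidden, inter, pruned_inter = 6144, 24576, 16384      # placeholder sizes

up = nn.Linear(hidden, inter, bias=False)              # hidden -> intermediate
down = nn.Linear(inter, hidden, bias=False)            # intermediate -> hidden

# Simple importance proxy: L2 norm of each intermediate neuron's input weights.
importance = up.weight.norm(dim=1)                     # shape: (inter,)
keep = importance.topk(pruned_inter).indices.sort().values

pruned_up = nn.Linear(hidden, pruned_inter, bias=False)
pruned_down = nn.Linear(pruned_inter, hidden, bias=False)
pruned_up.weight.data = up.weight.data[keep].clone()          # keep selected rows
pruned_down.weight.data = down.weight.data[:, keep].clone()   # and matching columns
```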
This model is for research and development only.
**Model Developer:** NVIDIA
**Model Dates:** Minitron-8B-Base was trained between February 2024 and June 2024.
## License
Minitron-8B-Base is released under the [NVIDIA Open Model License Agreement](https://developer.download.nvidia.com/licenses/nvidia-open-model-license-agreement-june-2024.pdf).
## Model Architecture
Minitron-8B-Base uses a model embedding size of 4096, 48 attention heads, and an MLP intermediate dimension of 16384.
It also uses Grouped-Query Attention (GQA) and Rotary Position Embeddings (RoPE).
**Architecture Type:** Transformer Decoder (auto-regressive language model)
**Network Architecture:** Nemotron-4
**Input Type:** Text
**Input Format:** String
**Input Parameters:** None
**Other Properties Related to Input:** None
**Output Type:** Text
**Output Format:** String
**Output Parameters:** None
**Other Properties Related to Output:** None
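For reference, the hyperparameters above can be read directly from the repository's `config.json` (assuming a `transformers` version that already includes the Nemotron architecture):
```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("nvidia/Minitron-8B-Base")
print(config.hidden_size)            # 4096  (model embedding size)
print(config.num_attention_heads)    # 48 query heads
print(config.num_key_value_heads)    # 8 KV heads -> 48 / 8 = 6 query heads per KV group (GQA)
print(config.intermediate_size)      # 16384 (MLP intermediate dimension)
print(config.partial_rotary_factor)  # 0.5 -> RoPE applied to half of each 128-dim head
```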
## Usage
Support for this model will be added in the upcoming `transformers` release. In the meantime, please install the library from source:
```
pip install git+https://github.com/huggingface/transformers
```
The following code provides an example of how to load the Minitron-8B model and use it to perform text generation.
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
# Load the tokenizer and model
model_path = "nvidia/Minitron-8B-Base"
tokenizer = AutoTokenizer.from_pretrained(model_path)
device = "cuda"
dtype = torch.bfloat16
model = AutoModelForCausalLM.from_pretrained(model_path, torch_dtype=dtype, device_map=device)
# Prepare the input text
prompt = "To be or not to be,"
input_ids = tokenizer.encode(prompt, return_tensors="pt").to(model.device)
# Generate the output
output_ids = model.generate(input_ids, max_length=50, num_return_sequences=1)
# Decode and print the output
output_text = tokenizer.decode(output_ids[0], skip_special_tokens=True)
print(output_text)
```
## Dataset & Training
**Data Collection Method:** Hybrid
**Labeling Method:** Not Applicable
**Properties:** The training corpus for Minitron-8B-Base consists of English and multilingual text, as well as code. Our sources cover a variety of document types, including webpages, dialogue, articles, and other written materials. The corpus spans domains including legal, math, science, finance, and more. In our continued-training set, we introduce a small portion of question-answering and alignment-style data to improve model performance.
**Data Freshness:** The pretraining data has a cutoff of June 2023.
## Evaluation Results
*5-shot performance.* Language Understanding evaluated using [Massive Multitask Language Understanding](https://arxiv.org/abs/2009.03300):
| Average |
| :---- |
| 64.5 |
*Zero-shot performance.* Evaluated using select datasets from the [LM Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) with additions:
| HellaSwag | Winogrande | GSM8K | ARC-C | XLSum |
| :------------- | :------------- | :------------- | :------------- | :------------- |
| 81.6 | 80.3 | 54.2 | 49.2 | 31.1 |
*Code generation performance.* Evaluated using [HumanEval](https://github.com/openai/human-eval):
| pass@1, 0-Shot |
| :------------- |
| 31.6 |
Please refer to our [paper](https://arxiv.org/abs/2407.14679) for the full set of results.
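As a hedged sketch only, a zero-shot run with the public LM Evaluation Harness (v0.4+ Python API) could look roughly like the following; because the reported numbers use the task additions described in the paper, a stock harness run will not match the table exactly.
```python
# Approximate reproduction sketch using lm-evaluation-harness; scores will
# differ somewhat from the table above (the paper uses added/modified tasks).
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=nvidia/Minitron-8B-Base,dtype=bfloat16",
    tasks=["hellaswag", "winogrande", "arc_challenge", "gsm8k"],
    num_fewshot=0,
)
print(results["results"])
```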
## Inference
**Engine:** TensorRT-LLM
**Test Hardware:** NVIDIA A100
**DType:** Float16/BFloat16
## Limitations
The model was trained on data that contains toxic language, unsafe content, and societal biases originally crawled from the internet. Therefore, the model may amplify those biases and return toxic responses, especially when given toxic prompts. The model may generate answers that are inaccurate, omit key information, or include irrelevant or redundant text, producing socially unacceptable or undesirable output, even if the prompt itself does not include anything explicitly offensive.
## Ethical Considerations
NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse. Please report security vulnerabilities or NVIDIA AI Concerns [here](https://www.nvidia.com/en-us/support/submit-security-vulnerability/).
## Citation
If you find our work helpful, please consider citing our paper:
```
@article{minitron2024,
title={Compact Language Models via Pruning and Knowledge Distillation},
author={Saurav Muralidharan and Sharath Turuvekere Sreenivas and Raviraj Joshi and Marcin Chochowski and Mostofa Patwary and Mohammad Shoeybi and Bryan Catanzaro and Jan Kautz and Pavlo Molchanov},
journal={arXiv preprint arXiv:2407.14679},
year={2024},
url={https://arxiv.org/abs/2407.14679},
}
```

26
config.json Normal file

@@ -0,0 +1,26 @@
{
"_name_or_path": "nvidia/Minitron-8B-Base",
"architectures": [
"NemotronForCausalLM"
],
"bos_token_id": 2,
"eos_token_id": 3,
"hidden_act": "relu2",
"hidden_size": 4096,
"initializer_range": 0.0134,
"intermediate_size": 16384,
"max_position_embeddings": 4096,
"model_type": "nemotron",
"num_attention_heads": 48,
"num_hidden_layers": 32,
"num_key_value_heads": 8,
"norm_eps": 1e-05,
"rope_theta": 10000,
"partial_rotary_factor": 0.5,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.44.0",
"use_cache": true,
"vocab_size": 256000,
"head_dim": 128
}
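As a rough consistency check (back-of-the-envelope arithmetic, not part of the original files), the values above imply roughly 8.3B parameters, consistent with the ~16.5 GB bfloat16 checkpoints below:
```python
# Approximate parameter count implied by config.json (ignores norm weights
# and any bias terms, so it slightly undercounts).
vocab, hidden, inter = 256000, 4096, 16384
layers, heads, kv_heads, head_dim = 32, 48, 8, 128

embeddings = 2 * vocab * hidden                      # untied input + output embeddings
attention = 2 * (heads * head_dim * hidden) \
          + 2 * (kv_heads * head_dim * hidden)       # Q, O and K, V projections
mlp = 2 * hidden * inter                             # up + down projections (relu2, not gated)
total = embeddings + layers * (attention + mlp)
print(f"~{total / 1e9:.2f}B parameters")             # ~8.27B -> ~16.5 GB in bfloat16
```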


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:59162ca3471a17052c78518402342477d81650f3b6bc8c87fb37dec5a35c904a
size 16554813440

3
pytorch_model.bin Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:debb954a1720801fa4c054f0c761fe344758ef659419d45e5b9a5e7b10722a11
size 16543512498

2
requirements.txt Normal file

@@ -0,0 +1,2 @@
sentencepiece
accelerate

4
special_tokens_map.json Normal file

@@ -0,0 +1,4 @@
{
"bos_token": "<s>",
"eos_token": "</s>"
}

3
tokenizer.json Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:83d0648daa0467fb02ddef7ff25460321dab2fbb20c280ae0bc1ea8052f7df90
size 18143149

3
tokenizer.model Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:6dfd8b970f437002fc445214304969fe59e64d4f48500bd0b77ba55340f2d811
size 4545602

8037
tokenizer_config.json Normal file

File diff suppressed because it is too large