Update README.md

ai-modelscope
2025-02-14 12:32:18 +08:00
parent c384c7fe48
commit b18c50bbed
2 changed files with 155 additions and 149 deletions

.gitattributes (vendored, 2 lines changed)

@@ -35,5 +35,3 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
tokenizer.json filter=lfs diff=lfs merge=lfs -text
*.nemo filter=lfs diff=lfs merge=lfs -text
model-00002-of-00002.safetensors filter=lfs diff=lfs merge=lfs -text
lfs -text

README.md (302 lines changed)

---
license: other
license_name: nvidia-open-model-license
license_link: >-
  https://developer.download.nvidia.com/licenses/nvidia-open-model-license-agreement-june-2024.pdf
library_name: transformers
pipeline_tag: text-generation
language:
- en
tags:
- nvidia
- llama-3
- pytorch
---

# Llama-3.1-Minitron-4B-Width-Base

## Model Overview

Llama-3.1-Minitron-4B-Width-Base is a base text-to-text model that can be adopted for a variety of natural language generation tasks.
It is obtained by pruning Llama-3.1-8B; specifically, we prune the model embedding size and MLP intermediate dimension.
Following pruning, we perform continued training with distillation using 94 billion tokens to arrive at the final model; we use the continuous pre-training data corpus used in Nemotron-4 15B for this purpose. Please refer to our [technical report](https://arxiv.org/abs/2408.11796) for more details.

This model is ready for commercial use.

**Model Developer:** NVIDIA

**Model Dates:** Llama-3.1-Minitron-4B-Width-Base was trained between July 29, 2024 and Aug 3, 2024.

## License

This model is released under the [NVIDIA Open Model License Agreement](https://developer.download.nvidia.com/licenses/nvidia-open-model-license-agreement-june-2024.pdf).

## Model Architecture

Llama-3.1-Minitron-4B-Width-Base uses a model embedding size of 3072, 32 attention heads, an MLP intermediate dimension of 9216, and 32 layers in total. Additionally, it uses Grouped-Query Attention (GQA) and Rotary Position Embeddings (RoPE).

**Architecture Type:** Transformer Decoder (Auto-Regressive Language Model)

**Network Architecture:** Llama-3.1

**Input Type(s):** Text

**Input Format(s):** String

**Input Parameters:** None

**Other Properties Related to Input:** Works well within 8k characters or less.

**Output Type(s):** Text

**Output Format:** String

**Output Parameters:** 1D

**Other Properties Related to Output:** None
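
For reference, these pruned dimensions map directly onto the standard `transformers` Llama configuration fields. The following is a minimal sketch, assuming the repository's `config.json` loads with `AutoConfig`; the printed values should correspond to the numbers listed above.

```python
from transformers import AutoConfig

# Load only the configuration (no weights) to inspect the pruned dimensions.
config = AutoConfig.from_pretrained("nvidia/Llama-3.1-Minitron-4B-Width-Base")

print(config.hidden_size)          # model embedding size: 3072
print(config.intermediate_size)    # MLP intermediate dimension: 9216
print(config.num_hidden_layers)    # layers: 32
print(config.num_attention_heads)  # attention heads: 32
print(config.num_key_value_heads)  # fewer than the attention heads, since GQA is used
```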

## Usage

Support for this model will be added in the upcoming `transformers` release. In the meantime, please install the library from source:

```
pip install git+https://github.com/huggingface/transformers
```

We can now run inference on this model:

```python
import torch
from transformers import AutoTokenizer, LlamaForCausalLM

# Load the tokenizer and model
model_path = "nvidia/Llama-3.1-Minitron-4B-Width-Base"
tokenizer = AutoTokenizer.from_pretrained(model_path)

device = 'cuda'
dtype = torch.bfloat16
model = LlamaForCausalLM.from_pretrained(model_path, torch_dtype=dtype, device_map=device)

# Prepare the input text
prompt = 'Complete the paragraph: our solar system is'
inputs = tokenizer.encode(prompt, return_tensors='pt').to(model.device)

# Generate the output
outputs = model.generate(inputs, max_length=20)

# Decode and print the output
output_text = tokenizer.decode(outputs[0])
print(output_text)
```
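
The checkpoint can also be driven through the higher-level `pipeline` API, matching the `pipeline_tag: text-generation` declared in the metadata above. This is a brief sketch; the sampling parameters are illustrative, not recommended settings.

```python
import torch
from transformers import pipeline

# Build a text-generation pipeline on the same checkpoint.
generator = pipeline(
    "text-generation",
    model="nvidia/Llama-3.1-Minitron-4B-Width-Base",
    torch_dtype=torch.bfloat16,
    device_map="cuda",
)

# Illustrative sampling settings; tune them for your use case.
result = generator(
    "Complete the paragraph: our solar system is",
    max_new_tokens=32,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
)
print(result[0]["generated_text"])
```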

## Software Integration

**Runtime Engine(s):**
* NeMo 24.05

**Supported Hardware Microarchitecture Compatibility:** <br>
* NVIDIA Ampere
* NVIDIA Blackwell
* NVIDIA Hopper
* NVIDIA Lovelace

**[Preferred/Supported] Operating System(s):** <br>
* Linux

## Dataset & Training

**Data Collection Method by Dataset:** Automated

**Labeling Method by Dataset:** Not Applicable

**Properties:**
The training corpus for Llama-3.1-Minitron-4B-Width-Base consists of English and multilingual text, as well as code. Our sources cover a variety of document types, such as webpages, dialogue, articles, and other written materials. The corpus spans domains including legal, math, science, finance, and more. In our continued training set, we introduce a small portion of question-answering and alignment-style data to improve model performance.

**Data Freshness:** The pretraining data has a cutoff of June 2023.

## Evaluation Results

### Overview

_5-shot performance._ Language understanding evaluated using [Massive Multitask Language Understanding](https://arxiv.org/abs/2009.03300):

| Average |
| :---- |
| 60.5 |

_Zero-shot performance._ Evaluated using select datasets from the [LM Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) with additions:

| HellaSwag | Winogrande | GSM8K | ARC-Challenge | XLSum |
| :---- | :---- | :---- | :---- | :---- |
| 76.1 | 73.5 | 41.2 | 55.6 | 28.7 |

_Code generation performance._ Evaluated using [MBPP](https://github.com/google-research/google-research/tree/master/mbpp):

| Score |
| :---- |
| 32.0 |
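
The exact evaluation setup, including the additions to the harness mentioned above, is not reproduced here. As a rough starting point, a comparable run with the stock LM Evaluation Harness might look like the sketch below; the `simple_evaluate` entry point and task names follow recent `lm-eval` releases, and the resulting scores will not necessarily match the tables.

```python
import lm_eval

# Rough sketch: zero-shot evaluation on a subset of the public harness tasks.
# This uses the stock harness only and omits any custom additions.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=nvidia/Llama-3.1-Minitron-4B-Width-Base,dtype=bfloat16",
    tasks=["hellaswag", "winogrande", "gsm8k", "arc_challenge"],
    num_fewshot=0,
    batch_size=8,
)
print(results["results"])
```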

## Inference

**Engine:** TensorRT-LLM

**Test Hardware:** NVIDIA A100

**DType:** BFloat16
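
Since inference was tested in BFloat16 and the supported microarchitectures start at Ampere, it can be worth confirming that the local GPU has native bfloat16 support before loading the model. A small sketch using standard PyTorch device queries:

```python
import torch

# Check whether the local GPU can run the model in bfloat16.
if torch.cuda.is_available():
    name = torch.cuda.get_device_name(0)
    major, minor = torch.cuda.get_device_capability(0)
    # Ampere and newer GPUs (compute capability >= 8.0) support bfloat16 natively.
    print(f"{name}: compute capability {major}.{minor}, "
          f"bf16 supported: {torch.cuda.is_bf16_supported()}")
else:
    print("No CUDA device detected; consider CPU or float32 instead.")
```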

## Limitations

The model was trained on data that contains toxic language, unsafe content, and societal biases originally crawled from the internet. Therefore, the model may amplify those biases and return toxic responses, especially when prompted with toxic prompts. The model may generate answers that are inaccurate, omit key information, or include irrelevant or redundant text, producing socially unacceptable or undesirable output even when the prompt itself does not include anything explicitly offensive.

## Ethical Considerations

NVIDIA believes Trustworthy AI is a shared responsibility, and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse.

Please report security vulnerabilities or NVIDIA AI Concerns [here](https://www.nvidia.com/en-us/support/submit-security-vulnerability/).

## References

* [Compact Language Models via Pruning and Knowledge Distillation](https://arxiv.org/abs/2407.14679)
* [LLM Pruning and Distillation in Practice: The Minitron Approach](https://arxiv.org/abs/2408.11796)