Add evaluation preview
Minitron is released under the [NVIDIA Open Model License Agreement](https://developer.download.nvidia.com/licenses/nvidia-open-model-license-agreement-june-2024.pdf).
## Evaluation Results
*5-shot performance.* Language Understanding evaluated using [Massive Multitask Language Understanding](https://arxiv.org/abs/2009.03300):

| Average |
| :---- |
| 63.8 |

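The MMLU score above is conventionally reported as the mean accuracy over the benchmark's 57 subject subtasks. A minimal sketch of that averaging, with hypothetical subtask names and accuracies:

```python
# Hypothetical per-subtask accuracies; the reported MMLU number is the
# mean over all 57 subject subtasks (names and values here are made up).
subtask_acc = {
    "abstract_algebra": 0.41,
    "astronomy": 0.72,
    "college_biology": 0.68,
}

average = sum(subtask_acc.values()) / len(subtask_acc)
print(f"{average:.3f}")  # 0.603
```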
*Zero-shot performance.* Evaluated using select datasets from the [LM Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) with additions:

| HellaSwag | Winogrande | GSM8K | ARC-C | XLSum |
| :---- | :---- | :---- | :---- | :---- |
| 80.7 | 79.0 | 51.3 | 52.6 | 31.2 |

*Code generation performance.* Evaluated using [HumanEval](https://github.com/openai/human-eval):

| p@1, 0-Shot |
| :---- |
| 31.6 |

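HumanEval results are conventionally computed with the unbiased pass@k estimator from the HumanEval paper, pass@k = 1 - C(n-c, k) / C(n, k); with one sample per problem, pass@1 reduces to the plain pass rate. A minimal sketch (the per-problem sample counts below are made up):

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator (Chen et al., 2021):
    1 - C(n-c, k) / C(n, k), where n is the number of generated
    samples for a problem and c is how many of them pass the tests."""
    if n - c < k:
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# pass@1 with a single sample per problem is just the pass rate:
# here 2 of 4 hypothetical problems pass.
scores = [pass_at_k(1, c, 1) for c in [1, 0, 1, 0]]
print(sum(scores) / len(scores))  # 0.5
```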
Please refer to our [paper](https://arxiv.org/abs/2407.14679) for the full set of results.
## Citation
If you find our work helpful, please consider citing our paper: