---
license: apache-2.0
datasets:
- databricks/databricks-dolly-15k
language:
- en
metrics:
- rouge
base_model:
- facebook/opt-6.7B
pipeline_tag: text-generation
---

# SFT-OPT-6.7B

[paper](https://arxiv.org/abs/2306.08543) | [code](https://github.com/microsoft/LMOps/tree/main/minillm)
**SFT-OPT-6.7B** is an OPT-6.7B model supervised fine-tuned on [databricks-dolly-15k](https://huggingface.co/datasets/aisquared/databricks-dolly-15k).

It is used as a baseline for [MiniLLM](https://huggingface.co/MiniLLM/MiniLLM-OPT-6.7B).
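
Since the card declares `pipeline_tag: text-generation`, the checkpoint can be loaded with standard Hugging Face Transformers calls. Below is a minimal sketch, assuming the weights are hosted under the repo id `MiniLLM/SFT-OPT-6.7B` (inferred from the naming of the baseline checkpoints listed later, not stated on this card) and that no special prompt template is required:

```python
# Minimal usage sketch. The repo id is an assumption inferred from the
# companion checkpoints (e.g. MiniLLM/KD-OPT-6.7B); adjust if it differs.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "MiniLLM/SFT-OPT-6.7B"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
# fp16 keeps the 6.7B model within a single ~16 GB GPU;
# device_map="auto" requires the `accelerate` package.
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

prompt = "Explain knowledge distillation in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```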

## Other Baselines

+ [KD](https://huggingface.co/MiniLLM/KD-OPT-6.7B)
+ [SeqKD](https://huggingface.co/MiniLLM/SeqKD-OPT-6.7B)

## Citation
```bibtex
@inproceedings{minillm,
  title={MiniLLM: Knowledge Distillation of Large Language Models},
  author={Gu, Yuxian and Dong, Li and Wei, Furu and Huang, Minlie},
  booktitle={Proceedings of ICLR},
  year={2024}
}
```