Initialize the project; model provided by the ModelHub XC community.
Model: MiniLLM/SFT-OPT-6.7B
Source: Original Platform
README.md (new file, 34 lines):
---
license: apache-2.0
datasets:
- databricks/databricks-dolly-15k
language:
- en
metrics:
- rouge
base_model:
- facebook/opt-6.7B
pipeline_tag: text-generation
---
# SFT-OPT-6.7B

[paper](https://arxiv.org/abs/2306.08543) | [code](https://github.com/microsoft/LMOps/tree/main/minillm)

**SFT-OPT-6.7B** is an OPT-6.7B model supervised fine-tuned on [databricks-dolly-15k](https://huggingface.co/datasets/aisquared/databricks-dolly-15k).

It is used as a baseline for [MiniLLM](https://huggingface.co/MiniLLM/MiniLLM-OPT-6.7B).
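## Usage

A minimal inference sketch, assuming the standard Hugging Face `transformers` text-generation API (matching the `pipeline_tag` above). The model ID comes from this card; the instruction-style prompt is an illustrative assumption, not a documented template.

```python
# Sketch only: loads the card's checkpoint with the generic transformers API.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "MiniLLM/SFT-OPT-6.7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision to fit a 6.7B model on one GPU
    device_map="auto",
)

# Assumed Dolly-style instruction prompt; adjust to your own format.
prompt = "Instruction: Explain knowledge distillation in one sentence.\nResponse:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```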
## Other Baselines
+ [KD](https://huggingface.co/MiniLLM/KD-OPT-6.7B)
+ [SeqKD](https://huggingface.co/MiniLLM/SeqKD-OPT-6.7B)

## Citation
```
@inproceedings{minillm,
  title={MiniLLM: Knowledge Distillation of Large Language Models},
  author={Gu, Yuxian and Dong, Li and Wei, Furu and Huang, Minlie},
  booktitle={Proceedings of ICLR},
  year={2024}
}
```