# dddsaty/SOLAR_Merge_Adapter_DPO_Orca
| license | datasets | language | pipeline_tag |
|---|---|---|---|
| cc-by-nc-4.0 | | | text-generation |
## Explanation
- Merge the two base models with mergekit (SLERP); see the first sketch after this list
- Apply DPO to the merged model, saving only the adapter weights; see the second sketch below
- Merge the adapter back into the merged model; see the final sketch below
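
A minimal sketch of the SLERP merge step, assuming mergekit is installed (`pip install mergekit`). The layer range, interpolation weight `t`, and output paths are illustrative assumptions, not the exact configuration used for this model:

```python
# Sketch of the SLERP merge step, assuming mergekit is installed
# (pip install mergekit). Layer range and interpolation weight t are
# illustrative, not the card's actual configuration.
import subprocess

MERGE_CONFIG = """\
slices:
  - sources:
      - model: upstage/SOLAR-10.7B-Instruct-v1.0
        layer_range: [0, 48]
      - model: beomi/OPEN-SOLAR-KO-10.7B
        layer_range: [0, 48]
merge_method: slerp
base_model: upstage/SOLAR-10.7B-Instruct-v1.0
parameters:
  t: 0.5  # blend factor between the two models
dtype: float16
"""

with open("slerp_config.yml", "w") as f:
    f.write(MERGE_CONFIG)

# mergekit's CLI reads the YAML config and writes the merged model
subprocess.run(["mergekit-yaml", "slerp_config.yml", "./solar-merged"], check=True)
```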
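
A sketch of the DPO step with a LoRA adapter, using `transformers` + `peft` + `trl` as one plausible toolchain; the card does not state the exact libraries, dataset, or hyperparameters, so the dataset (`Intel/orca_dpo_pairs`), LoRA settings, and training arguments below are placeholders. The API shown matches trl around v0.7 (early 2024); newer releases move most of these arguments into `DPOConfig`:

```python
# Sketch of the DPO step (assumed toolchain: transformers + peft + trl ~0.7).
# Dataset, LoRA targets, and hyperparameters are placeholders, not the card's values.
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

model = AutoModelForCausalLM.from_pretrained("./solar-merged")
tokenizer = AutoTokenizer.from_pretrained("./solar-merged")

# Example preference data; Intel/orca_dpo_pairs is used purely as an illustration.
dataset = load_dataset("Intel/orca_dpo_pairs", split="train")
dataset = dataset.map(
    lambda row: {"prompt": row["question"]},  # DPOTrainer expects prompt/chosen/rejected
    remove_columns=["system", "question"],
)

peft_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

trainer = DPOTrainer(
    model=model,
    ref_model=None,  # with peft_config set, trl uses the adapter-disabled model as reference
    args=TrainingArguments(
        output_dir="./dpo-out",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=8,
        learning_rate=5e-5,
        num_train_epochs=1,
        logging_steps=10,
    ),
    beta=0.1,
    train_dataset=dataset,
    tokenizer=tokenizer,
    peft_config=peft_config,
)
trainer.train()

# Save only the adapter weights, matching the step described above
trainer.model.save_pretrained("./dpo-adapter")
```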
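
A sketch of the final step, assuming `peft`: load the merged base model, apply the saved DPO adapter, and fold the adapter weights back into the base. All paths are the hypothetical ones from the sketches above:

```python
# Sketch of the final step, assuming peft: apply the saved DPO adapter to the
# merged base model and fold the LoRA deltas into the base weights.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("./solar-merged", torch_dtype=torch.float16)
model = PeftModel.from_pretrained(base, "./dpo-adapter")
model = model.merge_and_unload()  # bakes the adapter into the base weights

model.save_pretrained("./SOLAR_Merge_Adapter_DPO_Orca")
AutoTokenizer.from_pretrained("./solar-merged").save_pretrained(
    "./SOLAR_Merge_Adapter_DPO_Orca"
)
```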
## Base Model
- beomi/OPEN-SOLAR-KO-10.7B
- upstage/SOLAR-10.7B-Instruct-v1.0

## Training Corpus
## Score
| Average | ARC | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8K |
|---|---|---|---|---|---|---|
| 65.96 | 63.91 | 84.58 | 63.18 | 51.49 | 82.00 | 50.57 |
## Log
- 2024.02.05: Initial version upload
- 2024.02.10: README update
## LICENSE
Follows the upstage/SOLAR-10.7B-Instruct-v1.0 license:
- cc-by-nc-4.0
## Citation
- beomi/OPEN-SOLAR-KO-10.7B

@misc{solar_ko_junbum_2023,
  author = {{L. Junbum}},
  title = {Solar-Ko-10.7b},
  year = 2024,
  url = {https://huggingface.co/beomi/SOLAR-KO-10.7B},
  publisher = {Hugging Face}
}

- upstage/SOLAR-10.7B-Instruct-v1.0

@misc{kim2023solar,
  title = {SOLAR 10.7B: Scaling Large Language Models with Simple yet Effective Depth Up-Scaling},
  author = {Dahyun Kim and Chanjun Park and Sanghoon Kim and Wonsung Lee and Wonho Song and Yunsu Kim and Hyeonwoo Kim and Yungi Kim and Hyeonju Lee and Jihoo Kim and Changbae Ahn and Seonghoon Yang and Sukyung Lee and Hyunbyung Park and Gyoungjin Gim and Mikyoung Cha and Hwalsuk Lee and Sunghun Kim},
  year = {2023},
  eprint = {2312.15166},
  archivePrefix = {arXiv},
  primaryClass = {cs.CL}
}