---
license: apache-2.0
language:
- en
library_name: transformers
tags:
- moe
- moerge
---
# Multi Master 7B

A quick multi-disciplinary fine-tune of openchat/openchat-3.5-0106 using an Alpaca-style dataset spanning several disciplines. I trained LoRA adapters and then merged them back into the base model for ease of use.
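For readers curious about that workflow, merging a LoRA adapter back into its base model is typically only a few lines with the `peft` library. The sketch below is illustrative only (it is not the exact script used for this checkpoint), and the adapter path `./multimaster-lora-adapter` is a hypothetical placeholder.

```python
# Illustrative sketch: merging a LoRA adapter into its base model with peft.
# The adapter path below is a hypothetical placeholder, not a published artifact.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("openchat/openchat-3.5-0106", torch_dtype="auto")
model = PeftModel.from_pretrained(base, "./multimaster-lora-adapter")

# Fold the adapter weights into the base weights so the result loads as a
# plain transformers checkpoint, with no peft dependency at inference time.
merged = model.merge_and_unload()
merged.save_pretrained("./multimaster-7b")

tokenizer = AutoTokenizer.from_pretrained("openchat/openchat-3.5-0106")
tokenizer.save_pretrained("./multimaster-7b")
```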
# Prompting

## Prompt Template (Alpaca Style)
```
### Instruction:

<prompt> (without the <>)

### Response:
```
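As a convenience, the template can be filled in with a small helper like the one below; `format_prompt` is just an illustrative name (not part of the model's API), and the whitespace mirrors the sample code in the next section.

```python
# Illustrative helper (not part of the model's API): wrap an instruction
# in the Alpaca-style template used by this model.
def format_prompt(instruction: str) -> str:
    return f"### Instruction: {instruction}\n### Response:\n"

print(format_prompt("Summarize the causes of the French Revolution in one sentence."))
```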
## Sample Code

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Run everything on the GPU by default.
torch.set_default_device("cuda")

# Load the merged model and tokenizer from the Hub.
model = AutoModelForCausalLM.from_pretrained("ibivibiv/multimaster-7b", torch_dtype="auto", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("ibivibiv/multimaster-7b")

# Build an Alpaca-style prompt and tokenize it.
inputs = tokenizer("### Instruction: Who would win in an arm wrestling match between Abraham Lincoln and Chuck Norris?\nA. Abraham Lincoln\nB. Chuck Norris\n### Response:\n", return_tensors="pt", return_attention_mask=False)

# Generate and decode the completion.
outputs = model.generate(**inputs, max_length=200)
text = tokenizer.batch_decode(outputs)[0]
print(text)
```
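Note that the decoded string includes the prompt as well as the completion. Continuing from the variables defined in the sample above, a simple (admittedly naive) way to keep only the model's answer is to split on the response marker:

```python
# Naive post-processing sketch: keep only the text after the response marker
# and strip the end-of-sequence token if present.
answer = text.split("### Response:")[-1]
answer = answer.replace(tokenizer.eos_token or "", "").strip()
print(answer)
```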
# Model Details
* **Trained by**: [ibivibiv](https://huggingface.co/ibivibiv)
* **Library**: [HuggingFace Transformers](https://github.com/huggingface/transformers)
* **Model type**: **multimaster-7b** is a LoRA-tuned version of openchat/openchat-3.5-0106 with the adapter merged back into the base model
* **Language(s)**: English
* **Purpose**: This model focuses on multi-disciplinary tuning
# Benchmark Scores
Coming soon.
## Citations
```
@misc{open-llm-leaderboard,
  author = {Edward Beeching and Clémentine Fourrier and Nathan Habib and Sheon Han and Nathan Lambert and Nazneen Rajani and Omar Sanseviero and Lewis Tunstall and Thomas Wolf},
  title = {Open LLM Leaderboard},
  year = {2023},
  publisher = {Hugging Face},
  howpublished = "\url{https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard}"
}
```

```
@software{eval-harness,
  author = {Gao, Leo and Tow, Jonathan and Biderman, Stella and Black, Sid and DiPofi, Anthony and Foster, Charles and Golding, Laurence and Hsu, Jeffrey and McDonell, Kyle and Muennighoff, Niklas and Phang, Jason and Reynolds, Laria and Tang, Eric and Thite, Anish and Wang, Ben and Wang, Kevin and Zou, Andy},
  title = {A framework for few-shot language model evaluation},
  month = sep,
  year = 2021,
  publisher = {Zenodo},
  version = {v0.0.1},
  doi = {10.5281/zenodo.5371628},
  url = {https://doi.org/10.5281/zenodo.5371628}
}
```

```
@misc{clark2018think,
  title = {Think you have Solved Question Answering? Try ARC, the AI2 Reasoning Challenge},
  author = {Peter Clark and Isaac Cowhey and Oren Etzioni and Tushar Khot and Ashish Sabharwal and Carissa Schoenick and Oyvind Tafjord},
  year = {2018},
  eprint = {1803.05457},
  archivePrefix = {arXiv},
  primaryClass = {cs.AI}
}
```

```
@misc{zellers2019hellaswag,
  title = {HellaSwag: Can a Machine Really Finish Your Sentence?},
  author = {Rowan Zellers and Ari Holtzman and Yonatan Bisk and Ali Farhadi and Yejin Choi},
  year = {2019},
  eprint = {1905.07830},
  archivePrefix = {arXiv},
  primaryClass = {cs.CL}
}
```

```
@misc{hendrycks2021measuring,
  title = {Measuring Massive Multitask Language Understanding},
  author = {Dan Hendrycks and Collin Burns and Steven Basart and Andy Zou and Mantas Mazeika and Dawn Song and Jacob Steinhardt},
  year = {2021},
  eprint = {2009.03300},
  archivePrefix = {arXiv},
  primaryClass = {cs.CY}
}
```

```
@misc{lin2022truthfulqa,
  title = {TruthfulQA: Measuring How Models Mimic Human Falsehoods},
  author = {Stephanie Lin and Jacob Hilton and Owain Evans},
  year = {2022},
  eprint = {2109.07958},
  archivePrefix = {arXiv},
  primaryClass = {cs.CL}
}
```

```
@misc{DBLP:journals/corr/abs-1907-10641,
  title = {{WINOGRANDE:} An Adversarial Winograd Schema Challenge at Scale},
  author = {Keisuke Sakaguchi and Ronan Le Bras and Chandra Bhagavatula and Yejin Choi},
  year = {2019},
  eprint = {1907.10641},
  archivePrefix = {arXiv},
  primaryClass = {cs.CL}
}
```

```
@misc{DBLP:journals/corr/abs-2110-14168,
  title = {Training Verifiers to Solve Math Word Problems},
  author = {Karl Cobbe and Vineet Kosaraju and Mohammad Bavarian and Mark Chen and Heewoo Jun and Lukasz Kaiser and Matthias Plappert and Jerry Tworek and Jacob Hilton and Reiichiro Nakano and Christopher Hesse and John Schulman},
  year = {2021},
  eprint = {2110.14168},
  archivePrefix = {arXiv},
  primaryClass = {cs.CL}
}
```