---
license: llama2
language:
- en
---

# Model Card

### Model Description

Llama2 7B fine-tuned using ShareGPT datasets for multi-turn conversations.
- **Developed by:** l3utterfly
- **Funded by:** Layla Network
- **Model type:** Llama2
- **Language(s) (NLP):** English
- **License:** Llama2
- **Finetuned from model:** Llama2 7B

## Uses

This model serves as the base model for Layla, the offline personal assistant: https://www.layla-network.ai

Help & support: https://discord.gg/x546YJ6nYC

Prompt template:

```
User:
Assistant:
```

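The template above can be applied programmatically. Below is a minimal sketch of rendering a multi-turn conversation into this format; the `format_prompt` helper is hypothetical and not part of this repository:

```python
def format_prompt(turns, user_message):
    """Render prior (user, assistant) turns plus a new user message
    into the User:/Assistant: prompt template shown above."""
    lines = []
    for user_msg, assistant_msg in turns:
        lines.append(f"User:\n{user_msg}")
        lines.append(f"Assistant:\n{assistant_msg}")
    # Append the new user turn and leave the assistant turn open
    # for the model to complete.
    lines.append(f"User:\n{user_message}")
    lines.append("Assistant:\n")
    return "\n".join(lines)
```

The resulting string can then be passed to your inference stack of choice as the raw prompt.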
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_l3utterfly__llama2-7b-layla).

| Metric              | Value |
|---------------------|------:|
| Avg.                | 45.56 |
| ARC (25-shot)       | 54.18 |
| HellaSwag (10-shot) | 79.34 |
| MMLU (5-shot)       | 49.70 |
| TruthfulQA (0-shot) | 46.50 |
| Winogrande (5-shot) | 74.11 |
| GSM8K (5-shot)      |  8.49 |
| DROP (3-shot)       |  6.57 |

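As a sanity check, the reported average is the mean of the seven individual benchmark scores:

```python
# Benchmark scores from the table above (order: ARC, HellaSwag, MMLU,
# TruthfulQA, Winogrande, GSM8K, DROP).
scores = [54.18, 79.34, 49.70, 46.50, 74.11, 8.49, 6.57]

# Mean over the seven benchmarks, rounded to two decimals.
avg = round(sum(scores) / len(scores), 2)
print(avg)  # 45.56
```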
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)