---
license: apache-2.0
datasets:
- garage-bAInd/Open-Platypus
- jondurbin/airoboros-3.2
model-index:
- name: Platyboros-Instruct-7B
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 57.76
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lodrick-the-lafted/Platyboros-Instruct-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 82.59
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lodrick-the-lafted/Platyboros-Instruct-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 62.05
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lodrick-the-lafted/Platyboros-Instruct-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 60.92
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lodrick-the-lafted/Platyboros-Instruct-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 78.14
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lodrick-the-lafted/Platyboros-Instruct-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 43.67
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lodrick-the-lafted/Platyboros-Instruct-7B
      name: Open LLM Leaderboard
---

<img src="https://huggingface.co/lodrick-the-lafted/Platyboros-Instruct-7B/resolve/main/platyboros.png">

# Platyboros-Instruct-7B

[Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) fine-tuned on [jondurbin/airoboros-3.2](https://huggingface.co/datasets/jondurbin/airoboros-3.2) and [garage-bAInd/Open-Platypus](https://huggingface.co/datasets/garage-bAInd/Open-Platypus), in Alpaca format.
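
For reference, a minimal sketch of pulling those two datasets with the Hugging Face `datasets` library; this is not the original training script, just a quick way to inspect the data:

```python
# Minimal sketch, not the original training code: load the two
# fine-tuning datasets and inspect their schemas.
from datasets import load_dataset

platypus = load_dataset("garage-bAInd/Open-Platypus", split="train")
airoboros = load_dataset("jondurbin/airoboros-3.2", split="train")

print(platypus.column_names)
print(airoboros.column_names)
```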
<br />
<br />

# Prompt Format

Both the default Mistral-Instruct tags and the Alpaca format work, so use either:

```
<s>[INST] {sys_prompt} {instruction} [/INST]
```

or

```
{sys_prompt}

### Instruction:
{instruction}

### Response:
```

The tokenizer's default chat template is Alpaca this time around.
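
If you build prompts by hand instead of going through the chat template, a small helper like this (a hypothetical function, but it mirrors the Alpaca layout shown above) does the job:

```python
# Hypothetical helper mirroring the Alpaca layout shown above.
def alpaca_prompt(sys_prompt: str, instruction: str) -> str:
    return f"{sys_prompt}\n\n### Instruction:\n{instruction}\n\n### Response:\n"

print(alpaca_prompt("You are a helpful assistant.", "Summarize the plot of Hamlet."))
```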
<br />
<br />

# Usage

```python
from transformers import AutoTokenizer
import transformers
import torch

model = "lodrick-the-lafted/Platyboros-Instruct-7B"

# Load the tokenizer and build a bfloat16 text-generation pipeline.
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    model_kwargs={"torch_dtype": torch.bfloat16},
)

# apply_chat_template renders the message with the tokenizer's default
# (Alpaca) chat template; add_generation_prompt appends the response header.
messages = [{"role": "user", "content": "Give me a cooking recipe for an apple pie."}]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_p=0.95)
print(outputs[0]["generated_text"])
```
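
Since both prompt formats are accepted, you can also skip the chat template and pass the Mistral-Instruct tags directly, reusing the `pipeline` from the block above:

```python
# Alternative: hand-written Mistral-Instruct prompt, no chat template.
prompt = "<s>[INST] You are a helpful assistant. Give me a cooking recipe for an apple pie. [/INST]"
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_p=0.95)
print(outputs[0]["generated_text"])
```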
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_lodrick-the-lafted__Platyboros-Instruct-7B).

| Metric                            | Value |
|-----------------------------------|------:|
| Avg.                              | 64.19 |
| AI2 Reasoning Challenge (25-Shot) | 57.76 |
| HellaSwag (10-Shot)               | 82.59 |
| MMLU (5-Shot)                     | 62.05 |
| TruthfulQA (0-shot)               | 60.92 |
| Winogrande (5-shot)               | 78.14 |
| GSM8k (5-shot)                    | 43.67 |
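
The Avg. row is just the arithmetic mean of the six benchmark scores:

```python
# Quick check that Avg. is the mean of the six benchmarks.
scores = [57.76, 82.59, 62.05, 60.92, 78.14, 43.67]
print(round(sum(scores) / len(scores), 2))  # 64.19
```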