---
license: apache-2.0
datasets:
- lodrick-the-lafted/Hermes-217K
---

<img src="https://huggingface.co/lodrick-the-lafted/Hermes-Instruct-7B-217K/resolve/main/hermes-instruct.png">

# Hermes-Instruct-7B-217K
[Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) trained with 217K rows of [teknium/openhermes](https://huggingface.co/datasets/teknium/openhermes), in Alpaca format.

Why? Mistral-7B-Instruct-v0.2 has a native 32K context and a RoPE theta of 1M. It's not a base model, so I've used the same recipe with different amounts of data to gauge the effects of further finetuning.

<br />
<br />

# Prompt Format
Both the default Mistral-Instruct tags and the Alpaca format work, so use either:

```
<s>[INST] {sys_prompt} {instruction} [/INST]
```
or

```
{sys_prompt}

### Instruction:
{instruction}

### Response:

```
This time around, the tokenizer's default chat template is Alpaca.
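For instance, you can render a prompt without tokenizing it to check the template yourself (a minimal sketch; the exact string depends on the chat template shipped with this repo):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("lodrick-the-lafted/Hermes-Instruct-7B-217K")
messages = [{"role": "user", "content": "Say hello."}]

# tokenize=False returns the formatted prompt string instead of token ids,
# so you can confirm it follows the Alpaca layout shown above.
print(tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True))
```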
<br />
<br />

# Usage
```python
from transformers import AutoTokenizer
import transformers
import torch

model = "lodrick-the-lafted/Hermes-Instruct-7B-217K"

tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    model_kwargs={"torch_dtype": torch.bfloat16},
)

messages = [{"role": "user", "content": "Give me a cooking recipe for an apple pie."}]
# Render the chat template to a prompt string, then sample a completion.
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_p=0.95)
print(outputs[0]["generated_text"])
```
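With accelerate installed, you can also pass device_map="auto" to transformers.pipeline to place the model on your GPU; in bfloat16 the 7B weights take roughly 14 GB of memory.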