---
tags:
- merge
- mergekit
- lazymergekit
- mistralai/Mistral-7B-Instruct-v0.2
- cognitivecomputations/dolphin-2.8-mistral-7b-v02
base_model:
- mistralai/Mistral-7B-Instruct-v0.2
- cognitivecomputations/dolphin-2.8-mistral-7b-v02
license: apache-2.0
---

<img src="https://cdn-uploads.huggingface.co/production/uploads/6389d3c61e8755d777902366/-_AiKUEsY3x-N7oY52fdE.jpeg" style="border-radius:6%; width: 33%">

# pandafish-2-7b-32k

pandafish-2-7b-32k is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)
* [cognitivecomputations/dolphin-2.8-mistral-7b-v02](https://huggingface.co/cognitivecomputations/dolphin-2.8-mistral-7b-v02)

## 💬 Try it

[Playground on Hugging Face Space](https://huggingface.co/spaces/ichigoberry/pandafish-2-7b-32k)

Chat template: Mistral Instruct
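
If you build prompts by hand (for instance for one of the quantized builds below), the Mistral Instruct template wraps each user turn in `[INST] ... [/INST]` markers. A minimal sketch, assuming the tokenizer in this repo defines that chat template (the message content is only an example):

```python
from transformers import AutoTokenizer

# The merged model's tokenizer should carry the Mistral Instruct chat template
tokenizer = AutoTokenizer.from_pretrained("ichigoberry/pandafish-2-7b-32k")

messages = [{"role": "user", "content": "What is a large language model?"}]

# Render the prompt as plain text instead of token IDs
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)  # roughly: <s>[INST] What is a large language model? [/INST]
```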

## ⚡ Quantized models

- **GGUF**: [ichigoberry/pandafish-2-7b-32k-GGUF](https://huggingface.co/ichigoberry/pandafish-2-7b-32k-GGUF)
- **GGUF**: [mradermacher/pandafish-2-7b-32k-GGUF](https://huggingface.co/mradermacher/pandafish-2-7b-32k-GGUF)
- **MLX**: [4bit](https://huggingface.co/mlx-community/pandafish-dt-7b-4bit)
- **EXL2**: [bartowski/pandafish-2-7b-32k-exl2](https://huggingface.co/bartowski/pandafish-2-7b-32k-exl2)
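
As an illustration, the GGUF build can be run locally with `llama-cpp-python`. This is a sketch: the quant filename pattern below is an assumption, so check the GGUF repo for the files it actually contains.

```python
from llama_cpp import Llama

# Download a quant from the Hub and load it
# (the filename pattern is hypothetical; pick a file that exists in the repo)
llm = Llama.from_pretrained(
    repo_id="ichigoberry/pandafish-2-7b-32k-GGUF",
    filename="*Q4_K_M.gguf",  # glob matched against the repo's GGUF files
    n_ctx=32768,              # the merge targets a 32k context window
)

# Prompt in the Mistral Instruct format
output = llm("[INST] What is a large language model? [/INST]", max_tokens=256)
print(output["choices"][0]["text"])
```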

## 🏆 Evals

| Model |AGIEval|GPT4All|TruthfulQA|Bigbench|Average|
|---------------------------------------------------------------------------|------:|------:|---------:|-------:|------:|
|🐡 [**pandafish-2-7b-32k**](https://huggingface.co/ichigoberry/pandafish-2-7b-32k) [📄](https://gist.github.com/tosh/de1769c43db88d94353ca481f4bc418f)| **40.8**| **73.35**| 57.46| **42.69**| 53.57|
|[Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) [📄](https://gist.github.com/tosh/578fa995f985b178b65a7675168b145c)| 38.5| 71.64| **66.82**| 42.29| **54.81**|
|[dolphin-2.8-mistral-7b-v02](https://huggingface.co/cognitivecomputations/dolphin-2.8-mistral-7b-v02) [📄](https://gist.github.com/tosh/424bf090fa49d6117f4cffe6373e4060)| 38.99| 72.22| 51.96| 40.41| 50.9|

## 🧩 Configuration

```yaml
models:
  - model: alpindale/Mistral-7B-v0.2-hf
    # No parameters necessary for base model
  - model: mistralai/Mistral-7B-Instruct-v0.2
    parameters:
      density: 0.53
      weight: 0.4
  - model: cognitivecomputations/dolphin-2.8-mistral-7b-v02
    parameters:
      density: 0.53
      weight: 0.4
merge_method: dare_ties
base_model: alpindale/Mistral-7B-v0.2-hf
parameters:
  int8_mask: true
dtype: bfloat16
```
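
To reproduce the merge from this config, LazyMergekit drives [mergekit](https://github.com/arcee-ai/mergekit) roughly as below. This is a sketch against mergekit's Python API; `config.yaml` and the output path are placeholders, and the `mergekit-yaml config.yaml ./out` CLI is the equivalent one-liner.

```python
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Parse the YAML configuration shown above (saved locally as config.yaml)
with open("config.yaml", "r", encoding="utf-8") as f:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(f))

# Run the DARE-TIES merge and write the merged checkpoint to out_path
run_merge(
    merge_config,
    out_path="./pandafish-2-7b-32k",
    options=MergeOptions(copy_tokenizer=True),  # keep the Mistral Instruct tokenizer
)
```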

## 💻 Usage

```python
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "ichigoberry/pandafish-2-7b-32k"
messages = [{"role": "user", "content": "What is a large language model?"}]

# Build the prompt with the Mistral Instruct chat template
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Load the merged model into a text-generation pipeline
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Sample a response
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```