---
license: cc-by-nc-2.0
tags:
- merge
- mergekit
- lazymergekit
- SanjiWatsuki/Kunoichi-DPO-v2-7B
- eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO
base_model:
- SanjiWatsuki/Kunoichi-DPO-v2-7B
- eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO
model-index:
- name: kuno-royale-v2-7b
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 72.01
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=core-3/kuno-royale-v2-7b
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 88.15
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=core-3/kuno-royale-v2-7b
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 65.07
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=core-3/kuno-royale-v2-7b
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 71.1
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=core-3/kuno-royale-v2-7b
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 82.24
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=core-3/kuno-royale-v2-7b
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 70.2
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=core-3/kuno-royale-v2-7b
      name: Open LLM Leaderboard
---
# kuno-royale-v2-7b

An attempt to further strengthen the roleplaying prose of [SanjiWatsuki/Kunoichi-DPO-v2-7B](https://huggingface.co/SanjiWatsuki/Kunoichi-DPO-v2-7B) using [eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO](https://huggingface.co/eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO), a high scorer among 7B models on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).

Personal RP tests have proven promising, and the (largely meaningless) leaderboard metrics have improved over [SanjiWatsuki/Kunoichi-DPO-v2-7B](https://huggingface.co/SanjiWatsuki/Kunoichi-DPO-v2-7B).

Some GGUF quants are available [here](https://huggingface.co/core-3/kuno-royale-v2-7b-GGUF).
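
If you are running one of those quants locally, a minimal llama-cpp-python sketch is below; the quant filename and sampling settings are placeholders rather than part of this repo, so substitute whichever file you actually downloaded.

```python
# Minimal sketch, assuming llama-cpp-python is installed (pip install llama-cpp-python)
# and a quant has been downloaded from the GGUF repo linked above.
from llama_cpp import Llama

llm = Llama(
    model_path="./kuno-royale-v2-7b.Q4_K_M.gguf",  # hypothetical filename: use your actual quant
    n_ctx=4096,        # context window
    n_gpu_layers=-1,   # offload all layers to the GPU when one is available
)

out = llm("What is a large language model?", max_tokens=256, temperature=0.7)
print(out["choices"][0]["text"])
```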

Works well with the SillyTavern Noromaid templates recommended by [SanjiWatsuki for Kunoichi-7B](https://huggingface.co/SanjiWatsuki/Kunoichi-7B): [Context](https://files.catbox.moe/ifmhai.json), [Instruct](https://files.catbox.moe/ttw1l9.json).

| Model | Average | ARC | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8K |
|-------|---------|-----|-----------|------|------------|------------|-------|
| eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO | 76.45 | 73.12 | 89.09 | 64.80 | 77.45 | 84.77 | 69.45 |
| **core-3/kuno-royale-v2-7b** | **74.80** | **72.01** | **88.15** | **65.07** | **71.10** | **82.24** | **70.20** |
| [core-3/kuno-royale-7B](https://huggingface.co/core-3/kuno-royale-7B) | 74.74 | 71.76 | 88.20 | 65.13 | 71.12 | 82.32 | 69.90 |
| SanjiWatsuki/Kunoichi-DPO-v2-7B | 72.46 | 69.62 | 87.44 | 64.94 | 66.06 | 80.82 | 65.88 |
| SanjiWatsuki/Kunoichi-7B | 72.13 | 68.69 | 87.10 | 64.90 | 64.04 | 81.06 | 67.02 |

# Original LazyMergekit Card:

kuno-royale-v2-7b is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [SanjiWatsuki/Kunoichi-DPO-v2-7B](https://huggingface.co/SanjiWatsuki/Kunoichi-DPO-v2-7B)
* [eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO](https://huggingface.co/eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO)

## 🧩 Configuration

```yaml
slices:
  - sources:
      - model: SanjiWatsuki/Kunoichi-DPO-v2-7B
        layer_range: [0, 32]
      - model: eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO
        layer_range: [0, 32]
merge_method: slerp
base_model: SanjiWatsuki/Kunoichi-DPO-v2-7B
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
```
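
For reference, `merge_method: slerp` interpolates each tensor along the arc between the two models' weights rather than along a straight line, and the `t` lists give anchor points that are interpolated across the 32 layers (roughly speaking, `t = 0` keeps the base model and `t = 1` keeps the other, with separate curves for attention and MLP tensors). A minimal sketch of the underlying formula, assuming NumPy; this is illustrative only, not mergekit's exact implementation:

```python
import numpy as np

def slerp(t: float, a: np.ndarray, b: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Spherical linear interpolation between two weight tensors."""
    a_dir = a / (np.linalg.norm(a) + eps)  # unit direction of tensor a
    b_dir = b / (np.linalg.norm(b) + eps)  # unit direction of tensor b
    # Angle between the two weight directions, clipped for numerical safety.
    omega = np.arccos(np.clip(np.sum(a_dir * b_dir), -1.0, 1.0))
    if omega < eps:  # nearly parallel: plain linear interpolation is fine
        return (1 - t) * a + t * b
    so = np.sin(omega)
    return (np.sin((1 - t) * omega) / so) * a + (np.sin(t * omega) / so) * b
```

To reproduce the merge outside the Colab notebook, the same YAML can be fed to mergekit's command-line entry point; a sketch, assuming a recent mergekit release and that the block above is saved as `config.yaml`:

```python
# Notebook-style sketch (same convention as the Usage section below); paths are placeholders.
!pip install -qU mergekit
!mergekit-yaml config.yaml ./kuno-royale-v2-7b
```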

## 💻 Usage

```python
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "core-3/kuno-royale-v2-7b"
messages = [{"role": "user", "content": "What is a large language model?"}]

tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
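
Note that this downloads the full-precision weights (roughly 14 GB for a 7B-parameter model in float16); if that is too heavy for your hardware, the GGUF quants linked above are the lighter option.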