Initialize the project; model provided by the ModelHub XC community
Model: eren23/merged-dpo-binarized-NeutrixOmnibe-7B Source: Original Platform
README.md (new file, +180 lines)

---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- eren23/dpo-binarized-NeutrixOmnibe-7B
- Kukedlc/NeuTrixOmniBe-7B-model-remix
base_model:
- eren23/dpo-binarized-NeutrixOmnibe-7B
- Kukedlc/NeuTrixOmniBe-7B-model-remix
model-index:
- name: merged-dpo-binarized-NeutrixOmnibe-7B
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 72.7
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=eren23/merged-dpo-binarized-NeutrixOmnibe-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 89.03
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=eren23/merged-dpo-binarized-NeutrixOmnibe-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 64.59
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=eren23/merged-dpo-binarized-NeutrixOmnibe-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 76.9
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=eren23/merged-dpo-binarized-NeutrixOmnibe-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 85.08
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=eren23/merged-dpo-binarized-NeutrixOmnibe-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 68.92
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=eren23/merged-dpo-binarized-NeutrixOmnibe-7B
      name: Open LLM Leaderboard
---

# merged-dpo-binarized-NeutrixOmnibe-7B

merged-dpo-binarized-NeutrixOmnibe-7B is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [eren23/dpo-binarized-NeutrixOmnibe-7B](https://huggingface.co/eren23/dpo-binarized-NeutrixOmnibe-7B)
* [Kukedlc/NeuTrixOmniBe-7B-model-remix](https://huggingface.co/Kukedlc/NeuTrixOmniBe-7B-model-remix)

## 🧩 Configuration

```yaml
slices:
  - sources:
      - model: eren23/dpo-binarized-NeutrixOmnibe-7B
        layer_range: [0, 32]
      - model: Kukedlc/NeuTrixOmniBe-7B-model-remix
        layer_range: [0, 32]
merge_method: slerp
base_model: eren23/dpo-binarized-NeutrixOmnibe-7B
parameters:
  t:
    - filter: self_attn
      value: [0.2, 0.7, 0.8, 0.7, 1]
    - filter: mlp
      value: [0.8, 0.3, 0.2, 0.3, 0]
    - value: 0.45
dtype: bfloat16
```
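In a slerp merge, `t = 0` keeps the base model's weights and `t = 1` takes the second model's, so the per-filter lists above blend attention and MLP weights differently across layer groups, with `0.45` as the default for all remaining tensors. The merge can also be reproduced with [mergekit](https://github.com/arcee-ai/mergekit) directly; a minimal notebook-style sketch, not part of the original card, assuming the YAML above is saved as `config.yaml` and both source checkpoints fit on disk:

```python
# Sketch only: assumes mergekit is installed from PyPI and that
# config.yaml contains the slerp configuration shown above.
!pip install -qU mergekit
!mergekit-yaml config.yaml ./merged-dpo-binarized-NeutrixOmnibe-7B --copy-tokenizer
```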

## 💻 Usage

```python
# Install dependencies (notebook-style).
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "eren23/merged-dpo-binarized-NeutrixOmnibe-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]

# Format the chat messages with the model's chat template.
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Load the model into a text-generation pipeline.
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
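
For tighter control over generation, the model can also be loaded directly instead of going through the pipeline API. A minimal sketch, not from the original card, assuming a GPU with enough memory for the fp16 7B weights:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "eren23/merged-dpo-binarized-NeutrixOmnibe-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)

messages = [{"role": "user", "content": "What is a large language model?"}]
# Tokenize the chat-formatted prompt and move it to the model's device.
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```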

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_eren23__merged-dpo-binarized-NeutrixOmnibe-7B).

| Metric                            | Value |
|-----------------------------------|------:|
| Avg.                              | 76.20 |
| AI2 Reasoning Challenge (25-Shot) | 72.70 |
| HellaSwag (10-Shot)               | 89.03 |
| MMLU (5-Shot)                     | 64.59 |
| TruthfulQA (0-shot)               | 76.90 |
| Winogrande (5-shot)               | 85.08 |
| GSM8k (5-shot)                    | 68.92 |