---
tags:
- merge
- mergekit
- lazymergekit
- automerger/YamShadow-7B
- mlabonne/AlphaMonarch-7B
- automerger/OgnoExperiment27-7B
- Kukedlc/Jupiter-k-7B-slerp
base_model:
- automerger/YamShadow-7B
- mlabonne/AlphaMonarch-7B
- automerger/OgnoExperiment27-7B
- Kukedlc/Jupiter-k-7B-slerp
license: apache-2.0
---

# NeuralShiva-7B-DT

NeuralShiva-7B-DT is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [automerger/YamShadow-7B](https://huggingface.co/automerger/YamShadow-7B)
* [mlabonne/AlphaMonarch-7B](https://huggingface.co/mlabonne/AlphaMonarch-7B)
* [automerger/OgnoExperiment27-7B](https://huggingface.co/automerger/OgnoExperiment27-7B)
* [Kukedlc/Jupiter-k-7B-slerp](https://huggingface.co/Kukedlc/Jupiter-k-7B-slerp)

## 🧬 Model Family

## 🧩 Configuration

```yaml
models:
  - model: liminerity/M7-7b
    # no parameters necessary for base model
  - model: automerger/YamShadow-7B
    parameters:
      weight: 0.3
      density: 0.5
  - model: mlabonne/AlphaMonarch-7B
    parameters:
      weight: 0.2
      density: 0.5
  - model: automerger/OgnoExperiment27-7B
    parameters:
      weight: 0.2
      density: 0.5
  - model: Kukedlc/Jupiter-k-7B-slerp
    parameters:
      weight: 0.3
      density: 0.5
merge_method: dare_ties
base_model: liminerity/M7-7b
parameters:
  int8_mask: true
  normalize: true
dtype: bfloat16
```
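
The `dare_ties` method randomly prunes each model's delta weights at the given `density` and rescales the survivors, then merges them TIES-style with sign consensus; `weight` sets each model's relative contribution. To reproduce the merge locally, save the block above as `config.yaml` and feed it to mergekit's CLI. A minimal sketch, assuming a recent `mergekit` install (the flags mirror the LazyMergekit notebook defaults and may need tuning for your hardware):

```python
# Reproduce the merge locally (assumes the YAML above is saved as config.yaml)
!pip install -qU mergekit
!mergekit-yaml config.yaml merge --copy-tokenizer --allow-crimes --out-shard-size 1B --lazy-unpickle
```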
## 💻 Usage - Stream
```python
# Requirements
!pip install -qU transformers accelerate bitsandbytes

# Imports & settings
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer
import warnings
import os

os.environ["TOKENIZERS_PARALLELISM"] = "false"
warnings.filterwarnings('ignore')

# Model & Tokenizer
MODEL_NAME = "Kukedlc/NeuralShiva-7B-DT"
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, device_map='cuda:1', load_in_4bit=True)
tok = AutoTokenizer.from_pretrained(MODEL_NAME)

# Inference
prompt = "I want you to generate a theory that unites quantum mechanics with the theory of relativity and cosmic consciousness"
# Keep the inputs on the same device the model was loaded on
inputs = tok([prompt], return_tensors="pt").to(model.device)
streamer = TextStreamer(tok)

# Despite returning the usual output, the streamer also prints the generated text to stdout.
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=512, do_sample=True, num_beams=1, top_p=0.9, temperature=0.7)
```
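
The prompt above is passed raw. Since the parent models are chat-tuned Mistral variants, wrapping the prompt in the tokenizer's chat template usually yields better-formatted answers. A minimal sketch reusing the objects above (this assumes the merged tokenizer ships a chat template, which Mistral-family models typically do):

```python
# Optional: apply the chat template before streaming
messages = [{"role": "user", "content": prompt}]
chat_prompt = tok.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tok([chat_prompt], return_tensors="pt").to(model.device)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=512, do_sample=True, top_p=0.9, temperature=0.7)
```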
## 💻 Usage - Classic
```python
!pip install -qU transformers bitsandbytes accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "Kukedlc/NeuralShiva-7B-DT"

tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True},
)

messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
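
On recent `transformers` releases, passing `load_in_4bit` through `model_kwargs` is deprecated in favor of an explicit quantization config. A forward-compatible variant of the pipeline setup above, assuming `bitsandbytes` is installed:

```python
# 4-bit loading via an explicit quantization config (newer transformers)
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.float16)
pipeline = transformers.pipeline(
    "text-generation",
    model="Kukedlc/NeuralShiva-7B-DT",
    model_kwargs={"quantization_config": bnb_config},
    device_map="auto",
)
```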