---
tags:
- transformers
- causal-lm
- text-generation
- instruct
- chat
- fine-tuned
- merged-lora
- llama-3
- hermes
- discord-dataset
- conversational-ai
- chatml
- pytorch
- open-weights
- 8b-parameters
model-index:
- name: mookiezii/Discord-Hermes-3-8B
  results: []
base_model:
- NousResearch/Hermes-3-Llama-3.1-8B
datasets:
- mookiezi/Discord-Dialogues
library_name: transformers
license: llama3
---

# Discord-Hermes-3-8B

## Model Description

This is a fine-tuned version of [NousResearch/Hermes-3-Llama-3.1-8B](https://huggingface.co/NousResearch/Hermes-3-Llama-3.1-8B), trained on a curated dataset built from a mixture of enhanced and original samples from [mookiezi/Discord-Dialogues](https://huggingface.co/datasets/mookiezi/Discord-Dialogues).

The training hyperparameters and log are available [here](https://huggingface.co/mookiezii/Discord-Hermes-3-8B/raw/main/train.log).

---

### Sample Outputs

⚠️ **Disclaimer:** The first image shows fictional dialogue. It should not be interpreted as political advice, a statement of fact, or an endorsement of any kind.

![sample](sample.png)

![sample2](sample-2.png)
---

## Interfacing

An optimized Python script for interfacing with the model is available [here](https://huggingface.co/mookiezii/Discord-Hermes-3-8B/blob/main/interface.py).

### Windows

```powershell
py interface.py
```

### macOS

```bash
python3 interface.py
```

### Linux

```bash
python3 interface.py
```
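
If you call the model directly instead of through `interface.py`, prompts should follow the ChatML format (per the `chatml` tag, inherited from the Hermes-3 base model). A minimal sketch of assembling such a prompt by hand — the system message and turns below are placeholder examples, not from the training data:

```python
def build_chatml_prompt(system: str, turns: list[tuple[str, str]]) -> str:
    """Assemble a ChatML prompt string from a system message and (role, text) turns."""
    parts = [f"<|im_start|>system\n{system}<|im_end|>"]
    for role, text in turns:
        parts.append(f"<|im_start|>{role}\n{text}<|im_end|>")
    # A trailing open assistant tag cues the model to generate its reply.
    parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)

prompt = build_chatml_prompt(
    "You are a helpful assistant.",
    [("user", "hey what's up")],
)
print(prompt)
```

In practice, `tokenizer.apply_chat_template(messages, add_generation_prompt=True)` from `transformers` produces the equivalent string from the tokenizer's bundled chat template, which is the safer option if the template ever changes.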

[](https://20000.online/micae)
[](https://20000.online/openmicae)
[](https://20000.online/discord-dialogues)