---
tags:
- autotrain
- text-generation
datasets:
- neovalle/H4rmony
language:
- en
library_name: transformers
license: mit
---

# Model Card for neovalle/H4rmoniousBreeze

## Model Details

### Model Description

This model is a version of HuggingFaceH4/zephyr-7b-beta, fine-tuned via an AutoTrain reward model on the H4rmony dataset, which aims to better align the model with ecological values through the use of ecolinguistics principles.

- **Developed by:** Jorge Vallego
- **Funded by:** Neovalle Ltd.
- **Shared by:** airesearch@neovalle.co.uk
- **Model type:** mistral
- **Language(s) (NLP):** Primarily English
- **License:** MIT
- **Finetuned from model:** HuggingFaceH4/zephyr-7b-beta

## Uses

Intended as a proof of concept to show the effects of the H4rmony dataset.

### Direct Use

For testing purposes, to gain insight that helps with the continuous improvement of the H4rmony dataset.

### Downstream Use

Direct use in applications is not recommended, as this model is under testing for a specific task only (ecological alignment).

### Out-of-Scope Use

Not meant to be used for anything other than testing and evaluation of the H4rmony dataset and ecological alignment.

## Bias, Risks, and Limitations

This model might reproduce biases already present in the base model, as well as others unintentionally introduced during fine-tuning.

## How to Get Started with the Model

It can be loaded and run in a Colab instance with high RAM.

Code to load the base and fine-tuned models and compare their outputs:

https://github.com/Neovalle/H4rmony/blob/main/H4rmoniousBreeze.ipynb
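
As a minimal sketch (separate from the notebook above), the model can also be loaded with the `transformers` pipeline API. The model id below is this repository; the prompt and generation parameters are illustrative assumptions, not settings prescribed by this card:

```python
from transformers import pipeline


def generate(user_message: str, model_id: str = "neovalle/H4rmoniousBreeze") -> str:
    """Generate a completion for one user message (needs a GPU / high-RAM runtime)."""
    pipe = pipeline("text-generation", model=model_id, device_map="auto")
    # Zephyr-style models expect a chat template applied to the messages
    messages = [{"role": "user", "content": user_message}]
    prompt = pipe.tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )
    out = pipe(prompt, max_new_tokens=128, do_sample=True, temperature=0.7)
    return out[0]["generated_text"]


# Example (downloads the full 7B model):
# print(generate("Why are wetlands worth protecting?"))
```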

## Training Details

Reward model trained via AutoTrain.

### Training Data

H4rmony Dataset - https://huggingface.co/datasets/neovalle/H4rmony