Compare commits: 1cf36b2800 ... 07ed34d94e (10 commits)

Commits in range: 07ed34d94e, cdd8a6cde7, 0c5d723c60, 9b61c7836a, ffe268ef26, 3f44e4f60f, 29de03ae82, 623116e0c5, fff6602cbd, a797b02af7

README.md
---
license: cc-by-sa-4.0
pipeline_tag: text-generation
---
# 🤗 FinOPT-Washington

Released June 1, 2023

## Model Description

FinOPT-Washington is a language model based on the OPT-125M architecture, fine-tuned on a financial question-answering dataset. It aims to provide accurate and informative responses to financial questions.

## FinOPT Series

The FinOPT series of language models comes in several sizes. Refer to this Hugging Face Hub [link](https://huggingface.co/models?search=mayaph/finopt) to see the other FinOPT checkpoints.

| Model Name | Parameter Size |
|------------|----------------|
| [FinOPT-Franklin](https://huggingface.co/MayaPH/FinOPT-Franklin) | 1.3B |
| [FinOPT-Lincoln](https://huggingface.co/MayaPH/FinOPT-Lincoln) | 350M |
| <b>FinOPT-Washington</b> | <b>125M</b> |
## Intended Use

FinOPT-Washington is designed to assist users in obtaining relevant and reliable information about financial topics. It can be used for question-answering tasks in the financial domain, including banking queries, investment advice, and general financial inquiries.

The model is intended for individuals seeking information about financial topics, as well as for developers and researchers working on natural language processing (NLP) tasks in the financial domain.
## Usage

To use FinOPT-Washington, you are required to provide attribution in accordance with the Creative Commons Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license. Please include the following attribution notice when utilizing FinOPT-Washington in your work:

```python
# This code uses FinOPT-Washington, a language model developed by MayaPH.
# The model is licensed under the Creative Commons Attribution-ShareAlike 4.0
# International (CC BY-SA 4.0) license.
# For more information, visit: https://creativecommons.org/licenses/by-sa/4.0/

from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("MayaPH/FinOPT-Washington")
model = AutoModelForCausalLM.from_pretrained("MayaPH/FinOPT-Washington")
```

Please ensure that you include this attribution notice in your code or any other form of usage to comply with the license terms.
## Limitations and Caveats

While FinOPT-Washington has been fine-tuned on a financial question-answering dataset, it is important to note the following limitations and caveats:

1. **Domain-Specific Focus:** The model's training data primarily consists of questions and answers from the financial QA dataset. It may not perform as well on questions outside the financial domain.

2. **Potential Bias:** The model may reflect biases present in the training data. Carefully evaluate and interpret the model's responses, particularly on sensitive topics such as investment advice or financial recommendations.

3. **Confidence and Verification:** The model generates responses based on patterns learned from the training data, but it has no inherent fact-checking capability. Verify the information provided by the model against reliable sources before making any financial decisions.
## Training Data

FinOPT-Washington was trained on a financial question-answering dataset consisting of questions and answers related to various financial topics. The dataset was collected from online sources and financial forums, and was manually curated.

## Ethical Considerations

When using FinOPT-Washington, keep the following ethical considerations in mind:

1. **Privacy and Security:** Avoid sharing sensitive personal or financial information while interacting with the model. The model has no privacy safeguards, so exercise caution when discussing personal or confidential matters.
3. **Transparency:** The model operates as a predictive text generator based on patterns learned from the training data. The model's inner workings and the specific training data used are proprietary and not publicly available.

4. **User Responsibility:** Users should take responsibility for their own financial decisions and not rely solely on the information provided by the model. Consult financial professionals or reliable sources for specific financial advice or recommendations.

## Further Information

For additional information or inquiries about FinOPT-Washington, please contact the Maya Philippines iOps Team via jasper.catapang@maya.ph.
## Disclaimer

FinOPT-Washington is an AI language model trained by Maya Philippines. It is provided "as is" without warranty of any kind, express or implied. The model developers and Maya Philippines shall not be liable for any direct or indirect damages arising from the use of this model.

## Acknowledgments

The development of FinOPT-Washington was made possible by Maya Philippines and by the curation and creation of the financial question-answering dataset.
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_MayaPH__FinOPT-Washington).

| Metric              | Value |
|---------------------|-------|
| Avg.                | 24.87 |
| ARC (25-shot)       | 25.17 |
| HellaSwag (10-shot) | 26.25 |
| MMLU (5-shot)       | 24.83 |
| TruthfulQA (0-shot) | 45.8  |
| Winogrande (5-shot) | 51.07 |
| GSM8K (5-shot)      | 0.0   |
| DROP (3-shot)       | 1.0   |
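As a quick sanity check, the Avg. row is consistent with the arithmetic mean of the seven benchmark scores (an observation about the numbers above, not documented leaderboard behavior):

```python
# Mean of the seven benchmark scores listed in the leaderboard table.
scores = [25.17, 26.25, 24.83, 45.8, 51.07, 0.0, 1.0]

avg = round(sum(scores) / len(scores), 2)
print(avg)  # 24.87 -- matches the reported Avg. row
```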
config.json

```diff
@@ -1,5 +1,5 @@
 {
-    "_name_or_path": "facebook/opt-125m",
+    "_name_or_path": "FinOPT-Washington",
     "_remove_final_layer_norm": false,
     "activation_dropout": 0.0,
     "activation_function": "relu",
```
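Since the file is plain JSON, the renamed `_name_or_path` field can be inspected without loading the model weights. A minimal sketch that hard-codes the four fields shown above (the diff's trailing comma is dropped so the fragment parses as valid JSON):

```python
import json

# The four config fields visible in the diff above, hard-coded here
# rather than fetched from the Hub.
config_text = """{
    "_name_or_path": "FinOPT-Washington",
    "_remove_final_layer_norm": false,
    "activation_dropout": 0.0,
    "activation_function": "relu"
}"""

config = json.loads(config_text)
print(config["_name_or_path"])        # FinOPT-Washington
print(config["activation_function"])  # relu
```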
model.safetensors (new file, Git LFS pointer, +3 lines)

```
version https://git-lfs.github.com/spec/v1
oid sha256:f57a7e43a147d4557d7c8ca97d235350f1969db75db2cc4c62ce5d09ae29efdc
size 500979600
```