---
license: other
license_name: tongyi-qianwen-license
license_link: LICENSE
language:
- en
- ja
library_name: transformers
pipeline_tag: text-generation
---

# nekomata-7b-pfn-qfin

## Model Description
nekomata-7b-pfn-qfin is a fine-tuned model based on [rinna/nekomata-7b](https://huggingface.co/rinna/nekomata-7b/tree/main).
This is the base model, which is suited to generating continuous text in the finance domain.
nekomata-7b-pfn-qfin is fine-tuned on 370M tokens from multiple special datasets generated by Preferred Networks, which are cleared for commercial use.
Fine-tuning was carried out at a context length of 2048 tokens.
This model is released under the [Tongyi Qianwen LICENSE AGREEMENT](https://github.com/QwenLM/Qwen/blob/e8e15962d897714944773cca57fa2e460a3655e8/Tongyi%20Qianwen%20LICENSE%20AGREEMENT).

The research article is available on [arXiv](https://arxiv.org/abs/2404.10555).

# Benchmarking
The benchmark scores are obtained using the [Japanese Language Model Financial Evaluation Harness](https://github.com/pfnet-research/japanese-lm-fin-harness).
The benchmark uses 0-shot evaluation with the default prompts.
```
| Task            |Metric| nekomata-7b    | Ours           |
|-----------------|------|----------------|----------------|
|chabsa           |f1    |0.8134          |0.8127          |
|cma_basics       |acc   |0.3158 ± 0.0764 |0.3684 ± 0.0793 |
|cpa_audit        |acc   |0.2085 ± 0.0203 |0.1809 ± 0.0193 |
|fp2              |acc   |0.2484 ± 0.0198 |0.2674 ± 0.0203 |
|security_sales_1 |acc   |0.4912 ± 0.0668 |0.5088 ± 0.0668 |
|-----------------|------|----------------|----------------|
|OVER ALL         |      |0.4155          |0.4276          |
```
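
The OVER ALL row matches the unweighted mean of the five per-task scores. As a quick sanity check, here is a minimal sketch (not part of the original evaluation code), assuming the overall score is a simple macro-average:

```python
# Unweighted mean of the per-task scores reproduces the OVER ALL row.
baseline = [0.8134, 0.3158, 0.2085, 0.2484, 0.4912]  # nekomata-7b
ours     = [0.8127, 0.3684, 0.1809, 0.2674, 0.5088]  # nekomata-7b-pfn-qfin

print(round(sum(baseline) / len(baseline), 4))  # -> 0.4155
print(round(sum(ours) / len(ours), 4))          # -> 0.4276
```
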
## Usage
Install the required libraries as follows:
```sh
python -m pip install numpy sentencepiece torch transformers accelerate transformers_stream_generator tiktoken einops
```

Execute the following Python code:
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("pfnet/nekomata-7b-pfn-qfin", trust_remote_code=True)

# Use GPU with bf16 (recommended for supported devices)
# model = AutoModelForCausalLM.from_pretrained("pfnet/nekomata-7b-pfn-qfin", device_map="auto", trust_remote_code=True, bf16=True)

# Use GPU with fp16
# model = AutoModelForCausalLM.from_pretrained("pfnet/nekomata-7b-pfn-qfin", device_map="auto", trust_remote_code=True, fp16=True)

# Use GPU with fp32
# model = AutoModelForCausalLM.from_pretrained("pfnet/nekomata-7b-pfn-qfin", device_map="auto", trust_remote_code=True, fp32=True)

# Use CPU
# model = AutoModelForCausalLM.from_pretrained("pfnet/nekomata-7b-pfn-qfin", device_map="cpu", trust_remote_code=True)

# Automatically select device and precision
model = AutoModelForCausalLM.from_pretrained("pfnet/nekomata-7b-pfn-qfin", device_map="auto", trust_remote_code=True)

text = "日本銀行は"
input_ids = tokenizer(text, return_tensors="pt").input_ids.to(model.device)
with torch.no_grad():
    generated_tokens = model.generate(
        inputs=input_ids,
        max_new_tokens=32,
        do_sample=True,
        temperature=1.0,
        repetition_penalty=1.1
    )[0]
generated_text = tokenizer.decode(generated_tokens)
print(generated_text)
# 日本銀行は、2016年9月に「長短金利操作付き量的・質的金融緩和」を導入し、長期国
```
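
The snippet above samples (`do_sample=True`), so the continuation differs between runs. As an illustrative variant that is not part of the original card, the same call can use greedy decoding for repeatable output, reusing the `model`, `tokenizer`, and `input_ids` defined above:

```python
# Greedy (deterministic) decoding with the same model and inputs as above.
with torch.no_grad():
    generated_tokens = model.generate(
        inputs=input_ids,
        max_new_tokens=32,
        do_sample=False,          # disable sampling for repeatable output
        repetition_penalty=1.1
    )[0]
print(tokenizer.decode(generated_tokens))
```
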

## Model Details
- Model size: 7b
- Fine-tuned tokens: 370M tokens (Japanese: 300M tokens, English: 13M tokens, Digits: 14M tokens)
- Context length: 2048
- Developed by: Preferred Networks, Inc
- Model type: Causal decoder-only
- Language(s): Japanese and English
- License: [Tongyi Qianwen LICENSE AGREEMENT](https://github.com/QwenLM/Qwen/blob/e8e15962d897714944773cca57fa2e460a3655e8/Tongyi%20Qianwen%20LICENSE%20AGREEMENT)
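
The architectural details above can also be inspected from the published configuration. The following is a small illustrative sketch (it assumes access to the Hugging Face Hub and uses only the standard `AutoConfig` API):

```python
# Load only the configuration to inspect model metadata without downloading weights.
from transformers import AutoConfig

config = AutoConfig.from_pretrained("pfnet/nekomata-7b-pfn-qfin", trust_remote_code=True)
print(config)  # prints the full configuration (hidden size, number of layers, sequence length, ...)
```
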

## Bias, Risks, and Limitations
nekomata-7b-pfn-qfin is a new technology that carries risks with use.
Testing conducted to date has been in English and Japanese, and has not covered, nor could it cover, all scenarios.
For these reasons, as with all LLMs, nekomata-7b-pfn-qfin's potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased, or other objectionable responses to user prompts.
This model is not designed for legal, tax, investment, financial, or other advice.
Therefore, before deploying any applications of nekomata-7b-pfn-qfin, developers should perform safety testing and tuning tailored to their specific applications of the model.

## How to cite
```
@misc{hirano2024,
      title={Construction of Domain-specified Japanese Large Language Model for Finance through Continual Pre-training},
      author={Masanori Hirano and Kentaro Imajo},
      year={2024},
      eprint={2404.10555},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```

## Contributors
Preferred Networks, Inc.
- Masanori Hirano
- Kentaro Imajo

# License
[Tongyi Qianwen LICENSE AGREEMENT](https://github.com/QwenLM/Qwen/blob/e8e15962d897714944773cca57fa2e460a3655e8/Tongyi%20Qianwen%20LICENSE%20AGREEMENT)