Initialize project; model provided by the ModelHub XC community
Model: khazarai/Fino1-4B Source: Original Platform
.gitattributes (vendored, new file, 36 lines)
@@ -0,0 +1,36 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
tokenizer.json filter=lfs diff=lfs merge=lfs -text
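Each line above maps a path pattern to Git attributes: `filter=lfs`, `diff=lfs`, and `merge=lfs` route the file through Git LFS, and `-text` disables text (line-ending) conversion. A minimal illustrative sketch, not part of this repo, that parses such lines:

```python
def parse_gitattributes(text):
    """Parse .gitattributes lines into (pattern, {attr: value}) pairs.

    `key=value` attributes become string values; a bare `-name`
    (e.g. `-text`) records the attribute as disabled (False).
    """
    entries = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        pattern, *attrs = line.split()
        parsed = {}
        for attr in attrs:
            if attr.startswith("-"):
                parsed[attr[1:]] = False      # e.g. -text: treat as binary
            elif "=" in attr:
                key, _, value = attr.partition("=")
                parsed[key] = value           # e.g. filter=lfs
            else:
                parsed[attr] = True
        entries.append((pattern, parsed))
    return entries

entries = parse_gitattributes("*.safetensors filter=lfs diff=lfs merge=lfs -text")
```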
README.md (new file, 116 lines)
@@ -0,0 +1,116 @@
---
library_name: transformers
tags:
- sft
- finance
- unsloth
license: apache-2.0
datasets:
- TheFinAI/Fino1_Reasoning_Path_FinQA
language:
- en
base_model:
- unsloth/Qwen3-4B
pipeline_tag: text-generation
---
# Fino1-4B

## Model Details
Fino1-4B is a fine-tuned version of Qwen3-4B adapted for financial reasoning and question answering. The model was trained with LoRA parameter-efficient fine-tuning on a curated dataset derived from FinQA and enriched with GPT-4o-generated reasoning paths, enabling more structured and explainable answers to financial questions.

### Model Description
- **Language(s) (NLP):** English
- **License:** Apache-2.0
- **Finetuned from model:** Qwen3-4B
- **Domain:** Finance
## Uses

### Direct Use

- Primary use case: answering financial reasoning questions with step-by-step structured reasoning.
- Example applications:
  - Financial report analysis
  - Numerical reasoning over tabular data
  - Structured financial Q&A for educational/research purposes
## Bias, Risks, and Limitations

- Not a financial advisor; outputs should not be considered professional financial advice.
- May hallucinate numbers or misinterpret complex financial contexts.
- Trained on ~5.5K examples; may lack coverage for niche financial instruments or international standards.
## How to Get Started with the Model

```python
from transformers import AutoTokenizer, AutoModelForCausalLM, TextStreamer

tokenizer = AutoTokenizer.from_pretrained("khazarai/Fino1-4B")
model = AutoModelForCausalLM.from_pretrained(
    "khazarai/Fino1-4B",
    device_map={"": 0},
)

question = """
Please answer the given financial question based on the context.
Context: note 9 2014 benefit plans the company has defined benefit pension plans covering certain employees in the united states and certain international locations . postretirement healthcare and life insurance benefits provided to qualifying domestic retirees as well as other postretirement benefit plans in international countries are not material . the measurement date used for the company 2019s employee benefit plans is september 30 . effective january 1 , 2018 , the legacy u.s . pension plan was frozen to limit the participation of employees who are hired or re-hired by the company , or who transfer employment to the company , on or after january 1 , net pension cost for the years ended september 30 included the following components: .
|( millions of dollars )|pension plans 2019|pension plans 2018|pension plans 2017|
|service cost|$ 134|$ 136|$ 110|
|interest cost|107|90|61|
|expected return on plan assets|( 180 )|( 154 )|( 112 )|
|amortization of prior service credit|( 13 )|( 13 )|( 14 )|
|amortization of loss|78|78|92|
|settlements|10|2|2014|
|net pension cost|$ 135|$ 137|$ 138|
|net pension cost included in the preceding table that is attributable to international plans|$ 32|$ 34|$ 43|
net pension cost included in the preceding table that is attributable to international plans $ 32 $ 34 $ 43 the amounts provided above for amortization of prior service credit and amortization of loss represent the reclassifications of prior service credits and net actuarial losses that were recognized in accumulated other comprehensive income ( loss ) in prior periods . the settlement losses recorded in 2019 and 2018 primarily included lump sum benefit payments associated with the company 2019s u.s . supplemental pension plan . the company recognizes pension settlements when payments from the supplemental plan exceed the sum of service and interest cost components of net periodic pension cost associated with this plan for the fiscal year . as further discussed in note 2 , upon adopting an accounting standard update on october 1 , 2018 , all components of the company 2019s net periodic pension and postretirement benefit costs , aside from service cost , are recorded to other income ( expense ) , net on its consolidated statements of income , for all periods presented . notes to consolidated financial statements 2014 ( continued ) becton , dickinson and company .
Question: what is the percentage increase in service costs from 2017 to 2018?
Answer:
"""

messages = [{"role": "user", "content": question}]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=True,
)

_ = model.generate(
    **tokenizer(text, return_tensors="pt").to("cuda"),
    max_new_tokens=4000,
    temperature=0.6,
    top_p=0.95,
    top_k=20,
    streamer=TextStreamer(tokenizer, skip_prompt=True),
)
```
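For reference, the sample question is answerable directly from the service-cost row of the context table; a quick sanity check of the expected arithmetic (not model output) is:

```python
# Service cost figures from the context table (millions of dollars).
service_cost_2017 = 110
service_cost_2018 = 136

# Percentage increase from 2017 to 2018.
pct_increase = (service_cost_2018 - service_cost_2017) / service_cost_2017 * 100
print(round(pct_increase, 1))  # 23.6
```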
## Training Data

The model was fine-tuned on TheFinAI/Fino1_Reasoning_Path_FinQA, a dataset of roughly 5.5K examples derived from FinQA and enriched with GPT-4o-generated reasoning paths.

### Citation

```bibtex
@article{chen2021finqa,
  title={FinQA: A dataset of numerical reasoning over financial data},
  author={Chen, Zhiyu and Chen, Wenhu and Smiley, Charese and Shah, Sameena and Borova, Iana and Langdon, Dylan and Moussa, Reema and Beane, Matt and Huang, Ting-Hao and Routledge, Bryan and others},
  journal={arXiv preprint arXiv:2109.00122},
  year={2021}
}

@article{qian2025fino1,
  title={Fino1: On the Transferability of Reasoning Enhanced LLMs to Finance},
  author={Qian, Lingfei and Zhou, Weipeng and Wang, Yan and Peng, Xueqing and Huang, Jimin and Xie, Qianqian},
  journal={arXiv preprint arXiv:2502.08127},
  year={2025}
}
```
added_tokens.json (new file, 28 lines)
@@ -0,0 +1,28 @@
{
  "</think>": 151668,
  "</tool_call>": 151658,
  "</tool_response>": 151666,
  "<think>": 151667,
  "<tool_call>": 151657,
  "<tool_response>": 151665,
  "<|box_end|>": 151649,
  "<|box_start|>": 151648,
  "<|endoftext|>": 151643,
  "<|file_sep|>": 151664,
  "<|fim_middle|>": 151660,
  "<|fim_pad|>": 151662,
  "<|fim_prefix|>": 151659,
  "<|fim_suffix|>": 151661,
  "<|im_end|>": 151645,
  "<|im_start|>": 151644,
  "<|image_pad|>": 151655,
  "<|object_ref_end|>": 151647,
  "<|object_ref_start|>": 151646,
  "<|quad_end|>": 151651,
  "<|quad_start|>": 151650,
  "<|repo_name|>": 151663,
  "<|video_pad|>": 151656,
  "<|vision_end|>": 151653,
  "<|vision_pad|>": 151654,
  "<|vision_start|>": 151652
}
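The `<think>`/`</think>` pair above (IDs 151667/151668) delimits the model's reasoning trace in generated output. A minimal illustrative sketch, with the token IDs taken from the file above, that strips the reasoning span from a list of generated token IDs:

```python
THINK_START, THINK_END = 151667, 151668  # <think>, </think> from added_tokens.json

def strip_reasoning(token_ids):
    """Remove everything from <think> through </think>, inclusive.

    If either delimiter is missing, the sequence is returned unchanged.
    """
    if THINK_START in token_ids and THINK_END in token_ids:
        start = token_ids.index(THINK_START)
        end = token_ids.index(THINK_END)
        return token_ids[:start] + token_ids[end + 1:]
    return token_ids

out = strip_reasoning([1, 2, THINK_START, 9, 9, THINK_END, 3, 4])
```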
chat_template.jinja (new file, 98 lines)
@@ -0,0 +1,98 @@
{%- if tools %}
    {{- '<|im_start|>system\n' }}
    {%- if messages[0].role == 'system' %}
        {{- messages[0].content + '\n\n' }}
    {%- endif %}
    {{- "# Tools\n\nYou may call one or more functions to assist with the user query.\n\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>" }}
    {%- for tool in tools %}
        {{- "\n" }}
        {{- tool | tojson }}
    {%- endfor %}
    {{- "\n</tools>\n\nFor each function call, return a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call><|im_end|>\n" }}
{%- else %}
    {%- if messages[0].role == 'system' %}
        {{- '<|im_start|>system\n' + messages[0].content + '<|im_end|>\n' }}
    {%- endif %}
{%- endif %}
{%- set ns = namespace(multi_step_tool=true, last_query_index=messages|length - 1) %}
{%- for forward_message in messages %}
    {%- set index = (messages|length - 1) - loop.index0 %}
    {%- set message = messages[index] %}
    {%- set current_content = message.content if message.content is not none else '' %}
    {%- set tool_start = '<tool_response>' %}
    {%- set tool_start_length = tool_start|length %}
    {%- set start_of_message = current_content[:tool_start_length] %}
    {%- set tool_end = '</tool_response>' %}
    {%- set tool_end_length = tool_end|length %}
    {%- set start_pos = (current_content|length) - tool_end_length %}
    {%- if start_pos < 0 %}
        {%- set start_pos = 0 %}
    {%- endif %}
    {%- set end_of_message = current_content[start_pos:] %}
    {%- if ns.multi_step_tool and message.role == "user" and not(start_of_message == tool_start and end_of_message == tool_end) %}
        {%- set ns.multi_step_tool = false %}
        {%- set ns.last_query_index = index %}
    {%- endif %}
{%- endfor %}
{%- for message in messages %}
    {%- if (message.role == "user") or (message.role == "system" and not loop.first) %}
        {{- '<|im_start|>' + message.role + '\n' + message.content + '<|im_end|>' + '\n' }}
    {%- elif message.role == "assistant" %}
        {%- set content = message.content %}
        {%- set reasoning_content = '' %}
        {%- if message.reasoning_content is defined and message.reasoning_content is not none %}
            {%- set reasoning_content = message.reasoning_content %}
        {%- else %}
            {%- if '</think>' in message.content %}
                {%- set content = (message.content.split('</think>')|last).lstrip('\n') %}
                {%- set reasoning_content = (message.content.split('</think>')|first).rstrip('\n') %}
                {%- set reasoning_content = (reasoning_content.split('<think>')|last).lstrip('\n') %}
            {%- endif %}
        {%- endif %}
        {%- if loop.index0 > ns.last_query_index %}
            {%- if loop.last or (not loop.last and reasoning_content) %}
                {{- '<|im_start|>' + message.role + '\n<think>\n' + reasoning_content.strip('\n') + '\n</think>\n\n' + content.lstrip('\n') }}
            {%- else %}
                {{- '<|im_start|>' + message.role + '\n' + content }}
            {%- endif %}
        {%- else %}
            {{- '<|im_start|>' + message.role + '\n' + content }}
        {%- endif %}
        {%- if message.tool_calls %}
            {%- for tool_call in message.tool_calls %}
                {%- if (loop.first and content) or (not loop.first) %}
                    {{- '\n' }}
                {%- endif %}
                {%- if tool_call.function %}
                    {%- set tool_call = tool_call.function %}
                {%- endif %}
                {{- '<tool_call>\n{"name": "' }}
                {{- tool_call.name }}
                {{- '", "arguments": ' }}
                {%- if tool_call.arguments is string %}
                    {{- tool_call.arguments }}
                {%- else %}
                    {{- tool_call.arguments | tojson }}
                {%- endif %}
                {{- '}\n</tool_call>' }}
            {%- endfor %}
        {%- endif %}
        {{- '<|im_end|>\n' }}
    {%- elif message.role == "tool" %}
        {%- if loop.first or (messages[loop.index0 - 1].role != "tool") %}
            {{- '<|im_start|>user' }}
        {%- endif %}
        {{- '\n<tool_response>\n' }}
        {{- message.content }}
        {{- '\n</tool_response>' }}
        {%- if loop.last or (messages[loop.index0 + 1].role != "tool") %}
            {{- '<|im_end|>\n' }}
        {%- endif %}
    {%- endif %}
{%- endfor %}
{%- if add_generation_prompt %}
    {{- '<|im_start|>assistant\n' }}
    {%- if enable_thinking is defined and enable_thinking is false %}
        {{- '<think>\n\n</think>\n\n' }}
    {%- endif %}
{%- endif %}
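The first loop in the template scans the conversation from the end to find the last "real" user query, i.e. one whose content is not a wrapped `<tool_response>`. A hypothetical Python sketch of that same logic, for readability (not used by the template itself):

```python
TOOL_START, TOOL_END = "<tool_response>", "</tool_response>"

def last_query_index(messages):
    """Mirror of the template's first loop: walk messages from the end and
    return the index of the last user message that is not a tool response."""
    last_index = len(messages) - 1
    multi_step_tool = True
    for index in range(len(messages) - 1, -1, -1):
        content = messages[index].get("content") or ""
        is_tool_response = content.startswith(TOOL_START) and content.endswith(TOOL_END)
        if multi_step_tool and messages[index]["role"] == "user" and not is_tool_response:
            multi_step_tool = False  # stop updating once a real query is found
            last_index = index
    return last_index

idx = last_query_index([
    {"role": "user", "content": "What is 2+2?"},
    {"role": "assistant", "content": "4"},
    {"role": "user", "content": "<tool_response>\nok\n</tool_response>"},
])
```

Messages after this index get their `<think>` reasoning rendered; earlier assistant turns are emitted without it.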
config.json (new file, 72 lines)
@@ -0,0 +1,72 @@
{
  "architectures": [
    "Qwen3ForCausalLM"
  ],
  "attention_bias": false,
  "attention_dropout": 0.0,
  "bos_token_id": null,
  "dtype": "bfloat16",
  "eos_token_id": 151645,
  "head_dim": 128,
  "hidden_act": "silu",
  "hidden_size": 2560,
  "initializer_range": 0.02,
  "intermediate_size": 9728,
  "layer_types": [
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention"
  ],
  "max_position_embeddings": 40960,
  "max_window_layers": 36,
  "model_type": "qwen3",
  "num_attention_heads": 32,
  "num_hidden_layers": 36,
  "num_key_value_heads": 8,
  "pad_token_id": 151654,
  "rms_norm_eps": 1e-06,
  "rope_parameters": {
    "rope_theta": 1000000,
    "rope_type": "default"
  },
  "sliding_window": null,
  "tie_word_embeddings": true,
  "transformers_version": "5.0.0",
  "unsloth_fixed": true,
  "use_cache": true,
  "use_sliding_window": false,
  "vocab_size": 151936
}
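A few derived numbers follow directly from this config. With a bfloat16 (2-byte) KV cache, every generated token stores keys and values for 36 layers × 8 KV heads × head_dim 128. A back-of-the-envelope sketch (exact memory use depends on the runtime):

```python
# Values taken from config.json above.
num_hidden_layers = 36
num_key_value_heads = 8
head_dim = 128
bytes_per_value = 2  # bfloat16

# KV cache per token: keys and values (the factor of 2),
# per layer, per KV head, per head dimension.
kv_bytes_per_token = (
    num_hidden_layers * 2 * num_key_value_heads * head_dim * bytes_per_value
)
print(kv_bytes_per_token // 1024, "KiB per token")  # 144 KiB per token
```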
generation_config.json (new file, 14 lines)
@@ -0,0 +1,14 @@
{
  "bos_token_id": 151643,
  "do_sample": true,
  "eos_token_id": [
    151645,
    151643
  ],
  "max_length": 40960,
  "pad_token_id": 151654,
  "temperature": 0.6,
  "top_k": 20,
  "top_p": 0.95,
  "transformers_version": "5.0.0"
}
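These defaults (temperature 0.6, top_k 20, top_p 0.95) match the `generate()` call in the README. A hypothetical pure-Python sketch of how top-k and top-p (nucleus) filtering restrict the candidate set before sampling; it is a simplification — real implementations apply these as logit filters with renormalization between steps:

```python
import math

def filter_top_k_top_p(logits, top_k=20, top_p=0.95, temperature=0.6):
    """Return the token ids kept after temperature scaling, top-k, then top-p."""
    # Temperature scaling followed by a numerically stable softmax.
    scaled = [x / temperature for x in logits]
    m = max(scaled)
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]

    # Top-k: keep only the k most probable token ids.
    ranked = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)[:top_k]

    # Top-p: keep the smallest prefix of the ranked list whose
    # cumulative probability mass reaches top_p.
    kept, cumulative = [], 0.0
    for i in ranked:
        kept.append(i)
        cumulative += probs[i]
        if cumulative >= top_p:
            break
    return kept

# With one dominant logit, filtering collapses to a single candidate.
kept = filter_top_k_top_p([10.0, 1.0, 0.5, 0.2], top_k=2, top_p=0.95)
```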
merges.txt (new file, 151388 lines)
(diff suppressed: file too large)
model.safetensors (new file, LFS pointer, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f688eff9c95b50a9e44b852b813fcafc22896e48fe6412698f79177882c32418
size 8044982080
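The three lines above are a Git LFS pointer file, not the weights themselves; the actual ~7.5 GiB blob is fetched by the LFS filter at checkout. A minimal sketch that parses such a pointer:

```python
def parse_lfs_pointer(text):
    """Parse a Git LFS pointer file into version, oid, and size fields."""
    fields = dict(line.split(" ", 1) for line in text.strip().splitlines())
    algo, _, digest = fields["oid"].partition(":")
    return {
        "version": fields["version"],
        "oid_algorithm": algo,  # e.g. "sha256"
        "oid": digest,
        "size_bytes": int(fields["size"]),
    }

pointer = parse_lfs_pointer(
    "version https://git-lfs.github.com/spec/v1\n"
    "oid sha256:f688eff9c95b50a9e44b852b813fcafc22896e48fe6412698f79177882c32418\n"
    "size 8044982080\n"
)
print(round(pointer["size_bytes"] / 1024**3, 1), "GiB")  # 7.5 GiB
```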
special_tokens_map.json (new file, 31 lines)
@@ -0,0 +1,31 @@
{
  "additional_special_tokens": [
    "<|im_start|>",
    "<|im_end|>",
    "<|object_ref_start|>",
    "<|object_ref_end|>",
    "<|box_start|>",
    "<|box_end|>",
    "<|quad_start|>",
    "<|quad_end|>",
    "<|vision_start|>",
    "<|vision_end|>",
    "<|vision_pad|>",
    "<|image_pad|>",
    "<|video_pad|>"
  ],
  "eos_token": {
    "content": "<|im_end|>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "pad_token": {
    "content": "<|vision_pad|>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  }
}
tokenizer.json (new file, LFS pointer, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:be75606093db2094d7cd20f3c2f385c212750648bd6ea4fb2bf507a6a4c55506
size 11422650
tokenizer_config.json (new file, 241 lines)
(diff suppressed: one or more lines are too long)
vocab.json (new file, 1 line)
(diff suppressed: one or more lines are too long)