---
license: apache-2.0
language:
- en
base_model: Qwen/Qwen3-1.7B
datasets:
- Ayansk11/FinSenti-Dataset
pipeline_tag: text-generation
library_name: transformers
tags:
- finance
- financial-sentiment
- sentiment-analysis
- chain-of-thought
- reasoning
- grpo
- sft
- lora
- finsenti
---
# FinSenti-Qwen3-1.7B
FinSenti-Qwen3-1.7B is a 1.7B-parameter model fine-tuned to
read short financial text (headlines, earnings snippets, market commentary)
and explain its reasoning before settling on a positive, negative, or
neutral label. It's a useful middle size: small enough to load on a 6 GB laptop GPU, big enough that the reasoning stays coherent on tricky headlines.
The model is part of the [FinSenti
collection](https://huggingface.co/collections/Ayansk11/finsenti), a
scaling study of small models trained on the same data with the same recipe.
## What it's good at
- Classifying short financial text (1-3 sentences) into positive / negative
/ neutral
- Producing a short reasoning chain you can read or log
- Following a strict `<reasoning>...</reasoning>` / `<sentiment>...</sentiment>` output
format that's easy to parse downstream
It was trained on news-style headlines and earnings snippets in English, so
that's where it shines. Outside that domain you'll see the format hold up
but the labels get noisier.
## How it was trained
Two-stage recipe, same across the whole FinSenti family:
1. **SFT** on the SFT train slice from the [FinSenti
dataset](https://huggingface.co/datasets/Ayansk11/FinSenti-Dataset)
(~15.2K balanced training samples, drawn from a
50.8K-sample pool with held-out val/test splits, chain-of-thought
targets generated by a teacher model and filtered for label agreement).
This stage took about 0.8 hours on a single A100 80GB
for this model.
2. **GRPO** with four reward functions (sentiment correctness, format
compliance, reasoning quality, output consistency), each weighted equally
for a maximum reward of 4.0. The training budget was 3000
steps with early stopping; the best checkpoint landed near step
~300 with a mean reward of approximately
**3.71 / 4.0** on the validation slice.
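For illustration, here is roughly what four equally weighted rewards of that shape look like. The actual reward implementations and scoring rules aren't published in this card, so the function names and bodies below are assumptions (tag names follow the prompt format section), not the training code:

```python
import re

# Illustrative sketch of the four GRPO rewards described above. Each scores a
# completion in [0, 1]; with equal weights the per-sample reward maxes out at 4.0.
TAG_RE = re.compile(
    r"<reasoning>(.*?)</reasoning>\s*<sentiment>(positive|negative|neutral)</sentiment>",
    re.DOTALL,
)

def format_reward(completion: str) -> float:
    """1.0 if the output is a well-formed reasoning block followed by a label block."""
    return 1.0 if TAG_RE.search(completion) else 0.0

def sentiment_reward(completion: str, gold_label: str) -> float:
    """1.0 if the predicted label matches the reference label."""
    m = TAG_RE.search(completion)
    return 1.0 if m and m.group(2) == gold_label else 0.0

def reasoning_reward(completion: str) -> float:
    """Crude proxy for reasoning quality: a non-trivial amount of reasoning text."""
    m = TAG_RE.search(completion)
    return 1.0 if m and len(m.group(1).split()) >= 15 else 0.0

def consistency_reward(labels_for_paraphrases: list[str]) -> float:
    """1.0 if paraphrases of the same headline all receive the same label."""
    return 1.0 if len(set(labels_for_paraphrases)) == 1 else 0.0
```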
Trainer stack: Unsloth + TRL, using Unsloth's mirror
[`unsloth/Qwen3-1.7B`](https://huggingface.co/unsloth/Qwen3-1.7B) as the
loading shortcut for the upstream
[`Qwen/Qwen3-1.7B`](https://huggingface.co/Qwen/Qwen3-1.7B)
weights. LoRA adapters (r=32, alpha=64) were
trained on the attention and MLP projection layers, then merged into the
base weights before export, so this repo is a self-contained model and
doesn't need PEFT to load.
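For reference, the adapter stage looks roughly like the sketch below in Unsloth. The rank, alpha, target projections, and sequence length match the card; everything else (and the exact merge call used for this repo) is an assumption:

```python
from unsloth import FastLanguageModel

# Load the base weights through Unsloth's mirror.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Qwen3-1.7B",
    max_seq_length=2048,
)

# Attach LoRA adapters on the attention and MLP projections (r=32, alpha=64).
model = FastLanguageModel.get_peft_model(
    model,
    r=32,
    lora_alpha=64,
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
    ],
)

# ... SFT + GRPO training happens here ...

# Merging the adapters back into the base weights produces a standalone
# checkpoint like the one in this repo (no PEFT needed at load time).
model.save_pretrained_merged("FinSenti-Qwen3-1.7B", tokenizer, save_method="merged_16bit")
```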
## Quick start
Standard `transformers` usage:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
model_id = "Ayansk11/FinSenti-Qwen3-1.7B"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

system = (
    "You are a financial sentiment analyst. For each headline you receive, "
    "write a short reasoning chain inside <reasoning>...</reasoning> tags, "
    "then give a single label inside <sentiment>...</sentiment> tags. The label "
    "must be exactly one of: positive, negative, neutral."
)
user = "Apple beats Q4 estimates as iPhone sales jump 12% year over year."

messages = [
    {"role": "system", "content": system},
    {"role": "user", "content": user},
]
prompt = tok.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tok(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=256, do_sample=False)
print(tok.decode(out[0][inputs.input_ids.shape[1]:], skip_special_tokens=True))
```
Expected output (your reasoning text will vary; the label should match):
```
<reasoning>Beating estimates is a positive earnings surprise. A 12% YoY iPhone sales jump in the company's biggest product line points to demand strength. Both signals push the read positive.</reasoning>
<sentiment>positive</sentiment>
```
## Prompt format
The model expects the system prompt above; using it verbatim works best. The user turn
is the headline or short snippet you want classified. Output is two XML-ish
blocks in this order: `<reasoning>...</reasoning>` then
`<sentiment>...</sentiment>`. The `<sentiment>` content is one of `positive`,
`negative`, or `neutral` (lowercase, no punctuation).
If you want labels only and don't care about the reasoning, you can stop
generation as soon as you see `</sentiment>` to save tokens.
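Continuing from the quick-start snippet, a small regex is enough to pull the label out, and recent `transformers` versions let you pass `stop_strings` to `generate` (together with the tokenizer) so decoding halts at the closing label tag. A minimal sketch, using the tag names described above:

```python
import re

def parse_sentiment(generated: str) -> str | None:
    """Pull the label out of a <sentiment>...</sentiment> block, if present."""
    m = re.search(r"<sentiment>\s*(positive|negative|neutral)\s*</sentiment>", generated)
    return m.group(1) if m else None

# Stop as soon as the closing label tag appears to save tokens
# (stop_strings requires passing the tokenizer to generate).
out = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=False,
    stop_strings=["</sentiment>"],
    tokenizer=tok,
)
text = tok.decode(out[0][inputs.input_ids.shape[1]:], skip_special_tokens=True)
print(parse_sentiment(text))  # -> "positive"
```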
## Performance notes
The training reward (max 4.0) hit **3.71** on the
held-out validation slice. That breaks down across the four reward
functions roughly as:
- Sentiment correctness: dominant contributor; the model gets the label
right on the validation split most of the time
- Format compliance: near-saturated by the end of GRPO; the model almost
always produces well-formed `<reasoning>` and `<sentiment>` tags
- Reasoning quality: judged on length and presence of finance-relevant
signal words; this one's the noisiest of the four
- Consistency: rewards stable labels across paraphrases of the same headline
Numbers on standard finance benchmarks (FPB, FiQA, Twitter Financial News)
are forthcoming and will be added once the eval pipeline lands.
## Hardware
bf16 weights are about 3.4 GB (1.7B parameters × 2 bytes each), so plan on roughly 4 GB of VRAM for batch=1 inference once the KV cache and activations are included. CPU works but is slower; the Q4_K_M GGUF is the right pick if you don't have a GPU.
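If even 4 GB is tight but you'd rather stay in `transformers` than switch to the GGUF, 4-bit loading through bitsandbytes cuts the weight footprint to roughly 1 GB. This is the standard bitsandbytes path, not something validated against this model's accuracy:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Load the published bf16 weights in 4-bit (NF4) to reduce VRAM use.
model_id = "Ayansk11/FinSenti-Qwen3-1.7B"
bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb, device_map="auto"
)
```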
## Limitations
A few things this model isn't built for:
- **Long documents.** Training context was capped at 2048
tokens. Anything much longer than a few paragraphs is out of distribution.
- **Multi-asset reasoning.** It classifies the sentiment of a single piece
of text. It won't aggregate across multiple headlines or weigh sources.
- **Numerical reasoning.** It can read "beats by 12%" and call that
positive, but it isn't doing math. Don't ask it to forecast.
- **Languages other than English.** Training data was English only.
- **Background knowledge.** If the headline needs you to know what a
company does, the model only has whatever was in its base pretraining.
It can't look anything up.
- **Three labels, hard cutoffs.** The output space is positive / negative /
neutral. If you need a 5-class scale or a continuous score, you'll need
to retrain or post-process.
## Training details
| Setting | Value |
|---|---|
| Upstream base model | [Qwen/Qwen3-1.7B](https://huggingface.co/Qwen/Qwen3-1.7B) |
| Loading mirror | [unsloth/Qwen3-1.7B](https://huggingface.co/unsloth/Qwen3-1.7B) (Unsloth's mirror of the base weights) |
| Dataset | [Ayansk11/FinSenti-Dataset](https://huggingface.co/datasets/Ayansk11/FinSenti-Dataset) (~15.2K train per stage, 50.8K total across splits) |
| SFT length | ~0.8 hours on A100 80GB |
| GRPO budget | 3000 steps with early stopping (best near step ~300) |
| Best GRPO reward | ~3.71 / 4.0 |
| Adapter | LoRA (r=32, alpha=64) on q/k/v/o/gate/up/down projections |
| Sequence length | 2048 |
| Optimizer | AdamW (8-bit), cosine LR schedule |
| Hardware | NVIDIA A100 80GB (Indiana University BigRed200 cluster) |
| Frameworks | Unsloth + TRL |
## Related FinSenti models
Other sizes and bases trained with the same recipe:
- **Qwen3**: [Qwen3-0.6B](https://huggingface.co/Ayansk11/FinSenti-Qwen3-0.6B), [Qwen3-4B](https://huggingface.co/Ayansk11/FinSenti-Qwen3-4B), [Qwen3-8B](https://huggingface.co/Ayansk11/FinSenti-Qwen3-8B)
- **Qwen3.5**: [Qwen3.5-0.8B](https://huggingface.co/Ayansk11/FinSenti-Qwen3.5-0.8B), [Qwen3.5-2B](https://huggingface.co/Ayansk11/FinSenti-Qwen3.5-2B), [Qwen3.5-4B](https://huggingface.co/Ayansk11/FinSenti-Qwen3.5-4B), [Qwen3.5-9B](https://huggingface.co/Ayansk11/FinSenti-Qwen3.5-9B)
- **DeepSeek**: [DeepSeek-R1-1.5B](https://huggingface.co/Ayansk11/FinSenti-DeepSeek-R1-1.5B)
- **MobileLLM**: [MobileLLM-R1-950M](https://huggingface.co/Ayansk11/FinSenti-MobileLLM-R1-950M)
- **Tiny-LLM**: [Tiny-LLM-10M](https://huggingface.co/Ayansk11/FinSenti-Tiny-LLM-10M)
- **Llama-3**: [Llama-3.2-1B](https://huggingface.co/Ayansk11/FinSenti-Llama-3.2-1B)
- **SmolLM**: [SmolLM-1.7B](https://huggingface.co/Ayansk11/FinSenti-SmolLM-1.7B)
There's a GGUF build of this same model at
[Ayansk11/FinSenti-Qwen3-1.7B-GGUF](https://huggingface.co/Ayansk11/FinSenti-Qwen3-1.7B-GGUF) for Ollama and
llama.cpp, and the dataset itself is at
[Ayansk11/FinSenti-Dataset](https://huggingface.co/datasets/Ayansk11/FinSenti-Dataset).
If you're picking a size, a rough guide:
- **Need it on a phone or browser?** Look at the smallest model in the
group (Qwen3-0.6B) or its GGUF.
- **Laptop with no GPU?** Any model up to ~2B as Q4_K_M GGUF works.
- **Single 8-12 GB GPU?** The 1.5B-4B sizes are the sweet spot.
- **Server or workstation?** The 8B / 9B variants give the best reasoning
but need the memory.
## Citation
If you use this model in research, please cite:
```bibtex
@misc{shaikh2026finsenti,
title = {FinSenti: Small Language Models for Financial Sentiment with Chain-of-Thought Reasoning},
author = {Shaikh, Ayan},
year = {2026},
url = {https://huggingface.co/collections/Ayansk11/finsenti},
note = {Indiana University}
}
```
## License
Apache 2.0, same as the base model.
## Acknowledgements
Trained on the Indiana University BigRed200 cluster.
Thanks to the Unsloth and TRL teams for the trainer stack, and to the
Qwen / DeepSeek teams for the base models.