| library_name | license | base_model | pipeline_tag | language | tags | datasets |
|---|---|---|---|---|---|---|
| transformers | apache-2.0 | Qwen/Qwen3-0.6B | text-generation | bn, en | | |
# GanitLLM-0.6B_SFT

## Highlights
GanitLLM-0.6B_SFT is our smallest Bengali mathematical reasoning model, trained with Supervised Fine-Tuning (SFT) on the GANIT dataset, making it ideal for resource-constrained deployments. Key improvements over the base Qwen3-0.6B model:
- +20.00 accuracy on Bn-MGSM benchmark (8.40 → 28.40)
- +39.20 accuracy on Bn-MSVAMP benchmark (12.20 → 51.40)
- 88.60% of generated reasoning in Bengali (vs. 12.43% for the base model)
- 79.2% fewer words in generated solutions (1265 → 263 words on average)
> **Note:** This is the SFT-only checkpoint. For best results, use the RL-enhanced versions: GanitLLM-0.6B_SFT_CGRPO or GanitLLM-0.6B_SFT_GRPO.
## Model Overview
| Property | Value |
|---|---|
| Model Type | Causal Language Model |
| Base Model | Qwen/Qwen3-0.6B |
| Parameters | 0.6B |
| Training | Supervised Fine-Tuning |
| Context Length | 4,096 tokens |
| Language | Bengali, English |
## Training Details
This model was trained with a single-stage pipeline:
- Supervised Fine-Tuning (SFT): Trained on GANIT-SFT (~11k examples) to ground reasoning in Bengali
### Training Data
- Dataset: GANIT-SFT (11,023 examples)
- Format: Bengali math problems with chain-of-thought reasoning
- Structure: `<think>` tags for reasoning, `<answer>` tags for the final answer (see the illustrative example below)
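For illustration only, a single training example in this format might look like the sketch below; the field names are assumptions, not the dataset's actual schema.

```python
# Illustrative GANIT-SFT example. The field names ("question", "response")
# are assumptions for illustration, not the dataset's actual schema.
example = {
    # "A shop has 12 apples. If 5 apples are sold, how many apples remain?"
    "question": "একটি দোকানে ১২টি আপেল আছে। যদি ৫টি আপেল বিক্রি হয়, তাহলে কতটি আপেল বাকি থাকবে?",
    # Bengali chain-of-thought inside <think>, final answer inside <answer>:
    # "<think>The shop has 12 apples and 5 were sold,
    #  so 12 - 5 = 7 apples remain.</think><answer>7</answer>"
    "response": (
        "<think>দোকানে ১২টি আপেল আছে এবং ৫টি বিক্রি হয়েছে, "
        "তাই বাকি থাকে ১২ - ৫ = ৭টি আপেল।</think>"
        "<answer>৭</answer>"
    ),
}
```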
## Quickstart
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "dipta007/GanitLLM-0.6B_SFT"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto",
)

# "A shop has 12 apples. If 5 apples are sold, how many apples remain?"
problem = "একটি দোকানে ১২টি আপেল আছে। যদি ৫টি আপেল বিক্রি হয়, তাহলে কতটি আপেল বাকি থাকবে?"

prompt = f"""A conversation takes place between the user and the assistant. The user asks a question, and the assistant solves the problem. Please reason step by step in Bengali, and put your final answer in the <answer> </answer> tags.
Question: {problem}"""

messages = [{"role": "user", "content": prompt}]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(**model_inputs, max_new_tokens=2048, temperature=0.7)

# Strip the prompt tokens and decode only the newly generated text.
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()
response = tokenizer.decode(output_ids, skip_special_tokens=True)
print(response)
```
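Because the model wraps its final answer in `<answer>` tags (as requested in the prompt above), the result can be pulled out with a small regex; a minimal sketch:

```python
import re

# Extract the text inside the <answer> ... </answer> tags.
match = re.search(r"<answer>(.*?)</answer>", response, re.DOTALL)
if match:
    print(match.group(1).strip())  # e.g. ৭
```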
### Using vLLM
```shell
vllm serve dipta007/GanitLLM-0.6B_SFT --max-model-len 4096
```
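`vllm serve` exposes an OpenAI-compatible API, so any OpenAI client can query the model; a minimal sketch, assuming the default endpoint `http://localhost:8000/v1`:

```python
# Minimal client sketch against the OpenAI-compatible endpoint that
# `vllm serve` exposes (assumes the default http://localhost:8000/v1).
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
completion = client.chat.completions.create(
    model="dipta007/GanitLLM-0.6B_SFT",
    messages=[{"role": "user", "content": "Question: ২ + ২ = কত?"}],  # "2 + 2 = ?"
    max_tokens=2048,
    temperature=0.7,
)
print(completion.choices[0].message.content)
```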
## Performance
| Model | Bn-MGSM | Bn-MSVAMP | Avg. Words | Bengali % |
|---|---|---|---|---|
| Qwen3-0.6B (base) | 8.40 | 12.20 | 1265 | 12.43% |
| GanitLLM-0.6B_SFT | 28.40 | 51.40 | 263 | 88.60% |
## Related Models
| Model | Parameters | Training | Link |
|---|---|---|---|
| GanitLLM-0.6B_SFT_CGRPO | 0.6B | SFT + CGRPO | Link |
| GanitLLM-0.6B_SFT_GRPO | 0.6B | SFT + GRPO | Link |
| GanitLLM-0.6B_SFT (this model) | 0.6B | SFT | Link |
| GanitLLM-0.6B_CGRPO | 0.6B | CGRPO | Link |
## Citation
```bibtex
@inproceedings{dipta2026ganitllm,
  title={GanitLLM: Difficulty-Aware Bengali Mathematical Reasoning through Curriculum-GRPO},
  author={Shubhashis Roy Dipta and Khairul Mahbub and Nadia Najjar},
  booktitle={Findings of the Association for Computational Linguistics: ACL 2026},
  year={2026},
  eprint={2601.06767},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2601.06767},
}
```
## License
This model is released under the Apache 2.0 License.