---
language:
- es
- fr
- en
license: apache-2.0
base_model: unsloth/Qwen2.5-7B-Instruct
tags:
- unsloth
- trl
- lora
- reasoning
- chain-of-thought
- multilingual
- instruction-tuned
- qwen
model-index:
- name: Qwen2.5-7B-Thinking-Spanish-French
  results: []
pipeline_tag: text-generation
---

# 🧠 Qwen2.5-7B-Thinking-Spanish-French (LoRA)

A lightweight, reasoning-enhanced multilingual model fine-tuned for step-by-step thinking in Spanish and French, built on top of Qwen2.5-7B-Instruct using LoRA.


## 🚀 Overview

This model enhances the reasoning capabilities of the base model by encouraging structured "thinking" before answering. It is optimized for:

  • 🇪🇸 Spanish reasoning tasks
  • 🇫🇷 French reasoning tasks
  • 🧠 Step-by-step logical explanations
  • 💬 Instruction-following with personality

The fine-tuning process leverages curated multilingual reasoning datasets to improve coherence, clarity, and depth in responses.


## 🏗️ Model Details

| Component | Description |
|---|---|
| Base Model | Qwen2.5-7B-Instruct |
| Fine-tuning | LoRA (Low-Rank Adaptation) via Unsloth |
| Dataset | HuggingFaceH4/Multilingual-Thinking (Spanish & French filtered) |
| Quantization | 4-bit (bitsandbytes) |
| Max Sequence Length | 512 tokens |
| Framework | TRL + Unsloth |

## 🎯 Capabilities

  • Generates chain-of-thought reasoning
  • Produces structured, step-by-step answers
  • Handles multilingual prompts (ES/FR/EN)
  • Maintains engaging and expressive tone
  • Efficient inference with low VRAM usage

## ⚠️ Limitations

  • Context limited to 512 tokens → long reasoning chains may be truncated
  • Performance may degrade for:
    • highly technical domains (e.g., legal/medical)
    • languages outside ES/FR/EN
  • Chain-of-thought is a learned behavior → it may not be triggered consistently on every prompt

## 📦 How to Use

### 🔹 Load with Unsloth

```python
from unsloth import FastLanguageModel
import torch

# Load the merged/adapter model in 4-bit with the same max
# sequence length used during fine-tuning (512 tokens)
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "sarimahsan101/qwen2.5-7b-thinking-esp",
    max_seq_length = 512,
    load_in_4bit = True,
)

# Switch Unsloth into fast inference mode
FastLanguageModel.for_inference(model)
```
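Once loaded, prompts should follow Qwen2.5's ChatML-style chat format. The sketch below builds that format by hand so you can see its structure; in practice you would call `tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)` instead, and the Spanish system prompt shown here is illustrative, not taken from the training data.

```python
# Minimal sketch of the ChatML-style format used by Qwen2.5 models.
# In real code, prefer tokenizer.apply_chat_template over hand-rolling this.
def build_chatml_prompt(messages):
    parts = [
        f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>"
        for m in messages
    ]
    # Open an assistant turn so the model continues with its answer
    parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)

# Illustrative system prompt (assumption, not from the dataset)
messages = [
    {"role": "system", "content": "Piensa paso a paso antes de responder."},
    {"role": "user", "content": "¿Cuánto es 17 x 23?"},
]
prompt = build_chatml_prompt(messages)
```

You can then tokenize `prompt` (e.g. `tokenizer(prompt, return_tensors="pt")`) and pass the result to `model.generate(...)` to sample a step-by-step answer.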