ModelHub XC 798ca60934 initial project setup; model provided by the ModelHub XC community
Model: Mushari440/Qwen3-8B-SFT-v2
Source: Original Platform
2026-04-22 11:19:10 +08:00

library_name: transformers
tags: arabic, sft, qwen

Qwen3-8B-SFT

Model Details

  • Developed by: Mushari Alothman
  • Model type: Causal Language Model
  • Language(s): Arabic, English
  • License: Apache 2.0
  • Finetuned from: Qwen3-8B-Base

This is a supervised fine-tuned (SFT) version of Qwen3-8B, trained on clean, curated supervision data to improve accuracy on Arabic and English tasks.

Intended Uses

Direct Use

  • Arabic & English MCQ answering
  • Context-based QA / RAG
  • General instruction following
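For the MCQ use case, prompts are typically assembled from the question and lettered options. The exact template this model was trained on is not documented in the card, so the layout below is only an illustrative assumption:

```python
# Build a simple Arabic MCQ prompt. The exact template used during
# fine-tuning is not documented, so this layout is an illustrative guess.
def build_mcq_prompt(question: str, options: list[str]) -> str:
    letters = ["A", "B", "C", "D", "E"]
    lines = [f"سؤال: {question}"]  # "Question: ..."
    for letter, option in zip(letters, options):
        lines.append(f"{letter}. {option}")
    lines.append("الإجابة:")  # "Answer:"
    return "\n".join(lines)

prompt = build_mcq_prompt(
    "ما عاصمة السعودية؟",          # "What is the capital of Saudi Arabia?"
    ["جدة", "الرياض", "مكة", "الدمام"],
)
print(prompt)
```

The resulting string can be passed directly to the tokenizer as shown in the How to Use section.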

Out-of-Scope Use

  • Safety-critical or real-time decision making
  • Use cases that require guaranteed factual accuracy without independent verification

Training Summary

  • Training type: Supervised Fine-Tuning (SFT)
  • Precision: bf16 mixed precision
  • Data: Curated Arabic & English datasets including:
    • MCQ
    • QA / RAG / context understanding
    • General instruction data
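SFT data of the kinds listed above is commonly stored as chat-style message lists. A minimal sketch of what such records might look like (the field names and structure are assumptions, not the authors' published schema):

```python
# Illustrative SFT training records in the common "messages" schema
# (assumed; the actual dataset format is not published with this card).
mcq_example = {
    "messages": [
        {"role": "user", "content": "ما عاصمة السعودية؟\nA. جدة\nB. الرياض"},
        {"role": "assistant", "content": "B. الرياض"},
    ]
}

rag_example = {
    "messages": [
        {"role": "user", "content": "السياق: ...\nالسؤال: ..."},  # context + question
        {"role": "assistant", "content": "..."},
    ]
}

# Each record alternates user/assistant turns, a shape that most SFT
# trainers can consume directly via the model's chat template.
for record in (mcq_example, rag_example):
    roles = [m["role"] for m in record["messages"]]
    assert roles == ["user", "assistant"]
```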

How to Use

from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the fine-tuned checkpoint from the hub
tokenizer = AutoTokenizer.from_pretrained("Mushari440/Qwen3-8B-SFT-v2")
model = AutoModelForCausalLM.from_pretrained("Mushari440/Qwen3-8B-SFT-v2")

# "Question: What is the capital of Saudi Arabia?"
inputs = tokenizer("سؤال: ما عاصمة السعودية؟", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
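For multi-turn or instruction-style use, Qwen-family models follow the ChatML turn format. In practice `tokenizer.apply_chat_template` builds this for you; the sketch below only shows what the underlying layout roughly looks like, and is not guaranteed to match this checkpoint's exact template:

```python
# Sketch of the ChatML layout used by Qwen-family chat templates.
# In real code, prefer tokenizer.apply_chat_template(messages, ...).
def chatml_format(messages: list[dict]) -> str:
    parts = []
    for m in messages:
        parts.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n")
    parts.append("<|im_start|>assistant\n")  # generation prompt for the reply
    return "".join(parts)

text = chatml_format([{"role": "user", "content": "ما عاصمة السعودية؟"}])
print(text)
```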