---
license: apache-2.0
language:
- en
tags:
- text-generation-inference
- transformers
- smolify
- dslm
pipeline_tag: text-generation
inference:
  parameters:
    temperature: 1
    top_p: 0.95
    top_k: 64
---

🤏 smolified-bengali-physics-teacher

Intelligence, Distilled.

This is a Domain-Specific Language Model (DSLM) generated by the Smolify Foundry.

It has been synthetically distilled from SOTA reasoning engines into a high-efficiency architecture, optimized for deployment on edge hardware (CPU/NPU) or low-VRAM environments.

📦 Asset Details

  • Origin: Smolify Foundry (Job ID: f26e2704)
  • Architecture: gemma-3-270m
  • Training Method: Proprietary Neural Distillation
  • Optimization: 4-bit Quantized / FP16 Mixed
  • Dataset: Link to Dataset

🚀 Usage (Inference)

This model is compatible with standard inference backends such as vLLM and Hugging Face Transformers.

```python
# Example: Running your Sovereign Model
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

model_id = "RohanSardar/smolified-bengali-physics-teacher"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [
    {"role": "system", "content": "You are a physics teacher who responds to questions asked in Bengali but using the English alphabet."},
    {"role": "user", "content": "Gravity ki ar er kaaj ki?"},
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)
# The Gemma chat template prepends <bos>; strip it here so the
# tokenizer call below does not add a second one.
text = text.removeprefix("<bos>")

_ = model.generate(
    **tokenizer(text, return_tensors="pt").to(model.device),
    max_new_tokens=1000,
    temperature=1.0, top_p=0.95, top_k=64,
    streamer=TextStreamer(tokenizer, skip_prompt=True),
)
```
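The `temperature`, `top_p`, and `top_k` arguments passed to `generate` control how each next token is drawn. As a rough, framework-free illustration of the filtering they perform (a simplified sketch, not the Transformers implementation):

```python
import math

def filter_logits(logits, temperature=1.0, top_k=64, top_p=0.95):
    """Return the (token_index, probability) pairs that survive top-k and
    top-p (nucleus) filtering after temperature scaling.
    Simplified sketch; real backends operate on tensors."""
    # Temperature scaling: higher temperature flattens the distribution.
    scaled = [l / temperature for l in logits]
    # Softmax to probabilities (max-subtraction for numerical stability).
    m = max(scaled)
    exps = [math.exp(l - m) for l in scaled]
    total = sum(exps)
    probs = sorted(((i, e / total) for i, e in enumerate(exps)),
                   key=lambda x: x[1], reverse=True)
    # Top-k: keep only the k most probable tokens.
    probs = probs[:top_k]
    # Top-p: keep the smallest prefix whose cumulative mass reaches top_p.
    kept, cum = [], 0.0
    for i, p in probs:
        kept.append((i, p))
        cum += p
        if cum >= top_p:
            break
    return kept

# One dominant logit means nucleus sampling keeps very few candidates.
kept = filter_logits([1.0, 2.0, 8.0, 0.5])
print(len(kept))  # → 1
```

With `temperature=1.0` the logits pass through unchanged; lowering the temperature would concentrate mass on the top token even further, while raising it spreads mass out and lets more candidates survive the `top_p` cutoff.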

⚖️ License & Ownership

These model weights are a sovereign asset owned by RohanSardar. Generated via Smolify.ai.
