Model: Rahidul2006/smolified-recipe-ingridient-extractar
Source: Original Platform
2026-05-04 12:05:45 +08:00
---
license: apache-2.0
language:
- en
tags:
- text-generation-inference
- transformers
- smolify
- dslm
pipeline_tag: text-generation
inference:
  parameters:
    temperature: 1
    top_p: 0.95
    top_k: 64
---

# 🤏 smolified-recipe-ingridient-extractar

Intelligence, Distilled.

This is a Domain-Specific Language Model (DSLM) generated by the Smolify Foundry.

It has been synthetically distilled from state-of-the-art (SOTA) reasoning engines into a high-efficiency architecture, optimized for deployment on edge hardware (CPU/NPU) or in low-VRAM environments.

## 📦 Asset Details

  • Origin: Smolify Foundry (Job ID: 5f19bec6)
  • Architecture: gemma-3-270m
  • Training Method: Proprietary Neural Distillation
  • Optimization: 4-bit Quantized / FP16 Mixed
  • Dataset: Link to Dataset

## 🚀 Usage (Inference)

This model is compatible with standard inference backends such as vLLM and Hugging Face Transformers.

```python
# Example: Running your Sovereign Model
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

model_id = "Rahidul2006/smolified-recipe-ingridient-extractar"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [
    {
        "role": "system",
        "content": "Extract ingredients from Indian recipe instructions into an alphabetical Python list.",
    },
    {
        "role": "user",
        "content": (
            "Prepare Baingan Bharta by roasting a large eggplant directly over an "
            "open flame until the skin is charred. Peel the skin and mash the pulp. "
            "In a kadhai, heat mustard oil, add cumin seeds, and sauté chopped "
            "onions, ginger, and green chilies. Mix in diced tomatoes and cook "
            "until soft. Add the mashed eggplant, red chili powder, turmeric, and "
            "salt. Cook for 5 minutes and stir in fresh coriander."
        ),
    },
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)
# The Gemma chat template prepends <bos>; the tokenizer call below adds it
# again during encoding, so strip the duplicate here.
text = text.removeprefix("<bos>")

_ = model.generate(
    **tokenizer(text, return_tensors="pt").to(model.device),
    max_new_tokens=1000,
    do_sample=True,  # temperature/top_p/top_k only take effect when sampling
    temperature=1.0,
    top_p=0.95,
    top_k=64,
    streamer=TextStreamer(tokenizer, skip_prompt=True),
)
```
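Since the system prompt asks the model to return its answer as an alphabetical Python list, the generated text can be parsed back into a native list for downstream use. A minimal sketch, assuming the output contains a single well-formed list literal; the helper name `parse_ingredient_list` and the sample string are illustrative, not actual model output:

```python
import ast

def parse_ingredient_list(generated_text: str) -> list[str]:
    """Pull the first Python-style list literal out of model output."""
    start = generated_text.find("[")
    end = generated_text.find("]", start)
    if start == -1 or end == -1:
        raise ValueError("no list literal found in model output")
    # literal_eval safely evaluates the list literal without running code
    return ast.literal_eval(generated_text[start:end + 1])

# Hypothetical completion for the Baingan Bharta prompt above.
sample = "['coriander', 'cumin seeds', 'eggplant', 'green chilies']"
print(parse_ingredient_list(sample))
# → ['coriander', 'cumin seeds', 'eggplant', 'green chilies']
```

Using `ast.literal_eval` rather than `eval` keeps the parse safe even if the model emits unexpected text around the list.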

## ⚖️ License & Ownership

These model weights are a sovereign asset owned by Rahidul2006. Generated via Smolify.ai.