---
license: apache-2.0
language:
- en
tags:
- text-generation-inference
- transformers
- smolify
- dslm
pipeline_tag: text-generation
inference:
  parameters:
    temperature: 1
    top_p: 0.95
    top_k: 64
---

# 🤏 smolified-course-selector

> **Intelligence, Distilled.**

This is a **Domain Specific Language Model (DSLM)** generated by the **Smolify Foundry**.

It has been synthetically distilled from SOTA reasoning engines into a high-efficiency architecture, optimized for deployment on edge hardware (CPU/NPU) or low-VRAM environments.

## 📦 Asset Details

- **Origin:** Smolify Foundry (Job ID: `b279efb3`)
- **Architecture:** gemma-3-270m
- **Training Method:** Proprietary Neural Distillation
- **Optimization:** 4-bit Quantized / FP16 Mixed
- **Dataset:** [Link to Dataset](https://huggingface.co/datasets/Rudraksh2004/smolified-course-selector)

## 🚀 Usage (Inference)

This model is compatible with standard inference backends such as vLLM and Hugging Face Transformers.

```python
# Example: Running your Sovereign Model
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

model_id = "Rudraksh2004/smolified-course-selector"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [
    {"role": "system", "content": "You are an educational assistant that helps students explore university paths and associated scholarship opportunities based on their interests."},
    {"role": "user", "content": "I am fascinated by aircraft and national defense strategy. Are there academic paths for this?"},
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)
# The Gemma chat template already prepends <bos>; strip it so the tokenizer
# call below does not add a second one.
text = text.removeprefix("<bos>")

_ = model.generate(
    **tokenizer(text, return_tensors="pt").to(model.device),
    max_new_tokens=1000,
    do_sample=True,
    temperature=1.0, top_p=0.95, top_k=64,
    streamer=TextStreamer(tokenizer, skip_prompt=True),
)
```

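For vLLM, a minimal offline-inference sketch along these lines should work (assumes the `vllm` package is installed; the sampling values mirror the card's inference parameters, and the helper name is hypothetical):

```python
# Hypothetical helper: serve the same checkpoint through vLLM's offline API.
# The vllm import is deferred so the rest of this file runs without it.
def generate_with_vllm(prompt: str,
                       model_id: str = "Rudraksh2004/smolified-course-selector") -> str:
    from vllm import LLM, SamplingParams

    llm = LLM(model=model_id)
    # Sampling values mirror the card's inference parameters.
    params = SamplingParams(temperature=1.0, top_p=0.95, top_k=64, max_tokens=1000)
    outputs = llm.generate([prompt], params)
    return outputs[0].outputs[0].text
```

For chat-formatted prompts, apply the tokenizer's chat template first (as in the Transformers example above) and pass the rendered string as `prompt`.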
## ⚖️ License & Ownership

These model weights are a sovereign asset owned by **Rudraksh2004**.

Generated via [Smolify.ai](https://smolify.ai).

[<img src="https://smolify.ai/smolify.gif" width="100"/>](https://smolify.ai)