---
license: apache-2.0
language:
- en
tags:
- text-generation-inference
- transformers
- smolify
- dslm
pipeline_tag: text-generation
inference:
  parameters:
    temperature: 1
    top_p: 0.95
    top_k: 64
---

# 🤏 smolified-bengali-physics-teacher

> **Intelligence, Distilled.**

This is a **Domain Specific Language Model (DSLM)** generated by the **Smolify Foundry**.

It has been synthetically distilled from SOTA reasoning engines into a high-efficiency architecture, optimized for deployment on edge hardware (CPU/NPU) or low-VRAM environments.

## 📦 Asset Details

- **Origin:** Smolify Foundry (Job ID: `f26e2704`)
- **Architecture:** gemma-3-270m
- **Training Method:** Proprietary Neural Distillation
- **Optimization:** 4-bit Quantized / FP16 Mixed
- **Dataset:** [Link to Dataset](https://huggingface.co/datasets/RohanSardar/smolified-bengali-physics-teacher)

## 🚀 Usage (Inference)

This model is compatible with standard inference backends such as vLLM and Hugging Face Transformers.

```python
# Example: Running your Sovereign Model
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

model_id = "RohanSardar/smolified-bengali-physics-teacher"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [
    {"role": "system", "content": "You are a physics teacher who responds to questions asked in bengali but using english alphabet."},
    {"role": "user", "content": "Gravity ki ar er kaaj ki?"},
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)
# Gemma's chat template emits a leading <bos>; strip it here so the
# tokenizer call below does not add a second one.
text = text.removeprefix("<bos>")

_ = model.generate(
    **tokenizer(text, return_tensors="pt").to(model.device),
    max_new_tokens=1000,
    temperature=1.0,
    top_p=0.95,
    top_k=64,
    streamer=TextStreamer(tokenizer, skip_prompt=True),
)
```
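
The `generate` call above samples with `temperature=1.0`, `top_p=0.95`, and `top_k=64`. As an illustrative, self-contained sketch (not the actual Transformers implementation), here is how those three filters combine on a toy next-token distribution:

```python
import math

def sample_filter(logits, temperature=1.0, top_k=64, top_p=0.95):
    """Return the renormalized distribution after temperature scaling,
    top-k truncation, and top-p (nucleus) truncation. Illustrative only."""
    # Temperature scaling, then a numerically stable softmax.
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]

    # Sort token indices by probability, descending.
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)

    # Top-k: keep at most k candidates.
    order = order[:top_k]

    # Top-p: keep the smallest prefix whose cumulative mass reaches top_p.
    kept, cum = [], 0.0
    for i in order:
        kept.append(i)
        cum += probs[i]
        if cum >= top_p:
            break

    # Renormalize over the surviving tokens.
    mass = sum(probs[i] for i in kept)
    return {i: probs[i] / mass for i in kept}

dist = sample_filter([2.0, 1.0, 0.5, -1.0], temperature=1.0, top_k=3, top_p=0.95)
```

Lower `temperature` sharpens the distribution before truncation, while `top_k` and `top_p` each cut the candidate set; sampling then draws from the renormalized survivors.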
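
If you want to inspect what `apply_chat_template` produces without loading the tokenizer, the prompt can be approximated by hand. The turn markers below are an assumption based on the published Gemma chat format (which has no dedicated system role, so the system text is folded into the first user turn); verify against this model's own `tokenizer_config.json` before relying on it:

```python
def gemma_prompt(system, user):
    """Approximate a single-turn Gemma-style chat prompt.
    Assumption: the system message is prepended to the first user
    message, since the Gemma format has no separate system role.
    The leading <bos> is omitted, matching the usage example above."""
    merged = f"{system}\n\n{user}" if system else user
    return (
        "<start_of_turn>user\n"
        f"{merged}<end_of_turn>\n"
        "<start_of_turn>model\n"
    )

text = gemma_prompt(
    "You are a physics teacher who responds to questions asked in bengali but using english alphabet.",
    "Gravity ki ar er kaaj ki?",
)
```

The trailing `<start_of_turn>model\n` plays the role of `add_generation_prompt=True`: it cues the model to begin its reply.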

## ⚖️ License & Ownership

These model weights are a sovereign asset owned by **RohanSardar**.

Generated via [Smolify.ai](https://smolify.ai).

[<img src="https://smolify.ai/smolify.gif" width="100"/>](https://smolify.ai)