---
base_model: HuggingFaceTB/SmolLM2-360M
tags:
- transformers
- unsloth
- llama
license: apache-2.0
language:
- de
datasets:
- wikimedia/wikipedia
- FreedomIntelligence/alpaca-gpt4-deutsch
---

# SmolLM2-360m-German-Instruct

<p align="center">
  <img alt="Showcase image for SmolLM2-360m-German-Instruct" src="https://huggingface.co/mags0ft/SmolLM2-360m-German-Instruct/resolve/main/showcase-image.png" width="600" />
</p>

This model is a continued pre-train and subsequent instruction fine-tune of SmolLM2-360M, created using Unsloth, to make the model capable of speaking German.
It was trained on 15% of the German Wikipedia as well as the full translated German version of the Alpaca-GPT4 dataset.

Even though a lot of training has gone into it, this is still a tiny model and remains highly limited by its small size. Expect frequent hallucinations and do not use it in a demanding production workflow.

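# Usage

The snippet below is a minimal inference sketch using the 🤗 Transformers library. It assumes the tokenizer ships a chat template, as is standard for SmolLM2 instruct variants; the example prompt and generation parameters are illustrative only, not part of the original card.

```python
# Minimal inference sketch for SmolLM2-360m-German-Instruct.
# Assumption: the tokenizer provides a chat template (standard for
# SmolLM2 instruct variants); tune generation parameters to taste.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mags0ft/SmolLM2-360m-German-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Example German prompt: "What is the capital of Germany?"
messages = [{"role": "user", "content": "Was ist die Hauptstadt von Deutschland?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)

output_ids = model.generate(
    input_ids, max_new_tokens=128, do_sample=True, temperature=0.7
)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```
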
# Links

- [Ollama](https://ollama.com/jace-ai/SmolLM2-German-Instruct)
- [Technical report paper](https://huggingface.co/mags0ft/SmolLM2-360m-German-Instruct/resolve/main/Enhancing%20Foreign%20Language%20Proficiency%20in%20SmolLM2-360M%20via%20Continued%20Pretraining%20and%20Instruction%20Fine-Tuning.pdf)

# Cite as

```bibtex
@misc{smollm2germaninstruct,
  author = {Magnus Leonard Schlinsog},
  title  = {Enhancing Foreign Language Proficiency in SmolLM2-360M via Continued Pretraining and Instruction Fine-Tuning},
  year   = {2025},
  url    = {https://huggingface.co/mags0ft/SmolLM2-360m-German-Instruct},
}
```