From 567425fe3082769b9530ae4452b69995a0e3761d Mon Sep 17 00:00:00 2001
From: Magnus
Date: Sat, 19 Jul 2025 07:11:22 +0000
Subject: [PATCH] Unsloth Model Card

---
 README.md | 25 ++++++++++++++-----------
 1 file changed, 14 insertions(+), 11 deletions(-)

diff --git a/README.md b/README.md
index 890812c..68e0db5 100644
--- a/README.md
+++ b/README.md
@@ -1,18 +1,21 @@
 ---
+base_model: HuggingFaceTB/SmolLM2-360M
+tags:
+- text-generation-inference
+- transformers
+- unsloth
+- llama
 license: apache-2.0
-datasets:
-- wikimedia/wikipedia
-- FreedomIntelligence/alpaca-gpt4-deutsch
 language:
-- de
-base_model:
-- HuggingFaceTB/SmolLM2-360M
-library_name: transformers
+- en
 ---
 
-# SmolLM2-360m-German-Instruct
+# Uploaded finetuned model
 
-This is a continued pre-train as well as an instruct fine-tune done using Unsloth in order to make SmolLM2 360m capable of speaking German.
-It has been trained on 15% of the German Wikipedia as well as the full German version of the Alpaca-GPT4 dataset (translated version).
+- **Developed by:** mags0ft
+- **License:** apache-2.0
+- **Finetuned from model:** HuggingFaceTB/SmolLM2-360M
 
-Even though a lot of training has been done, this is still a tiny model and is highly limited by its small size. Expect many hallucinations and do not use this in a demanding production workflow.
\ No newline at end of file
+This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
+
+[](https://github.com/unslothai/unsloth)