Add a more accurate number for the Wikipedia dataset size used
@@ -13,6 +13,6 @@ library_name: transformers
 # SmolLM2-360m-German-Instruct
 
 This is a continued pre-train as well as an instruct fine-tune done using Unsloth in order to make SmolLM2 360m capable of speaking German.
-It has been trained on 10% of the German Wikipedia as well as the full German version of the Alpaca-GPT4 dataset (translated version).
+It has been trained on 15% of the German Wikipedia as well as the full German version of the Alpaca-GPT4 dataset (translated version).
 
 Even though a lot of training has been done, this is still a tiny model and is highly limited by its small size. Expect many hallucinations and do not use this in a demanding production workflow.
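For reference, below is a minimal usage sketch with `transformers` (the library named in the model card metadata). The repo id is a placeholder, not the model's actual Hugging Face path, and it assumes this fine-tune keeps SmolLM2's standard chat template; the generation settings are illustrative only.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder repo id -- substitute the model's actual Hugging Face path.
model_id = "your-username/SmolLM2-360m-German-Instruct"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Ask a short question in German ("Briefly explain what a language model is.").
# Assumes the checkpoint ships SmolLM2's chat template.
messages = [{"role": "user", "content": "Erkläre kurz, was ein Sprachmodell ist."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)

# Keep generations short; a 360m model hallucinates easily on long outputs.
outputs = model.generate(inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```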