diff --git a/README.md b/README.md
index b2e2885..c5c423e 100644
--- a/README.md
+++ b/README.md
@@ -4,6 +4,9 @@ language:
 - tr
 ---
+
+
+# Turkcell-LLM-7b-v1
 This model is an extended version of a Mistral-based Large Language Model (LLM) for Turkish. It was trained on a cleaned Turkish raw dataset containing 5 billion tokens. The training process used the DoRA method, followed by fine-tuning with the LoRA method.
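The LoRA fine-tuning step mentioned in the description can be sketched as a low-rank update applied on top of a frozen weight matrix. The sketch below is illustrative only; the dimensions, rank, and scaling factor are assumptions, not the model's actual training configuration.

```python
import numpy as np

# Minimal LoRA-style sketch: the pretrained weight W stays frozen, and only
# two small low-rank factors A (r x d_in) and B (d_out x r) are trained.
# The adapted weight is W' = W + (alpha / r) * B @ A.
rng = np.random.default_rng(0)

d_in, d_out, r, alpha = 16, 16, 4, 8  # assumed toy dimensions, not the model's
W = rng.standard_normal((d_out, d_in))     # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01  # trainable low-rank factor
B = np.zeros((d_out, r))                   # zero-init: adapter starts as a no-op

def adapted_forward(x):
    # Frozen path plus the scaled low-rank delta.
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)
# Because B is initialized to zero, the adapted output initially equals
# the base model's output; training then updates only A and B.
assert np.allclose(adapted_forward(x), W @ x)
```

Because only `A` and `B` receive gradients, the number of trainable parameters is a small fraction of the full weight matrix, which is what makes this style of fine-tuning cheap relative to full-parameter training.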