diff --git a/README.md b/README.md
index 16e2ec3..dbd48d9 100644
--- a/README.md
+++ b/README.md
@@ -9,14 +9,14 @@
 alt="Turkcell LLM" width="300"/>
 
 # Turkcell-LLM-7b-v1
 
-This model is an extended version of a Mistral-based Large Language Model (LLM) for Turkish. It was trained on a cleaned Turkish raw dataset containing 5 billion tokens. The training process involved using the DORA method followed by fine-tuning with the LORA method.
+This model is an extended version of a Mistral-based Large Language Model (LLM) for Turkish. It was trained on a cleaned Turkish raw dataset containing 5 billion tokens. The training process involved using the DORA method initially. Following this, we utilized Turkish instruction sets created from various open-source and internal resources for fine-tuning with the LORA method.
 
 ## Model Details
 
 - **Base Model**: Mistral 7B based LLM
 - **Tokenizer Extension**: Specifically extended for Turkish
 - **Training Dataset**: Cleaned Turkish raw data with 5 billion tokens
-- **Training Method**: Initially with DORA, followed by fine-tuning with LORA
+- **Training Method**: Initially with DORA, followed by fine-tuning with LORA using custom Turkish instruction sets
 
 ### DORA Configuration
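
The README change above describes the training recipe only at a high level (DoRA first, then LoRA-style instruction fine-tuning); the diff does not show the actual implementation. As a hedged illustration of what the DoRA side of that recipe means, the toy numpy sketch below implements the DoRA weight update rule — the pretrained weight is split into a per-column magnitude vector and a direction, and a LoRA-style low-rank delta `B @ A` adjusts the direction. All names here (`dora_adapted_weight`, the shapes, the init) are hypothetical and not taken from Turkcell's code.

```python
import numpy as np

def dora_adapted_weight(W0, A, B, m=None):
    """Toy DoRA update (hypothetical helper, not Turkcell's code):
    W' = m * (W0 + B @ A) / ||W0 + B @ A||, with norms taken per column."""
    if m is None:
        # Magnitude vector initialized from the column norms of the frozen weight.
        m = np.linalg.norm(W0, axis=0, keepdims=True)
    V = W0 + B @ A  # direction of the weight, updated by the low-rank delta
    return m * V / np.linalg.norm(V, axis=0, keepdims=True)

rng = np.random.default_rng(0)
d_out, d_in, r = 8, 6, 2
W0 = rng.standard_normal((d_out, d_in))   # stands in for a frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01  # LoRA-style "A" factor
B = np.zeros((d_out, r))                   # "B" starts at zero, as in LoRA init

# With B = 0 the adapted weight reproduces the pretrained weight exactly,
# so training starts from the base model's behavior.
assert np.allclose(dora_adapted_weight(W0, A, B), W0)
```

One property worth noting: because the magnitude `m` is factored out, the low-rank delta changes only the direction of each weight column, which is the distinction DoRA draws against plain LoRA.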