# Turkcell-LLM-7b-v1
This model is an extended version of a Mistral-based Large Language Model (LLM) for Turkish. It was trained on a cleaned Turkish raw dataset containing 5 billion tokens. The training process involved using the DORA method initially. Following this, we utilized Turkish instruction sets created from various open-source and internal resources for fine-tuning with the LORA method.
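
The two-stage recipe described above can be sketched with the Hugging Face `peft` library, which exposes DORA as a `use_dora` flag on `LoraConfig` (available in recent `peft` releases). This is an illustrative sketch only: the base-model id, target modules, and hyperparameter values below are placeholders, not the settings used to train this model.

```python
# Illustrative two-stage adapter setup with peft; all values are placeholders.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Assumed base checkpoint id; the actual base model may differ.
base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")

# Stage 1: DORA-style adaptation on the raw Turkish corpus
# (use_dora=True turns a LoRA adapter into a DoRA adapter).
dora_config = LoraConfig(
    r=64,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
    use_dora=True,
)
model = get_peft_model(base, dora_config)
# ... continued pre-training on the Turkish raw dataset ...

# Stage 2: merge the stage-1 weights, then fine-tune with plain LoRA
# on Turkish instruction data.
model = model.merge_and_unload()
lora_config = LoraConfig(
    r=32,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
# ... supervised fine-tuning on the instruction sets ...
```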
## Model Details
- **Base Model**: Mistral 7B based LLM
- **Tokenizer Extension**: Specifically extended for Turkish
- **Training Dataset**: Cleaned Turkish raw data with 5 billion tokens
- **Training Method**: Initially with DORA, followed by fine-tuning with LORA using custom Turkish instruction sets
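
Below is a minimal loading and generation sketch with `transformers`. The Hub repository id (`TURKCELL/Turkcell-LLM-7b-v1`) and the sample prompt are assumptions; adjust them to match the published model card.

```python
# Minimal inference sketch; repository id and prompt are placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TURKCELL/Turkcell-LLM-7b-v1"  # assumed Hub id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

prompt = "Türkiye'nin başkenti neresidir?"  # "What is the capital of Türkiye?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```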
### DORA Configuration