Initialize project; model provided by the ModelHub XC community

Model: suayptalha/Sungur-14B
Source: Original Platform
ModelHub XC
2026-04-11 16:27:54 +08:00
commit b80d2e8f4d
20 changed files with 1182 additions and 0 deletions

README.md (new file, 173 lines)

@@ -0,0 +1,173 @@
---
base_model:
- Qwen/Qwen3-14B
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- SFT
- sungur
license: apache-2.0
language:
- tr
pipeline_tag: text-generation
datasets:
- suayptalha/Sungur-Dataset
---
<img src="./Sungur.png"/>
# Sungur-14B
Sungur-14B is a Turkish-specialized large language model derived from Qwen/Qwen3-14B. The model was fine-tuned using suayptalha/Sungur-Dataset, a 41.1k-sample collection of reasoning-focused conversations spanning domains such as mathematics, medicine, and general knowledge. This dataset is entirely in Turkish and was created to enhance native Turkish reasoning ability.
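For a quick look at the training data, the dataset can be loaded with the `datasets` library. A minimal sketch; the `train` split name is an assumption based on the default Hugging Face layout:
```py
from datasets import load_dataset

# Load the Turkish reasoning SFT dataset used to fine-tune Sungur-14B.
# Assumes the default "train" split; adjust if the repo uses another layout.
dataset = load_dataset("suayptalha/Sungur-Dataset", split="train")

print(dataset)     # size and features
print(dataset[0])  # one conversation sample
```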
The training process employed 4-bit QLoRA for Supervised Fine-Tuning (SFT), enabling efficient adaptation while preserving the capabilities of the base model.
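As an illustration only (not the exact training script), a 4-bit QLoRA setup with Unsloth might look like the sketch below. The hyperparameters (sequence length, LoRA rank, alpha, target modules) are assumptions, not the values used for this model:
```py
from unsloth import FastLanguageModel

# Load the base model with a 4-bit quantized backbone (QLoRA-style).
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Qwen/Qwen3-14B",
    max_seq_length=4096,  # assumed value
    load_in_4bit=True,
)

# Attach LoRA adapters; r/lora_alpha/target_modules are illustrative defaults.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)
```
The adapters would then be trained on suayptalha/Sungur-Dataset with an SFT trainer such as TRL's `SFTTrainer`.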
Sungur-14B is designed for Turkish reasoning and text generation tasks, delivering coherent, context-aware, and logically structured responses. Through its specialized dataset and training pipeline, the model gains strong native reasoning capabilities in Turkish, making it suitable for advanced applications in analytical dialogue, education, and domain-specific problem solving.
## Loss Graph
![Loss Graph](Sungur-14B-Loss.png)
The model was trained on a single B200 GPU; training took ~3 hours.
## Usage
```py
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "suayptalha/Sungur-14B"

# Load the tokenizer and the model in bfloat16, sharded across available devices.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {"role": "user", "content": "5x + 1 = 16. x'i bul."},  # "Solve for x."
]

# Build the chat prompt; enable_thinking=True activates the reasoning mode.
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
    enable_thinking=True,
).to(model.device)

# Recommended sampling settings for thinking mode (see the note below).
outputs = model.generate(
    **inputs,
    max_new_tokens=4096,
    do_sample=True,
    temperature=0.6,
    top_p=0.95,
    top_k=20,
    min_p=0,
    repetition_penalty=1.2,
    eos_token_id=tokenizer.eos_token_id,
)

response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
"""
<think>
5x + 1 = 16 denklemini çözmek için, x'i izole etmem gerekiyor. İlk olarak, her iki taraftan 1 çıkararak 5x = 15 elde ederim. Sonra, her iki tarafı 5'e bölerim ve x = 3 bulurum.
</think>
\(5x + 1 = 16\) denklemini çözmek için, \(x\) değişkenini izole etmemiz gerekiyor. İşte adım adım çözüm:
1. **Her iki taraftan 1 çıkarın:**
\[
5x + 1 - 1 = 16 - 1
\]
\[
5x = 15
\]
2. **Her iki tarafı 5'e bölün:**
\[
\frac{5x}{5} = \frac{15}{5}
\]
\[
x = 3
\]
**Sonuç:**
\[
\boxed{3}
"""
```
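If you need the reasoning trace and the final answer separately, a simple approach is to split the decoded text on the closing tag. This assumes the model emits literal `<think>`/`</think>` markers, as in the sample output above:
```py
# Separate the reasoning trace from the final answer.
if "</think>" in response:
    thinking, answer = response.split("</think>", 1)
    thinking = thinking.split("<think>", 1)[-1].strip()
    answer = answer.strip()
else:  # non-thinking mode or no trace emitted
    thinking, answer = "", response.strip()

print("Reasoning:", thinking)
print("Answer:", answer)
```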
By default, Sungur-14B has thinking capabilities enabled, meaning the model produces an explicit reasoning trace before its answer. Setting `enable_thinking=True` explicitly, or leaving it at its default in `tokenizer.apply_chat_template`, engages thinking mode; to disable it, set `enable_thinking=False` (see the sketch after the note below).
> [!NOTE]
> For thinking mode, use `temperature=0.6`, `top_p=0.95`, `top_k=20`, `min_p=0`, and `repetition_penalty=1.2`. **DO NOT use greedy decoding**, as it can lead to performance degradation and endless repetitions.
> For non-thinking mode, use `temperature=0.7`, `top_p=0.8`, `top_k=20`, and `min_p=0`.
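For reference, a minimal non-thinking invocation reusing `model`, `tokenizer`, and `messages` from the example above, with the sampling parameters from the note:
```py
# Non-thinking mode: no reasoning trace, recommended non-thinking sampling.
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
    enable_thinking=False,  # disables the <think> ... </think> block
).to(model.device)

outputs = model.generate(
    **inputs,
    max_new_tokens=1024,
    do_sample=True,
    temperature=0.7,
    top_p=0.8,
    top_k=20,
    min_p=0,
)

# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```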
## 📊 Benchmarks
### Comparison with Base Model (via `malhajar17/lm-evaluation-harness_turkish`)
| Benchmark | Sungur-14B | Qwen3-14B |
| ------------------------ | ---------- | ---------- |
| **ARC (tr, acc)** | **0.4727** | 0.4701 |
| **ARC (tr, acc_norm)** | 0.5213 | **0.5273** |
| **GSM8K (tr, flex)** | 0.0380 | **0.0418** |
| **GSM8K (tr, strict)** | 0.7760 | **0.8185** |
| **HellaSwag (tr, acc)** | **0.4051** | 0.4017 |
| **HellaSwag (tr, norm)** | **0.5279** | 0.5113 |
| **Winogrande (tr)** | **0.5893** | 0.5656 |
| **TruthfulQA (acc)** | **0.5174** | 0.5165 |
| **MMLU (tr, avg)**       | 0.6640     | **0.6729** |
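To reproduce numbers like these, the Turkish fork can be driven through the harness's Python API. A hedged sketch, assuming the fork keeps upstream's `simple_evaluate` entry point and the `hf-causal` model type; the task ids are placeholders, so check the fork's task registry for the actual names:
```py
from lm_eval import evaluator

# Hypothetical task ids; look up the exact names in the fork's registry.
results = evaluator.simple_evaluate(
    model="hf-causal",  # newer harness versions use "hf" instead
    model_args="pretrained=suayptalha/Sungur-14B,dtype=bfloat16",
    tasks=["arc_tr", "hellaswag_tr"],
)
print(results["results"])
```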
### Turkish GSM8K Results
| Model Name | GSM8K (strict) |
| --------------------------------------- | -------------- |
| Qwen/Qwen2.5-72B-Instruct | 83.60 |
| Qwen/Qwen3-14B | 81.85 |
| Qwen/Qwen2.5-32B-Instruct | 77.83 |
| **suayptalha/Sungur-14B** | **77.60** |
| google/gemma-3-27b-it | 77.52 |
| ytu-ce-cosmos/Turkish-Gemma-9b-T1 | 77.41 |
| Qwen/Qwen2.5-14B-Instruct                | 76.77          |
| google/gemma-2-27b-it | 76.54 |
| **suayptalha/Sungur-9B** | **74.49** |
| ytu-ce-cosmos/Turkish-Gemma-9b-v0.1 | 73.42 |
| google/gemma-3-12b-it | 72.06 |
| meta-llama/Llama-3.1-70B-Instruct        | 66.13          |
| Qwen/Qwen2.5-7B-Instruct | 64.16 |
| google/gemma-2-9b-it | 63.10 |
## Acknowledgments
- Thanks to the [@Qwen](https://huggingface.co/Qwen) team for their amazing Qwen/Qwen3-14B model.
- Thanks to [unsloth](https://unsloth.ai) for the library used to fine-tune this model.
- Thanks to the entire Turkish open-source AI community.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
## Citation
```
@misc{sungur_collection_2025,
title = {Sungur (Hugging Face Collection)},
author = {Şuayp Talha Kocabay},
year = {2025},
howpublished = {\url{https://huggingface.co/collections/suayptalha/sungur-68dcd094da7f8976cdc5898e}},
note = {Turkish LLM family and dataset collection}
}
```
## Support
<a href="https://www.buymeacoffee.com/suayptalha" target="_blank"><img src="https://cdn.buymeacoffee.com/buttons/v2/default-yellow.png" alt="Buy Me A Coffee" style="height: 60px !important;width: 217px !important;" ></a>