Initialize the project; model provided by the ModelHub XC community
Model: asafaya/kanarya-2b Source: Original Platform

35 .gitattributes vendored Normal file
@@ -0,0 +1,35 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text

74 README.md Normal file
@@ -0,0 +1,74 @@
---
license: apache-2.0
datasets:
- oscar
- mc4
language:
- tr
pipeline_tag: text-generation
widget:
- text: "Benim adım Zeynep, ve en sevdiğim kitabın adı:"
  example_title: "Benim adım Zeynep, ve en sevdiğim kitabın adı"
- text: "Bugünkü yemeğimiz"
  example_title: "Bugünkü yemeğimiz"
---

# Kanarya-2B: Turkish Language Model

<img src="https://asafaya.me/images/kanarya.webp" alt="Kanarya Logo" style="width:600px;"/>

**Kanarya** is a pre-trained Turkish GPT-J 2B model. Released as part of the [Turkish Data Depository](https://tdd.ai/) effort, the Kanarya family comes in two sizes: Kanarya-2B (this model) and the smaller Kanarya-0.7B. Both models are trained on a large-scale Turkish text corpus filtered from the OSCAR and mC4 datasets. The training data is collected from a variety of sources, including news, articles, and websites, to create a diverse, high-quality dataset. The models are trained using a JAX/Flax implementation of the [GPT-J](https://github.com/kingoflolz/mesh-transformer-jax) architecture.

## Model Details

- Model Name: Kanarya-2B
- Model Size: 2,050M parameters
- Training Data: OSCAR, mC4
- Language: Turkish
- Layers: 24
- Hidden Size: 2560
- Number of Heads: 20
- Context Size: 2048
- Positional Embeddings: Rotary
- Vocabulary Size: 32,768

## Intended Use

This model is pre-trained on Turkish text only and is intended to be fine-tuned on a wide range of Turkish NLP tasks, including text generation, translation, and summarization. It is not intended to be used for any downstream task without fine-tuning.
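
As a minimal, illustrative sketch (not an official recipe), the snippet below shows how a checkpoint like this is typically loaded for plain text generation with the Hugging Face `transformers` library. The model id matches the `asafaya/kanarya-2b` repository named in this commit, and the sampling settings mirror the defaults shipped in `generation_config.json`; adapt both to your own setup.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "asafaya/kanarya-2b"  # repository named in this commit

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto")

# Sampling settings mirror generation_config.json (do_sample, temperature=0.8, top_k=50).
inputs = tokenizer("Benim adım Zeynep, ve en sevdiğim kitabın adı:", return_tensors="pt")
outputs = model.generate(
    **inputs,
    do_sample=True,
    temperature=0.8,
    top_k=50,
    max_new_tokens=50,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```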

## Limitations and Ethical Considerations

The model is trained on a diverse and high-quality Turkish text corpus, but it may still generate toxic, biased, or unethical content. Make sure that generated content is appropriate for your use case, use the model responsibly, and report any issues.

## License: Apache 2.0

The model is licensed under the Apache 2.0 License. It is free to use for any purpose, including commercial use. We encourage users to contribute to the model and report any issues. However, the model is provided "as is" without warranty of any kind.

## Citation

If you use the model, please cite the following paper:

```bibtex
@inproceedings{safaya-etal-2022-mukayese,
    title = "Mukayese: {T}urkish {NLP} Strikes Back",
    author = "Safaya, Ali and
      Kurtulu{\c{s}}, Emirhan and
      Goktogan, Arda and
      Yuret, Deniz",
    editor = "Muresan, Smaranda and
      Nakov, Preslav and
      Villavicencio, Aline",
    booktitle = "Findings of the Association for Computational Linguistics: ACL 2022",
    month = may,
    year = "2022",
    address = "Dublin, Ireland",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.findings-acl.69",
    doi = "10.18653/v1/2022.findings-acl.69",
    pages = "846--863",
}
```

## Acknowledgments

During this work, Ali Safaya was supported by a [KUIS AI Center](https://ai.ku.edu.tr/) fellowship. Moreover, the pre-training of these models was performed at TUBITAK ULAKBIM, High Performance and Grid Computing Center ([TRUBA](https://www.truba.gov.tr/index.php/en/main-page/) resources).

42 config.json Normal file
@@ -0,0 +1,42 @@
{
  "activation_function": "gelu_new",
  "architectures": [
    "GPTJForCausalLM"
  ],
  "attn_pdrop": 0.0,
  "bos_token_id": 0,
  "embd_pdrop": 0.0,
  "eos_token_id": 2,
  "gradient_checkpointing": false,
  "initializer_range": 0.01,
  "layer_norm_epsilon": 1e-06,
  "model_type": "gptj",
  "n_embd": 2560,
  "n_head": 20,
  "n_inner": null,
  "n_layer": 24,
  "n_positions": 2048,
  "resid_pdrop": 0.0,
  "rotary": true,
  "rotary_dim": 64,
  "scale_attn_weights": true,
  "summary_activation": null,
  "summary_first_dropout": 0.1,
  "summary_proj_to_labels": true,
  "summary_type": "cls_index",
  "summary_use_proj": true,
  "task_specific_params": {
    "text-generation": {
      "do_sample": true,
      "max_length": 2048,
      "temperature": 0.8,
      "top_k": 50
    }
  },
  "tie_word_embeddings": false,
  "tokenizer_class": "GPT2Tokenizer",
  "torch_dtype": "bfloat16",
  "transformers_version": "4.37.2",
  "use_cache": true,
  "vocab_size": 32768
}
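
As an aside (an illustration, not part of the committed files), the following sketch shows how the architecture fields above map onto the `transformers` GPT-J implementation. It builds a randomly initialized model of the same shape, which is handy for sanity-checking sizes without downloading the weights; fields not listed fall back to GPT-J defaults.

```python
from transformers import GPTJConfig, GPTJForCausalLM

# Values mirror config.json above.
config = GPTJConfig(
    vocab_size=32768,
    n_positions=2048,
    n_embd=2560,
    n_layer=24,
    n_head=20,
    rotary_dim=64,
    bos_token_id=0,
    eos_token_id=2,
    tie_word_embeddings=False,
)

# Randomly initialized model with the same shape as Kanarya-2B.
model = GPTJForCausalLM(config)
print(f"{sum(p.numel() for p in model.parameters()) / 1e6:.0f}M parameters")
# Should come out around 2,050M, matching the Model Details section above.
```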

3 flax_model.msgpack Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:139a7664016cd5f5b6dbacdd7851cab0fc8d2a039f0d5bce14fa3ce25816b371
size 4111363281

9 generation_config.json Normal file
@@ -0,0 +1,9 @@
{
  "bos_token_id": 0,
  "eos_token_id": 2,
  "transformers_version": "4.31.0",
  "do_sample": true,
  "temperature": 0.8,
  "max_length": 2048,
  "top_k": 50
}
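
These are the sampling defaults that `model.generate()` picks up automatically when no overrides are passed. As a small, illustrative sketch (assuming the `asafaya/kanarya-2b` repository name from this commit), they can also be inspected programmatically:

```python
from transformers import GenerationConfig

# Loads generation_config.json from the model repository.
gen_config = GenerationConfig.from_pretrained("asafaya/kanarya-2b")
print(gen_config.do_sample, gen_config.temperature, gen_config.top_k, gen_config.max_length)
# expected: True 0.8 50 2048
```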

32510 merges.txt Normal file
File diff suppressed because it is too large

3 model.safetensors Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:098e57c5cbe3a15c84945f4dee90118c454db7e22904a01972cfda477ed502b2
size 4111380520

3 pytorch_model.bin Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:45282377e08ffaf9b4a94f5e29114d1eddb967b951259bfde1c398fcc443a81c
size 4111431009

24 special_tokens_map.json Normal file
@@ -0,0 +1,24 @@
{
  "bos_token": {
    "content": "<s>",
    "lstrip": false,
    "normalized": true,
    "rstrip": false,
    "single_word": false
  },
  "eos_token": {
    "content": "</s>",
    "lstrip": false,
    "normalized": true,
    "rstrip": false,
    "single_word": false
  },
  "pad_token": "<unk>",
  "unk_token": {
    "content": "<unk>",
    "lstrip": false,
    "normalized": true,
    "rstrip": false,
    "single_word": false
  }
}

65405 tokenizer.json Normal file
File diff suppressed because it is too large

32 tokenizer_config.json Normal file
@@ -0,0 +1,32 @@
{
  "add_bos_token": true,
  "add_prefix_space": true,
  "bos_token": {
    "__type": "AddedToken",
    "content": "<s>",
    "lstrip": false,
    "normalized": true,
    "rstrip": false,
    "single_word": false
  },
  "clean_up_tokenization_spaces": true,
  "eos_token": {
    "__type": "AddedToken",
    "content": "</s>",
    "lstrip": false,
    "normalized": true,
    "rstrip": false,
    "single_word": false
  },
  "errors": "replace",
  "model_max_length": 2048,
  "tokenizer_class": "GPT2Tokenizer",
  "unk_token": {
    "__type": "AddedToken",
    "content": "<unk>",
    "lstrip": false,
    "normalized": true,
    "rstrip": false,
    "single_word": false
  }
}
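
For illustration only (not part of the committed files), a short sketch of how these tokenizer settings behave in practice with `transformers`: because `add_bos_token` is true, every encoded sequence should start with the `<s>` token (id 0), which matters when preparing fine-tuning data.

```python
from transformers import AutoTokenizer

# Assumes the asafaya/kanarya-2b repository named in this commit.
tokenizer = AutoTokenizer.from_pretrained("asafaya/kanarya-2b")

ids = tokenizer("Bugünkü yemeğimiz").input_ids

# With add_bos_token=true, the first id should be the <s> token (bos_token_id = 0).
print(ids[0] == tokenizer.bos_token_id)  # expected: True
print(tokenizer.decode(ids))             # decoded text starts with "<s>"
```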

1 vocab.json Normal file
File diff suppressed because one or more lines are too long