Initialize project; model provided by the ModelHub XC community
Model: ai-forever/mGPT-1.3B-buryat
Source: Original Platform
47 .gitattributes vendored Normal file
@@ -0,0 +1,47 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bin.* filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zstandard filter=lfs diff=lfs merge=lfs -text
*.tfevents* filter=lfs diff=lfs merge=lfs -text
*.db* filter=lfs diff=lfs merge=lfs -text
*.ark* filter=lfs diff=lfs merge=lfs -text
**/*ckpt*data* filter=lfs diff=lfs merge=lfs -text
**/*ckpt*.meta filter=lfs diff=lfs merge=lfs -text
**/*ckpt*.index filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.gguf* filter=lfs diff=lfs merge=lfs -text
*.ggml filter=lfs diff=lfs merge=lfs -text
*.llamafile* filter=lfs diff=lfs merge=lfs -text
*.pt2 filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
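These patterns route the repository's large binary artifacts through Git LFS, so a plain clone holds small pointer files instead of the actual weights (see pytorch_model.bin below). A minimal sketch, with a hypothetical helper name, for checking whether a file is still an unfetched pointer:

```python
# Hypothetical helper: returns True if `path` holds a Git LFS pointer
# (the spec line shown in pytorch_model.bin below) rather than real data.
def is_lfs_pointer(path: str) -> bool:
    with open(path, "rb") as f:
        head = f.read(120)
    return head.startswith(b"version https://git-lfs.github.com/spec/v1")

print(is_lfs_pointer("pytorch_model.bin"))  # True until `git lfs pull` fetches the blob
```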
63 README.md Normal file
@@ -0,0 +1,63 @@
---
language:
- bua
- en
- ru
license: mit
tags:
- gpt3
- transformers
- mgpt
---

# 🌞 Buryat mGPT 1.3B

Language model for Buryat. The model has 1.3B parameters, as you can guess from its name.

Buryat belongs to the Mongolic language family. It is a distinct language with approximately 0.5 million speakers. Here are some facts about it:

1. Primarily spoken in the region around Lake Baikal in Russia.
2. It has multiple dialects, reflecting the wide geographic spread of Buryat communities.
3. The Buryats are the largest indigenous group in Siberia.

## Technical details

It is one of the models derived from the base [mGPT-XL (1.3B)](https://huggingface.co/ai-forever/mGPT) model (see the list below), which was originally trained on 61 languages from 25 language families using the Wikipedia and C4 corpora.

We found additional data for 23 languages, most of which are considered minor languages, and decided to further tune the base model. **Buryat mGPT 1.3B** was trained for another 1000 steps with batch_size=4 and a context window of **2048** tokens on a single A100.

Final perplexity for this model on validation is **17.63**.

_Chart of the training loss and perplexity:_

![](https://i.imgur.com/bPJ1XnY.png)
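A minimal usage sketch with the 🤗 Transformers classes named in config.json; the prompt and sampling settings below are illustrative, not the authors' recommended values:

```python
# Minimal sketch: load the checkpoint from this repo and sample a continuation.
# The prompt and generation parameters are illustrative, not tuned values.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("ai-forever/mGPT-1.3B-buryat")
model = GPT2LMHeadModel.from_pretrained("ai-forever/mGPT-1.3B-buryat")

text = "Буряад хэлэн"  # example Buryat prompt ("the Buryat language")
input_ids = tokenizer.encode(text, return_tensors="pt")
output = model.generate(
    input_ids,
    max_new_tokens=50,
    do_sample=True,
    top_k=50,
    top_p=0.95,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```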

## Other mGPT-1.3B models

- [🇦🇲 mGPT-1.3B Armenian](https://huggingface.co/ai-forever/mGPT-1.3B-armenian)
- [🇦🇿 mGPT-1.3B Azerbaijan](https://huggingface.co/ai-forever/mGPT-1.3B-azerbaijan)
- [🍯 mGPT-1.3B Bashkir](https://huggingface.co/ai-forever/mGPT-1.3B-bashkir)
- [🇧🇾 mGPT-1.3B Belorussian](https://huggingface.co/ai-forever/mGPT-1.3B-belorussian)
- [🇧🇬 mGPT-1.3B Bulgarian](https://huggingface.co/ai-forever/mGPT-1.3B-bulgarian)
- [🌳 mGPT-1.3B Chuvash](https://huggingface.co/ai-forever/mGPT-1.3B-chuvash)
- [🇬🇪 mGPT-1.3B Georgian](https://huggingface.co/ai-forever/mGPT-1.3B-georgian)
- [🌸 mGPT-1.3B Kalmyk](https://huggingface.co/ai-forever/mGPT-1.3B-kalmyk)
- [🇰🇿 mGPT-1.3B Kazakh](https://huggingface.co/ai-forever/mGPT-1.3B-kazakh)
- [🇰🇬 mGPT-1.3B Kirgiz](https://huggingface.co/ai-forever/mGPT-1.3B-kirgiz)
- [🐻 mGPT-1.3B Mari](https://huggingface.co/ai-forever/mGPT-1.3B-mari)
- [🇲🇳 mGPT-1.3B Mongol](https://huggingface.co/ai-forever/mGPT-1.3B-mongol)
- [🐆 mGPT-1.3B Ossetian](https://huggingface.co/ai-forever/mGPT-1.3B-ossetian)
- [🇮🇷 mGPT-1.3B Persian](https://huggingface.co/ai-forever/mGPT-1.3B-persian)
- [🇷🇴 mGPT-1.3B Romanian](https://huggingface.co/ai-forever/mGPT-1.3B-romanian)
- [🇹🇯 mGPT-1.3B Tajik](https://huggingface.co/ai-forever/mGPT-1.3B-tajik)
- [☕ mGPT-1.3B Tatar](https://huggingface.co/ai-forever/mGPT-1.3B-tatar)
- [🇹🇲 mGPT-1.3B Turkmen](https://huggingface.co/ai-forever/mGPT-1.3B-turkmen)
- [🐎 mGPT-1.3B Tuvan](https://huggingface.co/ai-forever/mGPT-1.3B-tuvan)
- [🇺🇦 mGPT-1.3B Ukrainian](https://huggingface.co/ai-forever/mGPT-1.3B-ukranian)
- [🇺🇿 mGPT-1.3B Uzbek](https://huggingface.co/ai-forever/mGPT-1.3B-uzbek)
- [💎 mGPT-1.3B Yakut](https://huggingface.co/ai-forever/mGPT-1.3B-yakut)

## Feedback

If you find a bug or have additional data to train the model on your language, please give us feedback.

The model will be improved over time. Stay tuned!
31 config.json Normal file
@@ -0,0 +1,31 @@
{
  "activation_function": "gelu_new",
  "architectures": [
    "GPT2LMHeadModel"
  ],
  "attn_pdrop": 0.1,
  "bos_token_id": 50256,
  "embd_pdrop": 0.1,
  "eos_token_id": 50256,
  "gradient_checkpointing": false,
  "initializer_range": 0.02,
  "layer_norm_epsilon": 1e-05,
  "model_type": "gpt2",
  "n_ctx": 2048,
  "n_embd": 2048,
  "n_head": 16,
  "n_inner": null,
  "n_layer": 24,
  "n_positions": 2048,
  "resid_pdrop": 0.1,
  "scale_attn_weights": true,
  "summary_activation": null,
  "summary_first_dropout": 0.1,
  "summary_proj_to_labels": true,
  "summary_type": "cls_index",
  "summary_use_proj": true,
  "torch_dtype": "float32",
  "transformers_version": "4.10.3",
  "use_cache": true,
  "vocab_size": 100000
}
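For context, a back-of-envelope parameter count from the fields above, assuming the standard GPT-2 layout and ignoring biases, layer norms, and non-trainable buffers:

```python
# Rough parameter estimate from config.json (standard GPT-2 layout).
n_layer, n_embd, vocab_size, n_positions = 24, 2048, 100000, 2048

embeddings = vocab_size * n_embd + n_positions * n_embd  # token + position embeddings
per_block = 12 * n_embd ** 2                             # 4*d^2 attention + 8*d^2 MLP
total = embeddings + n_layer * per_block

print(f"{total / 1e9:.2f}B parameters")  # ≈ 1.42B total; ≈ 1.21B excluding embeddings
```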
1 configuration.json Normal file
@@ -0,0 +1 @@
{"framework": "pytorch", "task": "text-generation", "allow_remote": true}
99738 merges.txt Normal file
File diff suppressed because it is too large
3 pytorch_model.bin Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e6dbb36be3e1d8ab863598cf75e7b33e59de529d4068bd0bfa9b3bc4195f275f
size 5771170577
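This is a Git LFS pointer, not the weights themselves. The recorded size squares with the float32 dtype in config.json; a quick sanity check:

```python
# Sanity check on the checkpoint size recorded in the LFS pointer above.
size_bytes = 5_771_170_577
print(size_bytes / 2**30)    # ≈ 5.37 GiB on disk
print(size_bytes / 4 / 1e9)  # ≈ 1.44e9 float32 values, in line with the ~1.42B estimate above
```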
31 special_tokens_map.json Normal file
@@ -0,0 +1,31 @@
{
  "bos_token": {
    "content": "<s>",
    "lstrip": false,
    "normalized": true,
    "rstrip": false,
    "single_word": false
  },
  "eos_token": {
    "content": "</s>",
    "lstrip": false,
    "normalized": true,
    "rstrip": false,
    "single_word": false
  },
  "mask_token": "<mask>",
  "pad_token": {
    "content": "<pad>",
    "lstrip": false,
    "normalized": true,
    "rstrip": false,
    "single_word": false
  },
  "unk_token": {
    "content": "<|endoftext|>",
    "lstrip": false,
    "normalized": true,
    "rstrip": false,
    "single_word": false
  }
}
199821 tokenizer.json Normal file
File diff suppressed because it is too large
41 tokenizer_config.json Normal file
@@ -0,0 +1,41 @@
{
  "add_bos_token": false,
  "add_prefix_space": false,
  "bos_token": {
    "__type": "AddedToken",
    "content": "<s>",
    "lstrip": false,
    "normalized": true,
    "rstrip": false,
    "single_word": false
  },
  "clean_up_tokenization_spaces": true,
  "eos_token": {
    "__type": "AddedToken",
    "content": "</s>",
    "lstrip": false,
    "normalized": true,
    "rstrip": false,
    "single_word": false
  },
  "errors": "replace",
  "mask_token": "<mask>",
  "model_max_length": 1000000000000000019884624838656,
  "pad_token": {
    "__type": "AddedToken",
    "content": "<pad>",
    "lstrip": false,
    "normalized": true,
    "rstrip": false,
    "single_word": false
  },
  "tokenizer_class": "GPT2Tokenizer",
  "unk_token": {
    "__type": "AddedToken",
    "content": "<|endoftext|>",
    "lstrip": false,
    "normalized": true,
    "rstrip": false,
    "single_word": false
  }
}
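A quick sketch to confirm these special tokens after loading the tokenizer; the expected values are taken from the JSON above:

```python
# Sketch: load the tokenizer shipped in this repo and inspect its special tokens.
from transformers import GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("ai-forever/mGPT-1.3B-buryat")
print(tok.bos_token, tok.eos_token, tok.pad_token, tok.unk_token)
# expected: <s> </s> <pad> <|endoftext|>
print(tok.model_max_length)  # effectively unbounded; config.json's n_positions=2048 is the real limit
```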
1 vocab.json Normal file
File diff suppressed because one or more lines are too long