Initialize project; model provided by the ModelHub XC community

Model: anrilombard/mzansilm-125m
Source: Original Platform
ModelHub XC
2026-05-01 11:02:30 +08:00
commit 265112f00b
8 changed files with 327141 additions and 0 deletions

35
.gitattributes vendored Normal file

@@ -0,0 +1,35 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text

111
README.md Normal file

@@ -0,0 +1,111 @@
---
language:
- af
- en
- nso
- sot
- ssw
- tsn
- tso
- ven
- xho
- zul
- nbl
tags:
- llama
- south-african-languages
- low-resource
- decoder-only
- mzansilm
license: apache-2.0
pipeline_tag: text-generation
library_name: transformers
---
# MzansiLM 125M
**MzansiLM** is a 125M-parameter decoder-only language model trained from scratch on **MzansiText**, a multilingual corpus covering all eleven official South African languages.
[![GitHub](https://img.shields.io/badge/GitHub-Anri--Lombard/sallm-blue)](https://github.com/Anri-Lombard/sallm)
[![Paper](https://img.shields.io/badge/Paper-arXiv_2603.20732-red.svg)](https://arxiv.org/abs/2603.20732)
[![Dataset](https://img.shields.io/badge/Dataset-MzansiText-green)](https://huggingface.co/datasets/anrilombard/mzansi-text)
[![Collection](https://img.shields.io/badge/Collection-MzansiLM-orange)](https://huggingface.co/collections/anrilombard/mzansilm-69635ca7b60efedb9dfcb09e)
## Model Details
- Parameters: `125,008,384`
- Architecture: decoder-only `LlamaForCausalLM`
- Hidden size: `512`
- Intermediate size: `1536`
- Layers: `30`
- Attention heads: `9`
- Key/value heads: `3`
- Context length: `2048`
- RoPE theta: `10000.0`
- RMSNorm epsilon: `1e-5`
- Tied word embeddings: `true`
- Training attention implementation: `flash_attention_2`
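As a quick sanity check, the parameter total can be reproduced from the sizes above. The sketch below is a back-of-the-envelope calculation assuming the standard Llama layout (tied embeddings, bias-free projections, two RMSNorms per layer plus a final one):
```python
# Reproduce the parameter count from the numbers above
# (assumes the standard Llama layout: tied embeddings, no biases).
V, H, I, L, Q, KV, D = 65536, 512, 1536, 30, 9, 3, 56

embed = V * H                                  # tied input/output embedding
attn = H * Q * D + 2 * H * KV * D + Q * D * H  # q, k, v, o projections
mlp = 3 * H * I                                # gate, up, down projections
norms = 2 * H                                  # two RMSNorms per layer
total = embed + L * (attn + mlp + norms) + H   # plus the final RMSNorm
print(total)  # 125008384
```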
## Tokenizer
MzansiLM uses a custom BPE tokenizer with a vocabulary size of `65536`.
- `[BOS] = 0`
- `[EOS] = 1`
- `[PAD] = 2`
- `[UNK] = 3`
- Normalizer: `NFD`
- Pre-tokenizer: `ByteLevel`
- Post-processing:
- single sequence: `[BOS] $A [EOS]`
- pair sequence: `[BOS] $A [EOS] [BOS] $B [EOS]`
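The templates above mean every encoded segment is explicitly bracketed by the special tokens. A minimal check, assuming the tokenizer loads from this repository as in the usage example below:
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("anrilombard/mzansilm-125m")

# Single sequence: [BOS] $A [EOS] -> IDs start with 0 and end with 1.
ids = tokenizer("Molo!").input_ids
print(ids[0], ids[-1])  # 0 1

# Pair sequence: [BOS] $A [EOS] [BOS] $B [EOS] -> two BOS and two EOS.
pair_ids = tokenizer("Molo!", "Unjani?").input_ids
print(pair_ids.count(0), pair_ids.count(1))  # 2 2
```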
## Training Data
The model was trained on **MzansiText** and covers all eleven official South African languages:
`af`, `en`, `nso`, `sot`, `ssw`, `tsn`, `tso`, `ven`, `xho`, `zul`, `nbl`
Related releases:
- Paper: [arXiv:2603.20732](https://arxiv.org/abs/2603.20732)
- Raw corpus: [anrilombard/mzansi-text](https://huggingface.co/datasets/anrilombard/mzansi-text)
- Tokenized corpus: [anrilombard/mzansi-text-tokenized](https://huggingface.co/datasets/anrilombard/mzansi-text-tokenized)
- GitHub code and configs: [https://github.com/Anri-Lombard/sallm](https://github.com/Anri-Lombard/sallm)
## Intended Use
MzansiLM is a research model for pretraining, fine-tuning, and evaluation on South African languages. It is intended as a reproducible baseline for language modeling and downstream task adaptation.
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("anrilombard/mzansilm-125m")
model = AutoModelForCausalLM.from_pretrained("anrilombard/mzansilm-125m")

# Encode a short isiXhosa greeting and generate a continuation.
inputs = tokenizer("Molo!", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Citation
Please cite the paper:
```bibtex
@misc{lombard2026mzansitextmzansilmopencorpus,
title={MzansiText and MzansiLM: An Open Corpus and Decoder-Only Language Model for South African Languages},
author={Anri Lombard and Simbarashe Mawere and Temi Aina and Ethan Wolff and Sbonelo Gumede and Elan Novick and Francois Meyer and Jan Buys},
year={2026},
eprint={2603.20732},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2603.20732},
}
```
## License
Apache License 2.0

30
config.json Normal file

@@ -0,0 +1,30 @@
{
"architectures": [
"LlamaForCausalLM"
],
"attention_bias": false,
"attention_dropout": 0.0,
"bos_token_id": 0,
"eos_token_id": 1,
"head_dim": 56,
"hidden_act": "silu",
"hidden_size": 512,
"initializer_range": 0.02,
"intermediate_size": 1536,
"max_position_embeddings": 2048,
"mlp_bias": false,
"model_type": "llama",
"num_attention_heads": 9,
"num_hidden_layers": 30,
"num_key_value_heads": 3,
"pretraining_tp": 1,
"rms_norm_eps": 1e-05,
"rope_scaling": null,
"rope_theta": 10000.0,
"tie_word_embeddings": true,
"torch_dtype": "float32",
"transformers_version": "4.52.4",
"use_cache": true,
"vocab_size": 65536,
"pad_token_id": 2
}
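One detail worth noting: `hidden_size` (512) is not divisible by `num_attention_heads` (9), so `head_dim` is pinned to 56 explicitly, and the attention projections map 512 to 9 × 56 = 504 (and to 3 × 56 = 168 for the grouped key/value heads). A hedged sketch rebuilding the configuration offline from the values above:
```python
from transformers import LlamaConfig

# Rebuild the configuration locally from the JSON above.
cfg = LlamaConfig(
    vocab_size=65536, hidden_size=512, intermediate_size=1536,
    num_hidden_layers=30, num_attention_heads=9, num_key_value_heads=3,
    head_dim=56, max_position_embeddings=2048, rms_norm_eps=1e-5,
    rope_theta=10000.0, tie_word_embeddings=True,
    bos_token_id=0, eos_token_id=1, pad_token_id=2,
)
print(cfg.num_attention_heads * cfg.head_dim)  # 504, not hidden_size (512)
```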

7
generation_config.json Normal file

@@ -0,0 +1,7 @@
{
"_from_model_config": true,
"bos_token_id": 0,
"eos_token_id": 1,
"transformers_version": "4.52.4",
"pad_token_id": 2
}

3
pytorch_model.bin Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7388b67c8fe73a1bb8f80a97b8b756de4a110e914ac8e1dfcaf0d06697a61273
size 500125578
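The file above is a Git LFS pointer, not the weights themselves: it records only the SHA-256 object ID and the byte size (~500 MB, consistent with 125M float32 parameters plus serialization overhead). A minimal sketch for verifying a downloaded copy against the pointer:
```python
import hashlib
import os

# Values copied from the LFS pointer above.
EXPECTED_OID = "7388b67c8fe73a1bb8f80a97b8b756de4a110e914ac8e1dfcaf0d06697a61273"
EXPECTED_SIZE = 500125578

h = hashlib.sha256()
with open("pytorch_model.bin", "rb") as f:            # fetched e.g. via `git lfs pull`
    for chunk in iter(lambda: f.read(1 << 20), b""):  # stream in 1 MiB chunks
        h.update(chunk)
print(os.path.getsize("pytorch_model.bin") == EXPECTED_SIZE)
print(h.hexdigest() == EXPECTED_OID)
```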

30
special_tokens_map.json Normal file

@@ -0,0 +1,30 @@
{
"bos_token": {
"content": "[BOS]",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
},
"eos_token": {
"content": "[EOS]",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
},
"pad_token": {
"content": "[PAD]",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
},
"unk_token": {
"content": "[UNK]",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
}
}

326881
tokenizer.json Normal file

File diff suppressed because it is too large

44
tokenizer_config.json Normal file

@@ -0,0 +1,44 @@
{
"added_tokens_decoder": {
"0": {
"content": "[BOS]",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"1": {
"content": "[EOS]",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"2": {
"content": "[PAD]",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"3": {
"content": "[UNK]",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
}
},
"bos_token": "[BOS]",
"clean_up_tokenization_spaces": true,
"eos_token": "[EOS]",
"extra_special_tokens": {},
"model_max_length": 1000000000000000019884624838656,
"pad_token": "[PAD]",
"tokenizer_class": "PreTrainedTokenizer",
"unk_token": "[UNK]"
}
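Note that `model_max_length` above is the transformers "unset" sentinel rather than a real limit; the usable context window is `max_position_embeddings` (2048) from `config.json`. A hedged sketch of capping inputs explicitly when batching long text:
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("anrilombard/mzansilm-125m")

long_text = "Molo! " * 2000  # deliberately longer than the context window
# model_max_length is the "no limit" sentinel, so truncate against the
# model's actual context window (max_position_embeddings = 2048).
enc = tokenizer(long_text, truncation=True, max_length=2048)
print(len(enc.input_ids))  # 2048
```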