Model: pkupie/gemma-3-4b-mn-cpt
Source: Original Platform
2026-05-04 21:33:51 +08:00

---
license: gemma
datasets:
- pkupie/mc2_corpus
language:
- mn
base_model: google/gemma-3-4b-pt
pipeline_tag: text-generation
---

Gemma 3 PT 4B Continually Pretrained on Mongolian (Traditional Mongolian Script)

This model is a continual pretraining (CPT) checkpoint built by further pretraining Gemma 3 PT 4B on the Mongolian (Traditional Mongolian Script) portion of the MC^2 Corpus.

The model is intended to improve Mongolian (Traditional Mongolian Script) language modeling and to support research on low-resource language adaptation.
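For reference, a minimal text-generation sketch using the Hugging Face transformers library (the model id is taken from this card; the generation arguments are illustrative defaults, not recommended settings):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "pkupie/gemma-3-4b-mn-cpt"

def generate(prompt: str, max_new_tokens: int = 64) -> str:
    """Greedily continue a Traditional Mongolian Script prompt.

    Loading is kept inside the function because the first call
    downloads the ~4B-parameter checkpoint.
    """
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype="auto", device_map="auto"
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output_ids = model.generate(
        **inputs, max_new_tokens=max_new_tokens, do_sample=False
    )
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)
```

As a base (non-instruction-tuned) checkpoint, the model is best used for plain text continuation rather than chat-style prompting.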

Training details and methodology are described in: "Efficient Low-Resource Language Adaptation via Multi-Source Dynamic Logit Fusion" (ACL 2026).

Training Data

  • Corpus: Mongolian (Traditional Mongolian Script) subset of MC^2 Corpus
  • Language: Mongolian (mn, Traditional Mongolian Script)
  • Training paradigm: Continual pretraining (CPT) starting from Gemma 3 PT 4B

Intended Use

This checkpoint is released primarily for research purposes. Researchers are welcome to use this CPT checkpoint as a base model for future work, particularly in model merging and logit fusion.
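Since the checkpoint is positioned as a source model for logit fusion, the following toy NumPy sketch illustrates the general idea: combining next-token logits from two models over a shared vocabulary by a weighted sum. The fixed scalar weight here is a simplification for illustration, not the dynamic fusion method described in the paper.

```python
import numpy as np

def fuse_logits(logits_a: np.ndarray, logits_b: np.ndarray,
                alpha: float = 0.5) -> np.ndarray:
    """Interpolate two models' next-token logits over a shared vocabulary."""
    assert logits_a.shape == logits_b.shape
    return alpha * logits_a + (1.0 - alpha) * logits_b

def softmax(x: np.ndarray) -> np.ndarray:
    """Numerically stable softmax over a 1-D logit vector."""
    z = x - x.max()
    e = np.exp(z)
    return e / e.sum()

# Toy 5-token vocabulary: the fused distribution lies between
# the two source models' distributions.
logits_cpt = np.array([2.0, 0.5, -1.0, 0.0, 0.3])   # e.g. CPT model
logits_base = np.array([-0.5, 1.5, 0.2, 0.0, 0.1])  # e.g. base model
probs = softmax(fuse_logits(logits_cpt, logits_base, alpha=0.7))
```

In the dynamic setting of the cited paper, the weight would vary per step rather than being a constant; this sketch only shows the fusion operation itself.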

Citation

If you use this model, please cite:

@article{zhang2026efficient,
  title={Efficient Low-Resource Language Adaptation via Multi-Source Dynamic Logit Fusion},
  author={Zhang, Chen and Lin, Jiuheng and Liao, Zhiyuan and Feng, Yansong},
  journal={arXiv preprint arXiv:2604.18106},
  year={2026}
}