---
license: apache-2.0
datasets:
- pkupie/mc2_corpus
language:
- bo
base_model: Qwen/Qwen2.5-1.5B
pipeline_tag: text-generation
---

# Qwen2.5-1.5B Continually Pretrained on Tibetan

This model is a continual pretraining (CPT) checkpoint obtained by further pretraining Qwen2.5-1.5B on the Tibetan portion of the MC^2 Corpus.

The model is intended to improve Tibetan language modeling and to support research on low-resource language adaptation.
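Since the checkpoint keeps the Qwen2.5-1.5B architecture, it should load with the standard Hugging Face `transformers` causal-LM API. A minimal sketch; the Tibetan prompt and generation settings below are illustrative assumptions, not recommendations:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "pkupie/Qwen2.5-1.5B-bo-cpt"

# The CPT checkpoint reuses the Qwen2.5-1.5B architecture, so the
# standard auto classes should work without custom code.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Illustrative Tibetan prompt; this is a base LM, not a chat model.
prompt = "བོད་ཀྱི་"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```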

Training details and methodology are described in: "Efficient Low-Resource Language Adaptation via Multi-Source Dynamic Logit Fusion" (ACL 2026).

## Training Data

- Corpus: Tibetan subset of MC^2 Corpus (see the loading sketch after this list)
- Language: Tibetan (`bo`)
- Training paradigm: continual pretraining (CPT) starting from Qwen2.5-1.5B
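The corpus is available on the Hugging Face Hub as `pkupie/mc2_corpus`. A hedged loading sketch; the config and split names here are assumptions, so check the dataset card before running:

```python
from datasets import load_dataset

# "bo" as the config name and "train" as the split are assumptions --
# consult the pkupie/mc2_corpus dataset card for the actual identifiers.
mc2_bo = load_dataset("pkupie/mc2_corpus", "bo", split="train")
print(mc2_bo[0])
```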

## Intended Use

This checkpoint is released primarily for research purposes. Researchers are welcome to use this CPT checkpoint as a base model for future work, particularly in model merging and logit fusion.
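For context, logit fusion in its simplest form combines next-token distributions from several models at decoding time. The sketch below shows a generic fixed-weight, two-model variant, not the paper's dynamic multi-source method; the weight `alpha` and the choice of companion model are assumptions:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Logit-level fusion requires a shared vocabulary; the CPT checkpoint
# inherits its tokenizer from Qwen2.5-1.5B, so the pair is compatible.
base_id = "Qwen/Qwen2.5-1.5B"          # source model
cpt_id = "pkupie/Qwen2.5-1.5B-bo-cpt"  # Tibetan CPT checkpoint

tokenizer = AutoTokenizer.from_pretrained(cpt_id)
base = AutoModelForCausalLM.from_pretrained(base_id)
cpt = AutoModelForCausalLM.from_pretrained(cpt_id)

@torch.no_grad()
def fused_next_token(input_ids, alpha=0.7):
    """Greedy next-token choice from a fixed-weight logit mixture.

    alpha weights the CPT model; the paper instead adapts these weights
    dynamically per step and per source model.
    """
    logits_cpt = cpt(input_ids).logits[:, -1, :]
    logits_base = base(input_ids).logits[:, -1, :]
    fused = alpha * logits_cpt + (1.0 - alpha) * logits_base
    return fused.argmax(dim=-1)

input_ids = tokenizer("བོད་", return_tensors="pt").input_ids
print(tokenizer.decode(fused_next_token(input_ids)))
```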

## Citation

If you use this model, please cite:

```bibtex
@article{zhang2026efficient,
  title={Efficient Low-Resource Language Adaptation via Multi-Source Dynamic Logit Fusion},
  author={Zhang, Chen and Lin, Jiuheng and Liao, Zhiyuan and Feng, Yansong},
  journal={arXiv preprint arXiv:2604.18106},
  year={2026}
}
```