ModelHub XC 820dffbdca project initialization; models provided by the ModelHub XC community
Model: RichardErkhov/MaLA-LM_-_lucky52-bloom-7b1-no-5-gguf
Source: Original Platform
2026-04-21 17:42:49 +08:00

Quantization made by Richard Erkhov.

Github

Discord

Request more models

lucky52-bloom-7b1-no-5 - GGUF

Name Quant method Size
lucky52-bloom-7b1-no-5.Q2_K.gguf Q2_K 3.2GB
lucky52-bloom-7b1-no-5.IQ3_XS.gguf IQ3_XS 3.56GB
lucky52-bloom-7b1-no-5.IQ3_S.gguf IQ3_S 3.63GB
lucky52-bloom-7b1-no-5.Q3_K_S.gguf Q3_K_S 3.63GB
lucky52-bloom-7b1-no-5.IQ3_M.gguf IQ3_M 3.9GB
lucky52-bloom-7b1-no-5.Q3_K.gguf Q3_K 4.14GB
lucky52-bloom-7b1-no-5.Q3_K_M.gguf Q3_K_M 4.14GB
lucky52-bloom-7b1-no-5.Q3_K_L.gguf Q3_K_L 4.42GB
lucky52-bloom-7b1-no-5.IQ4_XS.gguf IQ4_XS 4.33GB
lucky52-bloom-7b1-no-5.Q4_0.gguf Q4_0 4.51GB
lucky52-bloom-7b1-no-5.IQ4_NL.gguf IQ4_NL 4.53GB
lucky52-bloom-7b1-no-5.Q4_K_S.gguf Q4_K_S 4.53GB
lucky52-bloom-7b1-no-5.Q4_K.gguf Q4_K 4.91GB
lucky52-bloom-7b1-no-5.Q4_K_M.gguf Q4_K_M 4.91GB
lucky52-bloom-7b1-no-5.Q4_1.gguf Q4_1 4.92GB
lucky52-bloom-7b1-no-5.Q5_0.gguf Q5_0 5.33GB
lucky52-bloom-7b1-no-5.Q5_K_S.gguf Q5_K_S 5.33GB
lucky52-bloom-7b1-no-5.Q5_K.gguf Q5_K 5.63GB
lucky52-bloom-7b1-no-5.Q5_K_M.gguf Q5_K_M 5.63GB
lucky52-bloom-7b1-no-5.Q5_1.gguf Q5_1 5.74GB
lucky52-bloom-7b1-no-5.Q6_K.gguf Q6_K 6.2GB
lucky52-bloom-7b1-no-5.Q8_0.gguf Q8_0 8.03GB
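
Any of these GGUF files can be run with llama.cpp or its Python bindings. The snippet below is a minimal sketch, assuming llama-cpp-python and huggingface_hub are installed; the chosen quantization file, context size, prompt, and decoding settings are illustrative and not part of the original card.

from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one quantized file from the repository (Q4_K_M shown as an example)
gguf_path = hf_hub_download(
    repo_id="RichardErkhov/MaLA-LM_-_lucky52-bloom-7b1-no-5-gguf",
    filename="lucky52-bloom-7b1-no-5.Q4_K_M.gguf",
)

# Load the GGUF model; n_ctx sets the context window
llm = Llama(model_path=gguf_path, n_ctx=2048)

# Run a short completion
output = llm("Translate to French: Hello, how are you?", max_tokens=64)
print(output["choices"][0]["text"])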

Original model description:


library_name: transformers
pipeline_tag: text-generation
language:
  • multilingual
tags:
  • generation
  • question answering
  • instruction tuning
datasets:
  • MBZUAI/Bactrian-X
license: cc-by-nc-4.0

Model Description

This HF repository hosts an instruction fine-tuned multilingual BLOOM model trained on the parallel instruction dataset Bactrian-X, which covers 52 languages. We progressively add one language at a time during instruction fine-tuning, training 52 models in total, and then evaluate those models on three multilingual benchmarks.

Please refer to our paper for more details.

  • Base model: BLOOM 7B1
  • Instruction languages: English, Chinese, Afrikaans, Arabic, Azerbaijani
  • Instruction language codes: en, zh, af, ar, az
  • Training method: full-parameter fine-tuning.

Usage

The model checkpoint should be loaded with the transformers library.

from transformers import AutoTokenizer, AutoModelForCausalLM

# Download the tokenizer and full-precision weights from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained("MaLA-LM/lucky52-bloom-7b1-no-5")
model = AutoModelForCausalLM.from_pretrained("MaLA-LM/lucky52-bloom-7b1-no-5")
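
A minimal generation call might then look as follows; the prompt and decoding settings are illustrative and not taken from the original card.

# Tokenize an instruction-style prompt and generate a completion
inputs = tokenizer("Translate to French: Hello, how are you?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))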

Citation

@inproceedings{ji2025lucky52,
      title={How Many Languages Make Good Multilingual Instruction Tuning? A Case Study on BLOOM}, 
      author={Shaoxiong Ji and Pinzhen Chen},
      year={2025},
      booktitle={Proceedings of COLING},
      url={https://arxiv.org/abs/2404.04850}, 
}
Description
Model synced from source: RichardErkhov/MaLA-LM_-_lucky52-bloom-7b1-no-5-gguf