---
license: gemma
base_model:
- xiaomi-research/MiLMMT-46-1B-Pretrain
pipeline_tag: translation
library_name: transformers
---

## Model Description

MiLMMT-46-1B-v0.1 is an LLM-based translation model. It was fine-tuned from MiLMMT-46-1B-Pretrain, a language model built by continually pretraining Gemma3-1B on a mix of 143 billion tokens of monolingual and parallel data spanning 46 languages. Please find more details in our paper: [Scaling Model and Data for Multilingual Machine Translation with Open Large Language Models](https://arxiv.org/abs/2602.11961).

- **Supported Languages**: Arabic, Azerbaijani, Bulgarian, Bengali, Catalan, Czech, Danish, German, Greek, English, Spanish, Persian, Finnish, French, Hebrew, Hindi, Croatian, Hungarian, Indonesian, Italian, Japanese, Kazakh, Khmer, Korean, Lao, Malay, Burmese, Norwegian, Dutch, Polish, Portuguese, Romanian, Russian, Slovak, Slovenian, Swedish, Tamil, Thai, Tagalog, Turkish, Urdu, Uzbek, Vietnamese, Cantonese, Chinese (Simplified), Chinese (Traditional).
- **GitHub**: Please find more details in our [GitHub repository](https://github.com/xiaomi-research/gemmax).
- **Developed by**: Xiaomi Inc.

## Model Performance

![Experimental results](main_results.png)

## Translation Prompt

```text
Translate this from <source language name> to <target language name>:
<source language name>: <source language sentence>
<target language name>:
```

Please use the exact language names listed above when constructing the translation prompt.
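
To avoid formatting mistakes, it can help to assemble the prompt programmatically. Below is a minimal sketch; the `build_prompt` helper and its argument names are our own illustration, not part of the released tooling.

```python
# A minimal sketch of a prompt builder. `build_prompt` is a hypothetical
# helper for illustration only; it simply fills in the template above.
def build_prompt(src_lang: str, tgt_lang: str, sentence: str) -> str:
    """Format a translation prompt using the exact language names listed above."""
    return (
        f"Translate this from {src_lang} to {tgt_lang}:\n"
        f"{src_lang}: {sentence}\n"
        f"{tgt_lang}:"
    )

print(build_prompt("Chinese (Simplified)", "English", "我爱机器翻译"))
```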

## Run the model

#### Using vLLM:

```python
from vllm import LLM, SamplingParams

model_id = "xiaomi-research/MiLMMT-46-1B-v0.1"

model = LLM(model=model_id)
# Greedy decoding (top_k=1, temperature=0) for deterministic translations.
sampling_params = SamplingParams(top_k=1, temperature=0, max_tokens=2048)

text = "Translate this from Chinese (Simplified) to English:\nChinese (Simplified): 我爱机器翻译\nEnglish:"

outputs = model.generate(text, sampling_params)
print(outputs[0].outputs[0].text)
```
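
vLLM's `generate` also accepts a list of prompts, so several sentences can be translated in one batched call. A minimal sketch, reusing `model` and `sampling_params` from the snippet above:

```python
# Batched translation: vLLM schedules all prompts in a single call.
prompts = [
    "Translate this from Chinese (Simplified) to English:\nChinese (Simplified): 我爱机器翻译\nEnglish:",
    "Translate this from English to German:\nEnglish: I love machine translation\nGerman:",
]
outputs = model.generate(prompts, sampling_params)
for output in outputs:
    print(output.outputs[0].text)
```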

#### Using Transformers:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "xiaomi-research/MiLMMT-46-1B-v0.1"

model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

text = "Translate this from Chinese (Simplified) to English:\nChinese (Simplified): 我爱机器翻译\nEnglish:"
# Tokenize without adding special tokens, matching the prompt format above.
inputs = tokenizer(text, add_special_tokens=False, return_tensors="pt")

outputs = model.generate(**inputs, max_new_tokens=1024)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
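
Note that `generate` returns the prompt tokens followed by the continuation, so decoding the full sequence echoes the prompt. One way to print only the translation is to slice off the prompt length first; a minimal sketch, reusing `inputs` and `outputs` from the snippet above:

```python
# Keep only the newly generated tokens (the translation itself).
prompt_len = inputs["input_ids"].shape[1]
translation = tokenizer.decode(outputs[0][prompt_len:], skip_special_tokens=True)
print(translation.strip())
```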

## Citation

```bibtex
@misc{shang2026scalingmodeldatamultilingual,
      title={Scaling Model and Data for Multilingual Machine Translation with Open Large Language Models},
      author={Yuzhe Shang and Pengzhi Gao and Wei Liu and Jian Luan and Jinsong Su},
      year={2026},
      eprint={2602.11961},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2602.11961},
}
```

## Limitations

MiLMMT-46 currently supports only the 46 languages listed above; strong translation performance is not guaranteed for other languages. We will continue to improve the translation quality of MiLMMT-46 in future releases.