---
language:
- en
- zh
- id
- th
- vi
- ms
- lo
datasets:
- cerebras/SlimPajama-627B
- Skywork/SkyPile-150B
- allenai/MADLAD-400
- cc100
tags:
- multilingual
- sea
- sailor
license: apache-2.0
base_model: Qwen/Qwen1.5-4B
inference: false
model-index:
- name: Sailor-4B
  results:
  - task:
      type: text-generation
    dataset:
      name: XQuAD-Thai
      type: XQuAD-Thai
    metrics:
    - name: EM (3-Shot)
      type: EM (3-Shot)
      value: 46.82
    - name: F1 (3-Shot)
      type: F1 (3-Shot)
      value: 63.34
  - task:
      type: text-generation
    dataset:
      name: TyDiQA-Indonesian
      type: TyDiQA-Indonesian
    metrics:
    - name: EM (3-Shot)
      type: EM (3-Shot)
      value: 53.98
    - name: F1 (3-Shot)
      type: F1 (3-Shot)
      value: 73.48
  - task:
      type: text-generation
    dataset:
      name: XQuAD-Vietnamese
      type: XQuAD-Vietnamese
    metrics:
    - name: EM (3-Shot)
      type: EM (3-Shot)
      value: 47.65
    - name: F1 (3-Shot)
      type: F1 (3-Shot)
      value: 67.09
  - task:
      type: text-generation
    dataset:
      name: XCOPA-Thai
      type: XCOPA-Thai
    metrics:
    - name: EM (3-Shot)
      type: EM (3-Shot)
      value: 53.4
  - task:
      type: text-generation
    dataset:
      name: XCOPA-Indonesian
      type: XCOPA-Indonesian
    metrics:
    - name: EM (3-Shot)
      type: EM (3-Shot)
      value: 69.20
  - task:
      type: text-generation
    dataset:
      name: XCOPA-Vietnamese
      type: XCOPA-Vietnamese
    metrics:
    - name: EM (3-Shot)
      type: EM (3-Shot)
      value: 68.20
  - task:
      type: text-generation
    dataset:
      name: M3Exam-Thai
      type: M3Exam-Thai
    metrics:
    - name: EM (3-Shot)
      type: EM (3-Shot)
      value: 27.88
  - task:
      type: text-generation
    dataset:
      name: M3Exam-Indonesian
      type: M3Exam-Indonesian
    metrics:
    - name: EM (3-Shot)
      type: EM (3-Shot)
      value: 31.27
  - task:
      type: text-generation
    dataset:
      name: M3Exam-Vietnamese
      type: M3Exam-Vietnamese
    metrics:
    - name: EM (3-Shot)
      type: EM (3-Shot)
      value: 40.69
  - task:
      type: text-generation
    dataset:
      name: BELEBELE-Thai
      type: BELEBELE-Thai
    metrics:
    - name: EM (3-Shot)
      type: EM (3-Shot)
      value: 36.11
  - task:
      type: text-generation
    dataset:
      name: BELEBELE-Indonesian
      type: BELEBELE-Indonesian
    metrics:
    - name: EM (3-Shot)
      type: EM (3-Shot)
      value: 41.33
  - task:
      type: text-generation
    dataset:
      name: BELEBELE-Vietnamese
      type: BELEBELE-Vietnamese
    metrics:
    - name: EM (3-Shot)
      type: EM (3-Shot)
      value: 38.89
---

<div align="center">
<img src="banner_sailor.jpg" width="700"/>
</div>

Sailor is a suite of Open Language Models tailored for South-East Asia (SEA), focusing on languages such as 🇮🇩Indonesian, 🇹🇭Thai, 🇻🇳Vietnamese, 🇲🇾Malay, and 🇱🇦Lao.
Developed with careful data curation, Sailor models are designed to understand and generate text across the diverse linguistic landscape of the SEA region.
Built from [Qwen 1.5](https://huggingface.co/collections/Qwen/qwen15-65c0a2f577b1ecb76d786524), Sailor encompasses models of varying sizes, spanning from 0.5B to 14B, to suit different requirements.
We further fine-tune the base models on open-source datasets to obtain instruction-tuned models, namely Sailor-Chat.
Benchmarking results demonstrate Sailor's proficiency in question answering, commonsense reasoning, and other tasks in SEA languages.

> The logo was generated by MidJourney

## Model Summary
- **Model Collections:** [Base Model & Chat Model](https://huggingface.co/collections/sail/sailor-65e19a749f978976f1959825)
- **Project Website:** [sea-sailor.github.io/blog/sailor1/](https://sea-sailor.github.io/blog/sailor1/)
- **Codebase:** [github.com/sail-sg/sailor-llm](https://github.com/sail-sg/sailor-llm)
- **Technical Report:** [arxiv.org/pdf/2404.03608.pdf](https://arxiv.org/pdf/2404.03608.pdf)

## Training details

Sailor is crafted by continually pre-training existing language models, namely the remarkable Qwen 1.5 models, which already perform well on SEA languages.
The pre-training corpus heavily leverages publicly available corpora, including
[SlimPajama](https://huggingface.co/datasets/cerebras/SlimPajama-627B),
[SkyPile](https://huggingface.co/datasets/Skywork/SkyPile-150B),
[CC100](https://huggingface.co/datasets/cc100) and [MADLAD-400](https://huggingface.co/datasets/allenai/MADLAD-400).

By employing aggressive data deduplication and careful data cleaning on the collected corpus, we have attained a high-quality dataset spanning various languages.
Through systematic experiments to determine the weights of different languages, Sailor models are trained on 200B to 400B tokens, tailored to different model sizes.
This approach boosts their performance on SEA languages while maintaining proficiency in English and Chinese without significant compromise.
Specifically, we continually pre-train the Qwen1.5-0.5B model on 400 billion tokens, and the other models on 200 billion tokens, to obtain the Sailor models.
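
For intuition only, here is a minimal sketch of what exact-match deduplication looks like: it hashes whitespace-normalized documents and keeps the first copy of each. This toy example is our own illustration, not the actual Sailor pipeline, which is considerably more involved (see the technical report for the full cleaning and deduplication procedure).

```python
import hashlib

def deduplicate_exact(documents):
    """Drop exact duplicate documents by hashing normalized text (illustrative only)."""
    seen = set()
    unique_docs = []
    for doc in documents:
        # Normalize whitespace so trivially different copies still collide.
        digest = hashlib.sha256(" ".join(doc.split()).encode("utf-8")).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique_docs.append(doc)
    return unique_docs

corpus = [
    "Model bahasa adalah model probabilistik.",
    "Model bahasa  adalah model probabilistik.",  # duplicate up to whitespace
    "Sailor berfokus pada bahasa-bahasa Asia Tenggara.",
]
print(deduplicate_exact(corpus))  # keeps only the two unique documents
```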
## Requirements

The code of Sailor is included in the latest Hugging Face transformers, and we advise you to install `transformers>=4.37.0`.
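
As a quick sanity check before loading the model, you can verify the installed version; this is a minimal sketch, assuming the `packaging` library that transformers already depends on:

```python
import transformers
from packaging import version

# Sailor builds on model code that requires transformers >= 4.37.0.
assert version.parse(transformers.__version__) >= version.parse("4.37.0"), (
    f"transformers {transformers.__version__} is too old; "
    "please run: pip install 'transformers>=4.37.0'"
)
```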
## Quickstart

Here is a code snippet showing how to load the tokenizer and model, and how to generate content.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"  # the device to load the model onto

model = AutoModelForCausalLM.from_pretrained("sail/Sailor-4B", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("sail/Sailor-4B")

input_message = "Model bahasa adalah model probabilistik"
# The given Indonesian input translates to "A language model is a probabilistic model".

model_inputs = tokenizer([input_message], return_tensors="pt").to(device)

generated_ids = model.generate(
    model_inputs.input_ids,
    max_new_tokens=64
)

# Strip the prompt tokens so only the newly generated text remains.
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
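
If the greedy continuation above reads as repetitive, sampling is a common alternative. The following variant continues from the same snippet; the `temperature` and `top_p` values are illustrative assumptions, not settings recommended by the Sailor authors:

```python
# Sampling instead of greedy decoding; the values below are illustrative only.
generated_ids = model.generate(
    model_inputs.input_ids,
    max_new_tokens=64,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
)
```

Decoding the result then works exactly as before.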
## License

Sailor is distributed under the terms of the Apache License 2.0.
There are no restrictions on research or commercial use, but usage must comply with the [Qwen License](https://huggingface.co/Qwen/Qwen1.5-1.8B/blob/main/LICENSE).
## Citation

If you find Sailor useful, please cite our work as follows:

```
@inproceedings{dou-etal-2024-sailor,
  title = "Sailor: Open Language Models for South-{E}ast {A}sia",
  author = "Dou, Longxu and Liu, Qian and Zeng, Guangtao and Guo, Jia and Zhou, Jiahui and Mao, Xin and Jin, Ziqi and Lu, Wei and Lin, Min",
  booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
  year = "2024",
}
```
## Contact Us

If you have any questions, please raise an issue or contact us at [doulx@sea.com](mailto:doulx@sea.com) or [liuqian.sea@gmail.com](mailto:liuqian.sea@gmail.com).