---
language:
- en
- zh
- id
- th
- vi
- ms
- lo
datasets:
- CohereForAI/aya_dataset
- CohereForAI/aya_collection
- Open-Orca/OpenOrca
tags:
- multilingual
- sea
- sailor
- sft
- chat
- instruction
widget:
- text: "如何制作烤鱼?"
  example_title: "Chinese"
- text: "How to bake fish?"
  example_title: "English"
- text: "Bagaimana cara memanggang ikan?"
  example_title: "Malay"
- text: "วิธีย่างปลา?"
  example_title: "Thai"
- text: "Bagaimana membuat bakaran ikan?"
  example_title: "Indonesian"
- text: "Làm thế nào để nướng cá?"
  example_title: "Vietnamese"
license: apache-2.0
base_model: sail/Sailor-7B
---

<div align="center">
<img src="banner_sailor.jpg" width="700"/>
</div>

Sailor is a suite of Open Language Models tailored for South-East Asia (SEA), focusing on languages such as 🇮🇩Indonesian, 🇹🇭Thai, 🇻🇳Vietnamese, 🇲🇾Malay, and 🇱🇦Lao.
Developed with careful data curation, Sailor models are designed to understand and generate text across the diverse linguistic landscape of the SEA region.
Built from [Qwen 1.5](https://huggingface.co/collections/Qwen/qwen15-65c0a2f577b1ecb76d786524), Sailor encompasses models of varying sizes, spanning from 0.5B to 14B, to meet different requirements.
We further fine-tune the base models on open-source datasets to obtain instruction-tuned models, namely Sailor-Chat.
Benchmarking results demonstrate Sailor's proficiency in question answering, commonsense reasoning, and other tasks in SEA languages.

> The logo was generated by MidJourney

## Model Summary

- **Model Collections:** [Base Model & Chat Model](https://huggingface.co/collections/sail/sailor-65e19a749f978976f1959825)
- **Project Website:** [sea-sailor.github.io/blog/sailor1/](https://sea-sailor.github.io/blog/sailor1/)
- **Codebase:** [github.com/sail-sg/sailor-llm](https://github.com/sail-sg/sailor-llm)
- **Technical Report:** [arxiv.org/pdf/2404.03608.pdf](https://arxiv.org/pdf/2404.03608.pdf)

## Training details

Sailor is crafted by continually pre-training from language models like the remarkable Qwen 1.5 models, which already perform well on SEA languages.
The pre-training corpus heavily leverages publicly available corpora, including
[SlimPajama](https://huggingface.co/datasets/cerebras/SlimPajama-627B),
[SkyPile](https://huggingface.co/datasets/Skywork/SkyPile-150B),
[CC100](https://huggingface.co/datasets/cc100) and [MADLAD-400](https://huggingface.co/datasets/allenai/MADLAD-400).
The instruction tuning corpora are all publicly available, including
[aya_collection](https://huggingface.co/datasets/CohereForAI/aya_collection),
[aya_dataset](https://huggingface.co/datasets/CohereForAI/aya_dataset),
and [OpenOrca](https://huggingface.co/datasets/Open-Orca/OpenOrca).

By employing aggressive data deduplication and careful data cleaning on the collected corpus, we have obtained a high-quality dataset spanning various languages.
Through systematic experiments to determine the weights of different languages, Sailor models are trained on 200B to 400B tokens, depending on model size.
This approach boosts their performance on SEA languages while maintaining proficiency in English and Chinese without significant compromise.
Specifically, we continually pre-train the Qwen1.5-0.5B model on 400 billion tokens, and the other models on 200 billion tokens, to obtain the Sailor models.
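
To make the deduplication step concrete, here is a toy sketch of document-level exact deduplication by hashing normalized text. This is purely illustrative and not the actual Sailor cleaning pipeline (see the technical report for the real procedure):

```python
import hashlib

def dedup_exact(docs):
    """Keep the first occurrence of each document, ignoring case and whitespace."""
    seen, kept = set(), []
    for doc in docs:
        # Normalize so trivially different copies hash to the same key.
        key = hashlib.sha256(" ".join(doc.lower().split()).encode("utf-8")).hexdigest()
        if key not in seen:
            seen.add(key)
            kept.append(doc)
    return kept

print(dedup_exact(["Halo  dunia!", "halo dunia!", "Xin chào"]))  # keeps 2 of 3
```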

## Requirements

The code for Sailor is available in the latest Hugging Face `transformers` library, and we advise you to install `transformers>=4.37.0`.
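
As a quick sanity check, the short snippet below (an illustrative addition, not from the original card) verifies the installed version before you load the model:

```python
# Verify that transformers meets the minimum version required by Sailor.
import transformers
from packaging import version  # installed alongside transformers

assert version.parse(transformers.__version__) >= version.parse("4.37.0"), (
    f"Found transformers {transformers.__version__}; "
    "please upgrade with: pip install -U 'transformers>=4.37.0'"
)
```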

## Quickstart

Here is a code snippet showing how to load the tokenizer and model, and how to generate content.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"  # the device to run generation on

model = AutoModelForCausalLM.from_pretrained(
    'sail/Sailor-7B-Chat',
    torch_dtype="auto",
    device_map="auto"
)

tokenizer = AutoTokenizer.from_pretrained('sail/Sailor-7B-Chat')
system_prompt = 'You are a helpful assistant'

# Each prompt asks "Give me a brief introduction to large language models."
# in Indonesian, Vietnamese, and Thai respectively.
prompt = "Beri saya pengenalan singkat tentang model bahasa besar."
# prompt = "Hãy cho tôi một giới thiệu ngắn gọn về mô hình ngôn ngữ lớn."
# prompt = "ให้ฉันแนะนำสั้น ๆ เกี่ยวกับโมเดลภาษาขนาดใหญ่"

# Note: Sailor's chat template uses the "question" role for user turns.
messages = [
    {"role": "system", "content": system_prompt},
    {"role": "question", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)

model_inputs = tokenizer([text], return_tensors="pt").to(device)
input_ids = model_inputs.input_ids

generated_ids = model.generate(
    input_ids,
    max_new_tokens=512,
)

# Strip the prompt tokens so only the newly generated text is decoded.
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
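
If you want tokens printed as they are generated, `transformers` provides `TextStreamer`, which can be passed to `generate`. This is an optional sketch, not from the original card; it reuses the `model`, `tokenizer`, and `model_inputs` defined above:

```python
from transformers import TextStreamer

# Print decoded tokens to stdout as generation proceeds,
# skipping the prompt and any special tokens.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
_ = model.generate(
    model_inputs.input_ids,
    max_new_tokens=512,
    streamer=streamer,
)
```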

## License

Sailor is distributed under the terms of the Apache License 2.0.
There are no restrictions on research or commercial use, but usage must comply with the [Qwen License](https://huggingface.co/Qwen/Qwen1.5-1.8B/blob/main/LICENSE).

## Citation

If you find Sailor useful, please cite our work as follows:

```
@inproceedings{dou-etal-2024-sailor,
    title = "Sailor: Open Language Models for South-{E}ast {A}sia",
    author = "Dou, Longxu and Liu, Qian and Zeng, Guangtao and Guo, Jia and Zhou, Jiahui and Mao, Xin and Jin, Ziqi and Lu, Wei and Lin, Min",
    booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
    year = "2024",
}
```

## Contact Us

If you have any questions, please raise an issue or contact us at [doulx@sea.com](mailto:doulx@sea.com) or [liuqian.sea@gmail.com](mailto:liuqian.sea@gmail.com).