---
license: apache-2.0
language:
- ja
pipeline_tag: text-generation
library_name: transformers
base_model:
- Qwen/Qwen2.5-1.5B-Instruct
---

# TinySwallow-1.5B

🤗 [Models](https://huggingface.co/SakanaAI) | 📚 [Paper](https://arxiv.org/abs/2501.16937) | 📝 [Blog](https://sakana.ai/taid-jp/) | 🐦 [Twitter](https://twitter.com/SakanaAILabs)

**TinySwallow-1.5B** is a compact Japanese language model created with *TAID (Temporally Adaptive Interpolated Distillation)*, our new knowledge distillation method.
We used [Qwen2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct) as the teacher model and [Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct) as the student model.
The model has been further pre-trained on Japanese text data to enhance its Japanese language capabilities.

If you are looking for an instruction-following model, check out [TinySwallow-1.5B-Instruct](https://huggingface.co/SakanaAI/TinySwallow-1.5B-Instruct).

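## Usage

Below is a minimal, unofficial generation sketch. It assumes the standard 🤗 Transformers text-generation API declared in the metadata above (`library_name: transformers`) and that the model is published under the repo id `SakanaAI/TinySwallow-1.5B`; the prompt and sampling parameters are purely illustrative.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "SakanaAI/TinySwallow-1.5B"  # assumed repo id

device = "cuda" if torch.cuda.is_available() else "cpu"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).to(device)

# TinySwallow-1.5B is a base (non-instruct) model, so give it plain text
# to continue; use TinySwallow-1.5B-Instruct for chat-style input.
prompt = "むかしむかし、あるところに"  # illustrative prompt: "Once upon a time, in a certain place"
inputs = tokenizer(prompt, return_tensors="pt").to(device)

with torch.no_grad():
    output_ids = model.generate(
        **inputs,
        max_new_tokens=128,
        do_sample=True,
        temperature=0.7,
        top_p=0.9,
    )

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```
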
## Model Details

- **Developed by:** [Sakana AI](https://sakana.ai/) and [Swallow Team](https://swallow-llm.github.io/index.en.html)
- **Model type:** Autoregressive Language Model
- **Language(s):** Japanese
- **License:** [Apache License, Version 2.0](./LICENSE)
- **Paper:** https://arxiv.org/abs/2501.16937
- **Blog:** https://sakana.ai/taid-jp/

## Uses

This model is provided for research and development purposes only and should be considered an experimental prototype.
It is not intended for commercial use or deployment in mission-critical environments.
Use of this model is at the user's own risk, and its performance and outcomes are not guaranteed.
Sakana AI shall not be liable for any direct, indirect, special, incidental, or consequential damages, or any loss arising from the use of this model, regardless of the results obtained.
Users must fully understand the risks associated with using this model and use it at their own discretion.

## Acknowledgement

We would like to thank the developers of the source models for their contributions and for making their work available.

## Authors

* [Sakana AI](https://sakana.ai/)
* [Makoto Shing](https://huggingface.co/mkshing)
* [Taishi Nakamura](https://x.com/Setuna7777_2)
* [Kou Misaki](https://huggingface.co/takkyu2)
* [Takuya Akiba](https://huggingface.co/iwiwi)
* [Swallow Team](https://swallow-llm.github.io/index.en.html)
* [Naoki Okazaki](https://www.chokkan.org/index.ja.html)
* [Youmi Ma](https://www.nlp.c.titech.ac.jp/member/youmi.en.html)
* [Kakeru Hattori](https://aya-se.vercel.app/)
* [Kazuki Fujii](https://x.com/okoge_kaz)
* [Sakae Mizuki](https://s-mizuki-nlp.github.io/)

## Citation

```bibtex
@misc{sakana2025taid,
      title = {TAID: Temporally Adaptive Interpolated Distillation for Efficient Knowledge Transfer in Language Models},
      author = {Makoto Shing and Kou Misaki and Han Bao and Sho Yokoi and Takuya Akiba},
      year = {2025},
      eprint = {2501.16937},
      archivePrefix = {arXiv},
      primaryClass = {cs.LG},
      url = {https://arxiv.org/abs/2501.16937}
}
```