---
language:
- es
license: apache-2.0
library_name: transformers
base_model:
- allenai/Olmo-3-1025-7B
tags:
- multilingual
- synthetic
- sft
datasets:
- ljvmiranda921/PolyglotTeachers-SFT-Synth
---

<img alt="Logo for LTL" src="ltl_logo2.svg" width="240px" style="margin-left:auto; margin-right:auto; display:block">

# Polyglot-OLMo3-7B-SFT-es

This model is a fine-tuned version of [allenai/OLMo-3-1025-7B](https://huggingface.co/allenai/OLMo-3-1025-7B) on Spanish synthetic data, using the best teacher-student combination identified in our paper [Polyglot Teachers: Evaluating Language Models for Multilingual Synthetic Data Generation](https://arxiv.org/abs/2604.11290).

The training data was generated by [google/gemma-3-27b-it](https://huggingface.co/google/gemma-3-27b-it) and is available in the [PolyglotTeachers-SFT-Synth](https://huggingface.co/datasets/ljvmiranda921/PolyglotTeachers-SFT-Synth) dataset.

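If you want to inspect the training data directly, it can be loaded with the `datasets` library. This is a minimal sketch; the default configuration and the `train` split name are assumptions, so check the dataset card for the exact configurations and fields:

```python
from datasets import load_dataset

# Load the Spanish synthetic SFT data
# (default config and "train" split assumed; see the dataset card).
dataset = load_dataset("ljvmiranda921/PolyglotTeachers-SFT-Synth", split="train")

# Print one example to see the schema.
print(dataset[0])
```
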
## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ljvmiranda921/Polyglot-OLMo3-7B-SFT-es"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Apply the chat template and generate a response in Spanish.
messages = [{"role": "user", "content": "Hola, ¿cómo estás?"}]
inputs = tokenizer.apply_chat_template(messages, return_tensors="pt", add_generation_prompt=True)
outputs = model.generate(inputs, max_new_tokens=256)

# The decoded output includes the prompt; strip special tokens for readability.
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
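
Equivalently, the high-level `pipeline` API wraps the same tokenize-template-generate-decode steps; a minimal sketch, assuming a recent `transformers` release that supports chat-style pipeline inputs:

```python
from transformers import pipeline

# pipeline() handles tokenization, chat templating, and decoding internally.
pipe = pipeline("text-generation", model="ljvmiranda921/Polyglot-OLMo3-7B-SFT-es")

messages = [{"role": "user", "content": "Hola, ¿cómo estás?"}]
result = pipe(messages, max_new_tokens=256)

# With chat-style input, "generated_text" holds the full conversation;
# the assistant's reply is the last message.
print(result[0]["generated_text"][-1]["content"])
```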

## Acknowledgements

LJVM and AK acknowledge the support of the UKRI Frontier Grant EP/Y031350/1 ([EQUATE](https://gtr.ukri.org/projects?ref=EP%2FY031350%2F1)).
This work was performed using joint resources provided by the [Cambridge Service for Data Driven Discovery (CSD3)](https://hpc.cam.ac.uk/high-performance-computing) (EP/T022159/1) and the [Isambard AI National AI Research Resource (AIRR)](https://www.bristol.ac.uk/research/centres/bristol-supercomputing/#isambard-ai) (ST/AIRR/I-A-I/1023), and by the Microsoft Research Grant.
LJVM would also like to thank Songbo Hu, Chen Cecilia Liu, Millicent Ochieng, and Felermino Ali for helpful and productive discussions on the project.

## Citation

```bibtex
@misc{miranda2026polyglotteachersevaluatinglanguage,
  title={Polyglot Teachers: Evaluating Language Models for Multilingual Synthetic Data Generation},
  author={Lester James V. Miranda and Ivan Vulić and Anna Korhonen},
  year={2026},
  eprint={2604.11290},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2604.11290},
}
```