<!--Copyright 2020 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->
*This model was released on 2019-11-05 and added to Hugging Face Transformers on 2020-11-16.*

<div style="float: right;">
    <div class="flex flex-wrap space-x-1">
        <img alt="PyTorch" src="https://img.shields.io/badge/PyTorch-DE3412?style=flat&logo=pytorch&logoColor=white">
        <img alt="SDPA" src="https://img.shields.io/badge/SDPA-DE3412?style=flat&logo=pytorch&logoColor=white">
    </div>
</div>

# XLM-RoBERTa
[XLM-RoBERTa](https://huggingface.co/papers/1911.02116) is a large multilingual masked language model trained on 2.5TB of filtered CommonCrawl data across 100 languages. It shows that scaling the model provides strong performance gains on both high-resource and low-resource languages. The model applies the [RoBERTa](./roberta) pretraining objectives to the [XLM](./xlm) model.

You can find all the original XLM-RoBERTa checkpoints under the [Facebook AI community](https://huggingface.co/FacebookAI) organization.

> [!TIP]
> Click on the XLM-RoBERTa models in the right sidebar for more examples of how to apply XLM-RoBERTa to different cross-lingual tasks like classification, translation, and question answering.

The example below demonstrates how to predict the `<mask>` token with [`Pipeline`], [`AutoModel`], and from the command line.

<hfoptions id="usage">
<hfoption id="Pipeline">
```python
import torch
from transformers import pipeline

pipeline = pipeline(
    task="fill-mask",
    model="FacebookAI/xlm-roberta-base",
    dtype=torch.float16,
    device=0
)
# Example in French
pipeline("Bonjour, je suis un modèle <mask>.")
```
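The pipeline returns the highest-scoring candidates for the masked position. As an optional sketch reusing the same pipeline object as above, you can control how many candidates are returned with the `top_k` argument.

```python
# Return the ten best candidates for the masked token
pipeline("Bonjour, je suis un modèle <mask>.", top_k=10)
```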
</hfoption>
<hfoption id="AutoModel">

```python
from transformers import AutoModelForMaskedLM, AutoTokenizer
import torch

tokenizer = AutoTokenizer.from_pretrained(
    "FacebookAI/xlm-roberta-base"
)
model = AutoModelForMaskedLM.from_pretrained(
    "FacebookAI/xlm-roberta-base",
    dtype=torch.float16,
    device_map="auto",
    attn_implementation="sdpa"
)

# Prepare input
inputs = tokenizer("Bonjour, je suis un modèle <mask>.", return_tensors="pt").to(model.device)

with torch.no_grad():
    outputs = model(**inputs)
    predictions = outputs.logits

masked_index = torch.where(inputs['input_ids'] == tokenizer.mask_token_id)[1]
predicted_token_id = predictions[0, masked_index].argmax(dim=-1)
predicted_token = tokenizer.decode(predicted_token_id)

print(f"The predicted token is: {predicted_token}")
```
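The single best token is often enough, but it can be useful to inspect the alternatives. The snippet below is an optional sketch that reuses the `predictions`, `masked_index`, and `tokenizer` objects from the example above to rank the five highest-scoring candidates.

```python
# Rank the five highest-scoring candidates at the masked position
top = predictions[0, masked_index].softmax(dim=-1).topk(5)
for score, token_id in zip(top.values[0], top.indices[0]):
    print(f"{tokenizer.decode(token_id)}: {score:.3f}")
```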
</hfoption>
<hfoption id="transformers CLI">

```bash
echo -e "Plants create <mask> through a process known as photosynthesis." | transformers run --task fill-mask --model FacebookAI/xlm-roberta-base --device 0
```
</hfoption>
</hfoptions>

Quantization reduces the memory burden of large models by representing the weights in a lower precision. Refer to the [quantization guide](../quantization) overview for more available quantization backends.

The example below uses [bitsandbytes](../quantization/bitsandbytes) to quantize the weights to 4-bits.
```python
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer, BitsAndBytesConfig

quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_quant_type="nf4",  # or "fp4" for float 4-bit quantization
    bnb_4bit_use_double_quant=True,  # use double quantization for extra memory savings
)
tokenizer = AutoTokenizer.from_pretrained("FacebookAI/xlm-roberta-large")
model = AutoModelForMaskedLM.from_pretrained(
    "FacebookAI/xlm-roberta-large",
    dtype=torch.bfloat16,
    device_map="auto",
    attn_implementation="sdpa",
    quantization_config=quantization_config
)

inputs = tokenizer("Bonjour, je suis un modèle <mask>.", return_tensors="pt").to(model.device)

# Masked language models predict the masked token rather than generating new text
with torch.no_grad():
    outputs = model(**inputs)

masked_index = torch.where(inputs["input_ids"] == tokenizer.mask_token_id)[1]
predicted_token_id = outputs.logits[0, masked_index].argmax(dim=-1)
print(f"The predicted token is: {tokenizer.decode(predicted_token_id)}")
```
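Memory savings from 4-bit quantization can be sanity-checked on the loaded model. This is an optional sketch; the exact number depends on the checkpoint and backend.

```python
# Rough size of the quantized weights in GB
print(f"Memory footprint: {model.get_memory_footprint() / 1e9:.2f} GB")
```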
## Notes

- Unlike some XLM models, XLM-RoBERTa doesn't require `lang` tensors to understand which language is being used. It automatically determines the language from the input IDs, as the short sketch below illustrates.
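A minimal sketch of this behavior, assuming the `FacebookAI/xlm-roberta-base` checkpoint: the same pipeline fills masks in different languages without any language identifiers being passed.

```python
from transformers import pipeline

fill_mask = pipeline(task="fill-mask", model="FacebookAI/xlm-roberta-base")

# No `lang` tensors or language codes are supplied; the model infers the language from the tokens
print(fill_mask("Hello, I'm a <mask> model.")[0]["token_str"])
print(fill_mask("Bonjour, je suis un modèle <mask>.")[0]["token_str"])
print(fill_mask("Hallo, ich bin ein <mask> Modell.")[0]["token_str"])
```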
## Resources

A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with XLM-RoBERTa. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.

<PipelineTag pipeline="text-classification"/>

- A blog post on how to [finetune XLM RoBERTa for multiclass classification with Habana Gaudi on AWS](https://www.philschmid.de/habana-distributed-training)
- [`XLMRobertaForSequenceClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/text-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/text_classification.ipynb).
- [Text classification](https://huggingface.co/docs/transformers/tasks/sequence_classification) chapter of the 🤗 Hugging Face Task Guides.
- [Text classification task guide](../tasks/sequence_classification)

<PipelineTag pipeline="token-classification"/>

- [`XLMRobertaForTokenClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/token-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/token_classification.ipynb).
- [Token classification](https://huggingface.co/course/chapter7/2?fw=pt) chapter of the 🤗 Hugging Face Course.
- [Token classification task guide](../tasks/token_classification)

<PipelineTag pipeline="text-generation"/>

- [`XLMRobertaForCausalLM`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/language-modeling) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling.ipynb).
- [Causal language modeling](https://huggingface.co/docs/transformers/tasks/language_modeling) chapter of the 🤗 Hugging Face Task Guides.
- [Causal language modeling task guide](../tasks/language_modeling)

<PipelineTag pipeline="fill-mask"/>

- [`XLMRobertaForMaskedLM`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/language-modeling#robertabertdistilbert-and-masked-language-modeling) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling.ipynb).
- [Masked language modeling](https://huggingface.co/course/chapter7/3?fw=pt) chapter of the 🤗 Hugging Face Course.
- [Masked language modeling task guide](../tasks/masked_language_modeling)

<PipelineTag pipeline="question-answering"/>

- [`XLMRobertaForQuestionAnswering`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/question-answering) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/question_answering.ipynb).
- [Question answering](https://huggingface.co/course/chapter7/7?fw=pt) chapter of the 🤗 Hugging Face Course.
- [Question answering task guide](../tasks/question_answering)

**Multiple choice**

- [`XLMRobertaForMultipleChoice`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/multiple-choice) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/multiple_choice.ipynb).
- [Multiple choice task guide](../tasks/multiple_choice)

**🚀 Deploy**

- A blog post on how to [Deploy Serverless XLM RoBERTa on AWS Lambda](https://www.philschmid.de/multilingual-serverless-xlm-roberta-with-huggingface).
<Tip>

This implementation is the same as RoBERTa. Refer to the [documentation of RoBERTa](./roberta) for usage examples as well as information about the inputs and outputs.

</Tip>

## XLMRobertaConfig

[[autodoc]] XLMRobertaConfig

## XLMRobertaTokenizer

[[autodoc]] XLMRobertaTokenizer
    - build_inputs_with_special_tokens
    - get_special_tokens_mask
    - create_token_type_ids_from_sequences
    - save_vocabulary

## XLMRobertaTokenizerFast

[[autodoc]] XLMRobertaTokenizerFast

## XLMRobertaModel

[[autodoc]] XLMRobertaModel
    - forward

## XLMRobertaForCausalLM

[[autodoc]] XLMRobertaForCausalLM
    - forward

## XLMRobertaForMaskedLM

[[autodoc]] XLMRobertaForMaskedLM
    - forward

## XLMRobertaForSequenceClassification

[[autodoc]] XLMRobertaForSequenceClassification
    - forward

## XLMRobertaForMultipleChoice

[[autodoc]] XLMRobertaForMultipleChoice
    - forward

## XLMRobertaForTokenClassification

[[autodoc]] XLMRobertaForTokenClassification
    - forward

## XLMRobertaForQuestionAnswering

[[autodoc]] XLMRobertaForQuestionAnswering
    - forward