init
transformers/examples/pytorch/question-answering/README.md (new file, 183 lines)
@@ -0,0 +1,183 @@
<!---
Copyright 2021 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->

# Question answering

This folder contains several scripts that showcase how to fine-tune a 🤗 Transformers model on a question answering dataset,
like SQuAD.

## Trainer-based scripts

The [`run_qa.py`](https://github.com/huggingface/transformers/blob/main/examples/pytorch/question-answering/run_qa.py),
[`run_qa_beam_search.py`](https://github.com/huggingface/transformers/blob/main/examples/pytorch/question-answering/run_qa_beam_search.py) and [`run_seq2seq_qa.py`](https://github.com/huggingface/transformers/blob/main/examples/pytorch/question-answering/run_seq2seq_qa.py) scripts leverage the 🤗 [Trainer](https://huggingface.co/transformers/main_classes/trainer.html) for fine-tuning.

### Fine-tuning BERT on SQuAD1.0

The [`run_qa.py`](https://github.com/huggingface/transformers/blob/main/examples/pytorch/question-answering/run_qa.py) script
allows you to fine-tune any model from our [hub](https://huggingface.co/models) (as long as its architecture has a `ForQuestionAnswering` version in the library) on a question-answering dataset: SQuAD, any other QA dataset available in the `datasets` library, or your own csv/jsonlines files, as long as they are structured the same way as SQuAD. You might need to tweak the data processing inside the script if your data is structured differently.
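
For custom files, here is a minimal sketch (with made-up data) of a layout the script can consume. It mirrors what the loading code in `run_qa.py` expects: a top-level `"data"` field, plus `question`, `context` and `answers` columns, where `answers` holds parallel `text`/`answer_start` lists:

```python
# Hypothetical example: write a tiny SQuAD-style training file for --train_file.
import json

context = "Transformers is developed by Hugging Face."
answer = "Hugging Face"
example = {
    "id": "0",
    "question": "Who develops Transformers?",
    "context": context,
    # "answer_start" is the character index of the answer inside the context.
    "answers": {"text": [answer], "answer_start": [context.index(answer)]},
}
with open("train.json", "w") as f:
    json.dump({"data": [example]}, f)
```

You would then pass `--train_file train.json` (and a similar `--validation_file`) instead of `--dataset_name`.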

**Note:** This script only works with models that have a fast tokenizer (backed by the 🤗 Tokenizers library), as it
uses special features of those tokenizers. You can check if your favorite model has a fast tokenizer in
[this table](https://huggingface.co/transformers/index.html#supported-frameworks); if it doesn't, you can still use the old version of the script, which can be found [here](https://github.com/huggingface/transformers/tree/main/examples/legacy/question-answering).

Note that if your dataset contains samples with no possible answers (like SQuAD version 2), you need to pass along the flag `--version_2_with_negative`.

This example code fine-tunes BERT on the SQuAD1.0 dataset. It runs in 24 min (with BERT-base) or 68 min (with BERT-large)
on a single Tesla V100 16GB.

```bash
python run_qa.py \
  --model_name_or_path google-bert/bert-base-uncased \
  --dataset_name squad \
  --do_train \
  --do_eval \
  --per_device_train_batch_size 12 \
  --learning_rate 3e-5 \
  --num_train_epochs 2 \
  --max_seq_length 384 \
  --doc_stride 128 \
  --output_dir /tmp/debug_squad/
```

Training with the previously defined hyper-parameters yields the following results:

```bash
f1 = 88.52
exact_match = 81.22
```

### Fine-tuning XLNet with beam search on SQuAD

The [`run_qa_beam_search.py`](https://github.com/huggingface/transformers/blob/main/examples/pytorch/question-answering/run_qa_beam_search.py) script is only meant to fine-tune XLNet, which is a special encoder-only Transformer model. The example code below fine-tunes XLNet on the SQuAD1.0 and SQuAD2.0 datasets.

#### Command for SQuAD1.0:

```bash
python run_qa_beam_search.py \
  --model_name_or_path xlnet/xlnet-large-cased \
  --dataset_name squad \
  --do_train \
  --do_eval \
  --learning_rate 3e-5 \
  --num_train_epochs 2 \
  --max_seq_length 384 \
  --doc_stride 128 \
  --output_dir ./wwm_cased_finetuned_squad/ \
  --per_device_eval_batch_size=4 \
  --per_device_train_batch_size=4 \
  --save_steps 5000
```

#### Command for SQuAD2.0:

```bash
python run_qa_beam_search.py \
  --model_name_or_path xlnet/xlnet-large-cased \
  --dataset_name squad_v2 \
  --do_train \
  --do_eval \
  --version_2_with_negative \
  --learning_rate 3e-5 \
  --num_train_epochs 4 \
  --max_seq_length 384 \
  --doc_stride 128 \
  --output_dir ./wwm_cased_finetuned_squad/ \
  --per_device_eval_batch_size=2 \
  --per_device_train_batch_size=2 \
  --save_steps 5000
```

### Fine-tuning T5 on SQuAD2.0

The [`run_seq2seq_qa.py`](https://github.com/huggingface/transformers/blob/main/examples/pytorch/question-answering/run_seq2seq_qa.py) script is meant for encoder-decoder (also called seq2seq) Transformer models, such as T5 or BART. These
models are generative rather than discriminative: they learn to generate the answer text directly, rather than predicting the start and end positions of the answer tokens.
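
To make the difference concrete, here is a minimal sketch of the generative formulation (the exact prompt format used inside `run_seq2seq_qa.py` may differ; this only illustrates the idea):

```python
# A seq2seq model generates the answer string instead of predicting start/end positions.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google-t5/t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("google-t5/t5-small")

prompt = "question: Who wrote Hamlet? context: Hamlet is a tragedy written by William Shakespeare."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```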

This example code fine-tunes T5 on the SQuAD2.0 dataset.

```bash
python run_seq2seq_qa.py \
  --model_name_or_path google-t5/t5-small \
  --dataset_name squad_v2 \
  --context_column context \
  --question_column question \
  --answer_column answers \
  --do_train \
  --do_eval \
  --per_device_train_batch_size 12 \
  --learning_rate 3e-5 \
  --num_train_epochs 2 \
  --max_seq_length 384 \
  --doc_stride 128 \
  --output_dir /tmp/debug_seq2seq_squad/
```

## Accelerate-based scripts

Based on the scripts `run_qa_no_trainer.py` and `run_qa_beam_search_no_trainer.py`.

Like `run_qa.py` and `run_qa_beam_search.py`, these scripts allow you to fine-tune any of the supported models on
SQuAD or a similar dataset. The main difference is that they expose the bare training loop, so you can quickly experiment and add any customization you like. They offer fewer options than the `Trainer`-based scripts (in exchange, you can easily change the optimizer or dataloader settings directly in the script), but they still run in a distributed setup or on TPUs and support mixed precision, by leveraging the [🤗 `Accelerate`](https://github.com/huggingface/accelerate) library.
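
For orientation, here is a minimal, self-contained sketch of what such a bare Accelerate training loop looks like, on a dummy model and dataset (the real scripts are structured the same way, just with a Transformers model and a tokenized QA dataset):

```python
import torch
from accelerate import Accelerator
from torch.utils.data import DataLoader, TensorDataset

accelerator = Accelerator()  # handles device placement, distributed setup and mixed precision

model = torch.nn.Linear(10, 2)
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)
dataset = TensorDataset(torch.randn(64, 10), torch.randint(0, 2, (64,)))
dataloader = DataLoader(dataset, batch_size=8)

# prepare() wraps the objects so the same loop runs on CPU, one GPU, several GPUs or TPUs.
model, optimizer, dataloader = accelerator.prepare(model, optimizer, dataloader)

model.train()
for inputs, labels in dataloader:
    loss = torch.nn.functional.cross_entropy(model(inputs), labels)
    accelerator.backward(loss)  # replaces the usual loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```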

You can use these scripts normally after installing 🤗 Accelerate:

```bash
pip install git+https://github.com/huggingface/accelerate
```

then

```bash
python run_qa_no_trainer.py \
  --model_name_or_path google-bert/bert-base-uncased \
  --dataset_name squad \
  --max_seq_length 384 \
  --doc_stride 128 \
  --output_dir ~/tmp/debug_squad
```

You can then use your usual launchers to run it in a distributed environment, but the easiest way is to run

```bash
accelerate config
```

and reply to the questions asked. Then

```bash
accelerate test
```

which will check that everything is ready for training. Finally, you can launch training with

```bash
accelerate launch run_qa_no_trainer.py \
  --model_name_or_path google-bert/bert-base-uncased \
  --dataset_name squad \
  --max_seq_length 384 \
  --doc_stride 128 \
  --output_dir ~/tmp/debug_squad
```

This command is the same and will work for:

- a CPU-only setup
- a setup with one GPU
- distributed training with several GPUs (single or multi node)
- training on TPUs

Note that this library is in alpha release, so your feedback is more than welcome if you encounter any problems using it.

transformers/examples/pytorch/question-answering/requirements.txt (new file, 4 lines)
@@ -0,0 +1,4 @@
accelerate >= 0.12.0
datasets >= 1.8.0
torch >= 1.3.0
evaluate

transformers/examples/pytorch/question-answering/run_qa.py (new executable file, 714 lines)
@@ -0,0 +1,714 @@
#!/usr/bin/env python
# Copyright 2020 The HuggingFace Team All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Fine-tuning the library models for question answering using a slightly adapted version of the 🤗 Trainer.
"""
# You can also adapt this script on your own question answering task. Pointers for this are left as comments.

import logging
import os
import sys
import warnings
from dataclasses import dataclass, field
from typing import Optional

import datasets
import evaluate
from datasets import load_dataset
from trainer_qa import QuestionAnsweringTrainer
from utils_qa import postprocess_qa_predictions

import transformers
from transformers import (
    AutoConfig,
    AutoModelForQuestionAnswering,
    AutoTokenizer,
    DataCollatorWithPadding,
    EvalPrediction,
    HfArgumentParser,
    PreTrainedTokenizerFast,
    TrainingArguments,
    default_data_collator,
    set_seed,
)
from transformers.trainer_utils import get_last_checkpoint
from transformers.utils import check_min_version, send_example_telemetry
from transformers.utils.versions import require_version


# Will error if the minimal version of Transformers is not installed. Remove at your own risk.
check_min_version("4.57.0.dev0")

require_version("datasets>=1.8.0", "To fix: pip install -r examples/pytorch/question-answering/requirements.txt")

logger = logging.getLogger(__name__)


@dataclass
class ModelArguments:
    """
    Arguments pertaining to which model/config/tokenizer we are going to fine-tune from.
    """

    model_name_or_path: str = field(
        metadata={"help": "Path to pretrained model or model identifier from huggingface.co/models"}
    )
    config_name: Optional[str] = field(
        default=None, metadata={"help": "Pretrained config name or path if not the same as model_name"}
    )
    tokenizer_name: Optional[str] = field(
        default=None, metadata={"help": "Pretrained tokenizer name or path if not the same as model_name"}
    )
    cache_dir: Optional[str] = field(
        default=None,
        metadata={"help": "Path to directory to store the pretrained models downloaded from huggingface.co"},
    )
    model_revision: str = field(
        default="main",
        metadata={"help": "The specific model version to use (can be a branch name, tag name or commit id)."},
    )
    token: str = field(
        default=None,
        metadata={
            "help": (
                "The token to use as HTTP bearer authorization for remote files. If not specified, will use the token "
                "generated when running `hf auth login` (stored in `~/.huggingface`)."
            )
        },
    )
    trust_remote_code: bool = field(
        default=False,
        metadata={
            "help": (
                "Whether to trust the execution of code from datasets/models defined on the Hub."
                " This option should only be set to `True` for repositories you trust and in which you have read the"
                " code, as it will execute code present on the Hub on your local machine."
            )
        },
    )


@dataclass
class DataTrainingArguments:
    """
    Arguments pertaining to what data we are going to input our model for training and eval.
    """

    dataset_name: Optional[str] = field(
        default=None, metadata={"help": "The name of the dataset to use (via the datasets library)."}
    )
    dataset_config_name: Optional[str] = field(
        default=None, metadata={"help": "The configuration name of the dataset to use (via the datasets library)."}
    )
    train_file: Optional[str] = field(default=None, metadata={"help": "The input training data file (a text file)."})
    validation_file: Optional[str] = field(
        default=None,
        metadata={"help": "An optional input evaluation data file to evaluate the perplexity on (a text file)."},
    )
    test_file: Optional[str] = field(
        default=None,
        metadata={"help": "An optional input test data file to evaluate the perplexity on (a text file)."},
    )
    overwrite_cache: bool = field(
        default=False, metadata={"help": "Overwrite the cached training and evaluation sets"}
    )
    preprocessing_num_workers: Optional[int] = field(
        default=None,
        metadata={"help": "The number of processes to use for the preprocessing."},
    )
    max_seq_length: int = field(
        default=384,
        metadata={
            "help": (
                "The maximum total input sequence length after tokenization. Sequences longer "
                "than this will be truncated, sequences shorter will be padded."
            )
        },
    )
    pad_to_max_length: bool = field(
        default=True,
        metadata={
            "help": (
                "Whether to pad all samples to `max_seq_length`. If False, will pad the samples dynamically when"
                " batching to the maximum length in the batch (which can be faster on GPU but will be slower on TPU)."
            )
        },
    )
    max_train_samples: Optional[int] = field(
        default=None,
        metadata={
            "help": (
                "For debugging purposes or quicker training, truncate the number of training examples to this "
                "value if set."
            )
        },
    )
    max_eval_samples: Optional[int] = field(
        default=None,
        metadata={
            "help": (
                "For debugging purposes or quicker training, truncate the number of evaluation examples to this "
                "value if set."
            )
        },
    )
    max_predict_samples: Optional[int] = field(
        default=None,
        metadata={
            "help": (
                "For debugging purposes or quicker training, truncate the number of prediction examples to this "
                "value if set."
            )
        },
    )
    version_2_with_negative: bool = field(
        default=False, metadata={"help": "If true, some of the examples do not have an answer."}
    )
    null_score_diff_threshold: float = field(
        default=0.0,
        metadata={
            "help": (
                "The threshold used to select the null answer: if the best answer has a score that is less than "
                "the score of the null answer minus this threshold, the null answer is selected for this example. "
                "Only useful when `version_2_with_negative=True`."
            )
        },
    )
    doc_stride: int = field(
        default=128,
        metadata={"help": "When splitting up a long document into chunks, how much stride to take between chunks."},
    )
    n_best_size: int = field(
        default=20,
        metadata={"help": "The total number of n-best predictions to generate when looking for an answer."},
    )
    max_answer_length: int = field(
        default=30,
        metadata={
            "help": (
                "The maximum length of an answer that can be generated. This is needed because the start "
                "and end predictions are not conditioned on one another."
            )
        },
    )

    def __post_init__(self):
        if (
            self.dataset_name is None
            and self.train_file is None
            and self.validation_file is None
            and self.test_file is None
        ):
            raise ValueError("Need either a dataset name or a training/validation/test file.")
        else:
            if self.train_file is not None:
                extension = self.train_file.split(".")[-1]
                assert extension in ["csv", "json"], "`train_file` should be a csv or a json file."
            if self.validation_file is not None:
                extension = self.validation_file.split(".")[-1]
                assert extension in ["csv", "json"], "`validation_file` should be a csv or a json file."
            if self.test_file is not None:
                extension = self.test_file.split(".")[-1]
                assert extension in ["csv", "json"], "`test_file` should be a csv or a json file."


def main():
    # See all possible arguments in src/transformers/training_args.py
    # or by passing the --help flag to this script.
    # We now keep distinct sets of args, for a cleaner separation of concerns.

    parser = HfArgumentParser((ModelArguments, DataTrainingArguments, TrainingArguments))
    if len(sys.argv) == 2 and sys.argv[1].endswith(".json"):
        # If we pass only one argument to the script and it's the path to a json file,
        # let's parse it to get our arguments.
        model_args, data_args, training_args = parser.parse_json_file(json_file=os.path.abspath(sys.argv[1]))
    else:
        model_args, data_args, training_args = parser.parse_args_into_dataclasses()

    # Sending telemetry. Tracking the example usage helps us better allocate resources to maintain them. The
    # information sent is the one passed as arguments along with your Python/PyTorch versions.
    send_example_telemetry("run_qa", model_args, data_args)

    # Setup logging
    logging.basicConfig(
        format="%(asctime)s - %(levelname)s - %(name)s - %(message)s",
        datefmt="%m/%d/%Y %H:%M:%S",
        handlers=[logging.StreamHandler(sys.stdout)],
    )

    if training_args.should_log:
        # The default of training_args.log_level is passive, so we set log level at info here to have that default.
        transformers.utils.logging.set_verbosity_info()

    log_level = training_args.get_process_log_level()
    logger.setLevel(log_level)
    datasets.utils.logging.set_verbosity(log_level)
    transformers.utils.logging.set_verbosity(log_level)
    transformers.utils.logging.enable_default_handler()
    transformers.utils.logging.enable_explicit_format()

    # Log on each process the small summary:
    logger.warning(
        f"Process rank: {training_args.local_rank}, device: {training_args.device}, n_gpu: {training_args.n_gpu}, "
        + f"distributed training: {training_args.parallel_mode.value == 'distributed'}, 16-bits training: {training_args.fp16}"
    )
    logger.info(f"Training/evaluation parameters {training_args}")

    # Detecting last checkpoint.
    last_checkpoint = None
    if os.path.isdir(training_args.output_dir) and training_args.do_train and not training_args.overwrite_output_dir:
        last_checkpoint = get_last_checkpoint(training_args.output_dir)
        if last_checkpoint is None and len(os.listdir(training_args.output_dir)) > 0:
            raise ValueError(
                f"Output directory ({training_args.output_dir}) already exists and is not empty. "
                "Use --overwrite_output_dir to overcome."
            )
        elif last_checkpoint is not None and training_args.resume_from_checkpoint is None:
            logger.info(
                f"Checkpoint detected, resuming training at {last_checkpoint}. To avoid this behavior, change "
                "the `--output_dir` or add `--overwrite_output_dir` to train from scratch."
            )

    # Set seed before initializing model.
    set_seed(training_args.seed)

    # Get the datasets: you can either provide your own CSV/JSON/TXT training and evaluation files (see below)
    # or just provide the name of one of the public datasets available on the hub at https://huggingface.co/datasets/
    # (the dataset will be downloaded automatically from the datasets Hub).
    #
    # For CSV/JSON files, this script will use the column called 'text' or the first column if no column called
    # 'text' is found. You can easily tweak this behavior (see below).
    #
    # In distributed training, the load_dataset function guarantees that only one local process can concurrently
    # download the dataset.
    if data_args.dataset_name is not None:
        # Downloading and loading a dataset from the hub.
        raw_datasets = load_dataset(
            data_args.dataset_name,
            data_args.dataset_config_name,
            cache_dir=model_args.cache_dir,
            token=model_args.token,
            trust_remote_code=model_args.trust_remote_code,
        )
    else:
        data_files = {}
        if data_args.train_file is not None:
            data_files["train"] = data_args.train_file
            extension = data_args.train_file.split(".")[-1]

        if data_args.validation_file is not None:
            data_files["validation"] = data_args.validation_file
            extension = data_args.validation_file.split(".")[-1]
        if data_args.test_file is not None:
            data_files["test"] = data_args.test_file
            extension = data_args.test_file.split(".")[-1]
        raw_datasets = load_dataset(
            extension,
            data_files=data_files,
            field="data",
            cache_dir=model_args.cache_dir,
            token=model_args.token,
        )
    # See more about loading any type of standard or custom dataset (from files, python dict, pandas DataFrame, etc) at
    # https://huggingface.co/docs/datasets/loading_datasets.

    # Load pretrained model and tokenizer
    #
    # Distributed training:
    # The .from_pretrained methods guarantee that only one local process can concurrently
    # download model & vocab.
    config = AutoConfig.from_pretrained(
        model_args.config_name if model_args.config_name else model_args.model_name_or_path,
        cache_dir=model_args.cache_dir,
        revision=model_args.model_revision,
        token=model_args.token,
        trust_remote_code=model_args.trust_remote_code,
    )
    tokenizer = AutoTokenizer.from_pretrained(
        model_args.tokenizer_name if model_args.tokenizer_name else model_args.model_name_or_path,
        cache_dir=model_args.cache_dir,
        use_fast=True,
        revision=model_args.model_revision,
        token=model_args.token,
        trust_remote_code=model_args.trust_remote_code,
    )
    model = AutoModelForQuestionAnswering.from_pretrained(
        model_args.model_name_or_path,
        from_tf=bool(".ckpt" in model_args.model_name_or_path),
        config=config,
        cache_dir=model_args.cache_dir,
        revision=model_args.model_revision,
        token=model_args.token,
        trust_remote_code=model_args.trust_remote_code,
    )

    # Tokenizer check: this script requires a fast tokenizer.
    if not isinstance(tokenizer, PreTrainedTokenizerFast):
        raise TypeError(
            "This example script only works for models that have a fast tokenizer. Check out the big table of models at"
            " https://huggingface.co/transformers/index.html#supported-frameworks to find the model types that meet"
            " this requirement"
        )

    # Preprocessing the datasets.
    # Preprocessing is slightly different for training and evaluation.
    if training_args.do_train:
        column_names = raw_datasets["train"].column_names
    elif training_args.do_eval:
        column_names = raw_datasets["validation"].column_names
    else:
        column_names = raw_datasets["test"].column_names
    question_column_name = "question" if "question" in column_names else column_names[0]
    context_column_name = "context" if "context" in column_names else column_names[1]
    answer_column_name = "answers" if "answers" in column_names else column_names[2]

    # Padding side determines if we do (question|context) or (context|question).
    pad_on_right = tokenizer.padding_side == "right"

    if data_args.max_seq_length > tokenizer.model_max_length:
        logger.warning(
            f"The max_seq_length passed ({data_args.max_seq_length}) is larger than the maximum length for the "
            f"model ({tokenizer.model_max_length}). Using max_seq_length={tokenizer.model_max_length}."
        )
    max_seq_length = min(data_args.max_seq_length, tokenizer.model_max_length)

    # Training preprocessing
    def prepare_train_features(examples):
        # Some of the questions have lots of whitespace on the left, which is not useful and will make the
        # truncation of the context fail (the tokenized question will take a lot of space). So we remove that
        # left whitespace.
        examples[question_column_name] = [q.lstrip() for q in examples[question_column_name]]

        # Tokenize our examples with truncation and maybe padding, but keep the overflows using a stride. This results
        # in one example possibly giving several features when a context is long, each of those features having a
        # context that overlaps a bit the context of the previous feature.
        tokenized_examples = tokenizer(
            examples[question_column_name if pad_on_right else context_column_name],
            examples[context_column_name if pad_on_right else question_column_name],
            truncation="only_second" if pad_on_right else "only_first",
            max_length=max_seq_length,
            stride=data_args.doc_stride,
            return_overflowing_tokens=True,
            return_offsets_mapping=True,
            padding="max_length" if data_args.pad_to_max_length else False,
        )

        # Since one example might give us several features if it has a long context, we need a map from a feature to
        # its corresponding example. This key gives us just that.
        sample_mapping = tokenized_examples.pop("overflow_to_sample_mapping")
        # The offset mappings will give us a map from token to character position in the original context. This will
        # help us compute the start_positions and end_positions.
        offset_mapping = tokenized_examples.pop("offset_mapping")
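        # For example, offset_mapping[i][k] == (12, 17) means token k of feature i covers characters
        # 12..16 of the original string (the end index is exclusive); question tokens map into the
        # question text, context tokens into the context.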

        # Let's label those examples!
        tokenized_examples["start_positions"] = []
        tokenized_examples["end_positions"] = []

        for i, offsets in enumerate(offset_mapping):
            # We will label impossible answers with the index of the CLS token.
            input_ids = tokenized_examples["input_ids"][i]
            if tokenizer.cls_token_id in input_ids:
                cls_index = input_ids.index(tokenizer.cls_token_id)
            elif tokenizer.bos_token_id in input_ids:
                cls_index = input_ids.index(tokenizer.bos_token_id)
            else:
                cls_index = 0

            # Grab the sequence corresponding to that example (to know what is the context and what is the question).
            sequence_ids = tokenized_examples.sequence_ids(i)

            # One example can give several spans, this is the index of the example containing this span of text.
            sample_index = sample_mapping[i]
            answers = examples[answer_column_name][sample_index]
            # If no answers are given, set the cls_index as answer.
            if len(answers["answer_start"]) == 0:
                tokenized_examples["start_positions"].append(cls_index)
                tokenized_examples["end_positions"].append(cls_index)
            else:
                # Start/end character index of the answer in the text.
                start_char = answers["answer_start"][0]
                end_char = start_char + len(answers["text"][0])

                # Start token index of the current span in the text.
                token_start_index = 0
                while sequence_ids[token_start_index] != (1 if pad_on_right else 0):
                    token_start_index += 1

                # End token index of the current span in the text.
                token_end_index = len(input_ids) - 1
                while sequence_ids[token_end_index] != (1 if pad_on_right else 0):
                    token_end_index -= 1

                # Detect if the answer is out of the span (in which case this feature is labeled with the CLS index).
                if not (offsets[token_start_index][0] <= start_char and offsets[token_end_index][1] >= end_char):
                    tokenized_examples["start_positions"].append(cls_index)
                    tokenized_examples["end_positions"].append(cls_index)
                else:
                    # Otherwise move the token_start_index and token_end_index to the two ends of the answer.
                    # Note: we could go after the last offset if the answer is the last word (edge case).
                    while token_start_index < len(offsets) and offsets[token_start_index][0] <= start_char:
                        token_start_index += 1
                    tokenized_examples["start_positions"].append(token_start_index - 1)
                    while offsets[token_end_index][1] >= end_char:
                        token_end_index -= 1
                    tokenized_examples["end_positions"].append(token_end_index + 1)

        return tokenized_examples

    if training_args.do_train:
        if "train" not in raw_datasets:
            raise ValueError("--do_train requires a train dataset")
        train_dataset = raw_datasets["train"]
        if data_args.max_train_samples is not None:
            # We will select a sample from the whole data if the argument is specified.
            max_train_samples = min(len(train_dataset), data_args.max_train_samples)
            train_dataset = train_dataset.select(range(max_train_samples))
        # Create train feature from dataset
        with training_args.main_process_first(desc="train dataset map pre-processing"):
            train_dataset = train_dataset.map(
                prepare_train_features,
                batched=True,
                num_proc=data_args.preprocessing_num_workers,
                remove_columns=column_names,
                load_from_cache_file=not data_args.overwrite_cache,
                desc="Running tokenizer on train dataset",
            )
        if data_args.max_train_samples is not None:
            # The number of samples might increase during feature creation, so we select only the
            # specified max samples again.
            max_train_samples = min(len(train_dataset), data_args.max_train_samples)
            train_dataset = train_dataset.select(range(max_train_samples))

    # Validation preprocessing
    def prepare_validation_features(examples):
        # Some of the questions have lots of whitespace on the left, which is not useful and will make the
        # truncation of the context fail (the tokenized question will take a lot of space). So we remove that
        # left whitespace.
        examples[question_column_name] = [q.lstrip() for q in examples[question_column_name]]

        # Tokenize our examples with truncation and maybe padding, but keep the overflows using a stride. This results
        # in one example possibly giving several features when a context is long, each of those features having a
        # context that overlaps a bit the context of the previous feature.
        tokenized_examples = tokenizer(
            examples[question_column_name if pad_on_right else context_column_name],
            examples[context_column_name if pad_on_right else question_column_name],
            truncation="only_second" if pad_on_right else "only_first",
            max_length=max_seq_length,
            stride=data_args.doc_stride,
            return_overflowing_tokens=True,
            return_offsets_mapping=True,
            padding="max_length" if data_args.pad_to_max_length else False,
        )

        # Since one example might give us several features if it has a long context, we need a map from a feature to
        # its corresponding example. This key gives us just that.
        sample_mapping = tokenized_examples.pop("overflow_to_sample_mapping")

        # For evaluation, we will need to convert our predictions to substrings of the context, so we keep the
        # corresponding example_id and we will store the offset mappings.
        tokenized_examples["example_id"] = []

        for i in range(len(tokenized_examples["input_ids"])):
            # Grab the sequence corresponding to that example (to know what is the context and what is the question).
            sequence_ids = tokenized_examples.sequence_ids(i)
            context_index = 1 if pad_on_right else 0

            # One example can give several spans, this is the index of the example containing this span of text.
            sample_index = sample_mapping[i]
            tokenized_examples["example_id"].append(examples["id"][sample_index])

            # Set to None the offset_mapping that are not part of the context so it's easy to determine if a token
            # position is part of the context or not.
            tokenized_examples["offset_mapping"][i] = [
                (o if sequence_ids[k] == context_index else None)
                for k, o in enumerate(tokenized_examples["offset_mapping"][i])
            ]

        return tokenized_examples

    if training_args.do_eval:
        if "validation" not in raw_datasets:
            raise ValueError("--do_eval requires a validation dataset")
        eval_examples = raw_datasets["validation"]
        if data_args.max_eval_samples is not None:
            # We will select a sample from the whole data.
            max_eval_samples = min(len(eval_examples), data_args.max_eval_samples)
            eval_examples = eval_examples.select(range(max_eval_samples))
        # Validation Feature Creation
        with training_args.main_process_first(desc="validation dataset map pre-processing"):
            eval_dataset = eval_examples.map(
                prepare_validation_features,
                batched=True,
                num_proc=data_args.preprocessing_num_workers,
                remove_columns=column_names,
                load_from_cache_file=not data_args.overwrite_cache,
                desc="Running tokenizer on validation dataset",
            )
        if data_args.max_eval_samples is not None:
            # During feature creation the number of samples might increase, so we select the required samples again.
            max_eval_samples = min(len(eval_dataset), data_args.max_eval_samples)
            eval_dataset = eval_dataset.select(range(max_eval_samples))

    if training_args.do_predict:
        if "test" not in raw_datasets:
            raise ValueError("--do_predict requires a test dataset")
        predict_examples = raw_datasets["test"]
        if data_args.max_predict_samples is not None:
            # We will select a sample from the whole data.
            predict_examples = predict_examples.select(range(data_args.max_predict_samples))
        # Predict Feature Creation
        with training_args.main_process_first(desc="prediction dataset map pre-processing"):
            predict_dataset = predict_examples.map(
                prepare_validation_features,
                batched=True,
                num_proc=data_args.preprocessing_num_workers,
                remove_columns=column_names,
                load_from_cache_file=not data_args.overwrite_cache,
                desc="Running tokenizer on prediction dataset",
            )
        if data_args.max_predict_samples is not None:
            # During feature creation the number of samples might increase, so we select the required samples again.
            max_predict_samples = min(len(predict_dataset), data_args.max_predict_samples)
            predict_dataset = predict_dataset.select(range(max_predict_samples))

    # Data collator
    # We have already padded to max length if the corresponding flag is True, otherwise we need to pad in the data
    # collator.
    data_collator = (
        default_data_collator
        if data_args.pad_to_max_length
        else DataCollatorWithPadding(tokenizer, pad_to_multiple_of=8 if training_args.fp16 else None)
    )
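    # Padding to a multiple of 8 is a throughput optimization: fp16 matrix multiplications on recent
    # NVIDIA GPUs (tensor cores) are fastest when the dimensions are multiples of 8.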

    # Post-processing:
    def post_processing_function(examples, features, predictions, stage="eval"):
        # Post-processing: we match the start logits and end logits to answers in the original context.
        predictions = postprocess_qa_predictions(
            examples=examples,
            features=features,
            predictions=predictions,
            version_2_with_negative=data_args.version_2_with_negative,
            n_best_size=data_args.n_best_size,
            max_answer_length=data_args.max_answer_length,
            null_score_diff_threshold=data_args.null_score_diff_threshold,
            output_dir=training_args.output_dir,
            log_level=log_level,
            prefix=stage,
        )
        # Format the result to the format the metric expects.
        if data_args.version_2_with_negative:
            formatted_predictions = [
                {"id": str(k), "prediction_text": v, "no_answer_probability": 0.0} for k, v in predictions.items()
            ]
        else:
            formatted_predictions = [{"id": str(k), "prediction_text": v} for k, v in predictions.items()]

        references = [{"id": str(ex["id"]), "answers": ex[answer_column_name]} for ex in examples]
        return EvalPrediction(predictions=formatted_predictions, label_ids=references)

    if data_args.version_2_with_negative:
        accepted_best_metrics = ("exact", "f1", "HasAns_exact", "HasAns_f1")
    else:
        accepted_best_metrics = ("exact_match", "f1")

    if training_args.load_best_model_at_end and training_args.metric_for_best_model not in accepted_best_metrics:
        warnings.warn(f"--metric_for_best_model should be set to one of {accepted_best_metrics}")

    metric = evaluate.load(
        "squad_v2" if data_args.version_2_with_negative else "squad", cache_dir=model_args.cache_dir
    )

    def compute_metrics(p: EvalPrediction):
        return metric.compute(predictions=p.predictions, references=p.label_ids)

    # Initialize our Trainer
    trainer = QuestionAnsweringTrainer(
        model=model,
        args=training_args,
        train_dataset=train_dataset if training_args.do_train else None,
        eval_dataset=eval_dataset if training_args.do_eval else None,
        eval_examples=eval_examples if training_args.do_eval else None,
        processing_class=tokenizer,
        data_collator=data_collator,
        post_process_function=post_processing_function,
        compute_metrics=compute_metrics,
    )

    # Training
    if training_args.do_train:
        checkpoint = None
        if training_args.resume_from_checkpoint is not None:
            checkpoint = training_args.resume_from_checkpoint
        elif last_checkpoint is not None:
            checkpoint = last_checkpoint
        train_result = trainer.train(resume_from_checkpoint=checkpoint)
        trainer.save_model()  # Saves the tokenizer too for easy upload

        metrics = train_result.metrics
        max_train_samples = (
            data_args.max_train_samples if data_args.max_train_samples is not None else len(train_dataset)
        )
        metrics["train_samples"] = min(max_train_samples, len(train_dataset))

        trainer.log_metrics("train", metrics)
        trainer.save_metrics("train", metrics)
        trainer.save_state()

    # Evaluation
    if training_args.do_eval:
        logger.info("*** Evaluate ***")
        metrics = trainer.evaluate()

        max_eval_samples = data_args.max_eval_samples if data_args.max_eval_samples is not None else len(eval_dataset)
        metrics["eval_samples"] = min(max_eval_samples, len(eval_dataset))

        trainer.log_metrics("eval", metrics)
        trainer.save_metrics("eval", metrics)

    # Prediction
    if training_args.do_predict:
        logger.info("*** Predict ***")
        results = trainer.predict(predict_dataset, predict_examples)
        metrics = results.metrics

        max_predict_samples = (
            data_args.max_predict_samples if data_args.max_predict_samples is not None else len(predict_dataset)
        )
        metrics["predict_samples"] = min(max_predict_samples, len(predict_dataset))

        trainer.log_metrics("predict", metrics)
        trainer.save_metrics("predict", metrics)

    kwargs = {"finetuned_from": model_args.model_name_or_path, "tasks": "question-answering"}
    if data_args.dataset_name is not None:
        kwargs["dataset_tags"] = data_args.dataset_name
        if data_args.dataset_config_name is not None:
            kwargs["dataset_args"] = data_args.dataset_config_name
            kwargs["dataset"] = f"{data_args.dataset_name} {data_args.dataset_config_name}"
        else:
            kwargs["dataset"] = data_args.dataset_name

    if training_args.push_to_hub:
        trainer.push_to_hub(**kwargs)
    else:
        trainer.create_model_card(**kwargs)


def _mp_fn(index):
    # For xla_spawn (TPUs)
    main()


if __name__ == "__main__":
    main()

transformers/examples/pytorch/question-answering/run_qa_beam_search.py (new executable file, 741 lines)
@@ -0,0 +1,741 @@
#!/usr/bin/env python
# Copyright 2020 The HuggingFace Team All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Fine-tuning XLNet for question answering with beam search using a slightly adapted version of the 🤗 Trainer.
"""
# You can also adapt this script on your own question answering task. Pointers for this are left as comments.

import logging
import os
import sys
from dataclasses import dataclass, field
from typing import Optional

import datasets
import evaluate
from datasets import load_dataset
from trainer_qa import QuestionAnsweringTrainer
from utils_qa import postprocess_qa_predictions_with_beam_search

import transformers
from transformers import (
    DataCollatorWithPadding,
    EvalPrediction,
    HfArgumentParser,
    TrainingArguments,
    XLNetConfig,
    XLNetForQuestionAnswering,
    XLNetTokenizerFast,
    default_data_collator,
    set_seed,
)
from transformers.trainer_utils import get_last_checkpoint
from transformers.utils import check_min_version, send_example_telemetry
from transformers.utils.versions import require_version


# Will error if the minimal version of Transformers is not installed. Remove at your own risk.
check_min_version("4.57.0.dev0")

require_version("datasets>=1.8.0", "To fix: pip install -r examples/pytorch/question-answering/requirements.txt")

logger = logging.getLogger(__name__)


@dataclass
class ModelArguments:
    """
    Arguments pertaining to which model/config/tokenizer we are going to fine-tune from.
    """

    model_name_or_path: str = field(
        metadata={"help": "Path to pretrained model or model identifier from huggingface.co/models"}
    )
    config_name: Optional[str] = field(
        default=None, metadata={"help": "Pretrained config name or path if not the same as model_name"}
    )
    tokenizer_name: Optional[str] = field(
        default=None, metadata={"help": "Pretrained tokenizer name or path if not the same as model_name"}
    )
    cache_dir: Optional[str] = field(
        default=None,
        metadata={"help": "Where do you want to store the pretrained models downloaded from huggingface.co"},
    )
    model_revision: str = field(
        default="main",
        metadata={"help": "The specific model version to use (can be a branch name, tag name or commit id)."},
    )
    token: str = field(
        default=None,
        metadata={
            "help": (
                "The token to use as HTTP bearer authorization for remote files. If not specified, will use the token "
                "generated when running `hf auth login` (stored in `~/.huggingface`)."
            )
        },
    )


@dataclass
class DataTrainingArguments:
    """
    Arguments pertaining to what data we are going to input our model for training and eval.
    """

    dataset_name: Optional[str] = field(
        default=None, metadata={"help": "The name of the dataset to use (via the datasets library)."}
    )
    dataset_config_name: Optional[str] = field(
        default=None, metadata={"help": "The configuration name of the dataset to use (via the datasets library)."}
    )
    trust_remote_code: bool = field(
        default=False,
        metadata={
            "help": (
                "Whether to trust the execution of code from datasets/models defined on the Hub."
                " This option should only be set to `True` for repositories you trust and in which you have read the"
                " code, as it will execute code present on the Hub on your local machine."
            )
        },
    )
    train_file: Optional[str] = field(default=None, metadata={"help": "The input training data file (a text file)."})
    validation_file: Optional[str] = field(
        default=None,
        metadata={"help": "An optional input evaluation data file to evaluate the perplexity on (a text file)."},
    )
    test_file: Optional[str] = field(
        default=None,
        metadata={"help": "An optional input test data file to test the perplexity on (a text file)."},
    )
    overwrite_cache: bool = field(
        default=False, metadata={"help": "Overwrite the cached training and evaluation sets"}
    )
    preprocessing_num_workers: Optional[int] = field(
        default=None,
        metadata={"help": "The number of processes to use for the preprocessing."},
    )
    max_seq_length: int = field(
        default=384,
        metadata={
            "help": (
                "The maximum total input sequence length after tokenization. Sequences longer "
                "than this will be truncated, sequences shorter will be padded."
            )
        },
    )
    pad_to_max_length: bool = field(
        default=True,
        metadata={
            "help": (
                "Whether to pad all samples to `max_seq_length`. If False, will pad the samples dynamically when"
                " batching to the maximum length in the batch (which can be faster on GPU but will be slower on TPU)."
            )
        },
    )
    max_train_samples: Optional[int] = field(
        default=None,
        metadata={
            "help": (
                "For debugging purposes or quicker training, truncate the number of training examples to this "
                "value if set."
            )
        },
    )
    max_eval_samples: Optional[int] = field(
        default=None,
        metadata={
            "help": (
                "For debugging purposes or quicker training, truncate the number of evaluation examples to this "
                "value if set."
            )
        },
    )
    max_predict_samples: Optional[int] = field(
        default=None,
        metadata={
            "help": (
                "For debugging purposes or quicker training, truncate the number of prediction examples to this "
                "value if set."
            )
        },
    )
    version_2_with_negative: bool = field(
        default=False, metadata={"help": "If true, some of the examples do not have an answer."}
    )
    null_score_diff_threshold: float = field(
        default=0.0,
        metadata={
            "help": (
                "The threshold used to select the null answer: if the best answer has a score that is less than "
                "the score of the null answer minus this threshold, the null answer is selected for this example. "
                "Only useful when `version_2_with_negative=True`."
            )
        },
    )
    doc_stride: int = field(
        default=128,
        metadata={"help": "When splitting up a long document into chunks, how much stride to take between chunks."},
    )
    n_best_size: int = field(
        default=20,
        metadata={"help": "The total number of n-best predictions to generate when looking for an answer."},
    )
    max_answer_length: int = field(
        default=30,
        metadata={
            "help": (
                "The maximum length of an answer that can be generated. This is needed because the start "
                "and end predictions are not conditioned on one another."
            )
        },
    )

    def __post_init__(self):
        if (
            self.dataset_name is None
            and self.train_file is None
            and self.validation_file is None
            and self.test_file is None
        ):
            raise ValueError("Need either a dataset name or a training/validation/test file.")
        else:
            if self.train_file is not None:
                extension = self.train_file.split(".")[-1]
                assert extension in ["csv", "json"], "`train_file` should be a csv or a json file."
            if self.validation_file is not None:
                extension = self.validation_file.split(".")[-1]
                assert extension in ["csv", "json"], "`validation_file` should be a csv or a json file."
            if self.test_file is not None:
                extension = self.test_file.split(".")[-1]
                assert extension in ["csv", "json"], "`test_file` should be a csv or a json file."


def main():
    # See all possible arguments in src/transformers/training_args.py
    # or by passing the --help flag to this script.
    # We now keep distinct sets of args, for a cleaner separation of concerns.

    parser = HfArgumentParser((ModelArguments, DataTrainingArguments, TrainingArguments))
    if len(sys.argv) == 2 and sys.argv[1].endswith(".json"):
        # If we pass only one argument to the script and it's the path to a json file,
        # let's parse it to get our arguments.
        model_args, data_args, training_args = parser.parse_json_file(json_file=os.path.abspath(sys.argv[1]))
    else:
        model_args, data_args, training_args = parser.parse_args_into_dataclasses()

    # Sending telemetry. Tracking the example usage helps us better allocate resources to maintain them. The
    # information sent is the one passed as arguments along with your Python/PyTorch versions.
    send_example_telemetry("run_qa_beam_search", model_args, data_args)

    # Setup logging
    logging.basicConfig(
        format="%(asctime)s - %(levelname)s - %(name)s - %(message)s",
        datefmt="%m/%d/%Y %H:%M:%S",
        handlers=[logging.StreamHandler(sys.stdout)],
    )

    if training_args.should_log:
        # The default of training_args.log_level is passive, so we set log level at info here to have that default.
        transformers.utils.logging.set_verbosity_info()

    log_level = training_args.get_process_log_level()
    logger.setLevel(log_level)
    datasets.utils.logging.set_verbosity(log_level)
    transformers.utils.logging.set_verbosity(log_level)
    transformers.utils.logging.enable_default_handler()
    transformers.utils.logging.enable_explicit_format()

    # Log on each process the small summary:
    logger.warning(
        f"Process rank: {training_args.local_rank}, device: {training_args.device}, n_gpu: {training_args.n_gpu}, "
        + f"distributed training: {training_args.parallel_mode.value == 'distributed'}, 16-bits training: {training_args.fp16}"
    )
    logger.info(f"Training/evaluation parameters {training_args}")

    # Detecting last checkpoint.
    last_checkpoint = None
    if os.path.isdir(training_args.output_dir) and training_args.do_train and not training_args.overwrite_output_dir:
        last_checkpoint = get_last_checkpoint(training_args.output_dir)
        if last_checkpoint is None and len(os.listdir(training_args.output_dir)) > 0:
            raise ValueError(
                f"Output directory ({training_args.output_dir}) already exists and is not empty. "
                "Use --overwrite_output_dir to overcome."
            )
        elif last_checkpoint is not None and training_args.resume_from_checkpoint is None:
            logger.info(
                f"Checkpoint detected, resuming training at {last_checkpoint}. To avoid this behavior, change "
                "the `--output_dir` or add `--overwrite_output_dir` to train from scratch."
            )

    # Set seed before initializing model.
    set_seed(training_args.seed)

    # Get the datasets: you can either provide your own CSV/JSON/TXT training and evaluation files (see below)
    # or just provide the name of one of the public datasets available on the hub at https://huggingface.co/datasets/
    # (the dataset will be downloaded automatically from the datasets Hub).
    #
    # For CSV/JSON files, this script will use the column called 'text' or the first column if no column called
    # 'text' is found. You can easily tweak this behavior (see below).
    #
    # In distributed training, the load_dataset function guarantees that only one local process can concurrently
    # download the dataset.
    if data_args.dataset_name is not None:
        # Downloading and loading a dataset from the hub.
        raw_datasets = load_dataset(
            data_args.dataset_name,
            data_args.dataset_config_name,
            cache_dir=model_args.cache_dir,
            token=model_args.token,
            trust_remote_code=data_args.trust_remote_code,
        )
    else:
        data_files = {}
        if data_args.train_file is not None:
            data_files["train"] = data_args.train_file
            extension = data_args.train_file.split(".")[-1]
        if data_args.validation_file is not None:
            data_files["validation"] = data_args.validation_file
            extension = data_args.validation_file.split(".")[-1]
        if data_args.test_file is not None:
            data_files["test"] = data_args.test_file
            extension = data_args.test_file.split(".")[-1]
        raw_datasets = load_dataset(
            extension,
            data_files=data_files,
            field="data",
            cache_dir=model_args.cache_dir,
            token=model_args.token,
        )
    # See more about loading any type of standard or custom dataset (from files, python dict, pandas DataFrame, etc) at
    # https://huggingface.co/docs/datasets/loading_datasets.

    # Load pretrained model and tokenizer
    #
    # Distributed training:
    # The .from_pretrained methods guarantee that only one local process can concurrently
    # download model & vocab.
    config = XLNetConfig.from_pretrained(
        model_args.config_name if model_args.config_name else model_args.model_name_or_path,
        cache_dir=model_args.cache_dir,
        revision=model_args.model_revision,
        token=model_args.token,
    )
    tokenizer = XLNetTokenizerFast.from_pretrained(
        model_args.tokenizer_name if model_args.tokenizer_name else model_args.model_name_or_path,
        cache_dir=model_args.cache_dir,
        revision=model_args.model_revision,
        token=model_args.token,
    )
    model = XLNetForQuestionAnswering.from_pretrained(
        model_args.model_name_or_path,
        from_tf=bool(".ckpt" in model_args.model_name_or_path),
        config=config,
        cache_dir=model_args.cache_dir,
        revision=model_args.model_revision,
        token=model_args.token,
    )

    # Preprocessing the datasets.
    # Preprocessing is slightly different for training and evaluation.
    if training_args.do_train:
        column_names = raw_datasets["train"].column_names
    elif training_args.do_eval:
        column_names = raw_datasets["validation"].column_names
    else:
        column_names = raw_datasets["test"].column_names
    question_column_name = "question" if "question" in column_names else column_names[0]
    context_column_name = "context" if "context" in column_names else column_names[1]
    answer_column_name = "answers" if "answers" in column_names else column_names[2]

    # Padding side determines if we do (question|context) or (context|question).
    pad_on_right = tokenizer.padding_side == "right"
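    # XLNet's tokenizer pads on the left, so here `pad_on_right` is normally False: we tokenize
    # (context, question) pairs and truncate the first sequence ("only_first"), i.e. the context.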

    if data_args.max_seq_length > tokenizer.model_max_length:
        logger.warning(
            f"The max_seq_length passed ({data_args.max_seq_length}) is larger than the maximum length for the "
            f"model ({tokenizer.model_max_length}). Using max_seq_length={tokenizer.model_max_length}."
        )
    max_seq_length = min(data_args.max_seq_length, tokenizer.model_max_length)

    # Training preprocessing
    def prepare_train_features(examples):
        # Some of the questions have lots of whitespace on the left, which is not useful and will make the
        # truncation of the context fail (the tokenized question will take a lot of space). So we remove that
        # left whitespace.
        examples[question_column_name] = [q.lstrip() for q in examples[question_column_name]]

        # Tokenize our examples with truncation and maybe padding, but keep the overflows using a stride. This results
        # in one example possibly giving several features when a context is long, each of those features having a
        # context that overlaps a bit with the context of the previous feature.
        tokenized_examples = tokenizer(
            examples[question_column_name if pad_on_right else context_column_name],
            examples[context_column_name if pad_on_right else question_column_name],
            truncation="only_second" if pad_on_right else "only_first",
            max_length=max_seq_length,
            stride=data_args.doc_stride,
            return_overflowing_tokens=True,
            return_offsets_mapping=True,
            return_special_tokens_mask=True,
            return_token_type_ids=True,
            padding="max_length",
        )

        # Since one example might give us several features if it has a long context, we need a map from a feature to
        # its corresponding example. This key gives us just that.
        sample_mapping = tokenized_examples.pop("overflow_to_sample_mapping")
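        # Illustrative example (hypothetical numbers): with max_seq_length=384 and doc_stride=128, an example
        # whose context is ~800 tokens long is split into several overlapping features, and
        # `overflow_to_sample_mapping` then looks like [0, 0, 0, 1, ...]: three features built from example 0,
        # followed by the features of example 1, and so on.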
        # The offset mappings will give us a map from token to character position in the original context. This will
        # help us compute the start_positions and end_positions.
        offset_mapping = tokenized_examples.pop("offset_mapping")
        # The special tokens will help us build the p_mask (which indicates the tokens that can't be in answers).
        special_tokens = tokenized_examples.pop("special_tokens_mask")

        # Let's label those examples!
        tokenized_examples["start_positions"] = []
        tokenized_examples["end_positions"] = []
        tokenized_examples["is_impossible"] = []
        tokenized_examples["cls_index"] = []
        tokenized_examples["p_mask"] = []

        for i, offsets in enumerate(offset_mapping):
            # We will label impossible answers with the index of the CLS token.
            input_ids = tokenized_examples["input_ids"][i]
            if tokenizer.cls_token_id in input_ids:
                cls_index = input_ids.index(tokenizer.cls_token_id)
            elif tokenizer.bos_token_id in input_ids:
                cls_index = input_ids.index(tokenizer.bos_token_id)
            else:
                cls_index = 0
            tokenized_examples["cls_index"].append(cls_index)

            # Grab the sequence corresponding to that example (to know what the context and the question are).
            sequence_ids = tokenized_examples["token_type_ids"][i]
            for k, s in enumerate(special_tokens[i]):
                if s:
                    sequence_ids[k] = 3
            context_idx = 1 if pad_on_right else 0

            # Build the p_mask: non-special context tokens get 0.0, the others get 1.0.
            # The CLS token also gets 0.0 (so the model can predict empty answers).
            tokenized_examples["p_mask"].append(
                [
                    0.0 if (not special_tokens[i][k] and s == context_idx) or k == cls_index else 1.0
                    for k, s in enumerate(sequence_ids)
                ]
            )
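            # Schematically (for a hypothetical feature): question tokens and non-CLS special tokens get 1.0,
            # meaning they are masked out as candidate answer positions, while context tokens and the CLS
            # position get 0.0 and remain available to the model.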

            # One example can give several spans; this is the index of the example containing this span of text.
            sample_index = sample_mapping[i]
            answers = examples[answer_column_name][sample_index]
            # If no answers are given, set the cls_index as answer.
            if len(answers["answer_start"]) == 0:
                tokenized_examples["start_positions"].append(cls_index)
                tokenized_examples["end_positions"].append(cls_index)
                tokenized_examples["is_impossible"].append(1.0)
            else:
                # Start/end character index of the answer in the text.
                start_char = answers["answer_start"][0]
                end_char = start_char + len(answers["text"][0])
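                # Worked example (hypothetical data): for answers = {"text": ["Denver Broncos"], "answer_start": [177]},
                # start_char = 177 and end_char = 177 + 14 = 191, and the loops below locate the tokens whose
                # character offsets cover the span [177, 191).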

                # Start token index of the current span in the text.
                token_start_index = 0
                while sequence_ids[token_start_index] != context_idx:
                    token_start_index += 1

                # End token index of the current span in the text.
                token_end_index = len(input_ids) - 1
                while sequence_ids[token_end_index] != context_idx:
                    token_end_index -= 1
                # Detect if the answer is out of the span (in which case this feature is labeled with the CLS index).
                if not (offsets[token_start_index][0] <= start_char and offsets[token_end_index][1] >= end_char):
                    tokenized_examples["start_positions"].append(cls_index)
                    tokenized_examples["end_positions"].append(cls_index)
                    tokenized_examples["is_impossible"].append(1.0)
                else:
                    # Otherwise move the token_start_index and token_end_index to the two ends of the answer.
                    # Note: we could go after the last offset if the answer is the last word (edge case).
                    while token_start_index < len(offsets) and offsets[token_start_index][0] <= start_char:
                        token_start_index += 1
                    tokenized_examples["start_positions"].append(token_start_index - 1)
                    while offsets[token_end_index][1] >= end_char:
                        token_end_index -= 1
                    tokenized_examples["end_positions"].append(token_end_index + 1)
                    tokenized_examples["is_impossible"].append(0.0)

        return tokenized_examples

    if training_args.do_train:
        if "train" not in raw_datasets:
            raise ValueError("--do_train requires a train dataset")
        train_dataset = raw_datasets["train"]
        if data_args.max_train_samples is not None:
            # Select a subset of the dataset; this helps decrease processing time.
            max_train_samples = min(len(train_dataset), data_args.max_train_samples)
            train_dataset = train_dataset.select(range(max_train_samples))
        # Create training features
        with training_args.main_process_first(desc="train dataset map pre-processing"):
            train_dataset = train_dataset.map(
                prepare_train_features,
                batched=True,
                num_proc=data_args.preprocessing_num_workers,
                remove_columns=column_names,
                load_from_cache_file=not data_args.overwrite_cache,
                desc="Running tokenizer on train dataset",
            )
        if data_args.max_train_samples is not None:
            # Select samples again, since feature creation might have increased the number of features.
            max_train_samples = min(len(train_dataset), data_args.max_train_samples)
            train_dataset = train_dataset.select(range(max_train_samples))

    # Validation preprocessing
    def prepare_validation_features(examples):
        # Tokenize our examples with truncation and maybe padding, but keep the overflows using a stride. This results
        # in one example possibly giving several features when a context is long, each of those features having a
        # context that overlaps a bit with the context of the previous feature.
        tokenized_examples = tokenizer(
            examples[question_column_name if pad_on_right else context_column_name],
            examples[context_column_name if pad_on_right else question_column_name],
            truncation="only_second" if pad_on_right else "only_first",
            max_length=max_seq_length,
            stride=data_args.doc_stride,
            return_overflowing_tokens=True,
            return_offsets_mapping=True,
            return_special_tokens_mask=True,
            return_token_type_ids=True,
            padding="max_length",
        )

        # Since one example might give us several features if it has a long context, we need a map from a feature to
        # its corresponding example. This key gives us just that.
        sample_mapping = tokenized_examples.pop("overflow_to_sample_mapping")

        # The special tokens will help us build the p_mask (which indicates the tokens that can't be in answers).
        special_tokens = tokenized_examples.pop("special_tokens_mask")

        # For evaluation, we will need to convert our predictions to substrings of the context, so we keep the
        # corresponding example_id and we will store the offset mappings.
        tokenized_examples["example_id"] = []

        # We still provide the index of the CLS token and the p_mask to the model, but not the is_impossible label.
        tokenized_examples["cls_index"] = []
        tokenized_examples["p_mask"] = []

        for i, input_ids in enumerate(tokenized_examples["input_ids"]):
            # Find the CLS token in the input ids.
            if tokenizer.cls_token_id in input_ids:
                cls_index = input_ids.index(tokenizer.cls_token_id)
            elif tokenizer.bos_token_id in input_ids:
                cls_index = input_ids.index(tokenizer.bos_token_id)
            else:
                cls_index = 0
            tokenized_examples["cls_index"].append(cls_index)

            # Grab the sequence corresponding to that example (to know what the context and the question are).
            sequence_ids = tokenized_examples["token_type_ids"][i]
            for k, s in enumerate(special_tokens[i]):
                if s:
                    sequence_ids[k] = 3
            context_idx = 1 if pad_on_right else 0

            # Build the p_mask: non-special context tokens get 0.0, the others get 1.0.
            tokenized_examples["p_mask"].append(
                [
                    0.0 if (not special_tokens[i][k] and s == context_idx) or k == cls_index else 1.0
                    for k, s in enumerate(sequence_ids)
                ]
            )

            # One example can give several spans; this is the index of the example containing this span of text.
            sample_index = sample_mapping[i]
            tokenized_examples["example_id"].append(examples["id"][sample_index])

            # Set the offset_mapping entries that are not part of the context to None, so it is easy to determine
            # whether a token position is part of the context or not.
            tokenized_examples["offset_mapping"][i] = [
                (o if sequence_ids[k] == context_idx else None)
                for k, o in enumerate(tokenized_examples["offset_mapping"][i])
            ]

        return tokenized_examples

    if training_args.do_eval:
        if "validation" not in raw_datasets:
            raise ValueError("--do_eval requires a validation dataset")
        eval_examples = raw_datasets["validation"]
        if data_args.max_eval_samples is not None:
            # Select a subset of evaluation samples.
            max_eval_samples = min(len(eval_examples), data_args.max_eval_samples)
            eval_examples = eval_examples.select(range(max_eval_samples))
        # Create features from the eval dataset
        with training_args.main_process_first(desc="validation dataset map pre-processing"):
            eval_dataset = eval_examples.map(
                prepare_validation_features,
                batched=True,
                num_proc=data_args.preprocessing_num_workers,
                remove_columns=column_names,
                load_from_cache_file=not data_args.overwrite_cache,
                desc="Running tokenizer on validation dataset",
            )
        if data_args.max_eval_samples is not None:
            # Select samples again, since feature creation might have increased the number of samples.
            max_eval_samples = min(len(eval_dataset), data_args.max_eval_samples)
            eval_dataset = eval_dataset.select(range(max_eval_samples))

    if training_args.do_predict:
        if "test" not in raw_datasets:
            raise ValueError("--do_predict requires a test dataset")
        predict_examples = raw_datasets["test"]
        if data_args.max_predict_samples is not None:
            # Select a subset of samples from the whole dataset.
            predict_examples = predict_examples.select(range(data_args.max_predict_samples))
        # Test feature creation
        with training_args.main_process_first(desc="prediction dataset map pre-processing"):
            predict_dataset = predict_examples.map(
                prepare_validation_features,
                batched=True,
                num_proc=data_args.preprocessing_num_workers,
                remove_columns=column_names,
                load_from_cache_file=not data_args.overwrite_cache,
                desc="Running tokenizer on prediction dataset",
            )
        if data_args.max_predict_samples is not None:
            # Feature creation might have increased the number of samples, so we select the required samples again.
            max_predict_samples = min(len(predict_dataset), data_args.max_predict_samples)
            predict_dataset = predict_dataset.select(range(max_predict_samples))

    # Data collator
    # We have already padded to max length if the corresponding flag is True, otherwise we need to pad in the data
    # collator.
    data_collator = (
        default_data_collator
        if data_args.pad_to_max_length
        else DataCollatorWithPadding(tokenizer, pad_to_multiple_of=8 if training_args.fp16 else None)
    )
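    # Note: `pad_to_multiple_of=8` rounds each dynamically padded batch up to a multiple of 8 tokens, which
    # allows NVIDIA tensor cores to be used efficiently when training with fp16 mixed precision.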

    # Post-processing:
    def post_processing_function(examples, features, predictions, stage="eval"):
        # Post-processing: we match the start logits and end logits to answers in the original context.
        predictions, scores_diff_json = postprocess_qa_predictions_with_beam_search(
            examples=examples,
            features=features,
            predictions=predictions,
            version_2_with_negative=data_args.version_2_with_negative,
            n_best_size=data_args.n_best_size,
            max_answer_length=data_args.max_answer_length,
            start_n_top=model.config.start_n_top,
            end_n_top=model.config.end_n_top,
            output_dir=training_args.output_dir,
            log_level=log_level,
            prefix=stage,
        )
        # Format the result to the format the metric expects.
        if data_args.version_2_with_negative:
            formatted_predictions = [
                {"id": k, "prediction_text": v, "no_answer_probability": scores_diff_json[k]}
                for k, v in predictions.items()
            ]
        else:
            formatted_predictions = [{"id": k, "prediction_text": v} for k, v in predictions.items()]

        references = [{"id": ex["id"], "answers": ex[answer_column_name]} for ex in examples]
        return EvalPrediction(predictions=formatted_predictions, label_ids=references)
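        # For reference, the squad_v2 metric expects prediction entries shaped like (values are illustrative):
        #   {"id": "56be4db0acb8001400a502ec", "prediction_text": "Denver Broncos", "no_answer_probability": 0.02}
        # and reference entries shaped like {"id": ..., "answers": {"text": [...], "answer_start": [...]}}.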

    metric = evaluate.load(
        "squad_v2" if data_args.version_2_with_negative else "squad", cache_dir=model_args.cache_dir
    )

    def compute_metrics(p: EvalPrediction):
        return metric.compute(predictions=p.predictions, references=p.label_ids)

    # Initialize our Trainer
    trainer = QuestionAnsweringTrainer(
        model=model,
        args=training_args,
        train_dataset=train_dataset if training_args.do_train else None,
        eval_dataset=eval_dataset if training_args.do_eval else None,
        eval_examples=eval_examples if training_args.do_eval else None,
        processing_class=tokenizer,
        data_collator=data_collator,
        post_process_function=post_processing_function,
        compute_metrics=compute_metrics,
    )

    # Training
    if training_args.do_train:
        checkpoint = None
        if training_args.resume_from_checkpoint is not None:
            checkpoint = training_args.resume_from_checkpoint
        elif last_checkpoint is not None:
            checkpoint = last_checkpoint
        train_result = trainer.train(resume_from_checkpoint=checkpoint)
        trainer.save_model()  # Saves the tokenizer too for easy upload

        metrics = train_result.metrics

        max_train_samples = (
            data_args.max_train_samples if data_args.max_train_samples is not None else len(train_dataset)
        )
        metrics["train_samples"] = min(max_train_samples, len(train_dataset))

        trainer.log_metrics("train", metrics)
        trainer.save_metrics("train", metrics)
        trainer.save_state()

    # Evaluation
    if training_args.do_eval:
        logger.info("*** Evaluate ***")
        metrics = trainer.evaluate()

        max_eval_samples = data_args.max_eval_samples if data_args.max_eval_samples is not None else len(eval_dataset)
        metrics["eval_samples"] = min(max_eval_samples, len(eval_dataset))

        trainer.log_metrics("eval", metrics)
        trainer.save_metrics("eval", metrics)

    # Prediction
    if training_args.do_predict:
        logger.info("*** Predict ***")
        results = trainer.predict(predict_dataset, predict_examples)
        metrics = results.metrics

        max_predict_samples = (
            data_args.max_predict_samples if data_args.max_predict_samples is not None else len(predict_dataset)
        )
        metrics["predict_samples"] = min(max_predict_samples, len(predict_dataset))

        trainer.log_metrics("predict", metrics)
        trainer.save_metrics("predict", metrics)

    kwargs = {"finetuned_from": model_args.model_name_or_path, "tasks": "question-answering"}
    if data_args.dataset_name is not None:
        kwargs["dataset_tags"] = data_args.dataset_name
        if data_args.dataset_config_name is not None:
            kwargs["dataset_args"] = data_args.dataset_config_name
            kwargs["dataset"] = f"{data_args.dataset_name} {data_args.dataset_config_name}"
        else:
            kwargs["dataset"] = data_args.dataset_name

    if training_args.push_to_hub:
        trainer.push_to_hub(**kwargs)
    else:
        trainer.create_model_card(**kwargs)


def _mp_fn(index):
    # For xla_spawn (TPUs)
    main()


if __name__ == "__main__":
    main()
1043
transformers/examples/pytorch/question-answering/run_qa_no_trainer.py
Executable file
File diff suppressed because it is too large. Load Diff
742
transformers/examples/pytorch/question-answering/run_seq2seq_qa.py
Normal file
@@ -0,0 +1,742 @@
#!/usr/bin/env python
# Copyright 2021 The HuggingFace Team All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Fine-tuning the library's seq2seq models for question answering using the 🤗 Seq2SeqTrainer.
"""
# You can also adapt this script for your own question answering task. Pointers for this are left as comments.

import logging
import os
import sys
from dataclasses import dataclass, field
from typing import Optional

import datasets
import evaluate
import numpy as np
from datasets import load_dataset
from trainer_seq2seq_qa import QuestionAnsweringSeq2SeqTrainer

import transformers
from transformers import (
    AutoConfig,
    AutoModelForSeq2SeqLM,
    AutoTokenizer,
    DataCollatorForSeq2Seq,
    HfArgumentParser,
    Seq2SeqTrainingArguments,
    set_seed,
)
from transformers.trainer_utils import EvalLoopOutput, EvalPrediction, get_last_checkpoint
from transformers.utils import check_min_version, send_example_telemetry
from transformers.utils.versions import require_version


# Will error if the minimal version of Transformers is not installed. Remove at your own risk.
check_min_version("4.57.0.dev0")

require_version("datasets>=1.8.0", "To fix: pip install -r examples/pytorch/question-answering/requirements.txt")

logger = logging.getLogger(__name__)


@dataclass
class ModelArguments:
    """
    Arguments pertaining to which model/config/tokenizer we are going to fine-tune from.
    """

    model_name_or_path: str = field(
        metadata={"help": "Path to pretrained model or model identifier from huggingface.co/models"}
    )
    config_name: Optional[str] = field(
        default=None, metadata={"help": "Pretrained config name or path if not the same as model_name"}
    )
    tokenizer_name: Optional[str] = field(
        default=None, metadata={"help": "Pretrained tokenizer name or path if not the same as model_name"}
    )
    cache_dir: Optional[str] = field(
        default=None,
        metadata={"help": "Path to directory to store the pretrained models downloaded from huggingface.co"},
    )
    use_fast_tokenizer: bool = field(
        default=True,
        metadata={"help": "Whether to use one of the fast tokenizers (backed by the tokenizers library) or not."},
    )
    model_revision: str = field(
        default="main",
        metadata={"help": "The specific model version to use (can be a branch name, tag name or commit id)."},
    )
    token: str = field(
        default=None,
        metadata={
            "help": (
                "The token to use as HTTP bearer authorization for remote files. If not specified, will use the token "
                "generated when running `hf auth login` (stored in `~/.huggingface`)."
            )
        },
    )
    trust_remote_code: bool = field(
        default=False,
        metadata={
            "help": (
                "Whether to trust the execution of code from datasets/models defined on the Hub."
                " This option should only be set to `True` for repositories you trust and in which you have read the"
                " code, as it will execute code present on the Hub on your local machine."
            )
        },
    )


@dataclass
class DataTrainingArguments:
    """
    Arguments pertaining to what data we are going to input our model for training and eval.
    """

    dataset_name: Optional[str] = field(
        default=None, metadata={"help": "The name of the dataset to use (via the datasets library)."}
    )
    dataset_config_name: Optional[str] = field(
        default=None, metadata={"help": "The configuration name of the dataset to use (via the datasets library)."}
    )
    context_column: Optional[str] = field(
        default="context",
        metadata={"help": "The name of the column in the datasets containing the contexts (for question answering)."},
    )
    question_column: Optional[str] = field(
        default="question",
        metadata={"help": "The name of the column in the datasets containing the questions (for question answering)."},
    )
    answer_column: Optional[str] = field(
        default="answers",
        metadata={"help": "The name of the column in the datasets containing the answers (for question answering)."},
    )
    train_file: Optional[str] = field(default=None, metadata={"help": "The input training data file (a text file)."})
    validation_file: Optional[str] = field(
        default=None,
        metadata={"help": "An optional input evaluation data file to evaluate the perplexity on (a text file)."},
    )
    test_file: Optional[str] = field(
        default=None,
        metadata={"help": "An optional input test data file to evaluate the perplexity on (a text file)."},
    )
    overwrite_cache: bool = field(
        default=False, metadata={"help": "Overwrite the cached training and evaluation sets"}
    )
    preprocessing_num_workers: Optional[int] = field(
        default=None,
        metadata={"help": "The number of processes to use for the preprocessing."},
    )
    max_seq_length: int = field(
        default=384,
        metadata={
            "help": (
                "The maximum total input sequence length after tokenization. Sequences longer "
                "than this will be truncated, sequences shorter will be padded."
            )
        },
    )
    max_answer_length: int = field(
        default=30,
        metadata={
            "help": (
                "The maximum length of an answer that can be generated. This is needed because the start "
                "and end predictions are not conditioned on one another."
            )
        },
    )
    val_max_answer_length: Optional[int] = field(
        default=None,
        metadata={
            "help": (
                "The maximum total sequence length for validation target text after tokenization. Sequences longer "
                "than this will be truncated, sequences shorter will be padded. Will default to `max_answer_length`. "
                "This argument is also used to override the ``max_length`` param of ``model.generate``, which is used "
                "during ``evaluate`` and ``predict``."
            )
        },
    )
    pad_to_max_length: bool = field(
        default=True,
        metadata={
            "help": (
                "Whether to pad all samples to `max_seq_length`. If False, will pad the samples dynamically when"
                " batching to the maximum length in the batch (which can be faster on GPU but will be slower on TPU)."
            )
        },
    )
    max_train_samples: Optional[int] = field(
        default=None,
        metadata={
            "help": (
                "For debugging purposes or quicker training, truncate the number of training examples to this "
                "value if set."
            )
        },
    )
    max_eval_samples: Optional[int] = field(
        default=None,
        metadata={
            "help": (
                "For debugging purposes or quicker training, truncate the number of evaluation examples to this "
                "value if set."
            )
        },
    )
    max_predict_samples: Optional[int] = field(
        default=None,
        metadata={
            "help": (
                "For debugging purposes or quicker training, truncate the number of prediction examples to this "
                "value if set."
            )
        },
    )
    version_2_with_negative: bool = field(
        default=False, metadata={"help": "If true, some of the examples do not have an answer."}
    )
    null_score_diff_threshold: float = field(
        default=0.0,
        metadata={
            "help": (
                "The threshold used to select the null answer: if the best answer has a score that is less than "
                "the score of the null answer minus this threshold, the null answer is selected for this example. "
                "Only useful when `version_2_with_negative=True`."
            )
        },
    )
    doc_stride: int = field(
        default=128,
        metadata={"help": "When splitting up a long document into chunks, how much stride to take between chunks."},
    )
    n_best_size: int = field(
        default=20,
        metadata={"help": "The total number of n-best predictions to generate when looking for an answer."},
    )
    num_beams: Optional[int] = field(
        default=None,
        metadata={
            "help": (
                "Number of beams to use for evaluation. This argument will be passed to ``model.generate``, "
                "which is used during ``evaluate`` and ``predict``."
            )
        },
    )
    ignore_pad_token_for_loss: bool = field(
        default=True,
        metadata={
            "help": "Whether to ignore the tokens corresponding to padded labels in the loss computation or not."
        },
    )

    def __post_init__(self):
        if (
            self.dataset_name is None
            and self.train_file is None
            and self.validation_file is None
            and self.test_file is None
        ):
            raise ValueError("Need either a dataset name or a training/validation file/test_file.")
        else:
            if self.train_file is not None:
                extension = self.train_file.split(".")[-1]
                assert extension in ["csv", "json"], "`train_file` should be a csv or a json file."
            if self.validation_file is not None:
                extension = self.validation_file.split(".")[-1]
                assert extension in ["csv", "json"], "`validation_file` should be a csv or a json file."
            if self.test_file is not None:
                extension = self.test_file.split(".")[-1]
                assert extension in ["csv", "json"], "`test_file` should be a csv or a json file."
        if self.val_max_answer_length is None:
            self.val_max_answer_length = self.max_answer_length


question_answering_column_name_mapping = {
    "squad_v2": ("question", "context", "answer"),
}


def main():
    # See all possible arguments in src/transformers/training_args.py
    # or by passing the --help flag to this script.
    # We now keep distinct sets of args, for a cleaner separation of concerns.

    parser = HfArgumentParser((ModelArguments, DataTrainingArguments, Seq2SeqTrainingArguments))
    if len(sys.argv) == 2 and sys.argv[1].endswith(".json"):
        # If we pass only one argument to the script and it's the path to a json file,
        # let's parse it to get our arguments.
        model_args, data_args, training_args = parser.parse_json_file(json_file=os.path.abspath(sys.argv[1]))
    else:
        model_args, data_args, training_args = parser.parse_args_into_dataclasses()

    # Sending telemetry. Tracking the example usage helps us better allocate resources to maintain them. The
    # information sent is the one passed as arguments along with your Python/PyTorch versions.
    send_example_telemetry("run_seq2seq_qa", model_args, data_args)

    # Setup logging
    logging.basicConfig(
        format="%(asctime)s - %(levelname)s - %(name)s - %(message)s",
        datefmt="%m/%d/%Y %H:%M:%S",
        handlers=[logging.StreamHandler(sys.stdout)],
    )

    if training_args.should_log:
        # The default of training_args.log_level is passive, so we set log level at info here to have that default.
        transformers.utils.logging.set_verbosity_info()

    log_level = training_args.get_process_log_level()
    logger.setLevel(log_level)
    datasets.utils.logging.set_verbosity(log_level)
    transformers.utils.logging.set_verbosity(log_level)
    transformers.utils.logging.enable_default_handler()
    transformers.utils.logging.enable_explicit_format()

    # Log on each process the small summary:
    logger.warning(
        f"Process rank: {training_args.local_rank}, device: {training_args.device}, n_gpu: {training_args.n_gpu}, "
        + f"distributed training: {training_args.parallel_mode.value == 'distributed'}, 16-bits training: {training_args.fp16}"
    )
    logger.info(f"Training/evaluation parameters {training_args}")

    # Detecting last checkpoint.
    last_checkpoint = None
    if os.path.isdir(training_args.output_dir) and training_args.do_train and not training_args.overwrite_output_dir:
        last_checkpoint = get_last_checkpoint(training_args.output_dir)
        if last_checkpoint is None and len(os.listdir(training_args.output_dir)) > 0:
            raise ValueError(
                f"Output directory ({training_args.output_dir}) already exists and is not empty. "
                "Use --overwrite_output_dir to overcome."
            )
        elif last_checkpoint is not None and training_args.resume_from_checkpoint is None:
            logger.info(
                f"Checkpoint detected, resuming training at {last_checkpoint}. To avoid this behavior, change "
                "the `--output_dir` or add `--overwrite_output_dir` to train from scratch."
            )

    # Set seed before initializing model.
    set_seed(training_args.seed)

    # Get the datasets: you can either provide your own CSV/JSON/TXT training and evaluation files (see below)
    # or just provide the name of one of the public datasets available on the hub at https://huggingface.co/datasets/
    # (the dataset will be downloaded automatically from the datasets Hub).
    #
    # For CSV/JSON files, this script will use the column called 'text' or the first column if no column called
    # 'text' is found. You can easily tweak this behavior (see below).
    #
    # In distributed training, the load_dataset function guarantees that only one local process can concurrently
    # download the dataset.
    if data_args.dataset_name is not None:
        # Downloading and loading a dataset from the hub.
        raw_datasets = load_dataset(
            data_args.dataset_name,
            data_args.dataset_config_name,
            cache_dir=model_args.cache_dir,
            token=model_args.token,
            trust_remote_code=model_args.trust_remote_code,
        )
    else:
        data_files = {}
        if data_args.train_file is not None:
            data_files["train"] = data_args.train_file
            extension = data_args.train_file.split(".")[-1]
        if data_args.validation_file is not None:
            data_files["validation"] = data_args.validation_file
            extension = data_args.validation_file.split(".")[-1]
        if data_args.test_file is not None:
            data_files["test"] = data_args.test_file
            extension = data_args.test_file.split(".")[-1]
        raw_datasets = load_dataset(
            extension,
            data_files=data_files,
            field="data",
            cache_dir=model_args.cache_dir,
            token=model_args.token,
        )
    # See more about loading any type of standard or custom dataset (from files, python dict, pandas DataFrame, etc) at
    # https://huggingface.co/docs/datasets/loading_datasets.

    # Load pretrained model and tokenizer
    #
    # Distributed training:
    # The .from_pretrained methods guarantee that only one local process can concurrently
    # download model & vocab.
    config = AutoConfig.from_pretrained(
        model_args.config_name if model_args.config_name else model_args.model_name_or_path,
        cache_dir=model_args.cache_dir,
        revision=model_args.model_revision,
        token=model_args.token,
        trust_remote_code=model_args.trust_remote_code,
    )
    tokenizer = AutoTokenizer.from_pretrained(
        model_args.tokenizer_name if model_args.tokenizer_name else model_args.model_name_or_path,
        cache_dir=model_args.cache_dir,
        use_fast=model_args.use_fast_tokenizer,
        revision=model_args.model_revision,
        token=model_args.token,
        trust_remote_code=model_args.trust_remote_code,
    )
    model = AutoModelForSeq2SeqLM.from_pretrained(
        model_args.model_name_or_path,
        from_tf=bool(".ckpt" in model_args.model_name_or_path),
        config=config,
        cache_dir=model_args.cache_dir,
        revision=model_args.model_revision,
        token=model_args.token,
        trust_remote_code=model_args.trust_remote_code,
    )

    # We resize the embeddings only when necessary to avoid index errors. If you are creating a model from scratch
    # on a small vocab and want a smaller embedding size, remove this test.
    embedding_size = model.get_input_embeddings().weight.shape[0]
    if len(tokenizer) > embedding_size:
        model.resize_token_embeddings(len(tokenizer))

    if model.config.decoder_start_token_id is None:
        raise ValueError("Make sure that `config.decoder_start_token_id` is correctly defined")

    # Preprocessing the datasets.
    # We need to generate and tokenize inputs and targets.
    if training_args.do_train:
        column_names = raw_datasets["train"].column_names
    elif training_args.do_eval:
        column_names = raw_datasets["validation"].column_names
    elif training_args.do_predict:
        column_names = raw_datasets["test"].column_names
    else:
        logger.info("There is nothing to do. Please pass `do_train`, `do_eval` and/or `do_predict`.")
        return

    # Get the column names for input/target.
    dataset_columns = question_answering_column_name_mapping.get(data_args.dataset_name, None)
    if data_args.question_column is None:
        question_column = dataset_columns[0] if dataset_columns is not None else column_names[0]
    else:
        question_column = data_args.question_column
        if question_column not in column_names:
            raise ValueError(
                f"'--question_column' value '{data_args.question_column}' needs to be one of: {', '.join(column_names)}"
            )
    if data_args.context_column is None:
        context_column = dataset_columns[1] if dataset_columns is not None else column_names[1]
    else:
        context_column = data_args.context_column
        if context_column not in column_names:
            raise ValueError(
                f"'--context_column' value '{data_args.context_column}' needs to be one of: {', '.join(column_names)}"
            )
    if data_args.answer_column is None:
        answer_column = dataset_columns[2] if dataset_columns is not None else column_names[2]
    else:
        answer_column = data_args.answer_column
        if answer_column not in column_names:
            raise ValueError(
                f"'--answer_column' value '{data_args.answer_column}' needs to be one of: {', '.join(column_names)}"
            )

    # Temporarily set max_answer_length for training.
    max_answer_length = data_args.max_answer_length
    padding = "max_length" if data_args.pad_to_max_length else False

    if training_args.label_smoothing_factor > 0 and not hasattr(model, "prepare_decoder_input_ids_from_labels"):
        logger.warning(
            "label_smoothing is enabled but the `prepare_decoder_input_ids_from_labels` method is not defined for "
            f"`{model.__class__.__name__}`. This will lead to loss being calculated twice and will take up more memory"
        )

    if data_args.max_seq_length > tokenizer.model_max_length:
        logger.warning(
            f"The max_seq_length passed ({data_args.max_seq_length}) is larger than the maximum length for the "
            f"model ({tokenizer.model_max_length}). Using max_seq_length={tokenizer.model_max_length}."
        )
    max_seq_length = min(data_args.max_seq_length, tokenizer.model_max_length)

    def preprocess_squad_batch(
        examples,
        question_column: str,
        context_column: str,
        answer_column: str,
    ) -> tuple[list[str], list[str]]:
        questions = examples[question_column]
        contexts = examples[context_column]
        answers = examples[answer_column]

        def generate_input(_question, _context):
            return " ".join(["question:", _question.lstrip(), "context:", _context.lstrip()])

        inputs = [generate_input(question, context) for question, context in zip(questions, contexts)]
        targets = [answer["text"][0] if len(answer["text"]) > 0 else "" for answer in answers]
        return inputs, targets
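    # For example (hypothetical data), the question "Who won the game?" with context "The Broncos won." is
    # flattened into the single string "question: Who won the game? context: The Broncos won.", the usual
    # text-to-text formulation consumed by T5-style seq2seq models.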

    def preprocess_function(examples):
        inputs, targets = preprocess_squad_batch(examples, question_column, context_column, answer_column)

        model_inputs = tokenizer(inputs, max_length=max_seq_length, padding=padding, truncation=True)
        # Tokenize targets with text_target=...
        labels = tokenizer(text_target=targets, max_length=max_answer_length, padding=padding, truncation=True)

        # If we are padding here, replace all tokenizer.pad_token_id in the labels by -100 when we want to ignore
        # padding in the loss.
        if padding == "max_length" and data_args.ignore_pad_token_for_loss:
            labels["input_ids"] = [
                [(l if l != tokenizer.pad_token_id else -100) for l in label] for label in labels["input_ids"]
            ]
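            # -100 is the default ignore_index of PyTorch's cross-entropy loss, so label positions set to -100
            # contribute neither to the loss nor to its gradients.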

        model_inputs["labels"] = labels["input_ids"]
        return model_inputs

    # Validation preprocessing
    def preprocess_validation_function(examples):
        inputs, targets = preprocess_squad_batch(examples, question_column, context_column, answer_column)

        model_inputs = tokenizer(
            inputs,
            max_length=max_seq_length,
            padding=padding,
            truncation=True,
            return_overflowing_tokens=True,
            return_offsets_mapping=True,
        )
        # Tokenize targets with the `text_target` keyword argument
        labels = tokenizer(text_target=targets, max_length=max_answer_length, padding=padding, truncation=True)

        # If we are padding here, replace all tokenizer.pad_token_id in the labels by -100 when we want to ignore
        # padding in the loss.
        if padding == "max_length" and data_args.ignore_pad_token_for_loss:
            labels["input_ids"] = [
                [(l if l != tokenizer.pad_token_id else -100) for l in label] for label in labels["input_ids"]
            ]

        # Since one example might give us several features if it has a long context, we need a map from a feature to
        # its corresponding example. This key gives us just that.
        sample_mapping = model_inputs.pop("overflow_to_sample_mapping")

        # For evaluation, we will need to convert our predictions to substrings of the context, so we keep the
        # corresponding example_id and we will store the offset mappings.
        model_inputs["example_id"] = []
        # Propagate the labels to the overflowing features.
        labels_out = []

        for i in range(len(model_inputs["input_ids"])):
            # One example can give several spans; this is the index of the example containing this span of text.
            sample_index = sample_mapping[i]
            model_inputs["example_id"].append(examples["id"][sample_index])
            labels_out.append(labels["input_ids"][sample_index])

        model_inputs["labels"] = labels_out
        return model_inputs

    if training_args.do_train:
        if "train" not in raw_datasets:
            raise ValueError("--do_train requires a train dataset")
        train_dataset = raw_datasets["train"]
        if data_args.max_train_samples is not None:
            # Select a subset of samples from the whole dataset if the argument is specified.
            max_train_samples = min(len(train_dataset), data_args.max_train_samples)
            train_dataset = train_dataset.select(range(max_train_samples))
        # Create train features from the dataset
        with training_args.main_process_first(desc="train dataset map pre-processing"):
            train_dataset = train_dataset.map(
                preprocess_function,
                batched=True,
                num_proc=data_args.preprocessing_num_workers,
                remove_columns=column_names,
                load_from_cache_file=not data_args.overwrite_cache,
                desc="Running tokenizer on train dataset",
            )
        if data_args.max_train_samples is not None:
            # The number of samples might have increased during feature creation, so we select only the specified
            # maximum number of samples again.
            max_train_samples = min(len(train_dataset), data_args.max_train_samples)
            train_dataset = train_dataset.select(range(max_train_samples))

    if training_args.do_eval:
        if "validation" not in raw_datasets:
            raise ValueError("--do_eval requires a validation dataset")
        eval_examples = raw_datasets["validation"]
        if data_args.max_eval_samples is not None:
            # Select a subset of samples from the whole dataset.
            max_eval_samples = min(len(eval_examples), data_args.max_eval_samples)
            eval_examples = eval_examples.select(range(max_eval_samples))
        # Validation feature creation
        with training_args.main_process_first(desc="validation dataset map pre-processing"):
            eval_dataset = eval_examples.map(
                preprocess_validation_function,
                batched=True,
                num_proc=data_args.preprocessing_num_workers,
                remove_columns=column_names,
                load_from_cache_file=not data_args.overwrite_cache,
                desc="Running tokenizer on validation dataset",
            )
        if data_args.max_eval_samples is not None:
            # Feature creation might have increased the number of samples, so we select the required samples again.
            max_eval_samples = min(len(eval_dataset), data_args.max_eval_samples)
            eval_dataset = eval_dataset.select(range(max_eval_samples))

    if training_args.do_predict:
        if "test" not in raw_datasets:
            raise ValueError("--do_predict requires a test dataset")
        predict_examples = raw_datasets["test"]
        if data_args.max_predict_samples is not None:
            # Select a subset of samples from the whole dataset.
            predict_examples = predict_examples.select(range(data_args.max_predict_samples))
        # Predict feature creation
        with training_args.main_process_first(desc="prediction dataset map pre-processing"):
            predict_dataset = predict_examples.map(
                preprocess_validation_function,
                batched=True,
                num_proc=data_args.preprocessing_num_workers,
                remove_columns=column_names,
                load_from_cache_file=not data_args.overwrite_cache,
                desc="Running tokenizer on prediction dataset",
            )
        if data_args.max_predict_samples is not None:
            # Feature creation might have increased the number of samples, so we select the required samples again.
            max_predict_samples = min(len(predict_dataset), data_args.max_predict_samples)
            predict_dataset = predict_dataset.select(range(max_predict_samples))

    # Data collator
    label_pad_token_id = -100 if data_args.ignore_pad_token_for_loss else tokenizer.pad_token_id
    data_collator = DataCollatorForSeq2Seq(
        tokenizer,
        model=model,
        label_pad_token_id=label_pad_token_id,
        pad_to_multiple_of=8 if training_args.fp16 else None,
    )

    metric = evaluate.load(
        "squad_v2" if data_args.version_2_with_negative else "squad", cache_dir=model_args.cache_dir
    )

    def compute_metrics(p: EvalPrediction):
        return metric.compute(predictions=p.predictions, references=p.label_ids)

    # Post-processing:
    def post_processing_function(
        examples: datasets.Dataset, features: datasets.Dataset, outputs: EvalLoopOutput, stage="eval"
    ):
        # Decode the predicted tokens.
        preds = outputs.predictions
        if isinstance(preds, tuple):
            preds = preds[0]
        # Replace -100s used for padding as we can't decode them
        preds = np.where(preds != -100, preds, tokenizer.pad_token_id)
        decoded_preds = tokenizer.batch_decode(preds, skip_special_tokens=True)

        # Build a map from each example to its corresponding features.
        example_id_to_index = {k: i for i, k in enumerate(examples["id"])}
        feature_per_example = {example_id_to_index[feature["example_id"]]: i for i, feature in enumerate(features)}
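        # Note that this dict comprehension keeps one feature index per example (the last one seen), so the
        # decoded prediction of that feature is the one used as the example's answer below.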
        predictions = {}
        # Let's loop over all the examples!
        for example_index, example in enumerate(examples):
            # This is the index of the feature associated to the current example.
            feature_index = feature_per_example[example_index]
            predictions[example["id"]] = decoded_preds[feature_index]

        # Format the result to the format the metric expects.
        if data_args.version_2_with_negative:
            formatted_predictions = [
                {"id": k, "prediction_text": v, "no_answer_probability": 0.0} for k, v in predictions.items()
            ]
        else:
            formatted_predictions = [{"id": k, "prediction_text": v} for k, v in predictions.items()]

        references = [{"id": ex["id"], "answers": ex[answer_column]} for ex in examples]
        return EvalPrediction(predictions=formatted_predictions, label_ids=references)

    # Initialize our Trainer
    trainer = QuestionAnsweringSeq2SeqTrainer(
        model=model,
        args=training_args,
        train_dataset=train_dataset if training_args.do_train else None,
        eval_dataset=eval_dataset if training_args.do_eval else None,
        eval_examples=eval_examples if training_args.do_eval else None,
        processing_class=tokenizer,
        data_collator=data_collator,
        compute_metrics=compute_metrics if training_args.predict_with_generate else None,
        post_process_function=post_processing_function,
    )

    # Training
    if training_args.do_train:
        checkpoint = None
        if training_args.resume_from_checkpoint is not None:
            checkpoint = training_args.resume_from_checkpoint
        elif last_checkpoint is not None:
            checkpoint = last_checkpoint
        train_result = trainer.train(resume_from_checkpoint=checkpoint)
        trainer.save_model()  # Saves the tokenizer too for easy upload

        metrics = train_result.metrics
        max_train_samples = (
            data_args.max_train_samples if data_args.max_train_samples is not None else len(train_dataset)
        )
        metrics["train_samples"] = min(max_train_samples, len(train_dataset))

        trainer.log_metrics("train", metrics)
        trainer.save_metrics("train", metrics)
        trainer.save_state()

    # Evaluation
    results = {}
    max_length = (
        training_args.generation_max_length
        if training_args.generation_max_length is not None
        else data_args.val_max_answer_length
    )
    num_beams = data_args.num_beams if data_args.num_beams is not None else training_args.generation_num_beams
    if training_args.do_eval:
        logger.info("*** Evaluate ***")
        metrics = trainer.evaluate(max_length=max_length, num_beams=num_beams, metric_key_prefix="eval")

        max_eval_samples = data_args.max_eval_samples if data_args.max_eval_samples is not None else len(eval_dataset)
        metrics["eval_samples"] = min(max_eval_samples, len(eval_dataset))

        trainer.log_metrics("eval", metrics)
        trainer.save_metrics("eval", metrics)

    # Prediction
    if training_args.do_predict:
        logger.info("*** Predict ***")
        results = trainer.predict(predict_dataset, predict_examples)
        metrics = results.metrics

        max_predict_samples = (
            data_args.max_predict_samples if data_args.max_predict_samples is not None else len(predict_dataset)
        )
        metrics["predict_samples"] = min(max_predict_samples, len(predict_dataset))

        trainer.log_metrics("predict", metrics)
        trainer.save_metrics("predict", metrics)

    if training_args.push_to_hub:
        kwargs = {"finetuned_from": model_args.model_name_or_path, "tasks": "question-answering"}
        if data_args.dataset_name is not None:
            kwargs["dataset_tags"] = data_args.dataset_name
            if data_args.dataset_config_name is not None:
                kwargs["dataset_args"] = data_args.dataset_config_name
                kwargs["dataset"] = f"{data_args.dataset_name} {data_args.dataset_config_name}"
            else:
                kwargs["dataset"] = data_args.dataset_name

        trainer.push_to_hub(**kwargs)


def _mp_fn(index):
    # For xla_spawn (TPUs)
    main()


if __name__ == "__main__":
    main()
136
transformers/examples/pytorch/question-answering/trainer_qa.py
Normal file
@@ -0,0 +1,136 @@
# Copyright 2020 The HuggingFace Team All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
A subclass of `Trainer` specific to Question-Answering tasks
"""

import math
import time

from transformers import Trainer, is_torch_xla_available
from transformers.trainer_utils import PredictionOutput, speed_metrics


if is_torch_xla_available():
    import torch_xla.core.xla_model as xm
    import torch_xla.debug.metrics as met


class QuestionAnsweringTrainer(Trainer):
    def __init__(self, *args, eval_examples=None, post_process_function=None, **kwargs):
        super().__init__(*args, **kwargs)
        self.eval_examples = eval_examples
        self.post_process_function = post_process_function
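        # Note: `eval_examples` holds the raw, un-tokenized examples while `eval_dataset` holds the
        # preprocessed features; both are kept so `post_process_function` can map predictions made on
        # features back to answer spans in the original contexts.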
|
||||
|
||||
def evaluate(self, eval_dataset=None, eval_examples=None, ignore_keys=None, metric_key_prefix: str = "eval"):
|
||||
eval_dataset = self.eval_dataset if eval_dataset is None else eval_dataset
|
||||
eval_dataloader = self.get_eval_dataloader(eval_dataset)
|
||||
eval_examples = self.eval_examples if eval_examples is None else eval_examples
|
||||
|
||||
# Temporarily disable metric computation, we will do it in the loop here.
|
||||
compute_metrics = self.compute_metrics
|
||||
self.compute_metrics = None
|
||||
eval_loop = self.prediction_loop if self.args.use_legacy_prediction_loop else self.evaluation_loop
|
||||
start_time = time.time()
|
||||
try:
|
||||
output = eval_loop(
|
||||
eval_dataloader,
|
||||
description="Evaluation",
|
||||
# No point gathering the predictions if there are no metrics, otherwise we defer to
|
||||
# self.args.prediction_loss_only
|
||||
prediction_loss_only=True if compute_metrics is None else None,
|
||||
ignore_keys=ignore_keys,
|
||||
metric_key_prefix=metric_key_prefix,
|
||||
)
|
||||
finally:
|
||||
self.compute_metrics = compute_metrics
|
||||
total_batch_size = self.args.eval_batch_size * self.args.world_size
|
||||
if f"{metric_key_prefix}_jit_compilation_time" in output.metrics:
|
||||
start_time += output.metrics[f"{metric_key_prefix}_jit_compilation_time"]
|
||||
output.metrics.update(
|
||||
speed_metrics(
|
||||
metric_key_prefix,
|
||||
start_time,
|
||||
num_samples=output.num_samples,
|
||||
num_steps=math.ceil(output.num_samples / total_batch_size),
|
||||
)
|
||||
)
|
||||
if self.post_process_function is not None and self.compute_metrics is not None and self.args.should_save:
|
||||
# Only the main node write the results by default
|
||||
eval_preds = self.post_process_function(eval_examples, eval_dataset, output.predictions)
|
||||
metrics = self.compute_metrics(eval_preds)
|
||||
|
||||
# Prefix all keys with metric_key_prefix + '_'
|
||||
for key in list(metrics.keys()):
|
||||
if not key.startswith(f"{metric_key_prefix}_"):
|
||||
metrics[f"{metric_key_prefix}_{key}"] = metrics.pop(key)
|
||||
metrics.update(output.metrics)
|
||||
else:
|
||||
metrics = output.metrics
|
||||
|
||||
if self.args.should_log:
|
||||
# Only the main node log the results by default
|
||||
self.log(metrics)
|
||||
|
||||
if self.args.debug:
|
||||
# tpu-comment: Logging debug metrics for PyTorch/XLA (compile, execute times, ops, etc.)
|
||||
xm.master_print(met.metrics_report())
|
||||
|
||||
self.control = self.callback_handler.on_evaluate(self.args, self.state, self.control, metrics)
|
||||
return metrics
|
||||
|
||||
    def predict(self, predict_dataset, predict_examples, ignore_keys=None, metric_key_prefix: str = "test"):
        predict_dataloader = self.get_test_dataloader(predict_dataset)

        # Temporarily disable metric computation; we will do it in the loop here.
        compute_metrics = self.compute_metrics
        self.compute_metrics = None
        eval_loop = self.prediction_loop if self.args.use_legacy_prediction_loop else self.evaluation_loop
        start_time = time.time()
        try:
            output = eval_loop(
                predict_dataloader,
                description="Prediction",
                # No point gathering the predictions if there are no metrics, otherwise we defer to
                # self.args.prediction_loss_only
                prediction_loss_only=True if compute_metrics is None else None,
                ignore_keys=ignore_keys,
                metric_key_prefix=metric_key_prefix,
            )
        finally:
            self.compute_metrics = compute_metrics
        total_batch_size = self.args.eval_batch_size * self.args.world_size
        if f"{metric_key_prefix}_jit_compilation_time" in output.metrics:
            start_time += output.metrics[f"{metric_key_prefix}_jit_compilation_time"]
        output.metrics.update(
            speed_metrics(
                metric_key_prefix,
                start_time,
                num_samples=output.num_samples,
                num_steps=math.ceil(output.num_samples / total_batch_size),
            )
        )

        if self.post_process_function is None or self.compute_metrics is None:
            return output

        predictions = self.post_process_function(predict_examples, predict_dataset, output.predictions, "predict")
        metrics = self.compute_metrics(predictions)

        # Prefix all keys with metric_key_prefix + '_'
        for key in list(metrics.keys()):
            if not key.startswith(f"{metric_key_prefix}_"):
                metrics[f"{metric_key_prefix}_{key}"] = metrics.pop(key)
        metrics.update(output.metrics)
        return PredictionOutput(predictions=predictions.predictions, label_ids=predictions.label_ids, metrics=metrics)
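For reference, a minimal sketch of how the trainer above (the `QuestionAnsweringTrainer` consumed by `run_qa.py`) is typically wired together. The dataset, post-processing, and metric objects are placeholders for what the example script builds, not part of this commit:

```python
from transformers import AutoModelForQuestionAnswering, TrainingArguments

# A minimal sketch; `eval_dataset`, `eval_examples`, `post_processing_function`, and
# `compute_metrics` are placeholders for the objects built in run_qa.py.
model = AutoModelForQuestionAnswering.from_pretrained("google-bert/bert-base-uncased")
training_args = TrainingArguments(output_dir="/tmp/debug_squad/")

trainer = QuestionAnsweringTrainer(
    model=model,
    args=training_args,
    eval_dataset=eval_dataset,    # tokenized features (placeholder)
    eval_examples=eval_examples,  # raw examples with "id" and "context" (placeholder)
    post_process_function=post_processing_function,  # wraps postprocess_qa_predictions (placeholder)
    compute_metrics=compute_metrics,  # e.g. the SQuAD metric (placeholder)
)
metrics = trainer.evaluate()  # e.g. {"eval_exact_match": ..., "eval_f1": ...}
```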
@@ -0,0 +1,163 @@
# Copyright 2021 The HuggingFace Team All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
A subclass of `Trainer` specific to question-answering tasks.
"""

import math
import time
from typing import Optional

from torch.utils.data import Dataset

from transformers import Seq2SeqTrainer, is_torch_xla_available
from transformers.trainer_utils import PredictionOutput, speed_metrics


if is_torch_xla_available():
    import torch_xla.core.xla_model as xm
    import torch_xla.debug.metrics as met


class QuestionAnsweringSeq2SeqTrainer(Seq2SeqTrainer):
    def __init__(self, *args, eval_examples=None, post_process_function=None, **kwargs):
        super().__init__(*args, **kwargs)
        self.eval_examples = eval_examples
        self.post_process_function = post_process_function

    def evaluate(
        self,
        eval_dataset: Optional[Dataset] = None,
        eval_examples=None,
        ignore_keys: Optional[list[str]] = None,
        metric_key_prefix: str = "eval",
        **gen_kwargs,
    ) -> dict[str, float]:
        gen_kwargs = gen_kwargs.copy()

        # Use the legacy argument setting if (a) the option is not explicitly passed and (b) the argument is set in
        # the training args
        if gen_kwargs.get("max_length") is None and self.args.generation_max_length is not None:
            gen_kwargs["max_length"] = self.args.generation_max_length
        if gen_kwargs.get("num_beams") is None and self.args.generation_num_beams is not None:
            gen_kwargs["num_beams"] = self.args.generation_num_beams
        self._gen_kwargs = gen_kwargs

        eval_dataset = self.eval_dataset if eval_dataset is None else eval_dataset
        eval_dataloader = self.get_eval_dataloader(eval_dataset)
        eval_examples = self.eval_examples if eval_examples is None else eval_examples

        # Temporarily disable metric computation; we will do it in the loop here.
        compute_metrics = self.compute_metrics
        self.compute_metrics = None
        start_time = time.time()
        eval_loop = self.prediction_loop if self.args.use_legacy_prediction_loop else self.evaluation_loop
        try:
            output = eval_loop(
                eval_dataloader,
                description="Evaluation",
                # No point gathering the predictions if there are no metrics, otherwise we defer to
                # self.args.prediction_loss_only
                prediction_loss_only=True if compute_metrics is None else None,
                ignore_keys=ignore_keys,
                metric_key_prefix=metric_key_prefix,
            )
        finally:
            self.compute_metrics = compute_metrics
        total_batch_size = self.args.eval_batch_size * self.args.world_size
        if f"{metric_key_prefix}_jit_compilation_time" in output.metrics:
            start_time += output.metrics[f"{metric_key_prefix}_jit_compilation_time"]
        output.metrics.update(
            speed_metrics(
                metric_key_prefix,
                start_time,
                num_samples=output.num_samples,
                num_steps=math.ceil(output.num_samples / total_batch_size),
            )
        )

        if self.post_process_function is not None and self.compute_metrics is not None and self.args.should_save:
            # Only the main node writes the results by default
            eval_preds = self.post_process_function(eval_examples, eval_dataset, output)
            metrics = self.compute_metrics(eval_preds)

            # Prefix all keys with metric_key_prefix + '_'
            for key in list(metrics.keys()):
                if not key.startswith(f"{metric_key_prefix}_"):
                    metrics[f"{metric_key_prefix}_{key}"] = metrics.pop(key)

            metrics.update(output.metrics)
        else:
            metrics = output.metrics

        if self.args.should_log:
            # Only the main node logs the results by default
            self.log(metrics)

        if self.args.debug:
            # tpu-comment: Logging debug metrics for PyTorch/XLA (compile, execute times, ops, etc.)
            xm.master_print(met.metrics_report())

        self.control = self.callback_handler.on_evaluate(self.args, self.state, self.control, metrics)
        return metrics

    def predict(
        self, predict_dataset, predict_examples, ignore_keys=None, metric_key_prefix: str = "test", **gen_kwargs
    ):
        self._gen_kwargs = gen_kwargs.copy()

        predict_dataloader = self.get_test_dataloader(predict_dataset)

        # Temporarily disable metric computation; we will do it in the loop here.
        compute_metrics = self.compute_metrics
        self.compute_metrics = None
        start_time = time.time()
        eval_loop = self.prediction_loop if self.args.use_legacy_prediction_loop else self.evaluation_loop
        try:
            output = eval_loop(
                predict_dataloader,
                description="Prediction",
                # No point gathering the predictions if there are no metrics, otherwise we defer to
                # self.args.prediction_loss_only
                prediction_loss_only=True if compute_metrics is None else None,
                ignore_keys=ignore_keys,
                metric_key_prefix=metric_key_prefix,
            )
        finally:
            self.compute_metrics = compute_metrics

        total_batch_size = self.args.eval_batch_size * self.args.world_size
        if f"{metric_key_prefix}_jit_compilation_time" in output.metrics:
            start_time += output.metrics[f"{metric_key_prefix}_jit_compilation_time"]
        output.metrics.update(
            speed_metrics(
                metric_key_prefix,
                start_time,
                num_samples=output.num_samples,
                num_steps=math.ceil(output.num_samples / total_batch_size),
            )
        )
        if self.post_process_function is None or self.compute_metrics is None:
            return output

        predictions = self.post_process_function(predict_examples, predict_dataset, output, "predict")
        metrics = self.compute_metrics(predictions)

        # Prefix all keys with metric_key_prefix + '_'
        for key in list(metrics.keys()):
            if not key.startswith(f"{metric_key_prefix}_"):
                metrics[f"{metric_key_prefix}_{key}"] = metrics.pop(key)
        metrics.update(output.metrics)
        return PredictionOutput(predictions=predictions.predictions, label_ids=predictions.label_ids, metrics=metrics)
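Generation arguments reach the evaluation loop through `**gen_kwargs`; anything not passed explicitly falls back to `generation_max_length`/`generation_num_beams` from the training arguments. A minimal sketch (placeholder names as above; `t5-small` is chosen only for illustration):

```python
from transformers import AutoModelForSeq2SeqLM, Seq2SeqTrainingArguments

# A minimal sketch; dataset/metric objects are placeholders for what run_seq2seq_qa.py builds.
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")
args = Seq2SeqTrainingArguments(output_dir="/tmp/debug_seq2seq_squad/", predict_with_generate=True)

trainer = QuestionAnsweringSeq2SeqTrainer(
    model=model,
    args=args,
    eval_dataset=eval_dataset,    # tokenized dataset (placeholder)
    eval_examples=eval_examples,  # raw examples (placeholder)
    post_process_function=post_processing_function,  # placeholder
    compute_metrics=compute_metrics,                 # placeholder
)
# Explicit kwargs win; otherwise args.generation_max_length / args.generation_num_beams apply.
metrics = trainer.evaluate(max_length=30, num_beams=4)
```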
443
transformers/examples/pytorch/question-answering/utils_qa.py
Normal file
@@ -0,0 +1,443 @@
# Copyright 2020 The HuggingFace Team All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Post-processing utilities for question answering.
"""

import collections
import json
import logging
import os
from typing import Optional

import numpy as np
from tqdm.auto import tqdm


logger = logging.getLogger(__name__)


def postprocess_qa_predictions(
    examples,
    features,
    predictions: tuple[np.ndarray, np.ndarray],
    version_2_with_negative: bool = False,
    n_best_size: int = 20,
    max_answer_length: int = 30,
    null_score_diff_threshold: float = 0.0,
    output_dir: Optional[str] = None,
    prefix: Optional[str] = None,
    log_level: Optional[int] = logging.WARNING,
):
    """
    Post-processes the predictions of a question-answering model to convert them to answers that are substrings of the
    original contexts. This is the base postprocessing function for models that only return start and end logits.

    Args:
        examples: The non-preprocessed dataset (see the main script for more information).
        features: The processed dataset (see the main script for more information).
        predictions (:obj:`tuple[np.ndarray, np.ndarray]`):
            The predictions of the model: two arrays containing the start logits and the end logits respectively. Its
            first dimension must match the number of elements of :obj:`features`.
        version_2_with_negative (:obj:`bool`, `optional`, defaults to :obj:`False`):
            Whether or not the underlying dataset contains examples with no answers.
        n_best_size (:obj:`int`, `optional`, defaults to 20):
            The total number of n-best predictions to generate when looking for an answer.
        max_answer_length (:obj:`int`, `optional`, defaults to 30):
            The maximum length of an answer that can be generated. This is needed because the start and end predictions
            are not conditioned on one another.
        null_score_diff_threshold (:obj:`float`, `optional`, defaults to 0):
            The threshold used to select the null answer: if the best answer has a score that is less than the score of
            the null answer minus this threshold, the null answer is selected for this example (note that the score of
            the null answer for an example giving several features is the minimum of the scores for the null answer on
            each feature: all features must be aligned on the fact they `want` to predict a null answer).

            Only useful when :obj:`version_2_with_negative` is :obj:`True`.
        output_dir (:obj:`str`, `optional`):
            If provided, the dictionaries of predictions, n_best predictions (with their scores and logits) and, if
            :obj:`version_2_with_negative=True`, the dictionary of the scores differences between best and null
            answers, are saved in `output_dir`.
        prefix (:obj:`str`, `optional`):
            If provided, the dictionaries mentioned above are saved with `prefix` added to their names.
        log_level (:obj:`int`, `optional`, defaults to ``logging.WARNING``):
            ``logging`` log level (e.g., ``logging.WARNING``)
    """
    if len(predictions) != 2:
        raise ValueError("`predictions` should be a tuple with two elements (start_logits, end_logits).")
    all_start_logits, all_end_logits = predictions

    if len(predictions[0]) != len(features):
        raise ValueError(f"Got {len(predictions[0])} predictions and {len(features)} features.")

    # Build a map from each example to its corresponding features.
    example_id_to_index = {k: i for i, k in enumerate(examples["id"])}
    features_per_example = collections.defaultdict(list)
    for i, feature in enumerate(features):
        features_per_example[example_id_to_index[feature["example_id"]]].append(i)

    # The dictionaries we have to fill.
    all_predictions = collections.OrderedDict()
    all_nbest_json = collections.OrderedDict()
    if version_2_with_negative:
        scores_diff_json = collections.OrderedDict()

    # Logging.
    logger.setLevel(log_level)
    logger.info(f"Post-processing {len(examples)} example predictions split into {len(features)} features.")

    # Let's loop over all the examples!
    for example_index, example in enumerate(tqdm(examples)):
        # Those are the indices of the features associated with the current example.
        feature_indices = features_per_example[example_index]

        min_null_prediction = None
        prelim_predictions = []

        # Looping through all the features associated with the current example.
        for feature_index in feature_indices:
            # We grab the predictions of the model for this feature.
            start_logits = all_start_logits[feature_index]
            end_logits = all_end_logits[feature_index]
            # This is what will allow us to map some of the positions in our logits to spans of text in the original
            # context.
            offset_mapping = features[feature_index]["offset_mapping"]
            # Optional `token_is_max_context`: if provided, we will remove answers that do not have the maximum context
            # available in the current feature.
            token_is_max_context = features[feature_index].get("token_is_max_context", None)

            # Update minimum null prediction.
            feature_null_score = start_logits[0] + end_logits[0]
            if min_null_prediction is None or min_null_prediction["score"] > feature_null_score:
                min_null_prediction = {
                    "offsets": (0, 0),
                    "score": feature_null_score,
                    "start_logit": start_logits[0],
                    "end_logit": end_logits[0],
                }

            # Go through all possibilities for the `n_best_size` greatest start and end logits.
            start_indexes = np.argsort(start_logits)[-1 : -n_best_size - 1 : -1].tolist()
            end_indexes = np.argsort(end_logits)[-1 : -n_best_size - 1 : -1].tolist()
            for start_index in start_indexes:
                for end_index in end_indexes:
                    # Don't consider out-of-scope answers, either because the indices are out of bounds or correspond
                    # to parts of the input_ids that are not in the context.
                    if (
                        start_index >= len(offset_mapping)
                        or end_index >= len(offset_mapping)
                        or offset_mapping[start_index] is None
                        or len(offset_mapping[start_index]) < 2
                        or offset_mapping[end_index] is None
                        or len(offset_mapping[end_index]) < 2
                    ):
                        continue
                    # Don't consider answers with a length that is either < 0 or > max_answer_length.
                    if end_index < start_index or end_index - start_index + 1 > max_answer_length:
                        continue
                    # Don't consider answers that don't have the maximum context available (if such information is
                    # provided).
                    if token_is_max_context is not None and not token_is_max_context.get(str(start_index), False):
                        continue

                    prelim_predictions.append(
                        {
                            "offsets": (offset_mapping[start_index][0], offset_mapping[end_index][1]),
                            "score": start_logits[start_index] + end_logits[end_index],
                            "start_logit": start_logits[start_index],
                            "end_logit": end_logits[end_index],
                        }
                    )
        if version_2_with_negative and min_null_prediction is not None:
            # Add the minimum null prediction
            prelim_predictions.append(min_null_prediction)
            null_score = min_null_prediction["score"]

        # Only keep the best `n_best_size` predictions.
        predictions = sorted(prelim_predictions, key=lambda x: x["score"], reverse=True)[:n_best_size]

        # Add back the minimum null prediction if it was removed because of its low score.
        if (
            version_2_with_negative
            and min_null_prediction is not None
            and not any(p["offsets"] == (0, 0) for p in predictions)
        ):
            predictions.append(min_null_prediction)

        # Use the offsets to gather the answer text in the original context.
        context = example["context"]
        for pred in predictions:
            offsets = pred.pop("offsets")
            pred["text"] = context[offsets[0] : offsets[1]]

        # In the very rare edge case where we don't have a single non-null prediction, we create a fake prediction to
        # avoid failure.
        if len(predictions) == 0 or (len(predictions) == 1 and predictions[0]["text"] == ""):
            predictions.insert(0, {"text": "empty", "start_logit": 0.0, "end_logit": 0.0, "score": 0.0})

        # Compute the softmax of all scores (we do it with numpy to stay independent from torch in this file, using
        # the LogSumExp trick).
        scores = np.array([pred.pop("score") for pred in predictions])
        exp_scores = np.exp(scores - np.max(scores))
        probs = exp_scores / exp_scores.sum()

        # Include the probabilities in our predictions.
        for prob, pred in zip(probs, predictions):
            pred["probability"] = prob

        # Pick the best prediction. If the null answer is not possible, this is easy.
        if not version_2_with_negative:
            all_predictions[example["id"]] = predictions[0]["text"]
        else:
            # Otherwise we first need to find the best non-empty prediction.
            i = 0
            while predictions[i]["text"] == "":
                i += 1
            best_non_null_pred = predictions[i]

            # Then we compare to the null prediction using the threshold.
            score_diff = null_score - best_non_null_pred["start_logit"] - best_non_null_pred["end_logit"]
            scores_diff_json[example["id"]] = float(score_diff)  # To be JSON-serializable.
            if score_diff > null_score_diff_threshold:
                all_predictions[example["id"]] = ""
            else:
                all_predictions[example["id"]] = best_non_null_pred["text"]

        # Make `predictions` JSON-serializable by casting np.float back to float.
        all_nbest_json[example["id"]] = [
            {k: (float(v) if isinstance(v, (np.float16, np.float32, np.float64)) else v) for k, v in pred.items()}
            for pred in predictions
        ]

    # If we have an output_dir, let's save all those dicts.
    if output_dir is not None:
        if not os.path.isdir(output_dir):
            raise OSError(f"{output_dir} is not a directory.")

        prediction_file = os.path.join(
            output_dir, "predictions.json" if prefix is None else f"{prefix}_predictions.json"
        )
        nbest_file = os.path.join(
            output_dir, "nbest_predictions.json" if prefix is None else f"{prefix}_nbest_predictions.json"
        )
        if version_2_with_negative:
            null_odds_file = os.path.join(
                output_dir, "null_odds.json" if prefix is None else f"{prefix}_null_odds.json"
            )

        logger.info(f"Saving predictions to {prediction_file}.")
        with open(prediction_file, "w") as writer:
            writer.write(json.dumps(all_predictions, indent=4) + "\n")
        logger.info(f"Saving nbest_preds to {nbest_file}.")
        with open(nbest_file, "w") as writer:
            writer.write(json.dumps(all_nbest_json, indent=4) + "\n")
        if version_2_with_negative:
            logger.info(f"Saving null_odds to {null_odds_file}.")
            with open(null_odds_file, "w") as writer:
                writer.write(json.dumps(scores_diff_json, indent=4) + "\n")

    return all_predictions

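To make the expected inputs concrete, here is a small self-contained sketch of calling `postprocess_qa_predictions` on a toy example; the dataset contents and logits below are fabricated for illustration and are not part of this commit:

```python
import numpy as np
from datasets import Dataset

from utils_qa import postprocess_qa_predictions

# One raw example split into a single tokenized feature (toy data).
examples = Dataset.from_dict({"id": ["ex0"], "context": ["HuggingFace is based in NYC."]})
features = Dataset.from_dict(
    {
        "example_id": ["ex0"],
        # One (char_start, char_end) pair per token; index 0 plays the role of the CLS token.
        "offset_mapping": [[(0, 0), (0, 11), (12, 14), (15, 20), (21, 23), (24, 27)]],
    }
)

# Random logits stand in for real model outputs: shape (num_features, num_tokens).
rng = np.random.default_rng(0)
start_logits = rng.normal(size=(1, 6)).astype(np.float32)
end_logits = rng.normal(size=(1, 6)).astype(np.float32)

answers = postprocess_qa_predictions(examples, features, (start_logits, end_logits))
print(answers)  # OrderedDict mapping "ex0" to its highest-scoring answer span
```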
def postprocess_qa_predictions_with_beam_search(
    examples,
    features,
    predictions: tuple[np.ndarray, np.ndarray, np.ndarray, np.ndarray, np.ndarray],
    version_2_with_negative: bool = False,
    n_best_size: int = 20,
    max_answer_length: int = 30,
    start_n_top: int = 5,
    end_n_top: int = 5,
    output_dir: Optional[str] = None,
    prefix: Optional[str] = None,
    log_level: Optional[int] = logging.WARNING,
):
    """
    Post-processes the predictions of a question-answering model with beam search to convert them to answers that are
    substrings of the original contexts. This is the postprocessing function for models that return start and end
    logits and indices, as well as cls token predictions.

    Args:
        examples: The non-preprocessed dataset (see the main script for more information).
        features: The processed dataset (see the main script for more information).
        predictions (:obj:`tuple[np.ndarray, np.ndarray, np.ndarray, np.ndarray, np.ndarray]`):
            The predictions of the model: five arrays containing, respectively, the top start log probabilities, the
            top start indices, the top end log probabilities, the top end indices and the cls token logits. The first
            dimension of each array must match the number of elements of :obj:`features`.
        version_2_with_negative (:obj:`bool`, `optional`, defaults to :obj:`False`):
            Whether or not the underlying dataset contains examples with no answers.
        n_best_size (:obj:`int`, `optional`, defaults to 20):
            The total number of n-best predictions to generate when looking for an answer.
        max_answer_length (:obj:`int`, `optional`, defaults to 30):
            The maximum length of an answer that can be generated. This is needed because the start and end predictions
            are not conditioned on one another.
        start_n_top (:obj:`int`, `optional`, defaults to 5):
            The number of top start logits to keep when searching for the :obj:`n_best_size` predictions.
        end_n_top (:obj:`int`, `optional`, defaults to 5):
            The number of top end logits to keep when searching for the :obj:`n_best_size` predictions.
        output_dir (:obj:`str`, `optional`):
            If provided, the dictionaries of predictions, n_best predictions (with their scores and logits) and, if
            :obj:`version_2_with_negative=True`, the dictionary of the scores differences between best and null
            answers, are saved in `output_dir`.
        prefix (:obj:`str`, `optional`):
            If provided, the dictionaries mentioned above are saved with `prefix` added to their names.
        log_level (:obj:`int`, `optional`, defaults to ``logging.WARNING``):
            ``logging`` log level (e.g., ``logging.WARNING``)
    """
    if len(predictions) != 5:
        raise ValueError("`predictions` should be a tuple with five elements.")
    start_top_log_probs, start_top_index, end_top_log_probs, end_top_index, cls_logits = predictions

    if len(predictions[0]) != len(features):
        raise ValueError(f"Got {len(predictions[0])} predictions and {len(features)} features.")

    # Build a map from each example to its corresponding features.
    example_id_to_index = {k: i for i, k in enumerate(examples["id"])}
    features_per_example = collections.defaultdict(list)
    for i, feature in enumerate(features):
        features_per_example[example_id_to_index[feature["example_id"]]].append(i)

    # The dictionaries we have to fill.
    all_predictions = collections.OrderedDict()
    all_nbest_json = collections.OrderedDict()
    scores_diff_json = collections.OrderedDict() if version_2_with_negative else None

    # Logging.
    logger.setLevel(log_level)
    logger.info(f"Post-processing {len(examples)} example predictions split into {len(features)} features.")

    # Let's loop over all the examples!
    for example_index, example in enumerate(tqdm(examples)):
        # Those are the indices of the features associated with the current example.
        feature_indices = features_per_example[example_index]

        min_null_score = None
        prelim_predictions = []

        # Looping through all the features associated with the current example.
        for feature_index in feature_indices:
            # We grab the predictions of the model for this feature.
            start_log_prob = start_top_log_probs[feature_index]
            start_indexes = start_top_index[feature_index]
            end_log_prob = end_top_log_probs[feature_index]
            end_indexes = end_top_index[feature_index]
            feature_null_score = cls_logits[feature_index]
            # This is what will allow us to map some of the positions in our logits to spans of text in the original
            # context.
            offset_mapping = features[feature_index]["offset_mapping"]
            # Optional `token_is_max_context`: if provided, we will remove answers that do not have the maximum context
            # available in the current feature.
            token_is_max_context = features[feature_index].get("token_is_max_context", None)

            # Update minimum null prediction
            if min_null_score is None or feature_null_score < min_null_score:
                min_null_score = feature_null_score

            # Go through all possibilities for the `start_n_top`/`end_n_top` greatest start and end logits.
            for i in range(start_n_top):
                for j in range(end_n_top):
                    start_index = int(start_indexes[i])
                    j_index = i * end_n_top + j
                    end_index = int(end_indexes[j_index])
                    # Don't consider out-of-scope answers (the last part of the test should be unnecessary because of
                    # the p_mask but let's not take any risk)
                    if (
                        start_index >= len(offset_mapping)
                        or end_index >= len(offset_mapping)
                        or offset_mapping[start_index] is None
                        or len(offset_mapping[start_index]) < 2
                        or offset_mapping[end_index] is None
                        or len(offset_mapping[end_index]) < 2
                    ):
                        continue

                    # Don't consider answers with a negative length or a length > max_answer_length.
                    if end_index < start_index or end_index - start_index + 1 > max_answer_length:
                        continue
                    # Don't consider answers that don't have the maximum context available (if such information is
                    # provided).
                    if token_is_max_context is not None and not token_is_max_context.get(str(start_index), False):
                        continue
                    prelim_predictions.append(
                        {
                            "offsets": (offset_mapping[start_index][0], offset_mapping[end_index][1]),
                            "score": start_log_prob[i] + end_log_prob[j_index],
                            "start_log_prob": start_log_prob[i],
                            "end_log_prob": end_log_prob[j_index],
                        }
                    )

        # Only keep the best `n_best_size` predictions.
        predictions = sorted(prelim_predictions, key=lambda x: x["score"], reverse=True)[:n_best_size]

        # Use the offsets to gather the answer text in the original context.
        context = example["context"]
        for pred in predictions:
            offsets = pred.pop("offsets")
            pred["text"] = context[offsets[0] : offsets[1]]

        # In the very rare edge case where we don't have a single non-null prediction, we create a fake prediction to
        # avoid failure.
        if len(predictions) == 0:
            # Without predictions, min_null_score is going to be None and None will cause an exception later
            min_null_score = -2e-6
            predictions.insert(0, {"text": "", "start_logit": -1e-6, "end_logit": -1e-6, "score": min_null_score})

        # Compute the softmax of all scores (we do it with numpy to stay independent from torch in this file, using
        # the LogSumExp trick).
        scores = np.array([pred.pop("score") for pred in predictions])
        exp_scores = np.exp(scores - np.max(scores))
        probs = exp_scores / exp_scores.sum()

        # Include the probabilities in our predictions.
        for prob, pred in zip(probs, predictions):
            pred["probability"] = prob

        # Pick the best prediction and set the probability for the null answer.
        all_predictions[example["id"]] = predictions[0]["text"]
        if version_2_with_negative:
            scores_diff_json[example["id"]] = float(min_null_score)

        # Make `predictions` JSON-serializable by casting np.float back to float.
        all_nbest_json[example["id"]] = [
            {k: (float(v) if isinstance(v, (np.float16, np.float32, np.float64)) else v) for k, v in pred.items()}
            for pred in predictions
        ]

    # If we have an output_dir, let's save all those dicts.
    if output_dir is not None:
        if not os.path.isdir(output_dir):
            raise OSError(f"{output_dir} is not a directory.")

        prediction_file = os.path.join(
            output_dir, "predictions.json" if prefix is None else f"{prefix}_predictions.json"
        )
        nbest_file = os.path.join(
            output_dir, "nbest_predictions.json" if prefix is None else f"{prefix}_nbest_predictions.json"
        )
        if version_2_with_negative:
            null_odds_file = os.path.join(
                output_dir, "null_odds.json" if prefix is None else f"{prefix}_null_odds.json"
            )

        logger.info(f"Saving predictions to {prediction_file}.")
        with open(prediction_file, "w") as writer:
            writer.write(json.dumps(all_predictions, indent=4) + "\n")
        logger.info(f"Saving nbest_preds to {nbest_file}.")
        with open(nbest_file, "w") as writer:
            writer.write(json.dumps(all_nbest_json, indent=4) + "\n")
        if version_2_with_negative:
            logger.info(f"Saving null_odds to {null_odds_file}.")
            with open(null_odds_file, "w") as writer:
                writer.write(json.dumps(scores_diff_json, indent=4) + "\n")

    return all_predictions, scores_diff_json
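The beam-search variant consumes the five output arrays of XLNet-style question-answering heads. A sketch with fabricated inputs; the shapes reflect my reading of the unpacking above and should be treated as an assumption:

```python
import numpy as np
from datasets import Dataset

from utils_qa import postprocess_qa_predictions_with_beam_search

# Same toy example/feature as in the previous sketch.
examples = Dataset.from_dict({"id": ["ex0"], "context": ["HuggingFace is based in NYC."]})
features = Dataset.from_dict(
    {"example_id": ["ex0"], "offset_mapping": [[(0, 0), (0, 11), (12, 14), (15, 20), (21, 23), (24, 27)]]}
)

rng = np.random.default_rng(0)
start_n_top = end_n_top = 5
predictions = (
    rng.normal(size=(1, start_n_top)).astype(np.float32),              # start_top_log_probs
    rng.integers(0, 6, size=(1, start_n_top)),                         # start_top_index
    rng.normal(size=(1, start_n_top * end_n_top)).astype(np.float32),  # end_top_log_probs
    rng.integers(0, 6, size=(1, start_n_top * end_n_top)),             # end_top_index
    rng.normal(size=(1,)).astype(np.float32),                          # cls_logits
)

answers, null_odds = postprocess_qa_predictions_with_beam_search(
    examples, features, predictions, version_2_with_negative=True
)
```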