334
transformers/examples/legacy/seq2seq/README.md
Normal file
@@ -0,0 +1,334 @@
<!---
Copyright 2020 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->

# Sequence-to-Sequence Training and Evaluation

This directory contains examples for finetuning and evaluating transformers on summarization and translation tasks.
For deprecated `bertabs` instructions, see https://github.com/huggingface/transformers-research-projects/blob/main/bertabs/README.md.

### Supported Architectures

- `BartForConditionalGeneration`
- `MarianMTModel`
- `PegasusForConditionalGeneration`
- `MBartForConditionalGeneration`
- `FSMTForConditionalGeneration`
- `T5ForConditionalGeneration`

### Download the Datasets

#### XSUM

```bash
cd examples/legacy/seq2seq
wget https://cdn-datasets.huggingface.co/summarization/xsum.tar.gz
tar -xzvf xsum.tar.gz
export XSUM_DIR=${PWD}/xsum
```
This should create a directory called `xsum/` with files like `test.source`.
To use your own data, copy that file format: each article to be summarized is on its own line.

#### CNN/DailyMail

```bash
cd examples/legacy/seq2seq
wget https://cdn-datasets.huggingface.co/summarization/cnn_dm_v2.tgz
tar -xzvf cnn_dm_v2.tgz  # empty lines removed
mv cnn_cln cnn_dm
export CNN_DIR=${PWD}/cnn_dm
```
This should create a directory called `cnn_dm/` with 6 files.

#### WMT16 English-Romanian Translation Data

Download with this command:
```bash
wget https://cdn-datasets.huggingface.co/translation/wmt_en_ro.tar.gz
tar -xzvf wmt_en_ro.tar.gz
export ENRO_DIR=${PWD}/wmt_en_ro
```
This should create a directory called `wmt_en_ro/` with 6 files.

#### WMT English-German

```bash
wget https://cdn-datasets.huggingface.co/translation/wmt_en_de.tgz
tar -xzvf wmt_en_de.tgz
export DATA_DIR=${PWD}/wmt_en_de
```

#### FSMT datasets (wmt)

Refer to the scripts starting with `eval_` under:
https://github.com/huggingface/transformers/tree/main/scripts/fsmt

#### Pegasus (multiple datasets)

Multiple eval datasets are available for download from:
https://github.com/stas00/porting/tree/master/datasets/pegasus

#### Your Data

If you are using your own data, it must be formatted as one directory with 6 files:
```
train.source
train.target
val.source
val.target
test.source
test.target
```
The `.source` files are the input, the `.target` files are the desired output.
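
As an illustration, here is a minimal sketch that writes this six-file layout; the toy in-memory pairs and the `my_seq2seq_data` directory name are made up for the example:

```python
from pathlib import Path

# Hypothetical toy data: (input, target) pairs per split.
splits = {
    "train": [("first article", "first summary"), ("second article", "second summary")],
    "val": [("val article", "val summary")],
    "test": [("test article", "test summary")],
}

data_dir = Path("my_seq2seq_data")
data_dir.mkdir(exist_ok=True)

for split, pairs in splits.items():
    # One example per line: inputs go to {split}.source, labels to {split}.target.
    with open(data_dir / f"{split}.source", "w") as src, open(data_dir / f"{split}.target", "w") as tgt:
        for article, summary in pairs:
            src.write(article + "\n")
            tgt.write(summary + "\n")
```

Line `i` of a `.source` file pairs with line `i` of the corresponding `.target` file.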

### Potential issues

- Native AMP (`--fp16` without apex) may lead to a huge memory leak, requiring 10x GPU memory. This has been fixed in pytorch-nightly, and the first official release to include the fix will be pytorch-1.7.1. Until then, if you have to use mixed precision, please use AMP only with pytorch-nightly or NVIDIA's apex. Reference: https://github.com/huggingface/transformers/issues/8403

### Tips and Tricks

General Tips:
- Since you need to run from `examples/legacy/seq2seq`, and will likely need to modify code, the easiest workflow is to fork transformers, clone your fork, and run `pip install -e .` before you get started.
- Try `--freeze_encoder` or `--freeze_embeds` for faster training/larger batch size. (3hr per epoch with bs=8, see the "xsum_shared_task" command below)
- `fp16_opt_level=O1` (the default) works best.
- In addition to the pytorch-lightning `.ckpt` checkpoint, a transformers checkpoint will be saved.
Load it with `BartForConditionalGeneration.from_pretrained(f'{output_dir}/best_tfmr')`.
- At the moment, `--do_predict` does not work in a multi-gpu setting. You need to use `evaluate_checkpoint` or the `run_eval.py` code.
- This warning can be safely ignored:
    > "Some weights of BartForConditionalGeneration were not initialized from the model checkpoint at facebook/bart-large-xsum and are newly initialized: ['final_logits_bias']"
- Both finetuning and eval are 30% faster with `--fp16`. For that you need to [install apex](https://github.com/NVIDIA/apex#quick-start).
- Read scripts before you run them!

Summarization Tips:
- (summ) 1 epoch at batch size 1 for bart-large takes 24 hours and requires 13GB GPU RAM with fp16 on an NVIDIA-V100.
- If you want to run experiments on improving the summarization finetuning process, try the XSUM Shared Task (below). It's faster to train than CNNDM because the summaries are shorter.
- For CNN/DailyMail, the default `val_max_target_length` and `test_max_target_length` will truncate the ground truth labels, resulting in slightly higher rouge scores. To get accurate rouge scores, you should rerun `calculate_rouge` on the `{output_dir}/test_generations.txt` file saved by `trainer.test()`.
- `--max_target_length=60 --val_max_target_length=60 --test_max_target_length=100` is a reasonable setting for XSUM.
- `wandb` can be used by specifying `--logger_name wandb`. It is useful for reproducibility. Specify the environment variable `WANDB_PROJECT='hf_xsum'` to do the XSUM shared task.
- If you are finetuning on your own dataset, start from `distilbart-cnn-12-6` if you want long summaries and `distilbart-xsum-12-6` if you want short summaries.
(It rarely makes sense to start from `bart-large` unless you are researching finetuning methods.)

**Update 2018-07-18**
Datasets: `LegacySeq2SeqDataset` will be used for all tokenizers without a `prepare_seq2seq_batch` method. Otherwise, `Seq2SeqDataset` will be used.
Future work/help wanted: A new dataset to support multilingual tasks.
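
The selection rule above can be sketched as follows; this is a simplified illustration of the dispatch, not the actual code in `utils.py`, and the two tokenizer classes are made-up stand-ins:

```python
def pick_dataset_cls(tokenizer) -> str:
    # Tokenizers that implement prepare_seq2seq_batch get the newer dataset;
    # all others fall back to LegacySeq2SeqDataset.
    if hasattr(tokenizer, "prepare_seq2seq_batch"):
        return "Seq2SeqDataset"
    return "LegacySeq2SeqDataset"


class ModernTokenizer:
    def prepare_seq2seq_batch(self, src_texts, tgt_texts=None, **kwargs):
        pass


class OldTokenizer:
    pass


print(pick_dataset_cls(ModernTokenizer()))  # Seq2SeqDataset
print(pick_dataset_cls(OldTokenizer()))  # LegacySeq2SeqDataset
```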

### Fine-tuning using Seq2SeqTrainer

To use `Seq2SeqTrainer` for fine-tuning you should use the `finetune_trainer.py` script. It subclasses `Trainer` to extend it for seq2seq training. Apart from the `Trainer`-related `TrainingArguments`, it shares the same argument names as the `finetune.py` file. One notable difference is that calculating generative metrics (BLEU, ROUGE) is optional and is controlled using the `--predict_with_generate` argument.

With PyTorch 1.6+ it'll automatically use native AMP when `--fp16` is set.

To see all the possible command line options, run:

```bash
python finetune_trainer.py --help
```

For multi-gpu training, launch with `torchrun`, e.g. with 2 gpus:
```bash
torchrun --nproc_per_node=2 finetune_trainer.py ...
```

**At the moment, `Seq2SeqTrainer` does not support *with teacher* distillation.**

All `Seq2SeqTrainer`-based fine-tuning scripts are included in the `builtin_trainer` directory.

#### TPU Training

`Seq2SeqTrainer` supports TPU training with a few caveats:
1. As the `generate` method does not work on TPU at the moment, `predict_with_generate` cannot be used. You should use `--prediction_loss_only` to only calculate loss, and not set `--do_predict` and `--predict_with_generate`.
2. All sequences should be padded to equal length to avoid extremely slow training. (`finetune_trainer.py` does this automatically when running on TPU.)

We provide a very simple launcher script named `xla_spawn.py` that lets you run our example scripts on multiple TPU cores without any boilerplate. Just pass a `--num_cores` flag to this script, then your regular training script with its arguments (this is similar to the `torch.distributed.launch` helper for `torch.distributed`).

The `builtin_trainer/finetune_tpu.sh` script provides the minimal arguments needed for TPU training.

The following command fine-tunes `sshleifer/student_marian_en_ro_6_3` on TPU V3-8 and should complete one epoch in ~5-6 mins.

```bash
./builtin_trainer/train_distil_marian_enro_tpu.sh
```

## Evaluation Commands

To create summaries for each article in a dataset, we use `run_eval.py`; here are a few commands that run eval for different tasks and models.
If 'translation' is in your task name, the computed metric will be BLEU. Otherwise, ROUGE will be used.

For t5, you need to specify `--task translation_{src}_to_{tgt}` as follows:
```bash
export DATA_DIR=wmt_en_ro
./run_eval.py google-t5/t5-base \
    $DATA_DIR/val.source t5_val_generations.txt \
    --reference_path $DATA_DIR/val.target \
    --score_path enro_bleu.json \
    --task translation_en_to_ro \
    --n_obs 100 \
    --device cuda \
    --fp16 \
    --bs 32
```

This command works for MBART, although the BLEU score is suspiciously low.
```bash
export DATA_DIR=wmt_en_ro
./run_eval.py facebook/mbart-large-en-ro $DATA_DIR/val.source mbart_val_generations.txt \
    --reference_path $DATA_DIR/val.target \
    --score_path enro_bleu.json \
    --task translation \
    --n_obs 100 \
    --device cuda \
    --fp16 \
    --bs 32
```

Summarization (xsum will be very similar):
```bash
export DATA_DIR=cnn_dm
./run_eval.py sshleifer/distilbart-cnn-12-6 $DATA_DIR/val.source dbart_val_generations.txt \
    --reference_path $DATA_DIR/val.target \
    --score_path cnn_rouge.json \
    --task summarization \
    --n_obs 100 \
    --device cuda \
    --fp16 \
    --bs 32
```

### Multi-GPU Evaluation

Here is a command to run xsum evaluation on 8 GPUs. It is more than linearly faster than `run_eval.py` in some cases
because it uses `SortishSampler` to minimize padding. You can also use it on 1 GPU. `data_dir` must have
`{type_path}.source` and `{type_path}.target`. Run `./run_distributed_eval.py --help` for all command line arguments.

```bash
torchrun --nproc_per_node=8 run_distributed_eval.py \
    --model_name sshleifer/distilbart-large-xsum-12-3 \
    --save_dir xsum_generations \
    --data_dir xsum \
    --fp16  # you can pass generate kwargs like num_beams here, just like run_eval.py
```

Contributions that implement this command for other distributed hardware setups are welcome!

#### Single-GPU Eval: Tips and Tricks

When using `run_eval.py`, the following features can be useful:

* If you are running the script multiple times and want to make it easier to track what arguments produced the output, use `--dump-args`. Along with the results it will also dump any custom params that were passed to the script. For example, if you used `--num_beams 8 --early_stopping true`, the output will be:
```json
{'bleu': 26.887, 'n_obs': 10, 'runtime': 1, 'seconds_per_sample': 0.1, 'num_beams': 8, 'early_stopping': True}
```

`--info` is an additional argument available for the same purpose of tracking the conditions of the experiment. It's useful for passing things that weren't in the argument list, e.g. a language pair `--info "lang:en-ru"`. If you pass `--info` without a value, it will fall back to the current date/time string, e.g. `2020-09-13 18:44:43`.

If using `--dump-args --info`, the output will be:

```json
{'bleu': 26.887, 'n_obs': 10, 'runtime': 1, 'seconds_per_sample': 0.1, 'num_beams': 8, 'early_stopping': True, 'info': '2020-09-13 18:44:43'}
```

If using `--dump-args --info "pair:en-ru chkpt=best"`, the output will be:

```json
{'bleu': 26.887, 'n_obs': 10, 'runtime': 1, 'seconds_per_sample': 0.1, 'num_beams': 8, 'early_stopping': True, 'info': 'pair:en-ru chkpt=best'}
```
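
This optional-value behavior of `--info` (the given string if supplied, a timestamp otherwise) can be reproduced with argparse's `nargs='?'`/`const`; this is a sketch of the behavior, not the actual `run_eval.py` parser:

```python
import argparse
from datetime import datetime

parser = argparse.ArgumentParser()
# --info with a value records that value; bare --info falls back to a
# date/time string computed when the parser is built.
parser.add_argument(
    "--info",
    nargs="?",
    const=datetime.now().strftime("%Y-%m-%d %H:%M:%S"),
    default=None,
)

assert parser.parse_args(["--info", "lang:en-ru"]).info == "lang:en-ru"
assert parser.parse_args([]).info is None
assert parser.parse_args(["--info"]).info is not None  # timestamp fallback
```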

* If you need to perform a parametric search to find the hparams that lead to the highest BLEU score, let `run_eval_search.py` do the searching for you.

The script accepts the exact same arguments as `run_eval.py`, plus an additional argument `--search`. The value of `--search` is parsed, reformatted and fed to `run_eval.py` as additional args.

The format for the `--search` value is a simple string with hparams and colon-separated values to try, e.g.:
```
--search "num_beams=5:10 length_penalty=0.8:1.0:1.2 early_stopping=true:false"
```
which will generate `12` (`2*3*2`) searches, one per combination in the product of the hparam values. For example, the search above will invoke `run_eval.py` repeatedly with:

```
--num_beams 5 --length_penalty 0.8 --early_stopping true
--num_beams 5 --length_penalty 0.8 --early_stopping false
[...]
--num_beams 10 --length_penalty 1.2 --early_stopping false
```
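
The expansion of a `--search` string into individual runs can be sketched with `itertools.product`; this illustrates the behavior, it is not the actual `run_eval_search.py` code:

```python
from itertools import product


def expand_search(search: str):
    # "num_beams=5:10 length_penalty=0.8:1.0" -> one arg list per combination
    keys, value_lists = [], []
    for group in search.split():
        key, values = group.split("=")
        keys.append(key)
        value_lists.append(values.split(":"))
    for combo in product(*value_lists):
        yield [f"--{k} {v}" for k, v in zip(keys, combo)]


runs = list(expand_search("num_beams=5:10 length_penalty=0.8:1.0:1.2 early_stopping=true:false"))
print(len(runs))  # 2 * 3 * 2 = 12 combinations
print(" ".join(runs[0]))  # --num_beams 5 --length_penalty 0.8 --early_stopping true
```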

On completion, this script prints a markdown table of the results, sorted by the best BLEU score, along with the winning arguments.

```
bleu | num_beams | length_penalty | early_stopping
----- | --------- | -------------- | --------------
26.71 | 5 | 1.1 | 1
26.66 | 5 | 0.9 | 1
26.66 | 5 | 0.9 | 0
26.41 | 5 | 1.1 | 0
21.94 | 1 | 0.9 | 1
21.94 | 1 | 0.9 | 0
21.94 | 1 | 1.1 | 1
21.94 | 1 | 1.1 | 0

Best score args:
stas/wmt19-en-ru data/en-ru/val.source data/en-ru/test_translations.txt --reference_path data/en-ru/val.target --score_path data/en-ru/test_bleu.json --bs 8 --task translation --num_beams 5 --length_penalty 1.1 --early_stopping True
```

If you pass `--info "some experiment-specific info"` it will get printed before the results table. This is useful for scripting and multiple runs, so one can tell the different sets of results from each other.

### Contributing

- Follow the standard contributing guidelines and code of conduct.
- Add tests to `test_seq2seq_examples.py`.
- To run only the seq2seq tests, you must be in the root of the repository and run:

```bash
pytest examples/seq2seq/
```

### Converting pytorch-lightning checkpoints

PyTorch Lightning's `--do_predict` often fails; after you are done training, the best way to evaluate your model is to convert it.

This should be done for you, producing a directory called `{save_dir}/best_tfmr`.

If that doesn't exist but you have a lightning `.ckpt` file, you can run
```bash
python convert_pl_checkpoint_to_hf.py PATH_TO_CKPT randomly_initialized_hf_model_path save_dir/best_tfmr
```
Then run either `run_eval` or `run_distributed_eval` with `save_dir/best_tfmr` (see the previous sections).

# Experimental Features

These features are harder to use and not always useful.

### Dynamic Batch Size for MT

`finetune.py` has a command line arg `--max_tokens_per_batch` that allows batches to be dynamically sized.
This feature can only be used:
- with fairseq installed
- on 1 GPU
- without sortish sampler
- after calling `./save_len_file.py $tok $data_dir`

For example,
```bash
./save_len_file.py Helsinki-NLP/opus-mt-en-ro wmt_en_ro
./dynamic_bs_example.sh --max_tokens_per_batch=2000 --output_dir benchmark_dynamic_bs
```
splits `wmt_en_ro/train` into 11,197 uneven-length batches and can finish 1 epoch in 8 minutes on a v100.

For comparison,
```bash
./dynamic_bs_example.sh --sortish_sampler --train_batch_size 48
```
uses 12,723 batches of length 48 and takes slightly more time (9.5 minutes).

The feature is still experimental, because:
+ we could make it much more robust with memory-mapped/preprocessed datasets.
+ the speedup over sortish sampler is not that large at the moment.
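
The token-budget batching idea behind `--max_tokens_per_batch` can be sketched as follows. This is a simplified greedy illustration; the hypothetical `batch_by_tokens` helper is not the fairseq-based code the feature actually uses:

```python
def batch_by_tokens(lengths, max_tokens_per_batch):
    """Greedily pack example indices (sorted by length, longest first) into
    batches whose padded size (batch_size * longest_in_batch) stays under budget."""
    order = sorted(range(len(lengths)), key=lambda i: lengths[i], reverse=True)
    batches, current, current_max = [], [], 0
    for idx in order:
        new_max = max(current_max, lengths[idx])
        # Cost of a padded batch is its width times its longest sequence.
        if current and new_max * (len(current) + 1) > max_tokens_per_batch:
            batches.append(current)
            current, current_max = [], 0
            new_max = lengths[idx]
        current.append(idx)
        current_max = new_max
    if current:
        batches.append(current)
    return batches


lengths = [5, 12, 7, 3, 9, 4]
batches = batch_by_tokens(lengths, max_tokens_per_batch=20)
# Every batch respects the budget: len(batch) * max length in batch <= 20.
assert all(len(b) * max(lengths[i] for i in b) <= 20 for b in batches)
```

Batches end up uneven in size (a few long examples per batch, many short ones), which is exactly the "uneven length batches" behavior described above.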

5
transformers/examples/legacy/seq2seq/__init__.py
Normal file
@@ -0,0 +1,5 @@
import os
import sys


sys.path.insert(1, os.path.dirname(os.path.realpath(__file__)))
36
transformers/examples/legacy/seq2seq/convert_model_to_fp16.py
Executable file
@@ -0,0 +1,36 @@
#!/usr/bin/env python
# Copyright 2020 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from typing import Union

import fire
import torch
from tqdm import tqdm


def convert(src_path: str, map_location: str = "cpu", save_path: Union[str, None] = None) -> None:
    """Convert a pytorch_model.bin or model.pt file to torch.float16 for faster downloads, less disk space."""
    state_dict = torch.load(src_path, map_location=map_location, weights_only=True)
    for k, v in tqdm(state_dict.items()):
        if not isinstance(v, torch.Tensor):
            raise TypeError("FP16 conversion only works on paths that are saved state dicts, like pytorch_model.bin")
        state_dict[k] = v.half()
    if save_path is None:  # overwrite src_path
        save_path = src_path
    torch.save(state_dict, save_path)


if __name__ == "__main__":
    fire.Fire(convert)
67
transformers/examples/legacy/seq2seq/download_wmt.py
Executable file
@@ -0,0 +1,67 @@
#!/usr/bin/env python
# Copyright 2020 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from pathlib import Path

import fire
from tqdm import tqdm


def download_wmt_dataset(src_lang="ro", tgt_lang="en", dataset="wmt16", save_dir=None) -> None:
    """Download a dataset using the datasets package and save it to the format expected by finetune.py
    Format of save_dir: train.source, train.target, val.source, val.target, test.source, test.target.

    Args:
        src_lang: <str> source language
        tgt_lang: <str> target language
        dataset: <str> wmt16, wmt17, etc. wmt16 is a good start as it's small. To get the full list run `import datasets; print([d.id for d in datasets.list_datasets() if "wmt" in d.id])`
        save_dir: <str>, where to save the datasets, defaults to f'{dataset}-{src_lang}-{tgt_lang}'

    Usage:
        >>> download_wmt_dataset('ro', 'en', dataset='wmt16') # saves to wmt16-ro-en
    """
    try:
        import datasets
    except (ModuleNotFoundError, ImportError):
        raise ImportError("run pip install datasets")
    pair = f"{src_lang}-{tgt_lang}"
    print(f"Converting {dataset}-{pair}")
    ds = datasets.load_dataset(dataset, pair)
    if save_dir is None:
        save_dir = f"{dataset}-{pair}"
    save_dir = Path(save_dir)
    save_dir.mkdir(exist_ok=True)

    for split in ds:
        print(f"Splitting {split} with {ds[split].num_rows} records")

        # to save to val.source, val.target like summary datasets
        fn = "val" if split == "validation" else split
        src_path = save_dir.joinpath(f"{fn}.source")
        tgt_path = save_dir.joinpath(f"{fn}.target")
        src_fp = src_path.open("w+")
        tgt_fp = tgt_path.open("w+")

        # reader is the bottleneck so writing one record at a time doesn't slow things down
        for x in tqdm(ds[split]):
            ex = x["translation"]
            src_fp.write(ex[src_lang] + "\n")
            tgt_fp.write(ex[tgt_lang] + "\n")

    print(f"Saved {dataset} dataset to {save_dir}")


if __name__ == "__main__":
    fire.Fire(download_wmt_dataset)
24
transformers/examples/legacy/seq2seq/finetune.sh
Normal file
@@ -0,0 +1,24 @@
# Copyright 2020 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# the proper usage is documented in the README, you need to specify data_dir, output_dir and model_name_or_path
# run ./finetune.sh --help to see all the possible options
python finetune_trainer.py \
    --learning_rate=3e-5 \
    --fp16 \
    --do_train --do_eval --do_predict \
    --eval_strategy steps \
    --predict_with_generate \
    --n_val 1000 \
    "$@"
26
transformers/examples/legacy/seq2seq/finetune_tpu.sh
Normal file
@@ -0,0 +1,26 @@
# Copyright 2020 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

export TPU_NUM_CORES=8

# the proper usage is documented in the README, you need to specify data_dir, output_dir and model_name_or_path
# run ./finetune_tpu.sh --help to see all the possible options
python xla_spawn.py --num_cores $TPU_NUM_CORES \
    finetune_trainer.py \
    --learning_rate=3e-5 \
    --do_train --do_eval \
    --eval_strategy steps \
    --prediction_loss_only \
    --n_val 1000 \
    "$@"
375
transformers/examples/legacy/seq2seq/finetune_trainer.py
Executable file
@@ -0,0 +1,375 @@
#!/usr/bin/env python
# Copyright 2020 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import logging
import os
import sys
from dataclasses import dataclass, field
from typing import Optional

from seq2seq_trainer import Seq2SeqTrainer
from seq2seq_training_args import Seq2SeqTrainingArguments

import transformers
from transformers import (
    AutoConfig,
    AutoModelForSeq2SeqLM,
    AutoTokenizer,
    HfArgumentParser,
    MBartTokenizer,
    MBartTokenizerFast,
    set_seed,
)
from transformers.trainer_utils import EvaluationStrategy, is_main_process
from transformers.training_args import ParallelMode
from utils import (
    Seq2SeqDataCollator,
    Seq2SeqDataset,
    assert_all_frozen,
    build_compute_metrics_fn,
    check_output_dir,
    freeze_embeds,
    freeze_params,
    lmap,
    save_json,
    use_task_specific_params,
    write_txt_file,
)


logger = logging.getLogger(__name__)


@dataclass
class ModelArguments:
    """
    Arguments pertaining to which model/config/tokenizer we are going to fine-tune from.
    """

    model_name_or_path: str = field(
        metadata={"help": "Path to pretrained model or model identifier from huggingface.co/models"}
    )
    config_name: Optional[str] = field(
        default=None, metadata={"help": "Pretrained config name or path if not the same as model_name"}
    )
    tokenizer_name: Optional[str] = field(
        default=None, metadata={"help": "Pretrained tokenizer name or path if not the same as model_name"}
    )
    cache_dir: Optional[str] = field(
        default=None,
        metadata={"help": "Where do you want to store the pretrained models downloaded from huggingface.co"},
    )
    freeze_encoder: bool = field(default=False, metadata={"help": "Whether to freeze the encoder."})
    freeze_embeds: bool = field(default=False, metadata={"help": "Whether to freeze the embeddings."})


@dataclass
class DataTrainingArguments:
    """
    Arguments pertaining to what data we are going to input our model for training and eval.
    """

    data_dir: str = field(
        metadata={"help": "The input data dir. Should contain the .tsv files (or other data files) for the task."}
    )
    task: Optional[str] = field(
        default="summarization",
        metadata={"help": "Task name, summarization (or summarization_{dataset} for pegasus) or translation"},
    )
    max_source_length: Optional[int] = field(
        default=1024,
        metadata={
            "help": (
                "The maximum total input sequence length after tokenization. Sequences longer "
                "than this will be truncated, sequences shorter will be padded."
            )
        },
    )
    max_target_length: Optional[int] = field(
        default=128,
        metadata={
            "help": (
                "The maximum total sequence length for target text after tokenization. Sequences longer "
                "than this will be truncated, sequences shorter will be padded."
            )
        },
    )
    val_max_target_length: Optional[int] = field(
        default=142,
        metadata={
            "help": (
                "The maximum total sequence length for validation target text after tokenization. Sequences longer "
                "than this will be truncated, sequences shorter will be padded. "
                "This argument is also used to override the ``max_length`` param of ``model.generate``, which is used "
                "during ``evaluate`` and ``predict``."
            )
        },
    )
    test_max_target_length: Optional[int] = field(
        default=142,
        metadata={
            "help": (
                "The maximum total sequence length for test target text after tokenization. Sequences longer "
                "than this will be truncated, sequences shorter will be padded."
            )
        },
    )
    n_train: Optional[int] = field(default=-1, metadata={"help": "# training examples. -1 means use all."})
    n_val: Optional[int] = field(default=-1, metadata={"help": "# validation examples. -1 means use all."})
    n_test: Optional[int] = field(default=-1, metadata={"help": "# test examples. -1 means use all."})
    src_lang: Optional[str] = field(default=None, metadata={"help": "Source language id for translation."})
    tgt_lang: Optional[str] = field(default=None, metadata={"help": "Target language id for translation."})
    eval_beams: Optional[int] = field(default=None, metadata={"help": "# num_beams to use for evaluation."})
    ignore_pad_token_for_loss: bool = field(
        default=True,
        metadata={"help": "If only pad tokens should be ignored. This assumes that `config.pad_token_id` is defined."},
    )


def handle_metrics(split, metrics, output_dir):
    """
    Log and save metrics

    Args:
    - split: one of train, val, test
    - metrics: metrics dict
    - output_dir: where to save the metrics
    """

    logger.info(f"***** {split} metrics *****")
    for key in sorted(metrics.keys()):
        logger.info(f"  {key} = {metrics[key]}")
    save_json(metrics, os.path.join(output_dir, f"{split}_results.json"))


def main():
    # See all possible arguments in src/transformers/training_args.py
    # or by passing the --help flag to this script.
    # We now keep distinct sets of args, for a cleaner separation of concerns.

    parser = HfArgumentParser((ModelArguments, DataTrainingArguments, Seq2SeqTrainingArguments))

    if len(sys.argv) == 2 and sys.argv[1].endswith(".json"):
        # If we pass only one argument to the script and it's the path to a json file,
        # let's parse it to get our arguments.
        model_args, data_args, training_args = parser.parse_json_file(json_file=os.path.abspath(sys.argv[1]))
    else:
        model_args, data_args, training_args = parser.parse_args_into_dataclasses()

    check_output_dir(training_args)

    # Setup logging
    logging.basicConfig(
        format="%(asctime)s - %(levelname)s - %(name)s - %(message)s",
        datefmt="%m/%d/%Y %H:%M:%S",
        level=logging.INFO if training_args.local_rank in [-1, 0] else logging.WARN,
    )
    logger.warning(
        "Process rank: %s, device: %s, n_gpu: %s, distributed training: %s, 16-bits training: %s",
        training_args.local_rank,
        training_args.device,
        training_args.n_gpu,
        bool(training_args.parallel_mode == ParallelMode.DISTRIBUTED),
        training_args.fp16,
    )
    transformers.utils.logging.enable_default_handler()
    transformers.utils.logging.enable_explicit_format()
    # Set the verbosity to info of the Transformers logger (on main process only):
    if is_main_process(training_args.local_rank):
        transformers.utils.logging.set_verbosity_info()
    logger.info("Training/evaluation parameters %s", training_args)

    # Set seed
    set_seed(training_args.seed)

    # Load pretrained model and tokenizer
    #
    # Distributed training:
    # The .from_pretrained methods guarantee that only one local process can concurrently
    # download model & vocab.

    config = AutoConfig.from_pretrained(
        model_args.config_name if model_args.config_name else model_args.model_name_or_path,
        cache_dir=model_args.cache_dir,
    )

    extra_model_params = ("encoder_layerdrop", "decoder_layerdrop", "dropout", "attention_dropout")
    for p in extra_model_params:
        if getattr(training_args, p, None):
            assert hasattr(config, p), f"({config.__class__.__name__}) doesn't have a `{p}` attribute"
            setattr(config, p, getattr(training_args, p))

    tokenizer = AutoTokenizer.from_pretrained(
        model_args.tokenizer_name if model_args.tokenizer_name else model_args.model_name_or_path,
        cache_dir=model_args.cache_dir,
    )
    model = AutoModelForSeq2SeqLM.from_pretrained(
        model_args.model_name_or_path,
        from_tf=".ckpt" in model_args.model_name_or_path,
        config=config,
        cache_dir=model_args.cache_dir,
    )

    # use task specific params
    use_task_specific_params(model, data_args.task)

    # set num_beams for evaluation
    if data_args.eval_beams is None:
        data_args.eval_beams = model.config.num_beams
|
||||
|
||||
# set decoder_start_token_id for MBart
|
||||
if model.config.decoder_start_token_id is None and isinstance(tokenizer, (MBartTokenizer, MBartTokenizerFast)):
|
||||
assert data_args.tgt_lang is not None and data_args.src_lang is not None, (
|
||||
"mBart requires --tgt_lang and --src_lang"
|
||||
)
|
||||
if isinstance(tokenizer, MBartTokenizer):
|
||||
model.config.decoder_start_token_id = tokenizer.lang_code_to_id[data_args.tgt_lang]
|
||||
else:
|
||||
model.config.decoder_start_token_id = tokenizer.convert_tokens_to_ids(data_args.tgt_lang)
|
||||
|
||||
if model_args.freeze_embeds:
|
||||
freeze_embeds(model)
|
||||
if model_args.freeze_encoder:
|
||||
freeze_params(model.get_encoder())
|
||||
assert_all_frozen(model.get_encoder())
|
||||
|
||||
dataset_class = Seq2SeqDataset
|
||||
|
||||
# Get datasets
|
||||
train_dataset = (
|
||||
dataset_class(
|
||||
tokenizer,
|
||||
type_path="train",
|
||||
data_dir=data_args.data_dir,
|
||||
n_obs=data_args.n_train,
|
||||
max_target_length=data_args.max_target_length,
|
||||
max_source_length=data_args.max_source_length,
|
||||
prefix=model.config.prefix or "",
|
||||
)
|
||||
if training_args.do_train
|
||||
else None
|
||||
)
|
||||
eval_dataset = (
|
||||
dataset_class(
|
||||
tokenizer,
|
||||
type_path="val",
|
||||
data_dir=data_args.data_dir,
|
||||
n_obs=data_args.n_val,
|
||||
max_target_length=data_args.val_max_target_length,
|
||||
max_source_length=data_args.max_source_length,
|
||||
prefix=model.config.prefix or "",
|
||||
)
|
||||
if training_args.do_eval or training_args.eval_strategy != EvaluationStrategy.NO
|
||||
else None
|
||||
)
|
||||
test_dataset = (
|
||||
dataset_class(
|
||||
tokenizer,
|
||||
type_path="test",
|
||||
data_dir=data_args.data_dir,
|
||||
n_obs=data_args.n_test,
|
||||
max_target_length=data_args.test_max_target_length,
|
||||
max_source_length=data_args.max_source_length,
|
||||
prefix=model.config.prefix or "",
|
||||
)
|
||||
if training_args.do_predict
|
||||
else None
|
||||
)
|
||||
|
||||
# Initialize our Trainer
|
||||
compute_metrics_fn = (
|
||||
build_compute_metrics_fn(data_args.task, tokenizer) if training_args.predict_with_generate else None
|
||||
)
|
||||
trainer = Seq2SeqTrainer(
|
||||
model=model,
|
||||
args=training_args,
|
||||
data_args=data_args,
|
||||
train_dataset=train_dataset,
|
||||
eval_dataset=eval_dataset,
|
||||
data_collator=Seq2SeqDataCollator(
|
||||
tokenizer, data_args, model.config.decoder_start_token_id, training_args.tpu_num_cores
|
||||
),
|
||||
compute_metrics=compute_metrics_fn,
|
||||
processing_class=tokenizer,
|
||||
)
|
||||
|
||||
all_metrics = {}
|
||||
# Training
|
||||
if training_args.do_train:
|
||||
logger.info("*** Train ***")
|
||||
|
||||
train_result = trainer.train(
|
||||
model_path=model_args.model_name_or_path if os.path.isdir(model_args.model_name_or_path) else None
|
||||
)
|
||||
metrics = train_result.metrics
|
||||
metrics["train_n_objs"] = data_args.n_train
|
||||
|
||||
trainer.save_model() # this also saves the tokenizer
|
||||
|
||||
if trainer.is_world_process_zero():
|
||||
handle_metrics("train", metrics, training_args.output_dir)
|
||||
all_metrics.update(metrics)
|
||||
|
||||
# Need to save the state, since Trainer.save_model saves only the tokenizer with the model
|
||||
trainer.state.save_to_json(os.path.join(training_args.output_dir, "trainer_state.json"))
|
||||
|
||||
# For convenience, we also re-save the tokenizer to the same directory,
|
||||
# so that you can share your model easily on huggingface.co/models =)
|
||||
tokenizer.save_pretrained(training_args.output_dir)
|
||||
|
||||
# Evaluation
|
||||
if training_args.do_eval:
|
||||
logger.info("*** Evaluate ***")
|
||||
|
||||
metrics = trainer.evaluate(metric_key_prefix="val")
|
||||
metrics["val_n_objs"] = data_args.n_val
|
||||
metrics["val_loss"] = round(metrics["val_loss"], 4)
|
||||
|
||||
if trainer.is_world_process_zero():
|
||||
handle_metrics("val", metrics, training_args.output_dir)
|
||||
all_metrics.update(metrics)
|
||||
|
||||
if training_args.do_predict:
|
||||
logger.info("*** Predict ***")
|
||||
|
||||
test_output = trainer.predict(test_dataset=test_dataset, metric_key_prefix="test")
|
||||
metrics = test_output.metrics
|
||||
metrics["test_n_objs"] = data_args.n_test
|
||||
|
||||
if trainer.is_world_process_zero():
|
||||
metrics["test_loss"] = round(metrics["test_loss"], 4)
|
||||
handle_metrics("test", metrics, training_args.output_dir)
|
||||
all_metrics.update(metrics)
|
||||
|
||||
if training_args.predict_with_generate:
|
||||
test_preds = tokenizer.batch_decode(
|
||||
test_output.predictions, skip_special_tokens=True, clean_up_tokenization_spaces=True
|
||||
)
|
||||
test_preds = lmap(str.strip, test_preds)
|
||||
write_txt_file(test_preds, os.path.join(training_args.output_dir, "test_generations.txt"))
|
||||
|
||||
if trainer.is_world_process_zero():
|
||||
save_json(all_metrics, os.path.join(training_args.output_dir, "all_results.json"))
|
||||
|
||||
return all_metrics
|
||||
|
||||
|
||||
def _mp_fn(index):
|
||||
# For xla_spawn (TPUs)
|
||||
main()
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
main()
|
||||
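The trainer above persists one JSON file per split through `handle_metrics`, which writes `{split}_results.json` into `output_dir`. A minimal self-contained sketch of that convention — `save_json` here is an assumed thin `json.dump` wrapper, not the actual helper from `utils.py`:

```python
import json
import os
import tempfile


def save_json(content, path):
    # assumed behavior of utils.save_json: serialize a metrics dict to disk
    with open(path, "w") as f:
        json.dump(content, f, indent=4)


def handle_metrics_sketch(split, metrics, output_dir):
    # mirrors handle_metrics: one "{split}_results.json" file per split
    save_json(metrics, os.path.join(output_dir, f"{split}_results.json"))


with tempfile.TemporaryDirectory() as d:
    handle_metrics_sketch("val", {"val_loss": 1.2345, "val_bleu": 27.1}, d)
    reloaded = json.load(open(os.path.join(d, "val_results.json")))
    print(reloaded["val_bleu"])  # 27.1
```

At the end of a run, `all_results.json` then merges the per-split dicts into a single file, which is what downstream tooling reads.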
34
transformers/examples/legacy/seq2seq/minify_dataset.py
Executable file
@@ -0,0 +1,34 @@
#!/usr/bin/env python
# Copyright 2020 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from pathlib import Path

import fire


def minify(src_dir: str, dest_dir: str, n: int):
    """Write first n lines of each file f in src_dir to dest_dir/f"""
    src_dir = Path(src_dir)
    dest_dir = Path(dest_dir)
    dest_dir.mkdir(exist_ok=True)
    for path in src_dir.iterdir():
        new = [x.rstrip() for x in list(path.open().readlines())][:n]
        dest_path = dest_dir.joinpath(path.name)
        print(dest_path)
        dest_path.open("w").write("\n".join(new))


if __name__ == "__main__":
    fire.Fire(minify)
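`minify` keeps only the first `n` lines of every file in a directory, which is how the tiny smoke-test datasets used by these examples are produced. The per-file trimming rule can be sketched without the `fire` CLI dependency (names here are illustrative, not from the repo):

```python
import tempfile
from pathlib import Path


def minify_file(src: Path, dest: Path, n: int):
    # same rule as minify(): strip trailing whitespace, keep the first n lines
    lines = [x.rstrip() for x in src.open().readlines()][:n]
    dest.write_text("\n".join(lines))


with tempfile.TemporaryDirectory() as d:
    src = Path(d) / "train.source"
    src.write_text("line1\nline2\nline3\n")
    dest = Path(d) / "mini.source"
    minify_file(src, dest, 2)
    print(dest.read_text())  # "line1\nline2"
```

`fire.Fire(minify)` simply exposes the same function as a command line tool, so positional arguments map to `src_dir`, `dest_dir`, and `n`.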
109
transformers/examples/legacy/seq2seq/old_test_calculate_rouge.py
Normal file
@@ -0,0 +1,109 @@
# Copyright 2020 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from collections import defaultdict
from pathlib import Path

import pandas as pd
from rouge_cli import calculate_rouge_path

from utils import calculate_rouge


PRED = [
    'Prosecutor: "No videos were used in the crash investigation" German papers say they saw a cell phone video of the'
    ' final seconds on board Flight 9525. The Germanwings co-pilot says he had a "previous episode of severe'
    " depression\" German airline confirms it knew of Andreas Lubitz's depression years before he took control.",
    "The Palestinian Authority officially becomes the 123rd member of the International Criminal Court. The formal"
    " accession was marked with a ceremony at The Hague, in the Netherlands. The Palestinians signed the ICC's"
    " founding Rome Statute in January. Israel and the United States opposed the Palestinians' efforts to join the"
    " body.",
    "Amnesty International releases its annual report on the death penalty. The report catalogs the use of"
    " state-sanctioned killing as a punitive measure across the globe. At least 607 people were executed around the"
    " world in 2014, compared to 778 in 2013. The U.S. remains one of the worst offenders for imposing capital"
    " punishment.",
]

TGT = [
    'Marseille prosecutor says "so far no videos were used in the crash investigation" despite media reports .'
    ' Journalists at Bild and Paris Match are "very confident" the video clip is real, an editor says . Andreas Lubitz'
    " had informed his Lufthansa training school of an episode of severe depression, airline says .",
    "Membership gives the ICC jurisdiction over alleged crimes committed in Palestinian territories since last June ."
    " Israel and the United States opposed the move, which could open the door to war crimes investigations against"
    " Israelis .",
    "Amnesty's annual death penalty report catalogs encouraging signs, but setbacks in numbers of those sentenced to"
    " death . Organization claims that governments around the world are using the threat of terrorism to advance"
    " executions . The number of executions worldwide has gone down by almost 22% compared with 2013, but death"
    " sentences up by 28% .",
]


def test_disaggregated_scores_are_deterministic():
    no_aggregation = calculate_rouge(PRED, TGT, bootstrap_aggregation=False, rouge_keys=["rouge2", "rougeL"])
    assert isinstance(no_aggregation, defaultdict)
    no_aggregation_just_r2 = calculate_rouge(PRED, TGT, bootstrap_aggregation=False, rouge_keys=["rouge2"])
    assert (
        pd.DataFrame(no_aggregation["rouge2"]).fmeasure.mean()
        == pd.DataFrame(no_aggregation_just_r2["rouge2"]).fmeasure.mean()
    )


def test_newline_cnn_improvement():
    k = "rougeLsum"
    score = calculate_rouge(PRED, TGT, newline_sep=True, rouge_keys=[k])[k]
    score_no_sep = calculate_rouge(PRED, TGT, newline_sep=False, rouge_keys=[k])[k]
    assert score > score_no_sep


def test_newline_irrelevant_for_other_metrics():
    k = ["rouge1", "rouge2", "rougeL"]
    score_sep = calculate_rouge(PRED, TGT, newline_sep=True, rouge_keys=k)
    score_no_sep = calculate_rouge(PRED, TGT, newline_sep=False, rouge_keys=k)
    assert score_sep == score_no_sep


def test_single_sent_scores_dont_depend_on_newline_sep():
    pred = [
        "Her older sister, Margot Frank, died in 1945, a month earlier than previously thought.",
        'Marseille prosecutor says "so far no videos were used in the crash investigation" despite media reports .',
    ]
    tgt = [
        "Margot Frank, died in 1945, a month earlier than previously thought.",
        'Prosecutor: "No videos were used in the crash investigation" German papers say they saw a cell phone video of'
        " the final seconds on board Flight 9525.",
    ]
    assert calculate_rouge(pred, tgt, newline_sep=True) == calculate_rouge(pred, tgt, newline_sep=False)


def test_pegasus_newline():
    pred = [
        """" "a person who has such a video needs to immediately give it to the investigators," prosecutor says .<n> "it is a very disturbing scene," editor-in-chief of bild online tells "erin burnett: outfront" """
    ]
    tgt = [
        """ Marseille prosecutor says "so far no videos were used in the crash investigation" despite media reports . Journalists at Bild and Paris Match are "very confident" the video clip is real, an editor says . Andreas Lubitz had informed his Lufthansa training school of an episode of severe depression, airline says ."""
    ]

    prev_score = calculate_rouge(pred, tgt, rouge_keys=["rougeLsum"], newline_sep=False)["rougeLsum"]
    new_score = calculate_rouge(pred, tgt, rouge_keys=["rougeLsum"])["rougeLsum"]
    assert new_score > prev_score


def test_rouge_cli():
    data_dir = Path("examples/seq2seq/test_data/wmt_en_ro")
    metrics = calculate_rouge_path(data_dir.joinpath("test.source"), data_dir.joinpath("test.target"))
    assert isinstance(metrics, dict)
    metrics_default_dict = calculate_rouge_path(
        data_dir.joinpath("test.source"), data_dir.joinpath("test.target"), bootstrap_aggregation=False
    )
    assert isinstance(metrics_default_dict, defaultdict)
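`test_newline_cnn_improvement` passes because `rougeLsum` computes longest-common-subsequence overlap per line, treating `"\n"` as the sentence boundary; `newline_sep=True` therefore re-inserts sentence breaks before scoring. A minimal sketch of that preprocessing step — the real helper in `utils.py` uses proper sentence tokenization, so this regex splitter is an assumption/simplification:

```python
import re


def sentence_per_line(x: str) -> str:
    # simplified stand-in for the newline_sep preprocessing: put each
    # sentence on its own line so rougeLsum scores them separately
    return "\n".join(re.split(r"(?<=[.!?]) +", x))


print(sentence_per_line("First sentence. Second one!"))
# First sentence.
# Second one!
```

Without this step, a multi-sentence summary is scored as one long sequence, which is why `rougeLsum` drops while `rouge1`/`rouge2`/`rougeL` are unaffected.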
247
transformers/examples/legacy/seq2seq/old_test_datasets.py
Normal file
@@ -0,0 +1,247 @@
# Copyright 2020 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import os
from pathlib import Path

import numpy as np
import pytest
from pack_dataset import pack_data_dir
from parameterized import parameterized
from save_len_file import save_len_file
from torch.utils.data import DataLoader

from transformers import AutoTokenizer
from transformers.models.mbart.modeling_mbart import shift_tokens_right
from transformers.testing_utils import TestCasePlus, slow
from utils import FAIRSEQ_AVAILABLE, DistributedSortishSampler, LegacySeq2SeqDataset, Seq2SeqDataset


BERT_BASE_CASED = "google-bert/bert-base-cased"
PEGASUS_XSUM = "google/pegasus-xsum"
ARTICLES = [" Sam ate lunch today.", "Sams lunch ingredients."]
SUMMARIES = ["A very interesting story about what I ate for lunch.", "Avocado, celery, turkey, coffee"]
T5_TINY = "patrickvonplaten/t5-tiny-random"
BART_TINY = "sshleifer/bart-tiny-random"
MBART_TINY = "sshleifer/tiny-mbart"
MARIAN_TINY = "sshleifer/tiny-marian-en-de"


def _dump_articles(path: Path, articles: list):
    content = "\n".join(articles)
    Path(path).open("w").writelines(content)


def make_test_data_dir(tmp_dir):
    for split in ["train", "val", "test"]:
        _dump_articles(os.path.join(tmp_dir, f"{split}.source"), ARTICLES)
        _dump_articles(os.path.join(tmp_dir, f"{split}.target"), SUMMARIES)
    return tmp_dir


class TestAll(TestCasePlus):
    @parameterized.expand(
        [
            MBART_TINY,
            MARIAN_TINY,
            T5_TINY,
            BART_TINY,
            PEGASUS_XSUM,
        ],
    )
    @slow
    def test_seq2seq_dataset_truncation(self, tok_name):
        tokenizer = AutoTokenizer.from_pretrained(tok_name)
        tmp_dir = make_test_data_dir(tmp_dir=self.get_auto_remove_tmp_dir())
        max_len_source = max(len(tokenizer.encode(a)) for a in ARTICLES)
        max_len_target = max(len(tokenizer.encode(a)) for a in SUMMARIES)
        max_src_len = 4
        max_tgt_len = 8
        assert max_len_target > max_src_len  # Will be truncated
        assert max_len_source > max_src_len  # Will be truncated
        src_lang, tgt_lang = "ro_RO", "de_DE"  # ignored for all but mbart, but never causes error.
        train_dataset = Seq2SeqDataset(
            tokenizer,
            data_dir=tmp_dir,
            type_path="train",
            max_source_length=max_src_len,
            max_target_length=max_tgt_len,  # ignored
            src_lang=src_lang,
            tgt_lang=tgt_lang,
        )
        dataloader = DataLoader(train_dataset, batch_size=2, collate_fn=train_dataset.collate_fn)
        for batch in dataloader:
            assert isinstance(batch, dict)
            assert batch["attention_mask"].shape == batch["input_ids"].shape
            # show that articles were trimmed.
            assert batch["input_ids"].shape[1] == max_src_len
            # show that targets are the same len
            assert batch["labels"].shape[1] == max_tgt_len
            if tok_name != MBART_TINY:
                continue
            # check language codes in correct place
            batch["decoder_input_ids"] = shift_tokens_right(batch["labels"], tokenizer.pad_token_id)
            assert batch["decoder_input_ids"][0, 0].item() == tokenizer.lang_code_to_id[tgt_lang]
            assert batch["decoder_input_ids"][0, -1].item() == tokenizer.eos_token_id
            assert batch["input_ids"][0, -2].item() == tokenizer.eos_token_id
            assert batch["input_ids"][0, -1].item() == tokenizer.lang_code_to_id[src_lang]

            break  # No need to test every batch

    @parameterized.expand([BART_TINY, BERT_BASE_CASED])
    def test_legacy_dataset_truncation(self, tok):
        tokenizer = AutoTokenizer.from_pretrained(tok)
        tmp_dir = make_test_data_dir(tmp_dir=self.get_auto_remove_tmp_dir())
        max_len_source = max(len(tokenizer.encode(a)) for a in ARTICLES)
        max_len_target = max(len(tokenizer.encode(a)) for a in SUMMARIES)
        trunc_target = 4
        train_dataset = LegacySeq2SeqDataset(
            tokenizer,
            data_dir=tmp_dir,
            type_path="train",
            max_source_length=20,
            max_target_length=trunc_target,
        )
        dataloader = DataLoader(train_dataset, batch_size=2, collate_fn=train_dataset.collate_fn)
        for batch in dataloader:
            assert batch["attention_mask"].shape == batch["input_ids"].shape
            # show that articles were trimmed.
            assert batch["input_ids"].shape[1] == max_len_source
            assert 20 >= batch["input_ids"].shape[1]  # trimmed significantly
            # show that targets were truncated
            assert batch["labels"].shape[1] == trunc_target  # Truncated
            assert max_len_target > trunc_target  # Truncated
            break  # No need to test every batch

    def test_pack_dataset(self):
        tokenizer = AutoTokenizer.from_pretrained("facebook/mbart-large-cc25")

        tmp_dir = Path(make_test_data_dir(tmp_dir=self.get_auto_remove_tmp_dir()))
        orig_examples = tmp_dir.joinpath("train.source").open().readlines()
        save_dir = Path(make_test_data_dir(tmp_dir=self.get_auto_remove_tmp_dir()))
        pack_data_dir(tokenizer, tmp_dir, 128, save_dir)
        orig_paths = {x.name for x in tmp_dir.iterdir()}
        new_paths = {x.name for x in save_dir.iterdir()}
        packed_examples = save_dir.joinpath("train.source").open().readlines()
        # orig: [' Sam ate lunch today.\n', 'Sams lunch ingredients.']
        # desired_packed: [' Sam ate lunch today.\n Sams lunch ingredients.']
        assert len(packed_examples) < len(orig_examples)
        assert len(packed_examples) == 1
        assert len(packed_examples[0]) == sum(len(x) for x in orig_examples)
        assert orig_paths == new_paths

    @pytest.mark.skipif(not FAIRSEQ_AVAILABLE, reason="This test requires fairseq")
    def test_dynamic_batch_size(self):
        if not FAIRSEQ_AVAILABLE:
            return
        ds, max_tokens, tokenizer = self._get_dataset(max_len=64)
        required_batch_size_multiple = 64
        batch_sampler = ds.make_dynamic_sampler(max_tokens, required_batch_size_multiple=required_batch_size_multiple)
        batch_sizes = [len(x) for x in batch_sampler]
        assert len(set(batch_sizes)) > 1  # it's not dynamic batch size if every batch is the same length
        assert sum(batch_sizes) == len(ds)  # no dropped or added examples
        data_loader = DataLoader(ds, batch_sampler=batch_sampler, collate_fn=ds.collate_fn, num_workers=2)
        failures = []
        num_src_per_batch = []
        for batch in data_loader:
            src_shape = batch["input_ids"].shape
            bs = src_shape[0]
            assert bs % required_batch_size_multiple == 0 or bs < required_batch_size_multiple
            num_src_tokens = np.prod(batch["input_ids"].shape)
            num_src_per_batch.append(num_src_tokens)
            if num_src_tokens > (max_tokens * 1.1):
                failures.append(num_src_tokens)
        assert num_src_per_batch[0] == max(num_src_per_batch)
        if failures:
            raise AssertionError(f"too many tokens in {len(failures)} batches")

    def test_sortish_sampler_reduces_padding(self):
        ds, _, tokenizer = self._get_dataset(max_len=512)
        bs = 2
        sortish_sampler = ds.make_sortish_sampler(bs, shuffle=False)

        naive_dl = DataLoader(ds, batch_size=bs, collate_fn=ds.collate_fn, num_workers=2)
        sortish_dl = DataLoader(ds, batch_size=bs, collate_fn=ds.collate_fn, num_workers=2, sampler=sortish_sampler)

        pad = tokenizer.pad_token_id

        def count_pad_tokens(data_loader, k="input_ids"):
            return [batch[k].eq(pad).sum().item() for batch in data_loader]

        assert sum(count_pad_tokens(sortish_dl, k="labels")) < sum(count_pad_tokens(naive_dl, k="labels"))
        assert sum(count_pad_tokens(sortish_dl)) < sum(count_pad_tokens(naive_dl))
        assert len(sortish_dl) == len(naive_dl)

    def _get_dataset(self, n_obs=1000, max_len=128):
        if os.getenv("USE_REAL_DATA", None):
            data_dir = "examples/seq2seq/wmt_en_ro"
            max_tokens = max_len * 2 * 64
            if not Path(data_dir).joinpath("train.len").exists():
                save_len_file(MARIAN_TINY, data_dir)
        else:
            data_dir = "examples/seq2seq/test_data/wmt_en_ro"
            max_tokens = max_len * 4
            save_len_file(MARIAN_TINY, data_dir)

        tokenizer = AutoTokenizer.from_pretrained(MARIAN_TINY)
        ds = Seq2SeqDataset(
            tokenizer,
            data_dir=data_dir,
            type_path="train",
            max_source_length=max_len,
            max_target_length=max_len,
            n_obs=n_obs,
        )
        return ds, max_tokens, tokenizer

    def test_distributed_sortish_sampler_splits_indices_between_procs(self):
        ds, max_tokens, tokenizer = self._get_dataset()
        ids1 = set(DistributedSortishSampler(ds, 256, num_replicas=2, rank=0, add_extra_examples=False))
        ids2 = set(DistributedSortishSampler(ds, 256, num_replicas=2, rank=1, add_extra_examples=False))
        assert ids1.intersection(ids2) == set()

    @parameterized.expand(
        [
            MBART_TINY,
            MARIAN_TINY,
            T5_TINY,
            BART_TINY,
            PEGASUS_XSUM,
        ],
    )
    def test_dataset_kwargs(self, tok_name):
        tokenizer = AutoTokenizer.from_pretrained(tok_name, use_fast=False)
        if tok_name == MBART_TINY:
            train_dataset = Seq2SeqDataset(
                tokenizer,
                data_dir=make_test_data_dir(tmp_dir=self.get_auto_remove_tmp_dir()),
                type_path="train",
                max_source_length=4,
                max_target_length=8,
                src_lang="EN",
                tgt_lang="FR",
            )
            kwargs = train_dataset.dataset_kwargs
            assert "src_lang" in kwargs and "tgt_lang" in kwargs
        else:
            train_dataset = Seq2SeqDataset(
                tokenizer,
                data_dir=make_test_data_dir(tmp_dir=self.get_auto_remove_tmp_dir()),
                type_path="train",
                max_source_length=4,
                max_target_length=8,
            )
            kwargs = train_dataset.dataset_kwargs
            assert "add_prefix_space" not in kwargs if tok_name != BART_TINY else "add_prefix_space" in kwargs
            assert len(kwargs) == 1 if tok_name == BART_TINY else len(kwargs) == 0
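`test_sortish_sampler_reduces_padding` above relies on a simple fact: each batch is padded to its own longest sequence, so grouping similar-length sequences wastes fewer pad tokens than batching them in arbitrary order. A self-contained toy illustration (plain lists, illustrative names, no torch):

```python
def pad_count(batches):
    # pad tokens needed to right-pad every sequence in each batch
    # to that batch's own max length
    total = 0
    for batch in batches:
        width = max(len(seq) for seq in batch)
        total += sum(width - len(seq) for seq in batch)
    return total


seqs = [[1] * n for n in [1, 9, 2, 8, 3, 7]]  # mixed lengths
naive = [seqs[i : i + 2] for i in range(0, 6, 2)]  # length pairs (1,9), (2,8), (3,7)
sorted_seqs = sorted(seqs, key=len)
sortish = [sorted_seqs[i : i + 2] for i in range(0, 6, 2)]  # length pairs (1,2), (3,7), (8,9)
print(pad_count(naive), pad_count(sortish))  # 18 6
```

The actual `make_sortish_sampler` only sorts approximately (within shuffled chunks) so that training order still varies, but the padding saving is the same effect.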
@@ -0,0 +1,70 @@
# Copyright 2020 Huggingface
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import json
import unittest

from parameterized import parameterized

from transformers import FSMTForConditionalGeneration, FSMTTokenizer
from transformers.testing_utils import get_tests_dir, require_torch, slow, torch_device
from utils import calculate_bleu


filename = get_tests_dir() + "/test_data/fsmt/fsmt_val_data.json"
with open(filename, encoding="utf-8") as f:
    bleu_data = json.load(f)


@require_torch
class ModelEvalTester(unittest.TestCase):
    def get_tokenizer(self, mname):
        return FSMTTokenizer.from_pretrained(mname)

    def get_model(self, mname):
        model = FSMTForConditionalGeneration.from_pretrained(mname).to(torch_device)
        if torch_device == "cuda":
            model.half()
        return model

    @parameterized.expand(
        [
            ["en-ru", 26.0],
            ["ru-en", 22.0],
            ["en-de", 22.0],
            ["de-en", 29.0],
        ]
    )
    @slow
    def test_bleu_scores(self, pair, min_bleu_score):
        # note: this test is not testing the best performance since it only evals a small batch
        # but it should be enough to detect a regression in the output quality
        mname = f"facebook/wmt19-{pair}"
        tokenizer = self.get_tokenizer(mname)
        model = self.get_model(mname)

        src_sentences = bleu_data[pair]["src"]
        tgt_sentences = bleu_data[pair]["tgt"]

        batch = tokenizer(src_sentences, return_tensors="pt", truncation=True, padding="longest").to(torch_device)
        outputs = model.generate(
            input_ids=batch.input_ids,
            num_beams=8,
        )
        decoded_sentences = tokenizer.batch_decode(
            outputs, skip_special_tokens=True, clean_up_tokenization_spaces=False
        )
        scores = calculate_bleu(decoded_sentences, tgt_sentences)
        print(scores)
        self.assertGreaterEqual(scores["bleu"], min_bleu_score)
@@ -0,0 +1,132 @@
# Copyright 2020 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import logging
import os
import sys
from pathlib import Path
from unittest.mock import patch

from parameterized import parameterized
from run_eval import run_generate
from run_eval_search import run_search

from transformers.testing_utils import CaptureStdout, TestCasePlus, slow
from utils import ROUGE_KEYS


logging.basicConfig(level=logging.DEBUG)
logger = logging.getLogger()


def _dump_articles(path: Path, articles: list):
    content = "\n".join(articles)
    Path(path).open("w").writelines(content)


T5_TINY = "patrickvonplaten/t5-tiny-random"
BART_TINY = "sshleifer/bart-tiny-random"
MBART_TINY = "sshleifer/tiny-mbart"

stream_handler = logging.StreamHandler(sys.stdout)
logger.addHandler(stream_handler)
logging.disable(logging.CRITICAL)  # remove noisy download output from tracebacks


class TestTheRest(TestCasePlus):
    def run_eval_tester(self, model):
        input_file_name = Path(self.get_auto_remove_tmp_dir()) / "utest_input.source"
        output_file_name = input_file_name.parent / "utest_output.txt"
        assert not output_file_name.exists()
        articles = [" New York (CNN)When Liana Barrientos was 23 years old, she got married in Westchester County."]
        _dump_articles(input_file_name, articles)

        score_path = str(Path(self.get_auto_remove_tmp_dir()) / "scores.json")
        task = "translation_en_to_de" if model == T5_TINY else "summarization"
        testargs = f"""
            run_eval_search.py
            {model}
            {input_file_name}
            {output_file_name}
            --score_path {score_path}
            --task {task}
            --num_beams 2
            --length_penalty 2.0
            """.split()

        with patch.object(sys, "argv", testargs):
            run_generate()
            assert Path(output_file_name).exists()
            # os.remove(Path(output_file_name))

    # test one model to quickly (no-@slow) catch simple problems and do an
    # extensive testing of functionality with multiple models as @slow separately
    def test_run_eval(self):
        self.run_eval_tester(T5_TINY)

    # any extra models should go into the list here - can be slow
    @parameterized.expand([BART_TINY, MBART_TINY])
    @slow
    def test_run_eval_slow(self, model):
        self.run_eval_tester(model)

    # testing with 2 models to validate: 1. translation (t5) 2. summarization (mbart)
    @parameterized.expand([T5_TINY, MBART_TINY])
    @slow
    def test_run_eval_search(self, model):
        input_file_name = Path(self.get_auto_remove_tmp_dir()) / "utest_input.source"
        output_file_name = input_file_name.parent / "utest_output.txt"
        assert not output_file_name.exists()

        text = {
            "en": ["Machine learning is great, isn't it?", "I like to eat bananas", "Tomorrow is another great day!"],
            "de": [
                "Maschinelles Lernen ist großartig, oder?",
                "Ich esse gerne Bananen",
                "Morgen ist wieder ein toller Tag!",
            ],
        }

        tmp_dir = Path(self.get_auto_remove_tmp_dir())
        score_path = str(tmp_dir / "scores.json")
        reference_path = str(tmp_dir / "val.target")
        _dump_articles(input_file_name, text["en"])
        _dump_articles(reference_path, text["de"])
        task = "translation_en_to_de" if model == T5_TINY else "summarization"
        testargs = f"""
            run_eval_search.py
            {model}
            {str(input_file_name)}
            {str(output_file_name)}
            --score_path {score_path}
            --reference_path {reference_path}
            --task {task}
            """.split()
        testargs.extend(["--search", "num_beams=1:2 length_penalty=0.9:1.0"])

        with patch.object(sys, "argv", testargs):
            with CaptureStdout() as cs:
                run_search()
            expected_strings = [" num_beams | length_penalty", model, "Best score args"]
            un_expected_strings = ["Info"]
            if "translation" in task:
                expected_strings.append("bleu")
            else:
                expected_strings.extend(ROUGE_KEYS)
            for w in expected_strings:
                assert w in cs.out
            for w in un_expected_strings:
                assert w not in cs.out
            assert Path(output_file_name).exists()
            os.remove(Path(output_file_name))
|
||||
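The `--search` value in the test above (`"num_beams=1:2 length_penalty=0.9:1.0"`) encodes a small hyperparameter grid. As a rough illustration of what such a string expands to — the parsing here is a hypothetical sketch, not the actual `run_eval_search` implementation — each `key=v1:v2` clause contributes one axis of a cartesian product:

```python
import itertools


def expand_search(search: str) -> list[dict]:
    # "num_beams=1:2 length_penalty=0.9:1.0" -> one dict per grid point.
    grid = {}
    for clause in search.split():
        key, _, vals = clause.partition("=")
        grid[key] = vals.split(":")
    keys = list(grid)
    return [dict(zip(keys, combo)) for combo in itertools.product(*grid.values())]


print(expand_search("num_beams=1:2 length_penalty=0.9:1.0"))
# → [{'num_beams': '1', 'length_penalty': '0.9'}, {'num_beams': '1', 'length_penalty': '1.0'},
#    {'num_beams': '2', 'length_penalty': '0.9'}, {'num_beams': '2', 'length_penalty': '1.0'}]
```

With 2 values for each of 2 parameters, the script would run evaluation 4 times and report the best-scoring combination.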
@@ -0,0 +1,55 @@
# Copyright 2020 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Because of their complexity, multi-GPU tests could impact other tests; to aid debugging, we keep them in a separate module.

import os
import sys

from transformers.testing_utils import TestCasePlus, execute_subprocess_async, get_gpu_count, require_torch_gpu, slow

from .utils import load_json


class TestSummarizationDistillerMultiGPU(TestCasePlus):
    @classmethod
    def setUpClass(cls):
        return cls

    @slow
    @require_torch_gpu
    def test_distributed_eval(self):
        output_dir = self.get_auto_remove_tmp_dir()
        args = f"""
            --model_name Helsinki-NLP/opus-mt-en-ro
            --save_dir {output_dir}
            --data_dir {self.test_file_dir_str}/test_data/wmt_en_ro
            --num_beams 2
            --task translation
        """.split()

        # we want this test to run even if there is only one GPU, but if there are more we use them all
        n_gpu = get_gpu_count()
        distributed_args = f"""
            -m torch.distributed.launch
            --nproc_per_node={n_gpu}
            {self.test_file_dir}/run_distributed_eval.py
        """.split()
        cmd = [sys.executable] + distributed_args + args
        execute_subprocess_async(cmd, env=self.get_env())

        metrics_save_path = os.path.join(output_dir, "test_bleu.json")
        metrics = load_json(metrics_save_path)
        # print(metrics)
        self.assertGreaterEqual(metrics["bleu"], 25)
@@ -0,0 +1,38 @@
# Copyright 2020 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import os
import tempfile
import unittest

from transformers.models.marian.convert_marian_tatoeba_to_pytorch import DEFAULT_REPO, TatoebaConverter
from transformers.testing_utils import slow
from transformers.utils import cached_property


@unittest.skipUnless(os.path.exists(DEFAULT_REPO), "Tatoeba directory does not exist.")
class TatoebaConversionTester(unittest.TestCase):
    @cached_property
    def resolver(self):
        tmp_dir = tempfile.mkdtemp()
        return TatoebaConverter(save_dir=tmp_dir)

    @slow
    def test_resolver(self):
        self.resolver.convert_models(["heb-eng"])

    @slow
    def test_model_card(self):
        content, mmeta = self.resolver.write_model_card("opus-mt-he-en", dry_run=True)
        assert mmeta["long_pair"] == "heb-eng"
87
transformers/examples/legacy/seq2seq/pack_dataset.py
Executable file
@@ -0,0 +1,87 @@
#!/usr/bin/env python
# Copyright 2020 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Fill examples with bitext up to max_tokens without breaking up examples.
[['I went', 'yo fui'],
['to the store', 'a la tienda']
]
=> ['I went to the store', 'yo fui a la tienda']
"""

import argparse
import shutil
from pathlib import Path

from tqdm import tqdm

from transformers import AutoTokenizer


def pack_examples(tok, src_examples, tgt_examples, max_tokens=1024):
    finished_src, finished_tgt = [], []

    sorted_examples = list(zip(src_examples, tgt_examples))
    new_src, new_tgt = sorted_examples[0]

    def is_too_big(strang):
        return tok(strang, return_tensors="pt").input_ids.shape[1] > max_tokens

    for src, tgt in tqdm(sorted_examples[1:]):
        cand_src = new_src + " " + src
        cand_tgt = new_tgt + " " + tgt
        if is_too_big(cand_src) or is_too_big(cand_tgt):  # can't fit, finalize example
            finished_src.append(new_src)
            finished_tgt.append(new_tgt)
            new_src, new_tgt = src, tgt
        else:  # can fit, keep adding
            new_src, new_tgt = cand_src, cand_tgt

    # cleanup
    if new_src:
        assert new_tgt
        finished_src.append(new_src)
        finished_tgt.append(new_tgt)
    return finished_src, finished_tgt


def pack_data_dir(tok, data_dir: Path, max_tokens, save_path):
    save_path = Path(save_path)
    save_path.mkdir(exist_ok=True)
    for split in ["train"]:
        src_path, tgt_path = data_dir / f"{split}.source", data_dir / f"{split}.target"
        src_docs = [x.rstrip() for x in Path(src_path).open().readlines()]
        tgt_docs = [x.rstrip() for x in Path(tgt_path).open().readlines()]
        packed_src, packed_tgt = pack_examples(tok, src_docs, tgt_docs, max_tokens)
        print(f"packed {split} split from {len(src_docs)} examples -> {len(packed_src)}.")
        Path(save_path / f"{split}.source").open("w").write("\n".join(packed_src))
        Path(save_path / f"{split}.target").open("w").write("\n".join(packed_tgt))
    for split in ["val", "test"]:
        src_path, tgt_path = data_dir / f"{split}.source", data_dir / f"{split}.target"
        shutil.copyfile(src_path, save_path / f"{split}.source")
        shutil.copyfile(tgt_path, save_path / f"{split}.target")


def packer_cli():
    parser = argparse.ArgumentParser()
    parser.add_argument("--tok_name", type=str, help="like facebook/bart-large-cnn,google-t5/t5-base, etc.")
    parser.add_argument("--max_seq_len", type=int, default=128)
    parser.add_argument("--data_dir", type=str)
    parser.add_argument("--save_path", type=str)
    args = parser.parse_args()
    tokenizer = AutoTokenizer.from_pretrained(args.tok_name)
    return pack_data_dir(tokenizer, Path(args.data_dir), args.max_seq_len, args.save_path)


if __name__ == "__main__":
    packer_cli()
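The greedy loop in `pack_examples` above can be illustrated with a tokenizer-free stand-in, where whitespace splitting replaces the real tokenizer. This is a sketch of the same merge-until-full logic, not a drop-in replacement:

```python
# Greedy packing sketch: merge adjacent (src, tgt) pairs until either side
# would exceed max_tokens. A whitespace split stands in for a real tokenizer.
def pack_pairs(src_examples, tgt_examples, max_tokens=8):
    def too_big(s):
        return len(s.split()) > max_tokens

    packed_src, packed_tgt = [], []
    cur_src, cur_tgt = src_examples[0], tgt_examples[0]
    for src, tgt in zip(src_examples[1:], tgt_examples[1:]):
        cand_src, cand_tgt = cur_src + " " + src, cur_tgt + " " + tgt
        if too_big(cand_src) or too_big(cand_tgt):  # can't fit: finalize
            packed_src.append(cur_src)
            packed_tgt.append(cur_tgt)
            cur_src, cur_tgt = src, tgt
        else:  # fits: keep accumulating
            cur_src, cur_tgt = cand_src, cand_tgt
    packed_src.append(cur_src)
    packed_tgt.append(cur_tgt)
    return packed_src, packed_tgt


print(pack_pairs(["I went", "to the store"], ["yo fui", "a la tienda"]))
# → (['I went to the store'], ['yo fui a la tienda'])
```

This reproduces the docstring's example: two short bitext pairs get merged into one, since both sides stay under the token budget.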
20
transformers/examples/legacy/seq2seq/requirements.txt
Normal file
@@ -0,0 +1,20 @@
tensorboard
scikit-learn
seqeval
psutil
sacrebleu
rouge-score
tensorflow_datasets
matplotlib
git-python==1.0.3
faiss-cpu
streamlit
elasticsearch
nltk
pandas
datasets >= 1.1.3
fire
pytest<8.0.1
conllu
sentencepiece != 0.1.92
protobuf
@@ -0,0 +1,65 @@
### Motivation

Without processing, the English → Romanian mbart-large-en-ro model gets a BLEU score of 26.8 on the WMT data.
With post-processing, it can score 37.
Here is the post-processing code, stolen from @mjpost in this [issue](https://github.com/pytorch/fairseq/issues/1758).

### Instructions

Note: You need to have your test_generations.txt before you start this process.
(1) Set up `mosesdecoder` and `wmt16-scripts`
```bash
cd $HOME
git clone git@github.com:moses-smt/mosesdecoder.git
cd mosesdecoder
git clone git@github.com:rsennrich/wmt16-scripts.git
```

(2) Define a function for post-processing.
It removes diacritics and does other things I don't understand
```bash
ro_post_process () {
    sys=$1
    ref=$2
    export MOSES_PATH=$HOME/mosesdecoder
    REPLACE_UNICODE_PUNCT=$MOSES_PATH/scripts/tokenizer/replace-unicode-punctuation.perl
    NORM_PUNC=$MOSES_PATH/scripts/tokenizer/normalize-punctuation.perl
    REM_NON_PRINT_CHAR=$MOSES_PATH/scripts/tokenizer/remove-non-printing-char.perl
    REMOVE_DIACRITICS=$MOSES_PATH/wmt16-scripts/preprocess/remove-diacritics.py
    NORMALIZE_ROMANIAN=$MOSES_PATH/wmt16-scripts/preprocess/normalise-romanian.py
    TOKENIZER=$MOSES_PATH/scripts/tokenizer/tokenizer.perl

    lang=ro
    for file in $sys $ref; do
        cat $file \
        | $REPLACE_UNICODE_PUNCT \
        | $NORM_PUNC -l $lang \
        | $REM_NON_PRINT_CHAR \
        | $NORMALIZE_ROMANIAN \
        | $REMOVE_DIACRITICS \
        | $TOKENIZER -no-escape -l $lang \
        > $(basename $file).tok
    done
    # compute BLEU
    cat $(basename $sys).tok | sacrebleu -tok none -s none -b $(basename $ref).tok
}
```

(3) Call the function on test_generations.txt and test.target
For example,
```bash
ro_post_process enro_finetune/test_generations.txt wmt_en_ro/test.target
```
This will spit out a new BLEU score and write a new file called `test_generations.tok` with post-processed outputs.
31
transformers/examples/legacy/seq2seq/rouge_cli.py
Normal file
@@ -0,0 +1,31 @@
# Copyright 2020 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import fire

from utils import calculate_rouge, save_json


def calculate_rouge_path(pred_path, tgt_path, save_path=None, **kwargs):
    """Kwargs will be passed to calculate_rouge"""
    pred_lns = [x.strip() for x in open(pred_path).readlines()]
    tgt_lns = [x.strip() for x in open(tgt_path).readlines()][: len(pred_lns)]
    metrics = calculate_rouge(pred_lns, tgt_lns, **kwargs)
    if save_path is not None:
        save_json(metrics, save_path, indent=None)
    return metrics  # these print nicely


if __name__ == "__main__":
    fire.Fire(calculate_rouge_path)
262
transformers/examples/legacy/seq2seq/run_distributed_eval.py
Executable file
@@ -0,0 +1,262 @@
#!/usr/bin/env python
# Copyright 2020 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import argparse
import shutil
import time
from json import JSONDecodeError
from logging import getLogger
from pathlib import Path
from typing import Optional

import torch
from torch.utils.data import DataLoader
from tqdm import tqdm

from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
from utils import (
    Seq2SeqDataset,
    calculate_bleu,
    calculate_rouge,
    chunks,
    lmap,
    load_json,
    parse_numeric_n_bool_cl_kwargs,
    save_json,
    use_task_specific_params,
    write_txt_file,
)


logger = getLogger(__name__)


def eval_data_dir(
    data_dir,
    save_dir: str,
    model_name: str,
    bs: int = 8,
    max_source_length: int = 1024,
    type_path="val",
    n_obs=None,
    fp16=False,
    task="summarization",
    local_rank=None,
    num_return_sequences=1,
    dataset_kwargs: Optional[dict] = None,
    prefix="",
    **generate_kwargs,
) -> dict:
    """Run evaluation on part of the data for one gpu and save to {save_dir}/rank_{rank}_output.json"""
    model_name = str(model_name)
    assert local_rank is not None
    torch.distributed.init_process_group(backend="nccl", rank=local_rank)

    save_dir = Path(save_dir)
    save_path = save_dir.joinpath(f"rank_{local_rank}_output.json")
    torch.cuda.set_device(local_rank)
    model = AutoModelForSeq2SeqLM.from_pretrained(model_name).cuda()
    if fp16:
        model = model.half()
    # determine if we need to increase num_beams
    use_task_specific_params(model, task)  # update config with task specific params
    num_beams = generate_kwargs.pop("num_beams", model.config.num_beams)  # AttributeError risk?
    if num_return_sequences > num_beams:
        num_beams = num_return_sequences

    tokenizer = AutoTokenizer.from_pretrained(model_name)
    logger.info(f"Inferred tokenizer type: {tokenizer.__class__}")  # if this is wrong, check config.model_type.

    if max_source_length is None:
        max_source_length = tokenizer.model_max_length
    if prefix is None:
        prefix = prefix or getattr(model.config, "prefix", "") or ""
    ds = Seq2SeqDataset(
        tokenizer,
        data_dir,
        max_source_length,
        max_target_length=1024,
        type_path=type_path,
        n_obs=n_obs,
        prefix=prefix,
        **dataset_kwargs,
    )
    # I set shuffle=True for a more accurate progress bar.
    # If all the longest samples are first, the prog bar estimate is too high at the beginning.
    sampler = ds.make_sortish_sampler(bs, distributed=True, add_extra_examples=False, shuffle=True)
    data_loader = DataLoader(ds, sampler=sampler, batch_size=bs, collate_fn=ds.collate_fn)
    results = []
    for batch in tqdm(data_loader):
        summaries = model.generate(
            input_ids=batch["input_ids"].to(model.device),
            attention_mask=batch["attention_mask"].to(model.device),
            num_return_sequences=num_return_sequences,
            num_beams=num_beams,
            **generate_kwargs,
        )
        preds = tokenizer.batch_decode(summaries, skip_special_tokens=True, clean_up_tokenization_spaces=False)
        ids = batch["ids"]
        if num_return_sequences > 1:
            preds = chunks(preds, num_return_sequences)  # batch size chunks, each of size num_return_seq
        for i, pred in enumerate(preds):
            results.append({"pred": pred, "id": ids[i].item()})
    save_json(results, save_path)
    return results, sampler.num_replicas


def run_generate():
    parser = argparse.ArgumentParser(
        epilog="Unspecified args like --num_beams=2 --decoder_start_token_id=4 are passed to model.generate"
    )
    parser.add_argument("--data_dir", type=str, help="like cnn_dm/test.source")
    parser.add_argument(
        "--model_name",
        type=str,
        help="like facebook/bart-large-cnn,google-t5/t5-base, etc.",
        default="sshleifer/distilbart-xsum-12-3",
    )
    parser.add_argument("--save_dir", type=str, help="where to save", default="tmp_gen")
    parser.add_argument("--max_source_length", type=int, default=None)
    parser.add_argument(
        "--type_path", type=str, default="test", help="which subset to evaluate typically train/val/test"
    )
    parser.add_argument("--task", type=str, default="summarization", help="used for task_specific_params + metrics")
    parser.add_argument("--bs", type=int, default=8, required=False, help="batch size")
    parser.add_argument(
        "--local_rank", type=int, default=-1, required=False, help="should be passed by distributed.launch"
    )

    parser.add_argument(
        "--n_obs", type=int, default=None, required=False, help="How many observations. Defaults to all."
    )
    parser.add_argument(
        "--num_return_sequences", type=int, default=1, required=False, help="How many sequences to return"
    )
    parser.add_argument(
        "--sync_timeout",
        type=int,
        default=600,
        required=False,
        help="How long should master process wait for other processes to finish.",
    )
    parser.add_argument("--src_lang", type=str, default=None, required=False)
    parser.add_argument("--tgt_lang", type=str, default=None, required=False)
    parser.add_argument(
        "--prefix", type=str, required=False, default=None, help="will be added to the beginning of src examples"
    )
    parser.add_argument("--fp16", action="store_true")
    parser.add_argument("--debug", action="store_true")
    start_time = time.time()
    args, rest = parser.parse_known_args()
    generate_kwargs = parse_numeric_n_bool_cl_kwargs(rest)
    if generate_kwargs and args.local_rank <= 0:
        print(f"parsed the following generate kwargs: {generate_kwargs}")
    json_save_dir = Path(args.save_dir + "_tmp")
    Path(json_save_dir).mkdir(exist_ok=True)  # this handles locking.
    intermediate_files = list(json_save_dir.glob("rank_*.json"))
    if intermediate_files:
        raise ValueError(f"Found files at {json_save_dir} please move or remove them.")
    # In theory, a node could finish and save before another node hits this. If this happens, we can address later.
    dataset_kwargs = {}
    if args.src_lang is not None:
        dataset_kwargs["src_lang"] = args.src_lang
    if args.tgt_lang is not None:
        dataset_kwargs["tgt_lang"] = args.tgt_lang

    Path(args.save_dir).mkdir(exist_ok=True)
    results, num_replicas = eval_data_dir(
        args.data_dir,
        json_save_dir,
        args.model_name,
        type_path=args.type_path,
        bs=args.bs,
        fp16=args.fp16,
        task=args.task,
        local_rank=args.local_rank,
        n_obs=args.n_obs,
        max_source_length=args.max_source_length,
        num_return_sequences=args.num_return_sequences,
        prefix=args.prefix,
        dataset_kwargs=dataset_kwargs,
        **generate_kwargs,
    )

    if args.local_rank <= 0:
        save_dir = Path(args.save_dir)
        save_dir.mkdir(exist_ok=True)
        partial_results = gather_results_from_each_node(num_replicas, json_save_dir, args.sync_timeout)
        preds = combine_partial_results(partial_results)
        if args.num_return_sequences > 1:
            save_path = save_dir.joinpath("pseudolabel_results.json")
            print(f"Saving aggregated results at {save_path}, intermediate in {json_save_dir}/")
            save_json(preds, save_path)
            return
        tgt_file = Path(args.data_dir).joinpath(args.type_path + ".target")
        with open(tgt_file) as f:
            labels = [x.rstrip() for x in f.readlines()][: len(preds)]

        # Calculate metrics, save metrics, and save _generations.txt
        calc_bleu = "translation" in args.task
        score_fn = calculate_bleu if calc_bleu else calculate_rouge
        metric_name = "bleu" if calc_bleu else "rouge"
        metrics: dict = score_fn(preds, labels)
        metrics["n_obs"] = len(preds)
        runtime = time.time() - start_time
        metrics["seconds_per_sample"] = round(runtime / metrics["n_obs"], 4)
        metrics["n_gpus"] = num_replicas
        # TODO(@stas00): add whatever metadata to metrics
        metrics_save_path = save_dir.joinpath(f"{args.type_path}_{metric_name}.json")
        save_json(metrics, metrics_save_path, indent=None)
        print(metrics)
        write_txt_file(preds, save_dir.joinpath(f"{args.type_path}_generations.txt"))
        if args.debug:
            write_txt_file(labels, save_dir.joinpath(f"{args.type_path}.target"))
        else:
            shutil.rmtree(json_save_dir)


def combine_partial_results(partial_results) -> list:
    """Concatenate partial results into one file, then sort it by id."""
    records = []
    for partial_result in partial_results:
        records.extend(partial_result)
    records = sorted(records, key=lambda x: x["id"])
    preds = [x["pred"] for x in records]
    return preds


def gather_results_from_each_node(num_replicas, save_dir, timeout) -> list[dict[str, list]]:
    # WAIT FOR lots of .json files
    start_wait = time.time()
    logger.info("waiting for all nodes to finish")
    json_data = None
    while (time.time() - start_wait) < timeout:
        json_files = list(save_dir.glob("rank_*.json"))
        if len(json_files) < num_replicas:
            continue
        try:
            # make sure all json files are fully saved
            json_data = lmap(load_json, json_files)
            return json_data
        except JSONDecodeError:
            continue
    else:
        raise TimeoutError("Rank 0 gave up on waiting for other processes")
    # Unreachable


if __name__ == "__main__":
    # Usage for MT:
    run_generate()
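The gather/combine step above waits for per-rank `rank_*.json` files, merges them, and restores dataset order by example id. That reordering reduces to a small amount of logic; a standalone illustration (same behavior as `combine_partial_results`, with inline data in place of JSON files):

```python
def combine(partial_results: list[list[dict]]) -> list[str]:
    # Flatten the per-rank record lists, then restore dataset order by id.
    records = [rec for partial in partial_results for rec in partial]
    records.sort(key=lambda r: r["id"])
    return [r["pred"] for r in records]


# Two "ranks" that saved their predictions out of order:
print(combine([[{"id": 2, "pred": "b"}], [{"id": 1, "pred": "a"}, {"id": 3, "pred": "c"}]]))
# → ['a', 'b', 'c']
```

Sorting by id is what makes the distributed output line up with the `.target` reference file, since the sortish sampler shuffles examples across ranks.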
184
transformers/examples/legacy/seq2seq/run_eval.py
Executable file
@@ -0,0 +1,184 @@
|
||||
#!/usr/bin/env python
|
||||
# Copyright 2020 The HuggingFace Team. All rights reserved.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
import argparse
|
||||
import datetime
|
||||
import json
|
||||
import time
|
||||
import warnings
|
||||
from logging import getLogger
|
||||
from pathlib import Path
|
||||
|
||||
import torch
|
||||
from tqdm import tqdm
|
||||
|
||||
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
|
||||
from utils import calculate_bleu, calculate_rouge, chunks, parse_numeric_n_bool_cl_kwargs, use_task_specific_params
|
||||
|
||||
|
||||
logger = getLogger(__name__)
|
||||
|
||||
|
||||
DEFAULT_DEVICE = "cuda" if torch.cuda.is_available() else "cpu"
|
||||
|
||||
|
||||
def generate_summaries_or_translations(
|
||||
examples: list[str],
|
||||
out_file: str,
|
||||
model_name: str,
|
||||
batch_size: int = 8,
|
||||
device: str = DEFAULT_DEVICE,
|
||||
fp16=False,
|
||||
task="summarization",
|
||||
prefix=None,
|
||||
**generate_kwargs,
|
||||
) -> dict:
|
||||
"""Save model.generate results to <out_file>, and return how long it took."""
|
||||
fout = Path(out_file).open("w", encoding="utf-8")
|
||||
model_name = str(model_name)
|
||||
model = AutoModelForSeq2SeqLM.from_pretrained(model_name).to(device)
|
||||
if fp16:
|
||||
model = model.half()
|
||||
|
||||
tokenizer = AutoTokenizer.from_pretrained(model_name)
|
||||
logger.info(f"Inferred tokenizer type: {tokenizer.__class__}") # if this is wrong, check config.model_type.
|
||||
|
||||
start_time = time.time()
|
||||
# update config with task specific params
|
||||
use_task_specific_params(model, task)
|
||||
if prefix is None:
|
||||
prefix = prefix or getattr(model.config, "prefix", "") or ""
|
||||
for examples_chunk in tqdm(list(chunks(examples, batch_size))):
|
||||
examples_chunk = [prefix + text for text in examples_chunk]
|
||||
batch = tokenizer(examples_chunk, return_tensors="pt", truncation=True, padding="longest").to(device)
|
||||
summaries = model.generate(
|
||||
input_ids=batch.input_ids,
|
||||
attention_mask=batch.attention_mask,
|
||||
**generate_kwargs,
|
||||
)
|
||||
dec = tokenizer.batch_decode(summaries, skip_special_tokens=True, clean_up_tokenization_spaces=False)
|
||||
for hypothesis in dec:
|
||||
fout.write(hypothesis + "\n")
|
||||
fout.flush()
|
||||
fout.close()
|
||||
runtime = int(time.time() - start_time) # seconds
|
||||
n_obs = len(examples)
|
||||
return {"n_obs": n_obs, "runtime": runtime, "seconds_per_sample": round(runtime / n_obs, 4)}
|
||||
|
||||
|
||||
def datetime_now():
|
||||
return datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S")
|
||||
|
||||
|
||||
def run_generate(verbose=True):
|
||||
"""
|
||||
|
||||
Takes input text, generates output, and then using reference calculates the BLEU scores.
|
||||
|
||||
The results are saved to a file and returned to the caller, and printed out unless ``verbose=False`` is passed.
|
||||
|
||||
Args:
|
||||
verbose (:obj:`bool`, `optional`, defaults to :obj:`True`): print results to stdout
|
||||
|
||||
Returns:
|
||||
a tuple: ``(scores, params}``
|
||||
- ``scores``: a dict of scores data ``{'bleu': 39.6501, 'n_obs': 2000, 'runtime': 186, 'seconds_per_sample': 0.093}``
|
||||
- ``params``: a dict of custom params, e.g. ``{'num_beams': 5, 'length_penalty': 0.8}``
|
||||
"""
|
||||
|
||||
parser = argparse.ArgumentParser()
|
||||
parser.add_argument("model_name", type=str, help="like facebook/bart-large-cnn,google-t5/t5-base, etc.")
|
||||
parser.add_argument("input_path", type=str, help="like cnn_dm/test.source")
|
||||
parser.add_argument("save_path", type=str, help="where to save summaries")
|
||||
parser.add_argument("--reference_path", type=str, required=False, help="like cnn_dm/test.target")
|
||||
parser.add_argument("--score_path", type=str, required=False, default="metrics.json", help="where to save metrics")
|
||||
parser.add_argument("--device", type=str, required=False, default=DEFAULT_DEVICE, help="cuda, cuda:1, cpu etc.")
|
||||
parser.add_argument(
|
||||
"--prefix", type=str, required=False, default=None, help="will be added to the beginning of src examples"
|
||||
)
|
||||
parser.add_argument("--task", type=str, default="summarization", help="used for task_specific_params + metrics")
|
||||
    parser.add_argument("--bs", type=int, default=8, required=False, help="batch size")
    parser.add_argument(
        "--n_obs", type=int, default=-1, required=False, help="How many observations. Defaults to all."
    )
    parser.add_argument("--fp16", action="store_true")
    parser.add_argument("--dump-args", action="store_true", help="print the custom hparams with the results")
    parser.add_argument(
        "--info",
        nargs="?",
        type=str,
        const=datetime_now(),
        help=(
            "use in conjunction w/ --dump-args to print with the results whatever other info you'd like, e.g."
            " lang=en-ru. If no value is passed, the current datetime string will be used."
        ),
    )
    # Unspecified args like --num_beams=2 --decoder_start_token_id=4 are passed to model.generate
    args, rest = parser.parse_known_args()
    parsed_args = parse_numeric_n_bool_cl_kwargs(rest)
    if parsed_args and verbose:
        print(f"parsed the following generate kwargs: {parsed_args}")
    examples = [" " + x.rstrip() if "t5" in args.model_name else x.rstrip() for x in open(args.input_path).readlines()]
    if args.n_obs > 0:
        examples = examples[: args.n_obs]
    Path(args.save_path).parent.mkdir(exist_ok=True)

    if args.reference_path is None and Path(args.score_path).exists():
        warnings.warn(f"score_path {args.score_path} will be overwritten unless you type ctrl-c.")

    if args.device == "cpu" and args.fp16:
        # this mix leads to RuntimeError: "threshold_cpu" not implemented for 'Half'
        raise ValueError("Can't mix --fp16 and --device cpu")

    runtime_metrics = generate_summaries_or_translations(
        examples,
        args.save_path,
        args.model_name,
        batch_size=args.bs,
        device=args.device,
        fp16=args.fp16,
        task=args.task,
        prefix=args.prefix,
        **parsed_args,
    )

    if args.reference_path is None:
        return {}

    # Compute scores
    score_fn = calculate_bleu if "translation" in args.task else calculate_rouge
    output_lns = [x.rstrip() for x in open(args.save_path).readlines()]
    reference_lns = [x.rstrip() for x in open(args.reference_path).readlines()][: len(output_lns)]
    scores: dict = score_fn(output_lns, reference_lns)
    scores.update(runtime_metrics)

    if args.dump_args:
        scores.update(parsed_args)
    if args.info:
        scores["info"] = args.info

    if verbose:
        print(scores)

    if args.score_path is not None:
        json.dump(scores, open(args.score_path, "w"))

    return scores


if __name__ == "__main__":
    # Usage for MT:
    # python run_eval.py MODEL_NAME $DATA_DIR/test.source $save_dir/test_translations.txt --reference_path $DATA_DIR/test.target --score_path $save_dir/test_bleu.json --task translation $@
    run_generate(verbose=True)
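The comment above notes that leftover CLI tokens such as `--num_beams=2` are coerced by `utils.parse_numeric_n_bool_cl_kwargs` into kwargs for `model.generate`. As a rough illustration only (this is a hypothetical stand-in, not the actual `utils` implementation), the coercion could look like:

```python
# Minimal sketch of pass-through kwarg parsing: coerce "--key=value" tokens into
# ints, floats, or booleans, falling back to the raw string.
def parse_extra_generate_kwargs(tokens):
    kwargs = {}
    for tok in tokens:
        key, _, value = tok.lstrip("-").partition("=")
        if value.lower() in ("true", "false"):  # boolean flags
            kwargs[key] = value.lower() == "true"
        elif value.lstrip("-").isdigit():  # integers (possibly negative)
            kwargs[key] = int(value)
        else:
            try:
                kwargs[key] = float(value)  # floats
            except ValueError:
                kwargs[key] = value  # raw string fallback
    return kwargs


print(parse_extra_generate_kwargs(["--num_beams=2", "--early_stopping=true", "--length_penalty=0.8"]))
```

These parsed kwargs are then forwarded verbatim to `generate_summaries_or_translations` via `**parsed_args`.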
158
transformers/examples/legacy/seq2seq/run_eval_search.py
Executable file
@@ -0,0 +1,158 @@
#!/usr/bin/env python
# Copyright 2020 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import argparse
import itertools
import operator
import sys
from collections import OrderedDict

from run_eval import datetime_now, run_generate

from utils import ROUGE_KEYS


# A table of supported tasks and the list of scores in the order of importance to be sorted by.
# To add a new task, simply list the score names that `run_eval.run_generate()` returns
task_score_names = {
    "translation": ["bleu"],
    "summarization": ROUGE_KEYS,
}


def parse_search_arg(search):
    groups = search.split()
    entries = dict(g.split("=") for g in groups)
    entry_names = list(entries.keys())
    sets = [[f"--{k} {v}" for v in vs.split(":")] for k, vs in entries.items()]
    matrix = [list(x) for x in itertools.product(*sets)]
    return matrix, entry_names


def run_search():
    """
    Run parametric search over the desired hparam space with help of ``run_eval.py``.

    All the arguments except ``--search`` are passed to ``run_eval.py`` as is. The values inside of "--search" are
    parsed, reformatted and fed to ``run_eval.py`` as additional args.

    The format for the ``--search`` value is a simple string with hparams and colon separated values to try, e.g.:
    ```
    --search "num_beams=5:10 length_penalty=0.8:1.0:1.2 early_stopping=true:false"
    ```
    which will generate ``12`` ``(2*3*2)`` searches for a product of each hparam. For example the example that was
    just used will invoke ``run_eval.py`` repeatedly with:

    ```
    --num_beams 5 --length_penalty 0.8 --early_stopping true
    --num_beams 5 --length_penalty 0.8 --early_stopping false
    [...]
    --num_beams 10 --length_penalty 1.2 --early_stopping false
    ```

    On completion, this function prints a markdown table of the results sorted by the best BLEU score and the winning
    arguments.
    """
    prog = sys.argv[0]

    parser = argparse.ArgumentParser(
        usage=(
            "\n\nImportant: this script accepts all arguments `run_eval.py` accepts and then a few extra, therefore"
            " refer to `run_eval.py -h` for the complete list."
        )
    )
    parser.add_argument(
        "--search",
        type=str,
        required=False,
        help='param space to search, e.g. "num_beams=5:10 length_penalty=0.8:1.0:1.2"',
    )
    parser.add_argument(
        "--bs", type=int, default=8, required=False, help="initial batch size (may get reduced if it's too big)"
    )
    parser.add_argument("--task", type=str, help="used for task_specific_params + metrics")
    parser.add_argument(
        "--info",
        nargs="?",
        type=str,
        const=datetime_now(),
        help=(
            "add custom notes to be printed before the results table. If no value is passed, the current datetime"
            " string will be used."
        ),
    )
    args, args_main = parser.parse_known_args()
    # we share some of the args
    args_main.extend(["--task", args.task])
    args_normal = [prog] + args_main

    # to support variations like translation_en_to_de
    task = "translation" if "translation" in args.task else "summarization"

    matrix, col_names = parse_search_arg(args.search)
    col_names[0:0] = task_score_names[task]  # score cols first
    col_widths = {col: len(str(col)) for col in col_names}
    results = []
    for r in matrix:
        hparams = dict(x.replace("--", "").split() for x in r)
        args_exp = " ".join(r).split()
        args_exp.extend(["--bs", str(args.bs)])  # in case we need to reduce its size due to CUDA OOM
        sys.argv = args_normal + args_exp

        # XXX: need to trap CUDA OOM and lower args.bs if that happens and retry

        scores = run_generate(verbose=False)
        # make sure scores are first in the table
        result = OrderedDict()
        for score in task_score_names[task]:
            result[score] = scores[score]
        result.update(hparams)
        results.append(result)

        # find widest entries
        for k, v in result.items():
            l = len(str(v))
            if l > col_widths[k]:
                col_widths[k] = l

    results_sorted = sorted(results, key=operator.itemgetter(*task_score_names[task]), reverse=True)
    print(" | ".join([f"{col:{col_widths[col]}}" for col in col_names]))
    print(" | ".join([f"{'-' * col_widths[col]}" for col in col_names]))
    for row in results_sorted:
        print(" | ".join([f"{row[col]:{col_widths[col]}}" for col in col_names]))

    best = results_sorted[0]
    for score in task_score_names[task]:
        del best[score]
    best_args = [f"--{k} {v}" for k, v in best.items()]
    dyn_args = ["--bs", str(args.bs)]
    if args.info:
        print(f"\nInfo: {args.info}")
    print("\nBest score args:")
    print(" ".join(args_main + best_args + dyn_args))

    return results_sorted


if __name__ == "__main__":
    # Usage:
    # [normal-run_eval_search.py cmd plus] \
    # --search="num_beams=1:5:10 length_penalty=0.8:1:1.2 early_stopping=true:false"
    #
    # Example:
    # PYTHONPATH="src:examples/seq2seq" python examples/seq2seq/run_eval_search.py $MODEL_NAME \
    # $DATA_DIR/val.source $SAVE_DIR/test_translations.txt --reference_path $DATA_DIR/val.target \
    # --score_path $SAVE_DIR/test_bleu.json --bs $BS --task translation \
    # --search="num_beams=1:5:10 length_penalty=0.8:1:1.2 early_stopping=true:false"
    run_search()
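The docstring above states that `--search "num_beams=5:10 length_penalty=0.8:1.0:1.2 early_stopping=true:false"` expands into 12 (2*3*2) runs. The expansion `parse_search_arg` performs can be shown in isolation (a standalone re-statement of its logic, not an import from this repo):

```python
import itertools

# Each "key=v1:v2" group becomes a set of "--key v" options; the search matrix
# is the cartesian product of those sets, one run per combination.
def expand_search(search):
    entries = dict(g.split("=") for g in search.split())
    sets = [[f"--{k} {v}" for v in vs.split(":")] for k, vs in entries.items()]
    return [list(x) for x in itertools.product(*sets)]


matrix = expand_search("num_beams=5:10 length_penalty=0.8:1.0:1.2 early_stopping=true:false")
print(len(matrix))  # 12, i.e. 2 * 3 * 2
print(matrix[0])    # ['--num_beams 5', '--length_penalty 0.8', '--early_stopping true']
```

Each row of the matrix is then appended to `sys.argv` before calling `run_generate(verbose=False)`.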
56
transformers/examples/legacy/seq2seq/save_len_file.py
Executable file
@@ -0,0 +1,56 @@
#!/usr/bin/env python
# Copyright 2020 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import fire
from torch.utils.data import DataLoader
from tqdm import tqdm

from transformers import AutoTokenizer
from utils import Seq2SeqDataset, pickle_save


def save_len_file(
    tokenizer_name, data_dir, max_source_length=1024, max_target_length=1024, consider_target=False, **kwargs
):
    """Save max(src_len, tgt_len) for each example to allow dynamic batching."""
    tok = AutoTokenizer.from_pretrained(tokenizer_name)
    train_ds = Seq2SeqDataset(tok, data_dir, max_source_length, max_target_length, type_path="train", **kwargs)
    pad = tok.pad_token_id

    def get_lens(ds):
        dl = tqdm(
            DataLoader(ds, batch_size=512, num_workers=8, shuffle=False, collate_fn=ds.collate_fn),
            desc=str(ds.len_file),
        )
        max_lens = []
        for batch in dl:
            src_lens = batch["input_ids"].ne(pad).sum(1).tolist()
            tgt_lens = batch["labels"].ne(pad).sum(1).tolist()
            if consider_target:
                for src, tgt in zip(src_lens, tgt_lens):
                    max_lens.append(max(src, tgt))
            else:
                max_lens.extend(src_lens)
        return max_lens

    train_lens = get_lens(train_ds)
    val_ds = Seq2SeqDataset(tok, data_dir, max_source_length, max_target_length, type_path="val", **kwargs)
    val_lens = get_lens(val_ds)
    pickle_save(train_lens, train_ds.len_file)
    pickle_save(val_lens, val_ds.len_file)


if __name__ == "__main__":
    fire.Fire(save_len_file)
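The tensor expression `batch["input_ids"].ne(pad).sum(1)` above counts non-pad tokens per row. The same length computation can be sketched in pure Python (an illustrative re-statement, without torch):

```python
# A sequence's length is its number of non-pad ids; with consider_target=True
# each example is assigned max(src_len, tgt_len), mirroring save_len_file.
PAD = 0  # assumed pad id for this sketch

def example_lens(src_batch, tgt_batch, consider_target=False):
    src_lens = [sum(tok != PAD for tok in row) for row in src_batch]
    tgt_lens = [sum(tok != PAD for tok in row) for row in tgt_batch]
    if consider_target:
        return [max(s, t) for s, t in zip(src_lens, tgt_lens)]
    return src_lens


src = [[5, 6, 7, PAD], [8, PAD, PAD, PAD]]
tgt = [[9, PAD, PAD, PAD], [10, 11, 12, 13]]
print(example_lens(src, tgt))                        # [3, 1]
print(example_lens(src, tgt, consider_target=True))  # [3, 4]
```

These per-example lengths are what the sortish sampler later uses to build similarly sized batches and minimize padding.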
39
transformers/examples/legacy/seq2seq/save_randomly_initialized_model.py
Executable file
@@ -0,0 +1,39 @@
#!/usr/bin/env python
# Copyright 2020 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import fire

from transformers import AutoConfig, AutoModelForSeq2SeqLM, AutoTokenizer


def save_randomly_initialized_version(config_name: str, save_dir: str, **config_kwargs):
    """Save a randomly initialized version of a model using a pretrained config.

    Args:
        config_name: which config to use
        save_dir: where to save the resulting model and tokenizer
        config_kwargs: Passed to AutoConfig

    Usage::
        save_randomly_initialized_version("facebook/bart-large-cnn", "distilbart_random_cnn_6_3", encoder_layers=6, decoder_layers=3, num_beams=3)
    """
    cfg = AutoConfig.from_pretrained(config_name, **config_kwargs)
    model = AutoModelForSeq2SeqLM.from_config(cfg)
    model.save_pretrained(save_dir)
    AutoTokenizer.from_pretrained(config_name).save_pretrained(save_dir)
    return model


if __name__ == "__main__":
    fire.Fire(save_randomly_initialized_version)
35
transformers/examples/legacy/seq2seq/sentence_splitter.py
Normal file
@@ -0,0 +1,35 @@
# Copyright 2020 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import re

from filelock import FileLock


try:
    import nltk

    NLTK_AVAILABLE = True
except (ImportError, ModuleNotFoundError):
    NLTK_AVAILABLE = False

if NLTK_AVAILABLE:
    with FileLock(".lock"):
        nltk.download("punkt", quiet=True)


def add_newline_to_end_of_each_sentence(x: str) -> str:
    """This was added to get rougeLsum scores matching published rougeL scores for BART and PEGASUS."""
    x = re.sub("<n>", "", x)  # remove pegasus newline char
    assert NLTK_AVAILABLE, "nltk must be installed to separate newlines between sentences. (pip install nltk)"
    return "\n".join(nltk.sent_tokenize(x))
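`add_newline_to_end_of_each_sentence` exists because rougeLsum scores newline-separated sentences. A dependency-free sketch of the same idea, using a naive regex splitter as a stand-in for `nltk.sent_tokenize` (the regex splitter is an assumption for illustration, not what the module above uses):

```python
import re

# Strip the pegasus "<n>" marker, then put each sentence on its own line so
# rougeLsum can score sentence-level longest common subsequences.
def newline_per_sentence(x: str) -> str:
    x = re.sub("<n>", "", x)  # remove pegasus newline char
    sentences = re.split(r"(?<=[.!?])\s+", x.strip())  # naive splitter
    return "\n".join(sentences)


print(newline_per_sentence("First sentence. Second one!<n> Third?"))
```

The naive splitter mishandles abbreviations like "e.g." which is exactly why the real module prefers nltk's punkt tokenizer.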
248
transformers/examples/legacy/seq2seq/seq2seq_trainer.py
Normal file
@@ -0,0 +1,248 @@
# Copyright 2020 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from typing import Any, Optional, Union

import torch
from torch import nn
from torch.utils.data import DistributedSampler, RandomSampler

from transformers import PreTrainedModel, Trainer, logging
from transformers.models.fsmt.configuration_fsmt import FSMTConfig
from transformers.optimization import (
    Adafactor,
    get_constant_schedule,
    get_constant_schedule_with_warmup,
    get_cosine_schedule_with_warmup,
    get_cosine_with_hard_restarts_schedule_with_warmup,
    get_linear_schedule_with_warmup,
    get_polynomial_decay_schedule_with_warmup,
)
from transformers.trainer_pt_utils import get_tpu_sampler
from transformers.training_args import ParallelMode
from transformers.utils import is_torch_xla_available


logger = logging.get_logger(__name__)

arg_to_scheduler = {
    "linear": get_linear_schedule_with_warmup,
    "cosine": get_cosine_schedule_with_warmup,
    "cosine_w_restarts": get_cosine_with_hard_restarts_schedule_with_warmup,
    "polynomial": get_polynomial_decay_schedule_with_warmup,
    "constant": get_constant_schedule,
    "constant_w_warmup": get_constant_schedule_with_warmup,
}


class Seq2SeqTrainer(Trainer):
    def __init__(self, config=None, data_args=None, *args, **kwargs):
        super().__init__(*args, **kwargs)

        if config is None:
            assert isinstance(self.model, PreTrainedModel), (
                "If no `config` is passed the model to be trained has to be of type `PreTrainedModel`, but is"
                f" {self.model.__class__}"
            )
            self.config = self.model.config
        else:
            self.config = config

        self.data_args = data_args
        self.vocab_size = self.config.tgt_vocab_size if isinstance(self.config, FSMTConfig) else self.config.vocab_size

        if self.args.label_smoothing != 0 or (self.data_args is not None and self.data_args.ignore_pad_token_for_loss):
            assert self.config.pad_token_id is not None, (
                "Make sure that `config.pad_token_id` is correctly defined when ignoring `pad_token` for loss"
                " calculation or doing label smoothing."
            )

        if self.config.pad_token_id is None and self.config.eos_token_id is not None:
            logger.warning(
                f"The `config.pad_token_id` is `None`. Using `config.eos_token_id` = {self.config.eos_token_id} for"
                " padding."
            )

        if self.args.label_smoothing == 0:
            self.loss_fn = torch.nn.CrossEntropyLoss(ignore_index=self.config.pad_token_id)
        else:
            # dynamically import label_smoothed_nll_loss
            from utils import label_smoothed_nll_loss

            self.loss_fn = label_smoothed_nll_loss

    def create_optimizer_and_scheduler(self, num_training_steps: int):
        """
        Setup the optimizer and the learning rate scheduler.

        We provide a reasonable default that works well. If you want to use something else, you can pass a tuple in the
        Trainer's init through :obj:`optimizers`, or subclass and override this method in a subclass.
        """
        if self.optimizer is None:
            no_decay = ["bias", "LayerNorm.weight"]
            optimizer_grouped_parameters = [
                {
                    "params": [p for n, p in self.model.named_parameters() if not any(nd in n for nd in no_decay)],
                    "weight_decay": self.args.weight_decay,
                },
                {
                    "params": [p for n, p in self.model.named_parameters() if any(nd in n for nd in no_decay)],
                    "weight_decay": 0.0,
                },
            ]
            if self.args.adafactor:
                optimizer_cls = Adafactor
                optimizer_kwargs = {"scale_parameter": False, "relative_step": False}
            else:
                optimizer_cls = torch.optim.AdamW
                optimizer_kwargs = {
                    "betas": (self.args.adam_beta1, self.args.adam_beta2),
                    "eps": self.args.adam_epsilon,
                }
            optimizer_kwargs["lr"] = self.args.learning_rate
            self.optimizer = optimizer_cls(optimizer_grouped_parameters, **optimizer_kwargs)

        if self.lr_scheduler is None:
            self.lr_scheduler = self._get_lr_scheduler(num_training_steps)
        else:  # ignoring --lr_scheduler
            logger.warning("scheduler is passed to `Seq2SeqTrainer`, `--lr_scheduler` arg is ignored.")

    def _get_lr_scheduler(self, num_training_steps):
        schedule_func = arg_to_scheduler[self.args.lr_scheduler]
        if self.args.lr_scheduler == "constant":
            scheduler = schedule_func(self.optimizer)
        elif self.args.lr_scheduler == "constant_w_warmup":
            scheduler = schedule_func(self.optimizer, num_warmup_steps=self.args.warmup_steps)
        else:
            scheduler = schedule_func(
                self.optimizer, num_warmup_steps=self.args.warmup_steps, num_training_steps=num_training_steps
            )
        return scheduler

    def _get_train_sampler(self) -> Optional[torch.utils.data.Sampler]:
        if isinstance(self.train_dataset, torch.utils.data.IterableDataset):
            return None
        elif is_torch_xla_available():
            return get_tpu_sampler(self.train_dataset)
        else:
            if self.args.sortish_sampler:
                self.train_dataset.make_sortish_sampler(
                    self.args.per_device_train_batch_size,
                    distributed=(self.args.parallel_mode == ParallelMode.DISTRIBUTED),
                )

            return (
                RandomSampler(self.train_dataset)
                if self.args.local_rank == -1
                else DistributedSampler(self.train_dataset)
            )

    def _compute_loss(self, model, inputs, labels):
        if self.args.label_smoothing == 0:
            if self.data_args is not None and self.data_args.ignore_pad_token_for_loss:
                # force training to ignore pad token
                logits = model(**inputs, use_cache=False)[0]
                loss = self.loss_fn(logits.view(-1, logits.shape[-1]), labels.view(-1))
            else:
                # compute usual loss via models
                loss, logits = model(**inputs, labels=labels, use_cache=False)[:2]
        else:
            # compute label smoothed loss
            logits = model(**inputs, use_cache=False)[0]
            lprobs = torch.nn.functional.log_softmax(logits, dim=-1)
            loss, _ = self.loss_fn(lprobs, labels, self.args.label_smoothing, ignore_index=self.config.pad_token_id)
        return loss, logits

    def compute_loss(self, model, inputs):
        labels = inputs.pop("labels")
        loss, _ = self._compute_loss(model, inputs, labels)
        return loss

    def prediction_step(
        self,
        model: nn.Module,
        inputs: dict[str, Union[torch.Tensor, Any]],
        prediction_loss_only: bool,
        ignore_keys: Optional[list[str]] = None,
    ) -> tuple[Optional[float], Optional[torch.Tensor], Optional[torch.Tensor]]:
        """
        Perform an evaluation step on :obj:`model` using :obj:`inputs`.

        Subclass and override to inject custom behavior.

        Args:
            model (:obj:`nn.Module`):
                The model to evaluate.
            inputs (:obj:`dict[str, Union[torch.Tensor, Any]]`):
                The inputs and targets of the model.

                The dictionary will be unpacked before being fed to the model. Most models expect the targets under the
                argument :obj:`labels`. Check your model's documentation for all accepted arguments.
            prediction_loss_only (:obj:`bool`):
                Whether or not to return the loss only.

        Return:
            tuple[Optional[float], Optional[torch.Tensor], Optional[torch.Tensor]]:
                A tuple with the loss, logits and labels (each being optional).
        """
        inputs = self._prepare_inputs(inputs)

        gen_kwargs = {
            "max_length": self.data_args.val_max_target_length
            if self.data_args is not None
            else self.config.max_length,
            "num_beams": self.data_args.eval_beams if self.data_args is not None else self.config.num_beams,
        }

        if self.args.predict_with_generate and not self.args.prediction_loss_only:
            generated_tokens = self.model.generate(
                inputs["input_ids"],
                attention_mask=inputs["attention_mask"],
                **gen_kwargs,
            )
            # in case the batch is shorter than max length, the output should be padded
            if generated_tokens.shape[-1] < gen_kwargs["max_length"]:
                generated_tokens = self._pad_tensors_to_max_len(generated_tokens, gen_kwargs["max_length"])

        labels = inputs.pop("labels")
        with torch.no_grad():
            # compute loss on predict data
            loss, logits = self._compute_loss(model, inputs, labels)

        loss = loss.mean().detach()
        if self.args.prediction_loss_only:
            return (loss, None, None)

        logits = generated_tokens if self.args.predict_with_generate else logits

        if labels.shape[-1] < gen_kwargs["max_length"]:
            labels = self._pad_tensors_to_max_len(labels, gen_kwargs["max_length"])

        return (loss, logits, labels)

    def _pad_tensors_to_max_len(self, tensor, max_length):
        # If PAD token is not defined at least EOS token has to be defined
        pad_token_id = self.config.pad_token_id if self.config.pad_token_id is not None else self.config.eos_token_id

        if pad_token_id is None:
            raise ValueError(
                "Make sure that either `config.pad_token_id` or `config.eos_token_id` is defined if tensor has to be"
                f" padded to `max_length`={max_length}"
            )

        padded_tensor = pad_token_id * torch.ones(
            (tensor.shape[0], max_length), dtype=tensor.dtype, device=tensor.device
        )
        padded_tensor[:, : tensor.shape[-1]] = tensor
        return padded_tensor
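`_pad_tensors_to_max_len` right-pads generated sequences and labels so all batches share one width before metric computation. The same operation in pure Python, over lists instead of tensors (an illustrative sketch, without torch):

```python
# Right-pad every row of a batch with pad_token_id up to max_length; the tensor
# version allocates a full pad matrix and copies the original values into its
# left columns, which is equivalent.
def pad_rows_to_max_len(rows, max_length, pad_token_id):
    return [row + [pad_token_id] * (max_length - len(row)) for row in rows]


print(pad_rows_to_max_len([[4, 5, 6], [7]], max_length=5, pad_token_id=0))
# [[4, 5, 6, 0, 0], [7, 0, 0, 0, 0]]
```

Without this step, concatenating per-batch predictions of different lengths during evaluation would fail.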
@@ -0,0 +1,60 @@
# Copyright 2020 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import logging
from dataclasses import dataclass, field
from typing import Optional

from seq2seq_trainer import arg_to_scheduler

from transformers import TrainingArguments


logger = logging.getLogger(__name__)


@dataclass
class Seq2SeqTrainingArguments(TrainingArguments):
    """
    Parameters:
        label_smoothing (:obj:`float`, `optional`, defaults to 0):
            The label smoothing epsilon to apply (if not zero).
        sortish_sampler (:obj:`bool`, `optional`, defaults to :obj:`False`):
            Whether to use a SortishSampler or not. It sorts the inputs according to lengths in order to minimize the
            padding size.
        predict_with_generate (:obj:`bool`, `optional`, defaults to :obj:`False`):
            Whether to use generate to calculate generative metrics (ROUGE, BLEU).
    """

    label_smoothing: Optional[float] = field(
        default=0.0, metadata={"help": "The label smoothing epsilon to apply (if not zero)."}
    )
    sortish_sampler: bool = field(default=False, metadata={"help": "Whether to use a SortishSampler or not."})
    predict_with_generate: bool = field(
        default=False, metadata={"help": "Whether to use generate to calculate generative metrics (ROUGE, BLEU)."}
    )
    adafactor: bool = field(default=False, metadata={"help": "whether to use adafactor"})
    encoder_layerdrop: Optional[float] = field(
        default=None, metadata={"help": "Encoder layer dropout probability. Goes into model.config."}
    )
    decoder_layerdrop: Optional[float] = field(
        default=None, metadata={"help": "Decoder layer dropout probability. Goes into model.config."}
    )
    dropout: Optional[float] = field(default=None, metadata={"help": "Dropout probability. Goes into model.config."})
    attention_dropout: Optional[float] = field(
        default=None, metadata={"help": "Attention dropout probability. Goes into model.config."}
    )
    lr_scheduler: Optional[str] = field(
        default="linear",
        metadata={"help": f"Which lr scheduler to use. Selected in {sorted(arg_to_scheduler.keys())}"},
    )
32
transformers/examples/legacy/seq2seq/test_data/fsmt/build-eval-data.py
Executable file
@@ -0,0 +1,32 @@
#!/usr/bin/env python

import json
import subprocess


pairs = [
    ["en", "ru"],
    ["ru", "en"],
    ["en", "de"],
    ["de", "en"],
]

n_objs = 8


def get_all_data(pairs, n_objs):
    text = {}
    for src, tgt in pairs:
        pair = f"{src}-{tgt}"
        cmd = f"sacrebleu -t wmt19 -l {pair} --echo src".split()
        src_lines = subprocess.run(cmd, stdout=subprocess.PIPE).stdout.decode("utf-8").splitlines()
        cmd = f"sacrebleu -t wmt19 -l {pair} --echo ref".split()
        tgt_lines = subprocess.run(cmd, stdout=subprocess.PIPE).stdout.decode("utf-8").splitlines()
        text[pair] = {"src": src_lines[:n_objs], "tgt": tgt_lines[:n_objs]}
    return text


text = get_all_data(pairs, n_objs)
filename = "./fsmt_val_data.json"
with open(filename, "w", encoding="utf-8") as f:
    json.dump(text, f, indent=2, ensure_ascii=False)
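The structure the script above writes (per language pair, the first `n_objs` source and reference lines from the WMT19 test set) can be exercised without sacrebleu by injecting a fetch function (the stub below is a hypothetical stand-in for the `sacrebleu --echo` subprocess calls):

```python
# Same shape-building logic as get_all_data, with line fetching abstracted out so
# it can run without sacrebleu installed.
def build_eval_data(pairs, n_objs, fetch):
    text = {}
    for src, tgt in pairs:
        pair = f"{src}-{tgt}"
        src_lines = fetch(pair, "src")   # stands in for `sacrebleu ... --echo src`
        tgt_lines = fetch(pair, "ref")   # stands in for `sacrebleu ... --echo ref`
        text[pair] = {"src": src_lines[:n_objs], "tgt": tgt_lines[:n_objs]}
    return text


fake = lambda pair, side: [f"{pair}-{side}-{i}" for i in range(10)]
data = build_eval_data([["en", "ru"]], n_objs=2, fetch=fake)
print(data)  # {'en-ru': {'src': ['en-ru-src-0', 'en-ru-src-1'], 'tgt': ['en-ru-ref-0', 'en-ru-ref-1']}}
```

The resulting dict has exactly the layout of `fsmt_val_data.json` below: one `{"src": [...], "tgt": [...]}` entry per language pair.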
@@ -0,0 +1,90 @@
|
||||
{
|
||||
"en-ru": {
|
||||
"src": [
|
||||
"Welsh AMs worried about 'looking like muppets'",
|
||||
"There is consternation among some AMs at a suggestion their title should change to MWPs (Member of the Welsh Parliament).",
|
||||
"It has arisen because of plans to change the name of the assembly to the Welsh Parliament.",
|
||||
"AMs across the political spectrum are worried it could invite ridicule.",
|
||||
"One Labour AM said his group was concerned \"it rhymes with Twp and Pwp.\"",
|
||||
"For readers outside of Wales: In Welsh twp means daft and pwp means poo.",
|
||||
"A Plaid AM said the group as a whole was \"not happy\" and has suggested alternatives.",
|
||||
"A Welsh Conservative said his group was \"open minded\" about the name change, but noted it was a short verbal hop from MWP to Muppet."
|
||||
],
|
||||
"tgt": [
|
||||
"Члены Национальной ассамблеи Уэльса обеспокоены, что \"выглядят как куклы\"",
|
||||
"Некоторые члены Национальной ассамблеи Уэльса в ужасе от предложения о том, что их наименование должно измениться на MPW (члены Парламента Уэльса).",
|
||||
"Этот вопрос был поднят в связи с планами по переименованию ассамблеи в Парламент Уэльса.",
|
||||
"Члены Национальной ассамблеи Уэльса всего политического спектра обеспокоены, что это может породить насмешки.",
|
||||
"Один из лейбористских членов Национальной ассамблеи Уэльса сказал, что его партия обеспокоена тем, что \"это рифмуется с Twp и Pwp\".",
|
||||
"Для читателей за предлами Уэльса: по-валлийски twp означает \"глупый\", а pwp означает \"какашка\".",
|
||||
"Член Национальной ассамблеи от Плайд сказал, что эта партия в целом \"не счастлива\" и предложил альтернативы.",
|
||||
"Представитель Консервативной партии Уэльса сказал, что его партия \"открыта\" к переименованию, но отметил, что между WMP и Muppet небольшая разница в произношении."
|
||||
]
|
||||
},
|
||||
"ru-en": {
|
||||
"src": [
|
||||
"Названо число готовящихся к отправке в Донбасс новобранцев из Украины",
|
||||
"Официальный представитель Народной милиции самопровозглашенной Луганской Народной Республики (ЛНР) Андрей Марочко заявил, что зимой 2018-2019 года Украина направит в Донбасс не менее 3 тыс. новобранцев.",
|
||||
"По его словам, таким образом Киев планирует \"хоть как-то доукомплектовать подразделения\".",
|
||||
"\"Нежелание граждан Украины проходить службу в рядах ВС Украины, массовые увольнения привели к низкой укомплектованности подразделений\", - рассказал Марочко, которого цитирует \"РИА Новости\".",
|
||||
"Он также не исключил, что реальные цифры призванных в армию украинцев могут быть увеличены в случае необходимости.",
|
||||
"В 2014-2017 годах Киев начал так называемую антитеррористическую операцию (АТО), которую позже сменили на операцию объединенных сил (ООС).",
|
||||
"Предполагалось, что эта мера приведет к усилению роли украинских силовиков в урегулировании ситуации.",
|
||||
"В конце августа 2018 года ситуация в Донбассе обострилась из-за убийства главы ДНР Александра Захарченко."
|
||||
],
|
||||
"tgt": [
|
||||
"The number of new Ukrainian recruits ready to go to Donbass has become public",
|
||||
"Official representative of the peoples’ militia of the self-proclaimed Lugansk People’s Republic Andrey Marochko claimed that Ukrainian will send at least 3 thousand new recruits to Donbass in winter 2018-2019.",
|
||||
"This is how Kyiv tries “at least somehow to staff the units,” he said.",
|
||||
"“The unwillingness of Ukrainian citizens to serve in the Ukraine’s military forces, mass resignments lead to low understaffing,” said Marochko cited by RIA Novosti.",
|
||||
"Also, he doesn’t exclude that the real numbers of conscripts in the Ukrainian army can be raised is necessary.",
"In 2014-2017, Kyiv started so-called antiterrorist operation, that ws later changed to the united forces operation.",
"This measure was supposed to strengthen the role of the Ukrainian military in settling the situation.",
"In the late August 2018, the situation in Donbass escalated as the DNR head Aleksandr Zakharchenko was killed."
]
},
"en-de": {
"src": [
"Welsh AMs worried about 'looking like muppets'",
"There is consternation among some AMs at a suggestion their title should change to MWPs (Member of the Welsh Parliament).",
"It has arisen because of plans to change the name of the assembly to the Welsh Parliament.",
"AMs across the political spectrum are worried it could invite ridicule.",
"One Labour AM said his group was concerned \"it rhymes with Twp and Pwp.\"",
"For readers outside of Wales: In Welsh twp means daft and pwp means poo.",
"A Plaid AM said the group as a whole was \"not happy\" and has suggested alternatives.",
"A Welsh Conservative said his group was \"open minded\" about the name change, but noted it was a short verbal hop from MWP to Muppet."
],
"tgt": [
"Walisische Ageordnete sorgen sich \"wie Dödel auszusehen\"",
"Es herrscht Bestürzung unter einigen Mitgliedern der Versammlung über einen Vorschlag, der ihren Titel zu MWPs (Mitglied der walisischen Parlament) ändern soll.",
"Der Grund dafür waren Pläne, den Namen der Nationalversammlung in Walisisches Parlament zu ändern.",
"Mitglieder aller Parteien der Nationalversammlung haben Bedenken, dass sie sich dadurch Spott aussetzen könnten.",
"Ein Labour-Abgeordneter sagte, dass seine Gruppe \"sich mit Twp und Pwp reimt\".",
"Hinweis für den Leser: „twp“ im Walisischen bedeutet „bescheuert“ und „pwp“ bedeutet „Kacke“.",
"Ein Versammlungsmitglied von Plaid Cymru sagte, die Gruppe als Ganzes sei \"nicht glücklich\" und hat Alternativen vorgeschlagen.",
"Ein walisischer Konservativer sagte, seine Gruppe wäre „offen“ für eine Namensänderung, wies aber darauf hin, dass es von „MWP“ (Mitglied des Walisischen Parlaments) nur ein kurzer verbaler Sprung zu „Muppet“ ist."
]
},
"de-en": {
"src": [
"Schöne Münchnerin 2018: Schöne Münchnerin 2018 in Hvar: Neun Dates",
"Von az, aktualisiert am 04.05.2018 um 11:11",
"Ja, sie will...",
"\"Schöne Münchnerin\" 2018 werden!",
"Am Nachmittag wartet erneut eine Überraschung auf unsere Kandidatinnen: sie werden das romantische Candlelight-Shooting vor der MY SOLARIS nicht alleine bestreiten, sondern an der Seite von Male-Model Fabian!",
"Hvar - Flirten, kokettieren, verführen - keine einfachen Aufgaben für unsere Mädchen.",
"Insbesondere dann, wenn in Deutschland ein Freund wartet.",
"Dennoch liefern die neun \"Schöne Münchnerin\"-Kandidatinnen beim Shooting mit People-Fotograf Tuan ab und trotzen Wind, Gischt und Regen wie echte Profis."
],
"tgt": [
"The Beauty of Munich 2018: the Beauty of Munich 2018 in Hvar: Nine dates",
"From A-Z, updated on 04/05/2018 at 11:11",
"Yes, she wants to...",
"to become \"The Beauty of Munich\" in 2018!",
"In the afternoon there is another surprise waiting for our contestants: they will be competing for the romantic candlelight photo shoot at MY SOLARIS not alone, but together with a male-model Fabian!",
"Hvar with its flirting, coquetting, and seduction is not an easy task for our girls.",
"Especially when there is a boyfriend waiting in Germany.",
"Despite dealing with wind, sprays and rain, the nine contestants of \"The Beauty of Munich\" behaved like real professionals at the photo shoot with People-photographer Tuan."
]
}
}
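The fixture above appears to map a language-pair key ("ru-en", "en-de", "de-en") to parallel `src`/`tgt` sentence lists. A minimal sketch of loading such a file, assuming `src[i]` aligns with `tgt[i]`; the helper name `load_pairs` and the demo file path are illustrative, not part of the repo:

```python
import json
import tempfile

# Hypothetical helper: read one language pair's aligned sentences
# from a fixture shaped like the JSON above.
def load_pairs(path, pair):
    with open(path, encoding="utf-8") as f:
        data = json.load(f)
    block = data[pair]
    # src[i] is assumed to align with tgt[i], entry by entry
    return list(zip(block["src"], block["tgt"]))

# Self-contained demo: write a tiny fixture with the same shape.
demo = {"ru-en": {"src": ["Привет", "Мир"], "tgt": ["Hello", "World"]}}
with tempfile.NamedTemporaryFile(
    "w", suffix=".json", delete=False, encoding="utf-8"
) as f:
    json.dump(demo, f, ensure_ascii=False)
    demo_path = f.name

pairs = load_pairs(demo_path, "ru-en")
print(pairs[0])  # ('Привет', 'Hello')
```

Zipping the two lists silently truncates if the fixture's lists ever differ in length, so a stricter loader might first check `len(block["src"]) == len(block["tgt"])`.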
@@ -0,0 +1,20 @@
UN Chief Says There Is No Military Solution in Syria Secretary-General Ban Ki-moon says his response to Russia's stepped up military support for Syria is that "there is no military solution" to the nearly five-year conflict and more weapons will only worsen the violence and misery for millions of people. The U.N. chief again urged all parties, including the divided U.N. Security Council, to unite and support inclusive negotiations to find a political solution. Ban told a news conference Wednesday that he plans to meet with foreign ministers of the five permanent council nations - the U.S., Russia, China, Britain and France - on the sidelines of the General Assembly's ministerial session later this month to discuss Syria.
He expressed regret that divisions in the council and among the Syrian people and regional powers "made this situation unsolvable." Ban urged the five permanent members to show the solidarity and unity they did in achieving an Iran nuclear deal in addressing the Syria crisis. 8 Poll Numbers That Show Donald Trump Is For Real Some have tried to label him a flip-flopper. Others have dismissed him as a joke. And some are holding out for an implosion. But no matter how some Republicans are trying to drag Donald Trump down from atop the polls, it hasn't worked (yet).
Ten of the last 11 national polls have shown Donald Trump's lead at double digits, and some are starting to ask seriously what it means for the real estate mogul's nomination chances. Of course, it's still early in the election cycle. None of this is to say that Trump is likely to win the Republican nomination. Pundits point out that at this time in 2011, Rick Perry's lead was giving way to a rising Herman Cain, neither of whom won even one state in the nomination process. And there are many reasons he would struggle in a general election. But outside groups like Jeb Bush's Super PAC and the economic conservative group Club for Growth are recognizing Trump's staying power and beginning to unload their dollars to topple him.
Here are some recent poll numbers that suggest that the real estate mogul isn't just a passing phase: Trump's favorability ratings have turned 180 degrees. Right before Donald Trump announced his candidacy in mid-June, a Monmouth University poll showed only two in 10 Republicans had a positive view of the real estate mogul. By mid-July, it was 40 percent. In early August, it was 52 percent. Now, six in 10 Republicans have a favorable view of Donald Trump. Roughly three in 10 say they have a negative view. And these numbers hold up in early states. A Quinnipiac poll in Iowa last week found that 60 percent of Republicans there had a favorable view of Trump.
Two-thirds of GOP voters would be happy with Trump as the nominee. In a CNN/ORC poll last week, 67 percent of Republicans said they would be either "enthusiastic" or "satisfied" if Trump were the nominee. Only two in 10 say they would be "upset" if he were the nominee. Only Ben Carson generates roughly the same level of enthusiasm as Trump (43 percent say they would be "enthusiastic" vs. 40 percent who say the same of Trump). The next closest in enthusiasm? Marco Rubio with only 21 percent.
On the flip side, 47 percent of Republican voters say they would be "dissatisfied" or "upset" if establishment favorite Jeb Bush becomes the nominee. A majority of Republicans don't see Trump's temperament as a problem. While Donald Trump has been widely criticized for his bombast and insults, 52 percent of leaned Republican voters nationwide think that the real estate mogul has the right temperament to be president, according to Monday's ABC News/Washington Post poll. The same number holds in the first-in-the-nation caucus state of Iowa, where the same 52 percent of Republicans think he has the personality to be commander in chief, according to Quinnipiac last week.
Still, 44 percent think he doesn't have the personality to serve effectively, and almost six in 10 independents say his temperament does not belong in the White House, according to ABC/Post. Republican voters are getting used to the idea. When they put on their pundit hats, Republican voters think Trump is for real. When asked who is most likely to win the GOP nomination, four in 10 said Trump was the best bet, according to a CNN/ORC poll out last week. That's a change from when four in 10 placed their money on Jeb Bush in late July. Full disclosure: GOP voters haven't had the clearest crystal ball in the past.
At this time last cycle, four in 10 Republicans picked Rick Perry to win the nomination, vs. only 28 percent for eventual nominee Mitt Romney. Still, it shows that a plurality of GOP voters see Trump's campaign as plausible. Even if Republicans rallied around another candidate, Trump still beats almost everyone. Some pundits point out that the splintered field is likely contributing to Trump's lead, while anti-Trump support is be spread diffusely among more than a dozen other candidates. But a Monmouth University poll in early September shows that, in a hypothetical head-to-head matchup between Trump and most other Republican candidates, Trump almost always garners majority support.
He leads Carly Fiorina by 13 points, Marco Rubio by 14 points, Walker by 15 points, Jeb Bush by 19 points, and, finally, Rand Paul, John Kasich and Chris Christie by 33 points each. He's in a dead heat with Ted Cruz. The only candidate who beats him? Ben Carson would lead the businessman by a wide 19 points in a hypothetical head-to-head. A bare majority of Donald Trump's supporters say they've made up their minds. A new CBS/NYT poll out on Tuesday shows that just more than half of voters who support Trump say they have locked in their votes. Obviously, a lot can happen to change that, and no one can really say they would never change their mind.
46 percent said they are leaving the door open to switching candidates. Still, Trump's strongest competition at the moment is from fellow outsider neurosurgeon Ben Carson, but voters who say they have made up their minds are twice as likely to go for Trump. Six in 10 Republicans say they agree with Trump on immigration. Even since Donald Trump called immigrants from Mexico "rapists" in his campaign announcement speech two months ago, immigration has been front and center in the 2016 conversation. Some are worried that Trump's bombast will drive crucial Hispanic voters away from the Republican Party and damage rebranding efforts.
But according to Monday's new ABC/Post poll, six in 10 Republicans say they agree with Trump on immigration issues. So as long as immigration remains in the spotlight, it seems Donald Trump will remain too. Frustration with government is climbing to new highs. Donald Trump and Ben Carson now account for roughly half of the support from Republican voters, largely due to their outsider status. Six in 10 Republicans in Monday's new ABC/Post poll say they want a political outsider over someone with government experience. And they are angry at Washington, too.
A Des Moines Register/Bloomberg poll in Iowa from two weeks ago shows that three in four Iowa Republicans are frustrated with Republicans in Congress, with 54 percent "unsatisfied" and 21 percent "mad as hell." Jeremy Corbyn to make debut at Prime Minister's Questions Since his election, Mr Corbyn's debut at PMQs has been keenly awaited New Labour leader Jeremy Corbyn is to make his debut at Prime Minister's Questions later, taking on David Cameron for the first time.
Mr Corbyn will rise to ask the first of his six allotted questions shortly after midday, with his performance likely to be closely scrutinised by the media and Labour MPs. He has called for "less theatre and more facts" at the weekly showpiece. He has also said he could skip some sessions, leaving them to colleagues. The encounter will be the first parliamentary test of Mr Corbyn's leadership, coming after his appointment of a shadow cabinet and his speech to the TUC annual congress on Tuesday.
Meanwhile, the Labour leader's decision to stand in silence during the singing of the national anthem at a service on Tuesday to mark the 75th anniversary of the Battle of Britain has attracted criticism from a number of Tory MPs and is the focus of several front page stories in the newspapers. Mr Corbyn's decision not to sing the national anthem has attracted attention A spokesman for Mr Corbyn said he had "stood in respectful silence" and did recognise the "heroism of the Royal Air Force in the Battle of Britain."
But a member of Mr Corbyn's shadow cabinet, Owen Smith, told BBC Two's Newsnight programme he would have advised the Labour leader to sing the national anthem "irrespective" of his belief that the monarchy should be abolished. Nearly a dozen shadow ministers have refused to serve in Mr Corbyn's top team, citing differences over the economy, defence and foreign affairs, while less than a sixth of the parliamentary party originally backed him as leader. BBC political correspondent Robin Brant says policy differences are also "stacking up" within Labour following Mr Corbyn's appointment over its position on the European Union and the government's cap on benefits.
Mr Corbyn told the TUC conference Labour was putting forward amendments to remove the whole idea of a cap altogether. Hours later Mr Smith, the shadow work and pensions secretary, said the party was "very clear" that it was only opposing government plans to reduce the level of cap from £26,000 to £23,000. Mr Corbyn will be the fifth Labour leader that David Cameron has faced across the despatch box over the past decade since he became Tory leader. The Labour leader, who has promised a different approach to politics, says he has "crowd sourced" ideas for questions to ask Mr Cameron and has been given more than 30,000 suggestions.
The Islington North MP has said PMQs is too confrontational and that he will refrain from both "repartee" and trading barbs, instead vowing to focus on serious issues such as poverty, inequality and the challenges facing young people. Mr Corbyn has said that Angela Eagle, the shadow business secretary, will deputise for him at PMQs when he does not attend - for instance when Mr Cameron is travelling abroad. He has also floated the idea of allowing other colleagues to take the floor on occasion, saying he had approached the Commons Speaker John Bercow to discuss the issue.
When he became leader in 2005, Mr Cameron said he wanted to move away from the "Punch and Judy" style of politics often associated with PMQs but admitted some years later that he had failed. Since it was first televised in 1990, PMQs has been seen as a key barometer of a leader's judgement, their command of the Commons and their standing among their fellow MPs although critics have argued it has become a caricature and is in need of far-reaching reforms. 'Shot in Joburg': Homeless youth trained as photographers Downtown Johannesburg is a tough place to be homeless.
But one group of former street children have found a way to learn a skill and make a living. "I was shot in Joburg" is a non-profit studio that teaches homeless youngsters how to take photographs of their neighbourhood and make a profit from it. BBC News went to meet one of the project's first graduates. JD Sports boss says higher wages could hurt expansion JD Sports Executive Chairman Peter Cowgill says a higher minimum wage for UK workers could mean "more spending power in the pockets of potential consumers." But that spending power is unlikely to outweigh the higher labour costs at his firm, he says.
The costs could hit JD Sports' expansion plans, he added, which could mean fewer extra jobs. Thanasi Kokkinakis backed by Tennis Australia president Steve Healy Thanasi Kokkinakis deserves kudos rather than criticism for his behaviour. Thanasi Kokkinakis has been the collateral damage in the recent storm around his friend Nick Kyrgios and deserves kudos rather than criticism for his own behaviour, according to Tennis Australia president Steve Healy.
@@ -0,0 +1,20 @@
Șeful ONU declară că nu există soluții militare în Siria Secretarul General Ban Ki-moon afirmă că răspunsul său la suportul militar al Rusiei pentru Siria este că „nu există o soluție militară” la conflictul care durează de aproape cinci ani iar mai multe arme nu ar face decât să agraveze violența și suferința a milioane de oameni. Șeful ONU a solicitat din nou tuturor părților, inclusiv Consiliului de securitate ONU divizat să se unifice și să susțină negocierile pentru a găsi o soluție politică. Ban a declarat miercuri în cadrul unei conferințe că intenționează să se întâlnească luna aceasta cu miniștrii de externe din cinci țări permanent prezente în consiliu - SUA, Rusia, China, Anglia și Franța - pe marginea sesiunii ministeriale a Adunării Generale pentru a discuta despre Siria.
Ban și-a exprimat regretul că divizările în consiliu și între poporul sirian și puterile regionale „au făcut această situație de nerezolvat”. Ban le-a cerut celor cinci membri permanenți să dea dovadă de solidaritatea și unitatea arătate atunci când au reușit să încheie un acord referitor la armele nucleare ale Iranului, abordând astfel criza din Siria. 8 cifre din sondaje care arată că Donald Trump are șanse reale Unii au încercat să îl eticheteze ca politician „flip-flop”. Alții l-au numit o glumă. Iar alții așteaptă implozia. Însă indiferent de modul în care unii republicani încearcă să îl dărâme pe Donald Trump din vârful sondajelor, nu a funcționat (încă).
Zece din ultimele 11 sondaje naționale au arătat că Donald Trump conduce cu un procent din două cifre iar unele voci încep să se întrebe serios ce înseamnă acest lucru pentru șansele de numire ale mogulului imobiliar. Desigur, este încă prematur. Nimic din toate acestea nu spune că Trump va câștiga cursa pentru nominalizarea republicanilor. Pundits arată că, în aceeași perioadă a anului 2011, avansul lui Rick Perry îi făcea loc lui Herman Cain în sondaje, dar niciunul dintre ei nu a câștigat în vreun stat în cursa de nominalizare. Iar motivele pentru care s-ar lupta din greu la alegerile generale sunt numeroase. Însă grupurile din exterior precum Super PAC al lui Jeb Bush și grupul conservator economic Club for Growth admit puterea lui Trump și încep să îl susțină cu bani.
În continuare vă prezentăm câteva cifre din sondaje recente care sugerează că mogulul imobiliar nu este doar ceva trecător: Cifrele care indică susținerea față de Trump s-au întors la 180 grade. Chiar înainte ca Donald Trump să își anunțe candidatura, la mijlocul lui iunie, un sondaj realizat de Universitatea din Monmouth arăta că doar doi din 10 republicani aveau o părere pozitivă despre mogulul imobiliar. Până la mijlocul lui iulie, procentul a urcat la 40%. La începutul lui august, era 52%. În prezent, șase din 10 republicani au o părere favorabilă despre Donald Trump. Aproximativ trei din 10 declară că au o părere negativă. Aceste cifre se mențin. Un sondaj realizat săptămâna trecută de Quinnipiac în Iowa a concluzionat că 60% dintre republicanii din regiune au o părere favorabilă despre Trump.
Două treimi dintre alegătorii GOP ar fi fericiți dacă Trump ar câștiga cursa pentru nominalizare. Într-un sondaj realizat săptămâna trecută de CNN/ORC, 67% dintre republicani au declarat că ar fi „entuziasmați” sau „mulțumiți” dacă Trump ar câștiga cursa pentru nominalizare. Doar doi din 10 declară că ar fi „supărați” dacă Trump ar câștiga cursa pentru nominalizare. Doar Ben Carson generează aproximativ același nivel de entuziasm ca Trump (43% declară că ar fi „entuziasmați” față de 40% care declară același lucru despre Trump). Cel mai aproape în ceea ce privește entuziasmul? Marco Rubio, cu doar 21%.
De partea cealaltă, 47% dintre alegătorii republicani afirmă că ar fi „nemulțumiți” sau „supărați” dacă favoritul Jeb Bush câștigă cursa pentru nominalizare. Majoritatea republicanilor nu consideră temperamentul lui Trump o problemă. Deși Donald Trump a fost puternic criticat pentru insultele aduse și stilul său bombastic, 52% dintre alegătorii republicani la nivel național consideră că mogulul imobiliar are temperamentul potrivit pentru a fi președinte, conform sondajului realizat luni de ABC News/Washington Post. Regăsim aceleași cifre în statul Iowa, unde tot 52% dintre republicani cred că Trump are personalitatea potrivită pentru a fi conducător, conform sondajului realizat săptămâna trecută de Quinnipiac.
Totuși, 44% sunt de părere că nu are personalitatea necesară pentru a acționa eficient și aproape șase din 10 independenți afirmă că temperamentul său nu are ce căuta la Casa Albă, conform ABC/Post. Alegătorii republicani se obișnuiesc cu ideea. Atunci când iau atitudinea de intelectuali, alegătorii republicani consideră că Trump este autentic. Conform unui sondaj realizat săptămâna trecută de CNN/ORC, la întrebarea cine are cele mai multe șanse să câștige cursa pentru nominalizare GOP, patru din 10 au declarat că Trump. Situația s-a schimbat față de finalul lui iulie, când patru din 10 ar fi pariat pe Jeb Bush. Informare completă: în trecut, alegătorii GOP nu au citit foarte bine viitorul.
În aceeași perioadă a ultimelor alegeri, patru din 10 republicani l-au ales pe Rick Perry în cursa pentru nominalizare, față de doar 28% pentru Mitt Romney. Însă, aceste cifre arată că majoritatea alegătorilor GOP consideră plauzibilă campania lui Trump. Chiar dacă republicanii sau repliat spre un alt candidat. Trump încă se află în fruntea tuturor. Unele voci spun că situația divizată va contribui probabil la victoria lui Trump, în timp ce susținerea contra lui Trump se va împărți la mai mult de doisprezece candidați. Însă un sondaj derulat la începutul lui septembrie de Universitatea din Monmouth arată că, în situația ipotetică a unei colaborări între Trump și majoritatea celorlalți candidați republicani, aproape întotdeauna Trump va beneficia de susținerea majoritară.
Trump se află la distanță de 13 puncte de Carly Fiorina, la 14 puncte de Marco Rubio, la 15 puncte de Walker, la 19 puncte de Jeb Bush și, în cele din urmă, la câte 33 de puncte față de Rand Paul, John Kasich și Chris Christie. Este aproape la egalitate cu Ted Cruz. Singurul candidat care îl învinge? Ben Carson l-ar învinge pe omul de afaceri cu 19 puncte într-o confruntare ipotetică de unu la unu. Majoritatea susținătorilor lui Donald Trump declară că s-au decis. Un nou sondaj realizat marți de CBS/NYT arată că peste jumătate dintre alegătorii care îl susțin pe Trump declară că nu își schimbă opțiunea de vot. Evident, se pot întâmpla multe în acest sens și nimeni nu poate spune că aceștia nu se vor răzgândi niciodată.
46% afirmă că lasă portița deschisă posibilității de a-și schimba opțiunea. Cu toate acestea, cel mai important adversar al lui Trump este în prezent neurochirurgul Ben Carson, însă este de două ori mai probabil ca alegătorii care declară că s-au decis să voteze cu Trump. Șase din 10 republicani afirmă că sunt de acord cu Trump în problema imigrării. De când Donald Trump i-a numit pe imigranții din Mexic „violatori” în discursul de deschidere a campaniei sale, în urmă cu două luni, imigrarea a fost subiectul central în campania pentru 2016. Unii sunt îngrijorați că stilul bombastic al lui Trump va duce la o scindare între alegătorii hispanici importanți și Partidul Republican și va prejudicia eforturile de rebranding.
Însă, conform sondajului realizat luni de ABC/Post, șase din 10 republicani afirmă că sunt de acord cu Trump în problema imigrării. Așa că, se pare că atâta timp cât problema imigrării rămâne în lumina reflectoarelor, la fel va rămâne și Doland Trump. Frustrarea față de autorități atinge noi culmi. Donald Trump și Ben Carson sunt acum susținuți de aproape jumătate dintre alegătorii republicani, în mare parte datorită statutului lor de outsideri. Conform sondajului realizat luni de ABC/Post, șase din 10 republicani afirmă că preferă un outsider politic în detrimentul cuiva cu experiență în guvernare. Oamenii sunt de asemenea supărați pe autoritățile de la Washington.
Un sondaj derulat în urmă cu două săptămâni în Iowa de către Des Moines Register/Bloomberg arată că trei din patru republicani din Iowa sunt frustrați de prestația republicanilor din COngres, 54% declarându-se „nemulțumiți” iar 21% „nervoși la culme”. Jeremy Corbyn își face debutul la Prime Minister's Questions Încă de la alegerea sa, debutul domnului Corbyn la PMQs a fost îndelung așteptat Noul lider al Partidului Laburist, Jeremy Corbyn, își va face mai târziu debutul la Prime Minister's Questions, confruntându-se pentru prima dată cu David Cameron.
Dl Corbyn va adresa primele dintre cele șase întrebări la care are dreptul la scurt timp după prânz; prestația sa va fi probabil analizată îndeaproape de mass-media și parlamentarii laburiști. În cadrul aparițiilor săptămânale, el a cerut „mai puțin teatru și mai multe fapte”. A declarat de asemenea că poate renunța la câteva participări și că le cedează colegilor săi. Confruntarea va fi primul test parlamentar al Dl Corbyn în poziție de lider, venind după ce a numit un „cabinet fantomă” și după discursul pe care l-a ținut marți la congresul anual TUC.
Între timp, decizia liderului Partidului laburist de a păstra tăcerea la rostirea imnului național în cadrul unei slujbe ținute marți cu ocazia aniversării a 75 de ani de la Bătălia Angliei a atras critici din partea unor parlamentari conservatori și a ținut prima pagină a ziarelor. Decizia domnului Corbyn de a nu cânta imnul național a atras atenția Un purtător de cuvânt al Dl Corbyn a declarat că acesta „a păstrat tăcerea în mod respectuos” și a recunoscut „eroismul Forțelor aeriene britanice în Bătălia Angliei.”
Însă un membru al cabinetului fantomă al Dl Corbyn, Owen Smith, a declarat pentru emisiunea Two's Newsnight transmisă de BBC că i-ar fi recomandat liderului laburist să cânte imnul național „indiferent” de credința sa că monarhia ar trebui abolită. În jur de doisprezece miniștri din cabinetul fantomă au refuzat să facă parte din echipa de frunte a Dl Corbyn, argumentând prin diferențe de opinie legate de economie, apărare și externe, în timp ce mai puțin de o șesime din partidul parlamentar l-a susținut ca lider. Corespondentul politic al BBC, Robin Brant, declară că diferențele de politică „se cumulează” în Partidul Laburist după numirea domnului Corbyn referitor la poziția sa față de Uniunea Europeană și limita de beneficii.
Dl Corbyn a declarat la conferința TUC că Partidul Laburist va aduce modificări prin care se va elimina integral ideea limitării. Câteva ore mai târziu, Dl Smith, Ministrul Muncii și Pensiilor, a declarat că partidul „este foarte clar” în opoziția exclusivă față de planurile guvernului de a reduce nivelul „cap” de la 26.000 lire la 23.000 lire. Dl Corbyn va fi al cincilea lider laburist cu care se confruntă David Cameron la tribună în ultimul deceniu, de când a preluat conducerea Partidului Conservator. Liderul laburist, care a promis o abordare diferită a politicii, spune că are idei „din surse externe” pentru întrebări pe care să i le adreseze Domnului Cameron și că a primit peste 30.000 de sugestii.
Parlamentarul Islington North a afirmat că PMQs implică un nivel de confruntare prea înalt și că se va abține de la replici și atacuri, angajându-se să se concentreze în schimb pe probleme serioase precum sărăcia, inegalitatea și provocările cu care se confruntă tinerii. Dl Corbyn a declarat că Angela Eagle, Ministrul de finanțe, îi va ține locul la PMQs atunci când el nu poate participa - de exemplu atunci când Dl Cameron se deplasează în străinătate. A exprimat de asemenea ideea că va permite altor colegi să ia cuvântul ocazional, spunând că l-a abordat pe Președintele Camerei Deputaților, John Bercow, pentru a discuta acest aspect.
În 2005, când a preluat conducerea, Dl Cameron a declarat că dorește să renunțe la stilul politic „Punch and Judy” asociat adesea cu PMQs însă a recunoscut câțiva ani mai târziu că nu a reușit în demersul său. De la prima transmisie, în 1990, PMQs a fost considerată un barometru cheie al raționamentului unui lider, al modului în care acesta conduce Camera Deputaților și a poziției sale în rândul colegilor parlamentari, deși criticii afirmă a ca devenit o caricatură și că are nevoie de o reformare profundă. „Cadru în Joburg”: Tineri fără adăpost beneficiază de cursuri de fotografie Este dificil să fii un om fără adăpost în Johannesburg.
Însă un grup de oameni care au trăit pe străzi în copilărie au găsit un mod de a învăța o meserie și de a-și câștiga traiul. „I was shot în Joburg” este un studio non-profit care îi învață pe tinerii fără adăpost să facă fotografii ale zonelor în care trăiesc și să câștige bani din asta. BBC News s-a întâlnit cu unul dintre primii absolvenți ai proiectului. Șeful JD Sports spune că salariile mai mari ar putea dăuna extinderii Președintele JD Sports, Peter Cowgill, declară că o creștere a salariului minim în Marea Britanie ar putea însemna „o putere de cumpărare mai mare în buzunarele potențialilor consumatori.” Este însă puțin probabil ca respectiva putere de cumpărare să depășească costurile mai mari pentru forța de muncă în cadrul firmei, afirmă el.
Costurile ar putea avea impact asupra planurilor de extindere ale JD Sports, a adăugat el, ceea ce ar putea însemna mai puține locuri de muncă noi. Thanasi Kokkinakis susținut de președintele Tennis Australia, Steve Healy Thanasi Kokkinakis ar merita să fie lăudat și nu criticat pentru comportamentul său. Thanasi Kokkinakis a fost victimă colaterală în „furtuna” creată în jurul prietenului său, Nick Kyrgios, iar comportamentul său merită mai degrabă cuvinte de laudă și nu critică, în opinia președintelui Tennis Australia, Steve Healy.
Binary file not shown.
@@ -0,0 +1,11 @@
Corrections to votes and voting intentions: see Minutes Assignment conferred on a Member: see Minutes Membership of committees and delegations: see Minutes Decisions concerning certain documents: see Minutes Forwarding of texts adopted during the sitting: see Minutes Dates for next sittings: see Minutes
Membership of Parliament: see Minutes Approval of Minutes of previous sitting: see Minutes Membership of Parliament: see Minutes Verification of credentials: see Minutes Documents received: see Minutes Written statements and oral questions (tabling): see Minutes Petitions: see Minutes Texts of agreements forwarded by the Council: see Minutes Action taken on Parliament's resolutions: see Minutes Agenda for next sitting: see Minutes Closure of sitting (The sitting was closed at 7.45 p.m.)
Election of Vice-Presidents of the European Parliament (deadline for submitting nominations): see Minutes (The sitting was suspended at 12.40 p.m. and resumed at 3.00 p.m.) Election of Quaestors of the European Parliament (deadline for submitting nominations): see Minutes (The sitting was suspended at 3.25 p.m. and resumed at 6.00 p.m.) Agenda for next sitting: see Minutes Closure of sitting (The sitting was closed at 6.15 p.m.) Opening of the sitting (The sitting was opened at 9.35 a.m.) Documents received: see Minutes Approval of Minutes of previous sitting: see Minutes Membership of Parliament: see Minutes
Membership of committees (deadline for tabling amendments): see Minutes (The sitting was suspended at 7 p.m. and resumed at 9 p.m.) Agenda for next sitting: see Minutes Closure of sitting (The sitting was suspended at 23.25 p.m.) Documents received: see Minutes Communication of Council common positions: see Minutes (The sitting was suspended at 11.35 a.m. and resumed for voting time at noon) Approval of Minutes of previous sitting: see Minutes Committee of Inquiry into the crisis of the Equitable Life Assurance Society (extension of mandate): see Minutes
Announcement by the President: see Minutes 1. Membership of committees (vote) 2. Amendment of the ACP-EC Partnership Agreement (vote) 4. Certification of train drivers operating locomotives and trains on the railway system in the Community (vote) 6. Law applicable to non-contractual obligations ("ROME II") (vote) 8. Seventh and eighth annual reports on arms exports (vote) Corrections to votes and voting intentions: see Minutes Membership of committees and delegations: see Minutes Request for waiver of parliamentary immunity: see Minutes Decisions concerning certain documents: see Minutes
Written statements for entry
Written statements for entry in the register (Rule 116): see Minutes Forwarding of texts adopted during the sitting: see Minutes Dates for next sittings: see Minutes Adjournment of the session I declare the session of the European Parliament adjourned. (The sitting was closed at 1 p.m.) Approval of Minutes of previous sitting: see Minutes Membership of Parliament: see Minutes Request for the defence of parliamentary immunity: see Minutes Appointments to committees (proposal by the Conference of Presidents): see Minutes Documents received: see Minutes Texts of agreements forwarded by the Council: see Minutes
Action taken on Parliament's resolutions: see Minutes Oral questions and written statements (tabling): see Minutes Written statements (Rule 116): see Minutes Agenda: see Minutes 1. Appointments to parliamentary committees (vote): see Minutes Voting time Agenda for next sitting: see Minutes Closure of sitting (The sitting was closed at 12 midnight) Opening of the sitting (The sitting was opened at 09.05) Documents received: see Minutes Approval of Minutes of previous sitting: see Minutes 1. Protection of passengers against displaced luggage (vote) 2.
Approval of motor vehicles with regard to the forward field of vision of the driver (vote) 3. EC-Korea Agreement on scientific and technological cooperation (vote) 4. Mainstreaming sustainability in development cooperation policies (vote) 5. Draft Amending Budget No 1/2007 (vote) 7. EC-Gabon Fisheries Partnership (vote) 10. Limitation periods in cross-border disputes involving personal injuries and fatal accidents (vote) 12. Strategy for a strengthened partnership with the Pacific Islands (vote) 13. The European private company statute (vote) That concludes the vote.
Corrections to votes and voting intentions: see Minutes Assignment conferred on a Member: see Minutes Membership of committees and delegations: see Minutes Decisions concerning certain documents: see Minutes Forwarding of texts adopted during the sitting: see Minutes Dates for next sittings: see Minutes
Written statements for entry
@@ -0,0 +1,11 @@
Corectările voturilor şi intenţiile de vot: a se vedea procesul-verbal Misiune încredinţată unui deputat: consultaţi procesul-verbal Componenţa comisiilor şi a delegaţiilor: a se vedea procesul-verbal Decizii privind anumite documente: a se vedea procesul-verbal Transmiterea textelor adoptate în cursul prezentei şedinţe: a se vedea procesul-verbal Calendarul următoarelor şedinţe: a se vedea procesul-verbal
Componenţa Parlamentului: a se vedea procesul-verbal Aprobarea procesului-verbal al şedinţei precedente: a se vedea procesul-verbal Componenţa Parlamentului: a se vedea procesul-verbal Verificarea prerogativelor: a se vedea procesul-verbal Depunere de documente: a se vedea procesul-verbal Declaraţii scrise şi întrebări orale (depunere): consultaţi procesul-verbal Petiţii: a se vedea procesul-verbal Transmiterea de către Consiliu a textelor acordurilor: a se vedea procesul-verbal Cursul dat rezoluţiilor Parlamentului: a se vedea procesul-verbal Ordinea de zi a următoarei şedinţe: a se vedea procesul-verbal Ridicarea şedinţei (Se levanta la sesión a las 19.45 horas)
Alegerea vicepreşedinţilor Parlamentului European (termenul de depunere a candidaturilor): consultaţi procesul-verbal (Die Sitzung wird um 12.40 Uhr unterbrochen und um 15.00 Uhr wiederaufgenommen). Alegerea chestorilor Parlamentului European (termenul de depunere a candidaturilor): consultaţi procesul-verbal (Die Sitzung wird um 15.25 Uhr unterbrochen und um 18.00 Uhr wiederaufgenommen). Ordinea de zi a următoarei şedinţe: a se vedea procesul-verbal Ridicarea şedinţei (Die Sitzung wird um 18.15 Uhr geschlossen.) Deschiderea şedinţei (Die Sitzung wird um 9.35 Uhr eröffnet.) Depunerea documentelor: a se vedea procesul-verbal Aprobarea procesului-verbal al şedinţei precedente: a se vedea procesul-verbal Componenţa Parlamentului: a se vedea procesul-verbal
Componenţa comisiilor (termenul de depunere a amendamentelor): consultaţi procesul-verbal (La seduta, sospesa alle 19.00, è ripresa alle 21.00) Ordinea de zi a următoarei şedinţe: a se vedea procesul-verbal Ridicarea şedinţei (Die Sitzung wird um 23.25 Uhr geschlossen.) Depunerea documentelor: a se vedea procesul-verbal Comunicarea poziţiilor comune ale Parlamentului: a se vedea procesul-verbal (La séance, suspendue à 11h35 dans l'attente de l'Heure des votes, est reprise à midi) Aprobarea procesului-verbal al şedinţei precedente: a se vedea procesul-verbal Comisia de anchetă privind criza societăţii de asigurări "Equitable Life” (prelungirea mandatului): consultaţi procesul-verbal
Comunicarea Preşedintelui: consultaţi procesul-verbal 1. Componenţa comisiilor (vot) 2. Modificarea Acordului de parteneriat ACP-CE ("Acordul de la Cotonou”) (vot) 4. Certificarea mecanicilor de locomotivă care conduc locomotive şi trenuri în sistemul feroviar comunitar (vot) 6. Legea aplicabilă obligaţiilor necontractuale ("Roma II”) (vot) 8. Al şaptelea şi al optulea raport anual privind exportul de armament (vot) Corectările voturilor şi intenţiile de vot: a se vedea procesul-verbal Componenţa comisiilor şi a delegaţiilor: a se vedea procesul-verbal Cerere de ridicare a imunităţii parlamentare: consultaţi procesul-verbal Decizii privind anumite documente: a se vedea procesul-verbal
Declaraţii scrise înscrise
Declaraţii scrise înscrise în registru (articolul 116 din Regulamentul de procedură): a se vedea procesul-verbal Transmiterea textelor adoptate în cursul prezentei şedinţe: a se vedea procesul-verbal Calendarul următoarelor şedinţe: a se vedea procesul-verbal Întreruperea sesiunii Dichiaro interrotta la sessione del Parlamento europeo. (La seduta è tolta alle 13.00) Aprobarea procesului-verbal al şedinţei precedente: a se vedea procesul-verbal Componenţa Parlamentului: a se vedea procesul-verbal Cerere de apărare a imunităţii parlamentare: consultaţi procesul-verbal Numiri în comisii (propunerea Conferinţei preşedinţilor): consultaţi procesul-verbal Depunerea documentelor: a se vedea procesul-verbal Transmiterea de către Consiliu a textelor acordurilor: a se vedea procesul-verbal
Continuări ale rezoluţiilor Parlamentului: consultaţi procesul-verbal Declaraţii scrise şi întrebări orale (depunere): consultaţi procesul-verbal Declaraţii scrise (articolul 116 din Regulamentul de procedură) Ordinea de zi: a se vedea procesul-verbal 1. Numiri în comisiile parlamentare (vot): consultaţi procesul-verbal Timpul afectat votului Ordinea de zi a următoarei şedinţe: a se vedea procesul-verbal Ridicarea şedinţei (La seduta è tolta alle 24.00) Deschiderea şedinţei (The sitting was opened at 09.05) Depunerea documentelor: a se vedea procesul-verbal Aprobarea procesului-verbal al şedinţei precedente: a se vedea procesul-verbal 1. Protecţia pasagerilor împotriva deplasării bagajelor (vot) 2.
Omologarea vehiculelor cu motor cu privire la câmpul de vizibilitate înainte al conducătorului auto (vot) 3. Acordul CE-Coreea de cooperare ştiinţifică şi tehnologică (vot) 4. Integrarea durabilităţii în politicile de cooperare pentru dezvoltare (vot) 5. Proiect de buget rectificativ nr.1/2007 (vot) 7. Acordul de parteneriat în domeniul pescuitului între Comunitatea Europeană şi Republica Gaboneză (vot) 10. Termenele de prescripţie aplicabile în cadrul litigiilor transfrontaliere cu privire la vătămările corporale şi accidentele mortale (vot) 12. Relaţiile UE cu insulele din Pacific: Strategie pentru un parteneriat consolidat (vot) 13. Statutul societăţii private europene (vot) Damit ist die Abstimmungsstunde beendet.
Corectările voturilor şi intenţiile de vot: a se vedea procesul-verbal Misiune încredinţată unui deputat: consultaţi procesul-verbal Componenţa comisiilor şi a delegaţiilor: a se vedea procesul-verbal Decizii privind anumite documente: a se vedea procesul-verbal Transmiterea textelor adoptate în cursul prezentei şedinţe: a se vedea procesul-verbal Calendarul următoarelor şedinţe: a se vedea procesul-verbal
Declaraţii scrise înscrise
BIN
transformers/examples/legacy/seq2seq/test_data/wmt_en_ro/val.len
Normal file
Binary file not shown.
@@ -0,0 +1,16 @@
Brazil's Former Presidential Chief-of-Staff to Stand Trial A federal judge on Tuesday accepted the charges filed against Brazil's former presidential chief of staff for his alleged involvement in a massive corruption scheme at state-owned oil company Petrobras. The federal prosecutor's office said Jose Dirceu will face trial on the corruption, racketeering and money laundering charges filed earlier this month. Fourteen other people will also be tried, including Joao Vaccari Neto, the former treasurer of Brazil's governing Workers' Party and Renato de Souza Duque, Petrobras' former head of corporate services.
Dirceu is the most senior member of the ruling Workers' Party to be taken into custody in connection with the scheme. Dirceu served as former President Luiz Inacio Lula da Silva's chief of staff between 2003 and 2005. He was arrested early August in his home, where he already was under house arrest serving an 11-year sentence for his involvement in a cash-for-votes scheme in Congress more than 10 years ago. Prosecutors have said that Dirceu masterminded the kickback scheme at Petrobras, accepted bribes while in office and continued to receive payments from contractors after he was jailed in late 2013 for the vote-buying scandal.
According to prosecutors, the scheme at Petrobras involved roughly $2 billion in bribes and other illegal funds. Some of that money was allegedly funneled back to campaign coffers of the ruling party and its allies. It also allegedly included the payment of bribes to Petrobras executives in return for inflated contracts. 'Miraculous' recovery for Peshawar massacre schoolboy A teenager paralysed after being shot four times in Pakistan's deadliest terror attack has made a "miraculous" recovery following treatment in the UK. Muhammad Ibrahim Khan, 13, had been told by doctors in Pakistan that he would never walk again.
At least 140 people, mostly children, were killed when gunmen stormed Peshawar's Army Public School last December. Muhammad, who arrived in London last month for surgery, is being discharged from hospital later. Exactly nine months ago, on an ordinary Tuesday morning, Muhammad sat in his first aid class listening to his teachers intently. At the same time seven gunmen disguised in security uniforms were entering the Army Public School. They were strapped with explosives and had one simple mission in mind: Kill every man, woman and child they came across. "I can't forget what happened that day," Muhammad says with a severe stare.
We were sitting in the auditorium, we were asking questions... and then we heard heavy gunfire outside. The terrorists moved inside and they started killing - our teacher was burned alive. Muhammad described pulling four other pupils out of the auditorium as the carnage unfolded. He said he then heard his friend, Hamza calling to him. He said, 'oh brother save me'. I held his hand. That's when I was shot in the back, and he was shot in the head. Most of the people killed in the attack were pupils Hamza died in Muhammad's arms. Muhammad recalled blacking out after that, and the next thing he knew he was in a hospital bed, paralysed from the waist down.
Doctors in Peshawar in northern Pakistan, and then Rawalpindi, close to the capital, told his family there was no treatment, and he would never walk again. "Seeing him I felt like my soul had left my body," says Muhammad's father, Sher Khan Those nine months were the hardest in my life. But Mr Khan and his wife, Sherbano, refused to believe that their cricket-mad son would never be able to use his legs again. They campaigned, and appealed for help on Pakistani TV, gaining the support of high profile people such as cricketer turned politician Imran Khan.
Finally, they were able to raise the funds to bring Muhammad to the UK and provide him with treatment at London's private Harley Street Clinic. Consultant neurosurgeon Irfan Malik described Muhammad as "terrified" when he first arrived at the hospital. "He'd spent the last [few] months lying on a bed, unable to move side to side," says Mr Malik. He was weak, he had a pressure sore on his back. He wasn't in great shape. A vertebra at the base of Muhammad's spine was destroyed Muhammad was shot in his shoulder, his hip, and his back during the attack, damaging his lower spine - leading to paralysis.
But during six hours of surgery, Mr Malik and his team were able to reattach nerve endings and reconstruct the damaged part of the spine. Even Mr Malik was surprised at what happened next. Exactly one week after the surgery Muhammad stood up and started taking steps and walking. We were not expecting to get that sort of excellent result. That was miraculous," he says. Less than two weeks after his operation, Muhammad is ready to leave hospital and start the long road to recovery. Muhammad has defied the odds and started to walk again He says he wants to build his strength and continue his education in the UK. But he says he is determined to return to Pakistan, join the army and help fight terrorism.
"I feel like I have a second chance at life," he says as he shows off pictures he's drawn of guns scribbled out next to school books and pens Muhammad grows physically stronger every day but the psychological trauma he continues to endure is unimaginable. "My anger is not diminishing" he says. In my school little kids were killed. What was their crime? His mother, wiping a tear from her eye, caressed his head and said: "I can see my son walking again." He'll be able to get on with his normal life. 'Super Voice' 4G service from Three offers better signal Three is making use of a lower frequency 4G spectrum that can travel more widely
Mobile phone provider Three has launched a UK service it says will improve reception inside buildings and in rural black spots. Its 4G Super Voice enables customers to make calls and send texts using a lower frequency spectrum. Other networks are looking into introducing the technology, known as Voice Over Long-Term Evolution (VoLTE). It currently works on only the Samsung Galaxy S5, but recent iPhone handsets will be added in the coming months. Three said up to 5.5 million customers would have access to the service by 2017.
Chief technology officer Bryn Jones said: "By the end of the year, one million of our customers will have access to better indoor coverage and be able to use their phones in more places than ever before." Stars prepare for panto season Pantomime season is big business for theatres up and down the UK, with many getting ready for this year's season now. Some of the biggest names in showbusiness now take part in the yuletide theatre. Matthew Kelly and Hayley Mills will be appearing in Cinderella - one as an ugly sister, the other as fairy godmother. They reveal their panto secrets to BBC Breakfast. Steven Wilson: 'If I don't do anything, I feel this creeping guilt'
Steven Wilson was recently the big winner at the Progressive Music Awards Steven Wilson is often dubbed the hardest working musician in the world of progressive rock. The multi-talented musician won three prizes at this month's Progressive Music Awards in London, including album of the year for Hand. The Guardian's five-star review called it "a smart, soulful and immersive work of art." Since the 1980s, Wilson has been the driving force in a number of musical projects, the best known of which is the rock band Porcupine Tree. Now, ahead of two sell-out shows at the Royal Albert Hall, Wilson is releasing a vinyl-only double LP, Transience, to showcase the "more accessible" side of his solo output.
He tells the BBC about his love of vinyl, his busy schedule and explains how comic actor Matt Berry came to be his support act. What does vinyl mean to you? I grew up at the very tail end of the vinyl era, and at the time, I remember, we couldn't wait for CD to come along because vinyl was so frustrating. You would buy the record, take it home, and it would have a scratch, and you would have to take it back again. I love CDs, and for some kinds of music - classical for example - it is better than vinyl. But the problem with the CD and digital downloads is that there's nothing you can really cherish or treasure. Owning vinyl is like having a beautiful painting hanging in your living room.
It's something you can hold, pore over the lyrics and immerse yourself in the art work. I thought it was just a nostalgic thing, but it can't be if kids too young to remember vinyl are enjoying that kind of experience. Do you have a piece of vinyl that you treasure? The truth is I got rid of 100% of my vinyl in the 90s. All the vinyl I have is re-bought. I started off from the perspective that I wanted to recreate the collection I had when I was 15, but it's gone beyond that. The first record which I persuaded my parents to buy for me was Electric Light Orchestra's Out of the Blue.
If I still had my original copy, it would have sentimental value, but, alas, it's in a charity shop somewhere. Steven Wilson hopes the album will be a doorway for potential new fans Why release your new compilation Transience on vinyl? It was originally conceived as an idea for Record Store Day, but we missed the boat on that. My record company had suggested I put together some of my shorter, more accessible songs. I got a bit obsessed by the idea to make something like "an introduction to Steven Wilson," and I was committed to it being a vinyl-only release. Anyone who buys the vinyl does also get a high-resolution download.
Do you have a concern that the album won't show your work in a true light?
@@ -0,0 +1,16 @@
Fostul șef al cabinetului prezidențial brazilian este adus în fața instanței Marți, un judecător federal a acceptat acuzațiile aduse împotriva fostului șef al cabinetului prezidențial brazilian pentru presupusa implicare a acestuia într-o schemă masivă de corupție privind compania petrolieră de stat Petrobras. Biroul procurorului federal a declarat că Jose Dirceu va fi trimis în judecată pentru acuzațiile de corupție, înșelătorie și spălare de bani aduse în această lună. Alte paisprezece persoane vor fi judecate, printre acestea numărându-se Joao Vaccari Neto, fostul trezorier al Partidului Muncitorilor, aflat la putere în Brazilia, și Renato de Souza Duque, fostul președinte al serviciilor pentru întreprinderi ale Petrobras.
Dirceu este cel mai vechi membru al Partidului Muncitorilor aflat la guvernare luat în custodie pentru legăturile cu această schemă. Dirceu a servit ca șef de cabinet al fostului președinte Luiz Inacio Lula da Silva între 2003 și 2005. A fost arestat la începutul lui august de acasă, unde deja se afla sub arest la domiciliu, cu o pedeapsă de 11 ani pentru implicarea într-o schemă de cumpărare a voturilor în Congres cu peste 10 ani în urmă. Procurorii au declarat că Dirceu a dezvoltat schema de luare de mită de la Petrobras, a acceptat mită în timp ce se afla în funcție și a continuat să primească plăți de la antreprenori după ce a fost închis la sfârșitul lui 2013 pentru scandalul voturilor cumpărate.
Conform procurorilor, schema de la Petrobras a implicat aproximativ 2 miliarde de dolari sub formă de mită și alte fonduri ilegale. O parte din acei bani s-ar fi întors în fondul de campanie al partidului aflat la guvernare și al aliaților acestora. De asemenea, ar fi inclus mită către directorii Petrobras în schimbul unor contracte umflate. Recuperarea „miraculoasă” a unui elev supraviețuitor al masacrului de la Peshawar Un adolescent paralizat după ce fusese împușcat de patru ori în cel mai cumplit atac terorist din Pakistan a reușit o recuperare „miraculoasă” după ce a urmat un tratament în Regatul Unit. Lui Mohamed Ibrahim Khan, în vârstă de 13 ani, doctorii din Pakistan îi spuseseră că nu va mai putea să meargă niciodată.
Cel puțin 140 de persoane, majoritatea copii, au fost ucise când bărbați înarmați au atacat școala publică a armatei din Peshawar în luna decembrie a anului trecut. Mohamed, care a sosit la Londra luna trecută pentru operație, va fi externat mai târziu din spital. Exact cu nouă luni în urmă, într-o dimineață obișnuită de marți, Mohamed stătea la ora de primul ajutor și își asculta atent profesorii. Chiar atunci, șapte bărbați înarmați deghizați în uniformele agenților de pază intrau în școala publică a armatei. Purtau centuri cu explozivi și aveau de îndeplinit o misiune simplă: să îi ucidă pe toți bărbații, femeile și copiii care le ieșeau în cale. „Nu pot uita ce s-a întâmplat în acea zi”, spune Mohamed cu o privire aspră.
Stăteam în amfiteatru, puneam întrebări... apoi am auzit focuri de armă afară. Teroriștii au intrat înăuntru și au început să ucidă. Profesorul nostru a fost ars de viu. Mohamed descrie cum a scos patru elevi din amfiteatru în timp ce se desfășura carnagiul. Apoi spune că și-a auzit prietenul, pe Hamza, strigându-l. Spunea „oh, frate, salvează-mă”. L-am ținut de mână. Atunci eu am fost împușcat în spate, iar el în cap. Cei mai mulți dintre cei uciși în atac erau elevi Hamza a murit în brațele lui Mohamed. Mohamed își amintește că imediat după asta a leșinat și că următorul lucru pe care l-a știut a fost că se afla pe un pat de spital, paralizat de la brâu în jos.
Doctorii din Peshawar din nordul Pakistanului, apoi cei din Rawalpindi, aproape de capitală, i-au spus familiei sale că nu exista tratament și că nu va mai putea merge niciodată. „Când l-am văzut, am simțit cum îmi iese sufletul”, spune Sher Khan, tatăl lui Mohamed. Acele nouă luni au fost cele mai grele din viața mea. Însă Khan și soția lui, Sherbano, au refuzat să creadă că fiul lor atât de pasionat de crichet nu-și va mai putea folosi vreodată picioarele. Au făcut o campanie și au cerut ajutor de la televiziunea pakistaneză, atrăgând sprijinul unor oameni faimoși precum Imran Khan, jucător de crichet devenit politician.
Într-un final, au reușit să strângă fonduri pentru a-l duce pe Mohamed în Regatul Unit și a-i oferi tratament la clinica privată Harley Street din Londra. Neurochirurgul consultant Irfan Malik l-a descris pe Mohamed drept „înspăimântat” când acesta a ajuns la spital. „Își petrecuse ultimele [câteva] luni zăcând în pat, fără să se poată mișca de pe o parte pe alta, spune Malik. Era slăbit, se pusese multă presiune pe spatele lui. Nu era într-o formă prea bună. O vertebră de la baza coloanei vertebrale a lui Mohamed fusese distrusă Mohamed fusese împușcat în umăr, în șold și în spate în timpul atacului, iar coloana vertebrală inferioară îi fusese distrusă, ducând la paralizie.
Însă, în timpul unei operații care a durat șase ore, Malik și echipa lui au reușit să lege din nou terminațiile nervoase și să reconstruiască partea distrusă a coloanei. Chiar și Malik a fost surprins de ceea ce s-a întâmplat în continuare. Exact la o săptămână după operație, Mohamed s-a ridicat și a început să facă pași și să meargă. Nu ne așteptam la un rezultat atât de bun. A fost un miracol”, spune acesta. În mai puțin de două săptămâni de la operație, Mohamed este gata să părăsească spitalul și să înceapă procesul lung de recuperare. Mohamed a sfidat soarta și a început să meargă din nou Vrea să devină puternic și să își continue studiile în Regatul Unit. Însă este hotărât să revină în Pakistan, să se înroleze în armată și să lupte împotriva terorismului.
„Simt că am încă o șansă la viață” spune el, arătând imaginile cu arme desenate de el lângă manuale școlare și stilouri Fizic, Mohamed devine tot mai puternic în fiecare zi, însă trauma psihologică prin care trece și acum este de neimaginat. „Furia mea nu a scăzut”, mărturisește el. În școala mea au fost uciși copii mici. Ce crimă au comis ei? Mama lui își șterge o lacrimă, îl mângâie pe creștet și spune: „Îmi văd fiul mergând din nou”. Va putea să-și continue firesc viața. Serviciul 4G „Super Voice” de la Three oferă semnal mai bun Three folosește un spectru 4G cu o frecvență mai joasă, care poate acoperi o zonă mai extinsă
Furnizorul de telefonie mobilă Three a lansat în Regatul Unit un serviciu despre care spune că va îmbunătăți recepția în interiorul clădirilor și în zonele rurale fără semnal. Serviciul 4G Super Voice le permite clienților să efectueze apeluri și să trimită mesaje text folosind un spectru cu o frecvență mai joasă. Și alte rețele intenționează să introducă aceeași tehnologie, cunoscută ca „Voice Over Long-Term Evolution (VoLTE)”. Aceasta funcționează momentan doar cu Samsung Galaxy S5, însă telefoanele iPhone recente vor beneficia de ea în lunile următoare. Three menționează că până la 5,5 milioane de clienți vor avea acces la serviciu până în 2017.
Responsabilul șef pentru tehnologie, Bryn Jones a declarat: „Până la sfârșitul anului, un milion dintre clienții noștri vor avea acces la o acoperire mai bună în interior și își vor putea folosi telefoanele în mai multe locuri ca până acum”. Vedetele se pregătesc pentru stagiunea de pantomimă Stagiunea de pantomimă este foarte importantă pentru teatrele din tot Regatul Unit, multe dintre ele pregătindu-se acum pentru stagiunea din acest an. Acum, la teatrul de Crăciun participă unele dintre numele cele mai mari din showbusiness. Matthew Kelly și Hayley Mills vor apărea în Cenușăreasa - primul în rolul uneia dintre surorile rele, iar a doua în rolul zânei. Aceștia dezvăluie secretele pantomimei lor la BBC Breakfast. Steven Wilson: „Dacă nu fac nimic, mă simt vinovat”
Steven Wilson a fost desemnat recent drept marele câștigător al Progressive Music Awards Steven Wilson a fost numit de multe ori drept cel mai muncitor muzician din lumea rockului progresiv. Talentatul muzician a câștigat trei premii la Progressive Music Awards, care a avut loc luna aceasta la Londra, printre care și premiul pentru cel mai bun album al anului pentru Hand. În recenzia sa de cinci stele, The Guardian a numit albumul „o operă de artă inteligentă, expresivă și captivantă”. Încă din anii 1980, Wilson este motorul mai multor proiecte muzicale, cel mai cunoscut dintre acestea fiind trupa de rock Porcupine Tree. Acum, înainte de două spectacole cu casa închisă la Royal Albert Hall, Wilson lansează un dublu LP doar în format vinil, Transience, pentru a arăta latura „mai accesibilă” a activității sale solo.
A povestit pentru BBC despre dragostea lui pentru viniluri și despre programul său încărcat și a explicat cum a ajuns actorul de comedie Matt Berry să îi deschidă spectacolele. Ce înseamnă vinil pentru tine? Am crescut chiar în perioada de sfârșit a erei vinilurilor și îmi amintesc că atunci abia așteptam apariția CD-ului, căci vinilul era atât de enervant. Cumpărai un disc, mergeai cu el acasă, avea o zgârietură și trebuia să îl aduci înapoi. Iubesc CD-urile, iar pentru anumite tipuri de muzică, de exemplu cea clasică, sunt mai bune decât vinilurile. Însă problema cu CD-urile și cu descărcările digitale este aceea că nu mai există nimic pe care să îl prețuiești cu adevărat. Să ai un vinil e ca și cum ai avea un tablou frumos agățat în sufragerie.
E ceva ce poți ține în mână, în timp ce te lași absorbit de versuri și copleșit de actul artistic. Am crezut că e doar o chestie nostalgică, însă nu are cum să fie așa dacă unor puști prea tineri să-și amintească de viniluri le place acest gen de experiență. Ai vreun vinil la care ții în mod special? Recunosc că am scăpat de toate vinilurile în anii '90. Toate vinilurile pe care le am sunt cumpărate din nou. Am pornit de la ideea de a reface colecția pe care o aveam la 15 ani, însă am trecut de limita aceea. Primul disc pe care mi-am convins părinții să mi-l cumpere a fost Out of the Blue de la Electric Light Orchestra.
Dacă aș mai fi avut încă exemplarul inițial, acesta ar fi avut valoare sentimentală, însă, din păcate, se află pe undeva printr-un magazin de caritate. Steven Wilson speră că albumul va fi o poartă către posibili fani noi De ce ți-ai lansat noua compilație Transience pe vinil? Aceasta a fost concepută inițial ca idee pentru Ziua magazinelor de discuri, însă am ratat ocazia. Casa mea de discuri sugerase să adun câteva dintre melodiile mele mai scurte și mai accesibile. Am ajuns să fiu ușor obsedat de ideea de a face ceva gen „introducere în muzica lui Steven Wilson” și am ținut neapărat ca proiectul să fie lansat doar pe vinil. Cine cumpără vinilul primește, de asemenea, și o variantă descărcată la rezoluție înaltă.
Ești îngrijorat că albumul nu va arăta muzica ta în adevărata ei lumină?
@@ -0,0 +1,38 @@
# Copyright 2020 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

export WANDB_PROJECT=distil-marian
export BS=64
export GAS=1
export m=sshleifer/student_marian_en_ro_6_3
export MAX_LEN=128
python finetune_trainer.py \
    --tokenizer_name $m --model_name_or_path $m \
    --data_dir $ENRO_DIR \
    --output_dir marian_en_ro_6_3 --overwrite_output_dir \
    --learning_rate=3e-4 \
    --warmup_steps 500 --sortish_sampler \
    --fp16 \
    --gradient_accumulation_steps=$GAS \
    --per_device_train_batch_size=$BS --per_device_eval_batch_size=$BS \
    --freeze_encoder --freeze_embeds \
    --num_train_epochs=6 \
    --save_steps 3000 --eval_steps 3000 \
    --max_source_length $MAX_LEN --max_target_length $MAX_LEN \
    --val_max_target_length $MAX_LEN --test_max_target_length $MAX_LEN \
    --do_train --do_eval --do_predict \
    --eval_strategy steps \
    --predict_with_generate --logging_first_step \
    --task translation --label_smoothing_factor 0.1 \
    "$@"
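This script (like the others in this directory) ends with `"$@"`, so any flags passed on the command line are appended after the baked-in defaults and forwarded to `finetune_trainer.py`. A minimal, self-contained sketch of that forwarding pattern (the `wrapper` function here is illustrative, not part of the repo):

```shell
# "$@" expands to the caller's arguments, each preserved as its own word,
# so extra flags are simply appended after the hard-coded ones.
wrapper() {
  printf '%s\n' --task translation "$@"
}

wrapper --learning_rate=1e-4 --fp16
```

Because later occurrences of a flag override earlier ones in most argparse-style CLIs, invoking the script as `./train_distil_marian_enro.sh --num_train_epochs 1` is a common way to do a quick smoke-test run without editing the file.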
@@ -0,0 +1,39 @@
# Copyright 2020 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

export WANDB_PROJECT=distil-marian
export BS=64
export m=sshleifer/student_marian_en_ro_6_3
export MAX_LEN=128
export TPU_NUM_CORES=8

python xla_spawn.py --num_cores $TPU_NUM_CORES \
    finetune_trainer.py \
    --tokenizer_name $m --model_name_or_path $m \
    --data_dir $ENRO_DIR \
    --output_dir marian_en_ro_6_3 --overwrite_output_dir \
    --learning_rate=3e-4 \
    --warmup_steps 500 \
    --per_device_train_batch_size=$BS --per_device_eval_batch_size=$BS \
    --freeze_encoder --freeze_embeds \
    --num_train_epochs=6 \
    --save_steps 500 --eval_steps 500 \
    --logging_first_step --logging_steps 200 \
    --max_source_length $MAX_LEN --max_target_length $MAX_LEN \
    --val_max_target_length $MAX_LEN --test_max_target_length $MAX_LEN \
    --do_train --do_eval \
    --eval_strategy steps \
    --prediction_loss_only \
    --task translation --label_smoothing_factor 0.1 \
    "$@"
transformers/examples/legacy/seq2seq/train_distilbart_cnn.sh
@@ -0,0 +1,39 @@
# Copyright 2020 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

export WANDB_PROJECT=distilbart-trainer
export BS=32
export m=sshleifer/student_cnn_12_6
export tok=facebook/bart-large
export MAX_TGT_LEN=142

python finetune_trainer.py \
    --model_name_or_path $m --tokenizer_name $tok \
    --data_dir cnn_dm \
    --output_dir distilbart-cnn-12-6 --overwrite_output_dir \
    --learning_rate=3e-5 \
    --warmup_steps 500 --sortish_sampler \
    --fp16 \
    --n_val 500 \
    --gradient_accumulation_steps=1 \
    --per_device_train_batch_size=$BS --per_device_eval_batch_size=$BS \
    --freeze_encoder --freeze_embeds \
    --num_train_epochs=2 \
    --save_steps 3000 --eval_steps 3000 \
    --logging_first_step \
    --max_target_length 56 --val_max_target_length $MAX_TGT_LEN --test_max_target_length $MAX_TGT_LEN \
    --do_train --do_eval --do_predict \
    --eval_strategy steps \
    --predict_with_generate --sortish_sampler \
    "$@"
@@ -0,0 +1,35 @@
# Copyright 2020 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

python finetune_trainer.py \
    --model_name_or_path=facebook/mbart-large-cc25 \
    --data_dir $ENRO_DIR \
    --output_dir mbart_cc25_enro --overwrite_output_dir \
    --learning_rate=3e-5 \
    --warmup_steps 500 \
    --fp16 \
    --label_smoothing 0.1 \
    --adam_eps 1e-06 \
    --src_lang en_XX --tgt_lang ro_RO \
    --freeze_embeds \
    --per_device_train_batch_size=4 --per_device_eval_batch_size=4 \
    --max_source_length 128 --max_target_length 128 --val_max_target_length 128 --test_max_target_length 128 \
    --sortish_sampler \
    --num_train_epochs 6 \
    --save_steps 25000 --eval_steps 25000 --logging_steps 1000 \
    --do_train --do_eval --do_predict \
    --eval_strategy steps \
    --predict_with_generate --logging_first_step \
    --task translation \
    "$@"
transformers/examples/legacy/seq2seq/utils.py
@@ -0,0 +1,665 @@
# Copyright 2020 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import itertools
import json
import linecache
import math
import os
import pickle
import socket
from collections.abc import Iterable
from logging import getLogger
from pathlib import Path
from typing import Callable, Union

import git
import numpy as np
import torch
import torch.distributed as dist
from rouge_score import rouge_scorer, scoring
from sacrebleu import corpus_bleu
from sentence_splitter import add_newline_to_end_of_each_sentence
from torch import nn
from torch.utils.data import Dataset, Sampler

from transformers import BartTokenizer, EvalPrediction, PreTrainedTokenizer, T5Tokenizer
from transformers.models.bart.modeling_bart import shift_tokens_right
from transformers.utils import cached_property


try:
    from fairseq.data.data_utils import batch_by_size

    FAIRSEQ_AVAILABLE = True
except (ImportError, ModuleNotFoundError):
    FAIRSEQ_AVAILABLE = False
def label_smoothed_nll_loss(lprobs, target, epsilon, ignore_index=-100):
    """From fairseq"""
    if target.dim() == lprobs.dim() - 1:
        target = target.unsqueeze(-1)
    nll_loss = -lprobs.gather(dim=-1, index=target)
    smooth_loss = -lprobs.sum(dim=-1, keepdim=True)
    if ignore_index is not None:
        pad_mask = target.eq(ignore_index)
        nll_loss.masked_fill_(pad_mask, 0.0)
        smooth_loss.masked_fill_(pad_mask, 0.0)
    else:
        nll_loss = nll_loss.squeeze(-1)
        smooth_loss = smooth_loss.squeeze(-1)

    nll_loss = nll_loss.sum()  # mean()? Scared to break other math.
    smooth_loss = smooth_loss.sum()
    eps_i = epsilon / lprobs.size(-1)
    loss = (1.0 - epsilon) * nll_loss + eps_i * smooth_loss
    return loss, nll_loss
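In equation form, with log-probabilities `lprobs`, non-pad positions `i`, gold tokens `y_i`, and vocabulary size `V`, the loss computed by `label_smoothed_nll_loss` is:

```latex
\begin{aligned}
\mathrm{nll} &= -\sum_{i} \log p_i(y_i), \qquad
\mathrm{smooth} = -\sum_{i} \sum_{k=1}^{V} \log p_i(k), \\
\mathcal{L} &= (1-\epsilon)\,\mathrm{nll} + \frac{\epsilon}{V}\,\mathrm{smooth}.
\end{aligned}
```

This is the usual label-smoothing formulation: a cross-entropy against a target distribution that keeps mass $1-\epsilon+\epsilon/V$ on the gold token and spreads $\epsilon/V$ over every other token.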


def lmap(f: Callable, x: Iterable) -> list:
    """list(map(f, x))"""
    return list(map(f, x))


def calculate_bleu(output_lns, refs_lns, **kwargs) -> dict:
    """Uses sacrebleu's corpus_bleu implementation."""
    return {"bleu": round(corpus_bleu(output_lns, [refs_lns], **kwargs).score, 4)}


def build_compute_metrics_fn(task_name: str, tokenizer: PreTrainedTokenizer) -> Callable[[EvalPrediction], dict]:
    def non_pad_len(tokens: np.ndarray) -> int:
        return np.count_nonzero(tokens != tokenizer.pad_token_id)

    def decode_pred(pred: EvalPrediction) -> tuple[list[str], list[str]]:
        pred_ids = pred.predictions
        label_ids = pred.label_ids
        pred_str = tokenizer.batch_decode(pred_ids, skip_special_tokens=True)
        label_ids[label_ids == -100] = tokenizer.pad_token_id
        label_str = tokenizer.batch_decode(label_ids, skip_special_tokens=True)
        pred_str = lmap(str.strip, pred_str)
        label_str = lmap(str.strip, label_str)
        return pred_str, label_str

    def summarization_metrics(pred: EvalPrediction) -> dict:
        pred_str, label_str = decode_pred(pred)
        rouge: dict = calculate_rouge(pred_str, label_str)
        summ_len = np.round(np.mean(lmap(non_pad_len, pred.predictions)), 1)
        rouge.update({"gen_len": summ_len})
        return rouge

    def translation_metrics(pred: EvalPrediction) -> dict:
        pred_str, label_str = decode_pred(pred)
        bleu: dict = calculate_bleu(pred_str, label_str)
        gen_len = np.round(np.mean(lmap(non_pad_len, pred.predictions)), 1)
        bleu.update({"gen_len": gen_len})
        return bleu

    compute_metrics_fn = summarization_metrics if "summarization" in task_name else translation_metrics
    return compute_metrics_fn


def trim_batch(
    input_ids,
    pad_token_id,
    attention_mask=None,
):
    """Remove columns that are populated exclusively by pad_token_id"""
    keep_column_mask = input_ids.ne(pad_token_id).any(dim=0)
    if attention_mask is None:
        return input_ids[:, keep_column_mask]
    else:
        return (input_ids[:, keep_column_mask], attention_mask[:, keep_column_mask])
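To see what `trim_batch` does without tensors, here is a pure-Python sketch on toy lists (the helper name `trim_batch_lists` and the sample data are made up for illustration): a column survives only if at least one row holds a non-pad token there.

```python
def trim_batch_lists(batch, pad_token_id):
    # keep a column only if some row has a non-pad token in it
    keep = [any(row[j] != pad_token_id for row in batch) for j in range(len(batch[0]))]
    return [[tok for tok, k in zip(row, keep) if k] for row in batch]


batch = [[5, 7, 0, 0], [5, 0, 0, 0]]  # 0 plays the role of pad_token_id
print(trim_batch_lists(batch, 0))  # → [[5, 7], [5, 0]]
```

Note the interior pad in the second row is kept: only columns that are pad for *every* row are dropped, which is exactly the `.any(dim=0)` mask above.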


class AbstractSeq2SeqDataset(Dataset):
    def __init__(
        self,
        tokenizer,
        data_dir,
        max_source_length,
        max_target_length,
        type_path="train",
        n_obs=None,
        prefix="",
        **dataset_kwargs,
    ):
        super().__init__()
        self.src_file = Path(data_dir).joinpath(type_path + ".source")
        self.tgt_file = Path(data_dir).joinpath(type_path + ".target")
        self.len_file = Path(data_dir).joinpath(type_path + ".len")
        if os.path.exists(self.len_file):
            self.src_lens = pickle_load(self.len_file)
            self.used_char_len = False
        else:
            self.src_lens = self.get_char_lens(self.src_file)
            self.used_char_len = True
        self.max_source_length = max_source_length
        self.max_target_length = max_target_length
        assert min(self.src_lens) > 0, f"found empty line in {self.src_file}"
        self.tokenizer = tokenizer
        self.prefix = prefix if prefix is not None else ""

        if n_obs is not None:
            self.src_lens = self.src_lens[:n_obs]
        self.pad_token_id = self.tokenizer.pad_token_id
        self.dataset_kwargs = dataset_kwargs
        dataset_kwargs.update({"add_prefix_space": True} if isinstance(self.tokenizer, BartTokenizer) else {})

    def __len__(self):
        return len(self.src_lens)

    @staticmethod
    def get_char_lens(data_file):
        return [len(x) for x in Path(data_file).open().readlines()]

    @cached_property
    def tgt_lens(self):
        """Length in characters of target documents"""
        return self.get_char_lens(self.tgt_file)

    def make_sortish_sampler(self, batch_size, distributed=False, shuffle=True, **kwargs):
        if distributed:
            return DistributedSortishSampler(self, batch_size, shuffle=shuffle, **kwargs)
        else:
            return SortishSampler(self.src_lens, batch_size, shuffle=shuffle)

    def make_dynamic_sampler(self, max_tokens_per_batch=1024, **kwargs):
        assert FAIRSEQ_AVAILABLE, "Dynamic batch size requires `pip install fairseq`"
        assert not self.used_char_len, "You must call python make_len_file.py before calling make_dynamic_sampler"
        sorted_indices = list(self.make_sortish_sampler(1024, shuffle=False))

        def num_tokens_in_example(i):
            return min(self.src_lens[i], self.max_target_length)

        # call fairseq cython function
        batch_sampler: list[list[int]] = batch_by_size(
            sorted_indices,
            num_tokens_fn=num_tokens_in_example,
            max_tokens=max_tokens_per_batch,
            required_batch_size_multiple=64,
        )
        shuffled_batches = [batch_sampler[i] for i in np.random.permutation(range(len(batch_sampler)))]
        # move the largest batch to the front to OOM quickly (uses an approximation for padding)
        approximate_toks_per_batch = [max(self.src_lens[i] for i in batch) * len(batch) for batch in shuffled_batches]
        largest_batch_idx = np.argmax(approximate_toks_per_batch)
        shuffled_batches[0], shuffled_batches[largest_batch_idx] = (
            shuffled_batches[largest_batch_idx],
            shuffled_batches[0],
        )
        return shuffled_batches

    def __getitem__(self, item):
        raise NotImplementedError("You must implement this")

    def collate_fn(self, batch):
        raise NotImplementedError("You must implement this")


class LegacySeq2SeqDataset(AbstractSeq2SeqDataset):
    def __getitem__(self, index) -> dict[str, torch.Tensor]:
        """Call tokenizer on src and tgt_lines"""
        index = index + 1  # linecache starts at 1
        source_line = self.prefix + linecache.getline(str(self.src_file), index).rstrip("\n")
        tgt_line = linecache.getline(str(self.tgt_file), index).rstrip("\n")
        assert source_line, f"empty source line for index {index}"
        assert tgt_line, f"empty tgt line for index {index}"
        source_inputs = self.encode_line(self.tokenizer, source_line, self.max_source_length)
        target_inputs = self.encode_line(self.tokenizer, tgt_line, self.max_target_length)

        source_ids = source_inputs["input_ids"].squeeze()
        target_ids = target_inputs["input_ids"].squeeze()
        src_mask = source_inputs["attention_mask"].squeeze()
        return {
            "input_ids": source_ids,
            "attention_mask": src_mask,
            "labels": target_ids,
        }

    def encode_line(self, tokenizer, line, max_length, pad_to_max_length=True, return_tensors="pt"):
        """Only used by LegacyDataset"""
        return tokenizer(
            [line],
            max_length=max_length,
            padding="max_length" if pad_to_max_length else None,
            truncation=True,
            return_tensors=return_tensors,
            **self.dataset_kwargs,
        )

    def collate_fn(self, batch) -> dict[str, torch.Tensor]:
        input_ids = torch.stack([x["input_ids"] for x in batch])
        masks = torch.stack([x["attention_mask"] for x in batch])
        target_ids = torch.stack([x["labels"] for x in batch])
        pad_token_id = self.pad_token_id
        y = trim_batch(target_ids, pad_token_id)
        source_ids, source_mask = trim_batch(input_ids, pad_token_id, attention_mask=masks)
        batch = {
            "input_ids": source_ids,
            "attention_mask": source_mask,
            "labels": y,
        }
        return batch
class Seq2SeqDataset(AbstractSeq2SeqDataset):
    """A dataset that calls prepare_seq2seq_batch."""

    def __getitem__(self, index) -> dict[str, str]:
        index = index + 1  # linecache starts at 1
        source_line = self.prefix + linecache.getline(str(self.src_file), index).rstrip("\n")
        tgt_line = linecache.getline(str(self.tgt_file), index).rstrip("\n")
        assert source_line, f"empty source line for index {index}"
        assert tgt_line, f"empty tgt line for index {index}"
        return {"tgt_texts": tgt_line, "src_texts": source_line, "id": index - 1}

    def collate_fn(self, batch) -> dict[str, torch.Tensor]:
        """Call prepare_seq2seq_batch."""
        batch_encoding: dict[str, torch.Tensor] = self.tokenizer.prepare_seq2seq_batch(
            [x["src_texts"] for x in batch],
            tgt_texts=[x["tgt_texts"] for x in batch],
            max_length=self.max_source_length,
            max_target_length=self.max_target_length,
            return_tensors="pt",
            **self.dataset_kwargs,
        ).data
        batch_encoding["ids"] = torch.tensor([x["id"] for x in batch])
        return batch_encoding
class Seq2SeqDataCollator:
    def __init__(self, tokenizer, data_args, decoder_start_token_id, tpu_num_cores=None):
        self.tokenizer = tokenizer
        self.pad_token_id = tokenizer.pad_token_id
        self.decoder_start_token_id = decoder_start_token_id
        assert self.pad_token_id is not None, (
            f"pad_token_id is not defined for ({self.tokenizer.__class__.__name__}), it must be defined."
        )
        self.data_args = data_args
        self.tpu_num_cores = tpu_num_cores
        self.dataset_kwargs = {"add_prefix_space": True} if isinstance(tokenizer, BartTokenizer) else {}
        if data_args.src_lang is not None:
            self.dataset_kwargs["src_lang"] = data_args.src_lang
        if data_args.tgt_lang is not None:
            self.dataset_kwargs["tgt_lang"] = data_args.tgt_lang

    def __call__(self, batch) -> dict[str, torch.Tensor]:
        if hasattr(self.tokenizer, "prepare_seq2seq_batch"):
            batch = self._encode(batch)
            input_ids, attention_mask, labels = (
                batch["input_ids"],
                batch["attention_mask"],
                batch["labels"],
            )
        else:
            input_ids = torch.stack([x["input_ids"] for x in batch])
            attention_mask = torch.stack([x["attention_mask"] for x in batch])
            labels = torch.stack([x["labels"] for x in batch])

            labels = trim_batch(labels, self.pad_token_id)
            input_ids, attention_mask = trim_batch(input_ids, self.pad_token_id, attention_mask=attention_mask)

        if isinstance(self.tokenizer, T5Tokenizer):
            decoder_input_ids = self._shift_right_t5(labels)
        else:
            decoder_input_ids = shift_tokens_right(labels, self.pad_token_id, self.decoder_start_token_id)

        batch = {
            "input_ids": input_ids,
            "attention_mask": attention_mask,
            "decoder_input_ids": decoder_input_ids,
            "labels": labels,
        }
        return batch

    def _shift_right_t5(self, input_ids):
        # shift inputs to the right
        shifted_input_ids = input_ids.new_zeros(input_ids.shape)
        shifted_input_ids[..., 1:] = input_ids[..., :-1].clone()
        shifted_input_ids[..., 0] = self.pad_token_id
        return shifted_input_ids

    def _encode(self, batch) -> dict[str, torch.Tensor]:
        batch_encoding = self.tokenizer.prepare_seq2seq_batch(
            [x["src_texts"] for x in batch],
            tgt_texts=[x["tgt_texts"] for x in batch],
            max_length=self.data_args.max_source_length,
            max_target_length=self.data_args.max_target_length,
            padding="max_length" if self.tpu_num_cores is not None else "longest",  # TPU hack
            return_tensors="pt",
            **self.dataset_kwargs,
        )
        return batch_encoding.data
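The `_shift_right_t5` helper above just prepends the pad id and drops the last label token to build decoder inputs. A toy list version (hypothetical `shift_right` name, plain Python instead of tensors):

```python
def shift_right(ids, pad_token_id=0):
    # decoder inputs: pad id first, then the labels shifted one step right
    return [pad_token_id] + ids[:-1]


print(shift_right([42, 7, 9]))  # → [0, 42, 7]
```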
class SortishSampler(Sampler):
    "Go through the text data by order of src length with a bit of randomness. From fastai repo."

    def __init__(self, data, batch_size, shuffle=True):
        self.data, self.bs, self.shuffle = data, batch_size, shuffle

    def __len__(self) -> int:
        return len(self.data)

    def __iter__(self):
        return iter(sortish_sampler_indices(self.data, self.bs, shuffle=self.shuffle))


def sortish_sampler_indices(data: list, bs: int, shuffle=True) -> np.array:
    "Go through the text data by order of src length with a bit of randomness. From fastai repo."
    if not shuffle:
        return np.argsort(np.array(data) * -1)

    def key_fn(i):
        return data[i]

    idxs = np.random.permutation(len(data))
    sz = bs * 50
    ck_idx = [idxs[i : i + sz] for i in range(0, len(idxs), sz)]
    sort_idx = np.concatenate([sorted(s, key=key_fn, reverse=True) for s in ck_idx])
    sz = bs
    ck_idx = [sort_idx[i : i + sz] for i in range(0, len(sort_idx), sz)]
    max_ck = np.argmax([key_fn(ck[0]) for ck in ck_idx])  # find the chunk with the largest key,
    ck_idx[0], ck_idx[max_ck] = ck_idx[max_ck], ck_idx[0]  # then make sure it goes first.
    sort_idx = np.concatenate(np.random.permutation(ck_idx[1:])) if len(ck_idx) > 1 else np.array([], dtype=int)
    sort_idx = np.concatenate((ck_idx[0], sort_idx))
    return sort_idx
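The shuffle-then-locally-sort dance in `sortish_sampler_indices` can be sketched with the stdlib only. This is an assumed toy restatement (seeded `random` instead of numpy, made-up `sortish_indices` name and lengths), not the real sampler: shuffle, sort within large chunks by length descending, re-batch, and put the batch whose first element is longest up front so an OOM surfaces on the first step.

```python
import random


def sortish_indices(lengths, bs, seed=0):
    rng = random.Random(seed)
    idxs = list(range(len(lengths)))
    rng.shuffle(idxs)
    sz = bs * 50
    big_chunks = [idxs[i : i + sz] for i in range(0, len(idxs), sz)]
    # sort each big chunk by length, longest first
    order = [i for ck in big_chunks for i in sorted(ck, key=lambda j: -lengths[j])]
    batches = [order[i : i + bs] for i in range(0, len(order), bs)]
    # move the batch with the longest leading example to the front
    big = max(range(len(batches)), key=lambda b: lengths[batches[b][0]])
    batches[0], batches[big] = batches[big], batches[0]
    rest = batches[1:]
    rng.shuffle(rest)
    return [i for b in [batches[0]] + rest for i in b]


lengths = [3, 10, 1, 7, 2, 8]
order = sortish_indices(lengths, bs=2)
assert sorted(order) == list(range(6))
assert lengths[order[0]] == 10  # longest example leads the first batch
```

Examples of similar length end up in the same batch, so padding waste stays low while the order still varies epoch to epoch.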
class DistributedSortishSampler(Sampler):
    """Copied from torch DistributedSampler"""

    def __init__(self, dataset, batch_size, num_replicas=None, rank=None, add_extra_examples=True, shuffle=True):
        if num_replicas is None:
            if not dist.is_available():
                raise RuntimeError("Requires distributed package to be available")
            num_replicas = dist.get_world_size()
        if rank is None:
            if not dist.is_available():
                raise RuntimeError("Requires distributed package to be available")
            rank = dist.get_rank()
        self.dataset = dataset
        self.num_replicas = num_replicas
        self.rank = rank
        self.epoch = 0
        if add_extra_examples:
            self.num_samples = int(math.ceil(len(self.dataset) * 1.0 / self.num_replicas))
            self.total_size = self.num_samples * self.num_replicas
        else:
            self.total_size = len(dataset)
            self.num_samples = len(self.available_indices)
        self.batch_size = batch_size
        self.add_extra_examples = add_extra_examples
        self.shuffle = shuffle

    def __iter__(self) -> Iterable:
        g = torch.Generator()
        g.manual_seed(self.epoch)

        sortish_data = [self.dataset.src_lens[i] for i in self.available_indices]
        sortish_indices = sortish_sampler_indices(sortish_data, self.batch_size, shuffle=self.shuffle)
        indices = [self.available_indices[i] for i in sortish_indices]
        assert len(indices) == self.num_samples
        return iter(indices)

    @cached_property
    def available_indices(self) -> np.array:
        indices = list(range(len(self.dataset)))
        # add extra samples to make it evenly divisible
        indices += indices[: (self.total_size - len(indices))]
        assert len(indices) == self.total_size
        # subsample
        available_indices = indices[self.rank : self.total_size : self.num_replicas]
        return available_indices

    def __len__(self):
        return self.num_samples

    def set_epoch(self, epoch):
        self.epoch = epoch
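The `available_indices` padding-and-striding trick can be checked with a small stdlib-only model of it (hypothetical free function and toy sizes, not the class above): pad the index list so every rank receives the same count, then stride by `num_replicas` starting at this rank.

```python
def available_indices(n, num_replicas, rank):
    num_samples = -(-n // num_replicas)  # ceil(n / num_replicas)
    total = num_samples * num_replicas
    idx = list(range(n))
    idx += idx[: total - len(idx)]  # repeat a prefix so the split is even
    return idx[rank:total:num_replicas]


parts = [available_indices(10, 4, r) for r in range(4)]
assert all(len(p) == 3 for p in parts)  # every rank gets ceil(10/4) = 3 items
# together the ranks cover all 10 examples, with 0 and 1 duplicated as padding
assert sorted(i for p in parts for i in p) == sorted(list(range(10)) + [0, 1])
```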
logger = getLogger(__name__)


def use_task_specific_params(model, task):
    """Update config with summarization specific params."""
    task_specific_params = model.config.task_specific_params

    if task_specific_params is not None:
        pars = task_specific_params.get(task, {})
        logger.info(f"setting model.config to task specific params for {task}:\n {pars}")
        logger.info("note: command line args may override some of these")
        model.config.update(pars)


def pickle_load(path):
    """pickle.load(path)"""
    with open(path, "rb") as f:
        return pickle.load(f)


def pickle_save(obj, path):
    """pickle.dump(obj, path)"""
    with open(path, "wb") as f:
        return pickle.dump(obj, f)


def flatten_list(summary_ids: list[list]):
    return list(itertools.chain.from_iterable(summary_ids))


def save_git_info(folder_path: str) -> None:
    """Save git information to output_dir/git_log.json"""
    repo_infos = get_git_info()
    save_json(repo_infos, os.path.join(folder_path, "git_log.json"))


def save_json(content, path, indent=4, **json_dump_kwargs):
    with open(path, "w") as f:
        json.dump(content, f, indent=indent, sort_keys=True, **json_dump_kwargs)


def load_json(path):
    with open(path) as f:
        return json.load(f)


def get_git_info():
    try:
        repo = git.Repo(search_parent_directories=True)
        repo_infos = {
            "repo_id": str(repo),
            "repo_sha": str(repo.head.object.hexsha),
            "repo_branch": str(repo.active_branch),
            "hostname": str(socket.gethostname()),
        }
        return repo_infos
    except TypeError:
        return {
            "repo_id": None,
            "repo_sha": None,
            "repo_branch": None,
            "hostname": None,
        }
ROUGE_KEYS = ["rouge1", "rouge2", "rougeL", "rougeLsum"]


def extract_rouge_mid_statistics(dct):
    new_dict = {}
    for k1, v1 in dct.items():
        mid = v1.mid
        new_dict[k1] = {stat: round(getattr(mid, stat), 4) for stat in ["precision", "recall", "fmeasure"]}
    return new_dict


def calculate_rouge(
    pred_lns: list[str],
    tgt_lns: list[str],
    use_stemmer=True,
    rouge_keys=ROUGE_KEYS,
    return_precision_and_recall=False,
    bootstrap_aggregation=True,
    newline_sep=True,
) -> dict:
    """Calculate rouge using rouge_scorer package.

    Args:
        pred_lns: list of summaries generated by model
        tgt_lns: list of groundtruth summaries (e.g. contents of val.target)
        use_stemmer: Bool indicating whether Porter stemmer should be used to
            strip word suffixes to improve matching.
        rouge_keys: which metrics to compute, defaults to rouge1, rouge2, rougeL, rougeLsum
        return_precision_and_recall: (False) whether to also return precision and recall.
        bootstrap_aggregation: whether to do the typical bootstrap resampling of scores. Defaults to True; if False,
            this function returns a ``collections.defaultdict`` mapping each metric to the list of per-observation
            subscores.
        newline_sep: (default=True) whether to add a newline between sentences. This is essential for calculating
            rougeL on multi-sentence summaries (CNN/DM dataset).

    Returns:
        dict[score: value] if aggregate else defaultdict(list) keyed by rouge_keys

    """
    scorer = rouge_scorer.RougeScorer(rouge_keys, use_stemmer=use_stemmer)
    aggregator = scoring.BootstrapAggregator()
    for pred, tgt in zip(tgt_lns, pred_lns):
        # rougeLsum expects "\n" separated sentences within a summary
        if newline_sep:
            pred = add_newline_to_end_of_each_sentence(pred)
            tgt = add_newline_to_end_of_each_sentence(tgt)
        scores = scorer.score(pred, tgt)
        aggregator.add_scores(scores)

    if bootstrap_aggregation:
        result = aggregator.aggregate()
        if return_precision_and_recall:
            return extract_rouge_mid_statistics(result)  # here we return dict
        else:
            return {k: round(v.mid.fmeasure * 100, 4) for k, v in result.items()}

    else:
        return aggregator._scores  # here we return defaultdict(list)
# Utilities for freezing parameters and checking whether they are frozen


def freeze_params(model: nn.Module):
    """Set requires_grad=False for each of model.parameters()"""
    for par in model.parameters():
        par.requires_grad = False


def freeze_embeds(model):
    """Freeze token embeddings and positional embeddings for bart, just token embeddings for t5."""
    model_type = model.config.model_type

    if model_type in ["t5", "mt5"]:
        freeze_params(model.shared)
        for d in [model.encoder, model.decoder]:
            freeze_params(d.embed_tokens)
    elif model_type == "fsmt":
        for d in [model.model.encoder, model.model.decoder]:
            freeze_params(d.embed_positions)
            freeze_params(d.embed_tokens)
    else:
        freeze_params(model.model.shared)
        for d in [model.model.encoder, model.model.decoder]:
            freeze_params(d.embed_positions)
            freeze_params(d.embed_tokens)
def grad_status(model: nn.Module) -> Iterable:
    return (par.requires_grad for par in model.parameters())


def any_requires_grad(model: nn.Module) -> bool:
    return any(grad_status(model))


def assert_all_frozen(model):
    model_grads: list[bool] = list(grad_status(model))
    n_require_grad = sum(lmap(int, model_grads))
    npars = len(model_grads)
    assert not any(model_grads), f"{n_require_grad / npars:.1%} of {npars} weights require grad"


def assert_not_all_frozen(model):
    model_grads: list[bool] = list(grad_status(model))
    npars = len(model_grads)
    assert any(model_grads), f"none of {npars} weights require grad"
def parse_numeric_n_bool_cl_kwargs(unparsed_args: list[str]) -> dict[str, Union[int, float, bool]]:
    """
    Parse an argv list of unspecified command line args to a dict.
    Assumes all values are either numeric or boolean in the form of true/false.
    """
    result = {}
    assert len(unparsed_args) % 2 == 0, f"got odd number of unparsed args: {unparsed_args}"
    num_pairs = len(unparsed_args) // 2
    for pair_num in range(num_pairs):
        i = 2 * pair_num
        assert unparsed_args[i].startswith("--")
        if unparsed_args[i + 1].lower() == "true":
            value = True
        elif unparsed_args[i + 1].lower() == "false":
            value = False
        else:
            try:
                value = int(unparsed_args[i + 1])
            except ValueError:
                value = float(unparsed_args[i + 1])  # this can raise another informative ValueError

        result[unparsed_args[i][2:]] = value
    return result
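A stdlib-only restatement of the parsing loop above for a quick sanity check (the compact `parse_pairs` name and the sample flags are made up; the int/float/bool coercion mirrors the function's logic):

```python
def parse_pairs(argv):
    out = {}
    assert len(argv) % 2 == 0, f"got odd number of args: {argv}"
    for flag, raw in zip(argv[::2], argv[1::2]):
        assert flag.startswith("--")
        if raw.lower() == "true":
            value = True
        elif raw.lower() == "false":
            value = False
        else:
            try:
                value = int(raw)
            except ValueError:
                value = float(raw)  # non-numeric values raise an informative ValueError
        out[flag[2:]] = value
    return out


print(parse_pairs(["--num_beams", "8", "--early_stopping", "true", "--length_penalty", "0.6"]))
# → {'num_beams': 8, 'early_stopping': True, 'length_penalty': 0.6}
```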


def write_txt_file(ordered_tgt, path):
    with Path(path).open("w") as f:
        for ln in ordered_tgt:
            f.write(ln + "\n")
        f.flush()


def chunks(lst, n):
    """Yield successive n-sized chunks from lst."""
    for i in range(0, len(lst), n):
        yield lst[i : i + n]
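`chunks` restated standalone for a quick check (a trailing chunk may be shorter than `n`):

```python
def chunks(lst, n):
    """Yield successive n-sized chunks from lst."""
    for i in range(0, len(lst), n):
        yield lst[i : i + n]


print(list(chunks([1, 2, 3, 4, 5], 2)))  # → [[1, 2], [3, 4], [5]]
```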


def check_output_dir(args, expected_items=0):
    """
    Checks whether to bail out if output_dir already exists and has more than expected_items in it.

    `args`: needs to have the following attributes of `args`:
      - output_dir
      - do_train
      - overwrite_output_dir

    `expected_items`: normally 0 (default) - i.e. empty dir, but in some cases a few files are expected (e.g. recovery from OOM)
    """
    if (
        os.path.exists(args.output_dir)
        and len(os.listdir(args.output_dir)) > expected_items
        and args.do_train
        and not args.overwrite_output_dir
    ):
        raise ValueError(
            f"Output directory ({args.output_dir}) already exists and "
            f"has {len(os.listdir(args.output_dir))} items in it (expected {expected_items} items). "
            "Use --overwrite_output_dir to overcome."
        )
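A throwaway demonstration of the guard above, using `SimpleNamespace` as a stand-in for the Trainer args (the condition is copied; the directory setup and `check_output_dir_demo` name are made up for the demo):

```python
import os
import tempfile
from types import SimpleNamespace


def check_output_dir_demo(args, expected_items=0):
    # same condition as check_output_dir: non-empty dir + training + no overwrite flag
    if (
        os.path.exists(args.output_dir)
        and len(os.listdir(args.output_dir)) > expected_items
        and args.do_train
        and not args.overwrite_output_dir
    ):
        raise ValueError(f"Output directory ({args.output_dir}) already exists")


with tempfile.TemporaryDirectory() as d:
    open(os.path.join(d, "checkpoint.bin"), "w").close()  # make the dir non-empty
    args = SimpleNamespace(output_dir=d, do_train=True, overwrite_output_dir=False)
    try:
        check_output_dir_demo(args)
        raised = False
    except ValueError:
        raised = True

assert raised  # a non-empty output_dir without --overwrite_output_dir bails out
```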
transformers/examples/legacy/seq2seq/xla_spawn.py
@@ -0,0 +1,82 @@
# Copyright 2020 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
A simple launcher script for TPU training

Inspired by https://github.com/pytorch/pytorch/blob/master/torch/distributed/launch.py

::
    >>> python xla_spawn.py --num_cores=NUM_CORES_YOU_HAVE
        YOUR_TRAINING_SCRIPT.py (--arg1 --arg2 --arg3 and all other
        arguments of your training script)

"""

import importlib
import sys
from argparse import REMAINDER, ArgumentParser
from pathlib import Path

import torch_xla.distributed.xla_multiprocessing as xmp


def parse_args():
    """
    Helper function parsing the command line options.
    Returns the parsed arguments.
    """
    parser = ArgumentParser(
        description=(
            "PyTorch TPU distributed training launch helper utility that will spawn up multiple distributed processes"
        )
    )

    # Optional arguments for the launch helper
    parser.add_argument("--num_cores", type=int, default=1, help="Number of TPU cores to use (1 or 8).")

    # positional
    parser.add_argument(
        "training_script",
        type=str,
        help=(
            "The full path to the single TPU training "
            "program/script to be launched in parallel, "
            "followed by all the arguments for the "
            "training script"
        ),
    )

    # rest from the training program
    parser.add_argument("training_script_args", nargs=REMAINDER)

    return parser.parse_args()


def main():
    args = parse_args()

    # Import training_script as a module.
    script_fpath = Path(args.training_script)
    sys.path.append(str(script_fpath.parent.resolve()))
    mod_name = script_fpath.stem
    mod = importlib.import_module(mod_name)

    # Patch sys.argv
    sys.argv = [args.training_script] + args.training_script_args + ["--tpu_num_cores", str(args.num_cores)]

    xmp.spawn(mod._mp_fn, args=(), nprocs=args.num_cores)


if __name__ == "__main__":
    main()