init
transformers/examples/pytorch/speech-recognition/README.md (new file, 617 lines)
@@ -0,0 +1,617 @@
<!---
Copyright 2021 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->

# Automatic Speech Recognition Examples

## Table of Contents

- [Automatic Speech Recognition with CTC](#connectionist-temporal-classification)
  - [Single GPU example](#single-gpu-ctc)
  - [Multi GPU example](#multi-gpu-ctc)
  - [Examples](#examples-ctc)
    - [TIMIT](#timit-ctc)
    - [Librispeech](#librispeech-ctc)
    - [Common Voice](#common-voice-ctc)
    - [Multilingual Librispeech](#multilingual-librispeech-ctc)
- [Automatic Speech Recognition with CTC and Adapter Layers](#connectionist-temporal-classification-with-adapters)
  - [Massive Multilingual Speech (MMS)](#mms-model)
  - [Examples](#examples-ctc-adapter)
    - [Common Voice](#common-voice-ctc-adapter)
- [Automatic Speech Recognition with Sequence-to-Sequence](#sequence-to-sequence)
  - [Whisper Model](#whisper-model)
  - [Speech-Encoder-Decoder Model](#warm-started-speech-encoder-decoder-model)
  - [Examples](#examples-seq2seq)
    - [Librispeech](#librispeech-seq2seq)

## Connectionist Temporal Classification

The script [`run_speech_recognition_ctc.py`](https://github.com/huggingface/transformers/blob/main/examples/pytorch/speech-recognition/run_speech_recognition_ctc.py) can be used to fine-tune any pretrained [Connectionist Temporal Classification Model](https://huggingface.co/docs/transformers/main/en/model_doc/auto#transformers.AutoModelForCTC) for automatic speech recognition on one of the [official speech recognition datasets](https://huggingface.co/datasets?task_ids=task_ids:automatic-speech-recognition) or a custom dataset.

Speech recognition models that have been pretrained in unsupervised fashion on audio data alone, *e.g.* [Wav2Vec2](https://huggingface.co/transformers/main/model_doc/wav2vec2.html), [HuBERT](https://huggingface.co/transformers/main/model_doc/hubert.html) or [XLSR-Wav2Vec2](https://huggingface.co/transformers/main/model_doc/xlsr_wav2vec2.html), have been shown to require only very little annotated data to yield good performance on automatic speech recognition datasets.

In the script [`run_speech_recognition_ctc.py`], we first create a vocabulary from all unique characters of both the training and evaluation data. Then, we preprocess the speech recognition dataset, which includes correct resampling, normalization and padding. Finally, the pretrained speech recognition model is fine-tuned on the annotated speech recognition datasets using CTC loss.
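
To make the vocabulary step concrete, here is a minimal sketch of what the script does (a toy stand-in dataset is used here; the real script reads the transcriptions from the dataset you pass):

```python
from datasets import Dataset

# Toy stand-in for the real train/eval splits used by run_speech_recognition_ctc.py.
train = Dataset.from_dict({"sentence": ["merhaba dünya", "ses tanıma"]})

# Collect every unique character in the transcriptions and map it to an integer id.
all_text = " ".join(train["sentence"])
vocab = {char: idx for idx, char in enumerate(sorted(set(all_text)))}
vocab["[UNK]"] = len(vocab)  # unknown-character token
vocab["[PAD]"] = len(vocab)  # padding token, also used as the CTC blank
print(vocab)
```
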

---
**NOTE**

If you encounter problems with data preprocessing when setting `--preprocessing_num_workers` > 1,
you might want to set the environment variable `OMP_NUM_THREADS` to 1 as follows:

```bash
OMP_NUM_THREADS=1 python run_speech_recognition_ctc.py ...
```

If the environment variable is not set, the training script might freeze; see https://github.com/pytorch/audio/issues/1021#issuecomment-726915239 for details.

---

### Single GPU CTC

The following command shows how to fine-tune [XLSR-Wav2Vec2](https://huggingface.co/transformers/main/model_doc/xlsr_wav2vec2.html) on [Common Voice](https://huggingface.co/datasets/common_voice) using a single GPU in half-precision.

```bash
python run_speech_recognition_ctc.py \
    --dataset_name="common_voice" \
    --model_name_or_path="facebook/wav2vec2-large-xlsr-53" \
    --dataset_config_name="tr" \
    --output_dir="./wav2vec2-common_voice-tr-demo" \
    --overwrite_output_dir \
    --num_train_epochs="15" \
    --per_device_train_batch_size="16" \
    --gradient_accumulation_steps="2" \
    --learning_rate="3e-4" \
    --warmup_steps="500" \
    --eval_strategy="steps" \
    --text_column_name="sentence" \
    --length_column_name="input_length" \
    --save_steps="400" \
    --eval_steps="100" \
    --layerdrop="0.0" \
    --save_total_limit="3" \
    --freeze_feature_encoder \
    --gradient_checkpointing \
    --chars_to_ignore , ? . ! - \; \: \" “ % ‘ ” � \
    --fp16 \
    --group_by_length \
    --push_to_hub \
    --do_train --do_eval
```

On a single V100 GPU, this script should run in *ca.* 1 hour 20 minutes and yield a CTC loss of **0.39** and word error rate of **0.35**.
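
Once training has finished, the resulting checkpoint can be sanity-checked with the ASR pipeline, for example (a quick sketch; the audio path is a placeholder):

```python
from transformers import pipeline

# Load the checkpoint written to --output_dir by the command above.
asr = pipeline("automatic-speech-recognition", model="./wav2vec2-common_voice-tr-demo")
print(asr("path/to/some_turkish_audio.wav"))  # placeholder 16 kHz audio file
```
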

### Multi GPU CTC

The following command shows how to fine-tune [XLSR-Wav2Vec2](https://huggingface.co/transformers/main/model_doc/xlsr_wav2vec2.html) on [Common Voice](https://huggingface.co/datasets/common_voice) using 8 GPUs in half-precision.

```bash
torchrun \
    --nproc_per_node 8 run_speech_recognition_ctc.py \
    --dataset_name="common_voice" \
    --model_name_or_path="facebook/wav2vec2-large-xlsr-53" \
    --dataset_config_name="tr" \
    --output_dir="./wav2vec2-common_voice-tr-demo-dist" \
    --overwrite_output_dir \
    --num_train_epochs="15" \
    --per_device_train_batch_size="4" \
    --learning_rate="3e-4" \
    --warmup_steps="500" \
    --eval_strategy="steps" \
    --text_column_name="sentence" \
    --length_column_name="input_length" \
    --save_steps="400" \
    --eval_steps="100" \
    --logging_steps="1" \
    --layerdrop="0.0" \
    --save_total_limit="3" \
    --freeze_feature_encoder \
    --gradient_checkpointing \
    --chars_to_ignore , ? . ! - \; \: \" “ % ‘ ” � \
    --fp16 \
    --group_by_length \
    --push_to_hub \
    --do_train --do_eval
```

On 8 V100 GPUs, this script should run in *ca.* 18 minutes and yield a CTC loss of **0.39** and word error rate of **0.36**.

### Multi GPU CTC with Dataset Streaming

The following command shows how to use [Dataset Streaming mode](https://huggingface.co/docs/datasets/dataset_streaming) to fine-tune [XLS-R](https://huggingface.co/transformers/main/model_doc/xls_r.html) on [Common Voice](https://huggingface.co/datasets/common_voice) using 4 GPUs in half-precision.

Streaming mode imposes several constraints on training:
1. We need to construct a tokenizer beforehand and define it via `--tokenizer_name_or_path`.
2. `--num_train_epochs` has to be replaced by `--max_steps`. Similarly, all other epoch-based arguments have to be replaced by step-based ones.
3. Full dataset shuffling on each epoch is not possible, since we don't have the whole dataset available at once. However, the `--shuffle_buffer_size` argument controls how many examples we can pre-download before shuffling them (see the short sketch below).
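
To make the shuffle-buffer constraint concrete, here is a small sketch of how a streamed dataset is consumed with 🤗 Datasets (illustrative only; `run_speech_recognition_ctc_streaming.py` wires this into the training loop for you):

```python
from datasets import load_dataset

# Streaming mode: examples are downloaded on the fly, the full dataset is never materialized.
train = load_dataset("common_voice", "tr", split="train", streaming=True)

# Only `buffer_size` pre-downloaded examples are shuffled at a time;
# this is what --shuffle_buffer_size controls in the training script.
train = train.shuffle(seed=42, buffer_size=500)

for example in train.take(2):  # take() is also lazy on streaming datasets
    print(example["sentence"])
```
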

```bash
torchrun \
    --nproc_per_node 4 run_speech_recognition_ctc_streaming.py \
    --dataset_name="common_voice" \
    --model_name_or_path="facebook/wav2vec2-xls-r-300m" \
    --tokenizer_name_or_path="anton-l/wav2vec2-tokenizer-turkish" \
    --dataset_config_name="tr" \
    --train_split_name="train+validation" \
    --eval_split_name="test" \
    --output_dir="wav2vec2-xls-r-common_voice-tr-ft" \
    --overwrite_output_dir \
    --max_steps="5000" \
    --per_device_train_batch_size="8" \
    --gradient_accumulation_steps="2" \
    --learning_rate="5e-4" \
    --warmup_steps="500" \
    --eval_strategy="steps" \
    --text_column_name="sentence" \
    --save_steps="500" \
    --eval_steps="500" \
    --logging_steps="1" \
    --layerdrop="0.0" \
    --eval_metrics wer cer \
    --save_total_limit="1" \
    --mask_time_prob="0.3" \
    --mask_time_length="10" \
    --mask_feature_prob="0.1" \
    --mask_feature_length="64" \
    --freeze_feature_encoder \
    --chars_to_ignore , ? . ! - \; \: \" “ % ‘ ” � \
    --max_duration_in_seconds="20" \
    --shuffle_buffer_size="500" \
    --fp16 \
    --push_to_hub \
    --do_train --do_eval \
    --gradient_checkpointing
```

On 4 V100 GPUs, this script should run in *ca.* 3h 31min and yield a CTC loss of **0.35** and word error rate of **0.29**.

### Examples CTC

The following tables present a couple of example runs on the most popular speech-recognition datasets.
The presented performances are by no means optimal as no hyper-parameter tuning was done. Nevertheless,
they can serve as a baseline to improve upon.

#### TIMIT CTC

- [TIMIT](https://huggingface.co/datasets/timit_asr)

| Dataset | Dataset Config | Pretrained Model | Word error rate on eval | Phoneme error rate on eval | GPU setup | Training time | Fine-tuned Model & Logs | Command to reproduce |
|-------|------------------------------|-------------|---------------|---------------|----------------------|-------------| -------------| ------- |
| [TIMIT](https://huggingface.co/datasets/timit_asr)| - | [wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) | 0.21 | - | 1 GPU TITAN RTX | 32min | [here](https://huggingface.co/patrickvonplaten/wav2vec2-base-timit-fine-tuned) | [run.sh](https://huggingface.co/patrickvonplaten/wav2vec2-base-timit-fine-tuned/blob/main/run.sh) |
| [TIMIT](https://huggingface.co/datasets/timit_asr)| - | [unispeech-large-1500h-cv](https://huggingface.co/microsoft/unispeech-large-1500h-cv) | 0.22 | - | 1 GPU TITAN RTX | 35min | [here](https://huggingface.co/patrickvonplaten/unispeech-large-1500h-cv-timit) | [run.sh](https://huggingface.co/patrickvonplaten/unispeech-large-1500h-cv-timit/blob/main/run.sh) |
| [TIMIT](https://huggingface.co/datasets/timit_asr)| - | [asapp/sew-mid-100k](https://huggingface.co/asapp/sew-mid-100k) | 0.30 | - | 1 GPU TITAN RTX | 28min | [here](https://huggingface.co/patrickvonplaten/sew-small-100k-timit) | [run.sh](https://huggingface.co/patrickvonplaten/sew-small-100k-timit/blob/main/run.sh) |
| [TIMIT](https://huggingface.co/datasets/timit_asr)| - | [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) | 0.68 | - | 1 GPU TITAN RTX | 26min | [here](https://huggingface.co/patrickvonplaten/distilhubert-timit) | [run.sh](https://huggingface.co/patrickvonplaten/distilhubert-timit/blob/main/run.sh) |
#### Librispeech CTC
- [Librispeech](https://huggingface.co/datasets/librispeech_asr)
| Dataset | Dataset Config | Pretrained Model | Word error rate on eval | Phoneme error rate on eval | GPU setup | Training time | Fine-tuned Model & Logs | Command to reproduce |
|-------|------------------------------|-------------|---------------|---------------|----------------------|-------------| -------------| ------- |
| [Librispeech](https://huggingface.co/datasets/librispeech_asr)| `"clean"` - `"train.100"` | [microsoft/wavlm-large](https://huggingface.co/microsoft/wavlm-large) | 0.049 | - | 8 GPU V100 | 1h30min | [here](https://huggingface.co/patrickvonplaten/wavlm-libri-clean-100h-large) | [run.sh](https://huggingface.co/patrickvonplaten/wavlm-libri-clean-100h-large/blob/main/run.sh) |
| [Librispeech](https://huggingface.co/datasets/librispeech_asr)| `"clean"` - `"train.100"` | [microsoft/wavlm-base-plus](https://huggingface.co/microsoft/wavlm-base-plus) | 0.068 | - | 8 GPU V100 | 1h30min | [here](https://huggingface.co/patrickvonplaten/wavlm-libri-clean-100h-base-plus) | [run.sh](https://huggingface.co/patrickvonplaten/wavlm-libri-clean-100h-base-plus/blob/main/run.sh) |
| [Librispeech](https://huggingface.co/datasets/librispeech_asr)| `"clean"` - `"train.100"` | [facebook/wav2vec2-large-lv60](https://huggingface.co/facebook/wav2vec2-large-lv60) | 0.042 | - | 8 GPU V100 | 1h30min | [here](https://huggingface.co/patrickvonplaten/wav2vec2-librispeech-clean-100h-demo-dist) | [run.sh](https://huggingface.co/patrickvonplaten/wav2vec2-librispeech-clean-100h-demo-dist/blob/main/run.sh) |
| [Librispeech](https://huggingface.co/datasets/librispeech_asr)| `"clean"` - `"train.100"` | [facebook/hubert-large-ll60k](https://huggingface.co/facebook/hubert-large-ll60k) | 0.088 | - | 8 GPU V100 | 1h30min | [here](https://huggingface.co/patrickvonplaten/hubert-librispeech-clean-100h-demo-dist) | [run.sh](https://huggingface.co/patrickvonplaten/hubert-librispeech-clean-100h-demo-dist/blob/main/run.sh) |
| [Librispeech](https://huggingface.co/datasets/librispeech_asr)| `"clean"` - `"train.100"` | [asapp/sew-mid-100k](https://huggingface.co/asapp/sew-mid-100k) | 0.167 | | 8 GPU V100 | 54min | [here](https://huggingface.co/patrickvonplaten/sew-mid-100k-librispeech-clean-100h-ft) | [run.sh](https://huggingface.co/patrickvonplaten/sew-mid-100k-librispeech-clean-100h-ft/blob/main/run.sh) |
#### Common Voice CTC
- [Common Voice](https://huggingface.co/datasets/common_voice)
| Dataset | Dataset Config | Pretrained Model | Word error rate on eval | Phoneme error rate on eval | GPU setup | Training time | Fine-tuned Model & Logs | Command to reproduce |
|-------|------------------------------|-------------|---------------|---------------|----------------------|-------------| -------------| ------- |
| [Common Voice](https://huggingface.co/datasets/mozilla-foundation/common_voice_3_0)| `"tr"` | [facebook/wav2vec2-large-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) | - | 0.099 | 8 GPU V100 | 23min | [here](https://huggingface.co/patrickvonplaten/xls-r-300m-tr-phoneme) | [run.sh](https://huggingface.co/patrickvonplaten/xls-r-300m-tr-phoneme/blob/main/run.sh) |
| [Common Voice](https://huggingface.co/datasets/mozilla-foundation/common_voice_3_0)| `"it"` | [facebook/wav2vec2-large-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) | - | 0.077 | 8 GPU V100 | 23min | [here](https://huggingface.co/patrickvonplaten/xls-r-300m-it-phoneme) | [run.sh](https://huggingface.co/patrickvonplaten/xls-r-300m-it-phoneme/blob/main/run.sh) |
| [Common Voice](https://huggingface.co/datasets/mozilla-foundation/common_voice_3_0)| `"sv-SE"` | [facebook/wav2vec2-large-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) | - | 0.099 | 8 GPU V100 | 23min | [here](https://huggingface.co/patrickvonplaten/xls-r-300m-sv-phoneme) | [run.sh](https://huggingface.co/patrickvonplaten/xls-r-300m-sv-phoneme/blob/main/run.sh) |
| [Common Voice](https://huggingface.co/datasets/common_voice)| `"tr"` | [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) | 0.36 | - | 8 GPU V100 | 18min | [here](https://huggingface.co/patrickvonplaten/wav2vec2-common_voice-tr-demo-dist) | [run.sh](https://huggingface.co/patrickvonplaten/wav2vec2-common_voice-tr-demo-dist/blob/main/run_dist.sh) |
| [Common Voice](https://huggingface.co/datasets/common_voice)| `"tr"` | [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) | 0.31 | - | 8 GPU V100 | 1h05 | [here](https://huggingface.co/patrickvonplaten/wav2vec2-large-xlsr-53-common_voice-tr-ft) | [run.sh](https://huggingface.co/patrickvonplaten/wav2vec2-large-xlsr-53-common_voice-tr-ft/blob/main/run.sh) |
| [Common Voice](https://huggingface.co/datasets/common_voice)| `"tr"` | [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) | 0.35 | - | 1 GPU V100 | 1h20min | [here](https://huggingface.co/patrickvonplaten/wav2vec2-common_voice-tr-demo) | [run.sh](https://huggingface.co/patrickvonplaten/wav2vec2-common_voice-tr-demo/blob/main/run.sh) |
| [Common Voice](https://huggingface.co/datasets/common_voice)| `"tr"` | [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) | 0.31 | - | 8 GPU V100 | 1h05 | [here](https://huggingface.co/patrickvonplaten/wav2vec2-large-xls-r-300m-common_voice-tr-ft) | [run.sh](https://huggingface.co/patrickvonplaten/wav2vec2-large-xls-r-300m-common_voice-tr-ft/blob/main/run.sh) |
| [Common Voice](https://huggingface.co/datasets/common_voice)| `"tr"` | [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) | 0.21 | - | 2 GPU Titan 24 GB RAM | 15h10 | [here](https://huggingface.co/patrickvonplaten/wav2vec2-xls-r-1b-common_voice-tr-ft) | [run.sh](https://huggingface.co/patrickvonplaten/wav2vec2-large-xls-r-1b-common_voice-tr-ft/blob/main/run.sh) |
| [Common Voice](https://huggingface.co/datasets/common_voice)| `"tr"` in streaming mode | [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) | 0.29 | - | 4 GPU V100 | 3h31 | [here](https://huggingface.co/anton-l/wav2vec2-xls-r-common_voice-tr-ft-stream) | [run.sh](https://huggingface.co/anton-l/wav2vec2-xls-r-common_voice-tr-ft-stream/blob/main/run.sh) |
#### Multilingual Librispeech CTC
- [Multilingual Librispeech](https://huggingface.co/datasets/multilingual_librispeech)
| Dataset | Dataset Config | Pretrained Model | Word error rate on eval | Phoneme error rate on eval | GPU setup | Training time | Fine-tuned Model & Logs | Command to reproduce |
|-------|------------------------------|-------------|---------------|---------------|----------------------|-------------| -------------| ------- |
| [Multilingual Librispeech](https://huggingface.co/datasets/multilingual_librispeech)| `"german"` | [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) | 0.13 | - | 1 GPU Titan 24 GB RAM | 15h04 | [here](https://huggingface.co/patrickvonplaten/wav2vec2-xlsr-53-300m-mls-german-ft) | [run.sh](https://huggingface.co/patrickvonplaten/wav2vec2-xlsr-53-300m-mls-german-ft/blob/main/run.sh) |
| [Multilingual Librispeech](https://huggingface.co/datasets/multilingual_librispeech)| `"german"` | [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) | 0.15 | - | 1 GPU Titan 24 GB RAM | 15h04 | [here](https://huggingface.co/patrickvonplaten/wav2vec2-300m-mls-german-ft) | [run.sh](https://huggingface.co/patrickvonplaten/wav2vec2-300m-mls-german-ft/blob/main/run.sh) |

## Connectionist Temporal Classification With Adapters

The script [`run_speech_recognition_ctc_adapter.py`](https://github.com/huggingface/transformers/blob/main/examples/pytorch/speech-recognition/run_speech_recognition_ctc_adapter.py) can be used to fine-tune adapter layers for [Wav2Vec2-like models like MMS](https://huggingface.co/docs/transformers/main/en/model_doc/mms) for automatic speech recognition.

### MMS Model

The [Massive Multilingual Speech (MMS) model](https://huggingface.co/facebook/mms-1b-all) has been pre-trained and fine-tuned on 1000+ languages. The model makes use of adapter attention layers to fine-tune only a small part of the model on a specific language. The model already comes with fine-tuned adapter layers for 1000+ languages and can be used for inference for 1000+ languages out of the box.
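
For example, out-of-the-box inference with a specific adapter language looks roughly like this (a minimal sketch; the zero waveform is just a placeholder for a real 16 kHz recording):

```python
import numpy as np
import torch
from transformers import AutoProcessor, Wav2Vec2ForCTC

model_id = "facebook/mms-1b-all"
processor = AutoProcessor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# Switch both the tokenizer vocabulary and the adapter weights to Turkish (ISO 639-3 "tur").
processor.tokenizer.set_target_lang("tur")
model.load_adapter("tur")

# Placeholder one-second 16 kHz waveform; use a real recording in practice.
audio = np.zeros(16_000, dtype=np.float32)
inputs = processor(audio, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(processor.batch_decode(torch.argmax(logits, dim=-1))[0])
```
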

However, for improved performance or more specific use cases one can re-initialize the adapter weights, freeze all other weights and fine-tune them on a specific dataset as shown in the [example below](#examples-ctc-adapter).

Note that the adapter weights include low dimensional linear layers for every attention block as well as the final language model head layers.

### Examples CTC Adapter

In the following we will look at how one can fine-tune adapter weights for any of the [MMS CTC checkpoints](https://huggingface.co/models?pipeline_tag=automatic-speech-recognition&other=mms&sort=downloads) in less than 1 hour.

#### Common Voice CTC Adapter

As in the examples [above](#examples-ctc), we fine-tune on the Common Voice 6 dataset in Turkish as an example.
In contrast to [`run_speech_recognition_ctc.py`](https://github.com/huggingface/transformers/blob/main/examples/pytorch/speech-recognition/run_speech_recognition_ctc.py), this script requires an additional `--target_language` argument that defines for which language or concept the adapter layers are trained. The adapter weights will then accordingly be called `adapter.{target_language}.safetensors`.

Let's run an example script. Make sure to be logged in so that your model can be directly uploaded to the Hub.

```bash
hf auth login
```

Now, let's run an example and upload it to the Hub under `wav2vec2-common_voice-tr-mms-demo`.

```sh
python run_speech_recognition_ctc_adapter.py \
    --dataset_name="common_voice" \
    --model_name_or_path="facebook/mms-1b-all" \
    --dataset_config_name="tr" \
    --output_dir="./wav2vec2-common_voice-tr-mms-demo" \
    --num_train_epochs="4" \
    --per_device_train_batch_size="32" \
    --learning_rate="1e-3" \
    --warmup_steps="100" \
    --eval_strategy="steps" \
    --text_column_name="sentence" \
    --length_column_name="input_length" \
    --save_steps="200" \
    --eval_steps="100" \
    --save_total_limit="3" \
    --target_language="tur" \
    --gradient_checkpointing \
    --chars_to_ignore , ? . ! - \; \: \" “ % ‘ ” � \
    --fp16 \
    --group_by_length \
    --do_train --do_eval \
    --push_to_hub
```

This should take less than 10 minutes on most GPUs and you should very quickly get word error rates below 27%.

For an example run, you can have a look at [`patrickvonplaten/wav2vec2-common_voice-tr-mms-demo`](https://huggingface.co/patrickvonplaten/wav2vec2-common_voice-tr-mms-demo).

If you'd like to train another adapter model with the same base model, you can simply re-use the same `--output_dir`,
but make sure to pass the `--output_dir` folder also to `--tokenizer_name_or_path` so that the vocabulary is not
overwritten but **extended**. Assuming you would like to train adapter weights on Swedish in addition to Turkish and save
the adapter weights in the same model repo, you can run:

```sh
python run_speech_recognition_ctc_adapter.py \
    --dataset_name="common_voice" \
    --model_name_or_path="facebook/mms-1b-all" \
    --dataset_config_name="sw" \
    --output_dir="./wav2vec2-common_voice-tr-mms-demo" \
    --tokenizer_name_or_path="./wav2vec2-common_voice-tr-mms-demo" \
    --num_train_epochs="4" \
    --per_device_train_batch_size="32" \
    --learning_rate="1e-3" \
    --warmup_steps="100" \
    --eval_strategy="steps" \
    --text_column_name="sentence" \
    --length_column_name="input_length" \
    --save_steps="200" \
    --eval_steps="100" \
    --save_total_limit="3" \
    --target_language="swe" \
    --gradient_checkpointing \
    --chars_to_ignore , ? . ! - \; \: \" “ % ‘ ” � \
    --fp16 \
    --group_by_length \
    --do_train --do_eval \
    --push_to_hub
```

Now you should have both `adapter.tur.safetensors` and `adapter.swe.safetensors` in the model repo,
and you can load the respective language with:

```py
model.load_adapter("tur")  # or "swe"
```

## Sequence to Sequence

The script [`run_speech_recognition_seq2seq.py`](https://github.com/huggingface/transformers/blob/main/examples/pytorch/speech-recognition/run_speech_recognition_seq2seq.py) can be used to fine-tune any [Speech Sequence-to-Sequence Model](https://huggingface.co/docs/transformers/main/en/model_doc/auto#transformers.AutoModelForSpeechSeq2Seq) for automatic speech recognition on one of the [official speech recognition datasets](https://huggingface.co/datasets?task_ids=task_ids:automatic-speech-recognition) or a custom dataset. This includes the Whisper model from OpenAI or a warm-started Speech-Encoder-Decoder Model, examples for which are included below.

### Whisper Model

We can load all components of the Whisper model directly from the pretrained checkpoint, including the pretrained model weights, feature extractor and tokenizer. We simply have to specify our fine-tuning dataset and training hyperparameters.
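
Concretely, loading those components looks roughly as follows (a small sketch; `openai/whisper-small` and the Hindi language setting simply mirror the command below):

```python
from transformers import WhisperFeatureExtractor, WhisperForConditionalGeneration, WhisperProcessor, WhisperTokenizer

checkpoint = "openai/whisper-small"

# All components come from the same pretrained checkpoint.
feature_extractor = WhisperFeatureExtractor.from_pretrained(checkpoint)
tokenizer = WhisperTokenizer.from_pretrained(checkpoint, language="hindi", task="transcribe")
processor = WhisperProcessor.from_pretrained(checkpoint, language="hindi", task="transcribe")
model = WhisperForConditionalGeneration.from_pretrained(checkpoint)
```
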

#### Single GPU Whisper Training

The following example shows how to fine-tune the [Whisper small](https://huggingface.co/openai/whisper-small) checkpoint on the Hindi subset of [Common Voice 11](https://huggingface.co/datasets/mozilla-foundation/common_voice_11_0) using a single GPU device in half-precision:

```bash
python run_speech_recognition_seq2seq.py \
    --model_name_or_path="openai/whisper-small" \
    --dataset_name="mozilla-foundation/common_voice_11_0" \
    --dataset_config_name="hi" \
    --language="hindi" \
    --task="transcribe" \
    --train_split_name="train+validation" \
    --eval_split_name="test" \
    --max_steps="5000" \
    --output_dir="./whisper-small-hi" \
    --per_device_train_batch_size="16" \
    --gradient_accumulation_steps="2" \
    --per_device_eval_batch_size="16" \
    --logging_steps="25" \
    --learning_rate="1e-5" \
    --warmup_steps="500" \
    --eval_strategy="steps" \
    --eval_steps="1000" \
    --save_strategy="steps" \
    --save_steps="1000" \
    --generation_max_length="225" \
    --preprocessing_num_workers="16" \
    --max_duration_in_seconds="30" \
    --text_column_name="sentence" \
    --freeze_feature_encoder="False" \
    --gradient_checkpointing \
    --fp16 \
    --overwrite_output_dir \
    --do_train \
    --do_eval \
    --predict_with_generate \
    --use_auth_token
```

On a single V100, training should take approximately 8 hours, with a final cross-entropy loss of **1e-4** and word error rate of **32.6%**.

If training on a different language, you should be sure to change the `language` argument. The `language` and `task`
arguments should be omitted for English speech recognition.

#### Multi GPU Whisper Training

The following example shows how to fine-tune the [Whisper small](https://huggingface.co/openai/whisper-small) checkpoint on the Hindi subset of [Common Voice 11](https://huggingface.co/datasets/mozilla-foundation/common_voice_11_0) using 2 GPU devices in half-precision:

```bash
torchrun \
    --nproc_per_node 2 run_speech_recognition_seq2seq.py \
    --model_name_or_path="openai/whisper-small" \
    --dataset_name="mozilla-foundation/common_voice_11_0" \
    --dataset_config_name="hi" \
    --language="hindi" \
    --task="transcribe" \
    --train_split_name="train+validation" \
    --eval_split_name="test" \
    --max_steps="5000" \
    --output_dir="./whisper-small-hi" \
    --per_device_train_batch_size="16" \
    --per_device_eval_batch_size="16" \
    --logging_steps="25" \
    --learning_rate="1e-5" \
    --warmup_steps="500" \
    --eval_strategy="steps" \
    --eval_steps="1000" \
    --save_strategy="steps" \
    --save_steps="1000" \
    --generation_max_length="225" \
    --preprocessing_num_workers="16" \
    --max_duration_in_seconds="30" \
    --text_column_name="sentence" \
    --freeze_feature_encoder="False" \
    --gradient_checkpointing \
    --fp16 \
    --overwrite_output_dir \
    --do_train \
    --do_eval \
    --predict_with_generate \
    --use_auth_token
```

On two V100s, training should take approximately 4 hours, with a final cross-entropy loss of **1e-4** and word error rate of **32.6%**.

### Warm-Started Speech-Encoder-Decoder Model

A very common use case is to leverage a pretrained speech encoder model, *e.g.* [Wav2Vec2](https://huggingface.co/transformers/main/model_doc/wav2vec2.html), [HuBERT](https://huggingface.co/transformers/main/model_doc/hubert.html) or [XLSR-Wav2Vec2](https://huggingface.co/transformers/main/model_doc/xlsr_wav2vec2.html), with a pretrained text decoder model, *e.g.* [BART](https://huggingface.co/docs/transformers/main/en/model_doc/bart#transformers.BartForCausalLM) or [GPT-2](https://huggingface.co/docs/transformers/main/en/model_doc/gpt2#transformers.GPT2ForCausalLM), to create a [Speech-Encoder-Decoder Model](https://huggingface.co/docs/transformers/main/en/model_doc/speech-encoder-decoder#speech-encoder-decoder-models).

By pairing a pretrained speech model with a pretrained text model, the warm-started model has prior knowledge of both the source audio and target text domains. However, the cross-attention weights between the encoder and decoder are randomly initialised. Thus, the model requires fine-tuning to learn the cross-attention weights and to align the encoder mapping with that of the decoder. We can perform this fine-tuning procedure using the example script.

As an example, let's instantiate a *Wav2Vec2-2-Bart* model with the `SpeechEncoderDecoderModel` framework. First create an empty repo on `hf.co`:

```bash
hf repo create wav2vec2-2-bart-base
git clone https://huggingface.co/<your-user-name>/wav2vec2-2-bart-base
cd wav2vec2-2-bart-base
```

Next, run the following script **inside** the repo you just cloned:

```python
from transformers import SpeechEncoderDecoderModel, AutoFeatureExtractor, AutoTokenizer, Wav2Vec2Processor

# checkpoints to leverage
encoder_id = "facebook/wav2vec2-base"
decoder_id = "facebook/bart-base"

# load and save speech-encoder-decoder model
# set some hyper-parameters for training and evaluation
model = SpeechEncoderDecoderModel.from_encoder_decoder_pretrained(encoder_id, decoder_id, encoder_add_adapter=True, encoder_feat_proj_dropout=0.0, encoder_layerdrop=0.0, max_length=200, num_beams=5)
model.config.decoder_start_token_id = model.decoder.config.bos_token_id
model.config.pad_token_id = model.decoder.config.pad_token_id
model.config.eos_token_id = model.decoder.config.eos_token_id
model.save_pretrained("./")

# load and save processor
feature_extractor = AutoFeatureExtractor.from_pretrained(encoder_id)
tokenizer = AutoTokenizer.from_pretrained(decoder_id)
processor = Wav2Vec2Processor(feature_extractor, tokenizer)
processor.save_pretrained("./")
```

Finally, we can upload all files:

```bash
git lfs install
git add . && git commit -m "upload model files" && git push
```

and link the official `run_speech_recognition_seq2seq.py` script to the folder:

```bash
ln -s $(realpath <path/to/transformers>/examples/pytorch/speech-recognition/run_speech_recognition_seq2seq.py) ./
```

Note that we have added a randomly initialized _adapter layer_ to `wav2vec2-base` with the argument `encoder_add_adapter=True`. This adapter sub-samples the output sequence of `wav2vec2-base` along the time dimension. By default, a single output vector of `wav2vec2-base` has a receptive field of *ca.* 25ms (*cf.* Section *4.2* of the [official Wav2Vec2 paper](https://huggingface.co/papers/2006.11477)), which represents a little less than a single character. On the other hand, BART makes use of a sub-word tokenizer as an input processor, so that a single hidden vector of `bart-base` represents *ca.* 4 characters. To better align the receptive field of the *Wav2Vec2* output vectors with *BART*'s hidden-states in the cross-attention mechanism, we further subsample *Wav2Vec2*'s output by a factor of 8 by adding a convolution-based adapter.
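
The factor of 8 follows from the adapter's default hyper-parameters, which can be inspected on the config (a quick sketch, assuming the defaults used by `encoder_add_adapter=True`):

```python
from transformers import Wav2Vec2Config

# With add_adapter=True, the adapter stacks `num_adapter_layers` strided convolutions,
# each reducing the time axis by a factor of `adapter_stride`.
config = Wav2Vec2Config.from_pretrained("facebook/wav2vec2-base", add_adapter=True)
print(config.num_adapter_layers, config.adapter_stride)    # 3 2 with the default settings
print(config.adapter_stride ** config.num_adapter_layers)  # 2**3 = 8x extra subsampling
```
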

Having warm-started the speech-encoder-decoder model under `<your-user-name>/wav2vec2-2-bart-base`, we can now fine-tune it on the task of speech recognition.

In the script [`run_speech_recognition_seq2seq.py`], we load the warm-started model, feature extractor, and tokenizer, process a speech recognition dataset, and subsequently make use of the [`Seq2SeqTrainer`](https://huggingface.co/docs/transformers/main/en/main_classes/trainer#transformers.Seq2SeqTrainer) to train our system.
Note that it is important to align the target transcriptions with the decoder's vocabulary. For example, the [`Librispeech`](https://huggingface.co/datasets/librispeech_asr) dataset only contains capitalized letters in the transcriptions, whereas BART was pretrained mostly on normalized text. Thus, it is recommended to add the argument `--do_lower_case` to the fine-tuning script when using a warm-started `SpeechEncoderDecoderModel`.
The model is fine-tuned on the standard cross-entropy language modeling loss for sequence-to-sequence (just like *T5* or *BART* in natural language processing).
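
To see why lower-casing helps, one can compare how the BART tokenizer splits upper- and lower-case text (an illustrative check, not part of the example script):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("facebook/bart-base")

# All-caps Librispeech-style transcriptions fragment into many rare subword pieces,
# while the lower-cased text maps onto tokens BART actually saw during pretraining.
print(tokenizer.tokenize("CHAPTER SIXTEEN I MIGHT HAVE TOLD YOU"))
print(tokenizer.tokenize("chapter sixteen i might have told you"))
```
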

---
**NOTE**

If you encounter problems with data preprocessing when setting `--preprocessing_num_workers` > 1,
you might want to set the environment variable `OMP_NUM_THREADS` to 1 as follows:

```bash
OMP_NUM_THREADS=1 python run_speech_recognition_seq2seq.py ...
```

If the environment variable is not set, the training script might freeze; see https://github.com/pytorch/audio/issues/1021#issuecomment-726915239 for details.

---

#### Single GPU Seq2Seq

The following command shows how to fine-tune the warm-started *Wav2Vec2-2-Bart* model created above on [Librispeech](https://huggingface.co/datasets/librispeech_asr) using a single GPU in half-precision.

```bash
python run_speech_recognition_seq2seq.py \
    --dataset_name="librispeech_asr" \
    --model_name_or_path="./" \
    --dataset_config_name="clean" \
    --train_split_name="train.100" \
    --eval_split_name="validation" \
    --output_dir="./" \
    --preprocessing_num_workers="16" \
    --length_column_name="input_length" \
    --overwrite_output_dir \
    --num_train_epochs="5" \
    --per_device_train_batch_size="8" \
    --per_device_eval_batch_size="8" \
    --gradient_accumulation_steps="8" \
    --learning_rate="3e-4" \
    --warmup_steps="400" \
    --eval_strategy="steps" \
    --text_column_name="text" \
    --save_steps="400" \
    --eval_steps="400" \
    --logging_steps="10" \
    --save_total_limit="1" \
    --freeze_feature_encoder \
    --gradient_checkpointing \
    --fp16 \
    --group_by_length \
    --predict_with_generate \
    --generation_max_length="40" \
    --generation_num_beams="1" \
    --do_train --do_eval \
    --do_lower_case
```

On a single V100 GPU, this script should run in *ca.* 5 hours and yield a cross-entropy loss of **0.405** and word error rate of **0.0728**.

#### Multi GPU Seq2Seq

The following command shows how to fine-tune the warm-started *Wav2Vec2-2-Bart* model created above on [Librispeech](https://huggingface.co/datasets/librispeech_asr) using 8 GPUs in half-precision.

```bash
torchrun \
    --nproc_per_node 8 run_speech_recognition_seq2seq.py \
    --dataset_name="librispeech_asr" \
    --model_name_or_path="./" \
    --dataset_config_name="clean" \
    --train_split_name="train.100" \
    --eval_split_name="validation" \
    --output_dir="./" \
    --preprocessing_num_workers="16" \
    --length_column_name="input_length" \
    --overwrite_output_dir \
    --num_train_epochs="5" \
    --per_device_train_batch_size="8" \
    --per_device_eval_batch_size="8" \
    --gradient_accumulation_steps="1" \
    --learning_rate="3e-4" \
    --warmup_steps="400" \
    --eval_strategy="steps" \
    --text_column_name="text" \
    --save_steps="400" \
    --eval_steps="400" \
    --logging_steps="10" \
    --save_total_limit="1" \
    --freeze_feature_encoder \
    --gradient_checkpointing \
    --fp16 \
    --group_by_length \
    --predict_with_generate \
    --do_train --do_eval \
    --do_lower_case
```

On 8 V100 GPUs, this script should run in *ca.* 45 minutes and yield a cross-entropy loss of **0.405** and word error rate of **0.0728**.

### Examples Seq2Seq

#### Librispeech Seq2Seq

- [Librispeech](https://huggingface.co/datasets/librispeech_asr)

| Dataset | Dataset Config | Pretrained Model | Word error rate on eval | Phoneme error rate on eval | GPU setup | Training time | Fine-tuned Model & Logs | Command to reproduce |
|----------------------------------------------------------------|---------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------|-------------------------|----------------------------|------------|---------------|-----------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| [Librispeech](https://huggingface.co/datasets/librispeech_asr) | `"clean"` - `"train.100"` | [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) and [facebook/bart-base](https://huggingface.co/facebook/bart-base) | 0.0728 | - | 8 GPU V100 | 45min | [here](https://huggingface.co/patrickvonplaten/wav2vec2-2-bart-base) | [create_model.py](https://huggingface.co/patrickvonplaten/wav2vec2-2-bart-base/blob/main/create_model.py) & [run.sh](https://huggingface.co/patrickvonplaten/wav2vec2-2-bart-base/blob/main/run_librispeech.sh) |
| [Librispeech](https://huggingface.co/datasets/librispeech_asr) | `"clean"` - `"train.100"` | [facebook/wav2vec2-large-lv60](https://huggingface.co/facebook/wav2vec2-large-lv60) and [facebook/bart-large](https://huggingface.co/facebook/bart-large) | 0.0486 | - | 8 GPU V100 | 1h20min | [here](https://huggingface.co/patrickvonplaten/wav2vec2-2-bart-large) | [create_model.py](https://huggingface.co/patrickvonplaten/wav2vec2-2-bart-large/blob/main/create_model.py) & [run.sh](https://huggingface.co/patrickvonplaten/wav2vec2-2-bart-large/blob/main/run_librispeech.sh) |

transformers/examples/pytorch/speech-recognition/requirements.txt (new file, 6 lines)
@@ -0,0 +1,6 @@
datasets[audio] >= 1.18.0
torch >= 1.5
torchaudio
librosa
jiwer
evaluate

transformers/examples/pytorch/speech-recognition/run_speech_recognition_ctc.py (new executable file, 833 lines)
@@ -0,0 +1,833 @@
|
||||
#!/usr/bin/env python
|
||||
# Copyright 2021 The HuggingFace Inc. team. All rights reserved.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
# /// script
|
||||
# dependencies = [
|
||||
# "transformers @ git+https://github.com/huggingface/transformers.git",
|
||||
# "datasets[audio] >= 1.18.0",
|
||||
# "torch >= 1.5",
|
||||
# "torchaudio",
|
||||
# "librosa",
|
||||
# "jiwer",
|
||||
# "evaluate",
|
||||
# ]
|
||||
# ///
|
||||
|
||||
"""Fine-tuning a 🤗 Transformers CTC model for automatic speech recognition"""
|
||||
|
||||
import functools
|
||||
import json
|
||||
import logging
|
||||
import os
|
||||
import re
|
||||
import sys
|
||||
import warnings
|
||||
from dataclasses import dataclass, field
|
||||
from typing import Optional, Union
|
||||
|
||||
import datasets
|
||||
import evaluate
|
||||
import torch
|
||||
from datasets import DatasetDict, load_dataset
|
||||
|
||||
import transformers
|
||||
from transformers import (
|
||||
AutoConfig,
|
||||
AutoFeatureExtractor,
|
||||
AutoModelForCTC,
|
||||
AutoProcessor,
|
||||
AutoTokenizer,
|
||||
HfArgumentParser,
|
||||
Trainer,
|
||||
TrainingArguments,
|
||||
Wav2Vec2Processor,
|
||||
set_seed,
|
||||
)
|
||||
from transformers.trainer_utils import get_last_checkpoint, is_main_process
|
||||
from transformers.utils import check_min_version, send_example_telemetry
|
||||
from transformers.utils.versions import require_version
|
||||
|
||||
|
||||
# Will error if the minimal version of Transformers is not installed. Remove at your own risks.
|
||||
check_min_version("4.57.0.dev0")
|
||||
|
||||
require_version("datasets>=1.18.0", "To fix: pip install -r examples/pytorch/speech-recognition/requirements.txt")
|
||||
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
|
||||
def list_field(default=None, metadata=None):
|
||||
return field(default_factory=lambda: default, metadata=metadata)
|
||||
|
||||
|
||||
@dataclass
|
||||
class ModelArguments:
|
||||
"""
|
||||
Arguments pertaining to which model/config/tokenizer we are going to fine-tune from.
|
||||
"""
|
||||
|
||||
model_name_or_path: str = field(
|
||||
metadata={"help": "Path to pretrained model or model identifier from huggingface.co/models"}
|
||||
)
|
||||
tokenizer_name_or_path: Optional[str] = field(
|
||||
default=None,
|
||||
metadata={"help": "Path to pretrained tokenizer or tokenizer identifier from huggingface.co/models"},
|
||||
)
|
||||
cache_dir: Optional[str] = field(
|
||||
default=None,
|
||||
metadata={"help": "Where do you want to store the pretrained models downloaded from huggingface.co"},
|
||||
)
|
||||
freeze_feature_encoder: bool = field(
|
||||
default=True, metadata={"help": "Whether to freeze the feature encoder layers of the model."}
|
||||
)
|
||||
attention_dropout: float = field(
|
||||
default=0.0, metadata={"help": "The dropout ratio for the attention probabilities."}
|
||||
)
|
||||
activation_dropout: float = field(
|
||||
default=0.0, metadata={"help": "The dropout ratio for activations inside the fully connected layer."}
|
||||
)
|
||||
feat_proj_dropout: float = field(default=0.0, metadata={"help": "The dropout ratio for the projected features."})
|
||||
hidden_dropout: float = field(
|
||||
default=0.0,
|
||||
metadata={
|
||||
"help": "The dropout probability for all fully connected layers in the embeddings, encoder, and pooler."
|
||||
},
|
||||
)
|
||||
final_dropout: float = field(
|
||||
default=0.0,
|
||||
metadata={"help": "The dropout probability for the final projection layer."},
|
||||
)
|
||||
mask_time_prob: float = field(
|
||||
default=0.05,
|
||||
metadata={
|
||||
"help": (
|
||||
"Probability of each feature vector along the time axis to be chosen as the start of the vector "
|
||||
"span to be masked. Approximately ``mask_time_prob * sequence_length // mask_time_length`` feature "
|
||||
"vectors will be masked along the time axis."
|
||||
)
|
||||
},
|
||||
)
|
||||
mask_time_length: int = field(
|
||||
default=10,
|
||||
metadata={"help": "Length of vector span to mask along the time axis."},
|
||||
)
|
||||
mask_feature_prob: float = field(
|
||||
default=0.0,
|
||||
metadata={
|
||||
"help": (
|
||||
"Probability of each feature vector along the feature axis to be chosen as the start of the vectorspan"
|
||||
" to be masked. Approximately ``mask_feature_prob * sequence_length // mask_feature_length`` feature"
|
||||
" bins will be masked along the time axis."
|
||||
)
|
||||
},
|
||||
)
|
||||
mask_feature_length: int = field(
|
||||
default=10,
|
||||
metadata={"help": "Length of vector span to mask along the feature axis."},
|
||||
)
|
||||
layerdrop: float = field(default=0.0, metadata={"help": "The LayerDrop probability."})
|
||||
ctc_loss_reduction: Optional[str] = field(
|
||||
default="mean", metadata={"help": "The way the ctc loss should be reduced. Should be one of 'mean' or 'sum'."}
|
||||
)
|
||||
ctc_zero_infinity: Optional[bool] = field(
|
||||
default=False,
|
||||
metadata={
|
||||
"help": "Whether to zero infinite losses and the associated gradients of `torch.nn.CTCLoss`. Infinite losses mainly"
|
||||
" occur when the inputs are too short to be aligned to the targets."
|
||||
},
|
||||
)
|
||||
add_adapter: Optional[bool] = field(
|
||||
default=False,
|
||||
metadata={
|
||||
"help": "Whether a convolutional attention network should be stacked on top of the Wav2Vec2Bert Encoder. Can be very"
|
||||
"useful to downsample the output length."
|
||||
},
|
||||
)
|
||||
|
||||
|
||||
@dataclass
|
||||
class DataTrainingArguments:
|
||||
"""
|
||||
Arguments pertaining to what data we are going to input our model for training and eval.
|
||||
|
||||
Using `HfArgumentParser` we can turn this class
|
||||
into argparse arguments to be able to specify them on
|
||||
the command line.
|
||||
"""
|
||||
|
||||
dataset_name: str = field(
|
||||
metadata={"help": "The configuration name of the dataset to use (via the datasets library)."}
|
||||
)
|
||||
dataset_config_name: str = field(
|
||||
default=None, metadata={"help": "The configuration name of the dataset to use (via the datasets library)."}
|
||||
)
|
||||
train_split_name: str = field(
|
||||
default="train+validation",
|
||||
metadata={
|
||||
"help": (
|
||||
"The name of the training data set split to use (via the datasets library). Defaults to "
|
||||
"'train+validation'"
|
||||
)
|
||||
},
|
||||
)
|
||||
eval_split_name: str = field(
|
||||
default="test",
|
||||
metadata={
|
||||
"help": "The name of the evaluation data set split to use (via the datasets library). Defaults to 'test'"
|
||||
},
|
||||
)
|
||||
audio_column_name: str = field(
|
||||
default="audio",
|
||||
metadata={"help": "The name of the dataset column containing the audio data. Defaults to 'audio'"},
|
||||
)
|
||||
text_column_name: str = field(
|
||||
default="text",
|
||||
metadata={"help": "The name of the dataset column containing the text data. Defaults to 'text'"},
|
||||
)
|
||||
overwrite_cache: bool = field(
|
||||
default=False, metadata={"help": "Overwrite the cached preprocessed datasets or not."}
|
||||
)
|
||||
preprocessing_num_workers: Optional[int] = field(
|
||||
default=None,
|
||||
metadata={"help": "The number of processes to use for the preprocessing."},
|
||||
)
|
||||
max_train_samples: Optional[int] = field(
|
||||
default=None,
|
||||
metadata={
|
||||
"help": (
|
||||
"For debugging purposes or quicker training, truncate the number of training examples to this "
|
||||
"value if set."
|
||||
)
|
||||
},
|
||||
)
|
||||
max_eval_samples: Optional[int] = field(
|
||||
default=None,
|
||||
metadata={
|
||||
"help": (
|
||||
"For debugging purposes or quicker training, truncate the number of validation examples to this "
|
||||
"value if set."
|
||||
)
|
||||
},
|
||||
)
|
||||
chars_to_ignore: Optional[list[str]] = list_field(
|
||||
default=None,
|
||||
metadata={"help": "A list of characters to remove from the transcripts."},
|
||||
)
|
||||
eval_metrics: list[str] = list_field(
|
||||
default=["wer"],
|
||||
metadata={"help": "A list of metrics the model should be evaluated on. E.g. `'wer cer'`"},
|
||||
)
|
||||
max_duration_in_seconds: float = field(
|
||||
default=20.0,
|
||||
metadata={
|
||||
"help": (
|
||||
"Filter audio files that are longer than `max_duration_in_seconds` seconds to"
|
||||
" 'max_duration_in_seconds`"
|
||||
)
|
||||
},
|
||||
)
|
||||
min_duration_in_seconds: float = field(
|
||||
default=0.0, metadata={"help": "Filter audio files that are shorter than `min_duration_in_seconds` seconds"}
|
||||
)
|
||||
preprocessing_only: bool = field(
|
||||
default=False,
|
||||
metadata={
|
||||
"help": (
|
||||
"Whether to only do data preprocessing and skip training. This is especially useful when data"
|
||||
" preprocessing errors out in distributed training due to timeout. In this case, one should run the"
|
||||
" preprocessing in a non-distributed setup with `preprocessing_only=True` so that the cached datasets"
|
||||
" can consequently be loaded in distributed training"
|
||||
)
|
||||
},
|
||||
)
|
||||
token: str = field(
|
||||
default=None,
|
||||
metadata={
|
||||
"help": (
|
||||
"The token to use as HTTP bearer authorization for remote files. If not specified, will use the token "
|
||||
"generated when running `hf auth login` (stored in `~/.huggingface`)."
|
||||
)
|
||||
},
|
||||
)
|
||||
trust_remote_code: bool = field(
|
||||
default=False,
|
||||
metadata={
|
||||
"help": (
|
||||
"Whether to trust the execution of code from datasets/models defined on the Hub."
|
||||
" This option should only be set to `True` for repositories you trust and in which you have read the"
|
||||
" code, as it will execute code present on the Hub on your local machine."
|
||||
)
|
||||
},
|
||||
)
|
||||
unk_token: str = field(
|
||||
default="[UNK]",
|
||||
metadata={"help": "The unk token for the tokenizer"},
|
||||
)
|
||||
pad_token: str = field(
|
||||
default="[PAD]",
|
||||
metadata={"help": "The padding token for the tokenizer"},
|
||||
)
|
||||
word_delimiter_token: str = field(
|
||||
default="|",
|
||||
metadata={"help": "The word delimiter token for the tokenizer"},
|
||||
)
|
||||
phoneme_language: Optional[str] = field(
|
||||
default=None,
|
||||
metadata={
|
||||
"help": (
|
||||
"The target language that should be used be"
|
||||
" passed to the tokenizer for tokenization. Note that"
|
||||
" this is only relevant if the model classifies the"
|
||||
" input audio to a sequence of phoneme sequences."
|
||||
)
|
||||
},
|
||||
)
|
||||
|
||||
|
||||
@dataclass
|
||||
class DataCollatorCTCWithPadding:
|
||||
"""
|
||||
Data collator that will dynamically pad the inputs received.
|
||||
Args:
|
||||
processor (:class:`~transformers.AutoProcessor`)
|
||||
The processor used for processing the data.
|
||||
padding (:obj:`bool`, :obj:`str` or :class:`~transformers.tokenization_utils_base.PaddingStrategy`, `optional`, defaults to :obj:`True`):
|
||||
Select a strategy to pad the returned sequences (according to the model's padding side and padding index)
|
||||
among:
|
||||
* :obj:`True` or :obj:`'longest'`: Pad to the longest sequence in the batch (or no padding if only a single
|
||||
              sequence is provided).
|
||||
* :obj:`'max_length'`: Pad to a maximum length specified with the argument :obj:`max_length` or to the
|
||||
maximum acceptable input length for the model if that argument is not provided.
|
||||
* :obj:`False` or :obj:`'do_not_pad'` (default): No padding (i.e., can output a batch with sequences of
|
||||
different lengths).
|
||||
max_length (:obj:`int`, `optional`):
|
||||
Maximum length of the ``input_values`` of the returned list and optionally padding length (see above).
|
||||
max_length_labels (:obj:`int`, `optional`):
|
||||
Maximum length of the ``labels`` returned list and optionally padding length (see above).
|
||||
pad_to_multiple_of (:obj:`int`, `optional`):
|
||||
If set will pad the sequence to a multiple of the provided value.
|
||||
This is especially useful to enable the use of Tensor Cores on NVIDIA hardware with compute capability >=
|
||||
7.5 (Volta).
|
||||
"""
|
||||
|
||||
processor: AutoProcessor
|
||||
padding: Union[bool, str] = "longest"
|
||||
pad_to_multiple_of: Optional[int] = None
|
||||
pad_to_multiple_of_labels: Optional[int] = None
|
||||
feature_extractor_input_name: Optional[str] = "input_values"
|
||||
|
||||
def __call__(self, features: list[dict[str, Union[list[int], torch.Tensor]]]) -> dict[str, torch.Tensor]:
|
||||
# split inputs and labels since they have to be of different lengths and need
|
||||
# different padding methods
|
||||
input_features = [
|
||||
{self.feature_extractor_input_name: feature[self.feature_extractor_input_name]} for feature in features
|
||||
]
|
||||
label_features = [{"input_ids": feature["labels"]} for feature in features]
|
||||
|
||||
batch = self.processor.pad(
|
||||
input_features,
|
||||
padding=self.padding,
|
||||
pad_to_multiple_of=self.pad_to_multiple_of,
|
||||
return_tensors="pt",
|
||||
)
|
||||
|
||||
labels_batch = self.processor.pad(
|
||||
labels=label_features,
|
||||
padding=self.padding,
|
||||
pad_to_multiple_of=self.pad_to_multiple_of_labels,
|
||||
return_tensors="pt",
|
||||
)
|
||||
|
||||
# replace padding with -100 to ignore loss correctly
|
||||
labels = labels_batch["input_ids"].masked_fill(labels_batch.attention_mask.ne(1), -100)
|
||||
|
||||
batch["labels"] = labels
|
||||
if "attention_mask" in batch:
|
||||
batch["attention_mask"] = batch["attention_mask"].to(torch.long)
|
||||
|
||||
return batch
|
||||
|
||||
|
||||
def create_vocabulary_from_data(
|
||||
datasets: DatasetDict,
|
||||
word_delimiter_token: Optional[str] = None,
|
||||
unk_token: Optional[str] = None,
|
||||
pad_token: Optional[str] = None,
|
||||
):
|
||||
# Given training and test labels create vocabulary
|
||||
def extract_all_chars(batch):
|
||||
all_text = " ".join(batch["target_text"])
|
||||
vocab = list(set(all_text))
|
||||
return {"vocab": [vocab], "all_text": [all_text]}
|
||||
|
||||
vocabs = datasets.map(
|
||||
extract_all_chars,
|
||||
batched=True,
|
||||
batch_size=-1,
|
||||
keep_in_memory=True,
|
||||
remove_columns=datasets["train"].column_names,
|
||||
)
|
||||
|
||||
# take union of all unique characters in each dataset
|
||||
vocab_set = functools.reduce(
|
||||
lambda vocab_1, vocab_2: set(vocab_1["vocab"][0]) | set(vocab_2["vocab"][0]), vocabs.values()
|
||||
)
|
||||
|
||||
vocab_dict = {v: k for k, v in enumerate(sorted(vocab_set))}
|
||||
|
||||
# replace white space with delimiter token
|
||||
if word_delimiter_token is not None:
|
||||
vocab_dict[word_delimiter_token] = vocab_dict[" "]
|
||||
del vocab_dict[" "]
|
||||
|
||||
# add unk and pad token
|
||||
if unk_token is not None:
|
||||
vocab_dict[unk_token] = len(vocab_dict)
|
||||
|
||||
if pad_token is not None:
|
||||
vocab_dict[pad_token] = len(vocab_dict)
|
||||
|
||||
return vocab_dict
|
||||
|
||||
|
||||
def main():
|
||||
# See all possible arguments in src/transformers/training_args.py
|
||||
# or by passing the --help flag to this script.
|
||||
# We now keep distinct sets of args, for a cleaner separation of concerns.
|
||||
|
||||
parser = HfArgumentParser((ModelArguments, DataTrainingArguments, TrainingArguments))
|
||||
if len(sys.argv) == 2 and sys.argv[1].endswith(".json"):
|
||||
# If we pass only one argument to the script and it's the path to a json file,
|
||||
# let's parse it to get our arguments.
|
||||
model_args, data_args, training_args = parser.parse_json_file(json_file=os.path.abspath(sys.argv[1]))
|
||||
else:
|
||||
model_args, data_args, training_args = parser.parse_args_into_dataclasses()
|
||||
|
||||
# Sending telemetry. Tracking the example usage helps us better allocate resources to maintain them. The
|
||||
# information sent is the one passed as arguments along with your Python/PyTorch versions.
|
||||
send_example_telemetry("run_speech_recognition_ctc", model_args, data_args)
|
||||
|
||||
# Detecting last checkpoint.
|
||||
last_checkpoint = None
|
||||
if os.path.isdir(training_args.output_dir) and training_args.do_train and not training_args.overwrite_output_dir:
|
||||
last_checkpoint = get_last_checkpoint(training_args.output_dir)
|
||||
if last_checkpoint is None and len(os.listdir(training_args.output_dir)) > 0:
|
||||
raise ValueError(
|
||||
f"Output directory ({training_args.output_dir}) already exists and is not empty. "
|
||||
"Use --overwrite_output_dir to overcome."
|
||||
)
|
||||
elif last_checkpoint is not None:
|
||||
logger.info(
|
||||
f"Checkpoint detected, resuming training at {last_checkpoint}. To avoid this behavior, change "
|
||||
"the `--output_dir` or add `--overwrite_output_dir` to train from scratch."
|
||||
)
|
||||
|
||||
# Setup logging
|
||||
logging.basicConfig(
|
||||
format="%(asctime)s - %(levelname)s - %(name)s - %(message)s",
|
||||
datefmt="%m/%d/%Y %H:%M:%S",
|
||||
handlers=[logging.StreamHandler(sys.stdout)],
|
||||
)
|
||||
logger.setLevel(logging.INFO if is_main_process(training_args.local_rank) else logging.WARN)
|
||||
|
||||
# Log on each process the small summary:
|
||||
logger.warning(
|
||||
f"Process rank: {training_args.local_rank}, device: {training_args.device}, n_gpu: {training_args.n_gpu}, "
|
||||
f"distributed training: {training_args.parallel_mode.value == 'distributed'}, 16-bits training: {training_args.fp16}"
|
||||
)
|
||||
# Set the verbosity to info of the Transformers logger (on main process only):
|
||||
if is_main_process(training_args.local_rank):
|
||||
transformers.utils.logging.set_verbosity_info()
|
||||
logger.info("Training/evaluation parameters %s", training_args)
|
||||
|
||||
# Set seed before initializing model.
|
||||
set_seed(training_args.seed)
|
||||
|
||||
# 1. First, let's load the dataset
|
||||
raw_datasets = DatasetDict()
|
||||
|
||||
if training_args.do_train:
|
||||
raw_datasets["train"] = load_dataset(
|
||||
data_args.dataset_name,
|
||||
data_args.dataset_config_name,
|
||||
split=data_args.train_split_name,
|
||||
token=data_args.token,
|
||||
trust_remote_code=data_args.trust_remote_code,
|
||||
)
|
||||
|
||||
if data_args.audio_column_name not in raw_datasets["train"].column_names:
|
||||
raise ValueError(
|
||||
f"--audio_column_name '{data_args.audio_column_name}' not found in dataset '{data_args.dataset_name}'."
|
||||
" Make sure to set `--audio_column_name` to the correct audio column - one of"
|
||||
f" {', '.join(raw_datasets['train'].column_names)}."
|
||||
)
|
||||
|
||||
if data_args.text_column_name not in raw_datasets["train"].column_names:
|
||||
raise ValueError(
|
||||
f"--text_column_name {data_args.text_column_name} not found in dataset '{data_args.dataset_name}'. "
|
||||
"Make sure to set `--text_column_name` to the correct text column - one of "
|
||||
f"{', '.join(raw_datasets['train'].column_names)}."
|
||||
)
|
||||
|
||||
if data_args.max_train_samples is not None:
|
||||
raw_datasets["train"] = raw_datasets["train"].select(range(data_args.max_train_samples))
|
||||
|
||||
if training_args.do_eval:
|
||||
raw_datasets["eval"] = load_dataset(
|
||||
data_args.dataset_name,
|
||||
data_args.dataset_config_name,
|
||||
split=data_args.eval_split_name,
|
||||
token=data_args.token,
|
||||
trust_remote_code=data_args.trust_remote_code,
|
||||
)
|
||||
|
||||
if data_args.max_eval_samples is not None:
|
||||
raw_datasets["eval"] = raw_datasets["eval"].select(range(data_args.max_eval_samples))
|
||||
|
||||
# 2. We remove some special characters from the datasets
|
||||
# that make training complicated and do not help in transcribing the speech
|
||||
# E.g. characters, such as `,` and `.` do not really have an acoustic characteristic
|
||||
# that could be easily picked up by the model
|
||||
chars_to_ignore_regex = (
|
||||
f"[{''.join(data_args.chars_to_ignore)}]" if data_args.chars_to_ignore is not None else None
|
||||
)
|
||||
text_column_name = data_args.text_column_name
|
||||
|
||||
def remove_special_characters(batch):
|
||||
if chars_to_ignore_regex is not None:
|
||||
batch["target_text"] = re.sub(chars_to_ignore_regex, "", batch[text_column_name]).lower() + " "
|
||||
else:
|
||||
batch["target_text"] = batch[text_column_name].lower() + " "
|
||||
return batch
|
||||
|
||||
with training_args.main_process_first(desc="dataset map special characters removal"):
|
||||
raw_datasets = raw_datasets.map(
|
||||
remove_special_characters,
|
||||
remove_columns=[text_column_name],
|
||||
desc="remove special characters from datasets",
|
||||
)
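    # Hedged example: with `--chars_to_ignore , ? . !` the regex above becomes
    # "[,?.!]" and a transcript like "Hello, world!" is normalized to
    # "hello world " before the character vocabulary is built.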
|
||||
|
||||
# save special tokens for tokenizer
|
||||
word_delimiter_token = data_args.word_delimiter_token
|
||||
unk_token = data_args.unk_token
|
||||
pad_token = data_args.pad_token
|
||||
|
||||
# 3. Next, let's load the config as we might need it to create
|
||||
# the tokenizer
|
||||
# load config
|
||||
config = AutoConfig.from_pretrained(
|
||||
model_args.model_name_or_path,
|
||||
cache_dir=model_args.cache_dir,
|
||||
token=data_args.token,
|
||||
trust_remote_code=data_args.trust_remote_code,
|
||||
)
|
||||
|
||||
# 4. Next, if no tokenizer file is defined,
|
||||
# we create the vocabulary of the model by extracting all unique characters from
|
||||
# the training and evaluation datasets
|
||||
# We need to make sure that only first rank saves vocabulary
|
||||
# make sure all processes wait until vocab is created
|
||||
tokenizer_name_or_path = model_args.tokenizer_name_or_path
|
||||
tokenizer_kwargs = {}
|
||||
if tokenizer_name_or_path is None:
|
||||
# save vocab in training output dir
|
||||
tokenizer_name_or_path = training_args.output_dir
|
||||
|
||||
vocab_file = os.path.join(tokenizer_name_or_path, "vocab.json")
|
||||
|
||||
with training_args.main_process_first():
|
||||
if training_args.overwrite_output_dir and os.path.isfile(vocab_file):
|
||||
try:
|
||||
os.remove(vocab_file)
|
||||
except OSError:
|
||||
                # in shared file-systems it might be the case that
                # two processes try to delete the vocab file at the same time
pass
|
||||
|
||||
with training_args.main_process_first(desc="dataset map vocabulary creation"):
|
||||
if not os.path.isfile(vocab_file):
|
||||
os.makedirs(tokenizer_name_or_path, exist_ok=True)
|
||||
vocab_dict = create_vocabulary_from_data(
|
||||
raw_datasets,
|
||||
word_delimiter_token=word_delimiter_token,
|
||||
unk_token=unk_token,
|
||||
pad_token=pad_token,
|
||||
)
|
||||
|
||||
# save vocab dict to be loaded into tokenizer
|
||||
with open(vocab_file, "w") as file:
|
||||
json.dump(vocab_dict, file)
|
||||
|
||||
# if tokenizer has just been created
|
||||
# it is defined by `tokenizer_class` if present in config else by `model_type`
|
||||
tokenizer_kwargs = {
|
||||
"config": config if config.tokenizer_class is not None else None,
|
||||
"tokenizer_type": config.model_type if config.tokenizer_class is None else None,
|
||||
"unk_token": unk_token,
|
||||
"pad_token": pad_token,
|
||||
"word_delimiter_token": word_delimiter_token,
|
||||
}
|
||||
|
||||
# 5. Now we can instantiate the feature extractor, tokenizer and model
|
||||
# Note for distributed training, the .from_pretrained methods guarantee that only
|
||||
# one local process can concurrently download model & vocab.
|
||||
|
||||
# load feature_extractor and tokenizer
|
||||
tokenizer = AutoTokenizer.from_pretrained(
|
||||
tokenizer_name_or_path,
|
||||
token=data_args.token,
|
||||
trust_remote_code=data_args.trust_remote_code,
|
||||
**tokenizer_kwargs,
|
||||
)
|
||||
feature_extractor = AutoFeatureExtractor.from_pretrained(
|
||||
model_args.model_name_or_path,
|
||||
cache_dir=model_args.cache_dir,
|
||||
token=data_args.token,
|
||||
trust_remote_code=data_args.trust_remote_code,
|
||||
)
|
||||
|
||||
# adapt config
|
||||
config.update(
|
||||
{
|
||||
"feat_proj_dropout": model_args.feat_proj_dropout,
|
||||
"attention_dropout": model_args.attention_dropout,
|
||||
"hidden_dropout": model_args.hidden_dropout,
|
||||
"final_dropout": model_args.final_dropout,
|
||||
"mask_time_prob": model_args.mask_time_prob,
|
||||
"mask_time_length": model_args.mask_time_length,
|
||||
"mask_feature_prob": model_args.mask_feature_prob,
|
||||
"mask_feature_length": model_args.mask_feature_length,
|
||||
"gradient_checkpointing": training_args.gradient_checkpointing,
|
||||
"layerdrop": model_args.layerdrop,
|
||||
"ctc_loss_reduction": model_args.ctc_loss_reduction,
|
||||
"ctc_zero_infinity": model_args.ctc_zero_infinity,
|
||||
"pad_token_id": tokenizer.pad_token_id,
|
||||
"vocab_size": len(tokenizer),
|
||||
"activation_dropout": model_args.activation_dropout,
|
||||
"add_adapter": model_args.add_adapter,
|
||||
}
|
||||
)
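    # The CTC head is sized to `len(tokenizer)` output classes, so `vocab_size` has to
    # match the freshly created character vocabulary; for Wav2Vec2-style CTC models the
    # pad token id also serves as the CTC blank token.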
|
||||
|
||||
# create model
|
||||
model = AutoModelForCTC.from_pretrained(
|
||||
model_args.model_name_or_path,
|
||||
cache_dir=model_args.cache_dir,
|
||||
config=config,
|
||||
token=data_args.token,
|
||||
trust_remote_code=data_args.trust_remote_code,
|
||||
)
|
||||
|
||||
    # freeze the convolutional feature encoder
|
||||
if model_args.freeze_feature_encoder:
|
||||
model.freeze_feature_encoder()
|
||||
|
||||
# 6. Now we preprocess the datasets including loading the audio, resampling and normalization
|
||||
# Thankfully, `datasets` takes care of automatically loading and resampling the audio,
|
||||
# so that we just need to set the correct target sampling rate and normalize the input
|
||||
# via the `feature_extractor`
|
||||
|
||||
# make sure that dataset decodes audio with correct sampling rate
|
||||
dataset_sampling_rate = next(iter(raw_datasets.values())).features[data_args.audio_column_name].sampling_rate
|
||||
if dataset_sampling_rate != feature_extractor.sampling_rate:
|
||||
raw_datasets = raw_datasets.cast_column(
|
||||
data_args.audio_column_name, datasets.features.Audio(sampling_rate=feature_extractor.sampling_rate)
|
||||
)
|
||||
|
||||
# derive max & min input length for sample rate & max duration
|
||||
max_input_length = data_args.max_duration_in_seconds * feature_extractor.sampling_rate
|
||||
min_input_length = data_args.min_duration_in_seconds * feature_extractor.sampling_rate
|
||||
audio_column_name = data_args.audio_column_name
|
||||
num_workers = data_args.preprocessing_num_workers
|
||||
feature_extractor_input_name = feature_extractor.model_input_names[0]
|
||||
|
||||
# `phoneme_language` is only relevant if the model is fine-tuned on phoneme classification
|
||||
phoneme_language = data_args.phoneme_language
|
||||
|
||||
# Preprocessing the datasets.
|
||||
# We need to read the audio files as arrays and tokenize the targets.
|
||||
def prepare_dataset(batch):
|
||||
# load audio
|
||||
sample = batch[audio_column_name]
|
||||
|
||||
inputs = feature_extractor(sample["array"], sampling_rate=sample["sampling_rate"])
|
||||
batch[feature_extractor_input_name] = getattr(inputs, feature_extractor_input_name)[0]
|
||||
# take length of raw audio waveform
|
||||
batch["input_length"] = len(sample["array"].squeeze())
|
||||
|
||||
# encode targets
|
||||
additional_kwargs = {}
|
||||
if phoneme_language is not None:
|
||||
additional_kwargs["phonemizer_lang"] = phoneme_language
|
||||
|
||||
batch["labels"] = tokenizer(batch["target_text"], **additional_kwargs).input_ids
|
||||
return batch
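        # Hedged sketch of one processed example (illustrative values, assuming a
        # Wav2Vec2-style extractor whose first input name is "input_values"):
        #   {"input_values": [...], "input_length": 36000,  # ~2.25 s at 16 kHz
        #    "labels": [7, 14, 0, 3, ...]}                   # character ids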
|
||||
|
||||
with training_args.main_process_first(desc="dataset map preprocessing"):
|
||||
vectorized_datasets = raw_datasets.map(
|
||||
prepare_dataset,
|
||||
remove_columns=next(iter(raw_datasets.values())).column_names,
|
||||
num_proc=num_workers,
|
||||
desc="preprocess datasets",
|
||||
)
|
||||
|
||||
def is_audio_in_length_range(length):
|
||||
return length > min_input_length and length < max_input_length
|
||||
|
||||
    # filter out data that is shorter than min_input_length or longer than max_input_length
|
||||
vectorized_datasets = vectorized_datasets.filter(
|
||||
is_audio_in_length_range,
|
||||
num_proc=num_workers,
|
||||
input_columns=["input_length"],
|
||||
)
|
||||
|
||||
# 7. Next, we can prepare the training.
|
||||
# Let's use word error rate (WER) as our evaluation metric,
|
||||
# instantiate a data collator and the trainer
|
||||
|
||||
# Define evaluation metrics during training, *i.e.* word error rate, character error rate
|
||||
eval_metrics = {metric: evaluate.load(metric, cache_dir=model_args.cache_dir) for metric in data_args.eval_metrics}
|
||||
|
||||
    # for large datasets it is advised to run the preprocessing on a
    # single machine first with ``args.preprocessing_only`` since there will most likely
    # be a timeout when running the script in distributed mode.
    # In a second step ``args.preprocessing_only`` can then be set to `False` to load the
    # cached dataset.
|
||||
if data_args.preprocessing_only:
|
||||
logger.info(f"Data preprocessing finished. Files cached at {vectorized_datasets.cache_files}")
|
||||
return
|
||||
|
||||
# For languages like Chinese with large vocabulary size, we need to discard logits
|
||||
# and only keep the argmax, otherwise we run out of memory during evaluation.
|
||||
def preprocess_logits_for_metrics(logits, labels):
|
||||
pred_ids = torch.argmax(logits, dim=-1)
|
||||
return pred_ids, labels
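    # Keeping only the argmax ids shrinks the tensors gathered during evaluation from
    # (batch, time, vocab_size) to (batch, time), which avoids the memory blow-up
    # mentioned above for large vocabularies.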
|
||||
|
||||
def compute_metrics(pred):
|
||||
pred_ids = pred.predictions[0]
|
||||
pred.label_ids[pred.label_ids == -100] = tokenizer.pad_token_id
|
||||
|
||||
pred_str = tokenizer.batch_decode(pred_ids)
|
||||
# we do not want to group tokens when computing the metrics
|
||||
label_str = tokenizer.batch_decode(pred.label_ids, group_tokens=False)
|
||||
|
||||
metrics = {k: v.compute(predictions=pred_str, references=label_str) for k, v in eval_metrics.items()}
|
||||
|
||||
return metrics
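    # With the default `--eval_metrics wer` the returned dict looks like {"wer": 0.21}
    # (illustrative value; lower is better); adding `cer` also reports a character
    # error rate.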
|
||||
|
||||
# Now save everything to be able to create a single processor later
|
||||
# make sure all processes wait until data is saved
|
||||
with training_args.main_process_first():
|
||||
# only the main process saves them
|
||||
if is_main_process(training_args.local_rank):
|
||||
# save feature extractor, tokenizer and config
|
||||
feature_extractor.save_pretrained(training_args.output_dir)
|
||||
tokenizer.save_pretrained(training_args.output_dir)
|
||||
config.save_pretrained(training_args.output_dir)
|
||||
|
||||
try:
|
||||
processor = AutoProcessor.from_pretrained(training_args.output_dir)
|
||||
except (OSError, KeyError):
|
||||
warnings.warn(
|
||||
"Loading a processor from a feature extractor config that does not"
|
||||
" include a `processor_class` attribute is deprecated and will be removed in v5. Please add the following "
|
||||
" attribute to your `preprocessor_config.json` file to suppress this warning: "
|
||||
" `'processor_class': 'Wav2Vec2Processor'`",
|
||||
FutureWarning,
|
||||
)
|
||||
processor = Wav2Vec2Processor.from_pretrained(training_args.output_dir)
|
||||
|
||||
# Instantiate custom data collator
|
||||
data_collator = DataCollatorCTCWithPadding(
|
||||
processor=processor, feature_extractor_input_name=feature_extractor_input_name
|
||||
)
|
||||
|
||||
# Initialize Trainer
|
||||
trainer = Trainer(
|
||||
model=model,
|
||||
data_collator=data_collator,
|
||||
args=training_args,
|
||||
compute_metrics=compute_metrics,
|
||||
train_dataset=vectorized_datasets["train"] if training_args.do_train else None,
|
||||
eval_dataset=vectorized_datasets["eval"] if training_args.do_eval else None,
|
||||
processing_class=processor,
|
||||
preprocess_logits_for_metrics=preprocess_logits_for_metrics,
|
||||
)
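    # Passing the processor as `processing_class` lets the Trainer save it together with
    # model checkpoints, so the exported folder can typically be reloaded directly with
    # `AutoProcessor.from_pretrained(training_args.output_dir)`.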
|
||||
|
||||
# 8. Finally, we can start training
|
||||
|
||||
# Training
|
||||
if training_args.do_train:
|
||||
        # use the last checkpoint if one exists
|
||||
if last_checkpoint is not None:
|
||||
checkpoint = last_checkpoint
|
||||
elif os.path.isdir(model_args.model_name_or_path):
|
||||
checkpoint = model_args.model_name_or_path
|
||||
else:
|
||||
checkpoint = None
|
||||
|
||||
train_result = trainer.train(resume_from_checkpoint=checkpoint)
|
||||
trainer.save_model()
|
||||
|
||||
metrics = train_result.metrics
|
||||
max_train_samples = (
|
||||
data_args.max_train_samples
|
||||
if data_args.max_train_samples is not None
|
||||
else len(vectorized_datasets["train"])
|
||||
)
|
||||
metrics["train_samples"] = min(max_train_samples, len(vectorized_datasets["train"]))
|
||||
|
||||
trainer.log_metrics("train", metrics)
|
||||
trainer.save_metrics("train", metrics)
|
||||
trainer.save_state()
|
||||
|
||||
# Evaluation
|
||||
results = {}
|
||||
if training_args.do_eval:
|
||||
logger.info("*** Evaluate ***")
|
||||
metrics = trainer.evaluate()
|
||||
max_eval_samples = (
|
||||
data_args.max_eval_samples if data_args.max_eval_samples is not None else len(vectorized_datasets["eval"])
|
||||
)
|
||||
metrics["eval_samples"] = min(max_eval_samples, len(vectorized_datasets["eval"]))
|
||||
|
||||
trainer.log_metrics("eval", metrics)
|
||||
trainer.save_metrics("eval", metrics)
|
||||
|
||||
# Write model card and (optionally) push to hub
|
||||
config_name = data_args.dataset_config_name if data_args.dataset_config_name is not None else "na"
|
||||
kwargs = {
|
||||
"finetuned_from": model_args.model_name_or_path,
|
||||
"tasks": "automatic-speech-recognition",
|
||||
"tags": ["automatic-speech-recognition", data_args.dataset_name],
|
||||
"dataset_args": (
|
||||
f"Config: {config_name}, Training split: {data_args.train_split_name}, Eval split:"
|
||||
f" {data_args.eval_split_name}"
|
||||
),
|
||||
"dataset": f"{data_args.dataset_name.upper()} - {config_name.upper()}",
|
||||
}
|
||||
if "common_voice" in data_args.dataset_name:
|
||||
kwargs["language"] = config_name
|
||||
|
||||
if training_args.push_to_hub:
|
||||
trainer.push_to_hub(**kwargs)
|
||||
else:
|
||||
trainer.create_model_card(**kwargs)
|
||||
|
||||
return results
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
main()
|
||||
@@ -0,0 +1,834 @@
|
||||
#!/usr/bin/env python
|
||||
# Copyright 2023 The HuggingFace Inc. team. All rights reserved.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
# /// script
|
||||
# dependencies = [
|
||||
# "transformers @ git+https://github.com/huggingface/transformers.git",
|
||||
# "datasets[audio] >= 1.18.0",
|
||||
# "torch >= 1.5",
|
||||
# "torchaudio",
|
||||
# "librosa",
|
||||
# "jiwer",
|
||||
# "evaluate",
|
||||
# ]
|
||||
# ///
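# The block above is inline script metadata (PEP 723); tools such as `uv run` can read
# it to build an environment with these dependencies before executing the script, e.g.
# `uv run <this_script>.py --help` (illustrative invocation).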
|
||||
|
||||
"""Fine-tuning a 🤗 Transformers CTC adapter model for automatic speech recognition"""
|
||||
|
||||
import functools
|
||||
import json
|
||||
import logging
|
||||
import os
|
||||
import re
|
||||
import sys
|
||||
import warnings
|
||||
from dataclasses import dataclass, field
|
||||
from typing import Optional, Union
|
||||
|
||||
import datasets
|
||||
import evaluate
|
||||
import numpy as np
|
||||
import torch
|
||||
from datasets import DatasetDict, load_dataset
|
||||
from safetensors.torch import save_file as safe_save_file
|
||||
|
||||
import transformers
|
||||
from transformers import (
|
||||
AutoConfig,
|
||||
AutoFeatureExtractor,
|
||||
AutoModelForCTC,
|
||||
AutoProcessor,
|
||||
AutoTokenizer,
|
||||
HfArgumentParser,
|
||||
Trainer,
|
||||
TrainingArguments,
|
||||
Wav2Vec2Processor,
|
||||
set_seed,
|
||||
)
|
||||
from transformers.models.wav2vec2.modeling_wav2vec2 import WAV2VEC2_ADAPTER_SAFE_FILE
|
||||
from transformers.trainer_utils import get_last_checkpoint, is_main_process
|
||||
from transformers.utils import check_min_version, send_example_telemetry
|
||||
from transformers.utils.versions import require_version
|
||||
|
||||
|
||||
# Will error if the minimal version of Transformers is not installed. Remove at your own risks.
|
||||
check_min_version("4.57.0.dev0")
|
||||
|
||||
require_version("datasets>=1.18.0", "To fix: pip install -r examples/pytorch/speech-recognition/requirements.txt")
|
||||
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
|
||||
def list_field(default=None, metadata=None):
|
||||
return field(default_factory=lambda: default, metadata=metadata)
|
||||
|
||||
|
||||
@dataclass
|
||||
class ModelArguments:
|
||||
"""
|
||||
Arguments pertaining to which model/config/tokenizer we are going to fine-tune from.
|
||||
"""
|
||||
|
||||
model_name_or_path: str = field(
|
||||
metadata={"help": "Path to pretrained model or model identifier from huggingface.co/models"}
|
||||
)
|
||||
tokenizer_name_or_path: Optional[str] = field(
|
||||
default=None,
|
||||
metadata={"help": "Path to pretrained tokenizer or tokenizer identifier from huggingface.co/models"},
|
||||
)
|
||||
cache_dir: Optional[str] = field(
|
||||
default=None,
|
||||
metadata={"help": "Where do you want to store the pretrained models downloaded from huggingface.co"},
|
||||
)
|
||||
final_dropout: float = field(
|
||||
default=0.0,
|
||||
metadata={"help": "The dropout probability for the final projection layer."},
|
||||
)
|
||||
mask_time_prob: float = field(
|
||||
default=0.05,
|
||||
metadata={
|
||||
"help": (
|
||||
"Probability of each feature vector along the time axis to be chosen as the start of the vector "
|
||||
"span to be masked. Approximately ``mask_time_prob * sequence_length // mask_time_length`` feature "
|
||||
"vectors will be masked along the time axis."
|
||||
)
|
||||
},
|
||||
)
|
||||
mask_time_length: int = field(
|
||||
default=10,
|
||||
metadata={"help": "Length of vector span to mask along the time axis."},
|
||||
)
|
||||
mask_feature_prob: float = field(
|
||||
default=0.0,
|
||||
metadata={
|
||||
"help": (
|
||||
"Probability of each feature vector along the feature axis to be chosen as the start of the vectorspan"
|
||||
" to be masked. Approximately ``mask_feature_prob * sequence_length // mask_feature_length`` feature"
|
||||
" bins will be masked along the time axis."
|
||||
)
|
||||
},
|
||||
)
|
||||
mask_feature_length: int = field(
|
||||
default=10,
|
||||
metadata={"help": "Length of vector span to mask along the feature axis."},
|
||||
)
|
||||
layerdrop: float = field(default=0.0, metadata={"help": "The LayerDrop probability."})
|
||||
ctc_loss_reduction: Optional[str] = field(
|
||||
default="mean", metadata={"help": "The way the ctc loss should be reduced. Should be one of 'mean' or 'sum'."}
|
||||
)
|
||||
adapter_attn_dim: int = field(
|
||||
default=16,
|
||||
metadata={
|
||||
"help": "The hidden dimension of the adapter layers that will be randomly initialized and trained. The higher the dimension, the more capacity is given to the adapter weights. Note that only the adapter weights are fine-tuned."
|
||||
},
|
||||
)
|
||||
|
||||
|
||||
@dataclass
|
||||
class DataTrainingArguments:
|
||||
"""
|
||||
Arguments pertaining to what data we are going to input our model for training and eval.
|
||||
|
||||
Using `HfArgumentParser` we can turn this class
|
||||
into argparse arguments to be able to specify them on
|
||||
the command line.
|
||||
"""
|
||||
|
||||
dataset_name: str = field(
|
||||
metadata={"help": "The configuration name of the dataset to use (via the datasets library)."}
|
||||
)
|
||||
target_language: Optional[str] = field(
|
||||
metadata={
|
||||
"help": (
|
||||
"The target language on which the adapter attention layers"
|
||||
" should be trained on in ISO 693-3 code, e.g. `tur` for Turkish"
|
||||
" Wav2Vec2's MMS ISO codes can be looked up here: https://dl.fbaipublicfiles.com/mms/misc/language_coverage_mms.html"
|
||||
" If you are not training the adapter layers on a language, simply choose"
|
||||
" another acronym that fits your data."
|
||||
)
|
||||
},
|
||||
)
|
||||
dataset_config_name: str = field(
|
||||
default=None, metadata={"help": "The configuration name of the dataset to use (via the datasets library)."}
|
||||
)
|
||||
train_split_name: str = field(
|
||||
default="train+validation",
|
||||
metadata={
|
||||
"help": (
|
||||
"The name of the training data set split to use (via the datasets library). Defaults to "
|
||||
"'train+validation'"
|
||||
)
|
||||
},
|
||||
)
|
||||
eval_split_name: str = field(
|
||||
default="test",
|
||||
metadata={
|
||||
"help": "The name of the evaluation data set split to use (via the datasets library). Defaults to 'test'"
|
||||
},
|
||||
)
|
||||
audio_column_name: str = field(
|
||||
default="audio",
|
||||
metadata={"help": "The name of the dataset column containing the audio data. Defaults to 'audio'"},
|
||||
)
|
||||
text_column_name: str = field(
|
||||
default="text",
|
||||
metadata={"help": "The name of the dataset column containing the text data. Defaults to 'text'"},
|
||||
)
|
||||
overwrite_cache: bool = field(
|
||||
default=False, metadata={"help": "Overwrite the cached preprocessed datasets or not."}
|
||||
)
|
||||
preprocessing_num_workers: Optional[int] = field(
|
||||
default=None,
|
||||
metadata={"help": "The number of processes to use for the preprocessing."},
|
||||
)
|
||||
max_train_samples: Optional[int] = field(
|
||||
default=None,
|
||||
metadata={
|
||||
"help": (
|
||||
"For debugging purposes or quicker training, truncate the number of training examples to this "
|
||||
"value if set."
|
||||
)
|
||||
},
|
||||
)
|
||||
max_eval_samples: Optional[int] = field(
|
||||
default=None,
|
||||
metadata={
|
||||
"help": (
|
||||
"For debugging purposes or quicker training, truncate the number of validation examples to this "
|
||||
"value if set."
|
||||
)
|
||||
},
|
||||
)
|
||||
chars_to_ignore: Optional[list[str]] = list_field(
|
||||
default=None,
|
||||
metadata={"help": "A list of characters to remove from the transcripts."},
|
||||
)
|
||||
eval_metrics: list[str] = list_field(
|
||||
default=["wer"],
|
||||
metadata={"help": "A list of metrics the model should be evaluated on. E.g. `'wer cer'`"},
|
||||
)
|
||||
max_duration_in_seconds: float = field(
|
||||
default=20.0,
|
||||
metadata={
|
||||
"help": (
|
||||
"Filter audio files that are longer than `max_duration_in_seconds` seconds to"
|
||||
" 'max_duration_in_seconds`"
|
||||
)
|
||||
},
|
||||
)
|
||||
min_duration_in_seconds: float = field(
|
||||
default=0.0, metadata={"help": "Filter audio files that are shorter than `min_duration_in_seconds` seconds"}
|
||||
)
|
||||
preprocessing_only: bool = field(
|
||||
default=False,
|
||||
metadata={
|
||||
"help": (
|
||||
"Whether to only do data preprocessing and skip training. This is especially useful when data"
|
||||
" preprocessing errors out in distributed training due to timeout. In this case, one should run the"
|
||||
" preprocessing in a non-distributed setup with `preprocessing_only=True` so that the cached datasets"
|
||||
" can consequently be loaded in distributed training"
|
||||
)
|
||||
},
|
||||
)
|
||||
token: str = field(
|
||||
default=None,
|
||||
metadata={
|
||||
"help": (
|
||||
"The token to use as HTTP bearer authorization for remote files. If not specified, will use the token "
|
||||
"generated when running `hf auth login` (stored in `~/.huggingface`)."
|
||||
)
|
||||
},
|
||||
)
|
||||
trust_remote_code: bool = field(
|
||||
default=False,
|
||||
metadata={
|
||||
"help": (
|
||||
"Whether to trust the execution of code from datasets/models defined on the Hub."
|
||||
" This option should only be set to `True` for repositories you trust and in which you have read the"
|
||||
" code, as it will execute code present on the Hub on your local machine."
|
||||
)
|
||||
},
|
||||
)
|
||||
unk_token: str = field(
|
||||
default="[UNK]",
|
||||
metadata={"help": "The unk token for the tokenizer"},
|
||||
)
|
||||
pad_token: str = field(
|
||||
default="[PAD]",
|
||||
metadata={"help": "The padding token for the tokenizer"},
|
||||
)
|
||||
word_delimiter_token: str = field(
|
||||
default="|",
|
||||
metadata={"help": "The word delimiter token for the tokenizer"},
|
||||
)
|
||||
overwrite_lang_vocab: bool = field(
|
||||
default=False,
|
||||
metadata={"help": ("If :obj:`True`, will overwrite existing `target_language` vocabulary of tokenizer.")},
|
||||
)
|
||||
|
||||
|
||||
@dataclass
|
||||
class DataCollatorCTCWithPadding:
|
||||
"""
|
||||
Data collator that will dynamically pad the inputs received.
|
||||
Args:
|
||||
processor (:class:`~transformers.AutoProcessor`)
|
||||
The processor used for processing the data.
|
||||
padding (:obj:`bool`, :obj:`str` or :class:`~transformers.tokenization_utils_base.PaddingStrategy`, `optional`, defaults to :obj:`True`):
|
||||
Select a strategy to pad the returned sequences (according to the model's padding side and padding index)
|
||||
among:
|
||||
* :obj:`True` or :obj:`'longest'`: Pad to the longest sequence in the batch (or no padding if only a single
|
||||
              sequence is provided).
|
||||
* :obj:`'max_length'`: Pad to a maximum length specified with the argument :obj:`max_length` or to the
|
||||
maximum acceptable input length for the model if that argument is not provided.
|
||||
            * :obj:`False` or :obj:`'do_not_pad'`: No padding (i.e., can output a batch with sequences of
|
||||
different lengths).
|
||||
max_length (:obj:`int`, `optional`):
|
||||
Maximum length of the ``input_values`` of the returned list and optionally padding length (see above).
|
||||
max_length_labels (:obj:`int`, `optional`):
|
||||
Maximum length of the ``labels`` returned list and optionally padding length (see above).
|
||||
pad_to_multiple_of (:obj:`int`, `optional`):
|
||||
If set will pad the sequence to a multiple of the provided value.
|
||||
This is especially useful to enable the use of Tensor Cores on NVIDIA hardware with compute capability >=
|
||||
7.5 (Volta).
|
||||
"""
|
||||
|
||||
processor: AutoProcessor
|
||||
padding: Union[bool, str] = "longest"
|
||||
pad_to_multiple_of: Optional[int] = None
|
||||
pad_to_multiple_of_labels: Optional[int] = None
|
||||
|
||||
def __call__(self, features: list[dict[str, Union[list[int], torch.Tensor]]]) -> dict[str, torch.Tensor]:
|
||||
# split inputs and labels since they have to be of different lengths and need
|
||||
# different padding methods
|
||||
input_features = [{"input_values": feature["input_values"]} for feature in features]
|
||||
label_features = [{"input_ids": feature["labels"]} for feature in features]
|
||||
|
||||
batch = self.processor.pad(
|
||||
input_features,
|
||||
padding=self.padding,
|
||||
pad_to_multiple_of=self.pad_to_multiple_of,
|
||||
return_tensors="pt",
|
||||
)
|
||||
|
||||
labels_batch = self.processor.pad(
|
||||
labels=label_features,
|
||||
padding=self.padding,
|
||||
pad_to_multiple_of=self.pad_to_multiple_of_labels,
|
||||
return_tensors="pt",
|
||||
)
|
||||
|
||||
# replace padding with -100 to ignore loss correctly
|
||||
labels = labels_batch["input_ids"].masked_fill(labels_batch.attention_mask.ne(1), -100)
|
||||
|
||||
batch["labels"] = labels
|
||||
if "attention_mask" in batch:
|
||||
batch["attention_mask"] = batch["attention_mask"].to(torch.long)
|
||||
|
||||
return batch
|
||||
|
||||
|
||||
def create_vocabulary_from_data(
|
||||
datasets: DatasetDict,
|
||||
word_delimiter_token: Optional[str] = None,
|
||||
unk_token: Optional[str] = None,
|
||||
pad_token: Optional[str] = None,
|
||||
):
|
||||
# Given training and test labels create vocabulary
|
||||
def extract_all_chars(batch):
|
||||
all_text = " ".join(batch["target_text"])
|
||||
vocab = list(set(all_text))
|
||||
return {"vocab": [vocab], "all_text": [all_text]}
|
||||
|
||||
vocabs = datasets.map(
|
||||
extract_all_chars,
|
||||
batched=True,
|
||||
batch_size=-1,
|
||||
keep_in_memory=True,
|
||||
remove_columns=datasets["train"].column_names,
|
||||
)
|
||||
|
||||
# take union of all unique characters in each dataset
|
||||
vocab_set = functools.reduce(
|
||||
lambda vocab_1, vocab_2: set(vocab_1["vocab"][0]) | set(vocab_2["vocab"][0]), vocabs.values()
|
||||
)
|
||||
|
||||
vocab_dict = {v: k for k, v in enumerate(sorted(vocab_set))}
|
||||
|
||||
# replace white space with delimiter token
|
||||
if word_delimiter_token is not None:
|
||||
vocab_dict[word_delimiter_token] = vocab_dict[" "]
|
||||
del vocab_dict[" "]
|
||||
|
||||
# add unk and pad token
|
||||
if unk_token is not None:
|
||||
vocab_dict[unk_token] = len(vocab_dict)
|
||||
|
||||
if pad_token is not None:
|
||||
vocab_dict[pad_token] = len(vocab_dict)
|
||||
|
||||
return vocab_dict
|
||||
|
||||
|
||||
def main():
|
||||
# See all possible arguments in src/transformers/training_args.py
|
||||
# or by passing the --help flag to this script.
|
||||
# We now keep distinct sets of args, for a cleaner separation of concerns.
|
||||
|
||||
parser = HfArgumentParser((ModelArguments, DataTrainingArguments, TrainingArguments))
|
||||
if len(sys.argv) == 2 and sys.argv[1].endswith(".json"):
|
||||
# If we pass only one argument to the script and it's the path to a json file,
|
||||
# let's parse it to get our arguments.
|
||||
model_args, data_args, training_args = parser.parse_json_file(json_file=os.path.abspath(sys.argv[1]))
|
||||
else:
|
||||
model_args, data_args, training_args = parser.parse_args_into_dataclasses()
|
||||
|
||||
# Sending telemetry. Tracking the example usage helps us better allocate resources to maintain them. The
|
||||
# information sent is the one passed as arguments along with your Python/PyTorch versions.
|
||||
send_example_telemetry("run_speech_recognition_ctc_adapter", model_args, data_args)
|
||||
|
||||
# Detecting last checkpoint.
|
||||
last_checkpoint = None
|
||||
if os.path.isdir(training_args.output_dir) and training_args.do_train and not training_args.overwrite_output_dir:
|
||||
last_checkpoint = get_last_checkpoint(training_args.output_dir)
|
||||
if last_checkpoint is None and len(os.listdir(training_args.output_dir)) > 0:
|
||||
raise ValueError(
|
||||
f"Output directory ({training_args.output_dir}) already exists and is not empty. "
|
||||
"Use --overwrite_output_dir to overcome."
|
||||
)
|
||||
elif last_checkpoint is not None:
|
||||
logger.info(
|
||||
f"Checkpoint detected, resuming training at {last_checkpoint}. To avoid this behavior, change "
|
||||
"the `--output_dir` or add `--overwrite_output_dir` to train from scratch."
|
||||
)
|
||||
|
||||
# Setup logging
|
||||
logging.basicConfig(
|
||||
format="%(asctime)s - %(levelname)s - %(name)s - %(message)s",
|
||||
datefmt="%m/%d/%Y %H:%M:%S",
|
||||
handlers=[logging.StreamHandler(sys.stdout)],
|
||||
)
|
||||
logger.setLevel(logging.INFO if is_main_process(training_args.local_rank) else logging.WARN)
|
||||
|
||||
# Log on each process the small summary:
|
||||
logger.warning(
|
||||
f"Process rank: {training_args.local_rank}, device: {training_args.device}, n_gpu: {training_args.n_gpu}, "
|
||||
f"distributed training: {training_args.parallel_mode.value == 'distributed'}, 16-bits training: {training_args.fp16}"
|
||||
)
|
||||
# Set the verbosity to info of the Transformers logger (on main process only):
|
||||
if is_main_process(training_args.local_rank):
|
||||
transformers.utils.logging.set_verbosity_info()
|
||||
logger.info("Training/evaluation parameters %s", training_args)
|
||||
|
||||
# Set seed before initializing model.
|
||||
set_seed(training_args.seed)
|
||||
|
||||
# 1. First, let's load the dataset
|
||||
raw_datasets = DatasetDict()
|
||||
|
||||
if training_args.do_train:
|
||||
raw_datasets["train"] = load_dataset(
|
||||
data_args.dataset_name,
|
||||
data_args.dataset_config_name,
|
||||
split=data_args.train_split_name,
|
||||
token=data_args.token,
|
||||
trust_remote_code=data_args.trust_remote_code,
|
||||
)
|
||||
|
||||
if data_args.audio_column_name not in raw_datasets["train"].column_names:
|
||||
raise ValueError(
|
||||
f"--audio_column_name '{data_args.audio_column_name}' not found in dataset '{data_args.dataset_name}'."
|
||||
" Make sure to set `--audio_column_name` to the correct audio column - one of"
|
||||
f" {', '.join(raw_datasets['train'].column_names)}."
|
||||
)
|
||||
|
||||
if data_args.text_column_name not in raw_datasets["train"].column_names:
|
||||
raise ValueError(
|
||||
f"--text_column_name {data_args.text_column_name} not found in dataset '{data_args.dataset_name}'. "
|
||||
"Make sure to set `--text_column_name` to the correct text column - one of "
|
||||
f"{', '.join(raw_datasets['train'].column_names)}."
|
||||
)
|
||||
|
||||
if data_args.max_train_samples is not None:
|
||||
raw_datasets["train"] = raw_datasets["train"].select(range(data_args.max_train_samples))
|
||||
|
||||
if training_args.do_eval:
|
||||
raw_datasets["eval"] = load_dataset(
|
||||
data_args.dataset_name,
|
||||
data_args.dataset_config_name,
|
||||
split=data_args.eval_split_name,
|
||||
token=data_args.token,
|
||||
trust_remote_code=data_args.trust_remote_code,
|
||||
)
|
||||
|
||||
if data_args.max_eval_samples is not None:
|
||||
raw_datasets["eval"] = raw_datasets["eval"].select(range(data_args.max_eval_samples))
|
||||
|
||||
# 2. We remove some special characters from the datasets
|
||||
# that make training complicated and do not help in transcribing the speech
|
||||
# E.g. characters, such as `,` and `.` do not really have an acoustic characteristic
|
||||
# that could be easily picked up by the model
|
||||
chars_to_ignore_regex = (
|
||||
f"[{''.join(data_args.chars_to_ignore)}]" if data_args.chars_to_ignore is not None else None
|
||||
)
|
||||
text_column_name = data_args.text_column_name
|
||||
|
||||
def remove_special_characters(batch):
|
||||
if chars_to_ignore_regex is not None:
|
||||
batch["target_text"] = re.sub(chars_to_ignore_regex, "", batch[text_column_name]).lower() + " "
|
||||
else:
|
||||
batch["target_text"] = batch[text_column_name].lower() + " "
|
||||
return batch
|
||||
|
||||
with training_args.main_process_first(desc="dataset map special characters removal"):
|
||||
raw_datasets = raw_datasets.map(
|
||||
remove_special_characters,
|
||||
remove_columns=[text_column_name],
|
||||
desc="remove special characters from datasets",
|
||||
)
|
||||
|
||||
# save special tokens for tokenizer
|
||||
word_delimiter_token = data_args.word_delimiter_token
|
||||
unk_token = data_args.unk_token
|
||||
pad_token = data_args.pad_token
|
||||
|
||||
# 3. Next, let's load the config as we might need it to create
|
||||
# the tokenizer
|
||||
# load config
|
||||
config = AutoConfig.from_pretrained(
|
||||
model_args.model_name_or_path,
|
||||
cache_dir=model_args.cache_dir,
|
||||
token=data_args.token,
|
||||
trust_remote_code=data_args.trust_remote_code,
|
||||
)
|
||||
|
||||
# 4. Next, if no tokenizer file is defined,
|
||||
# we create the vocabulary of the model by extracting all unique characters from
|
||||
# the training and evaluation datasets
|
||||
# We need to make sure that only first rank saves vocabulary
|
||||
# make sure all processes wait until vocab is created
|
||||
tokenizer_name_or_path = model_args.tokenizer_name_or_path
|
||||
tokenizer_kwargs = {}
|
||||
|
||||
vocab_dict = {}
|
||||
if tokenizer_name_or_path is not None:
|
||||
# load vocabulary of other adapter languages so that new language can be appended
|
||||
tokenizer = AutoTokenizer.from_pretrained(
|
||||
tokenizer_name_or_path,
|
||||
token=data_args.token,
|
||||
trust_remote_code=data_args.trust_remote_code,
|
||||
)
|
||||
vocab_dict = tokenizer.vocab.copy()
|
||||
if tokenizer.target_lang is None:
|
||||
raise ValueError("Make sure to load a multi-lingual tokenizer with a set target language.")
|
||||
|
||||
if data_args.target_language in tokenizer.vocab and not data_args.overwrite_lang_vocab:
|
||||
logger.info(
|
||||
"Adapter language already exists."
|
||||
" Skipping vocabulary creating. If you want to create a new vocabulary"
|
||||
f" for {data_args.target_language} make sure to add '--overwrite_lang_vocab'"
|
||||
)
|
||||
else:
|
||||
tokenizer_name_or_path = None
|
||||
|
||||
if tokenizer_name_or_path is None:
|
||||
# save vocab in training output dir
|
||||
tokenizer_name_or_path = training_args.output_dir
|
||||
|
||||
vocab_file = os.path.join(tokenizer_name_or_path, "vocab.json")
|
||||
|
||||
with training_args.main_process_first():
|
||||
if training_args.overwrite_output_dir and os.path.isfile(vocab_file):
|
||||
try:
|
||||
os.remove(vocab_file)
|
||||
except OSError:
|
||||
                # in shared file-systems it might be the case that
                # two processes try to delete the vocab file at the same time
pass
|
||||
|
||||
with training_args.main_process_first(desc="dataset map vocabulary creation"):
|
||||
if not os.path.isfile(vocab_file):
|
||||
os.makedirs(tokenizer_name_or_path, exist_ok=True)
|
||||
lang_dict = create_vocabulary_from_data(
|
||||
raw_datasets,
|
||||
word_delimiter_token=word_delimiter_token,
|
||||
unk_token=unk_token,
|
||||
pad_token=pad_token,
|
||||
)
|
||||
|
||||
            # if we are doing adapter language training, save
|
||||
# vocab with adapter language
|
||||
if data_args.target_language is not None:
|
||||
vocab_dict[data_args.target_language] = lang_dict
|
||||
|
||||
# save vocab dict to be loaded into tokenizer
|
||||
with open(vocab_file, "w") as file:
|
||||
json.dump(vocab_dict, file)
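            # Illustrative layout of the resulting vocab file for adapter training
            # (language codes and characters are made up):
            #   {"tur": {"a": 1, "b": 2, "|": 0, "[UNK]": 3, "[PAD]": 4},
            #    "swe": {...}}
            # i.e. one character vocabulary is kept per adapter language.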
|
||||
|
||||
# if tokenizer has just been created
|
||||
# it is defined by `tokenizer_class` if present in config else by `model_type`
|
||||
tokenizer_kwargs = {
|
||||
"config": config if config.tokenizer_class is not None else None,
|
||||
"tokenizer_type": config.model_type if config.tokenizer_class is None else None,
|
||||
"unk_token": unk_token,
|
||||
"pad_token": pad_token,
|
||||
"word_delimiter_token": word_delimiter_token,
|
||||
"target_lang": data_args.target_language,
|
||||
}
|
||||
|
||||
# 5. Now we can instantiate the feature extractor, tokenizer and model
|
||||
# Note for distributed training, the .from_pretrained methods guarantee that only
|
||||
# one local process can concurrently download model & vocab.
|
||||
|
||||
# load feature_extractor and tokenizer
|
||||
tokenizer = AutoTokenizer.from_pretrained(
|
||||
tokenizer_name_or_path,
|
||||
token=data_args.token,
|
||||
trust_remote_code=data_args.trust_remote_code,
|
||||
**tokenizer_kwargs,
|
||||
)
|
||||
feature_extractor = AutoFeatureExtractor.from_pretrained(
|
||||
model_args.model_name_or_path,
|
||||
cache_dir=model_args.cache_dir,
|
||||
token=data_args.token,
|
||||
trust_remote_code=data_args.trust_remote_code,
|
||||
)
|
||||
|
||||
# adapt config
|
||||
config.update(
|
||||
{
|
||||
"final_dropout": model_args.final_dropout,
|
||||
"mask_time_prob": model_args.mask_time_prob,
|
||||
"mask_time_length": model_args.mask_time_length,
|
||||
"mask_feature_prob": model_args.mask_feature_prob,
|
||||
"mask_feature_length": model_args.mask_feature_length,
|
||||
"gradient_checkpointing": training_args.gradient_checkpointing,
|
||||
"layerdrop": model_args.layerdrop,
|
||||
"ctc_loss_reduction": model_args.ctc_loss_reduction,
|
||||
"pad_token_id": tokenizer.pad_token_id,
|
||||
"vocab_size": len(tokenizer),
|
||||
"adapter_attn_dim": model_args.adapter_attn_dim,
|
||||
}
|
||||
)
|
||||
|
||||
# create model
|
||||
model = AutoModelForCTC.from_pretrained(
|
||||
model_args.model_name_or_path,
|
||||
cache_dir=model_args.cache_dir,
|
||||
config=config,
|
||||
token=data_args.token,
|
||||
trust_remote_code=data_args.trust_remote_code,
|
||||
ignore_mismatched_sizes=True,
|
||||
)
|
||||
|
||||
# if attn adapter is defined, freeze all non-adapter weights
|
||||
if model.config.adapter_attn_dim is not None:
|
||||
model.init_adapter_layers()
|
||||
# first we freeze the whole base model
|
||||
model.freeze_base_model()
|
||||
|
||||
# next we unfreeze all adapter layers
|
||||
adapter_weights = model._get_adapters()
|
||||
for param in adapter_weights.values():
|
||||
param.requires_grad = True
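        # At this point only the adapter attention weights (and the CTC output head,
        # which is not part of the frozen base model) remain trainable, which keeps the
        # number of trained parameters small.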
|
||||
|
||||
# 6. Now we preprocess the datasets including loading the audio, resampling and normalization
|
||||
# Thankfully, `datasets` takes care of automatically loading and resampling the audio,
|
||||
# so that we just need to set the correct target sampling rate and normalize the input
|
||||
# via the `feature_extractor`
|
||||
|
||||
# make sure that dataset decodes audio with correct sampling rate
|
||||
dataset_sampling_rate = next(iter(raw_datasets.values())).features[data_args.audio_column_name].sampling_rate
|
||||
if dataset_sampling_rate != feature_extractor.sampling_rate:
|
||||
raw_datasets = raw_datasets.cast_column(
|
||||
data_args.audio_column_name, datasets.features.Audio(sampling_rate=feature_extractor.sampling_rate)
|
||||
)
|
||||
|
||||
# derive max & min input length for sample rate & max duration
|
||||
max_input_length = data_args.max_duration_in_seconds * feature_extractor.sampling_rate
|
||||
min_input_length = data_args.min_duration_in_seconds * feature_extractor.sampling_rate
|
||||
audio_column_name = data_args.audio_column_name
|
||||
num_workers = data_args.preprocessing_num_workers
|
||||
|
||||
# Preprocessing the datasets.
|
||||
# We need to read the audio files as arrays and tokenize the targets.
|
||||
def prepare_dataset(batch):
|
||||
# load audio
|
||||
sample = batch[audio_column_name]
|
||||
|
||||
inputs = feature_extractor(sample["array"], sampling_rate=sample["sampling_rate"])
|
||||
batch["input_values"] = inputs.input_values[0]
|
||||
batch["input_length"] = len(batch["input_values"])
|
||||
|
||||
# encode targets
|
||||
batch["labels"] = tokenizer(batch["target_text"]).input_ids
|
||||
return batch
|
||||
|
||||
with training_args.main_process_first(desc="dataset map preprocessing"):
|
||||
vectorized_datasets = raw_datasets.map(
|
||||
prepare_dataset,
|
||||
remove_columns=next(iter(raw_datasets.values())).column_names,
|
||||
num_proc=num_workers,
|
||||
desc="preprocess datasets",
|
||||
)
|
||||
|
||||
def is_audio_in_length_range(length):
|
||||
return length > min_input_length and length < max_input_length
|
||||
|
||||
    # filter out data that is shorter than min_input_length or longer than max_input_length
|
||||
vectorized_datasets = vectorized_datasets.filter(
|
||||
is_audio_in_length_range,
|
||||
num_proc=num_workers,
|
||||
input_columns=["input_length"],
|
||||
)
|
||||
|
||||
# 7. Next, we can prepare the training.
|
||||
# Let's use word error rate (WER) as our evaluation metric,
|
||||
# instantiate a data collator and the trainer
|
||||
|
||||
# Define evaluation metrics during training, *i.e.* word error rate, character error rate
|
||||
eval_metrics = {metric: evaluate.load(metric, cache_dir=model_args.cache_dir) for metric in data_args.eval_metrics}
|
||||
|
||||
    # for large datasets it is advised to run the preprocessing on a
    # single machine first with ``args.preprocessing_only`` since there will most likely
    # be a timeout when running the script in distributed mode.
    # In a second step ``args.preprocessing_only`` can then be set to `False` to load the
    # cached dataset.
|
||||
if data_args.preprocessing_only:
|
||||
logger.info(f"Data preprocessing finished. Files cached at {vectorized_datasets.cache_files}")
|
||||
return
|
||||
|
||||
def compute_metrics(pred):
|
||||
pred_logits = pred.predictions
|
||||
pred_ids = np.argmax(pred_logits, axis=-1)
|
||||
|
||||
pred.label_ids[pred.label_ids == -100] = tokenizer.pad_token_id
|
||||
|
||||
pred_str = tokenizer.batch_decode(pred_ids)
|
||||
# we do not want to group tokens when computing the metrics
|
||||
label_str = tokenizer.batch_decode(pred.label_ids, group_tokens=False)
|
||||
|
||||
metrics = {k: v.compute(predictions=pred_str, references=label_str) for k, v in eval_metrics.items()}
|
||||
|
||||
return metrics
|
||||
|
||||
# Now save everything to be able to create a single processor later
|
||||
# make sure all processes wait until data is saved
|
||||
with training_args.main_process_first():
|
||||
# only the main process saves them
|
||||
if is_main_process(training_args.local_rank):
|
||||
# save feature extractor, tokenizer and config
|
||||
feature_extractor.save_pretrained(training_args.output_dir)
|
||||
tokenizer.save_pretrained(training_args.output_dir)
|
||||
config.save_pretrained(training_args.output_dir)
|
||||
|
||||
try:
|
||||
processor = AutoProcessor.from_pretrained(training_args.output_dir)
|
||||
except (OSError, KeyError):
|
||||
warnings.warn(
|
||||
"Loading a processor from a feature extractor config that does not"
|
||||
" include a `processor_class` attribute is deprecated and will be removed in v5. Please add the following "
|
||||
" attribute to your `preprocessor_config.json` file to suppress this warning: "
|
||||
" `'processor_class': 'Wav2Vec2Processor'`",
|
||||
FutureWarning,
|
||||
)
|
||||
processor = Wav2Vec2Processor.from_pretrained(training_args.output_dir)
|
||||
|
||||
# Instantiate custom data collator
|
||||
data_collator = DataCollatorCTCWithPadding(processor=processor)
|
||||
|
||||
# Initialize Trainer
|
||||
trainer = Trainer(
|
||||
model=model,
|
||||
data_collator=data_collator,
|
||||
args=training_args,
|
||||
compute_metrics=compute_metrics,
|
||||
train_dataset=vectorized_datasets["train"] if training_args.do_train else None,
|
||||
eval_dataset=vectorized_datasets["eval"] if training_args.do_eval else None,
|
||||
processing_class=processor,
|
||||
)
|
||||
|
||||
# 8. Finally, we can start training
|
||||
|
||||
# Training
|
||||
if training_args.do_train:
|
||||
        # use the last checkpoint if one exists
|
||||
if last_checkpoint is not None:
|
||||
checkpoint = last_checkpoint
|
||||
elif os.path.isdir(model_args.model_name_or_path):
|
||||
checkpoint = model_args.model_name_or_path
|
||||
else:
|
||||
checkpoint = None
|
||||
|
||||
train_result = trainer.train(resume_from_checkpoint=checkpoint)
|
||||
trainer.save_model()
|
||||
|
||||
metrics = train_result.metrics
|
||||
max_train_samples = (
|
||||
data_args.max_train_samples
|
||||
if data_args.max_train_samples is not None
|
||||
else len(vectorized_datasets["train"])
|
||||
)
|
||||
metrics["train_samples"] = min(max_train_samples, len(vectorized_datasets["train"]))
|
||||
|
||||
trainer.log_metrics("train", metrics)
|
||||
trainer.save_metrics("train", metrics)
|
||||
trainer.save_state()
|
||||
|
||||
# Evaluation
|
||||
results = {}
|
||||
if training_args.do_eval:
|
||||
logger.info("*** Evaluate ***")
|
||||
metrics = trainer.evaluate()
|
||||
max_eval_samples = (
|
||||
data_args.max_eval_samples if data_args.max_eval_samples is not None else len(vectorized_datasets["eval"])
|
||||
)
|
||||
metrics["eval_samples"] = min(max_eval_samples, len(vectorized_datasets["eval"]))
|
||||
|
||||
trainer.log_metrics("eval", metrics)
|
||||
trainer.save_metrics("eval", metrics)
|
||||
|
||||
# Write model card and (optionally) push to hub
|
||||
config_name = data_args.dataset_config_name if data_args.dataset_config_name is not None else "na"
|
||||
kwargs = {
|
||||
"finetuned_from": model_args.model_name_or_path,
|
||||
"tasks": "automatic-speech-recognition",
|
||||
"tags": ["automatic-speech-recognition", data_args.dataset_name, "mms"],
|
||||
"dataset_args": (
|
||||
f"Config: {config_name}, Training split: {data_args.train_split_name}, Eval split:"
|
||||
f" {data_args.eval_split_name}"
|
||||
),
|
||||
"dataset": f"{data_args.dataset_name.upper()} - {config_name.upper()}",
|
||||
}
|
||||
if "common_voice" in data_args.dataset_name:
|
||||
kwargs["language"] = config_name
|
||||
|
||||
# make sure that adapter weights are saved separately
|
||||
adapter_file = WAV2VEC2_ADAPTER_SAFE_FILE.format(data_args.target_language)
|
||||
adapter_file = os.path.join(training_args.output_dir, adapter_file)
|
||||
logger.info(f"Saving adapter weights under {adapter_file}...")
|
||||
safe_save_file(model._get_adapters(), adapter_file, metadata={"format": "pt"})
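    # The adapter weights end up as e.g. `adapter.<target_language>.safetensors` in the
    # output directory and can later be swapped onto an MMS checkpoint, typically via
    # `model.load_adapter("<target_language>")` (illustrative usage).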
|
||||
|
||||
if training_args.push_to_hub:
|
||||
trainer.push_to_hub(**kwargs)
|
||||
else:
|
||||
trainer.create_model_card(**kwargs)
|
||||
|
||||
return results
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
main()
|
||||
@@ -0,0 +1,646 @@
|
||||
#!/usr/bin/env python
|
||||
# Copyright 2021 The HuggingFace Team. All rights reserved.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
# /// script
|
||||
# dependencies = [
|
||||
# "transformers @ git+https://github.com/huggingface/transformers.git",
|
||||
# "datasets[audio] >= 1.18.0",
|
||||
# "torch >= 1.5",
|
||||
# "torchaudio",
|
||||
# "librosa",
|
||||
# "jiwer",
|
||||
# "evaluate",
|
||||
# ]
|
||||
# ///
|
||||
|
||||
"""
|
||||
Fine-tuning the library models for sequence to sequence speech recognition.
|
||||
"""
|
||||
# You can also adapt this script on your own sequence to sequence speech
|
||||
# recognition task. Pointers for this are left as comments.
|
||||
|
||||
import logging
|
||||
import os
|
||||
import sys
|
||||
from dataclasses import dataclass, field
|
||||
from typing import Any, Optional, Union
|
||||
|
||||
import datasets
|
||||
import evaluate
|
||||
import torch
|
||||
from datasets import DatasetDict, load_dataset
|
||||
|
||||
import transformers
|
||||
from transformers import (
|
||||
AutoConfig,
|
||||
AutoFeatureExtractor,
|
||||
AutoModelForSpeechSeq2Seq,
|
||||
AutoProcessor,
|
||||
AutoTokenizer,
|
||||
HfArgumentParser,
|
||||
Seq2SeqTrainer,
|
||||
Seq2SeqTrainingArguments,
|
||||
set_seed,
|
||||
)
|
||||
from transformers.trainer_utils import get_last_checkpoint, is_main_process
|
||||
from transformers.utils import check_min_version, send_example_telemetry
|
||||
from transformers.utils.versions import require_version
|
||||
|
||||
|
||||
# Will error if the minimal version of Transformers is not installed. Remove at your own risks.
|
||||
check_min_version("4.57.0.dev0")
|
||||
|
||||
require_version("datasets>=1.18.0", "To fix: pip install -r examples/pytorch/speech-recognition/requirements.txt")
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
|
||||
@dataclass
class ModelArguments:
    """
    Arguments pertaining to which model/config/tokenizer we are going to fine-tune from.
    """

    model_name_or_path: str = field(
        metadata={"help": "Path to pretrained model or model identifier from huggingface.co/models"}
    )
    config_name: Optional[str] = field(
        default=None, metadata={"help": "Pretrained config name or path if not the same as model_name"}
    )
    tokenizer_name: Optional[str] = field(
        default=None, metadata={"help": "Pretrained tokenizer name or path if not the same as model_name"}
    )
    feature_extractor_name: Optional[str] = field(
        default=None, metadata={"help": "Feature extractor name or path if not the same as model_name"}
    )
    cache_dir: Optional[str] = field(
        default=None,
        metadata={"help": "Where to store the pretrained models downloaded from huggingface.co"},
    )
    use_fast_tokenizer: bool = field(
        default=True,
        metadata={"help": "Whether to use one of the fast tokenizers (backed by the tokenizers library) or not."},
    )
    model_revision: str = field(
        default="main",
        metadata={"help": "The specific model version to use (can be a branch name, tag name or commit id)."},
    )
    token: str = field(
        default=None,
        metadata={
            "help": (
                "The token to use as HTTP bearer authorization for remote files. If not specified, will use the token "
                "generated when running `hf auth login` (stored in `~/.huggingface`)."
            )
        },
    )
    trust_remote_code: bool = field(
        default=False,
        metadata={
            "help": (
                "Whether to trust the execution of code from datasets/models defined on the Hub."
                " This option should only be set to `True` for repositories you trust and in which you have read the"
                " code, as it will execute code present on the Hub on your local machine."
            )
        },
    )
    freeze_feature_encoder: bool = field(
        default=True, metadata={"help": "Whether to freeze the feature encoder layers of the model."}
    )
    freeze_encoder: bool = field(
        default=False, metadata={"help": "Whether to freeze the entire encoder of the seq2seq model."}
    )
    forced_decoder_ids: list[list[int]] = field(
        default=None,
        metadata={"help": "Deprecated. Please use the `language` and `task` arguments instead."},
    )
    suppress_tokens: list[int] = field(
        default=None,
        metadata={
            "help": (
                "Deprecated. The use of `suppress_tokens` should not be required for the majority of fine-tuning examples. "
                "Should you need to use `suppress_tokens`, please manually update them in the fine-tuning script directly."
            )
        },
    )
    apply_spec_augment: bool = field(
        default=False,
        metadata={
            "help": "Whether to apply *SpecAugment* data augmentation to the input features. This is currently only relevant for Wav2Vec2, HuBERT, WavLM and Whisper models."
        },
    )

@dataclass
class DataTrainingArguments:
    """
    Arguments pertaining to what data we are going to input our model for training and eval.
    """

    dataset_name: str = field(
        default=None, metadata={"help": "The name of the dataset to use (via the datasets library)."}
    )
    dataset_config_name: Optional[str] = field(
        default=None, metadata={"help": "The configuration name of the dataset to use (via the datasets library)."}
    )
    overwrite_cache: bool = field(
        default=False, metadata={"help": "Overwrite the cached training and evaluation sets"}
    )
    preprocessing_num_workers: Optional[int] = field(
        default=None,
        metadata={"help": "The number of processes to use for the preprocessing."},
    )
    max_train_samples: Optional[int] = field(
        default=None,
        metadata={
            "help": (
                "For debugging purposes or quicker training, truncate the number of training examples to this "
                "value if set."
            )
        },
    )
    max_eval_samples: Optional[int] = field(
        default=None,
        metadata={
            "help": (
                "For debugging purposes or quicker training, truncate the number of evaluation examples to this "
                "value if set."
            )
        },
    )
    audio_column_name: str = field(
        default="audio",
        metadata={"help": "The name of the dataset column containing the audio data. Defaults to 'audio'"},
    )
    text_column_name: str = field(
        default="text",
        metadata={"help": "The name of the dataset column containing the text data. Defaults to 'text'"},
    )
    max_duration_in_seconds: float = field(
        default=20.0,
        metadata={
            "help": (
                "Truncate audio files that are longer than `max_duration_in_seconds` seconds to"
                " `max_duration_in_seconds`"
            )
        },
    )
    min_duration_in_seconds: float = field(
        default=0.0, metadata={"help": "Filter audio files that are shorter than `min_duration_in_seconds` seconds"}
    )
    preprocessing_only: bool = field(
        default=False,
        metadata={
            "help": (
                "Whether to only do data preprocessing and skip training. This is especially useful when data"
                " preprocessing errors out in distributed training due to timeout. In this case, one should run the"
                " preprocessing in a non-distributed setup with `preprocessing_only=True` so that the cached datasets"
                " can consequently be loaded in distributed training"
            )
        },
    )
    train_split_name: str = field(
        default="train",
        metadata={
            "help": "The name of the training data set split to use (via the datasets library). Defaults to 'train'"
        },
    )
    eval_split_name: str = field(
        default="test",
        metadata={
            "help": "The name of the evaluation data set split to use (via the datasets library). Defaults to 'test'"
        },
    )
    do_lower_case: bool = field(
        default=True,
        metadata={"help": "Whether the target text should be lower cased."},
    )
    language: str = field(
        default=None,
        metadata={
            "help": (
                "Language for multilingual fine-tuning. This argument should be set for multilingual fine-tuning "
                "only. For English speech recognition, it should be set to `None`."
            )
        },
    )
    task: str = field(
        default="transcribe",
        metadata={"help": "Task, either `transcribe` for speech recognition or `translate` for speech translation."},
    )

@dataclass
class DataCollatorSpeechSeq2SeqWithPadding:
    """
    Data collator that will dynamically pad the inputs received.
    Args:
        processor ([`WhisperProcessor`])
            The processor used for processing the data.
        decoder_start_token_id (`int`)
            The begin-of-sentence token id of the decoder.
        forward_attention_mask (`bool`)
            Whether to return attention_mask.
    """

    processor: Any
    decoder_start_token_id: int
    forward_attention_mask: bool

    def __call__(self, features: list[dict[str, Union[list[int], torch.Tensor]]]) -> dict[str, torch.Tensor]:
        # split inputs and labels since they have to be of different lengths and need
        # different padding methods
        model_input_name = self.processor.model_input_names[0]
        input_features = [{model_input_name: feature[model_input_name]} for feature in features]
        label_features = [{"input_ids": feature["labels"]} for feature in features]

        batch = self.processor.feature_extractor.pad(input_features, return_tensors="pt")

        if self.forward_attention_mask:
            batch["attention_mask"] = torch.LongTensor([feature["attention_mask"] for feature in features])

        labels_batch = self.processor.tokenizer.pad(label_features, return_tensors="pt")

        # replace padding with -100 to ignore loss correctly
        labels = labels_batch["input_ids"].masked_fill(labels_batch.attention_mask.ne(1), -100)

        # if bos token is appended in previous tokenization step,
        # cut bos token here as it's appended later anyway
        if (labels[:, 0] == self.decoder_start_token_id).all().cpu().item():
            labels = labels[:, 1:]

        batch["labels"] = labels

        return batch

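
# Note on the collator above: -100 is the default `ignore_index` of PyTorch's
# cross-entropy loss, so label positions replaced with -100 are simply skipped
# when the loss is computed.
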
def main():
    # 1. Parse input arguments
    # See all possible arguments in src/transformers/training_args.py
    # or by passing the --help flag to this script.
    # We now keep distinct sets of args, for a cleaner separation of concerns.
    parser = HfArgumentParser((ModelArguments, DataTrainingArguments, Seq2SeqTrainingArguments))

    if len(sys.argv) == 2 and sys.argv[1].endswith(".json"):
        # If we pass only one argument to the script and it's the path to a json file,
        # let's parse it to get our arguments.
        model_args, data_args, training_args = parser.parse_json_file(json_file=os.path.abspath(sys.argv[1]))
    else:
        model_args, data_args, training_args = parser.parse_args_into_dataclasses()

    # Sending telemetry. Tracking the example usage helps us better allocate resources to maintain them. The
    # information sent is the one passed as arguments along with your Python/PyTorch versions.
    send_example_telemetry("run_speech_recognition_seq2seq", model_args, data_args)

    # 2. Setup logging
    logging.basicConfig(
        format="%(asctime)s - %(levelname)s - %(name)s - %(message)s",
        datefmt="%m/%d/%Y %H:%M:%S",
        handlers=[logging.StreamHandler(sys.stdout)],
    )
    log_level = training_args.get_process_log_level()
    logger.setLevel(log_level)
    datasets.utils.logging.set_verbosity(log_level)
    transformers.utils.logging.set_verbosity(log_level)
    transformers.utils.logging.enable_default_handler()
    transformers.utils.logging.enable_explicit_format()

    logger.setLevel(logging.INFO if is_main_process(training_args.local_rank) else logging.WARN)

    # Log on each process the small summary:
    logger.warning(
        f"Process rank: {training_args.local_rank}, device: {training_args.device}, n_gpu: {training_args.n_gpu}, "
        f"distributed training: {training_args.parallel_mode.value == 'distributed'}, 16-bits training: {training_args.fp16}"
    )
    logger.info(f"Training/evaluation parameters {training_args}")

    # Set the verbosity to info of the Transformers logger (on main process only):
    if is_main_process(training_args.local_rank):
        transformers.utils.logging.set_verbosity_info()
    logger.info("Training/evaluation parameters %s", training_args)

    # 3. Detecting last checkpoint and eventually continue from last checkpoint
    last_checkpoint = None
    if os.path.isdir(training_args.output_dir) and training_args.do_train and not training_args.overwrite_output_dir:
        last_checkpoint = get_last_checkpoint(training_args.output_dir)
        if last_checkpoint is None and len(os.listdir(training_args.output_dir)) > 0:
            raise ValueError(
                f"Output directory ({training_args.output_dir}) already exists and is not empty. "
                "Use --overwrite_output_dir to overcome."
            )
        elif last_checkpoint is not None and training_args.resume_from_checkpoint is None:
            logger.info(
                f"Checkpoint detected, resuming training at {last_checkpoint}. To avoid this behavior, change "
                "the `--output_dir` or add `--overwrite_output_dir` to train from scratch."
            )

    # Set seed before initializing model.
    set_seed(training_args.seed)

    # 4. Load dataset
    raw_datasets = DatasetDict()

    if training_args.do_train:
        raw_datasets["train"] = load_dataset(
            data_args.dataset_name,
            data_args.dataset_config_name,
            split=data_args.train_split_name,
            cache_dir=model_args.cache_dir,
            token=model_args.token,
            trust_remote_code=model_args.trust_remote_code,
        )

    if training_args.do_eval:
        raw_datasets["eval"] = load_dataset(
            data_args.dataset_name,
            data_args.dataset_config_name,
            split=data_args.eval_split_name,
            cache_dir=model_args.cache_dir,
            token=model_args.token,
            trust_remote_code=model_args.trust_remote_code,
        )

    if data_args.audio_column_name not in next(iter(raw_datasets.values())).column_names:
        raise ValueError(
            f"--audio_column_name '{data_args.audio_column_name}' not found in dataset '{data_args.dataset_name}'. "
            "Make sure to set `--audio_column_name` to the correct audio column - one of "
            f"{', '.join(next(iter(raw_datasets.values())).column_names)}."
        )

    if data_args.text_column_name not in next(iter(raw_datasets.values())).column_names:
        raise ValueError(
            f"--text_column_name {data_args.text_column_name} not found in dataset '{data_args.dataset_name}'. "
            "Make sure to set `--text_column_name` to the correct text column - one of "
            f"{', '.join(next(iter(raw_datasets.values())).column_names)}."
        )

    # 5. Load pretrained model, tokenizer, and feature extractor
    #
    # Distributed training:
    # The .from_pretrained methods guarantee that only one local process can concurrently
    # download model & vocab.
    config = AutoConfig.from_pretrained(
        model_args.config_name if model_args.config_name else model_args.model_name_or_path,
        cache_dir=model_args.cache_dir,
        revision=model_args.model_revision,
        token=model_args.token,
        trust_remote_code=model_args.trust_remote_code,
    )

    # SpecAugment for whisper models
    if getattr(config, "model_type", None) == "whisper":
        config.update({"apply_spec_augment": model_args.apply_spec_augment})

    feature_extractor = AutoFeatureExtractor.from_pretrained(
        model_args.feature_extractor_name if model_args.feature_extractor_name else model_args.model_name_or_path,
        cache_dir=model_args.cache_dir,
        revision=model_args.model_revision,
        token=model_args.token,
        trust_remote_code=model_args.trust_remote_code,
    )
    tokenizer = AutoTokenizer.from_pretrained(
        model_args.tokenizer_name if model_args.tokenizer_name else model_args.model_name_or_path,
        cache_dir=model_args.cache_dir,
        use_fast=model_args.use_fast_tokenizer,
        revision=model_args.model_revision,
        token=model_args.token,
        trust_remote_code=model_args.trust_remote_code,
    )
    model = AutoModelForSpeechSeq2Seq.from_pretrained(
        model_args.model_name_or_path,
        config=config,
        cache_dir=model_args.cache_dir,
        revision=model_args.model_revision,
        token=model_args.token,
        trust_remote_code=model_args.trust_remote_code,
    )

    if model.config.decoder_start_token_id is None:
        raise ValueError("Make sure that `config.decoder_start_token_id` is correctly defined")

    if model_args.freeze_feature_encoder:
        model.freeze_feature_encoder()

    if model_args.freeze_encoder:
        model.freeze_encoder()
        model.model.encoder.gradient_checkpointing = False

    if hasattr(model.generation_config, "is_multilingual") and model.generation_config.is_multilingual:
        # We only need to set the language and task ids in a multilingual setting
        tokenizer.set_prefix_tokens(language=data_args.language, task=data_args.task)
        model.generation_config.language = data_args.language
        model.generation_config.task = data_args.task
    elif data_args.language is not None:
        raise ValueError(
            "Setting language token for an English-only checkpoint is not permitted. The language argument should "
            "only be set for multilingual checkpoints."
        )

    # TODO (Sanchit): deprecate these arguments in v4.41
    if model_args.forced_decoder_ids is not None:
        logger.warning(
            "The use of `forced_decoder_ids` is deprecated and will be removed in v4.41. "
            "Please use the `language` and `task` arguments instead"
        )
        model.generation_config.forced_decoder_ids = model_args.forced_decoder_ids
    else:
        model.generation_config.forced_decoder_ids = None
        model.config.forced_decoder_ids = None

    if model_args.suppress_tokens is not None:
        logger.warning(
            "The use of `suppress_tokens` is deprecated and will be removed in v4.41. "
            "Should you need `suppress_tokens`, please manually set them in the fine-tuning script."
        )
        model.generation_config.suppress_tokens = model_args.suppress_tokens

    # 6. Resample speech dataset if necessary
    dataset_sampling_rate = next(iter(raw_datasets.values())).features[data_args.audio_column_name].sampling_rate
    if dataset_sampling_rate != feature_extractor.sampling_rate:
        raw_datasets = raw_datasets.cast_column(
            data_args.audio_column_name, datasets.features.Audio(sampling_rate=feature_extractor.sampling_rate)
        )

    # 7. Preprocessing the datasets.
    # We need to read the audio files as arrays and tokenize the targets.
    max_input_length = data_args.max_duration_in_seconds * feature_extractor.sampling_rate
    min_input_length = data_args.min_duration_in_seconds * feature_extractor.sampling_rate
    audio_column_name = data_args.audio_column_name
    num_workers = data_args.preprocessing_num_workers
    text_column_name = data_args.text_column_name
    model_input_name = feature_extractor.model_input_names[0]
    do_lower_case = data_args.do_lower_case
    # if SpecAugment is used for whisper models, return attention_mask to guide the mask along time axis
    forward_attention_mask = (
        getattr(config, "model_type", None) == "whisper"
        and getattr(config, "apply_spec_augment", False)
        and getattr(config, "mask_time_prob", 0) > 0
    )

    if data_args.max_train_samples is not None:
        raw_datasets["train"] = raw_datasets["train"].select(range(data_args.max_train_samples))

    if data_args.max_eval_samples is not None:
        raw_datasets["eval"] = raw_datasets["eval"].select(range(data_args.max_eval_samples))

    def prepare_dataset(batch):
        # process audio
        sample = batch[audio_column_name]
        inputs = feature_extractor(
            sample["array"], sampling_rate=sample["sampling_rate"], return_attention_mask=forward_attention_mask
        )
        # process audio length
        batch[model_input_name] = inputs.get(model_input_name)[0]
        batch["input_length"] = len(sample["array"])
        if forward_attention_mask:
            batch["attention_mask"] = inputs.get("attention_mask")[0]

        # process targets
        input_str = batch[text_column_name].lower() if do_lower_case else batch[text_column_name]
        batch["labels"] = tokenizer(input_str).input_ids
        return batch

    with training_args.main_process_first(desc="dataset map pre-processing"):
        vectorized_datasets = raw_datasets.map(
            prepare_dataset,
            remove_columns=next(iter(raw_datasets.values())).column_names,
            num_proc=data_args.preprocessing_num_workers,
            desc="preprocess train dataset",
        )

    # filter data that is shorter than min_input_length or longer than
    # max_input_length
    def is_audio_in_length_range(length):
        return length > min_input_length and length < max_input_length

    vectorized_datasets = vectorized_datasets.filter(
        is_audio_in_length_range,
        num_proc=num_workers,
        input_columns=["input_length"],
    )

    # for large datasets it is advised to run the preprocessing on a
    # single machine first with `args.preprocessing_only` since there will most likely
    # be a timeout when running the script in distributed mode.
    # In a second step `args.preprocessing_only` can then be set to `False` to load the
    # cached dataset
    if data_args.preprocessing_only:
        cache = {k: v.cache_files for k, v in vectorized_datasets.items()}
        logger.info(f"Data preprocessing finished. Files cached at {cache}.")
        return

    # 8. Load Metric
    metric = evaluate.load("wer", cache_dir=model_args.cache_dir)

    def compute_metrics(pred):
        pred_ids = pred.predictions

        pred.label_ids[pred.label_ids == -100] = tokenizer.pad_token_id

        pred_str = tokenizer.batch_decode(pred_ids, skip_special_tokens=True)
        # we do not want to group tokens when computing the metrics
        label_str = tokenizer.batch_decode(pred.label_ids, skip_special_tokens=True)

        wer = metric.compute(predictions=pred_str, references=label_str)

        return {"wer": wer}

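    # Note: `metric.compute` above returns the word error rate as a fraction, i.e.
    # (substitutions + deletions + insertions) / number of reference words, so 0.25
    # corresponds to a 25% WER.
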
    # 9. Create a single speech processor
    # make sure all processes wait until data is saved
    with training_args.main_process_first():
        # only the main process saves them
        if is_main_process(training_args.local_rank):
            # save feature extractor, tokenizer and config
            feature_extractor.save_pretrained(training_args.output_dir)
            tokenizer.save_pretrained(training_args.output_dir)
            config.save_pretrained(training_args.output_dir)

    processor = AutoProcessor.from_pretrained(training_args.output_dir)

    # 10. Define data collator
    data_collator = DataCollatorSpeechSeq2SeqWithPadding(
        processor=processor,
        decoder_start_token_id=model.config.decoder_start_token_id,
        forward_attention_mask=forward_attention_mask,
    )

    # 11. Initialize Trainer
    trainer = Seq2SeqTrainer(
        model=model,
        args=training_args,
        train_dataset=vectorized_datasets["train"] if training_args.do_train else None,
        eval_dataset=vectorized_datasets["eval"] if training_args.do_eval else None,
        processing_class=feature_extractor,
        data_collator=data_collator,
        compute_metrics=compute_metrics if training_args.predict_with_generate else None,
    )

    # 12. Training
    if training_args.do_train:
        checkpoint = None
        if training_args.resume_from_checkpoint is not None:
            checkpoint = training_args.resume_from_checkpoint
        elif last_checkpoint is not None:
            checkpoint = last_checkpoint
        train_result = trainer.train(resume_from_checkpoint=checkpoint)
        trainer.save_model()  # Saves the feature extractor too for easy upload

        metrics = train_result.metrics
        max_train_samples = (
            data_args.max_train_samples
            if data_args.max_train_samples is not None
            else len(vectorized_datasets["train"])
        )
        metrics["train_samples"] = min(max_train_samples, len(vectorized_datasets["train"]))
        trainer.log_metrics("train", metrics)
        trainer.save_metrics("train", metrics)
        trainer.save_state()

    # 13. Evaluation
    results = {}
    if training_args.do_eval:
        logger.info("*** Evaluate ***")
        metrics = trainer.evaluate(
            metric_key_prefix="eval",
            max_length=training_args.generation_max_length,
            num_beams=training_args.generation_num_beams,
        )
        max_eval_samples = (
            data_args.max_eval_samples if data_args.max_eval_samples is not None else len(vectorized_datasets["eval"])
        )
        metrics["eval_samples"] = min(max_eval_samples, len(vectorized_datasets["eval"]))

        trainer.log_metrics("eval", metrics)
        trainer.save_metrics("eval", metrics)

    # 14. Write Training Stats
    kwargs = {"finetuned_from": model_args.model_name_or_path, "tasks": "automatic-speech-recognition"}
    if data_args.dataset_name is not None:
        kwargs["dataset_tags"] = data_args.dataset_name
        if data_args.dataset_config_name is not None:
            kwargs["dataset_args"] = data_args.dataset_config_name
            kwargs["dataset"] = f"{data_args.dataset_name} {data_args.dataset_config_name}"
        else:
            kwargs["dataset"] = data_args.dataset_name

    if training_args.push_to_hub:
        trainer.push_to_hub(**kwargs)
    else:
        trainer.create_model_card(**kwargs)

    return results


if __name__ == "__main__":
    main()