init

transformers/examples/pytorch/image-pretraining/README.md (new file, 256 lines)
@@ -0,0 +1,256 @@
<!---
Copyright 2022 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->

# Image pretraining examples

This directory contains Python scripts that allow you to pre-train Transformer-based vision models (like [ViT](https://huggingface.co/docs/transformers/model_doc/vit), [Swin Transformer](https://huggingface.co/docs/transformers/model_doc/swin)) on your own data, after which you can easily load the weights into an [`AutoModelForImageClassification`](https://huggingface.co/docs/transformers/model_doc/auto#transformers.AutoModelForImageClassification). It currently includes scripts for:
- [SimMIM](#simmim) (by Microsoft Research)
- [MAE](#mae) (by Facebook AI).

NOTE: If you encounter problems or have suggestions for improvement, open an issue on GitHub and tag @NielsRogge.


## SimMIM

The `run_mim.py` script can be used to pre-train any Transformer-based vision model in the library (concretely, any model supported by the `AutoModelForMaskedImageModeling` API) for masked image modeling as proposed in [SimMIM: A Simple Framework for Masked Image Modeling](https://huggingface.co/papers/2111.09886) using PyTorch.

<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/simmim_architecture.jpg"
alt="drawing" width="300"/>

<small> SimMIM framework. Taken from the <a href="https://huggingface.co/papers/2111.09886">original paper</a>. </small>

The goal for the model is to predict raw pixel values for the masked patches, using just a linear layer as the prediction head. The model is trained using a simple L1 loss.
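
To make the objective concrete, here is a minimal sketch of an L1 reconstruction loss over masked pixels. The tensors and shapes below are illustrative; the actual loss is computed inside the model's `forward`:

```python
import torch
import torch.nn.functional as F

# Toy tensors (illustrative shapes): a reconstruction, the original image,
# and a pixel-level mask where True marks pixels belonging to masked patches.
reconstruction = torch.randn(2, 3, 192, 192)
pixel_values = torch.randn(2, 3, 192, 192)
mask = torch.randint(0, 2, (2, 3, 192, 192)).bool()

# SimMIM-style objective: average an L1 loss over the masked pixels only.
loss = F.l1_loss(reconstruction[mask], pixel_values[mask])
```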

### Using datasets from 🤗 datasets

Here we show how to pre-train a `ViT` from scratch for masked image modeling on the [cifar10](https://huggingface.co/datasets/cifar10) dataset.

Alternatively, one can decide to further pre-train an already pre-trained (or fine-tuned) checkpoint from the [hub](https://huggingface.co/). This can be done by setting the `model_name_or_path` argument to "google/vit-base-patch16-224-in21k" for example (and not specifying the `model_type` argument).

```bash
python run_mim.py \
    --model_type vit \
    --output_dir ./outputs/ \
    --overwrite_output_dir \
    --remove_unused_columns False \
    --label_names bool_masked_pos \
    --do_train \
    --do_eval \
    --learning_rate 2e-5 \
    --weight_decay 0.05 \
    --num_train_epochs 100 \
    --per_device_train_batch_size 8 \
    --per_device_eval_batch_size 8 \
    --logging_strategy steps \
    --logging_steps 10 \
    --eval_strategy epoch \
    --save_strategy epoch \
    --load_best_model_at_end True \
    --save_total_limit 3 \
    --seed 1337
```

Here, we train for 100 epochs with a learning rate of 2e-5. Note that the SimMIM authors used a more sophisticated learning rate schedule, see the [config files](https://github.com/microsoft/SimMIM/blob/main/configs/vit_base__800ep/simmim_pretrain__vit_base__img224__800ep.yaml) for more info. One can easily tweak the script to include this learning rate schedule (several learning rate schedulers are supported via the [training arguments](https://huggingface.co/docs/transformers/main_classes/trainer#transformers.TrainingArguments)).
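
For instance, a cosine schedule with warmup can be selected entirely through `TrainingArguments`; a minimal sketch (the argument names are part of the Trainer API, but the values are illustrative and not the exact SimMIM recipe):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./outputs",
    learning_rate=2e-5,
    lr_scheduler_type="cosine",  # one of the supported scheduler types
    warmup_ratio=0.05,           # fraction of total steps used for warmup
    num_train_epochs=100,
)
```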

We can also, for instance, replicate the pre-training of a Swin Transformer using the same architecture as used by the SimMIM authors. For this, we first create a custom configuration and save it locally:

```python
from transformers import SwinConfig

IMAGE_SIZE = 192
PATCH_SIZE = 4
EMBED_DIM = 128
DEPTHS = [2, 2, 18, 2]
NUM_HEADS = [4, 8, 16, 32]
WINDOW_SIZE = 6

config = SwinConfig(
    image_size=IMAGE_SIZE,
    patch_size=PATCH_SIZE,
    embed_dim=EMBED_DIM,
    depths=DEPTHS,
    num_heads=NUM_HEADS,
    window_size=WINDOW_SIZE,
)
config.save_pretrained("path_to_config")
```

Next, we can run the script by providing the path to this custom configuration (replace `path_to_config` below with your path):

```bash
python run_mim.py \
    --config_name_or_path path_to_config \
    --model_type swin \
    --output_dir ./outputs/ \
    --overwrite_output_dir \
    --remove_unused_columns False \
    --label_names bool_masked_pos \
    --do_train \
    --do_eval \
    --learning_rate 2e-5 \
    --num_train_epochs 5 \
    --per_device_train_batch_size 8 \
    --per_device_eval_batch_size 8 \
    --logging_strategy steps \
    --logging_steps 10 \
    --eval_strategy epoch \
    --save_strategy epoch \
    --load_best_model_at_end True \
    --save_total_limit 3 \
    --seed 1337
```

This will train a Swin Transformer from scratch.

### Using your own data

To use your own dataset, the training script expects the following directory structure:

```bash
root/dog/xxx.png
root/dog/xxy.png
root/dog/[...]/xxz.png

root/cat/123.png
root/cat/nsdf3.png
root/cat/[...]/asd932_.png
```

Note that you can put images in dummy subfolders, whose names will be ignored by default (as labels aren't required). You can also just place all images into a single dummy subfolder. Once you've prepared your dataset, you can run the script like this:

```bash
python run_mim.py \
    --model_type vit \
    --dataset_name nateraw/image-folder \
    --train_dir <path-to-train-root> \
    --output_dir ./outputs/ \
    --remove_unused_columns False \
    --label_names bool_masked_pos \
    --do_train \
    --do_eval
```
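
As a quick sanity check of the folder layout before launching a long run, you can load the same structure with the `imagefolder` loader built into recent versions of the `datasets` library; a minimal sketch, assuming your images live under `<path-to-train-root>`:

```python
from datasets import load_dataset

# Every image below the root is loaded; subfolder names become labels,
# which these pretraining scripts simply ignore.
ds = load_dataset("imagefolder", data_dir="<path-to-train-root>")
print(ds["train"][0])
```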

## MAE

The `run_mae.py` script can be used to pre-train a Vision Transformer as a masked autoencoder (MAE), as proposed in [Masked Autoencoders Are Scalable Vision Learners](https://huggingface.co/papers/2111.06377). The script can be used to train a `ViTMAEForPreTraining` model in the Transformers library, using PyTorch. After self-supervised pre-training, one can load the weights of the encoder directly into a `ViTForImageClassification`. The MAE method allows for learning high-capacity models that generalize well: e.g., a vanilla ViT-Huge model achieves the best accuracy (87.8%) among methods that use only ImageNet-1K data.

The goal for the model is to predict raw pixel values for the masked patches. As the model internally masks patches and learns to reconstruct them, there's no need for any labels. The model uses the mean squared error (MSE) between the reconstructed and original images in the pixel space.
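
As a rough sketch of this objective (the real computation happens inside `ViTMAEForPreTraining`; the patchified tensors below and the per-patch normalization used by `--norm_pix_loss` are illustrative):

```python
import torch

# Toy patchified tensors (illustrative shapes): (batch, num_patches, patch_dim).
target = torch.randn(2, 196, 768)
reconstruction = torch.randn(2, 196, 768)
mask = torch.randint(0, 2, (2, 196)).float()  # 1 = masked patch

# With norm_pix_loss, each target patch is normalized by its own mean/variance.
mean, var = target.mean(-1, keepdim=True), target.var(-1, keepdim=True)
target = (target - mean) / (var + 1e-6) ** 0.5

# MSE per patch, averaged over the masked patches only.
loss = (((reconstruction - target) ** 2).mean(-1) * mask).sum() / mask.sum()
```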

### Using datasets from 🤗 `datasets`

One can use the following command to pre-train a `ViTMAEForPreTraining` model from scratch on the [cifar10](https://huggingface.co/datasets/cifar10) dataset:

```bash
python run_mae.py \
    --dataset_name cifar10 \
    --output_dir ./vit-mae-demo \
    --remove_unused_columns False \
    --label_names pixel_values \
    --mask_ratio 0.75 \
    --norm_pix_loss \
    --do_train \
    --do_eval \
    --base_learning_rate 1.5e-4 \
    --lr_scheduler_type cosine \
    --weight_decay 0.05 \
    --num_train_epochs 800 \
    --warmup_ratio 0.05 \
    --per_device_train_batch_size 8 \
    --per_device_eval_batch_size 8 \
    --logging_strategy steps \
    --logging_steps 10 \
    --eval_strategy epoch \
    --save_strategy epoch \
    --load_best_model_at_end True \
    --save_total_limit 3 \
    --seed 1337
```

Here we set:
- `mask_ratio` to 0.75 (to mask 75% of the patches for each image)
- `norm_pix_loss` to use normalized pixel values as target (the authors reported better representations with this enabled)
- `base_learning_rate` to 1.5e-4. Note that the effective learning rate is computed by the [linear scaling rule](https://huggingface.co/papers/1706.02677): `lr` = `blr` * total training batch size / 256. The total training batch size is computed as `training_args.train_batch_size` * `training_args.gradient_accumulation_steps` * `training_args.world_size` (see the sketch below).
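
A minimal sketch of that computation, mirroring what `run_mae.py` does internally (the batch-size numbers are illustrative):

```python
# Illustrative setup: 8 images/device, no gradient accumulation, 4 processes.
per_device_train_batch_size = 8
gradient_accumulation_steps = 1
world_size = 4

total_train_batch_size = per_device_train_batch_size * gradient_accumulation_steps * world_size
base_learning_rate = 1.5e-4

# Linear scaling rule: lr = blr * total training batch size / 256.
learning_rate = base_learning_rate * total_train_batch_size / 256
print(learning_rate)  # 1.875e-05
```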

This replicates the same hyperparameters as used in the original implementation, as shown in the table below.

<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/mae_pretraining_setting.png"
alt="drawing" width="300"/>

<small> Original hyperparameters. Taken from the <a href="https://huggingface.co/papers/2111.06377">original paper</a>. </small>

Alternatively, one can decide to further pre-train an already pre-trained (or fine-tuned) checkpoint from the [hub](https://huggingface.co/). This can be done by setting the `model_name_or_path` argument to "facebook/vit-mae-base" for example.


### Using your own data

To use your own dataset, the training script expects the following directory structure:

```bash
root/dog/xxx.png
root/dog/xxy.png
root/dog/[...]/xxz.png

root/cat/123.png
root/cat/nsdf3.png
root/cat/[...]/asd932_.png
```

Note that you can put images in dummy subfolders, whose names will be ignored by default (as labels aren't required). You can also just place all images into a single dummy subfolder. Once you've prepared your dataset, you can run the script like this:

```bash
python run_mae.py \
    --model_type vit_mae \
    --dataset_name nateraw/image-folder \
    --train_dir <path-to-train-root> \
    --output_dir ./outputs/ \
    --remove_unused_columns False \
    --label_names pixel_values \
    --do_train \
    --do_eval
```

#### 💡 The above will split the train dir into training and evaluation sets
- To control the split amount, use the `--train_val_split` flag (see the sketch below).
- To provide your own validation split in its own directory, you can pass the `--validation_dir <path-to-val-root>` flag.
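
The split itself relies on the `datasets` library; a minimal sketch of what the script does when no validation split is present (`train_test_split` is the standard `datasets` API):

```python
from datasets import load_dataset

ds = load_dataset("imagefolder", data_dir="<path-to-train-root>")

# Mirrors the script's behavior: carve 15% off train for validation.
split = ds["train"].train_test_split(test_size=0.15)
ds["train"], ds["validation"] = split["train"], split["test"]
```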


## Sharing your model on 🤗 Hub

0. If you haven't already, [sign up](https://huggingface.co/join) for a 🤗 account

1. Make sure you have `git-lfs` installed and git set up.

```bash
$ apt install git-lfs
$ git config --global user.email "you@example.com"
$ git config --global user.name "Your Name"
```

2. Log in with your HuggingFace account credentials using `hf`

```bash
$ hf auth login
# ...follow the prompts
```

3. When running the script, pass the following arguments:

```bash
python run_xxx.py \
    --push_to_hub \
    --push_to_hub_model_id <name-of-your-model> \
    ...
```

transformers/examples/pytorch/image-pretraining/requirements.txt (new file, 3 lines)
@@ -0,0 +1,3 @@
torch>=1.5.0
torchvision>=0.6.0
datasets>=1.8.0

transformers/examples/pytorch/image-pretraining/run_mae.py (new file, 416 lines)
@@ -0,0 +1,416 @@
#!/usr/bin/env python
# Copyright 2022 The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# /// script
# dependencies = [
#     "transformers @ git+https://github.com/huggingface/transformers.git",
#     "torch>=1.5.0",
#     "torchvision>=0.6.0",
#     "datasets>=1.8.0",
# ]
# ///

import logging
import os
import sys
from dataclasses import dataclass, field
from typing import Optional

import torch
from datasets import load_dataset
from torchvision.transforms import Compose, Lambda, Normalize, RandomHorizontalFlip, RandomResizedCrop, ToTensor
from torchvision.transforms.functional import InterpolationMode

import transformers
from transformers import (
    HfArgumentParser,
    Trainer,
    TrainingArguments,
    ViTImageProcessor,
    ViTMAEConfig,
    ViTMAEForPreTraining,
)
from transformers.trainer_utils import get_last_checkpoint
from transformers.utils import check_min_version, send_example_telemetry
from transformers.utils.versions import require_version


""" Pre-training a 🤗 ViT model as an MAE (masked autoencoder), as proposed in https://huggingface.co/papers/2111.06377."""

logger = logging.getLogger(__name__)

# Will error if the minimal version of Transformers is not installed. Remove at your own risk.
check_min_version("4.57.0.dev0")

require_version("datasets>=1.8.0", "To fix: pip install -r examples/pytorch/image-pretraining/requirements.txt")


@dataclass
class DataTrainingArguments:
    """
    Arguments pertaining to what data we are going to input our model for training and eval.
    Using `HfArgumentParser` we can turn this class
    into argparse arguments to be able to specify them on
    the command line.
    """

    dataset_name: Optional[str] = field(
        default="cifar10", metadata={"help": "Name of a dataset from the datasets package"}
    )
    dataset_config_name: Optional[str] = field(
        default=None, metadata={"help": "The configuration name of the dataset to use (via the datasets library)."}
    )
    trust_remote_code: bool = field(
        default=False,
        metadata={
            "help": (
                "Whether to trust the execution of code from datasets/models defined on the Hub."
                " This option should only be set to `True` for repositories you trust and in which you have read the"
                " code, as it will execute code present on the Hub on your local machine."
            )
        },
    )
    image_column_name: Optional[str] = field(
        default=None, metadata={"help": "The column name of the images in the files."}
    )
    train_dir: Optional[str] = field(default=None, metadata={"help": "A folder containing the training data."})
    validation_dir: Optional[str] = field(default=None, metadata={"help": "A folder containing the validation data."})
    train_val_split: Optional[float] = field(
        default=0.15, metadata={"help": "Percent to split off of train for validation."}
    )
    max_train_samples: Optional[int] = field(
        default=None,
        metadata={
            "help": (
                "For debugging purposes or quicker training, truncate the number of training examples to this "
                "value if set."
            )
        },
    )
    max_eval_samples: Optional[int] = field(
        default=None,
        metadata={
            "help": (
                "For debugging purposes or quicker training, truncate the number of evaluation examples to this "
                "value if set."
            )
        },
    )

    def __post_init__(self):
        data_files = {}
        if self.train_dir is not None:
            data_files["train"] = self.train_dir
        if self.validation_dir is not None:
            data_files["val"] = self.validation_dir
        self.data_files = data_files if data_files else None


@dataclass
class ModelArguments:
    """
    Arguments pertaining to which model/config/image processor we are going to pre-train.
    """

    model_name_or_path: str = field(
        default=None,
        metadata={
            "help": (
                "The model checkpoint for weights initialization. Don't set if you want to train a model from scratch."
            )
        },
    )
    config_name: Optional[str] = field(
        default=None, metadata={"help": "Pretrained config name or path if not the same as model_name_or_path"}
    )
    config_overrides: Optional[str] = field(
        default=None,
        metadata={
            "help": (
                "Override some existing default config settings when a model is trained from scratch. Example: "
                "n_embd=10,resid_pdrop=0.2,scale_attn_weights=false,summary_type=cls_index"
            )
        },
    )
    cache_dir: Optional[str] = field(
        default=None, metadata={"help": "Where do you want to store the pretrained models downloaded from s3"}
    )
    model_revision: str = field(
        default="main",
        metadata={"help": "The specific model version to use (can be a branch name, tag name or commit id)."},
    )
    image_processor_name: str = field(default=None, metadata={"help": "Name or path of preprocessor config."})
    token: str = field(
        default=None,
        metadata={
            "help": (
                "The token to use as HTTP bearer authorization for remote files. If not specified, will use the token "
                "generated when running `hf auth login` (stored in `~/.huggingface`)."
            )
        },
    )
    mask_ratio: float = field(
        default=0.75, metadata={"help": "The ratio of the number of masked tokens in the input sequence."}
    )
    norm_pix_loss: bool = field(
        default=True, metadata={"help": "Whether or not to train with normalized pixel values as target."}
    )


@dataclass
class CustomTrainingArguments(TrainingArguments):
    base_learning_rate: float = field(
        default=1e-3, metadata={"help": "Base learning rate: absolute_lr = base_lr * total_batch_size / 256."}
    )


def collate_fn(examples):
    pixel_values = torch.stack([example["pixel_values"] for example in examples])
    return {"pixel_values": pixel_values}
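
# Note: no mask or labels are collated here because ViTMAEForPreTraining
# samples its own random mask internally; passing `--label_names pixel_values`
# (as in the README commands) tells the Trainer to treat the pixel values as
# the self-supervised labels during evaluation.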


def main():
    # See all possible arguments in src/transformers/training_args.py
    # or by passing the --help flag to this script.
    # We now keep distinct sets of args, for a cleaner separation of concerns.

    parser = HfArgumentParser((ModelArguments, DataTrainingArguments, CustomTrainingArguments))
    if len(sys.argv) == 2 and sys.argv[1].endswith(".json"):
        # If we pass only one argument to the script and it's the path to a json file,
        # let's parse it to get our arguments.
        model_args, data_args, training_args = parser.parse_json_file(json_file=os.path.abspath(sys.argv[1]))
    else:
        model_args, data_args, training_args = parser.parse_args_into_dataclasses()

    # Sending telemetry. Tracking the example usage helps us better allocate resources to maintain them. The
    # information sent is the one passed as arguments along with your Python/PyTorch versions.
    send_example_telemetry("run_mae", model_args, data_args)

    # Setup logging
    logging.basicConfig(
        format="%(asctime)s - %(levelname)s - %(name)s - %(message)s",
        datefmt="%m/%d/%Y %H:%M:%S",
        handlers=[logging.StreamHandler(sys.stdout)],
    )

    if training_args.should_log:
        # The default of training_args.log_level is passive, so we set log level at info here to have that default.
        transformers.utils.logging.set_verbosity_info()

    log_level = training_args.get_process_log_level()
    logger.setLevel(log_level)
    transformers.utils.logging.set_verbosity(log_level)
    transformers.utils.logging.enable_default_handler()
    transformers.utils.logging.enable_explicit_format()

    # Log on each process the small summary:
    logger.warning(
        f"Process rank: {training_args.local_rank}, device: {training_args.device}, n_gpu: {training_args.n_gpu}, "
        + f"distributed training: {training_args.parallel_mode.value == 'distributed'}, 16-bits training: {training_args.fp16}"
    )
    logger.info(f"Training/evaluation parameters {training_args}")

    # Detecting last checkpoint.
    last_checkpoint = None
    if os.path.isdir(training_args.output_dir) and training_args.do_train and not training_args.overwrite_output_dir:
        last_checkpoint = get_last_checkpoint(training_args.output_dir)
        if last_checkpoint is None and len(os.listdir(training_args.output_dir)) > 0:
            raise ValueError(
                f"Output directory ({training_args.output_dir}) already exists and is not empty. "
                "Use --overwrite_output_dir to overcome."
            )
        elif last_checkpoint is not None and training_args.resume_from_checkpoint is None:
            logger.info(
                f"Checkpoint detected, resuming training at {last_checkpoint}. To avoid this behavior, change "
                "the `--output_dir` or add `--overwrite_output_dir` to train from scratch."
            )

    # Initialize our dataset.
    ds = load_dataset(
        data_args.dataset_name,
        data_args.dataset_config_name,
        data_files=data_args.data_files,
        cache_dir=model_args.cache_dir,
        token=model_args.token,
        trust_remote_code=data_args.trust_remote_code,
    )

    # If we don't have a validation split, split off a percentage of train as validation.
    data_args.train_val_split = None if "validation" in ds else data_args.train_val_split
    if isinstance(data_args.train_val_split, float) and data_args.train_val_split > 0.0:
        split = ds["train"].train_test_split(data_args.train_val_split)
        ds["train"] = split["train"]
        ds["validation"] = split["test"]

    # Load pretrained model and image processor
    #
    # Distributed training:
    # The .from_pretrained methods guarantee that only one local process can concurrently
    # download model & vocab.
    config_kwargs = {
        "cache_dir": model_args.cache_dir,
        "revision": model_args.model_revision,
        "token": model_args.token,
    }
    if model_args.config_name:
        config = ViTMAEConfig.from_pretrained(model_args.config_name, **config_kwargs)
    elif model_args.model_name_or_path:
        config = ViTMAEConfig.from_pretrained(model_args.model_name_or_path, **config_kwargs)
    else:
        config = ViTMAEConfig()
        logger.warning("You are instantiating a new config instance from scratch.")
        if model_args.config_overrides is not None:
            logger.info(f"Overriding config: {model_args.config_overrides}")
            config.update_from_string(model_args.config_overrides)
            logger.info(f"New config: {config}")

    # adapt config
    config.update(
        {
            "mask_ratio": model_args.mask_ratio,
            "norm_pix_loss": model_args.norm_pix_loss,
        }
    )

    # create image processor
    if model_args.image_processor_name:
        image_processor = ViTImageProcessor.from_pretrained(model_args.image_processor_name, **config_kwargs)
    elif model_args.model_name_or_path:
        image_processor = ViTImageProcessor.from_pretrained(model_args.model_name_or_path, **config_kwargs)
    else:
        image_processor = ViTImageProcessor()

    # create model
    if model_args.model_name_or_path:
        model = ViTMAEForPreTraining.from_pretrained(
            model_args.model_name_or_path,
            from_tf=bool(".ckpt" in model_args.model_name_or_path),
            config=config,
            cache_dir=model_args.cache_dir,
            revision=model_args.model_revision,
            token=model_args.token,
        )
    else:
        logger.info("Training new model from scratch")
        model = ViTMAEForPreTraining(config)

    if training_args.do_train:
        column_names = ds["train"].column_names
    else:
        column_names = ds["validation"].column_names

    if data_args.image_column_name is not None:
        image_column_name = data_args.image_column_name
    elif "image" in column_names:
        image_column_name = "image"
    elif "img" in column_names:
        image_column_name = "img"
    else:
        image_column_name = column_names[0]

    # transformations as done in original MAE paper
    # source: https://github.com/facebookresearch/mae/blob/main/main_pretrain.py
    if "shortest_edge" in image_processor.size:
        size = image_processor.size["shortest_edge"]
    else:
        size = (image_processor.size["height"], image_processor.size["width"])
    transforms = Compose(
        [
            Lambda(lambda img: img.convert("RGB") if img.mode != "RGB" else img),
            RandomResizedCrop(size, scale=(0.2, 1.0), interpolation=InterpolationMode.BICUBIC),
            RandomHorizontalFlip(),
            ToTensor(),
            Normalize(mean=image_processor.image_mean, std=image_processor.image_std),
        ]
    )

    def preprocess_images(examples):
        """Preprocess a batch of images by applying transforms."""

        examples["pixel_values"] = [transforms(image) for image in examples[image_column_name]]
        return examples

    if training_args.do_train:
        if "train" not in ds:
            raise ValueError("--do_train requires a train dataset")
        if data_args.max_train_samples is not None:
            ds["train"] = ds["train"].shuffle(seed=training_args.seed).select(range(data_args.max_train_samples))
        # Set the training transforms
        ds["train"].set_transform(preprocess_images)

    if training_args.do_eval:
        if "validation" not in ds:
            raise ValueError("--do_eval requires a validation dataset")
        if data_args.max_eval_samples is not None:
            ds["validation"] = (
                ds["validation"].shuffle(seed=training_args.seed).select(range(data_args.max_eval_samples))
            )
        # Set the validation transforms
        ds["validation"].set_transform(preprocess_images)

    # Compute absolute learning rate
    total_train_batch_size = (
        training_args.train_batch_size * training_args.gradient_accumulation_steps * training_args.world_size
    )
    if training_args.base_learning_rate is not None:
        training_args.learning_rate = training_args.base_learning_rate * total_train_batch_size / 256

    # Initialize our trainer
    trainer = Trainer(
        model=model,
        args=training_args,
        train_dataset=ds["train"] if training_args.do_train else None,
        eval_dataset=ds["validation"] if training_args.do_eval else None,
        processing_class=image_processor,
        data_collator=collate_fn,
    )

    # Training
    if training_args.do_train:
        checkpoint = None
        if training_args.resume_from_checkpoint is not None:
            checkpoint = training_args.resume_from_checkpoint
        elif last_checkpoint is not None:
            checkpoint = last_checkpoint
        train_result = trainer.train(resume_from_checkpoint=checkpoint)
        trainer.save_model()
        trainer.log_metrics("train", train_result.metrics)
        trainer.save_metrics("train", train_result.metrics)
        trainer.save_state()

    # Evaluation
    if training_args.do_eval:
        metrics = trainer.evaluate()
        trainer.log_metrics("eval", metrics)
        trainer.save_metrics("eval", metrics)

    # Write model card and (optionally) push to hub
    kwargs = {
        "tasks": "masked-auto-encoding",
        "dataset": data_args.dataset_name,
        "tags": ["masked-auto-encoding"],
    }
    if training_args.push_to_hub:
        trainer.push_to_hub(**kwargs)
    else:
        trainer.create_model_card(**kwargs)


def _mp_fn(index):
    # For xla_spawn (TPUs)
    main()


if __name__ == "__main__":
    main()

transformers/examples/pytorch/image-pretraining/run_mim.py (new file, 491 lines)
@@ -0,0 +1,491 @@
#!/usr/bin/env python
# Copyright 2022 The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# /// script
# dependencies = [
#     "transformers @ git+https://github.com/huggingface/transformers.git",
#     "torch>=1.5.0",
#     "torchvision>=0.6.0",
#     "datasets>=1.8.0",
# ]
# ///

import logging
import os
import sys
from dataclasses import dataclass, field
from typing import Optional

import numpy as np
import torch
from datasets import load_dataset
from torchvision.transforms import Compose, Lambda, Normalize, RandomHorizontalFlip, RandomResizedCrop, ToTensor

import transformers
from transformers import (
    CONFIG_MAPPING,
    IMAGE_PROCESSOR_MAPPING,
    MODEL_FOR_MASKED_IMAGE_MODELING_MAPPING,
    AutoConfig,
    AutoImageProcessor,
    AutoModelForMaskedImageModeling,
    HfArgumentParser,
    Trainer,
    TrainingArguments,
)
from transformers.trainer_utils import get_last_checkpoint
from transformers.utils import check_min_version, send_example_telemetry
from transformers.utils.versions import require_version


""" Pre-training a 🤗 Transformers model for simple masked image modeling (SimMIM).
Any model supported by the AutoModelForMaskedImageModeling API can be used.
"""

logger = logging.getLogger(__name__)

# Will error if the minimal version of Transformers is not installed. Remove at your own risk.
check_min_version("4.57.0.dev0")

require_version("datasets>=1.8.0", "To fix: pip install -r examples/pytorch/image-pretraining/requirements.txt")

MODEL_CONFIG_CLASSES = list(MODEL_FOR_MASKED_IMAGE_MODELING_MAPPING.keys())
MODEL_TYPES = tuple(conf.model_type for conf in MODEL_CONFIG_CLASSES)


@dataclass
class DataTrainingArguments:
    """
    Arguments pertaining to what data we are going to input our model for training and eval.
    Using `HfArgumentParser` we can turn this class into argparse arguments to be able to
    specify them on the command line.
    """

    dataset_name: Optional[str] = field(
        default="cifar10", metadata={"help": "Name of a dataset from the datasets package"}
    )
    dataset_config_name: Optional[str] = field(
        default=None, metadata={"help": "The configuration name of the dataset to use (via the datasets library)."}
    )
    image_column_name: Optional[str] = field(
        default=None,
        metadata={"help": "The column name of the images in the files. If not set, will try to use 'image' or 'img'."},
    )
    train_dir: Optional[str] = field(default=None, metadata={"help": "A folder containing the training data."})
    validation_dir: Optional[str] = field(default=None, metadata={"help": "A folder containing the validation data."})
    train_val_split: Optional[float] = field(
        default=0.15, metadata={"help": "Percent to split off of train for validation."}
    )
    mask_patch_size: int = field(default=32, metadata={"help": "The size of the square patches to use for masking."})
    mask_ratio: float = field(
        default=0.6,
        metadata={"help": "Percentage of patches to mask."},
    )
    max_train_samples: Optional[int] = field(
        default=None,
        metadata={
            "help": (
                "For debugging purposes or quicker training, truncate the number of training examples to this "
                "value if set."
            )
        },
    )
    max_eval_samples: Optional[int] = field(
        default=None,
        metadata={
            "help": (
                "For debugging purposes or quicker training, truncate the number of evaluation examples to this "
                "value if set."
            )
        },
    )

    def __post_init__(self):
        data_files = {}
        if self.train_dir is not None:
            data_files["train"] = self.train_dir
        if self.validation_dir is not None:
            data_files["val"] = self.validation_dir
        self.data_files = data_files if data_files else None


@dataclass
class ModelArguments:
    """
    Arguments pertaining to which model/config/image processor we are going to pre-train.
    """

    model_name_or_path: str = field(
        default=None,
        metadata={
            "help": (
                "The model checkpoint for weights initialization. Can be a local path to a pytorch_model.bin or a "
                "checkpoint identifier on the hub. "
                "Don't set if you want to train a model from scratch."
            )
        },
    )
    model_type: Optional[str] = field(
        default=None,
        metadata={"help": "If training from scratch, pass a model type from the list: " + ", ".join(MODEL_TYPES)},
    )
    config_name_or_path: Optional[str] = field(
        default=None, metadata={"help": "Pretrained config name or path if not the same as model_name"}
    )
    config_overrides: Optional[str] = field(
        default=None,
        metadata={
            "help": (
                "Override some existing default config settings when a model is trained from scratch. Example: "
                "n_embd=10,resid_pdrop=0.2,scale_attn_weights=false,summary_type=cls_index"
            )
        },
    )
    cache_dir: Optional[str] = field(
        default=None,
        metadata={"help": "Where do you want to store (cache) the pretrained models/datasets downloaded from the hub"},
    )
    model_revision: str = field(
        default="main",
        metadata={"help": "The specific model version to use (can be a branch name, tag name or commit id)."},
    )
    image_processor_name: str = field(default=None, metadata={"help": "Name or path of preprocessor config."})
    token: str = field(
        default=None,
        metadata={
            "help": (
                "The token to use as HTTP bearer authorization for remote files. If not specified, will use the token "
                "generated when running `hf auth login` (stored in `~/.huggingface`)."
            )
        },
    )
    trust_remote_code: bool = field(
        default=False,
        metadata={
            "help": (
                "Whether to trust the execution of code from datasets/models defined on the Hub."
                " This option should only be set to `True` for repositories you trust and in which you have read the"
                " code, as it will execute code present on the Hub on your local machine."
            )
        },
    )
    image_size: Optional[int] = field(
        default=None,
        metadata={
            "help": (
                "The size (resolution) of each image. If not specified, will use `image_size` of the configuration."
            )
        },
    )
    patch_size: Optional[int] = field(
        default=None,
        metadata={
            "help": (
                "The size (resolution) of each patch. If not specified, will use `patch_size` of the configuration."
            )
        },
    )
    encoder_stride: Optional[int] = field(
        default=None,
        metadata={"help": "Stride to use for the encoder."},
    )


class MaskGenerator:
    """
    A class to generate boolean masks for the pretraining task.

    A mask is a 1D tensor of shape ((input_size // model_patch_size) ** 2,) where the value is either 0 or 1,
    where 1 indicates "masked".
    """

    def __init__(self, input_size=192, mask_patch_size=32, model_patch_size=4, mask_ratio=0.6):
        self.input_size = input_size
        self.mask_patch_size = mask_patch_size
        self.model_patch_size = model_patch_size
        self.mask_ratio = mask_ratio

        if self.input_size % self.mask_patch_size != 0:
            raise ValueError("Input size must be divisible by mask patch size")
        if self.mask_patch_size % self.model_patch_size != 0:
            raise ValueError("Mask patch size must be divisible by model patch size")

        self.rand_size = self.input_size // self.mask_patch_size
        self.scale = self.mask_patch_size // self.model_patch_size

        self.token_count = self.rand_size**2
        self.mask_count = int(np.ceil(self.token_count * self.mask_ratio))

    def __call__(self):
        mask_idx = np.random.permutation(self.token_count)[: self.mask_count]
        mask = np.zeros(self.token_count, dtype=int)
        mask[mask_idx] = 1

        mask = mask.reshape((self.rand_size, self.rand_size))
        mask = mask.repeat(self.scale, axis=0).repeat(self.scale, axis=1)

        return torch.tensor(mask.flatten())
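
# Example with the defaults above: input_size=192, mask_patch_size=32,
# model_patch_size=4 and mask_ratio=0.6 give a 6x6 grid of mask tokens
# (36 total), of which ceil(36 * 0.6) = 22 are masked; each cell is then
# upsampled by scale=8, so the returned flat mask has (192 // 4) ** 2 = 2304
# entries.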


def collate_fn(examples):
    pixel_values = torch.stack([example["pixel_values"] for example in examples])
    mask = torch.stack([example["mask"] for example in examples])
    return {"pixel_values": pixel_values, "bool_masked_pos": mask}
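
# Note: the generated mask is passed to the model as `bool_masked_pos`, the
# same name given to `--label_names` in the README commands, so the Trainer
# treats it as part of the (self-supervised) labels.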


def main():
    # See all possible arguments in src/transformers/training_args.py
    # or by passing the --help flag to this script.
    # We now keep distinct sets of args, for a cleaner separation of concerns.

    parser = HfArgumentParser((ModelArguments, DataTrainingArguments, TrainingArguments))
    if len(sys.argv) == 2 and sys.argv[1].endswith(".json"):
        # If we pass only one argument to the script and it's the path to a json file,
        # let's parse it to get our arguments.
        model_args, data_args, training_args = parser.parse_json_file(json_file=os.path.abspath(sys.argv[1]))
    else:
        model_args, data_args, training_args = parser.parse_args_into_dataclasses()

    # Sending telemetry. Tracking the example usage helps us better allocate resources to maintain them. The
    # information sent is the one passed as arguments along with your Python/PyTorch versions.
    send_example_telemetry("run_mim", model_args, data_args)

    # Setup logging
    logging.basicConfig(
        format="%(asctime)s - %(levelname)s - %(name)s - %(message)s",
        datefmt="%m/%d/%Y %H:%M:%S",
        handlers=[logging.StreamHandler(sys.stdout)],
    )

    if training_args.should_log:
        # The default of training_args.log_level is passive, so we set log level at info here to have that default.
        transformers.utils.logging.set_verbosity_info()

    log_level = training_args.get_process_log_level()
    logger.setLevel(log_level)
    transformers.utils.logging.set_verbosity(log_level)
    transformers.utils.logging.enable_default_handler()
    transformers.utils.logging.enable_explicit_format()

    # Log on each process the small summary:
    logger.warning(
        f"Process rank: {training_args.local_rank}, device: {training_args.device}, n_gpu: {training_args.n_gpu}, "
        + f"distributed training: {training_args.parallel_mode.value == 'distributed'}, 16-bits training: {training_args.fp16}"
    )
    logger.info(f"Training/evaluation parameters {training_args}")

    # Detecting last checkpoint.
    last_checkpoint = None
    if os.path.isdir(training_args.output_dir) and training_args.do_train and not training_args.overwrite_output_dir:
        last_checkpoint = get_last_checkpoint(training_args.output_dir)
        if last_checkpoint is None and len(os.listdir(training_args.output_dir)) > 0:
            raise ValueError(
                f"Output directory ({training_args.output_dir}) already exists and is not empty. "
                "Use --overwrite_output_dir to overcome."
            )
        elif last_checkpoint is not None and training_args.resume_from_checkpoint is None:
            logger.info(
                f"Checkpoint detected, resuming training at {last_checkpoint}. To avoid this behavior, change "
                "the `--output_dir` or add `--overwrite_output_dir` to train from scratch."
            )

    # Initialize our dataset.
    ds = load_dataset(
        data_args.dataset_name,
        data_args.dataset_config_name,
        data_files=data_args.data_files,
        cache_dir=model_args.cache_dir,
        token=model_args.token,
        trust_remote_code=model_args.trust_remote_code,
    )

    # If we don't have a validation split, split off a percentage of train as validation.
    data_args.train_val_split = None if "validation" in ds else data_args.train_val_split
    if isinstance(data_args.train_val_split, float) and data_args.train_val_split > 0.0:
        split = ds["train"].train_test_split(data_args.train_val_split)
        ds["train"] = split["train"]
        ds["validation"] = split["test"]

    # Create config
    # Distributed training:
    # The .from_pretrained methods guarantee that only one local process can concurrently
    # download model & vocab.
    config_kwargs = {
        "cache_dir": model_args.cache_dir,
        "revision": model_args.model_revision,
        "token": model_args.token,
        "trust_remote_code": model_args.trust_remote_code,
    }
    if model_args.config_name_or_path:
        config = AutoConfig.from_pretrained(model_args.config_name_or_path, **config_kwargs)
    elif model_args.model_name_or_path:
        config = AutoConfig.from_pretrained(model_args.model_name_or_path, **config_kwargs)
    else:
        config = CONFIG_MAPPING[model_args.model_type]()
        logger.warning("You are instantiating a new config instance from scratch.")
        if model_args.config_overrides is not None:
            logger.info(f"Overriding config: {model_args.config_overrides}")
            config.update_from_string(model_args.config_overrides)
            logger.info(f"New config: {config}")

    # make sure the decoder_type is "simmim" (only relevant for BEiT)
    if hasattr(config, "decoder_type"):
        config.decoder_type = "simmim"

    # adapt config
    model_args.image_size = model_args.image_size if model_args.image_size is not None else config.image_size
    model_args.patch_size = model_args.patch_size if model_args.patch_size is not None else config.patch_size
    model_args.encoder_stride = (
        model_args.encoder_stride if model_args.encoder_stride is not None else config.encoder_stride
    )

    config.update(
        {
            "image_size": model_args.image_size,
            "patch_size": model_args.patch_size,
            "encoder_stride": model_args.encoder_stride,
        }
    )

    # create image processor
    if model_args.image_processor_name:
        image_processor = AutoImageProcessor.from_pretrained(model_args.image_processor_name, **config_kwargs)
    elif model_args.model_name_or_path:
        image_processor = AutoImageProcessor.from_pretrained(model_args.model_name_or_path, **config_kwargs)
    else:
        IMAGE_PROCESSOR_TYPES = {
            conf.model_type: image_processor_class for conf, image_processor_class in IMAGE_PROCESSOR_MAPPING.items()
        }
        image_processor = IMAGE_PROCESSOR_TYPES[model_args.model_type][-1]()
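        # Note: entries of IMAGE_PROCESSOR_MAPPING may be (slow, fast) class
        # tuples; indexing with [-1] picks the last available class and
        # instantiates it with its default settings.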

    # create model
    if model_args.model_name_or_path:
        model = AutoModelForMaskedImageModeling.from_pretrained(
            model_args.model_name_or_path,
            from_tf=bool(".ckpt" in model_args.model_name_or_path),
            config=config,
            cache_dir=model_args.cache_dir,
            revision=model_args.model_revision,
            token=model_args.token,
            trust_remote_code=model_args.trust_remote_code,
        )
    else:
        logger.info("Training new model from scratch")
        model = AutoModelForMaskedImageModeling.from_config(config, trust_remote_code=model_args.trust_remote_code)

    if training_args.do_train:
        column_names = ds["train"].column_names
    else:
        column_names = ds["validation"].column_names

    if data_args.image_column_name is not None:
        image_column_name = data_args.image_column_name
    elif "image" in column_names:
        image_column_name = "image"
    elif "img" in column_names:
        image_column_name = "img"
    else:
        image_column_name = column_names[0]

    # transformations as done in original SimMIM paper
    # source: https://github.com/microsoft/SimMIM/blob/main/data/data_simmim.py
    transforms = Compose(
        [
            Lambda(lambda img: img.convert("RGB") if img.mode != "RGB" else img),
            RandomResizedCrop(model_args.image_size, scale=(0.67, 1.0), ratio=(3.0 / 4.0, 4.0 / 3.0)),
            RandomHorizontalFlip(),
            ToTensor(),
            Normalize(mean=image_processor.image_mean, std=image_processor.image_std),
        ]
    )

    # create mask generator
    mask_generator = MaskGenerator(
        input_size=model_args.image_size,
        mask_patch_size=data_args.mask_patch_size,
        model_patch_size=model_args.patch_size,
        mask_ratio=data_args.mask_ratio,
    )

    def preprocess_images(examples):
        """Preprocess a batch of images by applying transforms + creating a corresponding mask, indicating
        which patches to mask."""

        examples["pixel_values"] = [transforms(image) for image in examples[image_column_name]]
        examples["mask"] = [mask_generator() for i in range(len(examples[image_column_name]))]

        return examples

    if training_args.do_train:
        if "train" not in ds:
            raise ValueError("--do_train requires a train dataset")
        if data_args.max_train_samples is not None:
            ds["train"] = ds["train"].shuffle(seed=training_args.seed).select(range(data_args.max_train_samples))
        # Set the training transforms
        ds["train"].set_transform(preprocess_images)

    if training_args.do_eval:
        if "validation" not in ds:
            raise ValueError("--do_eval requires a validation dataset")
        if data_args.max_eval_samples is not None:
            ds["validation"] = (
                ds["validation"].shuffle(seed=training_args.seed).select(range(data_args.max_eval_samples))
            )
        # Set the validation transforms
        ds["validation"].set_transform(preprocess_images)

    # Initialize our trainer
    trainer = Trainer(
        model=model,
        args=training_args,
        train_dataset=ds["train"] if training_args.do_train else None,
        eval_dataset=ds["validation"] if training_args.do_eval else None,
        processing_class=image_processor,
        data_collator=collate_fn,
    )

    # Training
    if training_args.do_train:
        checkpoint = None
        if training_args.resume_from_checkpoint is not None:
            checkpoint = training_args.resume_from_checkpoint
        elif last_checkpoint is not None:
            checkpoint = last_checkpoint
        train_result = trainer.train(resume_from_checkpoint=checkpoint)
        trainer.save_model()
        trainer.log_metrics("train", train_result.metrics)
        trainer.save_metrics("train", train_result.metrics)
        trainer.save_state()

    # Evaluation
    if training_args.do_eval:
        metrics = trainer.evaluate()
        trainer.log_metrics("eval", metrics)
        trainer.save_metrics("eval", metrics)

    # Write model card and (optionally) push to hub
    kwargs = {
        "finetuned_from": model_args.model_name_or_path,
        "tasks": "masked-image-modeling",
        "dataset": data_args.dataset_name,
        "tags": ["masked-image-modeling"],
    }
    if training_args.push_to_hub:
        trainer.push_to_hub(**kwargs)
    else:
        trainer.create_model_card(**kwargs)


if __name__ == "__main__":
    main()

(new file, 812 lines)
@@ -0,0 +1,812 @@
|
||||
#!/usr/bin/env python
|
||||
# Copyright 2023 The HuggingFace Inc. team. All rights reserved.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
|
||||
# /// script
|
||||
# dependencies = [
|
||||
# "transformers @ git+https://github.com/huggingface/transformers.git",
|
||||
# "torch>=1.5.0",
|
||||
# "torchvision>=0.6.0",
|
||||
# "datasets>=1.8.0",
|
||||
# ]
|
||||
# ///
|
||||
|
||||
import argparse
|
||||
import logging
|
||||
import math
|
||||
import os
|
||||
from pathlib import Path
|
||||
|
||||
import datasets
|
||||
import numpy as np
|
||||
import torch
|
||||
from accelerate import Accelerator, DistributedType
|
||||
from accelerate.utils import set_seed
|
||||
from datasets import load_dataset
|
||||
from huggingface_hub import HfApi
|
||||
from torch.utils.data import DataLoader
|
||||
from torchvision.transforms import Compose, Lambda, Normalize, RandomHorizontalFlip, RandomResizedCrop, ToTensor
|
||||
from tqdm.auto import tqdm
|
||||
|
||||
import transformers
|
||||
from transformers import (
|
||||
CONFIG_MAPPING,
|
||||
IMAGE_PROCESSOR_MAPPING,
|
||||
MODEL_FOR_MASKED_IMAGE_MODELING_MAPPING,
|
||||
AutoConfig,
|
||||
AutoImageProcessor,
|
||||
AutoModelForMaskedImageModeling,
|
||||
SchedulerType,
|
||||
get_scheduler,
|
||||
)
|
||||
from transformers.utils import check_min_version, send_example_telemetry
|
||||
from transformers.utils.versions import require_version
|
||||
|
||||
|
||||
""" Pre-training a 🤗 Transformers model for simple masked image modeling (SimMIM)
|
||||
without using HuggingFace Trainer.
|
||||
Any model supported by the AutoModelForMaskedImageModeling API can be used.
|
||||
"""
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
# Will error if the minimal version of Transformers is not installed. Remove at your own risks.
|
||||
check_min_version("4.57.0.dev0")
|
||||
|
||||
require_version("datasets>=1.8.0", "To fix: pip install -r examples/pytorch/image-pretraining/requirements.txt")
|
||||
|
||||
MODEL_CONFIG_CLASSES = list(MODEL_FOR_MASKED_IMAGE_MODELING_MAPPING.keys())
|
||||
MODEL_TYPES = tuple(conf.model_type for conf in MODEL_CONFIG_CLASSES)
|
||||
|
||||
|
||||
def parse_args():
|
||||
parser = argparse.ArgumentParser(
|
||||
description="Finetune a transformers model on a simple Masked Image Modeling task"
|
||||
)
|
||||
parser.add_argument(
|
||||
"--dataset_name",
|
||||
type=str,
|
||||
default="cifar10",
|
||||
help="Name of a dataset from the datasets package",
|
||||
)
|
||||
parser.add_argument(
|
||||
"--dataset_config_name",
|
||||
type=str,
|
||||
default=None,
|
||||
help="The configuration name of the dataset to use (via the datasets library).",
|
||||
)
|
||||
parser.add_argument(
|
||||
"--image_column_name",
|
||||
type=str,
|
||||
default=None,
|
||||
help="The column name of the images in the files. If not set, will try to use 'image' or 'img'.",
|
||||
)
|
||||
parser.add_argument(
|
||||
"--train_dir",
|
||||
type=str,
|
||||
default=None,
|
||||
help="A folder containing the training data.",
|
||||
)
|
||||
parser.add_argument(
|
||||
"--validation_dir",
|
||||
type=None,
|
||||
default=None,
|
||||
help="A folder containing the validation data.",
|
||||
)
|
||||
parser.add_argument(
|
||||
"--train_val_split",
|
||||
type=float,
|
||||
default=0.15,
|
||||
help="Percent to split off of train for validation.",
|
||||
)
|
||||
parser.add_argument(
|
||||
"--mask_patch_size",
|
||||
type=int,
|
||||
default=32,
|
||||
help="The size of the square patches to use for masking.",
|
||||
)
|
||||
parser.add_argument(
|
||||
"--mask_ratio",
|
||||
type=float,
|
||||
default=0.6,
|
||||
help="Percentage of patches to mask.",
|
||||
)
|
||||
parser.add_argument(
|
||||
"--max_train_samples",
|
||||
type=int,
|
||||
default=None,
|
||||
help=(
|
||||
"For debugging purposes or quicker training, truncate the number of training examples to this "
|
||||
"value if set."
|
||||
),
|
||||
)
|
||||
parser.add_argument(
|
||||
"--max_eval_samples",
|
||||
type=int,
|
||||
default=None,
|
||||
help=(
|
||||
"For debugging purposes or quicker training, truncate the number of evaluation examples to this "
|
||||
"value if set."
|
||||
),
|
||||
)
|
||||
parser.add_argument(
|
||||
"--model_name_or_path",
|
||||
type=str,
|
||||
default=None,
|
||||
help=(
|
||||
"The model checkpoint for weights initialization. Can be a local path to a pytorch_model.bin or a "
|
||||
"checkpoint identifier on the hub. "
|
||||
"Don't set if you want to train a model from scratch."
|
||||
),
|
||||
)
|
||||
parser.add_argument(
|
||||
"--model_type",
|
||||
type=str,
|
||||
default=None,
|
||||
help="If training from scratch, pass a model type from the list: " + ", ".join(MODEL_TYPES),
|
||||
)
|
||||
parser.add_argument(
|
||||
"--config_name_or_path",
|
||||
type=str,
|
||||
default=None,
|
||||
help="Pretrained config name or path if not the same as model_name",
|
||||
)
|
||||
parser.add_argument(
|
||||
"--config_overrides",
|
||||
type=str,
|
||||
default=None,
|
||||
help=(
|
||||
"Override some existing default config settings when a model is trained from scratch. Example: "
|
||||
"n_embd=10,resid_pdrop=0.2,scale_attn_weights=false,summary_type=cls_index"
|
||||
),
|
||||
)
|
||||
parser.add_argument(
|
||||
"--cache_dir",
|
||||
type=str,
|
||||
default=None,
|
||||
help="Where do you want to store (cache) the pretrained models/datasets downloaded from the hub",
|
||||
)
|
||||
parser.add_argument(
|
||||
"--model_revision",
|
||||
type=str,
|
||||
default="main",
|
||||
help="The specific model version to use (can be a branch name, tag name or commit id).",
|
||||
)
|
||||
parser.add_argument(
|
||||
"--gradient_accumulation_steps",
|
||||
type=int,
|
||||
default=1,
|
||||
help="Number of updates steps to accumulate before performing a backward/update pass.",
|
||||
)
|
||||
    parser.add_argument(
        "--image_processor_name",
        type=str,
        default=None,
        help="Name or path of preprocessor config.",
    )
    parser.add_argument(
        "--token",
        type=str,
        default=None,
        help=(
            "The token to use as HTTP bearer authorization for remote files. If not specified, will use the token "
            "generated when running `hf auth login` (stored in `~/.huggingface`)."
        ),
    )
    parser.add_argument(
        "--trust_remote_code",
        action="store_true",
        help=(
            "Whether to trust the execution of code from datasets/models defined on the Hub."
            " This option should only be set to `True` for repositories you trust and in which you have read the"
            " code, as it will execute code present on the Hub on your local machine."
        ),
    )
    parser.add_argument(
        "--image_size",
        type=int,
        default=None,
        help="The size (resolution) of each image. If not specified, will use `image_size` of the configuration.",
    )
    parser.add_argument(
        "--patch_size",
        type=int,
        default=None,
        help="The size (resolution) of each patch. If not specified, will use `patch_size` of the configuration.",
    )
    parser.add_argument(
        "--encoder_stride",
        type=int,
        default=None,
        help="Stride to use for the encoder.",
    )
    parser.add_argument(
        "--push_to_hub",
        action="store_true",
        help="Whether or not to push the model to the Hub.",
    )
    parser.add_argument(
        "--with_tracking",
        action="store_true",
        help="Whether to enable experiment trackers for logging.",
    )
    parser.add_argument(
        "--report_to",
        type=str,
        default="all",
        help=(
            'The integration to report the results and logs to. Supported platforms are `"tensorboard"`,'
            ' `"wandb"`, `"comet_ml"` and `"clearml"`. Use `"all"` (default) to report to all integrations. '
            "Only applicable when `--with_tracking` is passed."
        ),
    )
    parser.add_argument(
        "--seed",
        type=int,
        default=None,
        help="A seed for reproducible training.",
    )
    parser.add_argument(
        "--per_device_train_batch_size",
        type=int,
        default=8,
        help="Batch size (per device) for the training dataloader.",
    )
    parser.add_argument(
        "--learning_rate",
        type=float,
        default=5e-5,
        help="The initial learning rate for the [`AdamW`] optimizer.",
    )
    parser.add_argument(
        "--weight_decay",
        type=float,
        default=0.0,
        help="Weight decay to use.",
    )
    parser.add_argument(
        "--num_train_epochs",
        type=float,
        default=3.0,
        help=(
            "Total number of training epochs to perform (if not an integer, the decimal part determines the "
            "fraction of the last epoch to perform before stopping)."
        ),
    )
    parser.add_argument(
        "--max_train_steps",
        type=int,
        default=None,
        help="Total number of training steps to perform. If provided, overrides num_train_epochs.",
    )
    parser.add_argument(
        "--lr_scheduler_type",
        type=SchedulerType,
        default="linear",
        help="The scheduler type to use.",
        choices=["linear", "cosine", "cosine_with_restarts", "polynomial", "constant", "constant_with_warmup"],
    )
    parser.add_argument(
        "--num_warmup_steps",
        type=int,
        default=0,
        help="Number of steps for the warmup in the lr scheduler.",
    )
    parser.add_argument(
        "--checkpointing_steps",
        type=str,
        default=None,
        help="Whether the various states should be saved at the end of every n steps, or 'epoch' for each epoch.",
    )
    parser.add_argument(
        "--resume_from_checkpoint",
        type=str,
        default=None,
        help="If the training should continue from a checkpoint folder.",
    )
    parser.add_argument(
        "--per_device_eval_batch_size",
        type=int,
        default=8,
        help="Batch size (per device) for the evaluation dataloader.",
    )
    parser.add_argument(
        "--output_dir",
        type=str,
        default=None,
        help="Where to store the final model.",
    )
    args = parser.parse_args()

    # Sanity checks
    data_files = {}
    if args.train_dir is not None:
        data_files["train"] = args.train_dir
    if args.validation_dir is not None:
        data_files["val"] = args.validation_dir
    args.data_files = data_files if data_files else None

    if args.push_to_hub:
        assert args.output_dir is not None, "Need an `output_dir` to create a repo when `--push_to_hub` is passed."

    return args


class MaskGenerator:
    """
    A class to generate boolean masks for the pretraining task.

    A mask is a 1D tensor of shape ((input_size // model_patch_size) ** 2,) where the value is either 0 or 1,
    where 1 indicates "masked".
    """

    def __init__(self, input_size=192, mask_patch_size=32, model_patch_size=4, mask_ratio=0.6):
        self.input_size = input_size
        self.mask_patch_size = mask_patch_size
        self.model_patch_size = model_patch_size
        self.mask_ratio = mask_ratio

        if self.input_size % self.mask_patch_size != 0:
            raise ValueError("Input size must be divisible by mask patch size")
        if self.mask_patch_size % self.model_patch_size != 0:
            raise ValueError("Mask patch size must be divisible by model patch size")

        self.rand_size = self.input_size // self.mask_patch_size
        self.scale = self.mask_patch_size // self.model_patch_size

        self.token_count = self.rand_size**2
        self.mask_count = int(np.ceil(self.token_count * self.mask_ratio))
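
    # Worked example (illustrative, with the defaults above): input_size=192, mask_patch_size=32,
    # model_patch_size=4 and mask_ratio=0.6 give rand_size = 192 // 32 = 6, scale = 32 // 4 = 8,
    # token_count = 36 and mask_count = ceil(36 * 0.6) = 22. Upscaling the random 6x6 mask by 8 in
    # each direction yields a flattened mask of length (6 * 8)**2 = 2304 = (192 // 4)**2, i.e. one
    # entry per model patch.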

    def __call__(self):
        mask_idx = np.random.permutation(self.token_count)[: self.mask_count]
        mask = np.zeros(self.token_count, dtype=int)
        mask[mask_idx] = 1

        mask = mask.reshape((self.rand_size, self.rand_size))
        mask = mask.repeat(self.scale, axis=0).repeat(self.scale, axis=1)

        return torch.tensor(mask.flatten())


def collate_fn(examples):
    pixel_values = torch.stack([example["pixel_values"] for example in examples])
    mask = torch.stack([example["mask"] for example in examples])
    return {"pixel_values": pixel_values, "bool_masked_pos": mask}
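

# Illustrative shapes (a sketch, assuming the defaults used elsewhere in this script): for a batch of
# B examples with 192x192 RGB images and model_patch_size=4, `pixel_values` is (B, 3, 192, 192) and
# `bool_masked_pos` is (B, 2304), i.e. one boolean per model patch, under the keyword that
# `AutoModelForMaskedImageModeling` expects.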


def main():
    args = parse_args()

    # Sending telemetry. Tracking the example usage helps us better allocate resources to maintain them. The
    # information sent is the one passed as arguments along with your Python/PyTorch versions.
    send_example_telemetry("run_mim_no_trainer", args)

    # Initialize the accelerator. We will let the accelerator handle device placement for us in this example.
    # If we're using tracking, we also need to initialize it here and it will by default pick up all supported
    # trackers in the environment.
    accelerator_log_kwargs = {}

    if args.with_tracking:
        accelerator_log_kwargs["log_with"] = args.report_to
        accelerator_log_kwargs["project_dir"] = args.output_dir

    accelerator = Accelerator(
        gradient_accumulation_steps=args.gradient_accumulation_steps,
        **accelerator_log_kwargs,
    )

    # Make one log on every process with the configuration for debugging.
    logging.basicConfig(
        format="%(asctime)s - %(levelname)s - %(name)s - %(message)s",
        datefmt="%m/%d/%Y %H:%M:%S",
        level=logging.INFO,
    )
    logger.info(accelerator.state)
    if accelerator.is_local_main_process:
        datasets.utils.logging.set_verbosity_warning()
        transformers.utils.logging.set_verbosity_info()
    else:
        datasets.utils.logging.set_verbosity_error()
        transformers.utils.logging.set_verbosity_error()

    # If passed along, set the training seed now.
    if args.seed is not None:
        set_seed(args.seed)

    # Handle the repository creation
    if accelerator.is_main_process:
        if args.push_to_hub:
            # Retrieve or infer repo_name
            repo_name = args.hub_model_id
            if repo_name is None:
                repo_name = Path(args.output_dir).absolute().name
            # Create repo and retrieve repo_id
            api = HfApi()
            repo_id = api.create_repo(repo_name, exist_ok=True, token=args.hub_token).repo_id

            with open(os.path.join(args.output_dir, ".gitignore"), "w+") as gitignore:
                if "step_*" not in gitignore:
                    gitignore.write("step_*\n")
                if "epoch_*" not in gitignore:
                    gitignore.write("epoch_*\n")

        elif args.output_dir is not None:
            os.makedirs(args.output_dir, exist_ok=True)
    accelerator.wait_for_everyone()

    # Initialize our dataset.
    ds = load_dataset(
        args.dataset_name,
        args.dataset_config_name,
        data_files=args.data_files,
        cache_dir=args.cache_dir,
        token=args.token,
        trust_remote_code=args.trust_remote_code,
    )

    # If we don't have a validation split, split off a percentage of train as validation.
    args.train_val_split = None if "validation" in ds else args.train_val_split
    if isinstance(args.train_val_split, float) and args.train_val_split > 0.0:
        split = ds["train"].train_test_split(args.train_val_split)
        ds["train"] = split["train"]
        ds["validation"] = split["test"]

    # Create config
    # Distributed training:
    # The .from_pretrained methods guarantee that only one local process can concurrently
    # download model & vocab.
    config_kwargs = {
        "cache_dir": args.cache_dir,
        "revision": args.model_revision,
        "token": args.token,
        "trust_remote_code": args.trust_remote_code,
    }
    if args.config_name_or_path:
        config = AutoConfig.from_pretrained(args.config_name_or_path, **config_kwargs)
    elif args.model_name_or_path:
        config = AutoConfig.from_pretrained(args.model_name_or_path, **config_kwargs)
    else:
        config = CONFIG_MAPPING[args.model_type]()
        logger.warning("You are instantiating a new config instance from scratch.")
        if args.config_overrides is not None:
            logger.info(f"Overriding config: {args.config_overrides}")
            config.update_from_string(args.config_overrides)
            logger.info(f"New config: {config}")

    # make sure the decoder_type is "simmim" (only relevant for BEiT)
    if hasattr(config, "decoder_type"):
        config.decoder_type = "simmim"

    # adapt config
    args.image_size = args.image_size if args.image_size is not None else config.image_size
    args.patch_size = args.patch_size if args.patch_size is not None else config.patch_size
    args.encoder_stride = args.encoder_stride if args.encoder_stride is not None else config.encoder_stride

    config.update(
        {
            "image_size": args.image_size,
            "patch_size": args.patch_size,
            "encoder_stride": args.encoder_stride,
        }
    )
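
    # Illustrative (an assumption about stock configs, not taken from this script): for
    # `google/vit-base-patch16-224-in21k` the relevant config defaults are image_size=224, patch_size=16
    # and encoder_stride=16, so passing e.g. `--image_size 192` overrides only the resolution above.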

    # create image processor
    if args.image_processor_name:
        image_processor = AutoImageProcessor.from_pretrained(args.image_processor_name, **config_kwargs)
    elif args.model_name_or_path:
        image_processor = AutoImageProcessor.from_pretrained(args.model_name_or_path, **config_kwargs)
    else:
        IMAGE_PROCESSOR_TYPES = {
            conf.model_type: image_processor_class for conf, image_processor_class in IMAGE_PROCESSOR_MAPPING.items()
        }
        image_processor = IMAGE_PROCESSOR_TYPES[args.model_type]()

    # create model
    if args.model_name_or_path:
        model = AutoModelForMaskedImageModeling.from_pretrained(
            args.model_name_or_path,
            from_tf=bool(".ckpt" in args.model_name_or_path),
            config=config,
            cache_dir=args.cache_dir,
            revision=args.model_revision,
            token=args.token,
            trust_remote_code=args.trust_remote_code,
        )
    else:
        logger.info("Training new model from scratch")
        model = AutoModelForMaskedImageModeling.from_config(
            config,
            token=args.token,
            trust_remote_code=args.trust_remote_code,
        )

    column_names = ds["train"].column_names

    if args.image_column_name is not None:
        image_column_name = args.image_column_name
    elif "image" in column_names:
        image_column_name = "image"
    elif "img" in column_names:
        image_column_name = "img"
    else:
        image_column_name = column_names[0]

    # transformations as done in original SimMIM paper
    # source: https://github.com/microsoft/SimMIM/blob/main/data/data_simmim.py
    transforms = Compose(
        [
            Lambda(lambda img: img.convert("RGB")),
            RandomResizedCrop(args.image_size, scale=(0.67, 1.0), ratio=(3.0 / 4.0, 4.0 / 3.0)),
            RandomHorizontalFlip(),
            ToTensor(),
            Normalize(mean=image_processor.image_mean, std=image_processor.image_std),
        ]
    )

    # create mask generator
    mask_generator = MaskGenerator(
        input_size=args.image_size,
        mask_patch_size=args.mask_patch_size,
        model_patch_size=args.patch_size,
        mask_ratio=args.mask_ratio,
    )

    def preprocess_images(examples):
        """Preprocess a batch of images by applying transforms + creating a corresponding mask, indicating
        which patches to mask."""

        examples["pixel_values"] = [transforms(image) for image in examples[image_column_name]]
        examples["mask"] = [mask_generator() for _ in range(len(examples[image_column_name]))]

        return examples
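
    # Note: `set_transform` below applies `preprocess_images` on the fly, each time an example is
    # accessed, so augmentations are re-drawn and a fresh random mask is sampled every epoch instead of
    # being fixed once up front.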

    if args.max_train_samples is not None:
        ds["train"] = ds["train"].shuffle(seed=args.seed).select(range(args.max_train_samples))
    # Set the training transforms
    ds["train"].set_transform(preprocess_images)

    if args.max_eval_samples is not None:
        ds["validation"] = ds["validation"].shuffle(seed=args.seed).select(range(args.max_eval_samples))
    # Set the validation transforms
    ds["validation"].set_transform(preprocess_images)

    # DataLoaders creation:
    train_dataloader = DataLoader(
        ds["train"],
        shuffle=True,
        collate_fn=collate_fn,
        batch_size=args.per_device_train_batch_size,
    )
    eval_dataloader = DataLoader(
        ds["validation"],
        collate_fn=collate_fn,
        batch_size=args.per_device_eval_batch_size,
    )

    # Optimizer
    # Split weights in two groups, one with weight decay and the other not.
    no_decay = ["bias", "LayerNorm.weight"]
    optimizer_grouped_parameters = [
        {
            "params": [p for n, p in model.named_parameters() if not any(nd in n for nd in no_decay)],
            "weight_decay": args.weight_decay,
        },
        {
            "params": [p for n, p in model.named_parameters() if any(nd in n for nd in no_decay)],
            "weight_decay": 0.0,
        },
    ]
    optimizer = torch.optim.AdamW(optimizer_grouped_parameters, lr=args.learning_rate)

    # Note -> the training dataloader needs to be prepared before we grab its length below (because its
    # length will be shorter in a multi-process setup)

    # Scheduler and math around the number of training steps.
    overrode_max_train_steps = False
    num_update_steps_per_epoch = math.ceil(len(train_dataloader) / args.gradient_accumulation_steps)
    if args.max_train_steps is None:
        args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch
        overrode_max_train_steps = True
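
    # Worked example (illustrative numbers): with 50,000 training images, per_device_train_batch_size=8
    # on a single process and gradient_accumulation_steps=1, the dataloader has 6,250 batches, so
    # num_update_steps_per_epoch = 6,250 and the default 3 epochs give max_train_steps = 18,750.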

    lr_scheduler = get_scheduler(
        name=args.lr_scheduler_type,
        optimizer=optimizer,
        num_warmup_steps=args.num_warmup_steps * accelerator.num_processes,
        num_training_steps=args.max_train_steps
        if overrode_max_train_steps
        else args.max_train_steps * accelerator.num_processes,
    )

    # Prepare everything with our `accelerator`.
    model, optimizer, train_dataloader, eval_dataloader, lr_scheduler = accelerator.prepare(
        model,
        optimizer,
        train_dataloader,
        eval_dataloader,
        lr_scheduler,
    )

    # On TPU, the tie weights in our model have been disconnected, so we need to restore the ties.
    if accelerator.distributed_type == DistributedType.TPU:
        model.tie_weights()

    # We need to recalculate our total training steps as the size of the training dataloader may have changed.
    num_update_steps_per_epoch = math.ceil(len(train_dataloader) / args.gradient_accumulation_steps)
    if overrode_max_train_steps:
        args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch
    # Afterwards we recalculate our number of training epochs
    args.num_train_epochs = math.ceil(args.max_train_steps / num_update_steps_per_epoch)

    # Figure out how many steps we should save the Accelerator states
    checkpointing_steps = args.checkpointing_steps
    if checkpointing_steps is not None and checkpointing_steps.isdigit():
        checkpointing_steps = int(checkpointing_steps)
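
    # Usage note (illustrative): `--checkpointing_steps 500` saves the accelerator state every 500
    # optimization steps into `step_500`, `step_1000`, ... under `output_dir`, while
    # `--checkpointing_steps epoch` saves once per epoch into `epoch_0`, `epoch_1`, ...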

    # We need to initialize the trackers we use, and also store our configuration.
    # The trackers initialize automatically on the main process.
    if args.with_tracking:
        experiment_config = vars(args)
        # TensorBoard cannot log Enums, need the raw value
        experiment_config["lr_scheduler_type"] = experiment_config["lr_scheduler_type"].value
        accelerator.init_trackers("mim_no_trainer", experiment_config)

    # Train!
    total_batch_size = args.per_device_train_batch_size * accelerator.num_processes * args.gradient_accumulation_steps
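
    # e.g. (illustrative) per_device_train_batch_size=8 on 4 processes with gradient_accumulation_steps=2
    # gives an effective total batch size of 8 * 4 * 2 = 64 samples per optimization step.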
logger.info("***** Running training *****")
|
||||
logger.info(f" Num examples = {len(ds['train'])}")
|
||||
logger.info(f" Num Epochs = {args.num_train_epochs}")
|
||||
logger.info(f" Instantaneous batch size per device = {args.per_device_train_batch_size}")
|
||||
logger.info(f" Total train batch size (w. parallel, distributed & accumulation) = {total_batch_size}")
|
||||
logger.info(f" Gradient Accumulation steps = {args.gradient_accumulation_steps}")
|
||||
logger.info(f" Total optimization steps = {args.max_train_steps}")
|
||||
# Only show the progress bar once on each machine.
|
||||
    progress_bar = tqdm(range(int(args.max_train_steps)), disable=not accelerator.is_local_main_process)
    completed_steps = 0
    starting_epoch = 0

    # Potentially load in the weights and states from a previous save
    if args.resume_from_checkpoint:
        if args.resume_from_checkpoint is not None and args.resume_from_checkpoint != "":
            checkpoint_path = args.resume_from_checkpoint
            path = os.path.basename(args.resume_from_checkpoint)
        else:
            # Get the most recent checkpoint
            dirs = [f.name for f in os.scandir(os.getcwd()) if f.is_dir()]
            dirs.sort(key=os.path.getctime)
            path = dirs[-1]  # Sorts folders by date modified, most recent checkpoint is the last
            checkpoint_path = path
            path = os.path.basename(checkpoint_path)

        accelerator.print(f"Resumed from checkpoint: {checkpoint_path}")
        accelerator.load_state(checkpoint_path)
        # Extract `epoch_{i}` or `step_{i}`
        training_difference = os.path.splitext(path)[0]

        if "epoch" in training_difference:
            starting_epoch = int(training_difference.replace("epoch_", "")) + 1
            resume_step = None
            completed_steps = starting_epoch * num_update_steps_per_epoch
        else:
            # need to multiply `gradient_accumulation_steps` to reflect real steps
            resume_step = int(training_difference.replace("step_", "")) * args.gradient_accumulation_steps
            starting_epoch = resume_step // len(train_dataloader)
            completed_steps = resume_step // args.gradient_accumulation_steps
            resume_step -= starting_epoch * len(train_dataloader)
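
            # Worked example (illustrative): resuming from `step_500` with gradient_accumulation_steps=2
            # gives resume_step = 1000 dataloader batches; with 400 batches per epoch this yields
            # starting_epoch = 2, completed_steps = 500, and resume_step = 200 batches left to skip in epoch 2.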

        # update the progress_bar if load from checkpoint
        progress_bar.update(completed_steps)

    for epoch in range(starting_epoch, args.num_train_epochs):
        model.train()
        if args.with_tracking:
            total_loss = 0
        if args.resume_from_checkpoint and epoch == starting_epoch and resume_step is not None:
            # We skip the first `n` batches in the dataloader when resuming from a checkpoint
            active_dataloader = accelerator.skip_first_batches(train_dataloader, resume_step)
        else:
            active_dataloader = train_dataloader
        for step, batch in enumerate(active_dataloader):
            with accelerator.accumulate(model):
                outputs = model(**batch)
                loss = outputs.loss
                # We keep track of the loss at each epoch
                if args.with_tracking:
                    total_loss += loss.detach().float()
                accelerator.backward(loss)
                optimizer.step()
                lr_scheduler.step()
                optimizer.zero_grad()

            # Checks if the accelerator has performed an optimization step behind the scenes
            if accelerator.sync_gradients:
                progress_bar.update(1)
                completed_steps += 1

            if isinstance(checkpointing_steps, int):
                if completed_steps % checkpointing_steps == 0 and accelerator.sync_gradients:
                    output_dir = f"step_{completed_steps}"
                    if args.output_dir is not None:
                        output_dir = os.path.join(args.output_dir, output_dir)
                    accelerator.save_state(output_dir)

            if completed_steps >= args.max_train_steps:
                break

        model.eval()
        losses = []
        for step, batch in enumerate(eval_dataloader):
            with torch.no_grad():
                outputs = model(**batch)

            loss = outputs.loss
            losses.append(accelerator.gather_for_metrics(loss.repeat(args.per_device_eval_batch_size)))
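
        # Note (descriptive): `loss.repeat(...)` expands the scalar batch loss to one value per sample so
        # that `gather_for_metrics` can gather it across processes and drop the entries duplicated to pad
        # the last uneven batch, making the mean below an (approximately) per-sample average.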

        losses = torch.cat(losses)
        eval_loss = torch.mean(losses)

        logger.info(f"epoch {epoch}: eval_loss: {eval_loss}")

        if args.with_tracking:
            accelerator.log(
                {
                    "eval_loss": eval_loss,
                    "train_loss": total_loss.item() / len(train_dataloader),
                    "epoch": epoch,
                    "step": completed_steps,
                },
                step=completed_steps,
            )

        if args.push_to_hub and epoch < args.num_train_epochs - 1:
            accelerator.wait_for_everyone()
            unwrapped_model = accelerator.unwrap_model(model)
            unwrapped_model.save_pretrained(
                args.output_dir, is_main_process=accelerator.is_main_process, save_function=accelerator.save
            )
            if accelerator.is_main_process:
                image_processor.save_pretrained(args.output_dir)
                api.upload_folder(
                    commit_message=f"Training in progress epoch {epoch}",
                    folder_path=args.output_dir,
                    repo_id=repo_id,
                    repo_type="model",
                    token=args.hub_token,
                )
if args.checkpointing_steps == "epoch":
|
||||
output_dir = f"epoch_{epoch}"
|
||||
if args.output_dir is not None:
|
||||
output_dir = os.path.join(args.output_dir, output_dir)
|
||||
accelerator.save_state(output_dir)
|
||||
|
||||

    if args.output_dir is not None:
        accelerator.wait_for_everyone()
        unwrapped_model = accelerator.unwrap_model(model)
        unwrapped_model.save_pretrained(
            args.output_dir, is_main_process=accelerator.is_main_process, save_function=accelerator.save
        )
        if accelerator.is_main_process:
            image_processor.save_pretrained(args.output_dir)
            if args.push_to_hub:
                api.upload_folder(
                    commit_message="End of training",
                    folder_path=args.output_dir,
                    repo_id=repo_id,
                    repo_type="model",
                    token=args.hub_token,
                )

    accelerator.wait_for_everyone()
    accelerator.end_training()
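

# Example invocation (an illustrative sketch; `--dataset_name`, `--train_dir` and related flags are
# parsed further up in parse_args, and the accompanying README documents the full flag set):
#   accelerate launch run_mim_no_trainer.py \
#       --model_type vit \
#       --dataset_name cifar10 \
#       --output_dir ./outputs \
#       --seed 1337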
if __name__ == "__main__":
|
||||
main()
|
||||