<!--Copyright 2022 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->
*This model was released on 2018-07-26 and added to Hugging Face Transformers on 2023-01-16.*

# UPerNet

<div class="flex flex-wrap space-x-1">
<img alt="PyTorch" src="https://img.shields.io/badge/PyTorch-DE3412?style=flat&logo=pytorch&logoColor=white">
</div>

## Overview
The UPerNet model was proposed in [Unified Perceptual Parsing for Scene Understanding](https://huggingface.co/papers/1807.10221)
by Tete Xiao, Yingcheng Liu, Bolei Zhou, Yuning Jiang, and Jian Sun. UPerNet is a general framework to effectively segment
a wide range of concepts from images, leveraging any vision backbone such as [ConvNeXt](convnext) or [Swin](swin).

The abstract from the paper is the following:

*Humans recognize the visual world at multiple levels: we effortlessly categorize scenes and detect objects inside, while also identifying the textures and surfaces of the objects along with their different compositional parts. In this paper, we study a new task called Unified Perceptual Parsing, which requires the machine vision systems to recognize as many visual concepts as possible from a given image. A multi-task framework called UPerNet and a training strategy are developed to learn from heterogeneous image annotations. We benchmark our framework on Unified Perceptual Parsing and show that it is able to effectively segment a wide range of concepts from images. The trained networks are further applied to discover visual knowledge in natural scenes.*
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/upernet_architecture.jpg"
alt="drawing" width="600"/>

<small> UPerNet framework. Taken from the <a href="https://huggingface.co/papers/1807.10221">original paper</a>. </small>

This model was contributed by [nielsr](https://huggingface.co/nielsr). The original code is based on OpenMMLab's mmsegmentation and can be found [here](https://github.com/open-mmlab/mmsegmentation/blob/master/mmseg/models/decode_heads/uper_head.py).
## Usage examples

UPerNet is a general framework for semantic segmentation. It can be used with any vision backbone, like so:

```py
from transformers import SwinConfig, UperNetConfig, UperNetForSemanticSegmentation

backbone_config = SwinConfig(out_features=["stage1", "stage2", "stage3", "stage4"])

config = UperNetConfig(backbone_config=backbone_config)
model = UperNetForSemanticSegmentation(config)
```
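As a quick sanity check, you can pass a dummy batch through the randomly initialized model and inspect the output. This is a minimal illustrative sketch, not training code; the 224×224 input size is an arbitrary choice matching Swin's default image size:

```py
import torch
from transformers import SwinConfig, UperNetConfig, UperNetForSemanticSegmentation

# Build the randomly initialized Swin-backbone model from above.
backbone_config = SwinConfig(out_features=["stage1", "stage2", "stage3", "stage4"])
model = UperNetForSemanticSegmentation(UperNetConfig(backbone_config=backbone_config))

# Dummy batch of one 224x224 RGB image (size chosen for illustration only).
pixel_values = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    outputs = model(pixel_values)

# Per-pixel class logits, upsampled to the input resolution:
# (batch_size, num_labels, height, width).
print(outputs.logits.shape)
```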
To use another vision backbone, like [ConvNeXt](convnext), simply instantiate the model with the appropriate backbone:

```py
from transformers import ConvNextConfig, UperNetConfig, UperNetForSemanticSegmentation

backbone_config = ConvNextConfig(out_features=["stage1", "stage2", "stage3", "stage4"])

config = UperNetConfig(backbone_config=backbone_config)
model = UperNetForSemanticSegmentation(config)
```

Note that this will randomly initialize all the weights of the model.
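To run inference with pretrained weights instead, load a checkpoint from the Hub together with its image processor. The snippet below is a minimal sketch assuming the [openmmlab/upernet-convnext-tiny](https://huggingface.co/openmmlab/upernet-convnext-tiny) checkpoint (a ConvNeXt backbone fine-tuned on ADE20k); any other UPerNet checkpoint works the same way:

```py
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, UperNetForSemanticSegmentation

# Load a pretrained UPerNet checkpoint and its matching image processor
# (checkpoint name is an example; swap in any UPerNet checkpoint).
image_processor = AutoImageProcessor.from_pretrained("openmmlab/upernet-convnext-tiny")
model = UperNetForSemanticSegmentation.from_pretrained("openmmlab/upernet-convnext-tiny")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = image_processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Post-process into a per-pixel segmentation map at the original image size.
# PIL's image.size is (width, height), so reverse it to get (height, width).
segmentation = image_processor.post_process_semantic_segmentation(
    outputs, target_sizes=[image.size[::-1]]
)[0]
print(segmentation.shape)  # (height, width), one class id per pixel
```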
## Resources

A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with UPerNet.

- Demo notebooks for UPerNet can be found [here](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/UPerNet).
- [`UperNetForSemanticSegmentation`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/semantic-segmentation) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/semantic_segmentation.ipynb).
- See also: [Semantic segmentation task guide](../tasks/semantic_segmentation)

If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
## UperNetConfig

[[autodoc]] UperNetConfig

## UperNetForSemanticSegmentation

[[autodoc]] UperNetForSemanticSegmentation
    - forward