ResNet-18 as backbone in Faster R-CNN

I'm coding in PyTorch and want to use ResNet-18 as the backbone of a Faster R-CNN. When I print the structure of resnet18, this is the output:

>>> import torch
>>> import torchvision
>>> import numpy as np
>>> import torchvision.models as models

>>> resnet18 = models.resnet18(pretrained=False)
>>> print(resnet18)


ResNet(
  (conv1): Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False)
  (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  (relu): ReLU(inplace=True)
  (maxpool): MaxPool2d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False)
  (layer1): Sequential(
    (0): BasicBlock(
      (conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
      (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
    (1): BasicBlock(
      (conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
      (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
  )
  (layer2): Sequential(
    (0): BasicBlock(
      (conv1): Conv2d(64, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
      (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
      (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (downsample): Sequential(
        (0): Conv2d(64, 128, kernel_size=(1, 1), stride=(2, 2), bias=False)
        (1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
    )
    (1): BasicBlock(
      (conv1): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
      (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
  )
  (layer3): Sequential(
    (0): BasicBlock(
      (conv1): Conv2d(128, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
      (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
      (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (downsample): Sequential(
        (0): Conv2d(128, 256, kernel_size=(1, 1), stride=(2, 2), bias=False)
        (1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
    )
    (1): BasicBlock(
      (conv1): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
      (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
  )
  (layer4): Sequential(
    (0): BasicBlock(
      (conv1): Conv2d(256, 512, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
      (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
      (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (downsample): Sequential(
        (0): Conv2d(256, 512, kernel_size=(1, 1), stride=(2, 2), bias=False)
        (1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
    )
    (1): BasicBlock(
      (conv1): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
      (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
  )
  (avgpool): AdaptiveAvgPool2d(output_size=(1, 1))
  (fc): Linear(in_features=512, out_features=1000, bias=True)
)

My question is: up to which layer is it a feature extractor? Should the AdaptiveAvgPool2d layer be part of the Faster R-CNN backbone?

This tutorial shows how to train a Mask R-CNN with an arbitrary backbone. I want to do the same thing with Faster R-CNN and train it with a ResNet-18 backbone, but which layers should be part of the feature extractor is confusing to me.

I know how to use ResNet + Feature Pyramid Network as the backbone; my question is about plain ResNet.
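For reference, the top-level modules can be listed by name (a small sketch of mine, using the resnet18 instance from above):

# enumerate the top-level modules in forward order
for name, module in resnet18.named_children():
    print(name)
# prints: conv1, bn1, relu, maxpool, layer1, layer2, layer3, layer4, avgpool, fc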

Exorcism answered 13/10, 2019 at 10:58

If we want to use the output of the adaptive average pooling layer, we can use this code for the different ResNets:

import torchvision
from torch import nn

# backbone: keep everything up to and including avgpool (drop only the fc head)
if backbone_name == 'resnet_18':
    resnet_net = torchvision.models.resnet18(pretrained=True)
    modules = list(resnet_net.children())[:-1]
    backbone = nn.Sequential(*modules)
    backbone.out_channels = 512
elif backbone_name == 'resnet_34':
    resnet_net = torchvision.models.resnet34(pretrained=True)
    modules = list(resnet_net.children())[:-1]
    backbone = nn.Sequential(*modules)
    backbone.out_channels = 512
elif backbone_name == 'resnet_50':
    resnet_net = torchvision.models.resnet50(pretrained=True)
    modules = list(resnet_net.children())[:-1]
    backbone = nn.Sequential(*modules)
    backbone.out_channels = 2048
elif backbone_name == 'resnet_101':
    resnet_net = torchvision.models.resnet101(pretrained=True)
    modules = list(resnet_net.children())[:-1]
    backbone = nn.Sequential(*modules)
    backbone.out_channels = 2048
elif backbone_name == 'resnet_152':
    resnet_net = torchvision.models.resnet152(pretrained=True)
    modules = list(resnet_net.children())[:-1]
    backbone = nn.Sequential(*modules)
    backbone.out_channels = 2048
elif backbone_name == 'resnet_50_modified_stride_1':
    # custom ResNet-50 variant with modified stride, defined elsewhere
    resnet_net = resnet50(pretrained=True)
    modules = list(resnet_net.children())[:-1]
    backbone = nn.Sequential(*modules)
    backbone.out_channels = 2048
elif backbone_name == 'resnext101_32x8d':
    resnet_net = torchvision.models.resnext101_32x8d(pretrained=True)
    modules = list(resnet_net.children())[:-1]
    backbone = nn.Sequential(*modules)
    backbone.out_channels = 2048
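As a sanity check (my own sketch, not part of the original answer), you can run a dummy image through the truncated model; after the adaptive average pooling the output is a 512-channel 1x1 map for ResNet-18:

import torch
import torchvision
from torch import nn

resnet_net = torchvision.models.resnet18(pretrained=True)
backbone = nn.Sequential(*list(resnet_net.children())[:-1])
backbone.out_channels = 512

x = torch.rand(1, 3, 224, 224)  # dummy batch with one RGB image
print(backbone(x).shape)        # torch.Size([1, 512, 1, 1])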

If we want to use the convolutional feature map instead, we use this code:

import torchvision
from torch import nn

# backbone: keep only the convolutional layers (drop avgpool and the fc head);
# out_channels must still be set so FasterRCNN knows the feature-map depth
if backbone_name == 'resnet_18':
    resnet_net = torchvision.models.resnet18(pretrained=True)
    modules = list(resnet_net.children())[:-2]
    backbone = nn.Sequential(*modules)
    backbone.out_channels = 512
elif backbone_name == 'resnet_34':
    resnet_net = torchvision.models.resnet34(pretrained=True)
    modules = list(resnet_net.children())[:-2]
    backbone = nn.Sequential(*modules)
    backbone.out_channels = 512
elif backbone_name == 'resnet_50':
    resnet_net = torchvision.models.resnet50(pretrained=True)
    modules = list(resnet_net.children())[:-2]
    backbone = nn.Sequential(*modules)
    backbone.out_channels = 2048
elif backbone_name == 'resnet_101':
    resnet_net = torchvision.models.resnet101(pretrained=True)
    modules = list(resnet_net.children())[:-2]
    backbone = nn.Sequential(*modules)
    backbone.out_channels = 2048
elif backbone_name == 'resnet_152':
    resnet_net = torchvision.models.resnet152(pretrained=True)
    modules = list(resnet_net.children())[:-2]
    backbone = nn.Sequential(*modules)
    backbone.out_channels = 2048
elif backbone_name == 'resnet_50_modified_stride_1':
    # custom ResNet-50 variant with modified stride, defined elsewhere
    resnet_net = resnet50(pretrained=True)
    modules = list(resnet_net.children())[:-2]
    backbone = nn.Sequential(*modules)
    backbone.out_channels = 2048
elif backbone_name == 'resnext101_32x8d':
    resnet_net = torchvision.models.resnext101_32x8d(pretrained=True)
    modules = list(resnet_net.children())[:-2]
    backbone = nn.Sequential(*modules)
    backbone.out_channels = 2048
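To wire such a plain (non-FPN) backbone into Faster R-CNN, you also need an anchor generator and an RoI pooler. Here is a minimal sketch along the lines of the torchvision detection tutorial; the anchor sizes and num_classes are illustrative, not part of the original answer:

import torchvision
from torch import nn
from torchvision.models.detection import FasterRCNN
from torchvision.models.detection.rpn import AnchorGenerator

# truncated ResNet-18: everything except avgpool and fc
resnet_net = torchvision.models.resnet18(pretrained=True)
backbone = nn.Sequential(*list(resnet_net.children())[:-2])
backbone.out_channels = 512

# a single feature map, so one tuple of sizes and one of aspect ratios
anchor_generator = AnchorGenerator(sizes=((32, 64, 128, 256, 512),),
                                   aspect_ratios=((0.5, 1.0, 2.0),))

# the plain backbone returns a single tensor, which FasterRCNN exposes as '0'
roi_pooler = torchvision.ops.MultiScaleRoIAlign(featmap_names=['0'],
                                                output_size=7,
                                                sampling_ratio=2)

model = FasterRCNN(backbone,
                   num_classes=2,  # illustrative: 1 object class + background
                   rpn_anchor_generator=anchor_generator,
                   box_roi_pool=roi_pooler)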
Exorcism answered 9/1, 2020 at 6:51

torchvision exposes the feature-extraction layers of VGG and MobileNet through the .features attribute, which returns exactly the layers that the object detection pipeline needs from the backbone model. For ResNets, you can read how this is handled in the resnet_fpn_backbone function.

In the object detection link that you shared, you just need to change backbone = torchvision.models.mobilenet_v2(pretrained=True).features to backbone = resnet_fpn_backbone('resnet50', pretrained_backbone).

Just to give you a brief understanding: the resnet_fpn_backbone function takes the ResNet backbone_name you provide ('resnet18', 'resnet34', 'resnet50', ...), instantiates that ResNet, and extracts layer1 through layer4 as feature maps. The resulting BackboneWithFPN is then used as the backbone in Faster R-CNN.
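As a quick illustration (my own sketch, assuming torchvision's detection API), you can build such an FPN backbone and inspect the feature maps it returns:

import torch
from torchvision.models.detection.backbone_utils import resnet_fpn_backbone

backbone = resnet_fpn_backbone('resnet18', pretrained=True)
out = backbone(torch.rand(1, 3, 224, 224))

# an OrderedDict of feature maps: '0'..'3' from layer1..layer4, plus 'pool'
for name, feat in out.items():
    print(name, feat.shape)

print(backbone.out_channels)  # 256: the FPN projects every level to 256 channels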

Demythologize answered 13/10, 2019 at 11:57
Comments:
I tested resnet18(pretrained=True).features but it gave: AttributeError: 'ResNet' object has no attribute 'features'. (Exorcism)
Can you replace backbone = torchvision.models.resnet18(pretrained=True).features with backbone = resnet_fpn_backbone('resnet50', pretrained_backbone)? You need to include from .backbone_utils import resnet_fpn_backbone. (Demythologize)
resnet_fpn_backbone returns a Feature Pyramid Network that is based on ResNet; I want plain ResNet as the backbone. (Exorcism)
As far as I know, torchvision only supports ResNet with FPN by default at the moment. You can customize it by pulling their repo if you want. I'll do my research once again and confirm. (Demythologize)
I think this code solves it: resnet_net = torchvision.models.resnet18(pretrained=True); modules = list(resnet_net.children())[:-2]; backbone = nn.Sequential(*modules); backbone.out_channels = 512. (Exorcism)

I use something like this with recent versions of torch and torchvision.

import torchvision
from torchvision.models.detection import FasterRCNN
from torchvision.models.detection.rpn import AnchorGenerator


def get_resnet18_backbone_model(num_classes, pretrained):
    from torchvision.models.detection.backbone_utils import resnet_fpn_backbone

    print('Using fasterrcnn with res18 backbone...')

    # trainable_layers=5 makes all ResNet stages trainable
    backbone = resnet_fpn_backbone('resnet18', pretrained=pretrained, trainable_layers=5)

    anchor_generator = AnchorGenerator(
        sizes=((16,), (32,), (64,), (128,), (256,)),
        aspect_ratios=tuple([(0.25, 0.5, 1.0, 2.0) for _ in range(5)]))

    roi_pooler = torchvision.ops.MultiScaleRoIAlign(featmap_names=['0', '1', '2', '3'],
                                                    output_size=7, sampling_ratio=2)

    # put the pieces together inside a FasterRCNN model
    model = FasterRCNN(backbone, num_classes=num_classes,
                       rpn_anchor_generator=anchor_generator,
                       box_roi_pool=roi_pooler)
    return model

Note that resnet_fpn_backbone() already sets backbone.out_channels to the correct value.
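A hypothetical usage example of the helper above (my own, not from the original answer):

import torch

model = get_resnet18_backbone_model(num_classes=2, pretrained=True)

model.eval()
with torch.no_grad():
    # inference takes a list of 3xHxW tensors and returns one dict per image
    predictions = model([torch.rand(3, 300, 400)])
print(predictions[0]['boxes'].shape)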

Meekins answered 7/9, 2020 at 10:08
