PyTorch unable to export trained model as ONNX

I have been training a model in the PyTorch framework using multiple convolutional layers (3x3, stride 1, padding 'same'). The model performs well, and I want to use it in MATLAB for inference. For that, the ONNX format for NN exchange between frameworks seems to be the (only?) solution. The model can be exported using the following command:

torch.onnx.export(net.to('cpu'), test_input, 'onnxfile.onnx')

Here is my CNN architecture definition:

import torch
from torch import nn

class Encoder_decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.model = nn.Sequential(
            nn.Conv2d(2, 8, (3, 3), stride=1, padding='same'),
            nn.ReLU(),
            nn.Conv2d(8, 8, (3, 3), stride=1, padding='same'),
            nn.ReLU(),
            nn.Conv2d(8, 16, (3, 3), stride=1, padding='same'),
            nn.ReLU(),
            nn.Conv2d(16, 16, (3, 3), stride=1, padding='same'),
            nn.ReLU(),
            nn.Conv2d(16, 32, (3, 3), stride=1, padding='same'),
            nn.ReLU(),
            nn.Conv2d(32, 32, (3, 3), stride=1, padding='same'),
            nn.ReLU(),
            nn.Conv2d(32, 64, (3, 3), stride=1, padding='same'),
            nn.ReLU(),
            nn.Conv2d(64, 64, (3, 3), stride=1, padding='same'),
            nn.ReLU(),
            nn.Conv2d(64, 128, (3, 3), stride=1, padding='same'),
            nn.ReLU(),
            nn.Conv2d(128, 128, (3, 3), stride=1, padding='same'),
            nn.ReLU(),
            nn.Conv2d(128, 1, (1, 1)),
        )

    def forward(self, x):
        return self.model(x)

However, when I run the torch.onnx.export command, I get the following error:

RuntimeError: Exporting the operator _convolution_mode to ONNX opset version 9 is not supported. Please feel free to request support or submit a pull request on PyTorch GitHub.

I have tried changing the opset, but that doesn't solve the problem. ONNX has full support for convolutional neural networks. Also, I am training the network in Google Colab.
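
For example, requesting a newer opset explicitly via the standard opset_version argument of torch.onnx.export makes no difference:

torch.onnx.export(net.to('cpu'), test_input, 'onnxfile.onnx', opset_version=13)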

Do you know of other methods to transfer the model to MATLAB?

Headforemost answered 28/7, 2021 at 17:46

Currently, the _convolution_mode operator isn't supported by PyTorch's ONNX exporter. It shows up because of the use of padding='same'.

You need to change the padding to an integer value, or compute its equivalent explicitly. See Same padding equivalent in Pytorch.
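
For the 3x3, stride-1 convolutions in the question, 'same' is equivalent to an explicit padding of 1 (for odd kernels at stride 1, (kernel_size - 1) // 2 in general). A minimal check of that equivalence, with layer sizes taken from the question:

import torch
from torch import nn

# For a 3x3 kernel at stride 1, 'same' pads 1 pixel on each side,
# so padding=1 reproduces it exactly.
conv_same = nn.Conv2d(2, 8, (3, 3), stride=1, padding='same')
conv_int = nn.Conv2d(2, 8, (3, 3), stride=1, padding=1)
conv_int.load_state_dict(conv_same.state_dict())  # copy identical weights

x = torch.randn(1, 2, 64, 64)
assert torch.allclose(conv_same(x), conv_int(x))  # same outputs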

Dashtikavir answered 29/7, 2021 at 3:44

I made a workaround:

from collections.abc import Sequence

import torch
from torch import nn

...
def calc_same_padding(kernel_size, stride, input_size):
    # Conv2d stores these as tuples; assume square kernels/strides/inputs
    # and use the first element.
    if isinstance(kernel_size, Sequence):
        kernel_size = kernel_size[0]

    if isinstance(stride, Sequence):
        stride = stride[0]

    if isinstance(input_size, Sequence):
        input_size = input_size[0]

    # Per-side padding that reproduces the 'same' output size.
    pad = ((stride - 1) * input_size - stride + kernel_size) / 2
    return int(pad)

def replace_conv2d_with_same_padding(m: nn.Module, input_size=512):
    # Patch a Conv2d in place: swap the string 'same' for the equivalent
    # integer padding, which the ONNX exporter can handle.
    if isinstance(m, nn.Conv2d):
        if m.padding == "same":
            m.padding = calc_same_padding(
                kernel_size=m.kernel_size,
                stride=m.stride,
                input_size=input_size
            )

...
model = MyModel()

# nn.Module.apply visits every submodule, so all Conv2d layers get patched.
model.apply(lambda m: replace_conv2d_with_same_padding(m, 512))
example_input = torch.ones((1, 3, 512, 512))

torch.onnx.export(model,
                  example_input,
                  input_names=["input"],
                  output_names=["output"],
                  f=save_path,
                  opset_version=12)

All my input/output tensors have even dimensions (512x512, 256x256, 128x128, etc.), so the input size doesn't matter here. (With stride 1, the formula reduces to (kernel_size - 1) / 2, which doesn't depend on the input size at all.)
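
As a quick sanity check before loading the file in MATLAB, the exported graph can be validated with the onnx package (a minimal sketch, assuming onnx is installed and save_path is the path used above):

import onnx

# Load the exported model and run ONNX's structural validation.
onnx_model = onnx.load(save_path)
onnx.checker.check_model(onnx_model)
print(onnx.helper.printable_graph(onnx_model.graph))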

Leatherette answered 6/5, 2022 at 11:58
