I have been training a model in PyTorch that uses a stack of convolutional layers (3x3, stride 1, 'same' padding). The model performs well and I now want to use it in MATLAB for inference. For that, the ONNX format seems to be the (only?) way to exchange networks between the two frameworks. The model should be exportable with the following command:
torch.onnx.export(net.to('cpu'), test_input, 'onnxfile.onnx')
Here is my CNN architecture definition:
import torch.nn as nn

class Encoder_decoder(nn.Module):
    def __init__(self):
        super().__init__()
        # Ten 3x3 convolutions with stride 1 and 'same' padding, followed by a 1x1 output convolution
        self.model = nn.Sequential(
            nn.Conv2d(2, 8, (3, 3), stride=1, padding='same'),
            nn.ReLU(),
            nn.Conv2d(8, 8, (3, 3), stride=1, padding='same'),
            nn.ReLU(),
            nn.Conv2d(8, 16, (3, 3), stride=1, padding='same'),
            nn.ReLU(),
            nn.Conv2d(16, 16, (3, 3), stride=1, padding='same'),
            nn.ReLU(),
            nn.Conv2d(16, 32, (3, 3), stride=1, padding='same'),
            nn.ReLU(),
            nn.Conv2d(32, 32, (3, 3), stride=1, padding='same'),
            nn.ReLU(),
            nn.Conv2d(32, 64, (3, 3), stride=1, padding='same'),
            nn.ReLU(),
            nn.Conv2d(64, 64, (3, 3), stride=1, padding='same'),
            nn.ReLU(),
            nn.Conv2d(64, 128, (3, 3), stride=1, padding='same'),
            nn.ReLU(),
            nn.Conv2d(128, 128, (3, 3), stride=1, padding='same'),
            nn.ReLU(),
            nn.Conv2d(128, 1, (1, 1)),
        )

    def forward(self, x):
        x = self.model(x)
        return x
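For context, the export is invoked roughly like this (the dummy input's spatial size below is only illustrative; the network just needs 2 input channels):

import torch

net = Encoder_decoder()
net.eval()

# dummy input with 2 channels; 64x64 is an arbitrary spatial size used for tracing
test_input = torch.randn(1, 2, 64, 64)

torch.onnx.export(net.to('cpu'), test_input, 'onnxfile.onnx')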
However, when I run the torch.onnx.export command, I get the following error:
RuntimeError: Exporting the operator _convolution_mode to ONNX opset version 9 is not supported. Please feel free to request support or submit a pull request on PyTorch GitHub.
I have tried changing the opset version, but that doesn't solve the problem, even though ONNX has full support for convolutional neural networks. Also, I am training the network in Google Colab.
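For reference, the opset change was roughly along these lines (13 here is just one example of the versions I tried):

torch.onnx.export(net.to('cpu'), test_input, 'onnxfile.onnx', opset_version=13)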
Do you know of other methods to transfer the model to MATLAB?