I have a PyTorch model.pth trained with Detectron2's COCO Object Detection Baselines, using the pretrained R50-FPN model.
I am trying to convert the .pth model to ONNX.
My code is as follows:
import io
import numpy as np
from torch import nn
import torch.utils.model_zoo as model_zoo
import torch.onnx
from torchvision import models
model = torch.load('output_object_detection/model_final.pth')
x = torch.randn(1, 3, 1080, 1920, requires_grad=True)  # batch, channels, height, width
torch_out = model(x)
print(model)
torch.onnx.export(model,                    # model being run
                  x,                        # model input (or a tuple for multiple inputs)
                  "super_resolution.onnx",  # where to save the model (can be a file or file-like object)
                  export_params=True,       # store the trained parameter weights inside the model file
                  opset_version=10,         # the ONNX version to export the model to
                  do_constant_folding=True, # whether to execute constant folding for optimization
                  input_names=['input'],    # the model's input names
                  output_names=['cls_score', 'bbox_pred'],  # the model's output names
                  dynamic_axes={'input': {0: 'batch_size'},   # variable length axes
                                'output': {0: 'batch_size'}})
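I am also not sure whether torch.load alone gives me a usable model here, or whether the checkpoint first has to be loaded into a model built from the Detectron2 config. Roughly, this is what I had in mind (the config file name is only my assumption for the R50-FPN baseline):

from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.modeling import build_model
from detectron2.checkpoint import DetectionCheckpointer

cfg = get_cfg()
# assuming the R50-FPN baseline config; adjust if a different baseline was used
cfg.merge_from_file(model_zoo.get_config_file("COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml"))
cfg.MODEL.WEIGHTS = "output_object_detection/model_final.pth"

model = build_model(cfg)                              # builds the architecture from the config
DetectionCheckpointer(model).load(cfg.MODEL.WEIGHTS)  # loads the trained weights into it
model.eval()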
Is this the correct way to convert to an ONNX model? If it is, how do I determine the right input_names and output_names?
I used Netron to inspect the input and output, but the graph doesn't show the input/output layers.
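For reference, this is the kind of check I was hoping to do programmatically instead of Netron. It is only a sketch using the onnx package, and the file name just matches the export call above:

import onnx

onnx_model = onnx.load("super_resolution.onnx")  # file produced by torch.onnx.export above
onnx.checker.check_model(onnx_model)             # basic structural validation of the graph

# graph.input / graph.output hold the names the exporter actually recorded
print("inputs: ", [i.name for i in onnx_model.graph.input])
print("outputs:", [o.name for o in onnx_model.graph.output])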