I found that we can optimize a TensorFlow model in several ways. If I am mistaken, please tell me.
1- Using TF-TRT. This API was developed by the TensorFlow team and integrates TensorRT into TensorFlow; it is imported as:
from tensorflow.python.compiler.tensorrt import trt_convert as trt
This API can be applied to any TensorFlow model (both new and old versions) without conversion errors, because if the API does not support some layers, those layers are simply left out of the TensorRT engines; they stay in the TensorFlow graph and are executed by TensorFlow. Right?
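For reference, a minimal sketch of how I understand the TF-TRT workflow (TensorFlow 1.x style; the SavedModel paths and precision mode are placeholders, not taken from a specific model):

from tensorflow.python.compiler.tensorrt import trt_convert as trt

# Build a converter for a SavedModel; unsupported ops stay as TensorFlow ops.
converter = trt.TrtGraphConverter(
    input_saved_model_dir="./saved_model",  # placeholder path
    precision_mode="FP16")                  # FP32, FP16 or INT8
frozen_trt_graph = converter.convert()      # returns the converted GraphDef
converter.save("./saved_model_trt")         # placeholder output path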
2- Using TensorRT directly. This API was developed by NVIDIA and is independent of the TensorFlow library (it is not integrated into TensorFlow); it is imported as:
import tensorrt as trt
If we want to use this API, we must first convert the TensorFlow graph to UFF with the UFF converter and then parse the UFF graph with this API. In this case, if the TensorFlow graph has unsupported layers, we must write a plugin or custom code for those layers, right?
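For example, something like this (TensorRT 5/6-era Python API; the input/output node names, shape, and file names are placeholders for an SSD-style model):

import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)
with trt.Builder(TRT_LOGGER) as builder, \
        builder.create_network() as network, \
        trt.UffParser() as parser:
    # Register the graph inputs/outputs (names and shape are placeholders).
    parser.register_input("image_tensor", (3, 300, 300))
    parser.register_output("NMS")
    parser.parse("frozen_model.uff", network)  # UFF file from convert-to-uff
    builder.max_workspace_size = 1 << 30       # 1 GiB of scratch space
    builder.fp16_mode = True                   # build an FP16 engine
    engine = builder.build_cuda_engine(network)

Any op the UffParser cannot handle would have to be implemented as an IPluginV2 plugin before parsing succeeds.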
3- I don't understand why, when we work with TensorFlow models, we would use the UFF converter and then TensorRT, when we can use the TF-TRT API directly. If you have tested the TensorFlow models optimized by these two methods, do they reach the same performance? What is the advantage of the UFF converter method?
I have some questions about the two cases above:
4- I converted ssd_mobilenet_v2 using both cases. In case 1 I achieved a slight improvement in speed, but in case 2 I achieved a larger improvement. Why? My opinion is that in case 1 the API only converts the precision (FP32 to FP16) and fuses the layers it can, but in case 2 the graph is first cleaned by the UFF converter (e.g., redundant nodes such as Assert and Identity are removed) and then converted to a TensorRT graph. Right?
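To illustrate the kind of cleanup I mean, TensorFlow 1.x has a utility that strips such nodes from a frozen GraphDef (the file name is a placeholder; note that remove_training_nodes deletes Identity and CheckNumerics nodes, not Asserts):

import tensorflow as tf

graph_def = tf.compat.v1.GraphDef()
with tf.io.gfile.GFile("frozen_inference_graph.pb", "rb") as f:  # placeholder path
    graph_def.ParseFromString(f.read())
# Strip Identity/CheckNumerics nodes that are useless for inference.
cleaned = tf.compat.v1.graph_util.remove_training_nodes(graph_def)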
5- When we convert trained model files (.ckpt, .meta, ...) to a frozen inference graph (.pb file), are these redundant layers not removed from the graph? Are only the loss and optimizer states (and similar training-only parts) removed?
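For context, this is roughly how I freeze the graph (TF 1.x; the checkpoint paths and output node name are placeholders). Freezing folds variables into constants and prunes nodes the outputs do not depend on, but inference-irrelevant nodes that are still reachable, such as Identity or Assert ops, can survive:

import tensorflow as tf

with tf.compat.v1.Session() as sess:
    saver = tf.compat.v1.train.import_meta_graph("model.ckpt.meta")  # placeholder
    saver.restore(sess, "model.ckpt")                                # placeholder
    frozen = tf.compat.v1.graph_util.convert_variables_to_constants(
        sess, sess.graph.as_graph_def(), ["detection_boxes"])        # assumed output node
    with tf.io.gfile.GFile("frozen_inference_graph.pb", "wb") as f:
        f.write(frozen.SerializeToString())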