Converting .tflite to .pb
Asked Answered
Problem: How can I convert a .tflite (serialised flat buffer) to .pb (frozen model)? The documentation only talks about one-way conversion.

Use-case is: I have a model that was trained and then converted to .tflite, but unfortunately I do not have the details of the model and I would like to inspect the graph. How can I do that?

Vallee answered 7/12, 2018 at 6:19 Comment(0)

I found the answer here

We can use the Interpreter to analyze the model, and the code looks like the following:

import numpy as np
import tensorflow as tf

# Load TFLite model and allocate tensors.
interpreter = tf.lite.Interpreter(model_path="converted_model.tflite")
interpreter.allocate_tensors()

# Get input and output tensors.
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Test model on random input data.
input_shape = input_details[0]['shape']
input_data = np.array(np.random.random_sample(input_shape), dtype=np.float32)
interpreter.set_tensor(input_details[0]['index'], input_data)

interpreter.invoke()
output_data = interpreter.get_tensor(output_details[0]['index'])
print(output_data)

Netron is the best analysis/visualising tool I have found; it understands a lot of formats, including .tflite.

Vallee answered 19/1, 2019 at 4:22 Comment(2)
Link 'here' is not opening. 404Catfall
A lot of TensorFlow repository restructuring has happened. You can find all the tflite documentation hereVallee

I don't think there is a way to restore a tflite back to a pb, as some information is lost after conversion. I found that an indirect way to get a glimpse of what is inside a tflite model is to read back each of its tensors.
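Before reading tensors back, a quick sanity check can confirm the file really is a TFLite FlatBuffer: FlatBuffers stores a 4-byte file identifier (for TFLite, "TFL3") immediately after the 4-byte root-table offset. A minimal stdlib-only sketch (the function name and file path are hypothetical):

```python
def tflite_file_identifier(data: bytes) -> str:
    """Return the FlatBuffers file identifier of a serialized buffer.

    Bytes 0-3 hold the root-table offset; bytes 4-7 hold the file
    identifier, which is b"TFL3" for TensorFlow Lite models.
    """
    return data[4:8].decode("ascii", errors="replace")

# Hypothetical usage:
# with open("model.tflite", "rb") as f:
#     assert tflite_file_identifier(f.read(8)) == "TFL3"
```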

import tensorflow as tf  # TF 1.x: the Interpreter lived under tf.contrib.lite

interpreter = tf.contrib.lite.Interpreter(model_path=model_path)
interpreter.allocate_tensors()

# trial some arbitrary numbers to find out the num of tensors
num_layer = 89
for i in range(num_layer):
    detail = interpreter._get_tensor_details(i)  # private API, but usable for inspection
    print(i, detail['name'], detail['shape'])

and you would see something like the below. As only a limited set of operations is currently supported, it is not too difficult to reverse-engineer the network architecture. I have put some tutorials on my GitHub too.

0 MobilenetV1/Logits/AvgPool_1a/AvgPool [   1    1    1 1024]
1 MobilenetV1/Logits/Conv2d_1c_1x1/BiasAdd [   1    1    1 1001]
2 MobilenetV1/Logits/Conv2d_1c_1x1/Conv2D_bias [1001]
3 MobilenetV1/Logits/Conv2d_1c_1x1/weights_quant/FakeQuantWithMinMaxVars [1001    1    1 1024]
4 MobilenetV1/Logits/SpatialSqueeze [   1 1001]
5 MobilenetV1/Logits/SpatialSqueeze_shape [2]
6 MobilenetV1/MobilenetV1/Conv2d_0/Conv2D_Fold_bias [32]
7 MobilenetV1/MobilenetV1/Conv2d_0/Relu6 [  1 112 112  32]
8 MobilenetV1/MobilenetV1/Conv2d_0/weights_quant/FakeQuantWithMinMaxVars [32  3  3  3]
9 MobilenetV1/MobilenetV1/Conv2d_10_depthwise/Relu6 [  1  14  14 512]
10 MobilenetV1/MobilenetV1/Conv2d_10_depthwise/depthwise_Fold_bias [512]
11 MobilenetV1/MobilenetV1/Conv2d_10_depthwise/weights_quant/FakeQuantWithMinMaxVars [  1   3   3 512]
12 MobilenetV1/MobilenetV1/Conv2d_10_pointwise/Conv2D_Fold_bias [512]
13 MobilenetV1/MobilenetV1/Conv2d_10_pointwise/Relu6 [  1  14  14 512]
14 MobilenetV1/MobilenetV1/Conv2d_10_pointwise/weights_quant/FakeQuantWithMinMaxVars [512   1   1 512]
15 MobilenetV1/MobilenetV1/Conv2d_11_depthwise/Relu6 [  1  14  14 512]
16 MobilenetV1/MobilenetV1/Conv2d_11_depthwise/depthwise_Fold_bias [512]
17 MobilenetV1/MobilenetV1/Conv2d_11_depthwise/weights_quant/FakeQuantWithMinMaxVars [  1   3   3 512]
18 MobilenetV1/MobilenetV1/Conv2d_11_pointwise/Conv2D_Fold_bias [512]
19 MobilenetV1/MobilenetV1/Conv2d_11_pointwise/Relu6 [  1  14  14 512]
20 MobilenetV1/MobilenetV1/Conv2d_11_pointwise/weights_quant/FakeQuantWithMinMaxVars [512   1   1 512]
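To turn such a listing into a layer-by-layer view, the slash-separated tensor names can be grouped by their enclosing scope. A small stdlib-only helper (a sketch; the sample names are taken from the listing above):

```python
def layer_scope(tensor_name: str) -> str:
    # Tensor names are slash-separated TF scopes; drop the final
    # component (the op/tensor itself) to get the layer scope.
    parts = tensor_name.split("/")
    return "/".join(parts[:-1]) if len(parts) > 1 else tensor_name

names = [
    "MobilenetV1/MobilenetV1/Conv2d_0/Relu6",
    "MobilenetV1/MobilenetV1/Conv2d_10_depthwise/Relu6",
    "MobilenetV1/MobilenetV1/Conv2d_10_depthwise/depthwise_Fold_bias",
    "MobilenetV1/MobilenetV1/Conv2d_10_pointwise/Relu6",
]
layers = sorted({layer_scope(n) for n in names})
for layer in layers:
    print(layer)
```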
Choir answered 12/12, 2018 at 11:57 Comment(0)

I have done this with TOCO, using tf 1.12

tensorflow_1.12/tensorflow/bazel-bin/tensorflow/contrib/lite/toco/toco \
  --output_file=coco_ssd_mobilenet_v1_1.0_quant_2018_06_29.pb \
  --output_format=TENSORFLOW_GRAPHDEF \
  --input_format=TFLITE \
  --input_file=coco_ssd_mobilenet_v1_1.0_quant_2018_06_29.tflite \
  --inference_type=FLOAT --input_type=FLOAT \
  --input_array="" --output_array="" \
  --input_shape=1,450,450,3 \
  --dump_graphviz=./

(you can remove the dump_graphviz option)

Jalousie answered 14/7, 2019 at 15:22 Comment(3)
toco: error: argument --output_format: invalid choice: 'TENSORFLOW_GRAPHDEF' (choose from 'TFLITE', 'GRAPHVIZ_DOT') on TF=1.15.0-dev20190810, does the higher version no longer support it?Nimwegen
I think it does not. Could you try with 1.12 ?Jalousie
I tried it with tf1.12 and still get the same error as @MeadowMuffins.Maemaeander
