Convert a graph proto (pb/pbtxt) to a SavedModel for use in TensorFlow Serving or Cloud ML Engine

I've been following the TensorFlow for Poets 2 codelab on a model I've trained, and have created a frozen, quantized graph with embedded weights. It's captured in a single file - say my_quant_graph.pb.

Since I can use that graph for inference with the TensorFlow Android inference library just fine, I thought I could do the same with Cloud ML Engine, but it seems Cloud ML Engine only accepts a SavedModel.

How can I simply convert a frozen/quantized graph in a single pb file into a SavedModel for use on ML Engine?

Flynt asked 2/6, 2017 at 12:38

It turns out that a SavedModel is essentially a saved graph plus some extra metadata. Assuming the frozen graph needs no assets, the only thing missing is a serving signature.

Here's the Python code I ran to convert my graph to a format that Cloud ML Engine accepted. Note that my model has only a single pair of input/output tensors.

import tensorflow as tf
from tensorflow.python.saved_model import signature_constants
from tensorflow.python.saved_model import tag_constants

export_dir = './saved'
graph_pb = 'my_quant_graph.pb'

builder = tf.saved_model.builder.SavedModelBuilder(export_dir)

# Load the frozen graph definition from disk
with tf.gfile.GFile(graph_pb, "rb") as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())

sigs = {}

with tf.Session(graph=tf.Graph()) as sess:
    # name="" is important to ensure we don't get spurious prefixing
    tf.import_graph_def(graph_def, name="")
    g = tf.get_default_graph()
    inp = g.get_tensor_by_name("real_A_and_B_images:0")
    out = g.get_tensor_by_name("generator/Tanh:0")

    # Map the input/output tensors onto the default serving signature
    sigs[signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY] = \
        tf.saved_model.signature_def_utils.predict_signature_def(
            {"in": inp}, {"out": out})

    builder.add_meta_graph_and_variables(sess,
                                         [tag_constants.SERVING],
                                         signature_def_map=sigs)

builder.save()
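
For a quick sanity check that the export worked, you can load the SavedModel back into a fresh session and print the signature it was saved with. A minimal sketch, assuming the same TF 1.x environment and the './saved' directory from above:

import tensorflow as tf
from tensorflow.python.saved_model import tag_constants

# Load the SavedModel back and inspect its serving signature
with tf.Session(graph=tf.Graph()) as sess:
    meta_graph = tf.saved_model.loader.load(
        sess, [tag_constants.SERVING], './saved')
    sig = meta_graph.signature_def['serving_default']
    print('inputs:', {k: v.name for k, v in sig.inputs.items()})
    print('outputs:', {k: v.name for k, v in sig.outputs.items()})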
Flynt answered 2/6, 2017 at 12:39

Comments:
I'm trying to do this, but someone gave me the checkpoint directory without the code. It seems like I need the names of the input and output nodes. Is there a way to get the input and output nodes from the info in the checkpoint directory? – Parallel
Yep, use the inspect checkpoint tool: github.com/tensorflow/tensorflow/blob/master/tensorflow/python/… – Flynt
Thanks for the quick reply. When I ran it I got: python inspect_checkpoint.py --file_name=checkpoint 2017-07-14 07:38:02.585722: W tensorflow/core/util/tensor_slice_reader.cc:95] Could not open ./checkpoint: Data loss: not an sstable (bad magic number): perhaps your file is in a different file format and you need to use a different restore operator? Unable to open table file ./checkpoint: Data loss: not an sstable (bad magic number): perhaps your file is in a different file format and you need to use a different restore operator? – Parallel
I tried out your code, but the variables folder is empty. I'm using TensorFlow Hub to retrain an image classifier following this. Is the variables folder supposed to be empty (in some cases)? – Nodab
@Parallel - If the checkpoint tool isn't working, you can try loading the model in TensorBoard and inspecting it visually. Alternatively, the checkpoint should have a .pbtxt file that contains the description of the model graph; you can either inspect it by hand or use TensorBoard's graph viz element. I did the latter in this repo; you'll just need to replace the existing pbtxt file with yours. – Flynt
@user007 I'm not sure how TF Hub fits in here; try asking a brand new question to get some more visibility. Make sure you tag it with tensorflow and tensorflow-hub. – Flynt
What if I wanted to also export the variables? Currently, this method does not export any variables. – Donaldson
@MarkMcDonald What if I have multiple input and output nodes? – Neufer
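
If, as in the first comment above, you only have a frozen graph and don't know the tensor names, you can list the nodes in the GraphDef: Placeholder ops are usually the inputs, and the last node is often (not always) the output. A minimal sketch, assuming a frozen pb like the question's my_quant_graph.pb:

import tensorflow as tf

graph_def = tf.GraphDef()
with tf.gfile.GFile('my_quant_graph.pb', 'rb') as f:
    graph_def.ParseFromString(f.read())

# Placeholder ops are usually the graph's inputs
for node in graph_def.node:
    if node.op == 'Placeholder':
        print('input candidate:', node.name)

# The last node in the def is often, though not always, the output
print('output candidate:', graph_def.node[-1].name)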

Here is a sample with multiple output nodes:

# Convert a protobuf model to a SavedModel, the format for TF Serving
# https://cloud.google.com/ai-platform/prediction/docs/exporting-savedmodel-for-prediction
import shutil
import tensorflow.compat.v1 as tf
from tensorflow.python.saved_model import signature_constants
from tensorflow.python.saved_model import tag_constants

export_dir = './1' # TF Serving supports running different versions of the same model, so we put the current model in the '1' folder.
graph_pb = 'frozen_inference_graph.pb'

# Clear out folder
shutil.rmtree(export_dir, ignore_errors=True)

builder = tf.saved_model.builder.SavedModelBuilder(export_dir)

with tf.io.gfile.GFile(graph_pb, "rb") as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())

sigs = {}

with tf.Session(graph=tf.Graph()) as sess:
    # Prepare input and outputs of model
    tf.import_graph_def(graph_def, name="")
    g = tf.get_default_graph()
    image_tensor = g.get_tensor_by_name("image_tensor:0")
    num_detections = g.get_tensor_by_name("num_detections:0")
    detection_scores = g.get_tensor_by_name("detection_scores:0")
    detection_boxes = g.get_tensor_by_name("detection_boxes:0")
    detection_classes = g.get_tensor_by_name("detection_classes:0")

    sigs[signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY] = \
        tf.saved_model.signature_def_utils.predict_signature_def(
            {"input_image": image_tensor}, 
            {   "num_detections": num_detections,
                "detection_scores": detection_scores, 
                "detection_boxes": detection_boxes, 
                "detection_classes": detection_classes})

    builder.add_meta_graph_and_variables(sess,
                                         [tag_constants.SERVING],
                                         signature_def_map=sigs)

builder.save()
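
To check the exported signature end to end, you can load the SavedModel back and fetch all four detection outputs in a single run. A minimal sketch in the same tf.compat.v1 style; the 1x300x300x3 uint8 input shape is an assumption for an SSD-style detector, so adjust it to your model:

import numpy as np
import tensorflow.compat.v1 as tf
from tensorflow.python.saved_model import tag_constants

with tf.Session(graph=tf.Graph()) as sess:
    # Load the exported model from the version folder created above
    tf.saved_model.loader.load(sess, [tag_constants.SERVING], './1')
    # Dummy batch of one image; shape is an assumption, not part of the answer
    image = np.zeros((1, 300, 300, 3), dtype=np.uint8)
    n, scores, boxes, classes = sess.run(
        ["num_detections:0", "detection_scores:0",
         "detection_boxes:0", "detection_classes:0"],
        feed_dict={"image_tensor:0": image})
    print('detections:', n)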
Labdanum answered 9/4, 2021 at 20:47

Comments:
My frozen TF1 graph (that I want to convert to a SavedModel) has the following tensors: ghostbin.com/MlDHG - what params do I need to specify using your script? – Execration
