Graph optimizations on a TensorFlow servable created using tf.Estimator

Context:

I have a simple classifier based on tf.estimator.DNNClassifier that takes text and outputs probabilities over a set of intent tags. I am able to train and export the model as a servable, and to serve that servable with TensorFlow Serving. The problem is that the servable is too big (around 1 GB), so I wanted to try some TensorFlow graph transforms to reduce the size of the files being served.
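For context, a minimal sketch of what such a setup might look like (the feature column, toy training data, and paths here are illustrative assumptions, not the actual model):

import tensorflow as tf

# Toy stand-in for the real text features: hashed word ids -> embedding.
words = tf.feature_column.categorical_column_with_hash_bucket("words", 1000)
embedded = tf.feature_column.embedding_column(words, dimension=16)

classifier = tf.estimator.DNNClassifier(
    hidden_units=[256, 128],
    feature_columns=[embedded],
    n_classes=3,
    model_dir="/tmp/intent_model")

def train_input_fn():
    features = {"words": tf.constant([["book", "flight"], ["play", "music"]])}
    labels = tf.constant([0, 1])
    return tf.data.Dataset.from_tensors((features, labels)).repeat(10)

classifier.train(train_input_fn)

def serving_input_fn():
    # TensorFlow Serving feeds serialized tf.Example protos. Left unnamed,
    # this placeholder becomes the "Placeholder" node referenced below.
    serialized = tf.placeholder(tf.string, shape=[None])
    features = tf.parse_example(
        serialized, tf.feature_column.make_parse_example_spec([embedded]))
    return tf.estimator.export.ServingInputReceiver(features, serialized)

classifier.export_savedmodel("/tmp/export", serving_input_fn)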

Problem:

I understand how to take the saved_model.pb and use freeze_graph.py to create a new .pb file that transforms can be run on. The result of those transforms (again a .pb file) is not a servable and cannot be used with TensorFlow Serving.
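For concreteness, the freeze step can be run directly against the SavedModel directory. A sketch, assuming the servable lives at /model/path (the output path is arbitrary):

from tensorflow.python.tools import freeze_graph

# Fold the trained variables into constants and write a single GraphDef
# protobuf that the graph transform tool can consume.
freeze_graph.freeze_graph(
    input_graph=None,
    input_saver=None,
    input_binary=True,
    input_checkpoint=None,
    output_node_names="dnn/head/predictions/probabilities",
    restore_op_name=None,
    filename_tensor_name=None,
    output_graph="/tmp/frozen_graph.pb",
    clear_devices=True,
    initializer_nodes="",
    input_saved_model_dir="/model/path",
    saved_model_tags="serve")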

How can a developer go from:

saved model -> graph transforms -> back to a servable

There's documentation suggesting this is certainly possible, but it's not at all clear from the docs how to do it.

What I've Tried:

import tensorflow as tf

from tensorflow.saved_model import simple_save
from tensorflow.saved_model import tag_constants
from tensorflow.tools.graph_transforms import TransformGraph


# Load the exported servable and pull out its raw GraphDef.
with tf.Session(graph=tf.Graph()) as sess_meta:
    meta_graph_def = tf.saved_model.loader.load(
        sess_meta,
        [tag_constants.SERVING],
        "/model/path")

graph_def = meta_graph_def.graph_def

# Run the transforms on the bare GraphDef.
other_graph_def = TransformGraph(
    graph_def,
    ["Placeholder"],
    ["dnn/head/predictions/probabilities"],
    ["quantize_weights"])

# Import the transformed GraphDef into a fresh graph and re-export it.
# The session passed to simple_save must belong to this new graph;
# reusing sess_meta (bound to the original graph) fails. Note also that
# the trained weights live in the checkpoint, not in graph_def, so this
# re-export loses them unless the graph is frozen first.
with tf.Graph().as_default() as graph:
    tf.import_graph_def(other_graph_def)
    in_tensor = graph.get_tensor_by_name(
        "import/Placeholder:0")
    out_tensor = graph.get_tensor_by_name(
        "import/dnn/head/predictions/probabilities:0")

    inputs = {"inputs": in_tensor}
    outputs = {"outputs": out_tensor}

    with tf.Session(graph=graph) as sess:
        simple_save(sess, "./new", inputs, outputs)

My idea was to load the servable, extract the graph_def from the meta_graph_def, transform the graph_def and then try to recreate the servable. This seems to be the incorrect approach.

Is there a way to successfully perform transforms (to reduce file size at inference) on a graph from an exported servable, and then recreate a servable with the transformed graph?

Thanks.

Update (2018-08-28):

Found tf.contrib.meta_graph_transform.meta_graph_transform(), which looks promising; a sketch of how it might be applied is below.
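A rough, untested sketch, reusing meta_graph_def and the node names from the code above (the exact call is an assumption based on the contrib API):

from tensorflow.contrib import meta_graph_transform
from tensorflow.saved_model import tag_constants

# Transform the whole MetaGraphDef so signatures and collections are
# rewritten along with the GraphDef instead of being dropped.
transformed_meta_graph_def = meta_graph_transform.meta_graph_transform(
    base_meta_graph_def=meta_graph_def,
    input_names=["Placeholder"],
    output_names=["dnn/head/predictions/probabilities"],
    transforms=["quantize_weights"],
    tags=[tag_constants.SERVING])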

Update (2018-12-03):

A related GitHub issue I opened appears to have been resolved by a detailed blog post, which is linked at the end of the ticket.

Cottonseed answered 22/8, 2018 at 16:13 Comment(1)
I've actually done something similar before. I always convert to a saved_model instead of a frozen model, but I wanted to try some optimizations. After hours of searching, I made scripts that convert saved_model → frozen_model and frozen_model → saved_model. – Tranquillize

The way to go from a SavedModel to a servable after running the TensorFlow graph transforms is to use the SavedModelBuilder API (tf.saved_model.builder.SavedModelBuilder).

First, create a SavedModelBuilder object and import the graph you have just transformed into a fresh graph.

Next, add the signatures, assets, and other metadata back onto the model. Finally, call the builder's save() method, which writes the model out as a servable.

This servable can then be used with TensorFlow Serving.
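A minimal sketch of that flow, continuing from the transformed other_graph_def above and assuming it came from a frozen graph, so the weights are constants (the signature keys and output directory are illustrative):

import tensorflow as tf
from tensorflow.saved_model import signature_constants
from tensorflow.saved_model import tag_constants

with tf.Graph().as_default() as graph:
    # name="" keeps the original node names (no "import/" prefix).
    tf.import_graph_def(other_graph_def, name="")
    in_tensor = graph.get_tensor_by_name("Placeholder:0")
    out_tensor = graph.get_tensor_by_name(
        "dnn/head/predictions/probabilities:0")

    with tf.Session(graph=graph) as sess:
        builder = tf.saved_model.builder.SavedModelBuilder("./servable")
        # Rebuild a prediction signature so TensorFlow Serving knows
        # which tensors to feed and fetch.
        signature = tf.saved_model.signature_def_utils.predict_signature_def(
            inputs={"inputs": in_tensor},
            outputs={"outputs": out_tensor})
        builder.add_meta_graph_and_variables(
            sess,
            [tag_constants.SERVING],
            signature_def_map={
                signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY:
                    signature})
        builder.save()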

Tawny answered 22/3, 2023 at 5:43 Comment(0)

1. Load the SavedModel and extract the graph definition.
2. Apply the graph transformation.
3. Recreate the servable SavedModel.

An end-to-end sketch of these steps follows.
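A condensed, untested sketch; here the graph definition is read from a frozen .pb (as produced by the freeze step in the question) so the trained weights travel with the graph, and simple_save stands in for the fuller SavedModelBuilder flow above:

import tensorflow as tf
from tensorflow.saved_model import simple_save
from tensorflow.tools.graph_transforms import TransformGraph

# 1. Load the (frozen) graph definition.
graph_def = tf.GraphDef()
with tf.gfile.GFile("/tmp/frozen_graph.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

# 2. Apply the graph transformation.
transformed_def = TransformGraph(
    graph_def,
    ["Placeholder"],
    ["dnn/head/predictions/probabilities"],
    ["quantize_weights"])

# 3. Recreate the servable SavedModel.
with tf.Graph().as_default() as graph:
    tf.import_graph_def(transformed_def, name="")
    with tf.Session(graph=graph) as sess:
        simple_save(
            sess, "./transformed_servable",
            inputs={"inputs": graph.get_tensor_by_name("Placeholder:0")},
            outputs={"outputs": graph.get_tensor_by_name(
                "dnn/head/predictions/probabilities:0")})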

Fiord answered 15/7 at 9:49 Comment(0)
