Convert Keras model to TensorFlow protobuf

We're currently training various neural networks using Keras, which is ideal because it has a nice interface and is relatively easy to use, but we'd like to be able to apply them in our production environment.

Unfortunately the production environment is C++, so our plan is to:

  • Use the TensorFlow backend to save the model to a protobuf
  • Link our production code to TensorFlow, and then load in the protobuf

Unfortunately I don't know how to access the TensorFlow saving utilities from Keras, which normally saves to HDF5 and JSON. How do I save to protobuf?

Residuum answered 4/4, 2016 at 20:16 Comment(1)
Not familiar with Keras, but if it's using the default graph, you can get the protobuf as tf.get_default_graph().as_graph_def(). – Ascanius

If you don't need to use a GPU in the environment you are deploying to, you could also use my library, frugally-deep. It is available on GitHub and published under the MIT License: https://github.com/Dobiasd/frugally-deep

frugally-deep allows running forward passes on already-trained Keras models directly in C++ without the need to link against TensorFlow or any other backend.
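
A rough sketch of the export workflow (the converter script path and the C++ call are assumptions based on the frugally-deep repository, not quoted from this answer):

from keras.models import Sequential
from keras.layers import Dense

# stand-in model; replace with your trained network
model = Sequential([Dense(1, input_shape=(4,))])
model.save('model.h5')  # HDF5 file that frugally-deep's converter consumes

# then convert with the script shipped in the frugally-deep repo, e.g.:
#   python3 keras_export/convert_model.py model.h5 model.json
# and load 'model.json' from C++ via fdeep::load_model(...), then call .predict(...)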

Vidavidal answered 3/1, 2018 at 17:50 Comment(2)
Any plans to support RNNs? Interesting overlap with lwtnn, which handles RNNs (but not convolutions). – Residuum
@Residuum I would like to support them in the future, but I don't have a schedule planned for this yet. – Vidavidal

You can access the TensorFlow backend by:

import keras.backend.tensorflow_backend as K

Then you can call any TensorFlow utility or function like:

K.tf.ConfigProto
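
For the original question, one thing this gives you is access to Keras' TensorFlow session; here is a minimal sketch (output path and file name are placeholders, TensorFlow 1.x API assumed) that writes the current graph definition as a protobuf:

import keras.backend.tensorflow_backend as K
import tensorflow as tf
from keras.models import Sequential
from keras.layers import Dense

# small stand-in model so the snippet is self-contained
model = Sequential([Dense(1, input_shape=(4,))])

sess = K.get_session()  # the session Keras runs its graph in
# writes the graph structure only; variables are not frozen into the file
tf.train.write_graph(sess.graph.as_graph_def(), 'out_dir', 'model_graph.pb', as_text=False)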
Chitin answered 22/4, 2016 at 5:26 Comment(0)

This seems to be answered in "Keras as a simplified interface to TensorFlow: tutorial", posted on The Keras Blog by Francois Chollet.

In particular, section II, "Using Keras models with TensorFlow".
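
A rough sketch of the pattern that section describes, with placeholder names and the TensorFlow 1.x API assumed (register a session with Keras, then export with plain TensorFlow tools):

import tensorflow as tf
from keras import backend as K
from keras.models import Sequential
from keras.layers import Dense

sess = tf.Session()
K.set_session(sess)  # make Keras build its graph inside this session

model = Sequential([Dense(1, input_shape=(4,))])  # stand-in for your trained model
sess.run(tf.global_variables_initializer())

# save the weights with a plain TF Saver and the graph definition as a protobuf
saver = tf.train.Saver()
saver.save(sess, './model.ckpt')
tf.train.write_graph(sess.graph.as_graph_def(), 'out_dir', 'graph.pb', as_text=False)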

Fabri answered 10/5, 2016 at 13:52 Comment(0)

Save your Keras model as an HDF5 file.

You can then do the conversion with the following code:

import os.path as osp

from keras import backend as K
from keras.models import load_model
from tensorflow.python.framework import graph_util
from tensorflow.python.framework import graph_io

weight_file_path = 'path to your keras model'  # the HDF5 file saved above
net_model = load_model(weight_file_path)
sess = K.get_session()

# convert_variables_to_constants expects a list of output *node* names
output_node_names = [out.op.name for out in net_model.outputs]
constant_graph = graph_util.convert_variables_to_constants(sess, sess.graph.as_graph_def(), output_node_names)
graph_io.write_graph(constant_graph, 'output_folder_path', 'output.pb', as_text=False)
print('saved the constant graph (ready for inference) at: ', osp.join('output_folder_path', 'output.pb'))

Here is my sample code which handles multiple input and multiple output cases: https://github.com/amir-abdi/keras_to_tensorflow

Mediative answered 10/5, 2017 at 5:33 Comment(0)

Make sure you set the Keras backend's learning phase to inference mode before exporting, so that layers such as dropout and batch normalization are frozen with their test-time behaviour. Here is a discussion about it.
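
A minimal sketch of that step (assuming the Keras backend API), to be run before building or loading the model:

from keras import backend as K

# 0 = test/inference mode; set this before the model is built or loaded so that
# Dropout and BatchNormalization are exported with their inference behaviour
K.set_learning_phase(0)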

Ajmer answered 1/4, 2019 at 10:32 Comment(0)
