I have exported a SavedModel and now I wish to load it back in and make a prediction. It was trained with the following features and labels:
F1 : FLOAT32
F2 : FLOAT32
F3 : FLOAT32
L1 : FLOAT32
Say I want to feed in the values 20.9, 1.8, 0.9 and get a single FLOAT32 prediction. How do I accomplish this? I have managed to load the model successfully, but I am not sure how to access it to make the prediction call.
import tensorflow as tf

with tf.Session(graph=tf.Graph()) as sess:
    # Load the SavedModel that was exported with the SERVING tag.
    tf.saved_model.loader.load(
        sess,
        [tf.saved_model.tag_constants.SERVING],
        "/job/export/Servo/1503723455"
    )
    # How can I predict from here?
    # I want to do something like prediction = model.predict([20.9, 1.8, 0.9])
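The closest I have gotten is the sketch below, which uses the signature_def map on the MetaGraphDef that tf.saved_model.loader.load returns. The signature key (DEFAULT_SERVING_SIGNATURE_DEF_KEY) and the input/output keys ("F1", "F2", "F3", "L1") are assumptions on my part; I don't know the actual names my export produced, which is exactly what I'm asking about.

import tensorflow as tf

with tf.Session(graph=tf.Graph()) as sess:
    # loader.load returns the MetaGraphDef, which carries the signature defs.
    meta_graph_def = tf.saved_model.loader.load(
        sess,
        [tf.saved_model.tag_constants.SERVING],
        "/job/export/Servo/1503723455"
    )

    # Assumption: the model was exported under the default serving signature.
    signature = meta_graph_def.signature_def[
        tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY
    ]

    # Assumption: the signature's input/output keys match the feature/label
    # names above. Each TensorInfo's .name is the tensor name in the graph.
    feed_dict = {
        signature.inputs["F1"].name: [20.9],
        signature.inputs["F2"].name: [1.8],
        signature.inputs["F3"].name: [0.9],
    }
    output_name = signature.outputs["L1"].name

    # sess.run accepts tensor names directly, so no explicit graph lookup
    # should be needed.
    prediction = sess.run(output_name, feed_dict=feed_dict)
    print(prediction)  # hoping for a single FLOAT32 value

I assume I could inspect the exported model with the saved_model_cli tool to confirm the real signature and tensor names, but I'd like to know the canonical way to do this.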
This question is not a duplicate of the question posted here. This question focuses on a minimal example of performing inference on a SavedModel of any model class (not just tf.estimator) and on the syntax for specifying input and output node names.