Error with exporting TF2.2.0 model with tf.lookup.StaticHashTable for Serving
I'm using a StaticHashTable in a Lambda layer after the output layer of my tf.keras model. It's quite simple, actually: I have a text classification model, and I'm adding a simple Lambda layer that takes model.output and converts the predicted model id to more general labels. I can save this version of the model with model.save(... as H5 format ...) without any issue, and can load it back and use it without any problem.
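
For illustration, the Lambda layer looks roughly like the sketch below (the id-to-label mapping and variable names here are made up for the example, not my exact code):

import tensorflow as tf

# Hypothetical mapping from predicted class id to a more general label.
keys = tf.constant([0, 1, 2], dtype=tf.int64)
values = tf.constant(['negative', 'neutral', 'positive'])
label_table = tf.lookup.StaticHashTable(
    tf.lookup.KeyValueTensorInitializer(keys, values),
    default_value='unknown')

# Lambda layer after the classifier: argmax over the class probabilities,
# then look the id up in the static table.
labels = tf.keras.layers.Lambda(
    lambda probs: label_table.lookup(tf.argmax(probs, axis=-1)))(model.output)
model = tf.keras.Model(model.input, labels)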

The issue is that when I try to export my TF2.2.0 model for TF Serving, I can't find how to export it. Here is what I can do with TF1.X, or with TF2.X plus tf.compat.v1.disable_eager_execution():

import tensorflow as tf
from tensorflow.python.saved_model import builder as saved_model_builder

tf.compat.v1.disable_eager_execution()
version = 1
name = 'tmp_model'
export_path = f'/opt/tf_serving/{name}/{version}'
builder = saved_model_builder.SavedModelBuilder(export_path)

model_signature = tf.compat.v1.saved_model.predict_signature_def(
    inputs={
        'input': model.input
    }, 
    outputs={
        'output': model.output
    }
)

with tf.compat.v1.keras.backend.get_session() as sess:
    builder.add_meta_graph_and_variables(
        sess=sess,
        tags=[tf.compat.v1.saved_model.tag_constants.SERVING],
        signature_def_map={
            'predict': model_signature
        },
        # For initializing Hashtables
        main_op=tf.compat.v1.tables_initializer()
    )
    builder.save()
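
As a quick sanity check (a small sketch, assuming it runs in a fresh process where eager execution is still enabled), the exported directory can be loaded back and its signatures inspected:

import tensorflow as tf

# TF2's loader can also read TF1-style SavedModels; the signature keys come
# from the signature_def_map used above.
loaded = tf.saved_model.load('/opt/tf_serving/tmp_model/1')
print(list(loaded.signatures.keys()))  # expected: ['predict']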

This saves my model in the TF1.X format for serving, and I can use it without any issue. The thing is, I'm using an LSTM layer and I want to run my model on GPU. According to the documentation, if I disable eager mode, I can't use the GPU implementation of LSTM with TF2.2. And without going through the code above, I can't save my model for serving in the TF2.2 SavedModel format with StaticHashTables.

Here is how I'm trying to export my TF2.2 model, which uses a StaticHashTable in the final layer, and which gives the error below:

class MyModule(tf.Module):

    def __init__(self, model):
        super(MyModule, self).__init__()
        self.model = model
    
    @tf.function(input_signature=[tf.TensorSpec(shape=(None, 16), dtype=tf.int32, name='input')])
    def predict(self, input):
        result = self.model(input)
        return {"output": result}

version = 1
name = 'tmp_model'
export_path = f'/opt/tf_serving/{name}/{version}'

module = MyModule(model)
tf.saved_model.save(module, export_path, signatures={"predict": module.predict.get_concrete_function()})

Error:

AssertionError: Tried to export a function which references untracked object Tensor("2907:0", shape=(), dtype=resource).
TensorFlow objects (e.g. tf.Variable) captured by functions must be tracked by assigning them to an attribute of a tracked object or assigned to an attribute of the main object directly.
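
As far as I understand, the message is asking for something like the hypothetical sketch below, where label_table stands in for the StaticHashTable created inside my Lambda layer, but I'm not sure how to apply this when the table only lives inside the Lambda closure:

module = MyModule(model)
# Hypothetical: expose the lookup table as an attribute of the exported module
# so it becomes part of the tracked object graph.
module.label_table = label_table
tf.saved_model.save(module, export_path,
                    signatures={"predict": module.predict.get_concrete_function()})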

Any suggestions, or am I missing something when exporting a TF2.2 model that uses StaticHashTables in a final Lambda layer for TensorFlow Serving?

More info here: https://github.com/tensorflow/serving/issues/1719

Thanks!

Pentachlorophenol answered 13/8, 2020 at 16:25 Comment(0)

I had the same issue, and I solved it by creating a custom layer with the lookup transformation and then adding that layer to my model. Somebody else posted the answer here on Stack Overflow, but I cannot find it again, so I will reproduce it for you. The reason is that variables and other elements captured from outside must be trackable, and the only way I found to make them trackable was to create a custom layer, because layers are trackable and don't need additional assets when exporting.

This is the code. Here is the custom layer that performs the transformation before the model (it includes the tokenizer as a lookup from a static hash table, and then the padding):

class VocabLookup(tf.keras.layers.Layer):
    """Keras layer that tokenizes raw strings via a StaticHashTable lookup."""

    def __init__(self, word_index, **kwargs):
        self.word_index = word_index
        self.vocab = list(word_index.keys())
        self.indices = tf.convert_to_tensor(list(word_index.values()), dtype=tf.int64)
        vocab_initializer = tf.lookup.KeyValueTensorInitializer(self.vocab, self.indices)
        # Words missing from the vocabulary fall back to index 1.
        self.table = tf.lookup.StaticHashTable(vocab_initializer, default_value=1)
        super(VocabLookup, self).__init__(**kwargs)

    def build(self, input_shape):
        # Nothing to create here; the lookup table is built in __init__.
        self.built = True

    def sentences_transform(self, tx):
        # Lowercase, strip accents and punctuation, and collapse whitespace.
        x = tf.strings.lower(tx)
        x = tf.strings.regex_replace(x, "[,.:;]", " ")
        x = tf.strings.regex_replace(x, "á", "a")
        x = tf.strings.regex_replace(x, "é", "e")
        x = tf.strings.regex_replace(x, "í", "i")
        x = tf.strings.regex_replace(x, "ó", "o")
        x = tf.strings.regex_replace(x, "ú", "u")
        x = tf.strings.regex_replace(x, "ü", "u")
        x = tf.strings.regex_replace(x, "Á", "a")
        x = tf.strings.regex_replace(x, "É", "e")
        x = tf.strings.regex_replace(x, "Í", "i")
        x = tf.strings.regex_replace(x, "Ó", "o")
        x = tf.strings.regex_replace(x, "Ú", "u")
        x = tf.strings.regex_replace(x, "Ü", "u")
        x = tf.strings.regex_replace(x, "[?¿¡!@#$-_\?+¿{}*/]", "")
        x = tf.strings.regex_replace(x, " +", " ")
        x = tf.strings.strip(x)
        # Split into words, map each word to its vocabulary index,
        # then zero-pad every sentence to a fixed length of 191 tokens.
        x = tf.strings.split(x)
        x = self.table.lookup(x)
        x_as_vector = tf.reshape(x, [-1])
        zero_padding = tf.zeros([191] - tf.shape(x_as_vector), dtype=x.dtype)
        x = tf.concat([x_as_vector, zero_padding], 0)
        return x
        

    def call(self, inputs):
        # Apply the per-sentence transform to every element in the batch.
        x = tf.map_fn(lambda tx: self.sentences_transform(tx), elems=inputs, dtype=tf.int64)
        return x

    def get_config(self):
        return {'word_index': self.word_index}

In my case, I created the layer to receive the word_index from a tokenizer as an input. Then you can use a layer like this one inside your model:

import json

import tensorflow as tf
from tensorflow.keras.models import load_model
from tensorflow.keras.preprocessing.text import tokenizer_from_json

# Rebuild the tokenizer from its saved JSON and load the trained classifier.
with open(<tokenizer_path>) as f:
    data = json.load(f)
    tokenizer = tokenizer_from_json(data)

moderator = load_model(<final model path ('.h5')>)
word_index = tokenizer.word_index

# Wrap the classifier: raw strings in, lookup/padding layer, then the model.
text_bytes = tf.keras.Input(shape=(), name='image_bytes', dtype=tf.string)
x = VocabLookup(word_index)(text_bytes)
output = moderator(x)
model = tf.keras.models.Model(text_bytes, output)

If you print the model summary, you will see something like this:

model.summary()
Model: "functional_57"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
image_bytes (InputLayer)     [(None,)]                 0         
_________________________________________________________________
vocab_lookup_60 (VocabLookup (None, None)              0         
_________________________________________________________________
sequential_1 (Sequential)    (None, 1)                 1354369   
=================================================================
Total params: 1,354,369
Trainable params: 1,354,369
Non-trainable params: 0

With these steps you can finally save the model as a TF2 SavedModel for serving:

save_path = <your_serving_model_path>
tf.saved_model.save(model, save_path)
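
Before pointing TensorFlow Serving at the export, you can also load it back and call the default signature as a quick check (a small sketch; the input string is just an example, and the keyword argument matches the input layer name shown in the summary above):

import tensorflow as tf

# Load the exported SavedModel and call its serving signature with raw text.
loaded = tf.saved_model.load(save_path)
infer = loaded.signatures['serving_default']
print(infer(image_bytes=tf.constant(['some example sentence'])))
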
Nesmith answered 20/8, 2020 at 13:31 Comment(3)
I have a few questions here. I'm trying to save a vocab lookup layer like the one you've mentioned and use it later for predictions. What I can't understand is: during prediction, how do I initialize the vocab layer with the values seen while building that layer? In the interest of latency I can't re-initialize my vocab layer from scratch.Enzyme
I'd have to see your specific use case, but once the word_index is inside the layer you cannot modify it, so any new word that's not in the initial word_index will be tagged as unknown.Nesmith
In my case, this solution didn't fix it. I solved the problem by putting self.table (which is a tf.lookup.StaticHashTable) into the build method.Magnetite
