How to debug Keras in TensorFlow 2.0?
Actually, I find the problem already in TensorFlow 1.13.0 (TensorFlow 1.12.0 works fine).

My code is a simple example:

def lambda_layer(temp):
    print(temp)
    return temp

which is used as a Lambda layer in my Keras model. In TensorFlow 1.12.0, print(temp) outputs the actual data, like the following:

[<tf.Tensor: id=250, shape=(1024, 2, 32), dtype=complex64, numpy=
array([[[ 7.68014073e-01+0.95353246j,  7.01403618e-01+0.64385843j,
          8.30483198e-01+1.0340731j , ..., -8.88018191e-01+0.4751519j ,
         -1.20197642e+00+0.6313924j , -1.03787208e+00+0.22964947j],
        [-7.94382274e-01+0.56390345j, -4.73938555e-01+0.55901265j,
         -8.73749971e-01+0.67095983j, ..., -5.81580341e-01-0.91620034j,
         -7.04443693e-01-1.2709806j , -3.23135853e-01-1.0887597j ]],

This is because I use 1024 as the batch_size. But when I upgrade to TensorFlow 1.13.0 or TensorFlow 2.0, the same code outputs

Tensor("lambda_1/truediv:0", shape=(None, 1), dtype=float32)

This is a problem, since I can no longer see where the exact mistakes are. So, any idea how to solve it?

Burnside answered 15/4, 2019 at 5:16 Comment(0)

You see that output because the Keras model is being converted to its graph representation, so print prints the tf.Tensor's graph description.

To see the contents of a tf.Tensor when using TensorFlow 2.0, you should use tf.print instead of print: tf.print is converted into a graph op that runs every time the model executes, while the built-in print runs only once, when the function is traced.
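As a minimal sketch of the difference (the layer function and model here are invented for illustration), a Lambda layer that uses tf.print shows the actual tensor values at runtime, while a plain print only shows the symbolic tensor during tracing:

```python
import tensorflow as tf

def debug_layer(x):
    # tf.print becomes a graph op and runs on every forward pass,
    # printing the actual runtime values of the tensor.
    tf.print("runtime values:", x, summarize=4)
    # The built-in print runs only while the function is being traced,
    # so it shows the symbolic tensor description, not the data.
    print("traced tensor:", x)
    return x

inputs = tf.keras.Input(shape=(2,))
outputs = tf.keras.layers.Lambda(debug_layer)(inputs)
model = tf.keras.Model(inputs, outputs)

out = model.predict(tf.ones((3, 2)), verbose=0)
```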

Calibrate answered 15/4, 2019 at 8:25 Comment(4)
Thank you for your kind response, it works! However, it is really cumbersome to debug by adding a lot of tf.print calls to check variables. In TensorFlow 1.12.0, I could set a breakpoint inside the lambda layer and the program would stop there. In 2.0, however, the program does not stop in the lambda layer once training begins, so I cannot debug easily (my custom loss function is complicated, and I really need to check many variables to make sure it is right). Do you have any idea? Burnside
Once the source code gets converted into a graph, there is little you can do to debug easily. Probably the best thing you can do is to first write the model using Keras and manually write the training loop using tf.GradientTape and all the eager-mode features. That way you can step through with a debugger and debug easily. Then, if you want, you can throw away the custom training loop and go back to Keras (or better, just decorate the training loop with @tf.function so the loop is converted to a graph and sped up). Calibrate
Thank you, that is probably the best solution so far. I really liked the debugging workflow in TF 1.12.0 and am a little disappointed by its absence in TF 2.0. Hopefully there will be a more convenient debugging method in the future. Burnside
Just go pure eager (the custom training loop with the tape is really powerful) and then convert to a graph. You'll get the best of both worlds, trust me :) However, since this answer solved your question, please remember to mark it as accepted! Calibrate
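The eager-mode workflow suggested in these comments can be sketched roughly as follows (the toy model, loss, and data are invented placeholders, not the asker's actual code):

```python
import tensorflow as tf

# Toy model and data, purely for illustration.
model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
optimizer = tf.keras.optimizers.SGD(learning_rate=0.1)
x = tf.random.normal((32, 4))
y = tf.random.normal((32, 1))

def train_step(x, y):
    # Without @tf.function this runs eagerly: you can set breakpoints
    # here, inspect tensors with .numpy(), and use an ordinary debugger.
    with tf.GradientTape() as tape:
        pred = model(x, training=True)
        loss = tf.reduce_mean(tf.square(pred - y))
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss

for step in range(5):
    loss = train_step(x, y)

# Once the loop works, decorate train_step with @tf.function to
# compile it into a graph and regain the speed of graph execution.
```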
