How can I implement the Kullback-Leibler loss in TensorFlow?
I need to minimize a KL-divergence loss in TensorFlow.

I tried the function tf.contrib.distributions.kl(dist_a, dist_b, allow_nan=False, name=None), but it failed.

I tried to implement it manually:

def kl_divergence(p, q):
    return p * tf.log(p / q) + (1 - p) * tf.log((1 - p) / (1 - q))

Is it correct?

Lowminded answered 8/4, 2017 at 17:52 Comment(1)
Possible duplicate of KL Divergence in TensorFlow – Comedo

What you have there is the element-wise KL divergence between two Bernoulli distributions, not the KL divergence between p and q treated as whole distributions; that should be something like:

def kl_divergence(p, q):
    return tf.reduce_sum(p * tf.log(p / q))

This assumes that p and q are both 1-D float tensors of the same shape, and that the values of each sum to 1.

It should also work if p and q are equally sized mini-batches of 1-D tensors that obey the above constraints.
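For the mini-batch case, here is a sketch assuming TF 2.x eager mode (where tf.log has moved to tf.math.log): summing over the last axis gives one KL value per row.

```python
import tensorflow as tf

def kl_divergence(p, q):
    # Sum over the last axis so each row of the mini-batch
    # gets its own scalar KL value.
    return tf.reduce_sum(p * tf.math.log(p / q), axis=-1)

# Two distributions per batch; each row sums to 1.
p = tf.constant([[0.5, 0.5], [0.9, 0.1]])
q = tf.constant([[0.25, 0.75], [0.5, 0.5]])
kl = kl_divergence(p, q)  # shape (2,), one KL value per row
```

To get a single scalar loss for the optimizer, you can then take tf.reduce_mean(kl) over the batch.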

Admiralty answered 8/4, 2017 at 17:56 Comment(8)
Thanks a lot. And in the case that p and q are multidimensional? – Lowminded
Do you mean the case where p and q are mini-batches of distributions that you want to optimize? I think what I have should be fine for that case as well. If it's not that, then I would need more context. – Admiralty
I get nan when I try to compute the division 0/0. – Lowminded
I add a constant = 0.00001 to each p and q to avoid nan; is that correct? – Lowminded
@srabb That is one option; you can also use tf.maximum so it doesn't modify other results. I wonder if there is a better way, though. – Admiralty
Adding a small constant is the version I see in TensorFlow examples from Google; I would guess this is marginally faster than doing a max. – Admiralty
Instead of adding a constant to handle zeros, smooth it with a uniform distribution. – Rehearing
tf.log no longer exists; use tf.math.log instead. – Fulcrum
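Putting the comments together, here is a TF 2.x sketch that uses tf.math.log and guards against zeros, clipping with tf.clip_by_value rather than adding a constant everywhere (the epsilon value 1e-7 is an arbitrary choice, not from the original answer):

```python
import tensorflow as tf

EPS = 1e-7  # small constant, as suggested in the comments, to avoid log(0) and 0/0

def kl_divergence(p, q, eps=EPS):
    # Clip both distributions away from zero before dividing and taking the log;
    # this avoids nan without shifting values that are already well above eps.
    p = tf.clip_by_value(p, eps, 1.0)
    q = tf.clip_by_value(q, eps, 1.0)
    return tf.reduce_sum(p * tf.math.log(p / q), axis=-1)

p = tf.constant([[1.0, 0.0]])  # contains a zero that would otherwise give nan
q = tf.constant([[0.5, 0.5]])
kl = kl_divergence(p, q)  # finite, no nan
```

Note that recent TensorFlow versions also ship a built-in tf.keras.losses.KLDivergence, which may be preferable to rolling your own.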