I need to minimize a KL-divergence loss in TensorFlow.
I tried the function tf.contrib.distributions.kl(dist_a, dist_b, allow_nan=False, name=None), but it didn't work for me.
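
For reference, here is roughly how I tried calling it (a minimal sketch with made-up probabilities; I am assuming a TF version where tf.contrib.distributions.kl still exists, and that the inputs are Bernoulli distributions — newer versions rename the function to kl_divergence() and the constructor argument to probs=):

    import tensorflow as tf

    ds = tf.contrib.distributions

    # kl() compares Distribution objects, not plain probability tensors
    dist_a = ds.Bernoulli(p=[0.3, 0.7])  # probs= on newer TF versions
    dist_b = ds.Bernoulli(p=[0.5, 0.5])

    kl = ds.kl(dist_a, dist_b)  # element-wise KL(dist_a || dist_b)

    with tf.Session() as sess:
        print(sess.run(kl))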
So I tried to implement it manually:

    import tensorflow as tf

    def kl_divergence(p, q):
        # element-wise KL(Bernoulli(p) || Bernoulli(q))
        return p * tf.log(p / q) + (1 - p) * tf.log((1 - p) / (1 - q))
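
And this is how I call it (a usage sketch; the probabilities are made up, and the epsilon clipping is my own workaround to keep the logs and divisions away from 0):

    eps = 1e-7  # my own choice, to avoid NaN/inf when p or q hits 0 or 1
    p = tf.clip_by_value(tf.constant([0.3, 0.7]), eps, 1 - eps)
    q = tf.clip_by_value(tf.constant([0.5, 0.5]), eps, 1 - eps)

    loss = tf.reduce_mean(kl_divergence(p, q))  # scalar loss to minimize

    with tf.Session() as sess:
        print(sess.run(loss))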
Is my implementation correct?