Hello, I have a network that produces logits/outputs like this:
logits = tf.placeholder(tf.float32, [None, 128, 64, 64])  # network outputs
y = tf.placeholder(tf.float32, [None, 128, 64, 64])        # ground truth / targets
--> the y ground-truth values are scaled down from [0, 255] to [0, 1],
since I have read that using the range [0, 1] improves performance.
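For context, the scaling is done roughly like this (a minimal sketch; loading the ground truth as uint8 arrays and the array shape are assumptions on my side):

import numpy as np

# Hypothetical example: ground-truth images loaded as uint8 arrays in [0, 255]
y_raw = np.random.randint(0, 256, size=(2, 128, 64, 64), dtype=np.uint8)
y_scaled = y_raw.astype(np.float32) / 255.0  # values now in [0, 1]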
Now I want to calculate the RMSE / EuclideanLoss like this:
loss = tf.reduce_mean(tf.square(logits - y))
or
loss = tf.sqrt(tf.reduce_mean(tf.square(tf.subtract(y, logits))))
I am not sure which one is better.
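As a self-contained sketch of the two variants (TF 1.x API; the random arrays just stand in for real predictions and targets), note that the second one is simply the square root of the first:

import numpy as np
import tensorflow as tf

logits = tf.placeholder(tf.float32, [None, 128, 64, 64])
y = tf.placeholder(tf.float32, [None, 128, 64, 64])

mse = tf.reduce_mean(tf.square(logits - y))  # mean squared error over all elements
rmse = tf.sqrt(mse)                          # RMSE is the square root of the MSE

with tf.Session() as sess:
    preds = np.random.rand(2, 128, 64, 64).astype(np.float32)
    targets = np.random.rand(2, 128, 64, 64).astype(np.float32)
    print(sess.run([mse, rmse], feed_dict={logits: preds, y: targets}))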
When doing so, my loss values start at roughly 1 and then quickly go down to 2.5e-4. When I use the EuclideanLoss in Caffe for the same network, my loss values start at roughly 1000 and go down to 200. Am I doing anything wrong in TensorFlow, or why are the loss values that small? I can't really track the loss values in TensorBoard since they are so small. Can anyone help me?
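For reference, this is roughly how the loss ends up in TensorBoard via a scalar summary (a minimal TF 1.x sketch; the log directory and the random feed data are placeholders, not the actual training loop):

import numpy as np
import tensorflow as tf

logits = tf.placeholder(tf.float32, [None, 128, 64, 64])
y = tf.placeholder(tf.float32, [None, 128, 64, 64])
loss = tf.reduce_mean(tf.square(logits - y))

loss_summary = tf.summary.scalar('loss', loss)   # scalar summary shown in TensorBoard
writer = tf.summary.FileWriter('/tmp/tf_logs')   # hypothetical log directory

with tf.Session() as sess:
    preds = np.random.rand(1, 128, 64, 64).astype(np.float32)
    targets = np.random.rand(1, 128, 64, 64).astype(np.float32)
    summary_val = sess.run(loss_summary, feed_dict={logits: preds, y: targets})
    writer.add_summary(summary_val, global_step=0)
    writer.close()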
Comments:
[…] logits before using it in the loss, is that true? Could you show a bit more of the code? – Shorten
logits = tf.nn.conv2d(inputs, weights, [1,strides,strides,1], padding='VALID', data_format='NHWC') @FlorentinHennecker – Ursuline