How do you de-normalise?
Once you normalise your data so the values are between 0 and 1, how do you de-normalise it so you can interpret the result?

When you normalise your data and feed it to your network, the output is also normalised. How do you reverse the normalisation to get back the original scale?

Panhellenic answered 22/7, 2017 at 18:28 Comment(7)
You usually don't need to do that; your network should be capable of undoing it. What you usually undo is your preprocessing. What are you trying to do? Can you be more specific? Autoencoding?Hairraising
I was feeding batch-normalised data to the network, and the output is also batch-normalised. It is unreadable, like 0.333 etc.Panhellenic
Are you sure you really mean batch normalized data? Batch normalization is not usually applied to data, you might just mean plain normalization.Penicillin
I apologise, plain normalisation is what I meant, where the data is scaled between 0 and 1.Panhellenic
This question is not about programming, so I'm voting to close it. Consider doing further research and having a look at questions from Cross Validated and Data Science SE.Persas
In the event that you want a technical solution to the problem, then you need to write a minimal reproducible example that better explains your issue and shows what you have tried so far.Persas
Why you gotta be so rude, can't you help a brother out?Panhellenic
If you have some data d that you normalize to 0-1 by doing (something like)

min_d = np.min(d)
max_d = np.max(d)
normalized_d = (d - min_d) / (max_d - min_d)

you can de-normalize this by inverting the normalization. In this case

denormalized_d = normalized_d * (max_d - min_d) + min_d
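As a quick sanity check (using hypothetical data), the inversion can be verified numerically by normalising and then de-normalising and confirming the round trip recovers the original values:

```python
import numpy as np

# Hypothetical data for demonstration
d = np.array([3.0, 7.0, 12.0, 21.0])

# Min-max normalization to [0, 1]
min_d = np.min(d)
max_d = np.max(d)
normalized_d = (d - min_d) / (max_d - min_d)

# Invert the min-max scaling
denormalized_d = normalized_d * (max_d - min_d) + min_d

# Round trip recovers the original data
assert np.allclose(denormalized_d, d)
```

Note that you must keep min_d and max_d from the training data around at prediction time, since the inversion depends on them.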
Roughspoken answered 22/7, 2017 at 22:7 Comment(1)
Thank you, you are a life saver. Thank you so much you have explained it very clearly thank you!Panhellenic
Additionally, since the question is tagged with keras: if you normalize the data using Keras's built-in Normalization layer, you can also de-normalize it with a Normalization layer.

You need to set the invert parameter to True and use the mean and variance from the original layer, or adapt the new layer to the same data.
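For intuition, the Normalization layer standardizes inputs to zero mean and unit variance, so the inverse is simply multiplying by the standard deviation and adding back the mean. A minimal NumPy sketch of that inversion (with hypothetical data matching the example below):

```python
import numpy as np

# Hypothetical data, same values as the Keras example below
x = np.array([2.5, 4.5, 17.5, 10.5])

# Standardize: subtract the mean, divide by the standard deviation
mean, variance = x.mean(), x.var()
z = (x - mean) / np.sqrt(variance)

# Invert: multiply by the standard deviation, add back the mean
x_back = z * np.sqrt(variance) + mean
assert np.allclose(x_back, x)
```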

# Imports needed for this example
import pandas as pd
import tensorflow as tf

# Create a variable for demonstration purposes
test_var = pd.Series([2.5, 4.5, 17.5, 10.5], name='test_var')

# Create a normalization layer and adapt it to the data
normalizer_layer = tf.keras.layers.Normalization(axis=-1)
normalizer_layer.adapt(test_var)

# Create a denormalization layer using the mean and variance from the original layer
denormalizer_layer = tf.keras.layers.Normalization(axis=-1, mean=normalizer_layer.mean, variance=normalizer_layer.variance, invert=True)

# Or create a denormalization layer and adapt it to the same data
# denormalizer_layer = tf.keras.layers.Normalization(invert=True)
# denormalizer_layer.adapt(test_var)

# Normalize and denormalize the example variable
normalized_data = normalizer_layer(test_var)
denormalized_data = denormalizer_layer(normalized_data)

# Show the results
print("test_var")
print(test_var)

print("normalized test_var")
print(normalized_data)

print("denormalized test_var")
print(denormalized_data)

See more: https://www.tensorflow.org/api_docs/python/tf/keras/layers/Normalization

Ive answered 9/4, 2023 at 15:21 Comment(2)
This answer helped me a lot in understanding how to use the Normalization layer, but unfortunately I get unexpected results when I run the above code: print(denormalized_data) gives tf.Tensor([[44.911316 46.911316 59.91132 52.911316]], shape=(1, 4), dtype=float32), where I expected the values to go back to the original [2.5, 4.5, 17.5, 10.5]. Is there some behaviour I'm not thinking of that would explain this?Jadda
Addition to my previous comment: it's possible this has something to do with this bugJadda

© 2022 - 2025 — McMap. All rights reserved.