How to denormalise (de-standardise) neural net predictions after normalising input data

How does one return neural-net predictions to the original data scale after normalising the input data? The normalisation was done with the standard-deviation (z-score) method. A related problem has already been discussed here: the network returned the same value for every input. I followed the advice there and normalised the data. Is there a straightforward way to get adequate predictions (i.e. ones that differ from each other) back on the non-normalised scale?

With normalised inputs the network does produce relatively acceptable outputs (predictions), but it seems to overfit. So, how can I avoid the overfitting?
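For reference, this is roughly how the inputs were standardised (a minimal NumPy sketch; the array contents are made-up examples):

```python
import numpy as np

# X: inputs as a 2-D array (samples x features) -- dummy values for illustration
X = np.array([[1.0, 200.0],
              [2.0, 400.0],
              [3.0, 600.0]])

# z-score standardisation: subtract the column mean, divide by the column stdev
X_mean, X_std = X.mean(axis=0), X.std(axis=0)
X_norm = (X - X_mean) / X_std
```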

Lewes answered 31/8, 2015 at 21:15 Comment(0)

If you have also standardized your targets using:

     y - mean(y)
y' = -----------
      stdev(y)

Then you just have to solve the above for y:

y = y' * stdev(y) + mean(y)

Then substitute your neural network's prediction for y' to get the prediction back on the original scale.
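As a minimal sketch (assuming NumPy arrays, and that the mean and standard deviation are computed from the training targets, not from the predictions), the de-standardisation looks like this:

```python
import numpy as np

# Training targets -- the mean/stdev used for standardisation come from these
y_train = np.array([3.0, 5.0, 7.0, 9.0])
y_mean, y_std = y_train.mean(), y_train.std()

# Standardised targets that the network was trained on
y_train_norm = (y_train - y_mean) / y_std

# y_pred_norm: what the network outputs, still on the standardised scale
# (dummy values here in place of actual network predictions)
y_pred_norm = np.array([-1.2, 0.1, 1.3])

# Invert the transformation: y = y' * stdev(y) + mean(y)
y_pred = y_pred_norm * y_std + y_mean
print(y_pred)
```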

For classification, you shouldn't standardize the targets. For regression, you should.

Your question is not clear about the overfitting part in its current form.

Sizeable answered 1/9, 2015 at 7:49 Comment(1)
Thanks! I thought of this, but I had doubts about the inverse transformation because the mean and stdev are taken from the training targets rather than from the predicted (output) values. How is this justified in econometrics or statistics? What is the main reason we should normalise inputs for regression? And is it usual that it is impossible to get acceptable predictions in PyBrain when the inputs are not normalised? – Lewes
