Using Gekko's brain module, how do I determine how many layers and what type of layer to use to solve a deep learning problem?

I am learning to use Gekko's brain module for deep learning applications.

I have been setting up a neural network to learn the numpy.cos() function and then produce similar results.

I get a good fit when the bounds on my training data are:

x = np.linspace(0,2*np.pi,100)

But the model falls apart when I try to extend the bounds to:

x = np.linspace(0,3*np.pi,100)

What do I need to change in my neural network to increase the flexibility of my model so that it works for other bounds?

This is my code:

from gekko import brain
import numpy as np
import matplotlib.pyplot as plt

#Set up neural network 
b = brain.Brain()
b.input_layer(1)
b.layer(linear=2)
b.layer(tanh=2)
b.layer(linear=2)
b.output_layer(1)

#Train neural network
x = np.linspace(0,2*np.pi,100)
y = np.cos(x)
b.learn(x,y)

#Calculate using trained neural network
xp = np.linspace(-2*np.pi,4*np.pi,100)
yp = b.think(xp)

#Plot results
plt.figure()
plt.plot(x,y,'bo')
plt.plot(xp,yp[0],'r-')
plt.show()

These are the results when training to 2*pi (a good fit):

[plot omitted]

These are the results when training to 3*pi (the fit falls apart):

[plot omitted]

Chkalov answered 21/11, 2019 at 19:12

I get the result shown in the plot below if I increase the number of nodes in the hidden layer to 5 with b.layer(tanh=5).
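For reference, that is the only change to the question's network; a minimal sketch using the same Gekko brain calls as the question's code:

from gekko import brain

b = brain.Brain()
b.input_layer(1)
b.layer(linear=2)
b.layer(tanh=5)   # widened from 2 to 5 nodes
b.layer(linear=2)
b.output_layer(1)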

There are probably multiple answers to this question, though. You could also increase the number of layers or change the activation function, and you can always try different solvers. Finding the best network architecture is an optimization problem of its own; some people have tried to solve it with genetic algorithms, for example:

https://arxiv.org/pdf/1808.03818.pdf

[plot: cos(x) fit with b.layer(tanh=5)]

Popele answered 21/11, 2019 at 19:24

Data-driven (black-box) models inherently suffer from "overfitting" and "underfitting" problems. If you give your model too many degrees of freedom, it will fit the training data set perfectly while fitting the validation data set poorly.

The same applies to neural-network models: the more layers (and nodes) you add, the more easily the model overfits.

There are several ways to avoid overfitting.

Generally, you balance the model errors between the training set and the validation set: stop adding layers (or nodes) at the point where the validation error starts to increase while the training error keeps decreasing.
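As an illustration, here is a minimal sketch of that idea with Gekko's brain module. The holdout split, the candidate layer widths, and the mean-squared-error comparison are my own assumptions for illustration, not from the original post:

from gekko import brain
import numpy as np

x = np.linspace(0, 3*np.pi, 100)
y = np.cos(x)

# hold out every fourth point as a validation set (assumed split)
val = np.arange(len(x)) % 4 == 0
x_tr, y_tr = x[~val], y[~val]
x_va, y_va = x[val], y[val]

for nodes in [2, 5, 10]:  # candidate hidden-layer widths (assumed)
    b = brain.Brain()
    b.input_layer(1)
    b.layer(linear=2)
    b.layer(tanh=nodes)
    b.layer(linear=2)
    b.output_layer(1)
    b.learn(x_tr, y_tr)  # train on the training subset only
    # compare mean squared error on both subsets
    e_tr = np.mean((np.array(b.think(x_tr)[0]) - y_tr)**2)
    e_va = np.mean((np.array(b.think(x_va)[0]) - y_va)**2)
    print(nodes, e_tr, e_va)  # stop growing once e_va starts rising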

Rok answered 22/11, 2019 at 17:35
