Are evolutionary algorithms and neural networks used in the same domains? [closed]

I am trying to get a feel for the difference between the various classes of machine-learning algorithms.

I understand that the implementations of evolutionary algorithms are quite different from the implementations of neural networks.

However, they both seem to be geared toward determining a correlation between inputs and outputs from a potentially noisy set of training/historical data.

From a qualitative perspective, are there problem domains that are better targets for neural networks as opposed to evolutionary algorithms?

I've skimmed some articles that suggest using them in a complementary fashion. Is there a decent example of a use case for that?

Oxley answered 9/3, 2009 at 22:34 Comment(3)
There's a slight misconception here: evolutionary algorithms neither require nor usually use any sort of historical or training data, but tend to operate directly on the live data. However, you must know how to measure a solution's fitness adequately.Tullis
I’m voting to close this question because it is not about programming as defined in the help center but about ML theory and/or methodology - please see the intro and NOTE in stackoverflow.com/tags/machine-learning/infoFlaky
As OP, I agree it should be closed, but just wanted to defend myself: this question was asked in the early days of StackOverflow, before there was a StackExchange network or any kind of community gravitation towards what questions were acceptable on SO. The question seems nonsensical by today's standards of this website, but questions like this one that are no longer topical were once commonplace and built the early community that evolved into what it is today. Cheers!Oxley

Here is the deal: in machine learning problems, you typically have two components:

a) The model (function class, etc)

b) Methods of fitting the model (optimization algorithms)

Neural networks are a model: given a layout and a setting of weights, the neural net produces some output. There exist some canonical methods of fitting neural nets, such as backpropagation, contrastive divergence, etc. However, the big point of neural networks is that if someone gave you the 'right' weights, you'd do well on the problem.

Evolutionary algorithms address the second part -- fitting the model. Again, there are some canonical models that go with evolutionary algorithms: for example, evolutionary programming typically tries to optimize over all programs of a particular type. However, EAs are essentially a way of finding the right parameter values for a particular model. Usually, you write your model parameters in such a way that the crossover operation is a reasonable thing to do and turn the EA crank to get a reasonable setting of parameters out.
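
To make "turning the EA crank" concrete, here is a minimal genetic-algorithm sketch (my own toy illustration, not from the original answer): the parameter vector is the genome, crossover splices two parents, and mutation perturbs a few entries. The fitness function, negative squared distance to a made-up hidden target, is arbitrary and exists only to keep the example self-contained.

    import random

    TARGET = [0.5, -1.2, 3.0, 0.0]   # made-up optimum the EA must find

    def fitness(genome):
        # Higher is better: negative squared distance to the hidden target.
        return -sum((g - t) ** 2 for g, t in zip(genome, TARGET))

    def crossover(a, b):
        # Single-point crossover: prefix of one parent, suffix of the other.
        point = random.randrange(1, len(a))
        return a[:point] + b[point:]

    def mutate(genome, rate=0.2, scale=0.5):
        # Perturb each gene with probability `rate`.
        return [g + random.gauss(0, scale) if random.random() < rate else g
                for g in genome]

    population = [[random.uniform(-5, 5) for _ in range(len(TARGET))]
                  for _ in range(50)]
    for generation in range(200):
        population.sort(key=fitness, reverse=True)
        parents = population[:10]                 # selection: keep the fittest
        population = parents + [
            mutate(crossover(random.choice(parents), random.choice(parents)))
            for _ in range(40)
        ]

    print(max(population, key=fitness))           # should land near TARGET

Crossover works well in this toy precisely because each coordinate contributes to fitness independently -- which is exactly the caveat raised next.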

Now, you could, for example, use evolutionary algorithms to train a neural network, and I'm sure it's been done. However, the critical bit EAs require to work is that the crossover operation must be a reasonable thing to do -- by taking part of the parameters from one reasonable setting and the rest from another reasonable setting, you'll often end up with an even better parameter setting. In most cases where EAs are used, this doesn't hold, and they end up being something like simulated annealing, only more confusing and inefficient.

Bursary answered 10/3, 2009 at 1:44 Comment(0)

Problems that require "intuition" are better suited to ANNs, for example handwriting recognition. You train a neural network with a huge amount of input and rate it until it's done (this takes a long time), but afterwards you have a blackbox algorithm/system that can "guess" the handwriting, so you keep your little brain and use it as a module for many years or something, because training a quality ANN for a complex problem can take months in the worst case, plus some luck.

Most evolutionary algorithms, on the other hand, "calculate" an ad-hoc solution on the spot, in a sort of hill-climbing pattern.

Also, as pointed out in another answer, during runtime an ANN can "guess" faster than most evolutionary algorithms can "calculate". However, one must be careful, since the ANN is just "guessing" and it might be wrong.

Ethos answered 10/3, 2009 at 0:52 Comment(0)

Evolutionary algorithms (of which genetic algorithms are the best-known kind) and neural networks can both be used for similar objectives, and the other answers describe the difference well.

However, there is one specific case where evolutionary algorithms are a better fit than neural networks: when the solution space is non-differentiable.

Indeed, neural networks learn by gradient descent via backpropagation (or a similar algorithm). Computing a gradient relies on derivatives, which requires a continuous, differentiable space -- in other words, one where you can shift gradually and progressively from one solution to the next.

If your solution space is non-differentiable (i.e., you can choose solution A, or B, or C, but nothing in between like 0.5 A + 0.5 B, so that some solutions are impossible), then you are trying to fit a non-differentiable function, and neural networks cannot work.

(Side note: discrete state spaces partially share the same issue, and so are a common difficulty for most algorithms, but there is usually some work done to get around it; for example, decision trees can work easily on categorical variables, while other models like SVMs have more difficulty and generally require encoding categorical variables into continuous values.)

In this case, evolutionary and genetic algorithms are perfect, one could even say a godsend, since they can "jump" from one solution to the next without any issue. They don't care that some solutions are impossible, nor whether the gaps between subsets of the possible state space are big or small; evolutionary algorithms can jump randomly far away or close by until they find appropriate solutions.
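
To make the "jumping" concrete, here is a small sketch (my own, with made-up numbers): a purely discrete solution space -- which items to put in a bag under a weight limit -- where no gradient exists, searched by random bit flips alone. Flipping one bit is a nearby jump; flipping several at once is a far one.

    import random

    values   = [6, 5, 9, 7, 3]      # made-up item values
    weights  = [4, 3, 5, 5, 2]      # made-up item weights
    CAPACITY = 10

    def fitness(bits):
        # Overweight solutions score zero: a hard "gap" of impossible solutions.
        total_weight = sum(w for b, w in zip(bits, weights) if b)
        if total_weight > CAPACITY:
            return 0
        return sum(v for b, v in zip(bits, values) if b)

    best = [random.randint(0, 1) for _ in weights]
    for _ in range(1000):
        # Flip each bit with some probability: no gradient is ever computed.
        candidate = [b ^ (random.random() < 0.3) for b in best]
        if fitness(candidate) >= fitness(best):
            best = candidate

    print(best, fitness(best))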

Also worth mentioning is that evolutionary algorithms are less subject to the curse of dimensionality than most other machine learning algorithms, including neural networks. This might seem a bit counterintuitive, since convergence to a global maximum is not guaranteed and the procedure might seem slow to evolve towards a good solution, but in practice the selection procedure works fast and converges to a good local maximum.

This makes evolutionary algorithms a very versatile and generic tool for naively approaching any problem, and one of the very few tools that can deal with non-differentiable functions, discrete functions, or astronomically high-dimensional datasets.

Chaechaeronea answered 6/4, 2018 at 3:29 Comment(3)
very nice supplement! I particularly like your point that evolutionary algorithms can approach a problem naively. (Intellectual bias arising from the received "correct way of doing things" seems to be a problem in humans, as does not attempting to tackle problems because of their perceived difficulty or presumed unsolvability -- it seems to me a naive agent makes no such starting assumptions ;)Peptic
Neural nets are used for image and speech classification. Isn't this solution space discrete? If not, what makes it continuous?Sheik
@Sheik I oversimplified my answer by stating "discrete"; the correct term would be a "non-differentiable" state space. It can be continuous or discrete, but the problem is when you have a state space with "gaps" of impossible solutions, so that it's not differentiable anymore (you cannot compute derivatives, and thus cannot use statistics/likelihoods). This is not a problem for evolutionary or genetic algorithms, which can both solve that.Chaechaeronea

Evolutionary algorithms (EAs) are slow because they rely on unsupervised learning: EAs are told that some solutions are better than others, but not how to improve them. Neural networks are generally faster, being an instance of supervised learning: they know how to make a solution better by using gradient descent within a function space over certain parameters; this allows them to reach a valid solution faster. Neural networks are often used when there isn't enough knowledge about the problem for other methods to work.
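
For the curious, here is a tiny illustration of "gradient descent within a function space" (my own toy example, not from the answerer): fitting the single parameter a of the function family f(x) = a * x by descending the gradient of the squared error, on made-up data.

    # Fit f(x) = a * x to made-up data by gradient descent on squared error.
    xs = [1.0, 2.0, 3.0, 4.0]
    ys = [2.1, 3.9, 6.2, 8.0]      # roughly y = 2x

    a = 0.0                        # initial guess for the one parameter
    lr = 0.01                      # learning rate
    for _ in range(500):
        # d/da of mean((a*x - y)^2) is mean(2 * (a*x - y) * x)
        grad = sum(2 * (a * x - y) * x for x, y in zip(xs, ys)) / len(xs)
        a -= lr * grad             # tweak a in the direction that reduces error

    print(a)                       # converges near 2.0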

Batey answered 9/3, 2009 at 22:41 Comment(5)
Can you elaborate on "gradient descent within a function space" in layman's terms? Does that just mean neural networks converge on potential solutions faster by using a more sophisticated feedback mechanism as opposed to brute force?Oxley
That really depends on the problem domain and potential epistasis of the parameters in the solution space.Hamrnand
More sophisticated feedback is correct. A function space is just a bunch of functions; for example, f(x) = a * x for different values of a is a function space. Gradient descent here involves evaluating a particular function, taking a 'derivative' and tweaking the function in the correct direction.Bursary
Evolutionary algorithms do use randomness but this doesn't make them intrinsically any more slow. They may be slower for some functions but it also makes them proportionally less susceptible to getting stuck in local maxima, which is a desirable property.Tullis
"they just follow a random path". I don't think that is true. You could say "they randomly sample around their path". Regardless, that doesn't make them intrinsically slow, being no different from any stochastic sampling method in that respect.Mesmerize

Look at Neuroevolution (NE).

The current best methods are NEAT and HyperNEAT, by Kenneth Stanley.

Genetic algorithms only find a genome of some sort; it's great to make that genome encode a neural network, because then you get the reactive nature of the network rather than just a bunch of static genes.

There aren't many limits to what it can learn, but it takes time, of course. The network topology has to be evolved through the usual mutation and crossover, and the weights updated along with it. There is no backpropagation.

Also, you can train it with a fitness function, which is thus superior to backpropagation when you do not know what the output should be. Perfect for learning complex behaviour in systems that you do not know any optimal strategies for. The only problem is that it'll learn behaviour you didn't anticipate. Often that behaviour can be very alien, although it does exactly what you rewarded it for in the fitness function. Thus you'll spend as much time deriving fitness functions as you would have spent creating output sets for backpropagation :P
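
A heavily simplified sketch of the idea (my own illustration; fixed topology and mutation only, whereas real NEAT also evolves the topology and uses crossover with speciation): the network's weights are the genome, and only a fitness score guides the search -- no backpropagation anywhere. XOR is used as the fitness target purely to keep the sketch self-contained; in the setting described above, fitness would instead score behaviour with no known "correct" outputs.

    import math, random

    def forward(w, x1, x2):
        # A tiny fixed 2-2-1 network; w holds its 9 weights and biases.
        h1 = math.tanh(w[0] * x1 + w[1] * x2 + w[2])
        h2 = math.tanh(w[3] * x1 + w[4] * x2 + w[5])
        return math.tanh(w[6] * h1 + w[7] * h2 + w[8])

    # XOR cases are here only to make the sketch runnable end to end.
    CASES = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]

    def fitness(w):
        return -sum((forward(w, a, b) - t) ** 2 for a, b, t in CASES)

    population = [[random.gauss(0, 1) for _ in range(9)] for _ in range(30)]
    for _ in range(300):
        population.sort(key=fitness, reverse=True)
        elite = population[:5]
        # Mutation only -- no backpropagation anywhere.
        population = elite + [
            [wi + random.gauss(0, 0.3) for wi in random.choice(elite)]
            for _ in range(25)
        ]

    best = max(population, key=fitness)
    print([round(forward(best, a, b), 2) for a, b, _ in CASES])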

Oakleil answered 8/4, 2011 at 7:7 Comment(0)

In terms of problem domains, I'll compare artificial neural networks trained by backpropagation with evolutionary algorithms.

An evolutionary algorithm deploys a randomized beam search: your evolutionary operators develop candidates that are tested and compared by their fitness. Those operators are usually non-deterministic, and you can design them to find both candidates in close proximity and candidates further away in the parameter space, to overcome the problem of getting stuck in local optima.

However, the success of an EA approach greatly depends on the model you develop, which is a tradeoff between high expressive potential (you might overfit) and generality (the model might not be able to express the target function).

Because neural networks are usually multilayered, the parameter space is not convex and contains local optima that gradient descent algorithms might get stuck in. Gradient descent is a deterministic algorithm that only searches through close proximity. That's why neural networks are usually randomly initialised and why you should train more than one model.
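
A small sketch of that last point (my own toy example): deterministic gradient descent on a non-convex one-dimensional "loss" stalls in whatever local minimum is nearest, so it is restarted from several random initialisations and only the best run is kept.

    import math, random

    def loss(x):
        return math.sin(3 * x) + 0.1 * x * x    # non-convex: many local minima

    def dloss(x):
        return 3 * math.cos(3 * x) + 0.2 * x    # its derivative

    def descend(x, lr=0.01, steps=1000):
        # Deterministic gradient descent: only ever moves to nearby points.
        for _ in range(steps):
            x -= lr * dloss(x)
        return x

    # Restart from several random initialisations and keep the best result.
    starts = [random.uniform(-5, 5) for _ in range(10)]
    best = min((descend(x0) for x0 in starts), key=loss)
    print(best, loss(best))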

Moreover, since you know each hidden node in a neural network defines a hyperplane, you can design a neural network so that it fits your problem well. There are also some techniques to prevent neural networks from overfitting.

All in all, neural networks can be trained fast and get reasonable results with little effort (just try some parameters). In theory, a neural network that is large enough is able to approximate any target function, which on the other hand makes it prone to overfitting. Evolutionary algorithms require you to make a lot of design choices to get good results, the hardest probably being which model to optimise; but EAs are able to search through very complex problem spaces (in a manner you define) and get good results quickly. EAs can even stay successful when the problem (the target function) changes over time.

Tom Mitchell's Machine Learning Book: http://www.cs.cmu.edu/~tom/mlbook.html

Tighten answered 5/6, 2009 at 8:9 Comment(0)

Evolutionary algorithms (EAs) represent a manner of training a model, whereas neural nets (NNs) ARE a model. Most commonly throughout the literature, you will find that NNs are trained using the backpropagation algorithm. This method is very attractive to mathematicians, BUT it requires that you can express the error rate of the model as a mathematical formula. This is the case in situations where you know lots of input and output values for the function you are trying to approximate. The problem can then be modeled mathematically as the minimization of a loss function, which can be achieved thanks to calculus (and that is why mathematicians love it).

But neural nets are also useful for modeling systems which try to maximize or minimize some outcome whose formula is very difficult to model mathematically. For instance, a neural net could control the muscles of a cyborg to achieve running. At each time frame, the model would have to establish how much tension should be present in each muscle of the cyborg's body, based on the input from various sensors. It is impossible to provide such training data. EAs allow training by only providing a manner of evaluating the model. For our example, we would punish falling and reward the distance traveled across a surface (in a fixed timeframe). The EA would simply select the models which do best in this sense. The first generations suck but, surprisingly, after a few hundred generations, such individuals achieve very "natural" movements and manage to run without falling. Such models may also be capable of dealing with obstacles and external physical forces.
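
To sketch what "training by evaluation only" looks like in code (my own illustration; the simulate function below is a hypothetical stand-in for a physics simulator, faked so the selection loop is runnable):

    import random

    def simulate(weights):
        # Hypothetical stand-in for a physics simulation: a real version would
        # run the controller for a fixed timeframe and return
        # (distance_travelled, fell_over). Faked here so the loop runs.
        return sum(weights), max(abs(w) for w in weights) > 5

    def fitness(weights):
        distance, fell = simulate(weights)
        return -1000 if fell else distance      # punish falling, reward distance

    population = [[random.gauss(0, 1) for _ in range(20)] for _ in range(50)]
    for generation in range(100):
        population.sort(key=fitness, reverse=True)
        survivors = population[:10]             # select the models that do best
        population = survivors + [
            [w + random.gauss(0, 0.1) for w in random.choice(survivors)]
            for _ in range(40)
        ]

    print(fitness(max(population, key=fitness)))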

Spragens answered 13/10, 2016 at 9:44 Comment(0)
