What is evolutionary computation?
Evolutionary computation is a branch of artificial intelligence used heavily for complex optimization problems, including continuous optimization.
Evolutionary computation techniques are used to handle problems with far more variables than traditional algorithms can manage.
These computational models employ evolutionary algorithms, which use evolutionary processes to solve such complex problems. They rely on evolutionary principles such as inheritance, where traits from effective previous-generation models are passed on to future generations, and natural selection, which favors the most effective models.
Why do we use evolutionary computation?
Because evolutionary computing can produce tightly optimized solutions to a wide range of problems, it is used extensively in computer science. There are even variants created specifically for particular data structures and families of problems.
This branch of artificial intelligence is also employed in evolutionary biology for studying common aspects of general evolutionary processes.
What are the types of evolutionary algorithms?
There are various types of evolutionary algorithms. Here are the most significant ones:
- Genetic algorithms (GA)
- Genetic Programming (GP)
- Evolutionary Programming (EP)
- Evolutionary Strategies (ES)
Many more evolutionary algorithms also exist. These include Gene Expression Programming, Differential Evolution, Learning Classifier Systems, and Neuroevolution.
Genetic algorithms (GA)
Genetic algorithms are the most popular type of evolutionary algorithm. They represent candidate solutions as strings of numbers, most often binary strings, although encodings that reflect the structure of the problem tend to be the most effective.
These algorithms make use of operators such as mutation and recombination (crossover), and often apply both together.
One of the uses of genetic algorithms is selecting the right combination of variables to build a predictive model. Selecting the right subset of variables is essentially a combinatorial optimization problem.
The advantage of genetic algorithms is that they allow the best solution to emerge from the best of the prior solutions, improving the selection over time.
The whole idea behind genetic algorithms is to combine the different solutions generation after generation in order to extract the best genes (variables) from each, which helps create better-fitted individuals.
Genetic algorithms are also used for hyperparameter tuning, finding the maximum or minimum of a function, searching for a suitable neural network architecture (neuroevolution), and feature selection.
The idea of genetic algorithms (GA) is to generate a set of random candidate solutions representing different variables, and then combine the best solutions in an iterative process. The basic genetic algorithm operations are selection (picking the fittest solutions in a generation), crossover (creating two new individuals based on the genes of two parent solutions), and mutation (randomly changing a gene in an individual).
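The three operations above can be sketched with a minimal genetic algorithm. This toy example maximizes the number of 1-bits in a binary string (the classic "OneMax" problem, chosen here purely for illustration); the fitness function, population size, and rates are illustrative assumptions, not part of any particular library.

```python
import random

random.seed(0)

GENOME_LEN = 20      # bits per individual
POP_SIZE = 30
GENERATIONS = 60
MUTATION_RATE = 0.02

def fitness(bits):
    # OneMax: count of 1-bits; the optimum is the all-ones string.
    return sum(bits)

def select(pop):
    # Tournament selection: pick the fitter of two random individuals.
    a, b = random.sample(pop, 2)
    return max(a, b, key=fitness)

def crossover(p1, p2):
    # Single-point crossover producing two children.
    point = random.randrange(1, GENOME_LEN)
    return p1[:point] + p2[point:], p2[:point] + p1[point:]

def mutate(bits):
    # Flip each bit with a small probability.
    return [b ^ 1 if random.random() < MUTATION_RATE else b for b in bits]

pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
       for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    next_pop = []
    while len(next_pop) < POP_SIZE:
        c1, c2 = crossover(select(pop), select(pop))
        next_pop += [mutate(c1), mutate(c2)]
    pop = next_pop

best = max(pop, key=fitness)
print(fitness(best))
```

Over successive generations, selection and crossover concentrate the 1-bits while mutation keeps introducing variation, so the best individual approaches the all-ones string.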
Genetic Programming (GP)
Here, the solutions to problems are computer programs. The ability of these computer programs to solve computational problems is what determines their fitness.
Genetic Programming (GP) is essentially an automatic programming technique that evolves computer programs which solve, or at least approximately solve, problems. It involves ‘breeding’ programs by continuously improving an initially random set of programs.
Improvements are made by stochastic variation of the programs and by selection according to predefined criteria for judging the quality of a solution. Programs in genetic programming systems thus evolve to solve prescribed automatic programming and machine learning problems.
In essence, genetic programming is a heuristic search technique, often likened to ‘hill climbing’: it searches the space of all programs for an optimal, or at least suitable, program.
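As a sketch of this hill-climbing view, the toy example below evolves small arithmetic expression trees by mutation alone, keeping a variant only when it matches a target function at least as well as the current program. The target function, the tuple-based tree representation, and all parameters are illustrative assumptions.

```python
import random

random.seed(1)

# Target behavior the evolved program should reproduce (assumed for illustration).
TARGET = lambda x: x * x + x
CASES = [-2, -1, 0, 1, 2, 3]

OPS = {'add': lambda a, b: a + b,
       'sub': lambda a, b: a - b,
       'mul': lambda a, b: a * b}

def random_tree(depth=3):
    # Leaves are the input variable 'x' or a small integer constant.
    if depth == 0 or random.random() < 0.3:
        return 'x' if random.random() < 0.5 else random.randint(-2, 2)
    op = random.choice(list(OPS))
    return (op, random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, x):
    if tree == 'x':
        return x
    if isinstance(tree, int):
        return tree
    op, left, right = tree
    return OPS[op](evaluate(left, x), evaluate(right, x))

def error(tree):
    # Fitness: total squared error over the test cases (lower is better).
    return sum((evaluate(tree, x) - TARGET(x)) ** 2 for x in CASES)

def mutate(tree):
    # Replace a random subtree with a freshly generated one.
    if random.random() < 0.3 or tree == 'x' or isinstance(tree, int):
        return random_tree(2)
    op, left, right = tree
    if random.random() < 0.5:
        return (op, mutate(left), right)
    return (op, left, mutate(right))

start = random_tree()
best = start
for _ in range(3000):
    child = mutate(best)
    if error(child) <= error(best):   # hill climbing: keep non-worsening variants
        best = child

print(error(best))
```

Because a variant is accepted only when it is at least as good, the error never increases; a full GP system would add a population and crossover on top of this mutation-and-selection core.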
Evolutionary Programming (EP)
This is not too different from Genetic Programming. However, in Evolutionary Programming, the programs that need to be optimized have a fixed structure, while the numeric parameters can evolve.
This evolutionary algorithm paradigm was first used by Lawrence J. Fogel in 1960 in an attempt to use simulated evolution as a learning process for creating artificial intelligence. He used finite-state machines as predictors and evolved them. Today, evolutionary programming is a broad evolutionary computing dialect with no fixed structure or representation, and it is becoming increasingly difficult to differentiate it from evolutionary strategies.
The main operator of evolutionary programming is mutation. Each member of the population is viewed as representing a distinct species rather than an individual of a single species. Each parent generates one offspring, and survivors are chosen from the combined pool of parents and offspring via (μ + μ) selection.
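A minimal sketch of this scheme, assuming a toy real-valued minimization problem: each parent produces exactly one mutated offspring (no crossover), and the best μ individuals of the combined parent-plus-offspring pool survive.

```python
import random

random.seed(2)

DIM = 5        # dimensionality of the parameter vector
MU = 10        # parent population size
GENERATIONS = 200

def sphere(v):
    # Toy objective to minimize (assumed for illustration).
    return sum(x * x for x in v)

pop = [[random.uniform(-5, 5) for _ in range(DIM)] for _ in range(MU)]
for _ in range(GENERATIONS):
    # Each parent produces exactly one offspring by Gaussian mutation.
    offspring = [[x + random.gauss(0, 0.3) for x in parent] for parent in pop]
    # (mu + mu) survivor selection: best mu of parents plus offspring survive.
    pop = sorted(pop + offspring, key=sphere)[:MU]

print(sphere(pop[0]))
```

Because parents compete with their offspring for survival, the best solution found so far is never lost, and the population steadily drifts toward the minimum.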
Evolutionary Strategies (ES)
Evolutionary strategies usually work with self-adaptive mutation rates and represent solutions as vectors of real numbers.
Evolutionary strategies are optimization techniques based on the ideas of evolution. They use natural, problem-dependent representations and rely mainly on mutation and selection as search operators. The operators are applied in a loop, each iteration of which is known as a generation, and the sequence of generations continues until a termination criterion is met. Most evolutionary algorithms work at the genotype level, but evolutionary strategies work at the behavioral (phenotype) level.
Since the physical expression is encoded directly, an individual’s genes do not need to be mapped to its physical expression.
This direct encoding gives rise to strong causality: a small change in the encoding produces a small change in the individual, and a large change in the encoding produces a large change in the individual.
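A minimal (1, λ) evolution strategy sketch with self-adaptive mutation: each individual carries its own step size, which is itself mutated log-normally before the solution vector is perturbed, so good step sizes are inherited along with good solutions. The objective function and all parameters are illustrative assumptions.

```python
import math
import random

random.seed(3)

LAMBDA = 10         # offspring per generation
GENERATIONS = 150
TAU = 0.5           # learning rate for the log-normal step-size update

def objective(v):
    # Toy objective to minimize (assumed for illustration).
    return sum(x * x for x in v)

# An individual carries its solution vector and its own mutation step size.
parent = ([random.uniform(-3, 3) for _ in range(3)], 1.0)

for _ in range(GENERATIONS):
    offspring = []
    for _ in range(LAMBDA):
        vec, sigma = parent
        # Self-adaptation: mutate the step size log-normally, then use the
        # new step size to mutate the solution vector.
        new_sigma = sigma * math.exp(TAU * random.gauss(0, 1))
        new_vec = [x + new_sigma * random.gauss(0, 1) for x in vec]
        offspring.append((new_vec, new_sigma))
    # (1, lambda) selection: the best offspring replaces the parent.
    parent = min(offspring, key=lambda ind: objective(ind[0]))

print(objective(parent[0]))
```

Because selection acts on individuals that carry their own step sizes, the mutation rate shrinks automatically as the search closes in on the optimum, with no external schedule.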
How does evolutionary computing work?
At the initial stage of the evolutionary computation process, an initial batch of possible solutions is created.
After that, the system tests the solutions proposed and stochastically removes the solutions that do not perform well, thus refining the model.
It introduces tiny randomized changes to future generations of the model, and over generations the solutions are refined to a high degree.
The solutions at the end of the evolutionary computing process are highly optimized.
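The four steps above can be condensed into a generic loop; the toy objective and the helper names (`evolve`, `perturb`, `random_solution`) are illustrative assumptions, not a standard API.

```python
import random

random.seed(4)

def evolve(fitness, random_solution, perturb, pop_size=20, generations=100):
    # 1. Create an initial batch of possible solutions.
    pop = [random_solution() for _ in range(pop_size)]
    for _ in range(generations):
        # 2. Test the solutions and stochastically remove poor performers:
        #    fitter solutions are more likely to survive the weighted draw.
        weights = [fitness(s) for s in pop]
        survivors = random.choices(pop, weights=weights, k=pop_size)
        # 3. Introduce tiny randomized changes to the next generation.
        pop = [perturb(s) for s in survivors]
    # 4. Return the most refined solution found.
    return max(pop, key=fitness)

# Usage: maximize a simple peaked function over the reals (toy problem).
best = evolve(
    fitness=lambda x: 1.0 / (1.0 + (x - 7.3) ** 2),
    random_solution=lambda: random.uniform(0, 10),
    perturb=lambda x: x + random.gauss(0, 0.1),
)
print(best)
```

Every algorithm in this article is a specialization of this loop: genetic algorithms add crossover, evolutionary programming drops it, and evolution strategies make the perturbation size itself evolve.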