
Metaheuristics is a branch of optimization in computer science and applied mathematics, closely related to algorithms and computational complexity theory. It provides acceptable solutions to complex problems in a practical amount of time. Because metaheuristic algorithms can solve complex problems successfully, they have come into the limelight in many fields, from academic research to engineering practice. A metaheuristic's effectiveness relies on its ability to escape local optima, adapt to a particular problem instance, and exploit the structure of the problem. Using machine learning techniques to tune metaheuristics is still an experimental approach, and the literature on the subject is sparse. In most meta-heuristic algorithms, the updating process struggles to make efficient use of the information available from individuals of previous iterations. If this neglected information were exploited to its full extent during optimization, the quality of succeeding solutions could be improved.

Given training data, machine learning methods can be applied to a variety of optimization tasks, learning their meta-heuristics, and therefore require less hand engineering than solvers that can optimize only a single task. Many approaches to optimization have been explored, from mathematical programming with simple constraints to heuristics and metaheuristics that generate near-optimal solutions under complex constraints. Some well-known meta-heuristic approaches are particle swarm optimization (PSO), ant colony optimization (ACO), simulated annealing (SA), iterated local search (ILS), variable neighborhood search (VNS), evolution strategies, artificial immune systems (AIS), tabu search (TS), bee colony algorithms, genetic programming, scatter search (SS), and genetic algorithms.

These basic meta-heuristic algorithms often fail to efficiently utilize the valuable information that is available from individuals of previous iterations for guiding their current and later search. Metaheuristics solve problems faster, scale to larger problems, and yield robust algorithms, achieving acceptable results in a reasonable time. When combined with information feedback models, metaheuristic algorithms lead to improved optimization methods.

Introduced by Eberhart and Kennedy (1995), particle swarm optimization (PSO) is a swarm-based metaheuristic algorithm that replicates the social behavior of bird flocking or fish schooling. Initializing the population is the first step of the PSO algorithm, followed by evaluating the fitness of each particle, updating each particle's best experience, choosing the best particle, calculating the velocities, and finally updating the positions of the particles. It is a swarm-intelligence-based metaheuristic algorithm in which the particles are updated based on the best individuals in the population and the best position found by each particle; that is, a swarm of particles keeps changing positions in the given search space, guided by each particle's own experience and the swarm's experience, in order to find the global optimum. Goksal, Karaoglan, and Altiparmak (2013) present an approach in which particle swarm optimization enables the algorithm to explore different regions of the search space. Ai and Kachitvichyanukul (2009) developed another version of the particle swarm optimization algorithm, which uses random-key-based encoding and decoding with a cheapest-insertion-based heuristic applied for continuous improvement. A research study conducted by Windisch shows that particle swarm optimization outperforms the genetic algorithm in terms of efficiency, and another study conducted by Tiwari reported that their proposed algorithm achieved better code coverage capability.
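The steps described above (initialize, evaluate fitness, update personal bests, update the global best, then update velocities and positions) can be sketched in a minimal PSO implementation. This is an illustrative sketch, not any of the cited authors' methods; the inertia and acceleration coefficients (`w`, `c1`, `c2`) and the sphere test function are assumed for the example.

```python
import random

def pso(fitness, dim=2, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=42):
    """Minimal particle swarm optimization sketch (minimization)."""
    rng = random.Random(seed)
    # Step 1: initialize positions and velocities in [-5, 5].
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    # Step 2: evaluate fitness and record each particle's best experience.
    pbest = [p[:] for p in pos]
    pbest_val = [fitness(p) for p in pos]
    g = pbest_val.index(min(pbest_val))
    gbest, gbest_val = pbest[g][:], pbest_val[g]   # the swarm's best position
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # Velocity update: inertia + cognitive pull + social pull.
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]             # position update
            val = fitness(pos[i])
            if val < pbest_val[i]:                 # update personal best
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:                # update global best
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

sphere = lambda x: sum(v * v for v in x)           # assumed test function
best, best_val = pso(sphere)
```

With these (assumed) parameter settings the swarm typically converges close to the sphere function's global optimum at the origin.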

Ant colony optimization (ACO) is a representative metaheuristic algorithm for global discrete optimization problems, although in its basic form the ants are updated without fully utilizing information from previous iterations. ACO algorithms can control the range of candidate values and iteratively improve solutions. Gajpal and Abad (2009) developed an ant colony optimization algorithm that applies the nearest neighbor heuristic to create an initial solution, initializes the trail intensities, and generates a solution for each ant. Subsequently, a local search procedure is applied to each ant's solution and the trail intensities are updated [9].

Another ant colony optimization solution implements an improvement procedure consisting of four routines, intra-swap, intra-move, inter-swap, and inter-move, using the nearest neighbor heuristic. Dorigo and Gambardella also applied the principles of the ant colony optimization meta-heuristic, while Dalfard combined simulated annealing with a genetic algorithm to solve the same routing problem.
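The core ACO loop (ants build tours probabilistically from trail intensities, then the trails evaporate and are reinforced) can be sketched on a small traveling salesman instance. This is a generic illustration, not the Gajpal and Abad algorithm; the parameter names `alpha`, `beta`, and `rho` follow common ACO convention and the tiny square instance is assumed for the demo.

```python
import math
import random

def aco_tsp(coords, n_ants=10, iters=50, alpha=1.0, beta=2.0, rho=0.5, seed=1):
    """Minimal ant colony optimization sketch for a small TSP instance."""
    rng = random.Random(seed)
    n = len(coords)
    dist = [[math.dist(a, b) for b in coords] for a in coords]
    tau = [[1.0] * n for _ in range(n)]              # trail intensities
    best_tour, best_len = None, float("inf")
    for _ in range(iters):
        tours = []
        for _ in range(n_ants):
            # Each ant builds a tour city by city.
            tour = [rng.randrange(n)]
            while len(tour) < n:
                i = tour[-1]
                choices = [j for j in range(n) if j not in tour]
                # Transition probability ~ pheromone^alpha * (1/distance)^beta.
                weights = [tau[i][j] ** alpha * (1.0 / dist[i][j]) ** beta
                           for j in choices]
                tour.append(rng.choices(choices, weights)[0])
            length = sum(dist[tour[k]][tour[(k + 1) % n]] for k in range(n))
            tours.append((tour, length))
            if length < best_len:
                best_tour, best_len = tour, length
        # Evaporate trails, then deposit pheromone inversely to tour length.
        tau = [[t * (1 - rho) for t in row] for row in tau]
        for tour, length in tours:
            for k in range(n):
                a, b = tour[k], tour[(k + 1) % n]
                tau[a][b] += 1.0 / length
                tau[b][a] += 1.0 / length
    return best_tour, best_len

# Assumed demo instance: four corners of a unit square (optimal tour length 4).
tour, length = aco_tsp([(0, 0), (0, 1), (1, 1), (1, 0)])
```

A real implementation would add the local search step described above after each ant constructs its solution.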

Variable neighbourhood search (VNS) is a well-known meta-heuristic algorithm based on the principle of changing neighborhoods systematically to escape from local optima. A variable neighborhood search explores a set of neighborhoods and tries to find a better solution at each step, so a succession of neighborhood structures must be defined over the search space. At each iteration of the algorithm, a candidate solution is generated from the current neighborhood and improved through a local search routine, producing a new local minimum that is accepted or rejected at the move-or-not step depending on the search criteria. The best solution found is stored, and the search continues by moving to another neighborhood in the following iteration. The research study conducted by Zachariadis and Kiranoudis (2011) describes a local search algorithm in which the neighborhood structures are explored using an algorithmic concept that statically encodes the tentative moves. Bianchessi and Righini (2007) proposed the use of improvement and constructive algorithms that contain variable neighborhood structures. This shows that meta-heuristics need to adapt to the length of the routes, shorter or longer, to obtain the desired results.
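The shake / local-search / move-or-not cycle described above can be sketched for a real-valued objective. This is a minimal illustration under assumptions: neighborhood `k` is taken to be a perturbation of radius proportional to `k`, and the local search is a simple fixed-step coordinate descent; both choices are hypothetical, not from the cited studies.

```python
import random

def vns(fitness, x0, k_max=3, iters=30, seed=7):
    """Minimal variable neighbourhood search sketch over real vectors."""
    rng = random.Random(seed)

    def shake(x, k):
        # Draw a random point from the (assumed) k-th neighbourhood of x.
        return [v + rng.uniform(-0.5 * k, 0.5 * k) for v in x]

    def local_search(x):
        # Assumed local search: coordinate descent with a fixed step.
        step, improved = 0.05, True
        while improved:
            improved = False
            for d in range(len(x)):
                for delta in (step, -step):
                    y = x[:]
                    y[d] += delta
                    if fitness(y) < fitness(x):
                        x, improved = y, True
        return x

    best = local_search(x0)
    for _ in range(iters):
        k = 1
        while k <= k_max:
            candidate = local_search(shake(best, k))
            if fitness(candidate) < fitness(best):   # the move-or-not step
                best, k = candidate, 1               # recentre, restart at k = 1
            else:
                k += 1                               # try a larger neighbourhood
    return best

sphere = lambda x: sum(v * v for v in x)             # assumed test function
best = vns(sphere, [3.0, 3.0])
```

Systematically enlarging the neighbourhood only when the smaller ones fail is what lets VNS climb out of a local optimum without abandoning a good incumbent.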

Scatter search considers solutions from a reference set, recombines chosen ones, and creates new solutions. Tabu search, by contrast, is a deterministic search technique inspired by the workings of human memory, which uses search experience to escape from local optima. Tabu search selects the best solution in the neighborhood of the current solution as the new solution, even if it increases the solution cost at that iteration. Zachariadis, Tarantilis, and Kiranoudis (2010) proposed a two-stage algorithm based on adaptive memory programming, in which an initial solution is developed using a savings-based heuristic method and the constructed solution is improved with the help of tabu search. This procedure is reiterated until the predetermined number of routes constituting a route pool is generated. In the second stage, the node sequences available in the route pool produce new solutions while the tabu search improvement procedure is applied, and the route pool (which constitutes the adaptive memory) is updated with the solutions acquired from tabu search. Tabu search was introduced by Glover (1986); a well-known variant, the unified tabu search, can be used to solve the periodic vehicle routing problem. The tabu search algorithm maintains a tabu list that prevents the algorithm from cycling and enables the search process to explore different regions. Implementing a tabu list as short-term memory keeps the search from cycling and improves the algorithm's performance considerably.
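The mechanics described above (always take the best neighbor, even a worsening one, while a short-term tabu list blocks recently used moves) can be sketched over bit vectors with a single-bit-flip neighborhood. This is a generic illustration, not the cited two-stage algorithm; the bit-vector problem and the `tenure` parameter are assumptions for the demo.

```python
from collections import deque

def tabu_search(cost, n_bits, tenure=5, iters=200):
    """Minimal tabu search sketch over bit vectors with a flip neighbourhood."""
    current = [0] * n_bits
    best, best_cost = current[:], cost(current)
    tabu = deque(maxlen=tenure)          # short-term memory of recent flips
    for _ in range(iters):
        # Evaluate every single-bit flip of the current solution.
        moves = []
        for i in range(n_bits):
            neighbour = current[:]
            neighbour[i] ^= 1
            moves.append((cost(neighbour), i, neighbour))
        moves.sort()
        for c, i, neighbour in moves:
            # Take the best non-tabu move, even if it worsens the cost;
            # aspiration: a tabu move is allowed if it beats the best so far.
            if i not in tabu or c < best_cost:
                current = neighbour
                tabu.append(i)           # flipping bit i is now tabu
                if c < best_cost:
                    best, best_cost = neighbour[:], c
                break
    return best, best_cost

# Hypothetical demo: recover a target pattern by minimizing Hamming distance.
target = [1, 0, 1, 1, 0, 1, 0, 0]
best, best_cost = tabu_search(lambda x: sum(a != b for a, b in zip(x, target)),
                              len(target))
```

Because a worsening move is accepted when nothing better is available, the search keeps moving instead of stalling at a local optimum, while the tabu list stops it from immediately undoing that move.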

Simulated annealing (SA) is one of the oldest classical metaheuristic algorithms and uses a trajectory-based optimization method. A study conducted by Yang and Kumar described an information-guided framework for simulated annealing in which the information collected during the exploration stage was used as feedback to drive the optimization procedure. Owing to its speed and solution quality, simulated annealing outperformed the other metaheuristics within the set of tested optimization methods.
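The trajectory-based idea, a single solution wandering through the search space while a temperature parameter controls how often worsening moves are accepted, can be sketched as follows. This is a minimal generic SA, not the information-guided framework mentioned above; the Gaussian proposal step, the geometric cooling rate, and the sphere test function are all assumptions for the example.

```python
import math
import random

def simulated_annealing(cost, x0, t0=10.0, cooling=0.95, iters=1000, seed=3):
    """Minimal simulated annealing sketch for a real-valued cost function."""
    rng = random.Random(seed)
    x, fx = x0[:], cost(x0)
    best, best_cost = x[:], fx
    t = t0
    for _ in range(iters):
        # Propose a random neighbour of the current solution (assumed: Gaussian step).
        y = [v + rng.gauss(0, 0.5) for v in x]
        fy = cost(y)
        # Always accept improvements; accept worse moves with probability
        # exp(-delta / t), which shrinks as the temperature cools.
        if fy < fx or rng.random() < math.exp(-(fy - fx) / t):
            x, fx = y, fy
            if fx < best_cost:
                best, best_cost = x[:], fx
        t *= cooling                     # geometric cooling schedule
    return best, best_cost

best, best_cost = simulated_annealing(lambda x: sum(v * v for v in x), [4.0, 4.0])
```

At high temperature the walk explores freely; as the temperature decays the acceptance rule becomes effectively greedy, so the trajectory settles into a good region found earlier.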

One of the algorithms used for solving such complex problems is the cluster-first, route-second method: customers are first grouped into clusters, and the routing problem is then solved for each customer cluster in the next phase. Sevgi and Elise demonstrated identification and categorization, using a density-based clustering algorithm to locate and cluster the customer coordinates. The research study by Phiboonbanakit uses machine learning to perform geo-coordinate clustering, learning the characteristics of each cluster to predict the regions for further input [38].
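The two phases can be sketched end to end: first group customer coordinates, then build a route within each group. The cited studies use density-based and learned clustering; purely for illustration this sketch substitutes a plain k-means clustering and a greedy nearest-neighbour routing phase, and the depot location and customer coordinates are assumed.

```python
import math
import random

def cluster_first_route_second(depot, customers, k=2, seed=0):
    """Minimal cluster-first, route-second sketch: k-means clustering,
    then a nearest-neighbour route per cluster starting from the depot."""
    rng = random.Random(seed)
    # Phase 1 (cluster-first): a few Lloyd iterations of k-means.
    centres = rng.sample(customers, k)
    clusters = [[] for _ in range(k)]
    for _ in range(20):
        clusters = [[] for _ in range(k)]
        for c in customers:
            nearest = min(range(k), key=lambda idx: math.dist(c, centres[idx]))
            clusters[nearest].append(c)
        new_centres = []
        for j, cl in enumerate(clusters):
            if cl:
                new_centres.append((sum(p[0] for p in cl) / len(cl),
                                    sum(p[1] for p in cl) / len(cl)))
            else:
                new_centres.append(centres[j])   # keep an empty cluster's centre
        centres = new_centres
    # Phase 2 (route-second): greedy nearest-neighbour tour within each cluster.
    routes = []
    for cl in clusters:
        route, pos, remaining = [], depot, cl[:]
        while remaining:
            nxt = min(remaining, key=lambda c: math.dist(pos, c))
            route.append(nxt)
            remaining.remove(nxt)
            pos = nxt
        routes.append(route)
    return routes

# Assumed demo data: a depot at the origin and two well-separated customer groups.
customers = [(1, 1), (1, 2), (2, 1), (9, 9), (9, 10), (10, 9)]
routes = cluster_first_route_second((0, 0), customers, k=2)
```

Decomposing the problem this way trades global optimality for speed: each cluster's routing subproblem is small enough to solve quickly.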

One example is the branch-price-and-cut algorithm, a branch-and-bound algorithm in which the lower bound at each node of the search tree is computed by column generation. When the number of variables is large, the linear relaxation cannot be solved directly, so an iterative column generation procedure is invoked. A branch-and-price algorithm that works with location clusters rather than individual locations can be used to solve a general variant of the vehicle routing problem incorporating a hybrid delivery strategy of delivering either to the customer's home or to the trunk of their car. The branch-and-price algorithm embeds column generation into the branch-and-bound tree, where the linear programming relaxation is solved at the root node.

A research study conducted by Righini and Salani (2006) describes a branch-and-price algorithm for the vehicle routing problem in which exact dynamic programming and state-space relaxation are employed. Based on the descriptions of the algorithms above, useful information obtained from a surrogate, an individual, a whole population, a dynamic environment, a neighbor, a direction, or a mutual relationship can be extracted and reused to a certain extent for further optimization. The literature on metaheuristics indicates that allowing a controlled exploration of infeasible solutions enhances the performance of the search.
