Optimal multivariate mixture: a genetic algorithm approach

  • Original Research
  • Open access
  • Published: 18 May 2024


annals of operations research call for papers

  • Giacinto Angelo Sgarro (ORCID: 0000-0002-4189-0674),
  • Luca Grilli &
  • Domenico Santoro


The Optimal Multivariate Mixture Problem (OMMP) consists of finding an optimal mixture which, starting from a set of elements (items) described by a set of variables (features), is as close as possible to an ideal solution. This problem has numerous applications spanning various fields, including food science, agriculture, chemistry, materials science, medicine, and pharmaceuticals. The OMMP is a class of optimization problems that can be addressed using traditional Operations Research (OR) approaches; however, it can also be effectively tackled using meta-heuristic techniques from Artificial Intelligence (AI). This paper presents an AI perspective: it proposes the Genetic Algorithm for Optimal Multivariate Mixture (GA-OMM), a novel, improved version of a GA whose modified genetic operators improve exploration efficiency. The algorithm is described in its general framework, and an 8-item, 5-feature test case is conducted to evaluate efficiency by exploring various combinations of hyperparameters. Test cases are also set up for the previous version, as well as for a linear programming (LP) approach. The data experiments indicate that the proposed GA is efficient, converges towards the global optimum, and delivers highly competitive results. In particular, GA-OMM shows average fitness values relative to the LP solution, and standard deviations, with orders of magnitude ranging between \(10^{-8}\) and \(10^{-4}\), and it consistently outperforms its predecessor, which exhibits similar values around \(10^{-3}\).


1 Introduction

The Optimal Multivariate Mixture Problem (OMMP) involves the selection of an optimal mixture or combination of elements from a given set of items described by a set of variables. The objective is to obtain a mixture that closely approximates the characteristics of a desired item across the variables. Generally, problems of this nature can be tackled using conventional methods from Operations Research (OR) (Seyedmohammadi et al., 2019 ; Saderova et al., 2021 ; Gilani et al., 2020 ; Aazza, 2021 ). However, in recent decades, meta-heuristic approaches, which belong to the field of Artificial Intelligence (AI), have gained increasing prominence (Jones et al., 2002 ). Meta-heuristic techniques are general algorithmic frameworks that draw inspiration from nature and aim to find reasonable solutions to complex problems by combining different strategies that exploit a dynamic balance between accumulated search experience and exploration of the search space (Bianchi et al., 2009 ). In other words, meta-heuristics are intelligent procedures that identify and explore regions of high-quality solutions while minimizing time spent elsewhere in the search space (Bianchi et al., 2009 ). Examples of meta-heuristics include ant colony optimization (ACO), evolutionary computation (EC), simulated annealing (SA), and tabu search (TS). Among the EC family, which encompasses algorithms inspired by biological evolution for global optimization, genetic algorithms (GA) are particularly well-suited for solving multi-objective problems (Konak et al., 2006 ).

Genetic algorithms (GA) are optimization methods inspired by the natural selection mechanism of evolution, which simulate the natural evolution process by three significant operations: crossover, mutation, and reproduction (Chen & Chen, 1997 ; Chang & Chen, 1998 ). The problem of choosing the best combination of elements to achieve an optimal mixture is a complex and challenging task (Thongsook et al., 2014 ) with applications in various real-life contexts, including food, agriculture (Seyedmohammadi et al., 2019 ), chemistry (Aazza, 2021 ; Shirneshan et al., 2021 ; Jacob & Ashok, 2021 ), materials science (Monteagudo et al., 2011 ; Mirzahosseini et al., 2015 ; Gilani et al., 2020 ), medicine (Balamurugan et al., 2012 ), and pharmaceuticals (El-Hagrasy et al., 2006 ). In the domain of mixture experimental design, various studies have been proposed using GA-based mathematical models to generate optimal designs for conducting mixture experiments (Thongsook et al., 2014 ). For instance, experimental applications have been carried out to characterize pharmaceutical powder blending processes using D-optimal design (El-Hagrasy et al., 2006 ). Additionally, prediction models have been developed to investigate the rutting phenomenon in asphalt mixtures using linear genetic programming (LGP) and artificial neural network (ANN) techniques (Mirzahosseini et al., 2015 ). Furthermore, genetic algorithms have been employed to explore and optimize biodiesel-ethanol blends, simultaneously reducing emissions and enhancing performance (Shirneshan et al., 2021 ). Lastly, in the context of asphalt mixtures, the effects of nano-hydrated lime (NHL) as a bitumen modifier on moisture and fatigue damages have been investigated using multivariate regression (MVR), group method of data handling (GMDH), and genetic programming (GP) (Gilani et al., 2020 ). 
These studies highlight the potential and effectiveness of genetic algorithms in addressing various challenges in mixture experimental design, including pharmaceutical processes, asphalt mixtures, and fuel blending.

This paper proposes a genetic algorithm-based model called Genetic Algorithm for Optimal Multivariate Mixture (GA-OMM), which, in a generalized context, can optimally select both items and their corresponding percentage values to obtain a mixture whose feature values are as close as possible to those of an ideal desired composition. Due to the specific nature of this problem, to the best of our knowledge, this study represents only the second investigation addressing it, as most previous research in mixture studies has focused on optimal experimental design (Thongsook et al., 2014; Mirzahosseini et al., 2015; El-Hagrasy et al., 2006) or multi-objective optimization (Shirneshan et al., 2021; Aazza, 2021; Gilani et al., 2020; Jones et al., 2002).

This work describes the algorithm within its general framework and conducts a test case with 8 items and 5 features to evaluate its efficiency by exploring various combinations of hyperparameters. Additionally, we establish test cases for the previous version (GA-1) (Sgarro & Grilli, 2023) and for a Linear Programming (LP) approach. The need for a new version stems from critical issues found in GA-1, such as the percentage generation process and the variable coding used, which are analyzed in Sect. 3.6. The data experiments demonstrate that the proposed GA exhibits high efficiency, consistently converges towards the global optimum, outperforms its predecessor, and delivers highly competitive results. Although the problem of selecting optimal multivariate items for multivariate mixtures has not been extensively studied in the literature, it is not limited to theoretical considerations and holds practical applications. The inspiration comes from a problem widespread among companies in the durum wheat processing sector, which can be stated as follows: “Given a set of n grains, each characterized by m features, is it possible to obtain one or more mixtures that closely approximate an ‘ideal optimal solution’?”

Durum wheat blending is simply the application that motivated us; however, the problem extends naturally to other fields. In the realm of food, for instance, it may pertain to crafting a new recipe by blending various ingredients to achieve a desired taste or nutritional profile. In agriculture, the problem could manifest in formulating optimal blends of fertilizers or pesticides tailored for specific crops and environmental conditions. In chemistry, the challenge could lie in combining different chemical substances to synthesize new materials with specific properties. This could be crucial in drug synthesis, where achieving the right mix of reactants is pivotal in obtaining the desired compound. Similarly, in the field of materials science, researchers might seek to blend different components to produce alloys or composites with desired properties, such as mechanical strength or electrical conductivity. In medicine, the problem may involve blending different medications or active ingredients to create formulations that maximize therapeutic efficacy while minimizing side effects for individual patients. Lastly, in the pharmaceutical industry, the challenge could involve mixing various components to manufacture drugs in tablet, capsule, or liquid form with optimal release characteristics.

The paper is structured as follows: in Sect. 2, we present the problem of Optimal Multivariate Mixture. Section 3 provides a generic model of genetic algorithms (GA), including a description and comparison of meta-heuristics, along with an explanation of GA-1. Section 4 describes the GA-OMM algorithm within its framework. Section 5 presents the experimental application of the 8-item, 5-feature model, where GA-OMM is compared with GA-1 and the LP solver. Section 6 includes a detailed analysis of the consistency and performance of the experimental application. Finally, in Sect. 7, some conclusions are drawn.

2 Problem definition

The problem of selecting optimal multivariate items for optimal multivariate mixtures can be mathematically formulated as follows:

Given a set of n items, \(X_1, X_2, \ldots , X_n\), where each item \(X_i\) is represented by a vector of m features, i.e.,

\(X_i = (x_{i,1}, x_{i,2}, \ldots , x_{i,m}), \quad i = 1, 2, \ldots , n,\)

and a desired “ideal optimal solution” denoted as:

\(\bar{X} = (\bar{x}_{1}, \bar{x}_{2}, \ldots , \bar{x}_{m}),\)

is it possible to obtain a mixture, denoted as \(X_{mix}\), that closely approximates the “ideal solution”?

In such a context, the mixture is obtained as a convex combination of the n items, that is:

\(X_{mix} = \sum _{i=1}^{n} \alpha _i X_i.\)

The objective of the problem is to determine the values of \(\alpha _i\) (for \(i=1,2,\ldots ,n\)) that minimize the distance \(d(\bar{X}, X_{mix})\) between the ideal solution \(\bar{X}\) and the obtained mixture. Formally, the problem can be expressed as follows, find:

\(\alpha _1, \alpha _2, \ldots , \alpha _n\)

minimizing:

\(d(\bar{X}, X_{mix})\)

subject to:

\(\sum _{i=1}^{n} \alpha _i = 1, \qquad \alpha _i \ge 0 \quad \text {for } i = 1, 2, \ldots , n.\)
A graphical representation of OMMP is provided in Fig.  1 .
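As a concrete illustration of the formulation above, the following Python sketch (with made-up item and target values, not taken from the paper) computes a mixture as a convex combination of items and its Euclidean distance from the ideal solution:

```python
# Minimal numeric sketch of the OMMP formulation: items as feature
# vectors, a mixture as a convex combination, and its distance to the
# ideal target. All numbers here are illustrative.

def mix(items, alphas):
    """Convex combination X_mix = sum_i alpha_i * X_i."""
    m = len(items[0])
    return [sum(a * item[j] for a, item in zip(alphas, items))
            for j in range(m)]

def distance(x, y):
    """Euclidean distance d(X_bar, X_mix)."""
    return sum((xi - yi) ** 2 for xi, yi in zip(x, y)) ** 0.5

items = [[12.0, 70.0], [15.0, 65.0], [9.0, 74.0]]   # n=3 items, m=2 features
ideal = [13.0, 68.0]                                 # desired composition
alphas = [0.5, 0.3, 0.2]                             # sum to 1, alpha_i >= 0

assert abs(sum(alphas) - 1.0) < 1e-12
print(mix(items, alphas), distance(ideal, mix(items, alphas)))
```

The optimization problem then consists of searching over the feasible `alphas` (non-negative, summing to one) for the vector minimizing this distance.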

Fig. 1: Representation of OMMP. This involves determining the optimal percentage values \(\alpha _i\) for the items \(X_i\), such that their mixture \(X_{mix}\) closely approximates a desired mixture \(\bar{X}\) across the j variables.

To provide some real-context examples in the context of food, let’s consider the task of developing a new type of granola bar. Here, the elements are the individual grains or ingredients used in the bar, such as oats, nuts, seeds, and dried fruits. The variables could include the nutritional composition of these ingredients, such as their fat, carbohydrate, protein, and water content. The ideal solution would involve selecting the right combination of grains and adjusting their proportions to balance flavors, textures, and nutritional benefits that align with the target market’s preferences and dietary requirements.

In agriculture, imagine a scenario where farmers need to optimize the composition of a fertilizer blend for a specific crop, say tomatoes, grown in a particular soil type and climate. In this case, the elements are the different types of fertilizers available, each containing varying amounts of nutrients like nitrogen, phosphorus, and potassium. The variables could include soil pH, nutrient levels, and environmental conditions. The ideal solution would entail selecting the right combination and proportions of fertilizers to meet the crop’s nutritional needs, enhance soil fertility, and maximize yield while minimizing environmental impact.

Moving to chemistry, let’s consider the synthesis of a new type of polymer for use in biodegradable packaging materials. Here, the elements are the different monomers or chemical building blocks available for polymerization. The variables could include monomer structure, molecular weight, and polymerization conditions. The ideal solution would involve selecting the appropriate combination of monomers and optimizing reaction parameters to achieve a polymer with the desired mechanical properties, biodegradability, and cost-effectiveness.

In the realm of medicine, pharmaceutical researchers are developing a new cancer treatment regimen that combines multiple drugs to target different aspects of tumor growth and metastasis. The elements are the individual drugs, each with its own mechanism of action and side effect profile. The variables could include drug dosage, administration schedule, and patient-specific factors like tumor type and genetic makeup. The ideal solution would involve designing a treatment protocol that maximizes tumor response while minimizing toxicity and adverse effects on the patient’s overall health.

Lastly, in the pharmaceutical industry, let’s consider the formulation of a new pain reliever in tablet form. Here, the elements are the active pharmaceutical ingredients (APIs) and excipients used to create the tablet. The variables could include API concentration, tablet size, and excipient composition. The ideal solution would involve selecting the right combination and proportions of ingredients to ensure effective pain relief, optimal tablet disintegration and dissolution rates, and minimal gastrointestinal irritation.

3 General structure of a genetic algorithm

Genetic algorithms (GA) are a branch of Evolutionary Computing (EC), i.e., those techniques that solve optimization and search problems using biologically inspired operators (Wahde, 2008 ). More specifically, GA tries to imitate the biological processes of natural selection through revised versions of crossover, mutation, and reproduction processes (Sivanandam & Deepa, 2008 ). From the first formulation onward (Holland, 1992 ), numerous models were proposed by authors. In this context, we will describe a “simple model structure” that represents the base we used for the GA proposed.

Generally, a GA starts from a set of candidate solutions called population. Using genetic operators, it evolves through iterations towards better solutions in an optimization problem (Sivanandam & Deepa, 2008 ; Dwivedi et al., 2012 ). Specifically, during each iteration, the starting solutions, known as parents, are recombined through crossover and mutation operators to generate new solutions called children. Subsequently, through a reproduction operator, parents and children are evaluated first, and then new parents are selected and used in the following iteration (Whitley, 1994 ). The process continues until optimal solutions are reached (Konak et al., 2006 ; Chang & Chen, 1998 ; Thongsook et al., 2014 ). More specifically, a GA framework is represented in Fig.  2 , encompassing the following steps: 1) Parents generation, 2) Crossover and Mutation operators, and 3) Evaluation and selection (Reproduction). The process of steps 2) and 3) is repeated for a fixed number of iterations or until a convergence criterion is satisfied (Konak et al., 2006 ; Chang & Chen, 1998 ; Chen & Chen, 1997 ; Thongsook et al., 2014 ; Dwivedi et al., 2012 ).
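The three-step loop described above can be sketched in Python as follows; the operators and the toy fitness are illustrative placeholders, not the paper's implementation:

```python
import random

def genetic_algorithm(init, crossover, mutate, fitness, pop_size, generations):
    """Generic GA loop: generate parents, produce children via crossover
    and mutation, then keep the best pop_size individuals (minimization)."""
    population = [init() for _ in range(pop_size)]
    for _ in range(generations):
        children = []
        for a, b in zip(population[::2], population[1::2]):
            children.extend(crossover(a, b))       # step 2: crossover
        children.extend(mutate(x) for x in population)  # step 2: mutation
        pool = population + children
        population = sorted(pool, key=fitness)[:pop_size]  # step 3: selection
    return min(population, key=fitness)

# Toy instance: minimize the sum of squares over real vectors.
random.seed(0)
best = genetic_algorithm(
    init=lambda: [random.uniform(-1, 1) for _ in range(3)],
    crossover=lambda a, b: ([a[0]] + b[1:], [b[0]] + a[1:]),
    mutate=lambda x: [g + random.gauss(0, 0.05) for g in x],
    fitness=lambda x: sum(g * g for g in x),
    pop_size=20, generations=50)
print(sum(g * g for g in best))
```

The loop terminates after a fixed number of generations here; a convergence criterion could replace the `for` range, as noted above.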

Fig. 2: The workflow of the generalized GA model.

3.1 Parents generation

An initial population of solutions is generated within the entire range of possible solutions, also called the search space. These solutions are often taken randomly and, in this context, are called chromosomes (Whitley, 1994 ). Chromosomes are commonly represented using an encoding rule, such as binary vector coding, where solutions are represented as arrays of 0s and 1s, or real vector coding, where solutions are arrays in which each component (gene) corresponds to a real number (Chang & Chen, 1998 ).

3.2 Crossover and mutation operators

Starting from the initial chromosomes, a subset known as parents is selected, and crossover and mutation operators are applied to generate children (Sivanandam & Deepa, 2008 ).

Crossover involves selecting two chromosomes, often randomly, and cutting their arrays at the same positions, then exchanging the cropped segments. There are various types of “cutting", with one example being the division of two chromosomes into two heads and two tails. Given a chromosome with \(m\in {\mathbb {N}}\) components \([1, \ldots , m]\) , the head consists of \([1, \ldots , c]\) , and the tail consists of \([c+1,\ldots , m]\) with \(1\le c \le m-1\) . The first child is the head of the first and the tail of the second, and the second child vice versa.
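A minimal Python sketch of the head/tail (one-point) crossover just described, assuming a binary encoding for illustration:

```python
import random

def one_point_crossover(parent1, parent2, rng=random):
    """Cut both chromosomes at the same random point c (1 <= c <= m-1)
    and swap tails: child1 = head1 + tail2, child2 = head2 + tail1."""
    m = len(parent1)
    c = rng.randint(1, m - 1)           # cut position
    return parent1[:c] + parent2[c:], parent2[:c] + parent1[c:]

random.seed(1)
c1, c2 = one_point_crossover([1, 1, 1, 1, 1], [0, 0, 0, 0, 0])
print(c1, c2)
```

With complementary parents, as above, each gene of the two children comes from exactly one parent, which makes the head/tail exchange easy to verify.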

Mutation involves modifying certain parts of a parent chromosome to create a child chromosome (Konak et al., 2006; Whitley, 1994; Dwivedi et al., 2012). For instance, in binary vector coding, the mutation operator could randomly select several components from a chromosome and exchange 0s with 1s, and vice versa (Chen & Chen, 1997). Typically, crossover and mutation are separate processes: chromosomes are first selected for crossover, and then, in a subsequent step, considering all parents, chromosomes are selected for mutation. This allows some chromosomes to undergo both crossover and mutation (Whitley, 1994).
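The binary mutation just described can be sketched as follows (the gene-level flip probability `p_gene` is an illustrative parameter name, not the paper's notation):

```python
import random

def binary_mutation(chromosome, p_gene=0.1, rng=random):
    """Flip each bit independently with probability p_gene."""
    return [1 - g if rng.random() < p_gene else g for g in chromosome]

random.seed(2)
parent = [0, 1, 0, 1, 1, 0, 0, 1]
child = binary_mutation(parent, p_gene=0.25)
print(child)
```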

3.3 Evaluation and selection (reproduction)

Parents and children are pooled together, and the entire set of chromosomes is evaluated using an objective function, called a fitness function, to assess them and create a ranking. Then, a specific selection logic is applied to choose the chromosomes that will form the population for the next iteration (Konak et al., 2006; Thongsook et al., 2014; Dwivedi et al., 2012; Chen & Chen, 1997; Chang & Chen, 1998). This whole process, known as reproduction, is divided into two main parts: evaluation, which involves computing the fitness of each chromosome, and selection, which involves selecting the chromosomes that will compose the population for the next iteration. During an iteration, the evaluation process typically involves calculating the fitness values of all individuals. Subsequently, the selection process is applied, which may utilize techniques such as roulette selection to select a specific number, denoted as P, of individuals as offspring. Roulette selection is a selection scheme in which the probability that a chromosome is selected is directly proportional to its fitness value (Wahde, 2008). The fitness function is generally designed based on the specific context and objective, and various selection techniques have been proposed in the literature; the choice of a particular technique depends on the specific context and objective at hand (Sivanandam & Deepa, 2008).
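Roulette selection as described above can be sketched in Python; note that it assumes a maximization setting (higher fitness, higher chance), whereas GA-OMM itself minimizes fitness and uses elite selection:

```python
import random

def roulette_selection(population, fitnesses, k, rng=random):
    """Select k individuals with probability proportional to fitness
    (assumes a maximization setting)."""
    total = sum(fitnesses)
    picks = []
    for _ in range(k):
        r = rng.uniform(0, total)       # spin the wheel
        acc = 0.0
        for individual, f in zip(population, fitnesses):
            acc += f
            if r <= acc:                # the slice containing r wins
                picks.append(individual)
                break
    return picks

random.seed(3)
pop = ["A", "B", "C", "D"]
fit = [10.0, 5.0, 3.0, 2.0]   # "A" is selected ~50% of the time
picks = roulette_selection(pop, fit, k=6)
print(picks)
```

The standard library's `random.choices(pop, weights=fit, k=6)` implements the same proportional sampling in one call.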

3.4 Re-iteration

Crossover and mutation are applied after selecting the best chromosomes through the reproduction procedure, often with the same number of individuals for each generation. This is followed by evaluation and selection steps, which are repeated for a predetermined number of generations or until a convergence criterion is satisfied (Konak et al., 2006 ; Chang & Chen, 1998 ; Chen & Chen, 1997 ; Thongsook et al., 2014 ; Dwivedi et al., 2012 ). The number of generations and the convergence criterion depend on the specific context and objective. Graphs depicting generation fitness are commonly plotted to assess the performance and progress of the genetic algorithm (GA) during iterations. These graphs allow for the observation of algorithm convergence over generations. Fitness can be represented in various forms, such as the average or best fitness value of each generation (to name a few). If the fitness converges to the best value over the iterations, the GA functions correctly (Chang & Chen, 1998 ). An example of a graph related to this work is provided in Fig.  9 a and b.

3.5 Comparison with other meta-heuristics

GA can be compared with other meta-heuristic techniques, particularly Simulated Annealing (SA), Tabu Search (TS), and Swarm Intelligence (SI). Each of these techniques has peculiar characteristics:

Simulated Annealing is a probabilistic, physics-inspired method (Kirkpatrick et al., 1983) that works by emulating the slow cooling of a solid until freezing. SA has the advantages of easy implementation and reduced resource consumption (e.g., time/space) compared with, for example, Monte Carlo methods. However, it has the disadvantage that solutions can get stuck in local minima. Many real-life applications use this meta-heuristic, for example, in agriculture (Dong et al., 2013; Chen et al., 2023; Kuo et al., 2001), traffic mitigation (Oda et al., 1997; Lavandier et al., 2021; Xu et al., 2009), vehicle routing (Yu et al., 2022; Redi et al., 2020), or resource allocation (Tufano et al., 2020; Abuajwa et al., 2021). Simulated Annealing algorithms are well-suited for problems requiring exploration of a vast solution space and prevention of entrapment in local minima.

Tabu Search is an iterative method that builds an extended neighborhood allowing the search to escape local minima through worsening moves, which are subsequently marked as “tabu” (Glover, 1989). Although TS can avoid revisiting already explored solutions, this meta-heuristic requires many iterations and is strongly sensitive to parameter choices. Many real-life applications use TS, such as routing problems (Barbarosoglu & Ozgur, 1999; Gendreau et al., 1994; Cordeau et al., 2001), scheduling problems (Ben-Daya & Al-Fawzan, 1998; Adamuthe & Bichkar, 2012; Edwards et al., 2015), or DNA sequencing (Kwarciak & Formanowicz, 2014; Błażewicz et al., 2004). Tabu Search algorithms can effectively find high-quality solutions for combinatorial optimization problems and are particularly suitable when the problem involves managing complex constraints.

Swarm Intelligence comprises a set of techniques based on the self-organized behavior of natural systems made up of a multitude of individuals, e.g., ants, bees, and birds. SI algorithms include Ant Colony Optimization (ACO), Particle Swarm Optimization (PSO), Evolutionary Algorithms (EA), and Artificial Swarm Intelligence (ASI). Swarm Intelligence algorithms are well-suited for problems requiring the exploration of various solutions and dynamic adaptation to problem conditions (Sgarro & Grilli, 2024).

Although they are widely used techniques, each of these requires strong modifications to be applied to the specific OMMP. For example, using SI, in this case, requires that the swarm be configured with specific parameters tailored to the characteristics of the grain mixture. While SA and TS excel in certain optimization contexts, their adaptation to the OMMP demands substantial adjustments to account for the unique constraints and complexities involved. SI algorithms, inspired by collective behaviors in natural systems, offer a promising approach but necessitate significant customization to address the intricacies of the grain mixture problem effectively. In contrast, GA emerges as the preferred choice for solving the OMMP due to its inherent adaptability and robustness.

Genetic Algorithms exhibit intrinsic advantages in managing complex combinatorial problems, such as the search for the optimal mixture, where exploring a vast array of possible combinations is essential. Drawing inspiration from natural selection and biological evolution, the GA approach facilitates efficient exploration of solution space by dynamically adapting the population of candidate solutions throughout iterations. Moreover, GAs are adept at handling complex constraints inherent in the OMMP, such as limitations on the quantity of each element type or constraints balancing desired mixture characteristics. By maintaining and propagating the best solutions over time, GAs ensure robust solution space exploration while satisfying imposed constraints.

Furthermore, GAs are renowned for their ability to effectively navigate the entire solution space without becoming trapped in local minima, mitigating the risk of stagnation encountered with SA and TS. Leveraging mechanisms like selection, crossover, and mutation, GAs sustain genetic diversity within the population, facilitating the exploration of solution regions otherwise left unexplored. Additionally, GAs offer greater flexibility and scalability compared to SA, TS, and SI when it comes to problems like this one. They can be readily adapted and configured to address the specific requirements of the OMMP, allowing for enhanced customization and optimization of performance.

In conclusion, Genetic Algorithms emerge as the most suitable approach for solving the OMMP, offering a combination of adaptability, robustness, and scalability that effectively addresses the complexities and challenges inherent in this optimization task.

3.6 Genetic algorithm 1 (GA-1)

The previous version of GA for solving OMMP worked in this way: a double coding was applied for the solutions, a binary encoding for choosing the items, and a percentage coding for percentage generation.

During the first iteration, parents were generated by first creating binary chromosomes and then deriving percentage chromosomes from them as follows: a first percentage value \(x_{p,1}\) was drawn uniformly at random from [0, 1], the second from \([0, 1-x_{p,1}]\), and so on up to the last percentage. Lastly, each percentage chromosome was explored a certain number of times. All the solutions were evaluated at the end of the iteration, and the best ones were selected for the next iteration.

From the second iteration onwards, the d best binary solutions of the previous iteration were used to generate new percentage values, each one explored a certain number of times. Following evaluation, only the d best solutions among all the existing solutions and the newly generated ones were retained for the subsequent iteration.

This algorithm, despite its effectiveness, has some critical issues. First of all, the double coding can be avoided by simply representing non-selected items with a zero percentage value, thus lightening the code and the computation. Moreover, the way percentages are generated suffers from bias: the expected percentage assigned to the first component risks being greater than that of the following ones. For this reason, a safer method for obtaining more uniform probabilities is to generate all the values within a fixed range [0, 1] and then normalize them. Overcoming these issues lays the foundation for a more robust and efficient algorithm.
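The bias of the sequential scheme, and the uniformity of the generate-then-normalize alternative, can be checked empirically with the following Python sketch (function names are ours, not the paper's):

```python
import random

def sequential_percentages(n, rng):
    """GA-1-style scheme: each percentage is drawn from what remains,
    so earlier components get systematically larger shares."""
    values, remaining = [], 1.0
    for _ in range(n - 1):
        x = rng.uniform(0, remaining)
        values.append(x)
        remaining -= x
    values.append(remaining)           # last component takes the leftover
    return values

def normalized_percentages(n, rng):
    """GA-OMM-style scheme: draw all values uniformly, then normalize."""
    raw = [rng.uniform(0, 1) for _ in range(n)]
    s = sum(raw)
    return [x / s for x in raw]

rng = random.Random(4)
trials = 20000
seq_first = sum(sequential_percentages(4, rng)[0] for _ in range(trials)) / trials
nrm_first = sum(normalized_percentages(4, rng)[0] for _ in range(trials)) / trials
print(seq_first, nrm_first)
```

For n = 4, the sequential scheme gives the first component an expected share of about 0.5, while the normalized scheme gives every component the same expected share of 0.25.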

4 Genetic algorithm for optimal mixture

Let X indicate the chromosome, P the parent, and C the child. Our Genetic Algorithm for Optimal Multivariate Mixture (GA-OMM) works as follows. In the initial iteration, a designated quantity of chromosomes \(X_p\), denoted as \(P_{start}\), is generated. From this set of chromosomes, fractions \(p_c\) and m are selected to apply the crossover and mutation operators, respectively, aiming to discover novel solutions. The resulting solutions obtained from these two genetic operators, combined with the original parents, are collectively considered and subsequently subjected to a percentage exploration operator for further exploration. Every Percentage Chromosome \(X_p\) is “explored” \(N_{Expl}\) times by randomly moving the percentage points within a fixed neighborhood of the starting percentage values, ensuring that the total sum of the new percentage values is one. Finally, the fitness value of each solution within the iteration is computed, and the best P chromosomes are chosen through elite selection to be part of the new generation. The workflow of GA-OMM is shown in Fig. 3, and its working scheme in Fig. 4.

Fig. 3: The workflow of the generalized model of GA-OMM. The red boxes represent the starting and end points of an iteration.

Fig. 4: The working scheme of the GA-OMM during an iteration involves several steps. First, mutation and crossover operations are applied to the starting population. Then, percentage exploration is performed on all the resulting solutions. Finally, the reproduction phase, which includes evaluation and selection, determines the population for the next iteration.

4.1 Encoding (Genotype)

In the GA, each combination of items is represented using percentage vector coding. This direct vector coding scheme expresses the proportion or percentage allocation of the n components. Consequently, a mixture is represented by a Percentage Chromosome \((X_p)\) defined as:

\(X_p = (x_{p,1}, x_{p,2}, \ldots , x_{p,n}),\)

where \(x_{p,i}\in [0,1]\) and \(\sum _{i=1}^n x_{p,i} =1\). Each solution \(X_p\) can serve as either a Parent or a Child during an iteration. Chromosomes assume the role of parents when employed to generate new chromosomes, which in turn are designated as children. Consequently, chromosomes may be written as \(P_p=X_p\) when acting as parents or \(C_p=X_p\) when serving as children. Alternatively, they remain in the form \(X_p\) when not selected by the crossover or mutation operators during the generation process.

During chromosome generation, the \(\texttt{PercentageGeneration}\) function is applied to generate the percentage values. This function assigns percentages to a chromosome \(X_p\) by drawing random values within the range \([\texttt{min}, \texttt{max}]\) and then normalizing all the generated values using:

\(x_{p,i} = \frac{x_{p,i}^{raw}}{\sum _{k=1}^{n} x_{p,k}^{raw}}.\)
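A possible Python sketch of the \(\texttt{PercentageGeneration}\) step (the `lo`/`hi` bounds play the role of \(\texttt{min}\)/\(\texttt{max}\); the function name and signature are ours):

```python
import random

def percentage_generation(n, lo=0.0, hi=1.0, rng=random):
    """Draw n raw values uniformly in [lo, hi], then normalize so the
    resulting percentage chromosome X_p sums to 1."""
    raw = [rng.uniform(lo, hi) for _ in range(n)]
    s = sum(raw)
    return [x / s for x in raw]

random.seed(5)
xp = percentage_generation(8, lo=0.0, hi=1.0)
print(xp, sum(xp))
```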

4.2 Mutation

The \(\texttt{Mutation}\) operator operates in the following manner: given the initial/selected population, a random number within the range [0, 1] is generated for each individual. Then, individuals with a value lower than m are selected for mutation. For each selected individual, a random value between 0 and 1 is created for each gene. A new random percentage value is generated for genes with values less than \(p_m\) . Subsequently, the entire \(X_p\) is normalized to ensure that the sum of the new \(x_{p,i}\) values aligns with the required criteria.
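A Python sketch of the gene-level part of this mutation operator (the population-level selection of individuals with rate m is omitted; names are ours):

```python
import random

def mutate_percentages(x_p, p_m, rng=random):
    """Regenerate each gene with probability p_m, then renormalize so
    the chromosome still sums to 1."""
    child = [rng.uniform(0, 1) if rng.random() < p_m else g for g in x_p]
    s = sum(child)
    return [g / s for g in child]

random.seed(6)
parent = [0.1, 0.2, 0.3, 0.4]
child = mutate_percentages(parent, p_m=0.5)
print(child, sum(child))
```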

4.3 Crossover

Through the \(\texttt{Crossover}\) operator, for each chromosome within the initial/selected population, a random number in the range [0, 1] is generated, and the corresponding chromosome is selected for crossover if this value is less than \(p_c\). If the number of selected solutions is odd, one of them is randomly removed from the batch. Two chromosomes are then randomly chosen from the set at a time for the crossover operation, ensuring that each chromosome is chosen only once. First, a cut point is defined by randomly generating the index of a gene; then, an exchange of genes is performed from that index onwards. This process ensures that the number of children equals the number of parents. As in the mutation process, the percentage values of the children must be normalized after the crossover operation.
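The crossover-plus-normalization step for one pair can be sketched as follows (the \(p_c\)-based pairing logic and odd-size handling are omitted; names are ours):

```python
import random

def crossover_percentages(p1, p2, rng=random):
    """One-point crossover on percentage chromosomes: swap tails at a
    random cut, then renormalize each child to sum to 1."""
    c = rng.randint(1, len(p1) - 1)                 # random cut point
    raw1, raw2 = p1[:c] + p2[c:], p2[:c] + p1[c:]   # exchange tails
    return ([g / sum(raw1) for g in raw1],
            [g / sum(raw2) for g in raw2])

random.seed(7)
c1, c2 = crossover_percentages([0.7, 0.1, 0.1, 0.1], [0.25, 0.25, 0.25, 0.25])
print(c1, c2)
```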

4.4 Percentage exploration

The \(\texttt{Percentage Exploration}\) operator involves exploring all the chromosomes of an iteration \(N_{Expl}\) times by randomly moving their percentage points within a fixed neighborhood around the starting percentage values. A modified chromosome can be represented as in Eq. 9:

\(X_p' = (x_{p,1} + \Delta x_{p,1}, \ldots , x_{p,n} + \Delta x_{p,n}),\)

where the value of \(\Delta x_{p,i}\) is determined by the function \(\texttt{rand}(-\alpha ;+\alpha )\) as shown in Eq. 10:

\(\Delta x_{p,i} = \texttt{rand}(-\alpha ; +\alpha ).\)

NOTE: each time a new percentage change value \(\Delta x_{p,i}\) is generated, the algorithm ensures that the following condition is satisfied:

\(0 \le x_{p,i} + \Delta x_{p,i} \le 1.\)

After this step, a normalization process is applied to the percentages to ensure that their sum remains equal to 1, as shown in Eq. 12:

\(x_{p,i}' = \frac{x_{p,i} + \Delta x_{p,i}}{\sum _{k=1}^{n} \left( x_{p,k} + \Delta x_{p,k}\right) }.\)

The percentage exploration operator, in GA-1 as in GA-OMM, aids the search by concentrating the exploration of solutions within a limited area of the search space, thus contributing to the local-search component of the algorithm. In contrast, mutation and crossover behave here as global-search operators, because they explore the solution space in a broader and more diverse way.
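Assuming the feasibility condition above simply requires each perturbed percentage to remain nonnegative (an assumption; the exact condition is given in the equation referenced in the NOTE), the operator can be sketched as:

```python
import numpy as np

def percentage_exploration(chromosome, alpha, rng=None):
    """Move each percentage by a step drawn in [-alpha, +alpha], redrawing
    any step that would make the gene negative, then renormalize."""
    if rng is None:
        rng = np.random.default_rng()
    moved = chromosome.copy()
    for i in range(moved.size):
        delta = rng.uniform(-alpha, alpha)
        while moved[i] + delta < 0:                 # assumed feasibility check
            delta = rng.uniform(-alpha, alpha)
        moved[i] += delta
    return moved / moved.sum()
```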

4.5 Evaluation and selection (reproduction)

The reproduction process in our algorithm considers all the solutions of a given iteration. It begins by evaluating each individual's fitness through an objective function; a selection function then identifies the best individuals that will serve as parents \((X_p)\) for the subsequent iteration. During the evaluation step, the fitness function is applied to each individual to determine the percentage values \(\bar{X}_p\) that most closely approximate the desired chemical composition \(\bar{X}\) . This approximation is represented by the equation:

To achieve this, a common metric is used to compare all the solutions in the iteration. Our experiment employs an objective function that minimizes the difference between the mixture represented by the chromosome, \(X_{mix}\) , and the desired final composition \(\bar{X}\) . Specifically, the fitness function used in our study is expressed as a form of the weighted-sum approach (Konak et al., 2006 ), i.e.:

where \(r_{j}\) represents the range of the j -th component calculated as:

In the second step, the selection process is performed using elite selection. This strategy selects the P best-fit solutions from the current iteration to be part of the next generation. Specifically, it chooses the P chromosomes with the lowest fitness values.
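The evaluation and elite-selection steps can be sketched as below, assuming the objective is the sum of range-normalized absolute deviations between the mixture composition implied by \(X_p\) and the target (a sketch; the exact weighting follows the fitness equation above):

```python
import numpy as np

def fitness(x_p, features, target):
    """Range-weighted deviation between the mixture implied by the
    percentages x_p (with `features` an items x features matrix) and `target`."""
    mix = x_p @ features                            # composition of the mixture
    r = features.max(axis=0) - features.min(axis=0) # range r_j of each feature
    return float(np.sum(np.abs(mix - target) / r))

def elite_selection(population, features, target, P):
    """Elite selection: keep the P chromosomes with the lowest fitness."""
    scores = np.array([fitness(x, features, target) for x in population])
    return [population[i] for i in np.argsort(scores)[:P]]
```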

5 Experimental settings

To evaluate the proposed GA-OMM approach, we conducted experiments using a set of items characterized by \(n=8\) items and \(m=5\) features. These features can be interpreted as properties such as proteins, fats, carbohydrates, fibers, and water content, expressed in grams per 100 g of product; alternatively, other variables such as price or quality level can be considered (Table  1 ). Several reasons justify the selection of the specific numbers in the table. One crucial consideration is ensuring the algorithm's ability to handle features that are heterogeneous in their absolute values and ranges, as well as fostering competition among different features. For instance, Feature 1 exhibits very small values and a narrow range, while Features 2 and 3 are closely matched in values and ranges, as are Features 4 and 5.

To assess the algorithm’s consistency, we performed a set of 10 tests, in which a random desired mix was specified for each test, as indicated in Table  2 .

Each test consisted of 30 executions, and the mean and standard deviation of the best objective function values from each test were calculated. These means and standard deviations were then averaged across all the tests, and the results, i.e., the means and standard deviations of the 10 tests, each averaged over 30 executions, were compared with those obtained from GA-1 and from an LP solver. The purpose of averaging both the means and the standard deviations is to assess the algorithm's stability and its ability to address problems with varying initial data. To conduct the experiments, we utilized MATLAB and its linear-programming solver \(\texttt{linprog}\) , a function that finds the minimum of a linear objective function using a dual-simplex optimization algorithm. The experiments were performed on a machine equipped with a 2.8 GHz Intel i7 quad-core CPU with 6 MB shared L3 cache and 16 GB of 2133 MHz LPDDR3 RAM. To test the consistency and reliability of the GA, we conducted a series of experiments comparing GA-OMM results under different hyperparameter values with the results obtained from the other methodologies. Specifically, two types of settings were established:

\(p_c\) , \(p_m\) , and m settings: each experiment was executed with the following hyperparameter values: \(p_c = [0.8, 0.85, 0.9, 0.95]\) , \(p_m = [0.1, 0.15, 0.2, 0.25]\) , and \(m = [10, 20, 30, 40]\) . The values of P and \(t_g\) were constant at 200 and 100, respectively.

P and \(t_g\) settings: the experiments were performed with the following hyperparameter values: \(P = [20, 70, 140, 200]\) and \(t_g = [10, 35, 70, 100]\) . The values of \(p_c\) , \(p_m\) , and m were held constant at 0.9, 0.25 and 30, respectively. The initial population size, \(P_{start}\) , was always set to P /2. For the experiments where \(P = 20\) , the parameter m could not be set to 30; therefore, we decided to use \(m = 10\) .
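As an aside, the MATLAB \(\texttt{linprog}\) baseline can be reproduced with SciPy's `linprog` by linearizing the absolute deviations with one auxiliary variable per feature; the weighted objective and the sum-to-one constraint are assumptions carried over from the description above, not the paper's exact MATLAB code:

```python
import numpy as np
from scipy.optimize import linprog

def solve_mixture_lp(features, target, weights=None):
    """LP analogue of the linprog baseline: minimize the weighted absolute
    deviation of the mixture from `target` over item percentages x."""
    n, m = features.shape                           # n items, m features
    w = np.ones(m) if weights is None else np.asarray(weights)
    c = np.concatenate([np.zeros(n), w])            # variables z = [x, t]
    # |features^T x - target| <= t, written as two inequality blocks
    A_ub = np.block([[features.T, -np.eye(m)],
                     [-features.T, -np.eye(m)]])
    b_ub = np.concatenate([target, -target])
    A_eq = np.concatenate([np.ones(n), np.zeros(m)])[None, :]  # sum(x) = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (n + m), method="highs")
    return res.x[:n], res.fun
```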

Thus, the hyperparameters tested during the experiments are shown in Algorithm 1.

Algorithm 1: The GA-OMM tested hyperparameters.

The first set of experiments resulted in the findings presented in Table  3 . This table displays the difference between the average GA-OMM fitness and the fitness obtained through LP, as well as the average standard deviations of the GA-OMM fitness values. The same data are represented graphically in Figs.  5 , 6 , 7 and 8 .

Figure 5: (a) fitness difference between the average GA-OMM fitness and the fitness obtained through LP; (b) average standard deviations of the GA-OMM fitness values. Values are shown for \(m = 10\) and different configurations of the hyperparameters \(p_c\) and \(p_m\)

Figure 6: (a) fitness difference between the average GA-OMM fitness and the fitness obtained through LP; (b) average standard deviations of the GA-OMM fitness values. Values are shown for \(m = 20\) and different configurations of the hyperparameters \(p_c\) and \(p_m\)

Figure 7: (a) fitness difference between the average GA-OMM fitness and the fitness obtained through LP; (b) average standard deviations of the GA-OMM fitness values. Values are shown for \(m = 30\) and different configurations of the hyperparameters \(p_c\) and \(p_m\)

Figure 8: (a) fitness difference between the average GA-OMM fitness and the fitness obtained through LP; (b) average standard deviations of the GA-OMM fitness values. Values are shown for \(m = 40\) and different configurations of the hyperparameters \(p_c\) and \(p_m\)

Table  3 demonstrates that the choice of hyperparameters \(p_c\) and \(p_m\) does not significantly affect the results, nor does the parameter m . In most cases, the average difference between the GA fitness and the LP fitness, as well as the standard deviation, has an order of magnitude between \(10^{-6}\) and \(10^{-4}\) . However, certain combinations of hyperparameters exhibit superior performance: \([p_c, p_m, m] = [0.85, 0.15, 40]\) achieves a precision in the order of \(10^{-7}\) , while combinations such as [0.9, 0.25, (10, 30)] and [0.95, 0.2, 30] achieve a precision in the order of \(10^{-8}\) . It is noteworthy that, when analyzing the convergence graphs of the first setting, GA-1 generally converges faster than GA-OMM (Fig.  9 a); however, in all cases GA-OMM yielded better results than GA-1 and remained very close to the values obtained from LP (Fig.  9 b). One possible explanation for the faster convergence of GA-1 lies in the greater emphasis it places on local search over global search, which increases the likelihood of falling into local optima. GA-1 employs only the binary mutation operator as its global-search operator, whereas GA-OMM utilizes both mutation and crossover, further applied to percentages, enabling a more efficient exploration of the solution space. In this context, \(t_g = 10\) was a suitable threshold for measuring the algorithms' convergence towards the optimum since, in all cases, beyond that threshold the best optimum found by both algorithms remained stable at the same value. Due to the high frequency of positive results and the best overall performance, we continued the hyperparameter testing with the values [0.9, 0.25, 30].

Figure 9: (a) an example where GA-1 converges earlier than GA-OMM; (b) an example where GA-OMM finds a better optimum than GA-1. The average and minimum values of GA-1 per iteration are shown in blue and red, while those of GA-OMM are shown in green and purple, respectively

In the second set of experiments, the main objective was to compare the performances of the two Genetic Algorithms (GAs) by varying the values of the hyperparameters P and \(t_g\) . While m , \(p_c\) , and \(p_m\) are parameters linked to the quality of the search within the solution space, P and \(t_g\) are parameters associated with the quantity of solutions explored by the algorithm. For this reason, one aim of this experiment was to determine the minimum number of solutions that must be explored to converge to the optimum with high probability, while simultaneously minimizing the total number of solutions explored. The bar diagrams in Fig.  10 a illustrate the difference between the average fitness of GA-1 and the average fitness achieved through GA-OMM, while Fig.  10 b shows the difference in the average standard deviations of the fitness values between GA-1 and GA-OMM.

Figure 10: (a) fitness difference between the average GA-1 fitness and the average fitness obtained through GA-OMM; (b) difference in the average standard deviations of the GA-1 and GA-OMM fitness values

The bar diagrams show that GA-OMM consistently outperforms GA-1, except for the hyperparameter combination \([P, t_g] = [20, 10]\) . This specific combination suggests that when the number of solutions to explore is low, GA-1 converges faster toward a solution, even if that means converging to a local optimum, as shown in Fig.  9 b. When the number of solutions explored is sufficient for convergence, i.e., as the number of solutions explored increases, GA-OMM consistently proves more effective than GA-1. Specifically, the difference in average fitness indicates that the performance gap between the two algorithms widens as \(t_g\) increases, whereas increasing the value of P reduces these differences. The same pattern is observed for the standard deviations: as \(t_g\) increases, the differences tend to grow, while increasing P tends to shrink them. Regarding the performance of GA-OMM itself, the bar diagrams in Fig.  11 a and b illustrate how GA-OMM reaches the global optimum, and does so with greater precision and consistency as the values of P and \(t_g\) increase, with a slightly more pronounced influence from \(t_g\) . Corresponding graphs for GA-1 can be found in Fig.  12 a and b.

Figure 11: (a) fitness difference between the average GA-OMM fitness and the fitness obtained through LP; (b) average standard deviations of the GA-OMM fitness values

Figure 12: (a) fitness difference between the average GA-1 fitness and the fitness obtained through LP; (b) average standard deviations of the GA-1 fitness values

7 Conclusions

This research addresses the mathematical problem of the optimal selection of multivariate items for optimal multivariate mixtures. We propose a novel GA-based model, GA-OMM, to solve this problem and evaluate its performance through experiments. Specifically, after providing a generalized description of the algorithm, we employ an 8-item, 5-feature model to assess its consistency and performance. Two sets of experiments are conducted, each executing 10 desired-mix tests for 30 runs under different hyperparameter settings. The results are compared with those of a previous version of the GA, referred to as GA-1, and with a Linear Programming solver, and they demonstrate that GA-OMM significantly outperforms GA-1. Moreover, as the number of solutions explored increases, GA-OMM converges towards the global optimum, matching the behavior of the LP solver. It is evident that adding items increases the likelihood of multiple optimal solutions, which the algorithm should ideally uncover comprehensively, while enlarging the feature set inevitably complicates the search for the global optimum. Access to real-world data would significantly aid in validating and refining the model's performance under practical scenarios. Furthermore, real-world scenarios often entail multiple simultaneous demands, transforming the problem into a multicriteria challenge; there is therefore a pressing need to adapt or extend the algorithm to address these complex problem domains effectively. Future research will focus on modifying the model to equip the proposed GA to handle more intricate problems characterized by larger numbers of items and features and the nuances inherent in multiple-criteria decision-making (MCDM).

Data availability

Data will be made available on reasonable request.

Aazza, S. (2021). Application of multivariate optimization for phenolic compounds and antioxidants extraction from Moroccan cannabis sativa waste. Journal of Chemistry . https://doi.org/10.1155/2021/9738656


Abuajwa, O., Roslee, M. B., & Yusoff, Z. B. (2021). Simulated annealing for resource allocation in downlink Noma systems in 5g networks. Applied Sciences, 11 (10), 110. https://doi.org/10.3390/app11104592

Adamuthe, A.C., & Bichkar, R.S. (2012). Tabu search for solving personnel scheduling problem. In 2012 international conference on communication, information & computing technology (ICCICT) , pp. 1–6.

Balamurugan, A. N., Loganathan, G., Bellin, M. D., Wilhelm, J. J., Harmon, J., Anazawa, T., Soltani, S. M., Radosevich, D. M., Yuasa, T., Tiwari, M., Papas, K. K., McCarthy, R., Sutherland, D. E. R., & Hering, B. J. (2012). A new enzyme mixture to increase the yield and transplant rate of autologous and allogeneic human islet products. Transplantation, 93 (7), 693–702. https://doi.org/10.1097/TP.0b013e318247281b

Barbarosoglu, G., & Ozgur, D. (1999). A tabu search algorithm for the vehicle routing problem. Computers & Operations Research, 26 (3), 255–270. https://doi.org/10.1016/S0305-0548(98)00047-1

Ben-Daya, M., & Al-Fawzan, M. (1998). A tabu search approach for the flow shop scheduling problem. European Journal of Operational Research, 109 (1), 88–95. https://doi.org/10.1016/S0377-2217(97)00136-7

Bianchi, L., Dorigo, M., Gambardella, L. M., & Gutjahr, W. J. (2009). A survey on metaheuristics for stochastic combinatorial optimization. Natural Computing, 8 (2), 239–287. https://doi.org/10.1007/s11047-008-9098-4

Błażewicz, J., Formanowicz, P., Kasprzak, M., Markiewicz, W. T., & Świercz, A. (2004). Tabu search algorithm for dna sequencing by hybridization with Isothermic libraries. Computational Biology and Chemistry, 28 (1), 11–19. https://doi.org/10.1016/j.compbiolchem.2003.12.002

Chang, F., & Chen, L. (1998). Real-coded genetic algorithm for rule-based flood control reservoir management. Water Resources Management, 12 (3), 185–198. https://doi.org/10.1023/A:1007900110595

Chen, C., Li, Y., Cao, G., & Zhang, J. (2023). Research on dynamic scheduling model of plant protection UAV based on levy simulated annealing algorithm. Sustainability, 15 (3), 1503. https://doi.org/10.3390/su15031772

Chen, T., & Chen, C. (1997). Improvements of simple genetic algorithm in structural design. International Journal for Numerical Methods in Engineering, 40 (7), 1323–1334.

Cordeau, J. F., Laporte, G., & Mercier, A. (2001). A unified Tabu search heuristic for vehicle routing problems with time windows. The Journal of the Operational Research Society, 52 (8), 928–936.

Dong, Y., Zhao, C., Yang, G., Chen, L., Wang, J., & Feng, H. (2013). Integrating a very fast simulated annealing optimization algorithm for crop leaf area index variational assimilation. Mathematical and Computer Modelling, 58 (3), 877–885. https://doi.org/10.1016/j.mcm.2012.12.013

Dwivedi, V., Chauhan, T., Saxena, S., & Agrawal, P. (2012). Travelling salesman problem using genetic algorithm. IJCA proceedings on development of reliable information systems, techniques and related issues (DRISTI 2012)  1: 25 .

Edwards, G., Sørensen, C. G., Bochtis, D. D., & Munkholm, L. J. (2015). Optimised schedules for sequential agricultural operations using a tabu search method. Computers and Electronics in Agriculture, 117 , 102–113. https://doi.org/10.1016/j.compag.2015.07.007

El-Hagrasy, A. S., D’Amico, F., & Drennen, J. K., III. (2006). A process analytical technology approach to near-infrared process control of pharmaceutical powder blending. Part i: D-optimal design for characterization of powder mixing and preliminary spectral data evaluation. Journal of Pharmaceutical Sciences, 95 (2), 392–406. https://doi.org/10.1002/jps.20467

Gendreau, M., Hertz, A., & Laporte, G. (1994). A tabu search heuristic for the vehicle routing problem. Management Science, 40 (10), 1276–1290.

Glover, F. (1989). Tabu search – part i. ORSA Journal on Computing, 1 (3), 190–206. https://doi.org/10.1287/ijoc.1.3.190

Holland, J. H. (1992). Adaptation in natural and artificial systems: an introductory analysis with applications to biology, control, and artificial intelligence . MIT press.


Jacob, A., & Ashok, B. (2021). Potential of amyl alcohol mixtures derived from scenedesmus quadricauda microalgae biomass as third generation bioenergy for compression ignition engine applications using multivariate-desirability analysis . Energy Sources, Part A: Recovery, Utilization and Environmental Effects. https://doi.org/10.1080/15567036.2021.1950872

Jones, D. F., Mirrazavi, S. K., & Tamiz, M. (2002). Multi-objective meta-heuristics: An overview of the current state-of-the-art. European Journal of Operational Research, 137 (1), 1–9. https://doi.org/10.1016/S0377-2217(01)00123-0

Kirkpatrick, S., Gelatt, C. D., & Vecchi, M. P. (1983). Optimization by simulated annealing. Science, 220 (4598), 671–680. https://doi.org/10.1126/science.220.4598.671

Konak, A., Coit, D. W., & Smith, A. E. (2006). Multi-objective optimization using genetic algorithms: A tutorial. Reliability Engineering and System Safety, 91 (9), 992–1007. https://doi.org/10.1016/j.ress.2005.11.018

Kuo, S. F., Liu, C. W., & Merkley, G. P. (2001). Sw-soil and water: Application of the simulated annealing method to agricultural water resource management. Journal of Agricultural Engineering Research, 80 (1), 109–124. https://doi.org/10.1006/jaer.2001.0723

Kwarciak, K., & Formanowicz, P. (2014). Tabu search algorithm for DNA sequencing by hybridization with multiplicity information available. Computers & Operations Research, 47 , 1–10. https://doi.org/10.1016/j.cor.2014.01.007

Lavandier, J., Islami, A., Delahaye, D., Chaimatanan, S., & Abecassis, A. (2021). Selective simulated annealing for large scale airspace congestion mitigation. Aerospace, 8 (10), 288.

Mirzahosseini, M., Najjar, Y. M., Alavi, A. H., & Gandomi, A. H. (2015). Next-generation models for evaluation of the flow number of asphalt mixtures. International Journal of Geomechanics, 15 (6), 04015009. https://doi.org/10.1061/(ASCE)GM.1943-5622.0000483

Monteagudo, J. M., Durán, A., Aguirre, M., & San Martín, I. (2011). Optimization of the mineralization of a mixture of phenolic pollutants under a ferrioxalate-induced solar photo-fenton process. Journal of Hazardous Materials, 185 (1), 131–139. https://doi.org/10.1016/j.jhazmat.2010.09.007

Najafi Moghaddam Gilani, V., Hosseinian, S. M., Behbahani, H., & Hamedi, G. H. (2020). Prediction and pareto-based multi-objective optimization of moisture and fatigue damages of asphalt mixtures modified with nano hydrated lime. Construction and Building Materials, 261 , 120509. https://doi.org/10.1016/j.conbuildmat.2020.120509

Oda, T., Otokita, T., Tsugui, T., & Mashiyama, Y. (1997). Application of simulated annealing to optimization of traffic signal timings. IFAC Proceedings, 30 (8), 733–736. https://doi.org/10.1016/S1474-6670(17)43908-5

Redi, A. A. N. P., Jewpanya, P., Kurniawan, A. C., Persada, S. F., Nadlifatin, R., & Dewi, O. A. C. (2020). A simulated annealing algorithm for solving two-echelon vehicle routing problem with locker facilities. Algorithms, 13 (9), 218. https://doi.org/10.3390/a13090218

Saderova, J., Rosova, A., Sofranko, M., & Kacmary, P. (2021). Example of warehouse system design based on the principle of logistics. Sustainability (Switzerland), 13 (8), 4492. https://doi.org/10.3390/su13084492

Seyedmohammadi, J., Sarmadian, F., Jafarzadeh, A. A., & McDowell, R. W. (2019). Development of a model using matter element, AHP and GIS techniques to assess the suitability of land for agriculture. Geoderma, 352 , 80–95. https://doi.org/10.1016/j.geoderma.2019.05.046

Sgarro, G. A., & Grilli, L. (2023). Genetic algorithm for optimal multivariate mixture. Applied Mathematical Sciences, 17 (1), 15–25.

Sgarro, G. A., & Grilli, L. (2024). Ant colony optimization for Chinese postman problem. Neural Computing and Applications, 36 (6), 2901–2920. https://doi.org/10.1007/s00521-023-09195-4

Shirneshan, A., Bagherzadeh, S. A., Najafi, G., Mamat, R., & Mazlan, M. (2021). Optimization and investigation the effects of using biodiesel-ethanol blends on the performance and emission characteristics of a diesel engine by genetic algorithm. Fuel, 289 , 119. https://doi.org/10.1016/j.fuel.2020.119753

Sivanandam, S., & Deepa, S. (2008). Genetic algorithms, Introduction to genetic algorithms (pp. 15–37). Springer. https://doi.org/10.1007/978-3-540-73190-0_7

Thongsook, S., Budsaba, K., & Borkowski, J. J. (2014). Using a genetic algorithm to generate ds-optimal designs for mixture experiments in a simplex region. Lobachevskii Journal of Mathematics, 35 (2), 122–137. https://doi.org/10.1134/S1995080214020103

Tufano, A., Accorsi, R., & Manzini, R. (2020). A simulated annealing algorithm for the allocation of production resources in the food catering industry. British Food Journal, 112 (7), 2139–2158. https://doi.org/10.1108/BFJ-08-2019-0642

Wahde, M. (2008). Biologically inspired optimization methods: An introduction . Boston: WIT press.


Whitley, D. (1994). A genetic algorithm tutorial. Statistics and Computing, 4 (2), 65–85. https://doi.org/10.1007/BF00175354

Xu, T., Wei, H., & Hu, G. (2009). Study on continuous network design problem using simulated annealing and genetic algorithm. Expert Systems with Applications, 36 (2), 1322–1328. https://doi.org/10.1016/j.eswa.2007.11.023

Yu, V. F., Susanto, H., Jodiawan, P., Ho, T. W., Lin, S. W., & Huang, Y. T. (2022). A simulated annealing algorithm for the vehicle routing problem with parcel lockers. IEEE Access, 10 , 20764–20782. https://doi.org/10.1109/ACCESS.2022.3152062


Acknowledgements

We want to acknowledge the assistance of the Grant Office of the University of Foggia for their contribution to partnership creation with Casillo Group.

Open access funding provided by Università di Foggia within the CRUI-CARE Agreement. Research financially supported by: Project MiSE-DGPIIPMI-Artificial Intelligence to support the digitalization and industrial engineering process-2021.

Author information

Giacinto Angelo Sgarro, Luca Grilli and Domenico Santoro have contributed equally to this work.

Authors and Affiliations

Department of Economics, Management and Territory (DEMeT), University of Foggia, Via A. Da Zara, 11, 71121, Foggia, FG, Italy

Giacinto Angelo Sgarro & Luca Grilli

Department of Economics and Finance, University of Bari Aldo Moro, Largo A. Santa Scolastica, 70124, Bari, BA, Italy

Domenico Santoro


Corresponding author

Correspondence to Giacinto Angelo Sgarro .

Ethics declarations

Conflict of interest

The authors have no actual or potential conflict of interest in relation to this article.

Additional information

Publisher's note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Sgarro, G.A., Grilli, L. & Santoro, D. Optimal multivariate mixture: a genetic algorithm approach. Ann Oper Res (2024). https://doi.org/10.1007/s10479-024-06045-x


Received : 16 October 2023

Accepted : 26 April 2024

Published : 18 May 2024

DOI : https://doi.org/10.1007/s10479-024-06045-x


Keywords

  • Meta-heuristics
  • Decision-making
  • Blending problems
  • Artificial intelligence



    The Optimal Multivariate Mixture Problem (OMMP) consists of finding an optimal mixture which, starting from a set of elements (items) described by a set of variables (features), is as close as possible to an ideal solution. This problem has numerous applications spanning various fields, including food science, agriculture, chemistry, materials science, medicine, and pharmaceuticals. The OMMP ...