Dynamic constrained optimization is a challenging research topic in which the objective function and/or constraints change over time. In such problems, it is commonly assumed that all problem instances are feasible. In reality, some instances can be infeasible due to various practical issues, such as a sudden change in resource requirements or a large change in the availability of resources. Decision-makers have to determine whether a particular instance is feasible, as infeasible instances cannot be solved because there is no solution to implement. In this case, locating the nearest feasible solution would be valuable information for the decision-makers. In this paper, a differential evolution algorithm is proposed for solving dynamic constrained problems; it learns from past environments, transfers important knowledge from them to the current instance, and includes a mechanism for suggesting a good feasible solution when an instance is infeasible. To judge the performance of the proposed algorithm, 13 well-known dynamic test problems were solved. The results indicate that the proposed algorithm outperforms existing recent algorithms in 79.40% of all the environments and that it can also find a good, though infeasible, solution when an instance is infeasible.
Dynamic constrained optimization is necessary for solving many real-world problems in various fields [
Detecting changes in environments and tracking the changing optimum are two important tasks for solving dynamic optimization problems (DOPs). Many studies have been conducted to track the changing optimum [
To judge the performance of new algorithms, synthetic constrained test problems, in which all instances are feasible, have been proposed in the literature. However, in practice, some problem environments may not be feasible owing to various real-life issues; for example, resource requirements can be significantly higher than expected due to defects in supplied materials, shortages of resources in a given time period due to delivery disruptions, capacity reductions due to machine breakdowns, and unexpected increases in market demand [
When solving optimization problems using EAs, it is necessary to define the search space, which is usually limited by variable bounds. The feasible space of a constrained problem is limited by both its constraint functions and variable bounds. For this reason, the feasible space is usually smaller than the search space, but they can be equal in some cases. To optimize a problem, we are interested in finding the best feasible solution, which is usually located on the boundary of the feasible space for most practical problems [
Motivated by the above-mentioned facts, the aim of this paper is, firstly, to develop a mechanism that determines the limits of feasibility and can thus help to deal with possible infeasibility in problem instances. Secondly, a knowledge-transfer mechanism that allows the retrieval and exploitation of solutions from past environments is proposed. Finally, to solve DCOPs in which infeasible environments may exist, a multi-operator-based differential evolution (DE) algorithm is developed by incorporating those two mechanisms. The performance of the proposed algorithm was tested by solving 13 single-objective DCOPs with dynamic inequality constraints, and its solutions were compared with those of state-of-the-art dynamic constrained optimization algorithms. The experimental results revealed that the proposed approach can obtain high-quality solutions in a limited amount of time and is capable of dealing with infeasible instances, so it is suited to all environments.
The rest of this paper is organized as follows: in Section 2, EAs for solving DCOPs are reviewed; in Section 3, DCOPs are introduced and the proposed approach is explained in detail; in Section 4, the experimental results are discussed and the performance of the proposed method analyzed; and, finally, conclusions are drawn and future research directions suggested in Section 5.
In the literature, evolutionary algorithms (EAs) are widely accepted as effective methods for solving static optimization problems, and it is a common trend to extend EAs developed for static problems to solve dynamic ones. However, such algorithms converge to an optimum by reducing the population's diversity, so additional mechanisms are required to adapt to the changed environment if the final population of a previous environment is used as the initial population of the current one [
Compared to DOPs, not much work has been proposed in the literature for solving DCOPs. Thus, some EAs originally designed for static problems were adapted to solve dynamic constrained ones. Singh et al. [
A memory-based method is another EA strategy proposed for solving DCOPs. A study presented by Richter [
In a study by Richter et al. [
Ameca-Alducin et al. [
Also, some studies consider a repairing strategy for solving DCOPs. The study by Pal et al. [
In another significant study proposed by Nguyen [
Bu et al. [
Another strategy, called sensitive constraint detection, was introduced by Hamza et al. [
In this section, a framework for solving DCOPs is proposed. Firstly, a brief review of the optimization algorithm employed in this paper is provided. Then, an overview of the proposed framework is presented, followed by a detailed discussion of its components.
The optimizer used in this method is based on multi-operator differential evolution (MODE) [
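As a rough illustration of the multi-operator idea, the sketch below lets each individual pick one of two classic DE mutation strategies (DE/rand/1 and DE/best/1) each generation, followed by binomial crossover and greedy selection. The operator pool, parameter values, and the sphere objective are illustrative stand-ins, not MODE's actual configuration.

```python
import random

def sphere(x):
    # Placeholder objective: f(x) = sum(x_i^2), minimised at the origin.
    return sum(v * v for v in x)

def rand1(pop, i, F, best):
    # DE/rand/1: v = x_r1 + F * (x_r2 - x_r3)
    r1, r2, r3 = random.sample([j for j in range(len(pop)) if j != i], 3)
    return [a + F * (b - c) for a, b, c in zip(pop[r1], pop[r2], pop[r3])]

def best1(pop, i, F, best):
    # DE/best/1: v = x_best + F * (x_r1 - x_r2)
    r1, r2 = random.sample([j for j in range(len(pop)) if j != i], 2)
    return [a + F * (b - c) for a, b, c in zip(best, pop[r1], pop[r2])]

def mode_step(pop, fit, F=0.5, CR=0.9):
    # One generation: each individual picks one operator from the pool.
    best = pop[min(range(len(pop)), key=lambda j: fit[j])]
    for i in range(len(pop)):
        v = random.choice((rand1, best1))(pop, i, F, best)
        jrand = random.randrange(len(pop[i]))           # binomial crossover
        u = [v[j] if (random.random() < CR or j == jrand) else pop[i][j]
             for j in range(len(pop[i]))]
        fu = sphere(u)
        if fu <= fit[i]:                                 # greedy selection
            pop[i], fit[i] = u, fu
    return pop, fit

random.seed(1)
pop = [[random.uniform(-5, 5) for _ in range(5)] for _ in range(20)]
fit = [sphere(x) for x in pop]
f0 = min(fit)
for _ in range(100):
    pop, fit = mode_step(pop, fit)
```

In MODE-style algorithms the share of offspring assigned to each operator typically adapts to its recent success; the uniform choice above is the simplest placeholder for that mechanism.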
A DCOP is an optimization problem in which one or more of the functions and/or parameters change over time. The functions include the objective function and the constraint functions (i.e., the constraint left-hand sides). Within a given environment, the problem is treated as a static instance that does not change for a given time period (i.e., the duration of the environment). In general, a DCOP can be expressed as
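The formal model that follows this sentence appears to be missing from the text; a standard formulation consistent with the surrounding description (a time-dependent objective, inequality constraints with changeable right-hand sides, and fixed variable bounds) would be:

```latex
\begin{aligned}
\min_{\vec{x}} \quad & f(\vec{x}, t) \\
\text{subject to} \quad & g_j(\vec{x}, t) \le b_j(t), \qquad j = 1, \dots, m, \\
& L_i \le x_i \le U_i, \qquad i = 1, \dots, D,
\end{aligned}
```

where $t$ is the time (environment) index, $b_j(t)$ are the constraint right-hand-side values, and $[L_i, U_i]$ are the variable bounds, assumed here to stay fixed across environments.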
DCOPs can be classified into three types based on the dynamics of their objective functions and constraints: 1) the objective function is dynamic and the constraints static [
The structure of the proposed framework is shown in Algorithm 1. It starts by randomly generating an initial population (
Next, the algorithm enters a loop (Algorithm 1, lines 8–18) which has two main phases: 1) a search and 2) a change-handling technique. In the first, the multi-operator DE algorithm [
As many real-world DCOPs have cyclical changes [
In many real-world DOPs, it is essential to identify when an environmental change occurs and how large it is [
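Change detection is commonly implemented by re-evaluating a few stored detector (sentinel) solutions in each generation: if any of their objective or constraint values differ from the cached values, the environment has changed. The sketch below illustrates this under a hypothetical constraint whose right-hand side drifts with time; all names are illustrative.

```python
def rhs(t):
    # Hypothetical environment: the constraint's right-hand side shrinks over time.
    return 10.0 - 2.0 * t

def g(x, t):
    # Illustrative inequality constraint, written as g(x, t) <= 0.
    return x[0] + x[1] - rhs(t)

def change_detected(detectors, cached, t):
    # Re-evaluate the stored detector solutions; any difference from the
    # cached values signals that the environment has changed.
    return any(abs(g(d, t) - c) > 1e-12 for d, c in zip(detectors, cached))

detectors = [(1.0, 2.0), (4.0, 4.0)]           # sentinel solutions
cached = [g(d, 0) for d in detectors]          # their values in environment t = 0
same = change_detected(detectors, cached, 0)   # False: nothing has changed yet
moved = change_detected(detectors, cached, 1)  # True: rhs dropped from 10 to 8
```

The magnitude of change can then be estimated from the size of the differences rather than just their presence.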
In changing environments, the constraints may alter such that a solution that was feasible in the previous environment becomes infeasible. This may happen due to changes in the constraint coefficients, in the right-hand-side values (e.g., the availability of certain resources), or in both. We assume that the variable bounds remain unchanged (although they could also change), while the constraint right-hand-side values can change. Having no feasible solution means that no action can be taken, creating a discontinuity in the actions between environments, which is not acceptable in practice. To maintain continuity in actions, we believe it is beneficial to find the nearest feasible solution so that the practitioner can adjust the required resources; in other words, this generates a feasible solution with minimal changes to the right-hand-side values (Algorithm 3, lines 2–4). Since the new environment may be close to the previous one, some solutions from the latter may be adopted. Therefore, to transfer the knowledge of promising solutions from the previous environment to the new one,
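The idea of suggesting a solution with minimal right-hand-side changes can be sketched as follows: for each violated constraint, the smallest feasibility-restoring adjustment is to raise its right-hand side by exactly the violation amount. This is a simplification of the paper's Algorithm 3, and the function and variable names are illustrative.

```python
def minimal_rhs_relaxation(x, constraints, rhs):
    # For each constraint g_j(x) <= b_j, suggest the smallest increase in b_j
    # that makes the candidate solution x feasible (hypothetical helper names).
    relaxed = []
    for g_j, b_j in zip(constraints, rhs):
        violation = g_j(x) - b_j
        relaxed.append(b_j + max(0.0, violation))
    return relaxed

# Example: g1(x) = x1 + x2 <= 5 is violated by 2 at x = (4, 3); g2 is satisfied.
cons = [lambda x: x[0] + x[1], lambda x: x[0] - x[1]]
new_rhs = minimal_rhs_relaxation((4.0, 3.0), cons, [5.0, 1.0])  # [7.0, 1.0]
```

The suggested right-hand sides tell the decision-maker exactly how much extra resource would make the otherwise infeasible instance solvable.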
Also, as environments with the same change may have similar optimal solutions, solutions from previous environments are beneficial for use in a similar environment. The search process can start where other environments ended (Algorithm 3, lines 9–21), which assists in obtaining high-quality solutions in a limited amount of time. Firstly, the magnitude of change (
Another scenario may be where several previous environments have the same
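The retrieval of past solutions described above can be sketched as a small archive keyed by the magnitude of change; the class name, matching rule, and tolerance are illustrative assumptions, not the paper's exact procedure.

```python
class EnvironmentArchive:
    # Hypothetical store of the best solutions from past environments,
    # keyed by the detected magnitude of change.
    def __init__(self, tol=1e-6):
        self.entries = []        # list of (magnitude, solutions) pairs
        self.tol = tol

    def store(self, magnitude, solutions):
        self.entries.append((magnitude, list(solutions)))

    def retrieve(self, magnitude):
        # Return solutions from the most recent past environment whose change
        # magnitude matches the current one; [] when nothing matches.
        for m, sols in reversed(self.entries):
            if abs(m - magnitude) <= self.tol:
                return sols
        return []

archive = EnvironmentArchive()
archive.store(0.25, [[1.0, 2.0]])
archive.store(0.50, [[3.0, 4.0]])
seed = archive.retrieve(0.25)    # -> [[1.0, 2.0]]
```

Retrieved solutions would seed part of the new population, with the remainder generated randomly to preserve diversity.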
In this section, the idea of finding the nearest feasible solution in the case of infeasibility is discussed. Firstly, an analytical example is provided and then a genetic algorithm-based approach is presented.
Analytical Example
Let us consider a problem (G24 from [
As previously discussed, to solve a problem using EAs, the search space considered is usually larger than the feasible space of the problem and is defined by the variable bounds, as shown in the mathematical model and
x_1 | x_2 | g_1(x) | g_2(x) |
---|---|---|---|
0 | 0 | −2 | −36 |
3 | 0 | −20 | 0 |
0 | 4 | 2 | −32 |
3 | 4 | −16 | 4 |
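The four rows above list the corner points (x_1, x_2) of G24's search space [0, 3] × [0, 4] together with the two constraint values there. Assuming the standard CEC2006 formulation of G24's constraints, these values can be verified directly:

```python
def g1(x1, x2):
    # First inequality constraint of G24 (feasible where g1 <= 0).
    return -2 * x1**4 + 8 * x1**3 - 8 * x1**2 + x2 - 2

def g2(x1, x2):
    # Second inequality constraint of G24 (feasible where g2 <= 0).
    return -4 * x1**4 + 32 * x1**3 - 88 * x1**2 + 96 * x1 + x2 - 36

corners = [(0, 0), (3, 0), (0, 4), (3, 4)]
values = [(g1(a, b), g2(a, b)) for a, b in corners]
# -> [(-2, -36), (-20, 0), (2, -32), (-16, 4)]
```

Only the corners where both constraint values are non-positive are feasible, which illustrates how the feasible space is a strict subset of the box-bounded search space.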
We assume that the search space is fixed but the feasible one may change due to a change in the constraint functions, the right-hand sides of the constraints, or both. Under any such variation, the optimal solution obtained in the previous environment may move to a new location in the new one and, in some cases, can even become infeasible. In some situations, there may be no feasible solution at all if there is a large change in one or more constraints. In COPs, it is well known that the optimal solution often lies on the boundary of the feasible search space [
Genetic-based Method
In the genetic-based method, to determine the limits of feasibility for each
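As a hedged sketch of the genetic-based idea, the tiny real-coded GA below estimates the minimum of a constraint's left-hand side over the variable bounds; any right-hand side above that minimum leaves at least one feasible point, so the minimum acts as a limit of feasibility. The operators, parameter values, and example constraint are illustrative assumptions, not the paper's exact method.

```python
import random

def feasibility_limit(g, bounds, pop_size=30, gens=200, seed=0):
    # Tiny real-coded GA estimating min g(x) over the box `bounds`; any
    # right-hand side above this value leaves at least one feasible point.
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=g)                     # rank by constraint value
        parents = pop[: pop_size // 2]      # elitist truncation selection
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]     # arithmetic crossover
            j = rng.randrange(len(child))                   # one-gene mutation
            lo, hi = bounds[j]
            child[j] = min(hi, max(lo, child[j] + rng.gauss(0, 0.1 * (hi - lo))))
            children.append(child)
        pop = parents + children
    return min(g(x) for x in pop)

# Example: the limit for g(x) = x1 + x2 over [0, 3] x [0, 4] is 0 (at the
# origin), so any right-hand side b >= 0 keeps the instance feasible.
limit = feasibility_limit(lambda x: x[0] + x[1], [(0, 3), (0, 4)])
```

Unlike the analytical approach, this search-based estimate needs no closed-form analysis of the constraint, which is why it can handle constraints whose extrema are hard to derive by hand.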
In this section, the performance of the proposed approach is analyzed by solving a set of benchmark functions introduced in the CEC2006 special competition sessions on COPs [
For each benchmark problem, 25 independent runs were executed, each over 30 time periods with a change frequency of 5000 fitness evaluations (
The average feasibility ratios and fitness values produced over all the experimental runs in each environment are used as performance measures. If an algorithm obtained no feasible solutions, the average sum of its constraint violations is considered the quality measure instead. A statistical comparison is conducted using a Wilcoxon signed-rank test [
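The two measures can be sketched as follows, assuming a run counts as feasible when its total constraint violation is zero; the function names are illustrative.

```python
def feasibility_ratio(violations):
    # Fraction of runs that ended with a feasible solution (zero violation).
    return sum(1 for v in violations if v == 0.0) / len(violations)

def quality(fitness, violations):
    # If any run is feasible, score by the mean fitness of the feasible runs;
    # otherwise fall back to the mean sum of constraint violations.
    feasible = [f for f, v in zip(fitness, violations) if v == 0.0]
    if feasible:
        return ("fitness", sum(feasible) / len(feasible))
    return ("violation", sum(violations) / len(violations))

ratio = feasibility_ratio([0.0, 0.0, 5.0])       # 2 of 3 runs feasible
q1 = quality([1.0, 2.0, 3.0], [0.0, 0.0, 5.0])   # ("fitness", 1.5)
q2 = quality([1.0, 2.0], [4.0, 2.0])             # ("violation", 3.0)
```

Reporting violations when no run is feasible keeps otherwise incomparable algorithms rankable in the hardest environments.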
In this section, the benefits of the proposed algorithm and its components are discussed and analyzed.
In this section, the results obtained from both the analytical and genetic methods are presented and discussed.
g_{1} | A. | −1.00E+01 | −1.00E+20 | −2.96E+00 | −1.80E+04 | −3.15E+02 | −9.00E+00 | 3.04E+04 | −9.50E−01 | 2.94E+00 | −6.56E+02 | 1.99E+02 | −1.80E+03 | −2.00E+01 |
G. | −1.00E+01 | −3.17E+17 | −2.96E+00 | −1.94E+00 | −2.15E+02 | −2.12E+00 | −1.82E+02 | −9.50E−01 | −6.25E−02 | −1.21E+02 | −1.00E+00 | −2.32E+03 | −2.00E+01 | |
g_{2} | A. | −1.00E+01 | −1.50E+02 | −9.70E+01 | −8.81E+00 | −3.70E+02 | 7.00E+00 | 5.98E+02 | −3.45E+00 | −4.33E+02 | −1.00E+00 | −2.76E+03 | −3.60E+01 | |
G. | −1.00E+01 | −1.30E+02 | −9.20E+01 | −1.81E+00 | −2.25E+02 | −7.52E−01 | −3.39E+02 | −1.33E+00 | −4.86E+01 | −1.00E+00 | −3.50E+03 | −3.60E+01 | ||
g_{3} | A. | −1.00E+01 | −1.38E+01 | −1.82E+02 | 1.94E+02 | −1.09E+01 | −2.33E+00 | 1.99E+02 | −2.56E+03 | |||||
G. | −1.00E+01 | −1.28E+01 | −3.20E+01 | −4.11E+02 | −7.18E−01 | −2.33E+00 | −1.00E+00 | −3.18E+03 | ||||||
g_{4} | A. | −8.00E+01 | −2.31E+01 | 3.98E+02 | 2.40E+02 | −9.08E+06 | −5.37E+02 | 1.99E+02 | −2.06E+03 | |||||
G. | −8.00E+01 | −1.88E+01 | −1.88E+02 | −1.44E+02 | −2.98E+06 | −1.31E+02 | −1.00E+00 | −2.81E+03 | ||||||
g_{5} | A. | −8.00E+01 | −8.24E+00 | 3.76E+02 | −9.90E+06 | −2.52E+02 | −1.00E+00 | −1.01E+03 | ||||||
G. | −8.00E+01 | −5.00E+00 | −4.63E+01 | −3.23E+06 | −4.24E+01 | −1.00E+00 | −1.57E+03 | |||||||
g_{6} | A. | −8.00E+01 | −8.45E+00 | −1.72E+02 | −8.68E+06 | −2.95E+02 | −1.00E+00 | |||||||
G. | −8.00E+01 | −5.00E+00 | −1.07E+02 | −7.18E+05 | −1.92E+02 | −1.00E+00 | ||||||||
g_{7} | A. | −3.00E+01 | 3.34E+02 | −1.04E+03 | −1.00E+00 | |||||||||
G. | −3.00E+01 | −2.45E+01 | −2.17E+00 | −1.00E+00 | ||||||||||
g_{8} | A. | −3.00E+01 | −1.12E+02 | −1.04E+03 | −1.00E+00 | |||||||||
G. | −3.00E+01 | −7.93E+01 | −1.04E+03 | −1.00E+00 | ||||||||||
g_{9} | A. | −3.00E+01 | −2.38E+01 | 1.99E+02 | ||||||||||
G. | −3.00E+01 | −2.60E+00 | −1.00E+00 | |||||||||||
g_{10} | A. | −2.38E+01 | −2.00E+02 | |||||||||||
G. | −2.37E+01 | −1.00E+00 | ||||||||||||
g_{11} | A. | −4.51E+02 | −2.00E+02 | |||||||||||
G. | −4.94E+01 | −1.00E+00 | ||||||||||||
g_{12} | A. | −4.51E+02 | −2.00E+02 | |||||||||||
G. | −4.50E+02 | −1.00E+00 | ||||||||||||
g_{13} | A. | −5.85E+02 | −2.00E+02 | |||||||||||
G. | −4.60E+02 | −1.00E+00 | ||||||||||||
G16 | Met. | g_{14} | g_{15} | g_{16} | g_{17} | g_{18} | g_{19} | g_{20} | g_{21} | g_{22} | g_{23} | g_{24} | g_{25} | g_{26} |
A. | −5.77E+02 | −2.65E+02 | −2.65E+02 | −5.43E+00 | −5.63E+00 | −7.76E−02 | −7.52E−02 | −2.15E+02 | −2.16E+02 | −3.63E+02 | −3.63E+02 | −5.17E+02 | −5.17E+02 | |
G. | −2.40E+02 | −2.65E+02 | −1.34E+02 | −1.74E+00 | −4.44E+00 | −4.51E−02 | −7.51E−02 | −4.68E+01 | −1.53E+02 | −3.63E+02 | −3.17E+02 | −5.17E+02 | −3.30E+02 | |
Met. | g_{27} | g_{28} | g_{29} | g_{30} | g_{31} | g_{32} | g_{33} | g_{34} | g_{35} | g_{36} | g_{37} | g_{38} | ||
A. | −5.19E+02 | −5.18E+02 | −2.17E+03 | −2.17E+03 | −2.17E+04 | −1.79E+04 | −2.02E−01 | −3.33E−01 | −1.37E+05 | −1.13E+04 | −9.33E+06 | −1.01E+07 | ||
G. | −5.53E+00 | −5.18E+02 | −2.17E+03 | −3.16E+02 | −8.57E+03 | −1.20E+04 | −2.27E−01 | −1.40E−01 | −6.89E+04 | −1.57E+04 | −9.28E+06 | −6.57E+05 |
To evaluate the performance of the two methods, these feasibility limits are applied to the dynamic variant of MODE [
where the change in constraints’ boundaries from 0 to
The experimental results obtained for each test problem from the analytical and genetic methods are illustrated in
g_{1} | A. | 5.00E+01 | 1.78E+04 | 9.98E+01 | 6.50E+00 | 3.38E+02 | ||||||||
G. | ||||||||||||||
g_{2} | A. | 7.50E−01 | 5.04E+00 | 7.00E+00 | 1.41E+02 | 2.01E+00 | 3.08E+02 | |||||||
G. | ||||||||||||||
g_{3} | A. | 9.96E−01 | 1.50E+02 | 1.72E+00 | ||||||||||
G. | ||||||||||||||
g_{4} | A. | 3.70E+00 | 1.52E+00 | 2.22E+02 | ||||||||||
G. | ||||||||||||||
g_{5} | A. | 3.24E+00 | 1.73E+00 | 2.08E+02 | ||||||||||
G. | ||||||||||||||
g_{6} | A. | 3.45E+00 | 6.51E+00 | 8.90E+00 | 1.03E+02 | |||||||||
G. | ||||||||||||||
g_{7} | A. | 1.03E+03 | ||||||||||||
G. | ||||||||||||||
g_{8} | A. | 3.27E+01 | 3.91E−02 |
G. | ||||||||||||||
g_{9} | A. | 2.12E+01 | ||||||||||||
G. | ||||||||||||||
g_{10} | A. | 8.39E−02 | 1.99E+02 | |||||||||||
G. | ||||||||||||||
g_{11} | A. | 2.67E+02 | 1.99E+02 | |||||||||||
G. | ||||||||||||||
g_{12} | A. | 1.59E+00 | 1.99E+02 | |||||||||||
G. | ||||||||||||||
g_{13} | A. | 1.08E+02 | 1.99E+02 | |||||||||||
G. | ||||||||||||||
Met. | g_{14} | g_{15} | g_{16} | g_{17} | g_{18} | g_{19} | g_{20} | g_{21} | g_{22} | g_{23} | g_{24} | g_{25} | g_{26} | |
G16 | A. | |||||||||||||
G. | ||||||||||||||
Met. | g_{27} | g_{28} | g_{29} | g_{30} | g_{31} | g_{32} | g_{33} | g_{34} | g_{35} | g_{36} | g_{37} | g_{38} | ||
A. | ||||||||||||||
G. |
In this subsection, the merits of the proposed framework are discussed based on comparisons of four different variants of the MODE algorithm. These include:
the proposed approach;
dynamic MODE using a sensitive constraint-detection mechanism [
default MODE initializing the population in each new environment with the population before a change occurs [
re-initialization MODE whereby the entire population is randomly re-initialized after every change [
Detailed results are reported in the supplementary material in Appendices A and B, with summaries of the comparisons presented in
Algorithms | Better | Equal | Worse | | Decision |
---|---|---|---|---|---|
Proposed | 287 | 42 | 61 | 0 | + |
Proposed | 287 | 44 | 59 | 0 | + |
Proposed | 355 | 21 | 14 | 0 | + |
Furthermore,
Algorithm | Mean rank |
---|---|
Proposed | |
Default | 2.66 |
Dynamic | 2.56 |
Re-initialization | 3.30 |
The proposed algorithm performs better because it tracks useful location information from previous environments, which helps it handle environmental changes. It can also deal with large changes effectively, which accelerates convergence towards feasibility and yields better solutions, whereas the high constraint violations observed in the other variants often leave them with no feasible solution. The experimental study thus shows the efficiency of the proposed method over the others in most of the tested environments, as it facilitates the movement of current solutions towards the feasible region.
Also, in order to better evaluate the proposed algorithm, the results obtained using the limits of feasibility for the other compared algorithms are shown in
Algorithms | Better | Equal | Worse | | Decision |
---|---|---|---|---|---|
Proposed | 259 | 48 | 83 | 0 | + |
Proposed | 263 | 49 | 78 | 0 | + |
Proposed | 351 | 21 | 18 | 0 | + |
In this paper, a method for handling DCOPs under resource disruptions is introduced. It includes a mechanism that locates useful information from previous environments when addressing DCOPs, with solutions from previous environments exploited in a new one based on their similarities. Also, when the constraints change so much that the solutions lie in the infeasible area, the right-hand side of the original problem is modified to a limit that provides a feasible solution. To this end, analytical and genetic methods for specifying the limits of feasibility, which can help address DCOPs under resource disruptions, are proposed.
The proposed mechanisms are combined with an optimization algorithm and used to solve 13 DCOPs, in each of which 30 dynamic changes occur with a frequency of 5000 FEs. The experimental results show that the proposed method outperforms other approaches and can handle big changes effectively, which accelerates convergence and achieves better solutions. The results also demonstrate that the genetic-based approach outperforms the analytical one for finding the limits of feasibility, as it is capable of providing more accurate results for such problems.
For possible future research, the proposed method could include a constraint-handling approach for bringing solutions back to the feasible region quickly when a disruption occurs. Also, as each mechanism has its advantages, a possible avenue of investigation is to choose the most suitable (i.e., either analytical or genetic) based on a problem’s characteristics. In addition, this implementation could be extended to consider dynamism in the objective function, variables’ boundaries and/or active variables.