Quantum-behaved particle swarm optimization (QPSO), motivated by analysis of particle swarm optimization and quantum systems, has shown performance comparable to other evolutionary algorithms in finding optimal solutions for many optimization problems. To address the problem of premature convergence, a local search strategy is proposed to improve the performance of QPSO. In the proposed local search strategy, a super particle is presented, constructed by collecting dimension information from randomly selected particles in the swarm. The probability of a particle being selected differs among particles and is determined by their fitness values. For minimization problems, the smaller the fitness value of a particle, the larger its selection probability and the more information it contributes to constructing the super particle. In addition, in order to investigate the influence of different local search spaces on algorithm performance, four methods of computing the local search radius are applied in the local search strategy, yielding four variants of local search quantum-behaved particle swarm optimization. Empirical studies on a suite of well-known benchmark functions are undertaken in order to make an overall performance comparison between the proposed methods and other quantum-behaved particle swarm optimization algorithms. The simulation results show that the proposed variants have clear advantages over the original quantum-behaved particle swarm optimization.
Particle swarm optimization (PSO) is a stochastic, problem-independent optimization technique first proposed by Kennedy and Eberhart in 1995.1 It simulates the social behavior of bird flocking or fish schooling. PSO is initialized with a population of particles with random positions and velocities in the search space of the problem, and each particle is associated with a potential solution. PSO relies on the exchange of position and velocity information between the individuals of the swarm. Particles fly over the problem landscape by following their own experience and that of the current best particles: each is attracted both to the best location it has found itself and to the best location found by all particles in its topological neighborhood. In a PSO with N particles applied to a D-dimensional problem, three vectors are associated with the ith particle: the current position Xi(t), the velocity Vi(t), and the personal best (pbest) position Pi(t). The global best (gbest) position of the swarm is denoted as G(t). The particles in PSO are then updated as follows

Vi,j(t+1) = w·Vi,j(t) + c1·r1,j(t)·[Pi,j(t) − Xi,j(t)] + c2·r2,j(t)·[Gj(t) − Xi,j(t)]  (1)

Xi,j(t+1) = Xi,j(t) + Vi,j(t+1)  (2)

where r1,j(t) and r2,j(t) are random numbers uniformly distributed on (0, 1).
For i = 1, 2, …, N and j = 1, 2, …, D. Here c1 and c2 are acceleration factors, known as the cognitive and social parameters, which determine the weight of personal and social information in the search process and govern how far a particle can fly in a single iteration. The parameter w in formula (1), called the inertia weight, is influential in PSO and controls the velocity of the particles.
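The velocity and position updates above can be sketched in a few lines of Python. This is a minimal illustration, not an implementation from the literature; the function name, the plain nested-list swarm layout, and the default parameter values are chosen here for illustration.

```python
import random

def pso_step(X, V, P, G, w=0.729, c1=2.05, c2=2.05):
    """One PSO iteration over an N x D swarm.

    X: current positions, V: velocities, P: personal best positions,
    G: global best position. w is the inertia weight; c1 and c2 are
    the cognitive and social acceleration factors.
    """
    N, D = len(X), len(X[0])
    for i in range(N):
        for j in range(D):
            r1, r2 = random.random(), random.random()
            V[i][j] = (w * V[i][j]
                       + c1 * r1 * (P[i][j] - X[i][j])   # pull toward own pbest
                       + c2 * r2 * (G[j] - X[i][j]))     # pull toward gbest
            X[i][j] += V[i][j]                           # position update
    return X, V
```

In a full optimizer this step would be followed by re-evaluating fitness and refreshing the pbest and gbest records.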
Over the last decade, PSO has become a well-known optimization technique, since it can be easily implemented for optimization problems in different fields and needs fewer parameters to be adjusted compared with other evolutionary algorithms. More and more researchers have turned their attention to PSO and extended the original version; a huge number of variants have been proposed and applied in different fields.2–9 An efficient optimization algorithm should have an excellent ability to balance local and global search. For PSO, local search means that a particle is capable of exploiting the neighborhood of the present solution for other promising candidates, while global search implies that solutions in other parts of the search space can be found. In general, exploration and exploitation should be pursued simultaneously, but this is very difficult. PSO has been proven theoretically not to be a globally convergent algorithm, or even a locally convergent one, under the convergence criteria given in Van den Bergh.10 Local search techniques have therefore been applied in PSO to improve solution quality by searching the neighborhood of a solution. In Sun et al.,11 a golden ratio was used to determine the size of the search area, and only two positions needed to be checked to find whether there were local positions with lower fitness values around a certain particle position.
A stochastic local search concept was introduced to balance exploration and exploitation at each iteration and to encourage the particles to explore local regions beyond those defined by the search algorithm.12 Some researchers proposed gradient-based descent methods to combine with PSO for finding a local minimum of the objective function; in particular, a repulsion strategy and a partial population reinitialization method were incorporated into PSO to increase its global jumping ability.13 Another local search scheme was implemented as a neighborhood search engine to improve solution quality, intended to explore the less crowded areas of the current archive so as to obtain more nondominated solutions.14 To address the problem of premature convergence, a potential particle position in the solution space was collectively constructed by a number of randomly selected particles in the swarm, each selected particle donating the value at a randomly selected dimension of its personal best.15
In 2004, Sun et al. applied a strategy based on a quantum delta potential well model to describe the behavior of particles, inspired by quantum mechanics and the trajectory analysis of PSO.16 The update formulas were modified: a mean best (mbest) position, denoting the average of all particles' personal best positions in the swarm, was introduced into the evolution equations, while the velocity formula was discarded. Hence, a new version of PSO, known as quantum-behaved particle swarm optimization (QPSO), was proposed.17 QPSO is a probabilistic algorithm18 and its iterative equations are very different from those of PSO. It also has fewer parameters to adjust and needs no velocity vector, which makes it easier to implement than PSO. The QPSO algorithm has shown good performance in solving a wide range of continuous optimization problems. Many improvements have been proposed that focus on global search ability, convergence speed, solution precision, robustness, etc. The strategies employed in QPSO can be classified into six categories: improvements by parameter selection, control of swarm diversity, cooperative methods, use of probability distribution functions, novel search methods, and hybrid methods. Most of them were reviewed in Fang et al.19 For example, different diversity-controlled models were introduced into QPSO to improve its ability to escape local minima in Sun et al.20,21 Sun et al. derived the convergence condition of the contraction–expansion coefficient with a stochastic simulation method and proposed two parameter control methods.22 Other researchers aimed to integrate the desirable properties of different approaches to enhance QPSO further.23–30 However, to the best of our knowledge, QPSO combined with a local search strategy has not been reported elsewhere. The goal of this work is therefore to employ a new local search strategy in QPSO, different from those mentioned above, to improve its performance.
This paper is organized as follows. In the next section, the principle of QPSO is introduced. The following section presents the QPSO with local search strategy. The next section provides the experimental analysis and performance comparisons with other algorithms. Some concluding remarks are given in the last section.
QPSO
Clerc and Kennedy analyzed the particle search trajectory in PSO31 and concluded that convergence of the whole swarm can be achieved if each particle converges to its local attractor pi(t), which was defined as

pi,j(t) = [c1·r1,j(t)·Pi,j(t) + c2·r2,j(t)·Gj(t)] / [c1·r1,j(t) + c2·r2,j(t)]  (3)

or

pi,j(t) = φj(t)·Pi,j(t) + [1 − φj(t)]·Gj(t),  φj(t) ~ U(0, 1)  (4)

where r1,j(t) and r2,j(t) are random numbers uniformly distributed on (0, 1).
Inspired by the convergence analysis of the traditional PSO and by quantum systems, a wave function is used to depict the state of a particle with momentum and energy. According to the statistical interpretation of the wave function, a normalized probability density function Q and distribution function F for each component of the particle's current position can be obtained by solving the Schrödinger equation for each dimension

Q(Xi,j(t+1)) = (1/Li,j(t))·exp(−2|pi,j(t) − Xi,j(t+1)| / Li,j(t))  (5)

F(Xi,j(t+1)) = exp(−2|pi,j(t) − Xi,j(t+1)| / Li,j(t))  (6)
where Li,j(t) is the standard deviation (SD) of the distribution and determines the search scope of each particle at each iteration. Employing the Monte Carlo method, the position of the particle can be given as

Xi,j(t+1) = pi,j(t) ± (Li,j(t)/2)·ln(1/ui,j(t+1))  (7)

where u is a random number uniformly distributed on (0, 1).
In QPSO, the mbest position C(t) is employed as the global attractor point of the swarm and is used to evaluate the value of Li,j(t). This position is defined as the average of the personal best (pbest) positions of all particles in the swarm

Cj(t) = (1/N)·Σ(i=1..N) Pi,j(t),  j = 1, 2, …, D  (8)

Here, N is the population size and Pi is the pbest position of particle i. The value of Li,j(t) is then given by

Li,j(t) = 2β·|Cj(t) − Xi,j(t)|  (9)

and the position of the particle is evaluated by

Xi,j(t+1) = pi,j(t) ± β·|Cj(t) − Xi,j(t)|·ln(1/ui,j(t+1))  (10)
where the parameter β is called the contraction–expansion coefficient, which can be tuned to control the convergence speed of the algorithm for different optimization problems. The variant of PSO defined by equations (3), (8), and (10) is called QPSO.
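Following the description above, one QPSO iteration can be sketched compactly in Python. This is an illustrative reading of equations (3), (8), and (10) with a fixed β; the function name and data layout are assumptions of this sketch.

```python
import math
import random

def qpso_step(X, P, G, beta):
    """One QPSO iteration: no velocities, positions sampled directly.

    X: current positions (N x D), P: personal best positions,
    G: global best position, beta: contraction-expansion coefficient.
    """
    N, D = len(X), len(X[0])
    # mbest: average of all personal best positions (equation (8))
    C = [sum(P[i][j] for i in range(N)) / N for j in range(D)]
    for i in range(N):
        for j in range(D):
            phi = random.random()
            p = phi * P[i][j] + (1.0 - phi) * G[j]   # local attractor (equation (3))
            u = 1.0 - random.random()                # uniform on (0, 1], avoids log(1/0)
            step = beta * abs(C[j] - X[i][j]) * math.log(1.0 / u)
            # the sign in equation (10) is taken with equal probability
            X[i][j] = p + step if random.random() < 0.5 else p - step
    return X
```

Note that no velocity vector appears anywhere, which is what makes QPSO simpler to implement than PSO.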
QPSO with proposed local search strategy
In the original QPSO, the basic optimization principle is that each particle in the swarm shares its discovered information with its neighbors, and the particle with the best discovery attracts the others to fly toward the optimal solution. This strategy looks very promising, but it carries a risk of susceptibility to premature convergence, especially on multimodal and high-dimensional problems. The reason is that the global best particle is always an important part of computing the local attractor in formula (3) and shares too much information with the other particles. This behavior is likely to drive the particles to migrate in the same direction and become overconcentrated. As a result, the global search efficiency and ability of QPSO decline.
The motivation for the proposed local search strategy is the tendency toward premature convergence, which degrades the performance of QPSO. In QPSO, the global best particle dominates the decision making when searching for optimal solutions: the decision is dictated by a single particle in the swarm. If more particles' information were involved in the decision making, the search could be led to a promising region of the search space where the optimal solution may be found.
The local search strategy is described as follows: during the evolution process, each particle has a chance to suggest a potential location in the search space where a better global best solution could be obtained. Hence, it is desirable to consider a super particle that gathers information from all particles. Accordingly, each particle in the swarm is selected with a certain probability to contribute to the construction of this potential solution (the super particle). The contribution of a selected particle is the information of one or more dimensions of its personal best position.
In fact, by analogy with the evolution of social culture in the real world, it is not appropriate to give each particle an equal chance to form the super particle: elitists always play a more important role in cultural development. Following this idea, in the proposed local search strategy, a particle with a better fitness value has a greater chance to deliver its information to the super particle. That is, the probability of a particle being selected to contribute its dimension information depends on its fitness value.
When the super particle has been constructed, local searches are undertaken in its neighborhood in order to find a better solution than the current global best. If a better solution is found, the current global best is replaced by it; otherwise no replacement is done.
The procedure of the proposed local search strategy
In the proposed local search strategy, the super particle at the tth iteration is denoted as S(t); its dimensions are selected from different pbest positions according to their fitness values. In general, the dimensions of particles with better fitness values have a greater chance of being selected to construct the super particle. For a minimization problem, the particles can be ranked according to their fitness values, and a weight coefficient that decreases linearly with a particle's rank is assigned so that better particles receive larger weights. Thus, the nearer a particle is to the best solution, the larger its selection probability. The potential new position produced by the local search is denoted as follows

Xnew(t) = S(t) + r·λ(t)  (11)
Here r is the local search radius around the super particle and λ(t) is a random vector drawn uniformly from [−1, 1]^D. During the search procedure, a new potential position Xnew(t) is calculated by formula (11) and its fitness value is compared with that of the global best position in the QPSO algorithm. If an improvement in fitness is obtained, the global best position is replaced with Xnew(t); otherwise no replacement is done. The search radius is then decreased by multiplying it by a factor χ using formula (12)

r ← χ·r  (12)

where χ is a variable that decreases linearly as the local search proceeds

χ = χmax − (χmax − χmin)·(tls/Tls)  (13)

where χmax is the upper limit of χ and χmin is its lower limit; tls and Tls are the current iteration count and the maximum iteration count of the local search, respectively.
The procedure of local search is described in Figure 1.
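As a concrete sketch of this procedure, the following Python fragment builds the super particle by fitness-weighted roulette-wheel selection and then searches around it with a shrinking radius. It assumes a minimization problem; the function names, the exact selection scheme, and the default values (50 local iterations, χ between 1 and 0.01) are illustrative readings of the text, not an authoritative implementation.

```python
import random

def local_search(P, fitness, gbest, f, r, chi_max=1.0, chi_min=0.01, max_ls=50):
    """Local search around a fitness-weighted super particle (minimization).

    P: personal best positions (N x D), fitness: their objective values,
    gbest: current global best position, f: objective function,
    r: initial local search radius.
    """
    N, D = len(P), len(P[0])
    # rank particles so that a smaller (better) fitness earns a larger weight;
    # the weight decreases linearly with the rank
    order = sorted(range(N), key=lambda i: fitness[i])
    weights = [0.0] * N
    for rank, idx in enumerate(order):
        weights[idx] = float(N - rank)
    total = sum(weights)
    # build the super particle: every dimension is copied from the pbest of a
    # particle drawn by roulette-wheel selection over the weights
    S = []
    for j in range(D):
        pick = random.uniform(0.0, total)
        acc = 0.0
        for i in range(N):
            acc += weights[i]
            if acc >= pick:
                S.append(P[i][j])
                break
        else:
            S.append(P[order[0]][j])    # numerical safety: fall back to the best pbest
    best, best_val = list(gbest), f(gbest)
    for t in range(max_ls):
        # candidate around the super particle
        cand = [S[j] + r * random.uniform(-1.0, 1.0) for j in range(D)]
        val = f(cand)
        if val < best_val:              # keep the candidate only if it improves gbest
            best, best_val = cand, val
        # shrink the radius by the linearly decreasing factor chi
        chi = chi_max - (chi_max - chi_min) * t / max_ls
        r *= chi
    return best, best_val
```

By construction, the returned solution is never worse than the gbest passed in, which matches the replace-only-on-improvement rule of the strategy.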
The proposed local search with four different search radii
In the proposed local search strategy, the local search space is very important to algorithm performance. The hope is that a better potential solution lies inside the local search space. If the search space is too small, a better potential solution may not exist within it; conversely, with a too-large local search space, a better potential solution is difficult to find. In QPSO, gbest, mbest, and pbest are the three important positions during the evolution process, and the distances between them vary at each iteration. In order to compare the influence of different local search radii on QPSO, four cases are proposed for determining the search space.
Constant value: when the super particle is given in the local search, a constant value is assigned as the local search boundary. This value does not vary during the whole evolution process for a given problem.
Distance between gbest and mbest: gbest denotes the global best particle of the swarm and mbest is the mean of the personal best positions. Both change during the evolution process. The local search space is set as the distance between gbest and mbest. At the early stage, the particles scatter over the whole search space and this distance is large; later, the particles gather in a small region and the distance shrinks. This dynamic setting of the local search space appears more reasonable for meeting the requirements of evolution.
Distance between a randomly selected pbest and mbest: here, the dynamic local search space is set as the distance between the pbest of a randomly selected particle and mbest.
Distance between a randomly selected pbest and gbest: in this case, the dynamic local search space is set as the distance between the pbest of a randomly selected particle and gbest.
In this work, the proposed local search strategy with these four cases yields, respectively, QPSO with local constant search space (QPSOLCSS), QPSO with local dynamic search space between gbest and mbest (QPSOLDSS_gm), QPSO with local dynamic search space between a randomly selected pbest and mbest (QPSOLDSS_rpm), and QPSO with local dynamic search space between a randomly selected pbest and gbest (QPSOLDSS_rpg).
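The four radius settings can be summarized in one small helper function. This is an illustrative sketch: Euclidean distance between positions is one natural reading of "distance" here, and the constant case uses a fixed fraction of the search range; both choices are assumptions of this sketch.

```python
import math
import random

def search_radius(case, gbest, mbest, pbests, span, const_frac=0.01):
    """Compute the local search radius for the four proposed variants.

    case: 'const' (QPSOLCSS), 'gm' (QPSOLDSS_gm), 'rpm' (QPSOLDSS_rpm)
    or 'rpg' (QPSOLDSS_rpg). span is the width of the search interval
    and is used only by the constant case.
    """
    dist = lambda a, b: math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    if case == 'const':
        return const_frac * span        # fixed fraction of the search range
    if case == 'gm':
        return dist(gbest, mbest)       # distance between gbest and mbest
    p = random.choice(pbests)           # pbest of a randomly selected particle
    if case == 'rpm':
        return dist(p, mbest)
    return dist(p, gbest)               # case 'rpg'
```

The dynamic cases shrink naturally as the swarm contracts, while the constant case keeps a fixed neighborhood size throughout the run.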
The procedure of QPSO algorithm with local search strategy is described in Figure 2.
Performance comparison
In order to test the performance of the proposed local search QPSO in its four variants, the original QPSO, the QPSO with linearly decreasing weight coefficient (QPSO-Li),32 and two popular forms of PSO, the PSO with inertia weight (PSO-In)33 and the PSO with constriction factor (PSO-Co),2 were compared by running a series of experiments on the first 12 functions from the CEC2005 benchmark suite.34 The function names, properties, and search ranges are given in Table 1. Each algorithm was run 50 times on each problem, using 40 particles to search for the global optimal solution.
Benchmark functions (F1–F12).

No.  Function name                                        Properties                                  Search range
1    Shifted Sphere Function                              Unimodal, Shifted, Separable, Scalable      [−100, 100]
2    Shifted Schwefel's Problem 1.2                       Unimodal, Shifted, Nonseparable, Scalable   [−100, 100]
3    Shifted Rotated High Conditioned Elliptic Function
For the six versions of QPSO, the contraction–expansion coefficient β was set to decrease linearly from 1.0 to 0.5, which leads to generally good performance of the QPSO algorithm.18 For QPSO-Li, the weight coefficient parameters were set to 1.5 and 0.5 and decreased linearly over the course of evolution, as in Xi et al.32 For the four proposed local search variants of QPSO, the maximum number of local search iterations Tls was set to 50, and the values of χmax and χmin were set to 1 and 0.01, respectively. In the case with a constant search radius, the search boundary was set to 1% of the whole search range of the given problem. The parameter configurations of the PSO variants were the same as those recommended in the existing publications. In PSO-In, Shi and Eberhart decreased the inertia weight linearly from 0.9 to 0.4 and fixed the acceleration coefficients (c1 and c2) at 2.0 over the course of the search in their empirical study, and their simulation results on the test functions showed better performance than with a fixed inertia weight.33 In PSO-Co, in order to remove the restriction on velocity, Clerc and Kennedy found that the values of the constriction factor χ and the acceleration coefficients (c1 and c2) could be set to satisfy certain constraints. They recommended a value of 4.1 for the sum of c1 and c2, which results in a constriction factor χ = 0.7298 and c1 = c2 = 2.05.2 In our simulations, we also employed these parameter values for PSO-In and PSO-Co, although they might not be optimal. In each run, the particles started from new, randomly generated positions uniformly distributed within the set search bounds. Each run of every algorithm lasted 5000 iterations; the mean best fitness values (objective function values) and SDs over 50 runs are recorded in Table 2.
Mean (SD) of the best fitness values over 50 trial runs of different algorithms.

Algorithm       F1                        F2                       F3                       F4                    F5                       F6
PSO-In          1.0871e−27 (2.6449e−29)   1.0887e+02 (1.4634)      1.5528e+07 (1.5075e+05)  1.2612e+04 (5.6583)   8.8903e+03 (94.9348)     1.2291e+03 (23.1830)
PSO-Co          6.9672e−29 (3.9153e−31)   5.4111e−10 (6.8814e−14)  2.8489e+06 (4.0085e+04)  7.4828e+02 (9.1716)   6.8209e+03 (76.4684)     57.62808 (4.8542)
Ba-QPSO         2.4169e−27 (1.0493e−29)   0.0937 (1.0703e−04)      2.9583e+06 (1.7843e+04)  3.7279e+02 (1.4860)   2.7291e+03 (17.7551)     53.9118 (0.8988)
QPSO-Li         1.8983 (0.0033)           23.3452 (0.0723)         4.5360e+06 (2.4022e+02)  2.5879e+02 (4.2677)   2.6318e+03 (7.7492)      4.8143e+02 (4.2406)
QPSOLCSS        2.5109e−27 (6.2422e−29)   0.0220 (0.0016)          1.8737e+06 (1.2394e+05)  93.6253 (15.9905)     2.3179e+03 (1.1828e+02)  48.9655 (7.4060)
QPSOLDSS_gm     7.9589e−28 (2.2763e−29)   0.0364 (0.0100)          2.4410e+06 (1.6823e+05)  1.2032e+02 (14.5215)  2.3059e+03 (80.2947)     44.4211 (6.34527)
QPSOLDSS_rpm    2.4290e−27 (6.7471e−29)   0.0856 (0.0101)          2.9981e+06 (1.9952e+05)  3.2411e+02 (31.9020)  2.6569e+03 (1.2665e+02)  43.1746 (5.5366)
QPSOLDSS_rpg    2.4496e−27 (5.5363e−29)   0.1006 (0.0169)          2.5961e+06 (2.0472e+05)  2.6048e+02 (29.8107)  2.5542e+03 (1.0514e+02)  52.6520 (7.1522)

Algorithm       F7                   F8                        F9                  F10                   F11               F12
PSO-In          0.0174 (1.8827e−04)  0.6022 (0.0122)           35.2814 (0.0515)    1.5350e+02 (0.2582)   38.6985 (0.0238)  8.6057e+05 (2.7566e+03)
PSO-Co          0.0199 (0.0023)      1.3804 (0.0261)           76.2335 (0.3732)    1.2878e+02 (1.6357)   28.8472 (0.0358)  3.3977e+03 (10.4325)
Ba-QPSO         0.0159 (1.2333e−04)  8.8817e−15 (3.6252e−17)   22.42378 (0.1327)   92.4352 (0.6134)      26.9774 (0.1477)  1.5842e+04 (1.8227e+02)
QPSO-Li         1.1358 (9.5724e−04)  7.9580e−15 (1.7401e−17)   17.6842 (0.0703)    93.3588 (1.4384)      24.1378 (0.2068)  2.3249e+04 (7.8240e+02)
QPSOLCSS        0.0181 (0.0020)      7.3185e−15 (2.1532e−16)   9.6313 (0.6197)     1.0998e+02 (6.8057)   27.8084 (1.1487)  1.1992e+04 (1.6625e+03)
QPSOLDSS_gm     0.0176 (0.0022)      6.2527e−15 (2.1895e−16)   9.2420 (0.5739)     86.1241 (6.9128)      31.2311 (0.9114)  1.3254e+04 (2.0981e+03)
QPSOLDSS_rpm    0.0188 (0.0023)      5.1869e−15 (2.5552e−16)   15.7433 (0.9495)    94.7732 (7.3644)      29.8578 (1.1470)  1.3775e+04 (1.7972e+03)
QPSOLDSS_rpg    0.0157 (0.0016)      5.0448e−15 (2.5303e−16)   15.2831 (1.0270)    1.0970e+02 (7.2299)   30.6278 (0.9929)  1.6197e+04 (2.1922e+03)
For the Shifted Sphere Function (F1), QPSOLDSS_gm yielded the best results among the QPSO variants. The results for Shifted Schwefel's Problem 1.2 (F2) suggest that QPSOLCSS obtained better values than all algorithms except PSO-Co. For both the Shifted Rotated High Conditioned Elliptic Function (F3) and Shifted Schwefel's Problem 1.2 with Noise in Fitness (F4), QPSOLCSS generated better results than the other methods. On F5, QPSOLDSS_gm again found the solution with the best quality among all the algorithms. For benchmark F6, the Shifted Rosenbrock Function, QPSOLDSS_rpm showed better performance than the other methods. The simulation results for the Shifted Rotated Griewank's Function without Bounds (F7) and the Shifted Rotated Ackley's Function with Global Optimum on the Bounds (F8) demonstrate that QPSOLDSS_rpg performed best in finding the solutions for these functions. The best results for Shifted Rastrigin's Function (F9) and Shifted Rotated Rastrigin's Function (F10), which appears to be a very difficult problem, were obtained by QPSOLDSS_gm. For benchmark F11, the Shifted Rotated Weierstrass Function, QPSO-Li seemed to outperform the other algorithms. When searching for the optima of Schwefel's Problem 2.13 (F12), QPSOLCSS gave the best solution.
In this work, an ANOVA with a 0.05 level of significance was used to investigate whether the differences in mean best fitness values between algorithms were significant. The procedure employed was a step-down procedure, which takes into account that all but one of the comparisons are less extreme than the range; when all pairwise comparisons have been done, this method is the best available if confidence intervals are not needed and sample sizes are equal.35 All tested algorithms were ranked to determine which could be considered the most effective for each optimization problem. Algorithms that were not statistically different from each other received the same rank; those that were not statistically different from more than one other group of algorithms were ranked with the best performing of those groups. The resulting rank on each of the 12 test functions and the total rank are shown in Table 3. In Table 3, the proposed four local search variants of QPSO occupied the top positions, even though none of them obtained the best result on F1 and F11. QPSOLDSS_gm won the overall championship: it did not find the best solution on every test benchmark function, but it showed preferable performance in each simulation. The second best performing algorithm, as indicated by the total ranks, was QPSOLCSS.
Ranking by algorithms and problems.

Algorithm       F1  F2  F3  F4  F5  F6  F7  F8  F9  F10  F11  F12  Total rank
PSO-In          3   8   8   8   8   8   3   7   7   8    8    8    84
PSO-Co          1   1   4   7   7   6   7   8   8   7    4    1    61
Ba-QPSO         4   5   5   6   6   4   1   6   6   2    2    5    52
QPSO-Li         8   7   7   3   4   7   8   5   5   3    1    7    65
QPSOLCSS        7   2   1   1   1   3   5   4   2   6    3    2    37
QPSOLDSS_gm     2   3   2   2   1   1   3   3   1   1    7    3    29
QPSOLDSS_rpm    5   4   6   5   5   1   6   2   4   4    5    4    51
QPSOLDSS_rpg    6   6   3   4   3   4   1   1   3   5    6    6    48
To investigate the convergence ability of the proposed QPSO with local search strategy, Figure 3 compares the convergence processes of QPSO with the four local search radii and the original QPSO, which were the top five variants of QPSO. The four proposed variants are combinations of QPSO and the local search strategy, so they have similar convergence curves, but they finally obtain better solutions. These curves show that the proposed variants achieve better performance by balancing convergence speed against global and local search ability.
Procedure of proposed local search strategy.
Procedure of QPSO with local search strategy.
Comparison of convergence process with five algorithms on CEC2005 benchmark functions.
Conclusion
In this paper, a local search strategy was discussed and four QPSO variants with the local search strategy were proposed. The local search strategy was introduced into the evolutionary process of QPSO to improve the performance of the original algorithm. Through empirical studies on a well-known benchmark suite, we analyzed the proposed methods in terms of convergence speed and concluded that the proposed variants of QPSO possess stable and comparable performance with respect to the original QPSO and some other forms of PSO, especially the QPSO with local dynamic search space between gbest and mbest (QPSOLDSS_gm).
Footnotes
Declaration of conflicting interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: The research work was supported by the Jiangsu Overseas Research & Training Program for University Prominent Young & Middle-aged Teachers and Presidents and the National Natural Science Foundation of China (Project Nos 61170119 and 60973094), and was also sponsored by the Qing Lan Project of Jiangsu Province and Wuxi Institute of Technology.
References
1.
Kennedy J and Eberhart RC. Particle swarm optimization. In: Proceedings of IEEE international conference on neural networks, Piscataway, NJ, 27 November–1 December, 1995, pp.1942–1948. New York: IEEE Press.
2.
Clerc M. The swarm and the queen: towards a deterministic and adaptive particle swarm optimization. In: Proceedings of congress on evolutionary computation, Washington, DC, 6–9 July 1999, pp.1951–1957. New York: IEEE Press.
3.
Kennedy J. Stereotyping: improving particle swarm performance with cluster analysis. In: Proceedings of congress on computational intelligence, Honolulu, HI, USA, 12–17 May 2002, pp.1671–1676. New York: IEEE Press.
4.
Goh CK, Tan KC and Liu DS. A competitive and cooperative co-evolutionary approach to multi-objective particle swarm optimization algorithm design. Eur J Oper Res 2010; 202: 42–54.
5.
Omranpour H, Ebadzadeh M and Shiry S. Dynamic particle swarm optimization for multimodal function. Int J Artif Intell 2012; 1: 1–10.
Richer TJ and Blackwell TM. The Levy particle swarm. In: Proceedings of Congress on evolutionary computation, Vancouver, BC, Canada, 16–21 July 2006, pp.808–815. New York: IEEE Press.
8.
Kennedy J. Bare bones particle swarms. In: Proceedings of IEEE swarm intelligence symposium, Indianapolis, IN, USA, 24–26 April 2003, pp.80–87. New York: IEEE Press.
9.
Renato AK and Leandro SC. PSO-E: particle swarm with exponential distribution. In: IEEE Congress on evolutionary computation, Vancouver, BC, Canada, 16–21 July 2006, pp.1428–1433. New York: IEEE Press.
10.
Van den Bergh F. An analysis of particle swarm optimizers. PhD dissertation, University of Pretoria, Pretoria, South Africa, 2002.
11.
Sun Y, Wyk BJ and Wang Z. A new golden ratio local search based particle swarm optimization. In: Proceedings of the international conference on systems and informatics (ICSAI ’12), Yantai, China, 19–20 May 2012, pp.754–757. New York: IEEE Press.
12.
Akbari R and Ziarati K. Combination of particle swarm optimization and stochastic local search for multimodal function optimization. In: Proceedings of the Pacific-Asia workshop on computational intelligence and industrial application (PACIIA ’08), vol. 2, 2008, pp.388–392.
13.
Wang Y-J. Improving particle swarm optimization performance with local search for high-dimensional function optimization. Optim Methods Softw 2010; 25: 781–795.
14.
Mousa AA, El-Shorbagy MA and Abd-El-Wahed WF. Local search based hybrid particle swarm optimization algorithm for multiobjective optimization. Swarm Evolut Comput 2012; 3: 1–14.
15.
Arasomwan MA and Adewumi AO. Improved particle swarm optimization with a collective local unimodal search for continuous optimization problems. Sci World J 2014; 2014: 1–23.
16.
Sun J, Feng B and Xu W. Particle swarm optimization with particles having quantum behavior. In: Proceedings of Congress on evolutionary computation, Portland, OR, USA, 19–23 June 2004, pp.326–331. New York: IEEE Press.
17.
Sun J, Feng B and Xu W. A global search strategy of quantum-behaved particle swarm optimization. In: Proceedings of IEEE conference on cybernetics and intelligent systems, Singapore, 1–3 December 2004, pp.111–116. New York: IEEE Press.
18.
Sun J, Fang W and Wu X. Quantum-behaved particle swarm optimization: analysis of individual particle behavior and parameter selection. Evolut Comput 2012; 20: 349–393.
19.
Fang W, Sun J and Ding Y. A review of quantum-behaved particle swarm optimization. IETE Tech Rev 2010; 27: 336–348.
20.
Sun J, Xu W and Fang W. Quantum-behaved particle swarm optimization algorithm with controlled diversity. In: Proceeding of international conference on computational science, Reading, UK, 28–31 May 2006, pp.847–854. Berlin: Springer.
21.
Sun J, Xu W and Fang W. A diversity-guided quantum-behaved particle swarm optimization algorithm. Simul Evolut Learn 2006; 4247: 497–504.
Fang W, Sun J and Xu W. Improved quantum-behaved particle swarm optimization algorithm based on differential evolution operator and its application. J Syst Simul 2008; 20: 6740–6744.
Fang W, Sun J and Xu W. Analysis of mutation operators on quantum-behaved particle swarm optimization algorithm. New Math Nat Comput 2009; 5: 487–496.
26.
Sun J, Fang W and Palade V. Quantum-behaved particle swarm optimization with Gaussian distributed local attractor point. Appl Math Comput 2011; 218: 3763–3775.
27.
Sun J, Wu X and Palade V. Convergence analysis and improvements of quantum-behaved particle swarm optimization. Inform Sci 2012; 193: 81–103.
28.
Li Y, Jiao L and Shang R. Dynamic-context cooperative quantum-behaved particle swarm optimization based on multilevel thresholding applied to medical image segmentation. Inform Sci 2015; 294: 408–422.
29.
Li Y, Wang Y and Chen J. Overlapping community detection through an improved multi-objective quantum-behaved particle swarm optimization. J Heuristics 2015; 21: 549–575.
31.
Clerc M and Kennedy J. The particle swarm: explosion, stability, and convergence in a multidimensional complex space. IEEE Trans Evolut Comput 2002; 6: 58–73.
32.
Xi M, Sun J and Xu W. An improved quantum-behaved particle swarm optimization algorithm with weighted mean best position. Appl Math Comput 2008; 205: 751–759.
33.
Shi Y and Eberhart RC. A modified particle swarm optimizer. In: Proceedings of IEEE international conference on evolutionary computation, 1998, pp.69–73.
34.
Suganthan P, Hansen N, Liang J, et al. Problem definitions and evaluation criteria for the CEC 2005 special session on real-parameter optimization. Technical report, Nanyang Technological University, Singapore, May 2005.
35.
Day RW and Quinn GP. Comparisons of treatments after an analysis of variance in ecology. Ecol Monogr 1989; 59: 433–463.