First hitting time analysis of continuous evolutionary algorithms based on average gain


Cluster Comput (2016) 19:1323–1332
DOI 10.1007/s

First hitting time analysis of continuous evolutionary algorithms based on average gain

Zhang Yushan · Huang Han · Hao Zhifeng · Hu Guiwu

Received: 6 April 2016 / Revised: June 2016 / Accepted: June 2016 / Published online: 7 July 2016
© Springer Science+Business Media New York 2016

Abstract Runtime analysis of continuous evolutionary algorithms (EAs) is a hard topic in the theoretical research of evolutionary computation, and relatively few results have been obtained compared to the discrete case. In this paper, we introduce martingale and stopping-time theory to establish a general average gain model for estimating the upper bound on the expected first hitting time. The proposed model is built on a non-negative stochastic process and does not require the Markov property, and is therefore quite general. We then demonstrate how the proposed model can be applied to the runtime analysis of continuous EAs. Finally, as a case study, we analyze the runtime of the (1,λ)-ES with adaptive step size on the Sphere function using the proposed approach, and derive a closed-form expression for the upper bound on the expected first hitting time in the 3-dimensional case. We also discuss the relationship between the step size and the offspring size λ needed to ensure convergence. The experimental results show that the proposed approach helps to derive tight upper bounds on the expected first hitting time of continuous EAs.
✉ Huang Han
bssthh@163.com

Zhang Yushan
scuthill@163.com

Hao Zhifeng
zfhao@fosu.edu.cn

Hu Guiwu
guiwuhugz@163.com

1 School of Mathematics and Statistics, Guangdong University of Finance and Economics, Guangzhou 510320, China
2 School of Software Engineering, South China University of Technology, Guangzhou 510006, China
3 School of Mathematics and Big Data, Foshan University, Foshan 528000, China

Keywords Evolutionary algorithm · Continuous optimization · First hitting time · Average gain model · Martingale · Stopping time

1 Introduction

Evolutionary algorithms (EAs) are a class of adaptive search algorithms inspired by the process of natural evolution. With the many successful applications of EAs in various optimization areas, the theoretical study of EAs has attracted increasing attention [1,2]. Theoretical study helps us understand the mechanism of EAs more precisely and guides the design of more efficient EAs in practice. Runtime analysis is a hot spot in the theory of EAs. Intuitively, the aim is to analyze the number of fitness evaluations until a goal of optimization (e.g., a good approximation) is reached. The runtime can be measured by the first hitting time of a set of states of an underlying discrete-time stochastic process [3]. Owing to the stochastic nature of EAs, their runtime analysis is not an easy task. Early studies concerned basic EAs, such as the (1+1) EA, on pseudo-Boolean functions with good structural properties [4–6]. These studies introduced useful mathematical methods and tools and obtained theoretical results on specific instances. Nowadays, the runtime analysis of the (1+1) EA has gradually been extended from simple pseudo-Boolean functions to combinatorial optimization problems with practical applications. Oliveto et al.
[7] analyzed the runtime of the (1+1) EA and random local search (RLS) on instances of an NP-hard combinatorial optimization problem, the vertex cover problem, and proved that if the vertex degree is at most two, then the (1+1) EA can solve the problem in polynomial time. Computing a unique input output (UIO) sequence is a fundamental and hard problem in conformance

testing of finite state machines (FSMs). Lehre and Yao [8] selected several instances and analyzed the performance of the (1+1) EA on them; the theoretical and numerical results provide a first characterization of the potential and limitations of the (1+1) EA on the problem of computing UIOs. Recently, Zhou et al. conducted a series of approximation performance analyses of the (1+1) EA on instances of the minimum label spanning tree problem [9], the multiprocessor scheduling problem [10], the maximum cut problem [11] and the maximum leaf spanning tree problem [12]. Along with the progress of theoretical results on the (1+1) EA, many mathematical methods and tools have been developed, e.g., Markov chains [13], absorbing Markov chains combined with convergence rates [14], switch analysis [15], the method of fitness-based partitions [16,17], etc. In particular, drift analysis, introduced by He and Yao [6], has turned out to be a powerful technique for the runtime analysis of EAs. Jägersküpper reconsidered the standard (1+1) EA on linear functions, combining Markov chain analysis and drift analysis, and significantly improved the upper bounds on the expected runtime [18]. By combining drift analysis with the concept of takeover time, Chen, He et al. discussed the time complexity of population-based EAs on unimodal problems [19]. Variants of drift theorems have also been proposed: Oliveto and Witt [20] introduced a simplified drift-analysis theorem for proving lower bounds on the runtime of the (1+1) EA on Needle, OneMax and the maximum matching problem. Variable drift, where the drift is monotonically increasing in the current state, was presented by Rowe and Sudholt [21] to establish a sharp threshold at λ = 5 log₁₀ n between exponential and polynomial runtime of the (1,λ) EA on OneMax. Witt [22] used multiplicative drift analysis to derive tight bounds on the expected runtime of the (1+1) EA on linear functions.
As summarized above, existing work on the runtime analysis of EAs mainly concentrates on discrete optimization problems; time-complexity analyses of EAs on continuous search spaces remain relatively rare, which is unsatisfactory from a theoretical point of view. By investigating the expected spatial gain towards the optimum in a single step and using Wald's equation, Jägersküpper [23,24] conducted a first rigorous but tedious runtime analysis of the (1+1)-ES and the (1+λ)-ES minimizing the Sphere function. Agapie et al. [25] modeled the (1+1)-ES as a renewal process under some strong assumptions and analyzed the first hitting time of the (1+1)-ES on an inclined plane. Theoretically speaking, drift analysis can be applied to both discrete and continuous optimization; however, only a few theoretical results applying drift analysis to the latter can be found [26]. Inspired by drift analysis and the idea of the Riemann integral, Huang et al. [27] proposed an average gain model to estimate the mean runtime of two (1+1) EAs, based on mutations following the standard normal distribution and the uniform distribution, on the Sphere function. The method of [27] still needs improvement: it relies heavily on the specific algorithm (the (1+1) EA) and the tackled problem (the Sphere function) and is thus in some sense ad hoc; besides, the original proof was based on the Riemann integral, so its mathematical rigor and generality also need to be strengthened. Luo et al. [28–30] proposed a fuzzy-linear-based method for processing online web data. The purpose of this paper is to generalize the model proposed in [27] and lay a solid theoretical basis for it. To separate the proposed model from any specific algorithm and problem, we build the model on a non-negative stochastic process {X_t; t = 0, 1, 2, ...}. We introduce martingale theory to remove the assumption of the Markov property, which makes the model more general, and regard the first hitting time as a stopping time.
Combining martingale theory and stopping-time theory, we establish a universal formula to estimate the upper bound on the expected first hitting time of continuous EAs. After that, an easy-to-use result is derived. As a case study, we analyze the runtime of the non-elitist (1,λ)-ES with adaptive step size on the Sphere function using the proposed approach, and discuss the relationship between the step size and the offspring size λ. We conduct numerical experiments and find that our theoretical analysis is consistent with the practical situation.

This paper is structured as follows. Section 2 introduces some preliminary knowledge on supermartingales and stopping times and then proposes the average gain model. Section 3 shows how the general average gain model proposed in the previous section can be applied to the runtime analysis of continuous EAs. In Sect. 4, we analyze the upper bound on the expected first hitting time of the (1,λ)-ES on the Sphere function and derive a closed-form expression of the time upper bound for the 3-dimensional case. Section 5 presents the numerical experiments and results. Finally, Sect. 6 concludes the paper.

2 Preliminaries

2.1 Supermartingale and stopping time

Throughout this paper, we analyze time-discrete non-negative stochastic processes represented by {X_t}_{t=0}^∞, where X_t ≥ 0, t = 0, 1, .... For example, as far as EAs are concerned, X_t could represent the number of zero-bits or one-bits in the bit string of a (1+1) EA on the OneMax problem at generation t, a certain distance value of a population-based EA to the optimal solution, etc. In particular, X_t might aggregate several different random variables realized by a search heuristic at time t into a single one. Whether the state space is discrete or continuous does not matter.

Let (Ω, F, P) be a probability space and {Y_t}_{t=0}^∞ be any stochastic process on (Ω, F, P). The natural filtration of F

is denoted by F_t = σ(Y_0, Y_1, ..., Y_t) ⊆ F, t = 0, 1, .... The σ-algebra F_t contains all the events generated by Y_0, Y_1, ..., Y_t, i.e., all the information observed up to time t. Obviously, F_0 ⊆ F_1 ⊆ ... ⊆ F_n ⊆ ....

In probability theory, a martingale is a model of a fair game in which knowledge of past events never helps predict the mean of the future winnings. In particular, a martingale is a sequence of random variables (i.e., a stochastic process) for which, at a particular time in the realized sequence, the expectation of the next value equals the presently observed value, even given knowledge of all prior observed values. There are two popular generalizations of a martingale covering the cases when the current observation is not necessarily equal to the future conditional expectation but instead an upper or lower bound on it; the former corresponds to the so-called supermartingale and the latter to the submartingale. We now give the formal definition of a supermartingale.

Definition 1 [31] Let {F_t, t ≥ 0} be an increasing sequence of sub-σ-algebras of F. A stochastic process {S_t}_{t=0}^∞ is called a supermartingale with respect to {F_t, t ≥ 0} or {Y_t}_{t=0}^∞ if {S_t}_{t=0}^∞ is adapted to {F_t, t ≥ 0} (i.e., for any t = 0, 1, 2, ..., S_t is measurable on F_t), E(|S_t|) < +∞, and E(S_{t+1} | F_t) ≤ S_t for all t = 0, 1, 2, ....

Definition 1 implies, roughly speaking, that a supermartingale decreases on average.

In the runtime analysis of EAs, we are often interested in the time at which, in a run, an EA finds an optimal solution for the first time. This is the so-called first hitting time [6], which can be described as a stopping time. In probability theory, in particular in the study of stochastic processes, a stopping time (also Markov time) is a specific type of "random time": a random variable whose value is interpreted as the time at which a given stochastic process exhibits a certain behavior of interest.
The formal definition of a stopping time is as follows.

Definition 2 [31] Let {Y_t}_{t=0}^∞ be a stochastic process and T be a non-negative integer-valued random variable. T is called a stopping time with respect to {Y_t}_{t=0}^∞ if {T ≤ n} ∈ F_n = σ(Y_0, Y_1, ..., Y_n) for all n = 0, 1, 2, ....

Let H_n = σ(X_0, X_1, ..., X_n), n ≥ 0, and T_0 = min{t ≥ 0 : X_t = 0}. Obviously, for any n ≥ 0, {T_0 ≤ n} = ∪_{k=0}^n {T_0 = k} = ∪_{k=0}^n {X_0 > 0, ..., X_{k−1} > 0, X_k = 0} ∈ H_n, so the first hitting time T_0 is a stopping time with respect to {X_t}_{t=0}^∞. We present two lemmas below, which will be applied in the proof of Theorem 1.

Lemma 1 [31] Suppose {S_t}_{t=0}^∞ is a supermartingale with respect to {Y_t}_{t=0}^∞ and T is a bounded stopping time with respect to {Y_t}_{t=0}^∞. Then

E(S_T | F_0) ≤ S_0.  (1)

In Eq. (1), the subscript of the supermartingale {S_t}_{t=0}^∞ is changed from a fixed time t to a random variable, the stopping time T, and S_0 remains an upper bound.

Lemma 2 Let s ∧ T_0 = min{s, T_0}, s = 0, 1, 2, .... Then s ∧ T_0 is a bounded stopping time with respect to {X_t}_{t=0}^∞ for any fixed s.

Proof s ∧ T_0 is obviously bounded when s is fixed. Next, we prove that s ∧ T_0 is a stopping time. For any m ≥ 0, {s ∧ T_0 ≤ m} = {s ≤ m} ∪ {T_0 ≤ m}. (1) When m ≥ s, {s ≤ m} ∪ {T_0 ≤ m} = Ω ∈ H_m. (2) When m < s, {s ≤ m} ∪ {T_0 ≤ m} = {T_0 ≤ m} ∈ H_m. By Definition 2, the conclusion follows. ∎

2.2 Average gain model

The expected one-step improvement δ_t = E(X_t − X_{t+1} | H_t), t ≥ 0, is called the average gain [27]. Note that δ_t is in general a random variable, since H_t is the σ-algebra generated by the random variables X_0, X_1, ..., X_t. Suppose we manage to bound δ_t from below by some α > 0 for all possible values of δ_t; then we know that the process {X_t}_{t=0}^∞ decreases its state ("progresses toward 0") in expectation by at least α in each step, and Theorem 1 below provides a bound on T_0 that depends only on X_0 and α. In fact, the natural-looking result E(T_0 | X_0) ≤ X_0/α will be obtained. However, lower bounds on the average gain may be more complicated.
For example, a lower bound on δ_t might depend on X_t or on states at earlier points in time, e.g., if the progress decreases as the current state decreases. This is often the case in applications to evolutionary algorithms. It is not so often the case that the whole history is needed: simple evolutionary algorithms and other randomized search heuristics are Markov processes, so that simply δ_t = E(X_t − X_{t+1} | X_t), t ≥ 0. Based on the average gain, the upper bound on the expectation of T_0 can be estimated as follows.

Theorem 1 Let {X_t}_{t=0}^∞ be a stochastic process with X_t ≥ 0 for any t ≥ 0. Assume that E(T_0) < +∞. If there exists α ∈ R such that for all t ≥ 0, E(X_t − X_{t+1} | H_t) ≥ α > 0, then E(T_0 | X_0) ≤ X_0/α.

Proof Define Z_t = X_t + tα, t = 0, 1, 2, .... Then

E(Z_{t+1} | H_t) = E(X_{t+1} + (t+1)α | H_t) = E(X_t − (X_t − X_{t+1}) + (t+1)α | H_t) = X_t − E(X_t − X_{t+1} | H_t) + (t+1)α ≤ X_t − α + (t+1)α = X_t + tα = Z_t.

By Definition 1, {Z_t}_{t=0}^∞ is a supermartingale with respect to {X_t}_{t=0}^∞. Following Lemmas 1 and 2, we have E(Z_{s∧T_0} | H_0) = E(Z_{s∧T_0} | X_0) ≤ Z_0 = X_0, and hence E(X_{s∧T_0} + (s∧T_0)α | X_0) = E(X_{s∧T_0} | X_0) + αE(s∧T_0 | X_0) ≤ X_0. As s∧T_0 ↑ T_0 and X_{s∧T_0} ≥ 0, using the Lebesgue dominated convergence theorem we have

X_0 ≥ lim_{s→∞} E(X_{s∧T_0} | X_0) + α lim_{s→∞} E(s∧T_0 | X_0) ≥ αE(T_0 | X_0).

This means E(T_0 | X_0) ≤ X_0/α. ∎

It is worth noting that the α in Theorem 1 is a constant independent of X_t: one has to use a uniform lower bound on δ_t over all t = 0, 1, 2, .... This might lead to very loose upper bounds on the first hitting time T_0. In addition, T_0 usually corresponds to EAs in discrete optimization; for continuous EAs, we are interested in the first hitting time for ε-approximate solutions, which can be denoted by T_ε = min{t ≥ 0 : X_t ≤ ε}. Based on Theorem 1, a general theorem on the upper bound of T_ε is derived.

Theorem 2 Suppose {X_t}_{t=0}^∞ is a stochastic process with X_t ≥ 0 for all t ≥ 0. Let h : [0, A] → R⁺ be a monotone increasing, integrable function. If E(X_t − X_{t+1} | H_t) ≥ h(X_t) whenever X_t > ε > 0, then it holds for T_ε that

E(T_ε | X_0) ≤ 1 + ∫_ε^{X_0} dx/h(x).  (2)

Proof Let g(x) = 0 for x ≤ ε, and g(x) = ∫_ε^x dt/h(t) + 1 for x > ε.

(1) When x > ε and y ≤ ε, g(x) − g(y) = ∫_ε^x dt/h(t) + 1 ≥ 1.
(2) When x > ε and y > ε, g(x) − g(y) = ∫_y^x dt/h(t) ≥ (x − y)/h(x), since h is monotone increasing.

Therefore, for X_t > ε:
(1) if X_{t+1} ≤ ε, then E(g(X_t) − g(X_{t+1}) | H_t) ≥ 1;
(2) if X_{t+1} > ε, then E(g(X_t) − g(X_{t+1}) | H_t) ≥ E((X_t − X_{t+1})/h(X_t) | H_t) = E(X_t − X_{t+1} | H_t)/h(X_t) ≥ 1.

From the above, we get E(g(X_t) − g(X_{t+1}) | H_t) ≥ 1 when X_t > ε > 0. Noting that T_ε = min{t ≥ 0 : X_t ≤ ε} = min{t ≥ 0 : g(X_t) = 0} = T_0^g, and assuming X_0 > ε, by Theorem 1 we have

E(T_ε | X_0) = E(T_0^g | g(X_0)) ≤ g(X_0) = 1 + ∫_ε^{X_0} dt/h(t). ∎
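Theorem 1 lends itself to a quick numerical illustration. The sketch below is our own and not from the paper; the chain and the constant α = 1 are assumptions chosen for the demo. A process that shrinks by 1 or 2 per step has average gain at least α = 1, so Theorem 1 predicts E(T_0 | X_0) ≤ X_0:

```python
import random

def hitting_time(x0, rng):
    # X_{t+1} = max(X_t - D, 0) with D uniform on {1, 2}: the average gain
    # E(X_t - X_{t+1} | X_t) is 1.5 for X_t >= 2 and 1 for X_t = 1,
    # so alpha = 1 is a valid uniform lower bound.
    x, t = x0, 0
    while x > 0:
        x = max(x - rng.choice((1, 2)), 0)
        t += 1
    return t

rng = random.Random(0)
x0, runs = 30, 10000
mean_t = sum(hitting_time(x0, rng) for _ in range(runs)) / runs
# Theorem 1 with alpha = 1 gives E(T_0 | X_0) <= X_0 = 30; the empirical
# mean is near x0 / 1.5, i.e. the bound holds but is loose for this chain.
assert mean_t <= x0
```

The looseness illustrates the remark after Theorem 1: a uniform α throws away the larger gain available at larger states, which is exactly what the state-dependent bound h(X_t) of Theorem 2 repairs.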
In Theorem 2, the lower bound h(X_t) on the average gain δ_t varies with X_t, so one does not need to find a uniform lower bound on δ_t over all t = 0, 1, 2, .... This helps to derive a tight upper bound on the first hitting time T_ε.

The fitness level technique (also called the method of f-based partitions, first formulated by Wegener [5]) is a simple but useful technique mainly used in the runtime analysis of the (1+1) EA. This technique first partitions the search space according to the fitness values, and then estimates a lower bound on the probability with which an individual jumps from the current region to a better region, so that the runtime to find the optimum can be estimated. For the sake of completeness, we present a novel proof based on Theorem 2.

Corollary 1 (The fitness level technique) Consider the (1+1) EA maximizing a function f and a partition of the search space into non-empty sets A_0, A_1, ..., A_m. Assume that the sets form an f-based partition, i.e., for 0 ≤ i < j ≤ m and all x ∈ A_i, y ∈ A_j it holds that f(x) = f_i < f(y) = f_j. Let p_i be a lower bound on the probability that a search point in

A_i jumps to A_{i+1} ∪ A_{i+2} ∪ ... ∪ A_m. Then the expected first hitting time of A_m is at most Σ_{i=0}^{m−1} 1/p_i.

Proof Let ξ_t, t = 0, 1, 2, ..., be the individual at generation t, and define X_t = d(ξ_t) = m − i if ξ_t ∈ A_i (i = 0, 1, ..., m). Let ε = 1; then T_ε = min{t ≥ 0 : X_t < 1} = min{t ≥ 0 : m − i < 1} = min{t ≥ 0 : ξ_t ∈ A_m}, which means that T_ε is the first hitting time of A_m. Consider X_t = k for 1 ≤ k ≤ m, i.e., ξ_t ∈ A_{m−k}. With probability at least p_{m−k}, the value of X_t decreases by at least 1. Consequently, E(X_t − X_{t+1} | X_t = k) ≥ p_{m−k}. We define h by h(x) = p_{m−k} for x ∈ (k−1, k], and obtain an integrable, monotone increasing function on [1, m]. Hence, from Theorem 2, we can obtain the upper bound on E(T_ε | X_0). Assume that ξ_0 ∈ A_0; then X_0 = m, and the deduction is as follows:

E(T_ε | X_0) ≤ 1 + ∫_1^{X_0} dx/h(x) = 1 + Σ_{i=1}^{m−1} ∫_i^{i+1} dx/p_{m−i−1} = 1 + Σ_{i=0}^{m−2} 1/p_i ≤ 1/p_{m−1} + Σ_{i=0}^{m−2} 1/p_i = Σ_{i=0}^{m−1} 1/p_i. ∎

From the above, we can see that the method of f-based partitions can be regarded as a specialization of the proposed average gain model. Another special case, with h(x) = qx, can be drawn from Theorem 2 as follows.

Corollary 2 Suppose {X_t}_{t=0}^∞ is a stochastic process with X_t ≥ 0 for all t ≥ 0. If there exists 0 < q ≤ 1 such that E(X_t − X_{t+1} | H_t) ≥ qX_t whenever X_t > ε > 0, then it holds for T_ε that

E(T_ε | X_0) ≤ (1/q) ln(X_0/ε) + 1.  (3)

Proof Let h(x) = qx; obviously h(x) is monotone increasing and integrable. By Theorem 2, we get

E(T_ε | X_0) ≤ ∫_ε^{X_0} dx/h(x) + 1 = ∫_ε^{X_0} dx/(qx) + 1 = (1/q) ln(X_0/ε) + 1. ∎

In the case study of this paper, we will use Corollary 2 to analyze the runtime of the (1,λ)-ES with adaptive step size on the Sphere function.

3 Runtime analysis for continuous EAs

In this section, we demonstrate how the general average gain model proposed in the previous section can be applied to the runtime analysis of continuous EAs. Without loss of generality, we assume that the EAs in this paper are used to tackle minimization problems in continuous search spaces.
Given a search space S ⊆ R^n and a function f : S → R, the task is to find a global optimum x* ∈ S such that f* = f(x*) = min_{x∈S} f(x). The solving procedure of a typical EA can be described briefly as follows.

Algorithm 1 (Evolutionary Algorithm)
(1) Generate an initial population P = {ξ_1, ξ_2, ..., ξ_μ} randomly.
(2) While (termination condition not met)
  (2.1) Produce an offspring population P' = {ξ'_1, ξ'_2, ..., ξ'_λ} from P by crossover and/or mutation.
  (2.2) Create a new parent population P by selecting μ individuals from P ∪ P' or from P'.
(3) Output the best-so-far solution found.

Obviously, the above description covers a wide range of EAs using crossover, mutation and selection. It does not set any restrictions on the type of crossover, mutation or selection schemes used, and it includes EAs which use crossover or mutation alone. The EA framework given above is closer to evolution strategies (abbreviated ES) than to GAs (genetic algorithms), in the sense that selection is applied after crossover and/or mutation. However, the main results given in this paper are independent of any such implementation details; in fact, they hold for virtually any stochastic search algorithm, differing only in the degree of difficulty or ease of the calculations.

Denote by P_t = {ξ_t^1, ξ_t^2, ..., ξ_t^μ} the tth generation of the population, t = 0, 1, 2, .... Let f(P_t) = min{f(x) : x ∈ P_t} be the fitness of population P_t, and define X_t = f(P_t) − f*, which is used to measure the distance of the population to

the optimal solution; X_t can then be regarded as the aforementioned "certain distance value of a population-based EA to the optimal solution". Obviously, the sequence {X_t}_{t=0}^∞ generated by the EA is a non-negative stochastic process. Define the stopping time of an EA as T_ε = min{t ≥ 0 : X_t ≤ ε}, which is the first hitting time for a continuous EA to find an ε-approximate solution; the calculation of the average gain δ_t plays a crucial role in estimating the upper bound on T_ε.

For most EAs, the state of population P_{t+1} depends only on the population P_t. In such a case, the stochastic process {P_t}_{t=0}^∞ can be modeled by a Markov chain [6,13]. Accordingly, {X_t}_{t=0}^∞ can also be regarded as a Markov chain. In this case, the average gain δ_t = E(X_t − X_{t+1} | H_t) simplifies to δ_t = E(X_t − X_{t+1} | X_t), which may make the calculation easier. That is to say, adopting the same notation as in Sect. 2, we only need to replace E(X_t − X_{t+1} | H_t) in Theorems 1 and 2 and Corollary 2 by E(X_t − X_{t+1} | X_t); all the conclusions remain applicable to continuous EAs. We present the corresponding results below.

Theorem 1′ Let {X_t}_{t=0}^∞ be a stochastic process associated with an EA, where X_t ≥ 0 for any t ≥ 0. Assume that E(T_0) < +∞. If for all t ≥ 0, E(X_t − X_{t+1} | X_t) ≥ α > 0, then E(T_0 | X_0) ≤ X_0/α.

Theorem 2′ Suppose {X_t}_{t=0}^∞ is a stochastic process associated with an EA, where X_t ≥ 0 for all t ≥ 0. Let h : [0, A] → R⁺ be a monotone increasing, integrable function. If E(X_t − X_{t+1} | X_t) ≥ h(X_t) whenever X_t > ε > 0, then it holds for T_ε that E(T_ε | X_0) ≤ 1 + ∫_ε^{X_0} dx/h(x).

Corollary 2′ Suppose {X_t}_{t=0}^∞ is a stochastic process associated with an EA, with X_t ≥ 0 for all t ≥ 0. If there exists 0 < q ≤ 1 such that E(X_t − X_{t+1} | X_t) ≥ qX_t whenever X_t > ε > 0, then it holds for T_ε that E(T_ε | X_0) ≤ (1/q) ln(X_0/ε) + 1.

In the next section, we will analyze the runtime of the (1,λ)-ES with adaptive step size on the Sphere function using the proposed approach.
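The Markov form of the corollary can be sanity-checked numerically. The sketch below is our own illustration (the shrinking law and the values of q, ε and X_0 are arbitrary choices for the demo): it simulates a Markovian process that loses a random fraction of its state per step, so that E(X_t − X_{t+1} | X_t) = qX_t exactly, and compares the empirical mean hitting time with the bound (1/q) ln(X_0/ε) + 1:

```python
import math
import random

def hitting_time(x0, eps, q, rng):
    # X_{t+1} = (1 - U) X_t with U ~ Uniform(0, 2q), hence
    # E(X_t - X_{t+1} | X_t) = q * X_t, matching the premise above.
    x, t = x0, 0
    while x > eps:
        x *= 1 - rng.uniform(0, 2 * q)
        t += 1
    return t

rng = random.Random(0)
x0, eps, q, runs = 100.0, 0.01, 0.25, 5000
mean_t = sum(hitting_time(x0, eps, q, rng) for _ in range(runs)) / runs
bound = math.log(x0 / eps) / q + 1        # (1/q) ln(X_0 / eps) + 1
assert mean_t <= bound
```

The empirical mean sits below the bound; the gap comes from Jensen's inequality, since the process typically shrinks faster than its expected gain alone guarantees.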
4 Case study

Most runtime analyses presented so far have focused on plus strategies like (μ+λ) EAs, particularly special cases such as the (1+λ) EA or the (1+1) EA. In these algorithms the new population is selected from both parents and offspring. On hard problems this bears the risk of getting stuck in local optima. Practitioners therefore often use comma strategies, where the new population is selected solely from the offspring. In this section, we consider the non-elitist (1,λ)-ES: in each iteration, λ offspring are generated independently from a parent by mutation, and the best among them is selected to be the parent of the next generation. It is worth noting that the new parent is not necessarily better than the old one. Analyzing non-elitist EAs like these is an interesting and challenging topic since, unlike for plus strategies, one cannot rely on the best fitness in the population increasing monotonically over time. The procedure of the (1,λ)-ES can be described as below.

Algorithm 2 ((1,λ)-ES)
(1) Initialize x_0, l_0; set t = 0.
(2) While (termination condition not met)
  (2.1) for i = 1 to λ: y_{t,i} = x_t + l_t u; endfor
  (2.2) x_{t+1} = argmin{f(x) : x ∈ {y_{t,1}, ..., y_{t,λ}}}
  (2.3) adjust l_{t+1}
  (2.4) t ← t + 1
(3) Output the solution.

Here, u is a random vector uniformly distributed on the n-dimensional unit hyper-sphere surface, and l_t is the step length, i.e., the radius of the mutation hyper-sphere. We consider the minimization of the Sphere function f(x) = Σ_{i=1}^n x_i². Obviously f* = f(0, 0, ..., 0) = 0. Assume that the algorithm has reached a point x_t ∈ S, t = 0, 1, .... Define X_t = f(x_t) − f*. The new points generated from x_t are distributed on the surface of a hyper-sphere with radius l_t and center x_t. Since both the isolines of the problem and the mutation hyper-sphere are invariant under rotation, it suffices to analyze the projection onto a plane, as illustrated in Fig. 1. Let f(x_t) = R², f(y_{t,i}) = r_i², i = 1, 2, ..., λ.
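Algorithm 2 can be sketched in code. The sketch below is our own illustrative implementation, not the authors' code; the initial point, target accuracy ε and offspring size are arbitrary, the step-size rule in step (2.3) uses the choice l_t = ((λ−1)/(λ+1))√X_t that is analyzed in Theorem 3 below, and uniform directions on the unit sphere are drawn by normalizing a Gaussian vector:

```python
import math
import random

def one_comma_lambda_es(f, x0, lam, eps, rng):
    # (1, lambda)-ES of Algorithm 2 for minimization in R^n.
    n, x, t = len(x0), list(x0), 0
    while f(x) > eps:
        l = (lam - 1) / (lam + 1) * math.sqrt(f(x))   # adaptive step length
        offspring = []
        for _ in range(lam):
            u = [rng.gauss(0, 1) for _ in range(n)]   # uniform direction on the
            norm = math.sqrt(sum(v * v for v in u))   # unit sphere (normalized
            offspring.append([xi + l * ui / norm      # Gaussian vector)
                              for xi, ui in zip(x, u)])
        x = min(offspring, key=f)   # comma selection: the parent is discarded
        t += 1
    return x, t

sphere = lambda x: sum(v * v for v in x)
x, t = one_comma_lambda_es(sphere, [3.0, 1.0, 1.0], lam=10, eps=0.01,
                           rng=random.Random(0))
assert sphere(x) <= 0.01
```

With this step size the run needs only a small number of generations on the 3-dimensional Sphere function, consistent with the logarithmic bound derived in Theorem 3.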
The large circle with radius R represents an isoline of the problem with f(x) = R², whose center is the optimum. The small circle with radius l represents the mutation hyper-sphere. Due to symmetry, we may restrict our analysis to the case ω ∈ [0, π]. By the law of cosines, r² = R² − 2lR cos ω + l², which leads to R² − r² = 2lR cos ω − l². To apply Corollary 2, we need to calculate E(X_t − X_{t+1} | X_t), so we define the relative improvement V as

7 Cluster Comput (06) 9: From V = a cos ω a, it holds that ω= arccos therefore ( V +a ( v + a )] [ ( v + a )] p V (v) = p ω [arccos arccos a a v, a cos π a v a cos 0 a. After the reduction, we get p V (v) = 4a S a (v). In other [ words, V obeys uniform distribution on the interval S a = a a, a a ]. a ), Fig. Cross section of the parameter space V = R r R = a cos ω a (4) where a = l/r. Here the angel ω is a random variable as r is randomly generated, regarding the probability distribution function of the angel ω, we have the lemma below. Lemma 3 [3]. The probability density function of the angel ω is p ω (x) = sinn x B (, n ) [0,π) (x) (5) where B(, ) denotes the Beta function and A (x) denotes the indicator function of set A. To avoid complicated and tedious calculation, without loss of generality we consider the case of n = 3 in the remaining part. As shown in Eq. (4), the relative improvement V is determined by the angel ω, so it is also a random variable, whose probability density function can be calculated as follows. Lemma 4 For n = 3, given X t = R, the probability density function of the relative improvement V is p V (v) = 4a S a (v) (6) where S a = [ a a, a a ]. Proof For n = 3, noting that B(, ) =, so p ω(x) = sinx [0,π) (x) according to Lemma 3. According to Algorithm, when the (,λ)es has reach the point x t,t = 0,,..., λ offspring are generated independently, denoted by y t,i, i =,,...,λ, then x t+ = arg min x { y t,,..., y t,λ } f (x) is selected as the parent of the next generation. Let f ( y t,i ) = ri, V i = R ri, then R V i,i =,,...,λis independent and distributed identically to V. As x t+ = arg min x { y t,,..., y t,λ } f (x), it holds equivalently that X t X t+ X t = max {V i : i =,...,λ}, which is denoted by V max. The probability density function of V max is calculated as follows. 
Lemma 5 Given X_t = R², the probability density function of V_max is

p_{V_max}(x) = λ p_V(x) (P{V ≤ x})^{λ−1} · 1_{S_a}(x).  (7)

Proof The distribution function is P{V_max ≤ x} = P{V_1 ≤ x, ..., V_λ ≤ x} = (P{V ≤ x})^λ, as V_i, i = 1, 2, ..., λ, are independent and identically distributed as V. So

p_{V_max}(x) = d(P{V ≤ x})^λ/dx = λ p_V(x) (P{V ≤ x})^{λ−1} · 1_{S_a}(x). ∎

Define T_ε = min{t ≥ 0 : X_t ≤ ε}. Now we estimate the upper bound of the first hitting time T_ε based on Corollary 2.

Theorem 3 When a = l_t/√X_t = (λ−1)/(λ+1), λ ≥ 2, it holds for T_ε that

E(T_ε | X_0) ≤ ((λ+1)/(λ−1))² ln(X_0/ε) + 1.  (8)
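Before turning to the proof, the distributional claims can be checked by simulation. The sketch below is our own illustration (the values a = 0.5 and λ = 10 are arbitrary samples): it draws uniform directions in R³ to sample ω, verifies that V = 2a cos ω − a² behaves uniformly on S_a as Lemma 4 states, and compares the sample mean of V_max with 2a(λ−1)/(λ+1) − a², the value for E(V_max) derived in the proof of Theorem 3:

```python
import math
import random

rng = random.Random(0)
a, lam = 0.5, 10

def sample_v():
    # omega is the angle between a uniform direction in R^3 and the axis
    # pointing toward the optimum; V = 2a*cos(omega) - a^2 as in Eq. (4).
    u = [rng.gauss(0, 1) for _ in range(3)]
    cos_w = u[0] / math.sqrt(sum(c * c for c in u))
    return 2 * a * cos_w - a * a

N = 100000
vs = [sample_v() for _ in range(N)]
lo, hi = -a * a - 2 * a, -a * a + 2 * a        # the interval S_a of Lemma 4
assert lo <= min(vs) <= max(vs) <= hi
# A uniform law on S_a has mean -a^2 and CDF 1/2 at the midpoint of S_a.
assert abs(sum(vs) / N + a * a) < 0.01
assert abs(sum(v <= (lo + hi) / 2 for v in vs) / N - 0.5) < 0.01

# Best of lam iid gains: E(V_max) should match 2a(lam-1)/(lam+1) - a^2.
vmax = [max(sample_v() for _ in range(lam)) for _ in range(20000)]
predicted = 2 * a * (lam - 1) / (lam + 1) - a * a
assert abs(sum(vmax) / len(vmax) - predicted) < 0.02
```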

Proof According to Lemmas 4 and 5, we have

E((X_t − X_{t+1})/X_t | X_t) = E(V_max | X_t) = ∫_{−a²−2a}^{−a²+2a} x p_{V_max}(x) dx = ∫_{−a²−2a}^{−a²+2a} x · (λ/(4a)) · ((x + a² + 2a)/(4a))^{λ−1} dx = 2a(λ−1)/(λ+1) − a².  (9)

Equation (9) implies that E(X_t − X_{t+1} | X_t) = (2a(λ−1)/(λ+1) − a²) X_t. Taking the derivative of (9) with respect to a, we see that when a = (λ−1)/(λ+1), λ ≥ 2, Eq. (9) reaches its maximum ((λ−1)/(λ+1))² ∈ (0, 1). This means that if we fix a = (λ−1)/(λ+1), then E(X_t − X_{t+1} | X_t) = ((λ−1)/(λ+1))² X_t, and according to Corollary 2 it holds that

E(T_ε | X_0) ≤ ((λ+1)/(λ−1))² ln(X_0/ε) + 1. ∎

Table 1 Comparison of the estimated expected first hitting time and the theoretical upper bound (columns: λ; Ê(T_ε | X_0); ((λ+1)/(λ−1))² ln(X_0/ε) + 1)

Fig. 2 Convergence of the (1,λ)-ES for λ = 10 and λ = 20 (fitness versus generations)

Theorem 3 shows that if we choose the step length l_t = ((λ−1)/(λ+1))√X_t, the non-elitist (1,λ)-ES converges to the ε-neighbourhood of the optimum of the Sphere function, and the expected first hitting time is bounded above by ((λ+1)/(λ−1))² ln(X_0/ε) + 1.

5 Experiment

Theorem 3 gives a closed-form expression for the upper bound on the expected first hitting time in the 3-dimensional case. We conduct an experiment to validate the theoretical result. The experiment settings are as follows: fix the error ε = 0.01; set x_0 = (3, 1, 1), which leads to X_0 = 11; for each λ, conduct 500 runs of the algorithm on the 3-dimensional Sphere function. We denote by T_i the first hitting time of the ε-neighbourhood of the optimum in the ith run, and define

Ê(T_ε | X_0) = (Σ_{i=1}^{500} T_i)/500,

which is taken as the estimate of the expected first hitting time. Table 1 compares the empirical expected first hitting time Ê(T_ε | X_0) with the theoretical upper bound ((λ+1)/(λ−1))² ln(X_0/ε) + 1. From Table 1 we can see that Ê(T_ε | X_0) decreases as λ increases, meaning that the expected first hitting time decreases as the offspring population size increases; the reason is that when λ becomes larger, the search space is sampled more densely in each generation, so the optimum can be located more easily. The table also shows that the theoretical upper bound is consistent with the practical situation. Figure 2 shows the convergence of the (1,λ)-ES for λ = 10 and λ = 20; in both cases the (1,λ)-ES converges to the optimum quickly.

6 Conclusion

The general average gain model proposed in this paper is established on a non-negative stochastic process {X_t; t = 0, 1, 2, ...} without the assumption of the Markov property, and is thus more general to some extent. Combining martingale theory and stopping-time theory, we establish a universal formula (Eq. (2)) to estimate the upper bound on the expected first hitting time of continuous EAs. The core of the proposed model is Theorem 2, where the lower bound h(X_t) on the average gain δ_t depends on X_t, so we do not need to find a uniform lower bound on δ_t over all t = 0, 1, 2, .... This helps to reduce calculation and to derive a tight upper bound on the first hitting time T_ε through integration. The method of f-based partitions can also be regarded as a specialization of the proposed average gain model. When we apply this model to the runtime analysis of continuous EAs, since most EAs can be modeled by Markov chains, the average gain δ_t = E(X_t − X_{t+1} | H_t) can be simplified to δ_t = E(X_t − X_{t+1} | X_t), which may make the calculation easier.
In the case study, we analyzed the runtime of the non-elitist (1,λ)-ES with adaptive step size on the 3-dimensional Sphere function, deriving a closed-form expression for the upper bound on the expected first hitting time. The analysis also shows that the non-elitist (1,λ)-ES converges provided that the step size l_t and the offspring size λ satisfy a certain condition. The experiment shows that our theoretical analysis is consistent with the practical situation. Our results can be extended to the n-dimensional case following a similar approach, at the cost of more sophisticated calculations. It is worth noting that the proposed model is suitable not only for continuous EAs but also for discrete EAs, as can be seen from the derivation. We will use the proposed model to analyze more cases in the future and attempt to apply it to particle swarm optimization (PSO). The algorithm procedure of PSO usually does not have the Markov property, so it requires more sophisticated calculations and techniques; this is a challenging task.

Acknowledgments This work is supported by the Humanity and Social Science Youth Foundation of the Ministry of Education of China (4YJCZH6), the National Natural Science Foundation of China (63700), the Guangdong Natural Science Funds for Distinguished Young Scholars (04A), the Guangdong Natural Science Foundation (05A), the Fundamental Research Funds for the Central Universities, SCUT (05PT0), and the Guangdong High-level Personnel of Special Support Program (04TQ0X664).

References

1. Oliveto, P.S., He, J., Yao, X.: Time complexity of evolutionary algorithms for combinatorial optimization: a decade of results. Int. J. Autom. Comput. 4(3), 281–293 (2007)
2. Yao, X.: Unpacking and understanding evolutionary algorithms. In: Proceedings of WCCI 2012. Springer, Berlin (2012)
3. Doerr, B., Johannsen, D., Winzen, C.: Multiplicative drift analysis. Algorithmica 64(4), 673–697 (2012)
4. Droste, S., Jansen, T., Wegener, I.: On the analysis of the (1+1) evolutionary algorithm. Theor. Comput. Sci.
276(1-2), 51-81 (2002)
5. Wegener, I.: Methods for the analysis of evolutionary algorithms on pseudo-Boolean functions. In: Sarker, R., Mohammadian, M., Yao, X. (eds.) Evolutionary Optimization. Kluwer Academic Publishers, Norwell, MA (2002)
6. He, J., Yao, X.: Drift analysis and average time complexity of evolutionary algorithms. Artif. Intell. 127(1), 57-85 (2001)
7. Oliveto, P.S., He, J., Yao, X.: Analysis of the (1+1)-EA for finding approximate solutions to vertex cover problems. IEEE Trans. Evolut. Comput. 13(5) (2009)
8. Lehre, P.K., Yao, X.: Runtime analysis of the (1+1) EA on computing unique input output sequences. Inf. Sci. 259 (2014)
9. Lai, X.S., Zhou, Y.R., He, J., et al.: Performance analysis of evolutionary algorithms for the minimum label spanning tree problem. IEEE Trans. Evolut. Comput. 18(6) (2014)
10. Zhou, Y.R., Zhang, J., Wang, Y.: Performance analysis of the (1+1) evolutionary algorithm for the multiprocessor scheduling problem. Algorithmica 73(1), 21-41 (2015)
11. Zhou, Y.R., Lai, X.S., Li, K.: Approximation and parameterized runtime analysis of evolutionary algorithms for the maximum cut problem. IEEE Trans. Cybern. 45(8) (2015)
12. Xia, X., Zhou, Y.R., Lai, X.S.: On the analysis of the (1+1) evolutionary algorithm for the maximum leaf spanning tree problem. Int. J. Comput. Math. 92(10) (2015)
13. He, J., Yao, X.: Towards an analytic framework for analysing the computation time of evolutionary algorithms. Artif. Intell. 145(1-2), 59-97 (2003)
14. Yu, Y., Zhou, Z.H.: A new approach to estimating the expected first hitting time of evolutionary algorithms. Artif. Intell. 172(15) (2008)
15. Yu, Y., Qian, C., Zhou, Z.H.: Switch analysis for running time analysis of evolutionary algorithms. IEEE Trans. Evolut. Comput. 19(6) (2015)
16. Sudholt, D.: A new method for lower bounds on the running time of evolutionary algorithms. IEEE Trans. Evolut. Comput. 17(3) (2013)

17. Witt, C.: Fitness levels with tail bounds for the analysis of randomized search heuristics. Inf. Process. Lett. 114(1-2), 38-41 (2014)
18. Jägersküpper, J.: Combining Markov-chain analysis and drift analysis: the (1+1) evolutionary algorithm on linear functions reloaded. Algorithmica 59(3) (2011)
19. Chen, T.S., He, J., Sun, G., et al.: A new approach for analyzing average time complexity of population-based evolutionary algorithms on unimodal problems. IEEE Trans. Syst. Man Cybern. B 39(5) (2009)
20. Oliveto, P.S., Witt, C.: Simplified drift analysis for proving lower bounds in evolutionary computation. Algorithmica 59(3), 369-386 (2011)
21. Rowe, J., Sudholt, D.: The choice of the offspring population size in the (1,λ) EA. In: Proceedings of GECCO 2012. ACM Press, New York (2012)
22. Witt, C.: Tight bounds on the optimization time of a randomized search heuristic on linear functions. Comb. Probab. Comput. 22(2), 294-318 (2013)
23. Jägersküpper, J.: Algorithmic analysis of a basic evolutionary algorithm for continuous optimization. Theor. Comput. Sci. 379(3) (2007)
24. Jägersküpper, J.: Lower bounds for randomized direct search with isotropic sampling. Oper. Res. Lett. 36(3) (2008)
25. Agapie, A., Agapie, M., Baganu, G.: Evolutionary algorithms for continuous-space optimisation. Int. J. Syst. Sci. 44(3), 502-512 (2013)
26. Chen, Y., Zou, X.F., He, J.: Drift conditions for estimating the first hitting times of evolutionary algorithms. Int. J. Comput. Math. 88(1) (2011)
27. Huang, H., Xu, W.D., Zhang, Y.S., et al.: Runtime analysis for continuous (1+1) evolutionary algorithm based on average gain model. Sci. Sin. Inf. 44(6), 811-824 (2014)
28. Xuan, J., Luo, X., Zhang, G., Lu, J., Xu, Z.: Uncertainty analysis for the keyword system of web events. IEEE Trans. Syst. Man Cybern. 46(6) (2016)
29. Wei, X., Luo, X., Li, Q., Zhang, J., Xu, Z.: Online comment-based hotel quality automatic assessment using improved fuzzy comprehensive evaluation and fuzzy cognitive map. IEEE Trans. Fuzzy Syst. 23(1), 72-84 (2015)
30.
Luo, X., Xu, Z., Yu, J., Chen, X.: Building association link network for semantic link on web resources. IEEE Trans. Autom. Sci. Eng. 8(3) (2011)
31. Zhang, B., Zhang, J.X.: Applied Stochastic Processes. Tsinghua University Press, Beijing (2004)
32. Schumer, M.A., Steiglitz, K.: Adaptive step size random search. IEEE Trans. Autom. Control 13, 270-276 (1968)

Zhang Yushan received the Ph.D. degree in Computer Science from the South China University of Technology, Guangzhou, in 2013. He is now an Associate Professor with the School of Mathematics and Statistics, Guangdong University of Finance and Economics. His main research interests include the theoretical foundation of evolutionary algorithms and the convergence and time complexity analysis of bio-inspired algorithms.

Huang Han received the Ph.D. degree in Computer Science from the South China University of Technology (SCUT), Guangzhou, in 2008. Currently, he is a Professor at the School of Software Engineering, SCUT. His research interests include evolutionary computation, swarm intelligence and their applications. Dr. Huang is a senior member of CCF.

Hao Zhifeng received the Ph.D. degree in Mathematics from Nanjing University in 1995. He is now a Professor and Ph.D. supervisor. His main research interests include the design and analysis of algorithms, the mathematical foundation of bio-inspired algorithms, combinatorial optimization and algebra.

Hu Guiwu received the Ph.D. degree in Computer Science from the South China University of Technology, Guangzhou, in 2004. He is now a Professor with the School of Mathematics and Statistics, Guangdong University of Finance and Economics. His research interests include evolutionary computation, swarm intelligence and their applications, data mining and machine learning.

Three Steps toward Tuning the Coordinate Systems in Nature-Inspired Optimization Algorithms Three Steps toward Tuning the Coordinate Systems in Nature-Inspired Optimization Algorithms Yong Wang and Zhi-Zhong Liu School of Information Science and Engineering Central South University ywang@csu.edu.cn

More information

Modern Discrete Probability Branching processes

Modern Discrete Probability Branching processes Modern Discrete Probability IV - Branching processes Review Sébastien Roch UW Madison Mathematics November 15, 2014 1 Basic definitions 2 3 4 Galton-Watson branching processes I Definition A Galton-Watson

More information

Simple Max-Min Ant Systems and the Optimization of Linear Pseudo-Boolean Functions

Simple Max-Min Ant Systems and the Optimization of Linear Pseudo-Boolean Functions Simple Max-Min Ant Systems and the Optimization of Linear Pseudo-Boolean Functions Timo Kötzing Max-Planck-Institut für Informatik 66123 Saarbrücken, Germany Dirk Sudholt CERCIA, University of Birmingham

More information

Research Article Stabilization Analysis and Synthesis of Discrete-Time Descriptor Markov Jump Systems with Partially Unknown Transition Probabilities

Research Article Stabilization Analysis and Synthesis of Discrete-Time Descriptor Markov Jump Systems with Partially Unknown Transition Probabilities Research Journal of Applied Sciences, Engineering and Technology 7(4): 728-734, 214 DOI:1.1926/rjaset.7.39 ISSN: 24-7459; e-issn: 24-7467 214 Maxwell Scientific Publication Corp. Submitted: February 25,

More information

MATH 6605: SUMMARY LECTURE NOTES

MATH 6605: SUMMARY LECTURE NOTES MATH 6605: SUMMARY LECTURE NOTES These notes summarize the lectures on weak convergence of stochastic processes. If you see any typos, please let me know. 1. Construction of Stochastic rocesses A stochastic

More information

The covariance of uncertain variables: definition and calculation formulae

The covariance of uncertain variables: definition and calculation formulae Fuzzy Optim Decis Making 218 17:211 232 https://doi.org/1.17/s17-17-927-3 The covariance of uncertain variables: definition and calculation formulae Mingxuan Zhao 1 Yuhan Liu 2 Dan A. Ralescu 2 Jian Zhou

More information

Decomposition and Metaoptimization of Mutation Operator in Differential Evolution

Decomposition and Metaoptimization of Mutation Operator in Differential Evolution Decomposition and Metaoptimization of Mutation Operator in Differential Evolution Karol Opara 1 and Jaros law Arabas 2 1 Systems Research Institute, Polish Academy of Sciences 2 Institute of Electronic

More information

Matching Index of Uncertain Graph: Concept and Algorithm

Matching Index of Uncertain Graph: Concept and Algorithm Matching Index of Uncertain Graph: Concept and Algorithm Bo Zhang, Jin Peng 2, School of Mathematics and Statistics, Huazhong Normal University Hubei 430079, China 2 Institute of Uncertain Systems, Huanggang

More information

SIMU L TED ATED ANNEA L NG ING

SIMU L TED ATED ANNEA L NG ING SIMULATED ANNEALING Fundamental Concept Motivation by an analogy to the statistical mechanics of annealing in solids. => to coerce a solid (i.e., in a poor, unordered state) into a low energy thermodynamic

More information

Genetic Algorithm: introduction

Genetic Algorithm: introduction 1 Genetic Algorithm: introduction 2 The Metaphor EVOLUTION Individual Fitness Environment PROBLEM SOLVING Candidate Solution Quality Problem 3 The Ingredients t reproduction t + 1 selection mutation recombination

More information

Tight Approximation Ratio of a General Greedy Splitting Algorithm for the Minimum k-way Cut Problem

Tight Approximation Ratio of a General Greedy Splitting Algorithm for the Minimum k-way Cut Problem Algorithmica (2011 59: 510 520 DOI 10.1007/s00453-009-9316-1 Tight Approximation Ratio of a General Greedy Splitting Algorithm for the Minimum k-way Cut Problem Mingyu Xiao Leizhen Cai Andrew Chi-Chih

More information

REIHE COMPUTATIONAL INTELLIGENCE S O N D E R F O R S C H U N G S B E R E I C H 5 3 1

REIHE COMPUTATIONAL INTELLIGENCE S O N D E R F O R S C H U N G S B E R E I C H 5 3 1 U N I V E R S I T Ä T D O R T M U N D REIHE COMPUTATIONAL INTELLIGENCE S O N D E R F O R S C H U N G S B E R E I C H 5 3 1 Design und Management komplexer technischer Prozesse und Systeme mit Methoden

More information

An Improved Quantum Evolutionary Algorithm with 2-Crossovers

An Improved Quantum Evolutionary Algorithm with 2-Crossovers An Improved Quantum Evolutionary Algorithm with 2-Crossovers Zhihui Xing 1, Haibin Duan 1,2, and Chunfang Xu 1 1 School of Automation Science and Electrical Engineering, Beihang University, Beijing, 100191,

More information

Investigation of Mutation Strategies in Differential Evolution for Solving Global Optimization Problems

Investigation of Mutation Strategies in Differential Evolution for Solving Global Optimization Problems Investigation of Mutation Strategies in Differential Evolution for Solving Global Optimization Problems Miguel Leon Ortiz and Ning Xiong Mälardalen University, Västerås, SWEDEN Abstract. Differential evolution

More information

Worst case analysis for a general class of on-line lot-sizing heuristics

Worst case analysis for a general class of on-line lot-sizing heuristics Worst case analysis for a general class of on-line lot-sizing heuristics Wilco van den Heuvel a, Albert P.M. Wagelmans a a Econometric Institute and Erasmus Research Institute of Management, Erasmus University

More information

Gene Pool Recombination in Genetic Algorithms

Gene Pool Recombination in Genetic Algorithms Gene Pool Recombination in Genetic Algorithms Heinz Mühlenbein GMD 53754 St. Augustin Germany muehlenbein@gmd.de Hans-Michael Voigt T.U. Berlin 13355 Berlin Germany voigt@fb10.tu-berlin.de Abstract: A

More information

Three Steps toward Tuning the Coordinate Systems in Nature-Inspired Optimization Algorithms

Three Steps toward Tuning the Coordinate Systems in Nature-Inspired Optimization Algorithms Three Steps toward Tuning the Coordinate Systems in Nature-Inspired Optimization Algorithms Yong Wang and Zhi-Zhong Liu School of Information Science and Engineering Central South University ywang@csu.edu.cn

More information

Random Times and Their Properties

Random Times and Their Properties Chapter 6 Random Times and Their Properties Section 6.1 recalls the definition of a filtration (a growing collection of σ-fields) and of stopping times (basically, measurable random times). Section 6.2

More information

A lower bound for scheduling of unit jobs with immediate decision on parallel machines

A lower bound for scheduling of unit jobs with immediate decision on parallel machines A lower bound for scheduling of unit jobs with immediate decision on parallel machines Tomáš Ebenlendr Jiří Sgall Abstract Consider scheduling of unit jobs with release times and deadlines on m identical

More information

An Uncertain Control Model with Application to. Production-Inventory System

An Uncertain Control Model with Application to. Production-Inventory System An Uncertain Control Model with Application to Production-Inventory System Kai Yao 1, Zhongfeng Qin 2 1 Department of Mathematical Sciences, Tsinghua University, Beijing 100084, China 2 School of Economics

More information

Research Article Identifying a Global Optimizer with Filled Function for Nonlinear Integer Programming

Research Article Identifying a Global Optimizer with Filled Function for Nonlinear Integer Programming Discrete Dynamics in Nature and Society Volume 20, Article ID 7697, pages doi:0.55/20/7697 Research Article Identifying a Global Optimizer with Filled Function for Nonlinear Integer Programming Wei-Xiang

More information

TECHNISCHE UNIVERSITÄT DORTMUND REIHE COMPUTATIONAL INTELLIGENCE COLLABORATIVE RESEARCH CENTER 531

TECHNISCHE UNIVERSITÄT DORTMUND REIHE COMPUTATIONAL INTELLIGENCE COLLABORATIVE RESEARCH CENTER 531 TECHNISCHE UNIVERSITÄT DORTMUND REIHE COMPUTATIONAL INTELLIGENCE COLLABORATIVE RESEARCH CENTER 531 Design and Management of Complex Technical Processes and Systems by means of Computational Intelligence

More information

Improved Computational Complexity Results for Weighted ORDER and MAJORITY

Improved Computational Complexity Results for Weighted ORDER and MAJORITY Improved Computational Complexity Results for Weighted ORDER and MAJORITY Anh Nguyen Evolutionary Computation Group School of Computer Science The University of Adelaide Adelaide, SA 5005, Australia Tommaso

More information