Chapter 1

CONSTRAINED GENETIC ALGORITHMS AND THEIR APPLICATIONS IN NONLINEAR CONSTRAINED OPTIMIZATION

Benjamin W. Wah and Yi-Xin Chen
Department of Electrical and Computer Engineering and the Coordinated Science Laboratory, University of Illinois, Urbana-Champaign, Urbana, IL 61801, USA

Abstract: This chapter presents a framework that unifies various search mechanisms for solving constrained nonlinear programming (NLP) problems. These problems are characterized by functions that are not necessarily differentiable and continuous. Our proposed framework is based on the first-order necessary and sufficient condition developed for constrained local minimization in discrete space, which shows the equivalence between discrete-neighborhood saddle points and constrained local minima. To look for discrete-neighborhood saddle points, we formulate a discrete constrained NLP in an augmented Lagrangian function and study various mechanisms for performing descents of the augmented function in the original-variable subspace and ascents in the Lagrange-multiplier subspace. Our results show that CSAGA, a combined constrained simulated annealing and genetic algorithm, performs well when using crossovers, mutations, and annealing to generate trial points. Finally, we apply iterative deepening to determine the optimal number of generations in CSAGA and show that its performance is robust with respect to changes in population size.

1. Introduction

Many engineering applications can be formulated as constrained nonlinear programming problems (NLPs). Examples include production planning, computer integrated manufacturing, chemical process control, and structure optimization. These applications can be solved by existing methods if they are specified in well-defined formulae that are differentiable and continuous. However, only special cases can be solved when they do not satisfy the required assumptions. For instance, sequential quadratic programming cannot handle problems whose objective and constraint functions are not differentiable or whose variables are discrete or mixed. Since many applications involving optimization may be formulated with non-differentiable functions and discrete or mixed-integer variables, it is important to develop new methods for handling these optimization problems.

The study of algorithms for solving a variety of constrained optimization problems is difficult unless the problems can be represented in a unified way. In this chapter we assume that continuous variables are first discretized into discrete variables in such a way that the values of functions using the discretized variables approach those of the original continuous variables.

(Research supported by National Aeronautics and Space Administration Contract NAS.)

Such an assumption is valid when continuous variables are represented as floating-point numbers and when the range of variables is small (say between −10^5 and 10^5). Intuitively, if the discretization is fine enough, then solutions found in the discretized space are fairly good approximations to the original solutions. The accuracy of solutions found in discretized problems has been studied elsewhere [16].

Based on discretization, continuous and mixed-integer constrained NLPs can be represented as discrete constrained NLPs as follows:

    minimize f(x)
    subject to g(x) ≤ 0, h(x) = 0,    (1.1)

where x = [x₁, …, x_n]ᵀ is a vector of bounded discrete variables. (For two vectors v and w with the same number of elements, v ≥ w means that each element of v is not less than the corresponding element of w; v ≤ w is defined similarly. The symbol 0, when compared to a vector, stands for the null vector.) Here, f(x) is a lower-bounded objective function, g(x) = [g₁(x), …, g_k(x)]ᵀ is a vector of k inequality constraints, and h(x) = [h₁(x), …, h_m(x)]ᵀ is a vector of m equality constraints. Functions f(x), g(x), and h(x) are not necessarily differentiable and can be either linear or nonlinear, continuous or discrete, and analytic or procedural. Without loss of generality, we consider only minimization problems.

Solutions to (1.1) cannot be characterized in ways similar to those of problems with differentiable functions and continuous variables. In the latter class of problems, solutions are defined with respect to neighborhoods of open spheres with radius approaching zero asymptotically. Such a concept does not exist in problems with discrete variables. Let X be the Cartesian product of the discrete domains of all variables in x. To characterize the solutions sought in discrete space, we define the following concepts on neighborhoods and constrained solutions in discrete space.

Definition 1. N_dn(x), the discrete neighborhood [1] of point x ∈ X, is a finite user-defined set of points {x' ∈ X} such that x' ∈ N_dn(x) ⟺ x ∈ N_dn(x'), and such that for any y₁, y_k ∈ X, it is possible to find a finite sequence of points y₁, …, y_k in X with y_{i+1} ∈ N_dn(y_i) for i = 1, …, k−1.

Definition 2. Point x ∈ X is called a constrained local minimum in discrete neighborhood (CLM_dn) if it satisfies two conditions: a) x is feasible, and b) f(x) ≤ f(x') for all feasible x' ∈ N_dn(x).

Definition 3. Point x ∈ X is called a constrained global minimum in discrete neighborhood (CGM_dn) iff a) x is feasible, and b) f(x) ≤ f(x') for every feasible point x' ∈ X. The set of all CGM_dn is X_opt.

According to our definitions, a CGM_dn must also be a CLM_dn. In a similar way, there are definitions of continuous-neighborhood constrained local minima (CLM_cn) and constrained global minima (CGM_cn).

We have shown earlier [14] that the necessary and sufficient condition for a point to be a CLM_dn is that it satisfies the discrete-neighborhood saddle-point condition (Section 2.1). We have also extended simulated annealing (SA) [13] and greedy search [14] to look for discrete-neighborhood saddle points SP_dn (Section 2.2). At the same time, new problem-dependent constraint-handling heuristics have been developed in the GA community to handle nonlinear constraints [11] (Section 2.3). Up to now, there is no clear understanding of how to unify these algorithms into one that can be applied to find CGM_dn for a wide range of problems.
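Before turning to these algorithms, Definition 1 is easy to make concrete. The sketch below shows one admissible choice of discrete neighborhood for a grid-discretized vector: all points that differ from x in a single coordinate by one grid step. The function name, the `step` parameter, and the bound representation are illustrative assumptions; the chapter itself leaves N_dn(x) user-defined, and any finite, symmetric, connected set satisfies the definition.

```python
# A minimal, hypothetical choice of N_dn(x): all points differing from x in
# exactly one coordinate by one grid step, clipped to the variable bounds.
from typing import List, Tuple

def discrete_neighborhood(x: Tuple[float, ...],
                          step: float,
                          bounds: List[Tuple[float, float]]) -> List[Tuple[float, ...]]:
    neighbors = []
    for i, (lo, hi) in enumerate(bounds):
        for delta in (-step, +step):
            xi = x[i] + delta
            if lo <= xi <= hi:                       # stay inside the discretized domain
                neighbors.append(x[:i] + (xi,) + x[i + 1:])
    return neighbors
```

This choice is symmetric (x' is a neighbor of x exactly when x is a neighbor of x') and connects any two grid points through a finite chain of single-step moves, as Definition 1 requires.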

Figure 1.1. An example showing the application of CSAGA with P = 3 to solve a discretized version of G1 [11] (N_opt·P ≈ 2000). a) P_R(N_g P) approaches one asymptotically; b) existence of an absolute minimum of N_g P / P_R(N_g P) at N_opt·P.

Based on our previous work, our goal in this chapter is to develop an effective framework that unifies SA, GA, and greedy search for finding CGM_dn. In particular, we propose the constrained genetic algorithm (CGA) and the combined constrained SA and GA (CSAGA) that look for SP_dn. We also study algorithms with the optimal average completion time for finding a CGM_dn.

The algorithms studied in this chapter are all stochastic searches that probe a search space in a random order, where a probe is a neighboring point examined by an algorithm, independent of whether it is accepted or not. Assuming p_j to be the probability that an algorithm finds a CGM_dn in its j-th probe, and under the simplistic assumption that all probes are independent, the performance of one run of such an algorithm can be characterized by N, the number of probes made (or CPU time taken), and P_R(N), the reachability probability that a CGM_dn is hit in any of the N probes:

    P_R(N) = 1 − ∏_{j=1}^{N} (1 − p_j),   where N ≥ 0.    (1.2)

Reachability can be maintained by reporting the best solution found by the algorithm when it stops. As an example, Figure 1.1a plots P_R(N_g P) when CSAGA (see Section 3.2) was run under various numbers of generations N_g and fixed population size P = 3 (where N = N_g P). The graph shows that P_R(N_g P) approaches one asymptotically as N_g P is increased.

Although it is hard to estimate the value of P_R(N) when a test problem is solved by an algorithm, we can always improve the chance of finding a solution by running the same algorithm multiple times, each with N probes, from random starting points. Given P_R(N) for one run of the algorithm and assuming that all runs are independent, the expected total number of probes to find a CGM_dn is:

    ∑_{j=1}^{∞} P_R(N) (1 − P_R(N))^{j−1} · N·j  =  N / P_R(N).    (1.3)

Figure 1.1b plots (1.3) based on P_R(N_g P) in Figure 1.1a. In general, there exists an N_opt that minimizes (1.3), because P_R(0) = 0, lim_{N→∞} P_R(N) = 1, N/P_R(N) is bounded below by zero, and N/P_R(N) → ∞ as N → ∞. The curve in Figure 1.1b illustrates this behavior.
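Equations (1.2) and (1.3) are easy to check numerically. The short sketch below uses a made-up reachability curve `p_reach` (not measured data from the chapter) to evaluate N / P_R(N) over a range of schedules and pick the minimizing schedule; it only illustrates why an interior minimum N_opt exists when P_R rises slowly at first and then saturates.

```python
# Illustrative only: expected total probes N / P_R(N) of Eq. (1.3) for multiple
# independent runs of N probes each, using a hypothetical reachability curve.
import math

def p_reach(N: float) -> float:
    # made-up curve with P_R(0) = 0 and P_R(N) -> 1 asymptotically
    return 1.0 - math.exp(-(N / 1000.0) ** 3)

def expected_total_probes(N: float) -> float:
    # geometric number of runs (success prob. P_R(N)), N probes per run
    return N / p_reach(N)

schedules = range(50, 5001, 50)
N_opt = min(schedules, key=expected_total_probes)
print(N_opt, round(expected_total_probes(N_opt)))   # interior minimum, not an endpoint
```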

Based on the existence of N_opt, we present in Section 3.3 search strategies in CGA and in CSAGA that minimize (1.3) in finding a CGM_dn. Finally, Section 4 compares the performance of our algorithms.

2. Previous Work

In this section, we first summarize the theory of Lagrange multipliers applied to solve (1.1) and two algorithms developed based on the theory. We then describe existing work in GA for solving constrained NLPs.

2.1. Theory of Lagrange multipliers for solving discrete constrained NLPs

Define a discrete equality-constrained NLP as follows [14, 16]:

    min_x f(x)   subject to h(x) = 0,    (1.4)

where x is a vector of bounded discrete variables. A generalized discrete augmented Lagrangian function of (1.4) is defined as follows [14]:

    L_d(x, λ) = f(x) + λᵀ H(h(x)) + ½ ||h(x)||²,    (1.5)

where H is a non-negative continuous transformation function satisfying H(y) ≥ 0 and H(y) = 0 iff y = 0, and λ = [λ₁, …, λ_m]ᵀ is a vector of Lagrange multipliers. Function H is easy to design; examples include H(h(x)) = [|h₁(x)|, …, |h_m(x)|]ᵀ and H(h(x)) = [max(h₁(x), 0), …, max(h_m(x), 0)]ᵀ. Note that these transformations are not used in Lagrange-multiplier methods in continuous space because they are not differentiable at H(h(x)) = 0. However, they do not pose problems here because we do not require their differentiability. Similar transformations can be used to transform an inequality constraint g_j(x) ≤ 0 into the equivalent equality constraint max(g_j(x), 0) = 0. Hence, we only consider problems with equality constraints from here on.

We define a discrete-neighborhood saddle point SP_dn (x*, λ*) by the following property:

    L_d(x*, λ) ≤ L_d(x*, λ*) ≤ L_d(x, λ*)    (1.6)

for all x ∈ N_dn(x*) and all λ ≥ 0, λ ∈ R^m. Note that although we use terminology similar to that of continuous space, SP_dn is different from SP_cn (a saddle point in continuous space) because they are defined using different neighborhoods. The concept of SP_dn is very important in discrete problems because, starting from it, we can derive a first-order necessary and sufficient condition for CLM_dn that leads to global minimization procedures. This is stated formally in the following theorem [14]:

Theorem 1 (First-order necessary and sufficient condition on CLM_dn) [14]. A point in the discrete search space of (1.4) is a CLM_dn iff it satisfies (1.6) for some finite λ*.

Theorem 1 is stronger than its continuous counterparts. The first-order necessary conditions in continuous Lagrange-multiplier theory [9] require CLM_cn to be regular points and functions to be differentiable. In contrast, there are no such requirements for CLM_dn. Further, the first-order conditions in continuous theory [9] are only necessary, and a second-order sufficient condition must be checked in order to ensure that a point is actually a CLM_cn (a CLM in continuous space). In contrast, the condition in Theorem 1 is necessary as well as sufficient.
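To make (1.5) concrete before turning to the algorithms that search it, the sketch below evaluates L_d for the absolute-value transformation H(h(x)) = [|h₁(x)|, …, |h_m(x)|]ᵀ mentioned above. The function names and the use of plain Python sequences are illustrative assumptions, not code from the chapter.

```python
# Minimal sketch of the augmented Lagrangian of Eq. (1.5) with H(y) = |y|:
#   L_d(x, lam) = f(x) + sum_i lam_i * |h_i(x)| + 0.5 * ||h(x)||^2
from typing import Callable, List, Sequence

def augmented_lagrangian(f: Callable[[Sequence[float]], float],
                         h: Callable[[Sequence[float]], List[float]],
                         x: Sequence[float],
                         lam: Sequence[float]) -> float:
    hx = h(x)
    penalty = sum(l * abs(v) for l, v in zip(lam, hx))    # lambda^T H(h(x))
    quadratic = 0.5 * sum(v * v for v in hx)              # (1/2) ||h(x)||^2
    return f(x) + penalty + quadratic

# Example: f(x) = x1^2 + x2^2 subject to h(x) = x1 + x2 - 2 = 0.
L = augmented_lagrangian(lambda x: x[0] ** 2 + x[1] ** 2,
                         lambda x: [x[0] + x[1] - 2.0],
                         x=[1.0, 1.0], lam=[0.5])
```

At the feasible point x = (1, 1) the constraint term vanishes and L_d reduces to f(x), which is the behavior the saddle-point condition (1.6) exploits.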

1. procedure CSA(α, N_α)
2.   set initial x ← [x₁, …, x_n, λ₁, …, λ_k]ᵀ with random x, λ ← 0;
3.   while stopping condition is not satisfied do
4.     generate x' ∈ N_dn(x) using G(x, x');
5.     accept x' with probability A_T(x, x');
6.     reduce temperature by T ← αT;
7.   end while
8. end procedure
a) CSA called with schedule N_α and cooling rate α.

1. procedure CSA_ID
2.   set initial cooling rate α ← α₀ and N_α ← N_0;
3.   set K ← number of CSA runs at fixed α;
4.   repeat
5.     for i ← 1 to K do call CSA(α, N_α); end for;
6.     increase cooling schedule N_α ← ρ N_α;
7.   until a feasible solution has been found and no better solution in two successive increases of N_α;
8. end procedure
b) CSA_ID: CSA with iterative deepening

Figure 1.2. Constrained simulated annealing algorithm (CSA) and its iterative-deepening extension.

2.2. Existing algorithms for finding SP_dn

Since there is a one-to-one correspondence between CGM_dn and SP_dn, a strategy looking for the SP_dn with the minimum objective value will result in a CGM_dn. We review two methods for finding SP_dn.

The first algorithm is the discrete Lagrangian method (DLM) [15]. It is an iterative local search that looks for SP_dn by updating the variables in x in order to perform descents of L_d in the x subspace, while occasionally updating the λ variables of unsatisfied constraints in order to perform ascents in the λ subspace and to force the violated constraints into satisfaction. When no new probes can be generated in either the x or the λ subspace, the algorithm has additional mechanisms to escape from such local traps. It can be shown that the point where DLM stops is a CLM_dn when the number of neighborhood points is small enough to be enumerated in each descent in the x subspace [14, 16]. However, when the number of neighborhood points is very large and hill-climbing is used to find the first point with an improved L_d in each descent, the point where DLM stops may be a feasible point but not necessarily an SP_dn.

The second algorithm is the constrained simulated annealing (CSA) [13] algorithm shown in Figure 1.2a. It looks for SP_dn by probabilistic descents in the x subspace and by probabilistic ascents in the λ subspace, with an acceptance probability governed by the Metropolis probability. Similar to DLM, if the neighborhood of every point is very large and cannot be enumerated, then the point where CSA stops may only be a feasible point and not necessarily an SP_dn.

Using G(x, x') for generating trial point x' in N_dn(x), A_T(x, x') as the Metropolis acceptance probability, and a logarithmic cooling schedule, CSA has been proven to converge asymptotically with probability one to a CGM_dn [13]. This property is stated in the following theorem:

Theorem 2 (Asymptotic convergence of CSA) [13]. The Markov chain modeling CSA converges to a CGM_dn with probability one.

Theorem 2 extends a similar theorem for SA that proves its asymptotic convergence to unconstrained global minima of unconstrained optimization problems. By looking for SP_dn in the Lagrangian-function space, Theorem 2 shows the asymptotic convergence of CSA to CGM_dn in constrained optimization problems.

Theorem 2 implies that CSA is not a practical algorithm when used to find a CGM_dn in one run with certainty, because CSA will take infinite time. In practice, when CSA is run once using a finite cooling schedule N_α, it finds a CGM_dn with reachability probability P_R(N_α) < 1. To increase its success probability, CSA with a finite N_α can be run multiple times from random starting points. Assuming that all the runs are independent, a CGM_dn can be found in finite average time defined by (1.3).
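The acceptance step of CSA (Line 5 of Figure 1.2a) can be pictured with a Metropolis-style rule on L_d: moves that decrease L_d in the x subspace, or increase it in the λ subspace, are always accepted, and other moves are accepted with probability exp(−|ΔL_d|/T). The sketch below is a hedged paraphrase of that idea, not the chapter's exact A_T; the function name and argument layout are assumptions.

```python
# Sketch of a Metropolis-style acceptance rule for CSA probes. A probe either
# changes x (L_d should descend) or changes lambda (L_d should ascend);
# unfavorable moves are accepted with probability exp(-|delta| / T).
import math
import random

def accept_probe(delta_Ld: float, probe_in_x: bool, T: float) -> bool:
    favorable = (delta_Ld <= 0.0) if probe_in_x else (delta_Ld >= 0.0)
    if favorable:
        return True
    return random.random() < math.exp(-abs(delta_Ld) / T)
```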

Figure 1.3. An application of iterative deepening in CSA_ID: runs at schedules t, 2t, 4t, and 8t fail, and the run at schedule 16t succeeds, giving a total time for iterative deepening of 31t against an optimal time of 12t.

We have verified experimentally that the expected time defined in (1.3) has an absolute minimum at N_opt. (Figure 1.1 illustrates the existence of N_opt for CSAGA.) It follows that, in order to minimize (1.3), CSA should be run multiple times from random starting points using schedule N_opt. To find N_opt at run time without using problem-dependent information, we have proposed to use iterative deepening [7] by starting CSA with a short schedule and doubling the schedule each time the current run fails to find a CGM_dn [12]. Since the total overhead in iterative deepening is dominated by that of the last run, CSA_ID (CSA with iterative deepening in Figure 1.2b) has a completion time of the same order of magnitude as that using N_opt when the last schedule with which CSA is run is close to N_opt and this run succeeds. Figure 1.3 illustrates that the total time incurred by CSA_ID is of the same order as the expected overhead at N_opt.

Note that P_R(N_opt) < 1 for one run of CSA at N_opt. When CSA is run with a schedule close to N_opt and fails to find a solution, its cooling schedule will be doubled and will overshoot beyond N_opt. To reduce the chance of overshooting into exceedingly long cooling schedules and to increase the success probability before the schedule reaches N_opt, we have proposed to run CSA multiple times from random starting points at each schedule in CSA_ID. In Figure 1.2b, CSA is run K = 3 times at each schedule before the schedule is doubled. Our results show that such a strategy generally requires about twice the average completion time of multiple runs of CSA using N_opt before it finds a CGM_dn [12].
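The iterative-deepening control of Figure 1.2b (reused later by CGA_ID and CSAGA_ID) is easy to prototype: run the underlying search K times at the current schedule, and multiply the schedule by ρ whenever all K runs fail. The sketch below assumes a caller-supplied `run_search(schedule)` that returns the best feasible objective found in one run, or None; it mirrors only the doubling logic, not the full stopping rule of CSA_ID.

```python
# Iterative-deepening driver: K runs per schedule, schedule scaled by rho on failure.
from typing import Callable, Optional

def iterative_deepening(run_search: Callable[[int], Optional[float]],
                        n0: int = 100, K: int = 3, rho: int = 2,
                        max_schedule: int = 10 ** 7) -> Optional[float]:
    schedule, best = n0, None
    while schedule <= max_schedule:
        for _ in range(K):                        # K independent runs per schedule
            result = run_search(schedule)
            if result is not None and (best is None or result < best):
                best = result
        if best is not None:
            return best                           # simplified stop: first feasible level
        schedule *= rho                           # total overhead dominated by last level
    return best
```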

2.3. Genetic algorithms for solving constrained NLP problems

The genetic algorithm (GA) is a general stochastic optimization algorithm that maintains a population of alternative candidates and probes a search space using genetic operators, such as crossovers and mutations, in order to find better candidates. The original GA was developed for solving unconstrained problems, using a single fitness function to rank candidates. Recently, many variants of GA have been developed for solving constrained NLPs. Most of these methods are based on penalty formulations that use GA to minimize an unconstrained penalty function F(x) consisting of a sum of the objective and the constraints weighted by penalties. Similar to CSA, these methods do not require the differentiability or continuity of functions.

One penalty formulation is the static-penalty formulation in which all penalties are fixed [2]:

    F(x, γ) = f(x) + ∑_{i=1}^{m} γ_i |h_i(x)|^ρ,    (1.7)

where ρ > 0, and the penalty vector γ = {γ₁, γ₂, …, γ_m} is fixed and chosen to be large enough so that

    F(x*, γ) < F(x, γ)   ∀ x ∈ X − X_opt and x* ∈ X_opt.    (1.8)

Based on (1.8), an unconstrained global minimum of (1.7) over x is a CGM_dn of (1.4); hence, it suffices to minimize (1.7) in solving (1.4). Since both f(x) and |h_i(x)| are lower bounded and x takes finite discrete values, γ always exists and is finite, thereby ensuring the correctness of the approach. Note that other forms of penalty formulations have also been studied in the literature. The major issue of static-penalty methods lies in the difficulty of selecting a suitable γ. If γ is much larger than necessary, then the terrain will be too rugged to be searched effectively by local-search methods. If it is too small, then feasible solutions to (1.7) may be difficult to find.

Dynamic-penalty methods [6], on the other hand, address the difficulties of static-penalty methods by increasing penalties gradually in the following fitness function:

    F(x) = f(x) + (C·t)^α ∑_{i=1}^{m} |h_i(x)|^β,    (1.9)

where t is the generation number, and C, α, and β are constants. In contrast to static-penalty methods, (C·t)^α, the penalty on infeasible points, is increased during evolution. Dynamic-penalty methods do not always guarantee convergence to CLM_dn or CGM_dn. For example, consider a problem with two constraints h₁(x) = 0 and h₂(x) = 0. Assume that a search is stuck at an infeasible point x⁰ and that, for all x ∈ N_dn(x⁰), 0 < |h₁(x⁰)| < |h₁(x)|, |h₂(x⁰)| > |h₂(x)| > 0, and |h₁(x⁰)| + |h₂(x⁰)| < |h₁(x)| + |h₂(x)|. Then the search can never escape from x⁰ no matter how large (C·t)^α grows. One way to ensure the convergence of dynamic-penalty methods is to use a different penalty for each constraint, as in the Lagrangian formulation (1.5). In the previous example, the search can escape from x⁰ after assigning a much larger penalty to h₂(x⁰) than to h₁(x⁰).

There are many other variants of penalty methods, such as annealing penalties, adaptive penalties [11] and self-adapting weights [4]. In addition, problem-dependent operators have been studied in the GA community for handling constraints. These include methods based on preserving feasibility with specialized genetic operators, methods searching along the boundaries of feasible regions, methods based on decoders, repair of infeasible solutions, co-evolutionary methods, and strategic oscillation. However, most methods require domain-specific knowledge or problem-dependent genetic operators, and have difficulties in finding feasible regions or in maintaining feasibility for nonlinear constraints. In general, local minima of penalty functions are only necessary but not sufficient conditions for constrained local minima of the original constrained optimization problems, unless the penalties are chosen properly. Hence, finding local minima of a penalty function does not necessarily solve the original constrained optimization problem.
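For comparison with (1.7) and (1.9), the sketch below writes out both penalty fitness functions side by side. The parameter names mirror the equations; the choice of Python closures is an illustrative assumption, and any concrete constants a user supplies are their own.

```python
# Static penalty (Eq. 1.7) and dynamic penalty (Eq. 1.9) fitness functions.
from typing import Sequence

def static_penalty(f, h, gamma: Sequence[float], rho: float):
    # F(x, gamma) = f(x) + sum_i gamma_i * |h_i(x)|^rho, with gamma fixed
    return lambda x: f(x) + sum(g * abs(v) ** rho for g, v in zip(gamma, h(x)))

def dynamic_penalty(f, h, C: float, alpha: float, beta: float):
    # F(x) = f(x) + (C*t)^alpha * sum_i |h_i(x)|^beta, penalty grows with generation t
    return lambda x, t: f(x) + (C * t) ** alpha * sum(abs(v) ** beta for v in h(x))
```

Because the dynamic form scales every constraint by the same factor, it can stall at the infeasible point described in the example above; this is exactly the motivation for the per-constraint multipliers of (1.5).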

3. A General Framework to Look for SP_dn

Although there are many methods for solving constrained NLPs, our survey in the last section shows a lack of a general framework that unifies these mechanisms. Without such a framework, it is difficult to know whether different algorithms are actually variations of each other. In this section we present a framework for solving constrained NLPs that unifies SA, GA, and greedy searches.

Figure 1.4. An iterative stochastic procedural framework to look for SP_dn. (Starting from a random initial candidate with an initial λ, the framework alternately generates new candidates in the x subspace by genetic, probabilistic, or greedy rules and new candidates in the λ subspace by probabilistic or greedy rules, inserts candidates into a list based on an annealing or deterministic sorting criterion, and updates the Lagrangian values of all candidates in the list, until the stopping conditions are met.)

Based on the necessary and sufficient condition in Theorem 1, Figure 1.4 depicts a stochastic procedure to look for SP_dn. The procedure consists of two loops: the x loop, which updates the variables in x in order to perform descents of L_d in the x subspace, and the λ loop, which updates the λ variables of unsatisfied constraints for any candidate in the list in order to perform ascents in the λ subspace. The procedure quits when no new probes can be generated in either the x or the λ subspace. The procedure will not stop until it finds a feasible point, because it will generate new probes in the λ subspace whenever there are unsatisfied constraints. Further, if the procedure always finds a descent direction at x by enumerating all points in N_dn(x), then the point where the procedure stops must be a feasible local minimum in the x subspace of L_d(x, λ), or equivalently, a CLM_dn.

Both DLM and CSA discussed in Section 2.2 fit into this framework, each maintaining a list of one candidate at any time. DLM entails greedy searches in the x and λ subspaces, deterministic insertions into the list of candidates, and deterministic acceptance of candidates until all constraints are satisfied. On the other hand, CSA generates new probes randomly in one of the x or λ variables, accepts them based on the Metropolis probability if L_d increases along the x dimension and decreases along the λ dimension, and stops updating λ when all constraints are satisfied. In this section, we use genetic operators to generate probes and present CGA in Section 3.1 and CSAGA in Section 3.2. Finally, we propose in Section 3.3 iterative-deepening versions of these algorithms.

3.1. Constrained genetic algorithm (CGA)

CGA in Figure 1.5a was developed based on the general framework in Figure 1.4. Similar to traditional GA, it organizes a search into a number of generations, each involving a population of candidate points in a search space. However, it searches in the L_d space, using genetic operators to generate probes in the x subspace, either greedy or probabilistic mechanisms to generate probes in the λ subspace, and deterministic organization of candidates according to their L_d values.

Lines 2-3 initialize the generation number t and the vector λ of Lagrange multipliers to zero. The initial population P(t) can be either randomly generated or user provided. Lines 4 and 12 terminate CGA when the maximum number of allowed generations is exceeded. Line 5 evaluates in generation t all candidates in P(t) using L_d(x, λ(t)) as the fitness function.

1. procedure CGA(P, N_g)
2.   set generation number t ← 0 and λ(t) ← 0;
3.   initialize population P(t);
4.   repeat /* over multiple generations */
5.     evaluate L_d(x, λ(t)) for all candidates in P(t);
6.     repeat /* over probes in x subspace */
7.       y ← GA(select(P(t)));
8.       evaluate L_d(y, λ) and insert y into P(t);
9.     until sufficient probes in x subspace;
10.    λ(t) ← λ(t) ⊕ c H̄(h, P(t)); /* update λ */
11.    t ← t + 1;
12.  until (t > N_g)
13. end procedure
a) CGA called with population size P and number of generations N_g.

1. procedure CGA_ID
2.   set initial number of generations N_g ← N_0;
3.   set K ← number of CGA runs at fixed N_g;
4.   repeat /* iterative deepening to find CGM_dn */
5.     for i ← 1 to K do call CGA(P, N_g); end for;
6.     set N_g ← ρ N_g (typically ρ = 2);
7.   until N_g exceeds the maximum allowed, or (no better solution has been found in two successive increases of N_g, N_g > 5 N_0, and a feasible solution has been found);
8. end procedure
b) CGA_ID: CGA with iterative deepening

Figure 1.5. Constrained GA and its iterative-deepening version.

Lines 6-9 explore the x subspace by selecting candidates from P(t) to reproduce using genetic operators and by inserting the newly generated candidates into P(t) according to their fitness values. After a number of descents in the x subspace (defined by the number of probes in Line 9 and the decision box "search in λ subspace" in Figure 1.4), the algorithm switches to searching in the λ subspace. Line 10 updates λ according to the vector of maximum violations H̄(h(x), P(t)), where the maximum violation of a constraint is evaluated over all the candidates in P(t). That is,

    H̄_i(h(x), P(t)) = max_{x ∈ P(t)} H(h_i(x)),   i = 1, …, m,    (1.10)

where h_i(x) is the i-th constraint function, H is the non-negative transformation defined in (1.5), and c is a step-wise constant controlling how fast λ changes. Operator ⊕ in Figure 1.5a can be implemented in two ways in order to generate a new λ. A new λ can be generated probabilistically, based on a uniform distribution in (−c H̄/2, c H̄/2], or in a greedy fashion, based on a uniform distribution in (0, c H̄]. In addition, we can accept new λ probes deterministically by rejecting negative ones, or probabilistically using an annealing rule. In all cases, a Lagrange multiplier will not be changed if its corresponding constraint is satisfied.
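Line 10 of CGA and Eq. (1.10) amount to a per-constraint update driven by the worst violation in the population. The sketch below shows the probabilistic variant of the ⊕ operator described above; the function name, data layout, and use of the absolute-value transformation are illustrative assumptions.

```python
# Sketch of the lambda update of Eq. (1.10): each multiplier is perturbed in
# proportion to the maximum violation of its constraint over the population,
# and left unchanged when the constraint is satisfied for every candidate.
import random
from typing import Callable, List, Sequence

def update_multipliers(lam: List[float],
                       h: Callable[[Sequence[float]], List[float]],
                       population: List[Sequence[float]],
                       c: float) -> List[float]:
    m = len(lam)
    max_viol = [max(abs(h(x)[i]) for x in population) for i in range(m)]   # H_bar_i
    new_lam = []
    for lam_i, v in zip(lam, max_viol):
        step = random.uniform(-c * v / 2.0, c * v / 2.0) if v > 0 else 0.0
        new_lam.append(lam_i + step)
    return new_lam
```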

3.2. Combined constrained SA and GA (CSAGA)

Based on the general framework in Figure 1.4, we design CSAGA by integrating CSA in Figure 1.2a and CGA in Figure 1.5a into a combined procedure. The new procedure differs from the original CSA in two respects. First, since multiple candidates are maintained in a population, we need to decide how CSA should be applied to them. Our evaluations show that, instead of running CSA on each candidate from a random starting point, it is best to run CSA sequentially, using the best solution point found in one run as the starting point of the next run. Second, we need to determine the duration of each run of CSA. This is controlled by parameter q, which was set to N_g/6 after experimental evaluations. The new algorithm shown in Figure 1.6 uses both SA and GA to generate new probes in the x subspace.

Line 2 initializes P(0). Unlike CGA, each candidate x = [x₁, …, x_n, λ₁, …, λ_k]ᵀ in P(t) is defined in the joint x and λ subspaces. Initially, x can be user-provided or randomly generated, and λ is set to zero.

1. procedure CSAGA(P, N_g)
2.   set t ← 0, T ← T₀, 0 < α < 1, and initialize P(t);
3.   repeat /* over multiple generations */
4.     for i ← 1 to q do /* SA in Lines 5-10 */
5.       for j ← 1 to P do
6.         generate x'_j from N_dn(x_j) using G(x_j, x'_j);
7.         accept x'_j with probability A_T(x_j, x'_j);
8.       end for
9.       set T ← αT; /* set T for the SA part */
10.    end for
11.    repeat /* by GA over probes in x subspace */
12.      y ← GA(select(P(t)));
13.      evaluate L_d(y, λ) and insert y into P(t);
14.    until sufficient number of probes in x subspace;
15.    t ← t + q; /* update generation number */
16.  until (t ≥ N_g)
17. end procedure

Figure 1.6. CSAGA: combined CSA and CGA called with population size P and N_g generations.

Lines 4-10 perform CSA using q probes on every candidate in the population. In each probe, we generate x'_j probabilistically and accept it based on the Metropolis probability. Experimentally, we set q to N_g/6. As discussed earlier, we use the best point of one run as the starting point of the next run. Lines 11-14 start a GA search after the SA part has been completed. The algorithm searches in the x subspace by generating probes using GA operators, sorting all candidates according to their fitness values L_d after each probe is generated. In ordering candidates, since each candidate has its own vector of Lagrange multipliers, the algorithm first computes the average value of the Lagrange multipliers for each constraint over all candidates in P(t) and then calculates L_d for each candidate using the average Lagrange multipliers.

Note that CSAGA has difficulties similar to those of CGA in determining a proper number of candidates in its population and the duration of each run. We address these two issues in CSAGA_ID in the next subsection.
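One way to read Figure 1.6 is as an alternation of an SA phase (q annealing sweeps over the population) and a GA phase (crossover/mutation probes ranked by L_d). The outline below is a structural sketch under that reading only; `sa_sweep`, `ga_probe`, and `fitness` are assumed placeholders rather than the chapter's operators, and the number of GA probes per generation is an arbitrary choice.

```python
# Structural sketch of CSAGA's generation loop (Figure 1.6): q SA sweeps over
# the population followed by GA probes, repeated until N_g generations elapse.
def csaga_outline(population, sa_sweep, ga_probe, fitness,
                  n_generations: int, T0: float = 1.0, alpha: float = 0.95):
    t, T = 0, T0
    q = max(1, n_generations // 6)                 # q = N_g / 6, as chosen experimentally
    while t < n_generations:
        for _ in range(q):                         # SA part: anneal every candidate
            population = [sa_sweep(x, T) for x in population]
            T *= alpha
        for _ in range(len(population)):           # GA part: probes in the x subspace
            child = ga_probe(population)
            population = sorted(population + [child], key=fitness)[:len(population)]
        t += q
    return min(population, key=fitness)
```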

3.3. CGA and CSAGA with iterative deepening

In this section we present a method to determine the optimal number of generations in one run of CGA and CSAGA in order to find a CGM_dn. The method is based on the use of iterative deepening [7], which determines an upper bound on N_g in order to minimize the expected total overhead in (1.3), where N_g is the number of generations in one run of CGA.

The number of probes expended in one run of CGA or CSAGA is N = N_g P, where P is the population size. For a fixed P, let P̂_R(N_g) = P_R(P N_g) be the reachability probability of finding a CGM_dn. From (1.3), the expected total number of probes using multiple runs of either CGA or CSAGA with fixed P is:

    N / P_R(N) = N_g P / P_R(N_g P) = P · N_g / P̂_R(N_g).    (1.11)

In order for an optimal number of generations N_g,opt that minimizes (1.11) to exist, N_g / P̂_R(N_g) must have an absolute minimum in (0, ∞). This condition is true since P̂_R(N_g) of CGA has behavior similar to that of P_R(N_g) of CSA. It has been verified based on statistics collected on P̂_R(N_g) and N_g at various P when CGA and CSAGA are used to solve the ten discretized benchmark problems G1-G10 [11]. Figure 1.1 illustrates the existence of such an absolute minimum when CSAGA with P = 3 was applied to solve G1.

Similar to the design of CSA_ID, we apply iterative deepening to estimate N_g,opt. CGA_ID in Figure 1.5b uses a set of geometrically increasing N_g to find a CGM_dn:

    N_g,i = ρ^i N_0,   i = 0, 1, …,    (1.12)

where N_0 is the (small) initial number of generations. Under each N_g, CGA is run a maximum of K times, but it stops immediately when a feasible solution has been found, when no better solution has been found in two successive generations, and after the number of iterations has been increased geometrically at least five times. These conditions are used to ensure that iterative deepening has been applied adequately. For iterative deepening to work, ρ > 1.

Let P̂_R(N_g,i) be the reachability probability of one run of CGA under N_g,i generations, B_opt(f') be the expected total number of probes taken by CGA with N_g,opt to find a CGM_dn, and B_ID(f') be the expected total number of probes taken by CGA_ID in Figure 1.5b to find a solution of quality f' starting from N_0 generations. According to (1.11),

    B_opt(f') = P · N_g,opt / P̂_R(N_g,opt).    (1.13)

The following theorem gives sufficient conditions for B_ID(f') = O(B_opt(f')).

Theorem 3 (Optimality of CGA_ID and CSAGA_ID). B_ID(f') = O(B_opt(f')) if a) P̂_R(0) = 0, P̂_R(N_g) is monotonically non-decreasing for N_g in (0, ∞), and lim_{N_g→∞} P̂_R(N_g) ≤ 1; b) ρ (1 − P̂_R(N_g,opt))^K < 1.

The proof is not shown due to space limitations. Typically, ρ = 2 and P̂_R(N_g,opt) ≥ 0.25 in all the benchmarks tested. Substituting these values into condition (b) of Theorem 3 yields K > 2.4. In our experiments, we have used K = 3. Since CGA is run a maximum of three times under each N_g, B_opt(f') is of the same order of magnitude as one run of CGA with N_g,opt.

The only remaining issue in the design of CGA_ID and CSAGA_ID is choosing a suitable population size P in each generation. In designing CGA_ID, we found that the optimal P ranges from 4 to 40 and is difficult to determine a priori. Although it is possible to choose a suitable P dynamically, we do not present the algorithm here due to space limitations and because it performs worse than CSAGA_ID. In selecting P for CSAGA_ID, we note in the design of CSA_ID in Figure 1.2b that K = 3 parallel runs are made at each cooling schedule in order to increase the success probability of finding a solution. For this reason, we set P = K = 3 in our experiments. Our experimental results in the next section show that, although the optimal P may be slightly different from 3, the corresponding expected overhead to find a CGM_dn differs very little from that when a constant P is used.
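Condition (b) of Theorem 3, quoted above, directly yields the bound on K: with ρ = 2 and P̂_R(N_g,opt) = 0.25, ρ(1 − P̂_R)^K < 1 requires K > ln ρ / (−ln(1 − P̂_R)). The two-line check below confirms the K > 2.4 figure; it is arithmetic only, not part of the chapter.

```python
# Numeric check of condition (b): rho * (1 - P)^K < 1  =>  K > ln(rho) / -ln(1 - P).
import math
rho, P = 2.0, 0.25
K_min = math.log(rho) / -math.log(1.0 - P)
print(round(K_min, 2))   # ~2.41, so K = 3 runs per schedule satisfy the condition
```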

4. Experimental Results

We present in this section our experimental results in evaluating CSA_ID, CGA_ID and CSAGA_ID on discrete constrained NLPs. Based on the framework in Figure 1.4, we first determine the best combination of strategies to use in generating probes and in organizing candidates. Using the best combination of strategies, we then show experimental results on some constrained NLPs. Due to a lack of large-scale discrete benchmarks, we derive our benchmarks from two sets of continuous benchmarks: Problems G1-G10 [11, 8] and Floudas and Pardalos' Problems [5].

4.1. Implementation Details

In theory, algorithms derived from the framework, such as CSA, CGA, and CSAGA, will look for SP_dn. In practice, however, it is important to choose appropriate neighborhoods and generate proper trial points in the x and λ subspaces in order to solve constrained NLPs efficiently.

An important component of these methods is the frequency at which λ is updated. As in CSA [13], we have set experimentally in CGA and CSAGA the ratio of generating trial points in the x and λ subspaces from the current point to be 20n to m, where n is the number of variables and m is the number of constraints. This ratio means that x is updated more often than λ. In generating trial points in the x subspace, we have used a dynamically controlled neighborhood size in the SA part [13] based on the 1:1 ratio rule [3], whereas in the GA part we have used the seven operators in Genocop III [10] and L_d as our fitness function. In implementing CSA_ID, CGA_ID and CSAGA_ID, we have used the default parameters of CSA [13] in the SA part and those of Genocop III [10] in the GA part.

The generation of trial point λ' in the λ subspace is done by the following rule:

    λ'_j = λ_j + r₁ φ_j,   where j = 1, …, m.    (1.14)

Here, r₁ is randomly generated in [−1/2, +1/2] if we choose to generate λ' probabilistically, and is randomly generated in [0, 1] if we choose to generate probes in λ in a greedy fashion. We adjust φ adaptively according to the degree of constraint violation, where

    φ = w ⊗ H̄(x) = [w₁ H̄₁(x), w₂ H̄₂(x), …, w_m H̄_m(x)],    (1.15)

⊗ represents the vector (element-wise) product, and H̄ is the vector of maximum violations defined in (1.10). When constraint h_i(x) is satisfied, λ_i does not need to be updated; hence, φ_i = 0. In contrast, when a constraint is not satisfied, we adjust φ_i by modifying w_i according to how fast H̄_i(x) is changing:

    w_i ← η₀ w_i   if H̄_i(x) > τ₀ T,
    w_i ← η₁ w_i   if H̄_i(x) < τ₁ T,    (1.16)

where T is the temperature, and η₀ = 1.25, η₁ = 0.8, τ₀ = 1.0, and τ₁ = 0.01 were chosen experimentally. When H̄_i(x) is reduced too quickly (i.e., H̄_i(x) < τ₁ T), H̄_i(x) is over-weighted, leading to possibly poor objective values or difficulty in satisfying other under-weighted constraints; hence, we reduce λ_i's neighborhood. In contrast, if H̄_i(x) is reduced too slowly (i.e., H̄_i(x) > τ₀ T), we enlarge λ_i's neighborhood in order to improve its chance of satisfaction. Note that w_i is adjusted using T as a reference because constraint violations are expected to decrease when T decreases.

In addition, for iterative deepening to work, we have set the following parameters: ρ = 2, K = 3, and N_0 = 10 n_v, where n_v is the number of variables and N_0 is the initial number of probes; N_max, the maximum number of probes, is also set proportional to n_v.
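The adaptive weight rule of (1.16) is simple to state in code. The sketch below applies the multiplicative updates with the constants quoted above (η₀ = 1.25, η₁ = 0.8, τ₀ = 1.0, τ₁ = 0.01); the function shape and argument names are illustrative assumptions rather than the chapter's implementation.

```python
# Sketch of the weight adaptation of Eq. (1.16): enlarge a multiplier's
# neighborhood when its constraint violation shrinks too slowly relative to the
# temperature T, and shrink it when the violation drops too quickly.
from typing import List

def adapt_weights(w: List[float], max_viol: List[float], T: float,
                  eta0: float = 1.25, eta1: float = 0.8,
                  tau0: float = 1.0, tau1: float = 0.01) -> List[float]:
    new_w = []
    for w_i, v_i in zip(w, max_viol):
        if v_i > tau0 * T:              # violation decreasing too slowly: enlarge
            new_w.append(eta0 * w_i)
        elif 0.0 < v_i < tau1 * T:      # violation decreasing too quickly: reduce
            new_w.append(eta1 * w_i)
        else:
            new_w.append(w_i)           # otherwise (or constraint satisfied) keep w_i
    return new_w
```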

Table 1.1. Timing results on evaluating various combinations of strategies in CSA_ID, CGA_ID and CSAGA_ID with P = 3 to find solutions that deviate by 1% and 10% from the best-known solution of a discretized version of G2. All CPU times in seconds were averaged over 10 runs and were collected on a Pentium III 500-MHz computer with Solaris 7. '−' means that no solution of the desired quality could be found. (Rows enumerate the combinations of probe-generation strategy in the λ subspace, probe-generation strategy in the x subspace, and insertion strategy; columns report the times of CSA_ID, CGA_ID and CSAGA_ID for each target solution quality.)

4.2. Evaluation Results

In generating a discrete constrained NLP, we discretize the continuous variables in the original continuous constrained NLP into discrete variables as follows. In discretizing continuous variable x_i in range [l_i, u_i], where l_i and u_i are the lower and upper bounds of x_i, respectively, we force x_i to take values from the set:

    A_i = { l_i + j (u_i − l_i)/s :  j = 0, 1, …, s }            if u_i − l_i < 1,
    A_i = { l_i + j/s :  j = 0, 1, …, ⌊(u_i − l_i) s⌋ }          if u_i − l_i ≥ 1,    (1.17)

where s is a fixed discretization constant.

Table 1.1 shows the results of evaluating various combinations of strategies in CSA_ID, CGA_ID, and CSAGA_ID on a discretized version of G2 [11, 8]. We show the average time of 10 runs for each combination to reach two solution-quality levels (1% or 10% worse than CGM_dn, assuming the value of CGM_dn is known). Evaluation results on other benchmark problems are similar and are not shown due to space limitations.

Our results show that CGA_ID is usually less efficient than CSA_ID or CSAGA_ID. Further, CSA_ID and CSAGA_ID perform better when probes generated in the x subspace are accepted by annealing rather than by deterministic rules (the former prevents a search from getting stuck in local minima or infeasible points). On the other hand, there is little difference in performance when new probes generated in the λ subspace are accepted by probabilistic or by greedy rules and when new candidates are inserted according to annealing or deterministic rules. In short, generating probes in the x and λ subspaces probabilistically and inserting candidates in both the x and λ subspaces by annealing rules leads to good and stable performance. For this reason, we use this combination of strategies in our experiments.

We next test our algorithms on the ten constrained NLPs G1-G10 [11, 8]. These problems have objective functions of various types (linear, quadratic, cubic, polynomial, and nonlinear) and constraints in the form of linear inequalities, nonlinear equalities, and nonlinear inequalities. The number of variables is up to 20, and that of constraints, including simple bounds, is up to 42. The ratio of feasible space with respect to the whole search space varies from 0% to almost 100%, and the topologies of the feasible regions are quite different. These problems were originally designed to be solved by evolutionary algorithms (EAs) in which constraint-handling techniques were tuned for each problem in order to get good results.

Table 1.2. Results of CSA_ID, CGA_ID and CSAGA_ID in finding the best-known solution f* for 10 discretized constrained NLPs and the corresponding results found by EA. (S.T. stands for strategic oscillation, H.M. for homomorphous mappings, and D.P. for dynamic penalty. B_ID(f*), the CPU time in seconds to find the best-known solution f*, was averaged over 10 runs and was collected on a Pentium III 500-MHz computer with Solaris 7. #L_d represents the number of L_d(x, λ)-function evaluations. The best B_ID(f*) for each problem is boxed.)

Examples of such techniques include keeping a search within feasible regions with specific genetic operators and dynamic and adaptive penalty methods.

Table 1.2 compares the performance of CSA_ID, CGA_ID, and CSAGA_ID with respect to B_ID(f*), the expected total CPU time of multiple runs until a solution of value f* is found. The first two columns show the problem IDs and the corresponding known f*. The next two columns show the best solutions obtained by EAs and the specific constraint-handling techniques used to generate them. Since CSA_ID, CGA_ID and CSAGA_ID can all find a CGM_dn in all 10 runs, we compare their performance with respect to T, the average total overhead of multiple runs until a CGM_dn is found. The fifth and sixth columns show, respectively, the average time and number of L_d(x, λ)-function evaluations CSA_ID takes to find f*. The next two columns show the performance of CGA_ID with respect to P_opt, the optimal population size found by enumeration, and the average time to find f*. These results show that CGA_ID is not competitive with CSA_ID, even when P_opt is used. The results of including additional steps in CGA_ID to select a suitable P at run time are worse and are not shown due to space limitations.

Finally, the last five columns show the performance of CSAGA_ID. The first three present the average times and numbers of L_d(x, λ) evaluations using a constant P, whereas the last two show the average times using P_opt found by enumeration. These results show little improvement from using P_opt. Further, CSAGA_ID improves B_ID(f*) over CSA_ID by between 9% and 38% on the 10 problems, except for G4 and G10.

Comparing CGA_ID and CSAGA_ID with EA, we see that EA was only able to find f* in three of the ten problems, despite extensive tuning and the use of problem-specific heuristics, whereas both CGA_ID and CSAGA_ID can find f* for all these problems without any problem-dependent strategies. It is not possible to report the timing results of EA because its results are the best among many runs after extensive tuning.

Finally, Table 1.3 shows the results on selected discretized Floudas and Pardalos' benchmarks [5] that have more than 10 variables and many equality or inequality constraints. The first three columns show the problem IDs, the known f*, and the number of variables (n_v) in each.

Table 1.3. Results of CSA_ID and CSAGA_ID with P = 3 in solving selected Floudas and Pardalos' discretized constrained NLP benchmarks (with more than n_v = 10 variables). Since Problems 5.x and 7.x are especially large and difficult and a search can rarely reach their true CGM_dn, we consider a CGM_dn found when the solution quality is within 10% of the true CGM_dn. All CPU times in seconds were averaged over 10 runs and were collected on a Pentium III 500-MHz computer with Solaris 7.

The last two columns compare B_ID(f*) of CSA_ID and CSAGA_ID with fixed P = 3. They show that CSAGA_ID is consistently faster than CSA_ID (between 1.3 and 26.3 times), especially for large problems. This is attributed to the fact that GA maintains more diversity of candidates by keeping a population, thereby allowing competition among the candidates and leading SA to explore more promising regions.

5. Conclusions

In this chapter we have presented new algorithms to look for discrete-neighborhood saddle points in the discrete Lagrangian space of constrained optimization problems. Our results show that genetic algorithms, when combined with simulated annealing, are effective in locating saddle points. Future developments will focus on better ways to select appropriate heuristics in probe generation, including search-direction control and neighborhood-size control, at run time.

References

[1] E. Aarts and J. Korst. Simulated Annealing and Boltzmann Machines. J. Wiley and Sons.

[2] D. P. Bertsekas. Constrained Optimization and Lagrange Multiplier Methods. Academic Press.

[3] A. Corana, M. Marchesi, C. Martini, and S. Ridella. Minimizing multimodal functions of continuous variables with the simulated annealing algorithm. ACM Trans. on Mathematical Software, 13(3):262-280.

[4] A. E. Eiben and Zs. Ruttkay. Self-adaptivity for constraint satisfaction: Learning penalty functions. Proceedings of the 3rd IEEE Conference on Evolutionary Computation, pages 258-261.

[5] C. A. Floudas and P. M. Pardalos. A Collection of Test Problems for Constrained Global Optimization Algorithms, volume 455 of Lecture Notes in Computer Science. Springer-Verlag.

[6] J. Joines and C. Houck. On the use of non-stationary penalty functions to solve nonlinear constrained optimization problems with GAs. Proceedings of the First IEEE International Conference on Evolutionary Computation, pages 579-584.

[7] R. E. Korf. Depth-first iterative deepening: An optimal admissible tree search. Artificial Intelligence, 27:97-109.

[8] S. Koziel and Z. Michalewicz. Evolutionary algorithms, homomorphous mappings, and constrained parameter optimization. Evolutionary Computation, 7(1):19-44.

[9] D. G. Luenberger. Linear and Nonlinear Programming. Addison-Wesley Publishing Company, Reading, MA.

[10] Z. Michalewicz and G. Nazhiyath. Genocop III: A co-evolutionary algorithm for numerical optimization problems with nonlinear constraints. Proceedings of IEEE International Conference on Evolutionary Computation, 2:647-651.

[11] Z. Michalewicz and M. Schoenauer. Evolutionary algorithms for constrained parameter optimization problems. Evolutionary Computation, 4(1):1-32.

[12] B. W. Wah and Y. X. Chen. Optimal anytime constrained simulated annealing for constrained global optimization. Sixth Int'l Conf. on Principles and Practice of Constraint Programming, September.

[13] B. W. Wah and T. Wang. Simulated annealing with asymptotic convergence for nonlinear constrained global optimization. Principles and Practice of Constraint Programming, pages 461-475, October.

[14] B. W. Wah and Z. Wu. The theory of discrete Lagrange multipliers for nonlinear discrete optimization. Principles and Practice of Constraint Programming, pages 28-42, October.

[15] Z. Wu. Discrete Lagrangian Methods for Solving Nonlinear Discrete Constrained Optimization Problems. M.Sc. Thesis, Dept. of Computer Science, Univ. of Illinois, Urbana, IL, May.

[16] Z. Wu. The Theory and Applications of Nonlinear Constrained Optimization using Lagrange Multipliers. Ph.D. Thesis, Dept. of Computer Science, Univ. of Illinois, Urbana, IL, October 2000.


More information

Optimal Routing in Chord

Optimal Routing in Chord Optimal Routing in Chord Prasanna Ganesan Gurmeet Singh Manku Astract We propose optimal routing algorithms for Chord [1], a popular topology for routing in peer-to-peer networks. Chord is an undirected

More information

TIGHT BOUNDS FOR THE FIRST ORDER MARCUM Q-FUNCTION

TIGHT BOUNDS FOR THE FIRST ORDER MARCUM Q-FUNCTION TIGHT BOUNDS FOR THE FIRST ORDER MARCUM Q-FUNCTION Jiangping Wang and Dapeng Wu Department of Electrical and Computer Engineering University of Florida, Gainesville, FL 3611 Correspondence author: Prof.

More information

5. Simulated Annealing 5.1 Basic Concepts. Fall 2010 Instructor: Dr. Masoud Yaghini

5. Simulated Annealing 5.1 Basic Concepts. Fall 2010 Instructor: Dr. Masoud Yaghini 5. Simulated Annealing 5.1 Basic Concepts Fall 2010 Instructor: Dr. Masoud Yaghini Outline Introduction Real Annealing and Simulated Annealing Metropolis Algorithm Template of SA A Simple Example References

More information

Computational statistics

Computational statistics Computational statistics Combinatorial optimization Thierry Denœux February 2017 Thierry Denœux Computational statistics February 2017 1 / 37 Combinatorial optimization Assume we seek the maximum of f

More information

MATH 225: Foundations of Higher Matheamatics. Dr. Morton. 3.4: Proof by Cases

MATH 225: Foundations of Higher Matheamatics. Dr. Morton. 3.4: Proof by Cases MATH 225: Foundations of Higher Matheamatics Dr. Morton 3.4: Proof y Cases Chapter 3 handout page 12 prolem 21: Prove that for all real values of y, the following inequality holds: 7 2y + 2 2y 5 7. You

More information

Genetic Algorithms with Dynamic Niche Sharing. for Multimodal Function Optimization. at Urbana-Champaign. IlliGAL Report No

Genetic Algorithms with Dynamic Niche Sharing. for Multimodal Function Optimization. at Urbana-Champaign. IlliGAL Report No Genetic Algorithms with Dynamic Niche Sharing for Multimodal Function Optimization Brad L. Miller Dept. of Computer Science University of Illinois at Urbana-Champaign Michael J. Shaw Beckman Institute

More information

12. LOCAL SEARCH. gradient descent Metropolis algorithm Hopfield neural networks maximum cut Nash equilibria

12. LOCAL SEARCH. gradient descent Metropolis algorithm Hopfield neural networks maximum cut Nash equilibria Coping With NP-hardness Q. Suppose I need to solve an NP-hard problem. What should I do? A. Theory says you re unlikely to find poly-time algorithm. Must sacrifice one of three desired features. Solve

More information

12. LOCAL SEARCH. gradient descent Metropolis algorithm Hopfield neural networks maximum cut Nash equilibria

12. LOCAL SEARCH. gradient descent Metropolis algorithm Hopfield neural networks maximum cut Nash equilibria 12. LOCAL SEARCH gradient descent Metropolis algorithm Hopfield neural networks maximum cut Nash equilibria Lecture slides by Kevin Wayne Copyright 2005 Pearson-Addison Wesley h ttp://www.cs.princeton.edu/~wayne/kleinberg-tardos

More information

Local Search & Optimization

Local Search & Optimization Local Search & Optimization CE417: Introduction to Artificial Intelligence Sharif University of Technology Spring 2017 Soleymani Artificial Intelligence: A Modern Approach, 3 rd Edition, Chapter 4 Outline

More information

A.I.: Beyond Classical Search

A.I.: Beyond Classical Search A.I.: Beyond Classical Search Random Sampling Trivial Algorithms Generate a state randomly Random Walk Randomly pick a neighbor of the current state Both algorithms asymptotically complete. Overview Previously

More information

Local and Stochastic Search

Local and Stochastic Search RN, Chapter 4.3 4.4; 7.6 Local and Stochastic Search Some material based on D Lin, B Selman 1 Search Overview Introduction to Search Blind Search Techniques Heuristic Search Techniques Constraint Satisfaction

More information

arxiv: v1 [cs.fl] 24 Nov 2017

arxiv: v1 [cs.fl] 24 Nov 2017 (Biased) Majority Rule Cellular Automata Bernd Gärtner and Ahad N. Zehmakan Department of Computer Science, ETH Zurich arxiv:1711.1090v1 [cs.fl] 4 Nov 017 Astract Consider a graph G = (V, E) and a random

More information

Finding Succinct. Ordered Minimal Perfect. Hash Functions. Steven S. Seiden 3 Daniel S. Hirschberg 3. September 22, Abstract

Finding Succinct. Ordered Minimal Perfect. Hash Functions. Steven S. Seiden 3 Daniel S. Hirschberg 3. September 22, Abstract Finding Succinct Ordered Minimal Perfect Hash Functions Steven S. Seiden 3 Daniel S. Hirschberg 3 September 22, 1994 Abstract An ordered minimal perfect hash table is one in which no collisions occur among

More information

Lecture 9 Evolutionary Computation: Genetic algorithms

Lecture 9 Evolutionary Computation: Genetic algorithms Lecture 9 Evolutionary Computation: Genetic algorithms Introduction, or can evolution be intelligent? Simulation of natural evolution Genetic algorithms Case study: maintenance scheduling with genetic

More information

Hill climbing: Simulated annealing and Tabu search

Hill climbing: Simulated annealing and Tabu search Hill climbing: Simulated annealing and Tabu search Heuristic algorithms Giovanni Righini University of Milan Department of Computer Science (Crema) Hill climbing Instead of repeating local search, it is

More information

CS 781 Lecture 9 March 10, 2011 Topics: Local Search and Optimization Metropolis Algorithm Greedy Optimization Hopfield Networks Max Cut Problem Nash

CS 781 Lecture 9 March 10, 2011 Topics: Local Search and Optimization Metropolis Algorithm Greedy Optimization Hopfield Networks Max Cut Problem Nash CS 781 Lecture 9 March 10, 2011 Topics: Local Search and Optimization Metropolis Algorithm Greedy Optimization Hopfield Networks Max Cut Problem Nash Equilibrium Price of Stability Coping With NP-Hardness

More information

Approximate Optimal-Value Functions. Satinder P. Singh Richard C. Yee. University of Massachusetts.

Approximate Optimal-Value Functions. Satinder P. Singh Richard C. Yee. University of Massachusetts. An Upper Bound on the oss from Approximate Optimal-Value Functions Satinder P. Singh Richard C. Yee Department of Computer Science University of Massachusetts Amherst, MA 01003 singh@cs.umass.edu, yee@cs.umass.edu

More information

Lin-Kernighan Heuristic. Simulated Annealing

Lin-Kernighan Heuristic. Simulated Annealing DM63 HEURISTICS FOR COMBINATORIAL OPTIMIZATION Lecture 6 Lin-Kernighan Heuristic. Simulated Annealing Marco Chiarandini Outline 1. Competition 2. Variable Depth Search 3. Simulated Annealing DM63 Heuristics

More information

Local Search & Optimization

Local Search & Optimization Local Search & Optimization CE417: Introduction to Artificial Intelligence Sharif University of Technology Spring 2018 Soleymani Artificial Intelligence: A Modern Approach, 3 rd Edition, Chapter 4 Some

More information

Point-Based Value Iteration for Constrained POMDPs

Point-Based Value Iteration for Constrained POMDPs Point-Based Value Iteration for Constrained POMDPs Dongho Kim Jaesong Lee Kee-Eung Kim Department of Computer Science Pascal Poupart School of Computer Science IJCAI-2011 2011. 7. 22. Motivation goals

More information

On Learning Linear Ranking Functions for Beam Search

On Learning Linear Ranking Functions for Beam Search Yuehua Xu XUYU@EECS.OREGONSTATE.EDU Alan Fern AFERN@EECS.OREGONSTATE.EDU School of Electrical Engineering and Computer Science, Oregon State University, Corvallis, OR 97331 USA Astract Beam search is used

More information

A Tabu Search Algorithm to Construct BIBDs Using MIP Solvers

A Tabu Search Algorithm to Construct BIBDs Using MIP Solvers The Eighth International Symposium on Operations Research and Its Applications (ISORA 09) Zhangjiajie, China, Septemer 20 22, 2009 Copyright 2009 ORSC & APORC, pp. 179 189 A Tau Search Algorithm to Construct

More information

Constrained Optimization

Constrained Optimization 1 / 22 Constrained Optimization ME598/494 Lecture Max Yi Ren Department of Mechanical Engineering, Arizona State University March 30, 2015 2 / 22 1. Equality constraints only 1.1 Reduced gradient 1.2 Lagrange

More information

CS 542G: Robustifying Newton, Constraints, Nonlinear Least Squares

CS 542G: Robustifying Newton, Constraints, Nonlinear Least Squares CS 542G: Robustifying Newton, Constraints, Nonlinear Least Squares Robert Bridson October 29, 2008 1 Hessian Problems in Newton Last time we fixed one of plain Newton s problems by introducing line search

More information

Estimating a Finite Population Mean under Random Non-Response in Two Stage Cluster Sampling with Replacement

Estimating a Finite Population Mean under Random Non-Response in Two Stage Cluster Sampling with Replacement Open Journal of Statistics, 07, 7, 834-848 http://www.scirp.org/journal/ojs ISS Online: 6-798 ISS Print: 6-78X Estimating a Finite Population ean under Random on-response in Two Stage Cluster Sampling

More information

Finding optimal configurations ( combinatorial optimization)

Finding optimal configurations ( combinatorial optimization) CS 1571 Introduction to AI Lecture 10 Finding optimal configurations ( combinatorial optimization) Milos Hauskrecht milos@cs.pitt.edu 539 Sennott Square Constraint satisfaction problem (CSP) Constraint

More information

ANOVA 3/12/2012. Two reasons for using ANOVA. Type I Error and Multiple Tests. Review Independent Samples t test

ANOVA 3/12/2012. Two reasons for using ANOVA. Type I Error and Multiple Tests. Review Independent Samples t test // ANOVA Lectures - Readings: GW Review Independent Samples t test Placeo Treatment 7 7 7 Mean... Review Independent Samples t test Placeo Treatment 7 7 7 Mean.. t (). p. C. I.: p t tcrit s pt crit s t

More information

1 Hoeffding s Inequality

1 Hoeffding s Inequality Proailistic Method: Hoeffding s Inequality and Differential Privacy Lecturer: Huert Chan Date: 27 May 22 Hoeffding s Inequality. Approximate Counting y Random Sampling Suppose there is a ag containing

More information

Upper and Lower Bounds on the Number of Faults. a System Can Withstand Without Repairs. Cambridge, MA 02139

Upper and Lower Bounds on the Number of Faults. a System Can Withstand Without Repairs. Cambridge, MA 02139 Upper and Lower Bounds on the Number of Faults a System Can Withstand Without Repairs Michel Goemans y Nancy Lynch z Isaac Saias x Laboratory for Computer Science Massachusetts Institute of Technology

More information

A converse Gaussian Poincare-type inequality for convex functions

A converse Gaussian Poincare-type inequality for convex functions Statistics & Proaility Letters 44 999 28 290 www.elsevier.nl/locate/stapro A converse Gaussian Poincare-type inequality for convex functions S.G. Bokov a;, C. Houdre ; ;2 a Department of Mathematics, Syktyvkar

More information

Lecture Notes: Introduction to IDF and ATC

Lecture Notes: Introduction to IDF and ATC Lecture Notes: Introduction to IDF and ATC James T. Allison April 5, 2006 This lecture is an introductory tutorial on the mechanics of implementing two methods for optimal system design: the individual

More information

Hybrid particle swarm algorithm for solving nonlinear constraint. optimization problem [5].

Hybrid particle swarm algorithm for solving nonlinear constraint. optimization problem [5]. Hybrid particle swarm algorithm for solving nonlinear constraint optimization problems BINGQIN QIAO, XIAOMING CHANG Computers and Software College Taiyuan University of Technology Department of Economic

More information

The Mean Version One way to write the One True Regression Line is: Equation 1 - The One True Line

The Mean Version One way to write the One True Regression Line is: Equation 1 - The One True Line Chapter 27: Inferences for Regression And so, there is one more thing which might vary one more thing aout which we might want to make some inference: the slope of the least squares regression line. The

More information

1. Introduction The nonlinear complementarity problem (NCP) is to nd a point x 2 IR n such that hx; F (x)i = ; x 2 IR n + ; F (x) 2 IRn + ; where F is

1. Introduction The nonlinear complementarity problem (NCP) is to nd a point x 2 IR n such that hx; F (x)i = ; x 2 IR n + ; F (x) 2 IRn + ; where F is New NCP-Functions and Their Properties 3 by Christian Kanzow y, Nobuo Yamashita z and Masao Fukushima z y University of Hamburg, Institute of Applied Mathematics, Bundesstrasse 55, D-2146 Hamburg, Germany,

More information

0 o 1 i B C D 0/1 0/ /1

0 o 1 i B C D 0/1 0/ /1 A Comparison of Dominance Mechanisms and Simple Mutation on Non-Stationary Problems Jonathan Lewis,? Emma Hart, Graeme Ritchie Department of Articial Intelligence, University of Edinburgh, Edinburgh EH

More information

HIGH-DIMENSIONAL GRAPHS AND VARIABLE SELECTION WITH THE LASSO

HIGH-DIMENSIONAL GRAPHS AND VARIABLE SELECTION WITH THE LASSO The Annals of Statistics 2006, Vol. 34, No. 3, 1436 1462 DOI: 10.1214/009053606000000281 Institute of Mathematical Statistics, 2006 HIGH-DIMENSIONAL GRAPHS AND VARIABLE SELECTION WITH THE LASSO BY NICOLAI

More information

Large Scale Semi-supervised Linear SVMs. University of Chicago

Large Scale Semi-supervised Linear SVMs. University of Chicago Large Scale Semi-supervised Linear SVMs Vikas Sindhwani and Sathiya Keerthi University of Chicago SIGIR 2006 Semi-supervised Learning (SSL) Motivation Setting Categorize x-billion documents into commercial/non-commercial.

More information

Methods for finding optimal configurations

Methods for finding optimal configurations CS 1571 Introduction to AI Lecture 9 Methods for finding optimal configurations Milos Hauskrecht milos@cs.pitt.edu 5329 Sennott Square Search for the optimal configuration Optimal configuration search:

More information

Optimization Methods via Simulation

Optimization Methods via Simulation Optimization Methods via Simulation Optimization problems are very important in science, engineering, industry,. Examples: Traveling salesman problem Circuit-board design Car-Parrinello ab initio MD Protein

More information

Rollout Policies for Dynamic Solutions to the Multi-Vehicle Routing Problem with Stochastic Demand and Duration Limits

Rollout Policies for Dynamic Solutions to the Multi-Vehicle Routing Problem with Stochastic Demand and Duration Limits Rollout Policies for Dynamic Solutions to the Multi-Vehicle Routing Prolem with Stochastic Demand and Duration Limits Justin C. Goodson Jeffrey W. Ohlmann Barrett W. Thomas Octoer 2, 2012 Astract We develop

More information

Simulated Annealing. Local Search. Cost function. Solution space

Simulated Annealing. Local Search. Cost function. Solution space Simulated Annealing Hill climbing Simulated Annealing Local Search Cost function? Solution space Annealing Annealing is a thermal process for obtaining low energy states of a solid in a heat bath. The

More information

In Advances in Neural Information Processing Systems 6. J. D. Cowan, G. Tesauro and. Convergence of Indirect Adaptive. Andrew G.

In Advances in Neural Information Processing Systems 6. J. D. Cowan, G. Tesauro and. Convergence of Indirect Adaptive. Andrew G. In Advances in Neural Information Processing Systems 6. J. D. Cowan, G. Tesauro and J. Alspector, (Eds.). Morgan Kaufmann Publishers, San Fancisco, CA. 1994. Convergence of Indirect Adaptive Asynchronous

More information

4TE3/6TE3. Algorithms for. Continuous Optimization

4TE3/6TE3. Algorithms for. Continuous Optimization 4TE3/6TE3 Algorithms for Continuous Optimization (Algorithms for Constrained Nonlinear Optimization Problems) Tamás TERLAKY Computing and Software McMaster University Hamilton, November 2005 terlaky@mcmaster.ca

More information

of Classical Constants Philippe Flajolet and Ilan Vardi February 24, 1996 Many mathematical constants are expressed as slowly convergent sums

of Classical Constants Philippe Flajolet and Ilan Vardi February 24, 1996 Many mathematical constants are expressed as slowly convergent sums Zeta Function Expansions of Classical Constants Philippe Flajolet and Ilan Vardi February 24, 996 Many mathematical constants are expressed as slowly convergent sums of the form C = f( ) () n n2a for some

More information

Algorithms for Constrained Optimization

Algorithms for Constrained Optimization 1 / 42 Algorithms for Constrained Optimization ME598/494 Lecture Max Yi Ren Department of Mechanical Engineering, Arizona State University April 19, 2015 2 / 42 Outline 1. Convergence 2. Sequential quadratic

More information

An Effective Chromosome Representation for Evolving Flexible Job Shop Schedules

An Effective Chromosome Representation for Evolving Flexible Job Shop Schedules An Effective Chromosome Representation for Evolving Flexible Job Shop Schedules Joc Cing Tay and Djoko Wibowo Intelligent Systems Lab Nanyang Technological University asjctay@ntuedusg Abstract As the Flexible

More information

ERASMUS UNIVERSITY ROTTERDAM Information concerning the Entrance examination Mathematics level 2 for International Business Administration (IBA)

ERASMUS UNIVERSITY ROTTERDAM Information concerning the Entrance examination Mathematics level 2 for International Business Administration (IBA) ERASMUS UNIVERSITY ROTTERDAM Information concerning the Entrance examination Mathematics level 2 for International Business Administration (IBA) General information Availale time: 2.5 hours (150 minutes).

More information

Fast Evolution Strategies. Xin Yao and Yong Liu. University College, The University of New South Wales. Abstract

Fast Evolution Strategies. Xin Yao and Yong Liu. University College, The University of New South Wales. Abstract Fast Evolution Strategies Xin Yao and Yong Liu Computational Intelligence Group, School of Computer Science University College, The University of New South Wales Australian Defence Force Academy, Canberra,

More information

A New Approach to Estimating the Expected First Hitting Time of Evolutionary Algorithms

A New Approach to Estimating the Expected First Hitting Time of Evolutionary Algorithms A New Approach to Estimating the Expected First Hitting Time of Evolutionary Algorithms Yang Yu and Zhi-Hua Zhou National Laboratory for Novel Software Technology Nanjing University, Nanjing 20093, China

More information

In: Proc. BENELEARN-98, 8th Belgian-Dutch Conference on Machine Learning, pp 9-46, 998 Linear Quadratic Regulation using Reinforcement Learning Stephan ten Hagen? and Ben Krose Department of Mathematics,

More information

Methods for finding optimal configurations

Methods for finding optimal configurations S 2710 oundations of I Lecture 7 Methods for finding optimal configurations Milos Hauskrecht milos@pitt.edu 5329 Sennott Square S 2710 oundations of I Search for the optimal configuration onstrain satisfaction

More information

Introduction to Simulated Annealing 22c:145

Introduction to Simulated Annealing 22c:145 Introduction to Simulated Annealing 22c:145 Simulated Annealing Motivated by the physical annealing process Material is heated and slowly cooled into a uniform structure Simulated annealing mimics this

More information

PROBLEM SOLVING AND SEARCH IN ARTIFICIAL INTELLIGENCE

PROBLEM SOLVING AND SEARCH IN ARTIFICIAL INTELLIGENCE Artificial Intelligence, Computational Logic PROBLEM SOLVING AND SEARCH IN ARTIFICIAL INTELLIGENCE Lecture 4 Metaheuristic Algorithms Sarah Gaggl Dresden, 5th May 2017 Agenda 1 Introduction 2 Constraint

More information

Some Inexact Hybrid Proximal Augmented Lagrangian Algorithms

Some Inexact Hybrid Proximal Augmented Lagrangian Algorithms Some Inexact Hybrid Proximal Augmented Lagrangian Algorithms Carlos Humes Jr. a, Benar F. Svaiter b, Paulo J. S. Silva a, a Dept. of Computer Science, University of São Paulo, Brazil Email: {humes,rsilva}@ime.usp.br

More information

Average Reward Parameters

Average Reward Parameters Simulation-Based Optimization of Markov Reward Processes: Implementation Issues Peter Marbach 2 John N. Tsitsiklis 3 Abstract We consider discrete time, nite state space Markov reward processes which depend

More information

1. Define the following terms (1 point each): alternative hypothesis

1. Define the following terms (1 point each): alternative hypothesis 1 1. Define the following terms (1 point each): alternative hypothesis One of three hypotheses indicating that the parameter is not zero; one states the parameter is not equal to zero, one states the parameter

More information

Simulated Annealing. 2.1 Introduction

Simulated Annealing. 2.1 Introduction Simulated Annealing 2 This chapter is dedicated to simulated annealing (SA) metaheuristic for optimization. SA is a probabilistic single-solution-based search method inspired by the annealing process in

More information

Linear Programming. Our market gardener example had the form: min x. subject to: where: [ acres cabbages acres tomatoes T

Linear Programming. Our market gardener example had the form: min x. subject to: where: [ acres cabbages acres tomatoes T Our market gardener eample had the form: min - 900 1500 [ ] suject to: Ñ Ò Ó 1.5 2 â 20 60 á ã Ñ Ò Ó 3 â 60 á ã where: [ acres caages acres tomatoes T ]. We need a more systematic approach to solving these

More information

MATHEMATICAL ENGINEERING TECHNICAL REPORTS. Polynomial Time Perfect Sampler for Discretized Dirichlet Distribution

MATHEMATICAL ENGINEERING TECHNICAL REPORTS. Polynomial Time Perfect Sampler for Discretized Dirichlet Distribution MATHEMATICAL ENGINEERING TECHNICAL REPORTS Polynomial Time Perfect Sampler for Discretized Dirichlet Distriution Tomomi MATSUI and Shuji KIJIMA METR 003 7 April 003 DEPARTMENT OF MATHEMATICAL INFORMATICS

More information

below, kernel PCA Eigenvectors, and linear combinations thereof. For the cases where the pre-image does exist, we can provide a means of constructing

below, kernel PCA Eigenvectors, and linear combinations thereof. For the cases where the pre-image does exist, we can provide a means of constructing Kernel PCA Pattern Reconstruction via Approximate Pre-Images Bernhard Scholkopf, Sebastian Mika, Alex Smola, Gunnar Ratsch, & Klaus-Robert Muller GMD FIRST, Rudower Chaussee 5, 12489 Berlin, Germany fbs,

More information

Laboratory for Computer Science. Abstract. We examine the problem of nding a good expert from a sequence of experts. Each expert

Laboratory for Computer Science. Abstract. We examine the problem of nding a good expert from a sequence of experts. Each expert Picking the Best Expert from a Sequence Ruth Bergman Laboratory for Articial Intelligence Massachusetts Institute of Technology Cambridge, MA 0239 Ronald L. Rivest y Laboratory for Computer Science Massachusetts

More information

Bayesian inference with reliability methods without knowing the maximum of the likelihood function

Bayesian inference with reliability methods without knowing the maximum of the likelihood function Bayesian inference with reliaility methods without knowing the maximum of the likelihood function Wolfgang Betz a,, James L. Beck, Iason Papaioannou a, Daniel Strau a a Engineering Risk Analysis Group,

More information

Penalty and Barrier Methods General classical constrained minimization problem minimize f(x) subject to g(x) 0 h(x) =0 Penalty methods are motivated by the desire to use unconstrained optimization techniques

More information

Numerical Optimization. Review: Unconstrained Optimization

Numerical Optimization. Review: Unconstrained Optimization Numerical Optimization Finding the best feasible solution Edward P. Gatzke Department of Chemical Engineering University of South Carolina Ed Gatzke (USC CHE ) Numerical Optimization ECHE 589, Spring 2011

More information

Single Solution-based Metaheuristics

Single Solution-based Metaheuristics Parallel Cooperative Optimization Research Group Single Solution-based Metaheuristics E-G. Talbi Laboratoire d Informatique Fondamentale de Lille Single solution-based metaheuristics Improvement of a solution.

More information

IE 5531: Engineering Optimization I

IE 5531: Engineering Optimization I IE 5531: Engineering Optimization I Lecture 12: Nonlinear optimization, continued Prof. John Gunnar Carlsson October 20, 2010 Prof. John Gunnar Carlsson IE 5531: Engineering Optimization I October 20,

More information