Instituto Tecnologico de Aguascalientes
From the SelectedWorks of Adrian Bonilla-Petriciolet
2012

Evaluation of Covariance Matrix Adaptation Evolution Strategy, Shuffled Complex Evolution and Firefly Algorithms for Phase Stability, Phase Equilibrium and Chemical Equilibrium Problems

Seif-Eddeen Fateen, Cairo University
Adrian Bonilla-Petriciolet
Gade Pandu Rangaiah, National University of Singapore

Chemical Engineering Research and Design 90 (2012)

Evaluation of Covariance Matrix Adaptation Evolution Strategy, Shuffled Complex Evolution and Firefly Algorithms for phase stability, phase equilibrium and chemical equilibrium problems

Seif-Eddeen K. Fateen a,*, Adrián Bonilla-Petriciolet b, Gade Pandu Rangaiah c
a Department of Chemical Engineering, Cairo University, Egypt
b Department of Chemical Engineering, Instituto Tecnológico de Aguascalientes, Mexico
c Department of Chemical & Biomolecular Engineering, National University of Singapore, Singapore

Abstract

Phase equilibrium calculations and phase stability analysis of reactive and non-reactive systems play a significant role in the simulation, design and optimization of reaction and separation processes in chemical engineering. These challenging problems, which are often multivariable and non-convex, require global optimization methods. Stochastic global optimization algorithms have shown promise in providing reliable and efficient solutions for these thermodynamic problems. In this study, we evaluate three alternative global optimization algorithms for phase and chemical equilibrium calculations, namely, the Covariance Matrix Adaptation Evolution Strategy (CMA-ES), Shuffled Complex Evolution (SCE) and the Firefly Algorithm (FA). The performance of these three stochastic algorithms was tested and compared to identify their relative strengths for phase equilibrium and phase stability problems. The phase equilibrium problems include multi-component systems both with and without chemical reactions. FA was found to be the most reliable among the three techniques, whereas CMA-ES can find the global minimum reliably and accurately even with a smaller number of iterations. © 2012 The Institution of Chemical Engineers. Published by Elsevier B.V. All rights reserved.

Keywords: Covariance Matrix Adaptation Evolution Strategy; Shuffled Complex Evolution; Phase stability analysis; Phase equilibrium calculations; Chemical equilibrium calculations

1. Introduction

The calculation of phase and chemical equilibrium is an essential component of all process simulators. The search for better methods and techniques to solve these often-difficult thermodynamic problems is still ongoing. Current methods have their own deficiencies and sometimes fail to find the correct solutions for difficult problems such as the calculation of simultaneous phase and chemical equilibrium for systems containing many components near the critical point of the mixture and the phase boundaries (Zhang et al., 2011a).
Abbreviations: CMA-ES, Covariance Matrix Adaptation Evolution Strategy; DE, Differential Evolution; DETL, Differential Evolution with Tabu List; ES, Evolution Strategy; FA, Firefly Algorithm; GA, Genetic Algorithm; GSR, Global Success Rate; IDE, Integrated Differential Evolution; IDE N, IDE without Tabu List and Radius; MFA, Modified Firefly Algorithm; MFlops, million floating-point operations; NFE, number of function evaluations; PEC, phase equilibrium calculations; PS, phase stability; PSO, Particle Swarm Optimization; PSO-CNM, PSO with Nelder-Mead Simplex Method; PSO-CQN, PSO with Quasi-Newton Method; rpec, reactive phase equilibrium calculations; RPS, Repulsive Particle Swarm; RT, Random Tunneling; SA, Simulated Annealing; SC, stopping criteria; SCE, Shuffled Complex Evolution; SQP, Sequential Quadratic Programming; SR, Success Rate; TPDF, tangent plane distance function; TS, Tabu Search; UBBPSO, Unified Bare-Bones Particle Swarm Optimization.

* Corresponding author.
E-mail addresses: sfateen@alum.mit.edu (S.-E.K. Fateen), petriciolet@hotmail.com (A. Bonilla-Petriciolet), RagaiahGP@nus.edu.sg (G.P. Rangaiah).
Received 12 November 2011; Received in revised form 11 April 2012; Accepted 24 April 2012.

Nomenclature

c  number of components
D  dimension of the problem
F, F_obj  objective function
G  Gibbs free energy
g*  global minimum
I  brightness of a firefly in FA
i, j  index of the component, or index of the firefly in FA
k  index of the problem dimension in FA
K_eq  equilibrium constant of a reaction
m  number of points in a complex in SCE
n  number of moles in the thermodynamic equations, or population size parameter in the stochastic algorithms
N  matrix of stoichiometric coefficients of a set of reference components chosen from the reactions
np  number of problems
n_ref  column vector of moles of each of the reference components
p  penalty term, or number of complexes in SCE
q  number of points in a sub-complex in SCE
r  number of reactions
R_g  universal gas constant
SC_max  maximum number of successive iterations/generations without improvement in the best objective function value
T  temperature, K
x  decision variable vector
y  mole fraction
z  mole fraction

Greek letters
α  number of consecutive offspring generated by each sub-complex in SCE, or parameter of the random term in the FA move equation
β  transformed decision variables used instead of mole fractions, or attractiveness of a firefly in FA
ε  vector of random numbers in FA
φ  fugacity coefficient of a pure component
φ̂  fugacity coefficient of a component in a mixture
γ  activity coefficient, or parameter in the attractiveness equation in FA
λ  parameter of the extra term in FA
μ  chemical potential
ν  stoichiometric coefficient of a component in a reaction
π  number of phases

Subscripts
F  feed
i  index for the components in the mixture
k  index for the iterations in FA
min  minimum value
o  initial value of the parameter
y  at composition y
z  at composition z

Superscripts
0  pure component

Actually, novel processes handle complex mixtures, severe operating conditions, or even incorporate combined unit operations (e.g., reactive distillation, extractive distillation). Therefore, wrong estimation of the thermodynamic state may have negative impacts on the design, analysis and operation of such novel processes. The prediction of the phase behavior of a mixture involves the solution of two thermodynamic problems: phase stability (PS) analysis and phase equilibrium calculations (PEC). PS problems involve determining whether a system will remain in one phase at the given conditions or split into two or more phases. This type of problem usually precedes the PEC problem, which involves determining the number, type and composition of the phases at equilibrium at the given conditions. During the analysis of a chemical process, PS and PEC problems need to be solved numerous times. Solving both these types of thermodynamic problems involves the use of global optimization. In particular, PS analysis requires the minimization of the tangent plane distance function (TPDF), while the Gibbs free energy function needs to be minimized for PEC (Srinivas and Rangaiah, 2007b). For PS problems, finding a local minimum of TPDF is not sufficient; the global minimum must be identified to determine the correct stability condition. The number and type of phases at which the Gibbs free energy function achieves the global minimum are usually unknown in PEC problems, and so several calculations may have to be performed using different phase configurations to identify the stable equilibrium state, which increases the complexity of the optimization problem.
In general, high non-linearity of thermodynamic models, non-convexity of TPDF and Gibbs free energy functions and the presence of a trivial solution in the search space make PEC and PS problems difficult to solve. Moreover, both these thermodynamic problems may have local optimal values that are very comparable to the global optimum value, which makes it challenging to find the global optimum (Bonilla-Petriciolet et al., 2011). Note that a Reactive Phase Equilibrium Calculation (rpec) or chemical equilibrium calculation is performed if any reaction is possible in the system under study, and the objective function (i.e., Gibbs free energy) must satisfy the chemical equilibrium constraints. Further, complexity and dimensionality of these calculations increase significantly. Hence, PS, PEC, and rpec problems require a reliable and efficient global optimization algorithm. Many deterministic and stochastic optimization algorithms have been proposed and tested for finding the global optimum in PS, PEC and rpec problems, particularly in the past two decades (Avami and Saboohi, 2011; Bonilla-Petriciolet et al., 2011; Bonilla-Petriciolet and Segovia-Hernández, 2010; Burgos-Solórzano et al., 2004; Jalali and Seader, 1999; Jalali et al., 2008; Rangaiah, 2001; Reynolds et al., 1997; Rossi et al., 2009; Srinivas and Rangaiah, 2007a; Wasylkiewicz and Ung, 2000). A comprehensive review of these techniques can be found in (Zhang et al., 2011a). Deterministic global optimization methods have been applied to different PEC, PS and/or rpec problems. For example, homotopy continuation methods have been applied to PEC and PS problems (Jalali et al., 2008; Sun and Seider, 1995). Although homotopy-continuation algorithm guarantees global convergence to a single solution, it does not guarantee global convergence to multiple solutions. Even using complex search spaces, the success of continuation methods in finding all solutions cannot be assured. On the other hand, interval analysis has been used to solve nonlinear equations to find all stationary points of TPDF for phase stability analysis (Tessier et al., 2000; Xu et al., 2002).

This approach has been combined with other numerical procedures for performing phase equilibrium calculations. For example, Burgos-Solórzano et al. (2004) applied the interval Newton method for solving PEC and rpec problems at high pressure. In general, the interval analysis method can solve nonlinear equations to find all solutions lying within the variable bounds. It requires an interval extension of the Jacobian matrix, and involves setting up and solving the interval Newton equation for a new interval. However, it is quite a challenge to find all solutions and the Jacobian matrix for complex systems, and the computational time is significant for multicomponent systems. Recently, Rossi et al. (2011) applied a convex analysis method to PEC and rpec problems. This method employs the CONOPT solver in the General Algebraic Modeling System (GAMS). The proposed method can solve PEC problems with high efficiency and reliability, but it requires convexity of the model. Branch and bound methods have been applied to many chemical engineering optimization problems including PS and PEC problems (Cheung et al., 2002; Harding and Floudas, 2000). In general, these methods are often slow and require a significant numerical effort that grows exponentially with problem size (Nichita et al., 2002a; Wakeham and Stateva, 2004). Besides, branch and bound methods require certain properties of the objective function, and problem reformulation is usually needed to guarantee global convergence. Note that the problem reformulation can be very difficult to perform for complex thermodynamic models such as equations of state with non-traditional mixing rules. Finally, Nichita et al. applied the tunneling method to perform stability analysis of various systems (Nichita et al., 2002b, 2008) and to PEC problems (Nichita et al., 2002a, 2004). Their results suggest that the tunneling method is a robust and efficient tool for these applications, even for difficult cases. However, it requires feasible and improved initial estimates for reliability and computational efficiency, respectively (Nichita et al., 2002a). For an unknown system, it is difficult to provide a feasible and good initial estimate for the algorithm. In summary, the deterministic methods can guarantee convergence to the global optimum only when certain properties such as continuity are satisfied or a priori information about the system is available. Also, reformulation of the problem may be needed depending on the characteristics of the thermodynamic models, and the computational time grows exponentially with problem size. In contrast, stochastic methods are quite simple to implement and use. They do not require any assumptions or transformation of the original problems, can be applied with any thermodynamic model, and yet provide high probabilistic convergence to the global optimum. They can often locate the global optimum in modest computational time compared to deterministic methods (Bonilla-Petriciolet et al., 2011). In recent years, several stochastic global optimization techniques have been applied to solve the PS and PEC problems in non-reactive and reactive systems (Bonilla-Petriciolet et al., 2006, 2011; Bonilla-Petriciolet and Segovia-Hernández, 2010; Rahman et al., 2009; Rangaiah, 2001; Reynolds et al., 1997; Srinivas and Rangaiah, 2006, 2007a,b).
These algorithms include Simulated Annealing (SA), Genetic Algorithms (GA), Tabu Search (TS), Differential Evolution (DE), the Random Tunneling method (RT), Particle Swarm Optimization (PSO), Repulsive Particle Swarm (RPS) and hybrid stochastic algorithms. In particular, Srinivas and Rangaiah (2007a, 2010) studied DE and TS for non-reactive mixtures, and proposed two versions of DE with Tabu List (DETL) in order to improve the performance of the optimization algorithm. Srinivas and Rangaiah (2006) evaluated RT on a number of medium-sized problems including vapor-liquid, liquid-liquid and vapor-liquid-liquid equilibrium problems. RT can locate the global optimum for most of the examples tested, but its reliability is low for problems having a local minimum comparable to the global minimum. Rahman et al. (2009) concluded that RPS can reliably locate the global minimum of TPDF problems. In a recent study, Bonilla-Petriciolet and Segovia-Hernández (2010) tested different versions of PSO for PS and PEC in both reactive and non-reactive systems, and their results show that classical PSO is a reliable method with good performance. Finally, hybrid stochastic global optimization methods such as bare-bones Particle Swarm Optimization and Integrated Differential Evolution with Tabu List have been successfully applied to phase equilibrium and phase stability problems (Zhang et al., 2011b). Systematic and comprehensive comparison of different global optimization methods is challenging. However, some comparisons of stochastic with deterministic algorithms for phase equilibrium calculations can be found in the literature. Teh and Rangaiah (2002, 2003) compared the performance of GA and TS with several deterministic algorithms such as the Rachford-Rice-mean value theorem-Wegstein's projection method, the accelerated successive substitution method, Nelson's method, the simultaneous equation-solving method, the linearly constrained minimization method, GLOPEQ and the enhanced interval analysis method for solving phase equilibrium calculations. Their comparison shows that some stochastic methods can be more efficient than deterministic algorithms. Most of the stochastic methods have some parameters to be tuned for different problems in order to improve convergence to the global optimum. Selection of proper parameter values for different problems usually requires considerable effort, and an improper choice can result in computational inefficiency or poor reliability. In order to overcome such difficulties, this work evaluates three global optimization algorithms that have fewer algorithm parameters, for PEC, rpec and PS problems involving multiple components, multiple phases and popular thermodynamic models. The performance of the Covariance Matrix Adaptation Evolution Strategy (CMA-ES), Shuffled Complex Evolution (SCE) and the Firefly Algorithm (FA) on PS, PEC and rpec problems is compared and discussed based on both reliability and computational efficiency using practical stopping criteria. The remainder of this paper is organized as follows. The three algorithms, CMA-ES, SCE and FA, are presented in Section 2. A description of the PEC, PS and rpec problems is given in Section 3. The implementation of the three algorithms is covered in Section 4. Section 5 presents the results and discusses the performance of CMA-ES, SCE and FA on PEC, PS and rpec problems.
Finally, the conclusions of this work are summarized in Section 6.

2. Description of stochastic global optimization techniques

In this study, the global optimization problem to be solved is:

Minimize F(X)    (1)

with respect to D decision variables, X = (X_1, X_2, ..., X_d, ..., X_D). The upper and lower bounds of these variables are X_1^max, X_2^max, ..., X_d^max, ..., X_D^max and X_1^min, X_2^min, ..., X_d^min, ..., X_D^min, respectively.

Fig. 1 Simplified algorithm of CMA-ES (adapted from Hansen, 2006).

Three different stochastic global optimization techniques, the Covariance Matrix Adaptation Evolution Strategy (CMA-ES), the Shuffled Complex Evolution (SCE) and the Firefly Algorithm (FA), were evaluated for the phase stability and equilibrium problems in this study. The first two techniques belong to the evolutionary methods category, while the third is a type of swarm intelligence technique, according to the classification presented by Rangaiah (2010). These methods were selected because no attempts have been reported in the literature to evaluate their performance in solving phase and chemical equilibrium problems, and their performance could be superior to that of other stochastic methods. Each of these methods is briefly described in the following sections. More details of these stochastic optimization methods can be found in the cited references.

Fig. 2 Simplified algorithm of SCE (adapted from Duan et al., 1992):
Step 1: Set parameters and initialize. Set parameters to their default values. An initial population of points is sampled randomly from the feasible solution space (Ω). The selected population is partitioned into one or more complexes, each containing a fixed number of points.
Step 2: Main evolution loop. Each complex evolves according to a competitive complex evolution (CCE) algorithm. The CCE algorithm employs a downhill simplex method in generating offspring. The entire population is periodically shuffled and points are reassigned to complexes to share the information from the individual complexes. Evolution and shuffling are repeated until the entire population is close to the convergence criteria, and are stopped if the convergence criteria are satisfied.
Local optimization starting from the best solution found by the global search.
Output: Solution found by the local optimizer.

2.1. The CMA-ES method

Evolution Strategy (ES) is a stochastic search algorithm in which search steps are taken by stochastic variation (mutation) of points found so far. The best of a number of new search points are selected to continue. The mutation is usually carried out by adding a normally distributed random vector. Pair-wise dependencies between the variables in this distribution are described by a covariance matrix. For most objective functions, the mutation covariance matrix needs to be adapted continually during optimization. The Covariance Matrix Adaptation Evolution Strategy (CMA-ES) was proposed to adapt the mutation covariance by gathering information about successful search steps, and using that information to modify the covariance matrix of the mutation distribution in a goal-directed, de-randomized fashion (Hansen and Ostermeier, 2001). Changes to the covariance matrix are such that variances in directions of the search space that have previously been successful are increased, while those in other directions passively decrease. The accumulation of information over a number of search steps makes it possible to adapt the covariance matrix even when using small populations. CMA-ES has found many engineering applications, e.g., feedback control of combustion (Hansen et al., 2009), calibration of scientific instruments (Wilson et al., 2008), solving design problems in hydrogeology (Bayer and Finkel, 2007) and parameter estimation (Hohm and Zitzler, 2007), among many others. A simplified algorithm, taken from Hansen (2006), is presented in Fig. 1. All the parameters needed for CMA-ES were made functions of the population size, n, and the overall standard deviation, σ (Hansen and Ostermeier, 2001). The method adapts itself and hence there is no need to fine-tune any additional parameter. In this study, we used the modification proposed by Ros and Hansen (2008) that reduces the internal time and space complexity from quadratic to linear. We also enabled the active CMA-ES option (Jastrebski and Arnold, 2006), which uses the information about unsuccessful offspring candidate solutions to actively reduce variances of the mutation distribution in unpromising directions of the search space.
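To make the ask-and-tell structure of CMA-ES concrete, the following minimal Python sketch shows one way such a search can be driven. It assumes the open-source `cma` package (pycma) rather than the MATLAB code used in this study, and the `objective` function is a hypothetical placeholder for a thermodynamic objective such as TPDF; it is an illustration of the general loop, not the authors' implementation.

```python
import numpy as np
import cma  # assumption: the pycma package is installed

def objective(beta):
    # placeholder for a thermodynamic objective (e.g., TPDF of Section 3.1)
    return float(np.sum((beta - 0.3) ** 2))

D = 5                                    # problem dimension
x0 = 0.5 * np.ones(D)                    # start at the centre of [0, 1]^D
sigma0 = 0.5                             # initial overall step size (Table 3)
opts = {"bounds": [0.0, 1.0],            # box constraints 0 <= beta_i <= 1
        "popsize": 10 * D,               # n = 10D as used in this study
        "verbose": -9}                   # suppress console output

es = cma.CMAEvolutionStrategy(x0, sigma0, opts)
while not es.stop():
    candidates = es.ask()                # sample points from N(m, sigma^2 C)
    es.tell(candidates, [objective(x) for x in candidates])  # rank; adapt m, sigma, C
best_x, best_f = es.result.xbest, es.result.fbest
```

In practice, the best point returned by the global search would then be handed to a local optimizer, as described in Section 4.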
Fig. 3 Simplified algorithm of FA (adapted from Yang, 2007):
Step 1: Set parameters and initialize.
  Generate an initial population of n fireflies x_i.
  Light intensity I_i at x_i is determined by f(x_i).
  Define the light absorption coefficient γ.
Step 2: Main evolution loop.
  while (t < MaxGeneration)
    for i = 1:n
      for j = 1:n
        if (I_i < I_j), move firefly i towards j; end if
        Vary attractiveness with distance r via exp[-γr]
        Evaluate new solutions and update light intensity
      end for j
    end for i
    Rank the fireflies and find the current best solution.
  end while
Local optimization starting from the best solution found by the global search.
Output: Solution found by the local optimizer.

Table 1 Details of PEC and PS problems studied.
1. n-Butyl acetate + water. Feed: n_F = (0.5, 0.5) at 298 K and ... kPa. Model: NRTL, parameters reported by Rangaiah (2001).
2. Toluene + water + aniline. Feed: n_F = (..., ..., ...) at 298 K and ... kPa. Model: NRTL, parameters reported by McDonald and Floudas (1995).
3. N2 + C1 + C2. Feed: n_F = (0.3, 0.1, 0.6) at 270 K and 7600 kPa. Model: SRK EoS with classical mixing rules, parameters reported by Bonilla-Petriciolet et al. (2006).
4. C1 + H2S. Feed: n_F = (0.9813, 0.0187) at 190 K and 4053 kPa. Model: SRK EoS with classical mixing rules, parameters reported by Rangaiah (2001).
5. C2 + C3 + C4 + C5 + C6. Feed: n_F = (0.401, 0.293, 0.199, ..., ...) at 390 K and 5583 kPa. Model: SRK EoS with classical mixing rules, parameters reported by Bonilla-Petriciolet et al. (2006).
6. C1 + C2 + C3 + C4 + C5 + C6 + C7-16 + C17+. Feed: n_F = (0.7212, ...) at 353 K and 38,500 kPa. Model: SRK EoS with classical mixing rules, parameters reported by Harding and Floudas (2000).
7. C1 + C2 + C3 + iC4 + C4 + iC5 + C5 + C6 + iC15. Feed: n_F = (0.614, ...) at 314 K and ... kPa. Model: SRK EoS with classical mixing rules, parameters reported by Rangaiah (2001).
8. C1 + C2 + C3 + C4 + C5 + C6 + C7 + C8 + C9 + C10. Feed: n_F = (0.6436, ...) at ... K and 19,150 kPa. Model: SRK EoS with classical mixing rules, parameters reported by Bonilla-Petriciolet et al. (2006).

Table 2 Details of rpec (chemical equilibrium) problems studied.
1. A1 + A2 ⇌ A3 + A4, where (1) ethanol, (2) acetic acid, (3) ethyl acetate, (4) water. Feed: n_F = (0.5, 0.5, 0.0, 0.0) at 355 K and ... kPa. Model: NRTL and ideal gas; K_eq = ... . References: McDonald and Floudas (1995), Bonilla-Petriciolet et al. (2011), Bonilla-Petriciolet and Segovia-Hernández (2010).
2. A1 + A2 ⇌ A3, with A4 as an inert component, where (1) isobutene, (2) methanol, (3) methyl tert-butyl ether, (4) n-butane. Feed: n_F = (0.3, 0.3, 0.0, 0.4) at ... K and ... kPa. Model: Wilson model and ideal gas; ΔG°rxn/R_g = ... + ...T + ...T ln T and ln K_eq = -ΔG°rxn/(R_g T), where T is in K. Reference: Bonilla-Petriciolet et al. (2011).
3. A1 + A2 + 2A3 ⇌ 2A4, where (1) 2-methyl-1-butene, (2) 2-methyl-2-butene, (3) methanol, (4) tert-amyl methyl ether. Feed: n_F = (0.354, 0.183, 0.463, 0.0) at 355 K and ... kPa. Model: Wilson model and ideal gas; K_eq = ... e^(.../T), where T is in K. References: Bonilla-Petriciolet et al. (2006), Bonilla-Petriciolet et al. (2011).
4. A1 + A2 ⇌ A3 + A4, where (1) acetic acid, (2) n-butanol, (3) water, (4) n-butyl acetate. Feed: n_F = (0.3, 0.4, 0.3, 0.0) at ... K and ... kPa. Model: UNIQUAC model and ideal gas; ln K_eq = 450/T. References: Wasylkiewicz and Ung (2000) and Bonilla-Petriciolet et al. (2011).
5. A1 + A2 ⇌ A3. Feed: n_F = (0.6, 0.4, 0.0). Model: Margules solution model, g^E/(R_g T) = 3.6 x1 x2 + ... x1 x3 + ... x2 x3; K_eq = ... . Reference: Bonilla-Petriciolet et al. (2006).
6. A1 + A2 + 2A3 ⇌ 2A4, with A5 as an inert component, where (1) 2-methyl-1-butene, (2) 2-methyl-2-butene, (3) methanol, (4) tert-amyl methyl ether, (5) n-pentane. Feed: n_F = (0.1, 0.15, 0.7, 0.0, 0.05) at 335 K and ... kPa. Model: Wilson model and ideal gas; K_eq = 1.057 x 10^-4 e^(.../T), where T is in K. Reference: Bonilla-Petriciolet et al. (2006).
7. A1 + A2 ⇌ A3. Feed: n_F = (0.52, 0.48, 0.0) at ... K and ... kPa. Model: Margules solution model; K_eq = 3.5. References: Wasylkiewicz and Ung (2000) and Bonilla-Petriciolet et al. (2011).
8. A1 + A2 ⇌ A3 + A4. Feed: n_F = (0.048, 0.5, 0.452, 0.0) at 360 K and ... kPa. Model: NRTL model; K_eq = ... . References: Wasylkiewicz and Ung (2000) and Bonilla-Petriciolet et al. (2011).

Table 3 Selected values of the parameters used in the implementation of CMA-ES, SCE and FA.
CMA-ES: σ = 0.5; n = 10D.
SCE: p = n/(2D + 1); m = 2D + 1; q = D + 1; α = 30; n = 10D.
FA: α_o = 0.5; β_min = 0.2; n = 10D.

2.2. The SCE method

SCE is another population-based stochastic global optimization algorithm, proposed by Duan et al. (1992). The SCE algorithm and its variants have been used in many engineering applications such as the assessment and optimization of hydrologic models (Vrugt et al., 2003a,b). However, to the best of our knowledge, this method has not been applied for performing thermodynamic calculations. The SCE algorithm begins with a population of points sampled randomly from the feasible space. The population is partitioned into several communities (complexes), each of which evolves based on a statistical reproduction process that uses the simplex geometric shape to direct the search in an improvement direction. At periodic stages in the evolution, the entire population is shuffled and points are reassigned to communities to ensure information sharing. As the search progresses, the entire population converges toward the neighborhood of the global optimum. The SCE parameters include the number of points in a complex (m), the number of points in a sub-complex (q), the number of complexes (p), and the number of consecutive offspring generated by each sub-complex (α). The basic algorithm of the SCE method is presented in Fig. 2.

2.3. The Firefly Algorithm

FA is a nature-inspired meta-heuristic stochastic global optimization method that was developed by Yang (2007). It is a relatively new method that has gained popularity in finding the global minimum of diverse applications. It was rigorously evaluated by Gandomi et al. (2011), and has recently been used to solve the flow shop scheduling problem (Sayadi et al., 2010) and financial portfolio optimization problems (Giannakouris et al., 2010).

Fig. 4 Global Success Rate, GSR, versus iterations for PS problems using CMA-ES, SCE and FA with SC-1: (a) stochastic method only and (b) stochastic method combined with local optimization.
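Returning to the SCE method of Section 2.2, its defining bookkeeping is the ranking of the population, the round-robin assignment of points to complexes and the periodic re-shuffling. The Python sketch below illustrates only that bookkeeping; the competitive complex evolution step itself is omitted, and the function names and toy objective are illustrative rather than taken from the SCE code used in this study.

```python
import numpy as np

def partition_into_complexes(points, values, p):
    """Rank points by objective value and deal them into p complexes:
    the k-th ranked point goes to complex (k mod p), as in SCE."""
    order = np.argsort(values)                    # best (lowest) objective first
    ranked = points[order]
    return [ranked[k::p] for k in range(p)]       # round-robin assignment

def shuffle_complexes(complexes, objective):
    """Merge the evolved complexes into one population, re-evaluate and
    re-partition, so information is shared between complexes."""
    merged = np.vstack(complexes)
    values = np.array([objective(x) for x in merged])
    return partition_into_complexes(merged, values, len(complexes))

# Illustrative use with a toy objective on [0, 1]^D
rng = np.random.default_rng(0)
D, n, p = 4, 40, 4                                # n = 10D points, p complexes
pop = rng.random((n, D))
obj = lambda x: float(np.sum((x - 0.3) ** 2))
complexes = partition_into_complexes(pop, np.array([obj(x) for x in pop]), p)
# ... each complex would evolve here by competitive complex evolution (CCE) ...
complexes = shuffle_complexes(complexes, obj)
```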

Table 4 Success rate (SR) and number of function evaluations (NFE) of CMA-ES, SCE and FA for PS problems using SC_max with NP of 10D.

The FA algorithm imitates the mechanism of firefly communication via luminescent flashes. In the FA algorithm, the two important issues are the variation of light intensity and the formulation of attractiveness. The brightness of a firefly is determined by the landscape of the objective function. Attractiveness is proportional to brightness and, thus, for any two flashing fireflies, the less bright one moves toward the brighter one. A simplified algorithm for the FA technique is presented in Fig. 3. In this algorithm, the attractiveness of a firefly is determined by its brightness, which is equal to the objective function.

Table 5 Success rate (SR) and number of function evaluations (NFE) of CMA-ES, SCE and FA for PEC problems using SC_max with NP of 10D.

Fig. 5 Global Success Rate, GSR (plot a) and average NFE (plot b) of CMA-ES, SCE and FA for PS problems using SC-2 (SC_max = 10, SC_max = 25 and SC_max = 50) and SC-1 (1500 iterations).

The brightness of a firefly at a particular location x was chosen as I(x) = f(x). The attractiveness is judged by the other fireflies; thus, it was made to vary with the distance between firefly i and firefly j and with the degree of absorption of light in the medium between the two fireflies. The attractiveness is given by

β = β_min + (β_o − β_min) e^(−γ r²)    (2)

The distance between any two fireflies i and j at x_i and x_j is the Cartesian distance:

r_ij = ||x_i − x_j|| = sqrt( Σ_{k=1}^{d} (x_{i,k} − x_{j,k})² )    (3)

The movement of a firefly i attracted to another, more attractive (brighter) firefly j is determined by

x_i = x_i + β (x_j − x_i) + α ε_i    (4)

The second term is due to the attraction, while the third term, α ε_i, involves a vector of random numbers ε_i drawn from a uniform distribution in the range [−0.5, 0.5]. In our implementation, we used the value of 1 for β_o and 0.2 for β_min, and α was made to decrease with the iteration number, k, in order to reduce the randomness, according to the following formula:

α_k = θ α_{k−1}    (5)

where θ < 1 is a constant reduction factor. Thus the randomness is decreased gradually as the optima are approached. This formula was adapted from Yang (2011).
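The firefly update of Eqs. (2)-(5) can be written compactly. The Python sketch below uses β_o = 1 and β_min = 0.2 as stated above, while the light-absorption coefficient γ, the reduction factor of 0.97 for α and the clipping of moves to [0, 1] are illustrative assumptions, not values taken from this study.

```python
import numpy as np

rng = np.random.default_rng(1)

def move_firefly(x_i, x_j, alpha, beta0=1.0, beta_min=0.2, gamma=1.0):
    """Move firefly i towards a brighter firefly j, Eqs. (2)-(4).
    gamma is an assumed illustrative value."""
    r2 = float(np.sum((x_i - x_j) ** 2))                      # squared distance, Eq. (3)
    beta = beta_min + (beta0 - beta_min) * np.exp(-gamma * r2)  # attractiveness, Eq. (2)
    eps = rng.uniform(-0.5, 0.5, size=x_i.shape)              # random vector in [-0.5, 0.5]
    return x_i + beta * (x_j - x_i) + alpha * eps             # Eq. (4)

def one_generation(pop, values, objective, alpha, decay=0.97):
    """One sweep of pairwise comparisons; alpha is reduced each generation
    following Eq. (5) (the decay factor here is assumed)."""
    n = len(pop)
    for i in range(n):
        for j in range(n):
            if values[j] < values[i]:           # firefly j is brighter (lower objective)
                pop[i] = np.clip(move_firefly(pop[i], pop[j], alpha), 0.0, 1.0)
                values[i] = objective(pop[i])   # update light intensity
    return pop, values, alpha * decay

# illustrative use on a toy objective over [0, 1]^2
obj = lambda x: float(np.sum((x - 0.3) ** 2))
pop = rng.random((15, 2))
vals = np.array([obj(x) for x in pop])
alpha = 0.5
for _ in range(100):
    pop, vals, alpha = one_generation(pop, vals, obj, alpha)
```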

Fig. 6 Global Success Rate, GSR, versus iterations for PEC problems using CMA-ES, SCE and FA with SC-1: (a) stochastic method only and (b) stochastic method combined with local optimization.

3. Formulation of PS, PEC and rpec problems

A brief description of the global optimization problems, including the objective function, decision variables and constraints, for PEC, PS and rpec problems is given in the following sections.

3.1. Phase stability problems

Solving the PS problem is usually the starting point for phase equilibrium calculations. The theory used to solve this problem states that a phase is stable if the tangent plane generated at the feed (or initial) composition lies below the molar Gibbs energy surface for all compositions. One common implementation of the tangent plane criterion (Harding and Floudas, 2000; Michelsen, 1982; Sun and Seider, 1995) is to minimize the tangent plane distance function (TPDF), defined as the vertical distance between the molar Gibbs energy surface and the tangent plane at the given phase composition. Specifically, TPDF is given by

TPDF = Σ_{i=1}^{c} y_i (μ_iy − μ_iz)    (6)

where μ_iy and μ_iz are the chemical potentials of component i calculated at compositions y and z, respectively. For stability analysis of a phase/mixture of composition z, TPDF must be globally minimized with respect to the composition of a trial phase y. If the global minimum value of TPDF is negative, the phase is not stable at the given conditions, and phase split calculations are necessary to identify the compositions of each phase. The decision variables for minimizing TPDF in phase stability problems are the mole fractions y_i for i = 1, 2, ..., c, each in the range [0, 1], and the constraint is that the summation of these mole fractions is equal to 1. The constrained global optimization of TPDF can be transformed into an unconstrained problem by using decision variables β_i instead of y_i, as follows:

n_iy = β_i z_i n_F,  i = 1, ..., c    (7)

and

y_i = n_iy / Σ_{j=1}^{c} n_jy,  i = 1, ..., c    (8)

where n_F is the total moles in the feed mixture used for stability analysis, and n_iy are the conventional mole numbers of component i in trial phase y. The number of decision variables is still c for the unconstrained minimization of TPDF. Thus, the unconstrained global optimization problem for phase stability analysis becomes:

min TPDF(β_i),  0 ≤ β_i ≤ 1,  i = 1, ..., c

The calculation of TPDF is straightforward with almost any thermodynamic model because

(μ_i − μ_i^0)/(R_g T) = ln(x_i φ̂_i / φ_i)    (9)
                      = ln(x_i γ_i)    (10)

where R_g is the universal gas constant, μ_i is the chemical potential of component i in the mixture, and μ_i^0 is the chemical potential of pure component i. More details on the PS problem formulation can be found in Rangaiah (2001). The characteristics of the PS problems used in this study are summarized in Table 1.
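As an illustration of Eqs. (6)-(10), the Python sketch below maps the transformed variables β to a trial composition y and evaluates the dimensionless TPDF (i.e., TPDF divided by R_g T) for an activity-coefficient model. The `ln_gamma` routine is a hypothetical stand-in for whichever model applies (NRTL, UNIQUAC, etc.), and the ideal-solution example is purely for demonstration.

```python
import numpy as np

def trial_composition(beta, z, n_F=1.0):
    """Map unconstrained decision variables beta in [0, 1]^c to trial-phase
    mole fractions y, Eqs. (7)-(8)."""
    n_y = np.clip(beta, 1e-12, 1.0) * z * n_F   # n_iy = beta_i z_i n_F (floored to avoid log(0))
    return n_y / n_y.sum()                      # y_i = n_iy / sum_j n_jy

def tpdf(beta, z, ln_gamma):
    """Dimensionless TPDF of Eq. (6), using Eq. (10) so that the pure-component
    chemical potentials cancel between compositions y and z.
    `ln_gamma(x)` is a hypothetical function returning ln(gamma_i) at x."""
    y = trial_composition(beta, z)
    dmu = (np.log(y) + ln_gamma(y)) - (np.log(z) + ln_gamma(z))
    return float(np.sum(y * dmu))

# Illustrative use with an ideal solution (all activity coefficients equal to 1)
z = np.array([0.5, 0.5])
ideal = lambda x: np.zeros_like(x)
print(tpdf(np.array([0.2, 0.8]), z, ideal))
```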

Fig. 7 Global Success Rate, GSR (plot a) and average NFE (plot b) of CMA-ES, SCE and FA for PEC problems using SC-2 (SC_max = 10, SC_max = 25 and SC_max = 50) and SC-1 (1500 iterations).

3.2. Phase equilibrium calculation problems

A mixture of substances at a given temperature T, pressure P and total molar amount may separate into two or more phases. The composition of each substance is the same throughout a phase but may differ significantly between phases at equilibrium. If there is no reaction between the different substances, then it is a phase equilibrium problem. There are mainly two approaches for PEC: the equation-solving approach and the Gibbs free energy minimization approach. The former involves solving a set of non-linear equations arising from mass balances and equilibrium relationships. The latter involves the minimization of the Gibbs free energy function. Although the first approach seems to be faster and simpler, the solution obtained may not correspond to the global minimum of the Gibbs free energy function. Moreover, it needs a priori knowledge of the phases existing at equilibrium (Rangaiah, 2001). Classical thermodynamics indicates that minimization of Gibbs free energy is a natural approach for calculating the equilibrium state of a mixture. Hence, this study uses Gibbs free energy minimization for PEC, which has been used to determine phase compositions at equilibrium in several works (McDonald and Floudas, 1995; Rangaiah, 2001; Reynolds et al., 1997; Teh and Rangaiah, 2003). The mathematical formulation involves the minimization of Gibbs free energy subject to mass balance equality constraints and bounds that limit the range of the decision variables. In a non-reactive system with c components and π phases, the objective function for PEC is

g = Σ_{j=1}^{π} Σ_{i=1}^{c} n_ij ln(x_ij γ_ij) = Σ_{j=1}^{π} Σ_{i=1}^{c} n_ij ln(x_ij φ̂_ij / φ_i)    (11)

where n_ij, x_ij, γ_ij, φ̂_ij and φ_i are the moles, mole fraction, activity coefficient and fugacity coefficient of component i in phase j, and the fugacity coefficient of the pure component, respectively. Eq. (11) must be minimized with respect to n_ij, taking into account the following mass balance constraints:

Σ_{j=1}^{π} n_ij = z_i n_F,  i = 1, ..., c    (12)

0 ≤ n_ij ≤ z_i n_F,  i = 1, ..., c;  j = 1, ..., π    (13)

where z_i is the mole fraction of component i in the feed and n_F is the total moles in the feed.
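For illustration, a minimal Python evaluation of the objective in Eq. (11) for an activity-coefficient model is sketched below; an equation-of-state formulation would replace ln γ_ij with ln(φ̂_ij/φ_i). The `ln_gamma` argument is again a hypothetical model routine, not code from this study.

```python
import numpy as np

def gibbs_objective(n, ln_gamma):
    """Dimensionless Gibbs free energy of Eq. (11):
    g = sum_j sum_i n_ij * ln(x_ij * gamma_ij).
    `n` is a (phases x components) array of mole numbers and `ln_gamma(x)`
    returns ln(gamma_i) at the mole fractions x of one phase."""
    g = 0.0
    for n_phase in n:                    # loop over phases j
        n_tot = n_phase.sum()
        if n_tot <= 0.0:                 # an empty phase contributes nothing
            continue
        x = np.clip(n_phase / n_tot, 1e-12, 1.0)   # mole fractions, floored to avoid log(0)
        g += float(np.sum(n_phase * (np.log(x) + ln_gamma(x))))
    return g

# two-phase split of an equimolar binary feed with an ideal-solution model
ideal = lambda x: np.zeros_like(x)
print(gibbs_objective(np.array([[0.2, 0.4], [0.3, 0.1]]), ideal))
```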

Fig. 8 Global Success Rate, GSR, versus iterations for rpec problems using CMA-ES, SCE and FA with SC-1: (a) stochastic method only and (b) stochastic method combined with local optimization.

To perform unconstrained minimization of the Gibbs energy function, one can use new variables instead of n_ij as decision variables. The introduction of the new variables eliminates the restrictions imposed by the material balances, reduces the problem dimensionality, and transforms the optimization problem into an unconstrained one. For multi-phase non-reactive systems, new variables β_ij ∈ (0, 1) are defined and employed as decision variables using the following expressions:

n_i1 = β_i1 z_i n_F,  i = 1, ..., c    (14)

n_ij = β_ij (z_i n_F − Σ_{m=1}^{j−1} n_im),  i = 1, ..., c;  j = 2, ..., π − 1    (15)

n_iπ = z_i n_F − Σ_{m=1}^{π−1} n_im,  i = 1, ..., c    (16)

Using this formulation, all trial compositions satisfy the mass balances, allowing the easy application of optimization strategies (Bonilla-Petriciolet and Segovia-Hernández, 2010; Srinivas and Rangaiah, 2007a). For Gibbs energy minimization, the number of decision variables β_ij is c(π − 1) for non-reactive systems. The details of the PEC problems used in this study are also in Table 1. In most of the reported studies, the PEC problems tested assumed that the number and type of phases are known; such problems are also known as phase split calculations. In this study too, the same assumption is made, and the problems tested are simply referred to as PEC problems.
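The recursion of Eqs. (14)-(16) translates directly into code. The Python sketch below builds the phase mole numbers from the β variables for an arbitrary number of phases; the two-phase example at the end is purely illustrative, and the result could be combined with a Gibbs-energy routine such as the sketch after Eq. (13) to form the unconstrained objective.

```python
import numpy as np

def moles_from_beta(beta, z, n_F, n_phases):
    """Map decision variables beta_ij in (0, 1) to phase mole numbers n_ij
    that satisfy the material balances, Eqs. (14)-(16).
    beta has shape (n_phases - 1, c); returns n with shape (n_phases, c)."""
    remaining = z * n_F                     # moles of each component not yet assigned
    n = np.zeros((n_phases, z.size))
    for j in range(n_phases - 1):
        n[j] = beta[j] * remaining          # Eqs. (14)-(15)
        remaining = remaining - n[j]
    n[-1] = remaining                       # the last phase takes the rest, Eq. (16)
    return n

# Illustrative two-phase split of an equimolar binary feed
z = np.array([0.5, 0.5])
beta = np.array([[0.3, 0.7]])               # c*(pi - 1) = 2 decision variables
n = moles_from_beta(beta, z, n_F=1.0, n_phases=2)
```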

Table 6 Success rate (SR) and number of function evaluations (NFE) of CMA-ES, SCE and FA for rpec problems using SC_max with NP of 10D.

3.3. Reactive Phase Equilibrium Calculation problems

In rpec problems, also known as chemical equilibrium problems, reactions increase the complexity and dimensionality of phase equilibrium problems, and so phase split calculations in reactive systems are more challenging due to non-linear interactions among phases and reactions. The phase distribution and composition at equilibrium of a reactive mixture are determined by the global minimization of the Gibbs free energy with respect to the mole numbers of the components in each phase, subject to element/mass balances and chemical equilibrium constraints (Burgos-Solórzano et al., 2004; Seider and Widagdo, 1996). The expressions for the Gibbs free energy and its mathematical properties depend on the structure of the thermodynamic equation(s) chosen to model each of the phases that may exist at equilibrium.

Recently, Bonilla-Petriciolet et al. (2011) concluded that the constrained Gibbs free energy minimization approach has the advantage of requiring a smaller computing time compared to the unconstrained approach, and is straightforward and suitable for chemical equilibrium calculations. In summary, for a system with c components and π phases subject to r independent chemical reactions, the objective function for rpec is

F_obj = g − Σ_{j=1}^{π} ln K_eq N⁻¹ n_ref,j    (17)

where g is given by Eq. (11), ln K_eq is a row vector of logarithms of the chemical equilibrium constants for the r independent reactions, N is an invertible, square matrix formed from the stoichiometric coefficients of a set of reference components chosen from the r reactions, and n_ref,j is a column vector of the moles of each of the reference components in phase j. This objective function is defined using reaction equilibrium constants, and it must be globally minimized subject to the following mass balance restrictions (Bonilla-Petriciolet et al., 2011):

Σ_{j=1}^{π} (n_ij − ν_i N⁻¹ n_ref,j) = n_iF − ν_i N⁻¹ n_ref,F,  i = 1, ..., c − r    (18)

where n_iF is the initial moles of component i in the feed. These mass balance equations can be rearranged to reduce the number of decision variables in the optimization problem and to eliminate the equality constraints, which are usually challenging for stochastic optimization methods. Thus, Eq. (18) is rearranged using the following expression:

n_iπ = n_iF − ν_i N⁻¹ (n_ref,F − n_ref,π) − Σ_{j=1}^{π−1} (n_ij − ν_i N⁻¹ n_ref,j),  i = 1, ..., c − r    (19)

Using Eq. (19), the decision variables for rpec are c(π − 1) + r mole numbers (n_ij). The global optimization problem can then be solved by minimizing Eq. (17) with these decision variables, while the remaining c − r mole numbers (n_iπ) are determined from Eq. (19), subject to the inequality constraints n_iπ > 0. In constrained optimization problems, the search space consists of both feasible and infeasible points. For rpec, feasible points satisfy all the mass balance constraints, Eq. (18), while infeasible points violate at least one of them (i.e., n_iπ < 0 for some i = 1, ..., c − r). The penalty function method is used to solve the constrained Gibbs free energy minimization in reactive systems because it is easy to implement and is considered efficient for handling constraints in stochastic methods (Bonilla-Petriciolet et al., 2011). For handling these constraints, the absolute value of a constraint violation is multiplied by a high penalty weight and then added to the objective function. In the case of more than one constraint violation, all constraint violations are first multiplied by the penalty weight, and all of them are added to the objective function. Specifically, the penalty function is given by

F_r = F_obj         if n_ij > 0 for i = 1, ..., c and j = 1, ..., π
F_r = F_obj + p     otherwise    (20)

where p is the penalty term, whose value is positive.

Fig. 9 Global Success Rate, GSR (plot a) and average NFE (plot b) of CMA-ES, SCE and FA for rpec problems using SC-2 (SC_max = 6D, SC_max = 12D and SC_max = 24D) and SC-1 (1500 iterations).

In the case of infeasible solutions (i.e., n_iπ < 0), the Gibbs free energy function of phase π cannot be determined due to the logarithmic terms of the activity or fugacity coefficients. So, the penalty function used for handling infeasible solutions in rpec is given by Bonilla-Petriciolet et al. (2011) as

p = 10 Σ_{i=1}^{n_unf} |n_iπ|    (21)

where n_iπ is obtained from Eq. (19) and n_unf is the number of infeasible mole numbers (i.e., n_iπ < 0 with i = 1, ..., c − r). In this study, the resulting constrained Gibbs free energy minimization for a reactive system is solved using the CMA-ES, SCE and FA algorithms. The details of the rpec problems are shown in Table 2.
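A compact sketch of this penalty treatment is given below. The weight of 10 mirrors Eq. (21) as reproduced above, `f_obj` is a hypothetical callable returning F_obj of Eq. (17), and returning only the penalty on infeasible points (rather than F_obj + p) reflects the fact, noted above, that the Gibbs function cannot be evaluated there; all of these choices are illustrative simplifications rather than the authors' exact implementation.

```python
import numpy as np

def reactive_objective(n_dep, f_obj):
    """Penalty handling of Eqs. (20)-(21). `n_dep` holds the dependent mole
    numbers from Eq. (19); `f_obj()` is only evaluated when all of them are
    positive, i.e., when the trial point is feasible."""
    violations = -n_dep[n_dep < 0.0]          # absolute constraint violations
    if violations.size == 0:
        return f_obj()                        # feasible branch of Eq. (20)
    return 10.0 * float(violations.sum())     # infeasible: penalty p of Eq. (21)

print(reactive_objective(np.array([0.2, 0.1]), lambda: -1.5))    # feasible point
print(reactive_objective(np.array([0.2, -0.05]), lambda: -1.5))  # penalised point
```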

4. Implementation of the methods

In this study, all the optimization algorithms and thermodynamic models were coded in MATLAB. The CMA-ES, SCE and FA codes were obtained from Hansen (2011), Donckels (2011) and Yang (2007), respectively. All the codes were modified to allow for the two stopping criteria used in this study, as discussed below. The parameters of the algorithms were tuned to give sufficiently good results for one problem from each of the three categories, and were then fixed for all the problems tested in order to compare the robustness of the algorithms; see Table 3 for the parameter values used. Further, n = 10D for the three methods. Altogether, we studied 24 problems consisting of 8 PEC, 8 PS and 8 rpec problems, whose details can be found in Tables 1 and 2. All these problems are multimodal, with the number of decision variables ranging from 2 to 10. Each problem was solved 100 times independently with a different random number seed for robust performance analysis. The performances of the stochastic algorithms were compared based on the Success Rate (SR) and the average number of function evaluations (for both global and local searches) over all 100 runs (NFE), for two stopping criteria: SC-1, based on the maximum number of iterations, and SC-2, based on the maximum number of iterations without improvement in the best objective function value (SC_max). SC-2 is an improvement-based stopping criterion. Note that NFE is a good indicator of computational efficiency since function evaluation involves extensive computations in application problems. Further, it is independent of the computer and software platform used, and so it is useful for comparison by researchers. SR is the number of times the algorithm located the global optimum to the specified accuracy, out of 100 runs. A run/trial is considered successful if the best objective function value obtained after the local optimization is within 1.0E-5 of the known global optimum. Also, the Global Success Rate (GSR) of the different algorithms is reported for all the problems. It is defined as

GSR = ( Σ_{i=1}^{np} SR_i ) / np    (22)

where np is the number of problems and SR_i is the individual success rate for each problem. At the end of each run by each stochastic algorithm, a local optimizer was used to continue the search to find the global optimum precisely and efficiently. This was also done at the end of different iteration levels for performance analysis; however, the global search in the subsequent iterations is not affected by this. Since all algorithms were implemented in MATLAB, Sequential Quadratic Programming (SQP) was chosen as the local optimizer. The best solution at the end of the stochastic algorithm was used as the initial guess for SQP, which is likely to locate the global optimum if the initial guess is in the global optimum region. In the small number of cases where the local optimizer diverged to a larger value of the objective function, the output of the stochastic algorithm was retained. All computations were performed on a 64-bit HP Pavilion dv6 notebook computer with an Intel Core i7-2630QM processor, 2.00 GHz and 4 GB of RAM, which can complete 1344 MFlops (million floating-point operations) for the LINPACK benchmark program that uses the MATLAB backslash operator to solve for a matrix of order 500 (Burkardt, 2008).

Fig. 10 Global Success Rate, GSR, of CMA-ES, SCE, FA and IDE for all problems using the stochastic method combined with local optimization.

5. Results and discussion

5.1. Performance of algorithms on PS problems

On the PS problems, similar tests using the three stochastic algorithms were performed. The results were collected at different iteration levels, starting from the 10-iteration level, after local optimization at each of these iteration levels. As expected, the GSR of CMA-ES, SCE and FA for all PS problems using SC-1 improves with an increasing number of iterations (Fig. 4a). The highest GSR was 68.9%, obtained by the FA algorithm without the local optimization. The selected PS problems were somewhat difficult to optimize, which is reflected in the relatively low GSR without using the local optimizer. At 10 and 25 iterations, SCE obtained the best GSR, but from 50 to 750 iterations, CMA-ES obtained the best GSR. At the termination of the iterations, FA obtained the highest GSR. Despite relatively faster convergence, SCE has a tendency to be trapped at a local optimum. The reliability of CMA-ES and SCE did not improve much beyond the 100th iteration. However, the reliability of FA kept improving until the end. Fig. 4(b) shows the improvement in GSR with the use of a local optimization technique at the end of the stochastic techniques. The best GSR, obtained by CMA-ES, increased from 65.9% without local optimization to 91.9%. The performance of CMA-ES showed slightly more reliability compared to that of FA. However, the GSR of SCE was about 20% less. Another interesting observation from Fig. 4(b) is that the GSR of SCE using local optimization decreased with iterations. Since PS problems are particularly difficult, local optimization techniques may diverge even with an improved initial point obtained by further iterations of the stochastic method. This result was peculiar to the SCE method only, which sheds additional negative light on its use with SQP for the solution of phase stability problems.
In stochastic global optimization, it is necessary to use a suitable stopping criterion for the optimization algorithm to stop at the right time incurring least computational resources without compromising reliability of finding the global optimum. Results on the effect of stopping criterion, SC-2 with SC max = 10, 25 and 50 on CMA-ES, SCE and FA for all PS problems are presented in Table 4; GSR and NFE reported in this table are for stochastic followed by local optimization. They show that, in general, reliability of the algorithm and NFE increase with increasing SC max. For PS problem 6, CMA-ES obtained the best reliability. For PS problem 2, SCE obtained the best reliability. For PS problems 1, 5, 7 and 8, FA obtained the best reliability. The three techniques obtained 100% reliability for problem 3, while only SCE and FA obtained 100% reliability for problem 4. However, reliability obtained using SC-1 is always higher than that obtained by SC-2, which is shown in Fig. 5(a) that summarizes GSR of the three techniques with the three SC-2 stopping criteria compared with SC-1 for all PS problems. Fig. 5(b) shows the NFE of the three techniques with the four stopping criteria for all PS problems. For SCE, use of SC-2 with maximum 50 iterations gave similar GSR results to using SC-1, yet it required 8 times less NFE. Therefore, it is recommended to use SC-2 with SCE. For CMA-ES and FA,

use of SC-1 gave slightly more reliable performance but at the expense of much higher NFE, as shown in Fig. 5(b). Thus, the use of SC-2 for both CMA-ES and FA may still be desirable because of the considerable savings in computational needs, despite the small loss in reliability. Problems 6, 7 and 8 were identified as challenging since at least one of the methods failed to achieve 50% GSR even with the subsequent local optimization. These problems were solved again with a higher population size, and those results will be discussed later. In general, stochastic optimization methods provide only a probabilistic guarantee of locating the global optimum, and their proofs of numerical convergence usually state that the global optimum will be identified in infinite time with probability 1 (Niewierowicz et al., 2003; Rudolph, 1994; Weise, 2008). So, better performance of the stochastic methods is expected if more iterations and/or a larger population size are used.

Fig. 11 Global Success Rate, GSR (plot a) and average NFE (plot b) of CMA-ES, SCE, FA and IDE for all problems using SC-2 (SC_max = 10, SC_max = 25 and SC_max = 50, except for rpec problems: SC_max = 6D, SC_max = 12D and SC_max = 24D) and SC-1 (1500 iterations).

5.2. Performance of algorithms on PEC problems

GSR values for all PEC problems by the CMA-ES, SCE and FA algorithms with n of 10D using SC-1 are illustrated in Fig. 6. As expected, GSR improves with an increasing number of iterations, particularly at lower iteration levels. After 250 iterations, GSR does not improve significantly for either CMA-ES or SCE. However, GSR kept improving for FA. In general, subsequent iterations without improvement in the results are a waste of computational resources. For example, for stochastic optimization only, the GSR of CMA-ES is 88% at 50 iterations; it increases to 90.75% at 250 iterations and stays the same until 1500 iterations. The results in Fig. 6(a) show that CMA-ES has higher reliability at low NFE compared to SCE and FA for PEC problems when global stochastic optimization only is used. When the performance of the three methods with local optimization at the end of the global search is compared, FA gives the highest reliability, with GSR close to 100% at 1500 iterations. The effect of the stopping criterion SC-2 on the CMA-ES, SCE and FA algorithms has also been studied for PEC problems. Table 5 summarizes the SR and NFE obtained by these algorithms with SC_max = 10, 25 and 50, along with a maximum allowable 1500 iterations (to avoid indefinite looping), all using n = 10D. For PEC problems 3, 6 and 7, the three algorithms obtained 100% reliability. CMA-ES obtained the best reliability for problem 2, while FA obtained the best reliability for problem 4. Both these methods obtained 100% reliability for problem 8. CMA-ES and SCE obtained the best reliability for problems 1 and 5. Fig. 7 summarizes the GSR and NFE of the CMA-ES, SCE and FA algorithms with the four stopping criteria. We reach the same conclusion of higher reliability with higher SC_max. It can be observed in Fig. 7(a) that the use of SC-2 gives similar GSR compared to SC-1 for CMA-ES and SCE but lower GSR for FA. The NFE values in Fig. 7(b) show that SCE uses the most NFE to terminate the global search by SC-2 compared to CMA-ES and FA. In general, SC-2 requires significantly fewer NFE compared to SC-1, which confirms the need for a good termination criterion.

Especially with SC_max = 50, the SR obtained by the algorithms is comparable to that obtained with SC-1 but uses much fewer NFE (Fig. 7). CMA-ES achieved better reliability with fewer NFE when SC-2 was used. When SC-1 was used, FA achieved better reliability with fewer NFE compared to the other two methods.

5.3. Performance of algorithms on rpec problems

The GSR of the CMA-ES, SCE and FA algorithms for all rpec problems using SC-1 is illustrated in Fig. 8(a), when global stochastic optimization was used without the subsequent use of local optimization. GSR generally improves with an increasing number of iterations for these problems as well. The highest GSR is 88.5%, obtained by FA. At 50 iterations, CMA-ES obtained the best GSR, but from 250 to 1500 iterations its GSR did not improve. On the other hand, FA obtained better GSR at higher iterations. The GSR of SCE was 74% at 250 iterations and remained almost constant until 1500 iterations. The GSR of FA remained very low until 750 iterations, when it climbed steadily until it reached the highest GSR of 88.5% at 1500 iterations. In short, when comparing the stochastic optimizations without the use of subsequent local optimization, FA is more reliable than SCE, which itself is more reliable than CMA-ES. When the results of the stochastic global optimization after 1500 iterations followed by local optimization are compared (Fig. 8b), FA comes out on top in terms of reliability, with a GSR of 96.5% compared to 82.6% for CMA-ES and 76.5% for SCE. The performance of CMA-ES and SCE was very good since both reached very close to their final GSR after only 50 iterations, although no significant improvement was obtained in subsequent iterations. In short, FA is the most reliable whereas CMA-ES is the most effective at finding the global optimum in a small number of iterations. The results obtained on the effect of stopping criteria on the three algorithms using SC-2 with SC_max = 6D, 12D and 24D for the rpec problems are summarized in Table 6. Note that the SC_max values used for each rpec problem were those used by Bonilla-Petriciolet et al. (2011) so that their results and the present results can be compared. Table 6 shows that the reliability of the algorithms increases with SC_max but requires more NFE. For rpec-1, 2, 3 and 8, the CMA-ES, SCE and FA algorithms obtained 100% SR. For rpec problem 4, FA obtained the best reliability, followed by SCE and CMA-ES. This is considered one of the difficult problems and will be discussed separately later. CMA-ES obtained the best reliability for problems 5, 6 and 7, with FA close behind. Among the three algorithms, the NFE required by FA is much lower than that of CMA-ES and SCE. As shown in Table 6, the total NFE required by FA for all rpec problems is 50,317 compared to 169,975 for CMA-ES and 153,949 for SCE. The average GSR is 74.4% for FA, 80.7% for CMA-ES and 76.0% for SCE. Thus, these results again show that SCE is less reliable than CMA-ES and FA. Fig. 9 shows the GSR and NFE of CMA-ES, SCE and FA with the different stopping criteria for the rpec problems. Again, we conclude that the higher the SC_max, the better the reliability of the algorithm, except for FA, and that the use of SC-2 gives similar GSR compared to SC-1 (Fig. 9a), except for FA. The use of SC-2 for CMA-ES and SCE will bring about similar reliability compared to the use of SC-1 but with much higher efficiency.
This is not the same conclusion that can be drawn for FA, as SC-1 gave the highest reliability (96.5%), as shown in Fig. 9(a).

5.4. Comparison with the reported performance of other stochastic methods

Recently, Zhang et al. (2011b) reported the performance of Unified Bare-Bones Particle Swarm Optimization (UBBPSO), Integrated Differential Evolution (IDE) and IDE without Tabu List and radius (IDE N). They also analyzed the performance of UBBPSO, IDE and IDE N, and compared them with other published results such as classical PSO with the Quasi-Newton Method (PSO-CQN), classical PSO with the Nelder-Mead Simplex Method (PSO-CNM), Simulated Annealing (SA), the Genetic Algorithm (GA), and Differential Evolution with Tabu List (DETL). All these stochastic algorithms were run 100 times independently, and at the end of every run a deterministic local optimizer was activated. Zhang et al. (2011b) reported that IDE gave better performance across the entire spectrum of problems. Hence, it is sufficient to compare the performance of CMA-ES and FA with IDE for the three categories of problems, with the different stopping criteria. Fig. 10 shows the average GSR of CMA-ES, SCE and FA for the 24 problems as compared with the average GSR of IDE, at different iterations. CMA-ES shows the best convergence rate, as its average GSR reaches about 87.9% after only 50 iterations. The reliability of IDE is superior to all the evaluated algorithms until the 750th iteration. At larger iterations, the GSR of FA was the highest, reaching 95% at the 1500th iteration compared to 92.8% for IDE. Fig. 11 shows the average GSR of CMA-ES, SCE and FA for the 24 problems as compared with the average GSR of IDE when SC-2 was used; IDE is superior to the other three algorithms in terms of reliability, as shown in Fig. 11(a). The GSR of IDE for SC_max = 50 for PS and PEC and SC_max = 24D for rpec is 92.0%, as opposed to 88.6% for CMA-ES. Its NFE, however, was significantly more than that of CMA-ES: 19,600 versus 11,700 (Fig. 11b). Since FA showed better reliability than CMA-ES and SCE, and its reliability supersedes that of IDE at a large number of iterations, the following modification was attempted to enhance its reliability and efficiency. The moves of the fireflies in the FA algorithm are not affected by the position of the best firefly (xbest). This strategy may prevent the FA algorithm from falling into local optima and leaves it open to continue the search for all available minima. However, this strategy also results in slower convergence toward the global optimum in comparison with other techniques. The modification that we attempted was based on a suggestion by Yang (2007) to improve the efficiency by adding an extra term λ ε_i (x_i − g*) to the move equation, Eq. (4). In the case of this Modified Firefly Algorithm (MFA), the global optimum found so far becomes a factor in determining how all fireflies move. The parameter λ determines how significant the contribution of the location of the best firefly is to the direction of the moves of the other fireflies. In our case, λ was taken as 1. We tested the performance of the MFA algorithm on the most challenging phase stability and equilibrium problems. In particular, nine of the twenty-four problems were identified as the most challenging, as their success rates were less than 50% for at least one of the methods when the first stopping criterion was applied. These problems were PS 6, 7 and 8; PEC 4, 6, 7 and 8; and rpec 4 and 7.
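For reference, the MFA move just described amounts to adding one term to Eq. (4). The Python sketch below shows that modified update with λ = 1; the value of γ, the value of α, and the use of independent random vectors for the two stochastic terms are illustrative assumptions rather than details of the authors' code.

```python
import numpy as np

rng = np.random.default_rng(2)

def move_firefly_mfa(x_i, x_j, g_best, alpha, lam=1.0,
                     beta0=1.0, beta_min=0.2, gamma=1.0):
    """Modified Firefly Algorithm move: Eq. (4) plus the extra term
    lam * eps * (x_i - g_best), which lets the best firefly found so far
    influence every move."""
    r2 = float(np.sum((x_i - x_j) ** 2))                         # squared distance, Eq. (3)
    beta = beta_min + (beta0 - beta_min) * np.exp(-gamma * r2)   # attractiveness, Eq. (2)
    eps1 = rng.uniform(-0.5, 0.5, size=x_i.shape)
    eps2 = rng.uniform(-0.5, 0.5, size=x_i.shape)
    return x_i + beta * (x_j - x_i) + alpha * eps1 + lam * eps2 * (x_i - g_best)
```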
We tested the performance of the MFA algorithm on the most challenging phase stability and equilibrium problems. In particular, nine of the twenty-four problems were identified as the most challenging, as their success rates were below 50% for at least one of the methods when the first stopping criterion was applied. These problems were PS 6, 7 and 8; PEC 4, 6, 7 and 8; and rPEC 4 and 7. In the first instance, the solution of these problems was attempted with a larger population size (20D) using the three algorithms under evaluation. No significant impact of the population size on the performance of any of the three algorithms was found. Fig. 12 shows the average GSR for all nine problems when the four algorithms were used with a 20D population size for up to 750 iterations.
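The two configurations compared in Fig. 12, a stochastic search used alone and a stochastic search whose best point is then refined by a deterministic local optimizer, can be sketched in Python as follows. Here `stochastic_search` is a hypothetical placeholder for any of CMA-ES, SCE, FA or MFA, and the use of SciPy's bounded L-BFGS-B is an illustrative choice rather than the local method used in this study.

```python
from scipy.optimize import minimize

def hybrid_solve(objective, bounds, stochastic_search, **search_kwargs):
    """Run a stochastic global search and refine its best point with a local optimizer.

    `stochastic_search(objective, bounds, **search_kwargs)` is assumed to return a
    tuple (x_best, f_best); `bounds` is a list of (low, high) pairs for the variables.
    """
    x_best, f_best = stochastic_search(objective, bounds, **search_kwargs)
    res = minimize(objective, x_best, method="L-BFGS-B", bounds=bounds)  # local refinement
    return (res.x, res.fun) if res.fun < f_best else (x_best, f_best)
```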

Fig. 12 - Global Success Rate (GSR) versus iterations for the most challenging problems using CMA-ES, SCE, FA and MFA with SC-1: (a) stochastic method only and (b) stochastic method combined with local optimization.

The comparison of GSR when global stochastic optimization is used alone, depicted in Fig. 12(a), shows that CMA-ES is still the most effective algorithm at finding the global minimum in 50 iterations. MFA increases the effectiveness of FA at 250 iterations, and it is the most reliable method, with the highest GSR, at the 750th iteration. However, the addition of local optimization at the end of the stochastic optimization considerably improves the reliability of all techniques, as shown in Fig. 12(b). In this case, FA is the most reliable technique and CMA-ES is the most effective; MFA followed by the local optimizer did not improve on FA performance. Other improvements that may be considered are the inclusion of other meta-heuristics, such as a Tabu List or Simulated Annealing, to improve the efficacy of the search, or the modification of the move equation by accepting or rejecting moves with a certain probability. These ideas will be attempted in future research projects.

6. Conclusions

In this study, three stochastic global optimization algorithms, namely CMA-ES, SCE and FA, have been evaluated for solving challenging phase stability and phase and chemical equilibrium problems. Performance at different iteration levels and the effect of the stopping criterion have also been analyzed. CMA-ES was found to be the most effective algorithm at finding the global minimum reliably and accurately within 50 iterations. FA was found to be the most reliable technique across the different problems tried, but it requires relatively more computational effort. The modification of a term that relates the firefly move to the location of the global optimum did not significantly improve the efficiency of FA. The stopping criterion SC-1 gives slightly better reliability than SC-2 at the expense of computational resources, and the use of SCmax can significantly reduce the computational effort for solving PEC, rPEC and PS problems without significantly affecting the reliability of the stochastic algorithms studied. Comparison of the performance of CMA-ES, SCE and FA with the results in Zhang et al. (2011b) shows that IDE is more reliable among the algorithms tested.

References

Avami, A., Saboohi, Y. A simultaneous method for phase identification and equilibrium calculations in reactive mixtures. Chem. Eng. Res. Des. 89.

Bayer, P., Finkel, M. Optimization of concentration control by evolution strategies: formulation, application, and assessment of remedial solutions. Water Resour. Res. 43, W.

Bonilla-Petriciolet, A., Rangaiah, G.P., Segovia-Hernández, J.G. Constrained and unconstrained Gibbs free energy minimization in reactive systems using genetic algorithm and
