Convergence of Ant Colony Optimization on First-Order Deceptive Systems
Yixin Chen
Washington University in St. Louis
Department of Computer Science & Engineering
St. Louis, MO 63130, USA

Haiying Sun
Yangzhou University
Department of Computer Science
Yangzhou 225009, China

Abstract

Deceptive problems have been considered difficult for ant colony optimization (ACO), and it was believed that ACO will fail to converge to the global optima of deceptive problems. This paper presents a convergence analysis of ACO on deceptive systems. It proves, for the first time, that ACO can achieve reachability convergence, but not asymptotic convergence, for a class of first-order deceptive systems (FODS) without assuming a minimum pheromone value at each iteration. Experimental results confirm the analysis.

1 Introduction

Ant colony optimization (ACO) is a popular method for hard discrete optimization problems [6, 5, 1, 9]. Although there is a huge amount of experimentation with, and many variants of, ACO, its theoretical foundation is still in its early development [5]. Recently, there has been an increased effort to deepen the understanding of the convergence behavior of ACO. Gutjahr [7, 8] proves the convergence of a particular implementation of ACO called the graph-based ant system (GBAS). However, GBAS is quite different from common ACO implementations, and its practical performance is unknown. Dorigo et al. [10, 6] show the convergence of another class of ACO, in which there is a lower bound $\tau_{\min}$ on all pheromone values. Such a method is denoted ACO$_{\tau_{\min}}$. Typically, there are two types of convergence of a stochastic optimization algorithm:

Asymptotic convergence. An algorithm has asymptotic convergence if $\lim_{t\to\infty} P_s(t) = 1$, where $P_s(t)$ is the probability that the algorithm generates an optimal solution in the $t$-th iteration.

Reachability convergence.
An algorithm has reachability convergence if $\lim_{t\to\infty} P_r(t) = 1$, where $P_r(t)$ is the probability that the algorithm generates an optimal solution at least once in the 1st to $t$-th iterations.

Dorigo et al. [10, 6] show that ACO$_{\tau_{\min}}$ achieves reachability convergence under certain assumptions. They also show the asymptotic convergence of ACO$_{\tau_{\min}(t)}$, in which the pheromone lower bound $\tau_{\min}(t)$ changes over time, under the assumption that $\tau_{\min}(t) = d/\ln(t+1)$, where $d$ is a constant. The main limitation of studying ACO$_{\tau_{\min}}$ and ACO$_{\tau_{\min}(t)}$ is that they do not allow the exponentially fast decrement of the pheromone trails that results from using a constant evaporation factor, as most ACO implementations do. In this paper, we consider an ACO algorithm that uses such an exponentially fast decrement of the pheromone and prove its reachability convergence. Our results are established for a particular class of optimization problems, the n-bit trap problem. The n-bit trap problem is deemed a difficult problem because it is a first-order deceptive system (FODS). FODS are characterized by locally optimal fixed points with large basins of attraction. It is known that both genetic algorithms (GA) and ACO may fall into the local optimal traps and fail to find the global optimal solution [1, 3, 4]. We present an ACO algorithm that follows closely the most common ACO implementations and prove that it achieves reachability convergence on the n-bit trap problem. We also show that the algorithm does not achieve asymptotic convergence: the standard ACO algorithm with an exponential decrease of pheromone cannot converge to the optimal solutions of the n-bit trap problem. This result is meaningful, as it provides a first explanation of the lack of asymptotic convergence of ACO on deceptive problems observed by many others [1, 3, 4].
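The two notions of convergence can be illustrated on a toy stochastic search. The following sketch is our own illustration, not code from the paper; the function and parameter names are ours. It estimates $P_s(t)$ and $P_r(t)$ by Monte Carlo for a search that samples uniformly from ten values with a single optimum: $P_s(t)$ stays flat at $0.1$ (no asymptotic convergence) while $P_r(t) = 1 - 0.9^t$ climbs toward 1 (reachability convergence).

```python
import random

def estimate_convergence(num_runs=10000, horizon=20, seed=0):
    """Monte Carlo estimates of P_s(t) (optimum generated AT iteration t)
    and P_r(t) (optimum generated at least once in iterations 1..t) for a
    toy search sampling uniformly from {0, ..., 9}; 0 is the optimum."""
    rng = random.Random(seed)
    hits_at = [0] * horizon  # runs whose iteration-t sample is optimal
    hits_by = [0] * horizon  # runs that have seen the optimum by iteration t
    for _ in range(num_runs):
        seen = False
        for t in range(horizon):
            if rng.randrange(10) == 0:
                hits_at[t] += 1
                seen = True
            if seen:
                hits_by[t] += 1
    P_s = [h / num_runs for h in hits_at]
    P_r = [h / num_runs for h in hits_by]
    return P_s, P_r
```

An algorithm of this kind thus has reachability convergence but not asymptotic convergence, which is exactly the combination proved for ACO on the n-bit trap problem in this paper.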
Algorithm 1: Ant colony optimization (ACO)

    initialize pheromone values $\tau_i^j$ for each $i = 1, \ldots, n$ and $j = 1, \ldots, |D_i|$;
    foreach iteration $t = 1, 2, \ldots$ do
        foreach ant $k = 1, \ldots, m$ do
            $s \leftarrow$ ConstructSolution($\tau$);
            if $s$ is a feasible solution then
                if $f(s) > f(s_b)$ or $s_b$ = null then $s_b \leftarrow s$;
                $\Psi_t \leftarrow \Psi_t \cup \{s\}$;
        UpdatePheromone($\tau$, $\Psi_t$, $s_b$);

2 Ant Colony Optimization and Deceptive Problems

In this section, we introduce the framework of the ACO algorithm we will use and review the concept of deceptive problems. ACO is designed for solving constrained optimization (CO) problems, defined as follows [5].

Definition 1. A constrained optimization problem is defined by a model $P = (S, \Omega, f)$, where: (1) $S$ is a search space defined over a finite set of discrete variables; (2) $\Omega$ is a set of constraints on the variables; and (3) $f : S \to \mathbb{R}$ is the objective function to be maximized.

In the search space $S$, there are $n$ decision variables $X_i$, $i = 1, \ldots, n$, where $X_i$ can take values from the set $D_i = \{c_i^1, \ldots, c_i^{|D_i|}\}$. A variable assignment is written as $X_i = c_i^j$. A complete assignment to all $X_i$ gives a solution instantiation. $S$ is the set of all such complete assignments. We also denote the set of all solution components as $R = \{c_i^j \mid i = 1, \ldots, n,\; j = 1, \ldots, |D_i|\}$. A solution $s \in S$ is called a feasible solution if it satisfies all the constraints in $\Omega$. A feasible solution $s^*$ is a global optimum if $f(s^*) \ge f(s)$ for all $s \in S$. The set of all global optima is denoted by $S^* \subseteq S$.

The general ACO algorithm considered in this paper is shown in Algorithm 1. In the ACO algorithm, we assign a value $\tau_i^j$, called the pheromone, to each solution component $c_i^j$. The vector of all pheromones is denoted by $\tau$. The central component of ACO is the ConstructSolution($\tau$) procedure that each ant uses to construct a solution. This construction procedure assembles solutions as sequences of elements from the finite set of solution components $R$. Let $s^p$ be the partial solution constructed by an ant. Initially $s^p = \{\}$ is empty.
The ant at each construction step adds a feasible solution component from the set $R(s^p)$, where $R(s^p) \subseteq R \setminus s^p$ is the set of feasible solution components that satisfy the constraints in $\Omega$ given the partial solution $s^p$. When selecting the component for variable $X_i$, the probability that the ant chooses $c_i^j$ is, for $j = 1, \ldots, |D_i|$,

$$P(c_i^j \mid s^p) = \frac{[\tau_i^j]^\alpha\,[\eta(c_i^j)]^\beta}{\sum_{c_i^k \in R(s^p)} [\tau_i^k]^\alpha\,[\eta(c_i^k)]^\beta}. \quad (1)$$

Here, $\eta(c_i^j)$ is a heuristic function for the component $c_i^j$, and $\alpha > 0$ and $\beta > 0$ are two parameters controlling the relative importance of the pheromone and heuristic information.

In iteration $t$, after all ants have constructed solutions, ACO calls UpdatePheromone($\tau$, $\Psi_t$, $s_b$) to update the pheromone vector $\tau$. Here, $\Psi_t$ is the set of solutions constructed by the ants in iteration $t$ and $s_b$ is the incumbent best solution. The pheromone update rule is, for $i = 1, \ldots, n$ and $j = 1, \ldots, |D_i|$,

$$\tau_i^j \leftarrow \rho\,\tau_i^j + \sum_{s \in S_i^j} F(s), \quad (2)$$

where $\rho \in (0, 1)$ is an evaporation factor, $S_i^j$ is the set of solutions in $\Psi_t$ that have $c_i^j$ as a component, and $F : S \to \mathbb{R}^+$ is a quality function such that, for any $s, s' \in S$, if $f(s) > f(s')$, then $F(s) \ge F(s')$. The expected iteration quality is denoted as $W_F(\tau \mid t)$, where $t$ is the iteration number, and is defined as

$$W_F(\tau \mid t) = \sum_{s \in S} F(s)\,P(s \mid \tau), \quad (3)$$

where $P(s \mid \tau)$ is the probability that the solution $s$ is generated by an ant given the pheromone vector $\tau$.

Definition 2. Given a constrained optimization problem $P$, an ACO algorithm is a local optimizer for $P$ if, for any initial pheromone values, the expected iteration quality satisfies

$$W_F(\tau \mid t+1) \ge W_F(\tau \mid t), \quad \forall t \ge 0. \quad (4)$$

We can now introduce the notion of deception for local optimizers [3].

Definition 3. Given a constrained optimization problem $P$, it is called a first-order deceptive system (FODS) for an ACO algorithm if the ACO algorithm is a local optimizer and there exists an initial setting of pheromone values such that the algorithm does not, in expectation, converge to a global optimum.
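The construction rule (1) and update rule (2) can be sketched in code. This is a minimal illustration of our own, not the authors' implementation: it assumes an unconstrained problem (every component is feasible), and the function names and the default $\rho$ are our choices.

```python
import random

def construct_solution(tau, eta, alpha=1.0, beta=1.0, rng=random):
    """Sample one complete assignment per Eq. (1): for each variable i,
    pick component j with probability proportional to
    tau[i][j]**alpha * eta[i][j]**beta.  Assumes all components feasible."""
    solution = []
    for t_i, e_i in zip(tau, eta):
        weights = [t**alpha * e**beta for t, e in zip(t_i, e_i)]
        total = sum(weights)
        r, acc = rng.random() * total, 0.0
        choice = len(weights) - 1  # fallback guards float round-off
        for j, w in enumerate(weights):
            acc += w
            if r <= acc:
                choice = j
                break
        solution.append(choice)
    return solution

def update_pheromone(tau, solutions, F, rho=0.9):
    """Apply Eq. (2): evaporate every pheromone by rho, then deposit F(s)
    on each component c_i^j used by a solution s of this iteration."""
    for i in range(len(tau)):
        for j in range(len(tau[i])):
            deposit = sum(F(s) for s in solutions if s[i] == j)
            tau[i][j] = rho * tau[i][j] + deposit
```

Note that the update multiplies the old pheromone by a constant $\rho < 1$, so pheromone on unused components decreases exponentially fast, which is exactly the behavior this paper analyzes.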
3 Convergence Analysis of ACO on FODS

Currently, whether a problem is a FODS is mostly established by empirical studies and lacks theoretical analysis. For example, experiments have been run to show that the n-bit trap and job-shop scheduling problems are deceptive
systems, by showing that ACO does not converge to the global optimal solutions [3]. However, there is no proof of why ACO cannot achieve asymptotic convergence on such problems. Also, it is yet to be determined whether ACO can achieve reachability convergence on deceptive systems. Further, understanding the time complexity of solving deceptive problems is important because it sheds insight into the worst-case performance of ACO, as deceptive problems are among the harder problems for ACO. In this paper, we study these issues on the n-bit trap problem, a well-known example of a FODS that has also been studied for evolutionary algorithms.

3.1 The n-bit trap problem

The n-bit trap problem is to find, among the $2^n$ binary numbers from $0$ to $2^n - 1$, the one with the highest fitness. The fitness of a binary number $s$ is defined as

$$f(s) = \begin{cases} h(s), & s \ne 0 \\ n + 1, & s = 0 \end{cases} \quad (5)$$

where $h(s)$ is the Hamming distance between $s$ and $0$. Obviously, the global optimum is $s^* = 0$.

Table 1. Fitness function values of the 4-bit trap problem: $f(0000) = 5$; for $s \ne 0000$, $f(s) = h(s)$, the number of 1-bits in $s$.

To solve the n-bit trap problem, we design the following ACO algorithm, which fits into the general ACO framework in Algorithm 1. We use $m$ ants. In each iteration, an ant sequentially fixes the $j$-th bit of the binary number in order $j = 1, 2, \ldots, n$. For the $j$-th bit, the ant has two choices, $c_j^0$ and $c_j^1$, corresponding to setting the $j$-th bit to 0 and 1, respectively. The pheromones of $c_j^0$ and $c_j^1$ are $\tau_j^0$ and $\tau_j^1$, respectively. Define the set $G_j^k = \{(b_1 b_2 \cdots b_n) \in S \mid b_j = k\}$, where $k \in \{0, 1\}$ and $b_j$ is the value of the $j$-th bit. Hence, $G_j^k$ is the set of binary numbers whose $j$-th bit is $k$. We define the function $F(G_j^k)$ as the average fitness of all the numbers in $G_j^k$:

$$F(G_j^k) = \frac{1}{|G_j^k|} \sum_{s \in G_j^k} f(s). \quad (6)$$

For example, when $n = 4$, we have $F(G_j^0) = 2.125$ and $F(G_j^1) = 2.5$ for all $j = 1, 2, 3, 4$. We initialize the pheromone $\tau_j^k$ as $\tau_j^k(0) = F(G_j^k)$, for all $j = 1, \ldots, n$ and $k \in \{0, 1\}$.
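The trap fitness (5) and the group averages (6) are easy to verify by brute force. The following sketch is our own checking code (names are ours, not the paper's); for $n = 4$ it reproduces $F(G_j^0) = 2.125$ and $F(G_j^1) = 2.5$, and for small $n$ it matches the closed forms $(n+1)/2$ and $(n-1)/2 + (n+1)/2^{n-1}$.

```python
from itertools import product

def trap_fitness(bits, n):
    """Eq. (5): f(s) = n + 1 for the all-zero string, otherwise the number
    of 1-bits, i.e. the Hamming distance between s and 0."""
    ones = sum(bits)
    return n + 1 if ones == 0 else ones

def average_group_fitness(n, j, k):
    """Eq. (6): brute-force average fitness F(G_j^k) over all n-bit strings
    whose j-th bit (0-indexed here) equals k."""
    group = [s for s in product((0, 1), repeat=n) if s[j] == k]
    return sum(trap_fitness(s, n) for s in group) / len(group)
```

Because $F(G_j^1) > F(G_j^0)$ for $n > 3$ while the optimum is all zeros, the initialization and the average reinforcement both favor the wrong bit value, which is the source of the deception.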
For the $j$-th bit, the probability that an ant selects value $k$, $k = 0$ or $1$, is

$$P(c_j^k, t) = \frac{\tau_j^k(t)}{\tau_j^0(t) + \tau_j^1(t)}, \quad (7)$$

which is derived from (1) by setting $\alpha = 1$ and $\beta = 0$. In each iteration, the pheromone is updated as

$$\tau_j^k(t+1) = \rho\,\tau_j^k(t) + \sum_{s \in S_j^k} f(s), \quad (8)$$

where $S_j^k$ is the set of solutions generated in the $t$-th iteration that have $c_j^k$ as the $j$-th bit.

3.2 Asymptotic convergence of ACO on the n-bit trap problem

Lemma 1. For the n-bit trap problem and the $F(G_j^k)$ function defined in (6), we have

$$F(G_j^1) = \frac{n+1}{2} \quad \text{and} \quad F(G_j^0) = \frac{n-1}{2} + \frac{n+1}{2^{n-1}}. \quad (9)$$

Proof. We have from the definition in (6) that

$$F(G_j^1) = \frac{1}{2^{n-1}} \sum_{k=0}^{n-1} \binom{n-1}{k}(k+1) = \frac{1}{2^{n-1}} \left[ (n-1)\,2^{n-2} + 2^{n-1} \right] = \frac{n+1}{2}, \quad (10)$$

$$F(G_j^0) = \frac{1}{2^{n-1}} \left[ \sum_{k=1}^{n-1} \binom{n-1}{k}\,k + (n+1) \right] = \frac{1}{2^{n-1}} \left[ (n-1)\,2^{n-2} + (n+1) \right] = \frac{n-1}{2} + \frac{n+1}{2^{n-1}}. \quad (11)$$

Lemma 2. Let $\beta > 0$ be a constant. Then

$$\lim_{T\to\infty} \prod_{t=1}^{T} \left( 1 - \frac{\beta}{\beta + t} \right) = 0. \quad (12)$$

Proof. We consider four cases.

(1) In case $\beta = 1$, we have

$$\prod_{t=1}^{T} \left( 1 - \frac{1}{1+t} \right) = \frac{1}{2} \cdot \frac{2}{3} \cdot \frac{3}{4} \cdots \frac{T}{T+1} = \frac{1}{T+1} \to 0. \quad (13)$$

(2) In case $\beta > 1$, we have $\beta/(\beta+t) > 1/(1+t)$, and hence

$$\prod_{t=1}^{T} \left( 1 - \frac{\beta}{\beta+t} \right) < \prod_{t=1}^{T} \left( 1 - \frac{1}{1+t} \right) = \frac{1}{T+1} \to 0. \quad (14)$$
(3) In case $\beta < 1$ and $\beta = 1/q$, where $q$ is an integer larger than 1: let $T = pq$ and $c = \prod_{t=2}^{q} (1 - 1/t) > 0$. Then

$$\prod_{t=2}^{T} \left( 1 - \frac{1}{t} \right) = c \prod_{j=1}^{p-1} \prod_{i=1}^{q} \left( 1 - \frac{1}{jq+i} \right) > c \left[ \prod_{j=1}^{p-1} \left( 1 - \frac{1}{jq} \right) \right]^{q}. \quad (15)$$

Since $\prod_{t=2}^{T} (1 - 1/t) = 1/T \to 0$ and $c > 0$, (15) implies $\prod_{j=1}^{\infty} (1 - 1/(jq)) = 0$. Finally, since $1 + jq \le q(j+1)$, we have $1 - \beta/(\beta+j) = 1 - 1/(1+jq) \le 1 - 1/(q(j+1))$, and therefore $\prod_{j=1}^{\infty} (1 - \beta/(\beta+j)) \le \prod_{j=2}^{\infty} (1 - 1/(jq)) = 0$.

(4) In case $\beta < 1$ and $\beta = p/r$, where $p$ and $r$ are both integers larger than 1: we can always find $\beta' = 1/q < \beta$ with $q$ an integer larger than 1. Since $\beta/(\beta+t)$ is increasing in $\beta$, we have $1 - \beta/(\beta+t) < 1 - \beta'/(\beta'+t)$, and $\prod_{t=1}^{\infty} (1 - \beta'/(\beta'+t)) = 0$ from case (3); therefore $\prod_{t=1}^{\infty} (1 - \beta/(\beta+t)) = 0$. ∎

We consider the ACO algorithm described in Algorithm 1 using (7) and (8) as the selection and update functions, respectively. We name this algorithm ACO$_{n\text{-bit}}$. For the ACO$_{n\text{-bit}}$ algorithm, define

$$\alpha_j(t) = \frac{\tau_j^0(t)}{\tau_j^1(t)}, \quad j = 1, \ldots, n,\; t = 0, 1, 2, \ldots \quad (16)$$

Namely, $\alpha_j(t)$ is the ratio of the pheromone on value 0 to the pheromone on value 1 for bit $j$ in iteration $t$.

Theorem 1. The ACO$_{n\text{-bit}}$ algorithm cannot achieve asymptotic convergence on the n-bit trap problem when $n > 3$.

Proof. Since the global optimal solution of the n-bit trap problem is $s^* = (0, 0, \ldots, 0)$, we show that, for all $j = 1, \ldots, n$,

$$\lim_{t\to\infty} E[\alpha_j(t)] = \lim_{t\to\infty} E\!\left[ \frac{\tau_j^0(t)}{\tau_j^1(t)} \right] = 0. \quad (17)$$

Suppose we have $m$ ants. We first prove by induction that

$$E[\alpha_j(t)] < 1, \quad j = 1, \ldots, n,\; t = 0, 1, 2, \ldots \quad (18)$$

When $t = 0$, $\tau_j^0(0)$ is initialized to $F(G_j^0)$ and $\tau_j^1(0)$ to $F(G_j^1)$. From Lemma 1, when $n > 3$, $F(G_j^1) > F(G_j^0)$. For a fixed $n$, we set $F(G_j^0) = \alpha_0 F(G_j^1)$, where $\alpha_0$ is a constant and $0 < \alpha_0 < 1$. Hence, we have our base case: $E[\alpha_j(0)] = \tau_j^0(0)/\tau_j^1(0) = F(G_j^0)/F(G_j^1) = \alpha_0 < 1$.

To prove (18) by induction, we assume that $E[\alpha_j(t)] < 1$. According to (8), in the $(t+1)$-th iteration, the expected values of the pheromones satisfy

$$E[\tau_j^0(t+1)] = \rho\,E[\tau_j^0(t)] + E\!\left[ \frac{m\,\alpha_j(t)}{1+\alpha_j(t)} \right] F(G_j^0),$$

$$E[\tau_j^1(t+1)] = \rho\,E[\tau_j^1(t)] + E\!\left[ \frac{m}{1+\alpha_j(t)} \right] F(G_j^1).$$
Given (16), we have

$$E[\alpha_j(t+1)] = \frac{E[\tau_j^0(t+1)]}{E[\tau_j^1(t+1)]} = \frac{\rho\,E[\tau_j^0(t)] + \frac{m\,E[\alpha_j(t)]}{1+E[\alpha_j(t)]}\,F(G_j^0)}{\rho\,E[\tau_j^1(t)] + \frac{m}{1+E[\alpha_j(t)]}\,F(G_j^1)} = \frac{\rho\,(1+E[\alpha_j(t)])\,E[\alpha_j(t)]\,E[\tau_j^1(t)] + E[\alpha_j(t)]\,m F(G_j^0)}{\rho\,(1+E[\alpha_j(t)])\,E[\tau_j^1(t)] + m F(G_j^1)}.$$

Hence, using $F(G_j^0) = \alpha_0 F(G_j^1)$, we have

$$E[\alpha_j(t+1)] = E[\alpha_j(t)] \left( 1 - \frac{(1-\alpha_0)\,m F(G_j^1)}{\rho\,(1+E[\alpha_j(t)])\,E[\tau_j^1(t)] + m F(G_j^1)} \right) < E[\alpha_j(t)].$$

Therefore, we get, for any $j = 1, \ldots, n$,

$$1 > \alpha_0 = E[\alpha_j(0)] > E[\alpha_j(1)] > \cdots > E[\alpha_j(t)] > E[\alpha_j(t+1)] > \cdots > 0. \quad (19)$$

Thus, we have proved (18) by induction. Further, since

$$E[\tau_j^1(t)] = \rho\,E[\tau_j^1(t-1)] + \frac{m}{1+E[\alpha_j(t)]}\,F(G_j^1) < E[\tau_j^1(t-1)] + m F(G_j^1), \quad (20)$$

recursively applying (20) leads to

$$E[\tau_j^1(t)] < E[\tau_j^1(0)] + t\,m F(G_j^1) < (t+1)\,m F(G_j^1). \quad (21)$$

Substituting (21) into the recursion above yields

$$E[\alpha_j(t+1)] < E[\alpha_j(t)] \left( 1 - \frac{1-\alpha_0}{\rho\,(1+E[\alpha_j(t)])\,(t+1) + 1} \right).$$
From (19), we know $1 + E[\alpha_j(t)] < 2$. Therefore,

$$E[\alpha_j(t+1)] < E[\alpha_j(t)] \left( 1 - \frac{1-\alpha_0}{2\rho\,(t+1)+1} \right) < E[\alpha_j(t)] \left( 1 - \frac{1-\alpha_0}{(2\rho+3)\,t} \right).$$

Let $\beta = \frac{1-\alpha_0}{2\rho+3}$; then

$$E[\alpha_j(t+1)] < E[\alpha_j(t)] \left( 1 - \frac{\beta}{t} \right) < \alpha_0 \prod_{i=1}^{t} \left( 1 - \frac{\beta}{i} \right).$$

Since $0 < 1 - \beta/i \le 1 - \beta/(\beta+i)$, Lemma 2 gives $\lim_{t\to\infty} E[\alpha_j(t)] = 0$. Therefore, the ACO$_{n\text{-bit}}$ algorithm will not converge to the optimal solution $s^* = (0, 0, \ldots, 0)$. ∎

Theorem 1 gives an explanation of why ACO algorithms have difficulties in solving deceptive problems such as the n-bit trap problem.

3.3 Reachability convergence of ACO on the n-bit trap problem

Although ACO$_{n\text{-bit}}$ cannot achieve asymptotic convergence, we will show that it can achieve reachability convergence. We need some preliminary results first.

Lemma 3. The pheromone in the $t$-th iteration of ACO$_{n\text{-bit}}$ satisfies $\tau_j^0(t) \ge \frac{1-\rho^{t+1}}{1-\rho}\,\varphi F(G_j^0)$, where $\varphi$ is a constant, $0 < \varphi < 1$, for $j = 1, \ldots, n$ and $t = 1, 2, \ldots$

Proof. From the pheromone update rule in (8), we have

$$\tau_j^k(t) = \rho\,\tau_j^k(t-1) + \sum_{s \in S_j^k} f(s). \quad (22)$$

Let $\theta_j^0 = \min \{ f(s) \mid s = (b_1 b_2 \cdots b_n) \in S,\; b_j = 0 \}$ and write $\theta_j^0 = \varphi F(G_j^0)$; obviously $0 < \varphi < 1$. Since $\tau_j^0(t) \ge \rho\,\tau_j^0(t-1) + \theta_j^0$, we have $\tau_j^0(t) \ge \rho\,\tau_j^0(t-1) + \varphi F(G_j^0)$. Therefore, since $\tau_j^0(0) = F(G_j^0) \ge \varphi F(G_j^0)$, we get

$$\tau_j^0(t) \ge \rho^t\,\tau_j^0(0) + (\rho^{t-1} + \rho^{t-2} + \cdots + 1)\,\varphi F(G_j^0) \ge (\rho^t + \rho^{t-1} + \cdots + 1)\,\varphi F(G_j^0) = \frac{1-\rho^{t+1}}{1-\rho}\,\varphi F(G_j^0). \;\;∎$$

Lemma 4. For any positive integer $n$ and real number $\rho \in (0, 1)$, there exists a positive integer $t_0$ such that $\frac{1-\rho^{t+1}}{1-\rho} > t^{-1/n}$ for any $t > t_0$.

Proof. We see that

$$\lim_{t\to\infty} t^{-1/n} \Big/ \frac{1-\rho^{t+1}}{1-\rho} = \lim_{t\to\infty} \frac{(1-\rho)\,t^{-1/n}}{1-\rho^{t+1}} = 0.$$

Thus, there exists an integer $t_0 > 0$ such that $\frac{1-\rho^{t+1}}{1-\rho} > t^{-1/n}$ for $t > t_0$. ∎

From Lemma 3 and Lemma 4, we see that when $t$ is large enough, $\tau_j^0(t) \ge t^{-1/n}\,\varphi F(G_j^0)$. We set $\tau_{\min}^0(j, t) = t^{-1/n}\,\varphi F(G_j^0)$ and $\tau_{\min}^0(t) = \min_{1 \le j \le n} \tau_{\min}^0(j, t)$. If we set $\theta_{\min} = \min_{1 \le j \le n} \varphi F(G_j^0)$, then from Lemma 3 and Lemma 4,

$$\tau_{\min}^0(t) \ge t^{-1/n}\,\theta_{\min}, \quad \forall t > t_0. \quad (23)$$

Lemma 5. The pheromone in the $t$-th iteration of ACO$_{n\text{-bit}}$ satisfies $\tau_j^k(t) \le \frac{\phi m}{1-\rho}\,F(G_j^k)$, for $k = 0, 1$, $j = 1, \ldots, n$, and $t = 1, 2, \ldots$, where $\phi > 1$ is a constant.

Proof. From the pheromone update rule in (8), we have

$$\tau_j^k(t) = \rho\,\tau_j^k(t-1) + \sum_{s \in S_j^k} f(s). \quad (24)$$

Let $\lambda_j^k = \max \{ f(s) \mid s = (b_1 b_2 \cdots b_n) \in S,\; b_j = k \}$ and write $\lambda_j^k = \phi F(G_j^k)$, where $\phi > 1$ is a constant for a given $n$.
Since $\tau_j^k(t) \le \rho\,\tau_j^k(t-1) + m\,\lambda_j^k$, we have $\tau_j^k(t) \le \rho\,\tau_j^k(t-1) + \phi m F(G_j^k)$. Therefore, we get

$$\tau_j^k(t) \le \rho^t\,\tau_j^k(0) + (\rho^{t-1} + \rho^{t-2} + \cdots + 1)\,\phi m F(G_j^k) \le \frac{\phi m}{1-\rho}\,F(G_j^k). \;\;∎$$

We denote $\tau_{\max}^k(j) = \frac{\phi m}{1-\rho}\,F(G_j^k)$ and $\tau_{\max} = \max_{1 \le j \le n,\; k = 0, 1} \tau_{\max}^k(j)$.

Theorem 2. The ACO$_{n\text{-bit}}$ algorithm achieves reachability convergence on the n-bit trap problem.

Proof. Although there are multiple ants, we only need to prove that a single ant achieves reachability convergence in order to establish the result. From Lemma 4 and Lemma 5, we know that for $t > t_0$, where $t_0$ is defined in Lemma 4,

$$P(c_j^0, t) = \frac{\tau_j^0(t)}{\tau_j^0(t) + \tau_j^1(t)} \ge \frac{\tau_{\min}^0(t)}{\tau_{\min}^0(t) + \tau_{\max}}. \quad (25)$$

Hence, letting $P(t)$ be the probability that the ant generates the optimal solution $s^* = (0, 0, \ldots, 0)$ in iteration $t$, we have

$$P(t) \ge \left[ \frac{\tau_{\min}^0(t)}{\tau_{\min}^0(t) + \tau_{\max}} \right]^n. \quad (26)$$

Let $P_{succ}(T)$ and $P_{fail}(T)$, respectively, be the probabilities that the ant does and does not find $s^*$ in the first $T$ iterations. From (23) and Lemma 5, we know

$$P_{fail}(T) = \prod_{t=1}^{T} (1 - P(t)) \le \prod_{t=t_0}^{T} \left( 1 - \left[ \frac{\tau_{\min}^0(t)}{\tau_{\min}^0(t) + \tau_{\max}} \right]^n \right) \le \prod_{t=t_0}^{T} \left( 1 - \left[ \frac{\theta_{\min}}{\theta_{\min} + \tau_{\max}} \right]^n \frac{1}{t} \right).$$
Denote $\gamma = \left[ \frac{\theta_{\min}}{\theta_{\min} + \tau_{\max}} \right]^n$; then $P_{fail}(T) \le \prod_{t=t_0}^{T} \left( 1 - \frac{\gamma}{t} \right)$. Since $0 < 1 - \gamma/t \le 1 - \gamma/(\gamma+t)$, from Lemma 2 we have $\lim_{T\to\infty} P_{fail}(T) = 0$, and thus $\lim_{T\to\infty} P_{succ}(T) = 1 - \lim_{T\to\infty} P_{fail}(T) = 1$. ∎

Figure 1. Average solution pheromone. Figure 2. Average incumbent solution.

4 Experimental Results

We have implemented ACO on the n-bit trap problem. We set $n = 12$, evaporation rate $\rho = 0.95$, and use 10 ants. First, to test asymptotic convergence, we make 100 runs and record $Q$, the solution pheromone, at each iteration $t$. Figure 1 shows the average pheromone at each iteration over the 100 runs. We see that $Q$ decreases as $t$ increases, indicating that ACO does not have asymptotic convergence on the n-bit trap problem. Next, to show reachability convergence, we plot in Figure 2 the average fitness of the incumbent solution $f$ versus the iteration number $t$. We see that the average $f$ approaches the optimal value as $t$ increases, showing the reachability convergence of ACO.

5 Conclusions

In this paper, we have theoretically analyzed ACO algorithms on deceptive problems, a class of problems that are considered difficult for ACO. We have presented a first attempt at a convergence analysis of ACO on deceptive systems. Using the n-bit trap problem as an example of first-order deceptive problems, we have proved that ACO can achieve reachability convergence but not asymptotic convergence for some first-order deceptive systems without assuming a minimum pheromone at each iteration. We have also presented experimental results that confirm the analysis.

References

[1] C. Blum. Theoretical and practical aspects of ant colony optimization. Dissertations in Artificial Intelligence, 463, 2004.
[2] C. Blum. Ant colony optimization: Introduction and recent trends. Physics of Life Reviews, 2(4):353–373, December 2005.
[3] C. Blum and M. Dorigo. Deception in ant colony optimization. In Proceedings of ANTS 2004, Lecture Notes in Computer Science, pages 119–130, 2004.
[4] C. Blum and M. Dorigo. Search bias in ant colony optimization: On the role of competition-balanced systems.
IEEE Transactions on Evolutionary Computation, 9(2):159–174, 2005.
[5] M. Dorigo and C. Blum. Ant colony optimization theory: A survey. Theoretical Computer Science, 344:243–278, 2005.
[6] M. Dorigo and T. Stützle. Ant Colony Optimization. MIT Press, 2004.
[7] W. Gutjahr. A graph-based ant system and its convergence. Future Generation Computer Systems, 16:873–888, 2000.
[8] W. Gutjahr. ACO algorithms with guaranteed convergence to the optimal solution. Information Processing Letters, 82(3):145–153, 2002.
[9] S. Shtovba. Ant algorithms: Theory and applications. Programming and Computer Software, 31(4):167–178, 2005.
[10] T. Stützle and M. Dorigo. A short convergence proof for a class of ant colony optimization algorithms. IEEE Transactions on Evolutionary Computation, 6(4):358–365, 2002.
Multiobjective Optimization MOO Applications Professores: Eduardo G. Carrano Frederico G. Guimarães Lucas S. Batista {egcarrano,fredericoguimaraes,lusoba}@ufmg.br www.ppgee.ufmg.br/ lusoba Universidade
More informationSolving the Homogeneous Probabilistic Traveling Salesman Problem by the ACO Metaheuristic
Solving the Homogeneous Probabilistic Traveling Salesman Problem by the ACO Metaheuristic Leonora Bianchi 1, Luca Maria Gambardella 1 and Marco Dorigo 2 1 IDSIA, Strada Cantonale Galleria 2, CH-6928 Manno,
More informationChapter 8: Introduction to Evolutionary Computation
Computational Intelligence: Second Edition Contents Some Theories about Evolution Evolution is an optimization process: the aim is to improve the ability of an organism to survive in dynamically changing
More information14 : Theory of Variational Inference: Inner and Outer Approximation
10-708: Probabilistic Graphical Models 10-708, Spring 2017 14 : Theory of Variational Inference: Inner and Outer Approximation Lecturer: Eric P. Xing Scribes: Maria Ryskina, Yen-Chia Hsu 1 Introduction
More informationA MIXED INTEGER QUADRATIC PROGRAMMING MODEL FOR THE LOW AUTOCORRELATION BINARY SEQUENCE PROBLEM. Jozef Kratica
Serdica J. Computing 6 (2012), 385 400 A MIXED INTEGER QUADRATIC PROGRAMMING MODEL FOR THE LOW AUTOCORRELATION BINARY SEQUENCE PROBLEM Jozef Kratica Abstract. In this paper the low autocorrelation binary
More informationHill climbing: Simulated annealing and Tabu search
Hill climbing: Simulated annealing and Tabu search Heuristic algorithms Giovanni Righini University of Milan Department of Computer Science (Crema) Hill climbing Instead of repeating local search, it is
More informationGas Turbine LQR, INTEGRAL Controllers and Optimal PID Tuning by Ant Colony Optimization Comparative Study
International Journal of Computer Science and elecommunications [Volume 4, Issue, January 23] 35 ISSN 247-3338 Gas urbine LQR, INEGRAL Controllers and Optimal PID uning by Ant Colony Optimization Comparative
More informationA new ILS algorithm for parallel machine scheduling problems
J Intell Manuf (2006) 17:609 619 DOI 10.1007/s10845-006-0032-2 A new ILS algorithm for parallel machine scheduling problems Lixin Tang Jiaxiang Luo Received: April 2005 / Accepted: January 2006 Springer
More informationWorst case analysis for a general class of on-line lot-sizing heuristics
Worst case analysis for a general class of on-line lot-sizing heuristics Wilco van den Heuvel a, Albert P.M. Wagelmans a a Econometric Institute and Erasmus Research Institute of Management, Erasmus University
More informationSimple Max-Min Ant Systems and the Optimization of Linear Pseudo-Boolean Functions
Simple Max-Min Ant Systems and the Optimization of Linear Pseudo-Boolean Functions Timo Kötzing Max-Planck-Institut für Informatik 66123 Saarbrücken, Germany Dirk Sudholt CERCIA, University of Birmingham
More informationWeighted Activity Selection
Weighted Activity Selection Problem This problem is a generalization of the activity selection problem that we solvd with a greedy algorithm. Given a set of activities A = {[l, r ], [l, r ],..., [l n,
More informationAn ACO Algorithm for the Most Probable Explanation Problem
An ACO Algorithm for the Most Probable Explanation Problem Haipeng Guo 1, Prashanth R. Boddhireddy 2, and William H. Hsu 3 1 Department of Computer Science, Hong Kong University of Science and Technology
More informationA PARAMETER CONTROL SCHEME FOR DE INSPIRED BY ACO
A PARAMETER CONTROL SCHEME FOR DE INSPIRED BY ACO Dražen Bajer, Goran Martinović Faculty of Electrical Engineering, Josip Juraj Strossmayer University of Osijek, Croatia drazen.bajer@etfos.hr, goran.martinovic@etfos.hr
More informationDistributed Optimization. Song Chong EE, KAIST
Distributed Optimization Song Chong EE, KAIST songchong@kaist.edu Dynamic Programming for Path Planning A path-planning problem consists of a weighted directed graph with a set of n nodes N, directed links
More informationChapter 3 Deterministic planning
Chapter 3 Deterministic planning In this chapter we describe a number of algorithms for solving the historically most important and most basic type of planning problem. Two rather strong simplifying assumptions
More informationAn ant colony algorithm applied to lay-up optimization of laminated composite plates
10(2013) 491 504 An ant colony algorithm applied to lay-up optimization of laminated composite plates Abstract Ant colony optimization (ACO) is a class of heuristic algorithms proposed to solve optimization
More informationGenetic Algorithm for Solving the Economic Load Dispatch
International Journal of Electronic and Electrical Engineering. ISSN 0974-2174, Volume 7, Number 5 (2014), pp. 523-528 International Research Publication House http://www.irphouse.com Genetic Algorithm
More informationAlgorithms and Complexity theory
Algorithms and Complexity theory Thibaut Barthelemy Some slides kindly provided by Fabien Tricoire University of Vienna WS 2014 Outline 1 Algorithms Overview How to write an algorithm 2 Complexity theory
More informationA lower bound for scheduling of unit jobs with immediate decision on parallel machines
A lower bound for scheduling of unit jobs with immediate decision on parallel machines Tomáš Ebenlendr Jiří Sgall Abstract Consider scheduling of unit jobs with release times and deadlines on m identical
More informationRuntime Analysis of a Binary Particle Swarm Optimizer
Runtime Analysis of a Binary Particle Swarm Optimizer Dirk Sudholt Fakultät für Informatik, LS 2 Technische Universität Dortmund Dortmund, Germany Carsten Witt Fakultät für Informatik, LS 2 Technische
More informationProbabilistic Graphical Models
School of Computer Science Probabilistic Graphical Models Variational Inference IV: Variational Principle II Junming Yin Lecture 17, March 21, 2012 X 1 X 1 X 1 X 1 X 2 X 3 X 2 X 2 X 3 X 3 Reading: X 4
More informationGradient-based Adaptive Stochastic Search
1 / 41 Gradient-based Adaptive Stochastic Search Enlu Zhou H. Milton Stewart School of Industrial and Systems Engineering Georgia Institute of Technology November 5, 2014 Outline 2 / 41 1 Introduction
More informationOptimizing Ratio of Monotone Set Functions
Optimizing Ratio of Monotone Set Functions Chao Qian, Jing-Cheng Shi 2, Yang Yu 2, Ke Tang, Zhi-Hua Zhou 2 UBRI, School of Computer Science and Technology, University of Science and Technology of China,
More informationREIHE COMPUTATIONAL INTELLIGENCE COLLABORATIVE RESEARCH CENTER 531
U N I V E R S I T Y OF D O R T M U N D REIHE COMPUTATIONAL INTELLIGENCE COLLABORATIVE RESEARCH CENTER 531 Design and Management of Complex Technical Processes and Systems by means of Computational Intelligence
More informationSolving Numerical Optimization Problems by Simulating Particle-Wave Duality and Social Information Sharing
International Conference on Artificial Intelligence (IC-AI), Las Vegas, USA, 2002: 1163-1169 Solving Numerical Optimization Problems by Simulating Particle-Wave Duality and Social Information Sharing Xiao-Feng
More informationAlgorithm Analysis Recurrence Relation. Chung-Ang University, Jaesung Lee
Algorithm Analysis Recurrence Relation Chung-Ang University, Jaesung Lee Recursion 2 Recursion 3 Recursion in Real-world Fibonacci sequence = + Initial conditions: = 0 and = 1. = + = + = + 0, 1, 1, 2,
More informationHybrid Evolutionary and Annealing Algorithms for Nonlinear Discrete Constrained Optimization 1. Abstract. 1 Introduction
Hybrid Evolutionary and Annealing Algorithms for Nonlinear Discrete Constrained Optimization 1 Benjamin W. Wah and Yixin Chen Department of Electrical and Computer Engineering and the Coordinated Science
More informationCS 6783 (Applied Algorithms) Lecture 3
CS 6783 (Applied Algorithms) Lecture 3 Antonina Kolokolova January 14, 2013 1 Representative problems: brief overview of the course In this lecture we will look at several problems which, although look
More informationOn the Effectiveness of Sampling for Evolutionary Optimization in Noisy Environments
On the Effectiveness of Sampling for Evolutionary Optimization in Noisy Environments Chao Qian 1, Yang Yu 1, Yaochu Jin 2, and Zhi-Hua Zhou 1 1 National Key Laboratory for Novel Software Technology, Nanjing
More informationFinding optimal configurations ( combinatorial optimization)
CS 1571 Introduction to AI Lecture 10 Finding optimal configurations ( combinatorial optimization) Milos Hauskrecht milos@cs.pitt.edu 539 Sennott Square Constraint satisfaction problem (CSP) Constraint
More informationAvailable online at ScienceDirect. Procedia Computer Science 20 (2013 ) 90 95
Available online at www.sciencedirect.com ScienceDirect Procedia Computer Science 20 (2013 ) 90 95 Complex Adaptive Systems, Publication 3 Cihan H. Dagli, Editor in Chief Conference Organized by Missouri
More informationarxiv: v2 [cs.ds] 11 Oct 2017
Stochastic Runtime Analysis of a Cross-Entropy Algorithm for Traveling Salesman Problems Zijun Wu a,1,, Rolf H. Möhring b,2,, Jianhui Lai c, arxiv:1612.06962v2 [cs.ds] 11 Oct 2017 a Beijing Institute for
More informationFirefly algorithm in optimization of queueing systems
BULLETIN OF THE POLISH ACADEMY OF SCIENCES TECHNICAL SCIENCES, Vol. 60, No. 2, 2012 DOI: 10.2478/v10175-012-0049-y VARIA Firefly algorithm in optimization of queueing systems J. KWIECIEŃ and B. FILIPOWICZ
More informationK-center Hardness and Max-Coverage (Greedy)
IOE 691: Approximation Algorithms Date: 01/11/2017 Lecture Notes: -center Hardness and Max-Coverage (Greedy) Instructor: Viswanath Nagarajan Scribe: Sentao Miao 1 Overview In this lecture, we will talk
More informationSolving Fuzzy PERT Using Gradual Real Numbers
Solving Fuzzy PERT Using Gradual Real Numbers Jérôme FORTIN a, Didier DUBOIS a, a IRIT/UPS 8 route de Narbonne, 3062, Toulouse, cedex 4, France, e-mail: {fortin, dubois}@irit.fr Abstract. From a set of
More informationHybrid particle swarm algorithm for solving nonlinear constraint. optimization problem [5].
Hybrid particle swarm algorithm for solving nonlinear constraint optimization problems BINGQIN QIAO, XIAOMING CHANG Computers and Software College Taiyuan University of Technology Department of Economic
More informationCS264: Beyond Worst-Case Analysis Lecture #11: LP Decoding
CS264: Beyond Worst-Case Analysis Lecture #11: LP Decoding Tim Roughgarden October 29, 2014 1 Preamble This lecture covers our final subtopic within the exact and approximate recovery part of the course.
More informationarxiv: v2 [cs.ds] 27 Sep 2014
New Results on Online Resource Minimization Lin Chen Nicole Megow Kevin Schewior September 30, 2014 arxiv:1407.7998v2 [cs.ds] 27 Sep 2014 Abstract We consider the online resource minimization problem in
More informationPrediction-based adaptive control of a class of discrete-time nonlinear systems with nonlinear growth rate
www.scichina.com info.scichina.com www.springerlin.com Prediction-based adaptive control of a class of discrete-time nonlinear systems with nonlinear growth rate WEI Chen & CHEN ZongJi School of Automation
More informationProbabilistic Graphical Models. Theory of Variational Inference: Inner and Outer Approximation. Lecture 15, March 4, 2013
School of Computer Science Probabilistic Graphical Models Theory of Variational Inference: Inner and Outer Approximation Junming Yin Lecture 15, March 4, 2013 Reading: W & J Book Chapters 1 Roadmap Two
More informationResearch Article Study on the Stochastic Chance-Constrained Fuzzy Programming Model and Algorithm for Wagon Flow Scheduling in Railway Bureau
Mathematical Problems in Engineering Volume 2012, Article ID 602153, 13 pages doi:10.1155/2012/602153 Research Article Study on the Stochastic Chance-Constrained Fuzzy Programming Model and Algorithm for
More informationAn Evolution Strategy for the Induction of Fuzzy Finite-state Automata
Journal of Mathematics and Statistics 2 (2): 386-390, 2006 ISSN 1549-3644 Science Publications, 2006 An Evolution Strategy for the Induction of Fuzzy Finite-state Automata 1,2 Mozhiwen and 1 Wanmin 1 College
More informationUsefulness of infeasible solutions in evolutionary search: an empirical and mathematical study
Edith Cowan University Research Online ECU Publications 13 13 Usefulness of infeasible solutions in evolutionary search: an empirical and mathematical study Lyndon While Philip Hingston Edith Cowan University,
More informationModels of Language Evolution
Models of Matilde Marcolli CS101: Mathematical and Computational Linguistics Winter 2015 Main Reference Partha Niyogi, The computational nature of language learning and evolution, MIT Press, 2006. From
More informationCS 6901 (Applied Algorithms) Lecture 3
CS 6901 (Applied Algorithms) Lecture 3 Antonina Kolokolova September 16, 2014 1 Representative problems: brief overview In this lecture we will look at several problems which, although look somewhat similar
More informationEmbedded Systems 14. Overview of embedded systems design
Embedded Systems 14-1 - Overview of embedded systems design - 2-1 Point of departure: Scheduling general IT systems In general IT systems, not much is known about the computational processes a priori The
More informationOverview. Optimization. Easy optimization problems. Monte Carlo for Optimization. 1. Survey MC ideas for optimization: (a) Multistart
Monte Carlo for Optimization Overview 1 Survey MC ideas for optimization: (a) Multistart Art Owen, Lingyu Chen, Jorge Picazo (b) Stochastic approximation (c) Simulated annealing Stanford University Intel
More information