Training Schrödinger's Cat
1 Training Schrödinger's Cat: Quadratically Converging Algorithms for Optimal Control of Quantum Systems. David L. Goodwin & Ilya Kuprov, d.goodwin@soton.ac.uk, comp-chem@southampton, Wednesday 15th July 2015
2 Two equations...
\[
\frac{\partial^2 J}{\partial c^{(k)}_{n_2}\,\partial c^{(k)}_{n_1}}
= \left\langle \sigma \right| \hat{P}_N \hat{P}_{N-1} \cdots
\frac{\partial \hat{P}_{n_2}}{\partial c^{(k)}_{n_2}} \cdots
\frac{\partial \hat{P}_{n_1}}{\partial c^{(k)}_{n_1}} \cdots
\hat{P}_2 \hat{P}_1 \left| \psi_0 \right\rangle
\tag{1}
\]
\[
\exp\begin{pmatrix}
-i\hat{L}\Delta t & -i\hat{H}^{(k_1)}_{n_1}\Delta t & 0 \\
0 & -i\hat{L}\Delta t & -i\hat{H}^{(k_2)}_{n_2}\Delta t \\
0 & 0 & -i\hat{L}\Delta t
\end{pmatrix}
=
\begin{pmatrix}
e^{-i\hat{L}\Delta t} & \dfrac{\partial e^{-i\hat{L}\Delta t}}{\partial c^{(k_1)}_{n_1}} & \dfrac{1}{2}\dfrac{\partial^2 e^{-i\hat{L}\Delta t}}{\partial c^{(k_1)}_{n_1}\,\partial c^{(k_2)}_{n_2}} \\
0 & e^{-i\hat{L}\Delta t} & \dfrac{\partial e^{-i\hat{L}\Delta t}}{\partial c^{(k_2)}_{n_2}} \\
0 & 0 & e^{-i\hat{L}\Delta t}
\end{pmatrix}
\tag{2}
\]
3 01 Introducing Optimal Control 02 Newton-Raphson method 03 Gradient and Hessian 04 Van Loan's Augmented Exponentials 05 Regularisation
4 Introducing Optimal Control
5 Optimal Control Theory. Optimal control can be thought of as an algorithm: there is a start and a stop. Specifically, we can think of a dynamic system having an initial state and a target state. Optimal control finds an algorithmic solution that reaches the target with a minimum of effort.
6 Newton-Raphson method
7 The Newton-Raphson method. Taylor's Theorem: the Taylor series approximated to second order [1]. If $f$ is continuously differentiable,
\[ f(x + p) = f(x) + \nabla f(x + tp)^T p \quad \text{for some } t \in (0, 1). \]
If $f$ is twice continuously differentiable,
\[ f(x + p) = f(x) + \nabla f(x)^T p + \tfrac{1}{2}\, p^T \nabla^2 f(x + \alpha p)\, p \quad \text{for some } \alpha \in (0, 1). \]
1st order necessary condition: $\nabla f(x^*) = 0$. 2nd order necessary condition: $\nabla^2 f(x^*)$ is positive semidefinite. Finding a minimum of the function using tangents of the gradient. [Figure: Newton iterates $x_0 \to x_1 \to x_2$ built from tangents, with $f'(x) = x^2 - 1$, $f''(x_0) = 2x_0$, $f''(x_1) = 2x_1$, for $f(x) = \frac{x^3}{3} - x + 1$.]
[1] B. Taylor. Inny, 1717; J. Nocedal and S. J. Wright.
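To make the tangent construction on this slide concrete, here is a minimal Python sketch (not from the slides) of Newton's method applied to the stationary-point equation $f'(x) = 0$ for the pictured function $f(x) = x^3/3 - x + 1$; the starting guess $x_0 = 2$ is a hypothetical choice.

```python
# Newton's method on the gradient: x_{k+1} = x_k - f'(x_k)/f''(x_k)

def fp(x):   # f'(x) = x^2 - 1
    return x**2 - 1.0

def fpp(x):  # f''(x) = 2x
    return 2.0 * x

x = 2.0                        # hypothetical starting guess x0
for k in range(6):
    x = x - fp(x) / fpp(x)     # Newton step on the gradient
    print(k, x)                # converges quadratically to x* = 1
```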
8 Numerical Optimisation Algorithms.
Gradient Descent: step in the direction opposite to the local gradient,
\[ f(\vec{x} + \Delta\vec{x}) = f(\vec{x}) + \nabla f(\vec{x})^T \Delta\vec{x}. \]
Newton-Raphson: quadratic approximation of the objective function, moving to this minimum,
\[ f(\vec{x} + \Delta\vec{x}) = f(\vec{x}) + \nabla f(\vec{x})^T \Delta\vec{x} + \tfrac{1}{2}\Delta\vec{x}^T \mathbf{H}\, \Delta\vec{x}. \]
Quasi-Newton BFGS: approximate $\mathbf{H}$ with information from the gradient history,
\[ \mathbf{H}_{k+1} = \mathbf{H}_k + \frac{\vec{g}_k \vec{g}_k^T}{\vec{g}_k^T \Delta\vec{x}_k} - \frac{(\mathbf{H}_k \Delta\vec{x}_k)(\mathbf{H}_k \Delta\vec{x}_k)^T}{\Delta\vec{x}_k^T \mathbf{H}_k \Delta\vec{x}_k}. \]
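As a sketch of how the BFGS update on this slide behaves, the following Python fragment (assumptions: numpy available; a toy quadratic objective $f(x) = \frac{1}{2}x^T A x$ whose true Hessian is a 2×2 matrix $A$) feeds the update a sequence of step/gradient-difference pairs and watches the approximation come to resemble $A$.

```python
import numpy as np

def bfgs_update(H, dx, g):
    """One BFGS update of the Hessian approximation:
    H_{k+1} = H_k + g g^T/(g^T dx) - (H dx)(H dx)^T/(dx^T H dx)."""
    Hdx = H @ dx
    return H + np.outer(g, g) / (g @ dx) - np.outer(Hdx, Hdx) / (dx @ Hdx)

A = np.array([[3.0, 1.0], [1.0, 2.0]])   # true Hessian of f(x) = 0.5 x^T A x
H = np.eye(2)                            # quasi-Newton start: identity matrix
rng = np.random.default_rng(0)
for _ in range(20):
    dx = 0.1 * rng.standard_normal(2)    # a step in parameter space
    g = A @ dx                           # gradient difference over that step
    H = bfgs_update(H, dx, g)
print(np.round(H, 3))                    # in practice, approaches A
```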
9 The Hessian Matrix. The Newton step: $p_k^N = -\mathbf{H}_k^{-1} \nabla f_k$, where $\nabla^2 f_k = \mathbf{H}_k$ is the Hessian matrix, the matrix of second-order partial derivatives [2]:
\[ \mathbf{H} = \begin{pmatrix}
\frac{\partial^2 f}{\partial x_1^2} & \frac{\partial^2 f}{\partial x_1 \partial x_2} & \cdots & \frac{\partial^2 f}{\partial x_1 \partial x_n} \\
\frac{\partial^2 f}{\partial x_2 \partial x_1} & \frac{\partial^2 f}{\partial x_2^2} & \cdots & \frac{\partial^2 f}{\partial x_2 \partial x_n} \\
\vdots & \vdots & \ddots & \vdots \\
\frac{\partial^2 f}{\partial x_n \partial x_1} & \frac{\partial^2 f}{\partial x_n \partial x_2} & \cdots & \frac{\partial^2 f}{\partial x_n^2}
\end{pmatrix} \]
The steepest descent method results from the same equation when we set $\mathbf{H}$ to the identity matrix. Quasi-Newton methods initialise $\mathbf{H}$ to the identity matrix, then approximate it from an update formula using a gradient history. The Hessian proper must be positive definite (and quite well conditioned) to have a usable inverse; an indefinite Hessian results in non-descent search directions.
[2] L. O. Hesse. Chelsea, 1972.
10 Gradient and Hessian
11 GRAPE Gradient Ascent Pulse Engineering. Split the Liouvillian into controllable and uncontrollable parts [3]:
\[ \hat{L}(t) = \hat{H}_0 + \sum_k c^{(k)}(t)\, \hat{H}_k \]
Maximise the fidelity measure
\[ J = \mathrm{Re}\,\left\langle \sigma \right| \exp_{(0)}\!\left[ -i \int_0^T \hat{L}(t)\, dt \right] \left| \rho(0) \right\rangle \]
Optimality conditions: $\partial J / \partial c^{(k)}(t) = 0$ at a minimum, and the Hessian matrix should be positive definite. Discretize the time into small fixed intervals during which the control functions are assumed to be constant (piecewise-constant approximation). [Figure: piecewise-constant control amplitudes $c^{(k)}_n$ over time slices $t_1 \ldots t_n \ldots t_N$.]
[3] N. Khaneja et al. In: Journal of Magnetic Resonance 172 (2005), pp. 296-305.
12 GRAPE gradient. The gradient is found from forward and backward propagation:
\[ J = \langle \sigma |\, \hat{P}_N \hat{P}_{N-1} \hat{P}_{N-2}\, \hat{P}_{N-3}\, \hat{P}_{N-4} \cdots \hat{P}_3 \hat{P}_2 \hat{P}_1\, | \rho_0 \rangle \]
\[ \frac{\partial J}{\partial c^{(k)}_{N-3}} = \langle \sigma |\, \overbrace{\hat{P}_N \hat{P}_{N-1} \hat{P}_{N-2}}^{\text{(II) propagate backwards from target}}\, \underbrace{\frac{\partial \hat{P}_{N-3}}{\partial c^{(k)}_{N-3}}}_{\text{(III) expectation of the derivative}}\, \underbrace{\hat{P}_{N-4} \cdots \hat{P}_3 \hat{P}_2 \hat{P}_1\, | \rho_0 \rangle}_{\text{(I) propagate forwards from source}} \]
Propagator over a time slice:
\[ \hat{P}_n = \exp\!\left[ -i \left( \hat{H}_0 + \sum_k c^{(k)}_n \hat{H}_k \right) \Delta t \right] \]
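A minimal Python sketch of this forward-backward scheme on a hypothetical two-level system (the operators sz and sx, the states, and the first-order approximation $\partial \hat{P}_n / \partial c_n \approx -i\hat{H}_k \Delta t\, \hat{P}_n$ are all illustrative assumptions; the exact directional derivative via augmented exponentials follows on the next slides):

```python
import numpy as np
from scipy.linalg import expm

sx = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)    # control operator
sz = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)   # drift operator
dt, N = 0.1, 50
c = 0.5 * np.ones(N)                           # piecewise-constant amplitudes
rho0 = np.array([1.0, 0.0], dtype=complex)     # source state
sigma = np.array([0.0, 1.0], dtype=complex)    # target state

P = [expm(-1j * (sz + ck * sx) * dt) for ck in c]   # slice propagators

fwd = [rho0]                                   # (I) forwards from the source
for Pn in P:
    fwd.append(Pn @ fwd[-1])
bwd = [sigma]                                  # (II) backwards from the target
for Pn in reversed(P):
    bwd.append(Pn.conj().T @ bwd[-1])
bwd.reverse()                   # bwd[n] = P_{n+1}^† ... P_N^† sigma

# (III) expectation of the (first-order) derivative in each time slice
grad = [np.real(bwd[n + 1].conj() @ (-1j * sx * dt) @ fwd[n + 1])
        for n in range(N)]
print(grad[:3])
```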
13 GRAPE Hessian. (Block) diagonal elements:
\[ \frac{\partial^2 J}{\partial {c^{(k)}_n}^2} = \langle \sigma |\, \hat{P}_N \hat{P}_{N-1} \cdots \frac{\partial^2 \hat{P}_n}{\partial {c^{(k)}_n}^2} \cdots \hat{P}_2 \hat{P}_1\, | \psi_0 \rangle \]
Non-diagonal elements:
\[ \frac{\partial^2 J}{\partial c^{(k)}_{n_2}\,\partial c^{(k)}_{n_1}} = \langle \sigma |\, \hat{P}_N \hat{P}_{N-1} \cdots \frac{\partial \hat{P}_{n_2}}{\partial c^{(k)}_{n_2}} \cdots \frac{\partial \hat{P}_{n_1}}{\partial c^{(k)}_{n_1}} \cdots \hat{P}_2 \hat{P}_1\, | \psi_0 \rangle \]
All propagator derivatives of the non-diagonal blocks have been calculated within a gradient calculation, and can be reused. Only the diagonal blocks need to be found.
14 Van Loan's Augmented Exponentials
15 Efficient Gradient Calculation: Augmented Exponentials. Among the many complicated functions encountered in the magnetic resonance simulation context, chained exponential integrals involving square matrices $A_k$ and $B_k$ occur particularly often:
\[ \int_0^t dt_1 \int_0^{t_1} dt_2 \cdots \int_0^{t_{n-2}} dt_{n-1} \left\{ e^{A_1(t - t_1)} B_1\, e^{A_2(t_1 - t_2)} B_2 \cdots B_{n-1}\, e^{A_n t_{n-1}} \right\} \]
A method for computing integrals of this general type was proposed by Van Loan in 1978 [4] (pointed out by Sophie Schirmer [5]). Using this augmented exponential technique, we can write an upper-triangular block matrix exponential as
\[ \exp\!\begin{pmatrix} A & B \\ 0 & A \end{pmatrix} t = \begin{pmatrix} e^{At} & \int_0^t e^{A(t-s)} B\, e^{As}\, ds \\ 0 & e^{At} \end{pmatrix}, \qquad \exp\!\begin{pmatrix} A & B \\ 0 & A \end{pmatrix} = \begin{pmatrix} e^{A} & \int_0^1 e^{A(1-s)} B\, e^{As}\, ds \\ 0 & e^{A} \end{pmatrix} \]
[4] C. F. Van Loan. In: IEEE Transactions on Automatic Control 23.3 (1978), pp. 395-404. [5] F. F. Floether, P. de Fouquieres and S. G. Schirmer. In: New Journal of Physics 14.7 (2012).
16 Efficient Gradient Calculation: Augmented Exponentials. Find the derivative of the control pulse propagator at a specific time point: set $A = -i\hat{L}\Delta t$ and $B = -i\hat{H}^{(k)}_n \Delta t$, so that
\[ \int_0^1 e^{A(1-s)} B\, e^{As}\, ds = \mathcal{D}_{c_n} \exp\!\left( -i\hat{L}\Delta t \right) \]
leading to an efficient calculation of the gradient element:
\[ \exp\!\begin{pmatrix} -i\hat{L}\Delta t & -i\hat{H}^{(k)}_n \Delta t \\ 0 & -i\hat{L}\Delta t \end{pmatrix} = \begin{pmatrix} e^{-i\hat{L}\Delta t} & \dfrac{\partial e^{-i\hat{L}\Delta t}}{\partial c^{(k)}_n} \\ 0 & e^{-i\hat{L}\Delta t} \end{pmatrix} \]
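A minimal Python sketch of this 2×2 construction (same illustrative two-level operators as before; scipy's expm does the work), with a finite-difference check of the extracted block:

```python
import numpy as np
from scipy.linalg import expm

sx = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
sz = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)
dt, c = 0.1, 0.5

A = -1j * (sz + c * sx) * dt          # -i L dt for one time slice
B = -1j * sx * dt                     # -i H_k dt, the derivative direction
aug = np.block([[A, B],
                [np.zeros_like(A), A]])
E = expm(aug)
P  = E[:2, :2]                        # e^{-i L dt}
dP = E[:2, 2:]                        # exact dP/dc from the top-right block

eps = 1e-6                            # central finite-difference check
fd = (expm(-1j * (sz + (c + eps) * sx) * dt)
      - expm(-1j * (sz + (c - eps) * sx) * dt)) / (2 * eps)
print(np.max(np.abs(dP - fd)))        # small: finite-difference error only
```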
17 Efficient Hessian Calculation: Augmented Exponentials. Second-order derivatives can be calculated with a 3×3 augmented exponential [6]: set $B_n = -i\hat{H}^{(k)}_n \Delta t$, so that
\[ \int_0^1 \int_0^s e^{A(1-s)} B_{n_1}\, e^{A(s-r)} B_{n_2}\, e^{Ar}\, dr\, ds = \mathcal{D}^2_{c_{n_1} c_{n_2}} \exp\!\left( -i\hat{L}\Delta t \right) \]
giving the efficient Hessian element calculation
\[
\exp\begin{pmatrix}
-i\hat{L}\Delta t & -i\hat{H}^{(k_1)}_{n_1}\Delta t & 0 \\
0 & -i\hat{L}\Delta t & -i\hat{H}^{(k_2)}_{n_2}\Delta t \\
0 & 0 & -i\hat{L}\Delta t
\end{pmatrix}
=
\begin{pmatrix}
e^{-i\hat{L}\Delta t} & \dfrac{\partial e^{-i\hat{L}\Delta t}}{\partial c^{(k_1)}_{n_1}} & \dfrac{1}{2}\dfrac{\partial^2 e^{-i\hat{L}\Delta t}}{\partial c^{(k_1)}_{n_1}\,\partial c^{(k_2)}_{n_2}} \\
0 & e^{-i\hat{L}\Delta t} & \dfrac{\partial e^{-i\hat{L}\Delta t}}{\partial c^{(k_2)}_{n_2}} \\
0 & 0 & e^{-i\hat{L}\Delta t}
\end{pmatrix}
\]
[6] T. F. Havel, I. Najfeld and J. X. Yang. In: Proceedings of the National Academy of Sciences (1994); I. Najfeld and T. Havel. In: Advances in Applied Mathematics 16.3 (1995).
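The same idea extends to the Hessian element in a short sketch (same illustrative operators; the factor of two undoes the 1/2 carried by the top-right block):

```python
import numpy as np
from scipy.linalg import expm

sx = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
sy = 0.5 * np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)
dt = 0.1

A  = -1j * (sz + 0.5 * sx + 0.3 * sy) * dt   # one slice with two controls
B1 = -1j * sx * dt                           # first derivative direction
B2 = -1j * sy * dt                           # second derivative direction
Z  = np.zeros_like(A)

aug = np.block([[A, B1, Z],
                [Z, A,  B2],
                [Z, Z,  A]])
E = expm(aug)
d2P = 2.0 * E[:2, 4:]     # mixed second derivative d^2 P / (dc1 dc2)
print(d2P)
```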
18 Overtone excitation: Simulation. Excite $^{14}$N from state $\hat{T}_{1,0}$ to state $\hat{T}_{2,2}$. Solid-state powder average, with the objective functional weighted over the crystalline orientations (rank-17 Lebedev grid points). Nuclear quadrupolar interaction. 400 time points for a total pulse duration of 40 µs.
19 Overtone excitation: Comparison of BFGS and Newton-Raphson. [Figure: fidelity versus iterates for BFGS and Newton.]
20 Regularisation
21 Problems: Singularities everywhere! BFGS (using the DFP formula) is guaranteed to produce a positive definite Hessian update; the Newton-Raphson method, $p_k^N = -\mathbf{H}_k^{-1} \nabla f_k$, is not. Properties of the Hessian matrix: 1. Must be symmetric: $\frac{\partial^2 J}{\partial c^{(i)} \partial c^{(j)}} = \frac{\partial^2 J}{\partial c^{(j)} \partial c^{(i)}}$; the computed Hessian is not automatically symmetric unless the control operators commute. 2. Must be sufficiently positive definite; non-singular; invertible. 3. The Hessian is diagonally dominant.
22 Regularisation: Avoiding Singularities. Negative eigenvalues are common, so regularise the Hessian to be positive definite [7]. Check the eigenvalues by performing an eigendecomposition of the Hessian matrix: $\mathbf{H} = Q \Lambda Q^{-1}$. Add the magnitude of the smallest negative eigenvalue to all eigenvalues, then reform the Hessian with the initial eigenvectors:
\[ \lambda_{\min} = \max\left[0, -\min(\Lambda)\right], \qquad \mathbf{H}_{\mathrm{reg}} = Q(\Lambda + \lambda_{\min}\hat{I})Q^{-1} \]
TRM: introduce a constant $\delta$, the radius of a region we trust to give a sufficiently positive definite Hessian: $\mathbf{H}_{\mathrm{reg}} = Q(\Lambda + \delta\lambda_{\min}\hat{I})Q^{-1}$. However, if $\delta$ is too small, the Hessian will become ill-conditioned. RFO: the method proceeds to construct an augmented Hessian matrix
\[ \mathbf{H}_{\mathrm{aug}} = \begin{pmatrix} \delta^2 \mathbf{H} & \delta \vec{g} \\ \delta \vec{g}^T & 0 \end{pmatrix} = Q \Lambda Q^{-1} \]
[7] X. P. Resina. PhD thesis. Universitat Autònoma de Barcelona, 2004.
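A minimal Python sketch of the eigenvalue-shift regularisation described above (the small extra offset delta, in the spirit of the TRM variant, is an illustrative addition that keeps the shifted Hessian invertible):

```python
import numpy as np

def regularise(H, delta=1e-8):
    """Shift eigenvalues so a symmetric Hessian H becomes positive definite."""
    lam, Q = np.linalg.eigh(H)             # H = Q diag(lam) Q^T
    shift = max(0.0, -lam.min()) + delta   # zero shift if already pos. def.
    return Q @ np.diag(lam + shift) @ Q.T

H = np.array([[2.0, 0.5], [0.5, -1.0]])    # indefinite toy Hessian
print(np.linalg.eigvalsh(regularise(H)))   # all eigenvalues now positive
```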
23 State transfer. Controls := $\{\hat{L}^{(H)}_x, \hat{L}^{(H)}_y, \hat{L}^{(C)}_x, \hat{L}^{(C)}_y, \hat{L}^{(F)}_x, \hat{L}^{(F)}_y\}$; valid vs. invalid parametrisation of Lie groups. [Diagram: J-couplings, H-C +140 Hz and C-F 160 Hz.] Interaction parameters of a hydrofluorocarbon molecular group used in state transfer simulations, with a magnetic induction of 9.4 Tesla.
24 State transfer: Comparison of BFGS and Newton-Raphson I. [Figure: fidelity and 1-fidelity versus iterations and wall clock time (s).]
25 State transfer: Comparison of BFGS and Newton-Raphson II. [Figure: fidelity versus iterations and wall clock time (s).]
26 Closing Remarks. A Hessian calculation that scales with $O(n)$ computations; efficient directional derivative calculation with augmented exponentials. Open questions: better regularisation and conditioning; infeasible start points; different line search methods; forced symmetry of the Hessian.
More information