Continuous Optimization
1 Continuous Optimization
Sanzheng Qiao
Department of Computing and Software, McMaster University
March, 2009
2 Outline
1 Introduction
2 Golden Section Search
3 Multivariate Functions
    Steepest Descent Method
4 Linear Least Squares Problem
5 Nonlinear Least Squares
    Newton's Method
    Gauss-Newton Method
6 Software Packages
4 Problem setting
Single variable functions. Minimization:
    min_{x ∈ S} f(x)
f(x): objective function, single variable and real-valued
S: support
8 Golden section search
Assumption: f(x) has a unique global minimum in [a, b].
If x* is the minimizer, then f(x) monotonically decreases on [a, x*] and monotonically increases on [x*, b].
Algorithm. Choose interior points c, d:
    c = a + r(b − a)
    d = a + (1 − r)(b − a),   0 < r < 0.5
if f(c) ≤ f(d)
    b = d
else
    a = c
end
At each step, the length of the interval is reduced by a factor of (1 − r).
9 Golden section search (cont.)
The choice of r:
    When f(c) ≤ f(d), d_+ = c (the next d is c).
    When f(c) > f(d), c_+ = d (the next c is d).
Why? Reusing an interior point means only one new function evaluation per step.
10 Choice of r
When f(c) ≤ f(d): b_+ = d, so
    d_+ = a + (1 − r)(b_+ − a) = a + (1 − r)(d − a) = a + (1 − r)²(b − a).
Then d_+ = c means
    a + (1 − r)²(b − a) = a + r(b − a),
which implies (1 − r)² = r.
When f(c) > f(d): a_+ = c, and c_+ = d means
    c_+ = c + r(b − c) = a + (2r − r²)(b − a) = a + (1 − r)(b − a),
which also implies (1 − r)² = r.
Thus we have
    r = (3 − √5)/2 ≈ 0.382.
11 Algorithm
c = a + r*(b - a);     fc = f(c);
d = a + (1-r)*(b - a); fd = f(d);
if fc <= fd
    b = d; fb = fd;
    d = c; fd = fc;
    c = a + r*(b-a); fc = f(c);
else
    a = c; fa = fc;
    c = d; fc = fd;
    d = a + (1-r)*(b-a); fd = f(d);
end
13 Convergence and termination
Convergence rate: each step reduces the length of the interval by a factor of
    1 − r = (√5 − 1)/2 ≈ 0.618
(linear convergence).
Termination criterion:
    (d − c) ≤ u · max(|c|, |d|),
where u is the roundoff unit, or a user-supplied tolerance.
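For concreteness, the pieces above can be assembled into a complete routine. This is a sketch only: the function interface, the mixed relative/absolute stopping test, and the midpoint return value are additions, not from the slides.

% Golden section search for the minimizer of f on [a, b],
% assuming f has a unique global minimum in [a, b].
function [xmin, fval] = golden(f, a, b, tol)
    r = (3 - sqrt(5))/2;                    % golden section ratio, ~0.382
    c = a + r*(b - a);      fc = f(c);
    d = a + (1-r)*(b - a);  fd = f(d);
    while (d - c) > tol*(1 + max(abs(c), abs(d)))
        if fc <= fd
            b = d;                          % minimizer lies in [a, d]
            d = c;  fd = fc;                % reuse c as the new d ...
            c = a + r*(b - a);  fc = f(c);  % ... so only one new evaluation
        else
            a = c;                          % minimizer lies in [c, b]
            c = d;  fc = fd;                % reuse d as the new c
            d = a + (1-r)*(b - a);  fd = f(d);
        end
    end
    xmin = (c + d)/2;  fval = f(xmin);
end

% Usage: [x, fx] = golden(@(t) (t - 2)^2 + 1, 0, 5, 1e-10)   % x ≈ 2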
17 Problem setting
    min f(x)
where x is a vector (of variables x_1, x_2, …, x_n).
Gradient:
    ∇f(x_c) = [∂f(x_c)/∂x_1, …, ∂f(x_c)/∂x_n]^T
−∇f(x_c): the direction of greatest decrease of f from x_c.
19 Steepest descent method
Idea:
    Steepest descent direction: s_c = −∇f(x_c);
    find λ_c such that f(x_c + λ_c s_c) ≤ f(x_c + λ s_c) for all λ ∈ ℝ (a single-variable minimization problem);
    update x_+ = x_c + λ_c s_c.
Remark. Conjugate gradient method: use conjugate directions in place of the gradient.
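Here is a minimal MATLAB sketch of one steepest-descent loop (not from the slides); the handles f and gradf, the finite bracket [0, 100] passed to fminbnd, and the stopping test are all illustrative assumptions.

% Steepest descent with a one-dimensional line search via fminbnd.
function x = steepest_descent(f, gradf, x0, tol, maxit)
    x = x0;
    for k = 1:maxit
        s = -gradf(x);                     % steepest descent direction
        if norm(s) < tol, break; end       % gradient nearly zero: stop
        phi = @(lambda) f(x + lambda*s);   % single-variable subproblem
        lambda = fminbnd(phi, 0, 100);     % line search over a finite bracket
        x = x + lambda*s;                  % update the iterate
    end
end

% Usage: minimize f(x) = x_1^2 + 5*x_2^2 from [1; 1]:
% xmin = steepest_descent(@(x) x(1)^2 + 5*x(2)^2, @(x) [2*x(1); 10*x(2)], [1; 1], 1e-8, 500)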
21 Problem setting
Given a matrix A (m-by-n, m ≥ n) and b (m-by-1), find x (n-by-1) minimizing ‖Ax − b‖₂².
Example. Square root problem revisited. Find a_1 and a_2 in y(x) = a_1 x + a_2 such that
    (y(0.25) − √0.25)² + (y(0.5) − √0.5)² + (y(1.0) − √1.0)²
is minimized. In matrix-vector form:
    A = [0.25 1; 0.5 1; 1.0 1],   x = [a_1; a_2],   b = [√0.25; √0.5; √1.0]
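A quick numerical check of this example, a sketch using MATLAB's backslash operator (for rectangular systems, backslash itself relies on the QR-based machinery developed on the following slides):

% Fit y(x) = a1*x + a2 to sqrt(x) at x = 0.25, 0.5, 1.0 in the LS sense.
xs   = [0.25; 0.5; 1.0];
A    = [xs, ones(3,1)];        % columns: x and 1
b    = sqrt(xs);               % data values sqrt(x)
coef = A \ b;                  % least squares solution [a1; a2]
res  = norm(A*coef - b)^2;     % the minimized sum of squares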
22 Method
Transform A into triangular form:
    PA = [R; 0]
where R is upper triangular. Then the problem becomes
    ‖Ax − b‖₂² = ‖P⁻¹([R; 0]x − Pb)‖₂².
23 Method (cont.)
Desirable properties of P:
    P⁻¹ is easy to compute;
    ‖P⁻¹z‖₂² = ‖z‖₂² for any z.
Partitioning Pb = [b_1; b_2] conformally with [R; 0], the LS solution is the solution of the triangular system
    Rx = b_1.
24 Choice of P
Orthogonal matrix (transformation) Q: Q⁻¹ = Q^T.
Example. Givens rotation
    G = [cos θ   sin θ
        −sin θ   cos θ]
Introducing a zero into a 2-vector:
    G [x_1; x_2] = [√(x_1² + x_2²); 0]
i.e., rotate x onto the x_1-axis.
25 Givens rotation
    cos θ = x_1/√(x_1² + x_2²),   sin θ = x_2/√(x_1² + x_2²)
Algorithm.
if x(2) == 0
    c = 1.0; s = 0.0;
elseif abs(x(2)) >= abs(x(1))
    ct = x(1)/x(2);
    s = 1/sqrt(1 + ct*ct);
    c = s*ct;
else
    t = x(2)/x(1);
    c = 1/sqrt(1 + t*t);
    s = c*t;
end
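Wrapped as a reusable function (a sketch; the name givens2 is an addition, not from the slides):

function [c, s] = givens2(x)
% GIVENS2  c = cos(theta), s = sin(theta) such that
%          [c s; -s c]*x = [r; 0], computed without forming x(1)^2 + x(2)^2.
    if x(2) == 0
        c = 1.0; s = 0.0;
    elseif abs(x(2)) >= abs(x(1))
        ct = x(1)/x(2);                % cotangent of theta
        s = 1/sqrt(1 + ct*ct);
        c = s*ct;
    else
        t = x(2)/x(1);                 % tangent of theta
        c = 1/sqrt(1 + t*t);
        s = c*t;
    end
end

% Usage: [c, s] = givens2([3; 4]); [c s; -s c]*[3; 4]   % gives [5; 0]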
26 Givens rotation (cont.)
In general, for a 4-vector, a rotation in the (1,3) plane:
    G_13 = [c  0  s  0
            0  1  0  0
           −s  0  c  0
            0  0  0  1]
    G_13 [x_1; x_2; x_3; x_4] = [√(x_1² + x_3²); x_2; 0; x_4]
Select a pair (x_i, x_j), find a rotation G_ij to eliminate x_j.
27 QR factorization
Illustrated for a 4-by-3 A:
    G_34 G_24 G_23 G_14 G_13 G_12 A = [R; 0]
    Q = G_12^T G_13^T G_14^T G_23^T G_24^T G_34^T
    A = Q [R; 0]
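A sketch of this triangularization for a general m-by-n A (m ≥ n), using the givens2 helper above and applying the same rotations to b, as the framework slide below prescribes:

% QR triangularization by Givens rotations, applied to A and b together.
function [R, bt] = givens_qr(A, b)
    [m, n] = size(A);
    for j = 1:n                                 % for each column
        for i = m:-1:j+1                        % zero the entries below the diagonal
            [c, s] = givens2([A(j,j); A(i,j)]);
            G = [c s; -s c];
            A([j i], j:n) = G * A([j i], j:n);  % rotate rows j and i of A
            b([j i])      = G * b([j i]);       % same rotation applied to b
        end
    end
    R  = A(1:n, 1:n);                           % upper triangular factor
    bt = b;                                     % transformed right-hand side Pb
end

% LS solve: [R, bt] = givens_qr(A, b); x = R \ bt(1:n);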
28 Householder transformation
Basically, in the QR decomposition, we introduce zeros below the main diagonal of A using orthogonal transformations. Another example: the Householder transformation
    H = I − 2uu^T,   with u^T u = 1.
H is symmetric and orthogonal (H² = I).
Goal: Ha = αe_1. Choose u = a ± ‖a‖₂ e_1 (and then normalize).
(Figure omitted: geometric interpretation, showing the reflection of a onto the e_1 direction.)
29 Householder transformation (cont.)
Normalize u using ‖u‖₂² = 2(‖a‖₂² ± a_1 ‖a‖₂) for efficiency.
Algorithm. Given an n-vector x, this algorithm returns σ, α, and u such that (I − σ⁻¹uu^T)x = −αe_1 (note the sign that results from this choice of u):
m = max(abs(x));
u = x/m;
alpha = sign(u(1))*norm(u);
u(1) = u(1) + alpha;
sigma = alpha*u(1);
alpha = m*alpha;
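The same steps wrapped into a function, with a small numerical check (a sketch; the name house is an addition, and the sign of the result is worth verifying against your own convention):

% Householder vector: returns u, sigma, alpha with (I - uu'/sigma)*x = -alpha*e1.
function [u, sigma, alpha] = house(x)
    m = max(abs(x));               % scale to avoid overflow/underflow
    u = x/m;
    alpha = sign(u(1))*norm(u);    % same sign as x(1): avoids cancellation
    u(1) = u(1) + alpha;
    sigma = alpha*u(1);            % equals ||u||^2 / 2
    alpha = m*alpha;               % undo the scaling
end

% Check:
% x = [3; 4; 12];                  % ||x|| = 13
% [u, sigma, alpha] = house(x);
% x - (u'*x/sigma)*u               % gives [-13; 0; 0], and alpha = 13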
30 Framework
A framework of the QR decomposition method for solving the linear least squares problem min ‖Ax − b‖₂:
    use orthogonal transformations to triangularize A, applying the same transformations to b simultaneously;
    solve the resulting triangular system.
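Putting the framework together on the square root example, a sketch using MATLAB's built-in qr (any of the Givens or Householder constructions above could be substituted):

% Solve min ||Ax - b||_2 for the square root fitting example via QR.
xs = [0.25; 0.5; 1.0];
A  = [xs, ones(3,1)];
b  = sqrt(xs);
[Q, R] = qr(A);                    % A = Q*R with Q orthogonal, R upper triangular
c  = Q' * b;                       % transform b simultaneously: c = Q'*b
x  = R(1:2, 1:2) \ c(1:2);         % solve the triangular system Rx = b1
% abs(c(3)) is the residual norm ||A*x - b||_2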
33 Problem setting
Multivariate vector-valued function
    f(x) = [f_1(x); …; f_m(x)] ∈ ℝ^m,   x ∈ ℝ^n.
Find the solution of
    min_x ρ(x),   ρ(x) = (1/2) ∑_{i=1}^m f_i(x)².
Application: model fitting problems.
36 Newton's Method
Idea: solve ∇ρ(x) = 0 (a root-finding problem).
At each step, find the correction s_c (x_+ = x_c + s_c) satisfying
    ∇²ρ(x_c) s_c = −∇ρ(x_c).
Note. This is Newton's method for solving nonlinear systems.
41 Newton's method (cont.)
What is the gradient ∇ρ(x_c)?
    ∇ρ(x_c) = J(x_c)^T f(x_c),
where the Jacobian
    J(x_c) = [∂f_i(x_c)/∂x_j].
How to get ∇²ρ(x_c)?
    ∇²ρ(x_c) = J(x_c)^T J(x_c) + ∑_{i=1}^m f_i(x_c) ∇²f_i(x_c).
If x* fits the model well (f_i(x*) ≈ 0) and x_c is close to x*, then f_i(x_c) ≈ 0, and
    ∇²ρ(x_c) ≈ J(x_c)^T J(x_c).
44 Gauss-Newton Method
Evaluate f_c = f(x_c) and compute the Jacobian J_c = J(x_c);
Solve (J_c^T J_c) s_c = −J_c^T f_c for s_c;
Update x_+ = x_c + s_c.
Note. s_c is the solution of the normal equations for the linear least squares problem
    min_s ‖J_c s + f_c‖₂,
so reliable methods such as the QR decomposition method can be used to solve for s_c.
Remark. The Gauss-Newton method works well on small-residual (f_i(x*) ≈ 0) problems.
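A minimal Gauss-Newton sketch in MATLAB, using a QR-based least squares solve (backslash) for the correction, as the note suggests; the function name, the handles fun and jac, and the exponential model in the usage comment are illustrative assumptions, not from the slides.

% Gauss-Newton iteration for min (1/2) * sum_i f_i(x)^2.
% fun(x) returns the m-vector f(x); jac(x) returns the m-by-n Jacobian J(x).
function x = gauss_newton(fun, jac, x0, tol, maxit)
    x = x0;
    for k = 1:maxit
        fc = fun(x);
        Jc = jac(x);
        s  = -(Jc \ fc);           % LS solve of min ||Jc*s + fc||_2 via QR
        x  = x + s;
        if norm(s) <= tol*(1 + norm(x)), break; end
    end
end

% Usage: fit y = c1*exp(c2*t) to data (t, y), residuals f_i(c) = c1*exp(c2*t_i) - y_i:
% fun = @(c) c(1)*exp(c(2)*t) - y;
% jac = @(c) [exp(c(2)*t), c(1)*t.*exp(c(2)*t)];
% c = gauss_newton(fun, jac, [1; 0], 1e-10, 50);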
46 Software packages
IMSL: uvmif, uminf, umiah, unlsf, flprs, nconf, ncong
MATLAB: fmin, fmins, leastsq, lp, constr
NAG: e04abf, e04jaf, e04laf, e04fdf, e04mbf, e04vdf
MINPACK: lmdif1
NETLIB: varpro, dqed
Octave: sqp, ols, gls
47 Summary
Problem setting: real-valued objective function.
Golden section search: convergence rate.
Direction of descent: steepest descent.
Linear least squares: data fitting; QR decomposition, i.e., triangularization of a matrix using orthogonal transformations (Givens rotations, Householder transformations).
Nonlinear least squares: Newton's method (relation with solving nonlinear systems), Gauss-Newton method (relation with solving linear least squares).