ECE 595, Section 10 Numerical Simulations Lecture 7: Optimization and Eigenvalues. Prof. Peter Bermel January 23, 2013



Outline
- Recap from Friday
- Optimization methods: Brent's method, golden section search, downhill simplex, conjugate gradient methods, multiple level, single linkage (MLSL)
- Eigenproblems: power method, factorization methods

Recap from Friday
Methods for finding zeros:
- Bisection: bracket, then continuously halve the interval
- Newton-Raphson method: uses the tangent
- Laguerre's method: for polynomials
- Brent's method: inverse quadratic interpolation
Key distinctions in optimization:
- Convexity: refers to problem difficulty
- Locality: how widely to search
- Gradient-based methods: accelerate the search

Brent's Method: Finding Optima
Assumes the function is smooth and roughly parabolic near the extremum.
Algorithm:
- Evaluate the function at the bracket endpoints and center
- Fit a parabola through the three points
- Find the parabola's vertex x_m and evaluate f(x_m)
- Keep the two closest points as the new bracket and repeat until the bracket closes around the optimum
- Infer the optimum from the final parabola fit
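The parabola-fitting step at the heart of this method can be sketched as follows (a minimal illustration of one interpolation step, not the full routine with its golden-section fallback; the function name is made up for this example):

```python
def parabolic_min_step(f, a, b, c):
    """One parabolic-interpolation step: fit a parabola through
    (a, f(a)), (b, f(b)), (c, f(c)) and return its vertex."""
    fa, fb, fc = f(a), f(b), f(c)
    num = (b - a)**2 * (fb - fc) - (b - c)**2 * (fb - fa)
    den = (b - a) * (fb - fc) - (b - c) * (fb - fa)
    return b - 0.5 * num / den

# For an exactly quadratic function, a single step lands on the minimum:
x = parabolic_min_step(lambda t: (t - 2.0)**2 + 1.0, 0.0, 1.0, 3.0)
```

A full implementation must also guard against a degenerate (nearly collinear) fit, where the denominator approaches zero.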

Golden Section Search
Closely related to the bisection approach to finding roots.
Algorithm:
- Take a downhill step
- Bracket the lowest point with higher values on each side
- Place a new trial point a golden-section fraction (about 0.382) of the way into the larger sub-interval, and discard the sub-interval that cannot contain the minimum
- Keep repeating until the interval is tightly around the minimum (a relative width of order the square root of machine epsilon is the practical limit)
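A minimal golden-section search along these lines (names and tolerance are illustrative; for brevity it re-evaluates both interior points each pass, whereas a careful implementation reuses one of them):

```python
import math

def golden_section_min(f, a, b, tol=1e-8):
    """Golden-section search for the minimizer of a unimodal f on [a, b]."""
    invphi = (math.sqrt(5.0) - 1.0) / 2.0   # ~0.618, golden ratio conjugate
    c = b - invphi * (b - a)                # interior point at 0.382 fraction
    d = a + invphi * (b - a)                # interior point at 0.618 fraction
    while abs(b - a) > tol:
        if f(c) < f(d):
            b, d = d, c                     # minimum lies in [a, d]
            c = b - invphi * (b - a)
        else:
            a, c = c, d                     # minimum lies in [c, b]
            d = a + invphi * (b - a)
    return 0.5 * (a + b)

xmin = golden_section_min(lambda x: (x - 1.5)**2, 0.0, 4.0)
```

Each pass shrinks the bracket by the constant factor 0.618, which is what makes the method's convergence predictable regardless of the function.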

Downhill Simplex Search
A simplex is a triangle (2D), tetrahedron (3D), etc.
Algorithm:
- Create an N-dimensional simplex from a starting point P_0: P_i = P_0 + λ e_i
- Perform one of four trial steps (reflection, reflection with expansion, contraction, or multiple contraction)
- Repeat until tolerances are reached (e.g., for the change in simplex endpoints, or in function values)
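Rather than re-implementing the four simplex moves, one way to experiment with this method is SciPy's Nelder-Mead option, which is a downhill simplex implementation (assuming SciPy is available; the test function and tolerances here are arbitrary):

```python
from scipy.optimize import minimize

# Simple convex 2-D test function with minimum at (1, -2)
f = lambda p: (p[0] - 1.0)**2 + (p[1] + 2.0)**2

# Nelder-Mead builds the initial simplex internally from the starting
# point, then applies reflection/expansion/contraction steps.
res = minimize(f, x0=[5.0, 5.0], method='Nelder-Mead',
               options={'xatol': 1e-8, 'fatol': 1e-8})
```

Note that the method uses function values only, so it is a reasonable default when derivatives are unavailable or noisy.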

Conjugate Gradient Method
Assumes a convex multidimensional function; uses derivative information.
Algorithm:
- Start with the initial direction g_0 = h_0
- Calculate the scalars λ_i and γ_i
- Construct new vectors g_{i+1} and h_{i+1} satisfying the orthogonality and conjugacy conditions
- Repeat until the tolerance is reached
Note that no a priori knowledge of the Hessian matrix A is required!
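A sketch of the iteration for the quadratic case f(x) = ½ xᵀAx − bᵀx, using the g/h and λ/γ notation from the slide (Fletcher-Reeves form of γ; a minimal illustration, not a production solver):

```python
import numpy as np

def conjugate_gradient(A, b, x0, tol=1e-10, max_iter=100):
    """Linear CG for minimizing f(x) = 1/2 x^T A x - b^T x (A SPD).
    g is the negative gradient (residual), h the conjugate direction."""
    x = x0.astype(float)
    g = b - A @ x                # g_0 = -grad f(x_0)
    h = g.copy()                 # h_0 = g_0, as on the slide
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        lam = (g @ g) / (h @ A @ h)        # step length lambda_i
        x = x + lam * h
        g_new = g - lam * (A @ h)          # updated residual
        gamma = (g_new @ g_new) / (g @ g)  # Fletcher-Reeves gamma_i
        h = g_new + gamma * h              # new conjugate direction
        g = g_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])     # symmetric positive definite
b = np.array([1.0, 2.0])
x = conjugate_gradient(A, b, np.zeros(2))
```

Only matrix-vector products with A appear, which is why no explicit Hessian is needed; for a general nonlinear f these products are replaced by gradient evaluations.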

Multiple Level Single Linkage (MLSL)
A global search method.
Algorithm:
- Generate a quasi-random sequence of starting points
- Run a local optimization (e.g., conjugate gradient) from each
- A heuristic tracks basins of convergence to avoid redundant local searches
M. Ghebrebrhan, P. Bermel, et al., Opt. Express 17, 7505 (2009)
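A crude multistart sketch in the spirit of MLSL (assuming SciPy; real MLSL uses a quasi-random low-discrepancy sequence and a clustering heuristic to skip redundant starts, both simplified away here):

```python
import numpy as np
from scipy.optimize import minimize

def multistart_minimize(f, bounds, n_starts=32, seed=0):
    """Sample starting points in the box, run a local optimizer from
    each, and keep the best local minimum found."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    best = None
    for _ in range(n_starts):
        x0 = rng.uniform(lo, hi)
        res = minimize(f, x0)            # local search from this start
        if best is None or res.fun < best.fun:
            best = res
    return best

# Multimodal 1-D test: two local minima, the global one near x = -1
f = lambda x: (x[0]**2 - 1.0)**2 + 0.3 * x[0]
best = multistart_minimize(f, bounds=[(-2.0, 2.0)])
```

With enough starts the global basin is sampled with high probability; MLSL's contribution is doing this efficiently by not restarting inside basins already explored.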

Eigenproblems
- Regular eigenproblem: A x = λx
- Generalized eigenproblem: A x = λB x
Common examples in research:
- Schrödinger's equation: −(ħ²/2m)∇²Ψ + VΨ = EΨ
- EM master equation: ∇ × (1/ε ∇ × H) = (ω/c)² H
Direct method: solve det(A − λ1) = 0
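With NumPy, the regular problem is solved directly, and for invertible B the generalized problem can be reduced to a regular one (a sketch; dedicated generalized solvers are preferable numerically, since forming B⁻¹A can lose symmetry and accuracy):

```python
import numpy as np

# Regular eigenproblem A x = lambda x (symmetric case)
A = np.array([[2.0, 1.0], [1.0, 2.0]])
lam, V = np.linalg.eigh(A)

# Generalized eigenproblem A x = lambda B x, reduced for invertible B
# to the regular problem (B^{-1} A) x = lambda x
B = np.array([[2.0, 0.0], [0.0, 1.0]])
lam_g, V_g = np.linalg.eig(np.linalg.solve(B, A))
```

Here `np.linalg.solve(B, A)` computes B⁻¹A without forming the inverse explicitly.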

Eigenproblems: Key Terminology
- Symmetric: A = Aᵀ
- Hermitian (self-adjoint): A = A†; implies all real eigenvalues
- Orthogonal: AᵀA = 1
- Unitary: A† = A⁻¹
- Normal: AA† = A†A
- Right eigenvectors: A x = λx

Power Method
Algorithm:
- Initially guess the dominant eigenvector x_0
- Let x_{k+1} = A x_k (optionally: normalize each step)
- Dominant eigenvalue: λ ≈ (x_k · A x_k)/(x_k · x_k)
To find other eigenvalues:
- Inverse power method
- Shifted inverse power method
(Slide figure: error vs. iteration number.)
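A minimal power-method sketch (fixed iteration count for simplicity; a production version would test convergence of the eigenvalue estimate instead):

```python
import numpy as np

def power_method(A, n_iter=200):
    """Power iteration: repeatedly apply A and normalize; the iterate
    converges to the dominant eigenvector, and the Rayleigh quotient
    to the largest-magnitude eigenvalue."""
    x = np.ones(A.shape[0])          # initial guess for the eigenvector
    for _ in range(n_iter):
        x = A @ x
        x = x / np.linalg.norm(x)    # the optional per-step normalization
    lam = x @ A @ x                  # Rayleigh quotient (A symmetric here)
    return lam, x

A = np.array([[2.0, 1.0], [1.0, 2.0]])   # eigenvalues 3 and 1
lam, x = power_method(A)
```

The inverse and shifted-inverse variants replace the product A x with a linear solve, (A − σ1) y = x, so the iteration converges to the eigenvalue nearest the shift σ.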

Transformation Methods
General concept: similarity transformations A → Z⁻¹AZ, which preserve the eigenvalues.
- Atomic transformations: construct each Z explicitly
- Factorization methods: QR and QL methods
Keep iterating atomic transformations:
- Until the off-diagonal elements are small: then read the eigenvectors off the accumulated transformation matrix
- Otherwise: use the factorization approach

Factorization in Eigenproblems
The most common approach is the QR method: factor A = QR, form RQ, and iterate. The same can be done with a QL factorization.
Efficient solutions can be exploited for special cases:
- Tridiagonal matrices
- Hessenberg matrices (upper triangular plus one subdiagonal)
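The basic unshifted QR iteration can be sketched as follows (illustrative only; practical codes first reduce to tridiagonal or Hessenberg form and use shifts to accelerate convergence):

```python
import numpy as np

def qr_eigenvalues(A, n_iter=500):
    """Unshifted QR iteration: factor A_k = Q_k R_k, then set
    A_{k+1} = R_k Q_k = Q_k^T A_k Q_k, a similarity transform.
    For symmetric A with distinct |eigenvalues|, A_k tends to a
    diagonal matrix holding the eigenvalues."""
    Ak = A.copy()
    for _ in range(n_iter):
        Q, R = np.linalg.qr(Ak)
        Ak = R @ Q
    return np.sort(np.diag(Ak))

A = np.array([[2.0, 1.0], [1.0, 2.0]])   # eigenvalues 1 and 3
eigs = qr_eigenvalues(A)
```

The reduction to tridiagonal/Hessenberg form matters because each QR step on such a matrix costs O(n²) instead of O(n³), and the structure is preserved by the iteration.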

Jacobi Transformations
Atomic transformation: A' = P_pqᵀ A P_pq
P_pq generalizes the 2×2 rotation matrix
[ cos φ   sin φ ]
[ −sin φ   cos φ ]
to n dimensions: it is the identity except for this block in rows and columns p and q.
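A sketch of the n-dimensional rotation and one annihilation step (the angle formula is the standard symmetric-Jacobi choice that zeroes the targeted off-diagonal element; the function name is illustrative):

```python
import numpy as np

def jacobi_rotation(n, p, q, theta):
    """n x n Jacobi rotation P_pq: identity except for the 2x2 block
    [[cos, sin], [-sin, cos]] in rows/columns p and q."""
    P = np.eye(n)
    c, s = np.cos(theta), np.sin(theta)
    P[p, p], P[q, q] = c, c
    P[p, q], P[q, p] = s, -s
    return P

# Choose theta to annihilate A[0, 1] of a symmetric matrix:
A = np.array([[2.0, 1.0], [1.0, 2.0]])
theta = 0.5 * np.arctan2(2 * A[0, 1], A[0, 0] - A[1, 1])
P = jacobi_rotation(2, 0, 1, theta)
D = P.T @ A @ P
```

The full Jacobi eigensolver sweeps over all (p, q) pairs repeatedly; each rotation may re-grow previously zeroed elements, but the total off-diagonal norm shrinks every sweep.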

Householder Transformation
Basic approach discussed previously: P = 1 − 2w wᵀ
Keys to this strategy:
- Construct the vector w to eliminate most elements of a column, so that P x = (∓|x|, 0, …, 0)ᵀ
- This requires w ∝ x ± |x|e₁, where the sign is chosen as sgn(x₁) to avoid cancellation
- Iterate recursively to tridiagonal form and solve: T = Pₙ₋₂ ⋯ P₁ A P₁ ⋯ Pₙ₋₂
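A sketch of Householder tridiagonalization along these lines (it forms each reflector P explicitly for clarity, which production codes avoid by applying the rank-one update directly):

```python
import numpy as np

def householder_tridiagonalize(A):
    """Reduce a symmetric matrix to tridiagonal form by Householder
    reflections P = I - 2 w w^T; reflection k zeroes column k below
    the subdiagonal."""
    A = A.copy().astype(float)
    n = A.shape[0]
    for k in range(n - 2):
        x = A[k+1:, k]                       # column segment to compress
        alpha = -np.sign(x[0]) * np.linalg.norm(x) if x[0] != 0 \
            else -np.linalg.norm(x)          # sign choice avoids cancellation
        w = x.copy()
        w[0] -= alpha                        # w proportional to x - alpha*e1
        norm = np.linalg.norm(w)
        if norm < 1e-15:
            continue                         # column already in desired form
        w /= norm
        P = np.eye(n)
        P[k+1:, k+1:] -= 2.0 * np.outer(w, w)
        A = P @ A @ P                        # similarity transform (P = P^T)
    return A

A = np.array([[4.0, 1.0, 2.0], [1.0, 3.0, 0.0], [2.0, 0.0, 1.0]])
T = householder_tridiagonalize(A)
```

Because each step is a similarity transform, T has the same eigenvalues as A, and the tridiagonal form can then be handed to a fast specialized eigensolver.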

Next Class
- Is on Friday, Jan. 25
- Continue the discussion of eigenproblems
- Again, refer to Chapter 11 of Numerical Recipes by W.H. Press et al.