Nonlinear Programming Algorithms Handout

Michael C. Ferris
Computer Sciences Department
University of Wisconsin
Madison, Wisconsin 53706

September 9

1 Eigenvalues

The eigenvalues of a matrix $A \in \mathbb{C}^{n \times n}$ are the roots of the characteristic polynomial $\varphi(\lambda) := \det(\lambda I - A)$. The set of all eigenvalues is called the spectrum and is denoted by $\lambda(A)$. Eigenvectors are nonzero vectors $x$ satisfying $Ax = \lambda x$ for some $\lambda \in \lambda(A)$. If $A \in \mathbb{R}^{n \times n}$ is symmetric then
$$A = Q \Lambda Q^T = \sum_{i=1}^n \lambda_i q_i q_i^T,$$
where $Q \in \mathbb{R}^{n \times n}$ is orthogonal (its columns $q_i$ form a basis of eigenvectors) and $\Lambda \in \mathbb{R}^{n \times n}$ is diagonal (each diagonal entry an eigenvalue). Note that in this case the eigenvalues and eigenvectors may be taken to be real. The above factorization is called a spectral decomposition. For real symmetric matrices, the eigenvalues are assumed to be ordered so that $\lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_n$.
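As a quick numerical illustration (a minimal NumPy sketch added here, not part of the original handout; the test matrix is random), the spectral decomposition and its rank-one expansion can be checked directly:

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4))
A = (B + B.T) / 2                      # a real symmetric matrix

w, Q = np.linalg.eigh(A)               # eigh: ascending eigenvalues, orthonormal Q
lam, Q = w[::-1], Q[:, ::-1]           # reorder to lambda_1 >= ... >= lambda_n

assert np.allclose(Q.T @ Q, np.eye(4))             # Q is orthogonal
assert np.allclose(A, Q @ np.diag(lam) @ Q.T)      # A = Q Lambda Q^T
# Rank-one expansion: A = sum_i lambda_i q_i q_i^T
assert np.allclose(A, sum(lam[i] * np.outer(Q[:, i], Q[:, i]) for i in range(4)))
```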

2 Singular Values

Orthogonal matrices satisfy $U^T U = U U^T = I$. If $A \in \mathbb{R}^{m \times n}$ then there exist orthogonal matrices $U = [u_1, \ldots, u_m] \in \mathbb{R}^{m \times m}$ and $V = [v_1, \ldots, v_n] \in \mathbb{R}^{n \times n}$ such that
$$U^T A V = \mathrm{diag}(\sigma_1, \ldots, \sigma_p) \in \mathbb{R}^{m \times n},$$
where $p = \min\{m, n\}$ and $\sigma_1 \geq \sigma_2 \geq \cdots \geq \sigma_p \geq 0$. This is often written as $A = U S V^T$ and is termed the singular value decomposition. Singular values are non-negative; $\sigma_i(A)$ denotes the $i$th largest singular value of $A$. If we define $r$ by
$$\sigma_1 \geq \cdots \geq \sigma_r > \sigma_{r+1} = \cdots = \sigma_p = 0,$$
then $\mathrm{rank}(A) = r$, $\ker(A) = \mathrm{span}\{v_{r+1}, \ldots, v_n\}$, $\mathrm{im}(A) = \mathrm{span}\{u_1, \ldots, u_r\}$ and
$$A = \sum_{i=1}^r \sigma_i u_i v_i^T.$$
How are eigenvalues and singular values related? $\sigma_i(A) = +\sqrt{\lambda_i(A A^T)}$.

Definition 1 A matrix $A$ is positive definite if $x^T A x > 0$ for all $x \neq 0$. A matrix $A$ is positive semidefinite if $x^T A x \geq 0$ for all $x$.

When $A$ is a symmetric positive definite matrix, singular values and eigenvalues coincide (take $U = V = Q$ and $S = \Lambda$), and thus $\sigma_1(A)$ is the largest eigenvalue of $A$ and $\sigma_n(A)$ is the smallest eigenvalue of $A$. If $A$ is square, then
$$\sigma_n(A) \leq \min_i |\lambda_i| \leq \max_i |\lambda_i| \leq \sigma_1(A).$$

Lemma 2 For a symmetric matrix $A \in \mathbb{R}^{n \times n}$, the following are equivalent:
1. $A$ is positive definite.
2. The $n$ leading principal subdeterminants of $A$ are strictly positive.
3. All principal subdeterminants of $A$ are strictly positive.
4. The eigenvalues of $A$ are strictly positive.

A nonsymmetric matrix such as $\begin{pmatrix} 1 & -4 \\ 0 & 1 \end{pmatrix}$ shows symmetry to be necessary for this result: its principal subdeterminants and eigenvalues are all positive, yet $x^T A x = -2 < 0$ for $x = (1, 1)^T$.
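The following NumPy sketch (an added illustration, not from the original handout; test matrices are random) checks the relation $\sigma_i(A) = \sqrt{\lambda_i(AA^T)}$ and the principal-minor test of Lemma 2 on a positive definite matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 5))

# Singular values are the non-negative square roots of the eigenvalues of A A^T.
sigma = np.linalg.svd(A, compute_uv=False)          # descending
lam = np.linalg.eigvalsh(A @ A.T)[::-1]             # descending
assert np.allclose(sigma, np.sqrt(np.maximum(lam, 0.0)))

# Lemma 2 for a symmetric positive definite matrix: all n leading principal
# subdeterminants are strictly positive, and all eigenvalues are strictly positive.
B = rng.standard_normal((4, 4))
S = B @ B.T + np.eye(4)
assert all(np.linalg.det(S[:k, :k]) > 0 for k in range(1, 5))
assert np.all(np.linalg.eigvalsh(S) > 0)
```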

3 Matrix Norms

Suppose $A \in \mathbb{R}^{m \times n}$. Define
$$\|A\|_{\alpha,\beta} := \sup_{x \neq 0} \frac{\|Ax\|_\beta}{\|x\|_\alpha}.$$
In particular, when $\alpha = \beta = p$, this is called the $p$-norm of $A$. The Frobenius norm is defined by
$$\|A\|_F = \sqrt{\sum_{i=1}^m \sum_{j=1}^n A_{ij}^2}.$$
Various properties of matrix norms are detailed in GVL (Golub and Van Loan, p. 54). Interesting facts:
$$\|A\|_2 = \sigma_1, \qquad \|A\|_F = \sqrt{\sum_{i=1}^p \sigma_i^2}, \quad p = \min(m, n),$$
$$\max_{i,j} |A_{ij}| \leq \|A\|_2 \leq \|A\|_F \leq \sqrt{n}\, \|A\|_2,$$
$$\|A\|_1 = \max_{1 \leq j \leq n} \sum_{i=1}^m |A_{ij}|, \qquad \|A\|_\infty = \max_{1 \leq i \leq m} \sum_{j=1}^n |A_{ij}|,$$
$$\|A\|_2 \leq \sqrt{mn}\, \max_{i,j} |A_{ij}|, \qquad \|A\|_2 \leq \sqrt{\|A\|_1 \|A\|_\infty}, \qquad \min_{x \neq 0} \frac{\|Ax\|_2}{\|x\|_2} = \sigma_n.$$

Lemma 3 Let $A$ be a real symmetric matrix. For each $x \in \mathbb{R}^n$,
$$\lambda_n \|x\|^2 \leq x^T A x \leq \lambda_1 \|x\|^2.$$
If $A$ is also positive definite, then $\|A\|_2 = \lambda_1$.

Proof Since $A = Q \Lambda Q^T$ ($Q$ invertible), we may write $x = Q y$ for some $y$. Then
$$x^T A x = y^T Q^T A Q y = y^T \Lambda y = \sum_{i=1}^n \lambda_i y_i^2.$$
Also
$$\|y\|^2 = y^T y = x^T Q Q^T x = \|x\|^2.$$
Thus,
$$\lambda_n \|x\|^2 = \lambda_n \sum_{i=1}^n y_i^2 \leq \sum_{i=1}^n \lambda_i y_i^2 \;(= x^T A x) \leq \lambda_1 \sum_{i=1}^n y_i^2 = \lambda_1 \|x\|^2.$$
Finally, $Ax = Q \Lambda Q^T Q y = Q \Lambda y$, so
$$\|Ax\|^2 = y^T \Lambda Q^T Q \Lambda y = \sum_{i=1}^n \lambda_i^2 y_i^2 \leq \lambda_1^2 \|y\|^2 = \lambda_1^2 \|x\|^2,$$
the inequality following from the positive definiteness assumption. Hence $\|Ax\| / \|x\| \leq \lambda_1$, so $\|A\|_2 = \sup_{x \neq 0} \|Ax\| / \|x\| \leq \lambda_1$. But the supremum is attained at an eigenvector of $A$ corresponding to $\lambda_1$.

Definition 4 Suppose $A \in \mathbb{R}^{n \times n}$ is nonsingular. The condition number of $A$, $\kappa(A)$, is defined to be $\|A\| \, \|A^{-1}\|$.

The condition number of a matrix depends on the underlying norm. If $A$ is singular, then $\kappa(A) = \infty$. Note that
$$\kappa_2(A) = \|A\|_2 \, \|A^{-1}\|_2 = \frac{\sigma_1(A)}{\sigma_n(A)}.$$

Corollary 5 If $A$ is real symmetric positive definite, then $\|A^{-1}\|_2 = \lambda_n^{-1}$, $\kappa_2(A) = \lambda_1 / \lambda_n$, and
$$\lambda_1^{-1} \|x\|^2 \leq x^T A^{-1} x \leq \lambda_n^{-1} \|x\|^2.$$

Proof The eigenvalues of $A^{-1}$ are simply $\lambda_i^{-1}$.
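A small NumPy sketch (added here as an illustration, not from the original handout; random test matrices) confirms several of these norm facts and the two-sided bound of Lemma 3:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((3, 4))
sigma = np.linalg.svd(A, compute_uv=False)

assert np.isclose(np.linalg.norm(A, 2), sigma[0])                     # ||A||_2 = sigma_1
assert np.isclose(np.linalg.norm(A, 'fro'), np.sqrt(np.sum(sigma**2)))
assert np.isclose(np.linalg.norm(A, 1), np.abs(A).sum(axis=0).max())  # max column sum
assert np.isclose(np.linalg.norm(A, np.inf), np.abs(A).sum(axis=1).max())
assert np.linalg.norm(A, 2) <= np.sqrt(np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))

# Lemma 3: lambda_n ||x||^2 <= x^T S x <= lambda_1 ||x||^2 for symmetric S.
B = rng.standard_normal((4, 4))
S = (B + B.T) / 2
lam = np.linalg.eigvalsh(S)          # ascending: lam[0] = lambda_n, lam[-1] = lambda_1
x = rng.standard_normal(4)
q = x @ S @ x
assert lam[0] * (x @ x) - 1e-9 <= q <= lam[-1] * (x @ x) + 1e-9
```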

Definition 6 Suppose $A \in \mathbb{C}^{n \times n}$. The spectral radius of $A$, $\rho(A)$, is defined as the maximum of $|\lambda_1|, \ldots, |\lambda_n|$, where $\lambda_1, \ldots, \lambda_n$ are the eigenvalues of $A$.

Note that if $A \in \mathbb{C}^{n \times n}$ and $\lambda$ is any eigenvalue of $A$ with eigenvector $u$, then $\|Au\| = |\lambda| \, \|u\|$, so that $|\lambda| \leq \|A\|$. Hence $\rho(A) \leq \|A\|$.

Lemma 7 Let $A \in \mathbb{C}^{n \times n}$. Then $\lim_{k \to \infty} A^k = 0$ if and only if $\rho(A) < 1$.

Lemma 8 (Neumann) Let $E \in \mathbb{R}^{n \times n}$ and $\rho(E) < 1$. Then $I - E$ is nonsingular and
$$(I - E)^{-1} = \lim_{k \to \infty} \sum_{i=0}^k E^i.$$

Proof Since $\rho(E) < 1$, the maximum eigenvalue of $E$ must be less than 1 in modulus, and hence $I - E$ is invertible. Furthermore
$$(I - E)(I + E + \cdots + E^{k-1}) = I - E^k,$$
so that
$$I + E + \cdots + E^{k-1} = (I - E)^{-1} - (I - E)^{-1} E^k.$$
The result now follows in the limit as $k \to \infty$ from the above lemma.
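The Neumann series is easy to observe numerically; the sketch below (an added illustration, not from the handout) rescales a random matrix so that $\rho(E) = 0.9$ and sums the series:

```python
import numpy as np

rng = np.random.default_rng(3)
E = rng.standard_normal((4, 4))
E *= 0.9 / np.abs(np.linalg.eigvals(E)).max()   # rescale so rho(E) = 0.9 < 1

# Partial sums I + E + ... + E^k approach (I - E)^{-1} as k grows.
target = np.linalg.inv(np.eye(4) - E)
partial, power = np.eye(4), np.eye(4)
for _ in range(300):
    power = power @ E
    partial += power
assert np.allclose(partial, target)
```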

4 Results from Matrix Theory

Lemma 9 (Debreu) Suppose $H \in \mathbb{R}^{n \times n}$ and $A \in \mathbb{R}^{m \times n}$. The following are equivalent:
(a) $Az = 0$ and $z \neq 0$ implies $\langle z, Hz \rangle > 0$
(b) There exists $\bar\gamma \geq 0$ such that $H + \bar\gamma A^T A$ is positive definite

Remark If (b) holds and $\gamma \geq \bar\gamma$ then for any $z \neq 0$,
$$\langle z, (H + \gamma A^T A) z \rangle = \langle z, (H + \bar\gamma A^T A) z \rangle + (\gamma - \bar\gamma) \|Az\|^2 \geq \langle z, (H + \bar\gamma A^T A) z \rangle > 0,$$
so that $H + \gamma A^T A$ is also positive definite.

Proof (b) $\Rightarrow$ (a): If $Az = 0$ and $z \neq 0$, then $0 < \langle z, (H + \bar\gamma A^T A) z \rangle = \langle z, Hz \rangle$.
(a) $\Rightarrow$ (b): If (b) were false, then there would exist $z_k$ of norm 1 with
$$\langle z_k, (H + k A^T A) z_k \rangle \leq 0.$$
Without loss of generality, we can assume that $z_k \to z$. Now for each $k$,
$$\langle z_k, (k^{-1} H + A^T A) z_k \rangle \leq 0 \;\longrightarrow\; \|Az\|^2 \leq 0,$$
so $Az = 0$. But
$$\langle z_k, H z_k \rangle + k \|A z_k\|^2 \leq 0 \implies \langle z_k, H z_k \rangle \leq 0 \;\longrightarrow\; \langle z, Hz \rangle \leq 0,$$
and this contradicts (a), since $\|z\| = 1$.

Lemma 10 (Schur) Suppose that
$$M = \begin{pmatrix} A & B \\ C & D \end{pmatrix} \tag{1}$$
with $D$ nonsingular. Then $M$ is nonsingular if and only if $S = A - B D^{-1} C$ is nonsingular, and in that case
$$M^{-1} = \begin{pmatrix} S^{-1} & -S^{-1} B D^{-1} \\ -D^{-1} C S^{-1} & D^{-1} + D^{-1} C S^{-1} B D^{-1} \end{pmatrix} \tag{2}$$

Remark $S$ is the Schur complement of $D$ in $M$. We can also show that $\det M = \det S \, \det D$, since
$$\det M = \det \begin{pmatrix} A & B \\ C & D \end{pmatrix} = \det \left( \begin{pmatrix} A - B D^{-1} C & B D^{-1} \\ 0 & I \end{pmatrix} \begin{pmatrix} I & 0 \\ C & D \end{pmatrix} \right) = \det \begin{pmatrix} A - B D^{-1} C & B D^{-1} \\ 0 & I \end{pmatrix} \det \begin{pmatrix} I & 0 \\ C & D \end{pmatrix} = \det(A - B D^{-1} C) \, \det D.$$
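The block inverse formula (2) and the determinant identity can be checked numerically; the following sketch (added for this edit, not part of the handout; blocks are random, with $D$ shifted to be safely nonsingular) does so:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 3
A, B = rng.standard_normal((n, n)), rng.standard_normal((n, n))
C = rng.standard_normal((n, n))
D = rng.standard_normal((n, n)) + 5 * np.eye(n)   # comfortably nonsingular

M = np.block([[A, B], [C, D]])
Dinv = np.linalg.inv(D)
S = A - B @ Dinv @ C                              # Schur complement of D in M
Sinv = np.linalg.inv(S)

# Block inverse formula (2) and the identity det M = det S * det D.
Minv = np.block([[Sinv, -Sinv @ B @ Dinv],
                 [-Dinv @ C @ Sinv, Dinv + Dinv @ C @ Sinv @ B @ Dinv]])
assert np.allclose(Minv, np.linalg.inv(M))
assert np.isclose(np.linalg.det(M), np.linalg.det(S) * np.linalg.det(D))
```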

Proof If $S$ is nonsingular, then just multiply the expressions given in (1) and (2) to obtain $I$. Thus assume $M$ is nonsingular. Write its inverse as
$$M^{-1} = \begin{pmatrix} E & F \\ G & H \end{pmatrix}.$$
Since $M M^{-1} = \begin{pmatrix} I & 0 \\ 0 & I \end{pmatrix}$ we have
$$I = AE + BG \tag{3}$$
$$0 = AF + BH \tag{4}$$
$$0 = CE + DG \tag{5}$$
$$I = CF + DH \tag{6}$$
From (5) we have $G = -D^{-1} C E$, and substituting this in (3) yields $(A - B D^{-1} C) E = I$. This implies that $S = A - B D^{-1} C$ is nonsingular and that $E = S^{-1}$. Using (5) again we have that $G = -D^{-1} C S^{-1}$. From (6) we have $H = D^{-1}(I - CF)$, and putting this in (4) yields
$$AF + B D^{-1} - B D^{-1} C F = 0,$$
that is,
$$SF + B D^{-1} = 0,$$
which implies $F = -S^{-1} B D^{-1}$.

5 Matrix Norms

Suppose $A \in \mathbb{R}^{m \times n}$.
$$\|A\|_{\alpha,\beta} := \sup_{x \neq 0} \frac{\|Ax\|_\beta}{\|x\|_\alpha}$$
In particular, when $\alpha = \beta = p$, this is called the $p$-norm of $A$. The Frobenius norm is defined by $\|A\|_F = \sqrt{\sum_{i=1}^m \sum_{j=1}^n A_{ij}^2}$. Various properties of matrix norms are detailed in GVL (p. 54). Interesting facts: the two-norm of $A$ is the square root of the largest eigenvalue of $A^T A$, and
$$\max_{i,j} |A_{ij}| \leq \|A\|_2 \leq \|A\|_F \leq \sqrt{n}\, \|A\|_2, \qquad \|A\|_1 = \max_{1 \leq j \leq n} \sum_{i=1}^m |A_{ij}|, \qquad \|A\|_\infty = \max_{1 \leq i \leq m} \sum_{j=1}^n |A_{ij}|,$$
$$\|A\|_2 \leq \sqrt{mn}\, \max_{i,j} |A_{ij}|, \qquad \|A\|_2 \leq \sqrt{\|A\|_1 \|A\|_\infty}.$$
The condition number of a matrix depends on the underlying norm, but is defined for a square matrix by $\kappa(A) := \|A\| \, \|A^{-1}\|$.
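As a brief numerical aside (an added sketch, not from the handout; random test matrix), the two-norm condition number agrees with the singular value ratio stated next:

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.standard_normal((5, 5))
sigma = np.linalg.svd(A, compute_uv=False)

kappa = np.linalg.norm(A, 2) * np.linalg.norm(np.linalg.inv(A), 2)
assert np.isclose(kappa, sigma[0] / sigma[-1])    # kappa_2(A) = sigma_1 / sigma_n
assert np.isclose(np.linalg.cond(A, 2), kappa)
```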

If $A$ is singular, then $\kappa(A) = \infty$. Note that
$$\kappa_2(A) = \|A\|_2 \, \|A^{-1}\|_2 = \frac{\sigma_1(A)}{\sigma_n(A)},$$
where $\sigma_i(A)$ is the $i$th largest singular value of $A$. Singular values are non-negative. How are they related to eigenvalues?
$$\|A\|_F = \sqrt{\sum_{i=1}^p \sigma_i^2}, \quad p = \min(m, n), \qquad \|A\|_2 = \sigma_1, \qquad \min_{x \neq 0} \frac{\|Ax\|_2}{\|x\|_2} = \sigma_n.$$

6 Positive (Semi)-Definite Matrices

Definition 11 A matrix $A$ is positive definite if $x^T A x > 0$ for all $x \neq 0$. A matrix $A$ is positive semidefinite if $x^T A x \geq 0$ for all $x$.

Theorem 12 See Bertsekas.

7 Strong Convexity

Let $f : \Omega \to \mathbb{R}$, $h : \Omega \to \mathbb{R}^n$ where $\Omega$ is an open convex set.

Definition 13 $f$ is strongly convex ($\rho$) on $\Omega$ if there exists $\rho > 0$ such that for all $x, y \in \Omega$ and $\lambda \in [0, 1]$,
$$f((1 - \lambda) x + \lambda y) \leq (1 - \lambda) f(x) + \lambda f(y) - \frac{\rho}{2} \lambda (1 - \lambda) \|x - y\|^2.$$

Definition 14 $h$ is strongly monotone ($\rho$) on $\Omega$ if there exists $\rho > 0$ such that for all $x, y \in \Omega$,
$$\langle h(x) - h(y), x - y \rangle \geq \rho \|x - y\|^2.$$

Theorem 15 (Strong convexity) If $f$ is continuously differentiable on $\Omega$ then the following are equivalent:
(a) $f$ is strongly convex ($\rho$) on $\Omega$
(b) For all $x, y \in \Omega$, $f(y) \geq f(x) + \langle \nabla f(x), y - x \rangle + (\rho/2) \|x - y\|^2$
(c) $\nabla f$ is strongly monotone ($\rho$) on $\Omega$
If $f$ is twice continuously differentiable on $\Omega$, then
(d) For all $x, y, z \in \Omega$, $\langle x - y, \nabla^2 f(z)(x - y) \rangle \geq \rho \|x - y\|^2$
is equivalent to the above.

Proof We show (a) $\Leftrightarrow$ (b) $\Leftrightarrow$ (c).

(a) $\Rightarrow$ (b): The hypothesis gives
$$\frac{f(x + \lambda(y - x)) - f(x)}{\lambda} \leq f(y) - f(x) - \frac{\rho}{2} (1 - \lambda) \|x - y\|^2,$$
so taking the limit as $\lambda \downarrow 0$,
$$\langle \nabla f(x), y - x \rangle \leq f(y) - f(x) - \frac{\rho}{2} \|x - y\|^2.$$

(b) $\Rightarrow$ (c): Applying (b) twice gives
$$f(y) \geq f(x) + \langle \nabla f(x), y - x \rangle + \frac{\rho}{2} \|x - y\|^2,$$
$$f(x) \geq f(y) + \langle \nabla f(y), x - y \rangle + \frac{\rho}{2} \|x - y\|^2.$$
Adding these inequalities gives
$$f(y) + f(x) \geq f(x) + f(y) + \langle \nabla f(x) - \nabla f(y), y - x \rangle + \rho \|x - y\|^2,$$
from where the result follows.

(c) $\Rightarrow$ (b): The hypothesis gives
$$f(y) - f(x) - \langle \nabla f(x), y - x \rangle = \int_0^1 \langle \nabla f(x + t(y - x)) - \nabla f(x), y - x \rangle \, dt \geq \int_0^1 \rho t \|x - y\|^2 \, dt,$$
which implies the result.

(b) $\Rightarrow$ (a): Letting $y = u$ and $x = (1 - \lambda) u + \lambda v$ in (b) gives
$$f(u) \geq f((1 - \lambda) u + \lambda v) + \langle \nabla f((1 - \lambda) u + \lambda v), \lambda (u - v) \rangle + \frac{\rho}{2} \|\lambda (u - v)\|^2. \tag{7}$$
Also letting $y = v$ and $x = (1 - \lambda) u + \lambda v$ in (b) implies
$$f(v) \geq f((1 - \lambda) u + \lambda v) + \langle \nabla f((1 - \lambda) u + \lambda v), (1 - \lambda)(v - u) \rangle + \frac{\rho}{2} \|(1 - \lambda)(v - u)\|^2. \tag{8}$$
Adding $(1 - \lambda)$ times (7) to $\lambda$ times (8) gives the required result.

To complete the proof, we assume that $f$ is twice continuously differentiable on $\Omega$.

(d) $\Rightarrow$ (c): This follows from the hypothesis since
$$\langle \nabla f(x) - \nabla f(y), x - y \rangle = \left\langle \int_0^1 \nabla^2 f(y + t(x - y))(x - y) \, dt, \; x - y \right\rangle \geq \rho \|x - y\|^2.$$

(c) $\Rightarrow$ (d): Let $x, y, z \in \Omega$. Then $z + \lambda(x - y) \in \Omega$ for sufficiently small $\lambda > 0$, so
$$\langle x - y, \nabla^2 f(z)(x - y) \rangle = \left\langle x - y, \frac{\nabla f(z + \lambda(x - y)) - \nabla f(z)}{\lambda} \right\rangle + o(1) \geq \rho \|x - y\|^2 + o(1).$$
The result follows in the limit as $\lambda \downarrow 0$.
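For a strongly convex quadratic $f(x) = \frac{1}{2} x^T S x + b^T x$ with $S$ symmetric positive definite, the modulus is $\rho = \lambda_n(S)$, and characterizations (b) and (c) can be tested directly. The sketch below (an added NumPy illustration, not from the handout; data is random) does so:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 4
B = rng.standard_normal((n, n))
S = B @ B.T + np.eye(n)                  # symmetric positive definite Hessian
b = rng.standard_normal(n)
rho = np.linalg.eigvalsh(S)[0]           # smallest eigenvalue: the modulus rho

f = lambda x: 0.5 * (x @ S @ x) + b @ x
grad = lambda x: S @ x + b

for _ in range(100):
    x, y = rng.standard_normal(n), rng.standard_normal(n)
    # (b): f(y) >= f(x) + <grad f(x), y - x> + (rho/2) ||y - x||^2
    assert f(y) >= f(x) + grad(x) @ (y - x) + 0.5 * rho * ((y - x) @ (y - x)) - 1e-9
    # (c): <grad f(x) - grad f(y), x - y> >= rho ||x - y||^2
    assert (grad(x) - grad(y)) @ (x - y) >= rho * ((x - y) @ (x - y)) - 1e-9
```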

8 Lipschitz Continuity

Definition 16 $f$ is Lipschitz continuous ($\rho$) on $\Omega$ if there exists $\rho > 0$ such that for all $x, y \in \Omega$,
$$\|f(y) - f(x)\| \leq \rho \|y - x\|.$$

Lemma 17 (Lipschitz continuity) Consider the following statements:
(a) $\nabla f$ is Lipschitz continuous on $\Omega$ with constant $\rho$
(b) For all $x, y \in \Omega$, $f(y) \leq f(x) + \langle \nabla f(x), y - x \rangle + (\rho/2) \|x - y\|^2$
(c) For all $x, y \in \Omega$, $\langle \nabla f(x) - \nabla f(y), x - y \rangle \leq \rho \|x - y\|^2$
Then (a) implies (b) implies (c). If $f$ is twice continuously differentiable, then (c) implies
(d) For all $x, y, z \in \Omega$, $\langle y - x, \nabla^2 f(z)(y - x) \rangle \leq \rho \|y - x\|^2$.
If $\langle y - x, \nabla^2 f(z)(y - x) \rangle \geq \gamma \|y - x\|^2$ for some $\gamma$ (not necessarily positive), then (d) implies (a), possibly with a different constant $\rho$.

Proof (a) $\Rightarrow$ (b):
$$f(y) - f(x) - \langle \nabla f(x), y - x \rangle = \int_0^1 \langle \nabla f(x + t(y - x)) - \nabla f(x), y - x \rangle \, dt \leq \int_0^1 \|\nabla f(x + t(y - x)) - \nabla f(x)\| \, \|y - x\| \, dt \leq \rho \|y - x\|^2 \int_0^1 t \, dt = \frac{\rho}{2} \|y - x\|^2.$$

(b) $\Rightarrow$ (c): Invoking (b) twice gives
$$f(y) \leq f(x) + \langle \nabla f(x), y - x \rangle + \frac{\rho}{2} \|x - y\|^2,$$
$$f(x) \leq f(y) + \langle \nabla f(y), x - y \rangle + \frac{\rho}{2} \|x - y\|^2.$$
Adding these inequalities gives
$$f(y) + f(x) \leq f(x) + f(y) + \langle \nabla f(x) - \nabla f(y), y - x \rangle + \rho \|x - y\|^2,$$
from where (c) follows.

(c) $\Rightarrow$ (d): It follows from (c) that
$$\langle \nabla f(x + \lambda(y - x)) - \nabla f(x), \lambda(y - x) \rangle \leq \rho \lambda^2 \|y - x\|^2.$$
If we divide both sides by $\lambda^2$, (d) then follows in the limit as $\lambda \downarrow 0$.

(d) $\Rightarrow$ (a): Let $\delta := 2 \max\{-\gamma, 0\} + \rho$. We first show that $\|\nabla^2 f(z)\| \leq \delta$. Note that
$$\|\nabla^2 f(z)\| = \sup_{\|y\| = 1} \|\nabla^2 f(z) y\| = \sup_{\|x\| = 1, \, \|y\| = 1} \langle x, \nabla^2 f(z) y \rangle.$$
However,
$$2 \langle x, \nabla^2 f(z) y \rangle = -\langle x - y, \nabla^2 f(z)(x - y) \rangle + \langle x, \nabla^2 f(z) x \rangle + \langle y, \nabla^2 f(z) y \rangle$$
$$\leq -\gamma \|x - y\|^2 + \rho (\|x\|^2 + \|y\|^2)$$
$$\leq \max\{-\gamma, 0\} (\|x\|^2 + 2 \|x\| \|y\| + \|y\|^2) + \rho (\|x\|^2 + \|y\|^2).$$
Hence, taking $\|x\| = \|y\| = 1$, $\|\nabla^2 f(z)\| \leq 2 \max\{-\gamma, 0\} + \rho = \delta$, as required. The Lipschitz continuity now follows easily since
$$\|\nabla f(y) - \nabla f(x)\| = \left\| \int_0^1 \nabla^2 f(x + t(y - x))(y - x) \, dt \right\| \leq \delta \|y - x\|.$$
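For a quadratic $f(x) = \frac{1}{2} x^T S x$ with symmetric (possibly indefinite) $S$, the gradient $\nabla f(x) = Sx$ is Lipschitz with constant $\rho = \|S\|_2 = \max_i |\lambda_i|$. The sketch below (an added illustration, not from the handout; data is random) verifies Definition 16 for this gradient map:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 4
B = rng.standard_normal((n, n))
S = (B + B.T) / 2                          # symmetric, possibly indefinite Hessian
rho = np.abs(np.linalg.eigvalsh(S)).max()  # ||S||_2: a Lipschitz constant for grad f

grad = lambda x: S @ x                     # gradient of f(x) = (1/2) x^T S x

for _ in range(100):
    x, y = rng.standard_normal(n), rng.standard_normal(n)
    assert np.linalg.norm(grad(y) - grad(x)) <= rho * np.linalg.norm(y - x) + 1e-9
```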
