The Fréchet derivative of a generalized matrix function. Network Science meets Matrix Functions, University of Oxford, 1-2/9/16


The Fréchet derivative of a generalized matrix function
Vanni Noferini (University of Essex)
Network Science meets Matrix Functions, University of Oxford, September 1st, 2016
Vanni Noferini, The Fréchet derivative of a generalized matrix function, 1 / 33


Classical matrix functions

Classical definition in linear algebra: given f : C → C and a square matrix A ∈ C^{n×n}, we wish to define f(A) in a way that mimics the properties of f. For example:
- A^{1/2} should solve X^2 = A;
- e^{tA} should integrate y'(t) = Ay(t), y(0) = v;
- ...

Defining matrix functions from the eigendecomposition

From Linear Algebra 1: a Jordan block is a bidiagonal Toeplitz matrix with unit superdiagonal:

J_4(λ) = \begin{pmatrix} λ & 1 & & \\ & λ & 1 & \\ & & λ & 1 \\ & & & λ \end{pmatrix}

A matrix is in Jordan canonical form if it is the direct sum of Jordan blocks, for example

\begin{pmatrix} λ & 1 & & \\ & λ & & \\ & & λ & \\ & & & µ \end{pmatrix}


Defining matrix functions from the eigendecomposition

Rules:
- The function of a Jordan block is an upper triangular Toeplitz matrix:

f \begin{pmatrix} λ & 1 & & \\ & λ & 1 & \\ & & λ & 1 \\ & & & λ \end{pmatrix} = \begin{pmatrix} f(λ) & f'(λ) & f''(λ)/2! & f'''(λ)/3! \\ & f(λ) & f'(λ) & f''(λ)/2! \\ & & f(λ) & f'(λ) \\ & & & f(λ) \end{pmatrix}

- f(A_1 ⊕ A_2) = f(A_1) ⊕ f(A_2)
- f(ZAZ^{-1}) = Z f(A) Z^{-1}

Hence f(A) is defined as long as f is defined and sufficiently many times differentiable on the spectrum of A.
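The Jordan-block rule can be sanity-checked numerically; the block J_4(λ) with λ = 0.5 is an illustrative choice of ours, and scipy.linalg.expm serves as the reference for f = exp:

```python
import numpy as np
from math import factorial
from scipy.linalg import expm

lam, k = 0.5, 4  # illustrative eigenvalue and block size
# Jordan block J_k(lam): lam on the diagonal, ones on the superdiagonal
J = lam * np.eye(k) + np.diag(np.ones(k - 1), 1)
# f(J) is upper triangular Toeplitz with f^(j)(lam)/j! on the j-th
# superdiagonal; for f = exp every derivative equals exp(lam)
T = sum(np.exp(lam) / factorial(j) * np.diag(np.ones(k - j), j)
        for j in range(k))
print(np.allclose(expm(J), T))  # → True
```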


Computing matrix functions

On a computer, it is important to know the sensitivity of the attempted computation. A backward stable algorithm computes the exact solution of a nearby problem: the computed result satisfies f̂(A) = f(A + E) for some small E. Whether this is close to f(A) or not depends on the sensitivity of the problem and, intuitively, on the first derivative of f at A. For a scalar, analytic f:

f(A + E) = f(A) + f'(A)E + O(E^2).


Derivatives in Banach spaces

Let X, Y be Banach spaces over R, x ∈ X, and f : X → Y.

If the limit

lim_{t→0} ( f(x + te) − f(x) ) / t

exists for all e ∈ X, then f is Gâteaux differentiable at x.

If there exists a bounded R-linear map L_f(x, ·) such that

lim_{‖h‖_X → 0} ‖f(x + h) − f(x) − L_f(x, h)‖_Y / ‖h‖_X = 0,

then f is Fréchet differentiable at x.
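For a concrete feel, the Gâteaux derivative can be approximated by a one-sided difference quotient; the map f(M) = M^2, whose Fréchet derivative is L(M, E) = ME + EM, is an illustrative choice of ours:

```python
import numpy as np

def gateaux_fd(f, X, E, t=1e-7):
    # one-sided finite-difference approximation of the Gateaux
    # derivative of f at X in the direction E
    return (f(X + t * E) - f(X)) / t

X = np.array([[1.0, 2.0], [3.0, 4.0]])  # illustrative point and direction
E = np.array([[0.0, 1.0], [1.0, 0.0]])
approx = gateaux_fd(lambda M: M @ M, X, E)
exact = X @ E + E @ X  # Frechet derivative of M -> M^2 applied to E
print(np.allclose(approx, exact, atol=1e-5))  # → True
```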


Derivatives in Banach spaces

Fréchet differentiable ⇒ Gâteaux differentiable, but the converse does not hold in general.

If f is Fréchet differentiable, then cond_abs(f, A) = ‖L_f(A, ·)‖.


The Fréchet derivative of a classical matrix function

When A is diagonalizable, an explicit formula is known.

Theorem (Daleckiĭ and Kreĭn, 1965). Let A = ZDZ^{-1} with D diagonal and let f be differentiable on the spectrum of A. Then

L_f(A, E) = Z ( F ∘ (Z^{-1} E Z) ) Z^{-1},

where ∘ is the entrywise (Hadamard) product and

F_ij = ( f(D_ii) − f(D_jj) ) / ( D_ii − D_jj ) if D_ii ≠ D_jj,   F_ij = f'(D_ii) otherwise.

Example

Take A = diag(2, 2, −2), f(A) = e^A, and a direction E. Since Z = I, the theorem gives L_f(A, E) = F ∘ E, where F has diagonal entries f'(λ_i) = e^{±2} and off-diagonal entries given by divided differences; for the eigenvalue pair (2, −2) this is (e^2 − e^{−2}) / (2 − (−2)) = sinh(2)/2, so

F = \begin{pmatrix} e^2 & e^2 & \sinh(2)/2 \\ e^2 & e^2 & \sinh(2)/2 \\ \sinh(2)/2 & \sinh(2)/2 & e^{−2} \end{pmatrix}.
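A minimal sketch of the Daleckiĭ-Kreĭn formula, assuming A is diagonalizable; the diagonal A and direction E below are illustrative choices of ours, checked against a finite difference of scipy's expm:

```python
import numpy as np
from scipy.linalg import expm

def dk_frechet(f, fprime, A, E):
    """Daleckii-Krein formula L_f(A,E) = Z (F o (Z^{-1} E Z)) Z^{-1};
    a sketch assuming A is diagonalizable."""
    d, Z = np.linalg.eig(A)
    Zinv = np.linalg.inv(Z)
    n = len(d)
    F = np.empty((n, n), dtype=complex)
    for i in range(n):
        for j in range(n):
            F[i, j] = ((f(d[i]) - f(d[j])) / (d[i] - d[j])
                       if d[i] != d[j] else fprime(d[i]))
    L = Z @ (F * (Zinv @ E @ Z)) @ Zinv
    return L.real if np.isrealobj(A) and np.isrealobj(E) else L

A = np.diag([2.0, -2.0])  # illustrative choices
E = np.ones((2, 2))
L = dk_frechet(np.exp, np.exp, A, E)
t = 1e-6
fd = (expm(A + t * E) - expm(A)) / t  # finite-difference reference
print(np.allclose(L, fd, atol=1e-4))  # → True
```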


Consequences

The eigenvector matrix Z appears in the Daleckiĭ-Kreĭn theorem. This leads to issues caused by non-normality of the argument matrix.

Theorem (N. Higham, 2008). cond_abs(f, A) ≤ κ_2(Z)^2 ‖F‖, where F is the divided-difference matrix above.

Generalized matrix functions

Given A ∈ C^{m×n} with SVD A = USV^* and a function f : [0, ∞) → R, we wish to define f(A) in some sensible way.

Gmf: definition

From Linear Algebra 2: in an SVD A = USV^*, U and V are unitary and S is diagonal with nonincreasing nonnegative entries. This leads to the compact SVD (CSVD) A = U_r S_r V_r^*, where U_r, V_r are submatrices of U, V and S_r is square and invertible (empty if A = 0).

Gmf: definition

Definition (Hawkins and Ben-Israel, 1973):

A = U_r S_r V_r^*  ⟹  f(A) = U_r f(S_r) V_r^*.

Equivalent definition: A = USV^* ⟹ f(A) = U f(S) V^*, where f(S) is diagonal with diagonal entries

f(S)_ii = f(σ_i) if σ_i ≠ 0,   f(S)_ii = 0 if σ_i = 0.
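The equivalent definition translates directly into code; this is a sketch, with a rank-deficient matrix of our own choosing (tiny singular values are treated as zero, as a numerical stand-in for the exact σ_i = 0 branch):

```python
import numpy as np

def gmf(f, A):
    """Generalized matrix function f(A) = U f(S) V^*: apply f to the
    nonzero singular values, map zero singular values to zero."""
    U, s, Vh = np.linalg.svd(A, full_matrices=False)
    fs = np.where(s > 1e-12, f(s), 0.0)  # treat tiny sigma as exactly zero
    return U @ np.diag(fs) @ Vh

A = np.array([[3.0, 0.0], [0.0, 0.0]])  # illustrative rank-deficient matrix
print(gmf(np.exp, A))  # exp hits sigma = 3; the zero singular value stays 0
```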


Why do we care?

Among the applications:
- complex networks (Arrigo, Benzi, Estrada, Fenu, D. Higham, Klymko, ...);
- computing classical matrix functions of structured matrices (Del Buono, Lopez, Politi, ...);
- computer vision (Bylow, Kahl, Larsson, Olsson, ...);
- the unitary factor of the polar decomposition of a full rank matrix is a gmf;
- the Moore-Penrose pseudoinverse of A is a gmf of A.

(So they are familiar to at least some people in the room...)
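The last item can be checked directly: for a real full-column-rank A, the gmf with f(σ) = 1/σ is, up to transposition, the Moore-Penrose pseudoinverse (the example matrix is our own illustrative choice):

```python
import numpy as np

A = np.array([[1.0, 0.0], [0.0, 2.0], [0.0, 0.0]])  # illustrative 3x2, full column rank
U, s, Vh = np.linalg.svd(A, full_matrices=False)
fs = 1.0 / s                     # f(sigma) = 1/sigma; all sigma > 0 here
pinv_gmf = U @ np.diag(fs) @ Vh  # gmf of A, an m x n matrix
# its transpose is the Moore-Penrose pseudoinverse
print(np.allclose(pinv_gmf.T, np.linalg.pinv(A)))  # → True
```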


Example

For a toy matrix A, the gmf f(A) for f(σ) = σ^3 and for f(σ) = e^σ is obtained by applying f to the singular values in the SVD of A. Note that, for f(σ) = e^σ,

e^A ≠ f(A):

the gmf does not coincide with the classical matrix exponential.
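With a toy matrix of our own choosing, it is easy to see that the gmf with f(σ) = e^σ differs from the classical matrix exponential:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [1.0, 0.0]])  # a toy matrix of our own
U, s, Vh = np.linalg.svd(A)             # both singular values equal 1
gmf_exp = U @ np.diag(np.exp(s)) @ Vh   # gmf: exp applied to singular values
# here gmf_exp = e * A, while expm(A) has entries cosh(1) and sinh(1)
print(np.allclose(gmf_exp, expm(A)))    # → False
```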


Basic observations

- In spite of their name, gmf do not reduce to classical mf when m = n;
- Even when m = n, f(A) for a polynomial function f is not a polynomial in A in the classical sense;
- If f(0) ≠ 0 and A is rank deficient, then f is not continuous, let alone differentiable, at A;
- If A = QH is a polar decomposition, then f(A) = Q f(H);
- If H is Hermitian positive definite, the gmf of H coincides with the classical matrix function f(H); if H is only Hermitian positive semidefinite this is not generally true unless f(0) = 0.
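The polar-decomposition property can be illustrated with f ≡ 1, which recovers the unitary polar factor Q itself (the matrix below is an illustrative choice of ours):

```python
import numpy as np
from scipy.linalg import polar

A = np.array([[2.0, 1.0], [0.0, 1.0]])  # illustrative full-rank matrix
Q, H = polar(A)                          # polar decomposition A = QH
U, s, Vh = np.linalg.svd(A)
gmf_one = U @ np.diag(np.ones_like(s)) @ Vh  # gmf with f = 1, i.e. U V^T
print(np.allclose(gmf_one, Q))  # → True: f = 1 gives the unitary polar factor
```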


Intuitions on computational advantage

Although you do not compute classical mf via the (unstable?) eigendecomposition, their definition may lead to numerical ill conditioning. Gmf are instead defined via the (stable) SVD. Does this imply that they are always well conditioned?


Gmf: conditions on differentiability

Theorem. Let A ∈ C^{m×n} and let f : [0, ∞) → R be differentiable on an open set containing the positive singular values of A. Moreover, if rank A < min(m, n), suppose that f(0) = 0 and that f is right differentiable at 0. Then f(X) is Fréchet differentiable at X = A.

Important: in this theorem, the Fréchet derivative is the real Fréchet derivative. Generally, generalized matrix functions are not complex differentiable.

Some notation...

Given X ∈ C^{m×n}, Υ : C^{m×n} → C^{m×n} is the following real-linear operator:

Υ(X) = X^T if m = n;

Υ(X) = \begin{pmatrix} X_1^T \\ X_2 \end{pmatrix} if m > n and X = \begin{pmatrix} X_1 \\ X_2 \end{pmatrix} with X_1 square;

Υ(X) = \begin{pmatrix} X_1^T & X_2 \end{pmatrix} if m < n and X = \begin{pmatrix} X_1 & X_2 \end{pmatrix} with X_1 square.


Notation again...

Given the singular values σ_1 ≥ σ_2 ≥ ... of A ∈ C^{m×n}, of which the first ν are nonzero, we introduce two matrices F, G ∈ R^{m×n}:

F_ij =
  (σ_i f(σ_i) − σ_j f(σ_j)) / (σ_i^2 − σ_j^2)   if i ≠ j, max(i, j) ≤ ν, and σ_i ≠ σ_j;
  (σ_i f'(σ_i) + f(σ_i)) / (2σ_i)               if i ≠ j, max(i, j) ≤ ν, and σ_i = σ_j ≠ 0;
  f(σ_j)/σ_j                                    if i > n and σ_j ≠ 0;
  f(σ_i)/σ_i                                    if j > m and σ_i ≠ 0;
  f'(σ_i)                                       otherwise;

G_ij =
  (σ_j f(σ_i) − σ_i f(σ_j)) / (σ_i^2 − σ_j^2)   if i ≠ j, i, j ≤ ν, and σ_i ≠ σ_j;
  (σ_i f'(σ_i) − f(σ_i)) / (2σ_i)               if i ≠ j, i, j ≤ ν, and σ_i = σ_j ≠ 0;
  0                                             otherwise.


I am lost! Help!

Forgetting about the degenerate cases:

- F is f'(σ_i) on the main diagonal, and (σ_i f(σ_i) − σ_j f(σ_j)) / (σ_i^2 − σ_j^2) off the main diagonal.
- G is 0 on the main diagonal, and (σ_j f(σ_i) − σ_i f(σ_j)) / (σ_i^2 − σ_j^2) off the main diagonal.

Daleckiĭ-Kreĭn formula for gmf

Theorem. Given A = USV^*, under the assumptions of the previous theorem,

L_f(A, E) = U ( F ∘ Ê + iH ∘ Im(Ê) + G ∘ Υ(Ê) ) V^*,

where Ê = U^* E V, F and G are as in the previous slides, and H is diagonal with

H_ii = f(σ_i)/σ_i − F_ii if σ_i ≠ 0,   H_ii = f'(0) − F_ii otherwise.

Daleckiĭ-Kreĭn formula for real gmf

When U, V, E are real, the formula simplifies a little.

Theorem. Given A = USV^T, under the assumptions of the previous theorem,

L_f(A, E) = U ( F ∘ Ê + G ∘ Υ(Ê) ) V^T,

where Ê = U^T E V and F, G are as a few slides ago.
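A sketch of the real formula in the non-degenerate case (square, full rank, distinct singular values), in which Υ(Ê) = Ê^T; the rotation-built test matrix and direction E are our own illustrative choices, and the result is checked against a central finite difference:

```python
import numpy as np

def gmf(f, A):
    U, s, Vh = np.linalg.svd(A)
    return U @ np.diag(f(s)) @ Vh

def gmf_frechet(f, fp, A, E):
    """L_f(A,E) = U (F o Ehat + G o Ehat^T) V^T with Ehat = U^T E V;
    a sketch for real square full-rank A with distinct singular values."""
    U, s, Vh = np.linalg.svd(A)
    n = len(s)
    F = np.empty((n, n))
    G = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            if i == j:
                F[i, j] = fp(s[i])  # diagonal: f'(sigma_i)
                G[i, j] = 0.0       # diagonal of G is zero
            else:
                den = s[i] ** 2 - s[j] ** 2
                F[i, j] = (s[i] * f(s[i]) - s[j] * f(s[j])) / den
                G[i, j] = (s[j] * f(s[i]) - s[i] * f(s[j])) / den
    Ehat = U.T @ E @ Vh.T
    return U @ (F * Ehat + G * Ehat.T) @ Vh

def rot(theta):  # a 3x3 rotation in the (1,2) plane, to build a test matrix
    c, s = np.cos(theta), np.sin(theta)
    R = np.eye(3)
    R[:2, :2] = [[c, -s], [s, c]]
    return R

A = rot(0.3) @ np.diag([3.0, 2.0, 1.0]) @ rot(0.7).T  # sigma = 3, 2, 1
E = np.array([[0.0, 1.0, 0.0], [1.0, 0.0, 1.0], [0.0, 1.0, 0.0]])
L = gmf_frechet(np.exp, np.exp, A, E)
t = 1e-5
fd = (gmf(np.exp, A + t * E) - gmf(np.exp, A - t * E)) / (2 * t)
print(np.allclose(L, fd, atol=1e-5))  # → True
```

As a sanity check of the sign conventions, taking f(σ) = 1/σ in this formula reproduces the derivative of A ↦ A^{-T}, namely −A^{-T} E^T A^{-T}.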

Special cases

For special f, the formulae may simplify. For example, f ≡ 1 gives the unitary factor of a polar decomposition (here f(0) ≠ 0, hence differentiability requires A to have full rank). The theorem then leads to efficient algorithms for the computation of the Fréchet derivative; see Arslan, Noferini and Tisseur (in preparation).


Example

Toy matrices: A = diag(2, 2, 1) and a direction E. For f(σ) = σ^2, the matrices F and G are

F = \begin{pmatrix} 4 & 3 & 7/3 \\ 3 & 4 & 7/3 \\ 7/3 & 7/3 & 2 \end{pmatrix},   G = \begin{pmatrix} 0 & 1 & 2/3 \\ 1 & 0 & 2/3 \\ 2/3 & 2/3 & 0 \end{pmatrix},

and L_f(A, E) = F ∘ E + G ∘ E^T, since here U = V = I.


Example

Toy matrices: A = diag(2, 2, 1) and a direction E. For f(σ) = 1 (polar decomposition), the off-diagonal entries of F are 1/(σ_i + σ_j) and G = −F:

F = \begin{pmatrix} 0 & 1/4 & 1/3 \\ 1/4 & 0 & 1/3 \\ 1/3 & 1/3 & 0 \end{pmatrix} = −G,

so L_f(A, E) = F ∘ E + G ∘ E^T = F ∘ (E − E^T).


Application to conditioning

Theorem. Assuming m ≥ n and A ∈ R^{m×n},

cond_abs(f, A) = max{ max_i |F_ii|, max_{j<i} |F_ij|, max_{i<j} |F_ij + G_ij|, max_{i<j} |F_ij − G_ij| }.

Unlike for classical matrix functions, no condition number of some potentially ill conditioned matrix appears!


Some more bounds

In practice, only a few singular values may be known:

Theorem. If A ∈ R^{m×n} has full rank and smallest singular value σ_r, suppose that M is the sup norm of f on [σ_r, ‖A‖_2] and that f is Lipschitz continuous on the same interval with constant K. Then

cond_abs(f, A) ≤ max{ K, M σ_r^{−1} }.

Theorem. If A ∈ R^{m×n} and f(0) = 0, suppose that f is Lipschitz continuous on [0, ‖A‖_2] with constant K. Then

cond_abs(f, A) ≤ K.
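The second bound can be probed numerically with f(σ) = sin σ, which satisfies f(0) = 0 and is Lipschitz with K = 1; the random matrix and directions are illustrative choices of ours:

```python
import numpy as np

def gmf(f, A):
    # generalized matrix function via the SVD (A has full rank here)
    U, s, Vh = np.linalg.svd(A, full_matrices=False)
    return U @ np.diag(f(s)) @ Vh

# f(sigma) = sin(sigma): f(0) = 0 and f is Lipschitz with K = 1,
# so the bound predicts cond_abs(f, A) <= 1
rng = np.random.default_rng(2)
A = rng.standard_normal((4, 3))
t = 1e-6
ratios = [np.linalg.norm(gmf(np.sin, A + t * E) - gmf(np.sin, A))
          / (t * np.linalg.norm(E))
          for E in (rng.standard_normal((4, 3)) for _ in range(500))]
print(max(ratios) <= 1.0 + 1e-3)  # finite-difference ratios stay below K
```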


Some more bounds

Again, for special choices of f more can be said. The following is known but can be recovered as a special case:

Theorem (Kenney and Laub, 1991). If A ∈ R^{m×n} has full rank r, then for f(x) = 1,

cond_abs(f, A) = 2 / (σ_r + σ_{r−1}).

Theorem. If A ∈ R^{m×n} has full rank r, then for f(x) = e^x, defining g(x) = e^x / x,

cond_abs(f, A) = max{ g(σ_r), g(‖A‖_2), e^{‖A‖_2} }.
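The Kenney-Laub value can be probed numerically: sampling the Fréchet derivative of the polar factor (f ≡ 1, built from the F and G of the earlier slides) over random unit directions gives Frobenius norms that never exceed 2/(σ_r + σ_{r−1}); the diagonal A is an illustrative choice of ours:

```python
import numpy as np

def polar_frechet(A, E):
    # L_f(A,E) for f = 1 (unitary polar factor), assuming the singular
    # values of A are distinct: off the diagonal F_ij = 1/(sigma_i + sigma_j)
    # and G_ij = -1/(sigma_i + sigma_j); both diagonals are zero
    U, s, Vh = np.linalg.svd(A)
    n = len(s)
    F = np.zeros((n, n))
    G = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j:
                F[i, j] = 1.0 / (s[i] + s[j])
                G[i, j] = -1.0 / (s[i] + s[j])
    Ehat = U.T @ E @ Vh.T
    return U @ (F * Ehat + G * Ehat.T) @ Vh

A = np.diag([3.0, 2.0, 1.0])  # sigma_r = 1, sigma_{r-1} = 2
rng = np.random.default_rng(1)
# sample the derivative over random unit directions (Frobenius norm)
est = max(np.linalg.norm(polar_frechet(A, E / np.linalg.norm(E)))
          for E in (rng.standard_normal((3, 3)) for _ in range(2000)))
print(est <= 2.0 / (1.0 + 2.0) + 1e-12)  # never exceeds 2/(sigma_r + sigma_{r-1})
```

The supremum 2/3 is attained in the skew direction supported on the two smallest singular values, which random sampling approaches from below.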

Relative condition number

...it is the relative condition number that is of interest, but it is more convenient to state results for the absolute condition number.



Some more bounds

Define µ = \sqrt{ f(‖A‖_2)^2 + f(σ_r)^2 }.

Theorem. If A ∈ R^{m×n} has full rank and smallest singular value σ_r, suppose that M is the sup norm of f on [σ_r, ‖A‖_2] and that f is Lipschitz continuous on the same interval with constant K. Then

cond_rel(f, A) ≤ max(m, n) ‖A‖_2 µ^{−1} max{ K, M σ_r^{−1} }.

Theorem. If A ∈ R^{m×n} and f(0) = 0, suppose that f is Lipschitz continuous on [0, ‖A‖_2] with constant K. Then

cond_rel(f, A) ≤ max(m, n) K ‖A‖_2 µ^{−1}.


The big picture

If f(0) = 0, then the gmf has essentially the same condition number as the scalar function f. Unlike classical mf, gmf are never numerically dodgier than the scalar case.

If f(0) ≠ 0, trouble may happen only if

max_i f(σ_i)/σ_i ≫ max_i f'(σ_i).

Can trouble happen?

If f(0) ≠ 0, f is discontinuous at a rank deficient A. Intuitively, we expect numerical issues for an ill conditioned A. Let

A = \begin{pmatrix} 1 & 0 \\ 0 & ɛ \end{pmatrix},   f(x) = 1 + (x − ɛ)^2.

Then, for the scalar function,

cond_rel(f, ɛ) = 0,   cond_rel(f, 1) = 1 + O(ɛ^2).

Yet,

cond_rel(f, A) = 1/(ɛ√5) + O(1).



More information

Linear Algebra Massoud Malek

Linear Algebra Massoud Malek CSUEB Linear Algebra Massoud Malek Inner Product and Normed Space In all that follows, the n n identity matrix is denoted by I n, the n n zero matrix by Z n, and the zero vector by θ n An inner product

More information

UNIT 6: The singular value decomposition.

UNIT 6: The singular value decomposition. UNIT 6: The singular value decomposition. María Barbero Liñán Universidad Carlos III de Madrid Bachelor in Statistics and Business Mathematical methods II 2011-2012 A square matrix is symmetric if A T

More information

be a Householder matrix. Then prove the followings H = I 2 uut Hu = (I 2 uu u T u )u = u 2 uut u

be a Householder matrix. Then prove the followings H = I 2 uut Hu = (I 2 uu u T u )u = u 2 uut u MATH 434/534 Theoretical Assignment 7 Solution Chapter 7 (71) Let H = I 2uuT Hu = u (ii) Hv = v if = 0 be a Householder matrix Then prove the followings H = I 2 uut Hu = (I 2 uu )u = u 2 uut u = u 2u =

More information

Remark By definition, an eigenvector must be a nonzero vector, but eigenvalue could be zero.

Remark By definition, an eigenvector must be a nonzero vector, but eigenvalue could be zero. Sec 6 Eigenvalues and Eigenvectors Definition An eigenvector of an n n matrix A is a nonzero vector x such that A x λ x for some scalar λ A scalar λ is called an eigenvalue of A if there is a nontrivial

More information

Nonlinear Programming Algorithms Handout

Nonlinear Programming Algorithms Handout Nonlinear Programming Algorithms Handout Michael C. Ferris Computer Sciences Department University of Wisconsin Madison, Wisconsin 5376 September 9 1 Eigenvalues The eigenvalues of a matrix A C n n are

More information

Matrix Mathematics. Theory, Facts, and Formulas with Application to Linear Systems Theory. Dennis S. Bernstein

Matrix Mathematics. Theory, Facts, and Formulas with Application to Linear Systems Theory. Dennis S. Bernstein Matrix Mathematics Theory, Facts, and Formulas with Application to Linear Systems Theory Dennis S. Bernstein PRINCETON UNIVERSITY PRESS PRINCETON AND OXFORD Contents Special Symbols xv Conventions, Notation,

More information

Definition (T -invariant subspace) Example. Example

Definition (T -invariant subspace) Example. Example Eigenvalues, Eigenvectors, Similarity, and Diagonalization We now turn our attention to linear transformations of the form T : V V. To better understand the effect of T on the vector space V, we begin

More information

We first repeat some well known facts about condition numbers for normwise and componentwise perturbations. Consider the matrix

We first repeat some well known facts about condition numbers for normwise and componentwise perturbations. Consider the matrix BIT 39(1), pp. 143 151, 1999 ILL-CONDITIONEDNESS NEEDS NOT BE COMPONENTWISE NEAR TO ILL-POSEDNESS FOR LEAST SQUARES PROBLEMS SIEGFRIED M. RUMP Abstract. The condition number of a problem measures the sensitivity

More information

Solving Ax = b w/ different b s: LU-Factorization

Solving Ax = b w/ different b s: LU-Factorization Solving Ax = b w/ different b s: LU-Factorization Linear Algebra Josh Engwer TTU 14 September 2015 Josh Engwer (TTU) Solving Ax = b w/ different b s: LU-Factorization 14 September 2015 1 / 21 Elementary

More information

MATH36001 Generalized Inverses and the SVD 2015

MATH36001 Generalized Inverses and the SVD 2015 MATH36001 Generalized Inverses and the SVD 201 1 Generalized Inverses of Matrices A matrix has an inverse only if it is square and nonsingular. However there are theoretical and practical applications

More information

EE/ACM Applications of Convex Optimization in Signal Processing and Communications Lecture 2

EE/ACM Applications of Convex Optimization in Signal Processing and Communications Lecture 2 EE/ACM 150 - Applications of Convex Optimization in Signal Processing and Communications Lecture 2 Andre Tkacenko Signal Processing Research Group Jet Propulsion Laboratory April 5, 2012 Andre Tkacenko

More information

Homework 2 Foundations of Computational Math 2 Spring 2019

Homework 2 Foundations of Computational Math 2 Spring 2019 Homework 2 Foundations of Computational Math 2 Spring 2019 Problem 2.1 (2.1.a) Suppose (v 1,λ 1 )and(v 2,λ 2 ) are eigenpairs for a matrix A C n n. Show that if λ 1 λ 2 then v 1 and v 2 are linearly independent.

More information

Midterm for Introduction to Numerical Analysis I, AMSC/CMSC 466, on 10/29/2015

Midterm for Introduction to Numerical Analysis I, AMSC/CMSC 466, on 10/29/2015 Midterm for Introduction to Numerical Analysis I, AMSC/CMSC 466, on 10/29/2015 The test lasts 1 hour and 15 minutes. No documents are allowed. The use of a calculator, cell phone or other equivalent electronic

More information

Introduction to Numerical Linear Algebra II

Introduction to Numerical Linear Algebra II Introduction to Numerical Linear Algebra II Petros Drineas These slides were prepared by Ilse Ipsen for the 2015 Gene Golub SIAM Summer School on RandNLA 1 / 49 Overview We will cover this material in

More information

Chapter 3 Transformations

Chapter 3 Transformations Chapter 3 Transformations An Introduction to Optimization Spring, 2014 Wei-Ta Chu 1 Linear Transformations A function is called a linear transformation if 1. for every and 2. for every If we fix the bases

More information

Eigenvalues and Eigenvectors

Eigenvalues and Eigenvectors 5 Eigenvalues and Eigenvectors 5.2 THE CHARACTERISTIC EQUATION DETERMINANATS nn Let A be an matrix, let U be any echelon form obtained from A by row replacements and row interchanges (without scaling),

More information

2. Review of Linear Algebra

2. Review of Linear Algebra 2. Review of Linear Algebra ECE 83, Spring 217 In this course we will represent signals as vectors and operators (e.g., filters, transforms, etc) as matrices. This lecture reviews basic concepts from linear

More information

Knowledge Discovery and Data Mining 1 (VO) ( )

Knowledge Discovery and Data Mining 1 (VO) ( ) Knowledge Discovery and Data Mining 1 (VO) (707.003) Review of Linear Algebra Denis Helic KTI, TU Graz Oct 9, 2014 Denis Helic (KTI, TU Graz) KDDM1 Oct 9, 2014 1 / 74 Big picture: KDDM Probability Theory

More information

AMS Foundations Exam - Part A, January 2018

AMS Foundations Exam - Part A, January 2018 AMS Foundations Exam - Part A, January 2018 Name: ID Num. Part A: / 75 Part B: / 75 Total: / 150 This component of the exam (Part A) consists of two sections (Linear Algebra and Advanced Calculus) with

More information

Index. book 2009/5/27 page 121. (Page numbers set in bold type indicate the definition of an entry.)

Index. book 2009/5/27 page 121. (Page numbers set in bold type indicate the definition of an entry.) page 121 Index (Page numbers set in bold type indicate the definition of an entry.) A absolute error...26 componentwise...31 in subtraction...27 normwise...31 angle in least squares problem...98,99 approximation

More information

Computing Matrix Functions by Iteration: Convergence, Stability and the Role of Padé Approximants

Computing Matrix Functions by Iteration: Convergence, Stability and the Role of Padé Approximants Computing Matrix Functions by Iteration: Convergence, Stability and the Role of Padé Approximants Nick Higham School of Mathematics The University of Manchester higham@ma.man.ac.uk http://www.ma.man.ac.uk/~higham/

More information

T.8. Perron-Frobenius theory of positive matrices From: H.R. Thieme, Mathematics in Population Biology, Princeton University Press, Princeton 2003

T.8. Perron-Frobenius theory of positive matrices From: H.R. Thieme, Mathematics in Population Biology, Princeton University Press, Princeton 2003 T.8. Perron-Frobenius theory of positive matrices From: H.R. Thieme, Mathematics in Population Biology, Princeton University Press, Princeton 2003 A vector x R n is called positive, symbolically x > 0,

More information

EXAM. Exam 1. Math 5316, Fall December 2, 2012

EXAM. Exam 1. Math 5316, Fall December 2, 2012 EXAM Exam Math 536, Fall 22 December 2, 22 Write all of your answers on separate sheets of paper. You can keep the exam questions. This is a takehome exam, to be worked individually. You can use your notes.

More information

Computational Methods CMSC/AMSC/MAPL 460. EigenValue decomposition Singular Value Decomposition. Ramani Duraiswami, Dept. of Computer Science

Computational Methods CMSC/AMSC/MAPL 460. EigenValue decomposition Singular Value Decomposition. Ramani Duraiswami, Dept. of Computer Science Computational Methods CMSC/AMSC/MAPL 460 EigenValue decomposition Singular Value Decomposition Ramani Duraiswami, Dept. of Computer Science Hermitian Matrices A square matrix for which A = A H is said

More information

STAT 309: MATHEMATICAL COMPUTATIONS I FALL 2017 LECTURE 5

STAT 309: MATHEMATICAL COMPUTATIONS I FALL 2017 LECTURE 5 STAT 39: MATHEMATICAL COMPUTATIONS I FALL 17 LECTURE 5 1 existence of svd Theorem 1 (Existence of SVD) Every matrix has a singular value decomposition (condensed version) Proof Let A C m n and for simplicity

More information

NORMS ON SPACE OF MATRICES

NORMS ON SPACE OF MATRICES NORMS ON SPACE OF MATRICES. Operator Norms on Space of linear maps Let A be an n n real matrix and x 0 be a vector in R n. We would like to use the Picard iteration method to solve for the following system

More information

Lecture II: Linear Algebra Revisited

Lecture II: Linear Algebra Revisited Lecture II: Linear Algebra Revisited Overview Vector spaces, Hilbert & Banach Spaces, etrics & Norms atrices, Eigenvalues, Orthogonal Transformations, Singular Values Operators, Operator Norms, Function

More information

Summary of Week 9 B = then A A =

Summary of Week 9 B = then A A = Summary of Week 9 Finding the square root of a positive operator Last time we saw that positive operators have a unique positive square root We now briefly look at how one would go about calculating the

More information

Lecture 3: Review of Linear Algebra

Lecture 3: Review of Linear Algebra ECE 83 Fall 2 Statistical Signal Processing instructor: R Nowak, scribe: R Nowak Lecture 3: Review of Linear Algebra Very often in this course we will represent signals as vectors and operators (eg, filters,

More information

Math 102, Winter Final Exam Review. Chapter 1. Matrices and Gaussian Elimination

Math 102, Winter Final Exam Review. Chapter 1. Matrices and Gaussian Elimination Math 0, Winter 07 Final Exam Review Chapter. Matrices and Gaussian Elimination { x + x =,. Different forms of a system of linear equations. Example: The x + 4x = 4. [ ] [ ] [ ] vector form (or the column

More information

Background Mathematics (2/2) 1. David Barber

Background Mathematics (2/2) 1. David Barber Background Mathematics (2/2) 1 David Barber University College London Modified by Samson Cheung (sccheung@ieee.org) 1 These slides accompany the book Bayesian Reasoning and Machine Learning. The book and

More information

Advanced Engineering Mathematics Prof. Pratima Panigrahi Department of Mathematics Indian Institute of Technology, Kharagpur

Advanced Engineering Mathematics Prof. Pratima Panigrahi Department of Mathematics Indian Institute of Technology, Kharagpur Advanced Engineering Mathematics Prof. Pratima Panigrahi Department of Mathematics Indian Institute of Technology, Kharagpur Lecture No. #07 Jordan Canonical Form Cayley Hamilton Theorem (Refer Slide Time:

More information

DS-GA 1002 Lecture notes 0 Fall Linear Algebra. These notes provide a review of basic concepts in linear algebra.

DS-GA 1002 Lecture notes 0 Fall Linear Algebra. These notes provide a review of basic concepts in linear algebra. DS-GA 1002 Lecture notes 0 Fall 2016 Linear Algebra These notes provide a review of basic concepts in linear algebra. 1 Vector spaces You are no doubt familiar with vectors in R 2 or R 3, i.e. [ ] 1.1

More information

Eigenvalue Problems and Singular Value Decomposition

Eigenvalue Problems and Singular Value Decomposition Eigenvalue Problems and Singular Value Decomposition Sanzheng Qiao Department of Computing and Software McMaster University August, 2012 Outline 1 Eigenvalue Problems 2 Singular Value Decomposition 3 Software

More information

Control Systems. Linear Algebra topics. L. Lanari

Control Systems. Linear Algebra topics. L. Lanari Control Systems Linear Algebra topics L Lanari outline basic facts about matrices eigenvalues - eigenvectors - characteristic polynomial - algebraic multiplicity eigenvalues invariance under similarity

More information

Elementary linear algebra

Elementary linear algebra Chapter 1 Elementary linear algebra 1.1 Vector spaces Vector spaces owe their importance to the fact that so many models arising in the solutions of specific problems turn out to be vector spaces. The

More information

Eigenvalue and Eigenvector Problems

Eigenvalue and Eigenvector Problems Eigenvalue and Eigenvector Problems An attempt to introduce eigenproblems Radu Trîmbiţaş Babeş-Bolyai University April 8, 2009 Radu Trîmbiţaş ( Babeş-Bolyai University) Eigenvalue and Eigenvector Problems

More information

Linear Algebra review Powers of a diagonalizable matrix Spectral decomposition

Linear Algebra review Powers of a diagonalizable matrix Spectral decomposition Linear Algebra review Powers of a diagonalizable matrix Spectral decomposition Prof. Tesler Math 283 Fall 2016 Also see the separate version of this with Matlab and R commands. Prof. Tesler Diagonalizing

More information

EE731 Lecture Notes: Matrix Computations for Signal Processing

EE731 Lecture Notes: Matrix Computations for Signal Processing EE731 Lecture Notes: Matrix Computations for Signal Processing James P. Reilly c Department of Electrical and Computer Engineering McMaster University October 17, 005 Lecture 3 3 he Singular Value Decomposition

More information

Least Squares Optimization

Least Squares Optimization Least Squares Optimization The following is a brief review of least squares optimization and constrained optimization techniques. I assume the reader is familiar with basic linear algebra, including the

More information

Lecture 7: Positive Semidefinite Matrices

Lecture 7: Positive Semidefinite Matrices Lecture 7: Positive Semidefinite Matrices Rajat Mittal IIT Kanpur The main aim of this lecture note is to prepare your background for semidefinite programming. We have already seen some linear algebra.

More information

Linear Algebra - Part II

Linear Algebra - Part II Linear Algebra - Part II Projection, Eigendecomposition, SVD (Adapted from Sargur Srihari s slides) Brief Review from Part 1 Symmetric Matrix: A = A T Orthogonal Matrix: A T A = AA T = I and A 1 = A T

More information

Jim Lambers MAT 610 Summer Session Lecture 1 Notes

Jim Lambers MAT 610 Summer Session Lecture 1 Notes Jim Lambers MAT 60 Summer Session 2009-0 Lecture Notes Introduction This course is about numerical linear algebra, which is the study of the approximate solution of fundamental problems from linear algebra

More information

The University of Texas at Austin Department of Electrical and Computer Engineering. EE381V: Large Scale Learning Spring 2013.

The University of Texas at Austin Department of Electrical and Computer Engineering. EE381V: Large Scale Learning Spring 2013. The University of Texas at Austin Department of Electrical and Computer Engineering EE381V: Large Scale Learning Spring 2013 Assignment Two Caramanis/Sanghavi Due: Tuesday, Feb. 19, 2013. Computational

More information

15 Singular Value Decomposition

15 Singular Value Decomposition 15 Singular Value Decomposition For any high-dimensional data analysis, one s first thought should often be: can I use an SVD? The singular value decomposition is an invaluable analysis tool for dealing

More information

Generalized eigenvector - Wikipedia, the free encyclopedia

Generalized eigenvector - Wikipedia, the free encyclopedia 1 of 30 18/03/2013 20:00 Generalized eigenvector From Wikipedia, the free encyclopedia In linear algebra, for a matrix A, there may not always exist a full set of linearly independent eigenvectors that

More information

The following definition is fundamental.

The following definition is fundamental. 1. Some Basics from Linear Algebra With these notes, I will try and clarify certain topics that I only quickly mention in class. First and foremost, I will assume that you are familiar with many basic

More information

Math Linear Algebra Final Exam Review Sheet

Math Linear Algebra Final Exam Review Sheet Math 15-1 Linear Algebra Final Exam Review Sheet Vector Operations Vector addition is a component-wise operation. Two vectors v and w may be added together as long as they contain the same number n of

More information

Vector Spaces and Linear Transformations

Vector Spaces and Linear Transformations Vector Spaces and Linear Transformations Wei Shi, Jinan University 2017.11.1 1 / 18 Definition (Field) A field F = {F, +, } is an algebraic structure formed by a set F, and closed under binary operations

More information

Lecture 10 - Eigenvalues problem

Lecture 10 - Eigenvalues problem Lecture 10 - Eigenvalues problem Department of Computer Science University of Houston February 28, 2008 1 Lecture 10 - Eigenvalues problem Introduction Eigenvalue problems form an important class of problems

More information

MATH 612 Computational methods for equation solving and function minimization Week # 2

MATH 612 Computational methods for equation solving and function minimization Week # 2 MATH 612 Computational methods for equation solving and function minimization Week # 2 Instructor: Francisco-Javier Pancho Sayas Spring 2014 University of Delaware FJS MATH 612 1 / 38 Plan for this week

More information

Matrices and Vectors. Definition of Matrix. An MxN matrix A is a two-dimensional array of numbers A =

Matrices and Vectors. Definition of Matrix. An MxN matrix A is a two-dimensional array of numbers A = 30 MATHEMATICS REVIEW G A.1.1 Matrices and Vectors Definition of Matrix. An MxN matrix A is a two-dimensional array of numbers A = a 11 a 12... a 1N a 21 a 22... a 2N...... a M1 a M2... a MN A matrix can

More information

Linear Algebra, part 3. Going back to least squares. Mathematical Models, Analysis and Simulation = 0. a T 1 e. a T n e. Anna-Karin Tornberg

Linear Algebra, part 3. Going back to least squares. Mathematical Models, Analysis and Simulation = 0. a T 1 e. a T n e. Anna-Karin Tornberg Linear Algebra, part 3 Anna-Karin Tornberg Mathematical Models, Analysis and Simulation Fall semester, 2010 Going back to least squares (Sections 1.7 and 2.3 from Strang). We know from before: The vector

More information

Math Ordinary Differential Equations

Math Ordinary Differential Equations Math 411 - Ordinary Differential Equations Review Notes - 1 1 - Basic Theory A first order ordinary differential equation has the form x = f(t, x) (11) Here x = dx/dt Given an initial data x(t 0 ) = x

More information

SVD, PCA & Preprocessing

SVD, PCA & Preprocessing Chapter 1 SVD, PCA & Preprocessing Part 2: Pre-processing and selecting the rank Pre-processing Skillicorn chapter 3.1 2 Why pre-process? Consider matrix of weather data Monthly temperatures in degrees

More information

The Singular Value Decomposition

The Singular Value Decomposition The Singular Value Decomposition An Important topic in NLA Radu Tiberiu Trîmbiţaş Babeş-Bolyai University February 23, 2009 Radu Tiberiu Trîmbiţaş ( Babeş-Bolyai University)The Singular Value Decomposition

More information

MATH 425-Spring 2010 HOMEWORK ASSIGNMENTS

MATH 425-Spring 2010 HOMEWORK ASSIGNMENTS MATH 425-Spring 2010 HOMEWORK ASSIGNMENTS Instructor: Shmuel Friedland Department of Mathematics, Statistics and Computer Science email: friedlan@uic.edu Last update April 18, 2010 1 HOMEWORK ASSIGNMENT

More information

Least Squares Optimization

Least Squares Optimization Least Squares Optimization The following is a brief review of least squares optimization and constrained optimization techniques. Broadly, these techniques can be used in data analysis and visualization

More information

MAT 610: Numerical Linear Algebra. James V. Lambers

MAT 610: Numerical Linear Algebra. James V. Lambers MAT 610: Numerical Linear Algebra James V Lambers January 16, 2017 2 Contents 1 Matrix Multiplication Problems 7 11 Introduction 7 111 Systems of Linear Equations 7 112 The Eigenvalue Problem 8 12 Basic

More information

Fall TMA4145 Linear Methods. Exercise set Given the matrix 1 2

Fall TMA4145 Linear Methods. Exercise set Given the matrix 1 2 Norwegian University of Science and Technology Department of Mathematical Sciences TMA445 Linear Methods Fall 07 Exercise set Please justify your answers! The most important part is how you arrive at an

More information