Sophomoric Matrix Multiplication


1 Sophomoric Matrix Multiplication Carl C. Cowen IUPUI (Indiana University Purdue University Indianapolis) Universidad de Zaragoza, 3 July 2009

2 Linear algebra students learn, for $m \times n$ matrices $A$, $B$, and $C$, that matrix addition is $A + B = C$ if and only if $a_{ij} + b_{ij} = c_{ij}$. They expect matrix multiplication to be $AB = C$ if and only if $a_{ij} b_{ij} = c_{ij}$, but the professor says "No! It is much more complicated than that!" Today, I want to explain why this kind of multiplication not only is sensible but also is very practical, very interesting, and has many applications in mathematics and related subjects. Definition. If $A$ and $B$ are $m \times n$ matrices, the Schur (or Hadamard, or naïve, or sophomoric) product of $A$ and $B$ is the $m \times n$ matrix $C = A \circ B$ with $c_{ij} = a_{ij} b_{ij}$.
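In software, the Schur product is just elementwise multiplication. As a quick illustration (my own sketch, not part of the talk), NumPy's `*` operator on arrays computes exactly $c_{ij} = a_{ij} b_{ij}$:

```python
import numpy as np

# Two 2 x 3 matrices; the Schur (Hadamard) product multiplies entrywise.
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])
B = np.array([[7.0, 8.0, 9.0],
              [1.0, 0.0, -1.0]])

C = A * B          # c_ij = a_ij * b_ij -- the "sophomoric" product
print(C)           # [[ 7. 16. 27.]
                   #  [ 4.  0. -6.]]
```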

3 These ideas go back more than a century: to Moutard (1894), who didn't even notice he had proved anything(!), Hadamard (1899), and Schur (1911). Hadamard considered analytic functions $f(z) = \sum_{n=0}^{\infty} a_n z^n$ and $g(z) = \sum_{n=0}^{\infty} b_n z^n$ that have singularities at $\{\alpha_i\}$ and $\{\beta_j\}$ respectively. He proved that if $h(z) = \sum_{n=0}^{\infty} a_n b_n z^n$ has singularities $\{\gamma_k\}$, then $\{\gamma_k\} \subset \{\alpha_i \beta_j\}$.

4 This seems a little less surprising when you consider convolutions. Let $f$ and $g$ be $2\pi$-periodic functions on $\mathbb{R}$, and set $a_k = \int_0^{2\pi} e^{-ik\theta} f(\theta)\, \frac{d\theta}{2\pi}$ and $b_k = \int_0^{2\pi} e^{-ik\theta} g(\theta)\, \frac{d\theta}{2\pi}$, so that $f \sim \sum a_k e^{ik\theta}$ and $g \sim \sum b_k e^{ik\theta}$. If $h(\theta) = \int_0^{2\pi} f(\theta - t)\, g(t)\, \frac{dt}{2\pi}$, then $h \sim \sum a_k b_k e^{ik\theta}$, and $f \geq 0$ and $g \geq 0$ implies $h \geq 0$.
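The discrete analogue of this fact is easy to check numerically: the Fourier coefficients of a normalized circular convolution are the products of the coefficients of the factors. A small sketch of my own, using NumPy's FFT with the convention $a_k = \frac{1}{N}\sum_m f_m e^{-2\pi i k m / N}$:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 128
f = rng.standard_normal(N)   # samples of one period of f
g = rng.standard_normal(N)   # samples of one period of g

# Normalized circular convolution: h[m] = (1/N) * sum_t f[m - t] g[t],
# the discrete analogue of integrating against dt/(2*pi).
h = np.array([np.sum(f[(m - np.arange(N)) % N] * g) for m in range(N)]) / N

a, b, c = np.fft.fft(f) / N, np.fft.fft(g) / N, np.fft.fft(h) / N
print(np.allclose(c, a * b))   # True: the coefficients multiply
```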

5 Schur's name is most often associated with the matrix product because he published the first theorem about this type of matrix multiplication. Definition. A real (or complex) $n \times n$ matrix $A$ is called positive (or positive semidefinite) if $A = A^*$ and $\langle Ax, x \rangle \geq 0$ for all $x$ in $\mathbb{R}^n$ (or $\mathbb{C}^n$). Then: for any $m \times n$ matrix $A$, both $AA^*$ and $A^*A$ are positive. Conversely, if $B$ is positive, then $B = AA^*$ for some $A$. In statistics, every variance-covariance matrix is positive.
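These facts are easy to sanity-check numerically. In the sketch below (my own illustration), positivity is tested the practical way, by checking symmetry and the smallest eigenvalue:

```python
import numpy as np

def is_positive(M, tol=1e-10):
    """Positive (semidefinite): symmetric with nonnegative eigenvalues."""
    return np.allclose(M, M.T) and np.linalg.eigvalsh(M).min() >= -tol

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 5))   # an arbitrary m x n matrix
print(is_positive(A @ A.T))       # True: A A^t is always positive
print(is_positive(A.T @ A))       # True: A^t A is always positive
```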

6 Examples: $\begin{pmatrix} 1 & 2 \\ 2 & 1 \end{pmatrix}$ is NOT positive (its eigenvalues are $3$ and $-1$), while $B = \begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}$ and $C = \begin{pmatrix} 1 & 0 \\ 0 & 2 \end{pmatrix}$ are positive; yet the ordinary product $BC = \begin{pmatrix} 2 & 2 \\ 1 & 4 \end{pmatrix}$ is not even symmetric, so it is not positive.

7 Schur Product Theorem (1911). If $A$ and $B$ are positive $n \times n$ matrices, then $A \circ B$ is positive also.

Applications. Experimental design: if $A$ and $B$ are variance-covariance matrices, then $A \circ B$ is positive also. P.D.E.'s: let $\Omega$ be a domain in $\mathbb{R}^2$ and let $L$ be the differential operator $Lu = a_{11} \frac{\partial^2 u}{\partial x^2} + 2 a_{12} \frac{\partial^2 u}{\partial x \partial y} + a_{22} \frac{\partial^2 u}{\partial y^2} + b_1 \frac{\partial u}{\partial x} + b_2 \frac{\partial u}{\partial y} + cu$. $L$ is called elliptic if $\begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix}$ is positive definite.
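Before proving the theorem, one can test it empirically (a sketch of my own; random positive matrices are built as $XX^t$, and positivity is read off the smallest eigenvalue):

```python
import numpy as np

rng = np.random.default_rng(2)
for _ in range(1000):
    X = rng.standard_normal((4, 4))
    Y = rng.standard_normal((4, 4))
    A, B = X @ X.T, Y @ Y.T   # two random positive matrices
    # In NumPy, A * B is the Schur (entrywise) product, not matrix product.
    assert np.linalg.eigvalsh(A * B).min() >= -1e-8
print("Schur product stayed positive in all 1000 random trials")
```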

8–14 Weak Minimum Principle (Moutard, 1894). If $L$ is elliptic, $c < 0$, and $Lu \leq 0$ in $\Omega$, then $u$ cannot have a negative minimum value in $\Omega$.

By contradiction: if $u$ has a minimum at $(x_0, y_0)$ with $u(x_0, y_0) < 0$, then $u_x(x_0, y_0) = u_y(x_0, y_0) = 0$ and the Hessian of $u$ at $(x_0, y_0)$ is positive semidefinite. So, at $(x_0, y_0)$,

$$0 \geq Lu = a_{11} \frac{\partial^2 u}{\partial x^2} + 2 a_{12} \frac{\partial^2 u}{\partial x \partial y} + a_{22} \frac{\partial^2 u}{\partial y^2} + b_1 \frac{\partial u}{\partial x} + b_2 \frac{\partial u}{\partial y} + cu = a_{11} \frac{\partial^2 u}{\partial x^2} + 2 a_{12} \frac{\partial^2 u}{\partial x \partial y} + a_{22} \frac{\partial^2 u}{\partial y^2} + cu$$

$$= \left\langle \left( \begin{pmatrix} a_{11} & a_{12} \\ a_{12} & a_{22} \end{pmatrix} \circ \begin{pmatrix} u_{xx} & u_{xy} \\ u_{xy} & u_{yy} \end{pmatrix} \right) \begin{pmatrix} 1 \\ 1 \end{pmatrix}, \begin{pmatrix} 1 \\ 1 \end{pmatrix} \right\rangle + cu > 0,$$

since the Schur product of the positive coefficient matrix with the positive semidefinite Hessian is positive by Schur's theorem, while $cu > 0$ because $c < 0$ and $u(x_0, y_0) < 0$. Contradiction!
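The step that turns the second-order part of $Lu$ into a Schur product is the identity $a_{11} u_{xx} + 2 a_{12} u_{xy} + a_{22} u_{yy} = \langle (A \circ H) e, e \rangle$, where $H$ is the Hessian and $e = (1,1)^t$. A numerical sanity check of that identity (my own sketch):

```python
import numpy as np

rng = np.random.default_rng(3)
a11, a12, a22 = 2.0, 0.5, 1.5              # elliptic: a11 * a22 > a12**2
uxx, uxy, uyy = rng.standard_normal(3)     # second derivatives at a point

A = np.array([[a11, a12], [a12, a22]])     # coefficient matrix
H = np.array([[uxx, uxy], [uxy, uyy]])     # Hessian of u
e = np.ones(2)

second_order = a11 * uxx + 2 * a12 * uxy + a22 * uyy
print(np.isclose(second_order, e @ ((A * H) @ e)))   # True
```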

15 Fejér's Uniqueness Theorem. If $L$ is elliptic and $c < 0$ on $\Omega$, then there is at most one solution of the boundary value problem $Lu = f$ in $\Omega$, $u = g$ on $\partial\Omega$, that is continuous on $\overline{\Omega}$ and smooth in $\Omega$.

Suppose $u_1$ and $u_2$ are both solutions with $u_1 \neq u_2$. Since $u_1 = g = u_2$ on $\partial\Omega$, we have $u_1 - u_2 = 0 = u_2 - u_1$ on $\partial\Omega$, so either $u_1 - u_2$ or $u_2 - u_1$ must have a negative minimum value in $\Omega$. But $Lu_1 = f = Lu_2$ in $\Omega$, so $L(u_1 - u_2) = 0 = L(u_2 - u_1)$ in $\Omega$. By Moutard's principle, neither can have a negative minimum value in $\Omega$; so we must have $u_1 \equiv u_2$.

16 Goal: prove Schur's theorem. Recall $(AB)^t = B^t A^t$ and $(AB)^* = B^* A^*$. We do the case of real scalars: the argument is the same with complex scalars, but less comfortable for most mathematicians. Use column vectors: $\langle x, y \rangle = x_1 y_1 + x_2 y_2 + x_3 y_3 + \cdots + x_n y_n = x^t y$.

Lemma. An $n \times n$ matrix $A$ is positive if and only if $A = v_1 v_1^t + v_2 v_2^t + v_3 v_3^t + \cdots + v_k v_k^t$ for vectors $v_1, v_2, v_3, \ldots, v_k$ and $k \leq n$.

17 Lemma. An $n \times n$ matrix $A$ is positive if and only if $A = v_1 v_1^t + v_2 v_2^t + \cdots + v_k v_k^t$ for vectors $v_1, v_2, \ldots, v_k$ and $k \leq n$.

($\Leftarrow$) Let $x$ be in $\mathbb{R}^n$ and $A = v_1 v_1^t + v_2 v_2^t + \cdots + v_k v_k^t$ for vectors $v_1, v_2, \ldots, v_k$. Each $v_j v_j^t$ is symmetric, so $A = A^t$, and

$$\langle Ax, x \rangle = \langle (v_1 v_1^t + v_2 v_2^t + \cdots + v_k v_k^t) x, x \rangle = \sum_j \langle v_j v_j^t x, x \rangle = \sum_j (v_j v_j^t x)^t x = \sum_j x^t (v_j^t)^t v_j^t x = \sum_j (v_j^t x)^t (v_j^t x) = \sum_j \langle v_j, x \rangle \langle v_j, x \rangle = \sum_j \langle v_j, x \rangle^2 \geq 0.$$

18 Lemma. An $n \times n$ matrix $A$ is positive if and only if $A = v_1 v_1^t + v_2 v_2^t + \cdots + v_k v_k^t$ for vectors $v_1, v_2, \ldots, v_k$ and $k \leq n$.

($\Rightarrow$) Let $A$ be positive. Since $A = A^t$, there is an orthonormal basis for $\mathbb{R}^n$ consisting of eigenvectors of $A$; call it $w_1, w_2, \ldots, w_n$. For each $j$, let $\alpha_j$ be the corresponding eigenvalue, that is, $A w_j = \alpha_j w_j$. Because $A$ is positive, $\alpha_j \geq 0$ for all $j$. We suppose they have been numbered so that $\alpha_j > 0$ for $1 \leq j \leq k$ and $\alpha_j = 0$ for $j \geq k + 1$. For $1 \leq j \leq k$, choose $\beta_j > 0$ with $\alpha_j = \beta_j^2$ and let $v_j = \beta_j w_j$. Then we show that $A = v_1 v_1^t + v_2 v_2^t + \cdots + v_k v_k^t = \alpha_1 w_1 w_1^t + \alpha_2 w_2 w_2^t + \cdots + \alpha_k w_k w_k^t$.

19 Lemma. An $n \times n$ matrix $A$ is positive if and only if $A = v_1 v_1^t + v_2 v_2^t + \cdots + v_k v_k^t$ for vectors $v_1, v_2, \ldots, v_k$ and $k \leq n$.

($\Rightarrow$, continued) To show that $A = \alpha_1 w_1 w_1^t + \alpha_2 w_2 w_2^t + \cdots + \alpha_k w_k w_k^t$, we show that for each $x$ in $\mathbb{R}^n$, $Ax$ and $(\alpha_1 w_1 w_1^t + \alpha_2 w_2 w_2^t + \cdots + \alpha_k w_k w_k^t) x$ are the same. If $x$ is a vector in $\mathbb{R}^n$, then $x$ is a linear combination of the $w_j$'s, say $x = x_1 w_1 + x_2 w_2 + \cdots + x_n w_n$. Then $Ax$ is given by $Ax = A(x_1 w_1 + x_2 w_2 + \cdots + x_n w_n) = x_1 A w_1 + x_2 A w_2 + \cdots + x_n A w_n$.

20 Lemma. An $n \times n$ matrix $A$ is positive if and only if $A = v_1 v_1^t + v_2 v_2^t + \cdots + v_k v_k^t$ for vectors $v_1, v_2, \ldots, v_k$ and $k \leq n$.

($\Rightarrow$, continued) which is $Ax = x_1 A w_1 + x_2 A w_2 + \cdots + x_n A w_n = x_1 \alpha_1 w_1 + x_2 \alpha_2 w_2 + \cdots + x_k \alpha_k w_k + \cdots + x_n \alpha_n w_n = x_1 \alpha_1 w_1 + x_2 \alpha_2 w_2 + \cdots + x_k \alpha_k w_k + \cdots + x_n \cdot 0 \cdot w_n = x_1 \alpha_1 w_1 + x_2 \alpha_2 w_2 + \cdots + x_k \alpha_k w_k$. Notice that $w_i^t w_j = \langle w_i, w_j \rangle = \delta_{ij}$.

21 Lemma. An $n \times n$ matrix $A$ is positive if and only if $A = v_1 v_1^t + v_2 v_2^t + \cdots + v_k v_k^t$ for vectors $v_1, v_2, \ldots, v_k$ and $k \leq n$.

($\Rightarrow$, continued) Similar to the above calculation,

$$(v_1 v_1^t + v_2 v_2^t + \cdots + v_k v_k^t) x = (\alpha_1 w_1 w_1^t + \alpha_2 w_2 w_2^t + \cdots + \alpha_k w_k w_k^t) x = \left( \sum_i \alpha_i w_i w_i^t \right) \left( \sum_j x_j w_j \right) = \sum_{i,j} (\alpha_i w_i w_i^t) x_j w_j = \sum_{i,j} \alpha_i x_j w_i (w_i^t w_j) = \sum_i \alpha_i x_i w_i = x_1 \alpha_1 w_1 + x_2 \alpha_2 w_2 + \cdots + x_k \alpha_k w_k.$$

Thus, for each $x$, $Ax = (v_1 v_1^t + v_2 v_2^t + \cdots + v_k v_k^t) x$, and $A = v_1 v_1^t + v_2 v_2^t + \cdots + v_k v_k^t$.
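The construction in this direction of the proof is exactly what an eigendecomposition delivers in practice. A sketch of my own (NumPy's `eigh` returns the eigenvalues and an orthonormal matrix of eigenvectors of a symmetric matrix):

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.standard_normal((4, 4))
A = X @ X.T                             # a positive matrix

alpha, W = np.linalg.eigh(A)            # A w_j = alpha_j w_j, alpha_j >= 0
keep = alpha > 1e-12                    # the k strictly positive eigenvalues
V = W[:, keep] * np.sqrt(alpha[keep])   # columns v_j = beta_j w_j

# A equals the sum of the rank-one pieces v_j v_j^t, i.e. V V^t.
print(np.allclose(A, V @ V.T))          # True
```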

22 Lemma. If $u$ and $v$ are vectors in $\mathbb{R}^n$, then $(uu^t) \circ (vv^t) = (u \circ v)(u \circ v)^t$.

If $u = (u_1, u_2, \ldots, u_n)^t$, then $uu^t = \begin{pmatrix} u_1 u_1 & u_1 u_2 & \cdots & u_1 u_n \\ u_2 u_1 & u_2 u_2 & \cdots & u_2 u_n \\ \vdots & \vdots & & \vdots \\ u_n u_1 & u_n u_2 & \cdots & u_n u_n \end{pmatrix}$. Thus, $(uu^t) \circ (vv^t) = \begin{pmatrix} u_1 u_1 & u_1 u_2 & \cdots & u_1 u_n \\ u_2 u_1 & u_2 u_2 & \cdots & u_2 u_n \\ \vdots & \vdots & & \vdots \\ u_n u_1 & u_n u_2 & \cdots & u_n u_n \end{pmatrix} \circ \begin{pmatrix} v_1 v_1 & v_1 v_2 & \cdots & v_1 v_n \\ v_2 v_1 & v_2 v_2 & \cdots & v_2 v_n \\ \vdots & \vdots & & \vdots \\ v_n v_1 & v_n v_2 & \cdots & v_n v_n \end{pmatrix} =$

23 Lemma. If $u$ and $v$ are vectors in $\mathbb{R}^n$, then $(uu^t) \circ (vv^t) = (u \circ v)(u \circ v)^t$. Thus,

$$(uu^t) \circ (vv^t) = \begin{pmatrix} u_1 u_1 v_1 v_1 & u_1 u_2 v_1 v_2 & \cdots & u_1 u_n v_1 v_n \\ u_2 u_1 v_2 v_1 & u_2 u_2 v_2 v_2 & \cdots & u_2 u_n v_2 v_n \\ \vdots & \vdots & & \vdots \\ u_n u_1 v_n v_1 & u_n u_2 v_n v_2 & \cdots & u_n u_n v_n v_n \end{pmatrix} = \begin{pmatrix} u_1 v_1 u_1 v_1 & u_1 v_1 u_2 v_2 & \cdots & u_1 v_1 u_n v_n \\ u_2 v_2 u_1 v_1 & u_2 v_2 u_2 v_2 & \cdots & u_2 v_2 u_n v_n \\ \vdots & \vdots & & \vdots \\ u_n v_n u_1 v_1 & u_n v_n u_2 v_2 & \cdots & u_n v_n u_n v_n \end{pmatrix} = (u \circ v)(u \circ v)^t.$$
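This rank-one identity is a one-line check in NumPy (an illustration of my own; `np.outer(u, u)` is $uu^t$ and `u * v` is $u \circ v$):

```python
import numpy as np

rng = np.random.default_rng(5)
u = rng.standard_normal(5)
v = rng.standard_normal(5)

lhs = np.outer(u, u) * np.outer(v, v)   # (u u^t) o (v v^t)
rhs = np.outer(u * v, u * v)            # (u o v)(u o v)^t
print(np.allclose(lhs, rhs))            # True
```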

24 Schur Product Theorem (1911). If $A$ and $B$ are positive $n \times n$ matrices, then $A \circ B$ is positive also.

There are vectors $u_1, u_2, \ldots, u_k$ so that $A = u_1 u_1^t + u_2 u_2^t + \cdots + u_k u_k^t$ and vectors $v_1, v_2, \ldots, v_l$ so that $B = v_1 v_1^t + v_2 v_2^t + \cdots + v_l v_l^t$. Now,

$$A \circ B = \left( \sum_i u_i u_i^t \right) \circ \left( \sum_j v_j v_j^t \right) = \sum_{i,j} (u_i u_i^t) \circ (v_j v_j^t) = \sum_{i,j} (u_i \circ v_j)(u_i \circ v_j)^t \geq 0.$$

25 Corollary. If $A = (a_{ij})$ is a positive $n \times n$ matrix, then $(a_{ij}^2)$, $(a_{ij}^3)$, and $(e^{a_{ij}})$ are positive also. Indeed, $(a_{ij}^2) = A \circ A$ and each higher entrywise power is positive by induction, while $(e^{a_{ij}}) = \sum_{k=0}^{\infty} \frac{1}{k!} (a_{ij}^k)$ is a limit of sums of positive matrices. For example, $\begin{pmatrix} 3 & 2 \\ 2 & 2 \end{pmatrix}$ is positive, so $\begin{pmatrix} 9 & 4 \\ 4 & 4 \end{pmatrix}$, $\begin{pmatrix} 27 & 8 \\ 8 & 8 \end{pmatrix}$, and $\begin{pmatrix} e^3 & e^2 \\ e^2 & e^2 \end{pmatrix}$ are also positive!
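The corollary can be verified numerically for the $2 \times 2$ example above (a sketch; `A**2`, `A**3`, and `np.exp(A)` all act entrywise in NumPy):

```python
import numpy as np

A = np.array([[3.0, 2.0],
              [2.0, 2.0]])                    # positive: eigenvalues > 0
for M in (A**2, A**3, np.exp(A)):             # entrywise square, cube, exp
    print(np.linalg.eigvalsh(M).min() > 0)    # True, True, True
```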

26 Application: matrix completion problems. Given a symmetric matrix in which only some entries are specified (in the example shown, the known entries include $15$, $2$, $3$, and $13$), are there numbers $a$, $b$, and $c$ for the unspecified entries so that the completed matrix $A$ is positive? In this case, yes: $a = 7$, $b = 5$, and $c = 8$.
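As a hedged toy version of such a completion problem (not the matrix from the talk): the completion $\begin{pmatrix} 2 & a \\ a & 3 \end{pmatrix}$ is positive exactly when $a^2 \leq 6$, which a short scan confirms:

```python
import numpy as np

def completion_is_positive(a):
    """Fill the unknown off-diagonal entry of a toy 2 x 2 completion."""
    M = np.array([[2.0, a], [a, 3.0]])
    return np.linalg.eigvalsh(M).min() >= -1e-12

for a in (0.0, 2.0, np.sqrt(6.0), 2.5, 3.0):
    print(a, completion_is_positive(a))   # positive exactly when a**2 <= 6
```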

27 General Problem. For a fixed $n \times n$ matrix $B$, compute the Schur multiplier norm of $B$; that is, find the smallest constant $K_B$ such that $\|X \circ B\| \leq K_B \|X\|$ for every matrix $X$. Moreover, we want to have a computationally effective way to find $K_B$.

28 Schur (1911). If $B$ is a positive $n \times n$ matrix, then its Schur multiplier norm is its largest diagonal entry. If $\beta$ is the largest diagonal entry of $B$, then $\|B \circ I\| = \beta$, so $K_B \geq \beta$. Conversely, note that $\|A\| \leq \alpha$ if and only if $\begin{pmatrix} \alpha I & A \\ A^* & \alpha I \end{pmatrix} \geq 0$. For $\|A\| \leq 1$, both $\begin{pmatrix} B & B \\ B & B \end{pmatrix}$ and $\begin{pmatrix} I & A \\ A^* & I \end{pmatrix}$ are positive, so Schur's Theorem implies

$$\begin{pmatrix} B & B \\ B & B \end{pmatrix} \circ \begin{pmatrix} I & A \\ A^* & I \end{pmatrix} = \begin{pmatrix} B \circ I & B \circ A \\ (B \circ A)^* & B \circ I \end{pmatrix} \geq 0.$$

Since $B \circ I \leq \beta I$, it follows that $\begin{pmatrix} \beta I & B \circ A \\ (B \circ A)^* & \beta I \end{pmatrix} \geq 0$, that is, $\|B \circ A\| \leq \beta$, so $K_B \leq \beta$ and $K_B = \beta$.
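Schur's norm formula is also easy to probe numerically: for positive $B$ with largest diagonal entry $\beta$, the ratio $\|B \circ X\| / \|X\|$ never exceeds $\beta$, and $X = I$ attains it (a sketch of my own, using the operator 2-norm as the matrix norm):

```python
import numpy as np

rng = np.random.default_rng(6)
Y = rng.standard_normal((4, 4))
B = Y @ Y.T                              # a random positive matrix
beta = np.diag(B).max()                  # largest diagonal entry

ratios = [np.linalg.norm(B * X, 2) / np.linalg.norm(X, 2)
          for X in rng.standard_normal((2000, 4, 4))]

print(max(ratios) <= beta + 1e-10)       # True: K_B <= beta
print(np.isclose(np.linalg.norm(B * np.eye(4), 2), beta))  # attained at X = I
```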
