Sophomoric Matrix Multiplication


Sophomoric Matrix Multiplication
Carl C. Cowen, IUPUI (Indiana University Purdue University Indianapolis)
Universidad de Zaragoza, 3 July 2009

Linear algebra students learn that, for $m \times n$ matrices $A$, $B$, and $C$, matrix addition is $A + B = C$ if and only if $a_{ij} + b_{ij} = c_{ij}$. They expect matrix multiplication to work the same way: $AB = C$ if and only if $a_{ij} b_{ij} = c_{ij}$. But the professor says No! It is much more complicated than that! Today, I want to explain why this kind of multiplication not only is sensible but also is very practical, very interesting, and has many applications in mathematics and related subjects.

Definition. If $A$ and $B$ are $m \times n$ matrices, the Schur (or Hadamard, or naïve, or sophomoric) product of $A$ and $B$ is the $m \times n$ matrix $C = A \circ B$ with $c_{ij} = a_{ij} b_{ij}$.
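In NumPy this entrywise product is exactly what the `*` operator computes on arrays; a minimal sketch (NumPy assumed, not part of the original talk):

```python
import numpy as np

# Schur (entrywise) product: C = A o B with c_ij = a_ij * b_ij
A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[5.0, 6.0], [7.0, 8.0]])

C = A * B        # for NumPy arrays, * is elementwise, i.e., the Schur product
print(C)         # [[ 5. 12.], [21. 32.]]
```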

These ideas go back more than a century, to Moutard (1894), who didn't even notice he had proved anything(!), Hadamard (1899), and Schur (1911). Hadamard considered analytic functions $f(z) = \sum_{n=0}^{\infty} a_n z^n$ and $g(z) = \sum_{n=0}^{\infty} b_n z^n$ that have singularities at $\{\alpha_i\}$ and $\{\beta_j\}$ respectively. He proved that if $h(z) = \sum_{n=0}^{\infty} a_n b_n z^n$ has singularities $\{\gamma_k\}$, then $\{\gamma_k\} \subset \{\alpha_i \beta_j\}$.

This seems a little less surprising when you consider convolutions: let $f$ and $g$ be $2\pi$-periodic functions on $\mathbb{R}$, with Fourier coefficients
$$a_k = \int_0^{2\pi} e^{-ik\theta} f(\theta)\, \frac{d\theta}{2\pi} \quad\text{and}\quad b_k = \int_0^{2\pi} e^{-ik\theta} g(\theta)\, \frac{d\theta}{2\pi},$$
so that $f \sim \sum a_k e^{ik\theta}$ and $g \sim \sum b_k e^{ik\theta}$. If
$$h(\theta) = \int_0^{2\pi} f(\theta - t)\, g(t)\, \frac{dt}{2\pi},$$
then $h \sim \sum a_k b_k e^{ik\theta}$, and $f \geq 0$ and $g \geq 0$ imply $h \geq 0$.
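A discrete analogue of this convolution fact is easy to check numerically: the DFT of a circular convolution is the entrywise product of the DFTs. A sketch assuming NumPy, with the $1/n$ normalization mirroring the $d\theta/2\pi$ above:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
f = rng.standard_normal(n)
g = rng.standard_normal(n)

# Circular convolution h_j = (1/n) sum_t f_{j-t} g_t, indices mod n
h = np.array([sum(f[(j - t) % n] * g[t] for t in range(n)) for j in range(n)]) / n

# The "Fourier coefficients" multiply entrywise: h_hat = f_hat * g_hat
f_hat, g_hat, h_hat = np.fft.fft(f) / n, np.fft.fft(g) / n, np.fft.fft(h) / n
print(np.allclose(h_hat, f_hat * g_hat))  # True
```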

Schur's name is most often associated with the matrix product because he published the first theorem about this type of matrix multiplication.

Definition. A real (or complex) $n \times n$ matrix $A$ is called positive (or positive semidefinite) if $A = A^*$ and $\langle Ax, x \rangle \geq 0$ for all $x$ in $\mathbb{R}^n$ (or $\mathbb{C}^n$).

Then: for any $m \times n$ matrix $A$, both $AA^*$ and $A^*A$ are positive. Conversely, if $B$ is positive, then $B = AA^*$ for some $A$. In statistics, every variance-covariance matrix is positive.
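These facts are easy to test numerically. A sketch (NumPy assumed; `is_positive` is an illustrative helper, not from the talk):

```python
import numpy as np

def is_positive(M, tol=1e-10):
    """Positive in the talk's sense: M = M* and <Mx, x> >= 0 for all x."""
    return np.allclose(M, M.conj().T) and np.linalg.eigvalsh(M).min() >= -tol

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 4))
print(is_positive(A @ A.T))   # True: AA* is always positive
print(is_positive(A.T @ A))   # True: A*A is always positive
```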

Examples: $A = \begin{pmatrix} 1 & 2 \\ 2 & 3 \end{pmatrix}$ is NOT positive:
$$\begin{pmatrix} 1 & 2 \\ 2 & 3 \end{pmatrix} \begin{pmatrix} 2 \\ -1 \end{pmatrix} = \begin{pmatrix} 0 \\ 1 \end{pmatrix} \quad\text{and}\quad \left\langle \begin{pmatrix} 0 \\ 1 \end{pmatrix}, \begin{pmatrix} 2 \\ -1 \end{pmatrix} \right\rangle = -1.$$
$B = \begin{pmatrix} 1 & 0 \\ 0 & 2 \end{pmatrix}$ and $C = \begin{pmatrix} 5 & 4 \\ 4 & 5 \end{pmatrix}$ are positive, but $BC = \begin{pmatrix} 5 & 4 \\ 8 & 10 \end{pmatrix}$ is not.
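Checking the example matrices the same way (a sketch, assuming NumPy):

```python
import numpy as np

def is_positive(M, tol=1e-10):
    return np.allclose(M, M.T) and np.linalg.eigvalsh(M).min() >= -tol

A = np.array([[1.0, 2.0], [2.0, 3.0]])
B = np.array([[1.0, 0.0], [0.0, 2.0]])
C = np.array([[5.0, 4.0], [4.0, 5.0]])

print(is_positive(A))                   # False: <Ax, x> = -1 at x = (2, -1)
print(is_positive(B), is_positive(C))   # True True
print(is_positive(B @ C))               # False: BC is not even symmetric
```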

Schur Product Theorem (1911). If $A$ and $B$ are positive $n \times n$ matrices, then $A \circ B$ is positive also.

Applications:

Experimental design: if $A$ and $B$ are variance-covariance matrices, then $A \circ B$ is positive also.

P.D.E.'s: let $\Omega$ be a domain in $\mathbb{R}^2$ and let $L$ be the differential operator
$$Lu = a_{11} \frac{\partial^2 u}{\partial x^2} + 2a_{12} \frac{\partial^2 u}{\partial x\,\partial y} + a_{22} \frac{\partial^2 u}{\partial y^2} + b_1 \frac{\partial u}{\partial x} + b_2 \frac{\partial u}{\partial y} + cu.$$
$L$ is called elliptic if $\begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix}$ is positive definite.

Weak Minimum Principle (Moutard, 1894). If $L$ is elliptic, $c < 0$, and $Lu \leq 0$ in $\Omega$, then $u$ cannot have a negative minimum value in $\Omega$.

Proof, by contradiction: if $u$ has a minimum at $(x_0, y_0)$ in $\Omega$ and $u(x_0, y_0) < 0$, then
$$\frac{\partial u}{\partial x}(x_0, y_0) = \frac{\partial u}{\partial y}(x_0, y_0) = 0,$$
so at $(x_0, y_0)$,
$$0 \geq Lu = a_{11} \frac{\partial^2 u}{\partial x^2} + 2a_{12} \frac{\partial^2 u}{\partial x\,\partial y} + a_{22} \frac{\partial^2 u}{\partial y^2} + b_1 \frac{\partial u}{\partial x} + b_2 \frac{\partial u}{\partial y} + cu = a_{11} \frac{\partial^2 u}{\partial x^2} + 2a_{12} \frac{\partial^2 u}{\partial x\,\partial y} + a_{22} \frac{\partial^2 u}{\partial y^2} + cu$$
$$= \left\langle \left( \begin{pmatrix} a_{11} & a_{12} \\ a_{12} & a_{22} \end{pmatrix} \circ \begin{pmatrix} \partial^2 u/\partial x^2 & \partial^2 u/\partial x\,\partial y \\ \partial^2 u/\partial x\,\partial y & \partial^2 u/\partial y^2 \end{pmatrix} \right) \begin{pmatrix} 1 \\ 1 \end{pmatrix}, \begin{pmatrix} 1 \\ 1 \end{pmatrix} \right\rangle + cu > 0.$$
The coefficient matrix is positive because $L$ is elliptic, the Hessian is positive because $u$ has a minimum at $(x_0, y_0)$, so by Schur's theorem the inner product term is nonnegative, and $cu > 0$ because $c < 0$ and $u(x_0, y_0) < 0$. Contradiction!
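The crucial step is the identity $a_{11} u_{xx} + 2a_{12} u_{xy} + a_{22} u_{yy} = \langle (A \circ H)e, e \rangle$ with $e = (1,1)^t$, $A$ the coefficient matrix, and $H$ the Hessian; by Schur's theorem this is nonnegative when both are positive. A numerical sketch (NumPy assumed):

```python
import numpy as np

rng = np.random.default_rng(3)
e = np.ones(2)
for _ in range(100):
    A = rng.standard_normal((2, 2)); A = A @ A.T + 0.1 * np.eye(2)  # elliptic coefficients
    H = rng.standard_normal((2, 2)); H = H @ H.T                    # Hessian at a minimum
    lhs = A[0, 0] * H[0, 0] + 2 * A[0, 1] * H[0, 1] + A[1, 1] * H[1, 1]
    assert np.isclose(lhs, e @ (A * H) @ e)   # the identity
    assert lhs >= -1e-12                      # nonnegative, as the proof needs
print("identity and nonnegativity hold in all trials")
```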

Fejér's Uniqueness Theorem. If $L$ is elliptic and $c < 0$ on $\Omega$, then there is at most one solution of the boundary value problem
$$Lu = f \ \text{ in } \Omega, \qquad u = g \ \text{ on } \partial\Omega$$
that is continuous on $\overline{\Omega}$ and smooth in $\Omega$.

Proof: if $u_1$ and $u_2$ are both solutions with $u_1 \not\equiv u_2$, then since $u_1 = g = u_2$ on $\partial\Omega$, we have $u_1 - u_2 = 0 = u_2 - u_1$ on $\partial\Omega$, so either $u_1 - u_2$ or $u_2 - u_1$ must have a negative minimum value in $\Omega$. But $Lu_1 = f = Lu_2$ in $\Omega$, so $L(u_1 - u_2) \equiv 0 \equiv L(u_2 - u_1)$ in $\Omega$. By Moutard's theorem, neither can have a negative minimum value in $\Omega$, so we must have $u_1 \equiv u_2$.

Goal: prove Schur's theorem.

Recall that $(AB)^t = B^t A^t$ and $(AB)^* = B^* A^*$. We do the case of real scalars; the complex case is the same, but most mathematicians are less comfortable with complex scalars. We use column vectors, so
$$\langle x, y \rangle = x_1 y_1 + x_2 y_2 + x_3 y_3 + \cdots + x_n y_n = x^t y.$$

Lemma. An $n \times n$ matrix $A$ is positive if and only if
$$A = v_1 v_1^t + v_2 v_2^t + v_3 v_3^t + \cdots + v_k v_k^t$$
for some vectors $v_1, v_2, v_3, \ldots, v_k$ with $k \leq n$.

($\Leftarrow$) Let $x$ be in $\mathbb{R}^n$ and $A = v_1 v_1^t + v_2 v_2^t + \cdots + v_k v_k^t$ for vectors $v_1, v_2, \ldots, v_k$. Then
$$\langle Ax, x \rangle = \langle (v_1 v_1^t + v_2 v_2^t + \cdots + v_k v_k^t)x, x \rangle = \sum_j \langle v_j v_j^t x, x \rangle = \sum_j (v_j v_j^t x)^t x = \sum_j x^t v_j v_j^t x = \sum_j (v_j^t x)^t (v_j^t x) = \sum_j \langle v_j, x \rangle^2 \geq 0.$$

($\Rightarrow$) Let $A$ be positive. Since $A = A^t$, there is an orthonormal basis for $\mathbb{R}^n$ consisting of eigenvectors of $A$; call it $w_1, w_2, \ldots, w_n$. For each $j$, let $\alpha_j$ be the corresponding eigenvalue, that is, $Aw_j = \alpha_j w_j$. Because $A$ is positive, $\alpha_j \geq 0$ for all $j$. We suppose they have been numbered so that $\alpha_j > 0$ for $1 \leq j \leq k$ and $\alpha_j = 0$ for $j \geq k + 1$. For $1 \leq j \leq k$, choose $\beta_j > 0$ with $\alpha_j = \beta_j^2$ and let $v_j = \beta_j w_j$. Then we show that
$$A = v_1 v_1^t + v_2 v_2^t + \cdots + v_k v_k^t = \alpha_1 w_1 w_1^t + \alpha_2 w_2 w_2^t + \cdots + \alpha_k w_k w_k^t.$$

To show that $A = \alpha_1 w_1 w_1^t + \alpha_2 w_2 w_2^t + \cdots + \alpha_k w_k w_k^t$, we will show that for each $x$ in $\mathbb{R}^n$, $Ax$ and $(\alpha_1 w_1 w_1^t + \alpha_2 w_2 w_2^t + \cdots + \alpha_k w_k w_k^t)x$ are the same. If $x$ is a vector in $\mathbb{R}^n$, then $x$ is a linear combination of the $w_j$'s, say $x = x_1 w_1 + x_2 w_2 + \cdots + x_n w_n$. Then $Ax$ is given by
$$Ax = A(x_1 w_1 + x_2 w_2 + \cdots + x_n w_n) = x_1 Aw_1 + x_2 Aw_2 + \cdots + x_n Aw_n,$$

which is
$$Ax = x_1 Aw_1 + x_2 Aw_2 + \cdots + x_n Aw_n = x_1 \alpha_1 w_1 + \cdots + x_k \alpha_k w_k + \cdots + x_n \alpha_n w_n = x_1 \alpha_1 w_1 + \cdots + x_k \alpha_k w_k + \cdots + x_n \cdot 0 \cdot w_n = x_1 \alpha_1 w_1 + x_2 \alpha_2 w_2 + \cdots + x_k \alpha_k w_k.$$
Notice that $w_i^t w_j = \langle w_i, w_j \rangle = \delta_{ij}$.

Similarly to the calculation above,
$$(v_1 v_1^t + v_2 v_2^t + \cdots + v_k v_k^t)x = (\alpha_1 w_1 w_1^t + \alpha_2 w_2 w_2^t + \cdots + \alpha_k w_k w_k^t)x = \Big(\sum_i \alpha_i w_i w_i^t\Big)\Big(\sum_j x_j w_j\Big) = \sum_{i,j} \alpha_i x_j w_i (w_i^t w_j) = \sum_i \alpha_i x_i w_i = x_1 \alpha_1 w_1 + x_2 \alpha_2 w_2 + \cdots + x_k \alpha_k w_k.$$
Thus, for each $x$, $Ax = (v_1 v_1^t + v_2 v_2^t + \cdots + v_k v_k^t)x$, and so
$$A = v_1 v_1^t + v_2 v_2^t + \cdots + v_k v_k^t.$$
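The ($\Rightarrow$) construction is exactly what an eigendecomposition gives numerically; a sketch assuming NumPy (`rank_one_decomposition` is an illustrative name):

```python
import numpy as np

def rank_one_decomposition(A, tol=1e-10):
    """Return v_1, ..., v_k with A = sum_j v_j v_j^t, as in the lemma's proof."""
    alphas, W = np.linalg.eigh(A)   # A w_j = alpha_j w_j, orthonormal columns w_j
    return [np.sqrt(a) * W[:, j] for j, a in enumerate(alphas) if a > tol]

rng = np.random.default_rng(4)
X = rng.standard_normal((4, 2))
A = X @ X.T                          # positive, rank 2
vs = rank_one_decomposition(A)
print(len(vs))                       # 2, so k <= n
print(np.allclose(A, sum(np.outer(v, v) for v in vs)))  # True
```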

Lemma. If $u$ and $v$ are vectors in $\mathbb{R}^n$, then $(uu^t) \circ (vv^t) = (u \circ v)(u \circ v)^t$.

If $u = (u_1, u_2, \ldots, u_n)^t$, then
$$uu^t = \begin{pmatrix} u_1 u_1 & u_1 u_2 & \cdots & u_1 u_n \\ u_2 u_1 & u_2 u_2 & \cdots & u_2 u_n \\ \vdots & \vdots & \ddots & \vdots \\ u_n u_1 & u_n u_2 & \cdots & u_n u_n \end{pmatrix}.$$
Thus,
$$(uu^t) \circ (vv^t) = \begin{pmatrix} u_1 u_1 & \cdots & u_1 u_n \\ \vdots & \ddots & \vdots \\ u_n u_1 & \cdots & u_n u_n \end{pmatrix} \circ \begin{pmatrix} v_1 v_1 & \cdots & v_1 v_n \\ \vdots & \ddots & \vdots \\ v_n v_1 & \cdots & v_n v_n \end{pmatrix},$$

which is
$$(uu^t) \circ (vv^t) = \begin{pmatrix} u_1 u_1 v_1 v_1 & u_1 u_2 v_1 v_2 & \cdots & u_1 u_n v_1 v_n \\ u_2 u_1 v_2 v_1 & u_2 u_2 v_2 v_2 & \cdots & u_2 u_n v_2 v_n \\ \vdots & \vdots & \ddots & \vdots \\ u_n u_1 v_n v_1 & u_n u_2 v_n v_2 & \cdots & u_n u_n v_n v_n \end{pmatrix} = \begin{pmatrix} u_1 v_1 u_1 v_1 & u_1 v_1 u_2 v_2 & \cdots & u_1 v_1 u_n v_n \\ u_2 v_2 u_1 v_1 & u_2 v_2 u_2 v_2 & \cdots & u_2 v_2 u_n v_n \\ \vdots & \vdots & \ddots & \vdots \\ u_n v_n u_1 v_1 & u_n v_n u_2 v_2 & \cdots & u_n v_n u_n v_n \end{pmatrix} = (u \circ v)(u \circ v)^t.$$
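This rank-one identity is a one-liner to confirm (NumPy assumed):

```python
import numpy as np

rng = np.random.default_rng(5)
u = rng.standard_normal(4)
v = rng.standard_normal(4)

lhs = np.outer(u, u) * np.outer(v, v)   # (u u^t) o (v v^t), Schur product
rhs = np.outer(u * v, u * v)            # (u o v)(u o v)^t, rank one
print(np.allclose(lhs, rhs))            # True
```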

Schur Product Theorem (1911). If $A$ and $B$ are positive $n \times n$ matrices, then $A \circ B$ is positive also.

Proof: There are vectors $u_1, u_2, \ldots, u_k$ so that $A = u_1 u_1^t + u_2 u_2^t + \cdots + u_k u_k^t$, and vectors $v_1, v_2, \ldots, v_l$ so that $B = v_1 v_1^t + v_2 v_2^t + \cdots + v_l v_l^t$. Now,
$$A \circ B = \Big(\sum_i u_i u_i^t\Big) \circ \Big(\sum_j v_j v_j^t\Big) = \sum_{i,j} (u_i u_i^t) \circ (v_j v_j^t) = \sum_{i,j} (u_i \circ v_j)(u_i \circ v_j)^t \geq 0.$$
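A quick empirical check of the theorem itself (a sketch, NumPy assumed): random positive matrices keep their positivity under the Schur product, in contrast to the ordinary product $BC$ in the earlier example.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5
for _ in range(100):
    X, Y = rng.standard_normal((n, n)), rng.standard_normal((n, n))
    A, B = X @ X.T, Y @ Y.T                           # positive by construction
    assert np.linalg.eigvalsh(A * B).min() >= -1e-10  # A o B is positive
print("A o B stayed positive in all trials")
```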

Corollary. If $A = (a_{ij})$ is a positive $n \times n$ matrix, then $(a_{ij}^2)$, $(a_{ij}^3)$, and $(e^{a_{ij}})$ are positive also. (For the entrywise exponential, note that $e^{a_{ij}} = \sum_n a_{ij}^n / n!$ exhibits it as a limit of sums of Schur products of positive matrices.) For example,
$$\begin{pmatrix} 3 & 2 \\ 2 & 2 \end{pmatrix} \text{ is positive, so } \begin{pmatrix} 9 & 4 \\ 4 & 4 \end{pmatrix}, \ \begin{pmatrix} 27 & 8 \\ 8 & 8 \end{pmatrix}, \text{ and } \begin{pmatrix} e^3 & e^2 \\ e^2 & e^2 \end{pmatrix} \text{ are also positive!}$$
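Numerically (NumPy assumed; `**` and `np.exp` act entrywise on arrays):

```python
import numpy as np

A = np.array([[3.0, 2.0], [2.0, 2.0]])
for M in (A, A**2, A**3, np.exp(A)):          # entrywise powers and exponential
    print(np.linalg.eigvalsh(M).min() >= 0)   # True for all four matrices
```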

Application: matrix completion problems. Are there numbers $a$, $b$, and $c$ so that
$$A = \begin{pmatrix} 15 & 2 & a & b \\ 2 & 7 & 2 & c \\ a & 2 & 17 & 3 \\ b & c & 3 & 13 \end{pmatrix}$$
is positive? In this case, yes: $a = 7$, $b = 5$, and $c = 8$.
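The claimed completion is easy to verify (a sketch, assuming NumPy):

```python
import numpy as np

a, b, c = 7.0, 5.0, 8.0
A = np.array([[15, 2,  a,  b],
              [ 2, 7,  2,  c],
              [ a, 2, 17,  3],
              [ b, c,  3, 13]])
print(np.linalg.eigvalsh(A).min() >= 0)  # True: this completion is positive
```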

General Problem. For $B$ a fixed $n \times n$ matrix, compute the Schur multiplier norm of $B$, that is, find the smallest constant $K_B$ such that
$$\|X \circ B\| \leq K_B \|X\|.$$
Moreover, we want a computationally effective way to find $K_B$.

Schur (1911). If $B$ is a positive $n \times n$ matrix, then its Schur multiplier norm is its largest diagonal entry. If $\beta$ is the largest diagonal entry of $B$, then $\|B \circ I\| = \beta$, so $K_B \geq \beta$. Note that $\|A\| \leq \alpha$ if and only if
$$\begin{pmatrix} \alpha I & A \\ A^* & \alpha I \end{pmatrix} \geq 0.$$
Since $B$ is positive, $\begin{pmatrix} B & B \\ B & B \end{pmatrix} \geq 0$, so if $\|A\| \leq 1$, Schur's Theorem implies
$$\begin{pmatrix} B & B \\ B & B \end{pmatrix} \circ \begin{pmatrix} I & A \\ A^* & I \end{pmatrix} = \begin{pmatrix} B \circ I & B \circ A \\ (B \circ A)^* & B \circ I \end{pmatrix} \geq 0,$$
and since $\beta I - B \circ I \geq 0$, also
$$\begin{pmatrix} \beta I & B \circ A \\ (B \circ A)^* & \beta I \end{pmatrix} \geq 0,$$
so $\|B \circ A\| \leq \beta$ whenever $\|A\| \leq 1$. Hence $K_B = \beta$.
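Schur's bound can be watched in action (a sketch, NumPy assumed; `np.linalg.norm(·, 2)` is the operator norm):

```python
import numpy as np

rng = np.random.default_rng(6)
Y = rng.standard_normal((5, 5))
B = Y @ Y.T                          # positive
beta = np.diag(B).max()              # largest diagonal entry

worst = 0.0
for _ in range(200):
    X = rng.standard_normal((5, 5))
    worst = max(worst, np.linalg.norm(B * X, 2) / np.linalg.norm(X, 2))
print(worst <= beta)                                       # True: ||B o X|| <= beta ||X||
print(np.isclose(np.linalg.norm(B * np.eye(5), 2), beta))  # True: the bound is attained at X = I
```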