Orthogonal Projection and Least Squares
Prof. Philip Pennance. Version: December 12, 2016
1. Let $V$ be a vector space. A linear transformation $P : V \to V$ is called a projection if it is idempotent, that is, if $P^2 = P$.

2. Exercise: If $P$ is a projection then so is $Q = I - P$. Moreover, $\operatorname{im} Q = \ker P$ and $\ker Q = \operatorname{im} P$.

3. Let $P : V \to V$ be a projection. Then $V$ is a direct sum: $V = \operatorname{im} P \oplus \ker P$.
Proof. Let $v \in V$. Then $v = Pv + (I - P)v \in \operatorname{im} P + \ker P$. Now let $y \in \operatorname{im} P \cap \ker P$. Since $y \in \operatorname{im} P$ there exists $z \in V$ such that $y = Pz$. Since $y \in \ker P$, $Py = P^2 z = 0$. By idempotency $P^2 z = Pz$, and so $y = 0$.

4. If $W$ is a complete inner product space then a projection $P$ in $W$ is called orthogonal if $\operatorname{im} P$ and $\ker P$ are orthogonal subspaces.

5. Let $P$ be a projection. The following are equivalent:
(a) $P$ is self-adjoint.
(b) $P$ is orthogonal.
Proof: Let $P$ be orthogonal, so that $\operatorname{im} P$ and $\ker P$ are orthogonal. Since $Px \in \operatorname{im} P$ and $y - Py \in \ker P$,
\[ \langle Px, y - Py \rangle = 0 = \langle x - Px, Py \rangle. \]
Hence $\langle x, Py \rangle = \langle Px, Py \rangle = \langle Px, y \rangle$, and thus $P$ is self-adjoint. Conversely, if $P$ is self-adjoint,
\[ \langle Px, y - Py \rangle = \langle P^2 x, y - Py \rangle = \langle Px, P(I - P)y \rangle = \langle Px, (P - P^2)y \rangle = 0, \]
so $\operatorname{im} P$ and $\ker P$ are orthogonal.
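To make items 1-5 concrete, here is a minimal NumPy sketch (ours, not part of the original notes; the two matrices are made-up examples). An idempotent but non-symmetric matrix projects obliquely, while a symmetric idempotent matrix is an orthogonal projection:

```python
import numpy as np

# Oblique projection onto the x-axis along the line y = x:
# idempotent (P @ P == P) but not symmetric, so im P and ker P
# are not orthogonal subspaces.
P_oblique = np.array([[1.0, -1.0],
                      [0.0,  0.0]])
assert np.allclose(P_oblique @ P_oblique, P_oblique)   # idempotent
assert not np.allclose(P_oblique, P_oblique.T)         # not self-adjoint

# Q = I - P is again a projection, with im Q = ker P (item 2).
Q = np.eye(2) - P_oblique
assert np.allclose(Q @ Q, Q)

# Orthogonal projection onto the x-axis: idempotent AND symmetric (item 5).
P_orth = np.array([[1.0, 0.0],
                   [0.0, 0.0]])
assert np.allclose(P_orth @ P_orth, P_orth)
assert np.allclose(P_orth, P_orth.T)

# im P_orth is the x-axis, ker P_orth the y-axis: orthogonal subspaces.
print(P_orth @ np.array([3.0, 4.0]))   # [3. 0.]
```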
6. Claim: Let $W$ be a Hilbert space. If $U$ is a closed (in the norm topology) subspace of $W$, then the orthogonal projection onto $U$ exists.
Proof [adapted from Wikipedia]: Let $x \in W$ and define $f(u) = \|x - u\|$, $u \in U$. The infimum of $f$ exists and, by completeness, $f$ attains its minimum at some $u \in U$. Let $Px = u$. Clearly $P^2 = P$. Now let $e = x - Px$. Then for any nonzero vector $u \in U$:
\[ \left\| e - \frac{\langle e, u \rangle}{\|u\|^2}\, u \right\|^2 = \|e\|^2 - \frac{\langle e, u \rangle^2}{\|u\|^2}. \]
From this it follows that unless $\langle e, u \rangle = 0$, the vector
\[ w = Px + \frac{\langle e, u \rangle}{\|u\|^2}\, u \]
satisfies $\|x - w\| < \|x - Px\|$, contradicting minimality. Thus, for all $u \in U$ and $x \in W$, $\langle x - Px, u \rangle = 0$. In particular, $\langle x - Px, Px \rangle = 0$. Also, for any $u \in U$,
\[ \langle (x + y) - P(x + y), u \rangle = 0 \quad \text{and} \quad \langle (x - Px) + (y - Py), u \rangle = 0. \]
Subtraction yields $\langle Px + Py - P(x + y), u \rangle = 0$. Finally, choosing $u = Px + Py - P(x + y)$ shows that $Px + Py = P(x + y)$. By a similar argument $\lambda Px = P(\lambda x)$ for every scalar $\lambda$. Hence $P$ is linear.

7. Orthogonal Projection: Special Case. Let $a$ be a nonzero column vector in $\mathbb{R}^n$ with span $C(a) = \{ta : t \in \mathbb{R}\}$, and let $x \in \mathbb{R}^n$. The orthogonal projection of $x$ on $C(a)$ is the unique point $P_a x \in C(a)$ which satisfies
\[ \langle x - P_a x, a \rangle = 0. \]
Writing $P_a x = ta$ and solving for $t$ yields:
\[ P_a x = \frac{\langle x, a \rangle}{\langle a, a \rangle}\, a. \]

8. Exercise: The matrix of $P_a$ relative to the standard basis is $P_a = a (a^T a)^{-1} a^T$.

9. Example: In statistics, the mean of a data vector $y$ is determined by projection onto the vector $\mathbf{1} = (1, 1, \ldots, 1)^T \in \mathbb{R}^n$. Specifically:
\[ P_{\mathbf{1}}(y) = \frac{\langle y, \mathbf{1} \rangle}{\langle \mathbf{1}, \mathbf{1} \rangle}\, \mathbf{1} = \frac{y_1 + \cdots + y_n}{n}\, \mathbf{1} = \bar{y}\,\mathbf{1}. \]
We write $\bar{\mathbf{y}}$ for the mean vector $\bar{y}\mathbf{1}$.

10. Claim: Let $V$ be a finite dimensional inner product space and $W$ a proper subspace of $V$ with basis $(e_1, \ldots, e_k)$. Let $g$ be the metric matrix, $(g_{ij}) = \langle e_i, e_j \rangle$. If $x \in V$ then the orthogonal projection of $x$ on $W$ is given by
\[ P_W x = c_1 e_1 + \cdots + c_k e_k, \quad \text{where} \quad c = g^{-1} \begin{pmatrix} \langle e_1, x \rangle \\ \langle e_2, x \rangle \\ \vdots \\ \langle e_k, x \rangle \end{pmatrix}. \]
Proof: Since $x - P_W x$ is orthogonal to $W$, write
\[ x = c_1 e_1 + \cdots + c_k e_k + (x - P_W x). \tag{1} \]
Taking inner products of (1) with each of the $e_j$ (the last term drops out):
\[ \sum_{i=1}^{k} \langle e_j, e_i \rangle\, c_i = \langle e_j, x \rangle. \]
In matrix form:
\[ g c = \begin{pmatrix} \langle e_1, x \rangle \\ \langle e_2, x \rangle \\ \vdots \\ \langle e_k, x \rangle \end{pmatrix}. \]
Hence
\[ c = g^{-1} \begin{pmatrix} \langle e_1, x \rangle \\ \langle e_2, x \rangle \\ \vdots \\ \langle e_k, x \rangle \end{pmatrix}. \tag{2} \]
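A short NumPy sketch of items 7-10 (the function and variable names are ours): projection onto a single vector, the mean as projection onto $\mathbf{1}$, and the metric-matrix formula $c = g^{-1}(\langle e_i, x\rangle)$.

```python
import numpy as np

def project_onto_vector(x, a):
    """Orthogonal projection of x onto span{a}: (<x,a>/<a,a>) a."""
    return (x @ a) / (a @ a) * a

def project_onto_subspace(x, E):
    """Projection onto the span of the columns e_1,...,e_k of E, using
    the metric matrix g_ij = <e_i, e_j>:  c = g^{-1} (E^T x)."""
    g = E.T @ E                      # metric (Gram) matrix
    c = np.linalg.solve(g, E.T @ x)  # projection coefficients
    return E @ c                     # P_W x = c_1 e_1 + ... + c_k e_k

y = np.array([1.0, 2.0, 2.0, 3.0])
ones = np.ones_like(y)

# Projection onto 1 recovers the mean vector ybar * 1 (item 9).
print(project_onto_vector(y, ones))   # [2. 2. 2. 2.]

# For an orthonormal basis g = I, and the formula reduces to
# sum_i <e_i, x> e_i (special case (a) below).
E = np.column_stack([ones / 2, np.array([1.0, -1.0, 1.0, -1.0]) / 2])
x = np.array([4.0, 0.0, 2.0, 2.0])
print(project_onto_subspace(x, E))    # [3. 1. 3. 1.]
```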
11. Special Cases
(a) If the basis is orthonormal, then $g$ is the $k \times k$ identity matrix $I$, so that $c_i = \langle e_i, x \rangle$. In this case:
\[ P_W x = \sum_{i=1}^{k} \langle e_i, x \rangle\, e_i. \]
(b) If the basis is merely orthogonal, then $g$ is diagonal and easily inverted to give
\[ c_i = \frac{\langle e_i, x \rangle}{\langle e_i, e_i \rangle}. \]

12. Let $V = \mathbb{R}^m$ and let $W \subseteq V$ be the column space $C(A)$ of an $m \times n$ real matrix $A$ with independent columns $A_1, A_2, \ldots, A_n$, and let $y \in \mathbb{R}^m$. Then (2) for projection onto $W$ takes the form
\[ g c = \begin{pmatrix} \langle A_1, y \rangle \\ \langle A_2, y \rangle \\ \vdots \\ \langle A_n, y \rangle \end{pmatrix}. \]
Since
\[ \begin{pmatrix} \langle (A^T)_1, y \rangle \\ \langle (A^T)_2, y \rangle \\ \vdots \\ \langle (A^T)_n, y \rangle \end{pmatrix} = A^T y \quad \text{and} \quad g(i,j) = \langle A_i, A_j \rangle = \langle (A^T)_i, A_j \rangle = A^T A(i,j), \]
the projection coefficient vector $c$ is given by
\[ A^T A\, c = A^T y. \]
In statistics, this is called the normal equation. The orthogonal projection of $y$ onto $W = C(A)$ is
\[ P_W y = \sum c_i A_i = Ac = A (A^T A)^{-1} A^T y. \]
Hence, projection onto the column space $C(A)$ has matrix
\[ P = A (A^T A)^{-1} A^T. \tag{3} \]

13. Equation (3) can be proven directly using properties of the row and column spaces.

14. Lemma: If the columns $A_i$ of $A$ are independent then the (square) matrix $A^T A$ is invertible.
Proof: It suffices to show that $A^T A x = 0$ has the unique solution $x = 0$:
\[ A^T A x = 0 \implies x^T A^T A x = 0 \implies (Ax)^T Ax = 0 \implies Ax = \sum x_i A_i = 0. \]
Since the $A_i$ are independent it must be that $x = 0$.

15. Claim: Let $A \in \mathbb{R}^{m \times n}$ have independent columns. Then the matrix $P = A(A^T A)^{-1} A^T$ determines an orthogonal projection onto $C(A)$.
Proof: Notice $P^2 = P$, so $P$ is indeed a projection. Since $PA = A$, the subspace $C(A)$ is $P$-invariant. Moreover, if $x \in \mathbb{R}^m$ and $y = (A^T A)^{-1} A^T x$, then $Px = Ay = \sum y_i A_i \in C(A)$, and so $P$ is a projection onto $C(A)$. Since the columns of $A$ are independent, $A$ has a left inverse. From this fact and the preceding lemma we have:
\[ b \in \ker P \iff Pb = 0 \iff A (A^T A)^{-1} A^T b = 0 \iff (A^T A)^{-1} A^T b = 0 \iff A^T b = 0 \iff b \in N(A^T). \]
But the left nullspace $N(A^T)$ is orthogonal to the column space $C(A)$. Since $\operatorname{im} P$ and $\ker P$ are orthogonal subspaces, it follows that the projection $P$ is orthogonal.
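Items 12-15 can be checked numerically. The sketch below (with an arbitrary example matrix $A$ of ours) builds $P = A(A^TA)^{-1}A^T$ and verifies the properties used in the claim:

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])          # independent columns

G = A.T @ A                          # A^T A, invertible by the lemma
P = A @ np.linalg.inv(G) @ A.T       # projection matrix onto C(A)

assert np.allclose(P @ P, P)         # idempotent: a projection
assert np.allclose(P, P.T)           # self-adjoint: orthogonal (item 5)
assert np.allclose(P @ A, A)         # C(A) is P-invariant

# Any b in the left nullspace N(A^T) is killed by P.
b = np.array([1.0, -2.0, 1.0])       # A^T b = 0 for this particular A
assert np.allclose(A.T @ b, 0)
assert np.allclose(P @ b, 0)
```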
16. Equation (3) can also be obtained geometrically as follows. Let $A \in \mathbb{R}^{m \times n}$ have independent columns, and let the matrix $P$ represent orthogonal projection onto $C(A)$. Let $y \in \mathbb{R}^m$ and $\hat{y} = Py$. Since $\hat{y} \in C(A)$ there exists $\hat{x}$ such that $A\hat{x} = \hat{y}$. By orthogonality $(A\hat{x} - y) \perp C(A)$, and $I - P$ is the orthogonal projection onto $N(A^T)$. Thus if $y \in \mathbb{R}^m$, $\hat{y} = Py$, and $e = y - \hat{y}$, then the residual vector $e$ belongs to the left nullspace.

[Diagram: $y \in \mathbb{R}^m$ decomposed as $y = \hat{y} + e$, with $\hat{y} = Py \in C(A)$ and $e \in N(A^T)$.]

But $C(A) \perp N(A^T)$, hence $A^T (A\hat{x} - y) = 0$. Since $A^T A$ is invertible, it follows that $\hat{x} = (A^T A)^{-1} A^T y$. Thus the projection of $y$ on $C(A)$ is $\hat{y} = A\hat{x}$, where $\hat{x}$ is the solution of
\[ A^T A \hat{x} = A^T y. \]

17. The action of an arbitrary $m \times n$ real matrix is illustrated in the following diagram (due to Strang).

[Diagram: $\mathbb{R}^n$ splits into the row space $R(A)$ of dimension $r$ and the nullspace $N(A)$ of dimension $n - r$; $x = x_r + x_n$, with $Ax = Ax_r = b$ and $Ax_n = 0$. On the right, $\mathbb{R}^m$ splits into $C(A)$ of dimension $r$ and $N(A^T)$ of dimension $m - r$, with $b \in C(A)$.]

When $A$ has independent columns, the nullspace $N(A)$ is trivial and $\mathbb{R}^m = C(A) \oplus N(A^T)$.

Example: Consider the problem of finding the line $y = b + mx$ which best fits the points $(1, 1)$, $(2, 2)$ and $(3, 2)$. An exact fit would require that
\[ b + 1m = 1, \qquad b + 2m = 2, \qquad b + 3m = 2, \]
or in matrix form:
\[ \underbrace{\begin{pmatrix} 1 & 1 \\ 1 & 2 \\ 1 & 3 \end{pmatrix}}_{A} \underbrace{\begin{pmatrix} b \\ m \end{pmatrix}}_{x} = \underbrace{\begin{pmatrix} 1 \\ 2 \\ 2 \end{pmatrix}}_{y}. \]
This system has no solution since $y \notin C(A)$. The normal equation $A^T A \hat{x} = A^T y$ reads
\[ \begin{pmatrix} 3 & 6 \\ 6 & 14 \end{pmatrix} \begin{pmatrix} \hat{b} \\ \hat{m} \end{pmatrix} = \begin{pmatrix} 5 \\ 11 \end{pmatrix}. \]
Solving for the projection coefficients:
\[ \begin{pmatrix} \hat{b} \\ \hat{m} \end{pmatrix} = \begin{pmatrix} 3 & 6 \\ 6 & 14 \end{pmatrix}^{-1} \begin{pmatrix} 5 \\ 11 \end{pmatrix} = \begin{pmatrix} 2/3 \\ 1/2 \end{pmatrix}. \]
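Before computing the projection, these coefficients can be checked numerically (a sketch of ours; `np.linalg.lstsq` solves the same least-squares problem via an SVD-based routine):

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])
y = np.array([1.0, 2.0, 2.0])

# Solve the normal equation A^T A xhat = A^T y directly.
xhat = np.linalg.solve(A.T @ A, A.T @ y)
print(xhat)                          # approx [0.6667 0.5] = (2/3, 1/2)

# The same answer from a general least-squares solver.
xhat_lstsq, *_ = np.linalg.lstsq(A, y, rcond=None)
assert np.allclose(xhat, xhat_lstsq)
```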
The projection of $y$ on the column space of $A$ is
\[ \hat{y} = A\hat{x} = \begin{pmatrix} 7/6 \\ 5/3 \\ 13/6 \end{pmatrix}, \]
or equivalently
\[ \hat{b} + 1\hat{m} = 7/6, \qquad \hat{b} + 2\hat{m} = 5/3, \qquad \hat{b} + 3\hat{m} = 13/6. \]
It follows that the regression line $\hat{y} = \hat{b} + \hat{m}x$ contains the points $(1, 7/6)$, $(2, 5/3)$, and $(3, 13/6)$. Since the projection $\hat{y}$ is the point in $C(A)$ which minimizes $\|\hat{y} - y\|$, it follows that the sum of the squares of the vertical distances of the data points $(1, 1)$, $(2, 2)$ and $(3, 2)$ from the regression line is minimized. For this reason, the above procedure is known as the method of least squares. The vector
\[ e = y - \hat{y} = \begin{pmatrix} -1/6 \\ 1/3 \\ -1/6 \end{pmatrix} \]
is called the residual vector and is orthogonal to $C(A)$.

20. The Mean. The simplest case of least squares uses a model spanned by the single vector $\mathbf{1} = (1, 1, \ldots, 1)^T \in \mathbb{R}^n$. Let $y = (y_1, y_2, \ldots, y_n)^T$. As noted previously, the projection of $y$ on the vector $\mathbf{1}$ is just the mean vector $\bar{\mathbf{y}} = \bar{y}\mathbf{1}$. This means that $\bar{y}$ is the real number $b \in \mathbb{R}$ which minimizes
\[ \|y - b\mathbf{1}\|^2 = \sum (y_i - b)^2. \]
We can also obtain this result using the normal equations. Let $A = (1, 1, \ldots, 1)^T$, so that $Ab = (b, b, \ldots, b)^T$. The equation $Ab = y$ will not have a solution unless $y$ belongs to the one dimensional column space $C(A)$. The normal equation gives
\[ A^T A \hat{b} = A^T y \implies n \hat{b} = \sum y_i \implies \hat{b} = \bar{y}, \]
so, as already proven, the projection of $y$ on $C(A)$ is $A\bar{y} = \bar{y}\mathbf{1}$.

Let $v_1 = \mathbf{1} \in \mathbb{R}^n$ and extend to a basis $v_1, v_2, \ldots, v_n$. The difference $C_n(y) = y - \bar{\mathbf{y}}$, which lies in the span of $v_2, \ldots, v_n$, is called the centering of $y$. The map $C_n$ is just the projection of $y$ onto the subspace perpendicular to $\mathbf{1}$. The random variable
\[ S^2 = \frac{1}{n-1} \sum_{i=1}^{n} (Y_i - \bar{Y})^2 = \frac{\|Y - \bar{\mathbf{Y}}\|^2}{n-1} \]
provides an unbiased estimate of the variance of the random variable $Y$ (of which $y$ is an instance).

[Diagram: the data $y$ decomposed into the mean vector $\bar{\mathbf{y}}$ and the centering $C_n(y) = y - \bar{\mathbf{y}}$.]

Finally we note that, since the projection is orthogonal, $(y - \bar{\mathbf{y}}) \perp \bar{\mathbf{y}}$.
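A small sketch of item 20 (the names are ours), using the example data above: the mean as a projection onto $\mathbf{1}$, the centering map $C_n$, and the orthogonality of $y - \bar{y}\mathbf{1}$ to $\mathbf{1}$.

```python
import numpy as np

y = np.array([1.0, 2.0, 2.0])
ones = np.ones_like(y)

ybar = (y @ ones) / (ones @ ones)    # <y,1>/<1,1> = the mean
mean_vec = ybar * ones               # projection of y onto span{1}
centered = y - mean_vec              # the centering C_n(y)

assert np.isclose(ybar, y.mean())
assert np.isclose(centered @ ones, 0.0)   # (y - ybar 1) is perp. to 1
print(ybar, centered)                     # 1.6667 [-0.6667 0.3333 0.3333]
```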
21. Statistical Appendix. Let $Y = (Y_1, Y_2, \ldots, Y_n)$ be a vector of independent random variables with means and standard deviations specified by
\[ \mu = (\mu_1, \mu_2, \ldots, \mu_n), \qquad \sigma = (\sigma_1, \sigma_2, \ldots, \sigma_n), \]
and let $u \in \mathbb{R}^n$. Then:

(a) $E(Y \cdot u) = u \cdot \mu$ and $\operatorname{Var}(Y \cdot u) = u^2 \cdot \sigma^2$, where $\sigma^2 = (\sigma_1^2, \sigma_2^2, \ldots, \sigma_n^2)$ and $u^2 = (u_1^2, u_2^2, \ldots, u_n^2)$.

(b) If $Y_i \sim (\mu, \sigma^2)$ and $\|u\| = 1$, then $E(Y \cdot u) = (u \cdot \mathbf{1})\mu$ and $\operatorname{Var}(Y \cdot u) = \sigma^2$. Moreover, $E(Y \cdot u) = 0$ whenever $u \cdot \mathbf{1} = 0$.

(c) If $Y_i \sim (\mu, \sigma^2)$, $\|u\| = 1$ and $u \perp \mathbf{1}$, then $E(Y \cdot u)^2 = \sigma^2$.

(d) If $Y_i \sim N(\mu, \sigma^2)$ and $u, v$ are unit vectors in $\mathbb{R}^n$, the following are equivalent:
i. $Y \cdot u$ and $Y \cdot v$ are independent;
ii. $u \cdot v = 0$.

(e) If $Y_i \sim N(\mu, \sigma^2)$ and $\|u\| = 1$, then $Y \cdot u$ is also normal, with variance $\sigma^2$.

22. Corollary. Let $Y = (Y_1, Y_2, \ldots, Y_n) \in \mathbb{R}^n$ be a random vector with $Y_i \sim (\mu, \sigma^2)$. Let
\[ S_n^2 = \sum_{i=1}^{n} (Y_i - \bar{Y})^2 = \|Y - \bar{\mathbf{Y}}\|^2. \]
Then $E S_n^2 = (n - 1)\sigma^2$.
Proof. Let $u_1 = \mathbf{1}/\sqrt{n} \in \mathbb{R}^n$ be extended (say, by Gram-Schmidt) to an orthonormal basis $u_1, u_2, \ldots, u_n$. By orthonormal expansion:
\[ Y = \sum_{i=1}^{n} P_{u_i} Y = P_{u_1} Y + \sum_{i=2}^{n} P_{u_i} Y = \bar{\mathbf{Y}} + \sum_{i=2}^{n} P_{u_i} Y. \]
Hence
\[ \|Y - \bar{\mathbf{Y}}\|^2 = \sum_{i=2}^{n} \|P_{u_i} Y\|^2 = \sum_{i=2}^{n} (u_i \cdot Y)^2. \]
For $i \geq 2$, $\|u_i\| = 1$ and $u_i \perp \mathbf{1}$, and therefore $E(Y \cdot u_i)^2 = \sigma^2$. Summing over the $n - 1$ terms, it follows that $E S_n^2 = (n - 1)\sigma^2$.

23. Linear Regression in $\mathbb{R}^n$. For $n \geq 2$ an orthogonal basis is used in which $u_1 = \frac{1}{\sqrt{n}}\mathbf{1}, u_2, \ldots, u_p$ are designated as basis vectors for the model space $M$, and the remaining vectors $u_{p+1}, \ldots, u_{p+q}$ (with $p + q = n$) as a basis for the error space $M^{\perp}$. Notice that if $u \in M^{\perp}$, and $Y = (Y_1, Y_2, \ldots, Y_n) \in \mathbb{R}^n$ is a random vector with $Y_i \sim (\mu, \sigma^2)$, then (as in the proof of the corollary) $E(Y \cdot u)^2 = \sigma^2$, and it follows that
\[ E\left[ \frac{1}{q} \sum_{i=1}^{q} (Y \cdot u_{p+i})^2 \right] = \sigma^2. \]
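A Monte Carlo sketch of the corollary and of the error-space identity above (ours, not part of the notes; the distribution, seed, and sample sizes are arbitrary choices): for i.i.d. $Y_i$ with variance $\sigma^2$, each squared coefficient $(Y \cdot u_i)^2$, $i \geq 2$, averages to $\sigma^2$, and $\|Y - \bar{\mathbf{Y}}\|^2$ averages to $(n-1)\sigma^2$.

```python
import numpy as np

rng = np.random.default_rng(0)
n, mu, sigma, trials = 4, 3.0, 2.0, 200_000

# Orthonormal basis with u_1 = 1/sqrt(n), extended via QR (Gram-Schmidt).
M = np.column_stack([np.ones(n) / np.sqrt(n), rng.normal(size=(n, n - 1))])
Q, _ = np.linalg.qr(M)               # columns are u_1, ..., u_n

Y = rng.normal(mu, sigma, size=(trials, n))
coeffs = Y @ Q                       # (Y . u_1, ..., Y . u_n) per trial

# For i >= 2, E (Y . u_i)^2 = sigma^2, hence
# E ||Y - Ybar 1||^2 = (n - 1) sigma^2 (the corollary).
print((coeffs[:, 1:] ** 2).mean(axis=0))   # each entry ~ sigma^2 = 4
print(((Y - Y.mean(axis=1, keepdims=True)) ** 2).sum(axis=1).mean())  # ~ 12
```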
In the special case $n = 2$, $q = 1$, and
\[ u_1 = (1, 1)/\sqrt{2}, \qquad u_2 = (-1, 1)/\sqrt{2}. \]
Then, as advertised:
\[ E\left[ \frac{(Y \cdot u_2)^2}{1} \right] = E\left[ \frac{(Y_2 - Y_1)^2}{2} \right] = \sigma^2. \]
For details, please see [2].

[Diagram (taken from Pruim): Linear regression in higher dimensions. The response $y$ decomposes into the fit $\hat{y}$ and the residuals $e = y - \hat{y}$; within the model space, $\hat{y}$ decomposes into the mean $\bar{\mathbf{y}}$ and the effect $\hat{y} - \bar{\mathbf{y}}$.]

Sources

1. Gilbert Strang, Introduction to Linear Algebra.
2. David Saville and Graham R. Wood, Statistical Methods: The Geometric Approach (Springer Texts in Statistics).
3. Randall Pruim, Foundations and Applications of Statistics: An Introduction Using R (Pure and Applied Undergraduate Texts).²

² Whose color scheme we have followed.