«Random Vectors» Lecture #2: Introduction Andreas Polydoros
2 Introduction

Contents:
- Definitions: correlation and covariance matrix
- Linear transformations: spectral shaping and factorization
- The whitening concept
- The Karhunen-Loeve expansion
- Binary hypothesis testing
3 Introduction

Definition (correlation and covariance matrix): Random vectors come about either by sampling one random process $X(u,t)$ at times $t_1, t_2, \dots, t_N$, or by observing a number of processes $X_1(u,t), X_2(u,t), \dots, X_N(u,t)$ at the same time instant. Mathematically, the two constructions are equivalent. We write the random vector and its mean vector as

$$X(u) = \begin{bmatrix} X(u,1) \\ \vdots \\ X(u,N) \end{bmatrix}, \qquad m_X = E\{X(u)\}, \qquad m_X(n) = E\{X(u,n)\}.$$
4 Introduction

The autocorrelation matrix is

$$R_X = E\{X(u)\,X^*(u)\} = E\left\{\begin{bmatrix} X(u,1) \\ \vdots \\ X(u,N)\end{bmatrix}\big[X^*(u,1),\; X^*(u,2),\;\dots,\; X^*(u,N)\big]\right\} = \begin{bmatrix} R_X(1,1) & R_X(1,2) & \cdots & R_X(1,N) \\ R_X(2,1) & R_X(2,2) & \cdots & R_X(2,N) \\ \vdots & & & \vdots \\ R_X(N,1) & R_X(N,2) & \cdots & R_X(N,N)\end{bmatrix}.$$
5 Introduction

The covariance matrix is

$$K_X = E\{(X(u)-m_X)(X(u)-m_X)^*\} = E\{X(u)X^*(u)\} - m_X E\{X^*(u)\} - E\{X(u)\}\,m_X^* + m_X m_X^*,$$

so that

$$K_X = R_X - m_X m_X^*.$$
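These definitions are easy to check numerically. Below is a minimal NumPy sketch, not part of the original slides (the distribution, mean, and sample size are assumed purely for illustration), estimating $m_X$, $R_X$, and $K_X$ from realizations and verifying the identity $K_X = R_X - m_X m_X^*$:

```python
import numpy as np

rng = np.random.default_rng(0)

# S realizations of an N-dimensional random vector (one row per realization).
# The mean and distribution here are assumed purely for illustration.
S, N = 100_000, 3
X = rng.normal(loc=[1.0, -0.5, 2.0], scale=1.0, size=(S, N))

m_X = X.mean(axis=0)                    # sample mean vector m_X
R_X = X.T @ X / S                       # sample autocorrelation matrix R_X
K_X = R_X - np.outer(m_X, m_X)          # K_X = R_X - m_X m_X^T (real case)

# Cross-check against NumPy's own (biased) covariance estimate.
print(np.allclose(K_X, np.cov(X, rowvar=False, bias=True)))
```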
6 Linear transformations

Suppose we are given a random vector $X(u)$ and we construct another random vector $Y(u)$ through the linear transformation

$$Y(u) = H X(u), \qquad y_m = \sum_{n=1}^{N} h_{mn} x_n, \quad m = 1, 2, \dots, M.$$

Question: What is the second-moment description of $Y(u)$?
7 Linear transformations

$$m_Y = E\{Y(u)\} = \begin{bmatrix} E\{Y(u,1)\} \\ \vdots \\ E\{Y(u,M)\}\end{bmatrix} = \begin{bmatrix} h_{11} & \cdots & h_{1N} \\ \vdots & & \vdots \\ h_{M1} & \cdots & h_{MN}\end{bmatrix} E\{X(u)\} = H m_X.$$

In the above derivation we claimed that

$$E\left\{\sum_{n=1}^{N} h_{mn} X(u,n)\right\} \stackrel{?}{=} \sum_{n=1}^{N} h_{mn}\, E\{X(u,n)\}.$$

In other words, we assumed that expectation and summation can be interchanged, but this holds only if the correlations $R_X(n,n)$ are finite.
8 Linear transformations

For the autocorrelation matrix of $Y(u)$:

$$R_Y = E\{Y(u)\,Y^*(u)\} = E\{(HX(u))(HX(u))^*\} = E\{H\,X(u)X^*(u)\,H^*\} = H\,E\{X(u)X^*(u)\}\,H^*,$$

so that

$$R_Y = H R_X H^*.$$
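Both transformation rules can be checked on samples; a short sketch (not from the slides; $H$ and the statistics of $X(u)$ are chosen arbitrarily):

```python
import numpy as np

rng = np.random.default_rng(1)
N, M, S = 3, 2, 10_000

H = rng.normal(size=(M, N))             # an arbitrary M x N real transformation
X = rng.normal(size=(S, N)) + np.array([1.0, 0.0, -1.0])
Y = X @ H.T                             # Y(u) = H X(u), one realization per row

m_X, R_X = X.mean(axis=0), X.T @ X / S
print(np.allclose(Y.mean(axis=0), H @ m_X))      # m_Y = H m_X
print(np.allclose(Y.T @ Y / S, H @ R_X @ H.T))   # R_Y = H R_X H^T (real case)
```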
9 White vectors

A useful concept is that of a white vector $W(u)$: a random vector with mean $m_W = 0$ and covariance matrix

$$R_W = K_W = \sigma^2 I = \begin{bmatrix} \sigma^2 & 0 & \cdots & 0 \\ 0 & \sigma^2 & & \vdots \\ \vdots & & \ddots & 0 \\ 0 & \cdots & 0 & \sigma^2 \end{bmatrix},$$

where $\sigma^2$ is a constant and $I$ is the identity matrix. This means that all components $w_i$ of $W(u)$ are uncorrelated with each other, with zero mean and variance $\sigma^2$.
10 Spectral Shaping

Problem: Given the white vector $W(u)$, can we find a linear transformation $H$ such that the resultant vector $X(u) = H W(u)$ has given mean $m_X$ and given covariance matrix $K_X$?

[Block diagram: $W(u) \to H \to X(u)$]
11 Spectral Shaping

Since $m_W = 0$, it follows that

$$m_X = H m_W = 0.$$

The covariance matrix of $X(u)$ is

$$R_X = H R_W H^* = \sigma^2 H H^*.$$

Therefore, spectral shaping is equivalent to the following: given a correlation matrix $R_X$, find an $H$ such that $R_X = H H^*$.

Note: $\sigma^2$ can be absorbed in the given $R_X$ by creating a new given $R_X$. Other names for this problem are matrix factorization and square root of a matrix.
12 Spectral Shaping

Definition: A complex (real) matrix $A$ is called Hermitian (symmetric) iff $A = A^*$.

Definition: A complex (real) matrix $A$ is called unitary (orthogonal) iff $A A^* = I$.
13 Spectral Shaping

Theorem: If $K$ is Hermitian symmetric, then there exists a unitary matrix $E$ such that

$$K = E \Lambda E^*, \qquad \Lambda = \begin{bmatrix} \lambda_1 & & & \\ & \lambda_2 & & \\ & & \ddots & \\ & & & \lambda_N \end{bmatrix},$$

with $\lambda_n$, $n = 1, 2, \dots, N$, the eigenvalues of $K$ (not necessarily distinct). In other words: Hermitian symmetric matrices are always diagonalizable.

Theorem: A necessary and sufficient condition for such a $K$ to be nonnegative definite is that $\lambda_n \geq 0$, $n = 1, 2, \dots, N$.
14 Spectral Shaping

Theorem: Let $K$ be Hermitian symmetric. Then to each distinct (simple) eigenvalue there corresponds an eigenvector which is orthogonal (orthonormal) to all others. To each eigenvalue of multiplicity $k$ there correspond $k$ linearly independent eigenvectors, which are orthogonal to all eigenvectors of the remaining eigenvalues. These $k$ eigenvectors can be made orthogonal by application of the Gram-Schmidt procedure.

In summary, every Hermitian $N \times N$ matrix has $N$ orthonormal eigenvectors $\{e_n\}_{n=1}^{N}$ associated with its eigenvalues $\{\lambda_n\}_{n=1}^{N}$. In fact, the matrix $E$ consists of these $e_n$'s as its columns, i.e.,

$$E = [\,e_1 \;\; e_2 \;\; \cdots \;\; e_N\,].$$
15 Spectral Shaping

Returning to the factorization problem, we want to find an $H$ such that $R_X = H H^*$. Writing $R_X = E \Lambda E^*$ (since $R_X$ is Hermitian) we have

$$R_X = E \Lambda E^* = E \Lambda^{1/2} \Lambda^{1/2} E^* = (E \Lambda^{1/2})(E \Lambda^{1/2})^*,$$

so $H = E \Lambda^{1/2}$, where

$$\Lambda^{1/2} = \begin{bmatrix} \sqrt{\lambda_1} & & & \\ & \sqrt{\lambda_2} & & \\ & & \ddots & \\ & & & \sqrt{\lambda_N} \end{bmatrix}.$$
16 Spectral Shaping

We have arrived at a solution where $H = E \Lambda^{1/2}$. However, this solution is not unique. To see this, take any unitary matrix $U$ and observe that

$$R_X = H H^* = (E \Lambda^{1/2})(E \Lambda^{1/2})^* = (E \Lambda^{1/2})\, U U^*\, (E \Lambda^{1/2})^* = (E \Lambda^{1/2} U)(E \Lambda^{1/2} U)^*,$$

so $E \Lambda^{1/2} U$ is another valid $H$.
17 Spectral Shaping

Sometimes we take $U = E^*$ and the resulting $H$ is given as

$$H = E \Lambda^{1/2} U = E \Lambda^{1/2} E^*.$$

This matrix is often called the square root of $R_X$.

From an applications viewpoint this factorization is useful in simulation, i.e., creating a random vector with desired correlation properties, starting from a random number generator.

Note: if $m_X \neq 0$ then the appropriate linear transformation is $X = H W + m_X$, where the factorization is done on $K_X$, not on $R_X$.
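To illustrate the simulation use, here is a sketch of the coloring recipe in NumPy (the target $m_X$ and $K_X$ below are assumed, not from the lecture): factor $K_X = H H^T$ with the square-root choice $H = E \Lambda^{1/2} E^T$ and drive it with white noise.

```python
import numpy as np

rng = np.random.default_rng(2)

# Assumed target mean and covariance (any symmetric positive definite K_X works).
m_X = np.array([1.0, -2.0, 0.5])
K_X = np.array([[4.0, 1.0, 0.5],
                [1.0, 3.0, 1.0],
                [0.5, 1.0, 2.0]])

# Factor K_X = H H^T with the "square root" choice H = E Lambda^{1/2} E^T.
lam, E = np.linalg.eigh(K_X)            # eigh handles symmetric matrices
H = E @ np.diag(np.sqrt(lam)) @ E.T

# Color white noise: X(u) = H W(u) + m_X.
S = 200_000
W = rng.standard_normal((3, S))         # white: zero mean, unit covariance
X = H @ W + m_X[:, None]

print(np.allclose(X.mean(axis=1), m_X, atol=0.02))  # sample mean ~ m_X
print(np.allclose(np.cov(X), K_X, atol=0.05))       # sample covariance ~ K_X
```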
18 Spectral Shaping

Example: The required covariance matrix is

$$K_X = \begin{bmatrix} 2 & -1 & -1 \\ -1 & 2 & -1 \\ -1 & -1 & 2 \end{bmatrix}.$$

The eigenvalues are found by solving the characteristic equation $\det\{K_X - \lambda_n I\} = 0$, $n = 1, 2, 3$:

$$\lambda_1 = 0, \qquad \lambda_2 = \lambda_3 = 3.$$
19 Spectral Shaping

Solving for the corresponding eigenvectors we get

$$\lambda_1 = 0: \quad e_1 = \frac{1}{\sqrt{3}}\begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix}; \qquad \lambda_2 = \lambda_3 = 3: \quad e_2 = \frac{1}{\sqrt{2}}\begin{bmatrix} 1 \\ -1 \\ 0 \end{bmatrix}, \quad e_3 = \frac{1}{\sqrt{6}}\begin{bmatrix} 1 \\ 1 \\ -2 \end{bmatrix}$$

(the eigenvectors of the repeated eigenvalue are not unique; this is one orthonormal choice).
20 Spectral Shaping

Therefore, we could choose the linear transformation $H = E \Lambda^{1/2} = [\,e_1 \;\; e_2 \;\; e_3\,]\,\mathrm{diag}(0, \sqrt{3}, \sqrt{3})$, so that with the eigenvectors chosen above

$$X = H W = \begin{bmatrix} 0 & \sqrt{3/2} & 1/\sqrt{2} \\ 0 & -\sqrt{3/2} & 1/\sqrt{2} \\ 0 & 0 & -\sqrt{2} \end{bmatrix}\begin{bmatrix} W(u,1) \\ W(u,2) \\ W(u,3) \end{bmatrix}.$$

Notice that $X$ does not depend on $W(u,1)$.
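The example can be checked numerically; a small sketch (not on the slides):

```python
import numpy as np

K_X = np.array([[ 2., -1., -1.],
                [-1.,  2., -1.],
                [-1., -1.,  2.]])

lam, E = np.linalg.eigh(K_X)            # eigenvalues in ascending order
print(np.round(lam, 6))                 # ~ [0, 3, 3], as on the slide

H = E @ np.diag(np.sqrt(np.maximum(lam, 0.0)))  # clip roundoff negatives
print(np.allclose(H @ H.T, K_X))        # H H^T reproduces K_X
print(np.allclose(H[:, 0], 0.0))        # lambda_1 = 0: X ignores W(u,1)
```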
21 Spectral Shaping

In the above, we solved the problem of spectral shaping, which is equivalent to a covariance matrix factorization. The solution was unconstrained, i.e., we imposed no restrictions on the nature of the linear transformation $H$. Now assume that we impose the constraint that the linear transformation be causal.
22 Spectral Shaping

Definition: A causal linear transformation is equivalent to $H$ being lower triangular, i.e., the wanted linear transformation is

$$\begin{bmatrix} X(u,1) \\ \vdots \\ X(u,N) \end{bmatrix} = \begin{bmatrix} h_{11} & 0 & \cdots & 0 \\ h_{21} & h_{22} & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ h_{N1} & h_{N2} & \cdots & h_{NN} \end{bmatrix}\begin{bmatrix} W(u,1) \\ \vdots \\ W(u,N) \end{bmatrix} + m_X,$$

or componentwise

$$X(u,n) = \sum_{l=1}^{n} h_{nl}\, W(u,l), \qquad n = 1, 2, \dots, N.$$

The problem can now be restated as: find a lower-triangular matrix $H$ such that $K_X = H H^*$.

Note: This factorization is called the Cholesky factorization of positive definite matrices.
23 Spectral Shaping

Example (real-valued covariance matrix):

$$\begin{bmatrix} k_{11} & k_{12} & k_{13} \\ k_{21} & k_{22} & k_{23} \\ k_{31} & k_{32} & k_{33} \end{bmatrix} = \begin{bmatrix} h_{11} & 0 & 0 \\ h_{21} & h_{22} & 0 \\ h_{31} & h_{32} & h_{33} \end{bmatrix}\begin{bmatrix} h_{11} & h_{21} & h_{31} \\ 0 & h_{22} & h_{32} \\ 0 & 0 & h_{33} \end{bmatrix}$$

$$k_{11} = h_{11} h_{11} \;\Rightarrow\; h_{11} = \pm\sqrt{k_{11}}, \qquad k_{21} = h_{21} h_{11} \;\Rightarrow\; h_{21} = k_{21}/h_{11},$$

and in the same manner we can find the rest of the $h_{ij}$.
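In practice one calls a library routine for this recursion; a minimal NumPy sketch (the numeric $K_X$ is an assumed positive definite example, not from the lecture):

```python
import numpy as np

K_X = np.array([[4.0, 2.0, 1.0],
                [2.0, 3.0, 0.5],
                [1.0, 0.5, 2.0]])       # assumed positive definite example

H = np.linalg.cholesky(K_X)             # lower-triangular H with K_X = H H^T
print(np.allclose(H @ H.T, K_X))
print(np.allclose(H, np.tril(H)))       # causality: H is lower triangular

# First steps of the hand computation from the slide:
print(np.isclose(H[0, 0], np.sqrt(K_X[0, 0])))   # h11 = sqrt(k11), positive root
print(np.isclose(H[1, 0], K_X[1, 0] / H[0, 0]))  # h21 = k21 / h11
```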
24 Properties - Spectral Resolution

Assume a real covariance matrix $K_X$. We can rewrite the factorization $K_X = E \Lambda E^T$ as

$$K_X = [\,\lambda_1 e_1 \;\; \lambda_2 e_2 \;\; \cdots \;\; \lambda_N e_N\,]\begin{bmatrix} e_1^T \\ e_2^T \\ \vdots \\ e_N^T \end{bmatrix}$$

or

$$K_X = \sum_{n=1}^{N} \lambda_n\, e_n e_n^T.$$

This shows that $K_X$ can be decomposed (resolved) into a sum of matrices, each of the form $e e^T$ with weight $\lambda$. The set of eigenvectors $\{e_n\}_{n=1}^{N}$ constitutes a basis for the $N$-dimensional vector space.
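A short numerical check of the spectral resolution, reusing the $K_X$ from the earlier example:

```python
import numpy as np

K_X = np.array([[ 2., -1., -1.],
                [-1.,  2., -1.],
                [-1., -1.,  2.]])
lam, E = np.linalg.eigh(K_X)

# Rebuild K_X as a sum of rank-one matrices lambda_n * e_n e_n^T.
K_sum = sum(lam[n] * np.outer(E[:, n], E[:, n]) for n in range(len(lam)))
print(np.allclose(K_sum, K_X))
```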
25 Properties - Spectral Resolution

Every deterministic vector $A$ can be expanded into a series

$$A = \sum_{n=1}^{N} a_n e_n,$$

where $a_n = \langle A, e_n \rangle = A^T e_n$ is the projection of $A$ on the basis vector $e_n$. Thus, the vector $A$ can be described in terms of its projections $\{a_n\}_{n=1}^{N}$ along the coordinates $\{e_n\}_{n=1}^{N}$.
26 Properties - Spectral Resolution

It is clear that we can create random vectors by choosing these projections as random variables $A_n(u)$, i.e.,

$$A(u) = \sum_{n=1}^{N} A_n(u)\, e_n.$$

Note: If the eigenvectors have the form $e_n = [0, 0, \dots, 0, 1, 0, \dots, 0]^T$ with the $1$ in the $n$-th position, then $A(u,n) = A_n(u)$.
27 Properties - Directional Preference

Suppose we are given the covariance matrix $K_X$ of some vector $X(u)$ and would like to project this vector on some unit-length vector $b$ ($\sum_{n=1}^{N} b_n^2 = 1$). The projection is the inner product

$$Y(u) = \langle X(u), b \rangle = b^T X(u).$$

Assuming that $m_X = 0$, the variance of $Y(u)$ equals

$$\sigma_Y^2 = \mathrm{var}\{Y(u)\} = E\{Y(u)Y(u)\} = E\{b^T X(u) X^T(u)\, b\} = b^T K_X\, b,$$

i.e., the variance of $Y(u)$ is a quadratic functional of the $\{b_n\}$'s.
28 Properties - Directional Preference

Directional preference translates to finding those directions $b$ where the variance $\sigma_Y^2 = b^T K_X b$ is highest (or lowest). This is an optimization problem where we want to maximize the above quadratic form, subject to the unit-norm constraint. To solve this, we expand $b$ on the orthonormal basis $\{e_n\}_{n=1}^{N}$, i.e.,

$$b = \sum_{n=1}^{N} b_n e_n,$$

so that $\sum_{n=1}^{N} b_n^2 = 1$. The quadratic form can now be written as

$$\sigma_Y^2 = b^T K_X b = \left(\sum_{n=1}^{N} b_n e_n\right)^{\!T} K_X \left(\sum_{m=1}^{N} b_m e_m\right) = \sum_{n=1}^{N}\sum_{m=1}^{N} b_n b_m\, e_n^T K_X e_m.$$
29 Properties - Directional Preference

Recalling that $K_X e_m = \lambda_m e_m$, $\sigma_Y^2$ can be written as

$$\sigma_Y^2 = \sum_{n=1}^{N}\sum_{m=1}^{N} b_n b_m \lambda_m\, e_n^T e_m = \sum_{n=1}^{N}\sum_{m=1}^{N} b_n b_m \lambda_m\, \delta_{nm} = \sum_{n=1}^{N} \lambda_n b_n^2.$$

Now the original problem can be equivalently stated as follows: let $u_n \triangleq b_n^2$. We want to maximize

$$\sigma_Y^2 = \sum_{n=1}^{N} \lambda_n u_n$$

subject to the constraint $\sum_{n=1}^{N} u_n = 1$, with $u_i \geq 0$ and $\lambda_i \geq 0$.
30 Properties - Directional Preference

Example ($N = 2$): for $\lambda_2 > \lambda_1$ the optimal solution is $u_1 = 0$, $u_2 = 1$. The general solution is to choose $u_m = 1$ where $\lambda_m = \max_n\{\lambda_n\}$ and $u_n = 0$ for $n \neq m$. Since $b_i = \pm\sqrt{u_i}$, $i = 1, 2, \dots, N$, it follows that

$$b_m = \pm 1, \qquad b_n = 0 \;\text{ for }\; n \neq m.$$
31 Properties - Directional Preference

The resulting variance is the maximum eigenvalue:

$$\sigma_Y^2 = \lambda_m = \max_n\{\lambda_n\}.$$

Recalling that $b = \sum_{n=1}^{N} b_n e_n$, it follows that $b_{\max} = e_{\max}$, where $e_{\max}$ is the eigenvector of $K_X$ corresponding to the largest eigenvalue.

Question: What is the direction that minimizes the variance?
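A numerical illustration (the 2 x 2 $K_X$ is assumed): the quadratic form $b^T K_X b$ over random unit vectors never exceeds $\lambda_{\max}$ and never falls below $\lambda_{\min}$; this also suggests the answer to the closing question, namely the eigenvector of the smallest eigenvalue.

```python
import numpy as np

rng = np.random.default_rng(3)
K_X = np.array([[3.0, 1.0],
                [1.0, 2.0]])            # assumed 2 x 2 covariance matrix

lam, E = np.linalg.eigh(K_X)            # ascending: lam[0] <= lam[1]
b_max = E[:, -1]                        # eigenvector of the largest eigenvalue
print(np.isclose(b_max @ K_X @ b_max, lam[-1]))  # sigma_Y^2 = lambda_max

# Sweep many random unit vectors: lam[0] <= b^T K_X b <= lam[1] always.
B = rng.normal(size=(2, 1000))
B /= np.linalg.norm(B, axis=0)
q = np.einsum('is,ij,js->s', B, K_X, B) # quadratic form for each column of B
print(lam[0] - 1e-12 <= q.min(), q.max() <= lam[-1] + 1e-12)
```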
32 The whitening concept

This is the converse of the factorization or spectral shaping problem.

Problem statement: Given a random vector $X(u)$ with some mean $m_X$ and covariance $K_X$, find a linear transformation $G$ such that the output $W(u)$ is a white vector.

[Block diagrams: for $m_X = 0$, $X(u) \to G \to W(u)$; for $m_X \neq 0$, first subtract $m_X$ from $X(u)$, then apply $G$.]
33 The whitening concept

From previous theory we know that the covariance matrix of the output vector $W(u)$ is

$$K_W = G K_X G^*.$$

For $W(u)$ to be white we require $K_W = I$. We also know that $K_X$ can be factorized (assuming real matrices) as $K_X = H H^T$. Thus, we require the following equality to hold:

$$G H H^T G^T = (GH)(GH)^T = I.$$
34 The whitening concept

The simplest form of $G$ that satisfies this equality is $G = H^{-1}$ (which exists when $K_X$ is positive definite). Since $H = E \Lambda^{1/2} U$, we can express $G$ in terms of $E$ and $\Lambda$ as

$$G = H^{-1} = (E \Lambda^{1/2} U)^{-1} = U^{-1} \Lambda^{-1/2} E^{-1} = U^T \Lambda^{-1/2} E^T.$$

Recalling that $U$ is by definition an arbitrary unitary matrix and $E$ is also unitary, since its columns are the orthonormal eigenvectors of $K_X$, we end up at

$$G = U^T \Lambda^{-1/2} E^T.$$
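A sketch of the whitener with the simplest choice $U = I$, i.e. $G = \Lambda^{-1/2} E^T$ (the $K_X$, $m_X$, and sample size below are assumed for illustration):

```python
import numpy as np

rng = np.random.default_rng(4)

K_X = np.array([[4.0, 1.0, 0.5],
                [1.0, 3.0, 1.0],
                [0.5, 1.0, 2.0]])       # assumed covariance (positive definite)
m_X = np.array([1.0, -1.0, 0.0])

lam, E = np.linalg.eigh(K_X)
G = np.diag(1.0 / np.sqrt(lam)) @ E.T   # whitener, choosing U = I

print(np.allclose(G @ K_X @ G.T, np.eye(3)))     # K_W = G K_X G^T = I

# On data with nonzero mean: subtract m_X first, then apply G.
S = 200_000
X = np.linalg.cholesky(K_X) @ rng.standard_normal((3, S)) + m_X[:, None]
W = G @ (X - m_X[:, None])
print(np.allclose(np.cov(W), np.eye(3), atol=0.02))
```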
35 The Karhunen-Loeve expansion

Starting from the coloring problem equation $X(u) = E \Lambda^{1/2} U W(u)$, we define the following random vectors:

$$Y(u) \triangleq U W(u), \qquad Z(u) \triangleq \Lambda^{1/2}\, Y(u).$$

Claim: The vector $Y(u) = U W(u)$ is also white, and the vector $Z(u) = \Lambda^{1/2} Y(u)$ has uncorrelated components, each with a different variance.
36 The Karhunen-Loeve expansion

Proof: Using the standard formulas we obtain

$$m_Y = U m_W = 0, \qquad K_Y = U K_W U^T = U U^T = I \;\Rightarrow\; Y(u) \text{ is white};$$

$$m_Z = \Lambda^{1/2} m_Y = 0, \qquad K_Z = \Lambda^{1/2} K_Y (\Lambda^{1/2})^T = \Lambda^{1/2} I\, \Lambda^{1/2} = \Lambda = \begin{bmatrix} \lambda_1 & & 0 \\ & \ddots & \\ 0 & & \lambda_N \end{bmatrix},$$

so $\mathrm{var}\{Z(u,n)\} = \lambda_n$, $n = 1, 2, \dots, N$.
37 The Karhunen-Loeve expansion

Rewriting the coloring problem equation as $X(u) = E Z(u)$, we have

$$X(u) = [\,e_1 \;\; e_2 \;\; \cdots \;\; e_N\,]\, Z(u) = \sum_{n=1}^{N} Z_n(u)\, e_n = \sum_{n=1}^{N} \sqrt{\lambda_n}\, W_n(u)\, e_n,$$

where $W_n(u) \equiv Y(u,n)$ are the components of the white vector $Y(u)$. This is the Karhunen-Loeve expansion of $X(u)$. It states that every random vector can be written as a sum of orthonormal eigenvectors $\{e_n\}$, each weighted by a random variable $W_n(u)$ and further scaled by $\sqrt{\lambda_n}$.
38 The Karhunen-Loeve expansion

Note that the Karhunen-Loeve expansion of a random vector $X(u)$ is simply an expansion on a certain basis ($\{e_n\}$) of the $N$-dimensional vector space. However, the basis is special, since (as we just showed) the projections $\langle X(u), e_n \rangle$ are uncorrelated random variables with variance $\lambda_n$. (Projecting on an arbitrary basis would not have the same effect.) One could say that a random vector has preferences into how it is going to be distributed in space!
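A final sketch (the $K_X$ is assumed, reused from above) showing what makes the eigenvector basis special: the projections $\langle X(u), e_n \rangle$ come out uncorrelated, with variances $\lambda_n$.

```python
import numpy as np

rng = np.random.default_rng(5)

K_X = np.array([[4.0, 1.0, 0.5],
                [1.0, 3.0, 1.0],
                [0.5, 1.0, 2.0]])
lam, E = np.linalg.eigh(K_X)

# Zero-mean samples with covariance K_X (columns are realizations).
S = 200_000
X = np.linalg.cholesky(K_X) @ rng.standard_normal((3, S))

Z = E.T @ X                             # KL coefficients Z_n(u) = <X(u), e_n>
print(np.round(np.cov(Z), 2))           # ~ diag(lambda_1, lambda_2, lambda_3)
print(np.round(lam, 2))                 # eigenvalues for comparison
```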