LECTURE 8: ORTHOGONALITY (CHAPTER 5 IN THE BOOK)


Everything marked by ∗ is not required by the course syllabus. In this lecture, all vector spaces are over the real numbers R. All vectors in R^n are viewed as column vectors.

1. Inner product

Definition 1.1. An inner product on a real vector space V is the following: to each pair of vectors u, v ∈ V we associate a real number ⟨u, v⟩, and it satisfies
(1) ⟨u, u⟩ ≥ 0, with equality if and only if u = 0. (Positive)
(2) ⟨u, v⟩ = ⟨v, u⟩. (Symmetric)
(3) ⟨αu + βv, w⟩ = α⟨u, w⟩ + β⟨v, w⟩ for all scalars α and β. (Linear)
A vector space with an inner product is called an inner product space.

Definition 1.2. We define the length or norm of a vector v to be the non-negative number ‖v‖ = √⟨v, v⟩.

Example 1.3 (Standard inner product on R^n). Define the standard inner product on R^n by ⟨u, v⟩ = u^T v. In R^2, ⟨·, ·⟩ is the usual inner product of vectors and ‖x‖ = √(x_1^2 + x_2^2) is just the usual definition of the length of a vector x = (x_1, x_2) ∈ R^2.

Example 1.4. Recall that the space Mat_{m×n}(R) of m×n matrices is a vector space. Define an inner product by ⟨A, B⟩ = Σ_{i=1}^m Σ_{j=1}^n a_ij b_ij for A = (a_ij), B = (b_ij) ∈ Mat_{m×n}.

Example 1.5 (space of functions). Define an inner product on the space C([a, b]) of continuous functions by ⟨f, g⟩ = ∫_a^b f(x) g(x) dx.

Exercise (see the book). Check that ⟨f, f⟩ = 0 if and only if f(x) = 0 for all x ∈ [a, b].

Example 1.6. Let x_1, x_2, ..., x_n be distinct real numbers. For each pair of polynomials in P_n, define ⟨p, q⟩ = Σ_{i=1}^n p(x_i) q(x_i).

Definition 1.7. Two vectors u and v are said to be orthogonal if ⟨u, v⟩ = 0. If u and v are orthogonal to each other, we write u ⊥ v.

Example 1.8. (a) 0 is orthogonal to every vector in V. (b) 0 is the only vector in V that is orthogonal to itself.
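The notes themselves contain no code. The small sketches added throughout this lecture use Python with NumPy purely for illustration; the vectors and matrices are arbitrary examples, not part of the course material. Here the standard inner product of Example 1.3 and the matrix inner product of Example 1.4 are evaluated directly from their definitions:

import numpy as np

# Standard inner product on R^n:  <u, v> = u^T v  (Example 1.3)
u = np.array([1.0, 2.0, 2.0])
v = np.array([3.0, 0.0, -4.0])
print(np.dot(u, v))            # <u, v>
print(np.sqrt(np.dot(u, u)))   # ||u|| = sqrt(<u, u>) = 3.0

# Inner product on Mat_{m x n}(R):  <A, B> = sum_{i,j} a_ij * b_ij  (Example 1.4)
A = np.array([[1.0, 2.0], [0.0, 1.0]])
B = np.array([[3.0, -1.0], [5.0, 2.0]])
print(np.sum(A * B))           # entrywise product, then sum

# Orthogonality (Definition 1.7): u is orthogonal to v iff <u, v> = 0
print(np.isclose(np.dot(np.array([1.0, 1.0]), np.array([1.0, -1.0])), 0.0))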

(1) Note that
‖u + v‖^2 = ⟨u + v, u + v⟩ = ⟨u, u⟩ + ⟨v, u⟩ + ⟨u, v⟩ + ⟨v, v⟩ = ‖u‖^2 + 2⟨u, v⟩ + ‖v‖^2.
We get:

Theorem 1.9 (The Pythagorean Law). If u and v are orthogonal vectors in an inner product space V, then ‖u + v‖^2 = ‖u‖^2 + ‖v‖^2.

In R^2, this is the familiar Pythagorean theorem (see the picture in the notes: a right triangle with sides u, v and hypotenuse u + v).

2. Projection

Definition 2.1. If u and v are vectors in an inner product space V and v ≠ 0, then the scalar projection of u onto v is given by α = ⟨u, v⟩ / ‖v‖, and the (vector) projection of u onto v is given by p = α (1/‖v‖) v = (⟨u, v⟩ / ⟨v, v⟩) v.

Notation. We denote the projection p of u onto v ≠ 0 by proj_v(u).

The defining property of the projection. If v ≠ 0, then the (vector) projection p of u onto v is the unique vector satisfying the following two conditions:
(i) p is a scalar multiple of v;
(ii) u − p is orthogonal to v (hence also orthogonal to p).

Suppose p is a vector satisfying the above conditions. Then by condition (i), p = βv for some β ∈ R. On the other hand, (ii) means
0 = ⟨u − p, v⟩ = ⟨u, v⟩ − ⟨βv, v⟩ = ⟨u, v⟩ − β⟨v, v⟩.
Since v ≠ 0, ⟨v, v⟩ ≠ 0. Hence we get
β = ⟨u, v⟩ / ⟨v, v⟩,   and   p = (⟨u, v⟩ / ⟨v, v⟩) v.

Remark: From the above discussion, we see that u = proj_v(u) if and only if u is a scalar multiple of v.
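Definition 2.1 and its defining property translate directly into a few lines of code. A sketch (Python/NumPy, arbitrary sample vectors), not from the notes:

import numpy as np

def proj(u, v):
    """Vector projection of u onto v (v nonzero): p = (<u, v> / <v, v>) v."""
    return (np.dot(u, v) / np.dot(v, v)) * v

u = np.array([3.0, 4.0, 0.0])
v = np.array([1.0, 1.0, 1.0])
p = proj(u, v)

print(p)                                  # p is a scalar multiple of v   (condition (i))
print(np.isclose(np.dot(u - p, v), 0.0))  # u - p is orthogonal to v      (condition (ii))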

Theorem 2.2 (The Cauchy-Schwarz Inequality). If u and v are any two vectors in an inner product space V, then
|⟨u, v⟩| ≤ ‖u‖ ‖v‖.
Equality holds if and only if u and v are linearly dependent.

Proof. If v = 0, then the inequality is trivial, since |⟨u, v⟩| = 0 = ‖u‖ ‖v‖. If v ≠ 0, then let p be the vector projection of u onto v. Since p is orthogonal to u − p, it follows from the Pythagorean law that
(∗) (⟨u, v⟩ / ‖v‖)^2 = ‖p‖^2 = ‖u‖^2 − ‖u − p‖^2 ≤ ‖u‖^2.
Therefore |⟨u, v⟩| ≤ ‖u‖ ‖v‖. Moreover, equality holds in eq. (∗) if and only if u − p = 0, i.e. u = p. Therefore u is a scalar multiple of v.

A consequence of the Cauchy-Schwarz inequality is that if u and v are nonzero vectors, then
−1 ≤ ⟨u, v⟩ / (‖u‖ ‖v‖) ≤ 1.
Hence there is a unique angle θ in [0, π] such that cos θ = ⟨u, v⟩ / (‖u‖ ‖v‖), and the angle between two nonzero vectors u and v is defined to be this θ.

Example 2.3. The Cauchy-Schwarz inequality gives lots of useful inequalities. Here is an example. Consider the standard inner product on R^n. Let u = (1, 1, ..., 1) and x = (x_1, ..., x_n). Then the Cauchy-Schwarz inequality gives
Σ_{i=1}^n x_i = ⟨u, x⟩ ≤ √n · √(Σ_{i=1}^n x_i^2).
In other words,
(x_1 + ⋯ + x_n)/n ≤ √((x_1^2 + ⋯ + x_n^2)/n),
and equality holds if and only if the x_i are all the same.

Definition 2.4. A vector space V is said to be a normed linear space if, to each vector v ∈ V, there is associated a real number ‖v‖, called the norm of v, satisfying
(i) ‖v‖ ≥ 0 with equality if and only if v = 0;
(ii) ‖αv‖ = |α| ‖v‖ for any α ∈ R;
(iii) ‖u + v‖ ≤ ‖u‖ + ‖v‖ for all u, v ∈ V. (triangle inequality)

Theorem 2.5. If V is an inner product space, then ‖v‖ = √⟨v, v⟩ defines a norm on V.

Proof. Parts (i) and (ii) in Definition 2.4 are easy. We check part (iii):
‖u + v‖^2 = ‖u‖^2 + 2⟨u, v⟩ + ‖v‖^2 ≤ ‖u‖^2 + 2‖u‖ ‖v‖ + ‖v‖^2 = (‖u‖ + ‖v‖)^2   (Cauchy-Schwarz).
Thus ‖u + v‖ ≤ ‖u‖ + ‖v‖.
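The inequality of Example 2.3 and the triangle inequality from Theorem 2.5 are easy to test numerically. A small sketch (Python/NumPy, arbitrary sample data), not part of the notes:

import numpy as np

x = np.array([1.0, 4.0, 2.0, -3.0, 0.5])
n = len(x)

# Example 2.3: the mean is at most the root-mean-square, by Cauchy-Schwarz
print(np.sum(x) / n <= np.sqrt(np.sum(x**2) / n))

# Theorem 2.5 (iii): triangle inequality for the norm induced by the inner product
u = np.array([1.0, -2.0, 3.0])
v = np.array([4.0, 0.0, -1.0])
print(np.linalg.norm(u + v) <= np.linalg.norm(u) + np.linalg.norm(v))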

Remark: There are other norms not coming from any inner product. For example, for every x = (x_1, ..., x_n) ∈ R^n, define ‖x‖_1 = Σ_{i=1}^n |x_i| and ‖x‖_∞ = max_{1≤i≤n} |x_i|. Then ‖·‖_1 and ‖·‖_∞ are also norms, but they are not related to any inner product.

3. Orthogonal Subspaces

Let V be a vector space with inner product ⟨·, ·⟩.

Definition 3.1. Two subspaces X and Y of V are said to be orthogonal if ⟨x, y⟩ = 0 for every x ∈ X and every y ∈ Y. If X and Y are orthogonal, we write X ⊥ Y.

Example 3.2. In R^3 with the standard inner product, let X = Span{e_1} and Y = Span{e_2}. Then X ⊥ Y. This is because, for any α, β ∈ R, ⟨αe_1, βe_2⟩ = (αe_1^T)(βe_2) = 0.

Lemma 3.3. If X ⊥ Y, then X ∩ Y = {0}.

Proof. If x ∈ X ∩ Y, then ⟨x, x⟩ = 0 and hence x = 0.

Notation (The sum of two subspaces). Let X and Y be two subspaces of V; define a subspace of V by
X + Y = { x + y : x ∈ X and y ∈ Y }.

Exercise. Check that X + Y is a subspace of V.

Lemma 3.4. If X ∩ Y = {0}, then for each vector w ∈ X + Y there is a unique way to write w = x + y with x ∈ X and y ∈ Y.

Proof. Since w ∈ X + Y, there are x ∈ X and y ∈ Y such that w = x + y, by definition. Suppose w = x' + y' is another expression with x' ∈ X and y' ∈ Y. We get x − x' = y' − y. The left hand side of this equation is in X and the right hand side is in Y. Hence both sides are in X ∩ Y = {0}, so x − x' = 0 and y' − y = 0. This finishes the proof.

Definition 3.5 (The direct sum of two subspaces). Let X and Y be two subspaces of V. If X ∩ Y = {0} is the zero space, then X + Y is called a direct sum, and written as X ⊕ Y.

Definition 3.6. Let X be a subspace of V. The set of all vectors in V that are orthogonal to every vector in X will be denoted X^⊥. Thus,
X^⊥ = { v ∈ V : ⟨v, x⟩ = 0 for every x ∈ X }.
The set X^⊥ is called the orthogonal complement of X.

Lemma 3.7. X^⊥ is a subspace of V.

Proof. Let y, z ∈ X^⊥ and α ∈ R. We have ⟨αy, x⟩ = α⟨y, x⟩ = 0 and ⟨y + z, x⟩ = ⟨y, x⟩ + ⟨z, x⟩ = 0 for all x ∈ X, i.e. αy ∈ X^⊥ and y + z ∈ X^⊥. Therefore X^⊥ is a subspace of V.

Example 3.8. It is clear that (c.f. Example 1.8)
(1) {0}^⊥ = V;
(2) V^⊥ = {0}.

The following result is true for any finite dimensional inner product space; however, we will first prove the version in R^n with the standard inner product.

Theorem 3.9. Let X be a subspace of V; then X + X^⊥ = V. Combining this with X ∩ X^⊥ = {0}, we get V = X ⊕ X^⊥.

Corollary 3.10. If X is a subspace of V, then (X^⊥)^⊥ = X.

Proof. On the one hand, if x ∈ X, then x is orthogonal to each y ∈ X^⊥. Therefore x ∈ (X^⊥)^⊥, and hence X ⊆ (X^⊥)^⊥. On the other hand, suppose that z is an arbitrary element of (X^⊥)^⊥. By Theorem 3.9, we can write z as a sum u + v, where u ∈ X and v ∈ X^⊥. Since v ∈ X^⊥, we get v ⊥ u and v ⊥ z. Therefore,
0 = ⟨z, v⟩ = ⟨u + v, v⟩ = ⟨u, v⟩ + ⟨v, v⟩ = 0 + ⟨v, v⟩,
and, consequently, v = 0. Hence z = u ∈ X, and so X = (X^⊥)^⊥.

4. The standard inner product on R^n

In this section, we consider the vector space R^n with the standard inner product given by ⟨x, y⟩ = x^T y for all x, y ∈ R^n. Let A be an m×n matrix. It is also viewed as a linear transformation from R^n to R^m.

Notation. The column space of A is the image of the linear map A, denoted by
R(A) = { b ∈ R^m : b = Ax for some x ∈ R^n }.
Similarly, the row space is the column space of A^T in R^n, i.e. the space
R(A^T) = { y ∈ R^n : y = A^T x for some x ∈ R^m }.

Theorem 4.1 (Fundamental Subspace Theorem). If A is an m×n matrix, then N(A) = R(A^T)^⊥ and N(A^T) = R(A)^⊥.

Proof. For y = A^T c ∈ R(A^T) with c ∈ R^m, and x ∈ N(A), we have
(2) ⟨y, x⟩ = (A^T c)^T x = c^T Ax.
Since Ax = 0, we get ⟨y, x⟩ = c^T 0 = 0, i.e. x ⊥ y. Hence N(A) ⊆ R(A^T)^⊥. On the other hand, if x ∈ R(A^T)^⊥, then eq. (2) implies c ⊥ Ax for all c ∈ R^m. Hence Ax = 0, i.e. x ∈ N(A). Therefore N(A) = R(A^T)^⊥. The result for A^T is immediate if one replaces the role of A by A^T.
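Theorem 4.1 can be checked numerically for a concrete matrix: a basis of N(A), computed here from the SVD, should be orthogonal to every row of A. This is an illustrative sketch (Python/NumPy, arbitrary example matrix), not part of the notes:

import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])      # rank 1, so dim N(A) = 3 - 1 = 2

# Null space basis from the SVD: right singular vectors with (near) zero singular value
U, s, Vt = np.linalg.svd(A)
rank = int(np.sum(s > 1e-10))
null_basis = Vt[rank:].T             # columns span N(A)

# N(A) = R(A^T)^perp: every null-space vector is orthogonal to every row of A
print(np.allclose(A @ null_basis, 0.0))

# dim R(A^T) + dim N(A) = n  (rank-nullity, used again in Theorem 4.2 below)
print(rank + null_basis.shape[1] == A.shape[1])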

Theorem 4.2. If X is a subspace of R^n, then dim X + dim X^⊥ = n. Furthermore, if { x_1, ..., x_r } is a basis for X and { x_{r+1}, ..., x_n } is a basis for X^⊥, then { x_1, ..., x_r, x_{r+1}, ..., x_n } is a basis for R^n.

Proof. Suppose X = {0} is the zero space; then X^⊥ = R^n and dim X + dim X^⊥ = 0 + n = n. Suppose X ≠ {0}; then let { x_1, ..., x_r } be a basis of X and define A to be the r×n matrix whose i-th row is x_i^T. By construction R(A^T) = X and N(A) = X^⊥. So dim X = rank(A) and dim X^⊥ = null(A). Now by the rank-nullity theorem, we get
dim X + dim X^⊥ = rank(A) + null(A) = n.

To show that { x_1, ..., x_r, x_{r+1}, ..., x_n } is a basis, it suffices to show that the vectors are linearly independent. Suppose that, for some c_i ∈ R, Σ_{i=1}^n c_i x_i = 0. Let y = Σ_{i=1}^r c_i x_i and z = Σ_{i=r+1}^n c_i x_i. Then y ∈ X and z ∈ X^⊥. Since X ⊥ X^⊥, each vector in X + X^⊥ has a unique decomposition; in particular, 0 = 0 + 0 is the unique decomposition of 0. Hence y = 0 and z = 0. Now by the linear independence of { x_1, ..., x_r } and { x_{r+1}, ..., x_n } we conclude that c_i = 0 for all i. Hence the vectors are linearly independent. This finishes the proof.

Theorem 4.3. If X is a subspace of R^n, then R^n = X + X^⊥ and hence R^n = X ⊕ X^⊥.

Proof. This is clear by Theorem 4.2, since the union of a pair of bases of X and X^⊥ forms a basis of R^n.

Corollary 4.4. We have (X^⊥)^⊥ = X. (c.f. Corollary 3.10)

Corollary 4.5. If A is an m×n matrix and b ∈ R^m, then either there is a vector x ∈ R^n such that Ax = b (i.e. b ∈ R(A)), or there is a vector y ∈ R^m (y ∈ R(A)^⊥ = N(A^T)) such that A^T y = 0 and y^T b ≠ 0 (i.e. b ∉ R(A) = (N(A^T))^⊥).

5. Orthonormal basis

5.1. Definitions.

Definition 5.1. Let v_1, v_2, ..., v_n be nonzero vectors in an inner product space V. If ⟨v_i, v_j⟩ = 0 whenever i ≠ j, then { v_1, v_2, ..., v_n } is said to be an orthogonal set of vectors.

Lemma 5.2. If { v_1, v_2, ..., v_n } is an orthogonal set of nonzero vectors in an inner product space V, then v_1, v_2, ..., v_n are linearly independent.

Proof. Suppose that c_1 v_1 + c_2 v_2 + ⋯ + c_n v_n = 0. Taking the inner product of this equation with v_j for any 1 ≤ j ≤ n, we get
0 = c_1⟨v_j, v_1⟩ + c_2⟨v_j, v_2⟩ + ⋯ + c_n⟨v_j, v_n⟩ = c_j ‖v_j‖^2.
Hence c_j = 0.

Definition 5.3. A vector v ∈ V is called a unit vector if ‖v‖ = 1. An orthonormal set of vectors is an orthogonal set of unit vectors. Let { u_1, ..., u_n } be an orthonormal set; then ⟨u_i, u_j⟩ = δ_ij, which is 1 if i = j and 0 if i ≠ j.

Definition 5.4. A basis B of an inner product space V is called an orthonormal basis if B is also an orthonormal set.
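Lemma 5.2 and Definition 5.3 can be illustrated numerically: an orthogonal set of nonzero vectors, stacked as columns, has full column rank, and normalizing it gives ⟨u_i, u_j⟩ = δ_ij. A sketch (Python/NumPy, arbitrary example set), not from the notes:

import numpy as np

# An orthogonal (not yet normalized) set of nonzero vectors in R^3
V = np.column_stack([[1.0, 1.0, 0.0],
                     [1.0, -1.0, 0.0],
                     [0.0, 0.0, 2.0]])

G = V.T @ V                                   # Gram matrix of inner products <v_i, v_j>
print(np.allclose(G, np.diag(np.diag(G))))    # off-diagonal entries vanish: pairwise orthogonal
print(np.linalg.matrix_rank(V) == 3)          # hence linearly independent (Lemma 5.2)

# Normalizing gives an orthonormal set: <u_i, u_j> = delta_ij  (Definition 5.3)
U = V / np.linalg.norm(V, axis=0)
print(np.allclose(U.T @ U, np.eye(3)))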

Example 5.5. Let S = { u_1, u_2, ..., u_k } be an orthonormal set in V. Then S is an orthonormal basis of Span{ u_1, u_2, ..., u_k }. (Note that an orthonormal set is linearly independent.)

Example 5.6. The vectors u_1 = (1/√2, 1/√2) and u_2 = (1/√2, −1/√2) form an orthonormal basis for R^2.

Lemma 5.7. Let { u_1, ..., u_n } be an orthonormal basis for an inner product space V.
(i) If v = Σ_{i=1}^n c_i u_i, then c_i = ⟨v, u_i⟩.
(ii) If v = Σ_{i=1}^n c_i u_i and w = Σ_{i=1}^n d_i u_i, then ⟨v, w⟩ = Σ_{i=1}^n c_i d_i.
(iii) (Parseval's Formula) If v = Σ_{i=1}^n c_i u_i, then ‖v‖^2 = Σ_{i=1}^n c_i^2.

Proof. (i) ⟨v, u_i⟩ = ⟨Σ_j c_j u_j, u_i⟩ = Σ_j c_j ⟨u_j, u_i⟩ = Σ_j c_j δ_ji = c_i.
(ii) ⟨v, w⟩ = ⟨v, Σ_i d_i u_i⟩ = Σ_i d_i ⟨v, u_i⟩ = Σ_i c_i d_i.
(iii) ‖v‖^2 = ⟨v, v⟩ = Σ_i c_i^2.

Remark: Recall that, given a basis B, we get a linear isomorphism R^n → V (so V can be identified with R^n). Now Lemma 5.7 says that, when B is an orthonormal basis, the inner product structure on V is the same as the standard inner product structure on R^n via this isomorphism (so V can be identified with R^n equipped with the standard inner product). More precisely, let ⟨·,·⟩_V and ⟨·,·⟩_{R^n} denote the inner products on V and R^n respectively; then ⟨v, w⟩_V = ⟨[v]_B, [w]_B⟩_{R^n}. In Section 6, we will see that an orthonormal basis always exists.
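Lemma 5.7 is easy to verify numerically for the orthonormal basis of Example 5.6: the coordinates are obtained by inner products and Parseval's formula holds. A sketch (Python/NumPy), not from the notes:

import numpy as np

# Orthonormal basis of R^2 from Example 5.6
u1 = np.array([1.0, 1.0]) / np.sqrt(2)
u2 = np.array([1.0, -1.0]) / np.sqrt(2)

v = np.array([3.0, 5.0])
c = np.array([np.dot(v, u1), np.dot(v, u2)])   # c_i = <v, u_i>   (Lemma 5.7 (i))

print(np.allclose(c[0] * u1 + c[1] * u2, v))   # v = sum of c_i u_i
print(np.isclose(np.sum(c**2), np.dot(v, v)))  # Parseval: ||v||^2 = sum of c_i^2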

5.2. Orthogonal Matrices.

Definition 5.8. An n×n matrix Q is said to be an orthogonal matrix if Q^T Q = I, or, equivalently, the column vectors of Q form an orthonormal set in R^n.

Let q_i be the i-th column of Q. The equivalence is clear, since { q_1, ..., q_n } is an orthonormal set if and only if ⟨q_i, q_j⟩ = q_i^T q_j = δ_ij, and q_i^T q_j is the (i, j)-th entry of Q^T Q. Immediately from the definition, we get:

Lemma 5.9. Let Q be an n×n matrix. Then Q is an orthogonal matrix if and only if Q is invertible and Q^{-1} = Q^T.

Example 5.10. Fix θ ∈ R; then the matrix
Q = [ cos θ  −sin θ ; sin θ  cos θ ]
is an orthogonal matrix and
Q^{-1} = Q^T = [ cos θ  sin θ ; −sin θ  cos θ ].
In fact, every 2×2 orthogonal matrix with determinant 1 (i.e. every rotation matrix) is of this form.

Note that Q represents the transition matrix from the basis { q_1, ..., q_n } to the standard basis { e_1, ..., e_n }, so Lemma 5.7 implies that the standard inner product is preserved by left multiplication by Q. This can also be seen directly from the following calculation: for x, y ∈ R^n,
⟨Qx, Qy⟩ = (Qx)^T Qy = x^T Q^T Q y = x^T I y = x^T y = ⟨x, y⟩.
We collect these properties in the following lemma.

Lemma 5.11 (Properties of Orthogonal Matrices). Let Q be an n×n matrix. Then the following are equivalent:
(a) the column vectors of Q form an orthonormal basis for R^n;
(b) Q^T Q = I;
(c) Q is invertible and Q^T = Q^{-1};
(d) ⟨Qx, Qy⟩ = ⟨x, y⟩ for all x, y ∈ R^n;
(e) ‖Qx‖ = ‖x‖ for all x ∈ R^n.
When Q satisfies one of the above conditions, it is called an orthogonal matrix.

Proof. The equivalence of (a), (b) and (c) is clear. We have already seen that (b) implies (d). Now suppose (d) holds; substituting e_i and e_j in the formula, we get ⟨q_i, q_j⟩ = ⟨Qe_i, Qe_j⟩ = ⟨e_i, e_j⟩ = δ_ij, i.e. the columns of Q form an orthonormal set in R^n. Therefore (a) holds. Now consider the equivalence between (d) and (e). Clearly, (d) implies (e). On the other hand (c.f. eq. (1)),
⟨x, y⟩ = (‖x + y‖^2 − ‖x‖^2 − ‖y‖^2)/2.
Hence (e) implies (d), since
⟨Qx, Qy⟩ = (‖Q(x + y)‖^2 − ‖Qx‖^2 − ‖Qy‖^2)/2 = (‖x + y‖^2 − ‖x‖^2 − ‖y‖^2)/2 = ⟨x, y⟩.

Remark: Consider the set, called the orthogonal group of dimension n,
O(n) = { Q ∈ Mat_{n×n}(R) : Q is an orthogonal matrix }.
The above lemma says that O(n) is exactly the set of matrices such that multiplication by the matrix preserves the inner product (or, equivalently, preserves the norm).

Exercise. Recall the definition of a group (in Lecture 5, Remark after Definition 1.1). Show that O(n) is a group. In fact, O(1) = { ±1 } is a commutative group, but O(n) is non-commutative when n ≥ 2 (although the subgroup of rotations in O(2) is commutative). (c.f. Tutorial 3, Question 2)
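The equivalent conditions of Lemma 5.11 can be tried out on the rotation matrix of Example 5.10. A sketch (Python/NumPy, arbitrary angle), not part of the notes:

import numpy as np

theta = 0.7
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

print(np.allclose(Q.T @ Q, np.eye(2)))                        # (b)  Q^T Q = I
print(np.allclose(np.linalg.inv(Q), Q.T))                     # (c)  Q^{-1} = Q^T

x, y = np.array([1.0, 2.0]), np.array([-3.0, 0.5])
print(np.isclose((Q @ x) @ (Q @ y), x @ y))                   # (d)  <Qx, Qy> = <x, y>
print(np.isclose(np.linalg.norm(Q @ x), np.linalg.norm(x)))   # (e)  ||Qx|| = ||x||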

5.3. Permutation Matrix.

Let σ be a permutation of { 1, 2, ..., n } (c.f. Definition 2.1 in Lecture 4). In short, σ permutes the ordered tuple (1, 2, ..., n) to the tuple (σ(1), σ(2), ..., σ(n)) by sending the number i to the number σ(i). Define a matrix P_σ corresponding to σ by letting
P_σ = ( e_{σ(1)}  e_{σ(2)}  e_{σ(3)}  ⋯  e_{σ(n)} ).
The matrix P_σ is clearly an orthogonal matrix. Moreover, P_σ has the following properties: right multiplication by P_σ permutes the columns according to σ, i.e.
( a_1  a_2  ⋯  a_n ) P_σ = ( a_{σ(1)}  a_{σ(2)}  ⋯  a_{σ(n)} ),
and P_σ^{-1} = P_σ^T is the matrix whose rows are e_{σ(1)}^T, e_{σ(2)}^T, ..., e_{σ(n)}^T.

Let Q_σ = P_σ^T be the matrix obtained by permuting the rows according to σ. Then, for a matrix with rows b_1^T, b_2^T, ..., b_n^T, we have
(3) Q_σ ( b_1^T ; b_2^T ; ⋯ ; b_n^T ) = ( b_{σ(1)}^T ; b_{σ(2)}^T ; ⋯ ; b_{σ(n)}^T ).
Let τ be another permutation of { 1, 2, ..., n }; then σ∘τ is also a permutation. Moreover, it is easy to check by eq. (3) that Q_σ Q_τ = Q_{σ∘τ}.

Remark: Recall that the set S_n of all permutations of { 1, 2, ..., n } forms a group. The above equation means that the map S_n → O(n) defined by σ ↦ Q_σ is a group homomorphism.
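A permutation matrix as defined above can be built column by column from the standard basis; it is orthogonal, and right multiplication permutes columns. A sketch (Python/NumPy, with an arbitrary permutation σ written 0-indexed), not part of the notes:

import numpy as np

sigma = [2, 0, 1]                        # 0-indexed: sigma(1)=3, sigma(2)=1, sigma(3)=2
n = len(sigma)

# P_sigma = (e_{sigma(1)} ... e_{sigma(n)}): the j-th column is the basis vector e_{sigma(j)}
P = np.zeros((n, n))
for j, sj in enumerate(sigma):
    P[sj, j] = 1.0

print(np.allclose(P.T @ P, np.eye(n)))   # P_sigma is an orthogonal matrix

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])
print(A @ P)                             # columns of A permuted according to sigma
print(P.T @ A.T)                         # Q_sigma = P_sigma^T permutes rows, as in eq. (3)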

5.4. Projection to a subspace.

Definition 5.12. Let S be a subspace of an inner product space V and let x ∈ V. Let { u_1, ..., u_n } be an orthonormal basis for S. The projection of x onto S is defined to be
proj_S(x) = p = Σ_{i=1}^n c_i u_i   with   c_i = ⟨x, u_i⟩.

Theorem 5.13. Under the assumptions of Definition 5.12, x − p ⊥ S.

Proof. First, (x − p) ⊥ u_i for each i:
⟨x − p, u_i⟩ = ⟨x, u_i⟩ − ⟨p, u_i⟩ = c_i − ⟨Σ_j c_j u_j, u_i⟩ = c_i − Σ_j c_j δ_ji = c_i − c_i = 0.
If y ∈ S, then y = Σ_i α_i u_i. Hence
⟨x − p, y⟩ = ⟨x − p, Σ_i α_i u_i⟩ = Σ_i α_i ⟨x − p, u_i⟩ = 0.
Therefore x − p ⊥ S.

Remark: The projection proj_S(x) of x is characterised by the property in Theorem 5.13: it is the unique vector p in S such that x − p ⊥ S. In particular, proj_S(x) is independent of the choice of orthonormal basis for S. (Note that we will show in Section 6 that an orthonormal basis always exists.) On the other hand, proj_S(x) is also characterised by the fact that it is the element of S closest to x:

Theorem 5.14. Under the assumptions of Definition 5.12, p = proj_S(x) is the element of S that is closest to x, that is, ‖x − y‖ > ‖x − p‖ for any y ∈ S with y ≠ p.

Proof. If y ∈ S and y ≠ p, then
‖x − y‖^2 = ‖(x − p) + (p − y)‖^2 = ‖x − p‖^2 + ‖p − y‖^2   (since p − y ∈ S, and so (x − p) ⊥ (p − y))
> ‖x − p‖^2   (since p − y ≠ 0).

Now let us calculate the projection onto a subspace S in R^n.

Corollary 5.15. Let S be a nonzero subspace of R^n and let x ∈ R^n. If { u_1, u_2, ..., u_k } is an orthonormal basis for S and U = (u_1 u_2 ⋯ u_k), then the projection p of x onto S is given by proj_S(x) = U U^T x.

Proof. By definition, proj_S(x) = Uc with c = (c_1, c_2, ..., c_k)^T and c_i = ⟨x, u_i⟩ = u_i^T x. Hence c = U^T x and proj_S(x) = U U^T x.

5.5. Least Squares Problems.

Let A be an m×n matrix. Consider a system of linear equations Ax = b. For each x ∈ R^n, form the residual r(x) = b − Ax. If the equation Ax = b has a solution x̂, then r(x̂) = 0. In general, the equation may not have a solution. In this case, we ask for a vector x̂ such that the length of the residual ‖r(x)‖ is minimized. Such a vector x̂ is called a least squares solution of the system Ax = b.

Consider the space S spanned by the column vectors of A; clearly, S = { Ax : x ∈ R^n }. By the remark after Theorem 5.13, ‖r(x)‖ = ‖b − Ax‖ takes its minimal value exactly when Ax is the projection of b onto S, i.e. when b − Ax is orthogonal to all vectors in S. Therefore, for each y ∈ R^n, we get an equation:
0 = ⟨Ay, b − Ax⟩ = y^T (A^T b − A^T A x).
Since y is arbitrary, we get an equation called the normal equation:
A^T A x = A^T b.
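Corollary 5.15 and the normal equation can both be tried numerically. In the sketch below (Python/NumPy, made-up data, not from the notes), an orthonormal basis U of the column space of A is obtained from a QR factorization for convenience; any orthonormal basis would do. The least squares solution then solves A^T A x = A^T b, and A x̂ recovers exactly the projection of b onto the column space:

import numpy as np

A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])            # rank 2; columns span a plane S in R^3
b = np.array([1.0, 0.0, 2.0])

# Orthonormal basis of S = R(A), packed as columns of U (obtained here via QR)
U, _ = np.linalg.qr(A)

p = U @ (U.T @ b)                     # proj_S(b) = U U^T b   (Corollary 5.15)
print(np.allclose(A.T @ (b - p), 0))  # b - p is orthogonal to S

# Least squares: solve the normal equation A^T A x = A^T b (no explicit inverse needed)
x_hat = np.linalg.solve(A.T @ A, A.T @ b)
print(np.allclose(A @ x_hat, p))      # A x_hat is exactly the projection of b onto S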

Theorem 5.16. If A is an m×n matrix of rank n, then the normal equation A^T A x = A^T b has a unique solution x̂ = (A^T A)^{-1} A^T b.

Proof. It suffices to prove that A^T A is invertible. We claim that, for any matrix A, N(A) = N(A^T A). First, it is clear that N(A) ⊆ N(A^T A). On the other hand, if x ∈ N(A^T A), then ‖Ax‖^2 = x^T (A^T A x) = 0. But this implies Ax = 0, i.e. x ∈ N(A). This proves the claim. Now back to the proof of the theorem. We have rank(A) = n. Therefore N(A) = {0}, since null(A) = n − rank(A) = 0 by the rank-nullity theorem. By the above claim, we get N(A^T A) = {0}. Hence A^T A is invertible, since it is a square matrix.

Example 5.17. Given a table of data
x: x_1, x_2, ..., x_m
y: y_1, y_2, ..., y_m
find a quadratic function y = c_0 + c_1 x + c_2 x^2 that best fits the data in the least squares sense. This gives a system of equations A c = y with unknowns c_0, c_1 and c_2, where A is the m×3 matrix whose i-th row is (1, x_i, x_i^2), c = (c_0, c_1, c_2)^T and y = (y_1, y_2, ..., y_m)^T. Now one can use Theorem 5.16 to get the least squares fit.
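Example 5.17 in code: build the m×3 matrix with rows (1, x_i, x_i^2) and solve the normal equation from Theorem 5.16. The data points below are made up for illustration (Python/NumPy sketch, not from the notes):

import numpy as np

# Made-up data (x_i, y_i), roughly following y = 1 + 2x - x^2 plus small noise
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.0, 1.2, -1.9, -6.8])

# Coefficient matrix with rows (1, x_i, x_i^2)
A = np.column_stack([np.ones_like(x), x, x**2])

# Least squares fit: c_hat = (A^T A)^{-1} A^T y, computed by solving the normal equation
c_hat = np.linalg.solve(A.T @ A, A.T @ y)
print(c_hat)                      # approximately (c_0, c_1, c_2) = (1, 2, -1)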

6. Gram-Schmidt Orthogonalization Process

In this section we give an algorithm for constructing an orthonormal basis from an (ordinary) basis { x_1, x_2, ..., x_n }.

Theorem 6.1. Let { x_1, x_2, ..., x_n } be a basis for an inner product space V. Let
u_1 = (1/‖x_1‖) x_1
and define u_2, ..., u_n recursively by
u_{k+1} = (1/‖x_{k+1} − p_k‖) (x_{k+1} − p_k)   for k = 1, ..., n−1,
where
p_k = proj_{X_k}(x_{k+1}) = ⟨x_{k+1}, u_1⟩ u_1 + ⟨x_{k+1}, u_2⟩ u_2 + ⋯ + ⟨x_{k+1}, u_k⟩ u_k
is the projection of x_{k+1} onto X_k = Span{ u_1, u_2, ..., u_k }. Then the set { u_1, u_2, ..., u_n } is an orthonormal basis for V. Moreover,
X_k = Span{ u_1, u_2, ..., u_k } = Span{ x_1, x_2, ..., x_k }   for k = 1, ..., n.

Proof. We prove this inductively. Clearly, X_1 = Span{ u_1 } = Span{ x_1 } and u_1 is a unit vector, i.e. { u_1 } is an orthonormal basis of X_1. Now suppose that u_1, u_2, ..., u_k have been constructed so that { u_1, u_2, ..., u_k } is an orthonormal set and X_k = Span{ u_1, u_2, ..., u_k } = Span{ x_1, x_2, ..., x_k }. Since p_k ∈ Span{ u_1, u_2, ..., u_k } = Span{ x_1, x_2, ..., x_k }, it follows that x_{k+1} − p_k ∈ Span{ x_1, x_2, ..., x_{k+1} }. Since x_1, ..., x_{k+1} are linearly independent, it follows that x_{k+1} − p_k is nonzero. On the other hand, by Theorem 5.13, x_{k+1} − p_k ⊥ X_k, i.e. it is orthogonal to each u_i, 1 ≤ i ≤ k. Thus { u_1, u_2, ..., u_{k+1} } is an orthonormal set of vectors in Span{ x_1, x_2, ..., x_{k+1} }. Since u_1, ..., u_{k+1} are linearly independent, they form a basis for Span{ x_1, x_2, ..., x_{k+1} } and, consequently, X_{k+1} = Span{ u_1, u_2, ..., u_{k+1} } = Span{ x_1, x_2, ..., x_{k+1} }. It follows by mathematical induction that { u_1, u_2, ..., u_n } is an orthonormal basis of V = Span{ x_1, x_2, ..., x_n }.
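The recursion in Theorem 6.1 translates directly into code. A minimal sketch (Python/NumPy, standard inner product on R^n; the input columns are assumed linearly independent), not taken from the notes:

import numpy as np

def gram_schmidt(X):
    """Columns of X = (x_1 ... x_n) form a basis; return U whose columns are the
    orthonormal vectors u_1, ..., u_n produced by the process of Theorem 6.1."""
    X = np.asarray(X, dtype=float)
    U = np.zeros_like(X)
    for k in range(X.shape[1]):
        w = X[:, k].copy()
        for i in range(k):                       # subtract p_k, the projection onto Span{u_1,...,u_k}
            w -= np.dot(X[:, k], U[:, i]) * U[:, i]
        U[:, k] = w / np.linalg.norm(w)          # normalize x_{k+1} - p_k
    return U

X = np.array([[1.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
U = gram_schmidt(X)
print(np.allclose(U.T @ U, np.eye(3)))           # the columns of U are orthonormal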

Theorem 6.2 (Gram-Schmidt QR Factorization). If A is an m×n matrix of rank n, then A can be factored into a product QR, where Q is an m×n matrix with orthonormal column vectors and R is an n×n upper triangular matrix whose diagonal entries are all positive.

Proof. Let a_1, a_2, ..., a_n be the columns of A. Since rank(A) = n, { a_1, a_2, ..., a_n } forms a basis of the column space R(A) of A. Apply the Gram-Schmidt process to the basis { a_1, a_2, ..., a_n }. Let p_1, ..., p_{n−1} be the projection vectors and { q_1, q_2, ..., q_n } be the orthonormal basis derived from the process. Define
r_11 = ‖a_1‖,
r_kk = ‖a_k − p_{k−1}‖   for k = 2, ..., n,
r_ik = ⟨a_k, q_i⟩   for i = 1, ..., k−1 and k = 2, ..., n.
By the Gram-Schmidt process, we have
r_11 q_1 = a_1,
r_kk q_k = a_k − r_1k q_1 − r_2k q_2 − ⋯ − r_{k−1,k} q_{k−1}   for k = 2, ..., n.
This is equivalent to
a_1 = r_11 q_1,
a_2 = r_12 q_1 + r_22 q_2,
⋮
a_n = r_1n q_1 + r_2n q_2 + ⋯ + r_nn q_n.
Let Q = (q_1 q_2 ⋯ q_n) and let R be the upper triangular matrix whose (i, k)-th entry is r_ik for i ≤ k and 0 for i > k. Then we get QR = A.

Remark: In the factorization, R must be nonsingular; otherwise rank(A) = rank(QR) ≤ rank(R) < n, which contradicts rank(A) = n. Moreover, it is not hard to see that the QR decomposition is unique.

Here is an application of the QR Factorization.

Example 6.3. Let A be an m×n matrix of rank n. Then the least squares solution of Ax = b is given by x̂ = R^{-1} Q^T b, where Q and R are the matrices obtained from the factorization given in Theorem 6.2. This is clear by substituting A = QR and using the fact that Q^T Q = I_n:
x̂ = (A^T A)^{-1} A^T b = ((QR)^T (QR))^{-1} (QR)^T b = (R^T Q^T Q R)^{-1} R^T Q^T b = (R^T I_n R)^{-1} R^T Q^T b = R^{-1} (R^T)^{-1} R^T Q^T b = R^{-1} Q^T b.
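Example 6.3 in code: factor A = QR and solve R x̂ = Q^T b, which avoids forming A^T A. A sketch (Python/NumPy) reusing the data from the projection example above; numpy's built-in qr plays the role of the Gram-Schmidt factorization of Theorem 6.2, except that it may choose negative diagonal entries for R, which does not affect the formula:

import numpy as np

A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])
b = np.array([1.0, 0.0, 2.0])

# Reduced QR: Q is m x n with orthonormal columns, R is n x n upper triangular
Q, R = np.linalg.qr(A)

x_hat = np.linalg.solve(R, Q.T @ b)       # x_hat = R^{-1} Q^T b   (Example 6.3)

# Same answer as the normal equation (A^T A) x = A^T b from Theorem 5.16
print(np.allclose(x_hat, np.linalg.solve(A.T @ A, A.T @ b)))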
