Further Mathematical Methods (Linear Algebra)

Solutions For The Examination

Question 1.

(a) To be an inner product on a real vector space V, a function ⟨x, y⟩ which maps vectors x, y ∈ V to R must be such that:

i. Positivity: ⟨x, x⟩ ≥ 0, and ⟨x, x⟩ = 0 if and only if x = 0.
ii. Symmetry: ⟨x, y⟩ = ⟨y, x⟩.
iii. Linearity: ⟨αx + βy, z⟩ = α⟨x, z⟩ + β⟨y, z⟩.

for all vectors x, y, z ∈ V and all scalars α, β ∈ R.

(b) We need to show that the function defined by ⟨x, y⟩ = x^t A^t A y, where A is an invertible 3×3 matrix and x, y are any vectors in R^3, is an inner product on R^3. To do this we show that this formula satisfies all of the conditions given in part (a). Thus, taking any three vectors x, y and z in R^3 and any two scalars α and β in R, we have:

i. ⟨x, x⟩ = x^t A^t A x = (Ax)^t (Ax), and since Ax is itself a vector in R^3, say [a_1, a_2, a_3]^t with a_1, a_2, a_3 ∈ R, we have

⟨x, x⟩ = a_1^2 + a_2^2 + a_3^2,

which is the sum of the squares of three real numbers and as such is real and non-negative. Further, to show that ⟨x, x⟩ = 0 if and only if x = 0, we note that:

LTR: If ⟨x, x⟩ = 0 then a_1^2 + a_2^2 + a_3^2 = 0. But this is the sum of the squares of three real numbers, and so it must be the case that a_1 = a_2 = a_3 = 0. Thus Ax = 0, and since A is invertible this gives us x = 0 as the only solution.
RTL: If x = 0 then Ax = 0, and so clearly ⟨x, x⟩ = 0^t 0 = 0.

(as required).

ii. As ⟨x, y⟩ = x^t A^t A y is a real number, it is unaffected by transposition, and so we have:

⟨x, y⟩ = ⟨x, y⟩^t = (x^t A^t A y)^t = y^t A^t A x = ⟨y, x⟩.

iii. Taking the vector αx + βy ∈ R^3, we have:

⟨αx + βy, z⟩ = (αx + βy)^t A^t A z = (αx^t + βy^t) A^t A z = αx^t A^t A z + βy^t A^t A z = α⟨x, z⟩ + β⟨y, z⟩.

Consequently, the formula given above does define an inner product on R^3 (as required).

(c) We are given that A is the matrix

A = [1 1 0]
    [0 1 1]
    [1 0 1]

(Strictly speaking, the quantity given by x^t A^t A y is a 1×1 matrix; we treat it as a real number by identifying it with its single entry.)
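The three axioms in part (a) can be checked numerically for the formula ⟨x, y⟩ = x^t A^t A y. A minimal sketch in Python, using exact rational arithmetic and an arbitrarily chosen invertible matrix A (the numbers here are illustrative, not the question's data):

```python
from fractions import Fraction as F

def matvec(A, x):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(a * b for a, b in zip(row, x)) for row in A]

def ip(x, y, A):
    """Inner product <x, y> = (Ax) . (Ay) = x^t A^t A y."""
    Ax, Ay = matvec(A, x), matvec(A, y)
    return sum(p * q for p, q in zip(Ax, Ay))

# An illustrative invertible (upper-triangular, det = 1) matrix and test vectors.
A = [[F(1), F(1), F(0)],
     [F(0), F(1), F(1)],
     [F(0), F(0), F(1)]]
x, y, z = [F(1), F(2), F(3)], [F(-1), F(0), F(4)], [F(2), F(2), F(1)]
a, b = F(3), F(-2)

assert ip(x, x, A) > 0                           # positivity for x != 0
assert ip(x, y, A) == ip(y, x, A)                # symmetry
lhs = ip([a * u + b * v for u, v in zip(x, y)], z, A)
assert lhs == a * ip(x, z, A) + b * ip(y, z, A)  # linearity
```

Invertibility of A is what makes positivity definite: if A were singular, any nonzero x in its null space would have ⟨x, x⟩ = 0.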

So, taking an arbitrary vector x = [x_1, x_2, x_3]^t ∈ R^3, we can see that

Ax = [x_1 + x_2]
     [x_2 + x_3]
     [x_1 + x_3]

and so, using the inner product defined in (b), its norm will be given by

‖x‖^2 = ⟨x, x⟩ = (Ax)^t (Ax) = (x_1 + x_2)^2 + (x_2 + x_3)^2 + (x_1 + x_3)^2.

Thus a condition that must be satisfied by the components of this vector, if it is to have unit norm using the given inner product, is

(x_1 + x_2)^2 + (x_2 + x_3)^2 + (x_1 + x_3)^2 = 1.

Hence, to find a symmetric matrix B which expresses this condition in the form x^t B x = 1, we expand this out to get

2x_1^2 + 2x_2^2 + 2x_3^2 + 2x_1 x_2 + 2x_2 x_3 + 2x_1 x_3 = 1,

which can be written in matrix form as x^t B x = 1, where the required symmetric matrix is

B = [2 1 1]
    [1 2 1]
    [1 1 2]

Otherwise: B = A^t A.

(d) Since we are given the eigenvectors, we can calculate the eigenvalues using Bx = λx as follows:

For [1, −1, 0]^t we have B[1, −1, 0]^t = [1, −1, 0]^t, and so 1 is the eigenvalue corresponding to this eigenvector.
For [1, 1, −2]^t we have B[1, 1, −2]^t = [1, 1, −2]^t, and so 1 is the eigenvalue corresponding to this eigenvector.
For [1, 1, 1]^t we have B[1, 1, 1]^t = [4, 4, 4]^t = 4[1, 1, 1]^t, and so 4 is the eigenvalue corresponding to this eigenvector.

So, for an orthogonal matrix P, we need an orthonormal set of eigenvectors. But the eigenvectors are already mutually orthogonal, and so we only need to normalise them. Doing this we find that:

P = [ 1/√2   1/√6   1/√3]
    [−1/√2   1/√6   1/√3]
    [ 0     −2/√6   1/√3]

and the corresponding diagonal matrix D is D = diag(1, 1, 4), where P^t B P = D.

(e) To find a basis S of R^3 such that the coordinate vector of x with respect to this basis, i.e. [x]_S, is such that [x]_S^t C^t C [x]_S = 1, together with the associated diagonal matrix C, we note that since P^t B P = D we have B = P D P^t, and so

x^t B x = 1  ⟹  x^t P D P^t x = 1  ⟹  (P^t x)^t D (P^t x) = 1.

Now x can be written as

x = y_1 v_1 + y_2 v_2 + y_3 v_3 = P [y_1, y_2, y_3]^t = P [x]_S,

where the vectors in the set S = {v_1, v_2, v_3} are the column vectors of the matrix P. As such, since P is orthogonal, we have P^t x = [x]_S, i.e. P^t x is the coordinate vector of x with respect to the basis given by the column vectors of P. That is, if we let S be the basis of R^3 given by the set of vectors

S = { (1/√2)[1, −1, 0]^t, (1/√6)[1, 1, −2]^t, (1/√3)[1, 1, 1]^t },

we have [x]_S^t D [x]_S = 1. Now D is the diagonal matrix above, and so if we choose the diagonal matrix C to be C = diag(1, 1, 2), then C^t C = D and we have

[x]_S^t C^t C [x]_S = 1,

as required.
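The matrix B = A^t A and the eigenpairs used in part (d) can be checked mechanically. A minimal sketch, assuming A = [[1,1,0],[0,1,1],[1,0,1]] (the entries are taken as an assumption):

```python
from fractions import Fraction as F

def matmul(X, Y):
    """Multiply two matrices given as lists of rows."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def transpose(X):
    return [list(col) for col in zip(*X)]

A = [[F(1), F(1), F(0)],
     [F(0), F(1), F(1)],
     [F(1), F(0), F(1)]]
B = matmul(transpose(A), A)          # B = A^t A
assert B == [[2, 1, 1], [1, 2, 1], [1, 1, 2]]

# The three mutually orthogonal eigenvectors and their claimed eigenvalues.
eigenpairs = [([1, -1, 0], 1), ([1, 1, -2], 1), ([1, 1, 1], 4)]
for v, lam in eigenpairs:
    Bv = [sum(B[i][j] * v[j] for j in range(3)) for i in range(3)]
    assert Bv == [lam * vi for vi in v]   # B v = lambda v
```

Normalising the three eigenvectors then gives the columns of P directly.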

Question 2.

(a) The Leslie matrix for such a population involving three age classes (i.e. C_1, C_2, C_3) is:

L = [a_1  a_2  a_3]
    [b_1  0    0  ]
    [0    b_2  0  ]

Here the demographic parameters a_1, a_2, a_3, b_1 and b_2 have the following meanings: for i = 1, 2, 3, a_i is the average number of daughters born to a female in the ith age class; for i = 1, 2, b_i is the fraction of females in age class C_i expected to survive for the next twenty years and hence enter age class C_{i+1}.

It should be clear that, since the number of females entering the next age class is determined by the relevant b_i, in successive time periods we have:

x_2^(k) = b_1 x_1^(k−1) gives the number of females surviving long enough to go from C_1 to C_2, and as such the remaining (1 − b_1) x_1^(k−1) females in C_1 die;
x_3^(k) = b_2 x_2^(k−1) gives the number of females surviving long enough to go from C_2 to C_3, and as such the remaining (1 − b_2) x_2^(k−1) females in C_2 die;
whereas the x_3^(k−1) females in C_3 all die by the end of the (k−1)th time period.

As such, the number of different DNA samples that will be available for cloning at the end of the (k−1)th time period will be

(1 − b_1) x_1^(k−1) + (1 − b_2) x_2^(k−1) + x_3^(k−1).

Thus, since the re-birth afforded by cloning creates new members of the first age class, together with those created by natural means we have

x_1^(k) = a_1 x_1^(k−1) + a_2 x_2^(k−1) + a_3 x_3^(k−1)   [by birth]
        + (1 − b_1) x_1^(k−1) + (1 − b_2) x_2^(k−1) + x_3^(k−1)   [by cloning]
        = (1 + a_1 − b_1) x_1^(k−1) + (1 + a_2 − b_2) x_2^(k−1) + (1 + a_3) x_3^(k−1)

females in C_1 by the end of the kth time period. Consequently the Leslie matrix L, i.e. the matrix such that x^(k) = L x^(k−1), is given by

L = [1 + a_1 − b_1   1 + a_2 − b_2   1 + a_3]
    [b_1             0               0      ]
    [0               b_2             0      ]

as required. For the population in question, computing successive powers of the given numerical Leslie matrix L, we find that L^3 = (3/2) I.
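The cyclic behaviour found for the population in question can be illustrated directly: for a Leslie matrix whose only births occur in the final age class, L = [[0,0,a_3],[b_1,0,0],[0,b_2,0]], one finds L^3 = a_3 b_1 b_2 I. A sketch with illustrative parameters chosen so that a_3 b_1 b_2 = 3/2 (these numbers are assumptions, not the question's data):

```python
from fractions import Fraction as F

def matmul(X, Y):
    """Multiply two matrices given as lists of rows."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

# Leslie matrix with births only in the last age class (illustrative numbers).
a3, b1, b2 = F(3), F(1, 2), F(1)
L = [[F(0), F(0), a3],
     [b1,   F(0), F(0)],
     [F(0), b2,   F(0)]]

L3 = matmul(L, matmul(L, L))
growth = a3 * b1 * b2                # = 3/2
assert L3 == [[growth, 0, 0], [0, growth, 0], [0, 0, growth]]
# So x^(3j) = (3/2)^j x^(0): the age distribution repeats every three time
# steps, with 50% more females in each class per cycle.
```

The scalar matrix L^3 is exactly what produces the sixty-year cycle described below: three twenty-year steps return every age class to its original proportion, scaled by the common factor a_3 b_1 b_2.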

As such, we can see that after j sixty-year periods (i.e. where k = 3j) the population distribution vector will be given by

x^(3j) = (3/2)^j x^(0),

and so we can see that:

Every sixty years, the proportion of the population in each age class is the same as it was at the beginning.
Every sixty years, the number of females in each age class changes by a factor of 3/2 (i.e. increases by 50%).

As such, we can see that overall the population of females is increasing in a cyclic manner with a period of sixty years.

(b) The steady states of the given coupled non-linear differential equations ẏ_1 = f_1(y_1, y_2), ẏ_2 = f_2(y_1, y_2) are the solutions of the simultaneous equations f_1 = f_2 = 0. Factorising, these are (0, 0), the two steady states in which one of the two populations is absent, and the coexistence steady state, found by solving the pair of linear simultaneous equations in y_1 and y_2 that remain when y_1 ≠ 0 and y_2 ≠ 0.

In order to establish the asymptotic stability of the steady state in question, we find the Jacobian matrix for this system of differential equations,

DF[y] = [∂f_1/∂y_1   ∂f_1/∂y_2]
        [∂f_2/∂y_1   ∂f_2/∂y_2]

and evaluate it at the relevant steady state. The eigenvalues of the resulting matrix are −4 and −5, and since these are both real and negative, this steady state is asymptotically stable.
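The stability test used in part (b) can be automated: evaluate the Jacobian at a steady state and check the eigenvalues. For a real 2×2 matrix, both eigenvalues have negative real part exactly when its trace is negative and its determinant is positive. A sketch for a hypothetical competing-species system of the same form (the coefficients are assumptions, not the question's data):

```python
def jacobian(f, y, h=1e-6):
    """Numerical Jacobian of f: R^2 -> R^2 at y, by central differences."""
    J = [[0.0, 0.0], [0.0, 0.0]]
    for j in range(2):
        yp, ym = list(y), list(y)
        yp[j] += h
        ym[j] -= h
        fp, fm = f(yp), f(ym)
        for i in range(2):
            J[i][j] = (fp[i] - fm[i]) / (2 * h)
    return J

# Illustrative system: y1' = y1(3 - 2y1 - y2), y2' = y2(2 - y1 - y2).
def f(y):
    y1, y2 = y
    return [y1 * (3 - 2 * y1 - y2), y2 * (2 - y1 - y2)]

# Coexistence steady state: 2y1 + y2 = 3 and y1 + y2 = 2 give (1, 1).
J = jacobian(f, [1.0, 1.0])
tr = J[0][0] + J[1][1]
det = J[0][0] * J[1][1] - J[0][1] * J[1][0]

# trace < 0 and det > 0, so both eigenvalues have negative real part
# and (1, 1) is asymptotically stable.
assert tr < 0 and det > 0
```

For this illustrative system the Jacobian at (1, 1) is [[−2, −1], [−1, −1]], so the trace/determinant test and the eigenvalue test agree.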

Question 3.

(a) For a non-empty subset W of V to be a subspace of V, we require that for all vectors x, y ∈ W and all scalars α ∈ R:

i. Closure under vector addition: x + y ∈ W.
ii. Closure under scalar multiplication: αx ∈ W.

(b) The sum of two subspaces Y and Z of a vector space V, denoted by Y + Z, is defined to be the set

Y + Z = {y + z : y ∈ Y and z ∈ Z}.

To show that Y + Z is a subspace of V, we note that as Y and Z are both subspaces of V, the additive identity of V (say, the vector 0) is in both Y and Z. As such, the vector 0 = 0 + 0 ∈ Y + Z, and so this set is non-empty. Referring to part (a), we consider two general vectors in Y + Z, say

x_1 = y_1 + z_1, where y_1 ∈ Y and z_1 ∈ Z,
x_2 = y_2 + z_2, where y_2 ∈ Y and z_2 ∈ Z,

and any scalar α ∈ R, to see that:

Y + Z is closed under vector addition, since

x_1 + x_2 = (y_1 + z_1) + (y_2 + z_2) = (y_1 + y_2) + (z_1 + z_2),

which is a vector in Y + Z since y_1 + y_2 ∈ Y and z_1 + z_2 ∈ Z (as Y and Z are closed under vector addition).

Y + Z is closed under scalar multiplication, since

αx_1 = α(y_1 + z_1) = αy_1 + αz_1,

which is a vector in Y + Z since αy_1 ∈ Y and αz_1 ∈ Z (as Y and Z are closed under scalar multiplication).

Consequently, as Y + Z is non-empty and closed under vector addition and scalar multiplication, it is a subspace of V (as required).

(c) For a real vector space V, we say that V is the direct sum of two of its subspaces (say Y and Z), denoted by V = Y ⊕ Z, if V = Y + Z and every vector v ∈ V can be written uniquely in terms of vectors in Y and vectors in Z, i.e. we can write

v = y + z

for vectors y ∈ Y and z ∈ Z in exactly one way. To prove that

the sum of Y and Z is direct iff Y ∩ Z = {0},

we note that:

LTR: We are given that the sum of Y and Z is direct. Now it is clear that we can write 0 ∈ Y + Z as 0 = 0 + 0, since Y + Z, Y and Z are all subspaces. Given any vector u ∈ Y ∩ Z, we have u ∈ Y, u ∈ Z and −u ∈ Z (as Z is a subspace). As such, we can also write 0 ∈ Y + Z as 0 = u + (−u).

But as the sum Y + Z is direct, by uniqueness we can write 0 ∈ Y + Z in only one way using a vector in Y and a vector in Z, i.e. we must have u = 0. Consequently, we have Y ∩ Z = {0} (as required).

RTL: We are given Y ∩ Z = {0}, and we note that any vector x ∈ Y + Z can be written as x = y + z in terms of a vector y ∈ Y and a vector z ∈ Z. In order to establish that this sum is direct, we have to show that this representation of x is unique. To do this, suppose that there are two ways of writing such a vector in terms of a vector in Y and a vector in Z, i.e.

x = y_1 + z_1 = y_2 + z_2, for y_1, y_2 ∈ Y and z_1, z_2 ∈ Z.

As such, we have y_1 − y_2 = z_2 − z_1, where y_1 − y_2 ∈ Y and z_2 − z_1 ∈ Z; that is, using the assumption above, we have

y_1 − y_2 = z_2 − z_1 ∈ Y ∩ Z = {0}.

Consequently, we can see that y_1 − y_2 = z_2 − z_1 = 0, i.e. y_1 = y_2 and z_1 = z_2, and as such the representation is unique (as required).

(d) We are given that Y and Z are subspaces of the vector space V with bases given by {y_1, ..., y_r} and {z_1, ..., z_s} respectively. So, in order to prove that

V = Y ⊕ Z iff {y_1, ..., y_r, z_1, ..., z_s} is a basis of V,

we have to prove it both ways, i.e.

LTR: Given that V = Y ⊕ Z, we know that every vector in V can be written uniquely in terms of a vector in Y and a vector in Z. But every vector in Y can be written uniquely as a linear combination of the vectors in {y_1, ..., y_r} (as it is a basis of Y), and every vector in Z can be written uniquely as a linear combination of the vectors in {z_1, ..., z_s} (as it is a basis of Z). Thus every vector in V can be written uniquely as a linear combination of vectors in the set {y_1, ..., y_r, z_1, ..., z_s}, i.e. this set is a basis of V (as required).

RTL: Given that {y_1, ..., y_r, z_1, ..., z_s} is a basis of V, we need to establish that V = Y ⊕ Z, and to do this we start by noting that

V = Lin{y_1, ..., y_r, z_1, ..., z_s}
  = {α_1 y_1 + ... + α_r y_r + β_1 z_1 + ... + β_s z_s : α_1, ..., α_r, β_1, ..., β_s ∈ R}
  = {y + z : y ∈ Y and z ∈ Z}
  = Y + Z.

As such, we only need to establish that the sum is direct, and there are two ways of doing this:

Method 1 (short): We show that every vector in V can be written uniquely in terms of vectors in Y and vectors in Z. Clearly, as {y_1, ..., y_r, z_1, ..., z_s} is a basis of V, every vector in V can be written uniquely as a linear combination of these vectors. As such, every vector in V can be written uniquely as the sum of a vector in Y (a unique linear combination of the vectors in {y_1, ..., y_r}, a basis of Y) and a vector in Z (a unique linear combination of the vectors in {z_1, ..., z_s}, a basis of Z), as required.

Method 2 (longer): Using part (c), we show that Y ∩ Z = {0}. Take any vector u ∈ Y ∩ Z, i.e. u ∈ Y and u ∈ Z, and so we can write this vector as

u = Σ_{i=1}^{r} α_i y_i  and  u = Σ_{i=1}^{s} β_i z_i.

But since these two linear combinations of vectors represent the same vector, we can equate them. So, doing this and rearranging, we get

α_1 y_1 + ... + α_r y_r − β_1 z_1 − ... − β_s z_s = 0.

But since this vector equation involves the vectors which form a basis of V, these vectors are linearly independent, and as such the only solution to this vector equation is the trivial solution, i.e.

α_1 = ... = α_r = β_1 = ... = β_s = 0.

Consequently, the vector u considered above must be 0, and so Y ∩ Z = {0} (as required).

Hence we are asked to establish that if V = Y ⊕ Z, then dim(V) = dim(Y) + dim(Z). But this is obvious, since using the bases given above we have

dim(V) = r + s = dim(Y) + dim(Z),

as required.
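The result in (d) can be illustrated concretely in R^3: if a basis of Y together with a basis of Z forms a basis of R^3 (equivalently, the matrix with those vectors as columns is invertible), then R^3 = Y ⊕ Z and the dimensions add. A small sketch with assumed example subspaces:

```python
def det3(M):
    """Determinant of a 3x3 matrix by cofactor expansion along the first row."""
    a, b, c = M[0]
    d, e, f = M[1]
    g, h, i = M[2]
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

# Y = Lin{y1} and Z = Lin{z1, z2} -- illustrative subspaces of R^3.
y1 = [1, 1, 0]
z1, z2 = [0, 1, 1], [1, 0, 1]

# Columns of M are the combined vectors {y1, z1, z2}.
M = [[y1[i], z1[i], z2[i]] for i in range(3)]
assert det3(M) != 0      # the combined set is a basis of R^3
# Hence R^3 = Y (+) Z and dim R^3 = 3 = 1 + 2 = dim Y + dim Z.
```

A nonzero determinant certifies linear independence of the three combined vectors, which is exactly the basis condition in the iff statement.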

Question 4.

(a) Let A be a real square matrix, where x is an eigenvector of A with a corresponding eigenvalue of λ, i.e. Ax = λx. We are asked to prove that:

i. x is an eigenvector of the identity matrix I with a corresponding eigenvalue of one.
Proof: Clearly, as x is an eigenvector, x ≠ 0, and so we have Ix = x = 1·x. Thus x is an eigenvector of the identity matrix I with a corresponding eigenvalue of one, as required.

ii. x is an eigenvector of A + I with a corresponding eigenvalue of λ + 1.
Proof: Clearly, using the information about A and I given above, we have

(A + I)x = Ax + Ix = λx + x = (λ + 1)x.

Thus x is an eigenvector of A + I with a corresponding eigenvalue of λ + 1, as required.

iii. x is an eigenvector of A^2 with a corresponding eigenvalue of λ^2.
Proof: Clearly, using the information about A given above, we have

A^2 x = A(λx) = λ(Ax) = λ(λx) = λ^2 x.

Thus x is an eigenvector of A^2 with a corresponding eigenvalue of λ^2, as required.

As such, if the matrix A has eigenvalues λ_i and corresponding eigenvectors x_i, then for k ∈ N it should be clear that the matrix A^k has eigenvalues λ_i^k and corresponding eigenvectors x_i.

(b) We are given that the n×n invertible matrix A has a spectral decomposition given by

A = Σ_{i=1}^{n} λ_i x_i x_i^t,

where, for 1 ≤ i ≤ n, the x_i are an orthonormal set of eigenvectors corresponding to the eigenvalues λ_i of A. So, using (iii), it should be clear that

A^2 = Σ_{i=1}^{n} λ_i^2 x_i x_i^t,

and for k ∈ N we can see that

A^k = Σ_{i=1}^{n} λ_i^k x_i x_i^t.

(c) We are given that the eigenvalues of the matrix A are all real and such that |λ_i| < 1 for 1 ≤ i ≤ n. As such, we can re-write the definition of the natural logarithm of the matrix I + A as follows:

ln(I + A) = Σ_{k=1}^{∞} ((−1)^{k+1}/k) A^k
          = Σ_{k=1}^{∞} [ ((−1)^{k+1}/k) Σ_{i=1}^{n} λ_i^k x_i x_i^t ]
          = Σ_{i=1}^{n} [ Σ_{k=1}^{∞} ((−1)^{k+1}/k) λ_i^k ] x_i x_i^t
          = Σ_{i=1}^{n} ln(1 + λ_i) x_i x_i^t,
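The identity A^k = Σ λ_i^k x_i x_i^t from part (b) can be checked exactly on a small symmetric matrix. Here A = [[0, 1/2], [1/2, 0]] is an assumed example; its orthonormal eigenvectors (1,1)/√2 and (1,−1)/√2 have eigenvalues ±1/2, and their projectors x_i x_i^t happen to be rational:

```python
from fractions import Fraction as F

def matmul(X, Y):
    """Multiply two 2x2 matrices given as lists of rows."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[F(0), F(1, 2)], [F(1, 2), F(0)]]   # assumed example, eigenvalues +-1/2

# Projectors x_i x_i^t for the normalised eigenvectors (1,1)/sqrt(2), (1,-1)/sqrt(2).
P1 = [[F(1, 2), F(1, 2)], [F(1, 2), F(1, 2)]]
P2 = [[F(1, 2), F(-1, 2)], [F(-1, 2), F(1, 2)]]
lams = [F(1, 2), F(-1, 2)]

Ak = [[F(1), F(0)], [F(0), F(1)]]        # A^0 = I
for k in range(1, 6):
    Ak = matmul(Ak, A)                   # A^k by repeated multiplication
    spectral = [[lams[0] ** k * P1[i][j] + lams[1] ** k * P2[i][j]
                 for j in range(2)] for i in range(2)]
    assert Ak == spectral                # A^k = sum_i lambda_i^k x_i x_i^t
```

The check works because the projectors satisfy P_i P_j = 0 for i ≠ j and P_i^2 = P_i, which is what makes every power of A diagonalise term by term.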

where we have used the Taylor series for ln(1 + x) to establish that, for 1 ≤ i ≤ n,

Σ_{k=1}^{∞} ((−1)^{k+1}/k) λ_i^k = ln(1 + λ_i),

since it was stipulated that the eigenvalues are real and such that |λ_i| < 1.

(d) To find the spectral decomposition of the given matrix A, we need to find an orthonormal set of eigenvectors and the corresponding eigenvalues. For the eigenvalues we solve the equation det(A − λI) = 0; the characteristic polynomial factorises, and so the eigenvalues are 1/2, 0 and −1/2. The eigenvectors that correspond to these eigenvalues are then given by solving the matrix equation (A − λI)x = 0 for each eigenvalue in turn; doing this, we obtain three mutually orthogonal eigenvectors x_1, x_2 and x_3 corresponding to the eigenvalues 1/2, −1/2 and 0 respectively.

So, to find the spectral decomposition of A, we need an orthonormal set of eigenvectors, i.e. (since the eigenvectors are already mutually orthogonal) we normalise them and substitute into the expression in (b). Thus, since the λ = 0 term vanishes, we have

A = (1/2) x_1 x_1^t − (1/2) x_2 x_2^t

as the spectral decomposition of A. Consequently, using part (c), we can see that ln(I + A) is given by the matrix

ln(I + A) = ln(3/2) x_1 x_1^t + ln(1/2) x_2 x_2^t = ln(3/2) x_1 x_1^t − (ln 2) x_2 x_2^t,

since ln(1/2) = −ln 2.
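The two expressions for ln(I + A) — the defining power series and the spectral form from part (c) — can be compared numerically on a small symmetric matrix whose eigenvalues lie in (−1, 1). Here A = [[0, 1/4], [1/4, 0]] is an assumed example with eigenvalues ±1/4:

```python
import math

def matmul(X, Y):
    """Multiply two 2x2 matrices given as lists of rows."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[0.0, 0.25], [0.25, 0.0]]           # assumed example, eigenvalues +-1/4

# ln(I + A) via the power series sum_{k>=1} (-1)^(k+1) A^k / k.
series = [[0.0, 0.0], [0.0, 0.0]]
Ak = [[1.0, 0.0], [0.0, 1.0]]            # A^0 = I
for k in range(1, 60):
    Ak = matmul(Ak, A)
    sign = 1.0 if k % 2 == 1 else -1.0
    for i in range(2):
        for j in range(2):
            series[i][j] += sign * Ak[i][j] / k

# ln(I + A) via the spectral form sum_i ln(1 + lambda_i) x_i x_i^t, using the
# projectors onto the eigenvectors (1,1)/sqrt(2) and (1,-1)/sqrt(2).
P1 = [[0.5, 0.5], [0.5, 0.5]]
P2 = [[0.5, -0.5], [-0.5, 0.5]]
spectral = [[math.log(1.25) * P1[i][j] + math.log(0.75) * P2[i][j]
             for j in range(2)] for i in range(2)]

for i in range(2):
    for j in range(2):
        assert abs(series[i][j] - spectral[i][j]) < 1e-12
```

The series converges because |λ_i| < 1; for |λ_i| = 1/4 the terms shrink geometrically, so 60 terms are far more than enough for the tolerance used.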

Question 5.

(a) A strong generalised inverse of an m×n matrix A is any n×m matrix A^G which is such that:

1. A A^G A = A.
2. A^G A A^G = A^G.
3. A A^G orthogonally projects R^m onto R(A).
4. A^G A orthogonally projects R^n parallel to N(A).

(b) Given that the matrix equation Ax = b represents a set of m inconsistent equations in n variables, we can see that any vector of the form

x = A^G b + (I − A^G A) w, with w ∈ R^n,

is a solution to the least squares fit problem associated with this set of linear equations, since

Ax = A A^G b + (A − A A^G A) w = A A^G b + (A − A) w = A A^G b,

using the first property of A^G given in (a). As such, using the third property of A^G given in (a), we can see that Ax is equal to A A^G b, the orthogonal projection of b onto R(A). But by definition a least squares analysis of an inconsistent set of equations Ax = b minimises the distance ‖Ax − b‖ between the vector b and R(A), which is exactly what this orthogonal projection does. Thus the given vector is such a solution.

(c) To find a strong generalised inverse of the matrix

A = [1 1 0]
    [1 0 1]
    [1 1 0]

we use the given method. This is done by noting that the first column vector of A is linearly dependent on the other two, since

[1, 1, 1]^t = [1, 0, 1]^t + [0, 1, 0]^t,

and so the matrix A is of rank 2 (as the other two column vectors are linearly independent). Thus, taking k = 2, the matrices B and C with A = BC are given by

B = [1 0]        C = [1 1 0]
    [0 1]            [1 0 1]
    [1 0]

respectively. So, to find the strong generalised inverse, we note that:

B^t B = [2 0]  ⟹  (B^t B)^{-1} = [1/2 0]
        [0 1]                      [0   1]

and

C C^t = [2 1]  ⟹  (C C^t)^{-1} = (1/3) [ 2 −1]
        [1 2]                           [−1  2]

and we have

(B^t B)^{-1} B^t = [1/2 0 1/2]      C^t (C C^t)^{-1} = (1/3) [ 1  1]
                   [0   1 0  ]                               [ 2 −1]
                                                             [−1  2]

so that

A^G = C^t (C C^t)^{-1} (B^t B)^{-1} B^t = (1/6) [ 1  2  1]
                                                [ 2 −2  2]
                                                [−1  4 −1]

which is the sought-after strong generalised inverse of A. So, to find the set of all solutions to the least squares fit problem associated with the system of linear equations given by

x + y = b_1
x + z = b_2
x + y = b_3

(the first and third equations share a left-hand side, so the system is inconsistent whenever b_1 ≠ b_3), we note that these equations can be written in the form Ax = b with

A as above,  x = [x, y, z]^t  and  b = [b_1, b_2, b_3]^t.

Thus, as

A^G A = (1/3) [2  1  1]        A^G A − I = (1/3) [−1  1  1]
              [1  2 −1]                          [ 1 −1 −1]
              [1 −1  2]                          [ 1 −1 −1]

we see that (I − A^G A)w is always a multiple of [1, −1, −1]^t. So the solutions of this least squares problem are given by

x = A^G b + λ [1, −1, −1]^t for any λ ∈ R.

We know from parts (a) and (b) that the (A^G A − I)w part of our solutions will yield a vector in N(A), and so the solution set is the translate of N(A) by A^G b. Further, by the rank–nullity theorem, we have η(A) = 3 − ρ(A) = 3 − 2 = 1, and so N(A) will be a line through the origin. Indeed, the vector equation of the line representing the solution set is

x = A^G b + λ [1, −1, −1]^t, where λ ∈ R.
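The construction A^G = C^t (C C^t)^{-1} (B^t B)^{-1} B^t can be verified mechanically against the first two defining properties in (a). A sketch for an assumed rank-2 factorisation A = BC with B = [[1,0],[0,1],[1,0]] and C = [[1,1,0],[1,0,1]], using exact rational arithmetic:

```python
from fractions import Fraction as F

def matmul(X, Y):
    """Multiply two matrices given as lists of rows."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def transpose(X):
    return [list(c) for c in zip(*X)]

def inv2(M):
    """Inverse of a 2x2 matrix."""
    a, b = M[0]
    c, d = M[1]
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

B = [[F(1), F(0)], [F(0), F(1)], [F(1), F(0)]]
C = [[F(1), F(1), F(0)], [F(1), F(0), F(1)]]
A = matmul(B, C)                          # rank-2 matrix A = BC

Bt, Ct = transpose(B), transpose(C)
AG = matmul(matmul(Ct, inv2(matmul(C, Ct))),
            matmul(inv2(matmul(Bt, B)), Bt))

assert matmul(matmul(A, AG), A) == A      # A A^G A = A
assert matmul(matmul(AG, A), AG) == AG    # A^G A A^G = A^G
```

The verification succeeds for any rank factorisation A = BC of this shape, since (B^t B)^{-1} B^t is a left inverse of B and C^t (C C^t)^{-1} is a right inverse of C.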

Question 6.

(a) To test the set of functions {1, x, x^2} for linear independence, we calculate the Wronskian as instructed, i.e.

W(x) = det [f_1   f_2   f_3 ]  = det [1  x  x^2]  = 2,
           [f_1′  f_2′  f_3′]        [0  1  2x ]
           [f_1″  f_2″  f_3″]        [0  0  2  ]

and as W(x) ≠ 0 the set is linearly independent (as required).

(b) We consider the inner product space formed by the vector space P[−1, 1] and the inner product

⟨f(x), g(x)⟩ = ∫_{−1}^{1} f(x) g(x) dx.

To find an orthonormal basis of the space Lin{1, x, x^2}, we use the Gram–Schmidt procedure:

We start with the vector 1 and note that

⟨1, 1⟩ = ∫_{−1}^{1} dx = [x]_{−1}^{1} = 2.

Consequently, we set e_1 = 1/√2.

We need to find a vector u_2, where u_2 = x − ⟨x, e_1⟩ e_1. But as

⟨x, 1⟩ = ∫_{−1}^{1} x dx = [x^2/2]_{−1}^{1} = 0,

we have u_2 = x. Now, as

⟨x, x⟩ = ∫_{−1}^{1} x^2 dx = [x^3/3]_{−1}^{1} = 2/3,

we set e_2 = √(3/2) x.

Lastly, we need to find a vector u_3, where u_3 = x^2 − ⟨x^2, e_1⟩ e_1 − ⟨x^2, e_2⟩ e_2. But as

⟨x^2, x⟩ = ∫_{−1}^{1} x^3 dx = 0  and  ⟨x^2, 1⟩ = ∫_{−1}^{1} x^2 dx = 2/3,

we have u_3 = x^2 − (1/2)(2/3) = x^2 − 1/3 = (3x^2 − 1)/3. Now, as

⟨u_3, u_3⟩ = ∫_{−1}^{1} (x^2 − 1/3)^2 dx = (1/9) ∫_{−1}^{1} (9x^4 − 6x^2 + 1) dx = (1/9) [ (9/5)x^5 − 2x^3 + x ]_{−1}^{1} = 8/45,

we set e_3 = √(45/8) (x^2 − 1/3) = √(5/8) (3x^2 − 1).

Consequently, the set

{ 1/√2, √(3/2) x, √(5/8)(3x^2 − 1) }

is an orthonormal basis for the space Lin{1, x, x^2}.

(c) We are given that {e_1, e_2, ..., e_k} is an orthonormal basis of a subspace S of an inner product space V. Extending this to an orthonormal basis {e_1, e_2, ..., e_k, e_{k+1}, ..., e_n} of V, we note that for any vector x ∈ V,

x = Σ_{i=1}^{n} α_i e_i.

Now for any j (where 1 ≤ j ≤ n) we have

⟨x, e_j⟩ = Σ_{i=1}^{n} α_i ⟨e_i, e_j⟩ = α_j,

since we are using an orthonormal basis. Thus we can write

x = Σ_{i=1}^{n} ⟨x, e_i⟩ e_i = Σ_{i=1}^{k} ⟨x, e_i⟩ e_i [in S] + Σ_{i=k+1}^{n} ⟨x, e_i⟩ e_i [in S⊥],

and so the orthogonal projection of V onto S [parallel to S⊥] is given by

Px = Σ_{i=1}^{k} ⟨x, e_i⟩ e_i

for any x ∈ V (as required).

(d) Using the result in (c), it should be clear that a least squares approximation to x^4 in Lin{1, x, x^2} will be given by Px^4. So, using the inner product in (b) and the orthonormal basis for Lin{1, x, x^2} which we found there, we have

Px^4 = ⟨x^4, e_1⟩ e_1 + ⟨x^4, e_2⟩ e_2 + ⟨x^4, e_3⟩ e_3,

where, since

∫_{−1}^{1} x^4 dx = [x^5/5]_{−1}^{1} = 2/5,  ∫_{−1}^{1} x^5 dx = 0  and  ∫_{−1}^{1} x^6 dx = [x^7/7]_{−1}^{1} = 2/7,

we get

⟨x^4, e_1⟩ e_1 = (1/2)(2/5) = 1/5,
⟨x^4, e_2⟩ e_2 = 0,
⟨x^4, e_3⟩ e_3 = (5/8) ⟨x^4, 3x^2 − 1⟩ (3x^2 − 1) = (5/8)(6/7 − 2/5)(3x^2 − 1) = (2/7)(3x^2 − 1).

Thus our least squares approximation to x^4 is

Px^4 = 1/5 + (2/7)(3x^2 − 1) = (6/7)x^2 − 3/35 = (3/35)(10x^2 − 1).
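The projection in (d) can be reproduced exactly with rational arithmetic: represent polynomials by coefficient lists, integrate products over [−1, 1], and project x^4 onto the orthogonal (not yet normalised) Gram–Schmidt basis {1, x, x^2 − 1/3} via Pf = Σ (⟨f, u_i⟩/⟨u_i, u_i⟩) u_i. A sketch:

```python
from fractions import Fraction as F

def ip(p, q):
    """<p, q> = integral over [-1, 1] of p(x)q(x) dx; coefficients low -> high."""
    total = F(0)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            n = i + j
            if n % 2 == 0:                   # odd powers integrate to zero
                total += a * b * F(2, n + 1)  # int_{-1}^{1} x^n dx = 2/(n+1)
    return total

# Orthogonal basis of Lin{1, x, x^2} from Gram-Schmidt: {1, x, x^2 - 1/3}.
u1, u2, u3 = [F(1)], [F(0), F(1)], [F(-1, 3), F(0), F(1)]
f = [F(0)] * 4 + [F(1)]                      # f(x) = x^4

proj = [F(0), F(0), F(0)]
for u in (u1, u2, u3):
    c = ip(f, u) / ip(u, u)                  # Fourier coefficient <f,u>/<u,u>
    for i, a in enumerate(u):
        proj[i] += c * a

assert proj == [F(-3, 35), F(0), F(6, 7)]    # Px^4 = (6/7)x^2 - 3/35
```

Using the unnormalised basis avoids square roots, so the whole computation stays in exact fractions; dividing by ⟨u_i, u_i⟩ plays the role of normalisation.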


More information

Inner products. Theorem (basic properties): Given vectors u, v, w in an inner product space V, and a scalar k, the following properties hold:

Inner products. Theorem (basic properties): Given vectors u, v, w in an inner product space V, and a scalar k, the following properties hold: Inner products Definition: An inner product on a real vector space V is an operation (function) that assigns to each pair of vectors ( u, v) in V a scalar u, v satisfying the following axioms: 1. u, v

More information

Eigenvalues and Eigenvectors

Eigenvalues and Eigenvectors /88 Chia-Ping Chen Department of Computer Science and Engineering National Sun Yat-sen University Linear Algebra Eigenvalue Problem /88 Eigenvalue Equation By definition, the eigenvalue equation for matrix

More information

1. Diagonalize the matrix A if possible, that is, find an invertible matrix P and a diagonal

1. Diagonalize the matrix A if possible, that is, find an invertible matrix P and a diagonal . Diagonalize the matrix A if possible, that is, find an invertible matrix P and a diagonal 3 9 matrix D such that A = P DP, for A =. 3 4 3 (a) P = 4, D =. 3 (b) P = 4, D =. (c) P = 4 8 4, D =. 3 (d) P

More information

Math 314/ Exam 2 Blue Exam Solutions December 4, 2008 Instructor: Dr. S. Cooper. Name:

Math 314/ Exam 2 Blue Exam Solutions December 4, 2008 Instructor: Dr. S. Cooper. Name: Math 34/84 - Exam Blue Exam Solutions December 4, 8 Instructor: Dr. S. Cooper Name: Read each question carefully. Be sure to show all of your work and not just your final conclusion. You may not use your

More information

10-701/ Recitation : Linear Algebra Review (based on notes written by Jing Xiang)

10-701/ Recitation : Linear Algebra Review (based on notes written by Jing Xiang) 10-701/15-781 Recitation : Linear Algebra Review (based on notes written by Jing Xiang) Manojit Nandi February 1, 2014 Outline Linear Algebra General Properties Matrix Operations Inner Products and Orthogonal

More information

5.) For each of the given sets of vectors, determine whether or not the set spans R 3. Give reasons for your answers.

5.) For each of the given sets of vectors, determine whether or not the set spans R 3. Give reasons for your answers. Linear Algebra - Test File - Spring Test # For problems - consider the following system of equations. x + y - z = x + y + 4z = x + y + 6z =.) Solve the system without using your calculator..) Find the

More information

Math 350 Fall 2011 Notes about inner product spaces. In this notes we state and prove some important properties of inner product spaces.

Math 350 Fall 2011 Notes about inner product spaces. In this notes we state and prove some important properties of inner product spaces. Math 350 Fall 2011 Notes about inner product spaces In this notes we state and prove some important properties of inner product spaces. First, recall the dot product on R n : if x, y R n, say x = (x 1,...,

More information

Mathematical foundations - linear algebra

Mathematical foundations - linear algebra Mathematical foundations - linear algebra Andrea Passerini passerini@disi.unitn.it Machine Learning Vector space Definition (over reals) A set X is called a vector space over IR if addition and scalar

More information

CS 246 Review of Linear Algebra 01/17/19

CS 246 Review of Linear Algebra 01/17/19 1 Linear algebra In this section we will discuss vectors and matrices. We denote the (i, j)th entry of a matrix A as A ij, and the ith entry of a vector as v i. 1.1 Vectors and vector operations A vector

More information

Math 250B Final Exam Review Session Spring 2015 SOLUTIONS

Math 250B Final Exam Review Session Spring 2015 SOLUTIONS Math 5B Final Exam Review Session Spring 5 SOLUTIONS Problem Solve x x + y + 54te 3t and y x + 4y + 9e 3t λ SOLUTION: We have det(a λi) if and only if if and 4 λ only if λ 3λ This means that the eigenvalues

More information

Review of some mathematical tools

Review of some mathematical tools MATHEMATICAL FOUNDATIONS OF SIGNAL PROCESSING Fall 2016 Benjamín Béjar Haro, Mihailo Kolundžija, Reza Parhizkar, Adam Scholefield Teaching assistants: Golnoosh Elhami, Hanjie Pan Review of some mathematical

More information

A Review of Linear Algebra

A Review of Linear Algebra A Review of Linear Algebra Mohammad Emtiyaz Khan CS,UBC A Review of Linear Algebra p.1/13 Basics Column vector x R n, Row vector x T, Matrix A R m n. Matrix Multiplication, (m n)(n k) m k, AB BA. Transpose

More information

Vector Spaces and Linear Transformations

Vector Spaces and Linear Transformations Vector Spaces and Linear Transformations Wei Shi, Jinan University 2017.11.1 1 / 18 Definition (Field) A field F = {F, +, } is an algebraic structure formed by a set F, and closed under binary operations

More information

a 11 a 12 a 11 a 12 a 13 a 21 a 22 a 23 . a 31 a 32 a 33 a 12 a 21 a 23 a 31 a = = = = 12

a 11 a 12 a 11 a 12 a 13 a 21 a 22 a 23 . a 31 a 32 a 33 a 12 a 21 a 23 a 31 a = = = = 12 24 8 Matrices Determinant of 2 2 matrix Given a 2 2 matrix [ ] a a A = 2 a 2 a 22 the real number a a 22 a 2 a 2 is determinant and denoted by det(a) = a a 2 a 2 a 22 Example 8 Find determinant of 2 2

More information

Lecture 7: Positive Semidefinite Matrices

Lecture 7: Positive Semidefinite Matrices Lecture 7: Positive Semidefinite Matrices Rajat Mittal IIT Kanpur The main aim of this lecture note is to prepare your background for semidefinite programming. We have already seen some linear algebra.

More information

2. Linear algebra. matrices and vectors. linear equations. range and nullspace of matrices. function of vectors, gradient and Hessian

2. Linear algebra. matrices and vectors. linear equations. range and nullspace of matrices. function of vectors, gradient and Hessian FE661 - Statistical Methods for Financial Engineering 2. Linear algebra Jitkomut Songsiri matrices and vectors linear equations range and nullspace of matrices function of vectors, gradient and Hessian

More information

Dimension. Eigenvalue and eigenvector

Dimension. Eigenvalue and eigenvector Dimension. Eigenvalue and eigenvector Math 112, week 9 Goals: Bases, dimension, rank-nullity theorem. Eigenvalue and eigenvector. Suggested Textbook Readings: Sections 4.5, 4.6, 5.1, 5.2 Week 9: Dimension,

More information

1 Last time: least-squares problems

1 Last time: least-squares problems MATH Linear algebra (Fall 07) Lecture Last time: least-squares problems Definition. If A is an m n matrix and b R m, then a least-squares solution to the linear system Ax = b is a vector x R n such that

More information

Linear Algebra- Final Exam Review

Linear Algebra- Final Exam Review Linear Algebra- Final Exam Review. Let A be invertible. Show that, if v, v, v 3 are linearly independent vectors, so are Av, Av, Av 3. NOTE: It should be clear from your answer that you know the definition.

More information

Chap 3. Linear Algebra

Chap 3. Linear Algebra Chap 3. Linear Algebra Outlines 1. Introduction 2. Basis, Representation, and Orthonormalization 3. Linear Algebraic Equations 4. Similarity Transformation 5. Diagonal Form and Jordan Form 6. Functions

More information

Basic Elements of Linear Algebra

Basic Elements of Linear Algebra A Basic Review of Linear Algebra Nick West nickwest@stanfordedu September 16, 2010 Part I Basic Elements of Linear Algebra Although the subject of linear algebra is much broader than just vectors and matrices,

More information

UNIT 6: The singular value decomposition.

UNIT 6: The singular value decomposition. UNIT 6: The singular value decomposition. María Barbero Liñán Universidad Carlos III de Madrid Bachelor in Statistics and Business Mathematical methods II 2011-2012 A square matrix is symmetric if A T

More information

LINEAR ALGEBRA 1, 2012-I PARTIAL EXAM 3 SOLUTIONS TO PRACTICE PROBLEMS

LINEAR ALGEBRA 1, 2012-I PARTIAL EXAM 3 SOLUTIONS TO PRACTICE PROBLEMS LINEAR ALGEBRA, -I PARTIAL EXAM SOLUTIONS TO PRACTICE PROBLEMS Problem (a) For each of the two matrices below, (i) determine whether it is diagonalizable, (ii) determine whether it is orthogonally diagonalizable,

More information

Lecture 1: Review of linear algebra

Lecture 1: Review of linear algebra Lecture 1: Review of linear algebra Linear functions and linearization Inverse matrix, least-squares and least-norm solutions Subspaces, basis, and dimension Change of basis and similarity transformations

More information

Linear Algebra Lecture Notes-I

Linear Algebra Lecture Notes-I Linear Algebra Lecture Notes-I Vikas Bist Department of Mathematics Panjab University, Chandigarh-6004 email: bistvikas@gmail.com Last revised on February 9, 208 This text is based on the lectures delivered

More information

2.3. VECTOR SPACES 25

2.3. VECTOR SPACES 25 2.3. VECTOR SPACES 25 2.3 Vector Spaces MATH 294 FALL 982 PRELIM # 3a 2.3. Let C[, ] denote the space of continuous functions defined on the interval [,] (i.e. f(x) is a member of C[, ] if f(x) is continuous

More information

Lecture 2: Linear Algebra

Lecture 2: Linear Algebra Lecture 2: Linear Algebra Rajat Mittal IIT Kanpur We will start with the basics of linear algebra that will be needed throughout this course That means, we will learn about vector spaces, linear independence,

More information

(the matrix with b 1 and b 2 as columns). If x is a vector in R 2, then its coordinate vector [x] B relative to B satisfies the formula.

(the matrix with b 1 and b 2 as columns). If x is a vector in R 2, then its coordinate vector [x] B relative to B satisfies the formula. 4 Diagonalization 4 Change of basis Let B (b,b ) be an ordered basis for R and let B b b (the matrix with b and b as columns) If x is a vector in R, then its coordinate vector x B relative to B satisfies

More information

Elementary linear algebra

Elementary linear algebra Chapter 1 Elementary linear algebra 1.1 Vector spaces Vector spaces owe their importance to the fact that so many models arising in the solutions of specific problems turn out to be vector spaces. The

More information

Warm-up. True or false? Baby proof. 2. The system of normal equations for A x = y has solutions iff A x = y has solutions

Warm-up. True or false? Baby proof. 2. The system of normal equations for A x = y has solutions iff A x = y has solutions Warm-up True or false? 1. proj u proj v u = u 2. The system of normal equations for A x = y has solutions iff A x = y has solutions 3. The normal equations are always consistent Baby proof 1. Let A be

More information

Mathematics Department Stanford University Math 61CM/DM Inner products

Mathematics Department Stanford University Math 61CM/DM Inner products Mathematics Department Stanford University Math 61CM/DM Inner products Recall the definition of an inner product space; see Appendix A.8 of the textbook. Definition 1 An inner product space V is a vector

More information

LINEAR ALGEBRA QUESTION BANK

LINEAR ALGEBRA QUESTION BANK LINEAR ALGEBRA QUESTION BANK () ( points total) Circle True or False: TRUE / FALSE: If A is any n n matrix, and I n is the n n identity matrix, then I n A = AI n = A. TRUE / FALSE: If A, B are n n matrices,

More information

Chapter 6: Orthogonality

Chapter 6: Orthogonality Chapter 6: Orthogonality (Last Updated: November 7, 7) These notes are derived primarily from Linear Algebra and its applications by David Lay (4ed). A few theorems have been moved around.. Inner products

More information

Math 224, Fall 2007 Exam 3 Thursday, December 6, 2007

Math 224, Fall 2007 Exam 3 Thursday, December 6, 2007 Math 224, Fall 2007 Exam 3 Thursday, December 6, 2007 You have 1 hour and 20 minutes. No notes, books, or other references. You are permitted to use Maple during this exam, but you must start with a blank

More information

Matrix Algebra: Summary

Matrix Algebra: Summary May, 27 Appendix E Matrix Algebra: Summary ontents E. Vectors and Matrtices.......................... 2 E.. Notation.................................. 2 E..2 Special Types of Vectors.........................

More information

Linear Algebra Review. Vectors

Linear Algebra Review. Vectors Linear Algebra Review 9/4/7 Linear Algebra Review By Tim K. Marks UCSD Borrows heavily from: Jana Kosecka http://cs.gmu.edu/~kosecka/cs682.html Virginia de Sa (UCSD) Cogsci 8F Linear Algebra review Vectors

More information

Math 21b. Review for Final Exam

Math 21b. Review for Final Exam Math 21b. Review for Final Exam Thomas W. Judson Spring 2003 General Information The exam is on Thursday, May 15 from 2:15 am to 5:15 pm in Jefferson 250. Please check with the registrar if you have a

More information

Jim Lambers MAT 610 Summer Session Lecture 1 Notes

Jim Lambers MAT 610 Summer Session Lecture 1 Notes Jim Lambers MAT 60 Summer Session 2009-0 Lecture Notes Introduction This course is about numerical linear algebra, which is the study of the approximate solution of fundamental problems from linear algebra

More information

(a) If A is a 3 by 4 matrix, what does this tell us about its nullspace? Solution: dim N(A) 1, since rank(a) 3. Ax =

(a) If A is a 3 by 4 matrix, what does this tell us about its nullspace? Solution: dim N(A) 1, since rank(a) 3. Ax = . (5 points) (a) If A is a 3 by 4 matrix, what does this tell us about its nullspace? dim N(A), since rank(a) 3. (b) If we also know that Ax = has no solution, what do we know about the rank of A? C(A)

More information

Definition 1. A set V is a vector space over the scalar field F {R, C} iff. there are two operations defined on V, called vector addition

Definition 1. A set V is a vector space over the scalar field F {R, C} iff. there are two operations defined on V, called vector addition 6 Vector Spaces with Inned Product Basis and Dimension Section Objective(s): Vector Spaces and Subspaces Linear (In)dependence Basis and Dimension Inner Product 6 Vector Spaces and Subspaces Definition

More information

Chapter 4 Euclid Space

Chapter 4 Euclid Space Chapter 4 Euclid Space Inner Product Spaces Definition.. Let V be a real vector space over IR. A real inner product on V is a real valued function on V V, denoted by (, ), which satisfies () (x, y) = (y,

More information

University of Colorado Denver Department of Mathematical and Statistical Sciences Applied Linear Algebra Ph.D. Preliminary Exam January 22, 2016

University of Colorado Denver Department of Mathematical and Statistical Sciences Applied Linear Algebra Ph.D. Preliminary Exam January 22, 2016 University of Colorado Denver Department of Mathematical and Statistical Sciences Applied Linear Algebra PhD Preliminary Exam January 22, 216 Name: Exam Rules: This exam lasts 4 hours There are 8 problems

More information

The following definition is fundamental.

The following definition is fundamental. 1. Some Basics from Linear Algebra With these notes, I will try and clarify certain topics that I only quickly mention in class. First and foremost, I will assume that you are familiar with many basic

More information

LINEAR ALGEBRA MICHAEL PENKAVA

LINEAR ALGEBRA MICHAEL PENKAVA LINEAR ALGEBRA MICHAEL PENKAVA 1. Linear Maps Definition 1.1. If V and W are vector spaces over the same field K, then a map λ : V W is called a linear map if it satisfies the two conditions below: (1)

More information

Math 102, Winter Final Exam Review. Chapter 1. Matrices and Gaussian Elimination

Math 102, Winter Final Exam Review. Chapter 1. Matrices and Gaussian Elimination Math 0, Winter 07 Final Exam Review Chapter. Matrices and Gaussian Elimination { x + x =,. Different forms of a system of linear equations. Example: The x + 4x = 4. [ ] [ ] [ ] vector form (or the column

More information

DM554 Linear and Integer Programming. Lecture 9. Diagonalization. Marco Chiarandini

DM554 Linear and Integer Programming. Lecture 9. Diagonalization. Marco Chiarandini DM554 Linear and Integer Programming Lecture 9 Marco Chiarandini Department of Mathematics & Computer Science University of Southern Denmark Outline 1. More on 2. 3. 2 Resume Linear transformations and

More information

Review problems for MA 54, Fall 2004.

Review problems for MA 54, Fall 2004. Review problems for MA 54, Fall 2004. Below are the review problems for the final. They are mostly homework problems, or very similar. If you are comfortable doing these problems, you should be fine on

More information

Math Camp Lecture 4: Linear Algebra. Xiao Yu Wang. Aug 2010 MIT. Xiao Yu Wang (MIT) Math Camp /10 1 / 88

Math Camp Lecture 4: Linear Algebra. Xiao Yu Wang. Aug 2010 MIT. Xiao Yu Wang (MIT) Math Camp /10 1 / 88 Math Camp 2010 Lecture 4: Linear Algebra Xiao Yu Wang MIT Aug 2010 Xiao Yu Wang (MIT) Math Camp 2010 08/10 1 / 88 Linear Algebra Game Plan Vector Spaces Linear Transformations and Matrices Determinant

More information

MODULE 8 Topics: Null space, range, column space, row space and rank of a matrix

MODULE 8 Topics: Null space, range, column space, row space and rank of a matrix MODULE 8 Topics: Null space, range, column space, row space and rank of a matrix Definition: Let L : V 1 V 2 be a linear operator. The null space N (L) of L is the subspace of V 1 defined by N (L) = {x

More information

Properties of Matrices and Operations on Matrices

Properties of Matrices and Operations on Matrices Properties of Matrices and Operations on Matrices A common data structure for statistical analysis is a rectangular array or matris. Rows represent individual observational units, or just observations,

More information

Pseudoinverse & Moore-Penrose Conditions

Pseudoinverse & Moore-Penrose Conditions ECE 275AB Lecture 7 Fall 2008 V1.0 c K. Kreutz-Delgado, UC San Diego p. 1/1 Lecture 7 ECE 275A Pseudoinverse & Moore-Penrose Conditions ECE 275AB Lecture 7 Fall 2008 V1.0 c K. Kreutz-Delgado, UC San Diego

More information

(a) II and III (b) I (c) I and III (d) I and II and III (e) None are true.

(a) II and III (b) I (c) I and III (d) I and II and III (e) None are true. 1 Which of the following statements is always true? I The null space of an m n matrix is a subspace of R m II If the set B = {v 1,, v n } spans a vector space V and dimv = n, then B is a basis for V III

More information

Typical Problem: Compute.

Typical Problem: Compute. Math 2040 Chapter 6 Orhtogonality and Least Squares 6.1 and some of 6.7: Inner Product, Length and Orthogonality. Definition: If x, y R n, then x y = x 1 y 1 +... + x n y n is the dot product of x and

More information

7 : APPENDIX. Vectors and Matrices

7 : APPENDIX. Vectors and Matrices 7 : APPENDIX Vectors and Matrices An n-tuple vector x is defined as an ordered set of n numbers. Usually we write these numbers x 1,...,x n in a column in the order indicated by their subscripts. The transpose

More information

Linear System Theory

Linear System Theory Linear System Theory Wonhee Kim Lecture 4 Apr. 4, 2018 1 / 40 Recap Vector space, linear space, linear vector space Subspace Linearly independence and dependence Dimension, Basis, Change of Basis 2 / 40

More information

x 3y 2z = 6 1.2) 2x 4y 3z = 8 3x + 6y + 8z = 5 x + 3y 2z + 5t = 4 1.5) 2x + 8y z + 9t = 9 3x + 5y 12z + 17t = 7

x 3y 2z = 6 1.2) 2x 4y 3z = 8 3x + 6y + 8z = 5 x + 3y 2z + 5t = 4 1.5) 2x + 8y z + 9t = 9 3x + 5y 12z + 17t = 7 Linear Algebra and its Applications-Lab 1 1) Use Gaussian elimination to solve the following systems x 1 + x 2 2x 3 + 4x 4 = 5 1.1) 2x 1 + 2x 2 3x 3 + x 4 = 3 3x 1 + 3x 2 4x 3 2x 4 = 1 x + y + 2z = 4 1.4)

More information

Eigenvalues and Eigenvectors A =

Eigenvalues and Eigenvectors A = Eigenvalues and Eigenvectors Definition 0 Let A R n n be an n n real matrix A number λ R is a real eigenvalue of A if there exists a nonzero vector v R n such that A v = λ v The vector v is called an eigenvector

More information