Further Mathematical Methods (Linear Algebra)


Solutions for the Examination

Question 1.

(a) To be an inner product on the real vector space V, a function ⟨x, y⟩ which maps vectors x, y ∈ V to R must be such that:

i. Positivity: ⟨x, x⟩ ≥ 0, and ⟨x, x⟩ = 0 if and only if x = 0.
ii. Symmetry: ⟨x, y⟩ = ⟨y, x⟩.
iii. Linearity: ⟨αx + βy, z⟩ = α⟨x, z⟩ + β⟨y, z⟩,

for all vectors x, y, z ∈ V and all scalars α, β ∈ R.

(b) We need to show that the function defined by ⟨x, y⟩ = x^T A^T A y, where A is an invertible matrix and x, y are any vectors in R³, is an inner product on R³. (Strictly speaking, the quantity x^T A^T A y is a 1 × 1 matrix; we treat it as a real number by identifying it with its single entry.) To do this we show that this formula satisfies all of the conditions given in part (a). Taking any three vectors x, y, z ∈ R³ and any two scalars α, β ∈ R, we have:

i. ⟨x, x⟩ = x^T A^T A x = (Ax)^T (Ax), and since Ax is itself a vector in R³, say [a₁, a₂, a₃]^T with a₁, a₂, a₃ ∈ R, we have

⟨x, x⟩ = a₁² + a₂² + a₃²,

which is a sum of squares of three real numbers and as such is real and non-negative. Further, to show that ⟨x, x⟩ = 0 if and only if x = 0, we note that:

LTR: If ⟨x, x⟩ = 0 then a₁² + a₂² + a₃² = 0. But this is a sum of squares of three real numbers, so it must be the case that a₁ = a₂ = a₃ = 0. Thus Ax = 0, and since A is invertible this gives x = 0 as the only solution.

RTL: If x = 0 then Ax = 0 and so clearly ⟨x, x⟩ = 0^T 0 = 0.

(as required)

ii. As ⟨x, y⟩ = x^T A^T A y is a real number, it is unaffected by transposition, and so we have

⟨x, y⟩ = ⟨x, y⟩^T = (x^T A^T A y)^T = y^T A^T A x = ⟨y, x⟩.

iii. Taking the vector αx + βy ∈ R³, we have

⟨αx + βy, z⟩ = (αx + βy)^T A^T A z = (αx^T + βy^T) A^T A z = α x^T A^T A z + β y^T A^T A z = α⟨x, z⟩ + β⟨y, z⟩.

Consequently the formula given above does define an inner product on R³ (as required).
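As a quick numerical check of part (b) (illustrative only, not part of the examination answer), the sketch below verifies the three inner product conditions using the matrix A that appears in part (c) below; any invertible matrix would do.

```python
# Numerical sanity check of the candidate inner product <x, y> = x^T A^T A y from part (b).
import numpy as np

A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])      # invertible (det A = 2)

def ip(x, y):
    """The candidate inner product <x, y> = x^T A^T A y."""
    return x @ A.T @ A @ y

rng = np.random.default_rng(0)
x, y, z = rng.standard_normal((3, 3))
a, b = 2.0, -3.0

assert ip(x, x) > 0                                            # positivity for x != 0
assert np.isclose(ip(x, y), ip(y, x))                          # symmetry
assert np.isclose(ip(a*x + b*y, z), a*ip(x, z) + b*ip(y, z))   # linearity
print("B = A^T A =\n", A.T @ A)                                # the symmetric matrix of part (c)
```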

(c) We are given that A is the matrix

A = [ 1 1 0 ]
    [ 0 1 1 ]
    [ 1 0 1 ]

So, taking an arbitrary vector x = [x₁, x₂, x₃]^T ∈ R³, we can see that

Ax = [x₁ + x₂, x₂ + x₃, x₁ + x₃]^T,

and so, using the inner product defined in (b), its norm will be given by

‖x‖² = ⟨x, x⟩ = (Ax)^T (Ax) = (x₁ + x₂)² + (x₂ + x₃)² + (x₁ + x₃)².

Thus a condition that must be satisfied by the components of this vector if it is to have unit norm with respect to the given inner product is

(x₁ + x₂)² + (x₂ + x₃)² + (x₁ + x₃)² = 1.

Hence, to find a symmetric matrix B which expresses this condition in the form x^T B x = 1, we expand this out to get

2x₁² + 2x₂² + 2x₃² + 2x₁x₂ + 2x₂x₃ + 2x₁x₃ = 1,

which can be written in matrix form as x^T B x = 1, where the required symmetric matrix is

B = [ 2 1 1 ]
    [ 1 2 1 ]
    [ 1 1 2 ]

Otherwise: B = A^T A.

(d) Since we are given the eigenvectors, we can calculate the eigenvalues using Bx = λx as follows:

For [1, −1, 0]^T we have B[1, −1, 0]^T = [1, −1, 0]^T, and so 1 is the eigenvalue corresponding to this eigenvector.
For [1, 1, −2]^T we have B[1, 1, −2]^T = [1, 1, −2]^T, and so 1 is the eigenvalue corresponding to this eigenvector.
For [1, 1, 1]^T we have B[1, 1, 1]^T = [4, 4, 4]^T = 4[1, 1, 1]^T, and so 4 is the eigenvalue corresponding to this eigenvector.

So, for an orthogonal matrix P, we need an orthonormal set of eigenvectors. But the eigenvectors above are already mutually orthogonal, and so we only need to normalise them. Doing this we find that

P = [ 1/√2   1/√6   1/√3 ]
    [ −1/√2  1/√6   1/√3 ]
    [  0    −2/√6   1/√3 ]

and the corresponding diagonal matrix D is

D = diag(1, 1, 4),

where P^T B P = D.

(e) To find a basis S of R³ such that the coordinate vector of x with respect to this basis, i.e. [x]_S, satisfies [x]_S^T C^T C [x]_S = 1 for an associated diagonal matrix C, we note that since P^T B P = D we have B = P D P^T, and so

x^T B x = 1  ⟹  x^T P D P^T x = 1  ⟹  (P^T x)^T D (P^T x) = 1.

Now x can be written as

x = y₁v₁ + y₂v₂ + y₃v₃ = P [y₁, y₂, y₃]^T = P [x]_S,

where the vectors in the set S = {v₁, v₂, v₃} are the column vectors of the matrix P. As such, since P is orthogonal, we have P^T x = [x]_S, i.e. P^T x is the coordinate vector of x with respect to the basis given by the column vectors of P. That is, if we let S be the basis of R³ given by the columns of the matrix P found in (d), we have

[x]_S^T D [x]_S = 1.

Now D is the diagonal matrix above, and so if we choose the diagonal matrix C to be

C = diag(1, 1, 2),

then we have

[x]_S^T C^T C [x]_S = 1,

as required.
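As a quick numerical cross-check of parts (c) and (d) (illustrative only), NumPy's symmetric eigensolver reproduces the eigenvalues and the orthogonal diagonalisation of B:

```python
# Check of part (d): orthogonal diagonalisation of B = A^T A.
import numpy as np

B = np.array([[2.0, 1.0, 1.0],
              [1.0, 2.0, 1.0],
              [1.0, 1.0, 2.0]])

# eigh returns the eigenvalues in ascending order and orthonormal eigenvectors as columns of P
eigvals, P = np.linalg.eigh(B)
print("eigenvalues:", eigvals)            # expect [1, 1, 4]
D = np.diag(eigvals)

assert np.allclose(P.T @ P, np.eye(3))    # P is orthogonal
assert np.allclose(P.T @ B @ P, D)        # P^T B P = D
```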

Question. (a) The Leslie matrix for such a population involving three age classes (i.e. C C C ) is: a a a L = b b Here the demographic parameters a a a b and b have the following meanings: for i = a i will be the average number of daughters born to a female in the ith age class. for i = b i will be the fraction of females in age class C i expected to survive for the next twenty years and hence enter age class C i+. It should be clear that since the number of females entering into the next age class is determined by the relevant b i in successive time periods we have x (k) = b x (k ) gives the number of females surviving long enough to go from C to C and as such the remaining ( b )x (k ) females in C die. x (k) = b x (k ) gives the number of females surviving long enough to go from C to C and as such the remaining ( b )x (k ) females in C die. whereas the x (k ) females in C all die by the end of the (k )th time period. As such the number of different DNA samples that will be available for cloning at the end of the the (k )th time period will be ( b )x (k ) + ( b )x (k ) + x (k ). Thus since the re-birth afforded by cloning creates new members of the first age class together with those created by natural means we have = a x (k ) + a x (k ) + a x (k ) + ( b )x (k ) + ( b )x (k ) + x (k ) }{{}}{{} by birth by cloning x (k) = ( + a b )x (k ) + ( + a b )x (k ) + ( + a )x (k ) females in C by the end of the kth time period. Consequently the Leslie matrix L i.e. the matrix such that x (k) = Lx (k ) is given by + a b + a b + a L = b. b as required. The Leslie matrix for the population in question is / L = we can see that and / / / L = = / / / / L = / = / =. / 4

For the particular population in question, computing successive powers of the given Leslie matrix L shows that L³ = (3/2) I. As such, after j sixty-year periods (i.e. where k = 3j) the population distribution vector will be given by

x^(3j) = (3/2)^j x^(0),

and so we can see that:

Every sixty years the proportion of the population in each age class is the same as it was at the beginning.
Every sixty years the number of females in each age class changes by a factor of 3/2 (i.e. increases by 50%).

As such, overall, the population of females is increasing in a cyclic manner with a period of sixty years.

(b) The steady states of the given pair of coupled non-linear differential equations in y₁ and y₂ are the solutions of the simultaneous equations obtained by setting ẏ₁ = 0 and ẏ₂ = 0. Factorising these equations gives four steady states, the last of which is found by solving a pair of linear simultaneous equations in y₁ and y₂. The steady state whose asymptotic stability we have to establish is then clear.

In order to establish it, we find the Jacobian matrix for this system of differential equations,

DF[y] = [ ∂ẏ₁/∂y₁  ∂ẏ₁/∂y₂ ]
        [ ∂ẏ₂/∂y₁  ∂ẏ₂/∂y₂ ]

and evaluate it at the relevant steady state. The eigenvalues of the resulting matrix are −4 and −5, and since these are both real and negative, this steady state is asymptotically stable.
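The stability test used in part (b) is easy to automate. The sketch below is illustrative only: the Jacobian is a stand-in diagonal matrix with the eigenvalues −4 and −5 found above, not the actual matrix from the question.

```python
# Asymptotic stability test from part (b): a steady state is asymptotically stable
# if every eigenvalue of the Jacobian evaluated there has negative real part.
import numpy as np

J = np.diag([-5.0, -4.0])                  # assumed stand-in Jacobian at the steady state
eigs = np.linalg.eigvals(J)
print("eigenvalues:", eigs)
print("asymptotically stable:", bool(np.all(eigs.real < 0)))
```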

Question. (a) For a non-empty subset W of V to be a subspace of V we require that for all vectors x y W and all scalars α R: i. Closure under vector addition: x + y W. ii. Closure under scalar multiplication: αx W. (b) The sum of two subspaces Y and Z of a vector space V denoted by Y + Z is defined to be the set Y + Z = {y + z y Y and z Z}. To show that Y + Z is a subspace of V we note that: As Y and Z are both subspaces of V the additive identity of V (say the vector ) is in both Y and Z. As such the vector + = Y + Z and so this set is non-empty. As such referring to part (a) we consider two general vectors in Y + Z say and any scalar α R to see that: Y + Z is closed under vector addition since: x = y + z where y Y and z Z x = y + z where y Y and z Z x + x = (y + z ) + (y + z ) = (y + y ) + (z + z ) which is a vector in Y + Z since y + y Y and z + z Z (as Y and Z are closed under vector addition). Y + Z is closed under scalar multiplication since: αx = α(y + z ) = αy + αz which is a vector in Y + Z since αy Y and αz Z (as Y and Z are closed under scalar multiplication). Consequently as Y + Z is non-empty and closed under vector addition and scalar multiplication it is a subspace of V (as required). (c) For a real vector space V we say that V is the direct sum of two of its subspaces (say Y and Z) denoted by V = Y Z if V = Y + Z and every vector v V can be written uniquely in terms of vectors in Y and vectors in Z i.e. we can write for vectors y Y and z Z in exactly one way. To prove that v = y + z The sum of Y and Z is direct iff Y Z = {}. we note that: LTR: We are given that the sum of Y and Z is direct. Now it is clear that We can write Y + Z as = + since Y + Z Y and Z are all subspaces. Given any vector u Y Z we have u Y u Z and u Z (as Z is a subspace). As such we can write Y + Z as = u + ( u). 6

But as the sum Y + Z is direct, by uniqueness we can write 0 ∈ Y + Z in only one way using a vector in Y and a vector in Z, i.e. we must have u = 0. Consequently we have Y ∩ Z = {0} (as required).

RTL: We are given Y ∩ Z = {0}, and we note that any vector u ∈ Y + Z can be written as u = y + z in terms of a vector y ∈ Y and a vector z ∈ Z. In order to establish that this sum is direct we have to show that this representation of u is unique. To do this, suppose that there are two ways of writing such a vector in terms of a vector in Y and a vector in Z, i.e.

u = y₁ + z₁ = y₂ + z₂  for y₁, y₂ ∈ Y and z₁, z₂ ∈ Z.

As such we have

y₁ − y₂ = z₂ − z₁,

where y₁ − y₂ ∈ Y and z₂ − z₁ ∈ Z; that is, using the assumption above,

y₁ − y₂ = z₂ − z₁ ∈ Y ∩ Z = {0}.

Consequently we can see that y₁ − y₂ = z₂ − z₁ = 0, i.e. y₁ = y₂ and z₁ = z₂, and as such the representation is unique (as required).

(d) We are given that Y and Z are subspaces of the vector space V with bases given by {y₁, ..., y_r} and {z₁, ..., z_s} respectively. So, in order to prove that

V = Y ⊕ Z iff {y₁, ..., y_r, z₁, ..., z_s} is a basis of V,

we have to prove it both ways:

LTR: Given that V = Y ⊕ Z, we know that every vector in V can be written uniquely in terms of a vector in Y and a vector in Z. But every vector in Y can be written uniquely as a linear combination of the vectors in {y₁, ..., y_r} (as it is a basis of Y) and every vector in Z can be written uniquely as a linear combination of the vectors in {z₁, ..., z_s} (as it is a basis of Z). Thus every vector in V can be written uniquely as a linear combination of vectors in the set {y₁, ..., y_r, z₁, ..., z_s}, i.e. this set is a basis of V (as required).

RTL: Given that {y₁, ..., y_r, z₁, ..., z_s} is a basis of V, we need to establish that V = Y ⊕ Z, and to do this we start by noting that

V = Lin{y₁, ..., y_r, z₁, ..., z_s}
  = {α₁y₁ + ⋯ + α_r y_r + β₁z₁ + ⋯ + β_s z_s : α₁, ..., α_r, β₁, ..., β_s ∈ R}
  = {y + z : y ∈ Y and z ∈ Z}
  = Y + Z.

As such we only need to establish that the sum is direct, and there are two ways of doing this:

Method 1 (short): Using part (c), we show that every vector in V can be written uniquely in terms of vectors in Y and vectors in Z. Clearly, as {y₁, ..., y_r, z₁, ..., z_s} is a basis of V, every vector in V can be written uniquely as a linear combination of these vectors. As such, every vector in V can be written uniquely as the sum of a vector in Y (a unique linear combination of the vectors in {y₁, ..., y_r}, a basis of Y) and a vector in Z (a unique linear combination of the vectors in {z₁, ..., z_s}, a basis of Z), as required.

Method 2 (longer): Using part (c), we show that Y ∩ Z = {0}. Take any vector u ∈ Y ∩ Z, i.e. u ∈ Y and u ∈ Z, so that we can write this vector as

u = Σ_{i=1}^{r} α_i y_i  and  u = Σ_{i=1}^{s} β_i z_i.

But since these two linear combinations of vectors represent the same vector we can equate them. Doing this and rearranging we get

α₁y₁ + ⋯ + α_r y_r − β₁z₁ − ⋯ − β_s z_s = 0.

But this vector equation involves the vectors which form a basis of V; these vectors are linearly independent, and so the only solution to this vector equation is the trivial solution, i.e.

α₁ = ⋯ = α_r = β₁ = ⋯ = β_s = 0.

Consequently the vector u considered above must be 0, and so Y ∩ Z = {0} (as required).

Hence we are asked to establish that if V = Y ⊕ Z then dim(V) = dim(Y) + dim(Z). But this is immediate since, using the bases given above, we have

dim(V) = r + s = dim(Y) + dim(Z),

as required.
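A small numerical illustration of part (d): when Y ∩ Z = {0}, stacking a basis of Y next to a basis of Z gives a matrix whose rank is dim(Y) + dim(Z). The subspaces of R⁵ below are stand-ins chosen for illustration.

```python
# dim(Y + Z) = dim(Y) + dim(Z) when the sum is direct.
import numpy as np

Y = np.array([[1., 0., 0., 0., 0.],
              [0., 1., 0., 0., 0.]]).T     # basis of Y as columns (dim Y = 2)
Z = np.array([[0., 0., 1., 0., 0.],
              [0., 0., 0., 1., 1.]]).T     # basis of Z as columns (dim Z = 2)

combined = np.hstack([Y, Z])
print("dim(Y) + dim(Z) =", Y.shape[1] + Z.shape[1])
print("dim(Y + Z)      =", np.linalg.matrix_rank(combined))   # equal iff the sum is direct
```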

Question 4.

(a) Let A be a real square matrix and let x be an eigenvector of A with corresponding eigenvalue λ, i.e. Ax = λx. We are asked to prove that:

i. x is an eigenvector of the identity matrix I with corresponding eigenvalue 1.
Proof: Clearly, as x is an eigenvector, x ≠ 0, and so we have Ix = x = 1·x. Thus x is an eigenvector of the identity matrix I with corresponding eigenvalue 1, as required.

ii. x is an eigenvector of A + I with corresponding eigenvalue λ + 1.
Proof: Using the information about A and I given above, we have

(A + I)x = Ax + Ix = λx + x = (λ + 1)x.

Thus x is an eigenvector of A + I with corresponding eigenvalue λ + 1, as required.

iii. x is an eigenvector of A² with corresponding eigenvalue λ².
Proof: Using the information about A given above, we have

A²x = A(Ax) = A(λx) = λ(Ax) = λ(λx) = λ²x.

Thus x is an eigenvector of A² with corresponding eigenvalue λ², as required.

As such, if the matrix A has eigenvalue λ with corresponding eigenvector x, then for k ∈ N it should be clear that the matrix A^k has eigenvalue λ^k with the same corresponding eigenvector x.

(b) We are given that the n × n invertible matrix A has a spectral decomposition given by

A = Σ_{i=1}^{n} λ_i x_i x_i^T,

where, for 1 ≤ i ≤ n, the x_i form an orthonormal set of eigenvectors corresponding to the eigenvalues λ_i of A. So, using (iii), it should be clear that

A² = Σ_{i=1}^{n} λ_i² x_i x_i^T,

and, for k ∈ N,

A^k = Σ_{i=1}^{n} λ_i^k x_i x_i^T.
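Part (b) can be checked numerically for any symmetric matrix built from an orthonormal set of eigenvectors; the sketch below uses a randomly generated example (illustrative only).

```python
# Check of part (b): if A = sum_i lambda_i x_i x_i^T with orthonormal x_i,
# then A^k = sum_i lambda_i^k x_i x_i^T.
import numpy as np

rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))        # columns: an orthonormal set x_1..x_4
lam = rng.uniform(-0.8, 0.8, size=4)                    # eigenvalues lambda_1..lambda_4
A = sum(l * np.outer(q, q) for l, q in zip(lam, Q.T))   # spectral decomposition of A

for k in (2, 3, 5):
    Ak_spectral = sum(l**k * np.outer(q, q) for l, q in zip(lam, Q.T))
    assert np.allclose(np.linalg.matrix_power(A, k), Ak_spectral)
print("A^k matches its spectral form for k = 2, 3, 5")
```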

(c) We are given that the eigenvalues of the matrix A are all real and satisfy −1 < λ_i ≤ 1 for 1 ≤ i ≤ n. As such we can re-write the definition of the natural logarithm of the matrix I + A as follows:

ln(I + A) = Σ_{k=1}^{∞} ((−1)^{k+1}/k) A^k
          = Σ_{k=1}^{∞} ((−1)^{k+1}/k) [ Σ_{i=1}^{n} λ_i^k x_i x_i^T ]
          = Σ_{i=1}^{n} [ Σ_{k=1}^{∞} ((−1)^{k+1}/k) λ_i^k ] x_i x_i^T
          = Σ_{i=1}^{n} ln(1 + λ_i) x_i x_i^T,

where we have used the Taylor series for ln(1 + x) to establish that, for 1 ≤ i ≤ n,

Σ_{k=1}^{∞} ((−1)^{k+1}/k) λ_i^k = ln(1 + λ_i),

since it was stipulated that the eigenvalues are real and satisfy −1 < λ_i ≤ 1.

(d) We are given the matrix A stated in the question, and to find its spectral decomposition we need to find an orthonormal set of eigenvectors and the corresponding eigenvalues. For the eigenvalues we solve det(A − λI) = 0; expanding the determinant and factorising, this reduces (up to a non-zero constant factor) to

λ(1 − 2λ)(1 + 2λ) = 0,

and so the eigenvalues are 1/2, 0 and −1/2. The eigenvectors that correspond to these eigenvalues are then given by solving the matrix equation (A − λI)x = 0 for each eigenvalue:

For λ = 1/2: the equations reduce to y = 0 and x = z (z ∈ R), so a corresponding eigenvector is [1, 0, 1]^T.
For λ = 0: the equations reduce to y = 0 and x = −z (z ∈ R), so a corresponding eigenvector is [1, 0, −1]^T.
For λ = −1/2: the equations reduce to x = z = 0 with y ∈ R, so a corresponding eigenvector is [0, 1, 0]^T.

So, to find the spectral decomposition of A, we need an orthonormal set of eigenvectors. Since the eigenvectors above are already mutually orthogonal, we simply normalise them to get u₁ = (1/√2)[1, 0, 1]^T, u₂ = (1/√2)[1, 0, −1]^T and u₃ = [0, 1, 0]^T, and substitute all of this into the expression in (b). Thus, since the λ = 0 term vanishes, we have

A = (1/2) u₁u₁^T − (1/2) u₃u₃^T,

which is the spectral decomposition of A. Consequently, using part (c), we can see that ln(I + A) is given by the matrix

ln(I + A) = ln(3/2) u₁u₁^T + ln(1/2) u₃u₃^T = ln(3/2) u₁u₁^T − (ln 2) u₃u₃^T,

since ln(1/2) = −ln 2.
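A short numerical check of this answer (illustrative only, using the eigenvalues and normalised eigenvectors found in part (d)):

```python
# Evaluate ln(I + A) = ln(3/2) u1 u1^T - ln(2) u3 u3^T and cross-check it against the
# eigen-decomposition of I + A.
import numpy as np

u1 = np.array([1.0, 0.0, 1.0]) / np.sqrt(2)   # eigenvector for lambda = 1/2
u3 = np.array([0.0, 1.0, 0.0])                # eigenvector for lambda = -1/2

A = 0.5 * np.outer(u1, u1) - 0.5 * np.outer(u3, u3)                # spectral form of A
logIA = np.log(1.5) * np.outer(u1, u1) + np.log(0.5) * np.outer(u3, u3)

w, V = np.linalg.eigh(np.eye(3) + A)          # eigen-decomposition of I + A
assert np.allclose(V @ np.diag(np.log(w)) @ V.T, logIA)
print(logIA)
```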

Question 5.

(a) A strong generalised inverse of an m × n matrix A is any n × m matrix A^G which is such that:

i. A A^G A = A;
ii. A^G A A^G = A^G;
iii. A A^G orthogonally projects R^m onto R(A);
iv. A^G A orthogonally projects R^n parallel to N(A).

(b) Given that the matrix equation Ax = b represents a set of m inconsistent equations in n variables, we can see that any vector of the form

x = A^G b + (I − A^G A)w,   with w ∈ R^n,

is a solution to the least squares fit problem associated with this set of linear equations, since

Ax = A A^G b + (A − A A^G A)w = A A^G b + (A − A)w = A A^G b,

using the first property of A^G given in (a). As such, using the third property of A^G given in (a), we can see that Ax is equal to A A^G b, the orthogonal projection of b onto R(A). But, by definition, a least squares analysis of an inconsistent set of equations Ax = b minimises the distance ‖Ax − b‖ between the vector b and R(A), which is exactly what this orthogonal projection does. Thus the given vector is such a solution.
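Parts (a) and (b) can be illustrated numerically with the Moore-Penrose pseudoinverse, which in particular satisfies the conditions of part (a). The matrix A and vector b below are stand-ins, not the ones from the question.

```python
# Every x = A^G b + (I - A^G A) w gives the same residual-minimising value of A x.
import numpy as np

A = np.array([[1.0, 1.0, 2.0],      # rank 2: third column = first + second
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [1.0, 1.0, 2.0]])
b = np.array([1.0, 2.0, 3.0, 4.0])  # chosen so that Ax = b is inconsistent

AG = np.linalg.pinv(A)
assert np.allclose(A @ AG @ A, A)          # A A^G A = A
assert np.allclose(AG @ A @ AG, AG)        # A^G A A^G = A^G

rng = np.random.default_rng(2)
for _ in range(3):
    w = rng.standard_normal(3)
    x = AG @ b + (np.eye(3) - AG @ A) @ w
    # every such x gives the same projection A x = A A^G b of b onto R(A)
    assert np.allclose(A @ x, A @ AG @ b)
print("all x = A^G b + (I - A^G A) w give the same least squares fit")
```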

(c) To find a strong generalised inverse of the given 3 × 3 matrix A we use the given method. We note that the first column vector of A is linearly dependent on the other two, while the other two column vectors are linearly independent, and so A has rank 2. Thus, taking k = 2, we write A = BC, where the 3 × 2 matrix B consists of two linearly independent columns of A and C is the corresponding 2 × 3 coefficient matrix. Computing B^T B and CC^T (both invertible 2 × 2 matrices), their inverses, and the products (B^T B)^{-1}B^T and C^T(CC^T)^{-1}, we obtain

A^G = C^T (CC^T)^{-1} (B^T B)^{-1} B^T,

which is the sought-after strong generalised inverse of A.

To find the set of all solutions to the least squares fit problem associated with the given inconsistent system of three linear equations in the unknowns x, y and z, we note that these equations can be written in the form Ax = b, with x = [x, y, z]^T, where A is the matrix above and b is the vector of right-hand sides. Computing A^G A, A^G A − I and A^G b, we find from part (b) that the solutions of this least squares problem are exactly the vectors

x = A^G b + (I − A^G A)w,   for any w ∈ R³.

We know from parts (a) and (b) that the (A^G A − I)w part of our solutions yields a vector in N(A), and so the solution set is the translate of N(A) by the particular solution A^G b. Further, by the rank-nullity theorem we have

η(A) = 3 − ρ(A) = 3 − 2 = 1,

and so N(A) is a line through the origin. Indeed, the vector equation of the line representing the solution set is

x = A^G b + λd,   where λ ∈ R

and d is any non-zero vector spanning N(A).
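The formula A^G = C^T(CC^T)^{-1}(B^TB)^{-1}B^T used above can be checked numerically on any full-rank factorisation; the matrices below are stand-ins chosen for illustration.

```python
# Strong generalised inverse from a full-rank factorisation A = B C (B is m x k, C is k x n,
# both of rank k); the result agrees with the Moore-Penrose pseudoinverse.
import numpy as np

B = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])            # 3 x 2, rank 2
C = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])       # 2 x 3, rank 2
A = B @ C                             # a rank-2 3 x 3 matrix

AG = C.T @ np.linalg.inv(C @ C.T) @ np.linalg.inv(B.T @ B) @ B.T
assert np.allclose(AG, np.linalg.pinv(A))
print(AG)
```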

Question 6.

(a) To test the set of functions {1, x, x²} for linear independence we calculate the Wronskian as instructed, i.e.

W(x) = | f₁   f₂   f₃  |   | 1  x  x² |
       | f₁′  f₂′  f₃′ | = | 0  1  2x | = 2,
       | f₁″  f₂″  f₃″ |   | 0  0  2  |

and as W(x) ≠ 0 the set is linearly independent (as required).

(b) We consider the inner product space formed by the vector space P[−1, 1] and the inner product

⟨f(x), g(x)⟩ = ∫_{−1}^{1} f(x) g(x) dx.

To find an orthonormal basis of the space Lin{1, x, x²} we use the Gram-Schmidt procedure.

We start with the vector 1 and note that

‖1‖² = ⟨1, 1⟩ = ∫_{−1}^{1} 1 dx = [x]_{−1}^{1} = 2.

Consequently we set e₁ = 1/√2.

Next we need to find a vector u₂, where

u₂ = x − ⟨x, e₁⟩ e₁.

But as

⟨x, e₁⟩ = (1/√2) ∫_{−1}^{1} x dx = (1/√2) [x²/2]_{−1}^{1} = 0,

we have u₂ = x. Now, as

‖x‖² = ⟨x, x⟩ = ∫_{−1}^{1} x² dx = [x³/3]_{−1}^{1} = 2/3,

we set e₂ = √(3/2) x.

Lastly we need to find a vector u₃, where

u₃ = x² − ⟨x², e₁⟩ e₁ − ⟨x², e₂⟩ e₂.

But as

⟨x², e₁⟩ = (1/√2) ∫_{−1}^{1} x² dx = (1/√2)(2/3)   and   ⟨x², e₂⟩ = √(3/2) ∫_{−1}^{1} x³ dx = 0,

we have u₃ = x² − 1/3 = (3x² − 1)/3. Now, as

‖3x² − 1‖² = ∫_{−1}^{1} (3x² − 1)² dx = ∫_{−1}^{1} (9x⁴ − 6x² + 1) dx = [9x⁵/5 − 2x³ + x]_{−1}^{1} = 8/5,

we set

e₃ = √(5/8) (3x² − 1).

Consequently the set

{ 1/√2,  √(3/2) x,  √(5/8)(3x² − 1) }

is an orthonormal basis for the space Lin{1, x, x²}.
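The Gram-Schmidt computation in part (b) can be reproduced symbolically; here is a minimal SymPy sketch (illustrative only):

```python
# Symbolic Gram-Schmidt on {1, x, x^2} with <f, g> = integral of f g over [-1, 1].
import sympy as sp

x = sp.symbols('x')
ip = lambda f, g: sp.integrate(f * g, (x, -1, 1))

e1 = 1 / sp.sqrt(2)                                # since <1, 1> = 2
u2 = x - ip(x, e1) * e1                            # = x
e2 = u2 / sp.sqrt(ip(u2, u2))                      # = sqrt(3/2) * x
u3 = x**2 - ip(x**2, e1) * e1 - ip(x**2, e2) * e2  # = x^2 - 1/3
e3 = u3 / sp.sqrt(ip(u3, u3))                      # = sqrt(5/8) * (3 x^2 - 1)

print(sp.simplify(e2))   # sqrt(6)*x/2
print(sp.simplify(e3))   # equivalent to sqrt(10)*(3*x**2 - 1)/4
assert ip(e1, e2) == 0 and ip(e1, e3) == 0 and sp.simplify(ip(e3, e3)) == 1
```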

(c) We are given that {e₁, e₂, ..., e_k} is an orthonormal basis of a subspace S of an inner product space V. Extending this to an orthonormal basis {e₁, e₂, ..., e_k, e_{k+1}, ..., e_n} of V, we note that any vector x ∈ V can be written as

x = Σ_{i=1}^{n} α_i e_i.

Now, for any j (where 1 ≤ j ≤ n), we have

⟨x, e_j⟩ = Σ_{i=1}^{n} α_i ⟨e_i, e_j⟩ = α_j,

since we are using an orthonormal basis. Thus we can write

x = Σ_{i=1}^{n} ⟨x, e_i⟩ e_i = Σ_{i=1}^{k} ⟨x, e_i⟩ e_i + Σ_{i=k+1}^{n} ⟨x, e_i⟩ e_i,

where the first sum lies in S and the second lies in S⊥, and so the orthogonal projection of V onto S [parallel to S⊥] is given by

Px = Σ_{i=1}^{k} ⟨x, e_i⟩ e_i

for any x ∈ V (as required).

(d) Using the result in (c), it should be clear that a least squares approximation to x⁴ in Lin{1, x, x²} will be given by Px⁴. So, using the inner product in (b) and the orthonormal basis for Lin{1, x, x²} which we found there, we have

Px⁴ = ⟨x⁴, e₁⟩ e₁ + ⟨x⁴, e₂⟩ e₂ + ⟨x⁴, e₃⟩ e₃
    = (1/2) ⟨x⁴, 1⟩ + (3/2) ⟨x⁴, x⟩ x + (5/8) ⟨x⁴, 3x² − 1⟩ (3x² − 1)
    = 1/5 + 0 + (2/7)(3x² − 1)
    = (3/35)(10x² − 1),

since

⟨x⁴, 1⟩ = ∫_{−1}^{1} x⁴ dx = [x⁵/5]_{−1}^{1} = 2/5,
⟨x⁴, x⟩ = ∫_{−1}^{1} x⁵ dx = [x⁶/6]_{−1}^{1} = 0,
⟨x⁴, 3x² − 1⟩ = ∫_{−1}^{1} (3x⁶ − x⁴) dx = [3x⁷/7 − x⁵/5]_{−1}^{1} = 6/7 − 2/5 = 16/35.

Thus our least squares approximation to x⁴ is (3/35)(10x² − 1) = 6x²/7 − 3/35.
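Finally, a SymPy sketch (illustrative only) confirming the projection computed in part (d), using the orthonormal basis from part (b):

```python
# Least squares approximation of x^4 in Lin{1, x, x^2}: P x^4 = sum_i <x^4, e_i> e_i.
import sympy as sp

x = sp.symbols('x')
ip = lambda f, g: sp.integrate(f * g, (x, -1, 1))

e1 = 1 / sp.sqrt(2)
e2 = sp.sqrt(sp.Rational(3, 2)) * x
e3 = sp.sqrt(sp.Rational(5, 8)) * (3 * x**2 - 1)

proj = sum(ip(x**4, e) * e for e in (e1, e2, e3))
print(sp.expand(proj))                                         # 6*x**2/7 - 3/35
assert sp.simplify(proj - sp.Rational(3, 35) * (10 * x**2 - 1)) == 0
```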