MTH5112 Linear Algebra I / MTH5212 Applied Linear Algebra (2017/2018)

COURSEWORK 3 SOLUTIONS

Exercise 1. (a) Write $A = (a_{ij})_{n\times n}$ and $B = (b_{ij})_{n\times n}$. Since $A$ and $B$ are diagonal, we have $a_{ij} = 0$ and $b_{ij} = 0$ whenever $i \neq j$. Now write $AB = (c_{ij})_{n\times n}$, i.e. let $c_{ij}$ denote the $(i,j)$-entry of $AB$. We are trying to show that $AB$ is diagonal, so we must show that $c_{ij} = 0$ whenever $i \neq j$. The definition of matrix multiplication says that
$$c_{ij} = \sum_{k=1}^{n} a_{ik} b_{kj}.$$
Now, in this notation we have $a_{ik} = 0$ whenever $i \neq k$, and $b_{kj} = 0$ whenever $j \neq k$; so the only way that one of the terms $a_{ik}b_{kj}$ in the above sum can be non-zero is if both $i$ and $j$ are equal to $k$ (because both $a_{ik}$ and $b_{kj}$ would have to be non-zero). In particular, $i$ and $j$ have to be equal to each other! If they are not, then $c_{ij} = 0$, which is what we were trying to show.

We also need to show that $A$ and $B$ commute. Write $BA = (d_{ij})_{n\times n}$. By the above proof (we could have just interchanged the roles of $A$ and $B$), we know that $BA$ is also diagonal. Since both $AB$ and $BA$ are diagonal, we just need to show that their diagonal entries, i.e. those with $i = j$, are equal. That is, we must show that $c_{ii} = d_{ii}$ for all $i \in \{1, \dots, n\}$. We have $c_{ii} = \sum_{k=1}^{n} a_{ik}b_{ki}$, and in order for a term $a_{ik}b_{ki}$ in this sum to be non-zero, we need both $a_{ik}$ and $b_{ki}$ to be non-zero, so we need $k = i$. Therefore, $c_{ii} = a_{ii}b_{ii}$. But if we swap the roles of $A$ and $B$ in this calculation, we find that $d_{ii} = b_{ii}a_{ii} = a_{ii}b_{ii}$. Since this argument did not depend on the value of $i$, we have shown that $c_{ii} = d_{ii}$ for all $i$, which is what we wanted.

(b) Let us use the same notation $A = (a_{ij})_{n\times n}$, $B = (b_{ij})_{n\times n}$ and $AB = (c_{ij})_{n\times n}$ as in part (a). We are assuming that $A$ and $B$ are upper triangular, i.e. that $a_{ij} = 0$ and $b_{ij} = 0$ whenever $i > j$. We must show that $AB$ is upper triangular, i.e. that $c_{ij} = 0$ whenever $i > j$. From the definition of matrix multiplication, we can write
$$c_{ij} = \sum_{k=1}^{n} a_{ik}b_{kj} = \sum_{k=1}^{j} a_{ik}b_{kj} + \sum_{k=j+1}^{n} a_{ik}b_{kj}.$$
Since $A$ is upper triangular, we have $a_{ik} = 0$ whenever $i > k$; but we are also assuming that $i > j$ (because we are trying to show that $c_{ij} = 0$ in this case), so in the first sum $\sum_{k=1}^{j} a_{ik}b_{kj}$ we have $k \le j < i$ and hence all of the $a_{ik}$ in this sum are $0$. Similarly, in the second sum $\sum_{k=j+1}^{n} a_{ik}b_{kj}$ we have $k > j$ (because $k$ starts from $j + 1$ in this sum) and hence $b_{kj} = 0$ because $B$ is upper triangular. Combining these last two observations, we conclude that when $i > j$ we have
$$c_{ij} = \sum_{k=1}^{j} \underbrace{a_{ik}}_{=0} b_{kj} + \sum_{k=j+1}^{n} a_{ik} \underbrace{b_{kj}}_{=0} = 0,$$
which means that $AB = (c_{ij})$ is indeed upper triangular.

(c) Two upper triangular matrices will not necessarily commute. Here is a counterexample. If
$$A = \begin{pmatrix} 1 & 1 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} \quad\text{and}\quad B = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 1 \\ 0 & 0 & 1 \end{pmatrix},$$
then
$$AB = \begin{pmatrix} 1 & 1 & 1 \\ 0 & 1 & 1 \\ 0 & 0 & 1 \end{pmatrix} \quad\text{but}\quad BA = \begin{pmatrix} 1 & 1 & 0 \\ 0 & 1 & 1 \\ 0 & 0 & 1 \end{pmatrix}.$$
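These facts are easy to spot-check numerically. Here is a minimal NumPy sketch (an illustration added alongside the solution, not a replacement for the proofs above):

```python
import numpy as np

rng = np.random.default_rng(0)

# Part (a): products of diagonal matrices are diagonal, and they commute.
D1, D2 = np.diag(rng.integers(1, 9, 4)), np.diag(rng.integers(1, 9, 4))
assert np.array_equal(D1 @ D2, D2 @ D1)

# Part (b): the product of two upper triangular matrices is upper triangular.
U1 = np.triu(rng.integers(-5, 5, (4, 4)))
U2 = np.triu(rng.integers(-5, 5, (4, 4)))
assert np.array_equal(np.triu(U1 @ U2), U1 @ U2)

# Part (c): the counterexample above, for which AB != BA.
A = np.array([[1, 1, 0], [0, 1, 0], [0, 0, 1]])
B = np.array([[1, 0, 0], [0, 1, 1], [0, 0, 1]])
assert not np.array_equal(A @ B, B @ A)
```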

Exercise 2. (a) We are assuming that $A$ is symmetric, i.e. that $A^T = A$, and we must prove that $BAB^T$ is symmetric, i.e. that $(BAB^T)^T = BAB^T$. By Proposition 2.12(4) in the lecture notes (which says that $(CD)^T = D^T C^T$ for matrices $C$ and $D$), we have
$$(BAB^T)^T = (B^T)^T A^T B^T = B A^T B^T.$$
Since $A$ is symmetric, this equals $BAB^T$, which is what we wanted.

(b) In general, we have $(AB)^T = B^T A^T$. If $A$ and $B$ are symmetric, it follows that $(AB)^T = BA$. This equals $AB$ if and only if $A$ and $B$ commute (by definition of "commute").

(c) We are assuming that $AB = I$, and we are trying to prove that also $BA = I$; in other words, that $B$ is invertible with inverse $A$. By the Invertible Matrix Theorem, to prove that $B$ is invertible, we can instead prove that $Bx = 0$ has only the trivial solution. But if $Bx = 0$, then
$$x = Ix = ABx = A0 = 0,$$
so indeed the only solution of $Bx = 0$ is the trivial solution. Hence $B$ is invertible, but we still need to show that $A$ is the inverse of $B$, i.e. that $BA = I$ (we already know that $AB = I$, by assumption). Let $C$ denote the inverse of $B$, so that $BC = I = CB$. Then, in particular, $CB = AB$ (because both are equal to $I$), and the cancellation argument of part (d), this time multiplying on the right by the inverse of $B$, gives $A = C$, i.e. $A$ is the inverse of $B$. (Alternatively, observe that $BA = BAI = BA(BC) = B(AB)C = BIC = BC = I$.)

(d) Multiplying both sides of the equation $AB = AC$ on the left by $A^{-1}$ (which we are assuming exists) gives $A^{-1}AB = A^{-1}AC$, i.e. $IB = IC$, i.e. $B = C$, as required.

Exercise 3. Consider the following $4 \times 4$ elementary matrices ($E_1$ swaps rows 1 and 2, $E_2$ swaps rows 2 and 3, and $E_3$ swaps rows 3 and 4):
$$E_1 = \begin{pmatrix} 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}, \quad
E_2 = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}, \quad
E_3 = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \end{pmatrix}.$$
Then $E_1$ and $E_3$ commute because
$$E_1 E_3 = E_3 E_1 = \begin{pmatrix} 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \end{pmatrix},$$
but $E_1$ and $E_2$ do not commute because
$$E_1 E_2 = \begin{pmatrix} 0 & 0 & 1 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix} \quad\text{while}\quad
E_2 E_1 = \begin{pmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}.$$
(Remark: think about the row operations that these matrices perform. Can you convince yourself that $E_1$ and $E_3$ commute but $E_1$ and $E_2$ do not, without explicitly computing the various matrix products?)
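Exercise 3 can likewise be checked mechanically. A small NumPy sketch (the helper `swap` is just an illustration introduced here):

```python
import numpy as np

def swap(n, i, j):
    """Return the n x n elementary matrix that swaps rows i and j (0-indexed)."""
    E = np.eye(n, dtype=int)
    E[[i, j]] = E[[j, i]]
    return E

E1 = swap(4, 0, 1)   # swap rows 1 and 2
E2 = swap(4, 1, 2)   # swap rows 2 and 3
E3 = swap(4, 2, 3)   # swap rows 3 and 4

# Disjoint swaps commute; swaps that share a row (here row 2) generally do not.
assert np.array_equal(E1 @ E3, E3 @ E1)
assert not np.array_equal(E1 @ E2, E2 @ E1)
```

This matches the remark above: $E_1$ and $E_3$ act on disjoint pairs of rows, so the order in which they are applied is irrelevant, while $E_1$ and $E_2$ both move row 2.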

Exercise 4. (a) Using Gauss-Jordan inversion, we find
$$\left(\begin{array}{ccc|ccc} 1 & -1 & 3 & 1 & 0 & 0 \\ -2 & 3 & -6 & 0 & 1 & 0 \\ 1 & -1 & 4 & 0 & 0 & 1 \end{array}\right)
\xrightarrow[R_3 \to R_3 - R_1]{R_2 \to R_2 + 2R_1}
\left(\begin{array}{ccc|ccc} 1 & -1 & 3 & 1 & 0 & 0 \\ 0 & 1 & 0 & 2 & 1 & 0 \\ 0 & 0 & 1 & -1 & 0 & 1 \end{array}\right)$$
$$\xrightarrow{R_1 \to R_1 - 3R_3}
\left(\begin{array}{ccc|ccc} 1 & -1 & 0 & 4 & 0 & -3 \\ 0 & 1 & 0 & 2 & 1 & 0 \\ 0 & 0 & 1 & -1 & 0 & 1 \end{array}\right)
\xrightarrow{R_1 \to R_1 + R_2}
\left(\begin{array}{ccc|ccc} 1 & 0 & 0 & 6 & 1 & -3 \\ 0 & 1 & 0 & 2 & 1 & 0 \\ 0 & 0 & 1 & -1 & 0 & 1 \end{array}\right).$$
Since the left-hand side of the final augmented matrix is the identity matrix, we conclude that $A$ is invertible, and the inverse of $A$ is the right-hand side of the final augmented matrix, i.e.
$$A^{-1} = \begin{pmatrix} 6 & 1 & -3 \\ 2 & 1 & 0 \\ -1 & 0 & 1 \end{pmatrix}.$$

(b) The system can be written in the form $Ax = b$, where
$$A = \begin{pmatrix} 1 & -1 & 3 \\ -2 & 3 & -6 \\ 1 & -1 & 4 \end{pmatrix}, \quad
x = \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix}, \quad\text{and}\quad
b = \begin{pmatrix} 4 \\ -6 \\ 5 \end{pmatrix}.$$
Therefore, by part (a),
$$x = A^{-1}b = \begin{pmatrix} 6 & 1 & -3 \\ 2 & 1 & 0 \\ -1 & 0 & 1 \end{pmatrix} \begin{pmatrix} 4 \\ -6 \\ 5 \end{pmatrix} = \begin{pmatrix} 3 \\ 2 \\ 1 \end{pmatrix},$$
that is, the solution set of the system is $\{(3, 2, 1)\}$.
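The arithmetic in Exercise 4 can be double-checked with NumPy (a verification sketch, not part of the hand computation):

```python
import numpy as np

A = np.array([[ 1, -1,  3],
              [-2,  3, -6],
              [ 1, -1,  4]], dtype=float)
b = np.array([4, -6, 5], dtype=float)

A_inv = np.linalg.inv(A)
print(A_inv)         # matches the Gauss-Jordan result above
print(A_inv @ b)     # [3. 2. 1.]
assert np.allclose(A @ np.array([3.0, 2.0, 1.0]), b)
```

In practice one would solve the system with `np.linalg.solve(A, b)` rather than forming the inverse explicitly; computing $A^{-1}$ here simply mirrors the method the exercise asks for.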

Exercise (#) 5. (a) Using the Gauss-Jordan algorithm, we find
$$A = \begin{pmatrix} 1 & 4 & 0 \\ 0 & 3 & 0 \\ -2 & -8 & 1 \end{pmatrix}
\xrightarrow{R_3 \to R_3 + 2R_1}
\begin{pmatrix} 1 & 4 & 0 \\ 0 & 3 & 0 \\ 0 & 0 & 1 \end{pmatrix}
\xrightarrow{R_2 \to \frac{1}{3}R_2}
\begin{pmatrix} 1 & 4 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}
\xrightarrow{R_1 \to R_1 - 4R_2}
\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}.$$

(b) Part (a) shows that the matrix $A$ is row equivalent to $I_{3\times 3}$, so it is invertible by the Invertible Matrix Theorem. The elementary matrices corresponding to the elementary row operations used to obtain $I_{3\times 3}$ from $A$ are
$$E_1 = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 2 & 0 & 1 \end{pmatrix} \quad (\text{apply } R_3 \to R_3 + 2R_1 \text{ to } I_{3\times 3});$$
$$E_2 = \begin{pmatrix} 1 & 0 & 0 \\ 0 & \frac{1}{3} & 0 \\ 0 & 0 & 1 \end{pmatrix} \quad (\text{apply } R_2 \to \tfrac{1}{3}R_2 \text{ to } I_{3\times 3});$$
$$E_3 = \begin{pmatrix} 1 & -4 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} \quad (\text{apply } R_1 \to R_1 - 4R_2 \text{ to } I_{3\times 3}),$$
and we have $E_3 E_2 E_1 A = I$. Therefore, the inverse of $A$ can be written as
$$A^{-1} = E_3 E_2 E_1,$$
and $A$ itself can be written as
$$A = (A^{-1})^{-1} = (E_3 E_2 E_1)^{-1} = E_1^{-1} E_2^{-1} E_3^{-1}.$$
Note that the inverses of the elementary matrices $E_i$ ($i = 1, 2, 3$) above are obtained by reversing the row operation that was used to obtain $E_i$ from $I_{3\times 3}$. That is,
$$E_1^{-1} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ -2 & 0 & 1 \end{pmatrix} \quad (\text{apply } R_3 \to R_3 - 2R_1 \text{ to } I_{3\times 3});$$
$$E_2^{-1} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 3 & 0 \\ 0 & 0 & 1 \end{pmatrix} \quad (\text{apply } R_2 \to 3R_2 \text{ to } I_{3\times 3});$$
$$E_3^{-1} = \begin{pmatrix} 1 & 4 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} \quad (\text{apply } R_1 \to R_1 + 4R_2 \text{ to } I_{3\times 3}).$$
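Again the factorisation is easy to verify numerically (a sketch; the matrices are typed in from the solution above):

```python
import numpy as np

I = np.eye(3)
E1 = I.copy(); E1[2, 0] = 2       # R3 -> R3 + 2*R1
E2 = I.copy(); E2[1, 1] = 1/3     # R2 -> (1/3)*R2
E3 = I.copy(); E3[0, 1] = -4      # R1 -> R1 - 4*R2

A = np.array([[ 1,  4, 0],
              [ 0,  3, 0],
              [-2, -8, 1]], dtype=float)

assert np.allclose(E3 @ E2 @ E1 @ A, np.eye(3))      # E3 E2 E1 A = I
assert np.allclose(np.linalg.inv(A), E3 @ E2 @ E1)   # A^{-1} = E3 E2 E1
assert np.allclose(A, np.linalg.inv(E1) @ np.linalg.inv(E2) @ np.linalg.inv(E3))
```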

Exercise (#) 6. (a) The first interpolation condition says that $S(0) = y_1$. Looking at the piecewise definition of $S(x)$, we see that when $x = 0$, $S(x)$ is defined by the formula $a_1 x^3 + b_1 x^2 + c_1 x + d_1$, and so in particular $S(0) = d_1$. Therefore, our first equation for the 16 coefficients of $S(x)$ is $d_1 = y_1$.

The second interpolation condition says that $S(1) = y_2$. The piecewise definition of $S(x)$ says that when $x = 1$, $S(x)$ is defined by the formula $a_2(x-1)^3 + b_2(x-1)^2 + c_2(x-1) + d_2$, and this tells us that $S(1) = d_2$. Therefore, our second equation for the coefficients of $S(x)$ is $d_2 = y_2$. Similarly, the conditions $S(2) = y_3$ and $S(3) = y_4$ give us the equations $d_3 = y_3$ and $d_4 = y_4$.

The final interpolation condition, i.e. $S(4) = y_5$, gives us a slightly different looking equation. When $x = 4$, we have $S(x) = a_4(x-3)^3 + b_4(x-3)^2 + c_4(x-3) + d_4$ according to the piecewise definition of $S(x)$. Substituting in $x = 4$ and setting the result equal to $y_5$ gives the fifth equation
$$a_4 + b_4 + c_4 + d_4 = y_5.$$

(b) In order for $S(x)$ to be continuous at $x = 1$, the formulae that define $S(x)$ on the intervals $[0, 1)$ and $[1, 2)$ should agree at $x = 1$. In other words, if we substitute $x = 1$ into $a_1 x^3 + b_1 x^2 + c_1 x + d_1$ and $a_2(x-1)^3 + b_2(x-1)^2 + c_2(x-1) + d_2$, then we should get the same answer. This gives the following equation for the coefficients of $S(x)$:
$$a_1 + b_1 + c_1 + d_1 = d_2.$$
Similarly, for $S(x)$ to be continuous at $x = 2$ we must have $a_2 + b_2 + c_2 + d_2 = d_3$, and for $S(x)$ to be continuous at $x = 3$ we must have $a_3 + b_3 + c_3 + d_3 = d_4$.

(c) We now need to write down conditions that will make $S(x)$ differentiable at $x = 1$, $2$ and $3$. Let us therefore first write down the derivative of $S(x)$ using its piecewise definition:
$$S'(x) = \begin{cases} 3a_1 x^2 + 2b_1 x + c_1 & \text{if } 0 \le x < 1 \\ 3a_2(x-1)^2 + 2b_2(x-1) + c_2 & \text{if } 1 \le x < 2 \\ 3a_3(x-2)^2 + 2b_3(x-2) + c_3 & \text{if } 2 \le x < 3 \\ 3a_4(x-3)^2 + 2b_4(x-3) + c_4 & \text{if } 3 \le x \le 4. \end{cases}$$
If $S(x)$ is to be differentiable at $x = 1$, then the first two formulae above must agree at $x = 1$. That is, we should get the same answer when we put $x = 1$ into both $3a_1 x^2 + 2b_1 x + c_1$ and $3a_2(x-1)^2 + 2b_2(x-1) + c_2$. This gives us the following equation for the coefficients of $S(x)$:
$$3a_1 + 2b_1 + c_1 = c_2.$$
Similarly, for $S(x)$ to be differentiable at $x = 2$ we must have $3a_2 + 2b_2 + c_2 = c_3$, and for $S(x)$ to be differentiable at $x = 3$ we must have $3a_3 + 2b_3 + c_3 = c_4$.

(d) Finally, to make $S(x)$ twice differentiable at $x = 1$, $2$ and $3$, let us consider the second derivative of $S(x)$:
$$S''(x) = \begin{cases} 6a_1 x + 2b_1 & \text{if } 0 \le x < 1 \\ 6a_2(x-1) + 2b_2 & \text{if } 1 \le x < 2 \\ 6a_3(x-2) + 2b_3 & \text{if } 2 \le x < 3 \\ 6a_4(x-3) + 2b_4 & \text{if } 3 \le x \le 4. \end{cases}$$
In order for $S(x)$ to be twice differentiable at $x = 1$, the first two formulae above must agree at $x = 1$. That is, if we put $x = 1$ into both $6a_1 x + 2b_1$ and $6a_2(x-1) + 2b_2$, then we should get the same answer. This means that we must have
$$6a_1 + 2b_1 = 2b_2.$$
Similarly, for $S(x)$ to be twice differentiable at $x = 2$ and $x = 3$, we must have (respectively)
$$6a_2 + 2b_2 = 2b_3 \quad\text{and}\quad 6a_3 + 2b_3 = 2b_4.$$
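The fourteen equations from parts (a)-(d) can be assembled into one linear system and inspected mechanically. Below is a minimal NumPy sketch (an added illustration; the coefficient ordering and the helper `idx` are choices made here, not part of the coursework):

```python
import numpy as np

# Coefficient order: [a1, b1, c1, d1, a2, b2, c2, d2, a3, b3, c3, d3, a4, b4, c4, d4]
def idx(piece, coeff):
    return 4 * piece + coeff   # piece 0..3; coeff 0 = a, 1 = b, 2 = c, 3 = d

M = np.zeros((14, 16))

# (a) interpolation: d1 = y1, d2 = y2, d3 = y3, d4 = y4, and a4 + b4 + c4 + d4 = y5
for i in range(4):
    M[i, idx(i, 3)] = 1
M[4, idx(3, 0):idx(3, 3) + 1] = [1, 1, 1, 1]

# (b) continuity at x = 1, 2, 3:  a_j + b_j + c_j + d_j - d_{j+1} = 0
for j in range(3):
    M[5 + j, idx(j, 0):idx(j, 3) + 1] = [1, 1, 1, 1]
    M[5 + j, idx(j + 1, 3)] = -1

# (c) first derivatives match at x = 1, 2, 3:  3a_j + 2b_j + c_j - c_{j+1} = 0
for j in range(3):
    M[8 + j, idx(j, 0):idx(j, 2) + 1] = [3, 2, 1]
    M[8 + j, idx(j + 1, 2)] = -1

# (d) second derivatives match at x = 1, 2, 3:  6a_j + 2b_j - 2b_{j+1} = 0
for j in range(3):
    M[11 + j, idx(j, 0):idx(j, 1) + 1] = [6, 2]
    M[11 + j, idx(j + 1, 1)] = -1

print(M.shape)                   # (14, 16): 14 equations in 16 unknowns
print(np.linalg.matrix_rank(M))  # expected 14, leaving 16 - 14 = 2 free variables
```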

(e) If we try to use a cubic spline to interpolate $n$ points instead of five points, then the interpolant $S(x)$ will be built from $n - 1$ cubic polynomials, each depending on four coefficients. Therefore, $S(x)$ itself will depend on a total of $4(n-1) = 4n - 4$ coefficients. If we follow steps (a)-(d), we will obtain

(1) $n$ equations for the coefficients of $S(x)$ from step (a), one for each of the $n$ interpolation conditions;

(2) $n - 2$ equations from step (b), by making $S(x)$ continuous at the $n - 2$ points where pairs of the $n - 1$ cubics are pieced together;

(3) $n - 2$ equations from step (c), by making $S(x)$ differentiable at these $n - 2$ points;

(4) $n - 2$ equations from step (d), by making $S(x)$ twice differentiable at these $n - 2$ points.

In total, this is $n + 3(n - 2) = 4n - 6$ equations for the $4n - 4$ coefficients. (In other words, there are always two free variables in the linear system for the coefficients of $S(x)$; the number of free variables does not increase as we increase the number $n$ of data points that we are trying to interpolate.)

Exercise (MATLAB) 7. Please see the separate MATLAB output file on QMPlus, which contains model code/output for this exercise.
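As a final note on Exercise 6(e): in practice the two leftover degrees of freedom are pinned down by boundary conditions, for example the "natural" conditions $S''(0) = S''(4) = 0$. SciPy's `CubicSpline` implements this choice, so it can serve as an independent cross-check of the construction above (an illustrative sketch with made-up data, not the coursework's MATLAB code):

```python
import numpy as np
from scipy.interpolate import CubicSpline

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.0, 3.0, 2.0, 5.0, 4.0])   # example values y_1, ..., y_5

S = CubicSpline(x, y, bc_type='natural')  # imposes S''(0) = S''(4) = 0

assert np.allclose(S(x), y)               # the interpolation conditions from part (a)
assert abs(S(4.0, 2)) < 1e-9              # S(., 2) is the second derivative: S''(4) = 0
```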