LS. LINEAR SYSTEMS

LS.1 Review of Linear Algebra

In these notes, we will investigate a way of handling a linear system of ODE's directly, instead of using elimination to reduce it to a single higher-order equation. This gives important new insights into such systems, and it is usually a more convenient and faster way of solving them.

The method makes use of some elementary ideas about linear algebra and matrices, which we will assume you know from your work in multivariable calculus. Your textbook contains a section (5.3) reviewing most of these facts, with numerical examples. Another source is the 18.02 Supplementary Notes, which contains a beginning section on linear algebra covering approximately the right material. For your convenience, what you need to know is summarized briefly in this section. Consult the above references for more details and for numerical examples.

1. Vectors.

A vector (or n-vector) is an n-tuple of numbers; they are usually real numbers, but we will sometimes allow them to be complex numbers, and all the rules and operations below apply just as well to n-tuples of complex numbers. (In the context of vectors, a single real or complex number, i.e., a constant, is called a scalar.)

The n-tuple can be written horizontally as a row vector or vertically as a column vector. In these notes it will almost always be a column. To save space, we will sometimes write the column vector as shown below; the small T stands for transpose, and means: change the row to a column.

    a = (a_1, ..., a_n)      row vector
    a = (a_1, ..., a_n)^T    column vector

These notes use boldface for vectors; in handwriting, place an arrow over the letter.

Vector operations. The three standard operations on n-vectors are:

    addition:                    (a_1, ..., a_n) + (b_1, ..., b_n) = (a_1 + b_1, ..., a_n + b_n);
    multiplication by a scalar:  c (a_1, ..., a_n) = (c a_1, ..., c a_n);
    scalar product:              (a_1, ..., a_n) · (b_1, ..., b_n) = a_1 b_1 + ... + a_n b_n.

2. Matrices and Determinants.

An m × n matrix A is a rectangular array of numbers (real or complex) having m rows and n columns. The element in the i-th row and j-th column is called the ij-th entry and written a_ij. The matrix itself is sometimes written (a_ij), i.e., by giving its generic entry, inside the matrix parentheses.

A 1 × n matrix is a row vector; an n × 1 matrix is a column vector.

Matrix operations. These are:

addition: if A and B are both m × n matrices, they are added by adding the corresponding entries; i.e., if A = (a_ij) and B = (b_ij), then A + B = (a_ij + b_ij).

multiplication by a scalar: to get cA, multiply every entry of A by the scalar c; i.e., if A = (a_ij), then cA = (c a_ij).
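For readers who like to check these rules on a machine, here is a short illustrative sketch of the vector and matrix operations above. Python with NumPy is assumed purely for convenience; the notes themselves expect hand computation.

```python
import numpy as np

# Two 3-vectors, written here as 1-D arrays.
a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 0.0, -1.0])

print(a + b)         # addition:        (a_1 + b_1, ..., a_n + b_n)  -> [5. 2. 2.]
print(2 * a)         # scalar multiple: (2 a_1, ..., 2 a_n)          -> [2. 4. 6.]
print(np.dot(a, b))  # scalar product:  a_1 b_1 + ... + a_n b_n      -> 1.0

# Two matrices of the same size; A + B and cA act entry by entry.
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])
B = np.ones((2, 3))
print(A + B)         # corresponding entries added
print(0.5 * A)       # every entry multiplied by the scalar 0.5
```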

matrix multiplication: if A is an m × n matrix and B is an n × k matrix, their product AB is an m × k matrix, defined by using the scalar product operation:

    ij-th entry of AB = (i-th row of A) · (j-th column of B)^T.

The definition makes sense since both vectors on the right are n-vectors. In what follows, the most important cases of matrix multiplication will be

(i) A and B are square matrices of the same size, i.e., both A and B are n × n matrices. In this case, multiplication is always possible, and the product AB is again an n × n matrix.

(ii) A is an n × n matrix and B = b, a column n-vector. In this case, the matrix product Ab is again a column n-vector.

Laws satisfied by the matrix operations. For any matrices for which the products and sums below are defined, we have

    (A B) C = A (B C)                                   (associative law)
    A (B + C) = A B + A C,   (A + B) C = A C + B C      (distributive laws)
    A B ≠ B A in general                                (commutative law fails)

Identity matrix. We denote by I_n the n × n matrix with 1's on the main diagonal (upper left to bottom right) and 0's elsewhere. If A is an arbitrary n × n matrix, it is easy to check from the definition of matrix multiplication that

    A I_n = A   and   I_n A = A.

I_n is called the identity matrix of order n; the subscript n is often omitted.

Determinants. Associated with every square matrix A is a number, written |A| or det A, and called the determinant of A. For these notes, it will be enough if you can calculate the determinant of 2 × 2 and 3 × 3 matrices, by any method you like. Theoretically, the determinant should not be confused with the matrix itself; the determinant is a number, the matrix is the square array. But everyone puts vertical lines on either side of the matrix to indicate its determinant, and then uses phrases like "the first row of the determinant," meaning the first row of the corresponding matrix.

An important formula which everyone uses and no one can prove is

(1)    det(AB) = det A · det B.

Inverse matrix. A square matrix A is called nonsingular or invertible if det A ≠ 0. If A is nonsingular, there is a unique square matrix B of the same size, called the inverse to A, having the property that

    B A = I   and   A B = I.

This matrix B is denoted by A^{-1}. To confirm that a given matrix B is the inverse to A, you only have to check one of the above equations; the other is then automatically true. Different ways of calculating A^{-1} are given in the references. However, if A is a 2 × 2 matrix, as it usually will be in the notes, it is easiest simply to use the formula for it:
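Formula (1) and the defining property of the inverse are easy to verify numerically on a small example. The sketch below is an illustration only, with NumPy assumed; it checks det(AB) = det A · det B for one pair of 2 × 2 matrices, observes that AB ≠ BA for that pair, and confirms A A^{-1} = I = A^{-1} A.

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
B = np.array([[0.0, 1.0],
              [5.0, 6.0]])

# Formula (1): det(AB) = det A * det B  (equal up to round-off).
print(np.linalg.det(A @ B), np.linalg.det(A) * np.linalg.det(B))

# The commutative law fails in general: AB and BA differ for this pair.
print(np.allclose(A @ B, B @ A))          # False

# det A = -2, which is nonzero, so A is nonsingular and has an inverse.
Ainv = np.linalg.inv(A)
print(np.allclose(A @ Ainv, np.eye(2)))   # True:  A A^{-1} = I
print(np.allclose(Ainv @ A, np.eye(2)))   # True:  A^{-1} A = I
```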

(2)    A = [ a   b ]           A^{-1} = (1/|A|) [  d  -b ]
           [ c   d ] ,                          [ -c   a ] .

Remember this as a procedure, rather than as a formula: switch the entries on the main diagonal, change the sign of the other two entries, and divide every entry by the determinant. (Often it is better for subsequent calculations to leave the determinant factor outside, rather than to divide all the terms in the matrix by det A.) As an example of (2),

       [ 1  -1 ]^{-1}          [  2   1 ]   [  1/2   1/4 ]
       [ 2   2 ]       = (1/4) [ -2   1 ] = [ -1/2   1/4 ] .

To calculate the inverse of a nonsingular 3 × 3 matrix, see for example the 18.02 notes.

3. Square systems of linear equations.

Matrices and determinants were originally invented to handle in an efficient way the solution of a system of simultaneous linear equations. This is still one of their most important uses. We give a brief account of what you need to know. This is not in your textbook, but can be found in the 18.02 Notes.

We will restrict ourselves to square systems, those having as many equations as they have variables (or unknowns, as they are frequently called). Our notation will be:

    A = (a_ij),              a square n × n matrix of constants,
    x = (x_1, ..., x_n)^T,   a column vector of unknowns,
    b = (b_1, ..., b_n)^T,   a column vector of constants;

then the square system

    a_11 x_1 + ... + a_1n x_n = b_1
    a_21 x_1 + ... + a_2n x_n = b_2
    ..............................
    a_n1 x_1 + ... + a_nn x_n = b_n

can be abbreviated by the matrix equation

(3)    A x = b.

If b = 0 = (0, ..., 0)^T, the system (3) is called homogeneous; if this is not assumed, it is called inhomogeneous. The distinction between the two kinds of system is significant. There are two important theorems about solving square systems: an easy one about inhomogeneous systems, and a more subtle one about homogeneous systems.

Theorem about square inhomogeneous systems. If A is nonsingular, the system (3) has a unique solution, given by

(4)    x = A^{-1} b.

Proof. Suppose x represents a solution to (3). We have

    A x = b  ⇒  A^{-1}(A x) = A^{-1} b
             ⇒  (A^{-1} A) x = A^{-1} b,   by associativity;
             ⇒  I x = A^{-1} b,            definition of inverse;
             ⇒  x = A^{-1} b,              definition of I.
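As a numerical companion to formula (2) and the theorem (4), the following sketch builds the 2 × 2 inverse directly from (2) and uses it to solve a small inhomogeneous system. It is an illustration only: NumPy is assumed, and the helper name inv2 is ours, not a standard routine.

```python
import numpy as np

def inv2(A):
    """Inverse of a nonsingular 2 x 2 matrix by formula (2): switch the
    diagonal entries, negate the other two, divide by the determinant."""
    a, b = A[0]
    c, d = A[1]
    det = a * d - b * c
    if det == 0:
        raise ValueError("matrix is singular")
    return np.array([[d, -b],
                     [-c, a]]) / det

A = np.array([[1.0, -1.0],
              [2.0,  2.0]])    # the example above: det A = 4, so A is nonsingular
b = np.array([3.0, 2.0])

x = inv2(A) @ b                # x = A^{-1} b, as in (4)
print(x)                                      # -> [ 2. -1.]
print(np.allclose(x, np.linalg.solve(A, b)))  # agrees with the library solver
```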

This gives a formula for the solution, and therefore shows it is unique if it exists. It does exist, since it is easy to check that A^{-1} b is a solution to (3).

The situation with respect to a homogeneous square system A x = 0 is different. This always has the solution x = 0, which we call the trivial solution; the question is: when does it have a nontrivial solution?

Theorem about square homogeneous systems. Let A be a square matrix. Then

(5)    A x = 0 has a nontrivial solution  ⇔  det A = 0 (i.e., A is singular).

Proof. The ⇒ direction follows from (4), since if A is nonsingular, (4) tells us that A x = 0 can have only the trivial solution x = 0. The ⇐ direction follows from the criterion for linear independence below, which we are not going to prove. But in 18.03, you will always be able to show by calculation that the system has a nontrivial solution if A is singular.

4. Linear independence of vectors.

Let x_1, x_2, ..., x_k be a set of n-vectors. We say they are linearly dependent (or simply, dependent) if there is a non-zero relation connecting them:

(6)    c_1 x_1 + ... + c_k x_k = 0    (c_i constants, not all 0).

If there is no such relation, they are called linearly independent (or simply, independent). This is usually phrased in a positive way: the vectors are linearly independent if the only relation among them is the zero relation, i.e.,

(7)    c_1 x_1 + ... + c_k x_k = 0  ⇒  c_i = 0 for all i.

We will use this definition mostly for just two or three vectors, so it is useful to see what it says in these low-dimensional cases. For k = 2, it says

(8)    x_1 and x_2 are dependent  ⇔  one is a constant multiple of the other.

For if, say, x_2 = c x_1, then c x_1 - x_2 = 0 is a non-zero relation; while conversely, if we have a non-zero relation c_1 x_1 + c_2 x_2 = 0 with, say, c_2 ≠ 0, then x_2 = -(c_1/c_2) x_1.

By similar reasoning, one can show that

(9)    x_1, x_2, x_3 are dependent  ⇔  one of them is a linear combination of the other two.

Here by a linear combination of vectors we mean a sum of scalar multiples of them, i.e., an expression like that on the left side of (6). If we think of the three vectors as origin vectors in 3-space, the geometric interpretation of (9) is

(10)    three origin vectors in 3-space are dependent  ⇔  they lie in the same plane.

For if they are dependent, say x_3 = d_1 x_1 + d_2 x_2, then (thinking of them as origin vectors) the parallelogram law for vector addition shows that x_3 lies in the plane of x_1 and x_2; see the figure. Conversely, the same figure shows that if the vectors lie in the same plane and, say, x_1 and x_2 span the plane (i.e., don't lie on a line), then by completing the parallelogram, x_3 can be expressed as a linear combination of x_1 and x_2. (If they all lie on a line, they are scalar multiples of each other and therefore dependent.)

[Figure: x_3 drawn as the diagonal of the parallelogram with sides d_1 x_1 and d_2 x_2, lying in the plane of x_1 and x_2.]
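Statement (10) also suggests a computational test: to decide whether x_3 lies in the plane of x_1 and x_2, try to solve x_3 = d_1 x_1 + d_2 x_2 for the coefficients d_1, d_2, for instance by least squares. The sketch below (NumPy assumed, for illustration) does this for a triple that is dependent by construction.

```python
import numpy as np

x1 = np.array([1.0, 0.0, 1.0])
x2 = np.array([0.0, 1.0, 1.0])
x3 = 2 * x1 + 3 * x2             # built as a combination, so the triple is dependent

# Solve x3 = d1*x1 + d2*x2 in the least-squares sense.
M = np.column_stack([x1, x2])    # 3 x 2 matrix whose columns are x1, x2
d, _res, _rank, _ = np.linalg.lstsq(M, x3, rcond=None)

print(d)                         # -> [2. 3.], the coefficients d_1, d_2
print(np.allclose(M @ d, x3))    # True: x3 lies in the plane spanned by x1 and x2
```

If x_1 and x_2 are themselves independent, a nonzero residual (M d ≠ x_3 for every d) would mean instead that x_3 lies outside their plane, so that the triple is independent.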

Linear independence and determinants. We can use (10) to see that

(11)    the rows of a 3 × 3 matrix A are dependent  ⇔  det A = 0.

Proof. If we denote the rows by a_1, a_2, and a_3, then from 18.02,

    volume of the parallelepiped spanned by a_1, a_2, a_3  =  a_1 · (a_2 × a_3)  =  det A,

so that

    a_1, a_2, a_3 lie in a plane  ⇔  det A = 0.

The above statement (11) generalizes to an n × n matrix A; we rephrase it in the statement below by changing both sides to their negatives. (We will not prove it, however.)

Determinantal criterion for linear independence. Let a_1, ..., a_n be n-vectors, and A the square matrix having these vectors for its rows (or columns). Then

(12)    a_1, ..., a_n are linearly independent  ⇔  det A ≠ 0.

Remark. The theorem on square homogeneous systems (5) follows from the criterion (12), for if we let x be the column vector of n variables, and A the matrix whose columns are a_1, ..., a_n, then

(13)    A x = (a_1 ... a_n) (x_1, ..., x_n)^T = a_1 x_1 + ... + a_n x_n,

and therefore

    A x = 0 has only the solution x = 0
        ⇔  a_1 x_1 + ... + a_n x_n = 0 has only the solution x = 0,   by (13);
        ⇔  a_1, ..., a_n are linearly independent,                    by (7);
        ⇔  det A ≠ 0,                                                 by the criterion (12).

Exercises: Section 4A
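Finally, the determinantal criterion (12) gives a quick computational test for independence: place the vectors in the rows of a square matrix and compute its determinant. In the sketch below (again with NumPy assumed, purely for illustration), the first triple is dependent by construction, so its determinant vanishes; the second has a nonzero determinant and is therefore independent.

```python
import numpy as np

x1 = np.array([1.0, 0.0, 1.0])
x2 = np.array([0.0, 1.0, 1.0])
x3 = 2 * x1 + 3 * x2                # dependent: x3 is a combination of x1 and x2
y3 = np.array([0.0, 0.0, 1.0])      # not in the plane of x1 and x2

A_dep = np.vstack([x1, x2, x3])     # rows are x1, x2, x3
A_ind = np.vstack([x1, x2, y3])     # rows are x1, x2, y3

print(np.linalg.det(A_dep))         # 0 (up to round-off): the rows are dependent
print(np.linalg.det(A_ind))         # 1.0: the rows are independent, by (12)
```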