Topics

Vectors (column matrices): vector addition and scalar multiplication.
The matrix of a linear function y = Ax.
The elements of a matrix A: A_ij (or a_ij) lives in row i and column j.
Definition of a matrix (m x n) times a vector (n x 1).
Matrix operations: A + B (addition), cA (scalar multiplication), AB (matrix multiplication).
Basic multiplication operation: a row times a column.
Individual entries in a matrix product: (AB)_ij = (row i of A)(column j of B).
Columns of a matrix product: column j of AB = A (column j of B).
Rows of a matrix product: row i of AB = (row i of A) B.

Two views of a matrix times a vector:
Ax is a vector whose components are the rows of A times x.
Ax is a linear combination of the columns of A; the components of x are the coefficients.
Example: a specific matrix times a vector, written out as a linear combination of the columns of A.

Similarly, a row times a matrix is a row whose entries are the products of that row with each column of A; equivalently, it is a linear combination of the rows of A, with the entries of the row as the coefficients.
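
The entry, column, and row rules for a matrix product are easy to check numerically. A minimal numpy sketch, using made-up matrices (not an example from the notes):

```python
import numpy as np

# Made-up example matrices with a matching inner dimension.
A = np.array([[1., 2., 0.],
              [3., 1., 4.]])        # 2 x 3
B = np.array([[2., 1.],
              [0., 5.],
              [1., 1.]])            # 3 x 2

AB = A @ B

i, j = 1, 0
# Entry (i, j) of AB is (row i of A) times (column j of B).
assert np.isclose(AB[i, j], A[i, :] @ B[:, j])
# Column j of AB is A times (column j of B).
assert np.allclose(AB[:, j], A @ B[:, j])
# Row i of AB is (row i of A) times B.
assert np.allclose(AB[i, :], A[i, :] @ B)
```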

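Likewise, the two views of a matrix times a vector can be verified directly; a small numpy sketch with made-up numbers:

```python
import numpy as np

A = np.array([[1., 2., 3.],
              [4., 5., 6.]])
x = np.array([1., 0., 2.])

# View 1: each component of Ax is a row of A times x.
rows_times_x = np.array([A[i, :] @ x for i in range(A.shape[0])])

# View 2: Ax is a linear combination of the columns of A,
# with the components of x as the coefficients.
comb_of_cols = sum(x[j] * A[:, j] for j in range(A.shape[1]))

assert np.allclose(A @ x, rows_times_x)
assert np.allclose(A @ x, comb_of_cols)
```
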
Transpose of a matrix: A^T is obtained from the matrix A by turning the columns of A into the rows of A^T; at the same time the rows of A become the columns of A^T. Mathematically, (A^T)_ij = A_ji.

Dot product of two vectors: if we have vectors u = (u_1, u_2, ..., u_n) and v = (v_1, v_2, ..., v_n), then their dot product u . v is defined by u . v = u_1 v_1 + u_2 v_2 + ... + u_n v_n (the sum of products of corresponding components). We can carry out, or express, a dot product using matrix operations: if we think of the vectors as column matrices, then u . v = u^T v = v^T u.

Matrix operations, properties: A(u + v) = Au + Av and A(cu) = c(Au); these are called the linearity properties of matrix multiplication. A(B + C) = AB + AC is the more general distributive property, and (AB)C = A(BC) is the associative law of multiplication; see others in the book.

The identity matrix I (or I_n): it is a square matrix with ones down the diagonal and zeroes elsewhere. AI = A and IA = A, where in each case I represents the identity matrix of the appropriate size so that the multiplication is defined.

Linear systems: Ax = b. These are solved by Gaussian elimination, a series of operations on equations in which we add multiples of equations to other equations, interchange the order of equations, or multiply equations by a nonzero constant, in order to produce equivalent systems (systems having the same solution set) in which variables have been "eliminated" in a systematic way, leaving an easily solvable system. We keep track of the coefficient matrix and the right-hand side as these operations are carried out by assembling them in an augmented matrix [A b]; operations on equations then become elementary row operations on the augmented matrix. These elementary row operations are:
1) Subtract a multiple of one row from another row.
2) Interchange two rows.
3) Multiply a row by a nonzero constant.
Our goal in carrying out these operations is to obtain the reduced row echelon form of [A b]. We first carry out the steps to put [A b] into row echelon form, working from the upper left to the lower right, eliminating (i.e. making zero) the entries below the pivots. Then we obtain reduced row echelon form by working from the lower-right pivot back to the upper-left pivot, eliminating the entries above the pivots. We divide each row first so as to make its pivot equal to one.
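
A quick numpy sketch, with made-up vectors, of the identities u . v = u^T v = v^T u and AI = IA = A:

```python
import numpy as np

# Made-up example vectors.
u = np.array([1., 2., 3.])
v = np.array([4., 0., 5.])

dot = np.dot(u, v)                 # sum of products of corresponding components

# Treating the vectors as column matrices: u . v = u^T v = v^T u.
u_col, v_col = u.reshape(-1, 1), v.reshape(-1, 1)
assert np.isclose(dot, (u_col.T @ v_col).item())
assert np.isclose(dot, (v_col.T @ u_col).item())

# The identity matrix leaves a matrix unchanged: AI = A and IA = A.
A = np.array([[1., 2.],
              [3., 4.]])
assert np.allclose(A @ np.eye(2), A)
assert np.allclose(np.eye(2) @ A, A)
```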

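The three elementary row operations translate directly into array operations; a minimal numpy sketch on a made-up augmented matrix (not the example from the notes):

```python
import numpy as np

# Made-up augmented matrix [A | b] for a 3 x 3 system.
M = np.array([[ 2.,  1., -1.,   8.],
              [-3., -1.,  2., -11.],
              [-2.,  1.,  2.,  -3.]])

# 1) Subtract a multiple of one row from another row:
#    eliminate the leading entry of row 1 using row 0.
M[1] = M[1] - (M[1, 0] / M[0, 0]) * M[0]

# 2) Interchange two rows: swap rows 0 and 2.
M[[0, 2]] = M[[2, 0]]

# 3) Multiply a row by a nonzero constant: scale row 0 so its pivot is 1.
M[0] = M[0] / M[0, 0]

print(M)   # an equivalent system: the solution set is unchanged
```
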
Here is a calculation of the reduced row echelon form of a matrix: the worked example reduces an augmented matrix [A b] step by step, first to a row echelon form and then to the reduced row echelon form.

We first obtain row echelon form (zeroes below the pivots) and then reduced row echelon form. In doing these by hand it is sometimes convenient to use steps that avoid fractions as long as possible; the row echelon form is not unique and will depend on the elimination steps used. However, the reduced row echelon form is always unique.

From the reduced row echelon form of [A b] we can "read off" all the solutions. First determine whether a solution exists. If so, we set each free variable equal to a parameter name and then solve for each basic variable. At the end, we write the solutions in vector form. In this example we would obtain a single free variable; calling its parameter t, the general solution in vector form is a specific vector plus t times a second specific vector.

Character of solutions of Ax = b based on the value of r = rank A = the number of not-all-zero rows in a row echelon form of A = the number of pivots (leading entries) in a row echelon form of A:

If r = m (a pivot in every row), then there is a solution of Ax = b for each b.
If r < m (some all-zero rows), then for some choices of b there is no solution.
If r = n (a pivot in every column), then, given b, any solution of Ax = b is unique (though a solution may or may not exist, depending on m and b).
If r < n, then no solution of Ax = b can be unique (there is at least one free variable).

Special case of square systems (m = n):
If A is square and Ax = b has a solution for each b, then those solutions are unique (since in that case r = m = n).
If A is square and Ax = 0 (the homogeneous system) has a unique solution, then Ax = b has a unique solution for each b (since in that case r = n = m).
If A is square and r = m = n, then A ~ I, i.e. the reduced row echelon form of A is the identity matrix.
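
One convenient way to experiment with this on a computer is sympy, whose Matrix.rref() returns the reduced row echelon form together with the pivot columns; a minimal sketch with a made-up augmented matrix:

```python
from sympy import Matrix

# Made-up augmented matrix [A | b]; exact rational arithmetic avoids round-off.
M = Matrix([[1, 2, -1, 3],
            [2, 4,  1, 8],
            [1, 2,  2, 5]])

R, pivot_cols = M.rref()   # reduced row echelon form and pivot column indices
print(R)
print(pivot_cols)          # (0, 2): the second column has no pivot, so the
                           # second unknown is a free variable
```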

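The rank conditions can be checked numerically as well; a small numpy sketch (made-up matrices) that classifies a system by comparing r = rank A with n and with the rank of the augmented matrix (the consistency test stated next):

```python
import numpy as np

def classify(A, b):
    """Describe existence and uniqueness of solutions of Ax = b using ranks."""
    m, n = A.shape
    r = np.linalg.matrix_rank(A)
    r_aug = np.linalg.matrix_rank(np.column_stack([A, b]))
    if r_aug > r:
        return "inconsistent: no solution for this b"
    if r == n:
        return "consistent with a unique solution"
    return f"consistent with infinitely many solutions ({n - r} free variable(s))"

A = np.array([[1., 2.],
              [2., 4.]])
print(classify(A, np.array([3., 6.])))   # infinitely many solutions
print(classify(A, np.array([3., 7.])))   # no solution
```
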
A solution to Ax = b exists for any b if rank A = m; more generally, for a given b, a solution exists if rank A = rank [A b].

***********************************************************

Another look at the general solution of Ax = b.

This is an important theorem about the structure of a general solution: the general solution of Ax = b can be written as x = x_h + x_p, where x_h is the general solution of the homogeneous system Ax = 0 and x_p represents some particular (i.e. specific) solution of Ax = b.

Why is this true? Well, note that A(x_h + x_p) = A x_h + A x_p = 0 + b = b, so all of the vectors x_h + x_p are solutions of Ax = b. Conversely, if we have any other solution of Ax = b, say x*, then A(x* - x_p) = A x* - A x_p = b - b = 0, so x* - x_p is a solution of the homogeneous equation. This means that x* is among the solutions x_h + x_p.

We study solutions of homogeneous systems. Suppose a matrix A has a reduced row echelon form in which three columns contain no pivot, so that there are three free variables. As usual, we set the free variables equal to parameters r, s, and t. It is not hard to see from the reduced row echelon form that every component of the general solution is a linear combination of r, s, and t, so the general solution can be written as

x = r v_1 + s v_2 + t v_3

for three specific vectors v_1, v_2, v_3. This is a general linear combination of three specific solutions, corresponding to the free-variable choices (r, s, t) = (1, 0, 0), (0, 1, 0), and (0, 0, 1). We call these specific solutions fundamental solutions because they can be combined to produce the general solution.
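
A small sympy sketch (made-up system) of the x = x_h + x_p structure: nullspace() returns a basis of solutions of Ax = 0 (the fundamental solutions), and adding any combination of them to one particular solution still solves Ax = b. The particular solution is obtained here via the pseudoinverse, purely as a convenient choice:

```python
from sympy import Matrix, symbols

# Made-up consistent system with two free variables.
A = Matrix([[1, 2, 0, 1],
            [0, 0, 1, 2]])
b = Matrix([3, 4])

x_p = A.pinv() * b            # one particular solution of Ax = b
homog_basis = A.nullspace()   # fundamental solutions of Ax = 0

r, s = symbols('r s')
x = x_p + r * homog_basis[0] + s * homog_basis[1]   # general solution

print((A * x - b).expand())   # the zero vector, for every r and s
```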

We can observe the following: to find the general solution of Ax = 0, we generate fundamental solutions by, in turn, setting each free variable equal to one with the others zero. The fundamental solutions thus generated combine with arbitrary coefficients to produce the general solution of Ax = 0.

Now suppose a system Ax = b has a given reduced row echelon form of its augmented matrix [A b]. We can generate a particular solution x_p by setting each free variable to zero and reading off the basic variables. The general solution is then obtained by adding in the general solution of the homogeneous equation, previously obtained as a general linear combination of our fundamental solutions, namely

x = x_h + x_p = x_p + r v_1 + s v_2 + t v_3.

This shows how we can "read off" a general solution of Ax = b from the reduced row echelon form of [A b].
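
As a closing sketch, here is a small Python function (an illustration of the recipe above, not code from the notes) that reads a particular solution and the fundamental solutions directly off the reduced row echelon form of [A | b]: free variables set to zero give x_p, and each free variable set to one in turn, with right-hand side zero, gives a fundamental solution.

```python
from sympy import Matrix

def read_off_general_solution(aug):
    """Given an augmented matrix [A | b], return (x_p, fundamental_solutions)."""
    n = aug.cols - 1                       # number of unknowns
    R, pivots = aug.rref()                 # reduced row echelon form, pivot columns
    if n in pivots:                        # a pivot in the b column: no solution
        return None, []
    free = [j for j in range(n) if j not in pivots]

    def solve_with(free_values, rhs):
        x = [0] * n
        for j, val in zip(free, free_values):
            x[j] = val
        for row, p in enumerate(pivots):   # basic variable = rhs minus free-column terms
            x[p] = rhs[row] - sum(R[row, j] * x[j] for j in free)
        return Matrix(x)

    b_col = [R[row, n] for row in range(len(pivots))]
    zeros = [0] * len(pivots)
    x_p = solve_with([0] * len(free), b_col)               # all free variables = 0
    fundamentals = [solve_with([1 if k == i else 0 for k in range(len(free))], zeros)
                    for i in range(len(free))]             # each free variable = 1 in turn
    return x_p, fundamentals

# Made-up example:
aug = Matrix([[1, 2, 0, 1, 3],
              [0, 0, 1, 2, 4]])
x_p, fund = read_off_general_solution(aug)
print(x_p, fund)
```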