Methods for Solving Linear Systems, Part 2

We have studied the properties of matrices and found that there are more ways to solve linear systems. In Section 7.3, we learned that we can use matrices and elementary row operations to solve linear systems by Gaussian and Gauss-Jordan elimination. This was very similar to the regular Gaussian elimination method, which works directly with the equations of a linear system. Now that we have learned about determinants and inverses of matrices, we can solve our previous problem again using Cramer's Rule [Section 7.4] and inverse matrices [Section 7.7]. Consider, again, the following example:

Cramer's Rule

Cramer's Rule for 3 variables states that if we have the linear system

    a1x + b1y + c1z = d1
    a2x + b2y + c2z = d2
    a3x + b3y + c3z = d3

then x = Dx/D, y = Dy/D, and z = Dz/D, where

    D  = | a1 b1 c1 |      Dx = | d1 b1 c1 |
         | a2 b2 c2 |           | d2 b2 c2 |
         | a3 b3 c3 |           | d3 b3 c3 |

    Dy = | a1 d1 c1 |      Dz = | a1 b1 d1 |
         | a2 d2 c2 |           | a2 b2 d2 |
         | a3 d3 c3 |           | a3 b3 d3 |

That is, Dx, Dy, and Dz are D with the x, y, or z column replaced by the constants d1, d2, d3.

We must remember that there are limitations to Cramer's Rule. The value of D must not be zero, since dividing by zero is undefined. If D is zero, the system is either inconsistent (no solution) or consistent with dependent equations (infinitely many solutions). Therefore, if we find that D = 0, we cannot use Cramer's Rule and must resort to other methods like Gaussian elimination. This also means that if D ≠ 0, the system is consistent and the equations are independent, so whenever we can use Cramer's Rule we will always get exactly one solution.

Let's solve our linear system. Always determine D first, because if we find out that D = 0 after doing other work, then we have wasted our valuable time. D is the determinant of a 3x3 matrix. Recall that the determinant of a 3x3 matrix is the sum of the entries of any one row or column of the matrix, each multiplied by its respective cofactor. We can choose any row or column; let's choose the first row.
Here, Cij = (-1)^(i+j) Mij are the cofactors, where Mij are the minors: the 2x2 determinants that remain after eliminating the ith row and jth column. For example, M13 is the determinant that remains after eliminating the 1st row and 3rd column. Lastly, the determinant of a 2x2 matrix is:

    | a b |
    | c d |  =  ad - cb

Since D ≠ 0, we can use Cramer's Rule. This also means that we have a consistent system with independent equations, so we will get exactly one solution. Let's now determine Dx, Dy, and Dz:

(First column) I arbitrarily chose the first column to determine this determinant. Choosing any other row or column would still give me the exact same answer.
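To see the cofactor expansion as a procedure, here is a short Python sketch of expanding a 3x3 determinant along the first row. The matrix used is made up for illustration; it is not the matrix from our example.

```python
def det2(m):
    """Determinant of a 2x2 matrix [[a, b], [c, d]] = ad - cb."""
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def minor(m, i, j):
    """Minor M_ij: the 2x2 matrix left after deleting row i and column j."""
    return [[m[r][c] for c in range(3) if c != j] for r in range(3) if r != i]

def det3(m):
    """Expand along the first row: each entry times its cofactor,
    where the cofactor is (-1)^(i+j) times the minor (0-indexed here)."""
    return sum((-1) ** j * m[0][j] * det2(minor(m, 0, j)) for j in range(3))

# A hypothetical 3x3 matrix (not the one from the notes):
A = [[2, 1, -1],
     [1, 0,  3],
     [0, 2,  1]]
print(det3(A))  # -15
```

Expanding along a different row or column (with the matching signs) would give the same value.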
(Second row) (First row) Note that choosing the first row (or, for the earlier determinant, the first column) made my work much easier, since I only had to deal with two 2x2 determinants instead of three because of the zero entry. Any row or column gives the same answer, but choosing a specific one may take you less time and work. Consider this when you are calculating the determinant of a 3x3 matrix. Now we can find x, y, and z:

Therefore, the solution set is
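The whole procedure (compute D first, stop if it is zero, then form Dx, Dy, Dz by replacing columns) can be sketched in Python. The system used below is a made-up example, not the one from the notes; its unique solution is x = 5, y = 3, z = -2.

```python
def det3(m):
    """3x3 determinant by cofactor expansion along the first row."""
    a, b, c = m[0]
    d, e, f = m[1]
    g, h, i = m[2]
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def cramer(A, B):
    """Solve a 3x3 system by Cramer's Rule: x = Dx/D, y = Dy/D, z = Dz/D."""
    D = det3(A)
    if D == 0:
        # D = 0: inconsistent or dependent; fall back to Gaussian elimination.
        raise ValueError("D = 0, Cramer's Rule does not apply")
    def Dk(k):
        # Replace column k of A with the constants B.
        Ak = [row[:] for row in A]
        for r in range(3):
            Ak[r][k] = B[r]
        return det3(Ak)
    return tuple(Dk(k) / D for k in range(3))

# Hypothetical system: x + y + z = 6, 2y + 5z = -4, 2x + 5y - z = 27
A = [[1, 1, 1], [0, 2, 5], [2, 5, -1]]
B = [6, -4, 27]
print(cramer(A, B))  # (5.0, 3.0, -2.0)
```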
Solving a Linear System Using an Inverse Matrix

Yet another way we can solve a linear system is to convert it into matrix form, AX = B.

System of Equations        Matrix Form

We can check that this is true by performing the matrix multiplication to obtain our three equations. Our goal is to determine values for x, y, and z, so we want a matrix equation solved for X. We now perform matrix algebra:

    AX = B                (Given matrix equation)
    A^-1(AX) = A^-1 B     (Multiply both sides by A^-1. Note: multiply on the left side of each side of the equation)
    (A^-1 A)X = A^-1 B    (Associative property)
    IX = A^-1 B           (A^-1 A = I, where I is the identity matrix)
    X = A^-1 B            (IX = X for any X, since I is the identity matrix)

Therefore, we can solve our linear system by computing the inverse of A and multiplying it with B. Recall from our lecture the steps to calculate A^-1:

1) Form the augmented matrix [ A | I ]
2) Use elementary row operations to transform A into I: [ A | I ] -> [ I | C ]. Then C = A^-1
3) Check your work by showing AA^-1 = I

Step 1

Step 2

Our goal is to convert [ A | I ] into reduced row echelon form. Our hope is that we can convert it into the form [ I | C ]; if we can, then C = A^-1. If we cannot convert the left-hand side of the augmented matrix into the identity matrix I, then A does not have an inverse. This occurs when we get a row of all zeros on the left-hand side of the augmented matrix.

Additional Side Note

Cramer's Rule stated that if det(A) ≠ 0, then there is one unique solution (consistent system and independent equations). There is an important theorem in Linear Algebra which is not in the textbook but is worth a mention. It states that:

    A is invertible (i.e., A has an inverse) if and only if det(A) ≠ 0.

Therefore, when we use inverse matrices and can convert [ A | I ] into [ I | C ], this means that A is invertible. If A has an inverse, then det(A) ≠ 0. And if det(A) ≠ 0, then there is only one solution (consistent system and independent equations).
Therefore, the method of solving linear systems using inverse matrices can only work when there is exactly one solution, just like Cramer's Rule! When the inverse doesn't exist (i.e., when det(A) = 0), you have to go back to methods like Gaussian elimination from Sections 7.2 and 7.3. Gaussian elimination can deal with any scenario.
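A tiny illustration of this connection, using a made-up 2x2 matrix rather than our example: a matrix whose rows are dependent has determinant zero, so it has no inverse and neither Cramer's Rule nor the inverse-matrix method applies.

```python
def det2(m):
    """Determinant of a 2x2 matrix [[a, b], [c, d]] = ad - cb."""
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

# Hypothetical matrix: the second row is twice the first (dependent equations).
A = [[1, 2],
     [2, 4]]
print(det2(A))  # 0 -> A is not invertible; use Gaussian elimination instead
```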
We now use row operations on [ A | I ]:

    Interchange rows
    Multiply a row by a nonzero constant
    Multiply a row by a nonzero constant and add it to another row

We have converted [ A | I ] into [ I | C ], where

Step 3

We check that AA^-1 = I.
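The row-reduction procedure above can be sketched in code: build [ A | I ], pivot down each column using the three row operations, and read A^-1 off the right half. This is a sketch on a hypothetical 2x2 matrix, not the A from our example.

```python
def invert(A):
    """Invert a square matrix by row-reducing [ A | I ] to [ I | C ]."""
    n = len(A)
    # Step 1: form the augmented matrix [ A | I ].
    M = [[float(x) for x in row] + [1.0 if r == c else 0.0 for c in range(n)]
         for r, row in enumerate(A)]
    # Step 2: transform the left half into I with elementary row operations.
    for col in range(n):
        # Find a nonzero pivot; interchange rows if needed.
        pivot = next((r for r in range(col, n) if M[r][col] != 0), None)
        if pivot is None:
            # A row of zeros on the left: A has no inverse.
            raise ValueError("A is not invertible")
        M[col], M[pivot] = M[pivot], M[col]
        # Multiply the pivot row by a nonzero constant to make the pivot 1.
        p = M[col][col]
        M[col] = [x / p for x in M[col]]
        # Add multiples of the pivot row to zero out the rest of the column.
        for r in range(n):
            if r != col:
                factor = M[r][col]
                M[r] = [x - factor * y for x, y in zip(M[r], M[col])]
    return [row[n:] for row in M]  # the right half is now C = A^-1

# Hypothetical matrix (not the notes' A):
A = [[2, 1],
     [1, 1]]
print(invert(A))  # [[1.0, -1.0], [-1.0, 2.0]]
```

You can carry out Step 3 by multiplying A by the result and checking that you get the identity matrix.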
So, X = A^-1 B.

Therefore, the solution set is

We have now solved the same linear system using regular Gaussian elimination, matrix Gaussian elimination, matrix Gauss-Jordan elimination, Cramer's Rule, and inverse matrices. Remember that Cramer's Rule and inverse matrices are powerful methods, but they only work when there is exactly one solution (a consistent system with independent equations). The other methods, however, can also deal with inconsistent systems (no solutions) and consistent systems with dependent equations (infinitely many solutions).
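The final step, X = A^-1 B, is just one matrix-vector multiplication once A^-1 is known. A sketch with a made-up 2x2 system (2x + y = 5, x + y = 3), whose inverse was computed above by hand:

```python
def matvec(M, v):
    """Multiply a matrix by a column vector: each entry is a row dot v."""
    return [sum(m * x for m, x in zip(row, v)) for row in M]

# Hypothetical system: A = [[2, 1], [1, 1]], B = [5, 3].
A_inv = [[1, -1], [-1, 2]]  # inverse of [[2, 1], [1, 1]]
B = [5, 3]
X = matvec(A_inv, B)
print(X)  # [2, 1], i.e. x = 2, y = 1
```

Substituting back confirms the answer: 2(2) + 1 = 5 and 2 + 1 = 3.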