Differential Equations and Linear Algebra
Supplementary Notes

Simon J.A. Malham

Department of Mathematics, Heriot-Watt University

Contents

Chapter 1. Linear algebraic equations
  1.1. Gaussian elimination in practice
  1.2. Exercises

Appendix A
  A.1. Euler's formula
  A.2. Electrical circuits and networks
  A.3. Determinants and inverse matrices
  A.4. Matrix transpose

Bibliography

CHAPTER 1

Linear algebraic equations

1.1. Gaussian elimination in practice

1.1.1. Operation counts. Which method should we use, Gaussian elimination or the Gauss-Jordan method? The answer lies in the efficiency of the respective methods when solving large systems.

Gaussian elimination. Gaussian elimination with back-substitution applied to an $n \times n$ system requires
$$\frac{n^3}{3} + n^2 - \frac{n}{3} \quad \text{multiplications/divisions},$$
$$\frac{n^3}{3} + \frac{n^2}{2} - \frac{5n}{6} \quad \text{additions/subtractions},$$
$$\frac{2n^3}{3} + \frac{3n^2}{2} - \frac{7n}{6} \quad \text{flops in total}.$$
For large values of $n$, i.e. large systems, the $2n^3/3$ term dominates and we would normally quote that Gaussian elimination with back substitution for an $n \times n$ system requires $2n^3/3$ flops (flops = floating point operations).

Gauss-Jordan. Gauss-Jordan applied to an $n \times n$ system requires
$$\frac{n^3}{2} + \frac{n^2}{2} \quad \text{multiplications/divisions},$$
$$\frac{n^3}{2} - \frac{n}{2} \quad \text{additions/subtractions},$$
$$n^3 + \frac{n^2}{2} - \frac{n}{2} \quad \text{flops in total}.$$
For large values of $n$, Gauss-Jordan requires about $n^3$ flops.

Hence Gauss-Jordan requires about 50% more effort than Gaussian elimination, and this difference becomes significant when $n$ is large. For example, if $n = 100$, then $2n^3/3 \approx 666{,}666$ while $n^3 = 1{,}000{,}000$, which amounts to roughly $333{,}333$ extra flops; see Meyer [10]. Thus Gauss-Jordan is not recommended for solving linear systems of equations that arise in practical situations, though it does have theoretical advantages, for example for finding the inverse of a matrix.
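As a quick way to see how these counts scale, here is a minimal Python sketch (not part of the original notes; the function names are purely illustrative) that tabulates the totals quoted above and the ratio between the two methods.

```python
# Flop counts quoted above for an n x n system: Gaussian elimination with
# back substitution versus the Gauss-Jordan method.

def gaussian_elimination_flops(n):
    mults = n**3 / 3 + n**2 - n / 3          # multiplications/divisions
    adds = n**3 / 3 + n**2 / 2 - 5 * n / 6   # additions/subtractions
    return mults + adds                      # = 2n^3/3 + 3n^2/2 - 7n/6

def gauss_jordan_flops(n):
    mults = n**3 / 2 + n**2 / 2
    adds = n**3 / 2 - n / 2
    return mults + adds                      # = n^3 + n^2/2 - n/2

for n in (10, 100, 1000):
    ge, gj = gaussian_elimination_flops(n), gauss_jordan_flops(n)
    print(f"n = {n:4d}:  elimination ~ {ge:13.0f},  Gauss-Jordan ~ {gj:13.0f},"
          f"  ratio = {gj / ge:.2f}")
# For n = 100 the ratio is already close to 1.5, i.e. about 50% more work.
```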

1.1.2. Finite accuracy computer arithmetic. When we implement Gaussian elimination on a computer, in practice for large systems of equations, we need to contend with the fact that we can only perform calculations to a given finite accuracy (modern algebraic computing packages can perform exact integer arithmetic but become impractical for large systems). Here we will simply present the main issues associated with finite accuracy arithmetic, providing some cursory examples (borrowed from Meyer [10]) demonstrating their consequences, and give practical solutions for avoiding them. For more details and illustrative examples see the excellent book by Meyer [10].

1.1.3. Example. Consider the following system:
$$47x + 28y = 19,$$
$$89x + 53y = 36.$$
The exact solution is $x = 1$ and $y = -1$. However, using three-digit arithmetic you get $x = -0.191$ and $y = 1.00$.

1.1.4. Partial pivoting. In this last example, rounding errors are responsible for the anomalous answer, and though such errors can never be completely eliminated, there are some practical strategies to help minimize these errors.

Partial pivoting. The idea is to maximize the magnitude of the pivot at each step using only row interchanges. More precisely, at each step we interchange (if necessary) the pivot row with any row below it to ensure that the pivot has the maximum possible magnitude compared with the coefficients in the column below its position. At the fourth step in a typical case the partially reduced matrix has the form
$$\begin{pmatrix}
* & * & * & * & \cdots & * \\
0 & * & * & * & \cdots & * \\
0 & 0 & * & * & \cdots & * \\
0 & 0 & 0 & P & \cdots & * \\
0 & 0 & 0 & P & \cdots & * \\
0 & 0 & 0 & P & \cdots & *
\end{pmatrix}.$$
Search the positions in the fourth column labelled $P$ and then interchange the rows so that the coefficient in the pivot position (row four, column four) is the largest in magnitude of all the terms labelled $P$.
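The effect of finite precision is easy to reproduce. The sketch below (not from the original notes; round_sig and solve2 are illustrative helper names) mimics k-significant-digit arithmetic by rounding after every operation, and solves a 2 × 2 system by elimination with an optional partial-pivoting row swap.

```python
from math import floor, log10

def round_sig(x, k=3):
    """Round x to k significant digits: a crude model of k-digit arithmetic."""
    if x == 0:
        return 0.0
    return round(x, k - 1 - floor(log10(abs(x))))

def solve2(A, b, k=3, pivot=False):
    """Solve a 2x2 system by elimination, rounding every intermediate
    result to k significant digits."""
    A = [row[:] for row in A]
    b = b[:]
    if pivot and abs(A[1][0]) > abs(A[0][0]):   # partial pivoting: swap rows
        A[0], A[1] = A[1], A[0]
        b[0], b[1] = b[1], b[0]
    m = round_sig(A[1][0] / A[0][0], k)                         # multiplier
    a22 = round_sig(A[1][1] - round_sig(m * A[0][1], k), k)
    b2 = round_sig(b[1] - round_sig(m * b[0], k), k)
    y = round_sig(b2 / a22, k)                                  # back substitution
    x = round_sig(round_sig(b[0] - round_sig(A[0][1] * y, k), k) / A[0][0], k)
    return x, y

# Example 1.1.3: the exact solution is x = 1, y = -1.
A = [[47.0, 28.0], [89.0, 53.0]]
b = [19.0, 36.0]
print(solve2(A, b, k=3))    # three-digit arithmetic: roughly (-0.191, 1.0)
print(solve2(A, b, k=12))   # essentially exact: roughly (1.0, -1.0)
```

With this particular rounding model, setting pivot=True does not rescue three-digit arithmetic for this system (the reduced second equation degenerates to 0·y = 0), which is one reason the scaling ideas below are also needed.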

Note. When partial pivoting is used, no multiplier ever exceeds 1 in magnitude, and the possibility that large numbers can swamp small ones in finite digit arithmetic is significantly reduced, but not entirely eliminated.

1.1.5. Example (two-scale system). The exact solution to the system of equations
$$-10x + 10^5 y = 10^5,$$
$$x + y = 2,$$
is
$$x = \frac{1}{1.0001} \approx 0.9999 \quad\text{and}\quad y = \frac{1.0002}{1.0001} \approx 1.0001.$$
Using three-digit accuracy and partial pivoting, the approximate solution is $x = 0$ and $y = 1$.

1.1.6. Scaling. In this last example the differing orders of magnitude of the coefficients means that the first equation applies on a different scale to the second, and this produced the anomalous approximate solution. A sensible strategy would be to rescale the system before we proceed to solve it. The idea is to rescale the above system so that the coefficient of maximum possible magnitude is 1. For example, if we multiply the first equation in the last example by $10^{-5}$, we recover the example system we saw in the partial pivoting section, which we were able to solve very adequately using partial pivoting. Hence the second refinement needed for successful Gaussian elimination is a scaling strategy, which in fact combines row scaling (multiplying selected rows by non-zero multipliers) with column scaling (multiplying selected columns of the coefficient matrix A by non-zero multipliers).

Note. Row scaling does not change the exact solution, but column scaling does! Column scaling the kth column is equivalent to changing the units of the kth unknown.

Practical scaling strategy. Always choose units that are natural to the problem so that there is not a distorted relationship between the sizes of physical parameters within a given problem. Row scale the system of linear equations so that the maximum magnitude of the coefficient in each row is 1, i.e. divide each row by the coefficient (in that row) of maximum magnitude.
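A minimal NumPy sketch (not part of the notes) of the practical row-scaling strategy, applied to the two-scale system of Example 1.1.5 as written above:

```python
import numpy as np

# Row-scale the two-scale system of Example 1.1.5: divide each row by the
# coefficient of largest magnitude in that row.
A = np.array([[-10.0, 1.0e5],
              [  1.0, 1.0]])
b = np.array([1.0e5, 2.0])

scale = np.abs(A).max(axis=1)       # largest |coefficient| in each row
A_scaled = A / scale[:, None]
b_scaled = b / scale

print(A_scaled)   # first row becomes [-1e-4, 1]; second row is unchanged
print(b_scaled)   # right-hand side becomes [1, 2]

# Row scaling does not change the exact solution:
print(np.linalg.solve(A, b))                 # approx [0.9999, 1.0001]
print(np.linalg.solve(A_scaled, b_scaled))   # the same solution
```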

Partial pivoting and this scaling strategy make Gaussian elimination with back substitution a proven, extremely reliable and effective tool for practical systems of linear equations.

1.1.7. Ill conditioned systems. The following example demonstrates that there are some systems that are very sensitive to small perturbations, and why care must always be taken when numerically solving equations. Consider the system
$$.835x + .667y = .168, \qquad (1.1a)$$
$$.333x + .266y = b_2. \qquad (1.1b)$$
First let's suppose that $b_2 = .067$; then the exact solution is $x = 1$ and $y = -1$ (exact values). Let's perturb $b_2$ slightly to $b_2 = .066$. Now the exact solution is $x = -666$ and $y = 834$ (exact values). This is an example of an ill conditioned system. Its sensitivity to small perturbations is intrinsic to the system itself, rather than the result of any numerical procedure.

Ill-conditioned systems: A system of equations is said to be ill conditioned when some small perturbation in the system can produce relatively large changes in the exact solution. Otherwise the system is said to be well conditioned.

Let's examine what's happening here from a geometric perspective. The equations (1.1) above represent two straight lines (with $b_2 = .067$):
$$y = -1.25187x + .251874, \qquad (1.2a)$$
$$y = -1.25188x + .25188. \qquad (1.2b)$$
Notice that these two lines have very similar slopes and intercepts: they are extremely close to each other. They do intersect at the point $x = 1$ and $y = -1$ when $b_2 = .067$. However, it is easy to imagine how, if we were to perturb one of the lines slightly, the point of intersection (solution) could change quite dramatically (as we have seen).

1.2. Exercises

1.1. Consider the following system:
$$10^{-3}x - y = 1,$$
$$x + y = 0.$$
(a) Use 3-digit arithmetic without pivoting to solve this system.
(b) Find a system that is exactly satisfied by your solution from part (a), and note how close the system is to the original system.

(c) Now use partial pivoting and 3-digit arithmetic to solve the original system.
(d) Find a system that is exactly satisfied by your solution from part (c), and note how close this system is to the original system.
(e) Use exact arithmetic to obtain the solution to the original system, and compare the exact solution with the results of parts (a) and (c).
(f) Round the exact solution to three significant digits, and compare the result with those of parts (a) and (c).

1.2. Determine the exact solution of the following system:
$$8x + 5y + 2z = 15,$$
$$21x + 19y + 16z = 56,$$
$$39x + 48y + 53z = 140.$$
Now change 15 to 14 in the first equation and again solve the system with exact arithmetic. Is the system ill-conditioned?

1.3. Using geometric considerations, rank the following three systems according to their condition.
(a) $1.001x - y = .235$, $x + .0001y = .765$;
(b) $1.001x - y = .235$, $x + .9999y = .765$;
(c) $1.001x + y = .235$, $x + .9999y = .765$.

1.4. Use Gaussian elimination, with partial pivoting, to solve the following systems of equations. Work to five decimal places.
(a)
$$x - y + 2z = 2,$$
$$3x - 2y + 4z = 5,$$
$$2y - 3z = 2,$$
(b)
$$33x + 16y + 72z = 359,$$
$$-24x - 10y - 57z = -281,$$
$$-8x - 4y - 17z = -85.$$
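When checking answers to exercises like 1.2 and 1.3 numerically, the condition number of the coefficient matrix quantifies the geometric picture of Section 1.1.7: a large value warns that the solution is sensitive to the data. Here is a short NumPy sketch (not part of the notes), using the system (1.1):

```python
import numpy as np

# The ill conditioned system (1.1) from Section 1.1.7.
A = np.array([[0.835, 0.667],
              [0.333, 0.266]])

for b2 in (0.067, 0.066):
    b = np.array([0.168, b2])
    print(f"b2 = {b2}: solution =", np.linalg.solve(A, b))
# b2 = 0.067 gives roughly (1, -1); b2 = 0.066 gives roughly (-666, 834).

# The condition number of A is of order 10^6: small relative changes in the
# data can be amplified by roughly this factor in the solution.
print("cond(A) =", np.linalg.cond(A))
```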

APPENDIX A

A.1. Euler's formula

For any real angle $\theta$ measured in radians,
$$e^{i\theta} = \cos\theta + i\sin\theta.$$
To prove this we use the series expansion definition for $e^z$:
$$e^z = \sum_{n=0}^{\infty} \frac{z^n}{n!} = 1 + z + \frac{z^2}{2!} + \frac{z^3}{3!} + \cdots.$$
This series is known to converge for all real and complex $z$. Let $z = i\theta$, with $\theta$ real, and noting that $i^2 = -1$, $i^3 = -i$, $i^4 = 1$, $i^5 = i$, ... etc., then for every real $\theta$:
$$e^{i\theta} = 1 + i\theta + \frac{i^2\theta^2}{2!} + \frac{i^3\theta^3}{3!} + \frac{i^4\theta^4}{4!} + \frac{i^5\theta^5}{5!} + \cdots
= \left(1 - \frac{\theta^2}{2!} + \frac{\theta^4}{4!} - \cdots\right) + i\left(\theta - \frac{\theta^3}{3!} + \frac{\theta^5}{5!} - \cdots\right)
= \cos\theta + i\sin\theta.$$
We have used that the Taylor series expansion for a given function $f(\theta)$ about $\theta = a$ is unique (in this case $\cos\theta$ and $\sin\theta$ about $\theta = 0$).
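As a quick numerical sanity check (not in the original notes), Python's cmath module can be used to compare the two sides of Euler's formula at a few sample angles:

```python
import cmath
import math

# Compare e^{i*theta} with cos(theta) + i*sin(theta) at a few angles.
for theta in (0.0, math.pi / 6, math.pi / 4, 1.0, math.pi):
    lhs = cmath.exp(1j * theta)
    rhs = complex(math.cos(theta), math.sin(theta))
    print(f"theta = {theta:.4f}:  |difference| = {abs(lhs - rhs):.2e}")
# The differences are at the level of floating point rounding error.
```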

A.2. Electrical circuits and networks

A.2.1. Simple electrical circuit. Consider the flow of an electrical current $I(t)$ in a simple series circuit as shown in Figure A.1. The resistor has resistance $R$ Ohms, the capacitor has capacitance $C$ Farads and the inductor has inductance $l$ Henrys, which are positive constants. A battery or power source provides an impressed voltage of $V(t)$ Volts at any given time. We have the following relations:

The rate of change of total charge $Q(t)$ Coulombs in the capacitor at time $t$ is the current,
$$I(t) = \frac{dQ}{dt}.$$
The voltage drop across the capacitor is $Q/C$, whilst the voltage drop across the inductor is $l\,\dfrac{dI}{dt}$.

Ohm's law: The voltage drop $V$ across any resistor of resistance $R$ Ohms is given by $V = IR$, where $I$ (measured in Amps) is the current passing through the resistor.

Kirchhoff's Voltage law: in a closed circuit, the impressed voltage is equal to the sum of the voltage drops in the rest of the circuit.

Combining all these relations, we see that for the circuit in Figure A.1,
$$l\,\frac{d^2Q}{dt^2} + R\,\frac{dQ}{dt} + \frac{1}{C}Q = V(t). \qquad (A.1)$$
Feedback squeals in electric circuits at concerts are an example of resonance effects in such equations. Note also that, from Kirchhoff's law, there is a nontrivial relationship between the initial values:
$$I'(0) = \bigl(V_0 - RI_0 - Q_0/C\bigr)/l.$$

Figure A.1. Simple electrical circuit (a series loop containing the source $V(t)$, resistor $R$, capacitor $C$ and inductor $L$, carrying current $I(t)$).
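Equation (A.1) is a linear second-order ODE for the charge $Q(t)$. As an illustration only (not part of the notes, with parameter values invented purely for demonstration), the sketch below integrates it numerically by rewriting it as a first-order system in $(Q, I)$:

```python
from scipy.integrate import solve_ivp

# Equation (A.1):  l Q'' + R Q' + Q/C = V(t), with I = dQ/dt.
# The parameter values below are arbitrary and purely illustrative.
l, R, C = 1.0, 0.5, 0.25        # inductance, resistance, capacitance
V = lambda t: 1.0               # constant impressed voltage

def rhs(t, y):
    Q, I = y
    return [I, (V(t) - R * I - Q / C) / l]

sol = solve_ivp(rhs, (0.0, 20.0), [0.0, 0.0], max_step=0.05)  # Q(0) = I(0) = 0
print("final charge Q(20)  =", sol.y[0, -1])   # settles near C*V = 0.25
print("final current I(20) =", sol.y[1, -1])   # decays towards 0
```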

A.2.2. Electrical networks. Consider the following laws for electrical circuits.

Kirchhoff's node rule: At any point of a circuit (including the battery), the sum of the inflowing currents equals the sum of the outgoing currents.

Kirchhoff's loop rule: In any closed loop, the sum of the voltage drops equals the impressed electromotive force.

Combining these laws together with Ohm's law from Section A.2.1, we can write down the system of linear algebraic equations for the currents in the various parts of the circuit shown in Figure A.2:
$$\text{Node } P:\quad I_1 - I_2 + I_3 = 0,$$
$$\text{Node } Q:\quad -I_1 + I_2 - I_3 = 0,$$
$$\text{Right loop:}\quad 10I_2 + 25I_3 = 90,$$
$$\text{Left loop:}\quad 20I_1 + 10I_2 = 80. \qquad (A.2)$$

Figure A.2. Simple electrical network (resistors of 20 Ω, 10 Ω, 10 Ω and 15 Ω; sources of 80 V and 90 V; branch currents $I_1$, $I_2$, $I_3$; nodes $P$ and $Q$).
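The two node equations in (A.2) are negatives of one another, so three of the four equations determine the currents. A small NumPy sketch (not part of the notes) solving the resulting 3 × 3 system:

```python
import numpy as np

# Node P, right loop and left loop from system (A.2).
A = np.array([[ 1.0, -1.0,  1.0],   # I1 - I2 + I3 = 0
              [ 0.0, 10.0, 25.0],   # 10*I2 + 25*I3 = 90
              [20.0, 10.0,  0.0]])  # 20*I1 + 10*I2 = 80
b = np.array([0.0, 90.0, 80.0])

I1, I2, I3 = np.linalg.solve(A, b)
print(I1, I2, I3)   # I1 = 2, I2 = 4, I3 = 2 (Amps)
```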

A.3. Determinants and inverse matrices

A.3.1. Identity matrix. A square matrix is one in which the number of rows and columns are equal. An identity matrix is a square matrix whose elements are all zero except those on the leading diagonal, which are unity (= 1). The leading diagonal is the one from the top left to the bottom right. The identity matrix is denoted by $I$ (or $I_n$ to show that it has size $n \times n$). For example
$$I_3 = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}.$$
The identity matrix has the property that
$$AI = IA = A$$
for any matrix $A$.

A.3.2. Inverse matrices. If $A$ is a square matrix then, provided it exists, the inverse matrix of $A$, denoted $A^{-1}$, has the property that
$$A^{-1}A = AA^{-1} = I.$$
It is important to note that not every square matrix has an inverse. If we can find $A^{-1}$ then we can solve the system $Ax = b$ because
$$Ax = b \;\Leftrightarrow\; A^{-1}Ax = A^{-1}b \;\Leftrightarrow\; Ix = A^{-1}b \;\Leftrightarrow\; x = A^{-1}b.$$
Thus the solution is simply $x = A^{-1}b$.

A.3.3. Determinants. Recall our discussion of $2 \times 2$ systems. We found that the determinant $D = a_{11}a_{22} - a_{12}a_{21}$ was significant: there is a unique solution provided $D \neq 0$. If $D \neq 0$ the matrix $A$ has an inverse $A^{-1}$ given by
$$A^{-1} = \frac{1}{D}\begin{pmatrix} a_{22} & -a_{12} \\ -a_{21} & a_{11} \end{pmatrix}.$$
If $D = 0$, the matrix $A$ has no inverse.
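Here is a short Python sketch (not in the original notes; the matrix and right-hand side are made up for illustration) implementing the 2 × 2 inverse formula above and using it to solve a system via $x = A^{-1}b$:

```python
import numpy as np

def inverse_2x2(A):
    """Inverse of a 2x2 matrix via the determinant formula of Section A.3.3."""
    (a11, a12), (a21, a22) = A
    D = a11 * a22 - a12 * a21
    if D == 0:
        raise ValueError("D = 0: the matrix has no inverse")
    return np.array([[ a22, -a12],
                     [-a21,  a11]]) / D

A = np.array([[2.0, 1.0],
              [5.0, 3.0]])           # here D = 2*3 - 1*5 = 1
b = np.array([4.0, 11.0])

Ainv = inverse_2x2(A)
print(Ainv)                          # [[3, -1], [-5, 2]]
print(Ainv @ b)                      # solution x = A^{-1} b = [1, 2]
print(np.allclose(A @ Ainv, np.eye(2)))   # True: A A^{-1} = I
```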

A.3.4. Example.
$$A = \begin{pmatrix} 1 & -5 \\ 2 & 3 \end{pmatrix}, \qquad \det A = 13 \neq 0, \qquad A^{-1} = \frac{1}{13}\begin{pmatrix} 3 & 5 \\ -2 & 1 \end{pmatrix}.$$
We can check that this is indeed the inverse by testing for the properties of an inverse matrix:
$$AA^{-1} = \frac{1}{13}\begin{pmatrix} 1 & -5 \\ 2 & 3 \end{pmatrix}\begin{pmatrix} 3 & 5 \\ -2 & 1 \end{pmatrix} = \frac{1}{13}\begin{pmatrix} 3 + 10 & 5 - 5 \\ 6 - 6 & 10 + 3 \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}.$$
These ideas can be extended from $2 \times 2$ matrices to $3 \times 3$, ..., $n \times n$ etc. Again the determinant will dictate whether the matrix is singular (has no inverse) or nonsingular (has an inverse). Given a $3 \times 3$ matrix
$$A = \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix},$$
we define its determinant to be
$$\det A = a_{11}\det\begin{pmatrix} a_{22} & a_{23} \\ a_{32} & a_{33} \end{pmatrix} - a_{12}\det\begin{pmatrix} a_{21} & a_{23} \\ a_{31} & a_{33} \end{pmatrix} + a_{13}\det\begin{pmatrix} a_{21} & a_{22} \\ a_{31} & a_{32} \end{pmatrix}.$$

A.3.5. Example.
$$\det\begin{pmatrix} 1 & 2 & 3 \\ 1 & 3 & 5 \\ 1 & 5 & 12 \end{pmatrix} = 1\det\begin{pmatrix} 3 & 5 \\ 5 & 12 \end{pmatrix} - 2\det\begin{pmatrix} 1 & 5 \\ 1 & 12 \end{pmatrix} + 3\det\begin{pmatrix} 1 & 3 \\ 1 & 5 \end{pmatrix} = 3.$$
The value of $\det A$ may be obtained by expanding about any row or column (not just the first row as we have done) using the chessboard pattern of signs
$$\begin{pmatrix} + & - & + \\ - & + & - \\ + & - & + \end{pmatrix}$$
and the cover up method.
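The 3 × 3 expansion above is easy to automate. The following sketch (not part of the notes) expands along the first row and checks the result for the matrix of Example A.3.5 against NumPy's built-in determinant:

```python
import numpy as np

def det2(M):
    """Determinant of a 2x2 matrix."""
    return M[0, 0] * M[1, 1] - M[0, 1] * M[1, 0]

def det3_first_row(A):
    """3x3 determinant by cofactor expansion along the first row."""
    total = 0.0
    for j in range(3):
        minor = np.delete(np.delete(A, 0, axis=0), j, axis=1)  # drop row 0, column j
        total += (-1) ** j * A[0, j] * det2(minor)
    return total

A = np.array([[1.0, 2.0, 3.0],
              [1.0, 3.0, 5.0],
              [1.0, 5.0, 12.0]])     # the matrix of Example A.3.5

print(det3_first_row(A))             # 3.0
print(np.linalg.det(A))              # agrees, up to rounding
```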

A.3.6. Example. Let
$$A = \begin{pmatrix} 1 & 3 & 0 \\ 2 & 6 & 4 \\ 1 & 0 & 2 \end{pmatrix}.$$
Here are three of the possible ways to calculate $\det A$:

Expand about the first row to give
$$\det A = +1\det\begin{pmatrix} 6 & 4 \\ 0 & 2 \end{pmatrix} - 3\det\begin{pmatrix} 2 & 4 \\ 1 & 2 \end{pmatrix} + 0\det\begin{pmatrix} 2 & 6 \\ 1 & 0 \end{pmatrix} = 12.$$

Expand about the middle row to give
$$\det A = -2\det\begin{pmatrix} 3 & 0 \\ 0 & 2 \end{pmatrix} + 6\det\begin{pmatrix} 1 & 0 \\ 1 & 2 \end{pmatrix} - 4\det\begin{pmatrix} 1 & 3 \\ 1 & 0 \end{pmatrix} = 12.$$

Expand about the last column to give
$$\det A = +0\det\begin{pmatrix} 2 & 6 \\ 1 & 0 \end{pmatrix} - 4\det\begin{pmatrix} 1 & 3 \\ 1 & 0 \end{pmatrix} + 2\det\begin{pmatrix} 1 & 3 \\ 2 & 6 \end{pmatrix} = 12.$$

A.4. Matrix transpose

The transpose $A^T$ of the $m \times n$ matrix $A$ is the $n \times m$ matrix obtained by interchanging rows and columns.

A.4.1. Example. If
$$A = \begin{pmatrix} 3 & 2 & 1 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{pmatrix}, \qquad B = \begin{pmatrix} 3 & 2 & 1 \\ 4 & 5 & 6 \end{pmatrix} \qquad\text{and}\qquad C = \begin{pmatrix} 1 \\ 4 \end{pmatrix},$$
then their transposes are
$$A^T = \begin{pmatrix} 3 & 4 & 7 \\ 2 & 5 & 8 \\ 1 & 6 & 9 \end{pmatrix}, \qquad B^T = \begin{pmatrix} 3 & 4 \\ 2 & 5 \\ 1 & 6 \end{pmatrix} \qquad\text{and}\qquad C^T = \begin{pmatrix} 1 & 4 \end{pmatrix}.$$
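A brief NumPy check (not in the notes) of the statements above, using the matrix of Example A.3.6 and the matrix $B$ of Example A.4.1:

```python
import numpy as np

A = np.array([[1, 3, 0],
              [2, 6, 4],
              [1, 0, 2]])
print(np.linalg.det(A))   # approximately 12, whichever row or column you expand by

B = np.array([[3, 2, 1],
              [4, 5, 6]])
print(B.T)                # B is 2x3; its transpose is the 3x2 matrix [[3,4],[2,5],[1,6]]
```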

Bibliography

[1] Bernal, M. 1987 Lecture notes on linear algebra, Imperial College.
[2] Boyce, W.E. & DiPrima, R.C. 1996 Elementary differential equations and boundary value problems, John Wiley.
[3] Cox, S.M. 1997 Lecture notes on linear algebra, University of Nottingham.
[4] Hildebrand, F.B. 1976 Advanced calculus for applications, Second Edition, Prentice-Hall.
[5] Hughes-Hallet, D. et al. 1998 Calculus, John Wiley & Sons, Inc.
[6] Jennings, A. 1977 Matrix computation for engineers and scientists, John Wiley & Sons.
[7] Krantz, S.G. 1999 How to teach mathematics, Second Edition, American Mathematical Society.
[8] Kreyszig, E. 1993 Advanced engineering mathematics, Seventh Edition, John Wiley.
[9] Lopez, R.J. 2001 Advanced engineering mathematics, Addison-Wesley.
[10] Meyer, C.D. 2000 Matrix analysis and applied linear algebra, SIAM.
[11] Zill, D.G. 2000 A first course in differential equations: the classic fifth edition, Brooks/Cole Pub Co.