Introduction - Motivation

Many phenomena (physical, chemical, biological, etc.) are modeled by differential equations. Recall the definition of the derivative of f(x):

    f'(x) = lim_{h→0} [f(x + h) − f(x)] / h.

Its physical interpretation is the instantaneous rate of change of f with respect to x. Its graphical interpretation is that f'(x) is the slope of the tangent line to f at the point x.

Suppose we are told that the population p(t) of a colony of birds is affected only by births and deaths. Then the change in the population is modeled by

    dp/dt = r_b p − r_d p = αp,

where r_b denotes the rate of births, r_d denotes the rate of deaths, and α = r_b − r_d. If we know the initial population p_0 at some time t_0 then we can find the exact population (based on our simplified model) at any later time. Separating variables,

    ∫ dp/p = ∫ α dt  ⟹  ln p = αt + C  ⟹  p = e^{C+αt} = e^C e^{αt} = Ĉ e^{αt}.

Using the fact that p(t_0) = p_0 gives p(t) = p_0 e^{α(t−t_0)}. If α > 0 (i.e., more births than deaths) then the population grows; otherwise it declines.

This is a very simple model for which we can find an analytic solution. What if we change the model slightly by adding a migration term M(t, p)? If, for example, M(t, p) = e^t sin(πp), then our approach above fails. How do we even know this problem has a solution? Even if we can demonstrate that it has a solution, we may not be able to find an analytic one.

What do we do if we can't find an analytic solution? We must discretize. What does this mean? When we have an analytic solution we have a formula for every possible value of the independent variable (t in our example). When we discretize we typically approximate the solution at a finite number of points, say t_1, t_2, ..., t_n, and as n → ∞ (with Δt = t_{i+1} − t_i → 0) we want our discrete solution to approach our analytic solution. We need to make precise what is meant by "approaching." What would happen if we tried to discretize a problem that doesn't have a unique solution?
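The exponential solution above is easy to evaluate numerically. A minimal sketch (the function and parameter names are our own, not from the notes):

```python
import math

def population(p0, alpha, t, t0=0.0):
    """Exact solution p(t) = p0 * exp(alpha * (t - t0)) of dp/dt = alpha * p."""
    return p0 * math.exp(alpha * (t - t0))

# alpha > 0 (more births than deaths): the population grows
print(population(100.0, 0.1, 10.0))    # 100 * e, about 271.8
# alpha < 0: it declines
print(population(100.0, -0.1, 10.0))   # 100 / e, about 36.8
```
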
One way to discretize our problem above is to replace the derivative by differences of function values. Recall that the derivative is defined in terms of the limit of a difference quotient of function values. So if we replace the derivative in the ODE with this difference quotient, we can approximate the solution p(t_1) given its value at t_0:

    [p(t_1) − p(t_0)] / Δt ≈ αp(t_0) + M(t_0, p(t_0)),

where Δt = t_1 − t_0. Because we know p(t_0) = p_0,

    p(t_1) ≈ p_0 + Δt αp_0 + Δt M(t_0, p_0) = (1 + Δt α)p_0 + Δt M_0.

Everything on the right-hand side is known, so we can approximate p(t_1). The procedure can be repeated. In general, we define a discrete function P^k which approximates p(t_k). The general difference equation for dp/dt = αp + M(t, p) is

    (P^{k+1} − P^k)/Δt = αP^k + M(t_k, P^k),   k = 0, 1, 2, ....

Notice where we use = and ≈. We need to know that as Δt → 0, P^k → p(t_k). We would also like to know how fast P^k approaches p(t_k), because if two methods require the same amount of work but one approaches the exact solution faster, then it is, in some sense, better. This type of problem is called an initial value problem (IVP).

Can we apply this technique to any other IVP? Consider the IVP

    y'(t) = t√y,   y(0) = 0,

which has the solution y(t) = t^4/16. How can we verify this? Clearly it satisfies the initial condition. We can check that it satisfies the equation by computing y' and substituting into the equation to see if we get equality:

    y' = (4/16) t^3 = t^3/4   and   t√y = t (t^2/4) = t^3/4.

Now our difference equation becomes

    (Y^{k+1} − Y^k)/Δt = t_k √(Y^k),   i.e.,   Y^{k+1} = Y^k + Δt t_k √(Y^k).

Thus with Δt = 0.1,

    Y^1 = Y^0 + 0.1 (0) √0 = 0,   Y^2 = Y^1 + 0.1 (0.1) √0 = 0,   ...,   Y^k = 0.
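The difference scheme above (forward Euler) is short to implement, and running it on y' = t√y reproduces the failure just described: started from y(0) = 0, the iterates never leave the zero solution. A minimal sketch:

```python
import math

def euler(f, t0, y0, dt, nsteps):
    """Forward Euler: Y_{k+1} = Y_k + dt * f(t_k, Y_k)."""
    t, y = t0, y0
    for _ in range(nsteps):
        y = y + dt * f(t, y)
        t = t + dt
    return y

# y' = t*sqrt(y), y(0) = 0: the exact solution t^4/16 is nonzero at t = 1,
# but Euler, started on the other solution y = 0, never leaves it.
print(euler(lambda t, y: t * math.sqrt(y), 0.0, 0.0, 0.1, 10))   # 0.0
```

On a problem with a unique solution (e.g. y' = y, y(0) = 1), the same loop does converge to the exact answer as Δt → 0.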
What went wrong? y = 0 is also a solution of the problem; i.e., the solution of the IVP is NOT UNIQUE! Clearly we need to know whether our original problem has more than one solution.

What do we do if we have an IVP where the DE has a higher derivative? Consider the problem

    u''(t) = t^2,   t_0 < t.

To solve this problem analytically we can just integrate twice to get

    u'(t) = t^3/3 + C_1,   u(t) = t^4/12 + C_1 t + C_2.

Note that because we have integrated twice we need two auxiliary conditions. For example, if u(0) = 0 and u'(0) = 1 we have C_1 = 1 and C_2 = 0, yielding u(t) = t^4/12 + t.

How do we obtain a difference equation for this DE? We need a difference quotient which approximates the second derivative. Taylor series is a useful tool for obtaining difference quotients. Recall that for small Δt and u(t) possessing all derivatives, the Taylor series expansion for u in the neighborhood of t is

    u(t + Δt) = u(t) + Δt u'(t) + (Δt^2/2!) u''(t) + (Δt^3/3!) u'''(t) + ⋯

Note that this is an infinite series. We can also write the series in remainder form, e.g.,

    u(t + Δt) = u(t) + Δt u'(t) + (Δt^2/2!) u''(t) + (Δt^3/3!) u'''(t) + (Δt^4/4!) u''''(ξ),   ξ ∈ (t, t + Δt).

Why is the second expression no longer an infinite series? How can we use Taylor series to obtain an approximation to u''(t)? Combining the above expression with

    u(t − Δt) = u(t) − Δt u'(t) + (Δt^2/2!) u''(t) − (Δt^3/3!) u'''(t) + (Δt^4/4!) u''''(η)

we obtain

    u''(t) ≈ [u(t + Δt) − 2u(t) + u(t − Δt)] / Δt^2.

We now use this approximation in our ODE to get a difference equation. If U^k ≈ u(t_k), then using the initial conditions, U^0 = 0 and u'(0) = 1 implies (U^1 − U^0)/Δt = 1. Thus
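The remainder terms above predict that the central difference quotient has error proportional to Δt^2. A quick numerical check on the exact solution u(t) = t^4/12 + t (halving Δt should cut the error by a factor of about four):

```python
def second_diff(u, t, dt):
    """Central difference approximation to u''(t), with O(dt^2) error."""
    return (u(t + dt) - 2.0 * u(t) + u(t - dt)) / dt**2

u = lambda t: t**4 / 12.0 + t      # exact solution of u'' = t^2, u(0)=0, u'(0)=1
for dt in (0.1, 0.05, 0.025):
    err = abs(second_diff(u, 1.0, dt) - 1.0)   # u''(1) = 1^2 = 1
    print(dt, err)                 # error drops by a factor of ~4 per halving
```
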
U^1 = Δt + U^0, and if Δt = 0.1, U^1 = 0.1; recall that the exact solution at 0.1 is (0.1)^4/12 + 0.1 ≈ 0.1000. Then at t_1,

    (U^2 − 2U^1 + U^0)/Δt^2 = t_1^2 = (Δt)^2 = 0.01,

and solving for the only unknown gives U^2 = Δt^2 t_1^2 + 2U^1 − U^0 = 0.2001.

Now we change the problem in what appears on the surface to be a minor way, but which in fact makes a huge difference:

    u''(x) = x,   0 < x < 2,   u(0) = 0,   u(2) = 0.

In this problem we know how u behaves at the endpoints of an interval and we want to find how it behaves in the interior. How does this compare with our previous problem? We call this problem a Boundary Value Problem (BVP). Can we use the same technique as above to solve it? Let U_i ≈ u(x_i). Then U_0 = 0, and using the difference quotient at x_1,

    (U_2 − 2U_1 + U_0)/Δx^2 = x_1.

Note that in this equation both U_2 and U_1 are unknowns, whereas in the previous example only U^2 was unknown. Consequently, we can't solve for U_2. The next equation gives a similar situation:

    (U_3 − 2U_2 + U_1)/Δx^2 = x_2.

Here all three variables are unknown. The unknowns are all coupled together. This makes sense because we know that the right endpoint should also influence the solution, not just the left endpoint. How can we solve this linear algebraic system of equations? We write it as a matrix equation. For example, for five uniform intervals,

    (1/Δx^2) [ −2   1   0   0 ] [ U_1 ]   [ x_1 ]
             [  1  −2   1   0 ] [ U_2 ] = [ x_2 ]
             [  0   1  −2   1 ] [ U_3 ]   [ x_3 ]
             [  0   0   1  −2 ] [ U_4 ]   [ x_4 ]

Does this problem have a unique solution, i.e., is the coefficient matrix invertible? If the original ODE has a unique solution, will the discrete problem also have a unique solution? We need to determine an efficient method for solving this linear system. There is not ONE method but rather many methods, which fall into the general classes of direct or iterative methods.
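The tridiagonal system for the BVP is small enough to assemble and solve directly. A sketch using NumPy, where the exact solution u(x) = x^3/6 − 2x/3 follows from integrating u'' = x and imposing u(0) = u(2) = 0:

```python
import numpy as np

n = 5                                  # five uniform intervals on (0, 2)
h = 2.0 / n
x = np.linspace(0.0, 2.0, n + 1)

A = np.zeros((n - 1, n - 1))           # tridiagonal [1, -2, 1] matrix
for i in range(n - 1):
    A[i, i] = -2.0
    if i > 0:
        A[i, i - 1] = 1.0
    if i < n - 2:
        A[i, i + 1] = 1.0

b = h**2 * x[1:-1]                     # zero boundary values add nothing to b
U = np.linalg.solve(A, b)

exact = x[1:-1]**3 / 6.0 - 2.0 * x[1:-1] / 3.0   # u(x) = x^3/6 - 2x/3
print(np.max(np.abs(U - exact)))
```

Because the exact solution here is a cubic, the Δt^4 remainder term vanishes and the discrete solution matches u(x_i) to machine precision; for general right-hand sides the error is O(Δx^2).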
In direct methods we find the exact solution (if there were no roundoff) in a finite number of steps. In an iterative method, we start with an initial guess and generate a sequence of vectors which we hope converges to the exact solution; i.e., we can get as close as we want to the solution by taking more terms of the sequence. To choose the appropriate algorithm, you need to know the properties of the coefficient matrix: sparsity, symmetry, positive definiteness, etc.

Now let's make another change to our problem and see its effect:

    u''(x) + u^2 = x,   0 < x < 2,   u(0) = 0,   u(2) = 0.

Again U_0 = 0, but now when we write the difference equation at x_1 we get

    (U_2 − 2U_1 + U_0)/Δx^2 + (U_1)^2 = x_1,

and in general

    (U_{i+1} − 2U_i + U_{i−1})/Δx^2 + (U_i)^2 = x_i.

The problem is that we can no longer write our difference equations as a linear system: they are nonlinear because the original ODE is nonlinear. We need to be able to recognize nonlinear ODEs and ultimately be able to approximate their solutions.

Lastly, we ask ourselves what happens if the unknown u is a function of two independent variables instead of just one. Typically, we have either of the two situations illustrated below. For a BVP where u = u(x, y) we could have

    −(u_xx + u_yy) = f(x, y),   0 < x < 1,  0 < y < 1,
    u(0, y) = u(1, y) = 0,   u(x, 0) = 2,   u(x, 1) = 1.

Here the notation u_xx means the second partial derivative of u with respect to x. We need to make sure that we understand what derivatives of functions of more than one variable mean. We can also have an initial boundary value problem (IBVP). Let u = u(x, t); then

    u_t − u_xx = f(x, t),   0 < t ≤ T,  0 < x < 1,
    u(x, 0) = u_0(x),   u(0, t) = u(1, t) = 0.
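The notes do not yet develop a method for the nonlinear difference equations. One standard choice, sketched here under the assumption that the BVP is u'' + u^2 = x on (0, 2) with zero boundary values, is Newton's method applied to the system F(U) = 0:

```python
import numpy as np

n = 5                                    # five uniform intervals on (0, 2)
h = 2.0 / n
x = np.linspace(0.0, 2.0, n + 1)[1:-1]   # interior grid points x_1..x_{n-1}

def F(U):
    """Residual of (U_{i+1} - 2U_i + U_{i-1})/h^2 + U_i^2 = x_i with U_0 = U_n = 0."""
    Up = np.concatenate(([0.0], U, [0.0]))
    return (Up[2:] - 2.0 * Up[1:-1] + Up[:-2]) / h**2 + U**2 - x

def J(U):
    """Tridiagonal Jacobian: dF_i/dU_i = -2/h^2 + 2 U_i, off-diagonals 1/h^2."""
    m = len(U)
    Jac = np.zeros((m, m))
    for i in range(m):
        Jac[i, i] = -2.0 / h**2 + 2.0 * U[i]
        if i > 0:
            Jac[i, i - 1] = 1.0 / h**2
        if i < m - 1:
            Jac[i, i + 1] = 1.0 / h**2
    return Jac

U = np.zeros(n - 1)                      # initial guess
for _ in range(10):                      # Newton: U <- U - J(U)^{-1} F(U)
    U = U - np.linalg.solve(J(U), F(U))
print(np.max(np.abs(F(U))))              # residual near machine precision
```

Note that each Newton step still solves a linear system, so the direct/iterative machinery above remains central even for nonlinear problems.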
We will see that our techniques for ODEs can be extended to solve these equations. We need to look at partial derivatives, directional derivatives, and vector calculus before we address PDEs.

Topics:
    Linear algebra
    ODEs
    Multivariable calculus
    PDEs

In this course we will NOT concentrate on numerical methods for solving these problems, but rather look at the mathematics behind them.
Introduction to Linear Algebra

Not only is linear algebra essential to so many problems, but a firm foundation in it will help you understand more complicated mathematical topics. We have seen that discretization of DEs can lead to solving linear systems Ax = b, where A is an n × n matrix and x, b are n-vectors, so we begin by considering this central problem. One approach which works well in learning/reviewing concepts in linear algebra is to visualize in two or three dimensions, then abstract to n dimensions. When considering methods we want to be cognizant of their use in computations.

Geometry of Linear Equations

We first begin with an example,

    2x + y = 4
    3x − y = 1,

whose exact solution is (1, 2). One way to solve this is to plot the lines y = −2x + 4 and y = 3x − 1 and see the point (x, y) where they intersect. In this way we are concentrating on equations, i.e., rows, but we also want to think about columns. We can also consider the problem of finding (x, y) such that

    x (2, 3)^T + y (1, −1)^T = (4, 1)^T.

We say that we want to determine the linear combination of the vectors (2, 3)^T and (1, −1)^T which yields (4, 1)^T. In IR^3, when we look at the intersection of the equations we find the intersection of three planes. For the columns, we now look for a linear combination of the columns (which are vectors in IR^3) which yields the right-hand side. In IR^n each row represents a hyperplane and we want to find the intersection of all of them. Again, for the columns, we look for a linear combination of the columns (which are vectors in IR^n) which yields the right-hand side, itself a vector in IR^n.

What happens when no solution is found?

    2x + y = 4
    x + y/2 = 1

When we plot these two equations we find that the lines are parallel and hence there is no point of intersection. If we consider the column approach then we want (x, y) such that

    x (2, 1)^T + y (1, 1/2)^T = (4, 1)^T.
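Both pictures can be checked numerically. A small sketch, assuming the system above is 2x + y = 4, 3x − y = 1 (consistent with the stated solution (1, 2)):

```python
import numpy as np

# Row picture: solve the system 2x + y = 4, 3x - y = 1
A = np.array([[2.0, 1.0], [3.0, -1.0]])
b = np.array([4.0, 1.0])
xy = np.linalg.solve(A, b)
print(xy)                                   # [1. 2.]

# Column picture: the same coefficients combine the columns of A into b
print(np.allclose(xy[0] * A[:, 0] + xy[1] * A[:, 1], b))   # True
```
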
But the column vectors lie on the same line and the right-hand-side vector is not on that line, so there is no way to combine them to get the right-hand side. What is the analogous situation in IR^3? Be careful: there are three cases in IR^3.

What happens if infinitely many solutions are found?

    2x + y = 4
    x + y/2 = 2

When we plot these two equations we find that the lines are the same, so every point is a point of intersection and there are infinitely many solutions. If we consider the column approach then we want (x, y) such that

    x (2, 1)^T + y (1, 1/2)^T = (4, 2)^T.

The column vectors lie on the same line, but now the right-hand-side vector is also on that line, so there are infinitely many ways to combine them to get the right-hand side. What is the analogous situation in IR^3? Be careful: there are two cases in IR^3.

Gauss Elimination - a systematic method for finding the unique solution to a system of linear equations

When you took algebra, you learned this technique, which is illustrated in the following example:

    2x + y + z = 5
    4x − 6y = −2
    −2x + 7y + 2z = 9

Recall that we first eliminate the x terms from the second and third equations by multiplying the first equation by a constant and adding it to the respective equation:

    2x + y + z = 5
    −8y − 2z = −12
    8y + 3z = 14

We multiplied the first equation by −2 and added it to the second; then we multiplied the first equation by 1 and added it to the third.
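The hand elimination above is easy to automate. A sketch of elimination without row interchanges followed by backsolving, run on the system with solution (1, 1, 2):

```python
import numpy as np

def gauss_solve(A, b):
    """Gaussian elimination without pivoting, then backsolving."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    for k in range(n - 1):               # eliminate below the pivot A[k, k]
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):       # backsolve the triangular system
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

A = np.array([[2, 1, 1], [4, -6, 0], [-2, 7, 2]])
b = np.array([5, -2, 9])
print(gauss_solve(A, b))                 # [1. 1. 2.]
```
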
Now we want to eliminate y from the last equation. To this end we multiply the SECOND equation by 1 and add it to the third to get

    2x + y + z = 5
    −8y − 2z = −12
    z = 2

The third equation contains only one unknown, z, so we solve for it; then in the second equation the unknowns are y and z, but we now know z, so we solve for y. Finally we substitute the values of y and z into the first equation and solve for x, arriving at (1, 1, 2) as the solution. We call the coefficient of x in the first equation (i.e., 2) and the coefficient of y after x has been eliminated (i.e., −8) pivots. When the last equation contains only one variable, the next-to-last only two variables, etc., the process of solving the system is called backsolving.

When will this process fail? Is it only when the system fails to have a unique solution?

    2x + y + z = 5
    4x − 6y = −2
    2x − 3y = −1

Clearly, the last two equations are essentially the same, so we should have infinitely many solutions. Here's what happens with Gauss elimination. Eliminating x gives

    2x + y + z = 5
    −8y − 2z = −12
    −4y − z = −6

and then eliminating y from the third equation gives

    2x + y + z = 5
    −8y − 2z = −12
    0 = 0

Will it fail at other times?

    y + z = 3
    4x − 6y = −2
    −2x + 7y + 2z = 9

The procedure, as we described it, fails because x does not appear in the first equation and so we can't use it to eliminate x from the second and third equations. Of course, we could simply reorder the equations. Is having a zero pivot the only time the method can fail when the system has a unique solution? If exact arithmetic is used and no roundoff occurs, then the answer is yes. But when we work on a computer we use finite precision arithmetic; for example, 1/3 will not be entered exactly. Consider the following example, where we assume we can only
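The effect of finite precision discussed next can be mimicked by rounding every intermediate result to two significant digits. A rough sketch (our own model of the two-digit arithmetic, not part of the notes):

```python
from math import floor, log10

def round2(v):
    """Round v to two significant digits (a crude model of 2-digit arithmetic)."""
    if v == 0.0:
        return 0.0
    p = floor(log10(abs(v)))
    return round(v, -int(p) + 1)

# (1/1000)x + y = 1,  x + y = 2, eliminated WITHOUT interchanging rows:
m = round2(1.0 / (1.0 / 1000.0))                        # multiplier 1000
y = round2(round2(2.0 - m * 1.0) / round2(1.0 - m * 1.0))
x = round2(round2(1.0 - y) / (1.0 / 1000.0))
print(x, y)    # 0.0 1.0 -- but the exact solution has x = 1000/999 ~ 1.001
# Interchanging the rows first (pivot 1 instead of 1/1000) gives good answers.
```
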
use two significant digits, i.e., every number can be written as ±0.d_1 d_2 × 10^e, in our solution algorithm:

    (1/1000) x + y = 1
    x + y = 2

whose exact solution is x = 1000/999, y = 998/999. Eliminating x without interchanging rows, we have

    (1/1000) x + y = 1
    (1 − 1000) y = 2 − 1000   ⟹   −1000 y = −1000   ⟹   y = 1,  x = 0,

where in exact arithmetic we had 1 − 1000 = −999 and 2 − 1000 = −998, but each of these has been rounded to −1000 in two-digit arithmetic. The problem here is that the pivot 1/1000 is small compared with the other entries. If we interchange the rows, then we don't have any problem.

In the next example, we multiply the first equation of the last example by 10,000. Now the pivot is not small, but we still have problems:

    10x + 10,000y = 10,000
    x + y = 2

whose exact solution is the same as before, x = 1000/999, y = 998/999. We have

    10x + 10,000y = 10,000
    (1 − 1000)y = 2 − 1000.

Using two-digit arithmetic we have −1000y = −1000, or y = 1. Then 10x + 10,000 = 10,000 implies x = 0. In this problem the difficulty is caused by scaling. Consequently, when we develop our computational algorithm we need to take all of these things into account.

A standard example of a system that is difficult to solve is due to Hilbert. For four equations we have the system

    x_1 + (1/2)x_2 + (1/3)x_3 + (1/4)x_4 = b_1
    (1/2)x_1 + (1/3)x_2 + (1/4)x_3 + (1/5)x_4 = b_2
    (1/3)x_1 + (1/4)x_2 + (1/5)x_3 + (1/6)x_4 = b_3
    (1/4)x_1 + (1/5)x_2 + (1/6)x_3 + (1/7)x_4 = b_4

What makes this system so difficult to solve? Small changes in the data (e.g., entering 1/3 inexactly) can cause large changes in the solution. We say the system is ill-conditioned. We will need to find a way to determine whether a system is ill-conditioned.
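The ill-conditioning of the Hilbert system can be seen directly by computing its condition number and perturbing the data slightly. A sketch using NumPy:

```python
import numpy as np

n = 4
H = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])
print(np.linalg.cond(H))            # ~1.6e4 even for n = 4

b = H @ np.ones(n)                  # right-hand side whose exact solution is all ones
x1 = np.linalg.solve(H, b)
x2 = np.linalg.solve(H, b + 1e-6)   # perturb every entry of b by 1e-6
print(np.max(np.abs(x2 - x1)))      # far larger than the 1e-6 perturbation
```
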
Vectors and Matrices

We have seen that if we have n equations in n unknowns then there are n^2 coefficients (some may be zero) and n right-hand-side components. To study linear systems efficiently we need to write all linear systems in a generic form. To do this we review vectors and matrices. Once we write our linear system as a matrix problem, we can view Gauss elimination in terms of matrices. Throughout, we will assume that the entries of our vectors and matrices are real; the results can easily be extended to the situation where the entries are complex.

Vectors

To sketch IR^2 (Euclidean space in two dimensions) we indicate the origin and the x and y axes. Any point can then be represented as the ordered pair (x_1, x_2), which we can associate with a vector x starting at the origin (0, 0) and ending at the point (x_1, x_2). In this case the vector x has a direction and a magnitude. In algebra, we calculated the length of a vector in IR^2 using the standard Euclidean distance, i.e., √(x_1^2 + x_2^2). We call a vector of length one a unit vector. We can think of IR^2 as the set of pairs (x_1, x_2) or, equivalently, all vectors with two components.

In IR^n we have n dimensions, so a point is represented by the ordered tuple (x_1, x_2, x_3, ..., x_n) and we can associate with it a vector x emanating from the origin and terminating at this point. IR^n is the set of all n-tuples or, equivalently, all n-vectors. When we solve a system of n equations in n unknowns, there are n values for the right-hand side and n unknowns, so these will be stored as vectors. We will often use i, j, k as notation for the unit vectors in the x, y, and z directions; this means they have length one and lie along a coordinate axis.

How do we perform standard operations with vectors, such as scalar multiplication, addition/subtraction, and multiplication? Scalar multiplication means we are multiplying our vector a by a number, say k, and each component of the vector is multiplied by k.
We have

    k a = k (a_1, a_2)^T = (k a_1, k a_2)^T.

If we think about this in IR^2 we realize that we are just changing the Euclidean length by the magnitude of k, i.e., the length of k a is |k| times the length of a. To see this, the length of k a is

    √((k a_1)^2 + (k a_2)^2) = √(k^2 [a_1^2 + a_2^2]) = |k| √(a_1^2 + a_2^2).

Multiplying a vector by −1 does not change its length, but it changes its direction.
Addition/subtraction of two vectors is done in the usual manner, i.e., by addition/subtraction of corresponding components. We should note that addition only makes sense if the two vectors have the same number of components. Because addition and scalar multiplication are performed in the standard ways, the usual properties, such as commutativity, hold. For example,

    x + y = y + x,   α x + β x = (α + β) x,   α(x + y) = α x + α y.

Which of the following are defined? If defined, determine the result of the given operation. Here

    a = (1, 2, 1)^T,   b = (7, −3)^T,   c = (3, 4)^T.

(i) the length of 10c   (ii) 2a − b   (iii) 3(c − b)

For (i), the length of 10c is just ten times the length of c, which is √(9 + 16) = 5, so the answer is 50. For (ii), a is a vector in IR^3 and b is a vector in IR^2, so the operation is NOT defined. For (iii), c and b are both vectors in IR^2, so the operation is defined: c − b = (−4, 7)^T, so three times this is (−12, 21)^T.

Multiplication of vectors is different from multiplying two numbers. We can multiply two vectors in two ways: in one (the dot or scalar product) the result is a number, and in the other (the cross product) the result is a vector. Recall that in IR^2, when we take the dot product of two vectors we multiply corresponding components and add, to get

    x · y = x_1 y_1 + x_2 y_2.

The same is true in IR^n:

    x · y = Σ_{i=1}^n x_i y_i.

Thus, in order for the scalar product to be defined, the vectors have to have the same number of components. Note also that x · y = y · x. In vector calculus you learned an equivalent definition of the dot product,

    x · y = (magnitude of x)(magnitude of y) cos θ,

where θ is the angle between the two vectors and we use the standard Euclidean length for the magnitude. Because cos(π/2) = 0, we immediately see that two vectors are perpendicular, or orthogonal, if their dot product is zero.
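Both definitions of the dot product are a few lines of code, and the angle formula recovers orthogonality numerically. A minimal sketch:

```python
import math

def dot(x, y):
    """Dot product of two same-length vectors."""
    assert len(x) == len(y), "dot product requires equal lengths"
    return sum(xi * yi for xi, yi in zip(x, y))

def angle(x, y):
    """Angle between x and y, from x . y = |x| |y| cos(theta)."""
    return math.acos(dot(x, y) / (math.sqrt(dot(x, x)) * math.sqrt(dot(y, y))))

print(dot([2, 3], [1, -1]))        # 2 - 3 = -1
print(angle([1, 0], [0, 1]))       # pi/2: the vectors are orthogonal
```
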
Note that if we take the scalar product of a vector with itself, then the result is the square of its Euclidean length; i.e., in IR^2,

    x · x = x_1^2 + x_2^2 = [√(x_1^2 + x_2^2)]^2.

So the Euclidean length of a vector in IR^n can be written as

    √(x · x) = [Σ_{i=1}^n x_i^2]^{1/2}.

In general, we think of a vector x as a column vector, i.e.,

    x = (x_1, x_2, ..., x_n)^T written as a column.

Sometimes it is useful to use a row vector, i.e., (x_1, x_2, ..., x_n). We write this row vector as x^T, where the T means transpose. Because in written text it is easier to type a row vector, we often write, e.g., (x_1, x_2)^T to mean a column vector in IR^2. A second way to multiply vectors is the cross product, which results in a vector; we will look at its definition in a later example.

Determine the following. Here

    a = (2, 1, 0)^T,   b = (0, −4, 1)^T,   c = (1, 0, 0)^T.

(i) a · 2b   (ii) 3c^T   (iii) are b, c orthogonal vectors?

All operations are defined because all are vectors in IR^3. For (i), 2b = (0, −8, 2)^T, so a · 2b = (2)(0) + (1)(−8) + (0)(2) = −8. For (ii) we have (3, 0, 0), a row vector. For (iii), the vectors are orthogonal because b · c = 0.

Matrices

Recall that we have n^2 coefficients in our system, so we need to store this information. To do this, we introduce matrices, which are rectangular arrays of numbers. We say A is an m × n matrix if it has m rows and n columns. If the entries of A are denoted a_ij, where
i refers to the row and j to the column, then an m × n matrix A is written componentwise as

    A = [ a_11  a_12  a_13  ⋯  a_1n ]
        [ a_21  a_22  a_23  ⋯  a_2n ]
        [ a_31  a_32  a_33  ⋯  a_3n ]
        [  ⋮                      ⋮ ]
        [ a_m1  a_m2  a_m3  ⋯  a_mn ]

An m × n matrix has mn entries, so we can store our coefficients in an n × n matrix. Note that an n-vector can be viewed as an n × 1 matrix. A row of a matrix is a row vector and a column is a column vector.

Some matrices with special structure are given individual names. The zero matrix is simply what the name implies: a matrix with all zero entries. The diagonal entries of a matrix are the entries a_ii. A diagonal matrix is one for which a_ij = 0 for all i ≠ j; e.g., a 3 × 3 diagonal matrix has the form

    A = [ a_11    0     0  ]
        [   0   a_22    0  ]
        [   0     0   a_33 ]

The identity matrix, usually denoted I, is a diagonal matrix whose diagonal entries are all ones. An upper triangular matrix is one where a_ij = 0 for j < i; e.g., a 3 × 3 upper triangular matrix has the form

    A = [ a_11  a_12  a_13 ]
        [   0   a_22  a_23 ]
        [   0     0   a_33 ]

Note that by this definition a diagonal matrix is also an upper triangular matrix. Similarly, a lower triangular matrix is one where a_ij = 0 for i < j. Sometimes we will need a unit lower or upper triangular matrix; these are just special lower or upper triangular matrices which have ones as the diagonal entries; e.g., a 3 × 3 unit lower triangular matrix is

    A = [   1     0   0 ]
        [ a_21    1   0 ]
        [ a_31  a_32  1 ]

We can also take the transpose of a matrix. A^T means reflecting the matrix across its diagonal, so if B = A^T then b_ij = a_ji. Note that the diagonal entries are unchanged. If the original matrix is not square, i.e., m ≠ n, then the transpose is n × m. For example,

    [ 1  2  3 ]^T   [ 1  4 ]
    [ 4  5  6 ]   = [ 2  5 ]
                    [ 3  6 ]

A matrix which has the property that A = A^T is called symmetric; clearly a symmetric matrix must be square.
How could you describe a matrix which is both lower and upper triangular?

Addition and scalar multiplication of matrices are done in the standard way, just as we did for vectors. To multiply a matrix by a scalar k we simply multiply each entry by k. To add two matrices, first they must have the same number of rows and columns; then we simply add corresponding entries. Because these operations are performed in the standard way, the usual laws hold; e.g.,

    A + B = B + A,   α(A + B) = αA + αB.

To define matrix multiplication, we could define it by multiplying corresponding entries. However, our goal is to use a matrix and two vectors to represent our linear system. Consequently, we need to define matrix multiplication in a way that is meaningful for this application. We first look at the definition of an m × n matrix A times an n × p matrix B. For matrix multiplication to be defined, the number of columns of the first matrix must equal the number of rows of the second. We have

    A_{m×n} B_{n×p} = C_{m×p},   where c_ik = Σ_{j=1}^n a_ij b_jk;

that is, we can view the (i, k) entry as the dot product of the ith row of A and the kth column of B.

Determine AB and BA, if defined, where A is a 3 × 2 matrix and B is a 2 × 2 matrix. First, C = AB is defined because A has two columns and B has two rows; C = AB has three rows and two columns, with each entry c_ik the dot product of the ith row of A and the kth column of B. Now D = BA is not defined because B has two columns while A has three rows. This example shows us that, in general,

    AB ≠ BA;

in fact, even if AB is defined, BA may not be. Sometimes we use the terminology "premultiply B by A" to mean AB and "postmultiply B by A" to mean BA.

Let I be the n × n identity matrix and A an n × n matrix. What is AI? IA?
Clearly, pre- or post-multiplying a matrix by the identity matrix has no effect. Consider premultiplying the m × n matrix A by the m × m identity matrix:

    [ 1  0  ⋯  0 ]   [ a_11  a_12  a_13  ⋯  a_1n ]
    [ 0  1  ⋯  0 ] · [ a_21  a_22  a_23  ⋯  a_2n ] = A
    [ ⋮         ⋮ ]   [  ⋮                       ⋮ ]
    [ 0  0  ⋯  1 ]   [ a_m1  a_m2  a_m3  ⋯  a_mn ]

If A, B are square, i.e., n × n, then both AB and BA are defined. If AB = BA, then we say that the matrices commute. Do all square matrices commute? Clearly no. As a counterexample, consider

    [ 1  0 ] [ 1  2 ]   [ 1  2 ]        [ 1  2 ] [ 1  0 ]   [ 3  4 ]
    [ 1  2 ] [ 0  1 ] = [ 1  4 ]   but  [ 0  1 ] [ 1  2 ] = [ 1  2 ]

What is the effect of premultiplying a 3 × 3 matrix A by the matrix

    C = [ 1  0  0 ]
        [ 0  0  1 ]
        [ 0  1  0 ]  ?

Note that C is the identity matrix with the last two rows interchanged. The effect is to interchange the last two rows of A, which can be seen by direct multiplication.

What is the effect of premultiplying a 3 × 3 matrix A by a matrix

    M = [  1    0  0 ]
        [ m_21  1  0 ]
        [ m_31  0  1 ]  ?

The matrix MA has the same first row as A (because the first row of M is the same as in the identity matrix), and with m_21 = −a_21/a_11 and m_31 = −a_31/a_11, zeros are introduced into the (2, 1) and (3, 1) entries.

We can take the transpose of the product of two matrices for which multiplication is defined. If A is m × n and B is n × p, then the product AB is m × p and its transpose is p × m. We can use the following formula for computing (AB)^T; note that B^T is p × n and A^T is n × m, so B^T A^T is indeed p × m:

    (AB)^T = B^T A^T.
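The entrywise definition c_ik = Σ_j a_ij b_jk, the failure of commutativity, and the transpose rule can all be checked in a few lines. A sketch (the matrices here are our own small examples):

```python
import numpy as np

def matmul(A, B):
    """c_ik = sum_j a_ij b_jk: dot product of row i of A with column k of B."""
    m, n = A.shape
    n2, p = B.shape
    assert n == n2, "columns of A must equal rows of B"
    C = np.zeros((m, p))
    for i in range(m):
        for k in range(p):
            C[i, k] = A[i, :] @ B[:, k]
    return C

A = np.array([[1.0, 0.0], [1.0, 2.0]])
B = np.array([[1.0, 2.0], [0.0, 1.0]])
print(matmul(A, B))                                    # AB
print(matmul(B, A))                                    # BA: different from AB
print(np.allclose(matmul(A, B).T, matmul(B.T, A.T)))   # (AB)^T = B^T A^T: True
```
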
Vectors and Matrices Continued

Remember that our goal is to write a system of algebraic equations as a matrix equation. Suppose we have the n linear algebraic equations

    a_11 x_1 + a_12 x_2 + ⋯ + a_1n x_n = b_1
    a_21 x_1 + a_22 x_2 + ⋯ + a_2n x_n = b_2
      ⋮
    a_i1 x_1 + a_i2 x_2 + ⋯ + a_in x_n = b_i
      ⋮
Linear Algebra 1/33 Vectors A vector is a magnitude and a direction Magnitude = v Direction Also known as norm, length Represented by unit vectors (vectors with a length of 1 that point along distinct
More informationCS123 INTRODUCTION TO COMPUTER GRAPHICS. Linear Algebra /34
Linear Algebra /34 Vectors A vector is a magnitude and a direction Magnitude = v Direction Also known as norm, length Represented by unit vectors (vectors with a length of 1 that point along distinct axes)
More informationELEMENTARY LINEAR ALGEBRA
ELEMENTARY LINEAR ALGEBRA K R MATTHEWS DEPARTMENT OF MATHEMATICS UNIVERSITY OF QUEENSLAND First Printing, 99 Chapter LINEAR EQUATIONS Introduction to linear equations A linear equation in n unknowns x,
More information1 Matrices and Systems of Linear Equations. a 1n a 2n
March 31, 2013 16-1 16. Systems of Linear Equations 1 Matrices and Systems of Linear Equations An m n matrix is an array A = (a ij ) of the form a 11 a 21 a m1 a 1n a 2n... a mn where each a ij is a real
More informationBasic Concepts in Linear Algebra
Basic Concepts in Linear Algebra Grady B Wright Department of Mathematics Boise State University February 2, 2015 Grady B Wright Linear Algebra Basics February 2, 2015 1 / 39 Numerical Linear Algebra Linear
More information7.6 The Inverse of a Square Matrix
7.6 The Inverse of a Square Matrix Copyright Cengage Learning. All rights reserved. What You Should Learn Verify that two matrices are inverses of each other. Use Gauss-Jordan elimination to find inverses
More informationElementary maths for GMT
Elementary maths for GMT Linear Algebra Part 2: Matrices, Elimination and Determinant m n matrices The system of m linear equations in n variables x 1, x 2,, x n a 11 x 1 + a 12 x 2 + + a 1n x n = b 1
More informationMath 3108: Linear Algebra
Math 3108: Linear Algebra Instructor: Jason Murphy Department of Mathematics and Statistics Missouri University of Science and Technology 1 / 323 Contents. Chapter 1. Slides 3 70 Chapter 2. Slides 71 118
More informationMATH 315 Linear Algebra Homework #1 Assigned: August 20, 2018
Homework #1 Assigned: August 20, 2018 Review the following subjects involving systems of equations and matrices from Calculus II. Linear systems of equations Converting systems to matrix form Pivot entry
More informationMAC Module 2 Systems of Linear Equations and Matrices II. Learning Objectives. Upon completing this module, you should be able to :
MAC 0 Module Systems of Linear Equations and Matrices II Learning Objectives Upon completing this module, you should be able to :. Find the inverse of a square matrix.. Determine whether a matrix is invertible..
More informationReview of Basic Concepts in Linear Algebra
Review of Basic Concepts in Linear Algebra Grady B Wright Department of Mathematics Boise State University September 7, 2017 Math 565 Linear Algebra Review September 7, 2017 1 / 40 Numerical Linear Algebra
More informationMatrix Arithmetic. j=1
An m n matrix is an array A = Matrix Arithmetic a 11 a 12 a 1n a 21 a 22 a 2n a m1 a m2 a mn of real numbers a ij An m n matrix has m rows and n columns a ij is the entry in the i-th row and j-th column
More informationCS227-Scientific Computing. Lecture 4: A Crash Course in Linear Algebra
CS227-Scientific Computing Lecture 4: A Crash Course in Linear Algebra Linear Transformation of Variables A common phenomenon: Two sets of quantities linearly related: y = 3x + x 2 4x 3 y 2 = 2.7x 2 x
More informationChapter 2: Matrices and Linear Systems
Chapter 2: Matrices and Linear Systems Paul Pearson Outline Matrices Linear systems Row operations Inverses Determinants Matrices Definition An m n matrix A = (a ij ) is a rectangular array of real numbers
More informationMath 360 Linear Algebra Fall Class Notes. a a a a a a. a a a
Math 360 Linear Algebra Fall 2008 9-10-08 Class Notes Matrices As we have already seen, a matrix is a rectangular array of numbers. If a matrix A has m columns and n rows, we say that its dimensions are
More informationLECTURES 4/5: SYSTEMS OF LINEAR EQUATIONS
LECTURES 4/5: SYSTEMS OF LINEAR EQUATIONS MA1111: LINEAR ALGEBRA I, MICHAELMAS 2016 1 Linear equations We now switch gears to discuss the topic of solving linear equations, and more interestingly, systems
More informationElementary Row Operations on Matrices
King Saud University September 17, 018 Table of contents 1 Definition A real matrix is a rectangular array whose entries are real numbers. These numbers are organized on rows and columns. An m n matrix
More informationx y = 1, 2x y + z = 2, and 3w + x + y + 2z = 0
Section. Systems of Linear Equations The equations x + 3 y =, x y + z =, and 3w + x + y + z = 0 have a common feature: each describes a geometric shape that is linear. Upon rewriting the first equation
More informationChapter 2 Notes, Linear Algebra 5e Lay
Contents.1 Operations with Matrices..................................1.1 Addition and Subtraction.............................1. Multiplication by a scalar............................ 3.1.3 Multiplication
More informationLinear Algebra V = T = ( 4 3 ).
Linear Algebra Vectors A column vector is a list of numbers stored vertically The dimension of a column vector is the number of values in the vector W is a -dimensional column vector and V is a 5-dimensional
More informationVector, Matrix, and Tensor Derivatives
Vector, Matrix, and Tensor Derivatives Erik Learned-Miller The purpose of this document is to help you learn to take derivatives of vectors, matrices, and higher order tensors (arrays with three dimensions
More informationMatrix Operations. Linear Combination Vector Algebra Angle Between Vectors Projections and Reflections Equality of matrices, Augmented Matrix
Linear Combination Vector Algebra Angle Between Vectors Projections and Reflections Equality of matrices, Augmented Matrix Matrix Operations Matrix Addition and Matrix Scalar Multiply Matrix Multiply Matrix
More informationAn Introduction To Linear Algebra. Kuttler
An Introduction To Linear Algebra Kuttler April, 7 Contents Introduction 7 F n 9 Outcomes 9 Algebra in F n Systems Of Equations Outcomes Systems Of Equations, Geometric Interpretations Systems Of Equations,
More informationMatrices. Math 240 Calculus III. Wednesday, July 10, Summer 2013, Session II. Matrices. Math 240. Definitions and Notation.
function Matrices Calculus III Summer 2013, Session II Wednesday, July 10, 2013 Agenda function 1. 2. function function Definition An m n matrix is a rectangular array of numbers arranged in m horizontal
More informationIterative Methods for Solving A x = b
Iterative Methods for Solving A x = b A good (free) online source for iterative methods for solving A x = b is given in the description of a set of iterative solvers called templates found at netlib: http
More informationA primer on matrices
A primer on matrices Stephen Boyd August 4, 2007 These notes describe the notation of matrices, the mechanics of matrix manipulation, and how to use matrices to formulate and solve sets of simultaneous
More informationLECTURE NOTES ELEMENTARY NUMERICAL METHODS. Eusebius Doedel
LECTURE NOTES on ELEMENTARY NUMERICAL METHODS Eusebius Doedel TABLE OF CONTENTS Vector and Matrix Norms 1 Banach Lemma 20 The Numerical Solution of Linear Systems 25 Gauss Elimination 25 Operation Count
More informationFundamentals of Engineering Analysis (650163)
Philadelphia University Faculty of Engineering Communications and Electronics Engineering Fundamentals of Engineering Analysis (6563) Part Dr. Omar R Daoud Matrices: Introduction DEFINITION A matrix is
More informationDirect Methods for Solving Linear Systems. Simon Fraser University Surrey Campus MACM 316 Spring 2005 Instructor: Ha Le
Direct Methods for Solving Linear Systems Simon Fraser University Surrey Campus MACM 316 Spring 2005 Instructor: Ha Le 1 Overview General Linear Systems Gaussian Elimination Triangular Systems The LU Factorization
More informationLinear Algebra Homework and Study Guide
Linear Algebra Homework and Study Guide Phil R. Smith, Ph.D. February 28, 20 Homework Problem Sets Organized by Learning Outcomes Test I: Systems of Linear Equations; Matrices Lesson. Give examples of
More informationA Brief Outline of Math 355
A Brief Outline of Math 355 Lecture 1 The geometry of linear equations; elimination with matrices A system of m linear equations with n unknowns can be thought of geometrically as m hyperplanes intersecting
More informationContents. 1 Vectors, Lines and Planes 1. 2 Gaussian Elimination Matrices Vector Spaces and Subspaces 124
Matrices Math 220 Copyright 2016 Pinaki Das This document is freely redistributable under the terms of the GNU Free Documentation License For more information, visit http://wwwgnuorg/copyleft/fdlhtml Contents
More informationINSTITIÚID TEICNEOLAÍOCHTA CHEATHARLACH INSTITUTE OF TECHNOLOGY CARLOW MATRICES
1 CHAPTER 4 MATRICES 1 INSTITIÚID TEICNEOLAÍOCHTA CHEATHARLACH INSTITUTE OF TECHNOLOGY CARLOW MATRICES 1 Matrices Matrices are of fundamental importance in 2-dimensional and 3-dimensional graphics programming
More informationChapter 2. Matrix Arithmetic. Chapter 2
Matrix Arithmetic Matrix Addition and Subtraction Addition and subtraction act element-wise on matrices. In order for the addition/subtraction (A B) to be possible, the two matrices A and B must have the
More informationOR MSc Maths Revision Course
OR MSc Maths Revision Course Tom Byrne School of Mathematics University of Edinburgh t.m.byrne@sms.ed.ac.uk 15 September 2017 General Information Today JCMB Lecture Theatre A, 09:30-12:30 Mathematics revision
More information6.1 Matrices. Definition: A Matrix A is a rectangular array of the form. A 11 A 12 A 1n A 21. A 2n. A m1 A m2 A mn A 22.
61 Matrices Definition: A Matrix A is a rectangular array of the form A 11 A 12 A 1n A 21 A 22 A 2n A m1 A m2 A mn The size of A is m n, where m is the number of rows and n is the number of columns The
More informationLinear Algebraic Equations
Linear Algebraic Equations 1 Fundamentals Consider the set of linear algebraic equations n a ij x i b i represented by Ax b j with [A b ] [A b] and (1a) r(a) rank of A (1b) Then Axb has a solution iff
More informationSometimes the domains X and Z will be the same, so this might be written:
II. MULTIVARIATE CALCULUS The first lecture covered functions where a single input goes in, and a single output comes out. Most economic applications aren t so simple. In most cases, a number of variables
More informationLinear Least-Squares Data Fitting
CHAPTER 6 Linear Least-Squares Data Fitting 61 Introduction Recall that in chapter 3 we were discussing linear systems of equations, written in shorthand in the form Ax = b In chapter 3, we just considered
More informationLinear Algebra: Lecture Notes. Dr Rachel Quinlan School of Mathematics, Statistics and Applied Mathematics NUI Galway
Linear Algebra: Lecture Notes Dr Rachel Quinlan School of Mathematics, Statistics and Applied Mathematics NUI Galway November 6, 23 Contents Systems of Linear Equations 2 Introduction 2 2 Elementary Row
More informationa 11 x 1 + a 12 x a 1n x n = b 1 a 21 x 1 + a 22 x a 2n x n = b 2.
Chapter 1 LINEAR EQUATIONS 11 Introduction to linear equations A linear equation in n unknowns x 1, x,, x n is an equation of the form a 1 x 1 + a x + + a n x n = b, where a 1, a,, a n, b are given real
More informationTopic 15 Notes Jeremy Orloff
Topic 5 Notes Jeremy Orloff 5 Transpose, Inverse, Determinant 5. Goals. Know the definition and be able to compute the inverse of any square matrix using row operations. 2. Know the properties of inverses.
More informationMATRIX ALGEBRA AND SYSTEMS OF EQUATIONS. + + x 1 x 2. x n 8 (4) 3 4 2
MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS SYSTEMS OF EQUATIONS AND MATRICES Representation of a linear system The general system of m equations in n unknowns can be written a x + a 2 x 2 + + a n x n b a
More informationMath 123, Week 2: Matrix Operations, Inverses
Math 23, Week 2: Matrix Operations, Inverses Section : Matrices We have introduced ourselves to the grid-like coefficient matrix when performing Gaussian elimination We now formally define general matrices
More informationThings we can already do with matrices. Unit II - Matrix arithmetic. Defining the matrix product. Things that fail in matrix arithmetic
Unit II - Matrix arithmetic matrix multiplication matrix inverses elementary matrices finding the inverse of a matrix determinants Unit II - Matrix arithmetic 1 Things we can already do with matrices equality
More informationNOTES on LINEAR ALGEBRA 1
School of Economics, Management and Statistics University of Bologna Academic Year 207/8 NOTES on LINEAR ALGEBRA for the students of Stats and Maths This is a modified version of the notes by Prof Laura
More informationTOPIC III LINEAR ALGEBRA
[1] Linear Equations TOPIC III LINEAR ALGEBRA (1) Case of Two Endogenous Variables 1) Linear vs. Nonlinear Equations Linear equation: ax + by = c, where a, b and c are constants. 2 Nonlinear equation:
More informationFinite Difference Methods for Boundary Value Problems
Finite Difference Methods for Boundary Value Problems October 2, 2013 () Finite Differences October 2, 2013 1 / 52 Goals Learn steps to approximate BVPs using the Finite Difference Method Start with two-point
More informationCHAPTER 6. Direct Methods for Solving Linear Systems
CHAPTER 6 Direct Methods for Solving Linear Systems. Introduction A direct method for approximating the solution of a system of n linear equations in n unknowns is one that gives the exact solution to
More informationL. Vandenberghe EE133A (Spring 2017) 3. Matrices. notation and terminology. matrix operations. linear and affine functions.
L Vandenberghe EE133A (Spring 2017) 3 Matrices notation and terminology matrix operations linear and affine functions complexity 3-1 Matrix a rectangular array of numbers, for example A = 0 1 23 01 13
More informationFinite Mathematics Chapter 2. where a, b, c, d, h, and k are real numbers and neither a and b nor c and d are both zero.
Finite Mathematics Chapter 2 Section 2.1 Systems of Linear Equations: An Introduction Systems of Equations Recall that a system of two linear equations in two variables may be written in the general form
More informationIntroduction to Matrices and Linear Systems Ch. 3
Introduction to Matrices and Linear Systems Ch. 3 Doreen De Leon Department of Mathematics, California State University, Fresno June, 5 Basic Matrix Concepts and Operations Section 3.4. Basic Matrix Concepts
More information7.5 Operations with Matrices. Copyright Cengage Learning. All rights reserved.
7.5 Operations with Matrices Copyright Cengage Learning. All rights reserved. What You Should Learn Decide whether two matrices are equal. Add and subtract matrices and multiply matrices by scalars. Multiply
More informationCOURSE Iterative methods for solving linear systems
COURSE 0 4.3. Iterative methods for solving linear systems Because of round-off errors, direct methods become less efficient than iterative methods for large systems (>00 000 variables). An iterative scheme
More informationMatrices. Chapter Definitions and Notations
Chapter 3 Matrices 3. Definitions and Notations Matrices are yet another mathematical object. Learning about matrices means learning what they are, how they are represented, the types of operations which
More informationPrepared by: M. S. KumarSwamy, TGT(Maths) Page
Prepared by: M. S. KumarSwamy, TGT(Maths) Page - 50 - CHAPTER 3: MATRICES QUICK REVISION (Important Concepts & Formulae) MARKS WEIGHTAGE 03 marks Matrix A matrix is an ordered rectangular array of numbers
More informationMethods for Solving Linear Systems Part 2
Methods for Solving Linear Systems Part 2 We have studied the properties of matrices and found out that there are more ways that we can solve Linear Systems. In Section 7.3, we learned that we can use
More informationMAC Module 1 Systems of Linear Equations and Matrices I
MAC 2103 Module 1 Systems of Linear Equations and Matrices I 1 Learning Objectives Upon completing this module, you should be able to: 1. Represent a system of linear equations as an augmented matrix.
More informationx x2 2 + x3 3 x4 3. Use the divided-difference method to find a polynomial of least degree that fits the values shown: (b)
Numerical Methods - PROBLEMS. The Taylor series, about the origin, for log( + x) is x x2 2 + x3 3 x4 4 + Find an upper bound on the magnitude of the truncation error on the interval x.5 when log( + x)
More informationBASIC NOTIONS. x + y = 1 3, 3x 5y + z = A + 3B,C + 2D, DC are not defined. A + C =
CHAPTER I BASIC NOTIONS (a) 8666 and 8833 (b) a =6,a =4 will work in the first case, but there are no possible such weightings to produce the second case, since Student and Student 3 have to end up with
More informationTopics. Vectors (column matrices): Vector addition and scalar multiplication The matrix of a linear function y Ax The elements of a matrix A : A ij
Topics Vectors (column matrices): Vector addition and scalar multiplication The matrix of a linear function y Ax The elements of a matrix A : A ij or a ij lives in row i and column j Definition of a matrix
More informationFinal Review Sheet. B = (1, 1 + 3x, 1 + x 2 ) then 2 + 3x + 6x 2
Final Review Sheet The final will cover Sections Chapters 1,2,3 and 4, as well as sections 5.1-5.4, 6.1-6.2 and 7.1-7.3 from chapters 5,6 and 7. This is essentially all material covered this term. Watch
More informationLinear Algebra (part 1) : Matrices and Systems of Linear Equations (by Evan Dummit, 2016, v. 2.02)
Linear Algebra (part ) : Matrices and Systems of Linear Equations (by Evan Dummit, 206, v 202) Contents 2 Matrices and Systems of Linear Equations 2 Systems of Linear Equations 2 Elimination, Matrix Formulation
More informationLinear Algebra. Solving Linear Systems. Copyright 2005, W.R. Winfrey
Copyright 2005, W.R. Winfrey Topics Preliminaries Echelon Form of a Matrix Elementary Matrices; Finding A -1 Equivalent Matrices LU-Factorization Topics Preliminaries Echelon Form of a Matrix Elementary
More information1300 Linear Algebra and Vector Geometry
1300 Linear Algebra and Vector Geometry R. Craigen Office: MH 523 Email: craigenr@umanitoba.ca May-June 2017 Matrix Inversion Algorithm One payoff from this theorem: It gives us a way to invert matrices.
More informationAnnouncements Wednesday, October 10
Announcements Wednesday, October 10 The second midterm is on Friday, October 19 That is one week from this Friday The exam covers 35, 36, 37, 39, 41, 42, 43, 44 (through today s material) WeBWorK 42, 43
More informationProcess Model Formulation and Solution, 3E4
Process Model Formulation and Solution, 3E4 Section B: Linear Algebraic Equations Instructor: Kevin Dunn dunnkg@mcmasterca Department of Chemical Engineering Course notes: Dr Benoît Chachuat 06 October
More informationLS.1 Review of Linear Algebra
LS. LINEAR SYSTEMS LS.1 Review of Linear Algebra In these notes, we will investigate a way of handling a linear system of ODE s directly, instead of using elimination to reduce it to a single higher-order
More informationMAC Module 3 Determinants. Learning Objectives. Upon completing this module, you should be able to:
MAC 2 Module Determinants Learning Objectives Upon completing this module, you should be able to:. Determine the minor, cofactor, and adjoint of a matrix. 2. Evaluate the determinant of a matrix by cofactor
More informationAnnouncements Monday, October 02
Announcements Monday, October 02 Please fill out the mid-semester survey under Quizzes on Canvas WeBWorK 18, 19 are due Wednesday at 11:59pm The quiz on Friday covers 17, 18, and 19 My office is Skiles
More information