Solutions to selected problems


Section., #a,c,d.

a.
    p ← a_n
    for i = n − 1 down to 0
        p ← x·p + a_i
    end

b.
    z ← x, y ← x
    for i = 2 to n
        y ← y + x^i
        z ← z·y
    end

c.
    y ← (t − x_1), p ← a_1
    for i = 2 to n
        y ← y·(t − x_i)
        p ← p + a_{i+1}·y
    end

Section., #a,c,d

a. $v = a_0 + x \sum_{i=1}^{n} a_i$
c. $v = \sum_{i=0}^{n} a_i x^{n-i}$
d. $v = \sum_{i=1}^{n} a_i x^{i}$, with $z = x^{n+1}$

Section., #3,4

3. Recall
$$\ln(1.1) = \ln(1 + 0.1) = \sum_{k=1}^{\infty} (-1)^{k-1}\,\frac{0.1^k}{k}, \qquad
P_n(0.1) := \sum_{k=1}^{n} (-1)^{k-1}\,\frac{0.1^k}{k}.$$
By the alternating series theorem, the error in this approximation is bounded by
$$\big|\ln(1.1) - P_n(0.1)\big| \le \frac{0.1^{\,n+1}}{n+1}.$$
The right-hand side is less than $10^{-8}$ when $n \ge 7$. Thus, 7 terms are needed.
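The nested multiplication in part a and the term count in #3 are easy to check numerically. The following Python sketch is an addition to these solutions (the function names are made up for illustration):

```python
import math

def horner(coeffs, x):
    """Evaluate a_0 + a_1*x + ... + a_n*x^n by nested multiplication; coeffs[i] holds a_i."""
    p = 0.0
    for a in reversed(coeffs):          # start from a_n, as in the pseudocode above
        p = x * p + a
    return p

def partial_sum(n):
    """P_n(0.1): the first n terms of the alternating series for ln(1.1)."""
    return sum((-1) ** (k - 1) * 0.1 ** k / k for k in range(1, n + 1))

print(horner([1.0, -2.0, 3.0], 2.0))    # 1 - 2*2 + 3*4 = 9
for n in range(5, 9):
    err = abs(math.log(1.1) - partial_sum(n))
    print(n, err, err < 1e-8)           # the error first drops below 1e-8 at n = 7
```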

4. We compute $f(1) = 7$, $f'(1) = 8$, $f''(1) = 8$ and $f'''(1) = 6$. Thus,
$$f(1 + h) = 7\,\frac{h^0}{0!} + 8\,\frac{h^1}{1!} + 8\,\frac{h^2}{2!} + 6\,\frac{h^3}{3!} = 7 + 8h + 4h^2 + h^3,$$
or equivalently, $f(x) = 7 + 8(x-1) + 4(x-1)^2 + (x-1)^3$.

Section., #a,b

a. Since the number equals $2^{-30} \times (1.00\ldots)_2$ and $127 - 30 = 97 = (01100001)_2$, assembling the sign bit, this 8-bit exponent field, and the mantissa bits gives the 32-bit machine representation.
b. Since the number equals $2^{6} \times (1.\ldots)_2$ and $127 + 6 = 133 = (10000101)_2$, assembling the fields in the same way gives the 32-bit machine representation.

Section., #3a-e

32-bit machine numbers must be in the range of the machine (roughly $2^{-126}$ to $2^{128}$), and they must have at most 24 significant binary digits.

3a. Not a machine number: too large to be in the machine range.
3b. Not a machine number: its binary expansion has more than 24 significant digits.
3c-d. Not machine numbers: their binary expansions are nonterminating.
3e. $1/256 = 2^{-8}$ is a machine number: it is in the range of the machine and its binary expansion has 1 significant digit.

Section., #5a-c, e-h

Conversions can be done as in Section. #a,b above.
a. +0;  b. 0;  c. …;  d. …;  e. 6;  f. 5.5;  g. …;  h. 0.
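The 32-bit fields discussed above can be inspected directly in Python; this sketch is an illustration added to the solutions, not part of them:

```python
import struct

def bits32(x):
    """Return the IEEE single-precision bit pattern of x: sign | exponent | fraction."""
    (word,) = struct.unpack(">I", struct.pack(">f", x))
    s = f"{word:032b}"
    return f"{s[0]} {s[1:9]} {s[9:]}"

print(bits32(2.0 ** -30))   # an example with exponent -30: field 127 - 30 = 97 = 01100001
print(bits32(96.0))         # an example with exponent +6: 96 = 2^6 * 1.5, field 133 = 10000101
print(bits32(1 / 256))      # 2^-8: in range, 1 significant bit
```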

Section., #4

WLOG assume $x$ is positive and write $x = 10^m \times (.a_1 a_2 \ldots)$. After chopping to keep only 5 significant digits we get the machine representation $\hat{x} = 10^m \times (.a_1 a_2 \ldots a_5)$. Observe that
$$|x - \hat{x}| = 10^{m-5} \times (.a_6 a_7 \ldots) \le 10^{m-5}.$$
Thus,
$$\frac{|x - \hat{x}|}{|x|} \le \frac{10^{m-5}}{10^m \times (.a_1 a_2 \ldots)} = \frac{10^{-5}}{(.a_1 a_2 \ldots)} \le 10^{-4},$$
since $a_1 \neq 0$ implies $(.a_1 a_2 \ldots) \ge 10^{-1}$.

Section., #9

Write $x = 2^n (.a_1 a_2 \ldots a_{23})_2$ and $y = 2^m (.b_1 b_2 \ldots b_{23})_2$. Write
$$x = 2^m \big( 2^{n-m} (.a_1 a_2 \ldots a_{23})_2 \big) = 2^m (.0 \ldots 0\, a_1 a_2 \ldots)_2.$$
Since $x < 2^{-25} y$, we have $n - m \le -25$, so in the last display $a_1$ appears in a spot which is at least 25 places to the right of the binary point. Thus,
$$x + y = 2^m (.b_1 b_2 \ldots b_{23}\, 0 \ldots 0\, a_1 a_2 \ldots a_{23})_2,$$
where $a_1$ is 25 or more spots to the right of the binary point. After chopping (or rounding) back to 23 bits, we get $\mathrm{fl}(x + y) = 2^m (.b_1 b_2 \ldots b_{23})_2 = y$.

Section., #

Using $e^x = 1 + x + x^2/2! + x^3/3! + \cdots$, write $f(x) = e^x - 1 - x$ as
$$f(x) = \frac{x^2}{2} + \frac{x^3}{6} + \frac{x^4}{24} + \cdots$$
It's easy to check that the first two terms above are enough for 5 significant digits: $f(0.01) \approx 0.01^2/2 + 0.01^3/6 = 5.01667 \times 10^{-5}$. Using $e^{0.01} = 1.01005017\ldots$ and direct calculation gives $f(0.01) = 1.01005017\ldots - 1 - 0.01 = 5.0167 \times 10^{-5}$.
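A quick Python illustration (an addition, with an arbitrary small x) of why the series form of e^x - 1 - x is preferable near x = 0:

```python
import math

x = 1e-8
direct = math.exp(x) - 1.0 - x   # two cancelling subtractions: the result is mostly rounding noise
series = x**2 / 2 + x**3 / 6     # leading terms of x^2/2 + x^3/6 + x^4/24 + ...
print(direct)                    # inaccurate
print(series)                    # about 5.0e-17, correct to many digits
```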

Section., #3

One option is to write $f(x) = (1 - e) + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \cdots$

Section., #7

Note that if $x \neq 0$,
$$x^{-2}(\sinh x - \tanh x) = \frac{\sinh^2 x - \tanh^2 x}{x^2(\sinh x + \tanh x)} = \frac{\tanh^2 x \,\sinh^2 x}{x^2(\sinh x + \tanh x)}, \qquad (1)$$
where the last equality comes from the hyperbolic trig identity $\cosh^2 x - 1 = \sinh^2 x$ and
$$\sinh^2 x - \tanh^2 x = \frac{\sinh^2 x\,(\cosh^2 x - 1)}{\cosh^2 x} = \tanh^2 x\,\sinh^2 x.$$
The last expression in (1) avoids loss of significance near $x = 0$.

Section., #4

By rewriting $f(x) = \sqrt{x^2 + 1} - x$ as $f(x) = \dfrac{1}{\sqrt{x^2 + 1} + x}$.

Section. #

Using $\sin x = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \cdots$ and the alternating series theorem, we get
$$|\sin x - x| \le \frac{|x|^3}{3!}.$$
For $|x| < 1/10$ this becomes $|\sin x - x| \le \frac{1}{6000} \approx 1.67 \times 10^{-4}$.

Section. Computer Problems, #8

See the script section. #8.
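A short added sketch of the cancellation that the rewrite of sqrt(x^2 + 1) - x avoids (the value of x is arbitrary):

```python
import math

x = 1e8
naive = math.sqrt(x * x + 1.0) - x            # difference of two nearly equal numbers
stable = 1.0 / (math.sqrt(x * x + 1.0) + x)   # algebraically identical rewrite
print(naive, stable)                          # naive gives 0.0 here; stable gives ~5.0e-9
```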

Section 3. Computer Problems, #

See the script section 3. #.

Section 3. #

Newton's method for finding roots of $f(x) = x^2 - R$ is
$$x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)} = x_n - \frac{x_n^2 - R}{2x_n} = x_n - \frac{x_n}{2} + \frac{R}{2x_n} = \frac{1}{2}\left(x_n + \frac{R}{x_n}\right).$$

Section 3. #

Using # above,
$$x_{n+1}^2 - R = \frac{1}{4}\left(x_n + \frac{R}{x_n}\right)^{2} - R
= \frac{1}{4}\left(x_n^2 + 2R + \frac{R^2}{x_n^2}\right) - R
= \frac{1}{4}\left(x_n^2 - 2R + \frac{R^2}{x_n^2}\right)
= \frac{1}{4}\left(x_n - \frac{R}{x_n}\right)^{2}
= \frac{(x_n^2 - R)^2}{4x_n^2}.$$
Letting $e_n = x_n^2 - R$ be the error in the $n$th approximation,
$$e_{n+1} = \frac{e_n^2}{4x_n^2}.$$
If $|e_n| < R$ this means
$$e_{n+1} \le \frac{e_n^2}{4(R - |e_n|)}.$$
In particular, if $|e_0| \le R/2$, an induction argument shows that $e_{n+1} \le \frac{1}{R}\, e_n^2$.

Section 3. #6

When $m = 8$ and $x_0 = 1.1$, we have $x_1 = 1.0875$ and $x_2 = \ldots$; when $m = \ldots$ and $x_0 = 1.1$, we have $x_1 = \ldots$ and $x_2 = \ldots$. Note the slower convergence to the root $r$ in the second case compared to $m = 8$.

Section 3. #0

For $f(x) = \tan x - R$ we have
$$\frac{f(x)}{f'(x)} = \frac{\tan x - R}{\sec^2 x} = \cos x \sin x - R\cos^2 x.$$
Thus, the iteration formula
$$x_{n+1} = x_n - \cos x_n \sin x_n + R\cos^2 x_n$$
can be used for finding solutions to $\tan x = R$.
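The square-root iteration derived above is easy to try out; this Python sketch is an added illustration (R and x0 are arbitrary):

```python
def newton_sqrt(R, x0, iters=6):
    """Iterate x_{n+1} = (x_n + R/x_n)/2, Newton's method for f(x) = x^2 - R."""
    x = x0
    for _ in range(iters):
        x = 0.5 * (x + R / x)
        print(x, x * x - R)   # the error e_n = x_n^2 - R shrinks roughly quadratically
    return x

newton_sqrt(2.0, 1.0)
```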

Section 3. Computer Problems, #7,9

See the scripts section 3. #7 and section 3. #9.

Section 4. #6

The polynomial, in Newton form, is $p(x) = \ldots + x - 3x(x - \ldots) + 4x(x - \ldots)(x - 3)$.

Section 4. #7b

The columns can be filled top to bottom with third column: 3, 5; fourth column: 7, 9.5; fifth column: 0.5. Thus, the interpolating polynomial is
$$p(x) = 3(x + \ldots) + 7(x + \ldots)(x - \ldots) + (x + \ldots)(x - \ldots)(x - 3).$$

Section 4. #0

Newton's interpolation polynomial is $p(x) = 7 + x + 5x(x - \ldots) + x(x - \ldots)(x - 3)$. A nested form for efficient computation is $p(x) = 7 + x\big(\ldots + (x - \ldots)(5 + (x - 3))\big)$.
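The divided-difference tables and nested evaluation used in these problems can be automated; the following Python sketch is an addition, and the data values are hypothetical stand-ins (the original values are not fully legible here):

```python
def divided_differences(xs, ys):
    """Return the Newton-form coefficients f[x0], f[x0,x1], ..., f[x0,...,xn]."""
    n = len(xs)
    coef = list(ys)
    for j in range(1, n):
        for i in range(n - 1, j - 1, -1):
            coef[i] = (coef[i] - coef[i - 1]) / (xs[i] - xs[i - j])
    return coef

def newton_eval(coef, xs, t):
    """Nested (Horner-like) evaluation of the Newton-form polynomial at t."""
    p = coef[-1]
    for c, x in zip(reversed(coef[:-1]), reversed(xs[:-1])):
        p = p * (t - x) + c
    return p

xs = [0.0, 1.0, 3.0, 4.0]          # hypothetical nodes
ys = [1.0, 3.0, 2.0, 5.0]          # hypothetical values
coef = divided_differences(xs, ys)
print(coef)
print([newton_eval(coef, xs, x) for x in xs])   # reproduces ys at the nodes
```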

Section 4. #0

With $x = (-2, -1, 0, 1, 2)$, $y = (\ldots, 4, 4, \ldots, \ldots)$ we have
$$\begin{aligned}
P_0(x) &= a_0\\
P_1(x) &= a_0 + a_1(x + 2)\\
P_2(x) &= a_0 + a_1(x + 2) + a_2(x + 2)(x + 1)\\
P_3(x) &= a_0 + a_1(x + 2) + a_2(x + 2)(x + 1) + a_3(x + 2)(x + 1)x\\
P_4(x) &= a_0 + a_1(x + 2) + a_2(x + 2)(x + 1) + a_3(x + 2)(x + 1)x + a_4(x + 2)(x + 1)x(x - 1),
\end{aligned}$$
for our Newton polynomials. Using $P_i(x_i) = y_i$, $i = 0, 1, 2, 3, 4$, we can solve to get
$$a_0 = \ldots, \quad a_1 = \ldots, \quad a_2 = \ldots, \quad a_3 = 5, \quad a_4 = .5.$$
Alternatively, we may write $Q(x) = b_0 + b_1 x + b_2 x^2 + b_3 x^3 + b_4 x^4$ and solve the linear system $Vb = y$, where $V$ is the Vandermonde matrix of the nodes, to get
$$b_0 = 4, \quad b_1 = 8, \quad b_2 = 5.5, \quad b_3 = \ldots, \quad b_4 = .5.$$
Of course, $P_4(x) = Q(x)$.

Section 4. #

To interpolate the additional point $(3, 0)$, we look at
$$R(x) = Q(x) + c\,(x + 2)(x + 1)x(x - 1)(x - 2),$$
and plug in $0 = R(3)$ to get $c = \ldots/5$. Now $R(x)$ interpolates the points from #0, along with the extra point $(3, 0)$.

Section 4. Computer Problems #,,9

See scripts online.
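The Vandermonde route to Q(x) can be mirrored with NumPy; the values below are hypothetical stand-ins for the (partially illegible) data of the exercise:

```python
import numpy as np

xs = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])   # the nodes used above
ys = np.array([2.0, 4.0, 4.0, 1.0, 2.0])     # hypothetical data values

V = np.vander(xs, increasing=True)   # columns 1, x, x^2, x^3, x^4
b = np.linalg.solve(V, ys)           # coefficients b_0, ..., b_4 of Q(x)
print(b)
print(np.polyval(b[::-1], xs))       # evaluating Q at the nodes reproduces ys
```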

Section 4. #43

The Newton form of the interpolating polynomial at $x_0, x_1$ is
$$P(x) = f[x_0] + f[x_0, x_1](x - x_0).$$
Plugging in $x = x_1$ and solving for $f[x_0, x_1]$, we find
$$f[x_0, x_1] = \frac{P(x_1) - f[x_0]}{x_1 - x_0} = \frac{f(x_1) - f(x_0)}{x_1 - x_0}.$$
By the mean value theorem, there is $\xi \in (x_0, x_1)$ such that
$$f'(\xi) = \frac{f(x_1) - f(x_0)}{x_1 - x_0} = f[x_0, x_1].$$

Section 4. #45

Let
$$Q(x) = g(x) + \frac{x_0 - x}{x_n - x_0}\,\big[g(x) - h(x)\big].$$
Since both $g$ and $h$ interpolate $f$ at $x_i$, $i = 1, \ldots, n-1$,
$$Q(x_i) = g(x_i) + \frac{x_0 - x_i}{x_n - x_0}\big[g(x_i) - h(x_i)\big] = f(x_i) + \frac{x_0 - x_i}{x_n - x_0}\big[f(x_i) - f(x_i)\big] = f(x_i)$$
for $i = 1, \ldots, n-1$. Since $g$ interpolates $f$ at $x_0$,
$$Q(x_0) = g(x_0) + \frac{x_0 - x_0}{x_n - x_0}\big[g(x_0) - h(x_0)\big] = f(x_0) + 0 = f(x_0),$$
and since $h$ interpolates $f$ at $x_n$,
$$Q(x_n) = g(x_n) + \frac{x_0 - x_n}{x_n - x_0}\big[g(x_n) - h(x_n)\big] = g(x_n) - \big[g(x_n) - f(x_n)\big] = f(x_n).$$
Note that this is the main step in proving the Theorem in Section 4.

Section 4. #

Without loss of generality we can assume $x_j = 0$ and $x_{j+1} = h$. We must show that
$$|(x - 0)(x - h)| \le \frac{h^2}{4} \qquad \text{for all } x \in [0, h].$$
To do this, consider $Q(x) := (x - 0)(x - h) = x^2 - hx$. Since $Q'(x) = 2x - h = 0$ when $x = h/2$, the function $Q$ has its smallest and largest values on $[0, h]$ at $h/2$ or at the endpoints $0, h$. Since $Q(0) = Q(h) = 0$ we conclude
$$\max_{0 \le x \le h} |Q(x)| = |Q(h/2)| = \frac{h^2}{4}.$$
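A quick numerical confirmation of the h^2/4 bound just proved (an added sketch; h is arbitrary):

```python
import numpy as np

h = 0.3
x = np.linspace(0.0, h, 100001)
print(np.abs(x * (x - h)).max(), h**2 / 4)   # the two numbers agree to grid resolution
```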

Section 4.3 #3

Solution. We must show that
$$\frac{n!}{(j+1)!\,(n-j)!} = \frac{1}{j+1}\binom{n}{j} \ge 1, \qquad j = 0, \ldots, n-1.$$
When $j = 1, \ldots, n-1$, we have $j + 1 \le n$ and $\binom{n}{j} \ge n$, so that
$$1 \le \frac{1}{j+1}\binom{n}{j}.$$
And when $j = 0$, we get $\frac{1}{j+1}\binom{n}{j} = 1$.

Section 4. #4

Let $x \in [a, b]$ and $a = x_0 < x_1 < \cdots < x_n = b$ be any choice of nodes. We must show
$$\prod_{i=0}^{n} |x - x_i| \le \frac{1}{4}\, h^{n+1}\, n!, \qquad h := \max_{1 \le i \le n} (x_i - x_{i-1}). \qquad (2)$$
Notice $h$ is the largest spacing between adjacent nodes. For $x_j \le x \le x_{j+1}$,
$$|x - x_j|\,|x - x_{j+1}| \le \frac{1}{4}\,|x_{j+1} - x_j|^2 \le \frac{1}{4}\,h^2,$$
where the first inequality comes from Section 4., #. Since $x \le x_{j+1}$,
$$\prod_{i=0}^{j-1} |x - x_i| = \prod_{i=0}^{j-1} (x - x_i) \le \prod_{i=0}^{j-1} (x_{j+1} - x_i) \le \prod_{i=0}^{j-1} (j - i + 1)h = (j+1)!\, h^{j}.$$
Similarly, since $x \ge x_j$,
$$\prod_{i=j+2}^{n} |x - x_i| = \prod_{i=j+2}^{n} (x_i - x) \le \prod_{i=j+2}^{n} (x_i - x_j) \le \prod_{i=j+2}^{n} (i - j)h = (n-j)!\, h^{n-j-1}.$$
Combining the last three inequalities, and using the inequality $(j+1)!\,(n-j)! \le n!$ from the previous problem, we get
$$\prod_{i=0}^{n} |x - x_i| \le \frac{1}{4}\,h^2\,(j+1)!\,h^{j}\,(n-j)!\,h^{n-j-1} \le \frac{1}{4}\, h^{n+1}\, n!.$$
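A numerical spot check of the bound (2), added here for illustration; the nodes are random and the grid is arbitrary:

```python
import math
import numpy as np

rng = np.random.default_rng(0)
nodes = np.sort(rng.uniform(0.0, 1.0, 6))   # a = x_0 < ... < x_5 = b, so n = 5
n = len(nodes) - 1
h = np.diff(nodes).max()

xs = np.linspace(nodes[0], nodes[-1], 20001)
prod = np.abs(xs[:, None] - nodes[None, :]).prod(axis=1)
print(prod.max(), 0.25 * h ** (n + 1) * math.factorial(n))   # the maximum stays below the bound
```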

Section 4. #8

Let $P(x)$ be the degree-2 interpolant of $f$ at $x_0, x_1, x_2$:
$$P(x) = f[x_0] + f[x_0, x_1](x - x_0) + f[x_0, x_1, x_2](x - x_0)(x - x_1),$$
and define $Q(x) = f(x) - P(x)$. Then $Q$ has at least 3 roots, namely $x_0, x_1, x_2$, so, due to Rolle's theorem, $Q'$ has at least 2 roots and $Q''$ has at least 1 root, call it $\xi$. Thus,
$$0 = Q''(\xi) = f''(\xi) - P''(\xi) = f''(\xi) - 2 f[x_0, x_1, x_2],$$
which shows
$$f[x_0, x_1, x_2] = \frac{f''(\xi)}{2}.$$

Section 4. Computer Problem #9

See script online.

Section 4.3 #3

Using Taylor's theorem,
$$f(x + h) = f(x) + f'(x)h + \tfrac{1}{2} f''(x)h^2 + \tfrac{1}{6} f'''(x)h^3 + \cdots$$
$$f(x + 2h) = f(x) + 2 f'(x)h + 2 f''(x)h^2 + \tfrac{4}{3} f'''(x)h^3 + \cdots$$
Straightforward calculations give
$$\frac{1}{2h}\big[4 f(x + h) - 3 f(x) - f(x + 2h)\big] = f'(x) - \tfrac{1}{3} f'''(x)h^2 + \cdots$$
Terminating this expansion at second order, we find the error term
$$\frac{1}{2h}\big[4 f(x + h) - 3 f(x) - f(x + 2h)\big] - f'(x) = -\tfrac{1}{3} f'''(\xi)\,h^2.$$

Section 4.3 #5

Using Taylor's theorem, we have
$$f(x + h) = f(x) + f'(x)h + \tfrac{1}{2} f''(x)h^2 + \tfrac{1}{6} f'''(\alpha)h^3,$$
$$f(x - h) = f(x) - f'(x)h + \tfrac{1}{2} f''(x)h^2 - \tfrac{1}{6} f'''(\beta)h^3.$$
Thus,
$$\frac{f(x + h) - f(x)}{h} - f'(x) = \tfrac{1}{2} f''(x)h + \tfrac{1}{6} f'''(\alpha)h^2,$$
$$\frac{f(x) - f(x - h)}{h} - f'(x) = -\tfrac{1}{2} f''(x)h + \tfrac{1}{6} f'''(\beta)h^2,$$
so
$$\frac{1}{2}\left(\frac{f(x + h) - f(x)}{h} + \frac{f(x) - f(x - h)}{h}\right) - f'(x) = \frac{h^2}{12}\big[f'''(\alpha) + f'''(\beta)\big].$$
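A quick check of the one-sided formula and its O(h^2) error term derived above (an added sketch; f = sin and x = 1 are arbitrary choices):

```python
import math

f, fp = math.sin, math.cos   # fp is the exact derivative
x = 1.0
for h in [0.1, 0.05, 0.025]:
    approx = (4 * f(x + h) - 3 * f(x) - f(x + 2 * h)) / (2 * h)
    err = approx - fp(x)
    print(h, err, err / h**2)   # err/h^2 levels off near -f'''(x)/3 = cos(1)/3
```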

Section 4.3 #7

The problem with the analysis is that the expansions do not go to high enough order. If we include one more term in the expansions for $f(x + h) - f(x)$ and $f(x - h) - f(x)$, we find the error is in fact $O(h^2)$.

Section 4.3 #0

a) With nodes $x_0$, $x_1 = x_0 + h$, $x_2 = x_1 + \alpha h$, we write the interpolating polynomial in Newton form:
$$P(x) = f(x_0) + f[x_0, x_1](x - x_0) + f[x_0, x_1, x_2](x - x_0)(x - x_1)$$
and compute $P''(x) = 2 f[x_0, x_1, x_2]$. By the divided difference recursion theorem,
$$f[x_0, x_1, x_2] = \frac{f[x_1, x_2] - f[x_0, x_1]}{x_2 - x_0}
= \frac{\dfrac{f(x_2) - f(x_1)}{\alpha h} - \dfrac{f(x_1) - f(x_0)}{h}}{h + \alpha h}
= \frac{1}{h^2}\left[\frac{f(x_0)}{1 + \alpha} - \frac{f(x_1)}{\alpha} + \frac{f(x_2)}{\alpha(\alpha + 1)}\right].$$

b) Consider $P_0(x) = 1$, $P_1(x) = x - x_1$ and $P_2(x) = (x - x_1)^2$. We will solve the equations
$$P_i''(x_1) = A\,P_i(x_0) + B\,P_i(x_1) + C\,P_i(x_2), \qquad i = 0, 1, 2,$$
for $A$, $B$, $C$. The equations are
$$0 = A + B + C, \qquad 0 = -hA + \alpha h C, \qquad 2 = h^2 A + \alpha^2 h^2 C.$$
It is straightforward to verify that
$$A = \frac{2}{h^2(1 + \alpha)}, \qquad B = -\frac{2}{h^2 \alpha}, \qquad C = \frac{2}{h^2 \alpha(1 + \alpha)}.$$
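The weights A, B, C just derived can be sanity-checked numerically; this added sketch uses arbitrary h, alpha, and test function:

```python
import math

h, alpha = 0.1, 0.7
A = 2.0 / (h**2 * (1 + alpha))
B = -2.0 / (h**2 * alpha)
C = 2.0 / (h**2 * alpha * (1 + alpha))

f, x1 = math.exp, 0.3
x0, x2 = x1 - h, x1 + alpha * h
print(A * f(x0) + B * f(x1) + C * f(x2), f(x1))   # approximates f''(x1) = exp(0.3)
```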

Section 4.3 #4

Write
$$f(x + h) = f(x) + f'(x)h + \tfrac{1}{2} f''(x)h^2 + \tfrac{1}{6} f'''(x)h^3 + \tfrac{1}{24} f^{(4)}(x)h^4 + \tfrac{1}{120} f^{(5)}(x)h^5 + \cdots$$
$$f(x - h) = f(x) - f'(x)h + \tfrac{1}{2} f''(x)h^2 - \tfrac{1}{6} f'''(x)h^3 + \tfrac{1}{24} f^{(4)}(x)h^4 - \tfrac{1}{120} f^{(5)}(x)h^5 + \cdots$$
and similarly
$$f(x + 2h) = f(x) + 2 f'(x)h + 2 f''(x)h^2 + \tfrac{4}{3} f'''(x)h^3 + \tfrac{2}{3} f^{(4)}(x)h^4 + \tfrac{4}{15} f^{(5)}(x)h^5 + \cdots$$
$$f(x - 2h) = f(x) - 2 f'(x)h + 2 f''(x)h^2 - \tfrac{4}{3} f'''(x)h^3 + \tfrac{2}{3} f^{(4)}(x)h^4 - \tfrac{4}{15} f^{(5)}(x)h^5 + \cdots$$
Thus,
$$\frac{1}{2h}\big[f(x + h) - f(x - h)\big] = f'(x) + \tfrac{1}{6} f'''(x)h^2 + \tfrac{1}{120} f^{(5)}(x)h^4 + \cdots \qquad (3)$$
and
$$\frac{1}{12h}\big[f(x + 2h) - f(x - 2h)\big] = \tfrac{1}{3} f'(x) + \tfrac{2}{9} f'''(x)h^2 + \tfrac{2}{45} f^{(5)}(x)h^4 + \cdots \qquad (4)$$
Notice that multiplying equation (3) by $4/3$ and subtracting (4) gives
$$\frac{2}{3h}\big[f(x + h) - f(x - h)\big] - \frac{1}{12h}\big[f(x + 2h) - f(x - 2h)\big] = f'(x) - \tfrac{1}{30} f^{(5)}(x)h^4 + \cdots$$
Terminating this expansion at fourth order, we find
$$\frac{2}{3h}\big[f(x + h) - f(x - h)\big] - \frac{1}{12h}\big[f(x + 2h) - f(x - 2h)\big] - f'(x) = -\tfrac{1}{30} f^{(5)}(\xi)\,h^4.$$

Section 4.3 Computer problems #4

See script online.

Section 7. #3 c,d

c. In the first step of naive Gaussian elimination, we divide by 0, so the algorithm fails. However, if we interchange rows 1 and 2, then we can do naive elimination to obtain $x_1 = \ldots$, $x_2 = 7$.

d. After doing naive elimination on the first column, we find a 0 in the $(2, 2)$ spot and a nonzero entry in the $(3, 2)$ spot. Thus, continuing with naive elimination on the second column, we divide by 0, and the algorithm fails.
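A numerical check of the fourth-order formula derived in 4.3 #4 above (added sketch; f = sin is an arbitrary test function):

```python
import math

f, fp = math.sin, math.cos
x = 1.0
for h in [0.2, 0.1, 0.05]:
    d = (2 / (3 * h)) * (f(x + h) - f(x - h)) - (1 / (12 * h)) * (f(x + 2 * h) - f(x - 2 * h))
    err = d - fp(x)
    print(h, err, err / h**4)   # err/h^4 stays roughly constant: fourth-order accuracy
```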

Section 7. #7 d,e

d. The Gaussian elimination sequence, computed with gauss_naive.m, reduces the augmented matrix step by step to upper triangular form. Thus, to four significant digits, $x_1 = 4.66$, $x_2 = 4.33$ and $x_3 = .466$.

e. Using gauss_naive.m again, we find $x_1 = \ldots$, $x_2 = \ldots$, and $x_3 = x_4 = 0$.

Section 7. Computer problem #3

See script online.

Section 7.3 #4

One example is a system whose coefficient matrix has a zero $(1, 1)$ entry. The system has a unique solution, but naive elimination fails at the very first step.
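The script gauss_naive.m itself is not reproduced in these solutions. The following Python sketch of naive elimination with back substitution only illustrates what such a script does; it has no pivoting, so it fails on a zero pivot exactly as in #3 above, and the test system is hypothetical:

```python
import numpy as np

def gauss_naive(A, b):
    """Naive Gaussian elimination (no pivoting) followed by back substitution."""
    A = A.astype(float)
    b = b.astype(float)
    n = len(b)
    for i in range(n - 1):
        if A[i, i] == 0.0:
            raise ZeroDivisionError("zero pivot: naive elimination fails")
        for k in range(i + 1, n):
            m = A[k, i] / A[i, i]
            A[k, i:] -= m * A[i, i:]
            b[k] -= m * b[i]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):           # back substitution
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

A = np.array([[2.0, 1.0, -1.0], [4.0, 1.0, 2.0], [-2.0, 2.0, 1.0]])
b = np.array([1.0, 2.0, 3.0])
print(gauss_naive(A, b), np.linalg.solve(A, b))   # the two solutions agree
```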

Section 7.3 #6

Let $A$ be a diagonally dominant matrix such that the $(i, j)$th entry of $A$ is 0 for $i > j + 1$. Suppose naive elimination has been applied to $A$ up to and including row $i$. Inductively, assume that after these eliminations row $i$ is still diagonally dominant. Let $a_{i,j}$ be the $(i, j)$th entry of $A$ after these row operations. The next row operation is
$$a'_{i+1,j} = a_{i+1,j} - \frac{a_{i+1,i}}{a_{i,i}}\, a_{i,j}, \qquad j = i, \ldots, n. \qquad (5)$$
We check if the $(i + 1)$st row of $A$ is still diagonally dominant after this row operation. Notice that the row operation makes the $(i + 1, j)$ entries all 0 except for $j \ge i + 1$. By the induction assumption that the $i$th row of $A$ is diagonally dominant,
$$\sum_{j=i+2}^{n} |a_{i,j}| \le |a_{i,i}| - |a_{i,i+1}|.$$
Moreover, the $(i + 1)$th row of $A$ is diagonally dominant, since $A$ was diagonally dominant to begin with and the row operations up to row $i$ do not touch row $i + 1$. Thus,
$$\sum_{j=i+2}^{n} |a_{i+1,j}| \le |a_{i+1,i+1}| - |a_{i+1,i}|.$$
Combining the above displays,
$$\sum_{j=i+2}^{n} |a'_{i+1,j}|
\le \sum_{j=i+2}^{n} \left( |a_{i+1,j}| + \left|\frac{a_{i+1,i}}{a_{i,i}}\right| |a_{i,j}| \right)
\le \big(|a_{i+1,i+1}| - |a_{i+1,i}|\big) + \left|\frac{a_{i+1,i}}{a_{i,i}}\right| \big(|a_{i,i}| - |a_{i,i+1}|\big)
= |a_{i+1,i+1}| - \left|\frac{a_{i+1,i}}{a_{i,i}}\right| |a_{i,i+1}|
\le |a'_{i+1,i+1}|.$$
Thus, after the row operation (5), the $(i + 1)$th row of $A$ is still diagonally dominant.

Section 8. #b,c

Using LU_fact.m, the LU factorizations are
a) $L = \ldots$, $U = \ldots$
b) $L = \ldots$, $U = \ldots$
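LU_fact.m is likewise not reproduced here; this added Doolittle-style sketch (no pivoting, hypothetical test matrix) shows the kind of factorization it computes:

```python
import numpy as np

def lu_nopivot(A):
    """Doolittle LU factorization without pivoting: A = L @ U with L unit lower triangular."""
    U = A.astype(float)
    n = U.shape[0]
    L = np.eye(n)
    for i in range(n - 1):
        for k in range(i + 1, n):
            L[k, i] = U[k, i] / U[i, i]      # multiplier stored in L
            U[k, i:] -= L[k, i] * U[i, i:]   # the corresponding row operation on U
    return L, U

A = np.array([[4.0, 3.0, 2.0], [8.0, 8.0, 5.0], [4.0, 7.0, 6.0]])   # hypothetical matrix
L, U = lu_nopivot(A)
print(L, U, np.allclose(L @ U, A), sep="\n")
```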

Section 8. #b

The factorization is $L = \ldots$, $U = \ldots$. The $M$ from part a) is $M = \ldots$; it can be calculated by inverting the individual row operation matrices used to build $L$, then multiplying these matrices together.

Section 8. #4a

After the first row operation, a zero appears in the $(2, 2)$ spot and so naive elimination fails. Thus, the matrix has no LU factorization.

Section 8. #

a) The solution is $x_1 = \ldots$, $x_2 = 3$, $x_3 = 0$ and $x_4 = \ldots$.

b) Written in matrix notation, the system is already in lower triangular form, so one only needs to scale the columns to get $L$ (which must be unit lower triangular) and adjust $U$ accordingly. Thus, $L = \ldots$, $U = \ldots$.

Section 8. Computer problems #

See script online.

Section 8. #6,7,8

#6: c (Theorem, p. 39);  #7: b (Theorem, p. 330);  #8: a (Theorem, p. 330).
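The lower triangular system in part b) above is solved by forward substitution; the following added sketch uses a hypothetical system:

```python
import numpy as np

def forward_substitution(L, b):
    """Solve L y = b for a lower triangular matrix L."""
    n = len(b)
    y = np.zeros(n)
    for i in range(n):
        y[i] = (b[i] - L[i, :i] @ y[:i]) / L[i, i]
    return y

L = np.array([[2.0, 0.0, 0.0], [1.0, 3.0, 0.0], [4.0, -1.0, 5.0]])   # hypothetical
b = np.array([2.0, 5.0, 7.0])
print(forward_substitution(L, b), np.linalg.solve(L, b))   # the two agree
```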

Section 8. #

a) Since $A$ is symmetric, its condition number can be computed from its eigenvalues. Expanding $\det(A - \lambda I)$ and factoring gives the three eigenvalues; listing them in order of decreasing magnitude as $\lambda_1, \lambda_2, \lambda_3$, we get
$$\kappa(A) = \frac{|\lambda_1|}{|\lambda_3|}.$$

b) Since $A$ is not symmetric, its condition number must be calculated from the eigenvalues of $A^T A$ (or singular values of $A$). Note that
$$\det(A^T A - \lambda I) = (2 - \lambda)\big[(2 - \lambda)^2 - 3\big].$$
Thus, the eigenvalues of $A^T A$ in order of decreasing magnitude are
$$\lambda_1 = 2 + \sqrt{3}, \qquad \lambda_2 = 2, \qquad \lambda_3 = 2 - \sqrt{3}.$$
The singular values of $A$ in decreasing order are $\sigma_i = \sqrt{\lambda_i}$, $i = 1, 2, 3$. Thus,
$$\kappa(A) = \frac{\sigma_1}{\sigma_3} = \sqrt{\frac{2 + \sqrt{3}}{2 - \sqrt{3}}} = 2 + \sqrt{3}.$$

Section 8. Computer problems #,7

See scripts online.

Section 8.3 #8,,3

#8: d (Theorem, p. 345);  #: e (Gershgorin's theorem states that all eigenvalues must be in the union of the intervals listed);  #3: True (the $i$th inequality is equivalent to $\lambda$ lying in the disk centered at $a_{ii}$ with radius $r_i$, and every eigenvalue must be in at least one such disk).
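The two condition-number computations above can be mirrored in NumPy; the matrices below are hypothetical stand-ins, not the matrices of the exercise:

```python
import numpy as np

A_sym = np.array([[2.0, 1.0], [1.0, 2.0]])   # symmetric example
A_gen = np.array([[1.0, 1.0], [0.0, 1.0]])   # nonsymmetric example

lam = np.linalg.eigvalsh(A_sym)              # symmetric: kappa_2 = |lambda|_max / |lambda|_min
print(abs(lam).max() / abs(lam).min(), np.linalg.cond(A_sym, 2))

sig = np.linalg.svd(A_gen, compute_uv=False) # general: kappa_2 = sigma_max / sigma_min
print(sig[0] / sig[-1], np.linalg.cond(A_gen, 2))
```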

Section 8.4 #

Under appropriate assumptions on the inputs (including that $A$ does not have the eigenvalue zero), the output $r$ will be $1/\lambda_n$, with $\lambda_n$ the smallest (in magnitude) eigenvalue of $A$, and $x$ will be an approximation of the eigenvector corresponding to $\lambda_n$.

Section 8.4 #4,5,6

We use the power method with $\varphi(x) = u^T x$ and $u = (1, 0, 0)$.

In #4, five iterations give $\lambda^{(5)} = 7/5$ and $x^{(5)} = (7/4, \ldots, 7/4)$. The aim is to estimate the dominant eigenvalue of $A$ and the corresponding eigenvector: $\lambda^{(5)} \approx \lambda = \ldots$ and $x^{(5)} \approx x = \ldots$.

In #5, five iterations give $\lambda^{(5)} = 99/9$ and $x^{(5)} = (99/40, \ldots, 99/40)$. The aim is to estimate the eigenvalue of $A$ furthest from 4 and the corresponding eigenvector: $\lambda^{(5)} + 4 \approx \lambda = \ldots$ and $x^{(5)} \approx x = \ldots$.

In #6, five iterations give $\lambda^{(5)} = 99/58$ and $x^{(5)} = (99/40, \ldots, 99/40)$. The aim is to estimate the smallest eigenvalue of $A$ and the corresponding eigenvector: $1/\lambda^{(5)} \approx \lambda = \ldots$ and $x^{(5)} \approx x = \ldots$.

Section 8.4 Computer problems #4

See script online.
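A minimal power-method sketch in the spirit of #4, #5, #6; this is an addition, and both the test matrix and the functional phi are assumptions chosen for illustration:

```python
import numpy as np

def power_method(A, x0, iters=5, phi=lambda v: v[0]):
    """Basic power method; phi(v) = v[0] corresponds to u = (1, 0, 0) above."""
    x = x0.astype(float)
    lam = None
    for _ in range(iters):
        y = A @ x
        lam = phi(y) / phi(x)             # eigenvalue estimate through the functional phi
        x = y / np.linalg.norm(y, np.inf) # renormalize to avoid overflow
    return lam, x

A = np.array([[2.0, 1.0, 0.0], [1.0, 2.0, 1.0], [0.0, 1.0, 2.0]])   # hypothetical matrix
lam, x = power_method(A, np.array([1.0, 0.0, 0.0]), iters=20)
print(lam, 2 + np.sqrt(2))   # the dominant eigenvalue of this particular A is 2 + sqrt(2)
```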
