A Method for Reducing Ill-Conditioning of Polynomial Root Finding Using a Change of Basis


Portland State University PDXScholar
University Honors Theses, University Honors College, 2014

Recommended Citation: Tsai, Edison, "A Method for Reducing Ill-Conditioning of Polynomial Root Finding Using a Change of Basis" (2014). University Honors Theses. Paper /honors.113

This thesis is brought to you for free and open access. It has been accepted for inclusion in University Honors Theses by an authorized administrator of PDXScholar. For more information, please contact pdxscholar@pdx.edu.

A Method for Reducing Ill-Conditioning of Polynomial Root Finding Using a Change of Basis

by Edison Tsai

An undergraduate honors thesis submitted in partial fulfillment of the requirements for the degree of Bachelor of Science in University Honors and Mathematics

Thesis Adviser: Dr. Jeffrey Ovall

Portland State University, 2014

1. A Brief History of Polynomial Root Finding

The problem of solving polynomial equations is one of the oldest in mathematics. Many ancient civilizations developed systems of algebra which included methods for solving linear equations. Around 2000 B.C.E. the Babylonians found a method for solving quadratic equations which is equivalent to the modern quadratic formula. They also discovered a method for approximating square roots which turned out to be a special case of Newton's method (not discovered in its general form until over 3500 years later). Several Italian Renaissance mathematicians found general methods for finding roots of cubic and quartic polynomials. But it is known that there is no general formula for the roots of an arbitrary polynomial of degree 5 or higher using only arithmetic operations and root extraction. Therefore, when presented with a fifth-degree or higher polynomial equation, it is necessary to resort to numerical approximation. In fact, even for third- and fourth-degree polynomials it is usually much better in practice to use numerical methods, because the exact formulas are extremely prone to round-off error.

One of the most well-known numerical methods for solving not only polynomial equations, but for finding roots of any sufficiently well-behaved function, is Newton's method. In certain cases, however, such as that of repeated roots, Newton's method, like many other iterative methods, does not work very well because convergence is much slower. In addition, iterative methods exhibit sensitive dependence on initial conditions, so it is essentially impossible to predict beforehand which root an iterative method will converge to from a given starting point. Even if one had a method for finding all roots of a given polynomial with unlimited accuracy, there is another fundamental obstacle in the problem of finding roots of polynomials. The problem is that most polynomials exhibit a phenomenon known as ill-conditioning: a small perturbation to the coefficients of the polynomial can result in large changes in the roots. In general, when deriving numerical approximations to the solution of any mathematical problem (not just polynomial root finding), it is prudent to avoid ill-conditioning because it increases sensitivity to round-off error.
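The slower convergence of Newton's method at a repeated root mentioned above is easy to see in a short numerical sketch (this example, with its illustrative functions, is not from the thesis itself): the method converges quadratically to the simple root of x^2 - 2, but the error only roughly halves each step at the double root of (x - 1)^2.

```python
def newton(f, df, x0, tol=1e-12, max_iter=100):
    """Newton's method; returns (approximate root, iteration count)."""
    x = x0
    for k in range(1, max_iter + 1):
        x = x - f(x) / df(x)
        if abs(f(x)) < tol:
            return x, k
    return x, max_iter

# Simple root: f(x) = x^2 - 2 has the root sqrt(2); convergence is quadratic.
root_s, iters_simple = newton(lambda x: x * x - 2, lambda x: 2 * x, 2.0)

# Double root: f(x) = (x - 1)^2; the error only halves each iteration.
root_d, iters_double = newton(lambda x: (x - 1) ** 2, lambda x: 2 * (x - 1), 2.0)

print(iters_simple, iters_double)  # the double root takes far more iterations
```

Starting from the same initial guess, the double root requires roughly four times as many iterations here, and the gap widens as the tolerance is tightened.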

2. The Problem of Ill-Conditioning

Ill-conditioning is a phenomenon which appears in many mathematical problems and algorithms, including polynomial root finding. Other examples of ill-conditioned problems include inverting a matrix with a large condition number or computing the eigenvalues of a large matrix. In many cases, whether a given computation behaves in an ill-conditioned way depends on the algorithm used. For example, computing the QR factorization of a matrix can be ill-conditioned if one naively applies the Gram-Schmidt orthogonalization procedure in the most straightforward way, but using Householder reflections to compute a QR factorization is numerically stable and much less prone to round-off errors.

A classic example of ill-conditioning in the context of finding polynomial roots is the so-called Wilkinson polynomial, defined by

    w(x) = (x - 1)(x - 2)...(x - 20).    (1)

Clearly, by definition the roots of this polynomial are the integers from 1 to 20. The expanded form of the product in equation (1) may be given as

    w(x) = c_20 x^20 + c_19 x^19 + ... + c_1 x + c_0,    (2)

where the coefficients c_i are given by c_i = (-1)^(20-i) s_(20-i)(1, 2, ..., 20) and s_j denotes the j-th elementary symmetric polynomial (with the convention that s_0 is identically 1). It would be most convenient if a small change to any of the coefficients c_i resulted in a similarly small change to the roots of the polynomial. Unfortunately, this is far from the case. Figure 1 shows the roots of 50 different polynomials which differ from the polynomial given by equation (2) only in that each of the coefficients c_i has been perturbed by a random amount up to 10^-10 of the original value (i.e. by up to one part in ten billion). We see that although the change in the coefficients is very small by any standard, the effect on the roots is quite drastic.
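The experiment behind Figure 1 can be reproduced in a few lines (a sketch using NumPy's general-purpose companion-matrix root finder rather than any particular method from the thesis; the seed and trial count are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
coeffs = np.poly(np.arange(1, 21))  # standard-form coefficients of the Wilkinson polynomial

# Perturb each coefficient by a random relative amount up to 1e-10 and
# record the worst distance from any computed root to the nearest true root.
worst = 0.0
for _ in range(50):
    perturbed = coeffs * (1 + 1e-10 * rng.uniform(-1, 1, coeffs.size))
    roots = np.roots(perturbed)
    dist = np.abs(roots[:, None] - np.arange(1, 21)[None, :]).min(axis=1)
    worst = max(worst, dist.max())

print(worst)  # of order 1: tiny coefficient changes move some roots dramatically
```

Even though every coefficient moves by at most one part in ten billion, some of the computed roots move by a visible amount on the complex plane, exactly as Figure 1 shows.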
The problem is made worse by the fact that in most, if not all, real world applications, where polynomials are used to model or interpolate data, there will be some uncertainty in the values of the coefficients, arising from uncertainties in whatever measurements are used to generate the data. When the polynomial is ill-conditioned, as in the preceding example, very small uncertainties in the coefficients can lead to large uncertainties in the roots, often many orders of magnitude larger. This issue cannot be resolved simply by choosing a

different algorithm for approximating the roots; even if one were able to compute the roots of any polynomial with unlimited accuracy, there would still be a large degree of uncertainty in the computed roots of a polynomial generated from measured data, because the coefficients of the polynomial itself are not exact, whereas any method for computing polynomial roots must necessarily assume that the given coefficients are exact. Thus the only way to reduce the amount of uncertainty to a more reasonable level is to somehow make the problem less ill-conditioned.

Figure 1: Ill-conditioning present in the problem of finding the roots of the so-called Wilkinson polynomial. Roots of 51 different degree 20 polynomials are portrayed on the complex plane. The triangles denote the roots of the original unperturbed polynomial, which are simply the integers from 1 to 20, while the small dots denote the roots of 50 polynomials with coefficients differing from the original polynomial by up to one part in ten billion. Because the problem of finding polynomial roots is ill-conditioned, small perturbations to the coefficients lead to large changes in the roots. In the worst case, the imaginary parts of several of the roots change from 0 (for the unperturbed polynomial) to about 5. This is clearly not an acceptable margin of error by any reasonable standard.

A well-known and often-used metric for the degree of ill-conditioning of a given problem, or of an algorithm for solving that problem, is the condition number. The condition number measures how sensitive a computed answer or output is to changes in the input values or initial conditions. When defining the condition number, it is important to note that there are two ways in which changes to the initial and computed values may be measured: one may use either the absolute change or the relative change. If we treat an algorithm as a function which takes the initial input values and produces

outputs, then the condition number of the j-th output y_j with respect to the i-th input x_i measures how large a change in y_j can result from a small perturbation of x_i. If we denote a small perturbation of x_i by Δx_i, then the absolute change in x_i is simply |Δx_i|, whereas the relative change is defined by |Δx_i| / |x_i|. Thus the absolute change only measures the numerical difference between the perturbed and original values of a variable, whereas the relative change measures the perturbation as a proportion of the variable's original value. Usually, in numerical analysis the relative change is more important because the floating-point arithmetic system used by computers has a fixed amount of precision on a relative basis; i.e. in floating-point arithmetic an error margin of ±10 for a computed value of 1000 is just as precise as an error margin of ±1 for a computed value of 100.

We think of an algorithm for solving a problem as a function from an arbitrary set X, the set of parameters or initial conditions, to a set Y, the set of solutions. In order to define the condition number, we require that the notion of distance between two elements exists in both the set of initial conditions and the set of solutions; thus we assume that X and Y are normed vector spaces. In the case of polynomial root-finding, X and Y are both spaces of vectors of complex numbers: X consists of the polynomial coefficients, and Y consists of the roots.

Definition: given a function f: X → Y, where X and Y are normed vector spaces, the absolute condition number of f at any x in X is defined by

    κ_abs(x) = lim_{δ→0} sup_{||Δx|| ≤ δ} ||f(x + Δx) - f(x)|| / ||Δx||,

and the relative condition number of f at x is defined by

    κ(x) = lim_{δ→0} sup_{||Δx|| ≤ δ} ( ||f(x + Δx) - f(x)|| / ||f(x)|| ) / ( ||Δx|| / ||x|| ).

We wish to obtain an expression for the condition number of a root of a polynomial with respect to a given coefficient. Here, we are treating the roots of the polynomial as a function of the coefficients. However, we are only interested in the sensitivity of any particular root to changes in one particular coefficient at a time.
Therefore, we may think of the j-th root as a function of the i-th coefficient, holding all other coefficients fixed. In this case we may then refer to the (relative) condition number of the j-th root with respect

to the i-th coefficient, this being defined as the relative condition number of the function which gives the j-th root in terms of the i-th coefficient, evaluated at the i-th coefficient. The following theorem provides an expression for computing the condition number of any root with respect to any coefficient, provided that the root has multiplicity 1. Note that the condition number of a multiple root is always infinite, because the derivative of a polynomial at a multiple root is 0, so that small changes to any coefficient can lead to arbitrarily large changes in the roots.

Theorem 1: Let p(x) = c_n x^n + ... + c_1 x + c_0 be a degree n polynomial with coefficients c_i for 0 ≤ i ≤ n. If r is a nonzero root of p(x) with multiplicity 1, and c_j ≠ 0, then the relative condition number of r with respect to c_j is

    κ = |c_j r^(j-1)| / |p'(r)|.

Proof: Let Δc_j be any perturbation of the j-th coefficient. Define the polynomial p̃(x) as the result of perturbing the j-th coefficient of p(x) by Δc_j, so that p̃(x) = p(x) + Δc_j x^j, and denote the corresponding root of p̃ by r + Δr. Since the coefficients of any polynomial can be given as continuously differentiable functions of the roots (using symmetric polynomials), it follows from the inverse function theorem that the roots are continuous functions of the coefficients as well. In particular, r may be given as a continuous function of the j-th coefficient, with all other coefficients held constant. Therefore, as Δc_j → 0, Δr → 0, and we have p̃(r + Δr) = 0. By the definition of condition number,

    κ = lim_{δ→0} sup_{|Δc_j| ≤ δ} ( |Δr| / |r| ) / ( |Δc_j| / |c_j| ).

Consider the limit of Δr / Δc_j as Δc_j → 0. Since p(r) = 0, we have

    0 = p̃(r + Δr) = p(r + Δr) + Δc_j (r + Δr)^j,

so that

    Δr / Δc_j = -(r + Δr)^j / [ (p(r + Δr) - p(r)) / Δr ] → -r^j / p'(r).

Since this limit exists, we must have κ = (|c_j| / |r|) |r^j / p'(r)| = |c_j r^(j-1)| / |p'(r)| as well.

As an example, consider the coefficient c_19 = -210 of the Wilkinson polynomial. We see that for the root r = 20, the condition number with respect to c_19 is κ = 210 · 20^18 / 19! ≈ 4.5 × 10^8,

whereas for r = 1 it is κ = 210 / 19! ≈ 1.7 × 10^-15. Thus, the root r = 20 is sensitive to changes in this coefficient whereas the root r = 1 is virtually unaffected by those same changes.

3. Reducing the Degree of Ill-Conditioning Using a Change of Basis

The key to eliminating any sort of undesirable behavior is to first identify its source. For example, in order to debug a computer program it is first necessary to locate the origin of the bug, and in order to repair a mechanical failure in a piece of machinery it is first necessary to identify the component or components which have failed. In the case of polynomial root-finding, we must first identify the cause of ill-conditioning if we are to attempt to reduce it. Keeping in mind that large condition numbers correspond to ill-conditioned problems, and examining the expression for the condition number given by Theorem 1, we see that there are three factors which may lead to a large condition number κ: a small value of |p'(r)|, a large value of |c_j|, and a large value of |r^(j-1)| will all result in a large κ. It is unhelpful to focus on |p'(r)| because its value cannot be changed without changing the polynomial itself, and besides, most polynomials of even moderately high degree tend to have very large values of |p'| at the roots (assuming no multiple roots), so the value of |p'(r)| is not the source of the problem anyway. This points to large values of |c_j| and |r^(j-1)| as the cause of ill-conditioning. Indeed, with the Wilkinson polynomial the condition number of the root r = 20 with respect to the coefficient c_19 is about 4.5 × 10^8, and we see that the large value of κ is due to the large values of |c_19| = 210 and 20^18 ≈ 2.6 × 10^23. The relevant question, therefore, is whether there exists some method of reducing the magnitude of |c_j| and |r^(j-1)|. The answer to this question is yes, and it involves a bit of clever trickery. Up until now we have assumed that any polynomial will be represented in the form p(x) = c_n x^n + ... + c_1 x + c_0, but there is really no fundamental reason why this representation should be used.
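The numbers in the example above can be checked with a short computation (a sketch, not code from the thesis; the choice of c_19, the x^19 coefficient, follows Wilkinson's classic perturbation experiment):

```python
import numpy as np
from math import prod

def kappa(roots, j, r):
    """Relative condition number of the simple root r with respect to c_j,
    per Theorem 1: kappa = |c_j| * |r|**(j-1) / |p'(r)|."""
    n = len(roots)
    c = np.poly(roots)                          # coefficients, highest power first
    cj = c[n - j]                               # coefficient of x**j
    dp = prod(r - s for s in roots if s != r)   # p'(r) at a simple root r
    return abs(cj) * abs(r) ** (j - 1) / abs(dp)

wilkinson_roots = list(range(1, 21))
print(kappa(wilkinson_roots, 19, 20))  # ~4.5e8: root 20 is highly sensitive to c_19
print(kappa(wilkinson_roots, 19, 1))   # ~1.7e-15: root 1 is essentially unaffected
```

The two condition numbers differ by more than twenty orders of magnitude, which is exactly the asymmetry visible in Figure 1: the scatter is concentrated around the larger roots.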
In fact, this representation (the so-called standard form) is decidedly poor for a wide variety of problems, such as polynomial interpolation. Finding the coefficients of the interpolant polynomial for a given set of data is a horrendously ill-conditioned problem if the standard representation is used. The usual method of avoiding this problem is to use a different basis for representing the

interpolant polynomial; common choices of basis include the Lagrange and Newton bases, which both transform the ill-conditioned problem into a well-conditioned one. This suggests that it might be possible to reduce the ill-conditioning of polynomial root-finding using a similar change of basis. The fundamental idea is that instead of representing a degree n polynomial as p(x) = c_n x^n + ... + c_1 x + c_0, we may represent it as p(x) = b_0 φ_0(x) + b_1 φ_1(x) + ... + b_n φ_n(x), where φ_0, φ_1, ..., φ_n is a basis for the vector space of all polynomials of degree n or less. The standard representation is then a special case of this with φ_i(x) = x^i. But perhaps a different choice of basis will help alleviate the problem of sensitivity to small changes in the coefficients. In order to determine exactly what basis will help with reducing ill-conditioning, we first require an expression for the condition number of a given root with respect to a given coefficient, using an arbitrary basis. The following theorem, which is a generalization of Theorem 1 to arbitrary bases, provides such an expression.

Theorem 2: Let φ_0, φ_1, ..., φ_n be a basis for the vector space of polynomials of degree n or less, and let p(x) = b_0 φ_0(x) + ... + b_n φ_n(x) be a degree n polynomial. If r is a nonzero root of p(x) with multiplicity 1 and b_j ≠ 0 for some 0 ≤ j ≤ n, then the relative condition number of r with respect to b_j is

    κ = |b_j φ_j(r)| / ( |r| |p'(r)| ).

Proof: Let Δb_j be an arbitrary perturbation of the coefficient b_j, and define p̃(x) = p(x) + Δb_j φ_j(x) to be the result of perturbing the j-th coefficient of p(x) by Δb_j. Define r + Δr to be the corresponding root of the perturbed polynomial (in the same manner as for Theorem 1). By the same argument as for Theorem 1, as Δb_j → 0, Δr → 0 also. By the definition of condition number,

    κ = lim_{δ→0} sup_{|Δb_j| ≤ δ} ( |Δr| / |r| ) / ( |Δb_j| / |b_j| ).

Consider the limit of Δr / Δb_j. Since p(r) = 0, we have

    0 = p̃(r + Δr) = p(r + Δr) + Δb_j φ_j(r + Δr),

so that

    Δr / Δb_j = -φ_j(r + Δr) / [ (p(r + Δr) - p(r)) / Δr ] → -φ_j(r) / p'(r).

Since the limit exists, it follows that κ = |b_j φ_j(r)| / ( |r| |p'(r)| ).

As a result of Theorem 2, we see that large values of κ now result from large values of the coefficients b_j and large values of |φ_j(r)|. Since we cannot directly control the values of the coefficients (they will depend on the chosen basis), it makes sense to focus on minimizing

|φ_j(r)|; that is, the basis polynomials should have small values near the roots of the original polynomial. Unfortunately, we do not know the roots because they are precisely what we are trying to find! Therefore, a possible strategy for attacking the problem might be as follows: suppose that we have managed to obtain an estimate of the interval in which the roots are contained; denote this interval by [a, b]. Then we might choose basis polynomials such that |φ_j(x)| is small over the entire interval. Assuming that our estimate is sufficiently accurate, and in particular that all the roots lie inside the interval, the values of the basis polynomials at the roots of the original polynomial will necessarily be small as well.

It turns out that there exists a particular set of polynomials which work extremely well for implementing this strategy. These are the Chebyshev polynomials T_n, and the special property they have that makes them so useful is that |T_n(t)| ≤ 1 for all n, for -1 ≤ t ≤ 1. In addition, the Chebyshev polynomials form an orthogonal set, which makes them relatively easy to use as a basis for the space of polynomials of degree n or less. However, since we must have -1 ≤ t ≤ 1 in order for |T_n(t)| ≤ 1 to be satisfied, we must make a change of variables to map the original interval [a, b] onto [-1, 1]. If we let t = (2x - a - b) / (b - a), then as x ranges over [a, b], t ranges over [-1, 1]. We may then express the polynomial as a linear combination of Chebyshev polynomials, find its roots, and then reverse the change of variables to obtain the roots of the original polynomial. Our procedure for root-finding is then the following:

1. Start with a set of n+1 data points (x_i, y_i) for 0 ≤ i ≤ n, and suppose that the roots of the polynomial p(x) which interpolates these data points all lie in the interval [a, b].
2. Make the change of variables t = (2x - a - b) / (b - a) in order to obtain the data points (t_i, y_i) for 0 ≤ i ≤ n, with t_i = (2x_i - a - b) / (b - a). There exists a unique degree n polynomial q(t) such that q(t_i) = y_i for 0 ≤ i ≤ n.
3. Express the interpolating polynomial q(t) as a linear combination of Chebyshev polynomials.
4. Find the roots of q(t).
5. Make the change of variables x = ((b - a)t + a + b) / 2 to find the roots of the original polynomial.
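The five steps can be sketched end to end with NumPy's Chebyshev utilities (an illustrative example, not the thesis's implementation: a degree-6 analogue of the Wilkinson polynomial, a hypothetical bracketing interval [a, b] = [0, 7], and NumPy's chebroots standing in for the Newton iteration discussed later):

```python
import numpy as np
from numpy.polynomial import chebyshev as cheb

# Step 1: n+1 = 7 data points from a polynomial with roots 1..6.
x = np.arange(7.0)
y = np.prod(x[:, None] - np.arange(1, 7)[None, :], axis=1)
a, b = 0.0, 7.0                      # assumed interval containing the roots

# Step 2: map [a, b] onto [-1, 1].
t = (2 * x - a - b) / (b - a)

# Step 3: coefficients of the interpolant q(t) in the Chebyshev basis.
coef = cheb.chebfit(t, y, deg=6)

# Step 4: roots of q(t).
t_roots = cheb.chebroots(coef)

# Step 5: reverse the change of variables.
x_roots = np.sort(((b - a) * np.real(t_roots) + a + b) / 2)
print(x_roots)  # approximately [1, 2, 3, 4, 5, 6]
```

With 7 points and degree 6 the least-squares fit in step 3 is an exact interpolation, so the recovered roots agree with the true roots up to round-off.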

Steps (3) and (4) warrant additional discussion. Several possible procedures can be used to express a polynomial as a linear combination of Chebyshev polynomials. One method operates by finding a least-squares approximation to the polynomial, except that in this case the approximation is exact (at least in theory). Because the Chebyshev polynomials are orthogonal with respect to the weight 1/√(1 - t²), to find the coefficient of the i-th Chebyshev polynomial we simply multiply q(t) by T_i(t), divide by √(1 - t²), and then integrate over the interval [-1, 1] (normalizing appropriately). Unfortunately, since we only have a set of points on the graph of q (enough to uniquely determine it) but do not have an analytic formula for it, a quadrature rule such as Gauss-Chebyshev quadrature must be used to evaluate the integral. The disadvantage is that using such a quadrature rule requires that the data be sampled at certain specific, pre-determined points, which in many real-world applications may not always be possible or practical.

Another possible method for expressing the polynomial as a linear combination of Chebyshev polynomials is by solving the following system of equations: if we have n+1 data points, so that q is of degree n, then

    b_0 T_0(t_i) + b_1 T_1(t_i) + ... + b_n T_n(t_i) = y_i    for 0 ≤ i ≤ n,

where b_j is the coefficient of the j-th Chebyshev polynomial T_j. In other words, we have A b = y, where A is an (n+1)-by-(n+1) matrix given by A_ij = T_j(t_i), b is the vector of coefficients, and y is the vector of measured y-values from the given data. Solving this system then gives the coefficients b_j. However, this approach also has its own issues. If x_i and x_j are too close together for some i ≠ j, then the i-th and j-th rows of the matrix A will be almost identical, and the linear system will be ill-conditioned. The result is that there must be a restriction placed on the minimum distance between x values at which measurements are made. Nevertheless, this restriction is much looser than the restriction that measurements must be made at certain specific points (which comes with using Gauss-Chebyshev quadrature to calculate the coefficients).
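The linear-system approach can be sketched directly with a Chebyshev-Vandermonde matrix, including the ill-conditioning that appears when two sample points nearly coincide (the sample points and the function here are illustrative, not data from the thesis):

```python
import numpy as np
from numpy.polynomial import chebyshev as cheb

t = np.linspace(-1, 1, 7)        # mapped sample points t_i
y = t**3 - t                     # measured values y_i (illustrative)

A = cheb.chebvander(t, 6)        # A[i, j] = T_j(t_i)
bcoef = np.linalg.solve(A, y)    # coefficients b_j in the Chebyshev basis

# The solved coefficients reproduce the data (exact interpolation up to round-off).
print(np.max(np.abs(cheb.chebval(t, bcoef) - y)))

# If two sample points nearly coincide, two rows of A are almost identical
# and the linear system becomes ill-conditioned.
t_bad = t.copy()
t_bad[1] = t_bad[0] + 1e-8
print(np.linalg.cond(A), np.linalg.cond(cheb.chebvander(t_bad, 6)))
```

The condition number of the nearly-degenerate system is many orders of magnitude larger than that of the well-spaced one, which is the restriction on minimum point spacing described above.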
Therefore, this method is more flexible and would likely be preferred for most real-world applications. Accordingly, we have decided to use this method in order to obtain experimental results. The other noteworthy point is the method for finding the roots of the polynomial. When finding the roots of a polynomial which is expressed in terms of Chebyshev polynomials,

most advanced methods (such as those based on constructing a matrix for which the polynomial is the characteristic polynomial) are unusable because we are not using the standard form. Therefore, we can only resort to more elementary methods such as Newton's method. The primary issue with Newton's method is that it can be unpredictable which particular root will be found, unless the procedure is started sufficiently close to a root. This is problematic if we need to find one particular root of the polynomial. However, we can still hope that by trying enough initial conditions, all roots will eventually be found. This issue is beyond the scope of this research and thus we do not attempt to address it. For the experimental results presented below, we have taken advantage of the fact that the exact roots are known in order to select a set of initial conditions which converge to all the roots of q(t), but it must be kept in mind that selecting good initial conditions for Newton's method remains a problem in real-world situations where at best only a rough estimate of the roots is available.

4. Experimental Results

The procedure described in the previous section was used for generating all the results presented here. In order to obtain a set of data points, the Wilkinson polynomial was evaluated at 21 equally spaced points (with a spacing of 1 between successive x values), and this sampling interval was used as the interval [a, b] in which the roots are assumed to lie. For the first experiment, after applying the change of variables and expressing the interpolant polynomial as a linear combination of Chebyshev polynomials, 50 perturbed polynomials were generated by perturbing the coefficients by a random amount up to 10^-10 of the original value (i.e. up to one part in ten billion). This is the same amount which was previously used to demonstrate the ill-conditioning of the Wilkinson polynomial when expressed in standard form.
The roots of the 50 perturbed polynomials were then found using Newton's method. Using Newton's method for any function requires a means of evaluating the function and its derivative at any point. This is where another useful property of Chebyshev polynomials comes into play: they can all be quickly and accurately evaluated at any point using the recurrence

    T_(n+1)(t) = 2t T_n(t) - T_(n-1)(t),    (2)

with T_0(t) = 1 and T_1(t) = t. Differentiating both sides gives

    T'_(n+1)(t) = 2 T_n(t) + 2t T'_n(t) - T'_(n-1)(t),    (3)

which holds for n ≥ 1, with T'_0(t) = 0 and T'_1(t) = 1. Together, equations (2) and (3) provide a means of efficiently evaluating both q(t) and its derivative at any point, which allows Newton's method to be used. The results are illustrated in Figure 2, which plots the roots of the 50 perturbed polynomials along with the original roots on the complex plane.

Figure 2: Reducing the degree of ill-conditioning of the Wilkinson polynomial by using a basis consisting of Chebyshev polynomials. The triangles show the roots of the original polynomial, which are simply the integers from 1 through 20, while the dots show the roots of 50 polynomials with coefficients in the Chebyshev basis perturbed by a random amount up to one part in ten billion, which is the same amount used with the standard basis to produce Figure 1. At this scale, the roots of the perturbed polynomials are indistinguishable from each other and from the original roots.

We see that the roots of the perturbed polynomials are essentially indistinguishable from the original roots. This is clearly a drastic improvement over the situation depicted in Figure 1. Since no effect is discernible here, we may also consider what happens with larger perturbations to the coefficients. Figure 3 shows the result when the maximum size of the perturbations is increased to 10^-7 of the original value (one part in ten million).
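Equations (2) and (3) translate directly into code; the sketch below (with illustrative function names, not the thesis's own implementation) evaluates a Chebyshev series and its derivative by the two recurrences and uses them in a Newton iteration:

```python
def cheb_series_eval(b, t):
    """Evaluate p(t) = sum_k b[k] T_k(t) and p'(t) via the recurrences
    T_(n+1) = 2t T_n - T_(n-1)  and  T'_(n+1) = 2 T_n + 2t T'_n - T'_(n-1)."""
    T_prev, T_cur = 1.0, t        # T_0, T_1
    dT_prev, dT_cur = 0.0, 1.0    # T_0', T_1'
    p = b[0] * T_prev + (b[1] * T_cur if len(b) > 1 else 0.0)
    dp = b[1] * dT_cur if len(b) > 1 else 0.0
    for k in range(2, len(b)):
        T_next = 2 * t * T_cur - T_prev
        dT_next = 2 * T_cur + 2 * t * dT_cur - dT_prev
        p += b[k] * T_next
        dp += b[k] * dT_next
        T_prev, T_cur = T_cur, T_next
        dT_prev, dT_cur = dT_cur, dT_next
    return p, dp

def newton_cheb(b, t0, tol=1e-13, max_iter=50):
    """Newton's method on a polynomial given by its Chebyshev coefficients b."""
    t = t0
    for _ in range(max_iter):
        p, dp = cheb_series_eval(b, t)
        if abs(p) < tol:
            break
        t = t - p / dp
    return t

# Example: p(t) = T_2(t) = 2t^2 - 1 has roots +-1/sqrt(2).
root = newton_cheb([0.0, 0.0, 1.0], 0.5)
print(root)  # ~0.70710678...
```

Each evaluation costs O(n) operations, so the Newton iteration never needs the standard-form coefficients at all.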

Figure 3: Effects of larger perturbations to the coefficients, of up to one part in ten million. This figure was generated using the exact same procedure as Figure 2, with the only difference being the size of the perturbations. We see that the roots towards the middle of the interval are beginning to spread out, but still remain close to the original roots. The roots at the ends of the interval are still essentially indistinguishable from the original roots at this scale.

When the size of the perturbations is increased, we see that some of the roots in the middle of the interval begin to diverge from the original roots. But they still remain reasonably close, and the results are indisputably an immense improvement over those obtained with the standard basis as shown in Figure 1, especially considering that the perturbations are now a thousand times larger. Therefore, we may conclude that using Chebyshev polynomials as a basis results in much better conditioning for finding polynomial roots.

5. Possible Directions for Future Research

In this research project, we demonstrated that using Chebyshev polynomials as a basis for the space of all polynomials of degree n or less drastically improves the conditioning of polynomial root-finding as compared to using the standard basis. Nevertheless, there remain many questions which can be further explored. These include:

1. The significance of ill-conditioning in polynomial root-finding arises from the fact that in real-world applications, where polynomials are used to model data, the data

itself will not be exact and so neither will the coefficients. The experimental results presented here were all obtained by perturbing the coefficients by random amounts (up to a certain maximum). What would happen if the data itself were perturbed instead? Note that in real-world situations, the data will have uncertainties in both the x- and y-values.

2. We used Chebyshev polynomials as an alternative basis because they possess certain properties which are likely to reduce the condition number (and thus improve the conditioning of the problem). However, there is no reason why a different basis might not work just as well or even better in some situations. It might even be possible to come up with an adaptive scheme for root-finding where the data is first analyzed for certain patterns or trends, and a basis is then selected to minimize the ill-conditioning for that particular set of data. This needs to be further investigated.

3. Using Newton's method requires that both the function and its derivative can be accurately evaluated at any point. In this case, the special properties of Chebyshev polynomials allowed us to do this, but it might not always be possible to find a simple expression for evaluating other basis polynomials and their derivatives at arbitrary points. Can the existing linear-algebra-based methods for root-finding be generalized to work for polynomials expressed in terms of an arbitrary basis?

4. In terms of real-world applications, polynomials are often used for more than just interpolating data. For instance, they can also be used for least-squares approximations, which in many situations model general trends in the data much better than straight interpolation (where the interpolant polynomial passes through all the data points exactly) does. In these cases, how sensitive are the roots of the polynomial to changes in the data?
We conclude that while we have demonstrated a method for improving the conditioning of root-finding in one particular scenario, there are still many questions which need to be answered in order to use this method for real-world applications. The four points listed above represent some, but certainly not all, of the directions in which further investigation might be taken. In general, many of the problems associated with using mathematical theories to model real-world situations remain wide open.


ACCESS TO SCIENCE, ENGINEERING AND AGRICULTURE: MATHEMATICS 1 MATH00030 SEMESTER / Lines and Their Equations ACCESS TO SCIENCE, ENGINEERING AND AGRICULTURE: MATHEMATICS 1 MATH00030 SEMESTER 1 017/018 DR. ANTHONY BROWN. Lines and Their Equations.1. Slope of a Line and its y-intercept. In Euclidean geometry (where

More information

Math 411 Preliminaries

Math 411 Preliminaries Math 411 Preliminaries Provide a list of preliminary vocabulary and concepts Preliminary Basic Netwon s method, Taylor series expansion (for single and multiple variables), Eigenvalue, Eigenvector, Vector

More information

19. TAYLOR SERIES AND TECHNIQUES

19. TAYLOR SERIES AND TECHNIQUES 19. TAYLOR SERIES AND TECHNIQUES Taylor polynomials can be generated for a given function through a certain linear combination of its derivatives. The idea is that we can approximate a function by a polynomial,

More information

LEAST SQUARES DATA FITTING

LEAST SQUARES DATA FITTING LEAST SQUARES DATA FITTING Experiments generally have error or uncertainty in measuring their outcome. Error can be human error, but it is more usually due to inherent limitations in the equipment being

More information

NUMERICAL METHODS C. Carl Gustav Jacob Jacobi 10.1 GAUSSIAN ELIMINATION WITH PARTIAL PIVOTING

NUMERICAL METHODS C. Carl Gustav Jacob Jacobi 10.1 GAUSSIAN ELIMINATION WITH PARTIAL PIVOTING 0. Gaussian Elimination with Partial Pivoting 0.2 Iterative Methods for Solving Linear Systems 0.3 Power Method for Approximating Eigenvalues 0.4 Applications of Numerical Methods Carl Gustav Jacob Jacobi

More information

Linear Algebra I. Ronald van Luijk, 2015

Linear Algebra I. Ronald van Luijk, 2015 Linear Algebra I Ronald van Luijk, 2015 With many parts from Linear Algebra I by Michael Stoll, 2007 Contents Dependencies among sections 3 Chapter 1. Euclidean space: lines and hyperplanes 5 1.1. Definition

More information

Some Notes on Linear Algebra

Some Notes on Linear Algebra Some Notes on Linear Algebra prepared for a first course in differential equations Thomas L Scofield Department of Mathematics and Statistics Calvin College 1998 1 The purpose of these notes is to present

More information

11.5 Reduction of a General Matrix to Hessenberg Form

11.5 Reduction of a General Matrix to Hessenberg Form 476 Chapter 11. Eigensystems 11.5 Reduction of a General Matrix to Hessenberg Form The algorithms for symmetric matrices, given in the preceding sections, are highly satisfactory in practice. By contrast,

More information

Discrete Mathematics and Probability Theory Fall 2014 Anant Sahai Note 7

Discrete Mathematics and Probability Theory Fall 2014 Anant Sahai Note 7 EECS 70 Discrete Mathematics and Probability Theory Fall 2014 Anant Sahai Note 7 Polynomials Polynomials constitute a rich class of functions which are both easy to describe and widely applicable in topics

More information

Chapter 1. Linear Equations

Chapter 1. Linear Equations Chapter 1. Linear Equations We ll start our study of linear algebra with linear equations. Lost of parts of mathematics rose out of trying to understand the solutions of different types of equations. Linear

More information

Numerical Analysis Preliminary Exam 10 am to 1 pm, August 20, 2018

Numerical Analysis Preliminary Exam 10 am to 1 pm, August 20, 2018 Numerical Analysis Preliminary Exam 1 am to 1 pm, August 2, 218 Instructions. You have three hours to complete this exam. Submit solutions to four (and no more) of the following six problems. Please start

More information

Let x be an approximate solution for Ax = b, e.g., obtained by Gaussian elimination. Let x denote the exact solution. Call. r := b A x.

Let x be an approximate solution for Ax = b, e.g., obtained by Gaussian elimination. Let x denote the exact solution. Call. r := b A x. ESTIMATION OF ERROR Let x be an approximate solution for Ax = b, e.g., obtained by Gaussian elimination. Let x denote the exact solution. Call the residual for x. Then r := b A x r = b A x = Ax A x = A

More information

Empirical Models Interpolation Polynomial Models

Empirical Models Interpolation Polynomial Models Mathematical Modeling Lia Vas Empirical Models Interpolation Polynomial Models Lagrange Polynomial. Recall that two points (x 1, y 1 ) and (x 2, y 2 ) determine a unique line y = ax + b passing them (obtained

More information

Physics 331 Introduction to Numerical Techniques in Physics

Physics 331 Introduction to Numerical Techniques in Physics Physics 331 Introduction to Numerical Techniques in Physics Instructor: Joaquín Drut Lecture 12 Last time: Polynomial interpolation: basics; Lagrange interpolation. Today: Quick review. Formal properties.

More information

Part IB - Easter Term 2003 Numerical Analysis I

Part IB - Easter Term 2003 Numerical Analysis I Part IB - Easter Term 2003 Numerical Analysis I 1. Course description Here is an approximative content of the course 1. LU factorization Introduction. Gaussian elimination. LU factorization. Pivoting.

More information

Queens College, CUNY, Department of Computer Science Numerical Methods CSCI 361 / 761 Spring 2018 Instructor: Dr. Sateesh Mane.

Queens College, CUNY, Department of Computer Science Numerical Methods CSCI 361 / 761 Spring 2018 Instructor: Dr. Sateesh Mane. Queens College, CUNY, Department of Computer Science Numerical Methods CSCI 361 / 761 Spring 2018 Instructor: Dr. Sateesh Mane c Sateesh R. Mane 2018 3 Lecture 3 3.1 General remarks March 4, 2018 This

More information

Jim Lambers MAT 610 Summer Session Lecture 2 Notes

Jim Lambers MAT 610 Summer Session Lecture 2 Notes Jim Lambers MAT 610 Summer Session 2009-10 Lecture 2 Notes These notes correspond to Sections 2.2-2.4 in the text. Vector Norms Given vectors x and y of length one, which are simply scalars x and y, the

More information

[Disclaimer: This is not a complete list of everything you need to know, just some of the topics that gave people difficulty.]

[Disclaimer: This is not a complete list of everything you need to know, just some of the topics that gave people difficulty.] Math 43 Review Notes [Disclaimer: This is not a complete list of everything you need to know, just some of the topics that gave people difficulty Dot Product If v (v, v, v 3 and w (w, w, w 3, then the

More information

8 Primes and Modular Arithmetic

8 Primes and Modular Arithmetic 8 Primes and Modular Arithmetic 8.1 Primes and Factors Over two millennia ago already, people all over the world were considering the properties of numbers. One of the simplest concepts is prime numbers.

More information

Visualizing Complex Roots

Visualizing Complex Roots The Mathematics Enthusiast Volume 15 Number 3 Number 3 Article 9 7-1-2018 Visualizing Complex Roots William Bauldry Michael J Bosse Steven Otey Let us know how access to this document benefits you Follow

More information

CHAPTER 1: Functions

CHAPTER 1: Functions CHAPTER 1: Functions 1.1: Functions 1.2: Graphs of Functions 1.3: Basic Graphs and Symmetry 1.4: Transformations 1.5: Piecewise-Defined Functions; Limits and Continuity in Calculus 1.6: Combining Functions

More information

= 5 2 and = 13 2 and = (1) = 10 2 and = 15 2 and = 25 2

= 5 2 and = 13 2 and = (1) = 10 2 and = 15 2 and = 25 2 BEGINNING ALGEBRAIC NUMBER THEORY Fermat s Last Theorem is one of the most famous problems in mathematics. Its origin can be traced back to the work of the Greek mathematician Diophantus (third century

More information

1 Computational problems

1 Computational problems 80240233: Computational Complexity Lecture 1 ITCS, Tsinghua Univesity, Fall 2007 9 October 2007 Instructor: Andrej Bogdanov Notes by: Andrej Bogdanov The aim of computational complexity theory is to study

More information

you expect to encounter difficulties when trying to solve A x = b? 4. A composite quadrature rule has error associated with it in the following form

you expect to encounter difficulties when trying to solve A x = b? 4. A composite quadrature rule has error associated with it in the following form Qualifying exam for numerical analysis (Spring 2017) Show your work for full credit. If you are unable to solve some part, attempt the subsequent parts. 1. Consider the following finite difference: f (0)

More information

Evaluating implicit equations

Evaluating implicit equations APPENDIX B Evaluating implicit equations There is a certain amount of difficulty for some students in understanding the difference between implicit and explicit equations, and in knowing how to evaluate

More information

Rectangular Systems and Echelon Forms

Rectangular Systems and Echelon Forms CHAPTER 2 Rectangular Systems and Echelon Forms 2.1 ROW ECHELON FORM AND RANK We are now ready to analyze more general linear systems consisting of m linear equations involving n unknowns a 11 x 1 + a

More information

Linear Algebra March 16, 2019

Linear Algebra March 16, 2019 Linear Algebra March 16, 2019 2 Contents 0.1 Notation................................ 4 1 Systems of linear equations, and matrices 5 1.1 Systems of linear equations..................... 5 1.2 Augmented

More information

Number Systems III MA1S1. Tristan McLoughlin. December 4, 2013

Number Systems III MA1S1. Tristan McLoughlin. December 4, 2013 Number Systems III MA1S1 Tristan McLoughlin December 4, 2013 http://en.wikipedia.org/wiki/binary numeral system http://accu.org/index.php/articles/1558 http://www.binaryconvert.com http://en.wikipedia.org/wiki/ascii

More information

Scientific Computing: An Introductory Survey

Scientific Computing: An Introductory Survey Scientific Computing: An Introductory Survey Chapter 9 Initial Value Problems for Ordinary Differential Equations Prof. Michael T. Heath Department of Computer Science University of Illinois at Urbana-Champaign

More information

Functions and Data Fitting

Functions and Data Fitting Functions and Data Fitting September 1, 2018 1 Computations as Functions Many computational tasks can be described by functions, that is, mappings from an input to an output Examples: A SPAM filter for

More information

Numerical Analysis Solution of Algebraic Equation (non-linear equation) 1- Trial and Error. 2- Fixed point

Numerical Analysis Solution of Algebraic Equation (non-linear equation) 1- Trial and Error. 2- Fixed point Numerical Analysis Solution of Algebraic Equation (non-linear equation) 1- Trial and Error In this method we assume initial value of x, and substitute in the equation. Then modify x and continue till we

More information

Chapter Five Notes N P U2C5

Chapter Five Notes N P U2C5 Chapter Five Notes N P UC5 Name Period Section 5.: Linear and Quadratic Functions with Modeling In every math class you have had since algebra you have worked with equations. Most of those equations have

More information

MTH 2032 Semester II

MTH 2032 Semester II MTH 232 Semester II 2-2 Linear Algebra Reference Notes Dr. Tony Yee Department of Mathematics and Information Technology The Hong Kong Institute of Education December 28, 2 ii Contents Table of Contents

More information

A Hypothesis about Infinite Series, of the minimally centered variety. By Sidharth Ghoshal May 17, 2012

A Hypothesis about Infinite Series, of the minimally centered variety. By Sidharth Ghoshal May 17, 2012 A Hypothesis about Infinite Series, of the minimally centered variety By Sidharth Ghoshal May 17, 2012 Contents Taylor s Theorem and Maclaurin s Special Case: A Brief Introduction... 3 The Curious Case

More information

Solving the Yang-Baxter Matrix Equation

Solving the Yang-Baxter Matrix Equation The University of Southern Mississippi The Aquila Digital Community Honors Theses Honors College 5-7 Solving the Yang-Baxter Matrix Equation Mallory O Jennings Follow this and additional works at: http://aquilausmedu/honors_theses

More information

MTH Linear Algebra. Study Guide. Dr. Tony Yee Department of Mathematics and Information Technology The Hong Kong Institute of Education

MTH Linear Algebra. Study Guide. Dr. Tony Yee Department of Mathematics and Information Technology The Hong Kong Institute of Education MTH 3 Linear Algebra Study Guide Dr. Tony Yee Department of Mathematics and Information Technology The Hong Kong Institute of Education June 3, ii Contents Table of Contents iii Matrix Algebra. Real Life

More information

Iterative Linear Solvers

Iterative Linear Solvers Chapter 10 Iterative Linear Solvers In the previous two chapters, we developed strategies for solving a new class of problems involving minimizing a function f ( x) with or without constraints on x. In

More information

x x2 2 + x3 3 x4 3. Use the divided-difference method to find a polynomial of least degree that fits the values shown: (b)

x x2 2 + x3 3 x4 3. Use the divided-difference method to find a polynomial of least degree that fits the values shown: (b) Numerical Methods - PROBLEMS. The Taylor series, about the origin, for log( + x) is x x2 2 + x3 3 x4 4 + Find an upper bound on the magnitude of the truncation error on the interval x.5 when log( + x)

More information

FACTORIZATION AND THE PRIMES

FACTORIZATION AND THE PRIMES I FACTORIZATION AND THE PRIMES 1. The laws of arithmetic The object of the higher arithmetic is to discover and to establish general propositions concerning the natural numbers 1, 2, 3,... of ordinary

More information

Eigenvectors and Hermitian Operators

Eigenvectors and Hermitian Operators 7 71 Eigenvalues and Eigenvectors Basic Definitions Let L be a linear operator on some given vector space V A scalar λ and a nonzero vector v are referred to, respectively, as an eigenvalue and corresponding

More information

CHAPTER 1. REVIEW: NUMBERS

CHAPTER 1. REVIEW: NUMBERS CHAPTER. REVIEW: NUMBERS Yes, mathematics deals with numbers. But doing math is not number crunching! Rather, it is a very complicated psychological process of learning and inventing. Just like listing

More information

GENG2140, S2, 2012 Week 7: Curve fitting

GENG2140, S2, 2012 Week 7: Curve fitting GENG2140, S2, 2012 Week 7: Curve fitting Curve fitting is the process of constructing a curve, or mathematical function, f(x) that has the best fit to a series of data points Involves fitting lines and

More information

Numerical Methods of Approximation

Numerical Methods of Approximation Contents 31 Numerical Methods of Approximation 31.1 Polynomial Approximations 2 31.2 Numerical Integration 28 31.3 Numerical Differentiation 58 31.4 Nonlinear Equations 67 Learning outcomes In this Workbook

More information

Function Approximation

Function Approximation 1 Function Approximation This is page i Printer: Opaque this 1.1 Introduction In this chapter we discuss approximating functional forms. Both in econometric and in numerical problems, the need for an approximating

More information

Getting Started with Communications Engineering

Getting Started with Communications Engineering 1 Linear algebra is the algebra of linear equations: the term linear being used in the same sense as in linear functions, such as: which is the equation of a straight line. y ax c (0.1) Of course, if we

More information

CP Algebra 2. Unit 2-1 Factoring and Solving Quadratics

CP Algebra 2. Unit 2-1 Factoring and Solving Quadratics CP Algebra Unit -1 Factoring and Solving Quadratics Name: Period: 1 Unit -1 Factoring and Solving Quadratics Learning Targets: 1. I can factor using GCF.. I can factor by grouping. Factoring Quadratic

More information

Tropical Polynomials

Tropical Polynomials 1 Tropical Arithmetic Tropical Polynomials Los Angeles Math Circle, May 15, 2016 Bryant Mathews, Azusa Pacific University In tropical arithmetic, we define new addition and multiplication operations on

More information

LINEAR ALGEBRA: NUMERICAL METHODS. Version: August 12,

LINEAR ALGEBRA: NUMERICAL METHODS. Version: August 12, LINEAR ALGEBRA: NUMERICAL METHODS. Version: August 12, 2000 74 6 Summary Here we summarize the most important information about theoretical and numerical linear algebra. MORALS OF THE STORY: I. Theoretically

More information

NUMERICAL MATHEMATICS & COMPUTING 6th Edition

NUMERICAL MATHEMATICS & COMPUTING 6th Edition NUMERICAL MATHEMATICS & COMPUTING 6th Edition Ward Cheney/David Kincaid c UT Austin Engage Learning: Thomson-Brooks/Cole www.engage.com www.ma.utexas.edu/cna/nmc6 September 1, 2011 2011 1 / 42 1.1 Mathematical

More information

The Gram-Schmidt Process

The Gram-Schmidt Process The Gram-Schmidt Process How and Why it Works This is intended as a complement to 5.4 in our textbook. I assume you have read that section, so I will not repeat the definitions it gives. Our goal is to

More information

arxiv: v1 [math.na] 5 May 2011

arxiv: v1 [math.na] 5 May 2011 ITERATIVE METHODS FOR COMPUTING EIGENVALUES AND EIGENVECTORS MAYSUM PANJU arxiv:1105.1185v1 [math.na] 5 May 2011 Abstract. We examine some numerical iterative methods for computing the eigenvalues and

More information

Real Analysis Prof. S.H. Kulkarni Department of Mathematics Indian Institute of Technology, Madras. Lecture - 13 Conditional Convergence

Real Analysis Prof. S.H. Kulkarni Department of Mathematics Indian Institute of Technology, Madras. Lecture - 13 Conditional Convergence Real Analysis Prof. S.H. Kulkarni Department of Mathematics Indian Institute of Technology, Madras Lecture - 13 Conditional Convergence Now, there are a few things that are remaining in the discussion

More information

What is proof? Lesson 1

What is proof? Lesson 1 What is proof? Lesson The topic for this Math Explorer Club is mathematical proof. In this post we will go over what was covered in the first session. The word proof is a normal English word that you might

More information

Infinite series, improper integrals, and Taylor series

Infinite series, improper integrals, and Taylor series Chapter 2 Infinite series, improper integrals, and Taylor series 2. Introduction to series In studying calculus, we have explored a variety of functions. Among the most basic are polynomials, i.e. functions

More information

Fitting functions to data

Fitting functions to data 1 Fitting functions to data 1.1 Exact fitting 1.1.1 Introduction Suppose we have a set of real-number data pairs x i, y i, i = 1, 2,, N. These can be considered to be a set of points in the xy-plane. They

More information

8.7 MacLaurin Polynomials

8.7 MacLaurin Polynomials 8.7 maclaurin polynomials 67 8.7 MacLaurin Polynomials In this chapter you have learned to find antiderivatives of a wide variety of elementary functions, but many more such functions fail to have an antiderivative

More information

The value of a problem is not so much coming up with the answer as in the ideas and attempted ideas it forces on the would be solver I.N.

The value of a problem is not so much coming up with the answer as in the ideas and attempted ideas it forces on the would be solver I.N. Math 410 Homework Problems In the following pages you will find all of the homework problems for the semester. Homework should be written out neatly and stapled and turned in at the beginning of class

More information

3.1 Introduction. Solve non-linear real equation f(x) = 0 for real root or zero x. E.g. x x 1.5 =0, tan x x =0.

3.1 Introduction. Solve non-linear real equation f(x) = 0 for real root or zero x. E.g. x x 1.5 =0, tan x x =0. 3.1 Introduction Solve non-linear real equation f(x) = 0 for real root or zero x. E.g. x 3 +1.5x 1.5 =0, tan x x =0. Practical existence test for roots: by intermediate value theorem, f C[a, b] & f(a)f(b)

More information

chapter 12 MORE MATRIX ALGEBRA 12.1 Systems of Linear Equations GOALS

chapter 12 MORE MATRIX ALGEBRA 12.1 Systems of Linear Equations GOALS chapter MORE MATRIX ALGEBRA GOALS In Chapter we studied matrix operations and the algebra of sets and logic. We also made note of the strong resemblance of matrix algebra to elementary algebra. The reader

More information

0. Introduction 1 0. INTRODUCTION

0. Introduction 1 0. INTRODUCTION 0. Introduction 1 0. INTRODUCTION In a very rough sketch we explain what algebraic geometry is about and what it can be used for. We stress the many correlations with other fields of research, such as

More information

Preface. Figures Figures appearing in the text were prepared using MATLAB R. For product information, please contact:

Preface. Figures Figures appearing in the text were prepared using MATLAB R. For product information, please contact: Linear algebra forms the basis for much of modern mathematics theoretical, applied, and computational. The purpose of this book is to provide a broad and solid foundation for the study of advanced mathematics.

More information

GRE Quantitative Reasoning Practice Questions

GRE Quantitative Reasoning Practice Questions GRE Quantitative Reasoning Practice Questions y O x 7. The figure above shows the graph of the function f in the xy-plane. What is the value of f (f( ))? A B C 0 D E Explanation Note that to find f (f(

More information

Course Description for Real Analysis, Math 156

Course Description for Real Analysis, Math 156 Course Description for Real Analysis, Math 156 In this class, we will study elliptic PDE, Fourier analysis, and dispersive PDE. Here is a quick summary of the topics will study study. They re described

More information

Unit 2, Section 3: Linear Combinations, Spanning, and Linear Independence Linear Combinations, Spanning, and Linear Independence

Unit 2, Section 3: Linear Combinations, Spanning, and Linear Independence Linear Combinations, Spanning, and Linear Independence Linear Combinations Spanning and Linear Independence We have seen that there are two operations defined on a given vector space V :. vector addition of two vectors and. scalar multiplication of a vector

More information

Review I: Interpolation

Review I: Interpolation Review I: Interpolation Varun Shankar January, 206 Introduction In this document, we review interpolation by polynomials. Unlike many reviews, we will not stop there: we will discuss how to differentiate

More information

Applied Linear Algebra in Geoscience Using MATLAB

Applied Linear Algebra in Geoscience Using MATLAB Applied Linear Algebra in Geoscience Using MATLAB Contents Getting Started Creating Arrays Mathematical Operations with Arrays Using Script Files and Managing Data Two-Dimensional Plots Programming in

More information

Section 5.5. Complex Eigenvalues

Section 5.5. Complex Eigenvalues Section 5.5 Complex Eigenvalues A Matrix with No Eigenvectors Consider the matrix for the linear transformation for rotation by π/4 in the plane. The matrix is: A = 1 ( ) 1 1. 2 1 1 This matrix has no

More information

y = A T x = A T (x r + x c ) = A T x r

y = A T x = A T (x r + x c ) = A T x r APPM/MATH 4660 Homework 1 Solutions This assignment is due on Google Drive by 11:59pm on Friday, January 30th Submit your solution writeup and any (well-commented) code that you produce according to the

More information

Cosets, Lagrange s Theorem, and Normal Subgroups

Cosets, Lagrange s Theorem, and Normal Subgroups Chapter 7 Cosets, Lagrange s Theorem, and Normal Subgroups 7.1 Cosets Undoubtably, you ve noticed numerous times that if G is a group with H apple G and g 2 G, then both H and g divide G. The theorem that

More information

Self-healing tile sets: rates of regrowth in damaged assemblies

Self-healing tile sets: rates of regrowth in damaged assemblies University of Arkansas, Fayetteville ScholarWorks@UARK Mathematical Sciences Undergraduate Honors Theses Mathematical Sciences 5-2015 Self-healing tile sets: rates of regrowth in damaged assemblies Kevin

More information

Lecture Notes, Week 6

Lecture Notes, Week 6 YALE UNIVERSITY DEPARTMENT OF COMPUTER SCIENCE CPSC 467b: Cryptography and Computer Security Week 6 (rev. 3) Professor M. J. Fischer February 15 & 17, 2005 1 RSA Security Lecture Notes, Week 6 Several

More information

5.1 Polynomial Functions

5.1 Polynomial Functions 5.1 Polynomial Functions In this section, we will study the following topics: Identifying polynomial functions and their degree Determining end behavior of polynomial graphs Finding real zeros of polynomial

More information

Manipulating Equations

Manipulating Equations Manipulating Equations Now that you know how to set up an equation, the next thing you need to do is solve for the value that the question asks for. Above all, the most important thing to remember when

More information

(x 1 +x 2 )(x 1 x 2 )+(x 2 +x 3 )(x 2 x 3 )+(x 3 +x 1 )(x 3 x 1 ).

(x 1 +x 2 )(x 1 x 2 )+(x 2 +x 3 )(x 2 x 3 )+(x 3 +x 1 )(x 3 x 1 ). CMPSCI611: Verifying Polynomial Identities Lecture 13 Here is a problem that has a polynomial-time randomized solution, but so far no poly-time deterministic solution. Let F be any field and let Q(x 1,...,

More information

Hilbert Spaces. Hilbert space is a vector space with some extra structure. We start with formal (axiomatic) definition of a vector space.

Hilbert Spaces. Hilbert space is a vector space with some extra structure. We start with formal (axiomatic) definition of a vector space. Hilbert Spaces Hilbert space is a vector space with some extra structure. We start with formal (axiomatic) definition of a vector space. Vector Space. Vector space, ν, over the field of complex numbers,

More information

8.5 Taylor Polynomials and Taylor Series

8.5 Taylor Polynomials and Taylor Series 8.5. TAYLOR POLYNOMIALS AND TAYLOR SERIES 50 8.5 Taylor Polynomials and Taylor Series Motivating Questions In this section, we strive to understand the ideas generated by the following important questions:

More information

Lecture 20: Lagrange Interpolation and Neville s Algorithm. for I will pass through thee, saith the LORD. Amos 5:17

Lecture 20: Lagrange Interpolation and Neville s Algorithm. for I will pass through thee, saith the LORD. Amos 5:17 Lecture 20: Lagrange Interpolation and Neville s Algorithm for I will pass through thee, saith the LORD. Amos 5:17 1. Introduction Perhaps the easiest way to describe a shape is to select some points on

More information

An Intuitive Introduction to Motivic Homotopy Theory Vladimir Voevodsky

An Intuitive Introduction to Motivic Homotopy Theory Vladimir Voevodsky What follows is Vladimir Voevodsky s snapshot of his Fields Medal work on motivic homotopy, plus a little philosophy and from my point of view the main fun of doing mathematics Voevodsky (2002). Voevodsky

More information

NUMERICAL METHODS C. Carl Gustav Jacob Jacobi 10.1 GAUSSIAN ELIMINATION WITH PARTIAL PIVOTING

NUMERICAL METHODS C. Carl Gustav Jacob Jacobi 10.1 GAUSSIAN ELIMINATION WITH PARTIAL PIVOTING . Gaussian Elimination with Partial Pivoting. Iterative Methods for Solving Linear Systems.3 Power Method for Approximating Eigenvalues.4 Applications of Numerical Methods Carl Gustav Jacob Jacobi 8 4

More information

Modern Algebra Prof. Manindra Agrawal Department of Computer Science and Engineering Indian Institute of Technology, Kanpur

Modern Algebra Prof. Manindra Agrawal Department of Computer Science and Engineering Indian Institute of Technology, Kanpur Modern Algebra Prof. Manindra Agrawal Department of Computer Science and Engineering Indian Institute of Technology, Kanpur Lecture 02 Groups: Subgroups and homomorphism (Refer Slide Time: 00:13) We looked

More information

Math 411 Preliminaries

Math 411 Preliminaries Math 411 Preliminaries Provide a list of preliminary vocabulary and concepts Preliminary Basic Netwon's method, Taylor series expansion (for single and multiple variables), Eigenvalue, Eigenvector, Vector

More information

Linear Algebra (part 1) : Vector Spaces (by Evan Dummit, 2017, v. 1.07) 1.1 The Formal Denition of a Vector Space

Linear Algebra (part 1) : Vector Spaces (by Evan Dummit, 2017, v. 1.07) 1.1 The Formal Denition of a Vector Space Linear Algebra (part 1) : Vector Spaces (by Evan Dummit, 2017, v. 1.07) Contents 1 Vector Spaces 1 1.1 The Formal Denition of a Vector Space.................................. 1 1.2 Subspaces...................................................

More information