Jim Lambers MAT 419/519 Summer Session 2011-12 Lecture 13 Notes


These notes correspond to Section 4.1 in the text.

Least Squares Fit

One of the most fundamental problems in science and engineering is data fitting: constructing a function that, in some sense, conforms to given data points. One type of data-fitting technique is interpolation. Interpolation techniques, of any kind, construct functions that agree exactly with the data. That is, given points $(x_1, y_1), (x_2, y_2), \ldots, (x_m, y_m)$, interpolation yields a function $f(x)$ such that $f(x_i) = y_i$ for $i = 1, 2, \ldots, m$.

However, fitting the data exactly may not be the best approach to describing the data with a function. High-degree polynomial interpolation can yield oscillatory functions that behave very differently from a smooth function from which the data is obtained. Also, it may be pointless to try to fit data exactly, for if it is obtained by previous measurements or other computations, it may be erroneous. Therefore, we consider another notion of what constitutes a "best fit" of given data by a function.

One alternative approach to data fitting is to solve the minimax problem, which is the problem of finding a function $f(x)$ of a given form for which

$$\max_{1 \le i \le m} |f(x_i) - y_i|$$

is minimized. However, this is a very difficult problem to solve.

Another approach is to minimize the total absolute deviation of $f(x)$ from the data. That is, we seek a function $f(x)$ of a given form for which

$$\sum_{i=1}^m |f(x_i) - y_i|$$

is minimized. However, we cannot apply standard minimization techniques to this function, because, like the absolute value function that it employs, it is not differentiable.

This defect is overcome by considering the problem of finding $f(x)$ of a given form for which

$$\sum_{i=1}^m [f(x_i) - y_i]^2$$

is minimized. This is known as the least squares problem. We will first show how this problem is solved for the case where $f(x)$ is a linear function of the form $f(x) = a_1 x + a_0$, and then generalize this solution to other types of functions.
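To make the three criteria concrete, the following sketch (with made-up data points, not taken from the text's tables) evaluates each of them for a candidate line $f(x) = a_1 x + a_0$; the least-squares criterion is the one pursued below.

```python
import numpy as np

# Hypothetical data points (not from the text's tables).
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 1.9, 3.2, 3.8, 5.1])

def fitting_criteria(a0, a1):
    """Return (minimax, total absolute deviation, sum of squares)
    for the candidate line f(x) = a1*x + a0."""
    r = a1 * x + a0 - y  # deviations f(x_i) - y_i
    return np.max(np.abs(r)), np.sum(np.abs(r)), np.sum(r**2)

print(fitting_criteria(1.0, 1.0))
```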

When $f(x)$ is linear, the least squares problem is the problem of finding constants $a_0$ and $a_1$ such that the function

$$E(a_0, a_1) = \sum_{i=1}^m (a_1 x_i + a_0 - y_i)^2$$

is minimized. In order to minimize this function of $a_0$ and $a_1$, we must compute its partial derivatives with respect to $a_0$ and $a_1$. This yields

$$\frac{\partial E}{\partial a_0} = \sum_{i=1}^m 2(a_1 x_i + a_0 - y_i), \qquad \frac{\partial E}{\partial a_1} = \sum_{i=1}^m 2(a_1 x_i + a_0 - y_i) x_i.$$

At a minimum, both of these partial derivatives must be equal to zero. This yields the system of linear equations

$$m a_0 + \left( \sum_{i=1}^m x_i \right) a_1 = \sum_{i=1}^m y_i, \qquad \left( \sum_{i=1}^m x_i \right) a_0 + \left( \sum_{i=1}^m x_i^2 \right) a_1 = \sum_{i=1}^m x_i y_i.$$

These equations are called the normal equations. Using the formula for the inverse of a $2 \times 2$ matrix,

$$\begin{bmatrix} a & b \\ c & d \end{bmatrix}^{-1} = \frac{1}{ad - bc} \begin{bmatrix} d & -b \\ -c & a \end{bmatrix},$$

we obtain the solutions

$$a_0 = \frac{\left( \sum_{i=1}^m x_i^2 \right) \left( \sum_{i=1}^m y_i \right) - \left( \sum_{i=1}^m x_i \right) \left( \sum_{i=1}^m x_i y_i \right)}{m \sum_{i=1}^m x_i^2 - \left( \sum_{i=1}^m x_i \right)^2}, \qquad a_1 = \frac{m \sum_{i=1}^m x_i y_i - \left( \sum_{i=1}^m x_i \right) \left( \sum_{i=1}^m y_i \right)}{m \sum_{i=1}^m x_i^2 - \left( \sum_{i=1}^m x_i \right)^2}.$$

Example We wish to find the linear function $y = a_1 x + a_0$ that best approximates the data shown in Table 1, in the least-squares sense. Substituting the summations $\sum_{i=1}^{10} x_i$, $\sum_{i=1}^{10} x_i^2$, $\sum_{i=1}^{10} y_i$ and $\sum_{i=1}^{10} x_i y_i$ computed from the data into the formulas above, we obtain the coefficients $a_0$ and $a_1$.
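The closed-form solution above translates directly into code; a minimal sketch, using hypothetical data since the values in Table 1 are not reproduced here:

```python
import numpy as np

# Hypothetical data standing in for Table 1.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 1.9, 3.2, 3.8, 5.1])
m = len(x)

# Sums appearing in the normal equations.
sx, sx2 = x.sum(), (x**2).sum()
sy, sxy = y.sum(), (x * y).sum()

# Closed-form solution derived from the 2x2 inverse formula.
denom = m * sx2 - sx**2
a0 = (sx2 * sy - sx * sxy) / denom
a1 = (m * sxy - sx * sy) / denom
print(f"least-squares line: y = {a1:.4f} x + {a0:.4f}")
```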

Table 1: Data points $(x_i, y_i)$, for $i = 1, 2, \ldots, 10$, to be fit by a linear function

We conclude that the linear function that best fits this data in the least-squares sense is $y = a_1 x + a_0$, with the coefficients computed above. The data, and this function, are shown in Figure 1.

Figure 1: Data points $(x_i, y_i)$ (circles) and least-squares line (solid line)

It is interesting to note that if we define the $m \times 2$ matrix $A$, the 2-vector $\mathbf{a}$, and the $m$-vector $\mathbf{y}$ by

$$A = \begin{bmatrix} 1 & x_1 \\ 1 & x_2 \\ \vdots & \vdots \\ 1 & x_m \end{bmatrix}, \qquad \mathbf{a} = \begin{bmatrix} a_0 \\ a_1 \end{bmatrix}, \qquad \mathbf{y} = \begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_m \end{bmatrix},$$

then $\mathbf{a}$ is the solution to the system of equations

$$A^T A \mathbf{a} = A^T \mathbf{y}.$$

These equations are the normal equations defined earlier, written in matrix-vector form. They arise from the problem of finding the vector $\mathbf{a}$ such that $\|A\mathbf{a} - \mathbf{y}\|$ is minimized, where, for any vector $\mathbf{u}$, $\|\mathbf{u}\|$ is the magnitude, or length, of $\mathbf{u}$. This magnitude is equivalent to the square root of the expression we originally intended to minimize,

$$\sum_{i=1}^m (a_1 x_i + a_0 - y_i)^2.$$
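In matrix form the computation is a one-liner; a sketch (again with hypothetical data) that solves the normal equations and cross-checks against NumPy's built-in least-squares solver:

```python
import numpy as np

# Hypothetical data again.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 1.9, 3.2, 3.8, 5.1])

# Build the m x 2 matrix A whose rows are [1, x_i].
A = np.column_stack([np.ones_like(x), x])

# Solve the normal equations A^T A a = A^T y.
a = np.linalg.solve(A.T @ A, A.T @ y)
print("normal equations:", a)

# np.linalg.lstsq minimizes ||A a - y|| directly and should agree.
a_lstsq, *_ = np.linalg.lstsq(A, y, rcond=None)
print("lstsq:           ", a_lstsq)
```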

We will see, however, that the normal equations also characterize the solution $\mathbf{a}$, an $n$-vector, to the more general linear least squares problem of minimizing $\|A\mathbf{a} - \mathbf{y}\|$ for any matrix $A$ that is $m \times n$, where $m \ge n$, and whose columns are linearly independent.

We now consider the problem of finding a polynomial of degree $n$ that gives the best least-squares fit. As before, let $(x_1, y_1), (x_2, y_2), \ldots, (x_m, y_m)$ be given data points that need to be approximated by a polynomial of degree $n$. We assume that $n < m - 1$, for otherwise, we can use polynomial interpolation to fit the points exactly. Let the least-squares polynomial have the form

$$p_n(x) = \sum_{j=0}^n a_j x^j.$$

Our goal is to minimize the sum of squares of the deviations in $p_n(x)$ from each $y$-value,

$$E(\mathbf{a}) = \sum_{i=1}^m [p_n(x_i) - y_i]^2 = \sum_{i=1}^m \left( \sum_{j=0}^n a_j x_i^j - y_i \right)^2,$$

where $\mathbf{a}$ is a column vector of the unknown coefficients of $p_n(x)$,

$$\mathbf{a} = \begin{bmatrix} a_0 \\ a_1 \\ \vdots \\ a_n \end{bmatrix}.$$

Differentiating this function with respect to each $a_k$ yields

$$\frac{\partial E}{\partial a_k} = \sum_{i=1}^m 2 \left( \sum_{j=0}^n a_j x_i^j - y_i \right) x_i^k, \quad k = 0, 1, \ldots, n.$$

Setting each of these partial derivatives equal to zero yields the system of equations

$$\sum_{j=0}^n \left( \sum_{i=1}^m x_i^{j+k} \right) a_j = \sum_{i=1}^m x_i^k y_i, \quad k = 0, 1, \ldots, n.$$

These are the normal equations. They are a generalization of the normal equations previously defined for the linear case, where $n = 1$. Solving this system yields the coefficients $\{a_j\}_{j=0}^n$ of the least-squares polynomial $p_n(x)$.

As in the linear case, the normal equations can be written in matrix-vector form

$$A^T A \mathbf{a} = A^T \mathbf{y},$$

where

$$A = \begin{bmatrix} 1 & x_1 & x_1^2 & \cdots & x_1^n \\ 1 & x_2 & x_2^2 & \cdots & x_2^n \\ \vdots & \vdots & \vdots & & \vdots \\ 1 & x_m & x_m^2 & \cdots & x_m^n \end{bmatrix}, \qquad \mathbf{a} = \begin{bmatrix} a_0 \\ a_1 \\ \vdots \\ a_n \end{bmatrix}, \qquad \mathbf{y} = \begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_m \end{bmatrix}.$$

The normal equations can be used to compute the coefficients of any linear combination of functions $\{\varphi_j(x)\}_{j=0}^n$ that best fits data in the least-squares sense, provided that these functions are linearly independent. In this general case, the entries of the matrix $A$ are given by $a_{ij} = \varphi_j(x_i)$, for $i = 1, 2, \ldots, m$ and $j = 0, 1, \ldots, n$.
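A sketch of this general construction, with a hypothetical data set and the monomial basis $\varphi_j(x) = x^j$ (so it reproduces the polynomial fit); each column of $A$ is one basis function evaluated at all the $x_i$:

```python
import numpy as np

# Hypothetical data; the text's tables are not reproduced here.
x = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0])
y = np.array([1.0, 1.4, 2.1, 3.2, 4.9, 7.1, 9.8])

# Basis functions phi_0, ..., phi_n; monomials give a quadratic fit.
basis = [lambda t: np.ones_like(t), lambda t: t, lambda t: t**2]

# Entries a_ij = phi_j(x_i): one column per basis function.
A = np.column_stack([phi(x) for phi in basis])

# Solve the normal equations A^T A a = A^T y.
a = np.linalg.solve(A.T @ A, A.T @ y)
print("coefficients a_0..a_n:", a)

# Cross-check: np.polyfit returns coefficients in decreasing order.
print("polyfit check:", np.polyfit(x, y, 2)[::-1])
```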

Example We wish to find the quadratic function $y = a_2 x^2 + a_1 x + a_0$ that best approximates the data shown in Table 2, in the least-squares sense.

Table 2: Data points $(x_i, y_i)$, for $i = 1, 2, \ldots, 10$, to be fit by a quadratic function

By defining

$$A = \begin{bmatrix} 1 & x_1 & x_1^2 \\ 1 & x_2 & x_2^2 \\ \vdots & \vdots & \vdots \\ 1 & x_{10} & x_{10}^2 \end{bmatrix}, \qquad \mathbf{a} = \begin{bmatrix} a_0 \\ a_1 \\ a_2 \end{bmatrix}, \qquad \mathbf{y} = \begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_{10} \end{bmatrix},$$

and solving the normal equations

$$A^T A \mathbf{a} = A^T \mathbf{y},$$

we obtain the coefficients $a_0$, $a_1$ and $a_2$, and conclude that the quadratic function $y = a_2 x^2 + a_1 x + a_0$ with these coefficients best fits the data in the least-squares sense. The data, and this function, are shown in Figure 2.

Figure 2: Data points $(x_i, y_i)$ (circles) and quadratic least-squares fit (solid curve)

Least-squares fitting can also be used to fit data with functions that are not linear combinations of functions such as polynomials. Suppose we believe that given data points can best be matched to an exponential function of the form $y = b e^{ax}$, where the constants $a$ and $b$ are unknown. Taking the natural logarithm of both sides of this equation yields

$$\ln y = \ln b + ax.$$

If we define $z = \ln y$ and $c = \ln b$, then the problem of fitting the original data points $\{(x_i, y_i)\}_{i=1}^m$ with an exponential function is transformed into the problem of fitting the data points $\{(x_i, z_i)\}_{i=1}^m$ with a linear function of the form $c + ax$, for unknown constants $a$ and $c$.
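A sketch of this transformation, using hypothetical data that roughly follows an exponential trend; we fit a line to $(x_i, \ln y_i)$ and then recover $b = e^c$:

```python
import numpy as np

# Hypothetical data roughly following y = b * exp(a*x).
x = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
y = np.array([1.5, 2.4, 4.1, 6.6, 11.0])

# Transform: z = ln y, so z is (approximately) c + a*x with c = ln b.
z = np.log(y)
A = np.column_stack([np.ones_like(x), x])
c, a = np.linalg.solve(A.T @ A, A.T @ z)
b = np.exp(c)
print(f"fit: y = {b:.4f} * exp({a:.4f} x)")
```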

Similarly, suppose the given data is believed to approximately conform to a function of the form $y = b x^a$, where the constants $a$ and $b$ are unknown. Taking the natural logarithm of both sides of this equation yields

$$\ln y = \ln b + a \ln x.$$

If we define $z = \ln y$, $c = \ln b$ and $w = \ln x$, then the problem of fitting the original data points $\{(x_i, y_i)\}_{i=1}^m$ with a constant times a power of $x$ is transformed into the problem of fitting the data points $\{(w_i, z_i)\}_{i=1}^m$ with a linear function of the form $c + aw$, for unknown constants $a$ and $c$.

Example We wish to find the exponential function $y = b e^{ax}$ that best approximates the data shown in Table 3, in the least-squares sense. By defining

$$A = \begin{bmatrix} 1 & x_1 \\ 1 & x_2 \\ \vdots & \vdots \\ 1 & x_5 \end{bmatrix}, \qquad \mathbf{c} = \begin{bmatrix} c \\ a \end{bmatrix}, \qquad \mathbf{z} = \begin{bmatrix} z_1 \\ z_2 \\ \vdots \\ z_5 \end{bmatrix},$$

where $c = \ln b$ and $z_i = \ln y_i$ for $i = 1, 2, \ldots, 5$, and solving the normal equations

$$A^T A \mathbf{c} = A^T \mathbf{z},$$

we obtain the coefficients $a$ and $b = e^c$.
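The power-law transformation derived above works the same way; a minimal sketch with hypothetical data (note that it requires $x_i > 0$ and $y_i > 0$):

```python
import numpy as np

# Hypothetical data roughly following y = b * x**a.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 5.9, 10.8, 16.5, 23.2])

# Transform: z = ln y, w = ln x, so z is (approximately) c + a*w.
w, z = np.log(x), np.log(y)
A = np.column_stack([np.ones_like(w), w])
c, a = np.linalg.solve(A.T @ A, A.T @ z)
b = np.exp(c)
print(f"fit: y = {b:.4f} * x**{a:.4f}")
```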

Table 3: Data points $(x_i, y_i)$, for $i = 1, 2, \ldots, 5$, to be fit by an exponential function

We conclude that the exponential function that best fits this data in the least-squares sense is $y = b e^{ax}$, with these values of $a$ and $b$. The data, and this function, are shown in Figure 3.

Figure 3: Data points $(x_i, y_i)$ (circles) and exponential least-squares fit (solid curve)

It can be seen from the preceding discussion and examples that the normal equations can be used to solve any problem that requires finding the vector $\mathbf{x} \in \mathbb{R}^n$ that minimizes $\|\mathbf{b} - A\mathbf{x}\|$, where $\mathbf{b} \in \mathbb{R}^m$, $m \ge n$, and $A$ is an $m \times n$ matrix with linearly independent columns, regardless of the interpretation of these columns. To see this, we define the function

$$\varphi(\mathbf{x}) = \|\mathbf{b} - A\mathbf{x}\|^2, \quad \mathbf{x} \in \mathbb{R}^n.$$

Then, it can be shown through differentiation that

$$\nabla \varphi(\mathbf{x}) = 2(A^T A \mathbf{x} - A^T \mathbf{b}), \qquad H_\varphi(\mathbf{x}) = 2 A^T A.$$

If $\mathbf{x} \ne \mathbf{0}$, then $A\mathbf{x} \ne \mathbf{0}$, because $A$ has linearly independent columns. It follows that

$$\mathbf{x}^T A^T A \mathbf{x} = (A\mathbf{x})^T (A\mathbf{x}) = \|A\mathbf{x}\|^2 > 0,$$

so $H_\varphi(\mathbf{x})$ is positive definite on $\mathbb{R}^n$. This leads to the following theorem.

Theorem Let $A$ be an $m \times n$ matrix with linearly independent columns, and let $\mathbf{b} \in \mathbb{R}^m$. Then the vector $\mathbf{x}$ defined by

$$\mathbf{x} = (A^T A)^{-1} A^T \mathbf{b},$$

which solves the normal equations $A^T A \mathbf{x} = A^T \mathbf{b}$, is the strict global minimizer of $\|\mathbf{b} - A\mathbf{x}\|$, $\mathbf{x} \in \mathbb{R}^n$.
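A quick numerical sanity check of the theorem (a sketch with a random matrix, whose columns are linearly independent with probability 1): the normal-equation solution matches NumPy's pseudo-inverse, and small perturbations of it never reduce the residual.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 3))  # random 8x3 matrix; its columns are
b = rng.standard_normal(8)       # linearly independent with probability 1

# Solution via the normal equations.
x = np.linalg.solve(A.T @ A, A.T @ b)

# np.linalg.pinv computes the pseudo-inverse (via the SVD) and agrees.
print(np.allclose(x, np.linalg.pinv(A) @ b))  # True

# Perturbing x in any direction cannot decrease ||b - A x||.
r0 = np.linalg.norm(b - A @ x)
for _ in range(5):
    d = 1e-3 * rng.standard_normal(3)
    assert np.linalg.norm(b - A @ (x + d)) >= r0
```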

The matrix $A^+ = (A^T A)^{-1} A^T$ is called the pseudo-inverse, or generalized inverse, of $A$. When $A$ is a square, invertible matrix, then $A^+ = A^{-1}$. Otherwise, $A^+$ is the matrix that, as closely as possible, serves as an inverse of $A$. It should be noted that the condition that $A$ has linearly independent columns is essential, so that $A^T A$ is invertible.

Exercises

1. Chapter 4, Exercise 1
2. Chapter 4, Exercise 4
3. Chapter 4, Exercise 7
4. Chapter 4, Exercise 10
