Numerical Methods-Lecture VIII: Interpolation

1 Numerical Methods, Lecture VIII: Interpolation (See Judd, Chapter 6). Trevor Gallen, Fall.

2 Motivation. Most solutions are functions. Many functions are (potentially) high-dimensional. We want a way to simplify. A cloud of points with the dots connected is one way. How should we connect the dots (and choose where they are)?

3 The most basic. The basics: take an evenly-spaced grid, connect the dots, and summarize the line linearly or quadratically.
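
The slides give no code, so here is a minimal "connect the dots" sketch using NumPy (the grid, target function, and node count are my own choices for illustration):

```python
import numpy as np

# Evenly-spaced grid and function values: "the dots"
f = lambda x: 1.0 / (1.0 + x**2)
grid = np.linspace(-5.0, 5.0, 11)        # 11 evenly-spaced nodes on [-5, 5]
vals = f(grid)

# Connect the dots linearly and evaluate off the grid
x_query = np.linspace(-5.0, 5.0, 1001)
f_hat = np.interp(x_query, grid, vals)   # piecewise-linear interpolant

print("max abs error:", np.max(np.abs(f_hat - f(x_query))))
```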

4 Polynomial Basis. We can summarize functions with different bases. The most common is the polynomial (monomial) basis:

f(x) = [a_0  a_1  a_2  ...  a_n] [1  x  x^2  ...  x^n]'

This is really just the Taylor-style polynomial

f(x) = a_0 + a_1 x + a_2 x^2 + ... + a_n x^n.

But the monomials are highly collinear, so this is a pretty inefficient way to encode a function.
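
A small sketch (my own, not from the slides) of why the monomial basis is an inefficient encoding: the matrix of monomials evaluated on a grid becomes badly conditioned very quickly, which is what the collinearity remark points at.

```python
import numpy as np

x = np.linspace(0.0, 1.0, 21)                    # evaluation grid on [0, 1]
for n in (3, 6, 9, 12):
    # Columns 1, x, x^2, ..., x^n: the monomial basis on the grid
    V = np.vander(x, n + 1, increasing=True)
    print(n, "condition number:", np.linalg.cond(V))
```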

5 Monomial Basis [figure]

6 Example: Regression. We have a cloud of n points (the data are Y), with locations on the grid summarized by X. Regress the cloud on a polynomial basis of dimension i, i < n:

\hat{\beta} = (X'X)^{-1} X'Y

Then your interpolation is always just

\hat{Y} = X \hat{\beta}.

This is okay for a lot of problems.
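
A minimal sketch of this regression approach (my own code; using the Runge function from the later slides as the target is my assumption, not something slide 6 specifies):

```python
import numpy as np

f = lambda x: 1.0 / (1.0 + x**2)
x = np.linspace(-5.0, 5.0, 41)           # the cloud of n points
Y = f(x)

deg = 5                                  # polynomial basis of dimension deg, deg < n
X = np.vander(x, deg + 1, increasing=True)

# beta_hat = (X'X)^{-1} X'Y, computed via least squares for numerical stability
beta_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)

Y_hat = X @ beta_hat                     # fitted values on the grid
print("max abs fit error on the grid:", np.max(np.abs(Y_hat - Y)))
```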

7-12 Example: Regression [figures]

13 Runge's Phenomenon: f(x) = 1/(1+x^2) [figure]
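
A quick numerical reproduction of Runge's phenomenon (my own sketch; the slide itself is a figure): as the degree of the interpolating polynomial on equally-spaced nodes grows, the maximum error gets worse, not better.

```python
import numpy as np
from numpy.polynomial import polynomial as P

f = lambda x: 1.0 / (1.0 + x**2)
x_fine = np.linspace(-5.0, 5.0, 2001)

for deg in (4, 8, 12, 16):
    nodes = np.linspace(-5.0, 5.0, deg + 1)      # equally-spaced interpolation nodes
    coefs = P.polyfit(nodes, f(nodes), deg)      # interpolating polynomial of that degree
    err = np.max(np.abs(P.polyval(x_fine, coefs) - f(x_fine)))
    print(deg, "max abs error:", err)
```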

14-26 Approx: f(x) = 1/(1+x^2), 1- through 13-degree approximations [figures]

27-39 Errors: f(x) = 1/(1+x^2), errors of the 1- through 13-degree approximations [figures]

40-52 Errors: f(x) = 1/(1+x^2), errors of the 1- through 13-degree approximations (second set of error plots) [figures]

53 Chebychev Polynomials. Less violent and less collinear polynomials are desirable. We also want something that avoids Runge's phenomenon if possible, and something that is pretty easy to integrate. It just so happens that Chebychev polynomials are all of these things. They are defined recursively:

T_0(x) = 1
T_1(x) = x
T_n(x) = 2x T_{n-1}(x) - T_{n-2}(x)

They are defined over [-1, 1], so interpolation on a general interval needs a transformation.
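
The recursion translated directly into code (a sketch; the function name is my own):

```python
import numpy as np

def cheb_T(n, x):
    """Evaluate the Chebychev polynomial T_n at x using
    T_0 = 1, T_1 = x, T_n = 2 x T_{n-1} - T_{n-2}."""
    x = np.asarray(x, dtype=float)
    if n == 0:
        return np.ones_like(x)
    T_prev, T_curr = np.ones_like(x), x
    for _ in range(2, n + 1):
        T_prev, T_curr = T_curr, 2.0 * x * T_curr - T_prev
    return T_curr

print(cheb_T(3, 0.5), 4 * 0.5**3 - 3 * 0.5)      # T_3(x) = 4x^3 - 3x
```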

54 Chebychev Basis [figure]

55 Properties. One can also show that

T_n(x) = cos(n arccos(x)) on [-1, 1] (and T_n(x) = cosh(n arccosh(x)) for x >= 1),

and that the roots x_j of T_n are

x_j = cos( (2j - 1)/(2n) * pi ),  j = 1, ..., n.

It just so happens that sampling at these Chebychev nodes helps minimize the maximum error and keeps it falling as n increases.
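
A quick check of the root formula (my own sketch) using NumPy's built-in Chebyshev evaluator: T_n should vanish at the Chebychev nodes.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

n = 7
j = np.arange(1, n + 1)
z = np.cos((2.0 * j - 1.0) / (2.0 * n) * np.pi)  # the Chebychev nodes on [-1, 1]

coef = np.zeros(n + 1)
coef[n] = 1.0                                    # coefficient vector selecting T_n
print(np.max(np.abs(C.chebval(z, coef))))        # should be ~1e-16
```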

56 Interpolation: 1-d. You have some function f(x) that you want to interpolate over [a, b]. Define a bridge between your interval and the interpolating polynomial's domain:

z = 2(x - a)/(b - a) - 1, or equivalently x = a + (b - a)(z + 1)/2.

Then x = a gives z = -1, x = b gives z = 1, and x = (a + b)/2 gives z = 0. Our goal will be, for an interpolating polynomial g(z) over [-1, 1] and f(x) over [a, b], that

g(2(x - a)/(b - a) - 1) = f(x) for all x.
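
The change of variables in code (a tiny sketch; the function names are my own):

```python
def to_z(x, a, b):
    """Map x in [a, b] to z in [-1, 1]."""
    return 2.0 * (x - a) / (b - a) - 1.0

def to_x(z, a, b):
    """Map z in [-1, 1] back to x in [a, b]."""
    return a + (b - a) * (z + 1.0) / 2.0

a, b = -5.0, 5.0
print(to_z(a, a, b), to_z(b, a, b), to_z((a + b) / 2.0, a, b))   # -1.0 1.0 0.0
```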

57 Interpolation: 1-d.
1. Pick a function f(x), degree n, and bounds a, b for interpolation.
2. Find the Chebychev nodes in z space:
   z_j = cos( (2j - 1)/(2n) * pi ),  j = 1, ..., n.
3. Convert the nodes into x space:
   x_j = a + (b - a)(z_j + 1)/2.
4. Calculate y_j = f(x_j) at each x_j. This is the value of g(z_j) at each of the corresponding nodes.
5. Given the y_j, calculate the Chebychev coefficients c_j as
   c_j = (2/n) \sum_{i=1}^{n} y_i T_j(z_i)  for j >= 1, with c_0 = (1/n) \sum_{i=1}^{n} y_i.
Your approximation is then given by
   \hat{f}(x) = \sum_{j=0}^{n-1} c_j T_j( 2(x - a)/(b - a) - 1 ) ≈ f(x).
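
The five steps as a sketch in NumPy (my own implementation; chebvander and chebval are real NumPy helpers, the other names are mine). Note the i = 0 coefficient uses 1/n rather than 2/n, as in step 5 above.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def cheb_approx_1d(f, a, b, n):
    """Degree-(n-1) Chebychev approximation of f on [a, b] built from n nodes."""
    j = np.arange(1, n + 1)
    z = np.cos((2.0 * j - 1.0) / (2.0 * n) * np.pi)   # step 2: nodes in z space
    x = a + (b - a) * (z + 1.0) / 2.0                 # step 3: nodes in x space
    y = f(x)                                          # step 4: y_j = f(x_j)
    T = C.chebvander(z, n - 1)                        # T[j, i] = T_i(z_j)
    c = (2.0 / n) * (T.T @ y)                         # step 5: c_i = (2/n) sum_j y_j T_i(z_j)
    c[0] /= 2.0                                       # ...except c_0, which gets 1/n
    def f_hat(xq):
        zq = 2.0 * (np.asarray(xq, dtype=float) - a) / (b - a) - 1.0
        return C.chebval(zq, c)                       # sum_i c_i T_i(z(x))
    return f_hat

f = lambda x: 1.0 / (1.0 + x**2)
f_hat = cheb_approx_1d(f, -5.0, 5.0, 13)
xq = np.linspace(-5.0, 5.0, 1001)
print("max abs error:", np.max(np.abs(f_hat(xq) - f(xq))))
```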

58-69 Fit: f(x) = 1/(1+x^2), 2- through 13-degree approximations [figures]

70-81 Errors: f(x) = 1/(1+x^2), errors of the 2- through 13-degree approximations [figures]

82 Interpolation: 2-d.
1. Pick a function f(x, y), degree n, and bounds a, b and c, d for interpolation.
2. Find the Chebychev nodes in z^[1] and z^[2] space:
   z^[1]_j = z^[2]_j = cos( (2j - 1)/(2n) * pi ),  j = 1, ..., n.
3. Convert the nodes into x and y space:
   x_j = a + (b - a)(z^[1]_j + 1)/2,  y_j = c + (d - c)(z^[2]_j + 1)/2.
4. Calculate \xi_{i,j} = f(x_i, y_j) at all pairs of nodes.
5. Given the \xi_{i,j}, calculate the Chebychev coefficients c_{i,j} as
   c_{i,j} = [ \sum_{k=1}^n \sum_{l=1}^n \xi_{k,l} T_i(z^[1]_k) T_j(z^[2]_l) ] / [ \sum_{k=1}^n T_i(z^[1]_k)^2 \sum_{l=1}^n T_j(z^[2]_l)^2 ].
6. The approximation is
   \hat{f}(x, y) ≈ \sum_{i=1}^n \sum_{j=1}^n c_{i,j} T_i( 2(x - a)/(b - a) - 1 ) T_j( 2(y - c)/(d - c) - 1 ).
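
A tensor-product sketch of these steps (my own code; it assumes the same number of nodes n in each dimension, and the test function at the end is my own choice). chebvander and chebval2d are real NumPy helpers.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def cheb_approx_2d(f, a, b, c_lo, d, n):
    """Tensor-product Chebychev approximation of f(x, y) on [a, b] x [c_lo, d]."""
    j = np.arange(1, n + 1)
    z = np.cos((2.0 * j - 1.0) / (2.0 * n) * np.pi)   # same nodes in both dimensions
    x = a + (b - a) * (z + 1.0) / 2.0
    y = c_lo + (d - c_lo) * (z + 1.0) / 2.0
    xi = f(x[:, None], y[None, :])                    # xi[k, l] = f(x_k, y_l)
    T = C.chebvander(z, n - 1)                        # T[k, i] = T_i(z_k)
    denom = np.outer((T**2).sum(axis=0), (T**2).sum(axis=0))
    coef = (T.T @ xi @ T) / denom                     # c_{i,j} from step 5
    def f_hat(xq, yq):
        zx = 2.0 * (np.asarray(xq, dtype=float) - a) / (b - a) - 1.0
        zy = 2.0 * (np.asarray(yq, dtype=float) - c_lo) / (d - c_lo) - 1.0
        return C.chebval2d(zx, zy, coef)              # sum_{i,j} c_{i,j} T_i(zx) T_j(zy)
    return f_hat

f = lambda x, y: 1.0 / (1.0 + x**2 + y**2)
f_hat = cheb_approx_2d(f, -2.0, 2.0, -2.0, 2.0, 10)
print(f_hat(0.3, -1.2), f(0.3, -1.2))
```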

83-87 Monom Fit: f(x) = 1/(1+x^2), 1- through 5-degree monomial fits [figures]

88-92 Monom Errors: f(x) = 1/(1+x^2), errors of the 1- through 5-degree monomial fits [figures]

93-102 Cheby Fit: f(x) = 1/(1+x^2), 1- through 10-degree Chebychev fits [figures]

103-112 Cheby Errors: f(x) = 1/(1+x^2), errors of the 1- through 10-degree Chebychev fits [figures]

113 Taste of some technical points. If we have some function f(x) and some class of approximating functions Ω, we want to find the function ω(x) ∈ Ω such that

ω* = arg min_{ω ∈ Ω} || f - ω ||_2.

Interpolating at Chebychev nodes allows us to obtain an upper bound for the error:

|| f(x) - \hat{f}_{n-1}(x) || <= (1 / (2^{n-1} n!)) max_{ξ ∈ [-1,1]} | f^{(n)}(ξ) |.

For more, see Judd, Chapter 6: there are many ways to approximate functions.
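
To close the loop, a small check (my own sketch) that the maximum error at Chebychev nodes keeps falling as n grows, while equispaced nodes fail for this function:

```python
import numpy as np
from numpy.polynomial import polynomial as P

f = lambda x: 1.0 / (1.0 + x**2)
a, b = -5.0, 5.0
x_fine = np.linspace(a, b, 2001)

for n in (5, 9, 13, 17):
    # Interpolation on n equispaced nodes
    xe = np.linspace(a, b, n)
    ce = P.polyfit(xe, f(xe), n - 1)
    err_e = np.max(np.abs(P.polyval(x_fine, ce) - f(x_fine)))
    # Interpolation on n Chebychev nodes mapped to [a, b]
    j = np.arange(1, n + 1)
    z = np.cos((2.0 * j - 1.0) / (2.0 * n) * np.pi)
    xc = a + (b - a) * (z + 1.0) / 2.0
    cc = P.polyfit(xc, f(xc), n - 1)
    err_c = np.max(np.abs(P.polyval(x_fine, cc) - f(x_fine)))
    print(n, "equispaced:", err_e, "Chebychev:", err_c)
```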
