Physics 331 Introduction to Numerical Techniques in Physics


Instructor: Joaquín Drut. Lecture 12.

Last time: Polynomial interpolation: basics; Lagrange interpolation.

Today: Quick review. Formal properties. Polynomial interpolation: Splines! Practice

Polynomial interpolation: basics. Given n points (x_i, y_i), they uniquely determine a polynomial of degree m = n-1. Why use a polynomial? What are the advantages? Any disadvantages you can think of? Advice: always keep in mind the oscillating behavior of polynomials. Resist their nice properties unless you know for sure that the underlying function is a polynomial! Try to avoid high orders (use your own judgment to decide). Last but not least: remember that polynomials can be numerically unstable beasts!

Some formal properties of polynomials. Polynomials of degree less than or equal to n form an (n+1)-dimensional vector space: you can add and subtract them, multiply by a constant, etc. A possible basis is the set of monomials {1, x, x^2, ..., x^n}; the components of the vectors are the coefficients {a_k}, such that

P(x) = a_0 + a_1 x + a_2 x^2 + ... + a_n x^n
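As a quick sanity check of this vector-space view, here is a minimal sketch (using NumPy's `Polynomial` class; the sample coefficients are made up for illustration) showing that adding polynomials, or scaling one by a constant, acts coefficient-wise on the vectors in the monomial basis:

```python
import numpy as np
from numpy.polynomial import Polynomial

# A polynomial is represented by its coefficient vector (a_0, a_1, ..., a_n)
p = Polynomial([1.0, 2.0, 3.0])    # 1 + 2x + 3x^2
q = Polynomial([5.0, 0.0, -1.0])   # 5 - x^2

# Vector addition and scalar multiplication act on the coefficients
s = p + q
t = 2.0 * p
print(s.coef)  # [6. 2. 2.]
print(t.coef)  # [2. 4. 6.]
```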

Wait, a vector space? What is that? A set of objects we call vectors, e.g. n-tuples (a_1, a_2, ..., a_n) with real entries, together with a set of objects we call scalars, e.g. the set of real numbers, such that:
- The scalars form a field: addition and multiplication are defined in the usual way (associativity, commutativity, distributivity), those operations are closed in the set, and inverses and the null elements 0 and 1 exist and are in the set as well.
- A set of operations between vectors and scalars exists: vector-vector addition (+) is associative and commutative, and there is a null vector 0; scalar-vector multiplication is closed in the set and obeys associative, commutative, and distributive properties.
Note: no vector-vector multiplication of any kind is defined at this stage!

Wait, a vector space? What is that? Examples:
- Vectors in the plane, with reals (or rationals) as scalars.
- Complex numbers (as vectors), with reals (or rationals) as scalars.
- n-tuples with real entries, with reals (or rationals) as scalars.
- Polynomials with real coefficients, with reals (or rationals) as scalars.

Wait, what happened to the dot product between vectors? The scalar (or dot, or inner) product is an operation mapping two vectors v, w onto a scalar, which we denote (v, w) or sometimes v · w. The mapping is such that:
- (v, w) = (w, v)*  (conjugate symmetry)
- (x v, w) = x (v, w) for any scalar x, and (v + u, w) = (v, w) + (u, w)  (linearity)
- (v, v) ≥ 0, with (v, v) = 0 only if v = 0  (positive definiteness)
A vector space armed with an inner product is called an inner product space.
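These axioms can be checked numerically. The sketch below is ours, not the lecture's: the helper `inner` implements the slide's convention (linear in the first argument) on complex n-tuples, with random test vectors:

```python
import numpy as np

# Inner product on C^n, linear in the FIRST argument: (v, w) = sum_i v_i * conj(w_i)
def inner(v, w):
    return np.sum(v * np.conj(w))

rng = np.random.default_rng(0)
v = rng.standard_normal(4) + 1j * rng.standard_normal(4)
w = rng.standard_normal(4) + 1j * rng.standard_normal(4)
x = 2.0 - 0.5j  # an arbitrary scalar

print(np.isclose(inner(v, w), np.conj(inner(w, v))))   # conjugate symmetry: True
print(np.isclose(inner(x * v, w), x * inner(v, w)))    # linearity in first slot: True
print(inner(v, v).real > 0)                            # positive definiteness: True
```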

Some formal properties of polynomials. To extend the interpretation of polynomials as vectors, one defines spaces of functions, which are infinite-dimensional vector spaces (specifically, Hilbert spaces). Imagine interpreting a function as a continuous vector:

v → [v]_i = v_i  (a vector and its discrete coordinates)
f → [f]_x = f(x)  (a function and its continuous coordinates)

Instead of v · w = Σ_i v_i w_i we will have f · g = ∫ dx f(x) g(x).

Some formal properties of polynomials. Polynomials can only be square integrable if we define them on a bounded subset of the real line, e.g. [-1, 1]. (Why?) Then the dot-product definition

f · g = ∫ dx f(x) g(x)

makes sense. A more general definition of the dot product for such function spaces is possible:

f · g = ∫ dx W(x) f(x) g(x)

where we have used a positive definite weight function W(x). If we orthonormalize a set of polynomials according to the above dot product, we obtain the various well-known families of orthogonal polynomials. There is a well-known algorithm for orthonormalization: Gram-Schmidt. We will return to this algorithm in a few lectures.
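As a preview of Gram-Schmidt, the following sketch (ours, using NumPy's `Polynomial` class) orthonormalizes the monomials {1, x, x^2, x^3} with respect to the W(x) = 1 dot product on [-1, 1]; the integral is computed exactly via the antiderivative, and the result is, up to normalization, the first few Legendre polynomials:

```python
import numpy as np
from numpy.polynomial import Polynomial

# Inner product f.g = integral of f(x) g(x) over [-1, 1], exact for polynomials
def dot(f, g):
    F = (f * g).integ()
    return F(1.0) - F(-1.0)

# Gram-Schmidt on the monomials {1, x, x^2, x^3}
ortho = []
for k in range(4):
    p = Polynomial.basis(k)               # the monomial x^k
    for q in ortho:
        p = p - dot(p, q) * q             # remove components along earlier vectors
    ortho.append(p / np.sqrt(dot(p, p)))  # normalize

# The Gram matrix of the result is (numerically) the identity
G = np.array([[dot(p, q) for q in ortho] for p in ortho])
print(np.allclose(G, np.eye(4)))  # True
```

The third element comes out proportional to 3x^2 - 1, i.e. to the Legendre polynomial P_2, as expected for weight W(x) = 1.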

Polynomial interpolation: easiest way. Given n points (x_i, y_i), they uniquely determine a polynomial of degree m = n-1:

f(x) = a_m x^m + a_{m-1} x^{m-1} + ... + a_0

How do we find the n = m+1 coefficients based on our n datapoints (x_1, y_1), (x_2, y_2), (x_3, y_3), ..., (x_n, y_n)?

a_m x_1^m + a_{m-1} x_1^{m-1} + ... + a_0 = y_1
a_m x_2^m + a_{m-1} x_2^{m-1} + ... + a_0 = y_2
...
a_m x_n^m + a_{m-1} x_n^{m-1} + ... + a_0 = y_n

A nice linear set of equations for the coefficients (the Vandermonde system), but generally a bad idea due to ill-conditioning: the determinant is usually close to zero.
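A minimal NumPy sketch of this approach (data values are made up for illustration): build the Vandermonde matrix, solve for the coefficients, and note how quickly the conditioning degrades as n grows:

```python
import numpy as np

# Monomial-basis interpolation: solve the Vandermonde system V a = y
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.0, 2.0, 0.0, 5.0])

V = np.vander(x, increasing=True)  # columns 1, x, x^2, x^3
a = np.linalg.solve(V, y)          # coefficients a_0 ... a_m

print(np.allclose(V @ a, y))       # the interpolant reproduces the data: True

# Ill-conditioning: the condition number explodes with the number of points
print(np.linalg.cond(np.vander(np.linspace(0.0, 1.0, 15))))  # huge
```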

Polynomial interpolation: Lagrange version. Given n points (x_i, y_i), they uniquely determine a polynomial of degree m = n-1:

f(x) = a_m x^m + a_{m-1} x^{m-1} + ... + a_0

How do we find the n = m+1 coefficients based on our n datapoints (x_1, y_1), (x_2, y_2), (x_3, y_3), ..., (x_n, y_n)?

n = m+1 = 2 case, by definition: f(x) = y = a_2 (x - x_1) + a_1 (x - x_2)
n = m+1 = 3 case, by definition: f(x) = y = a_1 (x - x_2)(x - x_3) + a_2 (x - x_1)(x - x_3) + a_3 (x - x_1)(x - x_2)

What about in general?

Polynomial interpolation: Lagrange version. General expression, in terms of the Lagrange functions L_i:

f(x) = Σ_{i=1}^{n} y_i L_i(x),  with  L_i(x) = Π_{j=1, j≠i}^{n} (x - x_j) / (x_i - x_j)

Keep in mind: these polynomials are used by evaluating the Lagrange functions at the point of interest x, every time a new point is needed! This is much more costly than computing coefficients once and for all and using them later for any new point we may want. However, it can be expected to be numerically more stable: no matrix inversion needed!
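The general expression translates directly into code. This is a straightforward (not optimized) sketch; the sample data are three points on x^2 + x + 1, so the degree-2 interpolant must reproduce that function exactly:

```python
import numpy as np

# Direct evaluation of the Lagrange form: f(x) = sum_i y_i L_i(x),
# with L_i(x) = prod_{j != i} (x - x_j) / (x_i - x_j)
def lagrange_eval(xd, yd, x):
    total = 0.0
    n = len(xd)
    for i in range(n):
        Li = 1.0
        for j in range(n):
            if j != i:
                Li *= (x - xd[j]) / (xd[i] - xd[j])
        total += yd[i] * Li
    return total

xd = np.array([0.0, 1.0, 2.0])
yd = np.array([1.0, 3.0, 7.0])     # samples of x^2 + x + 1
print(lagrange_eval(xd, yd, 1.5))  # 4.75 = 1.5^2 + 1.5 + 1
```

Note that every call re-evaluates all the L_i from scratch, which is the extra cost mentioned above.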

Polynomial interpolation. Polynomials have great continuity and differentiability properties. However, if we have many datapoints (and you don't need that many to see this), a polynomial interpolation will quickly develop unwanted oscillations. The final interpolating form will depend strongly on the support points! What can we do about this?
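The oscillations just described can be seen in Runge's classic example, 1/(1 + 25 x^2) on [-1, 1] with equispaced points. A quick sketch (using `np.polyfit` with degree n-1, which interpolates n points): the maximum error grows as points are added instead of shrinking.

```python
import numpy as np

runge = lambda x: 1.0 / (1.0 + 25.0 * x**2)
xf = np.linspace(-1.0, 1.0, 1001)  # fine grid to measure the error on

def max_error(n):
    xd = np.linspace(-1.0, 1.0, n)            # n equispaced support points
    coeffs = np.polyfit(xd, runge(xd), n - 1)  # degree n-1 interpolant
    return np.max(np.abs(np.polyval(coeffs, xf) - runge(xf)))

print(max_error(5), max_error(11), max_error(15))  # error grows with n
```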

Splines!

Polynomial interpolation: linear splines. Given (x_1, y_1), (x_2, y_2), (x_3, y_3), ..., (x_n, y_n), for each pair of points use the linear Lagrange form. E.g. for points in the first interval:

f(x) = y = a_2 (x - x_1) + a_1 (x - x_2) = y_2 (x - x_1)/(x_2 - x_1) + y_1 (x - x_2)/(x_1 - x_2)

In general,

f_i(x) = y_{i+1} (x - x_i)/(x_{i+1} - x_i) + y_i (x - x_{i+1})/(x_i - x_{i+1}),  valid in [x_i, x_{i+1}]
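The general formula is easy to vectorize. A sketch (with made-up data), cross-checked against NumPy's built-in piecewise-linear interpolator `np.interp`:

```python
import numpy as np

# Piecewise-linear spline: the two-point Lagrange form on each interval
def linear_spline(xd, yd, x):
    # index i of the interval [x_i, x_{i+1}] containing each query point
    i = np.clip(np.searchsorted(xd, x) - 1, 0, len(xd) - 2)
    return (yd[i + 1] * (x - xd[i]) / (xd[i + 1] - xd[i])
            + yd[i] * (x - xd[i + 1]) / (xd[i] - xd[i + 1]))

xd = np.array([0.0, 1.0, 3.0, 4.0])
yd = np.array([0.0, 2.0, 1.0, 5.0])
xq = np.linspace(0.0, 4.0, 41)
print(np.allclose(linear_spline(xd, yd, xq), np.interp(xq, xd, yd)))  # True
```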

Polynomial interpolation: quadratic splines. Given (x_1, y_1), (x_2, y_2), (x_3, y_3), ..., (x_n, y_n), in each interval we propose a quadratic form:

f_i(x) = a_i x^2 + b_i x + c_i

But then we have n datapoints, which determine 3 coefficients in each of the n-1 intervals, i.e. 3(n-1) coefficients total. How do we proceed? Demand that f_i(x_i) = y_i and f_i(x_{i+1}) = y_{i+1}, i.e. the function should pass through the data. That's 2 conditions per interval, i.e. 2(n-1) conditions. We have n-1 left! At the interior points (i.e. all except i=1 and i=n), match the first derivative from both sides:

f'_i(x_{i+1}) = f'_{i+1}(x_{i+1})  ⟹  2 a_i x_{i+1} + b_i = 2 a_{i+1} x_{i+1} + b_{i+1}

That's 1 condition per interior point, i.e. n-2 conditions. We have 1 left!

Polynomial interpolation: quadratic splines. For the last condition, use a linear interpolation in the first interval instead of a quadratic form. That gives a total of 3(n-1) conditions for our 3(n-1) coefficients. Not only is the resulting interpolation continuous, its derivative is continuous too! But maybe we want more than that...
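The counting above can be turned directly into a linear system. A sketch (function name and sample data are ours): assemble the 3(n-1) x 3(n-1) system with unknowns (a_i, b_i, c_i) per interval and solve it with NumPy.

```python
import numpy as np

def quadratic_spline_coeffs(xd, yd):
    n = len(xd)
    m = n - 1                     # number of intervals
    A = np.zeros((3 * m, 3 * m))
    r = np.zeros(3 * m)
    row = 0
    # Interpolation: f_i(x_i) = y_i and f_i(x_{i+1}) = y_{i+1}, 2(n-1) conditions
    for i in range(m):
        for x, y in ((xd[i], yd[i]), (xd[i + 1], yd[i + 1])):
            A[row, 3 * i:3 * i + 3] = [x**2, x, 1.0]
            r[row] = y
            row += 1
    # Derivative matching at interior points: 2 a_i x + b_i = 2 a_{i+1} x + b_{i+1}
    for i in range(m - 1):
        x = xd[i + 1]
        A[row, 3 * i:3 * i + 2] = [2.0 * x, 1.0]
        A[row, 3 * (i + 1):3 * (i + 1) + 2] = [-2.0 * x, -1.0]
        row += 1
    A[row, 0] = 1.0               # final condition: a_1 = 0, first piece is linear
    return np.linalg.solve(A, r).reshape(m, 3)  # rows of (a_i, b_i, c_i)

xd = np.array([0.0, 1.0, 2.0, 3.0])
yd = np.array([0.0, 1.0, 0.0, 2.0])
coeffs = quadratic_spline_coeffs(xd, yd)
print(np.isclose(coeffs[0, 0], 0.0))  # first interval is indeed linear: True
```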

Polynomial interpolation: cubic splines. Given (x_1, y_1), (x_2, y_2), (x_3, y_3), ..., (x_n, y_n), in each interval we propose a cubic form:

f_i(x) = a_i x^3 + b_i x^2 + c_i x + d_i

That's 4 coefficients in each of the n-1 intervals, i.e. 4(n-1) coefficients total. How do we proceed? Match the function: 2(n-1) conditions. At the interior points, match the first derivative from both sides: n-2 conditions. At the interior points, match the second derivative from both sides: n-2 conditions. That's a total of 4n-6 conditions; we need 2 more! Take the second derivative to vanish at the first and last points (the "natural" spline choice).
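One standard way to implement these conditions (a textbook construction, not necessarily the one used later in this course) is to solve a tridiagonal system for the second derivatives M_i of the spline at the nodes, with M_1 = M_n = 0 for the natural boundary conditions, and then evaluate the cubic pieces from the M_i. A sketch with made-up data:

```python
import numpy as np

def natural_cubic_spline(xd, yd):
    n = len(xd)
    h = np.diff(xd)
    A = np.zeros((n, n))
    r = np.zeros(n)
    A[0, 0] = A[-1, -1] = 1.0      # natural conditions: M_1 = M_n = 0
    for i in range(1, n - 1):      # continuity of f' at interior points
        A[i, i - 1] = h[i - 1]
        A[i, i] = 2.0 * (h[i - 1] + h[i])
        A[i, i + 1] = h[i]
        r[i] = 6.0 * ((yd[i + 1] - yd[i]) / h[i] - (yd[i] - yd[i - 1]) / h[i - 1])
    M = np.linalg.solve(A, r)      # second derivatives at the nodes

    def evaluate(x):
        i = np.clip(np.searchsorted(xd, x) - 1, 0, n - 2)
        t, u = x - xd[i], xd[i + 1] - x
        return (M[i] * u**3 + M[i + 1] * t**3) / (6.0 * h[i]) \
             + (yd[i] / h[i] - M[i] * h[i] / 6.0) * u \
             + (yd[i + 1] / h[i] - M[i + 1] * h[i] / 6.0) * t
    return evaluate

xd = np.array([0.0, 1.0, 2.0, 3.0])
yd = np.array([0.0, 1.0, 0.0, 2.0])
spline = natural_cubic_spline(xd, yd)
print(np.allclose(spline(xd), yd))  # passes through the data: True
```

The system is tridiagonal, so for large n one would use a dedicated tridiagonal solver instead of the dense `np.linalg.solve` used here for simplicity.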