3 Regularization

Since we got a nice taste of what regularization is, let us see a generic problem of the type that is usually handled with regularization. The simplest such problem is smooth interpolation in one dimension. This is an important problem, not because it is such a good way to interpolate, but because it can be generalized in many different ways and forms the basis of many very applied algorithms like snakes, membranes, and thin plates. Despite their silly names these are very important and very applied techniques (I do not want to scare you from going to the doctor) in medical imaging and other areas.

The problem is best explained by Fig. 3.1. We want to find a line that goes near these points and is smooth.

[Figure 3.1: The question is how to find a line that passes near all the points (p_i, q_i) and is smooth. The word "smooth" can have different meanings, as can the word "near".]

Both the words "near" and "smooth" admit different interpretations, but we aim for the simplest. For the "near" we have only one choice if we want to keep things simple, and this is least squares. For the "smooth" we have more choices, but all of them involve a form of least squares. We can minimize the square of the first derivative, the square of the second, or a weighted sum of the two. We can go to the third derivative or beyond, but this is not very common. Finally, we have to decide how much emphasis we put on the nearness or the smoothness, which is controlled by the parameter λ in the expression

Spetsakis

$$S = S_{ph} + \lambda S_{sm}$$

where S is the expression we want to minimize, S_ph represents the physical constraint, which relates to the nearness, and S_sm represents the smoothness.

Let us now start with the physical constraint. We have to determine a function f(x) that passes near the points (p_i, q_i), or in more mathematical terms, such that f(p_i) − q_i is small for all i. Since we know and love least squares, that is what we use:

$$S_{ph} = \sum_{i=1}^{M} \left( f(p_i) - q_i \right)^2 \tag{3.1}$$

3.1 Representation of an Arbitrary Function

The next problem is how we represent function f. The choices are many. We could, for example, approximate f with a polynomial, but we would need a very high degree polynomial for a function that tries to pass close to many points, and such polynomials are difficult to handle. We could also represent f as a sum of sines and cosines, but this would lead to complicated equations that involve the inversion of large matrices with very few zeros. The best method is to represent f by a set of regular points that it passes through and to interpolate between these points with low degree polynomials. This leads to simple equations that involve the inversion of matrices that are mostly zeros. To make our life even easier we will interpolate between successive points with straight lines. So we define f as

$$f(x) = \begin{cases} (1-x)\,y_0 + x\,y_1 & 0 \le x < 1 \\ (2-x)\,y_1 + (x-1)\,y_2 & 1 \le x < 2 \\ (3-x)\,y_2 + (x-2)\,y_3 & 2 \le x < 3 \\ \quad\vdots & \\ (k+1-x)\,y_k + (x-k)\,y_{k+1} & k \le x < k+1 \\ \quad\vdots & \\ (N-x)\,y_{N-1} + (x-N+1)\,y_N & N-1 \le x < N \end{cases}$$

where y_0, y_1, etc. are the unknown parameters, and once we have them we know function f. Now for every i the corresponding term (f(p_i) − q_i)^2 in the summation of Eq. (3.1), if we set k = ⌊p_i⌋ (in other words, k is the biggest integer that does not exceed p_i), becomes

$$\left( (k+1-p_i)\,y_k + (p_i-k)\,y_{k+1} - q_i \right)^2$$

3.2 Dealing with the Physical Constraint

To minimize S_ph we differentiate with respect to the unknowns y_l for l = 0, …, N, but for every l only the terms that happen to have a y_k such that k = l, or a y_{k+1} such that k + 1 = l, will be non-zero.
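Before carrying the differentiation through, the piecewise-linear representation of f above can be sketched in a few lines of Python; the knot values y_0, …, y_3 and the sample point are made up for illustration:

```python
import numpy as np

def f(x, y):
    """Piecewise-linear interpolant defined by the knot values y[0..N].

    On each unit interval k <= x < k + 1 it evaluates
    f(x) = (k + 1 - x) * y[k] + (x - k) * y[k + 1].
    """
    k = min(int(np.floor(x)), len(y) - 2)  # k = floor(x), clamped so x = N still works
    return (k + 1 - x) * y[k] + (x - k) * y[k + 1]

y = np.array([0.0, 1.0, 4.0, 9.0])  # made-up knot values y_0 .. y_3
print(f(1.5, y))  # halfway between y_1 = 1 and y_2 = 4, so 2.5
```

Note that f is linear in the unknowns y_k, which is why the least-squares problem below ends up as a linear system.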

So each term in the summation of Eq. (3.1) will contribute one equation when l = k and one more when l = k + 1. The first differentiation then looks like

$$\frac{\partial}{\partial y_k} \left( (k+1-p_i)\,y_k + (p_i-k)\,y_{k+1} - q_i \right)^2 = 2\,(k+1-p_i)\left( (k+1-p_i)\,y_k + (p_i-k)\,y_{k+1} - q_i \right)$$

and, equating it to zero, we finally get this equation

$$(k+1-p_i)^2\,y_k + (k+1-p_i)(p_i-k)\,y_{k+1} = (k+1-p_i)\,q_i$$

Similarly, when we differentiate with respect to y_{k+1}, we get

$$(k+1-p_i)(p_i-k)\,y_k + (p_i-k)^2\,y_{k+1} = (p_i-k)\,q_i$$

All is nice and good, but what do we do with these two equations? Where do they fit, and how do we find the y_k's? These two equations can be seen as the following system of linear equations written in matrix form

$$\begin{bmatrix} (k+1-p_i)^2 & (k+1-p_i)(p_i-k) \\ (k+1-p_i)(p_i-k) & (p_i-k)^2 \end{bmatrix} \begin{bmatrix} y_k \\ y_{k+1} \end{bmatrix} = \begin{bmatrix} (k+1-p_i)\,q_i \\ (p_i-k)\,q_i \end{bmatrix}$$

We can write the same equation in an even better way by including all the y_k's and a big square matrix in which all but four of the elements are zero, and similarly for the vector of the knowns on the right hand side

$$\begin{bmatrix} \ddots & & & \\ & (k+1-p_i)^2 & (k+1-p_i)(p_i-k) & \\ & (k+1-p_i)(p_i-k) & (p_i-k)^2 & \\ & & & \ddots \end{bmatrix} \begin{bmatrix} \vdots \\ y_k \\ y_{k+1} \\ \vdots \end{bmatrix} = \begin{bmatrix} \vdots \\ (k+1-p_i)\,q_i \\ (p_i-k)\,q_i \\ \vdots \end{bmatrix}$$

This is an equation that has an enviable property: it is tridiagonal, i.e. only three diagonals have non-zero elements. Now notice that for every p_i we get one k = ⌊p_i⌋ and one such equation as above. Then all we have to do is add these equations together, and since the matrices are all tridiagonal, we get a tridiagonal matrix in the end. The matrix might not be invertible, but then we are not done yet: we still have the smoothing term. But before we go there, in case anyone is curious what would happen if instead of linear interpolation we used quadratic or, even better, cubic interpolation, the matrix would have five or seven non-zero diagonals. These are not as easy to invert as tridiagonal ones, but still better than full dense matrices.

3.3 Dealing with the Smoothness Constraint

The smoothness component of the least squares has to take the form of the square of the first derivative. Everything else would be too much work for our sensitive minds. So

$$S_{sm} = \sum_{i=0}^{N-1} (y_{i+1} - y_i)^2$$

where we approximate the first derivative with a finite difference. We could use something fancier, but then we are discussing the basic principles, not showing off our tolerance to mathematical brutality. Differentiating with respect to the unknowns y_l we get

$$\frac{\partial}{\partial y_l} \sum_{i=0}^{N-1} (y_{i+1} - y_i)^2 = \sum_{i=0}^{N-1} 2\,(y_{i+1} - y_i)\,\frac{\partial}{\partial y_l}(y_{i+1} - y_i) = \sum_{i=0}^{N-1} 2\,\big(\delta(i+1-l) - \delta(i-l)\big)(y_{i+1} - y_i)$$

$$= 2 \sum_{i=0}^{N-1} \delta(i+1-l)\,(y_{i+1} - y_i) - 2 \sum_{i=0}^{N-1} \delta(i-l)\,(y_{i+1} - y_i)$$

where

$$\delta(n) = \begin{cases} 1 & n = 0 \\ 0 & n \ne 0 \end{cases}$$

which means that from each summation the only terms that survive are the ones where i + 1 = l (that is, i = l − 1) and i = l respectively. So the derivative of S_sm with respect to the unknown y_l becomes

$$2\,(y_l - y_{l-1}) - 2\,(y_{l+1} - y_l)$$

and since this is equated to zero we finally get

$$-y_{l-1} + 2\,y_l - y_{l+1} = 0$$

It is hard not to notice that this is essentially the second derivative, as we could have guessed if we remembered anything at all from the previous section.
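As a quick numerical sanity check of the derivative just derived, a central finite difference of S_sm (evaluated at arbitrary, made-up knot values) agrees with the closed form 2(y_l − y_{l−1}) − 2(y_{l+1} − y_l):

```python
import numpy as np

y = np.array([0.2, -1.0, 0.7, 1.5, -0.3, 0.9])  # arbitrary made-up knot values

def S_sm(y):
    return np.sum(np.diff(y) ** 2)  # sum of (y_{i+1} - y_i)^2

l, h = 3, 1e-6
analytic = 2 * (y[l] - y[l - 1]) - 2 * (y[l + 1] - y[l])  # the derivative we derived
e = np.zeros_like(y)
e[l] = 1.0
numeric = (S_sm(y + h * e) - S_sm(y - h * e)) / (2 * h)  # central finite difference
print(abs(analytic - numeric) < 1e-6)  # -> True
```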

Anyway, if we write it in matrix form as above, for every y_l we get

$$\begin{bmatrix} \cdots & -1 & 2 & -1 & \cdots \end{bmatrix} \begin{bmatrix} \vdots \\ y_{l-1} \\ y_l \\ y_{l+1} \\ \vdots \end{bmatrix} = 0$$

Since we get this single equation for each y_l, if we combine the equations for all the y_l's we get

$$\begin{bmatrix} \ddots & \ddots & & & \\ \ddots & 2 & -1 & & \\ & -1 & 2 & -1 & \\ & & -1 & 2 & \ddots \\ & & & \ddots & \ddots \end{bmatrix} \begin{bmatrix} \vdots \\ y_{l-1} \\ y_l \\ y_{l+1} \\ \vdots \end{bmatrix} = \begin{bmatrix} \vdots \\ 0 \\ 0 \\ 0 \\ \vdots \end{bmatrix}$$

which involves another tridiagonal matrix. We can add the two tridiagonal matrices, and the corresponding vectors of knowns (one of them is conveniently all zeros), and get the final tridiagonal system of equations to solve. The solution is particularly fast because mathematicians, being very smart, have invented nice methods to solve such systems fast. Unfortunately, when we go to two dimensions, we are not so lucky. The mathematicians that worked on that particular kind of matrices (tridiagonal with fringes) were not as smart, and we do not have equally good solutions.
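To tie the whole section together, here is a minimal end-to-end sketch in Python: it accumulates the 2×2 block of each data point into the physical-constraint matrix, adds λ times the −1, 2, −1 smoothness matrix, and solves. The data points, N, and λ are made up for illustration, and NumPy's dense solver stands in for a proper O(N) tridiagonal solver such as the Thomas algorithm:

```python
import numpy as np

# made-up noisy samples (p_i, q_i), number of segments N, and smoothing weight lambda
p = np.array([0.4, 1.1, 1.9, 2.5, 3.3, 4.7])
q = np.array([0.0, 1.0, 1.8, 2.6, 3.1, 4.9])
N, lam = 5, 1.0

A = np.zeros((N + 1, N + 1))  # will hold the sum of the two tridiagonal matrices
b = np.zeros(N + 1)           # the vector of knowns (the smoothness part is all zeros)

# physical (nearness) term: each point contributes one 2x2 block and two knowns
for pi, qi in zip(p, q):
    k = int(pi)                    # biggest integer not exceeding p_i
    a, c = k + 1 - pi, pi - k      # weights of y_k and y_{k+1}
    A[k:k + 2, k:k + 2] += np.array([[a * a, a * c],
                                     [a * c, c * c]])
    b[k:k + 2] += np.array([a * qi, c * qi])

# smoothness term: the -1, 2, -1 second-difference matrix, built as D^T D
D = np.diff(np.eye(N + 1), axis=0)  # D @ y gives the first differences y_{i+1} - y_i
A += lam * D.T @ D

y = np.linalg.solve(A, b)  # dense solve; a tridiagonal solver would do this in O(N)
print(y.round(2))
```

Note that the data term alone could be singular (e.g. if some segment contains no data points), but adding the smoothness term makes the combined matrix invertible here, which is exactly the role the text assigns to it.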