Surface Fitting. Moving Least Squares.


Preliminaries (Algebra & Calculus): Gradients

If F is a function assigning a real value to a 3D point, the gradient of F is the vector:
∇F = (∂F/∂x, ∂F/∂y, ∂F/∂z)

Preliminaries (Algebra & Calculus): Extrema

If F is a function assigning a real value to a 3D point, then p is an extremum of F only if the gradient of F at p is zero:
∇F(p) = 0

Preliminaries (Algebra & Calculus): Dot Products

If F has the form F(p) = ⟨p, q⟩ for some fixed q, then:
∇F(p) = q
If F has the form F(p) = ⟨p, p⟩, then:
∇F(p) = 2p
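These two identities are used repeatedly in the derivations below; here is a one-line componentwise check (my own addition, not on the slides):

```latex
\[
% Writing p = (p_1, p_2, p_3) and q = (q_1, q_2, q_3):
\frac{\partial}{\partial p_k}\langle p, q\rangle
   = \frac{\partial}{\partial p_k}\sum_{j} p_j q_j = q_k
   \;\Longrightarrow\; \nabla\langle p, q\rangle = q,
\qquad
\frac{\partial}{\partial p_k}\langle p, p\rangle
   = \frac{\partial}{\partial p_k}\sum_{j} p_j^2 = 2p_k
   \;\Longrightarrow\; \nabla\langle p, p\rangle = 2p.
\]
```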

Preliminaries (Algebra & Calculus): Lagrangians

The extrema of F subject to the constraint G(p) = c can be found by solving:
∇F(p) = λ ∇G(p)
i.e., at p the function F should only be changing in a direction that is perpendicular to the constraint.

Preliminaries (Algebra & Calculus): Lagrangians and Symmetric Matrices

If M is a symmetric matrix, then p is an extremum of:
F(p) = ⟨Mp, p⟩
subject to the constraint ⟨p, p⟩ = 1 if and only if p is an eigenvector of M:
Mp = λp
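A minimal numerical check of this fact, sketched in Python with NumPy (my own illustration, not part of the slides): the values of ⟨Mp, p⟩ over unit vectors are bounded by the extreme eigenvalues of M, and the bounds are attained exactly at the corresponding eigenvectors.

```python
import numpy as np

# Check: the extrema of F(p) = <Mp, p> on the unit sphere are attained at
# eigenvectors of the symmetric matrix M, so the quotient <Mp, p> is bounded
# by the extreme eigenvalues and meets those bounds at the eigenvectors.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
M = (A + A.T) / 2                          # make M symmetric

vals, vecs = np.linalg.eigh(M)             # ascending eigenvalues, orthonormal eigenvectors

p = rng.standard_normal((3, 10000))        # many random unit vectors
p /= np.linalg.norm(p, axis=0)
quotients = np.einsum('ij,ik,kj->j', p, M, p)   # <Mp, p> for each column p

assert quotients.max() <= vals[-1] + 1e-9
assert quotients.min() >= vals[0] - 1e-9

print(vecs[:, -1] @ M @ vecs[:, -1], vals[-1])   # equal: maximum attained
print(vecs[:, 0] @ M @ vecs[:, 0], vals[0])      # equal: minimum attained
```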

McLain's Challenge

Given a discrete sampling of a function, complete the function to all of space.

Constant reconstruction: for each point in space we can find the closest sample point and use its value. The resulting reconstruction is piecewise constant.

More precisely: given a set of 2D points {p_i} with associated real values Φ_i, define a function Φ over all of 2D space that fits/approximates the sample data.

Weighted Averaging

Given a set of 2D points {p_i = (x_i, y_i)} with associated real values Φ_i, for any point p = (x, y) set:
Φ(p) = Σ_i w(||p − p_i||) Φ_i / Σ_i w(||p − p_i||)
Properties of the weight function w(r): it should drop off with distance. If it drops off too slowly, the reconstruction is blurred; if it drops off too sharply, the reconstruction becomes numerically unstable.

Key Idea (McLain)

This type of fitting can be viewed as function optimization.

Generalized Framework

Let L be a space of functions; at each point p, find the function F_p ∈ L minimizing the weighted error:
Σ_i w(||p − p_i||) (F_p(p_i) − Φ_i)²
and set:
Φ(p) = F_p(p)

In the case that L is the space of constant (zero-order) polynomials, this reduces to solving for the real value F_p minimizing:
Σ_i w(||p − p_i||) (F_p − Φ_i)²
Thus:
F_p = Σ_i w(||p − p_i||) Φ_i / Σ_i w(||p − p_i||)
which is exactly the weighted averaging above.
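A short sketch of the weighted-averaging reconstruction. The Gaussian weight, its bandwidth h, and the function names are illustrative assumptions of mine; the slides only require that w(r) decay with distance.

```python
import numpy as np

# Minimal sketch of the weighted-averaging (constant MLS) reconstruction.
def gaussian_weight(r, h=0.2):
    # Assumed weight function; any monotonically decaying w(r) would do.
    return np.exp(-(r / h) ** 2)

def weighted_average(p, samples, values, weight=gaussian_weight):
    """Evaluate Phi(p) = sum_i w(||p - p_i||) Phi_i / sum_i w(||p - p_i||)."""
    r = np.linalg.norm(samples - p, axis=1)   # distances ||p - p_i||
    w = weight(r)
    return np.dot(w, values) / np.sum(w)

# Example: samples of Phi(x, y) = x + y on a small grid.
xs, ys = np.meshgrid(np.linspace(0, 1, 5), np.linspace(0, 1, 5))
samples = np.column_stack([xs.ravel(), ys.ravel()])
values = samples.sum(axis=1)
print(weighted_average(np.array([0.3, 0.4]), samples, values))  # roughly 0.7 (slightly smoothed)
```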

Example

We can let L be the six-dimensional space of second-order polynomials:
L = { c_00 + c_10 x + c_01 y + c_11 xy + c_20 x² + c_02 y² }
Then we would like to solve for the polynomial F_p(x, y) minimizing:
Σ_i w(||p − p_i||) (F_p(x_i, y_i) − Φ_i)²

Solving for F_p

Let C be the vector of coefficients:
C = (c_00, c_10, c_01, c_11, c_20, c_02)
and let V_i be the vector:
V_i = (1, x_i, y_i, x_i y_i, x_i², y_i²)
This lets us write the error as:
Σ_i w(||p − p_i||) (⟨C_p, V_i⟩ − Φ_i)²
Setting the derivative with respect to C_p equal to zero gives:
0 = Σ_i w(||p − p_i||) (⟨C_p, V_i⟩ − Φ_i) V_i
Using the fact that ⟨a, u⟩ v = (v uᵗ) a gives:
( Σ_i w(||p − p_i||) V_i V_iᵗ ) C_p = Σ_i w(||p − p_i||) Φ_i V_i
Given the optimal coefficients C_p of the polynomial, we can now set the value at the point p:
Φ(p) = F_p(p) = ⟨C_p, V(p)⟩, where V(p) = (1, x, y, xy, x², y²) for p = (x, y).
Using higher-order functions gives us more freedom to better fit the data.
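A sketch of the resulting 6×6 linear solve at a single query point; again the Gaussian weight, the bandwidth, and the helper names are my own assumptions for illustration.

```python
import numpy as np

# Second-order MLS fit at a query point p:
# solve (sum_i w_i V_i V_i^T) C_p = sum_i w_i Phi_i V_i for the coefficients C_p.
def basis(x, y):
    """V = (1, x, y, xy, x^2, y^2) for the 6-dimensional quadratic space."""
    return np.array([1.0, x, y, x * y, x * x, y * y])

def mls_quadratic(p, samples, values, h=0.3):
    w = np.exp(-np.sum((samples - p) ** 2, axis=1) / h ** 2)   # w(||p - p_i||), assumed Gaussian
    V = np.array([basis(x, y) for x, y in samples])            # one row V_i per sample
    A = V.T @ (w[:, None] * V)          # sum_i w_i V_i V_i^T  (6 x 6)
    b = V.T @ (w * values)              # sum_i w_i Phi_i V_i  (6,)
    C = np.linalg.solve(A, b)           # coefficients of the local polynomial F_p
    return C @ basis(*p)                # Phi(p) = F_p(p)

# Example: the quadratic Phi(x, y) = 1 + x^2 - y is in the space L,
# so it is reproduced exactly.
rng = np.random.default_rng(1)
samples = rng.uniform(-1, 1, size=(200, 2))
values = 1 + samples[:, 0] ** 2 - samples[:, 1]
print(mls_quadratic(np.array([0.2, -0.1]), samples, values))   # 1.14
```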

MLS vs. Splines

MLS: a continuous set of functions glued together by the weight function. Continuity is defined by the continuity of the weight function. No structure is imposed on the distribution of the sample points.
Splines: a discrete set of functions designed to glue together at the end points. Continuity is defined by the order of the polynomial at the joints. Sample points are distributed with a regular grid topology.

What is a manifold?

A smooth 2-manifold embedded in 3D is a surface smoothly stitched together from the graphs of functions.

MLS Approach (Levin)

As in McLain, define a continuous set of graphs f_1(x_1, x_2), f_2(x_1, x_2), f_3(x_1, x_2), ... and stitch them together using the weighting function.

MLS Approach (Levin)

Given a set of points {r_i} and a weighting function, for any point r find a graph that locally fits the data points:
1. For each point, define the locally best-fitting plane.
2. Using the height of the points from the plane as the sample value, apply MLS to complete the function.

MLS Approach (Basic Approach): Plane Fitting

1. Plane fitting: solve for the plane with unit normal a_r, containing the point q_r, that minimizes:
Σ_i w(||r − r_i||) ⟨a_r, r_i − q_r⟩²

Solving for q_r: taking the derivative with respect to q_r and setting the result equal to zero gives:
Σ_i w(||r − r_i||) ⟨a_r, r_i − q_r⟩ = 0
so the weighted average of the samples lies on the optimal plane, and we may take:
q_r = Σ_i w(||r − r_i||) r_i / Σ_i w(||r − r_i||)

Solving for a_r: the constraint ⟨a_r, a_r⟩ = 1 makes this a Lagrangian problem. Using the fact that ⟨a, u⟩ v = (v uᵗ) a gives:
( Σ_i w(||r − r_i||) (r_i − q_r)(r_i − q_r)ᵗ ) a_r = λ a_r
Since this matrix is symmetric, a_r must be an eigenvector of it; the minimum is attained at the eigenvector with the smallest eigenvalue.
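A sketch of the plane-fitting step, again assuming a Gaussian weight: q_r is the weighted centroid and a_r the eigenvector of the weighted covariance with the smallest eigenvalue (the minimizing choice among the extrema). The function name and test data are mine.

```python
import numpy as np

# Basic-approach plane fit at a point r: weighted centroid q_r and unit normal a_r.
def fit_plane(r, points, h=0.5):
    w = np.exp(-np.sum((points - r) ** 2, axis=1) / h ** 2)  # w(||r - r_i||), assumed Gaussian
    q = (w[:, None] * points).sum(axis=0) / w.sum()          # weighted centroid q_r
    d = points - q
    cov = d.T @ (w[:, None] * d)       # sum_i w_i (r_i - q)(r_i - q)^T
    vals, vecs = np.linalg.eigh(cov)   # symmetric, so a_r is an eigenvector
    return q, vecs[:, 0]               # normal = eigenvector of smallest eigenvalue

# Example: noisy samples of the plane z = 0 recover a normal near (0, 0, 1).
rng = np.random.default_rng(2)
pts = np.column_stack([rng.uniform(-1, 1, (500, 2)), 0.01 * rng.standard_normal(500)])
q_r, a_r = fit_plane(np.array([0.1, 0.2, 0.0]), pts)
print(a_r)   # approximately (0, 0, +-1)
```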

MLS Approach (Basic Approach): Function Fitting

2. Function fitting: fix an orthonormal basis {v_r, w_r} perpendicular to a_r and express each r_i as the 2D sample:
( ⟨r_i − q_r, v_r⟩, ⟨r_i − q_r, w_r⟩ )
with value:
⟨r_i − q_r, a_r⟩
Now use the McLain approach to fit a quadratic polynomial P_r(x, y) as the graph.

MLS Approach (Basic Approach): Projection

Given a set of points {r_i} and a weighting function, for any point r:
1. Find the fitting plane (a_r, q_r) at r.
2. Find the fitting polynomial P_r(x, y) at r.
3. Express r in the coordinates of the plane as (x_r, y_r) = (⟨r − q_r, v_r⟩, ⟨r − q_r, w_r⟩) and set the projection of r to be:
π(r) = q_r + x_r v_r + y_r w_r + P_r(x_r, y_r) a_r
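Putting the pieces together, a sketch of the basic projection operator π. The Gaussian weight, the bandwidth h, and the variable names are my own assumptions; the in-plane basis {v_r, w_r} is taken from the remaining eigenvectors of the weighted covariance.

```python
import numpy as np

# Basic-approach projection pi(r): fit the local plane (a_r, q_r), fit a quadratic
# graph P_r over that plane, and move r onto the graph.
def project(r, points, h=0.5):
    w = np.exp(-np.sum((points - r) ** 2, axis=1) / h ** 2)      # w(||r - r_i||), assumed Gaussian

    # 1. Fitting plane: weighted centroid q_r, normal a_r (smallest eigenvector).
    q = (w[:, None] * points).sum(axis=0) / w.sum()
    d = points - q
    vals, vecs = np.linalg.eigh(d.T @ (w[:, None] * d))
    a, v, u = vecs[:, 0], vecs[:, 1], vecs[:, 2]                  # a_r and the in-plane basis {v_r, w_r}

    # 2. Fitting polynomial: 2D coordinates in the plane, heights along a_r.
    x, y, z = d @ v, d @ u, d @ a
    V = np.column_stack([np.ones_like(x), x, y, x * y, x * x, y * y])
    C = np.linalg.solve(V.T @ (w[:, None] * V), V.T @ (w * z))    # quadratic coefficients

    # 3. Express r in plane coordinates and move it onto the local graph.
    xr, yr = (r - q) @ v, (r - q) @ u
    height = C @ np.array([1.0, xr, yr, xr * yr, xr * xr, yr * yr])
    return q + xr * v + yr * u + height * a                       # pi(r)

# Example: project a point hovering above noisy samples of the plane z = 0.
rng = np.random.default_rng(3)
pts = np.column_stack([rng.uniform(-1, 1, (500, 2)), 0.01 * rng.standard_normal(500)])
print(project(np.array([0.1, 0.2, 0.3]), pts))   # x, y stay near (0.1, 0.2); z is pulled to about 0
```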

MLS Approach (Basic Approach)

Advantage: gives a way of sending points to the surface without explicitly defining where the surface is.
Limitations: the projection operator π is not actually a projection, π(π(r)) ≠ π(r), and even if r is close to the surface, π(r) will not necessarily be mapped onto the approximating surface.

MLS Approach (Levin)

The projection operator is not actually a projection because:
1. The fitting plane computed for r is not the same fitting plane computed for π(r).
2. The graph fitted to r is not the same graph fitted to π(r), since the weighting coefficients change: w(||r − r_i||) ≠ w(||π(r) − r_i||).

Problem 1: the fitting plane computed for r is not the same fitting plane computed for π(r).
Observation: the projection π(r) only moves r along the normal direction of the plane.
Solution: modify the definition of the best-fitting plane so that it locally depends only on the line from r in the direction of a_r.

Problem 2: the graph fitted to r is not the same graph fitted to π(r).
Solution: modify the weighting for the best-fitting function so that it depends only on q_r and not on r, i.e. weight by w(||r_i − q_r||).

Modified Approach

Advantages: the projection operator π is a projection for points sufficiently close to the surface, π(π(r)) = π(r), and if r is close to the surface, π(r) will be mapped onto the approximating surface.
Disadvantages: by changing the fitting function, it is now necessary to optimize the fitting plane over the weighting functions as well, which can be very difficult (a non-linear optimization).
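The modified plane fit has no closed form. One simple heuristic, sketched here as an assumption of mine rather than the slides' method, alternates between updating the weights at the current q (which now lies on the line r + t·a_r) and re-solving the eigenvector problem for the normal.

```python
import numpy as np

# Rough alternating heuristic for the non-linear plane fit: the weights depend
# on q_r = r + t * a_r, so iterate weight updates and eigenvector solves.
def modified_plane(r, points, h=0.5, iters=20):
    q = r.copy()
    for _ in range(iters):
        w = np.exp(-np.sum((points - q) ** 2, axis=1) / h ** 2)  # w(||r_i - q||)
        d = points - q
        vals, vecs = np.linalg.eigh(d.T @ (w[:, None] * d))
        a = vecs[:, 0]                                # current normal estimate
        centroid = (w[:, None] * points).sum(axis=0) / w.sum()
        # Move q along the line {r + t * a}: keep only the normal component
        # of the offset from r to the weighted centroid.
        q = r + np.dot(centroid - r, a) * a
    return q, a

# Example: the iteration settles onto the sampled plane z = 0.
rng = np.random.default_rng(4)
pts = np.column_stack([rng.uniform(-1, 1, (500, 2)), 0.01 * rng.standard_normal(500)])
q_r, a_r = modified_plane(np.array([0.1, 0.2, 0.2]), pts)
print(q_r, a_r)   # q_r lands near the sample plane, a_r near (0, 0, +-1)
```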