Math (P)Review Part II: Vector Calculus Computer Graphics

Assignment 0.5 (Out today!) Same story as last homework; second part on vector calculus. Slightly fewer questions

Last Time: Linear Algebra Touched on a variety of topics: don't have time to cover everything! But there are some fantastic lectures online: http://bit.ly/2bfjliy

Vector Calculus in Computer Graphics Today's topic: vector calculus. Why is vector calculus important for computer graphics? - Basic language for talking about spatial relationships, transformations, etc. - Much of modern graphics (physically-based animation, geometry processing, etc.) is formulated in terms of partial differential equations (PDEs) that use div, curl, Laplacian... - As we saw last time, vector-valued data is everywhere in graphics!

Euclidean Norm Last time, we developed the idea of a norm, which measures total size, length, volume, intensity, etc. For geometric calculations, the norm we most often care about is the Euclidean norm: the notion of length preserved by rotations/translations/reflections of space. In orthonormal coordinates: |u| = sqrt(u1^2 + ... + un^2). Not true for all norms! (Can you think of an example?) WARNING: This quantity does not encode geometric length unless vectors are expressed in an orthonormal basis. (Common source of bugs!)

Euclidean Inner Product / Dot Product Likewise, there are lots of possible inner products; intuitively, they measure some notion of alignment. For geometric calculations, we want an inner product that captures something about geometry! For n-dimensional vectors, the Euclidean inner product is defined as <u,v> := |u| |v| cos(theta), where theta is the angle between u and v. In orthonormal Cartesian coordinates, it can be represented via the dot product u . v = u1 v1 + ... + un vn. WARNING: As with the Euclidean norm, this has no geometric meaning unless coordinates come from an orthonormal basis. Q: How do you express the Euclidean inner product in an arbitrary basis?
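One answer to the closing question, sketched numerically (the basis and vectors below are made-up examples): if coordinates are expressed in a possibly non-orthonormal basis B, the Euclidean inner product takes the Gram-matrix form u^T (B^T B) v.

```python
import numpy as np

# Hypothetical non-orthonormal basis B (columns are the basis vectors).
B = np.array([[1.0, 1.0],
              [0.0, 1.0]])   # second basis vector (1,1) is not orthogonal to the first

# Coordinates of two vectors expressed in the basis B.
u = np.array([2.0, 1.0])
v = np.array([1.0, 3.0])

# Naive dot product of the coordinate lists: NOT the Euclidean inner product here.
naive = u @ v

# Correct: use the Gram matrix M = B^T B, i.e. <u,v> = u^T (B^T B) v.
M = B.T @ B
correct = u @ M @ v

# Sanity check: agrees with dotting the actual vectors in standard coordinates.
assert np.isclose(correct, (B @ u) @ (B @ v))
```

When B is orthonormal, B^T B is the identity and the naive dot product is exactly right; that is the content of the warning above.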

Cross Product The inner product takes two vectors and produces a scalar. In 3D, the cross product is a natural way to take two vectors and get a vector, written as u x v. Geometrically: - magnitude equal to parallelogram area - direction orthogonal to both vectors - but which way? Use the right-hand rule (Q: Why only 3D?)

Cross Product, Determinant, and Angle More precise definition (that does not require hands): |u x v| = |u| |v| sin(theta), where theta is the angle between u and v, and <u x v, w> = det(u, v, w) for all w, where det is the determinant of three column vectors. This uniquely determines the coordinate formula: u x v = (u2 v3 - u3 v2, u3 v1 - u1 v3, u1 v2 - u2 v1) (mnemonic). Useful abuse of notation in 2D: u x v := u1 v2 - u2 v1.
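Both defining properties, and the determinant characterization, can be verified numerically; a small sketch with example vectors of my choosing:

```python
import numpy as np

# Example vectors (chosen arbitrarily).
u = np.array([1.0, 2.0, 0.0])
v = np.array([0.0, 1.0, 3.0])
w = np.cross(u, v)

# Magnitude equals the parallelogram area |u||v|sin(theta).
theta = np.arccos(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))
assert np.isclose(np.linalg.norm(w), np.linalg.norm(u) * np.linalg.norm(v) * np.sin(theta))

# Direction is orthogonal to both inputs.
assert np.isclose(w @ u, 0.0) and np.isclose(w @ v, 0.0)

# Determinant characterization: <u x v, w'> = det(u, v, w') for any w'.
wprime = np.array([2.0, -1.0, 1.0])
assert np.isclose(w @ wprime, np.linalg.det(np.column_stack([u, v, wprime])))
```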

Cross Product as Quarter Rotation Simple but useful observation for manipulating vectors in 3D: the cross product with a unit vector N is equivalent to a quarter-rotation in the plane with normal N. Q: What is N x (N x u)? Q: If you have u and N x u, how do you get a rotation by some arbitrary angle theta?
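A numerical sketch of both questions, using example vectors of my own: N x (N x u) recovers minus the in-plane part of u, and blending the in-plane part of u with its quarter-rotation gives a rotation by any angle theta.

```python
import numpy as np

N = np.array([0.0, 0.0, 1.0])   # example unit normal
u = np.array([3.0, 4.0, 2.0])   # example vector with a component along N

Nu = np.cross(N, u)             # quarter-rotation of the in-plane part of u
u_inplane = u - (N @ u) * N     # part of u lying in the plane with normal N

# First question: N x (N x u) is minus the in-plane part of u.
assert np.allclose(np.cross(N, Nu), -u_inplane)

# Second question: rotate the in-plane part by an arbitrary angle theta
# by blending u_inplane and its quarter-rotation Nu.
theta = 0.7
rotated = np.cos(theta) * u_inplane + np.sin(theta) * Nu
assert np.isclose(np.linalg.norm(rotated), np.linalg.norm(u_inplane))  # rotations preserve length
```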

Matrix Representation of Dot Product Often convenient to express the dot product via matrix product: u . v = u^T v. By the way, what about some other inner product? E.g., <u,v> := 2 u1 v1 + u1 v2 + u2 v1 + 3 u2 v2 Q: Why is the matrix representing an inner product always symmetric (A^T = A)?
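The example inner product above corresponds to the symmetric matrix A = [[2, 1], [1, 3]]; a quick check (test vectors are mine) that u^T A v reproduces the formula:

```python
import numpy as np

# Symmetric matrix encoding the example inner product from the slide.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])

# Made-up test vectors.
u = np.array([1.0, 2.0])
v = np.array([3.0, -1.0])

via_matrix  = u @ A @ v
via_formula = 2*u[0]*v[0] + u[0]*v[1] + u[1]*v[0] + 3*u[1]*v[1]
assert np.isclose(via_matrix, via_formula)
```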

Matrix Representation of Cross Product Can also represent the cross product via matrix multiplication: u x v = u_hat v, where u_hat is the skew-symmetric matrix [[0, -u3, u2], [u3, 0, -u1], [-u2, u1, 0]]. (Did we get it right?) Q: Without building a new matrix, how can we express v x u? A: Useful to notice that v x u = -u x v (why?). Hence, v x u = u_hat^T v.
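A minimal sketch of the skew-symmetric ("hat") matrix and the sign flip for v x u (example vectors are mine):

```python
import numpy as np

def hat(u):
    """Skew-symmetric matrix u_hat with hat(u) @ v == cross(u, v)."""
    return np.array([[  0.0, -u[2],  u[1]],
                     [ u[2],   0.0, -u[0]],
                     [-u[1],  u[0],   0.0]])

u = np.array([1.0, 2.0, 3.0])
v = np.array([4.0, 5.0, 6.0])

assert np.allclose(hat(u) @ v, np.cross(u, v))

# v x u = -(u x v), and hat(u) is skew (-hat(u) = hat(u).T), so:
assert np.allclose(hat(u).T @ v, np.cross(v, u))
```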

Determinant Q: How do you compute the determinant of a matrix? A: Apply some algorithm somebody told me once upon a time. Totally obvious, right? Q: No! What the heck does this number mean?!

Determinant, Volume and Triple Product Better answer: det(u,v,w) encodes the (signed) volume of the parallelepiped with edge vectors u, v, w: det(u, v, w) = <u x v, w>. This relationship is known as a triple product formula. (Q: What happens if we reverse the order of the cross product?)

Determinant of a Linear Map Q: If a matrix A encodes a linear map f, what does det(A) mean?

Representing Linear Maps via Matrices Key example: suppose I have a linear map f(u) = u1 a1 + ... + un an (a linear combination of fixed vectors a_i). How do I encode it as a matrix? Easy: the vectors a_i become the matrix columns. Now, matrix-vector multiplication recovers the original map.

Determinant of a Linear Map Q: If a matrix A encodes a linear map f, what does det(A) mean? A: It measures the change in volume. Q: What does the sign of the determinant tell us, in this case? A: It tells us whether orientation was reversed (det(A) < 0). (Do we really need a matrix in order to talk about the determinant of a linear map?)
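A tiny numerical illustration of both answers, using a made-up 2x2 map: the determinant gives the volume (area) scale factor, and its sign flips when orientation reverses.

```python
import numpy as np

# Example map: scale x by 2, y by 3 (volume factor 6) plus a shear (volume-preserving).
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
assert np.isclose(np.linalg.det(A), 6.0)    # areas get scaled by 6

# Swapping the columns reverses orientation: same volume, negative sign.
assert np.isclose(np.linalg.det(A[:, ::-1]), -6.0)
```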

Other Triple Products Super useful for working w/ vectors in 3D. E.g., the Jacobi identity for the cross product: u x (v x w) + v x (w x u) + w x (u x v) = 0. Why is it true, geometrically? There is a geometric reason, but it's not nearly as obvious as for det: it has to do w/ the fact that a triangle's altitudes meet at a point. Yet another triple product: Lagrange's identity, u x (v x w) = v<u,w> - w<u,v>. (Can you come up with a geometric interpretation?)
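Both identities can be sanity-checked numerically; here the second one is written in the "BAC-CAB" expansion form u x (v x w) = v<u,w> - w<u,v> (my reading of the slide's formula), with random example vectors:

```python
import numpy as np

rng = np.random.default_rng(0)
u, v, w = rng.standard_normal((3, 3))   # three random example vectors

# Jacobi identity: u x (v x w) + v x (w x u) + w x (u x v) = 0.
jac = (np.cross(u, np.cross(v, w))
       + np.cross(v, np.cross(w, u))
       + np.cross(w, np.cross(u, v)))
assert np.allclose(jac, 0.0)

# Triple product expansion ("BAC-CAB"): u x (v x w) = v<u,w> - w<u,v>.
assert np.allclose(np.cross(u, np.cross(v, w)), v * (u @ w) - w * (u @ v))
```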

Differential Operators - Overview Next up: differential operators and vector fields. Why is this useful for computer graphics? - Many physical/geometric problems are expressed in terms of relative rates of change (ODEs, PDEs). - These tools also provide the foundation for numerical optimization, e.g., minimize cost by following the gradient of some objective.

Derivative as Slope Consider a function f(x): R -> R. What does its derivative f'(x) mean? One interpretation: "rise over run". Leads to the usual definition: f'(x0) = lim_{h -> 0} (f(x0 + h) - f(x0)) / h. Careful! What if the slope is different when we walk in the opposite direction? f is differentiable at x0 only if the one-sided derivatives agree: f'+(x0) = f'-(x0). Many functions in graphics are NOT differentiable!

Derivative as Best Linear Approximation Any smooth function f(x) can be expressed as a Taylor series: f(x) = f(x0) + f'(x0)(x - x0) + (1/2) f''(x0)(x - x0)^2 + ... (constant + linear + quadratic terms). Replacing complicated functions with a linear (and sometimes quadratic) approximation is a favorite trick in graphics algorithms; we'll see many examples.

Derivative as Best Linear Approximation Intuitively, same idea applies for functions of multiple variables:

How do we think about derivatives for a function that has multiple variables?

Directional Derivative One way: suppose we have a function f(x1, x2). - Take a slice through the function along some line - Then just apply the usual derivative! - Called the directional derivative: D_u f(x) = lim_{eps -> 0} (f(x + eps u) - f(x)) / eps, i.e., take a small step along u.
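A minimal finite-difference sketch of the directional derivative, using f(x1, x2) = x1^2 + x2^2 (the function whose gradient field appears later in these notes):

```python
import numpy as np

def f(x):
    return x[0]**2 + x[1]**2   # example function

def directional_derivative(f, x, u, eps=1e-6):
    """Forward difference: take a small step of size eps along u."""
    return (f(x + eps * u) - f(x)) / eps

x = np.array([1.0, 2.0])
# Slicing along a coordinate axis recovers the partial derivatives 2*x1 and 2*x2.
assert abs(directional_derivative(f, x, np.array([1.0, 0.0])) - 2.0) < 1e-4
assert abs(directional_derivative(f, x, np.array([0.0, 1.0])) - 4.0) < 1e-4
```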

Gradient (nabla) Given a multivariable function, the gradient assigns a vector to each point. (Ok, but which vectors, exactly?)

Gradient in Coordinates Most familiar definition: list of partial derivatives, grad f = (df/dx1, ..., df/dxn). I.e., imagine that all but one of the coordinates are just constant values, and take the usual derivative. Practically speaking, two potential drawbacks: - Role of the inner product is not clear (more later!) - No way to differentiate functions of functions F(f), since we don't have a finite list of coordinates x1, ..., xn Still, an extremely common way to calculate the gradient.

Example: Gradient in Coordinates E.g., for f(x1, x2) = x1^2 + x2^2, the gradient is grad f = (2 x1, 2 x2).

Gradient as Best Linear Approximation Another way to think about it: at each point x0, the gradient is the vector that leads to the best possible linear approximation f(x0) + <grad f(x0), x - x0>. Starting at x0, the inner product term gets: bigger if we move in the direction of the gradient, smaller if we move in the opposite direction, and doesn't change if we move orthogonal to the gradient.

Gradient As Direction of Steepest Ascent Another way to think about it: direction of steepest ascent. I.e., what direction should we travel to increase the value of the function as quickly as possible? This viewpoint leads to algorithms for optimization, commonly used in graphics.

Gradient and Directional Derivative At each point x, the gradient is the unique vector grad f(x) such that <grad f(x), u> = D_u f(x) for all u. In other words, taking the inner product w/ this vector gives you the directional derivative in any direction u. Can't happen if the function is not differentiable! (Notice: the gradient also depends on the choice of inner product...)
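This characterization can be checked numerically: for the example function f(x,y) = x^2 + y^2, the inner product of the gradient with any direction u matches a finite-difference directional derivative. A sketch:

```python
import numpy as np

def f(x):
    return x[0]**2 + x[1]**2          # example function from these notes

def grad_f(x):
    return np.array([2*x[0], 2*x[1]])  # its list of partial derivatives

x = np.array([1.0, 2.0])
eps = 1e-6
rng = np.random.default_rng(1)

# <grad f(x), u> matches the directional derivative for every direction u.
for _ in range(5):
    u = rng.standard_normal(2)
    Du = (f(x + eps * u) - f(x)) / eps
    assert abs(grad_f(x) @ u - Du) < 1e-4
```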

Example: Gradient of Dot Product Consider the dot product expressed in terms of matrices: f(u) := u^T v. What is the gradient of f with respect to u? One way: write it out in coordinates: d/du_k (sum_i u_i v_i) = v_k, since each term has zero derivative unless i = k. In other words: grad_u (u^T v) = v. Not so different from d/du (uv) = v!
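The same conclusion, obtained numerically with a made-up v and base point: the finite-difference gradient of u^T v is v itself.

```python
import numpy as np

v = np.array([3.0, -1.0, 2.0])     # fixed example vector

def f(u):
    return u @ v                    # f(u) = u^T v

# Numerical gradient of f at an arbitrary point, via central differences.
u0 = np.array([1.0, 4.0, 0.5])
eps = 1e-6
grad = np.array([(f(u0 + eps*e) - f(u0 - eps*e)) / (2*eps) for e in np.eye(3)])

# As derived on the slide: the gradient of u^T v with respect to u is v itself.
assert np.allclose(grad, v, atol=1e-5)
```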

Gradients of Matrix-Valued Expressions EXTREMELY useful in graphics to be able to differentiate matrix-valued expressions. Ultimately, these expressions look much like ordinary derivatives. Excellent resource: Petersen & Pedersen, The Matrix Cookbook. At least once in your life, work these out meticulously in coordinates (to convince yourself they're true). Then forget about coordinates altogether!

Advanced*: L2 Gradient Consider a function of a function, F(f). What is the gradient of F with respect to f? Can't take partial derivatives anymore! Instead, look for a function grad F such that, for all functions u, <<grad F, u>> = D_u F (writing <<.,.>> for the L2 inner product <<f,g>> = integral of f(x) g(x) dx). What is the directional derivative of a function of a function?? Don't freak out, just return to the good old-fashioned limit: D_u F(f) = lim_{eps -> 0} (F(f + eps u) - F(f)) / eps. This strategy becomes much clearer w/ a concrete example... *as in, NOT on the test! (But perhaps somewhere in the test of life...)

Advanced Visual Example: L2 Gradient Consider the function F(f) := <<f, g>> for f: [0,1] -> R. I claim the gradient is grad F = g. Does this make sense intuitively? How can we increase the inner product with g as quickly as possible? - the inner product measures how well functions are aligned - g is definitely the function best-aligned with g! - so to increase the inner product, add a little bit of g to f (Can you work this solution out formally?)

Advanced Example: L2 Gradient Consider the function F(f) := <<f, g>> for arguments f: [0,1] -> R. At each point f0, we want a function grad F such that for all functions u, <<grad F, u>> = lim_{eps -> 0} (F(f0 + eps u) - F(f0)) / eps. Expanding the 1st term in the numerator, we get F(f0 + eps u) = <<f0, g>> + eps <<u, g>>. Hence, the limit becomes <<u, g>>. The only solution to <<grad F, u>> = <<g, u>> for all u is grad F = g. Not much different from grad_u <u,v> = v!
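The same computation can be mimicked numerically by discretizing f on a grid, where <<f,g>> becomes a weighted sum; the choice of g and f0 below is mine. Note the division by dx, which accounts for the L2 measure:

```python
import numpy as np

# Discretize f: [0,1] -> R as n samples; <<f,g>> becomes sum(f_i g_i) * dx.
n = 100
dx = 1.0 / n
xs = np.linspace(0.0, 1.0, n, endpoint=False)
g = np.sin(2 * np.pi * xs)          # example choice of g (an assumption)
f0 = xs**2                          # arbitrary starting function

def F(f):
    return np.sum(f * g) * dx       # discretized <<f, g>>

# Perturb one sample at a time; divide by dx to account for the L2 measure.
eps = 1e-6
grad = np.array([(F(f0 + eps * np.eye(n)[i]) - F(f0)) / (eps * dx) for i in range(n)])

# As the slide concludes: the L2 gradient of <<f, g>> is g itself.
assert np.allclose(grad, g, atol=1e-4)
```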

Key idea: Once you get the hang of taking the gradient of ordinary functions, it's (superficially) not much harder for more exotic objects like matrices, functions of functions, etc.

Vector Fields The gradient was our first example of a vector field. In general, a vector field assigns a vector to each point in space. E.g., can think of a 2-vector field in the plane as a map X: R^2 -> R^2. For example, we saw a gradient field (for the function f(x,y) = x^2 + y^2).

Q: How do we measure the change in a vector field?

Divergence and Curl Two basic derivatives for vector fields: divergence, div X, measures how much the field is shrinking/expanding; curl, curl Y, measures how much the field is spinning.

Divergence Also commonly written as div X = nabla . X. This suggests a coordinate definition for divergence: think of nabla as a vector of derivatives (d/dx1, ..., d/dxn), and think of X as a vector of functions (X1, ..., Xn). Then divergence is the sum nabla . X = dX1/dx1 + ... + dXn/dxn.

Divergence - Example Consider the vector field Divergence is then

Curl Also commonly written as curl X = nabla x X. This suggests a coordinate definition for curl. This time, think of nabla as a vector of just three derivatives (d/dx1, d/dx2, d/dx3), and think of X as a vector of three functions (X1, X2, X3). Then curl is the cross product nabla x X. (2D curl: dX2/dx1 - dX1/dx2.)

Curl - Example Consider the vector field (2D) Curl is then
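Since the slide's worked examples did not survive transcription, here is a hedged substitute pair: a radial field with constant divergence, and a rotational field with constant 2D curl (both fields chosen by me).

```python
import sympy as sp

x, y = sp.symbols('x y')

# Hypothetical example fields (the slide's exact examples aren't recoverable here).
X = (x, y)                       # field pointing radially outward
div_X = sp.diff(X[0], x) + sp.diff(X[1], y)
assert div_X == 2                # uniformly expanding

Y = (-y, x)                      # field spinning counterclockwise about the origin
curl_Y = sp.diff(Y[1], x) - sp.diff(Y[0], y)   # scalar 2D curl
assert curl_Y == 2               # uniformly rotating
```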

Notice anything about the relationship between curl and divergence?

Divergence vs. Curl (2D) The divergence of X is the same as the curl of the 90-degree rotation of X. Playing these kinds of games w/ vector fields is an important trick in algorithms (e.g., fluid simulation). (Q: Can you come up with an analogous relationship in 3D?)
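A symbolic check of this 2D relationship on an arbitrary smooth field of my choosing: rotating (X1, X2) to (-X2, X1) and taking the scalar curl reproduces the divergence.

```python
import sympy as sp

x, y = sp.symbols('x y')

# Arbitrary smooth example field X = (X1, X2).
X1 = x**2 * y
X2 = sp.sin(x) + y**3

div_X = sp.diff(X1, x) + sp.diff(X2, y)

# Rotate X by 90 degrees: (X1, X2) -> (-X2, X1), then take the scalar 2D curl.
curl_RX = sp.diff(X1, x) - sp.diff(-X2, y)

assert sp.simplify(div_X - curl_RX) == 0
```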

Example: Fluids w/ Stream Function Ando et al., "A Stream Function Solver for Liquid Simulations" (SIGGRAPH 2015)

Laplacian One more operator we haven't seen yet: the Laplacian. Unbelievably important object in graphics, showing up across geometry, rendering, simulation, and imaging: - basis for the Fourier transform / frequency decomposition - used to define model PDEs (Laplace, heat, wave equations) - encodes rich information about geometry

Laplacian Visual Intuition Q: For an ordinary function f(x), what does the 2nd derivative tell us? It distinguishes where f is concave from where it is convex. Likewise, the Laplacian measures the curvature of a function.

Laplacian Many Definitions Maps a scalar function to another scalar function (linearly!). Usually* denoted by Delta. Many starting points for the Laplacian: - divergence of gradient - sum of 2nd partial derivatives - gradient of Dirichlet energy - by analogy: graph Laplacian - variation of surface area - trace of Hessian *Or by nabla^2, but we'll reserve this symbol for the Hessian.

Laplacian Example Let's use the coordinate definition: consider a function f, compute its 2nd partial derivative in each coordinate, and sum. Interesting! Does this always happen?
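Since the slide's function did not survive transcription, a substitute worked example using f(x,y) = x^2 + y^2 (seen earlier as a gradient field), checking the coordinate definition against divergence-of-gradient:

```python
import sympy as sp

x, y = sp.symbols('x y')
f = x**2 + y**2     # example function used for the gradient field earlier

# Sum of 2nd partial derivatives:
lap = sp.diff(f, x, 2) + sp.diff(f, y, 2)
assert lap == 4

# Agrees with divergence-of-gradient, another definition from the list above.
div_grad = sp.diff(sp.diff(f, x), x) + sp.diff(sp.diff(f, y), y)
assert sp.simplify(lap - div_grad) == 0
```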

Hessian Our final differential operator: the Hessian. The Hessian will help us locally approximate complicated functions by a few simple terms. Recall our Taylor series: how do we do this for multivariable functions? We already talked about the best linear approximation, using the gradient; the Hessian gives us the next, quadratic term.

Hessian in Coordinates Typically denote the Hessian by the symbol nabla^2. Just as the gradient was a vector that gives us partial derivatives of the function, the Hessian is an operator that gives us partial derivatives of the gradient: (nabla^2 f)_ij = d^2 f / dx_i dx_j. For a function f(x): R^n -> R, can be more explicit: the n x n matrix of all 2nd partial derivatives. Q: Why is this matrix always symmetric?

Taylor Series for Multivariable Functions Using the Hessian, we can now write the 2nd-order approximation of any smooth, multivariable function f(x) around some point x0: f(x) ~ f(x0) + <grad f(x0), x - x0> + (1/2) <nabla^2 f(x0) (x - x0), x - x0> (constant + linear + quadratic terms). Will see later on how this approximation is very useful for optimization!
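A numerical sketch of the 2nd-order approximation, for an arbitrary smooth function with hand-computed gradient and Hessian (everything below is my own example):

```python
import numpy as np

# Arbitrary smooth example function.
def f(x):
    return np.sin(x[0]) * np.exp(x[1])

def grad_f(x):
    return np.array([np.cos(x[0]) * np.exp(x[1]),
                     np.sin(x[0]) * np.exp(x[1])])

def hess_f(x):
    fxx = -np.sin(x[0]) * np.exp(x[1])
    fxy =  np.cos(x[0]) * np.exp(x[1])
    fyy =  np.sin(x[0]) * np.exp(x[1])
    return np.array([[fxx, fxy],
                     [fxy, fyy]])      # symmetric, as the slides point out

x0 = np.array([0.3, 0.1])
d  = np.array([1e-3, -2e-3])           # small displacement from x0

# Quadratic model: constant + linear (gradient) + quadratic (Hessian) terms.
taylor2 = f(x0) + grad_f(x0) @ d + 0.5 * d @ hess_f(x0) @ d

# The error of the quadratic model is O(|d|^3): tiny for this small step.
assert abs(f(x0 + d) - taylor2) < 1e-6
```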

Next time: Rasterization Next time, we'll talk about drawing a triangle. And it's a lot more interesting than it might seem! Also, what's up with these jagged lines?