Scattered Interpolation Survey


1 Scattered Interpolation Survey
j.p.lewis, u. southern california

2 Scattered vs. Regular domain

3 Motivation
modeling: animated character deformation
texture synthesis
stock market prediction
neural networks, machine learning, ...

4 Machine Learning
Score credit card applicants. Each person has N attributes: income, age, gender, credit rating, zip code, ...; i.e., each person is a point in an N-dimensional space.
Training data: some individuals have a score of 1 (grant card), others -1 (deny card).

5 Machine Learning
From the training data, learn a function $\mathbb{R}^N \to \{-1, 1\}$ ... by interpolating the training data.

6 Texture Synthesis (blackboard drawing)

7 Stock Market Prediction (blackboard drawing)

8 Modeling: deforming a face mesh. Images: Jun-Yong Noh and Ulrich Neumann, CGIT lab

9 Shepard Interpolation
$\hat{d}(x) = \frac{\sum_k w_k(x)\, d_k}{\sum_k w_k(x)}$
with the weights set to an inverse power of the distance: $w_k(x) = \|x - x_k\|^{-p}$.
Note: the weights are singular at the data points $x_k$.
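
A minimal numpy sketch of Shepard interpolation in 1D (illustrative only; the function name, the test data, and the small eps that guards against division by zero at the data points are our own additions):

    import numpy as np

    def shepard(x, xk, dk, p=2, eps=1e-12):
        # Pairwise distances between evaluation points x and data sites xk.
        r = np.abs(x[:, None] - xk[None, :])
        w = (r + eps) ** -p                    # inverse-distance weights
        return (w * dk).sum(axis=1) / w.sum(axis=1)

    xk = np.array([0.0, 1.0, 2.5])
    dk = np.array([0.0, 1.0, -1.0])
    print(shepard(np.linspace(0.0, 2.5, 6), xk, dk))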

10 Shepard Interpolation
Improved higher-order versions are given in Lancaster's Curve and Surface Fitting book.

11 Natural Neighbor Interpolation
Image: N. Sukumar, Natural Neighbor Interpolation and the Natural Element Method (NEM)

12 Wiener interpolation
Linear estimator: $\hat{x}_t = \sum_k w_k x_{t+k}$
Orthogonality: $E[(x_t - \hat{x}_t)\,x_m] = 0$, so $E[x_t x_m] = E[\sum_k w_k x_{t+k} x_m]$
Autocovariance: $E[x_t x_m] = R(t - m)$
Linear system: $R(t - m) = \sum_k w_k R(t + k - m)$
Note there is no requirement on the actual spacing of the data. Related to the kriging method in geology.
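
A hedged sketch of solving this linear system for the weights; the Gaussian autocovariance model and the sample offsets below are illustrative assumptions, not from the slides:

    import numpy as np

    def wiener_weights(offsets, R):
        # Solve R(k_i) = sum_j w_j R(k_j - k_i) for the estimator weights.
        offsets = np.asarray(offsets, dtype=float)
        A = R(offsets[None, :] - offsets[:, None])   # A[i, j] = R(k_j - k_i)
        b = R(offsets)                               # b[i]    = R(k_i)
        return np.linalg.solve(A, b)

    R = lambda tau: np.exp(-np.asarray(tau) ** 2)    # assumed autocovariance model
    print(wiener_weights([-2.0, -1.0, 1.0, 3.5], R)) # irregular spacing is fine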

13 Applications: Wiener interpolation
Lewis, Generalized Stochastic Subdivision, ACM TOG, July 1987

14 Laplace/Poisson Interpolation
Objective: minimize a roughness measure, the integrated derivative (or gradient) squared:
$\int \Bigl(\frac{df}{dx}\Bigr)^2 dx \quad\text{or}\quad \int \|\nabla f\|^2\, ds$

15 Laplace/Poisson interpolation
Minimize $\int (f'(x))^2\,dx$, i.e. $F(y, y', x) = (y')^2$.
Perturb the minimizer: change $y$ to $y + \epsilon q$, where $q$ is zero at both ends. At a minimum the variation of the functional is zero:
$\frac{dE}{d\epsilon} = \int \frac{\partial F}{\partial y'}\,q'\,dx = \Bigl[\frac{\partial F}{\partial y'}\,q\Bigr]_a^b - \int \frac{d}{dx}\frac{\partial F}{\partial y'}\,q\,dx = 0$
(integration by parts; the boundary term vanishes because $q$ is zero at both ends).
With $\frac{\partial F}{\partial y'} = 2y' = 2\frac{df}{dx}$, the condition must hold for arbitrary $q$, so $\frac{d}{dx}\frac{\partial F}{\partial y'} = 2\frac{d^2 f}{dx^2} = 0$, and it comes out as $\frac{d^2 f}{dx^2} = 0$, i.e. $\nabla^2 f = 0$, as it should.

16 Laplace/Poisson: Discrete
Local viewpoint. Roughness: $R = \int (u')^2\,dx \approx \sum_k (u_{k+1} - u_k)^2$
For a particular $k$:
$\frac{dR}{du_k} = \frac{d}{du_k}\bigl[(u_k - u_{k-1})^2 + (u_{k+1} - u_k)^2\bigr] = 2(u_k - u_{k-1}) - 2(u_{k+1} - u_k) = 0$
$\Rightarrow\; u_{k+1} - 2u_k + u_{k-1} = 0$, i.e. $\nabla^2 u = 0$
Notice: the rows of $D^\top D$ read $\ldots, -1, 2, -1, \ldots$ (the negative of the $1, -2, 1$ Laplacian stencil).
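
A two-line numpy check of this observation (the size n is arbitrary, chosen only for display):

    import numpy as np

    n = 6
    D = np.eye(n - 1, n, k=1) - np.eye(n - 1, n)  # forward differences: (Du)_k = u_{k+1} - u_k
    print(D.T @ D)                                # interior rows read ... -1, 2, -1 ...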

17 Laplace/Poisson Interpolation
Discrete/matrix viewpoint: encode the derivative operator in a matrix $D$ with rows $\ldots, -1, 1, \ldots$ (forward differences), then solve
$\min_f\; f^\top D^\top D f$

18 Laplace/Poisson Interpolation
$\min_f f^\top D^\top D f \;\Rightarrow\; 2 D^\top D f = 0$, i.e. $\frac{d^2 f}{dx^2} = 0$, or $\nabla^2 f = 0$.
$f = 0$ is a solution; the last eigenvalue is zero and corresponds to a constant solution.

19 Laplace/Poisson: solution approaches
direct matrix inverse (better: Cholesky)
Jacobi (because the matrix is quite sparse)
Jacobi variants (SOR)
Multigrid

20 Jacobi iteration: matrix viewpoint
$Ax = b$; split $A$ into diagonal $D$ and non-diagonal $E$: $(D + E)x = b$
$Dx = -Ex + b$
$x = -D^{-1}E x + D^{-1} b$
Call $B = -D^{-1}E$, $z = D^{-1}b$, and iterate $x \leftarrow Bx + z$.
$D^{-1}$ is easy; hope that the largest eigenvalue of $B$ is less than 1.
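
A minimal sketch of the iteration (the 2x2 test matrix is our own, chosen diagonally dominant so that the iteration converges):

    import numpy as np

    def jacobi(A, b, iters=100):
        d = np.diag(A)               # diagonal part of A, as a vector
        E = A - np.diag(d)           # non-diagonal part
        x = np.zeros_like(b)
        for _ in range(iters):
            x = (b - E @ x) / d      # x <- Bx + z with B = -D^{-1}E, z = D^{-1}b
        return x

    A = np.array([[4.0, 1.0], [2.0, 5.0]])
    b = np.array([1.0, 2.0])
    print(jacobi(A, b), np.linalg.solve(A, b))  # the two should agree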

21 Jacobi iteration: local viewpoint
Jacobi iteration sets each $f_k$ to the solution of its row of the matrix equation, independent of all other rows:
$\sum_c A_{rc} f_c = b_r$
$A_{kk} f_k = b_k - \sum_{j \neq k} A_{kj} f_j$
$f_k \leftarrow \frac{b_k}{A_{kk}} - \sum_{j \neq k} \frac{A_{kj}}{A_{kk}} f_j$

22 Jacobi iteration applied to the Laplace equation
Jacobi iteration sets each $f_k$ to the solution of its row of the matrix equation, independent of all other rows. In 1D:
$f_{t-1} - 2f_t + f_{t+1} = 0 \;\Rightarrow\; 2f_t = f_{t-1} + f_{t+1}$
$f_k \leftarrow 0.5\,(f[k-1] + f[k+1])$
In 2D:
    f[y][x] = 0.25 * ( f[y+1][x] + f[y-1][x] + f[y][x-1] + f[y][x+1] )

23 But now let's interpolate
1D case; say $f_3$ is known. Three equations involve $f_3$. Subtract (a multiple of) $f_3$ from both sides of these equations:
$f_1 - 2f_2 + f_3 = 0 \;\rightarrow\; f_1 - 2f_2 + 0 = -f_3$
$f_2 - 2f_3 + f_4 = 0 \;\rightarrow\; f_2 + 0 + f_4 = 2f_3$
$f_3 - 2f_4 + f_5 = 0 \;\rightarrow\; 0 - 2f_4 + f_5 = -f_3$
In the resulting matrix $L$, the column corresponding to $f_3$ is zeroed; its contribution has moved to the right-hand side.
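
A tiny end-to-end sketch of membrane (Laplace) interpolation in 1D: Jacobi sweeps with the known samples simply re-imposed after each sweep. The constraint locations and values are invented; in 1D the converged result is piecewise linear between the constraints:

    import numpy as np

    n = 11
    f = np.zeros(n)
    known = {0: 0.0, 3: 1.0, 10: -1.0}    # hypothetical constraints: index -> value
    for _ in range(2000):
        f[1:-1] = 0.5 * (f[:-2] + f[2:])  # Jacobi sweep on the interior points
        for i, v in known.items():        # clamp the known samples
            f[i] = v
    print(np.round(f, 3))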

24 Multigrid
$r$ is known, $e$ is not:
$Ax = b$, approximate solution $\tilde{x} = x + e$
$r = A\tilde{x} - b = Ax + Ae - b = Ae$
For Laplace/Poisson, $r$ is smooth. So decimate, solve for $e$ on the coarser grid, interpolate the correction back. And recurse...
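
A toy two-grid sketch of this idea for a 1D Poisson system. Everything here is an illustrative assumption: damped Jacobi (weight 2/3) as the smoother, linear interpolation as the prolongation P, a Galerkin coarse operator P^T A P, and a direct solve on the coarse grid where real multigrid would recurse:

    import numpy as np

    def laplacian(n):
        return 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

    def smooth(A, x, b, sweeps=3, omega=2.0 / 3.0):
        d = np.diag(A)
        for _ in range(sweeps):
            x = x + omega * (b - A @ x) / d   # damped Jacobi damps high-frequency error
        return x

    def two_grid(A, b, x):
        x = smooth(A, x, b)                   # pre-smooth: the residual is now smooth
        r = b - A @ x
        n = b.size
        P = np.zeros((n, n // 2))             # prolongation: linear interpolation
        for j in range(n // 2):
            i = 2 * j + 1
            P[i, j] = 1.0
            P[i - 1, j] += 0.5
            if i + 1 < n:
                P[i + 1, j] += 0.5
        Ac = P.T @ A @ P                      # Galerkin coarse-grid operator
        ec = np.linalg.solve(Ac, P.T @ r)     # solve for the error on the coarse grid
        x = x + P @ ec                        # coarse-grid correction
        return smooth(A, x, b)                # post-smooth

    A, b = laplacian(31), np.random.default_rng(0).standard_normal(31)
    x = np.zeros(31)
    for _ in range(10):
        x = two_grid(A, b, x)
    print(np.linalg.norm(b - A @ x))          # residual shrinks rapidly per cycle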

25 Exciting demo

26 Recovered fur

27 Recovered fur: detail

28 Poor interpolation

29 Membrane vs. Thin Plate
Left - membrane interpolation, right - thin plate.

30 Thin plate spline
Minimize the integrated second derivative squared (approximate curvature):
$\min_f \int \Bigl(\frac{d^2 f}{dx^2}\Bigr)^2 dx$

31 Radial Basis Functions
$\hat{d}(x) = \sum_k^N w_k\, \phi(\|x - x_k\|)$

32 Radial Basis Functions (RBFs)
Any function other than a constant can be used! Common choices:
Gaussian: $\phi(r) = \exp(-r^2/\sigma^2)$
Thin plate spline: $\phi(r) = r^2 \log r$
Hardy multiquadric: $\phi(r) = \sqrt{r^2 + c^2},\; c > 0$
Notice: the last two increase as a function of radius.
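
A minimal sketch of RBF interpolation with the Gaussian kernel (sigma = 0.5 and the sample data are illustrative choices):

    import numpy as np

    phi = lambda r: np.exp(-(r / 0.5) ** 2)       # Gaussian basis

    xk = np.array([0.0, 1.0, 2.0, 3.0])           # data sites
    dk = np.array([0.0, 1.0, 0.0, -1.0])          # data values
    Phi = phi(np.abs(xk[:, None] - xk[None, :]))  # Phi[j, k] = phi(|x_j - x_k|)
    w = np.linalg.solve(Phi, dk)                  # interpolation conditions: Phi w = d

    x = np.linspace(0.0, 3.0, 7)
    print(phi(np.abs(x[:, None] - xk[None, :])) @ w)  # reproduces dk at the data sites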

33 RBF versus Shepard's

34 Solving thin plate interpolation
if few known points: use RBF
if many points: use multigrid instead
but Carr, Beatson et al. (SIGGRAPH 2001) use Greengard's fast multipole method (FMM) for RBF with large numbers of points

35 Radial Basis Functions
$\hat{d}(x) = \sum_k^N w_k\,\phi(\|x - x_k\|)$
$e = \sum_j \bigl(d(x_j) - \hat{d}(x_j)\bigr)^2 = \sum_j \Bigl(d(x_j) - \sum_k^N w_k\,\phi(\|x_j - x_k\|)\Bigr)^2$
$\;\; = \Bigl(d(x_1) - \sum_k^N w_k\,\phi(\|x_1 - x_k\|)\Bigr)^2 + \Bigl(d(x_2) - \sum_k^N w_k\,\phi(\|x_2 - x_k\|)\Bigr)^2 + \cdots$

36 Radial Basis Functions
Define $R_{jk} = \phi(\|x_j - x_k\|)$:
$e = \bigl(d(x_1) - (w_1 R_{11} + w_2 R_{12} + w_3 R_{13} + \cdots)\bigr)^2 + \bigl(d(x_2) - (w_1 R_{21} + w_2 R_{22} + w_3 R_{23} + \cdots)\bigr)^2 + \cdots + \bigl(d(x_m) - (w_1 R_{m1} + w_2 R_{m2} + w_3 R_{m3} + \cdots)\bigr)^2 + \cdots$
$\frac{de}{dw_m} = -2\bigl(d(x_1) - (w_1 R_{11} + w_2 R_{12} + w_3 R_{13} + \cdots)\bigr)R_{1m} - 2\bigl(d(x_2) - (w_1 R_{21} + w_2 R_{22} + w_3 R_{23} + \cdots)\bigr)R_{2m} - \cdots = 0$
Put $R_{m1}, R_{m2}, R_{m3}, \ldots$ in row $m$ of the matrix.

37 Radial Basis Functions
$\hat{d}(x) = \sum_k^N w_k\,\phi(\|x - x_k\|)$
$e = \|d - \Phi w\|^2 = (d - \Phi w)^\top (d - \Phi w)$
$\frac{de}{dw} = 0 = -2\,\Phi^\top (d - \Phi w)$
$\Phi^\top d = \Phi^\top \Phi\, w$
$w = (\Phi^\top \Phi)^{-1} \Phi^\top d$
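
In code one would normally avoid forming $(\Phi^\top \Phi)^{-1}$ explicitly; a least-squares solver computes the same $w$ with better conditioning. A standalone sketch with invented data, for the overdetermined case of more samples than basis functions:

    import numpy as np

    rng = np.random.default_rng(1)
    Phi = rng.standard_normal((8, 4))        # 8 samples, 4 basis functions
    d = rng.standard_normal(8)

    # Same minimizer as w = (Phi^T Phi)^{-1} Phi^T d, without forming the inverse.
    w, *_ = np.linalg.lstsq(Phi, d, rcond=None)
    print(w)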

38 Where does the TPS kernel come from
Fit an unknown function $f$ to the data $y_k$, regularized by minimizing a smoothness term:
$E[f] = \sum_k (f_k - y_k)^2 + \lambda \|Pf\|^2$, e.g. $Pf = \frac{d^2 f}{dx^2}$
The variational derivative of $E$ with respect to $f$ leads to a differential equation:
$P^{*}P f(x) = -\frac{1}{\lambda} \sum_k \bigl(f(x) - y_k\bigr)\,\delta(x - x_k)$

39 Where does the TPS kernel come from
Solve the linear differential equation by finding the Green's function of the differential operator and convolving it with the RHS (works only for a linear operator). Schematically:
$Lf = \text{rhs}$, where $L$ is the operator $P^{*}P$ and rhs is the data fidelity term
$Lg = \delta$ (choosing rhs $= \delta$ gives this equation)
$f = g * \text{rhs}$ ($f$ is obtained by convolving $g$ with the rhs)
$g$ is the convolutional inverse of $L$.

40 Where does the TPS kernel come from
In summary: the kernel $g$ is the inverse Fourier transform of the reciprocal of the Fourier transform of the adjoint-squared smoothing operator $P^{*}P$.

41 Where does the TPS kernel come from
Fit an unknown function $f$ to the data $y_k$, regularized by minimizing a smoothness term:
$E[f] = \sum_k (f_k - y_k)^2 + \lambda \|Pf\|^2$, e.g. $\|Pf\|^2 = \int \Bigl(\frac{d^2 f}{dx^2}\Bigr)^2 dx$
A similar discrete version:
$E[f] = (f - y)^\top S^\top S (f - y) + \lambda f^\top P^\top P f$

42 Where does the TPS kernel come from (continued)
A similar discrete version:
$E[f] = (f - y)^\top S^\top S (f - y) + \lambda f^\top P^\top P f$
To simplify things, here the data points to interpolate are required to be at discrete sample locations in the vector $y$, so the length of this vector defines a sample rate (reasonable). $S$ is a selection matrix with 1s and 0s on the diagonal (zeros elsewhere); it has 1s corresponding to the locations of data in $y$. $y$ can be zero (or any other value) where there is no data. $P$ is a diagonal-constant (Toeplitz) matrix that encodes the discrete form of the regularization operator. E.g., to minimize the integrated curvature, the rows of $P$ contain:
$\begin{bmatrix} -2 & 1 & 0 & 0 & \cdots \\ 1 & -2 & 1 & 0 & \cdots \\ 0 & 1 & -2 & 1 & \cdots \end{bmatrix}$
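
A sketch of this discrete formulation; the data locations, values, and lambda are invented. Setting the derivative of E to zero (next slide) gives the linear system $(S + \lambda P^\top P) f = S y$, using $S^\top S = S$ for a 0/1 diagonal $S$:

    import numpy as np

    n = 16
    idx = [2, 7, 12]                         # hypothetical data locations
    y = np.zeros(n)
    y[idx] = [1.0, -0.5, 0.8]                # data values; zero where there is no data
    S = np.zeros((n, n))
    S[idx, idx] = 1.0                        # selection matrix

    P = np.zeros((n - 2, n))                 # discrete curvature operator
    for i in range(n - 2):
        P[i, i:i + 3] = [1.0, -2.0, 1.0]     # rows ... 1, -2, 1 ...

    lam = 1e-3
    f = np.linalg.solve(S + lam * (P.T @ P), S @ y)
    print(np.round(f, 2))                    # a smooth curve passing near the data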

43 Where does the TPS kernel come from
Take the derivative of $E$ with respect to the vector $f$:
$2S(f - y) + 2\lambda P^\top P f = 0$
$P^\top P f = -\frac{1}{\lambda} S(f - y)$
Multiply by $G$, the inverse of $P^\top P$:
$f = G P^\top P f = -\frac{1}{\lambda}\, G S (f - y)$
So the RBF kernel comes from $G = (P^\top P)^{-1}$.

44 Where does the TPS kernel come from: discrete version
The RBF kernel is $G = (P^\top P)^{-1}$. Take the SVD $P = UDV^\top$; then $P^\top P = V D^2 V^\top$, and the inverse of $V D^2 V^\top$ is $V D^{-2} V^\top$.
The eigenvectors of a circulant matrix are sinusoids, and $P$ is diagonal-constant (Toeplitz), or nearly circulant. So applying $V D^{-2} V^\top$ is approximately the same as taking the Fourier transform and then the reciprocal (remembering that $D$ holds the singular values of $P$, not of $P^\top P$).

45 Matrix regularization
Find $w$ to minimize $(Rw - b)^\top (Rw - b)$. If the training points are very close together, the corresponding columns of $R$ are nearly parallel; this is difficult to control if the points are chosen by a user. Add a term to keep the weights small: $w^\top w$.
minimize $(Rw - b)^\top (Rw - b) + \lambda w^\top w$
$2R^\top (Rw - b) + 2\lambda w = 0$
$R^\top R w + \lambda w = R^\top b$
$(R^\top R + \lambda I) w = R^\top b$
$w = (R^\top R + \lambda I)^{-1} R^\top b$
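
A short sketch of the regularized solve; the random matrix with two nearly parallel columns stands in for training points that are very close together:

    import numpy as np

    rng = np.random.default_rng(0)
    R = rng.standard_normal((10, 5))
    R[:, 4] = R[:, 3] + 1e-6 * rng.standard_normal(10)  # nearly parallel columns
    b = rng.standard_normal(10)

    lam = 1e-4
    w = np.linalg.solve(R.T @ R + lam * np.eye(5), R.T @ b)
    print(w)                                 # regularized weights stay bounded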

46 Applications: Pose Space Deformation
Lewis/Cordner/Fong, SIGGRAPH 2000; incorporated in Softimage

47 Applications: Pose Space Deformation

48 Pose Space Deformation

49 Applications: Matrix virtual city

50 Smart Point Placement for Thin Plate

51 Smart Point Placement for Thin Plate

52 Smart Point Placement for Thin Plate

53 Smart Point Placement for Thin Plate
