Numerical Algorithms for Visual Computing 2008/09: Example Solutions for Assignment 4


Problem 1 (Shift invariance of the Laplace operator)

The Laplace equation is shift invariant, i.e., invariant under translations $\bar x = x + a$, $\bar y = y + b$, $a, b \in \mathbb{R}$. The shift invariance can then be written as

$$u_{xx} + u_{yy} = u_{\bar x \bar x} + u_{\bar y \bar y}.$$

To see this explicitly, we consider $x$, $y$ as mappings depending on $\bar x$ and $\bar y$, respectively, and we compute

$$y(\bar y) = \bar y - b, \quad x(\bar x) = \bar x - a, \qquad \frac{\partial y}{\partial \bar y} = 1, \quad \frac{\partial x}{\partial \bar x} = 1.$$

It is not wrong to consider $x = x(\bar x, \bar y)$, $y = y(\bar x, \bar y)$, so that $\frac{\partial x}{\partial \bar y} = 0$, $\frac{\partial y}{\partial \bar x} = 0$. It follows by the chain rule

$$u_{\bar x \bar x} = \frac{\partial^2 u(x, y)}{\partial \bar x^2} = \frac{\partial}{\partial \bar x}\left(\frac{\partial u(x, y)}{\partial x}\,\frac{\partial x}{\partial \bar x}\right) = \frac{\partial}{\partial \bar x}\,u_x(x, y) = \frac{\partial u_x(x, y)}{\partial x}\,\frac{\partial x}{\partial \bar x} = u_{xx}(x, y).$$

$u_{\bar y \bar y} = u_{yy}$ follows analogously.

Problem 2 (What is the matrix, what is the matrix?)

1. For the ordering

$$(u_1\ u_2\ u_3\ u_4\ u_5\ u_6\ u_7\ u_8\ u_9\ u_{10}\ u_{11}\ u_{12}\ u_{13}\ u_{14}\ u_{15}\ u_{16}) \qquad (1)$$

and the underlying process

$$\frac{u_{i+1,j} - 2u_{ij} + u_{i-1,j}}{\Delta x^2} + \frac{u_{i,j+1} - 2u_{ij} + u_{i,j-1}}{\Delta y^2} = f_{ij} \qquad (2)$$

we get (with $h = \Delta x = \Delta y$ on the $4 \times 4$ grid) the following matrix system

$$\frac{1}{h^2}\begin{pmatrix} T & I & & \\ I & T & I & \\ & I & T & I \\ & & I & T \end{pmatrix}, \qquad T = \begin{pmatrix} -4 & 1 & & \\ 1 & -4 & 1 & \\ & 1 & -4 & 1 \\ & & 1 & -4 \end{pmatrix}, \qquad (3)$$

where $I$ denotes the $4 \times 4$ identity matrix. The diagonal entries are $-4$; the off-diagonal entries $1$ correspond to the neighbour couplings of the 5-point stencil.

2. For the ordering

$$(u_1\ u_3\ u_6\ u_{10}\ u_2\ u_5\ u_9\ u_{13}\ u_4\ u_8\ u_{12}\ u_{15}\ u_7\ u_{11}\ u_{14}\ u_{16}) \qquad (4)$$
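Both system matrices can be assembled programmatically. The following sketch (assuming numpy; the Kronecker-product construction is a standard way to build the 2D Laplacian matrix and is not taken from the assignment) builds the lexicographic matrix (3), scaled by $h^2$, and then reorders it according to (4):

```python
import numpy as np

# 1D second-difference matrix tridiag(1, -2, 1) on 4 grid points
T1 = np.diag(-2.0 * np.ones(4)) + np.diag(np.ones(3), 1) + np.diag(np.ones(3), -1)
I4 = np.eye(4)

# 5-point Laplacian on the 4x4 grid, lexicographic ordering, scaled by h^2:
# A = kron(I, T1) + kron(T1, I) has -4 on the diagonal and 1 per neighbour
A = np.kron(I4, T1) + np.kron(T1, I4)

# reordering from part 2, written with 1-based indices as in the solution
order = [1, 3, 6, 10, 2, 5, 9, 13, 4, 8, 12, 15, 7, 11, 14, 16]
p = np.array(order) - 1
B = A[np.ix_(p, p)]  # symmetric permutation, B = P A P^T

print(np.all(A.diagonal() == -4.0), np.all(B.diagonal() == -4.0))  # prints: True True
```

The symmetric permutation leaves the diagonal entries at $-4$ and only moves the off-diagonal couplings, which is exactly what the reordered system below shows.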

we get the following matrix system

$$P A P^{\mathsf T}, \qquad (5)$$

where $A$ is the matrix from (3) and $P$ is the permutation matrix belonging to the ordering (4): the diagonal entries remain $-4$, while the off-diagonal entries $1$ move to the positions of the reordered neighbour couplings.

Problem 3 (Crossing derivatives)

1. We have been given the following cross-derivative discretisation by use of central difference methods:

$$u_{xy} \approx \frac{\partial}{\partial x}\left(\frac{u_{i,j+1} - u_{i,j-1}}{2h}\right) \approx \frac{1}{2h}\left(\frac{u_{i+1,j+1} - u_{i+1,j-1}}{2h} - \frac{u_{i-1,j+1} - u_{i-1,j-1}}{2h}\right) = \frac{1}{4h^2}\,(u_{i+1,j+1} - u_{i-1,j+1} - u_{i+1,j-1} + u_{i-1,j-1})$$

with $h = \Delta x = \Delta y$. For these 4 points we can compute the 2D Taylor expansion

$$u(x, y) = u(x_0, y_0) + u_x\,(x - x_0) + u_y\,(y - y_0) + \frac{1}{2}\Big(u_{xx}\,(x - x_0)^2 + 2u_{xy}\,(x - x_0)(y - y_0) + u_{yy}\,(y - y_0)^2\Big) + \dots$$

This gives for our stencil points the following approximations:

$$u(x \pm h, y + h) = u \pm h u_x + h u_y + \tfrac{h^2}{2}(u_{xx} \pm 2u_{xy} + u_{yy}) + \tfrac{h^3}{6}(\pm u_{xxx} + 3u_{xxy} \pm 3u_{xyy} + u_{yyy}) + \tfrac{h^4}{24}(u_{xxxx} \pm 4u_{xxxy} + 6u_{xxyy} \pm 4u_{xyyy} + u_{yyyy}) + O(h^5)$$

$$u(x + h, y - h) = u + h u_x - h u_y + \tfrac{h^2}{2}(u_{xx} - 2u_{xy} + u_{yy}) + \tfrac{h^3}{6}(u_{xxx} - 3u_{xxy} + 3u_{xyy} - u_{yyy}) + \tfrac{h^4}{24}(u_{xxxx} - 4u_{xxxy} + 6u_{xxyy} - 4u_{xyyy} + u_{yyyy}) + O(h^5)$$

$$u(x - h, y - h) = u - h u_x - h u_y + \tfrac{h^2}{2}(u_{xx} + 2u_{xy} + u_{yy}) - \tfrac{h^3}{6}(u_{xxx} + 3u_{xxy} + 3u_{xyy} + u_{yyy}) + \tfrac{h^4}{24}(u_{xxxx} + 4u_{xxxy} + 6u_{xxyy} + 4u_{xyyy} + u_{yyyy}) + O(h^5)$$

Now we can input these approximations into our four-pixel scheme:

$$u_{xy} \approx \frac{1}{4h^2}\,(u_{i+1,j+1} - u_{i-1,j+1} - u_{i+1,j-1} + u_{i-1,j-1})$$

$$= \frac{1}{4h^2}\Big[\big(u + h u_x + h u_y + \tfrac{h^2}{2}(u_{xx} + 2u_{xy} + u_{yy}) + \tfrac{h^3}{6}(u_{xxx} + 3u_{xxy} + 3u_{xyy} + u_{yyy}) + \tfrac{h^4}{24}(u_{xxxx} + 4u_{xxxy} + 6u_{xxyy} + 4u_{xyyy} + u_{yyyy}) + O(h^5)\big)$$

$$\quad - \big(u - h u_x + h u_y + \tfrac{h^2}{2}(u_{xx} - 2u_{xy} + u_{yy}) + \tfrac{h^3}{6}(-u_{xxx} + 3u_{xxy} - 3u_{xyy} + u_{yyy}) + \tfrac{h^4}{24}(u_{xxxx} - 4u_{xxxy} + 6u_{xxyy} - 4u_{xyyy} + u_{yyyy}) + O(h^5)\big)$$

$$\quad - \big(u + h u_x - h u_y + \tfrac{h^2}{2}(u_{xx} - 2u_{xy} + u_{yy}) + \tfrac{h^3}{6}(u_{xxx} - 3u_{xxy} + 3u_{xyy} - u_{yyy}) + \tfrac{h^4}{24}(u_{xxxx} - 4u_{xxxy} + 6u_{xxyy} - 4u_{xyyy} + u_{yyyy}) + O(h^5)\big)$$

$$\quad + \big(u - h u_x - h u_y + \tfrac{h^2}{2}(u_{xx} + 2u_{xy} + u_{yy}) - \tfrac{h^3}{6}(u_{xxx} + 3u_{xxy} + 3u_{xyy} + u_{yyy}) + \tfrac{h^4}{24}(u_{xxxx} + 4u_{xxxy} + 6u_{xxyy} + 4u_{xyyy} + u_{yyyy}) + O(h^5)\big)\Big]$$
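This bookkeeping can be sanity-checked numerically: the stencil is second-order accurate, so halving $h$ should cut the error by roughly a factor of four. A short sketch (assuming numpy; the test function and evaluation point are hypothetical choices, not part of the assignment):

```python
import numpy as np

def uxy_stencil(u, x, y, h):
    # 4-point central cross-derivative stencil:
    # (u(x+h,y+h) - u(x-h,y+h) - u(x+h,y-h) + u(x-h,y-h)) / (4 h^2)
    return (u(x + h, y + h) - u(x - h, y + h)
            - u(x + h, y - h) + u(x - h, y - h)) / (4.0 * h * h)

# hypothetical smooth test function with known mixed derivative
u = lambda x, y: np.sin(2.0 * x) * np.cos(3.0 * y)
uxy = lambda x, y: -6.0 * np.cos(2.0 * x) * np.sin(3.0 * y)  # exact u_xy

x0, y0 = 0.3, 0.7
err_h = abs(uxy_stencil(u, x0, y0, 0.10) - uxy(x0, y0))
err_h2 = abs(uxy_stencil(u, x0, y0, 0.05) - uxy(x0, y0))
print(err_h / err_h2)  # close to 4 for a second-order scheme
```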

If we combine these terms, we will get:

$$\frac{1}{4h^2}\Big[\,u\,(1 - 1 - 1 + 1) + h u_x\,(1 + 1 - 1 - 1) + h u_y\,(1 - 1 + 1 - 1) + \tfrac{h^2}{2}\big(u_{xx}(1 - 1 - 1 + 1) + 2u_{xy}(1 + 1 + 1 + 1) + u_{yy}(1 - 1 - 1 + 1)\big)$$
$$\quad + \tfrac{h^3}{6}\big(u_{xxx}(1 + 1 - 1 - 1) + 3u_{xxy}(1 - 1 + 1 - 1) + 3u_{xyy}(1 + 1 - 1 - 1) + u_{yyy}(1 - 1 + 1 - 1)\big)$$
$$\quad + \tfrac{h^4}{24}\big(u_{xxxx}(1 - 1 - 1 + 1) + 6u_{xxyy}(1 - 1 - 1 + 1) + u_{yyyy}(1 - 1 - 1 + 1) + 4u_{xxxy}(1 + 1 + 1 + 1) + 4u_{xyyy}(1 + 1 + 1 + 1)\big) + O(h^5)\Big]$$

so that only the $8u_{xy}$ contribution and the mixed fourth-order terms survive. This sums up to

$$\frac{1}{4h^2}\left(4h^2\,u_{xy} + \frac{4h^4}{6}\,(u_{xxxy} + u_{xyyy})\right) + O(h^3) = u_{xy} + \frac{h^2}{6}\,(u_{xxxy} + u_{xyyy}) + O(h^3) = u_{xy} + \frac{h^2}{6}\,\Delta u_{xy} + O(h^3) = \left(1 + \frac{h^2}{6}\,\Delta\right) u_{xy} + O(h^3).$$

Overall we get an $O(h^2)$ error term for the cross-derivative approximation.

2. This discretisation is isotropic, as the error term incorporates an additional isotropic Laplace operator applied to $u_{xy}$, which we wanted to approximate in the first place.

Problem 4 (Cooking norms)

1. We want to prove the following statement:

$$\frac{1}{\sqrt n}\,\|x\|_2 \;\le\; \|x\|_\infty \;\le\; \|x\|_2 \;\le\; \sqrt n\,\|x\|_\infty.$$

We will do this step by step. So at first we prove the first inequality:

$$\frac{1}{\sqrt n}\,\|x\|_2 = \left(\frac{1}{n}\sum_{i=1}^n x_i^2\right)^{1/2} \le \left(\frac{1}{n}\,n\,\max_i x_i^2\right)^{1/2} = \max_i |x_i| = \|x\|_\infty.$$

Now we will have a closer look at the second inequality $\|x\|_\infty \le \|x\|_2$. For this, however, we now consider the squared norms, as squaring does not violate monotonicity for nonnegative quantities:

$$\|x\|_\infty^2 = \max_{i=1,\dots,n} x_i^2 \le \sum_{i=1}^n x_i^2 = \|x\|_2^2.$$

Now we only need to prove the last inequality, so we compute

$$\|x\|_2^2 = \sum_{i=1}^n x_i^2 \le n\,\max_i x_i^2 = n\,\|x\|_\infty^2,$$

which concludes the proof.

2. We want to prove the following statement:

$$\frac{1}{\sqrt n}\,\|x\|_1 \;\le\; \|x\|_2 \;\le\; \|x\|_1 \;\le\; \sqrt n\,\|x\|_2.$$
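The norm chains in this problem are easy to spot-check numerically on random vectors. A sketch (assuming numpy; a sanity check on random data, not a proof):

```python
import numpy as np

rng = np.random.default_rng(42)  # seeded RNG, arbitrary choice
eps = 1e-9                        # tolerance for rounding noise

for _ in range(1000):
    n = int(rng.integers(1, 50))
    x = rng.normal(size=n)
    n1 = np.sum(np.abs(x))        # ||x||_1
    n2 = np.sqrt(np.sum(x * x))   # ||x||_2
    ninf = np.max(np.abs(x))      # ||x||_inf
    # part 1: n^{-1/2} ||x||_2 <= ||x||_inf <= ||x||_2 <= sqrt(n) ||x||_inf
    assert n2 / np.sqrt(n) <= ninf + eps
    assert ninf <= n2 + eps <= np.sqrt(n) * ninf + 2 * eps
    # part 2: n^{-1/2} ||x||_1 <= ||x||_2 <= ||x||_1 <= sqrt(n) ||x||_2
    assert n1 / np.sqrt(n) <= n2 + eps
    assert n2 <= n1 + eps <= np.sqrt(n) * n2 + 2 * eps
print("all checks passed")
```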

At first, we will prove the third inequality by use of the Cauchy-Schwarz inequality

$$\sum_{i=1}^n |x_i\,y_i| \le \left(\sum_{i=1}^n x_i^2\right)^{1/2}\left(\sum_{i=1}^n y_i^2\right)^{1/2},$$

so we can compute (choosing $y_i = 1$ for all $i$)

$$\|x\|_1 = \sum_{i=1}^n |x_i| \cdot 1 \le \left(\sum_{i=1}^n x_i^2\right)^{1/2}\left(\sum_{i=1}^n 1\right)^{1/2} = \sqrt n\,\|x\|_2.$$

From this, the first inequality is easily derivable by dividing by $\sqrt n$. The biggest problem is now the second inequality $\|x\|_2 \le \|x\|_1$. For this we will now consider the vector

$$y = \frac{x}{\|x\|_1}, \qquad y_k = \frac{x_k}{\|x\|_1}.$$

We will now show that $\|y\|_2 \le 1$, which will help us later:

$$\|y\|_2^2 = \sum_{k=1}^n \frac{x_k^2}{\|x\|_1^2} \le \left(\sum_{k=1}^n \frac{|x_k|}{\|x\|_1}\right)^2 = 1.$$

We use this result now in

$$\|x\|_2 = \big\|\,\|x\|_1\,y\,\big\|_2 = \|x\|_1\,\|y\|_2 \le \|x\|_1,$$

which is what we wanted to show.

Problem 5 (Proving Banach)

Let an arbitrary $x_0 \in D$ be given. As $F: D \to D$, the sequence $(x_k)_{k \in \mathbb{N}_0}$ is uniquely determined by $x_{k+1} = F(x_k)$, and for $k \in \mathbb{N}$ it holds:

$$\|x_{k+1} - x_k\| = \|F(x_k) - F(x_{k-1})\| \le L\,\|x_k - x_{k-1}\| \le L^2\,\|x_{k-1} - x_{k-2}\| \le \dots$$

Also, by iteration of this first estimate it follows for $k \ge n$:

$$\|x_{k+1} - x_k\| \le L^{k+1-n}\,\|x_n - x_{n-1}\|,$$

and from that also for $m > n$:

$$\|x_m - x_n\| = \|(x_m - x_{m-1}) + (x_{m-1} - x_{m-2}) + \dots + (x_{n+1} - x_n)\| \le \sum_{k=n}^{m-1} \|x_{k+1} - x_k\| \qquad \text{(triangle inequality)}$$

$$\le \sum_{k=n}^{m-1} L^{k+1-n}\,\|x_n - x_{n-1}\| = \sum_{j=1}^{m-n} L^j\,\|x_n - x_{n-1}\| \le \sum_{j=1}^{\infty} L^j\,\|x_n - x_{n-1}\| = \frac{L}{1-L}\,\|x_n - x_{n-1}\|.$$

Therefore it holds for $m > n$, using the two estimates above:

$$\|x_m - x_n\| \le \frac{L}{1-L}\,\|x_n - x_{n-1}\| \le \frac{L^n}{1-L}\,\|x_1 - x_0\|.$$

Due to $L < 1$ it holds that $L^n \to 0$ $(n \to \infty)$, and therefore $(x_n)_{n \in \mathbb{N}_0}$ is a Cauchy sequence, hence convergent with limit $x^\ast$. As $D$ is compact, $x^\ast \in D$. $F$ is continuous (every contraction is Lipschitz continuous), and therefore the limit $x^\ast$ is a fixed point of $F$:

$$x^\ast = \lim_{k \to \infty} x_{k+1} = \lim_{k \to \infty} F(x_k) = F\Big(\lim_{k \to \infty} x_k\Big) = F(x^\ast).$$

Moreover, $x^\ast$ is the only fixed point in $D$: if $\tilde x \neq x^\ast$ were another fixed point, it would follow

$$\|x^\ast - \tilde x\| = \|F(x^\ast) - F(\tilde x)\| \le L\,\|x^\ast - \tilde x\| < \|x^\ast - \tilde x\|,$$

which is a contradiction. The proposed error estimate follows directly from the bound on $\|x_m - x_n\|$ above, letting $m \to \infty$.
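The constructive proof translates directly into code. A sketch (using $F(x) = \cos x$ on $D = [0, 1]$ as a hypothetical example contraction with $L = \sin 1 < 1$; this example is not part of the assignment) that checks the a priori error bound $\|x^\ast - x_n\| \le \frac{L^n}{1-L}\,\|x_1 - x_0\|$:

```python
import math

# Example contraction: F maps [0, 1] into [cos 1, 1], a subset of [0, 1],
# and |F'(x)| = |sin x| <= sin(1) =: L < 1 on that interval.
F = math.cos
L = math.sin(1.0)

xs = [0.0]                     # x_0
for _ in range(50):
    xs.append(F(xs[-1]))       # fixed point iteration x_{k+1} = F(x_k)

x_star = xs[-1]                # converged approximation of the fixed point
n = 10
apriori = L**n / (1.0 - L) * abs(xs[1] - xs[0])
print(abs(x_star - xs[n]) <= apriori)  # prints: True
```

The observed error at step $n$ is in fact much smaller than the a priori bound, since the bound only uses the worst-case contraction factor $L$.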