Nonseparable multivariate wavelets. Ghan Shyam Bhatt. A dissertation submitted to the graduate faculty


Nonseparable multivariate wavelets

by

Ghan Shyam Bhatt

A dissertation submitted to the graduate faculty in partial fulfillment of the requirements for the degree of DOCTOR OF PHILOSOPHY

Major: Applied Mathematics

Program of Study Committee:
Fritz Keinert, Major Professor
Wolfgang Kliemann
Scott Hansen
Sunder Sethuraman
Khalid Boushaba

Iowa State University
Ames, Iowa
2004

Copyright © Ghan Shyam Bhatt, 2004. All rights reserved.

Graduate College
Iowa State University

This is to certify that the doctoral dissertation of Ghan Shyam Bhatt has met the dissertation requirements of Iowa State University.

Committee Member
Committee Member
Committee Member
Committee Member
Major Professor
For the Major Program

DEDICATION

To my daughter Aastha

TABLE OF CONTENTS

ABSTRACT
CHAPTER 1. Introduction
CHAPTER 2. Scalar Wavelet Theory
  2.1 Computing Point Values
  2.2 Multiresolution Analysis
  2.3 Decomposition and Reconstruction
  2.4 Symbol, Modulation Matrix and Polyphase Matrix
  2.5 Moments
  2.6 Approximation Order and Accuracy
  2.7 Lifting
  2.8 Factorization
  2.9 Completion
CHAPTER 3. The Multivariate Setting
  3.1 Lattices
  3.2 Refinable Vector Functions
  3.3 Support
  3.4 Estimating the Support
  3.5 Computing Point Values
CHAPTER 4. Multivariate Wavelet Theory
  4.1 Multiresolution Analysis, Multiwavelets
  4.2 Decomposition and Reconstruction
  4.3 Symbol and Modulation Matrix
  4.4 Polyphase Components and Polyphase Matrix
  4.5 Necessary and Sufficient Conditions for the Existence of Wavelets
CHAPTER 5. Computing Moments and Approximation Order
  5.1 Moments
  5.2 Approximation Order and Accuracy
  5.3 Approximation Order in the Scalar Case
CHAPTER 6. Lifting Multivariate Wavelets
  6.1 Lifting
CHAPTER 7. Factorization
CHAPTER 8. Completion
  8.1 The Completion Problem
  8.2 A New Biorthogonal Completion Algorithm
BIBLIOGRAPHY
ACKNOWLEDGEMENTS

ABSTRACT

We review the one-dimensional setting of wavelet theory and generalize it to nonseparable multivariate wavelets. This process presents significant technical difficulties. Some techniques of the one-dimensional setting carry over in a more or less straightforward way; some do not generalize at all. The main results include the following: an algorithm for computing the moments for multivariate multiwavelets; a necessary and sufficient condition for the approximation order; the lifting scheme for multivariate wavelets; and a generalization of the method of Lai [1] for the biorthogonal completion of a polyphase matrix under suitable conditions. One-dimensional techniques which cannot be generalized include the factorization of the polyphase matrix and a general solution to the completion problem.

CHAPTER 1. Introduction

Wavelet bases have proved useful for a number of applications in signal processing, numerical analysis, operator theory, and other fields like physics and engineering [9]. In the one-dimensional setting, a refinable function is the solution of the two-scale recursion relation
\[ \phi(x) = \sqrt{2}\sum_k h_k\,\phi(2x-k), \]
where the $h_k$ are called the filter coefficients. We assume that only finitely many coefficients are nonzero; this produces $\phi$ with compact support. Under some additional conditions, $\phi$ gives rise to a multiresolution approximation and a wavelet function $\psi$. The wavelet function $\psi$ has the property that the family of functions
\[ \psi_{jk}(x) = 2^{j/2}\,\psi(2^j x - k) \]
forms an orthonormal basis for $L^2(\mathbb{R})$. Orthogonality, compactness of support, approximation order, vanishing moments, symmetry, smoothness and decay are some of the important properties of a wavelet. In the one-dimensional setting it is possible to construct wavelets with specific desirable properties. For example, the Daubechies family of wavelets [9] provides a very good example of wavelets with compact support and arbitrary regularity.

There are several applications that require a higher-dimensional setting, for example image processing. A natural approach to generalizing the idea to higher dimensions is through tensor products of one-dimensional wavelets. This approach, however, has a major drawback: it favors the horizontal and vertical directions. The most general approach is to dilate by an expansive integer matrix $M$ which maps the integer lattice $\Gamma = \mathbb{Z}^d$ into a sublattice [1], [3]. We have the following recursion relation in the higher-dimensional setting:
\[ \phi(x) = \sqrt{m}\sum_k h_k\,\phi(Mx-k), \]

where the $h_k$ are $r\times r$ matrices and $\phi : \mathbb{R}^d \to \mathbb{C}^r$. It is known that $(m-1)r$ wavelets are needed to generate $L^2(\mathbb{R}^d)$ [8], [1], where $m = |\det M|$. An orthonormal wavelet set associated with the dilation matrix $M$ is a finite set $\psi^{(i)}$, $i = 1, \dots, (m-1)r$, such that
\[ m^{j/2}\,\psi^{(i)}(M^j x - k), \qquad i = 1,\dots,(m-1)r,\ j\in\mathbb{Z},\ k\in\mathbb{Z}^d, \]
forms an orthogonal basis for $L^2(\mathbb{R}^d)$. If $r = 1$ we call it a scalar scaling function or a scalar wavelet.

Unfortunately, the techniques of the one-dimensional setting cannot be applied to the higher-dimensional setting. The first reason is that, given a $d$-variable scaling function, it is difficult to construct $d$-variable wavelets. The second reason is that one cannot necessarily factor a multivariate trigonometric polynomial. The third reason is that it is hard to generalize the Fejér–Riesz lemma to multivariate trigonometric polynomials. Also, we lack tools to investigate their properties.

Cohen and Daubechies [8] constructed nonseparable orthonormal wavelets for the class of dilations with $m = |\det M| = 2$, with an arbitrary number of vanishing moments. These turned out to be discontinuous. In the same paper, however, they present an example of an arbitrarily smooth biorthogonal nonseparable wavelet basis for the quincunx dilation. Kovačević and Vetterli in [3] constructed a continuous compactly supported scaling function (the K-V scaling function) in $\mathbb{R}^2$, using the standard quincunx as the dilation matrix. Madych and Gröchenig in [5] constructed several nonseparable Haar-type scaling functions in $\mathbb{R}^d$ which are discontinuous characteristic functions of compact sets. It has also been shown by Belogay and Wang in [2] that there exists a family of compactly supported scaling functions on $\mathbb{R}^2$ with arbitrary smoothness that are refinable with respect to a matrix that gives the column lattice; see Section 3.1. He and Lai in [13] have several other examples.

Although many special multivariate nonseparable wavelets have been constructed, it is still an open problem how to construct multivariate compactly supported orthogonal wavelets for any given compactly supported scaling function. The construction of wavelets can be put in terms of a matrix completion problem [11], where the first row is given and we seek a paraunitary completion of it. In particular, if a wavelet basis exists, it is required to have $(m-1)r$ wavelet functions. It has been proved that under

certain conditions the completion can be done [1], [17], but no constructive method has been suggested so far. Even under these additional conditions, the completion does not necessarily preserve the orthogonality, the compactness of the support, or the regularity. In [17], Judith and Marc showed that in general it is not always possible to obtain wavelets that correspond to a given scaling function with desirable properties. They started with a particular continuous scaling function and showed that there is no continuous wavelet that goes with it.

Similarly, methods for computing moments and approximation orders in higher dimensions are not very clear. In the one-dimensional case, a wavelet has $p$ vanishing moments if and only if the corresponding scaling function has approximation order $p$, and the moments are fairly easy to compute. In the multivariate case the calculation of moments becomes much more complicated. Using the notation of C. Heil from [1], we have worked out an algorithm for computing moments in Chapter 5. The connection between vanishing moments and approximation order is likewise much more complex in the multivariate case. We have established theorems corresponding to the scalar results.

Lifting, a procedure for constructing wavelets with desired properties such as approximation order and symmetry from simpler wavelets, is well studied in the one-dimensional case [15], [16], and Keinert generalized this idea to the case of multiwavelets in his paper "Raising multiwavelet approximation order through lifting" [7]. The idea of lifting wavelets [15] for higher approximation order does apply to the higher-dimensional setting (Chapter 6). We have generalized the lifting procedure to multivariate wavelets and derived the conditions necessary to raise the approximation order.

In the scalar case, the polyphase matrix of any orthogonal wavelet can be factored into easier terms, either lifting factors or projection factors. Keinert recently wrote a book, Wavelets and Multiwavelets [6], where lifting and the projection factors are described in the multiwavelet setting. This does not seem to be possible in general in the multivariate case. While such factorizations may not be possible in general, it may be possible under some conditions on the dilation matrix and on the placement of the filter coefficients. This case needs to be investigated.

Generalizing the method of Lai [1] under some extra conditions, we show in Chapter 8 that the completion can be done so as to have compactly supported wavelets. The completion we obtain gives us wavelets with the same regularity as the scaling function, and the support remains compact, although it gets bigger than the support of the scaling function.

CHAPTER 2. Scalar Wavelet Theory

Definition 1. A refinable function is a function $\phi : \mathbb{R} \to \mathbb{C}$ which satisfies a two-scale refinement equation or recursion relation of the form
\[ \phi(x) = \sqrt{2}\sum_k h_k\,\phi(2x-k), \]
where the $\{h_k\}_{k\in\mathbb{Z}} \in \ell^2(\mathbb{Z})$ are called the recursion coefficients.

Our special interest is in functions that have compact support, which implies that the $\{h_k\}$ are finitely supported. The refinable function $\phi$ is called orthogonal if
\[ \langle \phi(x), \phi(x-k)\rangle = \delta_{0k}, \qquad k\in\mathbb{Z}. \]
Two refinable functions $\phi$ and $\tilde\phi$ are called biorthogonal if
\[ \langle \phi(x), \tilde\phi(x-k)\rangle = \delta_{0k}, \qquad k\in\mathbb{Z}. \]
We also call $\tilde\phi$ the dual of $\phi$.

Example 1: The Haar scaling function $h$:
\[ \phi(x) = \begin{cases} 1 & \text{if } 0 \le x < 1, \\ 0 & \text{otherwise.} \end{cases} \]
The recursion coefficients for $h$ are $h_0 = h_1 = 1/\sqrt{2}$.

Example 2: The scaling function for the Daubechies wavelet with two vanishing moments, db2. The recursion coefficients are given by
\[ h_0 = \frac{1+\sqrt{3}}{4\sqrt{2}}, \quad h_1 = \frac{3+\sqrt{3}}{4\sqrt{2}}, \quad h_2 = \frac{3-\sqrt{3}}{4\sqrt{2}}, \quad h_3 = \frac{1-\sqrt{3}}{4\sqrt{2}}. \]

Theorem 1. A necessary condition for orthogonality is
\[ \sum_k h_k h^*_{k-2l} = \delta_{0l}, \tag{2.1} \]
where the $*$ denotes the complex conjugate. Similarly, a necessary condition for biorthogonality is
\[ \sum_k h_k \tilde h^*_{k-2l} = \delta_{0l}. \]
Proof: This is proved in [6].

However, this condition is not sufficient to ensure orthogonality. For example, $\phi(x) = \phi(2x) + \phi(2x-3)$ satisfies the orthogonality condition (2.1), but its solution $\phi(x) = \chi_{[0,3]}$ does not have orthogonal integer translates. There are several sufficient conditions, for example the convergence of the cascade algorithm [6], [9] (see the next section).

2.1 Computing Point Values

The cascade algorithm is fixed point iteration applied to the refinement equation. It can be used to find approximate point values.

Definition 2. The cascade algorithm consists of selecting a suitable starting function $\phi^{(0)}(x) \in L^2$ and then producing a sequence of functions
\[ \phi^{(n+1)}(x) = \sqrt{2}\sum_k h_k\,\phi^{(n)}(2x-k). \]

Theorem 2. If the cascade algorithm converges for both $\phi$ and $\tilde\phi$, then the necessary condition for orthogonality is also sufficient.
Proof: See [1].

The point values of the scaling function can also be obtained by solving an eigenvalue problem. It usually works for continuous $\phi$ but may fail in some cases. The refinement equation at the integer points is equivalent to an eigenvalue problem $\phi = T\phi$.
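As a small numerical illustration of this eigenvalue approach (a NumPy sketch; the example that follows works the same computation out by hand for db2, and the normalization $\phi(1)+\phi(2)=1$ used there is assumed here):

```python
import numpy as np

# db2 recursion coefficients, normalized so that sum(h) = sqrt(2)
s3 = np.sqrt(3)
h = np.array([1 + s3, 3 + s3, 3 - s3, 1 - s3]) / (4 * np.sqrt(2))

# phi(x) = sqrt(2) * sum_k h_k phi(2x - k) at the interior integers x = 1, 2
# of supp(phi) = [0, 3] gives the 2x2 eigenvalue problem phi = T phi:
T = np.sqrt(2) * np.array([[h[1], h[0]],
                           [h[3], h[2]]])

w, v = np.linalg.eig(T)
phi = v[:, np.argmin(np.abs(w - 1))].real   # eigenvector for eigenvalue 1
phi /= phi.sum()                            # normalize phi(1) + phi(2) = 1

print(phi)   # approx [ 1.366, -0.366 ] = [(1+sqrt(3))/2, (1-sqrt(3))/2]
```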

Example: The Daubechies scaling function db2 has support $[0,3]$. The recursion relation at the integer points in the support leads to
\[ \begin{bmatrix}\phi(0)\\ \phi(1)\\ \phi(2)\\ \phi(3)\end{bmatrix}
 = \sqrt{2}\begin{bmatrix} h_0 & 0 & 0 & 0\\ h_2 & h_1 & h_0 & 0\\ 0 & h_3 & h_2 & h_1\\ 0 & 0 & 0 & h_3\end{bmatrix}
   \begin{bmatrix}\phi(0)\\ \phi(1)\\ \phi(2)\\ \phi(3)\end{bmatrix}. \]
Since $h_0$, $h_3$ are not $1/\sqrt{2}$, we know that $\phi(0) = \phi(3) = 0$, and the above problem reduces to
\[ \begin{bmatrix}\phi(1)\\ \phi(2)\end{bmatrix} = \sqrt{2}\begin{bmatrix} h_1 & h_0\\ h_3 & h_2\end{bmatrix}\begin{bmatrix}\phi(1)\\ \phi(2)\end{bmatrix}. \]
The solution normalized to $\phi(1) + \phi(2) = 1$ is
\[ \phi(1) = \frac{1+\sqrt{3}}{2}, \qquad \phi(2) = \frac{1-\sqrt{3}}{2}. \]
Then we can use the refinement equation to obtain values at half integers, quarter integers, and so on.

2.2 Multiresolution Analysis

Multiresolution analysis (MRA) forms the most important concept for the construction of the scaling function and wavelets, and for the development of the algorithms. Multiresolution analysis can be viewed as a sequence of approximations of a given function at different resolutions.

Definition 3. A multiresolution analysis on $\mathbb{R}$ is a doubly infinite nested sequence of subspaces $\{V_j\}$ of $L^2(\mathbb{R})$,
\[ \cdots \subset V_{-1} \subset V_0 \subset V_1 \subset \cdots, \]
with the properties
(i) $\mathrm{clos}_{L^2}\big(\bigcup_{j\in\mathbb{Z}} V_j\big) = L^2(\mathbb{R})$;
(ii) $\bigcap_{j\in\mathbb{Z}} V_j = \{0\}$;
(iii) $\phi(x) \in V_j$ if and only if $\phi(2x) \in V_{j+1}$;
(iv) $\phi(x) \in V_j \implies \phi(x - 2^{-j}k) \in V_j$ for all $j$ and $k\in\mathbb{Z}$;
(v) There exists a function $\phi(x) \in L^2(\mathbb{R})$, called the scaling function, such that $\{\phi(x-k) : k\in\mathbb{Z}\}$

forms a Riesz basis for $V_0$. That is, for every $f \in V_0$ there exists a unique sequence $\{\alpha_n\}_{n\in\mathbb{Z}}$ such that
\[ f(x) = \sum_{n\in\mathbb{Z}} \alpha_n\,\phi(x-n) \]
with convergence in $L^2(\mathbb{R})$, and
\[ A\sum_{n\in\mathbb{Z}}|\alpha_n|^2 \;\le\; \Big\|\sum_{n\in\mathbb{Z}}\alpha_n\,\phi(x-n)\Big\|^2 \;\le\; B\sum_{n\in\mathbb{Z}}|\alpha_n|^2, \]
with $0 < A \le B < \infty$ constants independent of $f \in V_0$.

[Figure 2.1: Haar scaling function and Haar wavelet.]

Notation: $\phi_{jk}(x) = 2^{j/2}\phi(2^j x - k)$.

Since $V_0 \subset V_1$, and since $\{\phi_{1k}(x)\}_{k\in\mathbb{Z}}$ is a basis for $V_1$,
\[ \phi(x) = \sqrt{2}\sum_k h_k\,\phi(2x-k) \]
for some $h_k$. That is, $\phi$ is refinable.

Example: The Haar scaling function $h$ and the Daubechies scaling function db2 both define MRAs and have compact support.

Suppose that $\phi \in L^2(\mathbb{R})$ is a scaling function which generates an MRA $\{V_j\}$. One can show under some mild conditions that there exists a dual scaling function $\tilde\phi \in L^2(\mathbb{R})$ which

satisfies the biorthogonality relations
\[ \langle \phi(x), \tilde\phi(x-k)\rangle = \delta_{0k}, \qquad k\in\mathbb{Z}, \]
and which generates a dual MRA $\{\tilde V_j\}$.

[Figure 2.2: Scaling function for the db2 wavelet, and the db2 wavelet.]

The wavelet space $W_0$ (resp. $\tilde W_0$) is the complement of $V_0$ (resp. $\tilde V_0$) in $V_1$ (resp. $\tilde V_1$), such that
\[ V_0 \cap W_0 = \{0\},\quad V_1 = V_0 \oplus W_0 \qquad\text{and}\qquad \tilde V_0 \cap \tilde W_0 = \{0\},\quad \tilde V_1 = \tilde V_0 \oplus \tilde W_0. \]
Under mild conditions, the space $W_0$ (resp. $\tilde W_0$) is generated by the integer translates of a function $\psi \in L^2(\mathbb{R})$ (resp. $\tilde\psi \in L^2(\mathbb{R})$). $\psi$ (resp. $\tilde\psi$) is called the wavelet (dual wavelet) function. It satisfies a two-scale relation
\[ \psi(x) = \sqrt{2}\sum_k g_k\,\phi(2x-k) \qquad\text{or}\qquad \tilde\psi(x) = \sqrt{2}\sum_k \tilde g_k\,\tilde\phi(2x-k) \]
for some $g_k$ or $\tilde g_k$.

Note: We also call $\{h_k\}$ the scaling filter or the lowpass filter, and $\{g_k\}$ the wavelet filter or the highpass filter.

Example: The Haar wavelet $h$:
\[ \psi(x) = \begin{cases} 1 & \text{if } 0 \le x < 1/2, \\ -1 & \text{if } 1/2 \le x < 1, \\ 0 & \text{otherwise.} \end{cases} \]
This obeys the following two-scale relations:
\[ \phi(x) = \phi(2x) + \phi(2x-1), \qquad \psi(x) = \phi(2x) - \phi(2x-1). \]
Here $h_0 = h_1 = 1/\sqrt{2}$, $g_0 = 1/\sqrt{2}$ and $g_1 = -1/\sqrt{2}$.

We summarize the properties of orthogonal wavelets as follows.

Theorem 3. Let $\{V_j\}$ be an orthogonal MRA with scaling filter $h_k$ and wavelet filter $g_k$. Then
(i) $\sum_k h_k = \sqrt{2}$;
(ii) $\sum_k g_k = 0$;
(iii) $\sum_k h_k h^*_{k-2n} = \sum_k g_k g^*_{k-2n} = \delta_{0n}$;
(iv) $\sum_k g_k h^*_{k-2n} = 0$ for all $n\in\mathbb{Z}$;
(v) $\sum_k h_{m-2k} h^*_{n-2k} + \sum_k g_{m-2k} g^*_{n-2k} = \delta_{nm}$.

Note: Condition (i) is referred to as a normalization condition. Conditions (iii) and (iv) are referred to as orthogonality conditions. Condition (v) is referred to as the perfect reconstruction condition.
Proof: This is proved in [14].

2.3 Decomposition and Reconstruction

Given a function $f(x) \in L^2(\mathbb{R})$, define for $k, j \in \mathbb{Z}$
\[ c_{jk} = \langle f, \phi_{jk}\rangle \qquad\text{and}\qquad d_{jk} = \langle f, \psi_{jk}\rangle. \]
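Before using these coefficients, the conditions of Theorem 3 can be checked numerically for the db2 filter. The wavelet filter below is the standard alternating-flip choice, which is an assumption of this sketch since the text has not fixed a particular $g$ for db2:

```python
import numpy as np

s3, r2 = np.sqrt(3), np.sqrt(2)
h = np.array([1 + s3, 3 + s3, 3 - s3, 1 - s3]) / (4 * r2)
g = np.array([-h[3], h[2], -h[1], h[0]])        # alternating-flip wavelet filter (assumed)

def corr(a, b, n):
    """sum_k a_k * conj(b_{k-2n}), filters indexed from 0."""
    return sum(a[k] * np.conj(b[k - 2 * n]) for k in range(len(a)) if 0 <= k - 2 * n < len(b))

print(np.isclose(h.sum(), r2), np.isclose(g.sum(), 0.0))            # (i), (ii)
print([np.isclose(corr(h, h, n), float(n == 0)) for n in (0, 1)])   # (iii) for h
print([np.isclose(corr(g, g, n), float(n == 0)) for n in (0, 1)])   # (iii) for g
print([np.isclose(corr(g, h, n), 0.0) for n in (-1, 0, 1)])         # (iv)
```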

Since we have $V_1 = V_0 \oplus W_0$, at level $n$ we have
\[ V_n = V_{n-1}\oplus W_{n-1} = V_{n-2}\oplus W_{n-2}\oplus W_{n-1} = \cdots = V_0 \oplus W_0 \oplus W_1 \oplus \cdots \oplus W_{n-1}. \]
We have the following theorem.

Theorem 4.
\[ V_n = V_0 \oplus \bigoplus_{k=0}^{n-1} W_k, \]
which implies that
\[ L^2(\mathbb{R}) = V_0 \oplus \bigoplus_{k=0}^{\infty} W_k. \]
Proof: This is proved in [6].

Theorem 5. Let $\{V_j\}$ be an orthogonal MRA with scaling function $\phi$ and wavelet $\psi$. Let $h_k$ and $g_k$ be the corresponding recursion coefficients. Then the decomposition relations are given by
\[ c_{j-1,k} = \sum_n c_{jn}\,h^*_{n-2k}, \qquad d_{j-1,k} = \sum_n c_{jn}\,g^*_{n-2k}, \]
and the reconstruction relations are given by
\[ c_{jk} = \sum_n c_{j-1,n}\,h_{k-2n} + \sum_n d_{j-1,n}\,g_{k-2n}. \]
Proof: This is proved in [14].

Definition 4. Let $c_k$ be the signal. The downsampling operator is defined by
\[ (\downarrow c)_k = c_{2k}. \]
The upsampling operator is defined by
\[ (\uparrow c)_k = \begin{cases} c_{k/2} & \text{if } k \text{ is even}, \\ 0 & \text{if } k \text{ is odd}. \end{cases} \]
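A minimal sketch of Theorem 5 and Definition 4 in NumPy, using periodic boundary handling (an assumption of this sketch, since the text works with bi-infinite sequences); it decomposes a signal one level and reconstructs it exactly with the db2 filters:

```python
import numpy as np

def decompose(c, h, g):
    # c_{j-1,k} = sum_n c_{j,n} conj(h_{n-2k}),   d_{j-1,k} = sum_n c_{j,n} conj(g_{n-2k})
    N, L = len(c), len(h)
    low  = [sum(c[(2 * k + m) % N] * np.conj(h[m]) for m in range(L)) for k in range(N // 2)]
    high = [sum(c[(2 * k + m) % N] * np.conj(g[m]) for m in range(L)) for k in range(N // 2)]
    return np.array(low), np.array(high)

def reconstruct(low, high, h, g):
    # c_{j,k} = sum_n low_n h_{k-2n} + sum_n high_n g_{k-2n}
    N = 2 * len(low)
    c = np.zeros(N, dtype=complex)
    for n in range(len(low)):
        for m in range(len(h)):
            c[(2 * n + m) % N] += low[n] * h[m] + high[n] * g[m]
    return c

s3, r2 = np.sqrt(3), np.sqrt(2)
h = np.array([1 + s3, 3 + s3, 3 - s3, 1 - s3]) / (4 * r2)
g = np.array([-h[3], h[2], -h[1], h[0]])          # assumed alternating-flip wavelet filter

c = np.random.randn(8)
low, high = decompose(c, h, g)
print(np.allclose(reconstruct(low, high, h, g).real, c))   # True: perfect reconstruction
```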

The downsampling is obtained by removing every odd-indexed entry of $c_k$, and the upsampling is obtained by inserting a zero between adjacent entries of $c_k$. Decomposition can be thought of as convolution with the filters $h^*_{-n}$ and $g^*_{-n}$, respectively, followed by downsampling, that is,
\[ c_{0k} = \big(\downarrow(c_{1n} * h^*_{-n})\big)_k \qquad\text{and}\qquad d_{0k} = \big(\downarrow(c_{1n} * g^*_{-n})\big)_k, \]
and the reconstruction can be thought of as upsampling followed by convolution with $h_k$ and $g_k$:
\[ c_{1k} = \big((\uparrow c_{0n}) * h\big)_k + \big((\uparrow d_{0n}) * g\big)_k. \]

2.4 Symbol, Modulation Matrix and Polyphase Matrix

Definition 5. The symbol of a refinable function is the trigonometric polynomial
\[ H(\xi) = \frac{1}{\sqrt{2}}\sum_k h_k e^{-ik\xi}. \]
By Theorem 3(i), $H(0) = 1$. The orthogonality conditions (Theorem 3) are equivalent to
\[ \begin{aligned} |H(\xi)|^2 + |H(\xi+\pi)|^2 &= 1, \\ |G(\xi)|^2 + |G(\xi+\pi)|^2 &= 1, \\ H(\xi)G^*(\xi) + H(\xi+\pi)G^*(\xi+\pi) &= 0. \end{aligned} \tag{2.2} \]
Together these conditions are known as the quadrature mirror filter (QMF) conditions. The biorthogonality conditions turn out to be
\[ \begin{aligned} H(\xi)\tilde H^*(\xi) + H(\xi+\pi)\tilde H^*(\xi+\pi) &= 1, \\ G(\xi)\tilde G^*(\xi) + G(\xi+\pi)\tilde G^*(\xi+\pi) &= 1, \\ H(\xi)\tilde G^*(\xi) + H(\xi+\pi)\tilde G^*(\xi+\pi) &= 0. \end{aligned} \tag{2.3} \]
The recursion relations in the frequency domain can be written as
\[ \hat\phi(\xi) = H(\xi/2)\,\hat\phi(\xi/2), \]

where
\[ \hat\phi(\xi) = \int \phi(x)\,e^{-ix\xi}\,dx \]
is the Fourier transform of $\phi$. Likewise,
\[ \hat\psi(\xi) = G(\xi/2)\,\hat\phi(\xi/2), \qquad\text{where}\quad G(\xi) = \frac{1}{\sqrt{2}}\sum_k g_k e^{-ik\xi}. \]
$H(\xi)$ and $G(\xi)$ are the symbols for the scaling function and the wavelet, respectively.

Definition 6. The matrix
\[ M(\xi) = \begin{bmatrix} H(\xi) & H(\xi+\pi) \\ G(\xi) & G(\xi+\pi) \end{bmatrix} \]
is called the modulation matrix.

Definition 7. We define the polyphase symbols as
\[ H_0(z) = \sum_k h_{2k} z^k \qquad\text{and}\qquad H_1(z) = \sum_k h_{2k+1} z^k. \]
Note that $H(z) = \frac{1}{\sqrt{2}}\big(H_0(z^2) + z H_1(z^2)\big)$, where $z = e^{-i\xi}$. The polyphase symbols of the corresponding wavelet are defined similarly.

Definition 8. The matrix
\[ P(\xi) = \begin{bmatrix} H_0(\xi) & H_1(\xi) \\ G_0(\xi) & G_1(\xi) \end{bmatrix} \]
is called the polyphase matrix. Conditions (2.2) are equivalent to
\[ M(\xi)M(\xi)^* = P(\xi)P(\xi)^* = I. \]

Definition 9. A trigonometric matrix polynomial $A(\xi)$ is called paraunitary if $A(\xi)A(\xi)^* = A(\xi)^*A(\xi) = I$.

Note: $M$ being paraunitary is equivalent to $P$ being paraunitary.
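The first of the QMF conditions (2.2) involves only $H$, so it can be verified directly for db2 from Definition 5; a small numerical sketch:

```python
import numpy as np

s3, r2 = np.sqrt(3), np.sqrt(2)
h = np.array([1 + s3, 3 + s3, 3 - s3, 1 - s3]) / (4 * r2)
k = np.arange(4)
H = lambda xi: (h * np.exp(-1j * k * xi)).sum() / r2      # symbol H(xi) from Definition 5

for xi in np.linspace(0, np.pi, 7):
    assert np.isclose(abs(H(xi))**2 + abs(H(xi + np.pi))**2, 1.0)   # first condition in (2.2)
print("db2 satisfies |H(xi)|^2 + |H(xi+pi)|^2 = 1")
```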

2.5 Moments

Definition 10. The $j$th continuous moments of $\phi$ and $\psi$ are defined by
\[ \mu_j = \int x^j\,\phi(x)\,dx, \qquad \nu_j = \int x^j\,\psi(x)\,dx. \]
Any wavelet $\psi$ that comes from an MRA must satisfy $\nu_0 = 0$; this is the zeroth moment of $\psi$,
\[ \int \psi(x)\,dx = 0. \]

Definition 11. The $j$th discrete moments are defined by
\[ m_j = \frac{1}{\sqrt{2}}\sum_k k^j h_k, \qquad n_j = \frac{1}{\sqrt{2}}\sum_k k^j g_k. \]
Note: $m_0 = 1$.

One can show that
\[ \mu_j = 2^{-j}\sum_{s=0}^{j}\binom{j}{s}\,m_{j-s}\,\mu_s \tag{2.4} \]
and
\[ \nu_j = 2^{-j}\sum_{s=0}^{j}\binom{j}{s}\,n_{j-s}\,\mu_s. \]
In particular, we can choose $\mu_0$ to be an arbitrary nonzero number. The remaining $\mu_j$, $\nu_j$ are then uniquely defined.

2.6 Approximation Order and Accuracy

We say that a refinable function has approximation order $N$ if, for $0 \le k \le N-1$, the polynomial $x^k$ can be reproduced exactly as a linear combination of its integer shifts. It turns out that if the dual wavelet $\tilde\psi$ has $N$ vanishing moments, then the scaling function $\phi$ has approximation order $N$, as given by the following theorem.

Theorem 6. Let $\phi(x)$ be a compactly supported scaling function associated with an MRA. Let $\tilde\psi(x)$ be the dual wavelet. Then for each $N$ the following are equivalent:
(i) $\int_{\mathbb{R}} x^k\,\tilde\psi(x)\,dx = 0$ for $0 \le k \le N-1$;
(ii) $\sum_n \tilde g_n\,n^k = 0$ for $0 \le k \le N-1$;
(iii) $H(\xi)$ can be factored as
\[ H(\xi) = \left(\frac{1+e^{-i\xi}}{2}\right)^{N} L(\xi) \]
for some $2\pi$-periodic trigonometric polynomial $L(\xi)$;
(iv) $H^{(k)}(\pi) = 0$ for $0 \le k \le N-1$.

Note: The conditions in this theorem are necessary conditions for approximation order $N$. They are sufficient if the cascade algorithm converges.
Proof: This is proved in [6].

Example: $h$ and db2, as explained earlier, have approximation orders one and two, respectively. The symbol for the scaling function $h$ factors as
\[ H(\xi) = \frac{1+e^{-i\xi}}{2}, \]
where $L(\xi) = 1$, and the symbol for the Daubechies scaling function db2 factors as
\[ H(\xi) = \left(\frac{1+e^{-i\xi}}{2}\right)^{2}\left(\frac{1+\sqrt{3}}{2} + \frac{1-\sqrt{3}}{2}\,e^{-i\xi}\right). \]

2.7 Lifting

Using the lifting scheme, one can start with a very simple or trivial multiresolution analysis and gradually work one's way up to a multiresolution analysis with particular properties. It is one of the most elegant ways of generating a biorthogonal MRA.

Definition 12. A filter pair $(H, G)$ is complementary if the corresponding polyphase matrix $P(z)$ has determinant 1.

Theorem 7. Let $(H, G)$ be complementary; then any other finite filter $G^{\mathrm{new}}$ complementary to $H$ is of the form
\[ G^{\mathrm{new}}(z) = G(z) + H(z)\,s(z^2), \]

where $s(z)$ is a Laurent polynomial. Conversely, any filter of this form is complementary to $H$.
Proof: This is proved in [15].

The new polyphase matrix can be written as
\[ P^{\mathrm{new}}(z) = \begin{bmatrix} 1 & 0 \\ s(z) & 1 \end{bmatrix} P(z). \]
This creates a new dual scaling function, whose filter is given by the dual polyphase matrix
\[ \tilde P^{\mathrm{new}}(z) = \begin{bmatrix} 1 & -s_*(z) \\ 0 & 1 \end{bmatrix}\tilde P(z), \]
which implies that the new wavelet, dual scaling function and dual wavelet are given by
\[ G^{\mathrm{new}}(z) = G(z) + H(z)\,s(z^2), \qquad \tilde H^{\mathrm{new}}(z) = \tilde H(z) - \tilde G(z)\,s_*(z^2), \qquad \tilde G^{\mathrm{new}}(z) = \tilde G(z), \]
where $s_*(z) = \overline{s(\bar z^{-1})}$. The scaling function here does not change at all, while the wavelet and the dual scaling function change according to the above relations. The dual wavelet also changes, but in a less fundamental way than the wavelet and the scaling function. More precisely, the dual wavelet changes because the dual scaling function from which it is built changes, while its filter coefficients remain exactly the same.

The power behind the lifting scheme is that through $s(z)$ we have full control over all wavelets and dual functions that can be built from a particular scaling function. This means that we can start from a simple or trivial set of biorthogonal functions and use suitable $s(z)$ so that, after lifting, the wavelet has desirable properties. The lifting scheme can also be used for the dual scaling function and the wavelet in a similar way; this is called dual lifting. Lifting and dual lifting can be iterated to get an MRA with desired properties.

Example: Lifting the Haar wavelet $h$. We start from the Haar wavelet and try to use the lifting scheme to increase the number of vanishing moments of the wavelet from one to two. Initially
\[ H(z) = \tilde H(z) = \frac{1+z}{2} \]

and
\[ G(z) = \tilde G(z) = \frac{1-z}{2}. \]
After lifting we get $G^{\mathrm{new}}(z) = G(z) + H(z)\,s(z^2)$; in terms of $\xi$ (with $z = e^{-i\xi}$) this reads $G^{\mathrm{new}}(\xi) = G(\xi) + H(\xi)\,s(2\xi)$. We need $G^{\mathrm{new}}(0) = 0$ for one vanishing moment, which implies that $s(0) = 0$. For two vanishing moments we have to have $G^{\mathrm{new}}(0) = G^{\mathrm{new}\,\prime}(0) = 0$. Working with $\xi$ for simplicity,
\[ G^{\mathrm{new}\,\prime}(0) = G'(0) + H'(0)\,s(0) + 2H(0)\,s'(0) = 0, \]
or
\[ s'(0) = -\frac{G'(0)}{2H(0)} = -\frac{i}{4}. \]
We can choose (for symmetry) $s(\xi) = -\frac{i}{4}\sin\xi$. Thus the new wavelet symbol under lifting can be written as
\[ G^{\mathrm{new}}(\xi) = \frac{1-e^{-i\xi}}{2} - \frac{i\sin 2\xi}{8}\big(1+e^{-i\xi}\big) = \frac{1-e^{-i\xi}}{2} - \frac{e^{2i\xi}-e^{-2i\xi}}{16} - \frac{e^{i\xi}-e^{-3i\xi}}{16}, \]
or
\[ G^{\mathrm{new}}(z) = -\tfrac{1}{16}z^{-2} - \tfrac{1}{16}z^{-1} + \tfrac{1}{2} - \tfrac{1}{2}z + \tfrac{1}{16}z^{2} + \tfrac{1}{16}z^{3}. \]
The corresponding dual scaling symbol $\tilde H^{\mathrm{new}}$ can be written as
\[ \tilde H^{\mathrm{new}}(z) = -\tfrac{1}{16}z^{-2} + \tfrac{1}{16}z^{-1} + \tfrac{1}{2} + \tfrac{1}{2}z + \tfrac{1}{16}z^{2} - \tfrac{1}{16}z^{3}. \]

2.8 Factorization

Various techniques to factor existing wavelet filters into basic building blocks are known. For example, it is known that every polyphase matrix in one dimension factors into lifting factors, viz.
\[ P(z) = \left(\prod_{i=1}^{m}\begin{bmatrix} 1 & s_i(z) \\ 0 & 1 \end{bmatrix}\begin{bmatrix} 1 & 0 \\ t_i(z) & 1 \end{bmatrix}\right)\begin{bmatrix} k & 0 \\ 0 & 1/k \end{bmatrix}, \]

where $s_i(z)$ and $t_i(z)$ are Laurent polynomials and $k$ is a constant [15]. It is also known that the polyphase matrix of every orthogonal wavelet can be factored [6] in the form
\[ P(z) = Q\,F_1(z)\cdots F_n(z), \]
where $Q$ is a constant orthogonal matrix and each $F_k$ is a projection factor of the form
\[ F(z) = (I - uu^*) + uu^*z \]
for some unit vector $u$.

2.9 Completion

The completion problem for a wavelet is the problem of finding the corresponding wavelet, given the scaling function. This is equivalent to completing the polyphase matrix to a paraunitary matrix when the first row is given. Let $\Delta$ be the determinant of $P$. Since $PP^* = I$,
\[ P = \begin{bmatrix} H_0 & H_1 \\ G_0 & G_1 \end{bmatrix}, \qquad P^{-1} = \frac{1}{\Delta}\begin{bmatrix} G_1 & -H_1 \\ -G_0 & H_0 \end{bmatrix} = P^*, \]
so
\[ G_0 = -\Delta H_1^*, \qquad G_1 = \Delta H_0^*. \]
Since the determinant must be a monomial for compactly supported wavelets, we have $\Delta = \alpha z^k$, $|\alpha| = 1$. Thus the matrix completion problem in the one-dimensional case can always be solved. In short, given $H_0$, $H_1$ we can always find $G_0$, $G_1$ such that $P$ is paraunitary.

Example: Daubechies scaling function db2. Given the scaling function and its symbol
\[ H(\xi) = \frac{1}{\sqrt{2}}\big(h_0 + h_1 z + h_2 z^2 + h_3 z^3\big), \]
the polyphase matrix has the following paraunitary completion:
\[ P = \begin{bmatrix} h_0 + h_2 z & h_1 + h_3 z \\ -(h_1 + h_3 z^{-1}) & h_0 + h_2 z^{-1} \end{bmatrix}, \]

or, equivalently, the following paraunitary completion, which keeps the same support:
\[ P = \begin{bmatrix} h_0 + h_2 z & h_1 + h_3 z \\ h_3 + h_1 z & -(h_2 + h_0 z) \end{bmatrix}. \]
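A quick numerical check (a sketch, not part of the text) that the first of the two db2 completions above is indeed paraunitary on $|z| = 1$:

```python
import numpy as np

s3, r2 = np.sqrt(3), np.sqrt(2)
h = np.array([1 + s3, 3 + s3, 3 - s3, 1 - s3]) / (4 * r2)

def P(z):
    # first paraunitary completion from the db2 example
    return np.array([[h[0] + h[2] * z,      h[1] + h[3] * z],
                     [-(h[1] + h[3] / z),   h[0] + h[2] / z]])

for z in np.exp(1j * np.random.uniform(0, 2 * np.pi, 5)):
    assert np.allclose(P(z) @ P(z).conj().T, np.eye(2))
print("P(z) is paraunitary on the unit circle")
```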

CHAPTER 3. The Multivariate Setting

3.1 Lattices

A matrix $M$ is said to be a dilation matrix if it has integer entries and all of its eigenvalues are greater than one in absolute value. A linear combination of $N$ vectors $x_1, x_2, \dots, x_N$ is an expression of the form $\sum_{i=1}^{N} c_i x_i$, where the $c_i$ are real numbers. The $c_i$ are called coefficients. The set of vectors $x_1, x_2, \dots, x_N$ is said to be linearly independent if $\sum_{i=1}^{N} c_i x_i = 0$ implies $c_i = 0$ for all $i$. If the set of vectors $x_1, x_2, \dots, x_d$ is linearly independent, then the totality of the vectors of the form $\{\sum_{i=1}^{d} n_i x_i : n_i \in \mathbb{Z}\}$ is called a $d$-dimensional lattice. We denote it by $\Gamma$. In such a case $M = [x_1\ x_2\ \cdots\ x_d]$ is called the sampling matrix [19] and is said to generate the lattice. If $M$ is the identity matrix, then each $x_i$ is a unit vector pointing in the $i$th direction, and the resulting lattice $\Gamma$ is $\mathbb{Z}^d$.

Note: Several different dilation matrices $M$ may produce the same lattice.

Definition 13. Given a dilation matrix $M$, the fundamental parallelepiped of the lattice $\Gamma$ is defined by
\[ F(M) = \{ y \in \mathbb{R}^d : y = Mx \text{ for some } x \in [0,1)^d \}. \]
The fundamental parallelepiped depends on the matrix $M$, not just on the lattice $\Gamma$.

Let $\Gamma$ be some lattice and consider the group $\Gamma / M\Gamma$ of order $m = |\det M|$. A complete set of representatives of this group is called the digit set and is denoted by $D = \{d_0, d_1, \dots, d_{m-1}\}$. More precisely, we take $D = \Gamma \cap F$, and then $\bigcup_{d\in D}(M\Gamma + d) = \mathbb{Z}^d$.

Example 1: Standard quincunx. The dilation matrix
\[ M = \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix} \]

gives rise to the quincunx lattice shown in Figure 3.1 (left), with digit set $\{(0,0)^t, (1,0)^t\}$.

[Figure 3.1: Standard and nonstandard quincunx lattices, and the column lattice.]

Example 2: Nonstandard quincunx. The dilation matrix
\[ M = \begin{bmatrix} 1 & -1 \\ 1 & 1 \end{bmatrix} \]
gives rise to the same lattice as before, but the digit set is $\{(0,0)^t, (0,1)^t\}$.

Example 3: Column lattice. The dilation matrix
\[ M = \begin{bmatrix} 2 & 0 \\ 0 & 1 \end{bmatrix} \]
gives rise to the column sublattice shown in Figure 3.1 (right), with digit set $\{(0,0)^t, (1,0)^t\}$.

3.2 Refinable Vector Functions

Let $M$ be a fixed dilation matrix associated with the lattice $\Gamma \subseteq \mathbb{Z}^d$. The equation
\[ \phi(x) = \sqrt{m}\sum_{k\in\Lambda} h_k\,\phi(Mx-k), \qquad x\in\mathbb{R}^d, \tag{3.1} \]
where $\Lambda$ is some finite subset of $\Gamma$ and the $h_k$ are fixed $r\times r$ matrices, is called the refinement equation. A solution of the refinement equation is called a vector scaling function or refinable vector function; $r$ is called the multiplicity of $\phi$. If the matrix $\frac{1}{\sqrt{m}}\sum_{k\in\Lambda} h_k$ has eigenvalues

λ 1 = 1 and λ λ r < 1 then there exists a non-zero compactly supported distribution φ(x = (φ 1 (x φ (x φ r (x t satisfying the refinement equation (31 [1 Furthermore ˆφ(ξ is a continuous vector function and ˆφ( where ˆφ is the Fourier transform of φ When r = 1 we say that (31 is a single function refinement equation φ is orthogonal if < φ(x j φ(x k >= δ jk I where I is the r r identity matrix If f and g are r 1 vector functions the inner product is defined as the following: f1 g 1 f1 g r < f g >= fg = f g1 f3 g1 fr g1 f gr f3 gr fr gr The * denotes the complex conjugate transpose Let φ(x be a refinable vector function Using the notation φ nj (x = m n φ(m n x j we get Therefore φ(m n x j = m 1 h k φ(m(m n x j k φ nj (x = m n+1 k Λ = m 1 h k φ(m n+1 x Mj k k Λ h k φ(m n+1 x Mj k k Λ = k Λ h k φ n+1mj+k (x Let Mj + k = p and define h p = for p / Λ Then φ nj (x = p Γ h p Mj φ n+1p (x (3 Lemma 1 If φ is refinable and orthogonal we have δ jk I = p h p Mj h p Mk (33

3 18 16 14 1 1 8 6 4 5 1 15 5 3 Figure 3 K Λ corresponding to standard quincunx Proof: Using orthogonality and the relation (3 δ jk I = < φ(x j φ(x k > = < p h p Mj φ 1p q h q Mk φ 1q > = h p Mj < φ 1p φ 1q > h q Mk pq = p h p Mj h p Mk However these are only necessary conditions Sufficient conditions can be found in [ and [3 Example: Let H denote the scaling function corresponding to M = [ 1 1 1 1 {( Λ = ( 1 } with the coefficients of Haar scaling function h = h 1 = 1 The scaling function is twodimensional Haar scaling function φ = χ s where S is as shown in fig 3 Example:Let M be the standard quincunx and let Λ = {( ( ( ( 1 3 }

4 and let unspecified coefficients h h 1 h h 3 be placed at the corresponding positions Then the orthogonality conditions are h + h 1 + h + h 3 = 1 h h + h 1 h 3 = In this case the coefficients of the scalar Daubechies wavelet with two vanishing moments will work Let DB denote the scaling function corresponding to M = [ 1 1 1 1 {( Λ = ( 1 ( ( 3 } with the coefficients of Daubechies scaling function db The corresponding scaling function is shown in fig 34 Example: Let M be the standard quincunx and let Λ = {( 1 1 ( 1 ( ( 1 ( ( 3 ( 1 1 ( 1 } where the unspecified coefficients h i are placed as shown: h 6 h 7 h h 3 h 4 h 5 h h 1 The orthogonality conditions are h + h 1 + h + h 3 + h 4 + h 5 + h 6 + h 7 = 1 h h + h 1 h 3 + h 4 h 6 + h 5 h 7 = h h 4 + h 1 h 5 + h h 6 + h 3 h 7 = h h 4 + h 3 h 5 = h h 6 + h 1 h 7 =

5 The two-particular family of solutions of the above equations is h = h 1 = h = h 3 = h 4 = h 5 = h 6 = h 7 = µ(1 ν + µ + µν (µ + 1(ν + 1 µ(1 + ν µ + µν (µ + 1(ν + 1 µν(1 + ν µ + µν (µ + 1(ν + 1 µν(1 ν + µ + µν (µ + 1(ν + 1 ν(1 ν + µ + µν (µ + 1(ν + 1 ν(1 + ν µ + µν (µ + 1(ν + 1 (1 + ν µ + µν (µ + 1(ν + 1 (1 ν + µ + µν (µ + 1(ν + 1 The corresponding scaling functions aew known as K-V scaling functions after Kovacević- Vetterli [3 33 Support Our interest is in compactly supported scaling functions which implies that {h k } k Λ are finitely supported For k Z d let w k : R d R d be the affine map w k (x = M 1 (x + k Let H(R d be the space of all non-empty compact subsets of R d The Hausdorff metric h( is defined by h(b C = inf{ɛ > : B C ɛ and C B ɛ } where B ɛ = {x R n : dist(x B < ɛ}

Under the Hausdorff metric, $\mathcal{H}(\mathbb{R}^d)$ is a complete metric space. For a finite set $H \subset \mathbb{Z}^d$, define the iterated function system (IFS)
\[ w_H(B) = \bigcup_{k\in H} w_k(B) = M^{-1}(B+H), \]
where $B \in \mathcal{H}(\mathbb{R}^d)$. Since $M$ is expansive, there exists a matrix norm with $\|M^{-1}\| < 1$, and therefore $w_H$ is a contractive map on $\mathcal{H}(\mathbb{R}^d)$. By the contraction mapping theorem there exists a unique compact set $K_H$ such that $w_H(K_H) = K_H$. $K_H$ is called the attractor of the IFS generated by $H$, and it can be expressed as
\[ K_H = \mathrm{clos}\Big(\sum_{j=1}^{\infty} M^{-j}(H)\Big) \tag{3.4} \]
with convergence in the Hausdorff norm. Our interest will be the attractor associated with the IFS generated by the support $\Lambda$ of the refinement coefficients. The following theorem estimates the support of the scaling function.

Theorem 8. If a function $\phi : \mathbb{R}^d \to \mathbb{C}^r$ is a compactly supported solution of the refinement equation, then $\mathrm{supp}(\phi) \subseteq K_\Lambda$.

Proof: If $x \in \mathrm{supp}(\phi)$, then $Mx - k \in \mathrm{supp}(\phi)$ for some $k \in \Lambda$. Therefore $x \in M^{-1}\mathrm{supp}(\phi) + M^{-1}\Lambda$, or
\[ \mathrm{supp}(\phi) \subseteq M^{-1}\Lambda + M^{-1}\mathrm{supp}(\phi). \]
Iterating this gives
\[ \mathrm{supp}(\phi) \subseteq M^{-1}\Lambda + M^{-2}\Lambda + M^{-3}\Lambda + \cdots + M^{-j}\Lambda + M^{-j}\mathrm{supp}(\phi). \]
Since $M^{-1}$ is a contraction, this gives
\[ \mathrm{supp}(\phi) \subseteq \Big(\sum_{j=1}^{\infty} M^{-j}(\Lambda)\Big)_\epsilon \quad\text{for all } \epsilon > 0. \]
Thus $\mathrm{supp}(\phi) \subseteq K_\Lambda$ by (3.4).

3.4 Estimating the Support

In many cases $M^n = cI$ for some $n \in \mathbb{N}$, $c \in \mathbb{R}$. In those cases we can use (3.4) to estimate the support of $\phi$.
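Equation (3.4) also suggests a simple way to approximate $K_\Lambda$ numerically: iterate the maps $w_k(x) = M^{-1}(x+k)$ on a random point cloud (the "chaos game"). The sketch below does this for the quincunx Haar example treated in Example 1 below; the reported bounding box is close to the hand estimate $[0,2]\times[0,1]$ derived there. Function name and iteration counts are choices of this sketch:

```python
import numpy as np

def attractor_points(M, Lam, iters=12, n=20000):
    """Monte Carlo approximation of K_Lambda = sum_{j>=1} M^{-j}(Lambda):
       repeatedly apply x -> M^{-1}(x + k) with k drawn uniformly from Lambda."""
    Minv = np.linalg.inv(np.asarray(M, dtype=float))
    Lam = np.asarray(Lam, dtype=float)
    x = np.zeros((n, 2))
    for _ in range(iters):
        k = Lam[np.random.randint(len(Lam), size=n)]
        x = (x + k) @ Minv.T
    return x

pts = attractor_points([[1, 1], [1, -1]], [(0, 0), (1, 0)])   # standard quincunx, Haar coefficients
print(pts.min(axis=0), pts.max(axis=0))                       # close to [0, 0] and [2, 1]
```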

7 Example 1: M = [ 1 1 1 1 Here M = I We have the following table {( Λ = ( 1 } If (x y K Λ then k ( M k ( 1 M k 1 ( t (1/ 1/ t ( t (1/ t 3 ( t (1/4 1/4 t 4 ( t (1/4 t 5 ( t (1/8 1/8 t 6 ( t (1/8 t 7 ( t (1/16 t x 1/ + 1/ + 1/4 + 1/4 + 1/8 + 1/8 + = y 1/ + 1/4 + 1/8 + 1/16 + 1/3 + = 1 so supp φ [ [ 1 The actual support is the set shown in fig 3 Example : Here M 4 = 4I M = [ 1 1 1 1 {( Λ = ( 1 } If (x y K Λ then k ( M k ( M k 1 1 ( t (1/ 1/ t ( t (1/ t 3 ( t (1/4 1/4 t 4 ( t ( 1/4 t 5 ( t ( 1/8 1/8 t 6 ( t ( 1/8 t 7 ( t ( 1/16 1/16 t 8 ( t ( 1/16 t 9 ( t (1/3 1/3 t 1 ( t (1/64 1/ t x 1/ + 1/ + 1/4 + 1/3 + 1/3 + 1/64 + = (1/ + 1/ + 1/4[1 + 1/16 + 1/16 + = 4/3

8 1 8 6 4 4 6 8 1 1 5 5 1 15 Figure 33 K Λ corresponding to non-standard quincunx x [1/8 + 1/8 + 1/16 + = 1/3 y 1/ + 1/16 + 1/16 + 1/3 + = /3 y [1/4 + 1/4 + 1/8 + = /3 so supp φ [ 1/3 4/3 [ /3 /3 The actual support is the set shown in fig 33 Theorem 9 S(Λ + j = S(Λ + (M Ij S(Λ + p = S(Λ + (M I 1 p In words: shifting the support of φ by j can be achieved by shifting the position of the coefficients h k by (M Ij Conversely shifting the position of the coefficients h k by p shifts the support of φ by (M I 1 p Proof: Let S = k M 1 (S + k

9 Then S + j = k M 1 (S + k + j = k M 1 (S + k + Mj = k M 1 (S + k + Mj + j j = k M 1 (S + j + k + (M Ij The second equality is obtained by setting j = (M I 1 p Theorem 1 Assume A is a nonsingular matrix which commutes with M Then S(AΛ = AS(Λ In words: replacing Λ by AΛ replaces S by AS Proof: AM = MA AM 1 = M 1 A Then AS = k AM 1 (S + k = k M 1 A(S + k = k M 1 (AS + Ak = p AΛ M 1 (AS + p 35 Computing Point Values The point values of the refinable function can be approximated by using the following cascade algorithm as in the univariate case Definition 14 The cascade algorithm consists of selecting a suitable starting function φ ( (x L and then producing a sequence of functions φ (n+1 (x = m k h k φ (n (Mx k Theorem 11 If the cascade algorithm converges for both φ and φ then the necessary condition for the orthogonality is also the sufficient condition

3 4 3 1 1 3 1 4 6 Figure 34 DB Scaling function and its support Proof: This is proved in [ The point values of the scaling function can also be obtained by solving an eigenvalue problem as in the scalar case Let x 1 x l be the points with integer coordinates inside the support of φ Let φ = φ(x 1 φ(x φ(x l Writing out the recursion relation for each φ(x j we get an eigenvalue problem φ = T φ Example 1: Two-dimensional Haar function H The support is shown in fig (3 There are no interior points with integer coordinater inside the support and the method requires φ= on the boundary so this is not going to work Example : Two-dimensional Daubechies wavelet DB The support is shown in fig 34

There are 4 interior points, namely $\{(2,1)^t, (3,1)^t, (3,2)^t, (4,2)^t\}$, and
\[ T = \sqrt{2}\begin{bmatrix} h_1 & h_0 & 0 & 0 \\ 0 & 0 & h_1 & h_0 \\ h_3 & h_2 & 0 & 0 \\ 0 & 0 & h_3 & h_2 \end{bmatrix}. \]
We normalize the point values so as to have the sum equal to 1:
\[ \phi(2,1) = \tfrac{1}{2}\big(2+\sqrt{3}\big), \qquad \phi(3,1) = -\tfrac{1}{2}, \qquad \phi(3,2) = -\tfrac{1}{2}, \qquad \phi(4,2) = \tfrac{1}{2}\big(2-\sqrt{3}\big). \]

Example 3: Kovačević–Vetterli (K-V). There are 18 interior points.
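A NumPy sketch of the eigenvalue computation for the DB example above; the matrix $T$ is assembled directly from $T_{ij} = \sqrt{2}\,h_{Mx_i - x_j}$, and the normalization to sum 1 matches the text (variable names are mine):

```python
import numpy as np

s3, r2 = np.sqrt(3), np.sqrt(2)
h = {(0, 0): (1 + s3) / (4 * r2), (1, 0): (3 + s3) / (4 * r2),
     (2, 0): (3 - s3) / (4 * r2), (3, 0): (1 - s3) / (4 * r2)}
M = np.array([[1, 1], [1, -1]])
pts = [(2, 1), (3, 1), (3, 2), (4, 2)]          # interior integer points of supp(DB)

# T[i, j] = sqrt(2) * h_{M x_i - x_j}, from phi(x) = sqrt(2) sum_k h_k phi(Mx - k)
T = np.zeros((4, 4))
for i, x in enumerate(pts):
    for j, y in enumerate(pts):
        k = tuple(M @ np.array(x) - np.array(y))
        T[i, j] = r2 * h.get(k, 0.0)

w, v = np.linalg.eig(T)
phi = v[:, np.argmin(np.abs(w - 1))].real
phi /= phi.sum()
print(dict(zip(pts, np.round(phi, 4))))   # (2,1): 1.866, (3,1): -0.5, (3,2): -0.5, (4,2): 0.134
```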

3 CHAPTER 4 Multivariate Wavelet Theory 41 Multiresolution Analysis Multiwavelets Definition 15 A multiresolution approximation (MRA of L (R d is a nested sequence of subspaces satisfying (i clos( j V ( j = L (R d (ii j V ( j = {} (iii f(x V ( j f(mx V ( j+1 (iv f(x V ( j f(x + M j k V ( j V ( 1 V ( V ( 1 V ( (v There exists a refinable vector function φ ( k Z d so that {φ ( j (x + k : j = 1 r k Z d } forms a Riesz basis of V ( φ ( is a called a multiscaling function The MRA is orthogonal if φ ( is orthogonal Assuming orthogonality (iii implies that { mφ ( i (Mx k} i= 1rk Z d is an orthonormal basis for the space V ( 1 and since V ( V ( 1 we have φ ( (x = m k Λ h ( k φ( (Mx k x R d (41 for some h ( k where m is the normalizing factor that preserves the L -norm This shows that any scaling function φ is a refinable function However not every refinable function generates an MRA

33 Now let us consider the subspace W the orthogonal complement of V ( in V ( 1 Since the determinant of M is m m 1 wavelets are needed to characterize the wavelet space W [8 Let us assume the existence of m 1 wavelets φ (1 φ (m 1 and let W = V (1 V ( V (m 1 where V (i = span(φ (i (x k k Z d i = 1 m 1 Let W j = {φ(m j x : φ(x W } The sequence of spaces {W j } satisfies conditions similar to conditions (i through (v of an MRA [6 We have the following lemma similar to the scalar case Lemma which implies that V ( n = n 1 k= L (R d = clos( W k k= Proof: The proof is similar to the one presented in [6 W k The spaces W j are called the wavelet spaces Let us assume the existence of the wavelets and let the scaling function span V ( j Then which gives the following recursion relations V ( 1 = V ( V (1 V ( V (m 1 φ ( (x = m k Λ φ (µ (x = m k Λ h ( k φ( (Mx k (4 h (µ k φ( (Mx k µ = 1 m 1 (43 Definition 16 The wavelets are called orthogonal if < φ (µ (x j φ (ν (x k >= δ µν δ jk I (44 As in the proof of lemma 1 this implies that the necessary condition for orthogonality is δ µν δ jk I = p h (µ p Mj h (ν p Mk
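This necessary condition can be checked numerically for the quincunx db2 example of Chapter 3 (scalar case $r = 1$, so the identity is just 1); a sketch, with the coefficient placement $\Lambda = \{(0,0),(1,0),(2,0),(3,0)\}$ taken from that example:

```python
import numpy as np
from itertools import product

s3, r2 = np.sqrt(3), np.sqrt(2)
h = {(0, 0): (1 + s3) / (4 * r2), (1, 0): (3 + s3) / (4 * r2),
     (2, 0): (3 - s3) / (4 * r2), (3, 0): (1 - s3) / (4 * r2)}
M = np.array([[1, 1], [1, -1]])            # standard quincunx

def corr(j):
    """sum_p h_p * conj(h_{p - M j}) for a lattice shift j in Z^2."""
    Mj = M @ np.array(j)
    return sum(v * np.conj(h.get((p[0] - Mj[0], p[1] - Mj[1]), 0.0)) for p, v in h.items())

for j in product(range(-2, 3), repeat=2):
    assert np.isclose(corr(j), 1.0 if j == (0, 0) else 0.0)
print("db2 coefficients satisfy the quincunx orthogonality condition (mu = nu = 0)")
```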

34 Definition 17 The wavelets are called biorthogonal if < φ (µ (x j φ (ν (x k >= δ µν δ jk I (45 φ is called the dual of φ In such a case the MRA generated by φ ( is called the dual MRA and the corresponding wavelets φ (i i = 1 m 1 are called the dual wavelets If φ is orthogonal then φ = φ 4 Decomposition and Reconstruction Let us take an arbitrary f(x V ( n for some n According to the decomposition V ( n = V ( n 1 V (1 n 1 V ( n 1 V (m 1 n 1 we have f(x = j f ( nj φ ( nj (x = µ k f (µ n 1k φ(µ n 1k (x where f (µ nk = < f φ (µ nk > It is enough to compute the inner product at one level We have < φ ( (µ nj φ n 1k > = < φ ( nj i h (µ ( i Mk φ ni > = h (µ j Mk < φ (µ n 1k φ ( nj > = h (µ j Mk Thus the decomposition and the reconstruction equations can be written as f (µ n 1k = j f ( nj h (µ j Mk (46 f ( nj = f (µ n 1k h (µ j Mk (47 µk

35 43 Symbol and Modulation Matrix Definition 18 Given a complex valued vector r = [r 1 r r n t an integer valued vector s = [s 1 s s n t and an integer-valued matrix L = [L L 1 L n where L i is the i th column of L Then the vector r raised to the vector s power is a scalar and is defined to be r s = r s 1 1 rs rs n n and r raised to the power L is a row vector defined as r L = [r L 1 r L r Ln In particular if z = (z 1 z z d and k = (k 1 k k d then z k = z k 1 1 zk d d z j = e iξ j Definition 19 The symbols of the scaling function and wavelet coefficients are defined by H (µ (ξ = 1 m k h (µ k e i<k ξ> or in z notation Since we have H (µ (z = 1 m φ (µ (x = m 1 k Λ k h (µ k z k h (µ k φ( (Mx k ˆφ (µ (ξ = H (µ (M t ξ ˆφ ( (M t ξ Lemma 3 Let Γ = MZ d be the lattice and D be the digit set Then for k Z d e πi<k M T d> = d D Proof: This is proved in [4 { m if k Γ otherwise Theorem 1 The orthogonality conditions in terms of symbols H are H (µ (ξ + πm T dh (ν (ξ + πm T d = δ µν I (48 d D

36 Proof: Since we find Then H (µ (ξ + πm T d = H (µ (ξ = 1 m = k 1 m k 1 m H (µ (ξ + πm T dh (ν (ξ + πm T d d = 1 m = j d j k k h (µ k e i<k ξ> h (µ k e i<k ξ+πm T d> h (µ k e i<k ξ> e πi<km T d> h (µ j h (ν k e i<k jξ> e πi<k jm T d> k h (µ j h (ν k Setting j = k M l and using lemma this equals h (µ k Ml h(ν k e i<mlξ> = { l k kj by the orthogonality condition Example 1: Standard quincunx M = [ 1 1 1 1 e i<k jξ> 1 e πi<k jm T d> m h (µ k Ml h(ν k } d e i<mlξ> = δ µν I which has {( t (1 t } as its digit set The orthogonality condition turns out to be H( ξ + H(ξ + (π π t = 1 Example : Non-standard quincunx M = [ 1 1 1 1 The orthogonality condition is H(ξ + H(ξ + (π π t = 1 Example 3: Column lattice M = [ 1

37 The orthogonality condition turns out to be H(ξ + H(ξ + (π t = 1 Definition The modulation matrix is H ( (ξ H ( (ξ + πm t d 1 H ( (ξ + πm t d m 1 H (1 (ξ H (1 (ξ + πm t d 1 H (1 (ξ + πm t d m 1 M(ξ = H ( (ξ H ( (ξ + πm t d 1 H ( (ξ + πm t d m 1 H (m 1 (ξ H (m 1 (ξ + πm t d 1 H (m 1 (ξ + πm t d m 1 where d d 1 d m 1 is any ordering of the digits with d = The modulation matrices in the standard quincunx non-standard quincunx and the column cases are respectively and [ H M(ξ = ( (ξ H ( (ξ + (π π t H (1 (ξ H (1 (ξ + (π π t [ H M(ξ = ( (ξ H ( (ξ + (π π t H (1 (ξ H (1 (ξ + (π π t M(ξ = [ H ( (ξ H ( (ξ + (π t H (1 (ξ H (1 (ξ + (π t 44 Polyphase Components and Polyphase Matrix An arbitrary k Z d can be written uniquely as k = Mj + d where Mj Γ and d D To make the notation easier we fix some ordering d d m 1 with d = Notation Let h (µk j = h (µ Mj+d k Definition 1 The polyphase symbols are H (µk (ξ = j h (µ Mj+d k e i<j ξ> = j h (µk j e i<jξ>

38 The symbol can be expressed in terms of polyphase components as H (µ (z = 1 m 1 z d j H (µj (z M m where j = 1 m 1 The decomposition and reconstruction relations can be written in polyphase form as j= f (µ n 1k = j = k = k f ( nj h (µ j Mk l l f ( nml+d k h(µ Ml Mk+d k f (k nl h (µk l k (49 and f (µk nl = k f n 1k h (µ (µk l k (41 µ k which are the sums of convolutions Definition The polyphase matrix is P (ξ = H ( (ξ H (1 (ξ H (m 1 (ξ H (1 (ξ H (11 (ξ H (1m 1 (ξ H ( (ξ H (1 (ξ H (m 1 (ξ H (m 1 (ξ H (m 11 (ξ H (m 1m 1 (ξ The orthogonality condition is P (ξp (ξ = I (411 Example 1: The Haar wavelet H The symbol is H ( (z = 1 + z 1 Thus H ( (z = 1 + z 1 1 H ( (zh ( (z + H ( ( zh ( ( z = 1 4 [(1 + z 1(1 + 1 z 1 + (1 z 1 (1 1 z 1 = 1

39 where 1 z = ( 1 z 1 1 z The orthogonality condition is satisfied The polyphase symbols are H ( (z = H (1 (z = 1 The scaling function is the two-dimensional Haar scaling function φ = χ S where S is as shown in fig 3 Example : The Daubechies wavelet DB The symbol is H ( (z = 1 (h + h 1 z 1 + h z 1 + h 3 z 3 1 The polyphase symbols are H ( (z = j h Mj z j = h + h z 1 z and H (1 (z = h 1 + h 3 z 1 z The corresponding scaling function is shown in fig 34 Example 3: K-V scaling function The symbol is H ( (z = 1 (h + h 3 z 1 + h 4 z 1 + h 5 z 3 1 + h z 1 z 1 + h 1 z 1z 1 + h 6 z 1 z + h 7 z 1z The polyphase symbols are H ( (z = h + h 6 z 1 + h z + h 4 z 1 z H (11 (z = h 3 + h 7 z 1 + h 1 z + h 5 z 1 z 45 Necessary and Sufficient Conditions for the Existence of Wavelets Now we state the necessary and sufficient conditions such that the lattice translates of {φ (j : j = 1 m 1} will form an orthonormal basis for W

Theorem 13. Let $\{V^{(0)}_j\}$ be an orthogonal MRA for $L^2(\mathbb{R}^d)$. Then the following are equivalent:
(i) $\{\phi^{(j)}(x-k) : j = 1,\dots,m-1,\ k\in\Gamma\}$ forms an orthonormal basis for $W_0$;
(ii) $P(\xi)$ is paraunitary a.e.;
(iii) $M(\xi)$ is paraunitary a.e.;
(iv) The recursion coefficients $\{h^{(\mu)}_k : \mu = 0,\dots,m-1\}$ satisfy
\[ \delta_{\mu\nu}\delta_{jk}\,I = \sum_p h^{(\mu)}_{p-Mj}\,h^{(\nu)*}_{p-Mk}. \]
Proof: This is proved in [11].

Thus, once an MRA has been found, we can construct a wavelet basis for $L^2(\mathbb{R}^d)$ if we can complete a paraunitary matrix, namely $P$. We will consider the completion problem in Section 8.1.
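For the DB example, the polyphase symbols depend only on $w = z_1 z_2$, so the one-dimensional completion of Section 2.9 can be reused to complete $P$. This is only an illustration of Theorem 13 for one example (the completion row is an assumption of this sketch, not the general algorithm of Chapter 8):

```python
import numpy as np

s3, r2 = np.sqrt(3), np.sqrt(2)
h = np.array([1 + s3, 3 + s3, 3 - s3, 1 - s3]) / (4 * r2)   # db2 on Lambda = {(0,0),...,(3,0)}

def P(z1, z2):
    w = z1 * z2   # the DB polyphase symbols H^(00), H^(01) depend only on w = z1*z2
    return np.array([[h[0] + h[2] * w,      h[1] + h[3] * w],
                     [-(h[1] + h[3] / w),   h[0] + h[2] / w]])   # assumed completion row

rng = np.random.default_rng(0)
for t1, t2 in rng.uniform(0, 2 * np.pi, (5, 2)):
    z1, z2 = np.exp(1j * t1), np.exp(1j * t2)
    assert np.allclose(P(z1, z2) @ P(z1, z2).conj().T, np.eye(2))
print("P is paraunitary, so Theorem 13 yields an orthonormal wavelet basis for this example")
```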

41 CHAPTER 5 Computing Moments and Approximation Order 51 Moments We will use the standard multi-index notation x α = x α 1 1 xα d d where α = (α 1 α α d is a d-tuple of non-negative integers The degree of α is α = α 1 + α + + α d The number of different multi-indices α of degree s is Let d s = D α = α 1 x α 1 1 ( s + d 1 d 1 α x α Definition 3 The discrete moments are defined by α d x α d d j = 1 m 1 m (j α = 1 m k k α h (j k = i α D α H (j ( Definition 4 The continuous moments are defined by j = 1 m 1 µ (j α = x α φ (j (xdx = i α D α ˆφ(j ( Note: m (j α is an r r matrix and µ(j α is an r 1 vector In sections (51 and (5 we will drop the subscript and always consider φ = φ ( mα = m ( α For a particular s choose any fixed ordering of all α with α = s This ordering will be

4 fixed from now on Let α j be the j th index We collect the d s monomials x α in the chosen order to form a column vector of monomials x α 1 X [s x α (x = x α ds x Rd Notation: We also use the notation [x α α =s to represent a column vector of monomials in the given order Given any expansive d d matrix M = [m ij ij=1d with scalar entries let M [s = [m [s ij be the d s d s matrix whose scalar entries m [s ij Dilation of X [s (x by M obeys the rule (Mx α d s i = m [s ij xα j j=1 are defined by the equation X [s (Mx = M [s X [s (x If λ = (λ 1 λ d t is the vector of eigenvalues of M then (λ α 1 λ α ds t is the vector of all eigenvalues of M [s [ Note: Since the eigenvalues of M [s are of the form λ α j M [s is invertible since all of the eigenvalues of M are strictly greater than 1 in absolute value It is easy to show that (M [s 1 = (M 1 [s Consider µ [s i = x α i φ(xdx and µ [s = µ [s is a column vector of size rd s 1 where each µ [s i is of size r 1 i = 1 d s Let The size of m [s is d s r r m [s = µ [s 1 µ [s µ [s d s mα 1 mα mα ds

43 Let I be the r r identity matrix (M [s Iµ [s = = m [s 11 I m[s 1 I m[s 1d s I m [s 1 I m[s I m[s d s I m [s d I s1 m[s d s I m[s m [s 11 µ[s 1 + + m[s 1d s µ [s d s d sd s I m [s 1 µ[s 1 + + m[s d s µ [s d s m [s d s 1 µ[s 1 + + m[s d s d s µ [s d s µ [s 1 µ [s µ [s d s [s (m 11 xα 1 + + m [s 1d s x α ds φ(xdx [s (m = 1 xα 1 + + m [s d s x α ds φ(xdx [s (m d s 1 xα 1 + + m [s d s d s x α ds φ(xdx [s k (m 11 = xα 1 + + m [s 1d s x α ds hk φ(mx kdx [s m k (m 1 xα 1 + + m [s d s x α ds hk φ(mx kdx [s k (m d s 1 xα 1 + + m [s d s d s x α ds hk φ(mx kdx X = k [s (Mx 1 h k φ(mx kdx k X [s (Mx h k φ(mx kdx m k X [s (Mx ds h k φ(mx kdx X k [s (y 1 h k φ(y kdy k X [s (y h k φ(y kdy = = = = 1 m k X [s (y ds h k φ(y kdy X k [s (x + k 1 h k φ(xdx k X [s (x + k h k φ(xdx 1 m k X [s (x + k ds h k φ(xdx [ ( 1 α m β α β [ β α m α β µ β ( α β x β k α β h k φ(xdx α =s α =s

44 where ( α β = ( α1 β 1 ( αr β r otherwise if β i α i for each i This can be used to compute µ [s recursively Note that µ [ is an eigenvector of m [ which is defined uniquely up to a constant The rest of them are then uniquely defined Some special cases: Case I : d = 1 M = so M [s = s This is leads to (4 in section 5 µ [s = µ s = s s β= ( s β Case II: Arbitrary d and α = so M [ = 1 Thus m s β µ β µ [ = m µ [ In the scalar case r = 1 µ = m µ m = 1 therefore we can take µ = 1 Case III: Arbitrary d and α = 1 We have M [1 = M Therefore (M Iµ [1 = m [1 µ + (I m µ [1 so µ [1 = (M I I m 1 m [1 µ Again for the scalar case r = 1 m = 1 µ = 1 and the above reduces to Mµ [1 = m [1 µ + µ [1 therefore Example: Haar wavelet H µ [1 = (M I 1 m [1 m = 1 m (1 = 1 m (1 =

45 therefore m [1 = µ = 1 ( 1/ µ [1 = (M I 1 ( 1/ = ( 1 1 ( 1/ = ( 1 1/ Verify: µ (1 = = = = = x 1 dx 1 dx S 1 x +1 1 1 = 1 x 1 dx 1 dx x [ 1 x +1 x 1 x dx (x + 1 dx [ 1 x + 1 x 1 likewise µ (1 = = 1 x +1 1 = 1 x x dx 1 dx [x x 1 dx x +1 x Example: Daubechies wavelet DB m = 1 m (1 = 1 (h 1 + h + 3h 3 = 3 3 m (1 =

46 therefore m [1 = ( 3 3 µ = 1 ( µ [1 = (M I 1 = ( 3 3 3 3 3 3 = ( 1 1 ( 3 3 5 Approximation Order and Accuracy The accuracy of a refinable function f : R d C r is the largest integer k such that every polynomial q(x with degree < k can be reproduced from linear combinations of the translates of f along Γ The precise definition is given below Definition 5 A function f : R d C r has accuracy k if for every polynomial q(x with degree < k there exist 1 r vectors {c k } k Γ such that q(x = c kf(x + k k Γ We use the terms that we used when computing moments Let us consider the approximation order p For all s < p we have X [s (x = k g k φ(x k (51 where each g k is a d s r matrix Using the two scale refinement equation X [s (x = m kl g k h l φ (Mx Mk l = m kp g k h p Mk φ (Mx p Letting Mx = y (M 1 [s X [s (y = m kp g k h p Mk φ (y p X [s (y = M [s m kp g k h p Mk φ (y p (5

47 From (51 and 5 we have g p = mm [s k g k h p Mk for all p Now We note that X [s (x = k g 11 g 1 g 1r g 1 g g r g ds1 g ds g dsr k φ 1 (x k φ (x k φ r (x k < X [s (x φ(x k >= [y α(k α =s where thus we get y α(k = β α ( α β µ β kα - β g k = [y α(k α =s We have the following theorem Theorem 14 A necessary condition for a refinable function to have approximation order s is [y α(p α =s = mm k [y α(k α =s h p Mk for all p (53 The condition is also sufficient if the cascade algorithm converges Special case: α = Here Therefore the above condition reduces to y (p = µ µ = mm k µ h p Mk for all p Special case: α = 1 Here g k = y (1 (k y (1 (k y (1 (k

48 Let e i be the i th standard unit vector in R d and let k = (k 1 k d then Thus g k = y ei (k = µ e i + k i µ y (1 (k y (1 (k y (1 (k Therefore for approximation order two we have to have = µ[1 + k µ µ [1 + p µ = mm k ( µ [1 + k µ h p Mk for all p Examples 1: Haar wavelet H Order one: H ( ( = H ( ( = 1 so the approximation order condition for order one is satisfied ( Order two: For l = For l = ( 1 µ [1 + lµ = µ [1 + lµ = ( 1 1/ ( 1 1/ ( 1 + = ( 1/ But and for l = M (µ [1 + kµ h l Mk = ( 1 1 1 1 k k M (µ [1 + kµ h l Mk = ( 1 1 1 1 ( and for l = ( 1 Example : Daubechies wavelet DB ( 1 1/ 1 = ( 3/ 1/ ( ( 1 1 3/ = 1/ 1/ respectively so no approximation order two holds Order one: H ( ( = h + h = 1 and H (1 ( = h 1 + h 3 = 1

49 so the approximation condition for order one is satisfied ( Order : For l = and for l = ( 1 µ [1 + lµ = µ [1 + lµ = ( 3 3 (3 3/ ( 3 3 (3 3/ ( 1 + = ( 4 3 (3 3/ and m M (µ [1 + kµ h l Mk k = ( 1 1 1 1 m M (µ [1 + kµ h l Mk k = ( 1 1 1 1 {( 3 3 (3 3/ {( 3 3 (3 3/ [( 3 3 h + (3 3/ ( 1 + 1 ( 3 3 (3 3/ = [( 3 3 h 1 + (3 3 h } ( 1 + 1 ( 4 3 (3 3/ = h 3 } The approximation order condition for order two is also satisfied It can be similarly checked that it does not have approximation order three 53 Approximation Order in the Scalar Case We now switch back to the full notation so that φ ( = φ denotes the scaling function and φ (1 φ (m 1 denote the wavelets We also restrict attention to the scalar case Note that the word scalar refers to r = 1 These functions are still functions of x R d Theorem 15 m ( α = for all 1 < α p iff µ ( α = for all 1 α p

5 Similarly m (j α = for all α p iff µ (j α = for all α p j = 1 m 1 Proof We prove by induction We have φ ( (M t ξ = H ( (ξ φ ( (ξ φ (j (M t ξ = H (j (ξ φ(ξ for j = 1 m 1 Let ξ = then H ( ( = 1 H (j ( = for all j = 1 m 1 by the property of the wavelets So for α = we are done For large p let D [p = [D α α =p with same order as in X [p ie Now differentiating the above equation D [p = D α 1 D α D α dp D [p φ( (M t ξ = D [p (H ( (ξ φ ( (ξ we find that M [p D [p φ( (M t ξ = ( ( D [p H (ξ ( φ( (ξ + H (ξ D [p φ( (ξ + middle order terms Let ξ = Assume that all the partial derivatives of H ( (ξ for < α p are zero for ξ = (ie mα = Then the first term on the right and all the middle order terms vanish and we get M [p D [p φ( ( = H ( D [p φ( ( Since M [p is non-singular (because M is and H ( ( is not an eigenvalue of M [p this implies D [p φ( ( = which implies that µ ( α = for all α with α = p Conversely assume µ( α = for 1 < α p and let for q < p all partial derivatives of H ( of order q be zero The left

51 hand side is equal to zero and all the terms on the right except the first one are equal to zero by induction hypothesis Therefore the m ( α Notation: Let ω i = Similarly for the wavelets = πm t d i where M is the dilation matrix Theorem 16 φ ( satisfies the approximation order condition (53 of order p D α H ( (ξ = for all α p 1 ξ= ω i except for H ( ( Proof: We prove by induction We have the orthogonality conditions H ( (ξh ( (ξ + H ( (ξ + ω 1 H ( (ξ + ω 1 + + H ( (ξ + ω m 1 H ( (ξ + ω m 1 = 1 H ( (ξh (i (ξ + H ( (ξ + ω 1 H (i (ξ + ω 1 + + H (ξ + ω m 1 H (i (ξ + ω m 1 = for i = 1 m 1 Since the modulation matrix is unitary its columns are orthogonal too H ( (ξh ( (ξ + ω i + H (1 (ξh (1 (ξ + ω i + + H (m 1 (ξh (m 1 (ξ + ω i = (54 for all i = 1 m 1 Putting ξ = we get H ( (H ( (ω i + H (1 (H (1 (ω i + + H (m 1 (H (m 1 (ω i = Differentiating (54 we get α m 1 k= H ( (ω i = for all j = 1 m 1 β= ( α β D β H (k (ξd α β H (k (ξ + ω i = Plugging in ξ = all the terms in the sum except the first one are equal to zero because φ has approximation order p and thus all the derivatives up to order of p of H (i vanish at We are left with α β= ( α β D β H ( (D α β H ( (ω i =

5 In the last sum because of the induction hypothesis all terms disappear except for β = Thus we have D α H ( (ξ = ξ=ω i for all i = 1 m 1 and α p 1 Conversely if the derivatives in the statement of the theorem vanish in a very similar way it can be shown that the all the moments up to order p of the wavelets vanish which in turn implies the approximation order Theorem 17 Let φ ( φ ( be a pair of dual scaling functions Then φ ( satisfies the approximation order condition (53 of order p D α H(i (ξ = for all α p 1 i = 1 m 1 ξ= Proof: Let φ ( φ ( be a pair of dual scaling functions and φ (i φ (i i = 1 m 1 be the corresponding wavelets Since φ ( has approximation order p ψi has p vanishing moments We prove by induction We have biorthogonality condition H ( (ξ H (i (ξ + H ( (ξ + ω 1 H (i (ξ + ω 1 + + H ( (ξ + ω m 1 H (i (ξ + ω m 1 = for i = 1 m 1 Let α = and let ξ = Since φ ( has approximation order p we are left with β= H ( ( H (i ( = H (i ( = Now differentiating the first term of the above sum and evaluating at ξ = we get α ( α D α H ( (D α β H β (i ( = All the other terms give α β= for all p = 1 m 1 ( α β D α H ( (ω p D α β H (i (ω p = which are all zero by the last theorem Thus again by the same argument β = is the only term that contributes because of the induction hypothesis Thus D α H(i ξ = for all α p 1 i = 1 m 1 ξ=
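As a small numerical companion to this chapter, the sketch below evaluates the first discrete and continuous moments, $m^{[1]}$ and $\mu^{[1]} = (M-I)^{-1} m^{[1]}\mu_0$ (the scalar special case worked out in Section 5.1), for the quincunx Haar and Daubechies examples; function and variable names are mine:

```python
import numpy as np

M = np.array([[1.0, 1.0], [1.0, -1.0]])   # standard quincunx
m = abs(np.linalg.det(M))                 # m = 2

def first_moments(ks, h, mu0=1.0):
    """m^[1]_alpha = (1/sqrt(m)) sum_k k^alpha h_k for |alpha| = 1, and
       mu^[1] = (M - I)^{-1} m^[1] mu_0 (scalar case r = 1)."""
    ks, h = np.asarray(ks, float), np.asarray(h, float)
    m1 = (ks * h[:, None]).sum(axis=0) / np.sqrt(m)
    return m1, np.linalg.solve(M - np.eye(2), m1 * mu0)

# quincunx Haar H: h_{(0,0)} = h_{(1,0)} = 1/sqrt(2)
print(first_moments([(0, 0), (1, 0)], [1 / np.sqrt(2)] * 2))   # m^[1] = (1/2, 0), mu^[1] = (1, 1/2)

# quincunx Daubechies DB: db2 coefficients along the x-axis
s3 = np.sqrt(3)
h = np.array([1 + s3, 3 + s3, 3 - s3, 1 - s3]) / (4 * np.sqrt(2))
print(first_moments([(0, 0), (1, 0), (2, 0), (3, 0)], h))      # mu^[1] approx (1.268, 0.634)
```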