Reconstruction of Periodic Bandlimited Signals from Nonuniform Samples

Evgeny Margolis

Reconstruction of Periodic Bandlimited Signals from Nonuniform Samples

Research Thesis

Submitted in Partial Fulfillment of the Requirements for the Degree of Master of Science in Electrical Engineering

Evgeny Margolis

Submitted to the Senate of the Technion - Israel Institute of Technology

Iyyar 5764, Haifa, May 2004

This Research Thesis Was Done Under The Supervision of Dr. Yonina C. Eldar in the Department of Electrical Engineering.

Acknowledgements

I would first like to thank my advisor, Dr. Yonina Eldar, for her support, patience, and for pushing me in the right direction at critical moments. Thank you for the knowledge I acquired from you. I feel extremely fortunate to have an advisor like you.

Thanks to Prof. Arie Feuer of the Technion - Israel Institute of Technology for first introducing me to the field of nonuniform sampling. The initial steps of this research grew out of those fruitful discussions. Thanks to Prof. Amir Averbuch of Tel Aviv University for raising the issue of the stability of the algorithms proposed in this work. This topic stimulated a significant phase of my research. I also want to thank Prof. Michael Unser and Dr. Thierry Blu from EPFL, Switzerland, and Dr. Thomas Strohmer from the University of California, Davis, for valuable suggestions related to this work.

Thanks to every member of the research group under the supervision of Dr. Yonina Eldar. Ami, Liron, Zvika, Tsvika, Nagesh, Moshe, and Noam, I thank you all for your friendship, and for all the help through the endless discussions we had.

The Generous Financial Help of the Technion is Gratefully Acknowledged.

Contents

Abstract
List of Symbols and Abbreviations
1 Introduction
    Sampling Theory
    Nonuniform Sampling
    Reconstruction from Nonuniform Samples
    Periodic Signals
    This Thesis: Its Aims
    Thesis Outline
2 Preliminary Notions
    Periodic Bandlimited Signals
    Frames and Bases
        Definitions
        Stability
        Examples
3 Nonuniform Sampling of Periodic Bandlimited Signals
    Reconstruction from Nonuniform Samples
        Properties of the Reconstruction Functions
        Stability Analysis
    Uniform Sampling
        Reconstruction Formula
        Stability Analysis
    Recurrent Nonuniform Sampling
        Derivation of the Reconstruction Functions
        Stability Analysis
        Examples
4 Frame-Based Reconstruction
    Reconstruction from Nonuniform Samples Using Frames
    Stability Analysis
        Uniform Sampling
        Recurrent Nonuniform Sampling
    Simulation Results
5 Reconstruction using LTI filters
    Uniform Sampling
    Recurrent Nonuniform Sampling
    Interpolation and Reconstruction Using a Discrete-Time Filterbank
6 Sampling in Polar Coordinates
    Sampling Approach in Cartesian Coordinates
    Sampling Approaches in Polar Coordinates
        First Sampling Strategy
        Second Sampling Strategy
    Uniform and Recurrent Nonuniform Sampling in Polar Coordinates
        Uniform Sampling
        Recurrent Nonuniform Sampling
    Discussion
7 Tomographic Image Reconstruction
    Computerized Tomography
        Computerized Tomography Projections
        Reconstruction Principles
    Reconstruction Method
    Simulations
8 Summary and Topics for Future Research
    Contribution
    Ongoing Work
        Symmetric Expansion
        Reconstruction with Nonuniform B-splines
        Recovery of Missing Pixels in Bandlimited Images
A Proof of the Reconstruction Theorem from Nonuniform Samples
B Expansions of the Reconstruction Functions
C Span of the Reconstruction Functions
D Shepp-Logan head phantom
Hebrew Abstract


List of Figures

2-1 Examples of an orthogonal basis (a), a basis (b), a tight frame with r = 1.5 (c), and a frame with r = 2 (d) for the space R².
Example of a periodic bandlimited signal with uniform (top) and nonuniform (bottom) samples taken over one period T = 10. The locations of the sampling points t_p and the sampled values x(t_p) are marked. Two dashed lines serve as delimiters for one period [0, 10] of the signal x(t).
Two sets of 7 nonuniform samples where t_0 = 0.7, t_1 = 1.7, t_2 = 3.4, t_3 = 4.2, t_4 = 5.4, t_6 = 9, and T = 10. The location of the 6th point is (a) t_5^a = 7.4 and (b) t_5^b = 5.7.
Reconstruction (dashed line) of the 10-periodic bandlimited signal 0.7 cos(πt) (solid line) from noisy uniform samples (marked by dots).
Reconstruction (dashed line) of the 10-periodic 4-bandlimited signal (solid line) from noisy uniform samples (marked by dots).
Sampling distribution for N_r = 3 and M_r = …
Example of a periodic bandlimited signal with recurrent nonuniform samples taken over one period T = 10 with N_r = 3 and M_r = 5. The locations of the sampling points t_p and the sampled values x(t_p) are marked. Two dashed lines serve as delimiters for one period [0, 10] of the signal x(t).
Recurrent nonuniform sampling model.
Sampling distribution for N_r = 2 and M_r = …
Condition number κ of the set of reconstruction functions for the set of recurrent nonuniform samples considered in Example 3.2, as a function of t_1.
3-10 Sampling distribution for N_r = 3 and M_r = …
Condition number κ of the set of reconstruction functions for the set of recurrent nonuniform samples considered in Example 3.3, as a function of t_1 and t_2.
Power spectrum of a signal (solid line) and a white noise (dashed line).
Comparison between the condition number κ of the reconstruction method of Theorem 3.1 (dashed line) and of the frame method of Theorem 4.1 (solid line), as a function of t_1. The set of recurrent nonuniform samples is defined in Example 3.2 and the space of the sampled signal is V_2 (r = 2).
Comparison between the condition number κ of the reconstruction method of Theorem 3.1 (left) and of the frame method of Theorem 4.1 (right), as a function of t_1 and t_2. The set of recurrent nonuniform samples is defined in Example 3.3 and the space of the sampled signal is V_3 (r = 2.14).
Reconstruction of a 10-periodic 4-bandlimited signal (solid line) from 18 nonuniform (top), uniform (middle), and recurrent nonuniform (bottom) noisy samples (marked by dots). Simulation results of the reconstruction method of Theorem 3.1 (dash-dotted line) and the frame method of Theorem 4.1 (dashed line) are compared.
Reconstruction from uniform samples using a continuous-time filter.
Frequency response of the reconstruction filter H(ω) of (5.5) for N = 9 (a) and N = 10 (b).
Reconstruction from recurrent nonuniform samples using a continuous-time filterbank.
Frequency response of H_2(ω) for the reconstruction methods of Theorem 3.1 (a) and Theorem 4.1 (b).
The Interpolation Identity.
Reconstruction from recurrent nonuniform samples using a discrete-time filterbank.
Butzer and Hinsen sampling approach in Cartesian coordinates.
6-2 First sampling strategy. Samples are taken along the azimuthal coordinate with fixed radius. The average density of samples in the radial direction is greater than one and the number of samples on each circle is N = 10.
Second sampling strategy. Samples are taken along radial lines with fixed azimuthal coordinate. The average density of samples on each line is greater than one and the number of rays from the origin for azimuthal interpolation is N = 10 (always even).
Uniform sampling in polar coordinates.
Recurrent nonuniform sampling in polar coordinates with M_θ = 12, N_θ = 3, and N_r = …
Uniform sampling in the θ coordinate and recurrent nonuniform sampling with N_r = 3 in the r coordinate.
A typical CT scanner construction: X-ray tube (1), detector (2), sliding bed (3) and patient's body (4).
Radon transform (left) and the Fourier Slice Theorem: a parallel projection P_θ(r) of an image f(x, y) is a radial line of the Fourier transform F(u, v) of the object.
Fourier Slice Theorem.
Gridding from polar to Cartesian coordinates.
Reconstruction of a Shepp-Logan phantom (top-left) in computerized tomography from its frequency-domain samples lying on uniformly distributed radial lines (top-right).
Reconstruction of a Shepp-Logan phantom (top-left) in computerized tomography from its frequency-domain samples lying on two sets of uniformly distributed radial lines (top-right).
Reconstruction of a Shepp-Logan phantom (top-left) in computerized tomography from its frequency-domain samples lying on nonuniformly distributed radial lines (top-right).
Examples of periodic (left) and symmetric (right) expansions of nonuniform samples.
8-2 Examples of periodic bandlimited (left) and B-spline (right) reconstructions from nonuniform samples.
Recovery of missing pixels. Left: original digital image of Lena. Middle: Lena with 43% randomly missing pixels. Right: reconstructed image.
D-1 Shepp-Logan head phantom in the space domain.

Abstract

The most common form of sampling used in the context of DSP is uniform sampling. However, in many practical situations the data cannot be sampled uniformly. The problem of reconstructing a signal from its nonuniform samples arises in a variety of fields such as image processing, astronomy, geophysics, speech processing, communication theory, and medical imaging. In most practical applications we are given only a finite number of samples, which can be periodically extended across the boundaries. The assumption that the reconstructed signal is periodic and bandlimited provides a simple and appropriate way to handle the problem of reconstructing finite-dimensional signals.

In this work, we introduce new closed-form algorithms for reconstructing a periodic bandlimited signal from its nonuniform samples. We analyze the stability of the proposed algorithms and discuss the advantages, disadvantages, and properties of each method. We also provide experimental evidence to support our theoretical results. Some special structures of the sampling points are investigated, and we show that uniform sampling results in the simplest and most stable reconstruction algorithm. For these sampling schemes, we develop an efficient implementation of the reconstruction algorithm using continuous-time or discrete-time LTI filters. These algorithms can be used in a wide range of signal processing applications.

We then consider the problem of reconstructing, from nonuniform samples, a two-dimensional signal given in polar coordinates. Any function f(r, θ) given in polar coordinates is 2π-periodic in θ. We first extend the sampling and reconstruction approach developed for Cartesian coordinates to the polar coordinate system. Specifically, we present two strategies for nonuniform sampling of signals in polar coordinates. Then, for the special structures of the nonuniform samples taken in polar coordinates, we develop an efficient reconstruction using a bank of continuous-time two-dimensional filters. As an application

of these algorithms, we apply them to the reconstruction of space-limited medical images from their nonuniform frequency-domain samples, which are usually taken in polar coordinates.

List of Symbols and Abbreviations

DSP    Digital Signal Processing
1D     One Dimension
2D     Two Dimensions
FB     Filterbank
LTI    Linear Time Invariant
SNR    Signal-to-Noise Ratio
MSE    Mean Square Error
DFT    Discrete Fourier Transform
FFT    Fast Fourier Transform
CT     Computerized Tomography
MRI    Magnetic Resonance Imaging

x(t)   Periodic bandlimited signal
X(ω)   Fourier transform of the signal x(t)
T      Period of the signal x(t)
K      Highest harmonic of x(t), i.e., X(ω) = 0 for |ω| > 2πK/T
V_K    Space of T-periodic K-bandlimited signals
P_K    Orthogonal projection onto the space V_K
N      Number of arbitrarily spaced samples of the signal x(t)
T_r    Period of recurrent nonuniform samples
N_r    Number of recurrent nonuniform samples in one period T_r
M_r    Number of periods of recurrent nonuniform samples in one period T of the signal x(t)

t_p       Location of the p-th sample of the signal x(t)
h_p(t)    Set of N reconstruction functions
D_N(t)    Dirichlet kernel of degree N
F         Set transformation from ℓ_2 to V(ϕ)
F*        Adjoint transformation from V(ϕ) to ℓ_2
S         Frame operator
R         Frame correlation
I         Identity operator
A         Lower frame bound
B         Upper frame bound
r         Redundancy of the frame
κ         Condition number of the frame
λ_i(M)    Eigenvalues of the matrix M
j         √(−1)
L_2       Hilbert space of square-integrable functions
L_2[0,T]  Hilbert space of square-integrable functions on the interval [0, T]
ℓ_2       Hilbert space of square-summable sequences
|·|       Absolute value
⟨·,·⟩     Inner product
‖·‖_{ℓ2}  Norm in ℓ_2
‖·‖_{L2}  Norm in L_2
mod       Modulo operator
⌊·⌋       Floor operator

Chapter 1
Introduction

1.1 Sampling Theory

Digital signal processing and image processing theories rely on sampling a continuous-time signal to obtain a discrete-time representation of the signal. At some juncture one must ask how samples of a continuously varying signal relate to that signal. This question is answered by signal sampling theory (or sampling theory for short). One might also ask what a signal (continuous or discrete) says about the system from which it came. That is the art of signal analysis.

Sampling theory has a long history and finds its roots in the work of Cauchy [1] and Gauss. Its name has almost become synonymous with that of C. E. Shannon, who amongst others is credited with the statement of the uniform sampling theorem [2]. The Shannon-Whittaker theory states that from periodic observations (uniformly spaced samples) one may reconstruct a signal that contains no frequencies above half the sampling rate (a limit to which Nyquist's name has become attached). The reconstruction formula that complements the sampling theorem is

$$f(x) = \sum_{k \in \mathbb{Z}} f(kT)\,\mathrm{sinc}(x/T - k), \qquad (1.1)$$

where T is the sampling step. Equation (1.1) may be interpreted as a linear combination of shifted and rescaled versions of the sinc function, sinc(x) = sin(πx)/(πx). This formula is exact if f(x) is bandlimited to ω_max ≤ π/T; this upper limit is the Nyquist frequency. In the mathematical literature, (1.1) is known as the cardinal series expansion, which is often attributed to Whittaker [3].
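A minimal numerical sketch of the cardinal series (1.1), assuming NumPy; the helper name sinc_interp and all signal parameters are illustrative, and the series is necessarily truncated to the available samples, so the error is small but not zero:

```python
import numpy as np

def sinc_interp(samples, T, x):
    """Truncated cardinal series (1.1): sum_k f(kT) sinc(x/T - k)."""
    k = np.arange(len(samples))                      # available sample indices
    # np.sinc(u) = sin(pi*u)/(pi*u), matching the normalized sinc in (1.1)
    return np.array([np.sum(samples * np.sinc(xi / T - k)) for xi in x])

# Example: a signal bandlimited well below the Nyquist frequency pi/T
T = 0.1                                              # sampling step
k = np.arange(200)
f_k = np.cos(2 * np.pi * 1.3 * k * T) + 0.5 * np.sin(2 * np.pi * 0.7 * k * T)
x = np.linspace(2.0, 18.0, 500)                      # stay away from the truncation edges
f_hat = sinc_interp(f_k, T, x)
true = np.cos(2 * np.pi * 1.3 * x) + 0.5 * np.sin(2 * np.pi * 0.7 * x)
print("max interpolation error (truncation only):", np.max(np.abs(f_hat - true)))
```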

Shannon's sampling theorem and its corresponding reconstruction are best understood in the frequency domain, where the reconstruction is equivalent to low-pass filtering. The importance and attraction of this theorem is that the samples of a bandlimited signal (taken at an appropriate rate) contain all the information needed to reconstruct that signal.

For some time the subject of sampling had reached what seemed to be a very mature state, and research in this area had become very mathematically oriented. Recently, there has been a strong revival of the subject, motivated by the intense activity taking place around wavelets [4, 5, 6]. It soon became clear that the mathematics of wavelets were also applicable to sampling. This led researchers to reexamine some of the foundations of Shannon's theory and develop more general formulations, many of which turn out to be quite practical from the point of view of implementation. For a classical perspective on sampling, we refer to Unser's excellent tutorial article, which gives a very comprehensive view of the subject up to recent years [7]. This article provides a good summary of sampling theory with emphasis on regular sampling, where the sampling grid is uniform.

Like all good theorems, Shannon's reconstruction theorem raised many additional questions. People began to ask whether the sampling has to be uniform, and what to do when it is not.

1.2 Nonuniform Sampling

There are a variety of applications in which the samples cannot be collected uniformly and the data are known only on a nonuniformly spaced sampling set. This nonuniformity is a fact of life and prevents the use of the standard reconstruction techniques developed mainly for uniformly spaced samples. The following examples are typical and indicate that nonuniform sampling problems are pervasive in science and engineering.

Geophysics: When one wishes to collect geophysical data (e.g., electrical resistivity of the ground, gravitational or magnetic potentials) [8, 9], such scalar or vector fields can only be sampled at points on the Earth's surface (or down holes) to which the investigator has access. The observations therefore tend to be irregular and clustered, because on setting up the equipment the user finds it convenient to make several sets of readings close by.

Medical imaging: The problem of signal reconstruction from its nonuniform frequency-domain samples arises in computerized tomography (CT) and magnetic resonance imaging (MRI) [10]. Polar sampling strategies and linear spiral scan techniques [11, 12], which are widely used in MRI, provide practical advantages in the context of medical imaging. Since the geometry is polar rather than Cartesian, the observations do not lie on a simple Cartesian grid.

Astronomical measurements: When measuring star luminosity, for example, one only has access to it at certain times of the day or the year [13, 14]. There are also the difficulties of adverse weather conditions preventing observations from being made, and of equipment faults.

Communication theory (missing data): When data from a uniformly sampled signal are lost, the result is generally a sequence of nonuniform samples [15, 16, 17]. Often, missing samples are due to partial distortion of storage devices, e.g., scratches on a CD. A good example is the restoration of audio signals [18].

Control theory: The objective is to sample adaptively and irregularly in order to reduce the volume of data that the controller has to cope with [19, 20]. One can imagine that this might work well if the controller needs to be active only when the system enters a certain region of its state space.

Other applications using nonuniform sampling sets occur in spectroscopy [21], general signal/image processing [22], and biomedical imaging. There are also applications where we can benefit from introducing irregular sampling. The key point is that with uniform sampling one cannot identify frequencies above the Nyquist critical frequency of half the sampling rate. Frequencies above that critical rate get folded back into the interval [−f_s/2, f_s/2], where f_s is the sampling rate. This effect is known as aliasing. With nonuniform sampling this restriction disappears for multiband signals. We define a set I as the union of all intervals where the Fourier transform of the signal is not zero; such a signal is usually called a multiband signal. If the set I is of measure B, then reconstruction is possible provided that the average sampling rate exceeds B and provided that I is known. B is known as the Nyquist-Landau rate. This result was proven by Landau in [23, 24] and leads to the following applications.

High-speed signal analyzers: A digital alias-free signal processing system was recently developed at the Institute of Electronics and Computer Science in Riga, Latvia [25]. It is currently able to identify frequency components up to 1.2 GHz, while its nonuniform samples have an average rate of only 80 MHz; the Nyquist limit has been exceeded by a factor of 30. Today's technology does not allow analogue-to-digital converters (ADCs) to operate at such high rates.

Synthetic-aperture radar (SAR): This is a well-established technique for imaging the ground to one side of an airborne platform. Moving target detection is a very useful application for SARs. Unfortunately, using conventional imaging data for detecting moving targets leads to ambiguities in the targets' positions and velocities. By using a nonuniform pulse repetition interval the proposed radar overcomes this limitation and allows the azimuthal data to be focused at any velocity of interest [26]. Among other applications, nonuniform sampling provides an advantage in image registration techniques [27].

As we have shown, nonuniform sampling is widely used in many different applications. We are now interested in the question of how, and under what conditions, a nonuniformly sampled signal can be perfectly reconstructed. In these cases, reconstruction techniques developed for uniformly spaced samples [2] cannot be applied.

1.3 Reconstruction from Nonuniform Samples

In the nonuniform case, it is well established that a bandlimited signal is uniquely determined from its samples, provided that the average sampling rate exceeds the Nyquist rate, where the average sampling interval is defined as lim_{n→∞}(x_n/n). The essential result is incorporated in the following theorem by Yao and Thomas [28].

Theorem 1.1 (Yao and Thomas). Let f(x) be a finite-energy bandlimited signal such that its Fourier transform F(ω) = 0 for |ω| > W − ε for some 0 < ε ≤ W. Then f(x) is uniquely determined by its samples f(x_n) if

$$\left| x_n - \frac{n\pi}{W} \right| \le L < \infty, \qquad |x_n - x_k| \ge \delta > 0, \quad n \ne k. \qquad (1.2)$$

The reconstruction is given by

$$f(x) = \sum_{n=-\infty}^{\infty} f(x_n)\, \frac{G(x)}{G'(x_n)\,(x - x_n)}, \qquad (1.3)$$

where

$$G(x) = (x - x_0) \prod_{n=-\infty,\, n \ne 0}^{\infty} \left( 1 - \frac{x}{x_n} \right), \qquad (1.4)$$

G'(x_n) is the derivative of G(x) evaluated at x = x_n, and if x_n = 0 for some n, then x_0 = 0.

Different extensions of the nonuniform sampling theorem are known [29]. Specifically, Yen considered the case where a finite number of uniform sampling points migrate in a uniform distribution to new distinct points. He proved that the bandlimited signal f(x) remains uniquely defined by the new set of nonuniform samples. Yen also considered the case of recurrent nonuniform sampling. In this form of sampling, the sampling points are divided into groups of N nonuniformly spaced points. The groups have a recurrent period, denoted by T, which is equal to N times the Nyquist period T_Q. Denoting the points in the first recurrent group by t_r, r = 0, 1, ..., N−1, the complete set of sampling points is

$$t_r + nT, \qquad r = 0, 1, \ldots, N-1, \quad n \in \mathbb{Z}, \qquad (1.5)$$

where T = N T_Q. The reconstruction is given by

$$f(t) = \sum_{n=-\infty}^{\infty} \sum_{r=0}^{N-1} f(t_r + nT)\, a_r (-1)^{nN}\, \frac{\prod_{q=0}^{N-1} \sin(\pi(t - t_q)/T)}{\pi(t - nT - t_r)/T}, \qquad (1.6)$$

where

$$a_r = \prod_{q=0,\, q \ne r}^{N-1} \frac{1}{\sin(\pi(t_r - t_q)/T)}. \qquad (1.7)$$

A review of these reconstruction methods can be found in a tutorial article by Jerri [30]. Recently, an efficient filterbank reconstruction technique from recurrent nonuniform samples was developed in [31].
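To make the truncation issue discussed in the following paragraphs concrete, here is a small sketch of mine (not the thesis's method), assuming NumPy: with finitely many nodes, the kernel G(x)/(G'(x_n)(x − x_n)) of (1.3)-(1.4) reduces to a Lagrange cardinal function, so the truncated reconstruction is ordinary Lagrange interpolation and is accurate only near the centre of the node set. All names and parameters are illustrative.

```python
import numpy as np

def truncated_reconstruction(x_nodes, f_nodes, x_eval):
    """Truncated version of (1.3)-(1.4): with finitely many nodes,
    G(x)/(G'(x_m)(x - x_m)) equals the Lagrange cardinal function
    prod_{k != m} (x - x_k)/(x_m - x_k), which is evaluated here."""
    x_nodes = np.asarray(x_nodes, float)
    f_nodes = np.asarray(f_nodes, float)
    x_eval = np.asarray(x_eval, float)
    result = np.zeros_like(x_eval)
    for m in range(len(x_nodes)):
        others = np.delete(x_nodes, m)
        card = np.prod((x_eval[:, None] - others) / (x_nodes[m] - others), axis=1)
        result += f_nodes[m] * card
    return result

# Example: samples jittered around the nominal Nyquist grid x_n = n*pi/W with W = pi
rng = np.random.default_rng(0)
n = np.arange(-12, 13)
x_n = n + 0.2 * (rng.random(n.size) - 0.5)            # bounded jitter, distinct points
f = lambda x: np.cos(0.4 * np.pi * x) + 0.5 * np.sin(0.2 * np.pi * x)  # bandlimited below W
x_eval = np.linspace(-3.0, 3.0, 200)                  # evaluate near the centre of the node set
approx = truncated_reconstruction(x_n, f(x_n), x_eval)
print("max error near the centre:", np.max(np.abs(approx - f(x_eval))))
```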

Another well-known sampling theorem, by Papoulis [32], generalizes uniform sampling of a signal: a bandlimited signal can be reconstructed from uniformly spaced samples of the outputs of M linear time-invariant (LTI) systems with the signal as their input, sampled at one-Mth of the Nyquist rate. This method is also known as multichannel sampling. Recurrent nonuniform and derivative sampling [29, 33] are special cases of Papoulis' formulation. Recently, it was shown that the sufficient condition for reconstruction using Papoulis' generalized expansion is also a necessary condition [34].

Direct implementation of all the reconstruction methods mentioned above is computationally difficult. Because of these difficulties, many of the reconstruction algorithms for the nonuniform sampling problem are based on iterative methods [35, 36, 37]. These algorithms are still computationally demanding. In addition to the difficulties in implementing these methods, there are practical limitations which prevent their numerical implementation on a digital computer. Specifically, the sets of reconstruction functions defined in (1.4) and (1.6) typically have infinite length; thus, in practice, the infinite functions involved in the reconstruction must be truncated. Another issue is that in most practical applications we have only a finite number of samples, and thus we lack the full data needed to determine the set of reconstruction functions in (1.4) and to carry out the reconstruction in (1.3) or (1.6). As a consequence, perfect reconstruction of the infinite-length signal is impossible. In both cases the problem of perfectly reconstructing the infinite-length signal reduces to the problem of approximating a signal from a finite number of nonuniform samples. Therefore, we raise the question of how to efficiently approximate the signal from a finite number of its samples.

1.4 Periodic Signals

In this work, we consider the problem of reconstructing a periodic bandlimited signal from nonuniform samples. Given a finite number of samples, we can always extend them periodically across the boundaries. Generally, periodic extension of a finite number of samples provides a simple and appropriate way to reconstruct the signal [36, 38, 39]. Fortunately, in many practical situations the functions under consideration are not arbitrary, but possess some smoothness properties, so that the signal can be regarded as approximately bandlimited. Engineers often assume that the function involved in the reconstruction is bandlimited. In this case the assumption that the reconstructed signal is periodic and bandlimited provides

good results. This approach is frequently encountered in image processing [40, 41, 42, 43], where we are given a finite number of samples of a space-limited two-dimensional function. There are also applications where the underlying signal is periodic. For example, a two-dimensional signal given in polar coordinates is periodic in its azimuthal coordinate [44]. One example where the periodic representation is especially relevant is the parametric representation of closed curves in terms of splines [45] or Fourier basis functions [46].

The problem of reconstructing a periodic bandlimited signal from an odd number of uniform samples was first considered by Cauchy [1]. Later, Schanze [47] developed a reconstruction method for any even number of uniform samples. Reconstructing a periodic bandlimited signal from nonuniform samples is considerably more complicated. In the case of an odd number of nonuniform samples, the reconstruction can be obtained using the Lagrange interpolation formula for trigonometric polynomials [48]. For an even number of samples, Lagrange interpolation for exponential polynomials results in a complex-valued interpolation function [20, 49].

Several reconstruction methods for periodic bandlimited signals from nonuniform samples have been previously suggested. These methods involve iterative algorithms [36, 50], which are computationally demanding and have potential issues of convergence. Recently, there has been some work on reconstructing a signal from a finite number of samples by applying Neumann boundary conditions [51], i.e., a symmetric extension across the end points of the sampling intervals. This approach may improve the quality of reconstruction at the boundaries of the signal when the signal is not really periodic. Reconstruction of a non-bandlimited periodic signal from uniform samples was discussed in [52].

Despite the fact that the problem of reconstructing a periodic bandlimited signal from uniform [1, 47, 53, 54, 55, 56, 57] and nonuniform [49, 58] samples is well developed, there are still many theoretical gaps and unanswered questions. Specifically, we observe that there are no real-valued functions for reconstruction of a periodic bandlimited signal from an even number of nonuniform samples. We can also show that in the oversampled case the reconstruction methods of Cauchy, Schanze, and Lagrange provide a reconstruction in a space which is larger than the space of the signal involved in the reconstruction. In noisy environments, this fact introduces perturbations into the reconstruction which are not in the space of the reconstructed signal, i.e., high-frequency components of the noise.

There are also some open questions concerning the stability of the reconstruction methods. In particular, we show the interesting fact that the reconstruction method developed by Schanze for an even number of uniform samples is less stable than Cauchy's formula for an odd number of uniform samples.

1.5 This Thesis: Its Aims

Combining questions of signal analysis with the facts that the signal involved in the reconstruction is periodic and that we are given a finite number of its nonuniform samples results in the following list of questions. The aims of this thesis are to help answer these questions.

Q1 Given a finite number of nonuniformly spaced samples, how do we reconstruct an appropriate periodic bandlimited signal?

Q2 How does perturbation or quantization of the signal samples affect its reconstruction?

Q3 How should we generalize efficient signal processing techniques such as filtering and convolution to cope with special distributions of the sampling points?

Q4 Can we extend the one-dimensional results to higher dimensions?

None of these questions is easy to answer. Questions Q1 and Q2 are answered in Chapters 3 and 4, where we develop two new algorithms for reconstructing a periodic bandlimited signal from nonuniform samples and discuss the stability of the reconstruction. Chapter 5, which is concerned with Q3, shows that signal reconstruction may be carried out using filtering techniques even when the sampling is not uniform. Chapter 6 then helps to answer Q4; the one-dimensional results are extended to two dimensions, where the underlying 2D signal is periodic in one dimension.

1.6 Thesis Outline

This thesis can roughly be divided into two parts. Chapters 2-5 introduce new algorithms and efficient methods for reconstruction of periodic bandlimited signals from nonuniform

samples. Chapters 6-7 extend the one-dimensional results of the previous chapters to the reconstruction of two-dimensional signals sampled in polar coordinates.

Chapter 2 summarizes relevant background material. We start with a brief definition of the space of periodic signals bandlimited in the frequency domain and introduce the essential results from frame theory. In Chapter 3, we develop a new theorem for reconstruction of a periodic bandlimited signal from nonuniformly spaced samples. We discuss the stability of the proposed algorithm and simplify the reconstruction functions for two special cases of sampling points: uniform sampling and recurrent nonuniform sampling. In Chapter 4, based on the theory of frames and the reconstruction method of Chapter 3, we develop a new algorithm for the oversampled case. We then compare these two algorithms and discuss the advantages, disadvantages, and properties of each method. We also provide experimental evidence to support our theoretical results. Chapter 5 develops an efficient interpretation of the reconstruction process from uniform and recurrent nonuniform samples using LTI filters.

Chapters 6-7 focus specifically on the reconstruction of two-dimensional signals sampled in polar coordinates. Chapter 6 introduces various sampling strategies in polar coordinates and develops a filterbank interpretation of the reconstruction from recurrent nonuniform samples in polar coordinates. As an application of these results, we apply them in Chapter 7 to the reconstruction of tomographic images from their frequency-domain samples, which are taken in polar coordinates. Finally, a summary concluding the work and a description of some topics for further research are given in Chapter 8. In the various sections, key results are stated and their detailed derivations are included in the appropriate appendix (Appendices A-D).

Chapter 2
Preliminary Notions

2.1 Periodic Bandlimited Signals

In this work, we consider the problem of reconstructing a periodic bandlimited signal x(t) from its nonuniform samples. A real periodic signal x(t) ∈ L_2[0,T], with period T, has a Fourier series representation

$$x(t) = \frac{a_0}{2} + \sum_{k=1}^{\infty} \big( a_k \cos(2\pi k t/T) + b_k \sin(2\pi k t/T) \big) = \sum_{k=-\infty}^{\infty} c_k\, e^{j 2\pi k t/T}, \qquad (2.1)$$

where L_2[0,T] is the space of square-integrable functions on the interval [0,T] and a_k, b_k, and c_k are the Fourier coefficients of the trigonometric and exponential representations of x(t), respectively, which are given by

$$a_k = \frac{2}{T}\int_0^T x(t)\cos(2\pi k t/T)\,dt, \qquad b_k = \frac{2}{T}\int_0^T x(t)\sin(2\pi k t/T)\,dt, \qquad (2.2)$$

and

$$c_k = \frac{1}{T}\int_0^T x(t)\, e^{-j 2\pi k t/T}\,dt. \qquad (2.3)$$

The Fourier transform of x(t) is given by

$$X(\omega) = \int_{-\infty}^{\infty} x(t)\, e^{-j\omega t}\,dt = \sum_{k=-\infty}^{\infty} c_k \int_{-\infty}^{\infty} e^{j 2\pi k t/T}\, e^{-j\omega t}\,dt = 2\pi \sum_{k=-\infty}^{\infty} c_k\, \delta\!\left(\omega - \frac{2\pi k}{T}\right), \qquad (2.4)$$

where δ(ω) is the Dirac delta function. A T-periodic signal x(t) is said to be bandlimited to 2πK/T if c_k = 0 for |k| > K. Such signals are also known as trigonometric polynomials of degree K [59]. We will denote the space of T-periodic signals bandlimited to 2πK/T by V_K, and we will say that such signals are K-bandlimited. From (2.1) we conclude that the space of T-periodic K-bandlimited functions is spanned by the 2K + 1 orthogonal functions cos(2πkt/T), 0 ≤ k ≤ K, and sin(2πkt/T), 1 ≤ k ≤ K, namely

$$V_K = \mathrm{span}\left\{ \tfrac{1}{2},\ \cos\!\left(\tfrac{2\pi t}{T}\right),\ \sin\!\left(\tfrac{2\pi t}{T}\right),\ \ldots,\ \cos\!\left(\tfrac{2\pi K t}{T}\right),\ \sin\!\left(\tfrac{2\pi K t}{T}\right) \right\}. \qquad (2.5)$$

Thus, the dimension of the space V_K is M = 2K + 1. Theorem 3.1 of Section 3.1 asserts that a signal x(t) ∈ V_K can be perfectly reconstructed from a finite number N of its arbitrarily spaced samples, where N ≥ 2K + 1.

We define the inner product of two T-periodic functions x(t) and y(t) as

$$\langle x(t), y(t) \rangle = \frac{1}{T}\int_0^T x(t)\, y^*(t)\,dt, \qquad (2.6)$$

where y*(t) is the complex conjugate of the function y(t). The norm of a T-periodic function x(t) is given by the square root of the inner product of x(t) with itself:

$$\| x(t) \|_{L_2[0,T]} = \sqrt{\langle x(t), x(t) \rangle} = \left( \frac{1}{T}\int_0^T |x(t)|^2\,dt \right)^{1/2}. \qquad (2.7)$$

In the next section we present a brief introduction to the theory of frames, which we will rely on throughout this work.

2.2 Frames and Bases

Frames and bases provide a general framework for studying nonuniform sampling of periodic bandlimited signals. Given a basis for a vector space, each element in the space can be written uniquely as a linear combination of the basis elements. A frame is essentially an overcomplete basis, which leads to redundant signal expansions. As we will show, this redundancy can be useful.
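Before turning to frames, a short numerical companion to the definitions of Section 2.1 (my own sketch, assuming NumPy; all names are illustrative). It approximates the coefficients (2.3) by a uniform Riemann sum over one period, checks that a trigonometric polynomial of degree K satisfies c_k = 0 for |k| > K, and evaluates the norm (2.7):

```python
import numpy as np

T, K = 10.0, 4
t = np.linspace(0.0, T, 4096, endpoint=False)        # uniform grid over one period
x = 0.7 + np.cos(2 * np.pi * 3 * t / T) - 0.4 * np.sin(2 * np.pi * 4 * t / T)   # x in V_4

def fourier_coeff(x, t, T, k):
    """Riemann-sum approximation of (2.3): c_k = (1/T) int_0^T x(t) e^{-j2 pi k t/T} dt."""
    return np.mean(x * np.exp(-1j * 2 * np.pi * k * t / T))

def inner(x, y):
    """Riemann-sum approximation of the inner product (2.6)."""
    return np.mean(x * np.conj(y))

ks = np.arange(-8, 9)
ck = np.array([fourier_coeff(x, t, T, k) for k in ks])
print("largest |c_k| for |k| > K:", np.max(np.abs(ck[np.abs(ks) > K])))   # ~ 0
print("norm (2.7):", np.sqrt(inner(x, x).real))
```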

2.2.1 Definitions

Frames were originally introduced by Duffin and Schaeffer [60] in the context of nonharmonic Fourier series. Recent interest in frames has been spurred by their utility in analyzing discrete wavelet transforms and time-frequency decompositions [4]. Below, we present some basic results from frame theory in finite-dimensional vector spaces. For a comprehensive treatment of frames and bases see, e.g., [61].

Our approach to reconstructing a periodic K-bandlimited signal x(t) from its N nonuniform samples {x(t_i)}_{i=1}^N is to represent it as a linear combination of functions ϕ_i(t) ∈ L_2[0,T], i.e.,

$$x(t) = \sum_{i=1}^{N} x(t_i)\, \varphi_i(t). \qquad (2.8)$$

The corresponding functional space V(ϕ) ⊆ L_2[0,T], i.e., the space containing the reconstructed signal x(t) ∈ V(ϕ), is defined as

$$V(\varphi) = \Big\{ \sum_{i=1}^{N} c_i\, \varphi_i(t) \;:\; c \in \mathbb{R}^N \Big\}. \qquad (2.9)$$

The set of functions {ϕ_i(t)}_{i=1}^N can either be linearly independent, in which case they form a basis for V(ϕ), or they can be linearly dependent, in which case they form a frame for V(ϕ). The family of functions {ϕ_i(t)}_{i=1}^N ⊂ V(ϕ) is called a frame for V(ϕ) if there exist constants A > 0 and B < ∞ such that for all x(t) ∈ V(ϕ) [4]

$$A\, \| x(t) \|^2 \;\le\; \sum_{i=1}^{N} |\langle x(t), \varphi_i(t) \rangle|^2 \;\le\; B\, \| x(t) \|^2. \qquad (2.10)$$

The constants A and B are called the frame bounds, and r = N/M is the redundancy, where M is the dimension of V(ϕ). The upper bound B guarantees the numerical stability of the reconstruction and is automatically satisfied if N is finite. The lower bound A guarantees that the vectors {ϕ_i(t)}_{i=1}^N span V(ϕ); thus M ≤ N. This condition turns out to be sufficient: every finite set of vectors is a frame for its span. If the two frame bounds are equal, A = B, then the frame is called a tight frame. For a tight frame, the frame condition

(2.10) can be rewritten as

$$\sum_{i=1}^{N} |\langle x(t), \varphi_i(t) \rangle|^2 = A\, \| x(t) \|^2, \qquad x(t) \in V(\varphi). \qquad (2.11)$$

If, in addition, ‖ϕ_i(t)‖ = 1 for all i = 1, ..., N, then it can be shown that A = r = N/M. If N = M, then {ϕ_i(t)}_{i=1}^N is an orthonormal basis.

We now define the set transformation F, associated with the frame {ϕ_i(t)}_{i=1}^N, as a linear operator from R^N to V(ϕ), which is given by

$$F c = \sum_{i=1}^{N} c_i\, \varphi_i(t), \qquad c \in \mathbb{R}^N. \qquad (2.12)$$

F is usually called the pre-frame operator, or the synthesis operator. The corresponding adjoint operator F* from V(ϕ) to R^N is given by

$$(F^{*} x(t))_i = \langle x(t), \varphi_i(t) \rangle, \qquad x(t) \in V(\varphi), \qquad (2.13)$$

and is called the analysis operator. By composing F with its adjoint F*, we obtain the frame operator S = F F* from V(ϕ) to V(ϕ) as

$$S x(t) = \sum_{i=1}^{N} \langle x(t), \varphi_i(t) \rangle\, \varphi_i(t). \qquad (2.14)$$

Using definition (2.14), the frame condition (2.10) can be rewritten as

$$A I \le S \le B I, \qquad (2.15)$$

where I is the identity operator on V(ϕ). It was proved in [4] that the frame operator S is invertible, and from (2.15) we conclude that its eigenvalues λ_j(S) lie in the interval [A, B]; in the tight frame case, all of the eigenvalues are equal. Since we want the frame bounds to be as tight as possible, A = min_j λ_j(S) and B = max_j λ_j(S) is the optimal choice for the frame bounds [61].

We also define the operator R = F* F, which is known as the frame correlation; it has a

matrix representation with entries

$$R_{ij} = \langle \varphi_i(t), \varphi_j(t) \rangle. \qquad (2.16)$$

The fact that the nonzero eigenvalues of the two operators S and R are equal [62] allows us to calculate the eigenvalues λ_j(S) of the operator S, and hence the optimal frame bounds in (2.10), by first computing the eigenvalues of R; this is much easier, since R is a matrix while S is an operator.

2.2.2 Stability

One of the most important properties of a reconstruction algorithm is its stability, namely the effect of a small perturbation of the samples on the reconstructed signal. We represent our reconstruction algorithm as a linear combination of functions {ϕ_i(t)} as in (2.8):

$$x(t) = \sum_{i=1}^{N} x(t_i)\, \varphi_i(t). \qquad (2.17)$$

If the samples c = {x(t_i)}_{i=1}^N of the signal x(t) in (2.17) are perturbed by a sequence w, then, according to [63], the perturbation in the output x_w(t) satisfies

$$\sqrt{A}\, \| P w \| \;\le\; \| x_w(t) \|_{L_2[0,T]} \;\le\; \sqrt{B}\, \| P w \|, \qquad (2.18)$$

where P is the orthogonal projection onto the range space (image) of the operator F* and the constants A and B are the tightest possible bounds in (2.10). Here ‖c‖ denotes the norm of a vector c in an N-dimensional vector space, which is given by

$$\| c \| = \left( \frac{1}{N} \sum_{i=1}^{N} |c_i|^2 \right)^{1/2}. \qquad (2.19)$$

Combining relation (2.18) with (2.10), we can also show that

$$\sqrt{\frac{A}{B}}\, \frac{\| P w \|}{\| c \|} \;\le\; \frac{\| x_w(t) \|_{L_2[0,T]}}{\| x(t) \|_{L_2[0,T]}} \;\le\; \sqrt{\frac{B}{A}}\, \frac{\| P w \|}{\| c \|}. \qquad (2.20)$$

Based on the inequality (2.20), Unser and Zerubia [64] defined the condition number of the reconstruction algorithm as the ratio

$$\kappa = \frac{B}{A}. \qquad (2.21)$$

This quantity provides an indicator of the stability and overall robustness of the reconstruction algorithm. The optimal situation is obviously κ = 1, which holds in the case of an orthonormal basis or a tight frame. We also observe that the orthogonal projection P in (2.20) may significantly reduce the norm of the perturbing sequence w, in which case the reconstruction algorithm (2.17) results in a more stable reconstruction.

We note that in the case where {ϕ_i(t)}_{i=1}^N of (2.17) is a set of linearly independent functions, i.e., forms a basis for the space V(ϕ) defined in (2.9), the range space of the operator F* is of dimension N. Therefore, the orthogonal projection P in this case satisfies Pw = w, i.e., P = I, and the expression (2.20) can be rewritten as

$$\frac{1}{\sqrt{\kappa}}\, \frac{\| w \|}{\| c \|} \;\le\; \frac{\| x_w(t) \|_{L_2[0,T]}}{\| x(t) \|_{L_2[0,T]}} \;\le\; \sqrt{\kappa}\, \frac{\| w \|}{\| c \|}. \qquad (2.22)$$

From (2.22) we conclude that if the set of reconstruction functions constitutes a basis, then the stability of the reconstruction is indicated by the value of κ alone. The results presented in this section will be used for the stability analysis of the reconstruction algorithms in Chapters 3 and 4, where the sets of reconstruction functions constitute a basis and a frame, respectively.

2.2.3 Examples

In this section, to illustrate the concepts of frames and bases, we present several examples in the vector space R². Specifically, we consider the following four sets of vectors in R²:

Set 1: a_1 = (1, 0), a_2 = (0, 1)
Set 2: b_1 = (1, 0.25), b_2 = (0.5, 0.5)
Set 3: c_1 = (0, 1), c_2 = (−√3/2, −1/2), c_3 = (√3/2, −1/2)
Set 4: d_1 = (0.75, 0), d_2 = (1, 1), d_3 = (−0.8, 0.2), d_4 = (0.9, −0.45)

These sets are presented in Figs. 2-1(a)-(d), respectively. From Figs. 2-1(a)-(b) we can immediately conclude that sets 1 and 2 form bases for R², since there are two linearly independent vectors in each set.

Figure 2-1: Examples of an orthogonal basis (a), a basis (b), a tight frame with r = 1.5 (c), and a frame with r = 2 (d) for the space R².

We also observe that sets 3 and 4 constitute frames for R², with redundancy ratios r = 3/2 and r = 2, respectively. To support these observations, and for a deeper investigation of the properties of these sets, we resort to the tools presented in the previous sections. We first calculate the correlation matrix R (2.16) of each set, where the inner product of two vectors a = (a_x, a_y) and b = (b_x, b_y) is defined by ⟨a, b⟩ = a_x b_x + a_y b_y. Computing the eigenvalues of these correlation matrices, we obtain

$$\{1,\ 1\}, \quad \{0.096,\ 1.467\}, \quad \{1.5,\ 1.5,\ 0\}, \quad \text{and} \quad \{1.141,\ 3.113,\ 0,\ 0\} \qquad (2.23)$$

for sets 1-4, respectively. Since there are two nonzero eigenvalues in each set, we conclude that all four sets span the two-dimensional space R², i.e., each of them forms either a basis or a frame for R². We now calculate the condition number κ (2.21) of each set, which is defined as the ratio of the largest and smallest nonzero eigenvalues of the matrix R. From

(2.23) we have that

$$\kappa_1 = 1, \quad \kappa_2 = 15.28, \quad \kappa_3 = 1, \quad \kappa_4 = 2.73. \qquad (2.24)$$

From this result we conclude that sets 1 and 3 constitute an orthonormal basis and a tight frame, respectively. We also note that set 2, due to the large value of κ, results in a very unstable representation of vectors in the space R².

In Chapters 3 and 4 we provide a stability analysis for the sets of reconstruction functions which span the space of periodic bandlimited signals. The set of functions in Chapter 3 forms a basis, and the set in Chapter 4 a frame.
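This example is easy to verify numerically. The sketch below (mine, not the thesis's code, assuming NumPy) builds the frame correlation matrix (2.16) for each of the four sets and prints its eigenvalues and condition number; the output should match (2.23)-(2.24) up to rounding:

```python
import numpy as np

sets = {
    "set 1": np.array([[1.0, 0.0], [0.0, 1.0]]),
    "set 2": np.array([[1.0, 0.25], [0.5, 0.5]]),
    "set 3": np.array([[0.0, 1.0],
                       [-np.sqrt(3) / 2, -0.5],
                       [np.sqrt(3) / 2, -0.5]]),
    "set 4": np.array([[0.75, 0.0], [1.0, 1.0], [-0.8, 0.2], [0.9, -0.45]]),
}

for name, V in sets.items():
    R = V @ V.T                                   # frame correlation, R_ij = <v_i, v_j>  (2.16)
    lam = np.sort(np.linalg.eigvalsh(R))
    nonzero = lam[lam > 1e-10]
    A, B = nonzero.min(), nonzero.max()           # optimal frame bounds in (2.10)
    print(name, "eigenvalues:", np.round(lam, 3), " kappa = B/A =", round(B / A, 2))
```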

Chapter 3
Nonuniform Sampling of Periodic Bandlimited Signals

The most common form of sampling used in the context of DSP is uniform sampling. However, in many practical situations the data cannot be sampled uniformly. In cases where the data are sampled nonuniformly, standard reconstruction techniques developed mainly for uniformly spaced samples [2] cannot be applied. A formula for reconstructing a nonperiodic bandlimited signal from arbitrarily spaced samples was proposed by Yao and Thomas in [28]. Several extensions of the nonuniform sampling theorem for special distributions of sampling points are known, among them jittered sampling, shifted sampling, recurrent nonuniform sampling, etc. [29, 31, 65]. A review of these reconstruction methods can be found in [30]. Another well-known sampling theorem, by Papoulis [32], generalizes uniform sampling of a signal: a bandlimited signal can be reconstructed from uniformly spaced samples of the outputs of M linear time-invariant (LTI) systems with the signal as their input, sampled at one-Mth of the Nyquist rate.

Numerical implementation of these reconstruction methods on a digital computer is not possible due to the infinite dimension of the problem, i.e., there are an infinite number of sampling points and the reconstruction functions typically have infinite length. Thus, in practice, the infinite functions involved in the reconstruction must be truncated. Another issue is that in most practical applications we have only a finite number of samples, which usually makes perfect reconstruction of the infinite-length signal impossible. In both cases the problem of perfectly reconstructing the infinite-length

signal reduces to the problem of approximating a signal from a finite number of nonuniform samples. Thus, efficient approximation techniques based on finite-dimensional models are required.

Generally, when dealing with finite-dimensional signals, approximation by trigonometric polynomials provides a simple and appropriate way to handle the reconstruction [36, 38, 39]. In Section 2.1, we showed that a trigonometric polynomial of degree K is a periodic K-bandlimited signal. In other words, we assume that the reconstructed signal is periodic and bandlimited, i.e., a smooth signal which is periodically extended across the boundaries. This approach is frequently encountered in image processing [40, 41, 42, 43]. There are also applications where the nonuniformly sampled signal is periodic and bandlimited. For example, a two-dimensional signal sampled in polar coordinates is periodic in its azimuthal coordinate.

Several reconstruction methods of periodic bandlimited signals from nonuniform samples have been previously suggested. These methods involve iterative algorithms [36, 50], which are computationally demanding and have potential issues of convergence. Recently, there has been some work on reconstructing a signal from a finite number of samples by applying Neumann boundary conditions [51], i.e., a symmetric extension across the end points of the sampling intervals. This approach may improve the quality of the reconstruction at the boundaries of the signal when the signal is not really periodic. Reconstruction of a non-bandlimited periodic signal from uniform samples was discussed in [52].

In this chapter, we consider the problem of reconstructing a periodic bandlimited signal from nonuniform samples. In Fig. 3-1 we show an example of a distribution of N uniform (top) and nonuniform (bottom) samples of a T-periodic K-bandlimited signal with T = 10, K = 4 and N = 15. Theorem 3.1 of Section 3.1 asserts that such a signal can be perfectly reconstructed from a finite number N of its arbitrarily spaced samples, where N ≥ 2K + 1, and provides a noniterative reconstruction algorithm. Then, in the same section, we analyze the stability of the reconstruction and discuss some of its properties. In Sections 3.2 and 3.3, we consider two special distributions of sampling points: uniform and recurrent nonuniform. We show that for these cases the reconstruction functions, as well as the stability analysis of the algorithm, are simplified significantly.

Figure 3-1: Example of a periodic bandlimited signal with uniform (top) and nonuniform (bottom) samples taken over one period T = 10. The locations of the sampling points t_p and the corresponding sampled values x(t_p) are marked. Two dashed lines serve as delimiters for one period [0, 10] of the signal x(t).

3.1 Reconstruction from Nonuniform Samples

The problem of reconstructing a T-periodic K-bandlimited signal x(t) from its samples was first considered by Cauchy in 1841 [1] and was later investigated by several authors [47, 49, 53, 54, 55, 56, 57, 58]. In Section 2.1, we defined the space V_K of T-periodic K-bandlimited signals and showed that any signal x(t) ∈ V_K has the Fourier series representation

$$x(t) = \sum_{k=-K}^{K} c_k\, e^{j 2\pi k t/T}. \qquad (3.1)$$

Therefore, a straightforward approach to the problem of reconstructing x(t) of (3.1) from N nonuniform samples {x(t_p)}_{p=0}^{N-1} is to solve the set of N linear equations with 2K + 1

unknowns {c_k}_{k=-K}^{K}:

$$x(t_p) = \sum_{k=-K}^{K} c_k\, e^{j 2\pi k t_p/T}, \qquad p = 0, \ldots, N-1. \qquad (3.2)$$

Equations (3.2) can be equivalently expressed in matrix form as

$$\mathbf{x} = A\, \mathbf{c}, \qquad (3.3)$$

where

$$A = \begin{bmatrix}
e^{-j 2\pi K t_0/T} & \cdots & 1 & \cdots & e^{j 2\pi K t_0/T} \\
e^{-j 2\pi K t_1/T} & \cdots & 1 & \cdots & e^{j 2\pi K t_1/T} \\
\vdots & & \vdots & & \vdots \\
e^{-j 2\pi K t_{N-1}/T} & \cdots & 1 & \cdots & e^{j 2\pi K t_{N-1}/T}
\end{bmatrix}, \qquad (3.4)$$

$$\mathbf{x} = \big( x(t_0)\ \ x(t_1)\ \cdots\ x(t_{N-1}) \big)^{T}, \qquad
\mathbf{c} = \big( c_{-K}\ \cdots\ c_0\ \cdots\ c_K \big)^{T}.$$

This method requires computing the inverse (if N = 2K + 1) or the pseudo-inverse (if N > 2K + 1) of the N × (2K + 1) matrix A of (3.4), which is computationally demanding for large values of N. Moreover, once the set {c_k}_{k=-K}^{K} is found, the continuous-time signal x(t) is represented as a linear combination of complex exponential functions according to (3.1). In addition to the computational complexity, this method does not provide insight into the behavior of the reconstruction algorithm in noisy environments, and we cannot use efficient signal processing techniques such as filtering and convolution in the reconstruction process.

Instead, we propose an alternative reconstruction approach, in which we directly represent the signal x(t) as a linear combination of reconstruction functions {h_p(t)}, i.e.,

$$x(t) = \sum_{p=0}^{N-1} x(t_p)\, h_p(t). \qquad (3.5)$$

This method, together with the theory of frames, allows us to analyze the stability of the reconstruction algorithm, which we provide in this chapter. It will also allow us to develop efficient filter and filterbank structures for reconstruction from uniform and recurrent nonuniform samples, which are developed in Chapter 5.
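For illustration, a short sketch of this direct approach (my own code, assuming NumPy; all names are illustrative): it forms the matrix A of (3.4) and solves (3.3) in the least-squares sense, which covers both N = 2K + 1 and the oversampled case N > 2K + 1.

```python
import numpy as np

def fourier_coeffs_from_samples(t_p, x_p, K, T):
    """Solve x = A c of (3.3)-(3.4) for the 2K+1 Fourier coefficients c_k."""
    t_p = np.asarray(t_p, float)
    k = np.arange(-K, K + 1)
    A = np.exp(1j * 2 * np.pi * np.outer(t_p, k) / T)      # N x (2K+1) matrix of (3.4)
    c, *_ = np.linalg.lstsq(A, np.asarray(x_p, complex), rcond=None)
    return k, c

def evaluate(k, c, t, T):
    """Evaluate the reconstructed signal (3.1) at times t."""
    return np.real(np.exp(1j * 2 * np.pi * np.outer(t, k) / T) @ c)

# Example: K = 4, T = 10, N = 2K+1 = 9 nonuniform samples of a signal in V_4
T, K = 10.0, 4
rng = np.random.default_rng(1)
t_p = np.sort(rng.uniform(0, T, 2 * K + 1))
x = lambda t: 1.0 + 0.7 * np.cos(2 * np.pi * 3 * t / T) - 0.3 * np.sin(2 * np.pi * 4 * t / T)
k, c = fourier_coeffs_from_samples(t_p, x(t_p), K, T)
t = np.linspace(0, T, 400)
print("max reconstruction error:", np.max(np.abs(evaluate(k, c, t, T) - x(t))))
```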

The set of reconstruction functions {h_p(t)} for N = 2K + 1 uniform samples was first derived by Cauchy [1], and later by Stark [53], Brown [55] and Schanze [47] in different ways. Reconstruction from any even number N of uniform samples was considered in [47] and [54]. As we show in this work, these results are all special cases of Theorem 3.1 derived in this section.

Reconstructing a periodic bandlimited signal from nonuniform samples is considerably more complicated. When the number of sampling points N is odd and satisfies N ≥ 2K + 1, the reconstruction can be obtained using the Lagrange interpolation formula for trigonometric polynomials [48]. For any even N greater than 2K + 1, Lagrange interpolation for exponential polynomials generates a set of complex-valued functions {h_p(t)} [49, 20]. As a result, in the presence of noise the reconstructed signal x(t) may include a complex part. In Theorem 3.1 below we show that the reconstruction can be obtained using real-valued functions that are simpler than those derived in [49]. The proof of this theorem is given in Appendix A.

Theorem 3.1. Let x(t) be a T-periodic signal bandlimited to 2πK/T. Then x(t) can be perfectly reconstructed from its N ≥ 2K + 1 nonuniformly spaced samples x(t_p) as

$$x(t) = \sum_{p=0}^{N-1} x(t_p)\, h_p(t), \qquad (3.6)$$

where

$$h_p(t) =
\begin{cases}
\displaystyle\prod_{q=0,\, q \ne p}^{N-1} \frac{\sin(\pi(t - t_q)/T)}{\sin(\pi(t_p - t_q)/T)}, & N \ \text{odd}; \\[2ex]
\displaystyle\cos\!\left(\frac{\pi(t - t_p)}{T}\right) \prod_{q=0,\, q \ne p}^{N-1} \frac{\sin(\pi(t - t_q)/T)}{\sin(\pi(t_p - t_q)/T)}, & N \ \text{even}.
\end{cases} \qquad (3.7)$$

The proof of Theorem 3.1 follows from Yen's formula [29] for signal reconstruction from recurrent nonuniform samples, and from the fact that x(nT + t_p) = x(t_p). In the next subsection we discuss the properties of the reconstruction algorithm proposed by Theorem 3.1.
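The closed form (3.6)-(3.7) is straightforward to evaluate. The following sketch of mine (not the thesis's implementation, assuming NumPy) codes h_p(t) for either parity of N and checks perfect reconstruction of a signal in V_K from N = 2K + 1 nonuniform samples:

```python
import numpy as np

def h(p, t, t_p, T):
    """Reconstruction function h_p(t) of (3.7)."""
    t = np.asarray(t, float)
    N = len(t_p)
    prod = np.ones_like(t)
    for q in range(N):
        if q != p:
            prod *= np.sin(np.pi * (t - t_p[q]) / T) / np.sin(np.pi * (t_p[p] - t_p[q]) / T)
    if N % 2 == 0:                                   # even N carries the extra cosine factor
        prod *= np.cos(np.pi * (t - t_p[p]) / T)
    return prod

def reconstruct(t, t_p, x_p, T):
    """x(t) = sum_p x(t_p) h_p(t), equation (3.6)."""
    return sum(x_p[p] * h(p, t, t_p, T) for p in range(len(t_p)))

# T = 10, K = 4 signal sampled at N = 2K+1 = 9 nonuniform points
T, K = 10.0, 4
rng = np.random.default_rng(2)
t_p = np.sort(rng.uniform(0, T, 2 * K + 1))
x = lambda t: 0.5 + np.cos(2 * np.pi * 2 * t / T) + 0.3 * np.sin(2 * np.pi * 4 * t / T)
t = np.linspace(0, T, 500)
err = np.max(np.abs(reconstruct(t, t_p, x(t_p), T) - x(t)))
print("max reconstruction error:", err)             # should be at machine-precision level
```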

3.1.1 Properties of the Reconstruction Functions

We first note that in the case of an odd number N of nonuniform samples, the reconstruction algorithm proposed by Theorem 3.1 is Lagrange interpolation for trigonometric polynomials [48]. In the case of N even, however, Theorem 3.1 presents a new reconstruction method.

In Appendix B we expand the reconstruction functions {h_p(t)} of (3.7) into sums of complex exponential functions,

$$h_p(t) = \sum_{l=-\lfloor N/2 \rfloor}^{\lfloor N/2 \rfloor} c_{pl}\, e^{j 2\pi l t/T}, \qquad (3.8)$$

where ⌊·⌋ denotes the floor operator, which rounds down to the nearest integer, and the complex coefficients c_{pl} are the result of the expansion. This representation of the reconstruction functions will be used in various sections of this work. Specifically, we will use (3.8) in Chapter 4 to develop an alternative reconstruction algorithm for nonuniformly oversampled periodic bandlimited signals. From (3.8) we may conclude that these functions are periodic in T and have a limited number of harmonics, i.e., they are bandlimited. We can also show that the functions {h_p(t)}_{p=0}^{N-1} are linearly independent, i.e., they form a basis. Theorem 3.2 below defines the space of periodic bandlimited signals for which the set of functions {h_p(t)}_{p=0}^{N-1} constitutes a basis. The proof of this theorem is given in Appendix C.

Theorem 3.2. The set of reconstruction functions {h_p(t)}_{p=0}^{N-1} defined by (3.7) constitutes a basis for the space V, where

$$V =
\begin{cases}
V_{(N-1)/2}, & N \ \text{odd}; \\
V_{(N-2)/2} \oplus \mathrm{span}\{\sin(\pi(Nt - \sigma_t)/T)\}, & N \ \text{even}.
\end{cases} \qquad (3.9)$$

Here

$$\sigma_t = \sum_{p=0}^{N-1} t_p \qquad (3.10)$$

and V_K is the space of T-periodic K-bandlimited functions.

It follows directly from Theorem 3.1 that the set of functions {h_p(t)}_{p=0}^{N-1} provides perfect reconstruction for any T-periodic K-bandlimited function, where K ≤ (N−1)/2. In the proof of Theorem 3.2 we show that the function sin(π(Nt − σ_t)/T) can be perfectly reconstructed from an even number N of its nonuniform samples.

From Theorem 3.2, we observe that in the case of N even, the functions {h_p(t)}_{p=0}^{N-1} do not span a complete space of periodic bandlimited signals. To complete this space to V_{N/2} we would need to add to it the function sin(π(Nt − θ)/T), where θ ≠ σ_t. We will show in this chapter that this fact may result in less stable behavior of the reconstruction algorithm for N even.

An important observation from Theorem 3.2 is that in the oversampled case, i.e., N > 2K + 1, the functions {h_p(t)} span a space which is larger than the space V_K containing the signal x(t) that is sampled. This fact is the basis for the development of an alternative reconstruction method in Chapter 4.

We can immediately verify that the reconstruction functions {h_p(t)}_{p=0}^{N-1} of Theorem 3.1 have the interpolation property, namely

$$h_p(t_k) =
\begin{cases}
\displaystyle\prod_{q=0,\, q \ne p}^{N-1} \frac{\sin(\pi(t_k - t_q)/T)}{\sin(\pi(t_p - t_q)/T)}, & N \ \text{odd}; \\[2ex]
\displaystyle\cos\!\left(\frac{\pi(t_k - t_p)}{T}\right) \prod_{q=0,\, q \ne p}^{N-1} \frac{\sin(\pi(t_k - t_q)/T)}{\sin(\pi(t_p - t_q)/T)}, & N \ \text{even},
\end{cases}
\;=\;
\begin{cases} 1, & k = p, \\ 0, & k \ne p, \end{cases}
\qquad k, p = 0, 1, \ldots, N-1. \qquad (3.11)$$

If x(t) is not bandlimited, then the reconstruction $\hat{x}(t)$ given by Theorem 3.1 is not equal to x(t). Nonetheless, the interpolation property (3.11) guarantees a consistent reconstruction of the signal x(t), i.e., $\hat{x}(t_p) = x(t_p)$. Consistency of the reconstruction algorithm is an important property for many signal/image processing applications [63, 66]. One such application is the recovery of missing samples from the set of remaining samples [15, 16]. In this application, consistency of the reconstruction algorithm guarantees the desirable property that the set of remaining samples stays unchanged after reconstruction. Of course, not all reconstruction algorithms provide consistent reconstruction of the signal, as we will see in Chapter 4.

To conclude this section, we summarize two basic properties of the reconstruction functions {h_p(t)} developed in Theorem 3.1:

- The interpolation property (3.11), which guarantees consistent reconstruction.
- The functions form a basis, i.e., they are a set of linearly independent functions.

In the next section we provide a stability analysis for the reconstruction algorithm

proposed by Theorem 3.1.

3.1.2 Stability Analysis

A very important characterization of any reconstruction algorithm is its stability. We say that a reconstruction algorithm is stable if small perturbations of the samples x(t_p) result in small perturbations of the reconstruction $\hat{x}(t)$. The relation between these two perturbations is well described by expression (2.20), or, in our case, where the set of reconstruction functions constitutes a basis, by (2.22). From (2.22), we conclude that the condition number κ of (2.21) is a good indicator of the stability and overall robustness of the reconstruction algorithm. The condition number is defined as the ratio of the largest and smallest eigenvalues of the frame operator S, given in (2.14). Using the fact that the nonzero eigenvalues of the operator S and of the correlation matrix R of (2.16) are the same [62], the condition number κ can be calculated using the eigenvalues of the correlation matrix R. For the reconstruction algorithm of Theorem 3.1, the entries of the matrix R are given by (2.16):

$$R_{i,j} = \langle h_i(t), h_j(t) \rangle, \qquad (3.12)$$

where ⟨·,·⟩ denotes the inner product (2.6) of two periodic functions. Because the set {h_p(t)}_{p=0}^{N-1} constitutes a basis, i.e., it is a set of N linearly independent functions, the matrix R is of full rank and has no zero eigenvalues. Therefore, we calculate the condition number κ of the reconstruction algorithm of Theorem 3.1 as the ratio of the largest and smallest eigenvalues of the correlation matrix R, given by (3.12).

We observe from (3.12) that κ depends only on the distribution of the sampling points and not on the sampled signal. This number may be very large in critical cases, where samples are very close to each other or there exist large gaps between sampling points. As a result, the reconstruction algorithm becomes very unstable in these cases. To illustrate this point, we present two examples, where 7 sampling points are nonuniformly spaced over one period T of the signal. In this case the set of reconstruction functions is a basis for the space V_3. The first set of these nonuniform samples is presented in Fig. 3-2(a). The second set, depicted in Fig. 3-2(b), differs from the first in the location of one sample only. The change in location of the 6th sample in the second set results in a large gap between samples t_5^b and t_6, and samples t_4 and t_5^b are now very close to each other.

Figure 3-2: Two sets of 7 nonuniform samples where t_0 = 0.7, t_1 = 1.7, t_2 = 3.4, t_3 = 4.2, t_4 = 5.4, t_6 = 9, and T = 10. The location of the 6th point is (a) t_5^a = 7.4 and (b) t_5^b = 5.7.

Given each set of 7 nonuniform samples, we obtain the corresponding set of 7 reconstruction functions according to Theorem 3.1. Substituting the two sets of reconstruction functions into (3.12) results in two 7 × 7 correlation matrices R_a and R_b, where the inner product was obtained by numerical integration. The eigenvalues of the matrices R_a and R_b are given by

$$\lambda(R_a) = \{0.58,\ 0.66,\ 0.79,\ 1.13,\ 1.24,\ 1.62,\ 4.79\}, \qquad (3.13)$$

and

$$\lambda(R_b) = \{0.47,\ 0.62,\ 0.68,\ 1.12,\ 1.79,\ 2.89,\ \ldots\}, \qquad (3.14)$$

respectively. From (3.13) and (3.14) it follows that the condition numbers of the two sampling schemes are κ_a = 8.26 and κ_b = …, respectively. From this result we conclude that reconstruction from the first set of samples (Fig. 3-2(a)) is more stable than reconstruction from the second set (Fig. 3-2(b)). Thus, in the case where both sampled sequences x(t_p^a) and x(t_p^b) are perturbed by noise of the same average power, the first set will result in a significantly lower reconstruction error.

Calculating the condition number κ directly from the eigenvalues of the correlation matrix R can be computationally demanding for large values of N. In Sections 3.2 and 3.3, we present methods for efficient evaluation of the condition number for two special cases of sampling distributions: uniform and recurrent nonuniform sampling. We will also show that as the samples approach the uniform distribution the condition number κ tends to its minimal value, and reaches this value when the samples are uniform.

We now consider two special cases of Theorem 3.1: uniform sampling and recurrent nonuniform sampling, which are very common in signal processing algorithms [31, 67, 68]. As we show, for these sampling schemes, the reconstruction formula of Theorem 3.1 can be simplified.
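The condition numbers of this example can be reproduced numerically. In the sketch below (mine, not the thesis's code; the inner product (2.6) is approximated by a uniform Riemann sum), R is built from (3.12) for the two 7-point sets of Fig. 3-2; the printed values should approximately reproduce (3.13) and κ_a = 8.26, and they supply the corresponding values for the second set.

```python
import numpy as np

T = 10.0
t_grid = np.linspace(0.0, T, 8192, endpoint=False)     # dense grid for the inner product (2.6)

def h(p, t, t_p):
    """Reconstruction function h_p(t) of (3.7) for odd N."""
    out = np.ones_like(t)
    for q in range(len(t_p)):
        if q != p:
            out *= np.sin(np.pi * (t - t_p[q]) / T) / np.sin(np.pi * (t_p[p] - t_p[q]) / T)
    return out

def condition_number(t_p):
    H = np.array([h(p, t_grid, t_p) for p in range(len(t_p))])
    R = H @ H.T / len(t_grid)                           # R_ij = <h_i, h_j>, equation (3.12)
    lam = np.sort(np.linalg.eigvalsh(R))
    return lam, lam[-1] / lam[0]

set_a = [0.7, 1.7, 3.4, 4.2, 5.4, 7.4, 9.0]             # Fig. 3-2(a)
set_b = [0.7, 1.7, 3.4, 4.2, 5.4, 5.7, 9.0]             # Fig. 3-2(b)
for name, s in [("(a)", set_a), ("(b)", set_b)]:
    lam, kappa = condition_number(s)
    print(name, "eigenvalues:", np.round(lam, 2), " kappa:", round(kappa, 2))
```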

42 simplified. In Chapter 5 we show that in these cases the reconstruction can be obtained using LTI filters. 3.2 Uniform Sampling The most popular form of sampling used in the context of DSP is uniform sampling (Fig. 3-1-top), due to its simplicity and stability of reconstruction. In this section we develop an efficient formula for reconstruction from uniform samples and analyze the stability of the reconstruction algorithm Reconstruction Formula We now consider the case of uniform sampling, where the sampling points are uniformly distributed over one period of a T -periodic signal x(t). The set of sampling points is given by t p = pt N, p = 0, 1,... N 1. (3.15) To simplify the expression for h p (t) of Theorem 3.1 for this case we first introduce two trigonometric identities [69, p. 752] N 1 k=0 ( sin θ + kπ ) N = sin(nθ), (3.16) 2N 1 and [69, p. 752] N 1 k=1 sin kπ N = N. (3.17) 2N 1 Substituting the set of uniform samples (3.15) into the product of sine functions of (3.7), we have N 1 q=0 q p sin(π(t t q )/T ) sin(π(t p t q )/T ) = = = N 1 q=0 sin(π(t t q)/t ) sin(π(t t p )/T ) N 1 q=0 sin(π(t p t q )/T ) q p N 1 q=0 sin ( π T t qπ N ) ( ) (p q)π N sin(π(t t p )/T ) N 1 q=0 sin q p ( 1) N 1 N 1 q=0 sin ( π T t + qπ N sin(π(t t p )/T )( 1) N 1 N 1 k=1 sin ( kπ N ) ) (3.18) 31

where the last equality follows from the facts that $\sin(\theta) = -\sin(-\theta)$ and $\sin(\theta - \phi\pi) = -\sin(\theta + (1-\phi)\pi)$. Using (3.16) and (3.17), we obtain an equivalent representation of this product of sines, which holds only for the case of uniformly spaced data,

$\prod_{q=0,\,q\neq p}^{N-1} \dfrac{\sin(\pi(t-t_q)/T)}{\sin(\pi(t_p-t_q)/T)} = \dfrac{2^{N-1}\sin(N\pi t/T)}{N\,2^{N-1}\sin(\pi(t-t_p)/T)} = \dfrac{\sin(N\pi(t-t_p)/T)}{N\sin(\pi(t-t_p)/T)}.$   (3.19)

The last expression is a scaled Dirichlet kernel $D_N(t)$ [59, 48], which plays an important role in the theory of digital signal processing. Substituting (3.19) into (3.7), we have that

$h_p(t) = \begin{cases} \dfrac{\sin(N\pi(t-t_p)/T)}{N\sin(\pi(t-t_p)/T)}, & N \text{ odd}; \\[2mm] \cos\!\left(\dfrac{\pi(t-t_p)}{T}\right)\dfrac{\sin(N\pi(t-t_p)/T)}{N\sin(\pi(t-t_p)/T)}, & N \text{ even}, \end{cases}$   (3.20)

which is equal to the interpolation function derived in [1], [53], [55] and [47]. In the next section we explore the properties of the reconstruction functions (3.20) for odd and even numbers of sampling points.

3.2.2 Stability Analysis

In this section we provide a stability analysis of the reconstruction functions (3.20), based on the calculation of the correlation matrix $R$ defined in (3.12). Due to the special structure of the matrix $R$ for uniformly spaced samples, the calculation of its eigenvalues and of the condition number $\kappa$ is immediate. We show that in the case of odd $N$ the set of reconstruction functions $\{h_p(t)\}_{p=0}^{N-1}$ constitutes an orthonormal basis. This result does not hold true for even $N$.

For further investigation of the properties of the set $\{h_p(t)\}_{p=0}^{N-1}$ of (3.20), we first express the functions as sums of cosines, where for simplicity $t - t_p$ is replaced by $\tau$. For the case

44 of odd N, we have h p (t) = sin(nπτ/t ) N sin(πτ/t ) = 1 e jπnτ/t e jπnτ/t N e jπτ/t e jπτ/t = 1 (e jπτ/t e jπτ/t )(e jπ(n 1)τ/T + e jπ(n 3)τ/T e jπ(n 1)τ/T ) N e jπτ/t e jπτ/t = 1 N + 2 N (N 1)/2 l=1 cos(2πlτ/t ). (3.21) Using (3.21), the entry (i, j) of the correlation matrix R, i.e., the inner product of two periodic functions h i (t) and h j (t) from the set {h p (t)} N 1 p=0, is given by h i (t), h j (t) = (3.22) ( ) T 1 (N 1)/2 N T 2 + cos(2πl(t t i )/T ) 1 (N 1)/2 2 + cos(2πl(t t j )/T ) dt. 0 l=1 Using the orthogonality of cosine functions in (3.22), we have h i (t), h j (t) = 4 T 1 (N 1)/2 T N cos(2πl(t t i )/T ) cos(2πl(t t j )/T ) dt l=1 = 1 N T (N 1)/2 T N 2 cos 2 (2πl(t t i )/T ) cos(2πl(t i t j )/T )dt 0 l=1 = 1 N (N 1)/2 T N 2 cos(2πl(t i t j )/T ) l=1 = 1 N (N 1)/2 N 2 cos(2πl(t i t j )/T ) l=1 l=1 T 0 cos 2 (2πl(t t i )/T )dt = 1 N (N 1)/2 N 2 cos(2πl(i j)/n), (3.23) l=1 where the last equation results from substituting t i = it/n. For i = j, equation (3.23) results in h i (t), h i (t) = 1 N N 1 N 2 = 1 2 N. (3.24) 33

45 For the case i j, we use the fact [69, p. 641] N cos(2πlk/n) = 0, for k = 1,..., N. (3.25) l=1 Combining (3.25) with cos(2πlk/n) = cos(2π(n l)k/n) we can show that for i j (N 1)/2 l=1 Substituting (3.26) into (3.23), we have cos(2πl(i j)/n) = 1 2. (3.26) h i (t), h j (t) = 1 N ( N 2 1 ) = 0. (3.27) 2 Finally, from (3.24) and (3.27) we conclude that correlation matrix R (3.12) of the set {h p (t)} (3.20) is diagonal matrix, which is given by R = 1 N I N N, (3.28) where I N N is the N N identity matrix. Therefore, for N odd the set {h p (t)} constitutes an orthonormal basis for the space V (N 1)/2, thus its frame bounds (2.10) are equal (A = B) and its condition number is κ = 1, which is the lowest possible value for κ. Due to the low condition number, the set {h p (t)} of (3.20) provides stable reconstruction in the presence of noise. For the case of N even, the expansion similar to (3.21) results in h p (t) = cos(πτ/t ) sin(nπτ/t ) N sin(πτ/t ) = 1 (e jπτ/t + e jπτ/t )(e jπnτ/t e jπnτ/t ) 2N e jπτ/t e jπτ/t = 1 2N (ejπτ/t + e jπτ/t )(e jπ(n 1)τ/T + e jπ(n 3)τ/T e jπ(n 1)τ/T ) = 1 2N (e jπnτ/t + 2e jπ(n 3)τ/T + 2e jπ(n 5)τ/T e jπnτ/t ) = 1 N + 2 N N/2 1 l=1 cos(2πlτ/t ) + 1 cos(πnτ/t ). (3.29) N 34
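Both expansions are easy to sanity-check numerically. The short sketch below (Python/NumPy, our own verification rather than part of the thesis) compares the closed forms of (3.20) with the cosine expansions (3.21) and (3.29) on a grid that avoids the removable singularities at $\tau = 0$ and $\tau = T$.

```python
import numpy as np

def dirichlet_form(tau, N, T):
    """Closed form of h_p in (3.20): Dirichlet kernel, times cos(pi*tau/T) for even N."""
    d = np.sin(N * np.pi * tau / T) / (N * np.sin(np.pi * tau / T))
    return d if N % 2 else np.cos(np.pi * tau / T) * d

def cosine_expansion(tau, N, T):
    """Right-hand sides of (3.21) (N odd) and (3.29) (N even)."""
    if N % 2:                                   # N odd, (3.21)
        ls = np.arange(1, (N - 1) // 2 + 1)
        return 1 / N + (2 / N) * np.sum(np.cos(2 * np.pi * np.outer(ls, tau) / T), axis=0)
    ls = np.arange(1, N // 2)                   # N even, (3.29)
    return (1 / N + (2 / N) * np.sum(np.cos(2 * np.pi * np.outer(ls, tau) / T), axis=0)
            + (1 / N) * np.cos(np.pi * N * tau / T))

T = 10.0
tau = np.linspace(0.013, T - 0.013, 5000)       # stay away from tau = 0 and tau = T
for N in (9, 10):
    err = np.max(np.abs(dirichlet_form(tau, N, T) - cosine_expansion(tau, N, T)))
    print(f"N = {N}: max deviation between the two forms = {err:.2e}")
```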

46 Using the orthogonality of cosine functions in (3.29), the inner product of two reconstruction functions h i (t) and h j (t) for N even is given by h i (t), h j (t) = 1 N N/2 1 T T N 2 cos(2πl(t i t j )/T ) cos 2 (2πl(t t i )/T )dt + 0 From (3.30), we have that for i = j l=1 + 1 T N 2 cos(πn(t i t j )/T ) T 0 cos 2 (πn(t t i )/T )dt = 1 N N/2 1 N 2 cos(2πl(t i t j )/T ) + 1 2N 2 cos(πn(t i t j )/T ) l=1 = 1 N N/2 1 N 2 cos(2πl(i j)/n) + 1 cos(π(i j)) 2N 2 l=1 = 1 N N/2 1 N 2 cos(2πl(i j)/n) + ( 1)i j 2N 2. (3.30) l=1 h i (t), h i (t) = 1 N ( ) N N N 2 = 2N 1 2N 2. (3.31) For i j, we observe that the N/2 1 arguments of cosine in (3.30) are uniformly distributed from 2π(i j)/n to (π(i j) 2π(i j)/n). Using the fact that cos(t) = cos(π(i j) t) for odd integer i j, we have that For i j even, using (3.25) we show that h i (t), h j (t) = 1 N N 2 = 1 2N 2. (3.32) N/2 1 l=1 Substituting (3.33) into (3.30) we have cos(2πl(i j)/n) = 1. (3.33) h i (t), h j (t) = 1 N N 2 ( 1) + 1 2N 2 = 1 2N 2. (3.34) Combining (3.31), (3.32), and (3.34), it follows that the correlation matrix R of the set 35

47 {h p (t)} (3.20) for N even is a symmetric Toeplitz matrix of the form a b b b b b a b b b b b a b R =, (3.35) b b b a b b a where b = 1/2N 2 and a = 1/N b. We observe that R is a circulant matrix, which is a special case of a Toeplitz matrix where each row is a cyclic shift of the row above it [70]. Denoting the N values of the first row of the matrix R by r n, n = 0,..., N 1, each entry (i, j) of the matrix R is given by R i,j = r (j i) mod N, (3.36) where mod is a modulo operator computes the remainder of the division between two operands. It was shown in [70] that the N eigenvalues {λ m } m=0 N 1 of the circulant matrix can be easily calculated as the discrete Fourier transform of the first row of the matrix, namely λ m = N 1 n=0 r n e j2πnm/n. (3.37) Substituting the values of the first row of R into (3.37), the eigenvalues of R are given by N 1 λ m = (a + b)e j2π0m/n + ( b)( 1) n e j2πnm/n. (3.38) For m = N/2, equation (3.38) results in n=0 N 1 λ N/2 = (a + b) + ( b)( 1) n e jπn n=0 N 1 = (a + b) + ( b)( 1) n ( 1) n n=0 = a (N 1)b. (3.39) 36

For $m \neq N/2$, using the formula for the sum of a finite geometric series, equation (3.38) can be rewritten as

$\lambda_m = (a+b) - b\sum_{n=0}^{N-1}\left(-e^{-j2\pi m/N}\right)^n = (a+b) - b\,\dfrac{\left(-e^{-j2\pi m/N}\right)^N - 1}{-e^{-j2\pi m/N} - 1} = (a+b) + b\,\dfrac{e^{-j2\pi m} - 1}{e^{-j2\pi m/N} + 1} = (a+b) + b\,\dfrac{1 - 1}{e^{-j2\pi m/N} + 1} = a + b.$   (3.40)

For $a + b \neq 0$ and $a - (N-1)b \neq 0$, which is satisfied in our case, the matrix $R$ is of full rank. Therefore, in this case, the functions $\{h_p(t)\}$ are linearly independent and constitute a basis for the space

$V_{(N-2)/2} \oplus \operatorname{span}\{\cos(\pi N t/T)\}.$   (3.41)

The last expression was obtained by substituting the set of uniform samples (3.15) into (3.10), i.e.,

$\sigma_t = \sum_{p=0}^{N-1} t_p = \dfrac{T}{N}\sum_{p=0}^{N-1} p = \dfrac{T(N-1)}{2},$   (3.42)

and using the fact that $\sin(\theta - \pi/2) = -\cos(\theta)$. The condition number $\kappa$, defined in (2.21) as the ratio of the largest and smallest eigenvalues of $R$, is given by

$\kappa = \dfrac{a+b}{a-(N-1)b} = 2.$   (3.43)

From (3.43) we conclude that the set of functions $\{h_p(t)\}$ is not an orthonormal basis, for which $\kappa = 1$ would hold. We also observe that $\kappa$ in (3.43) does not depend on $N$ and $T$; thus the set never converges to an orthonormal basis. As a result, reconstruction from an even number of uniform samples, where $\kappa = 2$, is less stable than reconstruction from an odd number, where $\kappa = 1$ and the set $\{h_p(t)\}$ is orthonormal.

We would like to pay special attention to this last result, which is rather surprising. From relation (2.22) we conclude that a large condition number $\kappa$ may increase or decrease the

reconstruction error. Our goal is to identify the critical situation that results in the highest ratio in (2.22), i.e., $\kappa = 2$. From (3.41), we conclude that for an even number of uniform samples the set of reconstruction functions spans a non-complete space of $T$-periodic bandlimited signals, in which the function $\cos(\pi N t/T)$ appears without its orthogonal partner $\sin(\pi N t/T)$. Intuitively, this fact is the reason for the high value of $\kappa$, as illustrated by the following example.

Figure 3-3: Reconstruction (dashed line) of the 10-periodic bandlimited signal $0.7\cos(\pi t)$ (solid line) from noisy uniform samples (marked by dots).

Example 3.1. We consider the problem of reconstructing the 10-periodic signal $x(t) = 0.7\cos(\pi t)$ from $N = 10$ and $N = 11$ uniform samples. It follows from (3.41) that this signal, which is presented in Fig. 3-3, can be perfectly reconstructed from both sets of samples using (3.20). We perturb these sets of 10 and 11 uniform samples by noise sequences $\{w_1\}$ and $\{w_2\}$, which are randomly generated in Matlab such that

$\dfrac{\|w_1\|}{\|x(t_{p_1})\|} = \dfrac{\|w_2\|}{\|x(t_{p_2})\|} = 0.2.$   (3.44)

The result of the reconstructions of the signal $x(t)$ from these noisy uniform samples is

50 presented in Fig. 3-3 (dashed lines). Calculating the squared norm of the reconstruction errors by numerical integration we obtain that for even N = 10 uniform samples x(t) x 1 (t) L2 [0,T ] x(t) L2 [0,T ] = 0.28 = , (3.45) and for odd N = 11 x(t) x 2 (t) L2 [0,T ] x(t) L2 [0,T ] = 0.2. (3.46) In other words, reconstruction of the 10-periodic function x(t) = cos(πnt/t ) from even N uniform samples results in an unstable reconstruction, i.e., gain in the noise to signal ratio by a factor close to 2, while for N odd this factor is equal to 1. Therefore, for more stable behavior of the reconstruction algorithm in the case of N even the reconstructed signal should not include a cosine component like in (3.41). The reconstruction of a periodic bandlimited signal x(t) V (N 2)/2 (without a cosine component like in (3.41)) from an even number N of uniform samples results in a reconstruction with gain factor close to 1, as illustrated by Fig Fig. 3-4 presents results of reconstruction of 10-periodic 4-bandlimited signal x(t) V 4 from N = 10 and N = 11 noisy uniform samples. The two noise sequences {w 1 } and {w 2 } are randomly generated in Matlab, according to (3.44). Calculating the squared norm of the reconstruction errors we obtain that for odd N = 11 the gain in the noise to signal ratio is 1, which is expected due to orthogonality of a set of the reconstruction functions. In the case of N = 10, the gain in the noise to signal ratio is Therefore, we can reach more stable reconstruction in the case of even N if the signal does not include cosine component like in (3.41). We conclude this section by the following theorem. Theorem 3.3. In the cases of odd and even number of uniform samples the sets of reconstruction functions (3.20) form an orthonormal and not an orthonormal bases, respectively, with the condition numbers 1, N odd; κ = 2, N even. (3.47) As a result, an even number of uniform samples may result in a less stable reconstruction of a periodic bandlimited signal. In the next section we investigate another sampling scheme, where the complete set 39

of sampling points consists of several sets of uniform samples, i.e., recurrent nonuniform sampling. Results derived in this section will be useful for simplifying the reconstruction functions in that case and for the stability analysis of the reconstruction. We will compare the results of both sections and show that a uniform distribution of samples provides the most stable reconstruction.

Figure 3-4: Reconstruction (dashed line) of the 10-periodic 4-bandlimited signal (solid line) from noisy uniform samples (marked by dots).

3.3 Recurrent Nonuniform Sampling

In this section, we consider the case of recurrent nonuniform sampling. In this form of sampling, the sampling points are divided into groups of $N_r$ nonuniformly spaced points. The groups have a recurrent period, denoted by $T_r$. One group of nonuniform samples repeats itself $M_r$ times along the $T$-periodic signal $x(t)$, where $M_r T_r = T$. Denoting the points in the first recurrent group by $t_r$, $r = 0, 1, \ldots, N_r - 1$, the complete set $\{t_p\}_{p=0}^{N-1}$

of sampling points in one period $T$ is

$t_p \in \{t_r + nT_r,\; r = 0, 1, \ldots, N_r - 1,\; n = 0, 1, \ldots, M_r - 1\},$   (3.48)

or equivalently

$t_p = t_{p \bmod N_r} + \left\lfloor \dfrac{p}{N_r} \right\rfloor T_r, \qquad p = 0, 1, \ldots, N - 1.$   (3.49)

An example of a sampling distribution for the case $N_r = 3$ and $M_r = 2$ is depicted in Fig. 3-5. In Fig. 3-6 we show an example of a $T$-periodic $K$-bandlimited signal with a recurrent nonuniform distribution of the sampling points, where $T = 10$, $K = 4$, $N_r = 3$, $M_r = 5$, and $N = 15$.

Figure 3-5: Sampling distribution for $N_r = 3$ and $M_r = 2$.

Figure 3-6: Example of a periodic bandlimited signal with recurrent nonuniform samples taken over one period $T = 10$ with $N_r = 3$ and $M_r = 5$. The locations of the sampling points $t_p$ and the sampled values $x(t_p)$ are marked by separate symbols. Two dashed lines serve as delimiters for one period $[0, 10]$ of the signal $x(t)$.

Recurrent nonuniform samples can be regarded as a combination of $N_r$ sequences of uniform samples with $M_r$ points each, taken with interval $T_r$. This interpretation of the sampling method is schematically shown in Fig. 3-7.
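Equation (3.49) translates directly into code. The following sketch (Python/NumPy, our own illustration) builds the complete sampling set $\{t_p\}$ from a single recurrent group; the parameters match Fig. 3-6, but the offsets of the first group are hypothetical, since the exact values used in the figure are not listed.

```python
import numpy as np

def recurrent_grid(first_group, T_r, M_r):
    """Full sampling set over one period via (3.49):
    t_p = t_{p mod N_r} + floor(p / N_r) * T_r,  p = 0, ..., N_r*M_r - 1."""
    first_group = np.asarray(first_group, dtype=float)
    N_r = len(first_group)
    p = np.arange(N_r * M_r)
    return first_group[p % N_r] + (p // N_r) * T_r

# Parameters as in Fig. 3-6: T = 10, N_r = 3, M_r = 5, so T_r = T / M_r = 2.
# The offsets of the first group below are hypothetical.
T, N_r, M_r = 10.0, 3, 5
T_r = T / M_r
t_p = recurrent_grid([0.0, 0.3, 1.1], T_r, M_r)
print(t_p)          # the 15 sampling points in [0, T)
```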

53 T r e jwt 0 x(t 0 + nt r ) x(t) e jwt 1 T r x(t 1 + nt r )... e jwt N r 1 T r x(t Nr 1 + nt r ) Figure 3-7: Recurrent nonuniform sampling model. Recurrent nonuniform sampling arises in a variety of applications, among them conversion of continuous-time signals to discrete-time signals using a set of N r A/D converters, each operating at the rate 1/T r. In the general case, these N r converters are not necessarily synchronized, leading to a recurrent nonuniform distribution of the samples. From a practical point of view it is much easier to design a set of N r converters each with rate 1/T r than one converter working at the higher rate of N r /T r. The theory of recurrent nonuniform sampling for non periodic signals is well developed. A formula for reconstructing a non periodic bandlimited signal from recurrent nonuniform samples was derived by Yen in [29], and later Eldar and Oppenheim [31] proved it in an easier way. Efficient implementations of this formula using a bank of continuous-time LTI filters or a bank of discrete-time LTI filters, was developed in [31]. We show that similar results may be obtained in the case of periodic signals. In this section, we first present a formula for reconstructing a periodic bandlimited signal from recurrent nonuniform samples and then provide a stability analysis of this reconstruction method. Later, in Chapter 5, we develop an efficient implementation of this reconstruction formula using continuous-time and discrete-time LTI filters Derivation of the Reconstruction Functions Substituting the recurrent nonuniform sampling set of (3.48) into (3.7) for an even number of samples, i.e., N = M r N r is even, and using the identity (3.19) which is satisfied for 42

equally spaced samples, and the fact that $\sin(\pi(t-\theta)/T) = -\sin(\pi(t-\theta-T)/T)$, we have

$h_p(t) = \cos\!\left(\dfrac{\pi(t-t_p)}{T}\right) \prod_{q=0,\,q\neq p}^{N-1} \dfrac{\sin(\pi(t-t_q)/T)}{\sin(\pi(t_p-t_q)/T)}$
$\phantom{h_p(t)} = \cos\!\left(\dfrac{\pi(t-t_p)}{T}\right) \prod_{m=1}^{M_r-1} \dfrac{\sin(\pi(t-t_p-mT_r)/T)}{\sin(\pi(t_p-t_p-mT_r)/T)} \prod_{\substack{q=0 \\ q\neq p \bmod N_r}}^{N_r-1} \prod_{m=0}^{M_r-1} \dfrac{\sin(\pi(t-t_q-mT_r)/T)}{\sin(\pi(t_p-t_q-mT_r)/T)}$
$\phantom{h_p(t)} = \cos\!\left(\dfrac{\pi(t-t_p)}{T}\right) \dfrac{\sin(M_r\pi(t-t_p)/T)}{M_r\sin(\pi(t-t_p)/T)} \prod_{\substack{q=0 \\ q\neq p}}^{N_r-1} \dfrac{\sin(M_r\pi(t-t_q)/T)}{\sin(M_r\pi(t_p-t_q)/T)}$
$\phantom{h_p(t)} = b_p \cos\!\left(\dfrac{\pi(t-t_p)}{T}\right) \dfrac{\prod_{q=0}^{N_r-1}\sin(M_r\pi(t-t_q)/T)}{\sin(\pi(t-t_p)/T)},$   (3.50)

where

$b_p = \dfrac{1}{M_r \prod_{q=0,\,q\neq p}^{N_r-1} \sin(M_r\pi(t_p-t_q)/T)}.$   (3.51)

Similarly, we can derive the reconstruction function for an odd number of samples, which together with (3.50) results in

$h_p(t) = \begin{cases} b_p\,\dfrac{\prod_{q=0}^{N_r-1}\sin(M_r\pi(t-t_q)/T)}{\sin(\pi(t-t_p)/T)}, & N \text{ odd}; \\[3mm] b_p\cos\!\left(\dfrac{\pi(t-t_p)}{T}\right)\dfrac{\prod_{q=0}^{N_r-1}\sin(M_r\pi(t-t_q)/T)}{\sin(\pi(t-t_p)/T)}, & N \text{ even}, \end{cases}$   (3.52)

where

$b_p = \dfrac{1}{M_r \prod_{q=0,\,q\neq p}^{N_r-1} \sin(M_r\pi(t_p-t_q)/T)}.$   (3.53)

From the definition of the reconstruction functions (3.52) we can immediately verify that if two functions $h_i(t)$ and $h_j(t)$ belong to one of the $N_r$ uniform sets (see Fig. 3-7), i.e., $i - j = mN_r$ where $m$ is an integer, then one function is a shifted version of the other, namely

$h_p(t) = h_{p \bmod N_r}\!\left(t - \left\lfloor \dfrac{p}{N_r} \right\rfloor T_r\right).$   (3.54)

Direct reconstruction using (3.52) is now easier than with the original expression (3.7), but is still computationally demanding. In Chapter 5, using the fact that all the functions in the set (3.52) are shifted versions of $N_r$ functions (3.54), we develop efficient implementations of (3.52) using banks of continuous-time and discrete-time LTI filters.
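To make (3.52)–(3.54) concrete, the sketch below (Python/NumPy, our own illustration, odd-$N$ branch only) evaluates the $N_r$ prototype functions of (3.52) for a hypothetical first group, extends them to all $N = N_r M_r$ functions through the shift relation (3.54), and numerically checks the interpolation property $h_p(t_q) = \delta_{pq}$; the sample points are offset by a tiny $\varepsilon$ to sidestep the removable $0/0$ at $t = t_p$.

```python
import numpy as np

def prototype(t, r, group, T, M_r):
    """h_r(t) of (3.52), odd-N branch, for the point t_r of the first group."""
    group = np.asarray(group, dtype=float)
    b_r = 1.0 / (M_r * np.prod([np.sin(M_r * np.pi * (group[r] - tq) / T)
                                for q, tq in enumerate(group) if q != r]))
    num = np.prod([np.sin(M_r * np.pi * (t - tq) / T) for tq in group], axis=0)
    return b_r * num / np.sin(np.pi * (t - group[r]) / T)

def h(t, p, group, T, M_r):
    """All N = N_r * M_r functions obtained from the shift relation (3.54)."""
    N_r = len(group)
    return prototype(t - (p // N_r) * (T / M_r), p % N_r, group, T, M_r)

T, M_r = 15.0, 5
group = [0.0, 0.4, 1.7]                 # hypothetical first group, N_r = 3
N_r, N = len(group), len(group) * M_r
t_p = np.array([group[p % N_r] + (p // N_r) * (T / M_r) for p in range(N)])

eps = 1e-9                              # avoid the removable singularity at t = t_p
G = np.array([[h(tq + eps, p, group, T, M_r) for tq in t_p] for p in range(N)])
print("max |G - I| =", np.max(np.abs(G - np.eye(N))))   # should be close to zero
```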

55 3.3.2 Stability Analysis We now provide an analysis of the stability of the reconstruction, using the set of reconstruction functions {h p (t)} of (3.52). Similar to the case of uniform sampling we calculate the correlation matrix R of the set {h p (t)} and provide an efficient technique for determining its eigenvalues. We also present two examples of recurrent nonuniform samples and investigate the behavior of the condition number as a function of the sampling distribution. As we will see, the optimal distribution of the samples, which results in the lowest value of κ, is uniform. Using (3.54) and the fact that the inner product (2.6) of two periodic functions x(t) and y(t) is shift invariant, namely x(t τ), y(t τ) = x(t), y(t), (3.55) we can show that the entries of the correlation matrix R in the case of recurrent nonuniform sampling, have the following relation R(i, j) = R((i + nn r ) mod N, (j + nn r ) mod N). (3.56) From (3.56), we conclude that the matrix R has the following form A 0 A 1 A 2 A Mr 1 A Mr 1 A 0 A 1 A Mr 2 R = A Mr 2 A Mr 1 A 0, (3.57) A 1 A 2 A 0 where the submatrices A r, r = 0,..., M r 1 are square N r N r matrices with entries A r (i, j) = h i (t), h j+rnr (t). (3.58) We observe that R is a block circulant matrix, where each row of sub matrices is a cyclic shift of the row above it [71]. Since every correlation matrix R is Hermitian, the following 44

is satisfied:

$A_r = A_{M_r-r}^H, \quad r = 1, \ldots, M_r - 1; \qquad A_0 = A_0^H.$   (3.59)

To compute the eigenvalues of $R$ we define the discrete Fourier components of $R$ as

$\hat{A}_k = \sum_{r=0}^{M_r-1} W^{kr} A_r, \qquad k = 0, \ldots, M_r - 1,$   (3.60)

where $W = e^{-j2\pi/M_r}$. Let $\{\lambda_{k,i}\}_{i=0}^{N_r-1}$ be the $N_r$ eigenvalues of $\hat{A}_k$ for every $k = 0, \ldots, M_r - 1$. It was shown in [71] that the eigenvalues of a Hermitian block circulant matrix are the eigenvalues of its discrete Fourier components. Therefore, the eigenvalues of the matrix $R$ of (3.57) are given by

$\lambda(R) = \{\lambda_{k,i},\; i = 0, \ldots, N_r - 1,\; k = 0, \ldots, M_r - 1\}.$   (3.61)

The last result, (3.61), significantly simplifies the calculation of the condition number $\kappa$ of the reconstruction algorithm in the case of recurrent nonuniform samples. Instead of computing the $N$ eigenvalues of the $N \times N$ matrix $R$, which may be very large, we compute the $N_r$ eigenvalues of each of the $M_r$ matrices $\hat{A}_k$ defined by (3.60), where typically $N_r \ll N$. This significantly simplifies the computation of $\kappa$ in the next section.

3.3.3 Examples

We will now present two examples of recurrent nonuniform sampling. We will demonstrate the behavior of the condition number $\kappa$ as a function of the sample locations and show that the optimal distribution of the samples, which results in the lowest possible value of the condition number, is uniform.

Example 3.2. We consider a recurrent nonuniform set of samples, where $N_r = 2$, $M_r = 5$ and the period of the reconstructed signal is $T = 10$. The remaining parameters are $T_r = T/M_r = 2$ and $N = N_r M_r = 10$ (even). Without loss of generality, we will assume throughout this example that $t_0 = 0$. Thus the sample $t_1$, which dictates the position of the second uniform sequence of samples, may take on values in the range $0 < t_1 < T_r$. An example of this set of samples is presented in Fig. 3-8, where the arrow illustrates the mobility of the sample $t_1$.
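The reduction (3.60)–(3.61) is simple to implement and to verify. The following sketch (Python/NumPy, our own code) assembles a Hermitian block circulant matrix from random blocks satisfying (3.59), as a stand-in for an actual correlation matrix, and confirms that its eigenvalues coincide with those of the $M_r$ discrete Fourier components $\hat{A}_k$.

```python
import numpy as np

def block_circulant(blocks):
    """Assemble R of (3.57): block (n, m) equals A_{(m - n) mod M_r}."""
    M_r, N_r = len(blocks), blocks[0].shape[0]
    R = np.zeros((M_r * N_r, M_r * N_r))
    for n in range(M_r):
        for m in range(M_r):
            R[n*N_r:(n+1)*N_r, m*N_r:(m+1)*N_r] = blocks[(m - n) % M_r]
    return R

def eigenvalues_via_dft(blocks):
    """Eigenvalues of R through the discrete Fourier components (3.60)-(3.61)."""
    M_r = len(blocks)
    W = np.exp(-2j * np.pi / M_r)
    lam = []
    for k in range(M_r):
        A_hat = sum(W**(k * r) * blocks[r] for r in range(M_r))
        lam.extend(np.linalg.eigvals(A_hat))
    return np.array(lam)

# Random example (real-valued blocks) forced to satisfy (3.59).
rng = np.random.default_rng(0)
M_r, N_r = 5, 2
A = [rng.standard_normal((N_r, N_r)) for _ in range(M_r)]
A[0] = (A[0] + A[0].T) / 2                    # A_0 Hermitian
for r in range(1, (M_r + 1) // 2):
    A[M_r - r] = A[r].T                       # A_r = A_{M_r - r}^H (real case)

direct = np.sort(np.linalg.eigvalsh(block_circulant(A)))
via_dft = np.sort(eigenvalues_via_dft(A).real)
print("max difference between the two eigenvalue sets:", np.max(np.abs(direct - via_dft)))
```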

Figure 3-8: Sampling distribution for $N_r = 2$ and $M_r = 5$.

Figure 3-9: Condition number $\kappa$ of the set of reconstruction functions for the set of recurrent nonuniform samples considered in Example 3.2, as a function of $t_1$.

The behavior of the condition number as a function of the location of $t_1$ is shown in Fig. 3-9, where the smallest and largest eigenvalues of the reconstruction functions were obtained from (3.61) by computing the eigenvalues of $2 \times 2$ matrices, which is much easier than direct calculation of the eigenvalues of the matrix $R$. We also note that the eigenvalues $\lambda_1$ and $\lambda_2$ of a $2 \times 2$ matrix $A$ satisfy $\lambda_1 \lambda_2 = \det(A)$ and $\lambda_1 + \lambda_2 = \operatorname{trace}(A)$, which makes their calculation very simple.

We observe that the condition number $\kappa$ becomes very large as the sample $t_1$ gets close to another sample, $t_0$ or $t_0 + T_r$. As a result, the reconstruction from this set of samples is very unstable. However, as the distance of $t_1$ from $t_0$ or $t_0 + T_r$ grows, the value of $\kappa$ decreases and the reconstruction algorithm becomes more stable. The condition number achieves its minimal value when $t_1 = 1$, for which the recurrent nonuniform sampling set

58 becomes uniform. As predicted by (3.43), for N even, this minimal value is κ = 2. Example 3.3. We now consider a case of odd number N of recurrent nonuniform samples, i.e., N r and M r are both odd with values N r = 3 and M r = 5. The period of the reconstructed signal is chosen to be T = 15. Hence, the recurrent period of the sampling set is T r = 3 and the total number of samples is N = 15. Similar to the previous example, without loss of generality, we assume that t 0 = 0. Then, the distribution of the samples depends on the locations of the samples t 1 and t 2, where 0 < t 1 < t 2 < T r. An example of this set of samples is presented in Fig. 3-10, where by the vectors we emphasize the mobility of the samples t 1 and t 2. T = 5T r t 0 t 1 t 2 0 T r T t Figure 3-10: Sampling distribution for N r = 3 and M r = log 10 (κ) t t Figure 3-11: Condition number κ of the set of reconstruction functions for the set of recurrent nonuniform samples considered in Example 3.3, as a function of t 1 and t 2. The behavior of the condition number as a function of the locations of t 1 and t 2 is 47

shown in Fig. 3-11, where the smallest and largest eigenvalues of the reconstruction functions were obtained from (3.61) by computing the eigenvalues of $3 \times 3$ matrices instead of directly calculating the eigenvalues of the matrix $R$. The condition number $\kappa$ becomes very large when all three samples $t_0$, $t_1$, and $t_2$, or at least two of them, are very close together. As a result, reconstruction from this set of samples is very unstable. We observe that $\kappa$ achieves its minimal value for $(t_1, t_2) = (1, 2)$, for which the recurrent nonuniform sampling set becomes uniform. As predicted by the results of Section 3.2, this minimal value is $\kappa = 1$, since the set of reconstruction functions constitutes an orthonormal basis for $N$ odd.

60 Chapter 4 Frame-Based Reconstruction Theorem 3.1 provides perfect reconstruction of periodic bandlimited signals from nonuniform redundant (N > 2K +1) and nonredundant (N = 2K +1) samples. From Theorem 3.2 it follows that in the case of redundant sampling, the set of reconstruction functions proposed by Theorem 3.1 span a space which is larger than the space of the signal V K. In this case, the fact that the signal is oversampled is not taken into account in the reconstruction process, i.e., the narrow band signal bandlimited to 2πK/T is reconstructed with functions of wider support. The question is how and what can we gain from the redundancy of the signal representation. The answer for this question can be found in the literature [72, 6, 73, 74]. One of the main reasons for oversampling is to reduce the average power of the additive noise or quantization error, when the samples {x(t p )} of the signal x(t) are corrupted by noise or quantized prior to reconstruction. To clarify how the redundancy may contribute to the reconstruction process we present an example of a uniformly oversampled non-periodic bandlimited signal. A non-periodic signal f(t) is said to be bandlimited to W Q if its Fourier transform F (Ω) is zero for W Q Ω. The Nyquist period is then given by T Q = π/w Q. The Shannon-Wittaker theory guarantees perfect reconstruction of such signals from uniformly spaced samples taken with the Nyquist interval T Q, where sinc(t/t Q ) is chosen as the reconstruction function [2]. Reconstruction with sinc(t/t Q ) is equivalent to ideal low-pass filtering in the frequency domain, where the support of the filter is [ W Q, W Q ]. In the oversampled case where the sampling period is T 0 < T Q, according to Shannon- 49

61 W 0 W Q W Q W 0 Ω Figure 4-1: Power spectrum of a signal (solid line) and a white noise (dashed line). Wittaker theory perfect reconstruction is obtained with an ideal low-pass filter of support [ W 0, W 0 ], where W 0 = π/t 0. Obviously, this method leads to perfect reconstruction of the signal f(t) but in the case of oversampling it is not unique. Actually, every set of functions ( ) t nt sinc, n Z (4.1) T guarantees perfect reconstruction for T 0 T T Q. When the samples f n = f(nt o ) of the signal are corrupted by white noise, e.g., as a result of quantization [73], we obtain only an approximated version of the signal f(t). Fig. 4-1 shows the power spectral density of the signal bandlimited to W Q and white noise. From Fig. 4-1, we can immediately conclude that the optimal choice is to use a low-pass filter with support [ W Q, W Q ]. This choice was mathematically proved in [6, p. 137], where it was also shown that the set of shifted versions of the function sinc(t/t Q ) constitutes a tight frame for the space of signals bandlimited to W Q. This approach provides a great advantage in A/D conversion techniques, where it is practically much easier to increase the sampling rate than to reduce the average power of a quantization error, i.e., to reduce the step of the quantization [74, 75]. This oversampling example is easily analyzed and understood without frame formalism. However, in more complicated representations the frame approach is needed. For example, irregularly oversampled signals and redundant windowed Fourier and wavelet frames were discussed in [76]. Frame based algorithms for redundant sampling and reconstruction in arbitrary spaces were proposed in [72, 77]. In this chapter, we develop a method for reconstructing a periodic bandlimited signal from redundant nonuniform samples, where the set of reconstruction functions constitutes a 50

62 frame for the space of the signal involved in the reconstruction. Exploiting the redundancy of the sampling points, this method results in a more stable reconstruction in the presence of noise, when compared to Theorem 3.1. However, the reconstruction is not consistent in a noisy environment. In the non redundant case, where N = 2K + 1, this method is equivalent to the reconstruction theorem of Chapter 3. The organization of this chapter is as follows. In Section 4.1, we first derive the orthogonal projection P K of a T -periodic signal x(t) onto the space V K. Then, applying this projection onto the set of reconstruction functions developed in Section 3.1 results in a new set of functions {P K h p (t)} N 1 p=0, which constitute a frame for V K. We then discuss the properties of this set of reconstruction functions. In Section 4.2, we show that for uniformly spaced samples the set of projected reconstruction functions constitute a tight frame for the space V K. In Section 4.3, we discuss reconstruction from recurrent nonuniform samples. We show both theoretically and through simulation results, in Section 4.4, that the frame based methods developed in this chapter lead to more stable reconstruction algorithms in comparison with the methods of Chapter 3. The price payed for the stability is the consistency of reconstruction algorithm, i.e., we show that the interpolation property (3.11) no longer holds when using the method developed in Section Reconstruction from Nonuniform Samples Using Frames In the oversampled case, if the samples are corrupted by noise, then applying a low pass filter of support [ 2πK/T, 2πK/T ] to the reconstructed signal can reduce the average power of the noise. We denote this low pass filtering as an operator P K, which zeros all harmonics higher than K of the periodic signal x(t). Formally, P K is operator defined by K P K x(t) = c k e j2πkt/t, (4.2) k= K where x(t) is a T -periodic signal not necessarily bandlimited. We can immediately show that the operator P K defined in (4.2), is nothing but an orthogonal projection of a T -periodic signal x(t) onto the subspace of T -periodic K-bandlimited signals V K. Indeed, P K x(t) = x(t), x(t) V K ; (4.3) P K x(t) = 0, x(t) VK, 51

63 where V K is the space of functions orthogonal to V K, i.e., the space of T -periodic functions with c k = 0 for k K. Applying P K to the reconstructed signal of (3.6) results in N 1 x(t) = P K p=0 x(t p )h p (t) = N 1 p=0 x(t p )P K h p (t), (4.4) which is equivalent to a reconstruction with the set of functions {P K h p (t)}. To determine the properties of this set of reconstruction functions, we rely on the following proposition, which is given in [61, p. 95] without a proof. Proposition 4.1 ([61, p. 95]). Let {ϕ i (t)} N i=1 be a Riesz basis for the space W with frame bounds A W and B W, and let P denote the orthogonal projection of W onto a closed subspace V. Then {P ϕ i (t)} N i=1 is a frame for V with frame bounds A V and B V, such that A V A W and B V B W. Proof: When {ϕ i (t)} N i=1 is a Riesz basis for W, we know that (2.10) A W x(t) 2 N x(t), ϕ i (t) 2 B W x(t) 2, (4.5) i=1 for x(t) W. Given a closed subspace V of W, any function x(t) W can we written as x(t) = x v (t) + x v (t), where x v (t) V, x v (t) V, and V V = W. If the expression (4.5) is satisfied for every x(t) W, then it is also satisfied for every x v (t) V W, namely A W x v (t) 2 N x v (t), ϕ i (t) 2 B W x v (t) 2. (4.6) i=1 Using the properties of an orthogonal projection, we have that N x v (t), ϕ i (t) 2 = i=1 N x v (t), P ϕ i (t) 2. (4.7) i=1 From (4.6) and (4.7), we conclude that the set {P ϕ i (t)} N i=1 is a frame for V with frame bounds A V and B V, such that A V A W and B V B W. Computing the functions {P K h p (t)} in (4.4)explicitly leads to the following reconstruction theorem. 52

64 Theorem 4.1. Let x(t) be a T -periodic signal bandlimited to 2πK/T. Then x(t) can be perfectly reconstructed from its N 2K + 1 nonuniformly spaced samples x(t p ) as x(t) = N 1 p=0 x(t p )h p (t), (4.8) where For N odd h p (t) = α p0 K 2 + (α pk cos(2πkt/t ) β pk sin(2πkt/t )). (4.9) k=1 α pk = a p( 1) k 2 N 2 β pk = a p( 1) k 2 N 2 ϕ G pk cos(πϕ/t ), ϕ G pk sin(πϕ/t ), (4.10) where G pk is the set of all possible sums of values {t q } N 1 q=0,q p, when (N 1)/2 + k of them are chosen with negative sign and (N 1)/2 k are chosen with positive sign. For N even α pk = a p( 1) k 2 N 1 β pk = a p( 1) k 2 N 1 ϕ G + pk ϕ G pk sin(πϕ/t ) ϕ G pk cos(πϕ/t ) ϕ G + pk sin(πϕ/t ), cos(πϕ/t ), (4.11) where the set {G + pk G pk } consists of all possible sums of values {t q} N 1 q=0, when N/2 + k of them are chosen with negative sign and N/2 k are positive. The value of t p appears with positive and negative signs in G + pk and G pk, respectively. Proof: In Appendix B we show that the reconstruction functions (3.7) of Theorem 3.1 can be expressed as a sum of complex exponents (B.1), (B.5) or as a sum of trigonometric functions (B.4), (B.8) for cases of odd and even N, respectively. Applying the orthogonal projection P K of (4.2) onto (B.4) and (B.8), i.e., zeroing all frequencies higher than 2πK/T, results in the set of reconstruction functions defined in (4.9). 53
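Numerically, the projection $P_K$ of (4.2) amounts to keeping the Fourier coefficients with $|k| \le K$, which is easy to do with an FFT over one period sampled on a dense grid. The sketch below (Python/NumPy, our own code, not part of the thesis) implements $P_K$ in this way and applies it to one reconstruction function of Theorem 3.1; the sampling set is that of Fig. 3-2(a), and $K = 2$ is our own choice.

```python
import numpy as np

def project_VK(x_grid, K):
    """The orthogonal projection P_K of (4.2): keep the Fourier coefficients
    c_k with |k| <= K of a T-periodic signal given on a uniform grid."""
    n = len(x_grid)
    c = np.fft.fft(x_grid) / n
    keep = np.zeros(n, dtype=bool)
    keep[:K + 1] = True                 # k = 0, 1, ..., K
    keep[-K:] = True                    # k = -1, ..., -K (wrapped indices)
    c[~keep] = 0.0
    return np.real(np.fft.ifft(c) * n)

# Apply P_K to the reconstruction function h_0 of Theorem 3.1 (odd-N product
# form) for the sampling set of Fig. 3-2(a).
T, K = 10.0, 2
t_samples = np.array([0.7, 1.7, 3.4, 4.2, 5.4, 7.4, 9.0])
t_grid = np.linspace(0.0, T, 4096, endpoint=False)

h0 = np.ones_like(t_grid)
for tq in t_samples[1:]:
    h0 *= np.sin(np.pi * (t_grid - tq) / T) / np.sin(np.pi * (t_samples[0] - tq) / T)

h0_proj = project_VK(h0, K)             # samples of (P_K h_0)(t) on the grid
print("fraction of the energy of h_0 kept by P_K:", np.sum(h0_proj**2) / np.sum(h0**2))
```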

65 Proposition 4.1 leads directly to the conclusion that the set of functions {h p (t)} of (4.9) constitutes a frame for the space of T -periodic signals bandlimited to 2πK/T. The redundancy ratio r of this frame, which is defined as the ratio of the number of functions in the frame and the dimension of the space, is given by r = N 2K + 1. (4.12) In the ideal case in which x(t) is K-bandlimited and its samples are not corrupted by noise, the reconstruction functions (3.7) and (4.9) both, lead to perfect reconstruction of x(t). However, when x(t) is not truly bandlimited or its samples {x(t p )} are corrupted by noise, these two methods lead to different reconstructions. We note that the interpolation property (3.11) no longer holds when using (4.9), i.e., the reconstructed signal x(t) does not necessarily satisfy x(t p ) = x(t p ) Stability Analysis In the non ideal case, where the samples {x(t p )} N 1 p=0 of the periodic bandlimited signal x(t) are perturbed by a sequence {w p } N 1 p=0, the reconstruction x w(t) provided by Theorem 4.1, is not equal to the original signal x(t). The question is how much the reconstructed signal x w (t) differs from the signal x(t). The range of possible perturbations of x w (t) with respect to x(t) is given by the following expression (2.20) 1 P w κ c x w(t) L2 [0,T ] x(t) L2 [0,T ] P w κ c, (4.13) where κ is the condition number of the reconstruction algorithm, and P is an orthogonal projection onto the range space of the analysis operator F, which is given by (2.13) (F x(t)) p = x(t), h p (t), x(t) V K. (4.14) From (4.13) we observe that there are two parameters κ and P, which may reduce the squared norm of the reconstruction error. Comparing expression (4.13) with (2.22), which corresponds to reconstruction with bases, we conclude that using reconstruction methods of Theorem 3.1 and 4.1 with the same values of condition number κ, the frame-based method of Theorem 4.1 may result in a lower reconstruction error due to projection of the noise 54

66 sequence w. It follows from Proposition 4.1 that the frame bounds A, B of the set (4.9) are more tight than the bounds of the set (3.7). As a result, the condition number κ = A/B (2.21) of the frame (4.9) is smaller than κ of (3.7). Similar to Section 3.1, the frame bounds of (4.9) can be found as the smallest and largest non zero eigenvalues of the frame correlation matrix R. Since the set of functions {h p (t)} of (4.9) is a frame for the (2K + 1)-dimensional space V K, the rank of the N N matrix R is 2K + 1, i.e., N 2K 1 out of N eigenvalues of R are zero. In an orthogonal projection P of (4.13), any sequence {w p } N 1 p=0 has an orthogonal decomposition w p = w + p + w p with respect to P, where P w + p = w + p and P w p = 0. Due to the orthogonality of w + p and w p, the energy of the noise sequence {w p } is given by w p 2 = w + p 2 + w p 2. (4.15) In the reconstruction process, all the noise energy outside the range space of P is automatically nullified, i.e., P w p = 0. In our case, w p corresponds to the high frequency components of the noise sequens w p, i.e., frequencies higher than 2πK/T. Increasing the redundancy ratio r of the frame increases the null space of the projector P, so that a larger portion of the noise is removed. Assuming that the sequence w p is a white noise sequence, its energy is uniformly distributed in the space R N, i.e., its energy is constant at all frequencies. For a white noise the following is satisfied w p + 2 = 1 r w p 2 and wp 2 = r 1 w p 2. (4.16) r In was shown in [73], that for a wide range of signals the quantization error is nearly a white noise sequence. Thus, in the case where the error results from quantization of the samples x(t p ), the equations (4.16) are nearly true. Both facts, smaller condition number κ and noise reduction by orthogonal projection of the noise sequence {w p } N 1 p=0 in (4.13), make the frame based method of Theorem 4.1 more robust in noisy environments than the reconstruction algorithm of Theorem 3.1. To compare these two reconstruction methods, we use the two sets of 7 nonuniform samples considered in Section 3.1. These two sets are presented in Fig. 3-2 and differ in the location of the 6 th sample only. 55

67 In this example, we assume that the periodic signal x(t) is 2-bandlimited. Using Theorem 4.1, we obtain two sets of reconstruction functions {h a p(t)} N 1 p=0 and {hb p(t)} N 1 p=0 corresponding to sampling sets {t a p} N 1 p=0 and {tb p} N 1 p=0 respectively. Both sets constitute frames for the space V 2 with redundancy ratio r = 7/5 = 1.4. Substituting these sets of reconstruction functions into (2.16) results in two 7 7 frame correlation matrices R a and R b respectively, where the inner products was obtained by numerical integration. The eigenvalues of the matrices R a and R b are given by λ(r a ) = {0, 0, 0.71, 1.04, 1.12, 1.45, 2.25}, (4.17) and λ(r b ) = {0, 0, 0.62, 1.02, 1.1, 1.53, 180}, (4.18) respectively. Since the rank of both matrices is 5, 2 of the 7 eigenvalues are zero. As predicted by Proposition 4.1, the frame bounds A a = 0.71 and A b = 0.62 are larger than the bounds obtained in Section 3.1, A a = 0.58 and A b = 0.47, respectively. At the same time, the upper frame bounds B a = 2.25 and B b = 180 of the frames are lower than B a = 4.79 and B b = of Section 3.1. As a result, the condition numbers of the frames reduce to κ a = 3.17 and κ b = compared with their values κ a = 8.26 and κ b = in Section 3.1. We see from these examples that using a frame with redundancy r = 1.4 significantly reduces the condition number of the set of reconstruction functions. Frames with higher redundancy ratio will result in even lower κ. We also note, that in the frame based method, the second sampling set (Fig. 3-2(b)) remains less stable compared to the first set (Fig. 3-2(a)) due to the large gap and two close samples in the distribution of the sampling points. To conclude this section we summarize the basic properties of the reconstruction functions {h p (t)} developed in Theorem 4.1 and compare them to the properties of the recon- 56

68 struction functions of Theorem 3.1. Theorem 3.1 Theorem 4.1 Perfect reconstruction in V K Interpolation property holds Perfect reconstruction in V K Interpolation property does not hold Forms a basis for a space larger than V K Less stable in noisy environments Forms a frame for the space V K More stable in noisy environments In the next sections we apply an orthogonal projection P K onto the sets of reconstruction functions developed in Chapter 3 for two special cases of distribution of sampling points: Uniform and recurrent nonuniform. We provide stability analysis and compare frame methods to the reconstruction functions of Chapter Uniform Sampling In this section we consider the frame based reconstruction of uniformly oversampled periodic bandlimited signals. This method may be important in many practical applications due to its ability to eliminate the high frequency components of the noise. We will show that this reconstruction method is also very simple and the set of reconstruction functions constitutes a tight frame (for even and odd N), so that the reconstruction algorithm is very stable. The set of reconstruction functions for uniform sampling was given in (3.20). It was shown in (3.21) and (3.29), that these functions can be equivalently expressed as a sum of complex exponents h b p(t) = 1 N 1 N (N 1)/2 l= (N 1)/2 N/2 1 l= N/2+1 e j2πl(t t p)/t, e j2πl(t tp)/t + 1 2N (e j2πn(t tp)/t + e j2πn(t tp)/t ), N odd; N even. (4.19) 57

69 Orthogonal projection of these functions (4.19) onto the space V K results in h p (t) = P K h b p(t) = 1 K N k= K e j2πk(t t p)/t = (ejπ(t t p)/t e jπ(t t p)/t ) K k= K ej2πk(t t p)/t N(e jπ(t t p)/t e jπ(t t p)/t ) = (ejπ(2k+1)(t t p)/t e jπ(2k+1)(t t p)/t ) N(e jπ(t t p)/t e jπ(t t p)/t ) = sin(π(2k + 1)(t t p)/t ). (4.20) N sin(π(t t p )/T ) This reconstruction function was already known [54] but here for the first time we introduce it in the framework of frame theory. As expected, in the nonredundant case where N = 2K + 1, both reconstruction functions (3.20) and (4.20) are the same. We now show that the reconstruction functions (4.20) constitute a tight frame for the space V K. Proposition 4.2. Let {ϕ i (t)} N i=1 be an orthogonal basis for the space W, and let P denote the orthogonal projection of W onto a closed subspace V, then {P ϕ i (t)} N i=1 is a tight frame for V. Proof: According to Proposition 4.1, the set {P ϕ i (t)} N i=1 is a frame for V with frame bounds A V A W and B V B W. Since {ϕ i (t)} N i=1 is an orthogonal basis for W, its bounds are equals, i.e., A W = B W. Therefore, the bounds of the frame {P ϕ i (t)} N i=1 are also equal, which leads to the conclusion that {P ϕ i (t)} N i=1 is a tight frame for V. It follows from Proposition 4.2 that in the case of odd N, the set of functions {h p (t)} (4.20) constitutes a tight frame for the space of T -periodic K-bandlimited signals V K. We now show that the tight frame condition (2.11) is satisfied for the set (4.20) for every N (odd and even) as well, namely there exists a constant A > 0 such that N 1 p=0 x(t), h p (t) 2 = A x(t) 2, (4.21) for all x(t) V K. To show (4.21), we calculate the inner product of a signal x(t) from the space V K of (4.2) with one of the reconstruction functions h p (t) of (4.20). Using the 58

70 orthogonality of exponents, we obtain x(t), h p (t) 2 = = = 1 T 1 N 1 N = 1 N 2 T 0 K k= K K k= K K k= K T K 1 T c k e j2πkt/t 1 K e j2πk(t tp)/t dt N k= K 2 c k e j2πkt/t e j2πk(t tp)/t dt 0 k= K l= K c k e j2πktp/t 2 K c k c l e j2πp(k+l)/n, (4.22) where in last equation we substituted the value of the sample t p = pt/n. Summation of (4.22) over all values of p and using the fact that 2 n 1 m=0 e j2πmk/n = n, k mod n = 0; 0, otherwise, (4.23) results in N 1 p=0 x(t), h p (t) 2 = 1 N 1 N 2 = 1 N 2 = 1 N K K p=0 k= K l= K K k= K K k= K Nc k c k c k c l e j2πp(k+l)/n c k 2. (4.24) The last equation results from the fact that for a real periodic function x(t), c k = c k. The right hand side of the expression (4.21), i.e., the squared norm of the T -periodic function 59

71 x(t), is calculated according to (2.7) and is given by x(t) 2 = x(t), x(t) = 1 T = 1 T T K = x(t)x (t)dt 0 T K 0 k= K k= K c k e j2πkt/t K l= K c l e j2πlt/t dt c k 2. (4.25) Combining (4.24) and (4.25), we conclude that equation (4.21) is satisfied with constant A = 1/N, i.e., the set of functions defined in (4.20) constitutes a tight frame for the space V K for every N > 2K + 1. In addition we note that since each function of the frame has the same squared norm h p (t) 2 = 2K + 1 N 2, (4.26) the redundancy ratio r of the tight frame is given by r = which is equivalent to the expression (4.12). A h p (t) 2 = 1/N (2K + 1)/N 2 = N 2K + 1, (4.27) We conclude that in the case of uniform sampling the set of reconstruction functions (4.20) developed in this section constitutes a tight frame for the space V K, which leads to the lowest possible condition number κ = 1. Comparing it to the set (3.20) of Section 3.2, we notice an improvement in condition number in the case of even number of sampling points, where it is reduced from 2 to 1. value, i.e., κ = 1. In the case of N odd κ stays at its optimal However, the condition number is not the main improvement of the frame method. The major contribution of the new set of reconstruction functions (4.20) is their ability to remove the high frequency components of the noise. As it was shown in (4.15), in case of white additive noise the power of the noise is reduced by a factor of r. Increasing frame redundancy increases the portion of removed noise, which results in a low reconstruction error in a noisy environment. This approach is frequently used in practical applications [74, 75], where increase in quantization step, i.e., quantization error, and increase in sampling frequency, i.e., redundancy ratio r, compensate each other and 60

72 result in the same quality of reconstruction. In the next section, we discuss the frame reconstruction method from recurrent nonuniform samples. 4.3 Recurrent Nonuniform Sampling Recurrent nonuniform oversampling is also of high interest in signal processing due to its ability to reduce the energy of the noise. For instance, in [78] the scheme of recurrent nonuniform oversampling of the signals and their derivatives was presented and analyzed. Mathematical properties of different finite dimensional approximation methods, among them periodic expansions, of oversampled filterbanks were discussed in [79]. In this section, we consider the problem of recurrent nonuniform oversampling of periodic bandlimited signals. Similar to previous sections we develop a set of reconstruction functions, which constitute a frame for the space V K, by orthogonal projection of the set {h p (t)} developed in Section 3.3. We do not give an explicit formula for this set of functions due to complexity of representation, but we do discuss its properties. It can be shown that the operator P K commutes with the shift operation, namely P K x(t τ) = (P K x)(t τ). (4.28) From this fact, we conclude that the set of projected reconstruction functions satisfies property (3.54), i.e., all functions in this set are shifted versions of N r functions. Therefore, the frame correlation matrix R of this set of reconstruction functions, which is now of rank 2K + 1, is a block circulant matrix. In Section 3.3, we presented an efficient method for calculating the eigenvalues of block circulant matrices. Using this method, we can calculate the eigenvalues of R and determine the condition number κ. We now compare the stability of two reconstruction methods using Examples 3.2 and 3.3 introduced in Section 3.3. In Example 3.2, we considered a set of N = 10 recurrent nonuniform samples, with N r = 2, M r = 5, and T = 10. We fixed the value of t 0 = 0 and t 1 was allowed to change in a range [0, T r ]. This set of samples is shown in Fig We define the space of the T -periodic signal x(t) involved in reconstruction to be V 2, i.e., the redundancy of the frame is r = 10/5 = 2. 61
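The comparison of Fig. 4-2 can be reproduced numerically. The sketch below (Python/NumPy, our own grid-based code) assumes the even-$N$ product form of the Theorem 3.1 functions used at the start of the derivation of (3.50), obtains the frame functions by an FFT-based implementation of $P_K$, and prints both condition numbers for a few positions of $t_1$; for the frame, only the $2K+1$ nonzero eigenvalues of the correlation matrix are used.

```python
import numpy as np

T, N_r, M_r, K = 10.0, 2, 5, 2
T_r = T / M_r
t_grid = np.linspace(0.0, T, 8192, endpoint=False)

def basis_functions(t_samples):
    """Even-N reconstruction functions of Theorem 3.1 (cosine times product of sines)."""
    N = len(t_samples)
    H = np.empty((N, len(t_grid)))
    for p in range(N):
        hp = np.cos(np.pi * (t_grid - t_samples[p]) / T)
        for q in range(N):
            if q != p:
                hp = hp * np.sin(np.pi * (t_grid - t_samples[q]) / T) \
                        / np.sin(np.pi * (t_samples[p] - t_samples[q]) / T)
        H[p] = hp
    return H

def project_VK(H):
    """Apply P_K of (4.2) to every row of H (keep harmonics |k| <= K)."""
    C = np.fft.fft(H, axis=1)
    keep = np.zeros(H.shape[1], dtype=bool)
    keep[:K + 1] = True
    keep[-K:] = True
    C[:, ~keep] = 0.0
    return np.real(np.fft.ifft(C, axis=1))

def kappa(H, rank=None):
    """Condition number from the correlation matrix; for a frame only the
    top `rank` (nonzero) eigenvalues are used."""
    eig = np.sort(np.linalg.eigvalsh((H @ H.T) / H.shape[1]))[::-1]
    if rank is not None:
        eig = eig[:rank]
    return eig[0] / eig[-1]

for t1 in (0.1, 0.5, 1.0, 1.5, 1.9):
    t_samples = np.sort(np.concatenate([np.arange(M_r) * T_r,
                                        t1 + np.arange(M_r) * T_r]))
    H = basis_functions(t_samples)
    print(f"t1 = {t1:.1f}: kappa_basis = {kappa(H):9.2f},"
          f" kappa_frame = {kappa(project_VK(H), rank=2 * K + 1):6.2f}")
    # At t1 = 1.0 the samples are uniform: kappa_basis -> 2 and kappa_frame -> 1.
```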

73 log 10 (κ) t 1 Figure 4-2: Comparison between the condition number κ of the reconstruction method of Theorem 3.1 (dashed line) and of the frame method of Theorem 4.1 (solid line), as a function of t 1. The set of recurrent nonuniform samples is defined in Example 3.2 and the space of the sampled signal is V 2 (r = 2). The behavior of the condition number of two reconstruction methods, as a function of the location of t 1 is shown in Fig As predicted by Proposition 4.1, the condition number of the frame method is significantly lower than κ of the set of reconstruction functions of Theorem 3.1. We observe, that in the neighborhood of t 1 = 1 the condition number of frame based method is very low and it achieves the minimal value for t 1 = 1, where the recurrent nonuniform sampling set becomes uniform and the set of reconstruction functions constitutes a tight frame. We note the fact, that in the reconstruction method of Theorem 3.1, the lowest achievable value of κ for this sampling set was 2 (see 3.43). In Example 3.3, we considered a set of N = 15 recurrent nonuniform samples, with N r = 3, M r = 5, and T = 15. Here again t 0 = 0 and 0 < t 1 < t 2 < T r. This set of samples is shown in Fig In this example, we assume that the space of the T -periodic signal x(t) involved in the reconstruction is V 3, i.e., the redundancy of the frame is r = 15/7 = The behavior of the condition number of two reconstruction methods, as a function of t 1 and t 2 is shown in Fig We observe, that condition number of the frame method is lower for every t 1 and t 2. It can also be seen, that in the neighborhood of (t 1, t 2 ) = (1, 2) the 62

74 4 4 log 10 (κ) 3 2 log 10 (κ) t t t t 1 Figure 4-3: Comparison between the condition number κ of the reconstruction method of Theorem 3.1 (left) and of the frame method of Theorem 4.1 (right), as a function of t 1 and t 2. The set of recurrent nonuniform samples is defined in Example 3.3 and the space of the sampled signal is V 3 (r = 2.14). condition number of the frame based method is very low and it achieves the minimal value κ = 1 when the set of samples is uniformly distributed, i.e., (t 1, t 2 ) = (1, 2). Although, the condition number of the second set is also 1 at (t 1, t 2 ) = (1, 2), it grows more rapidly as (t 1, t 2 ) departs from (1, 2). Fig. 4-2 and 4-3 provide a good tool for the practical design of sampling and reconstruction systems. By determining the maximal value for κ, which we are able to tolerate in our reconstruction system, we can find the minimal allowed shifts between the sets of uniform samples in the recurrent nonuniform sampling scheme. Given this information, we can prevent critical (nonstable) distributions of samples. 4.4 Simulation Results In this section, we complete the theoretical discussions of Chapters 3 and 4 by simulating the reconstruction methods presented in these Chapters. Specifically, we compare performance of both reconstruction methods, of Theorem 3.1 and of Theorem 4.1, in a noisy environment. We also compare results of reconstruction from nonuniform, uniform and recurrent nonuniform sampling schemes. Note that in the ideal case, where we have the correct samples of the signal, there is no need for simulations; all reconstruction methods 63

75 presented in this work guarantee perfect reconstruction of T -periodic K-bandlimited signals for any distribution of sampling points. We first created in Matlab a T -periodic K-bandlimited signal x(t), with T = 10 and K = 4, which belongs to the 9-dimensional space V 4. Then, three sets of 18 sampling points {t p } were considered: nonuniform, uniform, and recurrent nonuniform with N r = 3, where the nonuniform points where randomly chosen. The redundancy of these sampling sets is r = 2. Each set of samples {x(t p )} was perturbed by another set {w p }, which is also randomly generated in Matlab. The sequence {w p } is a white gaussian noise with statistics N(0, 0.01). In Fig. 4-4, we present the signal x(t) marked by a solid line, and the noisy samples {x p + w p }, marked by dots. The results of the reconstruction of the signal x(t) from nonuniform, uniform, and recurrent nonuniform noisy samples are shown in Fig. 4-4 in the top, middle, and bottom plots, respectively. The reconstructions using the basis (3.7) of Theorem 3.1 and the frame (4.9) of Theorem 4.1 are presented by dash-doted and dashed curves, respectively. An important observation is that the dash-doted curve passes through the noisy samples at all plots. This result was expected, since the set of reconstruction functions of Theorem 3.1 satisfies the interpolation property (3.11). We define the indicator of the quality of the reconstruction as a signal to noise ratio (SNR), given by x(t) 2 L SNR = 10 log 2 [0,T ] 10 x(t) x(t) 2, (4.29) L 2 [0,T ] where x(t) L2 [0,T ] and x(t) x(t) L2 [0,T ] are the energies of the signal and the reconstruction error. For the reconstructed signals x(t) of Fig. 4-4, the SNR values are summarized in Table 4.1. Table 4.1: Signal to Noise Ratio of Two Reconstruction Methods for Different Sets of Sampling Points Reconstruction SNR [db] Method Nonuniform Uniform Recurrent Nonuniform Theorem Theorem We can clearly see from Fig. 4-4 and Table 4.1 that the reconstructed signal x(t) is closer 64

76 to the original signal x(t) when the frame method of Theorem 4.1 is used for reconstruction. Probably the most surprising result of Table 4.1 is the fact that the frame method results in the same reconstruction quality for uniform and recurrent nonuniform sampling sets. We know that condition number κ of the set of reconstruction functions from recurrent samples is significantly higher than κ = 1 of the tight frame of uniform samples (see Fig. 3-10). Thus, we obviously expect less stable behavior in the case of recurrent nonuniform samples. What happened here is that a large portion of the noise energy was removed by the orthogonal projection P (4.13). This fact, once again proves the efficiency of the noise reduction ability of the frame method introduced by Theorem 4.1. We conclude this section by noting that reconstruction method of Theorem 4.1 provides better reconstruction in noisy environments with respect to the technique of Theorem 3.1. On the other hand, owing to (3.11), the method of Theorem 3.1 provides consistent reconstruction of the signals. This property is very useful in many signal processing applications [63], e.g., in interpolation theory. These conclusions were supported by theoretical derivations and simulation results. 65
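A compact version of this experiment is sketched below (Python/NumPy, our own code): it draws a random $T$-periodic $K$-bandlimited signal, perturbs $N = 18$ nonuniform samples by white Gaussian noise, reconstructs with the Theorem 3.1 functions (assuming the even-$N$ product form used in the derivation of (3.50)) and with their projections onto $V_K$, and reports the SNR (4.29). The specific signal, sampling points, and noise level are our own choices, so the numbers will differ from Table 4.1.

```python
import numpy as np

rng = np.random.default_rng(1)
T, K, N = 10.0, 4, 18
t_grid = np.linspace(0.0, T, 8192, endpoint=False)

# Random T-periodic K-bandlimited test signal (its coefficients are our own).
a = rng.standard_normal(K + 1)
b = rng.standard_normal(K + 1)
def x(t):
    k = np.arange(K + 1)[:, None]
    return (a[:, None] * np.cos(2 * np.pi * k * t / T)
            + b[:, None] * np.sin(2 * np.pi * k * t / T)).sum(axis=0)

def basis_functions(ts):
    """Even-N reconstruction functions of Theorem 3.1 evaluated on the grid."""
    H = np.empty((len(ts), len(t_grid)))
    for p, tp in enumerate(ts):
        hp = np.cos(np.pi * (t_grid - tp) / T)
        for q, tq in enumerate(ts):
            if q != p:
                hp *= np.sin(np.pi * (t_grid - tq) / T) / np.sin(np.pi * (tp - tq) / T)
        H[p] = hp
    return H

def project_VK(H):
    """Apply P_K of (4.2) to every row (keep harmonics |k| <= K)."""
    C = np.fft.fft(H, axis=1)
    keep = np.zeros(H.shape[1], dtype=bool)
    keep[:K + 1] = True
    keep[-K:] = True
    C[:, ~keep] = 0.0
    return np.real(np.fft.ifft(C, axis=1))

def snr_db(x_ref, x_hat):
    """Signal-to-noise ratio (4.29) evaluated on the grid."""
    return 10 * np.log10(np.sum(x_ref**2) / np.sum((x_ref - x_hat)**2))

t_s = np.sort(rng.uniform(0.0, T, N))            # nonuniform sampling points
noisy = x(t_s) + rng.normal(0.0, 0.1, N)         # noisy samples
H = basis_functions(t_s)
x_ref = x(t_grid)
print("Theorem 3.1 (basis) SNR:", snr_db(x_ref, noisy @ H), "dB")
print("Theorem 4.1 (frame) SNR:", snr_db(x_ref, noisy @ project_VK(H)), "dB")
```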

77 1.5 Nonuniform Sampling Uniform Sampling Recurrent Nonuniform Sampling Figure 4-4: Reconstruction of a 10-periodic 4-bandlimited signal (solid line) from 18 nonuniform (top), uniform (middle), and recurrent nonuniform (bottom) noisy samples (marked by dots). Simulation results of the reconstruction method of Theorem 3.1 (dash-dotted line) and frame method of Theorem 4.1 (dashed line) are compared. 66

Chapter 5

Reconstruction using LTI filters

In Chapters 3 and 4 we simplified the sets of reconstruction functions for uniform and recurrent nonuniform sampling. Direct implementation of these methods is still computationally difficult. In this chapter we consider the problem of efficient reconstruction from uniform and recurrent nonuniform sets of samples. In Section 5.1 we show that the reconstruction of a periodic bandlimited signal from uniform samples can be performed with an LTI filter. Then, in Sections 5.2 and 5.3, we develop banks of continuous-time and discrete-time filters, respectively, to reconstruct the signal from recurrent nonuniform samples. We also discuss the properties of the filters involved in the reconstruction.

5.1 Uniform Sampling

In this section, we show that the reconstruction of a periodic bandlimited signal from a uniform set of samples can be efficiently implemented with an LTI filter. Indeed, the reconstruction (3.6) can be expressed as

$x(t) = s(t) * h(t),$   (5.1)

where $s(t)$ is an impulse train of samples,

$s(t) = \sum_{p=0}^{N-1} x(t_p)\,\delta(t - t_p).$   (5.2)

79 For the basis representation of Chapter 3 (3.20), h(t) is given by h(t) = sin(nπt/t ) N sin(πt/t ), and for the frame representation of Chapter 4 (4.20), we have N odd; (5.3a) cos(πt/t ) sin(nπt/t ) N sin(πt/t ), N even, h(t) = sin((2k + 1)πt/T ). (5.3b) N sin(πt/t ) Since both signals s(t) and h(t) are of finite length T, or periodic in T, the convolution operation in (5.1) is cyclic, i.e., s(t) h(t) = T 0 s(τ)h((t τ) mod T )dt. (5.4) From (5.4) it follows that x(t) is obtained by filtering s(t) with an LTI filter with impulse response h(t) given by (5.3). This filtering operation is schematically shown in Fig x ( ) nt N H(ω) x(t) N 1 n=0 δ(t nt/n) Figure 5-1: Reconstruction from uniform samples using a continuous-time filter. Using (3.21), it follows that the frequency response H(ω) of the continuous-time filter h(t) of (5.3a) for N odd is given by H(ω) = = = 1 N sin(nπt/t ) N sin(πt/t ) e jωt dt 1 N + 2 N (N 1)/2 n= (N 1)/2 δ (N 1)/2 l=1 cos(2πlτ/t ) e jωt dt ( ω 2πn ). (5.5a) T For N even, we can use (3.29) to show that the frequency response H(ω) of the continuous- 68

time filter $h(t)$ of (5.3a) is equal to

$H(\omega) = \displaystyle\int \cos(\pi t/T)\,\dfrac{\sin(N\pi t/T)}{N\sin(\pi t/T)}\,e^{-j\omega t}\,dt = \int \left(\dfrac{1}{N} + \dfrac{1}{N}\cos(\pi N t/T) + \dfrac{2}{N}\sum_{l=1}^{N/2-1}\cos(2\pi l t/T)\right) e^{-j\omega t}\,dt = \dfrac{1}{2N}\,\delta\!\left(\omega - \dfrac{N\pi}{T}\right) + \dfrac{1}{2N}\,\delta\!\left(\omega + \dfrac{N\pi}{T}\right) + \dfrac{1}{N}\sum_{n=-(N-2)/2}^{(N-2)/2} \delta\!\left(\omega - \dfrac{2\pi n}{T}\right).$   (5.5b)

For the frame (5.3b) we obtain an expression similar to (5.5a) with highest frequency $2\pi K/T$, i.e.,

$H(\omega) = \dfrac{1}{N}\sum_{n=-K}^{K} \delta\!\left(\omega - \dfrac{2\pi n}{T}\right).$   (5.6)

The frequency response $H(\omega)$ of (5.5) for the cases $N = 9$ (odd) and $N = 10$ (even) is shown in Fig. 5-2(a) and (b), respectively.

Figure 5-2: Frequency response of the reconstruction filter $H(\omega)$ of (5.5) for $N = 9$ (a) and $N = 10$ (b).

Evidently, as predicted by Theorem 3.2, in both cases of odd and even $N$ the filters are bandlimited. In analogy to the reconstruction of non-periodic bandlimited signals from uniform samples [2], we may say that in the case of $N$ odd, $H(\omega)$ is a low-pass filter (LPF) for periodic uniformly sampled signals with cutoff frequency $\pi(N-1)/T$. We also observe that in the case of $N$ even, $H(\omega)$ is wider than the LPF with cutoff frequency $\pi(N-2)/T$ but is not an LPF with cutoff frequency $\pi N/T$. We also note that $H(\omega)$ of (5.6) is an LPF with cutoff frequency $2\pi K/T$.
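On a dense uniform grid the cyclic convolution (5.4) can be carried out with the FFT. The sketch below (Python/NumPy, our own discrete approximation of the continuous-time filtering of Fig. 5-1) places the $N$ uniform samples on the grid as an impulse train (5.2), convolves cyclically with the impulse response (5.3a) for odd $N$, and compares the result with the original bandlimited signal; the test signal is our own.

```python
import numpy as np

T, N, K, G = 10.0, 11, 4, 2200          # G: grid size, a multiple of N
t = np.arange(G) * T / G                # one period with spacing T/G
rng = np.random.default_rng(3)

# T-periodic K-bandlimited test signal (coefficients are our own choice).
a, b = rng.standard_normal(K + 1), rng.standard_normal(K + 1)
k = np.arange(K + 1)[:, None]
x = (a[:, None] * np.cos(2 * np.pi * k * t / T)
     + b[:, None] * np.sin(2 * np.pi * k * t / T)).sum(axis=0)

# Impulse train (5.2): the uniform samples t_p = pT/N sit exactly on the grid.
s = np.zeros(G)
idx = np.arange(N) * (G // N)
s[idx] = x[idx]

# Impulse response (5.3a) for N odd; the removable singularity at t = 0 equals 1.
h = np.empty(G)
h[0] = 1.0
h[1:] = np.sin(N * np.pi * t[1:] / T) / (N * np.sin(np.pi * t[1:] / T))

# Cyclic convolution (5.4) implemented with the FFT.
x_hat = np.real(np.fft.ifft(np.fft.fft(s) * np.fft.fft(h)))
print("max reconstruction error on the grid:", np.max(np.abs(x_hat - x)))
```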

5.2 Recurrent Nonuniform Sampling

We now show that the reconstruction formula (3.52) from recurrent nonuniform samples can be implemented using a continuous-time filterbank. The ideas of this section are similar to those developed in [31] for recurrent nonuniform sampling of non-periodic signals.

First, using the periodicity of the recurrent nonuniform sampling and the property (3.54), which states that all the functions in the set (3.52) are shifted versions of N_r functions, the reconstruction (3.6) can be rewritten as

$$ x(t) = \sum_{p=0}^{N-1} x(t_p)\,h_p(t) = \sum_{n=0}^{M_r-1}\sum_{p=0}^{N_r-1} x(t_{p+nN_r})\,h_{p+nN_r}(t) = \sum_{n=0}^{M_r-1}\sum_{p=0}^{N_r-1} x(t_{p+nN_r})\,h_p(t + nT_r). \qquad (5.7) $$

The last expression (5.7) can be equivalently represented as a sum of N_r convolutions,

$$ x(t) = \sum_{p=0}^{N_r-1} s_p(t) * h_p(t), \qquad (5.8) $$

where s_p(t) is an impulse train of samples,

$$ s_p(t) = \sum_{n=0}^{M_r-1} x(nT_r + t_p)\,\delta(t - nT_r - t_p), \qquad (5.9) $$

and

$$ h_p(t) = \begin{cases} b_p\,\dfrac{\prod_{q=0}^{N_r-1}\sin\big(M_r\pi(t + t_p - t_q)/T\big)}{\sin(\pi t/T)}, & N \text{ odd}; \\[3mm] b_p\,\cos(\pi t/T)\,\dfrac{\prod_{q=0}^{N_r-1}\sin\big(M_r\pi(t + t_p - t_q)/T\big)}{\sin(\pi t/T)}, & N \text{ even}, \end{cases} \qquad (5.10) $$

where b_p was defined in (3.53). Equation (5.8) can be interpreted as a continuous-time filterbank, as depicted in Fig. 5-3. Each one of the N_r uniform sequences of samples s_p(t), formed according to (5.9), is filtered by a continuous-time filter H_p(ω) with impulse response given by (5.10). Summing the outputs of the N_r filters results in the reconstructed signal x(t). Note that each one of the subsequences corresponds to uniform samples at one-N_rth of the average sampling rate. Therefore, the output of each branch of the filter bank is an

82 x(nt r + t 0 ) s 0 (t) H 0 (ω) Mr 1 n=0 δ(t nt r t 0 ) x(nt r + t 1 ) s 1 (t) H 1 (ω) x(t) Mr 1 n=0 δ(t nt r t 1 ). x(nt r + t Nr 1) s Nr 1(t) H Nr 1(ω) Mr 1 n=0 δ(t nt r t Nr 1). Figure 5-3: Reconstruction from recurrent nonuniform samples using a continuous-time filterbank.. aliased and filtered version of x(t). The filters, as specified by (5.10), have the inherent property that the aliasing components of the filter outputs cancel in forming the summed output x(t). To determine the frequency response H p (ω) of the filter h p (t) of (5.10), we note that h p (t) of (5.10) can be expressed as h p (t) = b p sin(m r πt/t ) sin(πt/t ) N r 1 q=0 q p b p cos(πt/t ) sin(m rπt/t ) sin(πt/t ) sin(m r π(t + t p t q )/T ), N r 1 q=0 q p sin(m r π(t + t p t q )/T ), N odd; N even. (5.11) Now, expanding the product of sines in (5.11) into complex exponentials (see Ap- 71

equation (5.11) can be rewritten as

$$ h_p(t) = \begin{cases} b_p\,\dfrac{\sin(M_r\pi t/T)}{\sin(\pi t/T)}\displaystyle\sum_{k=-N_r+1}^{N_r-1} c_{pk}\, e^{jkM_r\pi t/T}, & N \text{ odd}; \\[3mm] b_p\,\cos(\pi t/T)\,\dfrac{\sin(M_r\pi t/T)}{\sin(\pi t/T)}\displaystyle\sum_{k=-N_r+1}^{N_r-1} c_{pk}\, e^{jkM_r\pi t/T}, & N \text{ even}, \end{cases} \qquad (5.12) $$

where the complex coefficients c_{pk} are the result of expanding the product of sines in (5.11) into complex exponentials. The first factor in each case of (5.12) corresponds to the LPFs with cutoff frequency πM_r/T, which were defined in (5.3). The effect of the summation and of the multiplication by the complex exponentials is to create N_r shifted and scaled versions of this bandlimited filter, so that the filter response H_p(ω) is piecewise constant and bandlimited to πM_rN_r/T. This fact allows for further efficiency in the implementation.

Example 5.1. We consider the problem of reconstructing a periodic 10-bandlimited signal x(t) from a set of recurrent nonuniform samples, where T = 2π. The nonuniform samples in the first group are given by t_0 = 0, t_1 = 0.087, and t_2, repeated with period T_r = π/6, so that N_r = 3 and M_r = 12. The reconstruction in this case is obtained using a bank of 3 continuous-time filters. In Fig. 5-4 (a) we depict the third filter H_2(ω) of the filterbank. As we expect, the filter is bandlimited to πM_rN_r/T = 12 · 3/2 = 18, since H_2(ω) is created by three (N_r) shifted and scaled versions of a filter bandlimited to M_r/2 = 6.

We note that using the frame-based reconstruction method developed in Chapter 4 also leads to a filterbank reconstruction, similar to that described in this section. In Fig. 5-4 (b) we depict the third filter H_2(ω) of the filterbank of the reconstruction method of Theorem 4.1. As expected, this filter is the orthogonal projection of the filter depicted in Fig. 5-4 (a) onto the space of the reconstructed signal x(t), i.e., V_10.

In the next section, we derive a discrete-time filterbank implementation of the reconstruction, which also provides efficient interpolation to uniform samples.
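The following NumPy sketch mirrors the structure of Example 5.1 with recurrent nonuniform samples (our own illustration; the offsets of the first group are illustrative values of our choosing, and the normalization of each reconstruction function is fixed by the interpolation condition, which should coincide with b_p of (3.53)).

```python
import numpy as np

T, K = 2 * np.pi, 10          # period and bandlimit, as in Example 5.1
Nr, Mr = 3, 12                # group size and number of repetitions, N = Nr*Mr = 36
Tr = T / Mr
tau = np.array([0.0, 0.087, 0.3])   # offsets of the first group (illustrative values)

def phi(p, t):
    """Reconstruction function attached to the p-th sample of the first group
    (the sample-domain counterpart of the filter (5.10)); since N is even it
    carries the extra cosine factor of (A.19).  Normalized so phi_p(tau_p) = 1."""
    t = np.asarray(t, dtype=float)
    num = np.cos(np.pi * (t - tau[p]) / T)
    for q in range(Nr):
        num = num * np.sin(Mr * np.pi * (t - tau[q]) / T)
    den = np.sin(np.pi * (t - tau[p]) / T)
    scale = Mr * np.prod([np.sin(Mr * np.pi * (tau[p] - tau[q]) / T)
                          for q in range(Nr) if q != p])
    with np.errstate(divide='ignore', invalid='ignore'):
        val = num / (den * scale)
    return np.where(np.abs(den) < 1e-12, 1.0, val)

def reconstruct(samples, t):
    """x(t) = sum_n sum_p x(tau_p + n*Tr) phi_p(t - n*Tr), cf. (5.7)-(5.9)."""
    out = np.zeros_like(np.asarray(t, dtype=float))
    for n in range(Mr):
        for p in range(Nr):
            out = out + samples[n, p] * phi(p, t - n * Tr)
    return out

rng = np.random.default_rng(0)
a, b = rng.standard_normal(K + 1), rng.standard_normal(K + 1)
x = lambda t: sum(a[k] * np.cos(2 * np.pi * k * t / T) +
                  b[k] * np.sin(2 * np.pi * k * t / T) for k in range(K + 1))
samples = x(Tr * np.arange(Mr)[:, None] + tau[None, :])    # shape (Mr, Nr)
t = np.linspace(0, T, 400, endpoint=False)
print(np.max(np.abs(reconstruct(samples, t) - x(t))))       # numerical-precision error
```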

Figure 5-4: Frequency response of H_2(ω) for the reconstruction methods of Theorem 3.1 (a) and Theorem 4.1 (b).

5.3 Interpolation and Reconstruction Using a Discrete-Time Filterbank

Most of today's signal processing operations are performed on digital computers. Therefore, efficient techniques for digital processing of the signal are required. Following a procedure analogous to [31], we show that the continuous-time filterbank of Fig. 5-3 can be converted to a discrete-time filterbank followed by a continuous-time LPF.

Figure 5-5: The Interpolation Identity.

The Interpolation Identity, which was developed in [31], states that the block diagrams depicted in Fig. 5-5 (a) and (b) are equivalent for

$$ \tilde H_p(\omega) = \frac{N}{T}\, H_p\!\left(\frac{N\omega}{T}\right) e^{j t_p N\omega/T}, \qquad |\omega| \le \pi, \qquad (5.13) $$

for p = 0, 1, ..., N_r - 1. The block diagram of Fig. 5-5 (a) is the pth branch of the continuous-time filterbank of Fig. 5-3. The block diagram of Fig. 5-5 (b) consists of expanding a sequence of samples by a factor of N_r and then filtering by a discrete-time filter with frequency response given by (5.13). The multiplication by the exponential in (5.13) reflects the fact that the delay of t_p in the impulse train of the pth branch is incorporated into the filter h_p(t). The filter output is then followed by impulse modulation and lowpass filtering. The frequency response of the continuous-time lowpass filter is given either by (5.5) or by (5.6) for the basis or frame reconstruction method, respectively. The input-output relation of the expander is given by

$$ x^e_p[n] = \begin{cases} x(nT_r/N_r + t_p), & n = 0, N_r, 2N_r, \dots \\ 0, & \text{otherwise}. \end{cases} \qquad (5.14) $$

Applying this Interpolation Identity to each branch in Fig. 5-3 and moving the impulse modulation and LPF in each branch outside the summer, we obtain the equivalent implementation in Fig. 5-6. As with the continuous-time filterbank of Fig. 5-3, the overall output of Fig. 5-6 is the original continuous-time signal x(t). Furthermore, since x(t) is reconstructed through lowpass filtering of a uniformly spaced impulse train with period T/N, the impulse train values r[n] must correspond to uniformly spaced samples of x(t). Thus, the discrete-time filterbank of Fig. 5-6 effectively interpolates the recurrent nonuniform samples to uniform ones.

The discrete-time filterbank of Fig. 5-6 can be used to interpolate to the uniform samples and to reconstruct the continuous-time signal from its recurrent nonuniform samples very efficiently, exploiting the many known results regarding the implementation of filterbank structures. As with the continuous-time filterbank, the magnitude responses of the discrete-time filters are piecewise constant, which allows for further efficiency in the implementation.
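As a small illustration of the expander relation (5.14) (our own sketch; the array layout is an assumption), upsampling the p-th subsequence by N_r simply interleaves N_r - 1 zeros between its samples:

```python
import numpy as np

def expand(x_p, Nr):
    """x_e_p[n] = x(n*Tr/Nr + t_p) for n = 0, Nr, 2Nr, ... and 0 otherwise."""
    out = np.zeros(len(x_p) * Nr)
    out[::Nr] = x_p
    return out

print(expand(np.array([1.0, 2.0, 3.0]), 3))   # [1. 0. 0. 2. 0. 0. 3. 0. 0.]
```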

86 x(nt r + t 0 ) Nr H0 (ω) r[n] x(nt r + t 1 ) Nr H1 (ω) LPF x(t)... N 1 n=0 δ(t nt r/n) x(nt r + t Nr 1) Nr HNr 1(ω) Figure 5-6: Reconstruction from recurrent nonuniform samples using a discrete-time filter bank. 75

Chapter 6

Sampling in Polar Coordinates

In most practical applications involving two-dimensional signals, the standard rectilinear Cartesian coordinate system is used to represent the signal. However, introducing polar coordinates can significantly simplify the sampling and reconstruction methods. In particular, polar sampling strategies and linear spiral scan techniques [11], which are widely used in MRI, provide practical advantages in the context of medical imaging [10]. The problem of reconstructing a signal from nonuniform frequency-domain samples in Cartesian coordinates arises in computerized tomography [80]. These nonuniform samples become uniform when the signal is represented in polar coordinates.

While the treatment of two-dimensional signals given in Cartesian coordinates is well developed both in theory and in applications, the polar coordinate system is less understood and developed. Moreover, the samples given in polar coordinates may be nonuniformly spaced. To avoid artifacts in the reconstruction process, efficient interpolation methods from uniform and nonuniform samples in polar coordinates are required [44].

In this chapter, we consider the problem of reconstructing a 2-D signal sampled in polar coordinates. Using the fact that such signals are periodic in their angular coordinate, we can apply the efficient methods for reconstruction of a periodic signal developed in the previous chapters (see Chapters 3-5).

The organization of this chapter is as follows. In Section 6.1, we introduce nonuniform sampling and reconstruction approaches developed for Cartesian coordinates. We then show in Section 6.2 that these methods can be extended to polar coordinates, which results in two possible nonuniform sampling strategies in this coordinate system. Using the reconstruction

methods presented in Chapters 3 and 4, together with methods for reconstructing a non-periodic bandlimited signal from nonuniform samples, we show that such signals can be perfectly reconstructed under certain assumptions. In Section 6.3, we define a recurrent nonuniform sampling model in polar coordinates and develop efficient reconstruction methods using a bank of separable two-dimensional LTI filters. Then, in Section 6.4, we discuss practical limitations of the proposed reconstruction algorithms and suggest possible solutions to these problems.

6.1 Sampling Approach in Cartesian Coordinates

One of the conclusions from Theorem 3.1 is that in the one-dimensional case a set of 2K + 1 distinct sampling points uniquely characterizes a periodic K-bandlimited signal x(t). This fact was previously proved by several authors in [41, 81]. In the non-periodic case, it is well established that a bandlimited signal is uniquely determined from its distinct nonuniform samples, provided that the average sampling rate exceeds the Nyquist rate [28].

Analogous results do not hold in two dimensions. In other words, to reconstruct a bandlimited 2-D signal it is necessary to have a certain number of distinct sampling points, but this is not a sufficient condition. Another requirement that has to be satisfied in order to give an adequate treatment of the 2-D case is to restrict in some way the distribution of the sampling points. There are many appropriate conditions on the distribution of the sampling points under which the signal can be perfectly reconstructed [40, 82]. The trouble is that an explicit formula for the reconstruction functions is known only for a few special cases. One such case was proposed and proved by Butzer and Hinsen in [83].

Theorem 6.1 (Butzer and Hinsen sampling theorem [83]). Let {x_n; n ∈ Z} be a sampling sequence of real numbers with average density greater than a/π, where each number corresponds to a line parallel to the y-axis passing through x_n. Let {y_nm; n, m ∈ Z} be a sequence of real numbers which define sampling points on the line passing through x_n with average density greater than b/π. If both sequences {x_n} and {y_nm} satisfy

$$ \left|x_n - n\frac{\pi}{a}\right| < L_x < \infty, \qquad |x_n - x_k| > \delta_x > 0, \quad n \neq k, $$

and

$$ \left|y_{nm} - m\frac{\pi}{b}\right| < L_y < \infty, \qquad |y_{nm} - y_{nk}| > \delta_y > 0, \quad k \neq m, \qquad (6.1) $$

respectively, then for any function f(x, y) bandlimited to the rectangle B = [-a, a] × [-b, b],

we have

$$ f(x, y) = \sum_{n=-\infty}^{\infty}\sum_{m=-\infty}^{\infty} f(x_n, y_{nm})\,\Phi_{n,m}(x, y), \qquad (6.2) $$

where

$$ \Phi_{n,m}(x, y) = \Phi_n(x)\,\Phi_{nm}(y) = \frac{G(x)}{G'(x_n)(x - x_n)}\cdot\frac{G_n(y)}{G_n'(y_{nm})(y - y_{nm})}, \qquad (6.3) $$

and

$$ G(x) = (x - x_0)\prod_{n=-\infty,\, n\neq 0}^{\infty}\left(1 - \frac{x}{x_n}\right), \qquad G_n(y) = (y - y_{n0})\prod_{m=-\infty,\, m\neq 0}^{\infty}\left(1 - \frac{y}{y_{nm}}\right). \qquad (6.4) $$

One of the shortcomings of this approach is that all the sampling points must lie on straight lines parallel to the y-axis. However, these lines are not necessarily equally spaced, and the points on each line need not be uniformly distributed. We also observe that the reconstruction process is separable, i.e., first the set of one-dimensional signals {f(x_n, y)} is reconstructed, and then, applying the reconstruction in the x coordinate, we obtain the 2-D signal f(x, y). An example of this sampling strategy is depicted in Fig. 6-1. It can be immediately verified that nonuniform sampling along straight lines parallel to the x-axis will result in a similar reconstruction formula with appropriate coordinate changes. We also note that, following a recurrent nonuniform sampling strategy along each of the coordinates x and y, one can efficiently reconstruct the two-dimensional bandlimited signal using the banks of one-dimensional filters developed in [31].

In the next section we show that this sampling theorem can be extended to nonuniform sampling in polar coordinates.

6.2 Sampling Approaches in Polar Coordinates

In the polar coordinate system (r, θ) we follow the same sampling strategy. The two-dimensional signal f(r, θ) given in polar coordinates is first nonuniformly sampled in one of its coordinates, r or θ, where each sample corresponds to a circle centered at the origin or a line passing through the origin, respectively. Then, the signal f(r, θ) is nonuniformly

sampled along the second coordinate, θ or r, where all samples lie on the already chosen circles or lines, respectively (see Figs. 6-2 and 6-3).

Figure 6-1: Butzer and Hinsen sampling approach in Cartesian coordinates.

6.2.1 First Sampling Strategy

In the first sampling method, we consider the signal f(r, θ), which is nonuniformly sampled along nonuniformly spaced circles. An example of this sampling method is depicted in Fig. 6-2, where the number of samples on each circle is N = 11 and the average density of the samples in the radial direction is greater than one. Theorem 6.2 below describes how and under which conditions the signal f(r, θ) can be perfectly reconstructed from this set of nonuniform samples.

We first extend the function f(r, θ), which is given in polar coordinates 0 ≤ θ < 2π, r ≥ 0, to the function given by

$$ f(r, \theta) = \begin{cases} f(r, \theta), & r \ge 0; \\ f(-r, \theta + \pi), & r < 0. \end{cases} \qquad (6.5) $$

The last expression (6.5) is an extension of the function f(r, θ) to a new coordinate system where -∞ < r < ∞. The reason for this extension is the fact that for the

interpolation process in the radial direction we need the radial axis to be defined for every r in (-∞, ∞).

Figure 6-2: First sampling strategy. Samples are taken along the azimuthal coordinate with fixed radius. The average density of samples in the radial direction is greater than one and the number of samples on each circle is N = 11.

Theorem 6.2. Let {r_n; n = 0, ±1, ±2, ...} be a sampling sequence of real numbers with average density greater than R/π, where each number corresponds to a circle with radius r_n centered at the origin. Let {θ_nm; n = 0, ±1, ±2, ..., m = 0, 1, ..., N - 1} be a set of real numbers which defines nonuniform samples on the circle r_n, where N ≥ 2K + 1. If {r_n} satisfies

$$ \left|r_n - n\frac{\pi}{R}\right| < L < \infty, \qquad |r_n - r_k| > \delta > 0, \quad n \neq k, \qquad (6.6) $$

then any function f(r, θ) bandlimited to the circular disc of radius R and angularly

bandlimited to K can be perfectly reconstructed from this set of nonuniform samples by

$$ f(r, \theta) = \sum_{n=-\infty}^{\infty}\sum_{m=0}^{N-1} \frac{G(r)}{G'(r_n)(r - r_n)}\, f(r_n, \theta_{nm})\,\Phi_{nm}(\theta), \qquad (6.7) $$

where

$$ G(r) = (r - r_0)\prod_{n=-\infty,\, n\neq 0}^{\infty}\left(1 - \frac{r}{r_n}\right) \qquad (6.8) $$

and the functions Φ_nm(θ), m = 0, ..., N - 1, are defined in (3.7) with t = θ, T = 2π, and p = m. The index n in Φ_nm(θ) attributes the set of reconstruction functions to the azimuthal samples on the circle with radius r_n.

Proof: The proof of this theorem is analogous to the proof of Theorem 6.1 [83]. We first perform the reconstruction of the 2π-periodic K-bandlimited one-dimensional signals f(r_n, θ) using one of the methods presented in Chapters 3 or 4, namely

$$ f(r_n, \theta) = \sum_{m=0}^{N-1} f(r_n, \theta_{nm})\,\Phi_{nm}(\theta), \qquad (6.9) $$

where the functions Φ_nm(θ), m = 0, ..., N - 1, are defined in (3.7) or (4.9). Now, using the Yao and Thomas theorem [28] for the reconstruction of non-periodic bandlimited signals from nonuniform samples, we complete the reconstruction in the radial direction by

$$ f(r, \theta) = \sum_{n=-\infty}^{\infty} \frac{G(r)\, f(r_n, \theta)}{G'(r_n)(r - r_n)}, \qquad (6.10) $$

where G(r) is defined in (6.8). Combining (6.9) and (6.10) leads to the reconstruction formula (6.7).

As an example of Theorem 6.2, any function f(r, θ) sampled with the pattern depicted in Fig. 6-2 can be perfectly reconstructed from its samples if its Fourier transform is radially bandlimited to a disc of radius R = π (the radial sampling density of the pattern is greater than π/R = 1) and the highest harmonic of its Fourier series representation with respect to θ is K = 5 (the number of samples on each circle is N = 2K + 1 = 11).
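For a single circle r_n, the azimuthal step (6.9) is a one-dimensional interpolation with the functions (3.7), whose explicit product-of-sines form appears in (A.17) and (A.19). The following NumPy sketch (our own illustration) checks it for the parameters of Fig. 6-2, i.e., K = 5 and N = 2K + 1 = 11 nonuniform angles.

```python
import numpy as np

def Phi(m, theta, angles, T=2 * np.pi):
    """Reconstruction function (3.7)/(A.17),(A.19) for the m-th azimuthal sample."""
    th = np.asarray(theta, dtype=float)
    N = len(angles)
    val = np.ones_like(th)
    for q in range(N):
        if q != m:
            val = val * np.sin(np.pi * (th - angles[q]) / T) \
                      / np.sin(np.pi * (angles[m] - angles[q]) / T)
    if N % 2 == 0:                      # extra factor for an even number of samples
        val = val * np.cos(np.pi * (th - angles[m]) / T)
    return val

rng = np.random.default_rng(1)
K, N = 5, 11
angles = np.sort(rng.uniform(0, 2 * np.pi, N))             # nonuniform angles on a circle
f_ring = lambda th: 1 + np.cos(th) + 0.4 * np.sin(K * th)   # K-bandlimited test profile
theta = np.linspace(0, 2 * np.pi, 500)
rec = sum(f_ring(angles[m]) * Phi(m, theta, angles) for m in range(N))
print(np.max(np.abs(rec - f_ring(theta))))                  # numerical-precision error
```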

Figure 6-3: Second sampling strategy. Samples are taken along radial lines with fixed azimuthal coordinate. The average density of samples on each line is greater than one and the number of rays from the origin for azimuthal interpolation is N = 10 (always even).

6.2.2 Second Sampling Strategy

We now present the second sampling strategy, where the nonuniform samples all lie on nonuniformly spaced radial lines. An example of this sampling method is depicted in Fig. 6-3, where the average density of samples on the 5 radial lines is greater than one. We observe that in this sampling method each radial line provides two samples of the signal in the θ coordinate. Therefore, the number N of samples in the azimuthal coordinate is always even. Moreover, we observe that these nonuniform azimuthal samples form a set of recurrent nonuniform samples with N_r = N/2 and T_r = π. Therefore, the set of reconstruction functions (3.52) can be used to simplify the reconstruction in the θ coordinate. Theorem 6.3 below describes how and under which conditions the signal f(r, θ) can be perfectly reconstructed from this set of nonuniform samples.

Theorem 6.3. Let {θ_n; n = 0, 1, ..., N/2 - 1} be a sequence of N/2 real numbers in the

range [0, π], where N > 2K + 1. Here θ_n corresponds to the radial line passing through the origin at the angle θ_n. Let {r_nm; m ∈ Z} be a set of real numbers with average density greater than R/π, where each set defines nonuniform samples along the radial line θ_n. If {r_nm} satisfies

$$ \left|r_{nm} - m\frac{\pi}{R}\right| < L < \infty, \qquad |r_{nm} - r_{nk}| > \delta > 0, \quad k \neq m, \qquad (6.11) $$

then any function f(r, θ) bandlimited to the circular disc of radius R and angularly bandlimited to K can be perfectly reconstructed from these nonuniform samples, and the reconstruction is given by

$$ f(r, \theta) = \sum_{n=0}^{N-1}\sum_{m=-\infty}^{\infty} \frac{G_n(r)\, f(r_{nm}, \theta_n)}{G_n'(r_{nm})(r - r_{nm})}\,\Phi_n(\theta), \qquad (6.12) $$

where

$$ G_n(r) = (r - r_{n0})\prod_{m=-\infty,\, m\neq 0}^{\infty}\left(1 - \frac{r}{r_{nm}}\right), \qquad (6.13) $$

and

$$ \Phi_n(\theta) = b_n\,\cos\!\left(\frac{\theta - \theta_n}{2}\right)\frac{\prod_{q=0}^{N/2-1}\sin(\theta - \theta_q)}{\sin\!\left(\dfrac{\theta - \theta_n}{2}\right)}. \qquad (6.14) $$

Here

$$ b_n = \frac{1}{2\prod_{q=0,\, q\neq n}^{N/2-1}\sin(\theta_n - \theta_q)}, \qquad (6.15) $$

and the extended function f(r, θ) was defined in (6.5).

Proof: The proof of this theorem is analogous to the proofs of Theorems 6.1 and 6.2. We first perform the reconstruction of the non-periodic R-bandlimited one-dimensional signal f(r, θ_n) using the Yao and Thomas reconstruction theorem [28],

$$ f(r, \theta_n) = \sum_{m=-\infty}^{\infty} \frac{G_n(r)\, f(r_{nm}, \theta_n)}{G_n'(r_{nm})(r - r_{nm})}, \qquad (6.16) $$

where the functions G_n(r) are defined in (6.13). Now, the function f(r, θ), which is periodic and bandlimited with respect to θ, can be reconstructed as

$$ f(r, \theta) = \sum_{n=0}^{N-1} f(r, \theta_n)\,\Phi_n(\theta), \qquad (6.17) $$

where the functions Φ_n(θ) are defined in (6.14). These functions are obtained by substituting N_r = N/2, M_r = 2, T = 2π, and T_r = π into the functions (3.52) developed in Section 3.3 for reconstructing a periodic bandlimited signal from recurrent nonuniform samples. Combining (6.16) and (6.17) completes the proof of Theorem 6.3.

We note that the projected versions of (6.14), i.e., P_K Φ_n(θ), developed in Section 4.3 can also be used in (6.12) to reconstruct the signal in the azimuthal coordinate.

An example of nonuniform samples for the second sampling scheme is shown in Fig. 6-3, with five radial lines, i.e., N = 10, and an average sampling density on each line greater than one. Thus, a function f(r, θ) can be perfectly reconstructed from these nonuniform samples if its Fourier transform is bandlimited to a disc of radius R = π and the highest harmonic of its Fourier series representation with respect to θ is K = 4.

We note that in [84] Marvasti also considered the problem of reconstruction from nonuniform samples in polar coordinates. The interpolation functions he developed involve complex-valued functions, and are therefore more complicated to implement. Our methods, developed in this section, are more efficient but are still computationally demanding. In the next section, we focus on efficient implementations of these methods for the cases of uniform and recurrent nonuniform sampling in polar coordinates.

6.3 Uniform and Recurrent Nonuniform Sampling in Polar Coordinates

In this section we consider two special cases of distributions of sampling points: uniform and recurrent nonuniform sampling. These sampling methods are very common in practical applications, and the reconstruction can be obtained using two-dimensional separable LTI filters.

6.3.1 Uniform Sampling

The most common form of sampling in two-dimensional Cartesian coordinates is uniform sampling, due to its simplicity. Here we show that uniform sampling in polar coordinates is also very simple and enables the use of efficient reconstruction techniques such as filtering. We now consider the case of uniform sampling in polar coordinates, where the sampling

points are uniformly distributed along each of the coordinates r and θ. An example of uniform sampling in polar coordinates is shown in Fig. 6-4. This form of sampling is used in computerized tomography imaging, where frequency-domain samples of the image are usually given on a uniform polar grid. An example of tomographic image reconstruction will be shown in Chapter 7.

Figure 6-4: Uniform sampling in polar coordinates.

We observe that the uniform sampling method described above fits both sampling strategies in polar coordinates presented in the previous section. We denote by N the number of uniform samples on each circle, and the sampling steps are T_r and T_θ in r and θ, respectively. Using the set of reconstruction functions of (3.20) for periodic signals and the Shannon reconstruction theorem for non-periodic signals, the reconstruction can be expressed as a convolution

$$ f(r, \theta) = s(r, \theta) * h(r, \theta), \qquad (6.18) $$

where s(r, θ) is a 2-D impulse train of samples,

$$ s(r, \theta) = \sum_{n=-\infty}^{\infty}\sum_{m=0}^{N-1} f(nT_r, mT_\theta)\,\delta(r - nT_r,\, \theta - mT_\theta), \qquad (6.19) $$

and h(r, θ) = h_r(r) h_θ(θ) is a separable 2-D filter. From (5.3a) we have

$$ h_\theta(\theta) = \cos(\pi\theta/T_\theta)\,\frac{\sin(N\pi\theta/T_\theta)}{N\sin(\pi\theta/T_\theta)}, \qquad (6.20) $$

or, for the frame-based reconstruction, from (5.3b),

$$ h_\theta(\theta) = \frac{\sin\big((2K+1)\pi\theta/T\big)}{N\sin(\pi\theta/T)}. \qquad (6.21) $$

From [2], we have

$$ h_r(r) = \mathrm{sinc}(r/T_r). \qquad (6.22) $$

We conclude that reconstruction from uniform samples in polar coordinates is very efficient and is performed by separable LTI filtering along each of the coordinates. In the Cartesian coordinate system this set of samples becomes nonuniform; as a consequence, the reconstruction process in Cartesian coordinates is very complicated.

6.3.2 Recurrent Nonuniform Sampling

We now consider the case of recurrent nonuniform sampling in polar coordinates, where we perform recurrent nonuniform sampling along each of the coordinates r and θ. In the radial direction we define a group of N_r samples repeated with period T_r, and in the azimuthal direction we define a group of N_θ samples repeated with period T_θ. A sampling grid for recurrent nonuniform samples in polar coordinates with M_θ = 12, N_θ = 3, and N_r = 3 is depicted in Fig. 6-5. In this sampling scheme, the samples on radial lines are always symmetric about r = 0 to provide radial symmetry of the grid. As can be seen from the figure, the nonuniform group of N_θ samples in the θ coordinate always has an even number of repetitions, i.e., M_θ is even, so that the total number of samples N = N_θ M_θ is also even.

We observe from Fig. 6-5 that recurrent nonuniform sampling in polar coordinates fits both sampling strategies of Theorem 6.2 and Theorem 6.3. Interpolation in the azimuthal coordinate can be carried out using one of the formulas for reconstructing a periodic

bandlimited signal from recurrent nonuniform samples, which were developed in Sections 3.3 and 4.3.

Figure 6-5: Recurrent nonuniform sampling in polar coordinates with M_θ = 12, N_θ = 3, and N_r = 3.

For efficient implementation of these reconstruction formulas we use a bank of N_θ LTI filters developed in Chapter 5. In the radial coordinate we use Yen's formula [29] for reconstructing a non-periodic bandlimited signal from recurrent nonuniform samples. An efficient implementation of this reconstruction formula using a bank of N_r filters was developed by Y. C. Eldar and A. Oppenheim in [31]. Based on these results we express the reconstruction as a sum of N_r N_θ convolutions,

$$ f(r, \theta) = \sum_{p=0}^{N_r-1}\sum_{q=0}^{N_\theta-1} s_{pq}(r, \theta) * h_{pq}(r, \theta), \qquad (6.23) $$

where s_pq(r, θ) is a 2-D impulse train of samples,

$$ s_{pq}(r, \theta) = \sum_{n=-\infty}^{\infty}\sum_{m=0}^{M_\theta-1} f(nT_r + r_p,\, mT_\theta + \theta_q)\,\delta(r - nT_r - r_p,\, \theta - mT_\theta - \theta_q), \qquad (6.24) $$

and h_pq(r, θ) = h^r_p(r) h^θ_q(θ) is a separable 2-D filter, where h^θ_q(θ) is given by (5.10) and h^r_p(r) follows from the results in [31],

$$ h^r_p(r) = a_p\, T_r\, \frac{\sin(\pi r/T_r)}{\pi r} \prod_{q=0,\, q\neq p}^{N_r-1} \sin\big(\pi(r + r_p - r_q)/T_r\big), \qquad (6.25) $$

where

$$ a_p = \prod_{q=0,\, q\neq p}^{N_r-1} \frac{1}{\sin\big(\pi(r_p - r_q)/T_r\big)}. \qquad (6.26) $$

From Chapter 5 we have that the filter h^θ_q(θ) is bandlimited to M_θ N_θ/2 and that its magnitude response is piecewise constant. It was shown in [31] that the filter h^r_p(r) of (6.25) is bandlimited to πN_r/T_r and piecewise constant over frequency intervals of length 2π/T_r. These facts lead to further efficiency in the implementation of the 2-D filters h_pq(r, θ). Moreover, exploiting the separability of the filters h_pq(r, θ), we conclude that in practice we design only N_r and N_θ one-dimensional filters in the r and θ coordinates, respectively. For instance, to reconstruct the signal from the samples given in Fig. 6-5 we need N_r + N_θ = 3 + 3 = 6 one-dimensional filters.

We note that various strategies can be used to sample a two-dimensional signal in polar coordinates. One such method is depicted in Fig. 6-6, where recurrent nonuniform sampling in the radial coordinate is combined with uniform sampling in the θ coordinate. These forms of sampling, with a recurrent nonuniform distribution of samples along one of the polar coordinates, can be used in magnetic resonance imaging, where nonuniform frequency-domain samples of the object are usually taken in polar coordinates. In particular, sampling along interleaving linear spiral trajectories results in a recurrent nonuniform distribution of samples in the radial direction [85]. Sampling along other interleaving curves in polar coordinates, such as STAR or rosette trajectories, produces recurrent nonuniform samples in the θ coordinate [86].
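The sketch below (our own illustration; the radial offsets are arbitrary) evaluates the radial branch filter (6.25)-(6.26) and checks numerically, on a long truncated window, that its spectrum is confined to |ω| ≤ πN_r/T_r, which is the property exploited above.

```python
import numpy as np

Tr, Nr = 1.0, 3
r_off = np.array([0.0, 0.12, 0.55])      # offsets r_p of one radial group (illustrative)

def h_r(p, r):
    """Radial filter (6.25)-(6.26) of the p-th branch."""
    r = np.asarray(r, dtype=float)
    a_p = np.prod([1.0 / np.sin(np.pi * (r_off[p] - r_off[q]) / Tr)
                   for q in range(Nr) if q != p])
    core = np.sinc(r / Tr)               # = Tr * sin(pi r/Tr) / (pi r)
    for q in range(Nr):
        if q != p:
            core = core * np.sin(np.pi * (r + r_off[p] - r_off[q]) / Tr)
    return a_p * core

# spectrum of a (truncated) window: energy outside pi*Nr/Tr comes only from truncation
dr = 0.01
r = np.arange(-200.0, 200.0, dr)
H = np.abs(np.fft.fftshift(np.fft.fft(h_r(0, r)))) * dr
w = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(len(r), d=dr))
print(H[np.abs(w) > np.pi * Nr / Tr + 0.2].max() / H.max())   # small leakage only
```

A separable 2-D filter h_pq(r, θ) is then simply the outer product of such a radial response with an azimuthal response of the form (5.10), which is why only N_r + N_θ one-dimensional designs are needed.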

Figure 6-6: Uniform sampling in the θ coordinate and recurrent nonuniform sampling with N_r = 3 in the r coordinate.

6.4 Discussion

We now make several remarks on the reconstruction methods proposed in this chapter. First, we note that, like most reconstruction theorems for non-periodic bandlimited signals, our methods involve functions of infinite support for the reconstruction in the radial coordinate, e.g., the sinc function. Such functions cannot be implemented practically, and in most applications they are replaced by truncated versions. This introduces some high-frequency artifacts in the reconstructed image. The second remark concerns the fact that these theorems require an infinite number of sampling points in r, while in practice we are usually given only a finite number of samples.

In light of the above remarks, an obvious question arises regarding the practical applicability of Theorems 6.2 and 6.3. To bridge the gap between theory and practice we will assume that the given finite number of radial samples belongs to a periodic bandlimited signal. This assumption will also enable us to use the results of Chapters 3-5 when reconstructing in the radial coordinate. Therefore, in the next chapter, we will use this assumption to recover tomographic images from a finite number of frequency-domain samples given in polar coordinates.

Chapter 7

Tomographic Image Reconstruction

Computerized tomography has had a revolutionary impact in diagnostic medicine and has been used successfully in industrial non-invasive applications since its introduction by Hounsfield. In 1979, the inventors Hounsfield and Cormack were awarded the Nobel Prize in Medicine for the development of computer-assisted tomography.

Reconstructing an object from its projections via tomographic methods should result in the least amount of distortion or artifacts, so as not to influence the clinician's judgment. For practical reasons, efficiency is another prime concern. Any processing should require the least computational effort, particularly when dealing with the large amount of data involved in spatial or volumetric medical imaging. Therefore, designing new reconstruction algorithms for tomographic images that reach the required reconstruction speed and quality is a goal for many clinical institutes and researchers.

In this chapter, we consider the problem of reconstructing a 2-D object in computerized tomographic imaging. In Section 7.1, we introduce the concepts of tomographic imaging. Then, in Section 7.2, we define the theoretical tools which are necessary to reconstruct an image from its tomographic projections and develop a new reconstruction method. In Section 7.3, we first describe different iterative and non-iterative methods for the reconstruction of tomographic images. We then compare the performance of these methods with the algorithm proposed in Section 7.2 on the reconstruction of an analytic phantom, which is commonly used in tomographic applications. We show that our method provides efficient and accurate

reconstruction.

Figure 7-1: A typical CT scanner construction: X-ray tube (1), detector (2), sliding bed (3), and patient's body (4).

7.1 Computerized Tomography

In this section we present the basic concepts, definitions, and notation of tomographic imaging. For a comprehensive treatment of different 2-D and 3-D tomographic methods see, e.g., [80].

7.1.1 Computerized Tomography

Computerized tomography is a method for reconstructing a multidimensional signal from its projections, taken from different angles in a lower-dimensional space. The angular sampling is usually accomplished by rotating the detector or the source-detector system around the patient, as shown in Fig. 7-1: the object (4) is illuminated by radiation from the source

(1) and the projections are recorded on the detector (2). By combining these projection views mathematically, the structure of the object can be obtained. In practice, sources emitting X-rays, gamma rays, electrons, or positrons are used to produce the projections. When the source is outside the object, the reconstruction is called transmission tomography; otherwise it is called emission tomography.

Figure 7-2: Radon transform (left) and the Fourier Slice Theorem (right). A parallel projection P_θ(r) of an image f(x, y) corresponds to a radial line of the Fourier transform F(u, v) of the object.

7.1.2 Projections

Based on their physical geometry, tomography techniques are classified into parallel beam, fan beam, cone beam, and helical beam. In this work, attention is devoted to the so-called parallel-beam (or straight-ray) transmission tomography.

In parallel-beam tomography, a series of parallel rays of high-frequency radiation (usually in the X-ray spectrum) traverses the object f(x, y), which is space-limited to the circular disc of radius A. These rays of initial energy are attenuated by the object as they traverse it, until they reach the detector on the other side (see Fig. 7-1). The remaining energy

of the rays forms the projection. Fig. 7-2 (left) shows a 2-D slice view of the parallel-beam geometry. Assuming the object can be described by a 2-D function f(x, y), we can easily derive the projection function P_θ(r) at a certain angle θ. This function can be treated as a line integral along a beam of parallel rays,

$$ P_\theta(r) = \int_{\mathbb{R}^2} f(x, y)\,\delta(x\cos\theta + y\sin\theta - r)\,dx\,dy, \qquad (7.1) $$

where δ denotes the Dirac delta function. The function R(θ, r) = P_θ(r) is called the Radon transform of the object f(x, y) [87]. The goal of computerized tomography is to recover the object function f(x, y) given the information of the projections P_θ(r).

7.2 Reconstruction Principles

A fundamental tool of straight-ray tomography is the Fourier Slice Theorem, which relates the 1-D Fourier transform of the projections to the 2-D Fourier transform of the object image [87]:

Theorem 7.1 (Fourier Slice Theorem). The 1-D Fourier transform of the Radon transform P_θ(r) with respect to r gives a slice S_θ(ρ) of the 2-D Fourier transform F(u, v) of the object, subtending an angle θ with the u-axis. In other words, the Fourier transform of a projection in Fig. 7-2 (left) gives the values of F(u, v) along the line BB in Fig. 7-2 (right).

Figure 7-3: Fourier Slice Theorem. Here R, F_1D, and F_2D denote the Radon, 1-D Fourier, and 2-D Fourier transforms, respectively.

This theorem can be derived from a simple transformation of (7.1). Using Theorem 7.1, the reconstruction problem can be formulated as a problem of image reconstruction from its frequency-domain samples.
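A one-line discrete sanity check of Theorem 7.1 (our own illustration): at θ = 0 the relation between a projection and a slice of the spectrum is exact for the DFT.

```python
import numpy as np

rng = np.random.default_rng(0)
f = rng.standard_normal((64, 64))           # stand-in for a discretized object f(x, y)
projection = f.sum(axis=0)                  # P_0(r): integration along the y direction
slice_0 = np.fft.fft2(f)[0, :]              # the k_y = 0 line of the 2-D DFT F(u, v)
print(np.allclose(np.fft.fft(projection), slice_0))   # True
```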

If an infinite number of projections were taken, then F(u, v) would be known at all points in the u-v plane, and the function f(x, y) could be recovered by using the inverse Fourier transform

$$ f(x, y) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} F(u, v)\, e^{j2\pi(ux + vy)}\, du\, dv. \qquad (7.2) $$

Using the fact that f(x, y) is zero outside the region Q = [-A, A] × [-A, A], (7.2) can be written as

$$ f(x, y) = \frac{1}{A^2}\sum_{m=-\infty}^{\infty}\sum_{n=-\infty}^{\infty} F\!\left(\frac{m}{A}, \frac{n}{A}\right) e^{j2\pi(mx + ny)/A} \qquad (7.3) $$

for [x, y] ∈ Q. From (7.3) we conclude that if F(m/A, n/A) is known for all n and m, then f(x, y) can be perfectly reconstructed. However, this condition does not hold in practice. In real applications, the number of projections P_θ(r) is finite, and they form a discrete version of the Radon transform of the object. In that case the function F(u, v) is known only along a finite number of radial lines. Moreover, the fact that a detector can detect only a finite number of parallel rays makes exact calculation of a slice S_θ(ρ) of F(u, v) impossible. In the next section we discuss and propose a solution to these practical limitations of the method.

7.2.1 Reconstruction Method

We now approximate the discrete version of S_θ(ρ) from the finite number of samples of the projection P_θ(r). For practical purposes, we may assume that each projection is bandlimited to W, i.e., S_θ(ρ) = 0 for |ρ| ≥ W. In this case, the projections can be sampled at intervals of 1/2W, i.e., W is determined by the distance between the sensors. Since the projections are also limited to 2A, we have

$$ P_\theta\!\left(\frac{m}{2W}\right), \qquad m = -\frac{N_\rho - 1}{2}, \dots, 0, \dots, \frac{N_\rho - 1}{2}, \qquad (7.4) $$

where N_ρ = 4WA + 1. We then approximate S_θ(ρ) at a finite number of points by the discrete Fourier transform of the projection,

$$ S_\theta\!\left(\frac{2Wm}{N_\rho}\right) \approx \frac{1}{2W}\sum_{k=-(N_\rho-1)/2}^{(N_\rho-1)/2} P_\theta\!\left(\frac{k}{2W}\right) e^{-j2\pi mk/N_\rho}. \qquad (7.5) $$

From (7.5) we conclude that we are given only a finite number of samples of F(u, v)

on the polar grid. These samples are depicted by circles in Fig. 7-4. In order to utilize (7.3), an interpolation has to be carried out to fill the Cartesian grid. This interpolation operation is called gridding and is shown schematically in Fig. 7-4, where the given samples in polar coordinates are marked by circles and the desired samples on the rectilinear grid are marked by squares.

Figure 7-4: Gridding from polar to Cartesian coordinates.

It is common to determine the values on the square grid by some kind of nearest-neighbor or linear interpolation from the radial points [80]. These simple polar-Cartesian interpolations in the frequency domain may introduce inaccuracies in the reconstructed image [88]. Therefore, accurate interpolation methods are required.

In our method, we propose using the reconstruction methods developed in Chapter 6 for the polar-Cartesian interpolation of F(u, v) from its samples (7.5) given on a polar grid. We first note that the problem of reconstructing a 2-D signal bandlimited in the frequency domain and the problem of reconstructing the 2-D Fourier transform of a space-limited signal are mathematically equivalent. We also observe that the frequency-domain samples

of the reconstructed object in CT correspond to the second sampling strategy presented in Section 6.2.2, where the samples of the 2-D signal all lie on radial lines passing through the origin. Therefore, relying on Theorem 6.3 and assuming that the Fourier transform F(ρ, θ) of the reconstructed image, which is given in polar coordinates, has a limited number of harmonics with respect to the θ coordinate, we have

$$ F(\rho, \theta) = \sum_{n=0}^{N_\theta-1}\sum_{k=0}^{N_\rho-1} S_{\theta_n}(\rho_k)\,\Psi_k(\rho)\,\Phi_n(\theta), \qquad (7.6) $$

where Ψ_k(ρ) and Φ_n(θ) are defined in (3.20) and (3.52), respectively. Here the finite number of samples S_θn(ρ_k), k = 0, ..., N_ρ - 1, on each radial line are periodically extended with respect to the ρ coordinate, and the ideal sinc(ρ) interpolation is replaced by Ψ_k(ρ) of (3.20). Although the Fourier transform of most objects encountered in CT will not be angularly bandlimited, in practice the energy contained above a certain angular frequency is negligible.

Given the fully reconstructed Fourier transform F(ρ, θ) of an image f(x, y), we resample it on the uniform grid required in (7.3) for the IFFT calculation. To conclude this section, we summarize the process of reconstructing tomographic images in four steps, shown schematically in (7.7).

Step 1: Project the object at the different angles θ to obtain the Radon transform P_θ(r).
Step 2: Compute the 1-D FFT of each projection P_θ(r).
Step 3: Interpolate from the polar samples onto rectilinear frequency samples.
Step 4: Compute the inverse 2-D FFT.

$$ f(x, y) \xrightarrow{\text{projection}} P_\theta(r) \xrightarrow{\;\mathcal{F}_{1D}\;} S_\theta(\rho) = F(\rho, \theta) \xrightarrow{\text{gridding}} F(u, v) \xrightarrow{\;\mathcal{F}_{2D}^{-1}\;} f(x, y). \qquad (7.7) $$

In the next section we provide and analyze simulation results. We compare different reconstruction methods using different sampling patterns in the frequency domain. We show that the method developed in this section results in a higher quality of reconstruction compared to the others.
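To see why the gridding step (Step 3) deserves a careful interpolator, the following self-contained sketch (our own illustration, not the thesis code) grids polar samples of a smooth test spectrum onto a Cartesian grid with the nearest-neighbor rule and measures the resulting error; the interpolation (7.6) is intended to replace exactly this step.

```python
import numpy as np

F = lambda u, v: np.exp(-0.01 * (u**2 + v**2))        # smooth test spectrum (known exactly)

# polar frequency samples: N_theta radial lines, N_rho points per line
N_theta, N_rho, rho_max = 36, 91, 30.0
theta = np.pi * np.arange(N_theta) / N_theta
rho = np.linspace(-rho_max, rho_max, N_rho)
pu = (rho[None, :] * np.cos(theta[:, None])).ravel()
pv = (rho[None, :] * np.sin(theta[:, None])).ravel()
pvals = F(pu, pv)

# Cartesian target grid and nearest-neighbor gridding
g = np.linspace(-20, 20, 41)
gu, gv = np.meshgrid(g, g)
d2 = (gu.ravel()[:, None] - pu[None, :])**2 + (gv.ravel()[:, None] - pv[None, :])**2
nn = pvals[np.argmin(d2, axis=1)].reshape(gu.shape)

print(np.max(np.abs(nn - F(gu, gv))))   # a noticeable gridding error, far from exact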

7.3 Simulations

We first describe some methods which will be used to reconstruct an image from its nonuniform Fourier-transform samples; the first two are very popular in interpolation theory and the third is an iterative algorithm which is widely used for the reconstruction of tomographic images.

Nearest-Neighbor Interpolation: The basis function associated with nearest-neighbor interpolation is the simplest of all, since it is a square pulse [66]. The main interest of this basis function is its simplicity, which results in the most efficient of all implementations. In fact, for any coordinate at which the value of the interpolated function is to be computed, only one sample contributes, no matter how many dimensions are involved. The price to pay is a loss of quality.

Linear Interpolation: This method enjoys a large popularity because the complexity of its implementation is very low, just above that of the nearest-neighbor method [66]. Moreover, it satisfies the interpolation property (3.11) and it is the simplest interpolating basis function that builds a continuous function out of a sequence of discrete samples. In the two-dimensional case, four samples contribute to the computation of each value of the interpolated function.

Fessler's NUFFT (iterative algorithm): An alternative approach, which combines the last two steps of (7.7) (gridding and IFFT) into one, is iterative computation of the inverse transform using the forward non-uniform fast Fourier transform (NUFFT). The forward NUFFT can be computed with the scheme proposed by Fessler and Sutton in [89]. This method is optimal in the min-max sense of minimizing the worst-case approximation error over all signals of unit norm (see [89]). It was successfully applied to the reconstruction of images in diffraction ultrasound tomography [90].

We now compare the performance of these reconstruction methods with that of our algorithm developed in Section 7.2. Computer simulations illustrating the performance of the proposed algorithms are demonstrated on the well-known Shepp-Logan head phantom [91] (see Appendix D).

Example 7.1. We consider the problem of reconstructing a Shepp-Logan head phantom from 12 uniformly spaced projections with N_ρ = 91 samples each. This set of samples is

presented in Fig. 7-5 (top-right). We note that a uniform distribution in both coordinates is the most common in the context of CT images, since the samples along the radial lines are obtained by a uniform FT of the projections, which are usually uniform. Reconstruction results using the four methods previously discussed in this section are shown in Fig. 7-5.

Example 7.2. In this example we reconstruct a Shepp-Logan head phantom from 12 recurrent nonuniform projections with N_ρ = 91 samples each. This set of samples was generated by discarding every third projection from a uniform set of 18 projections. The set of frequency-domain samples and the reconstruction results are presented in Fig. 7-6.

Example 7.3. Here, a Shepp-Logan head phantom is reconstructed from 12 nonuniform projections with N_ρ = 91 samples each (see Fig. 7-7). Such a distribution of samples may be the result of synchronization problems in the scanner machine (Fig. 7-1).

In Figs. 7-5, 7-6, and 7-7 we present the original Shepp-Logan head phantom and four reconstructions obtained with the different methods. We can clearly see that the reconstructed phantom is closer to the original when the method (7.6) proposed in this chapter is used for the polar-Cartesian gridding. From these figures we conclude that polar-Cartesian gridding is very sensitive to the interpolation method used, and that the simple nearest-neighbor and linear techniques do not perform well and introduce artifacts in the reconstruction. We also observe that Fessler's NUFFT algorithm is not capable of recovering the sharp edges and small objects of the Shepp-Logan head phantom. This iterative algorithm requires more frequency-domain samples to reach the same quality achieved by our method.

Figure 7-5: Reconstruction of a Shepp-Logan phantom (top-left) in computerized tomography from its frequency-domain samples lying on uniformly distributed radial lines (top-right). The panels show the original phantom, the frequency-domain samples, and reconstructions by Fessler's NUFFT, nearest-neighbor interpolation, linear interpolation, and our method.

Figure 7-6: Reconstruction of a Shepp-Logan phantom (top-left) in computerized tomography from its frequency-domain samples lying on two sets of uniformly distributed radial lines (top-right). The panels show the original phantom, the frequency-domain samples, and reconstructions by Fessler's NUFFT, nearest-neighbor interpolation, linear interpolation, and our method.

Figure 7-7: Reconstruction of a Shepp-Logan phantom (top-left) in computerized tomography from its frequency-domain samples lying on nonuniformly distributed radial lines (top-right). The panels show the original phantom, the frequency-domain samples, and reconstructions by Fessler's NUFFT, nearest-neighbor interpolation, linear interpolation, and our method.

Chapter 8

Summary and Topics for Future Research

In this chapter we summarize the contributions of this work, outline the conclusions, and describe ongoing and future work.

8.1 Contribution

In this work, we presented two new formulas for the reconstruction of a periodic bandlimited signal from nonuniform samples. In the first formula we obtained new reconstruction functions for the case of an even number of sampling points. We showed that Lagrange interpolation with trigonometric polynomials for an odd number of nonuniform samples is a special case of our reconstruction theorem. Simplifying our reconstruction formula for the case of uniformly spaced samples, we verified that the result coincides with the functions derived by Cauchy in [1] and by Schanze in [47] for the cases of an odd and an even number of samples, respectively. We also derived new reconstruction functions for the case of recurrent nonuniform sampling.

We obtained some interesting new results concerning the stability of the reconstruction algorithm, in particular the fact that reconstruction from an even number of uniform samples is less stable than reconstruction from an odd number of sampling points. A simple method for stability analysis of the reconstruction in the case of recurrent nonuniform samples was developed.

The second reconstruction formula was developed for the case of oversampled periodic

bandlimited signals. We compared the two proposed methods and examined the advantages and drawbacks of each. Specifically, we showed that the first method provides consistent reconstruction of the signal while the second is more stable in noisy environments. For the uniform and recurrent nonuniform sampling schemes we developed efficient implementations of these reconstruction algorithms using continuous-time and discrete-time LTI filters.

Moreover, we used our one-dimensional results in the reconstruction of two-dimensional signals, periodic in one dimension, given in polar coordinates. Here again, efficient reconstruction methods were developed for uniform and recurrent nonuniform sampling in polar coordinates. These techniques were successfully applied to tomographic image reconstruction. We showed that our methods result in a high quality of reconstruction compared to other methods.

8.2 Ongoing Work

This research, as well as the development of the reconstruction of periodic bandlimited signals from nonuniform samples in general, has ongoing efforts. There are several extensions of this work that we are going to explore.

8.2.1 Symmetric Expansion

We may conclude from this work that the periodic expansion provides a simple and appropriate way to handle the problem of reconstructing finite-dimensional signals. Arguably the main drawback of this approach is the underlying periodicity assumption about the reconstructed function. If the difference x(t_{N-1}) - x(t_0) is large, the reconstructed signal suffers from disturbing boundary effects due to the assumption of periodicity.

Recently, there has been some work on reconstructing a signal from a finite number of nonuniform samples by applying Neumann boundary conditions [51], i.e., a symmetric extension across the end points of the sampling intervals. This approach may improve the quality of the reconstruction at the boundaries of the signal when the signal is not really periodic. The reconstruction method proposed in [51] is iterative. Here, instead, we propose non-iterative algorithms for reconstruction of the signal under the assumption of its symmetry. In Fig. 8-1 we provide examples of periodic and symmetric expansions of nonuniform samples. We are currently developing this reconstruction method and investigating its properties, advantages, and disadvantages.

Figure 8-1: Examples of periodic (left) and symmetric (right) expansions of nonuniform samples.

8.2.2 Reconstruction with Nonuniform B-splines

A B-spline of degree n is a piecewise polynomial with n + 1 pieces that are smoothly connected together. The joining points of the polynomials are called knots. For a B-spline of degree n, each segment is a polynomial of degree n. The special case of B-splines with uniform knots was studied extensively by Unser et al. [92]. Recently, there have been some works on interpolating a signal using nonuniform B-splines [93, 94]. The assumption that the reconstructed signal is a spline may result in a more stable interpolation, as we show in Fig. 8-2. Investigation of the properties of this type of reconstruction is one of our future goals.

8.2.3 Recovery of Missing Pixels in Bandlimited Images

We would like to complete this work with the two-dimensional image of Lena, which is depicted in Fig. 8-3 (left). Specifically, we consider the problem of recovering missing samples of a discrete image. When data from a uniformly sampled image are lost, the result is a set of nonuniformly spaced samples. The nonuniformly sampled Lena is depicted in Fig. 8-3 (middle), where the missing samples are displayed as black pixels.

Our main assumption is that the image is a 2-D periodic bandlimited signal. This approach is frequently encountered in image processing [40] and provides good results. We recover the lost pixels using an oblique projection operator onto the space of periodic bandlimited signals [63]. The result of the recovery of the Lena image using our algorithm

is shown in Fig. 8-3 (right). We observe that there is almost no visible difference between the reconstruction provided by this method and the original image. The drawback of the proposed algorithm is the fact that it involves the inverse or pseudoinverse of large matrices, which is computationally demanding. Our future goal is to develop efficient reconstruction methods for the problem considered in this section.

Figure 8-2: Examples of periodic bandlimited (left) and B-spline (right) reconstructions from nonuniform samples.

Figure 8-3: Recovery of missing pixels. Left: Original digital image of Lena. Middle: Lena with 43% randomly missing pixels. Right: Reconstructed image.

Appendix A

Proof of the Reconstruction Theorem from Nonuniform Samples

In this appendix we prove Theorem 3.1. Let {t_p}, p = 0, ..., N - 1, denote a set of nonuniformly distributed sampling points of a periodic bandlimited signal x(t) such that 0 ≤ t_0 < ... < t_{N-1} < T, where T is the period of x(t). Using the periodicity of the signal, i.e.,

$$ x(t_p) = x(t_p + nT), \qquad n \in \mathbb{Z}, \qquad (A.1) $$

it can be shown that arbitrary sampling of a periodic signal corresponds to a recurrent nonuniform sampling scheme according to which a group of N samples repeats itself along the signal with period T. A formula for reconstructing a bandlimited signal from recurrent nonuniform samples was derived by Yen in [29] and is given by

$$ x(t) = \sum_{p=0}^{N-1}\sum_{n=-\infty}^{\infty} x(t_p + nT)\, \frac{a_p\,(-1)^{nN}\prod_{q=0}^{N-1}\sin(\pi(t - t_q)/T)}{\pi(t - nT - t_p)/T}, \qquad (A.2) $$

where

$$ a_p = \prod_{q=0,\, q\neq p}^{N-1} \frac{1}{\sin(\pi(t_p - t_q)/T)}. \qquad (A.3) $$

Here the average sampling rate N/T [Hz] should be greater than the Nyquist rate. In Section 2.1, we defined the Nyquist rate, i.e., twice the highest nonzero frequency, of a T-periodic K-bandlimited signal as 2K/T [Hz]. Thus, the condition for perfect reconstruction of the signal x(t) is N/T > 2K/T, or equivalently

$$ N \ge 2K + 1. \qquad (A.4) $$

Substituting (A.1) into (A.2), we have

$$ x(t) = \sum_{p=0}^{N-1} x(t_p) \sum_{n=-\infty}^{\infty} \frac{a_p\,(-1)^{nN}\prod_{q=0}^{N-1}\sin(\pi(t - t_q)/T)}{\pi(t - nT - t_p)/T}. \qquad (A.5) $$

We now define the function h_p(t) as the function which multiplies the pth sampled value in (A.5). Using the relation sin(t - nπ) = (-1)^n sin(t), we can express h_p(t) as

$$ h_p(t) = \sum_{n=-\infty}^{\infty} \frac{a_p\,(-1)^{nN}\prod_{q=0}^{N-1}\sin(\pi(t - t_q)/T)}{\pi(t - nT - t_p)/T} = \sum_{n=-\infty}^{\infty} \frac{a_p\prod_{q=0}^{N-1}\sin(\pi(t - nT - t_q)/T)}{\pi(t - nT - t_p)/T}. \qquad (A.6) $$

It is easily seen that h_p(t) is periodic in T and satisfies the Dirichlet conditions, i.e., h_p(t) is absolutely integrable over one period, has a finite number of maxima and minima in one period, and is continuous. Therefore, h_p(t) has the Fourier series representation

$$ h_p(t) = \sum_{k=-\infty}^{\infty} c_k\, e^{j2\pi kt/T}, \qquad (A.7) $$

where c_k are the corresponding Fourier coefficients, calculated as in (2.3),

$$ c_k = \frac{1}{T}\int_0^T h_p(t)\, e^{-j2\pi kt/T}\, dt = \frac{a_p}{T}\int_0^T \sum_{n=-\infty}^{\infty} \frac{\prod_{q=0}^{N-1}\sin(\pi(t - nT - t_q)/T)}{\pi(t - nT - t_p)/T}\, e^{-j2\pi kt/T}\, dt = \frac{a_p}{T}\int_0^T \sum_{n=-\infty}^{\infty} \frac{\sin(\pi(t - nT - t_p)/T)}{\pi(t - nT - t_p)/T} \prod_{q=0,\,q\neq p}^{N-1}\sin(\pi(t - nT - t_q)/T)\, e^{-j2\pi kt/T}\, dt. \qquad (A.8) $$

In the last equation (A.8), we replace t by a new variable u, which we define as u = t - nT - t_p.

The integration limits are changed accordingly to I_n = [-nT - t_p, T - nT - t_p]. Using the fact that e^{-j2πkn} = 1 for any integers n and k, the modified equation (A.8) is given by

$$ c_k = \frac{a_p}{T}\, e^{-j2\pi k t_p/T} \sum_{n=-\infty}^{\infty} \int_{I_n} \frac{\sin(\pi u/T)}{\pi u/T} \prod_{q=0,\,q\neq p}^{N-1}\sin(\pi(u + t_p - t_q)/T)\, e^{-j2\pi ku/T}\, du. \qquad (A.9) $$

Using the exponential decomposition of the sine functions, we can express the product of sines in (A.9) as a sum of exponentials, i.e.,

$$ \prod_{q=0,\,q\neq p}^{N-1}\sin(\pi(u + t_p - t_q)/T) = \prod_{q=0,\,q\neq p}^{N-1} \frac{1}{2j}\left(e^{j\pi(u + t_p - t_q)/T} - e^{-j\pi(u + t_p - t_q)/T}\right) = \sum_{l=-N+1}^{N-1} b_l\, e^{j\pi l u/T}, \qquad (A.10) $$

where the complex coefficients b_l are the result of expanding the product of sines. The power of each exponential in the final expression in (A.10) is obtained as a sum of m positive and N - 1 - m negative powers of the exponentials in the middle expression of (A.10). Therefore, the set of all possible values of l is defined as l = m - (N - 1 - m) = 2m - N + 1, for m ∈ [0, N - 1]. As a result, every second coefficient b_l, starting from b_{-N+2}, is equal to zero,

$$ b_l = 0, \qquad l = -N + 2, -N + 4, \dots, N - 2. \qquad (A.11) $$

Since the coefficients b_l will only be used to represent the final expression again as a product of sines, their exact values are not needed.

We observe that the intervals I_n in equation (A.9) are not overlapping and that the union of all intervals I_n is the set of all real numbers, i.e., the union over n of I_n equals R. For this reason, we replace the sum and integral operations in (A.9) by an integral over the domain u ∈ (-∞, ∞). The coefficient expression (A.9) then becomes

$$ c_k = a_p\, e^{-j2\pi k t_p/T}\, \frac{1}{T}\int_{-\infty}^{\infty} \frac{\sin(\pi u/T)}{\pi u/T} \sum_{l=-N+1}^{N-1} b_l\, e^{j\pi l u/T}\, e^{-j2\pi k u/T}\, du = a_p\, e^{-j2\pi k t_p/T} \sum_{l=-N+1}^{N-1} \frac{b_l}{T}\int_{-\infty}^{\infty} \frac{\sin(\pi u/T)}{\pi u/T}\, e^{-j\frac{2\pi}{T}\left(k - \frac{l}{2}\right)u}\, du. \qquad (A.12) $$

Using the fact that

$$ \frac{1}{T}\,\mathrm{sinc}(u/T) \;\xrightarrow{\;\text{Fourier}\;}\; \mathrm{rect}(Tf) = \begin{cases} 1, & |Tf| < 0.5; \\ 0.5, & |Tf| = 0.5; \\ 0, & \text{otherwise}, \end{cases} \qquad (A.13) $$

we obtain

$$ c_k = a_p\, e^{-j2\pi k t_p/T} \sum_{l=-N+1}^{N-1} b_l\, \mathrm{rect}\!\left(k - \frac{l}{2}\right) = \begin{cases} a_p\, b_{2k}\, e^{-j2\pi k t_p/T}, & |k| \le (N-1)/2, \quad N \text{ odd}; \\[1mm] a_p\, \dfrac{b_{2k-1} + b_{2k+1}}{2}\, e^{-j2\pi k t_p/T}, & |k| \le N/2, \quad N \text{ even}; \\[1mm] 0, & \text{otherwise}. \end{cases} \qquad (A.14) $$

Substituting these coefficients into (A.7), we can calculate the expression for h_p(t), where different functions are obtained for the cases of an odd and an even number of sampling points.

For N odd:

$$ h_p(t) = \sum_{k=-(N-1)/2}^{(N-1)/2} c_k\, e^{j2\pi kt/T} = \sum_{k=-(N-1)/2}^{(N-1)/2} a_p\, b_{2k}\, e^{-j2\pi k t_p/T}\, e^{j2\pi kt/T} = a_p \sum_{k=-(N-1)/2}^{(N-1)/2} b_{2k}\, e^{j2\pi k(t - t_p)/T}. \qquad (A.15) $$

Replacing the index by l = 2k and using the property (A.11) of the coefficient set b_l, we have

$$ h_p(t) = a_p \sum_{l=-N+1}^{N-1} b_l\, e^{j\pi l(t - t_p)/T}. \qquad (A.16) $$

The last equation (A.16) is equivalent to (A.10), where u in (A.10) corresponds to (t - t_p) in (A.16). Collapsing (A.16) back into a product of sines according to (A.10) results in

$$ h_p(t) = a_p \prod_{q=0,\,q\neq p}^{N-1} \sin(\pi(t - t_q)/T) = \prod_{q=0,\,q\neq p}^{N-1} \frac{\sin(\pi(t - t_q)/T)}{\sin(\pi(t_p - t_q)/T)}. \qquad (A.17) $$

For N even:

$$ h_p(t) = \sum_{k=-N/2}^{N/2} c_k\, e^{j2\pi kt/T} = \sum_{k=-N/2}^{N/2} a_p\, \frac{b_{2k-1} + b_{2k+1}}{2}\, e^{-j2\pi k t_p/T}\, e^{j2\pi kt/T} = \frac{a_p}{2} \sum_{k=-N/2}^{N/2} b_{2k-1}\, e^{j2\pi k(t - t_p)/T} + \frac{a_p}{2} \sum_{k=-N/2}^{N/2} b_{2k+1}\, e^{j2\pi k(t - t_p)/T}. \qquad (A.18) $$

Similarly to the previous case (N odd), denoting l_1 = 2k - 1 and l_2 = 2k + 1 in the first and the second terms of (A.18), respectively, and using (A.11) together with the fact that b_{N+1} = b_{-N-1} = 0, we have

$$ h_p(t) = \frac{a_p}{2} \sum_{l_1=-N-1}^{N-1} b_{l_1}\, e^{j\pi(l_1+1)(t - t_p)/T} + \frac{a_p}{2} \sum_{l_2=-N+1}^{N+1} b_{l_2}\, e^{j\pi(l_2-1)(t - t_p)/T} = \frac{a_p}{2}\, e^{j\pi(t - t_p)/T} \sum_{l_1=-N+1}^{N-1} b_{l_1}\, e^{j\pi l_1(t - t_p)/T} + \frac{a_p}{2}\, e^{-j\pi(t - t_p)/T} \sum_{l_2=-N+1}^{N-1} b_{l_2}\, e^{j\pi l_2(t - t_p)/T} = \frac{a_p}{2}\, e^{j\pi(t - t_p)/T} \prod_{q=0,\,q\neq p}^{N-1}\sin(\pi(t - t_q)/T) + \frac{a_p}{2}\, e^{-j\pi(t - t_p)/T} \prod_{q=0,\,q\neq p}^{N-1}\sin(\pi(t - t_q)/T) = \frac{a_p}{2}\left(e^{j\pi(t - t_p)/T} + e^{-j\pi(t - t_p)/T}\right) \prod_{q=0,\,q\neq p}^{N-1}\sin(\pi(t - t_q)/T) = \cos\!\left(\frac{\pi(t - t_p)}{T}\right) \prod_{q=0,\,q\neq p}^{N-1} \frac{\sin(\pi(t - t_q)/T)}{\sin(\pi(t_p - t_q)/T)}, \qquad (A.19) $$

completing the proof of the theorem.
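As a quick numerical sanity check of the derived functions (A.17) and (A.19) (our own check, not part of the proof), the even-N functions reproduce a random signal in V_{(N-2)/2} exactly:

```python
import numpy as np

rng = np.random.default_rng(2)
T, N = 1.0, 8                      # N even, so exact reconstruction holds for K <= (N-2)/2
K = (N - 2) // 2
tp = np.sort(rng.uniform(0, T, N))

def h(p, t):
    """The function (A.19) for the p-th nonuniform sample."""
    val = np.cos(np.pi * (t - tp[p]) / T)
    for q in range(N):
        if q != p:
            val = val * np.sin(np.pi * (t - tp[q]) / T) / np.sin(np.pi * (tp[p] - tp[q]) / T)
    return val

a, b = rng.standard_normal(K + 1), rng.standard_normal(K + 1)
x = lambda t: sum(a[k] * np.cos(2 * np.pi * k * t / T) +
                  b[k] * np.sin(2 * np.pi * k * t / T) for k in range(K + 1))
t = np.linspace(0, T, 400)
rec = sum(x(tp[p]) * h(p, t) for p in range(N))
print(np.max(np.abs(rec - x(t))))          # error at numerical-precision level
```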

Appendix B

Expansions of the Reconstruction Functions

In this appendix we expand the reconstruction functions of Theorem 3.1, which we developed in Appendix A, into sums of complex exponential functions. This expansion finds use in various sections throughout this work, one of them being the development of the frame-based reconstruction method of Theorem 4.1. We also show that these functions can be represented as a sum of trigonometric functions with real-valued coefficients. In our development, we consider separately the cases of an even and an odd number N of sampling points.

N odd

Using the expansion sin θ = (e^{jθ} - e^{-jθ})/2j, the set of reconstruction functions {h_p(t)} of (3.7), which is originally presented as a product of sines, can be equivalently represented as

a sum of exponentials:

$$ h_p(t) = \prod_{q=0,\,q\neq p}^{N-1} \frac{\sin(\pi(t - t_q)/T)}{\sin(\pi(t_p - t_q)/T)} = a_p \prod_{q=0,\,q\neq p}^{N-1} \frac{1}{2j}\left(e^{j\pi(t - t_q)/T} - e^{-j\pi(t - t_q)/T}\right) = a_p \prod_{q=0,\,q\neq p}^{N-1} \frac{1}{2j}\left(e^{j\pi t/T} e^{-j\pi t_q/T} - e^{-j\pi t/T} e^{j\pi t_q/T}\right) = \sum_{l=-(N-1)/2}^{(N-1)/2} c_{pl}\, e^{j2\pi l t/T}, \qquad (B.1) $$

where a_p is defined in (A.3) and the complex coefficients c_{pl} are the result of expanding the product of exponential differences in (B.1); they are given by

$$ c_{pl} = \frac{a_p\,(-1)^l}{2^{N-1}} \sum_{\varphi \in G_{pl}} e^{j\pi\varphi/T}. \qquad (B.2) $$

The set G_{pl} consists of all possible sums of the values {t_q}, q = 0, ..., N - 1, q ≠ p, where (N - 1)/2 + l of the values are taken with a negative sign and the remaining (N - 1)/2 - l values are taken with a positive sign. The size of the set G_{pl} is

$$ |G_{pl}| = \binom{N-1}{(N-1)/2 + l} = \frac{(N-1)!}{((N-1)/2 + l)!\,((N-1)/2 - l)!}. \qquad (B.3) $$

From the definition of the coefficients c_{pl} of (B.2), we observe that $c_{p(-l)} = \overline{c_{pl}}$, which is expected since h_p(t) is real. Using this fact and the property that $x + \bar x = 2\,\mathrm{Re}\{x\}$, (B.1) can be written as a real-valued trigonometric expression

$$ h_p(t) = c_{p0} + \sum_{l=1}^{(N-1)/2}\left(c_{pl}\, e^{j2\pi lt/T} + \overline{c_{pl}}\, e^{-j2\pi lt/T}\right) = c_{p0} + \sum_{l=1}^{(N-1)/2}\left(2\,\mathrm{Re}\{c_{pl}\}\cos(2\pi lt/T) - 2\,\mathrm{Im}\{c_{pl}\}\sin(2\pi lt/T)\right). \qquad (B.4) $$
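The coefficients c_{pl} need not be evaluated through the combinatorial sets G_{pl}: since h_p(t) lies in V_{(N-1)/2} for N odd, they can be recovered exactly from N uniform samples of h_p(t) by a DFT. The sketch below (our own cross-check of (B.1), not part of the derivation) does this and confirms the real-valued form (B.4).

```python
import numpy as np

rng = np.random.default_rng(3)
T, N = 1.0, 7
tp = np.sort(rng.uniform(0, T, N))

def h(p, t):
    """The function (3.7)/(A.17) for the p-th sample (N odd)."""
    val = np.ones_like(np.asarray(t, dtype=float))
    for q in range(N):
        if q != p:
            val = val * np.sin(np.pi * (t - tp[q]) / T) / np.sin(np.pi * (tp[p] - tp[q]) / T)
    return val

p = 2
u = np.arange(N) * T / N                      # N uniform sample points of h_p
c = np.fft.fft(h(p, u)) / N                   # c[l] multiplies exp(+j*2*pi*l*t/T)
l = np.fft.fftfreq(N, d=1.0 / N)              # the harmonic index of each coefficient

t = np.linspace(0, T, 300)
h_series = sum(c[i] * np.exp(2j * np.pi * l[i] * t / T) for i in range(N))
print(np.max(np.abs(h_series.real - h(p, t))),    # the series reproduces h_p(t)
      np.max(np.abs(h_series.imag)))              # and is real, as (B.4) states
```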

N even

Similarly to (B.1), we expand $h_p(t)$ into a sum of exponentials:

$$h_p(t) = \frac{a_p}{2}\left(e^{j\pi t/T} e^{-j\pi t_p/T} + e^{-j\pi t/T} e^{j\pi t_p/T}\right) \prod_{\substack{q=0 \\ q\ne p}}^{N-1} \frac{1}{2j}\left(e^{j\pi t/T} e^{-j\pi t_q/T} - e^{-j\pi t/T} e^{j\pi t_q/T}\right) = \sum_{l=-N/2}^{N/2} c_{pl}\, e^{j2\pi lt/T}, \tag{B.5}$$

where

$$c_{pl} = j\,\frac{a_p (-1)^l}{2^{N}}\left(\sum_{\varphi \in G^{-}_{pl}} e^{j\pi\varphi/T} - \sum_{\varphi \in G^{+}_{pl}} e^{j\pi\varphi/T}\right), \qquad l = 0,\ldots,N/2, \tag{B.6}$$

and $c_{pl} = c^{*}_{p(-l)}$. Here the set $G^{+}_{pl} \cup G^{-}_{pl}$ consists of all possible sums of the values $\{t_q\}_{q=0}^{N-1}$, where $N/2 + l$ values are chosen with a negative sign and the remaining $N/2 - l$ values are chosen to be positive. The value of $t_p$ appears with positive and negative signs in $G^{+}_{pl}$ and $G^{-}_{pl}$, respectively. The size of this set is

$$|G^{+}_{pl} \cup G^{-}_{pl}| = \binom{N}{N/2 - l} = \frac{N!}{(N/2 + l)!\,(N/2 - l)!}. \tag{B.7}$$

Now, converting the exponential expression (B.5) into a real-valued trigonometric form results in

$$h_p(t) = c_{p0} + \sum_{l=1}^{N/2}\left(2\operatorname{Re}\{c_{pl}\}\cos(2\pi lt/T) - 2\operatorname{Im}\{c_{pl}\}\sin(2\pi lt/T)\right), \tag{B.8}$$

with the coefficients $c_{pl}$ defined in (B.6).
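The same FFT probe applied to the even-N functions (a sketch with the same arbitrary assumptions as above, not thesis code) confirms that the expansion now reaches the half-spectrum harmonic $l = N/2$ with a nonzero coefficient, the component examined in Appendix C, while the conjugate symmetry $c_{p(-l)} = c^{*}_{pl}$ is preserved.

```python
import numpy as np

rng = np.random.default_rng(3)
T, N = 1.0, 8                            # N even
t_p = np.sort(rng.uniform(0, T, N))

def h(p, t):
    out = np.cos(np.pi * (t - t_p[p]) / T)
    for q in range(N):
        if q != p:
            out = out * np.sin(np.pi * (t - t_p[q]) / T) \
                      / np.sin(np.pi * (t_p[p] - t_p[q]) / T)
    return out

M = 64
t = np.arange(M) * T / M
for p in range(N):
    c = np.fft.fft(h(p, t)) / M
    c_pl = {l: c[l % M] for l in range(-N // 2, N // 2 + 1)}
    sym = max(abs(c_pl[-l] - np.conj(c_pl[l])) for l in range(N // 2 + 1))
    print(p, abs(c_pl[N // 2]), sym)     # top harmonic is nonzero; sym ~1e-16
```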

Appendix C

Span of the Reconstruction Functions

In this appendix we prove Theorem 3.2. It follows directly from Theorem 3.1 that the set of functions $\{h_p(t)\}_{p=0}^{N-1}$ of (3.7) provides perfect reconstruction for any $T$-periodic $K$-bandlimited function, where $K \le (N-1)/2$. In other words, any function $x(t)$ from the space $V_{(N-1)/2}$ or $V_{(N-2)/2}$ can be perfectly reconstructed from an odd or an even number of sampling points, respectively. This fact completes the proof for the case of N odd.

We now deal with the case of even N. It follows from definition (2.5) that the space $V_K$ is of dimension $2K+1$. Thus, for N even, the space $V_{(N-2)/2}$ is an $(N-1)$-dimensional space of $T$-periodic $((N-2)/2)$-bandlimited functions. We now define a $T$-periodic function $x_s(t)$ as

$$x_s(t) = \sin(\pi(Nt - \sigma_t)/T), \tag{C.1}$$

where

$$\sigma_t = \sum_{p=0}^{N-1} t_p. \tag{C.2}$$

We observe that $x_s(t)$ is orthogonal to the space $V_{(N-2)/2}$, since $x_s(t)$ is orthogonal to the $N-1$ functions of (2.5), which constitute a basis for $V_{(N-2)/2}$. For the case of N even, the space $V$ of (3.9) is of dimension N, since $V$ is the direct sum of the mutually orthogonal spaces $V_{(N-2)/2}$ and $\operatorname{span}\{x_s(t)\}$, of dimensions $N-1$ and $1$ respectively.

The second step is to show that the set of N functions $\{h_p(t)\}_{p=0}^{N-1}$ of (3.7) spans the

N-dimensional space $V$. It follows directly from Theorem 3.1 that this set provides perfect reconstruction for any $T$-periodic $K$-bandlimited function, where $K \le (N-1)/2$. Since N is even, the highest possible integer value for $K$ is $(N-2)/2$, and we have that

$$V_{(N-2)/2} \subseteq \operatorname{span}\{h_p(t),\ p = 0,\ldots,N-1\}. \tag{C.3}$$

We now prove that the function $x_s(t)$ is also in the span of $\{h_p(t)\}_{p=0}^{N-1}$, namely

$$x_s(t) \in \operatorname{span}\{h_p(t),\ p = 0,\ldots,N-1\}. \tag{C.4}$$

Proof: To prove (C.4), we have to show that the $T$-periodic function $x_s(t)$ of (C.1) can be reconstructed from an even number N of its nonuniform samples taken at $\{t_p\}_{p=0}^{N-1}$. In terms of Theorem 3.1, we have to show that

$$x_s(t) = \sum_{p=0}^{N-1} x_s(t_p)\, h_p(t), \tag{C.5}$$

where the set of reconstruction functions $\{h_p(t)\}_{p=0}^{N-1}$ is defined in (3.7). Substituting $x_s(t)$ of (C.1) and the exponential expansion of the functions $\{h_p(t)\}_{p=0}^{N-1}$ given in (B.5) into (C.5), we have

$$\sin(\pi(Nt - \sigma_t)/T) = \sum_{p=0}^{N-1} \sin(\pi(Nt_p - \sigma_t)/T) \sum_{l=-N/2}^{N/2} c_{pl}\, e^{j2\pi lt/T}. \tag{C.6}$$

Since $x_s(t)$ is a sine function with frequency $\pi N/T$, only the terms with $l = \pm N/2$ in (C.6) contribute to the reconstruction of the signal $x_s(t)$. Using this fact, (C.6) can be rewritten as

$$\sin(\pi(Nt - \sigma_t)/T) = \sum_{p=0}^{N-1} \sin(\pi(Nt_p - \sigma_t)/T)\left(c_{p(-N/2)}\, e^{-j\pi Nt/T} + c_{p(N/2)}\, e^{j\pi Nt/T}\right), \tag{C.7}$$

where the complex coefficients $c_{p(-N/2)}$ and $c_{p(N/2)}$ are defined in (B.6), and are given by

$$c_{p(N/2)} = c^{*}_{p(-N/2)} = \frac{a_p (-1)^{(N/2-1)}}{j 2^{N}}\, e^{-j\pi\sigma_t/T}, \tag{C.8}$$

and the constants $a_p$ and $\sigma_t$ are defined in (A.3) and (C.2), respectively. Substituting (C.8)

into (C.7) results in

$$\begin{aligned} \sin(\pi(Nt - \sigma_t)/T) &= \frac{(-1)^{(N/2-1)}}{2^{N-1}} \sum_{p=0}^{N-1} \sin(\pi(Nt_p - \sigma_t)/T)\, \frac{a_p}{j2}\left(e^{j\pi(Nt - \sigma_t)/T} - e^{-j\pi(Nt - \sigma_t)/T}\right) \\ &= \sin(\pi(Nt - \sigma_t)/T)\, \frac{(-1)^{(N/2-1)}}{2^{N-1}} \sum_{p=0}^{N-1} a_p \sin(\pi(Nt_p - \sigma_t)/T). \end{aligned} \tag{C.9}$$

We observe that our problem reduces to the proof of the following trigonometric identity:

$$\sum_{p=0}^{N-1} a_p \sin(\pi(Nt_p - \sigma_t)/T) = (-1)^{(N/2-1)}\, 2^{N-1}. \tag{C.10}$$

Substituting the constants $a_p$ and $\sigma_t$ into (C.10), we have

$$\sum_{p=0}^{N-1} \frac{\sin\!\left(\sum_{q=0,\,q\ne p}^{N-1} \pi(t_p - t_q)/T\right)}{\prod_{q=0,\,q\ne p}^{N-1} \sin(\pi(t_p - t_q)/T)} = (-1)^{(N/2-1)}\, 2^{N-1}. \tag{C.11}$$

Applying the trigonometric identities for the sine and cosine of a sum,

$$\sin(\alpha + \beta) = \sin\alpha\cos\beta + \cos\alpha\sin\beta, \qquad \cos(\alpha + \beta) = \cos\alpha\cos\beta - \sin\alpha\sin\beta, \tag{C.12}$$

to the numerator of (C.11) for a given value $p$ (separating out the term corresponding to an arbitrary index $i \ne p$) results in

$$\frac{\cos\!\left(\sum_{q=0,\,q\ne p,\,q\ne i}^{N-1} \pi(t_p - t_q)/T\right)}{\prod_{q=0,\,q\ne p,\,q\ne i}^{N-1} \sin(\pi(t_p - t_q)/T)} + \cot(\pi(t_p - t_i)/T)\, \frac{\sin\!\left(\sum_{q=0,\,q\ne p,\,q\ne i}^{N-1} \pi(t_p - t_q)/T\right)}{\prod_{q=0,\,q\ne p,\,q\ne i}^{N-1} \sin(\pi(t_p - t_q)/T)}. \tag{C.13}$$

Applying (C.12) recursively to (C.13) and summing over $p$ converts (C.11) into the form

$$(-1)^{(N/2-1)} \sum_{p=0}^{N-1} \Bigg( 1 - \sum_{\substack{i,j \ne p \\ i<j}} \cot(\pi(t_p - t_i)/T)\cot(\pi(t_p - t_j)/T) + \sum_{\substack{i,j,l,m \ne p \\ i<j<l<m}} \cot(\pi(t_p - t_i)/T)\cdots\cot(\pi(t_p - t_m)/T) - \cdots \Bigg). \tag{C.14}$$

The last expression consists of products of an even number of cotangent functions, summed over all possible combinations of their arguments, i.e., over all possible combinations of $(t_p - t_q)$ for $p \ne q$. The number of combinations of the products of $n$

(even) cotangents in (C.14) is $N\binom{N-1}{n}$. Combining these products into groups of $m = n + 1$ sums allows the use of the following equation, taken from the standard tables of finite trigonometric series [69, p. 647]:

$$\sum_{p=1}^{m} \prod_{\substack{q=1 \\ q\ne p}}^{m} \cot(\pi(t_p - t_q)/T) = \sin\!\left(\frac{m\pi}{2}\right). \tag{C.15}$$

For $n$ even ($m$ odd), expression (C.15) reduces to

$$\sum_{p=1}^{m} \prod_{\substack{q=1 \\ q\ne p}}^{m} \cot(\pi(t_p - t_q)/T) = (-1)^{(m-1)/2}. \tag{C.16}$$

Grouping the products of cotangents of (C.14) in this way and using (C.16), we can evaluate the expression (C.14) as

$$(-1)^{(N/2-1)} \sum_{k=0}^{N/2-1} N \binom{N-1}{2k} \frac{1}{2k+1} = (-1)^{(N/2-1)} \sum_{k=0}^{N/2-1} \frac{N(N-1)!}{(N-1-2k)!\,(2k)!\,(2k+1)} = (-1)^{(N/2-1)} \sum_{k=0}^{N/2-1} \frac{N!}{(N-(2k+1))!\,(2k+1)!} = (-1)^{(N/2-1)} \sum_{k=0}^{N/2-1} \binom{N}{2k+1} = (-1)^{(N/2-1)}\, 2^{N-1}, \tag{C.17}$$

where the last equality results directly from [69, p. 607] (the binomial coefficients with odd lower index sum to $2^{N-1}$). The expression (C.17) establishes the trigonometric identity (C.10). Substituting (C.10) into (C.9) completes the proof of Theorem 3.2.
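Both identities used in this step can be spot-checked numerically. The short sketch below (illustrative only, not thesis code; T = 1, m = 5, N = 8 and the random sampling points are arbitrary assumptions) evaluates the tabulated sum (C.16) for an odd m and the resulting identity (C.10)/(C.11) for an even N.

```python
import numpy as np

rng = np.random.default_rng(4)
T = 1.0

# (C.16): for m odd, sum_p prod_{q != p} cot(pi (t_p - t_q)/T) = (-1)^((m-1)/2).
m = 5
tp = rng.uniform(0, T, m)
lhs = sum(np.prod([1.0 / np.tan(np.pi * (tp[p] - tp[q]) / T)
                   for q in range(m) if q != p])
          for p in range(m))
print(lhs, (-1) ** ((m - 1) // 2))

# (C.10)/(C.11): for N even,
#   sum_p sin(sum_{q != p} pi (t_p - t_q)/T) / prod_{q != p} sin(pi (t_p - t_q)/T)
#   equals (-1)^(N/2 - 1) * 2^(N - 1).
N = 8
tp = rng.uniform(0, T, N)
lhs = sum(np.sin(np.pi / T * sum(tp[p] - tp[q] for q in range(N) if q != p))
          / np.prod([np.sin(np.pi * (tp[p] - tp[q]) / T) for q in range(N) if q != p])
          for p in range(N))
print(lhs, (-1) ** (N // 2 - 1) * 2 ** (N - 1))
```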

Appendix D

Shepp-Logan head phantom

The Shepp-Logan phantom is an analytic phantom used in tomographic applications due to the simple expression of its Fourier transform [91]. The phantom consists of ellipses resembling the features of the human brain and the cranial contour. An example of the Shepp-Logan phantom is presented in Fig. D-1.

Figure D-1: Shepp-Logan head phantom in the space domain.

The 2D Fourier transform in Cartesian coordinates of an ellipse with intensity $\rho$, radii $A, B$ and orientation $\alpha$, with its center located at $(x_0, y_0)$, is given by

$$S(u,v) = \rho\, e^{-i(ux_0 + vy_0)}\, \frac{J_1\!\left(B\sqrt{(uAB^{-1})^2 + v^2}\right)}{\sqrt{(uAB^{-1})^2 + v^2}}, \tag{D.1}$$

where $J_1$ denotes the first-order Bessel function of the first kind.
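For numerical experiments with the phantom, a convenient (though purely discrete) alternative to evaluating the analytic transform ellipse by ellipse is the rasterized implementation shipped with scikit-image. The sketch below assumes scikit-image (version 0.16 or later) and NumPy are installed; it simply generates the phantom and a discrete approximation of its 2D Fourier transform.

```python
import numpy as np
from skimage.data import shepp_logan_phantom

phantom = shepp_logan_phantom()                   # 400 x 400 image, values in [0, 1]
spectrum = np.fft.fftshift(np.fft.fft2(phantom))  # discrete approximation of the 2D Fourier transform
print(phantom.shape, float(np.abs(spectrum).max()))
```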
