A study of wave propagation and limited-diffraction beams for medical imaging



A Dissertation entitled

A Study of Wave Propagation and Limited-Diffraction Beams for Medical Imaging

by
Jiqi Cheng

Submitted as partial fulfillment of the requirements for the Doctor of Philosophy degree in Engineering

Adviser: Dr. Jian-yu Lu

Graduate School
The University of Toledo
December 2005

The University of Toledo
College of Engineering

I HEREBY RECOMMEND THAT THE DISSERTATION PREPARED UNDER MY SUPERVISION BY Jiqi Cheng ENTITLED A Study of Wave Propagation and Limited-Diffraction Beams for Medical Imaging BE ACCEPTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY IN ENGINEERING

Dissertation Advisor: Dr. Jian-yu Lu

Recommendation concurred by (Committee on Final Examination):
Dr. Ozan Akkus
Dr. Brent Cameron
Dr. Anthony Johnson

Dr. Vikram J. Kapoor
Dean, College of Engineering

An Abstract of

A Study of Wave Propagation and Limited-Diffraction Beams for Medical Imaging

Jiqi Cheng

Submitted as partial fulfillment of the requirements for the Doctor of Philosophy in Engineering

The University of Toledo
December 2005

In this dissertation, wave propagation and limited-diffraction beams are studied further to gain a deep understanding of their principles and applications in ultrasonic imaging. With the concept of the angular spectrum, the ultrasound fields generated by array transducers are expressed as a summation of limited-diffraction beams. A new spatial impulse response method for rectangular arrays, based on simple algebraic operations instead of complex geometrical considerations, is derived. Numerical and experimental results show that the method developed has high accuracy and efficiency. Building on previous studies, a general theory of Fourier-based imaging is developed from diffraction tomography theory, which solves the inhomogeneous Helmholtz equation under the Born approximation. The object function defined in this theory is more naturally linked to the physical properties of the object, such as the relative changes of local compressibility and density. With this treatment, the limited-diffraction array beam and broadband steered plane wave transmissions studied previously are included, in addition to other previously studied imaging methods. A relationship between the Fourier transform of the echo data and that of the object function

is established. The theory is developed directly in 3-D. Computer simulations and imaging experiments on wire targets, tissue-mimicking phantoms, and the kidney and heart in vivo are carried out to verify the theory using the high frame rate imaging system. To study various methods of wave propagation, limited-diffraction beams, and high frame rate imaging, logic and programs are designed and implemented for a general-purpose high-frame-rate ultrasound imaging system. The system has 128 independent transmit and receive channels, each with a high-speed, high-precision A/D and D/A and large storage. The system is flexible for various ultrasound experiments.

DEDICATION

This work is dedicated to my family!

ACKNOWLEDGEMENTS

I want to express my gratitude to my advisor, Professor Jian-yu Lu, for his patience, encouragement, support, guidance, and insight during the course of my dissertation work in the ultrasound laboratory. I am grateful to my dissertation committee members, Dr. Ozan Akkus, Dr. Brent Cameron, Dr. Anthony Johnson, Dr. Vikram J. Kapoor, and Dr. Frank J. Kollarits, for their support and guidance. I would like to express my sincere appreciation to Dr. Paul D. Fox and Dr. Hu Peng for their valuable suggestions and helpful conversations. My hearty thanks are also extended to my fellow students in the ultrasound laboratory, Anjun Liu, Jing Wang, Cangmeng Cai, and Zhaohui Wang, for their encouragement and support. Thanks are due to my classmates, Yanfang Li, Mingjie Tong, Ye Yuan, and Gang Liu, for their help and friendship. I would like to thank the National Institutes of Health, the Whitaker Foundation, and the Department of Bioengineering for their financial support through these years. I am particularly grateful for the unflagging patience, unconditional love, and continuous support of my family members: my wife, my parents, my grandma, my sisters, and my brothers-in-law.

TABLE OF CONTENTS

ABSTRACT
DEDICATION
ACKNOWLEDGEMENTS
TABLE OF CONTENTS
LIST OF FIGURES
CHAPTER I: INTRODUCTION
    1.1 Background
        1.1.1 What is ultrasound
        1.1.2 Ultrasound wave propagation
        1.1.3 Limited-diffraction beams
        1.1.4 Fourier domain image formation
    1.2 Motivation and Significance
    1.3 Research Objectives
    1.4 Dissertation Organization
CHAPTER II: WAVE PROPAGATION
    2.1 Introduction

    2.2 Angular spectrum based field calculation
        The concept of Angular spectrum
        Field calculation for rectangular arrays
            Theory
            Numerical Examples
        Field calculation for annular arrays
            Theory
            Numerical Examples
    Spatial impulse response based field calculation
        Theory
        Numerical Examples
    The relationship between different field calculation methods
    Summary
CHAPTER III: LIMITED DIFFRACTION BEAMS AND SIDELOBE REDUCTION
    Introduction
    Side Lobes of Limited diffraction beams
        Coded excitation
        Optical Coherence Tomography
    Side lobe reduction with code
        The principle of side lobe reduction
        Coded excitation
        Simulation and results
        Discussion and conclusion
    Low-sidelobe limited-diffraction optical coherence tomography
        Theory

        3.3.2 Optical Design
        Simulation and results
        Discussion and conclusion
    Summary
CHAPTER IV: FOURIER BASED IMAGING METHOD WITH LIMITED-DIFFRACTION ARRAY BEAMS AND STEERED PLANE WAVES
    Introduction
    Theory
        Wave equation for inhomogeneous media
        Non-steered plane wave
        Limited-diffraction array beams and steered plane waves
        Two special cases
    Methods
        Inverse mapping procedure
        Reconstructive procedure
    Simulation
        Transducer definition
        Objects
        Simulation Results
    In vitro and in vivo experiments
        Experimental system
        Experiment in a water tank
        Experiment on tissue-mimicking phantom
        In vivo Experiments
    Discussion

        4.6.1 Sampling constraints
        Aperture effects
        Fourier domain coverage and image resolution
    Summary
CHAPTER V: SYSTEM AND LOGIC DESIGN OF HFR IMAGING SYSTEM
    Introduction
    Main Board
        Circuit Design of Main Board
        Working Sequence of Main Board
        Function Requirements of Control Logic for Main Board
        Function Partition of Control Logic for Main Board
    Channel Board
        Circuit Design of Channel Board
        Function Requirements of Control Logic for Channel Board
        Function Partition of Control Logic for Channel Board
    USB Board
        USB Board Circuit Design
        Function Requirements of USB Board FPGA
        Function Partition of USB Board FPGA
        FX2 program design
        IMT windows application
    Data transfer Sub-system Design
        Client Software Development
        FX2 Development
        Operation of the IMT Program

        5.5.5 Structure of the IMT program
    Summary
CHAPTER VI: SUMMARY
REFERENCE

LIST OF FIGURES

Figure 2.1 The geometry for the angular spectrum

Figure 2.2 The diagram of the 2D transducer array

Figure 2.3 Simulated fields of CW Bessel beams ((a) and (b)) and focused Gaussian beams ((c) and (d)) that were produced with 2.5 MHz, 50 mm x 50 mm 2D array transducers of 250x250 elements ((a) and (c)) and 50x50 elements ((b) and (d)). Stepwise aperture weightings are assumed for both the Bessel and focused Gaussian beams. The scaling parameter α for the Bessel weighting is m⁻¹. The focal length and full-width-at-half-maximum (FWHM) of the focused Gaussian weighting are 100 mm and 25 mm, respectively

Figure 2.4 Lateral line plots of the CW Bessel beams in Figure 2.3 at axial distance z=100 mm away from the surface of the 2D array transducer of (a) 250x250 and (b) 50x50 elements, respectively. Solid lines are the simulation results with the new method, while dashed lines are the simulation results obtained with the Rayleigh-Sommerfeld diffraction formula. The parameters of the Bessel beams were the same as those in Figure 2.3. Dotted lines are the experimental results obtained with the synthetic array experiment

Figure 2.5 This figure is the same as Figure 2.4, except that it is for the CW focused Gaussian beams. The focal length and the FWHM of the focused Gaussian beams are the same as those in Figure 2.3

Figure 2.6 Simulated transverse fields of a CW array beam produced with the 2D array transducer of 250x250 elements at four axial distances, (a) z=50 mm, (b) z=100 mm, (c) z=150 mm, and (d) z=216 mm, away from the transducer surface. Stepwise weighting was assumed for the array beam. The scaling parameters were assumed to be k_x1 = 1000 m⁻¹ and k_y1 = 500 m⁻¹ along the x and y axes, respectively. The parameters of the 2D array were assumed to be the same as those in Figure 2.3

Figure 2.7 Lateral line plots of the CW array beam in Figure 2.6 at two axial distances, z=100 mm ((a) and (c)) and z=216 mm ((b) and (d)). At each distance, line plots are obtained along both the x ((a) and (b)) and y ((c) and (d)) axes. Solid lines represent the simulation results of the new method, while dotted lines are results from the synthetic array experiment. The parameters of the beams were assumed to be the same as those in Figure 2.6

Figure 2.8 Simulated fields of focused Gaussian pulses with a 2D array transducer of 50x50 elements at four axial distances, (a) z=50 mm, (b) z=100 mm, (c) z=150 mm, and (d) z=216 mm, away from the transducer surface. The transmitting transfer function of the 2D array was assumed to be a Blackman window function peaked at the center frequency of 2.5 MHz, and the -6 dB bandwidth of the array is about 81% of the center frequency. The focal length and the FWHM are the same as the CW focused Gaussian beam in Figure 2.3

Figure 2.9 Lateral plots of the maximum sidelobes of the focused Gaussian pulses in Figure 2.8 at two axial distances, (a) z=100 mm and (b) z=216 mm. Solid lines are the simulation results of the new method, while dotted lines are results obtained with the synthetic array experiment

Figure 2.10 The diagram of the annular transducer array

Figure 2.11 Quantization profile for the annular array

Figure 2.12 Field for a quantized CW Bessel beam. (a) Field calculated by Fourier-Bessel theory. (b) Experimental field. (c) Field calculated by the Rayleigh-Sommerfeld formula without the Fresnel approximation. (d) Field calculated by the Rayleigh-Sommerfeld formula with the Fresnel approximation

Figure 2.13 Lateral line plots of the CW array beam at axial distance z=100 mm. Solid lines represent the experimental results; dotted lines are simulation results of the Fourier-Bessel method; short-dash lines represent the simulation results of the Rayleigh-Sommerfeld method with quantized excitation; dash lines show the results of the Rayleigh-Sommerfeld method with exact Bessel excitation

Figure 2.14 Simulated fields of a zeroth-order band-limited X wave with the Fourier-Bessel method at distances (a) z=85 mm, (b) z=170 mm, (c) z=255 mm, and (d) z=340 mm, respectively, away from the surface of a 50-mm-diameter annular array. A stepwise X wave aperture weighting and a broadband pulse drive of the array were assumed. The transmitting transfer function of the array was assumed to be the Blackman window function peaked at 2.5 MHz and with a -6 dB bandwidth of around 0.81 f_c. Parameters α_0 and ζ are 0.05 mm and 4, respectively

Figure 2.15 The images are the same as those in Figure 2.14, except that they are produced with the Rayleigh-Sommerfeld diffraction formula. The layout and the parameters used in the simulation are the same as those in Figure 2.14

Figure 2.16 Experimental results that correspond to the simulations in Figures 2.14 and 2.15. A 10-element, 50-mm-diameter, 2.5 MHz center frequency, PZT ceramic/polymer composite J_0 Bessel transducer was used

Figure 2.17 Simulated fields of a focused Gaussian pulse with the Fourier-Bessel method at distances (a) z=50 mm, (b) z=120 mm, (c) z=150 mm, and (d) z=216 mm. A stepwise Gaussian aperture shading and a stepwise phase were assumed. The broadband pulse and transmitting transfer function of the array are the same as those for the X wave in Figure 2.14. The FWHM of the Gaussian shading was 25 mm

Figure 2.18 The images are the same as those in Figure 2.17, except that they are produced with the Rayleigh-Sommerfeld diffraction formula. The layout and the parameters used in the simulation are the same as those in Figure 2.17

Figure 2.19 Experimental results that correspond to the simulations in Figures 2.17 and 2.18, except that in the experiment the phase applied by a lens was continuous. The same transducer as for Figure 2.16 was used

Figure 2.20 Geometry of the transducer and the field point. (a) Spherical interaction in the original 3-D rectangular coordinates. (b) Spherical interaction in the shifted 2-D coordinates

Figure 2.21 The geometry of the array transducer and field point

Figure 2.22 Coordinate transform for a convex/concave array. (a) The original coordinates; (b) rotated coordinates

Figure 2.23 Spatial impulse response at two distances: (a) z=60 mm, (b) z=120 mm

Figure 2.24 Plots of the spatial impulse response at four points: (a) (0,0,60), (b) (10,0,60), (c) (0,0,120), (d) (10,0,120). The unit is mm

Figure 2.25 Plots of transient fields at four points: (a) (0,0,60), (b) (10,0,60), (c) (0,0,120), (d) (10,0,120). The unit is mm

Figure 2.26 Simulated and experimental echo signals: (a)-(d) are simulated results for 4 points at the center of the transducer at distances of 30 mm, 50 mm, 70 mm, and 90 mm away from the transducer surface, respectively. (e)-(h) are experimental results in water for the same points as in (a)-(d). A V2 phased array from Acuson was used for the experiment. The vertical axis is the index of the elements of the transducer, and the horizontal axis is time

Figure 2.27 Plots of echo signals at element 64 as in Figure 2.26: (a) z=30 mm, (b) z=50 mm, (c) z=70 mm, (d) z=90 mm

Figure 2.28 Geometry of the transducer and field point

Figure 3.1 Squares of the zeroth-order and second-order Bessel functions of the first kind and the absolute value of their subtraction

Figure 3.2 Diagram of the excitation signals

Figure 3.3 Diagram of the scattering phantom. The diameters of cylinders 1, 2, 3, and 4 are 10 mm, 10 mm, 6 mm, and 6 mm, respectively

Figure 3.4 Images constructed for the scattering phantom in Figure 3.3

Figure 3.5 Lateral plots at the center of the images in Figure 3.4

Figure 3.6 The block diagram of OCT. SLD: superluminescent diode, BS: beam splitter, LDB T/R: limited diffraction beam transmitter/receiver, PD: photodiode, HD: heterodyne detection, A/D: analog/digital converter, PC: personal computer

Figure 3.7 Sidelobe reduction for ideal Bessel beams

Figure 3.8 The diagram of the limited diffraction transmitter and receiver unit (unit is mm)

Figure 3.9 Diagram of the first mask

Figure 3.10 Field plots at lens D

Figure 3.11 Diagram of the second mask

Figure 3.12 Field plots at different axial distances

Figure 3.13 Diagram of the phantom used for simulation

Figure 3.14 Simulated images before sidelobe reduction

Figure 3.15 Simulated images after sidelobe reduction

Figure 4.1 Geometry of the transducer and the object

Figure 4.2 Coordinate transform for array beams. (a) The boundaries of the echo spectrum; (b) the boundaries of the object spectrum

Figure 4.3 Coordinate transform for plane waves. (a) The boundaries of the echo spectrum; (b) the boundaries of the object spectrum

Figure 4.4 Reconstructive procedure for 2-D imaging with a 1-D array. (a) Transmission with a steered plane wave; (b) received echo data in the temporal-spatial domain; (c) echo data in the temporal-spatial frequency domain; (d) object scattering function in the spatial frequency domain

Figure 4.5 Diagram of the ATS539 phantom

Figure 4.6 Images constructed from simulated echoes for the Fourier method with limited-diffraction-beam transmissions and the Delay-and-Sum method with focused transmissions. (a) One transmission for the Fourier method, with k_xT = 0; (b) 11 transmissions for the Fourier method, with Δk_xT = π/(5Δx_0); (c) 91 transmissions for the Fourier method, with Δk_xT = π/(45Δx_0); (d) 263 transmissions with a focal depth of 70 mm for the Delay-and-Sum method, with δ = c/f_c/2/D. The speed of sound c is assumed to be 1540 m/s, and the center frequency and aperture of the transducer are 3.5 MHz and 40.96 mm, respectively. The pitch of the elements Δx_0 is 0.32 mm

Figure 4.7 Images constructed from simulated echoes for the Fourier method with plane-wave transmissions and the Delay-and-Sum method with focused transmissions. (a) One transmission for the Fourier method, with θ = 0 degrees; (b) 11 transmissions for the Fourier method, with Δθ = 9 degrees; (c) 91 transmissions for the Fourier method, with Δθ = 1 degree; (d) 263 transmissions with a focal depth of 70 mm for the Delay-and-Sum method, with δ = c/f_c/2/D. The speed of sound c is assumed to be 1540 m/s, and the central frequency and aperture of the transducer are 3.5 MHz and 40.96 mm, respectively

Figure 4.8 Images constructed from simulated echoes for the Fourier method with limited-diffraction-beam and plane-wave transmissions and the Delay-and-Sum method with focused transmissions with multiple focal depths. (a) 91 transmissions for the Fourier method, with Δk_xT = π/(45Δx_0); (b) 91 transmissions for the Fourier method, with Δθ = 1 degree; (c) 263 transmissions with a focal depth of 70 mm for the Delay-and-Sum method, with δ = c/f_c/2/D; (d) transmissions with 6 focal depths of 20 mm, 40 mm, 60 mm, 80 mm, 100 mm, and 120 mm, respectively, for the Delay-and-Sum method. The speed of sound c is assumed to be 1540 m/s, and the center frequency and aperture of the transducer are 3.5 MHz and 40.96 mm, respectively. The pitch of the transducer Δx_0 is 0.32 mm

Figure 4.9 Images constructed from experimental data of wire targets in water for the Fourier method with limited-diffraction-beam transmissions and the Delay-and-Sum method with focused transmissions. (a) One transmission for the Fourier method, with k_xT = 0; (b) 11 transmissions for the Fourier method, with Δk_xT = π/(5Δx_0); (c) 19 transmissions for the Fourier method, with Δk_xT = π/(9Δx_0); (d) 91 transmissions for the Fourier method, with Δk_xT = π/(45Δx_0); (e) 274 transmissions with a focal depth of 70 mm for the Delay-and-Sum method, with δ = c/f_c/2/D; (f) transmissions with 6 focal depths of 20 mm, 40 mm, 60 mm, 80 mm, 100 mm, and 120 mm, respectively, for the Delay-and-Sum method. The speed of sound c is m/s, and the center frequency and aperture of the transducer are 3.5 MHz and 40.96 mm, respectively. The pitch of the transducer Δx_0 is 0.32 mm

Figure 4.10 Images constructed from experimental data of wire targets in water for the Fourier method with plane-wave transmissions and the Delay-and-Sum method with focused transmissions. (a) One transmission for the Fourier method, with θ = 0 degrees; (b) 11 transmissions for the Fourier method, with Δθ = 9 degrees; (c) 19 transmissions for the Fourier method, with Δθ = 5 degrees; (d) 91 transmissions for the Fourier method, with Δθ = 1 degree; (e) 274 transmissions with a focal depth of 70 mm for the Delay-and-Sum method, with δ = c/f_c/2/D; (f) transmissions with 6 focal depths of 20 mm, 40 mm, 60 mm, 80 mm, 100 mm, and 120 mm, respectively, for the Delay-and-Sum method. The speed of sound c is m/s, and the central frequency and aperture of the transducer are 3.5 MHz and 40.96 mm, respectively

Figure 4.11 Images constructed from experimental data of wire targets in the ATS539 phantom for the Fourier method with limited-diffraction-beam transmissions and the Delay-and-Sum method with focused transmissions. (a) One transmission for the Fourier method, with k_xT = 0; (b) 11 transmissions for the Fourier method, with Δk_xT = π/(5Δx_0); (c) 91 transmissions for the Fourier method, with Δk_xT = π/(45Δx_0); (d) 279 transmissions with a focal depth of 70 mm for the Delay-and-Sum method, with δ = c/f_c/2/D. The speed of sound c is 1450 m/s, and the center frequency and aperture of the transducer are 3.5 MHz and 40.96 mm, respectively. The pitch of the transducer Δx_0 is 0.32 mm

Figure 4.12 Images constructed from experimental data of wire targets in the ATS539 phantom for the Fourier method with plane-wave transmissions and the Delay-and-Sum method with focused transmissions. (a) One transmission for the Fourier method, with θ = 0 degrees; (b) 11 transmissions for the Fourier method, with Δθ = 9 degrees; (c) 91 transmissions for the Fourier method, with Δθ = 1 degree; (d) 279 transmissions with a focal depth of 70 mm for the Delay-and-Sum method, with δ = c/f_c/2/D. The speed of sound c is 1450 m/s, and the center frequency and aperture of the transducer are 3.5 MHz and 40.96 mm, respectively

Figure 4.13 Images constructed from experimental data of cyst targets in the ATS539 phantom for the Fourier method with limited-diffraction-beam transmissions and the Delay-and-Sum method with focused transmissions. (a) One transmission for the Fourier method, with k_xT = 0; (b) 11 transmissions for the Fourier method, with Δk_xT = π/(5Δx_0); (c) 91 transmissions for the Fourier method, with Δk_xT = π/(45Δx_0); (d) 279 transmissions with a focal depth of 70 mm for the Delay-and-Sum method, with δ = c/f_c/2/D. The speed of sound c is 1450 m/s, and the center frequency and aperture of the transducer are 3.5 MHz and 40.96 mm, respectively. The pitch of the transducer Δx_0 is 0.32 mm

Figure 4.14 Images constructed from experimental data of cyst targets in the ATS539 phantom for the Fourier method with plane-wave transmissions and the Delay-and-Sum method with focused transmissions. (a) One transmission for the Fourier method, with θ = 0 degrees; (b) 11 transmissions for the Fourier method, with Δθ = 9 degrees; (c) 91 transmissions for the Fourier method, with Δθ = 1 degree; (d) 279 transmissions with a focal depth of 70 mm for the Delay-and-Sum method, with δ = c/f_c/2/D. The speed of sound c is 1450 m/s, and the center frequency and aperture of the transducer are 3.5 MHz and 40.96 mm, respectively

Figure 4.15 Images constructed from in vivo experimental data of the right kidney of a volunteer for the Fourier method with plane-wave transmissions and the Delay-and-Sum method with focused transmissions. (a) 91 transmissions for the Fourier method, with Δθ = 1 degree; (b) 88 transmissions with a focal depth of 70 mm for the Delay-and-Sum method, with δ = c/f_c/2/D. The speed of sound c is assumed to be 1540 m/s, and the center frequency and aperture of the transducer are 2.5 MHz and 19.20 mm, respectively

Figure 4.16 Images constructed from in vivo experimental data of the heart of a volunteer for the Fourier method with plane-wave transmissions and the Delay-and-Sum method with focused transmissions. (a) 11 transmissions for the Fourier method, with Δθ = 9 degrees; (b) 19 transmissions for the Fourier method, with Δθ = 5 degrees; (c) 91 transmissions for the Fourier method, with Δθ = 1 degree; (d) 88 transmissions with a focal depth of 70 mm for the Delay-and-Sum method, with δ = c/f_c/2/D. The speed of sound c is assumed to be 1540 m/s, and the center frequency and aperture of the transducer are 2.5 MHz and 19.20 mm, respectively

Figure 5.1 The system diagram of the HFR imaging system

Figure 5.2 Implementation of data transfer through USB

Figure 5.3 The GUI interface of the IMT program

Figure 5.4 File structure of the IMT program

CHAPTER I: INTRODUCTION

In this chapter, the background of the dissertation work is introduced first. The background includes the concepts, merits, and principles of ultrasound imaging, ultrasound wave propagation, limited-diffraction beams, and Fourier domain image formation. Then the motivation, significance, and research objectives of this work are outlined. Finally, the organization of the dissertation is presented.

1.1 Background

1.1.1 What is ultrasound

Ultrasound is a high-pitched sound wave with a frequency higher than 20,000 Hz, above the upper limit of the human auditory system. Diagnostic ultrasonic imaging, also called sonography, is a common medical imaging modality. It exploits the principle of sound wave propagation to probe the internal structure of the human body. In principle, this technique is similar to the echolocation used by some animals, such as bats, and to the SONAR used by navy vessels. The difference is that, in medical ultrasound, sound of a higher frequency (around 1-20 MHz) is used. Since ultrasound at diagnostic energy levels has no radiation or ionization effects on the human body as X-rays do, it is considered safe. Medical applications over the past five decades have also shown that ultrasound does no harm to the human body if properly used.

Compared to other medical imaging modalities, ultrasound has the following outstanding merits. (1) Ultrasound is safe. This non-harming nature makes ultrasound

the number-one imaging choice in obstetrics; (2) Ultrasound imaging is a real-time modality. This makes it possible for doctors to see what is happening inside the human body in real time. With it, a cardiologist can see how a heart beats; (3) Doppler ultrasound can provide quantitative information about blood flow; (4) An ultrasound machine is relatively inexpensive. This makes ultrasound highly available and highly accessible, even right in a doctor's office; (5) Ultrasound is highly portable. A miniaturized ultrasound machine can easily fit into a doctor's pocket. This makes it available even on the battlefield; (6) 3-D ultrasound holds the potential for real-time 3-D imaging.

Generally speaking, five steps are involved in obtaining a 2-D ultrasound image. First, the ultrasound machine transmits ultrasound pulses into the human body using a probe; second, the ultrasound waves propagate into the human body and hit a boundary between tissues with different acoustical impedances. Some of the acoustical energy is scattered or reflected back to the probe, while some travels on farther until it reaches another boundary and gets reflected, arriving at the probe later; third, the reflected waves are picked up by the probe and converted to analog electrical signals; fourth, these analog signals are processed by the machine based on the information of their location and arrival time, and a 2-D image is reconstructed; finally, the machine displays the reconstructed image representing the distances and intensities of the echoes, which corresponds directly to the human anatomy. To gain deep insight into the principles of ultrasound imaging, one needs to understand how ultrasound propagates, how ultrasound is generated, and how echo data are processed to form an image.
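The depth mapping in steps four and five follows from the round-trip relation d = ct/2. The snippet below is a minimal illustrative sketch (not from the dissertation) of a single simulated A-line, assuming a constant speed of sound of 1540 m/s, a Gaussian-modulated transmit pulse, and two hypothetical point reflectors; all numerical values are placeholders.

```python
import numpy as np
from scipy.signal import hilbert, find_peaks

c = 1540.0              # assumed speed of sound in tissue (m/s)
fc, fs = 3.5e6, 50e6    # illustrative center and sampling frequencies (Hz)
t = np.arange(0, 2 * 0.15 / c, 1 / fs)   # time axis covering a 150 mm round-trip depth

def echo(t0, amp):
    """A Gaussian-modulated pulse arriving at time t0 (a generic received echo)."""
    return amp * np.exp(-((t - t0) * 2 * fc) ** 2) * np.cos(2 * np.pi * fc * (t - t0))

# Two hypothetical reflectors at 40 mm and 90 mm with different reflectivities.
aline = echo(2 * 0.040 / c, 1.0) + echo(2 * 0.090 / c, 0.5)

envelope = np.abs(hilbert(aline))        # detected echo magnitude
depth_mm = c * t / 2 * 1e3               # map arrival time to depth: d = c*t/2
peaks, _ = find_peaks(envelope, height=0.3)
print("echo peaks near", np.round(depth_mm[peaks], 1), "mm")
```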

1.1.2 Ultrasound wave propagation

The phenomenon of wave propagation has long been a common interest of electrodynamics, physical optics, and acoustics. Although these disciplines study waves in different frequency ranges, they share a common wave equation for scalar fields. In electrodynamics and physical optics, the classical diffraction theory [1-3] specifically deals with diffraction from a boundary or an aperture. This is the intriguing part of wave propagation phenomena, because here the boundary conditions determine how the wave is generated and propagates into space. The most elegant solution of the diffraction problem is the Rayleigh-Sommerfeld (RS) diffraction formula [1-9], which is mathematically self-consistent, unlike earlier solutions such as Kirchhoff theory [8] and Huygens's principle [10]. However, even for some very simple apertures, the exact expression for the field in space turns out to be hard to obtain, since a double integral in the RS formula has to be evaluated. A closed form is available without approximation only for points on the central axis [4, 8, 9]. In most cases, the near-field approximation, i.e., the Fresnel approximation [10], or the far-field approximation, i.e., the Fraunhofer approximation [10], is applied in optics. Under the Fraunhofer approximation, the far-field pattern and the diffracting aperture have a simple Fourier transform relationship.

Along with the mathematical solutions to the diffraction problem, a unique technique was also developed to decompose complicated wave patterns into simple plane waves. This technique is called the angular spectrum method [1-3, 6, 7, 10-26]. Back in 1919, H. Weyl provided a solution to express a spherical wave as an integral of plane waves. This solution is extremely important, since the spherical wave is a commonly used

Green's function for the wave equation under certain boundary conditions. This solution was later extended to the fields generated by multipole radiators [20, 26]. The angular spectrum method has a very simple physical explanation. With this method, a complicated field pattern is decomposed into a summation or integration of simple plane waves. There are two types of plane waves involved. One type is propagating waves, and the other type is evanescent waves. The propagating plane wave is also called a homogeneous wave, since its amplitude remains constant while propagating. On the other hand, the evanescent wave is called an inhomogeneous plane wave, as its amplitude decreases exponentially during propagation and dies out quickly. Normally the evanescent waves can be ignored. In this way, when the wave propagates to another distance, the field at the new location can be expressed as the superposition of propagating plane waves; after the phases of these plane waves are compensated, the field at the new location is known. Besides the physical interpretation, Sherman [15, 16], Montgomery [14], and Labor [13] also gave rigorous mathematical treatments of the validity of the angular spectrum method. Under some conditions, approximations can be made. Shono [22, 23] studied band-limited plane wave representations of diffracted fields, and Sherman [21] showed the asymptotic behaviors of angular spectrum representations. The angular spectrum method has many applications, but one of extreme importance is in optical holography [14, 17, 18, 24]. Based on angular spectrum descriptions of the scattered field, E. Wolf [17] provided the relationship between the detected scattered signal and the object, which later became the foundation of optical and acoustical diffraction tomography [27].
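The decomposition-and-phase-compensation procedure just described maps directly onto 2-D FFTs. The following is a minimal numerical sketch (an illustration, not taken from this dissertation) that propagates a monochromatic field from the source plane z = 0 to a parallel plane, assuming a uniform sampling grid, a circular piston source, and that evanescent components are simply discarded; the frequency, aperture, and grid spacing are arbitrary illustrative values.

```python
import numpy as np

c, f = 1540.0, 2.5e6            # assumed sound speed (m/s) and frequency (Hz)
k = 2 * np.pi * f / c           # wavenumber
n, dx = 256, 0.2e-3             # grid size and spacing (m), illustrative values

# Source plane at z = 0: a uniformly vibrating circular piston, 10 mm in diameter.
x = (np.arange(n) - n // 2) * dx
X, Y = np.meshgrid(x, x)
u0 = (np.hypot(X, Y) <= 5e-3).astype(complex)

def angular_spectrum_propagate(u, d):
    """Propagate the field u over a distance d by phase-shifting its plane-wave components."""
    kx = 2 * np.pi * np.fft.fftfreq(n, dx)
    KX, KY = np.meshgrid(kx, kx)
    kz2 = k ** 2 - KX ** 2 - KY ** 2
    kz = np.sqrt(np.maximum(kz2, 0.0))
    H = np.exp(1j * kz * d) * (kz2 > 0)      # propagating waves only; evanescent terms dropped
    return np.fft.ifft2(np.fft.fft2(u) * H)

u50 = angular_spectrum_propagate(u0, 50e-3)  # field on the plane z = 50 mm
print("peak field magnitude at z = 50 mm:", np.abs(u50).max())
```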

While the basic results from physical optics and electrodynamics can be directly applied to acoustics, the propagation of ultrasound has its own characteristics. One obvious difference is the frequency range. For ultrasound, the frequency is normally in the range of 20 kHz to 100 MHz, while the frequency of microwaves is often higher than 100 MHz and the frequency of light is much higher still. Second, the speed is also dramatically different. The sound speed in pure water at room temperature is about 1500 m/s, while the speed of light in free space is about 3×10^8 m/s. The huge differences in speed and frequency make it possible to measure the phase change directly for ultrasound, while it is practically impossible to do so for light. Third, ultrasound is a kind of mechanical vibration, which needs the support of an elastic medium to propagate. Ultrasound also has two different modes. In one mode, it is a longitudinal wave, where the vibration is parallel to the propagation direction. In the other mode, it is a transverse wave, where the oscillation is perpendicular to the propagation direction. In water, however, only the longitudinal mode is supported. By contrast, light and microwaves are transverse waves.

For ultrasound, most interest is within the near field, where the field has a very complex pattern. Closed-form solutions for the field in this range are rarely available. Thus one needs not only to care about theoretical solutions, but also to pay attention to the details of numerical evaluation and digital implementation. The time it takes to calculate the field becomes an important factor. In acoustics, the RS formula has been used to study the propagation problem for different configurations. Chertock [28], Copley [29, 30], and Schenck [31] gave solutions for the fields generated by different vibrating bodies, such as rigid spheres and cylinders, based on the integral solutions. In 1971, Zemanek

[32] began to describe the beam behavior within the near field of a vibrating piston using numerical evaluation. However, the proposed method was still based on integral solutions. The RS formula is a double integral over the vibrating surface, and numerical integration requires tremendous computation. In addition, when the distance is small, the kernel becomes highly oscillatory, and it becomes difficult for the numerical integration to converge. To reduce the computation, the angular spectrum method, which originated in physical optics and electrodynamics, gained more attention. Since the angular spectrum of a complex field is evaluated through a 2-D Fourier transform, it is natural to take advantage of the computational efficiency of the Fast Fourier Transform (FFT). With the FFT, the computation time is reduced dramatically compared to RS integration. In 1982, Stepanishen [33] used the angular spectrum technique to study forward and backward projection of acoustic fields with the FFT. The same year, Williams [34] also reformulated the RS formula into the angular spectrum form and applied the FFT to evaluate the fields generated by planar radiators. Later, Williams, Maynard, and Veronesi [35-37] systematically developed the near-field acoustic holography technique based on the angular spectrum and gave guidelines on the digital implementation of the angular spectrum method. Schafer [38-40] and Wagg [41] applied the technique to ultrasound transducer characterization and extrapolation of ultrasonic fields. Even in the most recent decade, this technique has still attracted many researchers [42-53]. Christopher [43, 44] used a discrete Hankel transform in place of the FFT to study the linear propagation of acoustic fields that are radially symmetric, and extended the theory to nonlinear propagation. Orofino [47, 48] and Wu [50-52] examined the details of the numerical implementation of the angular

spectrum method, and discussed the influences and consequences of sampling and the selection of parameters. Wu [53] also extended the technique to curved radiators by sampling the field on a plane close to the radiator, where the field was calculated by other techniques; after that, the angular spectrum method can be applied.

In addition to the angular spectrum method, another technique that evaluates the transient field directly in the time domain has also become popular. The method is called the spatial impulse response method, first explicitly proposed by Stepanishen in two papers [54, 55] published in 1971. With this method, the transient field is formulated as a convolution of the normal velocity function and the spatial impulse response function. In the original papers, a closed-form solution for circular pistons was given. However, the author used only far-field approximations for rectangular pistons, where a simple form was easily obtained. Lockwood [56] in 1973 provided the exact solution of the spatial impulse response for points whose projections are within the rectangular aperture, and the solution was completed by Emeterio [57] for any spatial point. Meanwhile, other researchers [58-71] further developed the method. Scarano [66] provided an alternative explanation of the spatial impulse response function; here, the impulse response was treated as a function of the spatial coordinates for a fixed time instant instead of a function of time at a fixed field location. Harris [60] expanded the impulse response method to planar pistons having an arbitrary vibration amplitude distribution. Stepanishen [67, 68] experimentally verified the validity of the spatial impulse response method. In 1991, Stepanishen [69] revealed the relationship between the impulse response and angular spectrum methods. Later, Jensen [62, 63] improved the numerical implementation of the impulse response method and developed an ultrasound simulation package, Field II, based on this method.
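As a concrete illustration of the convolution formulation — the transmitted pressure is proportional to the time derivative of the normal velocity v(t) convolved with the spatial impulse response h(r, t) — the sketch below approximates h for a rectangular piston by brute-force discretization of the Rayleigh integral. It is only an illustration under assumed parameters (piston size, field point, excitation pulse); it is not the closed-form solutions of Stepanishen, Lockwood, or Emeterio cited above, nor the Field II implementation.

```python
import numpy as np

c, fs = 1540.0, 100e6         # assumed sound speed (m/s) and sampling rate (Hz)
dt = 1 / fs

# Discretize a 10 mm x 10 mm rectangular piston into small sub-elements (illustrative).
dxy = 0.1e-3
xs = np.arange(-5e-3, 5e-3, dxy) + dxy / 2
X, Y = np.meshgrid(xs, xs)

def spatial_impulse_response(p, t):
    """Approximate h(r, t) by binning the arrival times of all sub-element contributions."""
    r = np.sqrt((X - p[0]) ** 2 + (Y - p[1]) ** 2 + p[2] ** 2)
    h = np.zeros_like(t)
    idx = np.round(r.ravel() / c / dt).astype(int)
    np.add.at(h, idx, dxy * dxy / (2 * np.pi * r.ravel()) / dt)  # binned Rayleigh-integral weight
    return h

t = np.arange(0, 80e-6, dt)
h = spatial_impulse_response((0.0, 0.0, 30e-3), t)   # on-axis field point 30 mm from the piston

# Transient pressure ~ d/dt [ v(t) * h(t) ], with a short Gaussian-modulated velocity pulse.
fc = 2.5e6
tv = np.arange(0, 2e-6, dt)
v = np.exp(-((tv - 1e-6) * 2 * fc) ** 2) * np.sin(2 * np.pi * fc * tv)
p_t = np.gradient(np.convolve(v, h)[: len(t)], dt)
print("peak |p| (arbitrary units): %.3e" % np.abs(p_t).max())
```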

In the past decade or so, some other alternative methods [72-88] have also been proposed. For example, Cavanagh [72] suggested decomposing the axially symmetric field produced by a circular piston into simple Gaussian beams, and the idea was further developed by Wen [73], Zhou [80], and Ding [87]. Berkhout and Wapenaar [75-77] developed a matrix formulation for ultrasound propagation and reflection in inhomogeneous media. Fan [74, 81] calculated the axially symmetric ultrasound field through an equivalent phased array method. Lee [78] divided the piston into small rectangular parts and applied the Fourier transform relationship between the far field and the aperture to speed up the calculation.

In Chapter II of this dissertation, the angular spectrum method will be applied to array transducers [89-93]. Instead of sampling the transducer into small grids, a series expansion is used to take advantage of the natural structure of each element. With the new treatment, the field generated by arrays has a straightforward relationship with the driving profile at each element. The strong feature of this method is that the field is expressed directly as a summation instead of an integration. The field can be calculated with high accuracy at reasonable computational cost. Also, a new exact solution of the spatial impulse response for a rectangular piston is derived. In previous solutions [54-56, 62, 94], either a far-field approximation was applied or complex geometrical considerations were required. This solution takes full advantage of the simplicity of trigonometric functions and set operations. After a proper coordinate transform, calculating the spatial impulse response needs only elementary operations with inverse sine and cosine functions. The numerical implementation of the new solution is very simple and efficient.

1.1.3 Limited-diffraction beams

Since Durnin [95, 96] first experimentally realized so-called diffraction-free Bessel beams, the unique invariant propagation property of Bessel beams has drawn attention from many investigators in different disciplines. Theoretically, these beams can propagate to an infinite distance without spreading, provided that they are generated with an infinite aperture and infinite energy. In practice, however, the aperture and energy of the source are always limited, and these beams can only be produced approximately and will spread out after a certain depth, although the depth of field of these beams is extremely large compared to that of conventional Gaussian beams. Since diffraction-free beams cannot be strictly realized, different terminologies were suggested to describe this reality, such as limited-diffraction beams or, more broadly, localized waves. The term limited-diffraction beams, coined by Lu and Greenleaf [97, 98], is widely accepted in the ultrasound community.

After Durnin optically realized Bessel beams, which had been theoretically studied by Stratton [3] earlier, many different solutions that possess a similar propagation-invariant property have been found. Such solutions include X waves [99-101], bowtie beams [102, 103], array beams [104], Bessel-X waves [105, 106], Mathieu beams [ ], parabolic beams [111], and helicon waves [112], among others [ ]. So much literature has been produced in the past two decades that it is almost impossible to cite it all here. Several review papers [ ] give detailed accounts of different aspects of these waves. Specifically, in 1994, Lu et al. [116] reviewed the applications of limited-diffraction beams in the context of ultrasound imaging. Later, Shaarawi and Besieris [119] focused on the superluminal propagation of pulsed localized
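For orientation, an ideal zeroth-order Bessel beam has the transverse profile J0(αr), which in principle does not change as the beam propagates. The sketch below (an illustration only, with an arbitrarily assumed scaling parameter α) evaluates that profile, reports the radius of the central lobe, and shows numerically that successive sidelobe rings carry roughly equal energy — the root of the contrast problem discussed later in this section.

```python
import numpy as np
from scipy.special import j0, jn_zeros

alpha = 1000.0                     # assumed scaling parameter of the Bessel beam (1/m)
r = np.linspace(0.0, 20e-3, 4001)  # radial coordinate (m)
profile = j0(alpha * r)            # transverse profile of an ideal J0 Bessel beam

# Central-lobe radius: the first zero of J0 occurs at alpha * r ≈ 2.4048.
print("central-lobe radius ≈ %.2f mm" % (jn_zeros(0, 1)[0] / alpha * 1e3))

# Energy carried by successive rings (between consecutive zeros) is roughly equal,
# which is why the sidelobes of a Bessel beam are comparatively strong.
ring_edges = np.concatenate(([0.0], jn_zeros(0, 5))) / alpha
for r0, r1 in zip(ring_edges[:-1], ring_edges[1:]):
    m = (r >= r0) & (r < r1)
    print("ring energy: %.3e" % np.trapz(profile[m] ** 2 * 2 * np.pi * r[m], r[m]))
```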

waves, which are superpositions of Bessel beams. The velocities of the peaks of these waves were carefully evaluated. The authors also reviewed the methods employed to generate such pulsed waves by applying different spatiotemporal excitation schemes and using devices such as annular slits, refractive axicons, and diffractive axicons. On the theory side, Salo et al. [118] provided a unified description in the spatial frequency domain for non-diffracting solutions to the wave equation. With this approach, not only the commonly considered Bessel beams, X waves, bowtie beams, and array beams can be easily described, but also the extended solutions based on Neumann and Hankel waves, namely Y waves, can be included. They also reviewed different approaches to obtain these non-diffracting solutions, such as the angular spectrum representation, Fourier representation, cylindrical representation, and spatial dimension extension representation. The internal relationships between the different approaches can be understood from the unified description. In addition, the physical properties of these non-diffracting waves were discussed in great detail. More recently, McGloin and Dholakia [121] reviewed the non-diffracting properties of Bessel beams, the experimental methods to generate such beams, and their applications in physics. The parameters that define the propagation of Bessel beams were outlined. The authors compared the advantages and disadvantages of different methods employed to generate Bessel beams. These methods exploit different devices such as annular slits, holograms, axicons, and modified lasers. Using the Fourier relationship between the aperture and the field, an annular slit right before an imaging lens can generate a Bessel beam, but the efficiency is very low. If a computer-generated hologram is used instead of a slit, higher efficiency is gained, with the complication of higher-order beams that require spatial filtering. An axicon can generate a zeroth-order Bessel beam very

efficiently; however, high alignment accuracy must be guaranteed. Bessel beams can also be produced using a laser with a modified cavity. Bessel beams find applications in areas like optical manipulation, atom optics, and nonlinear optics. The non-diffracting feature and intense central core are used to manipulate small particles, trap atoms, and produce harmonic light. For a more general treatment of this topic in terms of localized waves, refer to the two review articles by Besieris et al. [117] and Recami et al. [120].

Limited-diffraction beams have also been hot topics in acoustics and ultrasonics in the past decade or so. Two years after Durnin et al. produced Bessel beams using an annular slit, Karpelson [123] discussed the possibilities of forming limited-diffraction Bessel beams with piezoelectric transducers, and suggested that a transducer with a non-uniform pressure distribution following the zeroth-order Bessel function could generate such beams. The same year, Hsu et al. [124, 125] fabricated a three-ring annular array that was non-uniformly poled according to the zeroth-order Bessel function, and produced monochromatic Bessel beams in water. Later, Lu and Greenleaf [126] designed a 10-ring annular array transducer using PZT ceramic/polymer 1-3 composite, and the transducer could produce monochromatic and pulsed Bessel beams and X waves. Limited-diffraction beams can also be acoustically generated with transducers such as annular or linear arrays [ ], weighted conical transducers [ ], hexagonal arrays [133], and 2-D arrays [134]. With hexagonal arrays and 2-D arrays, these limited-diffraction beams can be successfully steered to other directions while keeping the limited-diffraction propagation properties. The other propagation properties of these beams have also been studied extensively. Fatemi and Ghasemi-Nejad [135] studied the propagation of limited-diffraction beams in biological soft tissues; Sushilov [136] studied the

propagation of X waves in dissipative media; Synnevag et al. and Ding et al. [ ] investigated the nonlinear propagation of limited-diffraction beams.

It is worth noting that, before 1992, most investigators focused on monochromatic Bessel beams. In that year, Lu and Greenleaf [100, 101] discovered a family of limited-diffraction beams called X waves, which are pulsed-wave solutions to the scalar wave equation in free space, and experimentally demonstrated a zeroth-order X wave with an annular array in water. This amazing solution to the century-old wave equation inspired many investigators to dig deeper and find new solutions. Lu himself later found several other interesting limited-diffraction beams, such as bowtie beams [102, 103] and array beams [104].

The unique propagation properties of limited-diffraction beams have many practical implications for medical imaging. Lu and Greenleaf [126, ] first applied limited-diffraction Bessel beams to ultrasound imaging, tissue characterization, and nondestructive evaluation. With the custom-designed 10-ring Bessel transducer, a pulsed Bessel beam was excited to probe soft tissue. The central core of the Bessel beam has a very large depth of field compared to conventional focused Gaussian beams. In reception, dynamic focusing could be employed to get a single A-line. By mechanically scanning the transducer across the tissue surface, a 2-D B-mode image can be formed. Because of the large depth of field of the central core, the reconstructed image has a high resolution throughout the entire depth. However, the sidelobes of Bessel beams are strong, and each sidelobe carries roughly the same energy as the central core. The contrast resolution is therefore relatively poor. To avoid this problem, Lu and Greenleaf [98] proposed a sidelobe reduction method, which employs three transmissions. In the first transmission, a zeroth-

order Bessel beam is transmitted, and the same Bessel weighting is applied in reception to get an A-line. In the second transmission, a second-order Bessel beam is transmitted and synthesized in reception. The third transmission uses a rotated second-order Bessel beam. The final A-line is obtained by subtracting the last two from the first A-line. This technique takes advantage of the fact that the center of the second-order Bessel function is zero, while the sidelobes of the zeroth- and second-order Bessel beams are close. After the subtraction, the contrast resolution is improved dramatically.
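The effect of this subtraction can be seen from the ideal beam profiles alone, as in Figure 3.1. The snippet below is a small numerical illustration (not the dissertation's implementation): it compares the squared zeroth-order Bessel profile with the magnitude of the difference between the squared zeroth- and second-order profiles, using an arbitrarily assumed scaling parameter.

```python
import numpy as np
from scipy.special import jv

alpha = 1200.0                      # assumed Bessel scaling parameter (1/m)
r = np.linspace(1e-6, 15e-3, 3000)  # radial coordinate (m)

b0 = jv(0, alpha * r) ** 2          # squared zeroth-order Bessel profile (two-way response)
b2 = jv(2, alpha * r) ** 2          # squared second-order Bessel profile
reduced = np.abs(b0 - b2)           # profile after the subtraction-based sidelobe reduction

# Compare the highest sidelobe outside the central lobe, in dB relative to the peak.
outside = r > 3e-3
for name, prof in (("before subtraction", b0), ("after subtraction", reduced)):
    level = 10 * np.log10(prof[outside].max() / prof.max())
    print("%s: max sidelobe %.1f dB" % (name, level))
```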

Lu [103] also suggested replacing Bessel beams with bowtie beams, which have much lower sidelobes. The drawback of these two solutions is that a 2-D array is needed to generate bowtie beams and second-order Bessel beams, which increases the system cost and complexity radically. There is another issue that plagues these imaging methods: all of them require a mechanical scanning system, which severely limits the achievable frame rate and makes the images susceptible to motion artifacts from tissue movement. In 1997, Lu [103, 104, ] proposed a new high frame rate imaging method, which achieves the highest frame rate limited only by physical wave propagation. With this method, one pulsed plane wave is used in transmission, and limited-diffraction array beams with different parameters are synthesized in reception. The final image is reconstructed by an inverse Fourier transform of the array-beam responses. This method has a high signal-to-noise ratio, high computational efficiency, high resolution, and an ultrahigh frame rate. It has been predicted to succeed by Wade [148].

In Chapter III of this dissertation, limited-diffraction beams will be applied to both ultrasound imaging and optical imaging. First, coded excitation, which means driving the transducer with a designed code instead of a simple pulse, is employed to increase penetration depth and signal-to-noise ratio for ultrasound imaging with a Bessel transducer. Multiple transmissions of zeroth- and second-order Bessel beams are also used to reduce sidelobes. Attempts will be made to increase the frame rate by transmitting three coded Bessel beams simultaneously. Second, the sidelobe-reduction principle is applied to optically generated Bessel beams. An optical coherence tomography system having a large depth of field and low sidelobes will be designed.

1.1.4 Fourier domain image formation

Fourier analysis, including Fourier series and the Fourier transform, is an important mathematical tool with many practical applications [149]. It provides a new perspective on mathematical problems and physical phenomena. It has been one of the most important tools for engineering and science disciplines, and it is the main topic of many books in different disciplines, such as electrical engineering [149], optics [10], acoustics [150], and biomedical engineering [27, 151]. After Cooley and Tukey [152] first published a fast algorithm for the Discrete Fourier Transform (DFT), called the Fast Fourier Transform (FFT), which dramatically reduced the computation, Fourier analysis became even more popular. The fast algorithm, considered one of the most successful algorithms of the past century, has been widely adopted in digital signal processing [153]. For medical imaging, the Fourier transform and other related transforms have been the cornerstones of X-ray computerized tomography [27], magnetic resonance imaging [151], and emission computed tomography [27]. For ultrasound imaging, the images are generally formed directly with signals in the time domain [154, 155]. To take advantage of the computational efficiency of the FFT, in the past two decades many investigators have also studied the potential of the Fourier transform for

ultrasound imaging under different configurations. To apply the Fourier transform, the image formation has to be performed in the frequency domain, also called the Fourier domain or k-space, which includes the temporal and spatial frequency domains. A relationship between the detected signals and the object is formulated in terms of Fourier transforms. Generally, after the time-domain signals are digitized, they are converted to the frequency domain by the FFT and further processed to obtain the frequency domain representation, also called the spectrum, of the object. The final images are reconstructed by an inverse FFT of the spectrum of the object.

Norton and Linzer [156, 157] and Nagai [158, 159] developed a Fourier domain reconstruction method for synthetic focusing by solving the inverse scattering equation. In this configuration, a single small-aperture transducer is used to transmit a broadband pulse and receive echoes while scanning on a planar surface. After the full set of data is collected, its frequency domain representation is obtained by a Fourier transform. Then a numerical differentiation is performed over temporal frequency, followed by a 2-D spatial Fourier transform. The 3-D image is reconstructed by an inverse 3-D Fourier transform of the data after nonlinear mapping and interpolation. Ylitalo [160, 161] also suggested a narrow-band synthetic aperture imaging method based on ultrasound holography. A single element, or a few elements combined together, in a 1-D linear array or convex array serves as both transmitter and receiver. The data are collected while the transducer mechanically or electronically scans a straight line. After the data are collected, a 1-D spatial Fourier transform is applied, followed by backward propagation. A 2-D image is reconstructed by an inverse Fourier transform after phase angle processing and curvature compensation.

These synthetic aperture methods have the advantages of simple, low-cost implementation and fast reconstruction. However, they are more suitable for nondestructive evaluation than for medical applications, for several reasons. First, the transmission efficiency of ultrasound is low, since only a single small transducer or a few elements in an array are used for transmission, and the transmitted ultrasound signal cannot penetrate deep into biological tissue. The echo signal is weak and noisy, and the signal-to-noise ratio is low, although the severity of this problem can be partially reduced by providing a high-quality, high-dynamic-range receiving circuit, since only one such receiving channel is needed. Second, in most of these configurations, a mechanical scan is needed to reduce the system complexity. The time it takes to acquire the radio frequency (RF) data is very long, and thus the frame rate is extremely low. Even if electronic scanning is implemented, it still needs hundreds of transmission events to acquire enough data to construct just one frame of a 2-D B-mode image. Third, the image quality is generally much poorer compared to that of the delay-and-sum method with focused transmissions using a phased array, which is commonly seen in clinical practice.

Busse [162] applied the Fourier transform to synthetic aperture focusing, where a scanning sensor is used to collect the ultrasound signal on a planar surface. The data are first converted to the temporal frequency domain by a Fourier transform. Then, at selected frequencies, a 2-D Fourier transform is performed to get the angular spectra. The angular spectra at different frequencies are propagated backward to a certain depth by compensating the phase differences. A C-mode image can be reconstructed by inverse transforming the average of the angular spectra over the different frequencies.

By back-propagating the angular spectra to different depths, a 3-D image is obtained by stacking a sequence of C-mode images.

Benenson [163] suggested a 3-D imaging method based on a single mechanically scanned large-aperture focused transducer. After the scan on a planar surface is finished, a 3-D Fourier transform is applied to the acquired 3-D (two spatial dimensions and one temporal dimension) data set. Then a depth correction factor is applied to the spectrum. Finally, a 3-D image is formed by an inverse Fourier transform. One strong feature of this technique is that, after the compensation, which is performed in the spatial-temporal frequency domain, high resolution and low sidelobe levels are also achieved at depths behind and in front of the focal point; thus an image of high resolution can be obtained. The technique is well suited for ultrasound imaging in ophthalmology and dermatology, where high-frequency transducer arrays are hard to build and the imaging area is relatively small. But it is not suitable for general diagnostic applications, since mechanical scanning of the probe is involved: the frame rate is low, and the sensitivity to motion artifacts is high. In addition, a high-resolution automated mechanical scan system is mandatory, which is rather bulky and awkward for day-to-day clinical practice.

Fourier-based reconstruction methods have also been used in intravascular imaging [164, 165], where a small-diameter cylindrical array is used, and have been implemented with a concave linear array for imaging fine structural features in blood vessels using the amplitude of the scattered RF data [166]. A Fourier-based method has also been used in a depth-focused phased array imaging system [167], where a phased array is used to transmit a focused ultrasound pulse at a fixed depth by applying a proper time delay to each array element, and the same delay is also applied in reception.

Then the signals from all elements are added together to form one line. By ignoring the influence of the amplitude, one can establish the relationship between the object reflection function and the measured echo data in the temporal-spatial frequency domain. After compensating the defocusing effects in the frequency domain, a corrected B-mode image is constructed by an inverse Fourier transform. This method has applications in low-cost phased array imaging systems, where a dynamic receive beamformer is too costly.

Limited-diffraction beams such as X waves [100, 101] not only have a large depth of field, but are also orthogonal. Using these properties, Lu has developed a high frame rate imaging method [145, 147]. In this method, a broadband plane wave is used to insonify the object. In reception, the echoes are converted to the temporal frequency domain with a one-dimensional Fourier transform and weighted by multiple limited-diffraction array beams of different parameters. A 3-D image can be reconstructed by a 3-D inverse Fourier transform of the weighted signals after interpolation between curvilinear and rectangular coordinates. Because all the elements are used in the transmission, the transmitting efficiency is much higher than that of the synthetic aperture technique, and thus the SNR of the received echoes is relatively high. This method can achieve the maximum frame rate, limited only by the speed of sound and the imaging depth, if only one transmission is used. The ultrahigh frame rate is extremely desirable for imaging fast-moving objects, such as the heart. In addition to the high frame rate imaging method, Lu has also developed a method using limited-diffraction array beams in transmission to increase the image field of view and resolution and to reduce sidelobes [146].
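In the simplest 2-D case — a single, non-steered plane-wave transmission received by a linear array — the reconstruction reduces to a 2-D FFT of the echo data, a remapping of each (k_x, ω) sample onto the object spatial frequency k_z' = k + sqrt(k² − k_x²) with k = ω/c, and an inverse 2-D FFT. The sketch below is only a schematic of that pipeline, with placeholder data, an assumed constant speed of sound, and crude nearest-neighbour regridding; it is not the dissertation's or the high frame rate system's actual implementation.

```python
import numpy as np

c = 1540.0                     # assumed speed of sound (m/s)
nx, nt = 128, 1024             # number of array elements and time samples (placeholders)
dx, fs = 0.3e-3, 40e6          # element pitch (m) and sampling rate (Hz), illustrative
rf = np.random.randn(nt, nx)   # placeholder echo data: rows = time, columns = element

# 2-D FFT of the echoes: temporal frequency omega and lateral spatial frequency kx.
E = np.fft.fftshift(np.fft.fft2(rf))
omega = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(nt, 1 / fs))
kx = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(nx, dx))
KX, W = np.meshgrid(kx, omega)
K = W / c

# For a non-steered plane-wave transmission, each echo sample maps to the object
# spatial frequency kz' = k + sqrt(k^2 - kx^2); evanescent samples are discarded.
valid = (W > 0) & (K ** 2 > KX ** 2)
KZ = np.where(valid, K + np.sqrt(np.abs(K ** 2 - KX ** 2)), 0.0)

# Crude nearest-neighbour regridding onto a uniform kz axis (stand-in for interpolation).
dkz = 2 * np.pi * fs / c / (nt - 1)
F = np.zeros((nt, nx), dtype=complex)
iz = np.clip(np.round(KZ / dkz).astype(int), 0, nt - 1)
for ix in range(nx):
    sel = valid[:, ix]
    F[iz[sel, ix], ix] = E[sel, ix]

image = np.abs(np.fft.ifft2(np.fft.ifftshift(F, axes=1)))
print("reconstructed image grid:", image.shape)
```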

Lu has also written a C program for steered plane wave image reconstruction and presented the results at an international conference in 2000 [170]. Liu also mentioned that his method is suitable for 3D imaging, but he derived it only for the 2D imaging case. In Liu's work, a 2-D Fourier transform is used to obtain the spectrum of the echo signals instead of limited diffraction array beam weighting, and a nonlinear mapping is used to interpolate the spectrum. The mapping procedure for a non-steered plane wave is the same as in Lu's high frame rate imaging method, and a one-dimensional mapping procedure is used for steered plane waves. Liu also suggested superposing the constructed images incoherently to reduce image speckle. However, Lu had presented the same idea and its results a few years earlier at both the 1999 and 2000 international conferences [169, 170]. In Chapter IV of this dissertation, the combination of steered plane wave transmissions with the high frame rate imaging method is expanded to include limited diffraction beam transmissions [146], starting from the diffraction tomography theory [27] that solves the inhomogeneous Helmholtz equation under the Born approximation. An advantage of combining multiple transmissions with the high frame rate method is that the frame rate can be traded off against improved image quality in different applications. Both computer simulations and experiments are performed to validate the work.

1.2 Motivation and Significance

In ultrasound imaging, when ultrasound propagates into the human body, one must wait long enough for the echoes to come back before transmitting the next wave. The speed of sound in tissue is around 1500 m/s. For a typical study, a depth of 200 mm may be needed.

Then, in total, we can have 3750 transmissions in one second. In conventional B-mode imaging, roughly 128 focused transmissions are needed to reconstruct one frame of a high-quality image, so the highest frame rate is around 30 frames per second, which meets the standard for real-time imaging. But in 3-D imaging, 128 frames of such images may be needed to reconstruct one 3-D image, and the frame rate drops to about 0.2 frames per second. This means that, with the conventional imaging method, 3-D imaging cannot be realized in real time. Even for 2-D imaging, in Doppler mode, multiple transmissions in the same direction are needed, and the frame rate is also reduced dramatically. In some situations, such as kidney and liver imaging, where the imaging targets are basically stationary, a frame rate of 30 frames per second is more than enough, and it would be desirable to trade the high frame rate for higher spatial resolution and contrast resolution. On the other hand, for imaging fast-moving targets such as the heart, this frame rate would not provide enough temporal resolution to reveal the details of the motion, and it would be desirable to increase the frame rate while keeping reasonable spatial and contrast resolution. There are several ways to reach a trade-off between frame rate and image quality. One way to improve spatial resolution is through multiple transmit foci [116, 172] along each scan line, which are achieved by transmitting multiple focused beams separately with different focal depths and stitching the pieces around the focal points together in reception. In this way, the objects are in both transmit and receive focus, and a high spatial resolution is obtained. The frame rate is inversely proportional to the number of foci used in each scan line, and since the depth of field of a focused beam is relatively small, the frame rate can be unacceptably low. Another way is to take advantage of the redundancy of the scan lines by using virtual source elements [173, 174].

In this method, a focused beam is transmitted, the same as in the conventional imaging method, but an area image instead of a single scan line is formed through dynamic focusing by treating the focal point as a virtual source element. Then another focused beam is transmitted, and yet another area image is formed. In the end, all these area images are added together coherently to synthesize a transmit focus. In this way, every point of the object is in synthetic transmit focus, and thus the resolution and signal-to-noise ratio are improved. The frame rate is not reduced; however, an area image has to be formed for each transmission, and the burden on the beamformer is heavy. The depth of field, which is directly related to the spatial resolution, can also be improved by other methods, such as transmitting limited diffraction beams [126, 142] or using a time-dependent focal zone and center frequency [175]. For limited diffraction beams, the depth of field is very large, but the sidelobe level is relatively high. For the time-dependent focal zone and center frequency, at each transmission each element needs to transmit a different waveform in a different frequency band, which requires a sophisticated transmit beamformer. Several solutions are also available to increase the frame rate by reducing the number of transmissions. One simple way [176] is to reduce the density of transmit lines and use interpolation to increase the density of receive lines. Another way [177] is to transmit multiple focused beams along different directions and form multiple receive lines simultaneously. Yet another way [178] is to transmit a fat beam consisting of multiple focused beams in closely spaced directions and form multiple receive lines simultaneously. These methods can improve the frame rate a few fold. When an even higher frame rate is required, these methods cannot be employed because of sampling problems or interference between neighboring beams.

To further increase the frame rate, broader beams instead of focused beams are used to illuminate the object. Researchers at Duke University [ ] suggested a system in which a broad wave is used to illuminate part of the object, and parallel dynamic beamformers are used to construct multiple lines; the frame rate improvement is basically limited by the number of parallel beamformers. McLaughlin et al. [182] also suggested a system using broad beams, such as plane waves and spherical waves, to illuminate an area instead of one line. Jeong et al. [129, 183] proposed a system that uses multiple plane waves in different directions to illuminate the object and synthesizes transmit sinc waves at every point of the object. With this method, a whole image is formed with each transmission through parallel dynamic receive focusing, and the images from multiple transmissions are added coherently to synthesize the sinc waves. The frame rate depends on the number of transmissions used; however, when only a few transmissions are used, the sinc waves are not fully synthesized. Ustuner et al. [184] suggested using only two plane waves in opposite directions to illuminate the object; however, only part of the object is illuminated by both waves, and the image quality of the part illuminated only once is much poorer than that of the part illuminated twice. A new synthetic aperture method [ ] suggested by Lockwood et al. could theoretically reconstruct an image of high quality with only three transmissions. With this method, in each transmission only one element of an array transducer is excited to transmit a spherical wave that illuminates the whole object, and the whole array is used in reception. The elements at both edges and at the middle of a 1-D array are excited separately in sequence. When appropriate apodizations are applied in reception, an image with high resolution can be synthesized from these three transmissions.

However, this technique has a major drawback: since only a single element is excited to illuminate the object, the energy efficiency is very low. In the case of attenuating objects like human tissue, these waves cannot penetrate deep into the body, and the signal-to-noise ratio (SNR) is extremely low compared to that of conventional focused beams excited by the whole aperture. With a very low SNR, the contrast resolution is also very poor. All of the methods mentioned above share one common feature: multiple receive lines are formed simultaneously from signals in the time domain. When a whole frame or volume needs to be formed at the same time, the structure of the beamformer becomes very complex and the cost of building such beamformers becomes prohibitively high. To overcome this limitation, Lu [145, 147] proposed a Fourier-based high frame rate imaging method. This method needs only one transmission and takes advantage of the computational efficiency of the FFT. However, since only one transmission is used and there is no transmit focusing, the sidelobe level is relatively high, especially in the 2-D imaging case. Lu's high frame rate imaging method provides a computationally efficient solution to 3-D imaging. With this method, 3-D real-time ultrasound imaging can be implemented with a simple structure using the FFT algorithm, and the frame rate is extremely high if a single plane wave is used in transmission. However, such a high frame rate is not necessary in some situations, where the object is stationary and high image quality is demanded. Therefore, it is necessary to develop a high frame rate and high resolution imaging method based on the Fourier domain [27] with a frame rate that can be varied for different applications. As mentioned above, Liu [168] recently studied a combination of steered plane wave transmissions [169, 170] with the high frame rate imaging method [145, 147] to increase image resolution and reduce sidelobes at the expense of image frame rate and susceptibility to motion artifacts.

However, such a combination of methods can be extended to include the multiple limited diffraction beam transmissions that Lu developed in 1997 [146]. In addition, although Liu claimed that the combination above is applicable to 3D, his derivation is in 2D. A more general theoretical treatment in 3D, based on diffraction tomography [27], that includes both the steered plane wave and limited diffraction beam transmissions in addition to other previously developed methods, such as that by Soumekh [189], is needed. Moreover, computer simulations and in vitro and in vivo experiments need to be performed to verify the theory. For the computer simulations, one first needs to study wave propagation from different perspectives and gain a deep understanding of the interaction mechanism between ultrasound waves and the media; second, one needs to develop numerical methods to simulate ultrasound fields, echo signals, and the whole ultrasound system. These studies not only serve as preparation for the main task mentioned above, but also have significance in themselves, since new and efficient field calculation methods can help other investigators speed up their studies in the area of ultrasound. For the experiments, one needs a general-purpose ultrasound system to acquire experimental data that are very difficult, if not impossible, to obtain with commercial systems. The ultrasound system should have enough flexibility to implement different imaging methods, including conventional ones and new ones developed in the future. Such a system makes it possible to experimentally study different methods using the same settings and to compare these methods objectively. Of course, developing such a system will deepen the understanding of the working mechanisms of ultrasound systems, and the system can also be used for other related studies.

1.3 Research Objectives

This dissertation work consists of the following research objectives:

- Study ultrasound wave propagation phenomena in homogeneous, lossless media with different approaches; understand the underlying principles of ultrasound propagation and the mathematical relationships between the different approaches.
- Study the non-diffracting properties of Bessel beams and apply these properties to medical imaging; design coded excitation for ultrasound imaging and an optical layout for optical imaging; apply the sidelobe-subtraction principle to both ultrasound and optical imaging systems.
- Study the ultrasound image formation mechanism by expanding previously studied methods to include multiple limited-diffraction array beam transmissions; build a theoretical model to describe the relationship between the echoes and the object scattering function; study the expanded method with computer simulations and with in vitro and in vivo experiments.
- Design and implement the logic and the controlling software of a general-purpose HFR imaging system. The system has 128 independent channels, each capable of transmitting arbitrary waveforms with arbitrary time delays, digitizing RF data at high speed, and storing the data in SDRAM for offline studies. The system is used to experimentally verify the various theories and algorithms developed in this dissertation.

1.4 Dissertation Organization

The dissertation is organized as follows. In Chapter I, the background, motivation, and objectives of this work are introduced. Three major concepts, including ultrasound wave propagation, limited-diffraction beams, and Fourier-domain image formation, are introduced in the context of their historical development and their relationships to the current work. The motivation and significance behind this work are to develop an efficient imaging method that overcomes the drawbacks of previous methods and provides more accurate diagnostic information. The research objectives of this work are also outlined.

In Chapter II, wave propagation is studied with an emphasis on field calculation. Based on the concepts of the angular spectrum and limited-diffraction beams, new methods are proposed to calculate the fields of annular and rectangular array transducers. A new approach to calculating the spatial impulse response is also proposed, which significantly simplifies the numerical implementation. Numerical examples and experimental results are presented to verify these field calculation methods. The theoretical relationships between the different methods are also briefly reviewed.

In Chapter III, limited-diffraction beams are applied to ultrasound imaging and optical imaging. Multiple limited-diffraction beams are used in subtraction to reduce the sidelobe level. Coded excitation is studied for ultrasound imaging in an attempt to increase penetration depth, SNR, and frame rate. An optical coherence tomography system is also designed to increase the depth of field. Simulations are performed to demonstrate the effectiveness of these imaging systems.

In Chapter IV, a general theory of the Fourier-based imaging method [27] is developed to include limited diffraction array beam transmissions [146]. With this theory, steered plane wave transmissions [169, 170] and other previous imaging methods are included as special cases [189]. Multiple transmissions with steered plane waves or limited diffraction beams allow higher image resolution and lower sidelobes at the expense of image frame rate and susceptibility to motion artifacts. The imaging methods are verified by computer simulations and experiments, including water-tank experiments, phantom experiments, and in vivo experiments.

In Chapter V, the logic and control of the HFR imaging system are designed and implemented. The HFR imaging system is a general-purpose research platform: it has all the front-end circuits of a typical ultrasound system and also has additional storage and communication units for offline data processing. The functional partition of the system and the detailed logic designs for each functional part are presented.

In Chapter VI, the whole work is summarized, and future studies are suggested.

CHAPTER II: WAVE PROPAGATION

In this chapter, the phenomena associated with wave propagation are studied extensively, and the relationships between different approaches to calculating the field are revealed. Based on the principle of the angular spectrum and the non-diffracting nature of limited-diffraction beams, two algorithms for calculating the fields of an annular array and a rectangular array are proposed. A new procedure to evaluate transient fields based on the spatial impulse response is also proposed to simplify the implementation. From the knowledge of how waves propagate in a source-free medium, one can gain deep insight into how waves interact with objects and design new imaging methods based on the principles of wave propagation and interaction.

2.1 Introduction

In the past decades, the wave propagation phenomenon has been one of the keen interests of the acoustical community; for a good review, please refer to [190]. Generally speaking, one basic problem of wave propagation is how waves propagate away from their sources. A sound wave is a type of mechanical vibration that needs a medium to support the vibration, unlike electromagnetic waves, which can propagate through a vacuum; but both wave propagation phenomena are governed by the same wave equation. For a source-free, homogeneous, lossless medium, the scalar wave equation in rectangular (Cartesian) coordinates is given by:

$$\frac{\partial^2 \Phi}{\partial x^2} + \frac{\partial^2 \Phi}{\partial y^2} + \frac{\partial^2 \Phi}{\partial z^2} - \frac{1}{c^2}\frac{\partial^2 \Phi}{\partial t^2} = 0, \qquad (2.1)$$
where $\Phi$ represents the velocity potential and is a function of both position, defined by the variables $x$, $y$, and $z$, and time $t$; $c$ is the speed of the wave in the medium. The equation for pressure has the same form. To evaluate the field response of the wave as it propagates, in addition to (2.1), one needs to know the source that generates the wave in the first place. From a mathematical perspective, the information about the source is expressed as boundary conditions; with these conditions, one can solve (2.1) to obtain the field response of the wave. Two types of boundary conditions are generally considered. One is the Neumann condition, where the normal velocity of the source along the propagation direction is known. The other is the Dirichlet condition, where the velocity potential or pressure of the source is known. The solution to the wave equation depends on these conditions and can take quite different forms. Another way to treat the wave propagation problem is from a physical perspective, where the complex source or field is divided into small units, such as point sources, whose propagation properties are well understood. The most obvious approach is to treat the source as many individual point sources generating spherical waves. This approach is generally called Huygens's principle [10]. One embodiment of this principle is the Rayleigh-Sommerfeld formula [1-10, 12, 28-32, 34, 191, 192], which also includes an additional angular response term. Another closely related approach is the angular spectrum method [21, 33-37, 39, 41-53, 63, 65, 66, 79, 83, 88, 94, 164], where the field or the source is decomposed into simple forms. In different coordinate systems, the decomposed waves can have different shapes, but all of them have very simple propagation characteristics.
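As a quick consistency check, substituting a single monochromatic plane-wave component $\Phi = \exp[\,j(k_x x + k_y y + k_z z + \omega t)\,]$ into (2.1) gives
$$\left(\frac{\omega^2}{c^2} - k_x^2 - k_y^2 - k_z^2\right)\Phi = 0,$$
so such a component is a solution exactly when $k_x^2 + k_y^2 + k_z^2 = \omega^2/c^2 = k^2$. This dispersion relation is what determines the axial wave number of each plane-wave component in the angular spectrum decomposition introduced below.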

In later sections, the angular spectrum method will be used to derive new algorithms under different conditions. The field will be expressed as the summation of limited-diffraction beams. Simulation results show that these algorithms are both accurate and efficient. Yet another approach, the spatial impulse response approach [54-57, 62, 69, 190, 196], not only divides the source into small intersection arcs, but also takes advantage of the linear system principle and formulates the field response in a convolution form. A new solution for the impulse response of rectangular arrays will be proposed; it requires only simple algebraic operations instead of complex geometrical considerations. The method to simulate echo signals is also presented, and simulations and experiments are carried out to verify the algorithms. All these methods based on physical perspectives are closely related to each other, and the relationships between them will also be discussed.

2.2 Angular spectrum based field calculation

In this section, the concept of the angular spectrum for wave propagation is introduced first. Then the angular spectrum method is applied to annular array transducers and rectangular array transducers, and new algorithms to evaluate the field response are developed. Numerical examples and experimental results are presented later. The theory and some of the results contained in this section were presented at the 142nd meeting of the Acoustical Society of America [91]. The simulation results for rectangular arrays were presented at the 141st meeting of the Acoustical Society of America [90]. The simulation results for annular arrays were published in two journal papers [92, 93]. In this dissertation, these methods are developed from a different perspective: in the published papers and abstracts, the methods were derived from the concept of limited diffraction beams directly, while here they are derived from the concept of the angular spectrum.

With this concept, the methods for both rectangular arrays and annular arrays are unified in one framework.

The concept of angular spectrum

The spirit behind the angular spectrum is to decompose complex waves into simple forms. Since the amplitudes of these simple waves do not change with propagation and their phases can be easily compensated, the propagated waves can be calculated from the summation of those phase-compensated waves.

Figure 2.1 The geometry for the angular spectrum (wave vector k and the direction cosines associated with the angles α, β, and γ).

For a continuous wave (CW) at angular frequency $\omega$, the wave equation for a source-free, homogeneous, and lossless medium is given by [10]:
$$\nabla^2 P + k^2 P = 0, \qquad (2.2)$$

where $P$ is the analytic representation of the pressure, $k = \omega/c$ is the wave number, and $c$ is the speed of sound. The pressure in the time domain can be expressed as $p(x,y,z,t) = P(x,y,z)\,e^{j\omega t}$. The pressure at a certain distance $z$, as shown in Figure 2.1, can be decomposed into an integration of plane waves,
$$P(x,y;z) = \int\!\!\int A(f_x, f_y; z)\exp[\,j2\pi(f_x x + f_y y)\,]\, df_x\, df_y, \qquad (2.3)$$
where $A(f_x, f_y; z)$ is the angular spectrum of the field at $z$ and represents the amplitude and phase of each plane-wave component. It is defined by the 2-D Fourier transform over the spatial variables $x$ and $y$,
$$A(f_x, f_y; z) = \int\!\!\int P(x, y; z)\exp[-j2\pi(f_x x + f_y y)\,]\, dx\, dy. \qquad (2.4)$$
The case $z = 0$ is a special case, for which
$$P(x,y;0) = \int\!\!\int A(f_x, f_y; 0)\exp[\,j2\pi(f_x x + f_y y)\,]\, df_x\, df_y. \qquad (2.5)$$
Although the expression in (2.3) is the direct result of the Fourier transform taken in (2.4), the kernel of the integration on the right-hand side can be treated as individual plane waves. To better understand this point, one can substitute (2.3) into (2.2) and obtain a simple second-order differential equation,
$$\frac{d^2 A}{dz^2} + k^2\left(1 - \alpha^2 - \beta^2\right)A = 0. \qquad (2.6)$$
The solution to this equation is
$$A\!\left(\frac{\alpha}{\lambda}, \frac{\beta}{\lambda}; z\right) = A\!\left(\frac{\alpha}{\lambda}, \frac{\beta}{\lambda}; 0\right)\exp\!\left(j\frac{2\pi}{\lambda}\gamma z\right), \qquad (2.7)$$
where $\alpha = f_x\lambda$, $\beta = f_y\lambda$, $\gamma = \sqrt{1 - \alpha^2 - \beta^2}$, and $\lambda$ is the wavelength of the wave. When $\alpha^2 + \beta^2 \le 1$, this solution can be interpreted as a single plane wave propagating in a direction defined by the direction cosines $(\alpha, \beta)$.

When $\alpha^2 + \beta^2 > 1$, this solution becomes an evanescent wave, which disappears rapidly after propagating a few wavelengths away. Now, ignoring the contribution from evanescent waves, it is clear from (2.3) that the pressure at any point can be expressed as the summation of simple plane waves traveling in different directions. The amplitude of each propagating plane wave, i.e., $A(\alpha/\lambda, \beta/\lambda; 0)$, does not change as the plane wave propagates; the only change is the phase, which has a very simple expression. If the angular spectrum is known at the source, the field can be easily evaluated by an inverse Fourier transform of the angular spectrum after the phases are compensated.

Field calculation for rectangular arrays

In this section, a new method for field calculation of rectangular arrays is developed from the concept of the angular spectrum. Although this method has its root in the angular spectrum, it is also significantly different from traditional approaches employing the angular spectrum. The angular spectrum method is often coupled with the 2-D Fast Fourier Transform (FFT) to speed up the calculation, and the field is evaluated on evenly sampled grids. With the newly developed method, the FFT is not directly involved, and the field at any point can be evaluated efficiently. Rectangular arrays are very popular in medical ultrasound because the beams can be steered easily by electronics. In B-scan imaging, a 1-D array is normally used. The study here is focused on 2-D arrays, and a 1-D array is a special case. A fully populated 2-D array consists of thousands of elements and is very difficult to build. Despite its complexity, a 2-D array can improve the volumetric resolution significantly compared to a conventional 1-D array. It can dynamically focus beams in all lateral directions. Combined with suitable imaging methods, such as those developed in later chapters of this study, a 2-D array can achieve real-time 3-D imaging, which provides much useful information that is hard to obtain with other imaging modalities.
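For reference, the conventional FFT-coupled angular spectrum propagation mentioned above can be sketched in a few lines (a minimal Python/NumPy sketch, not the method developed in this section; grid sizes, the example aperture, and the evanescent-wave handling are illustrative assumptions). The sampled source pressure is transformed with a 2-D FFT, each propagating component is multiplied by the phase factor of (2.7), and an inverse FFT returns the field at depth z:

```python
import numpy as np

def propagate_angular_spectrum(p0, dx, dy, z, wavelength):
    """Propagate a sampled CW pressure field p0(x, y) at z = 0 to depth z
    with the FFT-based angular spectrum method (cf. Eqs. (2.3)-(2.7)).
    Evanescent components (alpha^2 + beta^2 > 1) are discarded."""
    ny, nx = p0.shape
    fx = np.fft.fftfreq(nx, d=dx)                    # spatial frequencies (1/m)
    fy = np.fft.fftfreq(ny, d=dy)
    FX, FY = np.meshgrid(fx, fy)
    alpha, beta = wavelength * FX, wavelength * FY   # direction cosines
    gamma_sq = 1.0 - alpha**2 - beta**2
    propagating = gamma_sq > 0.0
    gamma = np.sqrt(np.where(propagating, gamma_sq, 0.0))
    A0 = np.fft.fft2(p0)                             # angular spectrum at z = 0
    phase = np.exp(1j * 2.0 * np.pi / wavelength * gamma * z)
    Az = np.where(propagating, A0 * phase, 0.0)      # drop evanescent waves
    return np.fft.ifft2(Az)                          # field at depth z

if __name__ == "__main__":
    # Illustrative example: a 2.5 MHz, 10-mm-wide square piston in water.
    c, f = 1500.0, 2.5e6
    lam = c / f
    dx = dy = lam / 4.0
    n = 512
    x = (np.arange(n) - n / 2) * dx
    X, Y = np.meshgrid(x, x)
    p0 = ((np.abs(X) < 5e-3) & (np.abs(Y) < 5e-3)).astype(complex)
    p_z = propagate_angular_spectrum(p0, dx, dy, z=50e-3, wavelength=lam)
    print("peak |p| at z = 50 mm:", np.abs(p_z).max())
```

Because it relies on the FFT grid, this conventional approach evaluates the field on the same evenly sampled lateral grid at every depth, which is precisely the restriction the grid-free method developed below avoids.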

Theory

Figure 2.2 The diagram of the 2D transducer array (the physical aperture 2R_x × 2R_y lies inside the expanded aperture 2a_x × 2a_y).

Now, consider a 2-D transducer array made of $N \times N$ elements as shown in Figure 2.2; each element is referred to by its coordinates $(u, v)$, where $0 \le u, v \le N-1$. The continuous wave (CW) excitation function at the surface of the transducer array is defined as $f(x, y)$, where $|x| \le R_x$ and $|y| \le R_y$; $2R_x \times 2R_y$ is the physical aperture of the 2-D transducer array. However, this physical aperture needs to be enlarged in the calculation to make sure that each field point in the field of interest (FOI) is within the expanded aperture.

In this way, the amplitudes of the decomposed plane waves do not change appreciably. To model the real system, a new function is defined as
$$g(x,y) = \begin{cases} f(x,y), & (|x| \le R_x,\ |y| \le R_y) \\ 0, & (R_x < |x| \le a_x \ \text{or}\ R_y < |y| \le a_y) \end{cases} \qquad (2.8)$$
where $a_x$ and $a_y$ are the half-lengths of the expanded aperture of the transducer array in the $x$ and $y$ directions, respectively. This function is used to model the array, which has a truncation at the physical edge of the transducer; outside the physical aperture of the transducer, the excitation is set to zero. With the new driving function, the decomposed plane waves have a much larger aperture than the physical one. As in (2.4), the angular spectrum at the surface of the transducer is given by
$$A(f_x, f_y; 0) = \int\!\!\int P(x, y; 0)\exp[-j2\pi(f_x x + f_y y)\,]\, dx\, dy, \qquad (2.9)$$
where $P(x, y; 0)$ is the excitation pressure profile at the transducer surface. This profile takes a fixed, quantized value over each transducer element. Expanding the quantized excitation profile $P(x, y; 0) = g(x, y)$ in a Fourier series yields
$$g(x,y) = \sum_{m,n=-\infty}^{\infty} c_{m,n}\exp(jk_{x_m}x)\exp(jk_{y_n}y), \qquad (2.10)$$
where $c_{m,n}$ is the coefficient of the Fourier series, defined as
$$c_{m,n} = \frac{1}{4a_x a_y}\int_{-R_x}^{R_x}\!\!\int_{-R_y}^{R_y} g(\xi, \eta)\, e^{-jk_{x_m}\xi}\, e^{-jk_{y_n}\eta}\, d\xi\, d\eta, \qquad (2.11)$$
where $k_{x_m} = \dfrac{m\pi}{a_x}$, $k_{y_n} = \dfrac{n\pi}{a_y}$, and $m, n = 0, \pm 1, \pm 2, \pm 3, \ldots$. Substituting (2.10) into (2.9), one gets

$$A(f_x, f_y; 0) = \sum_{m,n=-\infty}^{\infty} c_{m,n}\,\delta(2\pi f_x - k_{x_m})\,\delta(2\pi f_y - k_{y_n}), \qquad (2.12)$$
where $\delta(\cdot)$ is the Dirac delta function. According to (2.7), the angular spectrum of the field at distance $z$ is
$$A(f_x, f_y; z) = A(f_x, f_y; 0)\exp\!\left(j\frac{2\pi}{\lambda}\sqrt{1 - (\lambda f_x)^2 - (\lambda f_y)^2}\, z\right). \qquad (2.13)$$
Substituting (2.13) into (2.3) and using the property of the delta function, one obtains the field
$$P(x,y;z) = \sum_{m,n=-\infty}^{\infty} c_{m,n}\exp(jk_{x_m}x)\exp(jk_{y_n}y)\exp\!\left(j\sqrt{k^2 - k_{x_m}^2 - k_{y_n}^2}\, z\right). \qquad (2.14)$$
The pressure for the CW excitation at angular frequency $\omega$ can then be expressed as the summation of different plane waves [90]:
$$p(x,y,z;\omega;t) = \sum_{m,n=-\infty}^{\infty} c_{m,n}\exp(jk_{x_m}x)\exp(jk_{y_n}y)\exp\!\left(j\sqrt{k^2 - k_{x_m}^2 - k_{y_n}^2}\, z\right)\exp(j\omega t). \qquad (2.15)$$
Assuming the quantized amplitude at element $(u, v)$ is $q_{u,v}(\omega)$, one can evaluate (2.11) directly. Each element contributes the elementary integral $\int_{x_u}^{x_{u+1}} e^{-jk_{x_m}\xi}\, d\xi = \dfrac{j}{k_{x_m}}\left(e^{-jk_{x_m}x_{u+1}} - e^{-jk_{x_m}x_u}\right)$, and similarly in $y$, so that
$$c_{m,n} = -\sum_{u,v}\frac{q_{u,v}(\omega)}{4a_x a_y k_{x_m}k_{y_n}}\left(e^{-jk_{x_m}x_{u+1}} - e^{-jk_{x_m}x_u}\right)\left(e^{-jk_{y_n}y_{v+1}} - e^{-jk_{y_n}y_v}\right), \qquad (2.16)$$
where $x_u$, $x_{u+1}$, $y_v$, and $y_{v+1}$ are the spatial coordinates of the four corners of the corresponding element. From (2.15) and (2.16), one can in principle evaluate the field for CW excitation. However, the parameters in these equations depend on the aperture sizes $a_x$ and $a_y$; only when $a_x$ and $a_y$ become infinite does the equality in (2.15) hold strictly for every point in space. When the near field close to the center of the transducer is of concern, the aperture can be set to 20 times the physical aperture of the transducer array.

Simulation results show that sufficient accuracy is achieved with this setting. Still, to calculate the field in free space from (2.15), the double infinite summations must be reduced to finite summations. Note that when $k_{x_m}^2 + k_{y_n}^2 > k^2$, the corresponding wave becomes an evanescent wave and does not propagate into the space; several wavelengths away from the surface of the transducer, these waves have no practical influence. The limits of the summation can therefore be evaluated as $l_m = \lfloor k a_x/\pi \rfloor$ and $l_n = \lfloor k a_y/\pi \rfloor$, and (2.15) becomes
$$p(x,y,z;\omega,t) = \sum_{m=-l_m}^{l_m}\sum_{n=-l_n}^{l_n} c_{m,n}\, e^{jk_{x_m}x}\, e^{jk_{y_n}y}\, e^{jk_{z_{mn}}z}\, e^{j\omega t}, \qquad (2.17)$$
where $k_{z_{mn}} = \sqrt{k^2 - k_{x_m}^2 - k_{y_n}^2}$. As for pulsed wave (PW) excitation, the field calculation is achieved by integrating the responses over all frequencies. This can be implemented by sampling the pulsed excitation in time, using the discrete Fourier transform to obtain the frequency domain signal, evaluating the CW response at each specific frequency, and finally obtaining the time domain field by an inverse discrete Fourier transform. The excitation signal at each element $(u, v)$, $0 \le u, v \le N-1$, now becomes
$$q_{u,v}(t) = \frac{1}{2\pi}\int q_{u,v}(\omega)\, e^{j\omega t}\, d\omega, \qquad q_{u,v}(\omega) = \int q_{u,v}(t)\, e^{-j\omega t}\, dt. \qquad (2.18)$$
For adequately sampled time domain signals, the discrete Fourier transform can be used, and (2.18) can be expressed as
$$q_{u,v}(t) = \sum_{s=0}^{n_\omega} q_{u,v}(\omega_s)\, e^{j\omega_s t}, \qquad (2.19)$$
where $\omega_s = 2\pi s f_0$, $f_0$ denotes the fundamental frequency, and $q_{u,v}(\omega_s)$ is the frequency domain signal corresponding to $q_{u,v}(t)$:
$$q_{u,v}(\omega_s) = \mathrm{FFT}\big(q_{u,v}(t)\big), \qquad q_{u,v}(t) = \mathrm{IFFT}\big(q_{u,v}(\omega_s)\big). \qquad (2.20)$$
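A minimal numerical sketch of (2.16) and (2.17) is given below (Python with NumPy). It is illustrative only: the variable names, the handling of the $k_{x_m}=0$ or $k_{y_n}=0$ terms, and the Fourier-series normalization follow the reconstruction of (2.11) and (2.16) above, and the few corner terms with $k_{x_m}^2 + k_{y_n}^2 > k^2$ are kept as decaying (complex $k_{z_{mn}}$) components.

```python
import numpy as np

def cw_field_rect_array(q, x_edges, y_edges, field_pts, k, ax, ay):
    """CW field of a rectangular array as a finite sum of plane waves,
    following Eqs. (2.16)-(2.17).  Returns the complex amplitude;
    multiply by exp(j*omega*t) for the time dependence.
    q         : (N, N) complex element drives q_{u,v}(omega), index [u, v]
    x_edges   : (N+1,) element boundaries along x (y_edges analogous)
    field_pts : (M, 3) points (x, y, z);  k = omega/c;  ax, ay : half-apertures."""
    lm, ln = int(np.floor(k * ax / np.pi)), int(np.floor(k * ay / np.pi))
    kxm = np.arange(-lm, lm + 1) * np.pi / ax          # k_{x_m}
    kyn = np.arange(-ln, ln + 1) * np.pi / ay          # k_{y_n}

    def seg(kvec, edges):
        """Element integrals of exp(-j k xi) over each element, Eq. (2.16)."""
        kv = np.where(kvec == 0.0, 1.0, kvec)          # avoid division by zero
        s = (1j / kv)[:, None] * (np.exp(-1j * np.outer(kvec, edges[1:]))
                                  - np.exp(-1j * np.outer(kvec, edges[:-1])))
        s[kvec == 0.0, :] = np.diff(edges)             # limit for k -> 0
        return s                                       # shape (modes, elements)

    Sx, Sy = seg(kxm, x_edges), seg(kyn, y_edges)
    c_mn = (Sx @ q @ Sy.T) / (4.0 * ax * ay)           # Fourier coefficients
    kz = np.sqrt((k**2 - kxm[:, None]**2 - kyn[None, :]**2).astype(complex))
    p = np.zeros(len(field_pts), dtype=complex)
    for i, (xf, yf, zf) in enumerate(field_pts):
        phase = np.exp(1j * (kxm[:, None] * xf + kyn[None, :] * yf + kz * zf))
        p[i] = np.sum(c_mn * phase)                    # Eq. (2.17)
    return p
```

For PW excitation the same routine can be called once per frequency sample $\omega_s$ and the results combined with an inverse FFT, as described above.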

Numerical Examples

A. Transducer Definition

For the numerical examples, two transducer arrays are assumed: one has 250 × 250 elements, while the other has 50 × 50 elements. Both arrays have a physical aperture of 50 mm × 50 mm, and all the elements are evenly distributed. But for all experiments, a circular aperture with a diameter of 50 mm is implemented by deactivating the elements outside of the circle. For simplicity, no physical kerfs are considered; they can easily be included by modifying the coordinates of each element in (2.16). The one-way transfer function of the arrays is modeled as a Blackman window $B(\omega_s)$:
$$B(\omega_s) = \begin{cases} 0.42 - 0.5\cos(\pi\omega_s/\omega_c) + 0.08\cos(2\pi\omega_s/\omega_c), & 0 \le \omega_s \le 2\omega_c, \\ 0, & \omega_s > 2\omega_c, \end{cases} \qquad (2.21)$$
which has no phase shift at any frequency, with $B(0) = 0$, $B(2\omega_c) = 0$, and $B(\omega_s)$ reaching its maximum at the center frequency $f_c = 2.5$ MHz. It has a -6 dB bandwidth of about $0.81 f_c$. For the PW study, a temporal excitation burst weighting function $e(t)$ is assumed, given by
$$e(t) = e^{-t^2/t_0^2}\sin(2\pi f_c t), \qquad (2.22)$$
where $t_0 = 0.4\ \mu\mathrm{s}$; the burst lasts about 1.5 cycles and then reduces to zero. The total length of the sampled record is 20.48 $\mu$s and the sampling rate is 100 MHz. This gives a fundamental frequency $f_0$ in (2.19) of about 48.8 kHz.

Using the FFT, the excitation burst weighting function can be represented as
$$e(t) = \sum_{s=0}^{n_\omega} E(\omega_s)\, e^{j\omega_s t}, \qquad E(\omega_s) = \mathrm{FFT}\big(e(t)\big). \qquad (2.23)$$
Similarly, the underlying driving function is defined as
$$d_{u,v}(t) = \sum_{s=0}^{n_\omega} D_{u,v}(\omega_s)\, e^{j\omega_s t}, \qquad D_{u,v}(\omega_s) = \mathrm{FFT}\big(d_{u,v}(t)\big), \qquad (2.24)$$
where $d_{u,v}(t)$ is the user-defined driving function for each element. A Bessel beam, a focused Gaussian beam, and an asymmetric array beam will be considered. Combining the functions in (2.21), (2.23), and (2.24), we obtain the actual transmission signal for each element, which is given by
$$q_{u,v}(\omega_s) = D_{u,v}(\omega_s)E(\omega_s)B(\omega_s), \qquad q_{u,v}(t) = \mathrm{IFFT}\big\{D_{u,v}(\omega_s)E(\omega_s)B(\omega_s)\big\}. \qquad (2.25)$$
From (2.25), the time domain transmit signal can be calculated for any defined driving function $d_{u,v}(t)$. To verify the simulation results, data from a synthetic array experiment [102] are used. Basically, a single-element transducer made from PZT ceramic/polymer composite material with an effective diameter of 1 mm is used to generate the field. The transmit transfer function is similar to the Blackman window with a center frequency of 2.5 MHz. The field is measured by a broadband PVDF needle hydrophone of 0.5 mm diameter, which is scanned in a raster format by step motors. Details about the experiment and the characteristics of the transducer and hydrophone are described in [102].
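The waveform pipeline of (2.21)-(2.25) can be sketched as follows (a minimal Python/NumPy sketch; the Blackman coefficients 0.42/0.5/0.08 follow the standard window assumed in the reconstruction of (2.21), and the sampling rate, record length, and example driving function are illustrative assumptions):

```python
import numpy as np

def transmit_waveform(d_t, fs, fc, t0=0.4e-6):
    """Compose the per-element transmit signal q(t) of Eq. (2.25):
    q(omega) = D(omega) * E(omega) * B(omega), then back to the time domain.
    d_t : user-defined driving function d(t) sampled at fs (Hz)
    fc  : transducer center frequency;  t0 : burst parameter of Eq. (2.22)."""
    n = len(d_t)
    t = np.arange(n) / fs
    e_t = np.exp(-(t / t0) ** 2) * np.sin(2.0 * np.pi * fc * t)   # Eq. (2.22)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    B = np.where(freqs <= 2.0 * fc,                               # Eq. (2.21)
                 0.42 - 0.5 * np.cos(np.pi * freqs / fc)
                      + 0.08 * np.cos(2.0 * np.pi * freqs / fc),
                 0.0)
    D = np.fft.rfft(d_t)               # D(omega), Eq. (2.24)
    E = np.fft.rfft(e_t)               # E(omega), Eq. (2.23)
    return np.fft.irfft(D * E * B, n)  # q(t),     Eq. (2.25)

# Example: a pure delay (impulse at 1 microsecond) as the driving function.
fs = 100e6
d = np.zeros(2048)
d[int(1e-6 * fs)] = 1.0
q = transmit_waveform(d, fs, fc=2.5e6)
```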

B. Simulated and experimental results for CW fields

For the CW study, three different fields are simulated. The first one is a Bessel beam, whose driving function is given by [126]
$$q_{u,v}(\omega_s) = J_0(\alpha r_{u,v}), \qquad (2.26)$$
where $\omega_s = 2\pi f_c$; $f_c = 2.5$ MHz is the center frequency; $\alpha = 1202.45\ \mathrm{m}^{-1}$; $J_0(\cdot)$ is the zero-order Bessel function of the first kind; and $r_{u,v}$ is the radius from the center of element $(u, v)$ to the center of the transducer array. The second one is a focused Gaussian beam, whose driving function is [126]
$$D_{u,v}(\omega_s) = e^{-r_{u,v}^2/\sigma^2}\, e^{\,jk_s\left(F - \sqrt{F^2 + r_{u,v}^2}\right)}, \qquad (2.27)$$
where $\sigma = 15$ mm (the corresponding full width at half maximum (FWHM) of the Gaussian function is 25 mm); $F = 100$ mm is the focal depth of the beam; $k_s = \omega_s/c$ is the wave number; and $\omega_s$ and $r_{u,v}$ have the same definitions as in (2.26). The third one is an asymmetric array beam, defined by [104]
$$D_{u,v}(\omega_s) = \cos(k_x x_{u,v})\cos(k_y y_{u,v}), \qquad (2.28)$$
where $k_x = 1000\ \mathrm{m}^{-1}$, $k_y = 500\ \mathrm{m}^{-1}$, and $x_{u,v}$ and $y_{u,v}$ are the spatial coordinates of the center of element $(u, v)$.
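For completeness, the three CW driving functions of (2.26)-(2.28) can be evaluated with a few lines of code (Python with SciPy). The value of $\alpha$ and the sign convention of the focusing phase follow the reconstructions above and should be treated as assumptions:

```python
import numpy as np
from scipy.special import j0

def bessel_weight(r, alpha=1202.45):
    """CW Bessel-beam drive of Eq. (2.26); alpha in 1/m (value assumed from [126])."""
    return j0(alpha * r)

def focused_gaussian_weight(r, omega, c=1500.0, sigma=15e-3, F=100e-3):
    """Focused Gaussian drive of Eq. (2.27); k_s = omega/c assumed."""
    ks = omega / c
    return np.exp(-(r / sigma) ** 2) * np.exp(1j * ks * (F - np.sqrt(F**2 + r**2)))

def array_beam_weight(x, y, kx=1000.0, ky=500.0):
    """Asymmetric limited-diffraction array-beam drive of Eq. (2.28)."""
    return np.cos(kx * x) * np.cos(ky * y)
```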

Figure 2.3 Simulated fields of CW Bessel beams ((a) and (b)) and focused Gaussian beams ((c) and (d)) produced with 2.5 MHz, 50 mm x 50 mm 2D array transducers of 250x250 elements ((a) and (c)) and 50x50 elements ((b) and (d)). Stepwise aperture weightings are assumed for both the Bessel and focused Gaussian beams. The scaling parameter α for the Bessel weighting is 1202.45 m-1. The focal length and full-width-at-half-maximum (FWHM) of the focused Gaussian weighting are 100 mm and 25 mm, respectively.

Figure 2.4 Lateral line plots of the CW Bessel beams in Figure 2.3 at an axial distance z = 100 mm away from the surface of the 2D array transducer of (a) 250x250 and (b) 50x50 elements, respectively. Solid lines are the simulation results with the new method, while dashed lines are the simulation results obtained with the Rayleigh-Sommerfeld diffraction formula. The parameters of the Bessel beams were the same as those in Figure 2.3. Dotted lines are the experimental results obtained with the synthetic array experiment.

Figure 2.5 This figure is the same as Figure 2.4, except that it is for the CW focused Gaussian beams. The focal length and the FWHM of the focused Gaussian beams are the same as those in Figure 2.3.

Figure 2.6 Simulated transverse fields of a CW array beam produced with the 2D array transducer of 250x250 elements at four axial distances, (a) z=50 mm, (b) z=100 mm, (c) z=150 mm, and (d) z=216 mm, away from the transducer surface. Stepwise weighting was assumed for the array beam. The scaling parameters were assumed to be k_x = 1000 m-1 and k_y = 500 m-1 along the x and y axes, respectively. The parameters of the 2D array were assumed to be the same as those in Figure 2.3.

Figure 2.7 Lateral line plots of the CW array beam in Figure 2.6 at two axial distances, z=100 mm ((a) and (c)) and z=216 mm ((b) and (d)). At each distance, line plots are obtained along both the x ((a) and (b)) and y ((c) and (d)) axes. Solid lines represent the simulation results of the new method, while dotted lines are results from the synthetic array experiment. The parameters of the beams were assumed to be the same as those in Figure 2.6.

For the Bessel and focused Gaussian beams, both transducers were simulated. One has 250 × 250 elements, which meets the requirement that the physical dimension of each element be smaller than half a wavelength; the fields produced by this array have no grating lobes. The other has 50 × 50 elements, which can be made with state-of-the-art technology and is more practical. All fields are generated by a circular aperture with a diameter of 50 mm by driving the elements inside the circle; the other elements are assigned zero driving signals. Figure 2.3 shows the simulated fields for the Bessel beams: (a) and (c) are generated by the array with 250 × 250 elements; there are no grating lobes, and the field is generated nearly perfectly. On the other hand, (b) and (d) have some obvious grating lobes near the transducer surface. Figure 2.4 and Figure 2.5 show the comparison between the results from the new method, the R-S method, and the synthetic array experiment. Line plots at z = 100 mm with the two different arrays are shown in (a) and (b), respectively. The simulation results from our method and the Rayleigh-Sommerfeld method mostly overlap, demonstrating that our method is accurate; they also agree very well with the data from the experiment. In panel (b) of Figure 2.5, some obvious differences between the simulations and the experiment are observed. This is expected, since the experimental data were originally collected for a different element size; when calculating the field, we ignored this difference in element size. Because of this, the beams do not arrive at the focus in phase, and the width of the focus is broadened. If no focusing is used, as in the case of the Bessel beams, this effect is not obvious. For the asymmetric array beam, only the array of 250 × 250 elements is simulated. Since the beam is not symmetric in the x-y plane, which is obvious from the driving function (2.28), the simulated fields (Figure 2.6) are shown in transverse sections at z = 50 mm, z = 100 mm, z = 150 mm, and z = 216 mm away from the surface of the transducer array.

Similarly, Figure 2.7 shows the comparison between the simulation and the experiment; again, the results agree with each other very well.

C. Simulated and experimental results for PW fields

For the PW study, a focused Gaussian pulse is simulated and verified by the synthetic array experiment. For the focused Gaussian pulse, the corresponding driving function is given by [126]
$$D_{u,v}(\omega_s) = e^{-r_{u,v}^2/\sigma^2}\, e^{\,jk_s\left(F - \sqrt{F^2 + r_{u,v}^2}\right)}, \qquad (2.29)$$
where $\sigma = 15$ mm, giving a full width at half maximum (FWHM) of 25 mm for the Gaussian function; $F = 100$ mm is the focal depth of the beam; and $\omega_s$ and $r_{u,v}$ have the same definitions as in (2.26). With the underlying driving function in (2.29), (2.25) can be used to obtain the transmit signals in the time domain. Once $q_{u,v}(\omega_s)$ is known for each specific frequency, the same techniques can be used to evaluate the responses at all the frequencies involved; finally, an inverse Fourier transform gives the field in the time domain. Figure 2.8 shows the simulated fields of the focused Gaussian pulse at z = 50 mm, z = 100 mm, z = 150 mm, and z = 216 mm away from the surface of the transducer array. The amplitude of the field is shown with a dynamic range of 40 dB. In all four panels, the horizontal axis represents time while the vertical axis represents the radial position away from the center of the transducer.

Figure 2.8 Simulated fields of focused Gaussian pulses with a 2D array transducer of 50x50 elements at four axial distances, (a) z=50 mm, (b) z=100 mm, (c) z=150 mm, and (d) z=216 mm, away from the transducer surface. The transmitting transfer function of the 2D array was assumed to be a Blackman window function peaked at the center frequency of 2.5 MHz, and the -6dB bandwidth of the array is about 81% of the center frequency. The focal length and the FWHM are the same as for the CW focused Gaussian beam in Figure 2.3.

Figure 2.9 Lateral plots of the maximum sidelobes of the focused Gaussian pulses in Figure 2.8 at two axial distances, (a) z=100 mm and (b) z=216 mm. Solid lines are the simulation results of the new method, while dotted lines are results obtained with the synthetic array experiment.

Figure 2.9 shows the comparison of the maximum sidelobes at z = 100 mm and z = 216 mm between the simulation and the experiment. As explained in the previous section, some differences between the simulation and experimental results are expected.

Field calculation for annular arrays

In this section, a new method of field calculation for annular arrays is developed from the concept of the angular spectrum [89-93]. In this case, the angular spectrum takes a form quite different from its counterpart for rectangular arrays: the field is decomposed into limited-diffraction Bessel beams instead of plane waves. The Bessel beam is one type of limited-diffraction beam, which keeps its shape while propagating. An annular array transducer has multiple transducer elements, all of which share a common center of circular symmetry. It has very good focusing properties, but it needs mechanical scanning to form a 2-D image.

Theory

Now, consider an annular array consisting of $N$ rings as shown in Figure 2.10, each ring referred to by an index $(l)$, where $0 \le l \le N-1$. The CW excitation at the surface of the transducer array is defined as $f(r)$ in polar coordinates, where $r \le R$ and $R$ is the radius of the transducer. Similar to the case of rectangular arrays, a new function is defined to model the physical system with an expanded aperture,
$$g(r) = \begin{cases} f(r), & (r \le R) \\ 0, & (R < r \le a) \end{cases} \qquad (2.30)$$
where $a$ is the radius of the expanded aperture. This function is used to model the array, which has a truncation at the physical edge of the transducer. Outside the physical aperture of the transducer, the excitation is set to zero.

In this way, the decomposed Bessel beams have a much larger aperture than the physical one.

Figure 2.10 The diagram of the annular transducer array.

Using the coordinate transformations between rectangular and polar coordinates,
$$r = \sqrt{x^2 + y^2}, \quad \theta = \tan^{-1}\frac{y}{x}, \quad \rho = \sqrt{f_x^2 + f_y^2}, \quad \phi = \tan^{-1}\frac{f_y}{f_x}, \qquad (2.31)$$
one can obtain the angular spectrum at the transducer surface from (2.9) [10],
$$A(\rho; 0) = 2\pi\int_0^{\infty} P(r; 0)\, J_0(2\pi r\rho)\, r\, dr, \qquad (2.32)$$

where $J_0(\cdot)$ is the first-kind Bessel function of order zero and $P(r; 0)$ is the excitation pressure profile at the transducer surface. The profile takes a fixed, quantized value over each ring. Expanding the quantized excitation profile $P(r; 0) = g(r)$ in a Fourier-Bessel series yields
$$g(r) = \sum_{i=1}^{\infty} c_i J_0(\alpha_i r), \qquad (2.33)$$
where $\alpha_i = x_i/a$, and $x_i$ is the known infinite set of monotonically increasing positive solutions of $J_0(x_i) = 0$. $c_i$ is the coefficient of the Fourier-Bessel series, defined as
$$c_i = \frac{2}{a^2 J_1^2(x_i)}\int_0^a g(r)\, J_0(\alpha_i r)\, r\, dr, \qquad (2.34)$$
where $J_1(\cdot)$ is the first-kind Bessel function of order one. Substituting (2.33) into (2.32), one gets
$$A(\rho; 0) = 2\pi\sum_{i=1}^{\infty} c_i\int_0^{\infty} J_0(\alpha_i r)\, J_0(2\pi r\rho)\, r\, dr = \sum_{i=1}^{\infty}\frac{c_i}{\alpha_i}\,\delta\!\left(\rho - \frac{\alpha_i}{2\pi}\right). \qquad (2.35)$$
Rewriting (2.7) in polar coordinates, one obtains the angular spectrum of the field at distance $z$ as
$$A(\rho; z) = A(\rho; 0)\exp\!\left(j\frac{2\pi}{\lambda}\sqrt{1 - \lambda^2\rho^2}\, z\right). \qquad (2.36)$$
The pressure at $(r; z)$ can then be expressed as

$$P(r; z) = 2\pi\int_0^{\infty} A(\rho; z)\, J_0(2\pi r\rho)\,\rho\, d\rho = 2\pi\int_0^{\infty}\sum_{i=1}^{\infty}\frac{c_i}{\alpha_i}\,\delta\!\left(\rho - \frac{\alpha_i}{2\pi}\right)\exp\!\left(j\frac{2\pi}{\lambda}\sqrt{1 - \lambda^2\rho^2}\, z\right) J_0(2\pi\rho r)\,\rho\, d\rho = \sum_{i=1}^{\infty} c_i J_0(\alpha_i r)\exp\!\left(j\sqrt{k^2 - \alpha_i^2}\, z\right). \qquad (2.37)$$
The pressure at $(r; z)$ is thus the summation of different Bessel beams. Bessel beams also have nondiffracting properties: their amplitude pattern does not change during propagation, and only the phase changes, which is easy to compensate. The pressure for CW excitation at angular frequency $\omega$ can be expressed as the summation of Bessel waves
$$p(r; z; t) = \sum_{i=1}^{\infty} c_i J_0(\alpha_i r)\exp\!\left(j\sqrt{k^2 - \alpha_i^2}\, z\right)\exp(j\omega t). \qquad (2.38)$$
Assuming the quantized amplitude at element $(l)$ is $q_l(\omega)$, one can evaluate (2.34) directly as follows [92, 93]:
$$c_i = \sum_{l=0}^{N-1} c_{i,l}\, q_l(\omega), \qquad c_{i,l} = \frac{2\left[r_l^{+} J_1(\alpha_i r_l^{+}) - r_l^{-} J_1(\alpha_i r_l^{-})\right]}{a\, x_i\, J_1^2(x_i)}, \qquad (2.39)$$
where $r_l^{+}$ and $r_l^{-}$ are the outer and inner radii of ring $(l)$, respectively. From (2.39) and (2.38), one can in principle evaluate the field for CW excitation. However, the parameters in these equations depend on the aperture size $a$; only when this aperture becomes infinite does (2.38) hold strictly for every point in space. When the near field close to the center of the transducer is of concern, the aperture can be set to 10 or 20 times the physical aperture of the transducer array. Simulation results show that sufficient accuracy is achieved with this setting.

Still, to calculate the field in free space from (2.38), the infinite summation must be reduced to a finite summation. Note that when $\alpha_i > k$, the corresponding wave becomes an evanescent wave and does not propagate into the space; several wavelengths away from the surface of the transducer, these beams have no practical influence. The limit of the summation can therefore be approximately evaluated as $ka/\pi + 1/4$, and (2.38) becomes
$$p(r; z; t) = \sum_{i=1}^{\lfloor ka/\pi + 1/4\rfloor} c_i J_0(\alpha_i r)\exp\!\left(j\sqrt{k^2 - \alpha_i^2}\, z\right)\exp(j\omega t). \qquad (2.40)$$
In later sections, we also refer to this new method as the Fourier-Bessel method, since a Fourier-Bessel series expansion is involved in the derivation. For PW excitation, the same technique as for rectangular arrays is used.

Numerical Examples

A. Transducer Definition

Consider the Bessel transducer of Lu and Greenleaf described in [126]. The transducer is an $N = 10$-ring Bessel design whose ring edges are located nominally at the first 10 zeros of $J_0(\alpha_1 r)$, where $\alpha_1 = 1202.45\ \mathrm{m}^{-1}$. In practice, the transducer has a kerf of approximately 0.2 mm, such that, in the notation of the previous section, $r_0^{-} = 0$, $r_0^{+} = x_1/\alpha_1 - \mathrm{kerf}/2$, $r_1^{-} = x_1/\alpha_1 + \mathrm{kerf}/2$, and so on. The operating conditions are $f = 2.5$ MHz in water at a speed of sound $c = 1500$ m/s, giving a wave number $k = 2\pi f/c \approx 1.047\times 10^{4}\ \mathrm{m}^{-1}$. The transfer function and excitation burst are defined in the same way as for the rectangular array in the previous section.
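A minimal sketch of the Fourier-Bessel field computation of (2.39)-(2.40) is given below (Python with NumPy/SciPy). Array shapes and variable names are illustrative assumptions; the non-evanescent term count follows the $\lfloor ka/\pi + 1/4\rfloor$ limit derived above.

```python
import numpy as np
from scipy.special import j0, j1, jn_zeros

def fb_cw_field(q_l, ring_edges, k, a, r, z):
    """CW field of an annular array by the Fourier-Bessel method,
    Eqs. (2.39)-(2.40).  Returns the complex amplitude.
    q_l        : (N,) complex ring drives q_l(omega)
    ring_edges : (N, 2) inner/outer radii (r_l^-, r_l^+) of each ring
    k          : wave number;  a : expanded aperture radius
    r, z       : field coordinates (scalars or broadcastable arrays)."""
    n_terms = int(np.floor(k * a / np.pi + 0.25))        # non-evanescent terms
    x_i = jn_zeros(0, n_terms)                           # roots of J0
    alpha_i = x_i / a
    r_in, r_out = ring_edges[:, 0], ring_edges[:, 1]
    # c_{i,l} of Eq. (2.39)
    c_il = 2.0 * (r_out[None, :] * j1(np.outer(alpha_i, r_out))
                  - r_in[None, :] * j1(np.outer(alpha_i, r_in))) \
           / (a * x_i[:, None] * j1(x_i[:, None]) ** 2)
    c_i = c_il @ q_l                                     # c_i = sum_l c_{i,l} q_l
    kz = np.sqrt((k ** 2 - alpha_i ** 2).astype(complex))
    # p(r, z) = sum_i c_i J0(alpha_i r) exp(j kz_i z), Eq. (2.40)
    field = np.zeros(np.broadcast(r, z).shape, dtype=complex)
    for ci, ai, kzi in zip(c_i, alpha_i, kz):
        field += ci * j0(ai * r) * np.exp(1j * kzi * z)
    return field
```

For the 10-ring Bessel transducer defined above, q_l would hold the ring drives and ring_edges the kerf-adjusted radii $r_l^{-}$, $r_l^{+}$.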

Figure 2.11 Quantization profile for the annular array.

B. Simulated and experimental results for CW fields

For the CW study, the $R = 25$ mm transducer has its ring pressures $q_l(\omega)$ (solid lines in Figure 2.11) chosen as the peak value of each respective Bessel lobe (dashed line in Figure 2.11). Figure 2.12(a) shows the grey-scale image of the transducer field evaluated with $a = 30R$. Next to it, in panel (b), one observes excellent agreement with the experimental field result of [126]. Panel (c) then shows the Rayleigh-Sommerfeld field computation without the Fresnel approximation, which again agrees very closely with the Fourier-Bessel calculation. Finally, in panel (d), the Rayleigh-Sommerfeld computation with the Fresnel approximation is shown; it suffers from errors in the very near field. Notice that these errors are absent in the Fourier-Bessel calculation. In addition to its accuracy, the Fourier-Bessel algorithm ran 14.1 times faster than the Rayleigh-Sommerfeld algorithm with the Fresnel approximation, and faster still than the Rayleigh-Sommerfeld algorithm without it. Figure 2.13 shows the lateral plots of the Bessel field, where the F-B method once again agrees well with the R-S methods and the experiment.

Figure 2.12 Field for the quantized CW Bessel beam. a) Field calculated by the Fourier-Bessel theory. b) Experimental field. c) Field calculated by the Rayleigh-Sommerfeld formula without the Fresnel approximation. d) Field calculated by the Rayleigh-Sommerfeld formula with the Fresnel approximation.

Figure 2.13 Lateral line plots of the CW Bessel beam at an axial distance of z = 100 mm. Solid lines represent the experimental results; dotted lines are the simulation results of the Fourier-Bessel method; short-dash lines represent the simulation results of the Rayleigh-Sommerfeld method with quantized excitation; dashed lines show the results of the Rayleigh-Sommerfeld method with exact Bessel excitation.

C. Simulated and experimental results for PW fields

For the PW study, a zero-order X wave and a focused Gaussian pulse are simulated and verified by experiments. For the zero-order X wave, the corresponding driving function is given by [101]
$$D_l(\omega_s) = (2\pi a_0/c)\, e^{-a_0\omega_s/c}\, J_0\!\left(r_l\,\omega_s\sin\zeta/c\right), \qquad (2.41)$$
where $a_0 = 0.05$ mm, $\zeta = 4^{\circ}$, $r_0 = 0$, $r_l = (r_l^{+} + r_l^{-})/2$ for $l = 1, \ldots, 9$, and $c = 1500$ m/s. For the focused Gaussian pulse, the corresponding driving function is given by [126]
$$D_l(\omega_s) = e^{-r_l^2/\sigma^2}\, e^{\,j\omega_s\left(F - \sqrt{F^2 + r_l^2}\right)/c}, \qquad (2.42)$$
where $\sigma = 15$ mm and the focus is located at $F = 120$ mm, with the full width at half maximum being 25 mm. Figure 2.14 shows the Fourier-Bessel calculated field of a simulated X wave. The field is shown at four distances with $a = 20R$: (a) z=85 mm, (b) z=170 mm, (c) z=255 mm, and (d) z=340 mm. In all four panels, the horizontal axis represents time while the vertical axis represents the radial position away from the center of the transducer. Figure 2.15 then shows the Rayleigh-Sommerfeld field calculation of the same simulated X wave as in Figure 2.14. The F-B and R-S plots are virtually identical, and this parallel is offered as an indicator of the F-B algorithm's accuracy, since the R-S algorithm is widely accepted as a reliable method for field calculation.

Figure 2.14 Simulated fields of a zero-order band-limited X wave computed with the Fourier-Bessel method at distances (a) z=85 mm, (b) z=170 mm, (c) z=255 mm, and (d) z=340 mm away from the surface of a 50-mm-diameter annular array. A stepwise X wave aperture weighting and a broadband pulse drive of the array were assumed. The transmitting transfer function of the array was assumed to be the Blackman window function peaked at 2.5 MHz with a -6 dB bandwidth of about 0.81 f_c. The parameters a_0 and ζ are 0.05 mm and 4 degrees, respectively.

Figure 2.15 The images are the same as those in Figure 2.14, except that they are produced with the Rayleigh-Sommerfeld diffraction formula. The layout and the parameters used in the simulation are the same as those in Figure 2.14.

Figure 2.16 Experimental results that correspond to the simulations in Figures 2.14 and 2.15. A 10-element, 50-mm-diameter, 2.5 MHz center frequency, PZT ceramic/polymer composite J_0 Bessel transducer was used.

Figure 2.17 Simulated fields of a focused Gaussian pulse computed with the Fourier-Bessel method at distances (a) z=50 mm, (b) z=120 mm, (c) z=150 mm, and (d) z=216 mm. A stepwise Gaussian aperture shading and a stepwise phase were assumed. The broadband pulse and transmitting transfer function of the array are the same as those for the X wave in Figure 2.14. The FWHM of the Gaussian shading was 25 mm.

Figure 2.18 The images are the same as those in Figure 2.17, except that they are produced with the Rayleigh-Sommerfeld diffraction formula. The layout and the parameters used in the simulation are the same as those in Figure 2.17.

Figure 2.19 Experimental results that correspond to the simulations in Figures 2.17 and 2.18, except that in the experiment the phase applied by a lens was continuous. The same transducer as in Figure 2.16 was used.

Figure 2.16 shows the actual experimental results for the Bessel transducer of [126], which match the predicted simulated X wave fields given previously in Figures 2.14 and 2.15; see [101] for details of the experimental setup. A high level of agreement between theory and practice is observed. Notice also that the F-B algorithm is applicable right up to and including the transducer surface itself, since (2.40) applies for all z > 0, whereas the R-S algorithm is not applicable close to the transducer surface. When programmed in C under Linux on a Pentium III 600 MHz PC with 128 MB of RAM, the F-B algorithm took approximately 1 min and the R-S algorithm approximately 10 h. Figure 2.17 shows the F-B field calculation for the simulated focused Gaussian pulse; plots are shown for (a) z=50 mm, (b) z=120 mm, (c) z=150 mm, and (d) z=216 mm. Figure 2.18 then gives the R-S field calculation for the same pulse, again showing a close correlation between the F-B and R-S simulation methods. Finally, Figure 2.19 shows experimental results, except that in the experimental test the transducer had an acoustic lens added. This supplied a continuous phase shift across the transducer surface rather than the discretized phase shifts assumed in the simulation; therefore some differences between Figure 2.19 and Figures 2.17 and 2.18 are expected. This is evidenced in the differences observed for the near-field and far-field panels (a) and (d) of the respective figures, although the simulated and experimental pulses are very similar in the regions around the focus in panels (b) and (c), respectively.

2.3 Spatial impulse response based field calculation

In this section, a new approach is developed to calculate the spatial impulse response of a rectangular transducer. The exact solution is given in terms of set operations and trigonometric functions.

The numerical implementation of this solution is much easier than that of solutions proposed before [56, 57, 62, 196, 200], where complex geometrical considerations are required.

Theory

Figure 2.20 Geometry of the transducer and the field point. (a) Spherical interaction in the original 3-D rectangular coordinates. (b) Spherical interaction in the shifted 2-D coordinates.

As shown in Figure 2.20 (a), a rectangular transducer is located in the plane z = 0, with a height of $h$ and a width of $w$. The center of the transducer is located at the origin of the coordinates, and the transducer is mounted on an infinitely large rigid baffle. According to the Rayleigh integral [190], the solution to the wave equation (2.1) can be expressed as
$$\Phi(x, y, z; t) = \int_s \frac{u(t - R/c)}{2\pi R}\, ds, \qquad (2.43)$$

where $\int_s$ denotes surface integration over the transducer, $R$ is the distance between a transducer surface element and the field point, and $u(t)$ is the uniform velocity distribution normal to the transducer surface. The term $u(t - R/c)$ can be written in integral form as
$$u(t - R/c) = \int u(\tau)\,\delta(t - R/c - \tau)\, d\tau. \qquad (2.44)$$
Substituting the above equation into (2.43) and changing the order of integration yield
$$\Phi(x, y, z; t) = \int u(\tau)\int_s \frac{\delta(t - R/c - \tau)}{2\pi R}\, ds\, d\tau. \qquad (2.45)$$
If a new function is defined as
$$h(x, y, z; t) = \int_s \frac{\delta(t - R/c)}{2\pi R}\, ds, \qquad (2.46)$$
then (2.45) can be expressed in convolution form as
$$\Phi(x, y, z; t) = u(t) * h(x, y, z; t), \qquad (2.47)$$
where $*$ denotes convolution in the time domain. This kind of expression is widely used in linear system theory, where the output of a linear time-invariant system equals the convolution of the input and the impulse response of the system. Thus, the newly defined function is called the spatial impulse response of the system at the field point $(x, y, z)$. The spatial impulse response in (2.46) can be expressed explicitly as follows (as illustrated in Figure 2.20):
$$h(x, y, z; t) = \int_{\Theta_1}^{\Theta_2}\!\!\int_{R_1}^{R_2}\frac{\delta(t - R/c)}{2\pi R}\, r\, dr\, d\theta. \qquad (2.48)$$
Using the relationship $R^2 = r^2 + z^2$, the above equation can be further simplified as [62]

$$h(x, y, z; t) = \frac{c\,(\Theta_2 - \Theta_1)}{2\pi}, \qquad (2.49)$$
since, with $r\, dr = R\, dR$, the radial integration of the delta function simply contributes a factor of $c$. Here $\Theta_1$ and $\Theta_2$ are determined by the intersection of the transducer and the projected spherical wave with radius $R = ct$. The intersection of the spherical wave with the plane z = 0 is a circle centered at $(x, y, 0)$ with a radius of $r = \sqrt{(ct)^2 - z^2}$, when $t > z/c$. The circle also intersects the transducer aperture, and some of the intersected arcs are located inside the transducer aperture. The angles subtended by these arcs are directly related to the spatial impulse response given by (2.49). When multiple arcs exist, the spatial impulse response is the summation of the contributions from all arcs, and (2.49) becomes
$$h(x, y, z; t) = \sum_l \frac{c\,(\Theta_2^l - \Theta_1^l)}{2\pi}, \qquad (2.50)$$
where $l$ is the index of the arc. To calculate $\Theta_1$ and $\Theta_2$, we shift the origin of the coordinates to $(x, y, 0)$ and obtain a new coordinate system $(x', y', z)$ as shown in Figure 2.20 (b). Considering the new 2-D coordinates in the plane z = 0, the circle can be expressed in polar coordinates as
$$x' = r\cos\Theta; \qquad y' = r\sin\Theta. \qquad (2.51)$$
If a point on the circle is inside the transducer aperture, the angle $\Theta$ must satisfy the following inequalities:
$$x_1 \le r\cos\Theta \le x_2 \quad (a), \qquad y_1 \le r\sin\Theta \le y_2 \quad (b), \qquad (2.52)$$

where $x_1$, $x_2$, $y_1$ and $y_2$ are the coordinates of the four boundaries of the transducer in the new coordinates, given by $x_1 = x - w/2$, $x_2 = x + w/2$, $y_1 = y - h/2$ and $y_2 = y + h/2$, respectively.

Assuming that (2.52) (a) holds when $\Theta \in [\Theta_{a1}^i, \Theta_{a2}^i]$, and that (2.52) (b) holds when $\Theta \in [\Theta_{b1}^j, \Theta_{b2}^j]$, we have the following relation:

$$\bigcup_l [\Theta_1^l, \Theta_2^l] = \Big(\bigcup_i [\Theta_{a1}^i, \Theta_{a2}^i]\Big) \cap \Big(\bigcup_j [\Theta_{b1}^j, \Theta_{b2}^j]\Big), \qquad (2.53)$$

where i is the index of the sets that satisfy (2.52) (a); j is the index of the sets that satisfy (2.52) (b); $\cup$ means the union of the sets; $\cap$ means the intersection of the sets; and each resulting set corresponds to an arc. From the sets obtained from (2.53), the spatial impulse response can be evaluated from (2.50). Calculation of these sets is a matter of simple algebraic operations. For example, assuming the inverse cosine function $\arccos(x)$ is defined on $[0, \pi]$, for (2.52) (a) we have

$$\Theta \in
\begin{cases}
\varnothing, & x_1/r > 1 \ \text{or}\ x_2/r < -1;\\
[\arccos(x_2/r), \arccos(x_1/r)] \cup [2\pi - \arccos(x_1/r), 2\pi - \arccos(x_2/r)], & -1 \le x_1/r \ \text{and}\ x_2/r \le 1;\\
[\arccos(x_2/r), 2\pi - \arccos(x_2/r)], & x_1/r < -1 \ \text{and}\ -1 \le x_2/r \le 1;\\
[0, \arccos(x_1/r)] \cup [2\pi - \arccos(x_1/r), 2\pi], & -1 \le x_1/r \le 1 \ \text{and}\ x_2/r > 1;\\
[0, 2\pi], & x_1/r \le -1 \ \text{and}\ x_2/r \ge 1.
\end{cases} \qquad (2.54)$$

Assuming the inverse sine function $\arcsin(x)$ is defined on $[-\pi/2, \pi/2]$, from (2.52) (b) we

have

$$\Theta \in
\begin{cases}
\varnothing, & y_1/r > 1 \ \text{or}\ y_2/r < -1;\\
[\arcsin(y_1/r), \arcsin(y_2/r)] \cup [\pi - \arcsin(y_2/r), \pi - \arcsin(y_1/r)], & y_1/r \ge 0 \ \text{and}\ y_2/r \le 1;\\
[\pi - \arcsin(y_2/r), \pi - \arcsin(y_1/r)] \cup [2\pi + \arcsin(y_1/r), 2\pi + \arcsin(y_2/r)], & y_1/r \ge -1 \ \text{and}\ y_2/r \le 0;\\
[0, \arcsin(y_2/r)] \cup [\pi - \arcsin(y_2/r), \pi - \arcsin(y_1/r)] \cup [2\pi + \arcsin(y_1/r), 2\pi], & -1 \le y_1/r \le 0 \ \text{and}\ 0 \le y_2/r \le 1;\\
[0, \arcsin(y_2/r)] \cup [\pi - \arcsin(y_2/r), 2\pi], & y_1/r < -1 \ \text{and}\ 0 \le y_2/r \le 1;\\
[\pi - \arcsin(y_2/r), 2\pi + \arcsin(y_2/r)], & y_1/r < -1 \ \text{and}\ -1 \le y_2/r < 0;\\
[\arcsin(y_1/r), \pi - \arcsin(y_1/r)], & 0 \le y_1/r \le 1 \ \text{and}\ y_2/r > 1;\\
[0, \pi - \arcsin(y_1/r)] \cup [2\pi + \arcsin(y_1/r), 2\pi], & -1 \le y_1/r < 0 \ \text{and}\ y_2/r > 1;\\
[0, 2\pi], & y_1/r \le -1 \ \text{and}\ y_2/r \ge 1.
\end{cases} \qquad (2.55)$$

After the spatial impulse response is evaluated, the pressure at the field point (x, y, z) can be calculated directly as [190]

$$p(x,y,z;t) = \rho_0 \frac{\partial \Phi(x,y,z;t)}{\partial t} = \rho_0 \frac{du(t)}{dt} * h(x,y,z;t), \qquad (2.56)$$

where $\rho_0$ is the density of the medium.

Numerical Examples

A. Transducer definition

Two transducers are studied in the simulations. Transducer I models a custom-built linear array and transducer II models a commercial phased array labeled as V2 from Acuson.

Transducer I, assumed in the simulations, is a 1-D linear array with 128 elements, a pitch of 0.32 mm and a dimension of 40.96 mm × 8.6 mm. The center frequency is 3.5 MHz, and there is no lens added in elevation. Transducer II has a pitch of 0.15 mm, a dimension of 19.2 mm × 14 mm, and a center frequency of 2.5 MHz. No focusing in elevation is assumed in the simulation model either; however, the physical transducer used in the experiment has an elevation focus at a depth of 68 mm.

For simplicity, in the simulations the transfer function of these transducers in both transmission and reception is assumed to be a Blackman window as follows,

$$B(f) =
\begin{cases}
0.42 - 0.5\cos(\pi f/f_c) + 0.08\cos(2\pi f/f_c), & 0 \le f \le 2 f_c,\\
0, & f > 2 f_c.
\end{cases} \qquad (2.57)$$

The two-way bandwidth is 64% of the center frequency at -6 dB. The two-way fractional bandwidth of the physical transducer V2 is around 50% to 60%. The driving signal is a one-cycle sine wave at the center frequency for both simulations and experiments.

B. Impulse response for an array transducer

For a linear array consisting of N flat transducer elements, the impulse response of the whole transducer can be treated as the summation of the individual impulse responses from each element if the same driving signal is applied to every element. As shown in Figure 2.21, for a linear array, the spatial impulse response of the whole transducer can be expressed as

$$h(x,y,z;t) = \sum_{i=0}^{N-1} a_i h_i(x,y,z;t), \qquad (2.58)$$

where $h(x,y,z;t)$ is the spatial impulse response of the whole transducer; $i = 0, 1, \ldots, N-1$ is the index of the elements of the array; $a_i$ is the weighting level at

element i; and $h_i(x,y,z;t)$ is the spatial impulse response between element i and the field point (x, y, z), which can be calculated according to (2.50). Since (2.50) only applies to the case in which the center of the transducer is located at the origin of the coordinates, the origin of the coordinates needs to be shifted to the center of each element, and the coordinates of the field point need to be changed accordingly.

Figure 2.21 The geometry of the array transducer and field point.

Based on similar principles, the spatial impulse response of a convex or a concave array consisting of multiple flat elements can also be calculated by coordinate transforms. For example, as shown in Figure 2.22 (a), a convex array has N flat elements with their centers distributed on an arc of radius $r_0$. The arc is located in the plane y = 0, and the center of element i has a relative angle $\alpha_i$ to the positive z axis. Rotating the coordinates about the y axis, one gets new coordinates (Figure 2.22 (b)). In the new coordinates, the

element is parallel to the plane z = 0, and the coordinates of the field point in the new coordinates, $(x_1, y_1, z_1)$, are given by

$$\begin{bmatrix} x_1 \\ y_1 \\ z_1 \end{bmatrix} =
\begin{bmatrix} \cos\alpha_i & 0 & -\sin\alpha_i \\ 0 & 1 & 0 \\ \sin\alpha_i & 0 & \cos\alpha_i \end{bmatrix}
\begin{bmatrix} x \\ y \\ z \end{bmatrix}. \qquad (2.59)$$

Figure 2.22 Coordinate transforms for a convex/concave array. (a) The original coordinates, (b) Rotated coordinates.

After that, another new set of coordinates is generated by shifting the origin of the new coordinates to the center of the element, and the field point is then defined as $(x_2, y_2, z_2)$,

$$x_2 = x_1, \qquad y_2 = y_1, \qquad z_2 = z_1 \mp r_0, \qquad (2.60)$$

where $-$ applies for a convex array and $+$ for a concave array. Then the spatial impulse response for this element can be calculated according to (2.50).
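The set operations of (2.50)-(2.55) translate almost directly into a short numerical routine. The following is a minimal sketch in Python (not the implementation used in this work); the function and variable names are illustrative, and only the single-element case of (2.50) is handled.

```python
# Minimal sketch (illustrative, not the code used in this dissertation) of the
# set-based evaluation of (2.50)-(2.55): the exact spatial impulse response
# h(x,y,z;t) of one flat rectangular element of width w and height hgt,
# centered at the origin of the plane z = 0 and mounted on a rigid baffle.
import numpy as np

def _cos_intervals(lo, hi, r):
    """Angles in [0, 2*pi] with lo <= r*cos(Theta) <= hi, as disjoint intervals, per (2.54)."""
    a, b = lo / r, hi / r
    if a > 1.0 or b < -1.0:
        return []
    if a < -1.0 and b > 1.0:
        return [(0.0, 2*np.pi)]
    if a < -1.0:                              # only the upper bound is active
        tb = np.arccos(b)
        return [(tb, 2*np.pi - tb)]
    if b > 1.0:                               # only the lower bound is active
        ta = np.arccos(a)
        return [(0.0, ta), (2*np.pi - ta, 2*np.pi)]
    ta, tb = np.arccos(a), np.arccos(b)
    return [(tb, ta), (2*np.pi - ta, 2*np.pi - tb)]

def _sin_intervals(lo, hi, r):
    """Angles in [0, 2*pi] with lo <= r*sin(Theta) <= hi, as disjoint intervals, per (2.55)."""
    a, b = lo / r, hi / r
    if a > 1.0 or b < -1.0:
        return []
    a, b = max(a, -1.0), min(b, 1.0)
    ta, tb = np.arcsin(a), np.arcsin(b)
    out = [(np.pi - tb, np.pi - ta)]          # this branch already lies in [0, 2*pi]
    if ta >= 0.0:
        out.append((ta, tb))
    elif tb <= 0.0:
        out.append((2*np.pi + ta, 2*np.pi + tb))
    else:                                     # interval straddles Theta = 0: split it
        out += [(0.0, tb), (2*np.pi + ta, 2*np.pi)]
    return out

def rect_impulse_response(x, y, z, w, hgt, t, c=1540.0):
    """Exact h(x,y,z;t) of (2.50) for one rectangular element (SI units)."""
    if c * t <= z:
        return 0.0
    r = np.sqrt((c * t)**2 - z**2)            # radius of the projected circle
    sx = _cos_intervals(x - w/2,  x + w/2,  r)     # sets satisfying (2.52)(a)
    sy = _sin_intervals(y - hgt/2, y + hgt/2, r)   # sets satisfying (2.52)(b)
    arc = 0.0                                  # total measure of the intersection, (2.53)
    for a1, a2 in sx:
        for b1, b2 in sy:
            arc += max(0.0, min(a2, b2) - max(a1, b1))
    return c * arc / (2*np.pi)                 # (2.50)
```

Because each bound of (2.52) contributes at most a few disjoint intervals, the nested loop above only touches a handful of interval pairs per time sample, which is what makes the method inexpensive compared with geometry-based formulations.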

Figure 2.23 Spatial impulse response at two distances: (a) z = 60 mm, (b) z = 120 mm.

Figure 2.24 Plots of spatial impulse response at four points: (a) (0,0,60), (b) (10,0,60), (c) (0,0,120), (d) (10,0,120). The unit is mm.

Figure 2.25 Plots of transient fields at four points: (a) (0,0,60), (b) (10,0,60), (c) (0,0,120), (d) (10,0,120). The unit is mm.

Figure 2.23 shows the spatial impulse response for transducer I. The impulse responses for 128 points along two lines, 60 mm and 120 mm away from the transducer surface, are presented. The weightings for all elements are assumed to be 1. These lines are in the plane y = 0 and parallel to the transducer surface (refer to Figure 2.21 for the definition of the coordinates). The amplitude of each spatial impulse response function is converted to grey level, and 128 such functions are put in parallel to form a 2-D image. The time duration of these impulse responses is truncated to 5 μs to show the detailed shape. The impulse response at the edge of panel (a) in Figure 2.23 lasts more than 5 μs. Figure 2.24 plots the impulse response specifically for 4 points at (0,0,60), (10,0,60), (0,0,120) and (10,0,120), where the unit for these coordinates is mm.

From these images and figures, one can easily observe that, at the same distance, when a point is farther away from the center, the duration of the corresponding impulse response increases and the main feature of the shape is condensed into a shorter period of time. At different distances, when a point is farther away from the transducer, the main features of the impulse response are also condensed, and the duration is reduced. These characteristics of the impulse response have some consequences for numerical simulations. When a point is far away from the transducer, the sampling rate for the impulse response should be increased to meet the sampling requirement and avoid aliasing.

C. Field calculation for array transducers

To calculate the field response for an array (linear/convex/concave) at point (x, y, z), the following expression for the pressure is applied,

$$p(x,y,z;t) = \rho_0 \sum_{i=0}^{N-1} a_i h_i(x,y,z;t) * \frac{du_i(t)}{dt}, \qquad (2.61)$$

where $u_i(t)$ is the velocity profile for element i, and all other parameters have the same definitions as those in (2.58). Here the velocity profile at each element does not have to be the same. For example, to generate a focused pulse, a different time delay is needed for each element and the velocity profile $u_i(t)$ has to be different for every element.

Figure 2.25 shows the transient fields for the same four points as in Figure 2.24. For the field simulation, the velocity profile for each element is a one-cycle sine wave and no time delay is added. The weighting level for each element is assumed to be 1. A pulsed plane wave results from this excitation. The transmit transfer function defined by (2.57) is also included. In all panels, one can observe a strong pulse starting at the moment corresponding to the beginning of the impulse responses seen in Figure 2.24. A weak pulse appears in panels (a), (c) and (d) with a starting moment related to the ending of the impulse responses. For panel (b), the weak pulse is not within the time window of 5 μs. As also seen in panel (b) of Figure 2.23, the impulse response lasts longer than the time window. While the impulse responses differ quite obviously in shape, the main features of the transient fields for these four points are quite similar.

D. Echo simulation for array transducers

Since the field calculation method based on the spatial impulse response evaluates the transient field response directly, it is also relatively easy to model the pulse-echo response of a transducer based on the theory of linear systems [198, 199]. When a point at (x, y, z) (refer to Figure 2.21) is illuminated by ultrasound energy, it scatters some energy with a coefficient of f(x, y, z). Then the scattered ultrasound received by an element of an array is given by [199],

$$p_i(x,y,z;t) = \frac{\rho_0}{c^2}\, f(x,y,z)\, h_i(x,y,z;t) * \sum_{j=0}^{N-1} a_j h_j(x,y,z;t) * \frac{d^2 u_j(t)}{dt^2}. \qquad (2.62)$$

If the electro-mechanical transfer function of the array in the time domain is B(t), then the electrically received echo signal from element i can be expressed as

$$e_i(t) = \frac{\rho_0}{c^2}\, f(x,y,z)\, B(t) * h_i(x,y,z;t) * \sum_{j=0}^{N-1} a_j h_j(x,y,z;t) * \frac{d^2 u_j(t)}{dt^2}. \qquad (2.63)$$

When multiple scattering points exist, ignoring the multiple scattering (weak scattering approximation or first Born approximation [27]), one gets the echo signal as the summation of the contributions from each point, each of which is given by (2.63).

Echo signals are simulated and experimentally collected for four separate points with coordinates of (0,0,30), (0,0,50), (0,0,70) and (0,0,90), respectively, where the unit is mm. Transducer II is used in the simulation, while the experimental data are acquired with the V2 transducer. The experiment is conducted in water, and a small glass ball about 0.5 mm in diameter serves as a point scatterer. The scatterer is placed at the center of the transducer and then vertically lowered to the appropriate depth. The depth is verified by the time of flight of the echoes. Figure 2.26 shows the echo signals with a duration of 5 μs. The echo signal detected by each element is converted to a linear grey scale, and the echoes from all 128 elements are put in parallel to form a 2-D image. The horizontal axis for these images is time, while the vertical axis is the index of each element, from 0 to 127. Figure 2.27 shows the line plots of the echoes at element 64.
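As a concrete illustration of (2.58), (2.61) and (2.63), the sketch below assembles the echoes from a single weak point scatterer for a plane-wave (no-delay) transmission with a linear array. It reuses rect_impulse_response() from the earlier sketch; the array geometry and sampling values are illustrative rather than the exact experimental settings, and the electro-mechanical transfer function B(t) of (2.63) is omitted for brevity.

```python
# Hedged sketch of the pulse-echo model (2.61)-(2.63) for a linear array and a
# single point scatterer; B(t) is omitted, element width is taken equal to the
# pitch, and all values below are illustrative.
import numpy as np

fs, c, rho0 = 100e6, 1540.0, 1000.0            # sampling rate, sound speed, density
fc, pitch, height, N = 3.5e6, 0.32e-3, 8.6e-3, 128
t = np.arange(int(40e-6 * fs)) / fs            # time axis long enough for a 50 mm depth
u = np.sin(2*np.pi*fc*np.arange(int(fs/fc))/fs)   # one-cycle sine drive u(t)

def element_h(xc, x, y, z):
    """h_i(x,y,z;t) for the element centered at (xc, 0, 0)."""
    return np.array([rect_impulse_response(x - xc, y, z, pitch, height, ti, c)
                     for ti in t])

def point_echoes(x, y, z, a=None):
    """One echo trace e_i(t) per element for a unit scatterer at (x, y, z), per (2.63)."""
    a = np.ones(N) if a is None else a
    centers = (np.arange(N) - (N - 1)/2) * pitch
    h_el = [element_h(xc, x, y, z) for xc in centers]
    h_tx = sum(ai * hi for ai, hi in zip(a, h_el))         # transmit sum, (2.58)
    d2u = np.gradient(np.gradient(u, 1/fs), 1/fs)          # d^2 u(t) / dt^2
    drive = np.convolve(h_tx, d2u) / fs                    # incident field term at the scatterer
    return [rho0 / c**2 * np.convolve(hi, drive) / fs for hi in h_el]

echoes = point_echoes(0.0, 0.0, 50e-3)         # 128 RF traces for a scatterer at 50 mm
```

The 1/fs factors approximate the continuous-time convolutions; a practical implementation would also convolve each trace with a band-limiting B(t) such as the Blackman window of (2.57).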

Figure 2.26 Simulated and experimental echo signals: (a)-(d) are simulated results for 4 points at the center of the transducer at distances of 30 mm, 50 mm, 70 mm and 90 mm from the transducer surface, respectively. (e)-(h) are experimental results in water for the same points as in (a)-(d). The V2 phased array from Acuson is used for the experiment. The vertical axis is the index of the elements of the transducer, and the horizontal axis is time.

Figure 2.27 Plots of echo signals at element 64 as in Figure 2.26: (a) z = 30 mm, (b) z = 50 mm, (c) z = 70 mm, (d) z = 90 mm.

From these two figures, one can see that the simulation agrees with the experiment quite well. The main features of the echoes from the simulation are very similar to those from the experiment. However, some differences exist. The echoes in the experiment seem to last longer than those in the simulation. There are several possible explanations. First, the transfer function used in the simulation is a well-defined Blackman window, and its two-way bandwidth is higher than that of the V2 transducer, which results in a shorter pulse. Second, an ideal point scatterer is assumed in the simulation, while a glass ball with a diameter close to the central wavelength of the transducer is used in the experiment. Third, there is a physical focus at 68 mm in elevation for the V2 transducer, while no focus in elevation is assumed in the simulation.

2.4 The relationship between different field calculation methods

Figure 2.28 Geometry of transducer and field point.

In this section, the relationship between the angular spectrum method, the spatial impulse response method and the Rayleigh-Sommerfeld method is discussed.

As shown in Figure 2.28, the source point on the transducer is denoted as $(x_0, y_0, 0)$, and the field point in space is denoted as $(x_1, y_1, z_1)$, while the distance between these two points is R.

According to (2.7), the angular spectrum of the field at distance $z_1$ is expressed as follows,

$$A(f_x, f_y; z_1) = A(f_x, f_y; 0)\, e^{j\sqrt{k^2 - (2\pi f_x)^2 - (2\pi f_y)^2}\, z_1}. \qquad (2.64)$$

And the field at any point $(x_1, y_1, z_1)$ is obtained by the inverse Fourier transform of (2.64),

$$P(x_1, y_1; z_1) = \iint A(f_x, f_y; 0)\, e^{j\sqrt{k^2 - (2\pi f_x)^2 - (2\pi f_y)^2}\, z_1}\, e^{j 2\pi (f_x x_1 + f_y y_1)}\, df_x\, df_y. \qquad (2.65)$$

According to the convolution theorem of the Fourier transform, the above equation can be written in a convolution form as

$$P(x_1, y_1; z_1) = \mathcal{F}^{-1}_{f_x, f_y}\big(A(f_x, f_y; 0)\big) ** \mathcal{F}^{-1}_{f_x, f_y}\big(e^{j\sqrt{k^2 - (2\pi f_x)^2 - (2\pi f_y)^2}\, z_1}\big), \qquad (2.66)$$

where ** means the 2-D spatial convolution, and $\mathcal{F}^{-1}_{f_x, f_y}(\cdot)$ means the 2-D inverse Fourier transform. The inverse Fourier transform of the angular spectrum at the transducer surface is

$$\mathcal{F}^{-1}_{f_x, f_y}\big(A(f_x, f_y; 0)\big) = P(x_1, y_1; 0), \qquad (2.67)$$

and the other inverse transform in (2.66) is given by [6, 17]

$$\mathcal{F}^{-1}_{f_x, f_y}\big(e^{j\sqrt{k^2 - (2\pi f_x)^2 - (2\pi f_y)^2}\, z_1}\big) = -\frac{1}{2\pi}\,\frac{\partial}{\partial z_1}\!\left(\frac{e^{jk\sqrt{x_1^2 + y_1^2 + z_1^2}}}{\sqrt{x_1^2 + y_1^2 + z_1^2}}\right). \qquad (2.68)$$

Substituting (2.67) and (2.68) into (2.66), one gets

$$P(x_1, y_1; z_1) = -\frac{1}{2\pi}\iint P(x_0, y_0; 0)\, \frac{\partial}{\partial z_1}\!\left(\frac{e^{jkR}}{R}\right) dx_0\, dy_0, \qquad (2.69)$$

where $R = \sqrt{(x_1 - x_0)^2 + (y_1 - y_0)^2 + z_1^2}$ is the distance between a source point and the field point. Equation (2.69) is the first Rayleigh-Sommerfeld formula [6, 10] for CW excitations.

According to (2.47) of the spatial impulse response method, the velocity potential at the field point $(x_1, y_1, z_1)$ is given by

$$\Phi(x_1, y_1, z_1; t) = u(t) * h(x_1, y_1, z_1; t). \qquad (2.70)$$

Rewriting the above equation in its full form, one gets

$$\Phi(x_1, y_1, z_1; t) = \int u(\tau) \int_S \frac{\delta(t - R/c - \tau)}{2\pi R}\, dS\, d\tau = \int_S \frac{u(t - R/c)}{2\pi R}\, dS. \qquad (2.71)$$

The pressure can be expressed as follows,

$$p(x_1, y_1, z_1; t) = \rho_0 \frac{\partial \Phi(x_1, y_1, z_1; t)}{\partial t} = \rho_0 \iint \frac{\partial u(t - R/c)/\partial t}{2\pi R}\, dx_0\, dy_0. \qquad (2.72)$$

Taking the temporal Fourier transform of the above equation, one gets the pressure at a single frequency $\omega$,

$$p(x_1, y_1, z_1; \omega) = j\omega\rho_0\, u(\omega) \iint \frac{e^{jkR}}{2\pi R}\, dx_0\, dy_0. \qquad (2.73)$$

From the relationship between pressure and velocity [203], one has

$$\left.\frac{\partial p(x_1, y_1, z_1; t)}{\partial z_1}\right|_{z_1 = 0} = -\rho_0 \frac{\partial u(t)}{\partial t}. \qquad (2.74)$$

Taking the temporal Fourier transform of the above equation yields

$$\left.\frac{\partial p(x_1, y_1, z_1; \omega)}{\partial z_1}\right|_{z_1 = 0} = -j\omega\rho_0\, u(\omega). \qquad (2.75)$$

Changing variables, and substituting (2.75) into (2.73), one finally obtains

$$p(x_1, y_1, z_1; \omega) = -\frac{1}{2\pi}\iint \left.\frac{\partial p(x_0, y_0, z_0; \omega)}{\partial z_0}\right|_{z_0 = 0} \frac{e^{jkR}}{R}\, dx_0\, dy_0. \qquad (2.76)$$

The above equation is the second Rayleigh-Sommerfeld formula.

It is obvious now that the angular spectrum method is related to the first Rayleigh-Sommerfeld method under the Dirichlet condition, where the pressure at the transducer surface is known. Under this condition, one can derive the Rayleigh-Sommerfeld method from the angular spectrum method, or vice versa. The spatial impulse response method is related to the second Rayleigh-Sommerfeld method under the Neumann condition, where the normal velocity at the transducer surface is known. Under this condition, either one of the methods can be derived from the other. In the derivation in this section, the velocity is assumed to be the same for all source points. If the velocity also changes with position on the transducer surface, one can decompose this complex profile into a summation of simple profiles, and the conclusion still holds; for a detailed treatment please refer to [69]. It is worth noting that, under the Neumann condition, the angular spectrum method can also be used by modifying the propagation factor [6, 34]. In this case the propagation factor directly links the spectrum of the pressure at any distance to the spectrum of the velocity at the transducer surface. In this sense, one can also derive both the Rayleigh-Sommerfeld method and the spatial impulse response method from the angular spectrum method.
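The equivalence discussed above is easy to check numerically. The sketch below (illustrative parameters only, not taken from this work) propagates the CW pressure of a small square piston with the angular spectrum relation (2.64)-(2.65) and compares the on-axis value with a direct quadrature of the first Rayleigh-Sommerfeld integral (2.69).

```python
# Numerical cross-check (illustrative): on-axis CW pressure of a 5 mm square
# piston at z = 30 mm via the angular spectrum of (2.64)-(2.65) versus direct
# quadrature of the Rayleigh-Sommerfeld integral (2.69).
import numpy as np

c, f, z = 1540.0, 2.5e6, 30e-3
k, lam = 2*np.pi*f/c, c/f
dx, n = lam/4, 1024                            # grid spacing and zero-padded grid size
x = (np.arange(n) - n//2) * dx
X, Y = np.meshgrid(x, x, indexing="ij")
p0 = ((np.abs(X) <= 2.5e-3) & (np.abs(Y) <= 2.5e-3)).astype(complex)   # aperture pressure

# Angular spectrum propagation, (2.64)-(2.65)
fx = np.fft.fftfreq(n, dx)
FX, FY = np.meshgrid(fx, fx, indexing="ij")
kz = np.sqrt((k**2 - (2*np.pi*FX)**2 - (2*np.pi*FY)**2).astype(complex))
p_as = np.fft.ifft2(np.fft.fft2(p0) * np.exp(1j * kz * z))
on_axis_as = p_as[n//2, n//2]

# First Rayleigh-Sommerfeld integral, (2.69), evaluated at the on-axis point
R = np.sqrt(X**2 + Y**2 + z**2)
dGdz = (1j*k - 1/R) * np.exp(1j*k*R) / R * (z / R)     # d/dz (e^{jkR}/R)
on_axis_rs = -np.sum(p0 * dGdz) * dx * dx / (2*np.pi)

print(abs(on_axis_as), abs(on_axis_rs))        # the two magnitudes should nearly agree
```

The zero padding of the FFT grid is what keeps the periodic replicas of the angular spectrum calculation from contaminating the region of interest, which is the overhead referred to in the discussion below.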

It is not surprising that all three methods are closely related to each other. All in all, they solve the same wave propagation problem from different perspectives with different mathematical treatments. However, in practice, the numerical implementations of the different methods have different advantages and disadvantages, and one should choose an optimal method according to the specific problem at hand.

The Rayleigh-Sommerfeld formula is basically applicable to every situation, and in some cases it can provide a closed-form analytical expression for the field. However, the numerical implementation of the method involves a double integration, which is highly demanding in terms of computation. Also, when the propagation distance is small, the phase of the kernel of the integration changes very fast, which makes the numerical integration unstable and hard to converge. So this method is not suitable for the calculation of the field close to the transducer surface. For other parts of the field, the computation is also very intense, but the calculated field is very accurate. Thus, this method is usually used as a standard for comparison and is not suitable for situations where the fields at a large number of points are needed. In addition, this method is more suitable for CW field calculation than for transient field calculation, and it is more suitable for single-element transducers than for array transducers. To speed up the calculation, some approximations can be applied to the Rayleigh-Sommerfeld formula, such as the Fresnel approximation [134] and the Fraunhofer approximation [198]. With the Fresnel approximation, the speed can be improved significantly, with the penalty that the accuracy is also reduced notably, especially for steered beams, according to the study done by Crombie et al. [198]. The Fraunhofer approximation can further improve the computational efficiency, but the

accuracy can be unacceptably low, because the far field is assumed and the transducer has to be divided into small parts to meet this assumption. It is obvious that these two methods are not suitable for the calculation of the fields close to the transducer surface either.

The angular spectrum method, when coupled with the FFT, has high computational efficiency. To take advantage of the FFT, the transducer surface has to be sampled onto rectangular grids, which is appropriate for most configurations. However, for annular arrays, the spatial sampling must be very high to accurately represent the circular shape of the transducer elements. The angular spectrum method calculates the field on rectangular grids at a cross section. This cross section can be very close to the transducer surface; in fact, it can be right at the transducer surface. But it is difficult and awkward to use this method to calculate the fields at arbitrary spatial points, since it only calculates the field on rectangular grids. To overcome the limitations associated with this method, the angular spectrum method should be coupled with the series expansions developed in the previous sections. In this way, the array structure is naturally taken care of and no resampling on the transducer surface is needed. For annular arrays, high accuracy is achieved. The new methods are suitable for the calculation of the fields at arbitrary spatial points, which can be very close to or right at the transducer surface. It is worth noting that, for angular spectrum methods, zero padding is needed to ensure the accuracy of the calculation, which partially reduces the efficiency of the method. In addition, the angular spectrum method is more suitable for CW field calculation than for transient field calculation.

The spatial impulse response method calculates the transient field directly, so it is more suitable for transient field calculation than for CW field calculation. For annular arrays

and rectangular arrays, there are closed-form expressions [71] for the impulse response. These expressions are based on geometrical considerations, which are rather complicated in terms of numerical implementation, but the expressions are exact and thus very accurate. To simplify these expressions, we developed a new method for rectangular arrays based on algebraic operations in the previous sections. The new method still gives the exact expression for the impulse response. These exact spatial impulse response methods do not require resampling of the transducer surface, and they are suitable for the calculation of transient fields at arbitrary points with high accuracy. To calculate the transient fields for a large number of points, approximations [55, 200] can be employed to speed up the calculation while sacrificing some accuracy. Since the spatial impulse response depends on the location of the spatial point, and the impulse may last a very short period of time, the time-domain sampling frequency of the impulse must be high enough to avoid aliasing. If a lower sampling frequency is used, the accuracy can be reduced significantly. In addition, the spatial impulse response method can be naturally used to calculate the echo signals received by the transducer elements. Thus it is suitable for ultrasound system modeling in a rather straightforward fashion, as discussed in the previous sections.

2.5 Summary

In this chapter, the angular spectrum principle is applied to derive new solutions for wave propagation problems with annular arrays and rectangular arrays. With the new solutions, the field at any point is expressed as a summation of Bessel beams for annular arrays and of plane waves for rectangular arrays. No integration or convolution is directly involved. The solutions provide a new, simple, and

straightforward mapping between the drive signal at each transducer element and the field at any point. The numerical implementation of the solutions is easy and the computational load is reasonable. A new solution of the spatial impulse response for rectangular transducers is also given, based on simple algebraic operations instead of complex geometrical considerations. The echo simulation for linear array transducers based on the impulse response method and linear system theory is also outlined, and the simulation results are verified by experimental results. Finally, the relationships between the Rayleigh-Sommerfeld diffraction formula, the angular spectrum method and the spatial impulse response method are briefly reviewed. The studies of wave propagation [89-93] in this chapter will help to gain insight into the interactions between ultrasound waves and scattering objects.

CHAPTER III: LIMITED DIFFRACTION BEAMS AND SIDELOBE REDUCTION

In this chapter, the propagation properties of limited-diffraction beams are studied with an emphasis on their applications in ultrasonic and optical imaging. Coded excitation is employed in an attempt to improve the frame rate of an ultrasound imaging system. Limited-diffraction beams are also optically generated to improve the depth of field of an optical coherence tomography system.

3.1 Introduction

Side lobes of limited diffraction beams

Limited diffraction beams are the approximate version of non-diffracting beams [95, 96], which theoretically can propagate to an infinite distance without spreading. Although these beams have a pencil-like, localized high-pressure center and a large depth of field, their sidelobes are relatively higher than those of conventional focused beams. The quality of images constructed using these beams is lower than that of conventional ones. To overcome this problem, multiple-beam subtraction [98] can be used to significantly reduce the sidelobes. On the other hand, this method needs more than one transmission to get one A-line. Thus, in vivo, it is sensitive to the motion of the organs, which could introduce motion artifacts into the constructed images.

Coded excitation

Coded excitation has the potential to transmit several beams at the same time when orthogonal codes are used. So it is a promising way to solve the problem inherent in the multiple-beam-subtraction method. Coded excitation has been used extensively in radar data transmission and processing. It was introduced to biomedical ultrasound imaging about twenty years ago [204]. In this technique, the excitation pulse is coded by an FM chirp, a pseudo-noise (PN) code, a Golay code or other codes. In reception, the data are decoded by matched filtering or other adaptive filtering. Since a much longer pulse is used in coded excitation, the total energy transmitted is also much higher than that from a single short pulse, so it can penetrate deeper into the tissue than short pulses. When the detected signal is compressed by filtering, the resulting signal-to-noise ratio also increases dramatically. Of course, longer excitations reduce the resolution in the axial direction and produce a dead zone right in front of the transducer surface. However, by choosing appropriate coding and filtering methods, the range sidelobes introduced by the coded excitation can be reduced to an acceptable level, and the axial resolution is not obviously affected.

In later sections, coded excitation and multiple-beam subtraction will be combined in an attempt to reduce sidelobes and avoid motion artifacts at the same time. A zeroth-order Bessel beam (or X wave) and two second-order Bessel beams (or X waves) are used in the subtraction. The excitation is coded by an FM chirp and binary PN codes. Transmitting three beams at the same time and transmitting each one separately with different coding methods are both tried. Computer simulations are performed to construct images from a 3-D scattering phantom with coded excitation, and quantitative evaluations are performed.
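To make the pulse-compression idea above concrete, the short sketch below generates a linear FM chirp, buries its echo in noise, and recovers the range peak with a matched filter. The sampling rate, chirp duration and bandwidth are illustrative, not the values used later in the simulations.

```python
# Minimal pulse-compression sketch with a linear FM chirp and a matched filter.
# All parameters here are illustrative.
import numpy as np

fs, fc, bw, T = 50e6, 2.5e6, 2.0e6, 20e-6      # sampling, center freq, bandwidth, duration
t = np.arange(int(T * fs)) / fs
chirp = np.sin(2*np.pi*((fc - bw/2)*t + 0.5*(bw/T)*t**2))   # sweep fc-bw/2 -> fc+bw/2
chirp *= np.hanning(t.size)                    # taper to control range sidelobes

delay = int(40e-6 * fs)                        # echo from a reflector at 40 us
echo = np.zeros(int(100e-6 * fs))
echo[delay:delay + chirp.size] += chirp
echo += 0.5 * np.random.randn(echo.size)       # additive noise

compressed = np.correlate(echo, chirp, mode="full")   # matched filter = correlation with the code
lag = np.argmax(np.abs(compressed)) - (chirp.size - 1)
print(lag / fs)                                # ~40e-6 s: the reflector delay is recovered
```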

Optical coherence tomography

Optical coherence tomography (OCT) [ ] is a relatively new noninvasive optical imaging modality for biomedical diagnosis. In principle, OCT works in a way very similar to B-scan ultrasound: it detects the back-scattered light from the object, while B-scan ultrasound detects the back-scattered sound wave. Since the velocity of light is extremely high, and the coherence time of the light source is extremely short, detecting the light pulse directly is very difficult. Instead, a Michelson interferometer is used to detect the interference between the light from the probe arm and that from the reference arm. The interference pattern defines the axial resolution of OCT, which is related to the bandwidth of the light source. With ultra-fast lasers, the axial resolution can reach 2-4 μm, comparable to the resolution of a microscope, which is about 1 μm [212]. However, improving the lateral resolution is another issue; it is determined by the central wavelength of the light source and the numerical aperture (NA) of the objective. Normally the axial scanning distance is about 2-3 mm. To keep such a long focal zone, an objective with a low NA must be used. With a focal zone of 2 mm, the theoretical lateral resolution at the focal plane is about 10 times the central wavelength [116], while the effective spot size in tissue is bigger than the theoretical diffraction limit. The average resolution over the whole imaging distance would be even worse.

To improve the lateral resolution and enlarge the depth of field, a new method is proposed that uses limited diffraction beams to illuminate the object instead of focused Gaussian beams. With multiple-order-beam subtraction, a high quality image can be obtained with a very large depth of field. In later sections, this principle will be applied to OCT by replacing the objective of the probe arm with a limited diffraction beam transmitter and

receiver while keeping the basic structure of the Michelson interferometer. The limited diffraction beam emitter and receiver are implemented through holograms. To generate the zeroth-order Bessel beams, a two-level phase-only hologram of large size is shrunk to the desired size by an imaging lens. At the same time, the higher-order diffraction pattern at the imaging lens is blocked by an iris diaphragm, and lenses are added at the image plane and the object plane, respectively, to correct the quadratic phases. The second-order Bessel beams are produced by adding a second hologram right behind the first one to modulate the amplitude and phase in the azimuthal direction. The second hologram can be rotated to change the azimuthal distribution of the second-order beams. To get the subtracted signal, at each scanning position three interference signals, illuminated by one zeroth-order and two second-order beams, are needed. Thus the total time to acquire one image is three times that of conventional OCT. However, with our current design, the average lateral resolution is 4.4 wavelengths, and the depth of field is 4.5 mm, which can be shortened to further improve the lateral resolution.

3.2 Side lobe reduction with code

In this section the sidelobe reduction principle is coupled with coded excitation to increase the penetration and frame rate. The main results of this section were presented at the IEEE ultrasonics symposium in 2001 [213].

The principle of side lobe reduction

The rigorous theory and derivation have been discussed by Lu [98]. Here the basic idea behind sidelobe reduction using the subtraction of multiple limited diffraction beams is briefly reviewed.

Using Bessel beams as an example, the field produced by a broad-band Bessel beam can be expressed as

$$\Phi_{J_m} = 2\pi J_m(\alpha r)\cos m(\phi - \phi_0)\, \mathcal{F}^{-1}\{T(\omega)\, e^{j\beta z}\}, \qquad (3.1)$$

where $m = 0, 1, 2, \ldots$ is the order of the Bessel beam, $T(\omega)$ is the transmit transfer function of the transducer, and $\mathcal{F}^{-1}(\cdot)$ means the inverse Fourier transform. For a pulse-echo imaging system, when the transducer is weighted in reception in the same way as in transmission, the detected signal from a point scatterer located at $\vec{r} = (r, \phi, z)$ with reflection coefficient $A(r, \phi, z)$ is given by

$$e_{J_m}(\vec{r}, t) = 4\pi^2 A(\vec{r})\, J_m^2(\alpha r)\cos^2 m(\phi - \phi_0)\, \mathcal{F}^{-1}\{T^2(\omega)\, e^{j2\beta z}\}. \qquad (3.2)$$

The total response is the integration over all scattering points,

$$e_{J_m}(t) = 4\pi^2 \int_0^{\infty} r\, dr \int_0^{2\pi} d\phi \int dz\; A(\vec{r})\, J_m^2(\alpha r)\cos^2 m(\phi - \phi_0)\, \mathcal{F}^{-1}\{T^2(\omega)\, e^{j2\beta z}\}. \qquad (3.3)$$

As shown in Figure 3.1, when $\alpha r = 0$, $J_0^2(0) = 1$ and $J_2^2(0) = 0$; on the other hand, when $\alpha r \gg 1$, $J_0^2(\alpha r) \approx J_2^2(\alpha r) \approx (2/\pi\alpha r)\cos^2(\alpha r - \pi/4)$. Taking full advantage of these properties of the Bessel functions, the sidelobes can be greatly reduced by subtraction between the RF A-lines from one zeroth-order and two second-order transmissions. The resulting signal is given by

$$e_{J_0}(t) - \big[e_{J_2}(t)\big|_{\phi_0 = 0} + e_{J_2}(t)\big|_{\phi_0 = \pi/4}\big]
= 4\pi^2 \int_0^{\infty} r\, dr \int_0^{2\pi} d\phi \int dz\; A(\vec{r})\big(J_0^2(\alpha r) - J_2^2(\alpha r)\big)\, \mathcal{F}^{-1}\{T^2(\omega)\, e^{j2\beta z}\}. \qquad (3.4)$$

In Figure 3.1, assuming $\alpha$ = … m$^{-1}$, for a transducer with a diameter of D = 50 mm, the sidelobe at the edge of the transducer is reduced by about 20 dB.
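The lateral behavior exploited in (3.4) can be visualized with a few lines of Python using SciPy's Bessel functions; the value of α used below is illustrative only, not the transducer's actual scaling parameter.

```python
# Sketch of the lateral weighting behind (3.4): J0^2(alpha*r) alone versus the
# subtraction J0^2 - J2^2.  The scaling parameter alpha is illustrative.
import numpy as np
from scipy.special import jv

alpha = 1.2e3                                   # illustrative scaling parameter, 1/m
r = np.linspace(1e-6, 25e-3, 4000)              # lateral distance out to 25 mm
j0sq = jv(0, alpha*r)**2
diff = jv(0, alpha*r)**2 - jv(2, alpha*r)**2

to_db = lambda a: 20*np.log10(np.abs(a)/np.abs(a).max() + 1e-12)
print(to_db(j0sq)[-1], to_db(diff)[-1])         # sidelobe level at the aperture edge, before vs after
```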

Figure 3.1 Squares of the zeroth-order and second-order Bessel functions of the first kind and the absolute value of their subtraction.

Figure 3.2 Diagram of the excitation signals: one short pulse, one FM chirp, one m-sequence, and three m-sequences.

Coded excitation

Coding is one of the most widely used techniques in radar, sonar and telecommunication. About two decades ago, researchers tried to extend it to ultrasound imaging. Generally, coding is applied in excitation: a much longer coded excitation signal is used instead of a single short pulse, whose length is normally determined by the impulse response of the transducer. In biomedical applications, the damage threshold of human tissues limits the peak intensity of the transmitted sound field, which in turn limits the penetration depth and signal-to-noise ratio. With coded excitation, the energy of the sound field is distributed over a longer duration of time, and the total energy can be much higher than that of a single short pulse. In reception, pulse compression is used to recover the coded signal into the corresponding short pulse. Thus the penetration depth and signal-to-noise ratio can be improved significantly.

Several different kinds of codes have been investigated. Among them, the m-sequence, the FM chirp, and the Golay pair are the most used codes. The m-sequence is one type of pseudo-noise code, which possesses a noise-like spectrum. The autocorrelation of an m-sequence is determined by the length of the code. For a 127-chip m-sequence, the sidelobe of the autocorrelation is about 40 dB down from the peak; however, this holds only in the sense of circular correlation, which is difficult to realize in practice. When implemented by linear correlation, about 20 dB is achieved, which means the range sidelobes of the final image will be high. The cross-correlation between two m-sequences depends on which sequences are selected, and is not solely defined by the length. A program is used to search all the possible combinations to find three m-sequences of 127 bits with low mutual cross-correlations; the sidelobe is about 18 dB. These three m-sequences are used to code multiple limited

diffraction beams, and the three beams are transmitted at the same time. To make full use of the bandwidth of the transducer, two neighboring bits last one cycle of the central frequency. However, the sidelobe of the m-sequence is too high for biomedical diagnostic applications.

One alternative to the m-sequence is the FM chirp, which can easily be shaped to use the same spectral band as that of the transducer. With matched filtering, the range sidelobe of an FM chirp can be reduced to about -50 dB. When the bandwidth is wide enough, three orthogonal FM chirps can be generated by dividing the bandwidth into three equal parts. The cross-correlation between these chirps can be very low if there is no overlap in the spectral domain. But typically, transducers for clinical use do not have such a wide bandwidth. At the same time, since attenuation depends on frequency, it is hard to compensate for attenuation when three beams in different bands are transmitted at the same time. So only the FM chirp occupying the whole bandwidth is tried. With limited diffraction beams, the time duration of the chirp is purely defined by the depth of the acceptable dead zone, because dynamic focusing as in conventional methods is not an issue here.

When the two autocorrelations from a Golay pair are added together, they produce a delta function. Thus the sidelobe for a Golay pair is extremely low. The main disadvantage inherent in the Golay pair is that it needs two transmissions to finish the pulse compression and thus is sensitive to motion, which is exactly the issue addressed with multiple-beam subtraction. So the Golay pair is not considered in this study. Figure 3.2 shows the coded excitation signals considered in this study.

Simulation and results

For the computer simulation, a modified zeroth-order Bessel transducer [126] with a diameter of 50 mm is assumed. This transducer is made of 10 annular rings, whose edges

are located nominally at the first 10 zeros of $J_0(\alpha r)$, where $\alpha$ = … m$^{-1}$. To make it capable of transmitting the second-order Bessel beam, each ring is divided azimuthally into 40 equal elements. The central frequency of the transducer is 2.5 MHz, and the -6 dB bandwidth is about 81 percent of the central frequency.

Figure 3.3 Diagram of the scattering phantom. The diameters of cylinders 1, 2, 3 and 4 are 10 mm, 10 mm, 6 mm, and 6 mm, respectively.

The schematic diagram of the 3-D scattering phantom is shown in Figure 3.3. The phantom is 50 mm × 50 mm × 100 mm, with four cylinders inside it. The diameters of these cylinders are 10 mm, 10 mm, 6 mm, and 6 mm, respectively. The reflection coefficients of the cylinders are 15 dB (1), -15 dB (2), 15 dB (3) and -15 dB (4) relative to that of the bulk scatterers. In each cubic wavelength, one random scatterer is assumed. In total, over one million scatterers are considered in the simulation.

Figure 3.4 Images constructed for the scattering phantom in Figure 3.3.

Figure 3.4 presents the images constructed from the phantom. In (a) and (c), a short pulse is used as the excitation signal, while in (b) and (d), an FM chirp is used. In (a) and (b), only the zeroth-order Bessel beam is used, while in (c) and (d), multiple beams, including one zeroth-order and two second-order beams, are used, and RF A-line subtraction is performed.

Table 3.1 The contrasts of the cylinders (Cyl 1-4 and background) for the ideal case and for panels a-d of Figure 3.4.

Table 3.1 gives the contrast of the four cylinders under different excitations, where the contrast is defined by

$$\mathrm{contr} = 20\log_{10}\frac{m_i}{m_o}. \qquad (3.5)$$

Here, $m_i$ is the mean of the constructed image inside the cylinder, and $m_o$ is the mean of the constructed image of the bulk outside the cylinders. To avoid the influence of the edges on the contrast, when calculating $m_i$, the radius of each cylinder is taken 1 mm smaller; on the other hand, it is taken 1 mm larger when calculating $m_o$.
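A hedged sketch of how the contrast measure (3.5) can be evaluated on an envelope-detected image is given below; the image, grid and cylinder parameters are illustrative inputs, and the 1 mm margins follow the description above.

```python
# Sketch of the contrast measure (3.5) for one cylinder cross-section in an
# envelope-detected image `img` sampled on grids xg (lateral) and zg (axial), in meters.
import numpy as np

def cylinder_contrast(img, xg, zg, xc, zc, radius, margin=1e-3):
    """contr = 20*log10(m_i / m_o); margin shrinks/grows the cylinder by 1 mm."""
    X, Z = np.meshgrid(xg, zg, indexing="ij")
    dist = np.hypot(X - xc, Z - zc)
    m_i = img[dist <= radius - margin].mean()   # mean inside the shrunk cylinder
    m_o = img[dist >= radius + margin].mean()   # mean of the surrounding bulk
    return 20*np.log10(m_i / m_o)
```

For a phantom with several cylinders, the background mask would additionally exclude the regions occupied by the other cylinders.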

Figure 3.5 Lateral plots at the center of the images in Figure 3.4: (a) short pulse, with and without sidelobe reduction, and the ideal profile; (b) FM chirp, with and without sidelobe reduction, and the ideal profile.

Discussion and conclusion

As described in the previous section, multiple-beam subtraction can theoretically reduce the sidelobes dramatically and thus improve the image quality significantly. In the simulations, the quality improvement is also very obvious. In Figure 3.4, it is easy to see that (c) is much better than (a), and (d) better than (b). In (c) and (d), the edges of the cylinders are quite well defined, while in (a) and (b) they extend over a wide range. Statistically, as indicated in Table 3.1, the contrasts of the cylinders in (c) and (d) are more or less near the ideal values, and there is about a 2-3 dB improvement over those in (a) and (b). The same conclusion is also evident in the line plots of Figure 3.4 shown in Figure 3.5.

Comparing the results from the single short pulse ((a) and (c) in Figure 3.4) and the FM chirp ((b) and (d) in Figure 3.4), there is no essential difference between the corresponding images, except that, with coded excitation, the size of the speckles is a little bigger, which is introduced by the pulse compression. This means that coded excitation does not degrade the quality of the constructed images, while gaining deeper penetration.

To avoid motion artifacts in the multiple-beam subtraction, it is necessary to transmit the three beams at the same time. Three m-sequences are used to encode the three beams, which are then added together and transmitted at the same time. Unfortunately, the range sidelobes are so high that no meaningful images can be constructed for the phantom. So it is very important to find proper codes with low cross-correlation and a high autocorrelation peak. Only with such codes is it possible to send different beams at the same time.

In summary, coded excitations are studied for sidelobe reduction of limited diffraction imaging systems. If codes of low cross-correlation and high autocorrelation

peak can be found, high-contrast and high-SNR images can be constructed without compromising the method due to object motion.

3.3 Low-sidelobe limited-diffraction optical coherence tomography

In this section, the sidelobe reduction principle and limited-diffraction beams are applied to design an optical coherence tomography system with a large depth of field. The main results of this section were presented at the SPIE conference in 2002 [214].

Theory

The basic structure of OCT is a Michelson interferometer. As shown in Figure 3.6, according to Pan et al. [215], ignoring the depolarization effects of the light, the detected signal at one fixed scanning position for one scatterer located at $\vec{r} = (r, \phi, z)$ can be expressed in terms of the axial distance as

$$I_d(z' - z) = E_s E_s^* + E_r E_r^* + 2 E_s E_r V_{tc}(z' - z)\cos\big(k(z' - z)\big), \qquad (3.6)$$

where $E_s$ is the light field scattered from the tissue and $E_r$ is the light field from the reference arm. The incoherent light is ignored in the above expression. The actual interference is taken into account by the last term in (3.6). $V_{tc}(z)$ is the autocorrelation function of the light expressed in the space domain, given by

$$V_{tc}(z) = \exp\!\left(-\frac{z^2}{L_c^2}\right), \qquad (3.7)$$

where $L_c$ is the coherence length of the light source, which is related to the group velocity index of the object and the bandwidth and central wavelength of the light source, and is given by

Figure 3.6 The block diagram of the OCT system. SLD: superluminescent diode, BS: beam splitter, LDB T/R: limited diffraction beam transmitter/receiver, PD: photodiode, HD: heterodyne detection, A/D: analog/digital converter, PC: personal computer.

$$L_c = \frac{2\ln 2}{\pi}\,\frac{\lambda^2}{n\,\Delta\lambda}. \qquad (3.8)$$

$L_c$ is directly related to the axial resolution of OCT, which is $2\sqrt{\ln 2}\, L_c$ under this definition. After filtering out the direct-current component and keeping the reference light field constant, one has

$$I(z' - z) \propto E_s \exp\!\left(-\frac{(z' - z)^2}{L_c^2}\right)\cos\big(k(z' - z)\big). \qquad (3.9)$$

In conventional OCT, a focused Gaussian beam is used to illuminate the object. Its lateral sidelobes at the focal plane are very low, but the depth of the focal zone is also very short. In the new design, a limited diffraction beam, which has a very large depth of field but higher sidelobes, is used to illuminate the object. To reduce the sidelobes, the sidelobe reduction method reviewed in the previous sections is used. In the case of OCT, the last term in (3.2) is related to the interference term in (3.9); rewriting (3.2) in the space domain and substituting it into (3.9), we have

$$I(z' - z) = 4\pi^2 A(\vec{r})\, J_m^2(\alpha r)\cos^2 m(\phi - \phi_0)\exp\!\left(-\frac{(z' - z)^2}{L_c^2}\right)\cos\big(k(z' - z)\big). \qquad (3.10)$$

In a linear system, the total detected signal is the sum of the contributions from each scatterer under illumination. Integrating the above equation, one gets one A-line as

$$I(z') = 4\pi^2 \int_0^{\infty} r\, dr \int_0^{2\pi} d\phi \int dz\; A(\vec{r})\, J_m^2(\alpha r)\cos^2 m(\phi - \phi_0)\exp\!\left(-\frac{(z' - z)^2}{L_c^2}\right)\cos\big(k(z' - z)\big). \qquad (3.11)$$

As shown in Figure 3.7, when $\alpha r = 0$, $J_0^2(0) = 1$ and $J_2^2(0) = 0$; on the other hand, when $\alpha r \gg 1$, $J_0^2(\alpha r) \approx J_2^2(\alpha r) \approx (2/\pi\alpha r)\cos^2(\alpha r - \pi/4)$. Taking full advantage of these properties of the Bessel functions, the sidelobes can be greatly reduced by subtraction

between the A-lines from one zeroth-order and two second-order transmissions. The resulting signal is given by [214]

$$I_{J_0}(z') - \big[I_{J_2}(z')\big|_{\phi_0 = 0} + I_{J_2}(z')\big|_{\phi_0 = \pi/4}\big]
= 4\pi^2 \int_0^{\infty} r\, dr \int_0^{2\pi} d\phi \int dz\; A(\vec{r})\big(J_0^2(\alpha r) - J_2^2(\alpha r)\big)\exp\!\left(-\frac{(z' - z)^2}{L_c^2}\right)\cos\big(k(z' - z)\big). \qquad (3.12)$$

Figure 3.7 Sidelobe reduction for ideal Bessel beams.

As shown in Figure 3.7, assuming $\alpha$ = … m$^{-1}$, for an aperture with a diameter of D = 1 mm, the sidelobe at the edge of the aperture is reduced by about 40 dB.
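A compact sketch of the single-scattering A-line model of (3.8)-(3.12) is given below: each scatterer contributes a coherence-gated fringe weighted by the lateral Bessel pattern, and the two second-order A-lines are subtracted from the zeroth-order one. All parameters, including α and the group index, are illustrative.

```python
# Sketch of the OCT A-line model (3.11) and the subtraction (3.12) for a random
# cloud of point scatterers.  Parameters are illustrative.
import numpy as np
from scipy.special import jv

lam, dlam, n_g = 940e-9, 70e-9, 1.4             # center wavelength, bandwidth, group index
Lc = 2*np.log(2)/np.pi * lam**2 / (n_g*dlam)    # coherence length, (3.8)
k, alpha = 2*np.pi*n_g/lam, 5.0e5               # wavenumber and Bessel scaling parameter

rng = np.random.default_rng(0)
ns = 300                                         # number of scatterers
r, phi = rng.uniform(0, 20e-6, ns), rng.uniform(0, 2*np.pi, ns)
z, A = rng.uniform(0, 200e-6, ns), rng.standard_normal(ns)

zp = np.linspace(0, 200e-6, 4000)               # axial scan positions z'
def a_line(m, phi0):
    """I_{Jm}(z') of (3.11) for beam order m and azimuthal offset phi0."""
    lateral = jv(m, alpha*r)**2 * np.cos(m*(phi - phi0))**2
    fringe = np.exp(-((zp[:, None] - z)**2) / Lc**2) * np.cos(k*(zp[:, None] - z))
    return fringe @ (A * lateral)

i_subtracted = a_line(0, 0.0) - (a_line(2, 0.0) + a_line(2, np.pi/4))   # (3.12)
```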

Figure 3.8 The diagram of the limited diffraction beam transmitter and receiver unit (lens diameters φ and focal lengths f, in mm: A: φ 4.95, f 4.51; B: φ 20.0, f 45.6; C: φ 2.0, f 4.0; D: φ 1.0, f 4.2).

Figure 3.9 Diagram of the first mask.

Figure 3.10 Field plots at lens D: (a) the full aperture, (b) a laterally magnified view.

Optical design

The basic structure of this OCT system is the same as the conventional one. The major difference is that we use a limited diffraction beam transmitter and receiver unit, as shown in Figure 3.8, to replace the objective. The unit uses holograms to produce and receive the limited diffraction beams. The designed depth of field for the limited diffraction Bessel beams is 4.5 mm, and the FWHM width of the main lobe of the zeroth-order Bessel beam is about 4.4 central wavelengths, which defines the lateral resolution of the system. With a central wavelength of 940 nm and a 70 nm bandwidth, the scaling parameter for the Bessel beams is … m$^{-1}$, while the aperture is 1 mm in diameter.

The phase of the on-axis phase-only hologram for generating the zeroth-order Bessel beams is given by [216]

$$\Psi(r) = \alpha_0 r - \pi/4, \qquad (3.13)$$

where $\alpha_0$ = … m$^{-1}$ is the scaling parameter for the desired Bessel beams. When the phase hologram is illuminated by collimated light, within a certain range one produces the light field

$$I(r, z) \propto z\, J_0^2(\alpha_0 r). \qquad (3.14)$$

We implement the hologram in two levels, and scale up the physical dimension 20 times to make it easy to fabricate and handle, which gives

$$\Psi =
\begin{cases}
0, & \cos(\alpha r - \pi/4) \ge 0,\\
\pi, & \cos(\alpha r - \pi/4) < 0,
\end{cases} \qquad (3.15)$$

where $\alpha = \alpha_0/20$ is the scaling parameter for the physical hologram mask, which is shown in Figure 3.9. In total there are 117 rings in the mask, and the width of each ring is

85.1 μm, except the first internal ring, which has a diameter of 127.7 μm. The phase for each ring is labeled in Figure 3.9. The diameter of the mask is 20 mm.

As shown in Figure 3.8, the light from the light source SLD is collimated by lens A and lens B. Here lens B serves two purposes: first, it works with lens A to collimate the light; second, it provides converging illumination for the masks. Lens C is used to image the magnified mask to a small one, which has the desired scaling parameter and aperture for generating the Bessel beams. Lens D is used to cancel the quadratic phase introduced by the imaging process. According to Fourier optics [10], when the mask is illuminated by converging light and the quadratic phase at the image plane is cancelled, the resulting image can be treated as the convolution between the image predicted by geometrical optics and the response determined by the aperture of the imaging lens. In our case, the aperture of lens C is specifically designed to filter out the higher-order diffraction patterns introduced by the phase mask inserted behind lens B. Only the zeroth-order diffraction, which theoretically contains about 40 percent of the total energy [216], passes through lens C. The amplitude of the light field at lens B is unity. However, after passing through lens C, the light field distribution at lens D has a cosine shape, which is desirable for a phase-only hologram. Figure 3.10 shows the distribution of the light field at lens D, calculated based on linear-system convolution theory. In panel (a), the dotted line shows the maxima of the rings, since too many details cannot be clearly displayed. Panel (b) shows a laterally magnified version of the plots in panel (a). The cosine-like shape is clearly demonstrated.

To generate the second-order Bessel beams, a second hologram is placed right behind the first mask. The second mask modulates the phase and amplitude of the light in the azimuthal direction after it passes through the first mask.
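Returning to the first mask, the two-level profile of (3.15) is straightforward to generate on a pixel grid, as sketched below. The 20 mm mask diameter is from the text; the pixel size and the assumption that one ring corresponds to half a period of cos(αr − π/4) of width 85.1 μm are illustrative.

```python
# Sketch of the binary phase mask of (3.15) on a square pixel grid; the pixel
# size and the ring-width assumption below are illustrative.
import numpy as np

ring_width = 85.1e-6                       # assumed: one ring = half a period of cos(alpha*r - pi/4)
alpha = np.pi / ring_width                 # scaling parameter of the physical mask
pix, n = 10e-6, 2000                       # 10 um pixels over a 20 mm aperture
x = (np.arange(n) - n/2) * pix
X, Y = np.meshgrid(x, x)
r = np.hypot(X, Y)

phase = np.where(np.cos(alpha*r - np.pi/4) >= 0.0, 0.0, np.pi)   # (3.15)
phase[r > 10e-3] = 0.0                     # outside the 20 mm diameter mask
transmission = np.exp(1j * phase)          # phase-only hologram transmittance
```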

Figure 3.11 Diagram of the second mask (quadrant phases 0 and π, azimuthal attenuation cos(2φ)).

Figure 3.11 shows the schematic diagram of the second mask, which is divided into four quarters; the phase difference between neighboring parts is π. The amplitude transmission coefficient of the mask is controlled by deposited absorbing dots, and the effective transmission coefficient changes with the azimuthal angle as cos(2φ). The second hologram can be rotated by 45 degrees to generate the other second-order Bessel beam.

Figure 3.12 Field plots at different axial distances: (a) z = 0.5 mm, (b) z = 1.5 mm, (c) z = 2.5 mm, (d) z = 3.5 mm, (e) z = 4.5 mm, before and after subtraction; (f) on-axis intensity versus axial distance.

Simulation and results

After the light passes through lens D, the light fields are calculated based on the field calculation method developed by Ocheltree et al. [196]. Figure 3.12 shows the light field at five different axial distances: 0.5 mm, 1.5 mm, 2.5 mm, 3.5 mm, and 4.5 mm, with the spatial filtering at lens C taken into account. The dotted lines are the Bessel beams before subtraction, while the solid lines are the beams after subtraction. Even before sidelobe reduction, the Bessel beams already keep their shape over the whole depth of field and do not spread, but the sidelobes are high. After sidelobe reduction, the beams have a thin, pencil-like shape within the depth of field. At all depths, the subtraction reduces the sidelobes effectively. Panel (f) shows the intensity of the zeroth-order Bessel beam increasing with propagation distance within the depth of field, which agrees well with (3.14). The solid lines in Figure 3.12 also show that, even at the surface of lens D, the sidelobes can be reduced dramatically.

Figure 3.13 Diagram of the phantom used for simulation.

Figure 3.14 Simulated images before sidelobe reduction.

Figure 3.15 Simulated images after sidelobe reduction.

Table 3.2 Contrast of the cylinders (Cyl1 and Cyl2) in the phantom: ideal values and values before and after sidelobe reduction, in dB, for panels a-f of Figure 3.14 and Figure 3.15.

Computer simulations are also performed to verify the effect of the sidelobe reduction. The simulation is based on a single-scattering model, and (3.11) is used to calculate the detected signal. The 2-D images are constructed for a 3-D phantom, which is shown in Figure 3.13. The dimension of the phantom is … μm³, with two cylinders inside it. The diameters of these cylinders are 30 μm. The reflection coefficients of the cylinders are 15 dB (Cyl1) and -15 dB (Cyl2) relative to that of the bulk scatterers. In each cubic wavelength, one random scatterer is assumed. In total, over two million scatterers are considered in the simulation. To show the limited diffraction property of the Bessel beams, images are simulated for five different axial distances, where the center of the phantom is placed 0.5 mm, 1.5 mm, 2.5 mm, 3.5 mm and 4.5 mm away from the surface of lens D, respectively. A central wavelength of 940 nm with a bandwidth of 70 nm is assumed for the light source. The group velocity index of the phantom is taken as ….

Figure 3.14 and Figure 3.15 present the images constructed from the phantom. To avoid the influence of the edges, the displayed images are cut to … μm². Figure 3.14 gives the simulated images before multiple-beam sidelobe reduction, and Figure 3.15 shows the images after sidelobe reduction. To compare the performance of the current design with that of the ideal case, images constructed from ideal Bessel beams (panel (f) in Figure 3.14 and Figure 3.15) are also displayed at the axial distance of 2.5 mm, where ideal Bessel beams are assumed at lens D. At all depths, the sidelobe reduction method enhances the image quality dramatically.

Table 3.2 gives the contrast of the cylinders placed at different distances, where the contrast is defined by (3.5). To avoid the influence of the edges on the contrast, when

calculating $m_i$, the radius of each cylinder is taken 4 μm smaller; on the other hand, it is taken 4 μm larger when calculating $m_o$.

Discussion and conclusion

A. Depth of field

The proposed limited diffraction OCT has a very large depth of field. In our current design, it is about 4.5 mm, which can be clearly seen from panel (f) of Figure 3.12. Since the intensity of the generated beams increases with the propagation distance, the light field very near the surface is relatively weak and has higher sidelobes. At 0.5 mm away, as shown in panel (a) of Figure 3.12, the light has already developed into Bessel beams. The effective depth of field that can be worked with is therefore about 4 mm, with a gap of 0.5 mm left between lens D and the front surface of the object. Normally, the resolvable axial distance is about 2-3 mm, so 4 mm is enough for most applications. Within the depth of field, panels (a) to (e) of Figure 3.12 show that, at the center of the beams, the generated fields keep a nearly constant pencil-like shape. This keeps the lateral resolution constant over the whole imaging range, which can be verified in Figure 3.14 and Figure 3.15, where the speckle patterns at the five different axial distances have very similar features. At the same time, within the depth of field, the intensity of the beams increases linearly with the propagation distance. This is an advantage in an absorbing medium, for it can compensate for the attenuation naturally.

B. Sidelobe

Compared to focused Gaussian beams at the focal plane, limited diffraction beams have higher sidelobes. Figure 3.7 shows the sidelobe levels for ideal zeroth-order Bessel beams. At the edge of the aperture, the sidelobe level is about -55 dB. Near the center of the beams, the sidelobe

level is so high that it is hard to construct images with high contrast. But after multiple-beam subtraction, the sidelobe level is about -100 dB at the edge, and quickly reaches -60 dB near the center. With our current implementation, at the axial distance of 0.5 mm, the sidelobe level before subtraction is higher than that of the ideal beams, but at the other depths, the actual sidelobes are lower. After subtraction, the sidelobes at all depths are dramatically reduced, and the width of the main lobe stays nearly constant at all depths within the depth of field.

As shown in Figure 3.14, before sidelobe reduction, the images have a low contrast; especially Cyl1 (15 dB) is hard to distinguish from the background. From Table 3.2, the actual contrast is about 5 dB to 8 dB, well below the ideal 15 dB. At the axial distance of 0.5 mm, there is even no contrast between this cylinder and the background. But after sidelobe reduction, the contrasts between the cylinders and the background are very close to the ideal values, and a high contrast can be achieved.

C. Resolution

The axial resolution of our current design is the same as that of a conventional OCT system, and is determined by the bandwidth and central wavelength of the light source and the group velocity index of the object. Since the wavelength around 800 nm is the window for biomedical optical applications, where a good compromise between absorption and penetration is reached, broadening the bandwidth of the light source is the only practical way to increase the axial resolution. With an ultra-fast laser with a pulse width in the range of a few femtoseconds, 1 μm axial resolution is achievable, but the cost of such a light source is extremely high. In most OCT systems, a superluminescent diode is used as the light source for its low cost and reasonable performance. Generally, an axial resolution of 10 to 20 μm can be easily achieved. In a conventional system, there is a tradeoff between the lateral

resolution and the depth of the focal zone. If the thickness of the sample is kept well within the focal zone, the lateral resolution is very high. But with a thicker sample, a low numerical aperture must be used to ensure a relatively constant lateral resolution. For a thickness of 2 to 3 mm, the lateral resolution would be no better than 10 μm. The lateral resolution of our current design is 4.4 wavelengths, i.e., about 4 μm. From Figure 3.12, we can see that, after sidelobe reduction, the beam keeps a nearly constant shape. In Figure 3.14 and Figure 3.15, the speckle pattern shows that a nearly constant lateral resolution is achieved, and the lateral resolution is higher than the axial one, which is 8.13 μm in the phantom. Comparing panels (c) and (f) in Figure 3.14 and Figure 3.15, the performance of our design is nearly the same as that of the ideal case, where ideal Bessel beams are transmitted at lens D.

D. Potential applications

A large depth of field and a high, constant lateral resolution are desirable for most OCT applications. In ophthalmology, they mean that a longer axial distance range can be imaged with high and uniform resolution. In turbid media, the linearly increasing intensity of the light can naturally compensate for the attenuation. A high and constant lateral resolution also means less distortion of the image and higher contrast between the features of interest and the background. High contrast makes it easier to optically distinguish abnormal tissue from normal tissues, which is very important for optical biopsy in vivo. Our current design provides a large depth of field and a high lateral resolution, but mechanical exchange and rotation of the hologram are needed, and the imaging time is three times longer. The holograms could be realized by a liquid crystal spatial light modulator, where each element is addressed electronically and no mechanical

movement is involved. Shortening the imaging time is more difficult to deal with, so the design is best suited to situations where high-quality images are crucial while imaging time is not essential.

E. Conclusion

In this section, a novel OCT system with a large depth of field and a high lateral resolution based on limited-diffraction beams is presented. The main goal of this design is to improve the lateral resolution and the depth of field. An optical layout based on holograms to generate and receive the limited-diffraction Bessel beams is presented. With the current design, the lateral resolution is about 4 μm and the axial resolution is about 8 μm, with a large depth of field of 4.5 mm. Preliminary simulation results show that, within the depth of field, the current system can construct images of high quality. Further study will focus on physically implementing the design and investigating the performance of the system under multiple-scattering conditions.

3.4 Summary

In this chapter, limited-diffraction beams are used in both ultrasound imaging and optical imaging [213, 214]. Coupled with coded excitation, subtraction of multiple limited-diffraction Bessel beams can reduce the sidelobe level dramatically for ultrasound imaging, and improve the penetration depth and signal-to-noise ratio significantly. If orthogonal codes could be found, three Bessel beams coded with different codes could be transmitted simultaneously, thus reducing motion artifacts and increasing the frame rate; unfortunately, such codes are difficult to find. The sidelobe reduction principle is also applied in a new OCT design, in which first-order and second-order Bessel beams are produced by holograms. The system has a very large depth of field due to the limited-

diffraction nature of the Bessel beams. Simulation results show that, after subtraction, the system has a constant beam width throughout the large depth of field. High-quality images are obtained at all depths.

CHAPTER IV: FOURIER BASED IMAGING METHOD WITH LIMITED-DIFFRACTION ARRAY BEAMS AND STEERED PLANE WAVES

In this chapter, a general theory of the Fourier based imaging method is developed to include limited-diffraction array beam transmissions. With this theory, steered plane wave transmissions and other previous imaging methods are included as special cases. Multiple transmissions with steered plane waves or limited-diffraction beams allow higher image resolution and lower sidelobes at the expense of image frame rate and increased susceptibility to motion artifacts.

4.1 Introduction

As mentioned in Chapter I, in the past two decades many investigators have studied the potential of the Fourier transform for ultrasound imaging [ ]. In earlier studies of our group [145, 147], Lu developed a high frame rate imaging method based on limited-diffraction beams. Liu [168] combined the steered plane wave method studied by Lu [169, 170] with the high frame rate imaging method [145, 147] to increase image resolution and reduce sidelobes at the expense of image frame rate and susceptibility to motion artifacts. In addition, Lu [146] also used limited-diffraction array beam transmissions to increase the image field of view, increase image resolution, and reduce sidelobes. Based on the knowledge of these previous studies, a general theory of the Fourier based imaging method is developed from the diffraction tomography theory [27] that solves the inhomogeneous Helmholtz equation under the Born approximation. The object function

defined in this theory is more naturally linked to the physical properties of the object, such as the relative change of local compressibility and density. With this treatment, limited-diffraction array beams [104], [146], as well as broad-band steered plane wave transmissions [168], [169, 170], are included, in addition to other previously studied imaging methods [189]. A relationship between the Fourier transform of the echo data and that of the object function is established. Unlike Liu, who developed the method in 2D [168], the theory in this chapter is developed directly in 3D, although Liu claimed that his method is also applicable to 3D imaging. In addition to the computer simulations that Liu performed for the method, imaging experiments for wire targets, tissue-mimicking phantoms, and an in vivo kidney and heart are carried out to verify the theory using the high frame rate imaging system that will be described in the next chapter (for simplicity, all the simulations and experiments are performed in 2D). The main results in this chapter were presented at the IEEE Ultrasonics Symposium in 2005, and submitted as a journal paper to IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control. In the journal paper, the theory part was developed in parallel with that of Lu's high frame rate imaging method [145, 147, 171] using X waves [100, 101] and limited-diffraction array beams [104].

4.2 Theory

In this section, a general theory for Fourier based imaging is developed by solving the wave equation for inhomogeneous media under the first Born approximation. We start the derivation with the wave equation for inhomogeneous media.

Wave equation for inhomogeneous media

The wave equation in the temporal frequency domain for an inhomogeneous medium at a single frequency is given by [27, 203]

$(\nabla^2 + k^2)\,p(\mathbf{r}_1;k) = -k^2\gamma_\kappa(\mathbf{r}_1)\,p(\mathbf{r}_1;k) - \nabla\cdot\left[\gamma_\rho(\mathbf{r}_1)\nabla p(\mathbf{r}_1;k)\right],$   (4.1)

where $\gamma_\kappa(\mathbf{r}_1) = \dfrac{\kappa(\mathbf{r}_1)-\kappa_0}{\kappa_0}$ is the relative change of the compressibility; $\kappa(\mathbf{r}_1)$ and $\kappa_0$ are the local and average compressibility of the medium, respectively; $\gamma_\rho(\mathbf{r}_1) = \dfrac{\rho(\mathbf{r}_1)-\rho_0}{\rho(\mathbf{r}_1)}$ is the relative change of density; and $\rho(\mathbf{r}_1)$ and $\rho_0$ are the local and average density of the medium, respectively. By considering the pressure $p(\mathbf{r}_1;k)$ at $\mathbf{r}_1$ as the summation of the incident wave $p_i(\mathbf{r}_1;k)$ and the scattered wave $p_s(\mathbf{r}_1;k)$, we have

$p(\mathbf{r}_1;k) = p_i(\mathbf{r}_1;k) + p_s(\mathbf{r}_1;k).$   (4.2)

The incident wave satisfies the homogeneous Helmholtz equation,

$(\nabla^2 + k^2)\,p_i(\mathbf{r}_1;k) = 0.$   (4.3)

Substituting (4.3) and (4.2) into (4.1) and applying the first Born approximation, we get

$(\nabla^2 + k^2)\,p_s(\mathbf{r}_1;k) = -k^2\gamma_\kappa(\mathbf{r}_1)\,p_i(\mathbf{r}_1;k) - \nabla\cdot\left[\gamma_\rho(\mathbf{r}_1)\nabla p_i(\mathbf{r}_1;k)\right].$   (4.4)

When $\gamma_\rho(\mathbf{r}_1)$ is a slowly changing function relative to $p_i(\mathbf{r}_1;k)$, the second term on the right hand side of (4.4) can be written as

$\nabla\cdot\left[\gamma_\rho(\mathbf{r}_1)\nabla p_i(\mathbf{r}_1;k)\right] \approx \gamma_\rho(\mathbf{r}_1)\nabla^2 p_i(\mathbf{r}_1;k) = -k^2\gamma_\rho(\mathbf{r}_1)\,p_i(\mathbf{r}_1;k).$   (4.5)

Substituting (4.5) into (4.4), we have

$(\nabla^2 + k^2)\,p_s(\mathbf{r}_1;k) = -k^2\left[\gamma_\kappa(\mathbf{r}_1) - \gamma_\rho(\mathbf{r}_1)\right]p_i(\mathbf{r}_1;k).$   (4.6)

Non-steered plane wave

Figure 4.1 Geometry of the transducer and the object.

As shown in Figure 4.1, a 2-D array transducer located in the z = 0 plane is excited to generate a broad-band plane wave. For the moment, we assume that the aperture of the transducer is infinitely large and the size of each transducer element is infinitely small. The effects of the limited aperture and discretely sampled elements will be considered later. We denote the coordinates of an array element as $\mathbf{r}_0 = (x_0, y_0, 0)$ and the coordinates of any spatial point in the medium as $\mathbf{r}_1 = (x_1, y_1, z_1)$. If the transmitting transfer function of the transducer, which incorporates both the electrical response of the driving circuits and the electrical-acoustical coupling [194, 217] of the transducer elements, is assumed to be $T(k)$, the generated incident pulsed plane wave can be expressed in the temporal frequency domain by Fourier transform as follows [100],

$p_i(z_1 - ct) = \frac{1}{2\pi c}\int T(k)\,e^{jkz_1}\,e^{-jkct}\,dk,$   (4.7)

where $p_i(z_1 - ct)$ is the incident pressure at distance $z_1$; $k = \omega/c$ is the wave number, and $c$ is the speed of sound of the medium. For simplicity, the integration limits in this chapter are dropped; by default, the integration range is from $-\infty$ to $\infty$. Thus the temporal spectrum of the wave at wave number $k$ and position $\mathbf{r}_1$ is obtained as

$p_i(\mathbf{r}_1;k) = \frac{T(k)\,e^{jkz_1}}{2\pi c}.$   (4.8)

Substitution of (4.8) into (4.6) yields

$(\nabla^2 + k^2)\,p_s(\mathbf{r}_1;k) = -\frac{k^2 T(k)}{2\pi c}\left[\gamma_\kappa(\mathbf{r}_1) - \gamma_\rho(\mathbf{r}_1)\right]e^{jkz_1}.$   (4.9)

The Green's function of equation (4.9) is [203]

$g(\mathbf{r}_1\,|\,\mathbf{r}_0;k) = \frac{e^{jkR}}{4\pi R},$   (4.10)

where $R = |\mathbf{r}_1 - \mathbf{r}_0| = \sqrt{(x_1-x_0)^2 + (y_1-y_0)^2 + z_1^2}$ is the distance between the array element at $\mathbf{r}_0$ and the spatial point at $\mathbf{r}_1$. Applying Green's theorem to (4.9), we get the scattered pressure at the transducer surface as

$p_s(\mathbf{r}_0;k) = \frac{k^2 T(k)}{2\pi c}\int f(\mathbf{r}_1)\,e^{jkz_1}\,\frac{e^{jkR}}{4\pi R}\,dx_1\,dy_1\,dz_1,$   (4.11)

where $f(\mathbf{r}_1) = \gamma_\kappa(\mathbf{r}_1) - \gamma_\rho(\mathbf{r}_1)$ is defined as the object scattering function, or simply the object. Taking into consideration the receiving transfer function $R(k)$ of the transducer, which incorporates both the electrical response of the receiving circuits and the

electrical-acoustical response of the transducer elements, we can write the echo signal received by the transducer element at $\mathbf{r}_0$ as

$e(x_0, y_0;k) = \frac{k^2 T(k)R(k)}{2\pi c}\int f(x_1, y_1, z_1)\,e^{jkz_1}\,\frac{e^{jkR}}{4\pi R}\,dx_1\,dy_1\,dz_1.$   (4.12)

Notice that the right hand side of (4.12) is an integration of a 2-D spatial convolution of the object scattering function and the Green's function [10, 203], namely

$e(x_0, y_0;k) = \frac{k^2 T(k)R(k)}{8\pi^2 c}\int\left[f(x_0, y_0, z_1) ** \frac{e^{jk\sqrt{x_0^2+y_0^2+z_1^2}}}{\sqrt{x_0^2+y_0^2+z_1^2}}\right]e^{jkz_1}\,dz_1,$   (4.13)

where ** denotes a 2-D spatial convolution. Taking the 2-D spatial Fourier transform of the echo signal and applying the convolution theorem of the Fourier transform, we have

$E(k_{x_0}, k_{y_0};k) = \frac{j\,k^2 T(k)R(k)}{4\pi c\sqrt{k^2-k_{x_0}^2-k_{y_0}^2}}\int \tilde{F}(k_{x_0}, k_{y_0}; z_1)\,e^{j\left(k+\sqrt{k^2-k_{x_0}^2-k_{y_0}^2}\right)z_1}\,dz_1 = \frac{j\,T(k)R(k)\,k^2}{4\pi c\sqrt{k^2-k_{x_0}^2-k_{y_0}^2}}\,F(k_{x_0}, k_{y_0}, k_{z_0}'),$   (4.14)

where $F(\cdot)$ is the 3-D Fourier transform of the object $f(x,y,z)$ (and $\tilde{F}(k_{x_0},k_{y_0};z_1)$ is its 2-D spatial Fourier transform at depth $z_1$), and the following Fourier transform pair is applied [1, 6, 11]:

$\frac{e^{jk\sqrt{x^2+y^2+z^2}}}{\sqrt{x^2+y^2+z^2}} \;\longleftrightarrow\; \frac{2\pi j\,e^{j\sqrt{k^2-k_x^2-k_y^2}\,|z|}}{\sqrt{k^2-k_x^2-k_y^2}}.$   (4.15)

Changing the variables of (4.14), we can establish a relationship between the echoes $e(x, y)$ and the object scattering function $f(x, y, z)$ in the frequency domain as follows,

$E(k_x, k_y;k) = \frac{j\,T(k)R(k)\,k^2}{4\pi c\sqrt{k^2-k_x^2-k_y^2}}\,F(k_x, k_y, k_z'),$   (4.16)

where $k_z' = k + \sqrt{k^2-k_x^2-k_y^2}$ is a nonlinear variable conversion from the temporal frequency domain of the echo signal to the spatial frequency domain of the object. Theoretically, we can rearrange (4.16) and get the spectrum of the object scattering function from the measured echo signals as follows,

$F(k_x, k_y, k_z') = \frac{4\pi c\sqrt{k^2-k_x^2-k_y^2}}{j\,T(k)R(k)\,k^2}\,E(k_x, k_y;k).$   (4.17)

Equations (4.16) and (4.17) hold when $k^2 > k_x^2 + k_y^2$, which means that the evanescent waves are ignored in the echo signals. This is generally valid in ultrasound imaging, because the amplitude of evanescent waves decreases exponentially with propagation distance. For $z_1 \gg \lambda_c$, where $\lambda_c$ is the center wavelength of the transmitted wave, these evanescent waves are practically undetectable when they reach the transducer surface. Also, for an analytic signal we only need to deal with the positive temporal frequencies. Thus only part of the spectrum of the object scattering function is available. Though the spectrum is not complete, we can still reconstruct the object with an inverse Fourier transform. From (4.17), the approximate object scattering function $f(x,y,z)$ can be expressed as

$f(x, y, z) = \frac{1}{(2\pi)^3}\iiint \frac{4\pi c\sqrt{k^2-k_x^2-k_y^2}}{j\,T(k)R(k)\,k^2}\,E(k_x, k_y;k)\,e^{jk_x x + jk_y y + jk_z' z}\,dk_x\,dk_y\,dk_z',$   (4.18)

where $k = \dfrac{k_x^2 + k_y^2 + k_z'^2}{2k_z'}$ should also be applied on the right side of (4.18). However, in reality the exact transfer functions of the transducer, including both $T(k)$ and $R(k)$, are unknown beforehand, and at certain temporal frequencies they could become zero, which makes the numerical evaluation of (4.18) unstable. In addition, deconvolution of the transfer functions from the echo signal mostly improves the range resolution, which is already

much higher than the lateral resolution even without this treatment. Thus the image that approximately represents the object scattering function can be defined directly as

$I(x, y, z) = \frac{1}{(2\pi)^3}\iiint \frac{k_x^2 + k_y^2 + k_z'^2}{2k_z'}\,E(k_x, k_y;k)\,e^{jk_x x + jk_y y + jk_z' z}\,dk_x\,dk_y\,dk_z'.$   (4.19)

The main result of this theoretical treatment reveals the relationship between the echo signals and the object in the temporal-spatial frequency domain, which is the core of inverse scattering problems. In the context of ultrasound imaging, diffraction tomography is the research focus for such problems; for a review of this topic, please refer to [27]. The basic principle of diffraction tomography can be traced back to Wolf [17, 18], who first derived the formula describing the relationship between the object and the detected scattering signals. In his derivation, a continuous plane wave is used to illuminate the object and the forward or backward scattering signal is measured on a plane. Under the weak scattering assumption and the first Born approximation, he pointed out that the spectrum of the measured signal is directly related to the spectrum of the object [1]. If we only consider a single frequency, the relationship in (4.14) collapses to the classical result given by Wolf. However, in diffraction tomography [218, 219], continuous plane waves are normally used in transmission and forward scattering signals are measured; the spectrum of the object is filled in by rotating the transmitter/receiver or the object 360 degrees. Broadband pulsed waves and backward scattering signals were investigated by Norton and Linzer [156, 157] and Nagai [158, 159], but the incident waves are spherical waves, since a small transducer is used to scan on a planar surface, a configuration quite different from our treatment here. Busse [162] did not provide the explicit relationship between the measured echo signals and the object, although broad-band plane waves were used in transmission and a 3-D image was reconstructed from a single

transmission. The main result (4.19) is a reinterpretation of equation (13) in Lu's previous study [145], where the spectrum of the echoes was treated as the weighting response of the limited-diffraction beams in reception. When proper parameters are selected, the response can be evaluated as a Fourier transform. Liu [168] also derived the relationship between the echo spectrum and the object in two dimensions, which is a special case under this treatment obtained by letting $k_y = 0$ in (4.14).

Limited-diffraction array beams and steered plane waves

The pulsed limited-diffraction array beams can be expressed as [104]

$p_i(\mathbf{r}_1, t) = \frac{1}{2\pi c}\int T(k)\cos(k_{x_T}x_1)\cos(k_{y_T}y_1)\,e^{jk_{z_T}z_1}\,e^{-jkct}\,dk = \frac{1}{8\pi c}\int T(k)\left(e^{jk_{x_T}x_1}+e^{-jk_{x_T}x_1}\right)\left(e^{jk_{y_T}y_1}+e^{-jk_{y_T}y_1}\right)e^{jk_{z_T}z_1}\,e^{-jkct}\,dk,$   (4.20)

where $k_{x_T}$, $k_{y_T}$, and $k_{z_T}$ are the projections of the wave number on the three axes of the spatial coordinates, and satisfy

$k_{z_T} = \sqrt{k^2 - k_{x_T}^2 - k_{y_T}^2}.$   (4.21)

For any wave number $k$, $k_{x_T}$ and $k_{y_T}$ are fixed at transmission. The limited-diffraction beam is generated by weighting the aperture with the function $\cos(k_{x_T}x_0)\cos(k_{y_T}y_0)$. Considering the non-diffracting nature of these beams, the incident wave can be expressed as (4.20). Following a derivation similar to that in the previous section, we get the echo signal on the transducer surface at $(x_0, y_0)$ and wave number $k$,

$e(x_0, y_0;k) = \frac{k^2 T(k)R(k)}{8\pi c}\int f(x_1, y_1, z_1)\left(e^{jk_{x_T}x_1}+e^{-jk_{x_T}x_1}\right)\left(e^{jk_{y_T}y_1}+e^{-jk_{y_T}y_1}\right)e^{jk_{z_T}z_1}\,\frac{e^{jkR}}{4\pi R}\,dx_1\,dy_1\,dz_1.$   (4.22)

Taking the 2-D spatial Fourier transform of the echo signal, using the convolution theorem and the shift theorem of the Fourier transform, and changing variables, we get

$E(k_x, k_y;k) = \frac{j\,T(k)R(k)\,k^2}{16\pi c\sqrt{k^2-k_x^2-k_y^2}}\,\{\,F(k_x+k_{x_T}, k_y+k_{y_T}, k_z+k_{z_T}) + F(k_x+k_{x_T}, k_y-k_{y_T}, k_z+k_{z_T}) + F(k_x-k_{x_T}, k_y+k_{y_T}, k_z+k_{z_T}) + F(k_x-k_{x_T}, k_y-k_{y_T}, k_z+k_{z_T})\,\},$   (4.23)

where $k_z = \sqrt{k^2-k_x^2-k_y^2}$. In this case, the echo signals do not have a simple relationship with the object scattering function. From (4.20), it is obvious that a limited-diffraction array beam at a single frequency can be treated as the summation of four plane waves propagating in different directions. So it is natural that the spectrum of the echo signals in (4.23) has four components, each representing a shifted version of the spectrum of the object scattering function. It is not possible to separate the contributions from the different components if a single transmission is used. But when the echoes from four different array beams with the same parameters are added together coherently, a simple relationship can be established. The weighting functions for the four transmissions are

$\cos(k_{x_T}x_0)\cos(k_{y_T}y_0)$ (a);  $\cos(k_{x_T}x_0)\sin(k_{y_T}y_0)$ (b);  $\sin(k_{x_T}x_0)\cos(k_{y_T}y_0)$ (c);  $\sin(k_{x_T}x_0)\sin(k_{y_T}y_0)$ (d).   (4.24)
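As a concrete illustration of (4.24), the sketch below (Python/NumPy; the array geometry and parameter values are hypothetical and used only for illustration) builds the four aperture weighting matrices for a 2-D array, one per transmission, from the element center positions and the chosen ($k_{x_T}$, $k_{y_T}$).

```python
import numpy as np

def array_beam_weights(x0, y0, kxT, kyT):
    """Return the four aperture weightings (a)-(d) of Eq. (4.24) for a 2-D array.

    x0, y0 : 1-D arrays of element center coordinates along x and y (m)
    kxT, kyT : transmit limited-diffraction beam parameters (rad/m)
    Each returned matrix has shape (len(x0), len(y0)).
    """
    X, Y = np.meshgrid(x0, y0, indexing="ij")
    wa = np.cos(kxT * X) * np.cos(kyT * Y)   # (a)
    wb = np.cos(kxT * X) * np.sin(kyT * Y)   # (b)
    wc = np.sin(kxT * X) * np.cos(kyT * Y)   # (c)
    wd = np.sin(kxT * X) * np.sin(kyT * Y)   # (d)
    return wa, wb, wc, wd

# Hypothetical 2-D array: 64 x 16 elements, 0.32 mm pitch, centered on the origin.
pitch = 0.32e-3
x0 = (np.arange(64) - 31.5) * pitch
y0 = (np.arange(16) - 7.5) * pitch
wa, wb, wc, wd = array_beam_weights(x0, y0, kxT=500.0, kyT=200.0)
print(wa.shape)   # (64, 16)
```

The echoes from these four transmissions are then combined coherently, as derived next, so that only one of the four shifted copies of the object spectrum survives.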

If the corresponding echo spectra from these four transmissions are denoted as $E_a(k_x,k_y;k)$, $E_b(k_x,k_y;k)$, $E_c(k_x,k_y;k)$, and $E_d(k_x,k_y;k)$, respectively, it is easy to show that

$E_a + jE_b + jE_c - E_d = \frac{j\,T(k)R(k)\,k^2}{4\pi c\sqrt{k^2-k_x^2-k_y^2}}\,F(k_x+k_{x_T},\,k_y+k_{y_T},\,k_z+k_{z_T}),$
$E_a - jE_b + jE_c + E_d = \frac{j\,T(k)R(k)\,k^2}{4\pi c\sqrt{k^2-k_x^2-k_y^2}}\,F(k_x+k_{x_T},\,k_y-k_{y_T},\,k_z+k_{z_T}),$
$E_a + jE_b - jE_c + E_d = \frac{j\,T(k)R(k)\,k^2}{4\pi c\sqrt{k^2-k_x^2-k_y^2}}\,F(k_x-k_{x_T},\,k_y+k_{y_T},\,k_z+k_{z_T}),$
$E_a - jE_b - jE_c - E_d = \frac{j\,T(k)R(k)\,k^2}{4\pi c\sqrt{k^2-k_x^2-k_y^2}}\,F(k_x-k_{x_T},\,k_y-k_{y_T},\,k_z+k_{z_T}).$   (4.25)

Thus, for the combined echo spectrum, denoted as $E_{ldb}(k_x,k_y;k)$ and given by

$E_{ldb}(k_x,k_y;k) = E_a(k_x,k_y;k) + jE_b(k_x,k_y;k) + jE_c(k_x,k_y;k) - E_d(k_x,k_y;k),$   (4.26)

we still have

$E_{ldb}(k_x,k_y;k) = \frac{j\,T(k)R(k)\,k^2}{4\pi c\sqrt{k^2-k_x^2-k_y^2}}\,F(k_x+k_{x_T},\,k_y+k_{y_T},\,k_z+k_{z_T}).$   (4.27)

From (4.27), we can approximately evaluate the spectrum of the object scattering function as

$F(k_x', k_y', k_z') \approx E(k_x, k_y;k) = E(k_x', k_y', k_z'),$   (4.28)

where $F(k_x', k_y', k_z')$ is the spectrum of the object and a change of variables is performed on the echo spectrum. The relationships between the variables are defined as follows,

$k_x' = k_x + k_{x_T};\qquad k_y' = k_y + k_{y_T};\qquad k_z' = \sqrt{k^2-k_x^2-k_y^2} + \sqrt{k^2-k_{x_T}^2-k_{y_T}^2}.$   (4.29)

When the wave front of the transmitted pulsed plane wave is not parallel to the transducer surface, the pulsed plane wave can be expressed as

$p_i(\mathbf{r}_1, t) = \frac{1}{2\pi c}\int T(k)\,e^{jk_{x_T}x_1 + jk_{y_T}y_1 + jk_{z_T}z_1}\,e^{-jkct}\,dk,$   (4.30)

where $k_{x_T}$, $k_{y_T}$, and $k_{z_T}$ are the projections of the wave number on the three axes of the spatial coordinates. If $\theta$ is the polar angle and $\phi$ is the azimuthal angle of the wave number vector, we have

$k_{x_T} = k\sin\theta\cos\phi,\quad k_{y_T} = k\sin\theta\sin\phi,\quad k_{z_T} = k\cos\theta.$   (4.31)

For a steered plane wave, the angles $\phi$ and $\theta$ are fixed for all wave number vectors and are defined by the steering direction. Similar to the non-steered case, the echo signal on the transducer surface at $(x_0, y_0)$ and wave number $k$ is

$e(x_0, y_0;k) = \frac{k^2 T(k)R(k)}{2\pi c}\int f(x_1, y_1, z_1)\,e^{jk_{x_T}x_1}\,e^{jk_{y_T}y_1}\,e^{jk_{z_T}z_1}\,\frac{e^{jkR}}{4\pi R}\,dx_1\,dy_1\,dz_1.$   (4.32)

Taking the 2-D spatial Fourier transform of the echo signals, using the convolution theorem and the shift theorem of the Fourier transform, and changing variables, we get the relationship between the echo signals and the object scattering function as

$E(k_x, k_y;k) = \frac{j\,T(k)R(k)\,k^2}{4\pi c\sqrt{k^2-k_x^2-k_y^2}}\,F(k_x+k_{x_T},\,k_y+k_{y_T},\,k_z+k_{z_T}).$   (4.33)

Changing the variables of (4.33), we can evaluate the spectrum of the object from the same equation as (4.28). If multiple limited-diffraction beams or steered plane waves are used to insonify the object, we can obtain the spectrum of the object scattering function many times. Since each transmission yields a shifted version of the echo spectrum, when these spectra are added together the coverage of the object function in the frequency domain is broadened; the image resolution is improved and the sidelobe levels are reduced. For one particular transmission, the parameters which define the transmitted waves take discrete values. When $\theta^{i}$ and $\phi^{j}$ are used for plane waves, or $k_{x_T}^{i}$ and $k_{y_T}^{j}$ for limited-diffraction beams, and the corresponding echo spectrum is $E^{i,j}(k_x', k_y', k_z')$, $i = 1, \ldots, N$, $j = 1, \ldots, M$, the final spectrum of the object scattering function would be

$F(k_x', k_y', k_z') \approx \sum_{i=1}^{N}\sum_{j=1}^{M} E^{i,j}(k_x', k_y', k_z').$   (4.34)

Thus the image, which approximately represents the object, is constructed by an inverse Fourier transform, namely,

$I(x, y, z) = \frac{1}{(2\pi)^3}\iiint F(k_x', k_y', k_z')\,e^{jk_x'x + jk_y'y + jk_z'z}\,dk_x'\,dk_y'\,dk_z'.$   (4.35)

In 2-D B-scan imaging, a 1-D array is used and, for practical purposes, the spatial frequency component in the y direction can be set to zero. The relationship between the echoes and the object function becomes

$E(k_x;k) = \frac{j\,T(k)R(k)\,k^2}{4\pi c\sqrt{k^2-k_x^2}}\,F(k_x', k_z').$   (4.36)

For limited-diffraction array beams, the echo spectrum in (4.36) is the coherently combined spectrum. $k_{x_T}$ is fixed in transmission by weighting the aperture with $\cos(k_{x_T}x_0)$ and $\sin(k_{x_T}x_0)$, respectively, and $k_x'$ and $k_z'$ satisfy the following equations,

$k_x' = k_x + k_{x_T};\qquad k_z' = \sqrt{k^2-k_x^2} + \sqrt{k^2-k_{x_T}^2}.$   (4.37)

For a steered plane wave, $k_x'$ and $k_z'$ satisfy

$k_x' = k_x + k\sin\theta;\qquad k_z' = \sqrt{k^2-k_x^2} + k\cos\theta,$   (4.38)

where $\theta$ is the steering angle of the transmitted plane wave relative to the transducer surface. This is also the main result of Liu [168].

Two special cases

In 2-D B-scan imaging, we can evaluate the 2-D spatial spectrum of the object scattering function that is available from the echo spectrum according to (4.36). We can also calculate the spectrum of the object scattering function in a different fashion. For limited-diffraction array beams, by letting $k_x = k_{x_T}$, from (4.37) and (4.36) we can get

$E(k_{x_T};k) = \frac{j\,T(k)R(k)\,k^2}{4\pi c\,k_{z_T}}\,F(2k_{x_T},\,2k_{z_T}),$   (4.39)

where $k_{z_T} = \sqrt{k^2-k_{x_T}^2}$. In this method, the spectrum of the object is filled out by transmitting limited-diffraction array beams with an evenly sampled parameter $k_{x_T}$. The spectrum is filled out in rectangular coordinates directly by applying the same weighting

function in reception. However, the lateral field of view of the constructed image is limited by the sampling interval of $k_{x_T}$; to obtain a large enough lateral view, hundreds of transmissions are required. Lu [146] has also derived this result before, and pointed out that the coverage of the object scattering function is broadened with this approach and that equivalent transmit/receive dynamic focusing is theoretically achievable. The low frame rate resulting from the sampling constraint will limit the applicability of this method.

For a steered plane wave with steering angle $\theta$, if only one line of the spectrum is evaluated by letting $k_x = k\sin\theta$, as in (4.38), we can get

$E(k\sin\theta;k) = \frac{j\,T(k)R(k)\,k}{4\pi c\cos\theta}\,F(2k\sin\theta,\,2k\cos\theta).$   (4.40)

By changing the evenly sampled steering angle $\theta$, the spectrum of the object scattering function can be filled out in polar coordinates. Soumekh [189] has derived this result from a linear system modeling approach and discussed the sampling constraints and the reconstruction procedure in detail, so we will not repeat them here. With this approach, the spectrum of the echo can be obtained by simple summation after delaying each echo by the same amount of time as in transmission. However, it is worth noticing that the spectrum of the object is filled out in polar coordinates: for each steering angle $\theta$, the corresponding coordinates of the spectrum are $(2k, \theta)$. Converting the spectrum from polar coordinates to rectangular coordinates is required before the inverse Fourier transform is applied, which is similar to scan conversion in conventional sector scan imaging. This step imposes very strict sampling constraints on the steering angle $\theta$. To avoid nonlinear aliasing, hundreds of transmissions are needed for typical transducer arrays, and a complicated interpolation method has to be used.
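To make the plane-wave special case concrete, the following sketch (Python/NumPy; the array geometry, sampling values, and function names are illustrative assumptions, not taken from the imaging system described later) forms the single spectral line of (4.40) for one steering angle: each element's echo is delayed by the same law used in transmission, the delayed traces are summed over the aperture, and the temporal spectrum of the sum is assigned to the polar coordinates $(2k, \theta)$ of the object spectrum.

```python
import numpy as np

def spectrum_line_for_angle(rf, x0, theta, fs, c):
    """One line of the object spectrum, per Eq. (4.40), for steering angle theta.

    rf    : echo data of one steered-plane-wave shot, shape (n_elem, n_samp)
    x0    : element center positions along x (m), shape (n_elem,)
    theta : steering angle of the transmitted plane wave (rad)
    fs    : temporal sampling rate (Hz);  c : speed of sound (m/s)
    Returns (k, line): temporal wave numbers and the summed spectrum; the sample
    for wave number k fills the object spectrum at polar coordinates (2k, theta).
    """
    n_elem, n_samp = rf.shape
    delays = x0 * np.sin(theta) / c              # same delay law as in transmission
    delays -= delays.min()                       # keep the delays causal
    freqs = np.fft.rfftfreq(n_samp, d=1.0 / fs)  # positive temporal frequencies
    spectra = np.fft.rfft(rf, axis=1)
    phase = np.exp(-2j * np.pi * np.outer(delays, freqs))  # delay = linear phase
    line = np.sum(spectra * phase, axis=0)       # sum over the aperture
    k = 2.0 * np.pi * freqs / c
    return k, line

# Tiny synthetic usage example (random data, illustrative only).
rng = np.random.default_rng(0)
rf = rng.standard_normal((128, 2048))
x0 = (np.arange(128) - 63.5) * 0.32e-3
k, line = spectrum_line_for_angle(rf, x0, theta=np.deg2rad(10.0), fs=40e6, c=1540.0)
print(k.shape, line.shape)
```

In this special case only one spectral line is obtained per transmission, which is why hundreds of steering angles are needed to fill the polar grid; the general method developed above instead uses the full 2-D echo spectrum of every transmission.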

4.3 Methods

In this section, the detailed reconstruction procedure for 2-D B-scan imaging is demonstrated as an example. For simplicity, the following discussion applies only to 2-D imaging; for the corresponding 3-D imaging, similar techniques can be employed. The inverse mapping rules are also discussed in the context of 2-D imaging.

Inverse mapping procedure

Figure 4.2 Coordinate transform for array beams. (a) The boundaries of the echo spectrum; (b) The boundaries of the object spectrum.

From (4.36), it is clear that, after the Fourier transform, we have the spectrum of the echo signals on evenly sampled grids. With (4.37) and (4.38), we can map the echo spectrum to the object spectrum. This is a nonlinear mapping process, and the resulting object spectrum will not be evenly sampled, which makes the inverse Fourier transform difficult to carry out. To make the spectrum of the object evenly sampled in both $k_x'$ and $k_z'$, the

mapping process must be carried out backward. This means that we first set the values of $k_x'$ and $k_z'$ on evenly sampled grids. We denote the sampling intervals for $k_x'$ and $k_z'$ as $\Delta k_x'$ and $\Delta k_z'$, and the index for the grid as $(i, j)$. The echo spectrum is already evenly sampled in $k_x$ and $k$ after the 2-D Fourier transform; we denote those sampling intervals as $\Delta k_x$ and $\Delta k$, respectively, and the index for the grid as $(m, n)$.

For limited-diffraction array beams, the inverse function for (4.37) is

$k_x = k_x' - k_{x_T};\qquad k = \frac{\sqrt{\left[k_z'^2 + k_{x_T}^2 - (k_x'-k_{x_T})^2\right]^2 + 4(k_x'-k_{x_T})^2\,k_z'^2}}{2k_z'}.$   (4.41)

This is not a single-valued function, and only under certain conditions does it provide a one-to-one mapping. As shown in Figure 4.2, the valid data used in the construction are bounded by three lines. Two of them are defined by

$k_x = -k,\ \text{when}\ k_x < 0,$   (4.42)
$k_x = k,\ \text{when}\ k_x > 0.$   (4.43)

The third line is defined by

$k = k_{x_T}.$   (4.44)

Using the forward mapping equation (4.37), one gets the corresponding equations in $(k_x', k_z')$ coordinates:

$k_z' = \sqrt{(k_x'-k_{x_T})^2 - k_{x_T}^2}\ \text{when}\ |k_x'-k_{x_T}| > k_{x_T}\ \text{and}\ k_x'-k_{x_T} < 0;$   (4.45)
$k_z' = \sqrt{(k_x'-k_{x_T})^2 - k_{x_T}^2}\ \text{when}\ |k_x'-k_{x_T}| > k_{x_T}\ \text{and}\ k_x'-k_{x_T} > 0;$   (4.46)

$k_z' = \sqrt{k_{x_T}^2 - (k_x'-k_{x_T})^2}\ \text{when}\ |k_x'-k_{x_T}| < k_{x_T}.$   (4.47)

The following steps show the mapping procedure for limited-diffraction array beams:

(a) For $F(k_x', k_z')$ on the rectangular grid $(i, j)$, calculate $k_x' = i\,\Delta k_x'$ and $k_z' = j\,\Delta k_z'$.

(b) Calculate $k_x = k_x' - k_{x_T}$. If $|k_x| \le k_{x_T}$, calculate $\sqrt{k_{x_T}^2 - k_x^2}$, and if $k_z' > \sqrt{k_{x_T}^2 - k_x^2}$, use equation (4.41) to calculate the corresponding coordinates $(k_x, k)$ and find the value of $F(k_x', k_z')$ through bilinear interpolation; otherwise simply set $F(k_x', k_z')$ to zero.

(c) If $|k_x| > k_{x_T}$, calculate $\sqrt{k_x^2 - k_{x_T}^2}$, and if $k_z' > \sqrt{k_x^2 - k_{x_T}^2}$, use equation (4.41) to calculate the corresponding coordinates $(k_x, k)$ and find the value of $F(k_x', k_z')$ through bilinear interpolation; otherwise simply set $F(k_x', k_z')$ to zero.

For steered plane waves, the inverse function for (4.38) is

$k_x = k_x' - \frac{(k_x'^2 + k_z'^2)\sin\theta}{2k_z'\cos\theta + 2k_x'\sin\theta};\qquad k = \frac{k_x'^2 + k_z'^2}{2k_z'\cos\theta + 2k_x'\sin\theta}.$   (4.48)

This is not a single-valued function either, and only under certain conditions does it provide a one-to-one mapping. As shown in Figure 4.3, the valid data used in the construction are bounded by two lines, whose equations in the original $(k_x, k)$ coordinates are (4.42) and (4.43). Using the forward mapping equation (4.38), one gets the corresponding equations in $(k_x', k_z')$ coordinates as

$k_z' = \frac{\cos\theta}{\sin\theta - 1}\,k_x',$   (4.49)

$k_z' = \frac{\cos\theta}{\sin\theta + 1}\,k_x'.$   (4.50)

Figure 4.3 Coordinate transform for plane waves. (a) The boundaries of the echo spectrum; (b) The boundaries of the object spectrum.

The following steps show the mapping procedure for steered plane waves:

(a) For any $F(k_x', k_z')$ on the rectangular grid $(i, j)$, calculate $k_x' = i\,\Delta k_x'$ and $k_z' = j\,\Delta k_z'$.

(b) If $k_x' > 0$, calculate $\dfrac{\cos\theta}{\sin\theta + 1}\,k_x'$, and if $k_z' > \dfrac{\cos\theta}{\sin\theta + 1}\,k_x'$, use equation (4.48) to calculate the corresponding coordinates $(k_x, k)$ and find the value of $F(k_x', k_z')$ through bilinear interpolation; otherwise simply set $F(k_x', k_z')$ to zero.

(c) If $k_x' < 0$, calculate $\dfrac{\cos\theta}{\sin\theta - 1}\,k_x'$, and if $k_z' > \dfrac{\cos\theta}{\sin\theta - 1}\,k_x'$, use equation (4.48) to calculate the corresponding coordinates $(k_x, k)$ and find the value of $F(k_x', k_z')$ through bilinear interpolation; otherwise simply set $F(k_x', k_z')$ to zero.
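The inverse-mapping procedure above translates almost directly into code. The sketch below (Python/NumPy) reconstructs a 2-D image from the per-element echo data of several steered-plane-wave transmissions: a 2-D FFT of each data set, backward mapping of the object-spectrum grid onto the $(k_x, k)$ grid using (4.48) with the boundary tests (4.49)-(4.50) and bilinear interpolation, coherent accumulation over transmissions, and a final 2-D inverse FFT. All array sizes, names, and sampling values are illustrative assumptions; this is a minimal sketch of the mapping logic, not the implementation used for the results in this chapter, and it does not reproduce the exact phase conventions of the derivation.

```python
import numpy as np

def reconstruct_plane_wave(rf_per_angle, thetas, pitch, fs, c, nz_pad=2048, nx_pad=256):
    """Minimal Fourier-method reconstruction sketch for steered plane waves (2-D).

    rf_per_angle : list of echo arrays, shape (n_elem, n_samp), one per steering angle
    thetas       : steering angles (rad), same length as rf_per_angle
    pitch        : element pitch (m);  fs : sampling rate (Hz);  c : speed of sound (m/s)
    Returns a complex image of shape (nz_pad, nx_pad).
    """
    # Measurement-domain grids (evenly sampled after zero-padded FFTs).
    kx = 2.0 * np.pi * np.fft.fftshift(np.fft.fftfreq(nx_pad, d=pitch))   # lateral
    k = 2.0 * np.pi * np.fft.rfftfreq(nz_pad, d=1.0 / fs) / c             # temporal
    dkx, dk = kx[1] - kx[0], k[1] - k[0]
    # Object-spectrum grid (k'_x, k'_z); k'_z spans up to twice the temporal band.
    kzp = np.linspace(0.0, 2.0 * k[-1], nz_pad)
    KXP, KZP = np.meshgrid(kx, kzp, indexing="xy")        # shapes (nz_pad, nx_pad)
    F = np.zeros((nz_pad, nx_pad), dtype=complex)

    for rf, th in zip(rf_per_angle, thetas):
        # Steps (2)-(3): 2-D spatio-temporal spectrum E(kx, k) of the echo data.
        E = np.fft.rfft(rf, n=nz_pad, axis=1)                             # time -> k
        E = np.fft.fftshift(np.fft.fft(E, n=nx_pad, axis=0), axes=0)      # x -> kx
        # Step (4): backward mapping (4.48), (k'_x, k'_z) -> (kx, k).
        denom = 2.0 * KZP * np.cos(th) + 2.0 * KXP * np.sin(th)
        with np.errstate(divide="ignore", invalid="ignore"):
            kk = np.nan_to_num((KXP**2 + KZP**2) / denom, posinf=0.0, neginf=0.0)
        kxx = KXP - kk * np.sin(th)
        # Boundary tests (4.49)-(4.50): keep only the region covered by the data.
        bound = np.where(KXP > 0.0,
                         np.cos(th) / (np.sin(th) + 1.0) * KXP,
                         np.cos(th) / (np.sin(th) - 1.0) * KXP)
        valid = ((denom > 0.0) & (KZP > bound) & (kk > 0.0) & (kk < k[-1])
                 & (kxx > kx[0]) & (kxx < kx[-1]))
        # Bilinear interpolation of E at the mapped coordinates.
        fx, fz = (kxx - kx[0]) / dkx, kk / dk
        ix, iz = np.floor(fx).astype(int), np.floor(fz).astype(int)
        ix, iz = np.clip(ix, 0, nx_pad - 2), np.clip(iz, 0, E.shape[1] - 2)
        wx, wz = fx - ix, fz - iz
        vals = ((1 - wx) * (1 - wz) * E[ix, iz] + wx * (1 - wz) * E[ix + 1, iz]
                + (1 - wx) * wz * E[ix, iz + 1] + wx * wz * E[ix + 1, iz + 1])
        # Step (5): coherently accumulate the mapped spectrum over transmissions.
        F += np.where(valid, vals, 0.0)

    # Step (6): image by 2-D inverse Fourier transform of the object spectrum.
    return np.fft.ifft2(np.fft.ifftshift(F, axes=1))

# Tiny synthetic usage example (random data, illustrative only).
rng = np.random.default_rng(0)
shots = [rng.standard_normal((128, 1500)) for _ in range(3)]
img = reconstruct_plane_wave(shots, np.deg2rad([-10.0, 0.0, 10.0]),
                             pitch=0.32e-3, fs=40e6, c=1540.0)
print(img.shape)
```

The limited-diffraction-beam case follows the same structure, with (4.41) and the boundaries (4.45)-(4.47) replacing the plane-wave expressions and the coherent four-beam sum of (4.26) applied before the mapping.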

Reconstructive procedure

Figure 4.4 Reconstructive procedure for 2-D imaging with a 1-D array. (a) Transmission with a steered plane wave; (b) Received echo data in the temporal-spatial domain; (c) Echo data in the temporal-spatial frequency domain; (d) Object scattering function in the spatial frequency domain.

The following steps are involved in the reconstruction of 2-D B-scan images with the proposed method.

(1) Transmit array beams with $k_{x_T}$ or a plane wave with steering angle $\theta$. The limited-diffraction array beams are generated by weighting the aperture with the function $\cos(k_{x_T}x_0)$ or $\sin(k_{x_T}x_0)$; no time delay is applied, and all elements are excited at the same time. The steering of the plane wave is implemented by delaying the firing time of each element of the array. The delay $\tau$ is given by $\tau(x_0) = x_0\sin\theta/c$, $x_0 \in (-D/2, D/2)$, where $x_0$ is the position of the array element, $c$ is the speed of ultrasound, and $D$ is the aperture size of the array. To make the system causal, an additional constant delay is added to the delay function. Figure 4.4 (a) shows this step for limited-diffraction beams.

(2) Receive and digitize the echoes from each transducer element. As shown in Figure 4.4 (b), a two-dimensional data set is collected. The echoes from all elements are digitized by one master clock, whose frequency must be high enough to capture all frequency components in the echoes. The 2-D echo data are sampled evenly both in time and in space.

(3) Take the 2-D temporal-spatial Fourier transform to get the echoes in the temporal-spatial frequency domain. Ignoring the contribution from evanescent waves, we have one quarter of the spectrum, as shown in Figure 4.4 (c).

(4) Map the echo spectrum to get the spectrum of the object scattering function. For limited-diffraction array beams, an additional step is taken to coherently sum the echo spectra according to (4.26). Equations (4.37) and (4.38) show the relationship between the coordinate systems; the rearrangement is obviously a nonlinear mapping process. The echo spectrum is evenly sampled in $k$ and $k_x$; if these two equations are used to map the coordinates $(k_x, k)$ to the coordinates $(k_x', k_z')$, the spectrum of the object scattering

function will not be evenly sampled in $k_x'$ and $k_z'$. We call this process forward mapping, and it would make the final inverse Fourier transform difficult to carry out. Inverse mapping must be used instead to make the spectrum of the object evenly sampled; this process has been discussed in the previous section. Notice that the coverage of the echo spectrum in the coordinates $(k_x, k)$ does not change with the transmission parameters, whereas the coverage of the spectrum of the object scattering function does. The mapped spectrum of the object is illustrated in Figure 4.4 (d) for limited-diffraction beams.

(5) Add the spectrum of the object scattering function from the current transmission to that obtained from the previous transmissions. Since different transmission parameters are used, these spectra will not overlap exactly, and the spectral coverage of the object scattering function is broadened. If all the transmissions are finished, go to step (6); otherwise set new parameters and go to step (1).

(6) Take the 2-D inverse Fourier transform of the spectrum of the object scattering function to get the reconstructed B-mode image.

4.4 Simulation

In this section, computer simulations are performed to verify the proposed imaging methods. Echoes are simulated using the spatial impulse response method discussed in Chapter II. The conditions for the simulations are listed, and the results from these simulations are presented and discussed.

Transducer definition

The transducer assumed in the simulation is a 1-D linear array with 128 elements, a pitch of 0.32 mm, and a dimension of 40.96 mm × 8.6 mm. The center frequency is 3.5 MHz and there is no lens added in elevation. For simplicity, the transfer function of the transducer in both transmission and reception is assumed to be a Blackman window as follows,

$B(f) = \begin{cases} 0.42 - 0.5\cos(\pi f/f_c) + 0.08\cos(2\pi f/f_c), & 0 \le f \le 2f_c, \\ 0, & f > 2f_c. \end{cases}$   (4.51)

The corresponding bandwidth is 81% of the center frequency at -6 dB. The driving signal is a one-cycle sine wave at the center frequency. A custom-built transducer with the same specifications will be used in the experiments, except that the two-way fractional bandwidth of the physical transducer is around 50%. A short numerical sketch of this transfer function is given after the object definitions below.

Objects

The first object (Object A) assumed in the simulation consists of 40 point scatterers with the same scattering coefficient. These scatterers are arranged in a pattern corresponding to that of the line targets of the ATS539 tissue-mimicking phantom from ATS Laboratories, Inc. Figure 4.5 shows the pattern of these scatterers in imaging area I and the position of the transducer. The second object (Object B) assumed in the simulation consists of 18 point scatterers with the same scattering coefficient. They are aligned along three lines, each having 6 evenly distributed points. The distance between the points along each line is 20 mm. The tilting angles of the three lines are 0 degrees (vertical), 15 degrees, and 30 degrees, respectively. The center of the transducer is aligned with the vertical line.
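As referenced above, the following sketch (Python/NumPy; purely illustrative) evaluates the assumed Blackman transfer function $B(f)$ of (4.51) and numerically confirms the quoted -6 dB bandwidth of about 81% of the center frequency.

```python
import numpy as np

def blackman_transfer(f, fc):
    """Blackman-window transducer transfer function B(f) of Eq. (4.51)."""
    f = np.asarray(f, dtype=float)
    b = 0.42 - 0.5 * np.cos(np.pi * f / fc) + 0.08 * np.cos(2.0 * np.pi * f / fc)
    return np.where((f >= 0.0) & (f <= 2.0 * fc), b, 0.0)

fc = 3.5e6
f = np.linspace(0.0, 2.0 * fc, 20001)
B = blackman_transfer(f, fc)
# -6 dB points: the amplitude falls to one half of the peak (the peak is 1.0 at f = fc).
passband = f[B >= 0.5]
bw = passband.max() - passband.min()
print(f"-6 dB fractional bandwidth: {bw / fc:.2%}")   # about 81%
```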

Figure 4.5 Diagram of the ATS539 phantom.

Simulation Results

For Object A, our goal is to show the point spread functions of the system with different methods and under different conditions. For this purpose, three sets of echoes are simulated, one of which is for the delay-and-sum method and is used as a comparison.

The first set corresponds to the insonifications from limited-diffraction array beams. A total of 46 values of $k_{x_T}$ are used, which correspond to 91 transmissions, since each parameter $k_{x_T}$ needs two transmissions except when $k_{x_T} = 0$. $k_{x_T}$ is evenly sampled according to $k_{x_T}^{i} = i\,\Delta k_{x_T}$, $i = 0, 1, \ldots, N-1$, where $\Delta k_{x_T} = \pi/[\Delta x_0 (N-1)]$; $\Delta x_0$ is the pitch of the array transducer; and $N$ is the number of $k_{x_T}$ values. This set of data is down-sampled in $k_{x_T}$ to

generate the echoes for 11 transmissions with $\Delta k_{x_T} = \pi/(5\Delta x_0)$. For a single transmission, the echoes for $k_{x_T} = 0$ are used.

The second set corresponds to the insonifications from steered plane waves, where 91 transmissions are used. The steering angle spans from -45 degrees to 45 degrees with a sampling interval $\Delta\theta = 1$ degree. This set of data is also down-sampled in steering angle to generate the echoes for 11 transmissions with $\Delta\theta = 9$ degrees. For a single transmission, the echoes for $\theta = 0$ degrees are used.

The third set corresponds to the insonifications from focused beams with a fixed focal depth of 70 mm, where 263 transmissions are used. The steering angle for the sector scan is unevenly sampled according to $\phi_i = \sin^{-1}(i\delta)$, $i = -N/2+1, -N/2+2, \ldots, N/2$, where $\delta = c/f_c/2/D$; $c$ is the speed of sound, assumed to be 1540 m/s; $f_c$ is the central frequency, with a value of 3.5 MHz; and $D$ is the aperture of the transducer, with a value of 40.96 mm. For an azimuthal view of 90 degrees, $N = \sqrt{2}/\delta \approx 263$.

Figure 4.6 and Figure 4.7 show the images for the Fourier methods with limited-diffraction beam and plane-wave transmissions. From panel (a) to panel (c), the Fourier method reconstructs the object correctly, and the image quality improves as the number of transmissions increases. With one transmission, the field of view is limited and the sidelobe level in panel (a) is relatively high compared with that of the points at the transmit focus in panel (d); however, for the points away from the transmit focus the sidelobe levels are similar. When the number of transmissions is increased to 11, the sidelobe level is reduced significantly and the resolution is also improved dramatically.
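For reference, the transmission parameter sets used in the three simulated data sets reduce to a few lines of arithmetic. The sketch below (Python/NumPy; a minimal illustration using the stated values) generates the evenly sampled $k_{x_T}$ set, the plane-wave steering angles, and the unevenly sampled sector-scan angles $\phi_i = \sin^{-1}(i\delta)$.

```python
import numpy as np

pitch = 0.32e-3                 # array pitch (m)
c, fc, D = 1540.0, 3.5e6, 40.96e-3

# Limited-diffraction array beams: 46 evenly sampled kxT values -> 91 transmissions,
# since every kxT except 0 needs both a cosine and a sine aperture weighting.
N_kxt = 46
dkxt = np.pi / (pitch * (N_kxt - 1))
kxt = np.arange(N_kxt) * dkxt
n_tx_ldb = 2 * N_kxt - 1
print(f"kxT step = {dkxt:.1f} rad/m, limited-diffraction transmissions = {n_tx_ldb}")

# Steered plane waves: -45 to +45 degrees in 1-degree steps -> 91 transmissions.
thetas = np.deg2rad(np.arange(-45, 46, 1))
print(f"plane-wave transmissions = {thetas.size}")

# Conventional sector scan: unevenly sampled angles phi_i = arcsin(i * delta).
delta = c / fc / 2.0 / D
N_sector = int(round(np.sqrt(2.0) / delta))     # about 263 for a 90-degree view
i = np.arange(-N_sector // 2 + 1, N_sector // 2 + 1)
phi = np.arcsin(i * delta)
print(f"delta = {delta:.4f}, sector-scan lines = {phi.size}")
```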

Figure 4.6 Images constructed from simulated echoes for the Fourier method with limited-diffraction-beam transmissions and the delay-and-sum method with focused transmissions. (a) One transmission for the Fourier method, with $k_{x_T} = 0$; (b) 11 transmissions for the Fourier method, with $\Delta k_{x_T} = \pi/(5\Delta x_0)$; (c) 91 transmissions for the Fourier method, with $\Delta k_{x_T} = \pi/(45\Delta x_0)$; (d) 263 transmissions with a focal depth of 70 mm for the delay-and-sum method, with $\delta = c/f_c/2/D$. The speed of sound $c$ is assumed to be 1540 m/s, and the center frequency and aperture of the transducer are 3.5 MHz and 40.96 mm, respectively. The pitch of the elements $\Delta x_0$ is 0.32 mm.

Figure 4.7 Images constructed from simulated echoes for the Fourier method with plane-wave transmissions and the delay-and-sum method with focused transmissions. (a) One transmission for the Fourier method, with $\theta = 0$ degrees; (b) 11 transmissions for the Fourier method, with $\Delta\theta = 9$ degrees; (c) 91 transmissions for the Fourier method, with $\Delta\theta = 1$ degree; (d) 263 transmissions with a focal depth of 70 mm for the delay-and-sum method, with $\delta = c/f_c/2/D$. The speed of sound $c$ is assumed to be 1540 m/s, and the central frequency and aperture of the transducer are 3.5 MHz and 40.96 mm, respectively.

It is worth noticing that the signature of the sidelobes is slightly different between the case of limited-diffraction beams and the case of plane waves. When 91 transmissions are used, for both cases the sidelobe level is below -50 dB, and the resolution of the image at any point is comparable to that of the points at the transmit focus in panel (d).

For the second object, eight sets of echoes are simulated. The first two sets are the same as for the first object, except that the object is different. The other six sets correspond to the insonifications from focused waves with different focal depths; for each set, a fixed focal depth is used. These focal depths are 20 mm, 40 mm, 60 mm, 80 mm, 100 mm, and 120 mm, respectively, and for each set 263 transmissions are used.

Figure 4.8 shows the images constructed with the Fourier method and the delay-and-sum method. In these images, the center of the transducer is aligned with the left column of points; to save space, most of the left halves of the images are not shown. Six B-mode images are reconstructed with the delay-and-sum method and then pasted together to produce the image in panel (d), so that every scatterer in panel (d) is at both the transmit and the receive focus. Comparing these images, one can easily see that with the Fourier method, when 91 transmissions are used, the image quality for both limited-diffraction beam transmissions and plane-wave transmissions is close to the case where both transmit and receive focusing are applied.

Figure 4.8 Images constructed from simulated echoes for the Fourier method with limited-diffraction-beam and plane-wave transmissions and the delay-and-sum method with focused transmissions at multiple focal depths. (a) 91 transmissions for the Fourier method, with $\Delta k_{x_T} = \pi/(45\Delta x_0)$; (b) 91 transmissions for the Fourier method, with $\Delta\theta = 1$ degree; (c) 263 transmissions with a focal depth of 70 mm for the delay-and-sum method, with $\delta = c/f_c/2/D$; (d) transmissions with 6 focal depths of 20 mm, 40 mm, 60 mm, 80 mm, 100 mm, and 120 mm, respectively, for the delay-and-sum method. The speed of sound $c$ is assumed to be 1540 m/s, and the center frequency and aperture of the transducer are 3.5 MHz and 40.96 mm, respectively. The pitch of the transducer $\Delta x_0$ is 0.32 mm.

However, for the Fourier method a relatively small number of transmissions is needed. For the delay-and-sum method, the number of transmissions is linearly proportional to the number of foci used; to achieve transmit/receive focusing for every point in the object, many focal depths are needed and the frame rate is dramatically reduced. When only one focal depth is used, as in panel (c), the image quality of the points around the focal depth is high, but for the points away from the focal depth the resolution is poor and the sidelobe level is high.

4.5 In vitro and in vivo experiments

In this section, experiments are carried out to validate the theory and methods and to confirm the results from the simulations. Three different types of experiments are designed. First, a wire phantom in a water tank is studied. These wires serve as point spread functions in a 2-D imaging situation. Since the experiments are conducted in the water tank, attenuation is not a factor and there are no random scatterers involved; the experimental results are used to confirm the theory and simulation under relatively ideal conditions. Second, a tissue-mimicking phantom with wire and cyst targets is studied. The image resolution of the Fourier method is tested on the wire targets of the phantom, and the contrast resolution is tested on the cyst targets. Random scatterers and frequency-dependent attenuation are included. Since the phantom has a known speed of sound, effects of phase aberration are not included. The experimental results are used to check the performance of the method under more realistic situations. Third, in vivo experiments are conducted on a kidney and a heart to assess the performance of the proposed method under clinical situations. The kidney is a slowly moving organ and the heart is a fast moving organ. For the Fourier method, the spectra of the object gained from

different transmission events are added coherently to get the final image, so aliasing from motion is always a concern. The in vivo experimental results from the heart partially address this issue under clinical situations.

Experimental system

A self-built HFR imaging system is used in all experiments to acquire the RF data. The system contains 128 transmit/receive channels. Each channel can send an independent and arbitrary waveform lasting up to 51.2 μs with a sampling rate of 40 MHz and a precision of 12 bits, and samples echo signals at 40 MHz with a precision of 12 bits. Each channel has a 64 MByte SDRAM (expandable to 512 MByte), which can hold about one second of RF data at the maximum frame rate of 3750 frames/second for a depth of 200 mm. The system is fully configurable by a Windows application running on a PC through a high-speed USB interface, and the RF data stored in the system are also transferred to the PC through this interface. In transmit mode, a one-cycle sine wave at the center frequency of the transducer is used to excite the array. For limited-diffraction beams, apodizations are applied to each element; for plane waves and focused transmissions, proper time delays are applied to each element with a precision of 6.25 ns. The transmit waveform is sampled at 40 MHz with a precision of 12 bits. A high-speed D/A converts the transmit waveform into an analog signal and sends it to a linear power amplifier, and the amplified waveform drives the transducer elements. In receive mode, all channels digitize the echo signals at the same time at a sampling rate of 40 MHz and a precision of 12 bits, and store the data in their separate SDRAMs. The digitized raw echo data are then transferred to a PC through a high-speed USB link after the acquisition is finished. The data are digitally filtered to reduce the noise

introduced by the receiving circuits and down-sampled to 10 MHz for the Fourier methods. For the delay-and-sum method, the sampling rate is kept at 40 MHz.

Experiment in a water tank

The phantom used in this experiment is a self-built wire phantom. It contains 7 lines made of nylon threads with a diameter of 0.25 mm. Six of them are evenly distributed along a vertical line with a distance of 20 mm between neighboring lines; the other line is located at the same depth as the third line, but with a lateral displacement of 20 mm. The temperature of the degassed pure water is 18.5 degrees Celsius, and the corresponding speed of sound is calculated according to Lubbers's simplified formula [220]. The transducer used in this experiment is the custom-built one mentioned in the previous section. Eight sets of echoes are acquired. The first set corresponds to 91 plane-wave transmissions with $\Delta\theta = 1$ degree, covering an azimuthal view of 90 degrees. The second set corresponds to 91 limited-diffraction-beam transmissions with $\Delta k_{x_T} = \pi/(45\Delta x_0)$. Similar to the simulation, the echoes are also down-sampled to obtain the echoes for fewer transmissions. The other six sets correspond to focused transmissions with 6 different focal depths: 20 mm, 40 mm, 60 mm, 80 mm, 100 mm, and 120 mm, respectively.

Figure 4.9 Images constructed from experimental data of wire targets in water for the Fourier method with limited-diffraction-beam transmissions and the delay-and-sum method with focused transmissions. (a) One transmission for the Fourier method, with $k_{x_T} = 0$; (b) 11 transmissions for the Fourier method, with $\Delta k_{x_T} = \pi/(5\Delta x_0)$; (c) 19 transmissions for the Fourier method, with $\Delta k_{x_T} = \pi/(9\Delta x_0)$; (d) 91 transmissions for the Fourier method, with $\Delta k_{x_T} = \pi/(45\Delta x_0)$; (e) 274 transmissions with a focal depth of 70 mm for the delay-and-sum method, with $\delta = c/f_c/2/D$; (f) transmissions with 6 focal depths of 20 mm, 40 mm, 60 mm, 80 mm, 100 mm, and 120 mm, respectively, for the delay-and-sum method. The speed of sound $c$ is that of the water in the tank, and the center frequency and aperture of the transducer are 3.5 MHz and 40.96 mm, respectively. The pitch of the transducer $\Delta x_0$ is 0.32 mm.

Figure 4.10 Images constructed from experimental data of wire targets in water for the Fourier method with plane-wave transmissions and the delay-and-sum method with focused transmissions. (a) One transmission for the Fourier method, with $\theta = 0$ degrees; (b) 11 transmissions for the Fourier method, with $\Delta\theta = 9$ degrees; (c) 19 transmissions for the Fourier method, with $\Delta\theta = 5$ degrees; (d) 91 transmissions for the Fourier method, with $\Delta\theta = 1$ degree; (e) 274 transmissions with a focal depth of 70 mm for the delay-and-sum method, with $\delta = c/f_c/2/D$; (f) transmissions with 6 focal depths of 20 mm, 40 mm, 60 mm, 80 mm, 100 mm, and 120 mm, respectively, for the delay-and-sum method. The speed of sound $c$ is that of the water in the tank, and the central frequency and aperture of the transducer are 3.5 MHz and 40.96 mm, respectively.

Figure 4.9 and Figure 4.10 show the images of the Fourier method with limited-diffraction-beam transmissions and plane-wave transmissions and of the delay-and-sum method with focused transmissions. From panel (a) to panel (d), 1, 11, 19, and 91 transmissions are used, respectively. For panel (e), 274 transmissions with a focal depth of 60 mm are used; for panel (f), transmissions with 6 different focal depths are used. Through panels (a) to (d), the Fourier method reconstructs the object accurately, and the image quality increases with the number of transmissions. As in the simulations, the resolution increases dramatically when 11 transmissions are used, and the sidelobe levels are also reduced significantly. Comparing panels (b) and (c), the resolution does not change much when 19 transmissions are used, but the sidelobe levels clearly decrease. When 91 transmissions are used, the sidelobe level is below the -50 dB level. The image resolution in panel (d) is higher than that of panel (e), except for the points at the transmit focus, and close to that of panel (f), where transmit/receive focusing is achieved for every point.

Experiment on tissue-mimicking phantom

The tissue-mimicking phantom is the ATS 539 from ATS Laboratories, Inc., Bridgeport, CT. The phantom is made from urethane rubber with an attenuation coefficient of 0.5 dB/cm/MHz and a speed of sound of 1450 m/s. The line targets are made of monofilament nylon with a diameter of 0.12 mm. The anechoic target structures are cylinders of varying diameters, from 2 mm to 8 mm. The gray scale target structures are cylinders with a diameter of 15 mm filled with different background materials; the contrasts of these materials to the rubber are +15 dB, +6 dB, +3 dB, -3 dB, -6 dB, and -15 dB, respectively. Figure 4.5 schematically shows the pattern of these targets.

Figure 4.11 Images constructed from experimental data of wire targets in the ATS539 phantom for the Fourier method with limited-diffraction-beam transmissions and the delay-and-sum method with focused transmissions. (a) One transmission for the Fourier method, with $k_{x_T} = 0$; (b) 11 transmissions for the Fourier method, with $\Delta k_{x_T} = \pi/(5\Delta x_0)$; (c) 91 transmissions for the Fourier method, with $\Delta k_{x_T} = \pi/(45\Delta x_0)$; (d) 279 transmissions with a focal depth of 70 mm for the delay-and-sum method, with $\delta = c/f_c/2/D$. The speed of sound $c$ is 1450 m/s, and the center frequency and aperture of the transducer are 3.5 MHz and 40.96 mm, respectively. The pitch of the transducer $\Delta x_0$ is 0.32 mm.

Figure 4.12 Images constructed from experimental data of wire targets in the ATS539 phantom for the Fourier method with plane-wave transmissions and the delay-and-sum method with focused transmissions. (a) One transmission for the Fourier method, with $\theta = 0$ degrees; (b) 11 transmissions for the Fourier method, with $\Delta\theta = 9$ degrees; (c) 91 transmissions for the Fourier method, with $\Delta\theta = 1$ degree; (d) 279 transmissions with a focal depth of 70 mm for the delay-and-sum method, with $\delta = c/f_c/2/D$. The speed of sound $c$ is 1450 m/s, and the center frequency and aperture of the transducer are 3.5 MHz and 40.96 mm, respectively.

Figure 4.13 Images constructed from experimental data of cyst targets in the ATS539 phantom for the Fourier method with limited-diffraction-beam transmissions and the delay-and-sum method with focused transmissions. (a) One transmission for the Fourier method, with $k_{x_T} = 0$; (b) 11 transmissions for the Fourier method, with $\Delta k_{x_T} = \pi/(5\Delta x_0)$; (c) 91 transmissions for the Fourier method, with $\Delta k_{x_T} = \pi/(45\Delta x_0)$; (d) 279 transmissions with a focal depth of 70 mm for the delay-and-sum method, with $\delta = c/f_c/2/D$. The speed of sound $c$ is 1450 m/s, and the center frequency and aperture of the transducer are 3.5 MHz and 40.96 mm, respectively. The pitch of the transducer $\Delta x_0$ is 0.32 mm.

Figure 4.14 Images constructed from experimental data of cyst targets in the ATS539 phantom for the Fourier method with plane-wave transmissions and the delay-and-sum method with focused transmissions. (a) One transmission for the Fourier method, with $\theta = 0$ degrees; (b) 11 transmissions for the Fourier method, with $\Delta\theta = 9$ degrees; (c) 91 transmissions for the Fourier method, with $\Delta\theta = 1$ degree; (d) 279 transmissions with a focal depth of 70 mm for the delay-and-sum method, with $\delta = c/f_c/2/D$. The speed of sound $c$ is 1450 m/s, and the center frequency and aperture of the transducer are 3.5 MHz and 40.96 mm, respectively.

Figure 4.11 and Figure 4.12 show the results for the wire targets with the Fourier method and the delay-and-sum method. Limited-diffraction beam and plane-wave transmissions are used for the Fourier method, and focused transmissions with a focal depth of 70 mm are used for the delay-and-sum method. The image quality improves as the number of transmissions increases for the Fourier method: the resolution becomes higher and the signal-to-noise ratio also becomes higher when more transmissions are used. When 91 transmissions are used, the image quality of the Fourier method is higher than that of the delay-and-sum method.

Figure 4.13 shows the results of the Fourier method with limited-diffraction beam transmissions. The contrast resolution and signal-to-noise ratio also improve when the number of transmissions increases, but the improvement is not as effective as in the case of plane-wave transmissions. This is partially because only half of the ultrasound energy is transmitted, due to the apodization process used to generate the limited-diffraction beams. Another reason is related to the energy distribution of limited-diffraction beams. For a fixed $k_{x_T}$, all frequency components $k$ of the limited-diffraction beam have to meet the requirement $k\sin\theta = k_{x_T}$, which means that the higher frequency components have a smaller steering angle and the lower frequency components have a larger steering angle; the ultrasound energy from different frequencies is not directed in the same direction. Meanwhile, the higher frequency components have a higher attenuation coefficient, which in turn reduces the signal-to-noise ratio.

Figure 4.14 shows the results for the cyst targets with the Fourier method and the delay-and-sum method. Plane-wave transmissions are used for the Fourier method, and focused transmissions with a focal depth of 70 mm are used for the delay-and-sum method. For the cysts, the contrast resolution and signal-to-noise ratio improve when the number of

transmissions increases. When 91 transmissions are used, five rows of cyst targets can be clearly resolved and the contrast is very high, while for the delay-and-sum method only the cysts that are close to the focal depth are clearly resolved, and the contrasts for those cysts away from the focal depth are relatively low.

In vivo Experiments

The in vivo experiments are carried out on the right kidney and the heart of a healthy volunteer. The transducer used in these experiments is a V2 phased array from ACUSON. The center frequency used in the experiments is 2.5 MHz. The transducer has 128 elements, with a pitch of 0.15 mm; the length of the transducer is 19.2 mm and the width is 14 mm. An ACUSON 128XP commercial machine is used to find the location of the targets, and then the transducer is detached from the machine and connected to the HFR system through a converter. For the kidney, the HFR system is configured to acquire 179 frames of RF data continuously, where the first 88 frames correspond to the focused transmissions for the delay-and-sum method and the second 91 frames correspond to the plane-wave transmissions for the Fourier method. The total acquisition time is only a few tens of milliseconds; in such a short period of time, the position of the kidney remains essentially stationary.

Figure 4.15 Images constructed from in vivo experimental data of the right kidney of a volunteer for the Fourier method with plane-wave transmissions and the delay-and-sum method with focused transmissions. (a) 91 transmissions for the Fourier method, with $\Delta\theta = 1$ degree; (b) 88 transmissions with a focal depth of 70 mm for the delay-and-sum method, with $\delta = c/f_c/2/D$. The speed of sound $c$ is assumed to be 1540 m/s, and the center frequency and aperture of the transducer are 2.5 MHz and 19.20 mm, respectively.

Figure 4.16 Images constructed from in vivo experimental data of the heart of a volunteer for the Fourier method with plane-wave transmissions and the delay-and-sum method with focused transmissions. (a) 11 transmissions for the Fourier method, with $\Delta\theta = 9$ degrees; (b) 19 transmissions for the Fourier method, with $\Delta\theta = 5$ degrees; (c) 91 transmissions for the Fourier method, with $\Delta\theta = 1$ degree; (d) 88 transmissions with a focal depth of 70 mm for the delay-and-sum method, with $\delta = c/f_c/2/D$. The speed of sound $c$ is assumed to be 1540 m/s, and the center frequency and aperture of the transducer are 2.5 MHz and 19.20 mm, respectively.

Figure 4.15 shows the results of the kidney with the Fourier method and the delay-and-sum method. The image in panel (a) has a smaller speckle size and finer features than the image in panel (b); the image constructed with the Fourier method has a higher quality than that constructed with the delay-and-sum method. More detailed anatomical structures are revealed in panel (a), which can give doctors more confidence in their diagnosis.

For the heart, the HFR system is configured to acquire 209 frames of RF data continuously with a depth of field of 120 mm. The first 88 frames correspond to focused transmissions with a focal depth of 70 mm; the second 11 frames correspond to plane-wave transmissions with $\Delta\theta = 9$ degrees covering a view of 90 degrees; the third 19 frames correspond to plane-wave transmissions with $\Delta\theta = 5$ degrees covering the same view; and the remaining 91 frames correspond to plane-wave transmissions with $\Delta\theta = 1$ degree covering the same view. The data acquisition is triggered and synchronized by the ECG signal. With an interval of 187 μs for each transmission, the total acquisition time for the 209 frames is about 39 ms.

Figure 4.16 shows the results of the heart with the Fourier method and the delay-and-sum method. From panel (a) to panel (d), 11, 19, 91, and 88 transmissions are used, respectively, which correspond to frame rates of 486, 281, 58, and 60 frames/second, respectively. When 11 transmissions are used, the frame rate is very high; however, the image quality is relatively poor, and the sidelobes are higher compared with those of the delay-and-sum method in panel (d). For the Fourier method, when the number of transmissions increases, the image quality also improves: panel (b) has more detailed features than panel (a), and panel (c) shows very fine features. Few motion artifacts are observed in the images constructed with the Fourier method. This is partially due to the facts

that most structures of the heart move at a velocity under 200 mm/s and most of the image is illuminated by a small number of plane waves. The interval between transmissions is very short; in such a short period of time these structures move very small distances (at 200 mm/s, only about 0.04 mm in 187 μs) and the aliasing effects are not obvious. It is also due to the fact that at this specific moment, as indicated by the ECG in Figure 4.16, the structures of the heart do not move very fast.

4.6 Discussion

In this section, sampling constraints and aperture effects are discussed.

Sampling constraints

There are several sampling issues in the proposed imaging method. The first issue is the sampling in the spatial domain, i.e., the sampling of the array elements. The distance between two neighboring elements, i.e., the pitch, is denoted as Δx0. Because the transmitting transfer function of the transducer T(k) acts as a band-pass filter, the ultrasound energy concentrates within a frequency band [f1, f2] centered around the center frequency fc, where f1 is the lowest frequency and f2 the highest frequency. The corresponding wavelengths and wave numbers are denoted as λ1, λ2, k1 and k2, respectively. For transmitting limited-diffraction array beams, the maximum value of the parameter k_{x_T} that can be used is k2, which leads to the following requirement for Δx0:

\Delta x_0 \le \lambda_2 / 2 .    (4.52)

For transmitting steered plane waves without grating lobes at all working frequencies, the sampling requirement for Δx0 is the same as (4.52).
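As a quick numerical illustration of (4.52), the following minimal sketch checks the pitch requirement; only the 0.15 mm pitch of the V2 probe is taken from the text above, and the upper band edge f2 is an assumed value for a 2.5 MHz probe used purely for illustration.

```python
# Check the element-pitch requirement (4.52) for grating-lobe-free
# steered plane-wave / limited-diffraction-beam transmission.
c = 1540.0       # speed of sound, m/s
f2 = 3.5e6       # Hz, assumed highest significant frequency for a 2.5 MHz probe
dx0 = 0.15e-3    # m, element pitch of the 128-element phased array

lambda2 = c / f2                       # shortest wavelength in the band
print(f"lambda2/2 = {1e3 * lambda2 / 2:.2f} mm, pitch = {1e3 * dx0:.2f} mm")
print("requirement (4.52) satisfied:", dx0 <= lambda2 / 2)
```

With these assumed numbers, λ2/2 ≈ 0.22 mm, so the 0.15 mm pitch satisfies the requirement.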

The pitch of the array determines the maximum value of kx after the Fourier transform, which is given by

k_{x,\max} = \pi / \Delta x_0 .    (4.53)

k_{x,max} limits the maximum spatial frequency available in the echo spectrum. However, it does not directly limit the maximum spatial frequency of the object spectrum, since a displacement is added when the echo spectrum is mapped to the object spectrum as in (4.37) and (4.38). This displacement is important in the sense that it enlarges the frequency range of k'_x, which is directly related to the lateral resolution of the imaging system. For oversampled arrays, k_{x,max} is large enough and will not limit the lateral resolution even without the displacement. However, for undersampled arrays, k_{x,max} can become a limiting factor even with the displacement. In this case, one can apply a phase shift to compensate for the transmitted wave after the echo signals are received and before the 2-D Fourier transform is applied. In this way, the displacement is doubled. This process not only enlarges the spatial frequency range, but also reduces interpolation errors. It is worth noting that Liu [168] also applied a phase shift to the echo signals, but that phase was used to cancel the displacement in (4.38), which resulted in a one-dimensional mapping process after the 2-D Fourier transform. One severe drawback of that approach is that the frequency range of the object spectrum is directly limited by k_{x,max}; the lateral resolution for undersampled arrays would then be greatly degraded.

The second issue is the sampling in the time domain. The sampling interval in time is denoted as Δt. To capture all the frequencies inside the echo signal, Δt must satisfy the following condition:

\Delta t \le 1 / (2 f_2) .    (4.54)

The sampling interval in time determines the maximum value of k, which is given by

k_{\max} = \pi / (c \Delta t) .    (4.55)

The third sampling issue is the sampling in the frequency domain, which is related to kx and k. The sampling intervals are denoted as Δkx and Δk, respectively. The sampling of these two parameters is not done directly; they are adjusted by zero padding before the 2-D Fourier transform. When these parameters get smaller, more zeros are needed, and when the dimension of the 2-D array becomes larger, the 2-D Fourier transform takes longer to compute.

The final image is reconstructed by a 2-D inverse Fourier transform of the spectrum of the object scattering function. However, the spectrum of the object is not measured directly. Instead, we obtain it from the spectrum of the echo signals, which are measured directly. There is a nonlinear mapping process involved. To avoid nonlinear aliasing, the sampling of the measured signals should meet the following constraint [221]:

\frac{2L}{2\pi} \le \left| \frac{n}{\Delta k_x}\frac{\partial k_x}{\partial k_x'} + \frac{m}{\Delta k}\frac{\partial k}{\partial k_x'} \right| + \left| \frac{n}{\Delta k_x}\frac{\partial k_x}{\partial k_z'} + \frac{m}{\Delta k}\frac{\partial k}{\partial k_z'} \right| ,    (4.56)

where (n, m) are the indices of the evenly sampled grid of coordinates (kx, k); the partial derivatives are based on the coordinate transform function (4.41) or (4.48); and L is the imaging depth. For limited-diffraction array beams, according to (4.41), we have

\frac{\partial k_x}{\partial k_x'} = 1 ;    (4.57)

\frac{\partial k_x}{\partial k_z'} = 0 ;    (4.58)

\frac{\partial k}{\partial k_x'} = \frac{\left(k_x' - k_{x_T}\right)\left(k_z'^2 + k_x'^2 - 2 k_x' k_{x_T}\right)}{k_z' \sqrt{4\left(k_x' - k_{x_T}\right)^2 k_z'^2 + \left(k_z'^2 - k_x'^2 + 2 k_x' k_{x_T}\right)^2}} ;    (4.59)

\frac{\partial k}{\partial k_z'} = \frac{k_z'^4 - \left(k_x'^2 - 2 k_x' k_{x_T}\right)^2}{2 k_z'^2 \sqrt{4\left(k_x' - k_{x_T}\right)^2 k_z'^2 + \left(k_z'^2 - k_x'^2 + 2 k_x' k_{x_T}\right)^2}} .    (4.60)

Substitution of (4.57), (4.58), (4.59) and (4.60) into (4.56) leads to

\frac{2L}{2\pi} \le \left| \frac{n}{\Delta k_x} + \frac{m}{\Delta k}\,\frac{\left(k_x' - k_{x_T}\right)\left(k_z'^2 + k_x'^2 - 2 k_x' k_{x_T}\right)}{k_z' \sqrt{4\left(k_x' - k_{x_T}\right)^2 k_z'^2 + \left(k_z'^2 - k_x'^2 + 2 k_x' k_{x_T}\right)^2}} \right| + \left| \frac{m}{\Delta k}\,\frac{k_z'^4 - \left(k_x'^2 - 2 k_x' k_{x_T}\right)^2}{2 k_z'^2 \sqrt{4\left(k_x' - k_{x_T}\right)^2 k_z'^2 + \left(k_z'^2 - k_x'^2 + 2 k_x' k_{x_T}\right)^2}} \right| .    (4.61)

By letting \Delta k_x = \Delta k and ignoring the second term on the right-hand side of the equation, we have

\frac{\Delta k L}{\pi} \le \left| n + m \frac{\partial k}{\partial k_x'} \right| .    (4.62)

While it is hard to find the minimum of the functions on the right-hand sides of (4.61) and (4.62), it is easy to show that for the vast majority of grid points the following inequality holds:

\left| n + m \frac{\partial k}{\partial k_x'} \right| \ge 1 .    (4.63)

If \Delta k L / \pi \le 1, (4.62) will hold for the vast majority of grid points. Thus we get the following sampling constraints for kx and k:

\Delta k \le \pi / L , \qquad \Delta k_x \le \pi / L .    (4.64)

For steered plane waves, according to (4.48), we have

\frac{\partial k}{\partial k_x'} = \frac{2 k_x' k_z' \cos\theta + \left(k_x'^2 - k_z'^2\right)\sin\theta}{2\left(k_z'\cos\theta + k_x'\sin\theta\right)^2} ;    (4.65)

\frac{\partial k}{\partial k_z'} = \frac{2 k_x' k_z' \sin\theta + \left(k_z'^2 - k_x'^2\right)\cos\theta}{2\left(k_z'\cos\theta + k_x'\sin\theta\right)^2} ;    (4.66)

\frac{\partial k_x}{\partial k_x'} = \frac{k_z'^2 + \left(k_z'\cos\theta + k_x'\sin\theta\right)^2}{2\left(k_z'\cos\theta + k_x'\sin\theta\right)^2} ;    (4.67)

\frac{\partial k_x}{\partial k_z'} = \frac{\sin\theta\left[\left(k_x'^2 - k_z'^2\right)\cos\theta - 2 k_x' k_z'\sin\theta\right]}{2\left(k_z'\cos\theta + k_x'\sin\theta\right)^2} .    (4.68)

By letting \Delta k_x = \Delta k and substituting (4.65), (4.66), (4.67) and (4.68) into (4.56), one gets

\frac{\Delta k L}{\pi} \le \frac{\left|\left(m - n\sin\theta\right)\left[2 k_x' k_z'\sin\theta + \left(k_z'^2 - k_x'^2\right)\cos\theta\right]\right|}{2\left(k_z'\cos\theta + k_x'\sin\theta\right)^2} + \frac{\left| n\left[k_z'^2 + \left(k_z'\cos\theta + k_x'\sin\theta\right)^2\right] + m\left[2 k_x' k_z'\cos\theta + \left(k_x'^2 - k_z'^2\right)\sin\theta\right] \right|}{2\left(k_z'\cos\theta + k_x'\sin\theta\right)^2} .    (4.69)

The above equation can be simplified as follows by ignoring the first term on the right-hand side of the equation:

\frac{\Delta k L}{\pi} \le \frac{1}{2}\left| n + m\, g(k_x', k_z', \theta) \right| ,    (4.70)

where g(k_x', k_z', \theta) = \dfrac{2 k_x' k_z'\cos\theta + \left(k_x'^2 - k_z'^2\right)\sin\theta}{\left(k_z'\cos\theta + k_x'\sin\theta\right)^2}, and m and n are integers. Applying similar reasoning, the sampling constraints for kx and k would be

\Delta k \le \pi / (2L) , \qquad \Delta k_x \le \pi / (2L) .    (4.71)

The above discussion of the constraints on kx and k is not strictly rigorous, but simulation results show that these constraints are good indicators for practical implementation of the imaging method. In practice, the constraints on these two parameters can be loosened to speed up the processing. Δk is not a problem, since the echo signals naturally meet the constraints given by (4.64) and (4.71); no zero padding is needed for the echo signals in the time domain. However, Δkx requires zero padding in the spatial domain. The padded aperture for limited-diffraction beams should be comparable to the imaging depth, while the padded aperture for steered plane waves should be larger than the imaging depth. The size of the data increases by 4 to 8 times, depending on the physical aperture and the imaging depth, but the expansion is necessary to avoid aliasing in the nonlinear mapping process.

Aperture effects

We first consider the finite-aperture effect in reception. In reception, if the aperture size is finite, not all of the backscattered energy is collected; the echo signals outside the aperture are not available. The echo signals for a finite aperture, e_FA(x_0, y_0; k), can be expressed as follows by modifying (4.13):

e_{FA}(x_0, y_0; k) = e(x_0, y_0; k)\,\Pi(x_0 / D)\,\Pi(y_0 / D) ,    (4.72)

where \Pi(\cdot) is a rectangular function and D is the aperture of a rectangular 2-D array. Taking the 2-D Fourier transform of (4.72) and changing variables, we obtain the relationship between the echo signals and the object scattering function:

E_{FA}(k_x, k_y; k) = \frac{j T(k) R(k) k^2}{4\pi^2 c k_z}\, F(k_x', k_y', k_z') * \operatorname{sinc}(D k_x) * \operatorname{sinc}(D k_y) .    (4.73)

The above equation shows that the spectrum of the echo signals becomes the convolution of the spectrum of the object with sinc functions. The spectrum of the object scattering function evaluated from the echo signals therefore becomes less accurate, and the point spread function of the imaging system will also depend on the aperture size.

Second, we consider the finite-aperture effect in transmission. In transmission, the limited aperture means that only part of the object is illuminated, so the received echoes do not contain information about the whole object. With a different transmission parameter, a different part of the object is illuminated. From the standpoint of a single spatial point, it will not be illuminated in every transmission event. Only when the transmission parameter meets certain requirements is this point illuminated, and only then do the received echoes contain its information. Otherwise, when this point is not illuminated, the echo spectrum does not contribute to the reconstruction of this point. Although dozens of transmissions may be used, only part of these transmissions actually contributes to one specific point.

Fourier domain coverage and image resolution

As shown in Figure 4.2 and Figure 4.3, for each transmission event we get a quarter of valid echo spectrum after the 2-D Fourier transform. With different transmission

parameters, the echo spectrum is mapped to different parts of the object spectrum. From the previous sections, we know that if a spatial point is illuminated by several transmission events, the echo spectra from these events all contain the information of this point. Since the transmission parameters are different, when mapped to the object spectrum, the different echo spectra do not totally overlap and have displacements directly related to the transmission parameters, as given in (4.29). Compared to a single transmission, the coverage in the Fourier domain of the spectrum of an object point is broadened each time an additional transmission with a different parameter, in which this point is illuminated, is added. When the coverage in the Fourier domain is broadened, the point spread function of this point in the spatial domain becomes smaller and the resolution is improved.

The image resolution depends on the number of transmissions and the transmission parameters used. Generally speaking, when more transmissions are used, the image resolution becomes better. Simulations and experiments both show that when the number of transmissions is similar to that of the conventional delay-and-sum method, transmit/receive focusing is achieved for every point and the image resolution is extremely high.

4.7 Summary

A general theory for Fourier based imaging has been developed in this chapter. The theory is derived by solving the wave equation for inhomogeneous media under the first-order Born approximation. The details of the reconstruction procedure, the sampling constraints, aperture effects and issues related to resolution are also considered in the context of 2-D B-mode imaging.

Simulations and experiments are carried out to verify the theory and the imaging methods. Although all simulations and experiments are carried out in the context of 2-D B-mode imaging, the proposed method is not restricted to this mode. The Fourier method also has great potential to realize 3-D imaging in real time with multiple limited-diffraction array beams or steered plane waves, where a 3-D FFT will reduce the computation even more dramatically. The results from simulations and experiments verify the validity of the theory and algorithm of the Fourier method and illustrate the feasibility of clinical applications. Further work will focus on hardware implementation of the proposed method and on retrieving other information, such as blood flow, based on the same frequency-domain architecture.
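As a concrete illustration of the reconstruction flow summarized above, the following minimal sketch processes a single steered plane-wave transmission. It is an assumption-laden outline, not the exact implementation of this chapter: the mapping k'_x = k_x + k sinθ, k'_z = sqrt(k² − k_x²) + k cosθ and its inverse, the zero-padding factor, the output grid, and the interpolation scheme are illustrative choices only.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

def pw_fourier_image(rf, pitch, fs, c, theta, depth, n_kz=512):
    """Sketch: one frame from a single plane-wave shot steered by theta (rad).

    rf : (n_elem, n_samp) echo traces; pitch [m]; fs [Hz]; c [m/s]; depth = L [m].
    """
    n_x, n_t = rf.shape

    # Zero-pad the aperture so that dkx <= pi/(2L), cf. Eq. (4.71); the time
    # axis usually needs no padding (Sec. 4.6.1).
    n_xp = max(n_x, int(np.ceil(4.0 * depth / pitch)))
    spec = np.fft.fftshift(np.fft.fft2(rf, s=(n_xp, n_t)))      # E(kx, k) on a regular grid
    kx = np.fft.fftshift(2.0 * np.pi * np.fft.fftfreq(n_xp, d=pitch))
    k = np.fft.fftshift(2.0 * np.pi * np.fft.fftfreq(n_t, d=1.0 / fs) / c)

    # Linear interpolators for the measured (complex) echo spectrum.
    interp_re = RegularGridInterpolator((kx, k), spec.real, bounds_error=False, fill_value=0.0)
    interp_im = RegularGridInterpolator((kx, k), spec.imag, bounds_error=False, fill_value=0.0)

    # Regular grid in the object-spectrum coordinates (kx', kz').
    kxp = kx
    kzp = np.linspace(1e-3, 2.0 * np.abs(k).max(), n_kz)
    KXP, KZP = np.meshgrid(kxp, kzp, indexing="ij")

    # Assumed inverse of the plane-wave mapping:
    denom = 2.0 * (KZP * np.cos(theta) + KXP * np.sin(theta))
    denom = np.where(np.abs(denom) > 1e-9, denom, np.inf)
    K = (KXP ** 2 + KZP ** 2) / denom
    KX = KXP - K * np.sin(theta)
    valid = (K > 0.0) & (np.abs(KX) < K)

    pts = np.stack([KX.ravel(), K.ravel()], axis=-1)
    F = interp_re(pts) + 1j * interp_im(pts)
    F = np.where(valid.ravel(), F, 0.0).reshape(K.shape)

    # Image ~ 2-D inverse FFT of the remapped object spectrum (axis scaling,
    # band-pass weighting and coherent compounding of shots are omitted here).
    return np.abs(np.fft.ifft2(np.fft.ifftshift(F, axes=0)))
```

In a full implementation, the spectra from several transmission angles would be mapped onto the same object-spectrum grid and summed coherently before the final inverse FFT, which is what broadens the Fourier-domain coverage and improves resolution as discussed above.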

CHAPTER V: SYSTEM AND LOGIC DESIGN OF HFR IMAGING SYSTEM

In this chapter, the system and logic design of the HFR imaging system are presented. The HFR imaging system is a general-purpose platform for research. It has all the front-end circuits of a typical ultrasound system and has additional storage and communication units for offline data processing. The control logic of the system is implemented through FPGAs and a Windows application running on a PC.

5.1 Introduction

In medical ultrasound research, in vitro and in vivo experiments play critical roles. For some studies, researchers can get the experimental data directly from commercial systems. These data can be extracted from video streams, either as frozen images or as video clips, and they include diagnostic information for medical applications. However, for researchers trying to improve system performance, this type of data contains less technical information than the beamformed RF (radio-frequency) data before image formation. RF data contain not only amplitude information but also phase information, which is very important for studies such as phase aberration correction [ ]. Some commercial systems equipped with an ultrasound research interface can give researchers access to these RF data. Through the research interface, the beamformed RF data can be stored into memory for offline processing. With these systems, researchers have the freedom to set up the different transmission conditions that are available and take

advantage of the front-end circuits of commercial systems. The SNR of these RF data is usually higher than that of self-built systems. A major drawback of these systems is that researchers cannot access the channel RF data before beamforming. These channel RF data are the starting point for beamforming studies [116]. In addition, researchers cannot transmit arbitrary waveforms to excite the transducer. This capability is crucial for many studies, such as coded excitation [188, 204, 213, 226, 227].

To gain full control of transmit beamforming and access to channel RF data, several research systems [ ] have been designed and built. For example, the system built by Jensen's group [230] has 128 independent transmit channels capable of transmitting arbitrary waveforms and 64 receive channels with a digitizing precision of 12 bits. The system can be configured to realize different imaging methods by changing the control programs inside the FPGAs. With such systems, researchers basically have total freedom in terms of transmit and receive beamforming and the capability to explore the merits of different ultrasound imaging methods.

To experimentally test and verify the imaging methods proposed in previous chapters, we also designed and built a high-frame-rate imaging system in our lab. The goal is to design a general-purpose ultrasound imaging system. Since the main usage of the system is in research, flexibility is very important. To achieve this flexibility, the system implements basic functions, such as transmission (TX) and reception (RX) of signals, with high-speed hardware and leaves configuration and control to an application on a PC. The system is also designed to work with external devices. The system has 128 channels, and each channel is functionally independent. The specifications are as follows:

- Each channel has independent transmission beamforming capability. The shape, length, phase and amplitude of the transmission waveforms are configurable.
- Each channel has an independent linear transmission power amplifier.
- Each channel has independent TGC control.
- Each channel has an independent high-speed 12-bit D/A.
- Each channel has independent SDRAM. The size of the SDRAM can be 64M, 128M, 256M or 512M.
- Transmission and acquisition are controlled by a PC.
- Configuration and echo data are transferred through a high-speed USB 2.0 port.

Figure 5.1 The system diagram of the HFR imaging system.

The system is divided into four parts, as shown in Figure 5.1. One is a Windows application. The application provides the interface for users to control the system. It also

drives the high-speed USB port. The second is a USB interface board. This board receives the configuration data from the PC and sends them to the main control board; it also receives echo data from the main control board and sends them back to the PC. The third is the main control board. This part holds the configuration data for the system and controls the transmission, acquisition and data transfer according to the settings. The fourth part is the channel board. There are 16 channel boards in total, and each channel board has 8 channels. A channel board executes commands from the main control board and physically implements transmissions and data acquisitions.

The Windows application provides a graphical user interface (GUI) for a user to configure and control the system. The main configurations include TGC control, transmission beamforming, transmission waveform, depth, ECG setting and high-voltage level, etc. The application formats the configuration data and sends them to the main control board through the high-speed USB port. The start acquisition, start data transfer, and memory test commands can also be issued through the interface. The application also displays the echo data and constructs images after the echo data are transferred, if necessary.

The USB interface board provides an interface between the main control board and the high-speed USB port. This interface is configurable to meet the timing of the main control board. It implements the high-speed USB protocol. Configuration data and echo data are transferred through this interface.

The main control board receives the configuration data and commands from the PC. It stores the configuration data and sends them to the channel boards in a controlled manner. The main control board also drives the LED display to show the working status of the system. Power and high voltage are also controlled by the main control board. The main control board

controls the transmission and data acquisition by telling the channels when to transmit and when to start sampling.

Each channel board includes 8 independent channels, which handle transmission, acquisition and data transfer. The channel boards drive the D/A converters for the transmission waveforms on the transmission board, where each waveform is converted to an analog signal and amplified to drive the transducer elements. During acquisition, the channel boards generate the TGC control signal for the TGC amplifier and store the sampled echo data in SDRAM. The channel boards also retrieve echo data and send them to the main board during the data transfer process.

The control logic in the main control board, the channel boards and the USB interface board is implemented in field-programmable gate arrays (FPGAs). Two types of mid-scale FPGAs are used, both from Xilinx. A Spartan-II XC2S200, which has 200,000 system gates, is used on the main control board. A Spartan-II XC2S150, which has 150,000 system gates, is used on the channel boards and the USB interface board. The USB chip is from Cypress Semiconductor, and the control firmware is custom-developed to suit the system requirements. The control logic is written in VHDL (Very High Speed Integrated Circuit Hardware Description Language). Basic components, such as counters, are generated by the Coregen tool provided with the development tools. The VHDL code is synthesized by Synopsys FPGA Express. Routing is done with the Xilinx development tools by providing timing constraint files. The firmware is written in C and compiled in the Keil uVision2 integrated development environment.

In the following sections, the design of each part is briefly presented. The emphasis is on the system and function design; the technical details are presented in the related documentation.

5.2 Main Board

Circuit Design of Main Board

The main board is located between the PC and the channel boards. The main function of the main board is to transfer data between the PC and each channel and to coordinate the data acquisition and transfer sequences. The circuits include the following parts: (1) FPGA and programming circuits, which control and coordinate all other parts on the main board; (2) SDRAM circuits, which are used to hold configuration data; (3) main-board-to-channel-board interface circuits, which are used to transfer data between the main board and the channel boards; (4) ECG A/D circuits, which digitize the analog ECG signal; (5) front panel I/O circuits, which are used for communication between the main board and the front panel; (6) set-top box I/O circuits, which are used for communication between the main board and the FX2 USB board; (7) high-voltage D/A circuits, which are used to control the high-voltage level; (8) TGC A/D circuits, which are used to digitize TGC inputs; and (9) clock-generating circuits, which provide the system (master) clock.

Working Sequence of Main Board

After it is powered up, the control unit on the main board first sends a reset command to all channel boards, waits for 5 seconds, and then tries to communicate with each channel. If no communication can be established within another 5 seconds, the main board halts and displays a communication error. If all channels pass the communication test, it sends a memory sleep command to all channels and shows that the system is ready for inputs from the

front panel or the IMT program. If the input is a memory test command, it sends a memory wake-up command to all channels and then sends the memory test command. After that, it asks each channel to report the memory test result. If a result is wrong, it halts and displays an error; otherwise, it waits for another input. If the input is a configuration data transfer command, it communicates with the PC through the USB port, updates all configurations and stores some configuration data to SDRAM, and then waits for another input. If the input is a start acquisition command, it first sends a memory wake-up command to all channels. Then it sends the common configuration data to all channels. After that it sends the waveform data to each channel, followed by the first delay data for each channel. When all configuration data have been sent, it issues the transmission trigger command, followed by the sampling trigger command. Then it sends the second delay data for each channel. It repeats the above two steps until all the transmissions are finished. In the end, it sends a memory sleep command to all channels and waits for the next input. If the input is a data transfer command, it sends a memory wake-up command to all channels and a transfer command to one channel, sends the received data to the USB board, and then asks the next channel for the same amount of data. It repeats the above steps until all data are transferred to the PC. After that, it sends a memory sleep command to all channels and waits for the next input.

Function Requirements of Control Logic for Main Board

The control logic of the main board needs to implement the following functions:

- Communicate with the PC through the USB port; receive configuration data and commands and send echo data back.

- Configure the main board itself and the channel boards according to the configuration data.
- Coordinate the transmission, acquisition, and data transfer procedures.
- Drive the LED unit in the front panel; display the current status of the system.
- Drive the SDRAM; store and retrieve configuration data.
- Drive the ECG A/D; digitize and store the ECG signal.
- Drive the high-voltage D/A unit; control the amplitude of the transmit waveforms.

Function Partition of Control Logic for Main Board

According to the functions required for the main board, the control logic is divided into different function blocks, including the clock regulation block, the front panel and ECG A/D interface block, the SDRAM control block, the TGC data handling block, the channel communication and coordination block, the ECG and echo data transfer block, the ECG data handling block, the command processing block, the display driving block and the USB port interface block.

The clock regulation block regulates the input clock and outputs a regulated 40 MHz system clock and some lower-frequency pulses used by other blocks.

The front panel and ECG interface block registers memory settings and commands from the front panel and the USB port block, and it also provides an ECG A/D interface, which drives the ECG A/D circuits.

The SDRAM control block generates the control signals and addresses for the SDRAM circuits. It writes configuration data to the SDRAM at the request of the USB block and retrieves data from the SDRAM at the request of the channel communication and coordination block. It also

switches echo data and high-voltage D/A data onto the SDRAM address bus and data bus at the request of the command processing block.

The TGC data handling block stores TGC data from the USB port block and sends TGC data to the channel communication and coordination block when asked.

The channel communication and coordination block handles all the communications between the main board and all channels. It controls the transmission and acquisition events and coordinates the echo data transfer at the request of the ECG and echo data transfer block. It receives signals and data from other blocks, formats them into commands and configuration data, and sends them to all channels.

The ECG and echo data transfer block coordinates the ECG and echo data transfer procedure. It formats the ECG and echo data and sends them to the USB interface block.

The ECG data handling block asks the front panel and ECG A/D interface block to drive the ECG A/D circuits and stores the digitized data. At the same time, it detects the peak value of the ECG signal and generates the necessary trigger signals according to the ECG delay time setting from the command processing block. It also sends ECG data to the ECG and echo data transfer block when asked.

The command processing block receives commands from the front panel and ECG A/D interface block and analyzes them. It controls the working sequence of the main board and sends requests to other blocks.

The display driving block drives the LED unit in the front panel. It decodes requests from the command processing block, the USB block and the ECG and echo data transfer block, generates the corresponding display signals, and sends them to the front panel.

The USB port interface block generates the control signals for the GPIF (General Programmable Interface) in the USB board. It receives configuration data from the PC; asks the SDRAM control block and the TGC data handling block to store these data; and updates the other configurable data. It generates the handshaking signals for the GPIF to send echo data to the PC. It decodes commands from the PC and sends them to the front panel and ECG interface block.

5.3 Channel Board

Circuit Design of Channel Board

A channel board executes commands sent from the main board. The main functions of the channel board are transmission control, data acquisition, data storage and data transfer. The circuits include the following parts: (1) 8 FPGAs for the 8 channels and one common programming circuit; (2) one common communication interface circuit to the main board; (3) one SDRAM circuit for each channel; (4) one T/R switch circuit for each channel; (5) one TGC D/A circuit for each channel; (6) one receiving TGC amplifier circuit for each channel; (7) one receiving A/D circuit for each channel; (8) one transmission board interface circuit for each channel; (9) input circuits for independent use; and (10) clock generation circuits for independent use.

Function Requirements of Control Logic for Channel Board

The control logic of the channel board needs to implement the following functions:

- Communicate with the main board and receive configuration data.
- Execute the start transmission command and send out the transmission waveform and control signals.

- Execute the start acquisition command; generate and send out the TGC control data; digitize the echo data and store them in SDRAM.
- Execute memory test commands and report the test result.
- Execute the data transfer command; retrieve data from SDRAM and send them to the main board.

Function Partition of Control Logic for Channel Board

According to the functions required for the channel board, the control logic is divided into different function blocks, including the communication block, the TGC control block, the transmission control block, the memory control block and the independent use block.

The communication block analyzes all the commands sent from the main board and sends requests to other blocks. If the commands are related to memory and echo data, it sends requests to the memory control block; if the commands are related to TGC, it sends requests to the TGC control block; if the commands are related to transmission, it sends requests to the transmission control block.

The TGC control block generates the TGC control data according to the 8 TGC configuration values. The TGC control is divided into 7 segments for any depth; in each segment, the control data change linearly from the starting configuration value to the ending configuration value.

The transmission control block stores the waveform configuration data and sends them out according to the beamforming delay setting for each transmission.

The memory control block drives the SDRAM; stores and retrieves echo data; and executes the memory test command and reports the test result. When receiving the start sampling command from the communication block, it starts to format the digitized echo data and send

them to the SDRAM. When receiving the data transfer command, it retrieves data from the SDRAM, reformats them into 12-bit echo data, and sends them to the main board.

The independent use block registers the channel address and switches the I/Os.

5.4 USB Board

USB Board Circuit Design

The USB board handles the data exchange tasks between the main board and the PC. The main function of the USB board is passing configuration data and commands from the PC to the main board and passing echo data from the main board to the PC. The communication between the main board and the PC through the USB board uses the high-speed USB protocol. The circuits include the following parts: (1) an FPGA and the related programming circuits; (2) a high-speed A/D; (3) a high-speed D/A; (4) a 64-Mbyte SDRAM; (5) a one-digit LED display; and (6) a USB 2.0 FX2 chip from Cypress.

The control logic for the USB board includes two parts. The first part is implemented in the FPGA, and the second part is implemented in the FX2 through firmware.

Function Requirements of USB Board FPGA

The FPGA on the USB board implements the following functions:

- Manage the clock from the main board to the USB chip.
- Register all signals to match the timing requirements for data transfer between the main board and the USB chip CY7C.
- Implement address switching for the USB chip.
- Drive the D/A and the related LED.
- Drive the SDRAM, A/D, and display, if needed.

Function Partition of USB Board FPGA

The functions of this FPGA are quite simple in the current version, and all functions are implemented in one module. If control of the SDRAM, A/D and display is included, more modules will be needed according to the specific tasks for those units.

FX2 Program Design

The EZ-USB FX2 chip from Cypress Semiconductor is a single-chip implementation of the second-generation USB specification. Inside the chip, a transceiver is used to transmit and receive signals from the USB connectors; a Serial Interface Engine (SIE) is used to encode and decode data; a USB interface is used to transfer data between external logic and the SIE; a General Programmable Interface (GPIF) is used to link the USB interface to external logic; and an 8051 CPU is used to configure the engine and the interface. The chip liberates USB 2.0 developers from spending too much time on the details of the protocol and lets them focus on the functions they need.

The development for the FX2 chip includes two parts. The first part involves defining the interface between the external logic and the chip through the GPIF. The second part involves developing the firmware for the CPU to configure the engine and the interface.

5.5 IMT Windows Application

IMT is a Windows application that provides a graphical user interface for users to configure the HFR imaging system, control data acquisition, analyze data and construct images. The IMT program on the PC sends configuration data to set up the system. It also controls the data acquisition and data transfer processes. After the data are transferred, it displays them as images and waveforms. It can construct images according to the settings.
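A minimal sketch of how the transferred channel data could be unpacked for offline processing is shown below. The file name, byte order, sample packing and array ordering are assumptions made purely for illustration; the actual IMT file layout may differ.

```python
import numpy as np

# Assumed layout for illustration only: 12-bit samples stored in 16-bit words,
# ordered channel by channel; the real IMT file format may differ.
N_CHANNELS = 128       # independent receive channels (Section 5.1)
N_EVENTS = 91          # transmissions per frame (e.g. 1-degree steered plane waves)
N_SAMPLES = 4096       # samples recorded per transmission event (assumed)

raw = np.fromfile("hfr_echo.bin", dtype=np.int16)            # hypothetical file name
rf = raw.reshape(N_CHANNELS, N_EVENTS, N_SAMPLES).astype(np.float32)
rf -= rf.mean(axis=-1, keepdims=True)                         # remove per-trace DC offset
```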

Figure 5.2 Implementation of data transfer through USB.

Data Transfer Sub-system Design

The USB data transfer sub-system is made up of two parts. One part is on the PC side, where client software is needed to communicate with the system software. The other part is on the device side, where a USB 2.0 chip is needed to implement the USB specification at the physical and logical levels. At the functional level, the requirements of the HFR system are then considered. Figure 5.2 shows the system design and data flow model for the data transfer sub-system. The EZ-USB FX2 chip from Cypress Semiconductor is selected to implement the USB specification. Endpoint 2 and Endpoint 6 are used to implement the data transfer function; each of them is 512 bytes deep and quadruple-buffered. The IMT program is the client software and interfaces to the EZ-USB general-purpose USB driver, which communicates with the Microsoft USB driver.

Client Software Development

IMT sends two types of data to the HFR system. Configuration data and commands are sent to Endpoint 2 using bulk transfers, while vendor requests are sent to Endpoint 0 using control transfers. IMT also receives echo data from Endpoint 6 using bulk transfers. When the HFR system is linked to the PC through the USB cable, the operating system loads the EZ-USB general-purpose driver into the system. The driver runs in kernel mode. The IMT program accesses the EZ-USB GPD through I/O control calls. It first gets a handle to the device driver via a call to the Win32 function CreateFile(). The program then uses the Win32 function DeviceIoControl() to submit an I/O control code and relates input and output buffers to the driver through the handle returned by CreateFile().
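The same calls can be sketched as follows, here with Python's ctypes rather than the original C++ IMT code. The device path and the IOCTL code are hypothetical placeholders, and the pipe-selection structure expected by the EZ-USB driver is omitted; the real values come from the driver headers.

```python
import ctypes
from ctypes import wintypes

GENERIC_READ = 0x80000000
GENERIC_WRITE = 0x40000000
OPEN_EXISTING = 3
IOCTL_EZUSB_BULK_READ = 0x0000          # placeholder; the real code comes from the driver headers

kernel32 = ctypes.windll.kernel32
kernel32.CreateFileW.restype = wintypes.HANDLE

# 1) Open the device driver (CreateFile)
access = wintypes.DWORD(GENERIC_READ | GENERIC_WRITE)
handle = kernel32.CreateFileW(r"\\.\Ezusb-0",                 # hypothetical device path
                              access, 0, None, OPEN_EXISTING, 0, None)

# 2)-4) Associate a buffer with the intended pipe and request the transfer;
#       the real driver also expects a small pipe-descriptor structure as the
#       input buffer, omitted here for brevity.
out_buf = (ctypes.c_ubyte * (32 * 1024 * 1024))()             # 32-Mbyte read buffer
returned = wintypes.DWORD(0)
ok = kernel32.DeviceIoControl(handle, IOCTL_EZUSB_BULK_READ,
                              None, 0,
                              out_buf, ctypes.sizeof(out_buf),
                              ctypes.byref(returned), None)

# 5)-6) The call here is synchronous; on success, read the echo data from the buffer
data = bytes(out_buf[:returned.value]) if ok else b""
kernel32.CloseHandle(handle)
```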

To read or write data through USB, IMT needs to perform the following steps:

1) Open the device driver through a CreateFile() call.
2) Retrieve the pipe information associated with the device.
3) Associate the input or output buffer with the intended pipe.
4) Call DeviceIoControl() to request the transfer.
5) Wait until the previous call returns.
6) Read or write the data from the buffer.

Since the maximum length of each pipe (64K bytes) in the original driver is not enough for our application, the length is reset to 32M bytes. The driver is then rebuilt using the Microsoft WDM DDK for Windows. The newly generated system file, ezusb.sys, is copied to the \windows\system32\drivers\ directory.

FX2 Development

On the device side, the development involves two parts. One part defines the hardware connections and handshaking protocols. The other part defines when and how actions take place. Since the FX2 chip already takes care of nearly everything at the physical and logical levels of the USB specification, the focus of the development is on the functions needed to accomplish the data transfer task. A tool called GPIF Designer is used to define the hardware connections and handshaking between the FX2 chip and the main board. Keil uVision2, an IDE for C, is used to develop the firmware for the CPU inside the FX2. The firmware defines the endpoints and controls when and how data transfer happens.

Operation of the IMT Program

The IMT program is part of the HFR imaging system. To use the system, the user first needs to connect the USB cable from the scanner to a USB 2.0 port of the PC, power on the scanner from the rack, and push the reset button on the front panel.

Figure 5.3 The GUI interface of the IMT program.

To run the application, first power on the computer and enter the Windows environment. Double-click the corresponding icon on the desktop to start the program. Figure 5.3 shows the main interface of the program. The following steps are needed to finish one experiment.


More information

Feasibility of non-linear simulation for Field II using an angular spectrum approach

Feasibility of non-linear simulation for Field II using an angular spectrum approach Downloaded from orbit.dtu.dk on: Aug 22, 218 Feasibility of non-linear simulation for using an angular spectrum approach Du, Yigang; Jensen, Jørgen Arendt Published in: 28 IEEE Ultrasonics Symposium Link

More information

Lecture 9: Introduction to Diffraction of Light

Lecture 9: Introduction to Diffraction of Light Lecture 9: Introduction to Diffraction of Light Lecture aims to explain: 1. Diffraction of waves in everyday life and applications 2. Interference of two one dimensional electromagnetic waves 3. Typical

More information

Diffractive Optics. Professor 송석호, Physics Department (Room #36-401) , ,

Diffractive Optics. Professor 송석호, Physics Department (Room #36-401) , , Diffractive Optics Professor 송석호, Physics Department (Room #36-401) 2220-0923, 010-4546-1923, shsong@hanyang.ac.kr Office Hours Mondays 10:00-12:00, Wednesdays 10:00-12:00 TA 윤재웅 (Ph.D. student, Room #36-415)

More information

THE study of beampatterns and their use in imaging systems

THE study of beampatterns and their use in imaging systems IEEE TRANSACTIONS ON ULTRASONICS, FERROELECTRICS, AND FREQUENCY CONTROL, VOL. 64, NO. 7, JULY 2017 1045 The Ultrasound Needle Pulse Kevin J. Parker, Fellow, IEEE, Shujie Chen, Student Member, IEEE, and

More information

I. INTRODUCTION. II. MODEL DEFINITIONS A. Propagation model

I. INTRODUCTION. II. MODEL DEFINITIONS A. Propagation model Theory and experiment of Fourier Bessel field calculation and tuning of a pulsed wave annular array Paul D Fox a) Ørsted.DTU, Technical University of Denmark, Building 348, DK-2800 Lyngby, Denmark Jiqi

More information

674 JVE INTERNATIONAL LTD. JOURNAL OF VIBROENGINEERING. MAR 2015, VOLUME 17, ISSUE 2. ISSN

674 JVE INTERNATIONAL LTD. JOURNAL OF VIBROENGINEERING. MAR 2015, VOLUME 17, ISSUE 2. ISSN 1545. The improved separation method of coherent sources with two measurement surfaces based on statistically optimized near-field acoustical holography Jin Mao 1, Zhongming Xu 2, Zhifei Zhang 3, Yansong

More information

Signal Loss. A1 A L[Neper] = ln or L[dB] = 20log 1. Proportional loss of signal amplitude with increasing propagation distance: = α d

Signal Loss. A1 A L[Neper] = ln or L[dB] = 20log 1. Proportional loss of signal amplitude with increasing propagation distance: = α d Part 6 ATTENUATION Signal Loss Loss of signal amplitude: A1 A L[Neper] = ln or L[dB] = 0log 1 A A A 1 is the amplitude without loss A is the amplitude with loss Proportional loss of signal amplitude with

More information

Contents. Associated Editors and Contributors...XXIII

Contents. Associated Editors and Contributors...XXIII Contents Associated Editors and Contributors...XXIII 1 Fundamentals of Piezoelectricity...1 1.1 Introduction...1 1.2 The Piezoelectric Effect...2 1.3 Mathematical Formulation of the Piezoelectric Effect.

More information

Modular Monochromatic Colorings, Spectra and Frames in Graphs

Modular Monochromatic Colorings, Spectra and Frames in Graphs Western Michigan University ScholarWorks at WMU Dissertations Graduate College 12-2014 Modular Monochromatic Colorings, Spectra and Frames in Graphs Chira Lumduanhom Western Michigan University, chira@swu.ac.th

More information

Physical principles of Harmonic Imaging Min Joo Choi, PhD

Physical principles of Harmonic Imaging Min Joo Choi, PhD Physical principles of Harmonic Imaging Min Joo Choi, PhD Department Biomedical Engineering College of Medicine, Cheju National University School of Medicine, King s College of London, University of London

More information

Time-Reversed Waves and Subwavelength Focusing

Time-Reversed Waves and Subwavelength Focusing Time-Reversed Waves and Subwavelength Focusing Mathias Fink, Geoffroy Lerosey, Fabrice Lemoult, Julien de Rosny Arnaud Tourin, Arnaud Derode, Philippe Roux, Ros-Kiri Ing, Carsten Draeger Institut Langevin

More information

5. LIGHT MICROSCOPY Abbe s theory of imaging

5. LIGHT MICROSCOPY Abbe s theory of imaging 5. LIGHT MICROSCOPY. We use Fourier optics to describe coherent image formation, imaging obtained by illuminating the specimen with spatially coherent light. We define resolution, contrast, and phase-sensitive

More information

Classical Electrodynamics

Classical Electrodynamics Classical Electrodynamics Third Edition John David Jackson Professor Emeritus of Physics, University of California, Berkeley JOHN WILEY & SONS, INC. Contents Introduction and Survey 1 I.1 Maxwell Equations

More information

Structure of Biological Materials

Structure of Biological Materials ELEC ENG 3BA3: Structure of Biological Materials Notes for Lecture #19 Monday, November 22, 2010 6.5 Nuclear medicine imaging Nuclear imaging produces images of the distribution of radiopharmaceuticals

More information

EFIT SIMULATIONS FOR ULTRASONIC NDE

EFIT SIMULATIONS FOR ULTRASONIC NDE EFIT SIMULATIONS FOR ULTRASONIC NDE René Marklein, Karl-Jörg Langenberg, Klaus Mayer (University of Kassel, Department of Electrical and Computer Engineering, Electromagnetic Field Theory, Wilhelmshöher

More information

Phased Array Inspection at Elevated Temperatures

Phased Array Inspection at Elevated Temperatures Phased Array Inspection at Elevated Temperatures Mohammad Marvasti 1, Mike Matheson 2, Michael Wright, Deepak Gurjar, Philippe Cyr, Steven Peters Eclipse Scientific Inc., 97 Randall Dr., Waterloo, Ontario,

More information

AP Goal 1. Physics knowledge

AP Goal 1. Physics knowledge Physics 2 AP-B This course s curriculum is aligned with College Board s Advanced Placement Program (AP) Physics B Course Description, which supports and encourages the following broad instructional goals:

More information

1. Consider the biconvex thick lens shown in the figure below, made from transparent material with index n and thickness L.

1. Consider the biconvex thick lens shown in the figure below, made from transparent material with index n and thickness L. Optical Science and Engineering 2013 Advanced Optics Exam Answer all questions. Begin each question on a new blank page. Put your banner ID at the top of each page. Please staple all pages for each individual

More information

Physical and Biological Properties of Agricultural Products Acoustic, Electrical and Optical Properties and Biochemical Property

Physical and Biological Properties of Agricultural Products Acoustic, Electrical and Optical Properties and Biochemical Property Physical and Biological Properties of Agricultural Products Acoustic, Electrical and Optical Properties and Biochemical Property 1. Acoustic and Vibrational Properties 1.1 Acoustics and Vibration Engineering

More information

Light as Wave Motion p. 1 Huygens' Ideas p. 2 Newton's Ideas p. 8 Complex Numbers p. 10 Simple Harmonic Motion p. 11 Polarized Waves in a Stretched

Light as Wave Motion p. 1 Huygens' Ideas p. 2 Newton's Ideas p. 8 Complex Numbers p. 10 Simple Harmonic Motion p. 11 Polarized Waves in a Stretched Introduction p. xvii Light as Wave Motion p. 1 Huygens' Ideas p. 2 Newton's Ideas p. 8 Complex Numbers p. 10 Simple Harmonic Motion p. 11 Polarized Waves in a Stretched String p. 16 Velocities of Mechanical

More information

FUNDAMENTALS OF OCEAN ACOUSTICS

FUNDAMENTALS OF OCEAN ACOUSTICS FUNDAMENTALS OF OCEAN ACOUSTICS Third Edition L.M. Brekhovskikh Yu.P. Lysanov Moscow, Russia With 120 Figures Springer Contents Preface to the Third Edition Preface to the Second Edition Preface to the

More information

PROPERTY STUDY ON EMATS WITH VISUALIZATION OF ULTRASONIC PROPAGATION

PROPERTY STUDY ON EMATS WITH VISUALIZATION OF ULTRASONIC PROPAGATION More Info at Open Access Database www.ndt.net/?id=18576 PROPERTY STUDY ON EMATS WITH VISUALIZATION OF ULTRASONIC PROPAGATION T. Yamamoto, T. Furukawa, I. Komura Japan Power Engineering and Inspection Corporation,

More information

I have nothing to disclose

I have nothing to disclose Critical Ultrasound for Patient Care April 6-8, 2016 Sonoma, CA Critical Ultrasound for Patient Care I have nothing to disclose April 6-8, 2016 Sonoma, CA UC SF University of California San Francisco UC

More information

Lecture 11: Introduction to diffraction of light

Lecture 11: Introduction to diffraction of light Lecture 11: Introduction to diffraction of light Diffraction of waves in everyday life and applications Diffraction in everyday life Diffraction in applications Spectroscopy: physics, chemistry, medicine,

More information

Course Syllabus. OSE6211 Imaging & Optical Systems, 3 Cr. Instructor: Bahaa Saleh Term: Fall 2017

Course Syllabus. OSE6211 Imaging & Optical Systems, 3 Cr. Instructor: Bahaa Saleh Term: Fall 2017 Course Syllabus OSE6211 Imaging & Optical Systems, 3 Cr Instructor: Bahaa Saleh Term: Fall 2017 Email: besaleh@creol.ucf.edu Class Meeting Days: Tuesday, Thursday Phone: 407 882-3326 Class Meeting Time:

More information

Sound radiation of a plate into a reverberant water tank

Sound radiation of a plate into a reverberant water tank Sound radiation of a plate into a reverberant water tank Jie Pan School of Mechanical and Chemical Engineering, University of Western Australia, Crawley WA 6009, Australia ABSTRACT This paper presents

More information

ULTRASONIC INSPECTION, MATERIAL NOISE AND. Mehmet Bilgen and James H. Center for NDE Iowa State University Ames, IA 50011

ULTRASONIC INSPECTION, MATERIAL NOISE AND. Mehmet Bilgen and James H. Center for NDE Iowa State University Ames, IA 50011 ULTRASONIC INSPECTION, MATERIAL NOISE AND SURFACE ROUGHNESS Mehmet Bilgen and James H. Center for NDE Iowa State University Ames, IA 511 Rose Peter B. Nagy Department of Welding Engineering Ohio State

More information

Prentice Hall: Conceptual Physics 2002 Correlated to: Tennessee Science Curriculum Standards: Physics (Grades 9-12)

Prentice Hall: Conceptual Physics 2002 Correlated to: Tennessee Science Curriculum Standards: Physics (Grades 9-12) Tennessee Science Curriculum Standards: Physics (Grades 9-12) 1.0 Mechanics Standard: The student will investigate the laws and properties of mechanics. The student will: 1.1 investigate fundamental physical

More information

There and back again A short trip to Fourier Space. Janet Vonck 23 April 2014

There and back again A short trip to Fourier Space. Janet Vonck 23 April 2014 There and back again A short trip to Fourier Space Janet Vonck 23 April 2014 Where can I find a Fourier Transform? Fourier Transforms are ubiquitous in structural biology: X-ray diffraction Spectroscopy

More information

A beam of coherent monochromatic light from a distant galaxy is used in an optics experiment on Earth.

A beam of coherent monochromatic light from a distant galaxy is used in an optics experiment on Earth. Waves_P2 [152 marks] A beam of coherent monochromatic light from a distant galaxy is used in an optics experiment on Earth. The beam is incident normally on a double slit. The distance between the slits

More information

Vector diffraction theory of refraction of light by a spherical surface

Vector diffraction theory of refraction of light by a spherical surface S. Guha and G. D. Gillen Vol. 4, No. 1/January 007/J. Opt. Soc. Am. B 1 Vector diffraction theory of refraction of light by a spherical surface Shekhar Guha and Glen D. Gillen* Materials and Manufacturing

More information