Rendering of Directional Sources through Loudspeaker Arrays based on Plane Wave Decomposition

L. Bianchi, F. Antonacci, A. Sarti, S. Tubaro
Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano, Piazza Leonardo da Vinci, Milano, Italy
lbianchi/antonacc/sarti/tubaro@elet.polimi.it

Abstract—In this paper we present a technique for the rendering of directional sources by means of loudspeaker arrays. The proposed methodology is based on a decomposition of the sound field in terms of plane waves. Within this framework the directivity of the source is naturally included in the rendering problem, which makes accommodating the directivity much simpler. For this purpose, the loudspeaker array is subdivided into overlapping sub-arrays, each generating a plane wave component. The individual plane waves are then weighted by the desired directivity pattern. Simulations and experimental results show that the proposed technique reproduces the sound field of directional sources with improved accuracy with respect to existing techniques.

I. INTRODUCTION

In this paper we propose a method for the rendering of directive sound sources by means of a loudspeaker array. The problem of sound field rendering is of growing interest in multimedia, with potential applications in immersive gaming, telepresence and all the scenarios in which the delivery of a realistic sound impression is an issue; rendering virtual sources along with their directivity is of crucial importance for improving immersivity. Different solutions exist in the literature for the purpose of sound field rendering. The most popular ones are Wave Field Synthesis (WFS) [1], [2]; Ambisonics and Higher Order Ambisonics (HOA) [3]; and the least squares solution of the rendering equation (LS) [4], [5]. One main issue in rendering systems is that of rendering a radiation pattern of the virtual sound sources different from the omnidirectional one. There is evidence in the literature [6] that the disparities in the sound field at different listening positions created by the directional properties of sources greatly contribute to making the sound scene realistic and immersive. The importance of directive sources also comes to light when we are interested in rendering a desired reverberant effect; the accuracy of this kind of rendering, indeed, depends simultaneously on the shape and properties of the boundaries and on the directivity characteristics of the source [6]. Some works in the literature focus on this topic, both in the nearfield [7] (for Ambisonics) and in the farfield [8] (for WFS). In [6] the author proposes a methodology for synthesizing WFS filters through a limited set of spherical harmonics. Similarly, in [9] the authors integrate directional sound sources in the WFS framework by means of a circular harmonic expansion; with respect to previous works in the literature, the authors of [9] focus on the efficiency of the implementation.

In this paper we propose an alternative methodology, partially inspired by the novel idea of plenacoustic rendering. The plenacoustic function, first presented in [10], refers to a description of the acoustic pressure as a function of position and time. In particular, in [10] a method is proposed to infer the sound pressure at arbitrary time instants and spatial positions, based on a preliminary sampling.
In [11] the authors partially redefine the concept of plenacoustic function to include directional information and devise a solution for measuring the plenacoustic function by means of a microphone array. The authors of [11] adopt a parameterization of the plenacoustic function that describes the sound field in terms of acoustic rays. The microphone array operates as an Observation Window, as it samples rays passing through it. More specifically, a uniform linear array of microphones is used and subdivided into sub-arrays; each sub-array senses the acoustic rays passing through its center (the central microphone of the sub-array). The acoustic rays passing through the whole extension of the array are then found by considering adjacent sub-arrays.

In this paper we aim at using the plenacoustic framework for sound field rendering purposes. The basic idea is that of reversing the plenacoustic sensing paradigm adopted in [11]. A loudspeaker array, therefore, acts as an Emission Window (EW) and is subdivided into sub-arrays, each in charge of rendering the planar portion of the acoustic wavefront that, departing from the virtual acoustic source, passes through the considered sub-array. The overall acoustic wavefront is obtained by superposition after a phase alignment stage, which guarantees that the planar wavefronts produced by all the sub-arrays are aligned in phase. In order to render planar acoustic wavefronts, the beamforming filter of each sub-array is obtained by means of Superdirective Beamforming (SD), described in [12].

Coming back to the main purpose of this work, directivity is implemented in the plenacoustic rendering framework by weighting the planar wavefront of each sub-array with the directivity pattern of the virtual source to be rendered. The main advantage of the proposed technique lies in the fact that the directivity of the sound source is easily included in the rendering framework, whereas other techniques, such as WFS, need to adopt a preliminary analysis based on circular harmonics. Simulations and experimental results confirm that a more accurate rendering is possible with respect to other techniques available in the literature.

The rest of the paper is organized as follows: Section II formulates the problem and introduces the notation. Section III illustrates the theory of plenacoustic rendering and summarizes Superdirective Beamforming for what concerns the rendering from individual sub-arrays. Section IV discusses the problem of integrating the directivity into the framework and extends the application to wideband signals. Finally, Section V presents simulations and experimental results that demonstrate the accuracy of the proposed methodology.

II. PROBLEM STATEMENT

Let us consider an array of M loudspeakers. The transducer locations in Cartesian coordinates are $\mathbf{x}_m = [x_m \ \ y_m]^T$, $m = 0, \ldots, M-1$. We adopt a linear array configuration with uniform spacing q along the x axis and we fix the reference frame so that $y_m = y_0$ for all the loudspeakers. The line on which the array lies determines two half-spaces; we are interested in rendering the sound field in the half-space $y > y_0$. We assume the sound source to be located at the origin of the reference frame, i.e. at $\mathbf{x}_s = [0 \ \ 0]^T$. Any other location of the source can be handled as a roto-translation of the coordinate system and is therefore included in the current framework. The virtual source emits the signal s(t) and is also characterized by the polar pattern $D(\alpha, \omega)$, where α is the angle with respect to the x axis and ω is the radial frequency. Our goal is to render the sound field generated by the virtual source, including its radiation pattern.

As depicted in Fig. 1, the array is decomposed into maximally overlapped sub-arrays, each consisting of W adjacent loudspeakers; the number of sub-arrays is M − W + 1. Our goal is to operate a plane-wave decomposition, with each sub-array in charge of reproducing one component of the overall sound field. A generic point in space is denoted with the symbol $\mathbf{x} = [x \ \ y]^T$, and the space-time dependent sound pressure measured at x at time t is p(x, t).

Figure 1: Reference frame and decomposition of the loudspeaker array into sub-arrays.

Let us assume for the moment that the signal s(t) is narrowband, with wavenumber vector k and $k = \lVert \mathbf{k} \rVert$; later on, we will extend the discussion to wideband signals by means of Fourier analysis. In a 2-dimensional sound field the dispersion equation can be written as $k^2 = (\omega/\nu)^2 = k_x^2 + k_y^2$, where $k_x$ and $k_y$ are the components of the wavenumber vector k and ν is the speed of sound. We can therefore write $k_y = \sqrt{k^2 - k_x^2}$. Considering a plane wave travelling in direction α, the Cartesian components of its wavenumber vector are $k_x = (\omega/\nu)\cos\alpha$ and $k_y = (\omega/\nu)\sin\alpha$.
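As a quick numerical illustration (not part of the original derivation), the sketch below evaluates a single monochromatic plane-wave component on a grid of points from the wavenumber components above. The values of ν, ω, α and the grid extent are arbitrary placeholders, not taken from the paper.

```python
import numpy as np

# Arbitrary example values (placeholders, not from the paper).
nu = 343.0                  # speed of sound [m/s]
omega = 2 * np.pi * 1000.0  # radial frequency of a 1 kHz component [rad/s]
alpha = np.deg2rad(60.0)    # travel direction of the plane wave [rad]

# Wavenumber components from the dispersion relation k = omega / nu.
kx = (omega / nu) * np.cos(alpha)
ky = (omega / nu) * np.sin(alpha)

# Grid of observation points in the half-space y > y0.
x = np.linspace(-2.0, 2.0, 200)
y = np.linspace(0.1, 4.0, 200)
X, Y = np.meshgrid(x, y)

# Complex amplitude of the plane wave at t = 0: exp(j k^T x).
p = np.exp(1j * (kx * X + ky * Y))
print(p.shape, np.abs(p).max())  # unit-amplitude field on the grid
```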
In this context we can reformulate the Plane Wave Decomposition (PWD), described in [13] for 3-dimensional fields, as

$p(\mathbf{x}, t) = \int_{\mathbb{R}} D(k_x)\, e^{j(\mathbf{k}^T \mathbf{x} - \omega t)}\, dk_x. \qquad (1)$

The 2-dimensional position vector x is related to polar coordinates by $x = r\cos\alpha$ and $y = r\sin\alpha$. The space-frequency pressure sound field is p(x, ω) and its relationship with $D(\alpha, \omega)$ is

$p(\mathbf{x}, \omega) = \int_0^{2\pi} D(\alpha, \omega)\, e^{j \frac{\omega}{\nu} (\hat{\mathbf{k}}^T \mathbf{x})}\, d\alpha, \qquad (2)$

where $\hat{\mathbf{k}} = \mathbf{k}/k = [\cos\alpha \ \ \sin\alpha]^T$ is the wavenumber versor. Eq. (2) states that p(x, ω) can be thought of as the sum of infinitely many plane waves, weighted by the polar pattern. As a matter of fact, a number of decompositions can be conceived for the pressure sound field. A popular one is the Rayleigh I integral, which states that the sound field produced by a source in a volume V can be reproduced by an infinite distribution of secondary omnidirectional sound sources located on the boundary S of V. The Rayleigh I integral is used for sound field rendering purposes by the Wave Field Synthesis (WFS) methodology [1], [2].

In this paper we aim at approximating the PWD in (2) by a finite sum of plane-wave components, rendered by the loudspeaker array. More specifically, each sub-array is in charge of reproducing a plane-wave component with travel direction $\alpha_i$, where $\alpha_i$ is the angle formed by the x axis and the position vector of the central loudspeaker of the ith sub-array, and is given by

$\alpha_i = \arctan\!\left(\frac{y_0}{x_{i+\lfloor W/2 \rfloor}}\right) = \arctan\!\left(\frac{y_{i+\lfloor W/2 \rfloor}}{x_{i+\lfloor W/2 \rfloor}}\right); \qquad (3)$

the operator $\lfloor \cdot \rfloor$ denotes floor truncation.
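The decomposition into maximally overlapped sub-arrays and the travel directions of (3) can be computed directly from the loudspeaker positions. The following sketch assumes hypothetical values for M, W, the spacing q and the array offset $y_0$ (they are placeholders, not the paper's setup) and only implements the geometry of (3).

```python
import numpy as np

# Hypothetical array geometry (M, W, q, y0 are placeholders, not the paper's values).
M, W = 16, 4          # number of loudspeakers and sub-array size
q = 0.06              # loudspeaker spacing [m]
y0 = 1.0              # distance of the array line from the source (at the origin) [m]

# Loudspeaker positions x_m = [x_m, y0], centred on the y axis.
x_pos = (np.arange(M) - (M - 1) / 2) * q
y_pos = np.full(M, y0)

# Maximally overlapped sub-arrays: sub-array i contains loudspeakers i .. i+W-1,
# so there are M - W + 1 of them.
n_sub = M - W + 1
centre = np.arange(n_sub) + W // 2   # index of the central loudspeaker, i + floor(W/2)

# Plane-wave travel direction of each sub-array, eq. (3): angle between the x axis
# and the position vector of the central loudspeaker (source in the origin).
alpha = np.arctan2(y_pos[centre], x_pos[centre])
print(np.rad2deg(alpha))
```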

Notice that we could also demand each sub-array to produce a set of plane waves rather than only a single plane-wave component. Some preliminary results showed that a tradeoff exists between the computational cost (the burden of the rendering engine is proportional to the number of components rendered by each sub-array) and the accuracy of the rendering; we therefore decided to synthesize only one plane wave for each sub-array. A choice of this sort is not novel in the literature: as an example, in WFS a windowing function is introduced, which selects the active loudspeakers based on considerations on the geometry of the rendering problem [2]. Note also that the angle $\alpha_i$ coincides with the angle between the virtual source and the loudspeaker; if a different location of the source were rendered, (3) would change accordingly. The distance between the origin and the central loudspeaker of the ith sub-array is

$r_i = \sqrt{x_{i+\lfloor W/2 \rfloor}^2 + y_{i+\lfloor W/2 \rfloor}^2}. \qquad (4)$

The space-frequency pressure sound field p(x, ω) is then approximated by

$p(\mathbf{x}, \omega) \approx \sum_{i=0}^{M-W} d_i(\omega)\, e^{j \frac{\omega}{\nu} (\hat{\mathbf{k}}_i^T \mathbf{x})} = p_P(\mathbf{x}, \omega), \qquad (5)$

where $\hat{\mathbf{k}}_i = [\cos\alpha_i \ \ \sin\alpha_i]^T$. In Section IV we derive the expression of the weights $d_i(\omega)$ needed to reproduce the polar pattern. Section III, on the other hand, concerns the synthesis of planar wavefronts by the sub-arrays.

III. RENDERING PLANE WAVES

In this section we investigate the synthesis of plane waves through beamforming. The idea of applying beamforming techniques to loudspeaker arrays is based on the duality between microphones and loudspeakers and has been previously investigated in the literature [14], [15]. In our framework many beamforming techniques can be employed to synthesize plane waves from each sub-array. Among the possible choices, we have adopted the data-independent reformulation of the MVDR beamformer, to our knowledge introduced in [12]. In order to include wideband signals in the framework, we subdivide the frequency axis into L sub-bands. Let $\omega_l$ be the central frequency of the lth sub-band; then

$\omega_{s,i} = \frac{\omega_l}{\nu}\, q \cos\alpha_i \qquad (6)$

is the spatial frequency of a plane wave with radial frequency $\omega_l$ and travel direction $\alpha_i$, $i = 0, \ldots, M-W$. We can express the beam-steering vector for the ith sub-array as

$\tilde{\mathbf{a}}_i(l) = \left[\, e^{j \lfloor W/2 \rfloor \omega_{s,i}} \ \cdots \ e^{-j \lfloor W/2 \rfloor \omega_{s,i}} \,\right]^T. \qquad (7)$

Denoting the noise covariance matrix for the lth sub-band with $\tilde{\mathbf{R}}(l)$, the MVDR spatial filter $\tilde{\mathbf{f}}_i(l)$ is obtained as [16]

$\tilde{\mathbf{f}}_i(l) = \arg\min_{\mathbf{b}(l)} \mathbf{b}(l)^H \tilde{\mathbf{R}}(l)\, \mathbf{b}(l) \quad \text{subject to} \quad \mathbf{b}(l)^H \tilde{\mathbf{a}}_i(l) = 1, \qquad (8)$

where $(\cdot)^H$ denotes the Hermitian transpose. The solution of (8) is

$\tilde{\mathbf{f}}_i(l) = \frac{\tilde{\mathbf{R}}^{-1}(l)\, \tilde{\mathbf{a}}_i(l)}{\tilde{\mathbf{a}}_i^H(l)\, \tilde{\mathbf{R}}^{-1}(l)\, \tilde{\mathbf{a}}_i(l)}. \qquad (9)$

Under the assumption of a spherically isotropic or diffuse noise field [12], the noise covariance matrix $\tilde{\mathbf{R}}$ can be replaced by the coherence matrix $\tilde{\boldsymbol{\Gamma}}$, whose element at row r and column c is [12]

$[\tilde{\boldsymbol{\Gamma}}(l)]_{rc} = \operatorname{sinc}\!\left(\frac{2\pi \lVert \mathbf{x}_r - \mathbf{x}_c \rVert F_s\, l}{\nu L}\right), \qquad (10)$

where $\mathbf{x}_r$ and $\mathbf{x}_c$ are the coordinates of the rth and cth elements of the loudspeaker array, respectively, and $F_s$ is the sampling frequency.

Figure 2: Block diagram of the proposed rendering framework.

The beamforming filters of the ith sub-array are collected in the matrix $\mathbf{F}_i$ as

$\mathbf{F}_i = \left[\, \tilde{\mathbf{f}}_i(0)\ \ldots\ \tilde{\mathbf{f}}_i(L-1) \,\right]. \qquad (11)$

IV. RENDERING THE SOURCE DIRECTIVITY

The polar pattern $D(\alpha, \omega)$ is the energy of the sound field in far-field conditions all around the source location. For a point-like source, the polar pattern is related to the sound field on a circle of radius r through [13]

$D(\alpha, \omega)\, \frac{e^{j(\omega/\nu) r}}{r} = p(\alpha, r, \omega), \qquad (12)$

where the sound field has been expressed, for convenience, in polar coordinates. In order to accommodate the synthesis of the directivity in the proposed framework, we set the weights $d_i(\omega)$ in (5) equal to the directivity function evaluated at the angles $\alpha_i$, i.e.

$d_i(l) = D(\alpha_i, \omega_l). \qquad (13)$
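To make the superdirective design concrete, the sketch below computes a data-independent MVDR filter in the spirit of (7)-(10) for one sub-array and one sub-band. It is a minimal sketch under several assumptions: the steering vector is built from the element positions rather than the symmetric form of (7) (equivalent up to a constant phase for a uniform linear array), the sampling rate and geometry are placeholders, and a small diagonal loading term, which the paper does not discuss, is added for numerical stability.

```python
import numpy as np

def superdirective_filter(x_pos, y_pos, alpha_i, l, L=512, Fs=16000.0,
                          nu=343.0, loading=1e-3):
    """Data-independent MVDR (superdirective) filter for one sub-array, one sub-band.

    x_pos, y_pos : positions of the W loudspeakers of the sub-array [m]
    alpha_i      : plane-wave travel direction of the sub-array [rad]
    l            : sub-band index; the sub-band centre frequency is omega_l = 2*pi*Fs*l/L
    """
    W = len(x_pos)
    omega_l = 2 * np.pi * Fs * l / L

    # Steering vector towards direction alpha_i (cf. eq. (7)), expressed through the
    # propagation delays of the element positions projected on the steering direction.
    k_hat = np.array([np.cos(alpha_i), np.sin(alpha_i)])
    delays = (x_pos * k_hat[0] + y_pos * k_hat[1]) / nu
    a = np.exp(1j * omega_l * (delays - delays.mean()))

    # Diffuse-field coherence matrix, eq. (10): Gamma_rc = sinc(omega_l * ||x_r - x_c|| / nu).
    # np.sinc(x) = sin(pi x)/(pi x), hence the division by pi.
    dist = np.hypot(x_pos[:, None] - x_pos[None, :], y_pos[:, None] - y_pos[None, :])
    Gamma = np.sinc(omega_l * dist / (np.pi * nu))

    # Regularized MVDR solution, eq. (9): f = Gamma^-1 a / (a^H Gamma^-1 a).
    Gamma_reg = Gamma + loading * np.eye(W)
    Ga = np.linalg.solve(Gamma_reg, a)
    return Ga / (a.conj() @ Ga)

# Example: filter for a 4-element sub-array steered towards 60 degrees, sub-band l = 32.
x_sub = np.arange(4) * 0.06
y_sub = np.full(4, 1.0)
f = superdirective_filter(x_sub, y_sub, np.deg2rad(60.0), l=32)
print(np.abs(f))
```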
Before approaching the problem of assembling the rendering filter of the mth loudspeaker from the different sub-arrays, we must align in phase the beamforming filters $\tilde{\mathbf{f}}_i$. In particular, we need to ensure that the phase in sub-band l is coherent with the position of the virtual source. For this purpose we introduce the vector $\boldsymbol{\tau}_i$ of travel times from the virtual source to all the loudspeakers of the ith sub-array, i.e.

$\boldsymbol{\tau}_i = \left[\, \frac{\lVert \mathbf{x}_i \rVert}{\nu}\ \ldots\ \frac{\lVert \mathbf{x}_{i+W-1} \rVert}{\nu} \,\right]^T. \qquad (14)$

In order to obtain the rendering filter h(l) for the whole array, as depicted in the block diagram of Fig. 2, we define a merging operation that sums all the filter weights relative to the same loudspeaker. For this purpose the beamforming coefficient $\tilde{f}_{i,m-i}(l)$ of the (m−i)th loudspeaker in the ith sub-array is weighted so as to satisfy the Constant Overlap-Add (COLA) constraint, as made explicit in (15) below.
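Anticipating (15)-(19) below, the whole merging stage can be illustrated by a minimal sketch: each sub-array's filter is phase-aligned to the source, weighted by the directivity sample $d_i$ and a squared Hann window, overlap-added into per-loudspeaker frequency responses, and finally brought to the time domain by an inverse DFT. The dimensions, geometry, filters and directivity values below are dummy placeholders (in practice the filters would come from the superdirective design above).

```python
import numpy as np

# Placeholder dimensions (not the paper's values).
M, W, L = 16, 4, 512
Fs, nu = 16000.0, 343.0
n_sub = M - W + 1

# Loudspeaker geometry as before: uniform spacing q on the line y = y0, source in the origin.
q, y0 = 0.06, 1.0
x_pos = (np.arange(M) - (M - 1) / 2) * q
pos = np.stack([x_pos, np.full(M, y0)], axis=1)

# Assumed inputs: sub-array filters F[i, l, :] and directivity samples d[i, l] = D(alpha_i, omega_l).
F = np.ones((n_sub, L, W), dtype=complex)   # dummy superdirective filters
d = np.ones((n_sub, L), dtype=complex)      # dummy directivity weights

# Squared Hann window of length W, eq. (16).
w_idx = np.arange(W)
z = (0.5 * (1.0 - np.cos(2 * np.pi * w_idx / (W - 1)))) ** 2

# Phase alignment: travel time from the virtual source (origin) to every loudspeaker, eq. (14).
tau = np.linalg.norm(pos, axis=1) / nu

# Overlap-add of eq. (15): accumulate every sub-array's contribution to its own loudspeakers.
H = np.zeros((M, L), dtype=complex)
omega = 2 * np.pi * Fs * np.arange(L) / L
for i in range(n_sub):
    for w in range(W):
        m = i + w  # global loudspeaker index
        H[m, :] += F[i, :, w] * np.exp(1j * omega * tau[m]) * d[i, :] * z[w]

# Time-domain rendering filters, eq. (19): L-point inverse DFT of each row.
h = np.fft.ifft(H, axis=1)
print(h.shape)  # (M, L): one L-tap filter per loudspeaker
```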

More specifically, the rendering coefficient relative to the mth loudspeaker can be expressed as

$h_m(l) = \sum_{i=i_0}^{i_1} \tilde{f}_{i,m-i}(l)\, e^{j\omega_l \tau_{i,m-i}}\, d_i(l)\, z(m-i), \qquad (15)$

where $i_0 = \max(0, m-W+1)$ and $i_1 = \min(M-W, m)$. The index $i = i_0, \ldots, i_1$ spans all the sub-arrays that include the mth loudspeaker, and $\tau_{i,m-i}$ is the (m−i)th element of $\boldsymbol{\tau}_i$. The windowing function z is a squared Hanning window [17] of length W, i.e.

$z(w) = \left(0.5\left(1 - \cos\!\left(\frac{2\pi w}{W-1}\right)\right)\right)^2, \quad w = 0, \ldots, W-1. \qquad (16)$

The rendering coefficients $h_m(l)$ are collected into the frequency responses $\tilde{\mathbf{h}}_m$ as

$\tilde{\mathbf{h}}_m = \left[\, h_m(0)\ \ldots\ h_m(L-1) \,\right]. \qquad (17)$

Finally, the matrix H collects the frequency responses $\tilde{\mathbf{h}}_m$ as its rows,

$\mathbf{H} = \left[\, \tilde{\mathbf{h}}_0^T\ \ldots\ \tilde{\mathbf{h}}_{M-1}^T \,\right]^T. \qquad (18)$

The M time-domain rendering filters are obtained by the L-point inverse Fourier transform of the rows of H. In particular,

$h_m(n) = \mathrm{IDFT}_l\{\tilde{\mathbf{h}}_m\} = \frac{1}{L} \sum_{l=0}^{L-1} h_m(l)\, e^{j 2\pi n l / L} \qquad (19)$

is the nth element of the filter to be applied to the mth loudspeaker, and L is the number of taps of the filter.

V. RESULTS

In this section we present simulations and experiments that validate the presented technique. We set up a uniform linear array of M loudspeakers with spacing q; the corresponding spatial aliasing frequency is $f_N = \nu/(2q)$, ν being the speed of sound. The amplitude of all the depicted sound fields has been normalized, so that the amplitude range is between −1 and 1.

A. Simulations

Results are presented at the four frequencies considered in Fig. 4. We compare the results obtained with the proposed rendering framework (PR) against methods available in the literature, namely Geometric Rendering (GR) [4] and an implementation of Wave Field Synthesis (WFS) that accommodates the rendering of directive sources [9]. In order to simulate the sound fields rendered by the above techniques, we define a grid of control points $\mathbf{x}_u$ inside a rectangular listening area in front of the array. For the proposed methodology, the sound field $p_P(\mathbf{x}_u, \omega)$ is simulated at the control points by applying the rendering filter h(ω) to the propagation function, i.e.

$p_P(\mathbf{x}_u, \omega) = \sum_{m=0}^{M-1} G(\mathbf{x}_u \,|\, \mathbf{x}_m, \omega)\, h_m(\omega), \qquad (20)$

where $G(\mathbf{x}_u \,|\, \mathbf{x}_m, \omega)$ is the Green's function at $\mathbf{x}_u$, given the loudspeaker position $\mathbf{x}_m$. Similarly, we compute the sound fields $p_G(\mathbf{x}_u, \omega)$ and $p_W(\mathbf{x}_u, \omega)$ of GR and WFS, respectively, where the rendering filters are obtained through our implementations of the methodologies proposed in [4] and [9]. Simulated sound fields allow us to qualitatively compare the rendering accuracy of the different techniques. In order to obtain a more quantitative result, we also simulate the sound field at T points on a circle of radius $r_{\mathrm{mic}}$.

Figure 3: Real part of the sound field of a dipole at frequency f = 1 kHz (panel a), compared with the sound fields rendered by PR (b), GR (c) and WFS (d). Amplitudes have been normalized between −1 and 1.

Fig. 3a shows the real part of the desired sound field $\Re\{p_D(\mathbf{x}_u, \omega)\}$, generated by a dipole placed at $\mathbf{x}_s = [0 \ \ 0]^T$ and emitting a narrowband signal at frequency f = 1 kHz. Figs. 3b, 3c and 3d show the real part of the sound fields rendered by PR, GR and WFS, respectively. In these simulations we use W loudspeakers in each sub-array.
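The field simulation of (20) can be sketched by propagating the rendering filters with free-field Green's functions and evaluating the result on a circle of control points, which is essentially how the quantitative directivity comparisons of Figs. 4-5 are produced. The paper does not state which Green's function is used; the 3-D free-space form $e^{jkr}/(4\pi r)$ is assumed here for illustration, and the geometry, number of control points and rendering filters are placeholders carried over from the previous sketches.

```python
import numpy as np

def greens_function(x_u, x_m, omega, nu=343.0):
    """Free-field Green's function between loudspeaker x_m and control points x_u.
    The 3-D form exp(j k r) / (4 pi r) is assumed here for illustration."""
    r = np.linalg.norm(x_u - x_m, axis=-1)
    k = omega / nu
    return np.exp(1j * k * r) / (4 * np.pi * r)

# Placeholder geometry and rendering filters (H as in the previous sketch: M x L).
M, L, Fs, nu = 16, 512, 16000.0, 343.0
q, y0 = 0.06, 1.0
x_pos = (np.arange(M) - (M - 1) / 2) * q
spk = np.stack([x_pos, np.full(M, y0)], axis=1)
H = np.ones((M, L), dtype=complex)

# Control points on a circle around a centre in the listening area (placeholder values).
T = 180
phi = np.linspace(0.0, 2 * np.pi, T, endpoint=False)
centre = np.array([0.0, 3.0])
ctrl = centre + 1.0 * np.stack([np.cos(phi), np.sin(phi)], axis=1)

# Rendered field at one sub-band, eq. (20): sum over loudspeakers of G * h_m.
l = 32
omega_l = 2 * np.pi * Fs * l / L
p = np.zeros(T, dtype=complex)
for m in range(M):
    p += greens_function(ctrl, spk[m], omega_l, nu) * H[m, l]

# Magnitude over the circle approximates the rendered directivity pattern (cf. Figs. 4-5).
pattern = np.abs(p) / np.abs(p).max()
print(pattern[:5])
```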

Figure 4: Desired directivity pattern of a dipole (red), compared to the directivity patterns rendered by PR ($D_P$, green), GR ($D_G$, blue) and WFS ($D_W$, black) at four frequencies (panels a-d).

To ease the comparison between the different rendering methodologies, Fig. 4 shows the directivity patterns rendered by PR ($D_P$), GR ($D_G$) and WFS ($D_W$), compared to the desired pattern D, at the four test frequencies. The directivity patterns are shown for $\alpha_0 \leq \alpha \leq \alpha_{M-W}$, as outside this range all the rendering techniques fail, as also shown in [18]. At low frequencies all the techniques, except GR, produce similar results. On the other hand, at high frequencies the proposed rendering methodology avoids the ripples introduced by GR and follows the desired pattern more closely than WFS. It is also worth noticing that the proposed rendering technique has a computational burden comparable to WFS and lower than GR, as the latter requires the inversion of the propagation matrix.

Figure 5: Desired cardioid directivity pattern (red) oriented at direction $\alpha_C$, compared to the directivity patterns synthesized by PR ($D_P$, green) and GR ($D_G$, blue), for $\alpha_C$ = 90°, 120°, 150° and 180° (panels a-d).

In order to show the capability of the proposed methodology to render different directivity patterns, Fig. 5 shows the desired cardioid pattern D and the directivity patterns synthesized by the proposed methodology ($D_P$) and GR ($D_G$). The directivity pattern rendered by WFS is not shown because its synthesis would require a preliminary analysis of the desired sound field. In detail, Figs. 5a, 5b, 5c and 5d compare the directivity patterns synthesized to reproduce a source with a cardioid pattern oriented at directions $\alpha_C$ = 90°, 120°, 150° and 180°, respectively. The directivity patterns synthesized by the proposed methodology approximate the desired pattern better than GR, avoiding both the artifacts at angles close to $\alpha_0$ and $\alpha_{M-W}$ and the ripples in the central region of the plots.

B. Experiments

We proceed to the evaluation of the proposed methodology on real-world data, by showing sound fields measured using the methodology described in [19], [20] and already adopted in [18], [21] for evaluating the accuracy of sound field rendering techniques. We sample the sound field with a cardioid microphone at $N_{\mathrm{mic}}$ positions on a circle of radius ρ = 95 cm centered in the listening area. In order to avoid artifacts due to the interference of microphone bodies with the propagating wavefronts, a sequential measurement technique is used: the microphone is moved by a precision stepper motor. The sound field is then reconstructed inside the measurement area using an interpolation and extrapolation procedure based on the decomposition of the sound field into circular harmonics. Fig. 6 shows the desired and the rendered (PR) sound fields at f = 1 kHz. For reasons of visualization accuracy, we show the measured sound field only inside a circle of radius ρ. Despite the artifacts due to the reconstruction methodology, the rendering result is quite accurate.

Figure 6: Sound field of a dipole generated by a real-world implementation of the proposed rendering framework: (a) measured sound field, obtained using the procedure of [19], [20]; (b) desired sound field. Amplitudes have been normalized between −1 and 1.

In particular, the proposed methodology enables the correct reproduction of the low-energy area in front of the loudspeaker array (α = 90°). Notice that the desired sound field exhibits a complex structure, as the two lobes of the dipole pattern have opposite phases; the listening area is therefore split into two parts whose wavefronts are offset by λ/2, λ = 2πν/ω being the wavelength. Nonetheless, the rendering engine is able to accurately reproduce this scenario.

VI. CONCLUSION

In this paper we have presented a methodology to render directional virtual sources for sound field rendering applications. Our methodology is inspired by the idea of the plenacoustic function. The loudspeaker array acts as an Emission Window and is subdivided into sub-arrays; each sub-array is in charge of rendering a portion of the overall acoustic wavefront. The decomposition of the complex sound field into basis functions is accomplished by means of the Plane Wave Decomposition. In this scenario, the rendering of directive sources turns out to be a fairly simple procedure, as the directivity pattern becomes a weighting of the rendering filter. Both simulations and experimental results showed that the proposed methodology is more accurate and less computationally intensive than other techniques available in the literature. From a computational point of view, the fact that the directivity information is inherently accommodated in the framework makes it possible to avoid the preliminary analysis of the sound field required in WFS. Moreover, the proposed methodology avoids the numerical issues due to the inversion of the propagation matrix required in LS methods, resulting in a more efficient implementation. As in WFS, the proposed technique does not require the definition of a listening area. In [19] the author shows that the directivity pattern can be measured through the Plane Wave Decomposition, which can be interpreted as multiple beamforming. The duality between directivity pattern and multiple beamforming paves the way to accommodating data-driven rendering in the proposed framework, as already done in WFS.

REFERENCES

[1] A. Berkhout, D. de Vries, and P. Vogel, "Acoustic control by wave field synthesis," J. Acoust. Soc. Am., vol. 93, May 1993.
[2] J. Ahrens, R. Rabenstein, and S. Spors, "The theory of wave field synthesis revisited," in Proc. 124th AES Convention, Amsterdam, The Netherlands, May 2008.
[3] M. A. Gerzon, "Periphony: With-height sound reproduction," J. Audio Eng. Soc., vol. 21, no. 1, 1973.
[4] F. Antonacci, A. Calatroni, A. Canclini, A. Galbiati, A. Sarti, and S. Tubaro, "Soundfield rendering with loudspeaker arrays through multiple beam shaping," in Proc. IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA 2009), New Paltz, USA, Oct. 2009.
[5] G. Lilis, D. Angelosante, and G. Giannakis, "Sound field reproduction using the lasso," IEEE Trans. Audio, Speech, Lang. Process., vol. 18, Aug. 2010.
[6] E. Corteel, "Synthesis of directional sources using wave field synthesis, possibilities, and limitations," EURASIP Journal on Advances in Signal Processing, Feb. 2007.
[7] J. Ahrens and S. Spors, "Rendering of virtual sound sources with arbitrary directivity in higher order ambisonics," in Proc. 123rd AES Convention, New York, USA, Oct. 2007.
[8] E. Verheijen, "Sound reproduction by wave field synthesis," Ph.D. dissertation, Delft University of Technology, 1997.
[9] J. Ahrens and S. Spors, "Implementation of directional sources in wave field synthesis," in Proc. IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA 2007), New Paltz, USA, Oct. 2007.
[10] T. Ajdler, L. Sbaiz, and M. Vetterli, "The plenacoustic function and its sampling," IEEE Trans. Signal Process., vol. 54, Oct. 2006.
[11] D. Markovic, G. Sandrini, F. Antonacci, A. Sarti, and S. Tubaro, "Plenacoustic imaging in the ray space," in Proc. Int. Workshop on Acoustic Signal Enhancement (IWAENC 2012), Aachen, Germany, Sep. 2012.
[12] K. Kumatani, L. Lu, J. McDonough, A. Ghoshal, and D. Klakow, "Maximum negentropy beamforming with superdirectivity," in Proc. 18th European Signal Processing Conf. (EUSIPCO 2010), Aalborg, Denmark, Aug. 2010.
[13] E. G. Williams, Fourier Acoustics: Sound Radiation and Nearfield Acoustic Holography. London, UK: Academic Press, 1999.
[14] D. B. Keele Jr., "Implementation of straight-line and flat-panel constant beamwidth transducer (CBT) loudspeaker arrays using signal delays," in Proc. 113th AES Convention, Los Angeles, USA, Oct. 2002.
[15] E. Mabande and W. Kellermann, "Towards superdirective beamforming with loudspeaker arrays," in Proc. Int. Conf. on Acoustics (ICA 2007), Madrid, Spain, Sep. 2007.
[16] J. Capon, "High-resolution frequency-wavenumber spectrum analysis," Proceedings of the IEEE, vol. 57, no. 8, pp. 1408-1418, 1969.
[17] F. J. Harris, "On the use of windows for harmonic analysis with the discrete Fourier transform," Proceedings of the IEEE, vol. 66, no. 1, pp. 51-83, 1978.
[18] L. Bianchi, F. Antonacci, A. Canclini, A. Sarti, and S. Tubaro, "Localization of virtual acoustic sources based on the Hough transform for sound field rendering applications," in Proc. IEEE Int. Conf. on Acoustics, Speech, and Signal Processing (ICASSP 2013), Vancouver, Canada, May 2013.
[19] A. Kuntz, "Wave field analysis using virtual circular microphone arrays," Ph.D. dissertation, Technische Fakultät der Friedrich-Alexander-Universität Erlangen-Nürnberg, 2008.
[20] A. Canclini, P. Annibale, F. Antonacci, A. Sarti, R. Rabenstein, and S. Tubaro, "A methodology for evaluating the accuracy of wave field rendering techniques," in Proc. IEEE Int. Conf. on Acoustics, Speech, and Signal Processing (ICASSP 2011), Prague, Czech Republic, May 2011.
[21] L. Bianchi, F. Antonacci, A. Canclini, A. Sarti, and S. Tubaro, "A psychoacoustic-based analysis of the impact of pre-echoes and post-echoes in soundfield rendering applications," in Proc. Int. Workshop on Acoustic Signal Enhancement (IWAENC 2012), Aachen, Germany, Sep. 2012.


CMPT 318: Lecture 5 Complex Exponentials, Spectrum Representation CMPT 318: Lecture 5 Complex Exponentials, Spectrum Representation Tamara Smyth, tamaras@cs.sfu.ca School of Computing Science, Simon Fraser University January 23, 2006 1 Exponentials The exponential is

More information

Non-negative Matrix Factor Deconvolution; Extraction of Multiple Sound Sources from Monophonic Inputs

Non-negative Matrix Factor Deconvolution; Extraction of Multiple Sound Sources from Monophonic Inputs MITSUBISHI ELECTRIC RESEARCH LABORATORIES http://www.merl.com Non-negative Matrix Factor Deconvolution; Extraction of Multiple Sound Sources from Monophonic Inputs Paris Smaragdis TR2004-104 September

More information

Nonnegative Matrix Factor 2-D Deconvolution for Blind Single Channel Source Separation

Nonnegative Matrix Factor 2-D Deconvolution for Blind Single Channel Source Separation Nonnegative Matrix Factor 2-D Deconvolution for Blind Single Channel Source Separation Mikkel N. Schmidt and Morten Mørup Technical University of Denmark Informatics and Mathematical Modelling Richard

More information

ON THE LIMITATIONS OF BINAURAL REPRODUCTION OF MONAURAL BLIND SOURCE SEPARATION OUTPUT SIGNALS

ON THE LIMITATIONS OF BINAURAL REPRODUCTION OF MONAURAL BLIND SOURCE SEPARATION OUTPUT SIGNALS th European Signal Processing Conference (EUSIPCO 12) Bucharest, Romania, August 27-31, 12 ON THE LIMITATIONS OF BINAURAL REPRODUCTION OF MONAURAL BLIND SOURCE SEPARATION OUTPUT SIGNALS Klaus Reindl, Walter

More information

Co-Prime Arrays and Difference Set Analysis

Co-Prime Arrays and Difference Set Analysis 7 5th European Signal Processing Conference (EUSIPCO Co-Prime Arrays and Difference Set Analysis Usham V. Dias and Seshan Srirangarajan Department of Electrical Engineering Bharti School of Telecommunication

More information

LECTURE NOTES IN AUDIO ANALYSIS: PITCH ESTIMATION FOR DUMMIES

LECTURE NOTES IN AUDIO ANALYSIS: PITCH ESTIMATION FOR DUMMIES LECTURE NOTES IN AUDIO ANALYSIS: PITCH ESTIMATION FOR DUMMIES Abstract March, 3 Mads Græsbøll Christensen Audio Analysis Lab, AD:MT Aalborg University This document contains a brief introduction to pitch

More information

5. LIGHT MICROSCOPY Abbe s theory of imaging

5. LIGHT MICROSCOPY Abbe s theory of imaging 5. LIGHT MICROSCOPY. We use Fourier optics to describe coherent image formation, imaging obtained by illuminating the specimen with spatially coherent light. We define resolution, contrast, and phase-sensitive

More information

Three-dimensional Radiated Sound Field Display System Using Directional Loudspeakers and Wave Field Synthesis

Three-dimensional Radiated Sound Field Display System Using Directional Loudspeakers and Wave Field Synthesis Acoust. Sci. & Tech. PAPER Three-dimensional Radiated Sound Field Display System Using Directional Loudspeakers and Wave Field Synthesis Toshiyuki Kimura, Yoko Yamakata, Michiaki Katsumoto, Takuma Okamoto

More information

Sound Recognition in Mixtures

Sound Recognition in Mixtures Sound Recognition in Mixtures Juhan Nam, Gautham J. Mysore 2, and Paris Smaragdis 2,3 Center for Computer Research in Music and Acoustics, Stanford University, 2 Advanced Technology Labs, Adobe Systems

More information

Near Optimal Adaptive Robust Beamforming

Near Optimal Adaptive Robust Beamforming Near Optimal Adaptive Robust Beamforming The performance degradation in traditional adaptive beamformers can be attributed to the imprecise knowledge of the array steering vector and inaccurate estimation

More information

Kernel-Based Formulations of Spatio-Spectral Transform and Three Related Transforms on the Sphere

Kernel-Based Formulations of Spatio-Spectral Transform and Three Related Transforms on the Sphere Kernel-Based Formulations of Spatio-Spectral Transform and Three Related Transforms on the Sphere Rod Kennedy 1 rodney.kennedy@anu.edu.au 1 Australian National University Azores Antipode Tuesday 15 July

More information

A technique based on the equivalent source method for measuring the surface impedance and reflection coefficient of a locally reacting material

A technique based on the equivalent source method for measuring the surface impedance and reflection coefficient of a locally reacting material A technique based on the equivalent source method for measuring the surface impedance and reflection coefficient of a locally reacting material Yong-Bin ZHANG 1 ; Wang-Lin LIN; Chuan-Xing BI 1 Hefei University

More information

LOW-COMPLEXITY ROBUST DOA ESTIMATION. P.O.Box 553, SF Tampere, Finland 313 Spl. Independenţei, Bucharest, Romania

LOW-COMPLEXITY ROBUST DOA ESTIMATION. P.O.Box 553, SF Tampere, Finland 313 Spl. Independenţei, Bucharest, Romania LOW-COMPLEXITY ROBUST ESTIMATION Bogdan Dumitrescu 1,2, Cristian Rusu 2, Ioan Tăbuş 1, Jaakko Astola 1 1 Dept. of Signal Processing 2 Dept. of Automatic Control and Computers Tampere University of Technology

More information