SPACIOUSNESS OF SOUND FIELDS CAPTURED BY SPHERICAL MICROPHONE ARRAYS


BEN GURION UNIVERSITY OF THE NEGEV
FACULTY OF ENGINEERING SCIENCES
DEPARTMENT OF ELECTRICAL AND COMPUTER ENGINEERING

SPACIOUSNESS OF SOUND FIELDS CAPTURED BY SPHERICAL MICROPHONE ARRAYS

THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE M.Sc. DEGREE

By Amir Avni
Supervised by: Prof. Boaz Rafaely

November 2010

BEN GURION UNIVERSITY OF THE NEGEV
FACULTY OF ENGINEERING SCIENCES
DEPARTMENT OF ELECTRICAL AND COMPUTER ENGINEERING

SPACIOUSNESS OF SOUND FIELDS CAPTURED BY SPHERICAL MICROPHONE ARRAYS

SUBMITTED IN PARTIAL FULFILLMENT FOR THE M.Sc. DEGREE

By Amir Avni
Supervised by: Prof. Boaz Rafaely

Author: ____________________  Date: ____________
Supervisor: ____________________  Date: ____________
Chairman of M.Sc. degree committee: ____________________  Date: ____________

November 2010

Abstract

Reproduction of sound fields measured in auditoria and concert halls has advanced significantly in the past decade, with reproduction quality improving as high-quality sound field capturing and reproduction systems became available. Spherical microphone arrays are among the recently developed systems for sound field capturing, facilitating high quality processing and analysis of three-dimensional sound fields in the spherical harmonics domain. However, a clear relation between sound fields recorded by spherical microphone arrays and their perception through a reproduction system has not yet been established. A high-quality sound field reproduction can be examined objectively, by analyzing room acoustic parameters over time, frequency, and space, or subjectively, by analyzing listeners' impressions of the reproduced sound, with spatial impression being an important acoustic quality of concert halls. This thesis presents a comprehensive study of binaural reproduction, including spherical microphone array recording, application of head related transfer functions (HRTF), and their finite order representations in the spherical harmonics domain. First, HRTF reproductions composed of a finite spherical harmonics order are investigated, with sound localization cues as the examined parameters for determining the quality of the HRTF spatial reconstruction. Second, interaural cross correlation (IACC), an accepted objective parameter for spatial perception, is investigated for various sound field reproductions over a variety of arrival times and frequency bandwidths, aiming to find the minimal number of spherical harmonics coefficients needed to preserve the sound field's spatial attributes. Third, listeners' subjective perception of several binaural reproductions is examined, aiming to understand the effect of a finite order spherical harmonics sound field representation on different subjective parameters such as spatial perception.

Acknowledgments

I would like to express my great appreciation to Prof. Boaz Rafaely for his passionate and excellent guidance during my two years of research at Ben Gurion University. His enthusiasm and interest in his field are contagious and admirable. I have learned from him not only the knowledge necessary for my research, but also the method of scientific thinking: how to look at the big picture while investigating the smallest details. Thank you for accepting me as part of your research group. I would also like to thank the Israel Science Foundation (grant No. 155/06) for the financial support offered to me during the course of my M.Sc. degree, and the Deutsche Telekom Laboratories and their crew for the help and support during the last several months of my research. Many thanks to my colleagues in the acoustics group, who each helped me in his unique way, whether by enriching me with valuable scientific or technical knowledge, listening to my ideas and letting me express my thoughts, or simply by creating a good and positive atmosphere. Thank you all for sharing your time with me in the lab and breaking the routine when it was needed. Finally, I would like to thank my girlfriend, Sharon, for always being there and supporting me. Her unique point of view has helped me more than once when I encountered obstacles. Thank you for showing interest in my work and being part of my life.

Amir Avni

Contents

1 Introduction
  1.1 Motivation
  1.2 Overview
  1.3 Contributions
  1.4 Publications
2 Literature Review
  2.1 Spherical arrays
  2.2 The diffuse sound field
  2.3 Head related transfer functions
  2.4 Subjective and objective room acoustic parameters - Definitions and past research
3 ITD and ILD Analysis by HRTF Modeling in the Spherical Harmonics Domain
  3.1 The use of HRTF database
  3.2 Simulation study - MSE
  3.3 Simulation study - Interaural differences
4 IACC of Sound Fields Represented by Spherical Harmonics
  4.1 Theory
  4.2 Study I - Simulated sound field
  4.3 Study II - Measured sound field
5 IACC of Reverberant Sound Fields
  5.1 Theory
  5.2 Simulation study
6 Spatial Perception of Sound with Binaural Reproduction and Spherical Microphone Array Recording
  6.1 Perceptual experiment
  6.2 Results and discussion
7 Summary and Future Work
  7.1 Conclusion
  7.2 Future work

List of Figures

2.1 Magnitude of b_n as a function of kr
2.2 Amplitude of a single plane wave as a function of kr
2.3 HRTF example at (0, 20)
Map of the CIPIC database directions
Condition number of Y as a function of N
HRTF error as a function of N
Two ITD error estimations for Az=-80 as a function of N
ILD error as a function of N
IACC RMSE as a function of N
ρ_N magnitude as a function of N - open sphere
ρ_N magnitude as a function of N - rigid sphere
Eq. (5.5) magnitude as a function of N
Grid for a single subject
Grid consists of timbre attributes
Grid consists of spaciousness attributes
Grid consists of artifacts attributes

List of Tables

3.1 The spherical harmonics order for several error bounds
ITD estimation using cross-correlation
ITD estimation using linear fitting of the excess phase
ILD error analysis for 1 kHz, kr =
ILD error analysis for 2 kHz, kr =
ILD error analysis for 3 kHz, kr =
IACC_E values at various frequencies and E
IACC_E mean, std and error - KEMAR vs. em
Correlation in a diffuse sound field for different octave bands
Flow diagram of the experimental method
Flow diagram of the experimental procedure
A list of the triads used in the elicitation part
Results for a single subject for the castanet stimuli

Abbreviations and Important Notations

HRTF   Head related transfer functions
HRIR   Head related impulse responses
ITD    Interaural time difference
ILD    Interaural level difference
JND    Just noticeable difference
ASW    Apparent source width
LEV    Listener envelopment
IACF   Interaural correlation function
IACC   Interaural cross correlation
LF     Lateral fractions
C      Clarity
G      Sound strength
[·]_E  Parameter at early sound
[·]_3  Parameter averaged over 3 mid octave bands
[·]_L  Parameter at late sound
SFT    Spherical Fourier Transform
MSE    Mean square error
RT     Reverberation time
RGT    Repertory grid technique

Chapter 1

Introduction

1.1 Motivation

Sound field capturing and reproduction has recently been investigated, with reproduction based on loudspeaker arrays [1], [2], or on head related transfer functions (HRTF) and headphones [3]. Spatial impression is an important acoustic quality of concert halls, with different desirable spatial impressions associated with different enclosure design objectives (e.g., music appreciation or speech intelligibility). A high quality reproduction using headphones should therefore aim to preserve spatial impression. Beranek [4], [5] describes several spaciousness parameters that characterize sound fields in halls in terms of attributes related to time, frequency, and the perception by the listener. The parameters are defined as scalars, resulting from a quantization of the sound field over different times or directions. Some of the spaciousness parameters, such as the lateral fraction (LF) or the interaural cross correlation (IACC), are objective parameters; they are well defined mathematically and hence can be measured and calculated easily. Others, such as apparent source width (ASW) and listener envelopment (LEV), are subjective parameters that can be measured through subjective experiments. Several experiments have been performed trying to find a connection between the subjective parameters and the known objective parameters, to obtain a better understanding of the perception of sound by the listener; these include Beranek et al. [6], Morimoto et al. [7], [8], Bradley et al. [9], [10], and Furuya et al. [11]. The study of subjective measurements can be rather difficult, since each individual perceives sound differently, and despite progress in understanding the connections between spatial hearing and spaciousness parameters, the relations are still not fully understood. Nevertheless, the experiments show some common attributes for which most subjects' perception is similar up to a certain variance. A common method to investigate sound perception by listeners involves the head related transfer functions (HRTF). The HRTF describe the frequency response between a source arriving from a specific direction and the sound measured inside the listener's ears. Measuring an HRTF database over a vast number of directions can be useful for further analysis of objective measurements. The database can be generated using either a dummy head [12] or human subjects [13]. The listener's ability to localize sound sources implies that HRTF hold valuable spatial data that can be analyzed. According to Lord Rayleigh's duplex theory [14], interaural time difference (ITD) and interaural level difference (ILD) are two binaural cues that complement each other for sound localization, and therefore they can be estimated using an HRTF database. ITD is the time difference between the sound measured at both ears; it is usually effective at low frequencies, where the wavelength is larger than the head.
ILD, on the other hand, is the difference in decibels (dB) between the magnitudes of the sound pressure measured at both ears, which is affected mostly by head shadowing;

hence, ILD is noticeable mostly at high frequencies. Spherical microphone arrays have recently been developed to capture and analyze sound fields in rooms and halls. Plane wave decomposition of the sound field has been shown to be an efficient method, formulated in the spherical harmonics domain [15], [16], for spatial analysis of sound fields in various rooms [17]. As these arrays become readily available, enabling accurate capture of complex sound fields, they may be used to reproduce sound fields more accurately and preserve objective spatial room parameters. The sound field, typically captured by a finite number of microphones, has an incomplete representation in the spherical harmonics domain, due to the finite number of spherical harmonics coefficients [18]. Hence, room acoustic parameters may not be preserved using a low number of coefficients. The following thesis investigates the effect of sound fields represented by a finite order of spherical harmonics coefficients on several spatial attributes, and suggests relations between the measured attributes, the sound field bandwidth, and the number of coefficients needed to preserve the attributes in a reconstructed sound field. Both objective studies and studies involving human subjects are performed to investigate these relations. A fine reconstruction of spatial attributes may facilitate high quality reproduction and evaluation of sound fields captured in auditoria and concert halls.

1.2 Overview

The remainder of the thesis is organized as follows. Chapter 2 presents a literature review and important definitions used in this work: Section 2.1 reviews spherical microphone arrays, their uses, and the definition of the

spherical harmonics domain; Sections 2.2 and 2.3 present definitions of the diffuse sound field model and HRTF; and Section 2.4 includes definitions of room acoustic parameters relevant to this work and past research in the field. Chapter 3 comprises an analysis of HRTF in the spherical harmonics domain in terms of mean square error (MSE) and sound localization cue estimation. Chapter 4 examines IACC involving early room reflections, computed from binaural cues reproduced from a sound field captured using a spherical microphone array, including both simulation and experimental studies. Chapter 5 analyzes the spatial-temporal correlation between two measurement points in a reverberant sound field represented in the spherical harmonics domain. Chapter 6 presents a perceptual experiment, including several subjects, performed to analyze the subjects' perception of several binaural reproductions. Finally, Chapter 7 summarizes this work with a conclusion.

1.3 Contributions

This thesis presents several contributions:

- A formulation for the sound measured at the listener's ears as a sum of coefficients, combining spherical microphone array measurements and the HRTF of a selected subject, both represented in the spherical harmonics domain. Using this formulation, a sound field can be re-synthesized and played through headphones, creating a perception similar to that of the original sound field.

- Analysis of the relations between HRTF represented by a finite order of spherical harmonics coefficients and their spatial structure, examining localization cues as a means to perform this analysis, proving a relation

between the spherical harmonics representation of a sound field and the localization cues (ILD and ITD) presented by this sound field.

- A deeper understanding of the relation between the order of the spherical harmonics representation of a measured sound field and the degree to which IACC is preserved by this representation, providing an objective measure of the preservation of spaciousness.

- A deeper understanding of the relations between a sound field represented by a finite order of spherical harmonics coefficients, captured by a finite number of microphones, and its perception by a listener, through a perceptual experiment. Among the examined attributes affected by the order and the number of microphones are timbre-related differences, spaciousness, and various unpleasant artifacts.

- A formulation for the temporal-spatial cross correlation between two measurement points in a diffuse sound field, represented in the spherical harmonics domain with a rigid sphere between the measurement points as a crude model of the human head. A better understanding of the relation between IACC_L, calculated from late reflections, and the spherical harmonics order of a sound field is achieved by analyzing this formulation. Sound field order can therefore be related to LEV via IACC_L.

1.4 Publications

The research performed as part of this thesis has been presented in several journal and conference papers, as follows.

- A. Avni, S. Spors, J. Ahrens and B. Rafaely, "Spatial perception of

sound with binaural reproduction and spherical microphone array recording," (in preparation).

- A. Avni and B. Rafaely, "Sound localization in a sound field represented by spherical harmonics," in Ambisonics Symposium 2010, Paris, France, May 2010.

- B. Rafaely and A. Avni, "Inter-aural cross correlation in a sound field represented by spherical harmonics," J. Acoust. Soc. Am., vol. 127, no. 2, February 2010.

- A. Avni and B. Rafaely, "Interaural cross correlation and spatial correlation in a sound field represented by spherical harmonics," Ambisonics Symposium 2009, June 2009.

- B. Rafaely and A. Avni, "Inter-aural cross correlation in a sound field represented by spherical harmonics (a)," J. Acoust. Soc. Am., vol. 125, no. 4, p. 2545, April 2009.

Chapter 2

Literature Review

2.1 Spherical arrays

Different array configurations have been suggested for analyzing sound fields, with well-known configurations such as linear and planar arrays. However, these arrays suffer from some major disadvantages, since they are designed around a one-dimensional axis or a two-dimensional plane. When the direction of wave arrival needs to be estimated, for example, these arrays suffer from ambiguities due to their symmetries. The spherical array, on the other hand, occupies a three-dimensional space and therefore has some major advantages when analyzing three-dimensional sound fields. The spherical microphone array has been shown to be useful for sound field measurement and analysis in recent work [15], [19], [18]. This section presents the properties of spherical arrays, which will be used later in this work for spatial data analysis in the spherical harmonics domain. In addition, some disadvantages of spherical arrays and their solutions will be presented.

2.1.1 Spherical Fourier transform

Consider a function f(θ,φ) which is square integrable on the unit sphere; then the spherical Fourier transform of f, denoted by f_nm, is given by [15]:

    f_{nm} = \int_0^{2\pi} \int_0^{\pi} f(\theta,\phi) [Y_n^m(\theta,\phi)]^* \sin\theta \, d\theta \, d\phi    (2.1)

    f(\theta,\phi) = \sum_{n=0}^{\infty} \sum_{m=-n}^{n} f_{nm} Y_n^m(\theta,\phi)    (2.2)

The spherical harmonics Y_n^m(θ,φ) are given by [20]:

    Y_n^m(\theta,\phi) = \sqrt{ \frac{2n+1}{4\pi} \frac{(n-m)!}{(n+m)!} } \, P_n^m(\cos\theta) \, e^{im\phi}    (2.3)

where n is the order of the spherical harmonics. P_n^m is the associated Legendre function, which depends only on θ, while the factor e^{imφ} depends only on φ. The spherical harmonics, composed of these two functions, form the angular solution of the wave equation in spherical coordinates. The spherical harmonics are orthonormal and complete, so the following relations hold [20]:

    \int_0^{2\pi} \int_0^{\pi} Y_n^m(\theta,\phi) [Y_{n'}^{m'}(\theta,\phi)]^* \sin\theta \, d\theta \, d\phi = \delta_{nn'} \delta_{mm'}    (2.4)

    \sum_{n=0}^{\infty} \sum_{m=-n}^{n} [Y_n^m(\theta',\phi')]^* Y_n^m(\theta,\phi) = \delta(\phi-\phi') \, \delta(\cos\theta-\cos\theta')    (2.5)

where δ_{nn'} and δ_{mm'} are Kronecker delta functions and δ(φ−φ') and δ(cosθ−cosθ') are Dirac delta functions. The spherical Fourier transform also satisfies a symmetry property on the sphere with respect to azimuth

or elevation:

    f(\theta,\phi) = f(\theta) \;\Rightarrow\; f_{nm} = 0 \text{ for } m \neq 0    (2.6)

    f(\theta,\phi) = f(\phi) \;\Rightarrow\; f_{nm} = 0 \text{ for } n+m \text{ odd}    (2.7)

Consider a function f(θ,φ). Eq. (2.6) shows that if the spherical transform coefficients f_nm vanish for all m ≠ 0, then the function is constant along the azimuth. The spherical harmonics addition theorem provides a simplification for a product of two spherical harmonics using the Legendre polynomial [20]:

    \sum_{m=-n}^{n} Y_n^m(\theta_1,\phi_1) [Y_n^m(\theta_2,\phi_2)]^* = \frac{2n+1}{4\pi} P_n(\cos\Theta)    (2.8)

where Θ is the angle between the spatial directions (θ_1,φ_1) and (θ_2,φ_2).

2.1.2 Plane wave decomposition

Consider a sound pressure function p(k,r,θ,φ), with (r,θ,φ) the standard spherical coordinate system and k the wavenumber, which is square integrable over (θ,φ). Its representation in the spherical harmonics domain is defined [21] by:

    p(k,r,\theta,\phi) = \sum_{n=0}^{\infty} \sum_{m=-n}^{n} p_{nm}(k,r) Y_n^m(\theta,\phi)    (2.9)

where p_nm(k,r) is the spherical Fourier transform of the sound pressure. Consider a plane wave with unit amplitude arriving from (θ_l,φ_l); the pressure at the measurement point (r,θ,φ) due to the plane wave is p_l(kr,θ,φ) and

can be represented by [15]:

    p_l(kr,\theta,\phi) = \sum_{n=0}^{\infty} \sum_{m=-n}^{n} b_n(kr) [Y_n^m(\theta_l,\phi_l)]^* Y_n^m(\theta,\phi)
                        = \sum_{n=0}^{\infty} \frac{2n+1}{4\pi} b_n(kr) P_n(\cos\Theta)    (2.10)

where Θ is the angle between (θ_l,φ_l) and (θ,φ), and the second equality in (2.10) is derived using Eq. (2.8). b_n(kr) is defined for an open sphere and for a rigid sphere of radius a as follows:

    b_n(kr) = \begin{cases} 4\pi i^n \left( j_n(kr) - \dfrac{j_n'(ka)}{h_n'(ka)} h_n(kr) \right) & \text{rigid sphere} \\[4pt] 4\pi i^n j_n(kr) & \text{open sphere} \end{cases}    (2.11)

where j_n is the spherical Bessel function, h_n is the spherical Hankel function, and j_n', h_n' are their derivatives. The spherical Fourier transform of a single plane wave with a complex amplitude a(k) is defined by:

    p_{nm}(k,r) = a(k) \, b_n(kr) \, [Y_n^m(\theta_l,\phi_l)]^*    (2.12)

Consider a measured sound pressure composed of a large number of plane waves arriving from different directions, where the plane wave from (θ_l,φ_l) has a complex amplitude a(k,θ_l,φ_l); the spherical harmonics domain pressure function is then defined by [15]:

    p_{nm}(k,r) = \int_0^{2\pi} \int_0^{\pi} a(k,\theta_l,\phi_l) \, b_n(kr) \, [Y_n^m(\theta_l,\phi_l)]^* \sin\theta_l \, d\theta_l \, d\phi_l    (2.13)
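The orthonormality relation (2.4) and the addition theorem (2.8), which underpin the derivation of Eq. (2.10), can be checked numerically. A minimal sketch with NumPy/SciPy (note that `scipy.special.sph_harm` takes the degree m first and the azimuth angle before the polar angle):

```python
import numpy as np
from scipy.special import sph_harm, eval_legendre

def sh_inner(n1, m1, n2, m2, n_theta=32, n_phi=64):
    """<Y_n1^m1, Y_n2^m2> over the unit sphere (Eq. 2.4), evaluated by
    Gauss-Legendre quadrature in cos(theta) and uniform sampling in phi."""
    x, w = np.polynomial.legendre.leggauss(n_theta)  # x = cos(theta)
    theta = np.arccos(x)
    phi = np.linspace(0, 2 * np.pi, n_phi, endpoint=False)
    th, ph = np.meshgrid(theta, phi, indexing="ij")
    y1 = sph_harm(m1, n1, ph, th)  # scipy order: (m, n, azimuth, polar)
    y2 = sph_harm(m2, n2, ph, th)
    return (2 * np.pi / n_phi) * np.sum(w[:, None] * y1 * np.conj(y2))

print(abs(sh_inner(3, 2, 3, 2)))  # ~1: orthonormality
print(abs(sh_inner(3, 2, 2, 1)))  # ~0: orthogonality

# Addition theorem (Eq. 2.8) for two arbitrary directions
n, (t1, p1), (t2, p2) = 4, (0.7, 1.2), (2.1, 4.0)
lhs = sum(sph_harm(m, n, p1, t1) * np.conj(sph_harm(m, n, p2, t2))
          for m in range(-n, n + 1))
cosT = np.sin(t1) * np.sin(t2) * np.cos(p1 - p2) + np.cos(t1) * np.cos(t2)
rhs = (2 * n + 1) / (4 * np.pi) * eval_legendre(n, cosT)
print(abs(lhs - rhs))  # ~0
```

Gauss-Legendre nodes in cos θ absorb the sin θ factor of the surface integral, so both checks are exact to machine precision for these band-limited integrands.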

Eq. (2.13) can be simplified by using Eq. (2.1):

    p_{nm}(k,r) = a_{nm}(k) \, b_n(kr)    (2.14)

where a_nm is the spherical Fourier transform of a(k,θ_l,φ_l). Having approximated p_nm from measurements, a(k,θ_l,φ_l) can be found using an inverse spherical Fourier transform:

    a(k,\theta_l,\phi_l) = \sum_{n=0}^{\infty} \sum_{m=-n}^{n} a_{nm} Y_n^m(\theta_l,\phi_l)
                         = \sum_{n=0}^{\infty} \sum_{m=-n}^{n} \frac{p_{nm}(k,r)}{b_n(kr)} Y_n^m(\theta_l,\phi_l)    (2.15)

A problem could occur when b_n(kr), defined for an open sphere, has a value of zero or close to zero. A method to overcome this problem has been presented theoretically [22] and experimentally [17]: two sets of measurements are taken at two radii, and for each k the set of measurements for which b_n(kr) takes larger values is selected.

2.1.3 Capturing sound fields with a spherical array - Practical considerations

A practical measurement set consisting of a finite number of microphones is capable of capturing only a finite number of coefficients; therefore, a finite number of coefficients up to order N will be kept, and the transform of a single plane wave, as shown in Eq. (2.10), becomes:

    p_l(kr,\theta,\phi) = \sum_{n=0}^{N} \sum_{m=-n}^{n} b_n(kr) [Y_n^m(\theta_l,\phi_l)]^* Y_n^m(\theta,\phi)    (2.16)
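The coefficients b_n(kr) of Eq. (2.11), and the open-sphere nulls that motivate the dual-radius measurement mentioned above, can be evaluated directly. A sketch using SciPy's spherical Bessel routines; the Hankel function is taken here as h_n = j_n + i y_n (the first kind), which is an assumption — the opposite sign convention appears in texts with the other time-harmonic convention:

```python
import numpy as np
from scipy.special import spherical_jn, spherical_yn

def bn(n, kr, ka=None):
    """b_n(kr) of Eq. (2.11): open sphere if ka is None, otherwise a rigid
    sphere of radius a. Assumes h_n = j_n + i*y_n for the Hankel function."""
    if ka is None:
        return 4 * np.pi * 1j**n * spherical_jn(n, kr)
    hn_kr = spherical_jn(n, kr) + 1j * spherical_yn(n, kr)
    djn_ka = spherical_jn(n, ka, derivative=True)
    dhn_ka = (spherical_jn(n, ka, derivative=True)
              + 1j * spherical_yn(n, ka, derivative=True))
    return 4 * np.pi * 1j**n * (spherical_jn(n, kr) - (djn_ka / dhn_ka) * hn_kr)

kr = 3.0
print([round(abs(bn(n, kr)), 3) for n in range(8)])  # decays for n > kr
print(abs(bn(0, np.pi)))            # open-sphere null: j_0(pi) = 0
print(abs(bn(0, np.pi, ka=np.pi)))  # rigid sphere has no null there
```

The last two lines illustrate why the open-sphere inverse in Eq. (2.15) is ill-conditioned near zeros of j_n, while the rigid-sphere b_n stays bounded away from zero.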

[Figure 2.1: Magnitude of b_n as a function of kr for (a) an open sphere, (b) a rigid sphere where ka = kr.]

To choose a sufficient number of coefficients, it is necessary to show that above a certain order N the coefficient values diminish, with their absence causing a negligible error. Figure 2.1 presents the magnitude of b_n as a function of kr for an open and a rigid sphere for different n, as defined in Eq. (2.11). It can be seen that for a chosen kr, b_n takes values of a similar order of magnitude when n < kr and diminishes approximately when n > kr. Therefore, it can be assumed that N = kr is a sufficient number of coefficients for a good approximation of the spherical Fourier transform. An understanding of the effect of a finite spherical harmonics order at the sound capture stage may be useful in the analysis of spatial perception in this work. Since a finite order spherical harmonics representation is not complete, the sound field captured by a spherical microphone array will not be exact, but in many cases an accurate representation can be achieved, as discussed below.
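The rule of thumb N ≈ kr can be illustrated by reconstructing a unit plane wave at its own arrival direction (Θ = 0) from a truncated expansion. Consistent with Eq. (2.10) for an open sphere, the exact sum Σ_n (2n+1) i^n j_n(kr) has unit magnitude for every kr, so any drop below 1 is pure truncation loss:

```python
import numpy as np
from scipy.special import spherical_jn

def truncated_pw(kr, N):
    """Unit plane wave reconstructed up to order N at the arrival direction
    (Theta = 0, so P_n(cos Theta) = 1); the exact value has magnitude 1."""
    n = np.arange(N + 1)
    return np.sum((2 * n + 1) * (1j**n) * spherical_jn(n, kr))

N = 3
for kr in (0.5, 2.0, 10.0, 30.0):
    print(kr, abs(truncated_pw(kr, N)))
# magnitude ~1 while kr is below ~N, attenuated beyond
```

Running this shows the low-pass behavior of a finite order representation discussed in the frequency bandwidth paragraph below: the reconstruction is essentially transparent for kr < N and increasingly attenuated for kr > N.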

[Figure 2.2: Amplitude of a single plane wave constructed by an order-N spherical harmonics representation, as a function of kr.]

Frequency bandwidth

For a given order n, b_n(kr), as seen in Fig. 2.1, has the shape of a band-pass filter (excluding order zero, which behaves as a low-pass filter), with a peak around kr ≈ n. This behavior may form a dependency between spherical harmonics order and frequency content, as indeed observed in the experimental study. When a plane wave is constructed by a finite summation up to order N in the spherical harmonics domain, the reconstructed amplitude is attenuated at frequencies corresponding to kr > N, as illustrated in Fig. 2.2.

Spatial resolution

Plane wave decomposition of a single plane-wave sound field should yield a delta function at the direction of arrival of the plane wave when employing an infinite order in the spherical harmonics domain. However, as shown by Rafaely [15], when using a spherical array with a finite number of microphones and a finite spherical harmonics order N, the delta function is

replaced by:

    a(\theta,\phi) = \frac{N+1}{4\pi(\cos\Theta - 1)} \left( P_{N+1}(\cos\Theta) - P_N(\cos\Theta) \right)    (2.17)

where Θ is the angle between (θ,φ), the plane-wave decomposition analysis direction, and the plane wave direction of arrival. Eq. (2.17) defines a spatial function with a main lobe of width 2Θ_0, which depends on the order N:

    \Theta_0 \approx \frac{\pi}{N}    (2.18)

As a result, in a sound field reproduced with low orders, the direction of arrival of the plane wave is smeared over a wide range of directions. Furthermore, two or more plane waves that arrive from similar directions may be mistakenly analyzed as a single plane wave.

Spatial aliasing

Similar to time-domain aliasing, spatial aliasing can occur when the sound field is captured by a finite number of samples on the sphere, i.e., using a finite number of microphones. f(θ,φ) was presented in Eq. (2.1) as a continuous function; in practice, however, only a set of samples is available and the function is discrete. If the samples follow a known scheme such as equiangle, Gaussian, or uniform sampling, each sample (θ_j,φ_k) can be multiplied by a weight α(θ_j,φ_k) [23] and the integration becomes a sum. Eq. (2.1) can then be presented as:

    f_{nm} = \sum_{j=0}^{J} \sum_{k=0}^{K} \alpha(\theta_j,\phi_k) \, f(\theta_j,\phi_k) \, [Y_n^m(\theta_j,\phi_k)]^*    (2.19)
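Eq. (2.19) with a Gaussian sampling scheme — N+1 Gauss-Legendre nodes in elevation and 2(N+1) uniform azimuth samples, i.e., 2(N+1)^2 sample points in total — recovers an order-N field exactly. A sketch (again with SciPy's azimuth-before-polar argument order):

```python
import numpy as np
from scipy.special import sph_harm

def discrete_sft(f, N):
    """Discrete spherical Fourier transform (Eq. 2.19) on a Gaussian
    sampling scheme with 2*(N+1)**2 'microphones'; exact for functions
    band-limited to spherical harmonics order N."""
    x, w = np.polynomial.legendre.leggauss(N + 1)   # nodes in cos(theta)
    theta = np.arccos(x)
    phi = np.linspace(0, 2 * np.pi, 2 * (N + 1), endpoint=False)
    th, ph = np.meshgrid(theta, phi, indexing="ij")
    samples = f(th, ph)
    alpha = (2 * np.pi / (2 * (N + 1))) * w[:, None]  # quadrature weights
    fnm = {}
    for n in range(N + 1):
        for m in range(-n, n + 1):
            Y = sph_harm(m, n, ph, th)
            fnm[(n, m)] = np.sum(alpha * samples * np.conj(Y))
    return fnm

# A field that is exactly Y_2^1 is recovered with coefficient 1 at (2, 1)
# and (numerically) zero everywhere else.
coef = discrete_sft(lambda th, ph: sph_harm(1, 2, ph, th), N=3)
print(abs(coef[(2, 1)]), abs(coef[(3, 0)]))
```

Feeding this sampler a field containing orders above N would fold the excess energy into the retained coefficients — the spatial aliasing discussed next.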

A finite number of 2(N+1)^2 microphones, arranged in a Gaussian sampling scheme, can capture a sound field up to order N, while orders higher than N result in aliasing errors [23]. It can be seen in Fig. 2.1 that for a single plane wave with kr satisfying kr ≤ N, the orders higher than kr diminish in magnitude, and so the aliasing error is expected to be negligible. However, when kr > N, spatial aliasing may occur. Different microphone arrangements require different numbers of microphones to guarantee aliasing-free sampling of functions on the sphere limited to order N [18], with a lower bound of (N+1)^2 microphones. As a result, for a given measurement set, a limited bandwidth should be used to avoid spatial aliasing. In sound reproduction, however, reproducing natural and high quality audio requires the full audio bandwidth, and spatial aliasing may then occur at the high frequencies.

2.2 The diffuse sound field

The diffuse sound field model has been developed in order to simplify calculations in the analysis of reverberant rooms. This section presents the assumptions and properties of the diffuse sound field model, which is used later in this work when analyzing reverberant halls.

2.2.1 Definition

When a sound source radiates in a room, reflections at the walls produce a sound energy distribution that becomes more and more uniform with increasing time. Eventually, except close to the source or the room surfaces, the spatial energy density can be considered constant, or completely diffuse. Consider any point in the room; the energy is arriving and departing along

individual ray paths, and the rays have random phases. The energy density ε at a selected measuring point in the room is the sum of the energy densities ε_j of the individual rays. The energy density of one ray is defined by [24]:

    \epsilon_j = \frac{1}{2} \rho_0 \left( u_j^2 + \left( \frac{p_j}{\rho_0 c} \right)^2 \right)    (2.20)

where ρ_0 is the (constant) air density, c is the speed of sound, and p_j and u_j are the effective pressure and velocity. From the conservation of momentum law it can be shown that u_j = p_j/(ρ_0 c), and the overall energy density is:

    \epsilon = \sum_j \epsilon_j = \sum_j \frac{p_j^2}{\rho_0 c^2} = \frac{P_r^2}{\rho_0 c^2}    (2.21)

where P_r = (\sum_j p_j^2)^{1/2} is the effective pressure at the selected measuring point. Since ε is constant at each point, the effective pressure is constant at each point in the room as well.

2.2.2 Cross correlation

Cook et al. [25] developed a definition for the cross correlation between two measuring points in a diffuse field. In this subsection the method is presented with respect to a time delay at one of the measurement points. Assume a single plane wave with unit amplitude and wavenumber k = ω/c, where c is the speed of sound. The measured pressure at two arbitrary points with

distance r between them is:

    p_1(t) = \cos(\omega t)    (2.22)
    p_2(t) = \cos(\omega t - \omega\tau - kr\cos\theta)    (2.23)

where θ is the angle between the wave front and the line connecting the two points [26]. The temporal cross correlation is defined as:

    R(\tau,k,r,\theta) = \frac{ \frac{1}{T} \int_0^T p_1(t) p_2(t) \, dt }{ \sqrt{ \left( \frac{1}{T} \int_0^T p_1^2(t) \, dt \right) \left( \frac{1}{T} \int_0^T p_2^2(t) \, dt \right) } }    (2.24)

Substituting Eq. (2.22) and Eq. (2.23) in Eq. (2.24) gives:

    R(\tau,k,r,\theta) = \cos(\omega\tau + kr\cos\theta)    (2.25)

For the diffuse field model, the cross correlation is computed such that the plane waves arrive from all directions with equal weights and are independent of each other. Assuming the two measuring points are on the z-axis and θ is the elevation as defined in spherical coordinates, the cross correlation, or spatial-temporal correlation, is:

    \rho(\tau,kr) = \frac{1}{4\pi} \int_0^{2\pi} \int_0^{\pi} \cos(\omega\tau + kr\cos\theta) \sin\theta \, d\theta \, d\phi = \frac{\sin(kr)}{kr} \cos(\omega\tau)    (2.26)

Consider a broadband sound field; assuming the plane waves at different frequencies have equal weights and are independent of each other, the correlation is:

    \rho_{BB}(r,\tau) = \frac{1}{\omega_2 - \omega_1} \int_{\omega_1}^{\omega_2} \frac{\sin(\omega r / c)}{\omega r / c} \cos(\omega\tau) \, d\omega    (2.27)

Knowing the correlation between two points can be useful for statistical characterization of the sound field in a region around a measured point. The cross correlation will be presented later on as an important acoustic parameter.

2.3 Head related transfer functions

2.3.1 Definition

As sound propagates in free space, it is usually common to assume plane wave propagation. An omni-directional microphone located in the space senses the pressure due to a given plane wave with a frequency-dependent time delay and attenuation, which depend on the locations of the source and the microphone and the acoustic conditions between them. A listener can be modeled as two microphones that sense the sound field at their positions (i.e., the listener's ears); these signals are affected by the presence of the listener himself, which causes different time delays and attenuations at different frequencies. The head related transfer functions (HRTF) are the two transfer functions between the source and the right and left ears, defined [27] as the sound pressure measured at the eardrum or at the ear canal entrance divided by the sound pressure measured with a microphone at the center of the head but with the head absent. The HRTF is a function of frequency and direction of sound incidence. A different HRTF is defined for each individual according to his/her anatomic dimensions. The head and shoulders affect sound transmission into the ear canal at mid-frequencies, and the pinna contributes to distortions at the higher frequency range (above 3 kHz). In this work, the HRTF are shown to be very useful for the purpose of reconstructing a sound field as perceived by a listener. When analyzing the direction of HRTF it is common to use an interaural coordinate system [28], assuming an imaginary horizontal axis between the listener's ears and a vertical axis from top to bottom through the listener's head. The azimuth and elevation directly in front of the listener are (0,0). The azimuth is defined as a rotation about the vertical axis with values in [-90°, 90°], whereas the elevation is defined as a rotation about the horizontal axis with values in [-180°, 180°]. An HRTF database has been created using a dummy head [12] by MIT, and another involving human subjects [13] by the CIPIC Interface Laboratory. The CIPIC database consists of 45 subjects, with 1250 different directions of measured HRTF for each subject. Such a large database can be very useful for analysis of human perception of virtual or real sound fields. For example, consider p_s(t), a pressure function caused by s(t), a monaural sound. To transform p_s(t) as it would be perceived by the listener from a chosen direction (θ,φ), a convolution is computed such that:

    p_{left\,ear}(t) = p_s(t) * HRIR_{left\,ear}(\theta,\phi)    (2.28)
    p_{right\,ear}(t) = p_s(t) * HRIR_{right\,ear}(\theta,\phi)    (2.29)

where HRIR is the head-related impulse response, which is the inverse Fourier transform of the HRTF, and * is the convolution operator.
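Eqs. (2.28)-(2.29) amount to filtering the dry signal with each ear's impulse response. A minimal sketch with synthetic toy HRIRs — a pure delay and attenuation, a hypothetical stand-in for measured CIPIC data (real HRIRs are far richer than this):

```python
import numpy as np
from scipy.signal import fftconvolve

fs = 44100
rng = np.random.default_rng(0)
s = rng.standard_normal(fs // 2)          # half a second of noise as s(t)

# Toy HRIR pair (hypothetical, NOT CIPIC data): the right ear receives the
# sound 13 samples (~0.29 ms) later and at half the amplitude (-6 dB).
hrir_left = np.zeros(64);  hrir_left[0] = 1.0
hrir_right = np.zeros(64); hrir_right[13] = 0.5

# Eqs. (2.28)-(2.29): ear signals as convolutions of s(t) with each HRIR
p_left = fftconvolve(s, hrir_left)
p_right = fftconvolve(s, hrir_right)
```

Played over headphones, the delay and level difference between `p_left` and `p_right` are exactly the ITD and ILD cues discussed next; with measured HRIRs the same two convolutions place the source at the measured direction.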

2.3.2 Interaural Differences

Figure 2.3 presents an example of a single HRTF for a chosen subject, with (0, 20) as the source direction, taken from the CIPIC database GUI. The two bottom plots present the HRTF of both ears, where different attenuations can be noticed at different frequencies. The impulse responses of both ears are presented in the top plot, where the time delay and attenuation between the ears are highly noticeable.

[Figure 2.3: HRTF at (0, 20): Top: impulse responses of both ears. Middle: left ear HRTF. Bottom: right ear HRTF.]

According to Lord Rayleigh's duplex theory [14], interaural time difference (ITD) and interaural level difference (ILD) are two binaural cues that complement each other for sound localization, and therefore they can be estimated using an HRTF database. ITD is the time difference between the sound measured at the left ear and the sound measured at the right ear; it is usually effective at low

30 frequencies where the wavelength is larger than the head. ILD, on the other hand, is the difference in decibels (db) between the magnitude of the sound pressure measured at the left ear and the sound pressure measured at the right ear, which is affected mostly by head shading; hence, ILD is noticeable mostly at high frequencies. ITD and ILD can be estimated objectively using HRTF or subjectively by subjective experiments of sound localization. Several methods have been suggested to estimate ITD using HRTF and head related impulse responses(hrir). Kistler and Wightman[29] suggested computing the cross correlation between the left HRIR and the right HRIR and determining ITD when the cross correlation is maximized. Jot et al. [30] and Huopaniemi and Smith [31] suggested linear fitting of the excess phase to estimate ITD by the time-of-arrival for both HRIR. Nam et al. [32] suggested computing the delay of the HRIR for each ear using a maximization of the cross correlation between the HRIR and the HRIR minimum phase. ILD is easier to compute using the ratio of the HRTF magnitudes; however it is frequency dependent and generally increases as frequency increases; therefore it is common to estimate ILD for a certain frequency [33] or in 1/3 octave bands [34]. Another interaural differences related definition is the just noticeable differences (JND). The JND are the smallest differences in a parameter value where its evaluation or perception by a listener is not affected. In ITD and ILD the JND is evaluated in time and amplitude respectively. The JND was defined by subjective experiments both for ITD [35], [36] and ILD [37], and was later defined well [38] as around 10µs for ITD and 1dB for ILD. 21
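The cross-correlation approach of Kistler and Wightman can be sketched as follows, assuming discrete-time HRIRs; the Gaussian pulses stand in for measured impulse responses, and the sign convention (positive ITD when the sound reaches the left ear first) is a choice made here for illustration:

```python
import numpy as np

def estimate_itd(hrir_l, hrir_r, fs):
    """ITD estimate: the lag (in seconds) that maximizes the
    cross-correlation between the two HRIRs (Kistler and Wightman).
    Positive ITD means the sound reaches the left ear first."""
    xcorr = np.correlate(hrir_r, hrir_l, mode="full")
    lag = np.argmax(np.abs(xcorr)) - (len(hrir_l) - 1)
    return lag / fs

fs = 44100
n = np.arange(64)
hrir_l = np.exp(-0.5 * ((n - 10) / 2.0) ** 2)   # pulse arriving at sample 10
hrir_r = np.exp(-0.5 * ((n - 30) / 2.0) ** 2)   # same pulse, 20 samples later
itd = estimate_itd(hrir_l, hrir_r, fs)          # 20 samples = about 454 us
```

For real HRIRs the cross-correlation peak is less sharp, which is one reason the excess-phase and minimum-phase methods cited above were proposed as alternatives.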

2.4 Subjective and objective room acoustic parameters - Definitions and past research

This section examines acoustic parameters that are used in room acoustics analysis, as well as in this work. Some of the parameters are objective [38] and can be measured using acoustic sensors (e.g., omnidirectional microphones, microphone arrays, dummy heads), while others are subjective [5] and can be estimated statistically through experiments in which a large group of subjects rates the examined parameter. A major part of the analysis in this work focuses on the acoustic parameters reviewed in this section and their relations with the measured sound field.

2.4.1 Early and late reflections

Suppose a person is sitting in a concert hall, listening to an orchestra. The first sound that arrives at the listener is called the direct sound. The direct sound arrives before any reflection, since it travels the shortest distance, and it is also the first to stop when the music stops. It is common to use the direct sound as a time reference, setting t = 0 at its arrival and measuring other parameters relative to it. The sound that reaches the listener after the direct sound is the result of reflections from the hall boundaries. These reflections are usually divided into two time intervals. Reflections arriving up to 80 ms after the direct sound are defined as the early sound. The early sound usually consists of a small number of reflections, which are perceived by the listener as a whole and affect several early parameters dealt with later in this section. The later sound is created by the many reflections that occur subsequently and is defined as the reverberant sound. In this period there are so many reflections that it is common to assume the diffuse field model. The reverberation time (RT) is defined as the time it takes for a sound to decay by 60 dB; under the diffuse field model, the same RT can be assumed at every measurement point in the room.

2.4.2 Clarity and sound strength

Clarity describes the degree to which every detail of the music or speech is perceived, as opposed to being blurred by the late, reverberant sound. As mentioned, the early sound, which consists of reflections up to 80 ms after the direct sound, is perceived by the listener as clear sound, which can be amplified as a result of these reflections. Therefore, the ratio between the early and late sound is a good objective definition for clarity. Defining the early sound as up to 80 ms, clarity is defined as:

    C_80 = 10 log_10 [ ∫_0^80ms h^2(t) dt / ∫_80ms^∞ h^2(t) dt ]    (2.30)

where h(t) is the impulse response and h^2(t) is the energy at a given time. Thus, when there is more energy in the early sound, the clarity factor is higher. Another objective parameter is sound strength. Sound strength is simply the ratio between the sound of the source measured in the room and the direct sound of the same source measured at a distance of 10 m:

    G = 10 log_10 [ ∫_0^∞ h^2(t) dt / ∫_0^t_dir h^2_10m(t) dt ]    (2.31)

where the upper bound of the integral in the denominator, t_dir, is the time at which the direct sound stops.
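The definitions above can be sketched numerically. The impulse response below is a synthetic ideal exponential decay, and the Schroeder backward-integration estimate used for RT is a standard room-acoustics technique assumed here for illustration, not a method stated in this thesis:

```python
import numpy as np

def clarity_c80(h, fs):
    """Eq. (2.30): ratio (in dB) of the energy in the first 80 ms
    to the energy arriving afterwards."""
    n80 = int(0.080 * fs)
    e = h ** 2
    return 10.0 * np.log10(np.sum(e[:n80]) / np.sum(e[n80:]))

def rt60(h, fs):
    """RT estimate via Schroeder backward integration: fit the energy
    decay curve between -5 and -35 dB and extrapolate to -60 dB."""
    edc = np.cumsum((h ** 2)[::-1])[::-1]            # energy decay curve
    edc_db = 10.0 * np.log10(edc / edc[0])
    t = np.arange(len(h)) / fs
    mask = (edc_db <= -5.0) & (edc_db >= -35.0)
    slope, _ = np.polyfit(t[mask], edc_db[mask], 1)  # decay rate, dB per second
    return -60.0 / slope

# Ideal exponential decay whose energy drops 60 dB in exactly 0.5 s
fs = 8000
t = np.arange(fs) / fs                   # 1 s long impulse response
h = 10.0 ** (-3.0 * t / 0.5)             # amplitude envelope, RT60 = 0.5 s
```

For this ideal decay the Schroeder fit recovers RT60 = 0.5 s, and C_80 is positive because most of the energy arrives within the first 80 ms.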

2.4.3 Spaciousness

Several parameters are defined for the perception of spaciousness by the listener. Two important subjective measures that affect spaciousness in concert halls are apparent source width (ASW) and listener envelopment (LEV). ASW relates to the spatial width of the perceived sound source, and is affected by the degree of dissimilarity of the musical sound reaching the two ears in the first 80 ms after the direct sound. For example, if the sound of an omnidirectional source is perceived by the listener as if it came from a wider source, the ASW is high. LEV relates to the density and spatial distribution of reflections reaching the ears, and is mostly affected by the sound arriving more than 80 ms after the direct sound. LEV can be estimated by the listener's impression of being surrounded by the reverberant sound field in the room.

Two important objective parameters are the lateral fraction (LF) and the interaural cross correlation (IACC). LF is defined as the ratio between the energy of the side reflections and the energy of the whole response. LF is usually measured over the early sound; the side reflections are measured using a figure-of-eight microphone whose maximum directivity points towards the sides of the room. The side reflections are assumed to arrive at least 5 ms after the direct sound; therefore, the early lateral fraction LF_E is defined as:

    LF_E = ∫_5ms^80ms h_L^2(t) dt / ∫_0^80ms h^2(t) dt    (2.32)

where h_L(t) is the impulse response measured with the figure-of-eight microphone.

IACC is an objective parameter that quantifies the similarity, or correlation, between the signals at the two ears. IACC is measured using a dummy head, or subjects with pressure microphones in their ears, and can be evaluated over different time intervals for different purposes. Suppose h_l(t) and h_r(t) are the pressure functions measured at the left and right ears, respectively; the interaural cross-correlation function (IACF) between t_1 and t_2 is defined as:

    ρ_t1,t2(τ) = ∫_t1^t2 h_l(t) h_r(t+τ) dt / sqrt( ∫_t1^t2 h_l^2(t) dt · ∫_t1^t2 h_r^2(t) dt )    (2.33)

IACC is then defined [39] as the maximum of the absolute value of Eq. (2.33) over τ:

    IACC_t1,t2 = max_τ |ρ_t1,t2(τ)|,   τ ∈ (-1, 1) ms    (2.34)

There are several known definitions for different IACC measurements. IACC_E is defined as the IACC calculated over the early sound (i.e., up to 80 ms after the direct sound). IACC_E3 is defined over the early sound as the average of the three mid octave bands (500 Hz, 1 kHz, 2 kHz). IACC_E3 will be mentioned in the next sub-section as a parameter related to ASW. IACC_L is defined as the IACC calculated over the late sound, from 80 ms up to 1 s, and is related to LEV.

2.4.4 Spaciousness - past subjective investigations

ASW and LEV [5] have been mentioned as two important parameters in determining the quality of a concert hall. Since these parameters are subjective, many experiments have examined their relations with different sound levels and directions, as well as with several objective parameters. This work suggests analysis involving some of these objective parameters, such as IACC, which affect and are affected by the subjective parameters and can influence the listener's perception and appreciation of sound. In order to understand the importance of these parameters, a brief summary of

some recent experiments will be described in this section. Bradley et al. [9], [10] performed an experiment that showed a strong correlation between LF_L, the lateral fraction of the late sound, and LEV. The lower octave bands, 125 Hz to 1 kHz, affected LEV more than the higher octave bands. Other experiments have shown that early and late arriving sound energies have opposing effects on perceived spatial impression: ASW increased with increasing early lateral sound levels and decreased with increasing late arriving lateral sound, whereas LEV increases with increasing late arriving sound but decreases when early arriving sound is added. Similar results were presented by Furuya [11], showing relations between C_80, the clarity factor, and LEV: as the clarity factor rises, meaning the early sound energy is higher, LEV decreases. Morimoto et al. [7], [8] have shown that LEV can be perceived only when the listener experiences image-splitting, meaning he cannot determine the direction of the source. Image-splitting can occur after the early reflections end; therefore, the late sound was shown to be related to LEV. Other experiments examined the relations between RT, LEV, and ASW. Increasing RT at high or low frequencies significantly increases LEV, while increasing RT only at low frequencies decreases ASW. Another conclusion, inconsistent with Bradley, was that C_80 is less significant for determining LEV. Beranek et al. [4], [6] have found relations between [1 - IACC_E3], G_low (G at the 125 Hz and 250 Hz octave bands), and ASW. The effect of low frequency reflections on ASW is related to the G_low level, whereas the effect of high frequency reflections on ASW is related to the [1 - IACC_E3] level. It was also found that LEV is affected by [1 - IACC_L] and G_L, the sound strength of the late sound.

In conclusion, it has been clearly shown that there are correlations between the early sound and ASW, and between the late sound and LEV. IACC and G, measured over the early or late sound, can help evaluate the subjective parameters ASW and LEV.
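The IACC of Eqs. (2.33)-(2.34), on which several of the investigations above rely, can be computed directly from a pair of binaural impulse responses. The responses below are synthetic noise used purely for illustration:

```python
import numpy as np

def iacc(h_l, h_r, fs, t1=0.0, t2=0.080):
    """Eqs. (2.33)-(2.34): maximum over |tau| <= 1 ms of the normalized
    interaural cross-correlation computed between times t1 and t2."""
    i1, i2 = int(t1 * fs), int(t2 * fs)
    hl, hr = h_l[i1:i2], h_r[i1:i2]
    denom = np.sqrt(np.sum(hl ** 2) * np.sum(hr ** 2))
    max_lag = int(1e-3 * fs)                      # 1 ms in samples
    best = 0.0
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            num = np.sum(hl[: len(hl) - lag] * hr[lag:])
        else:
            num = np.sum(hl[-lag:] * hr[: len(hr) + lag])
        best = max(best, abs(num) / denom)
    return best

fs = 8000
rng = np.random.default_rng(1)
h = rng.standard_normal(fs)                      # 1 s of noise
same = iacc(h, h, fs)                            # identical ears -> IACC of 1
diffuse = iacc(h, rng.standard_normal(fs), fs)   # uncorrelated ears -> low IACC
```

With the default t1 = 0 and t2 = 80 ms this corresponds to IACC_E; passing t1 = 0.080, t2 = 1.0 would give IACC_L.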

Chapter 3

ITD and ILD Analysis by HRTF Modeling in the Spherical Harmonics Domain

The following chapter examines the characteristics of an HRTF set represented by a finite spherical harmonics order. These HRTF sets will be used later in this work, combined with a complex sound field, to create a binaural reproduction; their examination therefore constitutes a basic stage of the entire work. The content of this chapter has been published as part of a paper in the Journal of the Acoustical Society of America (JASA) [40] and presented in a paper at the 2nd Ambisonics Symposium [41].

3.1 The use of HRTF database

The HRTF, H^l and H^r, for the left and right ears, respectively, depend on frequency and direction of arrival; therefore, they can be represented by

spherical harmonics using Eq. (2.2):

    H^l(k, Ω) = Σ_n=0^∞ Σ_m=-n^n H^l_nm(k) Y_n^m(Ω)
    H^r(k, Ω) = Σ_n=0^∞ Σ_m=-n^n H^r_nm(k) Y_n^m(Ω)    (3.1)

where k = 2πf/c is the wave number, f is the frequency, and H^l_nm(k), H^r_nm(k) are the spherical Fourier transforms of the HRTF. The CIPIC HRTF database [13] has been used for the analysis in this work. The database consists of HRTF measured over a dense grid of 1250 directions, given in interaural-polar coordinates, for a large number of human subjects as well as a KEMAR manikin. One of its disadvantages, which can be seen in Figure 3.1, is the lack of samples at the bottom of the sphere, where the interaural elevation is lower than -45°. This lack of samples occurs in most HRTF databases, since these directions are probably less important for most applications involving HRTF and can therefore be neglected. When transforming to the spherical harmonics domain, uniform sampling of the sphere results in good robustness; as the samples are scattered less uniformly, the system becomes less robust and larger errors may occur. Therefore, the lack of samples is a significant disadvantage for analysis in the spherical harmonics domain. HRTF analysis using spherical harmonics has been presented previously [42] with a smaller HRTF database distributed in a Gaussian sampling configuration around the sphere, where the spherical harmonics coefficients could be calculated easily by Eq. (2.19), since the weights are known. However, the sample weights of the CIPIC HRTF database are unknown, and a different approach has to be taken. Suppose we use spherical harmonics order N to represent the HRTF, sampled at L positions, for a given frequency; the

Figure 3.1: Map of the CIPIC database directions. Left: front view. Right: side view.

representation can be written using Eq. (2.2):

    H(k) = Y H_nm(k)    (3.2)

where H(k) is the right or left HRTF, represented by an L × 1 vector, given by:

    H(k) = [H(k, θ_1, φ_1), H(k, θ_2, φ_2), ..., H(k, θ_L, φ_L)]^T    (3.3)

H_nm(k) is an (N+1)^2 × 1 vector, consisting of the spherical harmonics coefficients of H(k), given by:

    H_nm(k) = [H_00(k), H_1(-1)(k), H_10(k), H_11(k), ..., H_NN(k)]^T    (3.4)

and Y is an L × (N+1)^2 transformation matrix as defined in Eq. (2.3),

sampled at the L sample points and presented up to order N, given by:

    Y = [ Y_0^0(θ_1,φ_1)  Y_1^-1(θ_1,φ_1)  Y_1^0(θ_1,φ_1)  Y_1^1(θ_1,φ_1)  ...  Y_N^N(θ_1,φ_1) ]
        [ Y_0^0(θ_2,φ_2)  Y_1^-1(θ_2,φ_2)  Y_1^0(θ_2,φ_2)  Y_1^1(θ_2,φ_2)  ...  Y_N^N(θ_2,φ_2) ]
        [      ...              ...             ...             ...        ...       ...       ]
        [ Y_0^0(θ_L,φ_L)  Y_1^-1(θ_L,φ_L)  Y_1^0(θ_L,φ_L)  Y_1^1(θ_L,φ_L)  ...  Y_N^N(θ_L,φ_L) ]    (3.5)

The HRTF spherical harmonics coefficients H_nm(k) can be derived using the pseudo-inverse of Y, giving a numerical solution of Eq. (3.2) in the least-squares sense:

    H_nm(k) = Y^† H(k)    (3.6)

The condition number of Y needs to be low to ensure robustness of the computations. Figure 3.2 presents the condition number of Y with the CIPIC samples as a function of the spherical harmonics order N. The condition number using the CIPIC sampling grows with increasing order N, compared to the Gaussian sampling, for which it remains relatively low. Nevertheless, using numerical software such as MATLAB, as in the current work, the large condition number can be accommodated and the results are satisfactory.

A few simulations have been performed using the CIPIC HRTF database in order to examine the mean square error (MSE) of an HRTF reconstructed with a finite spherical harmonics order, and its effect on sound localization attributes.
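The least-squares fit of Eq. (3.6) can be sketched for order N = 1, for which the complex spherical harmonics have simple closed forms. The six sample directions below are an arbitrary illustrative grid, not the CIPIC layout, and the "HRTF" vector is synthesized from known coefficients so that the recovery can be checked:

```python
import numpy as np

def sh_matrix_order1(theta, phi):
    """Transformation matrix Y of Eq. (3.5) for order N = 1, using the
    closed-form complex spherical harmonics Y_0^0, Y_1^-1, Y_1^0, Y_1^1
    (theta: colatitude, phi: azimuth, both in radians)."""
    y00 = np.full(theta.shape, 0.5 / np.sqrt(np.pi), dtype=complex)
    y1m1 = 0.5 * np.sqrt(1.5 / np.pi) * np.sin(theta) * np.exp(-1j * phi)
    y10 = 0.5 * np.sqrt(3.0 / np.pi) * np.cos(theta)
    y11 = -0.5 * np.sqrt(1.5 / np.pi) * np.sin(theta) * np.exp(1j * phi)
    return np.column_stack([y00, y1m1, y10, y11])   # L x (N+1)^2

# L = 6 arbitrary, well-spread directions (radians)
theta = np.array([0.3, 0.8, 1.2, 1.6, 2.0, 2.6])
phi = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
Y = sh_matrix_order1(theta, phi)

# Synthesize an "HRTF" vector from known coefficients, then recover them
# with the pseudo-inverse as in Eq. (3.6).
h_nm_true = np.array([1.0, 0.2 - 0.1j, -0.5, 0.2 + 0.1j])
H = Y @ h_nm_true
h_nm = np.linalg.pinv(Y) @ H

cond = np.linalg.cond(Y)   # low for a well-spread grid, large for poor grids
```

With well-spread directions the condition number stays small and the coefficients are recovered exactly; clustering the directions (as in the under-sampled lower sphere of CIPIC) inflates the condition number and amplifies measurement noise in the recovered coefficients.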

Figure 3.2: Condition number of Y with CIPIC samples as a function of the spherical harmonics order N, compared with Gaussian sampling.

3.2 Simulation study - MSE

A MATLAB simulation has been performed to examine the MSE of a reconstructed HRTF. First, the coefficients of the HRTF in the spherical harmonics domain, H_nm, were computed as in Eq. (3.6), using Y with spherical harmonics order N; then the HRTF, Ĥ, was reconstructed back in the spatial domain using the order-N coefficients, and the normalized MSE was calculated as:

    ε(f) = ||H - Ĥ|| / ||H||    (3.7)

The simulation was performed after filtering the HRTF into different octave bands, where for each octave band 10 HRTF from different subjects were examined and the average error was computed. Figure 3.3 presents the average error and standard deviation for each octave band for different spherical harmonics orders, and Table 3.1 presents the order needed to bring the error below a certain limit, together with a computation of kr for every band,

assuming c is the speed of sound and r = 0.09 m. It appears that up to 4 kHz the error drops below -3 dB at around kr = N, which has been shown to be the limit up to which the spherical harmonics coefficients have values that cannot be neglected.

Figure 3.3: Approximation errors in different octave bands of the CIPIC HRTF database as a function of N, averaged over 10 subjects.

Table 3.1: The spherical harmonics order at which the error achieves a chosen limit (-10 dB, -3 dB) in each octave band from 250 Hz to 8 kHz, together with the corresponding value of kr.
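The kr row of Table 3.1 follows directly from k = 2πf/c with r = 0.09 m. The sketch below assumes c = 343 m/s, which may differ slightly from the value used in the thesis:

```python
import numpy as np

c = 343.0   # assumed speed of sound (m/s); not stated explicitly in the text
r = 0.09    # radius used in the thesis (m)
bands = np.array([250.0, 500.0, 1000.0, 2000.0, 4000.0, 8000.0])  # octave bands
kr = 2.0 * np.pi * bands * r / c

for f, v in zip(bands, kr):
    print(f"{f:6.0f} Hz  ->  kr = {v:5.2f}")
```

At 4 kHz this gives kr ≈ 6.6, consistent with the observation above that the reconstruction error falls below -3 dB at around order N ≈ kr.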


More information

ROOM ACOUSTICS THREE APPROACHES 1. GEOMETRIC RAY TRACING SOUND DISTRIBUTION

ROOM ACOUSTICS THREE APPROACHES 1. GEOMETRIC RAY TRACING SOUND DISTRIBUTION ROOM ACOUSTICS THREE APPROACHES 1. GEOMETRIC RAY TRACING. RESONANCE (STANDING WAVES) 3. GROWTH AND DECAY OF SOUND 1. GEOMETRIC RAY TRACING SIMPLE CONSTRUCTION OF SOUND RAYS WHICH OBEY THE LAWS OF REFLECTION

More information

Chapter 3 Room acoustics

Chapter 3 Room acoustics Chapter 3 Room acoustics Acoustic phenomena in a closed room are the main focal problems in architectural acoustics. In this chapter, fundamental theories are described and their practical applications

More information

Analysis and synthesis of room reverberation based on a statistical time-frequency model

Analysis and synthesis of room reverberation based on a statistical time-frequency model Analysis and synthesis of room reverberation based on a statistical time-frequency model Jean-Marc Jot, Laurent Cerveau, Olivier Warusfel IRCAM. 1 place Igor-Stravinsky. F-75004 Paris, France. Tel: (+33)

More information

The effect of boundary shape to acoustic parameters

The effect of boundary shape to acoustic parameters Journal of Physics: Conference Series PAPER OPEN ACCESS The effect of boundary shape to acoustic parameters To cite this article: M. S. Prawirasasra et al 216 J. Phys.: Conf. Ser. 776 1268 Related content

More information

Sound, acoustics Slides based on: Rossing, The science of sound, 1990, and Pulkki, Karjalainen, Communication acoutics, 2015

Sound, acoustics Slides based on: Rossing, The science of sound, 1990, and Pulkki, Karjalainen, Communication acoutics, 2015 Acoustics 1 Sound, acoustics Slides based on: Rossing, The science of sound, 1990, and Pulkki, Karjalainen, Communication acoutics, 2015 Contents: 1. Introduction 2. Vibrating systems 3. Waves 4. Resonance

More information

Proceedings of Meetings on Acoustics

Proceedings of Meetings on Acoustics Proceedings of Meetings on Acoustics Volume 19, 213 http://acousticalsociety.org/ ICA 213 Montreal Montreal, Canada 2-7 June 213 Engineering Acoustics Session 4pEAa: Sound Field Control in the Ear Canal

More information

ON THE LIMITATIONS OF BINAURAL REPRODUCTION OF MONAURAL BLIND SOURCE SEPARATION OUTPUT SIGNALS

ON THE LIMITATIONS OF BINAURAL REPRODUCTION OF MONAURAL BLIND SOURCE SEPARATION OUTPUT SIGNALS th European Signal Processing Conference (EUSIPCO 12) Bucharest, Romania, August 27-31, 12 ON THE LIMITATIONS OF BINAURAL REPRODUCTION OF MONAURAL BLIND SOURCE SEPARATION OUTPUT SIGNALS Klaus Reindl, Walter

More information

Source localization and separation for binaural hearing aids

Source localization and separation for binaural hearing aids Source localization and separation for binaural hearing aids Mehdi Zohourian, Gerald Enzner, Rainer Martin Listen Workshop, July 218 Institute of Communication Acoustics Outline 1 Introduction 2 Binaural

More information

Proceedings of Meetings on Acoustics

Proceedings of Meetings on Acoustics Proceedings of Meetings on Acoustics Volume 19, 2013 http://acousticalsociety.org/ ICA 2013 Montreal Montreal, Canada 2-7 June 2013 Architectural Acoustics Session 1pAAa: Advanced Analysis of Room Acoustics:

More information

Deriving content-specific measures of room acoustic perception using a binaural, nonlinear auditory model

Deriving content-specific measures of room acoustic perception using a binaural, nonlinear auditory model Deriving content-specific measures of room acoustic perception using a binaural, nonlinear auditory model Jasper van Dorp Schuitman a) and Diemer de Vries Delft University of Technology, Faculty of Applied

More information

Computational Perception. Sound Localization 1

Computational Perception. Sound Localization 1 Computational Perception 15-485/785 January 17, 2008 Sound Localization 1 Orienting sound localization visual pop-out eye/body movements attentional shift 2 The Problem of Sound Localization What are the

More information

IEEE PError! Unknown document property name./d10.0, November, Error! Unknown document property name.

IEEE PError! Unknown document property name./d10.0, November, Error! Unknown document property name. IEEE PError! Unknown document property name./d10.0, November, Error! Unknown document property name. IEEE P1652 /D10.0 Draft Standard for Translating Head and Torso Simulator Measurements from Eardrum

More information

Multi Acoustic Prediction Program (MAPP tm ) Recent Results Perrin S. Meyer and John D. Meyer

Multi Acoustic Prediction Program (MAPP tm ) Recent Results Perrin S. Meyer and John D. Meyer Multi Acoustic Prediction Program (MAPP tm ) Recent Results Perrin S. Meyer and John D. Meyer Meyer Sound Laboratories Inc., Berkeley, California, USA Presented at the Institute of Acoustics (UK), Reproduced

More information

Spherical harmonic analysis of wavefields using multiple circular sensor arrays

Spherical harmonic analysis of wavefields using multiple circular sensor arrays Spherical harmonic analysis of wavefields using multiple circular sensor arrays Thushara D. Abhayapala, Senior Member, IEEE and Aastha Gupta Student Member, IEEE Abstract Spherical harmonic decomposition

More information

A R T A - A P P L I C A T I O N N O T E

A R T A - A P P L I C A T I O N N O T E Loudspeaker Free-Field Response This AP shows a simple method for the estimation of the loudspeaker free field response from a set of measurements made in normal reverberant rooms. Content 1. Near-Field,

More information

Characterisation of the directionality of reflections in small room acoustics

Characterisation of the directionality of reflections in small room acoustics Characterisation of the directionality of reflections in small room acoustics Romero, J, Fazenda, BM and Atmoko, H Title Authors Type URL Published Date 2009 Characterisation of the directionality of reflections

More information

Horizontal Local Sound Field Propagation Based on Sound Source Dimension Mismatch

Horizontal Local Sound Field Propagation Based on Sound Source Dimension Mismatch Journal of Information Hiding and Multimedia Signal Processing c 7 ISSN 73-4 Ubiquitous International Volume 8, Number 5, September 7 Horizontal Local Sound Field Propagation Based on Sound Source Dimension

More information

19th European Signal Processing Conference (EUSIPCO 2011) Barcelona, Spain, August 29 - September 2, 2011

19th European Signal Processing Conference (EUSIPCO 2011) Barcelona, Spain, August 29 - September 2, 2011 19th European Signal Processing Conference (EUSIPCO 211) Barcelona, Spain, August 29 - September 2, 211 DEVELOPMENT OF PROTOTYPE SOUND DIRECTION CONTROL SYSTEM USING A TWO-DIMENSIONAL LOUDSPEAKER ARRAY

More information

LECTURE NOTES IN AUDIO ANALYSIS: PITCH ESTIMATION FOR DUMMIES

LECTURE NOTES IN AUDIO ANALYSIS: PITCH ESTIMATION FOR DUMMIES LECTURE NOTES IN AUDIO ANALYSIS: PITCH ESTIMATION FOR DUMMIES Abstract March, 3 Mads Græsbøll Christensen Audio Analysis Lab, AD:MT Aalborg University This document contains a brief introduction to pitch

More information

Development of an Efficient Binaural Simulation for the Analysis of Structural Acoustic Data

Development of an Efficient Binaural Simulation for the Analysis of Structural Acoustic Data NASA/CR-22-211753 Development of an Efficient Binaural Simulation for the Analysis of Structural Acoustic Data Aimee L. Lalime and Marty E. Johnson Virginia Polytechnic Institute and State University Blacksburg,

More information

In situ measurement methods for characterising sound diffusion

In situ measurement methods for characterising sound diffusion Proceedings of the International Symposium on Room Acoustics, ISRA 9 August, Melbourne, Australia In situ measurement methods for characterising sound diffusion I. Schmich (), N. Brousse () () Université

More information

Methods for Synthesizing Very High Q Parametrically Well Behaved Two Pole Filters

Methods for Synthesizing Very High Q Parametrically Well Behaved Two Pole Filters Methods for Synthesizing Very High Q Parametrically Well Behaved Two Pole Filters Max Mathews Julius O. Smith III Center for Computer Research in Music and Acoustics (CCRMA) Department of Music, Stanford

More information

COMPARISON OF A MULTI-PURPOSE HALL WITH THREE WELL-KNOWN CONCERT HALLS ABSTRACT SOMMAIRE

COMPARISON OF A MULTI-PURPOSE HALL WITH THREE WELL-KNOWN CONCERT HALLS ABSTRACT SOMMAIRE Canadian Acoustics! Acoustique Canadienne 19(2) 3-10 (1991 ) Research article / Article de recherche COMPARISON OF A MULTI-PURPOSE HALL WITH THREE WELL-KNOWN CONCERT HALLS J.S. Bradley Institute for Research

More information

Chapter 2. Room acoustics

Chapter 2. Room acoustics Chapter 2. Room acoustics Acoustic phenomena in a closed room are the main focal problems in architectural acoustics. In this chapter, fundamental theories are described and their practical applications

More information

The impact of sound control room acoustics on the perceived acoustics of a diffuse field recording

The impact of sound control room acoustics on the perceived acoustics of a diffuse field recording See discussions, stats, and author profiles for this publication at: https://www.researchgate.net/publication/9045065 The impact of sound control room acoustics on the perceived acoustics of a diffuse

More information

3D Sound Synthesis using the Head Related Transfer Function

3D Sound Synthesis using the Head Related Transfer Function University of Tennessee, Knoxville Trace: Tennessee Research and Creative Exchange Masters Theses Graduate School 12-2004 3D Sound Synthesis using the Head Related Transfer Function Dayu Yang University

More information

Transaural Audio - The reproduction of binaural signals over loudspeakers. Fabio Kaiser

Transaural Audio - The reproduction of binaural signals over loudspeakers. Fabio Kaiser Transaural Audio - The reproduction of binaural signals over loudspeakers Fabio Kaiser Outline 1 Introduction 2 Inversion of non-minimum phase filters Inversion techniques 3 Implementation of CTC 4 Objective

More information

19 th INTERNATIONAL CONGRESS ON ACOUSTICS MADRID, 2-7 SEPTEMBER 2007 OPTIMAL SPACE-TIME FINITE DIFFERENCE SCHEMES FOR EXPERIMENTAL BOOTH DESIGN

19 th INTERNATIONAL CONGRESS ON ACOUSTICS MADRID, 2-7 SEPTEMBER 2007 OPTIMAL SPACE-TIME FINITE DIFFERENCE SCHEMES FOR EXPERIMENTAL BOOTH DESIGN 19 th INTERNATIONAL CONGRESS ON ACOUSTICS MADRID, 2-7 SEPTEMBER 2007 OPTIMAL SPACE-TIME FINITE DIFFERENCE SCHEMES FOR EXPERIMENTAL BOOTH DESIGN PACS: 43.55.Ka Naka, Yusuke 1 ; Oberai, Assad A. 2 ; Shinn-Cunningham,

More information

ROOM RESONANCES USING WAVE BASED GEOMET- RICAL ACOUSTICS (WBGA)

ROOM RESONANCES USING WAVE BASED GEOMET- RICAL ACOUSTICS (WBGA) ROOM RESONANCES USING WAVE BASED GEOMET- RICAL ACOUSTICS (WBGA) Panos Economou, Panagiotis Charalampous P.E. Mediterranean Acoustics Research & Development Ltd, Cyprus email: panos@pemard.com Geometrical

More information

Experimental Guided Spherical. Harmonics based Head-Related. Transfer Function Modeling

Experimental Guided Spherical. Harmonics based Head-Related. Transfer Function Modeling Experimental Guided Spherical Harmonics based Head-Related Transfer Function Modeling Mengqiu Zhang M.E. (Xidian University, Xi an, Shanxi, China) B.E. (China Jiliang University, Hangzhou, Zhejiang, China)

More information

Plane-wave decomposition of acoustical scenes via spherical and cylindrical microphone arrays

Plane-wave decomposition of acoustical scenes via spherical and cylindrical microphone arrays Plane-wave decomposition of acoustical scenes via spherical and cylindrical microphone arrays 1 Dmitry N. Zotkin*, Ramani Duraiswami, Nail A. Gumerov Perceptual Interfaces and Reality Laboratory Institute

More information

SIMULATIONS, MEASUREMENTS AND AURALISA- TIONS IN ARCHITECTURAL ACOUSTICS

SIMULATIONS, MEASUREMENTS AND AURALISA- TIONS IN ARCHITECTURAL ACOUSTICS SIMULATIONS, MEASUREMENTS AND AURALISA- TIONS IN ARCHITECTURAL ACOUSTICS Jens Holger Rindel, Claus Lynge Christensen and George Koutsouris Odeon A/S, Scion-DTU, Diplomvej, Building 381, DK-8 Kgs. Lyngby,

More information

ECE 598: The Speech Chain. Lecture 5: Room Acoustics; Filters

ECE 598: The Speech Chain. Lecture 5: Room Acoustics; Filters ECE 598: The Speech Chain Lecture 5: Room Acoustics; Filters Today Room = A Source of Echoes Echo = Delayed, Scaled Copy Addition and Subtraction of Scaled Cosines Frequency Response Impulse Response Filter

More information

Noise in enclosed spaces. Phil Joseph

Noise in enclosed spaces. Phil Joseph Noise in enclosed spaces Phil Joseph MODES OF A CLOSED PIPE A 1 A x = 0 x = L Consider a pipe with a rigid termination at x = 0 and x = L. The particle velocity must be zero at both ends. Acoustic resonances

More information

Convention Paper Presented at the 126th Convention 2009 May 7 10 Munich, Germany

Convention Paper Presented at the 126th Convention 2009 May 7 10 Munich, Germany Audio Engineering Society Convention Paper Presented at the 6th Convention 9 May 7 Munich, Germany The papers at this Convention have been selected on the basis of a submitted abstract and extended precis

More information

The effect of impedance on interaural azimuth cues derived from a spherical head model a)

The effect of impedance on interaural azimuth cues derived from a spherical head model a) The effect of impedance on interaural azimuth cues derived from a spherical head model a) Bradley E. Treeby, b Roshun M. Paurobally, and Jie Pan Centre for Acoustics, Dynamics and Vibration, School of

More information

A Balloon Lens: Acoustic Scattering from a Penetrable Sphere. Derek Thomas Capstone Project Physics 492R Advisor: Kent L. Gee.

A Balloon Lens: Acoustic Scattering from a Penetrable Sphere. Derek Thomas Capstone Project Physics 492R Advisor: Kent L. Gee. A Balloon Lens: Acoustic Scattering from a Penetrable Sphere Derek Thomas Capstone Project Physics 492R Advisor: Kent L. Gee August 13, 2007 Abstract A balloon filled with a gas that has a different sound

More information

George Mason University ECE 201: Introduction to Signal Analysis Spring 2017

George Mason University ECE 201: Introduction to Signal Analysis Spring 2017 Assigned: March 20, 2017 Due Date: Week of April 03, 2017 George Mason University ECE 201: Introduction to Signal Analysis Spring 2017 Laboratory Project #6 Due Date Your lab report must be submitted on

More information

Convention Paper Presented at the 139th Convention 2015 October 29 November 1 New York, USA

Convention Paper Presented at the 139th Convention 2015 October 29 November 1 New York, USA Audio Engineering Society Convention Paper Presented at the 139th Convention 2015 October 29 November 1 New York, USA 9421 This Convention paper was selected based on a submitted abstract and 750-word

More information

MEASUREMENT OF INPUT IMPEDANCE OF AN ACOUSTIC BORE WITH APPLICATION TO BORE RECONSTRUCTION

MEASUREMENT OF INPUT IMPEDANCE OF AN ACOUSTIC BORE WITH APPLICATION TO BORE RECONSTRUCTION MEASUREMENT OF INPUT IMPEDANCE OF AN ACOUSTIC BORE WITH APPLICATION TO BORE RECONSTRUCTION Maarten van Walstijn Murray Campbell David Sharp Department of Physics and Astronomy, University of Edinburgh,

More information

Signals, Instruments, and Systems W5. Introduction to Signal Processing Sampling, Reconstruction, and Filters

Signals, Instruments, and Systems W5. Introduction to Signal Processing Sampling, Reconstruction, and Filters Signals, Instruments, and Systems W5 Introduction to Signal Processing Sampling, Reconstruction, and Filters Acknowledgments Recapitulation of Key Concepts from the Last Lecture Dirac delta function (

More information

Loudspeaker Choice and Placement. D. G. Meyer School of Electrical & Computer Engineering

Loudspeaker Choice and Placement. D. G. Meyer School of Electrical & Computer Engineering Loudspeaker Choice and Placement D. G. Meyer School of Electrical & Computer Engineering Outline Sound System Design Goals Review Acoustic Environment Outdoors Acoustic Environment Indoors Loudspeaker

More information

ODEON APPLICATION NOTE Calibration of Impulse Response Measurements

ODEON APPLICATION NOTE Calibration of Impulse Response Measurements ODEON APPLICATION NOTE Calibration of Impulse Response Measurements Part 2 Free Field Method GK, CLC - May 2015 Scope In this application note we explain how to use the Free-field calibration tool in ODEON

More information

Laboratory synthesis of turbulent boundary layer wall-pressures and the induced vibro-acoustic response

Laboratory synthesis of turbulent boundary layer wall-pressures and the induced vibro-acoustic response Proceedings of the Acoustics 22 Nantes Conference 23-27 April 22, Nantes, France Laboratory synthesis of turbulent boundary layer wall-pressures and the induced vibro-acoustic response C. Maury a and T.

More information

Sound. p V V, where p is the change in pressure, V/V is the percent change in volume. The bulk modulus is a measure 1

Sound. p V V, where p is the change in pressure, V/V is the percent change in volume. The bulk modulus is a measure 1 Sound The obvious place to start an investigation of sound recording is with the study of sound. Sound is what we call our perception of the air movements generated by vibrating objects: it also refers

More information

Qualification of balance in opera houses: comparing different sound sources

Qualification of balance in opera houses: comparing different sound sources Qualification of balance in opera houses: comparing different sound sources Nicola Prodi, Andrea Farnetani, Shin-ichi Sato* Dipartimento di Ingegneria, Università di Ferrara, Italy Gottfried Behler, Ingo

More information

Sound field decomposition of sound sources used in sound power measurements

Sound field decomposition of sound sources used in sound power measurements Sound field decomposition of sound sources used in sound power measurements Spyros Brezas Physikalisch Technische Bundesanstalt Germany. Volker Wittstock Physikalisch Technische Bundesanstalt Germany.

More information

AUDITORY MODELLING FOR ASSESSING ROOM ACOUSTICS. Jasper van Dorp Schuitman

AUDITORY MODELLING FOR ASSESSING ROOM ACOUSTICS. Jasper van Dorp Schuitman AUDITORY MODELLING FOR ASSESSING ROOM ACOUSTICS Jasper van Dorp Schuitman Auditory modelling for assessing room acoustics PROEFSCHRIFT ter verkrijging van de graad van doctor aan de Technische Universiteit

More information

Echo cancellation by deforming sound waves through inverse convolution R. Ay 1 ward DeywrfmzMf o/ D/g 0001, Gauteng, South Africa

Echo cancellation by deforming sound waves through inverse convolution R. Ay 1 ward DeywrfmzMf o/ D/g 0001, Gauteng, South Africa Echo cancellation by deforming sound waves through inverse convolution R. Ay 1 ward DeywrfmzMf o/ D/g 0001, Gauteng, South Africa Abstract This study concerns the mathematical modelling of speech related

More information

AFMG. Focus Your Sub Arrays! AHNERT FEISTEL MEDIA GROUP. PLS 2017, Frankfurt

AFMG. Focus Your Sub Arrays! AHNERT FEISTEL MEDIA GROUP. PLS 2017, Frankfurt Why Making Subwoofer Arrays? Design Principles Typical Arrays The Right Tools in EASE Focus 3 www..eu 2 Why Making Subwoofer Arrays? 63 Hz, 1/3 Oct. Typical L-R setup Large frequency response variation

More information

FESI DOCUMENT A5 Acoustics in rooms

FESI DOCUMENT A5 Acoustics in rooms FESI DOCUMENT A5 Acoustics in rooms FileName: A5 Acoustics in rooms - English version.pdf o o Abstract: The first theory developed was Sabine\'s formula (1902) and it is the basis of the so-called \"classic

More information

REAL TIME CALCULATION OF THE HEAD RELATED TRANSFER FUNCTION BASED ON THE BOUNDARY ELEMENT METHOD

REAL TIME CALCULATION OF THE HEAD RELATED TRANSFER FUNCTION BASED ON THE BOUNDARY ELEMENT METHOD REAL TIME CALCULATION OF THE HEAD RELATED TRANSFER FUNCTION BASED ON THE BOUNDARY ELEMENT METHOD Shiro Ise Makoto Otani Department of Architecture and Architectural Systems Faculty of Engineering, Kyoto

More information

Room acoustic modelling techniques: A comparison of a scale model and a computer model for a new opera theatre

Room acoustic modelling techniques: A comparison of a scale model and a computer model for a new opera theatre Proceedings of the International Symposium on Room Acoustics, ISRA 2010 29-31 August 2010, Melbourne, Australia Room acoustic modelling techniques: A comparison of a scale model and a computer model for

More information