Robustness of Equalization
Chapter 3

Robustness of Equalization

3.1 Introduction

A major challenge in hands-free telephony is acquiring undistorted speech in reverberant environments. Reverberation introduces distortion into speech signals that renders automatic speech recognition algorithms largely ineffective. One means of removing reverberation is to acoustically equalize it out of the captured speech signal (the sensor output) using an appropriate inverse filter. Unfortunately, this equalizer filter is extremely sensitive to the source and sensor positions. Acoustic equalization is hence fundamentally nonrobust, at least with simple sound capture strategies. In this chapter we show that more advanced methods of sound capture, using directional microphones and beamformers, improve the perturbation robustness. We quantify these improvements by measuring how movement of the source or sensor influences the error in equalization.

As one moves from position to position in a room, the soundfield varies considerably. Mourjopoulos [68] showed for a particular room that the variation in impulse response is so large that an inverse filter which equalized well in one position would degrade the signal acquired in any other. Radlović et al. [78] showed that even a change in the source or microphone position of a few tenths of a wavelength would, on average, cause large degradation in the equalized output. Such results are consistent with experimental work [29, 70, 73].

Several techniques have been suggested to combat this robustness problem. Mourjopoulos [69] proposed a vector quantization technique that selects an optimum inverse filter from a spatial equalization library. Bharitkar and Kyriakakis proposed a more efficient library technique [9] using clustering. However, these techniques require prior measurement of a large number of acoustic impulse responses, which is usually impractical. Elliot et al.
[27] optimized the equalizer filters over multiple points, trading robustness improvements for reduced equalizer performance. Haneda et al. [35, 36] obtained minor improvement by sampling the transfer function over multiple points and equalizing only the common acoustical poles. Bharitkar and Kyriakakis showed minor improvements by averaging the received source signal over a number of sensors [9]. Talantzis and Ward [98] demonstrated that robustness is improved by using multiple sensors and equalizers, showing a two-fold or better increase in the radius of the equalized region.

In this chapter, we demonstrate that further improvements to the robustness of equalization are possible by capturing the sound with a directional microphone or beamformer. Robustness is shown to be significantly improved with a uniform linear array. We also study the dependence of the size of the equalized region on the geometric parameters specifying the soundfield. It is shown to be enlarged by decreasing the range of directions from which the sound originates. The analysis is general enough to predict the robustness of arbitrary sound capture devices and arbitrary soundfields.

The study of robustness is performed using parallel analyses. We investigate the performance of perfect equalization for a directional sensor or beamformer in a diffuse field, and the performance of perfect magnitude response equalization for a directional sensor in an arbitrary soundfield. In both cases, we derive closed-form equalizer error expressions, similar to those in [9, 78, 98, 99]. These expressions clearly identify the factors impacting robustness. Our analysis is superior to that in [9, 78, 98] as it is more elegant and makes fewer approximations. Further, it identifies a concern in the method of analysis applied in the literature, which impacts the accuracy of many past works.

Previewing the chapter content, we start in Section 3.2 by defining measures of the robustness of equalization to movement of the sensor. In Section 3.3, we derive expressions for these measures of robustness for the soundfields described in Chapter 2. In Section 3.4, we explore a principle for extending the robustness results to movement of the sound source.
In Section 3.5 we look at some examples, characterizing the robustness for different beamformers and directional microphones, as well as for different reverberant soundfields. In Section 3.6, we draw some conclusions and identify problems for future research.

3.2 Robustness of Equalization

This chapter is largely concerned with the robustness of equalization to movement of the sound capture device. Consider an equalizer connected to a sound capture device that is designed to equalize the acoustic channel at position x. We represent the output signal of the sensor with the spatial quantity Υ(x; ω). The term robustness of equalization refers to how the equalizer performs with the sensor at the perturbed position x + r. We measure robustness by quantifying the mean square error in the equalized output. This error of equalization is an important measure of perceptual performance. Although equalization error does not take into account the more subtle perceptual phenomena, such as the reinforcing effect of early reflections and the syllabic blurring of late-arriving reflections, it does quantify the distortion in the frequency response, which is generally understood to be linked to the perceptual quality of the equalized sound signal [69, 77]. For example, Fielder noted that notches in an otherwise flat room-equalizer frequency response create audible listening differences [29]. Unlike recently developed perceptual criteria [12, 29], the equalization error yields a simple closed-form expression.

In Section 3.2.1 we derive a measure of robustness for perfect equalization of both magnitude and phase in a stochastic soundfield. In Section 3.2.2, we derive a measure of robustness of perfect equalization of the magnitude response in a deterministic soundfield. Our measures are similar to those used in [9, 78, 98, 99], but allow the spatial signal Υ(x; ω) to represent not only sound pressure P(x; ω), but also the output signal of a microphone M(x; ω) or beamformer B(x; ω). This fact is summarized by the notation Υ ∈ {P, M, B}.

3.2.1 Criterion for Stochastic Soundfields

We now define a criterion suited to analyzing the robustness of perfect equalization of signals captured in a diffuse soundfield. The equalizer error criterion ɛ_Υ(r; ω) is defined as follows. The mean square change in the output signal due to movement of the sensor is E{|Υ(x + r; ω) − Υ(x; ω)|²}. The expectation is calculated over all realizations of the output signal. Normalizing the error by dividing by E{|Υ(x; ω)|²}, we define the error criterion as

    ɛ_Υ(r; ω) ≜ E{|Υ(x + r; ω) − Υ(x; ω)|²} / E{|Υ(x; ω)|²}.   (3.1)

Ensemble averaging allows using the diffuse field model of Chapter 2.
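As a concrete illustration of the criterion (3.1), the purely diffuse case can be checked numerically. The following sketch (all parameters illustrative) models each realization of a diffuse field as a superposition of plane waves with random directions uniform on the sphere and i.i.d. complex Gaussian amplitudes, and estimates ɛ_P(r; ω) by Monte Carlo; for sound pressure in a diffuse field the estimate should approach the closed-form value 2 − 2 sinc(k‖r‖) derived later in this chapter.

```python
import cmath, math, random

random.seed(0)

def diffuse_realization(num_waves):
    """One realization of a diffuse field: many plane waves with random
    directions (uniform on the sphere) and i.i.d. complex Gaussian amplitudes."""
    waves = []
    for _ in range(num_waves):
        u = random.uniform(-1.0, 1.0)            # cos(theta), uniform on sphere
        phi = random.uniform(0.0, 2 * math.pi)
        s = math.sqrt(1.0 - u * u)
        d = (s * math.cos(phi), s * math.sin(phi), u)   # unit direction
        a = complex(random.gauss(0, 1), random.gauss(0, 1))
        waves.append((d, a))
    return waves

def pressure(waves, k, x):
    """Sound pressure at x: superposition of the plane waves."""
    return sum(a * cmath.exp(1j * k * (d[0]*x[0] + d[1]*x[1] + d[2]*x[2]))
               for d, a in waves)

def epsilon(k, r, trials=2000, num_waves=64):
    """Monte Carlo estimate of the error criterion (3.1) for sound pressure."""
    num = den = 0.0
    for _ in range(trials):
        w = diffuse_realization(num_waves)
        p0 = pressure(w, k, (0.0, 0.0, 0.0))
        p1 = pressure(w, k, (r, 0.0, 0.0))
        num += abs(p1 - p0) ** 2
        den += abs(p0) ** 2
    return num / den

k = 2 * math.pi            # wavelength = 1
r = 0.25                   # quarter-wavelength perturbation
est = epsilon(k, r)
ref = 2 - 2 * math.sin(k * r) / (k * r)     # 2 - 2 sinc(kr)
print(est, ref)
```

The agreement improves with the number of trials and plane waves; the plane-wave count only needs to be large enough for the complex Gaussian (central limit) behaviour of the field to hold.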
Use of the diffuse model allows the analysis results to be relatively independent of room geometry, so that the dependence on the characteristics of the sound capture device can be quantified. This statistical criterion measures robustness by quantifying the change in the output signal. The underlying premise is that, for any equalizer attached to the signal output, a large change in sensor output signal for a small sensor movement implies poor robustness. This premise is based on the idea that a perfect zero-forcing equalizer, designed to equalize the system output at the original location, G(ω) = Υ^−1(x; ω), will perform poorly if the output signal changes substantially. In fact, for small DRRs, ɛ_Υ(r; ω) is equal to the ensemble-averaged error in equalizer output for the zero-forcing equalizer filter [78].

(Footnote 1: In Section 3.6, we propose a modification of the robustness criteria defined below that will account for these perceptual phenomena.)

3.2.2 Criterion for Deterministic Soundfields

In this section, we define a criterion suitable for exploring the dependence of the robustness of perfect magnitude response equalization on the soundfield. This criterion provides a measure of the average equalizer error as a function of perturbation distance. We define the equalizer error criterion κ_Υ(r; ω) as follows. Without loss of generality, choose the origin of our coordinate system to coincide with the sound capture device (that is, set x = 0). The magnitude square error in equalizer output due to movement of the sound capture device, when using the ideal magnitude response equalization filter |Υ(0; ω)|^−1, is κ_Υ(r; ω) = |Υ(r; ω)/Υ(0; ω)|² − 1. To obtain insight into the average equalizer error for a random perturbation direction, we average κ_Υ(rφ̂; ω) over all possible directions φ̂, yielding

    κ_Υ(r; ω) = (1/4π) ∫_{S²} κ_Υ(rφ̂; ω) ds(φ̂)
              = (1/4π) ∫_{S²} |Υ(rφ̂; ω)/Υ(0; ω)|² ds(φ̂) − 1.   (3.2)

This criterion measures the average error in equalization as a function of the perturbation distance r.

The main reason for exploring magnitude response equalization is to avoid the so-called phasing problem described in Section 3.3. This is not to say that the phase response is unimportant, for intelligible speech can be transmitted with phase alone. Equalizing the magnitude response can actually degrade sound quality if in the process the phase response is distorted [71]. However, evidence suggests the magnitude response is more relevant in the reverberant case than it is in the reverberation-free case [56].

3.3 Analysis of Robustness Expressions

In this section we derive expressions for the robustness criteria (3.1) and (3.2). First, however, we describe some concepts and quantities important to the analysis of the captured sound signals.

(Footnote 2: The mean square error in equalizer output, E{|Υ(x + r; ω)/Υ(x; ω) − 1|²}, can be shown equal to ɛ_Υ(r; ω) provided |E{Υ(x; ω)}|² ≪ Var{Υ(x; ω)}.)
This was shown in [78] for Υ = P, but a similar derivation shows the same is true for Υ ∈ {M, B}.
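The practical difference between the two criteria can be seen on a toy signal pair: a pure delay changes the phase but not the magnitude of Υ, so the magnitude-response criterion κ_Υ reports zero error, while an ɛ-style normalized mean square difference does not (the phasing problem examined later in this chapter). A minimal sketch, with all values illustrative:

```python
import cmath, math

# Hypothetical sensor output at the design position, and the same signal
# after a pure delay tau (a phase shift at frequency omega).
omega = 2 * math.pi * 1000.0      # 1 kHz (illustrative)
tau = 0.25e-3                     # 0.25 ms delay (illustrative)
Y0 = 1.3 - 0.4j                   # Upsilon(0; omega), arbitrary nonzero sample
Y1 = cmath.exp(1j * omega * tau) * Y0   # Upsilon(r; omega): delayed copy

# Deterministic criterion: magnitude-response error |Y1/Y0|^2 - 1.
kappa = abs(Y1 / Y0) ** 2 - 1

# Epsilon-style criterion on the same pair: |Y1 - Y0|^2 / |Y0|^2,
# which equals 2 - 2 cos(omega * tau) for a pure phase shift.
eps = abs(Y1 - Y0) ** 2 / abs(Y0) ** 2

print(kappa, eps, 2 - 2 * math.cos(omega * tau))
```

Here the delay leaves the magnitude response untouched (κ = 0), while the phase-sensitive measure reaches 2 for a quarter-period delay even though no equalization error in magnitude has occurred.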
3.3.1 Preliminaries

To describe the reverberation in the output signal Υ(x; ω), we define for it the direct-to-reverberant ratio and the spatial correlation. We shall find that equalizer robustness is a function of these quantities.

Direct and Reverberant Components

The essence of the soundfield model is to separate signals into direct and reverberant components. This can clearly be done for sound pressure, but because of the linearity of most sound capture devices, it can also be done for the output signal Υ(x; ω). We separate the quantity Υ(x; ω) at each point x into two components, a direct component Υ_d(x; ω) and a reverberant component Υ_r(x; ω):

    Υ(x; ω) = Υ_d(x; ω) + Υ_r(x; ω),   (3.3)

where Υ ∈ {P, M, B}. As in Chapter 2, the reverberant component at each point may be modelled as a random variable. The relative magnitude of the direct and reverberant components is quantified through the direct-to-reverberant ratio (DRR), defined as

    γ_Υ(x; ω) = |Υ_d(x; ω)|² / E{|Υ_r(x; ω)|²},   (3.4)

where the expectation takes the ensemble average of reverberant energy over all realizations of the reverberant signal.

Spatial Correlation

The spatial correlation of a spatial quantity is a measure of similarity between two points in space. We shall find that the robustness criterion (3.1) is a function of spatial correlation. The spatial correlation between two positions x and x + r, for the spatial quantity Υ(x; ω), is defined as

    ρ_Υ(r; x, ω) = E{[Υ(x; ω) − Ῡ(x; ω)]* [Υ(x + r; ω) − Ῡ(x + r; ω)]}
                   / √( E{|Υ(x; ω) − Ῡ(x; ω)|²} E{|Υ(x + r; ω) − Ῡ(x + r; ω)|²} ),   (3.5)

where Ῡ(x; ω) ≜ E{Υ(x; ω)}. If Υ(x; ω) has a deterministic direct part and a reverberant part with the properties E{Υ_r(x; ω)} = 0 and E{|Υ_r(x + r; ω)|²} = E{|Υ_r(x; ω)|²}, the spatial correlation reduces to

    ρ_Υ(r; x, ω) = E{Υ*_r(x; ω) Υ_r(x + r; ω)} / E{|Υ_r(x; ω)|²}.   (3.6)
In Chapter 2, we saw that sound pressure in a diffuse field possesses these properties. We shall see below in Theorem 3.3.1 that microphone and beamformer signals also possess these properties, so that (3.6) is applicable for Υ ∈ {P, M, B}.

3.3.2 Stochastic Criterion in a Diffuse Field

We now derive an expression for the stochastic error criterion (3.1) for the robustness of equalization of the sound capture device output signal in a diffuse reverberant field. We commence by writing an expression for the output signal of the sound capture device, whether it be a beamformer or a directional microphone.

Signal Models for a Diffuse Field

First summarizing the soundfield model, let the soundfield be the result of a direct field created by a farfield sound source and a diffuse reverberant field. The direct component of sound pressure is given by

    P_d(x; ω) = ξ_d(ω) e^{ik x·ŷ},   (3.7)

where ξ_d(ω) is the amplitude of the direct signal, y is the position of the direct source and ŷ = y/‖y‖. Assuming the reverberant component of the soundfield is diffuse, we can write it as a superposition of plane waves:

    P_r(x; ω) = ∫_{S²} ξ_r(φ̂; ω) e^{ik x·φ̂} ds(φ̂),   (3.8)

where the reverberation geometry function ξ_r(φ̂; ω) for a diffuse field is defined in Definition 2.4.2.

We now write expressions for the output signal of a directional sensor in this soundfield. The output signal of each directional microphone n ∈ {1, 2, ..., N} with directivity function D_n(ŷ; ω) is written as follows. Since the field results from a superposition of plane waves, the microphone output signal is the result of scaling each plane wave by the directivity of the microphone in the direction of the incoming waves.
A microphone n positioned at x yields an output signal M_n(x; ω) = M_n^(d)(x; ω) + M_n^(r)(x; ω), where the direct component is equal to the sound pressure scaled by the directivity function D_n(ŷ; ω):

    M_n^(d)(x; ω) = D_n(ŷ; ω) ξ_d(ω) e^{ik x·ŷ},   (3.9)

(Footnote 3: Since each microphone lies in the farfield of the source, source wavefronts are planar and arrive at each microphone from the same direction ŷ no matter where the microphone lies.)
[Figure 3.1: Directional response D(θ; ω) in dB (10 dB/division), where θ is the polar angle, for several second-order directional microphone designs of [25]: (a) cardioid, (b) supercardioid, (c) hypercardioid, (d) dipole.]

and for the reverberant component, the plane wave contribution from each direction φ̂ is scaled by D_n(φ̂; ω), to yield

    M_n^(r)(x; ω) = ∫_{S²} D_n(φ̂; ω) ξ_r(φ̂; ω) e^{ik x·φ̂} ds(φ̂).   (3.10)

The directivity patterns of several common directional microphone designs are shown in Figure 3.1.

We now write the output signals for a beamformer. Consider an array of N microphones with centroid at x, sensors positioned at x + x_1, x + x_2, ..., x + x_N, and frequency-dependent weights W_1(ω), W_2(ω), ..., W_N(ω), as shown in Figure 3.2. The sensors possess the farfield directivity functions D_1(φ̂; ω), D_2(φ̂; ω), ..., D_N(φ̂; ω). The output of the array is

    B(x; ω) = Σ_{n=1}^{N} W_n(ω) M_n(x + x_n; ω).   (3.11)

We express the beamformer output signal as a sum of direct and reverberant parts, B(x; ω) = B_d(x; ω) + B_r(x; ω), where the direct component B_d(x; ω) and the reverberant component B_r(x; ω) each satisfy (3.11) separately. Substituting (3.9) and (3.10) into (3.11), the direct and
[Figure 3.2: Sensor array configuration for a beamformer. Each sensor n is positioned at x + x_n and attached to a filter with frequency-dependent weight W_n(ω); the weighted outputs are summed to give B(x; ω).]

reverberant output signals of the beamformer are shown to be

    B_d(x; ω) = D(ŷ; ω) ξ_d(ω) e^{ik x·ŷ},   (3.12)

and

    B_r(x; ω) = ∫_{S²} D(φ̂; ω) ξ_r(φ̂; ω) e^{ik x·φ̂} ds(φ̂),   (3.13)

where

    D(φ̂; ω) ≜ Σ_{n=1}^{N} W_n(ω) D_n(φ̂; ω) e^{ik x_n·φ̂}   (3.14)

is the farfield transfer function of the beamformer (with x = 0, in a free-field environment, to a source of unit strength in direction φ̂); in other words, it is the farfield directivity function, or beampattern, of the beamformer.

An important parameter of a beamformer or directional sensor is the array gain, or directivity factor, in the direction of the source [47]:

    ε(ŷ; ω) ≜ |D(ŷ; ω)|² / ( (1/4π) ∫_{S²} |D(φ̂; ω)|² ds(φ̂) ).

In the above direct-plus-diffuse field model, one can show that the direct-to-reverberant ratio satisfies

    γ_B(x; ω) (dB) = 10 log₁₀ ε(ŷ; ω) + γ_P(x; ω) (dB),   (3.15)

where x (dB) ≜ 10 log₁₀(x). The effect of capturing sound with a directional sensor is thus to boost the DRR.

Duality Properties of Output Signals

To establish some further characteristics of the microphone and beamformer output signals in the diffuse field, we establish some duality properties. These properties
will also be utilized for the deterministic robustness analysis of Section 3.3.3. First, we note that a beamformer can be safely interchanged with a directional microphone possessing the same directivity pattern. Comparison of (3.9) and (3.10) with (3.12) and (3.13) reveals this farfield duality principle.

Observation 3.3.1 (Directional Sensor/Beamformer Duality). For a sound source in the farfield and its reverberant image-sources, the output signal of a beamformer is equal to the output signal of a single directional sensor with the same directivity function, located at the beamformer.

Comparison of (3.9) and (3.10) with (3.7) and (3.8) reveals an important duality between directional sensor output and sound pressure for farfield reverberant sources.

Observation 3.3.2 (Directional Sensor/Field Pressure Duality). The outputs in the following two scenarios are the same: the output signal of a directional sensor positioned at x with directivity pattern D(φ̂; ω), in a field created by a direct source in direction ŷ with amplitude ξ_d(ω) and reverberation possessing the reverberation geometry function ξ_r(φ̂; ω); and the sound pressure at position x in a field created by a direct source with amplitude ξ̃_d(ω) = D(ŷ; ω) ξ_d(ω) and reverberation possessing the reverberation geometry function ξ̃_r(φ̂; ω) = D(φ̂; ω) ξ_r(φ̂; ω).

Observation 3.3.2 additionally holds when the sensor is in the nearfield of the source or reverberation. However, the sensor must then lie at the origin, which greatly restricts application of the duality principle in the nearfield case. Observations 3.3.1 and 3.3.2 show a duality between the signal of a sensor M(x; ω) or beamformer B(x; ω) in a diffuse field, and the sound pressure P(x; ω) of a generalized diffuse field. They allow ready calculation of the statistical properties of M_n^(r)(x; ω) and B_r(x; ω).
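The duality observations let the beampattern (3.14) be treated like a single directivity function, and its directivity factor can then be computed by direct numerical integration over the sphere. The sketch below (array parameters illustrative) does this for a uniformly weighted, delay-and-sum linear array of six omnidirectional elements at half-wavelength spacing, looking broadside; for this classical configuration the directivity factor equals the number of elements N, so by (3.15) the DRR boost is 10 log₁₀ N ≈ 7.8 dB.

```python
import cmath, math

def beampattern(weights, positions, k, direction):
    """Farfield beampattern D(phi) = sum_n W_n exp(i k x_n . phi), cf. (3.14),
    for omnidirectional elements (D_n = 1)."""
    return sum(w * cmath.exp(1j * k * (p[0]*direction[0] +
                                       p[1]*direction[1] +
                                       p[2]*direction[2]))
               for w, p in zip(weights, positions))

def directivity_factor(weights, positions, k, look, n_theta=300, n_phi=120):
    """Array gain |D(look)|^2 / ((1/4pi) * integral of |D|^2 over the sphere),
    by midpoint quadrature in (theta, phi)."""
    total = 0.0
    for i in range(n_theta):
        theta = (i + 0.5) * math.pi / n_theta
        st, ct = math.sin(theta), math.cos(theta)
        for j in range(n_phi):
            phi = (j + 0.5) * 2 * math.pi / n_phi
            d = (st * math.cos(phi), st * math.sin(phi), ct)
            total += abs(beampattern(weights, positions, k, d)) ** 2 * st
    avg = total * (math.pi / n_theta) * (2 * math.pi / n_phi) / (4 * math.pi)
    return abs(beampattern(weights, positions, k, look)) ** 2 / avg

N = 6
wavelength = 1.0
k = 2 * math.pi / wavelength
d = wavelength / 2                        # half-wavelength spacing along z
positions = [(0.0, 0.0, n * d) for n in range(N)]
weights = [1.0 / N] * N                   # uniform (delay-and-sum, broadside)
look = (1.0, 0.0, 0.0)                    # broadside look direction

df = directivity_factor(weights, positions, k, look)
print(df, 10 * math.log10(df))            # DRR boost in dB per (3.15)
```

The same routine applies unchanged to directional elements by multiplying each term of the beampattern sum by D_n(φ̂; ω).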
We summarize these statistics in a theorem analogous to Theorem 2.4.3.

Theorem 3.3.1. For a microphone or beamformer of directivity pattern D(φ̂; ω) positioned at x in a soundfield with a diffuse reverberant component, the reverberant output signal Υ_r(x; ω) is complex Gaussian with the zero mean property E{Υ_r(x; ω)} = 0, the covariance property

    E{Υ*_r(x; ω) Υ_r(x + r; ω)} = σ_ξ²(ω) ∫_{S²} |D(φ̂; ω)|² e^{ik r·φ̂} ds(φ̂),   (3.16)

and the circularity property

    E{Υ_r(x; ω) Υ_r(x + r; ω)} = 0,   (3.17)
where σ_ξ²(ω) is the reverberant geometry variance of the diffuse field and Υ ∈ {M, B}, provided no particular direction in the directivity D(φ̂; ω) dominates over the others.

Proof. Consider a directional microphone of directivity function D(φ̂; ω) situated at x in a diffuse reverberant field, where the reverberation geometry function ξ_r(φ̂; ω) is a spatial random variable with the properties described in Definition 2.4.2. By Observation 3.3.2, the output signal Υ_r(x; ω) is equal to the sound pressure resulting from the reverberation geometry function ξ̃_r(φ̂; ω) = D(φ̂; ω) ξ_r(φ̂; ω). Since ξ̃_r(φ̂; ω) then satisfies the properties of Definition 2.4.2, the soundfield is a generalized diffuse field with a reverberant geometry variance of

    σ̃_ξ²(φ̂; ω) = E{|ξ̃_r(φ̂; ω)|²} = |D(φ̂; ω)|² σ_ξ²(ω),

where σ_ξ²(ω) = E{|ξ_r(φ̂; ω)|²}. Since the directivity function is not dominated by any particular direction, while ξ_r(φ̂; ω) has constant variance, there is no random variable in the set {ξ̃_r(φ̂; ω) e^{ik x·φ̂} : φ̂ ∈ S²} whose variance dominates. These random variables thus satisfy the Lindeberg condition. All properties of Υ_r(x; ω) then follow directly from the properties of generalized diffuse field sound pressure: the complex Gaussianity and zero mean properties, and the covariance and circularity properties, follow from the corresponding theorems of Chapter 2. Finally, by Observation 3.3.1, the output signals, and hence the signal statistics, are the same when the microphone is replaced by a beamformer of the same directivity function.

The condition on D(φ̂; ω) ensures the output signal Υ(x; ω) is complex Gaussian. In practical cases, the directivity function is continuous and the condition is satisfied. The only interesting case where the condition is violated is the idealized directional microphone D(φ̂; ω) = δ(φ̂ − φ̂₀) pointing in a direction φ̂₀.
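The covariance (3.16), normalized by its value at r = 0, gives the spatial correlation of the reverberant output. The sketch below evaluates this by quadrature for an omnidirectional sensor, which must reproduce sinc(k‖r‖), and for a first-order cardioid D(θ) = (1 + cos θ)/2 (an illustrative stand-in for the designs of Figure 3.1), with the perturbation along the look axis; the magnitude of the correlation comes out higher for the cardioid, consistent with the link between directivity and spatial correlation discussed below.

```python
import cmath, math

def spatial_correlation(D2, k, r_vec, n_theta=400, n_phi=200):
    """rho(r) = ∫ |D|^2 exp(i k r . phi) ds / ∫ |D|^2 ds, the normalized
    covariance (3.16).  D2 maps cos(theta) to |D|^2 (pattern along +z)."""
    num = 0 + 0j
    den = 0.0
    for i in range(n_theta):
        theta = (i + 0.5) * math.pi / n_theta
        st, ct = math.sin(theta), math.cos(theta)
        for j in range(n_phi):
            phi = (j + 0.5) * 2 * math.pi / n_phi
            d = (st * math.cos(phi), st * math.sin(phi), ct)
            w = D2(ct) * st                     # |D|^2 times sphere Jacobian
            num += w * cmath.exp(1j * k * (d[0]*r_vec[0] +
                                           d[1]*r_vec[1] +
                                           d[2]*r_vec[2]))
            den += w
    return num / den

k = 2 * math.pi
r = (0.0, 0.0, 2 / k)          # perturbation with k|r| = 2, along look axis

omni = spatial_correlation(lambda u: 1.0, k, r)
card = spatial_correlation(lambda u: (0.5 * (1 + u)) ** 2, k, r)
sinc = math.sin(2.0) / 2.0

print(abs(omni), abs(card), sinc)
```

For the axial perturbation used here the cardioid correlation is complex; its phase reflects the phasing effect discussed later in the chapter, while its magnitude exceeds the omnidirectional value sinc(2).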
To evaluate the covariance in (3.16), it is convenient to take the spherical harmonic expansion of the squared magnitude of the directivity, |D(φ̂; ω)|²:

    |D(φ̂; ω)|² = Σ_{n=0}^{∞} Σ_{m=−n}^{n} ς_nm(ω) Y_n^m(φ̂),   (3.18a)

    ς_nm(ω) = ∫_{S²} |D(φ̂; ω)|² [Y_n^m(φ̂)]* ds(φ̂).   (3.18b)

Substituting (3.18a) and the Jacobi–Anger expansion (2.25) into (3.16), followed by
applying the orthogonality property (2.10), the covariance reduces to

    E{Υ*_r(x; ω) Υ_r(x + r; ω)} = 4π σ_ξ²(ω) Σ_{n=0}^{∞} Σ_{m=−n}^{n} iⁿ ς_nm(ω) j_n(k‖r‖) Y_n^m(r̂),   (3.19)

where Υ ∈ {M, B}. This expression is analogous to the sound pressure covariance (2.26). In the next section, the models for microphone and beamformer signals are used to derive an expression for the stochastic error criterion.

Expression for Stochastic Error Criterion

With the properties summarized in Theorem 3.3.1, the error criterion (3.1) reduces to the following expression.

Theorem 3.3.2. For the signal of a sound capture device Υ ∈ {P, M, B} with properties satisfying Theorem 2.4.3 or Theorem 3.3.1, the error criterion (3.1) reduces to

    ɛ_Υ(r; ω) = 2 − ( 2 / (γ_Υ(x; ω) + 1) ) Re{ γ_Υ(x; ω) e^{ik r·ŷ} + ρ_Υ(r; ω) },

where γ_Υ(x; ω) is the DRR of Υ(x; ω) and ρ_Υ(r; ω) is the spatial correlation of Υ between positions x and x + r. The proof is in the appendix at the end of this chapter.

Remarks: (1) In the classical diffuse field, γ_P(x; ω) = 0 and ρ_P(r; ω) = sinc(k‖r‖) (in this thesis, we define the sinc function as sinc x ≜ (sin x)/x), so the equalizer error for sound pressure reduces to ɛ_P(r; ω) = 2 − 2 sinc(k‖r‖), as in [78]. For the direct-plus-diffuse soundfield, the dependence of equalizer error on DRR is illustrated in Figure 3.3. (2) Robustness is shown to be a function of γ_Υ(x; ω) and ρ_Υ(r; ω). Significant improvement is obtained not only by boosting the DRR, but also by increasing the spatial correlation. As will be seen, spatial correlation is increased by increasing the directivity of the sensor or beamformer.

Overcoming the Phasing Problem

Unfortunately, the statistical error criterion ɛ_Υ(r; ω), like the similar criteria of recent work [9, 78, 98, 99], is sensitive to phase shifts e^{iωτ} in Υ(x; ω), which we call the
phasing problem.

[Figure 3.3: Evaluation of the statistical error criterion versus r/λ for an omnidirectional sensor in a diffuse field at several DRRs (diffuse field only, i.e. −∞ dB, and −5 dB, 0 dB, 5 dB).]

From (3.1), in the case that the perturbed and original signals are equal, Υ(x + r; ω) = Υ(x; ω), the error criterion ɛ_Υ(r; ω) is zero. However, if the only difference is a phase shift of τ, Υ(x + r; ω) = e^{iωτ} Υ(x; ω), then ɛ_Υ(r; ω) is no longer zero:

    ɛ_Υ(r; ω) = E{|Υ(x + r; ω) − Υ(x; ω)|²} / E{|Υ(x; ω)|²} = |e^{iωτ} − 1|² = 2 − 2 cos(ωτ).

Depending on τ, ɛ_Υ(r; ω) may range anywhere from 0 to 4. The stochastic criterion can therefore yield an overly conservative measure of the error of equalization. Consequently, care must be taken in interpreting the robustness results obtained with this criterion.

In the case of no reverberation (γ_Υ(x; ω) → ∞), the omnidirectional sensor equalization error becomes ɛ_Υ(r; ω) = 2 − 2 cos(k r·ŷ). Since no equalization is actually needed in this case, one would expect a correctly-defined error criterion to yield zero error. However, with the stochastic error criterion, the error is zero only when r·ŷ = 0. If the sensor is moved toward or away from the direct source, r·ŷ ≠ 0 and ɛ_Υ(r; ω) becomes an overly conservative measure. Similar problems occur when the sensor moves toward or away from the reverberant sources. Consequently, to minimize the phasing effect, we consider perturbation directions r perpendicular to the bulk of the reverberant sources.

We combat the phasing problem in the examples below by taking the following steps: (i) positioning the direct source amongst the reverberant sources, and (ii) only calculating the error for perturbations r perpendicular to the source direction ŷ. Step (ii)
eliminates all phasing problems associated with the direct field component. A way to avoid the phasing problem entirely is to use a phase-insensitive robustness criterion. Such a criterion is the topic of the next section.

3.3.3 Modal Analysis of Deterministic Criterion

We now derive an expression for the deterministic robustness criterion in terms of the modal coefficients of the soundfield. We start by deriving the expression for an arbitrary soundfield, then consider the case where the soundfield has a sizable direct component.

General Expression

As shown in Chapter 2, any soundfield inside the source-free region B³ = {x ∈ ℝ³ : ‖x‖ ≤ R} can be expressed as a linear sum of a set of basis function solutions of the Helmholtz equation:

    P(x; ω) = Σ_{n=0}^{∞} Σ_{m=−n}^{n} β_nm(ω) j_n(k‖x‖) Y_n^m(x̂),   (3.20)

where the β_nm(ω) are the soundfield coefficients. We now present an expression for the average equalizer error (3.2) in terms of the soundfield coefficients.

Theorem 3.3.3. The average magnitude square error in equalizer output due to movement of a sensor by distance r (deterministic criterion (3.2)), in a source-free region with pressure P(x; ω) at any position x, is given by

    κ_P(r; ω) = Σ_{n=0}^{∞} Σ_{m=−n}^{n} |β_nm(ω)/β_00(ω)|² [j_n(kr)]² − 1,   (3.21)

where the β_nm(ω) are the modal coefficients of the field defined in (3.20). The proof is in the appendix at the end of this chapter.

Remarks: (1) This error criterion has been formulated without utilizing any assumptions of statistical room acoustics. Further, it does not possess the phasing problem of the statistical criterion. (2) For soundfield coefficients constant in ω, the equalization error varies in the same way with both perturbation distance and frequency. That is, κ_P(r; αω) = κ_P(αr; ω) for α > 0. As a consequence, when the field coefficients are frequency invariant, we can describe the dependence of equalizer error on distance and frequency simultaneously.
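One consistency check of (3.21): for a single unit-amplitude plane wave, the usual plane-wave expansion gives Σ_m |β_nm/β_00|² = 2n + 1, and since a plane wave has constant magnitude everywhere, ideal magnitude equalization remains perfect under any perturbation and the error must vanish. The sketch below verifies that Σ_n (2n + 1)[j_n(kr)]² − 1 ≈ 0, with only a few terms beyond n ≈ kr needed, in line with the efficiency of the modal sum.

```python
import math

def spherical_jn_list(n_max, x):
    """j_0 .. j_{n_max} by upward recurrence (adequate for moderate n and x)."""
    if x == 0:
        return [1.0] + [0.0] * n_max
    j = [math.sin(x) / x, math.sin(x) / x**2 - math.cos(x) / x]
    for n in range(1, n_max):
        j.append((2 * n + 1) / x * j[n] - j[n - 1])
    return j[:n_max + 1]

# For a single plane wave, |beta_nm / beta_00|^2 summed over m gives (2n+1),
# so the modal error (3.21) becomes  sum_n (2n+1) j_n(kr)^2 - 1,
# which must vanish: the field magnitude is constant, so ideal magnitude
# equalization stays perfect for any sensor movement.
kr = 0.9          # corresponds to r / lambda of about 0.15
n_max = 8         # a few terms beyond kr suffice for convergence
j = spherical_jn_list(n_max, kr)
kappa = sum((2 * n + 1) * j[n] ** 2 for n in range(n_max + 1)) - 1
print(kappa)
```

Truncating instead at n_max = 1 (four (n, m) terms) already leaves an error of only about 1%, illustrating the small term count quoted in the remark on computational efficiency.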
(3) The average equalizer error is expressed as a simple weighted sum of the squared soundfield coefficients. Due to the efficiency of the modal representation, only a small number, (⌈kr⌉ + 1)², of terms are needed in this sum for an accurate computation [48, 106]. E.g., for an r/λ of up to 0.15, only (⌈2π · 0.15⌉ + 1)² = 4 terms are needed. (4) The error criterion is equal to the average deviation in square pressure from the pressure at the origin:

    κ_P(r; ω) = (1/4π) ∫_{S²} ( |P(rφ̂; ω)|² − |P(0; ω)|² ) / |P(0; ω)|² ds(φ̂).

(5) The origin of the coordinate system is chosen at the sensor to simplify the analysis. Though a change of origin in spherical coordinates is not trivial, by no means does this choice limit the usefulness of Theorem 3.3.3.

Direct and Reverberant Parts

We now investigate the relative contributions of the direct and reverberant fields to equalization error. We express the average equalizer error κ_P(r; ω) as a function of a direct field term, a reverberant field term and a field interaction term. By the linearity of (3.20), we can separate the soundfield coefficients into direct and reverberant parts:

    β_nm(ω) = β_nm^(d)(ω) + β_nm^(r)(ω),   (3.22)

where β_nm^(d)(ω) and β_nm^(r)(ω) are the direct and reverberant components of the soundfield coefficients. For a sound source positioned at y, the direct component of the soundfield is

    P_d(x; ω) = ξ_d(ω) e^{−ik‖y−x‖} / (4π‖y−x‖),

where ξ_d(ω) is the source signal amplitude. From the spherical harmonic expansion (2.29) of this, provided ‖y‖ > ‖x‖, the direct field coefficients are

    β_nm^(d)(ω) = ik h_n^(2)(k‖y‖) [Y_n^m(ŷ)]* ξ_d(ω).   (3.23)

We write the reverberant component in the general form

    P_r(x; ω) = Σ_{n=0}^{∞} Σ_{m=−n}^{n} β_nm^(r)(ω) j_n(k‖x‖) Y_n^m(x̂),   (3.24)

which holds provided no reverberant image-source lies in B³. At the origin, the DRR for sound pressure reduces to a particularly simple
expression. Utilizing the modal expansion (3.20), at the origin (‖x‖ = 0) the DRR is

    γ_P(0; ω) = |Σ_{n,m} β_nm^(d)(ω) j_n(k·0) Y_n^m(x̂)|² / |Σ_{n,m} β_nm^(r)(ω) j_n(k·0) Y_n^m(x̂)|².

Since j_n(0) = δ_n0, only the n = 0 terms in the double summations are non-zero. Hence we obtain

    γ_P(0; ω) = |β_00^(d)(ω)|² / |β_00^(r)(ω)|².   (3.25)

Before investigating the relative effects of the direct and reverberant field components on robustness, we establish a spherical Bessel function identity. Though the rate of convergence depends on the choice of k, the following series always converges.

Lemma 3.3.1. For y = ‖y‖ > r and k > 0, the following series expansion identity holds:

    (y/2r) ln( (y + r)/(y − r) ) = Σ_{n=0}^{∞} Σ_{m=−n}^{n} | h_n^(2)(ky) Y_n^m(ŷ) / ( h_0^(2)(ky) Y_0^0(ŷ) ) |² [j_n(kr)]².

The proof is in the appendix at the end of the chapter.

Theorem 3.3.4. The average equalization error due to movement of a sensor by a distance r, in a soundfield with pressure P(x; ω) at any position x and a DRR at the origin of γ_P(0; ω), is given by

    κ_P(r; ω) = ( 1 / ( γ_P(0; ω) + 2 √γ_P(0; ω) z_00(ω) + 1 ) )
                × { γ_P(0; ω) (y/2r) ln( (y + r)/(y − r) )
                    + Σ_{n=0}^{∞} Σ_{m=−n}^{n} [ 2 √γ_P(0; ω) z_nm(ω) + |β_nm^(r)(ω)/β_00^(r)(ω)|² ] [j_n(kr)]² } − 1,   (3.26)

where

    z_nm(ω) = Re{ [β_nm^(d)(ω)]* β_nm^(r)(ω) } / | [β_00^(d)(ω)]* β_00^(r)(ω) |,   (3.27)

β_nm^(d)(ω) = ik h_n^(2)(k‖y‖) [Y_n^m(ŷ)]* ξ_d(ω) are the direct soundfield coefficients and β_nm^(r)(ω) are the reverberant soundfield coefficients defined in (3.24). The proof is in the appendix at the end of the chapter.

Remarks: (1) The coefficient z_nm(ω) quantifies the interaction occurring between the direct and reverberant field components. It is influenced by the relative phase of
[Figure 3.4: Evaluation of the deterministic error criterion of magnitude response equalization versus r/λ for an omnidirectional sensor in the spherical shell reverberant geometry, at several DRRs (reverberation only, i.e. −∞ dB, and −5 dB, 0 dB, 5 dB, 10 dB).]

the direct and reverberant coefficients. Rearranging (3.27),

    z_nm(ω) = ( |β_nm^(d)(ω)| / |β_00^(d)(ω)| ) ( |β_nm^(r)(ω)| / |β_00^(r)(ω)| ) cos[ ∠β_nm^(d)(ω) − ∠β_nm^(r)(ω) ].

(2) In practice, the source-to-sensor distances are large, of order 1 m, while the perturbation distances of interest are small, of order 1 cm. In such cases, we apply the limit r/y → 0,

    lim_{r/y→0} (y/2r) ln( (y + r)/(y − r) ) = 1,

to see that the equalizer error depends on the source-to-sensor distance only indirectly, through the DRR.

In Chapter 2, we showed that an equivalent form to (3.24) is

    P_r(x; ω) = ∫_{S²} ξ_r(φ̂; ω) e^{−ik‖Rφ̂−x‖} / (4π‖Rφ̂−x‖) ds(φ̂),   (3.28)

where the reverberation geometry parameter ξ_r(φ̂; ω) is related to β_nm^(r)(ω) through (2.32) and R is the reverberant source radius. In Section 3.5.2, we use this representation to investigate the dependence of robustness on the soundfield. In the case where R is large, (3.28) becomes equivalent to the farfield form (3.8). We can then also use this soundfield model to study realizations of the generalized diffuse field by selecting ξ_r(φ̂; ω) as one of the geometries described in Section 2.6. When the direct source also lies in the farfield, by the duality principles
of Observations 3.3.1 and 3.3.2, we can extend use of the deterministic criterion to investigate the dependence of robustness on the sound capture strategy (as in Section 3.5.1).

For a soundfield given by (3.28) with ξ_r(φ̂; ω) ≡ 1 and y ≫ r, the error in magnitude equalization of sound pressure reduces to

    κ_P(r; ω) = ( γ_P(0; ω) + { 2 √γ_P(0; ω) z_00(ω) + 1 } [j_0(kr)]² ) / ( γ_P(0; ω) + 2 √γ_P(0; ω) z_00(ω) + 1 ) − 1,   (3.29)

where the direct/reverberant interaction coefficient is z_00(ω) = cos[k(R − y)]. Equation (3.29) is plotted in Figure 3.4 for several DRRs with z_00(ω) = 0. Further, in the case γ_P(0; ω) = 0, the magnitude of the equalizer error becomes

    |κ_P(r; ω)| = 1 − [sinc(kr)]² = [1 + sinc(kr)][1 − sinc(kr)],

where for small r, sinc(kr) ≈ 1 and |κ_P(r; ω)| ≈ 2 − 2 sinc(kr). This expression is the same as the diffuse field expression for the statistical criterion ɛ_P(r; ω).

3.4 Robustness of Equalization to Movement of Source

Up until now, the robustness of equalization has only been quantified for movement of the sound capture device. However, in practice it is more likely for the sound capture device to remain fixed while the sound source moves. We now investigate the extension of the above results to this more interesting case.

Consider a directional microphone fixed at x with directivity function D(φ̂; ω), while an omnidirectional sound source is shifted from y to y + r. Define the reverberant parts of the output signals at the microphone before and after the source shifts as M_r(x; y, ω) and M_r(x; y + r, ω), respectively. We assume the following: (i) the sound source creates a soundfield with a deterministic direct part and a diffuse reverberant part; (ii) the sound energy density at the microphone is unchanged by the movement of the source, so that E{|M_r(x; y + r, ω)|²} = E{|M_r(x; y, ω)|²}. From (i), it follows that E{M_r(x; y, ω)} = 0. We expect (ii) to be accurate for rooms with regular geometry and homogeneous wall absorption parameters.
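As a brief aside before treating the moving-source case: the DRR dependence of the deterministic error shown in Figure 3.4 can be reproduced from the reduced form (3.29). Taking z_00(ω) = 0 and the magnitude of the error, (3.29) collapses to (1 − sinc²(kr)) / (γ + 1), where γ is the linear (not dB) DRR — consistent with the special case 1 − sinc²(kr) quoted for γ = 0. A minimal sketch (values illustrative):

```python
import math

def sinc(x):
    """sinc x = sin(x) / x, with the thesis convention sinc 0 = 1."""
    return 1.0 if x == 0 else math.sin(x) / x

def kappa_reduced(kr, drr_linear):
    """|kappa(r)| from the reduced form (3.29) with z_00 = 0:
    (1 - sinc(kr)^2) / (gamma + 1), gamma being the linear DRR."""
    return (1.0 - sinc(kr) ** 2) / (drr_linear + 1.0)

kr = 2.0
for drr_db in (float("-inf"), -5.0, 0.0, 5.0, 10.0):  # legend of Figure 3.4
    gamma = 0.0 if drr_db == float("-inf") else 10 ** (drr_db / 10)
    print(drr_db, kappa_reduced(kr, gamma))

reverb_only = kappa_reduced(kr, 0.0)   # gamma = 0 special case
```

Each 5 dB of DRR roughly scales the error down by the factor γ + 1, matching the uniformly spaced curves of Figure 3.4.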
A spatial correlation function ρ_M(r; x, y, ω) can be defined for movement of the source, analogous to (3.5). Because of the zero mean and constant sound
[Table 3.1: Summary of knowledge of the spatial correlation of the output signal of a microphone in a diffuse field, for movement of the sensor, ρ_M(r; x, ω), and of an omnidirectional sound source, ρ_M(r; x, y, ω). For a directional sensor, we expect an expression for the unknown ρ_M(r; x, y, ω) similar to that of ρ_M(r; x, ω).]

                             Source FIXED, Sensor MOVES                                       Source MOVES, Sensor FIXED
    Omnidirectional sensor   sinc(kr)                                                         sinc(kr)
    Directional sensor       ∫_{S²} |D(φ̂; ω)|² e^{ik r·φ̂} dS(φ̂) / ∫_{S²} |D(φ̂; ω)|² dS(φ̂)   unknown

energy density properties, we can write it as:

    ρ_M(r; x, y, ω) = E{M_r(x; y, ω) M_r*(x; y + r, ω)} / E{|M_r(x; y, ω)|²}.

Using this alternative spatial correlation function, it can be shown that the theorem still applies, and can be used to quantify the robustness of equalization in this case. However, we are so far without a way to calculate this new spatial correlation analytically. In the special case of an omnidirectional sensor, the classical soundfield reciprocity principle⁵ can be invoked to show that the spatial correlation is the same whether the source moves or the sensor moves. Here ρ_M(r; ω) = sinc(kr). We postulate that a similar equivalence exists between movement of the source and movement of the sensor for a directional sensor; specifically, that the spatial correlation is the same for movement of the source as it is for movement of the sensor. If this equivalence does exist, our robustness results extend automatically to the case of a perturbed source location. We intuitively expect a similar spatial correlation relationship to exist: in both the sensor movement and source movement cases, the high signal correlation results from small changes in path lengths. However, the situation is different for a moving directional sensor than it is for an omnidirectional sensor, since the correlation depends on the direction of perturbation. If there is in general an equivalent case with a moving source, in which direction does the source need to move to be equivalent?
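The directional-sensor correlation in Table 3.1 is a ratio of surface integrals over the sphere, which can be evaluated numerically. The sketch below (Python/NumPy; the quadrature scheme is an illustrative choice, not from the thesis) checks that the omnidirectional case D ≡ 1 collapses to sinc(kr):

```python
import numpy as np

def sensor_move_correlation(k, r_vec, D2, n_theta=64, n_phi=128):
    """Ratio  int_{S^2} |D|^2 e^{i k r.phihat} dS / int_{S^2} |D|^2 dS,
    evaluated with Gauss-Legendre quadrature in cos(theta) and a uniform
    grid in phi.  D2(theta, phi) returns |D(phihat; omega)|^2."""
    mu, w = np.polynomial.legendre.leggauss(n_theta)   # mu = cos(theta)
    phi = np.linspace(0.0, 2 * np.pi, n_phi, endpoint=False)
    MU, PHI = np.meshgrid(mu, phi, indexing="ij")
    W = w[:, None] * (2 * np.pi / n_phi)               # surface quadrature weights
    st = np.sqrt(1.0 - MU ** 2)                        # sin(theta)
    dot = r_vec[0] * st * np.cos(PHI) + r_vec[1] * st * np.sin(PHI) + r_vec[2] * MU
    d2 = D2(np.arccos(MU), PHI)
    return np.sum(W * d2 * np.exp(1j * k * dot)) / np.sum(W * d2)

k = 2 * np.pi * 1000 / 342.0          # 1 kHz, c = 342 m/s
r_vec = np.array([0.0, 0.0, 0.02])    # 2 cm perturbation
rho_omni = sensor_move_correlation(k, r_vec, lambda th, ph: np.ones_like(th))
# for D == 1 the result should equal sinc(kr)
```

The same routine accepts any axially symmetric squared directivity, e.g. a first order cardioid |D|² = (0.5 + 0.5 cos θ)², for which the correlation remains bounded by one in magnitude.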
Current knowledge of the spatial correlation is summarized in Table 3.1. We leave the spatial correlation for source movement as a topic for future research.

⁵ The reciprocity principle states that if an omnidirectional source and sensor are interchanged, the sound pressure at the sensor does not change [67, p. 312]. It is a consequence of the time reversal property of the wave equation.
3.5 Examples

As examples, we study the dependence of the robustness of equalization on the type of sound capture strategy (Section 3.5.1) and on the distribution of reverberant sources specifying the soundfield (Section 3.5.2).

3.5.1 Study of Dependence on Sound Capture Strategy

We now study the effect of several sound capture strategies on the robustness of equalization. We plot both the statistical and deterministic criteria against frequency for a 2cm perturbation in position. To combat the phasing problem with the statistical criterion, we consider a sensor perturbation r perpendicular to the source direction ŷ. The speed of sound used here is c = 342m/s. We remind the reader that the statistical criterion is used to explore the robustness of perfect equalization, while the deterministic case studies are used to study the robustness of magnitude response equalization. Although the styles of analysis and types of equalization are both different, we shall see that the robustness results are similar.

Single Omnidirectional Sensor

Here we investigate the robustness of equalization of the signal output of a single omnidirectional microphone in a diffuse soundfield (D(φ̂; ω) ≡ 1, σ_ξ²(φ̂; ω) ≡ 1) with different levels of reverberation. Figure 3.3 shows a plot of the statistical criterion. In line with other authors in the equalization and active sound control literature [50,78], we define the zone of equalization to be the spherical region in which the error remains below −10dB. Under this definition, in the diffuse-field-only case the equalizer has a zone of equalization of radius 0.08λ. With a DRR of 5dB, it increases to 0.18λ. Figure 3.4 shows a plot of the deterministic criterion with source-sensor distance y = 3m. To draw the closest comparison with the diffuse field case, an R of 12.49m was chosen to set the interaction term z_00(ω) to zero. These curves predict similar robustness results to those in Figure 3.3. The reverberation-only zone of equalization also has a radius of 0.08λ.
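The quoted zone-of-equalization radius can be recovered from the reverberation-only error κ_P(r; ω) = 1 − [sinc(kr)]² by finding where the error first reaches −10dB. A bisection sketch (Python/NumPy; an illustrative check, not from the thesis):

```python
import numpy as np

def kappa(x):
    """Reverberation-only equalizer error; x = k r = 2 pi r / lambda."""
    return 1.0 - np.sinc(x / np.pi) ** 2     # np.sinc(t) = sin(pi t)/(pi t)

target = 10.0 ** (-10.0 / 10.0)              # -10 dB error threshold

# kappa increases monotonically on (0, pi): bisect for the first crossing
lo, hi = 1e-9, np.pi
for _ in range(80):
    mid = 0.5 * (lo + hi)
    if kappa(mid) < target:
        lo = mid
    else:
        hi = mid
zone_radius = 0.5 * (lo + hi) / (2 * np.pi)  # in wavelengths, r / lambda
```

This yields r/λ ≈ 0.089, in line with the quoted zone of equalization of roughly 0.08λ.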
Single Directional Microphone

In this section, we consider the improvement in the robustness of equalization that can be made using a directional microphone. Owing to the axial symmetry of a typical microphone, its directivity function usually also has axial symmetry. Without loss of generality, assign this symmetry along the z-axis, so that D(φ̂; ω) depends only on the polar angle θ,
[Table 3.2: The directivity factor ε (dB) and the non-zero directional microphone coefficients η_0, η_1, η_2 for the second order cardioid, dipole, supercardioid and hypercardioid designs of [25].]

i.e. D(φ̂; ω) = D(θ; ω). With this symmetry, any D(θ; ω) can be written as the Legendre polynomial expansion,

    D(θ; ω) = Σ_{n=0}^{∞} η_n(ω) P_n(cos θ),    (3.30)

where the η_n(ω) are basis function coefficients and P_n(·) is the Legendre polynomial of degree n. This expansion is useful, as it can represent the directivity of traditional directional microphone designs with only a small number of coefficients. For example, the directivity patterns of several second order microphone designs described by Elko [25] are presented in Figure 3.1, with the associated η_n(ω) coefficients in Table 3.2. The ς_n0(ω) coefficients are determined from these η_n(ω) in a straightforward manner through substitution of (3.30) into (3.18b). Because of the axial symmetry, only the ς_n0(ω) coefficients are nonzero. For D(θ; ω) truncated to n ≤ N_T in its η_n(ω) coefficients, D(θ; ω) is a polynomial of degree N_T in cos θ. Consequently |D(θ; ω)|² is a polynomial of degree 2N_T.

In Figure 3.5(a), the statistical criterion is plotted for the Elko designs for a DRR of 3dB. As this figure shows, simply using one of these directional microphones instead of an omnidirectional microphone at least doubles the effective range of equalized frequencies. The hypercardioid design performs best, mainly due to its high directivity index. The deterministic criterion is plotted for the same microphone designs in Figure 3.5(b), in a soundfield described by (3.28) with the spherical shell geometry (where ξ_r(φ̂; ω) ≡ 1), again with y = 3m and R = 12.49m. As shown, most microphone designs yield a significant performance increase. A comment must be made on the dips in the cardioid and supercardioid curves of Figure 3.5(b).
For the deterministic error criterion, it is possible for the error κ_Υ(r; ω) at particular locations to be nonzero, yet for the average error κ̄_Υ(r; ω) to be zero. We should not infer from these curves that the robustness will improve as the sensor moves away from the original point of equalization, because in general it will not.

To further explore the improvements in the robustness of directional microphones, consider the case of an ideal directional microphone. Define the microphone
[Figure 3.5: Evaluation of error criteria for the second order directional microphone designs. (a) Statistical criterion for a sensor in a diffuse field with a DRR of 3dB. (b) Deterministic criterion for a sensor in the realization ξ_r(φ̂; ω) ≡ 1 of the diffuse field with a DRR of 6dB.]

directivity function D(φ̂; ω) so that sound is accepted from within a conical sector of half cone angle θ_c but rejected outside that sector (as in Figure 2.6(b)). By the corresponding Observation of Chapter 2, the equivalent soundfield is precisely that created by an omnidirectional sensor in the conical sector geometry, so the soundfield coefficients β^(r)_nm(ω) are those presented in row 2 of Table 2.1. Figure 3.6 plots the deterministic criterion when the DRR γ_M(0; ω) is held constant while the cone angle is varied, with the direct source placed at (0, 0, y). Using these plots, we are able to compare the effects of reducing the cone angle, which
[Figure 3.6: Average equalizer error for several directional microphones in the spherical shell reverberant source configuration at 1kHz, with the DRR γ_M(0; ω) held constant. Source-sensor distance y is 3m and γ_P(0; ω) is 6dB.]

increases the coherence of the field, and of increasing the DRR γ_P(0; ω). The effect of both is to improve robustness. However, Figure 3.6 shows that the effect of increasing coherence is considerably stronger than that of simply increasing the DRR.

Array of Omnidirectional Sensors

For an N-element array of omnidirectional sensors (D_n(φ̂; ω) ≡ 1 for n = 1, 2, ..., N), from (3.14) the beamformer directivity function is given by

    D(φ̂; ω) = Σ_{n=1}^{N} W_n*(ω) e^{ik φ̂·x_n}.    (3.31)

To deduce the robustness properties we require the spherical harmonic expansion in φ̂ of the squared magnitude of the directivity function:

    |D(φ̂; ω)|² = Σ_{m=1}^{N} Σ_{n=1}^{N} W_m*(ω) W_n(ω) e^{ik φ̂·(x_m − x_n)}.    (3.32)

Substituting in the Jacobi-Anger expansion (2.25) and rearranging the order of terms,

    |D(φ̂; ω)|² = Σ_{q=0}^{∞} Σ_{p=−q}^{q} { i^q Σ_{m=1}^{N} Σ_{n=1}^{N} W_m*(ω) W_n(ω) j_q(k|u_mn|) [Y_q^p(û_mn)]* } Y_q^p(φ̂),    (3.33)
where u_mn ≜ x_m − x_n and û_mn = u_mn/|u_mn|. Comparing (3.33) with (3.18a), we identify that

    ς_qp(ω) = i^q Σ_{m=1}^{N} Σ_{n=1}^{N} W_m*(ω) W_n(ω) j_q(k|u_mn|) [Y_q^p(û_mn)]*.    (3.34)

Substituting (3.34) into (3.19) and rearranging the order of the terms yields

    E{B_r(x; ω) B_r*(x + r; ω)} = 4π E{|ξ_r(ω)|²} Σ_{q=0}^{∞} Σ_{p=−q}^{q} Σ_{m=1}^{N} Σ_{n=1}^{N} W_m*(ω) W_n(ω) (−1)^q j_q(k|u_mn|) j_q(k|r|) [Y_q^p(û_mn)]* Y_q^p(r̂),

where, for each m and n, the sum over q and p together with the factor 4π equals j_0(k|u_mn + r|); this identification came from [111, p. 366]. Substituting this into (3.5), the spatial correlation is given by:

    ρ_B(r; ω) = Σ_{m=1}^{N} Σ_{n=1}^{N} W_m*(ω) W_n(ω) j_0(k|x_m − x_n + r|) / Σ_{m=1}^{N} Σ_{n=1}^{N} W_m*(ω) W_n(ω) j_0(k|x_m − x_n|).

The spatial correlation is a function of the diffuse field correlation between sensor locations x_m and x_n, j_0(k|x_m − x_n|), as well as the diffuse field correlation between the perturbed sensor locations x_m + r and the unperturbed sensor locations x_n, j_0(k|x_m − x_n + r|).

We now study the robustness of several delay-and-sum uniform linear arrays for a perturbation along the axis of the array. The arrays are oriented along the x-axis and focussed on a source at broadside; that is, we set x_n = [nd, 0, 0]^T and W_n(ω) = 1 for n = 1, 2, ..., N, for various sensor spacings d and numbers of sensors N. The direct-to-reverberant ratio has been set to 3dB. In Figure 3.7(a) we plot the statistical criterion for a ULA with 5cm spacing, varying the number of sensors N. One can see significant improvement from using a ULA, extending the useful frequency range from 1800Hz in the single sensor case to 6800Hz in the limit of large numbers of sensors. It is interesting to observe from Figure 3.7(a) that the improvements to robustness obtainable by increasing N are not unbounded. Improvements are also strongly dependent on sensor spacing: 6800Hz is the point at which the wavelength of the sound equals the sensor spacing. In Figure 3.7(b) the statistical criterion is plotted for a 16-element array with the sensor spacing d varied.
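The beamformed spatial correlation ρ_B derived above can be evaluated directly. A sketch (Python/NumPy, not part of the thesis) for the delay-and-sum ULA just described, checking that a single sensor recovers the diffuse-field correlation sinc(kr):

```python
import numpy as np

def j0(x):
    """Spherical Bessel function j0(x) = sin(x)/x, with j0(0) = 1."""
    return np.sinc(x / np.pi)

def rho_ula(f, N, d, r_vec, c=342.0):
    """Beamformed diffuse-field spatial correlation rho_B for a
    delay-and-sum ULA with unit weights W_n = 1 and elements at
    x_n = [n d, 0, 0], for a sensor perturbation r_vec."""
    k = 2 * np.pi * f / c
    x = np.array([[n * d, 0.0, 0.0] for n in range(1, N + 1)])
    num = den = 0.0
    for xm in x:
        for xn in x:
            u = xm - xn
            num += j0(k * np.linalg.norm(u + r_vec))
            den += j0(k * np.linalg.norm(u))
    return num / den

r_vec = np.array([0.02, 0.0, 0.0])           # 2 cm perturbation along the array axis
rho_single = rho_ula(1000.0, 1, 0.05, r_vec)  # single sensor: reduces to sinc(kr)
rho_array = rho_ula(1000.0, 16, 0.05, r_vec)  # 16-element ULA, 5 cm spacing
```

Sweeping `f`, `N` and `d` in this routine reproduces the qualitative trends of Figure 3.7.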
In each case, the statistical criterion error climbs rapidly at the frequency f = c/d. The explanation of this phenomenon is as follows. In the broadside case, f is the frequency at which spatial aliasing begins to occur: side lobes begin to appear at this frequency. These side lobes in general cause not only a drop in array gain but
also a drop in the beamformed spatial correlation. Such a phenomenon encourages one to design arrays with small sensor spacings. For example, although not shown in Figure 3.7(b), the effective range of equalized frequencies for the 16-element array with d = 2.5cm is 13kHz, an order of magnitude improvement over the single sensor case for a 2cm perturbation. To capture the 20Hz-20kHz frequency range, we would want sensors no more than 1.7cm apart.

3.5.2 Study of Dependence on Field Geometry Parameters

This section studies the robustness of magnitude response equalization for an omnidirectional sensor as a function of the geometric parameters of the reverberant field. We explore the equalizer error as a function of the angular spread of the reverberant sources. We also calculate the error in a more realistic acoustical scenario created by the image-source model, in the case that walls are successively removed.

Reverberant Source Geometry

We first compare the quantitative differences in the robustness of magnitude response equalization created by arranging the reverberant sources in different geometries. We compare the robustness of the conical sector geometry {(R, θ, φ) : 0 < θ < θ_c, 0 < φ < 2π} and the spherical slice geometry {(R, θ, φ) : 0 < θ < π, −φ_c < φ < φ_c}. We study the robustness as the cone angle θ_c of the conical sector is varied, and choose φ_c = π(1 − cos θ_c)/2 so that the two geometries subtend the same solid angle, making the DRR the same for the corresponding spherical slice case. The direct source has been positioned in the center of both reverberant source geometries, at (0, 0, y) and (y, 0, 0) respectively, for y = 3m and R = 12.49m. The robustness error curves are plotted in Figure 3.8. The conical sector yields the better robustness of equalization results, since its reverberant sources are more tightly concentrated. Greater concentration of reverberant sources produces a field with a greater coherence.
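The solid-angle bookkeeping behind these geometry comparisons is easy to verify: a conical sector of half cone angle θ_c subtends Ω = 2π(1 − cos θ_c), so the 60° cone of Figure 3.8 covers exactly a quarter of the sphere. The matching slice half-angle below assumes the slice parameterization given above, with solid angle 4φ_c (an assumption of this sketch, Python/NumPy):

```python
import numpy as np

def cone_solid_angle(theta_c):
    """Solid angle of a conical sector of half cone angle theta_c (radians)."""
    return 2 * np.pi * (1 - np.cos(theta_c))

def matching_slice_half_angle(theta_c):
    """Half-angle phi_c of the slice {0 < theta < pi, -phi_c < phi < phi_c},
    assumed to have solid angle 4*phi_c, chosen so the slice subtends the
    same solid angle (hence the same DRR) as the cone."""
    return cone_solid_angle(theta_c) / 4.0

quarter = cone_solid_angle(np.deg2rad(60)) / (4 * np.pi)   # 0.25: quarter sphere
phi_c_45 = matching_slice_half_angle(np.deg2rad(45))       # slice matching the 45 deg cone
```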
Figure 3.8 also provides estimates of the extent of the improvement that can be attained by cutting out a proportion of the reverberation in a room. Based on the spherical slice results, we expect only a minor improvement of 25% in the zone of equalization radius for the half sphere configuration. However, if most of the reverberation is eliminated, we expect a much more significant improvement: for the 1/4 sphere, we obtain an 80% improvement.

Image-Source Method - Successive Removal of Walls

We now examine the changes in the robustness of equalization in a field as the walls of the room are successively removed. To perform this study, we model
[Figure 3.7: Statistical error curves for a ULA (a) with 5cm element spacing and the number of sensors varied, and (b) with 16 sensors and the sensor spacing varied. In each case the array is focussed on a source at broadside, for a 2cm perturbation.]
[Figure 3.8: Equalizer error for magnitude response equalization in several configurations of reverberant sources at 1kHz: full sphere, half sphere, 1/4 sphere / 60° cone, and slice / 45° cone, each at its corresponding DRR γ. The solid line is for the conical sector and the broken line for the spherical slice. Source-sensor distance y is 3m.]

reverberation with the image-source model [4]. We exploit the modal expansion of the image-source pressure in (2.35), which allows a closed-form solution to (3.2). To keep computation time low, the image-source method was limited to first, second and third order images only, truncating the infinite series in (2.35) to 64 terms (6 first order terms, 20 second order terms and 38 third order terms). We compare the error curves of a 6-wall room, a 5-wall room, a 4-wall room and a 3-wall room. Because the soundfield depends on which walls are removed, the curves have been averaged over all possible permutations of walls (6 permutations for the 5-wall room, 15 for the 4-wall room and 20 for the 3-wall room). The equalizer error curves were averaged on the dB scale over N = 100 random sets of source and sensor positions, with the source-to-sensor distance fixed at 3m. Figure 3.9 shows a plot of the equalizer error for a room of dimensions m, a reflection coefficient of 0.84 and a frequency of 1kHz. From Figure 3.9, a 13% improvement in the radius of the zone of equalization was obtained with 5 walls, a 26% improvement with 4 walls, and a 47% improvement with 3 walls. These improvements in equalization robustness correspond to the boost in DRR caused by the removal of image-sources.

3.6 Conclusion and Future Research

The results of this chapter agree with what has been commonly known about equalization for many years. Using a more refined analysis with more accurate error criteria and a more general treatment, we have more rigorously shown that acoustic
Acoustic Signal Processing Algorithms for Reverberant Environments. Terence Betlehem, B.Sc. B.E.(Hons), ANU, November 2005. A thesis submitted for the degree of Doctor of Philosophy of The Australian National University.
More informationTHEORY AND DESIGN OF HIGH ORDER SOUND FIELD MICROPHONES USING SPHERICAL MICROPHONE ARRAY
THEORY AND DESIGN OF HIGH ORDER SOUND FIELD MICROPHONES USING SPHERICAL MICROPHONE ARRAY Thushara D. Abhayapala, Department of Engineering, FEIT & Department of Telecom Eng, RSISE The Australian National
More informationLIMITATIONS AND ERROR ANALYSIS OF SPHERICAL MICROPHONE ARRAYS. Thushara D. Abhayapala 1, Michael C. T. Chan 2
ICSV4 Cairns Australia 9-2 July, 2007 LIMITATIONS AND ERROR ANALYSIS OF SPHERICAL MICROPHONE ARRAYS Thushara D. Abhayapala, Michael C. T. Chan 2 Research School of Information Sciences and Engineering
More informationActive noise control in a pure tone diffuse sound field. using virtual sensing. School of Mechanical Engineering, The University of Adelaide, SA 5005,
Active noise control in a pure tone diffuse sound field using virtual sensing D. J. Moreau, a) J. Ghan, B. S. Cazzolato, and A. C. Zander School of Mechanical Engineering, The University of Adelaide, SA
More informationUSING STATISTICAL ROOM ACOUSTICS FOR ANALYSING THE OUTPUT SNR OF THE MWF IN ACOUSTIC SENSOR NETWORKS. Toby Christian Lawin-Ore, Simon Doclo
th European Signal Processing Conference (EUSIPCO 1 Bucharest, Romania, August 7-31, 1 USING STATISTICAL ROOM ACOUSTICS FOR ANALYSING THE OUTPUT SNR OF THE MWF IN ACOUSTIC SENSOR NETWORKS Toby Christian
More informationProceedings of Meetings on Acoustics
Proceedings of Meetings on Acoustics Volume 19, 213 http://acousticalsociety.org/ ICA 213 Montreal Montreal, Canada 2-7 June 213 Architectural Acoustics Session 1pAAa: Advanced Analysis of Room Acoustics:
More informationA wavenumber approach to characterizing the diffuse field conditions in reverberation rooms
PROCEEDINGS of the 22 nd International Congress on Acoustics Isotropy and Diffuseness in Room Acoustics: Paper ICA2016-578 A wavenumber approach to characterizing the diffuse field conditions in reverberation
More informationCepstral Deconvolution Method for Measurement of Absorption and Scattering Coefficients of Materials
Cepstral Deconvolution Method for Measurement of Absorption and Scattering Coefficients of Materials Mehmet ÇALIŞKAN a) Middle East Technical University, Department of Mechanical Engineering, Ankara, 06800,
More informationDIRECTION-of-arrival (DOA) estimation and beamforming
IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 47, NO. 3, MARCH 1999 601 Minimum-Noise-Variance Beamformer with an Electromagnetic Vector Sensor Arye Nehorai, Fellow, IEEE, Kwok-Chiang Ho, and B. T. G. Tan
More informationr p = r o r cos( φ ) cos( α )
Section 4. : Sound Radiation Pattern from the Mouth of a Horn In the previous section, the acoustic impedance at the mouth of a horn was calculated. Distributed simple sources were used to model the mouth
More informationarxiv: v1 [cs.sd] 30 Oct 2015
ACE Challenge Workshop, a satellite event of IEEE-WASPAA 15 October 18-1, 15, New Paltz, NY ESTIMATION OF THE DIRECT-TO-REVERBERANT ENERGY RATIO USING A SPHERICAL MICROPHONE ARRAY Hanchi Chen, Prasanga
More informationFOR many applications in room acoustics, when a microphone
IEEE TRANSACTIONS ON SPEECH AND AUDIO PROCESSING, VOL. 8, NO. 3, MAY 2000 311 Equalization in an Acoustic Reverberant Environment: Robustness Results Biljana D. Radlović, Student Member, IEEE, Robert C.
More informationAFMG. Focus Your Sub Arrays! AHNERT FEISTEL MEDIA GROUP. PLS 2017, Frankfurt
Why Making Subwoofer Arrays? Design Principles Typical Arrays The Right Tools in EASE Focus 3 www..eu 2 Why Making Subwoofer Arrays? 63 Hz, 1/3 Oct. Typical L-R setup Large frequency response variation
More informationIntroduction to Acoustics Exercises
. 361-1-3291 Introduction to Acoustics Exercises 1 Fundamentals of acoustics 1. Show the effect of temperature on acoustic pressure. Hint: use the equation of state and the equation of state at equilibrium.
More informationSpherical Waves, Radiator Groups
Waves, Radiator Groups ELEC-E5610 Acoustics and the Physics of Sound, Lecture 10 Archontis Politis Department of Signal Processing and Acoustics Aalto University School of Electrical Engineering November
More informationAPPLICATION OF MVDR BEAMFORMING TO SPHERICAL ARRAYS
AMBISONICS SYMPOSIUM 29 June 2-27, Graz APPLICATION OF MVDR BEAMFORMING TO SPHERICAL ARRAYS Anton Schlesinger 1, Marinus M. Boone 2 1 University of Technology Delft, The Netherlands (a.schlesinger@tudelft.nl)
More informationSPEECH ANALYSIS AND SYNTHESIS
16 Chapter 2 SPEECH ANALYSIS AND SYNTHESIS 2.1 INTRODUCTION: Speech signal analysis is used to characterize the spectral information of an input speech signal. Speech signal analysis [52-53] techniques
More informationADAPTIVE ANTENNAS. SPATIAL BF
ADAPTIVE ANTENNAS SPATIAL BF 1 1-Spatial reference BF -Spatial reference beamforming may not use of embedded training sequences. Instead, the directions of arrival (DoA) of the impinging waves are used
More informationSound Source Tracking Using Microphone Arrays
Sound Source Tracking Using Microphone Arrays WANG PENG and WEE SER Center for Signal Processing School of Electrical & Electronic Engineering Nanayang Technological Univerisy SINGAPORE, 639798 Abstract:
More informationA R T A - A P P L I C A T I O N N O T E
Loudspeaker Free-Field Response This AP shows a simple method for the estimation of the loudspeaker free field response from a set of measurements made in normal reverberant rooms. Content 1. Near-Field,
More informationHeadphone Auralization of Acoustic Spaces Recorded with Spherical Microphone Arrays. Carl Andersson
Headphone Auralization of Acoustic Spaces Recorded with Spherical Microphone Arrays Carl Andersson Department of Civil Engineering Chalmers University of Technology Gothenburg, 2017 Master s Thesis BOMX60-16-03
More information2 Voltage Potential Due to an Arbitrary Charge Distribution
Solution to the Static Charge Distribution on a Thin Wire Using the Method of Moments James R Nagel Department of Electrical and Computer Engineering University of Utah, Salt Lake City, Utah April 2, 202
More informationUncertainty Principle Applied to Focused Fields and the Angular Spectrum Representation
Uncertainty Principle Applied to Focused Fields and the Angular Spectrum Representation Manuel Guizar, Chris Todd Abstract There are several forms by which the transverse spot size and angular spread of
More informationINVERSION ASSUMING WEAK SCATTERING
INVERSION ASSUMING WEAK SCATTERING Angeliki Xenaki a,b, Peter Gerstoft b and Klaus Mosegaard a a Department of Applied Mathematics and Computer Science, Technical University of Denmark, 2800 Kgs. Lyngby,
More informationProbability Space. J. McNames Portland State University ECE 538/638 Stochastic Signals Ver
Stochastic Signals Overview Definitions Second order statistics Stationarity and ergodicity Random signal variability Power spectral density Linear systems with stationary inputs Random signal memory Correlation
More informationThe Design of Precisely Coincident Microphone Arrays for Stereo and Surround Sound
The Design of Precisely Coincident Microphone Arrays for Stereo and Surround Sound Michael A. Gerzon Mathematical Institute, University of Oxford, England Tuesday 4 March 1975 So-called coincident microphone
More informationStudy and Design of Differential Microphone Arrays
Springer Topics in Signal Processing 6 Study and Design of Differential Microphone Arrays Bearbeitet von Jacob Benesty, Chen Jingdong. Auflage 2. Buch. viii, 84 S. Hardcover ISBN 978 3 642 33752 9 Format
More informationPoles and Zeros in z-plane
M58 Mixed Signal Processors page of 6 Poles and Zeros in z-plane z-plane Response of discrete-time system (i.e. digital filter at a particular frequency ω is determined by the distance between its poles
More informationData-based Binaural Synthesis Including Rotational and Translatory Head-Movements
Data-based Binaural Synthesis Including Rotational and Translatory Head-Movements Frank Schultz 1 and Sascha Spors 1 1 Institute of Communications Engineering, Universität Rostock, R.-Wagner-Str. 31 (H8),
More informationPlane-wave decomposition of acoustical scenes via spherical and cylindrical microphone arrays
Plane-wave decomposition of acoustical scenes via spherical and cylindrical microphone arrays 1 Dmitry N. Zotkin*, Ramani Duraiswami, Nail A. Gumerov Perceptual Interfaces and Reality Laboratory Institute
More information1. Electricity and Magnetism (Fall 1995, Part 1) A metal sphere has a radius R and a charge Q.
1. Electricity and Magnetism (Fall 1995, Part 1) A metal sphere has a radius R and a charge Q. (a) Compute the electric part of the Maxwell stress tensor T ij (r) = 1 {E i E j 12 } 4π E2 δ ij both inside
More informationChapter 2 Fourier Series Phase Object Spectra
PhD University of Edinburgh 99 Phase-Only Optical Information Processing D. J. Potter Index Chapter 3 4 5 6 7 8 9 Chapter Fourier Series Phase Object Spectra In chapter one, it was noted how one may describe
More informationChapter 5. Effects of Photonic Crystal Band Gap on Rotation and Deformation of Hollow Te Rods in Triangular Lattice
Chapter 5 Effects of Photonic Crystal Band Gap on Rotation and Deformation of Hollow Te Rods in Triangular Lattice In chapter 3 and 4, we have demonstrated that the deformed rods, rotational rods and perturbation
More informationAnalysis and synthesis of room reverberation based on a statistical time-frequency model
Analysis and synthesis of room reverberation based on a statistical time-frequency model Jean-Marc Jot, Laurent Cerveau, Olivier Warusfel IRCAM. 1 place Igor-Stravinsky. F-75004 Paris, France. Tel: (+33)
More informationNoise Robust Isolated Words Recognition Problem Solving Based on Simultaneous Perturbation Stochastic Approximation Algorithm
EngOpt 2008 - International Conference on Engineering Optimization Rio de Janeiro, Brazil, 0-05 June 2008. Noise Robust Isolated Words Recognition Problem Solving Based on Simultaneous Perturbation Stochastic
More informationDesign of Pressure Vessel Pads and Attachments To Minimize Global Stress Concentrations
Transactions, SMiRT 9, Toronto, August 007 Design of Pressure Vessel Pads and Attachments To Minimize Global Stress Concentrations Gordon S. Bjorkman ) ) Spent Fuel Storage and Transportation Division,
More informationChapter 2 Acoustical Background
Chapter 2 Acoustical Background Abstract The mathematical background for functions defined on the unit sphere was presented in Chap. 1. Spherical harmonics played an important role in presenting and manipulating
More informationNotes: Most of the material presented in this chapter is taken from Jackson, Chap. 2, 3, and 4, and Di Bartolo, Chap. 2. 2π nx i a. ( ) = G n.
Chapter. Electrostatic II Notes: Most of the material presented in this chapter is taken from Jackson, Chap.,, and 4, and Di Bartolo, Chap... Mathematical Considerations.. The Fourier series and the Fourier
More informationAOL Spring Wavefront Sensing. Figure 1: Principle of operation of the Shack-Hartmann wavefront sensor
AOL Spring Wavefront Sensing The Shack Hartmann Wavefront Sensor system provides accurate, high-speed measurements of the wavefront shape and intensity distribution of beams by analyzing the location and
More informationBohr & Wheeler Fission Theory Calculation 4 March 2009
Bohr & Wheeler Fission Theory Calculation 4 March 9 () Introduction The goal here is to reproduce the calculation of the limiting Z /A against spontaneous fission Z A lim a S. (.) a C as first done by
More informationDIRECTION ESTIMATION BASED ON SOUND INTENSITY VECTORS. Sakari Tervo
7th European Signal Processing Conference (EUSIPCO 9) Glasgow, Scotland, August 4-8, 9 DIRECTION ESTIMATION BASED ON SOUND INTENSITY VECTORS Sakari Tervo Helsinki University of Technology Department of
More informationLinear Regression and Its Applications
Linear Regression and Its Applications Predrag Radivojac October 13, 2014 Given a data set D = {(x i, y i )} n the objective is to learn the relationship between features and the target. We usually start
More informationThe 3 dimensional Schrödinger Equation
Chapter 6 The 3 dimensional Schrödinger Equation 6.1 Angular Momentum To study how angular momentum is represented in quantum mechanics we start by reviewing the classical vector of orbital angular momentum
More informationMulti Acoustic Prediction Program (MAPP tm ) Recent Results Perrin S. Meyer and John D. Meyer
Multi Acoustic Prediction Program (MAPP tm ) Recent Results Perrin S. Meyer and John D. Meyer Meyer Sound Laboratories Inc., Berkeley, California, USA Presented at the Institute of Acoustics (UK), Reproduced
More informationIntroduction to Condensed Matter Physics
Introduction to Condensed Matter Physics Diffraction I Basic Physics M.P. Vaughan Diffraction Electromagnetic waves Geometric wavefront The Principle of Linear Superposition Diffraction regimes Single
More informationLecture 3: Acoustics
CSC 83060: Speech & Audio Understanding Lecture 3: Acoustics Michael Mandel mim@sci.brooklyn.cuny.edu CUNY Graduate Center, Computer Science Program http://mr-pc.org/t/csc83060 With much content from Dan
More informationTRINICON: A Versatile Framework for Multichannel Blind Signal Processing
TRINICON: A Versatile Framework for Multichannel Blind Signal Processing Herbert Buchner, Robert Aichner, Walter Kellermann {buchner,aichner,wk}@lnt.de Telecommunications Laboratory University of Erlangen-Nuremberg
More informationLecture notes 5: Diffraction
Lecture notes 5: Diffraction Let us now consider how light reacts to being confined to a given aperture. The resolution of an aperture is restricted due to the wave nature of light: as light passes through
More informationEIGENFILTERS FOR SIGNAL CANCELLATION. Sunil Bharitkar and Chris Kyriakakis
EIGENFILTERS FOR SIGNAL CANCELLATION Sunil Bharitkar and Chris Kyriakakis Immersive Audio Laboratory University of Southern California Los Angeles. CA 9. USA Phone:+1-13-7- Fax:+1-13-7-51, Email:ckyriak@imsc.edu.edu,bharitka@sipi.usc.edu
More informationDIFFERENTIAL microphone arrays (DMAs) refer to arrays
760 IEEE/ACM TRANSACTIONS ON AUDIO, SPEECH, AND LANGUAGE PROCESSING, VOL. 6, NO. 4, APRIL 018 Frequency-Domain Design of Asymmetric Circular Differential Microphone Arrays Yaakov Buchris, Member, IEEE,
More informationSo far, we have considered three basic classes of antennas electrically small, resonant
Unit 5 Aperture Antennas So far, we have considered three basic classes of antennas electrically small, resonant (narrowband) and broadband (the travelling wave antenna). There are amny other types of
More informationOn the existence of magnetic monopoles
On the existence of magnetic monopoles Ali R. Hadjesfandiari Department of Mechanical and Aerospace Engineering State University of New York at Buffalo Buffalo, NY 146 USA ah@buffalo.edu September 4, 13
More informationPhys 622 Problems Chapter 6
1 Problem 1 Elastic scattering Phys 622 Problems Chapter 6 A heavy scatterer interacts with a fast electron with a potential V (r) = V e r/r. (a) Find the differential cross section dσ dω = f(θ) 2 in the
More informationSpherical harmonic analysis of wavefields using multiple circular sensor arrays
Spherical harmonic analysis of wavefields using multiple circular sensor arrays Thushara D. Abhayapala, Senior Member, IEEE and Aastha Gupta Student Member, IEEE Abstract Spherical harmonic decomposition
More informationLecture 2: Acoustics. Acoustics & sound
EE E680: Speech & Audio Processing & Recognition Lecture : Acoustics 1 3 4 The wave equation Acoustic tubes: reflections & resonance Oscillations & musical acoustics Spherical waves & room acoustics Dan
More informationProblem Set #4: 4.1,4.7,4.9 (Due Monday, March 25th)
Chapter 4 Multipoles, Dielectrics Problem Set #4: 4.,4.7,4.9 (Due Monday, March 5th 4. Multipole expansion Consider a localized distribution of charges described by ρ(x contained entirely in a sphere of
More informationPhysics 505 Fall Homework Assignment #9 Solutions
Physics 55 Fall 25 Textbook problems: Ch. 5: 5.2, 5.22, 5.26 Ch. 6: 6.1 Homework Assignment #9 olutions 5.2 a) tarting from the force equation (5.12) and the fact that a magnetization M inside a volume
More information13 Spherical Coordinates
Utah State University DigitalCommons@USU Foundations of Wave Phenomena Library Digital Monographs 8-204 3 Spherical Coordinates Charles G. Torre Department of Physics, Utah State University, Charles.Torre@usu.edu
More informationf dr. (6.1) f(x i, y i, z i ) r i. (6.2) N i=1
hapter 6 Integrals In this chapter we will look at integrals in more detail. We will look at integrals along a curve, and multi-dimensional integrals in 2 or more dimensions. In physics we use these integrals
More information19th European Signal Processing Conference (EUSIPCO 2011) Barcelona, Spain, August 29 - September 2, 2011
19th European Signal Processing Conference (EUSIPCO 211) Barcelona, Spain, August 29 - September 2, 211 DEVELOPMENT OF PROTOTYPE SOUND DIRECTION CONTROL SYSTEM USING A TWO-DIMENSIONAL LOUDSPEAKER ARRAY
More informationLaboratory synthesis of turbulent boundary layer wall-pressures and the induced vibro-acoustic response
Proceedings of the Acoustics 22 Nantes Conference 23-27 April 22, Nantes, France Laboratory synthesis of turbulent boundary layer wall-pressures and the induced vibro-acoustic response C. Maury a and T.
More informationProceedings of Meetings on Acoustics
Proceedings of Meetings on Acoustics Volume 19, 2013 http://acousticalsociety.org/ ICA 2013 Montreal Montreal, Canada 2-7 June 2013 Signal Processing in Acoustics Session 4aSP: Sensor Array Beamforming
More informationNetwork Theory and the Array Overlap Integral Formulation
Chapter 7 Network Theory and the Array Overlap Integral Formulation Classical array antenna theory focuses on the problem of pattern synthesis. There is a vast body of work in the literature on methods
More informationApplications of Robust Optimization in Signal Processing: Beamforming and Power Control Fall 2012
Applications of Robust Optimization in Signal Processing: Beamforg and Power Control Fall 2012 Instructor: Farid Alizadeh Scribe: Shunqiao Sun 12/09/2012 1 Overview In this presentation, we study the applications
More informationPHIL 50 INTRODUCTION TO LOGIC 1 FREE AND BOUND VARIABLES MARCELLO DI BELLO STANFORD UNIVERSITY DERIVATIONS IN PREDICATE LOGIC WEEK #8
PHIL 50 INTRODUCTION TO LOGIC MARCELLO DI BELLO STANFORD UNIVERSITY DERIVATIONS IN PREDICATE LOGIC WEEK #8 1 FREE AND BOUND VARIABLES Before discussing the derivation rules for predicate logic, we should
More informationThe below identified patent application is available for licensing. Requests for information should be addressed to:
DEPARTMENT OF THE NAVY OFFICE OF COUNSEL NAVAL UNDERSEA WARFARE CENTER DIVISION 1176 HOWELL STREET NEWPORT Rl 02841-1708 IN REPLY REFER TO 31 October 2018 The below identified patent application is available
More informationFFTs in Graphics and Vision. Homogenous Polynomials and Irreducible Representations
FFTs in Graphics and Vision Homogenous Polynomials and Irreducible Representations 1 Outline The 2π Term in Assignment 1 Homogenous Polynomials Representations of Functions on the Unit-Circle Sub-Representations
More information7.2.1 Seismic waves. Waves in a mass- spring system
7..1 Seismic waves Waves in a mass- spring system Acoustic waves in a liquid or gas Seismic waves in a solid Surface waves Wavefronts, rays and geometrical attenuation Amplitude and energy Waves in a mass-
More informationLecture 12. Block Diagram
Lecture 12 Goals Be able to encode using a linear block code Be able to decode a linear block code received over a binary symmetric channel or an additive white Gaussian channel XII-1 Block Diagram Data
More informationRepresentation of sound fields for audio recording and reproduction
Representation of sound fields for audio recording and reproduction F. M. Fazi a, M. Noisternig b and O. Warusfel b a University of Southampton, Highfield, SO171BJ Southampton, UK b Institut de Recherche
More informationElectrodynamics II: Lecture 9
Electrodynamics II: Lecture 9 Multipole radiation Amol Dighe Sep 14, 2011 Outline 1 Multipole expansion 2 Electric dipole radiation 3 Magnetic dipole and electric quadrupole radiation Outline 1 Multipole
More informationCIRCULAR MICROPHONE ARRAY WITH TANGENTIAL PRESSURE GRADIENT SENSORS. Falk-Martin Hoffmann, Filippo Maria Fazi
CIRCULAR MICROPHONE ARRAY WITH TANGENTIAL PRESSURE GRADIENT SENSORS Falk-Martin Hoffmann, Filippo Maria Fazi University of Southampton, SO7 BJ Southampton,United Kingdom fh2u2@soton.ac.uk, ff@isvr.soton.ac.uk
More informationEnhancement of Noisy Speech. State-of-the-Art and Perspectives
Enhancement of Noisy Speech State-of-the-Art and Perspectives Rainer Martin Institute of Communications Technology (IFN) Technical University of Braunschweig July, 2003 Applications of Noise Reduction
More informationFORMULA SHEET FOR QUIZ 2 Exam Date: November 8, 2017
MASSACHUSETTS INSTITUTE OF TECHNOLOGY Physics Department Physics 8.07: Electromagnetism II November 5, 207 Prof. Alan Guth FORMULA SHEET FOR QUIZ 2 Exam Date: November 8, 207 A few items below are marked
More informationGhost Imaging. Josselin Garnier (Université Paris Diderot)
Grenoble December, 014 Ghost Imaging Josselin Garnier Université Paris Diderot http:/www.josselin-garnier.org General topic: correlation-based imaging with noise sources. Particular application: Ghost
More informationI. Rayleigh Scattering. EE Lecture 4. II. Dipole interpretation
I. Rayleigh Scattering 1. Rayleigh scattering 2. Dipole interpretation 3. Cross sections 4. Other approximations EE 816 - Lecture 4 Rayleigh scattering is an approximation used to predict scattering from
More informationElastic Scattering. R = m 1r 1 + m 2 r 2 m 1 + m 2. is the center of mass which is known to move with a constant velocity (see previous lectures):
Elastic Scattering In this section we will consider a problem of scattering of two particles obeying Newtonian mechanics. The problem of scattering can be viewed as a truncated version of dynamic problem
More informationChirp Transform for FFT
Chirp Transform for FFT Since the FFT is an implementation of the DFT, it provides a frequency resolution of 2π/N, where N is the length of the input sequence. If this resolution is not sufficient in a
More informationChapter 22 Gauss s Law. Copyright 2009 Pearson Education, Inc.
Chapter 22 Gauss s Law Electric Flux Gauss s Law Units of Chapter 22 Applications of Gauss s Law Experimental Basis of Gauss s and Coulomb s Laws 22-1 Electric Flux Electric flux: Electric flux through
More informationMoment of inertia. Contents. 1 Introduction and simple cases. January 15, Introduction. 1.2 Examples
Moment of inertia January 15, 016 A systematic account is given of the concept and the properties of the moment of inertia. Contents 1 Introduction and simple cases 1 1.1 Introduction.............. 1 1.
More informationFourier Methods in Array Processing
Cambridge, Massachusetts! Fourier Methods in Array Processing Petros Boufounos MERL 2/8/23 Array Processing Problem Sources Sensors MERL 2/8/23 A number of sensors are sensing a scene. A number of sources
More informationChapter 1. Preliminaries. The purpose of this chapter is to provide some basic background information. Linear Space. Hilbert Space.
Chapter 1 Preliminaries The purpose of this chapter is to provide some basic background information. Linear Space Hilbert Space Basic Principles 1 2 Preliminaries Linear Space The notion of linear space
More informationA family of closed form expressions for the scalar field of strongly focused
Scalar field of non-paraxial Gaussian beams Z. Ulanowski and I. K. Ludlow Department of Physical Sciences University of Hertfordshire Hatfield Herts AL1 9AB UK. A family of closed form expressions for
More informationPoint Vortex Dynamics in Two Dimensions
Spring School on Fluid Mechanics and Geophysics of Environmental Hazards 9 April to May, 9 Point Vortex Dynamics in Two Dimensions Ruth Musgrave, Mostafa Moghaddami, Victor Avsarkisov, Ruoqian Wang, Wei
More informationDigital Signal Processing Lecture 9 - Design of Digital Filters - FIR
Digital Signal Processing - Design of Digital Filters - FIR Electrical Engineering and Computer Science University of Tennessee, Knoxville November 3, 2015 Overview 1 2 3 4 Roadmap Introduction Discrete-time
More informationPlate mode identification using modal analysis based on microphone array measurements
Plate mode identification using modal analysis based on microphone array measurements A.L. van Velsen, E.M.T. Moers, I. Lopez Arteaga, H. Nijmeijer Department mechanical engineering, Eindhoven University
More informationImpulse Response of the Channel with a Spherical Absorbing Receiver and a Spherical Reflecting Boundary
1 Impulse Response of the Channel with a Spherical Absorbing Receiver and a Spherical Reflecting Boundary Fatih inç, Student Member, IEEE, Bayram Cevdet Akdeniz, Student Member, IEEE, Ali Emre Pusane,
More informationModal Beamforming for Small Circular Arrays of Particle Velocity Sensors
Modal Beamforming for Small Circular Arrays of Particle Velocity Sensors Berke Gur Department of Mechatronics Engineering Bahcesehir University Istanbul, Turkey 34349 Email: berke.gur@eng.bau.edu.tr Abstract
More informationElectrodynamics Qualifier Examination
Electrodynamics Qualifier Examination August 15, 2007 General Instructions: In all cases, be sure to state your system of units. Show all your work, write only on one side of the designated paper, and
More informationGAUSSIANIZATION METHOD FOR IDENTIFICATION OF MEMORYLESS NONLINEAR AUDIO SYSTEMS
GAUSSIANIATION METHOD FOR IDENTIFICATION OF MEMORYLESS NONLINEAR AUDIO SYSTEMS I. Marrakchi-Mezghani (1),G. Mahé (2), M. Jaïdane-Saïdane (1), S. Djaziri-Larbi (1), M. Turki-Hadj Alouane (1) (1) Unité Signaux
More informationSpeaker Tracking and Beamforming
Speaker Tracking and Beamforming Dr. John McDonough Spoken Language Systems Saarland University January 13, 2010 Introduction Many problems in science and engineering can be formulated in terms of estimating
More informationDiscriminant Analysis with High Dimensional. von Mises-Fisher distribution and
Athens Journal of Sciences December 2014 Discriminant Analysis with High Dimensional von Mises - Fisher Distributions By Mario Romanazzi This paper extends previous work in discriminant analysis with von
More informationExperimental investigation on varied degrees of sound field diffuseness in enclosed spaces
22 nd International Congress on Acoustics ` Isotropy and Diffuseness in Room Acoustics: Paper ICA2016-551 Experimental investigation on varied degrees of sound field diffuseness in enclosed spaces Bidondo,
More informationECE 604, Lecture 17. October 30, In this lecture, we will cover the following topics: Reflection and Transmission Single Interface Case
ECE 604, Lecture 17 October 30, 2018 In this lecture, we will cover the following topics: Duality Principle Reflection and Transmission Single Interface Case Interesting Physical Phenomena: Total Internal
More informationA far-field based T-matrix method for three dimensional acoustic scattering
ANZIAM J. 50 (CTAC2008) pp.c121 C136, 2008 C121 A far-field based T-matrix method for three dimensional acoustic scattering M. Ganesh 1 S. C. Hawkins 2 (Received 14 August 2008; revised 4 October 2008)
More informationCMPT 889: Lecture 8 Digital Waveguides
CMPT 889: Lecture 8 Digital Waveguides Tamara Smyth, tamaras@cs.sfu.ca School of Computing Science, Simon Fraser University February 10, 2012 1 Motion for a Wave For the string, we are interested in the
More informationIs there a magnification paradox in gravitational lensing?
Is there a magnification paradox in gravitational lensing? Olaf Wucknitz wucknitz@astro.uni-bonn.de Astrophysics seminar/colloquium, Potsdam, 26 November 2007 Is there a magnification paradox in gravitational
More informationMultidimensional digital signal processing
PSfrag replacements Two-dimensional discrete signals N 1 A 2-D discrete signal (also N called a sequence or array) is a function 2 defined over thex(n set 1 of, n 2 ordered ) pairs of integers: y(nx 1,
More informationSEAFLOOR MAPPING MODELLING UNDERWATER PROPAGATION RAY ACOUSTICS
3 Underwater propagation 3. Ray acoustics 3.. Relevant mathematics We first consider a plane wave as depicted in figure. As shown in the figure wave fronts are planes. The arrow perpendicular to the wave
More informationDigital Room Correction
Digital Room Correction Benefits, Common Pitfalls and the State of the Art Lars-Johan Brännmark, PhD Dirac Research AB, Uppsala, Sweden Introduction Introduction Room Correction What is it? 4 What is loudspeaker/room
More information