A Small Footprint i-vector Extractor


Patrick Kenny
Centre de recherche informatique de Montréal (CRIM)

Abstract

Both the memory and computational requirements of algorithms traditionally used to extract i-vectors at run time and to train i-vector extractors off-line scale quadratically in the i-vector dimensionality. We describe a variational Bayes algorithm for calculating i-vectors exactly which converges in a few iterations and whose computational and memory requirements scale linearly rather than quadratically. For typical i-vector dimensionalities, the computational requirements are slightly greater than those of the traditional algorithm. The run time memory requirement is scarcely greater than that needed to store the eigenvoice basis. Because it is an exact method, the variational Bayes algorithm enables the construction of i-vector extractors of much higher dimensionality than has previously been envisaged. We show that modest gains in speaker verification accuracy (as measured by the 2010 NIST detection cost function) can be achieved using high dimensional i-vectors.

1. Introduction

An important recent advance in speaker recognition is the discovery that speech signals of arbitrary duration can be effectively represented by i-vectors of relatively low dimension (with this dimension being independent of the utterance duration) [1]. I-vectors have worked equally well in language recognition [2, 3] and both i-vectors and speaker factors extracted from segments of very short duration (on the order of 1 second) have been successfully used in diarizing telephone conversations [4, 5, 6]. (Speaker factors are extracted in the same way as i-vectors. They differ in the way the data used to estimate the eigenvoice basis is organized.) I-vectors are so easy to work with that they are now the primary representation used in many state of the art speaker recognition systems.
By banishing the time dimension altogether, the i-vector representation enables the speaker recognition problem to be cast as a traditional biometric pattern recognition problem like face recognition. This allows well established techniques such as cosine distance scoring, Linear Discriminant Analysis and Probabilistic Linear Discriminant Analysis (PLDA) to be applied. The advantage of a low, fixed dimensional feature representation is particularly apparent in the case of PLDA, as this can be regarded as a simplified version of Joint Factor Analysis (JFA) which results when each utterance is represented by a single feature vector (rather than by a sequence of cepstral vectors of random duration, as is traditional in speech processing).

The i-vector representation of speech utterances can be viewed as a type of principal components analysis of utterances of arbitrary duration, based on the assumption that each utterance can be modeled by a Gaussian Mixture Model (GMM) and applying probabilistic principal components analysis to the GMM supervectors. Thus the basic assumption is that all utterance supervectors are confined to a low dimensional subspace of the GMM supervector space, so that each supervector is specified by a small number of coordinates. These coordinates can be thought of as representing physical quantities which are constant for a given utterance (such as vocal tract length, room impulse response etc.) but which differ from one utterance to another. The i-vector representation of the utterance is defined by these coordinates.

The standard algorithm for extracting i-vectors is eigenvoice maximum a posteriori (MAP) estimation (Proposition 1 of [7]) and the primary training algorithm used in building an i-vector extractor is the eigenvoice estimation algorithm given in Proposition 3 of [7] (applied in such a way that each utterance is treated as coming from a different speaker). Both the computational and memory requirements of these algorithms scale quadratically in the i-vector dimensionality.
This accounts for the fact that, although principal components analyzers of several thousand dimensions are commonly used in other fields, there are to our knowledge no instances in the current speaker recognition literature of i-vector extractors of dimension greater than 800. For example, an i-vector extractor of dimension 1000 (with a standard configuration of 2048 Gaussians and 60 dimensional acoustic features) requires 8.8 Gb of storage (in double precision) at run time, and twice that amount of memory is needed to train it.

In this paper, we will show how to minimize the memory requirements of i-vector extraction (both at run time and during training) using an iterative variational Bayes algorithm to perform the eigenvoice MAP computation. The algorithm converges very quickly (3 variational Bayes iterations are typical) and no cost in speaker recognition accuracy is incurred. A 1000 dimensional i-vector extractor can be accommodated in less than 1 Gb at run time and the computational overhead is quite modest: the CPU time required to extract a 1000 dimensional i-vector from an utterance is on the order of 1 second, assuming that the Baum-Welch statistics for the utterance are given.

A key aspect of the variational Bayes algorithm is that both the computational and memory requirements scale linearly rather than quadratically in the i-vector dimensionality. This makes it possible to explore the question of whether improvements in speaker recognition accuracy can be obtained using very high dimensional i-vector representations. (It is well known that approximate methods can be used to extract i-vectors at run time without seriously compromising speaker recognition accuracy, but exact computations appear to be needed for training i-vector extractors [8]. This was our principal motivation for developing the variational Bayes method.)
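As a rough back-of-envelope check on these storage figures (our own sketch, not from the paper), the dominant run-time cost of the standard method is the upper triangle of the R × R matrix V_c*V_c for each of the C mixture components, while the variational Bayes method stores only its diagonal:

```python
def runtime_storage_gb(C=2048, F=60, R=1000, bytes_per_entry=8):
    """Approximate run-time storage for i-vector extraction, in GB.

    Standard method: the basis V (C*F*R entries) plus the upper
    triangle of V_c^T V_c (R*(R+1)/2 entries) per mixture component.
    VB method: the basis V plus only diag(V_c^T V_c) per component.
    """
    basis = C * F * R
    standard = (basis + C * R * (R + 1) // 2) * bytes_per_entry
    vb = (basis + C * R) * bytes_per_entry
    return standard / 1e9, vb / 1e9
```

With the configuration quoted above this gives roughly 9 GB for the standard method (the paper's 8.8 Gb figure presumably counts slightly differently) and roughly 1 GB for the VB method.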
We have experimented with i-vectors of dimension as high as 1600 and we will show that minor improvements in the value of the normalized 2010 NIST detection cost function can be obtained using very high dimensional i-vectors.

2. Review of i-vectors

2.1. Notation

The probabilistic model underlying eigenvoice MAP is an extension of probabilistic principal components analysis [9]. We assume that we are given a universal background model (UBM) with C mixture components indexed by c = 1, ..., C. For each mixture component c we denote by w_c, m_c and Σ_c the corresponding mixture weight, mean vector and covariance matrix. We denote by F the dimension of the acoustic feature vectors.

To account for inter-utterance variability, we associate with each utterance an R-dimensional vector y (the i-vector) and with each mixture component c an F × R matrix V_c. For the given utterance, the acoustic feature vectors associated with the mixture component c are supposed to be distributed with mean µ_c and covariance matrix Σ_c, where µ_c = m_c + V_c y. Thus, if we are given an utterance represented as a sequence of frames x_1, ..., x_T and the alignment of frames with mixture components is given, the log likelihood of the utterance is

    Σ_c N_c ln [ 1 / ((2π)^{F/2} |Σ_c|^{1/2}) ] − (1/2) Σ_c Σ_t (x_t − V_c y − m_c)* Σ_c^{−1} (x_t − V_c y − m_c)

where the sum over c extends over all mixture components; the sum over t extends over all frames aligned with the mixture component c; and N_c is the number of such frames. This expression can be evaluated in terms of the first and second order statistics for each mixture component, namely

    F_c = Σ_t x_t,    S_c = Σ_t x_t x_t*.

Since the alignment of frames with mixture components is not in fact given, we use the Baum-Welch statistics instead. These are defined by

    N_c = Σ_t γ_t(c)
    F_c = Σ_t γ_t(c) x_t
    S_c = Σ_t γ_t(c) x_t x_t*

where, for each time t, γ_t(c) is the posterior probability that x_t is generated by the mixture component c, calculated with the UBM. (This choice is intuitively natural and it can be motivated by variational Bayes [10].)

In [7], the only roles played by the second order Baum-Welch statistics are in the calculation of the likelihood function (Proposition 2) and in the estimation of the covariance matrices Σ_c (Proposition 3). In practice, for each mixture component c, m_c and Σ_c can be copied from the UBM and only the matrix V_c needs to be estimated from the training data.
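To make the definitions concrete, here is a minimal numpy sketch (ours, not the paper's) of accumulating the zeroth and first order Baum-Welch statistics for a diagonal covariance UBM; the second order statistics are omitted since they play no essential role in most implementations:

```python
import numpy as np

def baum_welch_stats(X, w, m, sigma):
    """Zeroth and first order Baum-Welch statistics N_c and F_c.

    X: (T, F) acoustic frames; w: (C,) mixture weights;
    m: (C, F) means; sigma: (C, F) diagonal covariances of the UBM.
    """
    T, F = X.shape
    # log [ w_c N(x_t | m_c, Sigma_c) ], shape (T, C)
    log_num = (np.log(w)
               - 0.5 * (F * np.log(2 * np.pi) + np.log(sigma).sum(axis=1))
               - 0.5 * (((X[:, None, :] - m[None, :, :]) ** 2)
                        / sigma[None, :, :]).sum(axis=2))
    log_num -= log_num.max(axis=1, keepdims=True)  # numerical stability
    gamma = np.exp(log_num)
    gamma /= gamma.sum(axis=1, keepdims=True)      # gamma_t(c)
    N = gamma.sum(axis=0)                          # N_c = sum_t gamma_t(c)
    F1 = gamma.T @ X                               # F_c = sum_t gamma_t(c) x_t
    return N, F1
```

Since the occupation probabilities γ_t(c) sum to one over c for each frame, the N_c sum to the number of frames T, a useful sanity check.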
Furthermore, the contribution of the second order statistics can be dropped from the expression for the likelihood function without compromising its usefulness. Thus the second order statistics do not play an essential role in most implementations of i-vector extraction.

2.2. i-vector extraction

Proposition 1 of [7] shows how to calculate the posterior distribution of y for a given utterance on the assumption that the prior is standard normal. (There is no gain in generality in assuming a non-standard normal prior.) The posterior covariance matrix Cov(y, y) and mean vector ⟨y⟩ are given by

    Cov(y, y) = ( I + Σ_c N_c V_c* Σ_c^{−1} V_c )^{−1}
    ⟨y⟩ = Cov(y, y) Σ_c V_c* Σ_c^{−1} (F_c − N_c m_c).    (1)

The usual procedure to alleviate the computational burden is to store the matrices V_c* Σ_c^{−1} V_c and, for each utterance s, use these matrices together with the zeroth order Baum-Welch statistics to evaluate the precision matrix I + Σ_c N_c(s) V_c* Σ_c^{−1} V_c for the given utterance. Because of symmetry, only R(R+1)/2 real numbers (i.e. the upper triangular part of V_c* Σ_c^{−1} V_c) need to be stored for each mixture component. Of course it would be possible to economize on memory by calculating the precision matrix for each utterance from scratch, but this is not done in practice as the computational burden would be excessive.

2.3. Whitening the Baum-Welch statistics

A simplification which can be applied if the mean vectors and covariance matrices m_c and Σ_c are taken as given (rather than estimated in the course of training the i-vector extractor) is that the first order statistics can be pre-whitened by subjecting them to the transformation

    F_c ← L_c^{−1} (F_c − N_c m_c)

where

    L_c L_c* = Σ_c

is the Cholesky decomposition of Σ_c. (Of course changing the Baum-Welch statistics in this way will change the estimates of the matrices V_c produced by the eigenvoice training algorithm summarized below. But these changes cancel each other out in the sense that, for each utterance, the posterior distribution of the hidden variables y remains unchanged. Thus the point estimate of the i-vector, that is, the mean of the posterior, remains unchanged, which is all that we require.)
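Assuming the statistics have been whitened as just described (so that m_c = 0 and Σ_c = I), the exact posterior calculation (1) can be sketched as follows (our illustration; in a real implementation the matrices V_c*V_c would be precomputed once, which is precisely the quadratic memory cost at issue):

```python
import numpy as np

def ivector_posterior(N, F1, V):
    """Exact i-vector posterior of (1) for whitened statistics.

    N: (C,) zeroth order stats; F1: (C, F) whitened first order stats;
    V: (C, F, R) eigenvoice basis, V[c] being the F x R matrix V_c.
    Returns the posterior mean <y> and covariance Cov(y, y).
    """
    C, F, R = V.shape
    # precision = I + sum_c N_c V_c^T V_c
    precision = np.eye(R) + np.einsum('c,cfr,cfs->rs', N, V, V)
    cov = np.linalg.inv(precision)
    mean = cov @ np.einsum('cfr,cf->r', V, F1)  # Cov(y,y) sum_c V_c^T F_c
    return mean, cov
```

Both the einsum over V_c^T V_c and the R × R inverse make the cost quadratic (and worse) in R, which is what the variational Bayes algorithm of Section 3 avoids.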
Performing this transformation enables us to take m_c = 0 and Σ_c = I in all of the equations in [7]. This simplifies the implementation and, more importantly, it facilitates building i-vector extractors which do not have to distinguish between full covariance and diagonal covariance UBMs. (It was shown in [11] that full covariance UBMs outperform diagonal covariance UBMs in speaker recognition.) We will assume henceforth that the Baum-Welch statistics have been whitened.

2.4. Training the i-vector extractor

As for estimating the matrices V_c, suppose we have a training set where, for each utterance s, the first and second order moments ⟨y(s)⟩ and ⟨y(s) y*(s)⟩ have been calculated. The maximum likelihood update formula for V_c is

    V_c = ( Σ_s F_c(s) ⟨y*(s)⟩ ) ( Σ_s N_c(s) ⟨y(s) y*(s)⟩ )^{−1}

where the sums over s extend over all utterances in the training set and, for each utterance s, N_c(s) and F_c(s) are the Baum-Welch statistics of order 0 and 1 for mixture component c. Calculating the first and second moments is just a matter of evaluating the posterior of y(s). For example,

    ⟨y(s) y*(s)⟩ = Cov(y(s), y(s)) + ⟨y(s)⟩ ⟨y*(s)⟩.

As for minimum divergence estimation, the idea is to modify the matrices V_c in such a way as to force the empirical distribution of the i-vectors to conform to the standard normal prior [12, 13]. Let L L* be the Cholesky decomposition of the matrix

    (1/S) Σ_s ⟨y(s) y*(s)⟩

where S is the number of training utterances and the sum extends over all utterances in the training set. The transformations

    V_c ← V_c L,    ⟨y(s)⟩ ← L^{−1} ⟨y(s)⟩

have the desired effect. Both maximum likelihood and minimum divergence estimation increase the likelihood of the training set, where the likelihood is evaluated as in Proposition 2 of [7]. Note that, just as for run-time i-vector extraction, calculating i-vector posteriors is the principal computation for both maximum likelihood and minimum divergence training.

3. The variational Bayes algorithm

We will show how the memory requirements of the posterior calculation can be substantially reduced, at the cost of a modest computational overhead, by working with a basis of the i-vector space with respect to which the i-vector posterior covariance matrices are approximately diagonal, so that the i-vector components are approximately statistically independent in the posterior. The variational Bayes (VB) method is a standard way of enforcing this type of posterior independence assumption. In the case at hand, variational Bayes produces an approximation to the true posterior which is of very high quality in the sense that, at convergence, the mean of the posterior distribution, that is, the point estimate of the i-vector, is calculated exactly. The only inaccuracy is in the posterior covariance matrix, which is only approximately diagonal.
These posterior covariances are used in training the i-vector extractor (not at run time), but the effect of the diagonal approximation is minimal, so that it turns out that using the VB method in training i-vector extractors leads to no degradation in speaker recognition accuracy. If the VB algorithm is initialized properly, very few iterations (typically 3, independently of the i-vector dimension) are needed to obtain accurate i-vector estimates. Because the off-diagonal elements of the posterior covariance matrix are ignored, it follows that the computational and memory requirements of the VB method scale linearly rather than quadratically in the i-vector dimension.

3.1. VB updates

Suppose we are given an utterance represented as a sequence of acoustic feature vectors x_1, ..., x_T, or X for short. We suppose that the prior distribution of y is standard normal. Assume provisionally that it is reasonable to impose diagonal constraints on the posterior covariance matrix of y given X, so that we can write

    y = (y_1, ..., y_R),    Q(y) = Q(y_1) ... Q(y_R)

where Q(y) approximates the true posterior P(y | X). We will return to the question of why this assumption is reasonable in the next section. Meantime, we derive a variational Bayes algorithm to calculate Q(y).

We introduce the following notation. For r = 1, ..., R, we denote the rth column of V_c by V_{cr}, so that

    V_c y = Σ_{r=1}^{R} V_{cr} y_r

and diag(V_c* V_c) is the diagonal matrix with entries V_{c1}* V_{c1}, ..., V_{cR}* V_{cR}. The memory overhead of the VB algorithm is the cost of storing this diagonal matrix for each mixture component, rather than (the upper triangle of) V_c* V_c as required by the standard posterior calculation outlined in Section 2.2.

Following the standard procedure given in [9], the VB update for Q(y_r) is

    ln Q(y_r) = E_{y \ y_r} [ ln P(y, X) ] + constant.

So, using ≅ to indicate equality up to an additive constant, and collecting the terms in y_r,

    ln Q(y_r) ≅ −(1/2) (y_r)^2 + E_{y \ y_r} [ Σ_c ( F_c* V_c y − (1/2) N_c (V_c y)* (V_c y) ) ]
              ≅ −(1/2) ( 1 + Σ_c N_c V_{cr}* V_{cr} ) (y_r)^2 + Σ_c V_{cr}* ( F_c − N_c Σ_{r′ ≠ r} V_{cr′} ⟨y_{r′}⟩ ) y_r.

Letting

    L_r = 1 + Σ_c N_c V_{cr}* V_{cr},

we can read off the posterior expectation and variance of y_r from this expression:

    ⟨y_r⟩ = L_r^{−1} Σ_c V_{cr}* ( F_c − N_c Σ_{r′ ≠ r} V_{cr′} ⟨y_{r′}⟩ )
    Var(y_r) = L_r^{−1}.

A complete VB update iteration consists in applying this for r = 1, ..., R. Readers familiar with the Jacobi method in numerical linear algebra will recognize that successive VB iterations implement the Jacobi method for solving the linear system (1). It is well known (and easy to verify) that, if the Jacobi method converges, then it converges to the solution of the linear system. Convergence in this case is guaranteed by variational Bayes [9]. Thus, if it is run to convergence, the VB method calculates i-vectors exactly.
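A direct, unoptimized transcription of one such sweep (our sketch, assuming whitened statistics; the efficient reformulation follows in the next section) looks like this:

```python
import numpy as np

def vb_sweep_naive(N, F1, V, y):
    """One full VB iteration: update <y_r> for r = 1..R in turn.

    N: (C,) zeroth order stats; F1: (C, F) whitened first order stats;
    V: (C, F, R) eigenvoice basis; y: current posterior means <y_r>.
    Returns the updated means and the variances Var(y_r) = 1 / L_r.
    """
    C, F, R = V.shape
    L = 1.0 + np.einsum('c,cfr,cfr->r', N, V, V)  # L_r = 1 + sum_c N_c V_cr^T V_cr
    y = y.copy()
    for r in range(R):
        y_other = y.copy()
        y_other[r] = 0.0                          # exclude y_r's own contribution
        # F_c - N_c sum_{r' != r} V_cr' <y_r'>, for every component c
        resid = F1 - N[:, None] * np.einsum('cfs,s->cf', V, y_other)
        y[r] = np.einsum('cf,cf->', V[:, :, r], resid) / L[r]
    return y, 1.0 / L
```

Because each coordinate update uses the latest values of the others, and the precision matrix in (1) is symmetric positive definite, repeated sweeps converge to the exact posterior mean, matching the guarantee stated above.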

For efficient implementation, we can use the following reformulation. Set

    R_c = F_c − N_c V_c ⟨y⟩

for each mixture component c (R for remainder). The VB update of ⟨y⟩ consists in performing the following operations for r = 1, ..., R:

    1. R_c ← R_c + N_c ⟨y_r⟩ V_{cr}    (c = 1, ..., C)
    2. ⟨y_r⟩ ← L_r^{−1} Σ_c V_{cr}* R_c
    3. R_c ← R_c − N_c ⟨y_r⟩ V_{cr}    (c = 1, ..., C)

More efficiently (since diag(V_c* V_c) has been precomputed), 1) and 2) can be combined as

    ⟨y_r⟩_new = L_r^{−1} Σ_c V_{cr}* ( R_c + N_c ⟨y_r⟩_old V_{cr} )
              = L_r^{−1} Σ_c V_{cr}* R_c + L_r^{−1} ⟨y_r⟩_old Σ_c N_c V_{cr}* V_{cr}
              = L_r^{−1} Σ_c V_{cr}* R_c + ( 1 − L_r^{−1} ) ⟨y_r⟩_old

so that the VB update of ⟨y⟩ reduces to performing the following operations for r = 1, ..., R:

    1. ⟨y_r⟩_new = L_r^{−1} Σ_c V_{cr}* R_c + ( 1 − L_r^{−1} ) ⟨y_r⟩_old
    2. R_c ← R_c − N_c ( ⟨y_r⟩_new − ⟨y_r⟩_old ) V_{cr}    (c = 1, ..., C)

3.2. VB initialization

The VB algorithm is guaranteed to calculate i-vectors exactly, but a good initialization is needed to ensure that it does so quickly. We are free to postmultiply the matrices V_c by any orthogonal matrix without affecting the assumption that y has a standard normal prior distribution; in particular we can choose a basis of the i-vector space such that the matrix

    Σ_c w_c V_c* V_c

is diagonal where, for each mixture component c, w_c is the corresponding mixture weight (i.e. prior probability). Since, for a given utterance of sufficiently long duration, N_c ≈ N w_c, it follows that the posterior covariance matrix for an utterance, namely Cov(y, y) in (1), is approximately diagonal (as we mentioned in Section 3.1). Note that the quality of the approximation may degrade in the case of very short utterances (for which it may not be the case that N_c ≈ N w_c for each mixture component c).

Thus a reasonable initial estimate of ⟨y⟩ for VB is the approximate solution of (1) obtained by ignoring the off-diagonal elements in Cov(y, y). (Solving a system of equations with a diagonal coefficient matrix is computationally trivial.) Unlike the approximation used in [8], this is only an initial estimate, chosen so as to ensure rapid convergence of the VB algorithm.
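Putting the reformulated sweep and the diagonal initialization together gives the following sketch (ours, not the paper's code; the orthogonal rotation that diagonalizes Σ_c w_c V_c*V_c is assumed to have been applied to V beforehand):

```python
import numpy as np

def vb_ivector(N, F1, V, n_sweeps=5, tol=1e-10):
    """VB i-vector extraction with the remainder reformulation.

    N: (C,) zeroth order stats; F1: (C, F) whitened first order stats;
    V: (C, F, R) eigenvoice basis. Only diag(V_c^T V_c) is kept per
    component, so memory is linear rather than quadratic in R.
    """
    C, F, R = V.shape
    diagVV = np.einsum('cfr,cfr->cr', V, V)        # diag(V_c^T V_c), precomputed
    L = 1.0 + N @ diagVV                           # L_r = 1 + sum_c N_c V_cr^T V_cr
    y = np.einsum('cfr,cf->r', V, F1) / L          # diagonal initial estimate of (1)
    rem = F1 - N[:, None] * np.einsum('cfr,r->cf', V, y)  # R_c = F_c - N_c V_c <y>
    for _ in range(n_sweeps):
        y_prev = y.copy()
        for r in range(R):
            # step 1: combined update of <y_r>
            y_new = (np.einsum('cf,cf->', V[:, :, r], rem) / L[r]
                     + (1.0 - 1.0 / L[r]) * y[r])
            # step 2: update the remainders R_c
            rem -= (N * (y_new - y[r]))[:, None] * V[:, :, r]
            y[r] = y_new
        if np.linalg.norm(y - y_prev) < tol:       # Euclidean norm stopping rule
            break
    return y
```

Each sweep touches every column of V once, so the cost per sweep is O(CFR), linear in the i-vector dimension.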
Note that in [8] the approximation is only used for extracting i-vectors at run time; it turns out to be too crude to use in training i-vector extractors. On the other hand, since the variational Bayes approach to calculating i-vectors is exact, it can be used in off-line training of i-vector extractors. This is the reason why we are able to experiment with i-vectors of very high dimensionality.

3.3. The variational lower bound

The variational Bayes updates are guaranteed to increase the variational lower bound L on ln P(X) defined by

    L = E [ ln ( P(y, X) / Q(y) ) ]

where the expectation is taken with respect to Q(y). Ignoring the contribution of the second order statistics (which does not change from one VB iteration to the next), the variational lower bound can be expressed in terms of the posterior mean and covariance as follows:

    L = Σ_c ⟨V_c y⟩* F_c − (1/2) Σ_c N_c ⟨V_c y⟩* ⟨V_c y⟩ − (1/2) Σ_c tr ( N_c V_c* V_c Cov(y, y) ) − D( Q(y) ‖ P(y) )

where D(Q(y) ‖ P(y)) is the Kullback-Leibler divergence between the posterior Q(y) and the standard normal prior P(y), which (by the formula for the divergence of two multivariate Gaussians [13]) is given by

    D( Q(y) ‖ P(y) ) = −(R/2) − (1/2) ln |Cov(y, y)| + (1/2) tr ( Cov(y, y) + ⟨y⟩ ⟨y⟩* ).

Evaluating the variational lower bound on each VB update turns out to be rather expensive, so in practice it is more useful for troubleshooting than for monitoring convergence of the VB algorithm at run time. (A simple, effective run-time convergence criterion is the Euclidean norm of the difference between successive i-vector estimates, ‖⟨y⟩_new − ⟨y⟩_old‖.)

In constructing an i-vector extractor, the usual criterion used to monitor convergence of the maximum likelihood and minimum divergence training algorithms is the exact likelihood function described in Proposition 2 of [7], whose evaluation requires computing exact posterior covariances. If the posteriors are calculated with the VB algorithm, the appropriate criterion to use is the aggregate variational lower bound, calculated by summing the variational lower bound over all training utterances.
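With a diagonal posterior (Var(y_r) = 1/L_r), both the trace term and the KL divergence reduce to sums over coordinates. A sketch of the per-utterance evaluation (our illustration, for whitened statistics) is:

```python
import numpy as np

def vb_lower_bound(N, F1, V, y, var):
    """Variational lower bound (second order statistics dropped) for a
    diagonal posterior with mean y (R,) and variances var (R,).

    N: (C,) zeroth order stats; F1: (C, F) whitened first order stats;
    V: (C, F, R) eigenvoice basis.
    """
    Vy = np.einsum('cfr,r->cf', V, y)              # <V_c y>
    diagVV = np.einsum('cfr,cfr->cr', V, V)        # diag(V_c^T V_c)
    data_terms = ((Vy * F1).sum()
                  - 0.5 * (N * (Vy ** 2).sum(axis=1)).sum()
                  - 0.5 * (N @ (diagVV * var)).sum())  # tr(N_c V_c^T V_c Cov)
    # KL(Q || standard normal prior), diagonal case
    kl = 0.5 * (var.sum() + y @ y - len(y) - np.log(var).sum())
    return data_terms - kl
```

As a function of y the bound is a concave quadratic maximized at the exact posterior mean, and as a function of var it is maximized at 1/L_r, which is one way to sanity-check an implementation.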
If posteriors are evaluated with the VB method, this criterion is guaranteed to increase from one training iteration to the next (both for maximum likelihood training and minimum divergence training), so it is useful for troubleshooting.

4. Experimental results

4.1. Testbed

We report the results of experiments conducted on the det 2 trials (normal vocal effort telephone speech) in the female portion of the extended core condition of the NIST 2010 speaker recognition evaluation (thus using a larger set of trials than was provided for in the original evaluation plan, SRE10 evalplan.r6.pdf). Results are reported using the following metrics: the equal error rate (EER), the normalized detection cost function used in evaluations prior to 2010 (2008 NDCF) and the normalized detection cost function defined for the 2010 evaluation (2010 NDCF), which severely penalizes false alarms. The purpose of the experiments was to compare the performance of the VB i-vector extractor with that of our earlier JFA-based implementation and to investigate the

question of whether improvements in speaker verification accuracy could be achieved with very high dimensional i-vectors. Speaker verification was carried out with heavy-tailed PLDA, using the front end and data sets for training the UBM, the i-vector extractors and the PLDA classifiers described in [13].

In comparing the VB i-vector extractor with the JFA-based implementation, we used the in-house CRIM voice activity detector (VAD) in extracting Baum-Welch statistics. However, we learned in the course of the BOSARIS workshop at BUT in 2010 that the CRIM voice activity detector did not perform as well as the celebrated BUT Hungarian phoneme recognizer, and BUT kindly made their transcripts available to us for the experiments with very high dimensional i-vectors.

4.2. Accuracy of VB

We evaluated the accuracy of the variational Bayes method by comparing it with our original i-vector implementation, where the i-vector extractor was built using executables written for Joint Factor Analysis. (The scripts were modified in such a way as to ignore the speaker labels attached to recordings in the Switchboard and Mixer training corpora.) We used a standard UBM configuration (2048 diagonal Gaussians, 60 dimensional acoustic feature vectors) to extract Baum-Welch statistics. However, in our JFA implementation, the Baum-Welch statistics were not whitened, and the means and covariances associated with the various mixture components were re-estimated in the course of factor analysis training rather than copied from the UBM (and only UBMs with diagonal covariance matrices were supported).

To test the accuracy of the variational Bayes method we used it in conjunction with the diagonal covariance UBM both to train an i-vector extractor and to calculate the i-vectors at run time. Our implementation in this case used whitened Baum-Welch statistics, so the means and covariances associated with the various mixture components were not re-estimated in the course of training the i-vector extractor. We used 5 iterations of variational Bayes to extract i-vectors.
The i-vector dimensionality was 400 in both cases, reduced to 100 by linear discriminant analysis as in [1, 11], and heavy-tailed PLDA classifiers [13] were used for verification. The results were essentially identical: an equal error rate of 3.1% for the JFA implementation versus 3.0% for VB, and a detection cost of 0.50 for JFA versus 0.49 for VB (where the detection cost was measured with the 2010 NIST cost function).

The slight edge observed for the VB method may seem surprising. It appears to be attributable to the fact that the variances were copied from the UBM in this case but not in the case of the JFA-based implementation. The effect of this copying is to overestimate the variances in the i-vector extractor by about 5%. (Copying results in overestimates since the UBM variances are estimated in a way which takes no account of inter-utterance variability.) It is well known that overestimating variances often proves to be helpful in speech modeling.

4.3. Efficiency of VB

We evaluated the efficiency of the VB method by performing a comparison with the standard approach summarized in (1), using whitened Baum-Welch statistics in both cases. On a 2.40 GHz Intel Xeon CPU, the time taken to extract an i-vector with the standard approach was about 0.5 seconds. Almost all of the time was spent in BLAS routines (in particular, 75% of the computational burden is spent accumulating the covariance matrix in (1)). An estimate of about 0.25 seconds is given in [8], which appears to indicate that compiler optimization might be useful.

For the VB approach we performed 5 VB updates at run time. Under these conditions the time taken to extract an i-vector was 0.9 seconds. Thus, as expected, the VB method is slower (by about a factor of 2 in the case of 400 dimensional i-vectors) but still quite quick compared with the cost of extracting Baum-Welch statistics using a large UBM. In working with higher dimensional i-vectors, we always found that 3-5 variational Bayes iterations were sufficient.
(The exact number of iterations performed was determined by the Euclidean norm stopping criterion mentioned in Section 3.3.) It follows that, like the memory requirements, the computational overhead of the VB method scales linearly rather than quadratically in the i-vector dimension. So the computational advantage of the standard implementation relative to the VB method actually decreases as the i-vector dimension increases.

4.4. High dimensional i-vectors

Since the VB method enables very high dimensional i-vector extractors to be trained with relatively modest computational resources, we conducted some experiments to see if any gains in accuracy could be achieved by increasing the i-vector dimensionality. We used the same 2048 component diagonal UBM as in the previous sections, but we replaced the CRIM VAD with the BUT VAD, which accounts for the lower error rates. In every case we reduced the number of dimensions to 100 by linear discriminant analysis (LDA). Table 1 shows the results of increasing the i-vector dimension from 400 to 1600 in steps of 400. The equal error rates degrade, but there is a minor improvement in the 2010 NDCF.

Table 1: EER / NDCF, female extended core condition, 2010 NIST evaluation. 100 dimensional LDA. Diagonal UBM, 2048 Gaussians. (Columns: i-vector dimension, EER (%), 2008 NDCF, 2010 NDCF; the numerical entries did not survive transcription.)

5. Conclusion

We have shown how a variational Bayes approach enables exact i-vector extraction to be performed in such a way that computational and memory requirements scale linearly rather than quadratically in the i-vector dimensionality. Because it is an exact method, the variational Bayes algorithm can be used in training i-vector extractors as well as at run time. This makes it possible to experiment with i-vectors of much higher dimension than has previously been envisaged, yielding modest improvements in speaker verification accuracy as measured by the NIST 2010 detection cost function.

Acknowledgements

We would like to thank Brno University of Technology for hosting the 2010 BOSARIS workshop where this work was begun.
Special thanks to Ondrej Glembek, who suggested the initialization in Section 3.2.

6. References

[1] N. Dehak, P. Kenny, R. Dehak, P. Dumouchel, and P. Ouellet, "Front-end factor analysis for speaker verification," IEEE Trans. Audio, Speech, Lang. Process., vol. 19, no. 4, May 2011.
[2] N. Dehak, P. Torres-Carrasquillo, D. Reynolds, and R. Dehak, "Language recognition via ivectors and dimensionality reduction," in Proc. Interspeech, Florence, Aug. 2011.
[3] D. Martínez, O. Plchot, L. Burget, O. Glembek, and P. Matějka, "Language recognition in ivectors space," in Proc. Interspeech, Florence, Aug. 2011.
[4] F. Castaldo, D. Colibro, E. Dalmasso, P. Laface, and C. Vair, "Stream-based speaker segmentation using speaker factors and eigenvoices," in Proc. ICASSP, Las Vegas, Nevada, Mar. 2008.
[5] C. Vaquero, A. Ortega, and E. Lleida, "Intra-session variability compensation and a hypothesis generation and selection strategy for speaker segmentation," in Proc. ICASSP, 2011.
[6] S. Shum, N. Dehak, E. Chuangsuwanich, D. Reynolds, and J. Glass, "Exploiting intra-conversation variability for speaker diarization," in Proc. Interspeech, Florence, Aug. 2011.
[7] P. Kenny, G. Boulianne, and P. Dumouchel, "Eigenvoice modeling with sparse training data," IEEE Trans. Speech Audio Processing, vol. 13, no. 3, May 2005.
[8] O. Glembek, L. Burget, P. Matejka, M. Karafiat, and P. Kenny, "Simplification and optimization of i-vector extraction," in Proc. ICASSP, 2011.
[9] C. Bishop, Pattern Recognition and Machine Learning. New York, NY: Springer Science+Business Media, LLC, 2006.
[10] X. Zhao, Y. Dong, J. Zhao, L. Lu, J. Liu, and H. Wang, "Variational Bayesian joint factor analysis for speaker verification," in Proc. ICASSP, Taipei, Taiwan, Apr. 2009.
[11] P. Matejka, O. Glembek, F. Castaldo, J. Alam, O. Plchot, P. Kenny, L. Burget, and J. Cernocky, "Full-covariance UBM and heavy-tailed PLDA in i-vector speaker verification," in Proc. ICASSP, 2011.
[12] P. Kenny, P. Ouellet, N. Dehak, V. Gupta, and P. Dumouchel, "A study of inter-speaker variability in speaker verification," IEEE Trans. Audio, Speech and Lang. Process., vol. 16, no. 5, July 2008.
[13] P. Kenny, "Bayesian speaker verification with heavy-tailed priors," in Proc. Odyssey 2010: The Speaker and Language Recognition Workshop, Brno, Czech Republic, June 2010.


More information

Optimization of Statistical Decisions for Age Replacement Problems via a New Pivotal Quantity Averaging Approach

Optimization of Statistical Decisions for Age Replacement Problems via a New Pivotal Quantity Averaging Approach Amerian Journal of heoretial and Applied tatistis 6; 5(-): -8 Published online January 7, 6 (http://www.sienepublishinggroup.om/j/ajtas) doi:.648/j.ajtas.s.65.4 IN: 36-8999 (Print); IN: 36-96 (Online)

More information

Simplified Buckling Analysis of Skeletal Structures

Simplified Buckling Analysis of Skeletal Structures Simplified Bukling Analysis of Skeletal Strutures B.A. Izzuddin 1 ABSRAC A simplified approah is proposed for bukling analysis of skeletal strutures, whih employs a rotational spring analogy for the formulation

More information

Likelihood-confidence intervals for quantiles in Extreme Value Distributions

Likelihood-confidence intervals for quantiles in Extreme Value Distributions Likelihood-onfidene intervals for quantiles in Extreme Value Distributions A. Bolívar, E. Díaz-Franés, J. Ortega, and E. Vilhis. Centro de Investigaión en Matemátias; A.P. 42, Guanajuato, Gto. 36; Méxio

More information

Measuring & Inducing Neural Activity Using Extracellular Fields I: Inverse systems approach

Measuring & Inducing Neural Activity Using Extracellular Fields I: Inverse systems approach Measuring & Induing Neural Ativity Using Extraellular Fields I: Inverse systems approah Keith Dillon Department of Eletrial and Computer Engineering University of California San Diego 9500 Gilman Dr. La

More information

Developing Excel Macros for Solving Heat Diffusion Problems

Developing Excel Macros for Solving Heat Diffusion Problems Session 50 Developing Exel Maros for Solving Heat Diffusion Problems N. N. Sarker and M. A. Ketkar Department of Engineering Tehnology Prairie View A&M University Prairie View, TX 77446 Abstrat This paper

More information

Case I: 2 users In case of 2 users, the probability of error for user 1 was earlier derived to be 2 A1

Case I: 2 users In case of 2 users, the probability of error for user 1 was earlier derived to be 2 A1 MUTLIUSER DETECTION (Letures 9 and 0) 6:33:546 Wireless Communiations Tehnologies Instrutor: Dr. Narayan Mandayam Summary By Shweta Shrivastava (shwetash@winlab.rutgers.edu) bstrat This artile ontinues

More information

MUSIC GENRE CLASSIFICATION USING LOCALITY PRESERVING NON-NEGATIVE TENSOR FACTORIZATION AND SPARSE REPRESENTATIONS

MUSIC GENRE CLASSIFICATION USING LOCALITY PRESERVING NON-NEGATIVE TENSOR FACTORIZATION AND SPARSE REPRESENTATIONS 10th International Soiety for Musi Information Retrieval Conferene (ISMIR 2009) MUSIC GENRE CLASSIFICATION USING LOCALITY PRESERVING NON-NEGATIVE TENSOR FACTORIZATION AND SPARSE REPRESENTATIONS Yannis

More information

Modeling Probabilistic Measurement Correlations for Problem Determination in Large-Scale Distributed Systems

Modeling Probabilistic Measurement Correlations for Problem Determination in Large-Scale Distributed Systems 009 9th IEEE International Conferene on Distributed Computing Systems Modeling Probabilisti Measurement Correlations for Problem Determination in Large-Sale Distributed Systems Jing Gao Guofei Jiang Haifeng

More information

Transformation to approximate independence for locally stationary Gaussian processes

Transformation to approximate independence for locally stationary Gaussian processes ransformation to approximate independene for loally stationary Gaussian proesses Joseph Guinness, Mihael L. Stein We provide new approximations for the likelihood of a time series under the loally stationary

More information

Sensitivity Analysis in Markov Networks

Sensitivity Analysis in Markov Networks Sensitivity Analysis in Markov Networks Hei Chan and Adnan Darwihe Computer Siene Department University of California, Los Angeles Los Angeles, CA 90095 {hei,darwihe}@s.ula.edu Abstrat This paper explores

More information

Assessing the Performance of a BCI: A Task-Oriented Approach

Assessing the Performance of a BCI: A Task-Oriented Approach Assessing the Performane of a BCI: A Task-Oriented Approah B. Dal Seno, L. Mainardi 2, M. Matteui Department of Eletronis and Information, IIT-Unit, Politenio di Milano, Italy 2 Department of Bioengineering,

More information

Perturbation Analyses for the Cholesky Factorization with Backward Rounding Errors

Perturbation Analyses for the Cholesky Factorization with Backward Rounding Errors Perturbation Analyses for the holesky Fatorization with Bakward Rounding Errors Xiao-Wen hang Shool of omputer Siene, MGill University, Montreal, Quebe, anada, H3A A7 Abstrat. This paper gives perturbation

More information

arxiv: v2 [math.pr] 9 Dec 2016

arxiv: v2 [math.pr] 9 Dec 2016 Omnithermal Perfet Simulation for Multi-server Queues Stephen B. Connor 3th Deember 206 arxiv:60.0602v2 [math.pr] 9 De 206 Abstrat A number of perfet simulation algorithms for multi-server First Come First

More information

V. Interacting Particles

V. Interacting Particles V. Interating Partiles V.A The Cumulant Expansion The examples studied in the previous setion involve non-interating partiles. It is preisely the lak of interations that renders these problems exatly solvable.

More information

Infomax Boosting. Abstract. 1. Introduction. 2. Infomax feature pursuit. Siwei Lyu Department of Computer Science Dartmouth College

Infomax Boosting. Abstract. 1. Introduction. 2. Infomax feature pursuit. Siwei Lyu Department of Computer Science Dartmouth College Infomax Boosting Siwei Lyu Department of Computer Siene Dartmouth College Abstrat In this paper, we desribed an effiient feature pursuit sheme for boosting. The proposed method is based on the infomax

More information

4.3 Singular Value Decomposition and Analysis

4.3 Singular Value Decomposition and Analysis 4.3 Singular Value Deomposition and Analysis A. Purpose Any M N matrix, A, has a Singular Value Deomposition (SVD) of the form A = USV t where U is an M M orthogonal matrix, V is an N N orthogonal matrix,

More information

Variation Based Online Travel Time Prediction Using Clustered Neural Networks

Variation Based Online Travel Time Prediction Using Clustered Neural Networks Variation Based Online Travel Time Predition Using lustered Neural Networks Jie Yu, Gang-Len hang, H.W. Ho and Yue Liu Abstrat-This paper proposes a variation-based online travel time predition approah

More information

Lecture 7: Sampling/Projections for Least-squares Approximation, Cont. 7 Sampling/Projections for Least-squares Approximation, Cont.

Lecture 7: Sampling/Projections for Least-squares Approximation, Cont. 7 Sampling/Projections for Least-squares Approximation, Cont. Stat60/CS94: Randomized Algorithms for Matries and Data Leture 7-09/5/013 Leture 7: Sampling/Projetions for Least-squares Approximation, Cont. Leturer: Mihael Mahoney Sribe: Mihael Mahoney Warning: these

More information

Computer Science 786S - Statistical Methods in Natural Language Processing and Data Analysis Page 1

Computer Science 786S - Statistical Methods in Natural Language Processing and Data Analysis Page 1 Computer Siene 786S - Statistial Methods in Natural Language Proessing and Data Analysis Page 1 Hypothesis Testing A statistial hypothesis is a statement about the nature of the distribution of a random

More information

EE 321 Project Spring 2018

EE 321 Project Spring 2018 EE 21 Projet Spring 2018 This ourse projet is intended to be an individual effort projet. The student is required to omplete the work individually, without help from anyone else. (The student may, however,

More information

Sensor management for PRF selection in the track-before-detect context

Sensor management for PRF selection in the track-before-detect context Sensor management for PRF seletion in the tra-before-detet ontext Fotios Katsilieris, Yvo Boers, and Hans Driessen Thales Nederland B.V. Haasbergerstraat 49, 7554 PA Hengelo, the Netherlands Email: {Fotios.Katsilieris,

More information

Analysis of discretization in the direct simulation Monte Carlo

Analysis of discretization in the direct simulation Monte Carlo PHYSICS OF FLUIDS VOLUME 1, UMBER 1 OCTOBER Analysis of disretization in the diret simulation Monte Carlo iolas G. Hadjionstantinou a) Department of Mehanial Engineering, Massahusetts Institute of Tehnology,

More information

QCLAS Sensor for Purity Monitoring in Medical Gas Supply Lines

QCLAS Sensor for Purity Monitoring in Medical Gas Supply Lines DOI.56/sensoren6/P3. QLAS Sensor for Purity Monitoring in Medial Gas Supply Lines Henrik Zimmermann, Mathias Wiese, Alessandro Ragnoni neoplas ontrol GmbH, Walther-Rathenau-Str. 49a, 7489 Greifswald, Germany

More information

Scalable Positivity Preserving Model Reduction Using Linear Energy Functions

Scalable Positivity Preserving Model Reduction Using Linear Energy Functions Salable Positivity Preserving Model Redution Using Linear Energy Funtions Sootla, Aivar; Rantzer, Anders Published in: IEEE 51st Annual Conferene on Deision and Control (CDC), 2012 DOI: 10.1109/CDC.2012.6427032

More information

Taste for variety and optimum product diversity in an open economy

Taste for variety and optimum product diversity in an open economy Taste for variety and optimum produt diversity in an open eonomy Javier Coto-Martínez City University Paul Levine University of Surrey Otober 0, 005 María D.C. Garía-Alonso University of Kent Abstrat We

More information

A Characterization of Wavelet Convergence in Sobolev Spaces

A Characterization of Wavelet Convergence in Sobolev Spaces A Charaterization of Wavelet Convergene in Sobolev Spaes Mark A. Kon 1 oston University Louise Arakelian Raphael Howard University Dediated to Prof. Robert Carroll on the oasion of his 70th birthday. Abstrat

More information

Ayan Kumar Bandyopadhyay

Ayan Kumar Bandyopadhyay Charaterization of radiating apertures using Multiple Multipole Method And Modeling and Optimization of a Spiral Antenna for Ground Penetrating Radar Appliations Ayan Kumar Bandyopadhyay FET-IESK, Otto-von-Guerike-University,

More information

Evaluation of effect of blade internal modes on sensitivity of Advanced LIGO

Evaluation of effect of blade internal modes on sensitivity of Advanced LIGO Evaluation of effet of blade internal modes on sensitivity of Advaned LIGO T0074-00-R Norna A Robertson 5 th Otober 00. Introdution The urrent model used to estimate the isolation ahieved by the quadruple

More information

RESEARCH ON RANDOM FOURIER WAVE-NUMBER SPECTRUM OF FLUCTUATING WIND SPEED

RESEARCH ON RANDOM FOURIER WAVE-NUMBER SPECTRUM OF FLUCTUATING WIND SPEED The Seventh Asia-Paifi Conferene on Wind Engineering, November 8-1, 9, Taipei, Taiwan RESEARCH ON RANDOM FORIER WAVE-NMBER SPECTRM OF FLCTATING WIND SPEED Qi Yan 1, Jie Li 1 Ph D. andidate, Department

More information

Advanced Computational Fluid Dynamics AA215A Lecture 4

Advanced Computational Fluid Dynamics AA215A Lecture 4 Advaned Computational Fluid Dynamis AA5A Leture 4 Antony Jameson Winter Quarter,, Stanford, CA Abstrat Leture 4 overs analysis of the equations of gas dynamis Contents Analysis of the equations of gas

More information

Bilinear Formulated Multiple Kernel Learning for Multi-class Classification Problem

Bilinear Formulated Multiple Kernel Learning for Multi-class Classification Problem Bilinear Formulated Multiple Kernel Learning for Multi-lass Classifiation Problem Takumi Kobayashi and Nobuyuki Otsu National Institute of Advaned Industrial Siene and Tehnology, -- Umezono, Tsukuba, Japan

More information

Chapter 8 Hypothesis Testing

Chapter 8 Hypothesis Testing Leture 5 for BST 63: Statistial Theory II Kui Zhang, Spring Chapter 8 Hypothesis Testing Setion 8 Introdution Definition 8 A hypothesis is a statement about a population parameter Definition 8 The two

More information

THE METHOD OF SECTIONING WITH APPLICATION TO SIMULATION, by Danie 1 Brent ~~uffman'i

THE METHOD OF SECTIONING WITH APPLICATION TO SIMULATION, by Danie 1 Brent ~~uffman'i THE METHOD OF SECTIONING '\ WITH APPLICATION TO SIMULATION, I by Danie 1 Brent ~~uffman'i Thesis submitted to the Graduate Faulty of the Virginia Polytehni Institute and State University in partial fulfillment

More information

LOGISTIC REGRESSION IN DEPRESSION CLASSIFICATION

LOGISTIC REGRESSION IN DEPRESSION CLASSIFICATION LOGISIC REGRESSIO I DEPRESSIO CLASSIFICAIO J. Kual,. V. ran, M. Bareš KSE, FJFI, CVU v Praze PCP, CS, 3LF UK v Praze Abstrat Well nown logisti regression and the other binary response models an be used

More information

Grasp Planning: How to Choose a Suitable Task Wrench Space

Grasp Planning: How to Choose a Suitable Task Wrench Space Grasp Planning: How to Choose a Suitable Task Wrenh Spae Ch. Borst, M. Fisher and G. Hirzinger German Aerospae Center - DLR Institute for Robotis and Mehatronis 8223 Wessling, Germany Email: [Christoph.Borst,

More information

FINITE WORD LENGTH EFFECTS IN DSP

FINITE WORD LENGTH EFFECTS IN DSP FINITE WORD LENGTH EFFECTS IN DSP PREPARED BY GUIDED BY Snehal Gor Dr. Srianth T. ABSTRACT We now that omputers store numbers not with infinite preision but rather in some approximation that an be paed

More information

On Certain Singular Integral Equations Arising in the Analysis of Wellbore Recharge in Anisotropic Formations

On Certain Singular Integral Equations Arising in the Analysis of Wellbore Recharge in Anisotropic Formations On Certain Singular Integral Equations Arising in the Analysis of Wellbore Reharge in Anisotropi Formations C. Atkinson a, E. Sarris b, E. Gravanis b, P. Papanastasiou a Department of Mathematis, Imperial

More information

The Hanging Chain. John McCuan. January 19, 2006

The Hanging Chain. John McCuan. January 19, 2006 The Hanging Chain John MCuan January 19, 2006 1 Introdution We onsider a hain of length L attahed to two points (a, u a and (b, u b in the plane. It is assumed that the hain hangs in the plane under a

More information

Learning to model sequences generated by switching distributions

Learning to model sequences generated by switching distributions earning to model sequenes generated by swithing distributions Yoav Freund A Bell abs 00 Mountain Ave Murray Hill NJ USA Dana on omputer Siene nstitute Hebrew University Jerusalem srael Abstrat We study

More information

Four-dimensional equation of motion for viscous compressible substance with regard to the acceleration field, pressure field and dissipation field

Four-dimensional equation of motion for viscous compressible substance with regard to the acceleration field, pressure field and dissipation field Four-dimensional equation of motion for visous ompressible substane with regard to the aeleration field, pressure field and dissipation field Sergey G. Fedosin PO box 6488, Sviazeva str. -79, Perm, Russia

More information

A Queueing Model for Call Blending in Call Centers

A Queueing Model for Call Blending in Call Centers A Queueing Model for Call Blending in Call Centers Sandjai Bhulai and Ger Koole Vrije Universiteit Amsterdam Faulty of Sienes De Boelelaan 1081a 1081 HV Amsterdam The Netherlands E-mail: {sbhulai, koole}@s.vu.nl

More information

A Heuristic Approach for Design and Calculation of Pressure Distribution over Naca 4 Digit Airfoil

A Heuristic Approach for Design and Calculation of Pressure Distribution over Naca 4 Digit Airfoil IOSR Journal of Engineering (IOSRJEN) ISSN (e): 2250-3021, ISSN (p): 2278-8719 PP 11-15 www.iosrjen.org A Heuristi Approah for Design and Calulation of Pressure Distribution over Naa 4 Digit Airfoil G.

More information

Bayesian Analysis of Speaker Diarization with Eigenvoice Priors

Bayesian Analysis of Speaker Diarization with Eigenvoice Priors Bayesian Analysis of Speaker Diarization with Eigenvoice Priors Patrick Kenny Centre de recherche informatique de Montréal Patrick.Kenny@crim.ca A year in the lab can save you a day in the library. Panu

More information

CSC2515 Winter 2015 Introduc3on to Machine Learning. Lecture 5: Clustering, mixture models, and EM

CSC2515 Winter 2015 Introduc3on to Machine Learning. Lecture 5: Clustering, mixture models, and EM CSC2515 Winter 2015 Introdu3on to Mahine Learning Leture 5: Clustering, mixture models, and EM All leture slides will be available as.pdf on the ourse website: http://www.s.toronto.edu/~urtasun/ourses/csc2515/

More information

A Differential Equation for Specific Catchment Area

A Differential Equation for Specific Catchment Area Proeedings of Geomorphometry 2009. Zurih, Sitzerland, 3 ugust - 2 September, 2009 Differential Equation for Speifi Cathment rea J. C. Gallant, M. F. Huthinson 2 CSIRO Land and Water, GPO Box 666, Canberra

More information

Relativistic Dynamics

Relativistic Dynamics Chapter 7 Relativisti Dynamis 7.1 General Priniples of Dynamis 7.2 Relativisti Ation As stated in Setion A.2, all of dynamis is derived from the priniple of least ation. Thus it is our hore to find a suitable

More information

Maximum Likelihood Multipath Estimation in Comparison with Conventional Delay Lock Loops

Maximum Likelihood Multipath Estimation in Comparison with Conventional Delay Lock Loops Maximum Likelihood Multipath Estimation in Comparison with Conventional Delay Lok Loops Mihael Lentmaier and Bernhard Krah, German Aerospae Center (DLR) BIOGRAPY Mihael Lentmaier reeived the Dipl.-Ing.

More information

Optimization of replica exchange molecular dynamics by fast mimicking

Optimization of replica exchange molecular dynamics by fast mimicking THE JOURNAL OF CHEMICAL PHYSICS 127, 204104 2007 Optimization of replia exhange moleular dynamis by fast mimiking Jozef Hritz and Chris Oostenbrink a Leiden Amsterdam Center for Drug Researh (LACDR), Division

More information

Weighted K-Nearest Neighbor Revisited

Weighted K-Nearest Neighbor Revisited Weighted -Nearest Neighbor Revisited M. Biego University of Verona Verona, Italy Email: manuele.biego@univr.it M. Loog Delft University of Tehnology Delft, The Netherlands Email: m.loog@tudelft.nl Abstrat

More information

Robust Recovery of Signals From a Structured Union of Subspaces

Robust Recovery of Signals From a Structured Union of Subspaces Robust Reovery of Signals From a Strutured Union of Subspaes 1 Yonina C. Eldar, Senior Member, IEEE and Moshe Mishali, Student Member, IEEE arxiv:87.4581v2 [nlin.cg] 3 Mar 29 Abstrat Traditional sampling

More information

Chapter 2 Linear Elastic Fracture Mechanics

Chapter 2 Linear Elastic Fracture Mechanics Chapter 2 Linear Elasti Frature Mehanis 2.1 Introdution Beginning with the fabriation of stone-age axes, instint and experiene about the strength of various materials (as well as appearane, ost, availability

More information

Lecture 3 - Lorentz Transformations

Lecture 3 - Lorentz Transformations Leture - Lorentz Transformations A Puzzle... Example A ruler is positioned perpendiular to a wall. A stik of length L flies by at speed v. It travels in front of the ruler, so that it obsures part of the

More information

Methods of evaluating tests

Methods of evaluating tests Methods of evaluating tests Let X,, 1 Xn be i.i.d. Bernoulli( p ). Then 5 j= 1 j ( 5, ) T = X Binomial p. We test 1 H : p vs. 1 1 H : p>. We saw that a LRT is 1 if t k* φ ( x ) =. otherwise (t is the observed

More information

Coding for Random Projections and Approximate Near Neighbor Search

Coding for Random Projections and Approximate Near Neighbor Search Coding for Random Projetions and Approximate Near Neighbor Searh Ping Li Department of Statistis & Biostatistis Department of Computer Siene Rutgers University Pisataay, NJ 8854, USA pingli@stat.rutgers.edu

More information

Failure Assessment Diagram Analysis of Creep Crack Initiation in 316H Stainless Steel

Failure Assessment Diagram Analysis of Creep Crack Initiation in 316H Stainless Steel Failure Assessment Diagram Analysis of Creep Crak Initiation in 316H Stainless Steel C. M. Davies *, N. P. O Dowd, D. W. Dean, K. M. Nikbin, R. A. Ainsworth Department of Mehanial Engineering, Imperial

More information

COMBINED PROBE FOR MACH NUMBER, TEMPERATURE AND INCIDENCE INDICATION

COMBINED PROBE FOR MACH NUMBER, TEMPERATURE AND INCIDENCE INDICATION 4 TH INTERNATIONAL CONGRESS OF THE AERONAUTICAL SCIENCES COMBINED PROBE FOR MACH NUMBER, TEMPERATURE AND INCIDENCE INDICATION Jiri Nozika*, Josef Adame*, Daniel Hanus** *Department of Fluid Dynamis and

More information

23.1 Tuning controllers, in the large view Quoting from Section 16.7:

23.1 Tuning controllers, in the large view Quoting from Section 16.7: Lesson 23. Tuning a real ontroller - modeling, proess identifiation, fine tuning 23.0 Context We have learned to view proesses as dynami systems, taking are to identify their input, intermediate, and output

More information

max min z i i=1 x j k s.t. j=1 x j j:i T j

max min z i i=1 x j k s.t. j=1 x j j:i T j AM 221: Advaned Optimization Spring 2016 Prof. Yaron Singer Leture 22 April 18th 1 Overview In this leture, we will study the pipage rounding tehnique whih is a deterministi rounding proedure that an be

More information

A simple expression for radial distribution functions of pure fluids and mixtures

A simple expression for radial distribution functions of pure fluids and mixtures A simple expression for radial distribution funtions of pure fluids and mixtures Enrio Matteoli a) Istituto di Chimia Quantistia ed Energetia Moleolare, CNR, Via Risorgimento, 35, 56126 Pisa, Italy G.

More information

DIGITAL DISTANCE RELAYING SCHEME FOR PARALLEL TRANSMISSION LINES DURING INTER-CIRCUIT FAULTS

DIGITAL DISTANCE RELAYING SCHEME FOR PARALLEL TRANSMISSION LINES DURING INTER-CIRCUIT FAULTS CHAPTER 4 DIGITAL DISTANCE RELAYING SCHEME FOR PARALLEL TRANSMISSION LINES DURING INTER-CIRCUIT FAULTS 4.1 INTRODUCTION Around the world, environmental and ost onsiousness are foring utilities to install

More information

Combined Electric and Magnetic Dipoles for Mesoband Radiation, Part 2

Combined Electric and Magnetic Dipoles for Mesoband Radiation, Part 2 Sensor and Simulation Notes Note 53 3 May 8 Combined Eletri and Magneti Dipoles for Mesoband Radiation, Part Carl E. Baum University of New Mexio Department of Eletrial and Computer Engineering Albuquerque

More information

Preprints of the 19th World Congress The International Federation of Automatic Control Cape Town, South Africa. August 24-29, 2014

Preprints of the 19th World Congress The International Federation of Automatic Control Cape Town, South Africa. August 24-29, 2014 Preprints of the 9th World Congress he International Federation of Automati Control Cape on, South Afria August 4-9, 4 A Step-ise sequential phase partition algorithm ith limited bathes for statistial

More information

IMPEDANCE EFFECTS OF LEFT TURNERS FROM THE MAJOR STREET AT A TWSC INTERSECTION

IMPEDANCE EFFECTS OF LEFT TURNERS FROM THE MAJOR STREET AT A TWSC INTERSECTION 09-1289 Citation: Brilon, W. (2009): Impedane Effets of Left Turners from the Major Street at A TWSC Intersetion. Transportation Researh Reord Nr. 2130, pp. 2-8 IMPEDANCE EFFECTS OF LEFT TURNERS FROM THE

More information

Subject: Introduction to Component Matching and Off-Design Operation % % ( (1) R T % (

Subject: Introduction to Component Matching and Off-Design Operation % % ( (1) R T % ( 16.50 Leture 0 Subjet: Introdution to Component Mathing and Off-Design Operation At this point it is well to reflet on whih of the many parameters we have introdued (like M, τ, τ t, ϑ t, f, et.) are free

More information

Reliability Guaranteed Energy-Aware Frame-Based Task Set Execution Strategy for Hard Real-Time Systems

Reliability Guaranteed Energy-Aware Frame-Based Task Set Execution Strategy for Hard Real-Time Systems Reliability Guaranteed Energy-Aware Frame-Based ask Set Exeution Strategy for Hard Real-ime Systems Zheng Li a, Li Wang a, Shuhui Li a, Shangping Ren a, Gang Quan b a Illinois Institute of ehnology, Chiago,

More information

Orthogonal Complement Based Divide-and-Conquer Algorithm (O-DCA) for Constrained Multibody Systems

Orthogonal Complement Based Divide-and-Conquer Algorithm (O-DCA) for Constrained Multibody Systems Orthogonal Complement Based Divide-and-Conquer Algorithm (O-DCA) for Constrained Multibody Systems Rudranarayan M. Mukherjee, Kurt S. Anderson Computational Dynamis Laboratory Department of Mehanial Aerospae

More information

Process engineers are often faced with the task of

Process engineers are often faced with the task of Fluids and Solids Handling Eliminate Iteration from Flow Problems John D. Barry Middough, In. This artile introdues a novel approah to solving flow and pipe-sizing problems based on two new dimensionless

More information

Development of Fuzzy Extreme Value Theory. Populations

Development of Fuzzy Extreme Value Theory. Populations Applied Mathematial Sienes, Vol. 6, 0, no. 7, 58 5834 Development of Fuzzy Extreme Value Theory Control Charts Using α -uts for Sewed Populations Rungsarit Intaramo Department of Mathematis, Faulty of

More information

REFINED UPPER BOUNDS FOR THE LINEAR DIOPHANTINE PROBLEM OF FROBENIUS. 1. Introduction

REFINED UPPER BOUNDS FOR THE LINEAR DIOPHANTINE PROBLEM OF FROBENIUS. 1. Introduction Version of 5/2/2003 To appear in Advanes in Applied Mathematis REFINED UPPER BOUNDS FOR THE LINEAR DIOPHANTINE PROBLEM OF FROBENIUS MATTHIAS BECK AND SHELEMYAHU ZACKS Abstrat We study the Frobenius problem:

More information

Counting Idempotent Relations

Counting Idempotent Relations Counting Idempotent Relations Beriht-Nr. 2008-15 Florian Kammüller ISSN 1436-9915 2 Abstrat This artile introdues and motivates idempotent relations. It summarizes haraterizations of idempotents and their

More information

arxiv:gr-qc/ v2 6 Feb 2004

arxiv:gr-qc/ v2 6 Feb 2004 Hubble Red Shift and the Anomalous Aeleration of Pioneer 0 and arxiv:gr-q/0402024v2 6 Feb 2004 Kostadin Trenčevski Faulty of Natural Sienes and Mathematis, P.O.Box 62, 000 Skopje, Maedonia Abstrat It this

More information

Array Design for Superresolution Direction-Finding Algorithms

Array Design for Superresolution Direction-Finding Algorithms Array Design for Superresolution Diretion-Finding Algorithms Naushad Hussein Dowlut BEng, ACGI, AMIEE Athanassios Manikas PhD, DIC, AMIEE, MIEEE Department of Eletrial Eletroni Engineering Imperial College

More information

MultiPhysics Analysis of Trapped Field in Multi-Layer YBCO Plates

MultiPhysics Analysis of Trapped Field in Multi-Layer YBCO Plates Exerpt from the Proeedings of the COMSOL Conferene 9 Boston MultiPhysis Analysis of Trapped Field in Multi-Layer YBCO Plates Philippe. Masson Advaned Magnet Lab *7 Main Street, Bldg. #4, Palm Bay, Fl-95,

More information

Resolving RIPS Measurement Ambiguity in Maximum Likelihood Estimation

Resolving RIPS Measurement Ambiguity in Maximum Likelihood Estimation 14th International Conferene on Information Fusion Chiago, Illinois, USA, July 5-8, 011 Resolving RIPS Measurement Ambiguity in Maximum Likelihood Estimation Wenhao Li, Xuezhi Wang, and Bill Moran Shool

More information

Einstein s Three Mistakes in Special Relativity Revealed. Copyright Joseph A. Rybczyk

Einstein s Three Mistakes in Special Relativity Revealed. Copyright Joseph A. Rybczyk Einstein s Three Mistakes in Speial Relativity Revealed Copyright Joseph A. Rybzyk Abstrat When the evidene supported priniples of eletromagneti propagation are properly applied, the derived theory is

More information

An iterative least-square method suitable for solving large sparse matrices

An iterative least-square method suitable for solving large sparse matrices An iteratie least-square method suitable for soling large sparse matries By I. M. Khabaza The purpose of this paper is to report on the results of numerial experiments with an iteratie least-square method

More information

Improvements in the Modeling of the Self-ignition of Tetrafluoroethylene

Improvements in the Modeling of the Self-ignition of Tetrafluoroethylene Exerpt from the Proeedings of the OMSOL onferene 010 Paris Improvements in the Modeling of the Self-ignition of Tetrafluoroethylene M. Bekmann-Kluge 1 *,. errero 1, V. Shröder 1, A. Aikalin and J. Steinbah

More information

Vector Field Theory (E&M)

Vector Field Theory (E&M) Physis 4 Leture 2 Vetor Field Theory (E&M) Leture 2 Physis 4 Classial Mehanis II Otober 22nd, 2007 We now move from first-order salar field Lagrange densities to the equivalent form for a vetor field.

More information

11.1 Polynomial Least-Squares Curve Fit

11.1 Polynomial Least-Squares Curve Fit 11.1 Polynomial Least-Squares Curve Fit A. Purpose This subroutine determines a univariate polynomial that fits a given disrete set of data in the sense of minimizing the weighted sum of squares of residuals.

More information

Solving Constrained Lasso and Elastic Net Using

Solving Constrained Lasso and Elastic Net Using Solving Constrained Lasso and Elasti Net Using ν s Carlos M. Alaı z, Alberto Torres and Jose R. Dorronsoro Universidad Auto noma de Madrid - Departamento de Ingenierı a Informa tia Toma s y Valiente 11,

More information

Control Theory association of mathematics and engineering

Control Theory association of mathematics and engineering Control Theory assoiation of mathematis and engineering Wojieh Mitkowski Krzysztof Oprzedkiewiz Department of Automatis AGH Univ. of Siene & Tehnology, Craow, Poland, Abstrat In this paper a methodology

More information

7.1 Roots of a Polynomial

7.1 Roots of a Polynomial 7.1 Roots of a Polynomial A. Purpose Given the oeffiients a i of a polynomial of degree n = NDEG > 0, a 1 z n + a 2 z n 1 +... + a n z + a n+1 with a 1 0, this subroutine omputes the NDEG roots of the

More information