Nearest Neighbor Discriminant Analysis for Robust Speaker Recognition
INTERSPEECH 2014

Seyed Omid Sadjadi, Jason W. Pelecanos, Weizhong Zhu
IBM Research, Watson Group

Abstract

With the advent of i-vectors, linear discriminant analysis (LDA) has become an integral part of many state-of-the-art speaker recognition systems. Here, LDA is primarily employed to annihilate the non-speaker related (e.g., channel) directions, thereby maximizing the inter-speaker separation. The traditional approach for computing the LDA transform uses parametric representations for both intra- and inter-speaker scatter matrices that are based on the Gaussian distribution assumption. However, it is known that the actual distribution of i-vectors may not necessarily be Gaussian, in particular in the presence of noise and channel distortions. Motivated by this observation, we present an alternative non-parametric discriminant analysis (NDA) technique that measures both the within- and between-speaker variation on a local basis using the nearest neighbor rule. The effectiveness of the NDA method is evaluated in the context of noisy speaker recognition tasks using speech material from the DARPA Robust Automatic Transcription of Speech (RATS) program. Experimental results indicate that NDA is more effective than the traditional parametric LDA for speaker recognition under noisy and channel degraded conditions.

Index Terms: discriminant analysis, i-vector, nearest neighbor, speaker recognition

1. Introduction

Speaker recognition has evolved significantly over the past few years. The research trend in this domain has gradually migrated from joint factor analysis (JFA) based methods, which attempt to model the speaker and channel subspaces separately [1], towards the i-vector approach that models both speaker and channel variabilities in a single low-dimensional (e.g., a few hundred) space termed the total variability subspace [2].
Accordingly, there has been a growing interest in designing algorithms for the compensation of the channel subspace in the i-vector paradigm. Here, irrespective of the back-end model, linear discriminant analysis (LDA) with the Fisher criterion [3] has been the most commonly adopted approach for inter-session variability compensation. Current state-of-the-art speaker recognition systems employ the Fisher LDA as a preprocessor to generate dimensionality reduced and channel-compensated features from i-vectors [4, 5, 6, 7, 8, 9, 10]. The dimensionality reduced vectors can then be conveniently modeled and scored with various classifiers such as support vector machines (SVM) [2], probabilistic LDA (PLDA) [11, 12, 13], and the simple yet effective cosine distance (CD) based method [2]. The Fisher LDA aims at finding the most discriminative feature subset through a linear transformation of the original input space. Such a transformation attempts to maximize the between-class (or inter-speaker) scatter while minimizing the within-class variation. Traditionally, parametric within- and between-class scatter matrices are formed based on the Gaussian distribution assumption for the samples in each class. However, if the class-conditional distributions are non-Gaussian, one cannot expect the use of such parametric forms to result in proper feature subsets that are capable of preserving complex structures within data needed for classification (e.g., multi-modality). It is well known in the speaker recognition community that the actual distribution of i-vectors may not necessarily be Gaussian [12]. This is particularly problematic when speech recordings are collected in the presence of noise and channel distortions.

(This work was supported in part by Contract No. D11PC20192 DOI/NBC under the RATS program. The views, opinions, findings and recommendations contained in this article are those of the authors and do not necessarily reflect the official policy or position of the DOI/NBC.)
Hence, despite its popularity, the parametric LDA may not be the best choice here. To cope with the above noted issue associated with the parametric nature of the scatter matrices, in the seminal work of [14] a non-parametric discriminant analysis (NDA) approach was proposed for general two-class pattern recognition problems. It was later extended to multi-class problems and successfully applied in several other studies for face recognition tasks as well [15, 16]. The NDA measures both the within- and between-class scatter matrices on a local basis using the k-nearest neighbor (k-NN) rule, and unlike LDA, is generally of full rank. Note that for a C class problem, the parametric LDA can provide at most C − 1 discriminant features (i.e., the number of classes minus 1). Nonetheless, this is less problematic in speaker recognition because typically the number of speakers in the training set exceeds the dimensionality of the total variability subspace. The non-parametric nature of the scatter matrices in the NDA inherently results in features that can preserve the local structure (e.g., the class boundaries) within data, which is important for classification. In this study, we investigate the application of NDA for robust speaker recognition. In particular, we evaluate the effectiveness of NDA against the traditional LDA in the context of speaker recognition tasks under actual noisy and channel degraded conditions using speech material from the DARPA program Robust Automatic Transcription of Speech (RATS). The RATS data, which is distributed by the Linguistic Data Consortium (LDC) [17], consists of conversational telephone speech (CTS) recordings that have been retransmitted (through LDC's Multi Radio-Link Channel Collection System) and captured over 8 extremely degraded communication channels with distinct distortion characteristics. The distortion type seen in RATS data is nonlinear and the noise is to some extent correlated with speech.
We conduct our speaker recognition experiments with a state-of-the-art i-vector based system [10] and report the false-reject (miss) rate at a 2.5% false-alarm (FA) rate as the performance metric. We also explore the impact of different configuration parameters for NDA (e.g., the feature dimensionality as well as the number of neighbors in the k-NN analysis) on speaker recognition performance.

Copyright © 2014 ISCA, September 2014, Singapore
2. I-vector feature extraction

In this section we briefly describe the total variability modeling concept, which was inspired by the JFA approach that attempts to estimate separate subspaces for speaker and channel variabilities. Unlike JFA, the total variability approach assumes that both speaker and channel variabilities reside in the same low-dimensional subspace. This concept is closely related to the eigenvoice adaptation technique developed for automatic speech recognition (ASR), first proposed in [18] based on principal component analysis (PCA), and later in [19] based on probabilistic PCA (PPCA) [20]. Given a set of observations (i.e., frames) for each speech session, the eigenvoice adaptation technique computes an offset (linear shift) in the prior acoustic space represented by the Gaussian mixture model (GMM) mean supervectors obtained from a universal background model (UBM). The key idea here is that variability within and across sessions can be described via a small set of parameters (a.k.a. factors) in a low-dimensional subspace spanned by the columns of a low-rank rectangular matrix, T, entitled the total variability matrix. Mathematically, the adapted mean supervector, M, for a given set of observations can be modeled as

M = m + Tx + ε,   (1)

where m is the prior mean supervector, x ~ N(0, I) is a multivariate random variable termed an identity vector (i-vector), and ε ~ N(0, Σ) is a residual noise term to account for the variability not captured via T (Σ is typically copied from the UBM). In other words, for the given observation set, the i-vector represents the coordinates in the total variability subspace. The procedure for training the i-vector extractor (i.e., the T matrix) is similar to the eigenvoice learning process described in [19] (a simplified version is presented in [21]), except each speech recording (session) is assumed to be produced by a unique speaker.
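To make Eq. (1) concrete, the following toy numpy sketch draws an i-vector and forms the adapted supervector. The dimensions here are made-up illustrative values, much smaller than a real system's (a 1024-component GMM with 57-dim features gives a 58368-dim supervector and a 400-dim subspace); this illustrates the model only, not the actual extractor training.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative (not paper-scale) dimensions.
sv_dim, tv_dim = 120, 8

m = rng.normal(size=sv_dim)            # UBM prior mean supervector
T = rng.normal(size=(sv_dim, tv_dim))  # low-rank total variability matrix
x = rng.normal(size=tv_dim)            # i-vector: x ~ N(0, I)

# Eq. (1): session-adapted mean supervector (residual term epsilon omitted)
M = m + T @ x

# The i-vector is the coordinate of the session in span(T); with a known
# noiseless M it can be recovered from M - m by least squares.
x_hat, *_ = np.linalg.lstsq(T, M - m, rcond=None)
assert np.allclose(x_hat, x)
```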
3. Linear discriminant analysis (LDA)

LDA is widely adopted in pattern recognition problems as a preprocessing stage for feature selection and dimensionality reduction. It computes an optimum linear projection A: R^d → R^n by maximizing the ratio of the inter-class scatter to the intra-class variance:

y = A^T x,   (2)

where A is a rectangular matrix with n linearly independent columns. Here, the within- and between-class scatter matrices are used to formulate a class separability criterion which converts the matrices into a single statistic. This statistic takes on larger values when the between-class scatter is larger and the within-class variance is smaller. Several such class separability criteria are described in [22], of which the following is the most widely used:

Â = arg max_{A: A^T S_w A = I} tr(A^T S_b A),   (3)

where S_b and S_w denote the between- and within-class scatter matrices, respectively. The optimization problem in (3) has an analytical solution that is a matrix whose columns are the n eigenvectors corresponding to the largest eigenvalues of S_w^{-1} S_b. (The term i-vector sometimes also refers to a vector of intermediate size, bigger than the underlying cepstral feature vector but much smaller than the GMM supervector.)

The within-class scatter matrix measures the scatter of samples in each class around the expected value of that class:

S_w = Σ_{i=1}^{C} p_i E[(x − µ_i)(x − µ_i)^T | C_i] = Σ_{i=1}^{C} p_i Σ_i,   (4)

where p_i, µ_i, and Σ_i are the a priori probability (proportional to the number of sessions per speaker), expected value, and covariance matrix for class i. The between-class scatter matrix, on the other hand, measures the scatter of the class-conditional expected values around the global mean:

S_b = Σ_{i=1}^{C} p_i (µ_i − µ)(µ_i − µ)^T,   (5)

where µ is the expected value of the training samples, computed as

µ = E[x] = Σ_{i=1}^{C} p_i µ_i.   (6)

There are three disadvantages associated with the parametric nature of the scatter matrices in (4) and (5). First, the underlying distribution of classes is assumed to be Gaussian with a common covariance matrix for all classes.
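As a concrete illustration of Eqs. (2)-(6), the parametric scatter matrices and the resulting projection can be sketched in a few lines of numpy. The data here is synthetic and the dimensions and class counts are arbitrary toy choices, not the paper's i-vector setup.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data: C classes in d dimensions (toy stand-in for i-vectors).
d, C, n_per = 4, 3, 50
X = [rng.normal(loc=rng.normal(scale=3.0, size=d), size=(n_per, d))
     for _ in range(C)]

N = sum(len(Xi) for Xi in X)
p = [len(Xi) / N for Xi in X]                  # class priors p_i
mu_i = [Xi.mean(axis=0) for Xi in X]           # class means mu_i
mu = sum(pi * mi for pi, mi in zip(p, mu_i))   # global mean, Eq. (6)

# Within-class scatter, Eq. (4): S_w = sum_i p_i * Sigma_i
S_w = sum(pi * np.cov(Xi, rowvar=False, bias=True) for pi, Xi in zip(p, X))
# Between-class scatter, Eq. (5)
S_b = sum(pi * np.outer(mi - mu, mi - mu) for pi, mi in zip(p, mu_i))

# LDA transform: top eigenvectors of S_w^{-1} S_b.  rank(S_b) <= C - 1,
# so there are at most C - 1 = 2 useful directions here.
evals, evecs = np.linalg.eig(np.linalg.inv(S_w) @ S_b)
order = np.argsort(evals.real)[::-1]
A = evecs[:, order[: C - 1]].real              # projection matrix of Eq. (2)
Y = np.vstack(X) @ A                           # projected features y = A^T x
```

The rank deficiency of S_b visible here (at most C − 1) is exactly the second limitation discussed in the text.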
Therefore, one cannot expect the parametric LDA to generalize well to non-Gaussian and multi-modal (as opposed to unimodal) distributions. Second, notice that the rank of S_b is C − 1, which means the parametric LDA can provide at most C − 1 discriminant features. However, this may not be sufficient in applications such as language recognition where the number of language classes is much smaller than the dimensionality of the i-vectors (for instance, there are only 5 target language categories in the RATS program). Nevertheless, this may not pose a challenge for speaker recognition tasks in which the number of training speakers exceeds the dimensionality of the total variability subspace. Finally, because only the class centroids are taken into account for computing S_b in (5), the parametric LDA cannot effectively capture the boundary structure between adjacent classes, which is essential for classification [22]. To overcome the above noted limitations of LDA, a nonparametric discriminant analysis technique was proposed in [14] that measures both the within- and between-class scatters on a local basis using a nearest neighbor rule. We provide a brief description of NDA in the next section.

4. Nonparametric discriminant analysis

To remedy the limitations identified for LDA, a nonparametric discriminant analysis technique was proposed in [14]. In NDA, the expected values that represent the global information about each class are replaced with local sample averages computed based on the k-NN of individual samples. More specifically, in the NDA approach, the between-class scatter matrix is defined as

S_b = Σ_{i=1}^{C} Σ_{j=1, j≠i}^{C} Σ_{l=1}^{N_i} w_l^{ij} (x_l^i − M_l^{ij})(x_l^i − M_l^{ij})^T,   (7)

where x_l^i denotes the l-th sample from class i, and M_l^{ij} is the local mean of the k-NN samples for x_l^i from class j, which is computed as

M_l^{ij} = (1/K) Σ_{k=1}^{K} NN_k(x_l^i, j),   (8)
Figure 1: Symbolic example illustrating the parametric versus nonparametric scatter between two classes. v_1 represents the global gradient of the class centroids. The vectors {v_2, ..., v_6} represent the local gradients.

where NN_k(x_l^i, j) is the k-th nearest neighbor of x_l^i in class j. The weighting function w_l^{ij} in (7) is defined as

w_l^{ij} = min{ d^α(x_l^i, NN_K(x_l^i, i)), d^α(x_l^i, NN_K(x_l^i, j)) } / ( d^α(x_l^i, NN_K(x_l^i, i)) + d^α(x_l^i, NN_K(x_l^i, j)) ),   (9)

where α ∈ R is a constant between zero and infinity, and d(·) denotes the Euclidean distance. The weighting function is introduced in (7) to deemphasize the local gradients that are large in magnitude, to mitigate their influence on the scatter matrix. The weight parameters approach 0.5 for samples near the classification boundary (e.g., see {v_2, v_3, v_5, v_6} in Figure 1), while dropping off to 0 for samples that are far from the boundary (e.g., see v_4 in Figure 1). The control parameter α determines how rapidly such decay in the weights occurs. The nonparametric within-class scatter matrix, S_w, is computed in a similar fashion as in (7), except the weighting function is set to 1 and the local gradients are computed within each class. The NDA transform is then formed by calculating the eigenvectors of S_w^{-1} S_b.

Three important observations can be made from a careful examination of the nonparametric between-class scatter matrix in (7). First, notice that as the number of nearest neighbors, K, approaches N_j, the total number of samples in class j, the local mean vector, M_l^{ij}, approaches the global mean of class j (i.e., µ_j). In this scenario, if we set the weight parameters to 1, the NDA transform essentially becomes the LDA projection, which means LDA is a special case of the more general NDA. Second, because all the samples are taken into account for the calculation of the nonparametric between-class scatter matrix (as opposed to only the class centroids), S_b is generally of full rank.
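Equations (7)-(9) can be illustrated with a small numpy sketch. The data, class counts, and parameter values (K = 3, α = 2) below are arbitrary toy choices rather than the paper's configuration, and the brute-force neighbor search stands in for an efficient k-NN implementation.

```python
import numpy as np

def nn_dist_and_mean(x, Xc, K):
    """Distance from x to its K-th nearest neighbor in class Xc, and the
    local mean M of those K neighbors (Eq. (8))."""
    d = np.linalg.norm(Xc - x, axis=1)
    idx = np.argsort(d)[:K]
    return d[idx[-1]], Xc[idx].mean(axis=0)

def nda_between_scatter(X, K=3, alpha=2.0):
    """Nonparametric between-class scatter of Eq. (7) for a list of
    per-class sample matrices X (toy sketch of [14])."""
    d = X[0].shape[1]
    S_b = np.zeros((d, d))
    for i, Xi in enumerate(X):
        for j, Xj in enumerate(X):
            if j == i:
                continue
            for x in Xi:
                # within-class neighbors exclude the sample itself
                own = Xi[~np.all(Xi == x, axis=1)]
                d_ii, _ = nn_dist_and_mean(x, own, K)
                d_ij, M_ij = nn_dist_and_mean(x, Xj, K)   # Eq. (8)
                # Eq. (9): weight -> 0.5 near the boundary, -> 0 far away
                w = min(d_ii**alpha, d_ij**alpha) / (d_ii**alpha + d_ij**alpha)
                S_b += w * np.outer(x - M_ij, x - M_ij)
    return S_b

rng = np.random.default_rng(2)
X = [rng.normal(loc=c, size=(30, 4)) for c in (-2.0, 2.0)]
S_b = nda_between_scatter(X)
# Unlike the parametric S_b of Eq. (5) (rank <= C - 1), this matrix is
# generally full rank.
print(np.linalg.matrix_rank(S_b))  # → 4
```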
This means that unlike LDA, which provides at most C − 1 discriminant features, NDA generally results in d-dimensional vectors (assuming a d-dimensional input space) for classification. As we discussed before, this is of great importance for applications such as language recognition where the number of classes is much smaller than the dimensionality of the total subspace (or the input space in general). Finally, compared to LDA, NDA is more effective in preserving the complex structure (i.e., local and boundary structure) within and across different classes. As seen from the example shown in Figure 1 (where k is set to 1 for simplicity), LDA only uses the global gradient obtained with the centroids of the two classes (i.e., v_1) to measure the between-class scatter. On the other hand, NDA uses the local gradients (i.e., {v_2, ..., v_6}) that are emphasized along the boundary through the weighting function, w_l^{ij}. Hence, the boundary information becomes embedded into the resulting transformation.

5. Experiments

This section provides a description of our experimental setup, including the speech data as well as the speaker recognition system used in our evaluations. We conduct our speaker recognition experiments using actual noisy and channel degraded speech material available from the DARPA RATS program, which is distributed by the LDC [17]. The RATS data consists of CTS recordings that have been retransmitted and captured over 8 extremely degraded high-frequency (HF) radio channels, labeled A-H, with distinct noise characteristics. The type of distortion seen in RATS data is nonlinear (e.g., akin to clipping as well as amplitude compression effects) and the noise is to some extent correlated with speech. A total of five data releases are available from the LDC for the RATS speaker recognition task: LDC2012E49, LDC2012E63, LDC2012E69, LDC2012E85, and LDC2012E117, which contain speech spoken in five languages: Levantine Arabic, Dari, Farsi, Pashto, and Urdu.
We partitioned these data into two sets for our system training and evaluation (the latter consisting of enrollment and test). In the RATS program a 6-sided speaker enrollment scenario is assumed, and we follow this assumption in our development setup. For system evaluation, there are 8 duration-specific tasks, of which we report results for the following enrollment-test conditions: 120s-120s, 30s-30s, 10s-10s, and 3s-3s. It should be noted here that for the 120s condition there are 6 enrollment sessions of 120s of speech and one test session of 120s of speech; this applies similarly to the other durations. The total numbers of trials for these conditions are 333k, 163k, 173k, and 172k, respectively. It is worth remarking here that there are no cross-gender or cross-language trials in our evaluations.

For speech parameterization, we extract 19-dimensional power normalized cepstral coefficients (PNCC) [23] from 20 ms frames every 10 ms using a 24-channel Gammatone filterbank spanning the frequency range [...] Hz. The first and second temporal cepstral derivatives are also computed over a 5-frame window and appended to the static features to capture the dynamic pattern of speech over time. This results in 57-dimensional feature vectors. For non-speech frame dropping, we employ an unsupervised speech activity detector (SAD) that generates frame-level decisions using multiple thresholds set on various basic speech features including frame log-energy, spectral divergence, and signal-to-noise ratio. After dropping the non-speech frames, global (utterance level) cepstral mean and variance normalization (CMVN) is applied to suppress the short-term linear channel effects.

We perform our experiments in the context of a state-of-the-art i-vector based speaker recognition system [10]. To learn the i-vector extractor, a gender-independent 1024-component GMM-UBM with diagonal covariance matrices is trained using a subset of the development set.
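The front-end steps described above (delta appending over a 5-frame window and utterance-level CMVN) can be sketched as follows. The standard regression-based delta formula and the random stand-in "cepstra" are illustrative assumptions, not the exact PNCC implementation.

```python
import numpy as np

def add_deltas(C, win=2):
    """First-order delta cepstra over a (2*win + 1)-frame window
    (win=2 gives the 5-frame window of Section 5)."""
    pad = np.pad(C, ((win, win), (0, 0)), mode="edge")
    num = np.zeros_like(C)
    denom = 2 * sum(k * k for k in range(1, win + 1))
    for k in range(1, win + 1):
        num += k * (pad[win + k: win + k + C.shape[0]]
                    - pad[win - k: win - k + C.shape[0]])
    return num / denom

def cmvn(F):
    """Global (utterance-level) cepstral mean and variance normalization."""
    return (F - F.mean(axis=0)) / (F.std(axis=0) + 1e-10)

rng = np.random.default_rng(3)
C = rng.normal(size=(200, 19))  # 19-dim static cepstra (stand-in for PNCC)
# Append deltas and delta-deltas: 19 + 19 + 19 = 57-dim features
F = np.hstack([C, add_deltas(C), add_deltas(add_deltas(C))])
F = cmvn(F)                     # applied after non-speech frame dropping
assert F.shape == (200, 57)
```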
The zeroth and first order Baum-Welch statistics are then computed for each recording and used to learn a 400-dimensional total variability subspace. After extracting 400-dimensional i-vectors, we use either LDA or NDA for inter-session variability compensation. The dimensionality reduced i-vectors are then centered (the mean is removed) and unit-length normalized. For scoring, a Gaussian PLDA model with a full covariance residual noise term [11, 13] is learned using the i-vectors extracted from 13,158 speech segments from 743 unique speakers. The eigenvoice subspace in the PLDA model is assumed full-rank. We include segments from 120s, 30s, and 10s cuts in the PLDA training to expose our model to the duration conditions seen in the evaluation data. The number of sessions per speaker ranges from 5 to 25 segments, and we employ a one-versus-rest strategy to compute the inter-speaker scatter matrix in (7). This provides flexibility in the number of nearest neighbors used for computing the local means.

Figure 2: Normalized and sorted eigenvalues of S_w^{-1} S_b for the LDA transform (dashed) and for the NDA transform (solid), obtained with our development i-vectors.

6. Results

In this section we summarize the results obtained with the experimental setup presented in Section 5. Figure 2 shows the normalized and sorted eigenvalues of S_w^{-1} S_b in LDA (dashed) and in NDA (solid), obtained with our development i-vectors. Comparing the two curves, it is seen that the decay in the eigenvalues of the LDA transform occurs more rapidly than that of the NDA transform. In other words, the speaker-discriminative information for LDA is confined within a lower dimensional subspace compared to NDA. Note that, from our discussion in Section 4, NDA preserves the complex local structure within the data (as opposed to the global mean scatter) and embeds the class boundary information into the transform. Our hypothesis is that a larger subspace is required for NDA to properly represent such a complex structure. Accordingly, we expect NDA to perform best at larger feature dimensions (in the transformed space) compared to LDA.

Results of our speaker recognition experiments with LDA and NDA (K = 9) on RATS data are summarized in Tables 1 and 2, respectively. The results are reported in terms of false-reject (miss) rate at 2.5% false-alarm (FA) rate (miss@2.5%FA), which is a RATS program metric for the speaker recognition task. These results are generated by varying the dimensionality of the transformed subspace from 150 to 400 with an increment of 50. Two important observations can be made from the results presented in the tables. First, irrespective of the dimensionality of the feature subspace, NDA consistently performs better than LDA across the four duration-specific conditions.
As we discussed before, this is due to the nonparametric representations of the scatter matrices in NDA, which make no assumption regarding the underlying class-conditional distributions. In addition, NDA is more effective in capturing the local structure and boundary information within and across different speakers. Second, LDA performs best with a 200-dimensional feature space, while the best performance for NDA is achieved with a 250-dimensional transform. As noted above, more dimensions are required for NDA to capture the boundary structure and local information.

We also investigated the impact of the number of nearest neighbors used for computing NDA on speaker recognition performance. The results are provided in Table 3 for K ranging from 3 to 13 with an increment of 2. Here, the dimensionality of the transformed subspace is fixed at 250. For the longer duration tasks (i.e., 120s and 30s), the best performance is obtained with K = 9, while for the 10s and 3s tasks the best performance is achieved with K = 7. It is evident that for the best performance to be achieved, the number of nearest neighbors, K, should be tuned. In practice, this is typically accomplished using a development set.

Table 1: Speaker recognition performance across the four enrollment-test conditions (120s-120s, 30s-30s, 10s-10s, 3s-3s) with LDA and feature dimensions ranging from 150 to 400, in terms of percent miss@2.5%FA. [Numeric entries not recoverable from the source.]

Table 2: Speaker recognition performance across the four enrollment-test conditions with NDA (K = 9) and feature dimensions ranging from 150 to 400. [Numeric entries not recoverable from the source.]

7. Conclusions

LDA has become an integral part of many state-of-the-art speaker recognition systems for inter-session variability compensation.
Given that the actual distribution of i-vectors may not necessarily be Gaussian, as well as the limitations identified for the parametric LDA, we presented an alternative nonparametric discriminant analysis (NDA) technique that measures both the within- and between-speaker variation on a local basis using the nearest neighbor rule. Unlike LDA, the NDA approach makes no specific assumption regarding the underlying class-conditional distributions. To evaluate the efficacy of NDA, we conducted speaker recognition experiments using actual noisy and channel degraded data from the RATS program. Experimental results indicated the effectiveness of NDA over LDA for speaker recognition tasks. A clear advantage of NDA over LDA is that it is generally of full rank, making it attractive for speech applications (such as language recognition) with a limited number of classes. Our preliminary experiments also confirm the effectiveness of NDA for RATS language recognition tasks, where the number of language classes is much smaller than the dimensionality of the i-vectors.

Table 3: Speaker recognition performance for different numbers of nearest neighbors, K, in NDA with 250-dimensional features, across the four enrollment-test conditions. [Numeric entries not recoverable from the source.]
8. References

[1] P. Kenny, G. Boulianne, P. Ouellet, and P. Dumouchel, "Joint factor analysis versus eigenchannels in speaker recognition," IEEE Trans. Audio Speech Lang. Process., vol. 15, no. 4.
[2] N. Dehak, P. Kenny, R. Dehak, P. Dumouchel, and P. Ouellet, "Front-end factor analysis for speaker verification," IEEE Trans. Audio Speech Lang. Process., vol. 19, no. 4.
[3] R. A. Fisher, "The use of multiple measurements in taxonomic problems," Annals of Eugenics, vol. 7, no. 2.
[4] P. Matějka, O. Glembek, F. Castaldo, M. J. Alam, O. Plchot, P. Kenny, L. Burget, and J. Černocký, "Full-covariance UBM and heavy-tailed PLDA in i-vector speaker verification," in Proc. IEEE ICASSP, Prague, Czech Republic, May 2011.
[5] A. Kanagasundaram, D. B. Dean, R. J. Vogt, M. McLaren, S. Sridharan, and M. Mason, "Weighted LDA techniques for i-vector based speaker verification," in Proc. IEEE ICASSP, Kyoto, Japan, March 2012.
[6] A. Kanagasundaram, D. B. Dean, S. Sridharan, and R. J. Vogt, "PLDA based speaker verification with weighted LDA techniques," in Proc. Odyssey 2012, Singapore, June 2012.
[7] J. Yang, C. Liang, L. Yang, H. Suo, J. Wang, and Y. Yan, "Factor analysis of Laplacian approach for speaker recognition," in Proc. IEEE ICASSP, Kyoto, Japan, March 2012.
[8] M. McLaren and D. Van Leeuwen, "Source-normalized LDA for robust speaker recognition using i-vectors from multiple speech sources," IEEE Trans. Audio Speech Lang. Process., vol. 20, no. 3.
[9] M. McLaren, N. Scheffer, M. Graciarena, L. Ferrer, and Y. Lei, "Improving speaker identification robustness to highly channel-degraded speech through multiple system fusion," in Proc. IEEE ICASSP, Vancouver, BC, May 2013.
[10] W. Zhu, S. Yaman, and J. Pelecanos, "The IBM RATS phase II speaker recognition system: overview and analysis," in Proc. INTERSPEECH, Lyon, France, August 2013.
[11] S. J. Prince and J. H. Elder, "Probabilistic linear discriminant analysis for inferences about identity," in Proc. IEEE ICCV, Rio de Janeiro, October 2007.
[12] P. Kenny, "Bayesian speaker verification with heavy tailed priors," in Proc. Odyssey 2010, Brno, Czech Republic, June 2010.
[13] D. Garcia-Romero and C. Y. Espy-Wilson, "Analysis of i-vector length normalization in speaker recognition systems," in Proc. INTERSPEECH, Florence, Italy, August 2011.
[14] K. Fukunaga and J. Mantock, "Nonparametric discriminant analysis," IEEE Trans. Pattern Anal. Mach. Intell., vol. 5, no. 6.
[15] M. Bressan and J. Vitrià, "Nonparametric discriminant analysis and nearest neighbor classification," Pattern Recognition Lett., vol. 24, no. 15.
[16] Z. Li, D. Lin, and X. Tang, "Nonparametric discriminant analysis for face recognition," IEEE Trans. Pattern Anal. Mach. Intell., vol. 31, no. 4.
[17] K. Walker and S. Strassel, "The RATS radio traffic collection system," in Proc. Odyssey 2012, Singapore, June 2012.
[18] R. Kuhn, J.-C. Junqua, P. Nguyen, and N. Niedzielski, "Rapid speaker adaptation in eigenvoice space," IEEE Trans. Speech Audio Process., vol. 8, no. 6.
[19] P. Kenny, G. Boulianne, and P. Dumouchel, "Eigenvoice modeling with sparse training data," IEEE Trans. Speech Audio Process., vol. 13, no. 3.
[20] M. E. Tipping and C. M. Bishop, "Probabilistic principal component analysis," Journal of the Royal Statistical Society: Series B (Statistical Methodology), vol. 61, no. 3.
[21] D. Matrouf, N. Scheffer, B. G. Fauve, and J.-F. Bonastre, "A straightforward and efficient implementation of the factor analysis model for speaker verification," in Proc. INTERSPEECH, Antwerp, Belgium, August 2007.
[22] K. Fukunaga, Introduction to Statistical Pattern Recognition, 2nd ed. New York: Academic Press.
[23] C. Kim and R. M. Stern, "Power-normalized cepstral coefficients (PNCC) for robust speech recognition," in Proc. IEEE ICASSP, Kyoto, Japan, March 2012.
More informationA Simple and Efficient Algorithm of 3-D Single-Source Localization with Uniform Cross Array Bing Xue 1 2 a) * Guangyou Fang 1 2 b and Yicai Ji 1 2 c)
A Simpe Efficient Agorithm of 3-D Singe-Source Locaization with Uniform Cross Array Bing Xue a * Guangyou Fang b Yicai Ji c Key Laboratory of Eectromagnetic Radiation Sensing Technoogy, Institute of Eectronics,
More informationarxiv: v1 [cs.lg] 31 Oct 2017
ACCELERATED SPARSE SUBSPACE CLUSTERING Abofaz Hashemi and Haris Vikao Department of Eectrica and Computer Engineering, University of Texas at Austin, Austin, TX, USA arxiv:7.26v [cs.lg] 3 Oct 27 ABSTRACT
More informationSVM-based Supervised and Unsupervised Classification Schemes
SVM-based Supervised and Unsupervised Cassification Schemes LUMINITA STATE University of Pitesti Facuty of Mathematics and Computer Science 1 Targu din Vae St., Pitesti 110040 ROMANIA state@cicknet.ro
More informationFormulas for Angular-Momentum Barrier Factors Version II
BNL PREPRINT BNL-QGS-06-101 brfactor1.tex Formuas for Anguar-Momentum Barrier Factors Version II S. U. Chung Physics Department, Brookhaven Nationa Laboratory, Upton, NY 11973 March 19, 2015 abstract A
More informationDiscriminant Analysis: A Unified Approach
Discriminant Anaysis: A Unified Approach Peng Zhang & Jing Peng Tuane University Eectrica Engineering & Computer Science Department New Oreans, LA 708 {zhangp,jp}@eecs.tuane.edu Norbert Riede Tuane University
More informationStatistical Learning Theory: a Primer
??,??, 1 6 (??) c?? Kuwer Academic Pubishers, Boston. Manufactured in The Netherands. Statistica Learning Theory: a Primer THEODOROS EVGENIOU AND MASSIMILIANO PONTIL Center for Bioogica and Computationa
More informationSTABILITY OF A PARAMETRICALLY EXCITED DAMPED INVERTED PENDULUM 1. INTRODUCTION
Journa of Sound and Vibration (996) 98(5), 643 65 STABILITY OF A PARAMETRICALLY EXCITED DAMPED INVERTED PENDULUM G. ERDOS AND T. SINGH Department of Mechanica and Aerospace Engineering, SUNY at Buffao,
More informationBayesian Learning. You hear a which which could equally be Thanks or Tanks, which would you go with?
Bayesian Learning A powerfu and growing approach in machine earning We use it in our own decision making a the time You hear a which which coud equay be Thanks or Tanks, which woud you go with? Combine
More informationSpeaker Verification Using Accumulative Vectors with Support Vector Machines
Speaker Verification Using Accumulative Vectors with Support Vector Machines Manuel Aguado Martínez, Gabriel Hernández-Sierra, and José Ramón Calvo de Lara Advanced Technologies Application Center, Havana,
More informationASummaryofGaussianProcesses Coryn A.L. Bailer-Jones
ASummaryofGaussianProcesses Coryn A.L. Baier-Jones Cavendish Laboratory University of Cambridge caj@mrao.cam.ac.uk Introduction A genera prediction probem can be posed as foows. We consider that the variabe
More informationSpeaker recognition by means of Deep Belief Networks
Speaker recognition by means of Deep Belief Networks Vasileios Vasilakakis, Sandro Cumani, Pietro Laface, Politecnico di Torino, Italy {first.lastname}@polito.it 1. Abstract Most state of the art speaker
More informationAdjustment of automatic control systems of production facilities at coal processing plants using multivariant physico- mathematical models
IO Conference Series: Earth and Environmenta Science AER OEN ACCESS Adjustment of automatic contro systems of production faciities at coa processing pants using mutivariant physico- mathematica modes To
More informationA. Distribution of the test statistic
A. Distribution of the test statistic In the sequentia test, we first compute the test statistic from a mini-batch of size m. If a decision cannot be made with this statistic, we keep increasing the mini-batch
More informationNEW DEVELOPMENT OF OPTIMAL COMPUTING BUDGET ALLOCATION FOR DISCRETE EVENT SIMULATION
NEW DEVELOPMENT OF OPTIMAL COMPUTING BUDGET ALLOCATION FOR DISCRETE EVENT SIMULATION Hsiao-Chang Chen Dept. of Systems Engineering University of Pennsyvania Phiadephia, PA 904-635, U.S.A. Chun-Hung Chen
More informationDIGITAL FILTER DESIGN OF IIR FILTERS USING REAL VALUED GENETIC ALGORITHM
DIGITAL FILTER DESIGN OF IIR FILTERS USING REAL VALUED GENETIC ALGORITHM MIKAEL NILSSON, MATTIAS DAHL AND INGVAR CLAESSON Bekinge Institute of Technoogy Department of Teecommunications and Signa Processing
More informationActive Learning & Experimental Design
Active Learning & Experimenta Design Danie Ting Heaviy modified, of course, by Lye Ungar Origina Sides by Barbara Engehardt and Aex Shyr Lye Ungar, University of Pennsyvania Motivation u Data coection
More informationSVM: Terminology 1(6) SVM: Terminology 2(6)
Andrew Kusiak Inteigent Systems Laboratory 39 Seamans Center he University of Iowa Iowa City, IA 54-57 SVM he maxima margin cassifier is simiar to the perceptron: It aso assumes that the data points are
More informationTraffic data collection
Chapter 32 Traffic data coection 32.1 Overview Unike many other discipines of the engineering, the situations that are interesting to a traffic engineer cannot be reproduced in a aboratory. Even if road
More informationFitting Algorithms for MMPP ATM Traffic Models
Fitting Agorithms for PP AT Traffic odes A. Nogueira, P. Savador, R. Vaadas University of Aveiro / Institute of Teecommunications, 38-93 Aveiro, Portuga; e-mai: (nogueira, savador, rv)@av.it.pt ABSTRACT
More informationSTA 216 Project: Spline Approach to Discrete Survival Analysis
: Spine Approach to Discrete Surviva Anaysis November 4, 005 1 Introduction Athough continuous surviva anaysis differs much from the discrete surviva anaysis, there is certain ink between the two modeing
More informationAlberto Maydeu Olivares Instituto de Empresa Marketing Dept. C/Maria de Molina Madrid Spain
CORRECTIONS TO CLASSICAL PROCEDURES FOR ESTIMATING THURSTONE S CASE V MODEL FOR RANKING DATA Aberto Maydeu Oivares Instituto de Empresa Marketing Dept. C/Maria de Moina -5 28006 Madrid Spain Aberto.Maydeu@ie.edu
More informationTwo view learning: SVM-2K, Theory and Practice
Two view earning: SVM-2K, Theory and Practice Jason D.R. Farquhar jdrf99r@ecs.soton.ac.uk Hongying Meng hongying@cs.york.ac.uk David R. Hardoon drh@ecs.soton.ac.uk John Shawe-Tayor jst@ecs.soton.ac.uk
More informationLecture 6: Moderately Large Deflection Theory of Beams
Structura Mechanics 2.8 Lecture 6 Semester Yr Lecture 6: Moderatey Large Defection Theory of Beams 6.1 Genera Formuation Compare to the cassica theory of beams with infinitesima deformation, the moderatey
More informationA Separability Index for Distance-based Clustering and Classification Algorithms
IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, VOL.?, NO.?, JANUARY 1? 1 A Separabiity Index for Distance-based Custering and Cassification Agorithms Arka P. Ghosh, Ranjan Maitra and Anna
More informationTwo-sample inference for normal mean vectors based on monotone missing data
Journa of Mutivariate Anaysis 97 (006 6 76 wwweseviercom/ocate/jmva Two-sampe inference for norma mean vectors based on monotone missing data Jianqi Yu a, K Krishnamoorthy a,, Maruthy K Pannaa b a Department
More informationLINEAR DETECTORS FOR MULTI-USER MIMO SYSTEMS WITH CORRELATED SPATIAL DIVERSITY
LINEAR DETECTORS FOR MULTI-USER MIMO SYSTEMS WITH CORRELATED SPATIAL DIVERSITY Laura Cottateucci, Raf R. Müer, and Mérouane Debbah Ist. of Teecommunications Research Dep. of Eectronics and Teecommunications
More informationDeakin Research Online
Deakin Research Onine This is the pubished version: Rana, Santu, Liu, Wanquan, Lazarescu, Mihai and Venkatesh, Svetha 2008, Recognising faces in unseen modes : a tensor based approach, in CVPR 2008 : Proceedings
More informationProcess Capability Proposal. with Polynomial Profile
Contemporary Engineering Sciences, Vo. 11, 2018, no. 85, 4227-4236 HIKARI Ltd, www.m-hikari.com https://doi.org/10.12988/ces.2018.88467 Process Capabiity Proposa with Poynomia Profie Roberto José Herrera
More informationIntroduction. Figure 1 W8LC Line Array, box and horn element. Highlighted section modelled.
imuation of the acoustic fied produced by cavities using the Boundary Eement Rayeigh Integra Method () and its appication to a horn oudspeaer. tephen Kirup East Lancashire Institute, Due treet, Bacburn,
More informationAnt Colony Algorithms for Constructing Bayesian Multi-net Classifiers
Ant Coony Agorithms for Constructing Bayesian Muti-net Cassifiers Khaid M. Saama and Aex A. Freitas Schoo of Computing, University of Kent, Canterbury, UK. {kms39,a.a.freitas}@kent.ac.uk December 5, 2013
More informationBayesian Unscented Kalman Filter for State Estimation of Nonlinear and Non-Gaussian Systems
Bayesian Unscented Kaman Fiter for State Estimation of Noninear and Non-aussian Systems Zhong Liu, Shing-Chow Chan, Ho-Chun Wu and iafei Wu Department of Eectrica and Eectronic Engineering, he University
More informationhttps://doi.org/ /epjconf/
HOW TO APPLY THE OPTIMAL ESTIMATION METHOD TO YOUR LIDAR MEASUREMENTS FOR IMPROVED RETRIEVALS OF TEMPERATURE AND COMPOSITION R. J. Sica 1,2,*, A. Haefee 2,1, A. Jaai 1, S. Gamage 1 and G. Farhani 1 1 Department
More informationTRAVEL TIME ESTIMATION FOR URBAN ROAD NETWORKS USING LOW FREQUENCY PROBE VEHICLE DATA
TRAVEL TIME ESTIMATIO FOR URBA ROAD ETWORKS USIG LOW FREQUECY PROBE VEHICLE DATA Erik Jeneius Corresponding author KTH Roya Institute of Technoogy Department of Transport Science Emai: erik.jeneius@abe.kth.se
More informationStatistical Inference, Econometric Analysis and Matrix Algebra
Statistica Inference, Econometric Anaysis and Matrix Agebra Bernhard Schipp Water Krämer Editors Statistica Inference, Econometric Anaysis and Matrix Agebra Festschrift in Honour of Götz Trenker Physica-Verag
More informationStatistics for Applications. Chapter 7: Regression 1/43
Statistics for Appications Chapter 7: Regression 1/43 Heuristics of the inear regression (1) Consider a coud of i.i.d. random points (X i,y i ),i =1,...,n : 2/43 Heuristics of the inear regression (2)
More informationComponentwise Determination of the Interval Hull Solution for Linear Interval Parameter Systems
Componentwise Determination of the Interva Hu Soution for Linear Interva Parameter Systems L. V. Koev Dept. of Theoretica Eectrotechnics, Facuty of Automatics, Technica University of Sofia, 1000 Sofia,
More informationTwo-Stage Least Squares as Minimum Distance
Two-Stage Least Squares as Minimum Distance Frank Windmeijer Discussion Paper 17 / 683 7 June 2017 Department of Economics University of Bristo Priory Road Compex Bristo BS8 1TU United Kingdom Two-Stage
More informationAST 418/518 Instrumentation and Statistics
AST 418/518 Instrumentation and Statistics Cass Website: http://ircamera.as.arizona.edu/astr_518 Cass Texts: Practica Statistics for Astronomers, J.V. Wa, and C.R. Jenkins, Second Edition. Measuring the
More informationNonlinear Gaussian Filtering via Radial Basis Function Approximation
51st IEEE Conference on Decision and Contro December 10-13 01 Maui Hawaii USA Noninear Gaussian Fitering via Radia Basis Function Approximation Huazhen Fang Jia Wang and Raymond A de Caafon Abstract This
More informationTurbo Codes. Coding and Communication Laboratory. Dept. of Electrical Engineering, National Chung Hsing University
Turbo Codes Coding and Communication Laboratory Dept. of Eectrica Engineering, Nationa Chung Hsing University Turbo codes 1 Chapter 12: Turbo Codes 1. Introduction 2. Turbo code encoder 3. Design of intereaver
More informationConsistent linguistic fuzzy preference relation with multi-granular uncertain linguistic information for solving decision making problems
Consistent inguistic fuzzy preference reation with muti-granuar uncertain inguistic information for soving decision making probems Siti mnah Binti Mohd Ridzuan, and Daud Mohamad Citation: IP Conference
More informationMARKOV CHAINS AND MARKOV DECISION THEORY. Contents
MARKOV CHAINS AND MARKOV DECISION THEORY ARINDRIMA DATTA Abstract. In this paper, we begin with a forma introduction to probabiity and expain the concept of random variabes and stochastic processes. After
More informationScalable Spectrum Allocation for Large Networks Based on Sparse Optimization
Scaabe Spectrum ocation for Large Networks ased on Sparse Optimization innan Zhuang Modem R&D Lab Samsung Semiconductor, Inc. San Diego, C Dongning Guo, Ermin Wei, and Michae L. Honig Department of Eectrica
More informationA Small Footprint i-vector Extractor
A Small Footprint i-vector Extractor Patrick Kenny Odyssey Speaker and Language Recognition Workshop June 25, 2012 1 / 25 Patrick Kenny A Small Footprint i-vector Extractor Outline Introduction Review
More informationUsually the estimation of the partition function is intractable and it becomes exponentially hard when the complexity of the model increases. However,
Odyssey 2012 The Speaker and Language Recognition Workshop 25-28 June 2012, Singapore First attempt of Boltzmann Machines for Speaker Verification Mohammed Senoussaoui 1,2, Najim Dehak 3, Patrick Kenny
More informationVI.G Exact free energy of the Square Lattice Ising model
VI.G Exact free energy of the Square Lattice Ising mode As indicated in eq.(vi.35), the Ising partition function is reated to a sum S, over coections of paths on the attice. The aowed graphs for a square
More informationNonlinear Analysis of Spatial Trusses
Noninear Anaysis of Spatia Trusses João Barrigó October 14 Abstract The present work addresses the noninear behavior of space trusses A formuation for geometrica noninear anaysis is presented, which incudes
More informationRobustness Techniques for Speech Recognition
Robustness Techniques for peech Recognition Berin Chen, 2004 References: 1. X. Huang et a. poken Language Processing (2001). Chapter 10 2. J. C. Junqua and J. P. Haton. Robustness in Automatic peech Recognition
More informationDo Schools Matter for High Math Achievement? Evidence from the American Mathematics Competitions Glenn Ellison and Ashley Swanson Online Appendix
VOL. NO. DO SCHOOLS MATTER FOR HIGH MATH ACHIEVEMENT? 43 Do Schoos Matter for High Math Achievement? Evidence from the American Mathematics Competitions Genn Eison and Ashey Swanson Onine Appendix Appendix
More informationBASIS COMPENSATION IN NON-NEGATIVE MATRIX FACTORIZATION MODEL FOR SPEECH ENHANCEMENT
BASIS COMPENSATION IN NON-NEGATIVE MATRIX FACTORIZATION MODEL FOR SPEECH ENHANCEMENT Hanwook Chung 1, Eric Pourde and Benoit Champagne 1 1 Dept. of Eectrica and Computer Engineering, McGi University, Montrea,
More informationRobustness Techniques for Speech Recognition
Robustness Techniques for peech Recognition Berin Chen, 2005 References: 1. X. Huang et a. poken Language Processing (2001). Chapter 10 2. J. C. Junqua and J. P. Haton. Robustness in Automatic peech Recognition
More informationMathematical Scheme Comparing of. the Three-Level Economical Systems
Appied Mathematica Sciences, Vo. 11, 2017, no. 15, 703-709 IKAI td, www.m-hikari.com https://doi.org/10.12988/ams.2017.7252 Mathematica Scheme Comparing of the Three-eve Economica Systems S.M. Brykaov
More informationTarget Location Estimation in Wireless Sensor Networks Using Binary Data
Target Location stimation in Wireess Sensor Networks Using Binary Data Ruixin Niu and Pramod K. Varshney Department of ectrica ngineering and Computer Science Link Ha Syracuse University Syracuse, NY 344
More informationFrom Margins to Probabilities in Multiclass Learning Problems
From Margins to Probabiities in Muticass Learning Probems Andrea Passerini and Massimiiano Ponti 2 and Paoo Frasconi 3 Abstract. We study the probem of muticass cassification within the framework of error
More informationA Separability Index for Distance-based Clustering and Classification Algorithms
A Separabiity Index for Distance-based Custering and Cassification Agorithms Arka P. Ghosh, Ranjan Maitra and Anna D. Peterson 1 Department of Statistics, Iowa State University, Ames, IA 50011-1210, USA.
More informationEvolutionary Product-Unit Neural Networks for Classification 1
Evoutionary Product-Unit Neura Networs for Cassification F.. Martínez-Estudio, C. Hervás-Martínez, P. A. Gutiérrez Peña A. C. Martínez-Estudio and S. Ventura-Soto Department of Management and Quantitative
More informationA Comparison Study of the Test for Right Censored and Grouped Data
Communications for Statistica Appications and Methods 2015, Vo. 22, No. 4, 313 320 DOI: http://dx.doi.org/10.5351/csam.2015.22.4.313 Print ISSN 2287-7843 / Onine ISSN 2383-4757 A Comparison Study of the
More informationEXEMPLAR-BASED VOICE CONVERSION USING NON-NEGATIVE SPECTROGRAM DECONVOLUTION
8th ISCA Speech Synthesis Worshop August 31 September 2, 2013 Barceona, Spain EXEMPLAR-BASED VOICE CONVERSION USING NON-NEGATIVE SPECTROGRAM DECONVOLUTION Zhizheng Wu 1,2, Tuomas Virtanen 3, Tomi Kinnunen
More information6.434J/16.391J Statistics for Engineers and Scientists May 4 MIT, Spring 2006 Handout #17. Solution 7
6.434J/16.391J Statistics for Engineers and Scientists May 4 MIT, Spring 2006 Handout #17 Soution 7 Probem 1: Generating Random Variabes Each part of this probem requires impementation in MATLAB. For the
More informationStochastic Complement Analysis of Multi-Server Threshold Queues. with Hysteresis. Abstract
Stochastic Compement Anaysis of Muti-Server Threshod Queues with Hysteresis John C.S. Lui The Dept. of Computer Science & Engineering The Chinese University of Hong Kong Leana Goubchik Dept. of Computer
More informationDetermining The Degree of Generalization Using An Incremental Learning Algorithm
Determining The Degree of Generaization Using An Incrementa Learning Agorithm Pabo Zegers Facutad de Ingeniería, Universidad de os Andes San Caros de Apoquindo 22, Las Condes, Santiago, Chie pzegers@uandes.c
More informationApproach to Identifying Raindrop Vibration Signal Detected by Optical Fiber
Sensors & Transducers, o. 6, Issue, December 3, pp. 85-9 Sensors & Transducers 3 by IFSA http://www.sensorsporta.com Approach to Identifying Raindrop ibration Signa Detected by Optica Fiber ongquan QU,
More informationUnconditional security of differential phase shift quantum key distribution
Unconditiona security of differentia phase shift quantum key distribution Kai Wen, Yoshihisa Yamamoto Ginzton Lab and Dept of Eectrica Engineering Stanford University Basic idea of DPS-QKD Protoco. Aice
More informationData Mining Technology for Failure Prognostic of Avionics
IEEE Transactions on Aerospace and Eectronic Systems. Voume 38, #, pp.388-403, 00. Data Mining Technoogy for Faiure Prognostic of Avionics V.A. Skormin, Binghamton University, Binghamton, NY, 1390, USA
More informationLimited magnitude error detecting codes over Z q
Limited magnitude error detecting codes over Z q Noha Earief choo of Eectrica Engineering and Computer cience Oregon tate University Corvais, OR 97331, UA Emai: earief@eecsorstedu Bea Bose choo of Eectrica
More informationDISTRIBUTION OF TEMPERATURE IN A SPATIALLY ONE- DIMENSIONAL OBJECT AS A RESULT OF THE ACTIVE POINT SOURCE
DISTRIBUTION OF TEMPERATURE IN A SPATIALLY ONE- DIMENSIONAL OBJECT AS A RESULT OF THE ACTIVE POINT SOURCE Yury Iyushin and Anton Mokeev Saint-Petersburg Mining University, Vasiievsky Isand, 1 st ine, Saint-Petersburg,
More informationSequential Decoding of Polar Codes with Arbitrary Binary Kernel
Sequentia Decoding of Poar Codes with Arbitrary Binary Kerne Vera Miosavskaya, Peter Trifonov Saint-Petersburg State Poytechnic University Emai: veram,petert}@dcn.icc.spbstu.ru Abstract The probem of efficient
More informationMoreau-Yosida Regularization for Grouped Tree Structure Learning
Moreau-Yosida Reguarization for Grouped Tree Structure Learning Jun Liu Computer Science and Engineering Arizona State University J.Liu@asu.edu Jieping Ye Computer Science and Engineering Arizona State
More informationTrainable fusion rules. I. Large sample size case
Neura Networks 19 (2006) 1506 1516 www.esevier.com/ocate/neunet Trainabe fusion rues. I. Large sampe size case Šarūnas Raudys Institute of Mathematics and Informatics, Akademijos 4, Vinius 08633, Lithuania
More informationAdaptive Noise Cancellation Using Deep Cerebellar Model Articulation Controller
daptive Noise Canceation Using Deep Cerebear Mode rticuation Controer Yu Tsao, Member, IEEE, Hao-Chun Chu, Shih-Wei an, Shih-Hau Fang, Senior Member, IEEE, Junghsi ee*, and Chih-Min in, Feow, IEEE bstract
More informationStructured sparsity for automatic music transcription
Structured sparsity for automatic music transcription O'Hanon, K; Nagano, H; Pumbey, MD; Internationa Conference on Acoustics, Speech and Signa Processing (ICASSP 2012) For additiona information about
More informationSimplified analysis of EXAFS data and determination of bond lengths
Indian Journa of Pure & Appied Physics Vo. 49, January 0, pp. 5-9 Simpified anaysis of EXAFS data and determination of bond engths A Mishra, N Parsai & B D Shrivastava * Schoo of Physics, Devi Ahiya University,
More informationGeneral Certificate of Education Advanced Level Examination June 2010
Genera Certificate of Education Advanced Leve Examination June 2010 Human Bioogy HBI6T/Q10/task Unit 6T A2 Investigative Skis Assignment Task Sheet The effect of using one or two eyes on the perception
More informationRelated Topics Maxwell s equations, electrical eddy field, magnetic field of coils, coil, magnetic flux, induced voltage
Magnetic induction TEP Reated Topics Maxwe s equations, eectrica eddy fied, magnetic fied of cois, coi, magnetic fux, induced votage Principe A magnetic fied of variabe frequency and varying strength is
More informationNumerical solution of one dimensional contaminant transport equation with variable coefficient (temporal) by using Haar wavelet
Goba Journa of Pure and Appied Mathematics. ISSN 973-1768 Voume 1, Number (16), pp. 183-19 Research India Pubications http://www.ripubication.com Numerica soution of one dimensiona contaminant transport
More informationFast Blind Recognition of Channel Codes
Fast Bind Recognition of Channe Codes Reza Moosavi and Erik G. Larsson Linköping University Post Print N.B.: When citing this work, cite the origina artice. 213 IEEE. Persona use of this materia is permitted.
More information