Speaker Adaptation Based on Sparse and Low-rank Eigenphone Matrix Estimation
INTERSPEECH 2014

Wen-Lin Zhang 1, Dan Qu 1, Wei-Qiang Zhang 2, Bi-Cheng Li 1
1 Zhengzhou Information Science and Technology Institute, Zhengzhou, China
2 Department of Electronic Engineering, Tsinghua University, Beijing, China
zwlin_2004@163.com, quanquan@sina.com, wqzhang@tsinghua.edu.cn, lbclm@163.com

Abstract

The eigenphone-based speaker adaptation method outperforms the conventional MLLR and eigenvoice methods when the adaptation data is sufficient, but it suffers from severe over-fitting when the adaptation data is limited. In this paper, l1 and nuclear norm regularization are applied simultaneously to obtain a more robust eigenphone estimate, resulting in a sparse and low-rank eigenphone matrix. The sparsity constraint reduces the number of free parameters, while the low-rank constraint limits the dimension of the phone variation subspace; both benefit generalization. Experimental results show that the proposed method improves adaptation performance substantially, especially when the amount of adaptation data is limited.

Index Terms: eigenphones, speaker adaptation, l1 regularization, nuclear norm regularization

1. Introduction

Model-space speaker adaptation is an important technique in modern speech recognition systems. Given some adaptation data, the parameters of a speaker-independent (SI) system are transformed to match the speaking pattern of an unknown speaker, resulting in a speaker-adapted (SA) system. To deal with the sparsity of the adaptation data, parameter-sharing schemes are usually adopted. For example, in the eigenvoice-based method [1], the speaker-dependent (SD) models are assumed to lie in a low-dimensional subspace, namely the speaker subspace. The subspace bases, i.e., the eigenvoices, are shared among all speakers. For each new speaker, a speaker-specific coordinate vector, namely the speaker factor, is estimated to obtain the SA model. The maximum likelihood linear regression (MLLR) method [2] estimates a set of
linear transformations to transform an SI model into a new SD model. Using regression class trees, the HMM state components can be grouped into regression classes, with each class sharing the same transformation matrix.

Recently, a novel phone-subspace-based method, i.e., the eigenphone-based method, was proposed [3]. Differently from speaker-subspace-based methods, the phone variation patterns of a speaker are assumed to lie in a low-dimensional subspace, called the phone variation subspace. The coordinates of the whole phone set are shared among different speakers. During speaker adaptation, a speaker-dependent eigenphone matrix, which represents the main phone variation patterns of a specific speaker, is estimated. Due to its more elaborate modeling, the eigenphone method performs better than both the eigenvoice and MLLR methods when sufficient adaptation data is available. However, with limited adaptation data, maximum likelihood estimation shows severe over-fitting, resulting in very bad adaptation performance [3]. Even with a finely tuned Gaussian prior, the eigenphone matrix estimated by the maximum a posteriori (MAP) criterion still does not match the performance of the eigenvoice method.

In machine learning, regularization techniques are widely employed to address the problems of data sparsity and model complexity. Recently, regularization has been widely adopted in speech processing and recognition applications. For instance, l1 and l2 regularization have been proposed for spectral de-noising in speech recognition [4, 5]. In [6], similar regularization methods are adopted to improve the estimation of state-specific parameters in the subspace Gaussian mixture model (SGMM). In [7], l1 regularization is used to reduce the number of nonzero connections of deep neural networks without sacrificing speech recognition performance.

In this paper, we investigate regularized estimation of the eigenphone matrix for speaker adaptation. l1 norm regularization is used to control the sparsity of the matrix, and nuclear norm
regularization forces the eigenphone matrix to be low-rank. The basic considerations are that sparsity can alleviate over-fitting and that low rank can automatically control the dimension of the phone variation subspace.

In the next section, a brief overview of the eigenphone-based speaker adaptation method is given. The use of l1 norm and nuclear norm regularization is described in Section 3, and the optimization of the sparse and low-rank eigenphone matrix is presented in Section 4. Finally, in Section 5, we present experiments on supervised speaker adaptation of a Mandarin tonal syllable recognition system.

2. Review of the eigenphone-based speaker adaptation method

Given a set of speaker-independent HMMs containing a total of M mixture components across all states and models and a D-dimensional speech feature vector, let µ_m, µ_m(s) and u_m(s) = µ_m(s) − µ_m denote the SI mean vector, the SD mean vector and the phone variation vector for speaker s and mixture component m, respectively. In the eigenphone-based speaker adaptation method, the phone variation vectors {u_m(s)}_{m=1}^{M} are assumed to lie in a speaker-dependent N-dimensional (N << M) phone variation subspace. Let v_0(s) and {v_i(s)}_{i=1}^{N} denote the origin and the basis vectors of speaker s's phone variation subspace, respectively; then each phone variation vector can be written as

u_m(s) = v_0(s) + sum_{n=1}^{N} l_{mn} v_n(s),    (1)

Copyright 2014 ISCA, September 2014, Singapore
where l_{mn} is the coefficient of component m corresponding to basis vector v_n(s). We call {v_i(s)}_{i=0}^{N} the eigenphones of speaker s and [l_{m1} l_{m2} ... l_{mN}]^T the phone coordinate vector of component m. The eigenphone decomposition of speaker s's phone variation matrix can be expressed by the following equation:

U(s) = [u_1(s) u_2(s) ... u_M(s)] = V(s) L,    (2)

where V(s) = [v_0(s) v_1(s) v_2(s) ... v_N(s)] and

L = [ 1      1      1      ...  1
      l_{11} l_{21} l_{31} ...  l_{M1}
      ...    ...    ...    ...  ...
      l_{1N} l_{2N} l_{3N} ...  l_{MN} ],

i.e., the m-th column of L is lhat_m = [1, l_{m1}, ..., l_{mN}]^T. Equation (2) can be viewed as the decomposition of the phone variation matrix U(s) into the product of two low-rank matrices V(s) and L. The eigenphone matrix V(s) is speaker dependent; it summarizes the main phone variation patterns of speaker s. The phone coordinate matrix L is speaker independent; it implicitly reflects the correlation between different Gaussian components. Given a set of training-speaker SD models, L can be obtained using principal component analysis (PCA) [3].

During speaker adaptation, given some adaptation data, the eigenphone matrix V(s) is estimated using the maximum likelihood criterion. Let O = {o(1), o(2), ..., o(T)} denote the sequence of feature vectors of the adaptation data. Using the expectation-maximization (EM) algorithm, the auxiliary function to be optimized is

Q(V(s)) = -(1/2) sum_t sum_m γ_m(t) [o(t) − µ_m(s)]^T Σ_m^{-1} [o(t) − µ_m(s)],    (3)

where µ_m(s) = µ_m + u_m(s), and γ_m(t) is the posterior probability of being in mixture m at time t given the observation sequence O and the current estimate of the SD model. Suppose the covariance matrix Σ_m is diagonal; let σ_{m,d} denote its d-th diagonal element, and let o_d(t), µ_{m,d} and v_{n,d}(s) denote the d-th components of o(t), µ_m and v_n(s), respectively. Then Equation (3) can be simplified to

Q(V(s)) = -(1/2) sum_t sum_m sum_d γ_m(t) σ_{m,d}^{-1} [o~_{m,d}(t) − lhat_m^T ν_d(s)]^2,    (4)

where o~_{m,d}(t) = o_d(t) − µ_{m,d}, lhat_m = [1, l_{m1}, l_{m2}, ..., l_{mN}]^T, and ν_d(s) = [v_{0,d}(s), v_{1,d}(s), v_{2,d}(s), ..., v_{N,d}(s)]^T is the d-th row of the eigenphone matrix V(s). Define

A_d = sum_t sum_m γ_m(t) σ_{m,d}^{-1} lhat_m lhat_m^T,
b_d = sum_t sum_m γ_m(t) σ_{m,d}^{-1} o~_{m,d}(t) lhat_m.

Equation (4) can then be further simplified to

Q(V(s)) = sum_d [ -(1/2) ν_d(s)^T A_d ν_d(s) + b_d^T ν_d(s) ] + Const.    (5)

Setting the derivative of (5) with respect to ν_d(s) to zero yields νhat_d(s) = A_d^{-1} b_d. Because different feature dimensions are independent, {νhat_d(s)}_{d=1}^{D} can be calculated in parallel very efficiently.

The size of the eigenphone matrix V(s) is (N + 1) × D, which gives more free parameters than the eigenvoice method; for the MLLR method with a global transformation matrix and a bias vector, the parameter count is (D + 1) × D. The eigenphone method is thus more flexible and elaborate. When sufficient adaptation data is available, better adaptation performance can be obtained, but when the adaptation data is limited, performance degrades quickly. The recognition rate can even fall below that of the unadapted SI system when very limited adaptation data is available. To alleviate the over-fitting problem, a Gaussian prior was assumed and a MAP adaptation method derived in [3]. In this paper, we address the problem using an explicit matrix regularization function.

3. Sparse and low-rank eigenphone matrix estimation

At the heart of the eigenphone adaptation method is the robust estimation of the eigenphone matrix V(s). This type of problem, i.e., the estimation of an unknown matrix from some observation data, appears frequently in the literature of diverse fields, and regularization has proven to be a valid method to overcome data scarcity. One widely used regularizer is the l1 norm. For the eigenphone matrix V(s), the matrix l1 norm can be written as ||V(s)||_1 = sum_d ||ν_d(s)||_1 = sum_d sum_n |v_{n,d}(s)|. l1 norm regularization, sometimes referred to as the lasso, drives an element-wise shrinkage of V(s) towards zero, thus leading to a sparse matrix solution. Recently, in many matrix estimation problems, such as matrix completion [8] and robust PCA [9], a nuclear norm regularizer has been used to obtain a low-rank solution. In fact, this approach is closely related to the
idea of using the l1 norm as a surrogate for sparsity, because low rank corresponds to sparsity of the vector of singular values, and the nuclear norm is the l1 norm of that vector. For the eigenphone matrix V(s), the nuclear norm can be written as ||V(s)||_* = sum_i κ_i, where the κ_i are the singular values of V(s).

In eigenphone-based speaker adaptation, the sparsity and low-rank constraints can be applied simultaneously to obtain a more robust estimate of the eigenphone matrix, for two reasons. Firstly, the sparsity constraint reduces the number of free parameters, which alleviates over-fitting. Secondly, when the adaptation data is insufficient, many speaker-specific phone variation patterns will not be observed, so a low-dimensional phone variation subspace should be assumed, i.e., the rank of the eigenphone matrix should be limited. Since the solutions of low-rank estimation problems are in general not sparse at all, we use a linear combination of the l1 and nuclear norms to obtain a simultaneously sparse and low-rank matrix [10]. The resulting regularized objective function, to be minimized, is

Q~(V(s)) = −Q(V(s)) + λ_1 ||V(s)||_1 + λ_2 ||V(s)||_*,    (6)

where λ_1, λ_2 > 0.

4. Optimization

There is no closed-form solution to the regularized objective function (6). Numerous approaches have been proposed in the literature to solve the l1 norm and nuclear norm penalized problems
separately. For the mixed-norm penalty problem, we adopt the incremental proximal descent algorithm [10, 11]. For a convex regularizer R(X), X in R^{m×n}, the proximal operator is defined as

prox_R(X) = arg min_Y (1/2) ||Y − X||_F^2 + R(Y),    (7)

where ||·||_F denotes the Frobenius norm of a matrix. The proximal operator of the l1 norm is the soft-thresholding operator

prox_{γ||·||_1}(X) = sgn(X) ⊙ (|X| − γ)_+,    (8)

where ⊙ denotes the Hadamard product of two matrices and (x)_+ = max{x, 0}; the sign function sgn, the product and the maximum are all taken component-wise. For the nuclear norm, the proximal operator is given by the following singular value shrinkage operation [11]: if X = P diag(ν_1, ν_2, ..., ν_n) Q^T is the singular value decomposition of X, then

prox_{γ||·||_*}(X) = P diag((ν_i − γ)_+) Q^T.    (9)

The proximity operator of a convex function is a natural extension of the notion of a projection operator onto a convex set. The incremental proximal descent algorithm [11] can be viewed as a natural extension of the iterated projection algorithm, which activates each convex set modeling a constraint individually by means of its projection operator. In this paper, an accelerated version of the incremental proximal descent algorithm is used to estimate the eigenphone matrix V(s); it can be summarized as follows.

Algorithm 1: Accelerated incremental proximal descent for sparse and low-rank eigenphone matrix estimation.
1:  θ ← θ_0                                      (initialize the descent step size)
2:  V ← Vhat                                     (Vhat is the closed-form solution of (5))
3:  Q_new ← −Q(V) + λ_1 ||V||_1 + λ_2 ||V||_*    (Equation (6))
4:  repeat
5:      Q_old ← Q_new, θ ← ηθ
6:      repeat                                   (search for the step size)
7:          V ← V − θ ∇_V(−Q(V))
8:          V ← prox_{θλ_1 ||·||_1}(V)
9:          V ← prox_{θλ_2 ||·||_*}(V)
10:         Q_new ← −Q(V) + λ_1 ||V||_1 + λ_2 ||V||_*
11:         if Q_new > Q_old then
12:             θ ← η^{−1} θ
13:         end if
14:     until Q_new < Q_old
15: until |Q_old − Q_new| / |Q_old| < ϵ

In Algorithm 1, ∇_V(−Q(V)) is the gradient of the negated (5), which can be easily computed per dimension as ∇_{ν_d(s)}(−Q(V)) = A_d ν_d(s) − b_d. Step 7 is the normal gradient descent step on the data term. In Steps 8 and 9, the proximal operators of the l1 norm and
nuclear norm are applied sequentially. The initial descent step size θ_0 can be set to the inverse of the Lipschitz constant [12] of the gradient of Q(V(s)). In this paper, to accelerate convergence, the descent step size is increased by a predefined factor η (η > 1) at each outer iteration (Step 5). From Step 6 to Step 14, we check the value of the regularized objective function (6) and reduce the step size by a factor of η^{−1} until the objective decreases. The whole procedure is iterated until the relative change of (6) is smaller than a predefined threshold ϵ (Step 15).

5. Experiments

Experiments were performed on a Mandarin Chinese continuous speech recognition task using the Microsoft speech corpus [13]. The training set contains 19,688 sentences from 100 speakers, with a total of 454,315 syllables (about 33 hours in total). The testing set consists of 25 speakers, each contributing 20 sentences (the average sentence length is about 5 seconds). All experiments were based on the standard HTK (v3.4.1) toolset [14]. The frame length and frame step size were set to 25 ms and 10 ms, respectively. Acoustic features were constructed from 13-dimensional Mel-frequency cepstral coefficients and their first and second derivatives. The basic units for acoustic modeling are the 27 initial and 157 tonal final units of Mandarin Chinese, as described in [13]. Monophone models were first created using all 19,688 sentences. Then all possible cross-syllable triphone expansions based on the full syllable dictionary were generated, resulting in 295,180 triphones, of which 95,534 actually occur in the training corpus. Each triphone was modeled by a 3-state left-to-right HMM without skips. After decision-tree-based state clustering, the number of unique tied states was reduced to 2,392. We then used HTK's Gaussian splitting capability to incrementally increase the number of Gaussian components per state to 8, resulting in 19,136 different Gaussian components in the SI model. The standard regression-class-tree-based MLLR method was used to obtain the 100
training speakers' SD models. HVite was used as the decoder with a fully connected syllable recognition network: all 1,679 tonal syllables are listed in the network, any syllable can be followed by any other syllable, and syllables may be separated by short pause or silence. This recognition task puts the highest demand on the quality of the acoustic models.

We drew 1, 2, 4, 6, 8 and 10 sentences randomly from each testing speaker for adaptation in supervised mode, and the tonal syllable recognition rate was measured on the remaining 10 sentences. To ensure statistical robustness of the results, each experiment was repeated 8 times using cross-validation and the recognition rates were averaged. The recognition accuracy of the SI model is 53.04% (the baseline reference result reported in [13] is 51.21%).

For the purpose of comparison, we carried out three experiments using the conventional MLLR + MAP, eigenvoice and eigenphone-based adaptation methods without regularization. For MLLR + MAP adaptation, we experimented with different parameter settings; the best result was obtained with a prior weighting factor of 10 (for MAP) and 32 regression classes with a 3-block-diagonal transformation matrix (for MLLR). For eigenvoice adaptation, the dimension K of the speaker subspace was varied from 10 to 100. For the eigenphone-based method, both the ML and MAP estimation schemes were tested. Adaptation results of the above methods are summarized in Table 1, where for the MAP eigenphone method λ denotes the prior weighting factor.

From Table 1, it can be observed that when the adaptation data is sufficient, the eigenphone-based method outperforms the MAP+MLLR method. But when the adaptation data is limited to 1 or 2 sentences (about 5 to 10 seconds), performance degradation emerges due to over-fitting; the situation is worse when a higher-dimensional eigenphone subspace is used. MAP estimation using a Gaussian prior (equivalent to an l2 regularization term) can alleviate over-fitting to some extent, but preventing the degradation requires a large prior weight,
which degrades the performance when the adaptation data is sufficient.
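The two proximal operators of Equations (8) and (9) in Section 4 admit a direct NumPy implementation. The following is an illustrative sketch (our own, not the authors' implementation; input shapes and the threshold γ are arbitrary):

```python
import numpy as np

def prox_l1(X, gamma):
    # Soft thresholding, Eq. (8): sgn(X) * (|X| - gamma)_+, all element-wise.
    return np.sign(X) * np.maximum(np.abs(X) - gamma, 0.0)

def prox_nuclear(X, gamma):
    # Singular value shrinkage, Eq. (9): shrink the singular values by gamma,
    # clip at zero, and reconstruct. Small singular values vanish, lowering rank.
    P, s, Qt = np.linalg.svd(X, full_matrices=False)
    return P @ np.diag(np.maximum(s - gamma, 0.0)) @ Qt
```

Applying prox_nuclear with a threshold larger than some singular values returns a strictly lower-rank matrix, which is how the nuclear norm term in (6) caps the dimension of the phone variation subspace.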
Table 1: Average tonal syllable recognition rate (%) after speaker adaptation using conventional methods, versus the number of adaptation sentences (rows: MAP+MLLR; eigenvoice with K = 10 to 100; ML eigenphone with N = 50 and N = 100; MAP eigenphone with N = 50 and N = 100 under several prior weights λ).

We tested the proposed method with different regularization parameters, with λ_1 varied between 0 and 100 and λ_2 varied between 0 and 200. Table 2 presents typical results. It can be observed that nuclear norm regularization alone (λ_1 = 0, λ_2 ≠ 0) improves the performance for both N = 50 and N = 100, especially when the adaptation data is limited to 1 or 2 sentences; a large weighting factor (λ_2 > 100) is needed to obtain the best recognition rates. We calculated the average rank of the eigenphone matrix V(s) (of dimension (N + 1) × D) over all testing speakers in each test. For 1 and 2 sentences, the average rank is smaller than the feature dimension (D = 39); when more adaptation data is provided, the average rank stays equal to 39. It can therefore be concluded that nuclear norm regularization effectively prevents the dimension of the phone variation subspace from becoming larger than necessary.

Compared with nuclear norm regularization, l1 regularization alone (λ_1 ≠ 0, λ_2 = 0) improves the performance further with a small weighting factor (λ_1 < 50). This can be attributed to the sparsity constraint, which reduces the number of free parameters and thus prevents the estimate of the eigenphone matrix from over-fitting. The larger the number of eigenphones N, the larger the weighting factor λ_1 needed to achieve the best performance. In all testing conditions, many elements of V(s) become zero, resulting in a sparse eigenphone matrix. When less adaptation data is provided or a larger weighting factor λ_1 is used, the eigenphone matrix becomes sparser, meaning that fewer free parameters are estimated.

Combining the l1 norm and the nuclear norm regularization, performance can be further improved. In this
situation, compared with using the nuclear norm regularization alone, a relatively small weighting factor of λ_2 < 30 is needed.

Table 2: Average tonal syllable recognition rate (%) after speaker adaptation based on sparse and low-rank eigenphone matrix estimation, versus the number of adaptation sentences, for (λ_1, λ_2) in {(0, 120), (0, 140), (0, 160), (20, 0), (20, 10), (20, 20), (30, 0), (30, 10), (30, 20)} with N = 50 and N = 100.

For 1-sentence (about 5 s) adaptation, the best result is 55.24% (at λ_1 = 20, λ_2 = 10 and N = 50), which is comparable to the best result obtained by the eigenvoice method (55.72% at K = 60). This is about a 1% relative improvement over l1 regularization alone (54.72% at λ_1 = 20, λ_2 = 0 and N = 50) and about a 2.4% relative improvement over the MAP eigenphone method (53.92% at σ² = 2000 and N = 100). For 2-sentence (about 10 s) adaptation, the best result is 57.24%, which is slightly better than the best result of the eigenvoice method (57.11% at K = 80). For 4 or more adaptation sentences, the performance is also improved compared with the ML eigenphone method. Even with 10 sentences (about 50 s) of adaptation data, the best result (61.44%) is better than those of the MAP (60.70%) and ML eigenphone methods (60.62%). Again, the average rank of the eigenphone matrix V(s) is smaller than 39 when there are only 1 or 2 adaptation sentences. It appears that the sparsity constraint plays the key role in the performance improvement, with the low-rank constraint a good complement.

6. Conclusion

In this paper, we investigated applying l1 and nuclear norm regularization simultaneously to improve the robustness of eigenphone matrix estimation in eigenphone-based speaker adaptation. The l1 regularization introduces sparseness and reduces the number of free parameters, thus alleviating over-fitting. The nuclear norm regularization forces the eigenphone matrix to be low-rank, thus preventing the dimension of the phone variation subspace from being higher than necessary. Their linear combination results in a simultaneously sparse and low-rank eigenphone matrix. From our experiments on a Mandarin Chinese syllable recognition task, we observed substantial performance improvement under all testing conditions compared with conventional methods.

7. Acknowledgements

This work was supported in part by the National Natural Science Foundation of China (No. and No. ).
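As a toy end-to-end recap of Sections 2 to 4, the sketch below builds synthetic per-dimension statistics A_d and b_d, initializes each row of V with the closed-form ML solution ν_d = A_d^{-1} b_d, and then runs a simplified version of Algorithm 1 with a fixed step size and no step-size search. All shapes, statistics and regularization weights are invented for illustration; this is not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
N, D = 5, 8  # toy number of eigenphones and feature dimensions

# Synthetic sufficient statistics: A_d is (N+1)x(N+1), b_d is (N+1).
A = np.stack([np.eye(N + 1) * (2.0 + d) for d in range(D)])
b = rng.normal(size=(D, N + 1))

# Closed-form ML estimate: row d of V is nu_d = A_d^{-1} b_d (Algorithm 1, line 2).
V = np.stack([np.linalg.solve(A[d], b[d]) for d in range(D)])  # shape (D, N+1)

def prox_l1(X, g):
    # Soft thresholding, Eq. (8).
    return np.sign(X) * np.maximum(np.abs(X) - g, 0.0)

def prox_nuclear(X, g):
    # Singular value shrinkage, Eq. (9).
    P, s, Qt = np.linalg.svd(X, full_matrices=False)
    return P @ np.diag(np.maximum(s - g, 0.0)) @ Qt

lam1, lam2, theta = 0.05, 0.05, 0.1  # arbitrary toy values
for _ in range(100):
    # Per-dimension gradient of the negated (5): A_d nu_d - b_d.
    grad = np.stack([A[d] @ V[d] - b[d] for d in range(D)])
    V = V - theta * grad                # Step 7: gradient descent on the data term
    V = prox_l1(V, theta * lam1)        # Step 8: l1 proximal step
    V = prox_nuclear(V, theta * lam2)   # Step 9: nuclear norm proximal step
```

The real algorithm additionally grows the step size by η per outer iteration and backtracks when the objective (6) increases; the fixed-step loop above only illustrates the alternation of gradient and proximal steps.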
8. References

[1] R. Kuhn, J.-C. Junqua, P. Nguyen, and N. Niedzielski, "Rapid speaker adaptation in eigenvoice space," IEEE Trans. Speech Audio Process., vol. 8, no. 6, Nov. 2000.
[2] M. J. F. Gales, "Maximum likelihood linear transformations for HMM-based speech recognition," Comput. Speech Lang., vol. 12, no. 2, pp. 75-98, Apr. 1998.
[3] W.-L. Zhang, W.-Q. Zhang, and B.-C. Li, "Speaker adaptation based on speaker-dependent eigenphone estimation," in Proc. of ASRU, Dec. 2011.
[4] Q. F. Tan, P. G. Georgiou, and S. S. Narayanan, "Enhanced sparse imputation techniques for a robust speech recognition front-end," IEEE Trans. Acoust., Speech, Signal Process., vol. 19, no. 8, Nov. 2011.
[5] Q. F. Tan and S. S. Narayanan, "Novel variations of group sparse regularization techniques with applications to noise robust automatic speech recognition," IEEE Trans. Acoust., Speech, Signal Process., vol. 20, no. 4, May 2012.
[6] L. Lu, A. Ghoshal, and S. Renals, "Regularized subspace Gaussian mixture models for speech recognition," IEEE Signal Process. Lett., vol. 18, no. 7, July 2011.
[7] D. Yu, F. Seide, G. Li, and L. Deng, "Exploiting sparseness in deep neural networks for large vocabulary speech recognition," in Proc. of ICASSP, Mar. 2012.
[8] J.-F. Cai, E. J. Candès, and Z. Shen, "A singular value thresholding algorithm for matrix completion," SIAM J. Optimization, vol. 20, no. 4, Jan. 2010.
[9] E. J. Candès, X. Li, Y. Ma, and J. Wright, "Robust principal component analysis?" J. ACM, vol. 58, no. 3, pp. 11:1-11:37, May 2011.
[10] E. Richard and P.-A. Savalle, "Estimation of simultaneously sparse and low rank matrices," in Proc. of ICML, July 2012.
[11] D. P. Bertsekas, "Incremental proximal methods for large scale convex optimization," Math. Program., vol. 129, no. 2, Oct. 2011.
[12] K.-C. Toh and S. Yun, "An accelerated proximal gradient algorithm for nuclear norm regularized linear least squares problems," Pacific J. Optim., vol. 6, no. 3, 2010.
[13] E. Chang, Y. Shi, J. Zhou et al., "Speech lab in a box: a Mandarin speech toolbox to jumpstart speech related research," in Proc. of Eurospeech, 2001.
[14] S. Young, G. Evermann, M. Gales et al., The HTK Book (for HTK Version 3.4).
More informationTable of Common Derivatives By David Abraham
Prouct an Quotient Rules: Table of Common Derivatives By Davi Abraham [ f ( g( ] = [ f ( ] g( + f ( [ g( ] f ( = g( [ f ( ] g( g( f ( [ g( ] Trigonometric Functions: sin( = cos( cos( = sin( tan( = sec
More informationMonte Carlo Methods with Reduced Error
Monte Carlo Methos with Reuce Error As has been shown, the probable error in Monte Carlo algorithms when no information about the smoothness of the function is use is Dξ r N = c N. It is important for
More informationON THE OPTIMALITY SYSTEM FOR A 1 D EULER FLOW PROBLEM
ON THE OPTIMALITY SYSTEM FOR A D EULER FLOW PROBLEM Eugene M. Cliff Matthias Heinkenschloss y Ajit R. Shenoy z Interisciplinary Center for Applie Mathematics Virginia Tech Blacksburg, Virginia 46 Abstract
More informationThermal conductivity of graded composites: Numerical simulations and an effective medium approximation
JOURNAL OF MATERIALS SCIENCE 34 (999)5497 5503 Thermal conuctivity of grae composites: Numerical simulations an an effective meium approximation P. M. HUI Department of Physics, The Chinese University
More informationPDE Notes, Lecture #11
PDE Notes, Lecture # from Professor Jalal Shatah s Lectures Febuary 9th, 2009 Sobolev Spaces Recall that for u L loc we can efine the weak erivative Du by Du, φ := udφ φ C0 If v L loc such that Du, φ =
More informationFast image compression using matrix K-L transform
Fast image compression using matrix K-L transform Daoqiang Zhang, Songcan Chen * Department of Computer Science an Engineering, Naning University of Aeronautics & Astronautics, Naning 2006, P.R. China.
More informationFLUCTUATIONS IN THE NUMBER OF POINTS ON SMOOTH PLANE CURVES OVER FINITE FIELDS. 1. Introduction
FLUCTUATIONS IN THE NUMBER OF POINTS ON SMOOTH PLANE CURVES OVER FINITE FIELDS ALINA BUCUR, CHANTAL DAVID, BROOKE FEIGON, MATILDE LALÍN 1 Introuction In this note, we stuy the fluctuations in the number
More informationLectures - Week 10 Introduction to Ordinary Differential Equations (ODES) First Order Linear ODEs
Lectures - Week 10 Introuction to Orinary Differential Equations (ODES) First Orer Linear ODEs When stuying ODEs we are consiering functions of one inepenent variable, e.g., f(x), where x is the inepenent
More informationMonaural speech separation using source-adapted models
Monaural speech separation using source-adapted models Ron Weiss, Dan Ellis {ronw,dpwe}@ee.columbia.edu LabROSA Department of Electrical Enginering Columbia University 007 IEEE Workshop on Applications
More informationDiscrete Mathematics
Discrete Mathematics 309 (009) 86 869 Contents lists available at ScienceDirect Discrete Mathematics journal homepage: wwwelseviercom/locate/isc Profile vectors in the lattice of subspaces Dániel Gerbner
More informationLATTICE-BASED D-OPTIMUM DESIGN FOR FOURIER REGRESSION
The Annals of Statistics 1997, Vol. 25, No. 6, 2313 2327 LATTICE-BASED D-OPTIMUM DESIGN FOR FOURIER REGRESSION By Eva Riccomagno, 1 Rainer Schwabe 2 an Henry P. Wynn 1 University of Warwick, Technische
More informationTHE EFFICIENCIES OF THE SPATIAL MEDIAN AND SPATIAL SIGN COVARIANCE MATRIX FOR ELLIPTICALLY SYMMETRIC DISTRIBUTIONS
THE EFFICIENCIES OF THE SPATIAL MEDIAN AND SPATIAL SIGN COVARIANCE MATRIX FOR ELLIPTICALLY SYMMETRIC DISTRIBUTIONS BY ANDREW F. MAGYAR A issertation submitte to the Grauate School New Brunswick Rutgers,
More informationSYNCHRONOUS SEQUENTIAL CIRCUITS
CHAPTER SYNCHRONOUS SEUENTIAL CIRCUITS Registers an counters, two very common synchronous sequential circuits, are introuce in this chapter. Register is a igital circuit for storing information. Contents
More informationA Review of Multiple Try MCMC algorithms for Signal Processing
A Review of Multiple Try MCMC algorithms for Signal Processing Luca Martino Image Processing Lab., Universitat e València (Spain) Universia Carlos III e Mari, Leganes (Spain) Abstract Many applications
More informationLecture 2 Lagrangian formulation of classical mechanics Mechanics
Lecture Lagrangian formulation of classical mechanics 70.00 Mechanics Principle of stationary action MATH-GA To specify a motion uniquely in classical mechanics, it suffices to give, at some time t 0,
More informationOptimization of Geometries by Energy Minimization
Optimization of Geometries by Energy Minimization by Tracy P. Hamilton Department of Chemistry University of Alabama at Birmingham Birmingham, AL 3594-140 hamilton@uab.eu Copyright Tracy P. Hamilton, 1997.
More information'HVLJQ &RQVLGHUDWLRQ LQ 0DWHULDO 6HOHFWLRQ 'HVLJQ 6HQVLWLYLW\,1752'8&7,21
Large amping in a structural material may be either esirable or unesirable, epening on the engineering application at han. For example, amping is a esirable property to the esigner concerne with limiting
More informationSwitching Time Optimization in Discretized Hybrid Dynamical Systems
Switching Time Optimization in Discretize Hybri Dynamical Systems Kathrin Flaßkamp, To Murphey, an Sina Ober-Blöbaum Abstract Switching time optimization (STO) arises in systems that have a finite set
More informationSeparation of Variables
Physics 342 Lecture 1 Separation of Variables Lecture 1 Physics 342 Quantum Mechanics I Monay, January 25th, 2010 There are three basic mathematical tools we nee, an then we can begin working on the physical
More informationSYMMETRIC KRONECKER PRODUCTS AND SEMICLASSICAL WAVE PACKETS
SYMMETRIC KRONECKER PRODUCTS AND SEMICLASSICAL WAVE PACKETS GEORGE A HAGEDORN AND CAROLINE LASSER Abstract We investigate the iterate Kronecker prouct of a square matrix with itself an prove an invariance
More informationShort Intro to Coordinate Transformation
Short Intro to Coorinate Transformation 1 A Vector A vector can basically be seen as an arrow in space pointing in a specific irection with a specific length. The following problem arises: How o we represent
More informationLinear Methods for Regression. Lijun Zhang
Linear Methods for Regression Lijun Zhang zlj@nju.edu.cn http://cs.nju.edu.cn/zlj Outline Introduction Linear Regression Models and Least Squares Subset Selection Shrinkage Methods Methods Using Derived
More informationCS9840 Learning and Computer Vision Prof. Olga Veksler. Lecture 2. Some Concepts from Computer Vision Curse of Dimensionality PCA
CS9840 Learning an Computer Vision Prof. Olga Veksler Lecture Some Concepts from Computer Vision Curse of Dimensionality PCA Some Slies are from Cornelia, Fermüller, Mubarak Shah, Gary Braski, Sebastian
More informationSituation awareness of power system based on static voltage security region
The 6th International Conference on Renewable Power Generation (RPG) 19 20 October 2017 Situation awareness of power system base on static voltage security region Fei Xiao, Zi-Qing Jiang, Qian Ai, Ran
More informationSpurious Significance of Treatment Effects in Overfitted Fixed Effect Models Albrecht Ritschl 1 LSE and CEPR. March 2009
Spurious Significance of reatment Effects in Overfitte Fixe Effect Moels Albrecht Ritschl LSE an CEPR March 2009 Introuction Evaluating subsample means across groups an time perios is common in panel stuies
More informationIntroduction to the Vlasov-Poisson system
Introuction to the Vlasov-Poisson system Simone Calogero 1 The Vlasov equation Consier a particle with mass m > 0. Let x(t) R 3 enote the position of the particle at time t R an v(t) = ẋ(t) = x(t)/t its
More informationOne-dimensional I test and direction vector I test with array references by induction variable
Int. J. High Performance Computing an Networking, Vol. 3, No. 4, 2005 219 One-imensional I test an irection vector I test with array references by inuction variable Minyi Guo School of Computer Science
More informationA PAC-Bayesian Approach to Spectrally-Normalized Margin Bounds for Neural Networks
A PAC-Bayesian Approach to Spectrally-Normalize Margin Bouns for Neural Networks Behnam Neyshabur, Srinah Bhojanapalli, Davi McAllester, Nathan Srebro Toyota Technological Institute at Chicago {bneyshabur,
More informationTEMPORAL AND TIME-FREQUENCY CORRELATION-BASED BLIND SOURCE SEPARATION METHODS. Yannick DEVILLE
TEMPORAL AND TIME-FREQUENCY CORRELATION-BASED BLIND SOURCE SEPARATION METHODS Yannick DEVILLE Université Paul Sabatier Laboratoire Acoustique, Métrologie, Instrumentation Bât. 3RB2, 8 Route e Narbonne,
More informationAn Optimal Algorithm for Bandit and Zero-Order Convex Optimization with Two-Point Feedback
Journal of Machine Learning Research 8 07) - Submitte /6; Publishe 5/7 An Optimal Algorithm for Banit an Zero-Orer Convex Optimization with wo-point Feeback Oha Shamir Department of Computer Science an
More informationAn Analytical Expression of the Probability of Error for Relaying with Decode-and-forward
An Analytical Expression of the Probability of Error for Relaying with Decoe-an-forwar Alexanre Graell i Amat an Ingmar Lan Department of Electronics, Institut TELECOM-TELECOM Bretagne, Brest, France Email:
More informationCMA-ES with Optimal Covariance Update and Storage Complexity
CMA-ES with Optimal Covariance Upate an Storage Complexity Oswin Krause Dept. of Computer Science University of Copenhagen Copenhagen, Denmark oswin.krause@i.ku.k Díac R. Arbonès Dept. of Computer Science
More informationSurvey Sampling. 1 Design-based Inference. Kosuke Imai Department of Politics, Princeton University. February 19, 2013
Survey Sampling Kosuke Imai Department of Politics, Princeton University February 19, 2013 Survey sampling is one of the most commonly use ata collection methos for social scientists. We begin by escribing
More informationProof of SPNs as Mixture of Trees
A Proof of SPNs as Mixture of Trees Theorem 1. If T is an inuce SPN from a complete an ecomposable SPN S, then T is a tree that is complete an ecomposable. Proof. Argue by contraiction that T is not a
More informationExperiments with a Gaussian Merging-Splitting Algorithm for HMM Training for Speech Recognition
Experiments with a Gaussian Merging-Splitting Algorithm for HMM Training for Speech Recognition ABSTRACT It is well known that the expectation-maximization (EM) algorithm, commonly used to estimate hidden
More informationHyperbolic Moment Equations Using Quadrature-Based Projection Methods
Hyperbolic Moment Equations Using Quarature-Base Projection Methos J. Koellermeier an M. Torrilhon Department of Mathematics, RWTH Aachen University, Aachen, Germany Abstract. Kinetic equations like the
More informationHeeyoul (Henry) Choi. Dept. of Computer Science Texas A&M University
Heeyoul (Henry) Choi Dept. of Computer Science Texas A&M University hchoi@cs.tamu.edu Introduction Speaker Adaptation Eigenvoice Comparison with others MAP, MLLR, EMAP, RMP, CAT, RSW Experiments Future
More informationA Modification of the Jarque-Bera Test. for Normality
Int. J. Contemp. Math. Sciences, Vol. 8, 01, no. 17, 84-85 HIKARI Lt, www.m-hikari.com http://x.oi.org/10.1988/ijcms.01.9106 A Moification of the Jarque-Bera Test for Normality Moawa El-Fallah Ab El-Salam
More informationIEEE TRANSACTIONS ON AUDIO, SPEECH, AND LANGUAGE PROCESSING, VOL. 21, NO. 9, SEPTEMBER
IEEE TRANSACTIONS ON AUDIO, SPEECH, AND LANGUAGE PROCESSING, VOL. 21, NO. 9, SEPTEMBER 2013 1791 Joint Uncertainty Decoding for Noise Robust Subspace Gaussian Mixture Models Liang Lu, Student Member, IEEE,
More informationLower Bounds for the Smoothed Number of Pareto optimal Solutions
Lower Bouns for the Smoothe Number of Pareto optimal Solutions Tobias Brunsch an Heiko Röglin Department of Computer Science, University of Bonn, Germany brunsch@cs.uni-bonn.e, heiko@roeglin.org Abstract.
More informationAgmon Kolmogorov Inequalities on l 2 (Z d )
Journal of Mathematics Research; Vol. 6, No. ; 04 ISSN 96-9795 E-ISSN 96-9809 Publishe by Canaian Center of Science an Eucation Agmon Kolmogorov Inequalities on l (Z ) Arman Sahovic Mathematics Department,
More information. Using a multinomial model gives us the following equation for P d. , with respect to same length term sequences.
S 63 Lecture 8 2/2/26 Lecturer Lillian Lee Scribes Peter Babinski, Davi Lin Basic Language Moeling Approach I. Special ase of LM-base Approach a. Recap of Formulas an Terms b. Fixing θ? c. About that Multinomial
More informationEVALUATING HIGHER DERIVATIVE TENSORS BY FORWARD PROPAGATION OF UNIVARIATE TAYLOR SERIES
MATHEMATICS OF COMPUTATION Volume 69, Number 231, Pages 1117 1130 S 0025-5718(00)01120-0 Article electronically publishe on February 17, 2000 EVALUATING HIGHER DERIVATIVE TENSORS BY FORWARD PROPAGATION
More informationThe derivative of a function f(x) is another function, defined in terms of a limiting expression: f(x + δx) f(x)
Y. D. Chong (2016) MH2801: Complex Methos for the Sciences 1. Derivatives The erivative of a function f(x) is another function, efine in terms of a limiting expression: f (x) f (x) lim x δx 0 f(x + δx)
More informationComputing Exact Confidence Coefficients of Simultaneous Confidence Intervals for Multinomial Proportions and their Functions
Working Paper 2013:5 Department of Statistics Computing Exact Confience Coefficients of Simultaneous Confience Intervals for Multinomial Proportions an their Functions Shaobo Jin Working Paper 2013:5
More informationConcentration of Measure Inequalities for Compressive Toeplitz Matrices with Applications to Detection and System Identification
Concentration of Measure Inequalities for Compressive Toeplitz Matrices with Applications to Detection an System Ientification Borhan M Sananaji, Tyrone L Vincent, an Michael B Wakin Abstract In this paper,
More informationSparse Reconstruction of Systems of Ordinary Differential Equations
Sparse Reconstruction of Systems of Orinary Differential Equations Manuel Mai a, Mark D. Shattuck b,c, Corey S. O Hern c,a,,e, a Department of Physics, Yale University, New Haven, Connecticut 06520, USA
More informationAll s Well That Ends Well: Supplementary Proofs
All s Well That Ens Well: Guarantee Resolution of Simultaneous Rigi Boy Impact 1:1 All s Well That Ens Well: Supplementary Proofs This ocument complements the paper All s Well That Ens Well: Guarantee
More informationAnalytic Scaling Formulas for Crossed Laser Acceleration in Vacuum
October 6, 4 ARDB Note Analytic Scaling Formulas for Crosse Laser Acceleration in Vacuum Robert J. Noble Stanfor Linear Accelerator Center, Stanfor University 575 San Hill Roa, Menlo Park, California 945
More informationHyperbolic Systems of Equations Posed on Erroneous Curved Domains
Hyperbolic Systems of Equations Pose on Erroneous Curve Domains Jan Norström a, Samira Nikkar b a Department of Mathematics, Computational Mathematics, Linköping University, SE-58 83 Linköping, Sween (
More informationThe Principle of Least Action and Designing Fiber Optics
University of Southampton Department of Physics & Astronomy Year 2 Theory Labs The Principle of Least Action an Designing Fiber Optics 1 Purpose of this Moule We will be intereste in esigning fiber optic
More informationSurvey-weighted Unit-Level Small Area Estimation
Survey-weighte Unit-Level Small Area Estimation Jan Pablo Burgar an Patricia Dörr Abstract For evience-base regional policy making, geographically ifferentiate estimates of socio-economic inicators are
More informationPart I: Web Structure Mining Chapter 1: Information Retrieval and Web Search
Part I: Web Structure Mining Chapter : Information Retrieval an Web Search The Web Challenges Crawling the Web Inexing an Keywor Search Evaluating Search Quality Similarity Search The Web Challenges Tim
More informationTRAJECTORY TRACKING FOR FULLY ACTUATED MECHANICAL SYSTEMS
TRAJECTORY TRACKING FOR FULLY ACTUATED MECHANICAL SYSTEMS Francesco Bullo Richar M. Murray Control an Dynamical Systems California Institute of Technology Pasaena, CA 91125 Fax : + 1-818-796-8914 email
More informationLeft-invariant extended Kalman filter and attitude estimation
Left-invariant extene Kalman filter an attitue estimation Silvere Bonnabel Abstract We consier a left-invariant ynamics on a Lie group. One way to efine riving an observation noises is to make them preserve
More informationCUSTOMER REVIEW FEATURE EXTRACTION Heng Ren, Jingye Wang, and Tony Wu
CUSTOMER REVIEW FEATURE EXTRACTION Heng Ren, Jingye Wang, an Tony Wu Abstract Popular proucts often have thousans of reviews that contain far too much information for customers to igest. Our goal for the
More information11.7. Implicit Differentiation. Introduction. Prerequisites. Learning Outcomes
Implicit Differentiation 11.7 Introuction This Section introuces implicit ifferentiation which is use to ifferentiate functions expresse in implicit form (where the variables are foun together). Examples
More informationNeural Network Training By Gradient Descent Algorithms: Application on the Solar Cell
ISSN: 319-8753 Neural Networ Training By Graient Descent Algorithms: Application on the Solar Cell Fayrouz Dhichi*, Benyounes Ouarfi Department of Electrical Engineering, EEA&TI laboratory, Faculty of
More informationAnalyzing Tensor Power Method Dynamics in Overcomplete Regime
Journal of Machine Learning Research 18 (2017) 1-40 Submitte 9/15; Revise 11/16; Publishe 4/17 Analyzing Tensor Power Metho Dynamics in Overcomplete Regime Animashree Ananumar Department of Electrical
More informationPure Further Mathematics 1. Revision Notes
Pure Further Mathematics Revision Notes June 20 2 FP JUNE 20 SDB Further Pure Complex Numbers... 3 Definitions an arithmetical operations... 3 Complex conjugate... 3 Properties... 3 Complex number plane,
More informationJUST THE MATHS UNIT NUMBER DIFFERENTIATION 2 (Rates of change) A.J.Hobson
JUST THE MATHS UNIT NUMBER 10.2 DIFFERENTIATION 2 (Rates of change) by A.J.Hobson 10.2.1 Introuction 10.2.2 Average rates of change 10.2.3 Instantaneous rates of change 10.2.4 Derivatives 10.2.5 Exercises
More informationSection 2.1 The Derivative and the Tangent Line Problem
Chapter 2 Differentiation Course Number Section 2.1 The Derivative an the Tangent Line Problem Objective: In this lesson you learne how to fin the erivative of a function using the limit efinition an unerstan
More information