Extended SMART Algorithms for Non-Negative Matrix Factorization


Andrzej CICHOCKI (1), Shun-ichi AMARI (2), Rafal ZDUNEK (1), Raul KOMPASS (1), Gen HORI (1) and Zhaohui HE (1)

Invited Paper

(1) Laboratory for Advanced Brain Signal Processing, (2) Amari Research Unit for Mathematical Neuroscience, BSI RIKEN, Wako-shi, Japan

(Affiliation footnotes: On leave from Warsaw University of Technology, Poland. On leave from the Institute of Telecommunications, Teleinformatics and Acoustics, Wroclaw University of Technology, Poland. Freie Universität Berlin, Germany. On leave from the South China University, Guangzhou, China.)

Abstract. In this paper we derive a family of new extended SMART (Simultaneous Multiplicative Algebraic Reconstruction Technique) algorithms for Non-negative Matrix Factorization (NMF). The proposed algorithms are characterized by improved efficiency and convergence rate and can be applied to various distributions of data and additive noise. Information theory and information geometry play key roles in the derivation of the new algorithms. We discuss several loss functions used in information theory which allow us to obtain generalized forms of multiplicative NMF learning (adaptive) algorithms. We also provide flexible and relaxed forms of the NMF algorithms to increase convergence speed and to impose an additional sparsity constraint. The scope of these results is vast, since the discussed generalized divergence functions include a large number of useful loss functions such as the Amari α-divergence, relative entropy, Bose-Einstein divergence, Jensen-Shannon divergence, J-divergence, Arithmetic-Geometric (AG) Taneja divergence, etc. We applied the developed algorithms successfully to Blind (or semi-blind) Source Separation (BSS), where the sources may be statistically dependent in general but are subject to additional constraints such as nonnegativity and sparsity. Moreover, we applied a novel multilayer NMF strategy which improves the performance of most of the proposed algorithms.

1 Introduction and Problem Formulation

NMF (Non-negative Matrix Factorization), also called PMF (Positive Matrix Factorization), is an emerging technique for data mining, dimensionality reduction, pattern recognition, object detection, classification, gene clustering, sparse nonnegative representation and coding, and blind source separation (BSS) [1, 2, 3, 4, 5, 6].

NMF, first introduced by Paatero and Tapper and further investigated by many researchers [7, 8, 9, 10, 4, 11, 12], does not assume explicitly or implicitly sparseness, smoothness or mutual statistical independence of the hidden (latent) components; however, it usually provides a quite sparse decomposition [1, 13, 9, 5]. NMF has already found a wide spectrum of applications in PET, spectroscopy, chemometrics and environmental science, where the matrices have clear physical meanings and some normalization or constraints are imposed on them (for example, the matrix A has columns normalized to unit length) [7, 2, 3, 5, 14, 15]. Recently, we have applied NMF with temporal smoothness and spatial constraints to improve the analysis of EEG data for early detection of Alzheimer's disease [16]. The NMF approach is promising in many applications from engineering to neuroscience, since it is designed to capture alternative structures inherent in the data and, possibly, to provide more biological insight. Lee and Seung introduced NMF in its modern formulation as a method to decompose patterns or images [1, 13].

NMF decomposes the data matrix Y = [y(1), y(2), ..., y(N)] ∈ R^{m×N} as a product of two matrices A ∈ R^{m×n} and X = [x(1), x(2), ..., x(N)] ∈ R^{n×N} having only non-negative elements. Although some decompositions or matrix factorizations provide an exact reconstruction of the data (i.e., Y = A X), we shall consider here decompositions which are approximative in nature, i.e.,

Y = A X + V,   A ≥ 0,   X ≥ 0,   (1)

or equivalently y(k) = A x(k) + v(k), k = 1, 2, ..., N, or in scalar form y_{ik} = Σ_{j=1}^n a_{ij} x_{jk} + ν_{ik}, i = 1, ..., m, with a_{ij} ≥ 0 and x_{jk} ≥ 0, where V ∈ R^{m×N} represents the noise or error matrix (depending on the application), y(k) = [y_1(k), ..., y_m(k)]^T is a vector of the observed signals (typically positive) at the discrete time instant k (see the footnote below), while x(k) = [x_1(k), ..., x_n(k)]^T is a vector of nonnegative components or source signals at the same time instant [17]. Due to the additive noise the observed data might sometimes take negative values. In such a case we apply the following approximation: ŷ_{ik} = y_{ik} if y_{ik} is positive, and otherwise ŷ_{ik} = ε, where ε is a small positive constant. Our objective is to estimate the mixing (basis) matrix A and the sources X subject to nonnegativity constraints on all entries of A and X. Usually, in BSS applications it is assumed that N >> m ≥ n and that n is known or can be relatively easily estimated using SVD or PCA. Throughout this paper we use the following notation: x_j(k) = x_{jk}, y_i(k) = y_{ik}, z_{ik} = [A X]_{ik} denotes the ik-th element of the matrix A X, and the ij-th element of the matrix A is denoted by a_{ij}. The main objective of this contribution is to derive a family of new flexible and improved NMF algorithms that allow us to generalize or combine different criteria in order to extract physically meaningful sources, especially for biomedical signal applications such as EEG and MEG.

(Footnote: The data are often represented not in the time domain but in a transform domain such as the time-frequency domain, so the index k may have different meanings.)
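To make the data model (1) concrete, the following short numpy sketch (an illustration only; the matrix sizes, noise level and variable names are our own assumptions, not taken from the paper) generates synthetic nonnegative factors, forms Y = A X + V, and applies the ε-replacement of negative observations described above.

```python
import numpy as np

rng = np.random.default_rng(0)

m, n, N = 10, 5, 1000                      # sensors, sources, samples (arbitrary)
A_true = rng.uniform(0.0, 1.0, (m, n))     # nonnegative mixing (basis) matrix
X_true = rng.uniform(0.0, 1.0, (n, N))     # nonnegative source components
V = 0.05 * rng.standard_normal((m, N))     # additive noise

Y = A_true @ X_true + V                    # data model (1): Y = A X + V

# Negative entries caused by the noise are replaced by a small positive eps,
# as suggested in the text.
eps = 1e-16
Y = np.where(Y > 0, Y, eps)
```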

2 Extended Lee-Seung Algorithms and Fixed Point Algorithms

Although the standard NMF (without any auxiliary constraints) provides sparseness of its components, we can achieve some control of this sparsity, as well as smoothness of the components, by imposing additional constraints on top of the non-negativity constraints. In fact, we can incorporate smoothness or sparsity constraints in several ways [9]. One simple approach is to implement in each iteration step a nonlinear projection which can increase the sparseness and/or smoothness of the estimated components. An alternative approach is to add suitable regularization or penalty terms to the loss function. Let us consider the following constrained optimization problem:

Minimize
D_F^{(α)}(A, X) = (1/2) ||Y − A X||_F^2 + α_A J_A(A) + α_X J_X(X),   s.t. a_{ij} ≥ 0, x_{jk} ≥ 0, ∀ i, j, k,   (2)

where α_A ≥ 0 and α_X ≥ 0 are nonnegative regularization parameters and the terms J_A(A) and J_X(X) are used to enforce certain application-dependent characteristics of the solution. As a special practical case we have J_X(X) = Σ_{jk} f(x_{jk}), where f is a suitably chosen function measuring smoothness or sparsity. In order to achieve a sparse representation we usually choose f(x_{jk}) = |x_{jk}| or simply f(x_{jk}) = x_{jk}, or alternatively f(x_{jk}) = x_{jk} ln x_{jk}, with constraints x_{jk} ≥ 0. Similar regularization terms can also be implemented for the matrix A. Note that we treat both matrices A and X in a symmetric way. Applying the standard gradient descent approach, we have

a_{ij} ← a_{ij} − η_{ij} ∂D_F^{(α)}(A, X)/∂a_{ij},   x_{jk} ← x_{jk} − η_{jk} ∂D_F^{(α)}(A, X)/∂x_{jk},   (3)

where η_{ij} and η_{jk} are positive learning rates. The gradient components can be expressed in compact matrix form as

∂D_F^{(α)}(A, X)/∂a_{ij} = [− Y X^T + A X X^T]_{ij} + α_A ∂J_A(A)/∂a_{ij},   (4)
∂D_F^{(α)}(A, X)/∂x_{jk} = [− A^T Y + A^T A X]_{jk} + α_X ∂J_X(X)/∂x_{jk}.   (5)

Here, we follow the Lee and Seung approach and choose specific learning rates [1, 3]:

η_{ij} = a_{ij} / [A X X^T]_{ij},   η_{jk} = x_{jk} / [A^T A X]_{jk},   (6)

which leads to the generalized robust multiplicative update rules:

a_{ij} ← a_{ij} [ [Y X^T]_{ij} − α_A φ_A(a_{ij}) ]_ε / ( [A X X^T]_{ij} + ε ),   (7)
x_{jk} ← x_{jk} [ [A^T Y]_{jk} − α_X φ_X(x_{jk}) ]_ε / ( [A^T A X]_{jk} + ε ),   (8)

where the nonlinear operator is defined as [x]_ε = max{ε, x} with a small positive ε, and the functions φ_A(a_{ij}) and φ_X(x_{jk}) are defined as

φ_A(a_{ij}) = ∂J_A(A)/∂a_{ij},   φ_X(x_{jk}) = ∂J_X(X)/∂x_{jk}.   (9)

Typically, a small ε is introduced in order to ensure the non-negativity constraints and to avoid possible division by zero. The above Lee-Seung algorithm can be considered as an extension of the well-known ISRA (Image Space Reconstruction Algorithm); it reduces to the standard Lee-Seung algorithm for α_A = α_X = 0. In the special case of ℓ1-norm regularization terms f(x) = ||x||_1 for both matrices X and A, the above multiplicative learning rules simplify to

a_{ij} ← a_{ij} [ [Y X^T]_{ij} − α_A ]_ε / ( [A X X^T]_{ij} + ε ),   x_{jk} ← x_{jk} [ [A^T Y]_{jk} − α_X ]_ε / ( [A^T A X]_{jk} + ε ),   (10)

with normalization in each iteration as a_{ij} ← a_{ij} / Σ_{i=1}^m a_{ij}. Such normalization is necessary to provide the desired sparseness. Algorithm (10) provides a sparse representation of the estimated matrices, and the sparseness measure increases with increasing values of the regularization coefficients α_A and α_X.

It is worth noting that, as an alternative to the Lee-Seung algorithm (10), we can derive a Fixed Point NMF algorithm by equating the gradient components (4)-(5), with the ℓ1-norm regularization terms, to zero [18]:

∂D_F^{(α)}/∂X = A^T A X − A^T Y + α_X = 0,   (11)
∂D_F^{(α)}/∂A = A X X^T − Y X^T + α_A = 0.   (12)

These equations suggest the following fixed point update rules:

X ← max{ ε, (A^T A)^+ (A^T Y − α_X) } = [ (A^T A)^+ (A^T Y − α_X) ]_ε,   (13)
A ← max{ ε, (Y X^T − α_A) (X X^T)^+ } = [ (Y X^T − α_A) (X X^T)^+ ]_ε,   (14)

where (·)^+ denotes the Moore-Penrose pseudo-inverse and the max function is applied component-wise. The above algorithm can be considered as a nonlinear projected Alternating Least Squares (ALS) algorithm or a nonlinear extension of the EM-PCA algorithm.
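As a concrete illustration of the ℓ1-regularized multiplicative rules (10) together with the column normalization of A, here is a minimal numpy sketch (function name, initialization and default parameters are our own choices, not the authors' reference implementation):

```python
import numpy as np

def lee_seung_l1(Y, n_components, alpha_A=0.0, alpha_X=0.0,
                 n_iter=200, eps=1e-16, seed=0):
    """Sketch of the l1-regularized multiplicative updates (10) with column
    normalization of A; initialization and defaults are our own choices."""
    rng = np.random.default_rng(seed)
    m, N = Y.shape
    A = rng.uniform(0.1, 1.0, (m, n_components))
    X = rng.uniform(0.1, 1.0, (n_components, N))
    for _ in range(n_iter):
        # x_jk <- x_jk [A^T Y - alpha_X]_eps / ([A^T A X]_jk + eps)
        X *= np.maximum(A.T @ Y - alpha_X, eps) / (A.T @ A @ X + eps)
        # a_ij <- a_ij [Y X^T - alpha_A]_eps / ([A X X^T]_ij + eps)
        A *= np.maximum(Y @ X.T - alpha_A, eps) / (A @ X @ X.T + eps)
        # normalize each column of A to unit l1-norm, as required for sparsity
        A /= A.sum(axis=0, keepdims=True)
    return A, X
```

A call such as A, X = lee_seung_l1(Y, n_components=5, alpha_X=0.05) would return nonnegative factors of a data matrix like the one generated in the earlier sketch.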

Furthermore, using the Interior Point Gradient (IPG) approach, an additive algorithm can be derived, written in compact matrix form using MATLAB notation:

A ← A − η_A .* ( A ./ (A X X^T) ) .* ( (A X − Y) X^T ),   (15)
X ← X − η_X .* ( X ./ (A^T A X) ) .* ( A^T (A X − Y) ),   (16)

where the operators .* and ./ denote component-wise multiplication and division, respectively, and η_A and η_X are diagonal matrices with positive entries representing suitably chosen learning rates [19].

Alternatively, the most frequently used loss function for NMF that intrinsically ensures the non-negativity constraints and is related to the Poisson likelihood is based on the generalized Kullback-Leibler divergence (also called the I-divergence):

D_{KL1}(Y || A X) = Σ_{ik} ( y_{ik} ln( y_{ik} / [A X]_{ik} ) − y_{ik} + [A X]_{ik} ).   (17)

On the basis of this cost function we proposed a modified Lee-Seung learning algorithm:

x_{jk} ← x_{jk} ( Σ_{i=1}^m a_{ij} ( y_{ik} / [A X]_{ik} ) / Σ_{q=1}^m a_{qj} )^{1+α_{sX}},   (18)
a_{ij} ← a_{ij} ( Σ_{k=1}^N x_{jk} ( y_{ik} / [A X]_{ik} ) / Σ_{p=1}^N x_{jp} )^{1+α_{sA}},   (19)

where the additional small regularization terms α_{sX} ≥ 0 and α_{sA} ≥ 0 are introduced in order to enforce sparseness of the solution, if necessary; typical values of these regularization parameters are small positive constants.

Raul Kompass proposed to apply the beta divergence to combine both Lee-Seung algorithms, (10) and (18)-(19), into one flexible and elegant algorithm with a single parameter [10]. Let us consider the beta divergence in the following generalized form as a cost function for the NMF problem [10, 20, 6]:

D_K^{(β)}(Y || A X) = Σ_{ik} ( y_{ik} ( y_{ik}^β − [A X]_{ik}^β ) / ( β(β+1) ) + [A X]_{ik}^β ( [A X]_{ik} − y_{ik} ) / ( β+1 ) ) + α_X ||X||_1 + α_A ||A||_1,   (20)

where α_X and α_A are small positive regularization parameters which control the degree of smoothing or sparseness of the matrices X and A, respectively, and the ℓ1-norms ||X||_1 and ||A||_1 are introduced to enforce a sparse representation of the solutions. It is interesting to note that for β = 1 we obtain the squared Euclidean distance expressed by the Frobenius norm (1/2) ||Y − A X||_F^2, while for the singular cases β = 0 and β = −1 the beta divergence has to be defined as the limiting cases β → 0 and β → −1, respectively.
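The KL-based updates (18)-(19) translate almost line by line into numpy. The sketch below (our own code and naming, with the sparsity exponents exposed as parameters) performs one alternating sweep over X and A:

```python
import numpy as np

def nmf_kl_updates(Y, A, X, alpha_sX=0.0, alpha_sA=0.0, eps=1e-16):
    """One alternating sweep of the modified Lee-Seung updates (18)-(19) for
    the I-divergence; a sketch, not the authors' reference code."""
    Z = A @ X + eps                                     # current model AX
    # (18): x_jk <- x_jk * ( sum_i a_ij y_ik/z_ik / sum_q a_qj )^(1 + alpha_sX)
    X *= (A.T @ (Y / Z) / (A.sum(axis=0)[:, None] + eps)) ** (1.0 + alpha_sX)
    Z = A @ X + eps
    # (19): a_ij <- a_ij * ( sum_k (y_ik/z_ik) x_jk / sum_p x_jp )^(1 + alpha_sA)
    A *= ((Y / Z) @ X.T / (X.sum(axis=1)[None, :] + eps)) ** (1.0 + alpha_sA)
    return A, X
```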

When these limits are evaluated, one gets for β → 0 the generalized Kullback-Leibler divergence (I-divergence) defined by equation (17), while for β → −1 the Itakura-Saito distance is obtained:

D_{IS}(Y || A X) = Σ_{ik} ( ln( [A X]_{ik} / y_{ik} ) + y_{ik} / [A X]_{ik} − 1 ).   (21)

The choice of the β parameter depends on the statistical distribution of the data, and the beta divergence corresponds to the Tweedie models [21, 20]. For example, the optimal choice of the parameter is β = 1 for the normal distribution, β → −1 for the gamma distribution, β → 0 for the Poisson distribution, and β ∈ (−1, 0) for the compound Poisson distribution. From the generalized beta divergence we can derive various kinds of NMF algorithms: multiplicative algorithms based on the standard gradient descent or the Exponentiated Gradient (EG) (see the next section), additive algorithms using the Projected Gradient (PG) or the Interior Point Gradient (IPG), and Fixed Point (FP) algorithms. In order to derive a flexible NMF learning algorithm, we compute the gradient of (20) with respect to the elements of the matrices, x_{jk} = x_j(k) = [X]_{jk} and a_{ij} = [A]_{ij}, as follows:

∂D_K^{(β)}/∂x_{jk} = Σ_{i=1}^m a_{ij} ( [A X]_{ik}^β − y_{ik} [A X]_{ik}^{β−1} ) + α_X,   (22)
∂D_K^{(β)}/∂a_{ij} = Σ_{k=1}^N ( [A X]_{ik}^β − y_{ik} [A X]_{ik}^{β−1} ) x_{jk} + α_A.   (23)

Similarly to the Lee and Seung approach, by choosing suitable learning rates

η_{jk} = x_{jk} / Σ_{i=1}^m a_{ij} [A X]_{ik}^β,   η_{ij} = a_{ij} / Σ_{k=1}^N [A X]_{ik}^β x_{jk},   (24)

we obtain the multiplicative update rules [10, 6]:

x_{jk} ← x_{jk} [ Σ_{i=1}^m a_{ij} y_{ik} / [A X]_{ik}^{1−β} − α_X ]_ε / Σ_{i=1}^m a_{ij} [A X]_{ik}^β,   (25)
a_{ij} ← a_{ij} [ Σ_{k=1}^N y_{ik} x_{jk} / [A X]_{ik}^{1−β} − α_A ]_ε / Σ_{k=1}^N [A X]_{ik}^β x_{jk},   (26)

where again the rectification [x]_ε = max{ε, x} with a small ε is introduced in order to avoid zero and negative values.
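For reference, one alternating sweep of the beta-divergence multiplicative rules (25)-(26) could look like the following numpy sketch (our own code and naming; the rectification and the regularization terms follow the equations above):

```python
import numpy as np

def nmf_beta_step(Y, A, X, beta=0.5, alpha_X=0.0, alpha_A=0.0, eps=1e-16):
    """One alternating sweep of the beta-divergence multiplicative updates
    (25)-(26); a sketch, not reference code."""
    Z = A @ X + eps                                   # current model AX
    num_X = np.maximum(A.T @ (Y * Z ** (beta - 1.0)) - alpha_X, eps)
    X *= num_X / (A.T @ Z ** beta + eps)              # eq. (25)
    Z = A @ X + eps
    num_A = np.maximum((Y * Z ** (beta - 1.0)) @ X.T - alpha_A, eps)
    A *= num_A / (Z ** beta @ X.T + eps)              # eq. (26)
    return A, X
```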

3 SMART Algorithms for NMF

There are two large classes of generalized divergences which can potentially be used for developing new flexible algorithms for NMF: the Bregman divergences and the Csiszár ϕ-divergences [22, 23, 24]. In this contribution we limit our discussion to some generalized entropy divergences. Let us consider at the beginning the generalized K-L divergence dual to (17):

D_{KL}(A X || Y) = Σ_{ik} ( [A X]_{ik} ln( [A X]_{ik} / y_{ik} ) − [A X]_{ik} + y_{ik} ),   (27)

subject to nonnegativity constraints (see Eq. (17)). In order to derive the learning algorithm, let us apply the multiplicative exponentiated gradient (EG) descent updates to the loss function (27):

x_{jk} ← x_{jk} exp( −η_{jk} ∂D_{KL}/∂x_{jk} ),   a_{ij} ← a_{ij} exp( −η_{ij} ∂D_{KL}/∂a_{ij} ),   (28)

where

∂D_{KL}/∂x_{jk} = Σ_{i=1}^m a_{ij} ln( [A X]_{ik} / y_{ik} ),   (29)
∂D_{KL}/∂a_{ij} = Σ_{k=1}^N x_{jk} ln( [A X]_{ik} / y_{ik} ).   (30)

Hence, we obtain the simple multiplicative learning rules:

x_{jk} ← x_{jk} exp( η_{jk} Σ_{i=1}^m a_{ij} ln( y_{ik} / [A X]_{ik} ) ) = x_{jk} Π_{i=1}^m ( y_{ik} / [A X]_{ik} )^{η_{jk} a_{ij}},   (31)
a_{ij} ← a_{ij} exp( η_{ij} Σ_{k=1}^N x_{jk} ln( y_{ik} / [A X]_{ik} ) ) = a_{ij} Π_{k=1}^N ( y_{ik} / [A X]_{ik} )^{η_{ij} x_{jk}}.   (32)

The nonnegative learning rates η_{jk} and η_{ij} can take different forms. Typically, for simplicity and in order to guarantee the stability of the algorithm, we assume that η_{jk} = ω ( Σ_{i=1}^m a_{ij} )^{−1} and η_{ij} = ω ( Σ_{k=1}^N x_{jk} )^{−1}, where ω ∈ (0, 2) is an over-relaxation parameter. The EG updates can be further improved in terms of convergence, computational efficiency and numerical stability in several ways. In order to keep the weight magnitudes bounded, Kivinen and Warmuth proposed a variation of the EG method that applies a normalization step after each weight update; the normalization linearly rescales all weights so that they sum to a constant. Moreover, instead of the exponential function we can apply its linear approximation e^u ≈ max{0.5, 1 + u}. To further accelerate convergence, we may apply individual adaptive learning rates, defined as η_{jk} ← c η_{jk} if the corresponding gradient component ∂D_{KL}/∂x_{jk} has the same sign in two consecutive steps, and η_{jk} ← η_{jk}/c otherwise, where c > 1 [25].

The above multiplicative learning rules can be written in a more generalized and compact matrix form using MATLAB notation:

X ← X .* exp( η_X .* ( A^T ln( Y ./ (A X + ε) ) ) ),   (33)
A ← A .* exp( η_A .* ( ln( Y ./ (A X + ε) ) X^T ) ),   (34)
A ← A diag{ 1 ./ sum(A, 1) },   (35)

where in practice a small constant ε is introduced in order to ensure the positivity constraints and/or to avoid possible division by zero, and η_A and η_X are non-negative scaling matrices representing individual learning rates. The above algorithm may be considered as an alternating minimization/projection extension of the well-known SMART (Simultaneous Multiplicative Algebraic Reconstruction Technique) [26, 27]. This means that the above NMF algorithm can also be extended to MART and BI-MART (Block-Iterative Multiplicative Algebraic Reconstruction Technique) [26]. It should be noted that, since the parameters (weights) {x_{jk}, a_{ij}} are restricted to positive values, the resulting update rules can be written as

ln x_{jk} ← ln x_{jk} − η_{jk} ∂D_{KL}/∂ ln x_{jk},   ln a_{ij} ← ln a_{ij} − η_{ij} ∂D_{KL}/∂ ln a_{ij},   (36)

where the natural logarithm projection is applied component-wise. Thus, in a sense, the EG approach takes the same steps as standard gradient descent (GD), but in the space of the logarithms of the parameters. In other words, in our current application the scalings of the parameters {x_{jk}, a_{ij}} are best adapted in log-space, where their gradients are much better behaved.
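In numpy, one sweep of the SMART-type updates (33)-(35), using the relaxed learning rates η_{jk} = ω(Σ_i a_{ij})^{−1} and η_{ij} = ω(Σ_k x_{jk})^{−1} suggested above, might be sketched as follows (our own code and naming, not the NMFLAB implementation):

```python
import numpy as np

def smart_step(Y, A, X, omega=1.0, eps=1e-16):
    """One sweep of the SMART-type exponentiated-gradient updates (33)-(35),
    a sketch using the relaxed, over-relaxation-style learning rates."""
    Z = A @ X + eps
    R = np.log(Y / Z + eps)                          # ln(Y ./ (A X)), elementwise
    eta_X = omega / (A.sum(axis=0)[:, None] + eps)   # one rate per row of X
    X *= np.exp(eta_X * (A.T @ R))                   # eq. (33)
    Z = A @ X + eps
    R = np.log(Y / Z + eps)
    eta_A = omega / (X.sum(axis=1)[None, :] + eps)   # one rate per column of A
    A *= np.exp(eta_A * (R @ X.T))                   # eq. (34)
    A /= A.sum(axis=0, keepdims=True) + eps          # eq. (35): column normalization
    return A, X
```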

4 NMF Algorithms Using the Amari α-Divergence

It is interesting to note that the above SMART algorithm can be derived as a special case of a more general loss function called the Amari α-divergence (see also Liese & Vajda, the Cressie-Read disparity, the Kompass generalized divergence and the Eguchi-Minami beta divergence) [29, 28, 23, 22, 10, 30]:

D_A(Y || A X) = 1/( α(α−1) ) Σ_{ik} ( y_{ik}^α z_{ik}^{1−α} − α y_{ik} + (α−1) z_{ik} ),   (37)

where z_{ik} = [A X]_{ik}. We note that as special cases of the Amari α-divergence for α = 2, 0.5, −1 we obtain the Pearson, Hellinger and Neyman chi-square distances, respectively, while for the cases α = 1 and α = 0 the divergence has to be defined by the limits α → 1 and α → 0, respectively. When these limits are evaluated, one obtains for α → 1 the generalized Kullback-Leibler divergence defined by equation (17), and for α → 0 the dual generalized KL divergence (27).

(Footnote: Note that this form of the α-divergence differs slightly from the loss function of Amari given in 1985 and 2000 [28, 23] by an additional term. This term is needed to allow de-normalized variables, in the same way that the extended Kullback-Leibler divergence differs from the standard form without the terms (z_{ik} − y_{ik}) [24].)

The gradient of the above cost function can be expressed in compact form as

∂D_A/∂x_{jk} = (1/α) Σ_{i=1}^m a_{ij} ( 1 − ( y_{ik}/z_{ik} )^α ),   ∂D_A/∂a_{ij} = (1/α) Σ_{k=1}^N x_{jk} ( 1 − ( y_{ik}/z_{ik} )^α ).   (38)

However, instead of applying the standard gradient descent we use the projected (nonlinearly transformed) gradient approach, which can be considered as a generalization of the exponentiated gradient:

Φ(x_{jk}) ← Φ(x_{jk}) − η_{jk} ∂D_A/∂Φ(x_{jk}),   Φ(a_{ij}) ← Φ(a_{ij}) − η_{ij} ∂D_A/∂Φ(a_{ij}),   (39)

where Φ(x) is a suitably chosen function. Hence, we have

x_{jk} ← Φ^{−1}( Φ(x_{jk}) − η_{jk} ∂D_A/∂Φ(x_{jk}) ),   (40)
a_{ij} ← Φ^{−1}( Φ(a_{ij}) − η_{ij} ∂D_A/∂Φ(a_{ij}) ).   (41)

It can be shown that such a nonlinear scaling or transformation provides a stable solution and that the gradients are much better behaved in the Φ space. In our case, we employ Φ(x) = x^α and choose the learning rates as follows:

η_{jk} = α^2 Φ(x_{jk}) / ( x_{jk}^{1−α} Σ_{i=1}^m a_{ij} ),   η_{ij} = α^2 Φ(a_{ij}) / ( a_{ij}^{1−α} Σ_{k=1}^N x_{jk} ),   (42)

which leads directly to the new learning algorithm (the rigorous convergence proof is omitted due to lack of space):

x_{jk} ← x_{jk} ( Σ_{i=1}^m a_{ij} ( y_{ik}/z_{ik} )^α / Σ_{q=1}^m a_{qj} )^{1/α},   a_{ij} ← a_{ij} ( Σ_{k=1}^N x_{jk} ( y_{ik}/z_{ik} )^α / Σ_{t=1}^N x_{jt} )^{1/α}.   (43)

This algorithm can be implemented in a similar compact matrix form using MATLAB notation:

X ← X .* ( A^T ( (Y + ε) ./ (A X + ε) ).^α ).^(1/α),   (44)
A ← A .* ( ( (Y + ε) ./ (A X + ε) ).^α X^T ).^(1/α),   (45)
A ← A diag{ 1 ./ sum(A, 1) }.

(Footnote: For α = 0, instead of Φ(x) = x^α we have used Φ(x) = ln x.)
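A compact numpy sketch of the α-divergence updates in the matrix form (44)-(45), followed by the column normalization of A, could look as follows (our own code and naming; the sketch assumes α ≠ 0):

```python
import numpy as np

def alpha_nmf_step(Y, A, X, alpha=0.5, eps=1e-16):
    """One sweep of the Amari alpha-divergence updates in the matrix form
    (44)-(45); a sketch.  alpha must be nonzero (for alpha -> 0 the text
    switches to Phi(x) = ln x instead)."""
    Z = A @ X + eps
    R = ((Y + eps) / Z) ** alpha                 # ((Y+eps) ./ (AX+eps)).^alpha
    X *= (A.T @ R) ** (1.0 / alpha)              # eq. (44)
    Z = A @ X + eps
    R = ((Y + eps) / Z) ** alpha
    A *= (R @ X.T) ** (1.0 / alpha)              # eq. (45)
    A /= A.sum(axis=0, keepdims=True) + eps      # column normalization of A
    return A, X
```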

Alternatively, applying the EG approach, we can obtain the following multiplicative algorithm:

x_{jk} ← x_{jk} exp{ η_{jk} Σ_{i=1}^m a_{ij} ( ( y_{ik}/z_{ik} )^α − 1 ) },   (46)
a_{ij} ← a_{ij} exp{ η_{ij} Σ_{k=1}^N x_{jk} ( ( y_{ik}/z_{ik} )^α − 1 ) }.   (47)

5 Generalized SMART Algorithms

The main objective of this paper is to show that the learning algorithms (31) and (32) can be generalized to the following flexible algorithm:

x_{jk} ← x_{jk} exp( η_{jk} Σ_{i=1}^m a_{ij} ρ(y_{ik}, z_{ik}) ),   a_{ij} ← a_{ij} exp( η_{ij} Σ_{k=1}^N x_{jk} ρ(y_{ik}, z_{ik}) ),   (48)

where the error function, defined as

ρ(y_{ik}, z_{ik}) = − ∂D(Y || A X)/∂z_{ik},   (49)

can take different forms depending on the chosen or designed loss (cost) function D(Y || A X) (see Table 1). As an illustrative example, let us consider the Bose-Einstein divergence:

BE_α(Y || A X) = Σ_{ik} ( y_{ik} ln( (1+α) y_{ik} / ( y_{ik} + α z_{ik} ) ) + α z_{ik} ln( (1+α) z_{ik} / ( y_{ik} + α z_{ik} ) ) ).   (50)

This loss function has many interesting properties:
1. BE_α(y || z) = 0 if z = y almost everywhere.
2. BE_α(y || z) = BE_{1/α}(z || y).
3. For α = 1, BE_α simplifies to the symmetric Jensen-Shannon divergence measure (see Table 1).
4. lim_{α→∞} BE_α(y || z) = KL(y || z), and for α sufficiently small BE_α(y || z) ≈ α KL(z || y).

The gradient of the Bose-Einstein loss function with respect to z_{ik} can be expressed as

∂BE_α(Y || A X)/∂z_{ik} = α ln( (1+α) z_{ik} / ( y_{ik} + α z_{ik} ) ),   (51)

and with respect to x_{jk} and a_{ij} as

∂BE_α/∂x_{jk} = Σ_{i=1}^m a_{ij} ∂BE_α/∂z_{ik},   ∂BE_α/∂a_{ij} = Σ_{k=1}^N x_{jk} ∂BE_α/∂z_{ik}.   (52)

Hence, applying the standard un-normalized EG approach (28), we obtain the learning rules (48) with the error function ρ(y_{ik}, z_{ik}) = α ln( ( y_{ik} + α z_{ik} ) / ( (1+α) z_{ik} ) ). It should be noted that the error function ρ(y_{ik}, z_{ik}) = 0 if and only if y_{ik} = z_{ik}.

6 Multi-layer NMF

In order to improve the performance of NMF, especially for ill-conditioned and badly scaled data, and also to reduce the risk of getting stuck in local minima of the non-convex minimization, we have developed a simple hierarchical, multi-stage procedure in which we perform a sequential decomposition of nonnegative matrices as follows. In the first step, we perform the basic decomposition (factorization) Y = A_1 X_1 using any available NMF algorithm. In the second stage, the result obtained from the first stage is used to perform a similar decomposition, X_1 = A_2 X_2, using the same or different update rules, and so on. We continue the decomposition taking into account only the last obtained components. The process can be repeated arbitrarily many times until some stopping criterion is satisfied. In each step, we usually obtain a gradual improvement of the performance. Thus, our model has the form Y = A_1 A_2 ... A_L X_L, with the basis nonnegative matrix defined as A = A_1 A_2 ... A_L. Physically, this means that we build up a system that has many layers or cascade connections of L mixing subsystems. The key point in our novel approach is that the learning (update) process to find the parameters of the sub-matrices X_l and A_l is performed sequentially, i.e. layer by layer (see the footnote below). In each step or each layer, we can use the same cost (loss) function, and consequently the same learning (minimization) rules, or completely different cost functions and/or the corresponding update rules. This can be expressed by the following procedure (as sketched in code after this algorithm):

Multilayer NMF Algorithm
Set: X_0 = Y.
For l = 1, 2, ..., L, do:
    Initialize randomly A_l^(0) and/or X_l^(0).
    For k = 1, 2, ..., K, do:
        X_l^(k) = arg min_{X_l ≥ 0} D( X_{l-1} || A_l^(k-1) X_l ),
        A_l^(k) = arg min_{A_l ≥ 0} D( X_{l-1} || A_l X_l^(k) ),
        a_{ij} ← a_{ij} / Σ_i a_{ij}.
    End
    X_l = X_l^(K),  A_l = A_l^(K).
End

(Footnote: The multilayer system for NMF and BSS is the subject of our patent pending at RIKEN BSI, March 2006.)
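The layered procedure above can be wrapped around any of the single-sweep update routines sketched earlier. The following numpy sketch (our own code; the function signature and the fixed number of inner iterations are assumptions, not the authors' implementation) illustrates the sequential, layer-by-layer factorization and the accumulation of A = A_1 A_2 ... A_L:

```python
import numpy as np

def multilayer_nmf(Y, n_components, nmf_step, n_layers=3,
                   inner_iter=100, eps=1e-16, seed=0):
    """Sketch of the multilayer NMF procedure of Section 6: the output X of
    each layer is factorized again in the next layer.  `nmf_step(Y, A, X)`
    is any single-sweep update routine, e.g. one of the earlier sketches."""
    rng = np.random.default_rng(seed)
    X_prev = Y
    A_total = np.eye(Y.shape[0])
    for _ in range(n_layers):
        m_l = X_prev.shape[0]
        A = rng.uniform(0.1, 1.0, (m_l, n_components))
        X = rng.uniform(0.1, 1.0, (n_components, X_prev.shape[1]))
        for _ in range(inner_iter):
            A, X = nmf_step(X_prev, A, X)            # alternate updates of A and X
            A /= A.sum(axis=0, keepdims=True) + eps  # keep columns of A normalized
        A_total = A_total @ A                        # A = A_1 A_2 ... A_L
        X_prev = X                                   # the next layer factorizes X_l
    return A_total, X_prev

# e.g. A, X = multilayer_nmf(Y, 5, smart_step)  # using the SMART sweep sketched earlier
```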

Table 1. Extended SMART NMF adaptive algorithms and the corresponding loss functions. The generic update rules are

a_{ij} ← a_{ij} exp( η_{ij} Σ_{k=1}^N x_{jk} ρ(y_{ik}, z_{ik}) ),   x_{jk} ← x_{jk} exp( η_{jk} Σ_{i=1}^m a_{ij} ρ(y_{ik}, z_{ik}) ),

with Σ_{i=1}^m a_{ij} = 1 for all j, a_{ij} ≥ 0, y_{ik} > 0, z_{ik} = [A X]_{ik} > 0, x_{jk} ≥ 0. Below, y = y_{ik} and z = z_{ik} for brevity; each entry lists the minimized loss function and the corresponding error function ρ(y, z).

1. K-L I-divergence: D_KL(A X || Y) = Σ_{ik} ( z ln(z/y) + y − z );  ρ(y, z) = ln( y/z ).

2. Relative A-G divergence: AG_r(Y || A X) = Σ_{ik} ( (y+z) ln( (y+z)/(2y) ) + y − z );  ρ(y, z) = ln( 2y/(y+z) ).

3. Symmetric A-G divergence: AG(Y || A X) = Σ_{ik} ( (y+z)/2 ) ln( (y+z)/(2√(yz)) );  ρ(y, z) = (y−z)/(4z) + (1/2) ln( 2√(yz)/(y+z) ).

4. Relative Jensen-Shannon divergence: Σ_{ik} ( 2y ln( 2y/(y+z) ) + z − y );  ρ(y, z) = (y−z)/(y+z).

5. Symmetric Jensen-Shannon divergence: Σ_{ik} ( y ln( 2y/(y+z) ) + z ln( 2z/(y+z) ) );  ρ(y, z) = ln( (y+z)/(2z) ).

6. Bose-Einstein divergence: BE(Y || A X) = Σ_{ik} ( y ln( (1+α)y/(y+αz) ) + αz ln( (1+α)z/(y+αz) ) );  ρ(y, z) = α ln( (y+αz)/((1+α)z) ).

7. J-divergence: D_J(Y || A X) = Σ_{ik} ( (y−z)/2 ) ln( y/z );  ρ(y, z) = (1/2) ln( y/z ) + (y−z)/(2z).

8. Triangular Discrimination: D_T(Y || A X) = Σ_{ik} (y−z)^2/(y+z);  ρ(y, z) = ( 2y/(y+z) )^2 − 1.

9. Amari α-divergence: D_A(Y || A X) = 1/( α(α−1) ) Σ_{ik} ( y^α z^{1−α} − α y + (α−1) z );  ρ(y, z) = ( ( y/z )^α − 1 ) / α.
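Table 1 suggests a direct implementation in which the error function ρ is a plug-in. The numpy sketch below (our own transcription and naming; the dictionary keys and default parameters are assumptions) collects a few of the ρ functions and applies the generalized SMART update (48) with the relaxed learning rates used in Section 3:

```python
import numpy as np

# A few error functions rho(y, z) transcribed from Table 1 (our own selection):
RHO = {
    "dual_KL": lambda y, z: np.log(y / z),                          # row 1
    "rel_JS": lambda y, z: (y - z) / (y + z),                       # row 4
    "sym_JS": lambda y, z: np.log((y + z) / (2.0 * z)),             # row 5
    "bose_einstein": lambda y, z, a=1.0: a * np.log((y + a * z) / ((1.0 + a) * z)),  # row 6
    "amari_alpha": lambda y, z, a=0.5: ((y / z) ** a - 1.0) / a,    # row 9
}

def generalized_smart_step(Y, A, X, rho, omega=1.0, eps=1e-16):
    """One sweep of the generalized SMART updates (48) for a plug-in error
    function rho(y, z); a sketch with the relaxed learning rates of Sec. 3."""
    Z = A @ X + eps
    eta_X = omega / (A.sum(axis=0)[:, None] + eps)   # one rate per row of X
    X *= np.exp(eta_X * (A.T @ rho(Y + eps, Z)))     # x-update of (48)
    Z = A @ X + eps
    eta_A = omega / (X.sum(axis=1)[None, :] + eps)   # one rate per column of A
    A *= np.exp(eta_A * (rho(Y + eps, Z) @ X.T))     # a-update of (48)
    A /= A.sum(axis=0, keepdims=True) + eps          # column normalization of A
    return A, X

# Example: A, X = generalized_smart_step(Y, A, X, RHO["bose_einstein"])
```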

7 Simulation Results

All the NMF algorithms discussed in this paper (see Table 1) have been extensively tested on many difficult benchmarks for signals and images with various statistical distributions. The simulation results confirmed that the developed algorithms are stable, efficient, and provide consistent results for a wide range of parameters. Due to the limited space we give here only one illustrative example.

Fig. 1. Example 1: (a) the original 5 source signals; (b) the sources estimated using the standard Lee-Seung algorithm (7) and (8), with SIR = 8.8, 17.2, 8.7, 19.3, 12.4 [dB]; (c) the sources estimated using 20 layers applied to the standard Lee-Seung algorithm (7) and (8), with SIR = 9.3, 16.1, 9.9, 18.5, 15.8 [dB], respectively; (d) the source signals estimated using 20 layers and the new hybrid algorithm, (14) combined with (48) and the Bose-Einstein divergence with α = 2; individual performance for the estimated source signals: SIR = 15, 17.8, 16.5, 19, 17.5 [dB], respectively.

The five partially statistically dependent nonnegative source signals shown in Fig. 1(a) have been mixed by a randomly generated, uniformly distributed nonnegative matrix A. Strong uniformly distributed noise with SNR = 10 dB has been added to the mixed signals. Using the standard multiplicative NMF Lee-Seung algorithms we failed to estimate the original sources. The same algorithm with 20 layers of the multilayer system described above gives better results (see Fig. 1(c)). However, even better performance for the multilayer system is provided by the hybrid SMART algorithm (48) with the Bose-Einstein cost function (see Table 1) for the estimation of the matrix X and the Fixed Point algorithm (projected pseudo-inverse) (14) for the estimation of the matrix A (see Fig. 1(d)).

We also tried to apply ICA algorithms to solve this problem, but due to the partial dependence of the sources the performance was poor. The most important feature of our approach consists in applying the multi-layer technique, which reduces the risk of getting stuck in local minima and hence gives a considerable improvement in the performance of NMF algorithms, especially projected gradient algorithms.

8 Conclusions and Discussion

In this paper we considered a wide class of loss functions that allowed us to derive a family of robust and efficient novel NMF algorithms. The optimal choice of a loss function depends on the statistical distribution of the data and the additive noise, so different criteria and algorithms (update rules) should be applied for estimating the matrix A and the matrix X, depending on a priori knowledge about the statistics of the data. We derived several multiplicative algorithms with improved performance for large-scale problems. We found by extensive simulations that the multilayer technique plays a key role in improving the performance of blind source separation when using the NMF approach.

References

[1] Lee, D.D., Seung, H.S.: Learning the parts of objects by non-negative matrix factorization. Nature
[2] Cho, Y.C., Choi, S.: Nonnegative features of spectro-temporal sounds for classification. Pattern Recognition Letters
[3] Sajda, P., Du, S., Parra, L.: Recovery of constituent spectra using non-negative matrix factorization. In: Proceedings of SPIE Volume 5207, Wavelets: Applications in Signal and Image Processing
[4] Guillamet, D., Vitrià, J., Schiele, B.: Introducing a weighted nonnegative matrix factorization for image classification. Pattern Recognition Letters
[5] Li, H., Adali, T., Wang, W., Emge, D.: Non-negative matrix factorization with orthogonality constraints for chemical agent detection in Raman spectra. In: IEEE Workshop on Machine Learning for Signal Processing, Mystic, USA (2005)
[6] Cichocki, A., Zdunek, R., Amari, S.: Csiszár's divergences for non-negative matrix factorization: Family of new algorithms. Springer LNCS
[7] Paatero, P., Tapper, U.: Positive matrix factorization: A nonnegative factor model with optimal utilization of error estimates of data values. Environmetrics
[8] Oja, E., Plumbley, M.: Blind separation of positive sources using nonnegative PCA. In: 4th International Symposium on Independent Component Analysis and Blind Signal Separation, Nara, Japan (2003)
[9] Hoyer, P.: Non-negative matrix factorization with sparseness constraints. Journal of Machine Learning Research
[10] Kompass, R.: A generalized divergence measure for nonnegative matrix factorization. Neuroinfomatics Workshop, Torun, Poland (2005)

[11] Dhillon, I., Sra, S.: Generalized nonnegative matrix approximations with Bregman divergences. In: NIPS - Neural Information Processing Systems, Vancouver, Canada
[12] Berry, M., Browne, M., Langville, A., Pauca, P., Plemmons, R.: Algorithms and applications for approximate nonnegative matrix factorization. Computational Statistics and Data Analysis. plemmons/papers.htm
[13] Lee, D.D., Seung, H.S.: Algorithms for non-negative matrix factorization. Volume 13. NIPS, MIT Press (2001)
[14] Novak, M., Mammone, R.: Use of non-negative matrix factorization for language model adaptation in a lecture transcription task. In: Proceedings of the 2001 IEEE Conference on Acoustics, Speech and Signal Processing. Volume 1, Salt Lake City, UT
[15] Feng, T., Li, S.Z., Shum, H.Y., Zhang, H.: Local non-negative matrix factorization as a visual representation. In: Proceedings of the 2nd International Conference on Development and Learning, Cambridge, MA
[16] Chen, Z., Cichocki, A., Rutkowski, T.: Constrained non-negative matrix factorization method for EEG analysis in early detection of Alzheimer's disease. In: IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP-2006, Toulouse, France (2006)
[17] Cichocki, A., Amari, S.: Adaptive Blind Signal and Image Processing (new revised and improved edition). John Wiley, New York (2003)
[18] Cichocki, A., Zdunek, R.: NMFLAB Toolboxes for Signal and Image Processing. Japan (2006)
[19] Merritt, M., Zhang, Y.: An interior-point gradient method for large-scale totally nonnegative least squares problems. Technical report, Department of Computational and Applied Mathematics, Rice University, Houston, Texas, USA (2004)
[20] Minami, M., Eguchi, S.: Robust blind source separation by beta-divergence. Neural Computation
[21] Jorgensen, B.: The Theory of Dispersion Models. Chapman and Hall (1997)
[22] Csiszár, I.: Information measures: A critical survey. In: Prague Conference on Information Theory, Academia Prague. Volume A
[23] Amari, S., Nagaoka, H.: Methods of Information Geometry. Oxford University Press, New York (2000)
[24] Zhang, J.: Divergence function, duality and convex analysis. Neural Computation
[25] Schraudolph, N.: Gradient-based manipulation of non-parametric entropy estimates. IEEE Transactions on Neural Networks
[26] Byrne, C.: Accelerating the EMML algorithm and related iterative algorithms by rescaled block-iterative (RBI) methods. IEEE Transactions on Image Processing
[27] Byrne, C.: Choosing parameters in block-iterative or ordered subset reconstruction algorithms. IEEE Transactions on Image Processing
[28] Amari, S.: Differential-Geometrical Methods in Statistics. Springer Verlag (1985)
[29] Amari, S.: Information geometry of the EM and em algorithms for neural networks. Neural Networks
[30] Cressie, N.A., Read, T.: Goodness-of-Fit Statistics for Discrete Multivariate Data. Springer, New York


More information

Distributed average consensus: Beyond the realm of linearity

Distributed average consensus: Beyond the realm of linearity Distributed average consensus: Beyond the ream of inearity Usman A. Khan, Soummya Kar, and José M. F. Moura Department of Eectrica and Computer Engineering Carnegie Meon University 5 Forbes Ave, Pittsburgh,

More information

An explicit Jordan Decomposition of Companion matrices

An explicit Jordan Decomposition of Companion matrices An expicit Jordan Decomposition of Companion matrices Fermín S V Bazán Departamento de Matemática CFM UFSC 88040-900 Forianópois SC E-mai: fermin@mtmufscbr S Gratton CERFACS 42 Av Gaspard Coriois 31057

More information

Incremental Reformulated Automatic Relevance Determination

Incremental Reformulated Automatic Relevance Determination IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 60, NO. 9, SEPTEMBER 22 4977 Incrementa Reformuated Automatic Reevance Determination Dmitriy Shutin, Sanjeev R. Kukarni, and H. Vincent Poor Abstract In this

More information

SUPPLEMENTARY MATERIAL TO INNOVATED SCALABLE EFFICIENT ESTIMATION IN ULTRA-LARGE GAUSSIAN GRAPHICAL MODELS

SUPPLEMENTARY MATERIAL TO INNOVATED SCALABLE EFFICIENT ESTIMATION IN ULTRA-LARGE GAUSSIAN GRAPHICAL MODELS ISEE 1 SUPPLEMENTARY MATERIAL TO INNOVATED SCALABLE EFFICIENT ESTIMATION IN ULTRA-LARGE GAUSSIAN GRAPHICAL MODELS By Yingying Fan and Jinchi Lv University of Southern Caifornia This Suppementary Materia

More information

Adjustment of automatic control systems of production facilities at coal processing plants using multivariant physico- mathematical models

Adjustment of automatic control systems of production facilities at coal processing plants using multivariant physico- mathematical models IO Conference Series: Earth and Environmenta Science AER OEN ACCESS Adjustment of automatic contro systems of production faciities at coa processing pants using mutivariant physico- mathematica modes To

More information

Fitting Algorithms for MMPP ATM Traffic Models

Fitting Algorithms for MMPP ATM Traffic Models Fitting Agorithms for PP AT Traffic odes A. Nogueira, P. Savador, R. Vaadas University of Aveiro / Institute of Teecommunications, 38-93 Aveiro, Portuga; e-mai: (nogueira, savador, rv)@av.it.pt ABSTRACT

More information

Asynchronous Control for Coupled Markov Decision Systems

Asynchronous Control for Coupled Markov Decision Systems INFORMATION THEORY WORKSHOP (ITW) 22 Asynchronous Contro for Couped Marov Decision Systems Michae J. Neey University of Southern Caifornia Abstract This paper considers optima contro for a coection of

More information

From Margins to Probabilities in Multiclass Learning Problems

From Margins to Probabilities in Multiclass Learning Problems From Margins to Probabiities in Muticass Learning Probems Andrea Passerini and Massimiiano Ponti 2 and Paoo Frasconi 3 Abstract. We study the probem of muticass cassification within the framework of error

More information

Efficient Part-of-Speech Tagging with a Min-Max Modular Neural-Network Model

Efficient Part-of-Speech Tagging with a Min-Max Modular Neural-Network Model Appied Inteigence 19, 65 81, 2003 c 2003 Kuwer Academic Pubishers. Manufactured in The Netherands. Efficient Part-of-Speech Tagging with a Min-Max Moduar Neura-Network Mode BAO-LIANG LU Department of Computer

More information

Introduction to Simulation - Lecture 13. Convergence of Multistep Methods. Jacob White. Thanks to Deepak Ramaswamy, Michal Rewienski, and Karen Veroy

Introduction to Simulation - Lecture 13. Convergence of Multistep Methods. Jacob White. Thanks to Deepak Ramaswamy, Michal Rewienski, and Karen Veroy Introduction to Simuation - Lecture 13 Convergence of Mutistep Methods Jacob White Thans to Deepa Ramaswamy, Micha Rewiensi, and Karen Veroy Outine Sma Timestep issues for Mutistep Methods Loca truncation

More information

Paragraph Topic Classification

Paragraph Topic Classification Paragraph Topic Cassification Eugene Nho Graduate Schoo of Business Stanford University Stanford, CA 94305 enho@stanford.edu Edward Ng Department of Eectrica Engineering Stanford University Stanford, CA

More information

Width of Percolation Transition in Complex Networks

Width of Percolation Transition in Complex Networks APS/23-QED Width of Percoation Transition in Compex Networs Tomer Kaisy, and Reuven Cohen 2 Minerva Center and Department of Physics, Bar-Ian University, 52900 Ramat-Gan, Israe 2 Department of Computer

More information

A proposed nonparametric mixture density estimation using B-spline functions

A proposed nonparametric mixture density estimation using B-spline functions A proposed nonparametric mixture density estimation using B-spine functions Atizez Hadrich a,b, Mourad Zribi a, Afif Masmoudi b a Laboratoire d Informatique Signa et Image de a Côte d Opae (LISIC-EA 4491),

More information

Two-sample inference for normal mean vectors based on monotone missing data

Two-sample inference for normal mean vectors based on monotone missing data Journa of Mutivariate Anaysis 97 (006 6 76 wwweseviercom/ocate/jmva Two-sampe inference for norma mean vectors based on monotone missing data Jianqi Yu a, K Krishnamoorthy a,, Maruthy K Pannaa b a Department

More information

Adaptive Noise Cancellation Using Deep Cerebellar Model Articulation Controller

Adaptive Noise Cancellation Using Deep Cerebellar Model Articulation Controller daptive Noise Canceation Using Deep Cerebear Mode rticuation Controer Yu Tsao, Member, IEEE, Hao-Chun Chu, Shih-Wei an, Shih-Hau Fang, Senior Member, IEEE, Junghsi ee*, and Chih-Min in, Feow, IEEE bstract

More information