Multilayer Kerceptron


Zoltán Szabó, András Lőrincz
Department of Information Systems, Faculty of Informatics
Eötvös Loránd University
Pázmány Péter sétány 1/C, H-1117 Budapest, Hungary

Abstract

Multilayer Perceptrons (MLP) are formulated within the Support Vector Machine (SVM) framework by constructing multilayer networks of SVMs. The coupled approximation scheme can take advantage of the generalization capabilities of the SVM and of the combinatory feature of the hidden layer of the MLP. The network, the Multilayer Kerceptron (MLK), admits its own backpropagation procedure, which we derive here. A tuning rule is provided for the quadratic cost function, with regularization capability as well. A further appealing property of our approach is that, by the aid of the so-called kernel trick, the MLK computations can be performed in the dual space.

1 Introduction

Multilayer Perceptrons (MLP) and Support Vector Machines (SVM) have been extensively studied in the literature. For an excellent review, see [3, 2] and references therein. Here we extend SVMs to multilayer structures and provide the backpropagation tuning rules for this system. By applying the so-called kernel trick, we embed the problem into a space equipped with a scalar product. For other approaches using the same trick, see, e.g., [4, 6, 7].

2 Network Architecture

2.1 Notations

Numbers ($a$), vectors ($\mathbf{a}$), and matrices ($A$) are denoted by different letter types. $A^T$ denotes the transpose of matrix $A$.

The extension of vector $a$ by component $b$ is written as $[a; b]$. $\mathbb{R}$ stands for the real numbers. $\|\cdot\|_2$ indicates the $L_2$ norm induced by the scalar product $\langle\cdot,\cdot\rangle$ of Euclidean space $E$, i.e., $\|e\|_2 = \sqrt{\langle e, e\rangle}$ ($e \in E$).

2.2 Building Blocks

2.2.1 SVM

SVMs are popular approximation tools [9, 10, 8, 6, 5]. SVMs approximate sample pairs $\{x(t), d(t)\}_{t=1,\dots,T}$, where each input $x(t)$ lies in input space $X$ and $d(t) \in \mathbb{R}$. The approximation is linear, but it occurs in feature space $H$. Inputs $x(t)$ are mapped to feature space by

$$ \varphi : x \in X \mapsto H. \tag{1} $$

One can interpret $\varphi(x)$ as the representation of input $x$. The form of the SVM approximation is

$$ f_w : x \in X \mapsto \langle w, \varphi(x)\rangle_H \quad (w \in H). \tag{2} $$

Formally, the SVM task is defined as

$$ \min_{w \in H} T[w] := C \sum_{t=1}^{T} V[d(t), f_w(x(t))] + \|w\|_H^2 \quad (C > 0), \tag{3} $$

where $V[\cdot,\cdot]$ is the so-called loss function, which can assume quadratic, $\epsilon$-insensitive, or other forms [4]. That is, SVMs are regularized linear approximators [2]. Instead of using the explicit mapping $\varphi$, feature space $H$ can be exploited through kernel $k$, that is, $H = H(k)$ [11], where $\varphi(x) = k(\cdot, x)$. Kernel $k$ assumes the reproducing property [1, 11]

$$ \langle f(\cdot), k(\cdot, x)\rangle_H = f(x) \quad (x \in X,\ f \in H), \tag{4} $$

and $H$ is called a Reproducing Kernel Hilbert Space (RKHS). The scalar product of any function with kernel $k(\cdot, x)$ evaluates the function at $x$ in RKHS $H$. The scalar product in feature space can be computed implicitly by means of the kernel:

$$ k(u, v) = \langle \varphi(u), \varphi(v)\rangle_H \quad (u, v \in X). \tag{5} $$

In particular, for $w = \sum_{j=1}^{N} \alpha_j \varphi(z_j)$ ($\alpha_j \in \mathbb{R}$, $z_j \in X$) we have

$$ f_w(x) = \langle w, \varphi(x)\rangle_H = \sum_{j=1}^{N} \alpha_j \langle \varphi(z_j), \varphi(x)\rangle_H = \sum_{j=1}^{N} \alpha_j k(z_j, x). \tag{6} $$

Thus, function $f_w$ can be evaluated by means of the coefficients $\alpha_j$, the samples $z_j$, and the kernel $k$, without explicit reference to the representation $\varphi(x)$. This is called the kernel trick.
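To make the dual evaluation of Eq. (6) concrete, here is a minimal Python sketch, assuming a Gaussian kernel; the function names and the kernel choice are illustrative, not part of the paper.

```python
import numpy as np

def gaussian_kernel(u, v, sigma=1.0):
    """k(u, v) = exp(-||u - v||^2 / (2 sigma^2)), one common kernel choice."""
    return np.exp(-np.sum((u - v) ** 2) / (2.0 * sigma ** 2))

def f_w(x, alphas, zs, kernel=gaussian_kernel):
    """Dual (kernel-trick) evaluation of Eq. (6): f_w(x) = sum_j alpha_j k(z_j, x)."""
    return sum(a * kernel(z, x) for a, z in zip(alphas, zs))

# Tiny usage example with made-up coefficients and anchor points.
zs = [np.array([0.0, 0.0]), np.array([1.0, 1.0])]
alphas = [0.5, -0.25]
print(f_w(np.array([0.5, 0.5]), alphas, zs))
```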

2.2.2 MLP

An MLP network has multiple layers, each performing a nonlinear mapping

$$ x \mapsto g(Wx). \tag{7} $$

Here, $g$ is a differentiable nonlinear function. In the MLP task, we tune matrix $W$ for all layers so that the network approximates the sampled input-output mapping given by the input-output training pairs $\{x(t), d(t)\}$. That is, the objective is to minimize the squared error

$$ \varepsilon^2(t) := \|d(t) - y(t)\|_2^2 \to \min_{W_1, W_2, \dots}, \tag{8} $$

where the output of the network at time $t$ is $y(t)$, for all times. The MLP task is solved by the well-known backpropagation algorithm.

2.3 The MLK Architecture

The mapping of a general MLP layer (i.e., Eq. (7)) can be written as

$$ x \mapsto g\big( [\langle w_1, x\rangle; \dots; \langle w_N, x\rangle] \big), \tag{9} $$

where $w_i^T$ denotes the $i$-th row of matrix $W$. The SVM can also be inserted into the MLP: let a general layer of the network assume the form¹

$$ x \mapsto g\big( [\langle w_1, \varphi(x)\rangle_H; \dots; \langle w_N, \varphi(x)\rangle_H] \big). \tag{10} $$

A network made of such layers (see Fig. 1) will be called a Multilayer Kerceptron (MLK). The input $x^l$ of each layer is the output of the preceding layer ($y^{l-1}$). The external world is the 0th layer, providing input to the first layer of the MLK: $x^l = y^{l-1} \in \mathbb{R}^{N_I^l}$, where $N_I^l$ is the input dimension of the $l$-th layer. Inputs $x^l$ to layer $l$ are mapped by feature map $\varphi^l$ and are multiplied by the weights $w_i^l$. This two-step process can be accomplished implicitly by making use of kernel $k^l$ and the expansion property for the $w_i^l$'s. The result is vector $s^l \in \mathbb{R}^{N_S^l}$, which undergoes nonlinear processing by $g^l$, where function $g^l$ is differentiable. The output of this nonlinear function is the input to the next layer, i.e., $x^{l+1}$. The output of the last layer (layer $L$, the output of the network) will be referred to as $y$. Given that $y^l = x^{l+1} \in \mathbb{R}^{N_o^l}$, the output dimension of layer $l$ is $N_o^l$. Below, we show that (i) a backpropagation rule can be derived for MLKs, which (ii) requires the kernels only, so computations can be accomplished in the dual space.

¹ For the sake of simplicity, let us choose the sample space $X$ as a finite dimensional Euclidean space, i.e., $\mathbb{R}^n$.
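Continuing the sketch above, the forward pass of one MLK layer (Eq. (10)) in dual form might look as follows; the tanh nonlinearity and all names are again illustrative assumptions, and `kernel` can be, e.g., the `gaussian_kernel` defined earlier.

```python
import numpy as np

def mlk_layer_forward(x, alphas, zs, kernel, g=np.tanh):
    """Forward pass of one MLK layer, Eq. (10), via the kernel trick.

    alphas[i] and zs[i] hold the dual expansion of weight w_i, so that
    s_i = <w_i, phi(x)>_H = sum_j alphas[i][j] * kernel(zs[i][j], x).
    """
    s = np.array([sum(a * kernel(z, x) for a, z in zip(alphas_i, zs_i))
                  for alphas_i, zs_i in zip(alphas, zs)])
    return g(s), s  # x^{l+1} = g(s); s is kept for backpropagation
```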

Figure 1: The $l$-th layer of the MLK, $l = 1, 2, \dots, L$. The input $x^l$ of each layer is the output of the preceding layer ($y^{l-1}$). The external world is the 0th layer, providing input to the first layer of the MLK. Inputs $x^l$ to layer $l$ are mapped by feature mapping $\varphi^l$ and undergo scalar multiplication by the weights $w_i^l$ of the layer in RKHS $H^l = H(k^l)$. The result is vector $s^l$, which undergoes nonlinear processing by $g^l$, a differentiable function. The output of this nonlinear function is the input to the next layer, layer $x^{l+1}$. The output of the network is the output of the last layer.

3 MLK Backpropagation

A slightly more general task, which incorporates regularizing terms, too, is formalized below:

$$ c(t) := \varepsilon^2(t) + r(t) \to \min_{\{w_i^l \in H^l:\ l=1,\dots,L;\ i=1,\dots,N_S^l\}}, \tag{11} $$

where $\varepsilon^2(t) = \|d(t) - y(t)\|_2^2$ and $r(t) = \sum_{l=1}^{L}\sum_{i=1}^{N_S^l} \lambda_i^l \|w_i^l(t)\|_{H^l}^2$ ($\lambda_i^l \geq 0$) are the approximation and the regularization terms of the cost function, respectively, and $y(t)$ denotes the output of the network for the $t$-th input. Parameters $\lambda_i^l$ control the trade-off between approximation and regularization. For $\lambda_i^l = 0$, the best approximation is searched for, as in the MLP task (Eq. (8)). Increasing the $\lambda_i^l$ values increases the smoothness of the approximation. With the notations introduced above, the following statements can be proven.

Theorem 1 (explicit case) Let us suppose that the functions $x \mapsto \langle w^l, \varphi^l(x)\rangle_{H^l}$ and $g^l$ are all differentiable ($l = 1, \dots, L$). Then a backpropagation rule can be derived for the MLK if the cost function has the form

$$ c(t) = \varepsilon^2(t) + \sum_{l=1}^{L}\sum_{i=1}^{N_S^l} \lambda_i^l \|w_i^l(t)\|_{H^l}^2 \quad (\lambda_i^l \geq 0). \tag{12} $$
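Note that in the dual representation $w = \sum_j \alpha_j \varphi(z_j)$, the regularization terms of Eq. (12) can be evaluated without touching $\varphi$: by Eq. (5), $\|w\|_H^2 = \sum_{j,m} \alpha_j \alpha_m k(z_j, z_m) = \alpha^T K \alpha$ with Gram matrix $K_{jm} = k(z_j, z_m)$. A minimal sketch with illustrative names:

```python
import numpy as np

def rkhs_norm_sq(alpha, zs, kernel):
    """||w||_H^2 = sum_{j,m} alpha_j alpha_m k(z_j, z_m) = alpha^T K alpha."""
    alpha = np.asarray(alpha)
    K = np.array([[kernel(zj, zm) for zm in zs] for zj in zs])
    return float(alpha @ K @ alpha)
```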

Theorem 2 (implicit case) Assume that the following holds:

1. Constraint on differentiability: kernels $k^l$ are differentiable with respect to both arguments, and functions $g^l$ are also differentiable ($l = 1, \dots, L$).

2. Expansion property: the initial weights $w_i^l(1)$ of the network can be expressed in the dual representation, i.e.,

$$ w_i^l(1) = \sum_{j=1}^{N_i^l(1)} \alpha_{i,j}^l(1)\, \varphi^l(z_{i,j}^l(1)) \quad (l = 1, \dots, L;\ i = 1, \dots, N_S^l). \tag{13} $$

Then backpropagation applies for the MLK if the cost function has the form

$$ c(t) = \varepsilon^2(t) + \sum_{l=1}^{L}\sum_{i=1}^{N_S^l} \lambda_i^l \|w_i^l(t)\|_{H^l}^2 \quad (\lambda_i^l \geq 0). \tag{14} $$

This procedure preserves the expansion property (13), which then remains valid for the tuned network. The algorithm is implicit in the sense that it can be realized in the dual space. The pseudocodes of the MLK backpropagation algorithms are provided in Table 1 and Table 2, respectively. Derivations of these algorithms, both for the explicit and for the implicit forms, are provided in the next subsection. MLK backpropagation can be envisioned as follows (see Table 1 and Table 2 simultaneously):

1. The backpropagated error $\delta^l(t)$ starts from $\delta^L(t)$ and is computed by a backward recursion via the differential expression $d[s^{l+1}(t)]/d[s^l(t)]$.

2. The expression $d[s^{l+1}(t)]/d[s^l(t)]$ can be determined by means of feature mapping $\varphi^{l+1}$, or, in an implicit fashion, through kernel $k^{l+1}$.

3. Two components play roles in the tuning of the $w_i^l$'s:

   (a) forgetting is accomplished by scaling the weights $w_i^l$ with multiplier $(1 - 2\mu_i^l(t)\lambda_i^l)$, where $\lambda_i^l$ is the regularization coefficient;

   (b) adaptation occurs through the backpropagated error. Weights at layer $l$ are tuned by the feature space representation of $x^l(t)$, the actual input arriving at layer $l$. Tuning is weighted by the backpropagated error.

3.1 Derivation of the Backpropagation Algorithm for MLKs

The gradient $d[c(t)]/d[w_i^l(t)]$ is derived first. Then it is embedded into steepest descent tuning.² The cost $c(t)$ has two terms, the approximation and the regularization terms:

$$ c(t) = \varepsilon^2(t) + r(t). \tag{15} $$

² Steepest descent is used to illustrate the concepts. Other types of gradient optimization beyond steepest descent may be utilized. For example, different versions of the momentum method or the conjugate gradient procedure could have their respective advantages.

Table 1: Pseudocode of the explicit MLK backpropagation algorithm.

Inputs
    sample points: {x(t), d(t)}, t = 1, ..., T
    cost function: λ_i^l ≥ 0 (l = 1, ..., L; i = 1, ..., N_S^l)
    learning rates: μ_i^l(t) > 0 (l = 1, ..., L; i = 1, ..., N_S^l; t = 1, ..., T)
Network initialization
    size: L (number of layers), N_I^l, N_S^l, N_o^l (l = 1, ..., L)
    parameters: w_i^l(1) (l = 1, ..., L; i = 1, ..., N_S^l)
Start computation
    Choose sample x(t)
    Feedforward computation: x^l(t) (l = 2, ..., L+1), s^l(t) (l = 1, ..., L) (a)
    Backpropagation of error
        l := L
        while l ≥ 1
            if (l = L)
                δ^L(t) = 2 [y(t) - d(t)]^T (g^L)'(s^L(t))
            else
                d[s^{l+1}(t)]/d[s^l(t)] =
                    [ d⟨w_i^{l+1}(t), φ^{l+1}(u)⟩_{H^{l+1}} / d[u] |_{u = x^{l+1}(t)} ] [(g^l)'(s^l(t))] (b)
                δ^l(t) = δ^{l+1}(t) · d[s^{l+1}(t)]/d[s^l(t)]
            Weight update for all i: 1 ≤ i ≤ N_S^l
                w_i^l(t+1) = (1 - 2 μ_i^l(t) λ_i^l) w_i^l(t) - μ_i^l(t) δ_i^l(t) φ^l(x^l(t))
            l := l - 1
End computation

(a) The output of the network, i.e., y(t) = x^{L+1}(t), is also computed.
(b) Here: i = 1, ..., N_S^{l+1}.
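The following Python sketch mirrors Table 1 for the explicit case. It is a compact illustration under stated assumptions, not the authors' implementation: it assumes an elementwise tanh nonlinearity, a fixed quadratic feature map phi(x) = [x; x**2] (so the Jacobian of phi is known in closed form), and a single learning rate mu and regularization coefficient lam shared by all units.

```python
import numpy as np

def phi(x):             # explicit feature map phi^l (an illustrative choice)
    return np.concatenate([x, x ** 2])

def jac_phi(x):         # Jacobian d[phi(u)]/d[u] at u = x
    return np.vstack([np.eye(len(x)), 2.0 * np.diag(x)])

g = np.tanh
g_prime = lambda s: 1.0 - np.tanh(s) ** 2

def train_step(Ws, x, d, mu=0.01, lam=1e-4):
    """One explicit MLK backpropagation step; Ws[l] stacks the w_i^l as rows."""
    # Feedforward: store the layer inputs x^l and activations s^l.
    xs, ss = [x], []
    for W in Ws:
        s = W @ phi(xs[-1])
        ss.append(s)
        xs.append(g(s))
    y = xs[-1]
    # Backward recursion for delta^l, followed by the Table 1 weight update.
    delta = 2.0 * (y - d) * g_prime(ss[-1])              # delta^L
    for l in reversed(range(len(Ws))):
        grad = np.outer(delta, phi(xs[l]))               # delta_i^l * phi(x^l)
        if l > 0:                                        # propagate before updating W
            delta = (delta @ Ws[l] @ jac_phi(xs[l])) * g_prime(ss[l - 1])
        Ws[l] = (1.0 - 2.0 * mu * lam) * Ws[l] - mu * grad
    return np.sum((y - d) ** 2)                          # epsilon^2(t)

# Usage: a tiny two-layer network trained on a single (made-up) sample.
rng = np.random.default_rng(0)
Ws = [0.1 * rng.normal(size=(3, 4)),                     # layer 1: R^2 -> R^3
      0.1 * rng.normal(size=(1, 6))]                     # layer 2: R^3 -> R^1
for _ in range(200):
    err = train_step(Ws, np.array([0.3, -0.7]), np.array([0.5]))
print(err)  # should have decreased
```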

Table 2: Pseudocode of the implicit MLK backpropagation algorithm.

Inputs
    sample points: {x(t), d(t)}, t = 1, ..., T
    cost function: λ_i^l ≥ 0 (l = 1, ..., L; i = 1, ..., N_S^l)
    learning rates: μ_i^l(t) > 0 (l = 1, ..., L; i = 1, ..., N_S^l; t = 1, ..., T)
Network initialization
    size: L (number of layers), N_I^l, N_S^l, N_o^l (l = 1, ..., L)
    parameters: w_i^l(1)-expansions (l = 1, ..., L; i = 1, ..., N_S^l)
        coefficients: α_i^l(1) ∈ R^{N_i^l(1)}
        ancestors: z_{i,j}^l(1), where j = 1, ..., N_i^l(1)
Start computation
    Choose sample x(t)
    Feedforward computation: x^l(t) (l = 2, ..., L+1), s^l(t) (l = 1, ..., L) (a)
    Backpropagation of error
        l := L
        while l ≥ 1
            if (l = L)
                δ^L(t) = 2 [y(t) - d(t)]^T (g^L)'(s^L(t))
            else
                d[s^{l+1}(t)]/d[s^l(t)] =
                    [ Σ_{j=1}^{N_i^{l+1}(t)} α_{ij}^{l+1}(t) [k^{l+1}]'_y(z_{ij}^{l+1}(t), x^{l+1}(t)) ] [(g^l)'(s^l(t))] (b)
                δ^l(t) = δ^{l+1}(t) · d[s^{l+1}(t)]/d[s^l(t)]
            Weight update for all i: 1 ≤ i ≤ N_S^l
                N_i^l(t+1) = N_i^l(t) + 1
                α_i^l(t+1) = [(1 - 2 μ_i^l(t) λ_i^l) α_i^l(t); -μ_i^l(t) δ_i^l(t)]
                z_{i,j}^l(t+1) = z_{i,j}^l(t)  (j = 1, ..., N_i^l(t))
                z_{i,j}^l(t+1) = x^l(t)        (j = N_i^l(t+1))
            l := l - 1
End computation

(a) The output of the network, i.e., y(t) = x^{L+1}(t), is also computed.
(b) Here: i = 1, ..., N_S^{l+1}. Note that (k^l)'_y denotes the derivative of kernel k^l with respect to its second argument.
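In Table 2 the weight update never touches $w_i^l$ itself: the coefficient vector is rescaled by the forgetting factor and a new (coefficient, ancestor) pair is appended, which is exactly how the expansion property (13) is preserved. A minimal sketch of this dual update for one unit, with illustrative names:

```python
def dual_weight_update(alpha, zs, x_l, delta_i, mu=0.01, lam=1e-4):
    """Implicit form of w_i^l(t+1) = (1 - 2 mu lam) w_i^l(t) - mu delta_i phi(x^l(t)),
    realized purely on the dual representation (alpha, zs)."""
    alpha = [(1.0 - 2.0 * mu * lam) * a for a in alpha]  # forgetting
    alpha.append(-mu * delta_i)                          # adaptation coefficient
    zs = zs + [x_l]                                      # new ancestor point
    return alpha, zs
```

One consequence visible here is that the expansion grows by one ancestor per presented sample.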

3.1.1 Gradient of the Approximation Term

First, we list the basic relations implied by the MLK structure. For simplicity, the index $t$ shall be dropped below [precise forms: $x^l = x^l(t)$, $y = y(t)$, $s^l = s^l(t)$, $w_i^l = w_i^l(t)$]:

$$ x^l = y^{l-1} \in \mathbb{R}^{N_I^l} \quad (l = 1, \dots, L+1), \tag{16} $$

$$ x^{l+1} = g^l(s^l) \quad (l = 1, \dots, L), \tag{17} $$

$$ s^l = \big[ \langle w_1^l, \varphi^l(x^l)\rangle_{H^l}; \dots; \langle w_{N_S^l}^l, \varphi^l(x^l)\rangle_{H^l} \big] \quad (l = 1, \dots, L;\ i = 1, \dots, N_S^l), \tag{18} $$

$$ s^l = \big[ \langle w_1^l, \varphi^l(g^{l-1}(s^{l-1}))\rangle_{H^l}; \dots; \langle w_{N_S^l}^l, \varphi^l(g^{l-1}(s^{l-1}))\rangle_{H^l} \big] \quad (l = 2, \dots, L), \tag{19} $$

$$ s^{l+1} = \big[ \langle w_1^{l+1}, \varphi^{l+1}(g^l(s^l))\rangle_{H^{l+1}}; \dots; \langle w_{N_S^{l+1}}^{l+1}, \varphi^{l+1}(g^l(s^l))\rangle_{H^{l+1}} \big] \quad (l = 1, \dots, L-1). \tag{20} $$

Let the backpropagated error for layer $l$ be defined as

$$ \delta^l(t) := \frac{d[\varepsilon^2(t)]}{d[s^l(t)]} \quad (l = 1, \dots, L). \tag{21} $$

The special case of the last layer is as follows:

$$ \delta^L(t) = \frac{d[\varepsilon^2(t)]}{d[s^L(t)]} = \frac{d\big[\|d(t) - g^L(s^L(t))\|_2^2\big]}{d[s^L(t)]} \tag{22} $$
$$ = 2\big[g^L(s^L(t)) - d(t)\big]^T (g^L)'(s^L(t)) \tag{23} $$
$$ = 2[y(t) - d(t)]^T (g^L)'(s^L(t)). \tag{24} $$

Here we used the chain rule, made use of the rule valid for vectors,

$$ \frac{d\big[\|d - y\|_2^2\big]}{dy} = 2(y - d)^T, \tag{25} $$

and inserted the relation imposed by the MLK architecture,

$$ y(t) = g^L(s^L(t)). \tag{26} $$
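As a quick numerical sanity check of Eq. (24) (not part of the paper), one can compare the closed-form $\delta^L$ with a finite-difference derivative of $\varepsilon^2$; here tanh plays the role of $g^L$ and the sample values are made up:

```python
import numpy as np

g = np.tanh
g_prime = lambda s: 1.0 - np.tanh(s) ** 2

s_L = np.array([0.3, -1.2])
d = np.array([0.5, 0.1])
eps2 = lambda s: np.sum((d - g(s)) ** 2)

delta_L = 2.0 * (g(s_L) - d) * g_prime(s_L)          # Eq. (24), diagonal Jacobian

h = 1e-6                                             # central finite differences
fd = np.array([(eps2(s_L + h * e) - eps2(s_L - h * e)) / (2 * h)
               for e in np.eye(len(s_L))])
print(np.allclose(delta_L, fd, atol=1e-5))           # True
```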

The expression

$$ \frac{d[s^{l+1}(t)]}{d[s^l(t)]} \quad (l = 1, \dots, L-1) \tag{27} $$

can be computed by using Eq. (20). It is sufficient to consider terms of the form

$$ \frac{d\big[\langle w, \varphi(g(s))\rangle_H\big]}{ds} \tag{28} $$

and then to compile the full derivative from them. The value of (28) is computed by means of the following lemma.

Lemma 1 Let $w \in H = H(k)$ be a point in the RKHS. Let us assume the following:

1. Kernel $k$ is differentiable with respect to both arguments, and $k'_y$ denotes the derivative of the kernel with respect to its second argument.

2. In the implicit case we also assume that $w$ is within the image space of the feature space representation of a finite number of points $z_i$. That is,

$$ w \in \mathrm{Im}\,\big(\varphi(z_1), \varphi(z_2), \dots, \varphi(z_N)\big) \subseteq H. \tag{29} $$

Let this expansion be $w = \sum_{j=1}^{N} \alpha_j \varphi(z_j)$, where $\alpha_j \in \mathbb{R}$. Then we have two cases:

1. Explicit case:

$$ \frac{d\big[\langle w, \varphi(g(s))\rangle_H\big]}{ds} = \frac{d\big[\langle w, \varphi(u)\rangle_H\big]}{d[u]}\bigg|_{u = g(s)} \, g'(s). \tag{30} $$

2. Implicit case:

$$ \frac{d\big[\langle w, \varphi(g(s))\rangle_H\big]}{ds} = \sum_{j=1}^{N} \alpha_j\, k'_y(z_j, g(s))\, g'(s). \tag{31} $$
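Equation (31) is straightforward to instantiate. For the Gaussian kernel $k(z, v) = \exp(-\|z - v\|^2 / 2\sigma^2)$, the derivative in the second argument is $k'_y(z, v) = k(z, v)(z - v)^T/\sigma^2$; the following illustrative sketch checks Eq. (31) against finite differences under these assumed choices (tanh as $g$, made-up expansion):

```python
import numpy as np

sigma = 1.0
k = lambda z, v: np.exp(-np.sum((z - v) ** 2) / (2 * sigma ** 2))
k_y = lambda z, v: k(z, v) * (z - v) / sigma ** 2    # derivative in 2nd argument
g = np.tanh
g_prime = lambda s: 1.0 - np.tanh(s) ** 2

alphas = [0.7, -0.4]
zs = [np.array([0.0, 1.0]), np.array([1.0, -1.0])]
s = np.array([0.2, -0.5])

# Eq. (31): d<w, phi(g(s))>/ds = sum_j alpha_j k_y(z_j, g(s)) * g'(s) (elementwise g)
grad = sum(a * k_y(z, g(s)) for a, z in zip(alphas, zs)) * g_prime(s)

f = lambda s: sum(a * k(z, g(s)) for a, z in zip(alphas, zs))
h = 1e-6
fd = np.array([(f(s + h * e) - f(s - h * e)) / (2 * h) for e in np.eye(2)])
print(np.allclose(grad, fd, atol=1e-5))              # True
```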

Proof

1. Explicit case: the statement follows from the chain rule.

2. Implicit case:

$$ \frac{d\big[\langle w, \varphi(g(s))\rangle_H\big]}{ds} = \frac{d\big[\big\langle \sum_j \alpha_j \varphi(z_j), \varphi(g(s))\big\rangle_H\big]}{ds} \tag{32} $$
$$ = \frac{d\big[\sum_j \alpha_j \langle \varphi(z_j), \varphi(g(s))\rangle_H\big]}{ds} \tag{33} $$
$$ = \frac{d\big[\sum_j \alpha_j k(z_j, g(s))\big]}{ds} \tag{34} $$
$$ = \sum_j \alpha_j k'_y(z_j, g(s))\, g'(s). \tag{35} $$

The first equation uses the expansion of $w$, and the linearity of the scalar product was utilized. Then, the relation

$$ k(u, v) = \langle \varphi(u), \varphi(v)\rangle_H \tag{36} $$

between the feature mapping and the kernel was applied. The last step follows from the chain rule.

Let us turn back to the computation of Eq. (27).

1. Explicit case: According to the lemma we have

$$ \frac{d[s^{l+1}(t)]}{d[s^l(t)]} = \bigg[ \frac{d\big[\langle w_i^{l+1}(t), \varphi^{l+1}(u)\rangle_{H^{l+1}}\big]}{d[u]}\bigg|_{u = g^l(s^l(t))} \bigg] (g^l)'(s^l(t)) \tag{37} $$
$$ = \bigg[ \frac{d\big[\langle w_i^{l+1}(t), \varphi^{l+1}(u)\rangle_{H^{l+1}}\big]}{d[u]}\bigg|_{u = x^{l+1}(t)} \bigg] \big[(g^l)'(s^l(t))\big] \quad (l = 1, \dots, L-1;\ i = 1, \dots, N_S^{l+1}). \tag{38} $$

In the second equation, (i) we used identity (17), and (ii) we pulled out the term $(g^l)'(s^l(t))$ according to the matrix multiplication rules.

2. Implicit case: For the terms $w_i^{l+1}(t)$ we have the expansion property expressed by Eq. (13). This was our starting assumption. In Subsection 3.1.3, we shall see that this feature is inherited from step to step. Thus,

$$ w_i^{l+1}(t) = \sum_{j=1}^{N_i^{l+1}(t)} \alpha_{ij}^{l+1}(t)\, \varphi^{l+1}\big(z_{ij}^{l+1}(t)\big) \quad (l = 1, \dots, L-1;\ i = 1, \dots, N_S^{l+1}), \tag{39} $$

and the derivative (27) we need assumes the form

$$ \frac{d[s^{l+1}(t)]}{d[s^l(t)]} = \bigg[ \sum_{j=1}^{N_i^{l+1}(t)} \alpha_{ij}^{l+1}(t)\, [k^{l+1}]'_y\big(z_{ij}^{l+1}(t), g^l(s^l(t))\big) \bigg] (g^l)'(s^l(t)) \tag{40} $$
$$ = \bigg[ \sum_{j=1}^{N_i^{l+1}(t)} \alpha_{ij}^{l+1}(t)\, [k^{l+1}]'_y\big(z_{ij}^{l+1}(t), x^{l+1}(t)\big) \bigg] \big[(g^l)'(s^l(t))\big] \quad (l = 1, \dots, L-1;\ i = 1, \dots, N_S^{l+1}). \tag{41} $$

Here, the second equation is based on identity (17). The matrix term $(g^l)'(s^l(t))$ was pulled out according to the matrix multiplication rules. Applying the chain rule and the definition of $\delta^{l+1}(t)$, we have

$$ \delta^l(t) = \frac{d[\varepsilon^2(t)]}{d[s^l(t)]} = \frac{d[\varepsilon^2(t)]}{d[s^{l+1}(t)]} \frac{d[s^{l+1}(t)]}{d[s^l(t)]} = \delta^{l+1}(t) \frac{d[s^{l+1}(t)]}{d[s^l(t)]} \quad (l = 1, \dots, L-1). \tag{42} $$

One can apply the chain rule once again and make use of the definitions of $\delta^l(t)$ and $s^l(t)$ to show that

$$ \frac{d[\varepsilon^2(t)]}{d[w_i^l(t)]} = \frac{d[\varepsilon^2(t)]}{d[s_i^l(t)]} \frac{d[s_i^l(t)]}{d[w_i^l(t)]} = \delta_i^l(t)\, \varphi^l(x^l(t)) \quad (l = 1, \dots, L;\ i = 1, \dots, N_S^l), \tag{43} $$

which is the desired derivative. Note that the derivative can be expressed by using the number $\delta_i^l(t)$ and the feature representation of the input $x^l(t)$ arriving at the $l$-th layer, i.e., by $\varphi^l(x^l(t))$.

3.1.2 Regularization Term

This term is relatively simple:

$$ \frac{d[r(t)]}{d[w_i^l(t)]} = \frac{d\big[\sum_{l=1}^{L}\sum_{i=1}^{N_S^l} \lambda_i^l \|w_i^l(t)\|_{H^l}^2\big]}{d[w_i^l(t)]} = 2\lambda_i^l\, w_i^l(t) \quad (l = 1, \dots, L;\ i = 1, \dots, N_S^l). \tag{44} $$

Note that the respective terms of the derivative are scaled actual weights $w_i^l(t)$. This form enables our implicit tuning.
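Before turning to the cost term, here is an illustrative code form of the implicit Jacobian of Eq. (41), which drives the $\delta$-recursion of Eq. (42); it reuses a Gaussian-kernel derivative `k_y` as in the sketch above, and assumes an elementwise nonlinearity so that $(g^l)'$ is diagonal. Names are assumptions, not the paper's notation.

```python
import numpy as np

def implicit_jacobian(alphas, zs, x_next, g_prime_s, k_y):
    """Eq. (41): d[s^{l+1}]/d[s^l] with the w_i^{l+1} given in dual form.

    alphas[i], zs[i]: expansion of w_i^{l+1};
    x_next = x^{l+1}(t) = g^l(s^l(t));
    g_prime_s = (g^l)'(s^l(t)) as a vector (elementwise nonlinearity).
    """
    rows = [sum(a * k_y(z, x_next) for a, z in zip(alphas_i, zs_i))
            for alphas_i, zs_i in zip(alphas, zs)]
    return np.array(rows) * g_prime_s   # scale column j by g'(s^l)_j

# The backward recursion of Eq. (42) is then simply:
# delta_l = delta_lp1 @ implicit_jacobian(...)
```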

3.1.3 Cost Term

Using the identity

$$ \frac{d[c(t)]}{d[w_i^l(t)]} = \frac{d[\varepsilon^2(t)]}{d[w_i^l(t)]} + \frac{d[r(t)]}{d[w_i^l(t)]} \quad (l = 1, \dots, L;\ i = 1, \dots, N_S^l), \tag{45} $$

as well as our results on the approximation and the regularization terms [i.e., Eqs. (43) and (44)], we arrive at the steepest descent form

$$ w_i^l(t+1) = w_i^l(t) - \mu_i^l(t) \frac{d[c(t)]}{d[w_i^l(t)]} \quad (l = 1, \dots, L;\ i = 1, \dots, N_S^l). \tag{46} $$

So we have

$$ w_i^l(t+1) = w_i^l(t) - \mu_i^l(t) \big( \delta_i^l(t)\, \varphi^l(x^l(t)) + 2\lambda_i^l\, w_i^l(t) \big) \tag{47} $$
$$ = \big(1 - 2\mu_i^l(t)\lambda_i^l\big)\, w_i^l(t) - \mu_i^l(t)\, \delta_i^l(t)\, \varphi^l(x^l(t)) \quad (l = 1, \dots, L;\ i = 1, \dots, N_S^l). \tag{48} $$

The same in dual form is as follows:

$$ \alpha_i^l(t+1) = \big[ \big(1 - 2\mu_i^l(t)\lambda_i^l\big)\, \alpha_i^l(t);\ -\mu_i^l(t)\,\delta_i^l(t) \big] \quad (l = 1, \dots, L;\ i = 1, \dots, N_S^l). \tag{49} $$

In turn, the expansion property of the weight vectors of the network [i.e., Eq. (13)] is inherited from step to step. In particular, the expansion is valid for the parameter set $w_i^l$ obtained at the end of the computation. Summing up, the MLK can be tuned by the backpropagation procedure. The derived explicit and implicit procedures are summarized in Table 1 and Table 2, respectively.

4 Conclusions

A theoretical description of a novel multilayer model, the Multilayer Kerceptron, was provided. This network unifies the advantages of Multilayer Perceptrons and Support Vector Machines: (i) it learns the weights, and learning is subject to regularization; (ii) the MLK allows for feature representations; (iii) the MLK computes the output quickly through the learned weights; (iv) the MLK can have hidden layers, and thus it can combine SVM partitionings. Advantages and disadvantages of the approach for different databases remain to be seen.

References

[1] N. Aronszajn. Theory of Reproducing Kernels. Transactions of the American Mathematical Society, 68:337-404, 1950.

[2] T. Evgeniou, M. Pontil, and T. Poggio. Regularization Networks and Support Vector Machines. Advances in Computational Mathematics, 13(1):1-50, 2000.

[3] S. Haykin. Neural Networks. Prentice Hall, New Jersey, USA, 1999.

[4] R. Herbrich. Learning Kernel Classifiers. MIT Press, 2002.

[5] K.-R. Müller, A. Smola, G. Rätsch, B. Schölkopf, J. Kohlmorgen, and V. Vapnik. Predicting Time Series with Support Vector Machines. In Advances in Kernel Methods, MIT Press, 1999.

[6] B. Schölkopf and A. J. Smola. Learning with Kernels. MIT Press, Cambridge, MA, 2002.

[7] J. Shawe-Taylor and N. Cristianini. Kernel Methods for Pattern Analysis. Cambridge University Press, 2004.

[8] V. Vapnik, S. Golowich, and A. Smola. Support Vector Method for Function Estimation, Regression Estimation and Signal Processing. In Advances in Neural Information Processing Systems, volume 9. MIT Press, Cambridge, MA, 1997.

[9] V. N. Vapnik. The Nature of Statistical Learning Theory. Springer-Verlag New York, Inc., 1995.

[10] V. N. Vapnik. Statistical Learning Theory. Wiley, Chichester, GB, 1998.

[11] G. Wahba. Support Vector Machines, Reproducing Kernel Hilbert Spaces, and Randomized GACV. In Advances in Kernel Methods, MIT Press, 1999.
