Training Algorithm for Extra Reduced Size Lattice Ladder Multilayer Perceptrons


Training Algorithm for Extra Reduced Size Lattice Ladder Multilayer Perceptrons

Dalius Navakauskas

Division of Automatic Control
Department of Electrical Engineering
Linköpings universitet, SE Linköping, Sweden

March 5, 2003

AUTOMATIC CONTROL, COMMUNICATION SYSTEMS, LINKÖPING

Report no.: LiTH-ISY-R-2499. Submitted to the journal Informatica.

Technical reports from the Control & Communication group in Linköping are available at the group's web site.


Training Algorithm for Extra Reduced Size Lattice Ladder Multilayer Perceptrons

Dalius Navakauskas
Radioelectronics Department, Electronics Faculty
Vilnius Gediminas Technical University
Naugarduko 41, 2600 Vilnius, Lithuania

March 5, 2003

Abstract

A quick gradient training algorithm for a specific neural network structure called an extra reduced size lattice ladder multilayer perceptron is introduced. The presented derivation of the algorithm utilizes the simplest way, recently found by the author, of exactly computing the gradients for the rotation parameters of a lattice ladder filter. The developed neural network training algorithm is optimal in terms of a minimal number of constants, multiplication and addition operations, while the regularity of the structure is also preserved.

Keywords: lattice ladder filter, lattice ladder multilayer perceptron, adaptation, gradient adaptive lattice algorithms.

1 Introduction

During the last decade a lot of new structures for artificial neural networks (ANN) were introduced, fulfilling the need to have models for nonlinear processing of time-varying signals [6, 7, 16, 17]. One of many, and perhaps the most straightforward, ways to insert temporal behaviour into ANNs is to use digital filters in place of the synapses of a multilayer perceptron (MLP). Following that way, a time-delay neural network [18], FIR/IIR MLPs [1, 20], a gamma MLP [8] and a cascade neural network [4], to name a few ANN architectures, were developed. A lattice ladder realization of IIR filters incorporated as MLP synapses forms the structure of a lattice ladder multilayer perceptron (LLMLP), firstly introduced in [2] and followed by several simplified versions proposed by the author [9, 11]. An LLMLP is an appealing structure for advanced signal processing; however, even a moderate implementation of its training is hindered by the fact that a lot of storage and computation power must be allocated [10].

Footnote: Currently with the Automatic Control Division at ISY, Linköping University.

Well-known neural network training algorithms such as backpropagation and its modifications, conjugate-gradients, Quasi-Newton, Levenberg-Marquardt, etc., or their adaptive counterparts like temporal backpropagation [19], the IIR MLP training algorithm [3], recursive Levenberg-Marquardt [13], etc., are essentially based on the use of gradients (also called sensitivity functions), i.e., partial derivatives of the cost function with respect to the current weights. Here we are going to show how exploiting the computation of gradients that is specific to the lattice ladder filter (LLF) leads to an efficient realization of the overall family of LLMLP training algorithms. While doing so, we present the quickest, in terms of the number of constants, multiplication and addition operations, training algorithm for an extra reduced size LLMLP (XLLMLP).

2 Towards simplified computations

In order not to obscure the main ideas, we will first work only with one LLF, and afterwards, in Section 4, generalize and utilize the results in the development of a training algorithm for the specific XLLMLP structure. Although the final LLF training algorithm is fairly simple, many intermediate steps involved in the derivation could be confusing. Thus, here we will present previous results on the calculation of LLF gradients, introduce the simplifications we have chosen, and only in the next Section 3 actually derive the simplified expressions.

Consider one lattice ladder filter (see Figure 1 on the next page) used in the XLLMLP structure, when its computations are expressed by

\[
\begin{bmatrix} f_{j-1}(n) \\ b_j(n) \end{bmatrix}
=
\begin{bmatrix} \cos\Theta_j & -\sin\Theta_j \\ \sin\Theta_j & \cos\Theta_j \end{bmatrix}
\begin{bmatrix} f_j(n) \\ z\,b_{j-1}(n) \end{bmatrix},
\qquad j = 1, 2, \dots, M;
\tag{1a}
\]

\[
s_{\mathrm{out}}(n) = \sum_{j=0}^{M} v_j\, b_j(n),
\tag{1b}
\]

with boundary conditions

\[
b_0(n) = f_0(n); \qquad f_M(n) = s_{\mathrm{in}}(n).
\tag{1c}
\]

Here we used the following notations: s_in(n) and s_out(n) are the signals at the input and output of the Mth order LLF; f_j(n) and b_j(n) are forward and backward signals floating in the jth section of the LLF; Θ_j and v_j are lattice and ladder coefficients correspondingly; while z is a delay operator such that z b_j(n) = b_j(n − 1).
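To make the data flow in (1) concrete, here is a minimal sketch of an Mth order LLF run over an input sequence. The function name and the list-based state are illustrative choices, not the paper's notation.

```python
import math

def llf_filter(s_in, thetas, v):
    """One lattice-ladder filter of order M = len(thetas), run over a whole
    input sequence, following Eq. (1): a rotation by Theta_j in every lattice
    section and ladder taps v_0..v_M summed into the output. The list
    b_delay holds the delayed backward signals z b_j(n)."""
    M = len(thetas)
    assert len(v) == M + 1
    b_delay = [0.0] * (M + 1)           # b_j(n-1) for j = 0..M
    out = []
    for x in s_in:
        f = [0.0] * (M + 1)
        b = [0.0] * (M + 1)
        f[M] = x                        # boundary (1c): f_M(n) = s_in(n)
        for j in range(M, 0, -1):       # sections j = M..1, Eq. (1a)
            c, s = math.cos(thetas[j - 1]), math.sin(thetas[j - 1])
            f[j - 1] = c * f[j] - s * b_delay[j - 1]
            b[j] = s * f[j] + c * b_delay[j - 1]
        b[0] = f[0]                     # boundary (1c): b_0(n) = f_0(n)
        out.append(sum(vj * bj for vj, bj in zip(v, b)))  # ladder sum (1b)
        b_delay = b
    return out
```

As a sanity check, with all rotation angles set to zero the lattice degenerates into a pure delay line, so the ladder taps act as an FIR filter and the impulse response simply reproduces the ladder coefficients.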
It could be shown (see, for example, [5]) that the gradient expressions for the calculation of the LLF coefficients require M recursions, yielding a training algorithm with total complexity proportional to M^2. One possible way to simplify the calculation of the LLF gradients was presented in [15]. We assume that the concept of flowgraph transposition is already known (if not, consult, for example, [14]). Applying the flowgraph transposition rules to the LLF equations (1a) and (1b), we obtain the LLF transpose realization. The resulting system gives rise to the following recurrent

Figure 1: A lattice ladder filter of order M. The input signal s_in(n) is processed according to (1) using lattice Θ_x and ladder v_x coefficients, and as a result the output signal s_out(n) is obtained. Some forward f_x(n) and backward b_x(n) signals floating inside the LLF are shown, while by z we indicate a delay operator.

relation

\[
\begin{bmatrix} g_j(n) \\ t_{j-1}(n) \end{bmatrix}
=
\begin{bmatrix} 1 & 0 \\ 0 & z \end{bmatrix}
\begin{bmatrix} \cos\Theta_j & \sin\Theta_j \\ -\sin\Theta_j & \cos\Theta_j \end{bmatrix}
\begin{bmatrix} g_{j-1}(n) \\ t_j(n) \end{bmatrix}
+
\begin{bmatrix} 0 \\ v_{j-1} \end{bmatrix},
\qquad j = M, \dots, 2, 1,
\tag{2a}
\]

with boundary conditions

\[
t_M(n) = v_M, \qquad g_0(n) = t_0(n), \qquad g_M(n) = s_{\mathrm{out}}(n).
\tag{2b}
\]

Here by g_j(n) and t_j(n) we indicate the forward and backward signals floating in the jth section of the transposed LLF. After simple re-arrangements (see, for example, [10, pages 51–54]) it could be shown that the filtered regressor components alternatively could be expressed as

\[
\nabla\Theta_j(n) = \bigl[ f_j(n)\,t_j(n) - b_j(n)\,g_j(n) \bigr]\, e(n).
\tag{3}
\]

Here e(n) is the LLF error. The main idea given by J. A. Rodriguez-Fonollosa and E. Masgrau towards simplifying the gradient computations for the lattice parameters is to find a recurrence relation that realizes the mapping

\[
\begin{bmatrix} f_{j-1}(n)\,t_{j-1}(n) \\ b_{j-1}(n)\,g_{j-1}(n) \end{bmatrix}
\longmapsto
\begin{bmatrix} f_j(n)\,t_j(n) \\ b_j(n)\,g_j(n) \end{bmatrix},
\tag{4}
\]

in such a way that all the necessary transfer functions may be obtained from one single filter, resulting in an algorithm with total complexity proportional to M.
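The transposed recursion (2) above can be sketched per time step as follows. The two-pass ordering (backward signals t_j from section M down to 0, then forward signals g_j upward) and the stored (g, t) pairs are implementation choices; the sign conventions follow the reconstruction of (2a), so treat this as a sketch rather than the paper's exact realization.

```python
import math

def transposed_llf_step(thetas, v, tg_delay):
    """One time step of the transposed LLF (2). tg_delay[j] holds the pair
    (g_{j-1}(n-1), t_j(n-1)) that the delay operator z acts on; it is
    updated in place for the next step. Returns the current g and t lists."""
    M = len(thetas)
    t = [0.0] * (M + 1)
    t[M] = v[M]                                      # boundary (2b)
    for j in range(M, 0, -1):                        # t_{j-1} from delayed pair
        g_prev_d, t_d = tg_delay[j]
        t[j - 1] = (-math.sin(thetas[j - 1]) * g_prev_d
                    + math.cos(thetas[j - 1]) * t_d + v[j - 1])
    g = [0.0] * (M + 1)
    g[0] = t[0]                                      # boundary (2b)
    for j in range(1, M + 1):                        # g_j from current pair
        g[j] = math.cos(thetas[j - 1]) * g[j - 1] + math.sin(thetas[j - 1]) * t[j]
    for j in range(1, M + 1):                        # store state for next step
        tg_delay[j] = (g[j - 1], t[j])
    return g, t
```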

3 Derivation of the simplified calculation of LLF gradients

There are in total 14 possible and implementable ways of realizing the mapping (4). The optimal way, in the sense of a minimal number of constants, addition and delay operations involved in the calculation of the gradients for the LLF, and also regularity of the regressor lattice structure, was reported in [12]. It forms the basis for the new training algorithm to be developed in the next section. Thus, let us show in a step-by-step fashion the re-arrangements involved in the derivation of the optimal realization of the mapping (4).

Accordingly, we are seeking a simple realization for the following optimal augmented mapping

\[
\begin{bmatrix}
f_{j-1}(n)\,g_{j-1}(n) \\ b_{j-1}(n)\,g_{j-1}(n) \\ f_j(n)\,t_j(n) \\ b_j(n)\,t_j(n)
\end{bmatrix}
\longmapsto
\begin{bmatrix}
f_j(n)\,g_j(n) \\ b_j(n)\,g_j(n) \\ f_{j-1}(n)\,t_{j-1}(n) \\ b_{j-1}(n)\,t_{j-1}(n)
\end{bmatrix},
\tag{5}
\]

while the overall system describing a single section of the regressor lattice is expressed by

\[
\begin{bmatrix} f_{j-1}(n)\,g_{j-1}(n) \\ b_j(n)\,g_{j-1}(n) \end{bmatrix}
=
\begin{bmatrix} \cos\Theta_j & -\sin\Theta_j \\ \sin\Theta_j & \cos\Theta_j \end{bmatrix}
\begin{bmatrix} f_j(n)\,g_{j-1}(n) \\ z\,b_{j-1}(n)\,g_{j-1}(n) \end{bmatrix};
\tag{6a}
\]

\[
\begin{bmatrix} f_{j-1}(n)\,t_{j-1}(n) \\ b_j(n)\,t_{j-1}(n) \end{bmatrix}
=
\begin{bmatrix} \cos\Theta_j & -\sin\Theta_j \\ \sin\Theta_j & \cos\Theta_j \end{bmatrix}
\begin{bmatrix} f_j(n)\,t_{j-1}(n) \\ z\,b_{j-1}(n)\,t_{j-1}(n) \end{bmatrix};
\tag{6b}
\]

\[
\begin{bmatrix} f_j(n)\,g_j(n) \\ f_j(n)\,t_{j-1}(n) \end{bmatrix}
=
\begin{bmatrix} 1 & 0 \\ 0 & z \end{bmatrix}
\begin{bmatrix} \cos\Theta_j & \sin\Theta_j \\ -\sin\Theta_j & \cos\Theta_j \end{bmatrix}
\begin{bmatrix} f_j(n)\,g_{j-1}(n) \\ f_j(n)\,t_j(n) \end{bmatrix}
+
\begin{bmatrix} 0 \\ v_{j-1} \end{bmatrix} f_j(n);
\tag{6c}
\]

\[
\begin{bmatrix} b_j(n)\,g_j(n) \\ b_j(n)\,t_{j-1}(n) \end{bmatrix}
=
\begin{bmatrix} 1 & 0 \\ 0 & z \end{bmatrix}
\begin{bmatrix} \cos\Theta_j & \sin\Theta_j \\ -\sin\Theta_j & \cos\Theta_j \end{bmatrix}
\begin{bmatrix} b_j(n)\,g_{j-1}(n) \\ b_j(n)\,t_j(n) \end{bmatrix}
+
\begin{bmatrix} 0 \\ v_{j-1} \end{bmatrix} b_j(n).
\tag{6d}
\]

In order to achieve the mapping (5), two information flow directions in (6) must be reversed: now f_j(n)g_{j-1}(n) must be computed based on f_{j-1}(n)g_{j-1}(n), and b_{j-1}(n)t_{j-1}(n) must be computed based on b_j(n)t_{j-1}(n). For the first re-direction to be fulfilled we take (6a) and re-arrange it as follows

\[
\begin{bmatrix} f_j(n)\,g_{j-1}(n) \\ b_j(n)\,g_{j-1}(n) \end{bmatrix}
= \frac{1}{\cos\Theta_j}
\begin{bmatrix} 1 & \sin\Theta_j \\ \sin\Theta_j & 1 \end{bmatrix}
\begin{bmatrix} 1 & 0 \\ 0 & z \end{bmatrix}
\begin{bmatrix} f_{j-1}(n)\,g_{j-1}(n) \\ b_{j-1}(n)\,g_{j-1}(n) \end{bmatrix}.
\tag{7a}
\]

Similarly, taking (6b) and re-arranging we get

\[
\begin{bmatrix} f_{j-1}(n)\,t_{j-1}(n) \\ b_{j-1}(n)\,t_{j-1}(n) \end{bmatrix}
=
\begin{bmatrix} 1 & 0 \\ 0 & z^{-1} \end{bmatrix}
\frac{1}{\cos\Theta_j}
\begin{bmatrix} 1 & -\sin\Theta_j \\ -\sin\Theta_j & 1 \end{bmatrix}
\begin{bmatrix} f_j(n)\,t_{j-1}(n) \\ b_j(n)\,t_{j-1}(n) \end{bmatrix}.
\tag{7b}
\]

For convenience we rewrite the unchanged expressions (6c) and (6d) here:

\[
\begin{bmatrix} f_j(n)\,g_j(n) \\ f_j(n)\,t_{j-1}(n) \end{bmatrix}
=
\begin{bmatrix} 1 & 0 \\ 0 & z \end{bmatrix}
\begin{bmatrix} \cos\Theta_j & \sin\Theta_j \\ -\sin\Theta_j & \cos\Theta_j \end{bmatrix}
\begin{bmatrix} f_j(n)\,g_{j-1}(n) \\ f_j(n)\,t_j(n) \end{bmatrix}
+
\begin{bmatrix} 0 \\ v_{j-1} \end{bmatrix} f_j(n);
\tag{7c}
\]

\[
\begin{bmatrix} b_j(n)\,g_j(n) \\ b_j(n)\,t_{j-1}(n) \end{bmatrix}
=
\begin{bmatrix} 1 & 0 \\ 0 & z \end{bmatrix}
\begin{bmatrix} \cos\Theta_j & \sin\Theta_j \\ -\sin\Theta_j & \cos\Theta_j \end{bmatrix}
\begin{bmatrix} b_j(n)\,g_{j-1}(n) \\ b_j(n)\,t_j(n) \end{bmatrix}
+
\begin{bmatrix} 0 \\ v_{j-1} \end{bmatrix} b_j(n).
\tag{7d}
\]

Now, (7) describes a system that has no conflicting information flow directions. However, there is an advance operation z^{-1} to be performed in (7b), making the system non-causal, hence un-implementable. There is, however, more to this computation than first meets the eye. Notice first that the mapping (5) needs only four expressions to be specified. Thus a simplification of (7) becomes plausible. It could be shown that (7) could be simplified drastically and finally expressed by

\[
\begin{aligned}
f_j(n)\,g_j(n) &= f_{j-1}(n)\,g_{j-1}(n) + \sin\Theta_j \bigl[ f_j(n)\,t_j(n) + z\,b_{j-1}(n)\,g_{j-1}(n) \bigr]; \\
b_j(n)\,g_j(n) &= z\,b_{j-1}(n)\,g_{j-1}(n) + \sin\Theta_j \bigl[ f_{j-1}(n)\,g_{j-1}(n) + b_j(n)\,t_j(n) \bigr]; \\
f_{j-1}(n)\,t_{j-1}(n) &= z\bigl\{ f_j(n)\,t_j(n) - \sin\Theta_j \bigl[ f_{j-1}(n)\,g_{j-1}(n) + b_j(n)\,t_j(n) \bigr] \bigr\} + v_{j-1}\,f_{j-1}(n); \\
b_{j-1}(n)\,t_{j-1}(n) &= b_j(n)\,t_j(n) - \sin\Theta_j \bigl[ f_j(n)\,t_j(n) + z\,b_{j-1}(n)\,g_{j-1}(n) \bigr] + v_{j-1}\,b_{j-1}(n).
\end{aligned}
\tag{8a}
\]

The simplified system of a single section of the regressor lattice is already causal. Moreover, since the bracketed sums each appear twice and can be shared, its implementation requires only 2 delay operators, 3 constants (sinΘ_j, −sinΘ_j and v_{j−1}) and 8 addition operations (more evident from the pictorial representation of (8), which we skipped to save space). In order to finish the derivation, the boundary conditions when M such systems are cascaded must be considered. Based on (1c) and (2b) we get the new boundary conditions

\[
\begin{aligned}
f_0(n)\,g_0(n) &= f_0(n)\,t_0(n); & b_0(n)\,g_0(n) &= b_0(n)\,t_0(n); \\
f_M(n)\,t_M(n) &= v_M\,f_M(n); & b_M(n)\,t_M(n) &= v_M\,b_M(n).
\end{aligned}
\tag{8b}
\]

4 Simplified training algorithm for the extra reduced size LLMLP

One way of LLMLP structure reduction could be achieved by restricting each neuron to have only one output lattice ladder filter while connecting the layers through conventional synaptic coefficients. It yields an extra reduced size

Figure 2: A layer of the XLLMLP. Input signals s^{ℓ−1}_x(n) from the previous layer are processed according to (9): firstly by a group of lattice ladder filters LLF_x (see Figure 1 for a more detailed treatment), then by static synapses w^ℓ_{x,x}, and finally by neuron activation functions Φ^ℓ(·), forming the current layer output signals s^ℓ_x(n). Emphasized intermediate signals: the LLF output signals s̃^ℓ_x(n) and the signals before the activation functions ŝ^ℓ_x(n). For the purpose of clearness, the sizes of the main matrices (shown in brackets) are revealed at the bottom: {Θ, V}[N_{ℓ−1}, M_ℓ], W[N_{ℓ−1}, N_ℓ], Φ(·)[N_ℓ].

lattice ladder multilayer perceptron structure (XLLMLP), whose pictorial representation is given on the next page in Figure 2, and whose definition follows.

Definition 1 ([11]). An XLLMLP of size L layers, {N_0, N_1, ..., N_L} nodes and filter orders {M_1, M_2, ..., M_L} is defined by

\[
s^{\ell}_h(n) = \Phi^{\ell}\Bigl( \underbrace{\sum_{i=1}^{N_{\ell-1}} w^{\ell}_{ih} \overbrace{\sum_{j=0}^{M_{\ell}} v^{\ell}_{ij}\, b^{\ell}_{ij}(n)}^{\tilde s^{\ell}_i(n)}}_{\hat s^{\ell}_h(n)} \Bigr),
\qquad h = 1, \dots, N_{\ell};\ \ \ell = 1, \dots, L,
\tag{9a}
\]

when the local flow of information in the lattice part of the filters is defined by

\[
\begin{bmatrix} f^{\ell}_{i,j-1}(n) \\ b^{\ell}_{ij}(n) \end{bmatrix}
=
\begin{bmatrix} \cos\Theta^{\ell}_{ij} & -\sin\Theta^{\ell}_{ij} \\ \sin\Theta^{\ell}_{ij} & \cos\Theta^{\ell}_{ij} \end{bmatrix}
\begin{bmatrix} f^{\ell}_{ij}(n) \\ z\,b^{\ell}_{i,j-1}(n) \end{bmatrix},
\qquad j = 1, 2, \dots, M_{\ell},
\tag{9b}
\]

with initial and boundary conditions

\[
b^{\ell}_{i0}(n) = f^{\ell}_{i0}(n); \qquad f^{\ell}_{i,M+1}(n) = s^{\ell-1}_i(n).
\tag{9c}
\]
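A sketch of one time step of a single XLLMLP layer per Definition 1 follows; all function and variable names are illustrative, and the delayed backward signals live in a caller-owned list so the layer can be stepped repeatedly.

```python
import math

def xllmlp_layer(s_prev, thetas, v, w, b_delay, phi=math.tanh):
    """One time step of an XLLMLP layer, Eq. (9). s_prev[i] are previous-layer
    outputs; thetas[i][j-1] and v[i][j] are lattice/ladder weights of the one
    LLF attached to input i; w[i][h] are static synapses; b_delay[i][j] holds
    b_ij(n-1) and is updated in place."""
    n_in, n_out = len(s_prev), len(w[0])
    s_tilde = [0.0] * n_in
    for i in range(n_in):
        M = len(thetas[i])
        f = [0.0] * (M + 1)
        b = [0.0] * (M + 1)
        f[M] = s_prev[i]                          # boundary (9c)
        for j in range(M, 0, -1):                 # lattice rotations (9b)
            c, s = math.cos(thetas[i][j - 1]), math.sin(thetas[i][j - 1])
            f[j - 1] = c * f[j] - s * b_delay[i][j - 1]
            b[j] = s * f[j] + c * b_delay[i][j - 1]
        b[0] = f[0]
        s_tilde[i] = sum(vj * bj for vj, bj in zip(v[i], b))  # ladder sum
        b_delay[i][:] = b
    # static synapses and activation, Eq. (9a)
    return [phi(sum(w[i][h] * s_tilde[i] for i in range(n_in)))
            for h in range(n_out)]
```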

Here s^ℓ_h(n) is an output signal of the hth output neuron; s̃^ℓ_i(n) is an output signal of the filter that is connected to the ith input neuron; w^ℓ_{ih} represents a single (static) weight connecting two neurons in a layer; Θ^ℓ_{ij} and v^ℓ_{ij} are the weights of the filter's lattice and ladder parts correspondingly; b^ℓ_{ij}(n) and f^ℓ_{ij}(n) are the backward and forward prediction errors of the filter; i, j, h and ℓ index inputs, filter coefficients, outputs and layers respectively.

Let us consider the calculation of sensitivity functions using the backpropagation algorithm for the ℓth hidden layer neurons of the XLLMLP. It could be shown [11] that the sensitivity functions for the XLLMLP could be expressed by

\[
\nabla w^{\ell}_{ih}(n) = \tilde s^{\ell}_i(n)\, \delta^{\ell}_h(n);
\tag{10a}
\]

\[
\nabla v^{\ell}_{ij}(n) = b^{\ell}_{ij}(n) \sum_{h=1}^{N_{\ell}} w^{\ell}_{ih}\, \delta^{\ell}_h(n);
\tag{10b}
\]

\[
\nabla \Theta^{\ell}_{ij}(n) = \sum_{r=0}^{M_{\ell}} v^{\ell}_{ir}(n)\, \frac{\partial b^{\ell}_{ir}(n)}{\partial \Theta^{\ell}_{ij}} \sum_{h=1}^{N_{\ell}} w^{\ell}_{ih}\, \delta^{\ell}_h(n).
\tag{10c}
\]

Note that these expressions are similar to the standard LLF gradient expressions, in the sense that here we used additional indices to state the LLF position in the whole XLLMLP architecture (being precise, we showed expressions for the sensitivity functions of the jth coefficients of the LLF connecting the ith input neuron with the hth output neuron in the ℓth layer of the LLMLP), and replaced the output error e(n) term by the generalized local instantaneous error, i.e., δ^ℓ_h(n) ≡ ∂E(n)/∂ŝ^ℓ_h(n), that could be explicitly expressed by

\[
\delta^{\ell}_h(n) =
\begin{cases}
e_h(n)\, \Phi'_L\{\hat s^{L}_h(n)\}, & \ell = L, \\
\Phi'_{\ell}\{\hat s^{\ell}_h(n)\} \displaystyle\sum_{j=0}^{M_{\ell+1}} v^{\ell+1}_{hj}\, \gamma^{\ell+1}_{hj}(n) \sum_{p=1}^{N_{\ell+1}} w^{\ell+1}_{hp}\, \delta^{\ell+1}_p(n), & \ell \neq L,
\end{cases}
\tag{10d}
\]

with γ^{ℓ+1}_{hj}(n) and its companion φ^{ℓ+1}_{hj}(n) computed by

\[
\begin{bmatrix} \varphi^{\ell+1}_{h,j-1}(n) \\ \gamma^{\ell+1}_{hj}(n) \end{bmatrix}
=
\begin{bmatrix} \cos\Theta^{\ell+1}_{hj} & -\sin\Theta^{\ell+1}_{hj} \\ \sin\Theta^{\ell+1}_{hj} & \cos\Theta^{\ell+1}_{hj} \end{bmatrix}
\begin{bmatrix} \varphi^{\ell+1}_{hj}(n) \\ z\,\gamma^{\ell+1}_{h,j-1}(n) \end{bmatrix}.
\tag{10e}
\]

Aiming to present the simplified training of the XLLMLP, we replace (10c) with

\[
\nabla \Theta^{\ell}_{ij}(n) = \frac{f^{\ell}_{ij}(n)\,t^{\ell}_{ij}(n) - b^{\ell}_{ij}(n)\,g^{\ell}_{ij}(n)}{\cos\Theta^{\ell}_{ij}} \sum_{h=1}^{N_{\ell}} w^{\ell}_{ih}\, \delta^{\ell}_h(n),
\tag{11}
\]

with the first factor denoted by ∇̃Θ^ℓ_{ij}(n),
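The static and ladder sensitivity functions (10a) and (10b) reduce to a few products per layer. The helper below mirrors them for one layer, given the local errors; its name and data layout are illustrative, not the paper's.

```python
def weight_gradients(s_tilde, b, delta, w):
    """Sensitivity functions (10a)-(10b) for one layer: grad_w[i][h] for the
    static synapses and grad_v[i][j] for the ladder coefficients, given LLF
    outputs s_tilde[i], backward signals b[i][j], static weights w[i][h] and
    local instantaneous errors delta[h]."""
    n_in, n_out = len(s_tilde), len(delta)
    # (10a): gradient of static weight w_ih is s_tilde_i * delta_h
    grad_w = [[s_tilde[i] * delta[h] for h in range(n_out)]
              for i in range(n_in)]
    # error reaching filter i through all static synapses
    dsum = [sum(w[i][h] * delta[h] for h in range(n_out))
            for i in range(n_in)]
    # (10b): gradient of ladder weight v_ij is b_ij times that summed error
    grad_v = [[b[i][j] * dsum[i] for j in range(len(b[i]))]
              for i in range(n_in)]
    return grad_w, grad_v
```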

where the order recursion computation is done in the manner of (8) by

\[
\begin{aligned}
f^{\ell}_{ij}(n)\,g^{\ell}_{ij}(n) &= f^{\ell}_{i,j-1}(n)\,g^{\ell}_{i,j-1}(n) + \sin\Theta^{\ell}_{ij} \bigl[ f^{\ell}_{ij}(n)\,t^{\ell}_{ij}(n) + z\,b^{\ell}_{i,j-1}(n)\,g^{\ell}_{i,j-1}(n) \bigr]; \\
b^{\ell}_{ij}(n)\,g^{\ell}_{ij}(n) &= z\,b^{\ell}_{i,j-1}(n)\,g^{\ell}_{i,j-1}(n) + \sin\Theta^{\ell}_{ij} \bigl[ f^{\ell}_{i,j-1}(n)\,g^{\ell}_{i,j-1}(n) + b^{\ell}_{ij}(n)\,t^{\ell}_{ij}(n) \bigr]; \\
f^{\ell}_{i,j-1}(n)\,t^{\ell}_{i,j-1}(n) &= z\bigl\{ f^{\ell}_{ij}(n)\,t^{\ell}_{ij}(n) - \sin\Theta^{\ell}_{ij} \bigl[ f^{\ell}_{i,j-1}(n)\,g^{\ell}_{i,j-1}(n) + b^{\ell}_{ij}(n)\,t^{\ell}_{ij}(n) \bigr] \bigr\} + v^{\ell}_{i,j-1}\,f^{\ell}_{i,j-1}(n); \\
b^{\ell}_{i,j-1}(n)\,t^{\ell}_{i,j-1}(n) &= b^{\ell}_{ij}(n)\,t^{\ell}_{ij}(n) - \sin\Theta^{\ell}_{ij} \bigl[ f^{\ell}_{ij}(n)\,t^{\ell}_{ij}(n) + z\,b^{\ell}_{i,j-1}(n)\,g^{\ell}_{i,j-1}(n) \bigr] + v^{\ell}_{i,j-1}\,b^{\ell}_{i,j-1}(n),
\end{aligned}
\tag{12a}
\]

with boundary conditions:

\[
f^{\ell}_{i0}(n)\,g^{\ell}_{i0}(n) = f^{\ell}_{i0}(n)\,t^{\ell}_{i0}(n); \qquad b^{\ell}_{i0}(n)\,g^{\ell}_{i0}(n) = b^{\ell}_{i0}(n)\,t^{\ell}_{i0}(n);
\tag{12b}
\]

\[
f^{\ell}_{iM}(n)\,t^{\ell}_{iM}(n) = v^{\ell}_{iM}\,f^{\ell}_{iM}(n); \qquad b^{\ell}_{iM}(n)\,t^{\ell}_{iM}(n) = v^{\ell}_{iM}\,b^{\ell}_{iM}(n).
\tag{12c}
\]

The full XLLMLP training algorithm is presented at the end of the article. Let us here only summarize its main steps:

1. XLLMLP recall. The given input pattern s^0_i(n) is presented to the input layer (see line 1 of the algorithm). Afterwards it is processed through the XLLMLP in a layer by layer fashion (lines 2–14): first by the corresponding lattice ladder filters (lines 3–9), then by the static synapses (lines 10–13). In that way the XLLMLP output pattern s^L_h(n) is obtained.

2. Node errors. Using the results of the previous calculations, the errors of the output layer neurons are determined (line 15). Then, again in a layer by layer fashion, however this time in the backward direction, these errors are processed through the complete XLLMLP (lines 16–23). Following that way the errors of the hidden layer neurons δ^ℓ_h(n) are calculated.

3. Gradient terms. The simplified calculation of the gradient terms for the lattice weight updates is done as in (12). Let us here be more specific. Working in a layer by layer fashion and taking the filters one by one (lines 24–38; note, however, that the processing order here is insignificant), the algorithm proceeds as follows: at line 25 initial conditions are assigned as in (12c); in lines 27 and 28 the two lower equations from (12a) are evaluated for all sections of the filter; at line 30 the left boundary conditions are fulfilled as in (12b); for all sections of the filter (lines 31–37), by lines 32–35 the remaining upper equations from (12a) and at line 36 also ∇̃Θ^ℓ_{ij}(n) from (11) are evaluated.

4. Weight updates.
Dynamic synapses (LLFs) are updated at lines 41 and 42, realizing expressions (10b) and (11), while the static ones are updated at line 44, realizing (10a) and taking into account their previous values, the training parameter µ, the node errors and the gradient terms.

5. Stability test. The new parameter values of the lattice filters are checked to see whether they satisfy the stability requirement (see lines 46–48); if some of them do not, the old parameter values are substituted at line 47.
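Combining the lattice weight update (line 42) with the stability test (lines 46–48) gives, as a minimal sketch with an illustrative name and step size:

```python
import math

def update_lattice_angle(theta_old, grad_theta, err_sum, mu=0.05):
    """Gradient step on one rotation angle (line 42 of the algorithm),
    followed by the stability test of lines 46-48: a new angle outside
    (-pi/2, pi/2) is rejected and the old value is kept."""
    theta_new = theta_old + mu * grad_theta * err_sum
    if abs(theta_new) > math.pi / 2:   # would violate the stability requirement
        theta_new = theta_old          # substitute the old value (line 47)
    return theta_new
```

The test is cheap because, in the lattice parameterization, keeping every rotation angle inside (−π/2, π/2) is the whole stability check; no pole locations need to be computed.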

5 Conclusions

In this paper we dealt with the problem of the computational efficiency of lattice ladder multilayer perceptron training algorithms that are based on the computation of gradients, for example, backpropagation, conjugate-gradient or Levenberg-Marquardt. Here we explored the calculations that are the most computationally demanding and specific to the lattice-ladder filter: the computation of gradients for the lattice (rotation) parameters. The optimal way, in the sense of a minimal number of constants, addition and delay operations involved in the aforementioned computations for a single lattice-ladder filter, and also regularity of the regressor structure, was found recently by the author. Based on it, a quick training algorithm for the extra reduced size lattice ladder multilayer perceptron was here derived.

Not surprisingly, the presented algorithm requires approximately M times (where M is the order of the filter) fewer computations, while it follows the exact gradient path (because the derivations do not involve any approximations) when the coefficients of the filters are assumed to be stationary. More importantly, the implementation of a single section of the regressor lattice incorporated in the algorithm requires only 2 delay elements, 3 constants and 8 addition operations.

An experimental study of the proposed algorithm was not considered, because the computations of the gradients are exact and any possible comparison would inherently be biased depending on the chosen way of implementation.

Acknowledgements

Research was supported by The Royal Swedish Academy of Sciences and The Swedish Institute New Visby project Ref. No. 2473/2002(381/T81).

References

[1] A. D. Back and A. C. Tsoi. FIR and IIR synapses, a new neural network architecture for time series modelling. Neural Computation, 3.

[2] A. D. Back and A. C. Tsoi. An adaptive lattice architecture for dynamic multilayer perceptrons. Neural Computation, 4.

[3] A. D. Back and A. C. Tsoi. A simplified gradient algorithm for IIR synapse multilayer perceptrons. Neural Computation, 5.

[4] A. D. Back and A. C.
Tsoi. A cascade neural network model with nonlinear poles and zeros. In Proceedings of the 1996 International Conference on Neural Information Processing, volume 1. IEEE Press.

[5] S. Haykin. Adaptive Filter Theory. Prentice-Hall International, Inc., 3rd edition.

[6] S. Haykin. Neural Networks: A Comprehensive Foundation. Prentice-Hall, Upper Saddle River, NJ, 2nd edition.

[7] A. Juditsky, H. Hjalmarsson, A. Benveniste, B. Delyon, L. Ljung, J. Sjöberg, and Q. Zhang. Nonlinear black-box modeling in system identification: Mathematical foundations. Automatica, 31.

[8] S. Lawrence, A. D. Back, A. C. Tsoi, and C. L. Giles. The Gamma MLP - using multiple temporal resolutions for improved classification. In Neural Networks for Signal Processing 7. IEEE Press.

[9] D. Navakauskas. A reduced size lattice-ladder neural network. In Signal Processing Society Workshop on Neural Networks for Signal Processing, Cambridge, England. IEEE Press.

[10] D. Navakauskas. Artificial Neural Network for the Restoration of Noise Distorted Songs Audio Records. Doctoral dissertation, Vilnius Gediminas Technical University, No. 434.

[11] D. Navakauskas. Reducing implementation input of lattice-ladder multilayer perceptrons. In Proceedings of the 15th European Conference on Circuit Theory and Design, volume 3, Espoo, Finland. Helsinki University of Technology.

[12] D. Navakauskas. Speeding up the training of lattice-ladder multilayer perceptrons. Technical Report LiTH-ISY-R-2417, Department of Electrical Engineering, Linköping University, SE Linköping, Sweden.

[13] L. S. H. Ngia and J. Sjöberg. Efficient training of neural nets for nonlinear adaptive filtering using a recursive Levenberg-Marquardt algorithm. IEEE Transactions on Signal Processing, 48(7).

[14] P. A. Regalia. Adaptive IIR Filtering in Signal Processing and Control. Marcel Dekker, Inc.

[15] J. A. Rodriguez-Fonollosa and E. Masgrau. Simplified gradient calculation in adaptive IIR lattice filters. IEEE Transactions on Signal Processing, 39(7).

[16] J. Sjöberg, Q. Zhang, L. Ljung, A. Benveniste, B. Delyon, P.-Y. Glorennec, H. Hjalmarsson, and A. Juditsky. Nonlinear black-box modeling in system identification: A unified overview. Automatica, 31.

[17] A. C. Tsoi and A. D. Back. Discrete time recurrent neural network architectures: A unifying review. Neurocomputing, 15.

[18] A. Waibel, T. Hanazawa, G. Hinton, K. Shikano, and K.
Lang. Phoneme recognition using time-delay neural networks. IEEE Transactions on Acoustics, Speech, and Signal Processing, 37(3).

[19] E. A. Wan. Temporal backpropagation for FIR neural networks. In Proceedings of the International Joint Conference on Neural Networks. IEEE Press.

[20] E. A. Wan. Finite Impulse Response Neural Networks with Applications in Time Series Prediction. Doctoral dissertation, Department of Electrical Engineering, Stanford University, USA.

XLLMLP Training Algorithm

Available at time n:
New input pattern: s^0_i(n), i = 1, 2, ..., N^0;
Desired output pattern: d^L_h(n), h = 1, 2, ..., N^L;
Static coefficients: w^ℓ_{ih}(n), h = 1, 2, ..., N^ℓ;
Ladder coefficients: v^ℓ_{ij}(n), j = 0, 1, ..., M_ℓ;
Lattice coefficients: Θ^ℓ_{ij}(n), j = 1, 2, ..., M_ℓ;
Backward & forward signals: b^ℓ_{ij}(n), f^ℓ_{ij}(n), j = 0, 1, ..., M_ℓ;
Gradients w.r.t. lattice weights: ∇̃Θ^ℓ_{ij}(n−1), j = 1, 2, ..., M_ℓ;
Regressor lattice signals: bg^ℓ_{ij}(n−1), ft^ℓ_{ij}(n−1), fg^ℓ_{ij}, bt^ℓ_{ij}, j = 0, ..., M_ℓ;
Temporal variable: temp;
Backward & forward gradients: φ^ℓ_{ij}(n−1), γ^ℓ_{ij}(n−1), j = 0, 1, ..., M_ℓ.

1. XLLMLP Recall.
1  Let f^1_{i,M+1}(n) = s^0_i(n).
2  for ℓ = 1, 2, ..., L do
3    for i = 1, 2, ..., N^{ℓ−1} do
4      for j = M, ..., 2, 1 do
5        [f^ℓ_{i,j−1}(n); b^ℓ_{ij}(n)] = [cos Θ^ℓ_{ij}(n), −sin Θ^ℓ_{ij}(n); sin Θ^ℓ_{ij}(n), cos Θ^ℓ_{ij}(n)] [f^ℓ_{ij}(n); b^ℓ_{i,j−1}(n−1)].
6      end for j
7      b^ℓ_{i0}(n) = f^ℓ_{i0}(n);
8      s̃^ℓ_i(n) = Σ_{j=0}^{M} v^ℓ_{ij}(n) b^ℓ_{ij}(n).
9    end for i
10   for h = 1, 2, ..., N^ℓ do
11     ŝ^ℓ_h(n) = Σ_{i=1}^{N^{ℓ−1}} w^ℓ_{ih} s̃^ℓ_i(n);
12     s^ℓ_h(n) = Φ^ℓ{ŝ^ℓ_h(n)}.
13   end for h
14 end for ℓ

2. Node Errors.
15 e^L_h(n) = d_h(n) − s^L_h(n), h = 1, 2, ..., N^L.
16 for ℓ = L−1, L−2, ..., 1 and h = 1, 2, ..., N^ℓ do
17   Let φ^{ℓ+1}_{h,M}(n) = 1.
18   for j = M, ..., 2, 1 do
19     [φ^{ℓ+1}_{h,j−1}(n); γ^{ℓ+1}_{hj}(n)] = [cos Θ^{ℓ+1}_{hj}(n), −sin Θ^{ℓ+1}_{hj}(n); sin Θ^{ℓ+1}_{hj}(n), cos Θ^{ℓ+1}_{hj}(n)] [φ^{ℓ+1}_{hj}(n); γ^{ℓ+1}_{h,j−1}(n−1)].
20   end for j

21   γ^{ℓ+1}_{h0}(n) = φ^{ℓ+1}_{h0}(n).
22   δ^ℓ_h(n) = e_h(n) Φ′_L{ŝ^L_h(n)} for ℓ = L; otherwise
     δ^ℓ_h(n) = Φ′_ℓ{ŝ^ℓ_h(n)} Σ_{j=0}^{M} v^{ℓ+1}_{hj} γ^{ℓ+1}_{hj}(n) Σ_{p=1}^{N} w^{ℓ+1}_{hp} δ^{ℓ+1}_p(n).
23 end for h, ℓ

3. Gradient Terms.
24 for ℓ = 1, 2, ..., L and i = 1, 2, ..., N^{ℓ−1} do
25   Let bt_{i,M} = v^ℓ_{i,M}(n) b^ℓ_{i,M}(n); ft_{i,M}(n) = s^{ℓ−1}_i(n) v^ℓ_{i,M}(n).
26   for j = M, ..., 2, 1 do
27     bt_{i,j−1} = bt_{ij} − [ft_{ij}(n) + bg_{i,j−1}(n−1)] sin Θ^ℓ_{ij}(n) + v^ℓ_{i,j−1}(n) b^ℓ_{i,j−1}(n);
28     ft_{i,j−1}(n) = ft_{ij}(n−1) + v^ℓ_{i,j−1}(n) f^ℓ_{i,j−1}(n);
29   end for j
30   Let temp = bt_{i0}; fg_{i0} = ft_{i0}(n−1).
31   for j = 1, 2, ..., M do
32     ft_{i,j−1}(n) = ft_{i,j−1}(n) − [fg_{i,j−1} + bt_{ij}] sin Θ^ℓ_{ij}(n);
33     fg_{ij} = fg_{i,j−1} + [bg_{i,j−1}(n−1) + ft_{ij}(n)] sin Θ^ℓ_{ij}(n);
34     bg_{i,j−1}(n) = temp;
35     temp = bg_{i,j−1}(n−1) + [fg_{i,j−1} + bt_{ij}] sin Θ^ℓ_{ij}(n);
36     ∇̃Θ^ℓ_{ij}(n) = [ft_{ij}(n) − temp] / cos Θ^ℓ_{ij}(n).
37   end for j
38 end for i, ℓ

4. Weight Updates.
39 for ℓ = 1, 2, ..., L and i = 1, 2, ..., N^{ℓ−1} do
40   for j = 1, 2, ..., M do
41     v^ℓ_{ij}(n+1) = v^ℓ_{ij}(n) + µ b^ℓ_{ij}(n) Σ_{h=1}^{N} w^ℓ_{ih} δ^ℓ_h(n);
42     Θ^ℓ_{ij}(n+1) = Θ^ℓ_{ij}(n) + µ ∇̃Θ^ℓ_{ij}(n) Σ_{h=1}^{N} w^ℓ_{ih} δ^ℓ_h(n).
43   end for j
44   for h = 1, 2, ..., N^ℓ do w^ℓ_{ih}(n+1) = w^ℓ_{ih}(n) + µ δ^ℓ_h(n) s̃^ℓ_i(n).
45 end for i, ℓ

5. Stability Test.
46 for ℓ = 1, 2, ..., L and i = 1, 2, ..., N^{ℓ−1} and j = 1, 2, ..., M do
47   if |Θ^ℓ_{ij}(n+1)| > π/2 then Θ^ℓ_{ij}(n+1) = Θ^ℓ_{ij}(n).
48 end for j, i, ℓ

Published as: Quick Training Algorithm for Extra Reduced Size Lattice-Ladder Multilayer Perceptrons. INFORMATICA, 2003, Vol. 14, No. 2, 223–236. 2003 Institute of Mathematics and Informatics, Vilnius.


More information

Minimizing Total Weighted Completion Time on Uniform Machines with Unbounded Batch

Minimizing Total Weighted Completion Time on Uniform Machines with Unbounded Batch The Eighth Internationa Symposium on Operations Research and Its Appications (ISORA 09) Zhangiaie, China, September 20 22, 2009 Copyright 2009 ORSC & APORC, pp. 402 408 Minimizing Tota Weighted Competion

More information

MATH 172: MOTIVATION FOR FOURIER SERIES: SEPARATION OF VARIABLES

MATH 172: MOTIVATION FOR FOURIER SERIES: SEPARATION OF VARIABLES MATH 172: MOTIVATION FOR FOURIER SERIES: SEPARATION OF VARIABLES Separation of variabes is a method to sove certain PDEs which have a warped product structure. First, on R n, a inear PDE of order m is

More information

A Statistical Framework for Real-time Event Detection in Power Systems

A Statistical Framework for Real-time Event Detection in Power Systems 1 A Statistica Framework for Rea-time Event Detection in Power Systems Noan Uhrich, Tim Christman, Phiip Swisher, and Xichen Jiang Abstract A quickest change detection (QCD) agorithm is appied to the probem

More information

The EM Algorithm applied to determining new limit points of Mahler measures

The EM Algorithm applied to determining new limit points of Mahler measures Contro and Cybernetics vo. 39 (2010) No. 4 The EM Agorithm appied to determining new imit points of Maher measures by Souad E Otmani, Georges Rhin and Jean-Marc Sac-Épée Université Pau Veraine-Metz, LMAM,

More information

Stochastic Variational Inference with Gradient Linearization

Stochastic Variational Inference with Gradient Linearization Stochastic Variationa Inference with Gradient Linearization Suppementa Materia Tobias Pötz * Anne S Wannenwetsch Stefan Roth Department of Computer Science, TU Darmstadt Preface In this suppementa materia,

More information

Throughput Optimal Scheduling for Wireless Downlinks with Reconfiguration Delay

Throughput Optimal Scheduling for Wireless Downlinks with Reconfiguration Delay Throughput Optima Scheduing for Wireess Downinks with Reconfiguration Deay Vineeth Baa Sukumaran vineethbs@gmai.com Department of Avionics Indian Institute of Space Science and Technoogy. Abstract We consider

More information

School of Electrical Engineering, University of Bath, Claverton Down, Bath BA2 7AY

School of Electrical Engineering, University of Bath, Claverton Down, Bath BA2 7AY The ogic of Booean matrices C. R. Edwards Schoo of Eectrica Engineering, Universit of Bath, Caverton Down, Bath BA2 7AY A Booean matrix agebra is described which enabes man ogica functions to be manipuated

More information

Combining reaction kinetics to the multi-phase Gibbs energy calculation

Combining reaction kinetics to the multi-phase Gibbs energy calculation 7 th European Symposium on Computer Aided Process Engineering ESCAPE7 V. Pesu and P.S. Agachi (Editors) 2007 Esevier B.V. A rights reserved. Combining reaction inetics to the muti-phase Gibbs energy cacuation

More information

School of Electrical Engineering, University of Bath, Claverton Down, Bath BA2 7AY

School of Electrical Engineering, University of Bath, Claverton Down, Bath BA2 7AY The ogic of Booean matrices C. R. Edwards Schoo of Eectrica Engineering, Universit of Bath, Caverton Down, Bath BA2 7AY A Booean matrix agebra is described which enabes man ogica functions to be manipuated

More information

Math 124B January 17, 2012

Math 124B January 17, 2012 Math 124B January 17, 212 Viktor Grigoryan 3 Fu Fourier series We saw in previous ectures how the Dirichet and Neumann boundary conditions ead to respectivey sine and cosine Fourier series of the initia

More information

Integrating Factor Methods as Exponential Integrators

Integrating Factor Methods as Exponential Integrators Integrating Factor Methods as Exponentia Integrators Borisav V. Minchev Department of Mathematica Science, NTNU, 7491 Trondheim, Norway Borko.Minchev@ii.uib.no Abstract. Recenty a ot of effort has been

More information

MODELING OF A THREE-PHASE APPLICATION OF A MAGNETIC AMPLIFIER

MODELING OF A THREE-PHASE APPLICATION OF A MAGNETIC AMPLIFIER TH INTERNATIONAL CONGRESS OF THE AERONAUTICAL SCIENCES MODELING OF A THREE-PHASE APPLICATION OF A MAGNETIC AMPLIFIER L. Austrin*, G. Engdah** * Saab AB, Aerosystems, S-81 88 Linköping, Sweden, ** Roya

More information

VI.G Exact free energy of the Square Lattice Ising model

VI.G Exact free energy of the Square Lattice Ising model VI.G Exact free energy of the Square Lattice Ising mode As indicated in eq.(vi.35), the Ising partition function is reated to a sum S, over coections of paths on the attice. The aowed graphs for a square

More information

First-Order Corrections to Gutzwiller s Trace Formula for Systems with Discrete Symmetries

First-Order Corrections to Gutzwiller s Trace Formula for Systems with Discrete Symmetries c 26 Noninear Phenomena in Compex Systems First-Order Corrections to Gutzwier s Trace Formua for Systems with Discrete Symmetries Hoger Cartarius, Jörg Main, and Günter Wunner Institut für Theoretische

More information

Lecture Note 3: Stationary Iterative Methods

Lecture Note 3: Stationary Iterative Methods MATH 5330: Computationa Methods of Linear Agebra Lecture Note 3: Stationary Iterative Methods Xianyi Zeng Department of Mathematica Sciences, UTEP Stationary Iterative Methods The Gaussian eimination (or

More information

Componentwise Determination of the Interval Hull Solution for Linear Interval Parameter Systems

Componentwise Determination of the Interval Hull Solution for Linear Interval Parameter Systems Componentwise Determination of the Interva Hu Soution for Linear Interva Parameter Systems L. V. Koev Dept. of Theoretica Eectrotechnics, Facuty of Automatics, Technica University of Sofia, 1000 Sofia,

More information

MARKOV CHAINS AND MARKOV DECISION THEORY. Contents

MARKOV CHAINS AND MARKOV DECISION THEORY. Contents MARKOV CHAINS AND MARKOV DECISION THEORY ARINDRIMA DATTA Abstract. In this paper, we begin with a forma introduction to probabiity and expain the concept of random variabes and stochastic processes. After

More information

FOURIER SERIES ON ANY INTERVAL

FOURIER SERIES ON ANY INTERVAL FOURIER SERIES ON ANY INTERVAL Overview We have spent considerabe time earning how to compute Fourier series for functions that have a period of 2p on the interva (-p,p). We have aso seen how Fourier series

More information

Statistical Learning Theory: A Primer

Statistical Learning Theory: A Primer Internationa Journa of Computer Vision 38(), 9 3, 2000 c 2000 uwer Academic Pubishers. Manufactured in The Netherands. Statistica Learning Theory: A Primer THEODOROS EVGENIOU, MASSIMILIANO PONTIL AND TOMASO

More information

(f) is called a nearly holomorphic modular form of weight k + 2r as in [5].

(f) is called a nearly holomorphic modular form of weight k + 2r as in [5]. PRODUCTS OF NEARLY HOLOMORPHIC EIGENFORMS JEFFREY BEYERL, KEVIN JAMES, CATHERINE TRENTACOSTE, AND HUI XUE Abstract. We prove that the product of two neary hoomorphic Hece eigenforms is again a Hece eigenform

More information

arxiv:hep-ph/ v1 15 Jan 2001

arxiv:hep-ph/ v1 15 Jan 2001 BOSE-EINSTEIN CORRELATIONS IN CASCADE PROCESSES AND NON-EXTENSIVE STATISTICS O.V.UTYUZH AND G.WILK The Andrzej So tan Institute for Nucear Studies; Hoża 69; 00-689 Warsaw, Poand E-mai: utyuzh@fuw.edu.p

More information

Interconnect effects on performance of Field Programmable Analog Array

Interconnect effects on performance of Field Programmable Analog Array nterconnect effects on performance of Fied Programmabe Anaog Array D. Anderson,. Bir, O. A. Pausinsi 3, M. Spitz, K. Reiss Motoroa, SPS, Phoenix, Arizona, USA, University of Karsruhe, Karsruhe, Germany,

More information

Bayesian Unscented Kalman Filter for State Estimation of Nonlinear and Non-Gaussian Systems

Bayesian Unscented Kalman Filter for State Estimation of Nonlinear and Non-Gaussian Systems Bayesian Unscented Kaman Fiter for State Estimation of Noninear and Non-aussian Systems Zhong Liu, Shing-Chow Chan, Ho-Chun Wu and iafei Wu Department of Eectrica and Eectronic Engineering, he University

More information

Published in: Proceedings of the Twenty Second Nordic Seminar on Computational Mechanics

Published in: Proceedings of the Twenty Second Nordic Seminar on Computational Mechanics Aaborg Universitet An Efficient Formuation of the Easto-pastic Constitutive Matrix on Yied Surface Corners Causen, Johan Christian; Andersen, Lars Vabbersgaard; Damkide, Lars Pubished in: Proceedings of

More information

Supervised i-vector Modeling - Theory and Applications

Supervised i-vector Modeling - Theory and Applications Supervised i-vector Modeing - Theory and Appications Shreyas Ramoji, Sriram Ganapathy Learning and Extraction of Acoustic Patterns LEAP) Lab, Eectrica Engineering, Indian Institute of Science, Bengauru,

More information

Efficiently Generating Random Bits from Finite State Markov Chains

Efficiently Generating Random Bits from Finite State Markov Chains 1 Efficienty Generating Random Bits from Finite State Markov Chains Hongchao Zhou and Jehoshua Bruck, Feow, IEEE Abstract The probem of random number generation from an uncorreated random source (of unknown

More information

Safety Evaluation Model of Chemical Logistics Park Operation Based on Back Propagation Neural Network

Safety Evaluation Model of Chemical Logistics Park Operation Based on Back Propagation Neural Network 1513 A pubication of CHEMICAL ENGINEERING TRANSACTIONS VOL. 6, 017 Guest Editors: Fei Song, Haibo Wang, Fang He Copyright 017, AIDIC Servizi S.r.. ISBN 978-88-95608-60-0; ISSN 83-916 The Itaian Association

More information

$, (2.1) n="# #. (2.2)

$, (2.1) n=# #. (2.2) Chapter. Eectrostatic II Notes: Most of the materia presented in this chapter is taken from Jackson, Chap.,, and 4, and Di Bartoo, Chap... Mathematica Considerations.. The Fourier series and the Fourier

More information

Research of Data Fusion Method of Multi-Sensor Based on Correlation Coefficient of Confidence Distance

Research of Data Fusion Method of Multi-Sensor Based on Correlation Coefficient of Confidence Distance Send Orders for Reprints to reprints@benthamscience.ae 340 The Open Cybernetics & Systemics Journa, 015, 9, 340-344 Open Access Research of Data Fusion Method of Muti-Sensor Based on Correation Coefficient

More information

A SIMPLIFIED DESIGN OF MULTIDIMENSIONAL TRANSFER FUNCTION MODELS

A SIMPLIFIED DESIGN OF MULTIDIMENSIONAL TRANSFER FUNCTION MODELS A SIPLIFIED DESIGN OF ULTIDIENSIONAL TRANSFER FUNCTION ODELS Stefan Petrausch, Rudof Rabenstein utimedia Communications and Signa Procesg, University of Erangen-Nuremberg, Cauerstr. 7, 958 Erangen, GERANY

More information

Adjustment of automatic control systems of production facilities at coal processing plants using multivariant physico- mathematical models

Adjustment of automatic control systems of production facilities at coal processing plants using multivariant physico- mathematical models IO Conference Series: Earth and Environmenta Science AER OEN ACCESS Adjustment of automatic contro systems of production faciities at coa processing pants using mutivariant physico- mathematica modes To

More information

Math 124B January 31, 2012

Math 124B January 31, 2012 Math 124B January 31, 212 Viktor Grigoryan 7 Inhomogeneous boundary vaue probems Having studied the theory of Fourier series, with which we successfuy soved boundary vaue probems for the homogeneous heat

More information

Approximated MLC shape matrix decomposition with interleaf collision constraint

Approximated MLC shape matrix decomposition with interleaf collision constraint Approximated MLC shape matrix decomposition with intereaf coision constraint Thomas Kainowski Antje Kiese Abstract Shape matrix decomposition is a subprobem in radiation therapy panning. A given fuence

More information

Separation of Variables and a Spherical Shell with Surface Charge

Separation of Variables and a Spherical Shell with Surface Charge Separation of Variabes and a Spherica She with Surface Charge In cass we worked out the eectrostatic potentia due to a spherica she of radius R with a surface charge density σθ = σ cos θ. This cacuation

More information

(This is a sample cover image for this issue. The actual cover is not yet available at this time.)

(This is a sample cover image for this issue. The actual cover is not yet available at this time.) (This is a sampe cover image for this issue The actua cover is not yet avaiabe at this time) This artice appeared in a journa pubished by Esevier The attached copy is furnished to the author for interna

More information

Improved Min-Sum Decoding of LDPC Codes Using 2-Dimensional Normalization

Improved Min-Sum Decoding of LDPC Codes Using 2-Dimensional Normalization Improved Min-Sum Decoding of LDPC Codes sing -Dimensiona Normaization Juntan Zhang and Marc Fossorier Department of Eectrica Engineering niversity of Hawaii at Manoa Honouu, HI 968 Emai: juntan, marc@spectra.eng.hawaii.edu

More information

Moreau-Yosida Regularization for Grouped Tree Structure Learning

Moreau-Yosida Regularization for Grouped Tree Structure Learning Moreau-Yosida Reguarization for Grouped Tree Structure Learning Jun Liu Computer Science and Engineering Arizona State University J.Liu@asu.edu Jieping Ye Computer Science and Engineering Arizona State

More information

ASummaryofGaussianProcesses Coryn A.L. Bailer-Jones

ASummaryofGaussianProcesses Coryn A.L. Bailer-Jones ASummaryofGaussianProcesses Coryn A.L. Baier-Jones Cavendish Laboratory University of Cambridge caj@mrao.cam.ac.uk Introduction A genera prediction probem can be posed as foows. We consider that the variabe

More information

Converting Z-number to Fuzzy Number using. Fuzzy Expected Value

Converting Z-number to Fuzzy Number using. Fuzzy Expected Value ISSN 1746-7659, Engand, UK Journa of Information and Computing Science Vo. 1, No. 4, 017, pp.91-303 Converting Z-number to Fuzzy Number using Fuzzy Expected Vaue Mahdieh Akhbari * Department of Industria

More information

STABILITY OF A PARAMETRICALLY EXCITED DAMPED INVERTED PENDULUM 1. INTRODUCTION

STABILITY OF A PARAMETRICALLY EXCITED DAMPED INVERTED PENDULUM 1. INTRODUCTION Journa of Sound and Vibration (996) 98(5), 643 65 STABILITY OF A PARAMETRICALLY EXCITED DAMPED INVERTED PENDULUM G. ERDOS AND T. SINGH Department of Mechanica and Aerospace Engineering, SUNY at Buffao,

More information

Numerical solution of one dimensional contaminant transport equation with variable coefficient (temporal) by using Haar wavelet

Numerical solution of one dimensional contaminant transport equation with variable coefficient (temporal) by using Haar wavelet Goba Journa of Pure and Appied Mathematics. ISSN 973-1768 Voume 1, Number (16), pp. 183-19 Research India Pubications http://www.ripubication.com Numerica soution of one dimensiona contaminant transport

More information

Optimality of Inference in Hierarchical Coding for Distributed Object-Based Representations

Optimality of Inference in Hierarchical Coding for Distributed Object-Based Representations Optimaity of Inference in Hierarchica Coding for Distributed Object-Based Representations Simon Brodeur, Jean Rouat NECOTIS, Département génie éectrique et génie informatique, Université de Sherbrooke,

More information

D. Prémel, J.M. Decitre and G. Pichenot. CEA, LIST, F Gif-sur-Yvette, France

D. Prémel, J.M. Decitre and G. Pichenot. CEA, LIST, F Gif-sur-Yvette, France SIMULATION OF EDDY CURRENT INSPECTION INCLUDING MAGNETIC FIELD SENSOR SUCH AS A GIANT MAGNETO-RESISTANCE OVER PLANAR STRATIFIED MEDIA COMPONENTS WITH EMBEDDED FLAWS D. Préme, J.M. Decitre and G. Pichenot

More information

Bayesian Learning. You hear a which which could equally be Thanks or Tanks, which would you go with?

Bayesian Learning. You hear a which which could equally be Thanks or Tanks, which would you go with? Bayesian Learning A powerfu and growing approach in machine earning We use it in our own decision making a the time You hear a which which coud equay be Thanks or Tanks, which woud you go with? Combine

More information

Symbolic models for nonlinear control systems using approximate bisimulation

Symbolic models for nonlinear control systems using approximate bisimulation Symboic modes for noninear contro systems using approximate bisimuation Giordano Poa, Antoine Girard and Pauo Tabuada Abstract Contro systems are usuay modeed by differentia equations describing how physica

More information

Identification of macro and micro parameters in solidification model

Identification of macro and micro parameters in solidification model BULLETIN OF THE POLISH ACADEMY OF SCIENCES TECHNICAL SCIENCES Vo. 55, No. 1, 27 Identification of macro and micro parameters in soidification mode B. MOCHNACKI 1 and E. MAJCHRZAK 2,1 1 Czestochowa University

More information

International Journal "Information Technologies & Knowledge" Vol.5, Number 1,

International Journal Information Technologies & Knowledge Vol.5, Number 1, Internationa Journa "Information Tecnoogies & Knowedge" Vo.5, Number, 0 5 EVOLVING CASCADE NEURAL NETWORK BASED ON MULTIDIMESNIONAL EPANECHNIKOV S KERNELS AND ITS LEARNING ALGORITHM Yevgeniy Bodyanskiy,

More information

Efficient Generation of Random Bits from Finite State Markov Chains

Efficient Generation of Random Bits from Finite State Markov Chains Efficient Generation of Random Bits from Finite State Markov Chains Hongchao Zhou and Jehoshua Bruck, Feow, IEEE Abstract The probem of random number generation from an uncorreated random source (of unknown

More information

Tracking Control of Multiple Mobile Robots

Tracking Control of Multiple Mobile Robots Proceedings of the 2001 IEEE Internationa Conference on Robotics & Automation Seou, Korea May 21-26, 2001 Tracking Contro of Mutipe Mobie Robots A Case Study of Inter-Robot Coision-Free Probem Jurachart

More information

Discrete Techniques. Chapter Introduction

Discrete Techniques. Chapter Introduction Chapter 3 Discrete Techniques 3. Introduction In the previous two chapters we introduced Fourier transforms of continuous functions of the periodic and non-periodic (finite energy) type, we as various

More information

Notes on Backpropagation with Cross Entropy

Notes on Backpropagation with Cross Entropy Notes on Backpropagation with Cross Entropy I-Ta ee, Dan Gowasser, Bruno Ribeiro Purue University October 3, 07. Overview This note introuces backpropagation for a common neura network muti-cass cassifier.

More information

HILBERT? What is HILBERT? Matlab Implementation of Adaptive 2D BEM. Dirk Praetorius. Features of HILBERT

HILBERT? What is HILBERT? Matlab Implementation of Adaptive 2D BEM. Dirk Praetorius. Features of HILBERT Söerhaus-Workshop 2009 October 16, 2009 What is HILBERT? HILBERT Matab Impementation of Adaptive 2D BEM joint work with M. Aurada, M. Ebner, S. Ferraz-Leite, P. Godenits, M. Karkuik, M. Mayr Hibert Is

More information

<C 2 2. λ 2 l. λ 1 l 1 < C 1

<C 2 2. λ 2 l. λ 1 l 1 < C 1 Teecommunication Network Contro and Management (EE E694) Prof. A. A. Lazar Notes for the ecture of 7/Feb/95 by Huayan Wang (this document was ast LaT E X-ed on May 9,995) Queueing Primer for Muticass Optima

More information

MONOCHROMATIC LOOSE PATHS IN MULTICOLORED k-uniform CLIQUES

MONOCHROMATIC LOOSE PATHS IN MULTICOLORED k-uniform CLIQUES MONOCHROMATIC LOOSE PATHS IN MULTICOLORED k-uniform CLIQUES ANDRZEJ DUDEK AND ANDRZEJ RUCIŃSKI Abstract. For positive integers k and, a k-uniform hypergraph is caed a oose path of ength, and denoted by

More information

arxiv: v4 [math.na] 25 Aug 2014

arxiv: v4 [math.na] 25 Aug 2014 BIT manuscript No. (wi be inserted by the editor) A muti-eve spectra deferred correction method Robert Speck Danie Ruprecht Matthew Emmett Michae Minion Matthias Boten Rof Krause arxiv:1307.1312v4 [math.na]

More information

Discrete Techniques. Chapter Introduction

Discrete Techniques. Chapter Introduction Chapter 3 Discrete Techniques 3. Introduction In the previous two chapters we introduced Fourier transforms of continuous functions of the periodic and non-periodic (finite energy) type, as we as various

More information

In-plane shear stiffness of bare steel deck through shell finite element models. G. Bian, B.W. Schafer. June 2017

In-plane shear stiffness of bare steel deck through shell finite element models. G. Bian, B.W. Schafer. June 2017 In-pane shear stiffness of bare stee deck through she finite eement modes G. Bian, B.W. Schafer June 7 COLD-FORMED STEEL RESEARCH CONSORTIUM REPORT SERIES CFSRC R-7- SDII Stee Diaphragm Innovation Initiative

More information

Evolutionary Product-Unit Neural Networks for Classification 1

Evolutionary Product-Unit Neural Networks for Classification 1 Evoutionary Product-Unit Neura Networs for Cassification F.. Martínez-Estudio, C. Hervás-Martínez, P. A. Gutiérrez Peña A. C. Martínez-Estudio and S. Ventura-Soto Department of Management and Quantitative

More information

Unconditional security of differential phase shift quantum key distribution

Unconditional security of differential phase shift quantum key distribution Unconditiona security of differentia phase shift quantum key distribution Kai Wen, Yoshihisa Yamamoto Ginzton Lab and Dept of Eectrica Engineering Stanford University Basic idea of DPS-QKD Protoco. Aice

More information

A Robust Voice Activity Detection based on Noise Eigenspace Projection

A Robust Voice Activity Detection based on Noise Eigenspace Projection A Robust Voice Activity Detection based on Noise Eigenspace Projection Dongwen Ying 1, Yu Shi 2, Frank Soong 2, Jianwu Dang 1, and Xugang Lu 1 1 Japan Advanced Institute of Science and Technoogy, Nomi

More information

The Binary Space Partitioning-Tree Process Supplementary Material

The Binary Space Partitioning-Tree Process Supplementary Material The inary Space Partitioning-Tree Process Suppementary Materia Xuhui Fan in Li Scott. Sisson Schoo of omputer Science Fudan University ibin@fudan.edu.cn Schoo of Mathematics and Statistics University of

More information

Improving the Reliability of a Series-Parallel System Using Modified Weibull Distribution

Improving the Reliability of a Series-Parallel System Using Modified Weibull Distribution Internationa Mathematica Forum, Vo. 12, 217, no. 6, 257-269 HIKARI Ltd, www.m-hikari.com https://doi.org/1.12988/imf.217.611155 Improving the Reiabiity of a Series-Parae System Using Modified Weibu Distribution

More information

Theory of Generalized k-difference Operator and Its Application in Number Theory

Theory of Generalized k-difference Operator and Its Application in Number Theory Internationa Journa of Mathematica Anaysis Vo. 9, 2015, no. 19, 955-964 HIKARI Ltd, www.m-hiari.com http://dx.doi.org/10.12988/ijma.2015.5389 Theory of Generaized -Difference Operator and Its Appication

More information

Adaptive Fuzzy Sliding Control for a Three-Link Passive Robotic Manipulator

Adaptive Fuzzy Sliding Control for a Three-Link Passive Robotic Manipulator Adaptive Fuzzy Siding Contro for a hree-link Passive Robotic anipuator Abstract An adaptive fuzzy siding contro (AFSC scheme is proposed to contro a passive robotic manipuator. he motivation for the design

More information

Worst Case Analysis of the Analog Circuits

Worst Case Analysis of the Analog Circuits Proceedings of the 11th WSEAS Internationa Conference on CIRCUITS, Agios Nikoaos, Crete Isand, Greece, Juy 3-5, 7 9 Worst Case Anaysis of the Anaog Circuits ELENA NICULESCU*, DORINA-MIOARA PURCARU* and

More information

Statistical Learning Theory: a Primer

Statistical Learning Theory: a Primer ??,??, 1 6 (??) c?? Kuwer Academic Pubishers, Boston. Manufactured in The Netherands. Statistica Learning Theory: a Primer THEODOROS EVGENIOU AND MASSIMILIANO PONTIL Center for Bioogica and Computationa

More information

Convergence Property of the Iri-Imai Algorithm for Some Smooth Convex Programming Problems

Convergence Property of the Iri-Imai Algorithm for Some Smooth Convex Programming Problems Convergence Property of the Iri-Imai Agorithm for Some Smooth Convex Programming Probems S. Zhang Communicated by Z.Q. Luo Assistant Professor, Department of Econometrics, University of Groningen, Groningen,

More information

Distributed average consensus: Beyond the realm of linearity

Distributed average consensus: Beyond the realm of linearity Distributed average consensus: Beyond the ream of inearity Usman A. Khan, Soummya Kar, and José M. F. Moura Department of Eectrica and Computer Engineering Carnegie Meon University 5 Forbes Ave, Pittsburgh,

More information

Research Article Fast 3D Node Localization in Multipath for UWB Wireless Sensor Networks Using Modified Propagator Method

Research Article Fast 3D Node Localization in Multipath for UWB Wireless Sensor Networks Using Modified Propagator Method Distributed Sensor Networks, Artice ID 32535, 8 pages http://dxdoiorg/55/24/32535 Research Artice Fast 3D Node ocaization in Mutipath for UWB Wireess Sensor Networks Using Modified Propagator Method Hong

More information