Adaptive Noise Cancellation Using Deep Cerebellar Model Articulation Controller


Yu Tsao, Member, IEEE, Hao-Chun Chu, Shih-Wei Lan, Shih-Hau Fang, Senior Member, IEEE, Junghsi Lee*, and Chih-Min Lin, Fellow, IEEE

Abstract: This paper proposes a deep cerebellar model articulation controller (DCMAC) for adaptive noise cancellation (ANC). We expand upon the conventional CMAC by stacking single-layer CMAC models into multiple layers to form a DCMAC model and derive a modified backpropagation training algorithm to learn the DCMAC parameters. Compared with the conventional CMAC, the DCMAC can characterize nonlinear transformations more effectively because of its deep structure. Experimental results confirm that the proposed DCMAC model outperforms the CMAC in terms of residual noise in an ANC task, showing that the DCMAC provides enhanced modeling capability based on channel characteristics.

I. INTRODUCTION

The goal of an adaptive noise cancellation (ANC) system is to remove noise components from signals. In ANC systems, linear filters are widely used because of their simple structure and satisfactory performance under general conditions, where least mean square (LMS) [1] and normalized LMS [2] are two effective criteria for estimating the filter parameters. However, when the system has a nonlinear and complex response, a linear filter may not provide optimal performance. Accordingly, several nonlinear adaptive filters have been developed; notable examples include the unscented Kalman filter [3, 4] and the Volterra filter [5, 6]. Meanwhile, the cerebellar model articulation controller (CMAC), a feedforward neural network model, has been used as a complex piecewise linear filter [7, 8]. Experimental results showed that the CMAC provides satisfactory performance in terms of mean squared error (MSE) for nonlinear systems [9, 10]. The CMAC model is a partially connected perceptron-like associative memory network [11]. Owing to its peculiar structure, it overcomes the fast-growing problem and the learning difficulties that other neural networks face when the amount of training data is limited [8, 12, 13].
Moreover, because of its simple computation and good generalization capability, the CMAC model has been widely used to control complex dynamical systems [14], nonlinear systems [9, 10], robot manipulators [15], and multi-input multi-output (MIMO) systems [16, 17].

Yu Tsao is with the Research Center for Information Technology Innovation, Academia Sinica, Taipei, Taiwan (corresponding author; e-mail: yu.tsao@citi.sinica.edu.tw). Hao-Chun Chu, Shih-Wei Lan, Shih-Hau Fang, Junghsi Lee, and Chih-Min Lin are with the Department of Electrical Engineering, Yuan Ze University, Taoyuan, Taiwan (e-mail: {david46331, shfang, eeee, cm}@saturn.yzu.edu.tw).

More recently, deep learning has become a part of many state-of-the-art systems, particularly in computer vision [18-20] and speech recognition [21-23]. Numerous studies indicate that by stacking several shallow structures into a single deep structure, the overall system can achieve better data representation and thus deal more effectively with nonlinear and highly complex tasks. Successful examples include stacked denoising autoencoders [20], stacked sparse coding [24], multilayer nonnegative matrix factorization [25], and deep neural networks [26, 27]. In this study, we propose a deep CMAC (DCMAC) framework that stacks several single-layered CMACs. We also derive a modified backpropagation algorithm to train the DCMAC model. Experimental results on ANC tasks show that the DCMAC provides better results than the conventional CMAC in terms of MSE scores.

II. PROPOSED ALGORITHM

2.1 System Overview

Figure 1 shows the block diagram of a typical ANC system containing two microphones, one external and the other internal. The external microphone receives the noise source signal n(k), while the internal one receives the noisy signal v(k). The noisy signal is a mixture of the signal of interest s(k) and the damage noise signal z(k). Therefore, v(k) = s(k) + z(k), where z(k) is generated by passing the noise signal n(k) through an unknown channel F(·).
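The signal model described so far can be sketched numerically. The snippet below mirrors the signal choices used later in Section III (sinusoid times white noise, noise in [−1.5, 1.5], one of the cosine channels); all function and variable names are illustrative assumptions, not the authors' code:

```python
import numpy as np

# Sketch of the ANC signal model: v(k) = s(k) + z(k), with z(k) = F(n(k)).
rng = np.random.default_rng(0)
k = np.arange(100)
s = np.sin(0.06 * k) * rng.uniform(-1.0, 1.0, size=k.size)  # signal of interest s(k)
n = rng.uniform(-1.5, 1.5, size=k.size)                     # reference noise n(k)
z = 0.6 * np.cos(n ** 3)                                    # damage noise via unknown channel F
v = s + z                                                   # noisy observation v(k)

def canceller_output(y):
    """Final ANC output: noisy signal minus the filter's noise estimate y(k)."""
    return v - y

# With a perfect noise estimate (y = z), the signal of interest is recovered exactly.
assert np.allclose(canceller_output(z), s)
```

The canceller therefore never needs access to s(k) directly; it only shapes y(k) from n(k) so that the residual v(k) − y(k) approaches s(k).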
The relation between the noise signal n(k) and the damage noise z(k) follows [8]. The ANC system computes a filter, F̂(·), which transforms n(k) into y(k) so that the final output, (v(k) − y(k)), is close to the signal of interest, s(k). The parameters of F̂(·) are updated by minimizing the MSE. Recently, the concept of deep learning has garnered great attention. Inspired by deep learning, we propose a DCMAC framework that stacks several layers of the single-layered CMAC to construct the filter F̂(·), as indicated in Fig. 1.

Figure 1. Block diagram of the proposed ANC system: the interest signal s(k) and the damage noise z(k) = F(n(k)) mix into the noisy signal v(k); the DCMAC system F̂(·) maps n(k) to y(k), and the output is v(k) − y(k).

Fig. 2 shows the architecture of the DCMAC, which is composed of a plurality of CMAC layers. The A, R, and W in Fig. 2 denote the association memory space, receptive field space, and weight memory space, respectively, of a CMAC model. In the next section, we detail these three spaces. The output of the first CMAC layer is treated as the input of the next CMAC layer. The derived F̂(·), as modeled by the DCMAC, can better characterize the signals by using multiple nonlinear processing layers, and thus achieve improved noise cancellation performance.

Figure 2. Architecture of the deep CMAC: the input x = [x_1, x_2, ..., x_N]^T passes through stacked CMAC layers (layer 1, layer 2, ..., layer L), each consisting of A, R, and W spaces.

2.2 Review of the CMAC Model

This section reviews the structure and parameter-learning algorithm of the CMAC model.

A. Structure of a CMAC

Fig. 3 shows a CMAC model with five spaces: an input space, an association memory space, a receptive field space, a weight memory space, and an output space. The main functions of these five spaces are as follows:

1) Input space: This space is the input of the CMAC. In Fig. 3, the input vector is x = [x_1, x_2, ..., x_N]^T ∈ R^N, where N is the feature dimension.

2) Association memory space: This space holds the excitation functions of the CMAC, and it has a multi-layer concept. Please note that the layers here (indicating the depth of the association memory space) are different from those presented in Section 2.1 (indicating the number of CMACs in a DCMAC). To avoid confusion, we call a layer of the association memory an S_layer and a layer in the CMAC-number sense a layer in the following discussion.

Figure 3. Architecture of a CMAC: the input x passes through the association memory (A), receptive field (R), and weight memory (W) spaces to produce the output y.

Fig. 4 shows an example of an association memory space with a two-dimensional input vector, x = [x_1, x_2]^T with N = 2. LB and UB are the lower and upper bounds, respectively. We first divide x_1 into blocks (A, B) and x_2 into blocks (a, b).
Next, by shifting each variable by one element, we get blocks (C, D) and blocks (c, d) for the second S_layer. Likewise, by shifting again, we can generate another S_layer. In Fig. 4, we have four S_layers, each S_layer having two blocks. Therefore, the block number is eight (N_B = 8) for one variable; accordingly, the overall association memory space has 16 blocks (N_B × N). Each block contains an excitation function, which must be a continuously bounded function, such as a Gaussian, triangular, or wavelet function. In this study, we use the Gaussian function [as shown in Fig. 4]:

φ_ij = exp[−(x_i − m_ij)² / σ_ij²], j = 1, 2, ..., N_B; i = 1, 2, ..., N, (1)

where x_i is the i-th input signal, and m_ij and σ_ij represent the mean and variance, respectively, of the associative memory function for the i-th input and the j-th block.

Figure 4. CMAC with a two-dimensional input vector (N = 2): variable x_1 is divided into blocks (A-H) and variable x_2 into blocks (a-h) across four S_layers between LB and UB.

3) Receptive field space: In Fig. 4, areas formed by blocks are called receptive fields. The receptive field space has eight areas (N_R = 8): Aa, Bb, Cc, Dd, Ee, Ff, Gg, and Hh. Given the input x, the j-th receptive field function is represented as

b_j = ∏_{i=1}^{N} φ_ij = exp[−∑_{i=1}^{N} (x_i − m_ij)² / σ_ij²]. (2)

In the following, we express the receptive field functions in vector form, namely, b = [b_1, b_2, ..., b_{N_R}]^T.

4) Weight memory space: This space specifies the adjustable weights of the results of the receptive field space:

w_t = [w_{1t}, w_{2t}, ..., w_{N_R t}]^T for t = 1, 2, ..., M, (3)

where M denotes the output vector dimension.

5) Output space: From Fig. 3, the output of the CMAC is

y_t = w_t^T b = ∑_{j=1}^{N_R} w_{jt} exp[−∑_{i=1}^{N} (x_i − m_ij)² / σ_ij²], (4)

where y_t is the t-th element of the output vector, y = [y_1, y_2, ..., y_M]^T. The output of a state point is the algebraic sum of the outputs of the receptive fields (Aa, Bb, Cc, Dd, Ee, Ff, Gg, and Hh) multiplied by the corresponding weights.

B. Parameters of Adaptive Learning Algorithm

To estimate the parameters in the association memory, receptive field, and weight memory spaces of the CMAC, we first define an objective function:

O(k) = (1/2) ∑_{t=1}^{M} [e_t(k)]², (5)

where the error signal e_t(k) = d_t(k) − y_t(k) indicates the error between the desired response d_t(k) and the filter's output y_t(k) at the k-th sample. Based on Eq. (5), the normalized gradient descent method can be used to derive the update rules for the parameters of a CMAC model:

m_ij(k + 1) = m_ij(k) + μ_m (∂O/∂m_ij), where ∂O/∂m_ij = ∑_{t=1}^{M} e_t w_{jt} b_j (2(x_i − m_ij)/σ_ij²); (6)

σ_ij(k + 1) = σ_ij(k) + μ_σ (∂O/∂σ_ij), where ∂O/∂σ_ij = ∑_{t=1}^{M} e_t w_{jt} b_j (2(x_i − m_ij)²/σ_ij³); (7)

w_{jt}(k + 1) = w_{jt}(k) + μ_w (∂O/∂w_{jt}), where ∂O/∂w_{jt} = e_t b_j. (8)

2.3 Proposed DCMAC Model

A. Structure of the DCMAC

From Eq. (4), the output of the first layer, y^1, is obtained by

y_t^1 = ∑_{j=1}^{N_R^1} w_{jt}^1 exp[−∑_{i=1}^{N} (x_i − m_ij^1)² / (σ_ij^1)²], (9)

where y_t^1 is the t-th element of the output y^1, and N_R^1 is the number of receptive fields in the first layer. Next, the relation between the output of the (l−1)-th layer (y^{l−1}) and that of the l-th layer (y^l) can be formulated as

y_t^l = ∑_{j=1}^{N_R^l} w_{jt}^l exp[−∑_{i=1}^{N^l} (y_i^{l−1} − m_ij^l)² / (σ_ij^l)²], l = 2, ..., L, (10)

where N^l is the input dimension of the l-th layer (the output dimension of the (l−1)-th layer); N_R^l is the number of receptive fields in the l-th layer; y_t^l is the t-th element of y^l; m_ij^l, σ_ij^l, and w_{jt}^l are the parameters of the l-th CMAC; and L is the total number of CMAC layers in a DCMAC.

B. Backpropagation Algorithm for DCMAC

Assume that the output vector of a DCMAC is y = [y_1, y_2, ..., y_M]^T ∈ R^M, where M is the feature dimension. The objective function of the DCMAC is

O(k) = (1/2) ∑_{t=1}^{M} [d_t(k) − y_t(k)]². (11)
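Before deriving the deep version, the single-layer forward pass and update rules of Eqs. (2) and (4)-(8) can be sketched compactly. The following is an illustration under assumed shapes (N inputs, N_R receptive fields, M outputs), using plain gradient steps rather than the normalized variant; it is not the authors' implementation:

```python
import numpy as np

def cmac_step(x, d, m, s, w, mu=0.01):
    """One CMAC adaptation step following Eqs. (2), (4)-(8).

    x: (N,) input; d: (M,) desired output;
    m, s: (N, N_R) means / widths of the Gaussian receptive fields;
    w: (N_R, M) output weights. Arrays are updated in place.
    """
    diff = x[:, None] - m                            # x_i - m_ij
    b = np.exp(-np.sum(diff ** 2 / s ** 2, axis=0))  # receptive fields, Eq. (2)
    y = w.T @ b                                      # outputs, Eq. (4)
    e = d - y                                        # error terms of Eq. (5)
    g = (w @ e) * b                                  # sum_t e_t w_jt b_j, shared by Eqs. (6)-(7)
    m += mu * g * 2.0 * diff / s ** 2                # mean update, Eq. (6)
    s += mu * g * 2.0 * diff ** 2 / s ** 3           # variance (width) update, Eq. (7)
    w += mu * np.outer(b, e)                         # weight update, Eq. (8)
    return y, e

# Example with N = 1, N_R = 8, M = 1, using the block centers of Sec. 3.1.
m = np.array([[-2.4, -1.8, -1.2, -0.6, 0.6, 1.2, 1.8, 2.4]])
s = np.full((1, 8), 0.6)
w = np.zeros((8, 1))
y, e = cmac_step(np.array([0.5]), np.array([1.0]), m, s, w)
```

Iterating `cmac_step` over the training samples drives the squared error of Eq. (5) down for a smooth target; the normalization factor of the normalized gradient descent method is omitted here for brevity.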
In the following, we present the backpropagation algorithm for estimating the parameters of the DCMAC. Because the update rules for the means and variances differ from those for the weights, they are presented separately.

1) The update algorithm of means and variances: The update algorithms of the means and variances for the last layer (the L-th layer) of the DCMAC are the same as those of the CMAC (as shown in Eqs. (6) and (7)). For the penultimate layer (the (L−1)-th layer), the parameter updates are based on

∂O/∂z_p^{L−1} = (∂O/∂b_p^{L−1})(∂b_p^{L−1}/∂z_p^{L−1}), z ∈ {m, σ}, (12)

where b_p^{L−1} is the p-th receptive field function of the (L−1)-th layer. We define the momentum δ_{z_p} = ∂O/∂b_p^{L−1} of the p-th receptive field function in the (L−1)-th layer. Then, we have

δ_{z_p} = ∑_{j=1}^{N_R^L} (∂O/∂b_j^L)(∂b_j^L/∂b_p^{L−1}), (13)

where b_j^L is the j-th receptive field function of the L-th layer. Notably, by replacing z with m and σ in Eq. (13), we obtain the momentums δ_{m_p} and δ_{σ_p}. Similarly, we can derive the momentum, δ_{z_q}, for the q-th receptive field function in the (L−2)-th layer by

δ_{z_q} = ∑_{p=1}^{N_R^{L−1}} (∂O/∂b_p^{L−1})(∂b_p^{L−1}/∂b_q^{L−2}) = ∑_{p=1}^{N_R^{L−1}} δ_{z_p} (∂b_p^{L−1}/∂b_q^{L−2}), (14)

where b_q^{L−2} is the q-th receptive field function of the (L−2)-th layer, and N_R^{L−1} is the number of receptive fields in the (L−1)-th layer. Based on the normalized gradient descent method, the learning rule for m_ij^l (the i-th mean parameter of the j-th receptive field in the l-th layer) is

m_ij^l(k + 1) = m_ij^l(k) + μ_m (∂b_j^l/∂m_ij^l) δ_{m_j}; (15)

similarly, the learning rule for σ_ij^l (the i-th variance parameter of the j-th receptive field in the l-th layer) is

σ_ij^l(k + 1) = σ_ij^l(k) + μ_σ (∂b_j^l/∂σ_ij^l) δ_{σ_j}, (16)

where μ_m in Eq. (15) and μ_σ in Eq. (16) are the learning rates for the mean and variance updates, respectively.

2) The update algorithm of weights: The update rule of the weights in the last layer (the L-th layer) of the DCMAC is the same as that for the CMAC. For the penultimate layer (the (L−1)-th layer), the parameter update is given in Eq. (17).
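The chain-rule recursions of Eqs. (12)-(16), together with the weight-momentum counterparts of Eqs. (17)-(20), amount to a manual backward pass over stacked Gaussian-receptive-field layers. The sketch below is a simplified illustration under assumed shapes, using plain (unnormalized) gradient descent on O(k); it is not the authors' implementation:

```python
import numpy as np

def forward(x, layers):
    """Forward pass through stacked CMAC layers, cf. Eqs. (9)-(10).
    Each layer is [m, s, w] with m, s: (N_in, N_R) and w: (N_R, N_out)."""
    cache = []
    for m, s, w in layers:
        diff = x[:, None] - m                              # x_i - m_ij
        b = np.exp(-np.sum(diff ** 2 / s ** 2, axis=0))    # receptive fields
        cache.append((diff, b))
        x = w.T @ b                                        # layer output feeds next layer
    return x, cache

def backward(d, y, layers, cache, mu=0.01):
    """Backward pass: propagate the momentum (delta) terms of Eqs. (12)-(14)
    and (18)-(19), and apply the updates of Eqs. (15)-(16) and (20)."""
    delta = -(d - y)                                       # dO/dy at the last layer
    for (m, s, w), (diff, b) in zip(reversed(layers), reversed(cache)):
        db = w @ delta                                     # dO/db_j for this layer
        dx = ((db * b) * (-2.0 * diff / s ** 2)).sum(axis=1)  # dO/d(input), passed down
        w -= mu * np.outer(b, delta)                       # weight update
        m -= mu * (db * b) * 2.0 * diff / s ** 2           # mean update
        s -= mu * (db * b) * 2.0 * diff ** 2 / s ** 3      # width update
        delta = dx                                         # momentum for the previous layer

# Two stacked layers (1 -> 1), mirroring the paper's N = 1, M = 1 setting.
def make_layer(n_in, n_r, n_out):
    return [np.tile(np.linspace(-2.4, 2.4, n_r), (n_in, 1)),
            np.full((n_in, n_r), 0.6),
            np.zeros((n_r, n_out))]

layers = [make_layer(1, 8, 1), make_layer(1, 8, 1)]
y, cache = forward(np.array([0.5]), layers)
backward(np.array([1.0]), y, layers, cache)
```

With e_t = d_t − y_t, the sign conventions above implement gradient descent on the objective of Eq. (11); repeating the forward/backward pair over the samples reduces the objective.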

∂O/∂w_{jt}^{L−1} = (∂O/∂y_t^{L−1})(∂y_t^{L−1}/∂w_{jt}^{L−1}), (17)

where the momentum of the (L−1)-th layer is δ_{w_t} = ∂O/∂y_t^{L−1}, which can be expanded as

δ_{w_t} = ∑_{r=1}^{M} (∂O/∂y_r^L) ∑_{j=1}^{N_R^L} (∂y_r^L/∂b_j^L)(∂b_j^L/∂y_t^{L−1}), (18)

where y_r^L is the r-th element of the output y^L. Similarly, the momentum for the (L−2)-th layer can be computed by

δ_{w_c} = ∑_t (∂O/∂y_t^{L−1})(∂y_t^{L−1}/∂y_c^{L−2}) = ∑_t δ_{w_t} (∂y_t^{L−1}/∂y_c^{L−2}). (19)

According to the normalized gradient descent method, the learning rule for w_{jt}^l (the weight for the j-th receptive field and the t-th output in the l-th layer) is defined as

w_{jt}^l(k + 1) = w_{jt}^l(k) + μ_w (∂y_t^l/∂w_{jt}^l) δ_{w_t} = w_{jt}^l(k) + μ_w b_j^l δ_{w_t}, (20)

where μ_w is the learning rate for the weights.

III. EXPERIMENTS

3.1 Experimental Setup

In the experiment, we consider the signal of interest s(k) = sin(0.06k) multiplied by a white noise signal normalized within [−1, 1], as shown in Fig. 5(A). The noise signal, n(k), is generated by white noise normalized within [−1.5, 1.5]. A total of 100 training samples are used in this experiment. The noise signal n(k) passes through a nonlinear channel, generating the damage noise z(k). The relation between n(k) and z(k) is z(k) = F(n(k)), where F(·) represents the function of the nonlinear channel. In this experiment, we used twelve different functions, {0.6(n(k))^{2i−1}; 0.6 cos((n(k))^{2i−1}); 0.6 sin((n(k))^{2i−1})}, i = 1, 2, 3, 4, to generate the damage noise signals z(k). The noisy signals v(k) for three representative channel functions, namely, F(·) = 0.6(n(k))³, F(·) = 0.6 cos((n(k))³), and F(·) = 0.6 sin((n(k))³), are shown in Figs. 5(B), (C), and (D), respectively. We followed reference [8] to set up the parameters of the DCMAC, as characterized below:

1) Number of S_layers: 4.
2) Number of blocks: N_B = Ceil(N_e/S_layer) × S_layer = Ceil(5/4) × 4 = 8.
3) Number of receptive fields (N_R): 8.
4) Associative memory functions: φ_ij = exp[−(x_i − m_ij)²/σ_ij²], i = 1; j = 1, 2, ..., N_R.

Note that Ceil(·) represents the unconditional carry of the remainder (rounding up). Signal range detection is required to set the UB and LB necessary to include all the signals.
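As a concrete sketch, the twelve channel functions can be generated programmatically. The exponent 2i−1 follows the listing above, and the helper and key names are assumptions of this illustration:

```python
import numpy as np

# The twelve nonlinear channels of Sec. 3.1: 0.6*n^(2i-1), 0.6*cos(n^(2i-1)),
# and 0.6*sin(n^(2i-1)) for i = 1..4; powers are applied elementwise.
def make_channels():
    channels = {}
    for i in range(1, 5):
        p = 2 * i - 1
        channels[f"poly_{p}"] = lambda n, p=p: 0.6 * n ** p
        channels[f"cos_{p}"] = lambda n, p=p: 0.6 * np.cos(n ** p)
        channels[f"sin_{p}"] = lambda n, p=p: 0.6 * np.sin(n ** p)
    return channels

channels = make_channels()
n = np.linspace(-1.5, 1.5, 5)
z = channels["cos_3"](n)   # representative channel F(.) = 0.6*cos(n^3)
```

Binding `p=p` in each lambda freezes the exponent per entry, so all twelve channels stay distinct when generated in a loop.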
Figure 5. (A) Signal of interest s(k). (B-D) Noisy signal v(k) under the channel functions F(·) = 0.6(n(k))³, F(·) = 0.6 cos((n(k))³), and F(·) = 0.6 sin((n(k))³).

In this study, our preliminary results show that [UB LB] = [3 −3] gives the best performance. Please note that the main goal of this study is to investigate whether the DCMAC can yield better ANC results than a single-layer CMAC. Therefore, we report the results using [3 −3] for both CMAC and DCMAC in the following discussions. The initial means of the Gaussian functions (m_ij) are set in the middle of each block, and the initial variances of the Gaussian functions (σ_ij) are determined by the size of each block. With [UB LB] = [3 −3], we initialize the mean parameters as m_i1 = −2.4, m_i2 = −1.8, m_i3 = −1.2, m_i4 = −0.6, m_i5 = 0.6, m_i6 = 1.2, m_i7 = 1.8, m_i8 = 2.4, so that the eight blocks cover [UB LB] evenly. Meanwhile, we set σ_ij = 0.6 for j = 1, ..., 8, and the initial weights (w_jt) to zeros. Based on our experiments, different parameter initializations only affect the performance in the first few epochs, after which the parameters quickly converge to similar values. The learning rates are chosen as μ_s = μ_z = μ_w = μ_m = μ_σ. The parameter settings are the same for all layers of the DCMAC. In this study, we examine the performance of DCMACs formed by three, five, and seven layers of CMACs, denoted as DCMAC(3), DCMAC(5), and DCMAC(7), respectively. The input dimension was set as N = 1; the output dimensions for the CMAC and the DCMACs were set as M = 1.

3.2 Experimental Results

This section compares the DCMAC with different algorithms based on two performance metrics: the MSE and the convergence speed. Fig. 6 shows the converged MSE of the CMAC and the DCMAC under three different settings, (S_layer = 2, N_e = 5), (S_layer = 4, N_e = 5), and (S_layer = 4, N_e = 9), tested on the channel function F(·) = 0.6 cos((n(k))³). To compare the performance of the proposed DCMAC, we conducted experiments using two popular adaptive filter methods, namely LMS [1] and

the Volterra filter [5, 6]. For a fair comparison, the learning epochs are set the same for LMS, Volterra, CMAC, and DCMAC, with 100 data samples in each epoch. The parameters of LMS and the Volterra filter are tuned, and the best results are reported in Fig. 6. From Fig. 6, we see that the DCMAC outperforms not only Volterra and LMS but also the CMAC under all three setups. The same trends are observed across the 12 channel functions, and thus only the result for F(·) = 0.6 cos((n(k))³) is presented as a representative.

Figure 6. MSE of LMS, Volterra, CMAC, and DCMAC with channel function F(·) = 0.6 cos((n(k))³).

Speed is also an important performance metric for ANC tasks. Fig. 7 shows the convergence speed and MSE reduction rate versus the number of epochs for the different algorithms. For ease of comparison and due to limited space, Fig. 7 only shows the results of the three-layer DCMAC (denoted as DCMAC in Fig. 7), since the trends of the DCMAC performances are consistent across different layer numbers. For the CMAC and the DCMAC, we adopted S_layer = 4 and N_e = 5. Fig. 7 shows the results for three channel functions: F(·) = 0.6(n(k))³, F(·) = 0.6 cos((n(k))³), and F(·) = 0.6 sin((n(k))³). The results in Fig. 7 show that LMS and Volterra yield better performance than the CMAC and the DCMAC when the number of epochs is small. On the other hand, as the number of epochs increases, both the DCMAC and the CMAC give lower MSEs than LMS and Volterra over all testing channels. Moreover, the DCMAC consistently outperforms the CMAC with lower converged MSE scores. The results also show that the performance gain of the DCMAC becomes increasingly significant as the nonlinearity of the channel increases. Finally, we note that the performance of both the DCMAC and the CMAC saturates at around 400 epochs. In a real-world application, a development set can be used to determine the saturation point so that the adaptation can be switched off.

(A) F(·) = 0.6(n(k))³ (B) F(·) = 0.6 cos((n(k))³) (C) F(·) = 0.6 sin((n(k))³) Figure 7.
MSEs of LMS, Volterra, CMAC, and DCMAC with three types of channel functions.

More results are presented in . Simulation results of a CMAC and of a DCMAC, both after 400 epochs of training, are shown in Figs. 8(A) and (B), respectively. The results show that the proposed DCMAC achieves better filtering performance than the CMAC for this noise cancellation system.

(A) CMAC (B) DCMAC Figure 8. Recovered signal using (A) CMAC and (B) DCMAC, where F(·) = 0.6 cos((n(k))³).

Table I lists the mean and variance of the MSE scores for LMS, Volterra, CMAC, and DCMAC across the 12 channel functions. The MSE for each method on each channel function was obtained with 1000 epochs of training. From the results, both the CMAC and the DCMAC give lower MSEs than LMS and Volterra. In addition to the results in Table I, we adopted the dependent t-test for the hypothesis test on the 12 sets of results. The t-test results revealed that the DCMAC outperforms the CMAC with P-values = .

TABLE I. MEAN AND VARIANCE OF MSES FOR LMS, VOLTERRA, CMAC, AND DCMAC OVER 12 CHANNEL FUNCTIONS

           LMS   Volterra   CMAC   DCMAC
Mean
Variance

IV. CONCLUSION

The contribution of the present study is twofold: First, inspired by the recent success of deep learning algorithms, we extended the CMAC structure into a deep one, termed the deep CMAC (DCMAC). Second, a backpropagation algorithm was derived to estimate the DCMAC parameters. Owing to the five-space structure, the backpropagation for the DCMAC differs from that used in related artificial neural networks. The parameter updates involved in DCMAC training include two parts: (1) the update algorithm of the means and variances, and (2) the update algorithm of the weights. Experimental results on the ANC tasks showed that the proposed DCMAC achieves better noise cancellation performance than the conventional single-layer CMAC. In the future, we will investigate the capabilities of the DCMAC on other signal processing tasks, such as echo cancellation and single-microphone noise reduction. Meanwhile, advanced deep learning techniques used in deep neural networks, such as dropout and sparsity constraints, will be incorporated into the DCMAC framework. Finally, as in related deep learning research, identifying a way to optimize the number of layers and the initial parameters of the DCMAC according to the amount of training data is an important future work.

REFERENCES

[1] B. Widrow, et al., Adaptive noise cancelling: Principles and applications, Proceedings of the IEEE, vol. 63 (12), pp. 1692-1716, 1975.
[2] S. Haykin, Adaptive Filter Theory, fourth edition, Prentice-Hall, 2002.
[3] E. A. Wan and R.
van der Merwe, The unscented Kalman filter for nonlinear estimation, in Proc. IEEE AS-SPCC, 2000.
[4] F. Daum, Nonlinear filters: beyond the Kalman filter, IEEE Aerospace and Electronic Systems Magazine, vol. 20 (8), 2005.
[5] L. Tan and J. Jiang, Adaptive Volterra filters for active control of nonlinear noise processes, IEEE Transactions on Signal Processing, vol. 49 (8), 2001.
[6] V. John Mathews, Adaptive Volterra filters using orthogonal structures, IEEE Signal Processing Letters, vol. 3 (12), 1996.
[7] G. Horvath and T. Szabo, CMAC neural network with improved generalization property for system modeling, in Proc. IMTC, vol. 2, 2002.
[8] C. M. Lin, L. Y. Chen, and D. S. Yeung, Adaptive filter design using recurrent cerebellar model articulation controller, IEEE Trans. on Neural Networks, vol. 21 (7), 2010.
[9] C. M. Lin and Y. F. Peng, Adaptive CMAC-based supervisory control for uncertain nonlinear systems, IEEE Trans. on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 34 (2), 2004.
[10] C. P. Hung, Integral variable structure control of nonlinear system using a CMAC neural network learning approach, IEEE Trans. on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 34 (1), 2004.
[11] J. S. Albus, A new approach to manipulator control: The cerebellar model articulation controller (CMAC), Journal of Dynamic Systems, Measurement, and Control, vol. 97 (3), pp. 220-227, 1975.
[12] P. E. M. Almeida and M. G. Simoes, Parametric CMAC networks: Fundamentals and applications of a fast convergence neural structure, IEEE Trans. Ind. Applicat., vol. 39 (5), 2003.
[13] C. M. Lin, L. Y. Chen, and C. H. Chen, RCMAC hybrid control for MIMO uncertain nonlinear systems using sliding-mode technology, IEEE Trans. Neural Netw., vol. 18 (3), 2007.
[14] S. Commuri and F. L. Lewis, CMAC neural networks for control of nonlinear dynamical systems: Structure, stability and passivity, Automatica, vol. 33 (4), 1997.
[15] Y. H. Kim and F. L. Lewis, Optimal design of CMAC neural-network controller for robot manipulators, IEEE Trans.
on Systems, Man, and Cybernetics, Part C: Applications and Reviews, vol. 30 (1), pp. 22-31, 2000.
[16] J. Y. Wu, MIMO CMAC neural network classifier for solving classification problems, Applied Soft Computing, vol. 11 (2), 2011.
[17] Z. R. Yu, T. C. Yang, and J. G. Juang, Application of CMAC and FPGA to a twin rotor MIMO system, in Proc. ICIEA, 2010.
[18] C. Farabet, C. Couprie, L. Najman, and Y. LeCun, Learning hierarchical features for scene labeling, IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 35 (8), 2013.
[19] H. Lee, C. Ekanadham, and A. Y. Ng, Sparse deep belief net model for visual area V2, in Proc. NIPS, 2007.
[20] P. Vincent, et al., Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion, The Journal of Machine Learning Research, vol. 11, 2010.
[21] G. Hinton, et al., Deep neural networks for acoustic modeling in speech recognition, IEEE Signal Processing Magazine, vol. 29 (6), pp. 82-97, 2012.
[22] Y. LeCun, Y. Bengio, and G. Hinton, Deep learning, Nature, vol. 521, pp. 436-444, 2015.
[23] S. M. Siniscalchi, T. Svendsen, and C. H. Lee, An artificial neural network approach to automatic speech processing, Neurocomputing, vol. 140, 2014.
[24] Y. He, K. Kavukcuoglu, Y. Wang, A. Szlam, and Y. Qi, Unsupervised feature learning by deep sparse coding, in Proc. SDM, 2014.
[25] A. Cichocki and R. Zdunek, Multilayer nonnegative matrix factorization, Electronics Letters, 2006.
[26] S. Liang and R. Srikant, Why deep neural networks?, arXiv preprint.
[27] J. Ba and R. Caruana, Do deep nets really need to be deep?, in Proc. NIPS, 2014.
[28] C. T. Lin and C. F. Juang, An adaptive neural fuzzy filter and its applications, IEEE Trans. on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 27 (4), 1997.


More information

First-Order Corrections to Gutzwiller s Trace Formula for Systems with Discrete Symmetries

First-Order Corrections to Gutzwiller s Trace Formula for Systems with Discrete Symmetries c 26 Noninear Phenomena in Compex Systems First-Order Corrections to Gutzwier s Trace Formua for Systems with Discrete Symmetries Hoger Cartarius, Jörg Main, and Günter Wunner Institut für Theoretische

More information

SVM: Terminology 1(6) SVM: Terminology 2(6)

SVM: Terminology 1(6) SVM: Terminology 2(6) Andrew Kusiak Inteigent Systems Laboratory 39 Seamans Center he University of Iowa Iowa City, IA 54-57 SVM he maxima margin cassifier is simiar to the perceptron: It aso assumes that the data points are

More information

The EM Algorithm applied to determining new limit points of Mahler measures

The EM Algorithm applied to determining new limit points of Mahler measures Contro and Cybernetics vo. 39 (2010) No. 4 The EM Agorithm appied to determining new imit points of Maher measures by Souad E Otmani, Georges Rhin and Jean-Marc Sac-Épée Université Pau Veraine-Metz, LMAM,

More information

Sequential Decoding of Polar Codes with Arbitrary Binary Kernel

Sequential Decoding of Polar Codes with Arbitrary Binary Kernel Sequentia Decoding of Poar Codes with Arbitrary Binary Kerne Vera Miosavskaya, Peter Trifonov Saint-Petersburg State Poytechnic University Emai: veram,petert}@dcn.icc.spbstu.ru Abstract The probem of efficient

More information

A Brief Introduction to Markov Chains and Hidden Markov Models

A Brief Introduction to Markov Chains and Hidden Markov Models A Brief Introduction to Markov Chains and Hidden Markov Modes Aen B MacKenzie Notes for December 1, 3, &8, 2015 Discrete-Time Markov Chains You may reca that when we first introduced random processes,

More information

Maximizing Sum Rate and Minimizing MSE on Multiuser Downlink: Optimality, Fast Algorithms and Equivalence via Max-min SIR

Maximizing Sum Rate and Minimizing MSE on Multiuser Downlink: Optimality, Fast Algorithms and Equivalence via Max-min SIR 1 Maximizing Sum Rate and Minimizing MSE on Mutiuser Downink: Optimaity, Fast Agorithms and Equivaence via Max-min SIR Chee Wei Tan 1,2, Mung Chiang 2 and R. Srikant 3 1 Caifornia Institute of Technoogy,

More information

MARKOV CHAINS AND MARKOV DECISION THEORY. Contents

MARKOV CHAINS AND MARKOV DECISION THEORY. Contents MARKOV CHAINS AND MARKOV DECISION THEORY ARINDRIMA DATTA Abstract. In this paper, we begin with a forma introduction to probabiity and expain the concept of random variabes and stochastic processes. After

More information

Tracking Control of Multiple Mobile Robots

Tracking Control of Multiple Mobile Robots Proceedings of the 2001 IEEE Internationa Conference on Robotics & Automation Seou, Korea May 21-26, 2001 Tracking Contro of Mutipe Mobie Robots A Case Study of Inter-Robot Coision-Free Probem Jurachart

More information

Structural Control of Probabilistic Boolean Networks and Its Application to Design of Real-Time Pricing Systems

Structural Control of Probabilistic Boolean Networks and Its Application to Design of Real-Time Pricing Systems Preprints of the 9th Word Congress The Internationa Federation of Automatic Contro Structura Contro of Probabiistic Booean Networks and Its Appication to Design of Rea-Time Pricing Systems Koichi Kobayashi

More information

Decoupled Parallel Backpropagation with Convergence Guarantee

Decoupled Parallel Backpropagation with Convergence Guarantee Zhouyuan Huo 1 Bin Gu 1 Qian Yang 1 Heng Huang 1 Abstract Backpropagation agorithm is indispensabe for the training of feedforward neura networks. It requires propagating error gradients sequentiay from

More information

Source and Relay Matrices Optimization for Multiuser Multi-Hop MIMO Relay Systems

Source and Relay Matrices Optimization for Multiuser Multi-Hop MIMO Relay Systems Source and Reay Matrices Optimization for Mutiuser Muti-Hop MIMO Reay Systems Yue Rong Department of Eectrica and Computer Engineering, Curtin University, Bentey, WA 6102, Austraia Abstract In this paper,

More information

Disturbance decoupling by measurement feedback

Disturbance decoupling by measurement feedback Preprints of the 19th Word Congress The Internationa Federation of Automatic Contro Disturbance decouping by measurement feedback Arvo Kadmäe, Üe Kotta Institute of Cybernetics at TUT, Akadeemia tee 21,

More information

A Fundamental Storage-Communication Tradeoff in Distributed Computing with Straggling Nodes

A Fundamental Storage-Communication Tradeoff in Distributed Computing with Straggling Nodes A Fundamenta Storage-Communication Tradeoff in Distributed Computing with Stragging odes ifa Yan, Michèe Wigger LTCI, Téécom ParisTech 75013 Paris, France Emai: {qifa.yan, michee.wigger} @teecom-paristech.fr

More information

Supervised i-vector Modeling - Theory and Applications

Supervised i-vector Modeling - Theory and Applications Supervised i-vector Modeing - Theory and Appications Shreyas Ramoji, Sriram Ganapathy Learning and Extraction of Acoustic Patterns LEAP) Lab, Eectrica Engineering, Indian Institute of Science, Bengauru,

More information

Asynchronous Control for Coupled Markov Decision Systems

Asynchronous Control for Coupled Markov Decision Systems INFORMATION THEORY WORKSHOP (ITW) 22 Asynchronous Contro for Couped Marov Decision Systems Michae J. Neey University of Southern Caifornia Abstract This paper considers optima contro for a coection of

More information

Polar Snakes: a fast and robust parametric active contour model

Polar Snakes: a fast and robust parametric active contour model Poar Snakes: a fast and robust parametric active contour mode Christophe Coewet To cite this version: Christophe Coewet. Poar Snakes: a fast and robust parametric active contour mode. IEEE Int. Conf. on

More information

Distributed average consensus: Beyond the realm of linearity

Distributed average consensus: Beyond the realm of linearity Distributed average consensus: Beyond the ream of inearity Usman A. Khan, Soummya Kar, and José M. F. Moura Department of Eectrica and Computer Engineering Carnegie Meon University 5 Forbes Ave, Pittsburgh,

More information

CS229 Lecture notes. Andrew Ng

CS229 Lecture notes. Andrew Ng CS229 Lecture notes Andrew Ng Part IX The EM agorithm In the previous set of notes, we taked about the EM agorithm as appied to fitting a mixture of Gaussians. In this set of notes, we give a broader view

More information

Quick Training Algorithm for Extra Reduced Size Lattice-Ladder Multilayer Perceptrons

Quick Training Algorithm for Extra Reduced Size Lattice-Ladder Multilayer Perceptrons INFORMATICA, 2003, Vo. 14, No. 2, 223 236 223 2003 Institute of Mathematics and Informatics, Vinius Quick Training Agorithm for Extra Reduced Size Lattice-Ladder Mutiayer Perceptrons Daius NAVAKAUSKAS

More information

DIGITAL FILTER DESIGN OF IIR FILTERS USING REAL VALUED GENETIC ALGORITHM

DIGITAL FILTER DESIGN OF IIR FILTERS USING REAL VALUED GENETIC ALGORITHM DIGITAL FILTER DESIGN OF IIR FILTERS USING REAL VALUED GENETIC ALGORITHM MIKAEL NILSSON, MATTIAS DAHL AND INGVAR CLAESSON Bekinge Institute of Technoogy Department of Teecommunications and Signa Processing

More information

Two view learning: SVM-2K, Theory and Practice

Two view learning: SVM-2K, Theory and Practice Two view earning: SVM-2K, Theory and Practice Jason D.R. Farquhar jdrf99r@ecs.soton.ac.uk Hongying Meng hongying@cs.york.ac.uk David R. Hardoon drh@ecs.soton.ac.uk John Shawe-Tayor jst@ecs.soton.ac.uk

More information

A. Distribution of the test statistic

A. Distribution of the test statistic A. Distribution of the test statistic In the sequentia test, we first compute the test statistic from a mini-batch of size m. If a decision cannot be made with this statistic, we keep increasing the mini-batch

More information

Extended SMART Algorithms for Non-Negative Matrix Factorization

Extended SMART Algorithms for Non-Negative Matrix Factorization Extended SMART Agorithms for Non-Negative Matrix Factorization Andrzej CICHOCKI 1, Shun-ichi AMARI 2 Rafa ZDUNEK 1, Rau KOMPASS 1, Gen HORI 1 and Zhaohui HE 1 Invited Paper 1 Laboratory for Advanced Brain

More information

Neural Networks Compression for Language Modeling

Neural Networks Compression for Language Modeling Neura Networks Compression for Language Modeing Artem M. Grachev 1,2, Dmitry I. Ignatov 2, and Andrey V. Savchenko 3 arxiv:1708.05963v1 [stat.ml] 20 Aug 2017 1 Samsung R&D Institute Rus, Moscow, Russia

More information

Expectation-Maximization for Estimating Parameters for a Mixture of Poissons

Expectation-Maximization for Estimating Parameters for a Mixture of Poissons Expectation-Maximization for Estimating Parameters for a Mixture of Poissons Brandon Maone Department of Computer Science University of Hesini February 18, 2014 Abstract This document derives, in excrutiating

More information

Math 124B January 17, 2012

Math 124B January 17, 2012 Math 124B January 17, 212 Viktor Grigoryan 3 Fu Fourier series We saw in previous ectures how the Dirichet and Neumann boundary conditions ead to respectivey sine and cosine Fourier series of the initia

More information

T.C. Banwell, S. Galli. {bct, Telcordia Technologies, Inc., 445 South Street, Morristown, NJ 07960, USA

T.C. Banwell, S. Galli. {bct, Telcordia Technologies, Inc., 445 South Street, Morristown, NJ 07960, USA ON THE SYMMETRY OF THE POWER INE CHANNE T.C. Banwe, S. Gai {bct, sgai}@research.tecordia.com Tecordia Technoogies, Inc., 445 South Street, Morristown, NJ 07960, USA Abstract The indoor power ine network

More information

Nonlinear Analysis of Spatial Trusses

Nonlinear Analysis of Spatial Trusses Noninear Anaysis of Spatia Trusses João Barrigó October 14 Abstract The present work addresses the noninear behavior of space trusses A formuation for geometrica noninear anaysis is presented, which incudes

More information

How the backpropagation algorithm works Srikumar Ramalingam School of Computing University of Utah

How the backpropagation algorithm works Srikumar Ramalingam School of Computing University of Utah How the backpropagation agorithm works Srikumar Ramaingam Schoo of Computing University of Utah Reference Most of the sides are taken from the second chapter of the onine book by Michae Nieson: neuranetworksanddeepearning.com

More information

The influence of temperature of photovoltaic modules on performance of solar power plant

The influence of temperature of photovoltaic modules on performance of solar power plant IOSR Journa of Engineering (IOSRJEN) ISSN (e): 2250-3021, ISSN (p): 2278-8719 Vo. 05, Issue 04 (Apri. 2015), V1 PP 09-15 www.iosrjen.org The infuence of temperature of photovotaic modues on performance

More information

Centralized Coded Caching of Correlated Contents

Centralized Coded Caching of Correlated Contents Centraized Coded Caching of Correated Contents Qianqian Yang and Deniz Gündüz Information Processing and Communications Lab Department of Eectrica and Eectronic Engineering Imperia Coege London arxiv:1711.03798v1

More information

$, (2.1) n="# #. (2.2)

$, (2.1) n=# #. (2.2) Chapter. Eectrostatic II Notes: Most of the materia presented in this chapter is taken from Jackson, Chap.,, and 4, and Di Bartoo, Chap... Mathematica Considerations.. The Fourier series and the Fourier

More information

II. PROBLEM. A. Description. For the space of audio signals

II. PROBLEM. A. Description. For the space of audio signals CS229 - Fina Report Speech Recording based Language Recognition (Natura Language) Leopod Cambier - cambier; Matan Leibovich - matane; Cindy Orozco Bohorquez - orozcocc ABSTRACT We construct a rea time

More information

International Journal "Information Technologies & Knowledge" Vol.5, Number 1,

International Journal Information Technologies & Knowledge Vol.5, Number 1, Internationa Journa "Information Tecnoogies & Knowedge" Vo.5, Number, 0 5 EVOLVING CASCADE NEURAL NETWORK BASED ON MULTIDIMESNIONAL EPANECHNIKOV S KERNELS AND ITS LEARNING ALGORITHM Yevgeniy Bodyanskiy,

More information

Moreau-Yosida Regularization for Grouped Tree Structure Learning

Moreau-Yosida Regularization for Grouped Tree Structure Learning Moreau-Yosida Reguarization for Grouped Tree Structure Learning Jun Liu Computer Science and Engineering Arizona State University J.Liu@asu.edu Jieping Ye Computer Science and Engineering Arizona State

More information

Symbolic models for nonlinear control systems using approximate bisimulation

Symbolic models for nonlinear control systems using approximate bisimulation Symboic modes for noninear contro systems using approximate bisimuation Giordano Poa, Antoine Girard and Pauo Tabuada Abstract Contro systems are usuay modeed by differentia equations describing how physica

More information

Title Sinusoidal Signals. Author(s) Sakai, Hideaki; Fukuzono, Hayato. Conference: Issue Date DOI

Title Sinusoidal Signals. Author(s) Sakai, Hideaki; Fukuzono, Hayato. Conference: Issue Date DOI Tite Anaysis of Adaptive Fiters in Fee Sinusoida Signas Authors) Sakai, Hideaki; Fukuzono, Hayato Proceedings : APSIPA ASC 2009 : Asi Citation Information Processing Association, Conference: 430-433 Issue

More information

ASummaryofGaussianProcesses Coryn A.L. Bailer-Jones

ASummaryofGaussianProcesses Coryn A.L. Bailer-Jones ASummaryofGaussianProcesses Coryn A.L. Baier-Jones Cavendish Laboratory University of Cambridge caj@mrao.cam.ac.uk Introduction A genera prediction probem can be posed as foows. We consider that the variabe

More information

FREQUENCY modulated differential chaos shift key (FM-

FREQUENCY modulated differential chaos shift key (FM- Accepted in IEEE 83rd Vehicuar Technoogy Conference VTC, 16 1 SNR Estimation for FM-DCS System over Mutipath Rayeigh Fading Channes Guofa Cai, in Wang, ong ong, Georges addoum Dept. of Communication Engineering,

More information

<C 2 2. λ 2 l. λ 1 l 1 < C 1

<C 2 2. λ 2 l. λ 1 l 1 < C 1 Teecommunication Network Contro and Management (EE E694) Prof. A. A. Lazar Notes for the ecture of 7/Feb/95 by Huayan Wang (this document was ast LaT E X-ed on May 9,995) Queueing Primer for Muticass Optima

More information

FRIEZE GROUPS IN R 2

FRIEZE GROUPS IN R 2 FRIEZE GROUPS IN R 2 MAXWELL STOLARSKI Abstract. Focusing on the Eucidean pane under the Pythagorean Metric, our goa is to cassify the frieze groups, discrete subgroups of the set of isometries of the

More information

Turbo Codes. Coding and Communication Laboratory. Dept. of Electrical Engineering, National Chung Hsing University

Turbo Codes. Coding and Communication Laboratory. Dept. of Electrical Engineering, National Chung Hsing University Turbo Codes Coding and Communication Laboratory Dept. of Eectrica Engineering, Nationa Chung Hsing University Turbo codes 1 Chapter 12: Turbo Codes 1. Introduction 2. Turbo code encoder 3. Design of intereaver

More information

Algorithms to solve massively under-defined systems of multivariate quadratic equations

Algorithms to solve massively under-defined systems of multivariate quadratic equations Agorithms to sove massivey under-defined systems of mutivariate quadratic equations Yasufumi Hashimoto Abstract It is we known that the probem to sove a set of randomy chosen mutivariate quadratic equations

More information

Target Location Estimation in Wireless Sensor Networks Using Binary Data

Target Location Estimation in Wireless Sensor Networks Using Binary Data Target Location stimation in Wireess Sensor Networks Using Binary Data Ruixin Niu and Pramod K. Varshney Department of ectrica ngineering and Computer Science Link Ha Syracuse University Syracuse, NY 344

More information

Converting Z-number to Fuzzy Number using. Fuzzy Expected Value

Converting Z-number to Fuzzy Number using. Fuzzy Expected Value ISSN 1746-7659, Engand, UK Journa of Information and Computing Science Vo. 1, No. 4, 017, pp.91-303 Converting Z-number to Fuzzy Number using Fuzzy Expected Vaue Mahdieh Akhbari * Department of Industria

More information

Convergence Property of the Iri-Imai Algorithm for Some Smooth Convex Programming Problems

Convergence Property of the Iri-Imai Algorithm for Some Smooth Convex Programming Problems Convergence Property of the Iri-Imai Agorithm for Some Smooth Convex Programming Probems S. Zhang Communicated by Z.Q. Luo Assistant Professor, Department of Econometrics, University of Groningen, Groningen,

More information

Evolutionary Product-Unit Neural Networks for Classification 1

Evolutionary Product-Unit Neural Networks for Classification 1 Evoutionary Product-Unit Neura Networs for Cassification F.. Martínez-Estudio, C. Hervás-Martínez, P. A. Gutiérrez Peña A. C. Martínez-Estudio and S. Ventura-Soto Department of Management and Quantitative

More information

Adjustment of automatic control systems of production facilities at coal processing plants using multivariant physico- mathematical models

Adjustment of automatic control systems of production facilities at coal processing plants using multivariant physico- mathematical models IO Conference Series: Earth and Environmenta Science AER OEN ACCESS Adjustment of automatic contro systems of production faciities at coa processing pants using mutivariant physico- mathematica modes To

More information

Safety Evaluation Model of Chemical Logistics Park Operation Based on Back Propagation Neural Network

Safety Evaluation Model of Chemical Logistics Park Operation Based on Back Propagation Neural Network 1513 A pubication of CHEMICAL ENGINEERING TRANSACTIONS VOL. 6, 017 Guest Editors: Fei Song, Haibo Wang, Fang He Copyright 017, AIDIC Servizi S.r.. ISBN 978-88-95608-60-0; ISSN 83-916 The Itaian Association

More information

Approach to Identifying Raindrop Vibration Signal Detected by Optical Fiber

Approach to Identifying Raindrop Vibration Signal Detected by Optical Fiber Sensors & Transducers, o. 6, Issue, December 3, pp. 85-9 Sensors & Transducers 3 by IFSA http://www.sensorsporta.com Approach to Identifying Raindrop ibration Signa Detected by Optica Fiber ongquan QU,

More information

C. Fourier Sine Series Overview

C. Fourier Sine Series Overview 12 PHILIP D. LOEWEN C. Fourier Sine Series Overview Let some constant > be given. The symboic form of the FSS Eigenvaue probem combines an ordinary differentia equation (ODE) on the interva (, ) with a

More information

Indirect Optimal Control of Dynamical Systems

Indirect Optimal Control of Dynamical Systems Computationa Mathematics and Mathematica Physics, Vo. 44, No. 3, 24, pp. 48 439. Transated from Zhurna Vychisite noi Matematiki i Matematicheskoi Fiziki, Vo. 44, No. 3, 24, pp. 444 466. Origina Russian

More information

Statistical Learning Theory: a Primer

Statistical Learning Theory: a Primer ??,??, 1 6 (??) c?? Kuwer Academic Pubishers, Boston. Manufactured in The Netherands. Statistica Learning Theory: a Primer THEODOROS EVGENIOU AND MASSIMILIANO PONTIL Center for Bioogica and Computationa

More information

Melodic contour estimation with B-spline models using a MDL criterion

Melodic contour estimation with B-spline models using a MDL criterion Meodic contour estimation with B-spine modes using a MDL criterion Damien Loive, Ney Barbot, Oivier Boeffard IRISA / University of Rennes 1 - ENSSAT 6 rue de Kerampont, B.P. 80518, F-305 Lannion Cedex

More information

Lecture Note 3: Stationary Iterative Methods

Lecture Note 3: Stationary Iterative Methods MATH 5330: Computationa Methods of Linear Agebra Lecture Note 3: Stationary Iterative Methods Xianyi Zeng Department of Mathematica Sciences, UTEP Stationary Iterative Methods The Gaussian eimination (or

More information

Interactive Fuzzy Programming for Two-level Nonlinear Integer Programming Problems through Genetic Algorithms

Interactive Fuzzy Programming for Two-level Nonlinear Integer Programming Problems through Genetic Algorithms Md. Abu Kaam Azad et a./asia Paciic Management Review (5) (), 7-77 Interactive Fuzzy Programming or Two-eve Noninear Integer Programming Probems through Genetic Agorithms Abstract Md. Abu Kaam Azad a,*,

More information

Mathematical Scheme Comparing of. the Three-Level Economical Systems

Mathematical Scheme Comparing of. the Three-Level Economical Systems Appied Mathematica Sciences, Vo. 11, 2017, no. 15, 703-709 IKAI td, www.m-hikari.com https://doi.org/10.12988/ams.2017.7252 Mathematica Scheme Comparing of the Three-eve Economica Systems S.M. Brykaov

More information

Stochastic Automata Networks (SAN) - Modelling. and Evaluation. Paulo Fernandes 1. Brigitte Plateau 2. May 29, 1997

Stochastic Automata Networks (SAN) - Modelling. and Evaluation. Paulo Fernandes 1. Brigitte Plateau 2. May 29, 1997 Stochastic utomata etworks (S) - Modeing and Evauation Pauo Fernandes rigitte Pateau 2 May 29, 997 Institut ationa Poytechnique de Grenobe { IPG Ecoe ationae Superieure d'informatique et de Mathematiques

More information

arxiv: v1 [cs.lg] 31 Oct 2017

arxiv: v1 [cs.lg] 31 Oct 2017 ACCELERATED SPARSE SUBSPACE CLUSTERING Abofaz Hashemi and Haris Vikao Department of Eectrica and Computer Engineering, University of Texas at Austin, Austin, TX, USA arxiv:7.26v [cs.lg] 3 Oct 27 ABSTRACT

More information

Simplified analysis of EXAFS data and determination of bond lengths

Simplified analysis of EXAFS data and determination of bond lengths Indian Journa of Pure & Appied Physics Vo. 49, January 0, pp. 5-9 Simpified anaysis of EXAFS data and determination of bond engths A Mishra, N Parsai & B D Shrivastava * Schoo of Physics, Devi Ahiya University,

More information

Radar/ESM Tracking of Constant Velocity Target : Comparison of Batch (MLE) and EKF Performance

Radar/ESM Tracking of Constant Velocity Target : Comparison of Batch (MLE) and EKF Performance adar/ racing of Constant Veocity arget : Comparison of Batch (LE) and EKF Performance I. Leibowicz homson-csf Deteis/IISA La cef de Saint-Pierre 1 Bd Jean ouin 7885 Eancourt Cede France Isabee.Leibowicz

More information

A Better Way to Pretrain Deep Boltzmann Machines

A Better Way to Pretrain Deep Boltzmann Machines A Better Way to Pretrain Deep Botzmann Machines Rusan Saakhutdino Department of Statistics and Computer Science Uniersity of Toronto rsaakhu@cs.toronto.edu Geoffrey Hinton Department of Computer Science

More information

8 Digifl'.11 Cth:uits and devices

8 Digifl'.11 Cth:uits and devices 8 Digif'. Cth:uits and devices 8. Introduction In anaog eectronics, votage is a continuous variabe. This is usefu because most physica quantities we encounter are continuous: sound eves, ight intensity,

More information

arxiv: v1 [cs.ds] 12 Nov 2018

arxiv: v1 [cs.ds] 12 Nov 2018 Quantum-inspired ow-rank stochastic regression with ogarithmic dependence on the dimension András Giyén 1, Seth Loyd Ewin Tang 3 November 13, 018 arxiv:181104909v1 [csds] 1 Nov 018 Abstract We construct

More information

6.434J/16.391J Statistics for Engineers and Scientists May 4 MIT, Spring 2006 Handout #17. Solution 7

6.434J/16.391J Statistics for Engineers and Scientists May 4 MIT, Spring 2006 Handout #17. Solution 7 6.434J/16.391J Statistics for Engineers and Scientists May 4 MIT, Spring 2006 Handout #17 Soution 7 Probem 1: Generating Random Variabes Each part of this probem requires impementation in MATLAB. For the

More information

Iterative Decoding Performance Bounds for LDPC Codes on Noisy Channels

Iterative Decoding Performance Bounds for LDPC Codes on Noisy Channels Iterative Decoding Performance Bounds for LDPC Codes on Noisy Channes arxiv:cs/060700v1 [cs.it] 6 Ju 006 Chun-Hao Hsu and Achieas Anastasopouos Eectrica Engineering and Computer Science Department University

More information

Legendre Polynomials - Lecture 8

Legendre Polynomials - Lecture 8 Legendre Poynomias - Lecture 8 Introduction In spherica coordinates the separation of variabes for the function of the poar ange resuts in Legendre s equation when the soution is independent of the azimutha

More information

Copyright information to be inserted by the Publishers. Unsplitting BGK-type Schemes for the Shallow. Water Equations KUN XU

Copyright information to be inserted by the Publishers. Unsplitting BGK-type Schemes for the Shallow. Water Equations KUN XU Copyright information to be inserted by the Pubishers Unspitting BGK-type Schemes for the Shaow Water Equations KUN XU Mathematics Department, Hong Kong University of Science and Technoogy, Cear Water

More information

Efficiently Generating Random Bits from Finite State Markov Chains

Efficiently Generating Random Bits from Finite State Markov Chains 1 Efficienty Generating Random Bits from Finite State Markov Chains Hongchao Zhou and Jehoshua Bruck, Feow, IEEE Abstract The probem of random number generation from an uncorreated random source (of unknown

More information

Soft Clustering on Graphs

Soft Clustering on Graphs Soft Custering on Graphs Kai Yu 1, Shipeng Yu 2, Voker Tresp 1 1 Siemens AG, Corporate Technoogy 2 Institute for Computer Science, University of Munich kai.yu@siemens.com, voker.tresp@siemens.com spyu@dbs.informatik.uni-muenchen.de

More information

NOISE-INDUCED STABILIZATION OF STOCHASTIC DIFFERENTIAL EQUATIONS

NOISE-INDUCED STABILIZATION OF STOCHASTIC DIFFERENTIAL EQUATIONS NOISE-INDUCED STABILIZATION OF STOCHASTIC DIFFERENTIAL EQUATIONS TONY ALLEN, EMILY GEBHARDT, AND ADAM KLUBALL 3 ADVISOR: DR. TIFFANY KOLBA 4 Abstract. The phenomenon of noise-induced stabiization occurs

More information

FORECASTING TELECOMMUNICATIONS DATA WITH AUTOREGRESSIVE INTEGRATED MOVING AVERAGE MODELS

FORECASTING TELECOMMUNICATIONS DATA WITH AUTOREGRESSIVE INTEGRATED MOVING AVERAGE MODELS FORECASTING TEECOMMUNICATIONS DATA WITH AUTOREGRESSIVE INTEGRATED MOVING AVERAGE MODES Niesh Subhash naawade a, Mrs. Meenakshi Pawar b a SVERI's Coege of Engineering, Pandharpur. nieshsubhash15@gmai.com

More information

STABILITY OF A PARAMETRICALLY EXCITED DAMPED INVERTED PENDULUM 1. INTRODUCTION

STABILITY OF A PARAMETRICALLY EXCITED DAMPED INVERTED PENDULUM 1. INTRODUCTION Journa of Sound and Vibration (996) 98(5), 643 65 STABILITY OF A PARAMETRICALLY EXCITED DAMPED INVERTED PENDULUM G. ERDOS AND T. SINGH Department of Mechanica and Aerospace Engineering, SUNY at Buffao,

More information