
Pseudoinverse Learning Algorithm for Feedforward Neural Networks*

PING GUO and MICHAEL R. LYU
Department of Computer Science and Engineering, The Chinese University of Hong Kong, Shatin, N.T., Hong Kong SAR of P.R. CHINA
pguo@cse.cuhk.edu.hk, lyu@cse.cuhk.edu.hk

Abstract: - A supervised learning algorithm (Pseudoinverse Learning Algorithm, PIL) for feedforward neural networks is developed. The algorithm is based on generalized linear algebraic methods and adopts matrix inner products and pseudoinverse operations. It eliminates learning errors by adding hidden layers and yields perfect learning. Unlike gradient descent algorithms, the PIL is a feedforward-only, fully automated algorithm, involving no critical user-dependent parameters such as a learning rate or momentum constant.

Key-Words: - Feedforward neural networks, Supervised learning, Generalized linear algebra, Pseudoinverse learning algorithm, Fast perfect learning.

* This research work was fully supported by a grant from the Research Grants Council of the Hong Kong Special Administrative Region (Project No. CUHK493/E).

1 Introduction
Several adaptive learning algorithms for multilayer feedforward neural networks have been proposed [1][2]. Most of these algorithms are based on variations of the gradient descent algorithm, for example the Back Propagation (BP) algorithm [1]. They usually have a poor convergence rate and sometimes fall into local minima [3]. Convergence to local minima can be caused by an insufficient number of hidden neurons as well as by improper initial weight settings, while a slow convergence rate is a problem common to all gradient descent methods, including the BP algorithm. Various attempts have been made to speed up learning, such as proper initialization of the weights to avoid local minima, or an adaptive least squares algorithm that uses second-order terms of the error for weight updating [4]. Most gradient descent algorithms have another drawback, namely the "learning factors problem": parameters such as the learning rate and momentum constant are often crucial for the success of the algorithm, yet they have to be specified by the user, as no theoretical basis for choosing them exists. Furthermore, for applications which require high-precision output, such as the prediction of chaotic time series, the known algorithms are often still too slow and inefficient. For example, stacked generalization [5], which needs to train many networks to obtain level-1 training samples, is very computation- and time-consuming when the BP algorithm is used to perform the required task. In order to reduce training time and to investigate the generalization properties of learned neural networks, a Pseudoinverse Learning algorithm (PIL) is proposed in this paper. It is a feedforward-only algorithm: learning errors are transferred forward as the network architecture is established, and weights trained earlier in the network are not changed. Hence the learning error is minimized on each layer separately rather than globally for the network as a whole. By adding layers to eliminate errors, all examples of a training set can be learned perfectly.

2 The Network Structure and Learning Algorithm
2.1 The Network Structure
Let us consider a multilayer feedforward neural network with one input layer, one output layer and several hidden layers, where the number of hidden layers depends on the desired learning accuracy and on the examples of the training set to be learned. The weight matrix W^l connects layer l and layer l+1 with elements w^l_{i,j}; element w^l_{i,j} connects neuron i of layer l with neuron j of layer l+1.
Note that the W^0 matrix connects the input layer and the first hidden layer, and the W^L matrix connects the last hidden layer and the output layer. We assume that only the input layer has a bias neuron, while the hidden layer(s) and the output layer have no bias neuron. The nonlinear activation function is σ(·); for example, we can use the so-called sigmoidal function,

    σ(x) = 1/(1 + e^{-x})    (1)

whose output lies in the range (0, 1), or the hyperbolic function

    tanh(x) = (e^x − e^{-x})/(e^x + e^{-x})    (2)

whose output lies in the range (−1, 1), as the activation function.

Given a training data set D = {x_i, o_i}, i = 1, ..., N, let (x_i, o_i) be the i-th input-output training pair, where x_i = (x_1, x_2, ..., x_n) ∈ R^n is the input signal vector and o_i = (o_1, o_2, ..., o_m) ∈ R^m is the corresponding target output vector. For the N given input-output vector pairs to be learned, we summarize all input vectors into a matrix X with N rows and n+1 columns, where the last column of X is a bias neuron of constant value 1. Each row of X contains the signals of one input vector, so X = [X_0 | 1], where the matrix X_0 consists of all signal vectors x_i as rows. All desired target output vectors are summarized into a matrix O with N rows and m columns; each row of O contains the signals of one target vector o_i. In the designed network structure the activation function is not applied to the output layer, so the last layer is linear. Basically, the task of training the network is to find the weight matrices which minimize the sum-square-error function

    E = (1/2N) Σ_{i=1}^{N} Σ_{j=1}^{m} ||g_j(x_i, θ) − o_{i,j}||^2    (3)

where g(x, θ) is the network mapping function and θ is the network parameter set. In the three-layer case,

    g_j(x, θ) = Σ_{i=1}^{N} w_{i,j} σ_i( Σ_{l=1}^{n} w_{i,l} x_l + θ_i )    (4)

where θ_i is a bias value for the network input. For simplicity, we write the system cost function in matrix form,

    E = (1/2N) Trace[(G − O)^T (G − O)]    (5)

Propagating the given examples through the network, multiplying the output of layer l with the weights between layers l and l+1, and applying the nonlinear activation function to all matrix elements, we get

    Y^{l+1} = σ(Y^l W^l)    (6)

and the network output is

    G = Y^L W^L    (7)

where the superscript L denotes the last hidden layer output and the last weight matrix. By examining the above equations and reformulating the task of training, the problem becomes:

    minimize ||Y^L W^L − O||^2    (8)

This is a linear least squares problem. If we can find network weight parameters such that ||Y^L W^L − O||^2 = 0, we will have trained the neural network to learn all given examples exactly, that is, perfect learning is achieved. Without loss of generality, in the following discussion we drop the superscript index L in equation (7).

2.2 Pseudoinverse Solution
Now let us discuss the equation

    Y W = O,  W ∈ R^{p×m}, Y ∈ R^{N×p}, O ∈ R^{N×m}    (9)

When p < N, such a system in general has either no solution or an infinite number of solutions. If Y ∈ R^{N×N} is invertible and has been learned at layer L, the system of equation (9) is, in principle, easy to solve: the unique solution for the last-layer weight matrix is W = Y^{-1} O. If Y is an arbitrary matrix in R^{N×p}, then it becomes more difficult to solve equation (9); there may be none, one, or an infinite number of solutions, depending on whether O ∈ R(Y) and whether N − rank(Y) > 0. One would like to find a matrix (or matrices) C such that solutions of (9) are of the form CO. But if O ∉ R(Y), then equation (9) has no solution. From linear algebra we have:

Theorem 1. The system YW = O has a solution if and only if

    rank([Y, O]) = rank(Y).    (10)

Proof: See reference [6].

We intend to use the pseudoinverse solution for finding the weight matrix, because a theorem of linear algebra states that the pseudoinverse solution is the best approximation.
Theorem 2. Suppose that X ∈ R^{p×m}, A ∈ R^{N×p}, B ∈ R^{N×m}. The best approximate solution of the equation AX = B is X = A^+ B (the superscript + denotes the pseudoinverse matrix).
Proof: It is easily proved from reference [6].
From Theorem 2, we have:
Corollary. The best approximate solution of AX = I is X = A^+.
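As a quick sanity check of Theorem 2 and its corollary, the following NumPy fragment (a minimal sketch; the paper itself prescribes no implementation) verifies numerically that X = A^+ B is never worse, in Frobenius norm, than an arbitrary candidate solution of AX = B:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((10, 4))       # A in R^{N x p}, here N = 10, p = 4
B = rng.standard_normal((10, 3))       # B in R^{N x m}, here m = 3

X_pinv = np.linalg.pinv(A) @ B         # X = A^+ B, the pseudoinverse solution
X_other = rng.standard_normal((4, 3))  # an arbitrary competing solution

# Theorem 2: the pseudoinverse solution minimizes ||A X - B||.
assert np.linalg.norm(A @ X_pinv - B) <= np.linalg.norm(A @ X_other - B)
```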

From the above analysis, we try to find the output-layer weights as follows. Let W = Y^+ O; the learning problem then becomes ||Y Y^+ O − O||^2 = 0, where Y^+ is the pseudoinverse of Y. This is equivalent to finding a matrix Y such that Y Y^+ − I = 0, where I is the identity matrix. So the task of training the network becomes that of raising the rank of the matrix Y to full rank: as soon as Y becomes a full-rank matrix, Y Y^+ becomes the identity matrix I. Note that since we multiply Y on the right by Y^+, we only require the right inverse of Y to exist, not that Y^+ be a two-sided inverse of Y. This means that Y need not be a square matrix, but its number of columns should not be less than its number of rows. This condition requires that the number of hidden neurons be greater than or equal to N. If the condition is satisfied, we can find an exact solution for the weight matrix. When we choose the number of hidden neurons equal to N, we can find, with such a network structure, the weight matrices which map the training set exactly.

2.3 Pseudoinverse Learning Algorithm
Based on the above discussion, we first let the weight matrix W^0 be equal to (Y^0)^+, which is an (n+1)×N matrix. Then we apply the nonlinear activation function, that is, we compute Y^1 = σ(Y^0 W^0), then compute (Y^1)^+, the pseudoinverse of Y^1, and so on. Because the algorithm is feedforward-only, no error is propagated back to preceding layers of the neural network. We therefore cannot use the standard error E = (1/2N) Trace[(G − O)^T (G − O)] to judge whether the trained network has reached the desired accuracy during the training procedure. Instead, we use the criterion ||Y^l (Y^l)^+ − I||^2 < E_0. At each layer we compute ||Y^l (Y^l)^+ − I||^2; if it is less than the desired error, we set W^L = (Y^L)^+ O and stop the training procedure. Otherwise we let W^l = (Y^l)^+, add another layer, feed this layer's output forward to the next layer, and continue until the required learning accuracy is reached. Using a nonlinear activation function in the hidden nodes exploits the nonlinearity of the function to increase the linear independence among the column vectors or, equivalently, the rank of the matrix. It has been proved that sigmoidal functions can raise the dimension of the input space up to the number of hidden neurons [7]. Thus, through the nonlinear activation, the rank of the layer-output matrix is raised layer by layer. With the above discussion, we propose a feedforward-only algorithm which reduces the learning error layer by layer. First we establish a two-layer neural network. If the given precision cannot be reached, a third layer is added to eliminate the remaining error. If the added third layer still cannot satisfy the desired accuracy, another hidden layer is added, and so on, until the required accuracy is achieved. From a mathematical point of view, the algorithm can be summarized in the following steps:

Step 1. Set the number of hidden neurons to N, and let Y^0 = X.
Step 2. Compute (Y^0)^+ = Pseudoinverse(Y^0).
Step 3. Compute ||Y^l (Y^l)^+ − I||^2. If it is less than the given error E_0, go to Step 6; if not, go on to the next step.
Step 4. Let W^l = (Y^l)^+. Feed the result forward to the next layer and compute Y^{l+1} = σ(Y^l W^l).
Step 5. Compute (Y^{l+1})^+ = Pseudoinverse(Y^{l+1}), set l ← l + 1, and go to Step 3.
Step 6. Let W^L = (Y^L)^+ O.
Step 7. Stop training; the network mapping function is g = σ(··· σ(σ(Y^0 W^0) W^1) ···) W^L.
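The seven steps above translate directly into a short NumPy sketch. This is an illustrative implementation only: the names pil_train, pil_forward, sigma, eps and max_layers are not from the paper, and the sigmoidal function of equation (1) is assumed as the activation.

```python
import numpy as np

def sigma(x):
    """Sigmoidal activation function of equation (1)."""
    return 1.0 / (1.0 + np.exp(-x))

def pil_train(X, O, eps=1e-7, max_layers=10):
    """PIL sketch. X is the N x (n+1) input matrix (bias column included),
    O the N x m target matrix. Returns the weight matrices [W^0, ..., W^L]."""
    weights = []
    Y = X                                        # Step 1: Y^0 = X
    for _ in range(max_layers):
        Y_pinv = np.linalg.pinv(Y)               # Steps 2 and 5
        # Step 3: stop when ||Y^l (Y^l)^+ - I||^2 < E_0
        if np.linalg.norm(Y @ Y_pinv - np.eye(Y.shape[0])) ** 2 < eps:
            weights.append(Y_pinv @ O)           # Step 6: W^L = (Y^L)^+ O
            return weights
        W = Y_pinv                               # Step 4: W^l = (Y^l)^+
        weights.append(W)
        Y = sigma(Y @ W)                         # Step 4: Y^{l+1} = sigma(Y^l W^l)
    raise RuntimeError("required accuracy not reached within max_layers")

def pil_forward(X, weights):
    """Step 7: the trained mapping, linear in the last layer."""
    Y = X
    for W in weights[:-1]:
        Y = sigma(Y @ W)
    return Y @ weights[-1]
```

Note that the stopping test mirrors the criterion of Step 3 rather than the cost of equation (5), precisely because no error is propagated backwards.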
3 Add and Delete Samples
The proposed algorithm is a batch learning algorithm, in which we assume that all the input data are available at the time of training. However, in real-time applications the weight matrix must be updated as a new input vector is given to the network, or we may need to delete a sample from the learned weight matrix. It is not efficient to recompute the pseudoinverse of the new matrix from scratch with the PIL algorithm. When the number of hidden neurons is set equal to the number of training samples, adding or deleting a sample is equivalent to adding or deleting a hidden neuron. Here we use an add/delete-neuron algorithm to compute the pseudoinverse matrix efficiently. According to Greville's theorem [8], the first k columns of the matrix Y constitute a submatrix Y_k, and the pseudoinverse of this submatrix can be calculated from the pseudoinverse of the previous submatrix Y_{k-1}:

    Y_k^+ = [ Y_{k-1}^+ (I − y_k b_k^T) ;  b_k^T ]    (11)

(the semicolon separates the rows of the block matrix), where the vector y_k is the k-th column vector of the matrix Y, and

    b_k = (I − Y_{k-1} Y_{k-1}^+) y_k / CD^2,                       if CD ≠ 0,
    b_k = (Y_{k-1}^+)^T Y_{k-1}^+ y_k / (1 + ||Y_{k-1}^+ y_k||^2),  otherwise,    (12)

where CD = ||(I − Y_{k-1} Y_{k-1}^+) y_k||. At most N iterative cycles are needed to obtain the pseudoinverse of a matrix with N columns.
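The column-append recursion of equations (11)-(12) can be sketched as follows; greville_append is an illustrative name, and the exact condition CD ≠ 0 is replaced by a numerical tolerance.

```python
import numpy as np

def greville_append(Y_prev, Y_prev_pinv, y_new, tol=1e-12):
    """Append column y_new to Y_{k-1} and update the pseudoinverse by
    Greville's recursion (equations (11)-(12))."""
    y_new = y_new.reshape(-1, 1)
    d = Y_prev_pinv @ y_new                  # Y_{k-1}^+ y_k
    c = y_new - Y_prev @ d                   # (I - Y_{k-1} Y_{k-1}^+) y_k
    if np.linalg.norm(c) > tol:              # the CD != 0 branch
        b = c / (c.T @ c)
    else:                                    # the CD == 0 branch
        b = (Y_prev_pinv.T @ d) / (1.0 + d.T @ d)
    Y_k = np.hstack([Y_prev, y_new])
    Y_k_pinv = np.vstack([Y_prev_pinv - d @ b.T, b.T])   # equation (11)
    return Y_k, Y_k_pinv

# quick consistency check against a direct pseudoinverse
rng = np.random.default_rng(1)
Y = rng.standard_normal((6, 3))
Y2, Y2_pinv = greville_append(Y, np.linalg.pinv(Y), rng.standard_normal(6))
assert np.allclose(Y2_pinv, np.linalg.pinv(Y2))
```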

With this theorem, adding hidden neurons and calculating the pseudoinverse matrix becomes relatively easy. When a hidden neuron is deleted, the matrix also needs to be updated, and it is not efficient to recompute the pseudoinverse matrix from the beginning. Here we consider using the bordering algorithm [9] to compute the inverse of the matrix. Given the inverse of a k×k matrix, the method shows how to find the inverse of the (k+1)×(k+1) matrix obtained by attaching an additional row and column to its borders. If the column vectors y_i in Y are linearly independent of each other, then by definition

    Y^+ = (Y^T Y)^{-1} Y^T    (13)

Let V = Y^T Y; we can calculate V_{k+1}^{-1} efficiently from the prior V_k^{-1} without inverting a matrix. The update is

    V_{k+1}^{-1} = [ V_k^{-1} + (1/α) v v^T ,  −(1/α) v ;  −(1/α) v^T ,  1/α ]    (14)

where v = V_k^{-1} Y_k^T y_{k+1} and α = y_{k+1}^T y_{k+1} − y_{k+1}^T Y_k v. To delete a vector from the matrix, consider the original matrix containing k+1 vector pairs; the key step is to compute V_k^{-1} from V_{k+1}^{-1}. When the (k+1)-th pair is deleted from the matrix, we rewrite V_{k+1}^{-1} in four partitions:

    V_{k+1}^{-1} = [ A , b ;  b^T , c ]    (15)

where A is k×k, b is k×1, and c is a scalar. By comparison with equation (14), it is apparent that A = V_k^{-1} + (1/α) v v^T, b = −(1/α) v, and c = 1/α. From these expressions, we find that the desired result is

    V_k^{-1} = A − (1/c) b b^T    (16)

The inverse of the k×k matrix can now be calculated from the (k+1)×(k+1) matrix. This is equivalent to deleting the last hidden neuron and updating the weight matrix, which is very useful in the case of leave-one-out cross-validation partitioning of the training samples (CVPS), because each CVPS data set differs from the total sample set by only one sample. We can first compute the inverse of the matrix learned from the total sample set, then at each step move one sample to the last column (row) position and use the above algorithm to delete that sample. In this way the learned weight matrices for the CVPS data sets can be obtained efficiently.
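A corresponding sketch of the downdate in equations (15)-(16): given the inverse of the bordered matrix V_{k+1} = Y^T Y, recover the inverse after the last column (equivalently, the last hidden neuron or sample) is removed. The name bordering_delete_last is illustrative only.

```python
import numpy as np

def bordering_delete_last(V_inv):
    """Given V_{k+1}^{-1} partitioned as [[A, b], [b^T, c]] (equation (15)),
    return V_k^{-1} = A - (1/c) b b^T (equation (16))."""
    A = V_inv[:-1, :-1]
    b = V_inv[:-1, -1:]
    c = V_inv[-1, -1]
    return A - (b @ b.T) / c

# quick check: dropping the last column of Y and inverting directly
rng = np.random.default_rng(2)
Y = rng.standard_normal((8, 4))      # columns assumed linearly independent
V_inv = np.linalg.inv(Y.T @ Y)
assert np.allclose(bordering_delete_last(V_inv),
                   np.linalg.inv(Y[:, :-1].T @ Y[:, :-1]))
```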
4 Numerical Examples
The algorithm is tested with the following function-mapping examples.

Example 1. Consider the nonlinear mapping of the sine function by a neural network. For the training set, 50 input-output pairs (x_i, y_i) were generated with x_i = 2π·i/49 for i = 0, 1, 2, ..., 49, and the corresponding y_i were computed as y_i = sin(x_i). The given learning error is E_0 = 10^{-7}; if the learning error satisfies E < 10^{-7}, we regard perfect learning as reached. For this problem the number of input neurons is n+1 = 2 including the bias neuron, the number of output neurons is m = 1, and the number of hidden-layer neurons is N = 50. After applying the PIL algorithm proposed above, perfect learning is reached once two hidden layers have been added. The trained network has four layers altogether, including the input and output layers, and the actual learning error is far below the threshold.

Example 2. The nonlinear mapping of 8 input quantities x_i into three output quantities y_i, defined by Biegler-König and Bärmann in [10]:

    y_1 = (x_1 x_2 + x_3 x_4 + x_5 x_6 + x_7 x_8)/4
    y_2 = (x_1 + x_2 + x_3 + x_4 + x_5 + x_6 + x_7 + x_8)/8
    y_3 = (1 − y_1)^{0.5}    (17)

All three functions are defined for values between 0 and 1 and produce values in this range. For the training set, 50 sets of input signals x_i were randomly generated in the range 0 to 1, and the corresponding y_i were computed using the above equations. The desired learning error is E_0 = 1.0 × 10^{-7}. When training finishes, only one hidden layer has been added, and the actual learning error is below this threshold.

Example 3. Another functional mapping problem is y = sin(x)cos(3x) + x/3. As in Example 1, we use 50 examples with x_i in the region 0 to π to train the network. Perfect learning is reached after two hidden layers are added, with the actual learning error again below the threshold.

Generalization
What are the generalization abilities of the trained networks? We also tested the ability of the trained networks to forecast function values of examples not belonging to the training set. For the sine function mapping, we train the network using 20 examples with x_i = 2π·i/19 for i = 0, 1, 2, ..., 19 and the corresponding y_i = sin(x_i). After the network is trained, input signals x_i randomly generated within the range 0 to 2π are used to test the network, and the corresponding y_i are computed using the trained network. Figures 1a and 1b show the results for Examples 1 and 3; the agreement is reasonably good.
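For concreteness, the training set of Example 1 could be generated and fed to the pil_train/pil_forward sketch from Section 2.3 as below. The test-set size of 100 points is an arbitrary illustrative choice (the exact figure used in the paper is not recoverable from this copy), and no results are reproduced here.

```python
import numpy as np

# Example 1 data: x_i = 2*pi*i/49, y_i = sin(x_i), for i = 0, ..., 49
i = np.arange(50)
x = 2.0 * np.pi * i / 49
X = np.column_stack([x, np.ones_like(x)])   # n + 1 = 2 columns (input plus bias)
O = np.sin(x).reshape(-1, 1)                # target matrix, m = 1

weights = pil_train(X, O, eps=1e-7)         # sketch from Section 2.3

# test on unseen inputs drawn from the same range
x_test = np.sort(np.random.default_rng(3).uniform(0.0, 2.0 * np.pi, 100))
X_test = np.column_stack([x_test, np.ones_like(x_test)])
y_pred = pil_forward(X_test, weights)
```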

Fig. 1. The trained network output for (a) the y = sin(x) function mapping, (b) the y = sin(x)cos(3x) + x/3 function mapping, (c) the function defined in equation (18) with 20 learning examples, and (d) the function defined in equation (18) with only 5 learning examples. "*" marks training data, "o" marks test data.

We have also tested Examples 2 and 3 by training the network with 20 examples and using randomly generated input signals for testing. To investigate further the response of the proposed network architecture and learning algorithm to unlearned data, let us consider another example.

Example 4. A sin(x)-like nonlinear function is defined by:

    y = x,        if 0 ≤ x < π/2;
    y = π − x,    if π/2 ≤ x < 3π/2;
    y = x − 2π,   if 3π/2 ≤ x ≤ 2π.    (18)

First, 20 examples with x_i = 2π·i/19 for i = 0, 1, 2, ..., 19 and the corresponding y_i computed from the above equation are used to train the network; then random input signals generated in the range 0 to 2π are used to test the trained network. The result is shown in Figure 1c. When only the 5 training points x ∈ {0, π/2, π, 3π/2, 2π}, with targets computed from equation (18), are used to train the network, we obtain a network structure with one hidden layer of 5 hidden neurons, and the learning error is below the threshold. Afterwards, input signals x_i randomly generated within the range 0 to 2π are used to test the network. The result is shown in Figure 1d, from which it can be seen that the network acts like a sine function. It should be noted that the architecture and the weight matrices are the same as in Example 1 and Example 4 when these 5 training points are used. From this result it can be seen that the network's ability to forecast unlearned data is better for smooth functions when the data lie within the range of the training input space. When 50 examples with x_i = 2π·i/49 for i = 0, 1, 2, ..., 49 and the corresponding y_i computed from the respective equation are used to train the network, and randomly generated input signals in the range 0 to 2π are used to test it, then in Examples 1 and 4 only the W^L matrix is different; the other matrices are the same when the 50 examples are used for training, yet the network's responses to the same input matrix are totally different.

The method of stacked generalization [5] provides a way of combining trained networks, using partitioning of the data set, to find an overall system with usually improved generalization performance. The experiments show that, for smooth or piecewise-smooth functions, the generalization performance of the trained networks with stacked generalization is good. However, for noisy data sets, if the network is overtrained, the generalization ability will be poor. Stacked generalization cannot improve network performance when overtrained networks are used: when overfitting to the noise occurs, stacked generalization is not a suitable technique for improving generalization performance. Other generalization techniques such as ensemble networks [11][12] should then be considered to improve performance, but this topic is beyond the scope of this paper.

5 Discussion
On examining the algorithm, it can be seen that we do not need to consider how the weight matrix should be initialized to avoid local minima: we simply feed the examples forward to obtain the weight matrices, and the solution does not converge to a local minimum. This is different from the BP algorithm. It can also be seen that the training procedure is in fact the process of raising the rank of the layer-output matrix. When the output matrix of some hidden layer becomes full rank, the right inverse of the matrix can be obtained, and the training procedure ends. From the learning procedure it is also obvious that no differentiable activation function is needed.
We only require that the activation function performs a nonlinear transform which raises the rank of the matrix. Because the PIL algorithm relies on this nonlinear transformation to raise the matrix rank, it will fail if two or more input vectors in the input matrix are identical; this case, however, can be eliminated by preprocessing the input patterns.

The BP algorithm, like other gradient algorithms, requires user-selected parameters such as the step size or momentum constant. These parameters affect the learning speed, and there is no theoretical basis to guide their selection for faster learning. In PIL, no such problem exists. Another characteristic is that if the input matrix has rank N, then a right inverse exists immediately and we obtain a linear network with only two layers. For most problems, the network reaches perfect learning with two hidden layers. From the examples we see that the number of network layers depends not only on the learning accuracy but also on the examples to be learned. The algorithm is suitable for applications which require high-precision output, in which case the network structure is less important than the output precision. One important feature of the algorithm is that the desired output matrix O is embedded in the weight matrix W^L which connects the last hidden layer and the output layer. This gives a very easy and fast way to obtain the weight matrix for a different target output, as long as the input matrix stays the same. For example, after training the network to learn the sine function mapping in the region from 0 to 2π, we only need to recalculate W^L to obtain the cosine function mapping in the same region. With the BP algorithm, it would be necessary to train the whole network again to obtain all the weight matrices for the cosine mapping, even though the input matrix is the same as for the sine function. We have not compared the overall performance of this algorithm with others; obviously, the number of iterations is not a valid metric, since the computational complexity per iteration differs among the algorithms. However, if we consider the CPU time spent training the network to a high learning accuracy on the same machine, the PIL algorithm is clearly faster than gradient descent algorithms.

6 Summary
The pseudoinverse learning algorithm was introduced in this paper. The algorithm is more effective than the standard BP and other gradient descent algorithms for most problems, and it does not contain any user-dependent parameters whose values are crucial for its success. The mathematical operations are simple: the algorithm is based only on generalized linear algebra and adopts pseudoinverse and matrix inner product operations. Considering its learning speed and accuracy, the PIL algorithm is highly competitive with gradient descent algorithms for real-time or near-real-time practical use. The PIL algorithm also allows us to investigate computation-intensive techniques such as stacked generalization more efficiently.

References:
[1] D. E. Rumelhart, G. E. Hinton, and R. J. Williams, "Learning Internal Representations by Error Propagation", in Parallel Distributed Processing, Vol. 1, D. E. Rumelhart and J. L. McClelland, Eds., MIT Press, Cambridge, MA, Chapter 8, 1986.
[2] E. Barnard, "Optimization for training neural nets", IEEE Transactions on Neural Networks, Vol. 3, No. 2, pp. 232-240, 1992.
[3] L. F. A. Wessels and E. Barnard, "Avoiding false local minima by proper initialization of connections", IEEE Transactions on Neural Networks, Vol. 3, pp. 899-905, 1992.
[4] S. Kollias and D. Anastassiou, "An adaptive least squares algorithm for the efficient training of artificial neural networks", IEEE Transactions on Circuits and Systems, Vol. CAS-36, pp. 1092-1101, 1989.
[5] D. H. Wolpert, "Stacked Generalization", Neural Networks, Vol. 5, pp. 241-259, 1992.
[6] T. L. Boullion and P. L. Odell, Generalized Inverse Matrices, John Wiley and Sons, New York, 1971.
[7] S. Tamura, "Capabilities of a Three Layer Feedforward Neural Network", Proceedings of the International Joint Conference on Neural Networks, Seattle, 1991.
[8] C. R. Rao and S. K. Mitra, Generalized Inverse of Matrices and Its Applications, Wiley, New York, 1971.
[9] J. F. Claerbout, Fundamentals of Geophysical Data Processing with Applications to Petroleum Prospecting, McGraw-Hill, 1976.
[10] F. Biegler-König and F. Bärmann, "A Learning Algorithm for Multilayered Neural Networks Based on Linear Least Squares Problems", Neural Networks, Vol. 6, pp. 127-131, 1993.
[11] M. P. Perrone and L. N. Cooper, "When networks disagree: ensemble methods for hybrid neural networks", in R. J. Mammone, Ed., Artificial Neural Networks for Speech and Vision, Chapman and Hall, London, pp. 126-142, 1993.
[12] P. Guo, "Averaging ensemble neural networks in parameter space", in Proceedings of the Fifth International Conference on Neural Information Processing, Japan, 1998.
