A Neuro-Fuzzy System on System Modeling and Its Application on Character Recognition


A Neuro-Fuzzy System on System Modeling and Its Application on Character Recognition

C. J. Chen 1, S. M. Yang 2, Z. C. Wang 3
1 Department of Aviation Service Management, Aletheia University, Taiwan, ROC
2,3 Department of Aeronautics and Astronautics, National Cheng Kung University, Taiwan, ROC

Keywords: system identification, neuro-fuzzy system, Takagi-Sugeno fuzzy system, Mamdani fuzzy system.

Abstract

It is well known that fuzzy systems provide a framework to handle uncertainty and vagueness; however, both the Sugeno and the Mamdani fuzzy rules often face difficulties in deciding the number of inference rules and input/output membership functions. A five-layer neuro-fuzzy model is developed for applications in system identification of engineering systems. Simulation and analysis show that both the Sugeno and the Mamdani neuro-fuzzy models perform well in system identification. A benchmark test is applied to validate model accuracy in nonlinear system modeling, and the former is superior in identification accuracy. The Sugeno fuzzy rules in the neuro-fuzzy system are therefore applied to a pattern recognition experiment, with good results.

2. Introduction

There have been significant developments in the fields of neural networks and fuzzy systems. A neural network is a data-processing system that mimics the biological neural network by using numerous artificial neurons to acquire, optimize, and store experimental data. The challenges are that the mapping rules are invisible and have little physical meaning, and that training is often time-consuming. A fuzzy system is distinguished by its ability to handle numerical data and imprecise linguistic information. Its performance strongly depends on the selection of input/output membership functions and fuzzy logic rules. The rules and membership functions are determined by expert knowledge or experience; however, such decision making may not be easily transferable. The integration of neural networks and fuzzy systems, which combines their advantages and overcomes their shortcomings, is therefore preferable for uncertain and complicated systems. Lin and Cunningham [1] presented a fuzzy-neural approach to system modeling. Azam and VanLandingham [2] developed an adaptive self-organizing

neuro-fuzzy technique for system identification. Palit [3] developed a training algorithm for Takagi-Sugeno type neuro-fuzzy networks. Efe and Kaynak [4] proposed a neuro-fuzzy approach for identification and control of nonlinear systems. Kim and Vachtsevanos [5] presented a polynomial fuzzy neural network for identification and control. Fuzzy rule-based system identification has also been reported by Chakraborty and Pal [6]. More recently, Gorrostieta and Pedraza [7] developed neuro-fuzzy modeling of control systems. Zaheeruddin and Garima [8] proposed a neuro-fuzzy approach for the prediction of work efficiency. Cheng [9] applied a hybrid learning-based neuro-fuzzy inference system to system modeling. Banakar and Azeem [10] presented identification and prediction of nonlinear dynamical plants using TSK and wavelet neuro-fuzzy models. Yang et al. [11] proposed a self-organized neuro-fuzzy model for system identification. References [3], [4], [7], [8], [9], [10], and [13] are concerned with the Sugeno fuzzy rules in the neuro-fuzzy system, while references [5], [6], [11], and [12] are associated with the Mamdani fuzzy rules. However, the studies above address system identification by the neuro-fuzzy approach using either the Sugeno or the Mamdani fuzzy rules individually. In this paper, a performance comparison between the Sugeno and the Mamdani fuzzy rules in the neuro-fuzzy system is conducted.

3. Neuro-fuzzy Model

An artificial neural network can learn complex functional relations by generalizing from limited training data, and thus can serve as a black-box model of nonlinear dynamic systems. It consists of several layers of simple processing elements, called neurons, interconnected by weights. The fuzzy system is a framework based on the concept of if-then rules to cope with uncertain and ambiguous problems. Two of the most commonly used fuzzy inference systems are the Mamdani fuzzy model [12] and the Sugeno fuzzy model [13]. The former describes a

system by using natural language, which makes it more intuitive and easy to realize, while the latter specifies a system by mathematical relations, which makes it preferable for optimization and adaptation.

3.1 Mamdani fuzzy rules

A five-layer feed-forward neuro-fuzzy system with the fuzzy inference of the Mamdani fuzzy model is developed as shown in Fig. 1, where the first layer and the last layer are called the input and output layer, respectively. The layers between the input and output layers are the hidden layers. By determining the fuzzy logic rules and optimizing the membership functions through the connective weights, a valid neuro-fuzzy system is established. Layer 1 defines the input nodes and layer 5 the output nodes. Layers 2 and 4 are the term nodes whose membership functions express the linguistic terms. Layer 3 defines the nodes representing the fuzzy rules. A series-parallel identification model [14] for a nonlinear system can be written as

    ŷ(k+1) = f(y(k), y(k-1), ..., y(k-n+1); u(k), u(k-1), ..., u(k-m+1))   (1)

where ŷ(k+1) is the estimated output of the neuro-fuzzy model at time step k+1, [u(k), y(k)] represents the input-output pair of the plant at time k, and n and m are the maximum lags in the output and input, respectively. Equation (1) indicates that ŷ(k+1) is a function of the past n values of the plant output, y(k-i), i = 0, 1, ..., n-1, and the past m values of the input, u(k-j), j = 0, 1, ..., m-1. Each node in the first layer is an input node corresponding to one input variable, and there is no computation in this layer. Each node directly transmits signals to the next layer, O_i1 = x_i, where O_i1 is the output value of the i-th node in layer 1 and x_i is the i-th input variable. Fuzzification is done in the second layer, with each node corresponding to one

linguistic term of the input variables, via

    O_j2 = exp( -(O_i1 - m_j2)^2 / σ_j2^2 )

to calculate the membership value of the fuzzy sets, where m_j2 and σ_j2 are the center (mean) and width (variance) of the Gaussian membership function of the j-th node in layer 2, respectively. Each node in layer 3 represents a fuzzy rule of the form

    R_i: If x_1 is A_i1 and x_2 is A_i2 ... and x_m is A_im, then y is B_i   (2)

where R_i denotes the i-th fuzzy rule, x_j, j = 1, 2, ..., m, is the linguistic input variable, y is the linguistic output variable of the fuzzy rule R_i, and A_i1, A_i2, ..., A_im and B_i are the values of the membership functions. The weights of the links are set to unity, and the output of node j is determined by the fuzzy AND operation, O_j3 = min_{i in I_j}(O_i2), where I_j is the set of indices of the nodes in layer 2 connected to node j in layer 3. Layer 4 represents the consequent part of the fuzzy rules and performs the fuzzy OR operation on each node, O_i4 = max_{j in I_i}(O_j3 w_ji), where I_i is the set of indices of the nodes in layer 3 connected to node i in layer 4. The weight w_ji expresses the association between the j-th rule and the i-th output linguistic variable. The nodes of layer 3 and layer 4 are fully connected. Note that the strengths of the IF-part and THEN-part rules, represented by node j in layer 3 and node i in layer 4, are invariably positive. The last layer computes the output value of the neuro-fuzzy model via center-of-area defuzzification,

    O_i5 = Σ_{j in I_i} m_j4 σ_j4 O_j4 / Σ_{j in I_i} σ_j4 O_j4   (3)

where I_i is the set of indices of the nodes in layer 4 connected to node i in layer 5, and m_j4 and σ_j4 are the center (mean) and width (variance) of the membership function of the j-th node in layer 4, respectively. The weights of the links between layers 4 and 5 are set to unity.
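The five-layer Mamdani forward pass described above (fuzzification, min rule firing, max aggregation, center-of-area defuzzification) can be sketched as follows. This is a minimal illustration under assumed data layouts; the function and parameter names are illustrative, not the authors' implementation.

```python
import numpy as np

def gaussian(x, m, s):
    """Layer-2 membership value for input x, center m, width s."""
    return np.exp(-((x - m) ** 2) / (s ** 2))

def mamdani_forward(x, centers_in, widths_in, rules, w, centers_out, widths_out):
    """One forward pass through the five-layer Mamdani neuro-fuzzy net.

    x          : input vector, shape (n_inputs,)
    centers_in, widths_in : Gaussian parameters, shape (n_inputs, n_terms)
    rules      : list of term-index tuples, one tuple per layer-3 rule node
    w          : rule-to-output-term weights w_ji, shape (n_rules, n_out_terms)
    centers_out, widths_out : layer-4/5 membership parameters, shape (n_out_terms,)
    """
    # Layer 2: fuzzify each input against each of its linguistic terms
    mu = gaussian(x[:, None], centers_in, widths_in)          # (n_inputs, n_terms)
    # Layer 3: firing strength of each rule via fuzzy AND (min)
    fire = np.array([min(mu[i, t] for i, t in enumerate(r)) for r in rules])
    # Layer 4: fuzzy OR (max) of weighted firing strengths per output term
    o4 = np.max(fire[:, None] * w, axis=0)                    # (n_out_terms,)
    # Layer 5: center-of-area defuzzification, Eq. (3)
    return np.sum(centers_out * widths_out * o4) / np.sum(widths_out * o4)
```

With unit widths and a two-rule table, the output is simply the width-weighted centroid of the aggregated output terms, matching Eq. (3).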

The design of the neuro-fuzzy system follows a three-phase learning process: locate the initial membership functions in phase 1, find the fuzzy rules in phase 2, and tune the membership functions of the input/output variables in phase 3. In phase 1, the center and the width of each initial membership function are determined by the feature-map algorithm [15],

    ||x(k) - m_c(k)|| = min_i { ||x(k) - m_i(k)|| }   (4)
    m_c(k+1) = m_c(k) + α (x(k) - m_c(k))   (5)
    m_i(k+1) = m_i(k)   for m_i ≠ m_c   (6)

where x(k) and m_i(k) are the input data and the center of the membership function, respectively. The subscript c indicates the closest center and α is a decreasing scalar learning rate. This adaptive formula runs independently for each input and output linguistic variable. Once m_i(k) is determined, the width σ_i(k) can be found by the first-nearest-neighbor heuristic,

    σ_i = ||m_i - m_c|| / r   (7)

where r is the overlap parameter. After the membership functions σ and m have been calculated, the backpropagation learning algorithm is applied to find the fuzzy rules in phase 2. The output of layer 2 is transmitted to layer 3 to find the firing strength of each rule node. Based on the firing strength and the node output in layer 4, the correct consequence link of each node can be determined by the error backpropagation learning method, with the aim of minimizing the error function E = (d(k) - y(k))^2 / 2, where d(k) is the desired output and y(k) is the current output. The weights of the links from layer 3 to layer 4 are tuned via the update rule

    w_j(k+1) = w_j(k) + Δw_j(k)   (8)

where Δw_j(k) = η (d(k) - y(k)) O_j3 w_j σ_4 [ m_4 Σ(σ_4 O_4) - Σ(m_4 σ_4 O_4) ] / (Σ σ_4 O_4)^2 if j = ĵ_r, and

Δw_j(k) = 0 otherwise, where ĵ_r = arg max_j (O_j3 w_j) and η is the learning rate. By adjusting the weights, the correct consequent link of each rule node is determined, and for every antecedent clause the centroid of all possible consequents is calculated. Only the dominant rule, whose consequent has the highest membership value, is selected. After the fuzzy rules have been deduced, supervised learning is applied to tune the membership functions optimally in phase 3. Initially, the links between the rule nodes in layer 3 and the consequent nodes in layer 4 are fully connected, because the consequents of the rule nodes are not yet decided. Starting at the output node, a backward pass computes the gradient of the error function for all hidden nodes. In layer 5, the center and the width of each Gaussian membership function are the adjustable parameters. The error propagated to the preceding layer is

    δ_5(k) = d(k) - y(k)   (9)

Using Equation (3) and the gradient with respect to the center m_4, the center parameter is updated via

    m_4(k+1) = m_4(k) + η (d(k) - y(k)) σ_4 O_4 / Σ(σ_4 O_4)   (10)

Similarly, the width parameter is updated via

    σ_4(k+1) = σ_4(k) + η (d(k) - y(k)) O_4 [ m_4 Σ(σ_4 O_4) - Σ(m_4 σ_4 O_4) ] / (Σ σ_4 O_4)^2   (11)

The error signal in layer 4 is derived as

    δ_4(k) = (d(k) - y(k)) σ_4 [ m_4 Σ(σ_4 O_4) - Σ(m_4 σ_4 O_4) ] / (Σ σ_4 O_4)^2   (12)

By the same token, only the error signal δ_3 is needed in layer 3, and it is identical to δ_4. In layer 2, the center and width parameters are updated by

    m_k2(k+1) = m_k2(k) + η δ_3 q_k O_k2 · 2(O_i1 - m_k2) / σ_k2^2   (13)

    σ_k2(k+1) = σ_k2(k) + η δ_3 q_k O_k2 · 2(O_i1 - m_k2)^2 / σ_k2^3   (14)

where q_k = 1 when O_k2 = min(inputs of the k-th rule node) and q_k = 0 otherwise. The above learning algorithm highlights the computation procedures in the neuro-fuzzy model.

3.2 Sugeno fuzzy rules

The structure of a neuro-fuzzy system using the Sugeno fuzzy model is similar to Fig. 1 except for the fuzzy rules and the defuzzification, as shown in Fig. 2. The rules of the Sugeno fuzzy model are described as

    R_u: If x_1 is A_u1 and x_2 is A_u2 ... and x_p is A_up, then y_u = c_u0 + c_u1 x_1 + ... + c_up x_p   (15)

where R_u signifies the u-th fuzzy rule; x_v, v = 1, 2, ..., p, are the p inputs to the system; y_u is the output consequent of the fuzzy rule R_u; A_u1, A_u2, ..., A_up are the membership functions; and c_u0, c_u1, ..., c_up are real parameters. In layer 3, the firing strengths are normalized by the sum of the individual rules' contributions and then transmitted to the next layer, and the layer output is

    O_v3 = w̄_u2 = w_u2 / Σ_u w_u2   (16)

Each node in layer 4 is a square node with the node function O_v4 = O_v3 y_u, and the output value is the summation of all incoming signals as a weighted average,

    O_5 = Σ_u w_u y_u / Σ_u w_u   (17)

Because of the weighted average in calculating the output of the Sugeno neuro-fuzzy model, no membership function tuning is required in this layer. The neuro-fuzzy system is trained by a two-phase learning process. In the first phase, the antecedent parameters are fixed, and the consequent parameters of the fuzzy rules are updated by

using a gradient descent algorithm similar to phase 2 in the Mamdani neuro-fuzzy model. Initially, the input data are fed into the system and the firing strengths of the fuzzy rules are computed by the Gaussian function. The weight vector of each firing rule, c_u = [c_u0, c_u1, ..., c_up]^T, is updated according to

    c_u(k+1) = c_u(k) + g_u0(k) α_u [d(k) - y(k)] [1, x(k)]^T   (18)

where g_u0(k) is the decreasing rate, 0 ≤ g_u0(k) < 1, and α_u is the firing strength of rule u. The antecedent parameters are tuned by wn = min_j { wn_j }, where

    wn_j = ( min{sr_mj, sr_nj} - max{sl_mj, sl_nj} ) / ( c_mj - c_nj )   (19)

and c_mj and c_nj are the centers of the winner rule and the first runner-up, respectively. Similarly, sr_mj and sl_mj are the right and left spreads of the winner fuzzy rule, and sr_nj and sl_nj are the right and left spreads of the runner-up rule. wn_j is the relative width of the window, and the dimension wn has the smallest relative width. The spread s_rv is then updated via

    s_rv(k+1) = s_rv(k) + η(k) ( c_rv(k) - s_rv(k) ),   if sgn(y_u - y_r) = sgn(y_u - y_l)   (20)
    s_rv(k+1) = s_rv(k) - η(k) ( c_rv(k) - s_rv(k) ),   otherwise   (21)

where c_rv is the center of the winner rule, η is the learning rate, and y_r and y_l are the outputs computed independently for each rule. The centers of the fuzzy sets are updated only when a normal fuzzy rule fires. The center c_r is moved towards the input sample x(k) according to

    c_r(k+1) = c_r(k) + η(k) α_r(k) [x(k) - c_r(k)]   (22)

The five-layer networks with the Mamdani and the Sugeno fuzzy models are then compared in terms of their system identification performance.
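The Sugeno inference of Eqs. (15)-(17) and the consequent update of Eq. (18) can be sketched as below. This is a minimal illustration with hypothetical names; the fuzzy AND is taken as min over Gaussian memberships, and the bracketed term in Eq. (18) is read as the output error (desired minus model output), both assumptions on the recovered text.

```python
import numpy as np

def sugeno_forward(x, centers, widths, C):
    """Weighted-average Sugeno output, Eqs. (15)-(17).

    x : inputs, shape (p,)
    centers, widths : Gaussian antecedent parameters, shape (n_rules, p)
    C : consequent parameters [c_u0, c_u1, ..., c_up], shape (n_rules, p+1)
    """
    mu = np.exp(-((x - centers) ** 2) / widths ** 2)  # per-input memberships
    alpha = mu.min(axis=1)                            # rule firing strengths (AND)
    wn = alpha / alpha.sum()                          # normalization, Eq. (16)
    y_u = C[:, 0] + C[:, 1:] @ x                      # linear consequents, Eq. (15)
    return float(np.sum(wn * y_u)), alpha             # weighted average, Eq. (17)

def update_consequents(C, x, alpha, d, y_hat, g):
    """One step of Eq. (18): c_u <- c_u + g * alpha_u * (d - y_hat) * [1, x]^T."""
    phi = np.concatenate(([1.0], x))                  # regressor [1, x_1, ..., x_p]
    return C + g * alpha[:, None] * (d - y_hat) * phi
```

A positive output error moves every firing rule's consequent vector along the regressor [1, x], scaled by that rule's firing strength, which is the least-mean-squares flavor of update the two-phase learning describes.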

4. Modeling and Application

It has been shown that the desired accuracy of a nonlinear mapping can be obtained by neuro-fuzzy systems. A second-order, highly nonlinear difference equation is employed to illustrate the neuro-fuzzy model applications,

    y(k) = y(k-1) y(k-2) (y(k-1) + 2.5) / (1 + y^2(k-1) + y^2(k-2)) + u(k)   (23)

The neuro-fuzzy model has three inputs, u(k), y(k-1), y(k-2), and one output, y(k). The first two inputs are partitioned into seven linguistic spaces {NL, NM, NS, ZE, PS, PM, PL}, indicating negative large, negative medium, negative small, zero, positive small, positive medium, and positive large, respectively, and the input y(k-2) is partitioned into five linguistic spaces {NL, NS, ZE, PS, PL}. The overlap parameter in Eq. (7) is set at r = 1.5, and the decreasing rate g_u0(k) in Eq. (18) is set at 0.9, with a suitable initial learning rate η(k). Figures 3 and 4 show the resultant membership functions of the inputs u(k), y(k-1), and y(k-2) of the neuro-fuzzy model with the Sugeno and the Mamdani fuzzy rules, respectively. The membership functions of the Sugeno fuzzy rules are adjusted by the weighted average, and the defuzzification in layer 5 is computed from linear relations. Compared to the Mamdani neuro-fuzzy model, the membership functions of the Sugeno neuro-fuzzy model are not as complicated in calculation. The performance of the two neuro-fuzzy models is tested on 200 data points acquired from the sinusoid input signal u(k) = sin(2πk/25). The outputs of both models are shown in Figs. 5(a) and 5(b). Both simulations take 300 epochs, and it can be seen that both models perform well in identification. However, the output of the Sugeno neuro-fuzzy model is closer to the benchmark and has a smaller discrepancy. The difference between the output of the plant and the outputs of the neuro-fuzzy models is shown in Fig. 6. It can be seen clearly that the discrepancy of the Mamdani neuro-fuzzy model is higher than that of the Sugeno

neuro-fuzzy model, and the latter is adopted as the tool for the following experiment. As for the application, pattern recognition is considered. Figure 7(a), with 800*600 pixels, is viewed as the sample for the experiment; it is cut into 207 elements and stored in a matrix as the desired output, where black and white points represent +1 and 0, respectively. Some random noise points are added to the matrix as the training data. After training for 300 epochs, the simulation results are shown in Figs. 7(b) and (c), where the former illustrates the corresponding cut elements of the desired (red line) and modeled (blue line) outputs, and the latter is the image after simulation. By the same token, a license plate cut into 539 elements is taken as another example, and the sample and simulations are shown in Figs. 8(a), (b), and (c). It can be clearly seen that the model works well and correctly in pattern recognition.

5. Conclusion

A performance comparison between the Sugeno and the Mamdani fuzzy rules in a five-layer neuro-fuzzy model on the same case is presented. Both models have the ability to identify the fuzzy rules and tune the membership functions automatically for system identification. The benchmark test on a nonlinear system with three inputs and one output reveals that the membership functions can be adjusted sensibly and optimally. Instead of requiring expert experience and knowledge, the five-layer neuro-fuzzy networks with the Sugeno and the Mamdani fuzzy models can be constructed from the input-output training data alone. Figure 6 shows the errors of the two models: the error of the Sugeno neuro-fuzzy model is almost zero, while the error of the Mamdani neuro-fuzzy model exhibits oscillation. As seen in Figs. 5(a) and (b), both models have good performance in system identification. However, the error of the Sugeno neuro-fuzzy model is smaller than that of the Mamdani neuro-fuzzy model, and the former is closer to the

benchmark. From Fig. 7(c) and Fig. 8(c), on the other hand, the Sugeno neuro-fuzzy model recognizes the images successfully.

References

[1] Lin, Y. and Cunningham, G. A., "A New Approach to Fuzzy-Neural System Modeling," IEEE Trans. on Fuzzy Systems, Vol. 3, No. 2.
[2] Azam, F. and VanLandingham, H. F., "Adaptive Self Organizing Feature Map Neuro-Fuzzy Technique for Dynamic System Identification," IEEE Int. Sym. on Intelligent Control - Proceedings.
[3] Palit, A. K., "Efficient Training Algorithm for Takagi-Sugeno Type Neuro-Fuzzy Network," IEEE Int. Conf. on Fuzzy Systems, Vol. 3.
[4] Efe, M. O. and Kaynak, O., "Neuro-Fuzzy Approach for Identification and Control of Nonlinear Systems," IEEE Int. Sym. on Industrial Electronics, Vol. 1, pp. TU2-TU11.
[5] Kim, S. and Vachtsevanos, G. A., "A Polynomial Fuzzy Neural Network for Identification and Control," Ann. Conf. on New Frontiers in Fuzzy Logic and Soft Computing, pp. 5-9.
[6] Chakraborty, D. and Pal, N. R., "Integrated Feature Analysis and Fuzzy Rule-Based System Identification in a Neuro-Fuzzy Paradigm," IEEE Trans. on Systems, Man, and Cybernetics, Vol. 31, No. 3.
[7] Gorrostieta, E. and Pedraza, C., "Neuro Fuzzy Modeling of Control Systems," IEEE Conf. on Electronics, Communications and Computers, p. 23.
[8] Zaheeruddin and Garima, "A Neuro-Fuzzy Approach for Prediction of Human Work Efficiency in Noisy Environment," Applied Soft Computing Journal, Vol. 6, No. 3.
[9] Cheng, K. H., "Hybrid Learning-Based Neuro-Fuzzy Inference System: A New Approach for System Modeling," Int. J. Systems Science, Vol. 39, No. 6.
[10] Banakar, A. and Azeem, M. F., "Identification and Prediction of Nonlinear Dynamical Plants Using TSK and Wavelet Neuro-Fuzzy Models," IEEE Conf.

on Intelligent Systems.
[11] Yang, S. M., Chen, C. J., Chang, Y. Y., and Tung, Y. Z., "Development of a Self-Organized Neuro-Fuzzy Model for System Identification," ASME Trans. J. of Vibration and Acoustics, Vol. 129, No. 4.
[12] Mamdani, E. H. and Assilian, S., "An Experiment in Linguistic Synthesis with a Fuzzy Logic Controller," Int. J. of Man-Machine Studies, Vol. 7, No. 1, pp. 1-13.
[13] Takagi, T. and Sugeno, M., "Fuzzy Identification of Systems and Its Applications to Modeling and Control," IEEE Trans. on Systems, Man and Cybernetics, Vol. SMC-15, No. 1.
[14] Narendra, K. S. and Parthasarathy, K., "Identification and Control of Dynamic Systems Using Neural Networks," IEEE Trans. on Neural Networks, Vol. 1, pp. 4-27.
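The comparison in Section 4 rests on the benchmark plant of Eq. (23), which a short recursion reproduces. This sketch assumes the standard Narendra-Parthasarathy [14] denominator 1 + y^2(k-1) + y^2(k-2) and zero initial conditions, and drives the plant with the sinusoid input u(k) = sin(2πk/25) used in the paper.

```python
import math

def benchmark_series(n_steps):
    """Simulate the plant of Eq. (23),
    y(k) = y(k-1) y(k-2) (y(k-1) + 2.5) / (1 + y(k-1)^2 + y(k-2)^2) + u(k),
    driven by u(k) = sin(2*pi*k/25); y(0) = y(1) = 0 are assumed."""
    y = [0.0, 0.0]
    for k in range(2, n_steps):
        u = math.sin(2 * math.pi * k / 25)
        y.append(y[k - 1] * y[k - 2] * (y[k - 1] + 2.5)
                 / (1 + y[k - 1] ** 2 + y[k - 2] ** 2) + u)
    return y
```

Collecting 200 such points gives the identification data set described in Section 4, with each training pair built from the regressor (u(k), y(k-1), y(k-2)) and the target y(k) of Eq. (1).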

Fig. 1 The structure of the neuro-fuzzy model with the Mamdani fuzzy rules (Layer 1: input nodes; Layer 2: input term nodes; Layer 3: rule nodes; Layer 4: output term nodes; Layer 5: output node).

Fig. 2 The structure of the neuro-fuzzy model with the Sugeno fuzzy rules (Layer 1: input nodes; Layer 2: input term nodes; Layer 3: rule nodes; Layer 4: output term nodes; Layer 5: output node).

Fig. 3 The membership functions of the three inputs u(k), y(k-1), and y(k-2) of the Sugeno neuro-fuzzy model.

Fig. 4 The membership functions of the three inputs u(k), y(k-1), and y(k-2) of the Mamdani neuro-fuzzy model.

Fig. 5 Modeling of the plant and the neuro-fuzzy system with a sinusoid input signal: (a) Sugeno fuzzy rules and (b) Mamdani fuzzy rules.

Fig. 6 The difference between the output of the plant and the neuro-fuzzy model with the Sugeno and the Mamdani fuzzy rules.

Fig. 7 (a) The image of the door plate, (b) the result of simulation, and (c) the image after simulation.

Fig. 8 (a) The image of the license plate, (b) the result of simulation, and (c) the image after simulation.



Internet Engineering. Jacek Mazurkiewicz, PhD Softcomputing. Part 3: Recurrent Artificial Neural Networks Self-Organising Artificial Neural Networks Internet Engneerng Jacek Mazurkewcz, PhD Softcomputng Part 3: Recurrent Artfcal Neural Networks Self-Organsng Artfcal Neural Networks Recurrent Artfcal Neural Networks Feedback sgnals between neurons Dynamc

More information

Using Immune Genetic Algorithm to Optimize BP Neural Network and Its Application Peng-fei LIU1,Qun-tai SHEN1 and Jun ZHI2,*

Using Immune Genetic Algorithm to Optimize BP Neural Network and Its Application Peng-fei LIU1,Qun-tai SHEN1 and Jun ZHI2,* Advances n Computer Scence Research (ACRS), volume 54 Internatonal Conference on Computer Networks and Communcaton Technology (CNCT206) Usng Immune Genetc Algorthm to Optmze BP Neural Network and Its Applcaton

More information

Linear Approximation with Regularization and Moving Least Squares

Linear Approximation with Regularization and Moving Least Squares Lnear Approxmaton wth Regularzaton and Movng Least Squares Igor Grešovn May 007 Revson 4.6 (Revson : March 004). 5 4 3 0.5 3 3.5 4 Contents: Lnear Fttng...4. Weghted Least Squares n Functon Approxmaton...

More information

Comparison of the Population Variance Estimators. of 2-Parameter Exponential Distribution Based on. Multiple Criteria Decision Making Method

Comparison of the Population Variance Estimators. of 2-Parameter Exponential Distribution Based on. Multiple Criteria Decision Making Method Appled Mathematcal Scences, Vol. 7, 0, no. 47, 07-0 HIARI Ltd, www.m-hkar.com Comparson of the Populaton Varance Estmators of -Parameter Exponental Dstrbuton Based on Multple Crtera Decson Makng Method

More information

Appendix B: Resampling Algorithms

Appendix B: Resampling Algorithms 407 Appendx B: Resamplng Algorthms A common problem of all partcle flters s the degeneracy of weghts, whch conssts of the unbounded ncrease of the varance of the mportance weghts ω [ ] of the partcles

More information

Journal of Chemical and Pharmaceutical Research, 2014, 6(5): Research Article

Journal of Chemical and Pharmaceutical Research, 2014, 6(5): Research Article Avalable onlne www.jocpr.com Journal of Chemcal and Pharmaceutcal Research, 014, 6(5):1683-1688 Research Artcle ISSN : 0975-7384 CODEN(USA) : JCPRC5 Multple mode control based on VAV ar condtonng system

More information

Report on Image warping

Report on Image warping Report on Image warpng Xuan Ne, Dec. 20, 2004 Ths document summarzed the algorthms of our mage warpng soluton for further study, and there s a detaled descrpton about the mplementaton of these algorthms.

More information

Atmospheric Environmental Quality Assessment RBF Model Based on the MATLAB

Atmospheric Environmental Quality Assessment RBF Model Based on the MATLAB Journal of Envronmental Protecton, 01, 3, 689-693 http://dxdoorg/10436/jep0137081 Publshed Onlne July 01 (http://wwwscrporg/journal/jep) 689 Atmospherc Envronmental Qualty Assessment RBF Model Based on

More information

Suppose that there s a measured wndow of data fff k () ; :::; ff k g of a sze w, measured dscretely wth varable dscretzaton step. It s convenent to pl

Suppose that there s a measured wndow of data fff k () ; :::; ff k g of a sze w, measured dscretely wth varable dscretzaton step. It s convenent to pl RECURSIVE SPLINE INTERPOLATION METHOD FOR REAL TIME ENGINE CONTROL APPLICATIONS A. Stotsky Volvo Car Corporaton Engne Desgn and Development Dept. 97542, HA1N, SE- 405 31 Gothenburg Sweden. Emal: astotsky@volvocars.com

More information

COMPARISON OF SOME RELIABILITY CHARACTERISTICS BETWEEN REDUNDANT SYSTEMS REQUIRING SUPPORTING UNITS FOR THEIR OPERATIONS

COMPARISON OF SOME RELIABILITY CHARACTERISTICS BETWEEN REDUNDANT SYSTEMS REQUIRING SUPPORTING UNITS FOR THEIR OPERATIONS Avalable onlne at http://sck.org J. Math. Comput. Sc. 3 (3), No., 6-3 ISSN: 97-537 COMPARISON OF SOME RELIABILITY CHARACTERISTICS BETWEEN REDUNDANT SYSTEMS REQUIRING SUPPORTING UNITS FOR THEIR OPERATIONS

More information

Feature Selection & Dynamic Tracking F&P Textbook New: Ch 11, Old: Ch 17 Guido Gerig CS 6320, Spring 2013

Feature Selection & Dynamic Tracking F&P Textbook New: Ch 11, Old: Ch 17 Guido Gerig CS 6320, Spring 2013 Feature Selecton & Dynamc Trackng F&P Textbook New: Ch 11, Old: Ch 17 Gudo Gerg CS 6320, Sprng 2013 Credts: Materal Greg Welch & Gary Bshop, UNC Chapel Hll, some sldes modfed from J.M. Frahm/ M. Pollefeys,

More information

An Improved multiple fractal algorithm

An Improved multiple fractal algorithm Advanced Scence and Technology Letters Vol.31 (MulGraB 213), pp.184-188 http://dx.do.org/1.1427/astl.213.31.41 An Improved multple fractal algorthm Yun Ln, Xaochu Xu, Jnfeng Pang College of Informaton

More information

CONTRAST ENHANCEMENT FOR MIMIMUM MEAN BRIGHTNESS ERROR FROM HISTOGRAM PARTITIONING INTRODUCTION

CONTRAST ENHANCEMENT FOR MIMIMUM MEAN BRIGHTNESS ERROR FROM HISTOGRAM PARTITIONING INTRODUCTION CONTRAST ENHANCEMENT FOR MIMIMUM MEAN BRIGHTNESS ERROR FROM HISTOGRAM PARTITIONING N. Phanthuna 1,2, F. Cheevasuvt 2 and S. Chtwong 2 1 Department of Electrcal Engneerng, Faculty of Engneerng Rajamangala

More information

FUZZY FINITE ELEMENT METHOD

FUZZY FINITE ELEMENT METHOD FUZZY FINITE ELEMENT METHOD RELIABILITY TRUCTURE ANALYI UING PROBABILITY 3.. Maxmum Normal tress Internal force s the shear force, V has a magntude equal to the load P and bendng moment, M. Bendng moments

More information

Lab 2e Thermal System Response and Effective Heat Transfer Coefficient

Lab 2e Thermal System Response and Effective Heat Transfer Coefficient 58:080 Expermental Engneerng 1 OBJECTIVE Lab 2e Thermal System Response and Effectve Heat Transfer Coeffcent Warnng: though the experment has educatonal objectves (to learn about bolng heat transfer, etc.),

More information

MULTISPECTRAL IMAGE CLASSIFICATION USING BACK-PROPAGATION NEURAL NETWORK IN PCA DOMAIN

MULTISPECTRAL IMAGE CLASSIFICATION USING BACK-PROPAGATION NEURAL NETWORK IN PCA DOMAIN MULTISPECTRAL IMAGE CLASSIFICATION USING BACK-PROPAGATION NEURAL NETWORK IN PCA DOMAIN S. Chtwong, S. Wtthayapradt, S. Intajag, and F. Cheevasuvt Faculty of Engneerng, Kng Mongkut s Insttute of Technology

More information

Scroll Generation with Inductorless Chua s Circuit and Wien Bridge Oscillator

Scroll Generation with Inductorless Chua s Circuit and Wien Bridge Oscillator Latest Trends on Crcuts, Systems and Sgnals Scroll Generaton wth Inductorless Chua s Crcut and Wen Brdge Oscllator Watcharn Jantanate, Peter A. Chayasena, and Sarawut Sutorn * Abstract An nductorless Chua

More information

FUZZY GOAL PROGRAMMING VS ORDINARY FUZZY PROGRAMMING APPROACH FOR MULTI OBJECTIVE PROGRAMMING PROBLEM

FUZZY GOAL PROGRAMMING VS ORDINARY FUZZY PROGRAMMING APPROACH FOR MULTI OBJECTIVE PROGRAMMING PROBLEM Internatonal Conference on Ceramcs, Bkaner, Inda Internatonal Journal of Modern Physcs: Conference Seres Vol. 22 (2013) 757 761 World Scentfc Publshng Company DOI: 10.1142/S2010194513010982 FUZZY GOAL

More information

Chapter - 2. Distribution System Power Flow Analysis

Chapter - 2. Distribution System Power Flow Analysis Chapter - 2 Dstrbuton System Power Flow Analyss CHAPTER - 2 Radal Dstrbuton System Load Flow 2.1 Introducton Load flow s an mportant tool [66] for analyzng electrcal power system network performance. Load

More information

2E Pattern Recognition Solutions to Introduction to Pattern Recognition, Chapter 2: Bayesian pattern classification

2E Pattern Recognition Solutions to Introduction to Pattern Recognition, Chapter 2: Bayesian pattern classification E395 - Pattern Recognton Solutons to Introducton to Pattern Recognton, Chapter : Bayesan pattern classfcaton Preface Ths document s a soluton manual for selected exercses from Introducton to Pattern Recognton

More information

Building A Fuzzy Inference System By An Extended Rule Based Q-Learning

Building A Fuzzy Inference System By An Extended Rule Based Q-Learning Buldng A Fuzzy Inference System By An Extended Rule Based Q-Learnng Mn-Soeng Km, Sun-G Hong and Ju-Jang Lee * Dept. of Electrcal Engneerng and Computer Scence, KAIST 373- Kusung-Dong Yusong-Ku Taejon 35-7,

More information

Regularized Discriminant Analysis for Face Recognition

Regularized Discriminant Analysis for Face Recognition 1 Regularzed Dscrmnant Analyss for Face Recognton Itz Pma, Mayer Aladem Department of Electrcal and Computer Engneerng, Ben-Guron Unversty of the Negev P.O.Box 653, Beer-Sheva, 845, Israel. Abstract Ths

More information

Markov Chain Monte Carlo Lecture 6

Markov Chain Monte Carlo Lecture 6 where (x 1,..., x N ) X N, N s called the populaton sze, f(x) f (x) for at least one {1, 2,..., N}, and those dfferent from f(x) are called the tral dstrbutons n terms of mportance samplng. Dfferent ways

More information

Problem Set 9 Solutions

Problem Set 9 Solutions Desgn and Analyss of Algorthms May 4, 2015 Massachusetts Insttute of Technology 6.046J/18.410J Profs. Erk Demane, Srn Devadas, and Nancy Lynch Problem Set 9 Solutons Problem Set 9 Solutons Ths problem

More information

Discretization of Continuous Attributes in Rough Set Theory and Its Application*

Discretization of Continuous Attributes in Rough Set Theory and Its Application* Dscretzaton of Contnuous Attrbutes n Rough Set Theory and Its Applcaton* Gexang Zhang 1,2, Lazhao Hu 1, and Wedong Jn 2 1 Natonal EW Laboratory, Chengdu 610036 Schuan, Chna dylan7237@sna.com 2 School of

More information

An identification algorithm of model kinetic parameters of the interfacial layer growth in fiber composites

An identification algorithm of model kinetic parameters of the interfacial layer growth in fiber composites IOP Conference Seres: Materals Scence and Engneerng PAPER OPE ACCESS An dentfcaton algorthm of model knetc parameters of the nterfacal layer growth n fber compostes o cte ths artcle: V Zubov et al 216

More information

INF 5860 Machine learning for image classification. Lecture 3 : Image classification and regression part II Anne Solberg January 31, 2018

INF 5860 Machine learning for image classification. Lecture 3 : Image classification and regression part II Anne Solberg January 31, 2018 INF 5860 Machne learnng for mage classfcaton Lecture 3 : Image classfcaton and regresson part II Anne Solberg January 3, 08 Today s topcs Multclass logstc regresson and softma Regularzaton Image classfcaton

More information

A Fast Computer Aided Design Method for Filters

A Fast Computer Aided Design Method for Filters 2017 Asa-Pacfc Engneerng and Technology Conference (APETC 2017) ISBN: 978-1-60595-443-1 A Fast Computer Aded Desgn Method for Flters Gang L ABSTRACT *Ths paper presents a fast computer aded desgn method

More information

CHALMERS, GÖTEBORGS UNIVERSITET. SOLUTIONS to RE-EXAM for ARTIFICIAL NEURAL NETWORKS. COURSE CODES: FFR 135, FIM 720 GU, PhD

CHALMERS, GÖTEBORGS UNIVERSITET. SOLUTIONS to RE-EXAM for ARTIFICIAL NEURAL NETWORKS. COURSE CODES: FFR 135, FIM 720 GU, PhD CHALMERS, GÖTEBORGS UNIVERSITET SOLUTIONS to RE-EXAM for ARTIFICIAL NEURAL NETWORKS COURSE CODES: FFR 35, FIM 72 GU, PhD Tme: Place: Teachers: Allowed materal: Not allowed: January 2, 28, at 8 3 2 3 SB

More information

Process Optimization by Soft Computing and Its Application to a Wire Bonding Problem

Process Optimization by Soft Computing and Its Application to a Wire Bonding Problem Internatonal Journal of Appled Scence and Engneerng 2004 2, : 59-7 Process Optmzaton by Soft Computng and Its Applcaton to a Wre Bondng Problem Ch-Bn Cheng Department of Industral Engneerng and Management,

More information

Feature Selection: Part 1

Feature Selection: Part 1 CSE 546: Machne Learnng Lecture 5 Feature Selecton: Part 1 Instructor: Sham Kakade 1 Regresson n the hgh dmensonal settng How do we learn when the number of features d s greater than the sample sze n?

More information

Calculation of time complexity (3%)

Calculation of time complexity (3%) Problem 1. (30%) Calculaton of tme complexty (3%) Gven n ctes, usng exhaust search to see every result takes O(n!). Calculaton of tme needed to solve the problem (2%) 40 ctes:40! dfferent tours 40 add

More information

NUMERICAL DIFFERENTIATION

NUMERICAL DIFFERENTIATION NUMERICAL DIFFERENTIATION 1 Introducton Dfferentaton s a method to compute the rate at whch a dependent output y changes wth respect to the change n the ndependent nput x. Ths rate of change s called the

More information

Pulse Coded Modulation

Pulse Coded Modulation Pulse Coded Modulaton PCM (Pulse Coded Modulaton) s a voce codng technque defned by the ITU-T G.711 standard and t s used n dgtal telephony to encode the voce sgnal. The frst step n the analog to dgtal

More information

On an Extension of Stochastic Approximation EM Algorithm for Incomplete Data Problems. Vahid Tadayon 1

On an Extension of Stochastic Approximation EM Algorithm for Incomplete Data Problems. Vahid Tadayon 1 On an Extenson of Stochastc Approxmaton EM Algorthm for Incomplete Data Problems Vahd Tadayon Abstract: The Stochastc Approxmaton EM (SAEM algorthm, a varant stochastc approxmaton of EM, s a versatle tool

More information

Chapter 5. Solution of System of Linear Equations. Module No. 6. Solution of Inconsistent and Ill Conditioned Systems

Chapter 5. Solution of System of Linear Equations. Module No. 6. Solution of Inconsistent and Ill Conditioned Systems Numercal Analyss by Dr. Anta Pal Assstant Professor Department of Mathematcs Natonal Insttute of Technology Durgapur Durgapur-713209 emal: anta.bue@gmal.com 1 . Chapter 5 Soluton of System of Lnear Equatons

More information

Lecture 10 Support Vector Machines II

Lecture 10 Support Vector Machines II Lecture 10 Support Vector Machnes II 22 February 2016 Taylor B. Arnold Yale Statstcs STAT 365/665 1/28 Notes: Problem 3 s posted and due ths upcomng Frday There was an early bug n the fake-test data; fxed

More information

Chapter 9: Statistical Inference and the Relationship between Two Variables

Chapter 9: Statistical Inference and the Relationship between Two Variables Chapter 9: Statstcal Inference and the Relatonshp between Two Varables Key Words The Regresson Model The Sample Regresson Equaton The Pearson Correlaton Coeffcent Learnng Outcomes After studyng ths chapter,

More information

A neural network with localized receptive fields for visual pattern classification

A neural network with localized receptive fields for visual pattern classification Unversty of Wollongong Research Onlne Faculty of Informatcs - Papers (Archve) Faculty of Engneerng and Informaton Scences 2005 A neural network wth localzed receptve felds for vsual pattern classfcaton

More information

Kernels in Support Vector Machines. Based on lectures of Martin Law, University of Michigan

Kernels in Support Vector Machines. Based on lectures of Martin Law, University of Michigan Kernels n Support Vector Machnes Based on lectures of Martn Law, Unversty of Mchgan Non Lnear separable problems AND OR NOT() The XOR problem cannot be solved wth a perceptron. XOR Per Lug Martell - Systems

More information

Code_Aster. Identification of the model of Weibull

Code_Aster. Identification of the model of Weibull Verson Ttre : Identfcaton du modèle de Webull Date : 2/09/2009 Page : /8 Responsable : PARROT Aurore Clé : R70209 Révson : Identfcaton of the model of Webull Summary One tackles here the problem of the

More information

GA-Based Fuzzy Kalman Filter for Tracking the Maneuvering Target Sun Young Noh*, Bum Jik Lee *, Young Hoon Joo **, and Jin Bae Park *

GA-Based Fuzzy Kalman Filter for Tracking the Maneuvering Target Sun Young Noh*, Bum Jik Lee *, Young Hoon Joo **, and Jin Bae Park * ICCAS005 June -5, KINEX, Gyeongg-Do, Korea GA-Based uzzy Kalman lter for rackng the aneuverng arget Sun Young Noh*, Bum Jk Lee *, Young Hoon Joo **, and Jn Bae Park * * Department of Electrcal and Electronc

More information

Neural Networks & Learning

Neural Networks & Learning Neural Netorks & Learnng. Introducton The basc prelmnares nvolved n the Artfcal Neural Netorks (ANN) are descrbed n secton. An Artfcal Neural Netorks (ANN) s an nformaton-processng paradgm that nspred

More information

COEFFICIENT DIAGRAM: A NOVEL TOOL IN POLYNOMIAL CONTROLLER DESIGN

COEFFICIENT DIAGRAM: A NOVEL TOOL IN POLYNOMIAL CONTROLLER DESIGN Int. J. Chem. Sc.: (4), 04, 645654 ISSN 097768X www.sadgurupublcatons.com COEFFICIENT DIAGRAM: A NOVEL TOOL IN POLYNOMIAL CONTROLLER DESIGN R. GOVINDARASU a, R. PARTHIBAN a and P. K. BHABA b* a Department

More information

Short Term Load Forecasting using an Artificial Neural Network

Short Term Load Forecasting using an Artificial Neural Network Short Term Load Forecastng usng an Artfcal Neural Network D. Kown 1, M. Km 1, C. Hong 1,, S. Cho 2 1 Department of Computer Scence, Sangmyung Unversty, Seoul, Korea 2 Department of Energy Grd, Sangmyung

More information

Outline and Reading. Dynamic Programming. Dynamic Programming revealed. Computing Fibonacci. The General Dynamic Programming Technique

Outline and Reading. Dynamic Programming. Dynamic Programming revealed. Computing Fibonacci. The General Dynamic Programming Technique Outlne and Readng Dynamc Programmng The General Technque ( 5.3.2) -1 Knapsac Problem ( 5.3.3) Matrx Chan-Product ( 5.3.1) Dynamc Programmng verson 1.4 1 Dynamc Programmng verson 1.4 2 Dynamc Programmng

More information

Identification of Instantaneous Modal Parameters of A Nonlinear Structure Via Amplitude-Dependent ARX Model

Identification of Instantaneous Modal Parameters of A Nonlinear Structure Via Amplitude-Dependent ARX Model Identfcaton of Instantaneous Modal Parameters of A Nonlnear Structure Va Ampltude-Dependent ARX Model We Chh Su(NCHC), Chung Shann Huang(NCU), Chng Yu Lu(NCU) Outlne INRODUCION MEHODOLOGY NUMERICAL VERIFICAION

More information

Foundations of Arithmetic

Foundations of Arithmetic Foundatons of Arthmetc Notaton We shall denote the sum and product of numbers n the usual notaton as a 2 + a 2 + a 3 + + a = a, a 1 a 2 a 3 a = a The notaton a b means a dvdes b,.e. ac = b where c s an

More information

A New Evolutionary Computation Based Approach for Learning Bayesian Network

A New Evolutionary Computation Based Approach for Learning Bayesian Network Avalable onlne at www.scencedrect.com Proceda Engneerng 15 (2011) 4026 4030 Advanced n Control Engneerng and Informaton Scence A New Evolutonary Computaton Based Approach for Learnng Bayesan Network Yungang

More information

2016 Wiley. Study Session 2: Ethical and Professional Standards Application

2016 Wiley. Study Session 2: Ethical and Professional Standards Application 6 Wley Study Sesson : Ethcal and Professonal Standards Applcaton LESSON : CORRECTION ANALYSIS Readng 9: Correlaton and Regresson LOS 9a: Calculate and nterpret a sample covarance and a sample correlaton

More information

COMPUTATIONALLY EFFICIENT WAVELET AFFINE INVARIANT FUNCTIONS FOR SHAPE RECOGNITION. Erdem Bala, Dept. of Electrical and Computer Engineering,

COMPUTATIONALLY EFFICIENT WAVELET AFFINE INVARIANT FUNCTIONS FOR SHAPE RECOGNITION. Erdem Bala, Dept. of Electrical and Computer Engineering, COMPUTATIONALLY EFFICIENT WAVELET AFFINE INVARIANT FUNCTIONS FOR SHAPE RECOGNITION Erdem Bala, Dept. of Electrcal and Computer Engneerng, Unversty of Delaware, 40 Evans Hall, Newar, DE, 976 A. Ens Cetn,

More information

Estimating the Fundamental Matrix by Transforming Image Points in Projective Space 1

Estimating the Fundamental Matrix by Transforming Image Points in Projective Space 1 Estmatng the Fundamental Matrx by Transformng Image Ponts n Projectve Space 1 Zhengyou Zhang and Charles Loop Mcrosoft Research, One Mcrosoft Way, Redmond, WA 98052, USA E-mal: fzhang,cloopg@mcrosoft.com

More information

Negative Binomial Regression

Negative Binomial Regression STATGRAPHICS Rev. 9/16/2013 Negatve Bnomal Regresson Summary... 1 Data Input... 3 Statstcal Model... 3 Analyss Summary... 4 Analyss Optons... 7 Plot of Ftted Model... 8 Observed Versus Predcted... 10 Predctons...

More information

arxiv:cs.cv/ Jun 2000

arxiv:cs.cv/ Jun 2000 Correlaton over Decomposed Sgnals: A Non-Lnear Approach to Fast and Effectve Sequences Comparson Lucano da Fontoura Costa arxv:cs.cv/0006040 28 Jun 2000 Cybernetc Vson Research Group IFSC Unversty of São

More information

Beyond Zudilin s Conjectured q-analog of Schmidt s problem

Beyond Zudilin s Conjectured q-analog of Schmidt s problem Beyond Zudln s Conectured q-analog of Schmdt s problem Thotsaporn Ae Thanatpanonda thotsaporn@gmalcom Mathematcs Subect Classfcaton: 11B65 33B99 Abstract Usng the methodology of (rgorous expermental mathematcs

More information

Power law and dimension of the maximum value for belief distribution with the max Deng entropy

Power law and dimension of the maximum value for belief distribution with the max Deng entropy Power law and dmenson of the maxmum value for belef dstrbuton wth the max Deng entropy Bngy Kang a, a College of Informaton Engneerng, Northwest A&F Unversty, Yanglng, Shaanx, 712100, Chna. Abstract Deng

More information

A Hierarchical Fuzzy-neural Multi-model Applied in Nonlinear Systems Identification and Control

A Hierarchical Fuzzy-neural Multi-model Applied in Nonlinear Systems Identification and Control A Herarchcal Fuzzy-neural Mult-model Appled n Nonlnear Systems Identfcaton and Control Feng Ye School of Physcs & Informaton Engneerng Janghan Unversty Wuhan, Chna yefenglj@yahoo.com.cn We-mn Q School

More information

VQ widely used in coding speech, image, and video

VQ widely used in coding speech, image, and video at Scalar quantzers are specal cases of vector quantzers (VQ): they are constraned to look at one sample at a tme (memoryless) VQ does not have such constrant better RD perfomance expected Source codng

More information

Singular Value Decomposition: Theory and Applications

Singular Value Decomposition: Theory and Applications Sngular Value Decomposton: Theory and Applcatons Danel Khashab Sprng 2015 Last Update: March 2, 2015 1 Introducton A = UDV where columns of U and V are orthonormal and matrx D s dagonal wth postve real

More information

Unified Subspace Analysis for Face Recognition

Unified Subspace Analysis for Face Recognition Unfed Subspace Analyss for Face Recognton Xaogang Wang and Xaoou Tang Department of Informaton Engneerng The Chnese Unversty of Hong Kong Shatn, Hong Kong {xgwang, xtang}@e.cuhk.edu.hk Abstract PCA, LDA

More information

Neural networks. Nuno Vasconcelos ECE Department, UCSD

Neural networks. Nuno Vasconcelos ECE Department, UCSD Neural networs Nuno Vasconcelos ECE Department, UCSD Classfcaton a classfcaton problem has two types of varables e.g. X - vector of observatons (features) n the world Y - state (class) of the world x X

More information

x = , so that calculated

x = , so that calculated Stat 4, secton Sngle Factor ANOVA notes by Tm Plachowsk n chapter 8 we conducted hypothess tests n whch we compared a sngle sample s mean or proporton to some hypotheszed value Chapter 9 expanded ths to

More information

Linear Classification, SVMs and Nearest Neighbors

Linear Classification, SVMs and Nearest Neighbors 1 CSE 473 Lecture 25 (Chapter 18) Lnear Classfcaton, SVMs and Nearest Neghbors CSE AI faculty + Chrs Bshop, Dan Klen, Stuart Russell, Andrew Moore Motvaton: Face Detecton How do we buld a classfer to dstngush

More information

EXPERT CONTROL BASED ON NEURAL NETWORKS FOR CONTROLLING GREENHOUSE ENVIRONMENT

EXPERT CONTROL BASED ON NEURAL NETWORKS FOR CONTROLLING GREENHOUSE ENVIRONMENT EXPERT CONTROL BASED ON NEURAL NETWORKS FOR CONTROLLING GREENHOUSE ENVIRONMENT Le Du Bejng Insttute of Technology, Bejng, 100081, Chna Abstract: Keyords: Dependng upon the nonlnear feature beteen neural

More information

Chapter 6. Supplemental Text Material

Chapter 6. Supplemental Text Material Chapter 6. Supplemental Text Materal S6-. actor Effect Estmates are Least Squares Estmates We have gven heurstc or ntutve explanatons of how the estmates of the factor effects are obtaned n the textboo.

More information

Lossy Compression. Compromise accuracy of reconstruction for increased compression.

Lossy Compression. Compromise accuracy of reconstruction for increased compression. Lossy Compresson Compromse accuracy of reconstructon for ncreased compresson. The reconstructon s usually vsbly ndstngushable from the orgnal mage. Typcally, one can get up to 0:1 compresson wth almost

More information

FORECASTING EXCHANGE RATE USING SUPPORT VECTOR MACHINES

FORECASTING EXCHANGE RATE USING SUPPORT VECTOR MACHINES Proceedngs of the Fourth Internatonal Conference on Machne Learnng and Cybernetcs, Guangzhou, 8- August 005 FORECASTING EXCHANGE RATE USING SUPPORT VECTOR MACHINES DING-ZHOU CAO, SU-LIN PANG, YUAN-HUAI

More information

Wavelet chaotic neural networks and their application to continuous function optimization

Wavelet chaotic neural networks and their application to continuous function optimization Vol., No.3, 04-09 (009) do:0.436/ns.009.307 Natural Scence Wavelet chaotc neural networks and ther applcaton to contnuous functon optmzaton Ja-Ha Zhang, Yao-Qun Xu College of Electrcal and Automatc Engneerng,

More information