Sparse Training Procedure for Kernel Neuron *


Jianhua XU, Xuegong ZHANG and Yanda LI

School of Mathematical and Computer Science, Nanjing Normal University, Nanjing 210097, Jiangsu Province, China, xujianhua@email.njnu.edu.cn
Department of Automation, Tsinghua University / State Key Laboratory of Intelligent Technology and Systems, Beijing 100084, China, zhangxg@mail.tsinghua.edu.cn

* This work is supported by the National Natural Science Foundation of China.

Abstract: The kernel neuron is a generalization of the classical McCulloch-Pitts neuron using Mercer kernels. In order to control the generalization ability and prune the structure of the kernel neuron, we construct in this paper a regularized risk functional including both an empirical risk functional and a Laplace regularization term. Based on the gradient descent method, a novel training algorithm is designed, which is referred to as the sparse training procedure for the kernel neuron. Such a procedure realizes the three main ideas of kernel machines (e.g. support vector machines, kernel Fisher discriminant analysis, etc.): kernels, regularization (or large margin), and sparseness, and can deal with nonlinear classification and regression problems effectively.

Keywords: Kernel Neuron, Support Vector Machine, Sparseness, Regularization.

1. Introduction

In artificial neural networks the basic element is the McCulloch-Pitts neuron (M-P neuron) [1]. Rosenblatt [2] proposed the first learnable procedure, the perceptron, which as a simple linear classifier could only deal with linearly separable cases. In order to handle more complicated real-world problems, many models and their training procedures have been introduced, e.g. the back propagation training method for the multilayer perceptron [3], the adaline with nonlinear transforms [4], and the radial basis function (RBF) net [5]. Recently, several kernel-based machines for nonlinear problems, such as support vector machines (SVM) [6-8], kernel Fisher discriminant analysis (KFD) [9], and the large margin kernel pocket algorithm [10], have been gaining more and more attention in nonlinear classifier design. Three attractive concepts underlie them: the kernel idea, large margin or regularization, and sparseness. The kernel idea is an effective technique for realizing a nonlinear transform implicitly. XU et al [11] introduced the kernel neuron by generalizing the M-P neuron through Mercer kernels, and constructed a simple training algorithm based on the gradient descent method. The kernel neuron and its training procedure can be considered a unified framework for the three nonlinear techniques mentioned above in neural networks.

The regularization technique developed by Tikhonov & Arsenin [12] handles ill-posed problems, and has been widely used in neural networks. It has been found that adding a proper regularization term to an objective functional can result in significant improvements in net generalization [13], and can also prune the structure of nets [14]. There are mainly three usual regularization terms: the squared or Gaussian, the absolute or Laplace, and the normalized or Cauchy regularization terms. With respect to the efficiency of supervised learning, Saito and Nakano [13] gave a detailed comparison of the three regularization terms and different learning algorithms, and pointed out that the combination of the squared regularization term and a second order learning algorithm drastically improves convergence and generalization ability. Williams [15] concluded that the Laplace regularization term is more appropriate than the Gaussian one from the viewpoint of net pruning. Ishikawa [16] used a Laplace regularizer to construct a simple but effective learning method, called structural learning with forgetting, in order to prune feedforward neural nets.
In this paper, in order to improve the generalization ability and to obtain a sparse discriminant or regression function for the kernel neuron, we add a Laplace regularization term to the original empirical risk functional defined in the paper of XU et al [11]. Based on the gradient descent approach, a training algorithm is constructed. It is referred to as the sparse training procedure for the kernel neuron, and it realizes the three main ideas of support vector machines. As a nonlinear technique, it can handle both nonlinear classification and regression problems effectively.

2. Definition of Kernel Neuron

This paper is devoted to two problems in neural networks: classification and regression. Let

{(x_1, y_1), (x_2, y_2), ..., (x_i, y_i), ..., (x_l, y_l)}   (1)

be a training set of i.i.d. samples, where x_i ∈ R^n. For the binary classification problem with classes (ω_1, ω_2), suppose

y_i = +1 if x_i ∈ ω_1, and y_i = -1 if x_i ∈ ω_2,   (2)

while for the regression problem, suppose y_i ∈ R, i = 1, ..., l.

The classical M-P neuron is defined as

o(x) = f((w · x) + b)   (3)

where o is the output of the neuron, x is the input vector, and w and b are the weight vector and the threshold respectively. The function f is the transform function. For the M-P neuron, f is a hard limiting function, i.e. the sign function. In neural networks, f is a continuously differentiable and monotone function, e.g. a sigmoid or a linear function.

In the paper of Xu et al [11], a kernel version of the M-P neuron is defined as

o(x) = f( Σ_{i=1}^{l} α_i k(x, x_i) + β )   (4)

where the α_i ∈ R, i = 1, ..., l are the coefficients corresponding to each sample, and k(x, x_i) is a kernel function satisfying the Mercer conditions, e.g. the polynomial kernel, the RBF kernel, or the two-layer neural net kernel [7,8]. Note that, generally speaking, the input-output relationship of the kernel neuron is nonlinear. Only when the transform function is linear and the kernel is the linear kernel (namely k(x, x_i) = x · x_i) is the relationship linear, in which case the kernel neuron can be considered an equivalent form of the M-P neuron. The kernel neuron utilizes nonlinear kernels to realize the nonlinear transform from the original input vector space (R^n) to the real number space (R).

3. Sparse Training Procedure for Kernel Neuron

For the kernel neuron, XU et al [11] defined an empirical risk functional and constructed a training algorithm based on the standard gradient descent scheme. Such a training procedure only realizes the kernel idea. In particular, it is difficult to control the generalization ability and to obtain a sparse decision function. Adding a proper regularization term to the risk functional to decay the connection weights is a simple way to prune weights without complicating the learning algorithm much [14]. In the kernel neuron, such pruning implies that a sparse representation occurs, i.e. many α_i become close to zero. Simultaneously, the regularization method can also improve the generalization ability and the convergence of the training procedure. Thus, we define a regularized risk functional consisting of the empirical risk (half the summed squared error between the actual and desired outputs) and a Laplace regularization term, that is,

E(α, β) = (1/2) Σ_{i=1}^{l} [y_i - o(x_i)]^2 + μ Σ_{i=1}^{l} |α_i|   (5)

where α = [α_1, ..., α_l]^T and μ is the regularization parameter. Our goal now is to construct an effective algorithm to find the coefficient vector α and the threshold β that minimize the risk functional (5). This can still be done by the standard gradient descent scheme. Writing ỹ_i = Σ_{j=1}^{l} α_j k(x_i, x_j) + β, the gradient of (5) is

∂E/∂α_m = - Σ_{i=1}^{l} [y_i - f(ỹ_i)] f'(ỹ_i) k(x_m, x_i) + μ sgn(α_m),  m = 1, 2, ..., l   (6)
∂E/∂β = - Σ_{i=1}^{l} [y_i - f(ỹ_i)] f'(ỹ_i)

where f'(ỹ_i) is the first derivative of f evaluated at ỹ_i, and sgn is the sign function.
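To make the definitions above concrete, the following minimal Python sketch implements Eqs. (4)-(6) for a single kernel neuron. The RBF kernel and the tanh transform function are illustrative choices consistent with the text (any Mercer kernel and any differentiable monotone f would do), and all names are ours, not the paper's.

```python
import numpy as np

def rbf_kernel(x, z, width=1.0):
    """A Mercer kernel: k(x, z) = exp(-||x - z||^2 / (2 * width^2))."""
    return np.exp(-np.sum((np.asarray(x) - np.asarray(z)) ** 2) / (2.0 * width ** 2))

def f(y):
    """Transform function; tanh is one continuously differentiable, monotone choice."""
    return np.tanh(y)

def f_prime(y):
    """First derivative of the tanh transform function."""
    return 1.0 - np.tanh(y) ** 2

def output(x, X, alpha, beta, kernel=rbf_kernel):
    """Eq. (4): o(x) = f(sum_i alpha_i * k(x, x_i) + beta)."""
    return f(sum(a * kernel(x, xi) for a, xi in zip(alpha, X)) + beta)

def regularized_risk(X, y, alpha, beta, mu, kernel=rbf_kernel):
    """Eq. (5): half the squared empirical error plus the Laplace term mu * sum_i |alpha_i|."""
    o = np.array([output(xi, X, alpha, beta, kernel) for xi in X])
    return 0.5 * np.sum((y - o) ** 2) + mu * np.sum(np.abs(alpha))
```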

Like the back-propagation training algorithm in feedforward neural networks, we also use single-sample correction and add a momentum term to the iterative procedure. Therefore, a novel iterative procedure for the kernel neuron can be written as:

Algorithm-1 (SKN-1):
1. Let t = 0 and let α_m(t), β(t) be arbitrary.
2. Pick some sample x_i.
3. Set t = t + 1.
4. Calculate ỹ_i(t) = Σ_{j=1}^{l} α_j(t-1) k(x_i, x_j) + β(t-1) and o_i(t) = f(ỹ_i(t)).
5. Calculate
Δα_m(t) = λ_1 [y_i - f(ỹ_i(t))] f'(ỹ_i(t)) k(x_m, x_i) - λ_2 sgn(α_m(t-1)) + λ_3 Δα_m(t-1)
Δβ(t) = λ_1 [y_i - f(ỹ_i(t))] f'(ỹ_i(t)) + λ_3 Δβ(t-1)
6. Update
α_m(t) = α_m(t-1) + Δα_m(t)
β(t) = β(t-1) + Δβ(t)
7. If Σ_m |Δα_m(t)| + |Δβ(t)| < ε or t ≥ t_max, stop; otherwise go to step 3.

Here m = 1, 2, ..., l, λ_1 is the learning rate, λ_2 = λ_1 μ, λ_3 denotes the momentum parameter, ε is a threshold for stopping the algorithm, and t_max is the maximal number of iterations. Ishikawa [16] pointed out that such a weight decay is constant, in contrast to exponential decay [17], so that unnecessary connections fade away. This means that a large number of parameters become close to zero and sparseness appears. In particular, when λ_2 = λ_3 = 0, this approach reduces to the simple training procedure for the kernel neuron [11].

Ishikawa [16] advised a selective pruning procedure in which, after the training procedure listed above, only those connection weights decay whose absolute values lie below a threshold θ. The corresponding regularized risk functional for the kernel neuron is

E(α, β) = (1/2) Σ_{i=1}^{l} [y_i - o(x_i)]^2 + μ Σ_{|α_i| < θ} |α_i|   (7)

Such an idea can improve the goodness of fit of the model and further decay the small values, which means an even sparser representation can occur. The novel sparse training procedure is then comprised of two steps as follows:

Algorithm-2 (SKN-2):
1. Run Algorithm-1 as listed above.
2. Run Algorithm-1 again, but include the threshold θ for the α_i, i.e. minimize (7).

In this paper, we call the two training algorithms above sparse training procedures 1 and 2 for the kernel neuron, or simply SKN-1 and SKN-2 respectively.

As in the sparse LS-SVM [18], we refer to {α_1, ..., α_l} as the spectrum of the kernel neuron. Sparseness means that many components of the spectrum are very close to zero; if such components are forced to zero, the final discriminant function changes little. In this paper, a proper threshold δ is set to force some components to zero. If |α_i| ≥ δ, the corresponding sample or vector is still called a support vector. The final discriminant or regression function can then be represented as

f(x) = f( Σ_{|α_i| ≥ δ} α_i k(x, x_i) + β )   (8)

When we handle the binary classification problem, for a new input vector x, if f(x) > 0 then x ∈ ω_1, otherwise x ∈ ω_2. For the regression problem, we take f(x) as the regression result.
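The following sketch, a minimal reading of steps 1-7 of Algorithm-1 together with the thresholded decision function of Eq. (8), reuses the rbf_kernel, f and f_prime helpers from the previous sketch; the parameter defaults (lam1, lam2, lam3, eps, delta) are illustrative values, not settings reported in the paper.

```python
import numpy as np

def train_skn1(X, y, kernel=rbf_kernel, lam1=0.1, lam2=1e-3, lam3=0.5,
               eps=1e-6, t_max=10000, seed=0):
    """Algorithm-1 (SKN-1): single-sample gradient descent with a constant
    Laplace weight decay (lam2 = lam1 * mu) and a momentum term (lam3)."""
    rng = np.random.default_rng(seed)
    l = len(X)
    K = np.array([[kernel(xi, xj) for xj in X] for xi in X])  # k(x_i, x_j), symmetric
    alpha, beta = np.zeros(l), 0.0                  # step 1: initial coefficients
    d_alpha, d_beta = np.zeros(l), 0.0              # previous updates, for momentum
    for t in range(1, t_max + 1):                   # step 3
        i = rng.integers(l)                         # step 2: pick a sample x_i
        y_tilde = K[i] @ alpha + beta               # step 4
        err = (y[i] - f(y_tilde)) * f_prime(y_tilde)
        # step 5: gradient step + constant Laplace decay + momentum
        d_alpha = lam1 * err * K[i] - lam2 * np.sign(alpha) + lam3 * d_alpha
        d_beta = lam1 * err + lam3 * d_beta
        alpha, beta = alpha + d_alpha, beta + d_beta            # step 6
        if np.sum(np.abs(d_alpha)) + abs(d_beta) < eps:         # step 7
            break
    return alpha, beta

def predict(x, X, alpha, beta, kernel=rbf_kernel, delta=1e-3):
    """Eq. (8): only support vectors with |alpha_i| >= delta contribute."""
    s = sum(a * kernel(x, xi) for a, xi in zip(alpha, X) if abs(a) >= delta)
    return f(s + beta)
```

Setting lam2 = lam3 = 0 recovers the simple KN training procedure [11], and running the loop a second time with the decay restricted to coefficients with |alpha_i| < theta would give SKN-2 (Eq. 7).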

4. Experiment Results and Analysis

To evaluate the performance of our new training procedures, we devised three artificial data sets: a linearly separable case, a nonlinear case with 10 misclassified samples, and a nonlinear regression. The example in Fig. 1 is a linearly separable case, with 79 samples of two classes (marked by crosses and points in the figure) which can be classified without error by several linear classifiers. Fig. 1 illustrates the separation lines obtained by using KN [11], SKN-1, SKN-2 and the SVMlight method [19] with the linear kernel k(x, y) = x · y, where the circles indicate the support vectors (SVs).

Fig. 1: Some separation hyperplanes from different algorithms for the linear case.

Fig. 2 shows the corresponding spectrums of these learning approaches. In KN (Fig. 2(a)), the spectrum is obviously not sparse. In Fig. 2(b) and (c), a large number of components are close to zero and sparseness occurs in the decision function. Fig. 2(d) illustrates the spectrum of the SVMlight method [19].

Fig. 2: The spectrums from different approaches for the linear case.

For the nonlinear problem, we designed an example which contains ten samples misclassified by any linear classifier. Fig. 3 shows the decision boundaries from KN, SKN-1, SKN-2 and SVMlight with the linear kernel. Note that there exist two contradictory samples, i.e. x_i = x_j but y_i ≠ y_j. We find that the number of support vectors from our sparse training algorithms is less than that from SVMlight (C = 1000, other parameters at their default values). The example demonstrates that our algorithm works well for the nonlinear problem.

Fig. 3: The separating hyperplanes from different algorithms for the nonlinear case.

For the nonlinear regression problem, the function f(x) = (x + x^2) e^{-0.5x^2} is used [13]. In the experiment, the values of x are randomly generated in the range from -4 to +4, and the corresponding function values are computed with an added Gaussian noise of zero mean. The total number of training samples is 30. Using a radial basis function kernel with width 1.0, Fig. 4(a) demonstrates the regression result from SKN-2, where the solid curve is the regression result, the dashed line the true function, and the black points the actual samples.

Fig. 4: A nonlinear regression example with SKN-2 and SVM.
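For reference, such a regression training set can be generated as follows; the target is the function reconstructed above, and the noise standard deviation of 0.1 is an assumption, since the exact variance is not preserved in the source text.

```python
import numpy as np

def target(x):
    # The regression test function above: f(x) = (x + x^2) * exp(-0.5 * x^2)
    return (x + x ** 2) * np.exp(-0.5 * x ** 2)

rng = np.random.default_rng(0)
x_train = rng.uniform(-4.0, 4.0, size=30)              # 30 samples drawn from [-4, +4]
y_train = target(x_train) + rng.normal(0.0, 0.1, 30)   # zero-mean Gaussian noise (assumed std 0.1)
```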

In Fig. 4(b), the result comes from SVM with the RBF kernel (width = 0.4), C = 100.0 and ε = 0.1. In both (a) and (b), the circles denote support vectors. It is easy to see that the number of support vectors from SKN-2 is less than that from SVM, namely 6 versus 16.

The three artificial examples above show that the sparse training procedures work well for both nonlinear classification and regression problems. It is possible that our methods obtain a sparser function than SVM.

5. Discussions and Conclusions

The kernel neuron is the nonlinear generalization of the neuron with kernels. In order to control the generalization ability and obtain a sparse decision function, two regularized risk functionals are defined for the kernel neuron, which consist of an empirical risk functional and a Laplace regularization term. Based on the gradient descent scheme, two sparse training procedures are developed, i.e. SKN-1 and SKN-2. The new methods can be regarded as a new general-purpose nonlinear learning machine, since they can be applied to both nonlinear pattern recognition and regression problems. Experiments on artificial data sets show that they work well on both linearly separable and non-separable data, and also on regression problems. For the three usual kinds of kernel functions, the kernel neuron and its sparse training procedures can achieve performance similar to that of multi-layer perceptrons (the two-layer neural network kernel), radial basis function nets (the RBF kernel) and the adaline with nonlinear preprocessors (the polynomial kernel). Furthermore, SKN frees us from designing hidden layer nodes, clustering the centers, constructing the polynomial transform, etc.

References

[1] McCulloch W. S., Pitts W. A logical calculus of the ideas immanent in nervous activity. Bulletin of Mathematical Biophysics, 5, 115-133, 1943.
[2] Rosenblatt F. The perceptron: a probabilistic model for information storage and organization in the brain. Psychological Review, 65, 1958.
[3] Rumelhart D. E., Hinton G. E., Williams R. J. Learning representations by back-propagating errors. Nature, 323, 533-536, 1986.
[4] Specht D. F. Generation of polynomial discriminant functions for pattern recognition. IEEE Transactions on Electronic Computers, EC-16, 308-319, 1967.
[5] Theodoridis S., Koutroumbas K. Pattern Recognition. Academic Press, San Diego, 1999.
[6] Cortes C., Vapnik V. N. Support vector networks. Machine Learning, 20(3), 273-297, 1995.
[7] Vapnik V. N. Statistical Learning Theory. Wiley, New York, 1998.
[8] Vapnik V. N. The Nature of Statistical Learning Theory (2nd ed.). Springer-Verlag, New York, 1999.
[9] Mika S., Ratsch G., Weston J., Scholkopf B., Muller K.-R. Fisher discriminant analysis with kernels. Neural Networks for Signal Processing IX, IEEE Press, New York, 1999.
[10] Xu J., Zhang X., Li Y. Large margin kernel pocket algorithm. Proceedings of IJCNN 2001, Washington DC, 2001.
[11] Xu J., Zhang X., Li Y. Kernel neuron and its training algorithm. Proceedings of the 8th International Conference on Neural Information Processing, Shanghai, China, Nov. 14-18, 2001, Fudan University Press.
[12] Tikhonov A. N., Arsenin V. Y. Solution of Ill-posed Problems. W. H. Winston, Washington DC, 1977.
[13] Saito K., Nakano R. Second order learning algorithm with squared penalty term. Neural Computation, 12(3), 2000.
[14] Reed R. Pruning algorithms: a survey. IEEE Transactions on Neural Networks, 4(5), 740-747, 1993.
[15] Williams P. M. Bayesian regularization and pruning using a Laplace prior. Neural Computation, 7(1), 117-143, 1995.
[16] Ishikawa M. Structural learning with forgetting. Neural Networks, 9(3), 509-521, 1996.
[17] Plaut D. C., Nowlan S. J., Hinton G. E. Experiments on learning by back propagation. Technical Report CMU-CS-86-126, Carnegie-Mellon University, 1986.
[18] Suykens J. A. K., Lukas L., Vandewalle J. Sparse least squares support vector machine classifiers. In 8th European Symposium on Artificial Neural Networks (ESANN 2000), 37-42, 2000.
[19] Joachims T. Making large-scale SVM learning practical. Advances in Kernel Methods: Support Vector Learning, Scholkopf B., Burges C., and Smola A. (eds), 169-184, Cambridge MA: MIT Press, 1999.
