Removal of Hidden Neurons by Crosswise Propagation
Neural Information Processing - Letters and Reviews, Vol. 6, No. 3, March 2005

LETTER

Removal of Hidden Neurons by Crosswise Propagation

Xun Liang
Department of Management Science and Engineering, Stanford University, CA 94305, USA
Institute of Computer Science and Technology, Peking University, Beijing 100871, China
Email: liangxun@icst.pku.edu.cn

(Submitted on December 3, 2004)

Abstract - A hidden neuron is removed by analyzing the orthogonal-projection correlations among the outputs of the other hidden neurons. The technique of crosswise propagation (CP) is then used to update the remaining weights and thresholds. Experiments illustrate that the method gives better initial points for retraining, so that the retrainings cost fewer epochs.

Keywords - hidden neurons, orthogonal projection, crosswise propagation (CP)

1. Introduction

Scholars have shown enormous interest in removing superfluous weights and hidden neurons from neural networks [10][3][12][13][20]. Too many weights or hidden neurons may lead to overfitting of the data and poor generalization, while too few weights and hidden neurons may not allow the neural network to learn the data sufficiently accurately. The frequently used techniques can be classified into two categories: methods that remove weights and methods that remove hidden neurons. This paper focuses on the second category.

This paper presents a method based on the sensitivity and importance of hidden neurons as measured by orthogonal projection. In addition, the method does not simply take away the less important hidden neurons, as most techniques do. Instead, an approach called weight crosswise propagation (CP) is applied to keep the loss of weight information to a minimum; the CP simply applies the coefficients of the orthogonal projections.

First, we define the hidden output row vector, or simply the hidden row vector. A hidden row vector is composed of the outputs of one hidden neuron with respect to all the training patterns. Second, we define the augmented hidden output row vector, or simply the augmented hidden row vector. An augmented hidden row vector is either a hidden row vector or the row vector (-1, ..., -1).
As is well known, the hidden layer can be compressed if the augmented hidden row vectors are linearly dependent [10][3][12][13][20]. However, in applications there are few cases in which the augmented hidden row vectors are exactly linearly dependent. The common practice is as follows. When people train a network, they choose as few hidden neurons as possible in the beginning. When the network cannot learn the mapping, they add hidden neurons. After the network is trained, they remove some hidden neurons using pruning algorithms and retrain the network.

This paper is a direct extension of the case of linear dependence [13]. Instead of using linear-dependence correlations, this paper employs an orthogonal-projection criterion. In addition, the method can be applied to both binary and real inputs and outputs. The method is divided into two stages. In the first stage, for each hidden row vector we calculate its orthogonal projection onto the space spanned by the other augmented hidden row vectors, and obtain the distance between the orthogonal projection and the hidden row vector itself; then we remove the hidden neuron with the smallest distance. The second stage applies the CP method. Our motivation is that if a hidden row vector can be best approximately expressed by the other augmented hidden row vectors, then the other augmented hidden row vectors might also best express the information provided by this hidden row vector. [16] realized this idea for cascade-correlation neural networks; however, the general neural network architecture was not addressed in [16]. This paper accomplishes that work. In a word, the work in this paper is more general and has wider application potential.
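The two stages just described can be previewed with a minimal numerical sketch. The data below are hypothetical, and numpy's least-squares routine stands in for the explicit projection formulas derived in Section 2:

```python
import numpy as np

# Hypothetical hidden matrix: H = 3 hidden neurons, P = 5 training patterns.
# Row h holds the outputs of hidden neuron h over all P patterns.
H = np.array([[0.2, 0.9, 0.1, 0.8, 0.3],
              [0.7, 0.1, 0.6, 0.2, 0.9],
              [0.5, 0.5, 0.4, 0.5, 0.6]])
bias_row = -np.ones((1, 5))              # the augmented row (-1, ..., -1)
H_aug = np.vstack([bias_row, H])         # augmented hidden matrix

# Stage 1: distance from each hidden row vector to its orthogonal
# projection onto the span of the other augmented hidden row vectors.
dists = []
for j in range(1, H_aug.shape[0]):       # j = 1..H (skip the bias row itself)
    others = np.delete(H_aug, j, axis=0)
    coef, *_ = np.linalg.lstsq(others.T, H_aug[j], rcond=None)
    proj = coef @ others                 # orthogonal projection of row j
    dists.append(float(np.linalg.norm(H_aug[j] - proj)))

i = int(np.argmin(dists)) + 1            # the neuron best expressed by the others
print(dists, i)
```

Stage 2 (the CP step) then folds the removed neuron's output weights into the surviving ones using these same projection coefficients, as Section 2 derives.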
Figure 1. A three-layer perceptron.

We demonstrate our method in the architecture of standard three-layer (one hidden layer) perceptrons. It is not difficult to extend it to multilayer perceptrons. In a three-layer perceptron (see Figure 1) there are N input ports (without neurons), H hidden neurons, and M output neurons. W = {w_hn}_{H×N} denotes the input weight matrix (the one between the input layer and the hidden layer), or simply the input-hidden matrix; K = {k_mh}_{M×H} denotes the output weight matrix (the one between the hidden layer and the output layer), or simply the hidden-output matrix; Θ = (θ_1, ..., θ_H)^τ denotes the threshold column vector in the hidden layer; Ψ = (ψ_1, ..., ψ_M)^τ denotes the threshold column vector in the output layer; and (·)^τ denotes the transpose of (·); n = 1, ..., N, h = 1, ..., H, m = 1, ..., M. The activation function in the hidden layer and the output layer is the sigmoidal function

    f(x) = 1 / (1 + e^{-γ(x-φ)})                (1)

where γ > 0 and φ ∈ R. Let P be the number of training pattern pairs, X_p = (x_1p, ..., x_Np)^τ ∈ [0,1]^N be the input pattern column vector, Z_p = (z_1p, ..., z_Hp)^τ ∈ [0,1]^H be the hidden-layer output column vector, T_p = (t_1p, ..., t_Mp)^τ ∈ [0,1]^M be the output pattern column vector or target vector, and Y_p = (y_1p, ..., y_Mp)^τ ∈ [0,1]^M be the network output vector, p = 1, ..., P. We write the matrix of outputs of the hidden neurons, or simply the hidden matrix, as

    H = [Z_1 ... Z_P] ∈ [0,1]^{H×P}                (2)

where H_h = (z_h1, ..., z_hP) ∈ [0,1]^{1×P} (h = 1, ..., H) are the hidden row vectors. For a fixed training set, H_h is a constant vector. If a vector contains the 0th element (augmented vector), or a matrix contains the 0th row or the 0th column vector (augmented matrix), we assign a 0 to its superscript. Let H_0 = (z_01, ..., z_0P) = (-1, ..., -1). Then the augmented hidden matrix, obtained by adjoining H_0 to H as the 0th row, is

    H^0 ∈ [-1,1]^{(H+1)×P}                (3)

and the augmented hidden-output matrix, whose 0th column is Ψ, is

    K^0 = [Ψ K] ∈ R^{M×(H+1)}.                (4)

If a vector does not contain the i-th element, or a matrix does not contain the i-th row or the i-th column vector, we assign a -i to its superscript (i is the index of the hidden neuron to be removed, with i running from 1 to H). Then the corresponding augmented hidden matrix is
    H^{0-i} ∈ [-1,1]^{H×P}                (5)

and the corresponding augmented hidden-output matrix is

    K^{0-i} = [Ψ K^{-i}] ∈ R^{M×H}.                (6)

This paper is organized as follows. In Section 2 we first present the method of determining a superfluous hidden neuron by the orthogonal-projection criterion; then we show how to update the new weights and thresholds in the reduced network. Simulations are discussed in Section 3. Section 4 concludes the paper.

2. Removal of Hidden Neurons by Crosswise Propagation (CP)

A hidden layer can be compressed if the hidden row vectors are linearly dependent [13]. Suppose that after pruning as above we have H hidden neurons; then H_0, H_1, ..., H_H are linearly independent. Hence

    H + 1 = rank H^0 ≤ P                (7)

or H ≤ P - 1. This means that the number of hidden neurons is always smaller than or equal to the number of training pattern pairs minus one after the network is pruned. If H > P - 1, the method in [13] can be used until H ≤ P - 1. At this time, if the hidden row vectors H_0, H_1, ..., H_H are still linearly dependent, we continue using the method in [13] until H_0, H_1, ..., H_H are linearly independent (for clarity we always use H to denote the current number of hidden neurons; namely, whenever we remove a hidden neuron, we redenote the new number of hidden neurons as H). Consequently, in the following we always suppose that the hidden row vectors H_0, H_1, ..., H_H are linearly independent, namely rank H = H and rank H^0 = H + 1.

The space S^{-j} spanned by H_0 = (-1, ..., -1) and the other hidden row vectors H_k (k = 1, ..., H; k ≠ j) can be expressed as

    S^{-j} = span{H_0, H_1, ..., H_{j-1}, H_{j+1}, ..., H_H}.                (8)

If and only if the rows of the matrix H^{0-j} are linearly independent, then for the vector H_j there exists a unique row vector of coefficients for the linear combination,

    Π^{-j} ∈ R^{1×H},                (9)

minimizing the Euclidean distance [2][5] between Π^{-j} H^{0-j} and H_j, such that Proj(H_j) = Π^{-j} H^{0-j} ∈ S^{-j} (S^{-j} is obviously complete), where Proj(·) is the orthogonal projection of (·) onto the space S^{-j}:

    ||Π^{-j} H^{0-j} - H_j|| = min_ξ ||ξ H^{0-j} - H_j||.                (10)

It is well known [2] that the vector Π^{-j} H^{0-j} - H_j is orthogonal to the rows of H^{0-j},
    [Π^{-j} H^{0-j} - H_j] (H^{0-j})^τ = 0                (11)

and Π^{-j} is determined by

    Π^{-j} [H^{0-j} (H^{0-j})^τ] = H_j (H^{0-j})^τ.                (12)

Since rank H^{0-j} = rank[(H^{0-j})^τ] = H,

    rank[H^{0-j} (H^{0-j})^τ] = min{rank H^{0-j}, rank (H^{0-j})^τ} = H                (13)

where H^{0-j} (H^{0-j})^τ is an H×H matrix. Thus H^{0-j} (H^{0-j})^τ is invertible [2][5] and

    Π^{-j} = H_j (H^{0-j})^τ [H^{0-j} (H^{0-j})^τ]^{-1}.                (14)

We obtain the orthogonal projection of H_j (j = 1, ..., H) onto the space S^{-j},

    Proj(H_j) = Π^{-j} H^{0-j},  j = 1, ..., H                (15)

and the distance between H_j and Proj(H_j),

    d_j = ||H_j - Proj(H_j)||,  j = 1, ..., H.                (16)

We want to remove the hidden neuron with the least distance. From

    d_i = min{d_j, j = 1, ..., H}                (17)

we know that the i-th hidden row vector can be best expressed by the other augmented hidden row vectors. As a result, we can remove the i-th hidden neuron.

We want to use Proj(H_i) = Π^{-i} H^{0-i} to approximate the hidden row vector H_i. The net input to the output layer then satisfies

    net_y = K^0 H^0 = K^{0-i} H^{0-i} + K_i H_i ≈ (K^{0-i} + K_i Π^{-i}) H^{0-i}                (18)

so the new augmented hidden-output matrix is

    (K^{0-i})' = K^{0-i} + K_i Π^{-i}                (19)

where K_i is the i-th column of K^0. By (19), the i-th hidden neuron can be removed while the output weights and the thresholds in the output layer are updated. The process (19) is called weight crosswise propagation (CP), for the information of the removed hidden neuron is propagated crosswise to the other hidden neurons. In the augmented input matrix, we can simply take away the weights connected to the i-th hidden neuron and the threshold of the i-th hidden neuron, while the remaining weights for the other hidden neurons keep unchanged (see Figure 2). After the removal of the i-th hidden neuron, the new network is retrained.

Since the above process is irrelevant to the activation function, it can be used with any form of activation function. In addition, the method is also irrelevant to the inputs and outputs; hence it can be used in the cases of binary and real inputs and outputs without limitation. When the orthogonal projection of a hidden row vector is the vector itself, the orthogonal projection degenerates into the case of linear dependence in [13][14].
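Under the notation above, formulas (14), (16), (18) and (19) can be sketched numerically. The matrices here are random placeholders, and numpy is assumed:

```python
import numpy as np

rng = np.random.default_rng(0)
Hn, M, P = 4, 2, 8                          # H hidden neurons, M outputs, P patterns
H = rng.uniform(0.1, 0.9, (Hn, P))          # hidden matrix, rows H_1..H_H
H_aug = np.vstack([-np.ones((1, P)), H])    # H^0: bias row H_0 on top
K0 = rng.normal(size=(M, Hn + 1))           # K^0 = [Psi K], column 0 = thresholds

i = 2                                       # index picked by (17), say
H_i = H_aug[i]
H_min = np.delete(H_aug, i, axis=0)         # H^{0-i}
# Eq. (14): Pi^{-i} = H_i (H^{0-i})^T [H^{0-i} (H^{0-i})^T]^{-1}
Pi = H_i @ H_min.T @ np.linalg.inv(H_min @ H_min.T)
d_i = np.linalg.norm(H_i - Pi @ H_min)      # distance of eq. (16)
# Eq. (19): fold column K_i into the remaining columns (the CP step)
K_i = K0[:, [i]]
K_new = np.delete(K0, i, axis=1) + K_i @ Pi[None, :]
# Change in the output-layer net input of eq. (18)
err = np.linalg.norm(K0 @ H_aug - K_new @ H_min)
print(d_i, err)
```

Since the difference in (18) is the outer product of K_i with the projection residual, `err` equals ||K_i|| times d_i exactly: the smaller d_i, the less the output-layer net input changes when neuron i is removed.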
Clearly, the linear-dependence criterion leads to a precise transform, while the orthogonal-projection criterion results in an approximation after pruning, and the approximation should be reduced or eliminated by the retraining.

We define d_m/d_i as the relative importance compared with the biggest distance d_m, where d_m = max_j {d_j, j = 1, ..., H; j ≠ i}. If H ≥ 3, we also define d_s/d_i as the relative importance with respect to the second-smallest distance d_s, where d_s = min_j {d_j, j = 1, ..., H; j ≠ i}. In the simulations, we pre-set the values of δ_m and δ_s. If

    d_m/d_i > δ_m  or  d_s/d_i > δ_s                (20)

we remove the i-th hidden neuron. If more than one hidden neuron satisfies one of the conditions, we randomly choose one to remove. Since the magnitudes of d_m/d_i and d_s/d_i are relevant to the dimension P of the hidden row vectors, the pre-set values of δ_m and δ_s are normally functions of P.
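A small sketch of the decision rule (20) in plain Python, with hypothetical distances d_j and the thresholds δ_m = P and δ_s = P/3 that Example 1 pre-sets:

```python
# Hypothetical projection distances d_j for H = 4 hidden neurons, P = 8 patterns.
d = {1: 0.9, 2: 0.05, 3: 0.6, 4: 0.7}
P = 8
delta_m, delta_s = P, P / 3              # pre-set thresholds (functions of P)

i = min(d, key=d.get)                    # eq. (17): candidate with smallest d_j
rest = [v for j, v in d.items() if j != i]
d_m = max(rest)                          # biggest distance among the others
d_s = min(rest)                          # second-smallest distance overall

# Rule (20): remove neuron i if either relative importance is large enough.
remove = d_m / d[i] > delta_m or d_s / d[i] > delta_s
print(i, remove)                         # -> 2 True
```

Here d_m/d_2 = 18 exceeds δ_m = 8, so neuron 2 would be removed and the CP step applied.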
Figure 2. The method of pruning away the hidden neurons. The dotted part will be removed. The hidden-output matrix for the other hidden neurons should be updated by the CP operation of (19), while the input-hidden matrix for the other hidden neurons keeps unchanged.

3. Simulations and Discussions

We use the function thp2.m in the neural network toolbox of the MATLAB packages on Sun's Solaris workstations. In the following examples the activation function is as shown in (1). The weights and thresholds are randomly initialized in the beginning, and a fixed learning rate is used. Since the closest cousin of our method of removing hidden neurons is the method in [10], which is based on the magnitudes of the norms of the hidden row vectors, we only compare our method with that in [10]. Other famous methods, like optimal brain damage or optimal brain surgeon, are most likely to be used for removing weights [4], often leading to different network topologies after pruning; the optimal-brain-damage methods and our method are therefore not perfectly comparable.

More than one hidden neuron could be removed at a time before retraining by the method in this paper. The process is similar. In the first stage, we find the hidden neurons to remove based on the orthogonal projections; if the d_j of more than one hidden row vector are too small, we can remove all of them. In the second stage, the following two CP processes are equivalent: (1) removing one hidden neuron at a time and implementing the CPs one by one; (2) removing a batch of hidden neurons and implementing the CP at one time by similar matrix operations. This is because the CP process actually consists of row operations in a matrix, and performing the operations of linear combinations row by row is equivalent to performing them on a batch of rows together. However, if more than one hidden neuron is removed at one time, the new initial point might not be as close to the global minimum as in the method of removing only one hidden neuron at a time.
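The claimed equivalence of the two CP processes can be checked numerically. The sketch below (hypothetical random data, numpy assumed) removes two hidden neurons one at a time, recomputing the projection at each step, and compares the result with a single batch CP:

```python
import numpy as np

rng = np.random.default_rng(1)
Hn, M, P = 5, 2, 12
H_aug = np.vstack([-np.ones((1, P)), rng.uniform(0.1, 0.9, (Hn, P))])
K0 = rng.normal(size=(M, Hn + 1))

def cp_remove(K, Ha, i):
    """Remove augmented row i, folding column i of K into the rest (eq. 19)."""
    H_min = np.delete(Ha, i, axis=0)
    Pi = Ha[i] @ H_min.T @ np.linalg.inv(H_min @ H_min.T)
    return np.delete(K, i, axis=1) + K[:, [i]] @ Pi[None, :], H_min

# (1) one neuron at a time: remove rows 2 and 4 (the index of the old
# row 4 shifts to 3 after the first deletion)
K_seq, H_seq = cp_remove(K0, H_aug, 2)
K_seq, H_seq = cp_remove(K_seq, H_seq, 3)

# (2) a batch of neurons at one time, by the analogous matrix operations
keep, drop = [0, 1, 3, 5], [2, 4]
H_keep = H_aug[keep]
C = H_aug[drop] @ H_keep.T @ np.linalg.inv(H_keep @ H_keep.T)
K_batch = K0[:, keep] + K0[:, drop] @ C

print(np.allclose(K_seq, K_batch))       # the two routes agree
```

The agreement holds because folding a removed row's weights first into an intermediate set and then onward is, coefficient-wise, the same linear combination as projecting directly onto the final set of rows.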
Hence, in the following, we always remove hidden neurons one by one.

Example 1. The parity problem. We pre-set δ_m = P and δ_s = P/3. The permissible sum-square-root error of the network is e_δ = 0.5.

Experiment 1. N = 3, H = 6, P = 8. After 723 epochs, e ≤ e_δ = 0.5. The norms and d_j of the augmented hidden row vectors are shown in Table 1. Since d_m/d_3 = d_4/d_3 = 10.3 > δ_m = P = 8, the 3rd hidden neuron can be removed, so the H = 6 network is compressed into a network with H = 5. The retraining costs 78 epochs to reach e ≤ e_δ = 0.5. The number of retraining epochs depends on many factors, such as d_s/d_i, d_m/d_i, the learning rate, and the shape of the error hypersurface around the starting point for the retraining. For comparison, we also remove each hidden neuron, from the 1st to the 6th, without the CP process, and then retrain the reduced network; the results are shown in Table 1. Noticing that the 1st hidden neuron has the smallest norm, it is the one removed by the method in [10], and the retraining then costs 244 epochs, while based on our method the 3rd hidden neuron is selected for removal and the retraining costs only 78 epochs, saving about 2/3 of the retraining epochs compared with the method in [10].

Table 1. The epochs of retraining after removing the 3rd hidden neuron with the CP are shown in row 5, and the epochs of retraining after removing each of the hidden neurons without the CP are shown in row 4. The 1st hidden neuron has the smallest norm; the 3rd hidden neuron has the smallest d_j.

Experiment 2. N = 3, H = 4, P = 8. We test 20 times. All of the tests successfully remove one hidden neuron and reach an H = 3 network by repeatedly using the steps above. In Table 2, after 79 epochs, e ≤ e_δ = 0.5. We find that d_m/d_2 = d_4/d_2 = 10.92 > δ_s = P/3 = 8/3, so the 2nd hidden neuron can be removed by the orthogonal-projection criterion. Noticing that the 2nd hidden neuron also has the smallest norm, it is likewise the one removed by the method in [10] (see row 4 in Table 2). The epochs of retraining with and without the CP are shown in Table 2. Based on our method, the 2nd hidden neuron is selected for removal, and the retraining with the CP costs only 492 epochs (see row 5 in Table 2), saving about 2/3 of the retraining epochs.

Table 2. The epochs of retraining after removing the 2nd hidden neuron with the CP are shown in row 5, and the epochs of retraining after removing each of the hidden neurons without the CP are shown in row 4. The 2nd hidden neuron has both the smallest norm and the smallest d_j.

Table 3. The shortest, second-shortest, and maximal distances d_j, and the epochs of retraining for each architecture. In columns 2-4, H > P - 1 = 7 and H_j ∈ S^{-j}, so the minimum, the second minimum, and the maximum of d_j are all zero. In columns 2-5 and 7, no retrainings are needed, since the orthogonal projections degenerate into the case of linear dependence. The rows list d_i (minimum of d_j), d_s (second minimum of d_j), d_s/d_i, d_m (maximum of d_j), d_m/d_i, and the epochs of retraining with the CP.

For comparison, the method in [10] is used with networks starting from the same initial architecture.
The total number of retraining epochs before the network is cut down to H = 3 is much larger, and in the first trial the retraining is trapped in a local minimum when H = 4. The reason for the large number of retraining epochs is that, each time a hidden neuron is removed, the retraining starts almost from the very beginning.

Example 2. Neurocontrol system. We pre-set δ_m = 2 and δ_s = (d_m - d_i)/3; δ_m is set to an absolute value because we do not want it too large. The permissible sum-square-root error e_δ of the network is pre-set as well. The traditional way is to keep the network architecture unchanged. However, in practice, in order to train fast online, some hidden neurons sometimes have to be added during training; when the network has been trained with the patterns for the specific control task, the superfluous hidden neurons are removed for better generalization. As in [18][19], at the beginning we set N = 6, H = 4 in the neural network and let the system follow the square wave r(t). The elements of the input pattern vector are r(t), r(t-1), u(t), u(t-1), y(t), y(t-1); that is, the input pattern vector is X_t = (r(t), r(t-1), u(t), u(t-1), y(t), y(t-1))^τ ∈ [0,1]^6, and the element of the output pattern vector is y(t); that is, the output pattern vector is T_t = y(t) ∈ [0,1], t = 1, 2, ....

The neural network learns the control online, without any prior knowledge in the beginning. If the neural network cannot learn the patterns, a new hidden neuron is added. At the time when the control law is learned, H = 12. Then we try to prune away some hidden neurons; finally, a network with H = 5 is obtained. The learning process of the neurocontroller is shown in Figure 3 (the dotted wave).

Figure 3. The neurocontrol result for fast learning of the square wave. The dotted wave is the learning process of the neurocontroller: the dotted wave in the first valley of the square wave is the training process, during which hidden neurons are added, and the dotted wave in the second valley is the retraining process, during which some hidden neurons are removed.

From the above experiments, we have the following observations:
(1) The smallest d_i does not always correspond to the smallest norm of the hidden row vector, especially when H is relatively large (see Table 1). However, when H approaches its lower bound, the hidden row vector with the smallest norm frequently has the smallest d_i (see Table 2).
(2) In large-scale networks, small d_i can often be found; as a result, such networks are easier to compress. Clearly, when the network is quite superfluous, the d_j are quite small; namely, the hidden row vectors H_j have great potential to express each other. As H decreases, the maximal and minimal values of d_j tend to increase, along with the decrease of the redundancy of the network (see Table 3).
(3) The CP loses less weight information, so it gives better initial points for retraining; in other words, the initial points for retraining are not too far from the global minima, and the retrainings do not start from the very beginning. This normally leads to less retraining time (see Tables 1-2).
(4) Removal of a hidden neuron with a larger value of d_i normally needs more retraining time than removal of one with a smaller value of d_i.
If d_i is very small, sometimes no retraining at all is needed to reach the permissible error e_δ (see Tables 1-3).

4. Concluding Remarks

The orthogonal-projection criterion is very simple to use. The hidden neurons with small norms often have small d_j, but the hidden neurons with small d_j may not have small norms. In other words, the field of applicability of the method in this paper is wider than that of the magnitude-based pruning method [10]; at the least, the orthogonal-projection criterion is an excellent alternative to [10]. An architecture that cannot be compressed by one of the two methods may still be compressed by the other. The two methods are complementary, and a proper usage of their respective strengths and weaknesses should lead to synergetic effects beneficial to their common goal.

It should be mentioned that error hypersurfaces are very complex [15], and there are many factors influencing the selection of hidden neurons and the effectiveness of compression. It is acknowledged that the method in this paper is definitely not a method for realizing the smallest network architecture; instead, it is just one effective and practical pruning method.
References

[1] F. L. Bauer, Elimination with weighted row combinations for solving linear equations and least squares problems, in: J. H. Wilkinson, C. Reinsch (eds.), Linear Algebra, Springer-Verlag, 1971.
[2] A. Ben-Israel, T. E. Greville, Generalized Inverses: Theory and Applications, Wiley-Interscience, 1974.
[3] E. Cantu-Paz, Pruning neural networks with distribution estimation algorithms, in: Lecture Notes in Computer Science, Springer-Verlag, 2003.
[4] Y. L. Cun, J. S. Denker, S. A. Solla, Optimal brain damage, Proc. of IEEE Conf. on Neural Information Processing Systems, Denver, 1989.
[5] C. L. DeVito, Functional Analysis and Linear Operator Theory, Addison-Wesley, 1990.
[6] V. Egel-Danielson, M. F. Augusteijn, Neural network pruning and its effect on generalization: some experimental results, Neural, Parallel and Scientific Computations, Vol. 1, 1993.
[7] A. P. Engelbrecht, A new pruning heuristic based on variance analysis of sensitivity information, IEEE Trans. on Neural Networks, Vol. 12, No. 6, 2001.
[8] A. P. Engelbrecht, L. Fletcher, I. Cloete, Variance analysis of sensitivity information for pruning multilayer feedforward neural networks, Proc. of Int. Joint Conf. on Neural Networks, Washington, D.C., 1999.
[9] L. Fletcher, V. Katkovnik, F. E. Steffens, Optimizing the number of hidden nodes of a feedforward artificial neural network, Proc. of IEEE World Congress on Computational Intelligence, Anchorage, 1998.
[10] M. Hagiwara, A simple and effective method for removal of hidden units and weights, Neurocomputing, Vol. 6, 1994.
[11] B. Hassibi, D. G. Stork, Second order derivatives for network pruning: optimal brain surgeon, Proc. of Neural Information Processing Systems, Vol. 5, 1993.
[12] Y. Hirose, K. Yamashita, S. Hijiya, Back-propagation algorithm which varies the number of hidden units, Neural Networks, Vol. 4, 1991.
[13] X. Liang, Methods of digging tunnels into the error hypersurfaces, Neural, Parallel and Scientific Computations, Vol. 1, 1993.
[14] X. Liang, Network expansion and network compression, Proc. of IEEE Int. Conf. on Neural Networks, Perth, 1995.
[15] X. Liang, Complexity of error hypersurfaces in multilayer perceptrons with binary pattern sets, Int. Journal of Neural Systems, Vol. 14, No. 3, 2004.
[16] X. Liang, A study of removing hidden neurons in cascade-correlation neural networks, Proc. of Int. Joint Conf. on Neural Networks, Budapest, 2004.
[17] B. Müller, J. Reinhardt, Neural Networks: An Introduction, Springer-Verlag, 1990.
[18] K. S. Narendra, K. Parthasarathy, Gradient methods for the optimization of dynamical systems containing neural networks, IEEE Trans. on Neural Networks, Vol. 2, 1991.
[19] Y. H. Pao, S. M. Phillips, D. J. Sobajic, Neural computing and intelligent control systems, Int. Journal of Control.
[20] R. Reed, Pruning algorithms: a survey, IEEE Trans. on Neural Networks, Vol. 4, 1993.

Xun Liang received his Ph.D. in Computer Engineering from Tsinghua University and a BA from Stanford University. He is currently an associate professor at the Institute of Computer Science and Technology at Peking University. His research interests include neural networks, financial information processing, and financial information systems.
Vol., No.3, 04-09 (009) do:0.436/ns.009.307 Natural Scence Wavelet chaotc neural networks and ther applcaton to contnuous functon optmzaton Ja-Ha Zhang, Yao-Qun Xu College of Electrcal and Automatc Engneerng,
More informationHongyi Miao, College of Science, Nanjing Forestry University, Nanjing ,China. (Received 20 June 2013, accepted 11 March 2014) I)ϕ (k)
ISSN 1749-3889 (prnt), 1749-3897 (onlne) Internatonal Journal of Nonlnear Scence Vol.17(2014) No.2,pp.188-192 Modfed Block Jacob-Davdson Method for Solvng Large Sparse Egenproblems Hongy Mao, College of
More information1 Matrix representations of canonical matrices
1 Matrx representatons of canoncal matrces 2-d rotaton around the orgn: ( ) cos θ sn θ R 0 = sn θ cos θ 3-d rotaton around the x-axs: R x = 1 0 0 0 cos θ sn θ 0 sn θ cos θ 3-d rotaton around the y-axs:
More information2 STATISTICALLY OPTIMAL TRAINING DATA 2.1 A CRITERION OF OPTIMALITY We revew the crteron of statstcally optmal tranng data (Fukumzu et al., 1994). We
Advances n Neural Informaton Processng Systems 8 Actve Learnng n Multlayer Perceptrons Kenj Fukumzu Informaton and Communcaton R&D Center, Rcoh Co., Ltd. 3-2-3, Shn-yokohama, Yokohama, 222 Japan E-mal:
More informationLecture 4: November 17, Part 1 Single Buffer Management
Lecturer: Ad Rosén Algorthms for the anagement of Networs Fall 2003-2004 Lecture 4: November 7, 2003 Scrbe: Guy Grebla Part Sngle Buffer anagement In the prevous lecture we taled about the Combned Input
More informationKernel Methods and SVMs Extension
Kernel Methods and SVMs Extenson The purpose of ths document s to revew materal covered n Machne Learnng 1 Supervsed Learnng regardng support vector machnes (SVMs). Ths document also provdes a general
More informationPower law and dimension of the maximum value for belief distribution with the max Deng entropy
Power law and dmenson of the maxmum value for belef dstrbuton wth the max Deng entropy Bngy Kang a, a College of Informaton Engneerng, Northwest A&F Unversty, Yanglng, Shaanx, 712100, Chna. Abstract Deng
More informationDr. Shalabh Department of Mathematics and Statistics Indian Institute of Technology Kanpur
Analyss of Varance and Desgn of Experment-I MODULE VII LECTURE - 3 ANALYSIS OF COVARIANCE Dr Shalabh Department of Mathematcs and Statstcs Indan Insttute of Technology Kanpur Any scentfc experment s performed
More informationKernels in Support Vector Machines. Based on lectures of Martin Law, University of Michigan
Kernels n Support Vector Machnes Based on lectures of Martn Law, Unversty of Mchgan Non Lnear separable problems AND OR NOT() The XOR problem cannot be solved wth a perceptron. XOR Per Lug Martell - Systems
More informationImproved delay-dependent stability criteria for discrete-time stochastic neural networks with time-varying delays
Avalable onlne at www.scencedrect.com Proceda Engneerng 5 ( 4456 446 Improved delay-dependent stablty crtera for dscrete-tme stochastc neural networs wth tme-varyng delays Meng-zhuo Luo a Shou-mng Zhong
More informationSingular Value Decomposition: Theory and Applications
Sngular Value Decomposton: Theory and Applcatons Danel Khashab Sprng 2015 Last Update: March 2, 2015 1 Introducton A = UDV where columns of U and V are orthonormal and matrx D s dagonal wth postve real
More informationGaussian Mixture Models
Lab Gaussan Mxture Models Lab Objectve: Understand the formulaton of Gaussan Mxture Models (GMMs) and how to estmate GMM parameters. You ve already seen GMMs as the observaton dstrbuton n certan contnuous
More informationModule 9. Lecture 6. Duality in Assignment Problems
Module 9 1 Lecture 6 Dualty n Assgnment Problems In ths lecture we attempt to answer few other mportant questons posed n earler lecture for (AP) and see how some of them can be explaned through the concept
More informationNon-linear Canonical Correlation Analysis Using a RBF Network
ESANN' proceedngs - European Smposum on Artfcal Neural Networks Bruges (Belgum), 4-6 Aprl, d-sde publ., ISBN -97--, pp. 57-5 Non-lnear Canoncal Correlaton Analss Usng a RBF Network Sukhbnder Kumar, Elane
More informationIntroduction to Vapor/Liquid Equilibrium, part 2. Raoult s Law:
CE304, Sprng 2004 Lecture 4 Introducton to Vapor/Lqud Equlbrum, part 2 Raoult s Law: The smplest model that allows us do VLE calculatons s obtaned when we assume that the vapor phase s an deal gas, and
More informationComparison of Regression Lines
STATGRAPHICS Rev. 9/13/2013 Comparson of Regresson Lnes Summary... 1 Data Input... 3 Analyss Summary... 4 Plot of Ftted Model... 6 Condtonal Sums of Squares... 6 Analyss Optons... 7 Forecasts... 8 Confdence
More informationChapter 12 Analysis of Covariance
Chapter Analyss of Covarance Any scentfc experment s performed to know somethng that s unknown about a group of treatments and to test certan hypothess about the correspondng treatment effect When varablty
More informationLecture 10 Support Vector Machines II
Lecture 10 Support Vector Machnes II 22 February 2016 Taylor B. Arnold Yale Statstcs STAT 365/665 1/28 Notes: Problem 3 s posted and due ths upcomng Frday There was an early bug n the fake-test data; fxed
More informationErratum: A Generalized Path Integral Control Approach to Reinforcement Learning
Journal of Machne Learnng Research 00-9 Submtted /0; Publshed 7/ Erratum: A Generalzed Path Integral Control Approach to Renforcement Learnng Evangelos ATheodorou Jonas Buchl Stefan Schaal Department of
More informationAPPROXIMATE PRICES OF BASKET AND ASIAN OPTIONS DUPONT OLIVIER. Premia 14
APPROXIMAE PRICES OF BASKE AND ASIAN OPIONS DUPON OLIVIER Prema 14 Contents Introducton 1 1. Framewor 1 1.1. Baset optons 1.. Asan optons. Computng the prce 3. Lower bound 3.1. Closed formula for the prce
More informationOn the Multicriteria Integer Network Flow Problem
BULGARIAN ACADEMY OF SCIENCES CYBERNETICS AND INFORMATION TECHNOLOGIES Volume 5, No 2 Sofa 2005 On the Multcrtera Integer Network Flow Problem Vassl Vasslev, Marana Nkolova, Maryana Vassleva Insttute of
More informationMMA and GCMMA two methods for nonlinear optimization
MMA and GCMMA two methods for nonlnear optmzaton Krster Svanberg Optmzaton and Systems Theory, KTH, Stockholm, Sweden. krlle@math.kth.se Ths note descrbes the algorthms used n the author s 2007 mplementatons
More informationThe Minimum Universal Cost Flow in an Infeasible Flow Network
Journal of Scences, Islamc Republc of Iran 17(2): 175-180 (2006) Unversty of Tehran, ISSN 1016-1104 http://jscencesutacr The Mnmum Unversal Cost Flow n an Infeasble Flow Network H Saleh Fathabad * M Bagheran
More informationVQ widely used in coding speech, image, and video
at Scalar quantzers are specal cases of vector quantzers (VQ): they are constraned to look at one sample at a tme (memoryless) VQ does not have such constrant better RD perfomance expected Source codng
More informationCHAPTER III Neural Networks as Associative Memory
CHAPTER III Neural Networs as Assocatve Memory Introducton One of the prmary functons of the bran s assocatve memory. We assocate the faces wth names, letters wth sounds, or we can recognze the people
More informationarxiv:cs.cv/ Jun 2000
Correlaton over Decomposed Sgnals: A Non-Lnear Approach to Fast and Effectve Sequences Comparson Lucano da Fontoura Costa arxv:cs.cv/0006040 28 Jun 2000 Cybernetc Vson Research Group IFSC Unversty of São
More informationSingle-Facility Scheduling over Long Time Horizons by Logic-based Benders Decomposition
Sngle-Faclty Schedulng over Long Tme Horzons by Logc-based Benders Decomposton Elvn Coban and J. N. Hooker Tepper School of Busness, Carnege Mellon Unversty ecoban@andrew.cmu.edu, john@hooker.tepper.cmu.edu
More informationIV. Performance Optimization
IV. Performance Optmzaton A. Steepest descent algorthm defnton how to set up bounds on learnng rate mnmzaton n a lne (varyng learnng rate) momentum learnng examples B. Newton s method defnton Gauss-Newton
More informationInner Product. Euclidean Space. Orthonormal Basis. Orthogonal
Inner Product Defnton 1 () A Eucldean space s a fnte-dmensonal vector space over the reals R, wth an nner product,. Defnton 2 (Inner Product) An nner product, on a real vector space X s a symmetrc, blnear,
More informationSpeeding up Computation of Scalar Multiplication in Elliptic Curve Cryptosystem
H.K. Pathak et. al. / (IJCSE) Internatonal Journal on Computer Scence and Engneerng Speedng up Computaton of Scalar Multplcaton n Ellptc Curve Cryptosystem H. K. Pathak Manju Sangh S.o.S n Computer scence
More informationCHAPTER IV RESEARCH FINDING AND ANALYSIS
CHAPTER IV REEARCH FINDING AND ANALYI A. Descrpton of Research Fndngs To fnd out the dfference between the students who were taught by usng Mme Game and the students who were not taught by usng Mme Game
More informationComparison of the Population Variance Estimators. of 2-Parameter Exponential Distribution Based on. Multiple Criteria Decision Making Method
Appled Mathematcal Scences, Vol. 7, 0, no. 47, 07-0 HIARI Ltd, www.m-hkar.com Comparson of the Populaton Varance Estmators of -Parameter Exponental Dstrbuton Based on Multple Crtera Decson Makng Method
More informationA New Refinement of Jacobi Method for Solution of Linear System Equations AX=b
Int J Contemp Math Scences, Vol 3, 28, no 17, 819-827 A New Refnement of Jacob Method for Soluton of Lnear System Equatons AX=b F Naem Dafchah Department of Mathematcs, Faculty of Scences Unversty of Gulan,
More informationFeature Selection: Part 1
CSE 546: Machne Learnng Lecture 5 Feature Selecton: Part 1 Instructor: Sham Kakade 1 Regresson n the hgh dmensonal settng How do we learn when the number of features d s greater than the sample sze n?
More informationNUMERICAL DIFFERENTIATION
NUMERICAL DIFFERENTIATION 1 Introducton Dfferentaton s a method to compute the rate at whch a dependent output y changes wth respect to the change n the ndependent nput x. Ths rate of change s called the
More informationA Bayes Algorithm for the Multitask Pattern Recognition Problem Direct Approach
A Bayes Algorthm for the Multtask Pattern Recognton Problem Drect Approach Edward Puchala Wroclaw Unversty of Technology, Char of Systems and Computer etworks, Wybrzeze Wyspanskego 7, 50-370 Wroclaw, Poland
More informationPsychology 282 Lecture #24 Outline Regression Diagnostics: Outliers
Psychology 282 Lecture #24 Outlne Regresson Dagnostcs: Outlers In an earler lecture we studed the statstcal assumptons underlyng the regresson model, ncludng the followng ponts: Formal statement of assumptons.
More informationn α j x j = 0 j=1 has a nontrivial solution. Here A is the n k matrix whose jth column is the vector for all t j=0
MODULE 2 Topcs: Lnear ndependence, bass and dmenson We have seen that f n a set of vectors one vector s a lnear combnaton of the remanng vectors n the set then the span of the set s unchanged f that vector
More informationAtmospheric Environmental Quality Assessment RBF Model Based on the MATLAB
Journal of Envronmental Protecton, 01, 3, 689-693 http://dxdoorg/10436/jep0137081 Publshed Onlne July 01 (http://wwwscrporg/journal/jep) 689 Atmospherc Envronmental Qualty Assessment RBF Model Based on
More informationDetermining Transmission Losses Penalty Factor Using Adaptive Neuro Fuzzy Inference System (ANFIS) For Economic Dispatch Application
7 Determnng Transmsson Losses Penalty Factor Usng Adaptve Neuro Fuzzy Inference System (ANFIS) For Economc Dspatch Applcaton Rony Seto Wbowo Maurdh Hery Purnomo Dod Prastanto Electrcal Engneerng Department,
More informationMulti-layer neural networks
Lecture 0 Mult-layer neural networks Mlos Hauskrecht mlos@cs.ptt.edu 5329 Sennott Square Lnear regresson w Lnear unts f () Logstc regresson T T = w = p( y =, w) = g( w ) w z f () = p ( y = ) w d w d Gradent
More informationA PROBABILITY-DRIVEN SEARCH ALGORITHM FOR SOLVING MULTI-OBJECTIVE OPTIMIZATION PROBLEMS
HCMC Unversty of Pedagogy Thong Nguyen Huu et al. A PROBABILITY-DRIVEN SEARCH ALGORITHM FOR SOLVING MULTI-OBJECTIVE OPTIMIZATION PROBLEMS Thong Nguyen Huu and Hao Tran Van Department of mathematcs-nformaton,
More informationU.C. Berkeley CS294: Spectral Methods and Expanders Handout 8 Luca Trevisan February 17, 2016
U.C. Berkeley CS94: Spectral Methods and Expanders Handout 8 Luca Trevsan February 7, 06 Lecture 8: Spectral Algorthms Wrap-up In whch we talk about even more generalzatons of Cheeger s nequaltes, and
More informationThe internal structure of natural numbers and one method for the definition of large prime numbers
The nternal structure of natural numbers and one method for the defnton of large prme numbers Emmanul Manousos APM Insttute for the Advancement of Physcs and Mathematcs 3 Poulou str. 53 Athens Greece Abstract
More informationSome Comments on Accelerating Convergence of Iterative Sequences Using Direct Inversion of the Iterative Subspace (DIIS)
Some Comments on Acceleratng Convergence of Iteratve Sequences Usng Drect Inverson of the Iteratve Subspace (DIIS) C. Davd Sherrll School of Chemstry and Bochemstry Georga Insttute of Technology May 1998
More informationDesign and Optimization of Fuzzy Controller for Inverse Pendulum System Using Genetic Algorithm
Desgn and Optmzaton of Fuzzy Controller for Inverse Pendulum System Usng Genetc Algorthm H. Mehraban A. Ashoor Unversty of Tehran Unversty of Tehran h.mehraban@ece.ut.ac.r a.ashoor@ece.ut.ac.r Abstract:
More informationA Hybrid Variational Iteration Method for Blasius Equation
Avalable at http://pvamu.edu/aam Appl. Appl. Math. ISSN: 1932-9466 Vol. 10, Issue 1 (June 2015), pp. 223-229 Applcatons and Appled Mathematcs: An Internatonal Journal (AAM) A Hybrd Varatonal Iteraton Method
More informationGrover s Algorithm + Quantum Zeno Effect + Vaidman
Grover s Algorthm + Quantum Zeno Effect + Vadman CS 294-2 Bomb 10/12/04 Fall 2004 Lecture 11 Grover s algorthm Recall that Grover s algorthm for searchng over a space of sze wors as follows: consder the
More informationStatistical Foundations of Pattern Recognition
Statstcal Foundatons of Pattern Recognton Learnng Objectves Bayes Theorem Decson-mang Confdence factors Dscrmnants The connecton to neural nets Statstcal Foundatons of Pattern Recognton NDE measurement
More informationA new Approach for Solving Linear Ordinary Differential Equations
, ISSN 974-57X (Onlne), ISSN 974-5718 (Prnt), Vol. ; Issue No. 1; Year 14, Copyrght 13-14 by CESER PUBLICATIONS A new Approach for Solvng Lnear Ordnary Dfferental Equatons Fawz Abdelwahd Department of
More informationModule 3 LOSSY IMAGE COMPRESSION SYSTEMS. Version 2 ECE IIT, Kharagpur
Module 3 LOSSY IMAGE COMPRESSION SYSTEMS Verson ECE IIT, Kharagpur Lesson 6 Theory of Quantzaton Verson ECE IIT, Kharagpur Instructonal Objectves At the end of ths lesson, the students should be able to:
More informationChapter 9: Statistical Inference and the Relationship between Two Variables
Chapter 9: Statstcal Inference and the Relatonshp between Two Varables Key Words The Regresson Model The Sample Regresson Equaton The Pearson Correlaton Coeffcent Learnng Outcomes After studyng ths chapter,
More informationCONTRAST ENHANCEMENT FOR MIMIMUM MEAN BRIGHTNESS ERROR FROM HISTOGRAM PARTITIONING INTRODUCTION
CONTRAST ENHANCEMENT FOR MIMIMUM MEAN BRIGHTNESS ERROR FROM HISTOGRAM PARTITIONING N. Phanthuna 1,2, F. Cheevasuvt 2 and S. Chtwong 2 1 Department of Electrcal Engneerng, Faculty of Engneerng Rajamangala
More informationTwo Methods to Release a New Real-time Task
Two Methods to Release a New Real-tme Task Abstract Guangmng Qan 1, Xanghua Chen 2 College of Mathematcs and Computer Scence Hunan Normal Unversty Changsha, 410081, Chna qqyy@hunnu.edu.cn Gang Yao 3 Sebel
More informationFinding Dense Subgraphs in G(n, 1/2)
Fndng Dense Subgraphs n Gn, 1/ Atsh Das Sarma 1, Amt Deshpande, and Rav Kannan 1 Georga Insttute of Technology,atsh@cc.gatech.edu Mcrosoft Research-Bangalore,amtdesh,annan@mcrosoft.com Abstract. Fndng
More information1 Convex Optimization
Convex Optmzaton We wll consder convex optmzaton problems. Namely, mnmzaton problems where the objectve s convex (we assume no constrants for now). Such problems often arse n machne learnng. For example,
More informationOne-sided finite-difference approximations suitable for use with Richardson extrapolation
Journal of Computatonal Physcs 219 (2006) 13 20 Short note One-sded fnte-dfference approxmatons sutable for use wth Rchardson extrapolaton Kumar Rahul, S.N. Bhattacharyya * Department of Mechancal Engneerng,
More informationLab 2e Thermal System Response and Effective Heat Transfer Coefficient
58:080 Expermental Engneerng 1 OBJECTIVE Lab 2e Thermal System Response and Effectve Heat Transfer Coeffcent Warnng: though the experment has educatonal objectves (to learn about bolng heat transfer, etc.),
More informationNeural Networks. Perceptrons and Backpropagation. Silke Bussen-Heyen. 5th of Novemeber Universität Bremen Fachbereich 3. Neural Networks 1 / 17
Neural Networks Perceptrons and Backpropagaton Slke Bussen-Heyen Unverstät Bremen Fachberech 3 5th of Novemeber 2012 Neural Networks 1 / 17 Contents 1 Introducton 2 Unts 3 Network structure 4 Snglelayer
More information