Pattern Recognition 2014 Support Vector Machines


1 Pattern Recognition 2014: Support Vector Machines
Ad Feelders, Universiteit Utrecht

2 Overview
1 Separable Case
2 Kernel Functions
3 Allowing Errors (Soft Margin)
4 SVMs in R

3 Linear Classifier for two classes
Linear model
$$y(\mathbf{x}) = \mathbf{w}^\top \phi(\mathbf{x}) + b \qquad (7.1)$$
with $t_n \in \{-1, +1\}$. Predict $t_0 = +1$ if $y(\mathbf{x}_0) \geq 0$ and $t_0 = -1$ otherwise. The decision boundary is given by $y(\mathbf{x}) = 0$. This is a linear classifier in feature space $\phi(\mathbf{x})$.

4 Mapping φ
$$y(\mathbf{x}) = \mathbf{w}^\top \phi(\mathbf{x}) + b = 0$$
$\phi$ maps $\mathbf{x}$ into a higher-dimensional space where the data is linearly separable.

5 Data linearly separable
Assume the training data is linearly separable in feature space, so there is at least one choice of $\mathbf{w}$, $b$ such that:
1 $y(\mathbf{x}_n) > 0$ for $t_n = +1$;
2 $y(\mathbf{x}_n) < 0$ for $t_n = -1$;
that is, all training points are classified correctly. Putting 1. and 2. together:
$$t_n y(\mathbf{x}_n) > 0 \qquad n = 1, \ldots, N$$

6 Maximum Margin
There may be many solutions that separate the classes exactly. Which one gives the smallest prediction error? The SVM chooses the line with maximal margin, where the margin is the distance between the line and the closest data point.

7 Two-class training data (figure)

8 Many Linear Separators (figure)

9 Decision Boundary (figure)

10 Maximize Margin (figure)

11 Support Vectors (figure)

12 Weight vector is orthogonal to the decision boundary
Consider two points $\mathbf{x}_A$ and $\mathbf{x}_B$, both of which lie on the decision surface. Because $y(\mathbf{x}_A) = y(\mathbf{x}_B) = 0$, we have
$$(\mathbf{w}^\top \mathbf{x}_A + b) - (\mathbf{w}^\top \mathbf{x}_B + b) = \mathbf{w}^\top(\mathbf{x}_A - \mathbf{x}_B) = 0$$
and so the vector $\mathbf{w}$ is orthogonal to the decision surface.

13 Distance of a point to a line (figure: a point at signed distance $r$ from the line $y(\mathbf{x}) = \mathbf{w}^\top\mathbf{x} + b = 0$, with $\mathbf{w}$ normal to the line; axes $x_1$, $x_2$)

14 Distance to decision surface ($\phi(\mathbf{x}) = \mathbf{x}$)
We have
$$\mathbf{x} = \mathbf{x}_\perp + r \frac{\mathbf{w}}{\|\mathbf{w}\|} \qquad (4.6)$$
where $\mathbf{w}/\|\mathbf{w}\|$ is the unit vector in the direction of $\mathbf{w}$, $\mathbf{x}_\perp$ is the orthogonal projection of $\mathbf{x}$ onto the line $y(\mathbf{x}) = 0$, and $r$ is the (signed) distance of $\mathbf{x}$ to the line. Multiply (4.6) left and right by $\mathbf{w}^\top$ and add $b$:
$$\underbrace{\mathbf{w}^\top\mathbf{x} + b}_{y(\mathbf{x})} = \underbrace{\mathbf{w}^\top\mathbf{x}_\perp + b}_{0} + r\,\frac{\mathbf{w}^\top\mathbf{w}}{\|\mathbf{w}\|}$$
So we get
$$r = \frac{y(\mathbf{x})\,\|\mathbf{w}\|}{\|\mathbf{w}\|^2} = \frac{y(\mathbf{x})}{\|\mathbf{w}\|} \qquad (4.7)$$

15 Distance of a point to a line
The signed distance of $\mathbf{x}_n$ to the decision boundary is
$$r = \frac{y(\mathbf{x}_n)}{\|\mathbf{w}\|}$$
For lines that separate the data perfectly, we have $t_n y(\mathbf{x}_n) = |y(\mathbf{x}_n)|$, so that the distance is given by
$$\frac{t_n y(\mathbf{x}_n)}{\|\mathbf{w}\|} = \frac{t_n(\mathbf{w}^\top \phi(\mathbf{x}_n) + b)}{\|\mathbf{w}\|} \qquad (7.2)$$
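To make (7.2) concrete, here is a minimal R sketch (the weight vector, bias and data points are invented for illustration) that computes the signed distances of a few points to a given line:

w <- c(2, 1)                           # hypothetical weight vector
b <- -4                                # hypothetical bias
X <- rbind(c(3, 2), c(0, 1), c(1, 4))  # one row per data point
t <- c(+1, -1, +1)                     # class labels in {-1, +1}
y <- X %*% w + b                       # y(x_n) = w'x_n + b
r <- (t * y) / sqrt(sum(w^2))          # signed distances (7.2);
r                                      # positive iff x_n is correctly classified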

16 Maximum margin solution
Solve
$$\arg\max_{\mathbf{w}, b} \left\{ \frac{1}{\|\mathbf{w}\|} \min_n \left[ t_n(\mathbf{w}^\top \phi(\mathbf{x}_n) + b) \right] \right\} \qquad (7.3)$$
Since $\frac{1}{\|\mathbf{w}\|}$ does not depend on $n$, it can be moved outside of the minimization. Direct solution of this problem would be rather complex. A more convenient representation is possible.

17 Canonical Representation
The hyperplane (decision boundary) is defined by
$$\mathbf{w}^\top \phi(\mathbf{x}) + b = 0$$
Then also
$$\kappa(\mathbf{w}^\top \phi(\mathbf{x}) + b) = \kappa\mathbf{w}^\top \phi(\mathbf{x}) + \kappa b = 0$$
so rescaling $\mathbf{w} \to \kappa\mathbf{w}$ and $b \to \kappa b$ gives just another representation of the same decision boundary. Choose the scaling factor such that
$$t_i(\mathbf{w}^\top \phi(\mathbf{x}_i) + b) = 1 \qquad (7.4)$$
for the point $\mathbf{x}_i$ closest to the decision boundary.

18 Canonical Representation (figure: squares have $t = +1$, circles $t = -1$; the lines $y(\mathbf{x}) = 1$, $y(\mathbf{x}) = 0$ and $y(\mathbf{x}) = -1$ mark the margin boundaries and the decision boundary)

19 Canonical Representation
In this case we have
$$t_n(\mathbf{w}^\top \phi(\mathbf{x}_n) + b) \geq 1 \qquad n = 1, \ldots, N \qquad (7.5)$$
Quadratic program:
$$\arg\min_{\mathbf{w}, b} \frac{1}{2}\|\mathbf{w}\|^2 \qquad (7.6)$$
subject to the constraints (7.5). This optimization problem has a unique global minimum.

20 Lagrangian Function
Introduce Lagrange multipliers $a_n \geq 0$ to get the Lagrangian function
$$L(\mathbf{w}, b, \mathbf{a}) = \frac{1}{2}\|\mathbf{w}\|^2 - \sum_{n=1}^N a_n \{t_n(\mathbf{w}^\top \phi(\mathbf{x}_n) + b) - 1\} \qquad (7.7)$$
with
$$\frac{\partial L(\mathbf{w}, b, \mathbf{a})}{\partial \mathbf{w}} = \mathbf{w} - \sum_{n=1}^N a_n t_n \phi(\mathbf{x}_n)$$

21 Lagrangian Function
and for $b$:
$$\frac{\partial L(\mathbf{w}, b, \mathbf{a})}{\partial b} = -\sum_{n=1}^N a_n t_n$$
Equating the derivatives to zero yields the conditions
$$\mathbf{w} = \sum_{n=1}^N a_n t_n \phi(\mathbf{x}_n) \qquad (7.8)$$
and
$$\sum_{n=1}^N a_n t_n = 0 \qquad (7.9)$$

22 Dual Representation
Eliminating $\mathbf{w}$ and $b$ from $L(\mathbf{w}, b, \mathbf{a})$ using (7.8) and (7.9) gives the dual representation:
$$\begin{aligned}
L(\mathbf{w}, b, \mathbf{a}) &= \frac{1}{2}\|\mathbf{w}\|^2 - \sum_{n=1}^N a_n \{t_n(\mathbf{w}^\top \phi(\mathbf{x}_n) + b) - 1\} \\
&= \frac{1}{2}\|\mathbf{w}\|^2 - \sum_{n=1}^N a_n t_n \mathbf{w}^\top \phi(\mathbf{x}_n) - b \sum_{n=1}^N a_n t_n + \sum_{n=1}^N a_n \\
&= \frac{1}{2}\sum_{n=1}^N \sum_{m=1}^N a_n t_n a_m t_m \phi(\mathbf{x}_n)^\top \phi(\mathbf{x}_m) - \sum_{n=1}^N \sum_{m=1}^N a_n t_n a_m t_m \phi(\mathbf{x}_n)^\top \phi(\mathbf{x}_m) + \sum_{n=1}^N a_n \\
&= \sum_{n=1}^N a_n - \frac{1}{2}\sum_{n=1}^N \sum_{m=1}^N a_n t_n a_m t_m \phi(\mathbf{x}_n)^\top \phi(\mathbf{x}_m)
\end{aligned}$$

23 Dual Representation
Maximize
$$L(\mathbf{a}) = \sum_{n=1}^N a_n - \frac{1}{2}\sum_{n,m=1}^N a_n t_n a_m t_m \phi(\mathbf{x}_n)^\top \phi(\mathbf{x}_m) \qquad (7.10)$$
with respect to $\mathbf{a}$ and subject to the constraints
$$a_n \geq 0, \qquad n = 1, \ldots, N \qquad (7.11)$$
$$\sum_{n=1}^N a_n t_n = 0 \qquad (7.12)$$

24 Kernel Function
We map to a high-dimensional space $\phi(\mathbf{x})$ in which the data is linearly separable. Performing computations in this high-dimensional space may be very expensive. Use a kernel function $k$ that computes a dot product in this space (without making the actual mapping):
$$k(\mathbf{x}, \mathbf{x}') = \phi(\mathbf{x})^\top \phi(\mathbf{x}')$$

25 Example: polynomial kernel
Suppose $\mathbf{x} \in \mathbb{R}^3$ and $\phi(\mathbf{x}) \in \mathbb{R}^{10}$ with
$$\phi(\mathbf{x}) = (1, \sqrt{2}x_1, \sqrt{2}x_2, \sqrt{2}x_3, x_1^2, x_2^2, x_3^2, \sqrt{2}x_1 x_2, \sqrt{2}x_1 x_3, \sqrt{2}x_2 x_3)$$
Then
$$\phi(\mathbf{x})^\top \phi(\mathbf{z}) = 1 + 2x_1 z_1 + 2x_2 z_2 + 2x_3 z_3 + x_1^2 z_1^2 + x_2^2 z_2^2 + x_3^2 z_3^2 + 2x_1 x_2 z_1 z_2 + 2x_1 x_3 z_1 z_3 + 2x_2 x_3 z_2 z_3$$
But this can be written as
$$(1 + \mathbf{x}^\top \mathbf{z})^2 = (1 + x_1 z_1 + x_2 z_2 + x_3 z_3)^2$$
which costs far fewer operations to compute.

26 Polynomial kernel: numeric example
Suppose $\mathbf{x} = (3, 2, 6)$ and $\mathbf{z} = (4, 1, 5)$. Then
$$\phi(\mathbf{x}) = (1, 3\sqrt{2}, 2\sqrt{2}, 6\sqrt{2}, 9, 4, 36, 6\sqrt{2}, 18\sqrt{2}, 12\sqrt{2})$$
$$\phi(\mathbf{z}) = (1, 4\sqrt{2}, \sqrt{2}, 5\sqrt{2}, 16, 1, 25, 4\sqrt{2}, 20\sqrt{2}, 5\sqrt{2})$$
Then $\phi(\mathbf{x})^\top \phi(\mathbf{z}) = 2025$. But
$$(1 + \mathbf{x}^\top \mathbf{z})^2 = (1 + (3)(4) + (2)(1) + (6)(5))^2 = 45^2 = 2025$$
is a more efficient way to compute this dot product.
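The arithmetic above is easy to verify in R; this small sketch reproduces both sides of the identity for x = (3, 2, 6) and z = (4, 1, 5):

phi <- function(x) c(1, sqrt(2)*x[1], sqrt(2)*x[2], sqrt(2)*x[3],
                     x[1]^2, x[2]^2, x[3]^2,
                     sqrt(2)*x[1]*x[2], sqrt(2)*x[1]*x[3], sqrt(2)*x[2]*x[3])
x <- c(3, 2, 6); z <- c(4, 1, 5)
sum(phi(x) * phi(z))   # explicit dot product in 10 dimensions: 2025
(1 + sum(x * z))^2     # kernel trick, working in 3 dimensions only: 2025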

27 Kernels
Linear kernel:
$$k(\mathbf{x}, \mathbf{x}') = \mathbf{x}^\top \mathbf{x}'$$
Two popular non-linear kernels are the polynomial kernel
$$k(\mathbf{x}, \mathbf{x}') = (\mathbf{x}^\top \mathbf{x}' + c)^M$$
and the Gaussian (or radial) kernel
$$k(\mathbf{x}, \mathbf{x}') = \exp(-\|\mathbf{x} - \mathbf{x}'\|^2 / 2\sigma^2), \qquad (6.23)$$
or
$$k(\mathbf{x}, \mathbf{x}') = \exp(-\gamma \|\mathbf{x} - \mathbf{x}'\|^2),$$
where $\gamma = \frac{1}{2\sigma^2}$.
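For later reference, the three kernels are one-liners in R (the default parameter values here are arbitrary choices, not prescribed by the slides):

k_linear <- function(x, z) sum(x * z)
k_poly   <- function(x, z, c = 1, M = 2) (sum(x * z) + c)^M
k_radial <- function(x, z, gamma = 0.5) exp(-gamma * sum((x - z)^2))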

28 Dual Representation with kernels
Using $k(\mathbf{x}, \mathbf{x}') = \phi(\mathbf{x})^\top \phi(\mathbf{x}')$ we get the dual representation: maximize
$$L(\mathbf{a}) = \sum_{n=1}^N a_n - \frac{1}{2}\sum_{n,m=1}^N a_n t_n a_m t_m k(\mathbf{x}_n, \mathbf{x}_m) \qquad (7.10)$$
with respect to $\mathbf{a}$ and subject to the constraints
$$a_n \geq 0, \qquad n = 1, \ldots, N \qquad (7.11)$$
$$\sum_{n=1}^N a_n t_n = 0 \qquad (7.12)$$
Is this dual easier than the original problem?

29 Prediction
Recall that
$$y(\mathbf{x}) = \mathbf{w}^\top \phi(\mathbf{x}) + b \qquad (7.1)$$
Substituting
$$\mathbf{w} = \sum_{n=1}^N a_n t_n \phi(\mathbf{x}_n) \qquad (7.8)$$
into (7.1), we get
$$y(\mathbf{x}) = b + \sum_{n=1}^N a_n t_n k(\mathbf{x}, \mathbf{x}_n) \qquad (7.13)$$
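Equation (7.13) translates directly into R. A sketch: the multipliers a, labels t, training matrix X and bias b are assumed to come from some dual solver, and kern is any kernel function such as those defined above.

svm_decision <- function(x, X, t, a, b, kern) {
  # y(x) = b + sum_n a_n t_n k(x, x_n)   -- equation (7.13)
  b + sum(a * t * apply(X, 1, function(xn) kern(x, xn)))
}
svm_predict <- function(x, X, t, a, b, kern) {
  ifelse(svm_decision(x, X, t, a, b, kern) >= 0, +1, -1)
}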

30 Prediction: support vectors
KKT conditions:
$$a_n \geq 0 \qquad (7.14)$$
$$t_n y(\mathbf{x}_n) - 1 \geq 0 \qquad (7.15)$$
$$a_n \{t_n y(\mathbf{x}_n) - 1\} = 0 \qquad (7.16)$$
From (7.16) it follows that for every data point, either
1 $a_n = 0$, or
2 $t_n y(\mathbf{x}_n) = 1$.
The former play no role in making predictions (see 7.13), and the latter are the support vectors that lie on the maximum margin hyperplanes. Only the support vectors play a role in predicting the class of new attribute vectors!

31 Prediction: computing b
Since for any support vector $\mathbf{x}_n$ we have $t_n y(\mathbf{x}_n) = 1$, we can use (7.13) to get
$$t_n \left( b + \sum_{m \in S} a_m t_m k(\mathbf{x}_n, \mathbf{x}_m) \right) = 1, \qquad (7.17)$$
where $S$ denotes the set of support vectors. Hence we have
$$t_n b + t_n \sum_{m \in S} a_m t_m k(\mathbf{x}_n, \mathbf{x}_m) = 1$$
$$t_n b = 1 - t_n \sum_{m \in S} a_m t_m k(\mathbf{x}_n, \mathbf{x}_m)$$
$$b = t_n - \sum_{m \in S} a_m t_m k(\mathbf{x}_n, \mathbf{x}_m) \qquad (7.17a)$$
since $t_n \in \{-1, +1\}$ and so $1/t_n = t_n$.

32 Prediction: computing b
A numerically more stable solution is obtained by averaging (7.17a) over all support vectors:
$$b = \frac{1}{N_S} \sum_{n \in S} \left( t_n - \sum_{m \in S} a_m t_m k(\mathbf{x}_n, \mathbf{x}_m) \right) \qquad (7.18)$$
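A sketch of (7.18) in R, with the same assumed ingredients as above; support vectors are taken to be the points whose multiplier exceeds a small numerical tolerance:

svm_bias <- function(X, t, a, kern, tol = 1e-8) {
  S <- which(a > tol)                                    # support vector indices
  K <- outer(S, S, Vectorize(function(n, m) kern(X[n, ], X[m, ])))
  mean(t[S] - K %*% (a[S] * t[S]))                       # average of (7.17a) over S
}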

33 Prediction: Example
We receive the following output from the optimization software for fitting a support vector machine with linear kernel and perfect separation of the training data: a table with one row per training point $n$, with columns $x_{n,1}$, $x_{n,2}$, $t_n$ and $a_n$. (The numeric entries of the table were not preserved in this transcription.)

34 Prediction: Example
The figure below is a plot of the same data set, where the dots represent points with class $-1$, and the crosses points with class $+1$. (figure)

35 Prediction: Example
(a) Compute the value of the SVM bias term $b$. Data points with $a_n > 0$ are support vectors. Take the support vector $x_1 = 4$, $x_2 = 4$ with class label $+1$, and apply (7.17a):
$$b = t_m - \sum_{n=1}^N a_n t_n \mathbf{x}_n^\top \mathbf{x}_m$$
evaluated at $\mathbf{x}_m = (4, 4)$. (The numeric multipliers and the resulting value of $b$ were not preserved in this transcription.)
(b) Which class does the SVM predict for the data point $x_1 = 5$, $x_2 = 2$? Evaluate
$$y(\mathbf{x}) = b + \sum_{n=1}^N a_n t_n \mathbf{x}_n^\top \mathbf{x}$$
at $\mathbf{x} = (5, 2)$. Since the sign is positive, we predict class $+1$.

36 Prediction: Example
Decision boundary and support vectors. (figure)

37 Allowing Errors
So far we assumed that the training data points are linearly separable in feature space $\phi(\mathbf{x})$. The resulting SVM gives exact separation of the training data in the original input space, with a non-linear decision boundary. Class distributions typically overlap, in which case exact separation of the training data leads to poor generalization (overfitting).

38 Allowing Errors
Data points are allowed to be on the wrong side of the margin boundary, but with a penalty that increases with the distance from that boundary. For convenience we make this penalty a linear function of the distance to the margin boundary. Introduce slack variables $\xi_n \geq 0$, with one slack variable for each training data point.

39 Definition of Slack Variables
We define $\xi_n = 0$ for data points that are on or inside the correct margin boundary, and $\xi_n = |t_n - y(\mathbf{x}_n)|$ for all other data points. (Figure: points with $\xi = 0$ on or inside the correct margin boundary, $0 < \xi < 1$ inside the margin but correctly classified, and $\xi > 1$ misclassified; the lines $y(\mathbf{x}) = 1$, $y(\mathbf{x}) = 0$ and $y(\mathbf{x}) = -1$ are shown.)

40 New Constraints
The exact classification constraints
$$t_n y(\mathbf{x}_n) \geq 1 \qquad n = 1, \ldots, N \qquad (7.5)$$
are replaced by
$$t_n y(\mathbf{x}_n) \geq 1 - \xi_n \qquad n = 1, \ldots, N \qquad (7.20)$$
Check (7.20): $\xi_n = 0$ for data points that are on or inside the correct margin boundary; in that case $y_n t_n \geq 1$. Suppose $t_n = +1$ and $\mathbf{x}_n$ is on the wrong side of the margin boundary, i.e. $y_n t_n < 1$. Since $t_n = +1$ we have $y_n = y_n t_n$, so
$$\xi_n = |t_n - y_n| = 1 - y_n = 1 - y_n t_n$$
and therefore $t_n y_n = 1 - \xi_n$. The case $t_n = -1$ is analogous.
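The slack values are exactly the hinge losses $\xi_n = \max(0, 1 - t_n y_n)$, which agrees with the definition on slide 39. In R, for some invented decision values:

t  <- c(+1, +1, -1, -1)
y  <- c(1.7, 0.4, -2.0, 0.3)   # hypothetical decision values y(x_n)
xi <- pmax(0, 1 - t * y)       # gives 0.0, 0.6, 0.0, 1.3
xi                             # xi = 0: on/inside correct margin boundary;
                               # 0 < xi <= 1: inside margin, still correct;
                               # xi > 1: misclassified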

41 New objective function
Our goal is to maximize the margin while softly penalizing points that lie on the wrong side of the margin boundary. We therefore minimize
$$C \sum_{n=1}^N \xi_n + \frac{1}{2}\|\mathbf{w}\|^2 \qquad (7.21)$$
where the parameter $C > 0$ controls the trade-off between the slack variable penalty and the margin. Alternative view (divide by $C$ and put $\lambda = \frac{1}{2C}$):
$$\sum_{n=1}^N \xi_n + \lambda \|\mathbf{w}\|^2$$
The first term represents lack-of-fit (hinge loss) and the second term takes care of regularization.

42 Optimization Problem
The Lagrangian is given by
$$L(\mathbf{w}, b, \mathbf{a}) = \frac{1}{2}\|\mathbf{w}\|^2 + C\sum_{n=1}^N \xi_n - \sum_{n=1}^N a_n \{t_n y(\mathbf{x}_n) - 1 + \xi_n\} - \sum_{n=1}^N \mu_n \xi_n \qquad (7.22)$$
where $a_n \geq 0$ and $\mu_n \geq 0$ are Lagrange multipliers. The KKT conditions are given by:
$$a_n \geq 0 \qquad (7.23)$$
$$t_n y(\mathbf{x}_n) - 1 + \xi_n \geq 0 \qquad (7.24)$$
$$a_n (t_n y(\mathbf{x}_n) - 1 + \xi_n) = 0 \qquad (7.25)$$
$$\mu_n \geq 0 \qquad (7.26)$$
$$\xi_n \geq 0 \qquad (7.27)$$
$$\mu_n \xi_n = 0 \qquad (7.28)$$

43 Dual
Take the derivative with respect to $\mathbf{w}$, $b$ and $\xi_n$ and equate to zero:
$$\frac{\partial L}{\partial \mathbf{w}} = 0 \;\Rightarrow\; \mathbf{w} = \sum_{n=1}^N a_n t_n \phi(\mathbf{x}_n) \qquad (7.29)$$
$$\frac{\partial L}{\partial b} = 0 \;\Rightarrow\; \sum_{n=1}^N a_n t_n = 0 \qquad (7.30)$$
$$\frac{\partial L}{\partial \xi_n} = 0 \;\Rightarrow\; a_n = C - \mu_n \qquad (7.31)$$

44 Dual
Using these to eliminate $\mathbf{w}$, $b$ and $\xi_n$ from the Lagrangian, we obtain the dual Lagrangian: maximize
$$L(\mathbf{a}) = \sum_{n=1}^N a_n - \frac{1}{2}\sum_{n,m=1}^N a_n t_n a_m t_m k(\mathbf{x}_n, \mathbf{x}_m) \qquad (7.32)$$
with respect to $\mathbf{a}$ and subject to the constraints
$$0 \leq a_n \leq C, \qquad n = 1, \ldots, N \qquad (7.33)$$
$$\sum_{n=1}^N a_n t_n = 0. \qquad (7.34)$$
Note: we have $a_n \leq C$ since $\mu_n \geq 0$ (7.26) and $a_n = C - \mu_n$ (7.31).
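To see that the dual really is tractable, here is a minimal sketch that feeds (7.32)-(7.34) to the generic QP solver in the quadprog package. This is for illustration only, not how e1071/LIBSVM solves it internally; a small ridge is added because the matrix with entries $t_n t_m k(\mathbf{x}_n, \mathbf{x}_m)$ may be only positive semidefinite, which solve.QP does not accept.

library(quadprog)
svm_dual <- function(X, t, C = 1, kern = function(x, z) sum(x * z)) {
  N <- nrow(X)
  K <- outer(1:N, 1:N, Vectorize(function(n, m) kern(X[n, ], X[m, ])))
  Q <- (t %o% t) * K + diag(1e-8, N)    # ridge for numerical stability
  # solve.QP minimizes (1/2) a'Qa - d'a subject to A'a >= b0,
  # where the first meq constraints hold with equality:
  # column 1: t'a = 0; then a_n >= 0; then -a_n >= -C
  sol <- solve.QP(Dmat = Q, dvec = rep(1, N),
                  Amat = cbind(t, diag(N), -diag(N)),
                  bvec = c(0, rep(0, N), rep(-C, N)), meq = 1)
  sol$solution                          # the multipliers a_n
}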

45 Prediction
Recall that
$$y(\mathbf{x}) = \mathbf{w}^\top \phi(\mathbf{x}) + b \qquad (7.1)$$
Substituting
$$\mathbf{w} = \sum_{n=1}^N a_n t_n \phi(\mathbf{x}_n) \qquad (7.8)$$
into (7.1), we get
$$y(\mathbf{x}) = \sum_{n=1}^N a_n t_n k(\mathbf{x}, \mathbf{x}_n) + b \qquad (7.13)$$
with $k(\mathbf{x}, \mathbf{x}_n) = \phi(\mathbf{x})^\top \phi(\mathbf{x}_n)$.

46 Interpretation of Solution
We distinguish two cases:
Points with $a_n = 0$ do not play a role in making predictions.
Points with $a_n > 0$ are called support vectors. It follows from KKT condition
$$a_n (t_n y(\mathbf{x}_n) - 1 + \xi_n) = 0 \qquad (7.25)$$
that for these points $t_n y_n = 1 - \xi_n$. Again we have two cases:
If $a_n < C$ then $\mu_n > 0$, because $a_n = C - \mu_n$. Since $\mu_n \xi_n = 0$ (7.28), it follows that $\xi_n = 0$ and hence such points lie on the margin.
Points with $a_n = C$ can be on the margin or inside the margin, and can either be correctly classified if $\xi_n \leq 1$ or misclassified if $\xi_n > 1$.
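Given multipliers from a solver such as the sketch above, the three kinds of points are easy to tabulate (a sketch; a and C as before, with a small tolerance for floating-point comparisons):

tol <- 1e-8
table(cut(a, breaks = c(-Inf, tol, C - tol, Inf),
          labels = c("a = 0 (unused)",
                     "0 < a < C (on margin)",
                     "a = C (in margin or misclassified)")))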

47 Computing the intercept
To compute the value of $b$, we use the fact that those support vectors with $0 < a_n < C$ have $\xi_n = 0$, so that $t_n y(\mathbf{x}_n) = 1$; so as before we have
$$b = t_n - \sum_{m \in S} a_m t_m k(\mathbf{x}_n, \mathbf{x}_m) \qquad (7.17a)$$
Again a numerically more stable solution is obtained by averaging (7.17a) over the set $M$ of all data points having $0 < a_n < C$:
$$b = \frac{1}{N_M} \sum_{n \in M} \left( t_n - \sum_{m \in S} a_m t_m k(\mathbf{x}_n, \mathbf{x}_m) \right) \qquad (7.37)$$

48 Model Selection
As usual we are confronted with the problem of selecting the appropriate model complexity. The relevant parameters are $C$ and any parameters of the chosen kernel function.

49 How to in R
> conn.svm.lin <- svm(cause ~ sodium + co2, data=conn.dat, kernel="linear")
> plot(conn.svm.lin, conn.dat)
> conn.svm.lin.predict <- predict(conn.svm.lin, conn.dat[,1:2])
> table(conn.dat[,3], conn.svm.lin.predict)
(confusion matrix output not preserved in this transcription)

50 Conn's syndrome: linear kernel (figure: SVM classification plot, axes sodium and co2)

51 How to in R
> conn.svm.rad <- svm(cause ~ sodium + co2, data=conn.dat)
> plot(conn.svm.rad, conn.dat)
> conn.svm.rad.predict <- predict(conn.svm.rad, conn.dat[,1:2])
> table(conn.dat[,3], conn.svm.rad.predict)
(No kernel argument is given: svm uses the radial kernel by default. The confusion matrix output was not preserved in this transcription.)

52 Conn's syndrome: radial kernel, C = 1 (figure: SVM classification plot, axes sodium and co2)

53 How to in R
> conn.svm.rad <- svm(cause ~ sodium + co2, data=conn.dat, cost=100)
> plot(conn.svm.rad, conn.dat)
> conn.svm.rad.predict <- predict(conn.svm.rad, conn.dat[,1:2])
> table(conn.dat[,3], conn.svm.rad.predict)
(confusion matrix output not preserved in this transcription)

54 Conn's syndrome: radial kernel, C = 100 (figure: SVM classification plot, axes sodium and co2)
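Rather than trying cost values by hand as above, $C$ and the kernel parameter $\gamma$ are usually chosen by cross-validation. e1071 provides tune.svm for a grid search; a sketch on the same data (the grids here are arbitrary starting points):

> conn.tune <- tune.svm(cause ~ sodium + co2, data=conn.dat, cost=10^(-1:3), gamma=2^(-3:3))
> summary(conn.tune)    # cross-validation error for each parameter setting
> conn.tune$best.model  # SVM refitted with the best parameters found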

55 SVM in R
LIBSVM is available in package e1071 in R. It can also perform regression and non-binary classification. Non-binary classification is performed as follows: train $K(K-1)/2$ binary SVMs on all possible pairs of classes. To classify a new point, let it be classified by every binary SVM, and pick the class with the highest number of votes. This is done automatically by function svm in e1071.
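The one-versus-one scheme is transparent to the user. For example, a three-class fit on R's built-in iris data trains $K(K-1)/2 = 3$ pairwise SVMs behind the scenes (a quick illustration, not from the slides):

> library(e1071)
> iris.svm <- svm(Species ~ ., data=iris)      # 3 classes, so 3 pairwise SVMs
> table(iris$Species, predict(iris.svm, iris)) # confusion matrix on training data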
