COMP 551 Applied Machine Learning Lecture 9: Support Vector Machines (cont'd)


1 COMP 551 Applied Machine Learning Lecture 9: Support Vector Machines (cont'd)
Instructor: Herke van Hoof
Slides mostly by:
Class web page:
Unless otherwise noted, all material posted for this course is copyright of the instructors, and cannot be reused or reposted without the instructors' written permission.

2 Quizzes
Quizzes for:
- Instance-based learning
- Support vector machines
Now available for self-test.

3 Recap: primal and dual
Consider both solutions:
p* = min_w max_{α: α_i ≥ 0} L(w, α)   (primal)
d* = max_{α: α_i ≥ 0} min_w L(w, α)   (dual)
The optimal w*, α* are the same, as the following hold: f and the g_i are convex, and the g_i can all be satisfied simultaneously (for linearly separable data).
Thus we can choose whichever is easier to solve:
- Dual: cubic in n (#examples)
- Primal: cubic in m (#features)

4 Recap: primal and dual
d* = max_{α: α_i ≥ 0} min_w L(w, α)   (dual)
Solution strategy, first step: solve the inner problem (min_w L(w, α)) by taking the derivative of L(w, α) with respect to w, setting it to 0, and solving for w:
L(w, α) = ½||w||² + Σ_i α_i (1 − y_i (w^T x_i))
∂L/∂w = w − Σ_i α_i y_i x_i = 0   ⟹   w* = Σ_i α_i y_i x_i
Just like for the perceptron with zero initial weights, the optimal solution w* is a linear combination of the x_i. We can plug this in, and now find the α that maximize the outer problem.

5 Recap: primal and dual
Result of the first step: the Lagrangian and the solution of the inner problem are
d* = max_{α: α_i ≥ 0} min_w L(w, α)
L(w, α) = ½||w||² + Σ_i α_i (1 − y_i (w^T x_i))
w* = Σ_i α_i y_i x_i
Solution strategy, second step: plug this back into L to get the dual:
max_{α: α_i ≥ 0} L(w*, α) = max_{α: α_i ≥ 0} ½||w*||² + Σ_i α_i (1 − y_i (w*^T x_i))
Use a bit of algebra to get
max_{α: α_i ≥ 0} Σ_i α_i − ½ Σ_{i,j} y_i y_j α_i α_j (x_i^T x_j)
with constraints α_i ≥ 0 and Σ_i α_i y_i = 0. This is a quadratic programming problem.
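Since the dual is a standard quadratic program, any off-the-shelf QP solver can handle it. Below is a minimal sketch (not from the slides), assuming numpy and the cvxopt package; the names X (an n×m data matrix) and y (labels in {−1, +1}) are illustrative.

```python
# Minimal sketch: solving the hard-margin SVM dual with a generic QP solver.
# Assumes cvxopt is available; X is (n, m), y is an array of labels in {-1, +1}.
import numpy as np
from cvxopt import matrix, solvers

def svm_dual_hard(X, y):
    n = X.shape[0]
    K = X @ X.T                                  # Gram matrix of dot products
    P = matrix(np.outer(y, y) * K)               # P_ij = y_i y_j (x_i . x_j)
    q = matrix(-np.ones(n))                      # maximize sum(alpha) => minimize -1^T alpha
    G = matrix(-np.eye(n))                       # -alpha_i <= 0, i.e. alpha_i >= 0
    h = matrix(np.zeros(n))
    A = matrix(y.reshape(1, -1).astype(float))   # sum_i alpha_i y_i = 0
    b = matrix(0.0)
    alpha = np.ravel(solvers.qp(P, q, G, h, A, b)['x'])
    w = (alpha * y) @ X                          # w* = sum_i alpha_i y_i x_i
    return alpha, w
```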

6 Comparison with logistic regression
In logistic regression, the loss was:
- −log(σ(b + w^T x_i)) for samples from the positive class
- −log(1 − σ(b + w^T x_i)) for samples from the negative class
Since log(1 − σ(b + w^T x_i)) = log(σ(−(b + w^T x_i))), for labels y_i = +1/−1 we can write the total loss as:
− Σ_{i=1:n} log(σ(y_i (b + w^T x_i)))

7 Comparison with logistic regression
In logistic regression (with labels +1/−1), the loss was:
− Σ_{i=1:n} log(σ(y_i (b + w^T x_i))) + λ||w||²   (we can include regularization)
For support vector machines, we have the loss
Σ_{i=1:n} E_∞(y_i (b + w^T x_i) − 1) + λ||w||²
where E_∞(z) is 0 if z ≥ 0, and ∞ otherwise.
λ does not matter in this case! Only works for linearly separable data (for now!)

8 SVM formulation
SVM problem: min ½||w||² w.r.t. w, s.t. y_i w^T x_i ≥ 1
[Figure: separating hyperplane defined by w, with margin M and points x_i at distances γ_i from the boundary.]
This can be solved with quadratic programming.

9 Non-linearly separable data
A linear boundary might be too simple to capture the data.
Option 1: Relax the constraints and allow some points to be misclassified or to fall within the margin.
Option 2: Allow a nonlinear decision boundary in the input space by finding a linear decision boundary in an expanded space (similar to adding polynomial terms in linear regression). Here x_i is replaced by ɸ(x_i), where ɸ is called a feature mapping.

10 Soften the primal objective
We wanted to solve: min_w ½||w||² s.t. y_i w^T x_i ≥ 1
This can be re-written: min_w Σ_i E_∞(y_i (b + w^T x_i) − 1) + ½||w||²
where E_∞ gives ∞ for a point inside the margin, 0 otherwise.
Soften the misclassification cost: min_w Σ_i L_0-1(w^T x_i, y_i) + ½||w||²
where L_0-1(w^T x_i, y_i) is 1 for a misclassification, 0 for a correct classification.
But this is a non-convex objective!

11–12 Soften the primal objective
Non-convex objective? A function is convex if the average of f(x_1) and f(x_2) is at least f((x_1 + x_2)/2), for all x_1, x_2. In a graph, this means: when you connect two points on the graph, the connecting segment is never below the curve.
[Figure: the zero-one loss as a function of z_i = y_i (b + w^T x_i) is not convex.]

13–16 Approximation of the L_0-1 function
[Figure: the 0-1 loss L_0-1 plotted against z_i = y_i (b + w^T x_i), together with three convex surrogates: the hinge loss, the logistic regression loss, and the quadratic loss. Copyright: C. Bishop, Pattern Recognition and Machine Learning]

17 SVM with hinge loss
Hinge loss: L_hin(w^T x_i, y_i) = max{1 − y_i w^T x_i, 0}
Soften the misclassification cost: min_w C Σ_i L_hin(w^T x_i, y_i) + ½||w||²
where C controls the trade-off between the slack penalty and the margin.
The hinge loss upper-bounds the 0-1 loss: L_hin(w^T x_i, y_i) ≥ L_0-1(w^T x_i, y_i)
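That upper bound is easy to check numerically; a minimal numpy sketch (the margin values in z are made up for illustration):

```python
# Hinge loss vs. 0-1 loss on a few signed margins z_i = y_i (w^T x_i).
import numpy as np

z = np.array([-1.5, -0.2, 0.0, 0.4, 1.0, 2.3])   # signed margins
hinge = np.maximum(1.0 - z, 0.0)                  # max{1 - z, 0}
zero_one = (z <= 0).astype(float)                 # 1 for a misclassification, else 0

assert np.all(hinge >= zero_one)                  # hinge upper-bounds the 0-1 loss
```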

18 Primal soft SVM problem
Define slack variables ξ_i = L_hin(w^T x_i, y_i) = max{1 − y_i (w^T x_i + b), 0}
Solve: ŵ_soft = argmin_{w,ξ} C Σ_{i=1:n} ξ_i + ½||w||²
s.t. y_i (w^T x_i + b) ≥ 1 − ξ_i, i = 1, ..., n   ⟸ multiplier α_i
     ξ_i ≥ 0, i = 1, ..., n                      ⟸ multiplier β_i
where w ∈ R^m, ξ ∈ R^n.
Introduce Lagrange multipliers:
α = (α_1, α_2, ..., α_n)^T, 0 ≤ α_i
β = (β_1, β_2, ..., β_n)^T, 0 ≤ β_i

19–22 Soft SVM problem: adding Lagrange multipliers
Primal objective: (w, ξ, α, β) = argmin_{w,ξ} max_{α,β} L(w, ξ, α, β)
where L(w, ξ, α, β) = ½||w||² + C Σ_{i=1:n} ξ_i − Σ_{i=1:n} α_i (y_i (w^T x_i + b) − 1 + ξ_i) − Σ_{i=1:n} β_i ξ_i
Dual (invert min and max): (w, ξ, α, β) = argmax_{α,β} min_{w,ξ} L(w, ξ, α, β)
Solve:
∂L/∂w = w − Σ_i α_i y_i x_i = 0   ⟹   w* = Σ_i α_i y_i x_i
∂L/∂ξ = C·1_n − α − β = 0         ⟹   β = C·1_n − α
Lagrange multipliers are non-negative, so we have: 0 ≤ β_i and 0 ≤ α_i ≤ C.
Plug into the dual:
max_α Σ_i α_i − ½ Σ_{i,j} y_i y_j α_i α_j (x_i · x_j)
with constraints 0 ≤ α_i ≤ C and Σ_i α_i y_i = 0.
This is a quadratic programming problem (similar to the hard SVM).
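In code, the only change relative to the hard-margin sketch above is the box constraint 0 ≤ α_i ≤ C; again a minimal sketch assuming cvxopt:

```python
# Soft-margin dual: same QP as before, with the extra upper bound alpha_i <= C.
import numpy as np
from cvxopt import matrix, solvers

def svm_dual_soft(X, y, C=1.0):
    n = X.shape[0]
    P = matrix(np.outer(y, y) * (X @ X.T))
    q = matrix(-np.ones(n))
    G = matrix(np.vstack([-np.eye(n), np.eye(n)]))       # -alpha <= 0 and alpha <= C
    h = matrix(np.hstack([np.zeros(n), C * np.ones(n)]))
    A = matrix(y.reshape(1, -1).astype(float))           # sum_i alpha_i y_i = 0
    b = matrix(0.0)
    return np.ravel(solvers.qp(P, q, G, h, A, b)['x'])
```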

23–29 Soft SVM solution
Soft-SVM has one more constraint: 0 ≤ α_i ≤ C (vs. 0 ≤ α_i in the hard SVM). When C → ∞, soft-SVM → hard-SVM.
Points away from the margin have α_i = 0 and ξ_i = 0. The following have α_i > 0:
- Points on the margin: ξ_i = 0
- Points within the margin: 0 < ξ_i < 1
- Points on the decision line: ξ_i = 1
- Misclassified points: ξ_i > 1
To predict on test data: h_w(x) = sign(Σ_{i=1:n} α_i y_i (x_i · x))
We only need to store the support vectors (i.e. points with α_i > 0) to predict.
[Figure: decision boundary w^T x = 0, with each training point annotated by its α_j and ξ_j values as listed above.]
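A minimal sketch of that prediction rule, keeping only the support vectors (names are illustrative; alpha would come from a solver such as the dual QP sketched earlier):

```python
# Predict using only the support vectors (alpha_i > 0).
import numpy as np

def svm_predict(X_train, y_train, alpha, X_test, tol=1e-8):
    sv = alpha > tol                        # boolean mask of support vectors
    # h(x) = sign( sum_i alpha_i y_i (x_i . x) ), sum over support vectors only
    scores = X_test @ X_train[sv].T @ (alpha[sv] * y_train[sv])
    return np.sign(scores)
```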

30 Soft SVM, bias & variance
ŵ_soft = argmin_{w,ξ} C Σ_{i=1:n} ξ_i + ½||w||²
When C → ∞, soft-SVM → hard-SVM. What does lowering C do to bias and variance?
- Increase bias and decrease variance?
- Decrease bias and increase variance?

31 Non-linearly separable data (recap)
A linear boundary might be too simple to capture the data. Option 1 (relaxing the constraints and allowing some points to violate the margin) was handled above with slack variables.
Option 2: Allow a nonlinear decision boundary in the input space by finding a linear decision boundary in an expanded space (similar to adding polynomial terms in linear regression). Here x_i is replaced by ɸ(x_i), where ɸ is called a feature mapping.

32–33 Margin optimization in feature space
Replacing x_i by ɸ(x_i), the optimization problem for w becomes:
Primal form: min ½||w||² w.r.t. w, s.t. y_i (w^T ɸ(x_i) + b) ≥ 1
Dual form: max Σ_i α_i − ½ Σ_{i,j} y_i y_j α_i α_j (ɸ(x_i) · ɸ(x_j)) w.r.t. α_i, s.t. α_i ≥ 0 and Σ_i α_i y_i = 0

34–35 Feature space solution
The optimal weights, in the expanded feature space, are w = Σ_{i=1:n} α_i y_i ɸ(x_i).
Classification of an input x is given by: h_w(x) = sign(Σ_{i=1:n} α_i y_i (ɸ(x_i) · ɸ(x)) + b)
Note that to solve the SVM optimization problem in dual form and to make a prediction, we only ever need to compute dot-products of feature vectors.

36 Feature space solution
Potential problem: suppose we need many features, e.g. all terms up to 3rd order. If we had m original features, we now have 1 + m + m² + m³ of them. If we had 10 original features, we now have more than 1000!
In primal form this will be very costly to solve. Even in dual form we will need to compute the inner products!

37–38 Kernel functions
Whenever a learning algorithm (such as an SVM) can be written in terms of dot-products, it can be generalized to kernels.
A kernel is any function K: R^m × R^m → R which corresponds to a dot product for some feature mapping ɸ: K(x_1, x_2) = ɸ(x_1) · ɸ(x_2) for some ɸ.
Conversely, by choosing a feature mapping ɸ, we implicitly choose a kernel function.
Recall that ɸ(x_1) · ɸ(x_2) = ||ɸ(x_1)|| ||ɸ(x_2)|| cos ∠(ɸ(x_1), ɸ(x_2)), where ∠ denotes the angle between the vectors, so a kernel function can be thought of as a notion of similarity.

39–41 Example: quadratic kernel
Let K(x, z) = (x · z)². Is this a kernel?
K(x, z) = (Σ_{i=1:m} x_i z_i)(Σ_{j=1:m} x_j z_j) = Σ_{i,j ∈ {1..m}} x_i z_i x_j z_j = Σ_{i,j ∈ {1..m}} (x_i x_j)(z_i z_j)
We see it is a kernel, with feature mapping:
ɸ(x) = <x_1², x_1 x_2, ..., x_1 x_m, x_2 x_1, x_2², ..., x_m²>
The feature vector includes all squares of elements and all cross terms.
Important: computing ɸ takes O(m²), but computing K only takes O(m).
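The identity is easy to verify numerically; a minimal numpy sketch:

```python
# Check that K(x, z) = (x . z)^2 equals phi(x) . phi(z) for the feature map
# phi(x) = (x_i * x_j) over all index pairs (i, j), squares and cross terms.
import numpy as np

def phi(x):
    return np.outer(x, x).ravel()          # all products x_i x_j

rng = np.random.default_rng(0)
x, z = rng.normal(size=5), rng.normal(size=5)

assert np.isclose((x @ z) ** 2, phi(x) @ phi(z))   # O(m^2) map vs O(m) kernel
```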

42–43 Polynomial kernels
More generally, K(x, z) = (x · z)^d is a kernel for any positive integer d:
K(x, z) = (Σ_{i=1:m} x_i z_i)^d
If we expanded the sum above in the naive way, we would get m^d terms. The terms are monomials (products of the x_i) with total power equal to d.
If we use the primal form of the SVM, each term gets a weight. Curse of dimensionality: it is very expensive both to optimize and to predict with an SVM in primal form. However, evaluating the dot-product of any two feature vectors can be done using K in O(m).

44–45 The kernel trick
If we work with the dual, we never have to compute the feature mapping ɸ. We just compute the similarity kernel K: we replace any inner product ɸ(x_1) · ɸ(x_2) with K(x_1, x_2).
This is justified because any valid kernel defines an inner product with some feature expansion, even if we do not want to actually form this feature expansion (sometimes it is even impossible).

46 The kernel trick
We can solve the dual for the α_i with the kernel trick (just replace ɸ(x_i) · ɸ(x_j) with K(x_i, x_j)):
max Σ_{i=1:n} α_i − ½ Σ_{i,j=1:n} y_i y_j α_i α_j K(x_i, x_j) w.r.t. α_i, s.t. α_i ≥ 0 and Σ_{i=1:n} α_i y_i = 0
The class of a new input x is computed as: h_w(x) = sign(Σ_{i=1:n} α_i y_i K(x_i, x))
where the x_i are the support vectors (defining the margin).
Remember, K(·, ·) can be evaluated in O(m) time = big savings!
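Concretely, kernelizing the earlier QP sketches just means swapping the Gram matrix X @ X.T for a kernel matrix; a minimal sketch, using a polynomial kernel as an illustrative choice:

```python
# Kernel matrix for K(x, z) = (x . z)^d, computed in O(m) per entry.
import numpy as np

def poly_kernel_matrix(X, Z, d=2):
    return (X @ Z.T) ** d

# Usage with the earlier sketches (illustrative):
#   P = matrix(np.outer(y, y) * poly_kernel_matrix(X, X, d=2))  in svm_dual_soft
#   scores = poly_kernel_matrix(X_test, X_train[sv]) @ (alpha[sv] * y_train[sv])
```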

47 Some other kernel functions
K(x, z) = (1 + x · z)^d — the feature expansion has all monomial terms up to total power d.
Radial basis / Gaussian kernel: K(x, z) = exp(−||x − z||² / 2σ²). This kernel has an infinite-dimensional feature expansion, but dot-products can still be computed in O(m) (where m = #original features). It would be impossible to use this representation in a primal formulation!
Sigmoidal kernel: K(x, z) = tanh(c_1 (x · z) + c_2)

48 Example: Gaussian kernel
[Figure: SVM decision regions with a Gaussian kernel. Note the non-linear decision boundary.]
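To reproduce a picture like this, a minimal sketch using scikit-learn's SVC; the synthetic dataset here is made up for illustration:

```python
# An SVM with a Gaussian (RBF) kernel yields a non-linear decision boundary.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] ** 2 + X[:, 1] ** 2 > 1).astype(int)   # circular boundary

clf = SVC(kernel='rbf', C=1.0, gamma=0.5)           # gamma plays the role of 1/(2 sigma^2)
clf.fit(X, y)
print(clf.score(X, y), len(clf.support_))           # training accuracy, #support vectors
```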

49 Non-parametric
Remember that we call models that cannot be expressed with a fixed (finite) number of parameters non-parametric.
With Gaussian kernels our feature expansion is infinite-dimensional. Thus, an SVM with a Gaussian kernel is another example of a non-parametric model.
It has the same advantages and disadvantages we discussed in Lecture 7 (Instance-based learning).

50 Kernels beyond SVMs
There is a lot of research on defining kernel functions suitable to particular tasks / kinds of inputs (e.g. words, graphs, images). Many kernels are available:
- Information diffusion kernels (Lafferty and Lebanon, 2002)
- Diffusion kernels on graphs (Kondor and Jebara, 2003)
- String kernels for text classification (Lodhi et al., 2002)
- String kernels for protein classification (Leslie et al., 2002)
- and others!

51 Example: string kernels
Very important for DNA matching, text classification, ...
Often use a sliding window of length k over the two strings that we want to compare. Within the fixed-size window we can do many things:
- Count exact matches.
- Weigh mismatches based on how bad they are.
- Count certain markers, e.g. AGT.
The kernel is the sum of these similarities over the two sequences.
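One concrete instance of the "count exact matches" idea is a k-mer spectrum kernel, i.e. a dot product of substring counts; a minimal sketch (this particular kernel is an illustrative choice, not necessarily the one on the slide):

```python
# Count shared k-mers (substrings of length k) between two sequences.
# This is the dot product of the two k-mer count vectors, hence a valid kernel.
from collections import Counter

def kmer_kernel(s, t, k=3):
    cs = Counter(s[i:i + k] for i in range(len(s) - k + 1))
    ct = Counter(t[i:i + k] for i in range(len(t) - k + 1))
    return sum(cs[kmer] * ct[kmer] for kmer in cs)

print(kmer_kernel("ACGTAGT", "TTAGTAC"))
```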

52 Kernelizing other ML algorithms
Many other machine learning algorithms have a dual formulation, in which dot-products of features can be replaced by kernels. Examples:
- Perceptron
- Logistic regression
- Linear regression (We'll do this later in the course!)

53–54 Multiple classes
One-vs-All: Learn K separate binary classifiers. This can lead to inconsistent results, and the training sets are imbalanced: assuming n examples per class, each binary classifier is trained with a positive class containing 1·n of the examples and a negative class containing (K−1)·n of them.
Multi-class SVM: Define the margin to be the gap between the correct class and the nearest other class.

55–56 SVMs for regression
Minimize a regularized error function:
ŵ = argmin_w C Σ_{i=1:n} (y_i − w^T x_i)² + ½||w||²
Introduce slack variables to optimize a tube around the regression function.
Typically, relax to an ε-insensitive error on the linear target to ensure a sparse solution (i.e. few support vectors):
ŵ = argmin_w C Σ_{i=1:n} E_ε(y_i − w^T x_i) + ½||w||²
where E_ε(z) = 0 if |z| < ε, and |z| − ε otherwise.
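The ε-insensitive error is easy to state in code; a minimal numpy sketch (the residual values are made up for illustration):

```python
# Epsilon-insensitive error: zero inside the eps-tube, linear outside it.
import numpy as np

def eps_insensitive(residual, eps=0.1):
    return np.maximum(np.abs(residual) - eps, 0.0)   # max(|z| - eps, 0)

r = np.array([-0.3, -0.05, 0.0, 0.08, 0.5])
print(eps_insensitive(r))   # only residuals outside the tube are penalized
```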

57 What you should know
From last class and from today:
- Perceptron algorithm.
- Margin definition for linear SVMs.
- Use of Lagrange multipliers to transform optimization problems.
- Primal and dual optimization problems for SVMs.
- Feature space version of SVMs.
- The kernel trick and examples of common kernels.
