CSE 546 Midterm Exam, Fall 2014 (with Solution)


1. Personal info: Name: UW NetID: Student ID:
2. There should be 14 numbered pages in this exam (including this cover sheet).
3. You can use any material you brought: any book, class notes, your printouts of class materials that are on the class website, including my annotated slides and relevant readings. You cannot use materials brought by other students.
4. If you need more room to work out your answer to a question, use the back of the page and clearly mark on the front of the page if we are to look at what is on the back.
5. Work efficiently. Some questions are easier, some more difficult. Be sure to give yourself time to answer all of the easy ones, and avoid getting bogged down in the more difficult ones before you have answered the easier ones.
6. You have 80 minutes.
7. Good luck!

    Question  Topic                Max score  Score
    1         Short Answer         24
    2         Decision Trees       16
    3         Logistic Regression  20
    4         Boosting             16
    5         Elastic Net          24
              Total                100

1 [24 points] Short Answer

1. [6 points] On the 2D dataset below, draw the decision boundaries learned by the following algorithms (using the features x/y). Be sure to mark which regions are labeled positive or negative, and assume that ties are broken arbitrarily.
(a) Logistic regression (λ = 0)
(b) 1-NN
(c) 3-NN

2. [6 points] A random variable follows an exponential distribution with parameter λ (λ > 0) if it has the following density:

    p(t) = λ e^(-λt),  t ∈ [0, ∞)

This distribution is often used to model waiting times between events. Imagine you are given i.i.d. data T = (t_1, ..., t_n) where each t_i is modeled as being drawn from an exponential distribution with parameter λ.

(a) [3 points] Compute the log-probability of T given λ. (Turn products into sums when possible.)

    ln p(T | λ) = Σ_i ln p(t_i | λ)
                = Σ_i ln(λ e^(-λ t_i))
                = Σ_i (ln λ - λ t_i)
                = n ln λ - λ Σ_i t_i

(b) [3 points] Solve for λ̂_MLE.

    ∂/∂λ (n ln λ - λ Σ_i t_i) = 0
    n/λ - Σ_i t_i = 0
    λ̂_MLE = n / Σ_i t_i

3. [6 points]

    A  B  C  Y
    F  F  F  F
    T  F  T  T
    T  T  F  T
    T  T  T  F

(a) [3 points] Using the dataset above, we want to build a decision tree which classifies Y as T/F given the binary variables A, B, C. Draw the tree that would be learned by the greedy algorithm with zero training error. You do not need to show any computation.
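(Aside on question 2(b): the following is a minimal sketch, not part of the exam, that checks the closed form λ̂_MLE = n / Σ_i t_i against a numerical maximization of the log-likelihood on hypothetical simulated data.)

    import numpy as np

    rng = np.random.default_rng(0)
    true_lam = 2.0
    t = rng.exponential(scale=1.0 / true_lam, size=1000)   # i.i.d. samples, scale = 1/λ

    # Closed-form MLE derived above: λ̂ = n / Σ_i t_i
    lam_closed = len(t) / t.sum()

    # Numerical check: evaluate n ln λ − λ Σ_i t_i on a grid and take the argmax
    grid = np.linspace(0.1, 5.0, 2000)
    loglik = len(t) * np.log(grid) - grid * t.sum()
    lam_grid = grid[np.argmax(loglik)]

    print(lam_closed, lam_grid)   # both should be close to the true λ = 2.0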

(b) [3 points] Is this tree optimal (i.e. does it get zero training error with minimal depth)? Explain in less than two sentences. If it is not optimal, draw the optimal tree as well.

Although we get a better information gain by first splitting on A, Y is just a function of B/C: Y = B xor C. Thus, we can build a tree of depth 2 which classifies correctly, and is optimal.

4. [6 points] In class we used decision trees and ensemble methods for classification, but we can use them for regression as well (i.e. learning a function from features to real values). Let's imagine that our data has 3 binary features A, B, C, which take values 0/1, and we want to learn a function which counts the number of features which have value 1.

(a) [2 points] Draw the decision tree which represents this function. How many leaf nodes does it have?

We can represent the function with a decision tree containing 8 leaf nodes.

(b) [2 points] Now represent this function as a sum of decision stumps (e.g. sgn(A)). How many terms do we need?

    f(x) = sgn(A) + sgn(B) + sgn(C)

Using a sum of decision stumps, we can represent this function using 3 terms.

(c) [2 points] In the general case, imagine that we have d binary features, and we want to count the number of features with value 1. How many leaf nodes would a decision tree need to represent this function? If we used a sum of decision stumps, how many terms would be needed? No explanation is necessary.

Using a decision tree, we will need 2^d leaf nodes. Using a sum of decision stumps, we will need d terms.
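To make the exponential-versus-linear contrast concrete, here is a small sketch (my own illustration, not part of the exam) that counts the leaves a full tree would need and evaluates the sum-of-stumps representation for d binary features:

    from itertools import product

    d = 3
    # A tree that outputs the count must separate every input pattern (two inputs that
    # differ in an untested feature have different counts), so it needs 2**d leaves.
    num_tree_leaves = 2 ** d

    # Sum-of-stumps representation: one stump per feature, d terms in total.
    def f(x):
        return sum(1 if xi == 1 else 0 for xi in x)   # each term is a stump on one feature

    assert all(f(x) == sum(x) for x in product([0, 1], repeat=d))
    print(num_tree_leaves, "tree leaves vs", d, "stump terms")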

2 [16 points] Decision Trees

We will use the dataset below to learn a decision tree which predicts if people pass machine learning (Yes or No), based on their previous GPA (High, Medium, or Low) and whether or not they studied.

    GPA  Studied  Passed
    L    F        F
    L    T        T
    M    F        F
    M    T        T
    H    F        T
    H    T        T

For this problem, you can write your answers using log_2, but it may be helpful to note that log_2 3 ≈ 1.6.

1. [4 points] What is the entropy H(Passed)?

    H(Passed) = -(2/6 log_2 2/6 + 4/6 log_2 4/6)
              = -(1/3 log_2 1/3 + 2/3 log_2 2/3)
              = log_2 3 - 2/3 ≈ 0.92

2. [4 points] What is the entropy H(Passed | GPA)?

    H(Passed | GPA) = 1/3 (-1/2 log_2 1/2 - 1/2 log_2 1/2) + 1/3 (-1/2 log_2 1/2 - 1/2 log_2 1/2) + 1/3 (-1 log_2 1)
                    = 1/3 (1) + 1/3 (1) + 1/3 (0)
                    = 2/3

3. [4 points] What is the entropy H(Passed | Studied)?

    H(Passed | Studied) = 1/2 (-1/3 log_2 1/3 - 2/3 log_2 2/3) + 1/2 (-1 log_2 1)
                        = 1/2 (log_2 3 - 2/3)
                        = 1/2 log_2 3 - 1/3 ≈ 0.46

4. [4 points] Draw the full decision tree that would be learned for this dataset. You do not need to show any calculations.

We want to split first on the variable which maximizes the information gain H(Passed) - H(Passed | A). This is equivalent to minimizing H(Passed | A), so we should split on Studied? first.
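For readers who want to verify these numbers, here is a minimal sketch (my own, not part of the exam) that computes H(Passed), H(Passed | GPA), and H(Passed | Studied) from the table above:

    import numpy as np
    from collections import Counter

    data = [("L", "F", "F"), ("L", "T", "T"), ("M", "F", "F"),
            ("M", "T", "T"), ("H", "F", "T"), ("H", "T", "T")]

    def entropy(labels):
        counts = np.array(list(Counter(labels).values()), dtype=float)
        p = counts / counts.sum()
        return -(p * np.log2(p)).sum()

    def cond_entropy(rows, feature_idx):
        # H(Passed | X) = sum over values v of P(X = v) * H(Passed | X = v)
        total = len(rows)
        h = 0.0
        for v in set(r[feature_idx] for r in rows):
            group = [r for r in rows if r[feature_idx] == v]
            h += (len(group) / total) * entropy([r[2] for r in group])
        return h

    passed = [r[2] for r in data]
    print(entropy(passed))           # ≈ 0.918 = log2(3) - 2/3
    print(cond_entropy(data, 0))     # H(Passed | GPA)     ≈ 0.667
    print(cond_entropy(data, 1))     # H(Passed | Studied) ≈ 0.459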

3 [20 points] Logistic Regression for Sparse Data

In many real-world scenarios our data has millions of dimensions, but a given example has only hundreds of non-zero features. For example, in document analysis with word counts for features, our dictionary may have millions of words, but a given document has only hundreds of unique words. In this question we will make l2-regularized SGD efficient when our input data is sparse.

Recall that in l2-regularized logistic regression, we want to maximize the following objective (in this problem we have excluded w_0 for simplicity):

    F(w) = 1/N Σ_{j=1}^N l(x^(j), y^(j), w) - λ/2 Σ_{i=1}^d w_i^2

where l(x^(j), y^(j), w) is the logistic objective function

    l(x^(j), y^(j), w) = y^(j) (Σ_{i=1}^d w_i x_i^(j)) - ln(1 + exp(Σ_{i=1}^d w_i x_i^(j)))

and the remaining sum is our regularization penalty. When we do stochastic gradient descent on point (x^(j), y^(j)), we are approximating the objective function as

    F(w) ≈ l(x^(j), y^(j), w) - λ/2 Σ_{i=1}^d w_i^2

Definition of sparsity: Assume that our input data has d features, i.e. x^(j) ∈ R^d. In this problem, we will consider the scenario where x^(j) is sparse. Formally, let s be the average number of nonzero elements in each example. We say the data is sparse when s << d. In the following questions, your answer should take the sparsity of x^(j) into consideration when possible.

Note: When we use a sparse data structure, we can iterate over the non-zero elements in O(s) time, whereas a dense data structure requires O(d) time.

1. [2 points] Let us first consider the case when λ = 0. Write down the SGD update rule for w_i when λ = 0, using step size η, given the example (x^(j), y^(j)).

The update rule can be written as

    w_i^(t+1) ← w_i^(t) + η x_i^(j) ( y^(j) - exp(Σ_k w_k x_k^(j)) / (1 + exp(Σ_k w_k x_k^(j))) )

2. [4 points] If we use a dense data structure, what is the average time complexity to update w when λ = 0? What if we use a sparse data structure? Justify your answer in one or two sentences.

The time complexity to calculate Σ_k w_k x_k^(j) is O(d) when the data structure is dense, and O(s) when the data structure is sparse. Note that even if we update w_i for all i, we only need to calculate Σ_k w_k x_k^(j) once, and then update only the w_i such that x_i^(j) ≠ 0. So the answer is O(d) for the dense case, and O(s) for the sparse case.

3. [2 points] Now let us consider the general case when λ > 0. Write down the SGD update rule for w_i when λ > 0, using step size η, given the example (x^(j), y^(j)).

    w_i^(t+1) ← w_i^(t) - ηλ w_i^(t) + η x_i^(j) ( y^(j) - exp(Σ_k w_k x_k^(j)) / (1 + exp(Σ_k w_k x_k^(j))) )

4. [2 points] If we use a dense data structure, what is the average time complexity to update w when λ > 0?

The time complexity is O(d).

5. [4 points] Let w^(t) be the weight vector after the t-th update. Now imagine that we perform k SGD updates on w using examples (x^(t+1), y^(t+1)), ..., (x^(t+k), y^(t+k)), where x_i^(j) = 0 for every example in the sequence (i.e. the i-th feature is zero for all of the examples in the sequence). Express the new weight w_i^(t+k) in terms of w_i^(t), k, η, and λ.

When x_i^(j) = 0,

    w_i^(t+1) = w_i^(t) - ηλ w_i^(t) = w_i^(t) (1 - ηλ)

so the answer is

    w_i^(t+k) = w_i^(t) (1 - ηλ)^k

6. [6 points] Using your answer in the previous part, come up with an efficient algorithm for regularized SGD when we use a sparse data structure. What is the average time complexity per example? (Hint: when do you need to update w_i?)

    Algorithm 1: Sparse SGD Algorithm for Logistic Regression with Regularization
      Initialize c_i ← 0 for i ∈ {1, 2, ..., d}
      for j ∈ {1, ..., n} do
        p̂ ← 1 / (1 + exp(-Σ_k w_k x_k^(j)))
        for i such that x_i^(j) ≠ 0 do
          k ← j - c_i                          ; auxiliary variable c_i holds the index of the last time we saw x_i^(j) ≠ 0
          w_i ← w_i (1 - ηλ)^k                 ; apply all the skipped regularization updates
          w_i ← w_i + η x_i^(j) (y^(j) - p̂)    ; regularization was done in the previous step
          c_i ← j                              ; remember the last time we saw x_i^(j) ≠ 0
        end
      end

The idea is to only update w_i when x_i^(j) ≠ 0. Before we do the update, we apply all the regularization updates we skipped, using the answer from the previous question. See Algorithm 1 for details. Using this trick, each update takes O(s) time. (Note: the same trick applies for SGD with l1 regularization.)
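Below is a minimal Python sketch of Algorithm 1 (my own illustration; the sparse-dict representation, helper names, and toy data are assumptions, not from the exam). It uses the per-step shrink factor (1 - ηλ) from part 5 and flushes any outstanding regularization at the end so every coordinate is current:

    import numpy as np

    def sparse_sgd_logistic(examples, d, eta=0.1, lam=0.01):
        """examples: list of (x_sparse, y) where x_sparse is a dict {i: x_i} of the nonzero features."""
        w = np.zeros(d)
        last_seen = np.zeros(d, dtype=int)       # c_i: index of the last example where feature i was nonzero
        shrink = 1.0 - eta * lam
        for j, (x_sparse, y) in enumerate(examples, start=1):
            score = sum(w[i] * v for i, v in x_sparse.items())    # O(s) dot product over nonzeros only
            p_hat = 1.0 / (1.0 + np.exp(-score))
            for i, v in x_sparse.items():
                w[i] *= shrink ** (j - last_seen[i])   # lazily apply the skipped regularization updates
                w[i] += eta * v * (y - p_hat)          # logistic gradient step on this coordinate
                last_seen[i] = j
        w *= shrink ** (len(examples) - last_seen)     # flush outstanding regularization at the end
        return w

    # toy usage with d = 5 features, each example carrying only a few nonzeros
    data = [({0: 1.0, 3: 2.0}, 1), ({1: 1.0}, 0), ({0: 0.5, 4: 1.0}, 1)]
    print(sparse_sgd_logistic(data, d=5))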

4 [16 points] Boosting

Recall that AdaBoost learns a classifier H using a weighted sum of weak learners h_t as follows:

    H(x) = sgn( Σ_{t=1}^T α_t h_t(x) )

In this question we will use decision trees as our weak learners, which classify a point as {+1, -1} based on a sequence of threshold splits on its features (here x, y). In the questions below, be sure to mark which regions are marked positive/negative, and assume that ties are broken arbitrarily.

COMMENT: The solutions below are one of several possible answers. You got full credit as long as you were consistent and found a boundary with the lowest training error for h_1, h_2.

1. [2 points] Assume that our weak learners are decision trees of depth 1 (i.e. decision stumps), which minimize the weighted training error. Using the dataset below, draw the decision boundary learned by h_1.

2. [3 points] On the dataset below, circle the point(s) with the highest weights on the second iteration, and draw the decision boundary learned by h_2.

The points with the highest weight will be those that were classified incorrectly by h_1.
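The claim that the highest-weight points are exactly the ones h_1 misclassified follows from the AdaBoost reweighting rule. The exam does not spell the rule out, so the sketch below (my own, not part of the exam) assumes the standard formulas α_t = ½ ln((1 - ε_t)/ε_t) and D_{t+1}(i) ∝ D_t(i) exp(-α_t y_i h_t(x_i)):

    import numpy as np

    def adaboost_reweight(D, y, h_pred):
        """One AdaBoost round: weights D, labels y, weak-learner predictions h_pred (all ±1)."""
        eps = D[h_pred != y].sum()                 # weighted training error of this weak learner
        alpha = 0.5 * np.log((1 - eps) / eps)      # larger alpha for more accurate learners
        D_new = D * np.exp(-alpha * y * h_pred)    # misclassified points get multiplied by e^{+alpha}
        return alpha, D_new / D_new.sum()

    # toy example: 4 points, h_1 gets the last one wrong
    y  = np.array([ 1, -1,  1,  1])
    h1 = np.array([ 1, -1,  1, -1])
    D0 = np.full(4, 0.25)
    alpha1, D1 = adaboost_reweight(D0, y, h1)
    print(alpha1, D1)   # the misclassified point now carries the largest weight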

3. [3 points] On the dataset below, draw the decision boundary of H = sgn(α_1 h_1 + α_2 h_2). (Hint: you do not need to explicitly compute the α's.)

Although h_1 and h_2 misclassify the same number of points, the points which h_2 gets wrong have been downweighted. Thus, we have ε_1 > ε_2, so α_2 > α_1, and h_2 will dominate over h_1 whenever there is a conflict. In other words, H has the same boundary as h_2.

4. [2 points] Now assume that our weak learners are decision trees of maximum depth 2, which minimize the weighted training error. Using the dataset below, draw the decision boundary learned by h_1.

5. [3 points] On the dataset below, circle the point(s) with the highest weights on the second iteration, and draw the decision boundary learned by h_2.

Again, the points with the highest weight will be those that were classified incorrectly by h_1.

6. [3 points] On the dataset below, draw the decision boundary of H = sgn(α_1 h_1 + α_2 h_2). (Hint: you do not need to explicitly compute the α's.)

Again, we have α_2 > α_1, so h_2 will dominate over h_1 whenever there is a conflict.
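To see concretely why H inherits h_2's boundary wherever the two weak learners disagree, here is a tiny check (my own illustration, with hypothetical α values satisfying α_2 > α_1 > 0):

    # Wherever h1 and h2 conflict, H(x) = sgn(a1*h1 + a2*h2) follows the larger-weight learner.
    a1, a2 = 0.4, 0.9          # hypothetical values with a2 > a1, as argued above
    h1, h2 = +1, -1            # a point on which the two weak learners disagree
    H = 1 if a1 * h1 + a2 * h2 > 0 else -1
    assert H == h2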

5 [24 points] Elastic Net

In class we discussed two types of regularization, l1 and l2. Both are useful, and sometimes it is helpful to combine them, giving the objective function below (in this problem we have excluded w_0 for simplicity):

    F(w) = 1/2 Σ_{j=1}^n (y^(j) - Σ_{i=1}^d w_i x_i^(j))^2 + α Σ_{i=1}^d |w_i| + λ/2 Σ_{i=1}^d w_i^2    (1)

Here (x^(j), y^(j)) is the j-th example in the training data, w is a d-dimensional weight vector, λ is a regularization parameter for the l2 norm of w, and α is a regularization parameter for the l1 norm of w. This approach is called the Elastic Net, and you can see that it is a generalization of Ridge and Lasso regression: it reverts to Lasso when λ = 0, and it reverts to Ridge when α = 0. In this question, we are going to derive the coordinate descent (CD) update rule for this objective.

Let g, h, c be real constants, and consider the function of x

    f_1(x) = c + gx + 1/2 hx^2    (h > 0)    (2)

1. [2 points] What is the x* that minimizes f_1(x)? (i.e. calculate x* = arg min_x f_1(x))

Take the gradient of f_1(x) and set it to 0:

    g + hx = 0
    x* = -g/h

Let α be an additional real constant, and consider another function of x

    f_2(x) = c + gx + 1/2 hx^2 + α|x|    (h > 0, α > 0)    (3)

This is a piecewise function, composed of two quadratic functions:

    f_-(x) = c + gx + 1/2 hx^2 - αx
    f_+(x) = c + gx + 1/2 hx^2 + αx

Let x*_- = arg min_{x ∈ R} f_-(x) and x*_+ = arg min_{x ∈ R} f_+(x). (Note: the arg min is taken over (-∞, +∞) here.)

Figure 1: Example picture of the three cases. (a) x*_- > 0, x*_+ > 0: the minimum is at x*_+. (b) x*_- < 0, x*_+ < 0: the minimum is at x*_-. (c) x*_- > 0, x*_+ < 0: the minimum is at 0.

2. [3 points] What are x*_+ and x*_-? Show that x*_- ≥ x*_+.

Using the answer from part 1, we get x*_+ = -(g + α)/h and x*_- = -(g - α)/h. Since x*_- - x*_+ = 2α/h ≥ 0, we have x*_- ≥ x*_+.

3. [6 points] Draw a picture of f_2(x) in each of the three cases below: (a) x*_- > 0, x*_+ > 0; (b) x*_- < 0, x*_+ < 0; (c) x*_- > 0, x*_+ < 0. For each case, mark the minimum as either 0, x*_-, or x*_+. (You do not need to draw perfect curves, just get the rough shape and the relative locations of the minima to the x-axis.)

An example of the curves is shown in Figure 1. To understand the answer, note the following:
f_+(0) = f_-(0); this means the two pieces of f_2 match each other at x = 0.
When x*_- ≥ 0, the minimum of the left part is at 0, i.e. arg min_{x ∈ (-∞, 0]} f_2(x) = 0.
When x*_- ≤ 0, the minimum of the left part is at x*_-, i.e. arg min_{x ∈ (-∞, 0]} f_2(x) = x*_-.
Similar rules hold for the right part of the curve.

4. [4 points] Write x*, the minimum of f_2(x), as a piecewise function of g. (Hint: what is g in the cases above?)

    Algorithm 2: Coordinate Descent for Elastic Net
      while not converged do
        for k ∈ {1, ..., d} do
          g = Σ_{j=1}^n x_k^(j) (Σ_{i≠k} w_i x_i^(j) - y^(j))
          h = λ + Σ_{j=1}^n (x_k^(j))^2
          w_k = { -(g + α)/h   if g < -α
                  0            if g ∈ [-α, α]
                  -(g - α)/h   if g > α }
        end
      end

Summarizing the result from the previous question, we have

    x* = { x*_+   if x*_+ > 0
           x*_-   if x*_- < 0
           0      if x*_+ < 0, x*_- > 0 }

This is equivalent to the soft-threshold function

    x* = { -(g + α)/h   if g < -α
           0            if g ∈ [-α, α]
           -(g - α)/h   if g > α }

5. [4 points] Now let's derive the update rule for w_k. Fixing all parameters but w_k, express the objective function in the form of Eq. (3) (here w_k is our x). You do not need to explicitly write constant factors; you can put them all together as c. What are g and h?

    g = Σ_{j=1}^n x_k^(j) (Σ_{i≠k} w_i x_i^(j) - y^(j))    (4)
    h = λ + Σ_{j=1}^n (x_k^(j))^2    (5)

6. [5 points] Putting it all together, write down the coordinate descent algorithm for the Elastic Net (again excluding w_0). You can write the update for w_k in terms of the g and h you calculated in the previous question.

The algorithm is shown in Algorithm 2. You can compare it to the CD algorithm for Lasso; the only difference is adding λ to h.
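A minimal runnable version of Algorithm 2 (my own sketch; the toy data, fixed iteration count, and parameter values are assumptions, not from the exam). It takes the objective as in Eq. (1), so h = λ + Σ_j (x_k^(j))^2 and the inner update is the soft-threshold from part 4:

    import numpy as np

    def elastic_net_cd(X, y, alpha=0.1, lam=0.1, n_iters=100):
        n, d = X.shape
        w = np.zeros(d)
        for _ in range(n_iters):                              # fixed count instead of a convergence test
            for k in range(d):
                residual_wo_k = X @ w - X[:, k] * w[k] - y    # Σ_{i≠k} w_i x_i^(j) - y^(j) for each j
                g = X[:, k] @ residual_wo_k
                h = lam + (X[:, k] ** 2).sum()
                if g < -alpha:
                    w[k] = -(g + alpha) / h
                elif g > alpha:
                    w[k] = -(g - alpha) / h
                else:
                    w[k] = 0.0
        return w

    # toy usage: y depends only on the first two of five features
    rng = np.random.default_rng(0)
    X = rng.normal(size=(50, 5))
    y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + 0.1 * rng.normal(size=50)
    print(elastic_net_cd(X, y))    # weights on features 2-4 should be (near) zero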
