Video Data Analysis, B-IT

1 Lecture. Video Data Analysis: Deformable Snakes, Segmentation, Neural Networks.
Lecture plan:
1. Segmentation by morphological watershed
2. Deformable snakes
3. Segmentation via classification of patterns
4. Concept of a neural network
5. Supervised learning & classification by a neural network

2 Lecture. Segmentation by morphological watersheds. Reading: 10.5
Principle: visualizing an image in 3 dimensions. The topographic representation of an image generates points of three types:
1. Regional minimum points;
2. Points at which a drop of water would fall with certainty to a single minimum;
3. Points at which a drop of water would be equally likely to fall into more than one minimum.
Basic idea: suppose a hole is punched in each regional minimum and the entire topography is flooded from below.
Major application: extraction of nearly uniform objects from the background. Such objects are characterized by small variations in gray levels, so the watershed is often applied to the gradient of an image: regions of small grayscale variation correspond to catchment basins.

3 Lecture. Segmentation by morphological watersheds. Reading: 10.5
Dam construction: conditional dilation of two connected components:
1) the dilation has to be constrained to q;
2) the dilation cannot be performed on points that would cause the two sets to merge.
(Figure: q and the two connected components C[n-1] (catchment basins) at flooding step n-1; the result C[n] of the next flooding step n; the water has spilled and the dam must be built; condition 1) failed during the second dilation; condition 2) was met for the points at the separating dam.)

4 Lecture. Watershed segmentation algorithm. Reading: 10.5
Let M1, M2, ..., MR be sets denoting the coordinates of the points in the regional minima of an image g(x,y). Let C(Mi) be a set denoting the coordinates of the points in the catchment basin associated with the regional minimum Mi. Let T[n] represent the set of coordinates (s,t) for which g(s,t) < n:
T[n] = { (s,t) : g(s,t) < n }
Geometrically, T[n] represents the points in g(x,y) lying below the plane g(x,y) = n. The topography will be flooded in integer increments from n = min to n = max. The set of coordinates of points in the catchment basin of Mi that are flooded at stage n is a binary image:
Cn(Mi) = C(Mi) ∩ T[n]
Let C[n] be the union of the flooded catchment-basin portions at stage n:
C[n] = ∪_{i=1..R} Cn(Mi)
The algorithm is initialized with C[min] = T[min] and proceeds recursively, assuming at step n that C[n-1] has been constructed: 1) flood T[n]; 2) recover C[n]. How is C[n] obtained from C[n-1]? Let Q be the set of connected components in T[n]. For each connected component q ∈ Q there exist three possibilities:
A) q ∩ C[n-1] is empty: a new minimum is encountered;
B) q ∩ C[n-1] contains one connected component of C[n-1]: q lies within a catchment basin;
C) q ∩ C[n-1] contains more than one connected component of C[n-1]: flooding (merging) would occur.

5 Lecture. Watershed segmentation algorithm (continued). Reading: 10.5
A) q ∩ C[n-1] is empty; B) q ∩ C[n-1] contains one connected component of C[n-1]; C) q ∩ C[n-1] contains more than one connected component of C[n-1].
Condition A) occurs when a new minimum is encountered: connected component q is incorporated into C[n-1] to form C[n].
Condition B) occurs when q lies within the catchment basin of some regional minimum: connected component q is incorporated into C[n-1] to form C[n].
Condition C) occurs when a ridge separating two or more catchment basins is encountered: a dam must be built within q to prevent overflow between the catchment basins. A one-pixel-thick dam can be constructed by dilating q ∩ C[n-1] with a 3x3 structuring element and constraining the dilation to q.
Hint: n corresponds to gray-level values in g(x,y).
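
The flooding loop and the A/B/C case analysis above can be sketched directly in code. The following is a minimal, illustrative Python sketch (not the lecture's reference implementation): it assumes a small integer-valued image g, 4-connectivity, and uses scipy.ndimage.label to find the connected components q of T[n]. Case C is handled crudely here: basins simply stop growing where they would merge, so the resulting "dam" may be thicker than one pixel; the constrained-dilation dam construction described in the slides is omitted for brevity.

```python
import numpy as np
from scipy import ndimage

def watershed_flood(g):
    """Crude flooding watershed: label catchment basins of integer image g; 0 = ridge/dam."""
    labels = np.zeros(g.shape, dtype=int)
    next_label = 1
    structure = np.array([[0, 1, 0], [1, 1, 1], [0, 1, 0]])    # 4-connectivity
    for n in range(int(g.min()) + 1, int(g.max()) + 2):        # flood in integer increments
        Tn = g < n                                             # T[n]
        comps, ncomp = ndimage.label(Tn, structure=structure)  # connected components q of T[n]
        new_labels = np.zeros_like(labels)
        for q in range(1, ncomp + 1):
            mask = comps == q
            touched = np.unique(labels[mask])
            touched = touched[touched > 0]                     # basins of C[n-1] met by q
            if touched.size == 0:                              # case A: new regional minimum
                new_labels[mask] = next_label
                next_label += 1
            elif touched.size == 1:                            # case B: q lies in one basin
                new_labels[mask] = touched[0]
            else:                                              # case C: q would merge basins
                new_labels[mask] = labels[mask]                # keep old basins; rest stays 0 (dam)
        labels = new_labels
    return labels
```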

6 Lecture. Watershed segmentation. Reading: 10.5

7 Lecture. Watershed segmentation. Reading: 10.5
Direct application of watershed segmentation may lead to oversegmentation. Limit the number of allowable regions by pre-processing, such as:
- smoothing;
- defining a limited number of internal markers according to some criterion; these markers are associated with objects;
- external markers, associated with the background.
(Figures: original image; oversegmentation; watershed result applied to the smoothed image when the internal markers are the only allowed minima; watershed segmentation of each region applied to the gradient of the smoothed image.)
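
As a practical counterpart to the marker idea, the snippet below is a hedged usage sketch with scikit-image; the file name, sigma, and marker thresholds are placeholders, not values from the slides. The image is smoothed, the watershed is applied to the gradient, and internal/external markers restrict the allowed minima to limit oversegmentation.

```python
import numpy as np
from skimage import io, filters, segmentation, feature

img = io.imread("cells.png", as_gray=True)       # hypothetical input image
smooth = filters.gaussian(img, sigma=2)           # pre-smoothing limits spurious minima
gradient = filters.sobel(smooth)                  # watershed is applied to the gradient

# Internal markers: local minima of the smoothed image; external marker: bright background.
markers = np.zeros(img.shape, dtype=int)
minima = feature.peak_local_max(-smooth, min_distance=20)
for i, (r, c) in enumerate(minima, start=1):
    markers[r, c] = i
markers[smooth > smooth.mean() + smooth.std()] = len(minima) + 1   # crude background marker

labels = segmentation.watershed(gradient, markers)
```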

8 Lecture. Segmentation by deformable contours: Snakes. Reading: 5: 5.4
A snake, or active contour, is an energy-minimizing spline. The task is to fit a curve of arbitrary shape to a set of image points by minimizing an energy functional
E = ∫ ( α(s) E_cont + β(s) E_curv + γ(s) E_image ) ds
where the contour is given by c = c(s), parameterized by its length s. The internal energy is composed of a continuity term (elasticity) and a smoothness term (stiffness). The continuity term is based on the first derivative:
E_cont = |dc/ds|^2
In the discrete case: E_cont = |p_i - p_{i-1}|^2. In order to prevent the formation of clusters of snake points, an approximation uses the average distance d̄ between pairs of snake points:
E_cont = ( d̄ - |p_i - p_{i-1}| )^2

9 Lecture. Segmentation by deformable contours: Snakes. Reading: 5: 5.4
The smoothness term tries to avoid oscillations of the contour by minimizing the curvature:
E_curv = |d²c/ds²|²
In the discrete case it is approximated by E_curv = |p_{i-1} - 2 p_i + p_{i+1}|². Setting β_i = 0 at point i allows the snake to develop a corner there.
The external energy is associated with the external force attracting the contour towards the desired image contour; it is taken as the negative of the image gradient magnitude, E_image = -|∇I|, so that strong edges correspond to low energy.

10 Lecture. Segmentation by deformable contours: Snakes. Reading: 5: 5.4
Assumptions: let I be an image and p_1, ..., p_N the chain of image locations representing the initial position of the deformable contour, which is close (!) to the image contour of interest.
Two ways to solve the minimization problem:
- using the calculus of variations and the Euler-Lagrange condition for minimization, or
- the greedy algorithm, which makes locally optimal choices in the hope of finding the global minimum.

11 Lecture. Segmentation by deformable contours: Snakes. Normalization. Reading: 5: 5.4
It is important to normalize the contribution of each energy term, for instance by dividing each term by the largest value in the neighborhood in which the point can move. For E_image apply the usual min-max normalization: (I - min) / (max - min).
The greedy algorithm consists of two steps: seeking a new contour point in a local neighborhood that minimizes the energy functional, and handling the occurrence of corners.
1. Greedy minimization. Select a small local neighborhood (3x3 or 5x5). The local minimization is done by direct comparison of the energy functional at each location.
2. Allowing corner formation. Search for corners as curvature maxima along the contour. If a curvature maximum is found at p_i, then β_i is set to zero. A useful extra condition for a corner is a large value of the intensity gradient at that point.
Stopping criterion: when a predefined fraction of all points reaches a local minimum (stops moving).
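
A compact sketch of one greedy iteration is given below. It assumes a grayscale image img as a float NumPy array and an initial closed contour pts of shape (N, 2) close to the target boundary; the parameter names alpha, beta, gamma are illustrative, and the per-neighborhood normalization and corner handling described above are omitted for brevity.

```python
import numpy as np

def greedy_snake_step(img, pts, alpha=1.0, beta=1.0, gamma=1.0):
    """One greedy pass over all contour points; returns (new_pts, number of points moved)."""
    grad = np.hypot(*np.gradient(img))                    # gradient magnitude for E_image
    d_bar = np.mean(np.linalg.norm(np.roll(pts, -1, axis=0) - pts, axis=1))
    new_pts = pts.copy()
    moved = 0
    for i in range(len(pts)):
        prev_p, next_p = new_pts[i - 1], pts[(i + 1) % len(pts)]
        best, best_e = pts[i], np.inf
        for dy in (-1, 0, 1):                             # 3x3 search neighborhood
            for dx in (-1, 0, 1):
                cand = pts[i] + np.array([dy, dx])
                y, x = int(cand[0]), int(cand[1])
                if not (0 <= y < img.shape[0] and 0 <= x < img.shape[1]):
                    continue
                e_cont = (d_bar - np.linalg.norm(cand - prev_p)) ** 2
                e_curv = np.linalg.norm(prev_p - 2 * cand + next_p) ** 2
                e_img = -grad[y, x]                       # strong edges -> low energy
                e = alpha * e_cont + beta * e_curv + gamma * e_img
                if e < best_e:
                    best, best_e = cand, e
        if not np.array_equal(best, pts[i]):
            moved += 1
        new_pts[i] = best
    return new_pts, moved       # iterate until only a small fraction of points still moves
```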

12 Lecture. Segmentation by deformable contours: Snakes. Reading: 5: 5.4

13 Lecture. Segmentation by deformable contours: Snakes. Reading: 5: 5.4

14 Lecture. Image features.
Image segmentation via classification of image patterns, i.e. solving an object recognition problem.
Task: find descriptors whose values are similar for members of the same pattern class and different for members coming from other classes. Such descriptors characterizing image patterns are called features. A collection of features forms a feature vector, which is a point in the feature space.

15 Lecture. Linear classifier.
If the features are chosen properly, points belonging to different classes form clusters in the feature space. In this case the classes can be separated by a discrimination hyper-surface. If the classes can be separated by a hyperplane, the task is linearly separable and the classifier is a linear classifier.
The equation of a "decision" hyperplane is
g(x) = w^T x + w_0
with the two-category decision rule: decide w_1 if g(x) > 0 and w_2 if g(x) < 0; if g(x) = 0, x can be assigned to either class.
If x_1 and x_2 are both on the decision hyperplane, then w^T x_1 + w_0 = w^T x_2 + w_0 = 0, so w^T (x_1 - x_2) = 0 and w is normal to any vector in the hyperplane.
Writing x = x_p + r w/|w|, where x_p is the projection of x onto the hyperplane, gives r = g(x)/|w|, so g(x) is an algebraic measure of the distance from x to the hyperplane.
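
A tiny numeric illustration of the decision rule and of the signed distance r = g(x)/|w| follows; the weight values are made up for the example.

```python
import numpy as np

w, w0 = np.array([2.0, -1.0]), 0.5        # illustrative hyperplane parameters

def g(x):
    return w @ x + w0                     # decision function g(x) = w^T x + w0

x = np.array([1.0, 3.0])
label = 1 if g(x) > 0 else 2              # decide class w1 if g(x) > 0, else w2
r = g(x) / np.linalg.norm(w)              # signed distance from x to the hyperplane
print(label, r)
```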

16 Lecture. Linear classifier: the minimum distance classifier.
Each class w_j is represented by the mean (prototype) vector of its training patterns:
m_j = (1/N_j) Σ_{x ∈ w_j} x,  j = 1, ..., W
Closeness is determined by the Euclidean distance D_j(x) = |x - m_j|. Selecting the smallest distance is equivalent to assigning x to the class with the largest value of the decision function
d_j(x) = x^T m_j - (1/2) m_j^T m_j,  j = 1, ..., W
The decision hyperplane between classes w_i and w_j is g(x) = d_i(x) - d_j(x) = 0, which yields
g(x) = x^T (m_i - m_j) - (1/2) (m_i - m_j)^T (m_i + m_j) = 0
This hyperplane is the perpendicular bisector of the segment joining m_i and m_j, i.e. it is equidistant from the two means.
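
A minimal sketch of the minimum distance classifier on made-up 2-D data (the class samples are illustrative only): the class means are estimated from the training samples and x is assigned to the class with the largest d_j(x) = x^T m_j - 0.5 m_j^T m_j.

```python
import numpy as np

train = {1: np.array([[0.1, 0.2], [0.3, 0.1]]),     # class 1 training samples
         2: np.array([[2.1, 1.9], [1.8, 2.2]])}     # class 2 training samples
means = {j: X.mean(axis=0) for j, X in train.items()}

def classify(x):
    scores = {j: x @ m - 0.5 * (m @ m) for j, m in means.items()}
    return max(scores, key=scores.get)              # largest decision function wins

print(classify(np.array([0.2, 0.0])))               # expected: 1
print(classify(np.array([2.0, 2.0])))               # expected: 2
```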

17 Lecture. Bayes classifier for Gaussian pattern classes.
The Bayes classifier minimizes the total average loss due to misclassification of patterns among the W classes. The class PDFs are assumed to be Gaussian. Two-stage classification procedure: 1. Training; 2. Classification.
The Bayes classifier assigns the pattern x to class w_i if
p(x | w_i) P(w_i) > p(x | w_j) P(w_j),  j = 1, 2, ..., W, j ≠ i
giving decision functions of the form d_j(x) = p(x | w_j) P(w_j).
Approximating the expected values by sample averages yields the mean vector and covariance matrix of each class:
m_j = (1/N_j) Σ_{x ∈ w_j} x,   C_j = (1/N_j) Σ_{x ∈ w_j} x x^T - m_j m_j^T
If all covariance matrices are equal (C_j = C), the Bayes decision function for class w_j is linear:
d_j(x) = ln P(w_j) + x^T C^{-1} m_j - (1/2) m_j^T C^{-1} m_j
If, in addition, C = I and P(w_j) = 1/W, the minimum distance (linear) classifier follows.
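
The sketch below illustrates a Bayes classifier for Gaussian classes with equal priors on synthetic data, using the log form of the decision function d_j(x) = ln P(w_j) - 0.5 ln|C_j| - 0.5 (x - m_j)^T C_j^{-1} (x - m_j); the data and class parameters are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
train = {1: rng.normal([0.0, 0.0], 1.0, size=(50, 2)),   # synthetic class 1 samples
         2: rng.normal([3.0, 3.0], 1.5, size=(50, 2))}   # synthetic class 2 samples

params = {}
for j, X in train.items():
    m = X.mean(axis=0)                                    # sample mean m_j
    C = np.cov(X, rowvar=False)                           # sample covariance C_j
    params[j] = (m, np.linalg.inv(C), np.log(np.linalg.det(C)), np.log(1 / len(train)))

def classify(x):
    def d(j):
        m, Cinv, logdet, logprior = params[j]
        diff = x - m
        return logprior - 0.5 * logdet - 0.5 * diff @ Cinv @ diff
    return max(params, key=d)

print(classify(np.array([0.5, 0.2])), classify(np.array([2.8, 3.1])))   # expected: 1 2
```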

18 Lecture. SVM classifier.
The SVM preprocesses the data to represent the patterns in a high dimension, typically much higher than the original feature space. Using an appropriate non-linear mapping y = φ(x) to a sufficiently high dimension, the data can always be separated by a hyperplane:
g(y) = a^T y
The goal in training the SVM is to find the separating hyperplane with the largest margin b separating the two classes. If the margin exists, then
z_k g(y_k) ≥ b,  k = 1, ..., N,  z_k = ±1
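
For completeness, a hedged usage sketch with scikit-learn is shown below on toy data; the RBF kernel plays the role of the non-linear mapping φ(x), and none of the names or values come from the slides.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 2))
y = (X[:, 0] ** 2 + X[:, 1] ** 2 > 1).astype(int)   # not linearly separable in 2-D

clf = SVC(kernel="rbf", C=1.0)                      # implicit high-dimensional mapping
clf.fit(X, y)
print(clf.predict([[0.1, 0.1], [2.0, 2.0]]))        # expected: [0 1]
```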

19 Lecture. Neural networks.
The decision boundary is
d(x) = Σ_{i=1..n} w_i x_i + w_{n+1} = 0
With the augmented vectors y = (y_1, ..., y_n, 1)^T and w = (w_1, ..., w_n, w_{n+1})^T this becomes
d(x) = Σ_{i=1..n+1} w_i y_i = w^T y = 0

20 Lecture. Neural networks.
Equivalent condition on the output of the unit:
O = +1 if Σ_{i=1..n} w_i x_i > -w_{n+1},
O = -1 if Σ_{i=1..n} w_i x_i < -w_{n+1}

21 Lecture. Training for linearly separable classes: supervised learning.
Iterative algorithm for training sets belonging to classes w_1 and w_2:
- The first choice of w(1) is arbitrary.
- At the k-th iteration step:
  if y(k) ∈ w_1 and w^T(k) y(k) ≤ 0, update w(k) with c > 0: w(k+1) = w(k) + c y(k);
  if y(k) ∈ w_2 and w^T(k) y(k) ≥ 0, update w(k): w(k+1) = w(k) - c y(k);
  otherwise leave w(k) unchanged: w(k+1) = w(k).
Convergence is said to have occurred when the whole training set is cycled through without any errors.

22 Lecture. Training for linearly separable classes: supervised learning. Training example.
Two-class training set (augmented patterns):
w_1 = {(0,0,1)^T, (0,1,1)^T},  w_2 = {(1,0,1)^T, (1,1,1)^T}
Let c = 1 and w(1) = 0. Convergence is achieved after 14 iterations:
w(14) = (-2, 0, 1)^T,  d(x) = -2 x_1 + 1 = 0
(Figure: the two classes in the x_1-x_2 plane, separated by the line x_1 = 1/2.)
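
The update rule and the slide's toy example translate directly into code; the sketch below (a straightforward transcription, with the cycling order chosen for illustration) converges to w = (-2, 0, 1)^T.

```python
import numpy as np

w1 = [np.array([0, 0, 1]), np.array([0, 1, 1])]   # class 1, augmented patterns
w2 = [np.array([1, 0, 1]), np.array([1, 1, 1])]   # class 2, augmented patterns
w = np.zeros(3)                                   # w(1) = 0
c = 1

while True:
    errors = 0
    for y in w1:
        if w @ y <= 0:                            # class-1 pattern misclassified
            w = w + c * y
            errors += 1
    for y in w2:
        if w @ y >= 0:                            # class-2 pattern misclassified
            w = w - c * y
            errors += 1
    if errors == 0:                               # a full error-free cycle: converged
        break

print(w)                                          # [-2.  0.  1.]
```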

23 Lecture. Multilayer feedforward neural networks.
Problem: decision functions for multiclass pattern recognition.

24 Lecture. Multilayer feedforward neural networks.
Activation function with the necessary differentiability:
h_j(I_j) = 1 / (1 + exp( -(I_j + θ_j) / θ_0 )),  j = 1, 2, ..., N_J
where I_j is the total input to node j in layer J, and θ_j, θ_0 control the offset and shape of the sigmoid function.

25 Lecture. Multilayer feedforward neural networks. Activation function.
The offset θ_j is analogous to the last coefficient w_{n+1}; for each node j in layer J the activation function becomes
h_j(I_j) = 1 / (1 + exp( -( Σ_{k=1..N_K} w_{jk} O_k + θ_j ) / θ_0 ))
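
A short sketch of this activation and of the forward pass of one layer is given below, with random placeholder weights, θ_0 = 1, and illustrative layer sizes.

```python
import numpy as np

def h(I, theta, theta0=1.0):
    """Sigmoid activation h(I) = 1 / (1 + exp(-(I + theta)/theta0))."""
    return 1.0 / (1.0 + np.exp(-(I + theta) / theta0))

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 4))      # 3 nodes in layer J, 4 nodes in the preceding layer K
theta = rng.normal(size=3)       # per-node offsets theta_j
O_k = rng.normal(size=4)         # outputs of layer K

I_j = W @ O_k                    # total input to each node in layer J
O_j = h(I_j, theta)              # layer-J outputs
print(O_j)
```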

26 Lecture. Multilayer feedforward neural networks.
Training of the multi-layer network is the process of adjusting the weights of the neural units in all layers.
Output layer: adapting the weights is simple because the desired output of each node is known.
Hidden layers: difficult! The objective is to develop a training rule which allows adjustment of the weights in each of the layers so as to minimize an error function of the form
E_Q = (1/2) Σ_{q=1..N_Q} (r_q - O_q)^2
where r_q are the desired node responses, O_q the actual node responses, and N_Q the number of nodes in the output layer Q.

27 Lecture. Multilayer feedforward neural networks.
Step 1: adjust the weights of layer P, which precedes the output layer Q, in proportion to the partial derivatives of the error with respect to these weights:
Δw_qp = -α ∂E_Q/∂w_qp,  with α a positive corrective increment,
and after some algebra:
Δw_qp = α (r_q - O_q) h'_q(I_q) O_p = α δ_q O_p
(For the complete derivation see the reading.)
After the function h_q(I_q) is specified, all the terms are either known or can be observed in the network:
- r_q is known for any training pattern shown to the network;
- O_q, the value of each output node, can be observed;
- I_q, the input to the activation elements of layer Q, can be observed;
- O_p, the outputs of the nodes in layer P, can be observed.

28 Lecture. Multilayer feedforward neural networks.
Step 2: propagation of the weight adjustments into the internal layers. For any layers K and J, where K immediately precedes J, compute the weights w_jk that modify the connections between these two layers:
Δw_jk = α δ_j O_k,  with
δ_q = (r_q - O_q) h'_q(I_q) for the output layer,
δ_j = h'_j(I_j) Σ_p δ_p w_pj for an internal layer.
Using the activation function with θ_0 = 1 yields
δ_q = (r_q - O_q) O_q (1 - O_q) for the output layer,
δ_j = O_j (1 - O_j) Σ_p δ_p w_pj for an internal layer.
This is the generalized delta rule for training the multilayer feedforward neural network. NB: for the complete derivation see the reading.
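
The generalized delta rule can be sketched for a single hidden layer as below; the data, layer sizes, and learning rate are illustrative, θ_0 = 1 is assumed, and the bias/offset updates are omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 4))                 # 8 training vectors, 4 inputs each
R = (X[:, :1] > 0).astype(float)            # desired responses r_q (one output node)
W1 = rng.normal(size=(4, 5))                # input -> hidden weights
W2 = rng.normal(size=(5, 1))                # hidden -> output weights
alpha = 0.5
sigmoid = lambda I: 1.0 / (1.0 + np.exp(-I))

for epoch in range(1000):
    O_hidden = sigmoid(X @ W1)                              # phase 1: forward pass
    O_out = sigmoid(O_hidden @ W2)
    delta_q = (R - O_out) * O_out * (1 - O_out)             # output-layer delta
    delta_j = O_hidden * (1 - O_hidden) * (delta_q @ W2.T)  # internal-layer delta
    W2 += alpha * O_hidden.T @ delta_q                      # phase 2: weight updates
    W1 += alpha * X.T @ delta_j

print(np.round(O_out.ravel(), 2), R.ravel())                # outputs should approach the targets
```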

29 Lecture. Multilayer feedforward neural networks.
Summary of the training process: start with arbitrary, but not all equal, weights throughout the network. The application of the generalized delta rule involves two basic phases:
Phase 1: a training vector is presented to the network and propagated through the layers to compute the output O_j of each node. The outputs O_q of the nodes in the output layer are compared against the desired responses r_q to generate the error terms δ_q.
Phase 2: the weights of all network nodes are updated by gradually passing the error signal backwards. The same procedure is applied to update the bias weights θ_j, treating them as additional weights that modify a unit input into the summing junction of every node in the network.

30 Lecture. Multilayer feedforward neural networks. Example.
- Input: 48-dimensional vectors of normalized shape descriptors;
- Synthetic noisy samples generated through random perturbation of the contour coordinates within an 8x8 window.

31 Lecture. Multilayer feedforward neural networks. Example.
- Input layer with 48 nodes, matching the dimensionality of the input vectors;
- Output layer with 4 nodes, corresponding to the number of classes;
- There is no rule for the number of nodes in the internal layers;
- Classification condition for a pattern: the corresponding output unit is high (0.95 or above) whereas all other units are low (0.05 or below).

32 Lecture. Multilayer feedforward neural networks.
Training:
- 10 samples for each class with zero noise;
- gradual re-training with noisy sets.
Recognition:
- the recognition rate on noisy shapes is close to 77% when trained with noise-free data;
- the recognition rate increases to about 99% when trained with noisier data.
R_t is the probability of a boundary pixel being perturbed due to noise.

33 Lecture. Complexity of decision surfaces.
What is the nature of the decision surface implemented by a multi-layer network?
(Figure: a two-input, two-layer network whose output node performs an AND of the first-layer responses.) Each node in the first layer defines a line; more nodes would define a convex region. The figure shows possible decision boundaries that can be implemented: the network can distinguish between two classes that could not be separated by a single linear surface.

34 Lecture. Complexity of decision surfaces.
In a 3-layer network the nodes in:
- layer 1 implement lines;
- layer 2 perform an AND operation, forming regions from the various lines;
- layer 3 assign class membership to the various regions.
In general:
- a two-layer network implements arbitrary convex regions through the intersection of hyperplanes;
- a three-layer network implements decision surfaces of arbitrary complexity.
