Data Mining and MapReduce. Adapted from Lectures by Prabhakar Raghavan (Yahoo and Stanford) and Christopher Manning (Stanford)


1 Data Mining and MapReduce. Adapted from Lectures by Prabhakar Raghavan (Yahoo and Stanford) and Christopher Manning (Stanford)

2 Overview: Text Classification; K-Means Clustering; the Naïve Bayes algorithm

3 Relevance feedback revisited. In relevance feedback, the user marks a number of documents as relevant/nonrelevant, which can be used to improve search results. Suppose we just tried to learn a filter for nonrelevant documents. This is an instance of a text classification problem: two classes (relevant, nonrelevant); for each document, decide whether it is relevant or nonrelevant. The notion of classification is very general and has many applications within and beyond IR.

4 Standing queries. The path from information retrieval to text classification: you have an information need, say "Unrest in the Niger delta region," and you want to rerun an appropriate query periodically to find new news items on this topic. I.e., it's classification, not ranking. Such queries are called standing queries. Long used by information professionals; a modern mass instantiation is Google Alerts.

5 Spam filtering: another text classification task. From: "" <takworlld@hotmail.com> Subject: real estate is the only way... gem oalvgkay. Anyone can buy real estate with no money down. Stop paying rent TODAY! There is no need to spend hundreds or even thousands for similar courses. I am 22 years old and I have already purchased 6 properties using the methods outlined in this truly INCREDIBLE ebook. Change your life NOW! Click Below to order:

6 Categorization/Classification. Given: a description of an instance, x in X, where X is the instance language or instance space (issue: how to represent text documents), and a fixed set of classes C = {c_1, c_2, ..., c_J}. Determine: the category of x, c(x) in C, where c(x) is a classification function whose domain is X and whose range is C. We want to know how to build classification functions (classifiers).

7 More text classification examples. Many search engine functionalities use classification. Assign labels to each document or web page: labels are most often topics, such as Yahoo categories, e.g., "finance," "sports," "news>world>asia>business"; labels may be genres, e.g., "editorials," "movie-reviews," "news"; labels may be opinion on a person/product, e.g., like, hate, neutral; labels may be domain-specific, e.g., "interesting-to-me" vs. "not-interesting-to-me," contains adult language vs. doesn't, language identification (English, French, Chinese, ...), search vertical (about Linux vs. not), link spam vs. not link spam.

8 Classification Methods (1): Manual classification. Used by Yahoo! originally (now downplayed), Looksmart, about.com, ODP, PubMed. Very accurate when the job is done by experts; consistent when the problem size and team are small; difficult and expensive to scale. This means we need automatic classification methods for big problems.

9 Classification Methods (2): Supervised learning of a document-label assignment function. Many systems partly rely on machine learning (Autonomy, MSN, Verity, Enkata, Yahoo!, ...): k-Nearest Neighbors (simple, powerful), Naive Bayes (simple, common method), support-vector machines (newer, more powerful), plus many other methods. No free lunch: requires hand-classified training data, but data can be built up and refined by amateurs. Note that many commercial systems use a mixture of methods.

10 Clustering: k-means. Basic and fundamental. Original algorithm: 1. Pick k initial center points. 2. Iterate until convergence: 2.1 assign each point to the nearest center; 2.2 calculate new centers. Easy to parallelize.
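
A minimal sketch of this algorithm in Python (numpy-based; the function name and data layout are illustrative, not from the lecture):

    import numpy as np

    def kmeans(points, k, n_iters=100, seed=0):
        # points: (n, d) array; k: desired number of clusters.
        rng = np.random.default_rng(seed)
        # 1. Pick k initial center points (a random sample of the data).
        centers = points[rng.choice(len(points), size=k, replace=False)]
        for _ in range(n_iters):
            # 2.1 Assign each point to the nearest center.
            dists = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
            labels = dists.argmin(axis=1)
            # 2.2 Calculate new centers (an empty cluster keeps its old center).
            new_centers = np.array([points[labels == j].mean(axis=0)
                                    if np.any(labels == j) else centers[j]
                                    for j in range(k)])
            if np.allclose(new_centers, centers):
                break  # 2. Iterate until convergence.
            centers = new_centers
        return centers, labels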

11 K-Means Clustering (illustration).

12 How does K-means partition? When K centroids are set/fixed, they partition the whole data space into K mutually exclusive subspaces, forming a partition. A partition amounts to a Voronoi diagram. Changing the positions of the centroids leads to a new partitioning.

13-18 K-means demo (animation build-up): 1. The user sets the number of clusters they'd like, e.g. K=5. 2. Randomly guess K cluster center locations. 3. Each data point finds out which center it's closest to; thus each center owns a set of data points. 4. Each center finds the centroid of the points it owns... 5. ...and jumps there. 6. Repeat until terminated!

19 K-Means clustering with MapReduce. (Figure: Mapper_i-1, Mapper_i, Mapper_i+1 feeding Reducer_i-1, Reducer_i.) Each Mapper loads a set of data samples and assigns each sample to the nearest centroid; each Mapper needs to keep a copy of the centroids. How to set the initial centroids is very important! Usually we set the centroids using Canopy Clustering. [McCallum, Nigam and Ungar: "Efficient Clustering of High Dimensional Data Sets with Application to Reference Matching", SIGKDD 2000]

20 Clustering: k-means on MapReduce. A shared file contains the centroid points. map: 1. for each point, find the nearest center; 2. generate a <key, value> pair, where key = center id and value = the current point's coordinates. reduce: 1. collect all points belonging to the same cluster (they have the same key); 2. calculate the average, giving the new center. Iterate.
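
Expressed as map and reduce functions in Python (a framework-free sketch under the assumption that the shared centroid file has already been loaded into a list; real MapReduce plumbing is omitted):

    import numpy as np
    from collections import defaultdict

    def kmeans_map(point, centroids):
        # For each point, find the nearest center and emit
        # <key = center id, value = the point's coordinates>.
        center_id = int(np.argmin([np.linalg.norm(point - c) for c in centroids]))
        return center_id, point

    def kmeans_reduce(center_id, points):
        # Collect all points belonging to the same cluster (same key)
        # and average them to get the new center.
        return center_id, np.mean(points, axis=0)

    def one_iteration(data, centroids):
        groups = defaultdict(list)
        for point in data:                          # map phase
            key, value = kmeans_map(point, centroids)
            groups[key].append(value)
        new = dict(kmeans_reduce(k, pts) for k, pts in groups.items())
        # A centroid that owns no points is kept unchanged.
        return [new.get(i, c) for i, c in enumerate(centroids)]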

21 Probabilistic relevance feedback. Rather than re-weighting in a vector space: if the user has told us some relevant and some irrelevant documents, then we can proceed to build a probabilistic classifier, such as the Naive Bayes model: P(t_k | R) = |D_rk| / |D_r| and P(t_k | NR) = |D_nrk| / |D_nr|, where t_k is a term, D_r is the set of known relevant documents, D_rk is the subset that contain t_k, D_nr is the set of known irrelevant documents, and D_nrk is the subset that contain t_k.

22 Recall a few probability basics. For events a and b: p(a, b) = p(a ∩ b) = p(a|b) p(b) = p(b|a) p(a). Bayes' Rule: p(a|b) = p(b|a) p(a) / p(b) = p(b|a) p(a) / [p(b|a) p(a) + p(b|ā) p(ā)], i.e., Posterior = Likelihood × Prior / Evidence. Odds: O(a) = p(a) / p(ā) = p(a) / (1 − p(a)).

23 Bayesian methods. Learning and classification methods based on probability theory. Bayes' theorem plays a critical role in probabilistic learning and classification. Build a generative model that approximates how the data is produced. Uses the prior probability of each category given no information about an item. Categorization produces a posterior probability distribution over the possible categories, given a description of an item and the prior probabilities.

24 Bayes' Rule. P(C, D) = P(C|D) P(D) = P(D|C) P(C), hence P(C|D) = P(D|C) P(C) / P(D).

25 The Naïve Bayes classifier. Class Flu with features X_1, ..., X_5: runny nose, sinus, cough, fever, muscle-ache. Conditional independence assumption: features (term presence) are independent of each other given the class: P(X_1, ..., X_5 | C) = P(X_1 | C) · P(X_2 | C) · ... · P(X_5 | C). This model is appropriate for binary variables (multivariate Bernoulli model).

26 Naive Bayes classifiers. Task: classify a new instance D, described by a tuple of attribute values D = <x_1, x_2, ..., x_n>, into one of the classes c in C: c_MAP = argmax_{c in C} P(c | x_1, ..., x_n) = argmax_{c in C} P(x_1, ..., x_n | c) P(c) / P(x_1, ..., x_n) = argmax_{c in C} P(x_1, ..., x_n | c) P(c). (MAP = Maximum A Posteriori Probability.)

27 Naïve Bayes classifier: the Naïve Bayes assumption. P(c_j) can be estimated from the frequency of classes in the training examples. P(x_1, x_2, ..., x_n | c_j) has O(|X|^n · |C|) parameters and could only be estimated if a very, very large number of training examples was available. Naïve Bayes conditional independence assumption: assume that the probability of observing the conjunction of attributes equals the product of the individual probabilities P(x_i | c_j).

28 Example: Play Tennis.

29 Example: Learning Phase.
Outlook: P(Sunny|Yes) = 2/9, P(Sunny|No) = 3/5; P(Overcast|Yes) = 4/9, P(Overcast|No) = 0/5; P(Rain|Yes) = 3/9, P(Rain|No) = 2/5.
Temperature: P(Hot|Yes) = 2/9, P(Hot|No) = 2/5; P(Mild|Yes) = 4/9, P(Mild|No) = 2/5; P(Cool|Yes) = 3/9, P(Cool|No) = 1/5.
Humidity: P(High|Yes) = 3/9, P(High|No) = 4/5; P(Normal|Yes) = 6/9, P(Normal|No) = 1/5.
Wind: P(Strong|Yes) = 3/9, P(Strong|No) = 3/5; P(Weak|Yes) = 6/9, P(Weak|No) = 2/5.
P(Play=Yes) = 9/14, P(Play=No) = 5/14.

30 Example: Test Phase. Given a new instance x' = (Outlook=Sunny, Temperature=Cool, Humidity=High, Wind=Strong), predict its label. Look up the tables obtained in the learning phase: P(Outlook=Sunny|Play=Yes) = 2/9, P(Temperature=Cool|Play=Yes) = 3/9, P(Humidity=High|Play=Yes) = 3/9, P(Wind=Strong|Play=Yes) = 3/9, P(Play=Yes) = 9/14; P(Outlook=Sunny|Play=No) = 3/5, P(Temperature=Cool|Play=No) = 1/5, P(Humidity=High|Play=No) = 4/5, P(Wind=Strong|Play=No) = 3/5, P(Play=No) = 5/14. Decision making with the MAP rule: P(Yes|x') ∝ [P(Sunny|Yes) P(Cool|Yes) P(High|Yes) P(Strong|Yes)] P(Play=Yes) ≈ 0.0053; P(No|x') ∝ [P(Sunny|No) P(Cool|No) P(High|No) P(Strong|No)] P(Play=No) ≈ 0.0206. Given the fact that P(Yes|x') < P(No|x'), we label x' as No.
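
The same decision reproduced in Python (all numbers are copied from the learning-phase tables above):

    p_yes, p_no = 9/14, 5/14
    like_yes = (2/9) * (3/9) * (3/9) * (3/9)  # Sunny, Cool, High, Strong | Yes
    like_no  = (3/5) * (1/5) * (4/5) * (3/5)  # Sunny, Cool, High, Strong | No

    score_yes = like_yes * p_yes   # ~0.0053
    score_no  = like_no  * p_no    # ~0.0206
    print("No" if score_no > score_yes else "Yes")  # prints: No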

31 Learning the model. First attempt: maximum likelihood estimates; simply use the frequencies in the data: P̂(c_j) = N(C = c_j) / N, and P̂(x_i | c_j) = N(X_i = x_i, C = c_j) / N(C = c_j).

32 Problem with maximum likelihood. With the flu model P(X_1, ..., X_5 | C) = P(X_1 | C) · ... · P(X_5 | C): what if we have seen no training cases where the patient had no flu but had muscle aches? Then P̂(X_5 = t | C = nf) = N(X_5 = t, C = nf) / N(C = nf) = 0. Zero probabilities cannot be conditioned away, no matter the other evidence: argmax_c P̂(c) ∏_i P̂(x_i | c) = 0.

33 MapReduce approach: simple method. Training instances I_1: (Runnynose, sinus, cough, fever, muscle-ache; Flu), ..., I_m: (Runnynose, sinus, cough, fever, muscle-ache; Flu). MAP: each instance emits pairs such as <Runnynose, Flu>, <sinus, Flu>, <cough, Flu>, <fever, Flu>, <muscle-ache, Flu>, and <Flu>. REDUCE: aggregate the counts into the estimates P(Runnynose|Flu), P(sinus|Flu), P(cough|Flu), P(fever|Flu), P(muscle-ache|Flu), and P(Flu).

34 Machine learning algorithm transformation: Naïve Bayes classification. v_MAP = argmax_{v_j in V} P(v_j | a_1, ..., a_n) = argmax_{v_j in V} P(a_1, ..., a_n | v_j) P(v_j) / P(a_1, ..., a_n) = argmax_{v_j in V} P(a_1, ..., a_n | v_j) P(v_j); with the independence assumption, v_NB = argmax_{v_j in V} P(v_j) ∏_i P(a_i | v_j). The required probability estimates are computed with map and combined with reduce.

35 Machine learning algorithm transformation to a MapReduce solution: find the statistics-calculation part, distribute the calculations over the data using map, then gather and refine all statistics in reduce.
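
A schematic Python rendering of this transformation for Naive Bayes (function and key names are illustrative; a real deployment would hand nb_map and nb_reduce to a MapReduce framework rather than loop locally):

    from collections import defaultdict

    def nb_map(features, label):
        # Statistics-calculation part, distributed over the data:
        # emit one count per <feature, class> pair and one per class.
        for f in features:
            yield (f, label), 1
        yield ("__class__", label), 1

    def nb_reduce(key, counts):
        # Gather and refine all statistics: total count per key.
        return key, sum(counts)

    def train(instances):
        groups = defaultdict(list)
        for features, label in instances:            # map phase
            for key, value in nb_map(features, label):
                groups[key].append(value)
        return dict(nb_reduce(k, v) for k, v in groups.items())

    # counts = train([(["runnynose", "cough"], "Flu"), ...])
    # P(cough | Flu) ~ counts[("cough", "Flu")] / counts[("__class__", "Flu")]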


37 Exercise: estimate the parameters of a Naive Bayes classifier and classify the test document. (The training set and test document were given in a figure on the slide.)

38 Example: parameter estimates. The denominators are (8 + 6) = 14 and (3 + 6) = 9 because the lengths of text_c and text_c̄ are 8 and 3, respectively, and because the constant B is 6, as the vocabulary consists of six terms.

39 Example: classification. Thus, the classifier assigns the test document to c = China. The reason for this classification decision is that the three occurrences of the positive indicator CHINESE in d_5 outweigh the occurrences of the two negative indicators JAPAN and TOKYO.

40 Example: Sensors. Reality: Raining, P(+,+ | r) = 3/8, P(−,− | r) = 1/8; Sunny, P(+,+ | s) = 1/8, P(−,− | s) = 3/8. NB model: Raining? → M1, M2. NB factors: P(s) = 1/2, P(+ | s) = 1/4, P(+ | r) = 3/4. Predictions: P(r, +, +) = 1/2 · 3/4 · 3/4; P(s, +, +) = 1/2 · 1/4 · 1/4; P(r | +, +) = 9/10; P(s | +, +) = 1/10.


42 Smoothing to avoid overfitting. P̂(x_i | c_j) = (N(X_i = x_i, C = c_j) + 1) / (N(C = c_j) + k), where k is the number of values of X_i.

43 Smoothing to avoid overfitting: a somewhat more subtle version. P̂(x_{i,k} | c_j) = (N(X_i = x_{i,k}, C = c_j) + m P(x_{i,k})) / (N(C = c_j) + m), where P(x_{i,k}) is the overall fraction of the data in which X_i = x_{i,k}, and m controls the extent of smoothing.
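
Both estimators as small Python helpers (a sketch; the argument names are mine):

    def laplace_estimate(n_xc, n_c, k):
        # (N(X_i = x_i, C = c_j) + 1) / (N(C = c_j) + k),
        # where k is the number of values X_i can take.
        return (n_xc + 1) / (n_c + k)

    def m_estimate(n_xc, n_c, prior, m):
        # (N(X_i = x_i, C = c_j) + m * P(x_i)) / (N(C = c_j) + m):
        # prior is the overall fraction of the data where X_i = x_i,
        # m controls the extent of smoothing.
        return (n_xc + m * prior) / (n_c + m)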

44 Using multinomial Naive Bayes classifiers to classify text: basic method. Attributes are text positions, values are words: c_NB = argmax_{c_j in C} P(c_j) ∏_i P(x_i | c_j) = argmax_{c_j in C} P(c_j) P(x_1 = "our" | c_j) · ... · P(x_n = "text" | c_j). Still too many possibilities. Assume that classification is independent of the positions of the words: use the same parameters for each position i. The result is a bag-of-words model (over tokens, not types).

45 Naïve Bayes: learning algorithm. From the training corpus, extract the Vocabulary. Calculate the required P(c_j) and P(x_k | c_j) terms: for each c_j in C do: docs_j ← the subset of documents for which the target class is c_j; P(c_j) ← |docs_j| / (total # of documents); Text_j ← a single document containing all of docs_j; for each word x_k in Vocabulary: n_k ← the number of occurrences of x_k in Text_j; P(x_k | c_j) ← (n_k + 1) / (n + |Vocabulary|), where n is the total number of word positions in Text_j.
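
A direct transcription of this learning algorithm into Python (documents are assumed to be lists of tokens; the names are illustrative):

    from collections import Counter

    def train_multinomial_nb(docs, labels):
        vocabulary = {w for doc in docs for w in doc}
        prior, cond = {}, {}
        for c in set(labels):
            docs_c = [d for d, y in zip(docs, labels) if y == c]
            prior[c] = len(docs_c) / len(docs)
            text_c = Counter(w for d in docs_c for w in d)  # the mega-document Text_j
            n = sum(text_c.values())                        # word positions in Text_j
            # Laplace-smoothed P(x_k | c_j) = (n_k + 1) / (n + |Vocabulary|)
            cond[c] = {w: (text_c[w] + 1) / (n + len(vocabulary))
                       for w in vocabulary}
        return vocabulary, prior, cond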

46 Naïve Bayes: classifying. positions ← all word positions in the current document which contain tokens found in Vocabulary. Return c_NB = argmax_{c_j in C} P(c_j) ∏_{i in positions} P(x_i | c_j).

47 Naive Bayes: time complexity. Training time: O(|D| L_d + |C| |V|), where L_d is the average length of a document in D. (Assumes V and all the per-class document and word counts are pre-computed in O(|D| L_d) time during one pass through all of the data.) Generally just O(|D| L_d), since usually |C| |V| < |D| L_d. Test time: O(|C| L_t), where L_t is the average length of a test document. Very efficient overall: linearly proportional to the time needed just to read in all the data. Plus, robust in practice.

48 Underflow prevention: log space. Multiplying lots of probabilities, which are between 0 and 1, can result in floating-point underflow. Since log(xy) = log(x) + log(y), it is better to perform all computations by summing logs of probabilities rather than multiplying probabilities. The class with the highest final un-normalized log probability score is still the most probable: c_NB = argmax_{c_j in C} [log P(c_j) + Σ_{i in positions} log P(x_i | c_j)]. Note that the model is now just a max of a sum of weights.
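
The classification rule from the previous slide, computed in log space as recommended here (a sketch that pairs with the training function above; the names are mine):

    import math

    def classify_nb(doc, vocabulary, prior, cond):
        # Only positions whose token appears in the vocabulary contribute.
        positions = [w for w in doc if w in vocabulary]
        def log_score(c):
            # Sum of log probabilities instead of a product of probabilities.
            return math.log(prior[c]) + sum(math.log(cond[c][w]) for w in positions)
        return max(prior, key=log_score)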

49 Note: two models. Model 1: multivariate Bernoulli. One feature X_w for each word in the dictionary; X_w = true in document d if w appears in d. Naive Bayes assumption: given the document's topic, the appearance of one word in the document tells us nothing about the chances that another word appears. This is the model used in the binary independence model in classic probabilistic relevance feedback on hand-classified data.

50 Two models. Model 2: multinomial = class-conditional unigram. One feature X_i for each word position in the document; the feature's values are all the words in the dictionary; the value of X_i is the word in position i. Naïve Bayes assumption: given the document's topic, the word in one position in the document tells us nothing about the words in other positions. Second assumption: word appearance does not depend on position: P(X_i = w | c) = P(X_j = w | c) for all positions i, j, words w, and classes c.

51 Parameter estimation. Multivariate Bernoulli model: P̂(X_w = t | c_j) = the fraction of documents of topic c_j in which word w appears. Multinomial model: P̂(X_i = w | c_j) = the fraction of times in which word w appears across all documents of topic c_j. One can create a mega-document for topic j by concatenating all documents on this topic and use the frequency of w in the mega-document.
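
The two (unsmoothed) estimators side by side in Python, to make the contrast concrete (illustrative helper names; docs_of_topic is assumed to be a list of token lists for one class):

    def bernoulli_estimate(docs_of_topic, w):
        # Fraction of documents of the topic in which word w appears.
        return sum(w in set(d) for d in docs_of_topic) / len(docs_of_topic)

    def multinomial_estimate(docs_of_topic, w):
        # Fraction of token positions equal to w across the topic's mega-document.
        tokens = [t for d in docs_of_topic for t in d]
        return tokens.count(w) / len(tokens)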

52 Classification: multinomial vs. multivariate Bernoulli? The multinomial model is almost always more effective in text applications! See IIR sections 13.2 and 13.3 for worked examples with each model.

53 Feature selection: why? Text collections have a large number of features: 10,000 to 1,000,000 unique words, and more. Feature selection makes using a particular classifier feasible (some classifiers can't deal with hundreds of thousands of features), reduces training time (training time for some methods is quadratic or worse in the number of features), and can improve generalization performance (eliminates noise features, avoids overfitting).

54 Feature selection: how? Two ideas. Hypothesis-testing statistics: are we confident that the value of one categorical variable is associated with the value of another? (The chi-square test, χ².) Information theory: how much information does the value of one categorical variable give you about the value of another? (Mutual information, MI.) They're similar, but χ² measures confidence in the association based on the available statistics, while MI measures the extent of the association assuming perfect knowledge of the probabilities.

55 χ² statistic (CHI). χ² sums (f_o − f_e)² / f_e over all table entries: is the observed number what you'd expect given the marginals? Example: a 2×2 contingency table of Term = jaguar / Term ≠ jaguar against Class = auto / Class ≠ auto, with observed counts f_o and expected counts f_e computed from the marginals. Here χ²(jaguar, auto) = Σ (O − E)² / E = 12.9 (p < 0.001). The null hypothesis is rejected with confidence .999, since 12.9 exceeds the critical value for .999 confidence.

56 χ² statistic (CHI). There is a simpler formula for the 2×2 χ²: χ² = N (AD − CB)² / [(A + C)(B + D)(A + B)(C + D)], where A = #(t, c), B = #(t, ¬c), C = #(¬t, c), D = #(¬t, ¬c), and N = A + B + C + D. Yields 1.29? What is the value for complete independence of term and category?
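
The 2×2 formula in Python (a sketch; in practice one would add smoothing or fall back to a library implementation):

    def chi2_2x2(a, b, c, d):
        # a = #(t, c), b = #(t, not-c), c = #(not-t, c), d = #(not-t, not-c)
        n = a + b + c + d
        return n * (a * d - c * b) ** 2 / ((a + c) * (b + d) * (a + b) * (c + d))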

57 Feature selection via mutual information. In the training set, choose the k words which best discriminate (give the most information about) the categories. The mutual information between a word w and a class c is: I(w, c) = Σ_{e_w in {0,1}} Σ_{e_c in {0,1}} p(e_w, e_c) log [p(e_w, e_c) / (p(e_w) p(e_c))], computed for each word w and each category c.
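
Computed from the four cell counts of the word/class contingency table (a sketch; the count naming n_{e_w e_c} is mine):

    import math

    def mutual_information(n11, n10, n01, n00):
        # n11 = #(word present, in class), n10 = #(present, not in class),
        # n01 = #(absent, in class),       n00 = #(absent, not in class).
        n = n11 + n10 + n01 + n00
        cells = [
            (n11, n11 + n10, n11 + n01),  # e_w = 1, e_c = 1
            (n10, n11 + n10, n10 + n00),  # e_w = 1, e_c = 0
            (n01, n01 + n00, n11 + n01),  # e_w = 0, e_c = 1
            (n00, n01 + n00, n10 + n00),  # e_w = 0, e_c = 0
        ]
        # I(w, c) = sum over cells of p(e_w, e_c) * log[p(e_w, e_c) / (p(e_w) p(e_c))],
        # with 0 * log 0 taken as 0.
        return sum((nj / n) * math.log((nj / n) / ((nw / n) * (nc / n)))
                   for nj, nw, nc in cells if nj)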

58 Feature selection via MI (contd.). For each category we build a list of the k most discriminating terms. For example, on 20 Newsgroups: sci.electronics: circuit, voltage, amp, ground, copy, battery, electronics, cooling, ...; rec.autos: car, cars, engine, ford, dealer, mustang, oil, collision, autos, tires, toyota, ... Greedy: does not account for correlations between terms.

59 Feature selection. Mutual information: clear information-theoretic interpretation, but may select rare uninformative terms. Chi-square: statistical foundation, but may select very slightly informative frequent terms that are not very useful for classification. Just use the commonest terms? No particular foundation, but in practice this is often 90% as good.

60 Feature selection for NB. In general, feature selection is necessary for multivariate Bernoulli NB; otherwise you suffer from noise and multi-counting. Feature selection really means something different for multinomial NB: it means dictionary truncation. This feature selection normally isn't needed for multinomial NB, but it may help a fraction with quantities that are badly estimated.

61 WebKB experiment (1998). Classify webpages from CS departments into: student, faculty, course, project. Train on ~5,000 hand-labeled web pages (Cornell, Washington, U. Texas, Wisconsin); crawl and classify a new site (CMU). Results (extracted and correct counts were shown in a table; accuracy per class): Student 72%, Faculty 42%, Person 79%, Project 73%, Course 89%, Department 100%.

62 NB model comparison: WebKB (results figure).


64 Naïve Bayes on spam email (figure).

65 Violation of NB assumptions: conditional independence; positional independence. Examples?

66 Naive Bayes is not so naive. Naïve Bayes took first and second place in the KDD-CUP 97 competition, among 16 then state-of-the-art algorithms (goal: a financial-services direct-mail response prediction model: predict whether the recipient of mail will actually respond to the advertisement; 750,000 records). Robust to irrelevant features: irrelevant features cancel each other without affecting results (decision trees, in contrast, can suffer heavily from this). Very good in domains with many equally important features (decision trees suffer from fragmentation in such cases, especially with little data). A good, dependable baseline for text classification (but not the best!). Optimal if the independence assumptions hold: if the assumed independence is correct, then it is the Bayes-optimal classifier for the problem. Very fast: learning requires one pass of counting over the data; testing is linear in the number of attributes and the document collection size. Low storage requirements.
