Handling Uncertainty
- Emory Hutchinson
1 Handling Uncertainty
2 Uncertain knowledge. Typical example: Diagnosis.

Name    Toothache  Cavity
Smith   true       true
Mike    true       true
Mary    false      true
Quincy  true       false

Can we certainly derive the diagnostic rule: if Toothache=true then Cavity=true? The problem is that this rule isn't always right. Not all patients with toothache have cavities; some of them have gum disease, an abscess, etc. We could try turning the rule into a causal rule: if Cavity=true then Toothache=true. But this rule isn't necessarily right either; not all cavities cause pain.
3 Belief and Probability. The connection between toothaches and cavities is not a logical consequence in either direction. However, we can attach a degree of belief to the rules. Our main tool for this is probability theory. E.g., we might not know for sure what afflicts a particular patient, but we believe that there is, say, an 80% chance (that is, probability 0.8) that the patient has a cavity if he has a toothache. We usually get this belief from statistical data.
4 Syntax. Basic element: the random variable, corresponding to an attribute of the data. E.g., Cavity (do I have a cavity?) is one of <cavity, ¬cavity>; Weather is one of <sunny, rainy, cloudy, snow>. Both Cavity and Weather are discrete random variables. Domain values must be exhaustive and mutually exclusive. Elementary propositions are constructed by the assignment of a value to a random variable: e.g., Cavity = cavity, Weather = sunny.
5 Prior probability and distribution. The prior or unconditional probability associated with a proposition is the degree of belief accorded to it in the absence of any other information. E.g., P(Cavity = cavity) = 0.1 (or abbreviated P(cavity) = 0.1), P(Weather = sunny) = 0.7 (or abbreviated P(sunny) = 0.7). A probability distribution gives values for all possible assignments: P(Weather = sunny) = 0.7, P(Weather = rain) = 0.2, P(Weather = cloudy) = 0.08, P(Weather = snow) = 0.02.
6 Conditional probability. E.g., P(cavity | toothache) = 0.8, i.e., the probability of cavity given that toothache is all I know. It can be interpreted as the probability that the rule "if Toothache=true then Cavity=true" holds. Definition of conditional probability: P(a | b) = P(a ∧ b) / P(b), if P(b) > 0. The product rule gives an alternative formulation: P(a ∧ b) = P(a | b) P(b) = P(b | a) P(a).
7 Bayes' Rule. Product rule: P(a ∧ b) = P(a | b) P(b) = P(b | a) P(a). Bayes' rule: P(a | b) = P(b | a) P(a) / P(b). It is useful for assessing diagnostic probability from causal probability, as: P(Cause | Effect) = P(Effect | Cause) P(Cause) / P(Effect). Bayes' rule is useful in practice because there are many cases where we do have good probability estimates for three of these numbers and need to compute the fourth.
8 Applying Bayes' rule. For example, a doctor knows that meningitis causes the patient to have a stiff neck 50% of the time. The doctor also knows some unconditional facts: the prior probability that a patient has meningitis is 1/50,000, and the prior probability that any patient has a stiff neck is 1/20. So, what do we have in terms of probabilities?
9 Bayes' rule (cont'd).
P(StiffNeck=true | Meningitis=true) = 0.5
P(Meningitis=true) = 1/50000
P(StiffNeck=true) = 1/20
P(Meningitis=true | StiffNeck=true)
= P(StiffNeck=true | Meningitis=true) P(Meningitis=true) / P(StiffNeck=true)
= (0.5) * (1/50000) / (1/20) = 0.0002
That is, we expect only 1 in 5000 patients with a stiff neck to have meningitis. This is still a very small chance; the reason is the very small a priori probability. Also, observe that
P(Meningitis=false | StiffNeck=true) = P(StiffNeck=true | Meningitis=false) P(Meningitis=false) / P(StiffNeck=true)
The factor 1/P(StiffNeck=true) is the same for both conditional probabilities. It is called the normalization constant (denoted as α).
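The computation above can be checked in a few lines of Python (a minimal sketch, using the slide's numbers):

```python
# Bayes' rule on the meningitis example: P(m | s) = P(s | m) P(m) / P(s).
p_s_given_m = 0.5        # P(StiffNeck=true | Meningitis=true)
p_m = 1 / 50000          # prior P(Meningitis=true)
p_s = 1 / 20             # prior P(StiffNeck=true)

p_m_given_s = p_s_given_m * p_m / p_s   # = 0.0002, i.e. 1 in 5000

# The normalization constant alpha = 1/P(s) is shared by both posteriors,
# so P(Meningitis=false | StiffNeck=true) is just the complement here.
p_not_m_given_s = 1 - p_m_given_s
```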
10 Bayes' rule -- more variables.
P(cause | effect1, effect2) = P(cause, effect1, effect2) / P(effect1, effect2)
= α P(cause, effect1, effect2)
= α P(effect1 | effect2, cause) P(effect2, cause)
= α P(effect1 | effect2, cause) P(effect2 | cause) P(cause)
Although effect1 might not be independent of effect2, it might be that given the cause they are independent. E.g., effect1 is ability-in-reading and effect2 is length-of-arms. There is indeed a dependence of ability-in-reading on length-of-arms: people with longer arms read better than those with short arms. However, given the cause Age, ability-in-reading is independent of length-of-arms.
11 Naive Bayes. Two assumptions: attributes (effects) are (i) equally important and (ii) conditionally independent given the class value.
P(cause | effect1, effect2) = α P(effect1 | effect2, cause) P(effect2 | cause) P(cause)
= α P(effect1 | cause) P(effect2 | cause) P(cause)
In general: P(cause | effect1, ..., effectn) = α P(effect1 | cause) ... P(effectn | cause) P(cause)
This means that knowledge about the value of a particular attribute doesn't tell us anything about the value of another attribute (if the class is known). Although the formula is based on assumptions that are almost never correct, this scheme works well in practice!
12 Weather Data. Here we don't really have effects, but rather evidence.
13 Naïve Bayes for classification. Classification learning: what's the probability of the class given an instance? Instance (evidence E): E1=e1, E2=e2, ..., En=en. Class C = {c, ¬c}. Naïve Bayes assumption: the evidence can be split into independent parts (i.e., the attributes of the instance!):
P(c | E) = P(c | e1, e2, ..., en) = P(e1 | c) P(e2 | c) ... P(en | c) P(c) / P(e1, e2, ..., en)
14 The weather data example.
P(play=yes | E)
= P(Outlook=Sunny | play=yes) * P(Temp=Cool | play=yes) * P(Humidity=High | play=yes) * P(Windy=True | play=yes) * P(play=yes) / P(E)
= (2/9) * (3/9) * (3/9) * (3/9) * (9/14) / P(E)
= 0.0053 / P(E)
Don't worry about the 1/P(E); it's α, the normalization constant.
15 The weather data example.
P(play=no | E)
= P(Outlook=Sunny | play=no) * P(Temp=Cool | play=no) * P(Humidity=High | play=no) * P(Windy=True | play=no) * P(play=no) / P(E)
= (3/5) * (1/5) * (4/5) * (3/5) * (5/14) / P(E)
= 0.0206 / P(E)
16 Normalization constant.
P(play=yes | E) + P(play=no | E) = 1
i.e., 0.0053 / P(E) + 0.0206 / P(E) = 1
i.e., P(E) = 0.0259
So, P(play=yes | E) = 0.0053 / 0.0259 = 20.5% and P(play=no | E) = 0.0206 / 0.0259 = 79.5%.
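The three slides above can be reproduced in a few lines of Python (a sketch; the counts are those from the slides):

```python
# Naive Bayes on the weather data: likelihood times prior for each class,
# then normalize so the two posteriors sum to 1.
like_yes = (2/9) * (3/9) * (3/9) * (3/9) * (9/14)   # ~ 0.0053
like_no  = (3/5) * (1/5) * (4/5) * (3/5) * (5/14)   # ~ 0.0206
p_e = like_yes + like_no                            # P(E) ~ 0.0259
p_yes = like_yes / p_e                              # ~ 20.5%
p_no  = like_no  / p_e                              # ~ 79.5%
```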
17 The zero-frequency problem. What if an attribute value doesn't occur with every class value (e.g., Outlook=overcast for class play=no)? The probability P(Outlook=overcast | play=no) will be zero, so P(play=no | E) will also be zero, no matter how likely the other values are! Solution: add 1 to the count for every attribute value-class combination (Laplace estimator), and add k (the number of possible attribute values) to the denominator. For example, instead of
P(play=yes | E) = (2/9) * (3/9) * (3/9) * (3/9) * (9/14) / P(E) = 0.0053 / P(E)
it will be:
P(play=yes | E) = ((2+1)/(9+3)) * ((3+1)/(9+3)) * ((3+1)/(9+2)) * ((3+1)/(9+2)) * (10/16) / P(E) = 0.0069 / P(E)
where 3 is the number of possible values of Outlook and Temperature, and 2 the number of possible values of Humidity and Windy.
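The Laplace correction can be written as a small helper (a sketch; the attribute-value counts are those from the slide):

```python
# Laplace estimator: add 1 to each count and k (the number of possible
# attribute values) to the denominator, so no factor is ever exactly zero.
def laplace(count, total, k):
    return (count + 1) / (total + k)

# Smoothed factors for P(play=yes | E) on the weather data:
like_yes = (laplace(2, 9, 3)      # Outlook=Sunny, 3 possible values
            * laplace(3, 9, 3)    # Temp=Cool, 3 possible values
            * laplace(3, 9, 2)    # Humidity=High, 2 possible values
            * laplace(3, 9, 2)    # Windy=True, 2 possible values
            * laplace(9, 14, 2))  # class prior, 2 class values -> 10/16
```

Note that even a zero count now yields a small positive probability, which is exactly what fixes the zero-frequency problem.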
18 Missing values. Training phase: the instance is not included in the frequency count for the attribute value-class combination. Classification phase: the attribute is omitted from the calculation. Example (Outlook missing, with Laplace-smoothed counts):
P(play=yes | E) = P(Temp=Cool | play=yes) * P(Humidity=High | play=yes) * P(Windy=True | play=yes) * P(play=yes) / P(E)
= (4/12) * (4/11) * (4/11) * (10/16) / P(E) = 0.0275 / P(E)
P(play=no | E) = P(Temp=Cool | play=no) * P(Humidity=High | play=no) * P(Windy=True | play=no) * P(play=no) / P(E)
= (2/8) * (5/7) * (4/7) * (6/16) / P(E) = 0.0383 / P(E)
After normalization: P(play=yes | E) = 42%, P(play=no | E) = 58%.
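The same calculation can be sketched directly, with the missing Outlook factor simply dropped from the product (smoothed counts from the slide):

```python
# Missing attribute at classification time: omit its factor entirely.
like_yes = (4/12) * (4/11) * (4/11) * (10/16)   # Temp, Humidity, Windy, prior
like_no  = (2/8)  * (5/7)  * (4/7)  * (6/16)
p_yes = like_yes / (like_yes + like_no)          # ~ 42%
p_no  = 1 - p_yes                                # ~ 58%
```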
19 Dealing with numeric attributes. Usual assumption: attributes have a normal (Gaussian) probability distribution given the class. The probability density function of the normal distribution is:
f(x | class) = (1 / (σ sqrt(2π))) e^(-(x - µ)^2 / (2σ^2))
We approximate µ by the sample mean:
x̄ = (1/n) Σ_{i=1..n} x_i
We approximate σ^2 by the sample variance:
σ^2 = (1/(n-1)) Σ_{i=1..n} (x_i - x̄)^2
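The density, sample mean, and sample variance above translate directly into Python (a minimal sketch):

```python
import math

# Gaussian density with parameters estimated from a class's observed values:
# mean ~ sample mean, variance ~ sample variance (n - 1 in the denominator).
def gaussian_density(x, values):
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / (n - 1)
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)
```

For example, gaussian_density(66, [83, 70, 68, 64, 69, 75, 75, 72, 81]) reproduces the f(temperature=66 | yes) ≈ 0.034 computed on the next slide.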
20 Weather Data.
outlook   temperature  humidity  windy  play
sunny     85           85        FALSE  no
sunny     80           90        TRUE   no
overcast  83           86        FALSE  yes
rainy     70           96        FALSE  yes
rainy     68           80        FALSE  yes
rainy     65           70        TRUE   no
overcast  64           65        TRUE   yes
sunny     72           95        FALSE  no
sunny     69           70        FALSE  yes
rainy     75           80        FALSE  yes
sunny     75           70        TRUE   yes
overcast  72           90        TRUE   yes
overcast  81           75        FALSE  yes
rainy     71           91        TRUE   no
We need to compute f(temperature=66 | yes):
f(temperature=66 | yes) = e^(-((66-m)^2 / (2*var))) / sqrt(2*3.14*var)
m = (83+70+68+64+69+75+75+72+81) / 9 = 73
var = ((83-73)^2 + (70-73)^2 + (68-73)^2 + (64-73)^2 + (69-73)^2 + (75-73)^2 + (75-73)^2 + (72-73)^2 + (81-73)^2) / (9-1) = 38
f(temperature=66 | yes) = e^(-((66-73)^2 / (2*38))) / sqrt(2*3.14*38) = 0.034
21 Weather Data (same table as the previous slide). We compute similarly f(humidity=90 | yes):
f(humidity=90 | yes) = e^(-((90-m)^2 / (2*var))) / sqrt(2*3.14*var)
m = (86+96+80+65+70+80+70+90+75) / 9 = 79
var = ((86-79)^2 + (96-79)^2 + (80-79)^2 + (65-79)^2 + (70-79)^2 + (80-79)^2 + (70-79)^2 + (90-79)^2 + (75-79)^2) / (9-1) = 104
f(humidity=90 | yes) = e^(-((90-79)^2 / (2*104))) / sqrt(2*3.14*104) = 0.022
22 A new day E. Classifying a new day (Outlook=Sunny, Temp=66, Humidity=90, Windy=True):
P(play=yes | E) = P(Outlook=Sunny | play=yes) * f(Temp=66 | play=yes) * f(Humidity=90 | play=yes) * P(Windy=True | play=yes) * P(play=yes) / P(E)
= (2/9) * (0.034) * (0.022) * (3/9) * (9/14) / P(E) = 0.000036 / P(E)
P(play=no | E) = P(Outlook=Sunny | play=no) * f(Temp=66 | play=no) * f(Humidity=90 | play=no) * P(Windy=True | play=no) * P(play=no) / P(E)
= (3/5) * (0.0291) * (0.038) * (3/5) * (5/14) / P(E) = 0.000142 / P(E)
After normalization: P(play=yes | E) ≈ 20%, P(play=no | E) ≈ 80%.
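Slide 22 end to end (a sketch; the nominal factors are counts, the numeric ones are the Gaussian density values computed on the two slides before it):

```python
# Mixing nominal probabilities with Gaussian densities for numeric attributes.
like_yes = (2/9) * 0.034  * 0.022 * (3/9) * (9/14)   # ~ 0.000036
like_no  = (3/5) * 0.0291 * 0.038 * (3/5) * (5/14)   # ~ 0.000142
p_yes = like_yes / (like_yes + like_no)              # ~ 20%
p_no  = 1 - p_yes                                    # ~ 80%
```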
23 Tax Data -- Naive Bayes.
Tid  Refund  Marital Status  Taxable Income  Evade
1    Yes     Single          125K            No
2    No      Married         100K            No
3    No      Single          70K             No
4    Yes     Married         120K            No
5    No      Divorced        95K             Yes
6    No      Married         60K             No
7    Yes     Divorced        220K            No
8    No      Single          85K             Yes
9    No      Married         75K             No
10   No      Single          90K             Yes
Classify: (_, No, Married, 95K, ?). (Apply also the Laplace normalization.)
24 Tax Data -- Naive Bayes (same table as the previous slide).
P(Yes | E) = ?
P(Yes) = (3+1)/(10+2) = 0.33
P(Refund=No | Yes) = (3+1)/(3+2) = 0.8
P(Status=Married | Yes) = (0+1)/(3+3) = 0.17
f(income | Yes) = (1 / (σ sqrt(2π))) e^(-(x-µ)^2 / (2σ^2))
Approximate µ with: (95+85+90)/3 = 90
Approximate σ^2 with: ((95-90)^2 + (85-90)^2 + (90-90)^2) / (3-1) = 25
f(income=95 | Yes) = e^(-((95-90)^2 / (2*25))) / sqrt(2*3.14*25) = 0.048
P(Yes | E) = α * 0.8 * 0.17 * 0.048 * 0.33 = α * 0.00215
25 Tax Data (cont'd, same table).
P(No | E) = ?
P(No) = (7+1)/(10+2) = 0.67
P(Refund=No | No) = (4+1)/(7+2) = 0.556
P(Status=Married | No) = (4+1)/(7+3) = 0.5
f(income | No) = (1 / (σ sqrt(2π))) e^(-(x-µ)^2 / (2σ^2))
Approximate µ with: (125+100+70+120+60+220+75)/7 = 110
Approximate σ^2 with: ((125-110)^2 + (100-110)^2 + (70-110)^2 + (120-110)^2 + (60-110)^2 + (220-110)^2 + (75-110)^2) / (7-1) = 2975
f(income=95 | No) = e^(-((95-110)^2 / (2*2975))) / sqrt(2*3.14*2975) = 0.00704
P(No | E) = α * 0.556 * 0.5 * 0.00704 * 0.67 = α * 0.00131
26 Tax Data (cont'd, same table).
P(Yes | E) = α * 0.00215
P(No | E) = α * 0.00131
α = 1/(0.00215 + 0.00131) ≈ 289
P(Yes | E) ≈ 289 * 0.00215 = 0.62
P(No | E) ≈ 289 * 0.00131 = 0.38
We predict Yes.
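The whole tax-data computation can be sketched in Python (Laplace-smoothed counts, and the per-class income values implied by the mean/variance computations on the slides):

```python
import math

# Gaussian density with sample mean and sample variance, as on slide 19.
def gaussian_density(x, values):
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / (n - 1)
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

income_yes = [95, 85, 90]                       # Evade = Yes rows (mean 90)
income_no = [125, 100, 70, 120, 60, 220, 75]    # Evade = No rows (mean 110)

# Classify (Refund=No, Status=Married, Income=95K):
like_yes = (4/5) * (1/6) * gaussian_density(95, income_yes) * (4/12)
like_no  = (5/9) * (5/10) * gaussian_density(95, income_no) * (8/12)
p_yes = like_yes / (like_yes + like_no)   # ~ 0.62 -> predict Yes
```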
27 Text Categorization. Text categorization is the task of assigning a given document to one of a fixed set of categories, on the basis of the text it contains. Naïve Bayes models are often used for this task. In these models, the query variable is the document category, and the effect variables are the presence or absence of each word in the language. How can such a model be constructed, given as training data a set of documents that have been assigned to categories?
28 Text Categorization. The model consists of the prior probability P(Category) and the conditional probabilities P(Word_i | Category) for every word in the language. For each category c, P(Category = c) is estimated as the fraction of all the training documents that are of that category. Similarly, P(Word_i = true | Category = c) is estimated as the fraction of documents of category c that contain word_i. Also, P(Word_i = true | Category = ¬c) is estimated as the fraction of documents not of category c that contain word_i.
29 Text Categorization (cont'd). Now we can use naïve Bayes for classifying a new document. Assume Word_1, ..., Word_n are the words occurring in the new document. Then
P(Category = c | Word_1 = true, ..., Word_n = true) = α * P(Category = c) * Π_{i=1..n} P(Word_i = true | Category = c)
P(Category = ¬c | Word_1 = true, ..., Word_n = true) = α * P(Category = ¬c) * Π_{i=1..n} P(Word_i = true | Category = ¬c)
where α is the normalization constant. Observe the similarity with the missing-values case: the new document doesn't contain every word for which we computed probabilities, and the words absent from it are simply left out of the product.
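A minimal sketch of such a text categorizer (the toy documents and category names are made up for illustration; Laplace smoothing is added so unseen words do not zero out the product):

```python
# Naive Bayes text categorization: P(Category) is the fraction of training
# documents in the category; P(word | Category) is the (smoothed) fraction
# of those documents containing the word.
train = [("spam", {"cheap", "pills"}), ("spam", {"cheap", "offer"}),
         ("ham", {"meeting", "offer"}), ("ham", {"meeting", "notes"})]
categories = sorted({c for c, _ in train})

def prior(c):
    return sum(1 for cat, _ in train if cat == c) / len(train)

def p_word_given(word, c):
    docs = [words for cat, words in train if cat == c]
    return (sum(word in d for d in docs) + 1) / (len(docs) + 2)  # Laplace

def classify(new_doc_words):
    # Only the words occurring in the new document enter the product,
    # just as missing attributes were omitted from the calculation earlier.
    scores = {c: prior(c) for c in categories}
    for w in new_doc_words:
        for c in categories:
            scores[c] *= p_word_given(w, c)
    alpha = 1 / sum(scores.values())            # normalization constant
    return {c: alpha * s for c, s in scores.items()}

posterior = classify({"cheap", "offer"})
```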
More informationAdvanced Computational Fluid Dynamics AA215A Lecture 4
Advaned Computational Fluid Dynamis AA5A Leture 4 Antony Jameson Winter Quarter,, Stanford, CA Abstrat Leture 4 overs analysis of the equations of gas dynamis Contents Analysis of the equations of gas
More information36-720: Log-Linear Models: Three-Way Tables
36-720: Log-Linear Models: Three-Way Tables Brian Junker September 10, 2007 Hierarhial Speifiation of Log-Linear Models Higher-Dimensional Tables Digression: Why Poisson If We Believe Multinomial? Produt
More informationAdvances in Radio Science
Advanes in adio Siene 2003) 1: 99 104 Copernius GmbH 2003 Advanes in adio Siene A hybrid method ombining the FDTD and a time domain boundary-integral equation marhing-on-in-time algorithm A Beker and V
More informationData Mining. Practical Machine Learning Tools and Techniques. Slides for Chapter 4 of Data Mining by I. H. Witten, E. Frank and M. A.
Data Mining Practical Machine Learning Tools and Techniques Slides for Chapter of Data Mining by I. H. Witten, E. Frank and M. A. Hall Statistical modeling Opposite of R: use all the attributes Two assumptions:
More informationArtificial Intelligence CS 6364
Artificial Intelligence CS 6364 rofessor Dan Moldovan Section 12 robabilistic Reasoning Acting under uncertainty Logical agents assume propositions are - True - False - Unknown acting under uncertainty
More informationFig Review of Granta-gravel
0 Conlusion 0. Sope We have introdued the new ritial state onept among older onepts of lassial soil mehanis, but it would be wrong to leave any impression at the end of this book that the new onept merely
More informationQUANTUM MECHANICS II PHYS 517. Solutions to Problem Set # 1
QUANTUM MECHANICS II PHYS 57 Solutions to Problem Set #. The hamiltonian for a lassial harmoni osillator an be written in many different forms, suh as use ω = k/m H = p m + kx H = P + Q hω a. Find a anonial
More informationAn iterative least-square method suitable for solving large sparse matrices
An iteratie least-square method suitable for soling large sparse matries By I. M. Khabaza The purpose of this paper is to report on the results of numerial experiments with an iteratie least-square method
More informationAn I-Vector Backend for Speaker Verification
An I-Vetor Bakend for Speaker Verifiation Patrik Kenny, 1 Themos Stafylakis, 1 Jahangir Alam, 1 and Marel Kokmann 2 1 CRIM, Canada, {patrik.kenny, themos.stafylakis, jahangir.alam}@rim.a 2 VoieTrust, Canada,
More informationCOMP61011! Probabilistic Classifiers! Part 1, Bayes Theorem!
COMP61011 Probabilistic Classifiers Part 1, Bayes Theorem Reverend Thomas Bayes, 1702-1761 p ( T W ) W T ) T ) W ) Bayes Theorem forms the backbone of the past 20 years of ML research into probabilistic
More informationarxiv:cond-mat/ v1 [cond-mat.stat-mech] 16 Aug 2004
Computational omplexity and fundamental limitations to fermioni quantum Monte Carlo simulations arxiv:ond-mat/0408370v1 [ond-mat.stat-meh] 16 Aug 2004 Matthias Troyer, 1 Uwe-Jens Wiese 2 1 Theoretishe
More informationOptimization of Statistical Decisions for Age Replacement Problems via a New Pivotal Quantity Averaging Approach
Amerian Journal of heoretial and Applied tatistis 6; 5(-): -8 Published online January 7, 6 (http://www.sienepublishinggroup.om/j/ajtas) doi:.648/j.ajtas.s.65.4 IN: 36-8999 (Print); IN: 36-96 (Online)
More informationChapter 2. Conditional Probability
Chapter. Conditional Probability The probabilities assigned to various events depend on what is known about the experimental situation when the assignment is made. For a partiular event A, we have used
More informationThe gravitational phenomena without the curved spacetime
The gravitational phenomena without the urved spaetime Mirosław J. Kubiak Abstrat: In this paper was presented a desription of the gravitational phenomena in the new medium, different than the urved spaetime,
More informationClassification. Classification. What is classification. Simple methods for classification. Classification by decision tree induction
Classification What is classification Classification Simple methods for classification Classification by decision tree induction Classification evaluation Classification in Large Databases Classification
More informationA Spatiotemporal Approach to Passive Sound Source Localization
A Spatiotemporal Approah Passive Sound Soure Loalization Pasi Pertilä, Mikko Parviainen, Teemu Korhonen and Ari Visa Institute of Signal Proessing Tampere University of Tehnology, P.O.Box 553, FIN-330,
More informationProbabilistic representation and reasoning
Probabilistic representation and reasoning Applied artificial intelligence (EDAF70) Lecture 04 2019-02-01 Elin A. Topp Material based on course book, chapter 13, 14.1-3 1 Show time! Two boxes of chocolates,
More informationCOMP 328: Machine Learning
COMP 328: Machine Learning Lecture 2: Naive Bayes Classifiers Nevin L. Zhang Department of Computer Science and Engineering The Hong Kong University of Science and Technology Spring 2010 Nevin L. Zhang
More informationWeb-Mining Agents Data Mining
Web-Mining Agents Data Mining Prof. Dr. Ralf Möller Dr. Özgür L. Özçep Universität zu Lübeck Institut für Informationssysteme Tanya Braun (Übungen) 2 Uncertainty AIMA Chapter 13 3 Outline Agents Uncertainty
More informationSensitivity Analysis in Markov Networks
Sensitivity Analysis in Markov Networks Hei Chan and Adnan Darwihe Computer Siene Department University of California, Los Angeles Los Angeles, CA 90095 {hei,darwihe}@s.ula.edu Abstrat This paper explores
More informationUncertainty. Outline. Probability Syntax and Semantics Inference Independence and Bayes Rule. AIMA2e Chapter 13
Uncertainty AIMA2e Chapter 13 1 Outline Uncertainty Probability Syntax and Semantics Inference Independence and ayes Rule 2 Uncertainty Let action A t = leave for airport t minutes before flight Will A
More informationModes are solutions, of Maxwell s equation applied to a specific device.
Mirowave Integrated Ciruits Prof. Jayanta Mukherjee Department of Eletrial Engineering Indian Institute of Tehnology, Bombay Mod 01, Le 06 Mirowave omponents Welome to another module of this NPTEL mok
More informationV. Interacting Particles
V. Interating Partiles V.A The Cumulant Expansion The examples studied in the previous setion involve non-interating partiles. It is preisely the lak of interations that renders these problems exatly solvable.
More information23.1 Tuning controllers, in the large view Quoting from Section 16.7:
Lesson 23. Tuning a real ontroller - modeling, proess identifiation, fine tuning 23.0 Context We have learned to view proesses as dynami systems, taking are to identify their input, intermediate, and output
More information
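The conditional-probability definition P(a b) = P(a b) / P(b) can be checked directly against the small patient table from the diagnosis example. The sketch below (a minimal illustration; the helper names are our own, not part of the lecture) estimates P(cavity toothache) by counting records:

```python
# Estimating a conditional probability from the patient table in the text.
# The four records are from the diagnosis example; the helper functions
# are illustrative.

patients = [
    # (name, toothache, cavity)
    ("Smith",  True,  True),
    ("Mike",   True,  True),
    ("Mary",   False, True),
    ("Quincy", True,  False),
]

def prob(event, data):
    """Unconditional probability: fraction of records satisfying event."""
    return sum(1 for r in data if event(r)) / len(data)

def cond_prob(event_a, event_b, data):
    """P(a | b) = P(a and b) / P(b), defined only when P(b) > 0."""
    p_b = prob(event_b, data)
    if p_b == 0:
        raise ValueError("conditioning event has probability 0")
    p_ab = prob(lambda r: event_a(r) and event_b(r), data)
    return p_ab / p_b

toothache = lambda r: r[1]
cavity = lambda r: r[2]

# Three patients have a toothache, two of those also have a cavity,
# so P(cavity | toothache) = 2/3.
print(cond_prob(cavity, toothache, patients))
```

The product rule follows immediately from the same counts: P(cavity toothache) = P(cavity toothache) P(toothache) = (2/3)(3/4) = 1/2, which matches the two of four patients with both conditions.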