Relational Probabilistic Models
1 Relational Probabilistic Models

Dimensions of the design space:
- flat or modular or hierarchical
- explicit states or features or individuals and relations
- static or finite stage or indefinite stage or infinite stage
- fully observable or partially observable
- deterministic or stochastic dynamics
- goals or complex preferences
- single agent or multiple agents
- knowledge is given or knowledge is learned
- perfect rationality or bounded rationality

© D. Poole and A. Mackworth 2010, Artificial Intelligence, Lecture 14.3
2 Relational Probabilistic Models

Often we want a random variable for each individual in a population:
- build a probabilistic model before knowing the individuals
- learn the model for one set of individuals
- apply the model to new individuals
- allow complex relationships between individuals
3 Predicting students' errors

A two-digit addition problem:

      x2 x1
    + y2 y1
    -------
   z3 z2 z1
4 Predicting students' errors

Belief network for the two-digit problem above: nodes Knows_Carry and Knows_Add for the student, plus the problem digits X1, Y1, X0, Y0, the carries C2, C1, and the answer digits Z2, Z1, Z0 that the student writes. Knows_Carry and Knows_Add influence the carry and answer variables.
9 Predicting students' errors

(Same network as above.) What if there were multiple digits, problems, students, times? How can we build a model before we know the individuals?
10 Multi-digit addition with parametrized BNs / plates

Plates S, T surround knows_carry(S, T) and knows_add(S, T); plates D, P surround x(D, P) and y(D, P); the variables c(D, P, S, T) and z(D, P, S, T) lie in the intersection of all the plates.

Parametrized random variables: x(D, P), y(D, P), knows_carry(S, T), knows_add(S, T), c(D, P, S, T), z(D, P, S, T), for digit D, problem P, student S, time T.

There is a random variable for each assignment of a value to D and a value to P in x(D, P), and similarly for the other parametrized variables.
11 Parametrized Bayesian networks / plates

A parametrized Bayes net containing r(X), together with the individuals i1, ..., ik for X, denotes the Bayes net with one copy r(i1), ..., r(ik) of r for each individual.
12 Parametrized Bayesian networks / plates (2)

A plate over X containing r(X) and s(X), with a node q outside the plate as a parent of r(X) and a node t outside the plate as a child of s(X). In the grounding for individuals i1, ..., ik, q is a common parent of r(i1), ..., r(ik), and t is a common child of s(i1), ..., s(ik).
13 Creating dependencies

Instances of plates are independent, except via common parents or observed children.

- Common parents: q → r(X). In the grounding, q is a parent of every r(i1), ..., r(ik), so the instances are dependent when q is unobserved.
- Observed children: r(X) → q. Every r(i1), ..., r(ik) is a parent of q, so observing q makes the instances dependent.
14 Overlapping plates

The Person and Movie plates overlap: young(P) is in the Person plate, genre(M) is in the Movie plate, and likes(P, M) is in both.

Relations: likes(P, M), young(P), genre(M). likes and young are Boolean; genre has range {action, romance, family}.
15 Overlapping plates (grounding)

Relations: likes(P, M), young(P), genre(M). likes and young are Boolean; genre has range {action, romance, family}.

Three students: sam (s), chris (c), kim (k). Two movies: rango (r), terminator (t).

Grounding: y(s), y(c), y(k); g(r), g(t); and l(s,r), l(s,t), l(c,r), l(c,t), l(k,r), l(k,t).
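The grounding above can be generated mechanically: one random variable per assignment of individuals to the logical variables of each parametrized random variable. A minimal Python sketch (the function and data names are illustrative, not from any particular library):

```python
from itertools import product

def ground(pvar, populations):
    """Ground a parametrized random variable: one RV per tuple of individuals.

    pvar: (name, list of logical variable names);
    populations: dict mapping each logical variable to its individuals.
    """
    name, lvars = pvar
    return [f"{name}({','.join(combo)})"
            for combo in product(*(populations[v] for v in lvars))]

populations = {"P": ["sam", "chris", "kim"], "M": ["rango", "terminator"]}
rvs = (ground(("young", ["P"]), populations)
       + ground(("genre", ["M"]), populations)
       + ground(("likes", ["P", "M"]), populations))
# 3 young + 2 genre + 3*2 likes = 11 ground random variables
```

Note how the size of the grounding is the product of the population sizes for each plate a variable lies in, which is why likes contributes 6 variables here.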
16 Representing conditional probabilities

- P(likes(P, M) | young(P), genre(M)): parameter sharing, all individuals share the same probability parameters.
- P(happy(X) | friend(X, Y), mean(Y)) needs aggregation: happy(a) depends on an unbounded number of parents.
- The carry of one digit depends on the carry of the previous digit.
- The probability that two authors collaborate depends on whether they have authored a paper together.
17 Creating dependencies: exploit domain structure

When the individuals have structure, such as an ordering, grounded variables can depend on the instances for neighbouring individuals: a plate over X with r(X) and s(X) grounds to r(i1), r(i2), r(i3), r(i4), ... with each s(ij) depending on the r instances for adjacent individuals, s(i1), s(i2), s(i3), ... (as with carries over adjacent digit positions).
18 Creating dependencies: relational structure

Parametrized: author(A, P) with plates over authors A, A' and papers P, and collaborators(A, A').

Grounding: for each pair of authors ai, ak with ai ∈ A, ak ∈ A, ai ≠ ak, and each paper pj ∈ P, the variables author(ai, pj) and author(ak, pj) are parents of collaborators(ai, ak).
19 Independent Choice Logic

A language for first-order probabilistic models.

Idea: combine logic and probability, where all uncertainty is handled in terms of Bayesian decision theory, and a logic program specifies the consequences of choices.

Parametrized random variables are represented as logical atoms, and plates correspond to logical variables.
20 Independent Choice Logic

- An alternative is a set of ground atomic formulas.
- C, the choice space, is a set of disjoint alternatives.
- F, the facts, is a logic program that gives the consequences of choices.
- P0, a probability distribution over alternatives: for all A ∈ C, Σ_{a ∈ A} P0(a) = 1.
21 Meaningless example

C = {{c1, c2, c3}, {b1, b2}}

F = { f ← c1 ∧ b1,   f ← c3 ∧ b2,
      d ← c1,        d ← c2 ∧ b1,
      e ← f,         e ← d }

P0(c1) = 0.5   P0(c2) = 0.3   P0(c3) = 0.2
P0(b1) = 0.9   P0(b2) = 0.1
22 Semantics of ICL

- There is a possible world for each selection of one element from each alternative.
- The logic program together with the selected atoms specifies what is true in each possible world.
- The elements of different alternatives are independent.
23 Meaningless example: semantics

F = { f ← c1 ∧ b1,  f ← c3 ∧ b2,  d ← c1,  d ← c2 ∧ b1,  e ← f,  e ← d }
P0(c1) = 0.5   P0(c2) = 0.3   P0(c3) = 0.2   P0(b1) = 0.9   P0(b2) = 0.1

Each world selects one ci and one bj; the logic program determines the rest:

selection   consequences     probability
w1: c1 b1    f  d  e         P(w1) = 0.45
w2: c2 b1   ¬f  d  e         P(w2) = 0.27
w3: c3 b1   ¬f ¬d ¬e         P(w3) = 0.18
w4: c1 b2   ¬f  d  e         P(w4) = 0.05
w5: c2 b2   ¬f ¬d ¬e         P(w5) = 0.03
w6: c3 b2    f ¬d  e         P(w6) = 0.02

P(e) = 0.45 + 0.27 + 0.05 + 0.02 = 0.79
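The semantics can be checked by brute-force enumeration of the six possible worlds; a short sketch hardcoding the rules of F (the derived truth values of f, d, e are recomputed from the rules rather than taken from the slide):

```python
from itertools import product

# Alternatives and P0, from the slide.
P0 = {"c1": 0.5, "c2": 0.3, "c3": 0.2, "b1": 0.9, "b2": 0.1}

def consequences(c, b):
    """Truth values of f, d, e implied by the logic program F for a selection."""
    sel = {c, b}
    f = {"c1", "b1"} <= sel or {"c3", "b2"} <= sel   # f <- c1 & b1;  f <- c3 & b2
    d = "c1" in sel or {"c2", "b1"} <= sel           # d <- c1;       d <- c2 & b1
    e = f or d                                       # e <- f;        e <- d
    return f, d, e

# Sum the probabilities of the worlds in which e holds.
p_e = sum(P0[c] * P0[b]
          for c, b in product(["c1", "c2", "c3"], ["b1", "b2"])
          if consequences(c, b)[2])
```

Because the two alternatives are independent, each world's probability is just the product of its two selections; summing over the worlds where e holds gives P(e).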
24 Belief networks, decision trees and ICL rules

There is a local mapping from belief networks into ICL. For the fire-alarm network (Tampering, Fire, Alarm, Smoke, Leaving, Report):

prob ta : … .
prob fire : … .
alarm ← ta ∧ fire ∧ atf.
alarm ← ¬ta ∧ fire ∧ antf.
alarm ← ta ∧ ¬fire ∧ atnf.
alarm ← ¬ta ∧ ¬fire ∧ antnf.
prob atf : 0.5.   prob antf : … .   prob atnf : … .   prob antnf : … .
smoke ← fire ∧ sf.   prob sf : 0.9.
smoke ← ¬fire ∧ snf.   prob snf : … .
25 Belief networks, decision trees and ICL rules

Rules can represent a decision tree with probabilities: a tree for P(e | A, B, C, D) that splits on A, then on B (when a holds) or on C and D (when ¬a holds), becomes one rule per branch, each with its own hypothesis atom:

e ← a ∧ b ∧ h1.          P0(h1) = 0.7
e ← a ∧ ¬b ∧ h2.         P0(h2) = 0.2
e ← ¬a ∧ c ∧ d ∧ h3.     P0(h3) = 0.9
e ← ¬a ∧ c ∧ ¬d ∧ h4.    P0(h4) = 0.5
e ← ¬a ∧ ¬c ∧ h5.        P0(h5) = 0.3
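The correspondence is direct: for any assignment to A, B, C, D, exactly one rule body matches, and P(e | A, B, C, D) is the P0 of that rule's hypothesis atom. As a sketch:

```python
def p_e(a, b, c, d):
    """P(e | A, B, C, D) read off the decision-tree rules above.

    Exactly one branch (rule body) matches each assignment; the answer is
    the P0 of that branch's hypothesis atom h1..h5.
    """
    if a:
        return 0.7 if b else 0.2      # h1 / h2
    if c:
        return 0.9 if d else 0.5      # h3 / h4
    return 0.3                        # h5
```

Note that b and d are only consulted on the branches where the tree splits on them, mirroring the context-specific independence the rules encode.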
26 Movie ratings

For the Person/Movie plate model (young, genre, likes):

prob young(P) : 0.4.
prob genre(M, action) : 0.4, genre(M, romance) : 0.3, genre(M, family) : 0.3.
likes(P, M) ← young(P) ∧ genre(M, G) ∧ ly(P, M, G).
likes(P, M) ← ¬young(P) ∧ genre(M, G) ∧ lny(P, M, G).
prob ly(P, M, action) : 0.7.    prob lny(P, M, action) : 0.2.
prob ly(P, M, romance) : 0.3.   prob lny(P, M, romance) : 0.9.
prob ly(P, M, family) : 0.8.    prob lny(P, M, family) : 0.3.
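From these parameters, the marginal probability that a random person likes a random movie follows by summing out genre and young. A sketch of that calculation (note one assumption: the genre alternative must sum to 1, so family is taken as 0.3 where the transcription shows 0.4):

```python
# Probabilities from the slide; genre(family) taken as 0.3 so that the
# genre alternative sums to 1 (the transcribed 0.4 appears to be an error).
p_young = 0.4
p_genre = {"action": 0.4, "romance": 0.3, "family": 0.3}
p_ly = {"action": 0.7, "romance": 0.3, "family": 0.8}   # P(likes | young, genre)
p_lny = {"action": 0.2, "romance": 0.9, "family": 0.3}  # P(likes | not young, genre)

# Sum out genre, then young.
p_likes_young = sum(p_genre[g] * p_ly[g] for g in p_genre)
p_likes_not_young = sum(p_genre[g] * p_lny[g] for g in p_genre)
p_likes = p_young * p_likes_young + (1 - p_young) * p_likes_not_young
```

This is ordinary Bayes-net marginalization applied to the ICL model: each ly/lny atom plays the role of a row of the conditional probability table for likes.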
27 Example: multi-digit addition

(The parametrized plate model from slide 10: plates S, T over knows_carry(S, T) and knows_add(S, T); plates D, P over x(D, P) and y(D, P); c(D, P, S, T) and z(D, P, S, T) in the intersection.)
28 ICL rules for multi-digit addition

z(D, P, S, T) = V ←
    x(D, P) = Vx ∧ y(D, P) = Vy ∧ c(D, P, S, T) = Vc ∧
    knows_add(S, T) ∧ ¬mistake(D, P, S, T) ∧
    V is (Vx + Vy + Vc) mod 10.
z(D, P, S, T) = V ←
    knows_add(S, T) ∧ mistake(D, P, S, T) ∧ selectDig(D, P, S, T) = V.
z(D, P, S, T) = V ←
    ¬knows_add(S, T) ∧ selectDig(D, P, S, T) = V.

Alternatives:
∀ D, P, S, T: {noMistake(D, P, S, T), mistake(D, P, S, T)}
∀ D, P, S, T: {selectDig(D, P, S, T) = V | V ∈ {0..9}}
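The three rules can be read as a generative sampler for the digit a student writes in one column. A sketch, with two stated assumptions: P0(mistake) and the distribution over selectDig values are not given on the slide, so a uniform selectDig and an illustrative mistake probability are used:

```python
import random

def sample_z(vx, vy, vc, knows_add, p_mistake=0.05, rng=random):
    """Sample the digit z a student writes for one column, per the ICL rules.

    p_mistake is an assumed value for P0(mistake(D,P,S,T)), and selectDig is
    assumed uniform over 0..9; neither number appears on the slide.
    """
    if knows_add and rng.random() >= p_mistake:
        return (vx + vy + vc) % 10   # rule 1: correct digit
    return rng.randrange(10)         # rules 2 and 3: selectDig, uniform

# A student who knows addition and makes no mistake writes (7+8+1) mod 10 = 6.
z = sample_z(7, 8, 1, knows_add=True, p_mistake=0.0)
```

Sampling vc from the carry rules (the div 10 of the previous column's sum) would complete the column-by-column generative story; here the carry is passed in directly.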
29 Hidden variables

Student  Course  Grade
s1       c1      A
s2       c1      C
s1       c2      B
s2       c3      B
s3       c2      B
s4       c3      B
s3       c4      ?
s4       c4      ?
30 Hidden variables

(Same table as above.) Model: hidden variables int(S) and diff(C), with grade(S, C) depending on both:

int(S)  diff(C)   P(A)  P(B)  P(C)
true    true      …     …     …
true    false     …     …     …
false   true      …     …     …
false   false     …     …     …

P(int(S)) = 0.5    P(diff(C)) = 0.5
31 Hidden variables

Given the observed grades Obs, inference yields:

P(grade(s3, c4) = A | Obs) = 0.491    P(grade(s4, c4) = A | Obs) = 0.264
P(grade(s3, c4) = B | Obs) = 0.245    P(grade(s4, c4) = B | Obs) = 0.245
P(grade(s3, c4) = C | Obs) = 0.264    P(grade(s4, c4) = C | Obs) = 0.491
32 Learning relational models with hidden variables

User   Item         Date  Rating
Sam    Terminator   …     …
Sam    Rango        …     …
Sam    The Holiday  …     …
Chris  The Holiday  …     …

Netflix: 500,000 users, 17,000 movies, 100,000,000 ratings.
33 Learning relational models with hidden variables

r_ui = rating of user u on item i
r̂_ui = predicted rating of user u on item i
D = set of (u, i, r) tuples in the training set

Sum-of-squares error: Σ_{(u,i,r) ∈ D} (r̂_ui − r)²
38 Learning relational models with hidden variables

- Predict the same for all ratings: r̂_ui = µ
- Adjust for each user and item: r̂_ui = µ + b_i + c_u
- One hidden feature, f_i for each item and g_u for each user: r̂_ui = µ + b_i + c_u + f_i · g_u
- k hidden features: r̂_ui = µ + b_i + c_u + Σ_k f_ik · g_ku
- Regularize: minimize

  Σ_{(u,i,r) ∈ D} [ (µ + b_i + c_u + Σ_k f_ik · g_ku − r_ui)² + λ (b_i² + c_u² + Σ_k (f_ik² + g_ku²)) ]
39 Parameter learning using gradient descent

µ ← average rating
assign f[i, k], g[k, u] randomly
assign b[i], c[u] arbitrarily
repeat:
    for each (u, i, r) ∈ D:
        e ← µ + b[i] + c[u] + Σ_k f[i, k] · g[k, u] − r
        b[i] ← b[i] − η·e − η·λ·b[i]
        c[u] ← c[u] − η·e − η·λ·c[u]
        for each feature k:
            f[i, k] ← f[i, k] − η·e·g[k, u] − η·λ·f[i, k]
            g[k, u] ← g[k, u] − η·e·f[i, k] − η·λ·g[k, u]
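The algorithm above translates directly into runnable Python. A sketch on a toy dataset: the data, the hyperparameters (k, η, λ, epochs), and the function names are all illustrative, not from the lecture:

```python
import random

def learn(D, items, users, k=2, eta=0.05, lam=0.001, epochs=1000, seed=0):
    """SGD for r_hat(u, i) = mu + b[i] + c[u] + sum_k f[i][k] * g[u][k],
    following the update rules on the slide."""
    rng = random.Random(seed)
    mu = sum(r for _, _, r in D) / len(D)          # average rating
    b = {i: 0.0 for i in items}                    # item adjustments
    c = {u: 0.0 for u in users}                    # user adjustments
    f = {i: [rng.gauss(0, 0.1) for _ in range(k)] for i in items}
    g = {u: [rng.gauss(0, 0.1) for _ in range(k)] for u in users}
    for _ in range(epochs):
        for u, i, r in D:
            e = mu + b[i] + c[u] + sum(f[i][j] * g[u][j] for j in range(k)) - r
            b[i] -= eta * e + eta * lam * b[i]
            c[u] -= eta * e + eta * lam * c[u]
            for j in range(k):
                f[i][j] -= eta * e * g[u][j] + eta * lam * f[i][j]
                g[u][j] -= eta * e * f[i][j] + eta * lam * g[u][j]
    def predict(u, i):
        return mu + b[i] + c[u] + sum(f[i][j] * g[u][j] for j in range(k))
    return predict

# Toy dataset: two users with opposite tastes, so the biases alone cannot
# fit the ratings and the hidden features must carry the interaction.
D = [("sam", "rango", 5), ("sam", "terminator", 1),
     ("chris", "rango", 1), ("chris", "terminator", 5)]
predict = learn(D, items=["rango", "terminator"], users=["sam", "chris"])
```

The random initialization of f and g matters: starting them at zero would leave the gradient of the interaction term zero forever, which is why the slide says "randomly" for f, g but only "arbitrarily" for b, c.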
Learning Objectives

At the end of the class you should be able to:
- describe the mapping between relational probabilistic models and their groundings
- read plate notation
- build a relational probabilistic model
More informationScaling Neighbourhood Methods
Quick Recap Scaling Neighbourhood Methods Collaborative Filtering m = #items n = #users Complexity : m * m * n Comparative Scale of Signals ~50 M users ~25 M items Explicit Ratings ~ O(1M) (1 per billion)
More informationPOLYNOMIAL SPACE QSAT. Games. Polynomial space cont d
T-79.5103 / Autumn 2008 Polynomial Space 1 T-79.5103 / Autumn 2008 Polynomial Space 3 POLYNOMIAL SPACE Polynomial space cont d Polynomial space-bounded computation has a variety of alternative characterizations
More informationMixed Membership Matrix Factorization
Mixed Membership Matrix Factorization Lester Mackey University of California, Berkeley Collaborators: David Weiss, University of Pennsylvania Michael I. Jordan, University of California, Berkeley 2011
More informationOutline. CSE 573: Artificial Intelligence Autumn Bayes Nets: Big Picture. Bayes Net Semantics. Hidden Markov Models. Example Bayes Net: Car
CSE 573: Artificial Intelligence Autumn 2012 Bayesian Networks Dan Weld Many slides adapted from Dan Klein, Stuart Russell, Andrew Moore & Luke Zettlemoyer Outline Probabilistic models (and inference)
More informationLocal Search (Greedy Descent): Maintain an assignment of a value to each variable. Repeat:
Local Search Local Search (Greedy Descent): Maintain an assignment of a value to each variable. Repeat: I I Select a variable to change Select a new value for that variable Until a satisfying assignment
More informationRecommendation Systems
Recommendation Systems Pawan Goyal CSE, IITKGP October 21, 2014 Pawan Goyal (IIT Kharagpur) Recommendation Systems October 21, 2014 1 / 52 Recommendation System? Pawan Goyal (IIT Kharagpur) Recommendation
More informationBayesian Network Representation
Bayesian Network Representation Sargur Srihari srihari@cedar.buffalo.edu 1 Topics Joint and Conditional Distributions I-Maps I-Map to Factorization Factorization to I-Map Perfect Map Knowledge Engineering
More informationBayesian Networks aka belief networks, probabilistic networks. Bayesian Networks aka belief networks, probabilistic networks. An Example Bayes Net
Bayesian Networks aka belief networks, probabilistic networks A BN over variables {X 1, X 2,, X n } consists of: a DAG whose nodes are the variables a set of PTs (Pr(X i Parents(X i ) ) for each X i P(a)
More informationA Modified PMF Model Incorporating Implicit Item Associations
A Modified PMF Model Incorporating Implicit Item Associations Qiang Liu Institute of Artificial Intelligence College of Computer Science Zhejiang University Hangzhou 31007, China Email: 01dtd@gmail.com
More informationProbabilistic Representation and Reasoning
Probabilistic Representation and Reasoning Alessandro Panella Department of Computer Science University of Illinois at Chicago May 4, 2010 Alessandro Panella (CS Dept. - UIC) Probabilistic Representation
More information1 [15 points] Search Strategies
Probabilistic Foundations of Artificial Intelligence Final Exam Date: 29 January 2013 Time limit: 120 minutes Number of pages: 12 You can use the back of the pages if you run out of space. strictly forbidden.
More informationProbabilistic representation and reasoning
Probabilistic representation and reasoning Applied artificial intelligence (EDAF70) Lecture 04 2019-02-01 Elin A. Topp Material based on course book, chapter 13, 14.1-3 1 Show time! Two boxes of chocolates,
More informationUncertainty and knowledge. Uncertainty and knowledge. Reasoning with uncertainty. Notes
Approximate reasoning Uncertainty and knowledge Introduction All knowledge representation formalism and problem solving mechanisms that we have seen until now are based on the following assumptions: All
More informationHuman-level concept learning through probabilistic program induction
B.M Lake, R. Salakhutdinov, J.B. Tenenbaum Human-level concept learning through probabilistic program induction journal club at two aspects in which machine learning spectacularly lags behind human learning
More informationCS325 Artificial Intelligence Ch. 15,20 Hidden Markov Models and Particle Filtering
CS325 Artificial Intelligence Ch. 15,20 Hidden Markov Models and Particle Filtering Cengiz Günay, Emory Univ. Günay Ch. 15,20 Hidden Markov Models and Particle FilteringSpring 2013 1 / 21 Get Rich Fast!
More informationCSL302/612 Artificial Intelligence End-Semester Exam 120 Minutes
CSL302/612 Artificial Intelligence End-Semester Exam 120 Minutes Name: Roll Number: Please read the following instructions carefully Ø Calculators are allowed. However, laptops or mobile phones are not
More informationUncertainty. 22c:145 Artificial Intelligence. Problem of Logic Agents. Foundations of Probability. Axioms of Probability
Problem of Logic Agents 22c:145 Artificial Intelligence Uncertainty Reading: Ch 13. Russell & Norvig Logic-agents almost never have access to the whole truth about their environments. A rational agent
More informationBayesian Machine Learning
Bayesian Machine Learning Andrew Gordon Wilson ORIE 6741 Lecture 2: Bayesian Basics https://people.orie.cornell.edu/andrew/orie6741 Cornell University August 25, 2016 1 / 17 Canonical Machine Learning
More informationCS 188: Artificial Intelligence Fall 2009
CS 188: Artificial Intelligence Fall 2009 Lecture 14: Bayes Nets 10/13/2009 Dan Klein UC Berkeley Announcements Assignments P3 due yesterday W2 due Thursday W1 returned in front (after lecture) Midterm
More informationCOS402- Artificial Intelligence Fall Lecture 10: Bayesian Networks & Exact Inference
COS402- Artificial Intelligence Fall 2015 Lecture 10: Bayesian Networks & Exact Inference Outline Logical inference and probabilistic inference Independence and conditional independence Bayes Nets Semantics
More informationNeural Networks. Chapter 18, Section 7. TB Artificial Intelligence. Slides from AIMA 1/ 21
Neural Networks Chapter 8, Section 7 TB Artificial Intelligence Slides from AIMA http://aima.cs.berkeley.edu / 2 Outline Brains Neural networks Perceptrons Multilayer perceptrons Applications of neural
More informationParts 3-6 are EXAMPLES for cse634
1 Parts 3-6 are EXAMPLES for cse634 FINAL TEST CSE 352 ARTIFICIAL INTELLIGENCE Fall 2008 There are 6 pages in this exam. Please make sure you have all of them INTRODUCTION Philosophical AI Questions Q1.
More informationReasoning Under Uncertainty: Introduction to Probability
Reasoning Under Uncertainty: Introduction to Probability CPSC 322 Uncertainty 1 Textbook 6.1 Reasoning Under Uncertainty: Introduction to Probability CPSC 322 Uncertainty 1, Slide 1 Lecture Overview 1
More informationPublished in: Tenth Tbilisi Symposium on Language, Logic and Computation: Gudauri, Georgia, September 2013
UvA-DARE (Digital Academic Repository) Estimating the Impact of Variables in Bayesian Belief Networks van Gosliga, S.P.; Groen, F.C.A. Published in: Tenth Tbilisi Symposium on Language, Logic and Computation:
More informationCOMP 551 Applied Machine Learning Lecture 14: Neural Networks
COMP 551 Applied Machine Learning Lecture 14: Neural Networks Instructor: Ryan Lowe (ryan.lowe@mail.mcgill.ca) Slides mostly by: Class web page: www.cs.mcgill.ca/~hvanho2/comp551 Unless otherwise noted,
More information