Learning Analytics. Dr. Bowen Hui Computer Science University of British Columbia Okanagan
1 Learning Analytics. Dr. Bowen Hui, Computer Science, University of British Columbia Okanagan
2 Putting the Pieces Together
Consider an intelligent tutoring system designed to assist the user in programming. Possible actions (intended to be helpful):
A1: Auto-complete the current programming statement
A2: Suggest several options for completing the current statement
A3: Provide hints for completing the statement
A4: Ask if the user needs help in a pop-up dialogue
A5: Keep observing the user (do nothing)
What the system does should depend on how much help the user needs right now.
5 Your Happiness is My Happiness
Images taken from cliparts.co and clipartkid.com
The system acts to help the user. If the user is happy, the system is doing the right thing. Therefore: the system chooses actions to keep the user happy. The utility function should reflect the user's preferences.
7 Utility Function with User Variables
The system's decision problem: which is the best action to take to make the user most happy? With consideration to how much help the user needs right now:
EU(Action) = Σ_Help Pr(Help) U(Help, Action)
Pr(Help) comes from the marginal distribution computed from the Bayes net. U(Help, Action) comes from utilities defined/elicited and stored in a separate file.
9 Possible Way to Compute Pr(Help)
A Bayes net over the variables Boolean Expressions, Variables, Conditionals, Help, Persistence Mistakes, Backwards Mistakes, Logic Mistakes (network diagram shown on slide). We will come back to this.
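As a concrete sketch of how such a marginal could be obtained, the toy fragment below runs exact enumeration over an *assumed* two-layer structure (one knowledge variable influencing Help, which influences one observed mistake variable). The structure and all CPT numbers are invented placeholders, not values from the lecture.

```python
# Toy fragment (assumed structure): Knowledge -> Help -> LogicMistake.
# All probabilities below are invented placeholders.
pr_know = {"weak": 0.4, "strong": 0.6}
pr_help_given_know = {
    "weak":   {"low": 0.1, "high": 0.9},
    "strong": {"low": 0.8, "high": 0.2},
}
pr_mistake_given_help = {"low": 0.1, "high": 0.7}  # Pr(LogicMistake=yes | Help)

# Posterior Pr(Help | LogicMistake=yes), by enumeration + normalization.
unnorm = {}
for h in ["low", "high"]:
    marginal_h = sum(pr_know[k] * pr_help_given_know[k][h] for k in pr_know)
    unnorm[h] = marginal_h * pr_mistake_given_help[h]
z = sum(unnorm.values())
posterior = {h: p / z for h, p in unnorm.items()}
```

Observing a logic mistake shifts belief toward Help being high, which is exactly the marginal the decision problem needs.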
10 Defining U(Help, Action)
For each scenario, assign a real number to each action: auto-complete, suggest options, hint, ask, do nothing.
If Help is high: Auto-complete, Suggest options, and Hint are more appropriate. (Note: should also model the quality of the suggestion.)
If Help is medium: Ask and Hint are more appropriate.
If Help is low: Do nothing is more appropriate.
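Putting the EU formula together with a utility table of this shape, a minimal sketch might look like the following. All probabilities and utility numbers are invented placeholders chosen only to match the ordering described above.

```python
# Sketch: pick the action maximizing EU(a) = sum_h Pr(Help=h) * U(h, a).
# All numbers are invented placeholders.
actions = ["auto_complete", "suggest", "hint", "ask", "do_nothing"]
help_levels = ["low", "medium", "high"]

# Pr(Help): in the lecture this marginal comes from the Bayes net.
pr_help = {"low": 0.6, "medium": 0.3, "high": 0.1}

# U(Help, Action): elicited utilities, stored separately in practice.
utility = {
    "low":    {"auto_complete": -5, "suggest": -2, "hint": 0, "ask": 2,  "do_nothing": 10},
    "medium": {"auto_complete": 0,  "suggest": 4,  "hint": 8, "ask": 8,  "do_nothing": 0},
    "high":   {"auto_complete": 8,  "suggest": 8,  "hint": 9, "ask": 3,  "do_nothing": -10},
}

def expected_utility(action):
    return sum(pr_help[h] * utility[h][action] for h in help_levels)

best = max(actions, key=expected_utility)  # with these numbers: "do_nothing"
```

With Pr(Help=low) = 0.6 dominating, the maximizing action is to do nothing, mirroring the "if Help is low, do nothing" guideline.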
11 Image taken from clker.com
12 Reasoning Over Time
What if model dynamics change over time? Diagram: a chain of Knowledge of Concepts nodes across time steps 0, ..., n, each with its own Various Mistakes observations.
13 Dynamic Inference
Model time-steps (snapshots of the world). Granularity: 1 second, 5 minutes, 1 session, ...
Allow different probability distributions over states at each time step. Define temporal causal influences: encode how distributions change over time.
14 The Big Picture
Life as one big stochastic process. Set of states; stochastic dynamics: Pr(s_t | s_{t-1}, ..., s_0). Can be viewed as a Bayes net with one random variable per time step.
15 Challenges with a Stochastic Process
Problems: infinitely many variables; infinitely large conditional probability tables.
Solutions: stationary process (dynamics do not change over time); Markov assumption (current state depends only on a finite history of past states).
17 K-order Markov Process
Assumption: the last k states are sufficient.
First-order Markov process: Pr(s_t | s_{t-1}, ..., s_0) = Pr(s_t | s_{t-1})
Second-order Markov process: Pr(s_t | s_{t-1}, ..., s_0) = Pr(s_t | s_{t-1}, s_{t-2})
20 K-order Markov Process
Advantage: can specify the entire process with finitely many time steps. Two time steps are sufficient for a first-order Markov process.
Graph: dynamics Pr(s_t | s_{t-1}); prior Pr(s_0).
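A first-order process really is fully specified by a prior plus one transition table; the marginal at any later step follows by repeatedly pushing the belief through the dynamics. The states and numbers below are illustrative only.

```python
# Sketch: first-order Markov process = prior Pr(s0) + transition Pr(s_t | s_{t-1}).
# States and probabilities are invented for illustration.
states = ["needs_help", "ok"]
prior = {"needs_help": 0.5, "ok": 0.5}
trans = {  # trans[prev][cur] = Pr(cur | prev)
    "needs_help": {"needs_help": 0.7, "ok": 0.3},
    "ok":         {"needs_help": 0.2, "ok": 0.8},
}

def marginal_at(t):
    """Pr(s_t) obtained by rolling the prior forward t steps (no evidence)."""
    belief = dict(prior)
    for _ in range(t):
        belief = {s: sum(belief[p] * trans[p][s] for p in states) for s in states}
    return belief
```

Only the two tables above are ever stored, yet `marginal_at(t)` is defined for every t: that is the sense in which two time steps suffice.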
21 Building a 2-Stage Dynamic Bayes Net
Use the same CPTs as the BN. Nodes: Bools, Vars, Conds, Help, Persistence, Backwards, Logic.
22 Building a 2-Stage Dynamic Bayes Net
Use the same CPTs as the BN; use the same or different CPTs as the other time step. The nodes Bools, Vars, Conds, Help, Persistence, Backwards, Logic appear in both slices, at time = t-1 and time = t.
23 Building a 2-Stage Dynamic Bayes Net
For each new relationship between the two slices, define new CPTs: Pr(S_t | S_{t-1}).
24 Defining a DBN
Recall that a BN over variables X = {X_1, X_2, ..., X_n} consists of: a directed acyclic graph whose nodes are variables; a set of CPTs Pr(X_i | Parents(X_i)) for each X_i.
A Dynamic Bayes Net is an extension of a BN. Focus on the 2-stage DBN.
25 DBN Definition
A DBN over variables X consists of: a directed acyclic graph whose nodes are variables; S ⊆ X, a set of hidden variables; O ⊆ X, a set of observable variables. Two time steps: t-1 and t. The model is first-order Markov, s.t. Pr(S_t | S_{1:t-1}) = Pr(S_t | S_{t-1}).
26 DBN Definition (cont.)
Numerical component: prior distribution Pr(X_0); state transition function Pr(S_t | S_{t-1}); observation function Pr(O_t | S_t). See previous example. For BNs: CPTs Pr(X_i | Parents(X_i)) for each X_i.
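A minimal sketch of one belief update using these three numerical components, with a single discrete hidden variable (the HMM special case of a DBN) and invented numbers:

```python
# Sketch: one predict-then-correct step from prior, transition, and
# observation functions. All numbers are invented placeholders.
states = ["low", "high"]                      # hidden Help level
prior = {"low": 0.7, "high": 0.3}             # Pr(X0)
trans = {"low":  {"low": 0.8, "high": 0.2},   # Pr(S_t | S_{t-1})
         "high": {"low": 0.3, "high": 0.7}}
obs = {"low":  {"mistake": 0.1, "no_mistake": 0.9},   # Pr(O_t | S_t)
       "high": {"mistake": 0.6, "no_mistake": 0.4}}

def forward_step(belief, observation):
    # Predict: push the belief through the transition function.
    predicted = {s: sum(belief[p] * trans[p][s] for p in states) for s in states}
    # Correct: weight by the observation likelihood and renormalize.
    unnorm = {s: predicted[s] * obs[s][observation] for s in states}
    z = sum(unnorm.values())
    return {s: u / z for s, u in unnorm.items()}

belief1 = forward_step(prior, "mistake")
```

Seeing a mistake pulls the belief toward the high-Help state, as the observation function would suggest.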
27 Example
Two-slice network with nodes Bools, Vars, Conds, Help, Persistence, Backwards, Logic at time = t-1 and time = t (diagram shown on slide).
28 Exercise #1
Changes in skill from t-1 to t? Context: My son is potty training. When he has to go, he either tells us in advance (yay!), he starts but continues in the potty, or an accident happens. With feedback and practice, his ability to go potty improves.
Does a temporal relationship exist? What might the CPT look like?
29 Exercise #2
Changes in hint quality from t-1 to t? Context: You're tutoring a friend in Math. Rather than solving the exercises for him, you offer to give hints. Sometimes the hints aren't so great. Depending on the quality, you may or may not give hints each time he is stuck.
Does a temporal relationship exist? What might the CPT look like?
30 Exercise #3
Changes in blood type from t-1 to t? Context: People with blood type A tend to be more cooperative. We have the opportunity to observe a student's teamwork in class twice each week. We want to infer the student's blood type.
Does a temporal relationship exist? What might the CPT look like?
35 Temporal Inference Tasks
Common tasks:
Belief update: Pr(s_t | o_1, ..., o_t)
Prediction: Pr(s_{t+δ} | o_1, ..., o_t)
Hindsight: Pr(s_{t-τ} | o_1, ..., o_t) where τ > 0
Most likely explanation: argmax_{s_1, ..., s_t} Pr(s_1, ..., s_t | o_1, ..., o_t)
Fixed interval smoothing: Pr(s_t | o_1, ..., o_T)
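To make the belief-update/prediction distinction concrete: prediction simply pushes the current filtered belief δ steps through the dynamics without conditioning on any new evidence. A sketch with an invented two-state transition table:

```python
# Sketch: Pr(s_{t+delta} | o_1..o_t) from a filtered belief, by applying
# the dynamics delta times with no further observations. Numbers invented.
states = ["low", "high"]
trans = {"low":  {"low": 0.8, "high": 0.2},
         "high": {"low": 0.3, "high": 0.7}}

def predict(filtered_belief, delta):
    b = dict(filtered_belief)
    for _ in range(delta):
        b = {s: sum(b[p] * trans[p][s] for p in states) for s in states}
    return b
```

With larger δ the prediction drifts toward the chain's long-run distribution, which is why long-range forecasts carry little information.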
36 Inference Over Time
Same algorithm: clique inference. Added setup:
Enter evidence in stage t
Take the marginal in stage t, keep it aside
Create a new copy
Enter the marginal into stage t-1
Repeat
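The loop above can be mimicked with a toy exact update standing in for clique-tree inference: at each step, enter the new evidence in stage t, compute the stage-t marginal, then feed it back as the stage t-1 prior of a fresh copy of the two-slice model. All distributions below are invented placeholders.

```python
# Sketch of the rolling two-slice update. Numbers are invented.
states = ["low", "high"]
trans = {"low":  {"low": 0.8, "high": 0.2},
         "high": {"low": 0.3, "high": 0.7}}
obs = {"low":  {"mistake": 0.1, "no_mistake": 0.9},
       "high": {"mistake": 0.6, "no_mistake": 0.4}}

def step(prior_marginal, evidence):
    """One copy of the two-slice model: enter evidence, return stage-t marginal."""
    pred = {s: sum(prior_marginal[p] * trans[p][s] for p in states) for s in states}
    unnorm = {s: pred[s] * obs[s][evidence] for s in states}
    z = sum(unnorm.values())
    return {s: u / z for s, u in unnorm.items()}

belief = {"low": 0.5, "high": 0.5}
for e in ["mistake", "mistake", "no_mistake"]:  # observed user behaviour
    belief = step(belief, e)  # marginal kept aside, fed into the next copy
```

Each pass discards the old slice entirely; only the marginal survives, which is what keeps the procedure constant-size over time.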
37 Inference Over Time
Mimic the infinite chain ... time = 0, 1, ... with the two-slice model. Setup: time = t-1 and t.
38 Start, time=1: slices at time = 0 and 1. Observe user behaviour; enter evidence.
39 Start, time=1: compute the marginal of interest. Get: Pr(S_1).
40 We have: Pr(S_1).
41 We have: Pr(S_1). New copy: enter Pr(S_1) into the t-1 slice; time = t-1 and t.
43 Continue, time=2: enter Pr(S_1); slices at time = 1 and 2.
44 Observe user behaviour; enter evidence.
45 Compute the marginal of interest. Get: Pr(S_2).
46 We have: Pr(S_2).
47 We have: Pr(S_2). New copy: enter Pr(S_2) into the t-1 slice; time = t-1 and t.
49 Continue, time=3: enter Pr(S_2); slices at time = 2 and 3.
50 Observe user behaviour; enter evidence.
51 Compute the marginal of interest. Get: Pr(S_3).
52 We have: Pr(S_3).
53 We have: Pr(S_3). New copy: enter Pr(S_3) into the t-1 slice; time = t-1 and t.
55 Example: Belief Update Over Time
Marginal of interest (shown on slide).
56 Key Ideas
Main concepts: reasoning over time considers evolving dynamics in the model.
Representation: stationarity assumption (given the model, dynamics don't change over time); Markov assumption (current state depends only on a finite history). A DBN is an extension of a BN with an additional set of temporal relationships (transition functions).
Copyright Richard J. Povinelli rev 1.0, 10/1//2001 Page 1 Probabilistic Reasoning Systems Dr. Richard J. Povinelli Objectives You should be able to apply belief networks to model a problem with uncertainty.
More informationSampling from Bayes Nets
from Bayes Nets http://www.youtube.com/watch?v=mvrtaljp8dm http://www.youtube.com/watch?v=geqip_0vjec Paper reviews Should be useful feedback for the authors A critique of the paper No paper is perfect!
More informationBayesian networks. Soleymani. CE417: Introduction to Artificial Intelligence Sharif University of Technology Spring 2018
Bayesian networks CE417: Introduction to Artificial Intelligence Sharif University of Technology Spring 2018 Soleymani Slides have been adopted from Klein and Abdeel, CS188, UC Berkeley. Outline Probability
More informationBayesian Network. Outline. Bayesian Network. Syntax Semantics Exact inference by enumeration Exact inference by variable elimination
Outline Syntax Semantics Exact inference by enumeration Exact inference by variable elimination s A simple, graphical notation for conditional independence assertions and hence for compact specication
More informationMarkov Networks.
Markov Networks www.biostat.wisc.edu/~dpage/cs760/ Goals for the lecture you should understand the following concepts Markov network syntax Markov network semantics Potential functions Partition function
More informationMachine Learning for Data Science (CS4786) Lecture 19
Machine Learning for Data Science (CS4786) Lecture 19 Hidden Markov Models Course Webpage : http://www.cs.cornell.edu/courses/cs4786/2017fa/ Quiz Quiz Two variables can be marginally independent but not
More informationProbabilistic Reasoning Systems
Probabilistic Reasoning Systems Dr. Richard J. Povinelli Copyright Richard J. Povinelli rev 1.0, 10/7/2001 Page 1 Objectives You should be able to apply belief networks to model a problem with uncertainty.
More informationCPSC 540: Machine Learning
CPSC 540: Machine Learning Undirected Graphical Models Mark Schmidt University of British Columbia Winter 2016 Admin Assignment 3: 2 late days to hand it in today, Thursday is final day. Assignment 4:
More informationCognitive Systems 300: Probability and Causality (cont.)
Cognitive Systems 300: Probability and Causality (cont.) David Poole and Peter Danielson University of British Columbia Fall 2013 1 David Poole and Peter Danielson Cognitive Systems 300: Probability and
More informationDeep Learning Srihari. Deep Belief Nets. Sargur N. Srihari
Deep Belief Nets Sargur N. Srihari srihari@cedar.buffalo.edu Topics 1. Boltzmann machines 2. Restricted Boltzmann machines 3. Deep Belief Networks 4. Deep Boltzmann machines 5. Boltzmann machines for continuous
More informationCSC384: Intro to Artificial Intelligence Reasoning under Uncertainty-II
CSC384: Intro to Artificial Intelligence Reasoning under Uncertainty-II 1 Bayes Rule Example Disease {malaria, cold, flu}; Symptom = fever Must compute Pr(D fever) to prescribe treatment Why not assess
More informationBayesian networks (1) Lirong Xia
Bayesian networks (1) Lirong Xia Random variables and joint distributions Ø A random variable is a variable with a domain Random variables: capital letters, e.g. W, D, L values: small letters, e.g. w,
More informationCS 343: Artificial Intelligence
CS 343: Artificial Intelligence Particle Filters and Applications of HMMs Prof. Scott Niekum The University of Texas at Austin [These slides based on those of Dan Klein and Pieter Abbeel for CS188 Intro
More informationCS 7180: Behavioral Modeling and Decision- making in AI
CS 7180: Behavioral Modeling and Decision- making in AI Bayesian Networks for Dynamic and/or Relational Domains Prof. Amy Sliva October 12, 2012 World is not only uncertain, it is dynamic Beliefs, observations,
More informationMidterm II. Introduction to Artificial Intelligence. CS 188 Fall ˆ You have approximately 3 hours.
CS 188 Fall 2012 Introduction to Artificial Intelligence Midterm II ˆ You have approximately 3 hours. ˆ The exam is closed book, closed notes except a one-page crib sheet. ˆ Please use non-programmable
More informationIntroduction to Bayesian Networks
Introduction to Bayesian Networks Ying Wu Electrical Engineering and Computer Science Northwestern University Evanston, IL 60208 http://www.eecs.northwestern.edu/~yingwu 1/23 Outline Basic Concepts Bayesian
More informationCS 188: Artificial Intelligence Spring Announcements
CS 188: Artificial Intelligence Spring 2011 Lecture 16: Bayes Nets IV Inference 3/28/2011 Pieter Abbeel UC Berkeley Many slides over this course adapted from Dan Klein, Stuart Russell, Andrew Moore Announcements
More informationProbabilistic Classification
Bayesian Networks Probabilistic Classification Goal: Gather Labeled Training Data Build/Learn a Probability Model Use the model to infer class labels for unlabeled data points Example: Spam Filtering...
More informationMarkov Networks. l Like Bayes Nets. l Graphical model that describes joint probability distribution using tables (AKA potentials)
Markov Networks l Like Bayes Nets l Graphical model that describes joint probability distribution using tables (AKA potentials) l Nodes are random variables l Labels are outcomes over the variables Markov
More informationMathematica Project 3
Mathematica Project 3 Name: Section: Date: On your class s Sakai site, your instructor has placed 5 Mathematica notebooks. Please use the following table to determine which file you should select based
More information2. I will provide two examples to contrast the two schools. Hopefully, in the end you will fall in love with the Bayesian approach.
Bayesian Statistics 1. In the new era of big data, machine learning and artificial intelligence, it is important for students to know the vocabulary of Bayesian statistics, which has been competing with
More informationDefining Things in Terms of Joint Probability Distribution. Today s Lecture. Lecture 17: Uncertainty 2. Victor R. Lesser
Lecture 17: Uncertainty 2 Victor R. Lesser CMPSCI 683 Fall 2010 Today s Lecture How belief networks can be a Knowledge Base for probabilistic knowledge. How to construct a belief network. How to answer
More information