Course Overview

Lecture 11: Dynamic Bayesian Networks and Hidden Markov Models; Decision Trees
Marco Chiarandini, Department of Mathematics & Computer Science, University of Southern Denmark
Slides by Stuart Russell and Peter Norvig

- Introduction
  - Artificial Intelligence
  - Intelligent Agents
- Search
  - Uninformed Search
  - Heuristic Search
  - Adversarial Search: minimax search, alpha-beta pruning
- Knowledge Representation and Reasoning
  - Propositional logic
  - First-order logic
  - Inference
- Uncertain Knowledge and Reasoning
  - Probability and the Bayesian approach
  - Bayesian networks
  - Hidden Markov chains
  - Kalman filters
- Machine Learning
  - Decision trees
  - Maximum likelihood, EM algorithm
  - Bayesian networks
  - Neural networks
  - Support vector machines

Performance of approximation algorithms

- Absolute approximation: $|P(X \mid e) - \hat{P}(X \mid e)| \le \epsilon$
- Relative approximation: $\frac{|P(X \mid e) - \hat{P}(X \mid e)|}{P(X \mid e)} \le \epsilon$
- Relative $\Rightarrow$ absolute, since $0 \le P \le 1$ (but $P$ may be $O(2^{-n})$)
- Randomized algorithms may fail with probability at most $\delta$
- Polytime approximation: $\mathrm{poly}(n, \epsilon^{-1}, \log \delta^{-1})$
- Theorem (Dagum and Luby, 1993): both absolute and relative approximation, for either deterministic or randomized algorithms, are NP-hard for any $\epsilon, \delta < 0.5$
- (Absolute approximation is polytime with no evidence: Chernoff bounds)

Summary

- Exact inference by variable elimination: polytime on polytrees, NP-hard on general graphs; space = time, very sensitive to topology
- Approximate inference by Likelihood Weighting (LW) and Markov Chain Monte Carlo (MCMC):
  - PriorSampling and RejectionSampling become unusable as evidence grows
  - LW does poorly when there is lots of (late-in-the-order) evidence
  - LW and MCMC are generally insensitive to topology
  - Convergence can be very slow with probabilities close to 0 or 1
  - Can handle arbitrary combinations of discrete and continuous variables
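Since Likelihood Weighting figures prominently in the summary, a minimal sketch may help make it concrete. The tiny two-variable network below (Rain -> WetGrass) and its probabilities are invented for illustration; they are not a model from the slides:

```python
import random

# Hypothetical toy network: Rain -> WetGrass (illustrative values only).
P_rain = 0.2
P_wet_given = {True: 0.9, False: 0.1}   # P(WetGrass=true | Rain)

def likelihood_weighting(evidence_wet, n=100_000):
    """Estimate P(Rain | WetGrass=evidence_wet): sample the non-evidence
    variable from its prior, weight each sample by the evidence likelihood."""
    weights = {True: 0.0, False: 0.0}
    for _ in range(n):
        rain = random.random() < P_rain   # sample Rain from its prior
        w = P_wet_given[rain] if evidence_wet else 1 - P_wet_given[rain]
        weights[rain] += w                # evidence stays fixed, only weighted
    total = weights[True] + weights[False]
    return weights[True] / total

print(likelihood_weighting(True))   # ~0.692 = 0.2*0.9 / (0.2*0.9 + 0.8*0.1)
```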
Outline

- Wumpus world

Wumpus World

[figure: 4x4 Wumpus world grid, squares [1,1] through [4,4]]

- $P_{ij}$ = true iff $[i,j]$ contains a pit
- $B_{ij}$ = true iff $[i,j]$ is breezy
- Include only $B_{1,1}, B_{1,2}, B_{2,1}$ in the probability model

Specifying the probability model

The full joint distribution is $\mathbf{P}(P_{1,1}, \ldots, P_{4,4}, B_{1,1}, B_{1,2}, B_{2,1})$.

Apply the product rule: $\mathbf{P}(B_{1,1}, B_{1,2}, B_{2,1} \mid P_{1,1}, \ldots, P_{4,4})\,\mathbf{P}(P_{1,1}, \ldots, P_{4,4})$

(Do it this way to get $P(\mathit{Effect} \mid \mathit{Cause})$.)

- First term: 1 if pits are adjacent to breezes, 0 otherwise
- Second term: pits are placed randomly, probability 0.2 per square:

$$\mathbf{P}(P_{1,1}, \ldots, P_{4,4}) = \prod_{i,j=1,1}^{4,4} \mathbf{P}(P_{i,j}) = 0.2^n \times 0.8^{16-n} \quad \text{for } n \text{ pits}$$

Observations and query

We know the following facts:

- $b = \neg b_{1,1} \wedge b_{1,2} \wedge b_{2,1}$
- $\mathit{known} = \neg p_{1,1} \wedge \neg p_{1,2} \wedge \neg p_{2,1}$

The query is $\mathbf{P}(P_{1,3} \mid \mathit{known}, b)$.

Define $\mathit{Unknown}$ = the $P_{ij}$'s other than $P_{1,3}$ and $\mathit{Known}$.

For inference by enumeration, we have

$$\mathbf{P}(P_{1,3} \mid \mathit{known}, b) = \alpha \sum_{\mathit{unknown}} \mathbf{P}(P_{1,3}, \mathit{unknown}, \mathit{known}, b)$$

This grows exponentially with the number of squares!
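The enumeration query can be checked by brute force: the sketch below enumerates all pit configurations over the 13 unknown squares, keeps those consistent with the breeze observations, and weights each by its prior. The helper names and square encoding are ours, not the slides':

```python
from itertools import product

def adj(i, j):
    """Neighbours of square (i, j) inside the 4x4 grid."""
    return [(i + di, j + dj) for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))
            if 1 <= i + di <= 4 and 1 <= j + dj <= 4]

known_no_pit = {(1, 1), (1, 2), (2, 1)}                   # known: no pits here
breeze_obs   = {(1, 1): False, (1, 2): True, (2, 1): True}
unknown = [(i, j) for i in range(1, 5) for j in range(1, 5)
           if (i, j) not in known_no_pit]                 # the 13 other squares

weights = {True: 0.0, False: 0.0}
for bits in product([False, True], repeat=len(unknown)):
    pits = {sq for sq, has_pit in zip(unknown, bits) if has_pit}
    # keep only pit configurations consistent with every breeze observation
    if all(any(n in pits for n in adj(*sq)) == b for sq, b in breeze_obs.items()):
        prior = 0.2 ** len(pits) * 0.8 ** (len(unknown) - len(pits))
        weights[(1, 3) in pits] += prior

total = weights[True] + weights[False]
print(round(weights[True] / total, 2))                    # 0.31, as on the slides
```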
Using conditional independence

Basic insight: observations are conditionally independent of other hidden squares given neighbouring hidden squares.

[figure: 4x4 grid partitioned into QUERY ([1,3]), KNOWN, FRINGE, and OTHER squares]

Define $\mathit{Unknown} = \mathit{Fringe} \cup \mathit{Other}$. Then

$$\mathbf{P}(b \mid P_{1,3}, \mathit{Known}, \mathit{Unknown}) = \mathbf{P}(b \mid P_{1,3}, \mathit{Known}, \mathit{Fringe})$$

Manipulate the query into a form where we can use this!

$$\begin{aligned}
\mathbf{P}(P_{1,3} \mid \mathit{known}, b)
&= \alpha \sum_{\mathit{unknown}} \mathbf{P}(P_{1,3}, \mathit{unknown}, \mathit{known}, b) \\
&= \alpha \sum_{\mathit{unknown}} \mathbf{P}(b \mid P_{1,3}, \mathit{known}, \mathit{unknown})\,\mathbf{P}(P_{1,3}, \mathit{known}, \mathit{unknown}) \\
&= \alpha \sum_{\mathit{fringe}} \sum_{\mathit{other}} \mathbf{P}(b \mid \mathit{known}, P_{1,3}, \mathit{fringe}, \mathit{other})\,\mathbf{P}(P_{1,3}, \mathit{known}, \mathit{fringe}, \mathit{other}) \\
&= \alpha \sum_{\mathit{fringe}} \sum_{\mathit{other}} \mathbf{P}(b \mid \mathit{known}, P_{1,3}, \mathit{fringe})\,\mathbf{P}(P_{1,3}, \mathit{known}, \mathit{fringe}, \mathit{other}) \\
&= \alpha \sum_{\mathit{fringe}} \mathbf{P}(b \mid \mathit{known}, P_{1,3}, \mathit{fringe}) \sum_{\mathit{other}} \mathbf{P}(P_{1,3}, \mathit{known}, \mathit{fringe}, \mathit{other}) \\
&= \alpha \sum_{\mathit{fringe}} \mathbf{P}(b \mid \mathit{known}, P_{1,3}, \mathit{fringe}) \sum_{\mathit{other}} \mathbf{P}(P_{1,3})\,P(\mathit{known})\,P(\mathit{fringe})\,P(\mathit{other}) \\
&= \alpha\,P(\mathit{known})\,\mathbf{P}(P_{1,3}) \sum_{\mathit{fringe}} \mathbf{P}(b \mid \mathit{known}, P_{1,3}, \mathit{fringe})\,P(\mathit{fringe}) \sum_{\mathit{other}} P(\mathit{other}) \\
&= \alpha\,\mathbf{P}(P_{1,3}) \sum_{\mathit{fringe}} \mathbf{P}(b \mid \mathit{known}, P_{1,3}, \mathit{fringe})\,P(\mathit{fringe})
\end{aligned}$$

[figure: the fringe models consistent with the observations, with weights $0.2 \times 0.2 = 0.04$, $0.2 \times 0.8 = 0.16$, $0.8 \times 0.2 = 0.16$ for $P_{1,3}$ true, and $0.2 \times 0.2 = 0.04$, $0.2 \times 0.8 = 0.16$ for $P_{1,3}$ false]

$$\mathbf{P}(P_{1,3} \mid \mathit{known}, b) = \alpha\,\langle 0.2\,(0.04 + 0.16 + 0.16),\; 0.8\,(0.04 + 0.16) \rangle \approx \langle 0.31, 0.69 \rangle$$

$$\mathbf{P}(P_{2,2} \mid \mathit{known}, b) \approx \langle 0.86, 0.14 \rangle$$
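As a quick numeric check of the last line of the derivation, the snippet below plugs in the fringe weights from the figure and normalizes:

```python
# Fringe summation from the final line of the derivation (weights as above).
P = 0.2   # pit prior per square
consistent = {True:  [P*P, P*(1-P), (1-P)*P],   # 3 fringe models if P(1,3)=true
              False: [P*P, P*(1-P)]}            # 2 fringe models if P(1,3)=false
score = {v: (P if v else 1 - P) * sum(ms) for v, ms in consistent.items()}
norm = sum(score.values())
print({v: round(s / norm, 2) for v, s in score.items()})
# {True: 0.31, False: 0.69}, matching the slides
```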
Outline

- Time and uncertainty
- Inference: filtering, prediction, smoothing
- Hidden Markov models
- Kalman filters (a brief mention)
- Dynamic Bayesian networks (an even briefer mention)

Time and uncertainty

The world changes; we need to track and predict it. (Diabetes management vs vehicle diagnosis.)

Basic idea: copy state and evidence variables for each time step:
- $\mathbf{X}_t$ = set of unobservable state variables at time $t$, e.g., $\mathit{BloodSugar}_t$, $\mathit{StomachContents}_t$, etc.
- $\mathbf{E}_t$ = set of observable evidence variables at time $t$, e.g., $\mathit{MeasuredBloodSugar}_t$, $\mathit{PulseRate}_t$, $\mathit{FoodEaten}_t$

This assumes discrete time; the step size depends on the problem.

Notation: $\mathbf{X}_{a:b} = \mathbf{X}_a, \mathbf{X}_{a+1}, \ldots, \mathbf{X}_{b-1}, \mathbf{X}_b$

Markov processes (Markov chains)

Construct a Bayes net from these variables? Problems:
- unbounded number of conditional probability tables
- unbounded number of parents

Markov assumption: $\mathbf{X}_t$ depends on a bounded subset of $\mathbf{X}_{0:t-1}$.
- First-order Markov process: $\mathbf{P}(\mathbf{X}_t \mid \mathbf{X}_{0:t-1}) = \mathbf{P}(\mathbf{X}_t \mid \mathbf{X}_{t-1})$
- Second-order Markov process: $\mathbf{P}(\mathbf{X}_t \mid \mathbf{X}_{0:t-1}) = \mathbf{P}(\mathbf{X}_t \mid \mathbf{X}_{t-2}, \mathbf{X}_{t-1})$

[figure: first-order and second-order Markov chains over $\mathbf{X}_{t-2}, \ldots, \mathbf{X}_{t+2}$]

Sensor Markov assumption: $\mathbf{P}(\mathbf{E}_t \mid \mathbf{X}_{0:t}, \mathbf{E}_{0:t-1}) = \mathbf{P}(\mathbf{E}_t \mid \mathbf{X}_t)$

Stationary process: the transition model $\mathbf{P}(\mathbf{X}_t \mid \mathbf{X}_{t-1})$ and the sensor model $\mathbf{P}(\mathbf{E}_t \mid \mathbf{X}_t)$ are fixed for all $t$.

Example

[figure: umbrella network $\mathit{Rain}_{t-1} \to \mathit{Rain}_t \to \mathit{Rain}_{t+1}$, with $\mathit{Umbrella}_t$ observed at each step; the usual umbrella-world CPTs: $P(R_t \mid r_{t-1}) = 0.7$, $P(R_t \mid \neg r_{t-1}) = 0.3$, $P(U_t \mid r_t) = 0.9$, $P(U_t \mid \neg r_t) = 0.2$]

The first-order Markov assumption does not hold exactly in the real world! Possible fixes:
1. Increase the order of the Markov process
2. Augment the state, e.g., add $\mathit{Temp}_t$, $\mathit{Pressure}_t$

Example: robot motion. Augment position and velocity with $\mathit{Battery}_t$.
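To make the stationarity and Markov assumptions concrete, here is a minimal sketch that samples a trajectory from the umbrella model, using the CPT values quoted above (the function names are ours):

```python
import random

# Stationary umbrella world: the same two dicts serve every time step t.
P_rain_given_prev = {True: 0.7, False: 0.3}   # P(Rain_t = true | Rain_{t-1})
P_umb_given_rain  = {True: 0.9, False: 0.2}   # P(Umbrella_t = true | Rain_t)

def simulate(T, p0=0.5):
    """Sample a state/evidence trajectory from the first-order Markov process."""
    rain = random.random() < p0               # draw Rain_0 from the prior
    states, evidence = [], []
    for _ in range(T):
        rain = random.random() < P_rain_given_prev[rain]   # transition model
        states.append(rain)
        evidence.append(random.random() < P_umb_given_rain[rain])  # sensor model
    return states, evidence

states, evidence = simulate(10)
```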
Inference tasks

Aim: devise a recursive state estimation algorithm.

1. Filtering: $\mathbf{P}(\mathbf{X}_t \mid \mathbf{e}_{1:t})$, the belief state; input to the decision process of a rational agent
2. Prediction: $\mathbf{P}(\mathbf{X}_{t+k} \mid \mathbf{e}_{1:t})$ for $k > 0$; evaluation of possible action sequences, like filtering without the evidence
3. Smoothing: $\mathbf{P}(\mathbf{X}_k \mid \mathbf{e}_{1:t})$ for $0 \le k < t$; a better estimate of past states, essential for learning
4. Most likely explanation: $\arg\max_{\mathbf{x}_{1:t}} P(\mathbf{x}_{1:t} \mid \mathbf{e}_{1:t})$; speech recognition, decoding with a noisy channel

Filtering

We want $\mathbf{P}(\mathbf{X}_{t+1} \mid \mathbf{e}_{1:t+1}) = f(\mathbf{e}_{t+1}, \mathbf{P}(\mathbf{X}_t \mid \mathbf{e}_{1:t}))$:

$$\begin{aligned}
\mathbf{P}(\mathbf{X}_{t+1} \mid \mathbf{e}_{1:t+1}) &= \mathbf{P}(\mathbf{X}_{t+1} \mid \mathbf{e}_{1:t}, \mathbf{e}_{t+1}) \\
&= \alpha\,\mathbf{P}(\mathbf{e}_{t+1} \mid \mathbf{X}_{t+1}, \mathbf{e}_{1:t})\,\mathbf{P}(\mathbf{X}_{t+1} \mid \mathbf{e}_{1:t}) \\
&= \alpha\,\mathbf{P}(\mathbf{e}_{t+1} \mid \mathbf{X}_{t+1})\,\mathbf{P}(\mathbf{X}_{t+1} \mid \mathbf{e}_{1:t})
\end{aligned}$$

I.e., prediction + estimation. Prediction by summing out $\mathbf{X}_t$:

$$\begin{aligned}
\mathbf{P}(\mathbf{X}_{t+1} \mid \mathbf{e}_{1:t+1}) &= \alpha\,\mathbf{P}(\mathbf{e}_{t+1} \mid \mathbf{X}_{t+1}) \sum_{\mathbf{x}_t} \mathbf{P}(\mathbf{X}_{t+1} \mid \mathbf{x}_t, \mathbf{e}_{1:t})\,P(\mathbf{x}_t \mid \mathbf{e}_{1:t}) \\
&= \alpha\,\mathbf{P}(\mathbf{e}_{t+1} \mid \mathbf{X}_{t+1}) \sum_{\mathbf{x}_t} \mathbf{P}(\mathbf{X}_{t+1} \mid \mathbf{x}_t)\,P(\mathbf{x}_t \mid \mathbf{e}_{1:t})
\end{aligned}$$

Thus $\mathbf{f}_{1:t+1} = \mathrm{FORWARD}(\mathbf{f}_{1:t}, \mathbf{e}_{t+1})$, where $\mathbf{f}_{1:t} = \mathbf{P}(\mathbf{X}_t \mid \mathbf{e}_{1:t})$.

Time and space are constant (independent of $t$) by keeping track of $\mathbf{f}$.

Filtering example

[figure: umbrella network unrolled over $\mathit{Rain}_0, \mathit{Rain}_1, \mathit{Rain}_2$ with observations $\mathit{Umbrella}_1, \mathit{Umbrella}_2$, showing the true/false filtering distributions at each step]

Smoothing

Divide the evidence $\mathbf{e}_{1:t}$ into $\mathbf{e}_{1:k}$, $\mathbf{e}_{k+1:t}$:

$$\begin{aligned}
\mathbf{P}(\mathbf{X}_k \mid \mathbf{e}_{1:t}) &= \mathbf{P}(\mathbf{X}_k \mid \mathbf{e}_{1:k}, \mathbf{e}_{k+1:t}) \\
&= \alpha\,\mathbf{P}(\mathbf{X}_k \mid \mathbf{e}_{1:k})\,\mathbf{P}(\mathbf{e}_{k+1:t} \mid \mathbf{X}_k, \mathbf{e}_{1:k}) \\
&= \alpha\,\mathbf{P}(\mathbf{X}_k \mid \mathbf{e}_{1:k})\,\mathbf{P}(\mathbf{e}_{k+1:t} \mid \mathbf{X}_k) \\
&= \alpha\,\mathbf{f}_{1:k}\,\mathbf{b}_{k+1:t}
\end{aligned}$$

The backward message is computed by a backwards recursion:

$$\begin{aligned}
\mathbf{P}(\mathbf{e}_{k+1:t} \mid \mathbf{X}_k) &= \sum_{\mathbf{x}_{k+1}} \mathbf{P}(\mathbf{e}_{k+1:t} \mid \mathbf{X}_k, \mathbf{x}_{k+1})\,\mathbf{P}(\mathbf{x}_{k+1} \mid \mathbf{X}_k) \\
&= \sum_{\mathbf{x}_{k+1}} P(\mathbf{e}_{k+1:t} \mid \mathbf{x}_{k+1})\,\mathbf{P}(\mathbf{x}_{k+1} \mid \mathbf{X}_k) \\
&= \sum_{\mathbf{x}_{k+1}} P(\mathbf{e}_{k+1} \mid \mathbf{x}_{k+1})\,P(\mathbf{e}_{k+2:t} \mid \mathbf{x}_{k+1})\,\mathbf{P}(\mathbf{x}_{k+1} \mid \mathbf{X}_k)
\end{aligned}$$
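The FORWARD update and the filtering example above can be written out directly for the two-state umbrella model. A minimal sketch (variable names are ours) that reproduces the standard filtering results for two observed umbrellas:

```python
# Forward (filtering) recursion for the umbrella world.
# f is the distribution P(Rain_t | u_{1:t}) as (P(true), P(false)).
T = {True: 0.7, False: 0.3}   # P(Rain_t = true | Rain_{t-1})
O = {True: 0.9, False: 0.2}   # P(Umbrella_t = true | Rain_t)

def forward(f, umbrella):
    # prediction: sum out the previous state
    pred_true  = T[True] * f[0] + T[False] * f[1]
    pred_false = (1 - T[True]) * f[0] + (1 - T[False]) * f[1]
    # update: weight by the sensor model, then normalize
    w_true  = (O[True] if umbrella else 1 - O[True]) * pred_true
    w_false = (O[False] if umbrella else 1 - O[False]) * pred_false
    z = w_true + w_false
    return (w_true / z, w_false / z)

f = (0.5, 0.5)                 # prior P(Rain_0)
for u in [True, True]:         # umbrella observed on days 1 and 2
    f = forward(f, u)
    print(tuple(round(p, 3) for p in f))   # (0.818, 0.182) then (0.883, 0.117)
```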
Smoothing example

[figure: forward, backward, and smoothed distributions for $\mathit{Rain}_0, \mathit{Rain}_1, \mathit{Rain}_2$ given $\mathit{Umbrella}_1, \mathit{Umbrella}_2$]

If we want to smooth the whole sequence: the forward-backward algorithm caches the forward messages along the way. Time linear in $t$ (polytree inference), space $O(t|\mathbf{f}|)$.

Most likely explanation

The most likely sequence is not the sequence of most likely states (it is a property of the joint distribution)!

The most likely path to each $\mathbf{x}_{t+1}$ = the most likely path to some $\mathbf{x}_t$ plus one more step:

$$\max_{\mathbf{x}_1 \ldots \mathbf{x}_t} \mathbf{P}(\mathbf{x}_1, \ldots, \mathbf{x}_t, \mathbf{X}_{t+1} \mid \mathbf{e}_{1:t+1})
= \mathbf{P}(\mathbf{e}_{t+1} \mid \mathbf{X}_{t+1}) \max_{\mathbf{x}_t} \left( \mathbf{P}(\mathbf{X}_{t+1} \mid \mathbf{x}_t) \max_{\mathbf{x}_1 \ldots \mathbf{x}_{t-1}} P(\mathbf{x}_1, \ldots, \mathbf{x}_{t-1}, \mathbf{x}_t \mid \mathbf{e}_{1:t}) \right)$$

Identical to filtering, except $\mathbf{f}_{1:t}$ is replaced by

$$\mathbf{m}_{1:t} = \max_{\mathbf{x}_1 \ldots \mathbf{x}_{t-1}} \mathbf{P}(\mathbf{x}_1, \ldots, \mathbf{x}_{t-1}, \mathbf{X}_t \mid \mathbf{e}_{1:t}),$$

i.e., $\mathbf{m}_{1:t}(i)$ gives the probability of the most likely path to state $i$. The update has the sum replaced by max, giving the Viterbi algorithm:

$$\mathbf{m}_{1:t+1} = \mathbf{P}(\mathbf{e}_{t+1} \mid \mathbf{X}_{t+1}) \max_{\mathbf{x}_t} \left( \mathbf{P}(\mathbf{X}_{t+1} \mid \mathbf{x}_t)\,\mathbf{m}_{1:t} \right)$$

Viterbi example

[figure: state-space trellis for $\mathit{Rain}_1, \ldots, \mathit{Rain}_5$ with umbrella observations, showing the most likely paths and the messages $\mathbf{m}_{1:1}, \ldots, \mathbf{m}_{1:5}$]

Hidden Markov models

- $X_t$ is a single, discrete variable (usually $E_t$ is too); the domain of $X_t$ is $\{1, \ldots, S\}$
- Transition matrix $\mathbf{T}_{ij} = P(X_t = j \mid X_{t-1} = i)$, e.g., $\begin{pmatrix} 0.7 & 0.3 \\ 0.3 & 0.7 \end{pmatrix}$
- Sensor matrix $\mathbf{O}_t$ for each time step, diagonal elements $P(e_t \mid X_t = i)$; e.g., with $U_1 = \mathrm{true}$, $\mathbf{O}_1 = \begin{pmatrix} 0.9 & 0 \\ 0 & 0.2 \end{pmatrix}$
- Forward and backward messages as column vectors:
  - $\mathbf{f}_{1:t+1} = \alpha\,\mathbf{O}_{t+1} \mathbf{T}^\top \mathbf{f}_{1:t}$
  - $\mathbf{b}_{k+1:t} = \mathbf{T} \mathbf{O}_{k+1} \mathbf{b}_{k+2:t}$
- The forward-backward algorithm needs $O(S^2 t)$ time and $O(St)$ space
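In the matrix formulation, the Viterbi update is simply a max in place of the forward sum. Here is a sketch for the umbrella HMM with back-pointers to recover the path (state 0 = rain; the NumPy layout is our choice):

```python
import numpy as np

T = np.array([[0.7, 0.3],          # T[i, j] = P(X_t = j | X_{t-1} = i)
              [0.3, 0.7]])
O = {True:  np.diag([0.9, 0.2]),   # sensor matrix for Umbrella_t = true
     False: np.diag([0.1, 0.8])}   # and for Umbrella_t = false

def viterbi(evidence, prior=np.array([0.5, 0.5])):
    m = prior
    back = []                               # argmax pointers, one array per step
    for e in evidence:
        scores = T * m[:, None]             # scores[i, j] = P(j | i) * m[i]
        back.append(scores.argmax(axis=0))  # best predecessor for each state j
        m = O[e] @ scores.max(axis=0)       # max in place of the forward sum
    path = [int(m.argmax())]                # best final state
    for ptr in reversed(back[1:]):          # follow the pointers backwards
        path.append(int(ptr[path[-1]]))
    return list(reversed(path))

print(viterbi([True, True, False, True, True]))
# [0, 0, 1, 0, 0]: rain on every day except day 3
```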
Kalman filters

Modelling systems described by a set of continuous variables, e.g., tracking a bird flying: $\mathbf{X}_t = X, Y, Z, \dot{X}, \dot{Y}, \dot{Z}$. Also airplanes, robots, ecosystems, economies, chemical plants, planets, ...

[figure: state-space model $\mathbf{X}_t \to \mathbf{X}_{t+1}$ with observations $\mathbf{Z}_t, \mathbf{Z}_{t+1}$]

Assume a Gaussian prior and linear Gaussian transition and sensor models.

Updating Gaussian distributions

Prediction step: if $P(\mathbf{X}_t \mid \mathbf{e}_{1:t})$ is Gaussian, then the prediction

$$P(\mathbf{X}_{t+1} \mid \mathbf{e}_{1:t}) = \int_{\mathbf{x}_t} P(\mathbf{X}_{t+1} \mid \mathbf{x}_t)\,P(\mathbf{x}_t \mid \mathbf{e}_{1:t})\,d\mathbf{x}_t$$

is Gaussian. If $P(\mathbf{X}_{t+1} \mid \mathbf{e}_{1:t})$ is Gaussian, then the updated distribution

$$P(\mathbf{X}_{t+1} \mid \mathbf{e}_{1:t+1}) = \alpha\,P(\mathbf{e}_{t+1} \mid \mathbf{X}_{t+1})\,P(\mathbf{X}_{t+1} \mid \mathbf{e}_{1:t})$$

is Gaussian. Hence $P(\mathbf{X}_t \mid \mathbf{e}_{1:t})$ is a multivariate Gaussian $N(\mu_t, \Sigma_t)$ for all $t$.

For a general (nonlinear, non-Gaussian) process, the description of the posterior grows unboundedly as $t \to \infty$.

2-D tracking example

[figures: 2-D tracking in the $X$-$Y$ plane, observed vs. filtered trajectories and observed vs. smoothed trajectories]
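For intuition, here is a one-dimensional instance of the two Gaussian steps above, with a random-walk transition model and illustrative noise variances (all numeric values are made up):

```python
# One step of a 1-D Kalman filter: random-walk transition x_{t+1} = x_t + w,
# noisy observation z = x + v, both noises zero-mean Gaussian.
def kalman_step(mu, sigma2, z, q=1.0, r=1.0):
    """mu, sigma2: current belief N(mu, sigma2); z: new observation;
    q, r: transition and sensor noise variances (illustrative values)."""
    mu_pred, s_pred = mu, sigma2 + q     # prediction: the Gaussian widens
    k = s_pred / (s_pred + r)            # Kalman gain
    # update: precision-weighted average of prediction and observation
    return mu_pred + k * (z - mu_pred), (1 - k) * s_pred

mu, s = 0.0, 1.0
for z in [0.9, 1.1, 1.0]:
    mu, s = kalman_step(mu, s, z)
print(round(mu, 3), round(s, 3))   # 0.967 0.619
```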
Where it breaks

The Kalman filter cannot be applied if the transition model is nonlinear. The Extended Kalman Filter models the transition as locally linear around $\mathbf{x}_t = \mu_t$; it fails if the system is locally unsmooth.

Dynamic Bayesian networks

$\mathbf{X}_t$, $\mathbf{E}_t$ contain arbitrarily many variables in a replicated Bayes net.

[figure: umbrella DBN extended with $\mathit{Battery}_0, \mathit{Battery}_1$ and a battery-meter variable, with CPTs for $R_0$, $R_1$, and $U_1$]

DBNs vs. HMMs

Every HMM is a single-variable DBN; every discrete DBN is an HMM.

[figure: a DBN over variables $X_t, Y_t, Z_t$ and the corresponding single-variable HMM]

Sparse dependencies mean exponentially fewer parameters. E.g., with 20 Boolean state variables, three parents each, the DBN has $20 \times 2^3 = 160$ parameters, while the equivalent HMM has $2^{20} \times 2^{20} \approx 10^{12}$.

DBNs vs. Kalman filters

Every Kalman filter model is a DBN, but few DBNs are KFs; the real world requires non-Gaussian posteriors.
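The parameter counts in the example are easy to reproduce:

```python
# Each Boolean variable with three Boolean parents needs a CPT of 2^3 rows;
# the flat HMM needs a full transition matrix over all 2^20 joint states.
n_vars, n_parents = 20, 3
dbn_params = n_vars * 2 ** n_parents   # 160
hmm_params = (2 ** n_vars) ** 2        # ~10^12 transition entries
print(dbn_params, f"{hmm_params:.0e}") # 160 1e+12
```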
Summary

- Temporal models use state and sensor variables replicated over time
- Markov assumptions and the stationarity assumption, so we need
  - a transition model $\mathbf{P}(\mathbf{X}_t \mid \mathbf{X}_{t-1})$
  - a sensor model $\mathbf{P}(\mathbf{E}_t \mid \mathbf{X}_t)$
- Tasks are filtering, prediction, smoothing, most likely sequence; all done recursively with constant cost per time step
- Hidden Markov models have a single discrete state variable; used for speech recognition
- Kalman filters allow $n$ state variables, linear Gaussian, $O(n^3)$ update
- Dynamic Bayes nets subsume HMMs and Kalman filters; exact update is intractable

Outline

- Speech as probabilistic inference
- Speech sounds
- Word pronunciation
- Word sequences

Speech as probabilistic inference

Speech signals are noisy, variable, ambiguous.

What is the most likely word sequence, given the speech signal? I.e., choose $\mathit{Words}$ to maximize $P(\mathit{Words} \mid \mathit{signal})$.

Use Bayes' rule:

$$P(\mathit{Words} \mid \mathit{signal}) = \alpha\,P(\mathit{signal} \mid \mathit{Words})\,P(\mathit{Words})$$

I.e., the problem decomposes into an acoustic model + a language model. $\mathit{Words}$ is the hidden state sequence, $\mathit{signal}$ is the observation sequence.
Phones

All human speech is composed from phones, determined by the configuration of the articulators (lips, teeth, tongue, vocal cords, air flow). Phones form an intermediate level of hidden states between words and signal:

acoustic model = pronunciation model + phone model

The ARPAbet was designed for American English:

[iy] beat     [b] bet      [p] pet
[ih] bit      [ch] Chet    [r] rat
[ey] bet      [d] debt     [s] set
[ao] bought   [hh] hat     [th] thick
[ow] boat     [hv] high    [dh] that
[er] Bert     [l] let      [w] wet
[ix] roses    [ng] sing    [en] button
...           ...          ...

E.g., "ceiling" is [s iy l ih ng] / [s iy l ix ng] / [s iy l en].

Word pronunciation models

Each word is described as a distribution over phone sequences, with the distribution represented as an HMM transition model.

[figure: pronunciation HMM for "tomato": [t] branches to [ow] or [ah], then [m], then a branch to [ey] or [aa], then [t] and a final [ow] with probability 1.0]

$$P([\mathrm{towmeytow}] \mid \text{``tomato''}) = P([\mathrm{towmaatow}] \mid \text{``tomato''}) = 0.1$$
$$P([\mathrm{tahmeytow}] \mid \text{``tomato''}) = P([\mathrm{tahmaatow}] \mid \text{``tomato''}) = 0.4$$

The structure is created manually; the transition probabilities are learned from data.

Isolated words

Phone models + word models fix the likelihood $P(\mathbf{e}_{1:t} \mid \mathit{word})$ for an isolated word:

$$P(\mathit{word} \mid \mathbf{e}_{1:t}) = \alpha\,P(\mathbf{e}_{1:t} \mid \mathit{word})\,P(\mathit{word})$$

The prior probability $P(\mathit{word})$ is obtained simply by counting word frequencies. $P(\mathbf{e}_{1:t} \mid \mathit{word})$ can be computed recursively: define $\ell_{1:t} = \mathbf{P}(X_t, \mathbf{e}_{1:t})$, use the recursive update $\ell_{1:t+1} = \mathrm{FORWARD}(\ell_{1:t}, \mathbf{e}_{t+1})$, and then $P(\mathbf{e}_{1:t} \mid \mathit{word}) = \sum_{x_t} \ell_{1:t}(x_t)$.

Continuous speech

Not just a sequence of isolated-word recognition problems!
- Adjacent words are highly correlated
- The sequence of most likely words is not the most likely sequence of words
- Segmentation: there are few gaps in speech
- Cross-word coarticulation, e.g., "next thing"

Continuous speech systems manage 60-80% accuracy on a good day; isolated-word dictation systems with training reach 95-99% accuracy.
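The quoted sequence probabilities pin down the branch probabilities of the tomato HMM (0.2/0.8 for [ow]/[ah] and 0.5/0.5 for [ey]/[aa], assuming two independent binary branches as in the figure). A sketch that walks the transition table; the second occurrences of [t] and [ow] are renamed t2/ow2 here only to keep the dictionary keys unique:

```python
# Pronunciation HMM for "tomato" as a transition dict; branch probabilities
# are inferred from the four quoted sequence probabilities above.
trans = {('start', 't'): 1.0, ('t', 'ow'): 0.2, ('t', 'ah'): 0.8,
         ('ow', 'm'): 1.0, ('ah', 'm'): 1.0,
         ('m', 'ey'): 0.5, ('m', 'aa'): 0.5,
         ('ey', 't2'): 1.0, ('aa', 't2'): 1.0, ('t2', 'ow2'): 1.0}

def pron_prob(phones):
    """Probability of a phone sequence under the word's HMM."""
    p, prev = 1.0, 'start'
    for ph in phones:
        p *= trans.get((prev, ph), 0.0)   # unreachable transitions get prob 0
        prev = ph
    return p

print(pron_prob(['t', 'ow', 'm', 'ey', 't2', 'ow2']))   # 0.1
print(pron_prob(['t', 'ah', 'm', 'aa', 't2', 'ow2']))   # 0.4
```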
Language model

The prior probability of a word sequence is given by the chain rule:

$$P(w_1 \cdots w_n) = \prod_{i=1}^{n} P(w_i \mid w_1 \cdots w_{i-1})$$

The bigram model approximates

$$P(w_i \mid w_1 \cdots w_{i-1}) \approx P(w_i \mid w_{i-1})$$

Train by counting all word pairs in a large text corpus. More sophisticated models (trigrams, grammars, etc.) help a little bit.

Combined HMM

States of the combined language+word+phone model are labelled by the word we're in + the phone in that word + the phone state in that phone.

The Viterbi algorithm finds the most likely phone state sequence. It does segmentation by considering all possible word sequences and boundaries. It doesn't always give the most likely word sequence, because each word sequence is the sum over many state sequences.

Jelinek invented A* in 1969 as a way to find the most likely word sequence, where the step cost is $-\log P(w_i \mid w_{i-1})$.

Outline

- Learning agents
- Inductive learning
- Decision tree learning
- Measuring learning performance
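Training by counting word pairs amounts to a couple of counters. A minimal sketch on a toy corpus (the corpus and numbers are invented):

```python
from collections import Counter

corpus = "the cat sat on the mat the cat ran".split()
unigrams = Counter(corpus[:-1])               # counts of each context word
bigrams  = Counter(zip(corpus, corpus[1:]))   # counts of adjacent word pairs

def p_bigram(w, prev):
    """Maximum-likelihood estimate of P(w_i = w | w_{i-1} = prev)."""
    return bigrams[(prev, w)] / unigrams[prev]

print(p_bigram("cat", "the"))   # 2/3: "the" is followed by "cat" twice, "mat" once
```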
Learning agents

Back to Turing's article:
- child mind program
- education (reward & punishment)

Learning is essential for unknown environments, i.e., when the designer lacks omniscience. Learning is useful as a system construction method, i.e., expose the agent to reality rather than trying to write it down. Learning modifies the agent's decision mechanisms to improve performance.

[figure: learning agent architecture: a critic, judging against a performance standard, gives feedback to the learning element; the learning element changes the knowledge of the performance element and sets learning goals for a problem generator, which proposes experiments; the performance element maps sensor input to effector actions in the environment]

The design of the learning element is dictated by:
- what type of performance element is used
- which functional component is to be learned
- how that functional component is represented
- what kind of feedback is available

Example scenarios:

Performance element    Component            Representation             Feedback
Alpha-beta search      Evaluation fn.       Weighted linear function   Win/loss
Logical agent          Transition model     Successor-state axioms     Outcome
Utility-based agent    Transition model     Dynamic Bayes net          Outcome
Simple reflex agent    Percept-action fn.   Neural net                 Correct action

Supervised learning: correct answers for each instance. Reinforcement learning: occasional rewards.