Dipartimento di Elettronica Informazione e Bioingegneria Cognitive Robotics
1 Dipartimento di Elettronica, Informazione e Bioingegneria, Cognitive Robotics. Probabilistic self-localization and mapping, 2015
2 Implementations of rational agents. Today, implementations of rational agents are hybrid: declarative knowledge for high-level planning; declarative/probabilistic/neural knowledge for motion planning; qualitative physics or behaviors for monitoring and recovery.
3 Why? The urge to ask why and the capacity to find causal explanations came early in human development. When God asks: "Did you eat from that tree?", this is what Adam replies: "The woman whom you gave to be with me, she handed me the fruit from the tree; and I ate." Eve is just as skillful: "The serpent deceived me, and I ate." The thing to notice about this story is that God did not ask for an explanation, only for the facts; it was Adam who felt the need to explain. The message is clear: causal explanation is a man-made concept. (J. Pearl)
4 Causality - human reasoning. We humans are fundamentally storytellers. We like to organize events into chains of causes and effects that explain the consequences of our actions. It is tempting to believe that our stories of causes and effects are how the world works. Actually, they are just a framework that we use to manipulate the world and to construct explanations. Did Newton's F = ma explain how a force causes a mass to accelerate? The cause-effect paradigm works well in engineering, where we arrange the world for our convenience. The computer is a perfect example: the inputs affect the outputs but not vice versa, and the components used to construct the computer are built to be atomic building blocks of cause-and-effect.
5 Limits. The notion of cause-and-effect breaks down when the parts that we would like to think of as outputs affect the parts that we would prefer to think of as inputs. The paradoxes of quantum mechanics are a perfect example: our mere observation of a particle can "cause" a distant particle to be in a different state. It also falls apart when we try to use causation to explain complex dynamical systems: a gene does not "cause" a trait like height or a disease like cancer. Science will need more powerful explanatory tools, and we will learn to accept the limits of our old methods of storytelling. "We will come to appreciate that causes and effects do not exist in nature, that they are just convenient creations of our own minds." (W. Daniel Hillis)
6 Galileo - how. Two maxims: description first, explanation second; that is, the "how" precedes the "why". Description is carried out in the language of mathematics, namely equations. Ask not whether an object falls because it is pulled from below or pushed from above; ask how well you can predict the time it takes. That time will vary from object to object and as the angle of the track changes. Moreover, do not attempt to answer such questions in human language; say it in the form of mathematical equations. "Description first, explanation second" was taken very seriously by scientists and changed the character of science from speculative to empirical. (1638)
7 Hume - how. David Hume: the "why" is not merely second to the "how"; the "why" is totally superfluous, as it is subsumed by the "how". Thus causal connections are the product of observations. Causation is a learnable habit of the mind, almost as fictional as optical illusions and as transitory as Pavlov's conditioning. How do people ever acquire knowledge of causation? Beware of spurious correlations: the rooster's crow stands in constant conjunction to the sunrise, yet it does not cause the sun to rise.
8 Causality in a pure logic system
9 Computing Causality. Causal relations can be reduced to other concepts. The mechanistic account reduces causal relations to physical processes. The probabilistic account reduces causal relations to probabilistic relations: causal relations induce probabilistic dependencies. The counterfactual account reduces causal relations to counterfactual conditionals: C is a direct cause of E if "if C were to occur, then E would occur" and "if C were not to occur, then E would not occur". The agent-oriented account reduces causal relations to the ability of agents to achieve goals by manipulating causes: C causes E if and only if bringing about C would be an effective way for an agent to bring about E.
10 Discovering causal relationships. Two general strategies. Hypothetico-deductive (Popper): make a hypothesis of a causal relationship; deduce predictions from the hypothesis; compare predictions and observed values. A causal explanation of an event (effect) is: natural laws + initial conditions (cause). Inductive (Bacon): make a large number of observations; compile a table of positive, negative, and partial instances; induce the causal relationship from the data in steps. Today: causal Bayesian networks. (J. Pearl)
11 Causality or probability. In most causal connections, causality is a matter of degree. Probabilities can be used "by default" to model this graduation, but they do not represent the core of a causal process: they are only a measure of the external manifestations of the inner mechanism of change. (Marianne Belis, "The causal roots of probability", in "Causality and Probability in the Sciences", F. Russo and J. Williamson (eds.), Volume 5, College Publications, King's College London, 2007)
12 Non-determinism in Science. In his Ph.D. thesis (1911), Niels Bohr introduced a demonstration of diamagnetism based on statistics. Heisenberg introduced the uncertainty principle in physics (1927). In 1997, Ilya Prigogine contended that determinism is no longer a viable scientific belief.
13 Schrödinger's cat paradox. A cat, a flask of poison, and a radioactive source in a sealed box. When exactly does quantum superposition end and reality collapse into one possibility or the other? When there is an external observation. If an internal monitor detects radioactivity (a single atom decaying), the flask is shattered, releasing the poison that kills the cat. After a while, the cat is simultaneously alive and dead. Yet when one looks in the box, one sees the cat either alive or dead. Explanations consistent with microscopic quantum mechanics require that macroscopic objects, such as cats, do not always have unique classical descriptions. Our intuition says that no observer can be in a mixture of states.
14 Probabilistic approach in AI: PLANNING IN UNCERTAIN DOMAINS. Choosing the best action requires thinking about more than just the immediate effects of your actions: there is a lot of uncertainty about the future. Models developed in Dynamic Programming and Operations Research have been adopted in AI: Markov decision processes.
15 Probabilistic robotics. Classical robotics (mid-70s): exact models, no perception necessary. Reactive paradigm (mid-80s): no models, relies heavily on good perception. Hybrids (since the 90s): model-based at higher levels, reactive at lower levels. Probabilistic robotics (since the mid-90s): seamless integration of models and perception; inaccurate models, inaccurate sensors.
16 Probabilistic Robotics (S. Thrun). Key ideas: explicit representation of uncertainty using the calculus of probability theory; applications independent from the sensors used; probability both for perception and for action.
17 Museum robot RHINO (1996, AAAI-97), Bonn. Navigation: environment crowded and unpredictable; environment unmodified; invisible hazards; walking speed or faster; high failure costs. Interaction: individuals and crowds; museum visitors' first encounter; ages 2 through 99; visitors spend less than 15 minutes.
18 Localization as an estimate. The localization problem can be described as a Bayesian estimation problem: we want to estimate the location of a robot given noisy measurements. The robot has a belief about where it is: at any time it does not consider one location but the whole space of possible locations; based on all available information, the robot can believe to be at a certain location to a certain degree. The localization problem consists of estimating the probability density function over the space of all locations.
19 SLAM: Simultaneous Localization And Mapping. Figure out where the robot is and what the world looks like, at the same time. Localization: Where am I? Position error accumulates with movement. Mapping: What does the environment look like? Sensor error is not independent of position error.
20 Random Variables. Discrete: X can take on a finite number of values in {x1, x2, ..., xn}; P(X = xi), abbreviated P(xi), is the probability that the random variable X takes value xi. Continuous: X takes on values in the continuum; p(X = x), abbreviated p(x), is a probability density function: Pr(x ∈ [a, b]) = ∫_a^b p(x) dx.
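The two cases above can be checked numerically; the discrete distribution and the uniform density below are illustrative, not from the slides.

```python
# Discrete vs. continuous random variables: probabilities P(xi) sum to 1;
# a density p(x) integrates to 1 over its support. Values are illustrative.

# Discrete: X in {1, 2, 3} with probabilities P(xi).
P = {1: 0.2, 2: 0.5, 3: 0.3}
assert abs(sum(P.values()) - 1.0) < 1e-12

# Continuous: uniform density p(x) = 1/(b-a) on [a, b];
# Pr(x in [a, b]) = integral of p(x) dx from a to b = 1,
# approximated here by a midpoint Riemann sum.
a, b, n = 0.0, 2.0, 10000
p = lambda x: 1.0 / (b - a)
integral = sum(p(a + (i + 0.5) * (b - a) / n) for i in range(n)) * (b - a) / n
```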
21 Theorem of Total Probability. Discrete case: P(x) = Σ_y P(x, y) = Σ_y P(x|y) P(y). Continuous case: p(x) = ∫ p(x, y) dy = ∫ p(x|y) p(y) dy.
22 Conditional Probability. P(x, y) = P(X = x and Y = y) is the joint probability. If X and Y are independent, then P(x, y) = P(x) P(y). Conditional probability: if we know that the Y value is y, we would like to know the probability that the X value is x conditioned on that fact, P(x|y), the probability of x given y: P(x|y) = P(x, y) / P(y), if P(y) > 0; hence P(x, y) = P(x|y) P(y). If X and Y are independent, then P(x|y) = P(x) P(y) / P(y) = P(x).
23 Bayes rule. P(x|y) = P(y|x) P(x) / P(y) = likelihood × prior / evidence. We want to connect P(x|y) to its inverse, P(y|x).
24 Meaning of Bayes rule. x is a quantity that we want to infer from y; y is data. P(x) is the prior probability distribution: the knowledge we have prior to incorporating the data y. P(x|y) is the posterior probability distribution. Bayes rule is a way to compute the posterior probability from the conditional and prior probabilities. P(y|x) describes how the state variable x causes the sensor measurements y.
25 Normalization. The denominator P(y) of Bayes rule does not depend on x (it will be the same for any x), so call it the normalizer η: P(x|y) = P(y|x) P(x) / P(y) = η P(y|x) P(x), with η = 1/P(y) = 1/Σ_x P(y|x) P(x). Calculus: ∀x: aux_{x|y} = P(y|x) P(x); η = 1/Σ_x aux_{x|y}; ∀x: P(x|y) = η aux_{x|y}.
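The normalization calculus maps directly to code. A minimal sketch; the two-state likelihood and prior values are made up for illustration.

```python
# eta-normalization: for all x, aux_{x|y} = P(y|x) P(x);
# eta = 1 / sum_x aux_{x|y}; P(x|y) = eta * aux_{x|y}.
# P(y) is never evaluated directly.

def bayes_posterior(likelihood, prior):
    """Posterior P(x|y) over all x, via the eta-normalization trick."""
    aux = {x: likelihood[x] * prior[x] for x in prior}  # aux_{x|y}
    eta = 1.0 / sum(aux.values())                       # eta = 1 / P(y)
    return {x: eta * v for x, v in aux.items()}         # P(x|y)

posterior = bayes_posterior(likelihood={"a": 0.8, "b": 0.1},
                            prior={"a": 0.3, "b": 0.7})
# The posterior sums to 1 by construction.
```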
26 Meaning in robotics. P(p|d) = η P(d|p) P(p). P(p|d) is the probability of the position p being true, given the sensor measurement d. P(d|p) is the probability of the sensor measurement being d, given an object at p. P(p) is the prior probability of the map.
27 Example - State Estimation. Suppose a robot with a sensor is in front of a door. It makes a measurement z. What is P(open|z)?
28 Causal vs. Diagnostic Reasoning. P(open|z) is diagnostic. P(z|open) is causal. Often, causal knowledge is easier to obtain (count frequencies!). Bayes rule allows us to transform causal into diagnostic knowledge: P(open|z) = P(z|open) P(open) / P(z).
29 Example. P(z|open) = 0.6, P(z|¬open) = 0.3, P(open) = P(¬open) = 0.5. P(open|z) = P(z|open) P(open) / (P(z|open) P(open) + P(z|¬open) P(¬open)) = (0.6 · 0.5) / (0.6 · 0.5 + 0.3 · 0.5) = 2/3 ≈ 0.67. The measurement z raises the probability that the door is open.
30 Combining Evidence. Suppose our robot obtains another observation z2. How can we integrate this new information? More generally, how can we estimate P(x|z1, ..., zn)? We want to estimate the probability of being at state x considering the history of sensor measurements.
31 Conditioning. Conditioning on a random variable Z gives: P(x|y) = ∫ P(x|y, z) P(z|y) dz.
32 Recursive Bayesian Updating. P(x|z1, ..., zn) = P(zn|x, z1, ..., zn-1) P(x|z1, ..., zn-1) / P(zn|z1, ..., zn-1). Markov assumption: zn is independent of z1, ..., zn-1 if we know x, so P(zn|x, z1, ..., zn-1) = P(zn|x), and therefore P(x|z1, ..., zn) = η P(zn|x) P(x|z1, ..., zn-1) = η_{1..n} [Π_{i=1..n} P(zi|x)] P(x).
33 Example: Second Measurement. P(z2|open) = 0.5, P(z2|¬open) = 0.6, and from the previous example P(open|z1) = 2/3. P(open|z2, z1) = P(z2|open) P(open|z1) / (P(z2|open) P(open|z1) + P(z2|¬open) P(¬open|z1)) = (0.5 · 2/3) / (0.5 · 2/3 + 0.6 · 1/3) = 5/8 = 0.625. The measurement z2 lowers the probability that the door is open.
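The two door-sensor updates can be reproduced in a few lines, using only the numbers on these slides; the `update` helper is our sketch, not from the slides.

```python
# Recursive Bayesian updating on the door example, with the slides'
# numbers: P(z1|open) = 0.6, P(z1|~open) = 0.3, uniform prior, then
# P(z2|open) = 0.5, P(z2|~open) = 0.6.

def update(belief, likelihood):
    """One measurement update: Bel'(x) = eta * P(z|x) * Bel(x)."""
    unnorm = {x: likelihood[x] * belief[x] for x in belief}
    eta = 1.0 / sum(unnorm.values())
    return {x: eta * v for x, v in unnorm.items()}

bel = {"open": 0.5, "closed": 0.5}
bel = update(bel, {"open": 0.6, "closed": 0.3})  # after z1: P(open) = 2/3
bel = update(bel, {"open": 0.5, "closed": 0.6})  # after z2: P(open) = 5/8
print(round(bel["open"], 3))  # 0.625
```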
34 Robot actions. The world is dynamic: actions carried out by the robot, actions carried out by other agents, or just time passing by change the world. How can we incorporate such actions? Actions are never carried out with absolute certainty. In contrast to measurements, actions generally increase the uncertainty.
35 Modeling Actions. To incorporate the outcome of an action u into the current belief, we use the conditional P(x|u, x'). This term specifies the probability density that executing u changes the state from x' to x.
36 Example: Closing the door
37 State Transitions. P(x|u, x') for the action u = "close door": P(closed|u, open) = 0.9, P(open|u, open) = 0.1, P(closed|u, closed) = 1, P(open|u, closed) = 0. If the door is open, the action "close door" succeeds in 90% of all cases; if it is closed, in 100%.
38 Integrating the Outcome of Actions. Continuous case: P(x|u) = ∫ P(x|u, x') P(x') dx'. Discrete case: P(x|u) = Σ_{x'} P(x|u, x') P(x').
39 Example: The Resulting Belief. P(closed|u) = Σ_{x'} P(closed|u, x') P(x') = P(closed|u, open) P(open) + P(closed|u, closed) P(closed) = 0.9 · 5/8 + 1 · 3/8 = 15/16. P(open|u) = Σ_{x'} P(open|u, x') P(x') = P(open|u, open) P(open) + P(open|u, closed) P(closed) = 0.1 · 5/8 + 0 · 3/8 = 1/16.
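Plugging the transition model from the previous slide and the prior belief Bel(open) = 5/8 into the discrete update reproduces these numbers (a sketch; the table encoding is ours):

```python
# Action update for u = "close door", with the slides' transition model
# P(closed|u,open) = 0.9, P(open|u,open) = 0.1,
# P(closed|u,closed) = 1.0, P(open|u,closed) = 0.0,
# and prior belief Bel(open) = 5/8, Bel(closed) = 3/8.

T = {  # T[x][xp] = P(x | u, x')
    "open":   {"open": 0.1, "closed": 0.0},
    "closed": {"open": 0.9, "closed": 1.0},
}
bel = {"open": 5 / 8, "closed": 3 / 8}

# Discrete total-probability step: P(x|u) = sum_{x'} P(x|u,x') Bel(x')
new_bel = {x: sum(T[x][xp] * bel[xp] for xp in bel) for x in T}
# new_bel: Bel(open) = 1/16, Bel(closed) = 15/16
```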
40 Bayes Filters. Given: a stream of observations z and action data u; a sensor model P(z|x); an action model P(x|u, x'); a prior probability of the system state P(x). Wanted: an estimate of the state X. The posterior of the state is called the belief: Bel(x_t) = P(x_t | u_1, z_1, ..., u_t, z_t).
41 Bayes Filters 2. Notation: z = observation, u = action, x = state.
Bel(x_t) = P(x_t | u_1, z_1, ..., u_t, z_t)
= η P(z_t | x_t, u_1, z_1, ..., u_t) P(x_t | u_1, z_1, ..., u_t) [Bayes]
= η P(z_t | x_t) P(x_t | u_1, z_1, ..., u_t) [Markov assumption]
= η P(z_t | x_t) ∫ P(x_t | u_1, z_1, ..., u_t, x_{t-1}) P(x_{t-1} | u_1, z_1, ..., u_t) dx_{t-1} [Total probability]
= η P(z_t | x_t) ∫ P(x_t | u_t, x_{t-1}) P(x_{t-1} | u_1, z_1, ..., z_{t-1}) dx_{t-1} [Markov assumption]
= η P(z_t | x_t) ∫ P(x_t | u_t, x_{t-1}) Bel(x_{t-1}) dx_{t-1}.
The integral predicts the state given the action and the previous belief; the factor P(z_t | x_t) corrects the estimate using the perceptual model.
42 Bayes filter Algorithm.
Algorithm Bayes_filter(Bel(x), d):
  η = 0
  if d is a perceptual data item z then
    for all x do
      Bel'(x) = P(z|x) Bel(x)
      η = η + Bel'(x)
    for all x do
      Bel'(x) = η⁻¹ Bel'(x)
  else if d is an action data item u then
    for all x do
      Bel'(x) = ∫ P(x|u, x') Bel(x') dx'
  return Bel'(x)
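A runnable sketch of this algorithm for a discrete state space, replayed on the door example from the previous slides; the tagged-tuple encoding of the data item `d` is our assumption.

```python
# Discrete Bayes filter. `d` is either ("z", sensor_model) with
# sensor_model[x] = P(z|x), or ("u", action_model) with
# action_model[x][xp] = P(x|u,x').

def bayes_filter(bel, d):
    kind, model = d
    if kind == "z":                  # perceptual data item: correction
        new = {x: model[x] * bel[x] for x in bel}
        eta = sum(new.values())
        return {x: v / eta for x, v in new.items()}
    elif kind == "u":                # action data item: prediction
        return {x: sum(model[x][xp] * bel[xp] for xp in bel) for x in bel}
    raise ValueError(kind)

# Door example: measurement z1, measurement z2, then action "close door".
bel = {"open": 0.5, "closed": 0.5}
bel = bayes_filter(bel, ("z", {"open": 0.6, "closed": 0.3}))
bel = bayes_filter(bel, ("z", {"open": 0.5, "closed": 0.6}))
bel = bayes_filter(bel, ("u", {"open":   {"open": 0.1, "closed": 0.0},
                               "closed": {"open": 0.9, "closed": 1.0}}))
# bel is now {open: 1/16, closed: 15/16}
```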
43 Bayes filters. Represent the state at time t by random variables x_t. At each point in time, a probability distribution over x_t, Bel(x_t), represents the uncertainty. Sequentially estimate the belief over the state space, conditioned on all sensor information; to make the computation tractable, assume the Markov hypothesis. Recursive Bayes filter updating reduces the uncertainty of being at a location (state) s.
44 Example. A person (or robot) with a door-detecting sensor moves in a corridor. Position and current belief are shown in black; the probability of making a door observation, in red.
45 Bayes Filters: implementations. Different implementations: particle filters; Kalman filters and EKF (Extended Kalman Filter); dynamic Bayes networks; hidden Markov models; Partially Observable Markov Decision Processes (POMDPs).
46 State Representations for Localization. Discrete representations: grid-based approaches. Continuous representations: Kalman tracking, particle filters. Topological representations.
47 State representation. Grid-based approaches: maintain a grid of discrete positions in memory and update them. Topological approaches: maintain a graph representation of the environment. Particle filters: represent the belief about the present position by a set of samples (particles), Bel(x_t) ≈ S_t = {<x_t^i, w_t^i>, i = 1..n}, where each x_t^i is a state, the w_t^i are non-negative weight factors that sum up to 1, and n is the number of particles. At each iteration the samples constitute an approximation of the posterior probability.
48 Particle filters: the idea. The vertical black lines are particles; each particle represents a possible location, at random along the hallway. Particle filters use a sampling procedure: at each state, different samples of the variable to estimate are taken and their probabilities computed; only particles with probabilities higher than a threshold are kept, and the weights are rearranged to sum up to 1.
49 Particle filter method. Particle filters represent beliefs (posterior density functions) by sets of samples, or particles: a set of random samples with associated weights w that sum up to 1. At each iteration the samples constitute an approximation of the posterior probability; new samples are obtained by augmenting each of the present samples with the new state, from which the weights are derived. A common problem is degeneracy: after a few iterations, all but one particle will have negligible weight. Resampling is a way to eliminate particles that have small weights.
50 The particle filter algorithm
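The algorithm can be sketched on the 1-D corridor example of the following slides; the door positions, the door-sensor model, and the motion noise below are illustrative assumptions, not from the slides.

```python
# Minimal particle filter for a 1-D hallway with a door sensor:
# (1) sample particles, (2) weight by sensor likelihood, (3) resample,
# (4) apply noisy motion. Doors, noise, and sensor model are made up.
import random

random.seed(0)
DOORS = [2.0, 5.0, 8.0]           # door positions along the hallway
N = 1000                          # number of particles

def p_door(x):
    """P(door observation | position x): high near a door (assumed model)."""
    return 0.9 if min(abs(x - d) for d in DOORS) < 0.5 else 0.1

# 1. Initialization: particles at random along the hallway, equal weights.
particles = [random.uniform(0.0, 10.0) for _ in range(N)]

# 2. Measurement update: weight each particle by the sensor likelihood.
weights = [p_door(x) for x in particles]
total = sum(weights)
weights = [w / total for w in weights]    # normalize so weights sum to 1

# 3. Resampling: draw N particles with probability proportional to weight
#    (this combats degeneracy by discarding low-weight particles).
particles = random.choices(particles, weights=weights, k=N)

# 4. Motion update: move each particle forward 1.0 with Gaussian noise.
particles = [x + 1.0 + random.gauss(0.0, 0.1) for x in particles]

# After one door observation, most particles cluster near door positions
# shifted by the motion.
```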
51 Example of particle filter. A door sensor says whether we are in front of a door. The vertical black lines are particles; each particle represents a possible location, at the start chosen at random along the hallway.
52 Sensor Information. The door sensor currently tells us we are in front of a door. Each particle is reweighted by the sensor likelihood (importance sampling): Bel(x) ← α P(z|x) Bel(x), with weights w ← α P(z|x).
53 Robot Motion. When the robot drives forward, the particles move by the same amount: Bel(x) ← ∫ P(x|u, x') Bel(x') dx'.
54 Sensor Information. Again: Bel(x) ← α P(z|x) Bel(x), with weights w ← α P(z|x).
55 Robot Motion. Bel(x) ← ∫ P(x|u, x') Bel(x') dx'. The next time we move, we can rule out most of the particles and be more confident about the real location.
56 Example - ultrasounds
57 Markov Decision Process (MDP). At each discrete time step, an agent must choose an action. An MDP is defined by ⟨S, A, T, R⟩: S, a finite set of states; A, a finite set of actions; T, the state transition function, from S × A to probability distributions over S, where T(s, a, s') is the probability of being in state s' when the agent was in state s and chose action a (actions have nondeterministic effects); R, the reward function, from S × A to the real numbers. The decision-making agent starts in some state and chooses an action according to its policy; this determines a reward and causes a stochastic transition to the next state. Optimization: find the policy that leads to the highest total reward over a finite horizon T. Because of the Markov property, the policy does not have to remember previous states.
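Finding such a policy over a finite horizon can be sketched with value iteration over the ⟨S, A, T, R⟩ definition above; the two-state toy problem below is an illustrative assumption, not from the slides.

```python
# Finite-horizon value iteration for a tiny MDP. Staying in s1 pays a
# reward, so the optimal policy steers toward s1. All numbers are made up.

S = ["s0", "s1"]
A = ["stay", "go"]
T = {  # T[(s, a)][sp] = probability of landing in state sp
    ("s0", "stay"): {"s0": 1.0, "s1": 0.0},
    ("s0", "go"):   {"s0": 0.2, "s1": 0.8},
    ("s1", "stay"): {"s0": 0.0, "s1": 1.0},
    ("s1", "go"):   {"s0": 0.9, "s1": 0.1},
}
R = {("s0", "stay"): 0.0, ("s0", "go"): -1.0,
     ("s1", "stay"): 2.0, ("s1", "go"): -1.0}

def value_iteration(horizon):
    """V_t(s) = max_a [ R(s,a) + sum_s' T(s,a,s') V_{t-1}(s') ]."""
    V = {s: 0.0 for s in S}
    policy = {}
    for _ in range(horizon):
        Q = {(s, a): R[(s, a)] + sum(T[(s, a)][sp] * V[sp] for sp in S)
             for s in S for a in A}
        V = {s: max(Q[(s, a)] for a in A) for s in S}
        policy = {s: max(A, key=lambda a: Q[(s, a)]) for s in S}
    return V, policy

V, policy = value_iteration(horizon=10)
# Greedy policy: go toward s1 from s0, then stay in s1.
```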
58 Generalizations of MDPs. Two dimensions of generalization: partial observability; decentralization.
59 POMDP. Real agents cannot directly observe the state: the state is estimated by a state estimator (SE). The agent must maintain a probability distribution over the set of possible states, the belief state b, based on a set of observations and observation probabilities. Optimally solving a generic discrete POMDP is computationally expensive. (Diagram: the world produces an observation; the SE updates the belief state b; the policy π maps b to an action.)
60 Successful Applications. Industrial outdoor navigation [Durrant-Whyte 95]; underwater vehicles [Leonard et al. 98]; coal mining [Singh 98]; missile guidance; indoor navigation [Simmons et al. 97]; RoboCup [Lenser et al. 2000]; museum tour guides [Burgard et al. 98; Thrun 99]; and many others.
61 Real application: exploring a mine. On 30 May 2003, the CMU robot Groundhog successfully explored and mapped a main corridor (308 meters) of the abandoned Mathies mine, Pennsylvania. Groundhog is designed to autonomously explore and acquire 3-D maps. It is built out of the front halves of two all-terrain vehicles (ATVs) with identical steering mechanisms on either end, and is equipped with tiltable laser range finders on either end.
62 The core of the Groundhog navigation system is a software package (SLAM) that acquires 2-D maps. Example: a 2-D map obtained from a dataset lacking any odometry information, and a 3-D reconstruction.
63 Advantages and pitfalls of SLAM. Advantages: can accommodate inaccurate models; can accommodate imperfect sensors; robust in real-world applications; no need for a perfect world model; represents continuous probability distributions. Pitfalls: computationally demanding (considers entire probability densities); false assumptions; approximate.
64 Workshops. Affordances; EmbodiedLanguage; MachineConsciousness; ProstheticRobotics; RoboticsExperiments; SwarmRobotics.
Sources: slides from Autonomous Robots (Siegwart and Nourbakhsh), Chapter 5, and Probabilistic Robotics (S. Thrun et al.).
More informationACCUPLACER MATH 0311 OR MATH 0120
The University of Teas at El Paso Tutoring and Learning Center ACCUPLACER MATH 0 OR MATH 00 http://www.academics.utep.edu/tlc MATH 0 OR MATH 00 Page Factoring Factoring Eercises 8 Factoring Answer to Eercises
More information2D Image Processing. Bayes filter implementation: Kalman filter
2D Image Processing Bayes filter implementation: Kalman filter Prof. Didier Stricker Kaiserlautern University http://ags.cs.uni-kl.de/ DFKI Deutsches Forschungszentrum für Künstliche Intelligenz http://av.dfki.de
More informationde Blanc, Peter Ontological Crises in Artificial Agents Value Systems. The Singularity Institute, San Francisco, CA, May 19.
MIRI MACHINE INTELLIGENCE RESEARCH INSTITUTE Ontological Crises in Artificial Agents Value Systems Peter de Blanc Machine Intelligence Research Institute Abstract Decision-theoretic agents predict and
More informationHidden Markov Models. AIMA Chapter 15, Sections 1 5. AIMA Chapter 15, Sections 1 5 1
Hidden Markov Models AIMA Chapter 15, Sections 1 5 AIMA Chapter 15, Sections 1 5 1 Consider a target tracking problem Time and uncertainty X t = set of unobservable state variables at time t e.g., Position
More informationAnnouncements. CS 188: Artificial Intelligence Fall Markov Models. Example: Markov Chain. Mini-Forward Algorithm. Example
CS 88: Artificial Intelligence Fall 29 Lecture 9: Hidden Markov Models /3/29 Announcements Written 3 is up! Due on /2 (i.e. under two weeks) Project 4 up very soon! Due on /9 (i.e. a little over two weeks)
More informationCS 188: Artificial Intelligence
CS 188: Artificial Intelligence Hidden Markov Models Instructor: Anca Dragan --- University of California, Berkeley [These slides were created by Dan Klein, Pieter Abbeel, and Anca. http://ai.berkeley.edu.]
More informationSTA 4273H: Statistical Machine Learning
STA 4273H: Statistical Machine Learning Russ Salakhutdinov Department of Statistics! rsalakhu@utstat.toronto.edu! http://www.utstat.utoronto.ca/~rsalakhu/ Sidney Smith Hall, Room 6002 Lecture 11 Project
More informationProbabilistic Fundamentals in Robotics. DAUIN Politecnico di Torino July 2010
Probabilistic Fundamentals in Robotics Gaussian Filters Basilio Bona DAUIN Politecnico di Torino July 2010 Course Outline Basic mathematical framework Probabilistic models of mobile robots Mobile robot
More informationCS 343: Artificial Intelligence
CS 343: Artificial Intelligence Particle Filters and Applications of HMMs Prof. Scott Niekum The University of Texas at Austin [These slides based on those of Dan Klein and Pieter Abbeel for CS188 Intro
More informationLinear Dynamical Systems
Linear Dynamical Systems Sargur N. srihari@cedar.buffalo.edu Machine Learning Course: http://www.cedar.buffalo.edu/~srihari/cse574/index.html Two Models Described by Same Graph Latent variables Observations
More informationPei Wang( 王培 ) Temple University, Philadelphia, USA
Pei Wang( 王培 ) Temple University, Philadelphia, USA Artificial General Intelligence (AGI): a small research community in AI that believes Intelligence is a general-purpose capability Intelligence should
More informationRecall from last time: Conditional probabilities. Lecture 2: Belief (Bayesian) networks. Bayes ball. Example (continued) Example: Inference problem
Recall from last time: Conditional probabilities Our probabilistic models will compute and manipulate conditional probabilities. Given two random variables X, Y, we denote by Lecture 2: Belief (Bayesian)
More informationIntroduction: MLE, MAP, Bayesian reasoning (28/8/13)
STA561: Probabilistic machine learning Introduction: MLE, MAP, Bayesian reasoning (28/8/13) Lecturer: Barbara Engelhardt Scribes: K. Ulrich, J. Subramanian, N. Raval, J. O Hollaren 1 Classifiers In this
More informationCS 188: Artificial Intelligence Fall 2011
CS 188: Artificial Intelligence Fall 2011 Lecture 12: Probability 10/4/2011 Dan Klein UC Berkeley 1 Today Probability Random Variables Joint and Marginal Distributions Conditional Distribution Product
More informationProbabilistic Graphical Models
Probabilistic Graphical Models Introduction. Basic Probability and Bayes Volkan Cevher, Matthias Seeger Ecole Polytechnique Fédérale de Lausanne 26/9/2011 (EPFL) Graphical Models 26/9/2011 1 / 28 Outline
More informationBayesian Methods in Artificial Intelligence
WDS'10 Proceedings of Contributed Papers, Part I, 25 30, 2010. ISBN 978-80-7378-139-2 MATFYZPRESS Bayesian Methods in Artificial Intelligence M. Kukačka Charles University, Faculty of Mathematics and Physics,
More informationLogic, Knowledge Representation and Bayesian Decision Theory
Logic, Knowledge Representation and Bayesian Decision Theory David Poole University of British Columbia Overview Knowledge representation, logic, decision theory. Belief networks Independent Choice Logic
More informationOutline. Spring It Introduction Representation. Markov Random Field. Conclusion. Conditional Independence Inference: Variable elimination
Probabilistic Graphical Models COMP 790-90 Seminar Spring 2011 The UNIVERSITY of NORTH CAROLINA at CHAPEL HILL Outline It Introduction ti Representation Bayesian network Conditional Independence Inference:
More informationConceptual Explanations: Radicals
Conceptual Eplanations: Radicals The concept of a radical (or root) is a familiar one, and was reviewed in the conceptual eplanation of logarithms in the previous chapter. In this chapter, we are going
More informationOur Status. We re done with Part I Search and Planning!
Probability [These slides were created by Dan Klein and Pieter Abbeel for CS188 Intro to AI at UC Berkeley. All CS188 materials are available at http://ai.berkeley.edu.] Our Status We re done with Part
More information9/12/17. Types of learning. Modeling data. Supervised learning: Classification. Supervised learning: Regression. Unsupervised learning: Clustering
Types of learning Modeling data Supervised: we know input and targets Goal is to learn a model that, given input data, accurately predicts target data Unsupervised: we know the input only and want to make
More informationCOMP219: Artificial Intelligence. Lecture 19: Logic for KR
COMP219: Artificial Intelligence Lecture 19: Logic for KR 1 Overview Last time Expert Systems and Ontologies Today Logic as a knowledge representation scheme Propositional Logic Syntax Semantics Proof
More informationCS 188: Artificial Intelligence Spring 2009
CS 188: Artificial Intelligence Spring 2009 Lecture 21: Hidden Markov Models 4/7/2009 John DeNero UC Berkeley Slides adapted from Dan Klein Announcements Written 3 deadline extended! Posted last Friday
More informationReinforcement Learning Wrap-up
Reinforcement Learning Wrap-up Slides courtesy of Dan Klein and Pieter Abbeel University of California, Berkeley [These slides were created by Dan Klein and Pieter Abbeel for CS188 Intro to AI at UC Berkeley.
More informationVision for Mobile Robot Navigation: A Survey
Vision for Mobile Robot Navigation: A Survey (February 2002) Guilherme N. DeSouza & Avinash C. Kak presentation by: Job Zondag 27 February 2009 Outline: Types of Navigation Absolute localization (Structured)
More informationBasic methods to solve equations
Roberto s Notes on Prerequisites for Calculus Chapter 1: Algebra Section 1 Basic methods to solve equations What you need to know already: How to factor an algebraic epression. What you can learn here:
More informationI. Induction, Probability and Confirmation: Introduction
I. Induction, Probability and Confirmation: Introduction 1. Basic Definitions and Distinctions Singular statements vs. universal statements Observational terms vs. theoretical terms Observational statement
More information27 : Distributed Monte Carlo Markov Chain. 1 Recap of MCMC and Naive Parallel Gibbs Sampling
10-708: Probabilistic Graphical Models 10-708, Spring 2014 27 : Distributed Monte Carlo Markov Chain Lecturer: Eric P. Xing Scribes: Pengtao Xie, Khoa Luu In this scribe, we are going to review the Parallel
More informationProbability, Markov models and HMMs. Vibhav Gogate The University of Texas at Dallas
Probability, Markov models and HMMs Vibhav Gogate The University of Texas at Dallas CS 6364 Many slides over the course adapted from either Dan Klein, Luke Zettlemoyer, Stuart Russell and Andrew Moore
More informationSLAM Techniques and Algorithms. Jack Collier. Canada. Recherche et développement pour la défense Canada. Defence Research and Development Canada
SLAM Techniques and Algorithms Jack Collier Defence Research and Development Canada Recherche et développement pour la défense Canada Canada Goals What will we learn Gain an appreciation for what SLAM
More informationAutonomous Mobile Robot Design
Autonomous Mobile Robot Design Topic: Particle Filter for Localization Dr. Kostas Alexis (CSE) These slides relied on the lectures from C. Stachniss, and the book Probabilistic Robotics from Thurn et al.
More informationProbability Map Building of Uncertain Dynamic Environments with Indistinguishable Obstacles
Probability Map Building of Uncertain Dynamic Environments with Indistinguishable Obstacles Myungsoo Jun and Raffaello D Andrea Sibley School of Mechanical and Aerospace Engineering Cornell University
More informationIntroduction. So, why did I even bother to write this?
Introduction This review was originally written for my Calculus I class, but it should be accessible to anyone needing a review in some basic algebra and trig topics. The review contains the occasional
More informationCourse Introduction. Probabilistic Modelling and Reasoning. Relationships between courses. Dealing with Uncertainty. Chris Williams.
Course Introduction Probabilistic Modelling and Reasoning Chris Williams School of Informatics, University of Edinburgh September 2008 Welcome Administration Handout Books Assignments Tutorials Course
More informationSensor Fusion: Particle Filter
Sensor Fusion: Particle Filter By: Gordana Stojceska stojcesk@in.tum.de Outline Motivation Applications Fundamentals Tracking People Advantages and disadvantages Summary June 05 JASS '05, St.Petersburg,
More informationBayesian Updating with Continuous Priors Class 13, Jeremy Orloff and Jonathan Bloom
Bayesian Updating with Continuous Priors Class 3, 8.05 Jeremy Orloff and Jonathan Bloom Learning Goals. Understand a parameterized family of distributions as representing a continuous range of hypotheses
More information1 Multiple Choice. PHIL110 Philosophy of Science. Exam May 10, Basic Concepts. 1.2 Inductivism. Name:
PHIL110 Philosophy of Science Exam May 10, 2016 Name: Directions: The following exam consists of 24 questions, for a total of 100 points with 0 bonus points. Read each question carefully (note: answers
More information2D Image Processing. Bayes filter implementation: Kalman filter
2D Image Processing Bayes filter implementation: Kalman filter Prof. Didier Stricker Dr. Gabriele Bleser Kaiserlautern University http://ags.cs.uni-kl.de/ DFKI Deutsches Forschungszentrum für Künstliche
More informationBayesian Networks BY: MOHAMAD ALSABBAGH
Bayesian Networks BY: MOHAMAD ALSABBAGH Outlines Introduction Bayes Rule Bayesian Networks (BN) Representation Size of a Bayesian Network Inference via BN BN Learning Dynamic BN Introduction Conditional
More informationCS 343: Artificial Intelligence
CS 343: Artificial Intelligence Probability Prof. Scott Niekum The University of Texas at Austin [These slides based on those of Dan Klein and Pieter Abbeel for CS188 Intro to AI at UC Berkeley. All CS188
More informationPlanning Under Uncertainty II
Planning Under Uncertainty II Intelligent Robotics 2014/15 Bruno Lacerda Announcement No class next Monday - 17/11/2014 2 Previous Lecture Approach to cope with uncertainty on outcome of actions Markov
More informationIntroduction to Bayesian Learning. Machine Learning Fall 2018
Introduction to Bayesian Learning Machine Learning Fall 2018 1 What we have seen so far What does it mean to learn? Mistake-driven learning Learning by counting (and bounding) number of mistakes PAC learnability
More informationCS 188: Artificial Intelligence Fall 2009
CS 188: Artificial Intelligence Fall 2009 Lecture 13: Probability 10/8/2009 Dan Klein UC Berkeley 1 Announcements Upcoming P3 Due 10/12 W2 Due 10/15 Midterm in evening of 10/22 Review sessions: Probability
More informationBayesian Networks Inference with Probabilistic Graphical Models
4190.408 2016-Spring Bayesian Networks Inference with Probabilistic Graphical Models Byoung-Tak Zhang intelligence Lab Seoul National University 4190.408 Artificial (2016-Spring) 1 Machine Learning? Learning
More information