Robots Autónomos. Depto. CCIA. 2. Bayesian Estimation and sensor models. Domingo Gallardo


1 Robots Autónomos 2. Bayesian Estimation and sensor models Domingo Gallardo Depto. CCIA

2 References Recursive state estimation: Thrun, chapter 2. Sensor models and robot perception: Thrun, chapter 6.

3 Contents Probability theory and robotics Sensor models Dynamic state estimation Bayesian filter

4 Autonomous Robotics We are interested in controlling autonomous mobile robots that continuously sense and manoeuvre in a semistructured environment with a defined goal. Interesting: robot guides, robot transporters, cars, vacuum cleaners. Not interesting: manipulators, humanoids, AGVs, game avatars.

5 Uncertainty in Robotics Sensors obtain imprecise readings of the environment. From them we want to infer (estimate) the most probable state of the world. We have to model error and imprecision.

6 Probability theory and Robotics Random variables and probability functions to model readings. Variance and covariance to model error. Bayes theorem to infer unknown data. Expectations and Bayesian filters to integrate the readings.

7 Remembering basic concepts Axioms of probability Random variables Probability density functions Joint distributions Bayes rule Expectation, variance, covariance and correlation

8 Axioms of probability Let A, B, ... be propositions about the outcomes of a given experiment. The number P(A) represents the probability that the proposition A is true after performing the experiment. It must satisfy the following three axioms: P(A) ≥ 0; P(True) = 1; and, if A and B are exclusive (there is no outcome in which both are true), then P(A ∨ B) = P(A) + P(B).

9 Frequencies It's usual to interpret probabilities as relative frequencies: we count the number of times that A is true and the number of executions of the experiment. Let's suppose that our robot tries to enter the room 30 times: A (Open) is true 25 times, B (Someone) 12 times, C (Chair) 2 times. Then P(A) = 25/30 = 0.83, P(B) = 12/30 = 0.4, P(C) = 2/30 = 0.07.
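
As a quick illustration (my addition, not from the slides), these relative-frequency estimates can be reproduced in a few lines of Python; the event counts are the hypothetical ones from the slide.

    # Relative-frequency estimates for the slide's hypothetical experiment:
    # the robot tries to enter the room 30 times.
    trials = 30
    counts = {"A (open)": 25, "B (someone)": 12, "C (chair)": 2}
    for event, n in counts.items():
        print(f"P({event}) = {n}/{trials} = {n / trials:.2f}")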

10 Discrete Random Variables X denotes a random variable. X can take on a countable number of values in {x_1, x_2, ..., x_n}. P(X = x_i), or P(x_i), is the probability that an outcome of the experiment produces the value x_i. It is a probability distribution if p(x_i) ≥ 0 and Σ_i p(x_i) = 1.

11 Continuous Random Variables X takes on values in the continuum. p(X = x), or p(x), is a probability density function (PDF) iff p(x) ≥ 0 for all x and ∫ p(x) dx = 1.

12 PDF and cumulative function The probability that an event gives an outcome between the values a and b is P(a < X ≤ b) = ∫_a^b p(x) dx. The cumulative distribution function is P(X ≤ x) = ∫_{-∞}^{x} p(x') dx'. Relation between the PDF and the cumulative distribution: p(x) = lim_{h→0} P(x - h/2 < X ≤ x + h/2) / h = ∂P(X ≤ x) / ∂x.

13 Understanding p(x) What's the gut-feel meaning of p(x)? If p(a) / p(b) = α, then when a value x is sampled from the distribution, you are α times as likely to find that x is very close to a than that x is very close to b.

14 Some Probability Density Functions Uniform: p(x) = 1/(b-a) if a < x < b, and 0 otherwise (x < a or x > b). Triangular: p(x) = 2(x-a) / [(b-a)(c-a)] if a ≤ x ≤ c, and 2(b-x) / [(b-a)(b-c)] if c < x ≤ b.

15 Exponential pdf p(x; λ) = λ e^{-λx} for x ≥ 0, and 0 for x < 0.

16 Normal (Gaussian) pdf p(x; μ, σ²) = N(μ, σ²) = (2πσ²)^{-1/2} exp{-(x-μ)² / (2σ²)}, where μ = mean and σ² = variance.
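
A minimal sketch (not part of the slides) of evaluating this density numerically and checking that it integrates to one; the grid limits and example parameters are arbitrary choices.

    import numpy as np

    def normal_pdf(x, mu, sigma2):
        """Univariate Gaussian density N(mu, sigma2) evaluated at x."""
        return (2 * np.pi * sigma2) ** -0.5 * np.exp(-0.5 * (x - mu) ** 2 / sigma2)

    x = np.linspace(-10.0, 10.0, 10001)   # grid wide enough to hold almost all the mass
    p = normal_pdf(x, mu=1.0, sigma2=2.0)
    print(np.sum(p) * (x[1] - x[0]))      # ~1.0, as a PDF should integrate to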

17 Joint distribution The joint distribution of two random variables X and Y is given by p(x, y) = P(X = x and Y = y). Conditional probability: the probability of X = x given that Y = y is denoted by p(x|y), with p(x|y) = p(x, y) / p(y). If X and Y are independent then p(x, y) = p(x) p(y) and p(x|y) = p(x).

18 Multivariate Normal distributions If x is a multi-dimensional (column) vector, x = (x_1, ..., x_n)^T, the density function of the multivariate Normal distribution is p(x; μ, Σ) = det(2πΣ)^{-1/2} exp{-(1/2) (x-μ)^T Σ^{-1} (x-μ)}. Means and covariances: μ = (μ_1, ..., μ_n)^T and Σ is the matrix with rows (σ_1², ..., σ_1n), ..., (σ_n1, ..., σ_n²).
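
A small sketch (my own, not from the slides) of this density with plain NumPy; the example mean and covariance below are made up for illustration.

    import numpy as np

    def mvn_pdf(x, mu, cov):
        """Multivariate Gaussian: det(2*pi*cov)^(-1/2) * exp(-0.5 * d^T cov^-1 d)."""
        d = x - mu
        norm = np.linalg.det(2 * np.pi * cov) ** -0.5
        return norm * np.exp(-0.5 * d @ np.linalg.solve(cov, d))

    mu = np.array([0.0, 1.0])                    # example mean vector
    cov = np.array([[1.0, 0.3], [0.3, 2.0]])     # example covariance matrix
    print(mvn_pdf(np.array([0.5, 0.5]), mu, cov))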

19 Expectations μ = E[X] is the expected value of random variable X: E[X] = ∫_{-∞}^{∞} x p(x) dx. It is the average value we'd see if we took a very large number of random samples of X; the first moment of the shape formed by the axes and the curve; and the best value to choose if you must guess an unknown value drawn from the distribution and you'll be fined the square of your error.

20 Expectation of a function E[f(X)] is the expected value of f(x) where x is drawn from X's distribution: E[f(X)] = ∫_{-∞}^{∞} f(x) p(x) dx. It is the average value we'd see if we took a very large number of random samples of f(X). Note that in general E[f(X)] ≠ f(E[X]). If f(x) is a linear function: E[aX + b] = a E[X] + b.

21 Variance The variance of a random variable gives an indication of how the values of a distribution are spread around the mean. We'll use it as an indication of the quality of an estimation. Var[X] = E[(X - μ)²] = ∫_{-∞}^{∞} (x - μ)² p(x) dx, and equivalently Var[X] = E[X²] - (E[X])². The standard deviation σ_x is defined as √Var[X].

22 Covariance and correlation (1) The covariance Cov[X, Y] of two random variables X and Y is defined as Cov[X, Y] = E[(X - μ_x)(Y - μ_y)] = E[XY] - E[X] E[Y]. The correlation coefficient ρ_xy is defined by the ratio ρ_xy = Cov[X, Y] / (σ_x σ_y).

23 Covariance and correlation (2) Some properties: |ρ_xy| ≤ 1 and |Cov[X, Y]| ≤ σ_x σ_y. Two variables are called uncorrelated if their covariance is 0, which is equivalent to: Cov[X, Y] = 0, ρ_xy = 0, E[XY] = E[X] E[Y]. Independence: if X and Y are independent then they are uncorrelated. The reverse is not in general true. However, if X and Y are normal random variables and their correlation coefficient is 0, then they are independent.

24 Covariance matrix Given n random variables {X_1, ..., X_n}, their covariance matrix is defined as the matrix Σ with rows (C_11, ..., C_1n), ..., (C_n1, ..., C_nn), where C_ij = Cov[X_i, X_j].
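
As an illustration only (not in the slides), the covariance matrix of a set of samples can be estimated with NumPy; the sample data below are synthetic.

    import numpy as np

    rng = np.random.default_rng(0)
    # Synthetic samples of two correlated variables X1, X2 (1000 draws each).
    x1 = rng.normal(0.0, 1.0, size=1000)
    x2 = 0.8 * x1 + rng.normal(0.0, 0.5, size=1000)

    samples = np.stack([x1, x2])   # one row per variable
    sigma = np.cov(samples)        # 2x2 matrix with entries C_ij = Cov[X_i, X_j]
    print(sigma)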

25 Example: measurement errors We measure an object of length η with n instruments of varying accuracies. The results of the measurements are n random variables x_i = η + ν_i, with E[ν_i] = 0 and Var[ν_i] = E[ν_i²] = σ_i², where the ν_i are the measurement errors, which we assume independent with zero mean. Our problem is to determine the unbiased, minimum variance, linear estimation of η. We wish to find n constants α_i such that the sum η̂ = α_1 x_1 + ... + α_n x_n is a random variable with mean E[η̂] = α_1 E[x_1] + ... + α_n E[x_n] = η and its variance V = α_1² σ_1² + ... + α_n² σ_n² is minimum, subject to the constraint α_1 + ... + α_n = 1.

26 Solution: measurement errors To solve this problem, we note that V = α_1² σ_1² + ... + α_n² σ_n² - λ(α_1 + ... + α_n - 1) for any λ (Lagrange multiplier). Hence V is minimum if ∂V/∂α_i = 2 α_i σ_i² - λ = 0, i.e. α_i = λ / (2 σ_i²). Inserting into the constraint and solving for λ, we obtain λ/2 = V = 1 / (1/σ_1² + ... + 1/σ_n²). Hence η̂ = (x_1/σ_1² + ... + x_n/σ_n²) / (1/σ_1² + ... + 1/σ_n²).
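
A short sketch (mine, not from the slides) of this inverse-variance weighting; the measurement values and variances are invented for the example.

    import numpy as np

    def fuse(measurements, variances):
        """Unbiased minimum-variance linear estimate: weights proportional to 1/sigma_i^2."""
        z = np.asarray(measurements, dtype=float)
        w = 1.0 / np.asarray(variances, dtype=float)
        eta_hat = np.sum(w * z) / np.sum(w)
        fused_variance = 1.0 / np.sum(w)    # V = 1 / (1/sigma_1^2 + ... + 1/sigma_n^2)
        return eta_hat, fused_variance

    # Three instruments measuring the same length, with different accuracies.
    print(fuse([10.2, 9.8, 10.5], [0.1, 0.4, 1.0]))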

27 Algebraic manipulation Based on three rules: the product rule, the sum rule and the Bayes theorem.

28 Product rule Also called the chain rule, it's derived from the definition of conditional probability: p(x, y|H) = p(x|y, H) p(y|H) = p(y|x, H) p(x|H). For generality we include the value H, which represents all other known or assumed quantities.

29 Sum rule Also called the total probability theorem, the sum rule follows from the definition of marginalisation: p(x|H) = ∫ p(x, y|H) dy = ∫ p(x|y, H) p(y|H) dy.

30 Bayes rule The Bayes theorem is derived from the product rule: p(x|y, H) = p(y|x, H) p(x|H) / p(y|H) = p(y|x, H) p(x|H) / ∫ p(y|x, H) p(x|H) dx. The denominator is a normalizing constant that does not depend on x and is such that p(x|y, H) integrates to one. We can then formulate the rule in this simpler way: p(x|y, H) = η p(y|x, H) p(x|H).
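
A tiny sketch (not from the slides) of this rule for a discrete variable x, where the normalizer η is simply one over the sum of the unnormalized products; the prior and likelihood values are made up.

    import numpy as np

    def bayes_rule(prior, likelihood):
        """Posterior over x: eta * p(y|x) * p(x), with eta chosen so the result sums to one."""
        unnormalized = likelihood * prior
        return unnormalized / unnormalized.sum()

    prior = np.array([0.5, 0.3, 0.2])        # p(x) over three states (example values)
    likelihood = np.array([0.6, 0.1, 0.3])   # p(y|x) for the observed y (example values)
    print(bayes_rule(prior, likelihood))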

31 Using the Bayes rule P(h|D) = P(D|h) P(h) / P(D). h is the unknown variable (hypothesis) that we want to infer; D is the data that we know; P(h) is the a priori probability of h; P(D|h) is the likelihood of obtaining the data D if the hypothesis h is satisfied; P(h|D) is the a posteriori probability of h.

32 Maximum a posteriori P(h|D) = P(D|h) P(h) / P(D). Generally we want the most probable hypothesis given the training data, the maximum a posteriori hypothesis h_MAP: h_MAP = argmax_{h ∈ H} P(h|D) = argmax_{h ∈ H} P(D|h) P(h) / P(D) = argmax_{h ∈ H} P(D|h) P(h).
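
For illustration (my addition): over a discrete hypothesis space the MAP hypothesis is just the argmax of prior times likelihood, so P(D) can be dropped; the numbers reuse the made-up values above.

    import numpy as np

    hypotheses = ["door-open", "door-closed", "wall"]   # example hypothesis space
    prior = np.array([0.5, 0.3, 0.2])                   # P(h)
    likelihood = np.array([0.6, 0.1, 0.3])              # P(D|h)

    h_map = hypotheses[np.argmax(likelihood * prior)]   # argmax_h P(D|h) P(h)
    print(h_map)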

33 Sensors for Mobile Robots Contact sensors: bumpers. Internal sensors: accelerometers (spring-mounted masses), gyroscopes (spinning mass, laser light), compasses, inclinometers (earth magnetic field, gravity). Proximity sensors: sonar (time of flight), radar (phase and frequency), laser range-finders (triangulation, time of flight, phase), infrared (intensity). Visual sensors: cameras. Satellite-based sensors: GPS.

34 Proximity sensors The central task is to determine P(z|x), i.e., the probability of a measurement z given that the robot is at position x. This is called the likelihood of the measurement.

35 Beam-based sensor model A scan z consists of K measurements z_1, ..., z_K. Individual measurements are independent given the robot position, so the scan likelihood factorizes into the product of the per-beam likelihoods.
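
A minimal sketch of that factorized likelihood (my own, not the book's implementation), assuming some per-beam model is available; here a simple Gaussian noise model is plugged in just to make it runnable.

    import numpy as np

    def beam_likelihood(z_k, z_expected, sigma=0.2):
        # Placeholder per-beam model: Gaussian noise around the expected range.
        return (2 * np.pi * sigma**2) ** -0.5 * np.exp(-0.5 * (z_k - z_expected) ** 2 / sigma**2)

    def scan_likelihood(scan, expected_ranges):
        """p(z|x) = product over the K beams of p(z_k|x), assuming independent beams."""
        p = 1.0
        for z_k, z_exp in zip(scan, expected_ranges):
            p *= beam_likelihood(z_k, z_exp)
        return p

    print(scan_likelihood([2.1, 3.0, 1.2], [2.0, 3.1, 1.25]))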

36 Beam-based sensor model

37 Typical errors of range measurements

38 Proximity measurement A measurement can be caused by: a known obstacle; cross-talk; an unexpected obstacle (people, furniture, ...); or missing all obstacles (total reflection, glass, ...). Noise is due to uncertainty: in measuring the distance to the known obstacle; in the position of known obstacles; in the position of additional obstacles; and in whether an obstacle is missed.

39 Beam-based proximity model Two components: measurement noise around the expected distance z_exp, and unexpected obstacles at ranges shorter than z_exp, both defined over the measurement range [0, z_max] (figure).

40 Beam-based proximity model

41 Resulting mixture density How can we determine the model parameters?
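
Combining the measurement causes listed on slide 38 gives a per-beam mixture density. The sketch below is my own illustration in the spirit of the beam model from Thrun, chapter 6; the weights and parameters are invented, not learned values.

    import numpy as np

    def beam_mixture(z, z_exp, z_max, sigma=0.1, lam=0.5, w=(0.7, 0.1, 0.1, 0.1)):
        """Mixture for one beam: measurement noise + unexpected obstacles
        + max-range readings + random measurements."""
        w_hit, w_short, w_max, w_rand = w
        p_hit = (2 * np.pi * sigma**2) ** -0.5 * np.exp(-0.5 * (z - z_exp) ** 2 / sigma**2)
        p_short = lam * np.exp(-lam * z) if z <= z_exp else 0.0   # unexpected obstacle
        p_max = 1.0 if z >= z_max else 0.0                        # missed all obstacles
        p_rand = 1.0 / z_max if 0 <= z < z_max else 0.0           # random measurement
        return w_hit * p_hit + w_short * p_short + w_max * p_max + w_rand * p_rand

    print(beam_mixture(z=2.9, z_exp=3.0, z_max=10.0))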

42 Raw sensor data Measured distances for an expected distance of 300 cm, for sonar and laser (figures).

43 Approximation Maximize the log likelihood of the data. Search the space of n-1 parameters (hill climbing, gradient descent, genetic algorithms). Deterministically compute the n-th parameter to satisfy the normalization constraint.

44 Approximation results Laser and sonar, at 300 cm and 400 cm (figures).

45 Example A scan z and its likelihood P(z|x, m) (figure).

46 Landmarks Active beacons (e.g., radio, GPS). Passive landmarks (e.g., visual, retro-reflective). The standard approach is triangulation. The sensor provides distance, or bearing, or distance and bearing.

47 Distance and bearing

48 Computing the likelihood of a landmark measurement

49 Data association problem Assume that landmarks cannot be uniquely identified. Data association problem: the observed features must be matched with the previously registered landmarks. Methods are based on the likelihood of the feature readings given the previously registered landmarks.

50 Summary of sensor models Explicitly modeling uncertainty in sensing is key to robustness. In many cases, good models can be found by the following approach: 1. Determine a parametric model of the noise-free measurement. 2. Analyze the sources of noise. 3. Add adequate noise to the parameters (eventually mix in densities for noise). 4. Learn (and verify) the parameters by fitting the model to data. 5. The likelihood of a measurement is given by probabilistically comparing the actual with the expected measurement. This holds for motion models as well. It is extremely important to be aware of the underlying assumptions!

51 Bayesian Filters One of the most important techniques in probabilistic robotics is the use of Bayesian filters to estimate the state of the robot by integrating the likelihood of the sensor perceptions and updating it with the robot action model.

52 An example: Markov Localization (1) (figure: panels (a)-(d) showing the belief bel(x) and the measurement likelihood p(z|x) over the position x)

53 An example: Markov Localization (2) (figure: panels (d)-(e) showing the belief bel(x) and the measurement likelihood p(z|x) over the position x)

54 Dynamic State Estimation Introduction State estimation with a single observation Application: laser scan likelihood Probabilistic state dynamics Integrating everything: the Bayes filter

55 Introduction The problem is to track the sequence of states {x_1, x_2, ..., x_t} of a dynamical system. The states cannot be measured directly, but only by means of observations: we obtain a set of t observations {z_1, z_2, ..., z_t} corresponding to the t states. The initial state has a distribution p(x_1). Examples: robot localization, missile guidance, tracking of objects in video sequences.

56 State Estimation with Single Observation Given the observation z = {z_1, z_2, ..., z_n}, we want to obtain a posterior estimation (named belief) of the current state x given that observation: Bel(x) ≡ p(x|z_1, ..., z_n).

57 Example z_1 = texture descriptor, z_2 = number of edges, ... x ∈ {(door-open, dist), (door-closed, dist), (wall, dist)}

58 State estimation with single observation p(x|z_1, ..., z_n) = p(z_n|x, z_1, ..., z_{n-1}) p(x|z_1, ..., z_{n-1}) / p(z_n|z_1, ..., z_{n-1}). Independence assumption: each z_i is independent of {z_1, ..., z_{i-1}, z_{i+1}, ..., z_n} given the state x.

59 State estimation with single observation p(x|z_1, ..., z_n) = p(z_n|x, z_1, ..., z_{n-1}) p(x|z_1, ..., z_{n-1}) / p(z_n|z_1, ..., z_{n-1}) = η p(z_n|x, z_1, ..., z_{n-1}) p(x|z_1, ..., z_{n-1}) = η_{1...n} [∏_{i=1..n} p(z_i|x)] p(x).

60 A harder version of the problem Given an occupancy grid m and a laser reading z, compute the likelihood P(z|x, m) of having obtained the reading from the position x.

61 Laser Scan Likelihood [S. Thrun, Artificial Intelligence, 2001]

62 Laser Scan Likelihood

63 Probabilistic State Dynamics The current state depends on the previous state, p(x_t|x_{t-1}), and only on it (Markov assumption): p(x_t|x_{t-1}, x_{t-2}, ..., x_1) = p(x_t|x_{t-1}). Example: p(x_t|x_{t-1}) ∝ e^{-(1/2)(x_t - x_{t-1} - 1)²} [M. Isard, 1998]
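
A small sketch (my own) of the example dynamics above: the next state is the previous one advanced by 1 plus unit-variance Gaussian noise, which is one way to read p(x_t|x_{t-1}) ∝ e^{-(x_t - x_{t-1} - 1)²/2}.

    import numpy as np

    rng = np.random.default_rng(0)

    def sample_next_state(x_prev):
        """Draw x_t ~ p(x_t | x_{t-1}) = N(x_{t-1} + 1, 1)."""
        return x_prev + 1.0 + rng.normal(0.0, 1.0)

    x = 0.0
    trajectory = [x]
    for _ in range(5):               # simulate a short state sequence
        x = sample_next_state(x)
        trajectory.append(x)
    print(trajectory)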

64 Modeling Actions The actions u change the world state according to a distribution p(x_t|x_{t-1}, u), where x_{t-1} is the state of the world before executing the action u that leads to the state x_t. Robot motion model (figure).

65 Integrating everything: Bayes Filter We obtain a recursive formulation: Bel(x_t) = η p(z_t|x_t) p(x_t|u_{t-1}, ..., z_1). By total probability, this equals η p(z_t|x_t) ∫ p(x_t|x_{t-1}, u_{t-1}, ..., z_1) p(x_{t-1}|u_{t-1}, ..., z_1) dx_{t-1}; by the Markov assumption, η p(z_t|x_t) ∫ p(x_t|x_{t-1}, u_{t-1}) p(x_{t-1}|u_{t-1}, ..., z_1) dx_{t-1} = η p(z_t|x_t) ∫ p(x_t|x_{t-1}, u_{t-1}) Bel(x_{t-1}) dx_{t-1}.

66 The Bayes Filter Given: a robot action u_t, a sensor reading z_t, and an a priori robot state belief estimate bel(x_{t-1}). Computes: the a posteriori robot state belief estimate bel(x_t), which incorporates u_t and z_t into bel(x_{t-1}).

67 The Bayes Filter Algorithm Bel(x_t) = η p(z_t|x_t) ∫ p(x_t|x_{t-1}, u_{t-1}) Bel(x_{t-1}) dx_{t-1}. Algorithm Bayes_Filter(bel(x_{t-1}), z_t, u_t): estimate the new state from the previous one and the action: 1. For all x_t ∈ X: 2. bel'(x_t) = Σ_{x_{t-1}} bel(x_{t-1}) P(x_t|u_t, x_{t-1}). Incorporate the observation in the belief: 3. bel(x_t) = η p(z_t|x_t) bel'(x_t). 4. Return bel(x_t).
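
A compact sketch of this algorithm for a discrete state space (my illustration, with an invented three-state world): bel is a vector over states, motion_model[u] is the transition matrix P(x_t|u, x_{t-1}), and sensor_model[z] holds p(z|x_t).

    import numpy as np

    def bayes_filter(bel_prev, u, z, motion_model, sensor_model):
        """One Bayes filter step over a discrete state space."""
        bel_bar = motion_model[u] @ bel_prev    # prediction: sum over x_{t-1}
        bel = sensor_model[z] * bel_bar         # correction: weight by p(z_t|x_t)
        return bel / bel.sum()                  # eta normalizes the belief

    # Toy example: three states, one action "forward", one observation "door".
    motion_model = {"forward": np.array([[0.1, 0.0, 0.0],
                                         [0.8, 0.2, 0.0],
                                         [0.1, 0.8, 1.0]])}
    sensor_model = {"door": np.array([0.6, 0.3, 0.1])}
    bel = np.array([1/3, 1/3, 1/3])
    print(bayes_filter(bel, "forward", "door", motion_model, sensor_model))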

68 Bayes Filter Techniques Bel(x_t) = η p(z_t|x_t) ∫ p(x_t|x_{t-1}, u_{t-1}) Bel(x_{t-1}) dx_{t-1}. Techniques: Kalman filters, particle filters, hidden Markov models, dynamic Bayes networks, partially observable Markov decision processes.
