Mobile Robot Localization


The Problem of Robot Localization

Given a map of the environment, how can a robot determine its pose (planar coordinates + orientation)?

Two sources of uncertainty:
- observations depend probabilistically on the robot pose
- pose changes depend probabilistically on the robot actions

Example: Uncertainty of the robot position after travelling along the red path (the shaded area indicates the probability distribution).

Slides on Robot Localization are partly adapted from
Sebastian Thrun, http://www-2.cs.cmu.edu/~thrun/papers/thrun.probrob.html
Michael Beetz, http://wwwradig.in.tum.de/vorlesungen/as.ss03/folien_links.html

Formalization of the Localization Problem

m        model of the environment (e.g. a map)
s_t      pose at time t
o_t      observation at time t
a_t      action at time t
d_0...t = o_0, a_0, o_1, a_1, ..., o_t, a_t   observation and action data up to time t

Task: Estimate p(s_t | d_0...t, m) = b_t(s_t), the "robot's belief state at time t".

Markov properties:
- The current observation depends only on the current pose.
- The next pose depends only on the current pose and the current action.
- "The future is independent of the past given the current state."

The Markov assumption implies a static environment! (It is violated, for example, when robot actions change the environment.)

Structure of Probabilistic Localization

[Figure: dynamic Bayesian network with the map m, observations o_1, o_2, o_3, ..., o_t, poses s_1, s_2, s_3, ..., s_t, and actions a_1, a_2, a_3, ..., a_t-1. Each observation o_t depends on the pose s_t and the map m; each pose s_t depends on the previous pose s_t-1 and the action a_t-1.]

Recursive Markov Localization

b_t(s_t) = p(s_t | o_0, a_0, o_1, a_1, ..., a_t-1, o_t, m)

  [Bayes]        = α_t p(o_t | o_0, a_0, ..., a_t-1, s_t, m) p(s_t | o_0, a_0, ..., a_t-1, m)
  [Markov]       = α_t p(o_t | s_t, m) p(s_t | o_0, a_0, ..., a_t-1, m)
  [Total prob.]  = α_t p(o_t | s_t, m) ∫ p(s_t | o_0, a_0, ..., a_t-1, s_t-1, m) p(s_t-1 | o_0, a_0, ..., a_t-1, m) ds_t-1
  [Markov]       = α_t p(o_t | s_t, m) ∫ p(s_t | a_t-1, s_t-1, m) p(s_t-1 | o_0, a_0, ..., a_t-1, m) ds_t-1
                 = α_t p(o_t | s_t, m) ∫ p(s_t | a_t-1, s_t-1, m) b_t-1(s_t-1) ds_t-1

where α_t is a normalizing factor. Hence:

b_t(s_t) = α_t p(o_t | s_t, m) ∫ p(s_t | a_t-1, s_t-1, m) b_t-1(s_t-1) ds_t-1

(A grid-based sketch of this update follows below.)

p(o_t | s_t, m)           probabilistic perceptual model, often time-invariant: p(o | s, m)
p(s_t | a_t-1, s_t-1, m)  probabilistic motion model, often time-invariant: p(s' | a, s, m)

Both models must be specified for a specific robot and a specific environment.

Probabilistic Sensor Model for Laser Range Finder

[Figure: probability p(o | s) plotted over the measured distance o (0-500 cm), with a pronounced peak at the expected distance.] Sensor properties can often be approximated by Gaussian mixture densities.

Adapted from: Sebastian Thrun, Probabilistic Algorithms in Robotics, http://www-2.cs.cmu.edu/~thrun/papers/thrun.probrob.html
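To make the recursion concrete, here is a minimal sketch of one belief update on a discrete pose grid (as used in grid-based Markov localization below), with the integral replaced by a sum over grid cells. The grid size, the motion matrix, and the door-sensor likelihoods are illustrative assumptions, not part of the original slides.

```python
import numpy as np

def markov_localization_update(belief, action, observation, motion_model, sensor_model):
    """One recursive belief update b_t(s) on a discrete pose grid.

    belief[i]                prior belief b_{t-1} over grid poses
    motion_model[a][i, j]    p(s_t = j | a_{t-1} = a, s_{t-1} = i)
    sensor_model[o][j]       p(o_t = o | s_t = j, m)
    """
    # Prediction: integrate (here: sum) the motion model over the old belief
    predicted = belief @ motion_model[action]
    # Correction: weight by the perceptual model and normalize (factor alpha_t)
    posterior = sensor_model[observation] * predicted
    return posterior / posterior.sum()

# Toy example: 5 grid cells, the robot tends to move one cell to the right
motion = {"right": 0.8 * np.eye(5, k=1) + 0.2 * np.eye(5)}
motion["right"][-1, -1] = 1.0                            # stay put at the wall
sensor = {"door": np.array([0.6, 0.1, 0.6, 0.1, 0.1])}   # doors at cells 0 and 2
b = np.full(5, 0.2)                                      # uniform initial belief
b = markov_localization_update(b, "right", "door", motion, sensor)
print(b)
```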

Probabilistic Pose Model

Possible poses are represented by a probability distribution ("belief") which is modified during the localization process.

[Figure: initial distribution, intermediate distribution, final distribution.]

Grid-based Markov Localization (Example 1)

[Figure: robot path with 4 reference poses; initially the belief is equally distributed, followed by the distribution of the belief at the second, third, and fourth pose.]

Ambiguous localizations due to a repetitive and symmetric environment are sharpened and disambiguated after several observations.

Grid-based Markov Localization (Example 2)

[Figure: map and robot path; maximum position probabilities after 6 steps and after 12 steps.] [Burgard et al. 96]

Approximating the Probabilistic Update by Monte Carlo Localization (MCL)

"Importance Sampling", "Particle Filters", "Condensation Algorithm": different names for a method to approximate a probability density by discrete samples (see slide "Sampling Methods").

Approximate implementation of the belief update equation

b_t(s_t) = α_t p(o_t | s_t, m) ∫ p(s_t | a_t-1, s_t-1, m) b_t-1(s_t-1) ds_t-1

1. Draw a sample s_t-1 from the current belief b_t-1(s_t-1) with a likelihood given by the importance factors of the belief b_t-1(s_t-1).
2. For this s_t-1, guess a successor pose s_t according to the distribution p(s_t | a_t-1, s_t-1, m).
3. Compute a preliminary importance factor p(o_t | s_t, m) for this sample and assign it to the new sample representing b_t(s_t).
4. Repeat Steps 1 through 3 M times. Finally, normalize the importance factors in the new sample set b_t(s_t) so that they add up to 1.

MCL is very effective and can give good results with as few as 100 samples. A minimal sketch of this loop follows below.
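The following sketch implements the four MCL steps for a 1-D pose, assuming a simple additive Gaussian motion model and a hypothetical range-sensor likelihood (a wall at x = 5); it illustrates the sampling loop, not a full MCL implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def mcl_update(particles, weights, action, observation, n_samples=100):
    """One MCL update: resample, propagate, reweight, normalize.

    particles, weights : sample-based representation of b_{t-1}(s_{t-1})
    action, observation: a_{t-1} and o_t
    """
    new_particles = np.empty(n_samples)
    new_weights = np.empty(n_samples)
    for i in range(n_samples):
        # 1. Draw s_{t-1} according to the importance factors of b_{t-1}
        s_prev = rng.choice(particles, p=weights)
        # 2. Guess a successor pose from p(s_t | a_{t-1}, s_{t-1})
        #    (assumed here: additive action with Gaussian noise)
        s_new = s_prev + action + rng.normal(0.0, 0.1)
        # 3. Preliminary importance factor p(o_t | s_t, m)
        #    (assumed here: Gaussian range likelihood for a wall at x = 5)
        expected_range = 5.0 - s_new
        w = np.exp(-0.5 * ((observation - expected_range) / 0.2) ** 2)
        new_particles[i], new_weights[i] = s_new, w
    # 4. Normalize the importance factors so that they add up to 1
    return new_particles, new_weights / new_weights.sum()

# Usage: start from a uniform belief over [0, 5] with 100 particles
particles = rng.uniform(0.0, 5.0, 100)
weights = np.full(100, 1.0 / 100)
particles, weights = mcl_update(particles, weights, action=1.0, observation=2.5)
```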

Simultaneous Localization and Mapping (SLAM)

Typical problem for a mobile robot in an unknown environment:
- learn the environment ("mapping")
- keep track of position and orientation ("localization")

"Chicken-and-egg" problem:
- the robot needs knowledge of the environment in order to interpret sensor readings for localization
- the robot needs pose knowledge in order to interpret sensor readings for mapping

Make the environment a multidimensional probabilistic variable!

Example: The model of the environment is a probabilistic occupancy grid, where each cell (x_i, y_j) is empty with probability e_ij and occupied with probability 1 - e_ij.

Bayes Filter for SLAM

Extend the localization approach to simultaneous mapping:

b_t(s_t) = α_t p(o_t | s_t, m) ∫ p(s_t | a_t-1, s_t-1, m) b_t-1(s_t-1) ds_t-1

b_t(s_t, m_t) = α_t p(o_t | s_t, m_t) ∫∫ p(s_t, m_t | a_t-1, s_t-1, m_t-1) b_t-1(s_t-1, m_t-1) ds_t-1 dm_t-1

Assuming a time-invariant map and map-independent motion:

b_t(s_t, m) = α_t p(o_t | s_t, m) ∫ p(s_t | a_t-1, s_t-1) b_t-1(s_t-1, m) ds_t-1

b_t(s_t, m) is (N+3)-dimensional, with N variables for m (N >> 1000) and 3 for s_t => complexity problem.

Important approaches to cope with this complexity:
- Kalman filtering (Gaussian probabilities and linear updating)
- Estimating only the mode of the posterior, argmax_m b(m)
- Treating the robot path as "missing variables" in Expectation Maximization

Kalman Filter for SLAM Problems (1)

Basic Kalman Filter assumptions:
1. The next-state function is linear with added Gaussian noise.
2. The perceptual model is linear with added Gaussian noise.
3. The initial uncertainty is Gaussian.

Ad 1) The next state in SLAM is the pose s_t and the model m. m is assumed constant; s_t is non-linear in general, but approximately linear in a first-degree Taylor series expansion ("Extended Kalman Filtering"; see the linearization sketch below).

Let x_t be the state variable (s_t, m) and ε_control Gaussian noise with covariance Σ_control; then

x_t = A x_t-1 + B a_t-1 + ε_control   =>   p(x_t | a_t-1, x_t-1)

Ad 2) Sensor measurements are usually nonlinear in robotics, with non-white Gaussian noise. Approximation by a first-degree Taylor series and ε_measure Gaussian noise with covariance Σ_measure:

o_t = C x_t + ε_measure   =>   p(o_t | x_t)

Kalman Filter for SLAM Problems (2)

The Bayes Filter equation

b_t(s_t, m) = α_t p(o_t | s_t, m) ∫ p(s_t | a_t-1, s_t-1, m) b_t-1(s_t-1, m) ds_t-1

can be rewritten using the standard Kalman Filter equations (predicted quantities are marked with a bar):

µ̄_t = µ_t-1 + B a_t-1
Σ̄_t = Σ_t-1 + Σ_control
K_t = Σ̄_t C^T (C Σ̄_t C^T + Σ_measure)^-1
µ_t = µ̄_t + K_t (o_t - C µ̄_t)
Σ_t = (I - K_t C) Σ̄_t

Compare with the slides on Kalman Filtering in "Bildverarbeitung".

- Kalman Filtering estimates the full posterior distribution over all poses (not only the maximum).
- Guaranteed convergence to the true map and robot pose.
- Gaussian sensor noise is a bad model for correspondence problems.
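To illustrate the first-degree Taylor approximation behind Extended Kalman Filtering, the sketch below linearizes a simple unicycle-style motion model around the current state estimate by computing its Jacobian numerically. The motion model and the use of numerical (rather than analytic) differentiation are illustrative assumptions, not the formulation used on the slides.

```python
import numpy as np

def motion_model(x, a, dt=1.0):
    """Example nonlinear next-state function: pose (x, y, theta)
    driven by action a = (forward velocity v, turn rate w)."""
    px, py, theta = x
    v, w = a
    return np.array([px + v * dt * np.cos(theta),
                     py + v * dt * np.sin(theta),
                     theta + w * dt])

def numerical_jacobian(f, x, a, eps=1e-6):
    """First-degree Taylor approximation: A[i, j] = d f_i / d x_j at (x, a)."""
    n = len(x)
    A = np.zeros((n, n))
    fx = f(x, a)
    for j in range(n):
        dx = np.zeros(n)
        dx[j] = eps
        A[:, j] = (f(x + dx, a) - fx) / eps
    return A

# Linearize around the current estimate; A plays the role of the linear state matrix
x_est = np.array([1.0, 2.0, np.pi / 4])
a = np.array([0.5, 0.1])
A = numerical_jacobian(motion_model, x_est, a)
print(A)
```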

Remember: Kalman Filters (1)

A Kalman filter provides an iterative scheme for (i) predicting an event and (ii) incorporating new measurements.

Assume a linear system with observations depending linearly on the system state, and white Gaussian noise disturbing the system evolution and the observations:

x_k+1 = A_k x_k + w_k
z_k   = H_k x_k + v_k

What is the best estimate of x_k based on the previous estimate x_k-1 and the observation z_k?

x_k   quantity of interest ("state") at time k
A_k   model for the evolution of x_k
w_k   zero-mean Gaussian noise with covariance Q_k
z_k   observations at time k
H_k   relation of the observations to the state
v_k   zero-mean Gaussian noise with covariance R_k

Often, A_k, Q_k, H_k and R_k are constant.

Remember: Kalman Filters (2)

(A priori estimates are marked with a bar, a posteriori estimates with a hat.)

The best a priori estimate of x_k before observing z_k is

x̄_k = A_k-1 x̂_k-1

After observing z_k, the a priori estimate is updated to

x̂_k = x̄_k + K_k (z_k - H_k x̄_k)

K_k is the Kalman gain matrix. K_k is determined to minimize the a posteriori covariance P_k of the error x_k - x̂_k. The minimizing K_k is

K_k = P̄_k H_k^T (H_k P̄_k H_k^T + R_k)^-1

with P̄_k = A_k P_k-1 A_k^T + Q_k-1 and P_k = (I - K_k H_k) P̄_k,

where P̄_k is the covariance of the error x_k - x̄_k before the observation of z_k.

Iterative order of computations (initialization: x̂_0, P_0):

Prediction:
(1) x̄_k = A_k-1 x̂_k-1
(2) P̄_k = A_k P_k-1 A_k^T + Q_k-1

Correction:
(1) K_k = P̄_k H_k^T (H_k P̄_k H_k^T + R_k)^-1
(2) x̂_k = x̄_k + K_k (z_k - H_k x̄_k)
(3) P_k = (I - K_k H_k) P̄_k

then k := k+1.
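A minimal sketch of this predict/correct cycle for a constant-velocity 1-D tracking example; the matrices A, H, Q, R and the measurement sequence are illustrative assumptions, not taken from the slides.

```python
import numpy as np

def kalman_step(x_est, P, z, A, H, Q, R):
    """One iteration of the scheme above: predict, then correct with z_k."""
    # Prediction
    x_pred = A @ x_est                         # x̄_k = A x̂_{k-1}
    P_pred = A @ P @ A.T + Q                   # P̄_k = A P_{k-1} Aᵀ + Q
    # Correction
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)        # Kalman gain K_k
    x_est = x_pred + K @ (z - H @ x_pred)      # x̂_k
    P = (np.eye(len(x_est)) - K @ H) @ P_pred  # P_k
    return x_est, P

# Constant-velocity model: state = (position, velocity), we observe position only
A = np.array([[1.0, 1.0], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
Q = 0.01 * np.eye(2)
R = np.array([[0.25]])

x_est, P = np.zeros(2), np.eye(2)              # initialization x̂_0, P_0
for z in [np.array([1.1]), np.array([2.0]), np.array([2.9])]:
    x_est, P = kalman_step(x_est, P, z, A, H, Q, R)
print(x_est)
```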

Example: Underwater Sonic Mapping

From: S. Williams, G. Dissanayake, and H.F. Durrant-Whyte. Towards terrain-aided navigation for underwater robotics. Advanced Robotics, 15(5), 2001.

Kalman Filter map and pose estimation. The figure shows:
- the estimated path of the underwater vehicle, with ellipses indicating position uncertainty
- 14 landmarks obtained by sonar measurements, with ellipses indicating uncertainty; 5 are artificial landmarks, the rest are other reflective objects
- additional dots for weak landmark hypotheses

Solving the Correspondence Problem

Map obtained from raw sensory data of a cyclic environment (the large hall of a museum) based on the robot's odometry: correspondence problem!

Map obtained by the EM algorithm, i.e. iterative maximization of both the robot path and the model: a non-incremental procedure!

Mapping with Expectation Maximization

Principle of the EM mapping algorithm:

Repeat until no more changes:
- E-step: Estimate the robot poses for the given map
- M-step: Calculate the most likely map given the poses

The algorithm computes the maximum of the expectation of the joint log likelihood of the data d^t = {a_0, o_0, ..., a_t, o_t} and the robot's path s^t = {s_0, ..., s_t}:

m^[i+1] = argmax_m E_s^t [ log p(d^t, s^t | m) | m^[i], d^t ]

Expanded, this amounts to maximizing the expected sum of the terms log p(o_τ | s_τ, m) over all time steps τ.

E-step: Compute the posterior of each pose s_τ based on m^[i] and all data, including t > τ => different from incremental localization.

M-step: Maximize log p(o_τ | s_τ, m) for all τ and all poses s_τ under the expectation calculated in the E-step.

Mapping with Incremental Maximum-Likelihood Estimation

Stepwise maximum-likelihood estimation of map and pose is inferior to Kalman Filtering and EM estimation, but less complex.

Obtain a series of maximum-likelihood maps and poses m_1*, m_2*, ... and s_1*, s_2*, ... by maximizing the marginal likelihood:

<m_t*, s_t*> = argmax_{m_t, s_t} p(o_t | m_t, s_t) p(m_t, s_t | a_t, s_t-1*, m_t-1*)

This equation follows from the Bayes Filter equation by assuming that the map and pose at t-1 are known for certain.

- real-time computation possible
- unable to handle cycles

Example of Incremental Maximum-Likelihood Mapping

At every time step, the map is grown by finding the most likely continuation. Map estimates do not converge as the robot completes a cycle, because of accumulated pose estimation error.

Examples from: Sebastian Thrun, Probabilistic Algorithms in Robotics, http://www-2.cs.cmu.edu/~thrun/papers/thrun.probrob.html

Maintaining a Pose Posterior Distribution

Problems with cyclic environments can be overcome by maintaining not only the maximum-likelihood pose estimate at t-1 but also the uncertainty distribution, using the Bayes Filter (o^t, a^t denote all observations and actions up to time t):

p(s_t | o^t, a^t) = α p(o_t | s_t) ∫ p(s_t | a_t-1, s_t-1) p(s_t-1 | o^t-1, a^t-1) ds_t-1

Last example repeated, representing the pose posterior by particles. The uncertainty is transferred onto the map, so major corrections remain possible.

Estimating Probabilities from a Database

Given a sufficiently large database with tuples a^(1), ..., a^(N) drawn from an unknown distribution P(X), we can compute maximum likelihood estimates of all partial joint probabilities and hence of all conditional probabilities.

X_m1, ..., X_mK = subset of X_1, ..., X_L with K ≤ L
w_a = number of tuples in the database with X_m1 = a_m1, ..., X_mK = a_mK
N   = total number of tuples

The maximum likelihood estimate of P(X_m1 = a_m1, ..., X_mK = a_mK) is

P̂(X_m1 = a_m1, ..., X_mK = a_mK) = w_a / N

If a priori information is available, it may be introduced via a bias m_a:

P̂(X_m1 = a_m1, ..., X_mK = a_mK) = (w_a + m_a) / N

Often m_a = 1 is chosen for all tuples a to express equal likelihoods in the case of an empty database. (A small sketch of this counting estimate follows after the EM overview below.)

Expectation Maximization (EM)

Basic idea of the EM algorithm: Initially one has
- data with missing values, e.g. unassigned cluster memberships
- distribution models, e.g. a rule to assign each datum to the nearest cluster center

Iterate the two steps:
- E-step: Compute expected distribution parameters based on the data (initially by random choice)
- M-step: Maximize the likelihood of the missing values based on the distribution parameters
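Returning to the counting estimate above, here is a minimal sketch of the relative-frequency computation. The database format and the attributes A, B are illustrative assumptions; the bias parameter corresponds to m_a from the slide.

```python
from collections import Counter
from itertools import product

def estimate_joint(database, variables, bias=0.0):
    """Estimate P̂(X_m1=a_m1, ..., X_mK=a_mK) = (w_a + m_a) / N
    for every value combination of the selected variables.

    database : list of dicts, one per tuple, e.g. {"A": True, "B": False}
    variables: the subset X_m1, ..., X_mK to estimate the joint over
    bias     : the a priori bias m_a (0 = plain maximum likelihood estimate)
    """
    counts = Counter(tuple(row[v] for v in variables) for row in database)
    n = len(database)
    values = [sorted({row[v] for row in database}) for v in variables]
    return {combo: (counts[combo] + bias) / n for combo in product(*values)}

# Usage with a tiny hypothetical database of binary attributes A and B
db = [{"A": True, "B": True}, {"A": True, "B": False},
      {"A": True, "B": True}, {"A": False, "B": False}]
print(estimate_joint(db, ["A", "B"]))            # plain relative frequencies w_a / N
print(estimate_joint(db, ["A", "B"], bias=1.0))  # with bias m_a = 1 as on the slide
```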

Example for EM (1)

Problem: Fit 3 straight lines to data points, not knowing which point belongs to which line.

Algorithm (a sketch of this loop follows below):
A. Select 3 random lines initially.
B. Assign the data points to each line by a minimum-distance criterion.
C. Determine the best-fitting straight line for the assigned data points.
D. Repeat B and C until no further changes occur.

(Example by Anna Ergorova, FU Berlin)

Example for EM (2)

[Figure: data points and fitted lines after the 1st iteration (after step B) and after the 6th iteration (after step C).]
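A minimal sketch of steps A-D for lines parameterized as y = a·x + b, using vertical distance for the assignment and ordinary least squares for the refit. The synthetic data generation and the fixed iteration count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def fit_lines(points, n_lines=3, n_iter=20):
    """Alternate assignment (step B) and least-squares refit (step C)."""
    x, y = points[:, 0], points[:, 1]
    lines = rng.normal(size=(n_lines, 2))             # step A: random (slope, intercept)
    for _ in range(n_iter):
        # Step B: assign each point to the line with minimum vertical distance
        dist = np.abs(lines[:, 0][:, None] * x + lines[:, 1][:, None] - y)
        assign = dist.argmin(axis=0)
        # Step C: refit each line to its assigned points by least squares
        for k in range(n_lines):
            mask = assign == k
            if mask.sum() >= 2:
                lines[k] = np.polyfit(x[mask], y[mask], 1)
    return lines

# Synthetic data from three "true" lines plus noise
x = rng.uniform(0, 10, 300)
true_lines = [(1.0, 0.0), (-0.5, 8.0), (0.2, 3.0)]
a, b = np.array(true_lines)[rng.integers(0, 3, 300)].T
points = np.column_stack([x, a * x + b + rng.normal(0, 0.2, 300)])
print(fit_lines(points))
```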

K-Means Clustering Algorithm (1)

- Special case of EM
- Most popular clustering algorithm
- Searches for a local minimum of the sum of Euclidean sample distances to the cluster centers
- Guaranteed convergence in a finite number of steps
- Requires initialisation with a fixed number of clusters K
- May converge to a local minimum

K-Means Clustering Algorithm (2)

Represent the exemplars as points in an N-dimensional feature space (a sketch follows below).
A. Choose arbitrary initial cluster centers.
B. Determine the cluster assignments using the distance measure.
C. Move the cluster centers to the center of gravity of the assigned exemplars.
D. Repeat B and C until no further changes occur.

[Figure: exemplars and two cluster centers in a 2-D feature space (Feature 1 vs. Feature 2); the final positions of the cluster centers have been reached.]
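A minimal sketch of steps A-D; the random 2-D data and K = 2 are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def k_means(samples, k, n_iter=50):
    """Steps A-D: init centers, assign by distance, move to center of gravity."""
    centers = samples[rng.choice(len(samples), k, replace=False)]    # step A
    for _ in range(n_iter):
        # Step B: assign each exemplar to the nearest cluster center
        dist = np.linalg.norm(samples[:, None, :] - centers[None, :, :], axis=2)
        assign = dist.argmin(axis=1)
        # Step C: move each center to the center of gravity of its exemplars
        new_centers = np.array([samples[assign == j].mean(axis=0) for j in range(k)])
        if np.allclose(new_centers, centers):         # step D: stop when unchanged
            break
        centers = new_centers
    return centers, assign

# Two well-separated 2-D blobs
samples = np.vstack([rng.normal([0, 0], 0.5, (50, 2)),
                     rng.normal([5, 5], 0.5, (50, 2))])
centers, assign = k_means(samples, k=2)
print(centers)
```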

Learning Mixtures of Gaussians

Determine the Gaussian mixture distribution with K multivariate Gaussians which best describes the given data => unsupervised clustering.

Multivariate Gaussian mixture distribution:

p(x) = Σ_{i=1..K} w_i N(x; µ_i, Σ_i)   with   Σ_{i=1..K} w_i = 1

A. Select w_i, µ_i and Σ_i, i = 1..K, at random (K is given).
B. For each datum x_j compute the probability p_ij that x_j was generated by N(µ_i, Σ_i):

   p_ij ∝ w_i N(x_j; µ_i, Σ_i)   (normalized over i)

C. Compute new weights w_i', means µ_i', and covariances Σ_i' by maximum likelihood estimation:

   w_i' = Σ_j p_ij / Σ_i Σ_j p_ij
   µ_i' = Σ_j p_ij x_j / Σ_j p_ij
   Σ_i' = Σ_j p_ij (x_j - µ_i')(x_j - µ_i')^T / Σ_j p_ij

D. Repeat B and C until no further changes occur. (A sketch of this loop follows below.)

General Form of EM Algorithm

Compute an unknown distribution for data with hidden variables:
x   observed values of all samples
Y   variables with hidden values for all samples
θ   parameters of the probabilistic model

EM algorithm:

θ^[i+1] = argmax_θ Σ_y P(Y = y | X = x, θ^[i]) log p(X = x, Y = y | θ)

E-step: Computation of the summation => likelihood of the "completed" data w.r.t. the distribution P(Y = y | X = x, θ^[i]).
M-step: Maximization of the expected likelihood w.r.t. the parameters θ.

The computed parameters increase the likelihood of the data and hidden values with each iteration. The algorithm terminates in a local maximum.
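A minimal sketch of steps A-D in one dimension, so that N(x; µ, σ²) is a scalar Gaussian; the two-component synthetic data and the fixed iteration count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

def gaussian(x, mu, var):
    return np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)

def em_gmm_1d(x, k, n_iter=100):
    """Steps A-D for a 1-D Gaussian mixture with K components."""
    n = len(x)
    w = np.full(k, 1.0 / k)                       # step A: simple initialization
    mu = rng.choice(x, k, replace=False)
    var = np.full(k, x.var())
    for _ in range(n_iter):
        # Step B: responsibilities p_ij ∝ w_i N(x_j; mu_i, var_i), normalized over i
        p = w[:, None] * gaussian(x[None, :], mu[:, None], var[:, None])
        p /= p.sum(axis=0)
        # Step C: maximum likelihood re-estimation of w, mu, var
        nk = p.sum(axis=1)
        w = nk / n
        mu = (p * x).sum(axis=1) / nk
        var = (p * (x - mu[:, None]) ** 2).sum(axis=1) / nk
    return w, mu, var

# Data drawn from two Gaussians
x = np.concatenate([rng.normal(0.0, 1.0, 200), rng.normal(6.0, 0.5, 100)])
print(em_gmm_1d(x, k=2))
```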

Expectation Maximization for Estimating a Bayes Net with a Hidden Variable

Expectation step of EM: Use the current (initial) probability estimates to compute the probability P(a) for all attribute combinations a (including values for the hidden variables).

A tuple with a missing value, e.g. a* = [*, X_2=a_m2, X_3=a_m3, ...] with absolute frequency w_a*, is replaced in the completed database by one tuple per possible value a_1, ..., a_M of X_1:

a_1 = [X_1=a_1, X_2=a_m2, X_3=a_m3, ...]   with weight w_a* · P(a_1)
a_2 = [X_1=a_2, X_2=a_m2, X_3=a_m3, ...]   with weight w_a* · P(a_2)
...
a_M = [X_1=a_M, X_2=a_m2, X_3=a_m3, ...]   with weight w_a* · P(a_M)

Recommended reading: Borgelt & Kruse, Graphical Models, Wiley 2002

Example for Expectation Maximization (1)

(adapted from Borgelt & Kruse, Graphical Models, Wiley 2002)

Given 4 binary probabilistic variables A, B, C, H with known dependency structure: H is the parent of A, B, and C.

Given also a database with tuples [* A B C], where H is a missing attribute (absolute frequencies of occurrence):

H  A  B  C   w
*  T  T  T   14
*  T  T  F   11
*  T  F  T   20
*  T  F  F   20
*  F  T  T    5
*  F  T  F    5
*  F  F  T   11
*  F  F  F   14

Task: Estimate the conditional probabilities of the Bayes Net nodes!

Example for Expectation Maximization (2)

Initial (random) probability assignments:

H  P(H)      A  H  P(A|H)     B  H  P(B|H)     C  H  P(C|H)
T  0.3       T  T  0.4        T  T  0.7        T  T  0.2
F  0.7       T  F  0.6        T  F  0.5        T  F  0.5
             F  T  0.6        F  T  0.3        F  T  0.8
             F  F  0.4        F  F  0.2        F  F  0.5

With

P(H | A, B, C) = P(A|H) P(B|H) P(C|H) P(H) / Σ_H P(A|H) P(B|H) P(C|H) P(H)

one can complete the database:

H  A  B  C   w          H  A  B  C   w
T  T  T  T   1.27       F  T  T  T   12.73
T  T  T  F   3.14       F  T  T  F    7.86
T  T  F  T   2.93       F  T  F  T   17.07
T  T  F  F   8.14       F  T  F  F   11.86
T  F  T  T   0.92       F  F  T  T    4.08
T  F  T  F   2.37       F  F  T  F    2.63
T  F  F  T   3.06       F  F  F  T    7.94
T  F  F  F   8.49       F  F  F  F    5.51

Example for Expectation Maximization (3)

Based on the modified complete database, one computes the maximum likelihood estimates of the conditional probabilities of the Bayes Net. Example:

P(A=T | H=T) = (1.27 + 3.14 + 2.93 + 8.14) / (1.27 + 3.14 + 2.93 + 8.14 + 0.92 + 2.37 + 3.06 + 8.49) ≈ 0.51

This way one gets new probability assignments:

H  P(H)      A  H  P(A|H)     B  H  P(B|H)     C  H  P(C|H)
T  0.3       T  T  0.51       T  T  0.25       T  T  0.27
F  0.7       T  F  0.71       T  F  0.39       T  F  0.60
             F  T  0.49       F  T  0.75       F  T  0.73
             F  F  0.29       F  F  0.61       F  F  0.40

This completes the first iteration. After ca. 700 iterations the modifications of the probabilities are less than 10^-4. The resulting values are:

H  P(H)      A  H  P(A|H)     B  H  P(B|H)     C  H  P(C|H)
T  0.5       T  T  0.5        T  T  0.2        T  T  0.4
F  0.5       T  F  0.8        T  F  0.5        T  F  0.6
             F  T  0.5        F  T  0.8        F  T  0.6
             F  F  0.2        F  F  0.2        F  F  0.4
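The completion in this example can be reproduced with a few lines of code. The sketch below performs the E-step (splitting each tuple's weight by P(H | A, B, C)) and one M-step estimate for the network H -> A, B, C, using the initial probability assignments from the slide above; the dictionary-based representation is an implementation choice, not part of the slides.

```python
from itertools import product

# Initial CPTs of the network H -> A, B, C (values from the slide above)
p_h = {True: 0.3, False: 0.7}
p_a = {(True, True): 0.4, (True, False): 0.6, (False, True): 0.6, (False, False): 0.4}
p_b = {(True, True): 0.7, (True, False): 0.5, (False, True): 0.3, (False, False): 0.5}
p_c = {(True, True): 0.2, (True, False): 0.5, (False, True): 0.8, (False, False): 0.5}

# Database with H missing: (A, B, C) -> absolute frequency w
db = {(True, True, True): 14, (True, True, False): 11,
      (True, False, True): 20, (True, False, False): 20,
      (False, True, True): 5, (False, True, False): 5,
      (False, False, True): 11, (False, False, False): 14}

def complete_database():
    """E-step: split each tuple's weight w according to P(H | A, B, C)."""
    completed = {}
    for (a, b, c), w in db.items():
        joint = {h: p_h[h] * p_a[(a, h)] * p_b[(b, h)] * p_c[(c, h)]
                 for h in (True, False)}
        z = sum(joint.values())
        for h in (True, False):
            completed[(h, a, b, c)] = w * joint[h] / z
    return completed

completed = complete_database()
print(round(completed[(True, True, True, True)], 2))   # 1.27, as in the table above

# M-step example: maximum likelihood re-estimate of P(A=T | H=T)
w_ht = sum(v for (h, a, b, c), v in completed.items() if h)
w_ht_at = sum(v for (h, a, b, c), v in completed.items() if h and a)
print(round(w_ht_at / w_ht, 2))                        # approx. 0.51
```

Note that p_b here encodes P(B=T|H=F) = 0.5; this is an assumption of the sketch and would need to be checked against the original Borgelt & Kruse example, since the extracted slide table lists 0.8 for that entry. With the values as given above, the printed numbers match the completed database and the 0.51 estimate shown on the slide.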