Mobile Robot Systems
Systems 2: Simulating Errors


1 Introduction

Simulating errors is a great way to test your calibration algorithms, your real-time identification algorithms, and your estimation algorithms. Conceptually, the system under test is placed in a test environment which duplicates the interfaces whose errors are under consideration.

[Figure 3: Simulating Errors. The system under test is embedded in a test environment. The test environment generates erroneous signals with the right characteristics.]

2 Systematic Errors

Systematic errors are easy to model. You tell the test environment the truth (including the error) and don't tell the rest of the system.

A great example is time delays. Time delays can be modelled with FIFO queues: data goes in one side and comes out the other a few iterations later (a minimal code sketch appears at the end of this section).

[Figure 4: FIFO Queues as Delay Simulators. States s1 through s7 are read in over time while commands c1 through c4 emerge. The command which responds to the first state does not reach the actuators until three more states have been read into the system.]

Well engineered systems cycle faster than their own delays, so this is the typical situation. Using a larger delay in the test environment FIFO than is used in the system makes it possible to assess the impact of delay calibration errors on your road follower, for example.

Another example is wheel radius or other vehicle dimensions. It's easy to have the external test environment use the right value while the system under test uses its calibrated, and possibly erroneous, value.
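To make the delay-queue idea concrete, here is a minimal Python sketch of a FIFO delay simulator. The `DelayLine` name, the three-cycle delay, and the scalar payload are illustrative assumptions, not part of the original notes:

```python
from collections import deque

class DelayLine:
    """Simulate a fixed transport delay of n_cycles iterations with a FIFO queue."""
    def __init__(self, n_cycles, initial=0.0):
        # Pre-fill the queue so the first n_cycles outputs are well defined.
        self.fifo = deque([initial] * n_cycles)

    def step(self, value):
        # Push the newest value in one side; pop the delayed one out the other.
        delayed = self.fifo.popleft()
        self.fifo.append(value)
        return delayed

# The value pushed at cycle 0 does not emerge until cycle 3.
delay = DelayLine(n_cycles=3)
for k in range(6):
    print(k, delay.step(float(k)))
```

Pre-filling the queue mirrors a real system starting from rest, and making the test environment's queue longer than the system's own FIFO is how you inject an extra, deliberately erroneous delay.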

3 Random Errors

Simulating random errors is a good deal harder to do well and correctly.

3.1 The Transformation Method (for Generating Arbitrary PDFs)

The Box-Muller method [2] of generating a simple Gaussian random variable goes like so. Most system-supplied random number generators are uniform deviates (i.e. uniform distributions), but you often want a Gaussian random variable. Suppose you have a uniform deviate valued between 0 and 1 and you want one that is twice as likely to return a value between 0.25 and 0.75.

[Figure: the uniform density p1(x) we HAVE, the density p2(y) we WANT, and the mapping y = f(x) relating them.]

Obviously, we could achieve this if we took the numbers we get from rand() and alternately stretched and compressed them so that all numbers between 1/6 and 5/6 were scaled onto 0.25 to 0.75. We need to stretch the x axis in the middle and squish it on the ends ("squish" is an obscure technical term from ancient probability theory). But what is the function f() which, when applied to x, will give us a new variable y with the right distribution?

First, when y(x) is an invertible function of x, the likelihood of a < x < b must be the same as the likelihood of y(a) < y < y(b), because when you happen to get a particular x between a and b, the second inequality follows immediately.

Transformation Rule:

1. $p(y)\,dy = p(x)\,dx$
2. Integrating both sides: $\int_{-\infty}^{y} p(y')\,dy' = \int_{0}^{x} p(x')\,dx'$
3. For a uniform deviate, $p(x) = 1$, so $\int_{0}^{x} p(x')\,dx' = x$
4. Hence $x = \int_{-\infty}^{y} p(y')\,dy'$
5. $x = F(y)$, by definition of the cumulative distribution function $F$
6. $y = F^{-1}(x)$, which is our squishing function

In general, to convert a uniform deviate into any other distribution, put x = rand() and compute $y = F^{-1}(x)$, where $F^{-1}$ is the inverse of the cumulative distribution function for the required new distribution. This new variable y has the required distribution.
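A minimal sketch of the transformation method in Python, using an exponential target distribution as the worked example; the exponential choice and the function name are mine for illustration, and any distribution with an invertible CDF works the same way:

```python
import numpy as np

rng = np.random.default_rng(0)

# Transformation method: x = rand() is uniform on [0,1); y = F^{-1}(x) has the
# target distribution. For an exponential PDF p(y) = lam*exp(-lam*y),
# F(y) = 1 - exp(-lam*y), so the squishing function is F^{-1}(x) = -ln(1-x)/lam.
def exponential_deviate(lam, n):
    x = rng.random(n)              # uniform deviates
    return -np.log(1.0 - x) / lam  # y = F^{-1}(x)

samples = exponential_deviate(lam=2.0, n=100000)
print(samples.mean())  # should be close to 1/lam = 0.5
```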

This is why libraries often only give you a uniform deviate: it's easy to convert it to anything else.

3.2 Scaling Distributions

If x is a random variable with mean 0 and standard deviation 1, then $y = \sigma x + \mu$ is a new random variable with mean $\mu$ and standard deviation $\sigma$.

3.3 Computing Probability Ellipses from Gaussian Distributions

There is a lot to this, so I wrote it all down. A covariance matrix C encodes the second moment of a probability density function. Using it alone is tantamount to assuming a Gaussian distribution. In n dimensions, this is:

$$P(x) = \frac{1}{\sqrt{(2\pi)^n |C|}} \exp\left[-\frac{1}{2}(x - \hat{X})^T C^{-1} (x - \hat{X})\right]$$

This is the formula for the probability density of a random vector x. Contours of constant probability are curves containing all values of x for which P(x) = constant. In general, these curves are n-ellipsoids because P(x) is constant when the exponent, called the Mahalanobis distance, is constant. An n-ellipsoid can be written in the form:

$$(x - \hat{X})^T C^{-1} (x - \hat{X}) = k^2(p)$$

where $k^2(p)$ is the squared Mahalanobis distance which corresponds to probability p. See [3] if you need a proof that this is an ellipse. We can only plot one in 2D, so remove all but the two corresponding rows and columns; then, for an equi-probability ellipse of probability p:

$$k^2(p) = -2\ln(1 - p)$$
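The two results above translate into a few lines of code. This sketch (assuming numpy and illustrative values of mu, sigma, and p) demonstrates both the scaling rule of Section 3.2 and the $k^2(p)$ threshold:

```python
import numpy as np

rng = np.random.default_rng(1)

# Scale a zero-mean, unit-variance deviate to mean mu, std sigma (values assumed).
mu, sigma = 3.0, 0.5
y = sigma * rng.standard_normal(100000) + mu
print(y.mean(), y.std())  # ~3.0, ~0.5

# Squared Mahalanobis distance for a 2D equi-probability ellipse of probability p.
def k_squared(p):
    return -2.0 * np.log(1.0 - p)

print(k_squared(0.95))  # ~5.99, the familiar 95% chi-square threshold for 2 DOF
```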

A 2D covariance matrix has these internals by definition:

$$C = \begin{bmatrix} \sigma_{xx} & \sigma_{xy} \\ \sigma_{xy} & \sigma_{yy} \end{bmatrix}$$

3.3.1 Rotating Covariance

Suppose frame X is a counterclockwise rotated version of frame U, as indicated below.

[Figure: frame X, with axes x and y, rotated by angle $\theta$ with respect to frame U, with axes u and v.]

Let $R = R_X^U$ be the rotation matrix which converts coordinates of points from frame X to frame U thus:

$$r^U = R\, r^X$$

Let x be an unbiased random vector of covariance $C_X$ which is expressed in frame X. The covariance of the same vector $u = r^U$ expressed in frame U is:

$$C_U = E[u u^T] = E[R x x^T R^T] = R C_X R^T$$

We can interpret R either as an operator on x producing u or as a conversion of coordinates. Using the latter interpretation, note that $C_X$ and $C_U$ would generate the same physical uncertainty region in space because we are only converting coordinates. Stated differently, R is an orthonormal matrix.
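A one-function sketch of the covariance rotation rule for the 2D case discussed here (the function name and the example values are illustrative):

```python
import numpy as np

# Rotate a covariance expressed in frame X into frame U: C_U = R C_X R^T.
def rotate_cov(C_x, theta):
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s],
                  [s,  c]])
    return R @ C_x @ R.T

print(rotate_cov(np.diag([4.0, 1.0]), np.pi / 6))
```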

3.3.2 Diagonalizing Covariance

All symmetric matrices are diagonalizable via a matrix similarity transform based on an orthonormal (rotation) matrix R. Covariance matrices are symmetric because of their definition as an outer product. Thus there is always a rotation of coordinates which renders a covariance matrix diagonal. Let's try to find it. Using the last result, require that the covariance in the U frame be diagonal and solve for the rotation matrix R which transforms the covariance in the X frame to this:

$$C_U = \begin{bmatrix} \sigma_{uu} & 0 \\ 0 & \sigma_{vv} \end{bmatrix} = R C_X R^T = \begin{bmatrix} c\theta & -s\theta \\ s\theta & c\theta \end{bmatrix} \begin{bmatrix} \sigma_{xx} & \sigma_{xy} \\ \sigma_{xy} & \sigma_{yy} \end{bmatrix} \begin{bmatrix} c\theta & s\theta \\ -s\theta & c\theta \end{bmatrix}$$

Multiplying out the internal product:

$$\begin{bmatrix} \sigma_{uu} & 0 \\ 0 & \sigma_{vv} \end{bmatrix} = \begin{bmatrix} (c\theta\,\sigma_{xx} - s\theta\,\sigma_{xy}) & (c\theta\,\sigma_{xy} - s\theta\,\sigma_{yy}) \\ (s\theta\,\sigma_{xx} + c\theta\,\sigma_{xy}) & (s\theta\,\sigma_{xy} + c\theta\,\sigma_{yy}) \end{bmatrix} \begin{bmatrix} c\theta & s\theta \\ -s\theta & c\theta \end{bmatrix}$$

From the off-diagonal element at (1,0) we can write:

$$0 = \sigma_{xx} s\theta c\theta - \sigma_{xy} s^2\theta + \sigma_{xy} c^2\theta - \sigma_{yy} s\theta c\theta$$

$$0 = \sigma_{xy} c2\theta + \frac{1}{2}(\sigma_{xx} - \sigma_{yy}) s2\theta \qquad \frac{s2\theta}{c2\theta} = \frac{2\sigma_{xy}}{\sigma_{yy} - \sigma_{xx}}$$

Hence, the rotation angle is:

$$\theta = \frac{1}{2}\,\mathrm{atan2}\!\left[2\sigma_{xy},\ (\sigma_{yy} - \sigma_{xx})\right]$$

Note that:
- if $\sigma_{xy} = 0$, $C_X$ is already diagonal and $\theta = 0$ is computed, which is correct.
- if $\sigma_{xx} = \sigma_{yy}$, $\theta = \pi/4$ regardless of $\sigma_{xy}$.
- if both arguments are zero, $\theta$ is arbitrary. Detect this case and set $\theta = 0$.
- if $\sigma_{xx}, \sigma_{yy}, \sigma_{xy} < \varepsilon$, the covariance is basically zero. Detect this and exit; draw a dot if this is for graphics.

The values of the diagonal covariances come from the diagonal elements:

$$\sigma_{uu} = \sigma_{xx} c^2\theta - 2\sigma_{xy} s\theta c\theta + \sigma_{yy} s^2\theta$$
$$\sigma_{vv} = \sigma_{xx} s^2\theta + 2\sigma_{xy} s\theta c\theta + \sigma_{yy} c^2\theta$$
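Folding the angle formula, the special cases, and the diagonal variances into one routine gives something like the following sketch (the function name and the epsilon threshold are my choices):

```python
import numpy as np

EPS = 1e-12

# Diagonalize a 2x2 covariance: return the rotation angle theta and the
# diagonal variances (sigma_uu, sigma_vv), following the formulas above.
def diagonalize_cov(C):
    sxx, sxy, syy = C[0, 0], C[0, 1], C[1, 1]
    if max(abs(sxx), abs(sxy), abs(syy)) < EPS:
        return 0.0, 0.0, 0.0   # covariance is basically zero: draw a dot
    if abs(sxy) < EPS:
        theta = 0.0            # already diagonal (also covers the arbitrary-angle case)
    else:
        theta = 0.5 * np.arctan2(2.0 * sxy, syy - sxx)
    c, s = np.cos(theta), np.sin(theta)
    sigma_uu = sxx * c * c - 2.0 * sxy * s * c + syy * s * s
    sigma_vv = sxx * s * s + 2.0 * sxy * s * c + syy * c * c
    return theta, sigma_uu, sigma_vv

# Eigenvalues of [[2,1],[1,2]] are 1 and 3; the angle comes out as pi/4.
print(diagonalize_cov(np.array([[2.0, 1.0], [1.0, 2.0]])))
```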

3.3.3 Drawing Covariance

Now that the covariance is diagonal in some rotated coordinate system, here is an algorithm for drawing it. Reusing the earlier result, the equation of the ellipse for a diagonal matrix must be:

$$\begin{bmatrix} u & v \end{bmatrix} \begin{bmatrix} \sigma_{uu} & 0 \\ 0 & \sigma_{vv} \end{bmatrix}^{-1} \begin{bmatrix} u \\ v \end{bmatrix} = \frac{u^2}{\sigma_{uu}} + \frac{v^2}{\sigma_{vv}} = k^2$$

Dividing by $k^2$ produces the ellipse in standard form:

$$\frac{u^2}{k^2\sigma_{uu}} + \frac{v^2}{k^2\sigma_{vv}} = 1 \qquad \text{i.e.} \qquad \frac{u^2}{a^2} + \frac{v^2}{b^2} = 1, \quad a = k\sqrt{\sigma_{uu}}, \quad b = k\sqrt{\sigma_{vv}}$$

The parametric equations of the contour in this frame are:

$$u = a\cos\phi \qquad v = b\sin\phi$$

We can draw this ellipse by letting the U frame be the model frame of the ellipse sprite. The rotation of the model frame with respect to the original (x,y) coordinates is given by the negative of $\theta$, because we require (for drawing) the angle through which we rotate frame X to bring it into coincidence with U.
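Combining the diagonalization with the standard-form semi-axes and the parametric contour gives a drawing routine along these lines. This is a sketch that reuses the `diagonalize_cov` sketch from the previous subsection; the sample count n and the default probability are arbitrary:

```python
import numpy as np

# Generate points on the equiprobability ellipse of probability p for a 2x2
# covariance C centered on mean, in the original (x, y) coordinates.
def ellipse_points(mean, C, p=0.95, n=64):
    k = np.sqrt(-2.0 * np.log(1.0 - p))          # Mahalanobis distance for prob. p
    theta, sigma_uu, sigma_vv = diagonalize_cov(C)
    a, b = k * np.sqrt(sigma_uu), k * np.sqrt(sigma_vv)  # semi-axes in model frame
    phi = np.linspace(0.0, 2.0 * np.pi, n)
    u, v = a * np.cos(phi), b * np.sin(phi)      # parametric contour in the U frame
    c, s = np.cos(theta), np.sin(theta)
    # Rotate the model frame back into (x, y) coordinates (rotation by -theta,
    # per the note above) and translate to the mean.
    x = mean[0] + c * u + s * v
    y = mean[1] - s * u + c * v
    return x, y
```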

3.4 Some Important Caveats on Discrete Random Variables

While the standard deviation is a linear operator, the variance is not. So, given $y = ax$, we have:

$$\sigma_y = a\sigma_x \qquad \sigma_{yy} = a^2\sigma_{xx} \tag{1}$$

This fact has big implications when doing simulations of dynamic systems, as the next two sections show.

3.4.1 Discretizing Continuous Random Processes - Linear Velocity

Be careful when you want to control the integrated behavior of a random process by controlling the noise in the original derivative. A classic case is odometry. Suppose you are computing distance by integrating velocity, and the velocity is noisy. Suppose you want the variance in the computed distance to be linear in distance:

$$\sigma_{ss} = \alpha s \tag{2}$$

Distance is computed from the sum of incremental distances:

$$s = \sum_{i=1}^{n} V_i \Delta t$$

Reusing (1), the variance of each term in the sum is $\sigma_{vv}\Delta t^2$, and the variance of the sum is the sum of the variances:

$$\sigma_{ss} = n\sigma_{vv}\Delta t^2 \tag{3}$$

Equating this result to (2) gives:

$$n\sigma_{vv}\Delta t^2 = \alpha s$$

So that we must have:

$$\sigma_{vv} = \frac{\alpha s}{n\Delta t^2} = \frac{\alpha}{\Delta t}\cdot\frac{s}{n\Delta t} = \frac{\alpha V}{\Delta t} \tag{4}$$

where $V = s/(n\Delta t)$ is the average velocity. Hence, velocity variance must be proportional to velocity and inversely proportional to the time step in order to get distance error which is proportional to distance. Doubling the time step creates four times the variance per sample added and half the computed distance variance, because the integral multiplies by $\Delta t$.
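A quick Monte Carlo check of this result, under assumed values of alpha, V, and the time step, confirms that injecting velocity noise with variance $\alpha V / \Delta t$ yields distance variance $\alpha s$:

```python
import numpy as np

rng = np.random.default_rng(4)

# Check (2)-(4): with velocity variance alpha*V/dt, the variance of the
# integrated distance grows as alpha*s. alpha, V, dt, T are assumed values.
alpha, V, dt, T = 1e-3, 2.0, 0.1, 50.0
n = int(T / dt)
v_noisy = V + rng.normal(0.0, np.sqrt(alpha * V / dt), size=(500, n))
s = (v_noisy * dt).sum(axis=1)
print(s.var(), "expected", alpha * V * T)  # alpha * s, with s = V*T
```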

Speed Measurements

To corrupt a velocity measurement in order to generate the desired behavior, you would use the Gaussian:

$$V_{meas} = V_{true} + N\!\left(0, \frac{\alpha V}{\Delta t}\right)$$

Differential Distance Measurements

However, if your simulated encoder is returning a differential distance then, considering (3), you use the Gaussian:

$$\Delta s_{meas} = \Delta s_{true} + N\!\left(0, \sigma_{vv}\Delta t^2\right)$$

which can be rewritten using (4) as:

$$\Delta s_{meas} = \Delta s_{true} + N\!\left(0, \alpha V \Delta t\right)$$

Hence, differential position variance must be proportional to velocity and proportional to the time step in order to get distance error which is proportional to distance. In this way, the behavior of the injected error is the same regardless of the update rate used in your application layer, and regardless of whether the simulated encoder generates differential position or instantaneous velocity.
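Both corruption rules translate directly into code. In this sketch the growth coefficient ALPHA and the function names are assumptions for illustration; note that the second argument of N(0, .) above is a variance, so the code takes its square root to get a standard deviation:

```python
import numpy as np

rng = np.random.default_rng(2)

ALPHA = 1e-4  # odometry growth coefficient: var(s) = ALPHA * s (assumed value)

# Corrupt a speed measurement: variance proportional to V, inverse in dt.
def noisy_speed(v_true, dt):
    return v_true + rng.normal(0.0, np.sqrt(ALPHA * abs(v_true) / dt))

# Corrupt a differential distance (encoder) measurement: variance proportional
# to V and to dt, so the integrated distance variance is the same either way.
def noisy_delta_s(v_true, dt):
    ds_true = v_true * dt
    return ds_true + rng.normal(0.0, np.sqrt(ALPHA * abs(v_true) * dt))
```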

3.4.2 Discretizing Continuous Random Processes - Angular Velocity

Suppose you want to generate a sequence of discrete angular velocity measurements from a simulated gyro, and you want the computed heading error variance to be linear in time.

Continuous Time

In continuous time, the process is:

$$\dot{\theta} = \omega \tag{5}$$

This is of the form $\dot{x} = Fx + Lu$, where the system and input Jacobians are:

$$F = \frac{\partial \dot{x}}{\partial x} = 0 \qquad L = \frac{\partial \dot{x}}{\partial u} = 1$$

The transition matrix is:

$$\Phi(t, \tau) = \exp\left[\int_\tau^t F(\zeta)\, d\zeta\right]$$

On substitution, this is the identity because the system is entirely forced (there is no autonomous behavior to be encoded in a transition matrix):

$$\Phi(t, \tau) = 1$$

For vanishing initial conditions, the solution for the covariance of the state $\theta$ is:

$$\sigma_{\theta\theta}(t) = \int_0^t \Phi(t, \tau) L(\tau) Q(\tau) L^T(\tau) \Phi^T(t, \tau)\, d\tau$$

Let the continuous variance Q of the input $\omega$ be constant:

$$Q = \sigma_{\omega\omega} = \sigma_{gg} = \text{const}$$

Since $\Phi$ and $L$ are dimensionless here, Q must have units of rad^2/sec to generate rad^2 upon integration. This is why gyro specs often quote random walk (standard deviation) in units of angle/root(time). On substitution:

$$\sigma_{\theta\theta}(t) = \int_0^t Q(\tau)\, d\tau = \sigma_{gg}\, t \tag{6}$$

Hence, continuous heading variance grows linearly with respect to time when the continuous gyro noise is constant.

Discrete Time

Let a tilde over a symbol denote a function of a discrete variable. Now, suppose we want to simulate this behavior in discrete time. A random sequence of angular velocity errors $\delta\omega$ is generated and integrated with respect to time to get the associated heading errors $\delta\theta$. If the same time period is divided into k steps of size $\Delta t$, the discrete integral is:

$$\delta\theta(k) = \sum_k \delta\omega(k)\,\Delta t \tag{7}$$

Suppose, by analogy to the continuous case, that the variance of the discrete velocities is constant:

$$\mathrm{var}[\delta\omega(k)] = \sigma_{\gamma\gamma}(k) = \text{const}$$

Using (1), the variance of the discrete integral is then:

$$\sigma_{\theta\theta}(k) = k\,(\Delta t)^2\, \sigma_{\gamma\gamma}(k) \tag{8}$$

Thus, the variance as a function of sample number is linear in the sample number and quadratic in the time step. Since sample k corresponds to the time $t = k\Delta t$, this also means that the discrete variance computed for time t is proportional to that time and to the time step:

$$\sigma_{\theta\theta}(t) = \sigma_{\gamma\gamma}(t)\, t\, \Delta t$$

The intuition behind this can be generated from equation (8). If $\sigma_{\gamma\gamma}(k)$ is constant, then the number of random numbers added in equation (7) over the time period t depends on the magnitude of $\Delta t$. In fact, if $\Delta t$ is reduced in size by a factor of 2, the number of random numbers is doubled.

However, the variance of the numbers themselves is the variance of $\delta\omega(k)\Delta t$, which is not halved but quartered. Adding twice as many numbers of one quarter the variance produces half the original result. Therefore, if it is required that the discrete integral expressed in equation (7) produce the same results over time as the continuous integral expressed in equation (6), we must equate the variances thus:

$$\sigma_{\gamma\gamma}(t)\, t\, \Delta t = \sigma_{gg}\, t$$

Solving for the discrete variance leads to:

$$\sigma_{\gamma\gamma}(t) = \frac{\sigma_{gg}}{\Delta t}$$

Thus, the discrete time equivalent (in terms of generating the same integrated behavior) to a continuous time variance is computed by dividing by the time step.
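A sketch of the discrete-equivalent gyro noise, with an assumed noise density SIGMA_GG, plus a check that the integrated heading variance comes out independent of the step size:

```python
import numpy as np

rng = np.random.default_rng(3)

SIGMA_GG = 1e-6  # continuous gyro noise density, rad^2/s (an assumed spec value)

# Discrete-equivalent gyro noise: divide the continuous variance by the time
# step, so integrated heading variance grows as sigma_gg * t at any rate.
def noisy_omega(omega_true, dt):
    return omega_true + rng.normal(0.0, np.sqrt(SIGMA_GG / dt))

# Check: integrate heading for 100 s at two different step sizes; the heading
# variance should match sigma_gg * t = 1e-4 rad^2 in both cases.
for dt in (0.01, 0.1):
    n = int(100.0 / dt)
    noise = rng.normal(0.0, np.sqrt(SIGMA_GG / dt), size=(200, n))
    headings = noise.sum(axis=1) * dt
    print(dt, headings.var(), "expected", SIGMA_GG * 100.0)
```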

4 Summary

While systematic errors are pretty easy to model, random ones take some work. Arbitrary distributions can be generated from a uniform one with the transformation method. Simulating noise in sensors in order to generate specific behaviors is quite subtle. The discrete equivalent (in terms of generating the same integrated behavior) of a continuous time variance is computed by dividing by the time step. However, if the measurements are integrated with respect to time first, and then added, the discrete standard deviation must be multiplied by the time step.

5 Notes

Move FIFO queues to simulation section...

6 References

[1] D. E. Knuth, The Art of Computer Programming, for pseudorandom number generators.
[2] Numerical Recipes, for the Box-Muller method of generating a simple Gaussian random variable.
[3] R. C. Smith and P. Cheeseman, "On the Representation and Estimation of Spatial Uncertainty", The International Journal of Robotics Research, vol. 5, no. 4, MIT Press, 1987.
