Uncertainty 1: Fundamentals (Random Variables, Processes and Transformation)
1 Why Study Uncertainty?

Uncertainty models can be applied to many forms of error, including:

- random and systematic
- temporal and spatial

We are generally concerned with modelling errors in measurements. Measurements may be:

- incomplete: related to some but not all of the variables of interest
- indirect: related only indirectly to the quantities of interest
- intermittent: available at irregularly-spaced instants of time

Every measurement in nature has some amount of random noise impressed on it as a matter of basic thermodynamics. The question is: how important is it? Modelling that noise can lead to systems of considerably higher performance. This may mean:

- Improved safety, availability, and overall robustness.
- Improved accuracy of generated models of the environment (say, of a planetary surface).

2 Types of Uncertainty

Measurements may be modelled by a general additive error model. For a measurement vector $x = [x_1\; x_2\; \ldots\; x_n]^T$:

$x_{meas} = x_{true} + \varepsilon$

In general, $\varepsilon$ may be zero, a constant, or a function of anything, and it may be deterministic (systematic), stochastic (random), or a combination of both. We will generally assume that the random component of $\varepsilon$ is unbiased (i.e. has a mean of zero). This is equivalent to assuming all systematic error has been removed through calibration. Of course, $\varepsilon$ is normally unknown; otherwise we would just subtract it from the measurement and all sensors would effectively be perfect. Some classes of error include:

When the error is occasionally very far from the true value, it is called an outlier (e.g. a glitch). When the error follows a deterministic relationship, we often call it a calibration or model error; examples include a scale (slope) or bias (offset) error. When the error possesses a random distribution, it is called noise.

Do not confuse the unknown with the random. Errors may be unknown but systematic. Random error models may be useful for approximating unknown quantities, but they are not the same thing.

Suppose we measure the rotation of a constant-speed wheel with both a typical wheel encoder and an ideal sensor. We might get two curves like those below.

[Figure: wheel position $\theta$ versus time for the real and ideal sensors.]

This sensor has it all: random noise, bias and scale error, and two outliers. Ignoring the outliers, we might model the errors as follows:

$\varepsilon = a + b\theta + N(\mu, \sigma)$

where $N$ represents the Normal or Gaussian probability distribution function.

Notice that the unknowns include parameters of both systematic ($a$, $b$) and stochastic ($\mu$, $\sigma$) processes.

2.1 Systematic Error - Calibration

We remove systematic error through the process of calibration. To do calibration you need (a minimal sketch appears at the end of this section):

- A model for the systematic components of $\varepsilon$. This model must make some sense [1].
- A set of observations from a reference that has the accuracy you want to achieve.
- An estimation process which computes the best-fit parameters of the model which match the observations.

2.2 Random Error - Filtering

We remove random error and outliers by the process of filtering.

2.3 Correlated Error - Differential Observation

If two measurements at different times or in different places are correlated, the correlated part of the error can be removed by differential observation. If two measurements are corrupted by the same constant value, observations of their difference (or the difference of both observations) will not include the error [2].

[1] You can easily fit a straight line to parabolic data. You will get an answer that is the best-fit line, but the goodness of fit will be poor. So a good measure of whether the model is any good is the residual computed by subtracting the model predictions from the observations - after removing outliers.

[2] Hence the differential amplifier.
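To make the calibration recipe concrete, here is a minimal Python sketch (not from the original notes): it simulates an encoder whose systematic error follows the linear model $\varepsilon = a + b\theta$ plus Gaussian noise, fits $a$ and $b$ by least squares against a reference, and checks the residual. All data and parameter values are invented for illustration.

```python
import numpy as np

# Minimal calibration sketch: simulated encoder with bias a, scale error b,
# and Gaussian noise (all values assumed for illustration).
rng = np.random.default_rng(0)
theta_true = np.linspace(0.0, 10.0, 200)             # reference observations
a, b, sigma = 0.5, 0.02, 0.05                        # hypothetical error params
theta_meas = theta_true + a + b * theta_true + rng.normal(0.0, sigma, 200)

# Fit the systematic error model  eps = a + b*theta  by linear least squares.
A = np.column_stack([np.ones_like(theta_true), theta_true])
(a_hat, b_hat), *_ = np.linalg.lstsq(A, theta_meas - theta_true, rcond=None)

# Apply the calibration: subtract the fitted systematic error (using the
# measured value in place of the true one, adequate for small b).
theta_cal = theta_meas - (a_hat + b_hat * theta_meas)

# The residual measures whether the model form is adequate.
residual = theta_cal - theta_true
print(f"a_hat={a_hat:.3f} b_hat={b_hat:.3f} residual std={residual.std():.3f}")
```

After calibration, only the random component remains, which is then the province of filtering.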

3 Random Variables

A random variable (or random vector) may be continuous or discrete. A continuous one can be described in terms of its probability distribution (or density) function (pdf). The area under the pdf determines the probability that an observation of $x$ falls between $a$ and $b$:

$p(a \le x \le b) = \int_a^b f(u)\,du$

[Figure: an arbitrary pdf $f(x)$, with the area between $a$ and $b$ shaded.]

A discrete random variable can be described in terms of its probability function (pf) $p(x)$.

2D and higher-dimensional pdfs are important to us. For example, consider a representation of a 2D pdf in terms of the density of points in the plane.

In this case the joint pdf is a scalar function of a vector:

$p(a \le x \le b \wedge c \le y \le d) = \int_c^d \int_a^b f(u,v)\,du\,dv$

and there are also conditional pdfs such as:

$p(a \le x \le b \mid y) = \int_a^b f(u,y)\,du$

which fix one coordinate or the other (imagine taking a slice through the surface and renormalizing). These are scalar functions of scalars.

3.1 Gaussian Distributions

For example, for a continuous scalar, the so-called Gaussian pdf is:

$f(x) = \frac{1}{\sqrt{2\pi}\,\sigma}\exp\left[-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^2\right]$

In $n$ dimensions we have:

$f(x) = \frac{1}{\sqrt{(2\pi)^n\,|C|}}\exp\left\{-\frac{1}{2}[x-\mu]^T C^{-1}[x-\mu]\right\}$

where the exponent

$[x-\mu]^T C^{-1}[x-\mu]$

is often called the Mahalanobis distance, and $C$ is the covariance matrix discussed below.
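As a quick check on the $n$-dimensional formula above, here is a small sketch that evaluates the Gaussian pdf via the squared Mahalanobis distance in the exponent; the mean, covariance, and query point are arbitrary illustrative values.

```python
import numpy as np

def gaussian_pdf(x, mu, C):
    """Evaluate the n-dimensional Gaussian pdf at x."""
    n = len(mu)
    d = x - mu
    maha2 = d @ np.linalg.solve(C, d)           # squared Mahalanobis distance
    norm = np.sqrt((2 * np.pi) ** n * np.linalg.det(C))
    return np.exp(-0.5 * maha2) / norm

mu = np.array([0.0, 0.0])                       # assumed mean
C = np.array([[2.0, 0.5],                       # assumed covariance
              [0.5, 1.0]])
print(gaussian_pdf(np.array([1.0, -1.0]), mu, C))
```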

3.2 Mean and Variance of a Random Variable

We must always be careful to distinguish the parameters of a theoretical population distribution from the statistics of a sampling distribution drawn from the population. The latter can be used to estimate the former, but they are different things. The former are computed using knowledge of the pdf; the latter divide by the number $n$ of samples somewhere in their formulas.

The expected value or expectation of anything is its weighted average, where the pdf provides the weighting function:

$E(h(x)) = \int_{-\infty}^{\infty} h(u)f(u)\,du$  (continuous)

$E(h(x)) = \sum_{i=1}^{n} h(u_i)p(u_i)$  (discrete)

Note especially that the expectation is a functional (i.e. a moment), so you need the entire distribution $f(x)$ to find it. If you know the whole distribution, you can compute all kinds of moments. Mean and variance are two important moments.

The population mean is the expected value of the random variable $x$. This is basically the centroid of the distribution:

$\mu = E(x) = \int_{-\infty}^{\infty} u f(u)\,du$  (continuous)

$\mu = E(x) = \sum_{i=1}^{n} u_i p(u_i)$  (discrete)

Note that this is not the most likely value of $x$, which is determined by the point at which the pdf or pf is maximum, so "expected value" is a very misleading name for the mean. For a bimodal distribution, the expected value may be very unlikely to occur, as in the figure below; for a Gaussian distribution, however, the mean and the maximum likelihood estimate of $x$ are the same.

[Figure: a bimodal pdf $f(x)$ whose expected value $E(x)$ lies between the two modes, far from the most likely value to observe.]

The population covariance is the expected value of the outer product (a matrix) of two deviation vectors:

$\Sigma = E([x-\mu][x-\mu]^T) = \int [u-\mu][u-\mu]^T f(u)\,du$  (continuous)

$\Sigma = E([x-\mu][x-\mu]^T) = \sum_{i=1}^{n} [u_i-\mu][u_i-\mu]^T p(u_i)$  (discrete)

3.3 Sampling Distributions and Statistics

Statistics are quantities computed from a sample of measurements that can be used to estimate the parameters of the population from which the sample was taken. Samples are not normally continuous, so we will deal only with discrete values.

The sample mean:

$\hat{x} = \frac{1}{n}\sum_{i=1}^{n} x_i$

The sample covariance [1]:

$S = \frac{1}{n}\sum_{i=1}^{n} [x_i-\mu][x_i-\mu]^T$

The diagonal elements of $S$ measure the average squared distance to the corresponding mean (i.e. the variance). Writing $x_i^{(k)}$ for component $i$ of sample $k$:

$s_{ii} = \frac{1}{n}\sum_{k=1}^{n} [x_i^{(k)}-\mu_i][x_i^{(k)}-\mu_i]$

The off-diagonal elements of $S$ measure the covariance of two different mean deviations:

$s_{ij} = \frac{1}{n}\sum_{k=1}^{n} [x_i^{(k)}-\mu_i][x_j^{(k)}-\mu_j]$

[1] If you use the sample mean to estimate the population mean in this formula, you have to divide by $(n-1)$. It is a subtlety that complicates things, so we will ignore it and assume you know the population mean for now.
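The sample statistics are straightforward to compute in the batch style. A sketch with synthetic data (distribution parameters assumed for illustration):

```python
import numpy as np

# Sample mean and sample covariance of a cloud of 2D points,
# computed the "batch" way.
rng = np.random.default_rng(1)
X = rng.multivariate_normal([1.0, -2.0], [[2.0, 0.8], [0.8, 1.0]], size=1000)

n = len(X)
x_hat = X.sum(axis=0) / n                         # sample mean
D = X - x_hat                                     # deviations from the mean
S = (D.T @ D) / n                                 # sample covariance (divide
                                                  # by n-1 if mean is estimated)
print("mean:", x_hat, "\ncovariance:\n", S)
```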

3.4 Why Use a Gaussian to Model Uncertainty?

In general, the mean and covariance are the first and second moments of an arbitrary pdf. Use of these alone to characterize a real pdf is a form of linear assumption. This linear assumption happens to correspond to the use of a Gaussian, because all of a Gaussian's cumulants above the second are zero: its higher moments carry no information beyond the mean and covariance. Many real noise signals follow a Gaussian distribution anyway. Moreover:

- By the Central Limit Theorem, the sum of a number of independent variables has a Gaussian distribution regardless of their individual distributions.
- Representation of contours of constant probability is easy.

[Figure: scalar Gaussian pdf with roughly 67%, 95%, and 99% of the probability within 1, 2, and 3 multiples of $\sigma$.]

In general, these curves of equiprobability are $n$-ellipsoids, because $f(x)$ is constant when the exponent (the Mahalanobis distance) is a constant $k^2$ that depends on the probability $p$:

$(x-\mu)^T \Sigma^{-1} (x-\mu) = k^2(p)$

and this can be shown to be the equation of an ellipse in $n$ dimensions. We can only plot an ellipse in 2D, so we can remove all but two corresponding rows and columns from $\Sigma$; then, for a probability-$p$ ellipse in the plane:

$k^2(p) = -2\ln(1-p)$

Here are some concentration ellipsoids [1]:

[Figure: concentration ellipses at 50%, 90%, and 99% probability, with axes marked in multiples of $\sigma$ up to $3\sigma$.]

[1] These ellipses are contours of the Gaussian density and are sometimes called concentration ellipses, since the bulk of the probability is concentrated inside them.

The covariance matrix is diagonal when it is expressed in a set of rectangular coordinates coincident with the major and minor axes of the elliptical distribution that it represents. Let these axes be given (with respect to the system in which the covariance is expressed) by the columns of a rotation matrix $R$. Then the covariance matrix can be converted to a diagonal matrix $D$ with the following similarity transformation:

$\Sigma^{-1} = R D^{-1} R^T$

The diagonal values of $D$ are the eigenvalues of $\Sigma$ and the columns of $R$ are its eigenvectors. The ellipse given by $D^{-1}$ has these properties [1]:

- It is centered at $\hat{x}$.
- It has major and minor axes coinciding with the columns of $R$.
- It has elongations along each axis proportional to the square roots of the corresponding diagonal elements of $D$.

[1] The gory details are in Smith and Cheeseman.
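Putting the last two results together, here is a short sketch that recovers the concentration-ellipse axes from a covariance matrix by eigendecomposition, using the 2D relation $k^2(p) = -2\ln(1-p)$ quoted above. The example covariance is invented.

```python
import numpy as np

def concentration_ellipse(Sigma, p=0.9):
    """Return semi-axis lengths and axis directions of the p-probability
    concentration ellipse of a 2D Gaussian with covariance Sigma."""
    k2 = -2.0 * np.log(1.0 - p)                 # squared Mahalanobis radius
    eigvals, R = np.linalg.eigh(Sigma)          # D = diag(eigvals); columns
                                                # of R are the ellipse axes
    semi_axes = np.sqrt(k2 * eigvals)           # elongation ~ sqrt(eigenvalue)
    return semi_axes, R

Sigma = np.array([[4.0, 1.5],
                  [1.5, 1.0]])                  # illustrative covariance
axes, R = concentration_ellipse(Sigma, p=0.9)
print("semi-axes:", axes, "\naxis directions (columns):\n", R)
```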

4 Transformation of Uncertainty

We can use the covariance matrix to represent the spread of data. This form of model can be transformed from one rectangular coordinate system to another. It can also be transformed through a nonlinear change of variables if we linearize.

4.1 Linear Transformations

Suppose we know the covariance matrix in one frame (because it is easy to express there) and want to express it in another (because we need to use it there).

[Figure: frames a and b; the uncertainty ellipse is diagonal in frame a.]

Let the transformation of coordinates from system a to system b be:

$x_b = R x_a + t$

Then the transformed mean and covariance are:

$\hat{x}_b = R \hat{x}_a + t \qquad S_b = R S_a R^T$

4.2 Nonlinear Transformations

More generally, let us imagine a case where we know the uncertainty in $x$ and want to compute the uncertainty in a function of $x$. We know that:

$y = f(x) \qquad x = \hat{x} + \varepsilon$

where $\varepsilon$ has a mean of zero. Hence, we can use the Jacobian to linearly approximate $y$ as follows:

$y = f(x) = f(\hat{x}+\varepsilon) \approx f(\hat{x}) + J\varepsilon$

Now the mean of the distribution of $y$ is:

$\hat{y} = E(y) = E(f(\hat{x}) + J\varepsilon) = E(f(\hat{x})) + E(J\varepsilon) = f(\hat{x})$

because we assume the error $\varepsilon$ is unbiased, so it has a mean of zero, and $E(f(\hat{x})) = f(\hat{x})$. Therefore we have:

$y - \hat{y} = J\varepsilon$

The covariance of the transformed variable is:

$S_y = E([y-\hat{y}][y-\hat{y}]^T) = E(J\varepsilon\varepsilon^T J^T) = J S_x J^T$

Note that the linear case of the last section is just a special case of this where the Jacobian is the rotation matrix.
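A generic sketch of this propagation rule follows, with the Jacobian approximated by finite differences so it works for any differentiable $f$. The polar-to-Cartesian example function and the numbers are illustrative assumptions, not part of the original notes.

```python
import numpy as np

def propagate_covariance(f, x_hat, S_x, eps=1e-6):
    """First-order propagation S_y = J S_x J^T through y = f(x),
    with J estimated by forward finite differences."""
    y_hat = f(x_hat)
    n, m = len(x_hat), len(y_hat)
    J = np.zeros((m, n))
    for j in range(n):                      # numerical Jacobian, column by column
        dx = np.zeros(n)
        dx[j] = eps
        J[:, j] = (f(x_hat + dx) - y_hat) / eps
    return y_hat, J @ S_x @ J.T

# Example: polar (range, bearing) -> Cartesian (x, y)
f = lambda u: np.array([u[0] * np.cos(u[1]), u[0] * np.sin(u[1])])
x_hat = np.array([10.0, np.pi / 6])                  # assumed mean
S_x = np.diag([0.05**2, np.radians(0.5)**2])         # assumed polar covariance
y_hat, S_y = propagate_covariance(f, x_hat, S_x)
print(y_hat, "\n", S_y)
```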

4.3 Example: Transforming Uncertainty from Image to Groundplane in the Azimuth Scanner

The coordinates in $x$ need not be Cartesian coordinates. Consider how to convert uncertainty from the polar coordinates of a laser rangefinder to the Cartesian coordinates of the groundplane. Note the sensor tilt angle is negative in the figure, hence the label $-\theta$.

[Figure: sensor frame s mounted at height $h$ above the world frame w and tilted by $\theta$.]

$T_s^w = \mathrm{Trans}(0,0,h)\,\mathrm{Rotx}(\theta) = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & c\theta & -s\theta & 0 \\ 0 & s\theta & c\theta & h \\ 0 & 0 & 0 & 1 \end{bmatrix}$

First, consider converting coordinates from polar range, azimuth, and elevation (i) to a sensor-fixed (s) Cartesian frame:

$v_s = \begin{bmatrix} x_s \\ y_s \\ z_s \end{bmatrix} = \begin{bmatrix} R\,s\psi \\ R\,c\theta c\psi \\ R\,s\theta c\psi \end{bmatrix}$

Consider representing the uncertainty in range and the two scanning angles in terms of the diagonal matrix:

$S_i = \begin{bmatrix} s_{RR} & 0 & 0 \\ 0 & s_{\theta\theta} & 0 \\ 0 & 0 & s_{\psi\psi} \end{bmatrix}$

Here we have implicitly made the standard assumption that our errors are uncorrelated (as well as unbiased) in the natural (polar) frame of reference, because we used a diagonal matrix [1]. The Jacobian of the transform to the sensor frame, with respect to $(R, \theta, \psi)$, is:

$J_i^s = \begin{bmatrix} s\psi & 0 & R c\psi \\ c\theta c\psi & -R s\theta c\psi & -R c\theta s\psi \\ s\theta c\psi & R c\theta c\psi & -R s\theta s\psi \end{bmatrix}$

Hence:

$S_s = J_i^s S_i (J_i^s)^T$

The transformation from this to the groundplane (w) involves the RPY transform and the location of the sensor on the vehicle. Let this homogeneous transform be called $T_s^w$; a simplified version was given above for the case when the vehicle is level and its control point is directly beneath the sensor.

[1] Because every covariance matrix is positive definite, it is diagonalizable. Because it is diagonalizable, there is some coordinate system within which the associated variables are uncorrelated. Hence, whether or not two quantities are correlated depends on the choice of coordinates.
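A sketch of the sensor-frame computation above. Note that the sensor-frame convention for $v_s$ and the Jacobian signs follow the reconstruction given here; they should be re-derived for any real sensor, and the numbers are invented.

```python
import numpy as np

def scanner_covariance(R, theta, psi, s_RR, s_tt, s_pp):
    """S_s = J S_i J^T for polar (R, theta, psi) -> sensor-frame Cartesian,
    assuming v_s = [R sin(psi), R cos(theta)cos(psi), R sin(theta)cos(psi)]."""
    st, ct = np.sin(theta), np.cos(theta)
    sp, cp = np.sin(psi), np.cos(psi)
    # Jacobian of (x_s, y_s, z_s) with respect to (R, theta, psi)
    J = np.array([
        [sp,       0.0,           R * cp],
        [ct * cp, -R * st * cp,  -R * ct * sp],
        [st * cp,  R * ct * cp,  -R * st * sp],
    ])
    S_i = np.diag([s_RR, s_tt, s_pp])        # uncorrelated polar uncertainty
    return J @ S_i @ J.T

S_s = scanner_covariance(R=15.0, theta=-0.3, psi=0.1,
                         s_RR=0.05**2, s_tt=1e-6, s_pp=1e-6)
print(S_s)
```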

This part of the problem is linear, so we use only the rotation matrix $R_s^w$ contained in $T_s^w$:

$S_w = R_s^w S_s (R_s^w)^T$

4.4 Example: Computing Attitude from a Terrain Map

This is a case where the Jacobian is not square. We can use the last result to get the uncertainty of elevation in a terrain map, and from this we can compute the uncertainty in predicted vehicle attitude.

Many forms of hazards that present themselves to a vehicle can be interpreted in terms of a gradient operator over some support length. For example, static stability involves estimation of vehicle pitch and roll under an assumption of terrain following. In the case of pitch, the appropriate length is the vehicle wheelbase; in the case of roll, it is the vehicle width.

Consider the problem of estimating the slope angle of the terrain in a particular direction over a certain distance. The distance $L$ represents the separation of the wheels. Let the slope angle $\theta$ be represented in terms of its tangent, in terms of the front and rear wheel elevations $z_f$ and $z_r$:

$\theta = \frac{z_f - z_r}{L}$

The two elevations can be considered to be a 2-vector whose covariance is [1]:

$S_z = \begin{bmatrix} s_f & 0 \\ 0 & s_r \end{bmatrix}$

Under our uncertainty propagation techniques, the covariance $S_\theta$ of the computed pitch angle is a scalar.

[1] Notationally, variance is often written as either $s_{xx}$ or $s_x^2$, etc.

It can be computed from the covariance $S_z$ of the elevations as:

$S_\theta = J^T S_z J$

where $J$ is, in this case, the gradient vector of pitch with respect to the elevations:

$J = \begin{bmatrix} \partial\theta/\partial z_f \\ \partial\theta/\partial z_r \end{bmatrix} = \frac{1}{L}\begin{bmatrix} 1 \\ -1 \end{bmatrix}$

The uncertainty in pitch is therefore simply:

$\sigma_\theta^2 = \frac{1}{L^2}[\sigma_f^2 + \sigma_r^2]$

This simple formula permits an assessment of confidence in predicted pitch at any point.
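A worked check of the pitch formula with invented numbers (the wheelbase and elevation noise values are assumptions):

```python
import numpy as np

# sigma_theta^2 = (sigma_f^2 + sigma_r^2) / L^2, via S_theta = J^T S_z J
L = 2.5                        # wheelbase in meters (assumed)
sigma_f = sigma_r = 0.05       # elevation std devs in meters (assumed)

J = np.array([[1.0 / L], [-1.0 / L]])     # gradient of theta w.r.t. (z_f, z_r)
S_z = np.diag([sigma_f**2, sigma_r**2])
S_theta = (J.T @ S_z @ J).item()          # scalar covariance of pitch

print(np.degrees(np.sqrt(S_theta)), "deg std dev in pitch")
```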

4.5 Example: Range Error in Rangefinders

We have up to now assumed that the variances in sensor readings are constants. It is often more realistic to express them as functions of something else, especially the sensor readings themselves. For laser rangefinders:

- received laser radiation intensity is inversely proportional to the 4th power of the measured range (from the radar equation)
- range noise can be expected to depend on the received intensity and on the local terrain gradient, because there is a distribution of ranges within the beam footprint.

One reasonable model of all this is to express range uncertainty in terms of the reflectance signal $\rho$, the range $R$, and the beam incidence angle $\alpha$, with the range variance growing with range and shrinking with reflectance and with the cosine of the incidence angle:

$s_{RR} \propto \frac{R^2}{\rho\cos\alpha}$

Such a model is not valid for ranges below 3 meters or so.

[Figure: range standard deviation $\sigma_{RR}$ in meters (up to about 0.6 m) versus range in meters.]

4.6 Example: Stereo Vision

In the case of stereo vision, a completely different model for range error applies. There are several ways to formulate this problem. Matthies considers the image plane coordinates to be the independent variables, whereas we will use their difference (called disparity) here. Consider the geometry of two cameras with parallel image planes:

[Figure: left and right cameras with focal length $f$ and baseline $b$, viewing an object at coordinates $(X_L, Y)$ and $(X_R, Y)$ in the two camera frames.]

The basic stereo triangulation formula for perfectly aligned cameras of epipolar geometry is quoted below.

It can be derived simply from the principle of similar triangles. The image plane projections are:

$x_l = f\frac{X_L}{Y} \qquad x_r = f\frac{X_R}{Y}$

and since $X_L - X_R = b$, the disparity $d = x_l - x_r$ gives the range:

$Y = \frac{bf}{d}$

Uppercase letters signify scene (3D) quantities whereas lowercase letters signify image plane quantities. Once stereo matching is performed, the $Y$ coordinate can be determined from the disparity $d$ of each pixel. Then the $x$ and $z$ coordinates come from the known unit vector through each pixel, which is given by the camera kinematic model.

It will be useful at times to define disparity as the tangent of an angle:

$\delta = \frac{d}{f} = \frac{b}{Y}$

because this hides the dependence of disparity on the focal length of the lens. Then the basic triangulation equation becomes:

$Y = \frac{b}{\delta}$

Thus, if $s_{\delta\delta}$ is the uncertainty in disparity, then the uncertainty in range is:

$s_{yy} = J s_{\delta\delta} J^T$

where the Jacobian in this case is simply the scalar:

$J = \frac{\partial Y}{\partial\delta} = -\frac{b}{\delta^2}$

If we take the disparity uncertainty to be equal to the constant angle subtended by one pixel, then it is clear that the range variance goes as the fourth power of the range itself:

$s_{yy} = \frac{b^2}{\delta^4}s_{\delta\delta} = \frac{Y^4}{b^2}s_{\delta\delta}$

and hence the standard deviation of range goes with range squared, an extremely well-known and important result.

For perfectly aligned cameras, the computed range can be applied to either camera model provided the appropriate pixel coordinates are used. Unlike the case for rangefinders, it is natural to consider the $y$ coordinate known (instead of the true 3D range to an object) because the $y$ coordinate is what is related to disparity:

$v_s = \begin{bmatrix} x_s \\ y_s \\ z_s \end{bmatrix} = \frac{Y}{f}\begin{bmatrix} x_i \\ f \\ z_i \end{bmatrix}$

The imaging Jacobian provides the relationship between the differential quantities in the sensor frame and the associated position change in the image:

$J_i^s = \begin{bmatrix} \partial x_s/\partial Y & \partial x_s/\partial x_i & \partial x_s/\partial z_i \\ \partial y_s/\partial Y & \partial y_s/\partial x_i & \partial y_s/\partial z_i \\ \partial z_s/\partial Y & \partial z_s/\partial x_i & \partial z_s/\partial z_i \end{bmatrix}$

If we consider that the only sensor error is the range measurement, only the first column matters, and the uncertainty in the sensor frame is:

$S_s = J\,s_{yy}\,J^T \qquad J = \frac{\partial v_s}{\partial Y} = \frac{1}{f}\begin{bmatrix} x_i \\ f \\ z_i \end{bmatrix}$

Multiplying out, with $s_{yy} = \frac{Y^4}{b^2}s_{\delta\delta}$, leads to:

$S_s = \frac{Y^4}{f^2 b^2}\,s_{\delta\delta}\begin{bmatrix} x_i^2 & x_i f & x_i z_i \\ x_i f & f^2 & f z_i \\ x_i z_i & f z_i & z_i^2 \end{bmatrix}$
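A small sketch of the $Y^4$ law: with a fixed baseline and a one-pixel disparity uncertainty (both values invented), range standard deviation grows as range squared.

```python
import numpy as np

# s_yy = (Y^4 / b^2) * s_dd, with disparity uncertainty s_dd taken as the
# squared angle subtended by one pixel (illustrative numbers assumed).
b = 0.3                                  # stereo baseline in meters (assumed)
pixel_angle = 1e-3                       # one pixel in radians (assumed)
s_dd = pixel_angle**2                    # disparity variance (angle form)

Y = np.array([2.0, 5.0, 10.0, 20.0])     # ranges in meters
sigma_Y = np.sqrt(Y**4 / b**2 * s_dd)    # std dev goes as Y^2

for y, s in zip(Y, sigma_Y):
    print(f"range {y:5.1f} m -> sigma {s:.4f} m")
```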

5 Computation of Sample Statistics

It can be very important in practice to know how to compute the mean and covariance continuously, on the fly. The batch methods assume that all data is available at the beginning:

mean: $\hat{x} = \frac{1}{n}\sum_{i=1}^{n} x_i$

covariance: $S = \frac{1}{n}\sum_{i=1}^{n} [x_i-\mu][x_i-\mu]^T$

The recursive methods keep a running estimate and add one point at a time as necessary:

mean: $\hat{x}_{k+1} = \frac{k\hat{x}_k + x_{k+1}}{k+1}$

covariance: $S_{k+1} = \frac{k S_k + [x_{k+1}-\mu][x_{k+1}-\mu]^T}{k+1}$

These are related to the Kalman filter.

The calculator methods update statistical accumulators when data arrives and then compute the statistics on demand at any time:

mean: $T_{k+1} = T_k + x_{k+1}$ (when data arrives); $\hat{x}_{k+1} = \frac{T_{k+1}}{k+1}$ (when an answer is necessary)

covariance: $Q_{k+1} = Q_k + [x_{k+1}-\mu][x_{k+1}-\mu]^T$ (when data arrives); $S_{k+1} = \frac{Q_{k+1}}{k+1}$ (when an answer is necessary)

It is generally a bad idea to compute standard deviation (rather than variance) recursively, because doing so requires a square and a square root at each step, which are expensive.
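A sketch of the recursive updates as a small class; as in the text, the population mean is assumed known, and the test data are synthetic.

```python
import numpy as np

class RunningStats:
    """Recursive mean and covariance, one sample at a time,
    assuming the population mean mu is known (as in the text)."""
    def __init__(self, mu):
        self.mu = np.asarray(mu, dtype=float)
        self.k = 0
        self.x_hat = np.zeros_like(self.mu)
        self.S = np.zeros((len(self.mu), len(self.mu)))

    def update(self, x):
        x = np.asarray(x, dtype=float)
        d = (x - self.mu)[:, None]
        # x_hat_{k+1} = (k x_hat_k + x_{k+1}) / (k+1); same pattern for S
        self.x_hat = (self.k * self.x_hat + x) / (self.k + 1)
        self.S = (self.k * self.S + d @ d.T) / (self.k + 1)
        self.k += 1

rng = np.random.default_rng(2)
stats = RunningStats(mu=[0.0, 0.0])
for x in rng.multivariate_normal([0, 0], [[1, 0.3], [0.3, 2]], size=5000):
    stats.update(x)
print(stats.x_hat, "\n", stats.S)
```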

6 Random Processes

Recall that a random variable is a single (possibly vector-valued) value chosen at random from a theoretical collection of all possible points called a population. Conversely, a random process (random sequence, random signal) is a function chosen at random from a theoretical collection of all possible functions of time called an ensemble. Usually, the statistical variation of the ensemble of functions at any time is considered known.

The central thing about a random process is this: when you choose a single outcome, you get a whole function, not just one value. Exactly which signal is chosen is a random variable, and therefore the value of the chosen function at any time across all experiments is a random variable. The values of the signal from sample to sample may or may not be random [1], and if so, may or may not be independent. Some examples:

- Buy a gyroscope and look at its bias torque. It is a constant, but you do not know its value until you look, because they are all different.
- Look at an AC power line. The frequency is a pretty solid 60 Hz. The amplitude may slowly vary from time to time. The phase is a uniformly distributed random variable because it depends on exactly when you look.
- Flip a coin 10 times. This is a uniformly distributed binary random signal.
- Measure the cosmic background radiation from the big bang. Pretty random.

[1] By the Wold decomposition theorem, all random signals can be decomposed into their predictable and unpredictable parts.

6.1 Random Constant

Consider, for example, the random constant process of buying a gyroscope and plotting its bias torque. The bias torques of all gyroscopes manufactured follow some sort of distribution.

To be sure, the bias torque for any particular gyroscope could be plotted on an oscilloscope, and it would produce (something like) a constant time signal.

[Figure: a constant voltage trace versus time.]

Nothing random about that. Suppose, however, that a lot of people buy several gyroscopes at the same time, hook them up, wait ten seconds, and record the bias value. The values of the bias torques at $t = 10$ seconds could be plotted in a histogram.

[Figure: histogram of bias values, frequency versus bias.]

This distribution is the probability distribution of the associated random constant process at $t = 10$ seconds. In this case, while any member of the ensemble of functions is deterministic in time, the choice of the function is random.

In general, there may be a family of related functions of time where some important parameter varies randomly, for instance a family of sinusoids of random amplitude. Even though there is no way to predict which function will be chosen, it may be known that all functions are related by a single random parameter, and from this knowledge it is possible to compute the distribution for the process as a function of time.

6.2 Bias, Stationarity, Ergodicity and Whiteness

A random process is unbiased if its expected (i.e. average) value is zero for all time.

A random process is said to be stationary if the distribution of values of the functions in the ensemble does not vary with time.

Conceptually, a movie of the histogram above for the gyroscopes for each second of time would be a still picture.

An ergodic random process is one where time averaging is equivalent to ensemble averaging, which is to say that everything about the process can be discovered by watching a single function for all time, or by watching all signals at a single instant [1].

A white signal is one which contains all frequencies.

It is clear that fluency with these concepts requires the ability to think about a random process in three different ways:

- in terms of its probability distribution
- in terms of its evolution over time
- in terms of its frequency content.

These different views of the same process will be discussed in the next sections, as well as methods for converting back and forth between them.

[1] This is related to the way that a Fourier or Taylor series can predict the value of a function anywhere if its respective coefficients are known at a single point.

6.3 Correlation Functions

Correlation is a way of thinking about both the probability distributions of a random process and its time evolution. The autocorrelation function for a random process $x(t)$ is defined as:

$R_{xx}(t_1, t_2) = E[x(t_1)x(t_2)]$

so it is just the expected value of the product of two random numbers, each of which can be considered to be a function of time. The result is a function of both times. Let:

$x_1 = x(t_1) \qquad x_2 = x(t_2)$

Then the autocorrelation function is, by definition of expectation:

$R_{xx}(t_1, t_2) = \int\int x_1 x_2 f(x_1, x_2)\,dx_1\,dx_2$

where $f(x_1, x_2)$ is the joint probability distribution.

The autocorrelation function gives the tendency of a function to have the same sign and magnitude (i.e. to be correlated) at two different times. For smooth functions, it is expected that the autocorrelation function would be highest when the two times are close, because a smooth function has little time to change. This idea can be expressed formally in terms of the frequency content of the signal; conversely, the autocorrelation function says a lot about how smooth a function is. Equivalently, the autocorrelation function specifies how fast a function can change, which is equivalent to saying something about the magnitude of the coefficients in its Taylor series, or its Fourier series, or its Fourier transform. All of these things are linked.

The crosscorrelation function relates two different random processes in an analogous way:

$R_{xy}(t_1, t_2) = E[x(t_1)y(t_2)]$

For a stationary process, the correlation function depends only on the difference $\tau = t_1 - t_2$. When the processes involved are unbiased, the correlation functions give the variance and covariance of the indicated random variables. This is easy to see by considering the general formula for variance and setting the mean to zero. Thus, for stationary unbiased processes, the correlation functions are the variances and covariances expressed as a function of the time difference:

$R_{xx}(\tau) = \sigma_{xx}(\tau) \qquad R_{xy}(\tau) = \sigma_{xy}(\tau)$

Also, for stationary unbiased processes, setting the time difference to zero recovers the traditional variance of the associated random variable:

$\sigma_{xx} = R_{xx}(0) \qquad \sigma_{xy} = R_{xy}(0)$

6.4 Power Spectral Density

The power spectral density is just the Fourier transform of the autocorrelation function:

$S_{xx}(j\omega) = \mathcal{F}[R_{xx}(\tau)] = \int_{-\infty}^{\infty} R_{xx}(\tau)e^{-j\omega\tau}\,d\tau$

The power spectral density is a direct measure of the frequency content of a signal, and hence of its power content. Of course, the inverse Fourier transform yields the autocorrelation back again:

$R_{xx}(\tau) = \frac{1}{2\pi}\int_{-\infty}^{\infty} S_{xx}(j\omega)e^{j\omega\tau}\,d\omega$

Similarly, the cross power spectral density function is the Fourier transform of the crosscorrelation:

$S_{xy}(j\omega) = \mathcal{F}[R_{xy}(\tau)] = \int_{-\infty}^{\infty} R_{xy}(\tau)e^{-j\omega\tau}\,d\tau$

6.5 White Noise

White noise is defined as a stationary random process whose power spectral density function is constant; that is, it contains all frequencies at equal amplitude. If the constant spectral amplitude is $A$, then the corresponding autocorrelation function is given by the inverse Fourier transform of a constant, which is the Dirac delta:

$S(j\omega) = A \qquad R(\tau) = A\delta(\tau)$

Thus, knowing the value of a white noise signal at some instant of time says absolutely nothing about its value at any other time, and this is because it is possible for it to jump around at infinite frequency.
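These views of a process can be connected numerically. The sketch below estimates the autocorrelation of a sampled signal and Fourier-transforms it into a crude PSD estimate; for white noise the autocorrelation is a spike at zero lag and the PSD is roughly flat. The signal, sizes, and estimator choices are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(0.0, 1.0, 4096)                  # unbiased white-ish signal

def autocorr(x, max_lag):
    """Biased sample autocorrelation R[tau] = mean(x[t] * x[t+tau])."""
    n = len(x)
    return np.array([np.dot(x[:n - k], x[k:]) / n for k in range(max_lag)])

R = autocorr(x, max_lag=64)
S = np.abs(np.fft.rfft(R))                      # crude PSD estimate from R(tau)

print("R[0] (variance):", R[0])
print("R[1..3] (near zero for white noise):", R[1:4])
print("PSD range (roughly flat):", S.min(), S.max())
```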

6.6 White Noise Process Covariance

Let the white, unbiased, Gaussian random process $x(t)$ have power spectral density $S_p$. Then the variance is:

$\sigma_{xx} = R_{xx}(0) = S_p\delta(0), \qquad R_{xx}(\tau) = \frac{1}{2\pi}\int_{-\infty}^{\infty} S_p e^{j\omega\tau}\,d\omega = S_p\delta(\tau)$

Thus the variance of a white noise process is its spectral amplitude. This one is easy.

6.7 Random Walk Process Covariance

Now suppose that the white noise process represents the time derivative of the real variable of interest, so that:

$\sigma_{\dot{x}\dot{x}} = S_p$

The question to be answered is: what are the remaining variances and covariances, $\sigma_{x\dot{x}}$ and $\sigma_{xx}$? These can be derived from first principles.

Consider that the value of $x$ at any time must be, by definition, the integral of $\dot{x}$:

$\sigma_{xx} = E(x^2) = E\left[\int_0^t \dot{x}(u)\,du\int_0^t \dot{x}(v)\,dv\right]$

This can be written easily as a double integral, since the variables of integration are independent:

$\sigma_{xx} = E\left[\int_0^t\int_0^t \dot{x}(u)\dot{x}(v)\,du\,dv\right]$

Interchanging the order of expectation and integration:

$\sigma_{xx} = \int_0^t\int_0^t E[\dot{x}(u)\dot{x}(v)]\,du\,dv$

But now the integrand is just the autocorrelation function, which, for white noise, is the Dirac delta, so:

$\sigma_{xx} = \int_0^t\int_0^t S_p\delta(u-v)\,du\,dv = \int_0^t S_p\,dv = S_p t$

Thus the variance of the integral of white noise grows linearly with time, and the standard deviation grows with the square root of time. The process $x(t)$ for which the velocity at any instant is a white random process is called a random walk. This technique can be used in general to compute the elements of a covariance matrix given the spectral amplitudes of the noises.

6.8 Integrated Random Walk Process Covariance

If the second derivative of $x(t)$ with respect to time is a white process, then the variance in $x(t)$ is:

$\sigma_{xx} = \int_0^t\int_0^t\int_0^t\int_0^t S_p\,\delta(u-v)\,\delta(s-w)\,du\,dv\,ds\,dw = S_p t^2$

Thus the variance grows with the square of time and the standard deviation grows with time. This is called an integrated random walk.
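A simulation sketch of the random-walk result: integrating discrete white-noise increments (with variance $S_p\,dt$, an assumed discretization) and measuring ensemble variance over time should reproduce $\sigma_{xx} = S_p t$.

```python
import numpy as np

rng = np.random.default_rng(4)
dt, n_steps, n_walks = 0.01, 1000, 2000
S_p = 1.0                                         # spectral amplitude (assumed)

# Discrete-time integration of white noise: increments have variance S_p * dt.
increments = rng.normal(0.0, np.sqrt(S_p * dt), (n_walks, n_steps))
x = np.cumsum(increments, axis=1)                 # ensemble of random walks

t = dt * np.arange(1, n_steps + 1)
var = x.var(axis=0)                               # ensemble variance vs time
print("measured var(t=10):", var[-1], " predicted S_p*t:", S_p * t[-1])
```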

7 Summary

Real measurements incorporate many forms of error, and we can remove them in various ways, including filtering, calibration, and differential observation.

Covariance measures the spread of data or of a population of random vectors. It is expressed as a matrix and computed as the expected value of the outer product of two deviations from the mean. Covariance completely determines the size and shape of the multivariate normal distribution. Contours of constant probability for this distribution are ellipsoids.

A quadratic form involving the Jacobian of a nonlinear transform is used to convert the coordinates of covariance. It is possible, and fairly easy, to convert uncertainty in perceptual data into uncertainty in the quantities derived from it.

A random process is a randomly chosen function. Its values may or may not vary randomly in time.

A white noise signal is a mathematical idealization of a signal which contains all frequencies (and hence has infinite power).

The variance of the random walk process grows linearly with time. Hence a sequence of random steps forward or backward along the x axis results, on average, in a radius from the origin which grows with the square root of time.

8 Notes

Notation: use upper right for exponent and transpose, upper left for the coordinate system, as is traditional. Underline vectors so you can put hats on them.

Central limit theorem: draw a few samples and their means, then draw a histogram of the means and indicate how the histogram gets thinner ($\sigma/\sqrt{n}$) for large $n$.


1 Fundamentals. 1.1 Overview. 1.2 Units: Physics 704 Spring 2018

1 Fundamentals. 1.1 Overview. 1.2 Units: Physics 704 Spring 2018 Physics 704 Spring 2018 1 Fundamentals 1.1 Overview The objective of this course is: to determine and fields in various physical systems and the forces and/or torques resulting from them. The domain of

More information

A Introduction to Matrix Algebra and the Multivariate Normal Distribution

A Introduction to Matrix Algebra and the Multivariate Normal Distribution A Introduction to Matrix Algebra and the Multivariate Normal Distribution PRE 905: Multivariate Analysis Spring 2014 Lecture 6 PRE 905: Lecture 7 Matrix Algebra and the MVN Distribution Today s Class An

More information

2 Two Random Variables

2 Two Random Variables Two Random Variables 19 2 Two Random Variables A number of features of the two-variable problem follow by direct analogy with the one-variable case: the joint probability density, the joint probability

More information

Bayes Filter Reminder. Kalman Filter Localization. Properties of Gaussians. Gaussians. Prediction. Correction. σ 2. Univariate. 1 2πσ e.

Bayes Filter Reminder. Kalman Filter Localization. Properties of Gaussians. Gaussians. Prediction. Correction. σ 2. Univariate. 1 2πσ e. Kalman Filter Localization Bayes Filter Reminder Prediction Correction Gaussians p(x) ~ N(µ,σ 2 ) : Properties of Gaussians Univariate p(x) = 1 1 2πσ e 2 (x µ) 2 σ 2 µ Univariate -σ σ Multivariate µ Multivariate

More information

Statistical Methods in Particle Physics

Statistical Methods in Particle Physics Statistical Methods in Particle Physics Lecture 10 December 17, 01 Silvia Masciocchi, GSI Darmstadt Winter Semester 01 / 13 Method of least squares The method of least squares is a standard approach to

More information

Statistical and Learning Techniques in Computer Vision Lecture 1: Random Variables Jens Rittscher and Chuck Stewart

Statistical and Learning Techniques in Computer Vision Lecture 1: Random Variables Jens Rittscher and Chuck Stewart Statistical and Learning Techniques in Computer Vision Lecture 1: Random Variables Jens Rittscher and Chuck Stewart 1 Motivation Imaging is a stochastic process: If we take all the different sources of

More information

Overfitting, Bias / Variance Analysis

Overfitting, Bias / Variance Analysis Overfitting, Bias / Variance Analysis Professor Ameet Talwalkar Professor Ameet Talwalkar CS260 Machine Learning Algorithms February 8, 207 / 40 Outline Administration 2 Review of last lecture 3 Basic

More information

For a stochastic process {Y t : t = 0, ±1, ±2, ±3, }, the mean function is defined by (2.2.1) ± 2..., γ t,

For a stochastic process {Y t : t = 0, ±1, ±2, ±3, }, the mean function is defined by (2.2.1) ± 2..., γ t, CHAPTER 2 FUNDAMENTAL CONCEPTS This chapter describes the fundamental concepts in the theory of time series models. In particular, we introduce the concepts of stochastic processes, mean and covariance

More information

Gaussian Processes. Le Song. Machine Learning II: Advanced Topics CSE 8803ML, Spring 2012

Gaussian Processes. Le Song. Machine Learning II: Advanced Topics CSE 8803ML, Spring 2012 Gaussian Processes Le Song Machine Learning II: Advanced Topics CSE 8803ML, Spring 01 Pictorial view of embedding distribution Transform the entire distribution to expected features Feature space Feature

More information

ENGR352 Problem Set 02

ENGR352 Problem Set 02 engr352/engr352p02 September 13, 2018) ENGR352 Problem Set 02 Transfer function of an estimator 1. Using Eq. (1.1.4-27) from the text, find the correct value of r ss (the result given in the text is incorrect).

More information

Stochastic Processes. Chapter Definitions

Stochastic Processes. Chapter Definitions Chapter 4 Stochastic Processes Clearly data assimilation schemes such as Optimal Interpolation are crucially dependent on the estimates of background and observation error statistics. Yet, we don t know

More information

ADAPTIVE ANTENNAS. SPATIAL BF

ADAPTIVE ANTENNAS. SPATIAL BF ADAPTIVE ANTENNAS SPATIAL BF 1 1-Spatial reference BF -Spatial reference beamforming may not use of embedded training sequences. Instead, the directions of arrival (DoA) of the impinging waves are used

More information

Gaussian, Markov and stationary processes

Gaussian, Markov and stationary processes Gaussian, Markov and stationary processes Gonzalo Mateos Dept. of ECE and Goergen Institute for Data Science University of Rochester gmateosb@ece.rochester.edu http://www.ece.rochester.edu/~gmateosb/ November

More information

Advanced Optical Communications Prof. R. K. Shevgaonkar Department of Electrical Engineering Indian Institute of Technology, Bombay

Advanced Optical Communications Prof. R. K. Shevgaonkar Department of Electrical Engineering Indian Institute of Technology, Bombay Advanced Optical Communications Prof. R. K. Shevgaonkar Department of Electrical Engineering Indian Institute of Technology, Bombay Lecture No. # 15 Laser - I In the last lecture, we discussed various

More information

Probability, Random Processes and Inference

Probability, Random Processes and Inference INSTITUTO POLITÉCNICO NACIONAL CENTRO DE INVESTIGACION EN COMPUTACION Laboratorio de Ciberseguridad Probability, Random Processes and Inference Dr. Ponciano Jorge Escamilla Ambrosio pescamilla@cic.ipn.mx

More information

Vectors a vector is a quantity that has both a magnitude (size) and a direction

Vectors a vector is a quantity that has both a magnitude (size) and a direction Vectors In physics, a vector is a quantity that has both a magnitude (size) and a direction. Familiar examples of vectors include velocity, force, and electric field. For any applications beyond one dimension,

More information

SUMMARY OF PROBABILITY CONCEPTS SO FAR (SUPPLEMENT FOR MA416)

SUMMARY OF PROBABILITY CONCEPTS SO FAR (SUPPLEMENT FOR MA416) SUMMARY OF PROBABILITY CONCEPTS SO FAR (SUPPLEMENT FOR MA416) D. ARAPURA This is a summary of the essential material covered so far. The final will be cumulative. I ve also included some review problems

More information

1 Measurement Uncertainties

1 Measurement Uncertainties 1 Measurement Uncertainties (Adapted stolen, really from work by Amin Jaziri) 1.1 Introduction No measurement can be perfectly certain. No measuring device is infinitely sensitive or infinitely precise.

More information

the robot in its current estimated position and orientation (also include a point at the reference point of the robot)

the robot in its current estimated position and orientation (also include a point at the reference point of the robot) CSCI 4190 Introduction to Robotic Algorithms, Spring 006 Assignment : out February 13, due February 3 and March Localization and the extended Kalman filter In this assignment, you will write a program

More information