Recursive Estimation


Raffaello D'Andrea, Spring 2018

Problem Set 3: Extracting Estimates from Probability Distributions

Last updated: April 9, 2018

Notes:

Notation: Unless otherwise noted, x, y, and z denote random variables, p_x denotes the probability density function of x, and p_{x|y} denotes the conditional probability density function of x conditioned on y. Both the shorthand notation introduced in Lectures 1 and 2 and the longhand notation are used. The expected value of x and its variance are denoted by E[x] and Var[x], and Pr(Z) denotes the probability that the event Z occurs. A normally distributed random variable x with mean µ and variance σ^2 is denoted by x ~ N(µ, σ^2).

Please report any errors found in this problem set to the teaching assistants (hofermat@ethz.ch or csferrazza@ethz.ch).

Problem Set

Problem 1

Let x and w be scalar CRVs, where x and w are independent: p(x, w) = p(x) p(w), and w is uniformly distributed on [-1, 1]. Let z = 2x + 3w.

a) Calculate p(z|x) using the change of variables.

b) Let x = 1. Verify that Pr(z ∈ [2, 4] | x = 1) = Pr(w ∈ [0, 2/3] | x = 1).

Problem 2

Let w ∈ R^2 and x ∈ R^4 be CRVs, where x and w are independent: p(x, w) = p(x) p(w). The two elements of the measurement noise vector w are w_1 and w_2, and the PDF of w is uniform:

p_w(w) = p_{w_1,w_2}(w_1, w_2) = 1/4 for |w_1| ≤ 1 and |w_2| ≤ 1, and 0 otherwise.

The measurement model is

z = Hx + Gw,  H = [1 0 0 0; 0 0 1 0],  G = [2 0; 0 3].

Calculate the conditional joint PDF p_{z|x} using the change of variables.

Problem 3

Find the maximum likelihood (ML) estimate x̂_ML := arg max_x p(z|x) when

z_i = x + w_i,  i = 1, 2,

where x, w_1, w_2 are mutually independent, with w_i ~ N(0, σ_i^2) and z_i, w_i, x ∈ R.

Problem 4

Solve Problem 3 if the joint PDF of w is

p_w(w) = p_{w_1,w_2}(w_1, w_2) = 1/4 for |w_1| ≤ 1 and |w_2| ≤ 1, and 0 otherwise.

Problem 5

Let x ∈ R^n be an unknown, constant parameter. Furthermore, let w ∈ R^m, n ≤ m, represent the measurement noise with its joint PDF defined by the multivariate Gaussian distribution

p_w(w) = 1 / ((2π)^{m/2} (det Σ)^{1/2}) exp( -(1/2) w^T Σ^{-1} w )

with mean E[w] = 0 and variance E[w w^T] =: Σ = Σ^T ∈ R^{m×m}. Assume that x and w are independent. Let the measurement be

z = Hx + w,

where the matrix H ∈ R^{m×n} has full column rank.

a) Calculate the ML estimate x̂_ML := arg max_x p(z|x) for a diagonal variance

Σ = diag(σ_1^2, σ_2^2, ..., σ_m^2).

Note that in this case, the elements w_i of w are mutually independent.

b) Calculate the ML estimate for non-diagonal Σ, which implies that the elements w_i of w are correlated.

Hint: The following matrix calculus is useful for this problem (see, for example, the Matrix Cookbook for reference):

Recall the definition of a Jacobian matrix from the lecture: Let g(x): R^n → R^m be a function that maps the vector x ∈ R^n to another vector g(x) ∈ R^m. The Jacobian matrix is defined as

∂g(x)/∂x := [∂g_i(x)/∂x_j] ∈ R^{m×n},  i = 1, ..., m,  j = 1, ..., n,

where x_j is the j-th element of x and g_i(x) the i-th element of g(x).

Derivative of a quadratic form:

∂(x^T A x)/∂x = x^T (A + A^T) ∈ R^{1×n},  A ∈ R^{n×n}.

Chain rule: for u = g(x), x ∈ R^n, u ∈ R^n,

∂(u^T A u)/∂x = u^T (A + A^T) ∂u/∂x,  with ∂u/∂x = ∂g(x)/∂x ∈ R^{n×n}.
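The quadratic-form derivative in the hint can be sanity-checked numerically. The following short Python/NumPy sketch (not part of the original problem set; the matrix A, vector x, and seed are arbitrary) compares ∂(x^T A x)/∂x = x^T (A + A^T) against central finite differences:

import numpy as np

# Check d/dx (x^T A x) = x^T (A + A^T) by central finite differences.
rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))
x = rng.standard_normal(n)

analytic = x @ (A + A.T)   # row vector of partial derivatives

eps = 1e-6
numeric = np.array([
    ((x + eps * e) @ A @ (x + eps * e) - (x - eps * e) @ A @ (x - eps * e)) / (2 * eps)
    for e in np.eye(n)
])

print(np.allclose(analytic, numeric, atol=1e-5))   # True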

Problem 6

Find the maximum a posteriori (MAP) estimate x̂_MAP := arg max_x p(z|x) p_x(x) when

z = x + w,  x, z, w ∈ R,

where x and w are independent, x is exponentially distributed,

p_x(x) = 0 for x < 0,  p_x(x) = e^{-x} for x ≥ 0,

and w ~ N(0, 1).

Problem 7

In the derivation of the recursive least squares algorithm, we used the following result:

∂/∂A trace(A B A^T) = 2 A B  if B = B^T,

where trace(·) is the sum of the diagonal elements of a matrix. Prove that the above holds for the two-dimensional case, i.e. A, B ∈ R^{2×2}.

Problem 8

In the derivation of the recursive least squares algorithm, we also used the following result:

∂/∂A trace(A B) = B^T.

Prove that the above holds for A, B ∈ R^{2×2}.

Problem 9

Consider the recursive least squares algorithm presented in lecture, where the estimation error at step k was shown to be

e(k) = (I - K(k) H(k)) e(k-1) - K(k) w(k).

Furthermore, E[w(k)] = 0, E[e(k)] = 0, and {x, w(1), w(2), ...} are mutually independent. Show that the estimation error variance P(k) := Var[e(k)] = E[e(k) e^T(k)] is given by

P(k) = (I - K(k) H(k)) P(k-1) (I - K(k) H(k))^T + K(k) R(k) K^T(k),

where R(k) := Var[w(k)].

Problem 10

Use the results from Problems 7 and 8 to derive the gain matrix K(k) for the recursive least squares algorithm that minimizes the mean squared error (MMSE) by solving

0 = ∂/∂K(k) trace( (I - K(k) H(k)) P(k-1) (I - K(k) H(k))^T + K(k) R(k) K^T(k) ).

Problem 11

Apply the recursive least squares algorithm when

z(k) = x + w(k),  k = 1, 2, ...,   (1)

where

E[x] = 0, Var[x] = 1, and E[w(k)] = 0, Var[w(k)] = 1.   (2)

The random variables {x, w(1), w(2), ...} are mutually independent and z(k), w(k), x ∈ R.

a) Express the gain matrix K(k) and variance P(k) as explicit functions of k.

b) Simulate the algorithm for normally distributed continuous random variables w(k) and x with the above properties.

c) Simulate the algorithm for x and w(k) being discrete random variables that take the values -1 and 1 with equal probability. Note that x and w(k) satisfy (2). Simulate for values of k up to a few thousand.

Problem 12

Consider the random variables and measurement equation from Problem 11c, i.e.

z(k) = x + w(k),  k = 1, 2, ...,   (3)

for mutually independent x and w(k) that take the values -1 and 1 with equal probability.

a) What is the MMSE estimate of x at time k given all past measurements z(1), z(2), ..., z(k)? Remember that the MMSE estimate is given by

x̂_MMSE(k) := arg min_{x̂} E[ (x̂ - x)^2 | z(1), z(2), ..., z(k) ].

b) Now consider the MMSE estimate that is restricted to the sample space of x, i.e.

x̂_MMSE(k) := arg min_{x̂ ∈ X} E[ (x̂ - x)^2 | z(1), z(2), ..., z(k) ],

where X denotes the sample space of x. Calculate the estimate of x at time k given all past measurements z(1), z(2), ..., z(k).

Sample solutions

Problem 1

a) The change of variables formula for conditional PDFs when z = g(x, w) is

p_{z|x}(z|x) = p_{w|x}(h(z, x)|x) / |∂g/∂w (x, h(z, x))|,

where w = h(z, x) is the unique solution to z = g(x, w). We solve z = g(x, w) = 2x + 3w for w to find

h(z, x) = w = (z - 2x)/3

and apply the change of variables:

p_{z|x}(z|x) = p_{w|x}(h(z, x)|x) / |∂g/∂w (x, h(z, x))| = (1/3) p_{w|x}((z - 2x)/3 | x) = (1/3) p_w((z - 2x)/3),

where the last equality uses the independence of x and w. With the given uniform PDF of w, the PDF p_{z|x} can then be stated explicitly:

p_{z|x}(z|x) = 1/6 for 2x - 3 ≤ z ≤ 2x + 3, and 0 otherwise.

b) We begin by computing Pr(z ∈ [2, 4] | x = 1):

Pr(z ∈ [2, 4] | x = 1) = ∫_2^4 p_{z|x}(z|1) dz = ∫_2^4 (1/6) dz = 1/3.

The probability Pr(w ∈ [0, 2/3] | x = 1) is

Pr(w ∈ [0, 2/3] | x = 1) = Pr(w ∈ [0, 2/3]) = ∫_0^{2/3} p_w(w) dw = ∫_0^{2/3} (1/2) dw = 1/3,

which is indeed identical to Pr(z ∈ [2, 4] | x = 1).
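The result can also be checked by simple Monte Carlo simulation. The sketch below (in Python/NumPy; sample size and seed are arbitrary) draws w uniformly on [-1, 1] for fixed x = 1 and confirms that z = 2x + 3w is uniform with density 1/6 on [2x - 3, 2x + 3], so that Pr(z ∈ [2, 4] | x = 1) ≈ 1/3:

import numpy as np

rng = np.random.default_rng(1)
x = 1.0
w = rng.uniform(-1.0, 1.0, size=1_000_000)   # w ~ U[-1, 1]
z = 2.0 * x + 3.0 * w                        # measurement model of Problem 1

print(np.mean((z >= 2.0) & (z <= 4.0)))      # approx. 1/3 = Pr(z in [2, 4] | x = 1)
print(np.mean((z >= 2.5) & (z <= 3.5)))      # approx. 1/6: probability of a unit-length interval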

Problem 2

Analogous to the lecture notes, the multivariate change of variables formula for conditional PDFs is

p_{z|x}(z|x) = p_{w|x}(h(z, x)|x) / |det G|,

where w = h(z, x) is the unique solution to z = Hx + Gw. The function h(z, x) is given by

h(z, x) = w = G^{-1}(z - Hx) = [ (z_1 - x_1)/2 ; (z_2 - x_3)/3 ],

where we used x = (x_1, x_2, x_3, x_4) and z = (z_1, z_2). Applying the change of variables, it follows that

p_{z|x}(z|x) = p_{w|x}(h(z, x)|x) / |det G| = (1/6) p_{w|x}(G^{-1}(z - Hx) | x) = (1/6) p_w((z_1 - x_1)/2, (z_2 - x_3)/3).

Using the PDF p_w, this can then be expressed as

p_{z|x}(z|x) = 1/24 for x_1 - 2 ≤ z_1 ≤ x_1 + 2 and x_3 - 3 ≤ z_2 ≤ x_3 + 3, and 0 otherwise.

Problem 3

For a normally distributed random variable with w_i ~ N(0, σ_i^2), we have

p(w_i) = 1/(√(2π) σ_i) exp( -w_i^2 / (2 σ_i^2) ).

Using the change of variables, and by the conditional independence of z_1, z_2 given x, we can write

p(z_1, z_2 | x) = p(z_1|x) p(z_2|x) ∝ exp( -(z_1 - x)^2/(2 σ_1^2) - (z_2 - x)^2/(2 σ_2^2) ).

Differentiating with respect to x, and setting to 0, leads to

∂p(z_1, z_2 | x)/∂x |_{x = x̂_ML} = 0  ⟹  (z_1 - x̂_ML)/σ_1^2 + (z_2 - x̂_ML)/σ_2^2 = 0

⟹  x̂_ML = (z_1/σ_1^2 + z_2/σ_2^2) / (1/σ_1^2 + 1/σ_2^2),

which is a weighted sum of the measurements. Note that for σ_1 = 0: x̂_ML = z_1, and for σ_2 = 0: x̂_ML = z_2.

Problem 4

Using the total probability theorem, we find that

p_{w_i}(w_i) = 1/2 for w_i ∈ [-1, 1], and 0 otherwise,

and p(w_1, w_2) = p(w_1) p(w_2). Using the change of variables,

p_{z_i|x}(z_i|x) = 1/2 if |z_i - x| ≤ 1, and 0 otherwise.

We need to solve for p_{z_1,z_2|x}(z_1, z_2|x) = p_{z_1|x}(z_1|x) p_{z_2|x}(z_2|x). Consider the different cases:

1. |z_1 - z_2| > 2, for which the likelihoods do not overlap, see the following figure:

[Figure: the two likelihoods p_{z_1|x}(z_1|x) and p_{z_2|x}(z_2|x) as functions of x, each of height 1/2 and width 2, centered at z_1 and z_2, with no overlap.]

Since no value for x can explain both z_1 and z_2 given the model z_i = x + w_i, we find p_{z_1,z_2|x}(z_1, z_2|x) = 0.

2. |z_1 - z_2| ≤ 2, where the likelihoods overlap, which is shown in the following figure:

[Figure: p(z_1, z_2|x) = p(z_1|x) p(z_2|x), equal to 1/4 on the overlap of the intervals [z_1 - 1, z_1 + 1] and [z_2 - 1, z_2 + 1].]

We find

p_{z_1,z_2|x}(z_1, z_2|x) = 1/4 for x ∈ [z_1 - 1, z_1 + 1] ∩ [z_2 - 1, z_2 + 1], and 0 otherwise,

where A ∩ B denotes the intersection of the sets A and B. Therefore the estimate is

x̂_ML ∈ [z_1 - 1, z_1 + 1] ∩ [z_2 - 1, z_2 + 1].

All values in this range are equally likely, and any value in the range is therefore a valid maximum likelihood estimate x̂_ML.

Problem 5

a) We apply the multivariate change of coordinates:

p_{z|x}(z|x) = p_{w|x}(h(z, x)|x) / |det G| = p_{w|x}(z - Hx | x) = p_w(z - Hx)
            = 1 / ((2π)^{m/2} (det Σ)^{1/2}) exp( -(1/2) (z - Hx)^T Σ^{-1} (z - Hx) ).

We proceed similarly as in class, for z, w ∈ R^m and x ∈ R^n:

H = [H_1; H_2; ...; H_m],  H_i = [h_{i1} h_{i2} ... h_{in}],  where h_{ij} ∈ R.

Using

p_{z|x}(z|x) ∝ exp( -(1/2) sum_{i=1}^m (z_i - H_i x)^2 / σ_i^2 ),

differentiating with respect to x_j, and setting to zero yields

sum_{i=1}^m (z_i - H_i x̂_ML) h_{ij} / σ_i^2 = 0,  j = 1, 2, ..., n,

where [h_{1j} ... h_{mj}]^T is the j-th column of H. In matrix form, with W := diag(1/σ_1^2, 1/σ_2^2, ..., 1/σ_m^2), this reads

H^T W (z - H x̂_ML) = 0,

from which we have

x̂_ML = (H^T W H)^{-1} H^T W z.

Note that this can also be interpreted as a weighted least squares problem: with w(x) := z - Hx,

x̂ = arg min_x w(x)^T W w(x),

where W is a weighting of the measurements. The result in both cases is the same!

b) We find the maximizing x̂_ML by setting the derivative of p(z|x) with respect to x to zero. With the given facts about matrix calculus, we find

0 = ∂/∂x p_{z|x}(z|x) ∝ ∂/∂x exp( -(1/2) (z - Hx)^T Σ^{-1} (z - Hx) )
  ∝ (z - Hx)^T (Σ^{-1} + Σ^{-T}) ∂(z - Hx)/∂x = -2 (z - Hx)^T Σ^{-1} H,

where we use the fact that Σ^{-T} = (Σ^T)^{-1} = Σ^{-1}. Finally,

(z - Hx)^T Σ^{-1} H = 0  ⟺  z^T Σ^{-1} H = x^T H^T Σ^{-1} H,

and after we transpose the left and right hand sides, we find H^T Σ^{-1} z = H^T Σ^{-1} H x. It follows that

x̂_ML = (H^T Σ^{-1} H)^{-1} H^T Σ^{-1} z.
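A minimal numerical sketch of the result in b), with randomly generated H, Σ, and z (these data are illustrative only, not from the problem set): the closed-form estimate (H^T Σ^{-1} H)^{-1} H^T Σ^{-1} z coincides with ordinary least squares applied to the whitened measurements.

import numpy as np

rng = np.random.default_rng(2)
m, n = 6, 3
H = rng.standard_normal((m, n))            # full column rank with probability 1
x_true = rng.standard_normal(n)
L = rng.standard_normal((m, m))
Sigma = L @ L.T + m * np.eye(m)            # symmetric positive definite noise covariance
w = rng.multivariate_normal(np.zeros(m), Sigma)
z = H @ x_true + w

Sigma_inv = np.linalg.inv(Sigma)
x_ml = np.linalg.solve(H.T @ Sigma_inv @ H, H.T @ Sigma_inv @ z)

# Equivalent view: ordinary least squares on the "whitened" data.
Linv = np.linalg.inv(np.linalg.cholesky(Sigma))
x_wls, *_ = np.linalg.lstsq(Linv @ H, Linv @ z, rcond=None)
print(np.allclose(x_ml, x_wls))            # True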

Problem 6

We calculate the conditional probability density function

p_{z|x}(z|x) = 1/√(2π) exp( -(z - x)^2 / 2 ).

Both the measurement likelihood p_{z|x}(z|x) and the prior p_x(x) are shown in the following figure:

[Figure: the prior p_x(x) (decaying exponential for x ≥ 0) and the measurement likelihood p_{z|x}(z|x) (Gaussian centered at z) as functions of x.]

We find

p_{z|x}(z|x) p_x(x) = 0 for x < 0,
p_{z|x}(z|x) p_x(x) = 1/√(2π) exp(-x) exp( -(z - x)^2 / 2 ) for x ≥ 0.

We calculate the MAP estimate x̂_MAP that maximizes the above function for a given z. We note that p_{z|x}(z|x) p_x(x) > 0 for x ≥ 0, and the maximizing value therefore satisfies x̂_MAP ≥ 0. Since the function is discontinuous at x = 0, we have to consider two cases:

1. The maximum is at the boundary, x̂_MAP = 0. It follows that

p_{z|x}(z|x̂_MAP) p_x(x̂_MAP) = 1/√(2π) exp( -z^2 / 2 ).   (4)

2. The maximum is where the derivative of the likelihood p_{z|x}(z|x) p_x(x) with respect to x is zero, i.e.

0 = ∂/∂x ( p_{z|x}(z|x) p_x(x) ) |_{x = x̂_MAP},

from which follows -1 + (z - x̂_MAP) = 0, i.e. x̂_MAP = z - 1 at the maximum. Substituting back into the likelihood, we obtain

p_{z|x}(z|x̂_MAP) p_x(x̂_MAP) = 1/√(2π) exp( -(z - 1) ) exp( -1/2 ) = 1/√(2π) exp( -z + 1/2 ).   (5)

This case requires x̂_MAP = z - 1 ≥ 0, i.e. z ≥ 1, and for z ≥ 1 the value (5) is at least as large as (4). The MAP estimate is therefore

x̂_MAP = 0 for z < 1,  x̂_MAP = z - 1 for z ≥ 1.
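The case distinction can be verified by maximizing p(z|x) p_x(x) on a fine grid of x ≥ 0. The following Python sketch (grid range and test values of z chosen arbitrarily) compares the grid maximizer with the closed-form answer x̂_MAP = max(0, z - 1):

import numpy as np

def posterior_unnormalized(x, z):
    # exponential prior e^{-x} for x >= 0, Gaussian likelihood N(z; x, 1)
    return np.exp(-x) * np.exp(-0.5 * (z - x) ** 2) / np.sqrt(2 * np.pi)

xs = np.linspace(0.0, 10.0, 200_001)
for z in (-1.0, 0.5, 1.0, 2.5):
    x_grid = xs[np.argmax(posterior_unnormalized(xs, z))]
    x_closed = max(0.0, z - 1.0)
    print(z, x_grid, x_closed)   # grid maximizer matches the closed form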

Problem 7

We begin by evaluating the individual elements of AB and ABA^T, with a_{ij} denoting the (i, j)-th element of matrix A (element in the i-th row and j-th column):

(AB)_{ij} = a_{i1} b_{1j} + a_{i2} b_{2j}
(ABA^T)_{ij} = (a_{i1} b_{11} + a_{i2} b_{21}) a_{j1} + (a_{i1} b_{12} + a_{i2} b_{22}) a_{j2}.

From this we obtain the trace as

trace(ABA^T) = (ABA^T)_{11} + (ABA^T)_{22}
             = (a_{11} b_{11} + a_{12} b_{21}) a_{11} + (a_{11} b_{12} + a_{12} b_{22}) a_{12}
             + (a_{21} b_{11} + a_{22} b_{21}) a_{21} + (a_{21} b_{12} + a_{22} b_{22}) a_{22}

and, using b_{12} = b_{21} (since B = B^T), the derivatives

∂trace(ABA^T)/∂a_{11} = 2 a_{11} b_{11} + a_{12} b_{21} + a_{12} b_{12} = 2 (a_{11} b_{11} + a_{12} b_{21}) = 2 (AB)_{11}
∂trace(ABA^T)/∂a_{12} = 2 a_{12} b_{22} + a_{11} b_{12} + a_{11} b_{21} = 2 (AB)_{12}
∂trace(ABA^T)/∂a_{21} = 2 a_{21} b_{11} + a_{22} b_{21} + a_{22} b_{12} = 2 (AB)_{21}
∂trace(ABA^T)/∂a_{22} = 2 a_{22} b_{22} + a_{21} b_{12} + a_{21} b_{21} = 2 (AB)_{22}.

We therefore proved that ∂/∂A trace(ABA^T) = 2AB holds for A, B ∈ R^{2×2}.

Problem 8

We proceed similarly to the previous problem:

trace(AB) = (AB)_{11} + (AB)_{22} = a_{11} b_{11} + a_{12} b_{21} + a_{21} b_{12} + a_{22} b_{22},

that is,

∂trace(AB)/∂a_{11} = b_{11} = (B^T)_{11}
∂trace(AB)/∂a_{12} = b_{21} = (B^T)_{12}
∂trace(AB)/∂a_{21} = b_{12} = (B^T)_{21}
∂trace(AB)/∂a_{22} = b_{22} = (B^T)_{22},

and therefore ∂/∂A trace(AB) = B^T for A, B ∈ R^{2×2}.
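Both trace identities can also be spot-checked with finite differences. The sketch below (Python/NumPy, random 2×2 matrices with B symmetrized; not part of the original solutions) compares the element-wise numerical gradients with 2AB and B^T:

import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((2, 2))
B = rng.standard_normal((2, 2))
B = 0.5 * (B + B.T)                         # make B symmetric

def num_grad(f, A, eps=1e-6):
    # element-wise central finite differences of a scalar function of A
    G = np.zeros_like(A)
    for i in range(2):
        for j in range(2):
            E = np.zeros_like(A)
            E[i, j] = eps
            G[i, j] = (f(A + E) - f(A - E)) / (2 * eps)
    return G

print(np.allclose(num_grad(lambda M: np.trace(M @ B @ M.T), A), 2 * A @ B))  # True
print(np.allclose(num_grad(lambda M: np.trace(M @ B), A), B.T))              # True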

Problem 9

By definition, P(k) = E[e(k) e^T(k)]. Substituting the estimation error e(k) = (I - K(k)H(k)) e(k-1) - K(k) w(k) in the definition, we obtain

P(k) = E[ ( (I - K(k)H(k)) e(k-1) - K(k) w(k) ) ( (I - K(k)H(k)) e(k-1) - K(k) w(k) )^T ]
     = E[ (I - K(k)H(k)) e(k-1) e^T(k-1) (I - K(k)H(k))^T + K(k) w(k) w^T(k) K^T(k)
          - (I - K(k)H(k)) e(k-1) w^T(k) K^T(k) - K(k) w(k) e^T(k-1) (I - K(k)H(k))^T ]
     = (I - K(k)H(k)) P(k-1) (I - K(k)H(k))^T + K(k) R(k) K^T(k)
       - (I - K(k)H(k)) E[e(k-1) w^T(k)] K^T(k) - K(k) E[w(k) e^T(k-1)] (I - K(k)H(k))^T.

Because e(k-1) and w(k) are independent,

E[w(k) e^T(k-1)] = E[w(k)] E[e^T(k-1)].

Since E[w(k)] = 0, E[w(k)] E[e^T(k-1)] = 0, and it follows that

P(k) = (I - K(k)H(k)) P(k-1) (I - K(k)H(k))^T + K(k) R(k) K^T(k).

Problem 10

First, we expand

(I - K(k)H(k)) P(k-1) (I - K(k)H(k))^T + K(k) R(k) K^T(k)
  = P(k-1) - K(k)H(k)P(k-1) - P(k-1)H^T(k)K^T(k) + K(k) ( H(k)P(k-1)H^T(k) + R(k) ) K^T(k).

We use the results from Problems 7 and 8 and evaluate the summands individually, since trace(A + B) = trace(A) + trace(B). We find

∂/∂K(k) trace( P(k-1) ) = 0
∂/∂K(k) trace( K(k)H(k)P(k-1) ) = ( H(k)P(k-1) )^T = P(k-1)H^T(k),  note that P(k-1) = P^T(k-1)
∂/∂K(k) trace( P(k-1)H^T(k)K^T(k) ) = P(k-1)H^T(k),  note that trace(A) = trace(A^T)
∂/∂K(k) trace( K(k) ( H(k)P(k-1)H^T(k) + R(k) ) K^T(k) ) = 2 K(k) ( H(k)P(k-1)H^T(k) + R(k) ).

Note that R^T(k) = R(k) and ( H(k)P(k-1)H^T(k) )^T = H(k)P(k-1)H^T(k). Finally,

K(k) ( H(k)P(k-1)H^T(k) + R(k) ) = P(k-1)H^T(k)
K(k) = P(k-1)H^T(k) ( H(k)P(k-1)H^T(k) + R(k) )^{-1}.
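Combining the two results gives the recursive least squares measurement update. The following Python sketch (array dimensions and the example measurement are assumptions for illustration) implements one update with the gain from Problem 10 and the variance recursion from Problem 9:

import numpy as np

def rls_update(x_hat, P, z, H, R):
    # Gain: K = P H^T (H P H^T + R)^{-1}
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    # Estimate update and variance update (form shown in Problem 9)
    x_hat = x_hat + K @ (z - H @ x_hat)
    I_KH = np.eye(P.shape[0]) - K @ H
    P = I_KH @ P @ I_KH.T + K @ R @ K.T
    return x_hat, P

# Example: scalar state measured directly with unit noise, as in Problem 11.
x_hat, P = np.zeros(1), np.eye(1)
x_hat, P = rls_update(x_hat, P, z=np.array([0.7]), H=np.eye(1), R=np.eye(1))
print(x_hat, P)   # P becomes 1/2 after the first measurement, matching (7) below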

13 Note that R T k Rk and HkPk H T k T HkPk H T k. Finally, Kk HkPk H T k + Rk Pk H T k. Kk Pk H T k HkPk H T k + Rk Problem a We initialize P 0, ˆx0 0, Hk and Rk k,,..., and get: Kk Pk Pk + 6 P k Kk Pk + Kk Pk + Pk + Pk + Pk Pk + Pk. From 7 and P 0, we have P, P 3,..., and we can show by induction see below that P k + k, Kk + k Proof by induction: Assume from 6. P k + k. 8 Start with P 0 +0, which satisfies 8. Now we show for k + : P k + + k + k + P k + P k +k + by assumption 8 +k q.e.d. + k + b The Matlab code is available on the class web page. from 7 Recursive least squares works, and provides a reasonable estimate after several thousand iterations. The recursive least squares estimator is the optimal estimator for Gaussian noise. c The Matlab code is available on the class web page. Recursive least squares works, but we can do much better with the proposed nonlinear estimator derived in Problem. 7 3

Problem 12

a) The MMSE estimate is

x̂_MMSE(k) = arg min_{x̂} sum_{x ∈ {-1, 1}} (x̂ - x)^2 p_{x|z(1),...,z(k)}(x | z(1), z(2), ..., z(k)).   (9)

Let us first consider the MMSE estimate at time k = 1. The only possible combinations are:

 x  | w(1) | z(1)
 -1 |  -1  |  -2
 -1 |   1  |   0
  1 |  -1  |   0
  1 |   1  |   2

We can now construct the conditional PDF of x when conditioned on z(1):

 z(1) | p_{x|z(1)}(-1 | z(1)) | p_{x|z(1)}(1 | z(1))
  -2  |           1           |          0
   0  |          1/2          |         1/2
   2  |           0           |          1

Therefore, if z(1) = -2 or z(1) = 2, we know x precisely: x = -1 and x = 1, respectively. If z(1) = 0, it is equally likely that x = -1 or x = 1. The MMSE estimate is

for z(1) = -2:  x̂_MMSE = arg min_{x̂} (x̂ + 1)^2 = -1,  since p_{x|z(1)}(-1 | -2) = 1,
for z(1) =  2:  x̂_MMSE = arg min_{x̂} (x̂ - 1)^2 = 1,   since p_{x|z(1)}(1 | 2) = 1,
for z(1) =  0:  x̂_MMSE = arg min_{x̂} ( (1/2)(x̂ + 1)^2 + (1/2)(x̂ - 1)^2 ) = 0.

Note that the same result could have been obtained by evaluating

x̂_MMSE(1) = E[x | z(1) = 0] = sum_{x} x p_{x|z(1)}(x | 0) = -1 · (1/2) + 1 · (1/2) = 0.

At time k = 2, we could construct a table for the conditional PDF of x when conditioned on z(1) and z(2) as above. However, note that the MMSE estimate is determined for all subsequent k if one measures either 2 or -2 at some time k̄:

for z(k̄) =  2:  x̂_MMSE(k) = 1,   k = k̄, k̄ + 1, ...,
for z(k̄) = -2:  x̂_MMSE(k) = -1,  k = k̄, k̄ + 1, ...

On the other hand, a measurement z(k) = 0 does not provide any information, i.e. x = -1 and x = 1 stay equally likely. As soon as a subsequent measurement z(k̄), k̄ > k, has a value of -2 or 2, x is known precisely, which is also reflected by the MMSE estimate. As long as we keep measuring 0, we do not gain additional information and

x̂_MMSE(k) = 0 if z(l) = 0 for all l = 1, 2, ..., k.

Matlab code for this nonlinear MMSE estimator is available on the class webpage.

b) The solution is identical to part a), except that the estimate is the minimizing value from the sample space of x:

x̂_MMSE(k) = arg min_{x̂ ∈ X} sum_{x ∈ {-1, 1}} (x̂ - x)^2 p_{x|z(1),...,z(k)}(x | z(1), ..., z(k)).   (10)

The same argument holds that, if one measurement is either -2 or 2, we know x precisely. However, for z(1) = 0, the estimate is

x̂_MMSE = arg min_{x̂ ∈ X} ( (1/2)(x̂ + 1)^2 + (1/2)(x̂ - 1)^2 ) = ±1.

Note that X = {-1, 1} and therefore we cannot differentiate the right hand side of Equation (10) and set it to zero to find the minimizing value. This implies that the estimate is not necessarily equal to E[x|z], which is the case when measuring z(1) = 0: here E[x | z(1) = 0] = 0, whereas the MMSE estimator restricted to X yields ±1 for x̂_MMSE (both values achieve the same cost). The remainder of the solution is identical to part a) above, in that the estimate remains x̂_MMSE = ±1 until a measurement is either -2 or 2.
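For comparison with the Matlab implementation mentioned above, a compact Python sketch of the nonlinear estimator (loop length and seed are arbitrary) is: keep the estimate at 0 (part a) or at an arbitrary element of {-1, 1} (part b) until a measurement of 2 or -2 reveals x exactly.

import numpy as np

rng = np.random.default_rng(5)
x = rng.choice([-1, 1])                   # true value of x
x_hat_a, x_hat_b = 0.0, 1.0               # part a) estimate; part b) estimate (either +1 or -1)
for k in range(1, 21):
    z = x + rng.choice([-1, 1])           # z(k) = x + w(k), w(k) in {-1, 1}
    if z == 2:
        x_hat_a, x_hat_b = 1.0, 1.0       # x revealed exactly
    elif z == -2:
        x_hat_a, x_hat_b = -1.0, -1.0     # x revealed exactly
    # z == 0 carries no information: keep the previous estimates.

print(x, x_hat_a, x_hat_b)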
