1 KALMAN FILTERING

2 [Figure: two plots of the filter output — "Velocity (Kalman Filter)", velocity (knots) against time (sec), and "Kalman Filter Accelerations", acceleration (m/s²) against time.]
3 Kalman Filtering is a least squares estimation process to determine the state of a system from a sequence of measurements at discrete intervals of time. In the previous images the system is the yacht Macquarie Innovation (attempting to break the World Sailing Speed Record) and the state is the yacht's position, velocity and acceleration. The measurements were GPS positions (E,N) at 0.1 sec time intervals. A Kalman Filter is a set of (matrix) equations applied in a recursive sequence.
4 References for these lectures are available at … in the folder Least Squares: [1] Least Squares and Kalman Filtering (91 pages); [2] Notes on Least Squares (224 pages); [3] Combined Least Squares (21 pages); [4] Kalman Filter and Surveying Applications (30 pages). Matrix algebra is used extensively in least squares and Kalman filtering; a useful summary is contained in Appendix A of [2].
5 REVIEW OF LEAST SQUARES. Least Squares was developed by C.F. Gauss in the late 18th century (1795). Gauss concluded: "the most probable system of values of the quantities will be that in which the sum of the squares of the differences between actually observed and computed values, multiplied by numbers that measure the degrees of precision, is a minimum." This proposition is often stated as: the best estimate is that which makes the sum of the squares of the weighted residuals a minimum.
6 Here's how this least squares principle works. Consider a set of measurements of a quantity $x_1, x_2, \ldots, x_n$, each having weight $w_1, w_2, \ldots, w_n$. Denote the best estimate of this quantity as $q$; according to the measurement model (measurement + residual = best estimate) we write
$$x_1 + v_1 = q, \quad x_2 + v_2 = q, \quad \ldots, \quad x_n + v_n = q$$
which can be re-arranged as
$$v_1 = q - x_1, \quad v_2 = q - x_2, \quad \ldots, \quad v_n = q - x_n$$
7 Now define a function
$$\varphi = \text{sum of squares of weighted residuals} = \sum_{k=1}^{n} w_k v_k^2 = w_1(q-x_1)^2 + w_2(q-x_2)^2 + \cdots + w_n(q-x_n)^2$$
We say that $\varphi$ is a function of $q$, or $\varphi = \varphi(q)$, and the minimum value of this function can be found by equating the derivative $d\varphi/dq$ to zero.
8
$$\frac{d\varphi}{dq} = 2w_1(q-x_1) + 2w_2(q-x_2) + \cdots + 2w_n(q-x_n) = 0$$
Cancelling the 2's, expanding and re-arranging gives the equation for $q$:
$$q = \frac{w_1x_1 + w_2x_2 + \cdots + w_nx_n}{w_1 + w_2 + \cdots + w_n} = \frac{\sum_{k=1}^{n} w_kx_k}{\sum_{k=1}^{n} w_k}$$
This is the formula for the weighted mean, and $q$ is a best estimate (or a least squares estimate).
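As a quick numerical illustration of the weighted mean, here is a minimal MATLAB sketch (the measurement values and weights are invented for the example):

x = [15.024; 15.028; 15.021];   % measurements of the same quantity (invented)
w = [4; 1; 2];                  % weights (measures of precision, invented)
q = sum(w.*x)/sum(w);           % weighted mean: q = sum(w_k x_k)/sum(w_k)
v = q - x;                      % residuals v_k = q - x_k
phi = sum(w.*v.^2);             % sum of squares of weighted residuals (a minimum)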
9 Using matrix algebra we write
$$\varphi = \text{sum of squares of weighted residuals} = v^T W v$$
where $v$ is a vector of residuals and $W$ is a symmetric weight matrix. For the weighted mean, $W$ is an $n \times n$ diagonal matrix with the weights $w_k$ on the leading diagonal. The superscript $T$, as in $v^T$, denotes the transpose of the vector (or matrix). The least squares principle is: $\varphi = v^T W v \to \min$.
10 The process of minimizing the function $\varphi$ by equating derivatives to zero is common to all least squares estimations, including Kalman Filtering, yielding equations that can be solved for best estimates. This example of the weighted mean is taken from [2], Ch. 2. A more general treatment of adjustment problems, where measurements (or observations) and parameters (or unknowns) are linked by non-linear relationships, follows. Detailed explanations are given in [1], [2] and [3].
11 Suppose the following set of non-linear equations represents the mathematical model in an adjustment:
$$F(\hat{l}, \hat{x}) = 0 \qquad \text{(A)}$$
$l$ is a vector of $n$ observations and $x$ is a vector of $u$ parameters. $\hat{l}$ and $\hat{x}$ refer to estimates derived from a least squares process such that
$$\hat{l} = l + v \quad \text{and} \quad \hat{x} = x + \delta x \qquad \text{(B)}$$
where $v$ and $\delta x$ are vectors of small corrections (or residuals).
12 The observations $l$ have a priori cofactor matrix $Q$ containing estimates of variances and covariances, and the parameters $x$ are treated as observables with a priori cofactor matrix $Q_{xx}$. (The idea of treating parameters as observable quantities allows sequential processing.) Cofactor matrices $Q$, covariance matrices $\Sigma$ and weight matrices $W$ are related as follows:
$$\Sigma = \sigma_0^2\, Q \quad \text{and} \quad W = Q^{-1}$$
where $\sigma_0^2$ is the variance factor.
13 Linearizing (A) using Taylor's theorem and ignoring 2nd and higher-order terms gives
$$F(\hat{l}, \hat{x}) = F(l, x) + \left.\frac{\partial F}{\partial \hat{l}}\right|_{l,x}(\hat{l} - l) + \left.\frac{\partial F}{\partial \hat{x}}\right|_{l,x}(\hat{x} - x) = 0$$
Using (B) we write this linearized equation as
$$A_{(m,n)}\, v_{(n,1)} + B_{(m,u)}\, \delta x_{(u,1)} = f_{(m,1)} \qquad \text{(C)}$$
This equation represents a system of $m$ equations that will be used to estimate $u$ parameters from $n$ observations, where $n \geq m \geq u$.
14
$$A_{(m,n)} = \frac{\partial F}{\partial \hat{l}} \quad \text{and} \quad B_{(m,u)} = \frac{\partial F}{\partial \hat{x}}$$
are coefficient matrices of partial derivatives evaluated at $l, x$, and $f_{(m,1)} = \{-F(l, x)\}$ is a vector of numeric terms. The least squares solution of equations (C) can be obtained by minimizing the function
$$\varphi = v^T W v + \delta x^T W_{xx}\, \delta x - 2k^T\left(Av + B\delta x - f\right)$$
where $k_{(m,1)}$ is a vector of Lagrange multipliers (at present unknown) and the 2 is added for convenience.
15 $\varphi = \varphi(v, \delta x)$ and the partial derivatives equated to zero are
$$\frac{\partial \varphi}{\partial v} = 2v^T W - 2k^T A = 0 \quad \text{and} \quad \frac{\partial \varphi}{\partial \delta x} = 2\delta x^T W_{xx} - 2k^T B = 0$$
These equations can be simplified by dividing both sides by 2 and transposing to give
$$Wv - A^T k = 0 \quad \text{and} \quad W_{xx}\,\delta x - B^T k = 0$$
Note that $(ABC)^T = C^T B^T A^T$. Combining these equations with (C) and arranging in a matrix gives
16
$$\begin{bmatrix} W & -A^T & 0 \\ A & 0 & B \\ 0 & -B^T & W_{xx} \end{bmatrix} \begin{bmatrix} v \\ k \\ \delta x \end{bmatrix} = \begin{bmatrix} 0 \\ f \\ 0 \end{bmatrix} \qquad \text{(D)}$$
Equation (D) can be solved by the following matrix reduction process. Consider the partitioned matrix equation $Py = u$ given as
$$\begin{bmatrix} P_{11} & P_{12} \\ P_{21} & P_{22} \end{bmatrix} \begin{bmatrix} y_1 \\ y_2 \end{bmatrix} = \begin{bmatrix} u_1 \\ u_2 \end{bmatrix} \qquad \text{(i)}$$
17 which can be expanded to give $P_{11}y_1 + P_{12}y_2 = u_1$, so that
$$y_1 = P_{11}^{-1}\left(u_1 - P_{12}y_2\right) \qquad \text{(ii)}$$
Eliminating $y_1$ by substituting (ii) into (i) gives
$$\begin{bmatrix} P_{11} & P_{12} \\ P_{21} & P_{22} \end{bmatrix} \begin{bmatrix} P_{11}^{-1}(u_1 - P_{12}y_2) \\ y_2 \end{bmatrix} = \begin{bmatrix} u_1 \\ u_2 \end{bmatrix}$$
Expanding the second row of this matrix equation gives $P_{21}P_{11}^{-1}(u_1 - P_{12}y_2) + P_{22}y_2 = u_2$, and an expression for $y_2$ is given by
18
$$\left(P_{22} - P_{21}P_{11}^{-1}P_{12}\right)y_2 = u_2 - P_{21}P_{11}^{-1}u_1 \qquad \text{(iii)}$$
Applying this matrix reduction process to (D) (see [1] and [3]) yields the following equations:
$$\delta x = \left(N + W_{xx}\right)^{-1} t \qquad \text{(x)}$$
$$k = W_e\left(f - B\delta x\right), \qquad v = QA^T k \qquad \text{(y)}$$
$$\hat{x} = x + \delta x, \qquad \hat{l} = l + v \qquad \text{(z)}$$
where $Q_e = AQA^T$, $W_e = Q_e^{-1}$, $N = B^T W_e B$ and $t = B^T W_e f$.
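The solution equations (x), (y) and (z) translate directly into MATLAB. The following is a minimal sketch, not the author's least_squares.m, using an invented line-fit example written in combined form ($F = B\hat{x} - \hat{l} = 0$, so $A = -I$):

% Combined Least Squares sketch: fit y = b*x + c written as F = B*xhat - lhat = 0.
% All numeric values are invented for illustration.
xd = [0; 10; 20; 30];              % error-free abscissae
l  = [0.05; 1.02; 2.07; 2.98];     % observations (measured ordinates)
A  = -eye(4);                      % dF/dl
B  = [xd ones(4,1)];               % dF/dx
Q  = eye(4);                       % a priori cofactor matrix of l
Wxx = zeros(2,2);                  % parameters are not observed
x0 = [0; 0];                       % approximate parameters
f  = -(B*x0 - l);                  % vector of numeric terms f = -F(l,x0)
Qe = A*Q*A';  We = inv(Qe);        % equivalent cofactor and weight matrices
N  = B'*We*B;  t = B'*We*f;
dx = (N + Wxx)\t;                  % eq. (x)
k  = We*(f - B*dx);                % Lagrange multipliers
v  = Q*A'*k;                       % eq. (y)
xhat = x0 + dx;  lhat = l + v;     % eq. (z)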
19 Equations (x), (y) and (z) are the solutions to Combined Least Squares problems. If the problems involve non-linear functions (as they often do in surveying) then the process of solution may be iterative, repeated until the corrections $\delta x$ become negligible. References [1], [2] and [3] have examples of iterative solutions. An important and useful property of least squares solutions is that estimates of the precisions of parameters $\hat{x}$, observations $\hat{l}$ and residuals $v$ are easily obtained. These estimates are contained in the cofactor matrices
20
$$Q_{\hat{x}\hat{x}} = \left(N + W_{xx}\right)^{-1}$$
$$Q_{\hat{l}\hat{l}} = Q + QA^T W_e B\, Q_{\hat{x}\hat{x}}\, B^T W_e AQ - QA^T W_e AQ$$
$$Q_{vv} = Q - Q_{\hat{l}\hat{l}}$$
The matrix reduction process outlined above (a matrix partitioning solution to a set of equations) is important. It is used in the development of the Kalman Filter equations, where a large hyper-matrix is reduced by repeated applications of the reduction process (see reference [1]).
21 The cofactor matrices $Q_{\hat{x}\hat{x}}$, $Q_{\hat{l}\hat{l}}$ and $Q_{vv}$ are obtained using Propagation of Variances (or cofactors), which is an important tool in the analysis of measurements. Suppose that computed quantities $y$ are linear functions of variables $x$ (with cofactor matrix $Q_{xx}$) and constant terms $b$ ($A$ is a coefficient matrix):
$$y = Ax + b$$
Then Propagation of Variances gives
$$Q_{yy} = AQ_{xx}A^T \qquad (*)$$
22 If $y$ are non-linear functions of $x$ then Propagation of Variances gives
$$Q_{yy} = J_{yx}\, Q_{xx}\, J_{yx}^T \qquad (**)$$
where $J_{yx}$ is the Jacobian matrix of partial derivatives. For most of the cofactor propagation used in least squares we are dealing with linear or linearized functions, so (*) is most often used. Cofactor propagation is very important in the development of the Kalman Filter equations.
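A small numeric illustration of (*) in MATLAB (the cofactor values are invented): propagating the cofactor matrix of plane coordinates through a rotation, which is a linear function $y = Ax$ with $b = 0$.

Qxx = [0.004 0.001; 0.001 0.009];          % cofactor matrix of x = (E,N), invented
th  = 30*pi/180;                           % rotation angle
A   = [cos(th) -sin(th); sin(th) cos(th)]; % coefficient matrix of y = A*x
Qyy = A*Qxx*A'                             % propagated cofactor matrix, eq. (*)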
23 The general case of Combined Least Squares [see solutions (x), (y) and (z) above] may be simplified into the following three cases. [1] Combined Least Squares: for most least squares problems the parameters $x$ are not observables and $W_{xx} = 0$. The system of equations is
$$Av + B\delta x = f$$
and the solutions are
24
$$\delta x = N^{-1}t, \qquad k = W_e\left(f - B\delta x\right), \qquad v = QA^T k$$
$$\hat{x} = x + \delta x, \qquad \hat{l} = l + v$$
where $Q_e = AQA^T$, $W_e = Q_e^{-1}$, $N = B^T W_e B$ and $t = B^T W_e f$. [See reference [1], Example 2.]
25 [2] Indirect Least Squares (Parametric case): in this case $W_{xx} = 0$ and $A = I$ (the Identity matrix). The system of equations is
$$v + B\delta x = f$$
and the solutions are
$$\delta x = N^{-1}t, \qquad v = f - B\delta x$$
$$\hat{x} = x + \delta x, \qquad \hat{l} = l + v$$
where $N = B^T W B$ and $t = B^T W f$. [See reference [1], Examples 1, 3 and 4.]
26 [3] Observations Only Least Squares (Condition case): in this case $W_{xx} = 0$ and $B = 0$. The system of equations is
$$Av = f$$
and the solutions are
$$k = W_e f, \qquad v = QA^T k, \qquad \hat{l} = l + v$$
[See reference [1], Example 3.]
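As an illustration of the observations-only case, here is a minimal MATLAB sketch with invented values: three measured height differences around a closed level loop must sum to zero, giving a single condition equation $Av = f$.

l = [1.245; 0.653; -1.902];   % measured height differences (m), invented; misclose -4 mm
A = [1 1 1];                  % condition: sum of adjusted differences = 0
f = -A*l;                     % numeric term (the negative misclose)
Q = eye(3);                   % equal precision observations
We = inv(A*Q*A');             % equivalent weight matrix
k = We*f;                     % Lagrange multiplier
v = Q*A'*k;                   % residuals: misclose distributed equally
lhat = l + v;                 % adjusted observations; sum(lhat) is now zero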
27 THE KALMAN FILTER. The Kalman Filter is an estimation process developed by Rudolf E. Kalman in the early 1960s. Its first practical use was in the navigation system of the Apollo spacecraft. The Kalman Filter can be regarded as a logical extension of Gauss' original least squares technique, and the derivation of the equations of the Filter set out below follows the usual surveying development of minimizing a function of sums of squares of weighted residuals.
28 For this development it is useful to consider a ship moving in a navigation channel. At regular intervals of time (epochs $t_k$) a measurement system on board determines the distances to known navigation beacons. We wish to determine the position and velocity of the ship as it moves down the channel. Suppose we write a system of equations as
$$v_{k-1} + B_{k-1}\hat{x}_{k-1} = f_{k-1} \qquad \text{primary at } t_{k-1}$$
$$v_k + B_k\hat{x}_k = f_k \qquad \text{primary at } t_k$$
$$\hat{x}_k = T\hat{x}_{k-1} + v_m \qquad \text{secondary or dynamic} \qquad \text{(a)}$$
29 $\hat{x}$ is the state vector containing the parameters of the system, in this case four parameters: the position $(E, N)$ and the velocity $(\dot{E}, \dot{N})$. $v$ is the vector of residuals associated with the measurements $l$ such that $\hat{l} = l + v$. $B$ is a coefficient matrix, $f$ is a vector of numeric terms derived from the measurements $l$, $T$ is the Transition matrix, and $v_m$ is a vector of residuals associated with the dynamic or secondary model.
30 The measurements $l$ and the model corrections $v_m$ have associated cofactor matrices $Q$ and $Q_m$, and weight matrices $W = Q^{-1}$. It is useful to assume the vector $v_m$ is the product of two matrices
$$v_m = Hw \qquad \text{(b)}$$
where $H$ is a coefficient matrix and $w$ is a vector of quantities known as the system driving noise. Assuming the system driving noise $w$ has a cofactor matrix $Q_w$, applying Propagation of Variances to (b)
31 gives the cofactor matrix of the secondary model
$$Q_m = HQ_wH^T$$
Applying the least squares principle to equations (a), the function $\varphi$ to be minimized is
$$\varphi = v_{k-1}^T W_{k-1} v_{k-1} + v_k^T W_k v_k + v_m^T W_m v_m - 2k_1^T\left(v_{k-1} + B_{k-1}x_{k-1} - f_{k-1}\right) - 2k_2^T\left(v_k + B_kx_k - f_k\right) - 2k_3^T\left(x_k - Tx_{k-1} - v_m\right)$$
32 Now $\varphi = \varphi(v_{k-1}, v_k, v_m, x_{k-1}, x_k)$, and differentiating $\varphi$ with respect to the variables gives
$$\frac{\partial\varphi}{\partial v_{k-1}} = 2v_{k-1}^T W_{k-1} - 2k_1^T = 0 \qquad \frac{\partial\varphi}{\partial v_k} = 2v_k^T W_k - 2k_2^T = 0 \qquad \frac{\partial\varphi}{\partial v_m} = 2v_m^T W_m + 2k_3^T = 0$$
$$\frac{\partial\varphi}{\partial x_{k-1}} = -2k_1^T B_{k-1} + 2k_3^T T = 0 \qquad \frac{\partial\varphi}{\partial x_k} = -2k_2^T B_k - 2k_3^T = 0$$
33 Transposing, re-arranging and dividing by 2 gives
$$W_{k-1}v_{k-1} - k_1 = 0 \qquad W_kv_k - k_2 = 0 \qquad W_mv_m + k_3 = 0$$
$$B_{k-1}^T k_1 - T^T k_3 = 0 \qquad B_k^T k_2 + k_3 = 0$$
Combining these equations with equations (a), the primary and secondary model equations, in the form of a hyper-matrix gives
34
$$\begin{bmatrix}
W_{k-1} & 0 & 0 & -I & 0 & 0 & 0 & 0 \\
0 & W_k & 0 & 0 & -I & 0 & 0 & 0 \\
0 & 0 & W_m & 0 & 0 & I & 0 & 0 \\
0 & 0 & 0 & B_{k-1}^T & 0 & -T^T & 0 & 0 \\
0 & 0 & 0 & 0 & B_k^T & I & 0 & 0 \\
I & 0 & 0 & 0 & 0 & 0 & B_{k-1} & 0 \\
0 & I & 0 & 0 & 0 & 0 & 0 & B_k \\
0 & 0 & -I & 0 & 0 & 0 & -T & I
\end{bmatrix}
\begin{bmatrix} v_{k-1} \\ v_k \\ v_m \\ k_1 \\ k_2 \\ k_3 \\ x_{k-1} \\ x_k \end{bmatrix}
=
\begin{bmatrix} 0 \\ 0 \\ 0 \\ 0 \\ 0 \\ f_{k-1} \\ f_k \\ 0 \end{bmatrix}
\qquad \text{hyper-matrix (c)}$$
35 Partial solution $\hat{x}_{k-1}$: using only the observations $l_{k-1}$ at time $t_{k-1}$, a partial solution $\hat{x}_{k-1}$ can be obtained from the hyper-matrix (c) by deleting all matrices associated with the primary model at $t_k$ and the secondary model. This gives (after some re-arrangement)
$$\begin{bmatrix} W_{k-1} & -I & 0 \\ I & 0 & B_{k-1} \\ 0 & B_{k-1}^T & 0 \end{bmatrix}\begin{bmatrix} v_{k-1} \\ k_1 \\ x_{k-1} \end{bmatrix} = \begin{bmatrix} 0 \\ f_{k-1} \\ 0 \end{bmatrix}$$
36 This is identical in form to the previous equation (D) and the solution is
$$\hat{x}_{k-1} = N_{k-1}^{-1}\, t_{k-1} \qquad \text{(d)}$$
where $N_{k-1} = B_{k-1}^T W_{k-1} B_{k-1}$ and $t_{k-1} = B_{k-1}^T W_{k-1} f_{k-1}$, with the cofactor matrix of the partial solution $Q_{\hat{x}_{k-1}} = N_{k-1}^{-1}$.
37 Solution for $\hat{x}_k$: the hyper-matrix (c) is partitioned in a way that allows for the elimination of the vectors $v_{k-1}$, $v_k$ and $v_m$ by the reduction process set out previously. After this the parameters $x_{k-1}$ are eliminated (by the same process), yielding
$$\begin{bmatrix} Q_m + TN_{k-1}^{-1}T^T & I & 0 \\ I & 0 & B_k^T \\ 0 & B_k & Q_k \end{bmatrix}\begin{bmatrix} k_3 \\ x_k \\ k_2 \end{bmatrix} = \begin{bmatrix} TN_{k-1}^{-1}t_{k-1} \\ 0 \\ f_k \end{bmatrix} \qquad \text{(e)}$$
38 Now, following equations (a), we write the predicted state vector $x'_k = T\hat{x}_{k-1}$, and substituting (d) gives
$$x'_k = TN_{k-1}^{-1}\, t_{k-1} \qquad \text{(f)}$$
Applying Propagation of Variances gives (after some reduction) the cofactor matrix of the predicted state
$$Q'_{x_k} = TQ_{\hat{x}_{k-1}}T^T + Q_m \qquad \text{(g)}$$
The quantities in (f) and (g) appear in (e) and we may write
39
$$\begin{bmatrix} Q'_{x_k} & I & 0 \\ I & 0 & B_k^T \\ 0 & B_k & Q_k \end{bmatrix}\begin{bmatrix} k_3 \\ x_k \\ k_2 \end{bmatrix} = \begin{bmatrix} x'_k \\ 0 \\ f_k \end{bmatrix}$$
We may apply the matrix reduction process to this matrix to obtain (after some manipulation)
$$\hat{x}_k = x'_k + Q'_{x_k}B_k^T\left(Q_k + B_kQ'_{x_k}B_k^T\right)^{-1}\left(f_k - B_kx'_k\right)$$
40 In Kalman Filtering, the expression
$$K = Q'_{x_k}B_k^T\left(Q_k + B_kQ'_{x_k}B_k^T\right)^{-1}$$
is called the gain matrix. Using this result gives the filtered state vector
$$\hat{x}_k = x'_k + K\left(f_k - B_kx'_k\right)$$
Applying Propagation of Variances to this equation gives (after some manipulation)
$$Q_{\hat{x}_k} = \left(I - KB_k\right)Q'_{x_k}\left(I - KB_k\right)^T + KQ_kK^T = \left(I - KB_k\right)Q'_{x_k}$$
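A quick numeric check in MATLAB (all values invented) that the two expressions for $Q_{\hat{x}}$ agree when $K$ is the gain matrix defined above:

Qx = [2.0 0.3; 0.3 1.0];                       % predicted state cofactor, invented
B  = [1 0];                                    % one measurement of the first state
Qk = 0.25;                                     % measurement cofactor, invented
K  = Qx*B'/(Qk + B*Qx*B');                     % gain matrix
Q1 = (eye(2)-K*B)*Qx*(eye(2)-K*B)' + K*Qk*K';  % long form
Q2 = (eye(2)-K*B)*Qx;                          % short form
max(abs(Q1(:)-Q2(:)))                          % agrees to rounding error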
41 THE KALMAN FILTER. (1) Compute the predicted state vector at $t_k$:
$$x'_k = T\hat{x}_{k-1}$$
(2) Compute the predicted state cofactor matrix at $t_k$:
$$Q'_{x_k} = TQ_{\hat{x}_{k-1}}T^T + Q_m$$
(3) Compute the Kalman Gain matrix:
$$K = Q'_{x_k}B_k^T\left(Q_k + B_kQ'_{x_k}B_k^T\right)^{-1}$$
42 (4) Compute the filtered state vector by updating the predicted state with the measurements at $t_k$:
$$\hat{x}_k = x'_k + K\left(f_k - B_kx'_k\right)$$
(5) Compute the filtered state cofactor matrix:
$$Q_{\hat{x}_k} = \left(I - KB_k\right)Q'_{x_k}$$
Go to step (1) and repeat the process for the next measurement epoch $t_{k+1}$.
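Steps (1) to (5) can be sketched as a loop. The following minimal MATLAB example runs a one-dimensional constant-velocity filter on invented measurements; T, Qm, B and Q are assumed constant from epoch to epoch:

dt = 1;                                  % time interval between epochs (sec)
T  = [1 dt; 0 1];                        % transition matrix (position, velocity)
Qm = 0.01*[dt^4/4 dt^3/2; dt^3/2 dt^2];  % secondary model cofactor matrix (invented scale)
B  = [1 0];                              % the position is measured directly
Q  = 0.5;                                % measurement cofactor (invented)
f  = [0.9; 2.1; 2.8; 4.2; 4.9];          % measurements at epochs 1..5 (invented)
x  = [f(1); 0];                          % initial filtered state
Qx = eye(2);                             % initial filtered state cofactor
for k = 2:numel(f)
    xp  = T*x;                           % (1) predicted state
    Qxp = T*Qx*T' + Qm;                  % (2) predicted state cofactor
    K   = Qxp*B'/(Q + B*Qxp*B');         % (3) gain matrix
    x   = xp + K*(f(k) - B*xp);          % (4) filtered state
    Qx  = (eye(2) - K*B)*Qxp;            % (5) filtered state cofactor
end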
43 MATRIX REDUCTION EXERCISE. If a matrix equation can be partitioned as follows
$$\begin{bmatrix} P_{11} & P_{12} \\ P_{21} & P_{22} \end{bmatrix}\begin{bmatrix} y_1 \\ y_2 \end{bmatrix} = \begin{bmatrix} u_1 \\ u_2 \end{bmatrix} \qquad \text{(i)}$$
then (after some manipulation) $y_1$ is eliminated and
$$\left(P_{22} - P_{21}P_{11}^{-1}P_{12}\right)y_2 = u_2 - P_{21}P_{11}^{-1}u_1 \qquad \text{(ii)}$$
Use this matrix reduction process to obtain an expression for $\delta x$ from the system of equations expressed in matrix form as
44
$$\begin{bmatrix} W & -A^T & 0 \\ A & 0 & B \\ 0 & -B^T & W_{xx} \end{bmatrix}\begin{bmatrix} v \\ k \\ \delta x \end{bmatrix} = \begin{bmatrix} 0 \\ f \\ 0 \end{bmatrix} \qquad \text{(A)}$$
45 SOLUTION. Partitioning (A) in the same way as (i), with $P_{11} = W$, $y_1 = v$ and $u_1 = 0$, then eliminating $v$ by applying (ii) gives
$$\left(\begin{bmatrix} 0 & B \\ -B^T & W_{xx} \end{bmatrix} - \begin{bmatrix} A \\ 0 \end{bmatrix}W^{-1}\begin{bmatrix} -A^T & 0 \end{bmatrix}\right)\begin{bmatrix} k \\ \delta x \end{bmatrix} = \begin{bmatrix} f \\ 0 \end{bmatrix}$$
46 Remembering that $Q = W^{-1}$, the equation can be simplified as
$$\begin{bmatrix} AQA^T & B \\ -B^T & W_{xx} \end{bmatrix}\begin{bmatrix} k \\ \delta x \end{bmatrix} = \begin{bmatrix} f \\ 0 \end{bmatrix} \qquad \text{(B)}$$
Again applying (ii) to the partitioned equation (B) gives
$$\left(W_{xx} - \left(-B^T\left(AQA^T\right)^{-1}B\right)\right)\delta x = 0 - \left(-B^T\left(AQA^T\right)^{-1}f\right)$$
and re-arranging gives the normal equations
$$\left(B^T\left(AQA^T\right)^{-1}B + W_{xx}\right)\delta x = B^T\left(AQA^T\right)^{-1}f$$
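The reduction formula (ii) is easy to verify numerically; a minimal MATLAB check with invented matrices:

P11 = [4 1; 1 3];  P12 = [1 0; 2 1];               % invented partitions
P21 = [0 2; 1 1];  P22 = [5 1; 1 4];
u1  = [1; 2];      u2 = [3; 4];
P   = [P11 P12; P21 P22];  u = [u1; u2];
y   = P\u;                                         % direct solution of P*y = u
y2  = (P22 - P21*(P11\P12))\(u2 - P21*(P11\u1));   % y2 from the reduced system (ii)
max(abs(y(3:4) - y2))                              % agrees to rounding error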
47 LINE OF BEST FIT EXERCISE. The diagram overleaf shows part of an Abstract of Field Notes with offsets from occupation to a traverse line in Whitten Street. The bearing of the traverse line is … . Use Indirect Least Squares (and least_squares.m) to determine the bearing of the line of best fit through the occupation in Whitten Street. You may consider the chainages (linear distances along Whitten Street) to be free of error.
48 [Figure: Abstract of Field Notes (distances in metres) showing post-and-wire occupation with offsets to the traverse line between A and B in Whitten Street.]
49 SOLUTION. The equation of the Line of Best Fit is $y = bx + c$. The y-values are the offsets, the x-values are derived from the chainages, and $c$ is a constant. Reverse the chainages to create distances from B; these will be the x-values (considered error free). The offsets (the y-values) are the measurements, considered to be affected by small random errors and assumed to be of equal precision. A table of values is shown overleaf.
50 Table of values: Point, x (m), y (m) and weight w, for the points running from 1 (at B) through to the last point (at A). The observation equation (or measurement model) is $y + v = bx + c$.
51 The set of equations, one for each measurement, is
$$y_1 + v_1 = bx_1 + c, \quad y_2 + v_2 = bx_2 + c, \quad \ldots, \quad y_5 + v_5 = bx_5 + c$$
These equations can be re-arranged in the matrix form $v + B\hat{x} = f$, where the residuals $v$, the unknowns $(b, c)$ in $\hat{x}$, and the coefficient matrix $B$ are on the left-hand-side of the equals sign.
52 The measurements (or functions of measurements) are on the right-hand-side, forming the vector of numeric terms $f$:
$$\begin{bmatrix} v_1 \\ v_2 \\ v_3 \\ v_4 \\ v_5 \end{bmatrix} + \begin{bmatrix} x_1 & 1 \\ x_2 & 1 \\ x_3 & 1 \\ x_4 & 1 \\ x_5 & 1 \end{bmatrix}\begin{bmatrix} b \\ c \end{bmatrix} = \begin{bmatrix} y_1 \\ y_2 \\ y_3 \\ y_4 \\ y_5 \end{bmatrix}$$
53 Now create an ASCII text file that can be used by the MATLAB function least_squares.m. This text file has a standard format. Save the text file as best_fit_exercise.txt:

% Data file for function "least_squares.m"
% Line of Best Fit Exercise
%   B(1)    B(2)    f    w
54 The MATLAB function least_squares.m uses the standard solution for Indirect Least Squares
$$x = N^{-1}t, \qquad v = f - Bx, \qquad \hat{l} = l + v$$
where $N = B^TWB$ and $t = B^TWf$. The function will read the data text file and place the results in another text file with the extension .out; in this case best_fit_exercise.out.
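As a cross-check, the whole line-of-best-fit computation can also be sketched directly in a few lines of MATLAB; the offsets below are invented stand-ins for the field values:

x = [0; 20; 40; 60; 80];            % distances from B along the traverse (m), invented
y = [0.81; 0.76; 0.72; 0.66; 0.61]; % measured offsets (m), invented
B = [x ones(size(x))];              % coefficient matrix
W = eye(numel(x));                  % equal precision: unit weights
N = B'*W*B;  t = B'*W*y;            % normal equations
p = N\t;                            % p(1) = b (slope), p(2) = c (intercept)
v = y - B*p;                        % residuals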
55 Part of the output file is shown below.
Vector of solutions x: …
Vector of residuals v: …
56 The slope of the line is $b = \tan\theta$ (angles are considered +ve anticlockwise from the x-axis), and applying $\theta$ to the bearing of the traverse gives the bearing of the line of best fit. The offset from the traverse to the line of best fit at B is the value of $c$ (in metres), and the offset at A is obtained by evaluating the fitted line at the chainage of A.
57 Part of the MATLAB function least_squares.m. Read the data file into memory:

filepath = strcat('d:\temp\','*.txt');
[infilename,inpathname] = uigetfile(filepath);
infilepath = strcat(inpathname,infilename);
rootname = strtok(infilename,'.');
outfilename = strcat(rootname,'.out');
outfilepath = strcat(inpathname,outfilename);
%
% 1. Load the data into an array whose name is the rootname
% 2. Set fileTemp = rootname
% 3. Copy columns of data into individual arrays
%
load(infilepath);
fileTemp = eval(rootname);
58 Establish the matrices B, f and W:

% get the number of rows (n) and the number of columns (m)
[n,m] = size(fileTemp);
% set the number of unknowns
u = m-2;
% copy the data into B, f and w
B = fileTemp(:,1:u);
f = fileTemp(:,u+1);
w = fileTemp(:,m);
% set the elements of the weight matrix W
W = zeros(n,n);
for k = 1:n
    W(k,k) = w(k);
end
59 Form the normal equations $Nx = t$:

% form the normal equation coefficient matrix N
N = B'*W*B;
% form the vector of numeric terms t
t = B'*W*f;

Solve the normal equations $x = N^{-1}t$:

% solve the system Nx = t for the unknown parameters x
Ninv = inv(N);
x = Ninv*t;

Compute the residuals $v = f - Bx$:

% compute residuals
v = f - (B*x);
60 Compute the variance factor $\sigma_0^2$ and the cofactor matrices $Q_{xx}$, $Q_{vv}$, $Q_{ll}$:

% compute the variance factor
vfact = (f'*W*f - x'*t)/(n-u);
% compute the cofactor matrix of the adjusted parameters
Qxx = Ninv;
% compute the cofactor matrix of the residuals
Qvv = inv(W)-B*Ninv*B';
% compute the cofactor matrix of the adjusted observations
Qll = inv(W)-Qvv;

Open the output file and print out the results:

% open the output file and print the data
fidout = fopen(outfilepath,'wt');
61 KALMAN FILTER: SHIP IN CHANNEL EXERCISE
[Figure: plan of the navigation channel showing the true path of the ship and the three navigation beacons A, B and C.]
62 The diagram shows the path of a ship in a navigation channel at a constant speed and heading. Navigation equipment on board automatically measures distances to transponders at three navigation beacons A, B and C. The measured distances have an estimated standard deviation of 1 metre, and each set of measurements occurs at 60-second time intervals. The coordinates of the navigation beacons are A: E …, N …; B: E …, N …; C: E …, N … .
63 Transponder measurements to the navigation beacons at 60-second intervals:
Measurement Epoch | A | B | C
… | … | … | …
The exercise is to establish the matrix forms of the primary and secondary measurement models suitable for processing the measurements in a Kalman Filter.
64 Primary Measurement Model: $v_k + B_k\hat{x}_k = f_k$. The measurements can be written as
$$l + v = \hat{l} \qquad \text{(a)}$$
where $l$ is the $(m,1)$ vector of measurements (transponder distances), $v$ is the $(m,1)$ vector of residuals, and $\hat{l}$ are estimates of the true (but unknown) values of the measurements. $m = 3$ is the number of measurements at each epoch.
65 The $\hat{l}$ are non-linear functions of the coordinates $E, N$ of the beacons A, B and C and the filtered state coordinates $\hat{E}_k, \hat{N}_k$ of the ship at time $t_k$:
$$\hat{l}_j = \hat{l}_j(\hat{E}_k, \hat{N}_k, E_j, N_j) = \sqrt{(\hat{E}_k - E_j)^2 + (\hat{N}_k - N_j)^2} \quad \text{for } j = A, B, C$$
Expanding this equation into a series using Taylor's theorem, and ignoring higher-order terms, gives
66
$$\hat{l} = l' + \frac{\partial\hat{l}}{\partial\hat{E}}\left(\hat{E} - E'\right) + \frac{\partial\hat{l}}{\partial\hat{N}}\left(\hat{N} - N'\right)$$
where $E', N'$ are approximate coordinates of the ship at $t_k$, $l'$ is an approximate distance computed using $E', N'$ and the coordinates of the beacon, and the partial derivatives are
$$\frac{\partial\hat{l}}{\partial\hat{E}} = \frac{E' - E_j}{l'_j} = d_j, \qquad \frac{\partial\hat{l}}{\partial\hat{N}} = \frac{N' - N_j}{l'_j} = c_j \quad \text{for } j = A, B, C$$
67 For a single distance we may re-arrange (a) as $v = \hat{l} - l$, and substituting the Taylor series approximation for $\hat{l}$ and re-arranging gives
$$v_j - d_j\hat{E}_k - c_j\hat{N}_k = \left(l'_j - l_j\right) - d_jE'_k - c_jN'_k \quad \text{for } j = A, B, C$$
This primary measurement model can be expressed in terms of the filtered state vector $\hat{x}_k$ at time $t_k$ in the matrix form
68
$$\begin{bmatrix} v_A \\ v_B \\ v_C \end{bmatrix} + \begin{bmatrix} -d_A & -c_A & 0 & 0 \\ -d_B & -c_B & 0 & 0 \\ -d_C & -c_C & 0 & 0 \end{bmatrix}\begin{bmatrix} \hat{E} \\ \hat{N} \\ \dot{E} \\ \dot{N} \end{bmatrix}_k = \begin{bmatrix} (l'_A - l_A) - d_AE' - c_AN' \\ (l'_B - l_B) - d_BE' - c_BN' \\ (l'_C - l_C) - d_CE' - c_CN' \end{bmatrix}$$
or $v_k + B_k\hat{x}_k = f_k$ (b).
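A sketch of forming $B_k$ and $f_k$ for one epoch in MATLAB. The beacon coordinates, measured distances and approximate ship position below are invented placeholders:

EN = [  0 1000;                   % beacon A: E, N (invented)
      800 1200;                   % beacon B (invented)
     1500  400];                  % beacon C (invented)
lm = [1100.0; 950.0; 850.0];      % measured distances to A, B, C (invented)
E0 = 700;  N0 = 350;              % approximate ship position at t_k (invented)
dE = E0 - EN(:,1);  dN = N0 - EN(:,2);
l0 = sqrt(dE.^2 + dN.^2);         % approximate distances l'
d  = dE./l0;  c = dN./l0;         % partial derivatives d_j and c_j
Bk = [-d -c zeros(3,2)];          % velocity terms have zero coefficients
fk = (l0 - lm) - d*E0 - c*N0;     % vector of numeric terms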
69 Secondary (or dynamic) model:
$$\hat{x}_k = T\hat{x}_{k-1} + v_m \qquad \text{(c)}$$
The model corrections $v_m$ can be written as the product of two matrices
$$v_m = Hw \qquad \text{(d)}$$
where $w$ is the system driving noise and $H$ is a coefficient matrix. Applying Propagation of Variances to (d) gives the cofactor matrix of the secondary model
$$Q_m = HQ_wH^T \qquad \text{(e)}$$
70 To determine the elements of the transition matrix, consider the position of the ship as a simple function of time, say $y = y(t)$. Now expand the function $y(t)$ into a series about the point $t = t_k$ using Taylor's theorem:
$$y(t) = y(t_k) + (t - t_k)\,\dot{y}(t_k) + \frac{(t - t_k)^2}{2!}\,\ddot{y}(t_k) + \frac{(t - t_k)^3}{3!}\,\dddot{y}(t_k) + \cdots$$
where $\dot{y}(t_k)$, $\ddot{y}(t_k)$, $\dddot{y}(t_k)$, etc. are derivatives of $y$ with respect to $t$ evaluated at $t = t_k$.
71 Letting $t = t_k + \delta t$, so that $t - t_k = \delta t$, then
$$y(t_k + \delta t) = y(t_k) + \dot{y}(t_k)\,\delta t + \frac{\ddot{y}(t_k)}{2!}\,\delta t^2 + \frac{\dddot{y}(t_k)}{3!}\,\delta t^3 + \cdots$$
We now have a power series expression for the continuous function $y(t)$ at the point $t = t_k + \delta t$. Similarly, assuming $\dot{y}(t)$, $\ddot{y}(t)$, etc. to be continuous functions of $t$,
$$\dot{y}(t_k + \delta t) = \dot{y}(t_k) + \ddot{y}(t_k)\,\delta t + \frac{\dddot{y}(t_k)}{2!}\,\delta t^2 + \cdots$$
72 Now, considering two time epochs $t_{k-1}$ and $t_k$ separated by a time interval $\delta t$, we may combine the equations into the general matrix forms
$$\begin{bmatrix} y \\ \dot{y} \end{bmatrix}_k = \begin{bmatrix} 1 & \delta t \\ 0 & 1 \end{bmatrix}\begin{bmatrix} y \\ \dot{y} \end{bmatrix}_{k-1} + \begin{bmatrix} \tfrac{1}{2}\delta t^2 \\ \delta t \end{bmatrix}\left[\ddot{y}\right] \qquad \text{(f)}$$
and
$$\begin{bmatrix} y \\ \dot{y} \\ \ddot{y} \end{bmatrix}_k = \begin{bmatrix} 1 & \delta t & \tfrac{1}{2}\delta t^2 \\ 0 & 1 & \delta t \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} y \\ \dot{y} \\ \ddot{y} \end{bmatrix}_{k-1} + \begin{bmatrix} \tfrac{1}{6}\delta t^3 \\ \tfrac{1}{2}\delta t^2 \\ \delta t \end{bmatrix}\left[\dddot{y}\right]$$
73 For this exercise, the state vector of the ship in the channel contains position $(E, N)$ and velocity $(\dot{E}, \dot{N})$, and we choose equation (f) as representative of the secondary (or dynamic) model [see (c) and (d)], $\hat{x}_k = T\hat{x}_{k-1} + Hw_k$:
$$\begin{bmatrix} E \\ N \\ \dot{E} \\ \dot{N} \end{bmatrix}_k = \begin{bmatrix} 1 & 0 & \delta t & 0 \\ 0 & 1 & 0 & \delta t \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} E \\ N \\ \dot{E} \\ \dot{N} \end{bmatrix}_{k-1} + \begin{bmatrix} \tfrac{1}{2}\delta t^2 & 0 \\ 0 & \tfrac{1}{2}\delta t^2 \\ \delta t & 0 \\ 0 & \delta t \end{bmatrix}\begin{bmatrix} \ddot{E} \\ \ddot{N} \end{bmatrix}$$
74 In this exercise it is assumed that the system driving noise $w$ contains small random accelerations caused by the sea and wind conditions, the steering of the ship, the engine speed variation, etc. For this exercise assume that the estimates of the variances of the system driving noise are $s_{\ddot{E}}^2$ and $s_{\ddot{N}}^2$, so the cofactor matrix of the system driving noise is
$$Q_w = \begin{bmatrix} s_{\ddot{E}}^2 & 0 \\ 0 & s_{\ddot{N}}^2 \end{bmatrix}$$
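A minimal MATLAB sketch assembling $T$, $H$, $Q_w$ and $Q_m$ for this exercise, with $\delta t = 60$ seconds; the variance values are invented:

dt = 60;                          % time interval between epochs (sec)
T  = [1 0 dt 0;
      0 1 0 dt;
      0 0 1  0;
      0 0 0  1];                  % transition matrix
H  = [dt^2/2    0;
         0   dt^2/2;
        dt      0;
         0     dt];               % coefficient matrix of the driving noise
sE2 = 1e-6;  sN2 = 1e-6;          % variances of the random accelerations (invented)
Qw = diag([sE2 sN2]);             % cofactor matrix of the system driving noise
Qm = H*Qw*H';                     % cofactor matrix of the secondary model, eq. (e)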
More information