TREATMENT OF NUISANCE PARAMETER IN ADJUSTMENT (PHASED ADJUSTMENT)

Aaditya Verma
Department of Civil Engineering, Indian Institute of Technology, Kanpur, India - aaditya@iitk.ac.in

KEY WORDS: Least squares adjustment, Nuisance parameter, Equivalent observation equation

ABSTRACT:

Least squares estimation is a standard tool for the processing of geodetic data. Sometimes, when only a particular group of unknowns is of interest, it is better to eliminate the other group of unknowns. The unknowns which are not of interest are generally called nuisance parameters. This paper discusses the formation of the equivalent observation equation system, which can be used to eliminate these nuisance parameters. A numerical example is included to illustrate the method.

1. INTRODUCTION

In general, geodetic data processing involves the following kinds of information:

1. Parameters of interest (or unknowns)
2. Known quantities (or observations)
3. Explicit biases that can be parameterised
4. Errors that are not parameterised

Errors are defined as those effects on the measurements that cause the measured quantity to differ from the true quantity. Effects that alter the measurements by a systematic amount are generally referred to as biases. In statistics, a nuisance parameter is any parameter which is not of immediate interest but which must be accounted for in the analysis of the parameters which are of interest. Many such nuisance parameters are encountered during the processing of geodetic data.

The equivalently eliminated observation equation system can be used to eliminate nuisance parameters. The derivation of the equivalent observation equation was first given by Zhou (1985). Based on this derivation, a diagonalisation algorithm can be obtained which separates one adjustment problem into two sub-problems.

2. LEAST SQUARES ADJUSTMENT

The principle of least squares adjustment involves the formation of a linearised observation equation system, represented by

V = L - AX,  P                                                            (1)

where
L : observation vector of dimension m,
A : coefficient matrix of dimension m × n,
X : unknown parameter vector of dimension n,
V : residual vector of dimension m,
n : number of unknowns,
m : number of observations, and
P : weight matrix of dimension m × m.

The least squares criterion used to solve this system of equations is

F = V^T P V = min                                                         (2)

The function F reaches its minimum value when the partial derivative of F with respect to X equals zero:

∂F/∂X = 2 V^T P (-A) = 0

which gives

A^T P V = 0                                                               (3)

Multiplying Eq. 1 by A^T P and applying Eq. 3, the least squares solution of Eq. 1 is obtained from

(A^T P A) X - A^T P L = 0                                                 (4)

X = (A^T P A)^(-1) (A^T P L)                                              (5)
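As a minimal illustration of Eq. 5, the following MATLAB lines solve a small hypothetical system; the numbers are chosen purely for illustration and are not related to the example of Section 5.

% Minimal sketch of Eqs. 1-5 with illustrative values (m = 3, n = 2)
A = [1 0; 1 1; 1 2];          % coefficient matrix
L = [1.0; 2.1; 2.9];          % observation vector
P = eye(3);                   % weight matrix (equal weights assumed)
X = inv(A'*P*A)*(A'*P*L)      % least squares solution, Eq. 5
V = L - A*X                   % residual vector, Eq. 1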

3. EQUIVALENTLY ELIMINATED OBSERVATION EQUATION SYSTEM

In least squares adjustment, the unknowns can be divided into two groups and solved in a blockwise manner. In order to eliminate nuisance parameters, the unknowns should be divided such that one group contains the parameters of interest and the other contains the nuisance parameters. The equivalently eliminated observation equation system is used to perform this task; with this method, the nuisance parameters can be eliminated directly from the observation equations.

After dividing the unknowns into two groups, the linearised observation equation system can be represented by

V = L - [A B] [X1; X2],  P                                                (6)

where
L : observation vector of dimension m,
A, B : coefficient matrices of dimensions m × (n-r) and m × r,
X1, X2 : unknown vectors of dimensions n-r and r,
V : residual vector of dimension m,
n : number of unknowns,
m : number of observations, and
P : weight matrix of dimension m × m.

(Block matrices are written here in MATLAB-like notation: blocks in the same row are separated by spaces, and block rows by semicolons.)

Eq. 6 can be solved in the same manner as Eq. 1 was solved in Section 2. The solution of Eq. 6 can hence be calculated from the following equations:

[A B]^T P [A B] [X1; X2] = [A B]^T P L                                    (7)

[A^T; B^T] P [A B] [X1; X2] = [A^T; B^T] P L

[A^T P A  A^T P B; B^T P A  B^T P B] [X1; X2] = [A^T P L; B^T P L]

[M11 M12; M21 M22] [X1; X2] = [B1; B2]                                    (8)

where [M11 M12; M21 M22] = [A^T P A  A^T P B; B^T P A  B^T P B] and [B1; B2] = [A^T P L; B^T P L].

On expanding Eq. 8, the following equations are obtained:

M11 X1 + M12 X2 = B1                                                      (9)

M21 X1 + M22 X2 = B2                                                      (10)

From Eq. 9, the value of X1 is given by

X1 = M11^(-1) (B1 - M12 X2)                                               (11)

On substituting the value of X1 from Eq. 11 into Eq. 10,

M21 M11^(-1) (B1 - M12 X2) + M22 X2 = B2

(M22 - M21 M11^(-1) M12) X2 = B2 - M21 M11^(-1) B1                        (12)

Eq. 12 can now be written as

M2 X2 = R2                                                                (13)

where

M2 = M22 - M21 M11^(-1) M12 = B^T P B - B^T P A M11^(-1) A^T P B = B^T P (I - A M11^(-1) A^T P) B    (14)

R2 = B2 - M21 M11^(-1) B1 = B^T P (I - A M11^(-1) A^T P) L                (15)

Only Eq. 13 needs to be solved to calculate the unknown vector X2. Consider

A M11^(-1) A^T P = J                                                      (16)

From the following calculations, it can be shown that the matrices J and (I - J) are idempotent and that (I - J)^T P is symmetric.

J^2 = (A M11^(-1) A^T P)(A M11^(-1) A^T P) = A M11^(-1) (A^T P A) M11^(-1) A^T P = A M11^(-1) A^T P = J,   since A^T P A = M11

J^2 = J                                                                   (17)

(I - J)^2 = (I - J)(I - J) = I - 2J + J^2 = I - 2J + J = I - J

(I - J)^2 = I - J                                                         (18)

[P (I - J)]^T = (I - J^T) P = P - (A M11^(-1) A^T P)^T P = P - P A M11^(-1) A^T P = P (I - J)

(I - J)^T P = P (I - J)                                                   (19)

Using the properties derived in Eqs. 17, 18 and 19 above, M2 and R2 can be rewritten as follows:

M2 = B^T P (I - J) B = B^T P (I - J)(I - J) B = B^T (I - J)^T P (I - J) B    (20)

R2 = B^T P (I - J) L = B^T (I - J)^T P L                                  (21)

Now, considering

(I - J) B = D2                                                            (22)

and using Eqs. 20, 21 and 22, one can rewrite Eq. 13 as follows:

B^T (I - J)^T P (I - J) B X2 = B^T (I - J)^T P L,   or   D2^T P D2 X2 = D2^T P L    (23)

The equivalently eliminated observation equation corresponding to Eq. 13 can therefore be written as

U2 = L - D2 X2,  P,   or   U2 = L - (I - J) B X2,  P                      (24)

where L and P are the original observation vector and weight matrix, and U2 is the residual vector, which has the same properties as V in Eq. 6. The advantage of using Eq. 24 is that the unknown vector X1 has been eliminated without changing the observation vector and the weight matrix.
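The elimination described above can be sketched in a few lines of MATLAB. This is only an outline, assuming that A, B, P and L of Eq. 6 have already been defined; it eliminates X1 and solves Eq. 23 for X2 alone. Because J and (I - J) are idempotent, the quantity J*J - J provides a quick numerical check on the construction.

% Sketch of Eqs. 16-24: eliminate X1 and solve for X2 only
M11 = A'*P*A;
J = A*inv(M11)*A'*P;             % projector of Eq. 16; J*J equals J (Eq. 17)
D2 = (eye(size(J)) - J)*B;       % D2 = (I - J)*B, Eq. 22
X2 = inv(D2'*P*D2)*(D2'*P*L)     % reduced normal equation, Eq. 23
U2 = L - D2*X2                   % residual vector of Eq. 24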

4. DIAGONALISED NORMAL EQUATION

In the previous section, the value of X1 calculated from Eq. 9 was substituted into Eq. 10 in order to calculate X2. Similarly, the value of X2 can be calculated from Eq. 10 and substituted into Eq. 9 if the required unknown vector is X1. The normal equation in Eq. 8 can hence be diagonalised for the two groups of unknowns by applying the elimination process twice. The algorithm is outlined as follows. From Eq. 10, we get

X2 = M22^(-1) (B2 - M21 X1)                                               (25)

Substituting this value of X2 from Eq. 25 into Eq. 9, one gets

M1 X1 = R1                                                                (26)

where

M1 = M11 - M12 M22^(-1) M21 = A^T P A - A^T P B M22^(-1) B^T P A = A^T P (I - B M22^(-1) B^T P) A    (27)

and

R1 = B1 - M12 M22^(-1) B2 = A^T P (I - B M22^(-1) B^T P) L                (28)

Combining Eq. 13 and Eq. 26, one gets

[M1 0; 0 M2] [X1; X2] = [R1; R2]                                          (29)

The process of forming Eq. 29 from Eq. 8 is called the diagonalisation of a normal equation. As discussed in the previous section, the equivalently eliminated observation equation of Eq. 13 is Eq. 24. Similarly, if we denote

B M22^(-1) B^T P = K                                                      (30)

and

(I - K) A = D1                                                            (31)

then the equivalently eliminated observation equation for Eq. 26 can be written as follows:

U1 = L - (I - K) A X1 = L - D1 X1,  P                                     (32)

where U1 is a residual vector which has the same properties as V of Eq. 6, and L and P are the original observation vector and weight matrix respectively. Eq. 24 and Eq. 32 can be written together as

[U1; U2] = [L; L] - [D1 0; 0 D2] [X1; X2],  [P 0; 0 P]                    (33)

Eq. 33 is called the diagonalised observation equation of Eq. 6.

5. NUMERICAL EXAMPLE OF THE DIAGONALISATION ALGORITHM

5.1 MATLAB Code

The MATLAB code demonstrating the application of the discussed algorithm is given below:

% Date: 07-Nov-2014
% Numerical example to demonstrate the equivalently eliminated observation
% equation system for the elimination of nuisance parameters
%
% INPUT  - L (observation vector)
%          A & B (coefficient matrices)
%          P (weight matrix)
% OUTPUT - Section 1 - X (solution vector)
%                      V (residual vector)
%          Section 2 - X1 & X2 (solution vectors of the divided equations)
%                      U1 & U2 (residual vectors)

%% Inputs
L = [1; 2; -1; 2; 1; 0; -2]                               % observation vector
A = [1 1; 1 2; 1 1; 0 0; 0 0; 0 0; 0 0]                   % coefficient matrix for 1st set of unknowns (X1)
B = [0 0 0; 0 0 0; 0 0 0; 1 1 1; 2 1 1; 1 1 2; 1 1 1]     % coefficient matrix for 2nd set of unknowns (X2)
P = [1 0 0 0 0 0 0; 0 1 0 0 0 0 0; 0 0 1 0 0 0 0; 0 0 0 1 0 0 0; 0 0 0 0 1 0 0; 0 0 0 0 0 1 0; 0 0 0 0 0 0 1]   % weight matrix

%% General least squares solution
A1 = [A B];              % combined coefficient matrix for all the unknowns
N = A1'*P*A1;
U = A1'*P*L;
X = inv(N)*U             % solution vector
V = L - A1*X             % residual vector

%% Using the equivalently eliminated observation equations to find X1 and X2
M11 = A'*P*A;
M12 = A'*P*B;
M21 = B'*P*A;
M22 = B'*P*B;
B1 = A'*P*L;
B2 = B'*P*L;
K = B*inv(M22)*B'*P;
D1 = (eye(7) - K)*A;
J = A*inv(M11)*A'*P;
D2 = (eye(7) - J)*B;

M1 = M11 - (M12*inv(M22)*M21);
R1 = B1 - (M12*inv(M22)*B2);
X1 = inv(M1)*R1          % solution vector for 1st set of unknowns
U1 = L - D1*X1           % residual vector
M2 = M22 - (M21*inv(M11)*M12);
R2 = B2 - (M21*inv(M11)*B1);
X2 = inv(M2)*R2          % solution vector for 2nd set of unknowns
U2 = L - D2*X2           % residual vector

The above code first uses the general least squares method to solve for all five unknowns. It then demonstrates the use of the equivalently eliminated observation equation system to solve for the two unknown vectors, containing two and three unknowns respectively.

5.2 Results

[Table 1. Results of the demonstration code: the first column lists the solution vector X and residual vector V obtained with the general least squares method; the second column lists the solution vectors X1, X2 and residual vectors U1, U2 obtained with the equivalently eliminated observation equation system.]

Table 1 shows the results of the program given in Section 5.1. The first column of the table shows the solution and residual vectors obtained using the general least squares approach. The second column shows the results after the unknown vector is divided into two vectors containing two and three unknowns respectively. It is also evident from the table that the unknown vector X1 can be calculated even if we omit the vector X2 from our computations, without affecting the results, and vice versa.
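A quick consistency check (not part of the script above, but it can be appended to it) is to compare the phased solutions with the joint solution; both norms should be at the level of numerical round-off:

norm(X(1:2) - X1)        % first two elements of X should equal X1
norm(X(3:5) - X2)        % last three elements of X should equal X2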

6. CONCLUSIONS

This paper discusses the equivalently eliminated observation equation system, which can be used to eliminate nuisance parameters. Instead of solving the original problem, this method allows us to solve only for the required parameters, without changing the observation vector and the weight matrix. A diagonalisation algorithm for separating one adjustment problem into two separate problems has also been discussed.

REFERENCES

Rizos, C. (1999). The nature of GPS observation model biases. http://www.gmat.unsw.edu.au/snap/gps/gps_survey/chap6/612.htm. Last accessed 6 Nov 2014.

Xu, G. (2003). A Diagonalisation Algorithm and Its Application in Ambiguity Search. Journal of Global Positioning Systems, 2(1), pp. 35-41.

Xu, G. (2007). Adjustment and Filtering Methods. In: GPS Theory, Algorithms and Applications. 2nd ed. Meppel: Springer, pp. 146-149.

Xu, G. (2007). Equivalence of Undifferenced and Differencing Algorithms. In: GPS Theory, Algorithms and Applications. 2nd ed. Meppel: Springer, pp. 122-124.