TREATMENT OF NUISANCE PARAMETER IN ADJUSTMENT (PHASED ADJUSTMENT)

Aaditya Verma
Department of Civil Engineering, Indian Institute of Technology, Kanpur, India - aaditya@iitk.ac.in

KEY WORDS: Least squares adjustment, Nuisance parameter, Equivalent observation equation

ABSTRACT:

Least squares estimation is a standard tool for the processing of geodetic data. Sometimes, when only a particular group of unknowns is of interest, it is better to eliminate the other group of unknowns. The unknowns which are not of interest are generally called nuisance parameters. This paper discusses the formation of the equivalent observation equation, which can be used to eliminate these nuisance parameters. A numerical example is included to illustrate the method.

1. INTRODUCTION

In general, geodetic data processing involves the following information:

1. Parameters of interest (or unknowns)
2. Known quantities (or observations)
3. Explicit biases that can be parameterised
4. Errors that are not parameterised

Errors are defined as those effects on the measurements that cause the measured quantity to differ from the true quantity. Effects that shift the measurement by a systematic amount are generally referred to as biases. In statistics, a nuisance parameter is any parameter which is not of immediate interest but which must be accounted for in the analysis of those parameters which are of interest. Many situations encountered during the processing of geodetic data involve such nuisance parameters.

The equivalently eliminated observation equation system can be used for the elimination of nuisance parameters. The equivalent observation equation was first derived by Zhou (1985). Based on this derivation, a diagonalisation algorithm has been developed which separates one adjustment problem into two sub-problems.

2. LEAST SQUARES ADJUSTMENT

The principle of least squares adjustment involves the formation of a linearised observation equation system represented by

V = L - AX,  P    (1)
where

L : observation vector of dimension m,
A : coefficient matrix of dimension m x n,
X : unknown parameter vector of dimension n,
V : residual vector of dimension m,
n : number of unknowns,
m : number of observations, and
P : weight matrix of dimension m x m.

The least squares criterion for solving the above system of equations is

F = V^T P V = min    (2)

The function F reaches its minimum value when the partial derivative of F with respect to X equals zero:

dF/dX = -2 V^T P A = 0

which is equivalent to

A^T P V = 0    (3)

Substituting Eq. 1 into Eq. 3 gives the least squares solution of Eq. 1:

(A^T P A) X - A^T P L = 0    (4)

X = (A^T P A)^(-1) (A^T P L)    (5)

3. EQUIVALENTLY ELIMINATED OBSERVATION EQUATION SYSTEM

In least squares adjustment, the unknowns can be divided into two groups and solved in a blockwise manner. In order to eliminate nuisance parameters, the unknowns should be divided such that one group contains the parameters of interest and the other contains the nuisance parameters. The equivalently eliminated observation equation system is used to perform this task. Using this method, the nuisance parameters can be eliminated directly from the observation equations. After dividing the unknowns into two groups, the linearised observation equation system can be represented by

V = L - [A B] [X1; X2],  P    (6)

where

L : observation vector of dimension m,
A, B : coefficient matrices of dimensions m x (n-r) and m x r,
X1, X2 : unknown vectors of dimensions n-r and r,
V : residual vector of dimension m,
n : number of unknowns,
m : number of observations, and
P : weight matrix of dimension m x m.

Eq. 6 can be solved in the same manner as Eq. 1 was solved in Section 2. The solution to Eq. 6 can hence be calculated from the following equation:
[A B]^T P [A B] [X1; X2] = [A B]^T P L    (7)

[M11 M12; M21 M22] [X1; X2] = [B1; B2]    (8)

where

[M11 M12; M21 M22] = [A^T P A, A^T P B; B^T P A, B^T P B] and [B1; B2] = [A^T P L; B^T P L]

On expanding Eq. 8, the following equations are obtained:

M11 X1 + M12 X2 = B1    (9)

M21 X1 + M22 X2 = B2    (10)

From Eq. 9, the value of X1 is given as X1 = M11^(-1) (B1 - M12 X2). Substituting this value of X1 into Eq. 10 gives

M21 M11^(-1) (B1 - M12 X2) + M22 X2 = B2

or, after rearranging,

(M22 - M21 M11^(-1) M12) X2 = B2 - M21 M11^(-1) B1

which is written compactly as

M2 X2 = R2    (11)

where

M2 = M22 - M21 M11^(-1) M12 = B^T P B - B^T P A M11^(-1) A^T P B = B^T P (I - A M11^(-1) A^T P) B    (12)

R2 = B2 - M21 M11^(-1) B1 = B^T P (I - A M11^(-1) A^T P) L    (13)

Only Eq. 11 needs to be solved to calculate the unknown vector X2. Consider

A M11^(-1) A^T P = J    (14)

From the following calculations, it can be proved that the matrices J and (I - J) are idempotent and that (I - J)^T P is symmetric.

J^2 = (A M11^(-1) A^T P)(A M11^(-1) A^T P) = A M11^(-1) (A^T P A) M11^(-1) A^T P = A M11^(-1) A^T P = J

J^2 = J    (15)
(I - J)^2 = (I - J)(I - J) = I - 2J + J^2 = I - 2J + J = I - J

(I - J)^2 = I - J    (16)

[P(I - J)]^T = (I - J^T) P = P - (A M11^(-1) A^T P)^T P = P - P A M11^(-1) A^T P = P(I - J)

(I - J)^T P = P(I - J)    (17)

Using the properties derived in Eq. 15, Eq. 16 and Eq. 17 above, M2 and R2 can be rewritten as follows:

M2 = B^T P (I - J) B = B^T P (I - J)(I - J) B = B^T (I - J)^T P (I - J) B    (18)

R2 = B^T P (I - J) L = B^T (I - J)^T P L    (19)

Now, considering

(I - J) B = D2    (20)

and using Eq. 18, Eq. 19 and Eq. 20, one can rewrite Eq. 11 as follows:

B^T (I - J)^T P (I - J) B X2 = B^T (I - J)^T P L, or    (21)

D2^T P D2 X2 = D2^T P L    (22)

The equivalently eliminated observation equation corresponding to Eq. 11 can be written as

U2 = L - D2 X2,  P, or    (23)

U2 = L - (I - J) B X2,  P    (24)

where L and P are the original observation vector and weight matrix, and U2 is the residual vector having the same property as V in Eq. 6. The advantage of using Eq. 24 is that the unknown vector X1 has been eliminated without changing the observation and weight matrices.

4. DIAGONALISED NORMAL EQUATION

In the previous section, the value of X1 calculated from Eq. 9 was substituted into Eq. 10 to calculate X2. Similarly, the value of X2 from Eq. 10 can be calculated and substituted into Eq. 9 if the required unknown vector is X1. The normal equation in Eq. 8 can hence be diagonalised for the two groups of unknowns by applying the elimination process twice. The algorithm is outlined as follows. From Eq. 10, we get

X2 = M22^(-1) (B2 - M21 X1)    (25)

Substituting this value of X2 from Eq. 25 into Eq. 9, one gets

M1 X1 = R1    (26)

where

M1 = M11 - M12 M22^(-1) M21 = A^T P A - A^T P B M22^(-1) B^T P A = A^T P (I - B M22^(-1) B^T P) A    (27)

and

R1 = B1 - M12 M22^(-1) B2 = A^T P (I - B M22^(-1) B^T P) L    (28)
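The elimination formulas above can be checked numerically. The following sketch (in Python/NumPy rather than the paper's MatLab, on an arbitrary random system rather than the example of Section 5; both are assumptions for illustration only) verifies that the reduced normal equations M1 X1 = R1 and M2 X2 = R2 reproduce the corresponding blocks of the full least squares solution:

```python
import numpy as np

# Arbitrary well-conditioned test system (hypothetical, not from the paper)
rng = np.random.default_rng(1)
m, p, r = 8, 2, 3                      # observations, dim(X1), dim(X2)
A = rng.standard_normal((m, p))        # coefficients of X1
B = rng.standard_normal((m, r))        # coefficients of X2
L = rng.standard_normal(m)             # observation vector
P = np.diag(rng.uniform(0.5, 2.0, m))  # positive-definite weight matrix

# Blocks of the normal equations (Eq. 8)
M11, M12 = A.T @ P @ A, A.T @ P @ B
M21, M22 = B.T @ P @ A, B.T @ P @ B
B1, B2 = A.T @ P @ L, B.T @ P @ L

# Full solution of the joint normal equations (Eq. 7/8)
X = np.linalg.solve(np.block([[M11, M12], [M21, M22]]),
                    np.concatenate([B1, B2]))

# Reduced systems after elimination (Eq. 26/27/28 and Eq. 11/12/13)
M1 = M11 - M12 @ np.linalg.solve(M22, M21)
R1 = B1 - M12 @ np.linalg.solve(M22, B2)
M2 = M22 - M21 @ np.linalg.solve(M11, M12)
R2 = B2 - M21 @ np.linalg.solve(M11, B1)

X1 = np.linalg.solve(M1, R1)
X2 = np.linalg.solve(M2, R2)

# Each reduced solution matches its block of the full solution
assert np.allclose(X1, X[:p]) and np.allclose(X2, X[p:])
```

Note that `np.linalg.solve` is used in place of explicit inverses; this is numerically preferable and does not change the algebra.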
Combining Eq. 11 and Eq. 26, one gets

[M1 0; 0 M2] [X1; X2] = [R1; R2]    (29)

The process of forming Eq. 29 from Eq. 8 is called the diagonalisation of a normal equation. As discussed in the previous section, the equivalently eliminated observation equation of Eq. 11 is Eq. 24. Similarly, if we denote

B M22^(-1) B^T P = K    (30)

(I - K) A = D1    (31)

then the equivalently eliminated observation equation for Eq. 26 can be written as

U1 = L - (I - K) A X1,  P    (32)

where U1 is a residual vector which has the same property as V in Eq. 6, and L and P are the original observation vector and weight matrix respectively. Eq. 24 and Eq. 32 can be written together as

[U1; U2] = [L; L] - [D1 0; 0 D2] [X1; X2],  [P 0; 0 P]    (33)

Eq. 33 is called the diagonalised equation of Eq. 6.

5. NUMERICAL EXAMPLE OF THE DIAGONALISATION ALGORITHM

5.1 MatLab Code

The MatLab code demonstrating the application of the discussed algorithm is given below:

% Date: 07-Nov-2014
% Numerical example to demonstrate the equivalently eliminated observation
% equation system for elimination of nuisance parameters
% INPUT  - L (observation vector)
%          A & B (coefficient matrices)
%          P (weight matrix)
% OUTPUT - Section 1 - X (solution vector)
%                      V (residual vector)
%          Section 2 - X1 & X2 (solution vectors of divided equations)
%                      U1 & U2 (residual vectors)
% Inputs
L = [1;2;-1;2;1;0;-2]                              % observation vector
A = [1 1; 1 2; 1 1; 0 0; 0 0; 0 0; 0 0]            % coefficient matrix for 1st set of unknowns (X1)
B = [0 0 0; 0 0 0; 0 0 0; 1 1 1; 2 1 1; 1 1 2; 1 1 1]  % coefficient matrix for 2nd set of unknowns (X2)
P = [1 0 0 0 0 0 0; 0 1 0 0 0 0 0; 0 0 1 0 0 0 0; 0 0 0 1 0 0 0; 0 0 0 0 1 0 0; 0 0 0 0 0 1 0; 0 0 0 0 0 0 1]  % weight matrix

% General least squares solution
A1 = [A B];      % combined coefficient matrix for all the unknowns
N = A1'*P*A1;
U = A1'*P*L;
X = inv(N)*U     % solution vector
V = L - A1*X     % residual vector

% Using the equivalently eliminated observation equation to find X1 and X2
M11 = A'*P*A; M12 = A'*P*B;
M21 = B'*P*A; M22 = B'*P*B;
B1 = A'*P*L; B2 = B'*P*L;
K = B*inv(M22)*B'*P;
D1 = (eye(7) - K)*A;
J = A*inv(M11)*A'*P;
D2 = (eye(7) - J)*B;
M1 = M11 - (M12*inv(M22)*M21);
R1 = B1 - (M12*inv(M22)*B2);
X1 = inv(M1)*R1   % solution vector for 1st set of unknowns
U1 = L - D1*X1    % residual vector

M2 = M22 - (M21*inv(M11)*M12);
R2 = B2 - (M21*inv(M11)*B1);
X2 = inv(M2)*R2   % solution vector for 2nd set of unknowns
U2 = L - D2*X2    % residual vector

The above code first uses the general least squares method to solve for all five unknowns. Then, it demonstrates the use of the equivalently eliminated observation equation system to solve for two unknown vectors containing two and three unknowns respectively.

5.2 Results

The results of the code of the previous section are as follows:

Solution using general             Solutions using equivalently eliminated
least squares method               observation equation system

X = [-2; 2; 1; -1; 0]              X1 = [-2; 2]
                                   X2 = [1; -1; 0]
V = [1; 0; -1; 2; 0; 0; -2]        U1 = [1; 0; -1; 2; 1; 0; -2]
                                   U2 = [1; 2; -1; 2; 0; 0; -2]

Table 1. Results of the demonstration code

Table 1 shows the results of the program of Section 5.1. Column 1 of the table shows the solution and residual vectors obtained using the general least squares approach. Column 2 shows the results after the unknown vector is divided into two vectors containing two and three unknowns respectively. It is also evident from the table that the unknown vector X1 can be calculated even if we omit the vector X2 from our computations without affecting the results, and vice versa.
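As a cross-check of the tabulated results, the example can be transcribed into Python/NumPy (the transcription is not part of the paper; variable names simply mirror the MatLab listing). The sketch also verifies numerically the idempotency and symmetry properties proved in Section 3:

```python
import numpy as np

# Inputs from the MatLab example above
L = np.array([1, 2, -1, 2, 1, 0, -2], dtype=float)
A = np.array([[1, 1], [1, 2], [1, 1],
              [0, 0], [0, 0], [0, 0], [0, 0]], dtype=float)
B = np.array([[0, 0, 0], [0, 0, 0], [0, 0, 0],
              [1, 1, 1], [2, 1, 1], [1, 1, 2], [1, 1, 1]], dtype=float)
P = np.eye(7)  # unit weight matrix

# Blocks of the normal equations (Eq. 8)
M11, M12 = A.T @ P @ A, A.T @ P @ B
M21, M22 = B.T @ P @ A, B.T @ P @ B
B1, B2 = A.T @ P @ L, B.T @ P @ L

# Projector-like matrices J (Eq. 14) and K (Eq. 30)
J = A @ np.linalg.solve(M11, A.T @ P)
K = B @ np.linalg.solve(M22, B.T @ P)
assert np.allclose(J @ J, J)                                     # Eq. 15
assert np.allclose((np.eye(7) - J).T @ P, P @ (np.eye(7) - J))   # Eq. 17

# Reduced systems (Eq. 26 and Eq. 11) and their solutions
M1 = M11 - M12 @ np.linalg.solve(M22, M21)
R1 = B1 - M12 @ np.linalg.solve(M22, B2)
M2 = M22 - M21 @ np.linalg.solve(M11, M12)
R2 = B2 - M21 @ np.linalg.solve(M11, B1)
X1 = np.linalg.solve(M1, R1)
X2 = np.linalg.solve(M2, R2)
assert np.allclose(X1, [-2, 2])       # matches Table 1
assert np.allclose(X2, [1, -1, 0])    # matches Table 1

# Residuals of the equivalently eliminated equations (Eq. 32 and Eq. 24)
U1 = L - (np.eye(7) - K) @ A @ X1
U2 = L - (np.eye(7) - J) @ B @ X2
assert np.allclose(U1, [1, 0, -1, 2, 1, 0, -2])
assert np.allclose(U2, [1, 2, -1, 2, 0, 0, -2])
```

In this particular example the two coefficient blocks act on disjoint observations (A^T P B = 0), so the reduced normal matrices coincide with M11 and M22; the elimination formulas nevertheless apply unchanged in the general, coupled case.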
6. CONCLUSIONS

This paper discusses the equivalently eliminated observation equation system, which can be used to eliminate nuisance parameters. Instead of solving the original problem, this method allows us to solve only for the required parameters without changing the observation and weight matrices. A diagonalisation algorithm that separates one adjustment problem into two independent sub-problems has also been discussed.

REFERENCES

Rizos, C., 1999. The nature of GPS observation model biases. http://www.gmat.unsw.edu.au/snap/gps/gps_survey/chap6/612.htm (accessed 6 November 2014).

Xu, G., 2003. A diagonalisation algorithm and its application in ambiguity search. Journal of Global Positioning Systems, 2(1), pp. 35-41.

Xu, G., 2007. Adjustment and filtering methods. In: GPS Theory, Algorithms and Applications, 2nd ed. Springer, Meppel, pp. 146-149.

Xu, G., 2007. Equivalence of undifferenced and differencing algorithms. In: GPS Theory, Algorithms and Applications, 2nd ed. Springer, Meppel, pp. 122-124.