Autocorrelation Functions in GPS Data Processing: Modeling Aspects


Autocorrelation Functions in GPS Data Processing: Modeling Aspects
Kai Borre, Aalborg University; Gilbert Strang, Massachusetts Institute of Technology

Consider a process that is actually a random walk but is incorrectly modeled as a random constant c. We have then for the true model

    x(k+1) = x(k) + ε(k),   ε(k) = unity Gaussian white noise,   σ²{x(0)} = 0
    b(k) = x(k) + e(k),   k = 0, 1, 2, …,   σ²{e(k)} = 1

and for the incorrect model

    x(k) = c,   c ∈ N(0, 1)
    b(k) = x(k) + e(k),   k = 0, 1, 2, …,   σ²{e(k)} = 1.

The wrong model has F(k) = 1, Σε,k = 0, A(k) = 1, Σe,k = 1, x̂(0) = 0, and P(0) = 1. For the true model the parameters are the same except that Σε,k = 1, rather than zero.
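The mismodeling effect can be reproduced in a few lines of code. The following is a minimal sketch, not the paper's M-files: a scalar Kalman filter run twice over the same simulated data, once with the unit process-noise variance of the true random walk model and once with the zero process noise implied by the random constant model. All function and variable names here are invented for illustration.

```python
import random

def kalman_1d(obs, q, r=1.0, x0=0.0, p0=1.0):
    """Scalar Kalman filter for x(k+1) = x(k) + eps(k), b(k) = x(k) + e(k).
    q is the process-noise variance: 1 for the true random walk model,
    0 for the incorrect random constant model."""
    x, p = x0, p0
    estimates = []
    for b in obs:
        gain = p / (p + r)          # Kalman gain
        x = x + gain * (b - x)      # measurement update
        p = (1.0 - gain) * p        # updated variance
        estimates.append(x)
        p = p + q                   # time update: the walk adds variance q
    return estimates

random.seed(42)
truth, x = [], 0.0
for _ in range(200):
    x += random.gauss(0.0, 1.0)     # unity Gaussian white noise
    truth.append(x)
obs = [t + random.gauss(0.0, 1.0) for t in truth]

mse_true = sum((e - t) ** 2 for e, t in zip(kalman_1d(obs, q=1.0), truth)) / 200
mse_wrong = sum((e - t) ** 2 for e, t in zip(kalman_1d(obs, q=0.0), truth)) / 200
```

With q = 0 the gain p/(p + r) decays toward zero, so the filter ends up averaging all observations while the true state wanders away; with q = 1 the gain stays bounded away from zero and the filter keeps tracking the walk.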

[Figure: Correctly and incorrectly filtered random walk process. A simulated random walk process (state variable over time) with its observations, filtered once with the correct random walk model and once with the incorrect random constant model.]

Computing the Autocorrelation Function

Shift = 0:

    a(0) a(1) a(2) a(3) a(4) …
    a(0) a(1) a(2) a(3) a(4) …

    auto(0) = Σ a(i) a(i) / n.

We multiply the elements a(i) a(i) standing above each other, and add. Now shift the lower row:

Shift = 1:

    a(0) a(1) a(2) a(3) a(4) …
         a(0) a(1) a(2) a(3) …

    auto(1) = Σ a(i) a(i+1) / n.
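In code the two shifts above generalize directly. This small helper is a sketch, not taken from the paper's M-files:

```python
def autocorr(a, max_shift):
    """Empirical autocorrelation: auto(s) = sum_i a[i]*a[i+s] / n."""
    n = len(a)
    return [sum(a[i] * a[i + s] for i in range(n - s)) / n
            for s in range(max_shift + 1)]

data = [1.0, -2.0, 3.0, -1.0]
ac = autocorr(data, 2)
# auto(0) = (1 + 4 + 9 + 1)/4 = 3.75 is the (biased) variance estimate:
# the spike at zero shift seen in the figures.
```
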

[Figure: Autocorrelation for random data; the peak at zero shift measures the variance. Left panel: linear trend subtracted; right panel: quadratic trend subtracted. Both show a spike at zero shift and small values for increasing shift.]

Ionospheric delays for one-ways, single and double differences

    I(k) = ((Φ2,k − λ2 N2) − (Φ1,k − λ1 N1)) / (1 − (f1/f2)²)

[Figure: Ionospheric delay I [m] over epochs (epoch interval 20 s) in four panels: one-ways at the master, one-ways at the rover, single differences, and double differences, for the visible PRNs.]

[Figure: Autocorrelation for the ionospheric delay to PRN 2, in four panels with m² units. Upper left shows the difference in time, I(k) − I(k−1); upper right the one-way; lower left the single difference; and lower right the double difference (PRN 2, PRN 26). Note the various orders of magnitude.]

Autocorrelation of ionosphere delay for one-ways

    PRN   Elevation   σI master (m)   σI rover (m)   Shift for first zero: master   rover
    26    68.9        .8              .              35                             3
    2     59.         .8              .4             5                              35
    27    28.         .39             .35            3                              32
    6     22.8        .77             .7             3                              3
    23    2.4         .7              .9             2                              2
    9     8.5         .48             .9             5                              3

Reference

Borre, Kai & Gilbert Strang (1997) Autocorrelation Functions in GPS Data Processing: Modeling Aspects. Pages 143–151 in Proceedings of the Institute of Navigation GPS-97, September 16–19, Kansas City, Missouri

Kalman Filters, Correlation Functions, Power Spectrums, and Ambiguity Experiments Kai Borre, Aalborg University

The following matrix equation

    [ P1 ]   [ 1    1         0    0  ] [ ρ  ]   [ e1 ]
    [ Φ1 ] = [ 1   −1         λ1   0  ] [ I  ] + [ ε1 ]
    [ P2 ]   [ 1   (f1/f2)²   0    0  ] [ N1 ]   [ e2 ]
    [ Φ2 ]   [ 1  −(f1/f2)²   0    λ2 ] [ N2 ]   [ ε2 ]      (1)

can be used in various ways. One method is to solve for the 4 unknowns on an epoch-to-epoch basis. An alternative is to employ (1) as an observation equation in a filter. In any case we are faced with an interesting problem: ρ and I are to be estimated as reals while N1 and N2 by definition are integers. This situation is difficult. One procedure is to solve for all 4 unknowns as if they were reals. This gives what is called a float solution. Given the float solution we can use a smart method that estimates N1 and N2 as integers. This refined solution is known as the fixed solution. A popular method is the LAMBDA method described by Teunissen [4]. However, our Matlab code uses a less complicated method described in [5].
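For a single epoch the 4-by-4 system can be solved directly. The sketch below is in Python rather than the paper's MATLAB; the observation values are synthetic, and the plain rounding used at the end is only an illustration (the paper fixes the ambiguities with the method of Yang et al. [5], not by rounding).

```python
c = 299792458.0                      # speed of light [m/s]
f1, f2 = 1575.42e6, 1227.60e6        # L1, L2 carrier frequencies [Hz]
lam1, lam2 = c / f1, c / f2          # wavelengths [m]
alpha = (f1 / f2) ** 2

def solve(A, y):
    """Gaussian elimination with partial pivoting for a small square system."""
    n = len(y)
    M = [row[:] + [yi] for row, yi in zip(A, y)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for j in range(col, n + 1):
                M[r][j] -= f * M[col][j]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][j] * x[j] for j in range(r + 1, n))) / M[r][r]
    return x

# Design matrix of the epoch system; unknowns are (rho, I, N1, N2).
A = [[1.0,  1.0,   0.0,  0.0],
     [1.0, -1.0,   lam1, 0.0],
     [1.0,  alpha, 0.0,  0.0],
     [1.0, -alpha, 0.0,  lam2]]

# One synthetic epoch with known truth (illustrative numbers only).
rho, iono, N1, N2 = 21_000_000.0, 3.0, 5.0, 7.0
obs = [rho + iono,
       rho - iono + lam1 * N1,
       rho + alpha * iono,
       rho - alpha * iono + lam2 * N2]

float_sol = solve(A, obs)                        # the float solution
fixed = [round(float_sol[2]), round(float_sol[3])]  # naive fixing by rounding
```
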

Computed positions for a static receiver. The present observations are taken with active SA. The variation over time is mainly due to SA.

[Figure: X, Y, Z components of the receiver position relative to the mean position [m] versus time [s], and raw receiver positions (φ, λ) relative to the mean position.]

The first 3 epochs are deleted due to bad data. During epochs 3–5 the rover remains at rest; from epoch 5 it is moving a bit. During epochs 29–38 it is moved in a circle with radius ca. 6 meters. Then it rests at the circumference in epochs 38–587 and then makes a random movement outside the circle in epochs 65–787. The difference in computed positions for a static (master) receiver and a moving (rover) receiver. Now the detailed picture of the actual movement becomes clear. The differential positions are good to within a few dm.

[Figure: X, Y, Z components of the differential vector [m] versus time [s], and the horizontal track of the differential receiver position [m].]

Double differenced code and phase observations on L1 from static receivers. The ambiguities N are estimated as a float solution based on observations from epochs 3–4. Next the vector components (x, y, z) are computed as well as the vector length.

[Figure: Estimated vector, L1 observations only, float ambiguities: x, y, z components and norm of the vector [m] over epochs.]

The figure shows the innovations for the four satellites as computed with respect to the reference satellite, the covariance function for a selected innovation component, and the corresponding power spectrum illustrating the color of the innovation noise.

[Figure: Estimated vector, L1 observations only, float ambiguities: norm of vector [m], innovations [mm], correlation function [mm²], and power spectrum [dB]; σ_vector = 2.6 mm.]

Float and fixed values of L1 ambiguities:

    Float          Fixed
    22 97.         22 97
    664 99.67      664 98
    63 53.36       63 53
    98 426.78      98 429

The fixed values result from a procedure given by Yang et al. [5]. The above table lists the estimated ambiguities. We observe that the rounded values of the float ambiguities deviate at most 2 from the fixed values. The standard deviation of the length of the vector between master and rover is σ_vector = 16.7 mm for the float solution and σ_vector = 8.3 mm for the fixed solution. We see that the vector components drift over time when incorrect ambiguities are used. Fixing the ambiguities to the correct values yields an estimate of the vector components at cm-level or even at mm-level.

Double differenced code and phase observations on L1 from static receivers. Note that the vector length is easier to estimate than its single components x, y, z.

[Figure: Estimated vector, L1 observations only, fixed ambiguities: x, y, z components and norm of the vector [m] over epochs.]

The ambiguities N are estimated as a fixed solution based on observations from epochs 3–4. Note the dramatic increase in accuracy by fixing the ambiguities! Note the nice peak of the covariance function. The innovations are close to white noise.

[Figure: Estimated vector, L1 observations only, fixed ambiguities: norm of vector [m], innovations [mm], correlation function [mm²], and power spectrum [dB]; σ_vector = 8.59 mm.]

The same as Figure 3, but at epoch 25 we change an ambiguity by the amount of 1 cycle.

[Figure: Estimated vector, L1 observations only, fixed ambiguities, one ambiguity changed by 1 cycle: x, y, z components and norm of the vector [m] over epochs.]

[Figure: Estimated vector, L1 observations only, fixed ambiguities, one ambiguity changed by 1 cycle: norm of vector [m], innovations [mm], correlation function [mm²], and power spectrum [dB]; σ_vector = 66.2 mm.]

The length of the baseline vector, ‖v‖ = √(x² + y² + z²) = d, often is easier to estimate than the components of v = (x, y, z). We illustrate this by applying the law of variance propagation:

    σd² = [ ∂d/∂x  ∂d/∂y  ∂d/∂z ] Σv [ ∂d/∂x ]
                                      [ ∂d/∂y ]
                                      [ ∂d/∂z ]

        = (∂d/∂v)ᵀ Σv (∂d/∂v).

For the actual data and with fixed ambiguities we have σd = 3.5 mm, σx = 4.8 mm, σy = .7 mm, and σz = 3.2 mm. This partially supports our statement.
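The propagation above is easy to code. A small sketch follows; the example vector and covariance below are invented for illustration, not the paper's data:

```python
import math

def sigma_length(v, cov):
    """Propagate the 3x3 covariance of v = (x, y, z) to the std dev of d = |v|.
    Implements sigma_d^2 = (dd/dv)^T Sigma_v (dd/dv)."""
    x, y, z = v
    d = math.sqrt(x * x + y * y + z * z)
    g = (x / d, y / d, z / d)            # gradient of d with respect to (x, y, z)
    var = sum(g[i] * cov[i][j] * g[j] for i in range(3) for j in range(3))
    return math.sqrt(var)

# With an identity covariance the gradient is a unit vector, so sigma_d = 1
# regardless of the baseline direction:
eye = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
s = sigma_length((3.0, 4.0, 12.0), eye)
```

When the component variances differ, σd picks up mostly the variance along the baseline direction, which is why d can come out better determined than the individual components.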

All M-files producing the figures in this paper can be found at kom.auc.dk/~borre/life-l99. The original Ashtech data are stored in files *.5. The M-files bdata, edata, and sdata read the binary observation files and rearrange the data into a useful MATLAB format in the *.dat files. The call rawrecpo produces our first figure. This file calls two other M-files, togeod and topocent. togeod is a function that computes geodetic coordinates (φ, λ, h) given Cartesian coordinates (X, Y, Z) and the semi-major axis a and inverse flattening f of a reference ellipsoid. topocent transforms a 3-D vector dx into topocentric coordinates (Dist, Az, El) with origin at X. The main M-file makebase reads the *.dat files, estimates ambiguities for DD on L1 according to Yang et al. (1994), and uses get_eph and find_eph to compute satellite positions. The file get_rho computes the distance in ECEF between satellite and receiver at raw transmit time given an ephemeris. Now we can write the observation equations for the unknown vector components (x, y, z). We solve the normals on an epoch-to-epoch basis.

Conclusions

We have given a standard presentation of GPS observations and their characteristics. With a real data set we focus on a few specific topics: float and fixed ambiguity solutions and their effect on the estimated vector. In our special case we note that the standard deviation of the vector length is halved when changing from the float to the fixed solution. We also note that it is easier to estimate the vector length than the vector components themselves. Under controlled circumstances we change an ambiguity, study the effect on the solution, and observe how the innovation vector slowly strives toward the correct, known solution.

References

[1] Borre, Kai (1995) GPS i Landmålingen. Order at borre@kom.auc.dk
[2] Borre, Kai (2001) The GPS Toolbox. GPS Solutions, 4:3:47–51. John Wiley & Sons, Inc.
[3] Strang, Gilbert & Borre, Kai (1997) Linear Algebra, Geodesy, and GPS. Wellesley-Cambridge Press. Order at www.wellesleycambridge.com
[4] Teunissen, P. J. G. (1993) Least-Squares Estimation of the Integer GPS Ambiguities. In: LGR-Series No. 6, Delft Geodetic Computing Center
[5] Yang, Ming & Goad, Clyde & Schaffrin, Burkhard (1994) Real-time On-the-Fly Ambiguity Resolution Over Short Baselines in the Presence of Anti-Spoofing. In: Proceedings of ION GPS-94, 519–525

Block Elimination and Weight Matrices Kai Borre, Aalborg University

Block Elimination

    [ A  B ] [ x1 ]   [ b1 ]
    [ C  D ] [ x2 ] = [ b2 ].

Multiplying from the left with a block elimination matrix gives

    [ I       0 ] [ A  B ] [ x1 ]   [ I       0 ] [ b1 ]
    [ −CA⁻¹   I ] [ C  D ] [ x2 ] = [ −CA⁻¹   I ] [ b2 ]

or explicitly

    [ A   B          ] [ x1 ]   [ b1           ]
    [ 0   D − CA⁻¹B  ] [ x2 ] = [ b2 − CA⁻¹b1 ].

Final result

    [ A  B ]⁻¹   [ A⁻¹(I + B(D − CA⁻¹B)⁻¹CA⁻¹)   −A⁻¹B(D − CA⁻¹B)⁻¹ ]
    [ C  D ]   = [ −(D − CA⁻¹B)⁻¹CA⁻¹             (D − CA⁻¹B)⁻¹      ].
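The block inverse formula can be checked numerically. A self-contained sketch with hand-rolled 2-by-2 blocks follows; the block values are invented for the check:

```python
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def inv(X):
    """Gauss-Jordan inverse of a small square matrix."""
    n = len(X)
    M = [list(map(float, row)) + [float(i == j) for j in range(n)]
         for i, row in enumerate(X)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        p = M[col][col]
        M[col] = [v / p for v in M[col]]
        for r in range(n):
            if r != col:
                f = M[r][col]
                M[r] = [v - f * w for v, w in zip(M[r], M[col])]
    return [row[n:] for row in M]

def add(X, Y):
    return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

def neg(X):
    return [[-v for v in row] for row in X]

# Invented 2x2 blocks; A must be invertible.
A = [[4.0, 1.0], [1.0, 3.0]]
B = [[2.0, 0.0], [1.0, 1.0]]
C = [[0.0, 1.0], [1.0, 0.0]]
D = [[5.0, 2.0], [2.0, 4.0]]

Ai = inv(A)
S = add(D, neg(matmul(C, matmul(Ai, B))))   # Schur complement D - C A^{-1} B
Si = inv(S)

TL = add(Ai, matmul(Ai, matmul(B, matmul(Si, matmul(C, Ai)))))  # A^{-1}(I + B S^{-1} C A^{-1})
TR = neg(matmul(Ai, matmul(B, Si)))                             # -A^{-1} B S^{-1}
BL = neg(matmul(Si, matmul(C, Ai)))                             # -S^{-1} C A^{-1}
BR = Si                                                         # S^{-1}

block_inv = [TL[0] + TR[0], TL[1] + TR[1], BL[0] + BR[0], BL[1] + BR[1]]
M4 = [A[0] + B[0], A[1] + B[1], C[0] + D[0], C[1] + D[1]]
direct = inv(M4)
err = max(abs(block_inv[i][j] - direct[i][j]) for i in range(4) for j in range(4))
```
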

Schreiber's Device

In 1877 Schreiber described a procedure for eliminating the unknown x1 (we present a formulation somewhat more general than the original one): Let there be given a set of observation equations

    A [ x1 ]
      [ x2 ] = b − ε.

The unknown x1 is eliminated if we delete the column in A corresponding to the unknown x1, and add the fictitious observational row

    [ a1ᵀa2   a1ᵀa3   …   a1ᵀan ]

having the negative weight −1/(a1ᵀa1).
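A quick numerical check of Schreiber's device (the observation matrix below is invented): the normal equations for the remaining unknowns after eliminating x1 by block elimination coincide with the normals of the column-deleted system augmented by the fictitious row with its negative weight.

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# Observation matrix with columns a1 (to be eliminated), a2, a3.
A = [[1.0, 2.0, 0.0],
     [1.0, 0.0, 3.0],
     [1.0, 1.0, 1.0],
     [1.0, 4.0, 2.0]]
a1 = [row[0] for row in A]
rest = [[row[j] for row in A] for j in (1, 2)]     # columns a2, a3
n11 = dot(a1, a1)

# Reduced normals for (x2, x3) from block elimination of x1:
N_reduced = [[dot(rest[i], rest[j]) - dot(a1, rest[i]) * dot(a1, rest[j]) / n11
              for j in range(2)] for i in range(2)]

# Schreiber: delete the a1 column, append the fictitious row
# [a1'a2, a1'a3], and give it the negative weight -1/(a1'a1).
fict = [dot(a1, col) for col in rest]
rows = list(zip(*rest)) + [tuple(fict)]            # observations as rows
weights = [1.0] * len(A) + [-1.0 / n11]
N_schreiber = [[sum(w * r[i] * r[j] for w, r in zip(weights, rows))
                for j in range(2)] for i in range(2)]

err = max(abs(N_reduced[i][j] - N_schreiber[i][j])
          for i in range(2) for j in range(2))
```
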

We demonstrate the procedure on a numerical example:

    3 4 [ x1  x2 ]ᵀ = 8 8 2.

The augmented left side of the normals AᵀA is

    [ 3 4 8 ] 4 · 3 4 8 = [ ].

The augmented right side of the normals Aᵀb is

    [ 3 4 8 ] 4 · 8 8 2 36 = [ 4 ].

Partition of Observation Equations

Let there be given a set of observation equations Ax = b − ε partitioned into

    [ A1 ]     [ b1 ]   [ ε1 ]                   Σ1
    [ A2 ] x = [ b2 ] − [ ε2 ]   with weights    Σ2.      (2)

The matrix A1 describes usual geodetic observations while A2 exclusively deals with observations of coordinates. Considering only the first block row of observations we get the usual solution x̂ = x̂f:

    x̂f = (A1ᵀ Σ1⁻¹ A1)⁻¹ A1ᵀ Σ1⁻¹ b1 = Σf A1ᵀ Σ1⁻¹ b1.

The subscript f stands for free: the x̂f solution relates to a network not constrained to the observed coordinates b2. Of course we suppose Σf exists! Next we look for the solution x̂c when we include the second block row, namely the coordinate observations A2 x = b2 − ε2. By adding the normals from both rows we get

    (A1ᵀ Σ1⁻¹ A1 + A2ᵀ Σ2⁻¹ A2) x̂ = A1ᵀ Σ1⁻¹ b1 + A2ᵀ Σ2⁻¹ b2.      (3)

We recall that Σf⁻¹ = A1ᵀ Σ1⁻¹ A1 and A1ᵀ Σ1⁻¹ b1 = Σf⁻¹ x̂f:

    x̂c = (Σf⁻¹ + A2ᵀ Σ2⁻¹ A2)⁻¹ (Σf⁻¹ x̂f + A2ᵀ Σ2⁻¹ b2).

The common step in all derivations is our favorite matrix identity

    (T⁻¹ + Aᵀ Σe⁻¹ A)⁻¹ = T − T Aᵀ (Σe + A T Aᵀ)⁻¹ A T.      (4)

With (4) we get

    x̂c = (Σf − Σf A2ᵀ (Σ2 + A2 Σf A2ᵀ)⁻¹ A2 Σf)(Σf⁻¹ x̂f + A2ᵀ Σ2⁻¹ b2).

Multiplying out yields

    x̂c = x̂f + Σf A2ᵀ Σ2⁻¹ b2 − Σf A2ᵀ (Σ2 + A2 Σf A2ᵀ)⁻¹ A2 x̂f
             − Σf A2ᵀ (Σ2 + A2 Σf A2ᵀ)⁻¹ A2 Σf A2ᵀ Σ2⁻¹ b2.

Rearranging we get

    x̂c = x̂f − Σf A2ᵀ (Σ2 + A2 Σf A2ᵀ)⁻¹ A2 x̂f
             + (Σf − Σf A2ᵀ (Σ2 + A2 Σf A2ᵀ)⁻¹ A2 Σf) A2ᵀ Σ2⁻¹ b2.

Using (4) once again in the opposite direction we get

    x̂c = x̂f − Σf A2ᵀ (Σ2 + A2 Σf A2ᵀ)⁻¹ A2 x̂f + (Σf⁻¹ + A2ᵀ Σ2⁻¹ A2)⁻¹ A2ᵀ Σ2⁻¹ b2.

Finally the matrix identity (26) takes us the last step:

    x̂c = x̂f − Σf A2ᵀ (Σ2 + A2 Σf A2ᵀ)⁻¹ A2 x̂f + Σf A2ᵀ (Σ2 + A2 Σf A2ᵀ)⁻¹ b2

or rearranged

    x̂c = x̂f + Σf A2ᵀ (Σ2 + A2 Σf A2ᵀ)⁻¹ (b2 − A2 x̂f).      (5)

From (3) we read the covariance matrix of the constrained estimate x̂c:

    Σc = (A1ᵀ Σ1⁻¹ A1 + A2ᵀ Σ2⁻¹ A2)⁻¹.      (6)
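The update formula (5) can be verified against the directly stacked normal equations. A compact sketch with two unknowns and a single coordinate observation, so that every inverse is 2-by-2 or scalar (all numbers invented):

```python
def inv2(M):
    a, b = M[0]
    c, d = M[1]
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def mv(M, v):
    return [M[0][0] * v[0] + M[0][1] * v[1], M[1][0] * v[0] + M[1][1] * v[1]]

# Free network: A1 x = b1 - eps1 with unit weights (Sigma1 = I).
A1 = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [1.0, -1.0]]
b1 = [1.1, 2.0, 3.2, -0.8]
N1 = [[sum(r[i] * r[j] for r in A1) for j in range(2)] for i in range(2)]
rhs1 = [sum(r[i] * o for r, o in zip(A1, b1)) for i in range(2)]
Sf = inv2(N1)
xf = mv(Sf, rhs1)                        # the free solution

# One coordinate observation a2 . x = b2 with variance s2 (Sigma2 = s2).
a2, b2, s2 = [1.0, 0.0], 1.4, 0.25

# Update formula: xc = xf + Sf a2' (s2 + a2 Sf a2')^{-1} (b2 - a2 xf)
Sfa2 = mv(Sf, a2)
gain = 1.0 / (s2 + a2[0] * Sfa2[0] + a2[1] * Sfa2[1])
innov = b2 - (a2[0] * xf[0] + a2[1] * xf[1])
xc = [xf[0] + Sfa2[0] * gain * innov, xf[1] + Sfa2[1] * gain * innov]

# Direct solution of the stacked normals:
N = [[N1[i][j] + a2[i] * a2[j] / s2 for j in range(2)] for i in range(2)]
rhs = [rhs1[i] + a2[i] * b2 / s2 for i in range(2)]
xc_direct = mv(inv2(N), rhs)
err = max(abs(p - q) for p, q in zip(xc, xc_direct))
```
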

We recall that Σf = (A1ᵀ Σ1⁻¹ A1)⁻¹, so (6) can be written

    Σc = (Σf⁻¹ + A2ᵀ Σ2⁻¹ A2)⁻¹.      (7)

Using (4) we get

    Σc = Σf − Σf A2ᵀ (Σ2 + A2 Σf A2ᵀ)⁻¹ A2 Σf.      (8)

A nonnegative definite matrix is subtracted from Σf, so

    Σc ≤ Σf.      (9)

This says that covariances in the constrained network are smaller than or equal to those in the free network. Putting Σ2 = 0 in (2), so that the observation error equals zero, is equivalent to introducing the constraint A2 x = b2:

    A1 x = b1 − ε1      (10)
    A2 x = b2.      (11)

This has a large impact on the formulation of filters. Krarup calls this a hard postulation of the least-squares problem, as opposed to a soft postulation of the constraints with Σ2 > 0. So setting the observation error equal to zero in a Kalman filter is the same as using a constraint in a standard least-squares formulation.

Dependency on the Weights

    b = [ b1 ],   ε = [ ε1 ],   A = [ A1 ],   Σ = [ Σ1  0  ]
        [ b2 ]        [ ε2 ]        [ A2 ]        [ 0   Σ2 ].

The normal equation for the original, total problem is

    (A1ᵀ Σ1⁻¹ A1 + A2ᵀ Σ2⁻¹ A2) x̂ = A1ᵀ Σ1⁻¹ b1 + A2ᵀ Σ2⁻¹ b2.      (12)

The perturbed problem is

    (A1ᵀ Σ1⁻¹ A1 + A2ᵀ (Σ2⁻¹ + Δ(Σ2⁻¹)) A2)(x̂ + Δx) = A1ᵀ Σ1⁻¹ b1 + A2ᵀ (Σ2⁻¹ + Δ(Σ2⁻¹)) b2.      (13)

Now subtract (12) from (13) to find an equation for Δx:

    (A1ᵀ Σ1⁻¹ A1 + A2ᵀ (Σ2⁻¹ + Δ(Σ2⁻¹)) A2) Δx + A2ᵀ Δ(Σ2⁻¹) A2 x̂ = A2ᵀ Δ(Σ2⁻¹) b2.

We set N = A1ᵀ Σ1⁻¹ A1 + A2ᵀ Σ2⁻¹ A2 and ε̂2 = b2 − A2 x̂. Then the change in x̂ is

    Δx = (N + A2ᵀ Δ(Σ2⁻¹) A2)⁻¹ A2ᵀ Δ(Σ2⁻¹) ε̂2.      (14)

Now (4) yields the change in the inverse:

    (N + A2ᵀ Δ(Σ2⁻¹) A2)⁻¹ = N⁻¹ − N⁻¹ A2ᵀ ((Δ(Σ2⁻¹))⁻¹ + A2 N⁻¹ A2ᵀ)⁻¹ A2 N⁻¹.

For m observations, this matrix multiplies A2ᵀ Δ(Σ2⁻¹) ε̂2 to give Δx. If we specialize to one single observation, a2ᵀ is an m by 1 vector and Δ(σ2⁻¹) is a 1 by 1 scalar. We name the product a2 N⁻¹ a2ᵀ = s and get

    Δx = N⁻¹ a2ᵀ · Δ(σ2⁻¹) / (1 + s Δ(σ2⁻¹)) · ε̂2

or

    Δx = ( Δ(σ2⁻¹) ε̂2 / (1 + s Δ(σ2⁻¹)) ) N⁻¹ a2ᵀ.      (15)
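Formula (15) is a Sherman-Morrison style update and can be checked against a brute-force re-solution with the perturbed weight. A sketch with two unknowns and invented numbers:

```python
def inv2(M):
    a, b = M[0]
    c, d = M[1]
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def mv(M, v):
    return [M[0][0] * v[0] + M[0][1] * v[1], M[1][0] * v[0] + M[1][1] * v[1]]

A = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [2.0, -1.0]]
b = [0.9, 2.1, 3.3, -0.5]
w = [1.0, 1.0, 1.0, 1.0]                 # inverse variances (weights)

def normals(weights):
    N = [[sum(wk * r[i] * r[j] for wk, r in zip(weights, A)) for j in range(2)]
         for i in range(2)]
    rhs = [sum(wk * r[i] * o for wk, r, o in zip(weights, A, b)) for i in range(2)]
    return N, rhs

N, rhs = normals(w)
Ninv = inv2(N)
xhat = mv(Ninv, rhs)

# Change the weight of the last observation by dw = Delta(sigma2^{-1}).
a2, b2, dw = A[3], b[3], 0.5
Na2 = mv(Ninv, a2)
s = a2[0] * Na2[0] + a2[1] * Na2[1]      # s = a2 N^{-1} a2'
eps2 = b2 - (a2[0] * xhat[0] + a2[1] * xhat[1])
factor = dw / (1.0 + s * dw) * eps2
dx = [Na2[0] * factor, Na2[1] * factor]  # the closed-form change in xhat

# Brute force: re-solve the normals with the new weight.
w2 = w[:3] + [w[3] + dw]
N2, rhs2 = normals(w2)
x2 = mv(inv2(N2), rhs2)
err = max(abs(xhat[i] + dx[i] - x2[i]) for i in range(2))
```

The update is exact, not a first-order approximation, so the two solutions agree to machine precision.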

A First Order Derivation

If we consider a small change Δ(Σ⁻¹) in the weighting matrix, we can compute the corresponding small movement Δx̂ in the estimate. The problem is linearized and higher order terms are ignored:

    Aᵀ (Σ⁻¹ + Δ(Σ⁻¹)) A (x̂ + Δx̂) = Aᵀ (Σ⁻¹ + Δ(Σ⁻¹)) b.

Subtracting Aᵀ Σ⁻¹ A x̂ = Aᵀ Σ⁻¹ b leaves an equation for the first-order correction Δx̂:

    Aᵀ Σ⁻¹ A (Δx̂) = Aᵀ Δ(Σ⁻¹)(b − A x̂).

This correction to x̂ is small when Δ(Σ⁻¹) is small, and when the residual b − A x̂ is small. If we work with the covariance matrix Σ instead of Σ⁻¹, linearization gives

    Δ(Σ⁻¹) = −Σ⁻¹ (ΔΣ) Σ⁻¹.

This is the matrix analog of −Δx/x², the first-order change in 1/x. Linearization also gives the change in Q = (Aᵀ Σ⁻¹ A)⁻¹, which contains the variances and covariances of the estimate x̂:

    ΔQ = −Q Aᵀ Δ(Σ⁻¹) A Q.

Useful Matrix Identities

Whenever the following inverse matrices exist we have

    (AB)⁻¹ = B⁻¹ A⁻¹      (16)
    (I + AB)⁻¹ A = A(I + BA)⁻¹      (17)
    (A and B may not be square)      (18)

for vectors we especially have

    (I + abᵀ)⁻¹ a = a (1 + bᵀa)⁻¹      (19)
    (A⁻¹ + B⁻¹)⁻¹ = A(A + B)⁻¹ B = B(A + B)⁻¹ A      (20)
    (A + B D⁻¹ C)⁻¹ = A⁻¹ − A⁻¹ B(D + C A⁻¹ B)⁻¹ C A⁻¹      (21)
    D⁻¹ C(A + B D⁻¹ C)⁻¹ = D⁻¹ C A⁻¹ (I + B D⁻¹ C A⁻¹)⁻¹      (22)
        = D⁻¹ C(I + A⁻¹ B D⁻¹ C)⁻¹ A⁻¹      (23)
        = D⁻¹ (I + C A⁻¹ B D⁻¹)⁻¹ C A⁻¹      (24)
        = (I + D⁻¹ C A⁻¹ B)⁻¹ D⁻¹ C A⁻¹      (25)
        = (D + C A⁻¹ B)⁻¹ C A⁻¹.      (26)
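Identities (21)–(26) are easy to spot-check numerically. The sketch below verifies the two ends of the chain, the left side of (22) and the right side of (26), against each other on small 2-by-2 blocks (the block values are invented):

```python
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv(M):
    a, b = M[0]
    c, d = M[1]
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def add(X, Y):
    return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

A = [[3.0, 1.0], [0.0, 2.0]]
B = [[1.0, 2.0], [1.0, 0.0]]
C = [[0.0, 1.0], [2.0, 1.0]]
D = [[4.0, 1.0], [1.0, 3.0]]
Ai, Di = inv(A), inv(D)

# Left end of the chain: D^{-1} C (A + B D^{-1} C)^{-1}
lhs = matmul(matmul(Di, C), inv(add(A, matmul(B, matmul(Di, C)))))
# Right end, identity (26): (D + C A^{-1} B)^{-1} C A^{-1}
rhs = matmul(inv(add(D, matmul(C, matmul(Ai, B)))), matmul(C, Ai))
err = max(abs(lhs[i][j] - rhs[i][j]) for i in range(2) for j in range(2))
```
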

MATLAB Code Demonstrating the Theory

A = [1 2 6; 2 3 7; 3 4 8; 4 5 10];
b = [1; 7; 8; 2];
x = A\b
A1 = A(:,1:2);
A2 = A(:,3);
% Projector
P = eye(4) - A1 * inv(A1' * A1) * A1';
PA = P * A;
Pb = P * b;
x3 = Pb(4)/PA(4,3)
% Block elimination
N = A' * A;
bn = A' * b;
b1 = bn(1:2);
b2 = bn(3);
A11 = N(1:2,1:2);
B = N(1:2,3);
D = N(3,3);
x3 = inv(D - B' * inv(A11) * B) * (b2 - B' * inv(A11) * b1)
% General formula
M = A2' * A1 * inv(A1' * A1) * A1';
R = A2' - M;
x3 = inv(R * A2) * R * b

Reference

Strang, Gilbert & Kai Borre (1997) Linear Algebra, Geodesy, and GPS. Wellesley-Cambridge Press. Order at www.wellesleycambridge.com