11 Ranging by Global Navigation Satellite Systems (GNSS)
Algebraic Geodesy and Geoinformatics PART II APPLICATIONS

Overview

First the observation equations are developed for the implicit and the explicit error definition, and simplified by a preliminary elimination. The equations of the 4-point problem can then be solved symbolically via different methods: the Sturmfels method, the Dixon resultant, the standard Groebner basis, the reduced Groebner basis and the Global Symbolic Solver (GSS). All of these methods give the same result, a quadratic equation in a single variable. In addition, the Global Numerical Solver (GNS) can provide a fast numerical solution. For the N-point problem the solutions of the implicit and the explicit error representations differ. However, one can use the result of the 4-point problem with the Gauss-Jacobi combinatorial algorithm for N points, too, provided the weights are computed on the basis of the equations of the explicit distance-error representation. The ALESS model for the explicit error representation can also be generated and solved by the homotopy method, as well as with the Extended Newton-Raphson method and a local minimization method employing a subset solution of the Gauss-Jacobi algorithm as initial condition.

11-1 Problem definition

Observation equations

Throughout history, position determination has been one of the most important tasks of mountaineers, pilots, sailors, civil engineers, etc. In modern times, Global Navigation Satellite Systems (GNSS) provide an ultimate method to accomplish this task. A handheld GNSS receiver measures the travel time of the signal transmitted from each satellite; the distance travelled by the signal from a satellite to the receiver can then be computed by multiplying the measured time by the speed of light in vacuum.
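Before developing the equations symbolically, the observation model can be illustrated numerically. The sketch below is a Python stand-in for readers without Mathematica; the satellite coordinates, receiver position and clock bias are invented for illustration. It builds noise-free pseudo-ranges d_i = ||x - s_i|| + x4 and checks that the implicit residuals f_i (introduced below) vanish at the true receiver state.

```python
import numpy as np

# Hypothetical satellite positions (metres); illustrative values only.
sats = np.array([
    [15600e3,  7540e3, 20140e3],
    [18760e3,  2750e3, 18610e3],
    [17610e3, 14630e3, 13480e3],
    [19170e3,   610e3, 18390e3],
])
x_true = np.array([3.652e6, 2.0e6, 4.9e6])  # receiver position (invented)
bias = 150.0                                # receiver clock bias, in metres

# Pseudo-range observations: geometric distance plus the clock-bias term.
d = np.linalg.norm(sats - x_true, axis=1) + bias

def f(p):
    """Implicit residuals f_i = (x1-a_i)^2 + (x2-b_i)^2 + (x3-c_i)^2 - (x4-d_i)^2."""
    return np.sum((p[:3] - sats) ** 2, axis=1) - (p[3] - d) ** 2

# All four residuals vanish at the true state; atol absorbs the floating-point
# rounding of the ~1e14-scale squared terms.
print(np.allclose(f(np.append(x_true, bias)), 0.0, atol=1.0))
```

With four satellites the four equations f_i = 0 determine the four unknowns; the rest of the section solves them in closed form.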
The distance of the receiver from the i-th GNSS satellite, the pseudo-range observation d_i, is related to the unknown position of the receiver {x1, x2, x3} by

d_i = Sqrt[(x1 - a_i)^2 + (x2 - b_i)^2 + (x3 - c_i)^2] + x4

where {a_i, b_i, c_i}, i = 0, 1, 2, 3 are the coordinates of the i-th satellite. The distance is also influenced by the satellite and receiver clock biases. The satellite clock bias can be modelled, while the receiver clock bias has to be considered as an unknown variable, x4. This means we have four unknowns; consequently we need at least four satellites to provide a minimal set of observations. The general form of the equation for the i-th satellite is

f_i = (x1 - a_i)^2 + (x2 - b_i)^2 + (x3 - c_i)^2 - (x4 - d_i)^2

The residual of this type of equation represents the error implicitly. However, in geodesy the explicit distance error definition is usual, namely

g_i = d_i - Sqrt[(x1 - a_i)^2 + (x2 - b_i)^2 + (x3 - c_i)^2] - x4
The relation between the two expressions is

f_i = -g_i (Sqrt[(x1 - a_i)^2 + (x2 - b_i)^2 + (x3 - c_i)^2] + d_i - x4)

which implies that if f_i = 0 then g_i = 0 and vice versa, since the second factor is positive for physically meaningful configurations. Therefore, in the case of four observations - a determined system - we employ the first expression, which is easy to handle as a polynomial. The observation equations,

Clear["Global`*"]
e1 = (x1 - a0)^2 + (x2 - b0)^2 + (x3 - c0)^2 - (x4 - d0)^2;
e2 = (x1 - a1)^2 + (x2 - b1)^2 + (x3 - c1)^2 - (x4 - d1)^2;
e3 = (x1 - a2)^2 + (x2 - b2)^2 + (x3 - c2)^2 - (x4 - d2)^2;
e4 = (x1 - a3)^2 + (x2 - b3)^2 + (x3 - c3)^2 - (x4 - d3)^2;

Let us suppose that the observation data are,

data a , a , a , a , b , b , b , b , c , c , c , c , d , d , d , d ;

Preliminary elimination

First, this system of polynomials is transformed into a system of three linear equations and one quadratic equation. Let us expand the original equations, multiply them by minus one, and sort the terms,

eqsl = {e1l, e2l, e3l, e4l} = Map[Sort[Expand[-#]] &, {e1, e2, e3, e4}]

Each equation then has the general form

-x1^2 - x2^2 - x3^2 + x4^2 + 2 a_i x1 - a_i^2 + 2 b_i x2 - b_i^2 + 2 c_i x3 - c_i^2 - 2 d_i x4 + d_i^2 = 0

Subtract the fourth equation from the other three,
q = Table[eqsl[[i]] - eqsl[[4]], {i, 1, 3}] // Simplify

The quadratic terms cancel, leaving a system of three linear equations, which can be written as

g1 = a0,3 x1 + b0,3 x2 + c0,3 x3 + d0,3 x4 + e0,3;
g2 = a1,3 x1 + b1,3 x2 + c1,3 x3 + d1,3 x4 + e1,3;
g3 = a2,3 x1 + b2,3 x2 + c2,3 x3 + d2,3 x4 + e2,3;

The coefficients a i,3, b i,3, c i,3, d i,3, e i,3, i = 0, ..., 2 can be determined as,

coeffs0 = Table[{Coefficient[q[[i + 1]], x1], Coefficient[q[[i + 1]], x2],
    Coefficient[q[[i + 1]], x3], Coefficient[q[[i + 1]], x4]} // Factor, {i, 0, 2}];

which are the coefficients of the variables x1, x2, x3, x4. The constant part is,

coeffs1 = Table[q[[i]] - coeffs0[[i]].{x1, x2, x3, x4} // Simplify, {i, 1, 3}];

Therefore, all of the coefficients are,

coeffs = Table[Union[coeffs0[[i]], {coeffs1[[i]]}], {i, 1, 3}]

which, up to a common sign that does not affect the homogeneous equations g_i = 0, are

a i,3 = 2 (a_i - a_3),  b i,3 = 2 (b_i - b_3),  c i,3 = 2 (c_i - c_3),  d i,3 = 2 (d_3 - d_i),
e i,3 = a_3^2 - a_i^2 + b_3^2 - b_i^2 + c_3^2 - c_i^2 + d_i^2 - d_3^2.

Let us assign these coefficients to the linear system,

coeffsn = Flatten[Table[Inner[#1 -> #2 &,
    {ai,3, bi,3, ci,3, di,3, ei,3}, coeffs[[i + 1]], List], {i, 0, 2}]]

In addition, we take one of the nonlinear equations, say the fourth one,

e4 = (x1 - a3)^2 + (x2 - b3)^2 + (x3 - c3)^2 - (x4 - d3)^2

Now, we shall solve the linear system for the variables x1, x2, x3, with x4 as a parameter.
That is, the relations x1 = g1(x4), x2 = g2(x4) and x3 = g3(x4) will be computed. To do that, different elimination methods can be employed.
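The elimination just outlined can also be mirrored numerically. Below is a Python sketch, with invented satellite coordinates and pseudo-ranges: differencing the fourth equation removes the quadratic terms, the linear system is solved with x4 as a parameter (x = p*x4 + q, playing the role of g1, g2, g3), and substitution into the remaining equation yields the quadratic h2*x4^2 + h1*x4 + h0 = 0.

```python
import numpy as np

# Hypothetical geometry (illustrative values, not real ephemerides).
sats = np.array([
    [15600e3,  7540e3, 20140e3],
    [18760e3,  2750e3, 18610e3],
    [17610e3, 14630e3, 13480e3],
    [19170e3,   610e3, 18390e3],
])
x_true = np.array([3.652e6, 2.0e6, 4.9e6])
bias = 150.0
d = np.linalg.norm(sats - x_true, axis=1) + bias   # noise-free pseudo-ranges

# Differencing the 4th equation from the first three leaves a linear
# system A @ (x1,x2,x3) = u*x4 + k, i.e. x(x4) = p*x4 + q.
A = 2.0 * (sats[:3] - sats[3])
u = 2.0 * (d[:3] - d[3])
k = (np.sum(sats[:3] ** 2, axis=1) - np.sum(sats[3] ** 2)
     - d[:3] ** 2 + d[3] ** 2)
p = np.linalg.solve(A, u)
q = np.linalg.solve(A, k)

# Substituting into the remaining nonlinear equation gives
# h2*x4^2 + h1*x4 + h0 = 0.
h2 = p @ p - 1.0
h1 = 2.0 * p @ (q - sats[3]) + 2.0 * d[3]
h0 = (q - sats[3]) @ (q - sats[3]) - d[3] ** 2
disc = np.sqrt(h1 * h1 - 4.0 * h2 * h0)
cands = [np.append(p * r + q, r) for r in ((-h1 + disc) / (2.0 * h2),
                                           (-h1 - disc) / (2.0 * h2))]

# The quadratic has two roots; the true receiver state is one of them.
best = min(cands, key=lambda c: np.linalg.norm(c[:3] - x_true))
print(np.allclose(best[:3], x_true, atol=1e-2), abs(best[3] - bias) < 1e-2)
```

As in the text, one root corresponds to the admissible receiver position and the other to a spurious point, which the section later discards by comparing norms.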
4 4 RangingGNSS_11.nb 11-2 GPS 4-Point Problem Sturmfels method The Sturmfels approach can be employed to solve the linear system of {g 1, g 2, g 3 }. Depending on which variable one wants, the original system is rewritten such that this particular variable is hidden (i.e. is treated as a constant). If our interest is to solve x 1 = g 1 (x 4 ), the equations are first homogenized using a new variable x 5 and consider the variables x 1 and x 4 as well as the constant part as parameters, f1 a 0,3 x1 d 0,3 x4 e 0,3 x5 b 0,3 x2 c 0,3 x3; f2 a 1,3 x1 d 1,3 x4 e 1,3 x5 b 1,3 x2 c 1,3 x3; f3 a 2,3 x1 d 2,3 x4 e 2,3 x5 b 2,3 x2 c 2,3 x3; The Jacobian determinant then becomes, Jx1 Det x2 f1 x3 f1 x5 f1 x2 f2 x3 f2 x5 f2 x2 f3 x3 f3 x5 f3 ; Then the solution for x 1 as function of x 4 is, solx1 SolveJx1 0, x1 x1 x4 b 2,3 c 1,3 d 0,3 x4 b 1,3 c 2,3 d 0,3 x4 b 2,3 c 0,3 d 1,3 x4 b 0,3 c 2,3 d 1,3 x4 b 1,3 c 0,3 d 2,3 x4 b 0,3 c 1,3 d 2,3 b 2,3 c 1,3 e 0,3 b 1,3 c 2,3 e 0,3 b 2,3 c 0,3 e 1,3 b 0,3 c 2,3 e 1,3 b 1,3 c 0,3 e 2,3 b 0,3 c 1,3 e 2,3 a 2,3 b 1,3 c 0,3 a 1,3 b 2,3 c 0,3 a 2,3 b 0,3 c 1,3 a 0,3 b 2,3 c 1,3 a 1,3 b 0,3 c 2,3 a 0,3 b 1,3 c 2,3 Similarly, the homogenized system for x 2 = g 2 (x 4 ) then f4 b 0,3 x2 d 0,3 x4 e 0,3 x5 a 0,3 x1 c 0,3 x3; f5 b 1,3 x2 d 1,3 x4 e 1,3 x5 a 1,3 x1 c 1,3 x3; f6 b 2,3 x2 d 2,3 x4 e 2,3 x5 a 2,3 x1 c 2,3 x3; Jx2 Det and its solution is, x1 f4 x3 f4 x5 f4 x1 f5 x3 f5 x5 f5 x1 f6 x3 f6 x5 f6 ; solx2 SolveJx2 0, x2 x2 x4 a 2,3 c 1,3 d 0,3 x4 a 1,3 c 2,3 d 0,3 x4 a 2,3 c 0,3 d 1,3 x4 a 0,3 c 2,3 d 1,3 x4 a 1,3 c 0,3 d 2,3 x4 a 0,3 c 1,3 d 2,3 a 2,3 c 1,3 e 0,3 a 1,3 c 2,3 e 0,3 a 2,3 c 0,3 e 1,3 a 0,3 c 2,3 e 1,3 a 1,3 c 0,3 e 2,3 a 0,3 c 1,3 e 2,3 a 2,3 b 1,3 c 0,3 a 1,3 b 2,3 c 0,3 a 2,3 b 0,3 c 1,3 a 0,3 b 2,3 c 1,3 a 1,3 b 0,3 c 2,3 a 0,3 b 1,3 c 2,3 Finally x 3 g 3 x 4 leads to f7 c 0,3 x3 d 0,3 x4 e 0,3 x5 a 0,3 x1 b 0,3 x2; f8 c 1,3 x3 d 1,3 x4 e 1,3 x5 a 1,3 x1 b 1,3 x2; f9 c 2,3 x3 d 2,3 x4 e 2,3 x5 a 2,3 x1 b 2,3 x2;
5 RangingGNSS_11.nb Jx3 Det x1 f7 x2 f7 x5 f7 x1 f8 x2 f8 x5 f8 x1 f9 x2 f9 x5 f9 ; solx3 SolveJx3 0, x3 x3 x4 a 2,3 b 1,3 d 0,3 x4 a 1,3 b 2,3 d 0,3 x4 a 2,3 b 0,3 d 1,3 x4 a 0,3 b 2,3 d 1,3 x4 a 1,3 b 0,3 d 2,3 x4 a 0,3 b 1,3 d 2,3 a 2,3 b 1,3 e 0,3 a 1,3 b 2,3 e 0,3 a 2,3 b 0,3 e 1,3 a 0,3 b 2,3 e 1,3 a 1,3 b 0,3 e 2,3 a 0,3 b 1,3 e 2,3 a 2,3 b 1,3 c 0,3 a 1,3 b 2,3 c 0,3 a 2,3 b 0,3 c 1,3 a 0,3 b 2,3 c 1,3 a 1,3 b 0,3 c 2,3 a 0,3 b 1,3 c 2,3 Substituting the obtained expressions of x 1 g 1 x 4, x 2 g 2 x 4 and x 3 g 3 x 4 in the fourth equation e 4 (x 1, x 2, x 3, x 4 ), G e4. solx11, 1, solx21, 1, solx31, 1; This is a quadratic equation for x 4 ExponentG, x4, List 0, 1, 2 2 The coefficients of this equation, h 2 x 4 + h 1 x 4 + h 0 = 0 are quite long expressions, therefore here we do not display them. h2 CoefficientG, x4 2 Simplify; h1 CoefficientG, x4 Simplify; h0 SimplifyG h2 x4 2 h1 x4; The actual numeric solution of the original system means the evaluation of these coefficients with the numerical data, h2c h2. coeffsn. data h1c h1. coeffsn. data h0c h0. coeffsn. data and then solving the quadratic equation for variable x 4, solx4 Solveh2c x4 ^ 2 h1c x4 h0c 0, x4 x , x The two solutions are x41, x42 x4. solx , Substituting these values in x 1 = g 1 (x 4 ), we get x 1 X1, X2 Mapx1. solx11, 1. coeffsn. data. x4 &, x41, x , Similarly, the values for x 2 are Y1, Y2 Mapx2. solx21, 1. coeffsn. data. x4 &, x41, x ,
6 6 RangingGNSS_11.nb and for x 3 Z1, Z2 Mapx3. solx31, 1. coeffsn. data. x4 &, x41, x , We can select the proper solution according to their norms, NormX1, Y1, Z NormX2, Y2, Z The second position is in space, consequently, the first solution is admissible, SetPrecisionX1, Y1, Z1, , , In order to get the symbolic expression for the coefficients of the quadratic equation, other methods also can be used Dixon Resultant Resultant Dixon Now we can solve the original system, eliminating the variables x 2 and x 3 to get x 1 = g 1 (x 4 ), AbsoluteTimingdrx1 DixonResultantg 1, g 2, g 3, x2, x3, u2, u3; , Null drx1 x1 a 2,3 b 1,3 c 0,3 x1 a 1,3 b 2,3 c 0,3 x1 a 2,3 b 0,3 c 1,3 x1 a 0,3 b 2,3 c 1,3 x1 a 1,3 b 0,3 c 2,3 x1 a 0,3 b 1,3 c 2,3 x4 b 2,3 c 1,3 d 0,3 x4 b 1,3 c 2,3 d 0,3 x4 b 2,3 c 0,3 d 1,3 x4 b 0,3 c 2,3 d 1,3 x4 b 1,3 c 0,3 d 2,3 x4 b 0,3 c 1,3 d 2,3 b 2,3 c 1,3 e 0,3 b 1,3 c 2,3 e 0,3 b 2,3 c 0,3 e 1,3 b 0,3 c 2,3 e 1,3 b 1,3 c 0,3 e 2,3 b 0,3 c 1,3 e 2,3 This is a linear expression contains only x 1 and x 4, Exponentdrx1, x1, x2, x3, x4 1, 0, 0, 1 Then the solution for x 1 solx1 Solvedrx1 0, x1 x1 x4 b 2,3 c 1,3 d 0,3 x4 b 1,3 c 2,3 d 0,3 x4 b 2,3 c 0,3 d 1,3 x4 b 0,3 c 2,3 d 1,3 x4 b 1,3 c 0,3 d 2,3 x4 b 0,3 c 1,3 d 2,3 b 2,3 c 1,3 e 0,3 b 1,3 c 2,3 e 0,3 b 2,3 c 0,3 e 1,3 b 0,3 c 2,3 e 1,3 b 1,3 c 0,3 e 2,3 b 0,3 c 1,3 e 2,3 a 2,3 b 1,3 c 0,3 a 1,3 b 2,3 c 0,3 a 2,3 b 0,3 c 1,3 a 0,3 b 2,3 c 1,3 a 1,3 b 0,3 c 2,3 a 0,3 b 1,3 c 2,3 Similarly, for the two additional variables, x 2 = g 2 (x 4 ) and x 3 = g 3 (x 4 ), drx2 DixonResultantg 1, g 2, g 3, x1, x3, u1, u3 x2 a 2,3 b 1,3 c 0,3 x2 a 1,3 b 2,3 c 0,3 x2 a 2,3 b 0,3 c 1,3 x2 a 0,3 b 2,3 c 1,3 x2 a 1,3 b 0,3 c 2,3 x2 a 0,3 b 1,3 c 2,3 x4 a 2,3 c 1,3 d 0,3 x4 a 1,3 c 2,3 d 0,3 x4 a 2,3 c 0,3 d 1,3 x4 a 0,3 c 2,3 d 1,3 x4 a 1,3 c 0,3 d 2,3 x4 a 0,3 c 1,3 d 2,3 a 2,3 c 1,3 e 0,3 a 1,3 c 2,3 e 0,3 a 2,3 c 0,3 e 1,3 a 0,3 c 2,3 e 1,3 a 1,3 c 0,3 e 2,3 a 0,3 c 1,3 e 2,3
7 RangingGNSS_11.nb Exponentdrx2, x1, x2, x3, x4 0, 1, 0, 1 solx2 Solvedrx2 0, x2 x2 x4 a 2,3 c 1,3 d 0,3 x4 a 1,3 c 2,3 d 0,3 x4 a 2,3 c 0,3 d 1,3 x4 a 0,3 c 2,3 d 1,3 x4 a 1,3 c 0,3 d 2,3 x4 a 0,3 c 1,3 d 2,3 a 2,3 c 1,3 e 0,3 a 1,3 c 2,3 e 0,3 a 2,3 c 0,3 e 1,3 a 0,3 c 2,3 e 1,3 a 1,3 c 0,3 e 2,3 a 0,3 c 1,3 e 2,3 a 2,3 b 1,3 c 0,3 a 1,3 b 2,3 c 0,3 a 2,3 b 0,3 c 1,3 a 0,3 b 2,3 c 1,3 a 1,3 b 0,3 c 2,3 a 0,3 b 1,3 c 2,3 and drx3 DixonResultantg 1, g 2, g 3, x1, x2, u1, u2 x3 a 2,3 b 1,3 c 0,3 x3 a 1,3 b 2,3 c 0,3 x3 a 2,3 b 0,3 c 1,3 x3 a 0,3 b 2,3 c 1,3 x3 a 1,3 b 0,3 c 2,3 x3 a 0,3 b 1,3 c 2,3 x4 a 2,3 b 1,3 d 0,3 x4 a 1,3 b 2,3 d 0,3 x4 a 2,3 b 0,3 d 1,3 x4 a 0,3 b 2,3 d 1,3 x4 a 1,3 b 0,3 d 2,3 x4 a 0,3 b 1,3 d 2,3 a 2,3 b 1,3 e 0,3 a 1,3 b 2,3 e 0,3 a 2,3 b 0,3 e 1,3 a 0,3 b 2,3 e 1,3 a 1,3 b 0,3 e 2,3 a 0,3 b 1,3 e 2,3 Exponentdrx3, x1, x2, x3, x4 0, 0, 1, 1 solx3 Solvedrx3 0, x3 x3 x4 a 2,3 b 1,3 d 0,3 x4 a 1,3 b 2,3 d 0,3 x4 a 2,3 b 0,3 d 1,3 x4 a 0,3 b 2,3 d 1,3 x4 a 1,3 b 0,3 d 2,3 x4 a 0,3 b 1,3 d 2,3 a 2,3 b 1,3 e 0,3 a 1,3 b 2,3 e 0,3 a 2,3 b 0,3 e 1,3 a 0,3 b 2,3 e 1,3 a 1,3 b 0,3 e 2,3 a 0,3 b 1,3 e 2,3 a 2,3 b 1,3 c 0,3 a 1,3 b 2,3 c 0,3 a 2,3 b 0,3 c 1,3 a 0,3 b 2,3 c 1,3 a 1,3 b 0,3 c 2,3 a 0,3 b 1,3 c 2,3 After substitution, we have again a quadratic equation for x 4, G e4. solx11, 1, solx21, 1, solx31, 1; ExponentG, x4, List 0, 1, 2 The coefficents of the quadratic equation are, h2d CoefficientG, x4 2 ; h1d CoefficientG, x4; h0d SimplifyG h2d x4 2 h1d x4; The coefficients provided by the Sturmfels method and the Dixon resultant are the same, h2 h2d, h1 h1d, h0 h0d Simplify 0, 0, Groebner basis First, again we want x 1 = g(x 4 ), therefore variables x 2 and x 3 should be eliminated from the Groebner basis, AbsoluteTiminggbx1 GroebnerBasisg 1, g 2, g 3, x1, x2, x3, x4, x2, x3; , Null
8 8 RangingGNSS_11.nb gbx1 x1 a 2,3 b 1,3 c 0,3 x1 a 1,3 b 2,3 c 0,3 x1 a 2,3 b 0,3 c 1,3 x1 a 0,3 b 2,3 c 1,3 x1 a 1,3 b 0,3 c 2,3 x1 a 0,3 b 1,3 c 2,3 x4 b 2,3 c 1,3 d 0,3 x4 b 1,3 c 2,3 d 0,3 x4 b 2,3 c 0,3 d 1,3 x4 b 0,3 c 2,3 d 1,3 x4 b 1,3 c 0,3 d 2,3 x4 b 0,3 c 1,3 d 2,3 b 2,3 c 1,3 e 0,3 b 1,3 c 2,3 e 0,3 b 2,3 c 0,3 e 1,3 b 0,3 c 2,3 e 1,3 b 1,3 c 0,3 e 2,3 b 0,3 c 1,3 e 2,3 Now, the basis contains only one equation, Lengthgbx1 1 in which only x 1 and x 4 can be found, Exponentgbx1, x1, x2, x3, x4 1, 0, 0, 1 Therefore x 1 g x 4 can be computed directly, solx1 Solvegbx1 0, x1 Simplify x1 b 2,3 c 1,3 x4 d 0,3 e 0,3 c 0,3 x4 d 1,3 e 1,3 b 1,3 c 2,3 x4 d 0,3 e 0,3 c 0,3 x4 d 2,3 e 2,3 b 0,3 c 2,3 x4 d 1,3 e 1,3 c 1,3 x4 d 2,3 e 2,3 a 2,3 b 1,3 c 0,3 b 0,3 c 1,3 a 1,3 b 2,3 c 0,3 b 0,3 c 2,3 a 0,3 b 2,3 c 1,3 b 1,3 c 2,3 Similarly, in the other cases and gbx2 GroebnerBasisg 1, g 2, g 3, x1, x2, x3, x4, x1, x3; Exponentgbx2, x1, x2, x3, x4 0, 1, 0, 1 solx2 Solvegbx2 0, x2 Simplify x2 a 2,3 c 1,3 x4 d 0,3 e 0,3 c 0,3 x4 d 1,3 e 1,3 a 1,3 c 2,3 x4 d 0,3 e 0,3 c 0,3 x4 d 2,3 e 2,3 a 0,3 c 2,3 x4 d 1,3 e 1,3 c 1,3 x4 d 2,3 e 2,3 a 2,3 b 1,3 c 0,3 b 0,3 c 1,3 a 1,3 b 2,3 c 0,3 b 0,3 c 2,3 a 0,3 b 2,3 c 1,3 b 1,3 c 2,3 gbx3 GroebnerBasisg 1, g 2, g 3, x1, x2, x3, x4, x1, x2; Exponentgbx3, x1, x2, x3, x4 0, 0, 1, 1 solx3 Solvegbx3 0, x3 Simplify x3 a 2,3 b 1,3 x4 d 0,3 e 0,3 b 0,3 x4 d 1,3 e 1,3 a 1,3 b 2,3 x4 d 0,3 e 0,3 b 0,3 x4 d 2,3 e 2,3 a 0,3 b 2,3 x4 d 1,3 e 1,3 b 1,3 x4 d 2,3 e 2,3 a 2,3 b 1,3 c 0,3 b 0,3 c 1,3 a 1,3 b 2,3 c 0,3 b 0,3 c 2,3 a 0,3 b 2,3 c 1,3 b 1,3 c 2,3 After substition them, we get G e4. solx11, 1, solx21, 1, solx31, 1; ExponentG, x4, List 0, 1, 2
9 RangingGNSS_11.nb Then the coefficients of the quadratic equation are, h2gr CoefficientG, x4 2 ; h1gr CoefficientG, x4; h0gr SimplifyG h2gr x4 2 h1gr x4; We have again the same result, h2 h2gr, h1 h1gr, h0 h0gr Simplify 0, 0, Reduced Groebner basis First, again we want to determine x 1 = g(x 4 ), therefore variables x 2 and x 3 should be eliminated from the Groebner basis, AbsoluteTiminggbx1 GroebnerBasisg 1, g 2, g 3, x1, x2, x3, x4, x2, x3; , Null gbx1 x1 a 2,3 b 1,3 c 0,3 x1 a 1,3 b 2,3 c 0,3 x1 a 2,3 b 0,3 c 1,3 x1 a 0,3 b 2,3 c 1,3 x1 a 1,3 b 0,3 c 2,3 x1 a 0,3 b 1,3 c 2,3 x4 b 2,3 c 1,3 d 0,3 x4 b 1,3 c 2,3 d 0,3 x4 b 2,3 c 0,3 d 1,3 x4 b 0,3 c 2,3 d 1,3 x4 b 1,3 c 0,3 d 2,3 x4 b 0,3 c 1,3 d 2,3 b 2,3 c 1,3 e 0,3 b 1,3 c 2,3 e 0,3 b 2,3 c 0,3 e 1,3 b 0,3 c 2,3 e 1,3 b 1,3 c 0,3 e 2,3 b 0,3 c 1,3 e 2,3 Now, the basis contains only one polynomial, Lengthgbx1 1 in which only x 1 and x 4 can be found, Exponentgbx1, x1, x2, x3, x4 1, 0, 0, 1 Therefore x 1 g x 4 can be computed directly, solx1 Solvegbx1 0, x1 Simplify x1 b 2,3 c 1,3 x4 d 0,3 e 0,3 c 0,3 x4 d 1,3 e 1,3 b 1,3 c 2,3 x4 d 0,3 e 0,3 c 0,3 x4 d 2,3 e 2,3 b 0,3 c 2,3 x4 d 1,3 e 1,3 c 1,3 x4 d 2,3 e 2,3 a 2,3 b 1,3 c 0,3 b 0,3 c 1,3 a 1,3 b 2,3 c 0,3 b 0,3 c 2,3 a 0,3 b 2,3 c 1,3 b 1,3 c 2,3 Similarly, in the other cases gbx2 GroebnerBasisg 1, g 2, g 3, x1, x2, x3, x4, x1, x3; Exponentgbx2, x1, x2, x3, x4 0, 1, 0, 1 solx2 Solvegbx2 0, x2 Simplify x2 a 2,3 c 1,3 x4 d 0,3 e 0,3 c 0,3 x4 d 1,3 e 1,3 a 1,3 c 2,3 x4 d 0,3 e 0,3 c 0,3 x4 d 2,3 e 2,3 a 0,3 c 2,3 x4 d 1,3 e 1,3 c 1,3 x4 d 2,3 e 2,3 a 2,3 b 1,3 c 0,3 b 0,3 c 1,3 a 1,3 b 2,3 c 0,3 b 0,3 c 2,3 a 0,3 b 2,3 c 1,3 b 1,3 c 2,3
10 10 RangingGNSS_11.nb and gbx3 GroebnerBasisg 1, g 2, g 3, x1, x2, x3, x4, x1, x2; Exponentgbx3, x1, x2, x3, x4 0, 0, 1, 1 solx3 Solvegbx3 0, x3 Simplify x3 a 2,3 b 1,3 x4 d 0,3 e 0,3 b 0,3 x4 d 1,3 e 1,3 a 1,3 b 2,3 x4 d 0,3 e 0,3 b 0,3 x4 d 2,3 e 2,3 a 0,3 b 2,3 x4 d 1,3 e 1,3 b 1,3 x4 d 2,3 e 2,3 a 2,3 b 1,3 c 0,3 b 0,3 c 1,3 a 1,3 b 2,3 c 0,3 b 0,3 c 2,3 a 0,3 b 2,3 c 1,3 b 1,3 c 2,3 After substition them, we get G e4. solx11, 1, solx21, 1, solx31, 1; ExponentG, x4, List 0, 1, 2 Then the coefficients of the quadratic equation are, h2gr CoefficientG, x4 2 ; h1gr CoefficientG, x4; h0gr SimplifyG h2gr x4 2 h1gr x4; We have again the same result, h2 h2gr, h1 h1gr, h0 h0gr Simplify 0, 0, Global Symbolic Solver The solution of the system of 4 equations simultaneously in symbolic form, leads to a very large, impractical expression. However the solution of the linear system of g 1, g 2, g 3 with x 4 as parameter is easy, AbsoluteTimingsolGSS3 Solveg 1 0, g 2 0, g 3 0, x1, x2, x3; , Null solgss3 x1 x4 b 2,3 c 1,3 d 0,3 x4 b 1,3 c 2,3 d 0,3 x4 b 2,3 c 0,3 d 1,3 x4 b 0,3 c 2,3 d 1,3 x4 b 1,3 c 0,3 d 2,3 x4 b 0,3 c 1,3 d 2,3 b 2,3 c 1,3 e 0,3 b 1,3 c 2,3 e 0,3 b 2,3 c 0,3 e 1,3 b 0,3 c 2,3 e 1,3 b 1,3 c 0,3 e 2,3 b 0,3 c 1,3 e 2,3 a 2,3 b 1,3 c 0,3 a 1,3 b 2,3 c 0,3 a 2,3 b 0,3 c 1,3 a 0,3 b 2,3 c 1,3 a 1,3 b 0,3 c 2,3 a 0,3 b 1,3 c 2,3, x2 x4 a 2,3 c 1,3 d 0,3 x4 a 1,3 c 2,3 d 0,3 x4 a 2,3 c 0,3 d 1,3 x4 a 0,3 c 2,3 d 1,3 x4 a 1,3 c 0,3 d 2,3 x4 a 0,3 c 1,3 d 2,3 a 2,3 c 1,3 e 0,3 a 1,3 c 2,3 e 0,3 a 2,3 c 0,3 e 1,3 a 0,3 c 2,3 e 1,3 a 1,3 c 0,3 e 2,3 a 0,3 c 1,3 e 2,3 a 2,3 b 1,3 c 0,3 a 1,3 b 2,3 c 0,3 a 2,3 b 0,3 c 1,3 a 0,3 b 2,3 c 1,3 a 1,3 b 0,3 c 2,3 a 0,3 b 1,3 c 2,3, x3 a 2,3 b 0,3 a 0,3 b 2,3 a 1,3 x4 d 0,3 e 0,3 a 0,3 x4 d 1,3 e 1,3 a 1,3 b 0,3 a 0,3 b 1,3 a 2,3 x4 d 0,3 e 0,3 a 0,3 x4 d 2,3 e 2,3 a 2,3 b 0,3 a 0,3 b 2,3 a 1,3 c 0,3 a 0,3 c 1,3 a 1,3 b 0,3 a 0,3 b 1,3 a 2,3 c 0,3 a 0,3 c 2,3 The second order equation for solving x 4 is,
11 RangingGNSS_11.nb solgss34 e4. solgss3 x4 d 3 2 b 3 x4 a 2,3 c 1,3 d 0,3 x4 a 1,3 c 2,3 d 0,3 x4 a 2,3 c 0,3 d 1,3 x4 a 0,3 c 2,3 d 1,3 x4 a 1,3 c 0,3 d 2,3 x4 a 0,3 c 1,3 d 2,3 a 2,3 c 1,3 e 0,3 a 1,3 c 2,3 e 0,3 a 2,3 c 0,3 e 1,3 a 0,3 c 2,3 e 1,3 a 1,3 c 0,3 e 2,3 a 0,3 c 1,3 e 2,3 a 2,3 b 1,3 c 0,3 a 1,3 b 2,3 c 0,3 a 2,3 b 0,3 c 1,3 a 0,3 b 2,3 c 1,3 a 1,3 b 0,3 c 2,3 a 0,3 b 1,3 c 2,3 2 a 3 x4 b 2,3 c 1,3 d 0,3 x4 b 1,3 c 2,3 d 0,3 x4 b 2,3 c 0,3 d 1,3 x4 b 0,3 c 2,3 d 1,3 x4 b 1,3 c 0,3 d 2,3 x4 b 0,3 c 1,3 d 2,3 b 2,3 c 1,3 e 0,3 b 1,3 c 2,3 e 0,3 b 2,3 c 0,3 e 1,3 b 0,3 c 2,3 e 1,3 b 1,3 c 0,3 e 2,3 b 0,3 c 1,3 e 2,3 a 2,3 b 1,3 c 0,3 a 1,3 b 2,3 c 0,3 a 2,3 b 0,3 c 1,3 a 0,3 b 2,3 c 1,3 a 1,3 b 0,3 c 2,3 a 0,3 b 1,3 c 2,3 2 c 3 a 2,3 b 0,3 a 0,3 b 2,3 a 1,3 x4 d 0,3 e 0,3 a 0,3 x4 d 1,3 e 1,3 a 1,3 b 0,3 a 0,3 b 1,3 a 2,3 x4 d 0,3 e 0,3 a 0,3 x4 d 2,3 e 2,3 a 2,3 b 0,3 a 0,3 b 2,3 a 1,3 c 0,3 a 0,3 c 1,3 a 1,3 b 0,3 a 0,3 b 1,3 a 2,3 c 0,3 a 0,3 c 2,3 2 ExponentsolGSS34, x4 2 This is the same equation what we have got by the different elimination techniques, for example with the Sturmfels method, Simplifyh2 x4 2 h1 x4 h0 solgss Global Numeric Solver NSolveg 1 0, g 2 0, g 3 0, e4 0. coeffsn. data, x1, x2, x3, x4 AbsoluteTiming , x , x , x , x , x , x , x , x NumberForm, , x , x , x , x , x , x , x , x Linear Homotopy We have already solved this problem in Section GPS N-point Problem Observation equations In case of m > 4 satellites, the two representations, and
f_i = (x1 - a_i)^2 + (x2 - b_i)^2 + (x3 - c_i)^2 - (x4 - d_i)^2  and  g_i = d_i - Sqrt[(x1 - a_i)^2 + (x2 - b_i)^2 + (x3 - c_i)^2] - x4

will not be equivalent in the least-squares sense: in general, the minimizer of the sum of the f_i^2 differs from that of the sum of the g_i^2. Let us consider six satellites with the following numerical values,

datan a , a , a , a , a , a , b , b , b , b , b , b , c , c , c , c , c , c , d , d , d , d , d , d ;

The number of the equations,

m = 6;

Let us see the result for the two different representations. In the first case the general form of the equation for the i-th satellite is,

e = (x1 - a_i)^2 + (x2 - b_i)^2 + (x3 - c_i)^2 - (x4 - d_i)^2;

The objective to be minimized is the sum of the squared residuals of the equations,

f = Apply[Plus, Table[e^2 /. datan, {i, 0, m - 1}]] // Simplify

The global minimum can be found by using the built-in function NMinimize,

soln = NMinimize[f, {x1, x2, x3, x4}] // AbsoluteTiming
or displayed with more digits via NumberForm. However, if we employ the norm of the distance error instead of the residual of the equations, namely,

en = d_i - Sqrt[(x1 - a_i)^2 + (x2 - b_i)^2 + (x3 - c_i)^2] - x4;

then the objective is,

fn = Apply[Plus, Table[en^2 /. datan, {i, 0, m - 1}]]

The optimum will be somewhat different,

solnn = NMinimize[fn, {x1, x2, x3, x4}] // AbsoluteTiming
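The difference between the two objectives can be reproduced outside Mathematica. The Python sketch below (all data invented; metre-level noise added so the minimizers actually differ) minimizes both the implicit objective, the sum of the f_i^2, and the explicit distance objective, the sum of the g_i^2, with a plain Gauss-Newton iteration in place of NMinimize:

```python
import numpy as np

rng = np.random.default_rng(7)
# Six hypothetical satellites (illustrative coordinates, metres).
sats = np.array([
    [15600e3,  7540e3, 20140e3],
    [18760e3,  2750e3, 18610e3],
    [17610e3, 14630e3, 13480e3],
    [19170e3,   610e3, 18390e3],
    [17800e3, 12000e3, 17000e3],
    [16000e3,  3000e3, 21000e3],
])
x_true = np.array([3.652e6, 2.0e6, 4.9e6])
bias = 150.0
d = np.linalg.norm(sats - x_true, axis=1) + bias + rng.normal(0.0, 1.0, 6)

def rho(p):
    return np.linalg.norm(p[:3] - sats, axis=1)

def gauss_newton(res, jac, p, iters=30):
    """Minimize sum(res(p)^2) by Gauss-Newton with a least-squares step."""
    for _ in range(iters):
        p = p + np.linalg.lstsq(jac(p), -res(p), rcond=None)[0]
    return p

# Implicit residuals f_i = rho_i^2 - (x4 - d_i)^2 and their Jacobian.
f = lambda p: rho(p) ** 2 - (p[3] - d) ** 2
jf = lambda p: np.hstack([2.0 * (p[:3] - sats), -2.0 * (p[3] - d)[:, None]])

# Explicit distance residuals g_i = d_i - rho_i - x4 and their Jacobian.
g = lambda p: d - rho(p) - p[3]
jg = lambda p: np.hstack([-(p[:3] - sats) / rho(p)[:, None],
                          -np.ones((len(d), 1))])

p0 = np.append(x_true + 1e4, bias)   # start a few kilometres off
sol_f = gauss_newton(f, jf, p0)
sol_g = gauss_newton(g, jg, p0)

# Both land near the truth, but at measurably different optima.
print(np.linalg.norm(sol_f[:3] - x_true) < 1e3,
      np.linalg.norm(sol_g[:3] - x_true) < 1e3,
      not np.allclose(sol_f, sol_g, rtol=0.0, atol=1e-6))
```

Because f_i factors as -g_i times roughly twice the range, minimizing the f_i^2 amounts to a range-weighted version of the explicit problem, which is why the two optima disagree once noise is present.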
Gauss-Jacobi solution

Because in the case of 4 satellites the two representations coincide, we can use the result of the GPS 4-point problem, but the weights of the Gauss-Jacobi algorithm should be computed on the basis of the second representation. First, the subsets should be determined. In our case

n = 4; m = 6;

The number of the subsets,

mn = Binomial[m, n]
15

These subsets are,

qs = Partition[Map[# - 1 &, Flatten[Subsets[Range[m], {n}]]], n]
{0, 1, 2, 3}, {0, 1, 2, 4}, {0, 1, 2, 5}, {0, 1, 3, 4}, {0, 1, 3, 5}, {0, 1, 4, 5}, {0, 2, 3, 4}, {0, 2, 3, 5}, {0, 2, 4, 5}, {0, 3, 4, 5}, {1, 2, 3, 4}, {1, 2, 3, 5}, {1, 2, 4, 5}, {1, 3, 4, 5}, {2, 3, 4, 5}

The indices start from zero, in correspondence with the indices of the coefficients of the equations. Now, we shall utilize the symbolic solution of the GPS 4-point problem, namely the expressions for the coefficients of the quadratic equation (h2, h1, h0). Therefore, we construct a new data list, datap, similar to datan, which assigns the proper values to the coefficients of the equations of each subset (selecting from datan the entries of the i-th subset and re-indexing them to 0, ..., 3). This is the same technique that we have already used.

datap TableMapSelectdatan, MemberQqsi, 1, 2 &. 1 0, 2 1, 3 2, 4 3 &, qsi, i, 1, mn;

For example, the fourth subset is indexed as

qs[[4]]
{0, 1, 3, 4}

and it has the proper data assignments,

datap4 a , a , a , a , b , b , b , b , c , c , c , c , d , d , d , d

Now, we can employ the symbolic expressions of the coefficients of the quadratic equation for x4, (h2, h1, h0), which were developed for the GPS 4-point problem. Let us consider the result of the Sturmfels approach. These coefficients can be evaluated for all of the 15 combinatorial subsets, giving (H2, H1, H0),

H2 = Map[h2 /. coeffsn /. Flatten[#] &, datap];
H1 = Map[h1 /. coeffsn /. Flatten[#] &, datap];
H0 = Map[h0 /. coeffsn /. Flatten[#] &, datap];

It is useful to display these coefficients,

H210 = Transpose[{H2, H1, H0}];
H210c = Map[# /. datan &, H210];
TableForm[NumberForm[H210c]]

This table indicates that the 10th combination has poor geometry (see the first column and Fig. 11.1), a fact that can also be detected by computing its PDOP (Position Dilution of Precision); see the textbook. Then the 15 quadratic equations can be solved for x4,

ListPlot[H2, PlotRange -> All, Joined -> True]

Fig. 11.1 Computed values of the H2 coefficients

AbsoluteTiming[X4 = Map[x4 /. Solve[#[[1]] x4^2 + #[[2]] x4 + #[[3]] == 0, x4][[1, 1]] &, H210c];]

These values of x4 can be substituted into the symbolic relations x1 = g1(x4), x2 = g2(x4) and x3 = g3(x4) developed for the GPS 4-point problem.

X1 = MapThread[x1 /. solx1[[1, 1]] /. coeffsn /. Flatten[#1] /. x4 -> #2 &, {datap, X4}];
X2 = MapThread[x2 /. solx2[[1, 1]] /. coeffsn /. Flatten[#1] /. x4 -> #2 &, {datap, X4}];
X3 = MapThread[x3 /. solx3[[1, 1]] /. coeffsn /. Flatten[#1] /. x4 -> #2 &, {datap, X4}];

Let us display these solutions for (x1, x2, x3, x4),
X = Transpose[{X1, X2, X3, X4}];
TableForm[NumberForm[X]]

The poor geometry of the 10th combination can also be recognized in Fig. 11.2, as well as in the last column of X.

ListPlot[X4, PlotRange -> All, Joined -> True]

Fig. 11.2 Computed values of the variable x4

Computing the arithmetic average,

Mean[X]

But it is far from an acceptable solution. In order to compute the weights of these solutions, one has to compute the squares of the 15 Jacobi determinants. Each has size 4 x 4, because there are four equations and four variables. Starting with the general form of the i-th equation,

en = x4 + Sqrt[(x1 - a_i)^2 + (x2 - b_i)^2 + (x3 - c_i)^2] - d_i

The partial derivatives are,
de = {D[en, x1], D[en, x2], D[en, x3], D[en, x4]} // Simplify
{(x1 - a_i)/Sqrt[(x1 - a_i)^2 + (x2 - b_i)^2 + (x3 - c_i)^2],
 (x2 - b_i)/Sqrt[(x1 - a_i)^2 + (x2 - b_i)^2 + (x3 - c_i)^2],
 (x3 - c_i)/Sqrt[(x1 - a_i)^2 + (x2 - b_i)^2 + (x3 - c_i)^2], 1}

The numerical values of these partial derivatives are computed at the corresponding combinatorial solutions. Therefore the weights Π_j, the squares of the 15 Jacobi determinants, are

Πs = Table[Det[{de /. i -> qs[[j, 1]], de /. i -> qs[[j, 2]], de /. i -> qs[[j, 3]],
      de /. i -> qs[[j, 4]]} /. datan]^2 /.
    {x1 -> X[[j, 1]], x2 -> X[[j, 2]], x3 -> X[[j, 3]], x4 -> X[[j, 4]]}, {j, 1, mn}]

The sum of these weights is

sΠs = Apply[Plus, Πs]

Then the weighted solution for the variables is

{X1s, X2s, X3s, X4s} = Map[Πs.# &, {X1, X2, X3, X4}]/sΠs

The result is correct.

ALESS Equations

The equations of the determined model are generated from the prototype of g_i, which is en, where

en = x4 + Sqrt[(x1 - a_i)^2 + (x2 - b_i)^2 + (x3 - c_i)^2] - d_i
The ALESS objective is the sum of the squared residuals; its partial derivatives are built from the derivatives of en^2. Writing ρ_i = Sqrt[(x1 - a_i)^2 + (x2 - b_i)^2 + (x3 - c_i)^2],

Map[D[en^2, #] &, {x1, x2, x3, x4}]
{2 (x1 - a_i) (x4 + ρ_i - d_i)/ρ_i, 2 (x2 - b_i) (x4 + ρ_i - d_i)/ρ_i,
 2 (x3 - c_i) (x4 + ρ_i - d_i)/ρ_i, 2 (x4 + ρ_i - d_i)}

Then, in general form, summing over the m observations,

v = {Sum[2 (x1 - a_i) (x4 + ρ_i - d_i)/ρ_i, {i, 0, m - 1}],
     Sum[2 (x2 - b_i) (x4 + ρ_i - d_i)/ρ_i, {i, 0, m - 1}],
     Sum[2 (x3 - c_i) (x4 + ρ_i - d_i)/ρ_i, {i, 0, m - 1}],
     Sum[2 (x4 + ρ_i - d_i), {i, 0, m - 1}]}

In our case m = 6, therefore the numeric form of the equations,

vn = v /. m -> 6 /. datan // Expand;

For example, the first equation is vn[[1]], a lengthy expression.
ALESS Numeric

Now, to solve the ALESS equations, let us employ the homotopy method. In this case the system is not a polynomial one, so we cannot generate a complex start system and its solutions; we can, however, use a fixed-point homotopy, for example. In order to illustrate the robustness of the method, we employ the result of the worst-geometry Gauss-Jacobi subset solution, that of the 10th combination. This will be the solution of the start system,

X0 = X[[10]]

The variables,

V = {x1, x2, x3, x4};

The start system itself,

gf = V - X0

To avoid singularity of the homotopy function, let

Γ = {1, 1, 1, 1};

<< GeoAlgebra`LinearHomotopy`

Now, employing path tracing by integration with high precision,

AbsoluteTiming[solH = LinearHomotopyFR[vn, gf, V, X0, Γ, 10, Λ];]

Displaying the homotopy paths,
GraphicsArray[Table[
  ParametricPlot[{Re[V[[i]][Λ]], Im[V[[i]][Λ]]} /. solH[[2]], {Λ, 0, 1},
   PlotRange -> All, Frame -> True, FrameLabel -> {"Re", "Im"},
   Axes -> None, AspectRatio -> 0.6,
   BaseStyle -> {FontSize -> 10, FontFamily -> "Times"}],
  {i, 1, Length[V]}]]
Fig. 11.3 Homotopy paths

Extended Newton-Raphson solution

Now, we solve the original overdetermined system employing one of the subset solutions of the Gauss-Jacobi algorithm. Let us use again the worst combination,

X0 = X[[10]]

<< GeoAlgebra`NewtonExtended`

The equations of the overdetermined system,

F = Table[en, {i, 0, m - 1}] /. datan

AbsoluteTiming[solNE = NewtonExtended[F, V, X0];]

The solution is,

Last[solNE]
The convergence is fast,

Take[solNE, 6] // NumberForm

Direct Least Squares via Local Minimization

We have solved the problem with global minimization; here we solve it with a local method. Again, we use the worst Gauss-Jacobi subset solution as the initial guess,

Off[FindMinimum::precw]
FindMinimum[SetPrecision[fn, 30],
  {{x1, X[[10, 1]]}, {x2, X[[10, 2]]}, {x3, X[[10, 3]]}, {x4, X[[10, 4]]}},
  WorkingPrecision -> 30, Method -> "ConjugateGradient"] // AbsoluteTiming

High-precision computation was needed to reach an acceptable solution, but even so the computation time is short.

Conclusions

All of the methods introduced here solve the GPS ranging problem efficiently. For implementations outside Mathematica, the Dixon-resultant and Groebner-basis results are perhaps the most suitable for the 4-point problem, and the Gauss-Jacobi combinatorial algorithm as well as the Extended Newton-Raphson method for the N-point problem.
10.34: Numerical Methods Applied to Chemical Engineering Lecture 7: Solutions of nonlinear equations Newton-Raphson method 1 Recap Singular value decomposition Iterative solutions to linear equations 2
More informationLinear Least-Squares Data Fitting
CHAPTER 6 Linear Least-Squares Data Fitting 61 Introduction Recall that in chapter 3 we were discussing linear systems of equations, written in shorthand in the form Ax = b In chapter 3, we just considered
More informationECEN 615 Methods of Electric Power Systems Analysis Lecture 18: Least Squares, State Estimation
ECEN 615 Methods of Electric Power Systems Analysis Lecture 18: Least Squares, State Estimation Prof. om Overbye Dept. of Electrical and Computer Engineering exas A&M University overbye@tamu.edu Announcements
More informationNumerical Analysis Solution of Algebraic Equation (non-linear equation) 1- Trial and Error. 2- Fixed point
Numerical Analysis Solution of Algebraic Equation (non-linear equation) 1- Trial and Error In this method we assume initial value of x, and substitute in the equation. Then modify x and continue till we
More informationALGEBRAIC EXPRESSIONS AND POLYNOMIALS
MODULE - ic Epressions and Polynomials ALGEBRAIC EXPRESSIONS AND POLYNOMIALS So far, you had been using arithmetical numbers, which included natural numbers, whole numbers, fractional numbers, etc. and
More informationNONLINEAR EQUATIONS AND TAYLOR S THEOREM
APPENDIX C NONLINEAR EQUATIONS AND TAYLOR S THEOREM C.1 INTRODUCTION In adjustment computations it is frequently necessary to deal with nonlinear equations. For example, some observation equations relate
More informationIntroduction to Applied Linear Algebra with MATLAB
Sigam Series in Applied Mathematics Volume 7 Rizwan Butt Introduction to Applied Linear Algebra with MATLAB Heldermann Verlag Contents Number Systems and Errors 1 1.1 Introduction 1 1.2 Number Representation
More informationClass 4: More Pendulum results
Class 4: More Pendulum results The pendulum is a mechanical problem with a long and interesting history, from Galileo s first ansatz that the period was independent of the amplitude based on watching priests
More informationDepartment of Mathematics California State University, Los Angeles Master s Degree Comprehensive Examination in. NUMERICAL ANALYSIS Spring 2015
Department of Mathematics California State University, Los Angeles Master s Degree Comprehensive Examination in NUMERICAL ANALYSIS Spring 2015 Instructions: Do exactly two problems from Part A AND two
More informationAppendix C Vector and matrix algebra
Appendix C Vector and matrix algebra Concepts Scalars Vectors, rows and columns, matrices Adding and subtracting vectors and matrices Multiplying them by scalars Products of vectors and matrices, scalar
More information17 Solution of Nonlinear Systems
17 Solution of Nonlinear Systems We now discuss the solution of systems of nonlinear equations. An important ingredient will be the multivariate Taylor theorem. Theorem 17.1 Let D = {x 1, x 2,..., x m
More informationcha1873x_p02.qxd 3/21/05 1:01 PM Page 104 PART TWO
cha1873x_p02.qxd 3/21/05 1:01 PM Page 104 PART TWO ROOTS OF EQUATIONS PT2.1 MOTIVATION Years ago, you learned to use the quadratic formula x = b ± b 2 4ac 2a to solve f(x) = ax 2 + bx + c = 0 (PT2.1) (PT2.2)
More informationEigenvalues and Eigenvectors
Contents Eigenvalues and Eigenvectors. Basic Concepts. Applications of Eigenvalues and Eigenvectors 8.3 Repeated Eigenvalues and Symmetric Matrices 3.4 Numerical Determination of Eigenvalues and Eigenvectors
More informationApplied Math 205. Full office hour schedule:
Applied Math 205 Full office hour schedule: Rui: Monday 3pm 4:30pm in the IACS lounge Martin: Monday 4:30pm 6pm in the IACS lounge Chris: Tuesday 1pm 3pm in Pierce Hall, Room 305 Nao: Tuesday 3pm 4:30pm
More informationLinear Algebra March 16, 2019
Linear Algebra March 16, 2019 2 Contents 0.1 Notation................................ 4 1 Systems of linear equations, and matrices 5 1.1 Systems of linear equations..................... 5 1.2 Augmented
More information7.5 Operations with Matrices. Copyright Cengage Learning. All rights reserved.
7.5 Operations with Matrices Copyright Cengage Learning. All rights reserved. What You Should Learn Decide whether two matrices are equal. Add and subtract matrices and multiply matrices by scalars. Multiply
More informationComplex numbers. Learning objectives
CHAPTER Complex numbers Learning objectives After studying this chapter, you should be able to: understand what is meant by a complex number find complex roots of quadratic equations understand the term
More informationExact and Approximate Numbers:
Eact and Approimate Numbers: The numbers that arise in technical applications are better described as eact numbers because there is not the sort of uncertainty in their values that was described above.
More informationNONLINEAR DC ANALYSIS
ECE 552 Numerical Circuit Analysis Chapter Six NONLINEAR DC ANALYSIS OR: Solution of Nonlinear Algebraic Equations I. Hajj 2017 Nonlinear Algebraic Equations A system of linear equations Ax = b has a
More informationLinear System of Equations
Linear System of Equations Linear systems are perhaps the most widely applied numerical procedures when real-world situation are to be simulated. Example: computing the forces in a TRUSS. F F 5. 77F F.
More informationCS 323: Numerical Analysis and Computing
CS 323: Numerical Analysis and Computing MIDTERM #2 Instructions: This is an open notes exam, i.e., you are allowed to consult any textbook, your class notes, homeworks, or any of the handouts from us.
More informationCS 221 Lecture 9. Tuesday, 1 November 2011
CS 221 Lecture 9 Tuesday, 1 November 2011 Some slides in this lecture are from the publisher s slides for Engineering Computation: An Introduction Using MATLAB and Excel 2009 McGraw-Hill Today s Agenda
More informationNumerical Algorithms as Dynamical Systems
A Study on Numerical Algorithms as Dynamical Systems Moody Chu North Carolina State University What This Study Is About? To recast many numerical algorithms as special dynamical systems, whence to derive
More informationCS 323: Numerical Analysis and Computing
CS 323: Numerical Analysis and Computing MIDTERM #2 Instructions: This is an open notes exam, i.e., you are allowed to consult any textbook, your class notes, homeworks, or any of the handouts from us.
More informationSAMPLE OF THE STUDY MATERIAL PART OF CHAPTER 1 Introduction to Linear Algebra
1.1. Introduction SAMPLE OF THE STUDY MATERIAL PART OF CHAPTER 1 Introduction to Linear algebra is a specific branch of mathematics dealing with the study of vectors, vector spaces with functions that
More informationCS 542G: Robustifying Newton, Constraints, Nonlinear Least Squares
CS 542G: Robustifying Newton, Constraints, Nonlinear Least Squares Robert Bridson October 29, 2008 1 Hessian Problems in Newton Last time we fixed one of plain Newton s problems by introducing line search
More informationMobile Robotics 1. A Compact Course on Linear Algebra. Giorgio Grisetti
Mobile Robotics 1 A Compact Course on Linear Algebra Giorgio Grisetti SA-1 Vectors Arrays of numbers They represent a point in a n dimensional space 2 Vectors: Scalar Product Scalar-Vector Product Changes
More informationScientific Computing: Optimization
Scientific Computing: Optimization Aleksandar Donev Courant Institute, NYU 1 donev@courant.nyu.edu 1 Course MATH-GA.2043 or CSCI-GA.2112, Spring 2012 March 8th, 2011 A. Donev (Courant Institute) Lecture
More informationCapacity Pre-log of Noncoherent SIMO Channels via Hironaka s Theorem
Capacity Pre-log of Noncoherent SIMO Channels via Hironaka s Theorem Veniamin I. Morgenshtern 22. May 2012 Joint work with E. Riegler, W. Yang, G. Durisi, S. Lin, B. Sturmfels, and H. Bőlcskei SISO Fading
More informationScientific Computing: An Introductory Survey
Scientific Computing: An Introductory Survey Chapter 6 Optimization Prof. Michael T. Heath Department of Computer Science University of Illinois at Urbana-Champaign Copyright c 2002. Reproduction permitted
More informationScientific Computing: An Introductory Survey
Scientific Computing: An Introductory Survey Chapter 6 Optimization Prof. Michael T. Heath Department of Computer Science University of Illinois at Urbana-Champaign Copyright c 2002. Reproduction permitted
More informationSection 5.2 Solving Recurrence Relations
Section 5.2 Solving Recurrence Relations If a g(n) = f (a g(0),a g(1),..., a g(n 1) ) find a closed form or an expression for a g(n). Recall: nth degree polynomials have n roots: a n x n + a n 1 x n 1
More informationAlgorithms. Shanks square forms algorithm Williams p+1 Quadratic Sieve Dixon s Random Squares Algorithm
Alex Sundling Algorithms Shanks square forms algorithm Williams p+1 Quadratic Sieve Dixon s Random Squares Algorithm Shanks Square Forms Created by Daniel Shanks as an improvement on Fermat s factorization
More informationCambridge University Press The Mathematics of Signal Processing Steven B. Damelin and Willard Miller Excerpt More information
Introduction Consider a linear system y = Φx where Φ can be taken as an m n matrix acting on Euclidean space or more generally, a linear operator on a Hilbert space. We call the vector x a signal or input,
More informationNUMERICAL COMPUTATION IN SCIENCE AND ENGINEERING
NUMERICAL COMPUTATION IN SCIENCE AND ENGINEERING C. Pozrikidis University of California, San Diego New York Oxford OXFORD UNIVERSITY PRESS 1998 CONTENTS Preface ix Pseudocode Language Commands xi 1 Numerical
More informationS.F. Xu (Department of Mathematics, Peking University, Beijing)
Journal of Computational Mathematics, Vol.14, No.1, 1996, 23 31. A SMALLEST SINGULAR VALUE METHOD FOR SOLVING INVERSE EIGENVALUE PROBLEMS 1) S.F. Xu (Department of Mathematics, Peking University, Beijing)
More informationPenalty and Barrier Methods General classical constrained minimization problem minimize f(x) subject to g(x) 0 h(x) =0 Penalty methods are motivated by the desire to use unconstrained optimization techniques
More informationAlgebra Review. Terrametra Resources. Lynn Patten
Terrametra Resources Lynn Patten ALGEBRAIC EXPRESSION A combination of ordinary numbers, letter symbols, variables, grouping symbols and operation symbols. Numbers remain fixed in value and are referred
More informationPractical Algebra. A Step-by-step Approach. Brought to you by Softmath, producers of Algebrator Software
Practical Algebra A Step-by-step Approach Brought to you by Softmath, producers of Algebrator Software 2 Algebra e-book Table of Contents Chapter 1 Algebraic expressions 5 1 Collecting... like terms 5
More information1 Matrices and Systems of Linear Equations
March 3, 203 6-6. Systems of Linear Equations Matrices and Systems of Linear Equations An m n matrix is an array A = a ij of the form a a n a 2 a 2n... a m a mn where each a ij is a real or complex number.
More informationOutline. Scientific Computing: An Introductory Survey. Optimization. Optimization Problems. Examples: Optimization Problems
Outline Scientific Computing: An Introductory Survey Chapter 6 Optimization 1 Prof. Michael. Heath Department of Computer Science University of Illinois at Urbana-Champaign Copyright c 2002. Reproduction
More information1 Number Systems and Errors 1
Contents 1 Number Systems and Errors 1 1.1 Introduction................................ 1 1.2 Number Representation and Base of Numbers............. 1 1.2.1 Normalized Floating-point Representation...........
More informationCOMPUTATIONAL EXPLORATIONS IN MAGNETRON SPUTTERING
COMPUTATIONAL EXPLORATIONS IN MAGNETRON SPUTTERING E. J. McInerney Basic Numerics Press 4. ELECTRON MOTION With the groundwork laid in the last two chapters, we can now simulate the motion of electrons
More informationSolving Algebraic Computational Problems in Geodesy and Geoinformatics
Solving Algebraic Computational Problems in Geodesy and Geoinformatics The Answer to Modern Challenges Bearbeitet von Joseph L Awange, Erik W Grafarend 1. Auflage 2004. Buch. XVII, 333 S. Hardcover ISBN
More informationIntroduction to Matrices
214 Analysis and Design of Feedback Control Systems Introduction to Matrices Derek Rowell October 2002 Modern system dynamics is based upon a matrix representation of the dynamic equations governing the
More informationNon-polynomial Least-squares fitting
Applied Math 205 Last time: piecewise polynomial interpolation, least-squares fitting Today: underdetermined least squares, nonlinear least squares Homework 1 (and subsequent homeworks) have several parts
More information2 EBERHARD BECKER ET AL. has a real root. Thus our problem can be reduced to the problem of deciding whether or not a polynomial in one more variable
Deciding positivity of real polynomials Eberhard Becker, Victoria Powers, and Thorsten Wormann Abstract. We describe an algorithm for deciding whether or not a real polynomial is positive semidenite. The
More information10.34 Numerical Methods Applied to Chemical Engineering Fall Quiz #1 Review
10.34 Numerical Methods Applied to Chemical Engineering Fall 2015 Quiz #1 Review Study guide based on notes developed by J.A. Paulson, modified by K. Severson Linear Algebra We ve covered three major topics
More informationMathematics Standards for High School Algebra I
Mathematics Standards for High School Algebra I Algebra I is a course required for graduation and course is aligned with the College and Career Ready Standards for Mathematics in High School. Throughout
More informationMATHEMATICS Lecture. 4 Chapter.8 TECHNIQUES OF INTEGRATION By Dr. Mohammed Ramidh
MATHEMATICS Lecture. 4 Chapter.8 TECHNIQUES OF INTEGRATION By TECHNIQUES OF INTEGRATION OVERVIEW The Fundamental Theorem connects antiderivatives and the definite integral. Evaluating the indefinite integral,
More informationLinear Algebra: Lecture Notes. Dr Rachel Quinlan School of Mathematics, Statistics and Applied Mathematics NUI Galway
Linear Algebra: Lecture Notes Dr Rachel Quinlan School of Mathematics, Statistics and Applied Mathematics NUI Galway November 6, 23 Contents Systems of Linear Equations 2 Introduction 2 2 Elementary Row
More informationRECURSIVE SUBSPACE IDENTIFICATION IN THE LEAST SQUARES FRAMEWORK
RECURSIVE SUBSPACE IDENTIFICATION IN THE LEAST SQUARES FRAMEWORK TRNKA PAVEL AND HAVLENA VLADIMÍR Dept of Control Engineering, Czech Technical University, Technická 2, 166 27 Praha, Czech Republic mail:
More informationIntroduction to Mobile Robotics Compact Course on Linear Algebra. Wolfram Burgard, Cyrill Stachniss, Kai Arras, Maren Bennewitz
Introduction to Mobile Robotics Compact Course on Linear Algebra Wolfram Burgard, Cyrill Stachniss, Kai Arras, Maren Bennewitz Vectors Arrays of numbers Vectors represent a point in a n dimensional space
More informationChapter 5 Eigenvalues and Eigenvectors
Chapter 5 Eigenvalues and Eigenvectors Outline 5.1 Eigenvalues and Eigenvectors 5.2 Diagonalization 5.3 Complex Vector Spaces 2 5.1 Eigenvalues and Eigenvectors Eigenvalue and Eigenvector If A is a n n
More informationDifferential Equations with a Convergent Integer power Series Solution
Differential Equations with a Convergent Integer power Series Solution Mark van Hoeij AMC Conference TU/e (Algebra Meetkunde and Computation) July 3, 2014 Notations Let y Q[[x]] and suppose that 1 y has
More informationChapter 3 Numerical Methods
Chapter 3 Numerical Methods Part 2 3.2 Systems of Equations 3.3 Nonlinear and Constrained Optimization 1 Outline 3.2 Systems of Equations 3.3 Nonlinear and Constrained Optimization Summary 2 Outline 3.2
More informationComputational Study of 3D Affine Coordinate Transformation
Computational Study of D Affine Coordinate Transformation Part I. -point Problem Bela Palancz 1, Robert H. Lewis 2, Piroska Zaletnyik and Joseph Awange 4 1 Department of Photogrammetryand Geoinformatics
More informationLecture 20 - Plotting and Nonlinear Equations
Lecture 2 - Plotting and Nonlinear Equations Outline Prayer/Spiritual Thought Announcements 3. 4. Plotting Plotting Roots of Polynomials Single Nonlinear Equations Systems of Nonlinear Equations Plots
More information4. Convex optimization problems
Convex Optimization Boyd & Vandenberghe 4. Convex optimization problems optimization problem in standard form convex optimization problems quasiconvex optimization linear optimization quadratic optimization
More informationAlgebra I Number and Quantity The Real Number System (N-RN)
Number and Quantity The Real Number System (N-RN) Use properties of rational and irrational numbers N-RN.3 Explain why the sum or product of two rational numbers is rational; that the sum of a rational
More informationUse of ground-based GNSS measurements in data assimilation. Reima Eresmaa Finnish Meteorological Institute
Use of ground-based GNSS measurements in data assimilation Reima Eresmaa Finnish Meteorological Institute 16 June 2006 Outline 1) Introduction GNSS * positioning Tropospheric delay 2) GNSS as a meteorological
More informationIntroduction to Quantitative Techniques for MSc Programmes SCHOOL OF ECONOMICS, MATHEMATICS AND STATISTICS MALET STREET LONDON WC1E 7HX
Introduction to Quantitative Techniques for MSc Programmes SCHOOL OF ECONOMICS, MATHEMATICS AND STATISTICS MALET STREET LONDON WC1E 7HX September 2007 MSc Sep Intro QT 1 Who are these course for? The September
More informationSolving Linear Systems
Solving Linear Systems Iterative Solutions Methods Philippe B. Laval KSU Fall 207 Philippe B. Laval (KSU) Linear Systems Fall 207 / 2 Introduction We continue looking how to solve linear systems of the
More informationIntroduction to Mobile Robotics Compact Course on Linear Algebra. Wolfram Burgard, Bastian Steder
Introduction to Mobile Robotics Compact Course on Linear Algebra Wolfram Burgard, Bastian Steder Reference Book Thrun, Burgard, and Fox: Probabilistic Robotics Vectors Arrays of numbers Vectors represent
More informationGPS Geodesy - LAB 7. Neglecting the propagation, multipath, and receiver errors, eq.(1) becomes:
GPS Geodesy - LAB 7 GPS pseudorange position solution The pseudorange measurements j R i can be modeled as: j R i = j ρ i + c( j δ δ i + ΔI + ΔT + MP + ε (1 t = time of epoch j R i = pseudorange measurement
More information1. Matrix multiplication and Pauli Matrices: Pauli matrices are the 2 2 matrices. 1 0 i 0. 0 i
Problems in basic linear algebra Science Academies Lecture Workshop at PSGRK College Coimbatore, June 22-24, 2016 Govind S. Krishnaswami, Chennai Mathematical Institute http://www.cmi.ac.in/~govind/teaching,
More informationUnit 5 Evaluation. Multiple-Choice. Evaluation 05 Second Year Algebra 1 (MTHH ) Name I.D. Number
Name I.D. Number Unit Evaluation Evaluation 0 Second Year Algebra (MTHH 039 09) This evaluation will cover the lessons in this unit. It is open book, meaning you can use your textbook, syllabus, and other
More informationPrecalculus Lesson 4.1 Polynomial Functions and Models Mrs. Snow, Instructor
Precalculus Lesson 4.1 Polynomial Functions and Models Mrs. Snow, Instructor Let s review the definition of a polynomial. A polynomial function of degree n is a function of the form P(x) = a n x n + a
More information3 (Maths) Linear Algebra
3 (Maths) Linear Algebra References: Simon and Blume, chapters 6 to 11, 16 and 23; Pemberton and Rau, chapters 11 to 13 and 25; Sundaram, sections 1.3 and 1.5. The methods and concepts of linear algebra
More informationDifferential equations
Differential equations Math 7 Spring Practice problems for April Exam Problem Use the method of elimination to find the x-component of the general solution of x y = 6x 9x + y = x 6y 9y Soln: The system
More informationSolving non-linear systems of equations
Solving non-linear systems of equations Felix Kubler 1 1 DBF, University of Zurich and Swiss Finance Institute October 7, 2017 Felix Kubler Comp.Econ. Gerzensee, Ch2 October 7, 2017 1 / 38 The problem
More informationSAMPLE OF THE STUDY MATERIAL PART OF CHAPTER 1 Introduction to Linear Algebra
SAMPLE OF THE STUDY MATERIAL PART OF CHAPTER 1 Introduction to 1.1. Introduction Linear algebra is a specific branch of mathematics dealing with the study of vectors, vector spaces with functions that
More informationPowerPoints organized by Dr. Michael R. Gustafson II, Duke University
Part 3 Chapter 10 LU Factorization PowerPoints organized by Dr. Michael R. Gustafson II, Duke University All images copyright The McGraw-Hill Companies, Inc. Permission required for reproduction or display.
More informationBindel, Fall 2016 Matrix Computations (CS 6210) Notes for
1 Iteration basics Notes for 2016-11-07 An iterative solver for Ax = b is produces a sequence of approximations x (k) x. We always stop after finitely many steps, based on some convergence criterion, e.g.
More information1.5 F15 O Brien. 1.5: Linear Equations and Inequalities
1.5: Linear Equations and Inequalities I. Basic Terminology A. An equation is a statement that two expressions are equal. B. To solve an equation means to find all of the values of the variable that make
More informationChapter 2. Solving Systems of Equations. 2.1 Gaussian elimination
Chapter 2 Solving Systems of Equations A large number of real life applications which are resolved through mathematical modeling will end up taking the form of the following very simple looking matrix
More informationDevelopment of an algorithm for the problem of the least-squares method: Preliminary Numerical Experience
Development of an algorithm for the problem of the least-squares method: Preliminary Numerical Experience Sergey Yu. Kamensky 1, Vladimir F. Boykov 2, Zakhary N. Khutorovsky 3, Terry K. Alfriend 4 Abstract
More informationTwo hours. To be provided by Examinations Office: Mathematical Formula Tables. THE UNIVERSITY OF MANCHESTER. 29 May :45 11:45
Two hours MATH20602 To be provided by Examinations Office: Mathematical Formula Tables. THE UNIVERSITY OF MANCHESTER NUMERICAL ANALYSIS 1 29 May 2015 9:45 11:45 Answer THREE of the FOUR questions. If more
More informationChapter 9 Factorisation and Discrete Logarithms Using a Factor Base
Chapter 9 Factorisation and Discrete Logarithms Using a Factor Base February 15, 2010 9 The two intractable problems which are at the heart of public key cryptosystems, are the infeasibility of factorising
More informationGaussian interval quadrature rule for exponential weights
Gaussian interval quadrature rule for exponential weights Aleksandar S. Cvetković, a, Gradimir V. Milovanović b a Department of Mathematics, Faculty of Mechanical Engineering, University of Belgrade, Kraljice
More informationIntroduction to Scientific Computing
Introduction to Scientific Computing Benson Muite benson.muite@ut.ee http://kodu.ut.ee/ benson https://courses.cs.ut.ee/2018/isc/spring 26 March 2018 [Public Domain,https://commons.wikimedia.org/wiki/File1
More information{ independent variable some property or restriction about independent variable } where the vertical line is read such that.
Page 1 of 5 Introduction to Review Materials One key to Algebra success is identifying the type of work necessary to answer a specific question. First you need to identify whether you are dealing with
More informationConstrained Optimization
1 / 22 Constrained Optimization ME598/494 Lecture Max Yi Ren Department of Mechanical Engineering, Arizona State University March 30, 2015 2 / 22 1. Equality constraints only 1.1 Reduced gradient 1.2 Lagrange
More information
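The reduction of the 4-point problem to a single quadratic, which the overview attributes to the symbolic methods (Sturmfels, Dixon resultant, Groebner bases), can be illustrated numerically by an elementary differencing route: subtracting the first implicit equation from the other three cancels all quadratic terms, leaving three linear equations that express {x1, x2, x3} as affine functions of the clock bias x4; back-substitution into the first equation yields a univariate quadratic in x4. The sketch below is not one of the chapter's solvers, only a self-contained illustration of that structure; the satellite coordinates and receiver position are synthetic, and a generic (nonsingular, non-degenerate) satellite geometry is assumed.

```python
import math

def solve3(A, b):
    # Cramer's rule for a 3x3 linear system (fine at this scale)
    def det(M):
        return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
              - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
              + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))
    D = det(A)
    xs = []
    for j in range(3):
        M = [row[:] for row in A]
        for i in range(3):
            M[i][j] = b[i]
        xs.append(det(M) / D)
    return xs

def gnss_4point(sats, d):
    """Solve f_i = (x1-ai)^2 + (x2-bi)^2 + (x3-ci)^2 - (x4-di)^2 = 0, i=1..4.

    f_i - f_1 = 0 gives, for i = 2, 3, 4:
        (ai-a1) x1 + (bi-b1) x2 + (ci-c1) x3 = (di-d1) x4 + (k_i - k_1)/2,
    with k_i = ai^2 + bi^2 + ci^2 - di^2, so (x1, x2, x3) = u + v * x4.
    Substituting into f_1 = 0 leaves a quadratic in x4 with two roots,
    hence two candidate positions (one of them spurious).
    """
    d1 = d[0]
    k = [a * a + b * b + c * c + 0.0 - di * di for (a, b, c), di in zip(sats, d)]
    A = [[sats[i][j] - sats[0][j] for j in range(3)] for i in range(1, 4)]
    u = solve3(A, [(k[i] - k[0]) / 2.0 for i in range(1, 4)])
    v = solve3(A, [d[i] - d[0] for i in range(1, 4)])
    # substitute (x1, x2, x3) = u + v*x4 into f_1 = 0; w = u - (a1, b1, c1)
    w = [u[j] - sats[0][j] for j in range(3)]
    A2 = sum(vj * vj for vj in v) - 1.0          # coefficient of x4^2
    A1 = 2.0 * sum(wj * vj for wj, vj in zip(w, v)) + 2.0 * d1
    A0 = sum(wj * wj for wj in w) - d1 * d1
    disc = math.sqrt(A1 * A1 - 4.0 * A2 * A0)
    roots = [(-A1 + disc) / (2.0 * A2), (-A1 - disc) / (2.0 * A2)]
    return [[u[0] + v[0] * x4, u[1] + v[1] * x4, u[2] + v[2] * x4, x4]
            for x4 in roots]

# synthetic check: receiver at (1, 2, 3) with clock bias 0.5 (illustrative numbers)
sats = [(15.0, 0.0, 20.0), (0.0, 14.0, 18.0), (-12.0, 5.0, 22.0), (4.0, -10.0, 25.0)]
d = [math.dist((1.0, 2.0, 3.0), s) + 0.5 for s in sats]
solutions = gnss_4point(sats, d)
```

Both roots of the quadratic satisfy the four implicit equations exactly; in practice the spurious solution is rejected on physical grounds (e.g. it places the receiver on the wrong side of the satellite constellation). For the N-point case this closed form no longer applies directly, which is where the Gauss-Jacobi combinatorial and least-squares approaches of this chapter come in.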