TREATMENT OF NUISANCE PARAMETER IN ADJUSTMENT (PHASED ADJUSTMENT)


Aaditya Verma
Department of Civil Engineering, Indian Institute of Technology, Kanpur, India - aaditya@iitk.ac.in

KEY WORDS: Least squares adjustment, Nuisance parameter, Equivalent observation equation

ABSTRACT:

Least squares estimation is a standard tool for the processing of geodetic data. Sometimes, when only a particular group of unknowns is of interest, it is better to eliminate the other group of unknowns. The unknowns which are not of interest are generally called nuisance parameters. This paper discusses the formation of the equivalent observation equation system, which can be used to eliminate these nuisance parameters. A numerical example is included to illustrate the method.

1. INTRODUCTION

In general, geodetic data processing involves the following information:

1. Parameters of interest (or unknowns)
2. Known quantities (or observations)
3. Explicit biases that can be parameterised
4. Errors that are not parameterised

Errors are defined as those effects on the measurements that cause the measured quantity to differ from the true quantity. Effects that cause this difference by a systematic amount are generally referred to as biases. In statistics, a nuisance parameter is any parameter which is not of immediate interest but which must be accounted for in the analysis of the parameters which are of interest. Many situations encountered during the processing of geodetic data involve such nuisance parameters.

The equivalently eliminated observation equation system can be used for the elimination of nuisance parameters. The equivalent observation equation was first derived by Zhou (1985). Based on this derivation, a diagonalisation algorithm has been developed which can be used to separate one adjustment problem into two sub-problems.

2. LEAST SQUARES ADJUSTMENT

The principle of least squares adjustment involves the formation of a linearised observation equation system represented by

V = L - AX,  P                                                          (1)

where
L : observation vector of dimension m,
A : coefficient matrix of dimension m × n,
X : unknown parameter vector of dimension n,
V : residual vector of dimension m,
n : number of unknowns,
m : number of observations, and
P : weight matrix of dimension m × m.

The least squares criterion used to solve this system of equations is

F = V^T P V = min.                                                      (2)

The function F reaches its minimum value when the partial derivative of F with respect to X equals zero:

∂F/∂X = 2 V^T P (-A) = 0,

which is equivalent to

A^T P V = 0.                                                            (3)

Multiplying Eq. 1 by A^T P and applying Eq. 3, the least squares solution of Eq. 1 follows from

(A^T P A) X - A^T P L = 0                                               (4)

X = (A^T P A)^{-1} (A^T P L)                                            (5)
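As a minimal illustration of Eqs. 1-5, the sketch below solves a small weighted least squares problem in MATLAB. The values of L, A and P are invented for demonstration and are not taken from the paper.

% Minimal sketch of Eqs. 1-5 with invented values
L = [1; 2; 3; 5];           % observation vector (m = 4)
A = [1 0; 1 1; 1 2; 1 3];   % coefficient matrix (n = 2)
P = eye(4);                 % weight matrix (identity assumed)
X = (A'*P*A) \ (A'*P*L);    % least squares solution, Eq. 5
V = L - A*X;                % residual vector, Eq. 1
disp(A'*P*V)                % ~0, confirming Eq. 3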

3. EQUIVALENTLY ELIMINATED OBSERVATION EQUATION SYSTEM

In least squares adjustment, the unknowns can be divided into two groups and solved in a blockwise manner. In order to eliminate nuisance parameters, the unknowns should be divided such that one of the groups contains the parameters of interest and the other contains the nuisance parameters. The equivalently eliminated observation equation system is used to perform this task: with it, the nuisance parameters can be eliminated directly from the observation equations. After dividing the unknowns into two groups, the linearised observation equation system can be represented by

V = L - [A B] [X_1; X_2],  P                                            (6)

where
L : observation vector of dimension m,
A, B : coefficient matrices of dimension m × (n-r) and m × r,
X_1, X_2 : unknown vectors of dimension n-r and r,
V : residual vector of dimension m,
n : number of unknowns,
m : number of observations, and
P : weight matrix of dimension m × m.

Eq. 6 can be solved in the same manner as Eq. 1 was solved in Section 2; its solution follows from

[A B]^T P [A B] [X_1; X_2] = [A B]^T P L.                               (7)

Expanding the products gives

[A^T P A  A^T P B; B^T P A  B^T P B] [X_1; X_2] = [A^T P L; B^T P L],

which is abbreviated as

[M_11  M_12; M_21  M_22] [X_1; X_2] = [B_1; B_2]                        (8)

where [M_11  M_12; M_21  M_22] = [A^T P A  A^T P B; B^T P A  B^T P B] and [B_1; B_2] = [A^T P L; B^T P L].

On expanding Eq. 8, the following equations are obtained:

M_11 X_1 + M_12 X_2 = B_1                                               (9)

M_21 X_1 + M_22 X_2 = B_2                                               (10)

From Eq. 9, the value of X_1 is given as

X_1 = M_11^{-1} (B_1 - M_12 X_2).                                       (11)

On substituting the value of X_1 from Eq. 11 into Eq. 10,

(M_22 - M_21 M_11^{-1} M_12) X_2 = B_2 - M_21 M_11^{-1} B_1.            (12)

Eq. 12 is now written as

M_2 X_2 = R_2                                                           (13)

where

M_2 = M_22 - M_21 M_11^{-1} M_12 = B^T P B - B^T P A M_11^{-1} A^T P B = B^T P (I - A M_11^{-1} A^T P) B    (14)

R_2 = B_2 - M_21 M_11^{-1} B_1 = B^T P (I - A M_11^{-1} A^T P) L        (15)

Only Eq. 13 needs to be solved to calculate the unknown vector X_2. Consider

J = A M_11^{-1} A^T P.                                                  (16)

From the following calculations, it can be proved that the matrices J and (I - J) are idempotent and that (I - J)^T P is symmetric:

J^2 = (A M_11^{-1} A^T P)(A M_11^{-1} A^T P) = A M_11^{-1} (A^T P A) M_11^{-1} A^T P = A M_11^{-1} A^T P = J    (17)

(I - J)^2 = (I - J)(I - J) = I - 2J + J^2 = I - J                       (18)

[P (I - J)]^T = (I - J^T) P = P - (A M_11^{-1} A^T P)^T P = P - P A M_11^{-1} A^T P = P (I - J),
i.e. (I - J)^T P = P (I - J).                                           (19)
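These idempotency and symmetry properties are easy to confirm numerically. A small sketch with invented matrices (any full-column-rank A and symmetric positive-definite P will do):

% Numerical check of Eqs. 17-19 with invented matrices
A = [1 0; 1 1; 1 2; 1 3];            % any coefficient matrix of full column rank
P = diag([1 2 1 3]);                 % any symmetric positive-definite weight matrix
J = A*inv(A'*P*A)*A'*P;              % Eq. 16
I4 = eye(4);
disp(norm(J*J - J))                  % ~0: J is idempotent (Eq. 17)
disp(norm((I4-J)*(I4-J) - (I4-J)))   % ~0: I - J is idempotent (Eq. 18)
disp(norm((I4-J)'*P - P*(I4-J)))     % ~0: (I - J)'*P is symmetric (Eq. 19)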

Using the properties derived in Eqs. 17-19, M_2 and R_2 can be rewritten as follows:

M_2 = B^T P (I - J) B = B^T P (I - J)(I - J) B = B^T (I - J)^T P (I - J) B    (20)

R_2 = B^T P (I - J) L = B^T (I - J)^T P L                               (21)

Now, considering

(I - J) B = D_2,                                                        (22)

one can use Eqs. 20, 21 and 22 to rewrite Eq. 13 as

B^T (I - J)^T P (I - J) B X_2 = B^T (I - J)^T P L, or
D_2^T P D_2 X_2 = D_2^T P L.                                            (23)

The equivalently eliminated observation equation for Eq. 13 can therefore be written as

U_2 = L - D_2 X_2,  P                                                   (24)

or

U_2 = L - (I - J) B X_2,  P                                             (25)

where L and P are the original observation vector and weight matrix, and U_2 is the residual vector having the same property as V in Eq. 6. The advantage of using Eq. 24 is that the unknown vector X_1 has been eliminated without changing the observation vector and the weight matrix.

4. DIAGONALISED NORMAL EQUATION

In the previous section, the value of X_1 obtained from Eq. 9 was substituted into Eq. 10 to calculate X_2. Similarly, if the required unknown vector is X_1, the value of X_2 can be obtained from Eq. 10 and substituted into Eq. 9. The normal equation of Eq. 8 can hence be diagonalised for the two groups of unknowns by using the elimination process twice. The algorithm is outlined as follows. From Eq. 10, we get

X_2 = M_22^{-1} (B_2 - M_21 X_1).                                       (26)

Substituting this value of X_2 from Eq. 26 into Eq. 9, one gets

M_1 X_1 = R_1                                                           (27)

where

M_1 = M_11 - M_12 M_22^{-1} M_21 = A^T P A - A^T P B M_22^{-1} B^T P A = A^T P (I - B M_22^{-1} B^T P) A    (28)

and

R_1 = B_1 - M_12 M_22^{-1} B_2 = A^T P (I - B M_22^{-1} B^T P) L        (29)
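A quick sanity check on Eqs. 27-29: the X_1 obtained from the reduced system must equal the X_1-block of the full solution of Eq. 6. A sketch with invented matrices:

% Check of Eqs. 27-29 with invented matrices: the reduced X1 equals
% the X1-block of the full least squares solution
A = [1 0; 1 1; 1 2; 1 3];   B = [1; 0; 1; 0];    % coefficient matrices
L = [1; 2; 3; 5];           P = diag([1 2 1 3]); % observations and weights
M11 = A'*P*A;  M12 = A'*P*B;  M21 = B'*P*A;  M22 = B'*P*B;
B1 = A'*P*L;   B2 = B'*P*L;
M1 = M11 - M12*inv(M22)*M21;                     % Eq. 28
R1 = B1 - M12*inv(M22)*B2;                       % Eq. 29
Xfull = ([A B]'*P*[A B]) \ ([A B]'*P*L);         % full solution of Eq. 6
disp(norm(M1\R1 - Xfull(1:2)))                   % ~0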

Combining Eq. 13 and Eq. 27, one gets

[M_1  0; 0  M_2] [X_1; X_2] = [R_1; R_2]                                (30)

The process of forming Eq. 30 from Eq. 8 is called the diagonalisation of a normal equation. As discussed in the previous section, the equivalently eliminated observation equation of Eq. 13 is Eq. 24. Similarly, if we denote

B M_22^{-1} B^T P = K   and                                             (31)

(I - K) A = D_1,                                                        (32)

then the equivalently eliminated observation equation for Eq. 27 can be written as

U_1 = L - (I - K) A X_1 = L - D_1 X_1,  P                               (33)

where U_1 is a residual vector which has the same property as V of Eq. 6, and L and P are the original observation vector and weight matrix respectively. Eq. 24 and Eq. 33 can be written together as

[U_1; U_2] = [L; L] - [D_1  0; 0  D_2] [X_1; X_2],  [P  0; 0  P]        (34)

Eq. 34 is called the diagonalised equation of Eq. 6.

5. NUMERICAL EXAMPLE OF THE DIAGONALISATION ALGORITHM

5.1 MATLAB Code

The MATLAB code demonstrating the application of the discussed algorithm is given below:

% Date: 07-Nov-2014
% Numerical example to demonstrate the equivalently eliminated observation
% equation system for elimination of nuisance parameters
% INPUT  - L (observation vector)
%          A & B (coefficient matrices)
%          P (weight matrix)
% OUTPUT - Section 1 - X (solution vector)
%                      V (residual vector)
%          Section 2 - X1 & X2 (solution vectors of the divided equations)
%                      U1 & U2 (residual vectors)

% Inputs
L = [1; 2; -1; 2; 1; 0; -2]                             % observation vector
A = [1 1; 1 2; 1 1; 0 0; 0 0; 0 0; 0 0]                 % coefficient matrix for 1st set of unknowns (X1)
B = [0 0 0; 0 0 0; 0 0 0; 1 1 1; 2 1 1; 1 1 2; 1 1 1]   % coefficient matrix for 2nd set of unknowns (X2)
P = eye(7)    % weight matrix (original entries lost in extraction; identity weights assumed here)

% General least squares solution
A1 = [A B];                 % combined coefficient matrix for all the unknowns
N = A1'*P*A1;
U = A1'*P*L;
X = inv(N)*U                % solution vector
V = L - A1*X                % residual vector

% Using the equivalently eliminated observation equation to find X1 and X2
M11 = A'*P*A;
M12 = A'*P*B;
M21 = B'*P*A;
M22 = B'*P*B;
B1 = A'*P*L;
B2 = B'*P*L;
K = B*inv(M22)*B'*P;
D1 = (eye(7) - K)*A;
J = A*inv(M11)*A'*P;
D2 = (eye(7) - J)*B;
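% By Eqs. 20-23 and their X1-counterparts, D1'*P*D1 equals M1 and
% D2'*P*D2 equals M2 (both formed below), so the reduced normal
% equations could equally be built from D1 and D2 directly.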

M1 = M11 - (M12*inv(M22)*M21);
R1 = B1 - (M12*inv(M22)*B2);
X1 = inv(M1)*R1             % solution vector for 1st set of unknowns
U1 = L - D1*X1              % residual vector

M2 = M22 - (M21*inv(M11)*M12);
R2 = B2 - (M21*inv(M11)*B1);
X2 = inv(M2)*R2             % solution vector for 2nd set of unknowns
U2 = L - D2*X2              % residual vector

The above code first uses the general least squares method to solve for all five unknowns. It then demonstrates the use of the equivalently eliminated observation equation system to solve for two unknown vectors containing two and three unknowns respectively.

5.2 Results

The results of the code of Section 5.1 are summarised below (the numerical entries of the original table are not recoverable and are marked "..."):

Solution using general least squares method: X = ..., V = ...
Solutions using equivalently eliminated observation equation system: X1 = ..., X2 = ..., U1 = ..., U2 = ...

Table 1. Results of the demonstration code

Table 1 shows the results of the program of Section 5.1. Column 1 of the table shows the solution and residual vectors obtained using the general least squares approach. Column 2 shows the results after the unknown vector is divided into two vectors containing two and three unknowns respectively. It is also evident from the table that the unknown vector X_1 can be calculated, without affecting the results, even if X_2 is omitted from the computations, and vice versa.
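The equivalence reported in Table 1 can also be confirmed programmatically. A short sketch, not part of the original script, that can be appended to the code of Section 5.1:

% Verify that the divided solutions reproduce the joint solution
disp(norm(X - [X1; X2]))               % ~0: stacked sub-solutions equal the full solution
disp(norm((D2'*P*D2)\(D2'*P*L) - X2))  % ~0: Eq. 23 yields the same X2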

6. CONCLUSIONS

This paper discussed the equivalently eliminated observation equation system, which can be used to eliminate nuisance parameters. Instead of solving the original problem, this method allows us to solve only for the required parameters, without changing the observation vector and the weight matrix. A diagonalisation algorithm that separates one adjustment problem into two separate sub-problems has also been discussed.

REFERENCES

Rizos, C. (1999). The Nature of GPS Observation Model Biases. Available online; last accessed 6th Nov. 2014.

Xu, G. (2003). A Diagonalisation Algorithm and Its Application in Ambiguity Search. Journal of Global Positioning Systems, 2(1).

Xu, G. (2007). Adjustment and Filtering Methods. In: GPS Theory, Algorithms and Applications. 2nd ed. Berlin: Springer.

Xu, G. (2007). Equivalence of Undifferenced and Differencing Algorithms. In: GPS Theory, Algorithms and Applications. 2nd ed. Berlin: Springer.
