On the Covariance Matrix of Weighted Total Least-Squares Estimates


On the Covariance Matrix of Weighted Total Least-Squares Estimates

A. R. Amiri-Simkooei, M.ASCE¹; F. Zangeneh-Nejad²; and J. Asgari³

Abstract: Three strategies are employed to estimate the covariance matrix of the unknown parameters in an errors-in-variables model. The first strategy simply computes the inverse of the normal matrix of the observation equations, in conjunction with standard least-squares theory. The second strategy applies the error propagation law to the existing nonlinear weighted total least-squares (WTLS) algorithms, for which some required partial derivatives are derived. The third strategy uses the residual matrix of the WTLS estimates and is applicable only to simulated data. This study investigated whether the covariance matrix of the estimated parameters can be precisely approximated by the direct inversion of the normal matrix of the observation equations. This turned out to be the case when the original observations were precise enough, which holds for many geodetic applications. The three strategies were applied to two commonly used problems, namely a linear regression model and a two-dimensional (2D) affine transformation model, using real and simulated data. The results of the three strategies closely followed each other, indicating that the simple covariance matrix based on the inverse of the normal matrix provides promising results that fulfill the requirements of many practical applications. DOI: 10.1061/(ASCE)SU. © American Society of Civil Engineers.

Author keywords: Weighted total least squares (WTLS); Covariance matrix of WTLS estimates; Error propagation law; Two-dimensional (2D) affine transformation; Linear regression.

Introduction

Total least squares (TLS), originally introduced by Golub and van Loan (1980) in the mathematical literature, is now a standard method applicable to a variety of science and engineering problems. TLS is used to estimate the unknown parameters of a so-called errors-in-variables (EIV) model, in which both the observation vector and the design matrix are perturbed by random errors. There are a large number of publications in the statistical and geodetic literature on the solution of EIV models using the weighted total least-squares (WTLS) method. In the geodetic literature, although the terminology EIV was not directly used, an EIV model was treated in a two-dimensional (2D) nonlinear symmetric Helmert transformation by Teunissen (1988), in which the exact solution was given using a rotationally invariant covariance structure. For studies on TLS, refer to Van Huffel and Vandewalle (1991), Davis (1999), Felus (2004), Felus and Schaffrin (2005), Akyilmaz (2007), Schaffrin and Wieser (2008, 2009, 2011), Schaffrin and Felus (2009), Neitzel (2010), Xu et al. (2012, 2014), Xu and Liu (2014), Fang (2011, 2013), Tong et al. (2011, 2015), Shen et al. (2011), Amiri-Simkooei and Jazaeri (2012, 2013), Jazaeri et al. (2014), Amiri-Simkooei (2013), Amiri-Simkooei et al. (2014, 2015), and Shi et al. (2015).

¹Associate Professor, Dept. of Geomatics Engineering, Faculty of Engineering, Univ. of Isfahan, Hezar-Zarib Ave., Isfahan, Iran (corresponding author). E-mail: amiri@eng.ui.ac.ir
²Lecturer, Dept. of Geomatics Engineering, Faculty of Engineering, Univ. of Isfahan, Hezar-Zarib Ave., Isfahan, Iran; Ph.D. Student, Dept. of Surveying and Geomatics Engineering, Geodesy Division, Faculty of Engineering, Univ. of Tehran, North-Kargar Ave., Amir-Abad, Tehran, Iran.
³Assistant Professor, Dept. of Geomatics Engineering, Faculty of Engineering, Univ. of Isfahan, Hezar-Zarib Ave., Isfahan, Iran.

Note: This manuscript was submitted on March 10, 2014; approved on August 10, 2015; published online on January 8, 2016. Discussion period open until June 8, 2016; separate discussions must be submitted for individual papers. This paper is part of the Journal of Surveying Engineering, © ASCE.
Amiri-Simkooei and Jazaeri (2012) formulated the solution of the WTLS problem based on standard least-squares theory. It applies an iterative algorithm to the linearly structured Gauss-Markov model (GMM) instead of solving a nonlinear Gauss-Helmert model (GHM). The algorithm takes into consideration the complete structure of the covariance matrix of the coefficient matrix. With this formulation available, one can apply the existing body of knowledge of least squares to the WTLS problem. Jazaeri et al. (2014) derived another algorithm for solving the WTLS problem, considering the complete description of the covariance matrices, in a straightforward manner without the use of Lagrange multipliers. Xu et al. (2012) reformulated the nonlinear equality-constrained adjustment solution of an EIV model as a nonlinear adjustment model and further extended it to a partial EIV model in which not all elements of the design matrix are random. Following the study by Xu et al. (2012), Shi et al. (2015) derived an alternative formula for parameter estimation in a partial EIV model.

Having an estimate available, in estimation theory in general and in geodetic applications in particular, an important question arises as to the precision of the estimated parameters. The precision of the estimates can be described by the covariance matrix of the estimated parameters. In standard least-squares theory, the covariance matrix can simply be obtained by inverting the normal matrix of the observation equations. The WTLS formulation of Amiri-Simkooei and Jazaeri (2012) provides such a covariance matrix. Xu et al. (2012) also investigated the error evaluation of the nonlinear TLS estimate, including the first-order approximation of accuracy, the nonlinear confidence region, and the bias of the nonlinear TLS estimate. Because of the nonlinearity, and hence the randomness of the elements involved, the existing methods provide only an approximation of the actual covariance matrix (Teunissen 1985, 1990). A better approximation of the covariance matrix should, in fact, take into consideration the randomness of all random variables. Such a covariance matrix can be obtained by applying the error propagation law to the linear approximation of the nonlinear functions. This indeed also holds for the WTLS estimates.

The objective of this study was the estimation of the covariance matrix of the unknown WTLS parameters using three strategies: (1) computing the inverse of the normal matrix of the observation equations as presented by Amiri-Simkooei and Jazaeri (2012); (2) applying the error propagation law to the linearized form of the WTLS estimates (three algorithms) presented by Fang (2011), Tong et al. (2011), Amiri-Simkooei and Jazaeri (2012), and Jazaeri et al. (2014); and (3) using a simulation scenario, which is explained in detail in the next sections. A comparison is made of the numerical results of these three strategies.

This paper is organized as follows. The following section provides a brief review of the WTLS approaches to an EIV model. The three aforementioned algorithms are used to approximate the covariance matrix of the WTLS estimates in a later section. The error propagation law is applied to these nonlinear algorithms, and some required partial derivatives and technical issues relevant to these applications are also described. A subsequent section includes one empirical example and some simulation studies to investigate the efficacy of the formulations. A comparison is made with the simplest covariance matrix of the estimates. Finally, conclusions are provided in the last section.

WTLS Formulation

Explanations of three formulations for solving the WTLS problem are provided. Consider the following EIV model, in which, in addition to the observation vector, the elements of the design matrix are also affected by random errors:

$$y = (A - E_A)x + e_y \quad (1)$$

with the stochastic properties characterized by

$$\begin{bmatrix} e_y \\ e_A \end{bmatrix} = \begin{bmatrix} e_y \\ \mathrm{vec}(E_A) \end{bmatrix} \sim \left( \begin{bmatrix} 0 \\ 0 \end{bmatrix},\; \sigma_0^2 \begin{bmatrix} Q_y & 0 \\ 0 & Q_A \end{bmatrix} \right)$$

where $y$ = m-vector of observations; $e_y$ = m-vector of observational errors; $A$ = m × n design matrix; $E_A$ = m × n random error matrix of the design matrix; $x$ = n-vector of unknown parameters; $D(e_y) = \sigma_0^2 Q_y$ and $D(e_A) = \sigma_0^2 Q_A$, in which $D$ is the dispersion operator and $Q_y$ and $Q_A$ are the corresponding symmetric and nonnegative cofactor matrices, of size m × m and mn × mn, of the observation vector and the design matrix, respectively; and $\sigma_0^2$ = unknown variance factor of unit weight, assumed to be the same for both $e_y$ and $e_A = \mathrm{vec}(E_A)$. For simplicity, assume $\sigma_0^2 = 1$, indicating that the terminologies of cofactor matrices and covariance matrices are equivalent. The symbol vec stands for the vec operator, which creates a column vector from a matrix by stacking the columns of the matrix below one another.

The WTLS problem aims to solve the following minimization problem:

$$\text{minimize:} \quad e_y^T Q_y^{-1} e_y + e_A^T Q_A^{-1} e_A \quad (2)$$

$$\text{subject to:} \quad y - e_y = (A - E_A)x \quad (3)$$

To solve for the unknown parameters $x$, three methods and their corresponding references are presented in Table 1. The symbols ~ and ^ represent the predicted and estimated quantities, respectively.

Table 1. Three Formulations of WTLS Estimators and Their Corresponding References

Formulation 1: $\hat{x} = (\tilde{A}^T Q_{\tilde{y}}^{-1} \tilde{A})^{-1} \tilde{A}^T Q_{\tilde{y}}^{-1} \tilde{y}$; Fang (2011), Shen et al. (2011), Amiri-Simkooei and Jazaeri (2012), Xu et al. (2012), Jazaeri et al. (2014)
Formulation 2: $\hat{x} = (\tilde{A}^T Q_{\tilde{y}}^{-1} A)^{-1} \tilde{A}^T Q_{\tilde{y}}^{-1} y$; Tong et al. (2011), Fang (2011), Amiri-Simkooei and Jazaeri (2012)
Formulation 3: $\hat{x} = (\tilde{A}^T Q_{y}^{-1} \tilde{A})^{-1} \tilde{A}^T Q_{y}^{-1} \tilde{y}$; Jazaeri et al. (2014)

In these formulations, the predicted design matrix $\tilde{A}$ and the predicted observation vector $\tilde{y}$ (Amiri-Simkooei and Jazaeri 2012) are obtained as

$$\tilde{A} = A - \tilde{E}_A, \qquad \tilde{y} = y - \tilde{E}_A \hat{x} \quad (4)$$

where the predicted residuals $\tilde{E}_A$ are obtained as

$$\tilde{E}_A = \mathrm{ivec}(\tilde{e}_A), \quad \text{with} \quad \tilde{e}_A = -Q_A(\hat{x} \otimes I_m)\, Q_{\tilde{y}}^{-1}\, \hat{e} \quad (5)$$

with

$$\hat{e} = y - A\hat{x} \quad (6)$$
the estimated total residuals of the EIV model, which are different from the predicted residuals of the observations

$$\tilde{e}_y = y - \hat{y} = y - \tilde{A}\hat{x} = Q_y Q_{\tilde{y}}^{-1}(y - A\hat{x}) = Q_y Q_{\tilde{y}}^{-1}\hat{e}$$

in an EIV model. Thus, $\hat{e} = \tilde{e}_y - \tilde{E}_A\hat{x}$. In Eq. (5), ivec denotes the operator that converts an mn-vector to an m × n matrix. Furthermore, the covariance matrix of the actual predicted observation $\bar{y} = y - E_A x$, i.e., $Q_{\bar{y}} = Q_y + (x^T \otimes I_m) Q_A (x \otimes I_m)$, is approximated by

$$Q_{\tilde{y}} = Q_y + (\hat{x}^T \otimes I_m)\, Q_A\, (\hat{x} \otimes I_m) \quad (7)$$

where ⊗ = Kronecker product of two matrices; and $I_m$ = identity matrix of size m.

The three formulations in Table 1 provide identical estimates through iterative algorithms (Amiri-Simkooei and Jazaeri 2012; Jazaeri et al. 2014). Formulation 1 is of particular interest because it is similar to the standard least-squares formulation. This formulation allows the existing body of knowledge of least-squares theory to be applied to the EIV model (Amiri-Simkooei and Jazaeri 2012). For example, the so-called normal matrix is a symmetric positive-definite matrix. Also, the predicted observation vector and design matrix are obtained as $\tilde{y} = y - \tilde{E}_A\hat{x}$ and $\tilde{A} = A - \tilde{E}_A$, respectively. The covariance matrix of the predicted observations is approximated by Eq. (7). In addition, because of the randomness of $\tilde{A}$ and $Q_{\tilde{y}}$, the covariance matrix of the estimated parameters $\hat{x}$ is approximated as

$$Q_{\hat{x}} \cong \hat{\sigma}_0^2\, (\tilde{A}^T Q_{\tilde{y}}^{-1} \tilde{A})^{-1} \quad (8)$$

from which the variances of and covariances among the estimates can be derived. In Eq. (8), $\hat{\sigma}_0^2 = \hat{e}^T Q_{\tilde{y}}^{-1} \hat{e}/(m - n)$ is the least-squares estimate of the variance factor of unit weight. The numerical results of Eq. (8) have been shown to be identical to those of the nonlinear GHM (Amiri-Simkooei and Jazaeri 2012). For detailed information about this formulation and its different applications, see Amiri-Simkooei and Jazaeri (2012), Amiri-Simkooei (2013), and Amiri-Simkooei et al. (2014).
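To make the iteration behind Formulation 1 concrete, the following is a minimal MATLAB sketch of Eqs. (4)-(8). The function name wtls_form1 and the ordinary weighted least-squares start value are illustrative choices, not the authors' published implementation; the sign convention of Eq. (5) follows the derivation above.

```matlab
function [xh, Qxh, s02] = wtls_form1(A, y, Qy, QA, tol)
% Sketch of WTLS Formulation 1 (Table 1): iterate
%   xh = (At'*Qyt^-1*At)^-1 * At'*Qyt^-1*yt
% with Qyt = Qy + (xh' (x) Im) QA (xh (x) Im).
[m, n] = size(A);
xh = (A' * (Qy \ A)) \ (A' * (Qy \ y));   % ordinary WLS start value
dx = inf;
while norm(dx) > tol
    Qyt = Qy + kron(xh', eye(m)) * QA * kron(xh, eye(m));  % Eq. (7)
    eh  = y - A * xh;                                      % Eq. (6)
    eA  = -QA * kron(xh, eye(m)) * (Qyt \ eh);             % Eq. (5)
    EA  = reshape(eA, m, n);                               % ivec operator
    At  = A - EA;                                          % Eq. (4)
    yt  = y - EA * xh;
    xnew = (At' * (Qyt \ At)) \ (At' * (Qyt \ yt));        % Formulation 1
    dx = xnew - xh;
    xh = xnew;
end
% Refresh the predicted quantities at the converged solution:
Qyt = Qy + kron(xh', eye(m)) * QA * kron(xh, eye(m));
eh  = y - A * xh;
At  = A - reshape(-QA * kron(xh, eye(m)) * (Qyt \ eh), m, n);
s02 = (eh' * (Qyt \ eh)) / (m - n);       % variance factor of unit weight
Qxh = s02 * inv(At' * (Qyt \ At));        % Eq. (8)
end
```

Formulations 2 and 3 differ only in the weight matrix and the vectors used in the update step; at convergence all three return the same estimate.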

In this paper, the results obtained from the covariance matrix of the estimated parameters expressed in Eq. (8) were compared with those of the three formulations in Table 1, to which the error propagation law is applied. For further comparison, some simulation scenarios were used. In the next section, the covariance matrix of the estimated parameters $Q_{\hat{x}}$ is approximated by applying the error propagation law (Teunissen 2000) to the three WTLS estimators listed in Table 1.

Covariance Matrix of WTLS Estimates

This paper aims to assess the error of the estimated parameters. For other research providing the first-order approximation of the accuracy of nonlinear TLS estimates, refer to Xu et al. (2012), who provided the covariance matrix of their estimates using their partial EIV formulation. This section aimed to determine the covariance matrix of the estimated parameters using three strategies. The first strategy used the approximate covariance matrix based on Eq. (8). The second strategy applied the error propagation law to the formulations of the three WTLS estimators presented in Table 1, for which some necessary partial derivatives are derived (this section). The third strategy used simulated data, which is explained in detail in the next section.

To derive the covariance matrix of the estimated parameters $\hat{x}$, one can either linearize the nonlinear (partial) EIV model, as done by Xu et al. (2012), or directly apply the error propagation law to the linearized form of the three WTLS formulations. The latter approach was followed here because the three WTLS estimates are provided in Table 1. For this purpose, the following equations are introduced:

$$\begin{cases} \hat{x} = F(\hat{x},\; y,\; \mathrm{vec}(A)) \\ y = y \\ \mathrm{vec}(A) = \mathrm{vec}(A) \end{cases} \;\Rightarrow\; Y = \begin{bmatrix} \hat{x} \\ y \\ \mathrm{vec}(A) \end{bmatrix} \quad (9)$$

where the first equation is one of the functions for $\hat{x}$ in Table 1. The second and third equations are introduced because $\hat{x}$ is a function of the observation vector $y$ and the design matrix $A$, in addition to $\hat{x}$ itself. The estimated unknowns $\hat{x}$, the observation vector $y$, and the design matrix $A$ are then stacked into a vector of size n + m + nm, namely $Y$. The covariance matrix $Q_Y$ is then to be determined. It is of the form

$$Q_Y = \begin{bmatrix} Q_{\hat{x}} & Q_{\hat{x}y} & Q_{\hat{x}A} \\ Q_{y\hat{x}} & Q_y & Q_{yA} \\ Q_{A\hat{x}} & Q_{Ay} & Q_A \end{bmatrix} = \begin{bmatrix} Q_{\hat{x}} & Q_{\hat{x}y} & Q_{\hat{x}A} \\ Q_{y\hat{x}} & Q_y & 0 \\ Q_{A\hat{x}} & 0 & Q_A \end{bmatrix} \quad (10)$$

in which the correlation between the design matrix $A$ and the observation vector $y$ is ignored, i.e., $Q_{Ay} = Q_{yA} = 0$. It is further assumed that the $Q_y$ and $Q_A$ matrices are known, whereas $Q_{\hat{x}}$, $Q_{\hat{x}y}$, and $Q_{\hat{x}A}$ are unknown and to be determined. Using the error propagation law, the covariance matrix $Q_Y$ can be approximated by $Q_Y = J Q_Y^0 J^T$ in an iterative manner, starting from an initial matrix $Q_Y^0$ expressed as

$$Q_Y^0 = \begin{bmatrix} 0 & 0 & 0 \\ 0 & Q_y & 0 \\ 0 & 0 & Q_A \end{bmatrix} \quad (11)$$

or

$$Q_Y^0 = \begin{bmatrix} Q_{\hat{x}}^0 & 0 & 0 \\ 0 & Q_y & 0 \\ 0 & 0 & Q_A \end{bmatrix} \quad (12)$$

where $Q_{\hat{x}}^0$ = approximation of the unknown covariance matrix of $\hat{x}$, obtained from Eq. (8), for instance. The Jacobi matrix $J$ then reads

$$J = \begin{bmatrix} J_{\hat{x}\hat{x}} & J_{\hat{x}y} & J_{\hat{x}A} \\ J_{y\hat{x}} & J_{yy} & J_{yA} \\ J_{A\hat{x}} & J_{Ay} & J_{AA} \end{bmatrix} \quad (13)$$

where the $J_{[\cdot][\cdot]}$ are partial derivatives to be determined. From the first formula of Eq. (9), it is obvious that $\hat{x}$ is a function of three variables, $\hat{x} = F(\hat{x}, y, A)$. Therefore, the partial derivatives of $\hat{x}$ with respect to $\hat{x}$, $y$, and $A$ exist. Also, the partial derivatives of $y$ with respect to $y$ and of $A$ with respect to $A$ are identity matrices of size m and mn, respectively, i.e., $J_{yy} = I_m$ and $J_{AA} = I_{mn}$. The other partial derivatives are zero, i.e., $J_{y\hat{x}} = 0$, $J_{yA} = 0$, $J_{A\hat{x}} = 0$, and $J_{Ay} = 0$. Therefore, Eq. (13) simplifies to

$$J = \begin{bmatrix} J_{\hat{x}\hat{x}} & J_{\hat{x}y} & J_{\hat{x}A} \\ 0 & I_m & 0 \\ 0 & 0 & I_{mn} \end{bmatrix} \quad (14)$$

in which the first row of partial derivatives will be determined for the three formulations in Table 1.
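For illustration, once the three derivative blocks of Eqs. (15)-(17) below are available (here as hypothetical variables Jxx, Jxy, and JxA, with m and n defined), the Jacobi matrix of Eq. (14) can be assembled in MATLAB as:

```matlab
% Assemble the Jacobi matrix of Eq. (14); Jxx (n x n), Jxy (n x m), and
% JxA (n x m*n) are the derivative blocks of Eqs. (15)-(17).
J = [Jxx,           Jxy,           JxA;
     zeros(m, n),   eye(m),        zeros(m, m*n);
     zeros(m*n, n), zeros(m*n, m), eye(m*n)];
```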
Using this formulation, it was noted that the covariance matrices $Q_y$ and $Q_A$, obtained from $Q_Y = J Q_Y^0 J^T$, remain unchanged through the iterations, which is expected. It was also noted that the Jacobi matrix $J$ is calculated only once, because the random variables $y$ and $\mathrm{vec}(A)$ are given and the estimated vector $\hat{x}$ has already been determined through the iterative WTLS algorithm. The three partial derivatives in Eq. (14) can be calculated as follows:

1. The n × n matrix $J_{\hat{x}\hat{x}}$ is obtained as

$$J_{\hat{x}\hat{x}} = \left[\, \partial_{x_1}\hat{x},\; \partial_{x_2}\hat{x},\; \ldots,\; \partial_{x_n}\hat{x} \,\right]_{n \times n} \quad (15)$$

where the n × 1 ith column of $J_{\hat{x}\hat{x}}$ (i.e., $\partial_{x_i}\hat{x}$, i = 1, 2, …, n) is given in Fig. 1 for the three formulations presented in Table 1.

2. The n × m matrix $J_{\hat{x}y}$ is of the form

$$J_{\hat{x}y} = \left[\, \partial_{y_1}\hat{x},\; \partial_{y_2}\hat{x},\; \ldots,\; \partial_{y_m}\hat{x} \,\right]_{n \times m} \quad (16)$$

where the n × 1 jth column of $J_{\hat{x}y}$ (i.e., $\partial_{y_j}\hat{x}$, j = 1, 2, …, m) is given in Fig. 2 for the three formulations presented in Table 1.

3. Finally, the n × mn matrix $J_{\hat{x}A}$ has the following form:

$$J_{\hat{x}A} = J_{\hat{x}\,\mathrm{vec}(A)} = \left[\, \partial_{a_{11}}\hat{x},\; \ldots,\; \partial_{a_{m1}}\hat{x},\; \partial_{a_{12}}\hat{x},\; \ldots,\; \partial_{a_{m2}}\hat{x},\; \ldots,\; \partial_{a_{1n}}\hat{x},\; \ldots,\; \partial_{a_{mn}}\hat{x} \,\right]_{n \times mn} \quad (17)$$

where the kth column of $J_{\hat{x}A}$ [i.e., $\partial_{a_{ij}}\hat{x}$; i = 1, …, m; j = 1, …, n; k = 1, …, mn, in which $1 \le k = (j-1)m + i \le mn$ holds] is given in Fig. 3 for the three formulations of Table 1.

In Figs. 1-3, a few terms are needed to calculate the partial derivatives; these are addressed in the following. The partial derivatives of $Q_{\tilde{y}}^{-1}$ with respect to $y$ and $A$ are zero, but its partial derivatives with respect to $x$ follow as

$$\partial_{x_i}\!\left(Q_{\tilde{y}}^{-1}\right) = -Q_{\tilde{y}}^{-1}\left[(c_i^T \otimes I_m)\, Q_A\, (\hat{x} \otimes I_m) + (\hat{x}^T \otimes I_m)\, Q_A\, (c_i \otimes I_m)\right] Q_{\tilde{y}}^{-1} \quad (18)$$

in which the identity $d(A^{-1}) = -A^{-1}\, dA\, A^{-1}$ is used to derive the derivative of an inverse matrix. Also,

Fig. 1. Partial derivatives of $\hat{x}$ with respect to $x_i$ for the three formulations in Table 1

Fig. 2. Partial derivatives of $\hat{x}$ with respect to $y_j$ for the three formulations in Table 1

Fig. 3. Partial derivatives of $\hat{x}$ with respect to $a_{ij}$ for the three formulations in Table 1

$$\partial_{x_i}(\tilde{A}) = \mathrm{ivec}\Big( Q_A (c_i \otimes I_m)\, Q_{\tilde{y}}^{-1} (y - A\hat{x}) + Q_A (\hat{x} \otimes I_m)\, \partial_{x_i}\!\left(Q_{\tilde{y}}^{-1}\right) (y - A\hat{x}) - Q_A (\hat{x} \otimes I_m)\, Q_{\tilde{y}}^{-1} A c_i \Big) \quad (19)$$

and

$$\partial_{y_j}(\tilde{A}) = \mathrm{ivec}\left( Q_A (\hat{x} \otimes I_m)\, Q_{\tilde{y}}^{-1} c_j \right) \quad (20)$$

and

$$\partial_{a_{ij}}(\tilde{A}) = c_{ij} - \mathrm{ivec}\left( Q_A (\hat{x} \otimes I_m)\, Q_{\tilde{y}}^{-1} c_{ij} \hat{x} \right) \quad (21)$$

where $c_i = (0, \ldots, 0, 1, 0, \ldots, 0)^T$ and $c_j = (0, \ldots, 0, 1, 0, \ldots, 0)^T$ are the canonical unit vectors of order n and m, which contain zeros except for a 1 at the ith and jth positions, respectively; and $c_{ij}$ = m × n matrix, which contains zeros except for a 1 at row i and column j.
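The identities in Eqs. (18)-(21) translate directly into code. The following MATLAB sketch evaluates them for single indices; variable names are illustrative, and a converged WTLS solution (xh, together with A, y, Qy, QA, m, n) is assumed to be available.

```matlab
% Sketch of the partial derivatives in Eqs. (18)-(21).
Im  = eye(m);
Qyt = Qy + kron(xh', Im) * QA * kron(xh, Im);          % Eq. (7)
eh  = y - A * xh;

i  = 1;                        % parameter index, 1 <= i <= n
ci = zeros(n, 1);  ci(i) = 1;  % canonical unit vector of order n

% Eq. (18): d(Qyt^-1)/dx_i via d(A^-1) = -A^-1 dA A^-1
dQyt  = kron(ci', Im) * QA * kron(xh, Im) + kron(xh', Im) * QA * kron(ci, Im);
dQyti = -(Qyt \ dQyt) / Qyt;

% Eq. (19): d(At)/dx_i, with At the predicted design matrix
dAt_dxi = reshape( QA * kron(ci, Im) * (Qyt \ eh) ...
                 + QA * kron(xh, Im) * (dQyti * eh) ...
                 - QA * kron(xh, Im) * (Qyt \ (A * ci)), m, n);

% Eq. (20): d(At)/dy_j
j  = 1;  cj = zeros(m, 1);  cj(j) = 1;   % canonical unit vector of order m
dAt_dyj = reshape(QA * kron(xh, Im) * (Qyt \ cj), m, n);

% Eq. (21): d(At)/da_rc for element a_rc of A (1 <= r <= m, 1 <= c <= n)
r = 1;  c = 1;
Crc = zeros(m, n);  Crc(r, c) = 1;
dAt_darc = Crc - reshape(QA * kron(xh, Im) * (Qyt \ (Crc * xh)), m, n);
```

Looping these expressions over all indices fills the columns of the blocks in Eqs. (15)-(17).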

5 Having an estimate available for the unknown parameters, Algorithm 1 can be used in an iterative manner to calculate the covariance matrix of the WTLS estimates A MATLAB script that implements this algorithm is provided in the Supplemental Data Algorithm 1 Algorithm for Approximation of Covariance Matrix of WTLS Estimates by Calculating Partial Derivatives and Applying Error Propagation Law Input Design matrix A and its covariance matrix Q A Observation vector y and its covariance matrix Q y Begin 1 Estimate unknown parameters ^x using WTLS algorithm Estimate variance factor of unit weight Set value for converge tolerance of «4 Set iteration counter k =0 5 Initialize Q Y [either Eq (11)or(1)] 6 Calculate numerical partial derivatives in Jacobi matrix J 7 Begin a Update Q Y using Q ðkþ1þ Y ¼ JQ ðkþ Y JT b Increase k : ¼ k þ 1 c While kq ðkþ1þ Y Q ðkþ Y k > ɛ repeat 8 End Output Extract covariance matrix Q^x from Q Y End Numerical Results and Discussions To investigate the efficacy of the presented methods for estimating the covariance matrix Q^x, two case studies were presented The first case study was a linear regression model in which real and simulated data sets were used This example has been widely used in many TLS research papers and is of interest in many engineering disciplines The second case study was a D affine transformation for which simulated weighted data sets were used This application is particularly of interest in many geomatics and surveying engineering applications For each application, the covariance matrix of the estimates was directly computed using Eq (8) (first strategy) Moreover, matrix Q^x was estimated by applying the error propagation law to the corresponding formulation of the WTLS approach within an EIV model (second strategy) In the sequel, a comparison was made between the results obtained by Eq (8) and those obtained using the error propagation law Furthermore, for the simulated examples, a comparison was also made with the results of the simulation (third strategy) for both the linear regression and D affine transformation models Linear Regression Model This section consists of two parts The first example is a linear regression model in which real data sets were used, whereas simulated data were employed in the second example Real Data For the first case study, the problem of linear regression was considered A linear relation between the two coordinate components u and v is written as v ¼ au þ b () where a and b are the slope and abscissa of the straight line, respectively In many practical applications, the two variables u and v result from experimental measurements, so errors in both variables are involved Therefore, Eq () can be rewritten as v i e vi ¼ aðu i e ui Þþb, i ¼ 1; ; m, in which m = number of points; and e ui and e vi = errors in the two variables u and v, respectively This paper used the data presented by Neri et al (1989) Observed data are u and v, which, along with the corresponding weights W u and W v, have been provided by Neri et al (1989) and many other research papers Therefore, they are not repeated here The aim was to obtain the precision of the WTLS estimates (slope a and intercept b of the regression line) along with the correlation coefficient, ie, r ^a^b ¼ s ^a^b=ðs ^a s ^bþ The solution schema using WTLS are nowbriefly explained If T, the parameter vector is defined as x ¼ a b only the first column of the coefficient matrix has random errors, whereas the values in the second column are fixed Thus, Q y ¼fdiag W v1 ; ; W v10 ÞŠg 
Numerical Results and Discussions

To investigate the efficacy of the presented methods for estimating the covariance matrix $Q_{\hat{x}}$, two case studies were considered. The first case study was a linear regression model in which real and simulated data sets were used. This example has been widely used in many TLS research papers and is of interest in many engineering disciplines. The second case study was a 2D affine transformation for which simulated weighted data sets were used. This application is of particular interest in many geomatics and surveying engineering applications. For each application, the covariance matrix of the estimates was directly computed using Eq. (8) (first strategy). Moreover, the matrix $Q_{\hat{x}}$ was estimated by applying the error propagation law to the corresponding formulation of the WTLS approach within an EIV model (second strategy). In the sequel, a comparison was made between the results obtained by Eq. (8) and those obtained using the error propagation law. Furthermore, for the simulated examples, a comparison was also made with the results of the simulation (third strategy) for both the linear regression and 2D affine transformation models.

Linear Regression Model

This section consists of two parts. The first example is a linear regression model in which real data sets were used, whereas simulated data were employed in the second example.

Real Data

For the first case study, the problem of linear regression was considered. A linear relation between the two coordinate components u and v is written as

$$v = au + b \quad (22)$$

where a and b are the slope and intercept of the straight line, respectively. In many practical applications, the two variables u and v result from experimental measurements, so errors in both variables are involved. Therefore, Eq. (22) can be rewritten as $v_i - e_{v_i} = a(u_i - e_{u_i}) + b$, i = 1, …, m, in which m = number of points; and $e_{u_i}$ and $e_{v_i}$ = errors in the two variables u and v, respectively. This paper used the data presented by Neri et al. (1989). The observed data are u and v, which, along with the corresponding weights $W_u$ and $W_v$, have been provided by Neri et al. (1989) and many other research papers; therefore, they are not repeated here. The aim was to obtain the precision of the WTLS estimates (slope a and intercept b of the regression line) along with the correlation coefficient, i.e., $r_{\hat{a}\hat{b}} = \sigma_{\hat{a}\hat{b}}/(\sigma_{\hat{a}}\,\sigma_{\hat{b}})$.

The solution scheme using WTLS is now briefly explained. If the parameter vector is defined as $x = [a,\; b]^T$, only the first column of the coefficient matrix has random errors, whereas the values in the second column are fixed. Thus, $Q_y = \{\mathrm{diag}(W_{v_1}, \ldots, W_{v_{10}})\}^{-1}$ and $Q_A = Q_2 \otimes Q_{10}$, where

$$Q_2 = \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix} \quad \text{and} \quad Q_{10} = \{\mathrm{diag}(W_{u_1}, \ldots, W_{u_{10}})\}^{-1}$$

where the weights come from the data set presented by Neri et al. (1989), and diag is the operator that converts a vector into a diagonal matrix whose diagonal entries are the vector's elements. A MATLAB sketch of this cofactor construction is given after the simulation cases below. The convergence threshold was chosen as $\varepsilon = 10^{-12}$. Further explanation is provided by Amiri-Simkooei and Jazaeri (2012). The estimated line parameters of WTLS are $\hat{a} = -0.4805$ and $\hat{b} = 5.4799$, in agreement with the results of Neri et al. (1989).

For this data set, approximations of the covariance matrix of $\hat{x}$ can be obtained using the two strategies mentioned previously: (1) by computing the inverse of the normal matrix of the observation equations [i.e., using $Q_{\hat{x}} = \hat{\sigma}_0^2 (\tilde{A}^T Q_{\tilde{y}}^{-1} \tilde{A})^{-1}$]; and (2) by applying the error propagation law to the WTLS estimators presented in Table 1 (three formulations). The value of the convergence tolerance of Algorithm 1 was also set to $\varepsilon = 10^{-12}$ for the three formulations to compare the results fairly. Table 2 provides the standard deviations of the WTLS estimates and the correlation coefficient obtained via the two strategies.

A few observations from the results of Table 2 can be highlighted. First, the results of the second strategy for the three formulations were identical: the standard deviations of the estimated parameters and the correlation coefficient obtained from applying the error propagation law to the different WTLS estimates of Table 1 were exactly the same. In fact, this held true for the subsequent results as well, and therefore only the results of Formulation 1 (Method I) are presented for the other case studies. Also, the results (standard deviations and correlation coefficient) of the second strategy closely followed those of the first strategy based on $Q_{\hat{x}} = \hat{\sigma}_0^2 (\tilde{A}^T Q_{\tilde{y}}^{-1} \tilde{A})^{-1}$ of Eq. (8). The results of Eq. (8) have already been shown to be identical to those obtained using the nonlinear GHM (Amiri-Simkooei and Jazaeri 2012).

Simulated Data

Similar to the method applied by Amiri-Simkooei and Jazaeri (2012), 50 points were simulated in the linear regression model. Simulations were performed for different cases to investigate how the covariance matrices of the observables and the coefficient matrix influence the precision of the estimates through $Q_{\hat{x}}$. To construct the covariance matrices of the observables and the coefficient matrix, three cases were considered:

Case 1: $Q_y = 0.25\, I_{50}$ and $Q_A = Q_2 \otimes Q_{50}$, where $Q_{50} = 0.25\, I_{50}$
Case 2: $Q_y = 0.5\, I_{50}$ and $Q_A = Q_2 \otimes Q_{50}$, where $Q_{50} = 0.5\, I_{50}$
Case 3: $Q_y = I_{50}$ and $Q_A = Q_2 \otimes Q_{50}$, where $Q_{50} = I_{50}$
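As referenced above, the cofactor matrices of the regression example (and, with obvious changes, of the simulated cases) can be built in a few MATLAB lines. The vectors u, Wu, and Wv denote the data and weight vectors of Neri et al. (1989), assumed to be loaded already; all names are illustrative.

```matlab
% Sketch of the cofactor matrices for the straight-line EIV model.
% Only the first column of A = [u, ones(m,1)] is random, hence the
% selector Q2 = [1 0; 0 0] in the Kronecker structure.
m   = numel(u);
Qy  = diag(1 ./ Wv);             % Q_y = {diag(Wv)}^-1
Q2  = [1 0; 0 0];
Q10 = diag(1 ./ Wu);             % {diag(Wu)}^-1 (10 points here)
QA  = kron(Q2, Q10);             % Q_A = Q2 (x) Q10, size mn x mn
A   = [u, ones(m, 1)];           % design matrix of v = a*u + b

% Simulated Case 1 uses the same structure with scaled identities:
% Qy = 0.25 * eye(50);  QA = kron(Q2, 0.25 * eye(50));
```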

Table 2. Standard Deviations and Correlation Coefficient of the Estimated Parameters Using the Data from Neri et al. (1989) (rows: $\sigma_{\hat{a}}$, $\sigma_{\hat{b}}$, $r_{\hat{a}\hat{b}}$; columns: first strategy [Eq. (8)]; second strategy, Methods I-III)
Note: The first strategy uses Eq. (8); the second strategy applies the error propagation law to the three WTLS formulations shown in Table 1 (Methods I-III).

For these cases, the actual line parameters were set to a = 1 and b = 10. The $u_i$ components were assumed to be $u_i = i$, i = 1, …, 50, and the $v_i$ components were then calculated from the line parameters using Eq. (22). Both components were corrupted by white Gaussian noise using the preceding covariance matrices.

For this case study, the third strategy was used in addition to the first and second strategies. To estimate the covariance matrix of the parameters via the simulation process, the simulation was repeated over 1,000,000 independent runs for the three simulation scenarios (Cases 1-3). For each run, the line parameters $\hat{a}$ and $\hat{b}$ were estimated by the WTLS algorithm. The differences between the estimates and the actual line parameters form a 1,000,000 × 2 residual matrix $V = [\hat{a}^{(j)} - a,\; \hat{b}^{(j)} - b]$, j = 1, …, 1,000,000, where $\hat{a}^{(j)}$ and $\hat{b}^{(j)}$ denote the estimated line parameters of the jth run. The covariance matrix of the estimated parameters $\hat{x}$ was then approximated by $R_{\hat{x}} = V^T V/(m - n)$, where m = 1,000,000 and n = 0 (Teunissen and Amiri-Simkooei 2008; Amiri-Simkooei 2009). With the covariance matrix of the simulation $R_{\hat{x}}$ available, the variances of the estimates and the covariances among them can be derived (i.e., $\sigma_{\hat{a}}^2 = \sigma_{ii} = \sigma_i^2$, $\sigma_{\hat{b}}^2 = \sigma_{jj} = \sigma_j^2$, and $\sigma_{\hat{a}\hat{b}} = \sigma_{ij}$). Furthermore, from the covariance matrix of the simulation $R_{\hat{x}}$, the correlation coefficient between the estimates can be computed as

$$\hat{r}_{ij} = \frac{\hat{\sigma}_{ij}}{\sqrt{\hat{\sigma}_{ii}\,\hat{\sigma}_{jj}}} = \frac{\hat{\sigma}_{ij}}{\hat{\sigma}_i\, \hat{\sigma}_j} \quad (23)$$

The covariance matrix of these three estimators is given by Amiri-Simkooei (2009):

$$Q_{\hat{\sigma}} = D\!\left(\begin{bmatrix} \hat{\sigma}_{ij} \\ \hat{\sigma}_{ii} \\ \hat{\sigma}_{jj} \end{bmatrix}\right) = \frac{1}{m - n}\begin{bmatrix} \sigma_{ii}\sigma_{jj} + \sigma_{ij}^2 & 2\sigma_{ii}\sigma_{ij} & 2\sigma_{jj}\sigma_{ij} \\ 2\sigma_{ii}\sigma_{ij} & 2\sigma_{ii}^2 & 2\sigma_{ij}^2 \\ 2\sigma_{jj}\sigma_{ij} & 2\sigma_{ij}^2 & 2\sigma_{jj}^2 \end{bmatrix} \quad (24)$$

where n = 0 and m = 1,000,000 for this application. The precision of the standard deviation estimator $\hat{\sigma}_i = \sqrt{\hat{\sigma}_i^2} = \sqrt{\hat{\sigma}_{ii}}$ can be approximated by applying the error propagation law to this nonlinear function:

$$\sigma_{\hat{\sigma}_i} = \frac{\sigma_{\hat{\sigma}_{ii}}}{2\hat{\sigma}_i} \quad (25)$$

In a similar manner, applying the error propagation law to the linearized form of Eq. (23) yields the variance of the correlation coefficient $\hat{r}_{ij}$. It can easily be shown that $\sigma^2_{\hat{r}_{ij}}$ is of the form (Amiri-Simkooei 2009)

$$\sigma^2_{\hat{r}_{ij}} = \frac{(1 - r_{ij}^2)^2}{m - n} \quad (26)$$

The mean values of the line parameters estimated over 1,000,000 runs for the three cases are presented in Table 3. The mean covariance matrix $Q_{\hat{x}}$ can be obtained by averaging the covariance matrices over the 1,000,000 independent runs.

Table 3. Mean Values of the Estimated Line Parameters over 1,000,000 Independent Runs for the Three Simulation Cases (rows: Cases 1-3; columns: $\hat{a}$, $\hat{b}$)

Table 4. Mean Correlation Matrix Computed Using the Direct Formula [i.e., Eq. (8)] over 1,000,000 Independent Runs for All Simulation Cases (First Strategy)

With this mean covariance matrix available, the mean correlation matrix of the WTLS estimators can then be obtained. The correlation matrix is a symmetric matrix whose diagonal elements are the standard deviations of the WTLS estimates, whereas its off-diagonal elements are the correlation coefficients between the WTLS estimates.
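The simulation (third) strategy just described can be condensed into the following MATLAB sketch, shown for Case 1 with a reduced number of runs. wtls_form1 is the sketch given earlier; all names are illustrative.

```matlab
% Monte Carlo (third) strategy for simulation Case 1; N is reduced from
% the paper's 1,000,000 runs for illustration.
N  = 1e4;  m = 50;  n = 2;
u0 = (1:m)';  a = 1;  b = 10;            % errorless u and true line
Qy = 0.25 * eye(m);
QA = kron([1 0; 0 0], 0.25 * eye(m));
V  = zeros(N, n);
for jj = 1:N
    u = u0 + 0.5 * randn(m, 1);          % noisy u (variance 0.25)
    v = a * u0 + b + 0.5 * randn(m, 1);  % noisy v (variance 0.25)
    xh = wtls_form1([u, ones(m, 1)], v, Qy, QA, 1e-12);
    V(jj, :) = (xh - [a; b])';
end
Rxh = (V' * V) / N;                      % R_xhat = V'V/(m - n) with n = 0
rab = Rxh(1, 2) / sqrt(Rxh(1, 1) * Rxh(2, 2));   % Eq. (23)
```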
The correlation matrix computed using the direct formula [i.e., Eq. (8)] for the three simulated cases is given in Table 4.

The standard deviations of the estimated line parameters and the correlation coefficient for the three simulated cases could also be obtained by applying the error propagation law to the WTLS formulations given in Table 1, averaged over 1,000,000 runs ($R_{\hat{x}}^{EP}$, second strategy). Furthermore, these parameters were obtained from $R_{\hat{x}} = V^T V/(m - n)$ (third strategy). The results are listed in Table 5 for the three cases. In Table 5, the standard deviations of the estimated line parameters and the correlation coefficient using the first strategy [i.e., Eq. (8)] were considered as the reference, all set to 100% (second column). The results of the other two strategies are compared with this column. For this purpose, the ratios of the standard deviations and the correlation coefficients obtained from the second strategy (third column) and the third strategy (fourth column) were computed. The precision of the estimated standard deviation and correlation coefficient for the third strategy is also provided in this table [Eqs. (24)-(26)].

As mentioned before, because the covariance matrices of the estimates based on the application of the error propagation law to the three WTLS estimators of Table 1 (second strategy) were identical to each other, only the results of Formulation 1 are presented; the second and third formulations provided results identical to those of the first formulation. It was noted, however, that the computational load of the third formulation was 5 times that of the first and second formulations. The average number of iterations of the third formulation was approximately 40, compared with 5 and 5 for the first and second formulations, respectively.

Table 5. Standard Deviations and Correlation Coefficients of the Estimated Parameters for the Three Simulation Cases (rows: Cases 1-3, each with $\sigma_{\hat{a}}$, $\sigma_{\hat{b}}$, $r_{\hat{a}\hat{b}}$; columns: $Q_{\hat{x}}$ [Eq. (8)] (%)ᵃ; $R_{\hat{x}}^{EP}$ to $Q_{\hat{x}}$ (%)ᵇ; $R_{\hat{x}}$ to $Q_{\hat{x}}$ (%)ᶜ)
ᵃEstimated standard deviations and correlation coefficient using the direct formula [i.e., Eq. (8)] as reference values.
ᵇRatio of the standard deviations and correlation coefficient obtained by applying the error propagation law to WTLS Formulation 1 of Table 1 to the reference values.
ᶜRatio of the standard deviations and correlation coefficients obtained from the simulation procedure to the reference values.

Such a low convergence rate is also in agreement with Jazaeri et al. (2014).

The results indicated that the direct use of Eq. (8) provided good agreement with the results obtained by applying the error propagation law to the three formulations in Table 1. This also held true when comparing the results with those of the third strategy. In other words, Eq. (8) can be used to directly and reliably obtain the covariance matrix of the estimates in an EIV model. Another observation is that when the precision of the original observations (either in y or in A) decreased, the differences became larger, as seen, for example, when comparing the results of Cases 1 and 3. This makes sense because the randomness of $\hat{x}$ then increasingly affects the covariance matrix of its estimate. It was noted, however, that for many geodetic applications the precision of the original observations y and A is sufficiently high, so that ignoring the randomness of $\hat{x}$ has a negligible effect; therefore, Eq. (8) provides reliable results.

2D Affine Transformation

One of the most frequently encountered problems in geomatics is the coordinate transformation between two systems. A few transformation functions are described in the literature, differing in the number of parameters used for the transformation. Of these, the 2D affine transformation was used in this paper. The model is expressed as

$$\begin{bmatrix} u_t \\ v_t \end{bmatrix} = \begin{bmatrix} u_s & v_s & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & u_s & v_s & 1 \end{bmatrix} \begin{bmatrix} a_1 \\ b_1 \\ c_1 \\ a_2 \\ b_2 \\ c_2 \end{bmatrix} \quad (27)$$

which employs six physical parameters: $c_1$ and $c_2$ represent the shifts along the u- and v-axes, respectively. The other parameters $a_1$, $a_2$, $b_1$, and $b_2$ are related to the four physical parameters of a 2D linear transformation, namely two scales along the u- and v-axes, one rotation, and one nonperpendicularity (or affinity) parameter. The coordinates in the target system are $u_t$ and $v_t$, whereas the coordinates in the start system are $u_s$ and $v_s$, with both systems being observed. For k points, the observation vector y and the design matrix A can be written as

$$y = \begin{bmatrix} u_{t_1} \\ v_{t_1} \\ \vdots \\ u_{t_k} \\ v_{t_k} \end{bmatrix}, \qquad A = \begin{bmatrix} u_{s_1} & v_{s_1} & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & u_{s_1} & v_{s_1} & 1 \\ \vdots & & & & & \vdots \\ u_{s_k} & v_{s_k} & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & u_{s_k} & v_{s_k} & 1 \end{bmatrix} \quad (28)$$

To estimate the unknown parameter vector $\hat{x} = [a_1,\; b_1,\; c_1,\; a_2,\; b_2,\; c_2]^T$, the coordinates of a series of points (at least three points) in both the start and target systems need to be observed. Therefore, the coordinates in both the start and transformed systems are contaminated by random errors, and thus an EIV model is involved. In fact, the six transformation parameters of the 2D affine transformation can be estimated in an EIV model using WTLS.

The data for the planar linear affine transformation (six-parameter transformation) were simulated in a manner similar to the approach applied by Amiri-Simkooei and Jazaeri (2012). The variance-covariance matrix of the random error vector $e_A = \mathrm{vec}(E_A)$ can be obtained directly by applying the error propagation law to the columns of the coefficient matrix A.
This then gives

$$Q_A = \begin{bmatrix} Q_{11} & 0 & 0 & Q_{14} & 0 & 0 \\ 0 & Q_{22} & 0 & 0 & Q_{25} & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ Q_{41} & 0 & 0 & Q_{44} & 0 & 0 \\ 0 & Q_{52} & 0 & 0 & Q_{55} & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \end{bmatrix} \quad (29)$$

where

$$Q_{11} = Q_{22} = \sigma_s^2 I_k, \quad Q_{44} = Q_{55} = \sigma_s^2 I_k, \quad Q_{14} = Q_{25} = \sigma_s^2 I_k, \quad Q_{41} = Q_{52} = \sigma_s^2 I_k \quad (30)$$

For this structure, it is assumed that the measurement noise in both the start and target systems is independent Gaussian white noise with variances $\sigma_s^2$ and $\sigma_t^2$, respectively.

The aim was to simulate the coordinates of a series of points in both the start and target systems. For this purpose, 20 points in the start system (i.e., k = 20) were transformed into the target system using the parameters $a_1 = 2$, $b_1 = 1$, $c_1 = 0$, $a_2 = 1$, $b_2 = 2$, and $c_2 = 0$. The errorless coordinates in the start and target systems are shown in Fig. 4. The coordinates of the points in the start and target systems were then corrupted by white Gaussian noise with variances $\sigma_s^2 = 0.01$ and $\sigma_t^2 = 0.02$, respectively. The same termination threshold as in the previous case study, $\varepsilon = 10^{-12}$, was used. This process was then repeated over 1,000,000 independent runs. For each of the simulated data sets, the transformation parameters were estimated using WTLS. The averages of the estimated parameters over the 1,000,000 runs were in close agreement with the true values. To estimate the covariance matrix of the estimated parameters $\hat{x}$ via the simulation process, the estimates of each of the 1,000,000 independent runs were used.
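The following MATLAB sketch builds y, A, $Q_y$, and $Q_A$ for the simulation just described, under the row ordering of Eq. (28). The placement matrices S and T encode one interpretation of Eqs. (29)-(30), namely that columns 1 and 4 (and likewise 2 and 5) of A carry the same start-system errors and are therefore fully correlated; the start coordinates themselves are illustrative placeholders.

```matlab
% Sketch of the 2D affine EIV setup of Eqs. (27)-(30).
k  = 20;  sigma2_s = 0.01;  sigma2_t = 0.02;
us = 100 * rand(k, 1);  vs = 100 * rand(k, 1);   % errorless start coords
x0 = [2; 1; 0; 1; 2; 0];                         % a1 b1 c1 a2 b2 c2
ut = x0(1)*us + x0(2)*vs + x0(3);                % errorless target coords
vt = x0(4)*us + x0(5)*vs + x0(6);

y = reshape([ut'; vt'], 2*k, 1);                 % [u_t1; v_t1; ...]
A = zeros(2*k, 6);
A(1:2:end, 1:3) = [us, vs, ones(k, 1)];          % rows for u_t
A(2:2:end, 4:6) = [us, vs, ones(k, 1)];          % rows for v_t
Qy = sigma2_t * eye(2*k);                        % white noise, target system

m = 2*k;
S = zeros(m, k);  S(1:2:end, :) = eye(k);        % errors into odd rows
T = zeros(m, k);  T(2:2:end, :) = eye(k);        % errors into even rows
QA = zeros(6*m);
c  = @(j) (j-1)*m + (1:m);                       % vec(E_A) range of column j
QA(c(1), c(1)) = sigma2_s * (S*S');   QA(c(2), c(2)) = sigma2_s * (S*S');
QA(c(4), c(4)) = sigma2_s * (T*T');   QA(c(5), c(5)) = sigma2_s * (T*T');
QA(c(1), c(4)) = sigma2_s * (S*T');   QA(c(4), c(1)) = sigma2_s * (T*S');
QA(c(2), c(5)) = sigma2_s * (S*T');   QA(c(5), c(2)) = sigma2_s * (T*S');
```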

For each run, the affine transformation parameters were estimated using the WTLS algorithm. With the actual parameters (i.e., $a_1 = 2$, $b_1 = 1$, $c_1 = 0$, $a_2 = 1$, $b_2 = 2$, and $c_2 = 0$) and their estimates available for each run, the 1,000,000 × 6 residual matrix of the estimates could be obtained as $V = [\hat{a}_1^{(j)} - a_1,\; \hat{b}_1^{(j)} - b_1,\; \hat{c}_1^{(j)} - c_1,\; \hat{a}_2^{(j)} - a_2,\; \hat{b}_2^{(j)} - b_2,\; \hat{c}_2^{(j)} - c_2]$, where j runs from 1 to 1,000,000. The covariance matrix of the estimated parameters $\hat{x}$ could then be estimated as $R_{\hat{x}} = V^T V/1{,}000{,}000$ (third strategy); the corresponding correlation matrix consists of the standard deviations (diagonal entries) of the estimated affine transformation parameters along with their correlation coefficients (off-diagonal entries). These standard deviations and correlation coefficients could also be determined, averaged over the 1,000,000 runs, using the other two strategies: (1) applying the error propagation law to the WTLS formulations in Table 1 (second strategy); and (2) using the matrix $R_{\hat{x}}$ explained above (third strategy). The results of the second and third strategies were then compared with those of the first strategy.

Fig. 4. Errorless coordinates of the 20 points in the start and target systems

The mean covariance matrix $Q_{\hat{x}}$ could be estimated by averaging the covariance matrices obtained over the 1,000,000 independent runs [using Eq. (8)]. As in the previous cases, with the mean covariance matrix $Q_{\hat{x}}$ available, the mean correlation matrix of the WTLS estimators, i.e., $R_{\hat{x}}$, could then be obtained. The standard deviations of the estimated affine transformation parameters and the correlation coefficients of the estimates were considered as the reference values. The correlation matrix computed using the direct formula [i.e., Eq. (8) of the first strategy] is given as the symmetric matrix $R_{\hat{x}}$ of Eq. (31).

The results are provided in Table 6. The standard deviations of the estimated affine parameters and their correlation coefficients using the first strategy [i.e., Eq. (8)] were considered as the reference, all set to 100% (second column). The results of the other two strategies were compared with this column. For this purpose, the ratios of the standard deviations and the correlation coefficients obtained from the second strategy (third column) and the third strategy (fourth column) were computed. The precision of the estimated standard deviations and correlation coefficients for the third strategy is also provided in Table 6 [Eqs. (24)-(26)]. It was noted that, according to the matrix $R_{\hat{x}}$ of Eq. (31), some correlation coefficients were nearly zero; these were excluded from the table.

A few observations on the results of this application are highlighted. When applying the error propagation law to the three formulations in Table 1, the results of the first formulation are presented; the results of the second and third formulations were identical to those of the first formulation, and the repetition is avoided. However, the convergence rate of the third formulation was lower than that of the first and second formulations. The average number of iterations of the third formulation was approximately 15, compared with 4 and 4 for the first and second formulations, respectively.

Table 6. Standard Deviations and Correlation Coefficients of the Estimated Parameters (rows: the standard deviations $\sigma_{\hat{a}_1}$, $\sigma_{\hat{b}_1}$, $\sigma_{\hat{c}_1}$, $\sigma_{\hat{a}_2}$, $\sigma_{\hat{b}_2}$, $\sigma_{\hat{c}_2}$ and the correlation coefficients of the parameter pairs; columns: $Q_{\hat{x}}$ [Eq. (8)] (%)ᵃ; $R_{\hat{x}}^{EP}$ to $Q_{\hat{x}}$ (%)ᵇ; $R_{\hat{x}}$ to $Q_{\hat{x}}$ (%)ᶜ)
ᵃEstimated standard deviations and correlation coefficients using the direct formula [i.e., Eq. (8)].
ᵇRatio of the standard deviations and correlation coefficients obtained by applying the error propagation law to WTLS Formulation 1 of Table 1 to the reference values.
ᶜRatio of the standard deviations and correlation coefficients obtained from the simulation procedure to the reference values.

Furthermore, the results of the three strategies were nearly identical. In particular, the results of the second strategy closely followed those of the first strategy. The high precision of geodetic observations allows Eq. (8) to be used as a good approximation of the covariance matrix of the estimates in an EIV model.

Concluding Remarks

In estimation theory, the covariance matrix of the estimated parameters in general, and their precision in particular, are important issues. In a linear model of observation equations, the covariance matrix of the estimates is simply obtained by inverting the normal matrix of the observation equations. Theoretically, such a simple covariance matrix cannot be obtained directly for nonlinear problems, an EIV model for instance. Attempts have recently been made to approximate the covariance matrix of the EIV model parameters. Two recent works addressing this issue are the studies by Xu et al. (2012) and Amiri-Simkooei and Jazaeri (2012). In this paper, it was noted that the estimate $\hat{x}$ is a nonlinear function of itself, in addition to the observables. This indicates that a correct covariance matrix should be sought by applying the error propagation law to nonlinear functions through iterations.

This paper dealt with the estimation of the covariance matrix of the WTLS estimates. Three strategies were employed: (1) computing the inverse of the normal matrix of the observation equations (first strategy); (2) applying the error propagation law to the existing WTLS estimators (second strategy); and (3) using the residual matrix of the WTLS estimates of simulated data (third strategy). The authors aimed to investigate whether the covariance matrix of the estimated parameters can be precisely approximated by the inverse of the normal matrix of the observation equations. This turned out to be the case. The efficacy of the direct formula, i.e., the inverse of the normal matrix of the observation equations, for estimating the covariance matrix of the WTLS estimates was investigated using a few experimental and simulated data sets. The results indicated that Eq. (8) agrees well with the results obtained by applying the error propagation law. This is mainly because the precision of geodetic observations is considerably high, resulting in a negligible effect of the randomness of $\hat{x}$ on its covariance matrix. Eq. (8) can thus approximate the covariance matrix of the WTLS estimates in an EIV model. This is in line with all nonlinear problems in which the covariance matrix of $\hat{x}$ is approximated by that of its linearized counterpart.

Supplemental Data

The file CovMat_WTLS.txt is available online in the ASCE Library (www.ascelibrary.org).

Acknowledgments

The authors acknowledge the useful comments of the reviewers, which improved the presentation and clarity of this paper.
References

Akyilmaz, O. (2007). "Total least squares solution of coordinate transformation." Surv. Rev., 39(303).
Amiri-Simkooei, A. R. (2009). "Noise in multivariate GPS position time series." J. Geod., 83(2).
Amiri-Simkooei, A. R. (2013). "Application of least squares variance component estimation to errors-in-variables models." J. Geod., 87(10).
Amiri-Simkooei, A. R., and Jazaeri, S. (2012). "Weighted total least squares formulated by standard least squares theory." J. Geod. Sci., 2(2).
Amiri-Simkooei, A. R., and Jazaeri, S. (2013). "Data-snooping procedure applied to errors-in-variables models." Stud. Geophys. Geod., 57.
Amiri-Simkooei, A. R., Mortazavi, S., and Asgari, J. (2015). "Weighted total least squares applied to mixed observation model." Surv. Rev., in press.
Amiri-Simkooei, A. R., Zangeneh-Nejad, F., Asgari, J., and Jazaeri, S. (2014). "Estimation of straight line parameters with fully correlated coordinates." Measurement, 48.
Davis, T. G. (1999). "Total least-squares spiral curve fitting." J. Surv. Eng., 10.1061/(ASCE)0733-9453(1999)125:4(159).
Fang, X. (2011). "Weighted total least squares solutions for applications in geodesy." Ph.D. dissertation, Publication No. 294, Dept. of Geodesy and Geoinformatics, Leibniz Univ., Hannover, Germany.
Fang, X. (2013). "Weighted total least squares: Necessary and sufficient conditions, fixed and random parameters." J. Geod., 87(8).
Felus, Y. A. (2004). "Application of total least squares for spatial point process analysis." J. Surv. Eng., 10.1061/(ASCE)0733-9453(2004)130:3(126).
Felus, Y., and Schaffrin, B. (2005). "Performing similarity transformations using the errors-in-variables model." Proc., ASPRS 2005 Annual Conf.: Geospatial Goes Global: From Your Neighborhood to the Whole Planet, American Society for Photogrammetry and Remote Sensing, Bethesda, MD, <http://www.asprs.org/a/conference-archive/baltimore2005/> (Nov. 5, 2015).

Golub, G. H., and van Loan, C. (1980). "An analysis of the total least-squares problem." SIAM J. Numer. Anal., 17(6).
Jazaeri, S., Amiri-Simkooei, A. R., and Sharifi, M. A. (2014). "Iterative algorithm for weighted total least squares adjustment." Surv. Rev., 46(334), 19-27.
MATLAB [Computer software]. MathWorks, Natick, MA.
Neitzel, F. (2010). "Generalization of total least-squares on example of unweighted and weighted 2D similarity transformation." J. Geod., 84(12).
Neri, F., Saitta, G., and Chiofalo, S. (1989). "An accurate and straightforward approach to line regression analysis of error-affected experimental data." J. Phys. E: Sci. Instrum., 22(4).
Schaffrin, B., and Felus, Y. (2009). "An algorithmic approach to the total least-squares problem with linear and quadratic constraints." Stud. Geophys. Geod., 53(1), 1-16.
Schaffrin, B., and Wieser, A. (2008). "On weighted total least-squares adjustment for linear regression." J. Geod., 82(7).
Schaffrin, B., and Wieser, A. (2009). "Empirical affine reference frame transformations by weighted multivariate TLS adjustment." International Association of Geodesy Symposia: Geodetic Reference Frames, Vol. 134, H. Drewes, ed., Springer, Berlin.
Schaffrin, B., and Wieser, A. (2011). "Total least-squares adjustment of condition equations." Stud. Geophys. Geod., 55.
Shen, Y., Li, B., and Chen, Y. (2011). "An iterative solution of weighted total least-squares adjustment." J. Geod., 85(4), 229-238.
Shi, Y., Xu, P. L., Liu, J., and Shi, C. (2015). "Alternative formulae for parameter estimation in partial errors-in-variables models." J. Geod., 89(1).
Teunissen, P. J. G. (1985). "The geometry of geodetic inverse linear mapping and non-linear adjustment." Publications on Geodesy, New Series, Vol. 8, No. 1, Netherlands Geodetic Commission, Delft, the Netherlands.
Teunissen, P. J. G. (1988). "The non-linear 2D symmetric Helmert transformation: An exact non-linear least-squares solution." Bull. Geod., 62(1), 1-15.
Teunissen, P. J. G. (1990). "Nonlinear least squares." Manus. Geod., 15(3).
Teunissen, P. J. G. (2000). Adjustment theory: An introduction, Series on Mathematical Geodesy and Positioning, Delft University Press, Delft, the Netherlands.
Teunissen, P. J. G., and Amiri-Simkooei, A. R. (2008). "Least-squares variance component estimation." J. Geod., 82(2), 65-82.
Tong, X., Jin, Y., and Li, L. (2011). "An improved weighted total least squares method with applications in linear fitting and coordinate transformation." J. Surv. Eng., 10.1061/(ASCE)SU.1943-5428.
Tong, X., Jin, Y., Zhang, S., Li, L., and Liu, S. (2015). "Bias-corrected weighted total least-squares adjustment of condition equations." J. Surv. Eng., 10.1061/(ASCE)SU.1943-5428.
Van Huffel, S., and Vandewalle, J. (1991). The total least-squares problem: Computational aspects and analysis, Frontiers in Applied Mathematics, Vol. 9, Society for Industrial and Applied Mathematics, Philadelphia.
Xu, P. L., and Liu, J. (2014). "Variance components in errors-in-variables models: Estimability, stability and bias analysis." J. Geod., 88(8).
Xu, P. L., Liu, J., and Shi, C. (2012). "Total least squares adjustment in partial errors-in-variables models: Algorithm and statistical analysis." J. Geod., 86(8), 661-675.
Xu, P. L., Liu, J., Zeng, W. X., and Shen, Y. Z. (2014). "Effects of errors-in-variables on weighted least squares estimation." J. Geod., 88(7).


More information

Numerical Analysis: Solutions of System of. Linear Equation. Natasha S. Sharma, PhD

Numerical Analysis: Solutions of System of. Linear Equation. Natasha S. Sharma, PhD Mathematical Question we are interested in answering numerically How to solve the following linear system for x Ax = b? where A is an n n invertible matrix and b is vector of length n. Notation: x denote

More information

On the Solution of Constrained and Weighted Linear Least Squares Problems

On the Solution of Constrained and Weighted Linear Least Squares Problems International Mathematical Forum, 1, 2006, no. 22, 1067-1076 On the Solution of Constrained and Weighted Linear Least Squares Problems Mohammedi R. Abdel-Aziz 1 Department of Mathematics and Computer Science

More information

Appendix A Solving Linear Matrix Inequality (LMI) Problems

Appendix A Solving Linear Matrix Inequality (LMI) Problems Appendix A Solving Linear Matrix Inequality (LMI) Problems In this section, we present a brief introduction about linear matrix inequalities which have been used extensively to solve the FDI problems described

More information

Linear and Nonlinear Models

Linear and Nonlinear Models Erik W. Grafarend Linear and Nonlinear Models Fixed Effects, Random Effects, and Mixed Models magic triangle 1 fixed effects 2 random effects 3 crror-in-variables model W DE G Walter de Gruyter Berlin

More information

GENERAL SOLUTION OF FULL ROW RANK LINEAR SYSTEMS OF EQUATIONS VIA A NEW EXTENDED ABS MODEL

GENERAL SOLUTION OF FULL ROW RANK LINEAR SYSTEMS OF EQUATIONS VIA A NEW EXTENDED ABS MODEL U.P.B. Sci. Bull., Series A, Vol. 79, Iss. 4, 2017 ISSN 1223-7027 GENERAL SOLUTION OF FULL ROW RANK LINEAR SYSTEMS OF EQUATIONS VIA A NEW EXTENDED ABS MODEL Leila Asadbeigi 1, Mahmoud Paripour 2, Esmaeil

More information

A method for computing quadratic Brunovsky forms

A method for computing quadratic Brunovsky forms Electronic Journal of Linear Algebra Volume 13 Volume 13 (25) Article 3 25 A method for computing quadratic Brunovsky forms Wen-Long Jin wjin@uciedu Follow this and additional works at: http://repositoryuwyoedu/ela

More information

ON A HOMOTOPY BASED METHOD FOR SOLVING SYSTEMS OF LINEAR EQUATIONS

ON A HOMOTOPY BASED METHOD FOR SOLVING SYSTEMS OF LINEAR EQUATIONS TWMS J. Pure Appl. Math., V.6, N.1, 2015, pp.15-26 ON A HOMOTOPY BASED METHOD FOR SOLVING SYSTEMS OF LINEAR EQUATIONS J. SAEIDIAN 1, E. BABOLIAN 1, A. AZIZI 2 Abstract. A new iterative method is proposed

More information

Inverses. Stephen Boyd. EE103 Stanford University. October 28, 2017

Inverses. Stephen Boyd. EE103 Stanford University. October 28, 2017 Inverses Stephen Boyd EE103 Stanford University October 28, 2017 Outline Left and right inverses Inverse Solving linear equations Examples Pseudo-inverse Left and right inverses 2 Left inverses a number

More information

SMO vs PDCO for SVM: Sequential Minimal Optimization vs Primal-Dual interior method for Convex Objectives for Support Vector Machines

SMO vs PDCO for SVM: Sequential Minimal Optimization vs Primal-Dual interior method for Convex Objectives for Support Vector Machines vs for SVM: Sequential Minimal Optimization vs Primal-Dual interior method for Convex Objectives for Support Vector Machines Ding Ma Michael Saunders Working paper, January 5 Introduction In machine learning,

More information

Lecture Notes 1: Vector spaces

Lecture Notes 1: Vector spaces Optimization-based data analysis Fall 2017 Lecture Notes 1: Vector spaces In this chapter we review certain basic concepts of linear algebra, highlighting their application to signal processing. 1 Vector

More information

Key Algebraic Results in Linear Regression

Key Algebraic Results in Linear Regression Key Algebraic Results in Linear Regression James H. Steiger Department of Psychology and Human Development Vanderbilt University James H. Steiger (Vanderbilt University) 1 / 30 Key Algebraic Results in

More information

Autocorrelation Functions in GPS Data Processing: Modeling Aspects

Autocorrelation Functions in GPS Data Processing: Modeling Aspects Autocorrelation Functions in GPS Data Processing: Modeling Aspects Kai Borre, Aalborg University Gilbert Strang, Massachusetts Institute of Technology Consider a process that is actually random walk but

More information

c 1999 Society for Industrial and Applied Mathematics

c 1999 Society for Industrial and Applied Mathematics SIAM J. MATRIX ANAL. APPL. Vol. 21, No. 1, pp. 185 194 c 1999 Society for Industrial and Applied Mathematics TIKHONOV REGULARIZATION AND TOTAL LEAST SQUARES GENE H. GOLUB, PER CHRISTIAN HANSEN, AND DIANNE

More information

Solving Regularized Total Least Squares Problems

Solving Regularized Total Least Squares Problems Solving Regularized Total Least Squares Problems Heinrich Voss voss@tu-harburg.de Hamburg University of Technology Institute of Numerical Simulation Joint work with Jörg Lampe TUHH Heinrich Voss Total

More information

Applied Mathematics 205. Unit II: Numerical Linear Algebra. Lecturer: Dr. David Knezevic

Applied Mathematics 205. Unit II: Numerical Linear Algebra. Lecturer: Dr. David Knezevic Applied Mathematics 205 Unit II: Numerical Linear Algebra Lecturer: Dr. David Knezevic Unit II: Numerical Linear Algebra Chapter II.3: QR Factorization, SVD 2 / 66 QR Factorization 3 / 66 QR Factorization

More information

Review of Vectors and Matrices

Review of Vectors and Matrices A P P E N D I X D Review of Vectors and Matrices D. VECTORS D.. Definition of a Vector Let p, p, Á, p n be any n real numbers and P an ordered set of these real numbers that is, P = p, p, Á, p n Then P

More information

On Weighted Structured Total Least Squares

On Weighted Structured Total Least Squares On Weighted Structured Total Least Squares Ivan Markovsky and Sabine Van Huffel KU Leuven, ESAT-SCD, Kasteelpark Arenberg 10, B-3001 Leuven, Belgium {ivanmarkovsky, sabinevanhuffel}@esatkuleuvenacbe wwwesatkuleuvenacbe/~imarkovs

More information

OR MSc Maths Revision Course

OR MSc Maths Revision Course OR MSc Maths Revision Course Tom Byrne School of Mathematics University of Edinburgh t.m.byrne@sms.ed.ac.uk 15 September 2017 General Information Today JCMB Lecture Theatre A, 09:30-12:30 Mathematics revision

More information

Math 314 Lecture Notes Section 006 Fall 2006

Math 314 Lecture Notes Section 006 Fall 2006 Math 314 Lecture Notes Section 006 Fall 2006 CHAPTER 1 Linear Systems of Equations First Day: (1) Welcome (2) Pass out information sheets (3) Take roll (4) Open up home page and have students do same

More information

AMS526: Numerical Analysis I (Numerical Linear Algebra for Computational and Data Sciences)

AMS526: Numerical Analysis I (Numerical Linear Algebra for Computational and Data Sciences) AMS526: Numerical Analysis I (Numerical Linear Algebra for Computational and Data Sciences) Lecture 1: Course Overview; Matrix Multiplication Xiangmin Jiao Stony Brook University Xiangmin Jiao Numerical

More information

Linear Algebra Review

Linear Algebra Review Linear Algebra Review Yang Feng http://www.stat.columbia.edu/~yangfeng Yang Feng (Columbia University) Linear Algebra Review 1 / 45 Definition of Matrix Rectangular array of elements arranged in rows and

More information

Approximation of ambiguity covariance matrix for integer de-correlation procedure in single-epoch GNSS positioning

Approximation of ambiguity covariance matrix for integer de-correlation procedure in single-epoch GNSS positioning he 9 th International Conference ENVIRONMENAL ENGINEERING 22 23 May 24, Vilnius, Lithuania SELECED PAPERS eissn 229-792 / eisbn 978-69-457-64-9 Available online at http://enviro.vgtu.lt Section: echnologies

More information

MATH Topics in Applied Mathematics Lecture 12: Evaluation of determinants. Cross product.

MATH Topics in Applied Mathematics Lecture 12: Evaluation of determinants. Cross product. MATH 311-504 Topics in Applied Mathematics Lecture 12: Evaluation of determinants. Cross product. Determinant is a scalar assigned to each square matrix. Notation. The determinant of a matrix A = (a ij

More information

Distance-based test for uncertainty hypothesis testing

Distance-based test for uncertainty hypothesis testing Sampath and Ramya Journal of Uncertainty Analysis and Applications 03, :4 RESEARCH Open Access Distance-based test for uncertainty hypothesis testing Sundaram Sampath * and Balu Ramya * Correspondence:

More information

PROGRAMMING UNDER PROBABILISTIC CONSTRAINTS WITH A RANDOM TECHNOLOGY MATRIX

PROGRAMMING UNDER PROBABILISTIC CONSTRAINTS WITH A RANDOM TECHNOLOGY MATRIX Math. Operationsforsch. u. Statist. 5 974, Heft 2. pp. 09 6. PROGRAMMING UNDER PROBABILISTIC CONSTRAINTS WITH A RANDOM TECHNOLOGY MATRIX András Prékopa Technological University of Budapest and Computer

More information

Math 5630: Iterative Methods for Systems of Equations Hung Phan, UMass Lowell March 22, 2018

Math 5630: Iterative Methods for Systems of Equations Hung Phan, UMass Lowell March 22, 2018 1 Linear Systems Math 5630: Iterative Methods for Systems of Equations Hung Phan, UMass Lowell March, 018 Consider the system 4x y + z = 7 4x 8y + z = 1 x + y + 5z = 15. We then obtain x = 1 4 (7 + y z)

More information

Preconditioning Techniques Analysis for CG Method

Preconditioning Techniques Analysis for CG Method Preconditioning Techniques Analysis for CG Method Huaguang Song Department of Computer Science University of California, Davis hso@ucdavis.edu Abstract Matrix computation issue for solve linear system

More information

The Nearest Doubly Stochastic Matrix to a Real Matrix with the same First Moment

The Nearest Doubly Stochastic Matrix to a Real Matrix with the same First Moment he Nearest Doubly Stochastic Matrix to a Real Matrix with the same First Moment William Glunt 1, homas L. Hayden 2 and Robert Reams 2 1 Department of Mathematics and Computer Science, Austin Peay State

More information

TOPIC III LINEAR ALGEBRA

TOPIC III LINEAR ALGEBRA [1] Linear Equations TOPIC III LINEAR ALGEBRA (1) Case of Two Endogenous Variables 1) Linear vs. Nonlinear Equations Linear equation: ax + by = c, where a, b and c are constants. 2 Nonlinear equation:

More information

Conjugate Gradient (CG) Method

Conjugate Gradient (CG) Method Conjugate Gradient (CG) Method by K. Ozawa 1 Introduction In the series of this lecture, I will introduce the conjugate gradient method, which solves efficiently large scale sparse linear simultaneous

More information

LECTURE 2 LINEAR REGRESSION MODEL AND OLS

LECTURE 2 LINEAR REGRESSION MODEL AND OLS SEPTEMBER 29, 2014 LECTURE 2 LINEAR REGRESSION MODEL AND OLS Definitions A common question in econometrics is to study the effect of one group of variables X i, usually called the regressors, on another

More information

SIGMA-F: Variances of GPS Observations Determined by a Fuzzy System

SIGMA-F: Variances of GPS Observations Determined by a Fuzzy System SIGMA-F: Variances of GPS Observations Determined by a Fuzzy System A. Wieser and F.K. Brunner Engineering Surveying and Metrology, Graz University of Technology, Steyrergasse 3, A-8 Graz, Austria Keywords.

More information

MA 575 Linear Models: Cedric E. Ginestet, Boston University Regularization: Ridge Regression and Lasso Week 14, Lecture 2

MA 575 Linear Models: Cedric E. Ginestet, Boston University Regularization: Ridge Regression and Lasso Week 14, Lecture 2 MA 575 Linear Models: Cedric E. Ginestet, Boston University Regularization: Ridge Regression and Lasso Week 14, Lecture 2 1 Ridge Regression Ridge regression and the Lasso are two forms of regularized

More information

Lemma 8: Suppose the N by N matrix A has the following block upper triangular form:

Lemma 8: Suppose the N by N matrix A has the following block upper triangular form: 17 4 Determinants and the Inverse of a Square Matrix In this section, we are going to use our knowledge of determinants and their properties to derive an explicit formula for the inverse of a square matrix

More information

AA 242B / ME 242B: Mechanical Vibrations (Spring 2016)

AA 242B / ME 242B: Mechanical Vibrations (Spring 2016) AA 242B / ME 242B: Mechanical Vibrations (Spring 206) Solution of Homework #3 Control Tab Figure : Schematic for the control tab. Inadequacy of a static-test A static-test for measuring θ would ideally

More information

A sensitivity result for quadratic semidefinite programs with an application to a sequential quadratic semidefinite programming algorithm

A sensitivity result for quadratic semidefinite programs with an application to a sequential quadratic semidefinite programming algorithm Volume 31, N. 1, pp. 205 218, 2012 Copyright 2012 SBMAC ISSN 0101-8205 / ISSN 1807-0302 (Online) www.scielo.br/cam A sensitivity result for quadratic semidefinite programs with an application to a sequential

More information

Review of matrices. Let m, n IN. A rectangle of numbers written like A =

Review of matrices. Let m, n IN. A rectangle of numbers written like A = Review of matrices Let m, n IN. A rectangle of numbers written like a 11 a 12... a 1n a 21 a 22... a 2n A =...... a m1 a m2... a mn where each a ij IR is called a matrix with m rows and n columns or an

More information

CME 302: NUMERICAL LINEAR ALGEBRA FALL 2005/06 LECTURE 6

CME 302: NUMERICAL LINEAR ALGEBRA FALL 2005/06 LECTURE 6 CME 302: NUMERICAL LINEAR ALGEBRA FALL 2005/06 LECTURE 6 GENE H GOLUB Issues with Floating-point Arithmetic We conclude our discussion of floating-point arithmetic by highlighting two issues that frequently

More information

A Method for Constructing Diagonally Dominant Preconditioners based on Jacobi Rotations

A Method for Constructing Diagonally Dominant Preconditioners based on Jacobi Rotations A Method for Constructing Diagonally Dominant Preconditioners based on Jacobi Rotations Jin Yun Yuan Plamen Y. Yalamov Abstract A method is presented to make a given matrix strictly diagonally dominant

More information

Expressions for the covariance matrix of covariance data

Expressions for the covariance matrix of covariance data Expressions for the covariance matrix of covariance data Torsten Söderström Division of Systems and Control, Department of Information Technology, Uppsala University, P O Box 337, SE-7505 Uppsala, Sweden

More information

Improved Newton s method with exact line searches to solve quadratic matrix equation

Improved Newton s method with exact line searches to solve quadratic matrix equation Journal of Computational and Applied Mathematics 222 (2008) 645 654 wwwelseviercom/locate/cam Improved Newton s method with exact line searches to solve quadratic matrix equation Jian-hui Long, Xi-yan

More information

MIXED MODELS THE GENERAL MIXED MODEL

MIXED MODELS THE GENERAL MIXED MODEL MIXED MODELS This chapter introduces best linear unbiased prediction (BLUP), a general method for predicting random effects, while Chapter 27 is concerned with the estimation of variances by restricted

More information

From Stationary Methods to Krylov Subspaces

From Stationary Methods to Krylov Subspaces Week 6: Wednesday, Mar 7 From Stationary Methods to Krylov Subspaces Last time, we discussed stationary methods for the iterative solution of linear systems of equations, which can generally be written

More information

Mark your answers ON THE EXAM ITSELF. If you are not sure of your answer you may wish to provide a brief explanation.

Mark your answers ON THE EXAM ITSELF. If you are not sure of your answer you may wish to provide a brief explanation. CS 189 Spring 2015 Introduction to Machine Learning Midterm You have 80 minutes for the exam. The exam is closed book, closed notes except your one-page crib sheet. No calculators or electronic items.

More information

arxiv: v1 [math.na] 21 Oct 2014

arxiv: v1 [math.na] 21 Oct 2014 Computing Symmetric Positive Definite Solutions of Three Types of Nonlinear Matrix Equations arxiv:1410.5559v1 [math.na] 21 Oct 2014 Negin Bagherpour a, Nezam Mahdavi-Amiri a, a Department of Mathematical

More information

Least Squares with Examples in Signal Processing 1. 2 Overdetermined equations. 1 Notation. The sum of squares of x is denoted by x 2 2, i.e.

Least Squares with Examples in Signal Processing 1. 2 Overdetermined equations. 1 Notation. The sum of squares of x is denoted by x 2 2, i.e. Least Squares with Eamples in Signal Processing Ivan Selesnick March 7, 3 NYU-Poly These notes address (approimate) solutions to linear equations by least squares We deal with the easy case wherein the

More information

Subset selection for matrices

Subset selection for matrices Linear Algebra its Applications 422 (2007) 349 359 www.elsevier.com/locate/laa Subset selection for matrices F.R. de Hoog a, R.M.M. Mattheij b, a CSIRO Mathematical Information Sciences, P.O. ox 664, Canberra,

More information

Topic 7 - Matrix Approach to Simple Linear Regression. Outline. Matrix. Matrix. Review of Matrices. Regression model in matrix form

Topic 7 - Matrix Approach to Simple Linear Regression. Outline. Matrix. Matrix. Review of Matrices. Regression model in matrix form Topic 7 - Matrix Approach to Simple Linear Regression Review of Matrices Outline Regression model in matrix form - Fall 03 Calculations using matrices Topic 7 Matrix Collection of elements arranged in

More information

A STRATEGY FOR IDENTIFICATION OF BUILDING STRUCTURES UNDER BASE EXCITATIONS

A STRATEGY FOR IDENTIFICATION OF BUILDING STRUCTURES UNDER BASE EXCITATIONS A STRATEGY FOR IDENTIFICATION OF BUILDING STRUCTURES UNDER BASE EXCITATIONS G. Amato and L. Cavaleri PhD Student, Dipartimento di Ingegneria Strutturale e Geotecnica,University of Palermo, Italy. Professor,

More information

Linear Algebra. The analysis of many models in the social sciences reduces to the study of systems of equations.

Linear Algebra. The analysis of many models in the social sciences reduces to the study of systems of equations. POLI 7 - Mathematical and Statistical Foundations Prof S Saiegh Fall Lecture Notes - Class 4 October 4, Linear Algebra The analysis of many models in the social sciences reduces to the study of systems

More information

Solving Linear Systems of Equations

Solving Linear Systems of Equations November 6, 2013 Introduction The type of problems that we have to solve are: Solve the system: A x = B, where a 11 a 1N a 12 a 2N A =.. a 1N a NN x = x 1 x 2. x N B = b 1 b 2. b N To find A 1 (inverse

More information

CS 323: Numerical Analysis and Computing

CS 323: Numerical Analysis and Computing CS 323: Numerical Analysis and Computing MIDTERM #1 Instructions: This is an open notes exam, i.e., you are allowed to consult any textbook, your class notes, homeworks, or any of the handouts from us.

More information

Online Appendix for Sterba, S.K. (2013). Understanding linkages among mixture models. Multivariate Behavioral Research, 48,

Online Appendix for Sterba, S.K. (2013). Understanding linkages among mixture models. Multivariate Behavioral Research, 48, Online Appendix for, S.K. (2013). Understanding linkages among mixture models. Multivariate Behavioral Research, 48, 775-815. Table of Contents. I. Full presentation of parallel-process groups-based trajectory

More information

Lecture Notes Part 2: Matrix Algebra

Lecture Notes Part 2: Matrix Algebra 17.874 Lecture Notes Part 2: Matrix Algebra 2. Matrix Algebra 2.1. Introduction: Design Matrices and Data Matrices Matrices are arrays of numbers. We encounter them in statistics in at least three di erent

More information

5.3 LINEARIZATION METHOD. Linearization Method for a Nonlinear Estimator

5.3 LINEARIZATION METHOD. Linearization Method for a Nonlinear Estimator Linearization Method 141 properties that cover the most common types of complex sampling designs nonlinear estimators Approximative variance estimators can be used for variance estimation of a nonlinear

More information

Lecture 2 INF-MAT : , LU, symmetric LU, Positve (semi)definite, Cholesky, Semi-Cholesky

Lecture 2 INF-MAT : , LU, symmetric LU, Positve (semi)definite, Cholesky, Semi-Cholesky Lecture 2 INF-MAT 4350 2009: 7.1-7.6, LU, symmetric LU, Positve (semi)definite, Cholesky, Semi-Cholesky Tom Lyche and Michael Floater Centre of Mathematics for Applications, Department of Informatics,

More information

3 (Maths) Linear Algebra

3 (Maths) Linear Algebra 3 (Maths) Linear Algebra References: Simon and Blume, chapters 6 to 11, 16 and 23; Pemberton and Rau, chapters 11 to 13 and 25; Sundaram, sections 1.3 and 1.5. The methods and concepts of linear algebra

More information

8.1 Concentration inequality for Gaussian random matrix (cont d)

8.1 Concentration inequality for Gaussian random matrix (cont d) MGMT 69: Topics in High-dimensional Data Analysis Falll 26 Lecture 8: Spectral clustering and Laplacian matrices Lecturer: Jiaming Xu Scribe: Hyun-Ju Oh and Taotao He, October 4, 26 Outline Concentration

More information