Iteratively reweighted total least squares: a robust estimation in errors-in-variables models


V. Mahboub*¹, A. R. Amiri-Simkooei²,³ and M. A. Sharifi⁴

In this contribution, the iteratively reweighted total least squares (IRTLS) method is introduced as a robust estimation technique in errors-in-variables (EIV) models. The method is a follow-up to the iteratively reweighted least squares (IRLS) that is applied to the Gauss-Markov and/or Gauss-Helmert models when the observations are corrupted by gross errors (outliers). In the relatively new class of EIV models, IRLS and the other robust estimation methods known in the geodetic literature cannot be applied directly, because both the vector of observations and the coefficient matrix of the EIV model may be falsified by gross errors. IRTLS can then be a good alternative as a robust estimation method in EIV models. The method is based on an algorithm for the weighted total least squares (WTLS) problem that follows the traditional Lagrange approach to optimise the target function of this problem. A new weight function is also introduced for the IRTLS approach in order to obtain better results. A simulation study and an empirical example give insight into the robustness and the efficiency of the proposed procedure.

Keywords: Errors-in-variables model, Weighted total least squares, Robust estimation, Iteratively reweighted total least squares

Introduction

If the observations are distorted by gross errors in addition to random ones, one can use robust estimation techniques. The concept of robustness was introduced long ago; the term robust was coined in statistics by [4]. Various definitions of greater or lesser mathematical rigour are possible for the term, but in general, referring to a statistical estimator, it means insensitive to small departures from the idealised assumptions for which the estimator is optimised [9], [13]. In the last two decades, several publications concerning robust estimation and outlier detection methods have been published in
the geodetic literature. We may, for instance, refer to [8], [2], [12], [27] and [3]. None of them, however, has been applied to the relatively new class of models named errors-in-variables (EIV) models, which can be solved using the total least squares (TLS) method developed by [7], although [19] and [20] have proposed a method for outlier detection in the EIV model based on a traditional statistical test. That method is, however, applicable only when a single outlier appears, either in the observation vector or in the coefficient matrix.

1 Department of Surveying and Geomatics Engineering, Geodesy Division, Faculty of Engineering, University of Tehran, North Kargar Ave., Amir-Abad, Tehran, Iran
2 Department of Surveying Engineering, Faculty of Engineering, University of Isfahan, Isfahan, Iran
3 Acoustic Remote Sensing Group, Faculty of Aerospace Engineering, Delft University of Technology, Kluyverweg 1, 2629 HS, Delft, The Netherlands
4 Department of Surveying and Geomatics Engineering, Geodesy Division, Faculty of Engineering, University of Tehran, North Kargar Ave., Amir-Abad, Tehran, Iran
*Corresponding author, vahid_mahboobk@yahoo.com

In the last decade, there has been a growing demand for the TLS method, generally in the engineering sciences and particularly in geodetic surveying applications. Therefore, a robust estimation method for EIV models needs to be established. Although some researchers, such as [28] and [5], have investigated this problem in the statistical literature, the methods proposed can be applied only to a linear regression model and are, in addition, difficult to use. We aim to adapt these methods for engineering applications: the real and simulated examples below show that, in order to achieve more accurate results, one has to develop the method further and adapt it for engineering applications. In the present contribution, one approximate approach is introduced for robust estimation in EIV models, which comes from the fields of optimal control and
filtering rather than from the field of mathematical statistics. It is the iteratively reweighted total least squares (IRTLS), a follow-up to the iteratively reweighted least squares (IRLS) that was originally introduced into geodetic applications by [12]. To develop the IRTLS algorithm, we select one algorithm from among the several existing algorithms that solve the TLS problem. This algorithm was developed by [15] for the weighted total least squares (WTLS) problem.

© 2013 Survey Review Ltd. Received 9 November 2011; accepted 8 May 2012. Survey Review 2013, VOL 45, NO 329. DOI 10.1179/1752270612Y

It is based on the traditional Lagrange approach, of which

its target function was proposed by [23]. The algorithm has a general structure and can be applied to many geodetic applications, such as linear regression and the affine transformation; it also has the capability to be reweighted.

This paper is organised as follows. In the next section, the EIV model and the WTLS algorithm are introduced. We then introduce the IRTLS problem and propose an algorithm to solve it. In a later section, a simulation study and an empirical example give insight into the robustness of IRTLS and the efficiency of the proposed algorithm. Finally, we conclude the paper.

Weighted total least squares

In many geodetic surveying applications, one usually assumes that only the observables are contaminated by random noise. There are, however, cases in which the model itself also contains random noise. In this case, the Gauss-Markov model is replaced by an EIV model, which is expressed as

$y = (A - E_A)\,x + e_y \quad (1)$

where underlining indicates random variables, and one has

$\begin{bmatrix} e_y \\ e_A \end{bmatrix} := \begin{bmatrix} e_y \\ \mathrm{vec}(E_A) \end{bmatrix} \sim \left( \begin{bmatrix} 0 \\ 0 \end{bmatrix},\; \sigma_0^2 \begin{bmatrix} Q_y & 0 \\ 0 & Q_A \end{bmatrix} \right) \quad (2)$

Here y is the m×1 observation vector, e_y is the m×1 vector of observational noise, A is the m×n coefficient matrix of (observed) input variables, E_A is the corresponding m×n matrix of random noise, x is the n×1 vector of unknown parameters, and $D\{e_y\} = \sigma_0^2 Q_y$ and $D\{\mathrm{vec}(E_A)\} = \sigma_0^2 Q_A$ are the corresponding (partly known) dispersion matrices of size m×m and mn×mn, with $\sigma_0^2$ the unknown variance component.

The WTLS estimator is used to solve the EIV model defined in equations (1) and (2). To this end, [25] introduced the generalised total least squares (GTLS). [23] argued that GTLS does not solve the WTLS problem, because the so-called weights of GTLS do not actually refer to a covariance matrix of the matrix observations; to avoid confusion, GTLS is now more commonly known as equilibrated TLS. What we call weighted TLS follows, in fact, the geodetic tradition and is based on the inverse of the dispersion matrix
(variance-covariance) matrix. In the case of a diagonal covariance matrix, element-wise WTLS was first introduced by [17], who claim that in general the problem has no closed-form solution and that its computation involves solving a non-convex optimisation problem. [23] and [24] have similarly introduced an iterative method based on the traditional Lagrange approach to solve WTLS; they are, however, restricted to the condition $P_A = Q_A^{-1} = (P_0 \otimes P_x)$, where $\otimes$ denotes the Kronecker product and $P_0$ is of size m×m and $P_x$ of size n×n. Based on the traditional Lagrange approach, another algorithm has recently been developed for the WTLS problem [15]. It has a general structure and can be applied to many geodetic applications, such as the linear regression model and the affine transformation. In [15] it was also proved that the WTLS approach preserves the structure of the coefficient matrix in an EIV model when it is based on a perfect description of the dispersion matrix in equation (2). Furthermore, as becomes clear in the present contribution, this algorithm lends itself to reweighting.

The target (Lagrange) function proposed by [23] is

$\Phi := e_y^T Q_y^{-1} e_y + e_A^T Q_A^{-1} e_A + 2\lambda^T \left( y - Ax - e_y + (x^T \otimes I_m)\, e_A \right) \quad (3)$

where $\lambda$ is the m×1 (unknown) vector of Lagrange multipliers and $I_m$ is the m×m identity matrix. Setting the first derivatives of equation (3) with respect to $e_y$, $e_A$, $\lambda$ and $x$ to zero gives the following four equations:

$\tfrac{1}{2}\,\partial\Phi/\partial e_y^T = Q_y^{-1}\,\tilde e_y - \hat\lambda = 0 \quad (4)$

$\tfrac{1}{2}\,\partial\Phi/\partial e_A^T = Q_A^{-1}\,\tilde e_A + (\hat x \otimes I_m)\,\hat\lambda = 0 \quad (5)$

$\tfrac{1}{2}\,\partial\Phi/\partial \lambda^T = y - A\hat x - \tilde e_y + (\hat x^T \otimes I_m)\,\tilde e_A = 0 \quad (6)$

$\tfrac{1}{2}\,\partial\Phi/\partial x^T = -\left( A^T \hat\lambda - \tilde E_A^T \hat\lambda \right) = 0 \quad (7)$

where the tilde and the hat denote predicted and estimated quantities, respectively. From equations (4) and (5) we readily obtain the residual vectors

$\tilde e_y = Q_y \hat\lambda \quad (8)$

$\tilde e_A = \mathrm{vec}(\tilde E_A) = -\,Q_A (\hat x \otimes I_m)\,\hat\lambda \quad (9)$

As a starting point one may use $\hat x^{(0)} = (A^T Q_y^{-1} A)^{-1} A^T Q_y^{-1} y$ and apply the following iterative algorithm [15].

Step 1:

$R^{(i)} = \left[ Q_y + (\hat x^{(i-1)T} \otimes I_m)\, Q_A\, (\hat x^{(i-1)} \otimes I_m) \right]^{-1} \quad (10)$

$\hat\lambda^{(i)} = R^{(i)} \left( y - A \hat x^{(i-1)} \right) \quad (11)$

$r^{(i)} = (I_n \otimes \hat\lambda^{(i)})^T\, Q_A\, (\hat x^{(i-1)} \otimes I_m)\, \hat\lambda^{(i)} \quad (12)$

$\hat x^{(i)} = (A^T R^{(i)} A)^{-1} \left( A^T R^{(i)} y + r^{(i)} \right) \quad (13)$
for i = 1, 2, ….

Step 2: Repeat Step 1 until

$\lVert \hat x^{(i)} - \hat x^{(i-1)} \rVert \le \delta \quad (14)$

where $\delta$ is a chosen relative-error threshold for convergence. As shown by [15], the WTLS algorithms, including the one above, deliver the optimal structured TLS solution as soon as the proper $Q_A$ matrix is introduced, in agreement with the following theorem.

Theorem 2. A perfect description of the dispersion matrix $Q_A$ of the coefficient matrix A according to the

following five rules guarantees the complete structured approach for the WTLS problem:

1. If an element of A is repeated, one must use 100% correlation between the two repeated elements; consequently, these two elements have a covariance equal to their common variance.
2. If an element of A is repeated with a negative sign, one must use 100% (negative) correlation between the two repeated elements; consequently, these two elements have a negative covariance equal to (minus) their common variance.
3. If an element of A is fixed, one must put zeros in the corresponding row and column of the dispersion matrix of the coefficient matrix.
4. If there are two different elements, one enters the corresponding covariance if they are correlated; otherwise it is zero.
5. The above rules can be applied even in the homoscedastic case: one simply uses the number 1 if an element is random and 0 if it is fixed.

Iteratively reweighted total least squares

As mentioned, IRTLS is a follow-up to the IRLS that was originally introduced into geodetic applications by [12]. Although iterative reweighting methods are sometimes not reliable, in that they can provide only approximate solutions in some cases, they can easily be applied because they are based on the L2 norm or, in this research, on the TLS estimator. Moreover, their degree of robustness is comparable to that of robust estimators such as M-estimates, S-estimates and L-estimates; see, e.g., [8], [9] and [13]. Before proposing the IRTLS algorithm, it is necessary to introduce a theorem of advanced linear algebra ([14]).

Theorem 3
(the special case of the Schur decomposition). Let A be an n×n real symmetric matrix. Then there is an orthogonal n×n matrix S and a diagonal matrix $\Lambda$, whose diagonal elements are the eigenvalues of A, such that

$S^T A S = \Lambda, \qquad S^T S = I_n \quad (15)$

It is to be noted that this special case of the Schur decomposition, although a powerful decomposition, is not the well-known eigenvalue decomposition. We prefer the special case of the Schur decomposition because the columns of the orthogonal matrix S provide a basis with much better numerical properties than the set of eigenvectors that appears in the eigenvalue decomposition.

The following algorithm is proposed for IRTLS.

Step 1: Perform the Schur decomposition of $Q_A$ and the Cholesky decomposition of $Q_y$:

$Q_A = S \Lambda S^T, \qquad Q_y = G_y^T G_y \quad (16)$

where $G_y$ is an upper triangular matrix obtained from the diagonal and upper triangle of $Q_y$, satisfying $Q_y = G_y^T G_y$.

Step 2: Use initial values of the residuals $e_A$ and $e_y$ and form the diagonal matrices $W_A$ and $W_y$ (see Note 1 for how to form these weights).

Step 3:

$R^{(i)} = \left[ G_y^T W_y^{-1} G_y + (\hat x^{(i-1)T} \otimes I_m)\, S\, W_A^{-0.5} \Lambda W_A^{-0.5}\, S^T\, (\hat x^{(i-1)} \otimes I_m) \right]^{-1} \quad (17)$

$\hat\lambda^{(i)} = R^{(i)} \left( y - A \hat x^{(i-1)} \right) \quad (18)$

$r^{(i)} = (I_n \otimes \hat\lambda^{(i)})^T\, S\, W_A^{-0.5} \Lambda W_A^{-0.5}\, S^T\, (\hat x^{(i-1)} \otimes I_m)\, \hat\lambda^{(i)} \quad (19)$

$\hat x^{(i)} = (A^T R^{(i)} A)^{-1} \left( A^T R^{(i)} y + r^{(i)} \right) \quad (20)$

for i = 1, 2, ….

Step 4: Iterate from Step 3 until no further improvement of the results is possible. The weight matrices $W_A$ and $W_y$ in equations (17)-(19) should be accommodated according to the residuals of each iteration, given by equations (8) and (9); that is, $W_A$ and $W_y$ are replaced by $W_A^{(i)}$ and $W_y^{(i)}$, modified in each iteration.

Remarks

Note 1: The weight matrices W are of diagonal type, and their diagonal elements are functions of the residuals. Here we use Huber's M-estimator weights, which are given as [9], [18]

$[W]_{jj} = \begin{cases} 1, & |e_j| \le \sigma_0 k \\ \dfrac{\sigma_0 k}{|e_j|}, & |e_j| > \sigma_0 k \end{cases} \qquad j = 1, 2, \ldots \quad (21)$

Typically, k is chosen as k = 1.5 or 2, and $\sigma_0$ is the square root of the
variance component. It should be noted that the variable $k\sigma_{e_j}/|e_j|$, in which $\sigma_{e_j}$ is the standard deviation of the residual $e_j$, would be more reasonable than $k\sigma_0/|e_j|$ in the heteroscedastic case. A possible estimate of the variance component, based on the WTLS results of the previous section, is given by [15]:

$\hat\sigma_0^{2\,(i)} = \frac{\tilde e_y^T P_y \tilde e_y + \tilde e_A^T P_A \tilde e_A}{m-n} = \frac{\hat\lambda^{(i)T}\, R^{(i)-1}\, \hat\lambda^{(i)}}{m-n} \quad (22)$

In a forthcoming publication, a more general approach to estimating $\sigma_0^2$ will be introduced.

Note 2: In the first step, the Schur decomposition rather than the Cholesky decomposition is applied to $Q_A$. This is because the coefficient matrix A of an EIV model can have fixed or deterministic elements, so that $Q_A$ can have rows or columns whose elements are all equal to zero. In that case $Q_A$ is not positive definite, and the Cholesky decomposition cannot be applied to such a matrix; therefore, the more general Schur decomposition is used.

Note 3: There are a few technical limitations on applying TLS techniques to TLS problems, and one should be careful to select an appropriate TLS technique for a particular TLS problem. In the IRTLS algorithm, the observations can be correlated when the dispersion matrix of the coefficient matrix is generated by the assumptions of Theorem 2; see also [16].
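Notes 1 and 2 can be illustrated with a few lines of code. The sketch below (Python/NumPy; the residuals, σ₀ and Q_A are invented toy values, not taken from the paper) computes the Huber weights of equation (21) and shows why a singular Q_A admits the symmetric Schur (eigen)decomposition but not a Cholesky factorisation:

```python
import numpy as np

def huber_weights(residuals, sigma0, k=1.5):
    """Diagonal Huber weights, equation (21): w_j = 1 if |e_j| <= sigma0*k,
    else sigma0*k/|e_j|, so gross residuals are strongly down-weighted."""
    e = np.abs(np.asarray(residuals, dtype=float))
    c = k * sigma0
    return np.where(e <= c, 1.0, c / e)

# A residual vector with one gross error of 10 units
w = huber_weights([0.2, -0.5, 0.1, 10.0, -0.3], sigma0=0.5)
# inliers keep weight 1; the outlier gets 0.75/10 = 0.075

# Note 2 in code: a Q_A with a zero row/column (a fixed element of A) is
# only positive SEMI-definite, so Cholesky fails while the symmetric
# Schur/eigendecomposition Q_A = S diag(L) S^T still exists.
QA = np.diag([0.01, 0.0, 0.04])
try:
    np.linalg.cholesky(QA)
except np.linalg.LinAlgError:
    pass                      # not positive definite: Cholesky is rejected
L, S = np.linalg.eigh(QA)     # S orthogonal, L the eigenvalues of Q_A
```

In practice the same down-weighting is applied anew at every iteration of Step 3, using the residuals of equations (8) and (9).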

Note 4: In contrast to the WTLS [16] and other existing TLS approaches in the field of engineering applications, in the IRTLS approach the dispersion matrices $Q_A$ and $Q_y$ are updated in each iteration, according to the algebraic decompositions and the weight functions, in order to make the TLS approach insensitive to gross errors, similar to the traditional IRLS approach for the Gauss-Markov (GM) model. There is no guarantee that the IRTLS approach gives the correct result in all cases, because of the empirical constant k and the square root of the variance component $\sigma_0$ appearing in equation (21), or even because of the general form of the weight function in equation (21). The constant k may take a smaller value in TLS adjustment than in LS adjustment, since more variables exist in the EIV model than in the GM model, which results in smaller residuals in EIV models. Further research is needed to obtain an optimal empirical constant for the IRTLS algorithm. Similar reasoning applies to the general form of the weight function. A possible alternative weight function is proposed as follows:

$[W]_{jj} = \begin{cases} 1, & |e_j| \le \sigma_0 k \\ \left( \dfrac{\sigma_0 k}{|e_j|} \right)^2, & |e_j| > \sigma_0 k \end{cases} \qquad j = 1, 2, \ldots \quad (23)$

Both weight functions are examined in the next section. The estimation of the variance component in the IRTLS algorithm is not robust; one may use a median-based variance scale (see [26]) as an approximately robust estimate of the variance component. In a forthcoming publication, we introduce a proper approach to achieve this goal.

Numerical results and discussion

To verify the IRTLS algorithm presented here, two examples are given. The first is a simulated two-dimensional affine transformation for which weighted data have been generated; the second is a linear regression model with a real data set (the homoscedastic case). Both examples contain gross errors and occur frequently in engineering surveying and geodesy. The results of IRTLS based on the two weight functions in equations (21) and (23) are also compared.

Example 1: two-dimensional affine transformation

The issue of reference frame transformation has a very long history in the geodetic sciences, including geodesy, photogrammetry, mapping and engineering surveying. Whenever a set of points is given for which coordinate estimates are available in two systems (along with their covariance matrices), the transformation parameters can be estimated on the basis of a suitable model [24]. In this simulated example, the planar linear affine transformation, also known as the six-parameter transformation, is employed:

$\begin{bmatrix} x_t \\ y_t \end{bmatrix} = \begin{bmatrix} \cos\beta & -\sin\beta \\ \sin\beta & \cos\beta \end{bmatrix} \begin{bmatrix} 1 & 0 \\ m & 1 \end{bmatrix} \begin{bmatrix} s_x & 0 \\ 0 & s_y \end{bmatrix} \begin{bmatrix} x_o \\ y_o \end{bmatrix} + \begin{bmatrix} c_1 \\ c_2 \end{bmatrix} = \begin{bmatrix} a_1 & b_1 \\ a_2 & b_2 \end{bmatrix} \begin{bmatrix} x_o \\ y_o \end{bmatrix} + \begin{bmatrix} c_1 \\ c_2 \end{bmatrix} \quad (24)$

This transformation employs six physical parameters: $s_x$ and $s_y$ for the scales along the x and y axes, respectively; $\beta$ for the rotation; $c_1$ and $c_2$ for the shifts along the x and y axes, respectively; and a non-perpendicularity (or affinity) m between the two axes of the system to be rotated. The physical parameters $s_x$, $s_y$, m and $\beta$ can be replaced by the mathematical parameters $a_1$, $a_2$, $b_1$ and $b_2$. Here $x_t$ and $y_t$ are the transformed coordinates in the target system, while $x_o$ and $y_o$ are the original coordinates observed in the start system. In terms of the observation equations, equation (24) has the following structure:

$\begin{bmatrix} x_t \\ y_t \end{bmatrix} = \begin{bmatrix} x_o & y_o & 0 & 0 & 1 & 0 \\ 0 & 0 & x_o & y_o & 0 & 1 \end{bmatrix} \begin{bmatrix} a_1 & b_1 & a_2 & b_2 & c_1 & c_2 \end{bmatrix}^T \quad (25)$

The multivariate model of equation (25) is defined as follows (see, e.g., [21]):

$\begin{bmatrix} x_t(1) & y_t(1) \\ \vdots & \vdots \\ x_t(n) & y_t(n) \end{bmatrix} = \begin{bmatrix} x_o(1) & y_o(1) & 1 \\ \vdots & \vdots & \vdots \\ x_o(n) & y_o(n) & 1 \end{bmatrix} \begin{bmatrix} a_1 & a_2 \\ b_1 & b_2 \\ c_1 & c_2 \end{bmatrix} \quad (26)$

where n is the number of identical points. Unfortunately, a few authors still use the classical TLS approach to adjust such a problem, in which the coefficient matrix is structured or patterned; contributions that neglect this very structure, such as the one by [1] and several followers, must hence be dismissed [22]. For more details see [16].

Suppose that the coordinates of 13 points in the start system are transformed by the parameters $a_1$ = 0.9, $b_1$ = 0.8, $c_1$ = , $a_2$ = 0.6, $b_2$ = 0.7 and $c_2$ = 5 into the coordinates in the target system; these coordinates are given in Table 1. The coordinates of the points in the start and target systems are then corrupted by white Gaussian noise, with the points differing in precision:

$Q_{start} = I_2 \otimes Q_S \quad (27)$

$Q_{target} = I_2 \otimes Q_T \quad (28)$

where $Q_S$ and $Q_T$ are given as

$Q_S = 0.005\,\mathrm{Diag}([\ ,\ ,3,\ ,5,4,\ ,7,\ ,\ ,8,3,6])$

Table 1 Point coordinates without any error in the start ($x_o$, $y_o$) and target ($x_t$, $y_t$) systems (columns: Point, $x_o$, $y_o$, $x_t$, $y_t$)
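The stacked structure of equation (25) is easy to set up in code. The sketch below (Python/NumPy; the sample points and the shift values $c_1$ = 2 and $c_2$ = 5 are merely illustrative, not the paper's simulated data set) builds the design matrix for several points and recovers the six parameters from noise-free coordinates by ordinary least squares:

```python
import numpy as np

def affine_design(xo, yo):
    """Design matrix of equation (25), stacked over the points, for the
    parameter vector [a1, b1, a2, b2, c1, c2]:
      x_t = a1*xo + b1*yo + c1,   y_t = a2*xo + b2*yo + c2."""
    n = len(xo)
    A = np.zeros((2 * n, 6))
    A[0::2, 0] = xo; A[0::2, 1] = yo; A[0::2, 4] = 1.0   # x_t rows
    A[1::2, 2] = xo; A[1::2, 3] = yo; A[1::2, 5] = 1.0   # y_t rows
    return A

# Noise-free check with a1 = 0.9, b1 = 0.8, a2 = 0.6, b2 = 0.7 (as in the
# paper's simulation) and illustrative shifts c1 = 2, c2 = 5
params = np.array([0.9, 0.8, 0.6, 0.7, 2.0, 5.0])
xo = np.array([0.0, 1.0, 0.0, 2.0]); yo = np.array([0.0, 0.0, 1.0, 3.0])
A = affine_design(xo, yo)
y = A @ params                       # stacked (x_t, y_t) observations
est, *_ = np.linalg.lstsq(A, y, rcond=None)
```

In the EIV setting of this example, the columns of A built from $x_o$ and $y_o$ are themselves noisy observations, which is precisely why WTLS/IRTLS rather than ordinary LS is required.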

Table 2 Estimated parameters using the WTLS and IRTLS for the series of simulated data without any outlier (columns: $a_1$, $b_1$, $c_1$, $a_2$, $b_2$, $c_2$; rows: Average, standard deviation)

Table 3 Estimated parameters using the WTLS for the series of simulated data when there is an outlier in the observations (columns: $a_1$, $b_1$, $c_1$, $a_2$, $b_2$, $c_2$; rows: Average, standard deviation)

$Q_T = 0.005\,\mathrm{Diag}([\ ,3,6,\ ,\ ,8,4,3,6,5,4,5,\ ])$

In fact, we assume the same standard deviation (precision) for both coordinates of a point. Finally, a bias with a magnitude of m is added to the coordinates of point 4 in the start system. The post-adjustment results show that the bias is reduced to 5 cm for IRTLS based on the Huber weight function, and further based on the proposed weight function (equation (23)), which shows that the IRTLS approach is less sensitive to an outlier than the other existing estimation techniques in EIV models.

Because we are interested in obtaining reliable results, this process has been repeated, with the white Gaussian noise generator producing different noise in each simulation. In each simulation, the transformation parameters have been estimated by the WTLS and IRTLS algorithms based on the two weight functions, both when an outlier exists and when there is none. The average estimated parameters from the two algorithms, along with their standard deviations, with and without an outlier, are given in Tables 2-5. When there is no outlier, the results of IRTLS and WTLS are not significantly different from each other, and hence they are shown only once, in Table 2. The results of WTLS, IRTLS (Huber weight function) and IRTLS (proposed weight function) when there is an outlier are given in Tables 3-5, respectively. The IRTLS approach based on the proposed weight function yields the best improvement in the last three transformation parameters, namely $a_2$, $b_2$ and $c_2$.

To validate the accuracy of each method in all parts of the model, in the rest of this section we use check points showing the goodness of fit of each method. To examine the results of the WTLS and IRTLS algorithms based on the two weight functions, with and without an outlier, check points were generated in the simulation process. The residual vectors at each check point for all methods, with the position of each point in the target system, are shown in Fig. 1; the magnitudes of these vectors (in metres) at each check point show the goodness of fit of each method (Fig. 2). The IRTLS approach is more robust than the classical approach and improves the accuracy of the transformation parameters well: the IRTLS approach based on the proposed weight function in equation (23) achieved the larger maximum improvement, while the method based on Huber's weight function in equation (21) achieved a maximum improvement of only 4 cm. Figures 1 and 2 illustrate the results of Tables 2-5 and also show the efficiency of the IRTLS approach, particularly when the proposed weight function is employed. We use, however, another indicator to substantiate the improvement made by the IRTLS approach, particularly that based on the proposed weight function: the variance component estimated by each method (see Table 6). As can be seen, the IRTLS approach, particularly that based on the proposed weight function, has the minimum variance component among the methods.

We should note that the IRTLS method depends on the several experimental conditions discussed above. Therefore, although IRTLS can mitigate the effect of an outlier (bias) to a large extent, it can still leave part of the bias in the estimated solution compared to the correct, unbiased solution. We also note that the results could be further improved using the proposed weight function. When there is no outlier, the IRTLS and WTLS approaches give the correct results.

Table 4 Estimated parameters using the IRTLS, based on Huber's weight function, for the series of simulated data when there is an outlier in the observations (columns: $a_1$, $b_1$, $c_1$, $a_2$, $b_2$, $c_2$; rows: Average, standard deviation)

Table 5 Estimated parameters using the IRTLS, based on the proposed weight function, for the series of simulated data when there is an outlier in the observations (columns: $a_1$, $b_1$, $c_1$, $a_2$, $b_2$, $c_2$; rows: Average, standard deviation)

Example 2: application to linear regression

Orthogonal regression is employed whenever both the X and Y coordinates are noisy.
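Classical orthogonal regression has a simple closed-form TLS solution, which is worth sketching before the robust variants are discussed. In the snippet below (Python/NumPy; the sample points are invented, not the data of Table 7), the normal of the best-fitting line is the right singular vector of the centred data with the smallest singular value, which minimises the sum of squared orthogonal distances:

```python
import numpy as np

def orthogonal_regression(x, y):
    """Closed-form TLS line fit: the normal of the best line is the right
    singular vector of the centred data with the smallest singular value."""
    x = np.asarray(x, float); y = np.asarray(y, float)
    xm, ym = x.mean(), y.mean()
    _, _, Vt = np.linalg.svd(np.column_stack([x - xm, y - ym]),
                             full_matrices=False)
    nx, ny = Vt[-1]                  # normal (nx, ny) of nx*(x-xm)+ny*(y-ym)=0
    slope = -nx / ny                 # assumes the line is not vertical
    return slope, ym - slope * xm

# Points exactly on y = 0.5*x + 2 are recovered exactly
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
slope, intercept = orthogonal_regression(x, 0.5 * x + 2.0)
```

Like every unweighted LS-type estimator, this fit has no protection against outliers, which is the motivation for the robust variants below.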

In such a case, the predicted residuals of the TLS approach are orthogonal to the fitted line; consequently, linear regression by the TLS method is also named orthogonal regression.

Fig. 1 Residual vectors at each check point for all methods, with the positions of the control points (denoted by asterisks, *) in the target system

In the last two decades, scientists in the field of mathematical statistics have carried out research on making this method more robust; see, e.g., [28], [6] and [5]. They have proposed robust estimators of orthogonal regression in the field of mathematical statistics, such as S-estimates and M-estimates of orthogonal regression, though these are not yet applicable in the engineering sciences. Therefore, to contrast our work with proper criteria, a data set is selected that has already been used by different groups of statisticians, such as [28], [10] and [6]. They have all applied their rigorous, powerful estimators to this data set, and they show that it contains more than one outlier; therefore, the recent method of [19] and [20] cannot be applied to this data set.

Fig. 2 Magnitude of the residual vector (in m) at each check point using the different methods

Robust orthogonal regression

methods can also be used to identify multidimensional outliers in situations where classical methods are not very reliable, for example when outliers occur in bunches and mask each other. In fact, robust orthogonal regression can help to find projections which are interesting from the outlier detection point of view [28].

[10] presented 20 pairs of empirical measurements, obtained by two different methods for the x and y coordinates. The 20 pairs of this data set are presented in Table 7. The assumption that both measurements are subject to random errors with equal variances seems reasonable [28]. The M-estimates and S-estimates of orthogonal regression and the classical orthogonal regression were reported by [6]. The results of the M- and S-estimates of orthogonal regression, of classical orthogonal regression, of the WTLS solution and of the simple IRTLS algorithm based on the two weight functions are given in Table 8 and shown in Fig. 3. As can be seen in Fig. 3 and Table 8, the IRTLS approach improved the results of classical orthogonal regression and of the WTLS approach very well: the results of the proposed methods, particularly that based on the proposed weight function, are close to the results of the M- and S-estimates of orthogonal regression, which shows that the proposed methods come close to maximum likelihood estimation. In addition, IRTLS is more applicable than the robust estimators of statistics, since it is based on the L2 norm minimisation principle.

Fig. 3 Line fitting by the different estimators

Table 6 Estimated variance component using each method for the transformation (methods: WTLS; IRTLS with Huber weight function; IRTLS with the proposed weight function)

Table 7 Twenty pairs of empirical data presented in [10] (columns: Point No., X, Y)

There is, however, still a bias in the IRTLS solution, both in the intercept and
in the slope, which can create a significant bias as the coordinate values increase. One of the reasons for the bias of the IRTLS approach is the experimental weight functions used in the IRTLS algorithm (see the section on iteratively reweighted total least squares). The proposed weight function in equation (23) allowed us to reduce this bias to 0.94 in the intercept and 0.05 in the slope.

Table 8 Estimated line parameters by the different estimation methods (rows: TLS; WTLS; IRTLS (Huber's weight function); IRTLS (proposed weight function); S-estimates of orthogonal regression; M-estimates of orthogonal regression — columns: Intercept, Slope)
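The reweighting idea behind the results of Table 8 can be imitated in a few lines for the homoscedastic line-fitting case. The sketch below (Python/NumPy) is a simplified illustration, not the paper's full IRTLS algorithm: it alternates a weighted orthogonal fit with Huber reweighting of the orthogonal residuals, using invented data, a planted outlier, and illustrative values for k and σ₀:

```python
import numpy as np

def weighted_orthogonal_fit(P, w):
    """Weighted orthogonal line fit: minimises sum_i w_i * d_i^2 of squared
    orthogonal distances. Returns (unit normal n, centre c)."""
    c = (w[:, None] * P).sum(axis=0) / w.sum()          # weighted centroid
    M = np.sqrt(w)[:, None] * (P - c)
    _, _, Vt = np.linalg.svd(M, full_matrices=False)
    return Vt[-1], c                                     # smallest-sigma direction

def irls_orthogonal(P, k=1.5, sigma0=0.05, iters=30):
    """Huber-reweighted orthogonal regression: outlying points are
    progressively down-weighted via their orthogonal residuals."""
    w = np.ones(len(P))
    for _ in range(iters):
        n, c = weighted_orthogonal_fit(P, w)
        d = np.abs((P - c) @ n)                          # orthogonal residuals
        w = np.where(d <= k * sigma0, 1.0, k * sigma0 / np.maximum(d, 1e-12))
    return n, c

# Synthetic points on y = 0.5*x + 2 with one planted gross error
x = np.linspace(0.0, 9.0, 10)
P = np.column_stack([x, 0.5 * x + 2.0])
P[3, 1] += 4.0                                           # gross error
n, c = irls_orthogonal(P)
slope = -n[0] / n[1]
intercept = c[1] - slope * c[0]
# the reweighted fit stays close to the true line despite the outlier
```

The residual bias visible in the assertions below mirrors, in miniature, the behaviour discussed above: reweighting mitigates but does not entirely remove the influence of the gross error.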

Table 9 Estimated variance component using each method for the linear regression example (methods: WTLS; IRTLS with Huber's weight function; IRTLS with the proposed weight function)

Another statistical indicator is the variance component estimated by each method (see Table 9). As can be seen, the IRTLS approach, particularly that based on the proposed weight function, has the minimum variance component among the methods.

Conclusions

In this research, IRTLS has been introduced as a robust estimation method for EIV models. The method is based on the WTLS algorithm that follows the traditional Lagrange approach to optimise the target function of this problem. It can easily be implemented, since it is based on the L2 norm minimisation principle, and consequently no linear programming scheme is required. Moreover, as shown in the numerical examples, in some cases IRTLS considerably improves the accuracy of the classical TLS and WTLS approaches when a few outliers exist in the data. The IRTLS algorithm can be applied in several applications. There is, however, still a bias in the IRTLS solution. As discussed in the section on iteratively reweighted total least squares, the nature and source of this bias are likely the empirical constant k and the square root of the variance component $\sigma_0$ appearing in equation (21), or even the general form of the weight function in equation (21). This led us to improve the results using a different weight function as an alternative to Huber's weight function. Further work is likely required to come up with a weight function that gives the most accurate solution. Our final remark regards the necessity of introducing the statistical robust estimators of EIV models into engineering applications, which is currently in progress as future work.

References

1. Akyilmaz, O., 2007. Total least-squares solution of coordinate transformation. Survey Review, 39.
2. Amiri-Simkooei, A. R., 2003. Formulation of L1 norm minimization in Gauss-Markov models. Journal of Surveying Engineering,
129(1).
3. Baselga, S., 2007. Global optimization solution of robust estimation. Journal of Surveying Engineering, 133(3).
4. Box, G. E. P., 1953. Non-normality and tests on variances. Biometrika, 40.
5. Brown, M., 1982. Robust line estimation with errors in both variables. Journal of the American Statistical Association, 77.
6. Fekri, M. and Ruiz-Gazen, A., 2004. Robust weighted orthogonal regression in the errors-in-variables model. Journal of Multivariate Analysis, 88.
7. Golub, G. and van Loan, C., 1980. An analysis of the total least squares problem. SIAM Journal on Numerical Analysis, 17.
8. Hekimoglu, S. and Berber, M., 2003. Effectiveness of robust methods in heterogeneous linear models. Journal of Geodesy, 76.
9. Huber, P. J., 1981. Robust statistics. Wiley, New York.
10. Kelly, G., 1984. The influence function in the errors in variables problem. The Annals of Statistics, 12.
11. Koch, K. R., 1999. Parameter estimation and hypothesis testing in linear models. Springer, Berlin.
12. Krarup, T., Juhl, J. and Kubik, K., 1980. Götterdämmerung over least squares adjustment. Proceedings of the 14th Congress of the International Society of Photogrammetry, B3.
13. Launer, R. L. and Wilkinson, G. N. (eds), 1979. Robustness in statistics. Academic Press, New York.
14. Magnus, J. R., 1988. Linear structures. London School of Economics and Political Science, Charles Griffin and Company, Oxford University Press, London.
15. Mahboub, V., 2012. On weighted total least-squares for geodetic transformations. Journal of Geodesy, 86.
16. Mahboub, V., 2012. Discussion of "An improved weighted total least squares method with applications in linear fitting and coordinate transformation" by Xiaohua Tong, Yanmin Jin and Lingyun Li. Journal of Surveying Engineering, in press, doi: 10.1061/(ASCE)SU.
17. Markovsky, I., Rastello, M., Premoli, A., Kukush, A. and van Huffel, S., 2006. The element-wise weighted total least-squares problem. Computational Statistics & Data Analysis, 50.
18. Marshall, J. and Bethel, J., 1996. Basic concepts of L1 norm minimization for surveying applications. Journal of Surveying Engineering, 122(4).
19. Schaffrin, B., 2011.
Errors-in-variables for mobile algorithms in the presence of outliers. Proceedings of the International Symposium on Mobile Mapping Technology, June 2011, Krakow, Poland.
20. Schaffrin, B., 2010. On the reliability of errors-in-variables models. International Workshop on Matrices and Statistics, June 2010, Tartu, Estonia.
21. Schaffrin, B. and Felus, Y., 2008. On the multivariate total least-squares approach to empirical coordinate transformations. Three algorithms. Journal of Geodesy, 82.
22. Schaffrin, B., Neitzel, F., Uzun, S. and Mahboub, V., 2012. Modifying Cadzow's algorithm to generate the optimal TLS solution for the structured EIV model of a similarity transformation. Journal of Geodetic Science, accepted April 2012.
23. Schaffrin, B. and Wieser, A., 2008. On weighted total least-squares adjustment for linear regression. Journal of Geodesy, 82(7).
24. Schaffrin, B. and Wieser, A., 2009. Empirical affine reference frame transformations by weighted multivariate TLS adjustment. In: Geodetic Reference Frames, IAG Symposium, October 2006, Munich, Germany. International Association of Geodesy Symposia, Volume 134, H. Drewes (ed.).
25. van Huffel, S. and Vandewalle, J., 1991. The total least-squares problem. Computational aspects and analysis. SIAM, Philadelphia, PA.
26. Yang, Y., 1999. Robust estimation of geodetic datum transformation. Journal of Geodesy, 73.
27. Yang, Y., Song, L. and Xu, T., 2002. Robust estimator for correlated observations based on bifactor equivalent weights. Journal of Geodesy, 76.
28. Zamar, R. H., 1989. Robust estimation in the errors-in-variables model. Biometrika, 76(1).


Stochastic Optimization with Inequality Constraints Using Simultaneous Perturbations and Penalty Functions International Journal of Control Vol. 00, No. 00, January 2007, 1 10 Stochastic Optimization with Inequality Constraints Using Simultaneous Perturbations and Penalty Functions I-JENG WANG and JAMES C.

More information

Research Article The Laplace Likelihood Ratio Test for Heteroscedasticity

Research Article The Laplace Likelihood Ratio Test for Heteroscedasticity International Mathematics and Mathematical Sciences Volume 2011, Article ID 249564, 7 pages doi:10.1155/2011/249564 Research Article The Laplace Likelihood Ratio Test for Heteroscedasticity J. Martin van

More information

EECS 275 Matrix Computation

EECS 275 Matrix Computation EECS 275 Matrix Computation Ming-Hsuan Yang Electrical Engineering and Computer Science University of California at Merced Merced, CA 95344 http://faculty.ucmerced.edu/mhyang Lecture 17 1 / 26 Overview

More information

Zellner s Seemingly Unrelated Regressions Model. James L. Powell Department of Economics University of California, Berkeley

Zellner s Seemingly Unrelated Regressions Model. James L. Powell Department of Economics University of California, Berkeley Zellner s Seemingly Unrelated Regressions Model James L. Powell Department of Economics University of California, Berkeley Overview The seemingly unrelated regressions (SUR) model, proposed by Zellner,

More information

Computing Eigenvalues and/or Eigenvectors;Part 2, The Power method and QR-algorithm

Computing Eigenvalues and/or Eigenvectors;Part 2, The Power method and QR-algorithm Computing Eigenvalues and/or Eigenvectors;Part 2, The Power method and QR-algorithm Tom Lyche Centre of Mathematics for Applications, Department of Informatics, University of Oslo November 13, 2009 Today

More information

FINANCIAL ECONOMETRICS AND EMPIRICAL FINANCE -MODULE2 Midterm Exam Solutions - March 2015

FINANCIAL ECONOMETRICS AND EMPIRICAL FINANCE -MODULE2 Midterm Exam Solutions - March 2015 FINANCIAL ECONOMETRICS AND EMPIRICAL FINANCE -MODULE2 Midterm Exam Solutions - March 205 Time Allowed: 60 minutes Family Name (Surname) First Name Student Number (Matr.) Please answer all questions by

More information