A new method for searching an L1 solution of an overdetermined system of linear equations and applications


1 A new method for searching an L1 solution of an overdetermined system of linear equations and applications. Goran Kušec (1), Ivana Kuzmanović (2), Kristian Sabo (2), Rudolf Scitovski (2); (1) Faculty of Agriculture, University of Osijek; (2) Department of Mathematics, University of Osijek

2 LAD Problem. Given a matrix $A \in \mathbb{R}^{m\times n}$ ($m \geq n$) and a vector $b \in \mathbb{R}^m$.


4 System of linear equations $Ax = b$: $b \in \mathcal{R}(A)$, where $\mathcal{R}(A)$ denotes the range of $A$.

5 Overdetermined system of linear equations $Ax \approx b$: $b \notin \mathcal{R}(A)$.

6 $\min_{x\in\mathbb{R}^n} \|b - Ax\|$ with respect to the Euclidean $\ell_2$-norm (least squares (LS) solution), the $\ell_1$-norm (least absolute deviations (LAD) solution), or the $\ell_\infty$-norm (Chebyshev solution); applications in various fields of applied research.
J. A. Cadzow, Minimum $\ell_1$, $\ell_2$ and $\ell_\infty$ norm approximate solutions to an overdetermined system of linear equations, Digital Signal Processing 12(2002)
S. Dasgupta, S. K. Mishra, Least absolute deviation estimation of linear econometric models: A literature review, Munich Personal RePEc Archive, 1-26, 2004
C. Gurwitz, Weighted median algorithms for $L_1$ approximation, BIT 30(1990)
Y. Dodge (Ed.), Statistical Data Analysis Based on the $L_1$-norm and Related Methods, Proceedings of the Third International Conference on Statistical Data Analysis Based on the $L_1$-norm and Related Methods, Elsevier, Neuchâtel, 1997

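The three fitting criteria are easy to compare numerically. As a minimal sketch (not the method of this talk), the LAD solution can be computed through the standard linear-programming reformulation $\min \sum_i t_i$ subject to $-t \leq b - Ax \leq t$, here via `scipy.optimize.linprog`, with `numpy.linalg.lstsq` giving the LS solution for comparison; all data below are illustrative:

```python
import numpy as np
from scipy.optimize import linprog

def lad_solve(A, b):
    """Best LAD solution of Ax ~ b via the LP reformulation:
    minimize sum(t) subject to A x - t <= b and -A x - t <= -b."""
    m, n = A.shape
    c = np.concatenate([np.zeros(n), np.ones(m)])   # objective: sum of t
    I = np.eye(m)
    A_ub = np.block([[A, -I], [-A, -I]])
    b_ub = np.concatenate([b, -b])
    bounds = [(None, None)] * n + [(0, None)] * m   # x free, t >= 0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[:n]

# comparison on a random line fit with one gross outlier
rng = np.random.default_rng(1)
A = np.column_stack([np.ones(20), rng.uniform(0, 10, 20)])  # intercept + slope
b = A @ np.array([2.0, 0.5]) + 0.1 * rng.standard_normal(20)
b[0] += 50.0                                                # gross outlier
x_ls = np.linalg.lstsq(A, b, rcond=None)[0]
x_lad = lad_solve(A, b)   # far less sensitive to the outlier than x_ls
```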

9 A vector $x^* \in \mathbb{R}^n$ such that $\min_{x\in\mathbb{R}^n} \|b - Ax\|_1 = \|b - Ax^*\|_1$ is called the best LAD-solution of the overdetermined system $Ax \approx b$.
In the operational research literature this is a parameter estimation problem for a hyperplane on the basis of a given set of points:
N. M. Korneenko, H. Martini, Hyperplane approximation and related topics, in: New Trends in Discrete and Computational Geometry (J. Pach, Ed.), Springer-Verlag, Berlin
A. Schöbel, Locating Lines and Hyperplanes: Theory and Algorithms, Springer-Verlag, Berlin
In the statistics literature it is a parameter estimation problem for linear regression:
S. Dasgupta, S. K. Mishra, Least absolute deviation estimation of linear econometric models: A literature review, Munich Personal RePEc Archive, 1-26, 2004
E. Z. Demidenko, Optimization and Regression, Nauka, Moscow, 1989 (in Russian)
Y. Dodge (Ed.), Statistical Data Analysis Based on the $L_1$-norm and Related Methods, Proceedings of the Third International Conference on Statistical Data Analysis Based on the $L_1$-norm and Related Methods, Elsevier, Neuchâtel, 1997


11 The LAD principle is considered to have been proposed by the Croatian mathematician R. J. Bošković in the mid-eighteenth century.
D. Birkes, Y. Dodge, Alternative Methods of Regression, Wiley, New York, 1993
P. Bloomfield, W. Steiger, Least Absolute Deviations: Theory, Applications, and Algorithms, Birkhäuser, Boston, 1983
Y. Dodge (Ed.), Statistical Data Analysis Based on the $L_1$-norm and Related Methods, Proceedings of the Third International Conference on Statistical Data Analysis Based on the $L_1$-norm and Related Methods, Elsevier, Neuchâtel, 1997
Numerical methods: classical nondifferentiable minimization methods cannot be applied directly (unreasonably long computing time), so specialized algorithms are used:
J. A. Cadzow, Minimum $\ell_1$, $\ell_2$ and $\ell_\infty$ norm approximate solutions to an overdetermined system of linear equations, Digital Signal Processing 12(2002)
E. Castillo, R. Mínguez, C. Castillo, A. S. Cofiño, Dealing with the multiplicity of solutions of the $\ell_1$ and $\ell_\infty$ regression models, European J. Oper. Res. 188(2008)
Y. Li, A maximum likelihood approach to least absolute deviation regression, EURASIP Journal on Applied Signal Processing 12(2004)
N. T. Trendafilov, G. A. Watson, The $\ell_1$ oblique procrustes problem, Statistics and Computing 14(2004), 39-51


14 LAD problem. On the basis of the given set of points $\Lambda = \{T_i = (x_1^{(i)},\dots,x_{n-1}^{(i)}, z^{(i)}) \in \mathbb{R}^n : i \in I\}$, $I = \{1,\dots,m\}$, the vector $a^* = (a_0^*, a_1^*,\dots,a_{n-1}^*) \in \mathbb{R}^n$ of optimal parameters of the hyperplane $f(x;a) = a_0 + \sum_{j=1}^{n-1} a_j x_j$ should be determined such that
$$G(a^*) = \min_{a\in\mathbb{R}^n} G(a), \qquad G(a) = \sum_{i=1}^{m} \Big| z^{(i)} - a_0 - \sum_{j=1}^{n-1} a_j x_j^{(i)} \Big|.$$

15 LAD problem, $n = 2$. Data points $\Lambda = \{(x_1^{(i)}, z^{(i)}) : i = 1,\dots,m\}$; the LAD line $z = a_0^* + a_1^* x_1$ satisfies
$$\min_{(a_0,a_1)\in\mathbb{R}^2} \sum_{i=1}^{m} \big|z^{(i)} - a_0 - a_1 x_1^{(i)}\big| = \sum_{i=1}^{m} \big|z^{(i)} - a_0^* - a_1^* x_1^{(i)}\big|.$$


19 A LAD solution always exists.
N. M. Korneenko, H. Martini, Hyperplane approximation and related topics, in: New Trends in Discrete and Computational Geometry (J. Pach, Ed.), Springer-Verlag, Berlin
A. Pinkus, On $L_1$-approximation, Cambridge University Press, New York
G. A. Watson, Approximation Theory and Numerical Methods, John Wiley & Sons, Chichester, 1980

20 A LAD solution is generally not unique.

21 $n = 2$: the LAD solution is not unique (illustration).

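Non-uniqueness is easy to reproduce numerically. A hypothetical four-point configuration (corners of the unit square) admits at least two distinct LAD lines with the same minimal $\ell_1$ error; the bound $G \geq 2$ holds because the two points sharing each $x_1$-value contribute at least $1$ regardless of the line:

```python
import numpy as np

# four points at the corners of the unit square
x1 = np.array([0.0, 0.0, 1.0, 1.0])
z = np.array([0.0, 1.0, 0.0, 1.0])

def G(a0, a1):
    """l1 error of the line z = a0 + a1*x1 on the data above."""
    return np.abs(z - a0 - a1 * x1).sum()

# two different lines, both attaining the minimal value G = 2
print(G(0.5, 0.0))   # horizontal line z = 0.5  -> 2.0
print(G(0.0, 1.0))   # diagonal line   z = x1   -> 2.0
```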

30 Definition. A matrix $A \in \mathbb{R}^{m\times n}$ ($m \geq n$) is said to satisfy the Haar condition if every $n \times n$ submatrix of $A$ is nonsingular.
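For small examples the Haar condition can be verified directly from the definition; a brute-force sketch (the subset enumeration is exponential in $m$, so this is only for toy sizes):

```python
import numpy as np
from itertools import combinations

def satisfies_haar(A, tol=1e-12):
    """True if every n x n submatrix of A (n = number of columns) is
    nonsingular, i.e. A satisfies the Haar condition."""
    m, n = A.shape
    return all(abs(np.linalg.det(A[list(rows), :])) > tol
               for rows in combinations(range(m), n))
```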

31 Theorem 1. Let $A \in \mathbb{R}^{m\times n}$, $m > n$, be a matrix of full column rank and $b = (b_1,\dots,b_m)^T \in \mathbb{R}^m$ a given vector. Then there exists a permutation matrix $\Pi \in \mathbb{R}^{m\times m}$ such that
$$\Pi A = \begin{bmatrix} A_1 \\ A_2 \end{bmatrix}, \qquad \Pi b = \begin{bmatrix} b_1 \\ b_2 \end{bmatrix}, \qquad A_1 \in \mathbb{R}^{n\times n},\ A_2 \in \mathbb{R}^{(m-n)\times n},\ b_1 \in \mathbb{R}^n,\ b_2 \in \mathbb{R}^{m-n},$$
whereby $A_1$ is a nonsingular matrix, and there exists a LAD-solution $x^* \in \mathbb{R}^n$ such that $A_1 x^* = b_1$. Furthermore, if the matrix $[A\ b]$ satisfies the Haar condition, then the solution $x^* \in \mathbb{R}^n$ of the system $A_1 x = b_1$ is a solution of the LAD problem if and only if the vector $v = (A_1^T)^{-1} A_2^T s$, $s_i := \operatorname{sign}(b_2 - A_2 x^*)_i$, satisfies $\|v\|_\infty \leq 1$. Furthermore, $x^*$ is the unique solution of the LAD problem if and only if $\|v\|_\infty < 1$.
A. Pinkus, On $L_1$-approximation, Cambridge University Press, New York
G. A. Watson, Approximation Theory and Numerical Methods, John Wiley & Sons, Chichester, 1980
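Theorem 1 turns LAD optimality into a linear-algebra test. A small sketch of the check for a given candidate partition (the partition $(A_1, b_1, A_2, b_2)$ itself is assumed supplied):

```python
import numpy as np

def theorem1_check(A1, A2, b1, b2):
    """Check the optimality criterion of Theorem 1 for x* with A1 x* = b1.
    Returns (x*, ||v||_inf); x* is a LAD solution iff ||v||_inf <= 1,
    and the unique one iff ||v||_inf < 1."""
    x = np.linalg.solve(A1, b1)
    s = np.sign(b2 - A2 @ x)
    v = np.linalg.solve(A1.T, A2.T @ s)   # v = (A1^T)^{-1} A2^T s
    return x, np.abs(v).max()
```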

32 Parameter estimation problem of a hyperplane for a set of points.
$n = 2$: for a given set of points in the plane, provided that not all of them lie on a line parallel to the second coordinate axis, there exists a best LAD-line passing through at least two different data points.
$n = 3$: for a given set of points in space, provided that not all of them lie on a plane parallel to the third coordinate axis, there exists a best LAD-plane passing through at least three different data points.
Y. Li, A maximum likelihood approach to least absolute deviation regression, EURASIP Journal on Applied Signal Processing 12(2004)
R. Scitovski, K. Sabo, I. Kuzmanović, I. Vazler, R. Cupec, R. Grbić, Three points method for searching the best least absolute deviations plane, 4th Croatian Mathematical Congress, Osijek, June 17-20, 2008
A. Schöbel, Locating Lines and Hyperplanes: Theory and Algorithms, Springer-Verlag, Berlin, 1999


34 LAD Problem I. Kuzmanović, K. Sabo, R. Scitovski, Method for searching the best least absolute deviations solution of an overdetermined system of linear equations, submitted

35 Numerical methods for searching the best LAD-solution:
general minimization methods: Hooke-Jeeves, differential evolution, Nelder-Mead, random search, simulated annealing
J. A. Nelder, R. Mead, A simplex method for function minimization, Comput. J. 7(1965)
C. T. Kelley, Iterative Methods for Optimization, SIAM, Philadelphia
various specializations of the Gauss-Newton method:
E. Z. Demidenko, Optimization and Regression, Nauka, Moscow, 1989 (in Russian)
M. R. Osborne, Finite Algorithms in Optimization and Data Analysis, Wiley, Chichester
linear programming methods:
I. Barrodale, F. D. K. Roberts, An improved algorithm for discrete $l_1$ linear approximation, SIAM J. Numer. Anal. 10(1973)
N. M. Korneenko, H. Martini, Hyperplane approximation and related topics, in: New Trends in Discrete and Computational Geometry (J. Pach, Ed.), Springer-Verlag, Berlin
some specialized methods: Barrodale-Roberts, Bartels-Conn-Sinclair, Bloomfield-Steiger
N. T. Trendafilov, G. A. Watson, The $\ell_1$ oblique procrustes problem, Statistics and Computing 14(2004), 39-51
P. Bloomfield, W. Steiger, Least Absolute Deviations: Theory, Applications, and Algorithms, Birkhäuser, Boston, 1983
Y. Dodge (Ed.), Statistical Data Analysis Based on the $L_1$-norm and Related Methods, Proceedings of the Third International Conference on Statistical Data Analysis Based on the $L_1$-norm and Related Methods, Elsevier, Neuchâtel, 1997

36 Lemma 1. Let $y_1 \leq y_2 \leq \dots \leq y_m$ be real numbers with corresponding weights $w_i > 0$, $W := \sum_{i=1}^m w_i$, and let $I = \{1,\dots,m\}$, $m \geq 2$, be a set of indices. Denote
$$\nu_0 = \begin{cases} \max\big\{\nu \in I : \sum_{i=1}^{\nu} w_i \leq \tfrac{W}{2}\big\}, & \text{if } \big\{\nu \in I : \sum_{i=1}^{\nu} w_i \leq \tfrac{W}{2}\big\} \neq \emptyset, \\ 0, & \text{otherwise.} \end{cases}$$
Furthermore, let $F:\mathbb{R}\to\mathbb{R}$ be the function defined by the formula $F(\alpha) = \sum_{i=1}^m w_i |y_i - \alpha|$. Then
(i) if $\sum_{i=1}^{\nu_0} w_i < \tfrac{W}{2}$, the minimum of the function $F$ is attained at the point $\alpha^* = y_{\nu_0+1}$;
(ii) if $\sum_{i=1}^{\nu_0} w_i = \tfrac{W}{2}$, the minimum of the function $F$ is attained at every point $\alpha^*$ of the segment $[y_{\nu_0}, y_{\nu_0+1}]$.
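Lemma 1 is the weighted median; a direct sketch that sorts once and scans the cumulative weights (in case (ii) it returns the left endpoint $y_{\nu_0}$ of the optimal segment):

```python
import numpy as np

def weighted_median(y, w):
    """Minimizer of F(alpha) = sum_i w_i * |y_i - alpha| following Lemma 1."""
    y = np.asarray(y, dtype=float)
    w = np.asarray(w, dtype=float)
    order = np.argsort(y)
    y, w = y[order], w[order]
    cw = np.cumsum(w)
    # first position whose cumulative weight reaches W/2; this is
    # y_{nu0+1} in case (i) and y_{nu0} in case (ii) of the lemma
    return y[np.searchsorted(cw, cw[-1] / 2.0)]

print(weighted_median([3.0, 1.0, 2.0], [1.0, 1.0, 1.0]))  # 2.0
```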

37 Lemma 2. Let $x_0$ be the unique solution of the system $Ax = b$, where $A \in \mathbb{R}^{n\times n}$ is a square nonsingular matrix and $b \in \mathbb{R}^n$ a given vector. If the $j$-th equation $a_j^T x = b_j$ of this system is replaced with the equation $u^T x = \beta$, whereby $u^T d_j \neq 0$, where $d_j \in \mathbb{R}^n$ is the $j$-th column of the matrix $A^{-1}$, then the solution of the new system is given by
$$x = x_0 + \alpha d_j, \qquad \alpha = \frac{\beta - u^T x_0}{u^T d_j}.$$
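Lemma 2 updates a solved square system after one equation is exchanged, without refactoring. A quick numerical verification of the formula on random data (all names here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n, j = 4, 1
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)
u = rng.standard_normal(n)           # new equation u^T x = beta
beta = rng.standard_normal()

x0 = np.linalg.solve(A, b)
d_j = np.linalg.inv(A)[:, j]         # j-th column of A^{-1}
alpha = (beta - u @ x0) / (u @ d_j)  # Lemma 2 update
x = x0 + alpha * d_j

A_new, b_new = A.copy(), b.copy()
A_new[j], b_new[j] = u, beta         # replace the j-th equation
assert np.allclose(x, np.linalg.solve(A_new, b_new))
```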

38 Definition. A vector $x_{J_k}^* \in \mathbb{R}^n$, $J_k = \{i_1,\dots,i_k\} \subseteq I$, $k \leq n$, is called the best $J_k$-LAD-solution of the problem $\min_{x\in\mathbb{R}^n} \|b - Ax\|_1$ if the minimum of the functional $x \mapsto \|r(x)\|_1$ subject to the conditions $r_i(x) := b_i - (Ax)_i = 0$, $i \in J_k$, is attained at $x_{J_k}^*$.
The best $J_n$-LAD-solution is obtained by solving the system of linear equations $r_i(x) = 0$, $i = i_1,\dots,i_n$; the best $J_{n-1}$-LAD-solution is obtained as a solution of one weighted median problem.

39 Lemma 3. Let $A \in \mathbb{R}^{m\times n}$ ($m > n$) be a matrix of full column rank, $b \in \mathbb{R}^m$ a given vector and $e = (1,\dots,m)^T$ a vector of indices. Furthermore, let $\Pi \in \mathbb{R}^{m\times m}$ be a permutation matrix such that
$$\Pi A = \begin{bmatrix} A_1^- \\ A_2^+ \end{bmatrix} = \begin{bmatrix} A_{11} & a_{1k} & A_{12} \\ A_{21} & a_{2k} & A_{22} \end{bmatrix}, \qquad \Pi b = \begin{bmatrix} b_1^- \\ b_2^+ \end{bmatrix},$$
where $A_{11} \in \mathbb{R}^{(n-1)\times(k-1)}$, $A_{12} \in \mathbb{R}^{(n-1)\times(n-k)}$, $A_{21} \in \mathbb{R}^{(m-n+1)\times(k-1)}$, $A_{22} \in \mathbb{R}^{(m-n+1)\times(n-k)}$, $a_{1k}, b_1^- \in \mathbb{R}^{n-1}$, $a_{2k}, b_2^+ \in \mathbb{R}^{m-n+1}$, whereby the index $k$ is such that $A_{1k} := [A_{11}\ A_{12}]$ is a nonsingular matrix. Let $A_{2k} := [A_{21}\ A_{22}]$, $I^- = \{(\Pi e)_k : k = 1,\dots,n-1\}$ and $I^+ = \{(\Pi e)_k : k = n, n+1,\dots,m\}$. Then there exist $i_0 \in I^+$ and the best $J_n$-LAD-solution $x^* \in \mathbb{R}^n$ such that $r_i(x^*) = 0$ for all $i \in I^- \cup \{i_0\}$. The $k$-th component $x_k^*$ of the vector $x^*$ is thereby a solution of the weighted median problem
$$\|A_{2k}\eta - b_2^+ - \alpha(A_{2k}\xi - a_{2k})\|_1 \to \min_\alpha, \qquad (1)$$
whereby $\xi$, resp. $\eta$, is the unique solution of the system
$$A_{1k}\xi = a_{1k}, \quad \text{resp.} \quad A_{1k}\eta = b_1^-, \qquad (2)$$
whereas the other components of the vector $x^*$ are obtained by solving the system
$$A_{1k}\,(x_1^*,\dots,x_{k-1}^*, x_{k+1}^*,\dots,x_n^*)^T = b_1^- - x_k^*\, a_{1k}. \qquad (3)$$
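Lemma 3 reduces the computation of a best $J_n$-LAD-solution to two solves with $A_{1k}$ and one weighted median. A sketch under the lemma's assumptions, with the held row set and the dropped column index supplied as inputs (the helper repeats the Lemma 1 construction so the block is self-contained):

```python
import numpy as np

def weighted_median(y, w):
    """Minimizer of sum_i w_i * |y_i - alpha| (Lemma 1)."""
    y, w = np.asarray(y, float), np.asarray(w, float)
    order = np.argsort(y)
    y, w = y[order], w[order]
    cw = np.cumsum(w)
    return y[np.searchsorted(cw, cw[-1] / 2.0)]

def best_Jn_solution(A, b, rows, k):
    """Best J_n-LAD-solution per Lemma 3: residuals vanish on the n-1
    equations in `rows` plus one more row located by a weighted median.
    `k` must be chosen so that A[rows] without column k is nonsingular."""
    m, n = A.shape
    others = [i for i in range(m) if i not in rows]
    keep = [j for j in range(n) if j != k]
    A1k, a1k, b1 = A[np.ix_(rows, keep)], A[rows, k], b[rows]
    A2k, a2k, b2 = A[np.ix_(others, keep)], A[others, k], b[others]
    xi = np.linalg.solve(A1k, a1k)            # A1k xi  = a1k   ... (2)
    eta = np.linalg.solve(A1k, b1)            # A1k eta = b1    ... (2)
    c = b2 - A2k @ eta
    d = A2k @ xi - a2k                        # nonzero under the Haar condition
    xk = weighted_median(-c / d, np.abs(d))   # solves (1)
    x = np.empty(n)
    x[keep] = np.linalg.solve(A1k, b1 - xk * a1k)   # system (3)
    x[k] = xk
    return x
```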

40 Assumption: the matrices $A$ and $[A\ b]$ satisfy the Haar condition.

41 Algorithm.
Step 0 (Input): $m$, $n$, $e = (1,\dots,m)^T$, $A \in \mathbb{R}^{m\times n}$, $b \in \mathbb{R}^m$, $\Pi_0 \in \mathbb{R}^{m\times m}$.
Step 1 (Defining the first submatrix): Determine an index $k$ such that $A_{1k} := [A_{11}\ A_{12}] \in \mathbb{R}^{(n-1)\times(n-1)}$ is a nonsingular matrix, where
$$H_0 := \Pi_0 A = \begin{bmatrix} A_1^- \\ A_2^+ \end{bmatrix} = \begin{bmatrix} A_{11} & a_{1k} & A_{12} \\ A_{21} & a_{2k} & A_{22} \end{bmatrix}, \qquad \Pi_0 b = \begin{bmatrix} b_1^- \\ b_2^+ \end{bmatrix}.$$
Set $I^+ = \{(\Pi_0 e)_i : i = n, n+1,\dots,m\}$ and $e := \Pi_0 e$.
Step 2 (Searching for a new equation): According to Lemma 3, determine $i_0 \in I^+$ and the solution $x^{(0)}$. Calculate $G_0 = G(x^{(0)}) := \|b - Ax^{(0)}\|_1$. Let $A_1 = \begin{bmatrix} A_1^- \\ a_{i_0}^T \end{bmatrix} \in \mathbb{R}^{n\times n}$, $b_1 = \begin{bmatrix} b_1^- \\ b_{i_0} \end{bmatrix} \in \mathbb{R}^n$. Furthermore, let $A_2 \in \mathbb{R}^{(m-n)\times n}$ be the matrix obtained from $A_2^+$ by dropping the $i_0$-th row, and $b_2 \in \mathbb{R}^{m-n}$ the vector obtained from $b_2^+$ by dropping the $i_0$-th component. Solve the system $A_1^T v = A_2^T s$, $s_i := \operatorname{sign}(b_2 - A_2 x^{(0)})_i$, and denote the solution by $v^{(0)}$. Determine $j_0$ such that $|v_{j_0}^{(0)}| = \max_{i=1,\dots,n} |v_i^{(0)}| =: v_M$. If $v_M \leq 1$, STOP; otherwise go to Step 3.
Step 3 (Equations exchange): Let $\Pi_1 \in \mathbb{R}^{m\times m}$ be the permutation matrix which in the matrix $H_0$ replaces the $j_0$-th row with the $i_0$-th row. Determine an index $k$ such that $A_{1k} := [A_{11}\ A_{12}] \in \mathbb{R}^{(n-1)\times(n-1)}$ is a nonsingular matrix, where
$$H_1 := \Pi_1 H_0 = \begin{bmatrix} A_1^- \\ A_2^+ \end{bmatrix} = \begin{bmatrix} A_{11} & a_{1k} & A_{12} \\ A_{21} & a_{2k} & A_{22} \end{bmatrix}, \qquad \Pi_1 b = \begin{bmatrix} b_1^- \\ b_2^+ \end{bmatrix}.$$
Set $I^+ = \{(\Pi_1 e)_i : i = n, n+1,\dots,m\}$ and $e := \Pi_1 e$.

42 Step 4 (Searching for a new equation): According to Lemma 2, determine $i_0 \in I^+$ and the solution $x^{(1)}$. Calculate $G_1 = G(x^{(1)}) := \|b - Ax^{(1)}\|_1$.
Step 5 (Preparation for a new loop): Let $A_1 = \begin{bmatrix} A_1^- \\ a_{i_0}^T \end{bmatrix} \in \mathbb{R}^{n\times n}$, $b_1 = \begin{bmatrix} b_1^- \\ b_{i_0} \end{bmatrix} \in \mathbb{R}^n$. Furthermore, let $A_2 \in \mathbb{R}^{(m-n)\times n}$ be the matrix obtained from $A_2^+$ by dropping the $i_0$-th row, and $b_2 \in \mathbb{R}^{m-n}$ the vector obtained from $b_2^+$ by dropping the $i_0$-th component. Solve the system $A_1^T v = A_2^T s$, $s_i := \operatorname{sign}(b_2 - A_2 x^{(1)})_i$, and denote the solution by $v^{(1)}$. Determine $j_0$ such that $|v_{j_0}^{(1)}| = \max_{i=1,\dots,n} |v_i^{(1)}| =: v_M$. If $v_M \leq 1$, STOP; otherwise put $H_0 = H_1$ and go to Step 3.
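Putting the pieces together, one illustrative reading of Steps 0-5 as a loop (not the authors' reference code): it reuses `weighted_median` and `best_Jn_solution` from the Lemma 3 sketch above, and chooses the dropped column by pivoted QR, anticipating the initialization described on the next slides:

```python
import numpy as np
from scipy.linalg import qr

def choose_k(A1):
    """Column to drop so that the remaining (n-1)x(n-1) block of the
    (n-1) x n matrix A1 is nonsingular (QR with column pivoting)."""
    _, _, piv = qr(A1, pivoting=True)
    return int(piv[-1])

def lad_exchange(A, b, max_iter=200):
    """Exchange-iteration sketch; assumes A and [A b] satisfy the Haar
    condition, as the convergence theorem below requires."""
    m, n = A.shape
    active = list(range(n - 1))                    # I^-: rows held exactly
    for _ in range(max_iter):
        k = choose_k(A[active])
        x = best_Jn_solution(A, b, active, k)      # Lemma 3 (Steps 2/4)
        r = b - A @ x
        others = [i for i in range(m) if i not in active]
        i0 = min(others, key=lambda i: abs(r[i]))  # the extra zero residual
        work = active + [i0]
        rest = [i for i in range(m) if i not in work]
        s = np.sign(r[rest])
        v = np.linalg.solve(A[work].T, A[rest].T @ s)
        j0 = int(np.argmax(np.abs(v)))
        if abs(v[j0]) <= 1.0:                      # optimality test
            return x
        active = [work[i] for i in range(n) if i != j0]   # Step 3: exchange
    return x
```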

43 Initialization. The initial matrix $A_1^-$: since the matrix $A$ satisfies the Haar condition, take the first $n-1$ rows of $A$ and $\Pi_0 = I$ (identity matrix). A nonsingular submatrix $A_{1k}$ of the matrix
$$H_0 := \Pi_0 A = \begin{bmatrix} A_1^- \\ A_2^+ \end{bmatrix} = \begin{bmatrix} A_{11} & a_{1k} & A_{12} \\ A_{21} & a_{2k} & A_{22} \end{bmatrix}$$
in Step 1 is found by applying QR factorization with column pivoting:
$$A_1^- \Pi := [A_{11}\ a_{1k}\ A_{12}]\, \Pi = Q\,[R\ \rho],$$
where $\Pi \in \mathbb{R}^{n\times n}$ is a permutation matrix, $Q \in \mathbb{R}^{(n-1)\times(n-1)}$ an orthogonal matrix, $R \in \mathbb{R}^{(n-1)\times(n-1)}$ an upper triangular nonsingular matrix, and $\rho \in \mathbb{R}^{n-1}$.


45 Then $A_{1k} := [A_{11}\ A_{12}] = Q R \Pi_1^T$ and $a_{1k} = Q\rho$, where $\Pi_1$ is the matrix obtained from $\Pi$ by dropping the $k$-th row and the $n$-th column.
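With `scipy.linalg.qr(..., pivoting=True)` the factorization $A_1^- \Pi = Q[R\ \rho]$ is available directly; the dropped column $k$ is the one the pivoting pushes to the last position. A small sketch on random data:

```python
import numpy as np
from scipy.linalg import qr

rng = np.random.default_rng(0)
m, n = 8, 4
A = rng.standard_normal((m, n))
A1 = A[: n - 1]                     # first n-1 rows of A

Q, R, piv = qr(A1, pivoting=True)   # A1[:, piv] = Q @ R
k = int(piv[-1])                    # column pivoted to the last position
keep = [j for j in range(n) if j != k]
A1k = A1[:, keep]                   # nonsingular (n-1) x (n-1) submatrix
assert abs(np.linalg.det(A1k)) > 1e-12
```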

46 Theorem 2. Let $x$ be the approximation of the LAD problem obtained in Step 2, i.e. Step 4, of the Algorithm as the solution of the system $A_1 x = b_1$, where $A_1 = \begin{bmatrix} A_1^- \\ a_{i_0}^T \end{bmatrix} \in \mathbb{R}^{n\times n}$, $b_1 = \begin{bmatrix} b_1^- \\ b_{i_0} \end{bmatrix} \in \mathbb{R}^n$. Furthermore, let $A_2 \in \mathbb{R}^{(m-n)\times n}$ be the matrix obtained from $A_2^+$ by dropping the $i_0$-th row, $b_2 \in \mathbb{R}^{m-n}$ the vector obtained from $b_2^+$ by dropping the $i_0$-th component, and $v \in \mathbb{R}^n$ the solution of the system $A_1^T v = A_2^T s$, $s_i := \operatorname{sign}(b_2 - A_2 x)_i$. Denote $|v_{j_0}| = \max_{i=1,\dots,n} |v_i|$.
I. If $|v_{j_0}| \leq 1$, then $x$ is the best LAD-solution.
II. If $|v_{j_0}| > 1$, then $j_0 \neq n$, and the maximum decrease of the minimizing function value is attained when in Step 3 the $j_0$-th row of the matrix $A_1$ is replaced.

47 Convergence. Theorem 4. Let $A \in \mathbb{R}^{m\times n}$ be a matrix and $b \in \mathbb{R}^m$ a given vector, $m \geq n$, such that the matrices $A$ and $[A\ b]$ satisfy the Haar condition. Then the sequence $(x^{(k)})$ defined by the iterative procedure in the Algorithm converges in finitely many steps to the best LAD-solution.

48 Example 1. Table 1 lists the data ($i$, rows of $A$, components of $b$); successive iterations of the Algorithm yield the objective values $G_0, G_1, G_2, G_3, G_4$.

53 Application. Data points $T_i = (x_1^{(i)}, x_2^{(i)}, z^{(i)})$, $i = 1,\dots,m$, where $x_1^{(i)}$ is the fat thickness, $x_2^{(i)}$ the lumbar muscle thickness, and $z^{(i)}$ the muscle percentage of the $i$-th pig; model $z = a_0 + a_1 x_1 + a_2 x_2$; the data contain outliers.


58 Root-Mean-Square Error of Prediction (RMSEP).
D. Causeur, G. Daumas, T. Dhorne, B. Engel, M. Font i Furnols, S. Hojsgaard, Statistical handbook for assessing pig classification methods: Recommendations from the EUPIGCLASS project group
EU reference method: P. Walstra, G. S. M. Merkus, Procedure for assessment of the lean meat percentage as a consequence of the new EU reference dissection method in pig carcass classification, DLO-Research Institute for Animal Science and Health (ID-DLO), Research Branch Zeist, P.O. Box 501, 3700 AM Zeist, The Netherlands, 1995
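The slide gives no formula, but RMSEP is conventionally the root of the mean squared prediction error; assuming that definition:

```python
import numpy as np

def rmsep(z, z_hat):
    """Root-mean-square error of prediction (standard definition assumed)."""
    z, z_hat = np.asarray(z, float), np.asarray(z_hat, float)
    return float(np.sqrt(np.mean((z - z_hat) ** 2)))
```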

59 Example 2, $m = 145$.
EU reference method (RMSEP): $a_2 = $ , $a_1 = $ , $a_0 = $
LAD method: $a_2 = $ , $a_1 = $ , $a_0 = $
