Solving Regularized Total Least Squares Problems
Heinrich Voss
Hamburg University of Technology, Institute of Numerical Simulation
Joint work with Jörg Lampe (TUHH)
Harrachov, August (42 slides)
Regularized total least squares problems

Total Least Squares Problem

The ordinary Least Squares (LS) method assumes that the system matrix $A$ of a linear model is error free, and all errors are confined to the right hand side $b$.

Given $A \in \mathbb{R}^{m\times n}$, $b \in \mathbb{R}^m$, $m \ge n$. Find $x \in \mathbb{R}^n$ and $\Delta b \in \mathbb{R}^m$ such that
$$\|\Delta b\| = \min! \quad \text{subject to} \quad Ax = b + \Delta b.$$

Obviously equivalent to: find $x \in \mathbb{R}^n$ such that $\|Ax - b\| = \min!$
Total Least Squares

In practical applications it may happen that all data are contaminated by noise, for instance because the matrix $A$ is also obtained from measurements.

If the true values of the observed variables satisfy linear relations, and if the errors in the observations are independent random variables with zero mean and equal variance, then the total least squares (TLS) approach often gives better estimates than LS.

Given $A \in \mathbb{R}^{m\times n}$, $b \in \mathbb{R}^m$, $m \ge n$. Find $\Delta A \in \mathbb{R}^{m\times n}$, $\Delta b \in \mathbb{R}^m$ and $x \in \mathbb{R}^n$ such that
$$\|[\Delta A, \Delta b]\|_F^2 = \min! \quad \text{subject to} \quad (A + \Delta A)x = b + \Delta b, \qquad (1)$$
where $\|\cdot\|_F$ denotes the Frobenius norm.

In statistics this approach is called the errors-in-variables problem or orthogonal regression; in image deblurring, blind deconvolution.
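The unregularized TLS problem (1) has a classical closed-form solution via the SVD of $[A, b]$ (Golub, Van Loan): in the generic case, $x$ is read off the right singular vector belonging to the smallest singular value. A minimal sketch, not part of the talk's own method; the function name and the data below are illustrative:

```python
import numpy as np

def tls_solution(A, b):
    """Classical TLS solution from the SVD of [A, b].
    Assumes the generic case: the smallest singular value of [A, b] is
    simple and the last component of its right singular vector is nonzero."""
    m, n = A.shape
    V = np.linalg.svd(np.column_stack([A, b]))[2].T  # right singular vectors
    v = V[:, -1]                  # direction of the smallest singular value
    return -v[:n] / v[n]          # [x; -1] spans the null direction of [A, b]

# With exact (noise-free) data the TLS solution reproduces the true x:
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 3))
x_true = np.array([1.0, -2.0, 0.5])
b = A @ x_true
x = tls_solution(A, b)
```

With noisy $A$ and $b$, the same call yields the TLS estimate rather than the exact $x$.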
Regularized Total Least Squares Problem

If $A$ and $[A, b]$ are ill-conditioned, regularization is necessary.

Let $L \in \mathbb{R}^{k\times n}$, $k \le n$, and $\delta > 0$. Then the quadratically constrained formulation of the Regularized Total Least Squares (RTLS) problem reads: Find $\Delta A \in \mathbb{R}^{m\times n}$, $\Delta b \in \mathbb{R}^m$ and $x \in \mathbb{R}^n$ such that
$$\|[\Delta A, \Delta b]\|_F^2 = \min! \quad \text{subject to} \quad (A + \Delta A)x = b + \Delta b, \quad \|Lx\|^2 \le \delta^2.$$
In the following the constraint is assumed to be active at the solution, i.e. $\|Lx\|^2 = \delta^2$.

Using the orthogonal distance this problem can be rewritten as (cf. Golub, Van Loan 1980): Find $x \in \mathbb{R}^n$ such that
$$f(x) := \frac{\|Ax - b\|^2}{1 + \|x\|^2} = \min! \quad \text{subject to} \quad \|Lx\|^2 = \delta^2.$$
Regularized Total Least Squares Problem ct.

Theorem 1: Let $\mathcal{N}(L)$ be the null space of $L$. If
$$f^* = \inf\{f(x) : \|Lx\|^2 = \delta^2\} < \min_{x \in \mathcal{N}(L),\, x \ne 0} \frac{\|Ax\|^2}{\|x\|^2}, \qquad (1)$$
then
$$f(x) := \frac{\|Ax - b\|^2}{1 + \|x\|^2} = \min! \quad \text{subject to} \quad \|Lx\|^2 = \delta^2 \qquad (2)$$
admits a global minimum.

Conversely, if problem (2) admits a global minimum, then
$$f^* \le \min_{x \in \mathcal{N}(L),\, x \ne 0} \frac{\|Ax\|^2}{\|x\|^2}.$$
Under condition (1), problem (2) is equivalent to the quadratic optimization problem
$$\|Ax - b\|^2 - f^*(1 + \|x\|^2) = \min! \quad \text{subject to} \quad \|Lx\|^2 = \delta^2, \qquad (3)$$
i.e. $x$ is a global minimizer of problem (2) if and only if it is a global minimizer of (3).

For fixed $y \in \mathbb{R}^n$ find $x \in \mathbb{R}^n$ such that
$$g(x; y) := \|Ax - b\|^2 - f(y)(1 + \|x\|^2) = \min! \quad \text{subject to} \quad \|Lx\|^2 = \delta^2. \qquad (P_y)$$

Lemma 1 (Sima, Van Huffel, Golub 2004): Problem $(P_y)$ admits a global minimizer if and only if
$$f(y) \le \min_{x \in \mathcal{N}(L),\, x \ne 0} \frac{x^T A^T A x}{x^T x}.$$
RTLSQEP Method (Sima, Van Huffel, Golub 2004)

Lemma 2: Assume that $y$ satisfies the conditions of Lemma 1 and $\|Ly\| = \delta$, and let $z$ be a global minimizer of problem $(P_y)$. Then it holds that $f(z) \le f(y)$.

Proof:
$$(1 + \|z\|^2)(f(z) - f(y)) = g(z; y) \le g(y; y) = (1 + \|y\|^2)(f(y) - f(y)) = 0.$$

RTLSQEP algorithm:
Require: $x^0$ satisfying the conditions of Lemma 1 with $\|Lx^0\| = \delta$.
for $m = 0, 1, 2, \dots$ until convergence do
  Determine a global minimizer $x^{m+1}$ of $g(x; x^m) = \min!$ subject to $\|Lx\|^2 = \delta^2$.
end for
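For the special case $L = I$ the whole RTLSQEP iteration can be sketched end-to-end, since then $W = A^T A - f(x^m)I$, $h = A^T b$ and $z = x$; each inner problem is solved through the quadratic eigenproblem derived later in the talk, here by a dense linearization. A sketch only, with made-up data; the talk uses iterative eigensolvers for the inner step:

```python
import numpy as np

def rtlsqep_identity(A, b, delta, x0, iters=20):
    """RTLSQEP sketch for L = I: W = A^T A - f(x^m) I, h = A^T b.
    Inner problem: rightmost real eigenvalue of
    (W + lam I)^2 u = delta^{-2} h h^T u, then x = (W + lam I) u."""
    n = A.shape[1]
    h = A.T @ b
    f = lambda x: np.dot(A @ x - b, A @ x - b) / (1.0 + np.dot(x, x))
    x, history = x0, [f(x0)]
    for _ in range(iters):
        W = A.T @ A - f(x) * np.eye(n)
        C = W @ W - np.outer(h, h) / delta**2
        M = np.block([[np.zeros((n, n)), np.eye(n)], [-C, -2.0 * W]])
        vals, vecs = np.linalg.eig(M)
        real = np.abs(vals.imag) < 1e-8 * (1.0 + np.abs(vals))
        k = np.argmax(np.where(real, vals.real, -np.inf))
        lam, u = vals[k].real, vecs[:n, k].real
        u *= delta**2 / (h @ u)            # scale so that h^T u = delta^2
        x = (W + lam * np.eye(n)) @ u      # global minimizer of the inner problem
        history.append(f(x))
    return x, history

rng = np.random.default_rng(2)
A = rng.standard_normal((15, 3))
b = rng.standard_normal(15)
delta = 0.5
x0 = np.array([delta, 0.0, 0.0])           # feasible start: ||x0|| = delta
x, hist = rtlsqep_identity(A, b, delta, x0)
```

By Lemma 2 the recorded values $f(x^m)$ should be non-increasing, which the run reproduces numerically.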
RTLS Method

Obviously, $0 \le f(x^{m+1}) \le f(x^m)$.

Problem $(P_{x^m})$ can be solved via the first order necessary optimality conditions
$$(A^T A - f(x^m) I)x + \lambda L^T L x = A^T b, \qquad \|Lx\|^2 = \delta^2.$$

Although $g(\cdot\,; x^m)$ in general is not convex, these conditions are even sufficient if the Lagrange parameter $\lambda$ is chosen maximal.
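For a fixed $f$, the first order conditions are a linear system in $x$ coupled with a scalar equation in $\lambda$. A naive sketch for the definite case (it assumes $A^TA - fI$ positive semidefinite and $L$ square, so that $\|Lx(\lambda)\|$ decreases in $\lambda$, and finds $\lambda$ by bisection; the talk instead selects the maximal Lagrange multiplier through a quadratic eigenproblem). Function name and data are illustrative:

```python
import numpy as np

def solve_foc(A, b, L, delta, f, lam_hi=1e6):
    """Bisection on lam for
        (A^T A - f I) x + lam L^T L x = A^T b,  ||L x||^2 = delta^2.
    Sketch for the case where ||L x(lam)|| is monotone in lam."""
    n = A.shape[1]
    B = A.T @ A - f * np.eye(n)
    LtL = L.T @ L
    Atb = A.T @ b
    lo, hi = 0.0, lam_hi
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        x = np.linalg.solve(B + mid * LtL, Atb)
        if np.linalg.norm(L @ x) > delta:
            lo = mid          # ||Lx|| still too large: increase lam
        else:
            hi = mid
    return x, mid

rng = np.random.default_rng(3)
A = rng.standard_normal((20, 3))
b = rng.standard_normal(20)
x_ls = np.linalg.solve(A.T @ A, A.T @ b)
delta = 0.5 * np.linalg.norm(x_ls)     # constraint tighter than the LS norm
x, lam = solve_foc(A, b, np.eye(3), delta, 0.0)
```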
Theorem 2: Assume that $(\hat\lambda, \hat x)$ solves the first order conditions
$$(A^T A - f(y) I)x + \lambda L^T L x = A^T b, \qquad \|Lx\|^2 = \delta^2. \qquad (*)$$
If $\|Ly\| = \delta$ and $\hat\lambda$ is the maximal Lagrange multiplier, then $\hat x$ is a global minimizer of problem $(P_y)$.

Proof: The statement follows immediately from the following equation, which can be shown similarly as in W. Gander (1981): if $(\lambda_j, z^j)$, $j = 1, 2$, are solutions of $(*)$, then it holds that
$$g(z^2; y) - g(z^1; y) = \frac{1}{2}(\lambda_1 - \lambda_2)\,\|L(z^1 - z^2)\|^2.$$
A quadratic eigenproblem

$$(A^T A - f(y) I)x + \lambda L^T L x = A^T b, \qquad \|Lx\|^2 = \delta^2. \qquad (*)$$

If $L$ is square and nonsingular, let $z := Lx$. Then
$$Wz + \lambda z := L^{-T}(A^T A - f(y) I)L^{-1} z + \lambda z = L^{-T} A^T b =: h, \qquad z^T z = \delta^2.$$

With $u := (W + \lambda I)^{-2} h$ it follows that
$$h^T u = z^T z = \delta^2 \quad \Longrightarrow \quad h = \frac{1}{\delta^2}\, h h^T u,$$
and hence
$$(W + \lambda I)^2 u - \frac{1}{\delta^2}\, h h^T u = 0.$$

If $\hat\lambda$ is the right-most real eigenvalue, and the corresponding eigenvector $u$ is scaled such that $h^T u = \delta^2$, then the solution of problem $(*)$ is recovered as $x = L^{-1}(W + \hat\lambda I)u$.
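This construction can be checked with a dense sketch: linearize the quadratic eigenproblem as a companion eigenproblem of twice the size, take the right-most real eigenvalue, scale the eigenvector, and recover $z = (W + \hat\lambda I)u$, which then satisfies $(W + \hat\lambda I)z = h$ and $z^Tz = \delta^2$. The function name and the data below are illustrative:

```python
import numpy as np

def rightmost_qep(W, h, delta):
    """Right-most real eigenvalue of (W + lam I)^2 u = delta^{-2} h h^T u
    via a dense companion linearization, with u scaled so h^T u = delta^2.
    A sketch only; the talk solves these QEPs with iterative methods."""
    n = W.shape[0]
    C = W @ W - np.outer(h, h) / delta**2
    M = np.block([[np.zeros((n, n)), np.eye(n)],
                  [-C, -2.0 * W]])            # [u; lam*u] eigenvector structure
    vals, vecs = np.linalg.eig(M)
    real = np.abs(vals.imag) < 1e-8 * (1.0 + np.abs(vals))
    k = np.argmax(np.where(real, vals.real, -np.inf))
    lam = vals[k].real
    u = vecs[:n, k].real
    return lam, u * delta**2 / (h @ u)        # normalization h^T u = delta^2

W = np.diag([2.0, 3.0])
h = np.array([1.0, 1.0])
lam, u = rightmost_qep(W, h, 1.0)
z = (W + lam * np.eye(2)) @ u                 # recovered z = Lx
```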
A quadratic eigenproblem ct.

If $L \in \mathbb{R}^{k\times n}$ with linearly independent rows and $k < n$, the first order conditions can again be reduced to a quadratic eigenproblem
$$(W_m + \lambda I)^2 u - \frac{1}{\delta^2}\, h_m h_m^T u = 0,$$
where
$$W_m = C - f(x^m) D - S\,(T - f(x^m) I_{n-k})^{-1} S^T, \qquad h_m = g - S\,(T - f(x^m) I_{n-k})^{-1} c,$$
with $C, D \in \mathbb{R}^{k\times k}$, $S \in \mathbb{R}^{k\times(n-k)}$, $T \in \mathbb{R}^{(n-k)\times(n-k)}$, $g \in \mathbb{R}^k$, $c \in \mathbb{R}^{n-k}$, and $C$, $D$, $T$ symmetric.
Nonlinear maxmin characterization

Let $T(\lambda) \in \mathbb{C}^{n\times n}$, $T(\lambda) = T(\lambda)^H$, for $\lambda \in J$, where $J \subset \mathbb{R}$ is an open interval (maybe unbounded).

For every fixed $x \in \mathbb{C}^n$, $x \ne 0$, assume that the real function
$$f(\cdot\,; x) : J \to \mathbb{R}, \qquad f(\lambda; x) := x^H T(\lambda) x,$$
is continuous, and that the real equation $f(\lambda; x) = 0$ has at most one solution $\lambda =: p(x)$ in $J$.

Then the equation $f(\lambda; x) = 0$ implicitly defines a functional $p$ on some subset $D$ of $\mathbb{C}^n$, which we call the Rayleigh functional.

Assume further that
$$(\lambda - p(x))\, f(\lambda; x) > 0 \quad \text{for every } x \in D,\ \lambda \ne p(x).$$
maxmin characterization (V., Werner 1982)

Let $\sup_{v \in D} p(v) \in J$, and assume that there exists a subspace $V \subset \mathbb{C}^n$ of dimension $l$ such that $V \cap D \ne \emptyset$ and $\inf_{v \in V \cap D} p(v) \in J$.

Then $T(\lambda)x = 0$ has at least $l$ eigenvalues in $J$, and for $j = 1, \dots, l$ the $j$-largest eigenvalue $\lambda_j$ can be characterized by
$$\lambda_j = \max_{\dim V = j,\ V \cap D \ne \emptyset}\ \inf_{v \in V \cap D} p(v). \qquad (1)$$

For $j = 1, \dots, l$ every $j$-dimensional subspace $\tilde V \subset \mathbb{C}^n$ with $\tilde V \cap D \ne \emptyset$ and $\lambda_j = \inf_{v \in \tilde V \cap D} p(v)$ is contained in $D \cup \{0\}$, and the maxmin characterization of $\lambda_j$ can be replaced by
$$\lambda_j = \max_{\dim V = j,\ V \setminus \{0\} \subset D}\ \min_{v \in V \setminus \{0\}} p(v).$$
Back to the quadratic eigenproblem

$$T(\lambda)x := \Big((W + \lambda I)^2 - \frac{1}{\delta^2}\, h h^T\Big) x = 0. \qquad \text{(QEP)}$$

For $x \ne 0$,
$$f(\lambda; x) = x^H T(\lambda) x = \lambda^2 \|x\|^2 + 2\lambda\, x^H W x + \|Wx\|^2 - |x^H h|^2/\delta^2$$
is a parabola which attains its minimum at $\lambda = -\dfrac{x^H W x}{x^H x}$.

Let $J = (-\lambda_{\min}, \infty)$, where $\lambda_{\min}$ is the minimum eigenvalue of $W$. Then $f(\lambda; x) = 0$ has at most one solution $p(x) \in J$ for every $x \ne 0$. Hence the Rayleigh functional $p$ of (QEP) corresponding to $J$ is defined, and the general conditions are satisfied.
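Since $f(\lambda; x)$ is an upward parabola whose vertex lies at or below $-\lambda_{\min}$, the root in $J$, if it exists, is the larger of the two roots. A small sketch of evaluating $p(x)$ this way (real case; the function name and data are illustrative, and boundary cases are glossed over):

```python
import numpy as np

def rayleigh_functional(W, h, delta, x):
    """p(x): root of lam^2 ||x||^2 + 2 lam x^T W x + ||Wx||^2
    - (h^T x)^2 / delta^2 in J = (-lam_min, inf), i.e. the larger
    root of the parabola; returns None if x is not in the domain D."""
    nrm2 = x @ x
    xWx = x @ (W @ x)
    Wx = W @ x
    c = Wx @ Wx - (h @ x)**2 / delta**2
    disc = xWx**2 - nrm2 * c
    if disc < 0:
        return None                        # f(.; x) has no real root
    lam = (-xWx + np.sqrt(disc)) / nrm2    # larger root of the parabola
    lam_min = np.linalg.eigvalsh(W)[0]
    return lam if lam > -lam_min else None

W = np.diag([2.0, 3.0])
h = np.array([1.0, 1.0])
p = rayleigh_functional(W, h, 1.0, np.array([1.0, 0.0]))
```

For $x = (1, 0)^T$ the parabola is $\lambda^2 + 4\lambda + 3$, whose larger root $-1$ lies in $J = (-2, \infty)$; for $x = (0, 1)^T$ the larger root is $-2 \notin J$, so that $x \notin D$.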
Characterization of the maximal real eigenvalue

Let $V_{\min}$ be the eigenspace of $W$ corresponding to $\lambda_{\min}$. Then for every $x_{\min} \in V_{\min}$,
$$f(-\lambda_{\min}; x_{\min}) = x_{\min}^H (W - \lambda_{\min} I)^2 x_{\min} - |x_{\min}^H h|^2/\delta^2 = -|x_{\min}^H h|^2/\delta^2 \le 0.$$

Hence, if $x_{\min}^H h \ne 0$ for some $x_{\min} \in V_{\min}$, then $x_{\min} \in D$.

If $h \perp V_{\min}$, and if the minimum eigenvalue $\mu_{\min}$ of $T(-\lambda_{\min})$ is negative, then for the corresponding eigenvector $y_{\min}$ it holds that
$$f(-\lambda_{\min}; y_{\min}) = y_{\min}^H T(-\lambda_{\min}) y_{\min} = \mu_{\min} \|y_{\min}\|^2 < 0,$$
and $y_{\min} \in D$.

If $h \perp V_{\min}$ and $T(-\lambda_{\min})$ is positive semidefinite, then $f(-\lambda_{\min}; x) = x^H T(-\lambda_{\min}) x \ge 0$ for every $x \ne 0$, and $D = \emptyset$.
Characterization of the maximal real eigenvalue ct.

Assume that $D \ne \emptyset$. For $x^H h = 0$, $x \ne 0$, it holds that
$$f(\lambda; x) = \|(W + \lambda I)x\|^2 > 0 \quad \text{for every } \lambda \in J,$$
i.e. $x \notin D$.

Hence $D$ does not contain a two-dimensional subspace of $\mathbb{R}^n$ (every such subspace contains a nonzero vector orthogonal to $h$), and therefore $J$ contains at most one eigenvalue of (QEP).

If $\lambda \in \mathbb{C}$ is a non-real eigenvalue of (QEP) and $x$ a corresponding eigenvector, then
$$x^H T(\lambda) x = \lambda^2 \|x\|^2 + 2\lambda\, x^H W x + \|Wx\|^2 - |x^H h|^2/\delta^2 = 0.$$
Hence the real part of $\lambda$ satisfies
$$\operatorname{Re}(\lambda) = -\frac{x^H W x}{x^H x} \le -\lambda_{\min}.$$
Theorem 3

Let $\lambda_{\min}$ be the minimal eigenvalue of $W$, and let $V_{\min}$ be the corresponding eigenspace.

If $h \perp V_{\min}$ and $T(-\lambda_{\min})$ is positive semidefinite, then $\hat\lambda := -\lambda_{\min}$ is the maximal real eigenvalue of (QEP).

Otherwise, the maximal real eigenvalue is the unique eigenvalue $\hat\lambda$ of (QEP) in $J = (-\lambda_{\min}, \infty)$, and it holds that $\hat\lambda = \max_{x \in D} p(x)$.

In either case $\hat\lambda$ is the right-most eigenvalue of (QEP), i.e. $\operatorname{Re}(\lambda) \le -\lambda_{\min} \le \hat\lambda$ for every eigenvalue $\lambda \ne \hat\lambda$ of (QEP).
Example

[Figures: computed eigenvalues for an example problem, with a close-up; plots not reproduced.]
Positivity of $\hat\lambda$

Sima et al. claimed that the right-most eigenvalue is always positive.

Simplest counterexample: if $W$ is positive definite with eigenvalues $\lambda_j > 0$, then the $-\lambda_j$ are the only eigenvalues of the quadratic eigenproblem $(W + \lambda I)^2 x = 0$, and if the term $\delta^{-2} h h^T$ is small enough, the perturbed quadratic problem has no positive eigenvalue; its right-most eigenvalue is negative.

However, in the quadratic eigenproblems occurring in regularized total least squares problems, $\delta$ and $h$ are not arbitrary: regularization only makes sense if $\delta \le \|L x_{TLS}\|$, where $x_{TLS}$ denotes the solution of the total least squares problem without regularization. The following theorem characterizes the case that the right-most eigenvalue is negative.
Positivity of $\hat\lambda$ ct.

Theorem 4: The maximal real eigenvalue $\hat\lambda$ of the quadratic problem
$$(W + \lambda I)^2 x - \frac{1}{\delta^2}\, h h^T x = 0$$
is negative if and only if $W$ is positive definite and $\|W^{-1} h\| < \delta$.

For the quadratic eigenproblem occurring in regularized total least squares it holds that
$$\|W^{-1} h\| = \|L (A^T A - f(x) I)^{-1} A^T b\|.$$

For the standard case $L = I$ the right-most eigenvalue $\hat\lambda$ is always nonnegative if $\delta \le \|x_{TLS}\|$.
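Theorem 4 is easy to probe numerically: take a positive definite $W$ and a $\delta$ well above $\|W^{-1}h\|$, and the right-most real eigenvalue of the quadratic problem indeed comes out negative. A small check with made-up data:

```python
import numpy as np

# Positive definite W and ||W^{-1} h|| << delta, so by Theorem 4 the
# right-most real eigenvalue of (W + lam I)^2 - delta^{-2} h h^T is negative.
W = np.diag([2.0, 3.0])
h = np.array([1.0, 1.0])
delta = 10.0                                   # ||W^{-1} h|| ~ 0.6 < delta
C = W @ W - np.outer(h, h) / delta**2
M = np.block([[np.zeros((2, 2)), np.eye(2)],
              [-C, -2.0 * W]])                 # companion linearization
vals = np.linalg.eigvals(M)
rightmost = max(v.real for v in vals if abs(v.imag) < 1e-8)
```

Here the right-most real eigenvalue lies in $(-\lambda_{\min}, 0) = (-2, 0)$, slightly to the right of $-2$ because the rank-one term is small.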
Convergence

Theorem 5: Any limit point $x^*$ of the sequence $\{x^m\}$ constructed by RTLSQEP is a global minimizer of
$$f(x) = \frac{\|Ax - b\|^2}{1 + \|x\|^2} \quad \text{subject to} \quad \|Lx\|^2 = \delta^2.$$

Proof: Let $x^*$ be a limit point of $\{x^m\}$, and let $\{x^{m_j}\}$ be a subsequence converging to $x^*$. Then $x^{m_j}$ solves the first order conditions
$$(A^T A - f(x^{m_j - 1}) I)\,x^{m_j} + \lambda_{m_j} L^T L\, x^{m_j} = A^T b.$$
From the monotonicity of $f(x^m)$ it follows that $\lim_{j\to\infty} f(x^{m_j - 1}) = f(x^*)$.
Proof ct.

Since $W(y)$ and $h(y)$ depend continuously on $y$, the sequence of right-most eigenvalues $\{\lambda_{m_j}\}$ converges to some $\lambda^*$, and $x^*$ satisfies
$$(A^T A - f(x^*) I)x^* + \lambda^* L^T L x^* = A^T b, \qquad \|Lx^*\|^2 = \delta^2,$$
where $\lambda^*$ is the maximal Lagrange multiplier.

Hence $x^*$ is a global minimizer of $g(x; x^*) = \min!$ subject to $\|Lx\|^2 = \delta^2$, and for every $y \in \mathbb{R}^n$ with $\|Ly\|^2 = \delta^2$ it follows that
$$0 = g(x^*; x^*) \le g(y; x^*) = \|Ay - b\|^2 - f(x^*)(1 + \|y\|^2) = (f(y) - f(x^*))(1 + \|y\|^2),$$
i.e. $f(y) \ge f(x^*)$.
Numerical considerations

Quadratic eigenproblem

The quadratic eigenproblems
$$T_m(\lambda)z = (W_m + \lambda I)^2 z - \frac{1}{\delta^2}\, h_m h_m^T z = 0$$
can be solved by
- linearization
- a Krylov subspace method for QEPs (Li & Ye 2003)
- SOAR (Bai & Su 2005)
- the nonlinear Arnoldi method (Meerbergen 2001, V. 2004)
Krylov subspace method: Li & Ye 2003

For $A, B \in \mathbb{R}^{n\times n}$ such that some linear combination of $A$ and $B$ is a matrix of rank $q$, an Arnoldi-type process determines a matrix $Q \in \mathbb{R}^{n\times(m+q+1)}$ with orthonormal columns and two matrices $H_a, H_b \in \mathbb{R}^{(m+q+1)\times m}$ with lower bandwidth $q + 1$ such that
$$AQ(:, 1{:}m) = Q(:, 1{:}m{+}q{+}1)\, H_a \quad \text{and} \quad BQ(:, 1{:}m) = Q(:, 1{:}m{+}q{+}1)\, H_b.$$

Then approximations to eigenpairs of the quadratic eigenproblem $(\lambda^2 I - \lambda A - B)y = 0$ are obtained from its projection onto $\operatorname{span} Q(:, 1{:}m)$, which reads
$$(\lambda^2 I_m - \lambda H_a(1{:}m, 1{:}m) - H_b(1{:}m, 1{:}m))\,z = 0.$$

For the quadratic eigenproblem $((W + \lambda I)^2 - \delta^{-2} h h^T)x = 0$ the algorithm of Li & Ye is applied with $A = W$ and $B = hh^T$, from which the projected problem
$$(\theta^2 I + 2\theta H_a(1{:}m, 1{:}m) + H_a(1{:}m{+}2, 1{:}m)^T H_a(1{:}m{+}2, 1{:}m) - \delta^{-2} H_b(1{:}m, 1{:}m))\,z = 0$$
is easily obtained.
SOAR: Bai & Su 2005

The Second Order Arnoldi Reduction method is based on the observation that the Krylov space of the linearization
$$\begin{pmatrix} A & B \\ I & O \end{pmatrix}\begin{pmatrix} y \\ z \end{pmatrix} = \lambda \begin{pmatrix} y \\ z \end{pmatrix}$$
of $(\lambda^2 I - \lambda A - B)y = 0$ with initial vector $\binom{r_0}{0}$ has the form
$$\mathcal{K}_k = \left\{\binom{r_0}{0}, \binom{r_1}{r_0}, \binom{r_2}{r_1}, \dots, \binom{r_{k-1}}{r_{k-2}}\right\},$$
where $r_1 = A r_0$ and $r_j = A r_{j-1} + B r_{j-2}$ for $j \ge 2$.

The entire information on $\mathcal{K}_k$ is therefore contained in the Second Order Krylov Space
$$\mathcal{G}_k(A, B) = \operatorname{span}\{r_0, r_1, \dots, r_{k-1}\}.$$
SOAR determines an orthonormal basis of $\mathcal{G}_k(A, B)$.
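The stacked structure of $\mathcal{K}_k$ can be verified directly: applying the linearization matrix repeatedly to $\binom{r_0}{0}$ reproduces the pairs $\binom{r_j}{r_{j-1}}$. A small sketch of the plain recurrence with random data (SOAR itself additionally orthonormalizes on the fly):

```python
import numpy as np

def second_order_krylov(A, B, r0, k):
    """Vectors r_0, ..., r_{k-1} spanning G_k(A, B):
    r_1 = A r_0 and r_j = A r_{j-1} + B r_{j-2} for j >= 2."""
    r = [r0, A @ r0]
    for j in range(2, k):
        r.append(A @ r[-1] + B @ r[-2])
    return r[:k]

rng = np.random.default_rng(1)
n, k = 4, 6
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))
r0 = rng.standard_normal(n)
r = second_order_krylov(A, B, r0, k)
M = np.block([[A, B], [np.eye(n), np.zeros((n, n))]])  # linearization matrix
```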
Nonlinear Arnoldi Method

1: start with an initial basis $V$, $V^T V = I$
2: determine a preconditioner $M \approx T(\sigma)^{-1}$, $\sigma$ close to the wanted eigenvalue
3: find the largest eigenvalue $\mu$ of $V^T T(\mu) V y = 0$ and a corresponding eigenvector $y$
4: set $u = Vy$, $r = T(\mu)u$
5: while $\|r\| / \|u\| > \epsilon$ do
6:   $v = Mr$
7:   $v = v - VV^T v$
8:   $\tilde v = v / \|v\|$, $V = [V, \tilde v]$
9:   find the largest eigenvalue $\mu$ of $V^T T(\mu) V y = 0$ and a corresponding eigenvector $y$
10:  set $u = Vy$, $r = T(\mu)u$
11: end while
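The listing above can be turned into a small dense sketch for $T(\lambda) = (W + \lambda I)^2 - \delta^{-2}hh^T$: the projected problem $V^T T(\mu)V y = 0$ is itself a small QEP, solved here by linearization for its right-most real eigenpair, and the preconditioner is a dense inverse of $T(\sigma)$. All names and data are illustrative, not the talk's implementation:

```python
import numpy as np

def rightmost_real_pair(Wp, Cp):
    """Right-most real eigenpair of the (projected) QEP
    mu^2 I + 2 mu Wp + Cp via dense linearization."""
    k = Wp.shape[0]
    M = np.block([[np.zeros((k, k)), np.eye(k)], [-Cp, -2.0 * Wp]])
    vals, vecs = np.linalg.eig(M)
    real = np.abs(vals.imag) < 1e-8 * (1.0 + np.abs(vals))
    i = np.argmax(np.where(real, vals.real, -np.inf))
    return vals[i].real, vecs[:k, i].real

def nonlinear_arnoldi(W, h, delta, v0, sigma=0.0, tol=1e-8, maxit=50):
    """Nonlinear Arnoldi sketch for T(lam) = (W + lam I)^2 - delta^{-2} h h^T,
    following the listing above with a fixed preconditioner M ~ T(sigma)^{-1}."""
    n = W.shape[0]
    def T(lam):
        S = W + lam * np.eye(n)
        return S @ S - np.outer(h, h) / delta**2
    Minv = np.linalg.inv(T(sigma))            # dense preconditioner (sketch only)
    V = (v0 / np.linalg.norm(v0)).reshape(n, 1)
    for _ in range(maxit):
        Wp = V.T @ W @ V                      # projected W
        hp = V.T @ h
        Cp = (W @ V).T @ (W @ V) - np.outer(hp, hp) / delta**2
        mu, y = rightmost_real_pair(Wp, Cp)   # projected nonlinear eigenproblem
        u = V @ y
        r = T(mu) @ u                         # residual of the full problem
        if np.linalg.norm(r) <= tol * np.linalg.norm(u):
            return mu, u
        v = Minv @ r                          # preconditioned residual direction
        v -= V @ (V.T @ v)                    # orthogonalize against V
        V = np.hstack([V, (v / np.linalg.norm(v)).reshape(n, 1)])
    return mu, u

W = np.diag([2.0, 3.0, 4.0])
h = np.ones(3)
mu, u = nonlinear_arnoldi(W, h, 1.0, np.ones(3))
```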
Reuse of information

The convergence of $W_m$ and $h_m$ suggests reusing information from the previous iterations when solving $T_m(\lambda)z = 0$.

Krylov subspace methods for $T_m(\lambda)z = 0$ can be started with the solution $z^{m-1}$ of $T_{m-1}(\lambda)z = 0$.

The nonlinear Arnoldi method can use thick starts, i.e. the projection method for $T_m(\lambda)z = 0$ can be initialized with $V_{m-1}$, where $z^{m-1} = V_{m-1} u^{m-1}$ and $u^{m-1}$ is an eigenvector of $V_{m-1}^T T_{m-1}(\lambda) V_{m-1} u = 0$.
Thick and early updates

$$W_j = C - f(x^j) D - S\,(T - f(x^j) I_{n-k})^{-1} S^T, \qquad h_j = g - S\,(T - f(x^j) I_{n-k})^{-1} c,$$
with $C, D \in \mathbb{R}^{k\times k}$, $S \in \mathbb{R}^{k\times(n-k)}$, $T \in \mathbb{R}^{(n-k)\times(n-k)}$, $g \in \mathbb{R}^k$, $c \in \mathbb{R}^{n-k}$.

Hence, in order to update the projected problem
$$V^T (W_{m+1} + \lambda I)^2 V u - \frac{1}{\delta^2}\, V^T h_{m+1} h_{m+1}^T V u = 0,$$
one has to keep only $CV$, $DV$, $S^T V$, and $g^T V$.

Since it is inexpensive to obtain updates of $W_m$ and $h_m$, we decided to terminate the inner iteration long before convergence, namely as soon as the residual of the quadratic eigenproblem was reduced by at least a prescribed factor. This reduced the computing time further.
Example: shaw(2000); Li & Ye

[Figure: convergence history of the Li & Ye method for shaw(2000); residual norm vs. inner iterations]

shaw(2000); Early updates

[Figure: convergence history of the nonlinear Arnoldi method with early termination for shaw(2000); residual norm vs. inner iterations]
Necessary Condition: Golub, Hansen, O'Leary 1999

The RTLS solution x satisfies

  (AᵀA + λ_I I + λ_L LᵀL) x = Aᵀb,  ‖Lx‖² = δ²,

where

  λ_I = −f(x),  λ_L = −(1/δ²) (bᵀ(Ax − b) − λ_I).

Eliminating λ_I:

  (AᵀA − f(x) I + λ_L LᵀL) x = Aᵀb,  ‖Lx‖² = δ².

Iterating on f(x_m) and solving via quadratic eigenproblems yields the method of Sima, Van Huffel and Golub.
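These conditions can be verified numerically on a toy problem. The sketch below (an assumption-laden check, not part of the talk: L = I, synthetic data, sign convention λ_I = −f(x) with f(x) = ‖Ax − b‖²/(1 + ‖x‖²)) finds the constrained minimizer by brute force on the circle ‖x‖ = δ and checks that the stationarity equation holds with the stated multipliers:

```python
import numpy as np

# Tiny RTLS problem: with L = I the constraint ||Lx||^2 = delta^2 is a
# circle, so the minimizer can be located by a fine grid search.
rng = np.random.default_rng(1)
A = rng.standard_normal((6, 2))
b = rng.standard_normal(6)
delta = 0.7

def f(x):
    r = A @ x - b
    return np.dot(r, r) / (1.0 + np.dot(x, x))

N = 400000
p = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
X = delta * np.vstack([np.cos(p), np.sin(p)])
vals = ((A @ X - b[:, None]) ** 2).sum(axis=0)   # prop. to f on the circle
i = int(np.argmin(vals))
fm, f0, fp = vals[(i - 1) % N], vals[i], vals[(i + 1) % N]
# one parabolic refinement around the grid minimizer
pstar = p[i] + 0.5 * (2.0 * np.pi / N) * (fm - fp) / (fm - 2.0 * f0 + fp)
x = delta * np.array([np.cos(pstar), np.sin(pstar)])

# multipliers from the necessary conditions (lambda_I = -f(x))
lam_I = -f(x)
lam_L = -(b @ (A @ x - b) - lam_I) / delta**2
res = (A.T @ A + lam_I * np.eye(2) + lam_L * np.eye(2)) @ x - A.T @ b
```

At the minimizer the residual `res` of the stationarity equation vanishes up to the accuracy of the grid search.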
Necessary Condition: Renaut, Guo 2002, 2005

The RTLS solution x satisfies the eigenproblem

  B(x) (x; −1) = −λ_I (x; −1),

where

  B(x) = [A, b]ᵀ [A, b] + λ_L(x) [ LᵀL  0 ; 0  −δ² ],
  λ_I = −f(x),  λ_L = −(1/δ²) (bᵀ(Ax − b) − λ_I).

Conversely, if (λ̂, (x̂ᵀ, −1)ᵀ) is an eigenpair of B(x̂) and λ_L(x̂) = −(1/δ²) (bᵀ(Ax̂ − b) + f(x̂)), then x̂ satisfies the necessary conditions of the last slide, and the eigenvalue is given by λ̂ = f(x̂).
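The characterization can again be tested on a toy problem. The sketch below (same assumed setup as before: L = I, synthetic data, f(x) = ‖Ax − b‖²/(1 + ‖x‖²)) locates the RTLS minimizer by brute force and checks that (x; −1) is an eigenvector of B(x) with eigenvalue f(x):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 2))
b = rng.standard_normal(6)
delta = 0.7

def f(x):
    r = A @ x - b
    return np.dot(r, r) / (1.0 + np.dot(x, x))

# brute-force minimizer of f on the circle ||x|| = delta
N = 400000
p = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
X = delta * np.vstack([np.cos(p), np.sin(p)])
vals = ((A @ X - b[:, None]) ** 2).sum(axis=0)
i = int(np.argmin(vals))
fm, f0, fp = vals[(i - 1) % N], vals[i], vals[(i + 1) % N]
pstar = p[i] + 0.5 * (2.0 * np.pi / N) * (fm - fp) / (fm - 2.0 * f0 + fp)
x = delta * np.array([np.cos(pstar), np.sin(pstar)])

# build B(x) with lambda_L(x) from the converse statement
lam_L = -(b @ (A @ x - b) + f(x)) / delta**2
Ab = np.hstack([A, b[:, None]])
B = Ab.T @ Ab + lam_L * np.diag([1.0, 1.0, -delta**2])  # L^T L = I here
e = np.concatenate([x, [-1.0]])
resid = B @ e - f(x) * e      # should vanish at the RTLS solution
```

The bottom component of the residual vanishes identically by the definition of λ_L; the top block vanishes exactly at stationary points, which is what ties the eigenproblem to the first order conditions.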
Method of Renaut, Guo 2005

For θ > 0 let (x_θᵀ, −1)ᵀ be the eigenvector corresponding to the smallest eigenvalue of

  B(θ) := [A, b]ᵀ [A, b] + θ [ LᵀL  0 ; 0  −δ² ].   (∗)

Problem: Determine θ such that g(θ) = 0, where

  g(θ) := (‖L x_θ‖² − δ²) / (1 + ‖x_θ‖²).

Given θ_k, the eigenvalue problem (∗) is solved by the Rayleigh quotient iteration, and θ_k is updated according to

  θ_{k+1} = θ_k + (θ_k / δ²) g(θ_k).
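A small end-to-end sketch of this approach follows. Everything here is a stand-in (L = I, synthetic data, dense `eigh` instead of Rayleigh quotient iteration or the nonlinear Arnoldi method, and the g above taken as (‖Lx_θ‖² − δ²)/(1 + ‖x_θ‖²)); since the plain θ update may back track, a bracketing bisection on g is used to certify the root:

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 40, 8
A = rng.standard_normal((m, n))
b = A @ rng.standard_normal(n) + 0.01 * rng.standard_normal(m)
L = np.eye(n)
Ab = np.hstack([A, b[:, None]])
M0 = Ab.T @ Ab                               # [A, b]^T [A, b]

def x_theta(theta, delta):
    """Eigenvector of the smallest eigenvalue of B(theta), scaled so
    that its last component equals -1."""
    N = np.zeros((n + 1, n + 1))
    N[:n, :n] = L.T @ L
    N[n, n] = -delta**2
    w, V = np.linalg.eigh(M0 + theta * N)
    v = V[:, 0]
    return -v[:n] / v[n]

x_tls = x_theta(0.0, 1.0)                    # theta = 0: plain TLS
delta = 0.5 * np.linalg.norm(L @ x_tls)      # makes the constraint active

def g(theta):
    x = x_theta(theta, delta)
    return (np.dot(L @ x, L @ x) - delta**2) / (1.0 + np.dot(x, x))

# the slide's update theta_{k+1} = theta_k + theta_k/delta^2 * g(theta_k)
theta = 1e-3
for _ in range(30):
    theta += theta / delta**2 * g(theta)

# bracketing/bisection on g (g > 0 near 0, g < 0 for large theta)
lo, hi = 1e-8, 1.0
while g(hi) > 0.0:
    hi *= 10.0
for _ in range(80):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if g(mid) > 0.0 else (lo, mid)
theta_star = 0.5 * (lo + hi)
x_star = x_theta(theta_star, delta)
```

At θ* the constraint ‖L x_θ‖² = δ² is met, which is exactly the root condition g(θ*) = 0.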
Typical g

[Figure: plot of g(λ_L) over λ_L]

Back tracking

[Figure: g(λ_L) near a back-tracking step of the θ iteration]

Close up

[Figure: close-up of the same plot of g(λ_L)]

Typical g⁻¹

[Figure: plot of the inverse function of g, λ_L as a function of g(λ_L)]
Rational interpolation

Assumption A: Let θ₁ < θ₂ < θ₃ be such that g(θ₃) < 0 < g(θ₁).

Let p₁ < p₂ be the poles of g⁻¹, let

  h(x) := (α + βx + γx²) / ((p₂ − x)(x − p₁))

be the rational interpolant of g⁻¹ at (g(θ_j), θ_j), j = 1, 2, 3, and set θ₄ = h(0).

Drop θ₁ or θ₃ such that the remaining θ values satisfy Assumption A, and repeat until convergence.

To evaluate g(θ) one has to solve the eigenvalue problem B(θ)x = μx, which can be done efficiently by the nonlinear Arnoldi method started with the entire search space of the previous step.
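The interpolation step itself reduces to a 3×3 linear system: enforcing h(g(θ_j)) = θ_j fixes α, β, γ once the poles p₁, p₂ are given. A sketch (made-up poles and data; the root-finding loop around it is omitted):

```python
import numpy as np

def rational_step(pts, p1, p2):
    """One rational interpolation step as on the slide: fit
    h(x) = (a + b x + c x^2) / ((p2 - x)(x - p1)) through the three
    points (g(theta_j), theta_j) and return the next iterate h(0)."""
    xs = np.array([q[0] for q in pts])
    ys = np.array([q[1] for q in pts])
    # h(x_j) = y_j  <=>  a + b x_j + c x_j^2 = y_j (p2 - x_j)(x_j - p1)
    V = np.vander(xs, 3, increasing=True)        # columns 1, x, x^2
    a, bcoef, c = np.linalg.solve(V, ys * (p2 - xs) * (xs - p1))
    return a / ((p2 - 0.0) * (0.0 - p1))         # theta_4 = h(0)

# Sanity check: recover h(0) exactly for a known rational function.
p1, p2 = -2.0, 3.0
h = lambda x: (1.0 + 0.5 * x - 0.25 * x**2) / ((p2 - x) * (x - p1))
pts = [(x, h(x)) for x in (-1.0, 0.5, 1.0)]
theta4 = rational_step(pts, p1, p2)
```

Because the interpolant shares the poles of g⁻¹, evaluating it at x = 0 delivers the next θ directly, which is why this step needs no safeguarded line search.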
Numerical example

We added white noise to the data of phillips(n) and deriv2(n) with noise level 1%, and chose L to be an approximate first derivative. The following table contains the average CPU time for 100 test problems of dimensions n = 1000, n = 2000, and n = 4000 (CPU: Pentium D, 3.4 GHz).

  problem    n      SOAR    Li & Ye    NL Arn.    R & G
  phillips   1000
             2000
             4000
  deriv2     1000
             2000
             4000
Numerical example ct.

We added white noise to the data of phillips(n) and deriv2(n) with noise level 10%, and chose L to be an approximate first derivative. The following table contains the average CPU time for 100 test problems of dimensions n = 1000, n = 2000, and n = 4000 (CPU: Pentium D, 3.4 GHz).

  problem    n      SOAR    Li & Ye    NL Arn.    R & G
  phillips   1000
             2000
             4000
  deriv2     1000
             2000
             4000
Conclusions

Using minmax theory for nonlinear eigenproblems we proved a localization and characterization result for the maximum real eigenvalue of a quadratic eigenproblem occurring in TLS problems.

The rightmost eigenvalue is not necessarily nonnegative.

The maximum eigenvalue can be determined efficiently by the nonlinear Arnoldi method; taking advantage of thick starts and early updates, it can be accelerated substantially.

Likewise, the approach of Renaut & Guo can be significantly improved by rational interpolation and the nonlinear Arnoldi method with thick starts.

THANK YOU FOR YOUR ATTENTION
More information1. Introduction. In this paper we consider the large and sparse eigenvalue problem. Ax = λx (1.1) T (λ)x = 0 (1.2)
A NEW JUSTIFICATION OF THE JACOBI DAVIDSON METHOD FOR LARGE EIGENPROBLEMS HEINRICH VOSS Abstract. The Jacobi Davidson method is known to converge at least quadratically if the correction equation is solved
More informationITERATIVE PROJECTION METHODS FOR SPARSE LINEAR SYSTEMS AND EIGENPROBLEMS CHAPTER 4 : CONJUGATE GRADIENT METHOD
ITERATIVE PROJECTION METHODS FOR SPARSE LINEAR SYSTEMS AND EIGENPROBLEMS CHAPTER 4 : CONJUGATE GRADIENT METHOD Heinrich Voss voss@tu-harburg.de Hamburg University of Technology Institute of Numerical Simulation
More information1. What is the determinant of the following matrix? a 1 a 2 4a 3 2a 2 b 1 b 2 4b 3 2b c 1. = 4, then det
What is the determinant of the following matrix? 3 4 3 4 3 4 4 3 A 0 B 8 C 55 D 0 E 60 If det a a a 3 b b b 3 c c c 3 = 4, then det a a 4a 3 a b b 4b 3 b c c c 3 c = A 8 B 6 C 4 D E 3 Let A be an n n matrix
More informationIntroduction to Matrix Algebra
Introduction to Matrix Algebra August 18, 2010 1 Vectors 1.1 Notations A p-dimensional vector is p numbers put together. Written as x 1 x =. x p. When p = 1, this represents a point in the line. When p
More informationMath 310 Final Exam Solutions
Math 3 Final Exam Solutions. ( pts) Consider the system of equations Ax = b where: A, b (a) Compute deta. Is A singular or nonsingular? (b) Compute A, if possible. (c) Write the row reduced echelon form
More informationUNDERGROUND LECTURE NOTES 1: Optimality Conditions for Constrained Optimization Problems
UNDERGROUND LECTURE NOTES 1: Optimality Conditions for Constrained Optimization Problems Robert M. Freund February 2016 c 2016 Massachusetts Institute of Technology. All rights reserved. 1 1 Introduction
More informationNumerical Methods for Large-Scale Nonlinear Systems
Numerical Methods for Large-Scale Nonlinear Systems Handouts by Ronald H.W. Hoppe following the monograph P. Deuflhard Newton Methods for Nonlinear Problems Springer, Berlin-Heidelberg-New York, 2004 Num.
More informationLSTRS 1.2: MATLAB Software for Large-Scale Trust-Regions Subproblems and Regularization
LSTRS 1.2: MATLAB Software for Large-Scale Trust-Regions Subproblems and Regularization Marielba Rojas Informatics and Mathematical Modelling Technical University of Denmark Computational Methods with
More informationEfficient computation of transfer function dominant poles of large second-order dynamical systems
Chapter 6 Efficient computation of transfer function dominant poles of large second-order dynamical systems Abstract This chapter presents a new algorithm for the computation of dominant poles of transfer
More informationSynopsis of Numerical Linear Algebra
Synopsis of Numerical Linear Algebra Eric de Sturler Department of Mathematics, Virginia Tech sturler@vt.edu http://www.math.vt.edu/people/sturler Iterative Methods for Linear Systems: Basics to Research
More informationMarch 8, 2010 MATH 408 FINAL EXAM SAMPLE
March 8, 200 MATH 408 FINAL EXAM SAMPLE EXAM OUTLINE The final exam for this course takes place in the regular course classroom (MEB 238) on Monday, March 2, 8:30-0:20 am. You may bring two-sided 8 page
More informationLarge-scale eigenvalue problems
ELE 538B: Mathematics of High-Dimensional Data Large-scale eigenvalue problems Yuxin Chen Princeton University, Fall 208 Outline Power method Lanczos algorithm Eigenvalue problems 4-2 Eigendecomposition
More informationLINEAR ALGEBRA SUMMARY SHEET.
LINEAR ALGEBRA SUMMARY SHEET RADON ROSBOROUGH https://intuitiveexplanationscom/linear-algebra-summary-sheet/ This document is a concise collection of many of the important theorems of linear algebra, organized
More informationMatrix Vector Products
We covered these notes in the tutorial sessions I strongly recommend that you further read the presented materials in classical books on linear algebra Please make sure that you understand the proofs and
More information