Perturbation theory for eigenvalues of Hermitian pencils. Christian Mehl Institut für Mathematik TU Berlin, Germany. 9th Elgersburg Workshop
1 Perturbation theory for eigenvalues of Hermitian pencils. Christian Mehl, Institut für Mathematik, TU Berlin, Germany. 9th Elgersburg Workshop, Elgersburg, March 3, 2014. Joint work with Shreemayee Bora, Michael Karow, and Punit Sharma.
5 Why Hermitian pencils? Question: Why are eigenvalues of Hermitian pencils of interest in system theory? Answer: Because Hermitian pencils are related to Hamiltonian matrices. A matrix $H \in \mathbb{C}^{2n\times 2n}$ is called Hamiltonian if
$$H = \begin{bmatrix} A & C \\ D & -A^* \end{bmatrix},$$
where $A, C, D \in \mathbb{C}^{n\times n}$ and $C, D$ are Hermitian. Observation: the spectrum of a Hamiltonian matrix is symmetric with respect to the imaginary axis: if $\lambda \in \mathbb{C}$ is an eigenvalue of $H$, then so is $-\bar\lambda$, with the same multiplicities.
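As a small numerical sanity check of the spectral symmetry (a sketch with illustrative random data, not taken from the slides):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3

# Random complex Hamiltonian matrix H = [[A, C], [D, -A*]] with C, D Hermitian
# (* denotes the conjugate transpose).
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
C = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
C = (C + C.conj().T) / 2
D = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
D = (D + D.conj().T) / 2
H = np.block([[A, C], [D, -A.conj().T]])

# The spectrum is symmetric w.r.t. the imaginary axis:
# for every eigenvalue lam, -conj(lam) is also an eigenvalue.
eigs = np.linalg.eigvals(H)
mirrored = -eigs.conj()
# match each mirrored eigenvalue to the closest original one
err = max(min(abs(m - e) for e in eigs) for m in mirrored)
print(err)
```

The matching error `err` is at the level of rounding, confirming the pairing $\lambda \leftrightarrow -\bar\lambda$.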
6 Application 1: Optimal Control. The linear quadratic optimal control problem: minimize the cost functional
$$\int_0^\infty \begin{bmatrix} x(t) \\ u(t) \end{bmatrix}^T \begin{bmatrix} Q & S \\ S^T & R \end{bmatrix} \begin{bmatrix} x(t) \\ u(t) \end{bmatrix} dt$$
subject to the dynamics $\dot x = Ax + Bu$, $x(0) = x_0$, $t \in [0,\infty)$, where $x(t), x_0 \in \mathbb{R}^n$, $u(t) \in \mathbb{R}^m$, $A, Q \in \mathbb{R}^{n\times n}$, $S \in \mathbb{R}^{n\times m}$, $R \in \mathbb{R}^{m\times m}$,
$$\begin{bmatrix} Q & S \\ S^T & R \end{bmatrix} \succeq 0, \qquad R > 0.$$
The solution can be obtained by solving the eigenvalue problem for the Hamiltonian matrix
$$H := \begin{bmatrix} A - BR^{-1}S^T & -BR^{-1}B^T \\ SR^{-1}S^T - Q & -(A - BR^{-1}S^T)^T \end{bmatrix}.$$
8 Application 1: Optimal Control. More general problem: minimize the cost functional
$$\int_0^\infty \begin{bmatrix} x(t) \\ u(t) \end{bmatrix}^T \begin{bmatrix} Q & S \\ S^T & R \end{bmatrix} \begin{bmatrix} x(t) \\ u(t) \end{bmatrix} dt$$
subject to the dynamics $E\dot x = Ax + Bu$, $x(0) = x_0$, $t \in [0,\infty)$, where $x(t), x_0 \in \mathbb{R}^n$, $u(t) \in \mathbb{R}^m$, $E, A, Q \in \mathbb{R}^{n\times n}$, $S \in \mathbb{R}^{n\times m}$, $R \in \mathbb{R}^{m\times m}$,
$$\begin{bmatrix} Q & S \\ S^T & R \end{bmatrix} \succeq 0, \qquad R \succeq 0.$$
The solution can be obtained by solving the generalized eigenvalue problem for the matrix pencil
$$\lambda \mathcal{E} - \mathcal{A} := \lambda \begin{bmatrix} 0 & E & 0 \\ -E^T & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix} - \begin{bmatrix} Q & A^T & S \\ A & 0 & B \\ S^T & B^T & R \end{bmatrix}.$$
9 Application 1: Optimal Control. Observations on the generalized eigenvalue problem
$$\lambda \mathcal{E} - \mathcal{A} := \lambda \begin{bmatrix} 0 & E & 0 \\ -E^T & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix} - \begin{bmatrix} Q & A^T & S \\ A & 0 & B \\ S^T & B^T & R \end{bmatrix}:$$
- $L(\lambda) := \lambda\mathcal{E} - \mathcal{A}$ is an even matrix pencil, that is, $L(\lambda)^* = L(-\bar\lambda)$. The eigenvalues of $L(\lambda)$ are symmetric w.r.t. the imaginary axis.
- $L(i\lambda) = i\lambda\mathcal{E} - \mathcal{A}$ is a Hermitian matrix pencil (in $\lambda$). The eigenvalues of $L(i\lambda)$ are symmetric w.r.t. the real axis.
- If $R$ is invertible, then the generalized eigenvalue problem can be reduced to the form
$$\lambda \begin{bmatrix} 0 & E \\ -E^T & 0 \end{bmatrix} - \begin{bmatrix} Q - SR^{-1}S^T & (A - BR^{-1}S^T)^T \\ A - BR^{-1}S^T & -BR^{-1}B^T \end{bmatrix}.$$
10 Application 2: Algebraic Riccati Equations. Algebraic Riccati equation (ARE): find a solution $X \in \mathbb{C}^{n\times n}$ such that
$$XCX + XA + A^*X - D = 0,$$
where $A, C, D \in \mathbb{C}^{n\times n}$ and $C, D$ are Hermitian. The solution can be obtained by solving the eigenvalue problem for the Hamiltonian matrix
$$H := \begin{bmatrix} A & C \\ D & -A^* \end{bmatrix}:$$
if the columns of $\begin{bmatrix} X_1 \\ X_2 \end{bmatrix} \in \mathbb{C}^{2n\times n}$ form a basis of an invariant subspace of $H$ associated with the eigenvalues in the left half plane and if $X_1$ is invertible, then $X = X_2 X_1^{-1}$ is a solution of the ARE such that $\sigma(A + CX) \subset \mathbb{C}^-$.
11 Application 3: Gyroscopic systems. Gyroscopic systems have the form
$$I\ddot x + G\dot x + Kx = 0,$$
where $G, K \in \mathbb{C}^{n\times n}$, $G^* = -G$, and $K^* = K$. Introducing the new variables $y_1 = \dot x + \tfrac12 Gx$ and $y_2 = x$, this system can be reduced to the first order system $\dot y + Hy = 0$ with the Hamiltonian matrix
$$H = \begin{bmatrix} \tfrac12 G & K - \tfrac14 G^2 \\ -I_n & \tfrac12 G \end{bmatrix}.$$
Consequently, the gyroscopic system is stable if all eigenvalues of $H$ are on the imaginary axis and semisimple.
12 Interesting observation. Experiment: consider two Hamiltonian systems $\dot x = H_1 x$ and $\dot y = H_2 y$, where $H_1, H_2 \in \mathbb{C}^{4\times 4}$ are two specific Hamiltonian matrices (their entries were displayed on the slide but are lost in this transcription). Observation: both matrices have the semisimple eigenvalues $+i$ and $-i$, each with algebraic multiplicity 2. Consequently, both systems are stable. Question: How do the systems behave under perturbations, i.e., what happens to the eigenvalues of $H_1 + \Delta H_1$ and $H_2 + \Delta H_2$?
13 Interesting observation. Result 1: eigenvalue distribution of 1000 perturbations, where $\Delta H_i$ was a random matrix of norm 1/4 (plots on slide). The perturbed systems are not stable.
14 Interesting observation. Result 2: eigenvalue distribution of 1000 perturbations, where $\Delta H_i$ was a random Hamiltonian matrix of norm 1/4 (plots on slide). The second perturbed system remains stable; the first one typically does not.
15 Why? To understand this, we need...
16 Indefinite Linear Algebra. The name Indefinite Linear Algebra was coined by Gohberg, Lancaster, Rodman in 2005. An invertible matrix $H \in \mathbb{F}^{n\times n}$ with $H^\star = \pm H$ defines an inner product on $\mathbb{F}^n$:
$$[x, y]_H := y^\star H x \quad \text{for all } x, y \in \mathbb{F}^n.$$
Here $\star$ denotes either the transpose $T$ or the conjugate transpose $*$:
- $H^* = H$: Hermitian sesquilinear form;
- $H^* = -H$: skew-Hermitian sesquilinear form;
- $H^T = H$: symmetric bilinear form;
- $H^T = -H$: skew-symmetric bilinear form.
The inner product may be indefinite (it need not be positive definite).
17 Indefinite Linear Algebra. The adjoint: for $X \in \mathbb{F}^{n\times n}$ let $X^\star$ be the matrix satisfying $[v, Xw]_H = [X^\star v, w]_H$ for all $v, w \in \mathbb{F}^n$. We have $X^\star = H^{-1}X^T H$ resp. $X^\star = H^{-1}X^* H$. Matrices with symmetries in indefinite inner products:

                                adjoint w.r.t. $y^T H x$   adjoint w.r.t. $y^* H x$
    $A$ H-selfadjoint   $A^\star = A$        $A^T H = HA$        $A^* H = HA$
    $S$ H-skew-adjoint  $S^\star = -S$       $S^T H = -HS$       $S^* H = -HS$
    $U$ H-unitary       $U^\star = U^{-1}$   $U^T H U = H$       $U^* H U = H$
18 Indefinite Linear Algebra. A matrix $H \in \mathbb{F}^{2n\times 2n}$ is Hamiltonian if and only if
$$H^T J = -JH, \quad \text{where } J = \begin{bmatrix} 0 & I_n \\ -I_n & 0 \end{bmatrix}.$$
Hamiltonian matrices are skew-adjoint with respect to the skew-symmetric bilinear form induced by $J$.

    J-selfadjoint    $N^T J = JN$      skew-Hamiltonian
    J-skew-adjoint   $H^T J = -JH$     Hamiltonian
    J-unitary        $S^T J S = J$     symplectic

Symplectic matrices occur in discrete-time optimal control problems.
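These three classes are closely linked, and the defining identities are easy to test numerically. A sketch (illustrative random data; it uses the standard facts that the square of a Hamiltonian matrix is skew-Hamiltonian and its exponential is symplectic):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(2)
n = 3
I = np.eye(n)
J = np.block([[np.zeros((n, n)), I], [-I, np.zeros((n, n))]])

# random real Hamiltonian matrix: H^T J = -J H
A = rng.standard_normal((n, n))
C = rng.standard_normal((n, n)); C = (C + C.T) / 2
D = rng.standard_normal((n, n)); D = (D + D.T) / 2
H = np.block([[A, C], [D, -A.T]])
ham_err = np.linalg.norm(H.T @ J + J @ H)

# the square of a Hamiltonian matrix is skew-Hamiltonian: N^T J = J N
N = H @ H
skew_ham_err = np.linalg.norm(N.T @ J - J @ N)

# the exponential of a Hamiltonian matrix is symplectic: S^T J S = J
S = expm(H)
symp_err = np.linalg.norm(S.T @ J @ S - J)
print(ham_err, skew_ham_err, symp_err)
```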
19 Indefinite Linear Algebra. Hermitian matrix pencil $\;\hat=\;$ H-selfadjoint matrix:
Observation 1: $A \in \mathbb{C}^{n\times n}$ is H-selfadjoint $\iff A^*H = HA \iff (HA)^* = HA$.
Observation 2: if $H$ is invertible, then $Ax = \lambda x \iff HAx = \lambda Hx$.
Conversely: if $\lambda H - G$ is a Hermitian pencil (i.e., $H = H^*$ and $G = G^*$) and $H$ is invertible, then $A := H^{-1}G$ is H-selfadjoint:
$$A^*H = GH^{-1}H = G = HH^{-1}G = HA.$$
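The correspondence can be illustrated in a few lines (a sketch with random illustrative data): $A = H^{-1}G$ is H-selfadjoint, and its eigenvalues are exactly the eigenvalues of the pencil $\lambda H - G$.

```python
import numpy as np
from scipy.linalg import eig

rng = np.random.default_rng(3)
n = 4
def herm(X):
    return (X + X.conj().T) / 2

# Hermitian pencil lambda*H - G; H is generically invertible here
H = herm(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
G = herm(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
A = np.linalg.solve(H, G)                     # A = H^{-1} G

# A is H-selfadjoint: A* H = H A
sa_err = np.linalg.norm(A.conj().T @ H - H @ A)

# eigenvalues of A coincide with the eigenvalues of the pencil
pencil = eig(G, H, right=False)               # generalized eigenvalues G x = lambda H x
matrix = np.linalg.eigvals(A)
match_err = max(min(abs(p - m) for m in matrix) for p in pencil)
print(sa_err, match_err)
```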
20 Why bother? Why not forget the structure and use general methods?
28 Why bother? Why not forget the structure and use general methods? No! We prefer structure-preserving methods, because...
- structure-preserving algorithms are often faster;
- structure-preserving algorithms respect spectral symmetries;
- structure-preserving algorithms produce physically meaningful results;
- perturbation theory is different if structure is preserved: some eigenvalues occur in pairs, and there is a sign characteristic.
What is the sign characteristic?
29 Canonical forms. Definite Linear Algebra: any Hermitian matrix is unitarily diagonalizable and all its eigenvalues are real. Indefinite Linear Algebra: selfadjoint matrices with respect to indefinite inner products may have complex eigenvalues and need not be diagonalizable. Example:
$$H = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}, \quad A_1 = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}, \quad A_2 = \begin{bmatrix} i & 0 \\ 0 & -i \end{bmatrix}.$$
$A_1$ and $A_2$ are H-selfadjoint, i.e., $A_i^* H = H A_i$, yet $A_1$ is not diagonalizable and $A_2$ has nonreal eigenvalues.
30 Canonical forms. Transformations that preserve structure:
- for bilinear forms: $(H, A) \mapsto (P^T H P,\; P^{-1} A P)$, $P$ invertible;
- for sesquilinear forms: $(H, A) \mapsto (P^* H P,\; P^{-1} A P)$, $P$ invertible.
If $A$ is H-selfadjoint / H-skew-adjoint / H-unitary, then $P^{-1}AP$ is $P^\star HP$-selfadjoint / $P^\star HP$-skew-adjoint / $P^\star HP$-unitary. Here $P^\star = P^T$ or $P^\star = P^*$, respectively.
31 Canonical forms. Theorem (Gohberg, Lancaster, Rodman, 1983; Thompson, 1976). Let $A \in \mathbb{C}^{n\times n}$ be H-selfadjoint. Then there exists an invertible $P \in \mathbb{C}^{n\times n}$ such that
$$P^{-1}AP = A_1 \oplus \dots \oplus A_k, \qquad P^*HP = H_1 \oplus \dots \oplus H_k,$$
where either
1) $A_i = J_{n_i}(\lambda)$ and $H_i = \varepsilon R_{n_i}$, where $\lambda \in \mathbb{R}$ and $\varepsilon = \pm 1$; or
2) $A_i = \begin{bmatrix} J_{n_i}(\mu) & 0 \\ 0 & J_{n_i}(\bar\mu) \end{bmatrix}$, $H_i = \begin{bmatrix} 0 & R_{n_i} \\ R_{n_i} & 0 \end{bmatrix}$, where $\mu \in \mathbb{C} \setminus \mathbb{R}$.
Here $J_m(\lambda) \in \mathbb{C}^{m\times m}$ denotes the upper triangular Jordan block with eigenvalue $\lambda$, and $R_m \in \mathbb{C}^{m\times m}$ the reverse identity (with ones on the antidiagonal).
32 Canonical forms. There are similar results for H-skew-adjoint and H-unitary matrices and for real or complex bilinear forms. Spectral symmetries:

                        $y^T H x$, $\mathbb{F} = \mathbb{C}$   $y^* H x$, $\mathbb{F} = \mathbb{C}$   $y^T H x$, $\mathbb{F} = \mathbb{R}$
    H-selfadjoints      $\lambda$                  $\lambda, \bar\lambda$           $\lambda, \bar\lambda$
    H-skew-adjoints     $\lambda, -\lambda$        $\lambda, -\bar\lambda$          $\lambda, \bar\lambda, -\lambda, -\bar\lambda$
    H-unitaries         $\lambda, \lambda^{-1}$    $\lambda, \bar\lambda^{-1}$      $\lambda, \bar\lambda, \lambda^{-1}, \bar\lambda^{-1}$
33 The sign characteristic. Sign characteristic: there are additional invariants for real eigenvalues of H-selfadjoint matrices: signs $\varepsilon = \pm 1$. Example: $A = (\lambda)$ with $\lambda \in \mathbb{R}$, and $H_\varepsilon = (\varepsilon)$, $\varepsilon = \pm 1$. There is no transformation with $P^{-1}AP = A$ and $P^*H_{+1}P = H_{-1}$, because of Sylvester's law of inertia. Each Jordan block associated with a real eigenvalue of $A$ has a corresponding sign $\varepsilon \in \{+1, -1\}$; the collection of all these signs is called the sign characteristic of $A$.
34 The sign characteristic. Interpretation of the sign characteristic for simple eigenvalues: let $(\lambda, v)$ be an eigenpair of the H-selfadjoint matrix $A$, where $\lambda \in \mathbb{R}$, and let $\varepsilon$ be the sign corresponding to $\lambda$. Then the inner product $[v, v]_H$ is positive if $\varepsilon = +1$ and negative if $\varepsilon = -1$. Analogously, purely imaginary eigenvalues of H-skew-adjoint matrices have signs, and unimodular eigenvalues of H-unitary matrices have signs.
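For simple real eigenvalues this interpretation gives a direct way to compute the sign characteristic. A sketch (the matrices are constructed for illustration: $A = P\Lambda P^{-1}$ and $H = P^{-*}EP^{-1}$ make $A$ H-selfadjoint with prescribed signs $E$):

```python
import numpy as np

rng = np.random.default_rng(4)
lam = np.array([-1.0, 0.5, 2.0])            # simple real eigenvalues
eps = np.array([1, -1, 1])                  # prescribed sign characteristic
P = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
Pinv = np.linalg.inv(P)

A = P @ np.diag(lam) @ Pinv                 # A has eigenvalues lam
H = Pinv.conj().T @ np.diag(eps) @ Pinv     # Hermitian; A is H-selfadjoint

# recover the signs: for a simple real eigenvalue with eigenvector v,
# the sign is sign([v, v]_H) = sign(v* H v)
w, V = np.linalg.eig(A)
order = np.argsort(w.real)                  # sort eigenvalues ascending
signs = [int(np.sign((V[:, j].conj() @ H @ V[:, j]).real)) for j in order]
print(signs)  # → [1, -1, 1]
```

Note that the signs are well defined: scaling an eigenvector by $c$ multiplies $v^*Hv$ by $|c|^2 > 0$.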
35 What happens under structured perturbations? Example: symplectic matrices $S \in \mathbb{R}^{2n\times 2n}$. Consider a slightly perturbed matrix that is still symplectic. The behavior of the unimodular eigenvalues under perturbation depends on the sign characteristic: if two unimodular eigenvalues meet, the behavior differs according to whether the corresponding signs are opposite or equal.
36 What happens under structured perturbations? Let $S$ have two close unimodular eigenvalues with opposite signs. If $S$ is perturbed and the two eigenvalues meet, they generically form a Jordan block; then they may split off as a pair of nonunimodular reciprocal eigenvalues.
44 What happens under structured perturbations? Let $S$ have two close unimodular eigenvalues with equal signs. If $S$ is perturbed and the two eigenvalues meet, they cannot form a Jordan block, and they must remain on the unit circle.
52 Interesting observation revisited. Experiment: consider again the two Hamiltonian systems $\dot x = H_1 x$ and $\dot y = H_2 y$. Observation 1: both matrices have the semisimple eigenvalues $+i$ and $-i$, each with algebraic multiplicity 2; consequently, both systems are stable. Observation 2: one can check that the sign characteristic of the eigenvalue $i$ of $H_1$ consists of mixed signs $+1$ and $-1$ (similarly for $-i$), while the sign characteristic of the eigenvalue $i$ of $H_2$ consists of identical signs $+1$ and $+1$ (similarly for $-i$: signs $-1$ and $-1$).
53 Interesting observation revisited. Result 2: eigenvalue distribution of 1000 perturbations, where $\Delta H_i$ was a random Hamiltonian matrix of norm 1/4 (plots on slide). Left: mixed sign characteristic; right: definite sign characteristic.
54 Generalization to matrix pencils.

    matrix case:        H-selfadjoint              H-skew-adjoint              H-unitary
    pencil case:        Hermitian                  $*$-even                    $*$-palindromic
    pencil:             $\lambda G - H$            $\lambda K - H$             $\lambda A^* + A$
    structure:          $G^* = G$, $H^* = H$       $K^* = -K$, $H^* = H$       $A \in \mathbb{C}^{n\times n}$
    spectral symmetry:  $\lambda, \bar\lambda$     $\lambda, -\bar\lambda$     $\lambda, 1/\bar\lambda$
    critical curve:     real line                  imaginary axis              unit circle

Sign characteristic: eigenvalues on the critical curve carry signs $\varepsilon = \pm 1$. Generalization to matrix polynomials is possible.
55 Finally: The Task
56 Application: passivity of systems. Consider a linear time-invariant control system
$$\dot x = Ax + Bu, \quad x(0) = 0, \qquad y = Cx + Du.$$
Assume $\sigma(A) \subset \mathbb{C}^-$ and that $D$ is invertible, and set $R = D + D^T$. The system is called passive if the Hamiltonian matrix
$$H := \begin{bmatrix} A - BR^{-1}C & -BR^{-1}B^T \\ C^T R^{-1} C & -(A - BR^{-1}C)^T \end{bmatrix}$$
has no purely imaginary eigenvalues. Often an approximation of $H$ is computed, and in this approximation process passivity is frequently lost. Aim: modify $H$ by a Hamiltonian perturbation of small norm and small rank to obtain a nearby Hamiltonian matrix having no purely imaginary eigenvalues. Solved by Alam, Bora, Karow, Mehrmann, Moro.
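The block structure of this matrix can be checked directly. A sketch (random illustrative system data; the diagonal shifts are only heuristics to make $\sigma(A)$ stable and $R$ invertible, not part of the construction):

```python
import numpy as np

rng = np.random.default_rng(5)
n, m = 4, 2
A = rng.standard_normal((n, n)) - 3 * np.eye(n)   # shift: heuristically stable
B = rng.standard_normal((n, m))
C = rng.standard_normal((m, n))
D = rng.standard_normal((m, m)) + 3 * np.eye(m)   # shift: R = D + D^T invertible
R = D + D.T

Rinv = np.linalg.inv(R)
F = A - B @ Rinv @ C
H = np.block([[F, -B @ Rinv @ B.T],
              [C.T @ Rinv @ C, -F.T]])

# H is Hamiltonian: J H is symmetric for J = [[0, I], [-I, 0]]
J = np.block([[np.zeros((n, n)), np.eye(n)], [-np.eye(n), np.zeros((n, n))]])
JH = J @ H
ham_err = np.linalg.norm(JH - JH.T)
# passivity check: distance of the spectrum from the imaginary axis
min_abs_real = min(abs(np.linalg.eigvals(H).real))
print(ham_err, min_abs_real)
```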
60 The generalized problem. Task: find the smallest perturbation that moves eigenvalues on the critical curve of a structured matrix pencil off the critical curve.
- Start with Hermitian pencils, i.e., move eigenvalues from the real line into the complex plane.
- First step: find the smallest structured perturbation that makes $\lambda \in \mathbb{C}$ an eigenvalue of a given Hermitian pencil $A_1 + zA_2$.
Problem: let $A_1, A_2 \in \mathbb{C}^{n\times n}$ be Hermitian and let $\lambda \in \mathbb{C}$. Calculate
$$\eta_S(A_1, A_2, \lambda) = \inf\left\{ \left(\|\Delta_1\|^2 + \|\Delta_2\|^2\right)^{1/2} : \det\big((A_1 - \Delta_1) + \lambda(A_2 - \Delta_2)\big) = 0,\; \Delta_1, \Delta_2 \in S \right\}$$
and construct corresponding $\Delta_1$ and $\Delta_2$ that attain the infimum.
61 The generalized problem. We call $\eta_S(A_1, A_2, \lambda)$ the structured backward error of the Hermitian pencil $A_1 + zA_2$ with respect to $\lambda$ and $S$.
62 The generalized problem. Case 1: $S = \mathbb{C}^{n\times n}$ and $\lambda \in \mathbb{C}$: easy! We have
$$\eta(A_1, A_2, \lambda) = \frac{\sigma_{\min}(A_1 + \lambda A_2)}{\sqrt{1 + |\lambda|^2}},$$
because
$$\Delta_1(x) = \frac{1}{1 + |\lambda|^2}\,(A_1 + \lambda A_2)\frac{xx^*}{x^*x}, \qquad \Delta_2(x) = \frac{\bar\lambda}{1 + |\lambda|^2}\,(A_1 + \lambda A_2)\frac{xx^*}{x^*x}$$
is the smallest perturbation that makes the pair $(\lambda, x)$ an eigenpair of the pencil, and
$$\eta(A_1, A_2, \lambda) = \min_{x \neq 0} \left(\|\Delta_1(x)\|^2 + \|\Delta_2(x)\|^2\right)^{1/2}.$$
63 The generalized problem. Case 2: $S = \mathrm{Herm}(n)$ and $\lambda \in \mathbb{R}$: easy! Again
$$\eta(A_1, A_2, \lambda) = \frac{\sigma_{\min}(A_1 + \lambda A_2)}{\sqrt{1 + \lambda^2}},$$
because for $\lambda \in \mathbb{R}$ the minimizing perturbations from Case 1 are Hermitian.
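The Case 1 formula is easy to verify numerically (a sketch with random illustrative data): build $\Delta_1, \Delta_2$ from a right singular vector for $\sigma_{\min}$ and check that $(\lambda, x)$ becomes an eigenpair with the predicted perturbation size.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 4
def herm(X):
    return (X + X.conj().T) / 2

A1 = herm(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
A2 = herm(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
lam = 0.3 + 0.7j

P = A1 + lam * A2
U, s, Vh = np.linalg.svd(P)
eta = s[-1] / np.sqrt(1 + abs(lam)**2)      # predicted backward error

x = Vh[-1].conj()                           # unit right singular vector for sigma_min
Px = P @ x
D1 = np.outer(Px, x.conj()) / (1 + abs(lam)**2)   # x is unit, so x*x = 1
D2 = np.conj(lam) * D1

# (lam, x) is an eigenpair of the perturbed pencil:
res = np.linalg.norm((A1 - D1) @ x + lam * ((A2 - D2) @ x))
# rank-one matrices: spectral norm = Frobenius norm
size = np.sqrt(np.linalg.norm(D1)**2 + np.linalg.norm(D2)**2)
print(res, abs(size - eta))
```

Both printed quantities vanish up to rounding: the constructed perturbation attains the bound exactly.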
64 The generalized problem. Case 3: $S = \mathrm{Herm}(n)$ and $\lambda \in \mathbb{C} \setminus \mathbb{R}$: difficult! Adhikari and Alam (2009) gave complicated formulas for the structured backward error of eigenpairs $(\lambda, x)$ of Hermitian pencils when $\lambda \notin \mathbb{R}$; $\eta(A_1, A_2, \lambda)$ could be obtained by minimizing these formulas over all $x \in \mathbb{C}^n \setminus \{0\}$, but this minimization problem is not feasible. Consequence: we need a different strategy!
65 Reformulating the problem.
$$\det\big((A_1 - \Delta_1) + \lambda(A_2 - \Delta_2)\big) = 0$$
$$\iff (A_1 + \lambda A_2)x = (\Delta_1 + \lambda\Delta_2)x \text{ for some } x \neq 0$$
$$\iff x = (A_1 + \lambda A_2)^{-1}(\Delta_1 + \lambda\Delta_2)x$$
$$\iff v_1 = \Delta_1 M(v_1 + \lambda v_2) \text{ and } v_2 = \Delta_2 M(v_1 + \lambda v_2),$$
abbreviating $M := (A_1 + \lambda A_2)^{-1}$, $v_1 = \Delta_1 x$, and $v_2 = \Delta_2 x$. Consequence: $\det\big((A_1 - \Delta_1) + \lambda(A_2 - \Delta_2)\big) = 0$ if and only if there exist $v_1, v_2$ with $v_1 + \lambda v_2 \neq 0$ satisfying $v_1 = \Delta_1 M(v_1 + \lambda v_2)$ and $v_2 = \Delta_2 M(v_1 + \lambda v_2)$.
66 Reformulating the problem. This leads to two structured mapping problems: for $x = M(v_1 + \lambda v_2)$ and $y = v_k$, find $H = \Delta_k \in \mathrm{Herm}(n)$ such that $y = Hx$, for $k = 1, 2$.
Theorem (see Mackey, Mackey, Tisseur, 2008). Let $x, y \in \mathbb{C}^n$, $x \neq 0$. Then there exists a Hermitian matrix $H \in \mathrm{Herm}(n)$ such that $Hx = y$ if and only if $\mathrm{Im}(x^*y) = 0$. If the latter condition is satisfied, then
$$\min\{ \|H\| : H \in \mathrm{Herm}(n),\; Hx = y \} = \frac{\|y\|}{\|x\|},$$
and the minimum is attained for
$$H_0 = \frac{\|y\|}{\|x\|} \begin{bmatrix} \dfrac{y}{\|y\|} & \dfrac{x}{\|x\|} \end{bmatrix} \begin{bmatrix} \dfrac{x^*y}{\|x\|\,\|y\|} & 1 \\ 1 & \dfrac{y^*x}{\|x\|\,\|y\|} \end{bmatrix}^{-1} \begin{bmatrix} \dfrac{y}{\|y\|} & \dfrac{x}{\|x\|} \end{bmatrix}^*$$
if $x$ and $y$ are linearly independent, and for $H_0 = \dfrac{yx^*}{x^*x}$ otherwise.
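The minimal-norm Hermitian mapping can be implemented and checked directly (a sketch; random illustrative data, with $y$ projected so that the solvability condition $\mathrm{Im}(x^*y) = 0$ holds):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 5
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
z = rng.standard_normal(n) + 1j * rng.standard_normal(n)
# enforce the solvability condition Im(x* y) = 0
y = z - 1j * (x.conj() @ z).imag * x / (x.conj() @ x)

nx, ny = np.linalg.norm(x), np.linalg.norm(y)
c = (x.conj() @ y).real / (nx * ny)         # x*y / (|x||y|), real by construction
W = np.column_stack([y / ny, x / nx])
S = np.array([[c, 1.0], [1.0, c]])          # invertible when x, y are independent
H0 = (ny / nx) * W @ np.linalg.inv(S) @ W.conj().T

map_err = np.linalg.norm(H0 @ x - y)        # H0 maps x to y
herm_err = np.linalg.norm(H0 - H0.conj().T) # H0 is Hermitian
norm_gap = abs(np.linalg.norm(H0, 2) - ny / nx)  # spectral norm attains |y|/|x|
print(map_err, herm_err, norm_gap)
```

The nonzero eigenvalues of $H_0$ turn out to be $\pm\|y\|/\|x\|$, which is why its spectral norm attains the lower bound.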
67 Reformulating the problem. We need $v_1^*M(v_1 + \lambda v_2)$ and $v_2^*M(v_1 + \lambda v_2)$ to be real. This is equivalent to requiring that $v^*H_1v = 0$ and $v^*H_2v = 0$, where
$$v = \begin{bmatrix} v_1 \\ v_2 \end{bmatrix}, \qquad H_1 = i\begin{bmatrix} M^* - M & -\lambda M \\ \bar\lambda M^* & 0 \end{bmatrix}, \qquad H_2 = i\begin{bmatrix} 0 & M^* \\ -M & \bar\lambda M^* - \lambda M \end{bmatrix}.$$
Moreover, the minimal norm of a pair $(\Delta_1, \Delta_2)$ satisfying $v_1 = \Delta_1 M(v_1 + \lambda v_2)$ and $v_2 = \Delta_2 M(v_1 + \lambda v_2)$ is given by
$$\|(\Delta_1, \Delta_2)\|^2 = \frac{v^*v}{v^*H_0v}, \qquad \text{where } H_0 = \begin{bmatrix} M^*M & \lambda M^*M \\ \bar\lambda M^*M & |\lambda|^2 M^*M \end{bmatrix}.$$
68 Reformulating the problem. Consequence:
$$\eta(A_1, A_2, \lambda)^2 = \inf\left\{ \frac{v^*v}{v^*H_0v} : v \in \mathbb{C}^{2n},\; v^*H_0v \neq 0,\; v^*H_1v = 0,\; v^*H_2v = 0 \right\},$$
so
$$\frac{1}{\eta(A_1, A_2, \lambda)^2} = \sup\left\{ \frac{v^*H_0v}{v^*v} : v \in \mathbb{C}^{2n} \setminus \{0\},\; v^*H_1v = 0,\; v^*H_2v = 0 \right\}.$$
Idea: compute the maximal eigenvalue $\lambda_{\max}$ of the matrix $H_0 + t_1H_1 + t_2H_2$ and minimize it over $t_1, t_2 \in \mathbb{R}$.
69 The main results. Theorem (Bora, Karow, M., Sharma). Let $H_0, H_1, H_2 \in \mathbb{C}^{n\times n}$ be Hermitian. Assume that $H_0$ is nonzero and positive semidefinite, and that every linear combination $\alpha_1H_1 + \alpha_2H_2$ with $(\alpha_1, \alpha_2) \in \mathbb{R}^2 \setminus \{0\}$ is indefinite. (Here, indefinite is used in the sense strictly not semidefinite.) Then the following statements hold.
(1) The function $L(t_1, t_2) := \lambda_{\max}(H_0 + t_1H_1 + t_2H_2)$ has a minimum $\lambda_{\max}^*$.
(2) If the minimum $\lambda_{\max}^*$ of $L$ is attained at $(t_1^*, t_2^*) \in \mathbb{R}^2$, then there exists an associated eigenvector $w \in \mathbb{C}^n \setminus \{0\}$ of $H_0 + t_1^*H_1 + t_2^*H_2$ with $w^*H_1w = 0 = w^*H_2w$.
(3) We have
$$\lambda_{\max}^* = \sup\left\{ \frac{w^*H_0w}{w^*w} : w \neq 0,\; w^*H_1w = 0,\; w^*H_2w = 0 \right\}.$$
In particular, the supremum is a maximum and is attained for the eigenvector $w$ from (2).
70 The main results. Theorem (Bora, Karow, M., Sharma). Let $A_1, A_2 \in \mathbb{C}^{n\times n}$ be Hermitian and let $\lambda \in \mathbb{C} \setminus \mathbb{R}$. Suppose $\det(A_1 + \lambda A_2) \neq 0$, and let $M = (A_1 + \lambda A_2)^{-1}$. Then
$$\eta(A_1, A_2, \lambda) = \left( \min_{t_1, t_2 \in \mathbb{R}} \lambda_{\max}(H_0 + t_1H_1 + t_2H_2) \right)^{-1/2},$$
where
$$H_0 = \begin{bmatrix} M^*M & \lambda M^*M \\ \bar\lambda M^*M & |\lambda|^2 M^*M \end{bmatrix}, \quad H_1 = i\begin{bmatrix} M^* - M & -\lambda M \\ \bar\lambda M^* & 0 \end{bmatrix}, \quad H_2 = i\begin{bmatrix} 0 & M^* \\ -M & \bar\lambda M^* - \lambda M \end{bmatrix}.$$
Consequence: the distance problem is solved; the corresponding perturbation can be constructed from the eigenvector $w$ from the previous result.
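A minimal computational sketch of this result (random illustrative data; the two-parameter minimization is done here with a generic Nelder-Mead search, which is an implementation choice, not the authors' algorithm):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(8)
n = 3
def herm(X):
    return (X + X.conj().T) / 2

A1 = herm(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
A2 = herm(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
lam = 0.4 + 0.9j

M = np.linalg.inv(A1 + lam * A2)
Ms, Z = M.conj().T, np.zeros((n, n), dtype=complex)
MM = Ms @ M
H0 = np.block([[MM, lam * MM], [np.conj(lam) * MM, abs(lam)**2 * MM]])
H1 = 1j * np.block([[Ms - M, -lam * M], [np.conj(lam) * Ms, Z]])
H2 = 1j * np.block([[Z, Ms], [-M, np.conj(lam) * Ms - lam * M]])

def lmax(t):
    # largest eigenvalue of the Hermitian matrix H0 + t1*H1 + t2*H2
    return np.linalg.eigvalsh(H0 + t[0] * H1 + t[1] * H2)[-1]

opt = minimize(lmax, x0=[0.0, 0.0], method="Nelder-Mead",
               options={"xatol": 1e-10, "fatol": 1e-12})
eta_struct = opt.fun ** -0.5
eta_unstruct = (np.linalg.svd(A1 + lam * A2, compute_uv=False)[-1]
                / np.sqrt(1 + abs(lam)**2))
# in theory eta_struct >= eta_unstruct, since Herm(n) is a subset of C^{n x n}
print(eta_struct, eta_unstruct)
```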
71 Experiments (movies): Nr. 1 unstructured/structured; Nr. 2 unstructured/structured; Nr. 3 unstructured/structured; Nr. 4 unstructured/structured; Nr. 5 structured.
73 Conclusions.
- The effects of structured perturbations of Hermitian pencils (and other structured matrices and pencils) may differ significantly from those of unstructured perturbations.
- The sign characteristic is crucial in the theory of structured perturbations.
- Formulas for the structured backward error for Hermitian pencils are available.
- Extension to matrix polynomials is possible (it involves minimization over more than two parameters).
Thank you for your attention!
More informationHyponormal matrices and semidefinite invariant subspaces in indefinite inner products
Electronic Journal of Linear Algebra Volume 11 Article 16 2004 Hyponormal matrices and semidefinite invariant subspaces in indefinite inner products Christian Mehl Andre C M Ran Leiba Rodman lxrodm@mathwmedu
More informationDefinition (T -invariant subspace) Example. Example
Eigenvalues, Eigenvectors, Similarity, and Diagonalization We now turn our attention to linear transformations of the form T : V V. To better understand the effect of T on the vector space V, we begin
More informationSingular-value-like decomposition for complex matrix triples
Singular-value-like decomposition for complex matrix triples Christian Mehl Volker Mehrmann Hongguo Xu December 17 27 Dedicated to William B Gragg on the occasion of his 7th birthday Abstract The classical
More informationA Structure-Preserving Method for Large Scale Eigenproblems. of Skew-Hamiltonian/Hamiltonian (SHH) Pencils
A Structure-Preserving Method for Large Scale Eigenproblems of Skew-Hamiltonian/Hamiltonian (SHH) Pencils Yangfeng Su Department of Mathematics, Fudan University Zhaojun Bai Department of Computer Science,
More informationLecture 7: Positive Semidefinite Matrices
Lecture 7: Positive Semidefinite Matrices Rajat Mittal IIT Kanpur The main aim of this lecture note is to prepare your background for semidefinite programming. We have already seen some linear algebra.
More informationEigenvalues of rank one perturbations of matrices
Eigenvalues of rank one perturbations of matrices André Ran VU University Amsterdam Joint work in various combinations with: Christian Mehl, Volker Mehrmann, Leiba Rodman, Michał Wojtylak Jan Fourie, Gilbert
More informationSynopsis of Numerical Linear Algebra
Synopsis of Numerical Linear Algebra Eric de Sturler Department of Mathematics, Virginia Tech sturler@vt.edu http://www.math.vt.edu/people/sturler Iterative Methods for Linear Systems: Basics to Research
More informationLinearizing Symmetric Matrix Polynomials via Fiedler pencils with Repetition
Linearizing Symmetric Matrix Polynomials via Fiedler pencils with Repetition Kyle Curlett Maribel Bueno Cachadina, Advisor March, 2012 Department of Mathematics Abstract Strong linearizations of a matrix
More informationALGEBRA QUALIFYING EXAM PROBLEMS LINEAR ALGEBRA
ALGEBRA QUALIFYING EXAM PROBLEMS LINEAR ALGEBRA Kent State University Department of Mathematical Sciences Compiled and Maintained by Donald L. White Version: August 29, 2017 CONTENTS LINEAR ALGEBRA AND
More informationIsospectral Families of High-order Systems
Isospectral Families of High-order Systems Peter Lancaster Department of Mathematics and Statistics University of Calgary Calgary T2N 1N4, Alberta, Canada and Uwe Prells School of Mechanical, Materials,
More informationReview problems for MA 54, Fall 2004.
Review problems for MA 54, Fall 2004. Below are the review problems for the final. They are mostly homework problems, or very similar. If you are comfortable doing these problems, you should be fine on
More informationStructured Polynomial Eigenvalue Problems: Good Vibrations from Good Linearizations
Structured Polynomial Eigenvalue Problems: Good Vibrations from Good Linearizations Mackey, D. Steven and Mackey, Niloufer and Mehl, Christian and Mehrmann, Volker 006 MIMS EPrint: 006.8 Manchester Institute
More informationMATH 583A REVIEW SESSION #1
MATH 583A REVIEW SESSION #1 BOJAN DURICKOVIC 1. Vector Spaces Very quick review of the basic linear algebra concepts (see any linear algebra textbook): (finite dimensional) vector space (or linear space),
More informationNumerical Methods I Eigenvalue Problems
Numerical Methods I Eigenvalue Problems Aleksandar Donev Courant Institute, NYU 1 donev@courant.nyu.edu 1 MATH-GA 2011.003 / CSCI-GA 2945.003, Fall 2014 October 2nd, 2014 A. Donev (Courant Institute) Lecture
More informationc 2006 Society for Industrial and Applied Mathematics
SIAM J. MATRIX ANAL. APPL. Vol. 28, No. 4, pp. 029 05 c 2006 Society for Industrial and Applied Mathematics STRUCTURED POLYNOMIAL EIGENVALUE PROBLEMS: GOOD VIBRATIONS FROM GOOD LINEARIZATIONS D. STEVEN
More informationScaling, Sensitivity and Stability in Numerical Solution of the Quadratic Eigenvalue Problem
Scaling, Sensitivity and Stability in Numerical Solution of the Quadratic Eigenvalue Problem Nick Higham School of Mathematics The University of Manchester higham@ma.man.ac.uk http://www.ma.man.ac.uk/~higham/
More informationExtending Results from Orthogonal Matrices to the Class of P -orthogonal Matrices
Extending Results from Orthogonal Matrices to the Class of P -orthogonal Matrices João R. Cardoso Instituto Superior de Engenharia de Coimbra Quinta da Nora 3040-228 Coimbra Portugal jocar@isec.pt F. Silva
More informationHomework sheet 4: EIGENVALUES AND EIGENVECTORS. DIAGONALIZATION (with solutions) Year ? Why or why not? 6 9
Bachelor in Statistics and Business Universidad Carlos III de Madrid Mathematical Methods II María Barbero Liñán Homework sheet 4: EIGENVALUES AND EIGENVECTORS DIAGONALIZATION (with solutions) Year - Is
More informationLecture notes: Applied linear algebra Part 1. Version 2
Lecture notes: Applied linear algebra Part 1. Version 2 Michael Karow Berlin University of Technology karow@math.tu-berlin.de October 2, 2008 1 Notation, basic notions and facts 1.1 Subspaces, range and
More informationMatrix Mathematics. Theory, Facts, and Formulas with Application to Linear Systems Theory. Dennis S. Bernstein
Matrix Mathematics Theory, Facts, and Formulas with Application to Linear Systems Theory Dennis S. Bernstein PRINCETON UNIVERSITY PRESS PRINCETON AND OXFORD Contents Special Symbols xv Conventions, Notation,
More informationEXERCISES ON DETERMINANTS, EIGENVALUES AND EIGENVECTORS. 1. Determinants
EXERCISES ON DETERMINANTS, EIGENVALUES AND EIGENVECTORS. Determinants Ex... Let A = 0 4 4 2 0 and B = 0 3 0. (a) Compute 0 0 0 0 A. (b) Compute det(2a 2 B), det(4a + B), det(2(a 3 B 2 )). 0 t Ex..2. For
More informationLinGloss. A glossary of linear algebra
LinGloss A glossary of linear algebra Contents: Decompositions Types of Matrices Theorems Other objects? Quasi-triangular A matrix A is quasi-triangular iff it is a triangular matrix except its diagonal
More informationLinear Algebra Review
Chapter 1 Linear Algebra Review It is assumed that you have had a course in linear algebra, and are familiar with matrix multiplication, eigenvectors, etc. I will review some of these terms here, but quite
More informationEigenpairs and Similarity Transformations
CHAPTER 5 Eigenpairs and Similarity Transformations Exercise 56: Characteristic polynomial of transpose we have that A T ( )=det(a T I)=det((A I) T )=det(a I) = A ( ) A ( ) = det(a I) =det(a T I) =det(a
More informationRemark 1 By definition, an eigenvector must be a nonzero vector, but eigenvalue could be zero.
Sec 5 Eigenvectors and Eigenvalues In this chapter, vector means column vector Definition An eigenvector of an n n matrix A is a nonzero vector x such that A x λ x for some scalar λ A scalar λ is called
More informationOptimization Theory. A Concise Introduction. Jiongmin Yong
October 11, 017 16:5 ws-book9x6 Book Title Optimization Theory 017-08-Lecture Notes page 1 1 Optimization Theory A Concise Introduction Jiongmin Yong Optimization Theory 017-08-Lecture Notes page Optimization
More informationApplied Mathematics 205. Unit V: Eigenvalue Problems. Lecturer: Dr. David Knezevic
Applied Mathematics 205 Unit V: Eigenvalue Problems Lecturer: Dr. David Knezevic Unit V: Eigenvalue Problems Chapter V.2: Fundamentals 2 / 31 Eigenvalues and Eigenvectors Eigenvalues and eigenvectors of
More informationLINEAR ALGEBRA REVIEW
LINEAR ALGEBRA REVIEW JC Stuff you should know for the exam. 1. Basics on vector spaces (1) F n is the set of all n-tuples (a 1,... a n ) with a i F. It forms a VS with the operations of + and scalar multiplication
More informationSolving large Hamiltonian eigenvalue problems
Solving large Hamiltonian eigenvalue problems David S. Watkins watkins@math.wsu.edu Department of Mathematics Washington State University Adaptivity and Beyond, Vancouver, August 2005 p.1 Some Collaborators
More informationSTRUCTURE PRESERVING DEFLATION OF INFINITE EIGENVALUES IN STRUCTURED PENCILS
Electronic Transactions on Numerical Analysis Volume 44 pp 1 24 215 Copyright c 215 ISSN 168 9613 ETNA STRUCTURE PRESERVING DEFLATION OF INFINITE EIGENVALUES IN STRUCTURED PENCILS VOLKER MEHRMANN AND HONGGUO
More informationElementary linear algebra
Chapter 1 Elementary linear algebra 1.1 Vector spaces Vector spaces owe their importance to the fact that so many models arising in the solutions of specific problems turn out to be vector spaces. The
More informationJordan Canonical Form of A Partitioned Complex Matrix and Its Application to Real Quaternion Matrices
COMMUNICATIONS IN ALGEBRA, 29(6, 2363-2375(200 Jordan Canonical Form of A Partitioned Complex Matrix and Its Application to Real Quaternion Matrices Fuzhen Zhang Department of Math Science and Technology
More informationEIGENVALUE PROBLEMS. EIGENVALUE PROBLEMS p. 1/4
EIGENVALUE PROBLEMS EIGENVALUE PROBLEMS p. 1/4 EIGENVALUE PROBLEMS p. 2/4 Eigenvalues and eigenvectors Let A C n n. Suppose Ax = λx, x 0, then x is a (right) eigenvector of A, corresponding to the eigenvalue
More informationRemark By definition, an eigenvector must be a nonzero vector, but eigenvalue could be zero.
Sec 6 Eigenvalues and Eigenvectors Definition An eigenvector of an n n matrix A is a nonzero vector x such that A x λ x for some scalar λ A scalar λ is called an eigenvalue of A if there is a nontrivial
More informationThe Eigenvalue Problem: Perturbation Theory
Jim Lambers MAT 610 Summer Session 2009-10 Lecture 13 Notes These notes correspond to Sections 7.2 and 8.1 in the text. The Eigenvalue Problem: Perturbation Theory The Unsymmetric Eigenvalue Problem Just
More informationStructured Condition Numbers of Symmetric Algebraic Riccati Equations
Proceedings of the 2 nd International Conference of Control Dynamic Systems and Robotics Ottawa Ontario Canada May 7-8 2015 Paper No. 183 Structured Condition Numbers of Symmetric Algebraic Riccati Equations
More information1. General Vector Spaces
1.1. Vector space axioms. 1. General Vector Spaces Definition 1.1. Let V be a nonempty set of objects on which the operations of addition and scalar multiplication are defined. By addition we mean a rule
More information1. Introduction. Throughout this work we consider n n matrix polynomials with degree k of the form
LINEARIZATIONS OF SINGULAR MATRIX POLYNOMIALS AND THE RECOVERY OF MINIMAL INDICES FERNANDO DE TERÁN, FROILÁN M. DOPICO, AND D. STEVEN MACKEY Abstract. A standard way of dealing with a regular matrix polynomial
More informationLinear Algebra. Workbook
Linear Algebra Workbook Paul Yiu Department of Mathematics Florida Atlantic University Last Update: November 21 Student: Fall 2011 Checklist Name: A B C D E F F G H I J 1 2 3 4 5 6 7 8 9 10 xxx xxx xxx
More informationInverse Eigenvalue Problem with Non-simple Eigenvalues for Damped Vibration Systems
Journal of Informatics Mathematical Sciences Volume 1 (2009), Numbers 2 & 3, pp. 91 97 RGN Publications (Invited paper) Inverse Eigenvalue Problem with Non-simple Eigenvalues for Damped Vibration Systems
More informationMath 102, Winter Final Exam Review. Chapter 1. Matrices and Gaussian Elimination
Math 0, Winter 07 Final Exam Review Chapter. Matrices and Gaussian Elimination { x + x =,. Different forms of a system of linear equations. Example: The x + 4x = 4. [ ] [ ] [ ] vector form (or the column
More informationStructured eigenvalue condition numbers and linearizations for matrix polynomials
Eidgenössische Technische Hochschule Zürich Ecole polytechnique fédérale de Zurich Politecnico federale di Zurigo Swiss Federal Institute of Technology Zurich Structured eigenvalue condition numbers and
More informationPALINDROMIC POLYNOMIAL EIGENVALUE PROBLEMS: GOOD VIBRATIONS FROM GOOD LINEARIZATIONS
PALINDROMIC POLYNOMIAL EIGENVALUE PROBLEMS: GOOD VIBRATIONS FROM GOOD LINEARIZATIONS D STEVEN MACKEY, NILOUFER MACKEY, CHRISTIAN MEHL, AND VOLKER MEHRMANN Abstract Palindromic polynomial eigenvalue problems
More information1. Foundations of Numerics from Advanced Mathematics. Linear Algebra
Foundations of Numerics from Advanced Mathematics Linear Algebra Linear Algebra, October 23, 22 Linear Algebra Mathematical Structures a mathematical structure consists of one or several sets and one or
More informationIndex. for generalized eigenvalue problem, butterfly form, 211
Index ad hoc shifts, 165 aggressive early deflation, 205 207 algebraic multiplicity, 35 algebraic Riccati equation, 100 Arnoldi process, 372 block, 418 Hamiltonian skew symmetric, 420 implicitly restarted,
More informationPreliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012
Instructions Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012 The exam consists of four problems, each having multiple parts. You should attempt to solve all four problems. 1.
More informationNOTES ON BILINEAR FORMS
NOTES ON BILINEAR FORMS PARAMESWARAN SANKARAN These notes are intended as a supplement to the talk given by the author at the IMSc Outreach Programme Enriching Collegiate Education-2015. Symmetric bilinear
More informationOn rank one perturbations of Hamiltonian system with periodic coefficients
On rank one perturbations of Hamiltonian system with periodic coefficients MOUHAMADOU DOSSO Université FHB de Cocody-Abidjan UFR Maths-Info., BP 58 Abidjan, CÔTE D IVOIRE mouhamadou.dosso@univ-fhb.edu.ci
More informationDiagonalizing Matrices
Diagonalizing Matrices Massoud Malek A A Let A = A k be an n n non-singular matrix and let B = A = [B, B,, B k,, B n ] Then A n A B = A A 0 0 A k [B, B,, B k,, B n ] = 0 0 = I n 0 A n Notice that A i B
More informationPlan What is a normal formal of a mathematical object? Similarity and Congruence Transformations Jordan normal form and simultaneous diagonalisation R
Pencils of skew-symmetric matrices and Jordan-Kronecker invariants Alexey Bolsinov, Loughborough University September 25-27, 27 Journée Astronomie et Systèmes Dynamiques MCCE / Observatoire de Paris Plan
More informationBASIC ALGORITHMS IN LINEAR ALGEBRA. Matrices and Applications of Gaussian Elimination. A 2 x. A T m x. A 1 x A T 1. A m x
BASIC ALGORITHMS IN LINEAR ALGEBRA STEVEN DALE CUTKOSKY Matrices and Applications of Gaussian Elimination Systems of Equations Suppose that A is an n n matrix with coefficents in a field F, and x = (x,,
More informationSemidefinite Programming Duality and Linear Time-invariant Systems
Semidefinite Programming Duality and Linear Time-invariant Systems Venkataramanan (Ragu) Balakrishnan School of ECE, Purdue University 2 July 2004 Workshop on Linear Matrix Inequalities in Control LAAS-CNRS,
More informationQuadratic forms. Here. Thus symmetric matrices are diagonalizable, and the diagonalization can be performed by means of an orthogonal matrix.
Quadratic forms 1. Symmetric matrices An n n matrix (a ij ) n ij=1 with entries on R is called symmetric if A T, that is, if a ij = a ji for all 1 i, j n. We denote by S n (R) the set of all n n symmetric
More informationStructured Condition Numbers and Backward Errors in Scalar Product Spaces
Structured Condition Numbers and Backward Errors in Scalar Product Spaces Françoise Tisseur Department of Mathematics University of Manchester ftisseur@ma.man.ac.uk http://www.ma.man.ac.uk/~ftisseur/ Joint
More informationChap 3. Linear Algebra
Chap 3. Linear Algebra Outlines 1. Introduction 2. Basis, Representation, and Orthonormalization 3. Linear Algebraic Equations 4. Similarity Transformation 5. Diagonal Form and Jordan Form 6. Functions
More informationMath 108b: Notes on the Spectral Theorem
Math 108b: Notes on the Spectral Theorem From section 6.3, we know that every linear operator T on a finite dimensional inner product space V has an adjoint. (T is defined as the unique linear operator
More informationLinear Algebra and its Applications
Linear Algebra and its Applications 436 (2012) 3954 3973 Contents lists available at ScienceDirect Linear Algebra and its Applications journal homepage: www.elsevier.com/locate/laa Hermitian matrix polynomials
More informationOn doubly structured matrices and pencils that arise in linear response theory
On doubly structured matrices and pencils that arise in linear response theory Christian Mehl Volker Mehrmann Hongguo Xu Abstract We discuss matrix pencils with a double symmetry structure that arise in
More informationNotes on Linear Algebra and Matrix Analysis
Notes on Linear Algebra and Matrix Analysis Maxim Neumann May 2006, Version 0.1.1 1 Matrix Basics Literature to this topic: [1 4]. x y < y,x >: standard inner product. x x = 1 : x is normalized x y = 0
More informationEcon 204 Supplement to Section 3.6 Diagonalization and Quadratic Forms. 1 Diagonalization and Change of Basis
Econ 204 Supplement to Section 3.6 Diagonalization and Quadratic Forms De La Fuente notes that, if an n n matrix has n distinct eigenvalues, it can be diagonalized. In this supplement, we will provide
More informationThe quadratic eigenvalue problem (QEP) is to find scalars λ and nonzero vectors u satisfying
I.2 Quadratic Eigenvalue Problems 1 Introduction The quadratic eigenvalue problem QEP is to find scalars λ and nonzero vectors u satisfying where Qλx = 0, 1.1 Qλ = λ 2 M + λd + K, M, D and K are given
More informationEE/ACM Applications of Convex Optimization in Signal Processing and Communications Lecture 2
EE/ACM 150 - Applications of Convex Optimization in Signal Processing and Communications Lecture 2 Andre Tkacenko Signal Processing Research Group Jet Propulsion Laboratory April 5, 2012 Andre Tkacenko
More informationSymmetric and anti symmetric matrices
Symmetric and anti symmetric matrices In linear algebra, a symmetric matrix is a square matrix that is equal to its transpose. Formally, matrix A is symmetric if. A = A Because equal matrices have equal
More informationMath Linear Algebra II. 1. Inner Products and Norms
Math 342 - Linear Algebra II Notes 1. Inner Products and Norms One knows from a basic introduction to vectors in R n Math 254 at OSU) that the length of a vector x = x 1 x 2... x n ) T R n, denoted x,
More informationWeek Quadratic forms. Principal axes theorem. Text reference: this material corresponds to parts of sections 5.5, 8.2,
Math 051 W008 Margo Kondratieva Week 10-11 Quadratic forms Principal axes theorem Text reference: this material corresponds to parts of sections 55, 8, 83 89 Section 41 Motivation and introduction Consider
More informationMTH 102: Linear Algebra Department of Mathematics and Statistics Indian Institute of Technology - Kanpur. Problem Set
MTH 102: Linear Algebra Department of Mathematics and Statistics Indian Institute of Technology - Kanpur Problem Set 6 Problems marked (T) are for discussions in Tutorial sessions. 1. Find the eigenvalues
More informationMath Matrix Algebra
Math 44 - Matrix Algebra Review notes - 4 (Alberto Bressan, Spring 27) Review of complex numbers In this chapter we shall need to work with complex numbers z C These can be written in the form z = a+ib,
More information