Computable error bounds for nonlinear eigenvalue problems allowing for a minmax characterization


1  Computable error bounds for nonlinear eigenvalue problems allowing for a minmax characterization

Heinrich Voss (voss@tuhh.de), joint work with Kemal Yildiztekin
Hamburg University of Technology, Institute of Mathematics
Valencia, June 20

2  Outline

1. Introduction
2. Minmax Characterization
3. A posteriori error bounds
4. Quadratic Problem
5. A rational eigenproblem

3  Introduction: Linear eigenvalue problems

Theorem (Krylov, Bogoliubov 1929; Weinstein 1934). Let $A = A^H$, $x \neq 0$, and $R(x) := x^H A x / x^H x$. Then there exists an eigenvalue $\eta$ of $A$ such that
  $|\eta - R(x)| \le \dfrac{\|Ax - R(x)x\|}{\|x\|}.$

Theorem (Kato 1949; Temple 1929, 1952). Let $A = A^H$ with eigenvalues $\eta_1 \ge \eta_2 \ge \dots \ge \eta_n$, and let $\alpha < \beta$ be such that $\eta_{j+1} \le \alpha \le \eta_j \le \beta \le \eta_{j-1}$. Assume that
  $\|Ax - R(x)x\|^2 \le (R(x) - \alpha)(\beta - R(x))\,\|x\|^2.$
Then it holds that
  $R(x) - \dfrac{\|Ax - R(x)x\|^2}{(\beta - R(x))\,\|x\|^2} \;\le\; \eta_j \;\le\; R(x) + \dfrac{\|Ax - R(x)x\|^2}{(R(x) - \alpha)\,\|x\|^2}.$
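
Both linear bounds are cheap to evaluate once an approximate eigenpair is at hand. The following MATLAB sketch is a made-up random test case (not from the talk); the index j and the separation parameters alpha, beta are chosen purely for illustration:

  % Illustrative check of the Krylov-Bogoliubov/Weinstein and Kato-Temple bounds
  n = 50;
  A = randn(n);  A = (A + A')/2;             % Hermitian test matrix
  [V, D] = eig(A);
  [ew, idx] = sort(diag(D), 'descend');      % eta_1 >= ... >= eta_n
  j = 3;                                     % targeted eigenvalue eta_j
  x = V(:, idx(j)) + 1e-3*randn(n, 1);       % perturbed eigenvector as test vector

  R   = (x'*A*x)/(x'*x);                     % Rayleigh quotient R(x)
  res = norm(A*x - R*x)/norm(x);             % residual norm

  kbw = [R - res, R + res];                  % KBW: some eigenvalue lies in this interval

  alpha = (ew(j) + ew(j+1))/2;               % separation parameters (here taken from the
  beta  = (ew(j) + ew(j-1))/2;               % exact eigenvalues, only for illustration)
  lower = R - res^2/(beta - R);              % Kato-Temple lower bound for eta_j
  upper = R + res^2/(R - alpha);             % Kato-Temple upper bound for eta_j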

4  Minmax Characterization: Nonlinear eigenvalue problem

Let $J \subset \mathbb{R}$ be an open interval (which may be unbounded), and let $T(\lambda)$, $\lambda \in J$, be a family of Hermitian matrices.

Nonlinear eigenvalue problem: Find $\lambda \in J$ and $x \neq 0$ such that $T(\lambda)x = 0$. Then $\lambda$ is called an eigenvalue of $T(\cdot)$, and $x$ a corresponding eigenvector.

Problems of this type arise, e.g., in damped vibrations of structures, conservative gyroscopic systems, lateral buckling problems, problems with retarded arguments, fluid-solid vibrations, and quantum dot heterostructures.

5  Minmax Characterization: Nonlinear minmax theory

Assume that for fixed $x \in \mathbb{C}^n$, $x \neq 0$, the real equation
  $f(\lambda, x) := x^H T(\lambda) x = 0$
has at most one solution $\lambda =: p(x)$ in $J$. Then the equation $f(\lambda, x) = 0$ implicitly defines a functional $p$ on some subset $D$ of $\mathbb{C}^n$, which we call the Rayleigh functional.

Assume further that
  $(\lambda - p(x))\, f(\lambda, x) > 0$ for every $\lambda \in J$, $\lambda \neq p(x)$, and every $x \in D$.
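
Numerically, the value $p(x)$ is simply the root of the scalar function $\lambda \mapsto x^H T(\lambda)x$ in $J$. A minimal MATLAB sketch, using an ad hoc quadratic family and an arbitrarily chosen interval $J$ (both assumptions for illustration only):

  n = 10;
  B = randn(n);  B = 0.5*(B'*B);             % symmetric positive definite
  C = randn(n);  C = C'*C;
  T = @(lam) lam^2*eye(n) + 2*lam*B + C;     % example family T(lambda)
  f = @(lam, x) real(x'*T(lam)*x);           % f(lambda, x) = x^H T(lambda) x

  x = randn(n, 1);
  J = [-100, -1];                            % working interval (chosen ad hoc)
  if sign(f(J(1), x)) ~= sign(f(J(2), x))    % x belongs to D(p) if f changes sign on J
      p = fzero(@(lam) f(lam, x), J);        % the unique root p(x) in J
  else
      p = NaN;                               % x is not in the domain D of p
  end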

6  Minmax Characterization: Overdamped problems

If $p$ is defined on $D = \mathbb{C}^n \setminus \{0\}$, then the problem $T(\lambda)x = 0$ is called overdamped. The notation is motivated by the finite dimensional quadratic eigenvalue problem
  $T(\lambda)x = \lambda^2 M x + \lambda C x + K x = 0,$
where $M$, $C$ and $K$ are Hermitian and positive definite matrices.

Theorem (Duffin 1955; Rogers 1964). Under the conditions above an overdamped problem has exactly $n$ eigenvalues $\lambda_1 \le \lambda_2 \le \dots \le \lambda_n$, which can be characterized by
  $\lambda_j = \min_{\dim V = j}\ \max_{x \in V \setminus \{0\}} p(x).$

7  Minmax Characterization: Nonoverdamped problems

For nonoverdamped eigenproblems the natural ordering (calling the smallest eigenvalue the first one, the second smallest the second one, etc.) is not appropriate.

This is obvious if we make a linear eigenvalue problem $T(\lambda)x := (\lambda I - A)x = 0$ nonlinear by restricting it to an interval $J$ which does not contain the smallest eigenvalue of $A$. Then all of the above conditions are satisfied, $p$ is the restriction of the Rayleigh quotient $R_A$ to $D := \{x \neq 0 : R_A(x) \in J\}$, and $\inf_{x \in D} p(x)$ will in general not be an eigenvalue.

8  Minmax Characterization: Enumeration of eigenvalues

If $\lambda \in J$ is an eigenvalue of $T(\cdot)$, then $\mu = 0$ is an eigenvalue of the linear problem $T(\lambda)y = \mu y$, and therefore there exists $l \in \mathbb{N}$ such that
  $0 = \max_{V \in H_l}\ \min_{v \in V \setminus \{0\}} \dfrac{v^H T(\lambda) v}{\|v\|^2},$
where $H_l$ denotes the set of all $l$-dimensional subspaces of $\mathbb{C}^n$. In this case $\lambda$ is called an $l$-th eigenvalue of $T(\cdot)$.
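
In finite dimensions the index $l$ can be read off from the spectrum of the Hermitian matrix $T(\hat\lambda)$: it is the position of the (numerically) zero eigenvalue among the decreasingly ordered eigenvalues. A small sketch with the same illustrative quadratic family as above; the tolerance and the selection of a real eigenvalue are ad hoc assumptions:

  n = 10;
  B = randn(n);  B = 0.5*(B'*B);  C = randn(n);  C = C'*C;
  T = @(lam) lam^2*eye(n) + 2*lam*B + C;

  [~, E] = polyeig(C, 2*B, eye(n));               % all eigenvalues of T(lambda)x = 0
  re = real(E(abs(imag(E)) < 1e-10*max(abs(E)))); % real eigenvalues (assumed to exist)
  if ~isempty(re)
      lamhat = max(re);                           % pick one of the real eigenvalues
      mu = sort(eig(T(lamhat)), 'descend');       % eigenvalues of T(lamhat), decreasing
      [~, l] = min(abs(mu));                      % lamhat is an l-th eigenvalue of T(.)
  end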

9  Minmax Characterization: Minmax characterization (Voss, Werner 1982; Voss 2009)

Under the conditions given above it holds:

(i) For every $l \in \mathbb{N}$ there is at most one $l$-th eigenvalue of $T(\cdot)$, which can be characterized by
  $\lambda_l = \min_{V \in H_l,\ V \cap D \neq \emptyset}\ \sup_{v \in V \cap D} p(v). \qquad (*)$
The set of eigenvalues of $T(\cdot)$ in $J$ is at most countable.

(ii) $\hat\lambda \in J$ is an $l$-th eigenvalue if and only if $\mu = 0$ is the $l$-th largest eigenvalue of the linear eigenproblem $T(\hat\lambda)x = \mu x$.

(iii) The minimum in $(*)$ is attained for the invariant subspace of $T(\lambda_l)$ corresponding to its $l$ largest eigenvalues.

(iv) If $T(\cdot)$ has an $l$-th eigenvalue $\lambda_l \in J$, then it holds for $\lambda \in J$ that
  $\eta_l(\lambda) := \max_{\dim V = l}\ \min_{x \in V,\, x \neq 0} \dfrac{x^H T(\lambda) x}{x^H x}
  \;\begin{cases} < 0 & \text{if } \lambda < \lambda_l, \\ = 0 & \text{if } \lambda = \lambda_l, \\ > 0 & \text{if } \lambda > \lambda_l. \end{cases}$

10  A posteriori error bounds

Assume that $T : J \to \mathbb{C}^{n \times n}$ allows for a minmax characterization of its eigenvalues in $J$, and let $\eta_j(\lambda)$ be the $j$-th largest eigenvalue of the linear eigenproblem $T(\lambda)y = \eta_j(\lambda) y$, $j = 1, \dots, n$, $\lambda \in J$.

Lemma. For $\lambda, \tilde\lambda \in J$, $\lambda \neq \tilde\lambda$, it holds that
  $\dfrac{\eta_j(\lambda) - \eta_j(\tilde\lambda)}{\lambda - \tilde\lambda} \;\ge\; \min_{y \neq 0} \dfrac{y^H (T(\lambda) - T(\tilde\lambda)) y}{(\lambda - \tilde\lambda)\, y^H y} \;=:\; \varphi(\lambda, \tilde\lambda).$

This follows easily from the monotonicity result for eigenvalues of sums of Hermitian matrices $A$ and $B$:
  $\lambda_{j+k-1}(A + B) \le \lambda_j(A) + \lambda_k(B), \quad j, k = 1, \dots, n,\ j + k \le n + 1,$
  $\lambda_{j+k-n}(A + B) \ge \lambda_j(A) + \lambda_k(B), \quad j, k = 1, \dots, n,\ j + k \ge n + 1.$
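
For a concrete family, evaluating $\varphi(\lambda, \tilde\lambda)$ is itself a small eigenvalue computation: it is the smallest eigenvalue of the difference quotient $(T(\lambda) - T(\tilde\lambda))/(\lambda - \tilde\lambda)$. A sketch for the illustrative quadratic family used above, with two arbitrarily chosen evaluation points, compared against the closed form $\lambda + \tilde\lambda + 2\lambda_{\min}(B)$ valid for $T = Q$:

  n = 10;
  B = randn(n);  B = 0.5*(B'*B);  C = randn(n);  C = C'*C;
  T = @(lam) lam^2*eye(n) + 2*lam*B + C;

  phi = @(l1, l2) min(eig((T(l1) - T(l2))/(l1 - l2)));   % phi(lambda, lambda_tilde)

  l1 = -3.0;  l2 = -2.5;                                 % two arbitrary points
  [phi(l1, l2), l1 + l2 + 2*min(eig(B))]                 % the two values agree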

11  A posteriori error bounds: Krylov, Bogoliubov, Weinstein

Theorem. Under the conditions of the minmax characterization let $x \in D(p)$, and assume that for $\gamma \le p(x) \le \delta$ it holds that
  $\varphi(p(x), \gamma)\,(p(x) - \gamma) \ \ge\ \dfrac{\|T(p(x))x\|}{\|x\|} \qquad \text{and} \qquad \varphi(\delta, p(x))\,(\delta - p(x)) \ \ge\ \dfrac{\|T(p(x))x\|}{\|x\|}.$
Then $T(\lambda)x = 0$ has an eigenvalue $\hat\lambda$ such that $\gamma \le \hat\lambda \le \delta$.
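
For the special case $T(\lambda) = \lambda I - A$ (restricted to an interval) one has $\varphi \equiv 1$, and the theorem reduces to the classical enclosure $[p(x) - r,\ p(x) + r]$ with $r = \|T(p(x))x\|/\|x\|$. A minimal MATLAB sanity check with a random Hermitian $A$ and a random test vector (an assumed toy setting, not from the talk):

  n = 30;
  A = randn(n);  A = (A + A')/2;
  T = @(lam) lam*eye(n) - A;

  x = randn(n, 1);
  p = (x'*A*x)/(x'*x);               % here p(x) is the Rayleigh quotient of A
  r = norm(T(p)*x)/norm(x);          % residual in the theorem
  gamma = p - r;  delta = p + r;     % phi = 1, so both conditions hold with equality
  ew = eig(A);
  any(ew >= gamma & ew <= delta)     % returns true: an eigenvalue lies in [gamma, delta]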

12  A posteriori error bounds: Krylov, Bogoliubov, Weinstein, Proof

The Rayleigh quotient of $T(p(x))y = \eta y$ at $x$ is $0$, and hence the (linear) Krylov-Bogoliubov-Weinstein Theorem yields the existence of some eigenvalue $\eta_j(p(x))$ such that
  $|\eta_j(p(x))| \ \le\ \dfrac{\|T(p(x))x\|}{\|x\|}.$

If $\eta_j(p(x)) > 0$, then it follows from the Lemma that
  $\dfrac{\|T(p(x))x\|}{\|x\|} \ \ge\ \eta_j(p(x)) \ \ge\ \eta_j(\gamma) + \varphi(p(x), \gamma)(p(x) - \gamma) \ \ge\ \eta_j(\gamma) + \dfrac{\|T(p(x))x\|}{\|x\|},$
i.e. $\eta_j(\gamma) \le 0$, and there exists $\hat\lambda_j \in [\gamma, p(x)]$ such that $\eta_j(\hat\lambda_j) = 0$.

Likewise, for $\eta_j(p(x)) < 0$ we get the existence of an eigenvalue $\hat\lambda_j \in [p(x), \delta]$.

13  A posteriori error bounds: Kato, Temple

Theorem. Under the conditions of the minmax characterization let $\lambda_j, \beta \in J$ and $x \in D(p)$. Assume that $\lambda_j \le p(x) \le \beta$, that $\eta_{j+1}(\beta) \le 0$, and that $\varphi(\lambda, \tilde\lambda) \ge 0$ for $\lambda, \tilde\lambda \in [\lambda_j, \beta]$, $\lambda \neq \tilde\lambda$. Then it holds that
  $\varphi(p(x), \lambda_j)\,(p(x) - \lambda_j) \ \le\ \dfrac{\|T(p(x))x\|^2}{\varphi(\beta, p(x))\,(\beta - p(x))\,\|x\|^2}.$

Remarks.
1. If $\lambda_{j+1} \in J$, then $\eta_{j+1}(\beta) \le 0$ holds for $\beta \le \lambda_{j+1}$; otherwise it holds trivially for $\beta \in J$.
2. For $T(\lambda) := \lambda I - K$ it holds that $\varphi(\lambda, \tilde\lambda) = 1$, and with $p(x) = x^H K x / x^H x$ one gets
  $(p(x) - \lambda_j)(\beta - p(x)) \ \le\ \|T(p(x))x\|^2 / \|x\|^2,$
i.e. the Kato-Temple Theorem for linear problems.

14  A posteriori error bounds: Kato, Temple, Proof

From the Lemma we get
  $-\eta_{j+1}(p(x)) \ \ge\ \eta_{j+1}(\beta) - \eta_{j+1}(p(x)) \ \ge\ \varphi(\beta, p(x))(\beta - p(x)) \ \ge\ 0.$

From $\lambda_j \le p(x) \le \beta$ and property (iv) of the minmax Theorem it follows that the conditions of the (linear) Kato-Temple Theorem are satisfied, and therefore
  $\eta_j(p(x)) \ \le\ \dfrac{\|T(p(x))x\|^2}{(-\eta_{j+1}(p(x)))\,\|x\|^2} \ \le\ \dfrac{\|T(p(x))x\|^2}{\varphi(\beta, p(x))(\beta - p(x))\,\|x\|^2},$
and applying the Lemma again yields
  $\dfrac{\|T(p(x))x\|^2}{\varphi(\beta, p(x))(\beta - p(x))\,\|x\|^2} \ \ge\ \eta_j(p(x)) \ =\ \eta_j(p(x)) - \eta_j(\lambda_j) \ \ge\ \varphi(p(x), \lambda_j)(p(x) - \lambda_j).$

15  A posteriori error bounds: Kato, Temple for differentiable T

Theorem. Let $T(\cdot)$ be differentiable, and let the conditions of the minmax characterization be satisfied. Let $x \in D(p)$, and assume that $\lambda_j, \beta \in J$ with $\lambda_j \le p(x) \le \beta$, that $\eta_{j+1}(\beta) \le 0$, and that
  $\psi(\lambda) := \min_{y \neq 0} \dfrac{y^H T'(\lambda) y}{y^H y} \ \ge\ 0 \quad \text{for every } \lambda \in [\lambda_j, \beta].$
Then it holds that
  $\int_{\lambda_j}^{p(x)} \psi(\lambda)\, d\lambda \;\int_{p(x)}^{\beta} \psi(\lambda)\, d\lambda \ \le\ \dfrac{\|T(p(x))x\|^2}{\|x\|^2}.$

For overdamped problems and the maximal eigenvalue of $T(\cdot)$ this Kato-Temple bound was proved by Hadeler (1969).

16  A posteriori error bounds: Proof

$\eta_j(\lambda) : J \to \mathbb{R}$, the $j$-th largest eigenvalue of $T(\lambda) y(\lambda) = \eta_j(\lambda) y(\lambda)$, is a continuous and piecewise continuously differentiable function, and the corresponding eigenvector $y(\lambda)$ can be chosen continuous and piecewise continuously differentiable as well.

Differentiating $T(\lambda)y(\lambda) = \eta_j(\lambda)y(\lambda)$ gives
  $T'(\lambda)y(\lambda) + T(\lambda)y'(\lambda) = \eta_j'(\lambda)y(\lambda) + \eta_j(\lambda)y'(\lambda),$
and multiplying by $y(\lambda)^H$ from the left yields
  $\eta_j'(\lambda) = \dfrac{y(\lambda)^H T'(\lambda) y(\lambda)}{y(\lambda)^H y(\lambda)}.$

17  A posteriori error bounds: Proof (continued)

Hence,
  $\eta_{j+1}(p(x)) \ =\ \eta_{j+1}(\beta) - \int_{p(x)}^{\beta} \eta_{j+1}'(\lambda)\, d\lambda \ =\ \eta_{j+1}(\beta) - \int_{p(x)}^{\beta} \dfrac{y(\lambda)^H T'(\lambda) y(\lambda)}{y(\lambda)^H y(\lambda)}\, d\lambda \ \le\ -\int_{p(x)}^{\beta} \psi(\lambda)\, d\lambda \ =:\ -\gamma \ \le\ 0.$

If $\gamma = 0$, then nothing is left to be shown. Otherwise we have
  $\eta_{j+1}(p(x)) \ \le\ -\gamma \ <\ 0 \ \le\ \eta_j(p(x)),$
where the last inequality follows from the minmax characterization, (iv).

18  A posteriori error bounds: Proof (continued)

From the (linear) Kato-Temple Theorem for $T(p(x))$ (notice that the Rayleigh quotient of $T(p(x))$ at $x$ is $0$) we get
  $\eta_j(p(x)) \ \le\ \dfrac{\|T(p(x))x\|^2}{\gamma\, \|x\|^2},$
and therefore
  $0 \ =\ \eta_j(\lambda_j) \ =\ \eta_j(p(x)) - \int_{\lambda_j}^{p(x)} \eta_j'(\lambda)\, d\lambda \ \le\ \dfrac{\|T(p(x))x\|^2}{\gamma\, \|x\|^2} - \int_{\lambda_j}^{p(x)} \psi(\lambda)\, d\lambda,$
i.e.
  $\int_{\lambda_j}^{p(x)} \psi(\lambda)\, d\lambda \;\int_{p(x)}^{\beta} \psi(\lambda)\, d\lambda \ \le\ \dfrac{\|T(p(x))x\|^2}{\|x\|^2}.$

19  Quadratic Problem: Quadratic eigenproblem

  $Q(\lambda)x := (\lambda^2 I + 2\lambda B + C)x = 0, \qquad B = B^T > 0,\ C = C^T > 0.$

Assume that $Q(\sigma)$ is indefinite for some $\sigma < 0$. Then there exist intervals $J_- := (-\infty, \sigma_-)$ and $J_+ := (\sigma_+, 0)$ such that all eigenvalues $\lambda_1^- \le \lambda_2^- \le \dots \le \lambda_k^-$ in $J_-$ are minmax values of $p_-$, and all eigenvalues $\lambda_1^+ \ge \lambda_2^+ \ge \dots \ge \lambda_l^+$ in $J_+$ are maxmin values of $p_+$, where
  $p_\pm(x) = \Bigl(-x^T B x \pm \sqrt{(x^T B x)^2 - x^T C x \, x^T x}\Bigr) / \|x\|^2.$

Let $x \in D(p_-)$ and $\lambda_j^- \le p_-(x) \le \beta \le \lambda_{j+1}^-$ such that $\beta \le -\lambda_{\max}(B)$. Then for $T(\cdot) = -Q(\cdot)$ we have
  $\varphi(\lambda, \tilde\lambda) = -\lambda - \tilde\lambda - 2\lambda_{\max}(B) \ \ge\ 0 \quad \text{for } \lambda, \tilde\lambda \le -\lambda_{\max}(B),$
and one gets
  $(\lambda_j^- + \lambda_{\max}(B))^2 - (p_-(x) + \lambda_{\max}(B))^2 \ \le\ \dfrac{\|Q(p_-(x))x\|^2}{\|x\|^2\,(-\beta - p_-(x) - 2\lambda_{\max}(B))(\beta - p_-(x))}.$
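
Solving the last inequality for $\lambda_j^-$ gives the computable lower bound $\lambda_j^- \ge -\lambda_{\max}(B) - \bigl((p_-(x) + \lambda_{\max}(B))^2 + R\bigr)^{1/2}$, where $R$ denotes the right-hand side. A MATLAB sketch with random data; the admissibility of $x$ and the choice of $\beta$ are only checked heuristically here, not derived from exact eigenvalues as in the talk:

  n = 20;
  B = randn(n);  B = 0.5*(B'*B);  C = randn(n);  C = C'*C;
  Q = @(lam) lam^2*eye(n) + 2*lam*B + C;
  bmax = max(eig(B));

  x  = randn(n, 1);  x = x/norm(x);
  xb = x'*B*x;  xc = x'*C*x;
  if xb^2 > xc                                     % otherwise x is not in D(p_-)
      p = -xb - sqrt(xb^2 - xc);                   % Rayleigh functional value p_-(x)
      beta = 0.5*(p - bmax);                       % heuristic choice between p and -bmax
      if p < beta && beta < -bmax                  % needed: p <= beta <= -lambda_max(B)
          R = norm(Q(p)*x)^2 / ((-beta - p - 2*bmax)*(beta - p));
          lam_lower = -bmax - sqrt((p + bmax)^2 + R)   % lower bound for lambda_j^-
      end
  end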

20  Quadratic Problem: Numerical example

  A = eye(20); B = randn(20); B = 0.5*B'*B; C = randn(20); C = C'*C;

Then $Q(\lambda)x = 0$ has 26 real eigenvalues, 13 of either type, and the maximum of the eigenvalues of negative type is less than the minimum of the eigenvalues of positive type. So all of them are minmax and maxmin values of $p_-$ and $p_+$, respectively.

Only 4 real eigenvalues satisfy $\lambda_j^- < -\lambda_{\max}(B)$, so that the Kato-Temple bound applies.

For a random vector $y$ and $p = p_-(y)$ we projected $Q(\cdot)$ to the invariant subspace of $Q(p)$ corresponding to the 4 largest eigenvalues, and chose as ansatz vectors the corresponding Ritz vectors. For the Kato-Temple bound of the $k$-th eigenvalue we chose $\beta = 0.5\,(\lambda_k^- + \lambda_{k+1}^-)$.
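
A sketch of the bookkeeping behind such an experiment (random data without a fixed seed, so the counts will differ from the 26/13/4 reported above; the classification uses the sign of $x^H Q'(\lambda)x$, which is negative for eigenvalues of negative type and positive for those of positive type):

  n = 20;
  B = randn(n);  B = 0.5*(B'*B);  C = randn(n);  C = C'*C;
  [X, E] = polyeig(C, 2*B, eye(n));                 % all 2n eigenpairs of Q(lambda)x = 0
  bmax = max(eig(B));

  isreal_ev = abs(imag(E)) < 1e-8*max(abs(E));      % crude test for real eigenvalues
  type = zeros(size(E));
  for i = find(isreal_ev)'
      x = X(:, i);  lam = real(E(i));
      type(i) = sign(real(2*lam*(x'*x) + 2*x'*B*x));   % sign of x^H Q'(lam) x
  end
  n_real     = nnz(isreal_ev);                      % number of real eigenvalues
  n_negtype  = nnz(type == -1);                     % eigenvalues of negative type
  n_eligible = nnz(type == -1 & real(E) < -bmax);   % those with lambda_j^- < -lambda_max(B)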

21  Quadratic Problem: Numerical example

[Table of results: lower bound, eigenvalue, upper bound, eigenvalue minus lower bound, and upper bound minus eigenvalue for the eigenvalues covered by the bound.]

22  Quadratic Problem: Quadratic eigenproblem

[Table of results: lower bound, eigenvalue minus lower bound, upper bound, and upper bound minus eigenvalue.]

In a similar way one can prove a Kato-Temple Theorem yielding upper bounds for nonlinear eigenproblems allowing for a maxmin characterization of their eigenvalues. For $Q(\lambda) = \lambda^2 I + 2\lambda B + C$ as before let $x \in D(p_+)$, $\lambda_{j+1}^+ \le \beta \le p_+(x) \le \lambda_j^+$, and $\beta > -\lambda_{\min}(B)$. Then it holds that
  $(\lambda_j^+ + \lambda_{\min}(B))^2 - (p_+(x) + \lambda_{\min}(B))^2 \ \le\ \dfrac{\|Q(p_+(x))x\|^2}{\|x\|^2\,(\beta + p_+(x) + 2\lambda_{\min}(B))(p_+(x) - \beta)}.$

In our numerical example only one eigenvalue $\lambda_1^+ = -3.66 \cdot 10^{-3}$ satisfies $\lambda_j^+ > -\lambda_{\min}(B) = -6.52 \cdot 10^{-3}$, and a very good lower bound $p_+(x)$ is needed to obtain a negative upper bound from the Kato-Temple Theorem.

23  Plate problem: A rational eigenproblem

Consider vertical vibrations of a clamped plate with $k$ identical elastically attached loads. Discretization with finite elements (Bogner-Fox-Schmit elements) yields a rational eigenvalue problem
  $T(\lambda)x := \lambda M x - K x + \dfrac{\lambda}{\sigma - \lambda}\, C C^T x = 0,$
where $K, M \in \mathbb{R}^{n \times n}$ are symmetric and positive definite and $C \in \mathbb{R}^{n \times k}$, which is equivalent to
  $A(\lambda)x := \Bigl(\lambda I - E^{-1} K E^{-T} + \dfrac{\lambda}{\sigma - \lambda}\,(E^{-1}C)(E^{-1}C)^T\Bigr)x = 0, \qquad M = E E^T.$

All eigenvalues allow for a minmax characterization, and for $\lambda, \tilde\lambda < \sigma$
  $\varphi(\lambda, \tilde\lambda) = 1 + \dfrac{\sigma}{(\sigma - \lambda)(\sigma - \tilde\lambda)}\, \lambda_{\min}(E^{-1} C C^T E^{-T}) = 1 > 0,$
since $E^{-1} C C^T E^{-T}$ has rank $k < n$ and hence $\lambda_{\min}(E^{-1} C C^T E^{-T}) = 0$.
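
The statement $\varphi \equiv 1$ is easy to confirm numerically. The following sketch uses synthetic matrices (random $K$, $M$, $C$ and an arbitrary pole $\sigma$, not the plate data of the talk):

  n = 50;  k = 3;  sigma = 10;
  K = randn(n);  K = K'*K + n*eye(n);        % symmetric positive definite
  M = randn(n);  M = M'*M + n*eye(n);
  C = randn(n, k);
  E = chol(M, 'lower');                      % M = E*E'
  Kt = E\K/E';   Kt = (Kt + Kt')/2;          % E^{-1} K E^{-T}, symmetrized
  Ct = E\C;                                  % E^{-1} C
  A = @(lam) lam*eye(n) - Kt + lam/(sigma - lam)*(Ct*Ct');

  l1 = 2;  l2 = 3;                           % two points below sigma
  phi = min(eig((A(l1) - A(l2))/(l1 - l2)))  % approximately 1, since rank(Ct*Ct') = k < n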

24  Plate problem: A rational eigenproblem

[Table of results: for the eigenvalues $j = 1, 2, 3$, with several approximations each, the enclosing interval $[\lambda_l, \lambda_u]$, its length $\lambda_u - \lambda_l$, and the residual norm $\|A(\lambda_u)x\| / \|x\|$.]

25  Plate problem: A rational eigenproblem

[Table of results, continued: the same quantities for the eigenvalues $j = 4, 5, 6$.]
