ON THE CHEBYSHEV METHOD FOR APPROXIMATING THE EIGENVALUES OF LINEAR OPERATORS
REVUE D'ANALYSE NUMÉRIQUE ET DE THÉORIE DE L'APPROXIMATION
Tome 25, Nos 1-2, 1996, pp. 43-56

EMIL CĂTINAŞ and ION PĂVĂLOIU
(Cluj-Napoca)

1. INTRODUCTION

The problem of approximating the eigenvalues of linear operators by the Newton method has been approached in a series of papers ([1], [3], [4], [5]). The Newton method is of special interest here because the operator equation to be solved has a special form, as we shall see below.

In the following we study the convergence of the Chebyshev method attached to this problem, and we apply the results obtained to the approximation of an eigenvalue and of a corresponding eigenvector of a matrix of real or complex numbers. It is known that the convergence order of the Newton method is usually 2, while that of the Chebyshev method is usually 3. Taking into account also the number of operations performed at each step, we obtain that the Chebyshev method is more efficient than the Newton method.

Let E be a Banach space over K, where K = ℝ or K = ℂ, and T : E → E a linear operator. It is well known that the scalar λ is an eigenvalue of T if the equation

(1.1)  Tx − λx = θ

has at least one solution x ≠ θ, where θ is the null element of the space E. The elements x ≠ θ satisfying equation (1.1) are called eigenvectors of the operator T corresponding to the eigenvalue λ.

For the simultaneous determination of the eigenvalues and eigenvectors of T we can proceed in the following way. We attach to equation (1.1) an equation of the form

(1.2)  Gx = 1,

where G : E → K is a linear functional.
Consider the Banach space F = E × K with the norm given by

(1.3)  ‖u‖ = max{‖x‖, |λ|},  u = (x, λ) ∈ F, with x ∈ E and λ ∈ K.

In this space we consider the operator f : F → F given by

(1.4)  f((x, λ)) = (Tx − λx, Gx − 1).

If we denote by θ₁ = (θ, 0) the null element of the space F, then the eigenvalues and the corresponding eigenvectors of the operator T are solutions of the equation

(1.5)  f(u) = θ₁.

Obviously, f is not a linear operator. It can be easily seen that the first order Fréchet derivative of f has the following form [4]:

(1.6)  f′(u₀)h = (Th₁ − λ₀h₁ − λ₁x₀, Gh₁),

where u₀ = (x₀, λ₀) and h = (h₁, λ₁). For the second order derivative of f we obtain the expression

(1.7)  f″(u₀)hk = (−λ₂h₁ − λ₁h₂, 0),

where k = (h₂, λ₂). The Fréchet derivatives of f of order higher than 2 are null. Considering the above forms of the Fréchet derivatives of f, in the following we shall study the convergence of the Chebyshev method for operators whose third order Fréchet derivative is the null operator.

2. THE CONVERGENCE OF THE CHEBYSHEV METHOD

The Chebyshev iterative method for solving equation (1.5) consists in the successive construction of the elements of the sequence (uₙ)ₙ≥₀ given by

(2.1)  uₙ₊₁ = uₙ − Γₙ f(uₙ) − ½ Γₙ f″(uₙ) (Γₙ f(uₙ))²,  n = 0, 1, …,  u₀ ∈ F,

where Γₙ = [f′(uₙ)]⁻¹.
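As a quick illustration of the step (2.1), consider the scalar case, where Γₙ is simply 1/f′(uₙ). The sketch below (our own example, not from the paper) applies the method to f(x) = x² − 2, for which f‴ = 0 as required later, and exhibits the fast (cubic) decrease of the error.

```python
# One Chebyshev step in the scalar case: x - f/f' - (f''/f') (f/f')^2 / 2.
# The target function x^2 - 2 and the starting point 1.5 are illustrative.

def chebyshev_step(x, f, df, d2f):
    t = f(x) / df(x)                               # Newton correction Γ f(x)
    return x - t - 0.5 * (d2f(x) / df(x)) * t * t  # minus ½ Γ f''(Γf)^2

f = lambda x: x * x - 2.0
df = lambda x: 2.0 * x
d2f = lambda x: 2.0

x = 1.5
for _ in range(3):
    x = chebyshev_step(x, f, df, d2f)
print(abs(x - 2.0 ** 0.5))
```

After the first step the error is already about 1.4e-4; each further step roughly cubes it, in contrast with the squaring of the Newton method.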
Let u₀ ∈ F and let δ > 0, b > 0 be two real numbers. Denote S = {u ∈ F : ‖u − u₀‖ ≤ δ}. If

m₂ = sup_{u∈S} ‖f″(u)‖,

then

(sup_{u∈S}) ‖f′(u)‖ ≤ ‖f′(u₀)‖ + m₂δ,
(sup_{u∈S}) ‖f(u)‖ ≤ ‖f(u₀)‖ + δ‖f′(u₀)‖ + m₂δ² =: m₀.

Consider the numbers

(2.2)  μ = ½ m₂² b⁴ (1 + ¼ m₂m₀b²),  ν = b (1 + ½ m₂m₀b²).

With the above notations, the following theorem holds.

THEOREM 1. If the operator f is three times differentiable with f‴(u) = θ₃ for all u ∈ S (θ₃ being the 3-linear null operator) and if, moreover, there exist u₀ ∈ F, δ > 0, b > 0 such that the following relations hold:

i. the operator f′(u) has a bounded inverse for all u ∈ S, and ‖f′(u)⁻¹‖ ≤ b;
ii. the numbers μ and ν given by (2.2) satisfy ρ₀ = √μ ‖f(u₀)‖ < 1 and νρ₀/(√μ(1 − ρ₀)) ≤ δ,

then the following properties hold:

j. the sequence (uₙ)ₙ≥₀ given by (2.1) is convergent;
jj. if ū = lim uₙ, then ū ∈ S and f(ū) = θ₁;
jjj. ‖uₙ₊₁ − uₙ‖ ≤ νρ₀^(3ⁿ)/√μ,  n = 0, 1, …;
jv. ‖ū − uₙ‖ ≤ νρ₀^(3ⁿ)/(√μ(1 − ρ₀^(3ⁿ))),  n = 0, 1, ….

Proof. Denote by g : S → F the mapping

(2.3)  g(u) = −Γ(u)f(u) − ½ Γ(u)f″(u)(Γ(u)f(u))²,  where Γ(u) = f′(u)⁻¹.
It can be easily seen that for all u ∈ S the following identity holds:

f(u) + f′(u)g(u) + ½ f″(u)g²(u)
  = ½ f″(u)( f′(u)⁻¹f(u), f′(u)⁻¹f″(u)(f′(u)⁻¹f(u))² )
  + ⅛ f″(u)( f′(u)⁻¹f″(u)(f′(u)⁻¹f(u))² )²,

whence we obtain

(2.4)  ‖f(u) + f′(u)g(u) + ½ f″(u)g²(u)‖ ≤ ½ m₂²b⁴ (1 + ¼ m₀m₂b²) ‖f(u)‖³,

or

(2.5)  ‖f(u) + f′(u)g(u) + ½ f″(u)g²(u)‖ ≤ μ ‖f(u)‖³,  for all u ∈ S.

Similarly, by (2.3) and taking into account the notations we made, we get

(2.6)  ‖g(u)‖ ≤ ν ‖f(u)‖,  for all u ∈ S.

Using the hypotheses of the theorem, inequality (2.5) and the fact that f‴(u) = θ₃, we obtain:

‖f(u₁)‖ ≤ ‖f(u₁) − f(u₀) − f′(u₀)g(u₀) − ½ f″(u₀)g²(u₀)‖
        + ‖f(u₀) + f′(u₀)g(u₀) + ½ f″(u₀)g²(u₀)‖
        ≤ μ ‖f(u₀)‖³.

Since u₁ − u₀ = g(u₀), by (2.6) we have

‖u₁ − u₀‖ ≤ ν‖f(u₀)‖ = (ν/√μ) √μ‖f(u₀)‖ = νρ₀/√μ < νρ₀/(√μ(1 − ρ₀)) ≤ δ,

whence it follows that u₁ ∈ S. Suppose now that the following properties hold:

a) uᵢ ∈ S, i = 0, …, k;
b) ‖f(uᵢ)‖ ≤ μ‖f(uᵢ₋₁)‖³, i = 1, …, k.

Since u_k ∈ S, using (2.5) it follows that

(2.7)  ‖f(u_{k+1})‖ ≤ μ ‖f(u_k)‖³,

and from the relation u_{k+1} − u_k = g(u_k),

(2.8)  ‖u_{k+1} − u_k‖ ≤ ν ‖f(u_k)‖.

The inequalities b) and (2.7) lead us to

(2.9)  ‖f(uᵢ)‖ ≤ (1/√μ) ( √μ‖f(u₀)‖ )^(3ⁱ),  i = 1, …, k+1.
We have that u_{k+1} ∈ S:

(2.10)  ‖u_{k+1} − u₀‖ ≤ Σᵢ₌₁^{k+1} ‖uᵢ − uᵢ₋₁‖ ≤ ν Σᵢ₌₁^{k+1} ‖f(uᵢ₋₁)‖
        ≤ (ν/√μ) Σᵢ₌₁^{k+1} ρ₀^(3^(i−1)) ≤ νρ₀/(√μ(1 − ρ₀)) ≤ δ.

Now we prove that the sequence (uₙ)ₙ≥₀ is Cauchy. Indeed, for all m, n ∈ ℕ we have

(2.11)  ‖u_{n+m} − uₙ‖ ≤ Σᵢ₌₀^{m−1} ‖u_{n+i+1} − u_{n+i}‖ ≤ ν Σᵢ₌₀^{m−1} ‖f(u_{n+i})‖
        ≤ (ν/√μ) Σᵢ₌₀^{m−1} ρ₀^(3^(n+i)) = (ν/√μ) ρ₀^(3ⁿ) Σᵢ₌₀^{m−1} ρ₀^(3^(n+i) − 3ⁿ)
        ≤ νρ₀^(3ⁿ)/(√μ(1 − ρ₀^(3ⁿ))),

whence, taking into account that ρ₀ < 1, it follows that (uₙ)ₙ≥₀ converges. Let ū = lim_{n→∞} uₙ. Then, letting m → ∞ in (2.11), property jv follows. Property jjj follows from (2.8) and (2.9). □

3. THE APPROXIMATION OF THE EIGENVALUES AND EIGENVECTORS OF MATRICES

In the following we apply the previously studied method to the approximation of the eigenvalues and eigenvectors of matrices with real or complex elements. Let p ∈ ℕ and the matrix A = (a_{ij})_{i,j=1,…,p}, where a_{ij} ∈ K. With the above notations, we consider E = Kᵖ and F = Kᵖ × K. Any solution of the equation

(3.1)  f((x, λ)) := (Ax − λx, x_{i₀} − 1) = (θ, 0),

where i₀ ∈ {1, …, p} is fixed, x = (x₁, …, x_p) ∈ Kᵖ and θ = (0, …, 0) ∈ Kᵖ, will lead us to an eigenvalue of A and to a corresponding eigenvector. For the sake of simplicity denote λ = x_{p+1}, so that equation (3.1) is represented by the
system

(3.2)  fᵢ(x₁, …, x_p, x_{p+1}) = 0,  i = 1, …, p+1,

where

(3.3)  fᵢ(x₁, …, x_{p+1}) = a_{i1}x₁ + ⋯ + (a_{ii} − x_{p+1})xᵢ + ⋯ + a_{ip}x_p,  i = 1, …, p,

and

(3.4)  f_{p+1}(x) = x_{i₀} − 1.

Denote by P the mapping P : K^{p+1} → K^{p+1} defined by relations (3.3) and (3.4). Let xⁿ = (x₁ⁿ, …, x_{p+1}ⁿ) ∈ K^{p+1}. Then the first order Fréchet derivative of the operator P at xⁿ has the following form, in block notation:

(3.5)  P′(xⁿ)h = ( A − xⁿ_{p+1} I   −x̃ⁿ ) ( h̃     )
                 (     e_{i₀}ᵀ       0  ) ( h_{p+1} )

where h = (h₁, …, h_{p+1}) ∈ K^{p+1}, h̃ = (h₁, …, h_p), x̃ⁿ = (x₁ⁿ, …, x_pⁿ), and e_{i₀} is the i₀-th unit vector of Kᵖ. If we write k = (k₁, …, k_{p+1}) ∈ K^{p+1} and k̃ = (k₁, …, k_p), then for the second order Fréchet derivative we get

(3.6)  P″(xⁿ)kh = ( −k_{p+1} I   −k̃ ) ( h̃     )
                  (      0        0 ) ( h_{p+1} )

that is, (P″(xⁿ)kh)ᵢ = −(k_{p+1}hᵢ + kᵢh_{p+1}) for i = 1, …, p, and (P″(xⁿ)kh)_{p+1} = 0.

Denote by Γ(xⁿ) the inverse of the matrix attached to the operator P′(xⁿ), and uₙ = Γ(xⁿ)P(xⁿ) = (u₁ⁿ, …, u_{p+1}ⁿ). Let vₙ = P″(xⁿ)(Γ(xⁿ)P(xⁿ))² = P″(xⁿ)uₙ². We obtain the representation

(3.7)  vₙ = P″(xⁿ)uₙ² = ( −uⁿ_{p+1} I   −ũₙ ) ( ũₙ       )
                        (      0         0  ) ( uⁿ_{p+1} )

with ũₙ = (u₁ⁿ, …, u_pⁿ).
From this relation we can easily deduce the equalities

(3.8)  vᵢⁿ = −2 uⁿ_{p+1} uᵢⁿ,  i = 1, …, p;  vⁿ_{p+1} = 0.

Denoting wₙ = (w₁ⁿ, …, w_{p+1}ⁿ) = Γ(xⁿ)vₙ and supposing that xⁿ is an approximation of the solution of system (3.2), the next approximation xⁿ⁺¹ given by method (2.1) is obtained by

(3.9)  xⁿ⁺¹ = xⁿ − uₙ − ½ wₙ,  n = 0, 1, ….

Consider Kᵖ with the norm of an element x = (x₁, …, x_p) given by the equality

(3.10)  ‖x‖ = max_{1≤i≤p} |xᵢ|,

and consequently

(3.11)  ‖A‖ = max_{1≤i≤p} Σⱼ₌₁ᵖ |a_{ij}|.

It can be easily seen that ‖P″(xⁿ)‖ = 2 for all xⁿ ∈ K^{p+1}. Let x⁰ ∈ K^{p+1} be an initial approximation of the solution of system (3.2). Consider a real number r > 0 and the set S = {x ∈ K^{p+1} : ‖x − x⁰‖ ≤ r}. Denote

m₀ = ‖P(x⁰)‖ + r‖P′(x⁰)‖ + 2r²,
μ = 2b⁴ (1 + ½ m₀b²),
ν = b (1 + m₀b²),

where b = sup_{x∈S} ‖Γ(x)‖, Γ(x) being the inverse of the matrix attached to the operator P′(x). Taking into account the results obtained in Section 2, the following consequence holds.

COROLLARY 2. If x⁰ ∈ K^{p+1} and r ∈ ℝ, r > 0, are chosen such that the matrix attached to the operator P′(x) is nonsingular for all x ∈ S, and the following inequalities hold:

ρ₀ = √μ ‖P(x⁰)‖ < 1,  νρ₀/(√μ(1 − ρ₀)) ≤ r,

then the following properties are true:
j₁. the sequence (xⁿ)ₙ≥₀ generated by (3.9) is convergent;
jj₁. if x̄ = lim_{n→∞} xⁿ, then P(x̄) = θ₁ = (0, …, 0) ∈ K^{p+1};
jjj₁. ‖xⁿ⁺¹ − xⁿ‖ ≤ νρ₀^(3ⁿ)/√μ,  n = 0, 1, …;
jv₁. ‖x̄ − xⁿ‖ ≤ νρ₀^(3ⁿ)/(√μ(1 − ρ₀^(3ⁿ))),  n = 0, 1, ….

Remark. If the radius r of the ball S is given, P′(x⁰)⁻¹ exists and 2r‖P′(x⁰)⁻¹‖ < 1, then

‖P′(x)⁻¹‖ ≤ ‖P′(x⁰)⁻¹‖ / (1 − 2r‖P′(x⁰)⁻¹‖)  for all x ∈ S,

and in the above Corollary, taking into account the proof of Theorem 1, we can take

b = ‖P′(x⁰)⁻¹‖ / (1 − 2r‖P′(x⁰)⁻¹‖).

4. A COMPARISON TO THE NEWTON METHOD

Note that if in (3.9) we neglect the term ½wₙ, then we get the Newton method:

xⁿ⁺¹ = xⁿ − uₙ = xⁿ − Γ(xⁿ)P(xⁿ),  n = 0, 1, ….

In order to compare the Newton method and the Chebyshev method, we shall consider that the linear systems appearing in both methods are solved by the Gauss method. While the Newton method requires at each step the solving of one system Ax = b, the Chebyshev method requires the solving of two linear systems Ax = b and Ay = c with the same matrix A, the vector c depending on the solution x of the first system. We shall therefore adapt the Gauss method so as to perform as few multiplications and divisions as possible. When comparing the two methods we shall neglect the number of addition and subtraction operations.
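The two iterations being compared can be sketched side by side for the eigenproblem (3.1): the Newton step is xⁿ − uₙ, the Chebyshev step (3.9) adds the correction ½wₙ at the cost of a second solve with the same Jacobian (3.5). The helper names and the 2×2 test matrix below are illustrative assumptions, and np.linalg.solve stands in for the adapted Gauss elimination described in this section.

```python
import numpy as np

# Newton and Chebyshev steps for the eigenproblem P(x, lam) = 0 of (3.1),
# using the Jacobian (3.5) and the second-derivative correction (3.8)-(3.9).
# Names and the test matrix are ours, for illustration only.

def P(z, A, i0):
    """P(z) = (Ax - lam*x, x_{i0} - 1), with z = (x, lam) stacked."""
    p = A.shape[0]
    x, lam = z[:p], z[p]
    return np.concatenate([A @ x - lam * x, [x[i0] - 1.0]])

def jacobian(z, A, i0):
    """Matrix of P'(z), cf. (3.5): [[A - lam*I, -x], [e_{i0}^T, 0]]."""
    p = A.shape[0]
    x, lam = z[:p], z[p]
    J = np.zeros((p + 1, p + 1))
    J[:p, :p] = A - lam * np.eye(p)
    J[:p, p] = -x
    J[p, i0] = 1.0
    return J

def newton_step(z, A, i0):
    return z - np.linalg.solve(jacobian(z, A, i0), P(z, A, i0))

def chebyshev_step(z, A, i0):
    p = A.shape[0]
    J = jacobian(z, A, i0)
    u = np.linalg.solve(J, P(z, A, i0))              # u_n = Γ(x_n) P(x_n)
    v = np.concatenate([-2.0 * u[p] * u[:p], [0.0]]) # v_n from (3.8)
    w = np.linalg.solve(J, v)                        # w_n = Γ(x_n) v_n
    return z - u - 0.5 * w

A = np.array([[2.0, 1.0], [0.0, 3.0]])   # eigenpair ((1, 0), 2) with x_1 = 1
z = np.array([1.0, 0.1, 2.1])            # start near that eigenpair
for _ in range(6):
    z = chebyshev_step(z, A, 0)
print(z)
```

Started near the eigenpair, the iterate converges to (1, 0, 2), i.e. the eigenvector normalized by x₁ = 1 together with the eigenvalue in the last component.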
The solving of a given linear system Ax = b, A ∈ M_m(K), b, x ∈ Kᵐ (where we have denoted m = p + 1), by the Gauss method consists of two stages. In the first stage, the given system is transformed into an equivalent one whose coefficient matrix is upper triangular. In the second stage the unknowns (xᵢ)_{i=1,…,m} are determined by backward substitution.

The first stage. There are performed m − 1 steps, at each one annihilating the elements of one column below the main diagonal. We write the initial system in the form

( a¹₁₁ … a¹₁ₘ ) ( x₁ )   ( b¹₁ )
(  ⋮        ⋮ ) (  ⋮ ) = (  ⋮ )
( a¹ₘ₁ … a¹ₘₘ ) ( xₘ )   ( b¹ₘ )

Suppose that a¹₁₁ ≠ 0; a¹₁₁ is called the first pivot. The first row of the system is multiplied by α = −a¹₂₁/a¹₁₁ and added to the second one, which becomes 0, a²₂₂, a²₂₃, …, a²₂ₘ, b²₂, after performing m + 1 multiplication or division (M/D) operations. After m − 1 such transformations the system becomes

( a¹₁₁ a¹₁₂ … a¹₁ₘ ) ( x₁ )   ( b¹₁ )
( 0    a²₂₂ … a²₂ₘ ) ( x₂ ) = ( b²₂ )
( ⋮              ⋮ ) (  ⋮ )   (  ⋮ )
( 0    a²ₘ₂ … a²ₘₘ ) ( xₘ )   ( b²ₘ )

Hence at the first step there were performed (m − 1)(m + 1) M/D operations. In the same manner, at the k-th step we have the system in which the first k rows are already upper triangular, the active block contains the elements aᵏᵢⱼ, i, j ≥ k, and the right-hand side is b¹₁, b²₂, …, bᵏₖ, bᵏₖ₊₁, …, bᵏₘ.
Supposing the k-th pivot aᵏₖₖ ≠ 0 and performing (m − k)(m − k + 2) M/D operations, we annihilate the elements of the k-th column below the diagonal, obtaining the active block aᵏ⁺¹ᵢⱼ, i, j ≥ k + 1, and the right-hand side entries bᵏ⁺¹ₖ₊₁, …, bᵏ⁺¹ₘ.

At each step k, the elements below the k-th pivot vanish, so they are no longer needed for solving the system. The corresponding memory locations are used for keeping the coefficients −aᵏₖ₊₁,ₖ/aᵏₖₖ, …, −aᵏₘₖ/aᵏₖₖ, which, of course, will be needed only for solving another system Ay = c, with c depending on x, the solution of Ax = b.

At the first stage there are performed

(m − 1)(m + 1) + ⋯ + 1·3 = (2m³ + 3m² − 5m)/6

M/D operations.

The second stage. Given the triangular system

( a¹₁₁ a¹₁₂ … a¹₁ₘ ) ( x₁ )   ( b¹₁ )
( 0    a²₂₂ … a²₂ₘ ) ( x₂ ) = ( b²₂ )
( ⋮              ⋮ ) (  ⋮ )   (  ⋮ )
( 0    0    …  aᵐₘₘ ) ( xₘ )   ( bᵐₘ )

the solution x is computed in the following way:

x_m = bᵐₘ / aᵐₘₘ,
x_k = ( bᵏₖ − (aᵏₖ,ₖ₊₁ x_{k+1} + ⋯ + aᵏₖₘ x_m) ) / aᵏₖₖ,  k = m−1, …, 2,
x₁ = ( b¹₁ − (a¹₁₂ x₂ + ⋯ + a¹₁ₘ x_m) ) / a¹₁₁.
At this stage there are performed 1 + 2 + ⋯ + m = m(m + 1)/2 M/D operations. In both stages there are performed in total

m³/3 + m² − m/3

M/D operations.

In the case when we solve the systems Ax = b, Ay = c, where the vector c depends on the solution x, we first apply the Gauss method to the system Ax = b, and at the first stage we keep below the main diagonal the coefficients by which the pivot rows were multiplied. Then we apply to the vector c the transformations performed on the vector b when solving Ax = b. Denote c = (c¹ᵢ)_{i=1,…,m} and write aᵢₖ for the stored multiplier of row k used on row i.

At the first step:
c²₂ := a₂₁c¹₁ + c¹₂, …, c²ₘ := aₘ₁c¹₁ + c¹ₘ.

At the k-th step:
cᵏ⁺¹ₖ₊₁ := aₖ₊₁,ₖ cᵏₖ + cᵏₖ₊₁, …, cᵏ⁺¹ₘ := aₘₖ cᵏₖ + cᵏₘ.

At the last, (m−1)-th, step:
cᵐₘ := aₘ,ₘ₋₁ cᵐ⁻¹ₘ₋₁ + cᵐ⁻¹ₘ.

There were performed (m − 1) + (m − 2) + ⋯ + 1 = m(m − 1)/2 M/D operations. Now the second stage of the Gauss method is applied to

( a¹₁₁ … a¹₁ₘ ) ( y₁ )   ( c¹₁ )
(   ⋱      ⋮ ) (  ⋮ ) = (  ⋮ )
( 0     aᵐₘₘ ) ( yₘ )   ( cᵐₘ )

In addition to the case of a single linear system, in this case there were performed m(m − 1)/2 M/D operations, getting

m³/3 + 3m²/2 − 5m/6

M/D operations, and taking into account (3.8) we add m − 1 more M/D operations, obtaining

m³/3 + 3m²/2 + m/6 − 1

M/D operations.
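The operation counts above can be checked mechanically. The sketch below (our own, with illustrative names) performs stage one while storing the multipliers below the diagonal, reuses them on a second right-hand side, and tallies M/D operations; the test matrix is a deterministic diagonally dominant one, so no pivoting is needed.

```python
import numpy as np

# Gauss elimination adapted as in Section 4: triangularize once, keep the
# multipliers, and reuse them for the second right-hand side c.
# Only multiplications and divisions (M/D) are counted.

def elim_factor(A, b):
    """Stage one in place; stores multipliers below the diagonal.
    Returns the M/D operation count."""
    m = A.shape[0]
    ops = 0
    for k in range(m - 1):
        for i in range(k + 1, m):
            alpha = A[i, k] / A[k, k]               # 1 division
            A[i, k + 1:] -= alpha * A[k, k + 1:]    # m-k-1 multiplications
            b[i] -= alpha * b[k]                    # 1 multiplication
            A[i, k] = alpha                         # keep the multiplier
            ops += 1 + (m - k - 1) + 1
    return ops

def back_substitute(A, rhs):
    """Stage two on the stored upper triangle; returns (solution, op count)."""
    m = A.shape[0]
    x = np.zeros(m)
    ops = 0
    for i in range(m - 1, -1, -1):
        s = A[i, i + 1:] @ x[i + 1:]                # m-i-1 multiplications
        x[i] = (rhs[i] - s) / A[i, i]               # 1 division
        ops += (m - i - 1) + 1
    return x, ops

def forward_apply(A, c):
    """Apply the stored multipliers to the new right-hand side c."""
    m = A.shape[0]
    ops = 0
    for k in range(m - 1):
        for i in range(k + 1, m):
            c[i] -= A[i, k] * c[k]                  # 1 multiplication
            ops += 1
    return ops

m = 6
A = 10.0 * np.eye(m) + np.fromfunction(lambda i, j: np.sin(i + 2 * j), (m, m))
A0, b = A.copy(), np.arange(1.0, m + 1)
b0 = b.copy()
ops_stage1 = elim_factor(A, b)
x, ops_stage2 = back_substitute(A, b)
c = np.ones(m)
c0 = c.copy()
ops_c = forward_apply(A, c)
y, ops_y = back_substitute(A, c)
print(ops_stage1, ops_stage2, ops_c)   # -> 85 21 15
```

For m = 6 the counts agree with the formulas of this section: stage one gives (2m³ + 3m² − 5m)/6 = 85, back substitution m(m + 1)/2 = 21 (totalling m³/3 + m² − m/3 = 106), and the forward transformation of c costs m(m − 1)/2 = 15 extra operations.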
Remark. At the first stage, if for some k we have aᵏₖₖ = 0, then an element aᵏᵢ₀ₖ ≠ 0, i₀ ∈ {k+1, …, m}, must be found, and the rows i₀ and k in A and b swapped. In order to avoid the accumulation of errors, a partial or total pivoting strategy is recommended even if aᵏₖₖ ≠ 0 for k = 1, …, m−1. For partial pivoting, the pivot is chosen such that

|aᵏᵢ₀ₖ| = max_{i=k,…,m} |aᵏᵢₖ|.

The interchange of rows can be avoided by using a permutation vector p = (pᵢ)_{i=1,…,m}, which is first initialized so that pᵢ = i. The elements of A and b are then referred to as aᵢⱼ := a_{p(i),j} and bᵢ := b_{p(i)}, and swapping the rows k and i₀ is done by swapping the k-th and i₀-th elements of p.

For the Chebyshev method, the use of the vector p cannot be avoided by actually interchanging the rows, because we must keep track of the permutations made, in order to apply them in the same order to the vector c. Moreover, we need two extra vectors t and q. In t we store the transpositions applied to the rows of Ax = b, which are then successively applied to q. At the first stage of the Gauss method, when the k-th pivot is aᵏᵢ₀ₖ and i₀ ≠ k, the k-th and i₀-th elements of p are swapped, and we assign t_k := i₀ to indicate that at the k-th step we applied to p the transposition (k, i₀).

After computing the solution of Ax = b, we initialize the vector c by (3.7), the permutation vector q by qᵢ := i, i = 1, …, m, and then we successively apply the transforms operated on b, taking into account the eventual transpositions. The algorithm for solving the second linear system in the Chebyshev method is as follows:

  for k := 1 to m - 1 do
  begin
    if t[k] <> k then              {at the k-th step the transposition}
    begin                          {(k, t[k]) has been applied to p}
      auxi := q[k]; q[k] := q[t[k]]; q[t[k]] := auxi;
    end;
    for i := k + 1 to m do
      c[q[i]] := c[q[i]] + a[q[i], k] * c[q[k]]
  end;
  for i := m downto 1 do           {the solution y is now computed}
  begin
    sum := 0;
    for j := i + 1 to m do
      sum := sum + a[p[i], j] * y[j];   {now p = q}
    y[i] := (c[p[i]] - sum) / a[p[i], i];
  end;

We adopt as the efficiency measure of an iterative method M the number E(M) = (ln q)/s, where q is the convergence order and s is the number of M/D operations needed at each step. We obtain

E(N) = 3 ln 2 / (m³ + 3m² − m)

for the Newton method, and

E(C) = 6 ln 3 / (2m³ + 9m² + m − 6)

for the Chebyshev method. It can be easily seen that E(C) > E(N) for m ≥ 2, i.e. the Chebyshev method is more efficient than the Newton method.

5. NUMERICAL EXAMPLE

Consider the real 4 × 4 matrix A which has the following eigenvalues and eigenvectors:

λ₁ = λ₂ = λ₃ = 2,  x¹ = (1, 1, 0, 0),  x² = (1, 0, 1, 0),  x³ = (1, 0, 0, 1),

and

λ₄ = −2,  x⁴ = (1, −1, −1, −1).

Taking the initial value x⁰ = (1, 1.5, 2, 1.5, 1) and applying the two methods, we obtain the following results:
Newton method
n   x₁   x₂   x₃   x₄   x₅ = λ

Chebyshev method
n   x₁   x₂   x₃   x₄   x₅ = λ

REFERENCES

[1] P.M. Anselone, L.B. Rall, The solution of characteristic value-vector problems by Newton's method, Numer. Math. 11 (1968).
[2] P.G. Ciarlet, Introduction à l'analyse numérique matricielle et à l'optimisation, Masson, Paris, 1990.
[3] F. Chatelin, Valeurs propres de matrices, Masson, Paris, 1988.
[4] L. Collatz, Funktionalanalysis und numerische Mathematik, Springer-Verlag, Berlin, 1964.
[5] V.S. Kartîşov, F.L. Iuhno, O nekotorîh modifikaţiah metoda Niutona dlea reşenia nelineinoi spektralnoi zadaci, J. Vîcisl. matem. i matem. fiz. (33) (1973), no. 9.
[6] I. Păvăloiu, Sur les procédés itératifs à un ordre élevé de convergence, Mathematica (Cluj) 12 (35) (1970), no. 2.
[7] J.F. Traub, Iterative Methods for the Solution of Equations, Prentice-Hall, Englewood Cliffs, N.J., 1964.

Received March 7, 1996

"T. Popoviciu" Institute of Numerical Analysis
Romanian Academy
P.O. Box 68
Cluj-Napoca 1
RO-3400
Romania
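The example of Section 5 can be re-run with a short script. Two caveats: the entries of A are not reproduced in this transcription, so the matrix below is instead the unique 4×4 matrix determined by the stated eigenpairs (A = V Λ V⁻¹ with V the matrix of the listed eigenvectors); and, for robustness, our run starts near the simple eigenpair (λ = −2, x = (1, −1, −1, −1)), where P′ is nonsingular, rather than at the paper's x⁰.

```python
import numpy as np

# Reconstruct the example matrix from its stated eigendecomposition; this
# reconstruction and the starting point are ours, not copied from the paper.
V = np.array([[1.0, 1.0, 1.0, 1.0],     # columns: x^1, x^2, x^3, x^4
              [1.0, 0.0, 0.0, -1.0],
              [0.0, 1.0, 0.0, -1.0],
              [0.0, 0.0, 1.0, -1.0]])
A = V @ np.diag([2.0, 2.0, 2.0, -2.0]) @ np.linalg.inv(V)

i0 = 0                                   # normalization x_1 = 1
z = np.array([1.0, -0.9, -1.1, -1.0, -1.8])   # near the eigenpair for -2
for _ in range(8):                       # Chebyshev iteration (3.9)
    x, lam = z[:4], z[4]
    Pz = np.concatenate([A @ x - lam * x, [x[i0] - 1.0]])
    J = np.zeros((5, 5))                 # Jacobian (3.5)
    J[:4, :4] = A - lam * np.eye(4)
    J[:4, 4] = -x
    J[4, i0] = 1.0
    u = np.linalg.solve(J, Pz)
    v = np.concatenate([-2.0 * u[4] * u[:4], [0.0]])   # (3.8)
    w = np.linalg.solve(J, v)
    z = z - u - 0.5 * w
print(np.round(z, 10))
```

The iterate converges to the eigenvector (1, −1, −1, −1) and the eigenvalue −2 in the last component. Starting at the triple eigenvalue 2 is delicate, since the Jacobian (3.5) is singular at those eigenpairs.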
Vectors and Matrices Continued Remember that our goal is to write a system of algebraic equations as a matrix equation. Suppose we have the n linear algebraic equations a x + a 2 x 2 + a n x n = b a 2
More informationLU Factorization. Marco Chiarandini. DM559 Linear and Integer Programming. Department of Mathematics & Computer Science University of Southern Denmark
DM559 Linear and Integer Programming LU Factorization Marco Chiarandini Department of Mathematics & Computer Science University of Southern Denmark [Based on slides by Lieven Vandenberghe, UCLA] Outline
More informationChapter 3. Numerical linear algebra. 3.1 Motivation. Example 3.1 (Stokes flow in a cavity) Three equations,
Chapter 3 Numerical linear algebra 3. Motivation In this chapter we will consider the two following problems: ➀ Solve linear systems Ax = b, where x, b R n and A R n n. ➁ Find x R n that minimizes m (Ax
More informationCHAPTER 6. Direct Methods for Solving Linear Systems
CHAPTER 6 Direct Methods for Solving Linear Systems. Introduction A direct method for approximating the solution of a system of n linear equations in n unknowns is one that gives the exact solution to
More informationPreliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012
Instructions Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012 The exam consists of four problems, each having multiple parts. You should attempt to solve all four problems. 1.
More informationComputational Methods. Systems of Linear Equations
Computational Methods Systems of Linear Equations Manfred Huber 2010 1 Systems of Equations Often a system model contains multiple variables (parameters) and contains multiple equations Multiple equations
More informationComputational Linear Algebra
Computational Linear Algebra PD Dr. rer. nat. habil. Ralf Peter Mundani Computation in Engineering / BGU Scientific Computing in Computer Science / INF Winter Term 2017/18 Part 2: Direct Methods PD Dr.
More informationA CLASS OF SCHUR MULTIPLIERS ON SOME QUASI-BANACH SPACES OF INFINITE MATRICES
A CLASS OF SCHUR MULTIPLIERS ON SOME QUASI-BANACH SPACES OF INFINITE MATRICES NICOLAE POPA Abstract In this paper we characterize the Schur multipliers of scalar type (see definition below) acting on scattered
More informationPerov s fixed point theorem for multivalued mappings in generalized Kasahara spaces
Stud. Univ. Babeş-Bolyai Math. 56(2011), No. 3, 19 28 Perov s fixed point theorem for multivalued mappings in generalized Kasahara spaces Alexandru-Darius Filip Abstract. In this paper we give some corresponding
More informationCME 302: NUMERICAL LINEAR ALGEBRA FALL 2005/06 LECTURE 6
CME 302: NUMERICAL LINEAR ALGEBRA FALL 2005/06 LECTURE 6 GENE H GOLUB Issues with Floating-point Arithmetic We conclude our discussion of floating-point arithmetic by highlighting two issues that frequently
More informationAn Extrapolated Gauss-Seidel Iteration
mathematics of computation, volume 27, number 124, October 1973 An Extrapolated Gauss-Seidel Iteration for Hessenberg Matrices By L. J. Lardy Abstract. We show that for certain systems of linear equations
More informationGaussian Elimination for Linear Systems
Gaussian Elimination for Linear Systems Tsung-Ming Huang Department of Mathematics National Taiwan Normal University October 3, 2011 1/56 Outline 1 Elementary matrices 2 LR-factorization 3 Gaussian elimination
More informationGaussian Elimination without/with Pivoting and Cholesky Decomposition
Gaussian Elimination without/with Pivoting and Cholesky Decomposition Gaussian Elimination WITHOUT pivoting Notation: For a matrix A R n n we define for k {,,n} the leading principal submatrix a a k A
More informationIntroduction to Quantitative Techniques for MSc Programmes SCHOOL OF ECONOMICS, MATHEMATICS AND STATISTICS MALET STREET LONDON WC1E 7HX
Introduction to Quantitative Techniques for MSc Programmes SCHOOL OF ECONOMICS, MATHEMATICS AND STATISTICS MALET STREET LONDON WC1E 7HX September 2007 MSc Sep Intro QT 1 Who are these course for? The September
More informationThe aim of this paper is to obtain a theorem on the existence and uniqueness of increasing and convex solutions ϕ of the Schröder equation
PROCEEDINGS OF THE AMERICAN MATHEMATICAL SOCIETY Volume 125, Number 1, January 1997, Pages 153 158 S 0002-9939(97)03640-X CONVEX SOLUTIONS OF THE SCHRÖDER EQUATION IN BANACH SPACES JANUSZ WALORSKI (Communicated
More informationMath 108b: Notes on the Spectral Theorem
Math 108b: Notes on the Spectral Theorem From section 6.3, we know that every linear operator T on a finite dimensional inner product space V has an adjoint. (T is defined as the unique linear operator
More informationGeneralized line criterion for Gauss-Seidel method
Computational and Applied Mathematics Vol. 22, N. 1, pp. 91 97, 2003 Copyright 2003 SBMAC Generalized line criterion for Gauss-Seidel method M.V.P. GARCIA 1, C. HUMES JR. 2 and J.M. STERN 2 1 Department
More informationChapter 3. Determinants and Eigenvalues
Chapter 3. Determinants and Eigenvalues 3.1. Determinants With each square matrix we can associate a real number called the determinant of the matrix. Determinants have important applications to the theory
More informationIterative Methods. Splitting Methods
Iterative Methods Splitting Methods 1 Direct Methods Solving Ax = b using direct methods. Gaussian elimination (using LU decomposition) Variants of LU, including Crout and Doolittle Other decomposition
More informationMATH 1120 (LINEAR ALGEBRA 1), FINAL EXAM FALL 2011 SOLUTIONS TO PRACTICE VERSION
MATH (LINEAR ALGEBRA ) FINAL EXAM FALL SOLUTIONS TO PRACTICE VERSION Problem (a) For each matrix below (i) find a basis for its column space (ii) find a basis for its row space (iii) determine whether
More informationAN ITERATION. In part as motivation, we consider an iteration method for solving a system of linear equations which has the form x Ax = b
AN ITERATION In part as motivation, we consider an iteration method for solving a system of linear equations which has the form x Ax = b In this, A is an n n matrix and b R n.systemsof this form arise
More informationChapter 3. Differentiable Mappings. 1. Differentiable Mappings
Chapter 3 Differentiable Mappings 1 Differentiable Mappings Let V and W be two linear spaces over IR A mapping L from V to W is called a linear mapping if L(u + v) = Lu + Lv for all u, v V and L(λv) =
More informationScientific Computing: Dense Linear Systems
Scientific Computing: Dense Linear Systems Aleksandar Donev Courant Institute, NYU 1 donev@courant.nyu.edu 1 Course MATH-GA.2043 or CSCI-GA.2112, Spring 2012 February 9th, 2012 A. Donev (Courant Institute)
More informationLax Solution Part 4. October 27, 2016
Lax Solution Part 4 www.mathtuition88.com October 27, 2016 Textbook: Functional Analysis by Peter D. Lax Exercises: Ch 16: Q2 4. Ch 21: Q1, 2, 9, 10. Ch 28: 1, 5, 9, 10. 1 Chapter 16 Exercise 2 Let h =
More informationMODULE 7. where A is an m n real (or complex) matrix. 2) Let K(t, s) be a function of two variables which is continuous on the square [0, 1] [0, 1].
Topics: Linear operators MODULE 7 We are going to discuss functions = mappings = transformations = operators from one vector space V 1 into another vector space V 2. However, we shall restrict our sights
More informationScientific Computing
Scientific Computing Direct solution methods Martin van Gijzen Delft University of Technology October 3, 2018 1 Program October 3 Matrix norms LU decomposition Basic algorithm Cost Stability Pivoting Pivoting
More informationA NEW EFFECTIVE PRECONDITIONED METHOD FOR L-MATRICES
Journal of Mathematical Sciences: Advances and Applications Volume, Number 2, 2008, Pages 3-322 A NEW EFFECTIVE PRECONDITIONED METHOD FOR L-MATRICES Department of Mathematics Taiyuan Normal University
More informationTHE APPLICATION OF EIGENPAIR STABILITY TO BLOCK DIAGONALIZATION
SIAM J. NUMER. ANAL. c 1997 Society for Industrial and Applied Mathematics Vol. 34, No. 3, pp. 1255 1268, June 1997 020 THE APPLICATION OF EIGENPAIR STABILITY TO BLOCK DIAGONALIZATION NILOTPAL GHOSH, WILLIAM
More informationConjugate Gradient (CG) Method
Conjugate Gradient (CG) Method by K. Ozawa 1 Introduction In the series of this lecture, I will introduce the conjugate gradient method, which solves efficiently large scale sparse linear simultaneous
More informationAppendix C Vector and matrix algebra
Appendix C Vector and matrix algebra Concepts Scalars Vectors, rows and columns, matrices Adding and subtracting vectors and matrices Multiplying them by scalars Products of vectors and matrices, scalar
More informationInterlacing Inequalities for Totally Nonnegative Matrices
Interlacing Inequalities for Totally Nonnegative Matrices Chi-Kwong Li and Roy Mathias October 26, 2004 Dedicated to Professor T. Ando on the occasion of his 70th birthday. Abstract Suppose λ 1 λ n 0 are
More informationON THE MOMENTS OF ITERATED TAIL
ON THE MOMENTS OF ITERATED TAIL RADU PĂLTĂNEA and GHEORGHIŢĂ ZBĂGANU The classical distribution in ruins theory has the property that the sequence of the first moment of the iterated tails is convergent
More informationGoal: to construct some general-purpose algorithms for solving systems of linear Equations
Chapter IV Solving Systems of Linear Equations Goal: to construct some general-purpose algorithms for solving systems of linear Equations 4.6 Solution of Equations by Iterative Methods 4.6 Solution of
More informationAIMS Exercise Set # 1
AIMS Exercise Set #. Determine the form of the single precision floating point arithmetic used in the computers at AIMS. What is the largest number that can be accurately represented? What is the smallest
More informationMaximizing the numerical radii of matrices by permuting their entries
Maximizing the numerical radii of matrices by permuting their entries Wai-Shun Cheung and Chi-Kwong Li Dedicated to Professor Pei Yuan Wu. Abstract Let A be an n n complex matrix such that every row and
More informationLinear Algebra: Matrix Eigenvalue Problems
CHAPTER8 Linear Algebra: Matrix Eigenvalue Problems Chapter 8 p1 A matrix eigenvalue problem considers the vector equation (1) Ax = λx. 8.0 Linear Algebra: Matrix Eigenvalue Problems Here A is a given
More informationSPECTRAL THEOREM FOR COMPACT SELF-ADJOINT OPERATORS
SPECTRAL THEOREM FOR COMPACT SELF-ADJOINT OPERATORS G. RAMESH Contents Introduction 1 1. Bounded Operators 1 1.3. Examples 3 2. Compact Operators 5 2.1. Properties 6 3. The Spectral Theorem 9 3.3. Self-adjoint
More informationMath Camp Lecture 4: Linear Algebra. Xiao Yu Wang. Aug 2010 MIT. Xiao Yu Wang (MIT) Math Camp /10 1 / 88
Math Camp 2010 Lecture 4: Linear Algebra Xiao Yu Wang MIT Aug 2010 Xiao Yu Wang (MIT) Math Camp 2010 08/10 1 / 88 Linear Algebra Game Plan Vector Spaces Linear Transformations and Matrices Determinant
More informationNotes on Linear Algebra and Matrix Theory
Massimo Franceschet featuring Enrico Bozzo Scalar product The scalar product (a.k.a. dot product or inner product) of two real vectors x = (x 1,..., x n ) and y = (y 1,..., y n ) is not a vector but a
More informationLecture Note 2: The Gaussian Elimination and LU Decomposition
MATH 5330: Computational Methods of Linear Algebra Lecture Note 2: The Gaussian Elimination and LU Decomposition The Gaussian elimination Xianyi Zeng Department of Mathematical Sciences, UTEP The method
More informationSOME INEQUALITIES FOR THE EUCLIDEAN OPERATOR RADIUS OF TWO OPERATORS IN HILBERT SPACES
SOME INEQUALITIES FOR THE EUCLIDEAN OPERATOR RADIUS OF TWO OPERATORS IN HILBERT SPACES SEVER S DRAGOMIR Abstract Some sharp bounds for the Euclidean operator radius of two bounded linear operators in Hilbert
More informationNumerical Methods - Numerical Linear Algebra
Numerical Methods - Numerical Linear Algebra Y. K. Goh Universiti Tunku Abdul Rahman 2013 Y. K. Goh (UTAR) Numerical Methods - Numerical Linear Algebra I 2013 1 / 62 Outline 1 Motivation 2 Solving Linear
More information