ANALOG OF FAVARD'S THEOREM FOR POLYNOMIALS CONNECTED WITH DIFFERENCE EQUATION OF 4-TH ORDER. S. M. Zagorodniuk
Serdica Math. J. 27 (2001)

ANALOG OF FAVARD'S THEOREM FOR POLYNOMIALS CONNECTED WITH DIFFERENCE EQUATION OF 4-TH ORDER

S. M. Zagorodniuk

Communicated by E. I. Horozov

Abstract. Orthonormal polynomials on the real line $\{p_n(\lambda)\}_{n=0}^{\infty}$ satisfy a recurrent relation of the form
$$\lambda_{n-1} p_{n-1}(\lambda) + \alpha_n p_n(\lambda) + \lambda_n p_{n+1}(\lambda) = \lambda p_n(\lambda), \quad n = 0, 1, 2, \ldots,$$
where $\lambda_n > 0$, $\alpha_n \in \mathbb{R}$, $n = 0, 1, \ldots$; $\lambda_{-1} = p_{-1} = 0$, $\lambda \in \mathbb{C}$. In this paper we study systems of polynomials $\{p_n(\lambda)\}_{n=0}^{\infty}$ which satisfy the equation
$$\alpha_{n-2} p_{n-2}(\lambda) + \beta_{n-1} p_{n-1}(\lambda) + \gamma_n p_n(\lambda) + \beta_n p_{n+1}(\lambda) + \alpha_n p_{n+2}(\lambda) = \lambda^2 p_n(\lambda), \quad n = 0, 1, 2, \ldots,$$
where $\alpha_n > 0$, $\beta_n \in \mathbb{C}$, $\gamma_n \in \mathbb{R}$, $n = 0, 1, 2, \ldots$; $\alpha_{-1} = \alpha_{-2} = \beta_{-1} = 0$, $p_{-1} = p_{-2} = 0$, $p_0(\lambda) = 1$, $p_1(\lambda) = c\lambda + b$, $c > 0$, $b \in \mathbb{C}$, $\lambda \in \mathbb{C}$. It is shown that they are orthonormal on the real and the imaginary axes in the complex plane:
$$\int_{\mathbb{R} \cup T} \bigl(p_n(\lambda),\, p_n(-\lambda)\bigr)\, d\sigma(\lambda) \begin{pmatrix} \overline{p_m(\lambda)} \\ \overline{p_m(-\lambda)} \end{pmatrix} = \delta_{n,m}, \quad n, m = 0, 1, \ldots; \quad T = (-i\infty, i\infty),$$
with respect to some matrix measure $\sigma(\lambda) = \begin{pmatrix} \sigma_1(\lambda) & \sigma_2(\lambda) \\ \sigma_3(\lambda) & \sigma_4(\lambda) \end{pmatrix}$. Also the Green formula for the difference equation of 4-th order is obtained.

2000 Mathematics Subject Classification: 42C05, 39A05.
Key words: orthogonal polynomials, difference equation.
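As a quick numerical illustration of the two relations in the abstract: squaring a truncated tridiagonal Jacobi matrix $J_3$ produces a five-diagonal symmetric matrix, so real-line orthonormal polynomials fall into the studied class with $\lambda^2$ on the right-hand side. The sketch below uses made-up coefficient values ($\lambda_n > 0$, $\alpha_n$ real):

```python
import numpy as np

n = 8
# hypothetical three-term recurrence coefficients (lam_k > 0, a_k real)
lam = np.array([1.0 + 0.1 * k for k in range(n)])
a = np.array([0.5 - 0.05 * k for k in range(n)])

# truncated tridiagonal Jacobi matrix J3
J3 = np.diag(a) + np.diag(lam[:n - 1], 1) + np.diag(lam[:n - 1], -1)

# J5 = J3^2 satisfies J5 p = lambda^2 p wherever J3 p = lambda p
J5 = J3 @ J3

# J5 is five-diagonal: entries vanish beyond the second off-diagonal
for i in range(n):
    for j in range(n):
        if abs(i - j) > 2:
            assert abs(J5[i, j]) < 1e-12

# its second off-diagonal is lam_k * lam_{k+1} > 0, matching alpha_n > 0
assert np.all(np.diag(J5, 2) > 0)
print(np.round(J5, 3))
```

The symmetry of `J5` and the positivity of its outermost band mirror the assumptions $\alpha_n > 0$, $\gamma_n \in \mathbb{R}$ placed on the five-term relation.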
Let us consider the recurrent relation for the system of polynomials $\{p_n(\lambda)\}_{n=0}^{\infty}$ of the form

(1) $\alpha_{n-2} p_{n-2}(\lambda) + \beta_{n-1} p_{n-1}(\lambda) + \gamma_n p_n(\lambda) + \beta_n p_{n+1}(\lambda) + \alpha_n p_{n+2}(\lambda) = \lambda^2 p_n(\lambda)$, $n = 0, 1, 2, \ldots$,

where $\alpha_n > 0$, $\beta_n \in \mathbb{C}$, $\gamma_n \in \mathbb{R}$, $n = 0, 1, 2, \ldots$; $\alpha_{-1} = \alpha_{-2} = \beta_{-1} = 0$, $p_{-1} = p_{-2} = 0$, $p_0(\lambda) = 1$, $p_1(\lambda) = c\lambda + b$, $c > 0$, $b \in \mathbb{C}$, $\lambda \in \mathbb{C}$. This relation can be written in the matrix form $J_5 p = \lambda^2 p$, or
$$\begin{pmatrix}
\gamma_0 & \beta_0 & \alpha_0 & 0 & 0 & \cdots \\
\beta_0 & \gamma_1 & \beta_1 & \alpha_1 & 0 & \cdots \\
\alpha_0 & \beta_1 & \gamma_2 & \beta_2 & \alpha_2 & \cdots \\
0 & \alpha_1 & \beta_2 & \gamma_3 & \beta_3 & \cdots \\
\vdots & & & & & \ddots
\end{pmatrix}
\begin{pmatrix} p_0 \\ p_1 \\ p_2 \\ p_3 \\ \vdots \end{pmatrix}
= \lambda^2
\begin{pmatrix} p_0 \\ p_1 \\ p_2 \\ p_3 \\ \vdots \end{pmatrix},$$
where $J_5$ is a five-diagonal, symmetric, semi-infinite matrix and $p$ is the vector of polynomials. Let us study the properties of these systems of polynomials. Note that orthonormal polynomials on the real line belong to this class of polynomials: for them the equation $J_3 p = \lambda p$ holds with a three-diagonal, symmetric, semi-infinite matrix, and from this it immediately follows that $J_3^2 p = \lambda^2 p$, where $J_3^2$ is a five-diagonal, symmetric, semi-infinite matrix. (Question: how much larger is the considered class than the class of real orthonormal polynomials? For the answer see [1, Theorem 5, p. 272].)

Definition ([1]). Find a matrix measure $\sigma(\lambda) = \begin{pmatrix} \sigma_1(\lambda) & \sigma_2(\lambda) \\ \sigma_3(\lambda) & \sigma_4(\lambda) \end{pmatrix}$, $\lambda \in \mathbb{C}$, where $\sigma_i(\lambda) \colon \mathbb{C} \to \mathbb{C}$ is a piecewise continuous mapping on the real and the imaginary axes, $i = 1, \ldots, 4$, such that

1. $\sigma(\lambda)$ is a symmetric, monotonically increasing matrix function:
$$\overline{\sigma_1(\lambda)} = \sigma_1(\lambda), \quad \overline{\sigma_4(\lambda)} = \sigma_4(\lambda), \quad \overline{\sigma_2(\lambda)} = \sigma_3(\lambda); \qquad \sigma(\lambda_2) \ge \sigma(\lambda_1), \ \lambda_2 \ge \lambda_1, \ \lambda_1, \lambda_2 \in \mathbb{R};$$

2. $\sigma(\lambda_2) \ge \sigma(\lambda_1)$ for $\dfrac{\lambda_2}{i} \ge \dfrac{\lambda_1}{i}$, $\lambda_1, \lambda_2 \in (-i\infty, i\infty)$;

3. $\displaystyle\int_{\mathbb{R} \cup T} \bigl(\lambda^k,\, (-\lambda)^k\bigr)\, d\sigma(\lambda) \begin{pmatrix} 1 \\ 1 \end{pmatrix} = s_k, \quad k = 0, 1, \ldots,$
$$\int_{\mathbb{R} \cup T} \bigl(\lambda^{k-1},\, (-\lambda)^{k-1}\bigr)\, d\sigma(\lambda) \begin{pmatrix} \overline{\lambda} \\ -\overline{\lambda} \end{pmatrix} = m_k, \quad k = 1, 2, \ldots,$$
where $\{s_k\}_{k=0}^{\infty}$, $\{m_k\}_{k=1}^{\infty}$ are fixed sequences of complex numbers. We shall call this problem the generalized symmetric moment problem.

Definition ([1, p. 266]). We call a pair of sequences $\{s_k, m_{k+1}\}_{k=0}^{\infty}$, $s_k \in \mathbb{C}$, $m_{k+1} \in \mathbb{C}$, $k = 0, 1, \ldots$, symmetric if
$$\overline{s_{2k+1}} = m_{2k+1}; \quad \overline{s_{2k}} = s_{2k}, \quad \overline{m_{2k+2}} = m_{2k+2}, \quad k = 0, 1, \ldots.$$
We call a pair of sequences $\{s_k, m_{k+1}\}_{k=0}^{\infty}$, $s_k \in \mathbb{C}$, $m_{k+1} \in \mathbb{C}$, positive if
$$\begin{vmatrix}
s_0 & s_1 & \cdots & s_k \\
m_1 & m_2 & \cdots & m_{k+1} \\
s_2 & s_3 & \cdots & s_{k+2} \\
\vdots & & & \vdots \\
m_k & m_{k+1} & \cdots & m_{2k}
\end{vmatrix} > 0, \quad k = 2l+1; \qquad
\begin{vmatrix}
s_0 & s_1 & \cdots & s_k \\
m_1 & m_2 & \cdots & m_{k+1} \\
s_2 & s_3 & \cdots & s_{k+2} \\
\vdots & & & \vdots \\
s_k & s_{k+1} & \cdots & s_{2k}
\end{vmatrix} > 0, \quad k = 2l, \qquad l = 0, 1, \ldots$$
(the rows alternate between shifted $s$- and $m$-rows; the last row is $m_k, \ldots, m_{2k}$ for odd $k$ and $s_k, \ldots, s_{2k}$ for even $k$).

The following theorem holds true.

Theorem 1 ([1, Theorem 6, p. 274]). Let a moment problem in the general form above be given. For the existence of a solution $\sigma(\lambda)$ of the problem (with an infinite number of points of increase) it is necessary and sufficient that the pair of sequences $\{s_k, m_{k+1}\}_{k=0}^{\infty}$ be symmetric and positive.

Zolotarev suggested investigating systems of polynomials which satisfy (1); he also made useful remarks. The next theorem is an analog of Favard's theorem [2, Theorem 15, p. 60]. It gives the orthonormality properties for the systems from (1).

Theorem 2. Let the system of polynomials $\{p_n(\lambda)\}_{n=0}^{\infty}$ satisfy (1). Then these polynomials are orthonormal on the real and the imaginary axes in the complex plane:
$$\int_{\mathbb{R} \cup T} \bigl(p_n(\lambda),\, p_n(-\lambda)\bigr)\, d\sigma(\lambda) \begin{pmatrix} \overline{p_m(\lambda)} \\ \overline{p_m(-\lambda)} \end{pmatrix} = \delta_{n,m}, \quad n, m = 0, 1, \ldots; \quad T = (-i\infty, i\infty),$$
where $\sigma(\lambda) = \begin{pmatrix} \sigma_1(\lambda) & \sigma_2(\lambda) \\ \sigma_3(\lambda) & \sigma_4(\lambda) \end{pmatrix}$ is a symmetric, non-decreasing matrix function with an infinite number of points of increase:
$$\overline{\sigma_1(\lambda)} = \sigma_1(\lambda), \quad \overline{\sigma_4(\lambda)} = \sigma_4(\lambda), \quad \overline{\sigma_2(\lambda)} = \sigma_3(\lambda);$$
$$\sigma(\lambda_2) \ge \sigma(\lambda_1), \ \lambda_2 \ge \lambda_1, \ \lambda_1, \lambda_2 \in \mathbb{R}; \qquad \sigma(\lambda_2) \ge \sigma(\lambda_1), \ \frac{\lambda_2}{i} \ge \frac{\lambda_1}{i}, \ \lambda_1, \lambda_2 \in T, \ T = (-i\infty, i\infty).$$

Proof. Let $\{p_n(\lambda)\}_{n=0}^{\infty}$ be the given system of polynomials. We define a functional $\sigma(u, v)$, where $u, v \in P$ and $P$ is the space of all polynomials. If $u(\lambda), v(\lambda) \in P$, then we can write
$$u(\lambda) = \sum_{i=0}^{n} a_i p_i(\lambda), \quad v(\lambda) = \sum_{j=0}^{m} b_j p_j(\lambda), \quad a_i \in \mathbb{C},\ i = 0, \ldots, n, \quad b_j \in \mathbb{C},\ j = 0, \ldots, m \quad (a_n \neq 0, \ b_m \neq 0), \quad n, m \in \mathbb{N}.$$
Note that for arbitrary polynomials $u(\lambda)$, $v(\lambda)$ such a decomposition is always possible because the polynomials $p_n(\lambda)$ have positive leading coefficients; moreover, for arbitrary $u(\lambda)$, $v(\lambda)$ such a representation is unique. By definition,
$$\sigma(u, v) = \sum_{i=0}^{k} a_i \overline{b_i}, \quad k = \min\{n, m\}.$$
This functional is bilinear. Also, it satisfies the relation $\sigma(u, v) = \overline{\sigma(v, u)}$. Note that for the polynomials $\{p_n(\lambda)\}_{n=0}^{\infty}$ we have

(2) $\sigma(p_n(\lambda), p_m(\lambda)) = \delta_{n,m}$, $n, m = 0, 1, \ldots$.

Let us show that this functional has the property

(3) $\sigma(\lambda^2 u, v) = \sigma(u, \lambda^2 v)$, $u, v \in P$.

Really, if $u(\lambda) = \sum_{i=0}^{n} a_i p_i(\lambda)$, $v(\lambda) = \sum_{j=0}^{m} b_j p_j(\lambda)$ as above, then
$$\sigma(\lambda^2 u, v) = \sigma\Bigl(\lambda^2 \sum_{i=0}^{n} a_i p_i(\lambda), \sum_{j=0}^{m} b_j p_j(\lambda)\Bigr) = \sum_{i=0}^{n} \sum_{j=0}^{m} a_i \overline{b_j}\, \sigma(\lambda^2 p_i(\lambda), p_j(\lambda)).$$
If we prove that

(4) $\sigma(\lambda^2 p_i(\lambda), p_j(\lambda)) = \sigma(p_i(\lambda), \lambda^2 p_j(\lambda))$, $i, j = 0, 1, \ldots$,
then
$$\sigma(\lambda^2 u, v) = \sum_{i=0}^{n} \sum_{j=0}^{m} a_i \overline{b_j}\, \sigma(p_i(\lambda), \lambda^2 p_j(\lambda)) = \sigma\Bigl(\sum_{i=0}^{n} a_i p_i(\lambda), \lambda^2 \sum_{j=0}^{m} b_j p_j(\lambda)\Bigr) = \sigma(u, \lambda^2 v),$$
and relation (3) will be proved. But
$$\sigma(\lambda^2 p_i(\lambda), p_j(\lambda)) = \sigma\bigl(\alpha_{i-2} p_{i-2}(\lambda) + \beta_{i-1} p_{i-1}(\lambda) + \gamma_i p_i(\lambda) + \beta_i p_{i+1}(\lambda) + \alpha_i p_{i+2}(\lambda),\, p_j(\lambda)\bigr)$$
$$= \alpha_{i-2}\delta_{i-2,j} + \beta_{i-1}\delta_{i-1,j} + \gamma_i\delta_{i,j} + \beta_i\delta_{i+1,j} + \alpha_i\delta_{i+2,j}, \quad i, j = 0, 1, 2, \ldots;$$
$$\sigma(p_i(\lambda), \lambda^2 p_j(\lambda)) = \sigma\bigl(p_i(\lambda),\, \alpha_{j-2} p_{j-2}(\lambda) + \beta_{j-1} p_{j-1}(\lambda) + \gamma_j p_j(\lambda) + \beta_j p_{j+1}(\lambda) + \alpha_j p_{j+2}(\lambda)\bigr)$$
$$= \alpha_{j-2}\delta_{i,j-2} + \beta_{j-1}\delta_{i,j-1} + \gamma_j\delta_{i,j} + \beta_j\delta_{i,j+1} + \alpha_j\delta_{i,j+2} = \alpha_i\delta_{i,j-2} + \beta_i\delta_{i,j-1} + \gamma_i\delta_{i,j} + \beta_{i-1}\delta_{i,j+1} + \alpha_{i-2}\delta_{i,j+2}$$
$$= \alpha_i\delta_{i+2,j} + \beta_i\delta_{i+1,j} + \gamma_i\delta_{i,j} + \beta_{i-1}\delta_{i-1,j} + \alpha_{i-2}\delta_{i-2,j}, \quad i, j = 0, 1, 2, \ldots.$$
This implies the correctness of (3).

Let $s_k = \sigma(\lambda^k, 1)$, $k = 0, 1, \ldots$, and $m_k = \sigma(\lambda^{k-1}, \lambda)$, $k = 1, 2, \ldots$. Then
$$\overline{s_{2k}} = \overline{\sigma(\lambda^{2k}, 1)} = \sigma(1, \lambda^{2k}) = \sigma(\lambda^{2k}, 1) = s_{2k},$$
$$\overline{s_{2k+1}} = \overline{\sigma(\lambda^{2k+1}, 1)} = \sigma(1, \lambda^{2k+1}) = \sigma(\lambda^{2k}, \lambda) = m_{2k+1}, \quad k = 0, 1, \ldots;$$
$$\overline{m_{2k}} = \overline{\sigma(\lambda^{2k-1}, \lambda)} = \sigma(\lambda, \lambda^{2k-1}) = \sigma(\lambda^{2k-1}, \lambda) = m_{2k}, \quad k = 1, 2, \ldots.$$
Hence the pair of sequences $\{s_k, m_{k+1}\}_{k=0}^{\infty}$, $s_k \in \mathbb{C}$, $m_{k+1} \in \mathbb{C}$, is symmetric. Note that $\sigma(u, u) > 0$ for $u \in P$, $u \neq 0$. Using this we have
$$0 < \sigma\Bigl(\sum_{k=0}^{n} a_k \lambda^k, \sum_{i=0}^{n} a_i \lambda^i\Bigr) = \sum_{k,i=0}^{n} a_k \overline{a_i}\, \sigma(\lambda^k, \lambda^i) =$$
$$= \sum_{k=0}^{n} \Bigl[\, \sum_{j=0}^{n/2} a_k \overline{a_{2j}}\, s_{k+2j} + \sum_{j=0}^{n/2-1} a_k \overline{a_{2j+1}}\, m_{k+2j+1} \Bigr], \quad \text{if } n \text{ is even};$$
$$= \sum_{k=0}^{n} \Bigl[\, \sum_{j=0}^{(n-1)/2} a_k \overline{a_{2j}}\, s_{k+2j} + \sum_{j=0}^{(n-1)/2} a_k \overline{a_{2j+1}}\, m_{k+2j+1} \Bigr], \quad \text{if } n \text{ is odd}$$
($a_i \in \mathbb{C}$, $i = 0, \ldots, n$, $a_n \neq 0$, $n \in \mathbb{N}$). It follows that the pair of sequences $\{s_k, m_{k+1}\}_{k=0}^{\infty}$ is positive.

Let $\sigma(\lambda) = \begin{pmatrix} \sigma_1(\lambda) & \sigma_2(\lambda) \\ \sigma_3(\lambda) & \sigma_4(\lambda) \end{pmatrix}$ be a solution of the corresponding generalized symmetric moment problem. Let us consider the functional
$$\hat\sigma(u, v) = \int_{\mathbb{R} \cup T} \bigl(u(\lambda),\, u(-\lambda)\bigr)\, d\sigma(\lambda) \begin{pmatrix} \overline{v(\lambda)} \\ \overline{v(-\lambda)} \end{pmatrix}, \quad u, v \in P.$$
Let $u(\lambda) = \sum_{k=0}^{n} a_k \lambda^k$, $v(\lambda) = \sum_{i=0}^{m} b_i \lambda^i$, $a_k \in \mathbb{C}$, $k = 0, \ldots, n$, $b_i \in \mathbb{C}$, $i = 0, \ldots, m$. Then
$$\hat\sigma(u, v) = \hat\sigma\Bigl(\sum_{k=0}^{n} a_k \lambda^k, \sum_{i=0}^{m} b_i \lambda^i\Bigr) = \sum_{k=0}^{n} \sum_{i=0}^{m} a_k \overline{b_i}\, \hat\sigma(\lambda^k, \lambda^i).$$
The following equality holds true:
$$\hat\sigma(\lambda^k, \lambda^i) = \sigma(\lambda^k, \lambda^i), \quad k, i = 0, 1, \ldots.$$
Actually,
$$\hat\sigma(\lambda^k, \lambda^{2j}) = \hat\sigma(\lambda^{k+2j}, 1) = s_{k+2j} = \sigma(\lambda^{k+2j}, 1) = \sigma(\lambda^k, \lambda^{2j}),$$
$$\hat\sigma(\lambda^k, \lambda^{2j+1}) = \hat\sigma(\lambda^{k+2j}, \lambda) = m_{k+2j+1} = \sigma(\lambda^{k+2j}, \lambda) = \sigma(\lambda^k, \lambda^{2j+1}), \quad k, j = 0, 1, 2, \ldots.$$
This implies that
$$\hat\sigma(u, v) = \sum_{k=0}^{n} \sum_{i=0}^{m} a_k \overline{b_i}\, \sigma(\lambda^k, \lambda^i) = \sigma(u, v), \quad u, v \in P.$$
Using the orthonormality relation (2) we obtain that the polynomials $\{p_n(\lambda)\}_{n=0}^{\infty}$ satisfy the required orthonormality property, and this completes the proof.
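The construction in the proof of Theorem 2 lends itself to a numerical sanity check. The sketch below works in the real-coefficient special case (all $\alpha_n$, $\beta_n$, $\gamma_n$ and $p_1(\lambda) = \lambda + 0.2$ are hypothetical values; with real data the conjugations in $\sigma$ are immaterial): it builds $p_n$ from (1), expands arbitrary polynomials in the basis $\{p_n\}$, and verifies property (3) together with the moment identity $s_{2k+1} = m_{2k+1}$:

```python
import numpy as np

N = 14  # basis size: p_0 .. p_{N-1}
# hypothetical recurrence data (beta taken real for simplicity); alpha_n > 0
alpha = [1.0 + 0.1 * n for n in range(N)]
beta = [0.3 - 0.02 * n for n in range(N)]
gamma = [2.0 + 0.05 * n for n in range(N)]

def shift2(c):
    """Coefficient vector of lambda^2 * u for a coefficient vector c of u."""
    return np.concatenate([[0.0, 0.0], c])

def add(a, b):
    out = np.zeros(max(len(a), len(b)))
    out[:len(a)] += a
    out[:len(b)] += b
    return out

# p_0 = 1, p_1 = lambda + 0.2; then p_{n+2} from recurrence (1)
p = [np.array([1.0]), np.array([0.2, 1.0])]
for n in range(N - 2):
    rhs = add(shift2(p[n]), -gamma[n] * p[n])
    rhs = add(rhs, -beta[n] * p[n + 1])
    if n >= 1:
        rhs = add(rhs, -beta[n - 1] * p[n - 1])
    if n >= 2:
        rhs = add(rhs, -alpha[n - 2] * p[n - 2])
    p.append(rhs / alpha[n])

def expand(u):
    """Coordinates a_k of u in the basis {p_k}, by back-substitution."""
    a, r = np.zeros(N), np.zeros(N)
    r[:len(u)] = u
    for k in range(N - 1, -1, -1):
        a[k] = r[k] / p[k][-1]   # p[k][-1] is the (positive) leading coefficient
        r[:k + 1] -= a[k] * p[k]
    return a

def sigma(u, v):
    """The proof's functional: sigma(p_i, p_j) = delta_ij, extended bilinearly."""
    return expand(u) @ expand(v)

# property (3): sigma(lambda^2 u, v) = sigma(u, lambda^2 v)
rng = np.random.default_rng(1)
u, v = rng.normal(size=5), rng.normal(size=6)  # degrees 4 and 5
assert abs(sigma(shift2(u), v) - sigma(u, shift2(v))) < 1e-6

# moments: s_k = sigma(lambda^k, 1), m_k = sigma(lambda^{k-1}, lambda)
e = lambda k: np.eye(1, k + 1, k)[0]           # coefficient vector of lambda^k
s = [sigma(e(k), e(0)) for k in range(N)]
m = [sigma(e(k - 1), e(1)) for k in range(1, N)]  # m[k-1] holds m_k
for k in range((N - 1) // 2):
    assert abs(s[2 * k + 1] - m[2 * k]) < 1e-5    # s_{2k+1} = m_{2k+1}
```

The back-substitution in `expand` is possible precisely because each $p_n$ has degree $n$ with positive leading coefficient, which is the uniqueness argument used at the start of the proof.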
We now turn to the difference equation of the type

(5) $\alpha_{n-2} y_{n-2} + \beta_{n-1} y_{n-1} + \gamma_n y_n + \beta_n y_{n+1} + \alpha_n y_{n+2} = \lambda^2 y_n$, $n = 2, 3, \ldots$,

where $\alpha_n > 0$, $\beta_n \in \mathbb{C}$, $\gamma_n \in \mathbb{R}$, $n = 0, 1, 2, \ldots$; $\lambda \in \mathbb{C}$ is a parameter and $y = (y_0, y_1, \ldots, y_n, \ldots)^T$ is the solution vector.

Let $\{p_n(\lambda)\}_{n=0}^{\infty}$ be the system of polynomials such that $p_0(\lambda) = 1$, $p_1(\lambda) = c\lambda + b$, $c > 0$, $b \in \mathbb{C}$, $\lambda \in \mathbb{C}$; the polynomials $p_2(\lambda)$, $p_3(\lambda)$ can be found from the equalities
$$\gamma_0 p_0(\lambda) + \beta_0 p_1(\lambda) + \alpha_0 p_2(\lambda) = \lambda^2 p_0(\lambda),$$
$$\beta_0 p_0(\lambda) + \gamma_1 p_1(\lambda) + \beta_1 p_2(\lambda) + \alpha_1 p_3(\lambda) = \lambda^2 p_1(\lambda),$$
$\gamma_0, \gamma_1 \in \mathbb{R}$, $\beta_0 \in \mathbb{C}$, and $\{p_n(\lambda)\}_{n=0}^{\infty}$ is a solution of (5). This system of polynomials satisfies (1). Let $\sigma(u, v)$, $u, v \in P$, be the functional defined as in the proof of Theorem 2. Let us write out the first four polynomials of the sequence $\{p_n(\lambda)\}_{n=0}^{\infty}$:
$$p_0(\lambda) = 1, \quad p_1(\lambda) = \mu_1\lambda + \nu_1, \quad p_2(\lambda) = \mu_2\lambda^2 + \nu_2\lambda + \eta_2, \quad p_3(\lambda) = \mu_3\lambda^3 + \nu_3\lambda^2 + \eta_3\lambda + \xi_3,$$
where $\mu_k > 0$, $\nu_k \in \mathbb{C}$, $k = 1, 2, 3$, and $\eta_2, \eta_3, \xi_3 \in \mathbb{C}$. Consider the following systems of polynomials:
$$p_n^+(\lambda) = \frac{p_n(\lambda) + p_n(-\lambda)}{2}, \qquad p_n^-(\lambda) = \frac{p_n(\lambda) - p_n(-\lambda)}{2\lambda},$$

(6) $\displaystyle f_n(\lambda) = \sigma_u\Bigl(\frac{p_n(u) - u\,p_n^-(\lambda) - p_n^+(\lambda)}{u^2 - \lambda^2},\, 1\Bigr), \qquad g_n(\lambda) = \sigma_u\Bigl(\frac{p_n(u) - u\,p_n^-(\lambda) - p_n^+(\lambda)}{u^2 - \lambda^2},\, p_1(u)\Bigr), \quad n = 0, 1, \ldots$

(the subscript $u$ indicates that the functional acts in the variable $u$). The $f_n(\lambda)$, $g_n(\lambda)$ can also be written in the form
$$f_n(\lambda) = \sigma_u\Bigl(\frac{p_n^+(u) - p_n^+(\lambda)}{u^2 - \lambda^2},\, 1\Bigr) + \sigma_u\Bigl(u\,\frac{p_n^-(u) - p_n^-(\lambda)}{u^2 - \lambda^2},\, 1\Bigr),$$
$$g_n(\lambda) = \sigma_u\Bigl(\frac{p_n^+(u) - p_n^+(\lambda)}{u^2 - \lambda^2},\, p_1(u)\Bigr) + \sigma_u\Bigl(u\,\frac{p_n^-(u) - p_n^-(\lambda)}{u^2 - \lambda^2},\, p_1(u)\Bigr), \quad n = 0, 1, \ldots.$$

Now we can construct the fundamental system of solutions of (5) (see [3]).

Theorem 3. Consider the difference equation (5). The systems of polynomials $\{p_n^+(\lambda)\}_{n=0}^{\infty}$, $\{p_n^-(\lambda)\}_{n=0}^{\infty}$, $\{f_n(\lambda)\}_{n=0}^{\infty}$, $\{g_n(\lambda)\}_{n=0}^{\infty}$ form a fundamental system of solutions of (5). The initial conditions are as follows:

(7)
$$p_0^+(\lambda) = 1, \quad p_1^+(\lambda) = \nu_1, \quad p_2^+(\lambda) = \mu_2\lambda^2 + \eta_2, \quad p_3^+(\lambda) = \nu_3\lambda^2 + \xi_3;$$
$$p_0^-(\lambda) = 0, \quad p_1^-(\lambda) = \mu_1, \quad p_2^-(\lambda) = \nu_2, \quad p_3^-(\lambda) = \mu_3\lambda^2 + \eta_3;$$
$$f_0(\lambda) = 0, \quad f_1(\lambda) = 0, \quad f_2(\lambda) = \mu_2, \quad f_3(\lambda) = \nu_3 - \frac{\mu_3\nu_1}{\mu_1};$$
$$g_0(\lambda) = 0, \quad g_1(\lambda) = 0, \quad g_2(\lambda) = 0, \quad g_3(\lambda) = \frac{\mu_3}{\mu_1}.$$

Proof. The systems $\{p_n^\pm(\lambda)\}_{n=0}^{\infty}$ are solutions of (5) because they are linear combinations of solutions of this equation. The conditions (7) are easily verified. Substituting $f_n(\lambda)$ into the left-hand side of (5) we have
$$\sigma_u\Bigl(\frac{u^2 p_n^+(u) - \lambda^2 p_n^+(\lambda)}{u^2 - \lambda^2},\, 1\Bigr) + \sigma_u\Bigl(u\,\frac{u^2 p_n^-(u) - \lambda^2 p_n^-(\lambda)}{u^2 - \lambda^2},\, 1\Bigr)$$
$$= \sigma_u(p_n^+(u), 1) + \lambda^2 \sigma_u\Bigl(\frac{p_n^+(u) - p_n^+(\lambda)}{u^2 - \lambda^2},\, 1\Bigr) + \sigma_u(u\,p_n^-(u), 1) + \lambda^2 \sigma_u\Bigl(u\,\frac{p_n^-(u) - p_n^-(\lambda)}{u^2 - \lambda^2},\, 1\Bigr)$$
$$= \lambda^2 f_n(\lambda) + \sigma_u(p_n^+(u) + u\,p_n^-(u), 1) = \lambda^2 f_n(\lambda) + \sigma_u(p_n(u), 1) = \lambda^2 f_n(\lambda), \quad n = 1, 2, \ldots.$$
Analogously, for the polynomials $g_n(\lambda)$ we have
$$\sigma_u\Bigl(\frac{u^2 p_n^+(u) - \lambda^2 p_n^+(\lambda)}{u^2 - \lambda^2},\, p_1(u)\Bigr) + \sigma_u\Bigl(u\,\frac{u^2 p_n^-(u) - \lambda^2 p_n^-(\lambda)}{u^2 - \lambda^2},\, p_1(u)\Bigr) = \lambda^2 g_n(\lambda) + \sigma_u(p_n(u), p_1(u)) = \lambda^2 g_n(\lambda), \quad n = 2, 3, \ldots.$$
It follows that the systems $\{f_n(\lambda)\}_{n=0}^{\infty}$, $\{g_n(\lambda)\}_{n=0}^{\infty}$ are solutions of (5). Using formula (6) and conditions (7) for $p_n^\pm(\lambda)$, $n = 0, \ldots, 3$, we obtain:
$$f_0(\lambda) = g_0(\lambda) = 0, \qquad f_1(\lambda) = g_1(\lambda) = 0,$$
$$f_2(\lambda) = \sigma_u(\mu_2, 1) = \mu_2, \qquad g_2(\lambda) = \sigma_u(\mu_2, p_1(u)) = \mu_2\,\sigma_u(1, p_1(u)) = 0,$$
$$f_3(\lambda) = \sigma_u(\nu_3, 1) + \sigma_u(u\mu_3, 1) = \nu_3 + \mu_3\,\sigma_u(u, 1) = \nu_3 + \mu_3\,\sigma_u\Bigl(\frac{p_1(u) - \nu_1}{\mu_1},\, 1\Bigr) = \nu_3 + \frac{\mu_3}{\mu_1}\bigl(\sigma_u(p_1(u), 1) - \nu_1\sigma_u(1, 1)\bigr) = \nu_3 - \frac{\mu_3\nu_1}{\mu_1},$$
$$g_3(\lambda) = \sigma_u(\nu_3, p_1(u)) + \sigma_u(u\mu_3, p_1(u)) = \mu_3\,\sigma_u\Bigl(\frac{p_1(u) - \nu_1}{\mu_1},\, p_1(u)\Bigr) = \frac{\mu_3}{\mu_1}.$$
The linear independence of the solutions $\{p_n^\pm(\lambda)\}_{n=0}^{\infty}$, $\{f_n(\lambda)\}_{n=0}^{\infty}$, $\{g_n(\lambda)\}_{n=0}^{\infty}$ is evident from (7). The proof is complete.

Let $\{y_n\}_{n=0}^{\infty}$, $\{v_n\}_{n=0}^{\infty}$ be solutions of equation (5) corresponding to the parameter values $\lambda$ and $\xi$ respectively:
$$\lambda^2 y_n(\lambda) = \alpha_{n-2} y_{n-2}(\lambda) + \beta_{n-1} y_{n-1}(\lambda) + \gamma_n y_n(\lambda) + \beta_n y_{n+1}(\lambda) + \alpha_n y_{n+2}(\lambda),$$
$$\xi^2 v_n(\xi) = \alpha_{n-2} v_{n-2}(\xi) + \beta_{n-1} v_{n-1}(\xi) + \gamma_n v_n(\xi) + \beta_n v_{n+1}(\xi) + \alpha_n v_{n+2}(\xi), \quad n = 2, 3, \ldots.$$
Then
$$\lambda^2 y_n(\lambda)v_n(\xi) = \alpha_{n-2} y_{n-2}(\lambda)v_n(\xi) + \beta_{n-1} y_{n-1}(\lambda)v_n(\xi) + \gamma_n y_n(\lambda)v_n(\xi) + \beta_n y_{n+1}(\lambda)v_n(\xi) + \alpha_n y_{n+2}(\lambda)v_n(\xi),$$
$$\xi^2 y_n(\lambda)v_n(\xi) = \alpha_{n-2} y_n(\lambda)v_{n-2}(\xi) + \beta_{n-1} y_n(\lambda)v_{n-1}(\xi) + \gamma_n y_n(\lambda)v_n(\xi) + \beta_n y_n(\lambda)v_{n+1}(\xi) + \alpha_n y_n(\lambda)v_{n+2}(\xi), \quad n = 2, 3, \ldots.$$
Subtracting the second equality from the first one, we have
$$(\lambda^2 - \xi^2)\, y_n(\lambda)v_n(\xi) = \alpha_{n-2}\bigl(y_{n-2}(\lambda)v_n(\xi) - y_n(\lambda)v_{n-2}(\xi)\bigr) + \beta_{n-1} y_{n-1}(\lambda)v_n(\xi) - \beta_{n-1} y_n(\lambda)v_{n-1}(\xi)$$
$$+\ \beta_n y_{n+1}(\lambda)v_n(\xi) - \beta_n y_n(\lambda)v_{n+1}(\xi) + \alpha_n\bigl(y_{n+2}(\lambda)v_n(\xi) - y_n(\lambda)v_{n+2}(\xi)\bigr) = -A_{n-2}(\lambda,\xi) - B_{n-1}(\lambda,\xi) + B_n(\lambda,\xi) + A_n(\lambda,\xi),$$
where
$$A_n(\lambda,\xi) = \alpha_n\bigl(y_{n+2}(\lambda)v_n(\xi) - y_n(\lambda)v_{n+2}(\xi)\bigr), \qquad B_n(\lambda,\xi) = \beta_n y_{n+1}(\lambda)v_n(\xi) - \beta_n y_n(\lambda)v_{n+1}(\xi).$$
Summing up over $n$, we obtain
$$(\lambda^2 - \xi^2) \sum_{n=k}^{m} y_n(\lambda)v_n(\xi) = B_m(\lambda,\xi) + A_m(\lambda,\xi) + A_{m-1}(\lambda,\xi) - A_{k-2}(\lambda,\xi) - B_{k-1}(\lambda,\xi) - A_{k-1}(\lambda,\xi), \quad 2 \le k \le m.$$
So we have proved the following theorem.

Theorem 4. Let $\{y_n\}_{n=0}^{\infty}$, $\{v_n\}_{n=0}^{\infty}$ be solutions of equation (5) corresponding to the parameters $\lambda$ and $\xi$ respectively. Then the following formula holds:
$$(\lambda^2 - \xi^2) \sum_{n=k}^{m} y_n(\lambda)v_n(\xi) = \alpha_m\bigl(y_{m+2}(\lambda)v_m(\xi) - y_m(\lambda)v_{m+2}(\xi)\bigr) + \alpha_{m-1}\bigl(y_{m+1}(\lambda)v_{m-1}(\xi) - y_{m-1}(\lambda)v_{m+1}(\xi)\bigr)$$
$$+\ \beta_m y_{m+1}(\lambda)v_m(\xi) - \beta_m y_m(\lambda)v_{m+1}(\xi) - \alpha_{k-2}\bigl(y_k(\lambda)v_{k-2}(\xi) - y_{k-2}(\lambda)v_k(\xi)\bigr)$$
$$-\ \alpha_{k-1}\bigl(y_{k+1}(\lambda)v_{k-1}(\xi) - y_{k-1}(\lambda)v_{k+1}(\xi)\bigr) - \bigl(\beta_{k-1} y_k(\lambda)v_{k-1}(\xi) - \beta_{k-1} y_{k-1}(\lambda)v_k(\xi)\bigr), \quad 2 \le k \le m.$$

REFERENCES

[1] S. Zagorodniuk. On a five-diagonal Jacobi matrices and orthogonal polynomials on rays in the complex plane. Serdica Math. J. 24 (1998).
[2] G. Freud. Orthogonal Polynomials. Akadémiai Kiadó, Budapest, 1971.
[3] D. I. Martyniuk. Lectures on Qualitative Theory of Difference Equations. Naukova Dumka, Kiev, 1972 (in Russian).

Faculty of Mathematics and Informatics
Kharkov University
Kharkov 77, Ukraine

Received August 25, 2000
Revised July 30, 2001
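The summation formula of Theorem 4 can be sanity-checked numerically: iterate (5) for two parameter values from arbitrary initial data and compare both sides of the identity. All coefficient and initial values below are hypothetical test data:

```python
import numpy as np

M = 12
# hypothetical coefficients: alpha_n > 0, beta_n complex, gamma_n real
alpha = [1.0 + 0.1 * n for n in range(M + 3)]
beta = [0.3 + 0.1j * n for n in range(M + 3)]
gamma = [2.0 - 0.05 * n for n in range(M + 3)]

def solve(z, y0):
    """Solution of (5) for parameter z, from free initial values y_0..y_3."""
    y = list(y0)
    for n in range(2, M + 1):
        y.append((z**2 * y[n] - alpha[n - 2] * y[n - 2] - beta[n - 1] * y[n - 1]
                  - gamma[n] * y[n] - beta[n] * y[n + 1]) / alpha[n])
    return y

lam, xi = 1.3 + 0.2j, 0.7 - 0.5j
y = solve(lam, [1.0, 0.5, -0.2, 0.3])
v = solve(xi, [0.4, -1.0, 0.8, 0.1])

# the telescoped quantities of Theorem 4
A = lambda n: alpha[n] * (y[n + 2] * v[n] - y[n] * v[n + 2])
B = lambda n: beta[n] * (y[n + 1] * v[n] - y[n] * v[n + 1])

k, m = 3, 9
lhs = (lam**2 - xi**2) * sum(y[n] * v[n] for n in range(k, m + 1))
rhs = B(m) + A(m) + A(m - 1) - A(k - 2) - B(k - 1) - A(k - 1)
assert abs(lhs - rhs) < 1e-8 * max(1.0, abs(lhs))
```

Note that the identity is purely algebraic, so it holds for complex $\beta_n$ and for any choice of the four free initial values, in line with the four-dimensional fundamental system of Theorem 3.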
More informationComputational math: Assignment 1
Computational math: Assignment 1 Thanks Ting Gao for her Latex file 11 Let B be a 4 4 matrix to which we apply the following operations: 1double column 1, halve row 3, 3add row 3 to row 1, 4interchange
More informationMIT Final Exam Solutions, Spring 2017
MIT 8.6 Final Exam Solutions, Spring 7 Problem : For some real matrix A, the following vectors form a basis for its column space and null space: C(A) = span,, N(A) = span,,. (a) What is the size m n of
More informationTrigonometric SOS model with DWBC and spin chains with non-diagonal boundaries
Trigonometric SOS model with DWBC and spin chains with non-diagonal boundaries N. Kitanine IMB, Université de Bourgogne. In collaboration with: G. Filali RAQIS 21, Annecy June 15 - June 19, 21 Typeset
More informationChapter 3. Matrices. 3.1 Matrices
40 Chapter 3 Matrices 3.1 Matrices Definition 3.1 Matrix) A matrix A is a rectangular array of m n real numbers {a ij } written as a 11 a 12 a 1n a 21 a 22 a 2n A =.... a m1 a m2 a mn The array has m rows
More informationDS-GA 1002 Lecture notes 0 Fall Linear Algebra. These notes provide a review of basic concepts in linear algebra.
DS-GA 1002 Lecture notes 0 Fall 2016 Linear Algebra These notes provide a review of basic concepts in linear algebra. 1 Vector spaces You are no doubt familiar with vectors in R 2 or R 3, i.e. [ ] 1.1
More informationSpectral Theorem for Self-adjoint Linear Operators
Notes for the undergraduate lecture by David Adams. (These are the notes I would write if I was teaching a course on this topic. I have included more material than I will cover in the 45 minute lecture;
More information1 Introduction. 2 Determining what the J i blocks look like. December 6, 2006
Jordan Canonical Forms December 6, 2006 1 Introduction We know that not every n n matrix A can be diagonalized However, it turns out that we can always put matrices A into something called Jordan Canonical
More informationEcon Slides from Lecture 7
Econ 205 Sobel Econ 205 - Slides from Lecture 7 Joel Sobel August 31, 2010 Linear Algebra: Main Theory A linear combination of a collection of vectors {x 1,..., x k } is a vector of the form k λ ix i for
More informationON THE MOMENTS OF ITERATED TAIL
ON THE MOMENTS OF ITERATED TAIL RADU PĂLTĂNEA and GHEORGHIŢĂ ZBĂGANU The classical distribution in ruins theory has the property that the sequence of the first moment of the iterated tails is convergent
More informationStochastic Design Criteria in Linear Models
AUSTRIAN JOURNAL OF STATISTICS Volume 34 (2005), Number 2, 211 223 Stochastic Design Criteria in Linear Models Alexander Zaigraev N. Copernicus University, Toruń, Poland Abstract: Within the framework
More informationLecture 9: Numerical Linear Algebra Primer (February 11st)
10-725/36-725: Convex Optimization Spring 2015 Lecture 9: Numerical Linear Algebra Primer (February 11st) Lecturer: Ryan Tibshirani Scribes: Avinash Siravuru, Guofan Wu, Maosheng Liu Note: LaTeX template
More informationLecture Notes 6: Dynamic Equations Part C: Linear Difference Equation Systems
University of Warwick, EC9A0 Maths for Economists Peter J. Hammond 1 of 45 Lecture Notes 6: Dynamic Equations Part C: Linear Difference Equation Systems Peter J. Hammond latest revision 2017 September
More informationReflections and Rotations in R 3
Reflections and Rotations in R 3 P. J. Ryan May 29, 21 Rotations as Compositions of Reflections Recall that the reflection in the hyperplane H through the origin in R n is given by f(x) = x 2 ξ, x ξ (1)
More informationMath/Phys/Engr 428, Math 529/Phys 528 Numerical Methods - Summer Homework 3 Due: Tuesday, July 3, 2018
Math/Phys/Engr 428, Math 529/Phys 528 Numerical Methods - Summer 28. (Vector and Matrix Norms) Homework 3 Due: Tuesday, July 3, 28 Show that the l vector norm satisfies the three properties (a) x for x
More informationLecture 8 : Eigenvalues and Eigenvectors
CPS290: Algorithmic Foundations of Data Science February 24, 2017 Lecture 8 : Eigenvalues and Eigenvectors Lecturer: Kamesh Munagala Scribe: Kamesh Munagala Hermitian Matrices It is simpler to begin with
More informationThe SVD-Fundamental Theorem of Linear Algebra
Nonlinear Analysis: Modelling and Control, 2006, Vol. 11, No. 2, 123 136 The SVD-Fundamental Theorem of Linear Algebra A. G. Akritas 1, G. I. Malaschonok 2, P. S. Vigklas 1 1 Department of Computer and
More informationRanks of Hadamard Matrices and Equivalence of Sylvester Hadamard and Pseudo-Noise Matrices
Operator Theory: Advances and Applications, Vol 1, 1 13 c 27 Birkhäuser Verlag Basel/Switzerland Ranks of Hadamard Matrices and Equivalence of Sylvester Hadamard and Pseudo-Noise Matrices Tom Bella, Vadim
More informationIntroduction to Iterative Solvers of Linear Systems
Introduction to Iterative Solvers of Linear Systems SFB Training Event January 2012 Prof. Dr. Andreas Frommer Typeset by Lukas Krämer, Simon-Wolfgang Mages and Rudolf Rödl 1 Classes of Matrices and their
More informationMATH 23a, FALL 2002 THEORETICAL LINEAR ALGEBRA AND MULTIVARIABLE CALCULUS Solutions to Final Exam (in-class portion) January 22, 2003
MATH 23a, FALL 2002 THEORETICAL LINEAR ALGEBRA AND MULTIVARIABLE CALCULUS Solutions to Final Exam (in-class portion) January 22, 2003 1. True or False (28 points, 2 each) T or F If V is a vector space
More informationMTAEA Vectors in Euclidean Spaces
School of Economics, Australian National University January 25, 2010 Vectors. Economists usually work in the vector space R n. A point in this space is called a vector, and is typically defined by its
More informationAn Inverse Problem for Two Spectra of Complex Finite Jacobi Matrices
Coyright 202 Tech Science Press CMES, vol.86, no.4,.30-39, 202 An Inverse Problem for Two Sectra of Comlex Finite Jacobi Matrices Gusein Sh. Guseinov Abstract: This aer deals with the inverse sectral roblem
More informationarxiv: v1 [cs.oh] 3 Oct 2014
M. Prisheltsev (Voronezh) mikhail.prisheltsev@gmail.com ADAPTIVE TWO-DIMENSIONAL WAVELET TRANSFORMATION BASED ON THE HAAR SYSTEM 1 Introduction arxiv:1410.0705v1 [cs.oh] Oct 2014 The purpose is to study
More information