On the Computations of Eigenvalues of the Fourth-order Sturm Liouville Problems


Int. J. Open Problems Compt. Math., Vol. 4, No. 3, September 2011
ISSN 1998-6262; Copyright © ICSRS Publication, 2011
www.i-csrs.org

On the Computations of Eigenvalues of the Fourth-order Sturm-Liouville Problems

Muhammed I. Syam, Qasem M. Al-Mdallal, and Hani Siyyam

Department of Mathematical Sciences, United Arab Emirates University, e-mail: m.syam@uaeu.ac.ae
Department of Mathematical Sciences, United Arab Emirates University, e-mail: q.almdallal@uaeu.ac.ae
Department of Mathematics, Taibah University, Almadinah Almunawwarah, Kingdom of Saudi Arabia

Abstract

Locations of eigenvalues of fourth-order Sturm-Liouville problems are investigated numerically by a combination of the Tau and Lanczos methods. Numerical and theoretical results indicate that the present method is efficient and accurate. Some complexity results are also studied.

Keywords: Fourth-Order Sturm-Liouville Problem, Tau Method, Lanczos Method.

1 Introduction

In the present paper we investigate the solution of fourth-order Sturm-Liouville problems of the form

    (p(x)y″(x))″ = (s(x)y′(x))′ + (λw(x) − q(x)) y(x),  −1 ≤ x ≤ 1,  (1)

subject to

    α₁ y(−1) + β₁ y(1) = 0,
    α₂ y′(−1) + β₂ y′(1) = 0,
    α₃ y″(−1) + β₃ y″(1) = 0,  (2)
    α₄ y‴(−1) + β₄ y‴(1) = 0,

where p, s, w and q are piecewise continuous functions with p(x), w(x) > 0. Here α_i and β_i (for i = 1, 2, 3, 4) are constants.

Fourth-order Sturm-Liouville problems play important roles in both theory and applications. This is due to their use in describing a large number of physical and engineering phenomena, such as the deformation of an elastic beam under a variety of boundary conditions; for more details see [1], [2], [6], [7], [9], and [11].

In this paper we present a method for locating the eigenvalues of problem (1)-(2). The method applies the Tau method to discretize equation (1) into a system of the form M(λ)Y = 0, where M(λ) is a matrix function of λ, and then applies the Lanczos method to scan a given interval for a value of λ.

This paper is organized as follows. The problem statement and the numerical method of solution are presented in Section 2. Numerical results are presented in Section 3, followed by some concluding remarks.

2 Tau-Lanczos Method

In this section we present the numerical method used to solve problem (1)-(2). The approach applies the Tau method to equation (1) to obtain a system of the form M(λ)Y = 0, where M(λ) is a matrix function of λ. Since problem (1) has nonzero eigenvectors, M(λ) must be a singular matrix whenever λ is an eigenvalue of equation (1). To find such a λ in a specific interval, we apply the Lanczos method. The full discussion of the method of solution for problem (1) is presented in the following subsections.

2.1 Tau Method

For the sake of simplicity, we discuss the more general form of equation (1) given by

    P(x)y⁗(x) + A(x)y‴(x) + B(x)y″(x) + C(x)y′(x) + Q(x)y(x) − λW(x)y(x) = 0.  (3)
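Expanding the derivatives in (1) gives the coefficients of form (3) directly: P = p, A = 2p′, B = p″ − s, C = −s′, Q = q, W = w. A minimal symbolic sketch of this expansion, assuming SymPy is available; the coefficient choices p, s, q, w below are illustrative only, not taken from the paper:

```python
import sympy as sp

x, lam = sp.symbols('x lambda')
y = sp.Function('y')
p = 1 + x**2            # illustrative smooth coefficients
s = sp.cos(x)
q = x
w = 2 + sp.sin(x)

# left- and right-hand sides of equation (1), moved to one side
lhs = sp.diff(p * sp.diff(y(x), x, 2), x, 2)
rhs = sp.diff(s * sp.diff(y(x), x), x) + (lam * w - q) * y(x)

# expanded form (3): P y'''' + A y''' + B y'' + C y' + Q y - lambda W y
P, A = p, 2 * sp.diff(p, x)
B = sp.diff(p, x, 2) - s
C = -sp.diff(s, x)
Q, W = q, w
form3 = (P * sp.diff(y(x), x, 4) + A * sp.diff(y(x), x, 3)
         + B * sp.diff(y(x), x, 2) + C * sp.diff(y(x), x)
         + Q * y(x) - lam * W * y(x))

print(sp.simplify((lhs - rhs) - form3))   # -> 0
```

The printed 0 confirms that (1) and (3) agree term by term for these sample coefficients.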

First, we assume that the exact solution of (1) is approximated in terms of the Chebyshev polynomials as

    y(x) ≈ Y_N(x) = Σ_{k=0}^{N+4} y_k T_k(x),  (4)

where the y_k (k = 0 : N+4) are unknown constants to be determined. Similarly, the functions P, A, B, C, Q and W can be approximated as

    P(x) ≈ P_N(x) = Σ_{k=0}^{N} p_k T_k(x),
    A(x) ≈ A_N(x) = Σ_{k=0}^{N} a_k T_k(x),
    B(x) ≈ B_N(x) = Σ_{k=0}^{N} b_k T_k(x),  (5)
    C(x) ≈ C_N(x) = Σ_{k=0}^{N} c_k T_k(x),
    Q(x) ≈ Q_N(x) = Σ_{k=0}^{N} q_k T_k(x),
    W(x) ≈ W_N(x) = Σ_{k=0}^{N} w_k T_k(x),

where p_k, a_k, b_k, c_k, q_k and w_k are known constants. Direct substitution of (4) and (5) into (3) yields

    R_N(x) = P_N Y_N⁗ + A_N Y_N‴ + B_N Y_N″ + C_N Y_N′ + Q_N Y_N − λ W_N Y_N,  (6)

where R_N represents the residual function. In particular, the derivatives Y_N′, Y_N″, Y_N‴ and Y_N⁗ can be written in the forms

    Y_N′(x) = Σ_{k=0}^{N+3} y_k^(1) T_k(x),
    Y_N″(x) = Σ_{k=0}^{N+2} y_k^(2) T_k(x),  (7)
    Y_N‴(x) = Σ_{k=0}^{N+1} y_k^(3) T_k(x),
    Y_N⁗(x) = Σ_{k=0}^{N} y_k^(4) T_k(x),
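The coefficient expansions (5) can be generated numerically. A small sketch using NumPy's Chebyshev module (an assumption of this illustration; note that `chebinterpolate` interpolates at Chebyshev points rather than evaluating the orthogonality integrals exactly, which agrees with the exact coefficients whenever the function is a polynomial of the chosen degree):

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def cheb_coeffs(f, N):
    """Coefficients f_k in f(x) ~ sum_{k=0}^{N} f_k T_k(x) on [-1, 1],
    obtained by interpolation at Chebyshev points."""
    return C.chebinterpolate(f, N)

# x^2 = (T_0 + T_2)/2, so the coefficients come out [1/2, 0, 1/2]
print(cheb_coeffs(lambda x: x**2, 2))
```

The same routine would supply the p_k, a_k, b_k, c_k, q_k and w_k of (5) for given coefficient functions.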

where y_k^(1), y_k^(2), y_k^(3) and y_k^(4) are linear combinations of the unknown coefficients y_i (i = 0, …, N+4) through the recurrence relations

    c_{n−1} y_{n−1}^(q) − y_{n+1}^(q) = 2n y_n^(q−1),  n ≥ 1,  q = 1, 2, 3, 4.  (8)

Here c₀ = 2, c_n = 1 for n ≥ 1, and y_n^(0) = y_n. One can see that the approximation of y given in (4) converges uniformly if y ∈ C⁴[−1, 1] and y^(5) is piecewise continuous on [−1, 1]. Define ζ_k, η_k, ν_k, ω_k, τ_k and σ_k so that

    P_N(x)Y_N⁗(x) ≈ Σ_{k=0}^{N} ζ_k T_k(x),
    A_N(x)Y_N‴(x) ≈ Σ_{k=0}^{N} η_k T_k(x),
    B_N(x)Y_N″(x) ≈ Σ_{k=0}^{N} ν_k T_k(x),  (9)
    C_N(x)Y_N′(x) ≈ Σ_{k=0}^{N} ω_k T_k(x),
    Q_N(x)Y_N(x) ≈ Σ_{k=0}^{N} τ_k T_k(x),
    W_N(x)Y_N(x) ≈ Σ_{k=0}^{N} σ_k T_k(x).

Consequently, equation (6) can be written in the form

    R_N(x) ≈ Σ_{k=0}^{N} (ζ_k + η_k + ν_k + ω_k + τ_k − λσ_k) T_k(x).  (10)

To determine the coefficients of T_k(x) on the right-hand side of (10), we use the fact that R_N(x) is orthogonal to T_k(x) for k = 0 : N with respect to the weight function w(x) = 1/√(1 − x²), i.e.

    ⟨R_N(x), T_k(x)⟩ = ζ_k + η_k + ν_k + ω_k + τ_k − λσ_k = 0,  for k = 0 : N,  (11)

where

    ⟨R_N(x), T_k(x)⟩ = ∫_{−1}^{1} R_N(x) T_k(x) / √(1 − x²) dx.

It is more convenient to rewrite equations (11) in the vector form

    ζ + η + ν + ω + τ − λσ = 0,  (12)
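Driving the recurrence (8) downward in n yields the coefficients of each derivative from those of the level below. A sketch of one differentiation step (NumPy assumed), which can be cross-checked against `numpy.polynomial.chebyshev.chebder`:

```python
import numpy as np

def cheb_deriv_coeffs(a):
    """Given coefficients a_k of y = sum a_k T_k, return the coefficients
    b_k of y' = sum b_k T_k via recurrence (8) written as
    c_k b_k = b_{k+2} + 2(k+1) a_{k+1}, with c_0 = 2 and c_k = 1 otherwise."""
    N = len(a) - 1                    # degree of y
    if N == 0:
        return np.zeros(1)            # derivative of a constant
    b = np.zeros(N)                   # y' has degree N - 1
    for k in range(N - 1, -1, -1):    # downward recurrence
        b_k2 = b[k + 2] if k + 2 <= N - 1 else 0.0
        b[k] = (b_k2 + 2 * (k + 1) * a[k + 1]) / (2.0 if k == 0 else 1.0)
    return b

# T_3' = 12x^2 - 3 = 3 T_0 + 6 T_2
print(cheb_deriv_coeffs(np.array([0.0, 0.0, 0.0, 1.0])))   # -> [3. 0. 6.]
```

Applying the routine q times gives the y_k^(q) of (7); assembling its action as a matrix gives the A_q used below.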

where ζ = (ζ₀, ζ₁, …, ζ_N)ᵀ, η = (η₀, η₁, …, η_N)ᵀ, ν = (ν₀, ν₁, …, ν_N)ᵀ, ω = (ω₀, ω₁, …, ω_N)ᵀ, τ = (τ₀, τ₁, …, τ_N)ᵀ and σ = (σ₀, σ₁, …, σ_N)ᵀ. If we set

    Y = (y₀, y₁, …, y_{N+4})ᵀ,
    Y^(1) = (y₀^(1), y₁^(1), …, y_{N+3}^(1))ᵀ,
    Y^(2) = (y₀^(2), y₁^(2), …, y_{N+2}^(2))ᵀ,
    Y^(3) = (y₀^(3), y₁^(3), …, y_{N+1}^(3))ᵀ,
    Y^(4) = (y₀^(4), y₁^(4), …, y_N^(4))ᵀ,

then equation (9) ensures the existence of six matrices G₀₁, G₀₂, G₁, G₂, G₃ and G₄ of dimensions (N+1)×(N+5), (N+1)×(N+5), (N+1)×(N+4), (N+1)×(N+3), (N+1)×(N+2) and (N+1)×(N+1), respectively, such that

    σ = G₀₁Y,  τ = G₀₂Y,  ω = G₁Y^(1),  (13)
    ν = G₂Y^(2),  η = G₃Y^(3),  ζ = G₄Y^(4).

Moreover, equation (8) ensures the existence of another four matrices A₁, A₂, A₃ and A₄ of dimensions (N+4)×(N+5), (N+3)×(N+5), (N+2)×(N+5) and (N+1)×(N+5), respectively, such that

    Y^(1) = A₁Y,  Y^(2) = A₂Y,  Y^(3) = A₃Y,  and  Y^(4) = A₄Y.  (14)

Consequently, inserting (13) and (14) into (12) leads to the eigenvalue problem

    ΛY = λG₀₁Y,  (15)

where Λ = G₄A₄ + G₃A₃ + G₂A₂ + G₁A₁ + G₀₂ is an (N+1)×(N+5) matrix. On the other hand, the boundary conditions (2) can be expressed as

    α₁ y(−1) + β₁ y(1) = 0  ⟹  Σ_{k=0}^{N+4} ε_{0k} y_k = 0,  (16)
    α₂ y′(−1) + β₂ y′(1) = 0  ⟹  Σ_{k=0}^{N+4} ε_{1k} y_k = 0,  (17)
    α₃ y″(−1) + β₃ y″(1) = 0  ⟹  Σ_{k=0}^{N+4} ε_{2k} y_k = 0,  (18)

    α₄ y‴(−1) + β₄ y‴(1) = 0  ⟹  Σ_{k=0}^{N+4} ε_{3k} y_k = 0,  (19)

where

    ε_{0k} = (−1)^k α₁ + β₁,
    ε_{1k} = (−1)^{k+1} k² α₂ + k² β₂,
    ε_{2k} = (−1)^k [k²(k² − 1)/3] α₃ + [k²(k² − 1)/3] β₃,
    ε_{3k} = (−1)^{k+1} [k²(k² − 1)(k² − 4)/15] α₄ + [k²(k² − 1)(k² − 4)/15] β₄.

Note that the above compact forms of ε_{ik}, for i = 0, 1, 2, 3 and k = 0 : N+4, follow from the following formula for the p-th derivative of T_n (for n ≥ 0) at ±1:

    T_n^(p)(±1) = (±1)^{n+p} ∏_{k=0}^{p−1} (n² − k²)/(2k + 1).

Combining the results of (15)-(19) gives the algebraic eigenvalue problem

    M(λ)Y = 0,  (20)

where M(λ) is the (N+5)×(N+5) matrix obtained by appending the four boundary rows ε_i = (ε_{i0}, ε_{i1}, …, ε_{i,N+4}) (for i = 0, 1, 2, 3) to the block Λ − λG₀₁. Since the fourth-order Sturm-Liouville problem (1) has nonzero eigenvectors, system (20) must have nonzero solutions; therefore

    det(M(λ)) = 0  (21)

when λ is an eigenvalue of problem (1). Since M(λ) is a large sparse matrix, we apply the Lanczos method to locate the eigenvalues λ and to approximate them, as explained in the next subsection.

2.2 Lanczos Method

Problem (21) is an unstable problem; hence we replace it by a stable problem using the following theorem (see [5]).

Theorem 1 The following statements are equivalent:
(C1) det(M(λ)) = 0.

(C2) min { ‖M(λ)u‖₂ : u ∈ ℝ^s and ‖u‖₂ = 1 } = 0.
(C3) min { ‖M(λ)u‖₂ / ‖u‖₂ : u ∈ ℝ^s and u ≠ 0 } = 0.
(C4) The smallest eigenvalue of M(λ)ᵀM(λ) is zero,

where ᵀ denotes the transpose of a matrix and ‖·‖₂ denotes the Euclidean norm.

Therefore, in our study we replace problem (21) with problem (C2). For convenience, in what follows we set M = M(λ)ᵀM(λ). Since M is a large, square, symmetric matrix, the most suitable method to use in this case is the Lanczos method, which is described below. Consider the Rayleigh quotient

    R(u) = uᵀMu / (uᵀu),  u ≠ 0;  (22)

the minimum of R(u) is the smallest eigenvalue of M. Moreover, if e₁, …, e_l ∈ ℝ^s are the Lanczos orthonormal vectors and E_l = [e₁ e₂ … e_l], then

    E_lᵀ M E_l = Π_l,  (23)

where Π_l is the symmetric tridiagonal matrix with diagonal entries κ₁, …, κ_l and off-diagonal entries ϱ₁, …, ϱ_{l−1}. One can verify that

    m_l = min_{u ≠ 0} R(E_l u) ≥ λ_min.

Hence m₁ ≥ m₂ ≥ … ≥ m_s = λ_min; for more details see [5]. In general, we obtain a good approximation to λ_min from m_l for some l ≪ s. Since Π_l is tridiagonal, it can be stored as an l×3 array. The following algorithm computes κ₁, κ₂, …, κ_l and ϱ₁, …, ϱ_{l−1}.

Algorithm 2
Input: Tol.
Output: κ₁, κ₂, …, κ_l and ϱ₁, …, ϱ_{l−1}.
Step 1: Set i = 0; Υ = 0; ϱ₀ = 1; and let ψ be an arbitrary unit vector.
Step 2: While ϱ_i ≥ Tol and i < l, do Steps 3-7.
Step 3: If i ≠ 0, do Steps 4-5.
Step 4: For k = 1 : s, do Step 5.

Step 5: Set π = ψ_k; ψ_k = Υ_k / ϱ_i; Υ_k = −ϱ_i π.
Step 6: Set Υ = Υ + Mψ.
Step 7: Set i = i + 1; κ_i = ψᵀΥ; Υ = Υ − κ_i ψ; ϱ_i = ‖Υ‖₂.

The following remarks on Algorithm 2 should be noted:

1. Each iteration requires only one evaluation of Mψ. This means that computing Π_l requires only l evaluations of Mψ.
2. We compute Mψ in two stages: (a) compute u = M(λ)ψ; (b) compute M(λ)ᵀu.
3. If M(λ) has an average of about h nonzeros per row, then approximately (3h + 8)s flops are involved in a single Lanczos step.
4. The Lanczos vectors are generated in the s-vector ψ.
5. Unfortunately, during the calculations in Algorithm 2 the Lanczos vectors lose orthogonality because of cancellation. To avoid this loss of orthogonality, one can use either complete reorthogonalization or selective orthogonalization. The first method is expensive and complicated to use; therefore, we use selective orthogonalization in this paper.

Suppose the symmetric QR method is applied to Π_l; see [5]. Assume θ₁, θ₂, …, θ_l are the computed Ritz values and Ω_l is the nearly orthogonal matrix of eigenvectors, and let F_l = [f₁ f₂ … f_l] = E_l Ω_l. Then it can be shown that

    |e_{l+1}ᵀ f_i| ≈ eps ‖M‖₂ / |ϱ_l Ω_{li}|  and  ‖Mf_i − θ_i f_i‖₂ = |ϱ_l Ω_{li}| =: ϱ_{li},

where eps is the machine precision. The computed Ritz pair (θ, f) is called good if ‖Mf − θf‖₂ ≤ √eps ‖M‖₂.
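Algorithm 2 is the classical three-term Lanczos recurrence. A minimal dense-matrix sketch (NumPy assumed; the test matrix is illustrative, standing in for M(λ)ᵀM(λ)) builds Π_l and shows both the bound m_l ≥ λ_min and the loss of orthogonality that motivates remark 5:

```python
import numpy as np

def lanczos(M, l, reorth=False, seed=0):
    """Build the l x l tridiagonal Pi_l = E_l^T M E_l (diagonal kappa,
    off-diagonal rho).  reorth=True re-orthogonalizes each new Lanczos
    vector against all previous ones (complete reorthogonalization; the
    paper's selective variant uses only the 'good' Ritz vectors)."""
    s = M.shape[0]
    rng = np.random.default_rng(seed)
    q = rng.standard_normal(s); q /= np.linalg.norm(q)
    E = [q]
    q_prev, beta = np.zeros(s), 0.0
    kappa, rho = [], []
    for i in range(l):
        w = M @ q - beta * q_prev
        alpha = q @ w                    # kappa_i = psi^T Upsilon
        w -= alpha * q
        if reorth:                       # project out all previous vectors
            B = np.array(E).T
            w -= B @ (B.T @ w)
        beta = np.linalg.norm(w)         # rho_i = ||Upsilon||_2
        kappa.append(alpha); rho.append(beta)
        q_prev, q = q, w / beta
        if i < l - 1:
            E.append(q)
    T = np.diag(kappa) + np.diag(rho[:-1], 1) + np.diag(rho[:-1], -1)
    return T, np.array(E).T

M = np.diag(np.arange(1.0, 201.0))       # stand-in symmetric matrix, lambda_min = 1
T, E = lanczos(M, 60)
print(np.linalg.eigvalsh(T)[0])          # m_l, an upper bound on lambda_min
print(np.linalg.norm(np.eye(60) - E.T @ E))  # orthogonality loss, no reorth
```

By Cauchy interlacing, the m_l obtained from the leading principal submatrices of Π_l decrease monotonically toward λ_min, matching m₁ ≥ m₂ ≥ … ≥ λ_min above.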

We measure the loss of orthogonality of E_i by κ̄_i = ‖I_i − E_iᵀE_i‖, with κ̄₁ = ‖1 − e₁ᵀe₁‖ (we write κ̄ to avoid confusion with the diagonal entries κ_i of Π_l). It is easy to see that κ̄_i is an increasing function of i. The value κ̄_{i+1} can be estimated from κ̄_i via the following theorem.

Theorem 3 If κ̄_i ≤ µ, then κ̄_{i+1} ≤ µ + eps + (µ + eps)‖E_iᵀ e_{i+1}‖₂.

Fix a threshold κ̄. If κ̄_i ≤ κ̄, then e_{i+1} is regarded as orthogonal to all columns of E_i, and no reorthogonalization is performed. If κ̄_i > κ̄, then we orthogonalize e_{i+1} against each good Ritz vector. In selective orthogonalization, the symmetric QR method is applied to Π_l, which is small compared with M. We then apply Rayleigh quotient iteration to compute the smallest eigenvalue of Π_l using the following algorithm.

Algorithm 4
Input: X^(0) with ‖X^(0)‖₂ = 1.
Output: the smallest eigenvalue of the matrix Π_l.
Step 1: For k = 0, 1, …, do Steps 2-4.
Step 2: Compute µ_k = X^(k)ᵀ Π_l X^(k) / (X^(k)ᵀ X^(k)).
Step 3: Solve (Π_l − µ_k I_l) Z^(k+1) = X^(k) for Z^(k+1).
Step 4: Set X^(k+1) = Z^(k+1) / ‖Z^(k+1)‖₂.

For more details about selective orthogonalization, see [5], [8] and [12]. Finally, our technique can be summarized as follows.

1. Compute the matrix M(λ), as in Section 2.1.
2. Set M = M(λ)ᵀM(λ) and l = 1.
3. Use selective orthogonalization and the Lanczos method to compute the matrix Π_l.
4. Use Algorithm 4 to approximate m_l.
5. If m_l is a good approximation to λ_min, stop; otherwise set l = l + 1 and repeat Steps 3-5.
6. Stop.
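Algorithm 4 can be sketched in a few lines (NumPy assumed; the tridiagonal matrix below is illustrative). Note that Rayleigh quotient iteration converges to the eigenpair nearest the start vector, so in practice X^(0) is seeded with an approximation of the smallest Ritz vector:

```python
import numpy as np

def rayleigh_quotient_iteration(T, x0, iters=20):
    """Algorithm 4: Rayleigh quotient iteration on the small matrix Pi_l.
    Converges (cubically, for symmetric T) to a nearby eigenpair."""
    x = x0 / np.linalg.norm(x0)
    mu = x @ T @ x                       # Rayleigh quotient mu_k
    for _ in range(iters):
        try:
            z = np.linalg.solve(T - mu * np.eye(len(x)), x)
        except np.linalg.LinAlgError:    # shift hit an eigenvalue exactly
            break
        x = z / np.linalg.norm(z)
        mu = x @ T @ x
    return mu, x

# a small symmetric tridiagonal Pi_l with illustrative entries
T = (np.diag([3.0, 1.0, 4.0])
     + np.diag([0.5, 0.5], 1) + np.diag([0.5, 0.5], -1))
mu, x = rayleigh_quotient_iteration(T, np.array([0.0, 1.0, 0.2]))
print(mu)                                # an eigenvalue of T
```

The near-singular solves in Step 3 are what make the iteration converge so quickly: the closer µ_k is to an eigenvalue, the more the solve amplifies the corresponding eigenvector component.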

3 Numerical Results

The present method was applied to many different examples, but here we present only the following two examples to illustrate its efficiency and accuracy.

Example 1. Consider the fourth-order eigenvalue problem

    y⁗(z) − µy(z) = 0,  z ∈ (0, 1),

subject to

    y(0) = y′(0) = y(1) = y′(1) = 0.

Using the transformation x = 2z − 1, one obtains the eigenvalue problem

    y⁗(x) − λy(x) = 0,  x ∈ (−1, 1),

subject to

    y(−1) = y′(−1) = y(1) = y′(1) = 0,

where λ = µ/16. We scan for the parameter µ in an interval starting at 237 with a fixed increment. Table (1) shows the minimal eigenvalues Λ and the number ν of evaluations of Mψ that were necessary to obtain Λ.

Example 2. Consider the fourth-order eigenvalue problem

    y⁗(z) = 0.02z²y″(z) + 0.04zy′(z) − (0.0001z⁴ − 0.02)y(z) + µy(z),  z ∈ (0, 5),

subject to

    y(0) = y′(0) = y(5) = y′(5) = 0.

Using the transformation x = (2z − 5)/5, one obtains the eigenvalue problem

    y⁗(x) = (625/16)[0.02(x+1)²y″(x) + 0.04(x+1)y′(x) − (0.0001((5x+5)/2)⁴ − 0.02)y(x)] + λy(x),  x ∈ (−1, 1),

subject to

    y(−1) = y′(−1) = y(1) = y′(1) = 0,

where λ = 625µ/16. The parameter µ is scanned over an interval with a fixed increment, and the analysis is entirely analogous to that of Example 1. Table (2) shows the minimal eigenvalues Λ and the number ν of evaluations of Mψ that were necessary to obtain Λ.
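The scan-an-interval strategy can be checked independently for Example 1. A sketch assuming the clamped-beam boundary conditions y(0) = y′(0) = y(1) = y′(1) = 0: for that case the eigenvalues are µ = β⁴ where cos(β)cosh(β) = 1, so a sign-change scan plus bisection on this characteristic function plays the role that the scan over det(M(λ)) plays in the method above:

```python
import math

def f(beta):
    """Characteristic function of y'''' = mu*y with clamped ends:
    nontrivial solutions require cos(beta)*cosh(beta) = 1, mu = beta^4."""
    return math.cos(beta) * math.cosh(beta) - 1.0

def scan_and_bisect(a, b, h=0.01, tol=1e-12):
    """Scan [a, b] with step h for a sign change of f, then bisect."""
    x = a
    while x < b:
        if f(x) == 0.0:
            return x
        if f(x) * f(x + h) < 0.0:
            lo, hi = x, x + h
            while hi - lo > tol:
                mid = 0.5 * (lo + hi)
                if f(lo) * f(mid) > 0.0:
                    lo = mid
                else:
                    hi = mid
            return 0.5 * (lo + hi)
        x += h
    return None

beta1 = scan_and_bisect(1.0, 6.0)   # start past the trivial root at beta = 0
print(beta1 ** 4)                   # mu_1, approximately 500.564
```

The first eigenvalue µ₁ ≈ 500.564 lies in the scanned region starting at 237, consistent with the interval used for Example 1.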

Table (1): Minimal eigenvalues Λ and the number ν of evaluations of Mψ for Example 1, for a range of values of µ.

Table (2): Minimal eigenvalues Λ and the number ν of evaluations of Mψ for Example 2, for a range of values of µ.

4 Open Problem

Future work will be the extension of the present study to higher-order Sturm-Liouville problems, such as the sixth-order problem

    (p(x)y‴(x))‴ = (s(x)y″(x))″ + (r(x)y′(x))′ + (λw(x) − q(x)) y(x),  (24)

subject to

    α_j y^(j)(−1) + β_j y^(j)(1) = 0,  j = 0, …, 5,  (25)

where p, s, r, w and q are piecewise continuous functions with p(x) > 0 and w(x) > 0 for all x ∈ (−1, 1). Here α_j and β_j (for j = 0, …, 5) are constants.

5 Conclusion

We discussed the determination of the location of eigenvalues of fourth-order Sturm-Liouville problems using a combination of the Tau and Lanczos methods. The numerical results for the examples demonstrate the efficiency and accuracy of this method. Moreover, the number of evaluations ν shows that the present approach is not expensive.

6 Acknowledgment

The authors would like to express their appreciation for the valuable comments of the reviewers. The authors also express their sincere appreciation to the United Arab Emirates University Research Affairs for the financial support of grant nos. /07 and /07.

References

[1] R.P. Agarwal, M.Y. Chow, Iterative methods for a fourth order boundary value problem, J. Comput. Appl. Math., Vol. 10, (1984).

[2] R.P. Agarwal, On the fourth-order boundary value problems arising in beam analysis, Differ. Integral Equ., Vol. 2, (1989).

[3] L. Greenberg, An oscillation method for fourth order self-adjoint two-point boundary value problems with nonlinear eigenvalues, SIAM J. Math. Anal., Vol. 22, (1991).

[4] L. Greenberg, M. Marletta, Oscillation theory and numerical solution of fourth order Sturm-Liouville problems, IMA J. Numer. Anal., Vol. 15, (1995).

[5] G.H. Golub, C.F. Van Loan, Matrix Computations, The Johns Hopkins University Press, Baltimore, MD, (1983).

[6] R.Y. Ma, H.Y. Wang, On the existence of positive solutions of fourth order ordinary differential equations, Appl. Anal., Vol. 59, (1995).

[7] D. O'Regan, Solvability of some fourth (and higher) order singular boundary value problems, J. Math. Anal. Appl., Vol. 161, (1991).

[8] Y. Saad, The Lanczos biorthogonalization algorithm and other oblique projection methods for solving large unsymmetric linear systems, SIAM J. Numer. Anal., Vol. 19, (1982).

[9] J. Schröder, Fourth-order two-point boundary value problems; estimates by two-sided bounds, Nonlinear Anal., Vol. 8, (1984).

[10] M. Syam, Finding all real zeros of polynomial systems using multi-resultant, Journal of Computational and Applied Mathematics, Vol. 167, (2004).

[11] M. Syam, H. Siyyam, An efficient technique for finding the eigenvalues of fourth-order Sturm-Liouville problems, Chaos, Solitons & Fractals, in press.

[12] B.L. Van der Waerden, Modern Algebra, Ungar, New York, (1953).


More information

Recurrence Relations and Fast Algorithms

Recurrence Relations and Fast Algorithms Recurrence Relations and Fast Algorithms Mark Tygert Research Report YALEU/DCS/RR-343 December 29, 2005 Abstract We construct fast algorithms for decomposing into and reconstructing from linear combinations

More information

Mathematical foundations - linear algebra

Mathematical foundations - linear algebra Mathematical foundations - linear algebra Andrea Passerini passerini@disi.unitn.it Machine Learning Vector space Definition (over reals) A set X is called a vector space over IR if addition and scalar

More information

A short course on: Preconditioned Krylov subspace methods. Yousef Saad University of Minnesota Dept. of Computer Science and Engineering

A short course on: Preconditioned Krylov subspace methods. Yousef Saad University of Minnesota Dept. of Computer Science and Engineering A short course on: Preconditioned Krylov subspace methods Yousef Saad University of Minnesota Dept. of Computer Science and Engineering Universite du Littoral, Jan 19-3, 25 Outline Part 1 Introd., discretization

More information

Eigenvalues and eigenvectors

Eigenvalues and eigenvectors Chapter 6 Eigenvalues and eigenvectors An eigenvalue of a square matrix represents the linear operator as a scaling of the associated eigenvector, and the action of certain matrices on general vectors

More information

Linear Algebra Methods for Data Mining

Linear Algebra Methods for Data Mining Linear Algebra Methods for Data Mining Saara Hyvönen, Saara.Hyvonen@cs.helsinki.fi Spring 2007 1. Basic Linear Algebra Linear Algebra Methods for Data Mining, Spring 2007, University of Helsinki Example

More information

Approximating the matrix exponential of an advection-diffusion operator using the incomplete orthogonalization method

Approximating the matrix exponential of an advection-diffusion operator using the incomplete orthogonalization method Approximating the matrix exponential of an advection-diffusion operator using the incomplete orthogonalization method Antti Koskela KTH Royal Institute of Technology, Lindstedtvägen 25, 10044 Stockholm,

More information

PROJECTED GMRES AND ITS VARIANTS

PROJECTED GMRES AND ITS VARIANTS PROJECTED GMRES AND ITS VARIANTS Reinaldo Astudillo Brígida Molina rastudillo@kuaimare.ciens.ucv.ve bmolina@kuaimare.ciens.ucv.ve Centro de Cálculo Científico y Tecnológico (CCCT), Facultad de Ciencias,

More information

APPLIED NUMERICAL LINEAR ALGEBRA

APPLIED NUMERICAL LINEAR ALGEBRA APPLIED NUMERICAL LINEAR ALGEBRA James W. Demmel University of California Berkeley, California Society for Industrial and Applied Mathematics Philadelphia Contents Preface 1 Introduction 1 1.1 Basic Notation

More information

1 Number Systems and Errors 1

1 Number Systems and Errors 1 Contents 1 Number Systems and Errors 1 1.1 Introduction................................ 1 1.2 Number Representation and Base of Numbers............. 1 1.2.1 Normalized Floating-point Representation...........

More information

Krylov Subspace Methods to Calculate PageRank

Krylov Subspace Methods to Calculate PageRank Krylov Subspace Methods to Calculate PageRank B. Vadala-Roth REU Final Presentation August 1st, 2013 How does Google Rank Web Pages? The Web The Graph (A) Ranks of Web pages v = v 1... Dominant Eigenvector

More information

Introduction to Finite Element Method

Introduction to Finite Element Method Introduction to Finite Element Method Dr. Rakesh K Kapania Aerospace and Ocean Engineering Department Virginia Polytechnic Institute and State University, Blacksburg, VA AOE 524, Vehicle Structures Summer,

More information

Solving large scale eigenvalue problems

Solving large scale eigenvalue problems arge scale eigenvalue problems, Lecture 4, March 14, 2018 1/41 Lecture 4, March 14, 2018: The QR algorithm http://people.inf.ethz.ch/arbenz/ewp/ Peter Arbenz Computer Science Department, ETH Zürich E-mail:

More information

Accurate Inverses for Computing Eigenvalues of Extremely Ill-conditioned Matrices and Differential Operators

Accurate Inverses for Computing Eigenvalues of Extremely Ill-conditioned Matrices and Differential Operators arxiv:1512.05292v2 [math.na] 15 May 2017 Accurate Inverses for Computing Eigenvalues of Extremely Ill-conditioned Matrices and Differential Operators Qiang Ye Abstract This paper is concerned with computations

More information

EIGIFP: A MATLAB Program for Solving Large Symmetric Generalized Eigenvalue Problems

EIGIFP: A MATLAB Program for Solving Large Symmetric Generalized Eigenvalue Problems EIGIFP: A MATLAB Program for Solving Large Symmetric Generalized Eigenvalue Problems JAMES H. MONEY and QIANG YE UNIVERSITY OF KENTUCKY eigifp is a MATLAB program for computing a few extreme eigenvalues

More information

ECS231 Handout Subspace projection methods for Solving Large-Scale Eigenvalue Problems. Part I: Review of basic theory of eigenvalue problems

ECS231 Handout Subspace projection methods for Solving Large-Scale Eigenvalue Problems. Part I: Review of basic theory of eigenvalue problems ECS231 Handout Subspace projection methods for Solving Large-Scale Eigenvalue Problems Part I: Review of basic theory of eigenvalue problems 1. Let A C n n. (a) A scalar λ is an eigenvalue of an n n A

More information

Numerically computing zeros of the Evans Function

Numerically computing zeros of the Evans Function Numerically computing zeros of the Evans Function Rebekah Coggin Department of Mathematics and Statistics Calvin College Grand Rapids, MI 49546 Faculty Advisor: Todd Kapitula Department of Mathematics

More information

Monte Carlo simulation inspired by computational optimization. Colin Fox Al Parker, John Bardsley MCQMC Feb 2012, Sydney

Monte Carlo simulation inspired by computational optimization. Colin Fox Al Parker, John Bardsley MCQMC Feb 2012, Sydney Monte Carlo simulation inspired by computational optimization Colin Fox fox@physics.otago.ac.nz Al Parker, John Bardsley MCQMC Feb 2012, Sydney Sampling from π(x) Consider : x is high-dimensional (10 4

More information

Numerical Methods I Eigenvalue Problems

Numerical Methods I Eigenvalue Problems Numerical Methods I Eigenvalue Problems Aleksandar Donev Courant Institute, NYU 1 donev@courant.nyu.edu 1 MATH-GA 2011.003 / CSCI-GA 2945.003, Fall 2014 October 2nd, 2014 A. Donev (Courant Institute) Lecture

More information

1 PART1: Bratu problem

1 PART1: Bratu problem D9: Advanced Numerical Analysis: 3 Computer Assignment 3: Recursive Projection Method(RPM) Joakim Möller PART: Bratu problem. Background The Bratu problem in D is given by xx u + yy u + λe u =, u Γ = ()

More information

A Modified Method for Reconstructing Periodic Jacobi Matrices

A Modified Method for Reconstructing Periodic Jacobi Matrices mathematics of computation volume 42. number 165 january 1984, pages 143 15 A Modified Method for Reconstructing Periodic Jacobi Matrices By Daniel Boley and Gene H. Golub* Abstract. In this note, we discuss

More information

DUAL REGULARIZED TOTAL LEAST SQUARES SOLUTION FROM TWO-PARAMETER TRUST-REGION ALGORITHM. Geunseop Lee

DUAL REGULARIZED TOTAL LEAST SQUARES SOLUTION FROM TWO-PARAMETER TRUST-REGION ALGORITHM. Geunseop Lee J. Korean Math. Soc. 0 (0), No. 0, pp. 1 0 https://doi.org/10.4134/jkms.j160152 pissn: 0304-9914 / eissn: 2234-3008 DUAL REGULARIZED TOTAL LEAST SQUARES SOLUTION FROM TWO-PARAMETER TRUST-REGION ALGORITHM

More information

On the existence of an eigenvalue below the essential spectrum

On the existence of an eigenvalue below the essential spectrum On the existence of an eigenvalue below the essential spectrum B.M.Brown, D.K.R.M c Cormack Department of Computer Science, Cardiff University of Wales, Cardiff, PO Box 916, Cardiff CF2 3XF, U.K. A. Zettl

More information

Multiparameter eigenvalue problem as a structured eigenproblem

Multiparameter eigenvalue problem as a structured eigenproblem Multiparameter eigenvalue problem as a structured eigenproblem Bor Plestenjak Department of Mathematics University of Ljubljana This is joint work with M Hochstenbach Będlewo, 2932007 1/28 Overview Introduction

More information

Preconditioned inverse iteration and shift-invert Arnoldi method

Preconditioned inverse iteration and shift-invert Arnoldi method Preconditioned inverse iteration and shift-invert Arnoldi method Melina Freitag Department of Mathematical Sciences University of Bath CSC Seminar Max-Planck-Institute for Dynamics of Complex Technical

More information

Linear Algebra and its Applications

Linear Algebra and its Applications Linear Algebra and its Applications 435 (2011) 480 493 Contents lists available at ScienceDirect Linear Algebra and its Applications journal homepage: www.elsevier.com/locate/laa Bounding the spectrum

More information

Iterative Methods for Sparse Linear Systems

Iterative Methods for Sparse Linear Systems Iterative Methods for Sparse Linear Systems Luca Bergamaschi e-mail: berga@dmsa.unipd.it - http://www.dmsa.unipd.it/ berga Department of Mathematical Methods and Models for Scientific Applications University

More information

Tutorials in Optimization. Richard Socher

Tutorials in Optimization. Richard Socher Tutorials in Optimization Richard Socher July 20, 2008 CONTENTS 1 Contents 1 Linear Algebra: Bilinear Form - A Simple Optimization Problem 2 1.1 Definitions........................................ 2 1.2

More information

Numerical Methods in Matrix Computations

Numerical Methods in Matrix Computations Ake Bjorck Numerical Methods in Matrix Computations Springer Contents 1 Direct Methods for Linear Systems 1 1.1 Elements of Matrix Theory 1 1.1.1 Matrix Algebra 2 1.1.2 Vector Spaces 6 1.1.3 Submatrices

More information

Chapter 3 Transformations

Chapter 3 Transformations Chapter 3 Transformations An Introduction to Optimization Spring, 2014 Wei-Ta Chu 1 Linear Transformations A function is called a linear transformation if 1. for every and 2. for every If we fix the bases

More information

Total least squares. Gérard MEURANT. October, 2008

Total least squares. Gérard MEURANT. October, 2008 Total least squares Gérard MEURANT October, 2008 1 Introduction to total least squares 2 Approximation of the TLS secular equation 3 Numerical experiments Introduction to total least squares In least squares

More information

Numerical Methods I Non-Square and Sparse Linear Systems

Numerical Methods I Non-Square and Sparse Linear Systems Numerical Methods I Non-Square and Sparse Linear Systems Aleksandar Donev Courant Institute, NYU 1 donev@courant.nyu.edu 1 MATH-GA 2011.003 / CSCI-GA 2945.003, Fall 2014 September 25th, 2014 A. Donev (Courant

More information

On the Superlinear Convergence of MINRES. Valeria Simoncini and Daniel B. Szyld. Report January 2012

On the Superlinear Convergence of MINRES. Valeria Simoncini and Daniel B. Szyld. Report January 2012 On the Superlinear Convergence of MINRES Valeria Simoncini and Daniel B. Szyld Report 12-01-11 January 2012 This report is available in the World Wide Web at http://www.math.temple.edu/~szyld 0 Chapter

More information

Solving Constrained Rayleigh Quotient Optimization Problem by Projected QEP Method

Solving Constrained Rayleigh Quotient Optimization Problem by Projected QEP Method Solving Constrained Rayleigh Quotient Optimization Problem by Projected QEP Method Yunshen Zhou Advisor: Prof. Zhaojun Bai University of California, Davis yszhou@math.ucdavis.edu June 15, 2017 Yunshen

More information

Multiple positive solutions for a nonlinear three-point integral boundary-value problem

Multiple positive solutions for a nonlinear three-point integral boundary-value problem Int. J. Open Problems Compt. Math., Vol. 8, No. 1, March 215 ISSN 1998-6262; Copyright c ICSRS Publication, 215 www.i-csrs.org Multiple positive solutions for a nonlinear three-point integral boundary-value

More information

Preliminary Examination, Numerical Analysis, August 2016

Preliminary Examination, Numerical Analysis, August 2016 Preliminary Examination, Numerical Analysis, August 2016 Instructions: This exam is closed books and notes. The time allowed is three hours and you need to work on any three out of questions 1-4 and any

More information

Series Solutions Near a Regular Singular Point

Series Solutions Near a Regular Singular Point Series Solutions Near a Regular Singular Point MATH 365 Ordinary Differential Equations J. Robert Buchanan Department of Mathematics Fall 2018 Background We will find a power series solution to the equation:

More information

Summary of Iterative Methods for Non-symmetric Linear Equations That Are Related to the Conjugate Gradient (CG) Method

Summary of Iterative Methods for Non-symmetric Linear Equations That Are Related to the Conjugate Gradient (CG) Method Summary of Iterative Methods for Non-symmetric Linear Equations That Are Related to the Conjugate Gradient (CG) Method Leslie Foster 11-5-2012 We will discuss the FOM (full orthogonalization method), CG,

More information

The inverse of a tridiagonal matrix

The inverse of a tridiagonal matrix Linear Algebra and its Applications 325 (2001) 109 139 www.elsevier.com/locate/laa The inverse of a tridiagonal matrix Ranjan K. Mallik Department of Electrical Engineering, Indian Institute of Technology,

More information

Banach Algebras of Matrix Transformations Between Spaces of Strongly Bounded and Summable Sequences

Banach Algebras of Matrix Transformations Between Spaces of Strongly Bounded and Summable Sequences Advances in Dynamical Systems and Applications ISSN 0973-532, Volume 6, Number, pp. 9 09 20 http://campus.mst.edu/adsa Banach Algebras of Matrix Transformations Between Spaces of Strongly Bounded and Summable

More information

Notes on Linear Algebra and Matrix Theory

Notes on Linear Algebra and Matrix Theory Massimo Franceschet featuring Enrico Bozzo Scalar product The scalar product (a.k.a. dot product or inner product) of two real vectors x = (x 1,..., x n ) and y = (y 1,..., y n ) is not a vector but a

More information

Recycling Bi-Lanczos Algorithms: BiCG, CGS, and BiCGSTAB

Recycling Bi-Lanczos Algorithms: BiCG, CGS, and BiCGSTAB Recycling Bi-Lanczos Algorithms: BiCG, CGS, and BiCGSTAB Kapil Ahuja Thesis submitted to the Faculty of the Virginia Polytechnic Institute and State University in partial fulfillment of the requirements

More information

Eigenvalue Problems. Eigenvalue problems occur in many areas of science and engineering, such as structural analysis

Eigenvalue Problems. Eigenvalue problems occur in many areas of science and engineering, such as structural analysis Eigenvalue Problems Eigenvalue problems occur in many areas of science and engineering, such as structural analysis Eigenvalues also important in analyzing numerical methods Theory and algorithms apply

More information

Scientific Computing: An Introductory Survey

Scientific Computing: An Introductory Survey Scientific Computing: An Introductory Survey Chapter 4 Eigenvalue Problems Prof. Michael T. Heath Department of Computer Science University of Illinois at Urbana-Champaign Copyright c 2002. Reproduction

More information

Research Article Some Generalizations and Modifications of Iterative Methods for Solving Large Sparse Symmetric Indefinite Linear Systems

Research Article Some Generalizations and Modifications of Iterative Methods for Solving Large Sparse Symmetric Indefinite Linear Systems Abstract and Applied Analysis Article ID 237808 pages http://dxdoiorg/055/204/237808 Research Article Some Generalizations and Modifications of Iterative Methods for Solving Large Sparse Symmetric Indefinite

More information

Index. for generalized eigenvalue problem, butterfly form, 211

Index. for generalized eigenvalue problem, butterfly form, 211 Index ad hoc shifts, 165 aggressive early deflation, 205 207 algebraic multiplicity, 35 algebraic Riccati equation, 100 Arnoldi process, 372 block, 418 Hamiltonian skew symmetric, 420 implicitly restarted,

More information

Homotopy Perturbation Method for Computing Eigenelements of Sturm-Liouville Two Point Boundary Value Problems

Homotopy Perturbation Method for Computing Eigenelements of Sturm-Liouville Two Point Boundary Value Problems Applied Mathematical Sciences, Vol 3, 2009, no 31, 1519-1524 Homotopy Perturbation Method for Computing Eigenelements of Sturm-Liouville Two Point Boundary Value Problems M A Jafari and A Aminataei Department

More information

Chem 3502/4502 Physical Chemistry II (Quantum Mechanics) 3 Credits Fall Semester 2006 Christopher J. Cramer. Lecture 5, January 27, 2006

Chem 3502/4502 Physical Chemistry II (Quantum Mechanics) 3 Credits Fall Semester 2006 Christopher J. Cramer. Lecture 5, January 27, 2006 Chem 3502/4502 Physical Chemistry II (Quantum Mechanics) 3 Credits Fall Semester 2006 Christopher J. Cramer Lecture 5, January 27, 2006 Solved Homework (Homework for grading is also due today) We are told

More information

Olga Boyko, Olga Martinyuk, and Vyacheslav Pivovarchik

Olga Boyko, Olga Martinyuk, and Vyacheslav Pivovarchik Opuscula Math. 36, no. 3 016), 301 314 http://dx.doi.org/10.7494/opmath.016.36.3.301 Opuscula Mathematica HIGHER ORDER NEVANLINNA FUNCTIONS AND THE INVERSE THREE SPECTRA PROBLEM Olga Boyo, Olga Martinyu,

More information

Lecture on: Numerical sparse linear algebra and interpolation spaces. June 3, 2014

Lecture on: Numerical sparse linear algebra and interpolation spaces. June 3, 2014 Lecture on: Numerical sparse linear algebra and interpolation spaces June 3, 2014 Finite dimensional Hilbert spaces and IR N 2 / 38 (, ) : H H IR scalar product and u H = (u, u) u H norm. Finite dimensional

More information

Eigenvalue Problems CHAPTER 1 : PRELIMINARIES

Eigenvalue Problems CHAPTER 1 : PRELIMINARIES Eigenvalue Problems CHAPTER 1 : PRELIMINARIES Heinrich Voss voss@tu-harburg.de Hamburg University of Technology Institute of Mathematics TUHH Heinrich Voss Preliminaries Eigenvalue problems 2012 1 / 14

More information