Derivative of a Determinant with Respect to an Eigenvalue in the LDU Decomposition of a Non-Symmetric Matrix

Applied Mathematics, 2013, 4, 464-468
http://dx.doi.org/10.4236/am.2013.43069 Published Online March 2013 (http://www.scirp.org/journal/am)

Mitsuhiro Kashiwagi
Department of Architecture, School of Industrial Engineering, Tokai University, Toroku, Kumamoto, Japan
Email: mkashi@ktmail.tokai-u.jp

Received October 2012; revised November 2012; accepted November 8, 2012

ABSTRACT

We demonstrate that, when computing the LDU decomposition (a typical example of a direct solution method), it is possible to obtain the derivative of a determinant with respect to an eigenvalue of a non-symmetric matrix. Our proposed method augments an LDU decomposition program with an additional routine to obtain a program for easily evaluating the derivative of a determinant with respect to an eigenvalue. The proposed method follows simply from the process of solving simultaneous linear equations and is particularly effective for band matrices, for which memory requirements are significantly reduced compared to those for dense matrices. We discuss the theory underlying our proposed method and present detailed algorithms for implementing it.

Keywords: Derivative of Determinant; Non-Symmetric Matrix; Eigenvalue; Band Matrix; LDU Decomposition

1. Introduction

In previous work, the author used the trace theorem and the inverse-matrix formula for the coefficient matrix appearing in the conjugate gradient method to propose a method for computing the derivative of a determinant with respect to an eigenvalue in an eigenvalue problem [1]. In particular, we applied this method to eigenvalue analysis and demonstrated its effectiveness in that setting [2-4]. However, the technique proposed in those studies was intended for use with iterative methods such as the conjugate gradient method and was not applicable to direct solution methods such as the lower-diagonal-upper (LDU) decomposition, a typical example. Here, we discuss the derivative of a determinant with respect to eigenvalues for problems involving the LDU decomposition, taking as a reference the equations associated with the conjugate gradient method [1]. Moreover, in the Newton-Raphson method the solution depends on the derivative of a determinant, so the result of our proposed computational algorithm may be used to determine the step size in that method. Indeed, the step size may be set automatically, allowing a reduction in the number of basic parameters, which has a significant impact on nonlinear analysis.

For both dense matrices and band matrices, which require significantly less memory than dense matrices, our method performs calculations using only the matrix elements within a certain band of the matrix factors arising from the LDU decomposition. This ensures that calculations on dense matrices proceed just as if they were band matrices. Indeed, computations on dense matrices using our method are expected to require only slightly more computation time than computations on band matrices performed without our proposed method. In what follows, we discuss the theory of our proposed method and present detailed algorithms for implementing it.

2. Derivative of a Determinant with Respect to an Eigenvalue Using the LDU Decomposition

The eigenvalue problem may be expressed as follows. If A is a non-symmetric matrix of dimension n, then the usual eigenvalue problem is

A x = \lambda x,   (1)

where λ and x denote respectively an eigenvalue and a corresponding eigenvector. In order for Equation (1) to have nontrivial solutions, the matrix A − λI must be singular, i.e.,

\det(A - \lambda I) = 0.   (2)

We introduce the notation f(λ) for the left-hand side of this equation:

f(\lambda) = \det(A - \lambda I).   (3)
Applying the trace theorem, we obtain

f'(\lambda) = -\mathrm{trace}\left[(A - \lambda I)^{-1}\right] f(\lambda), \quad \text{i.e.,} \quad \frac{f'(\lambda)}{f(\lambda)} = -\mathrm{trace}\left[(A - \lambda I)^{-1}\right],   (4)

where

A - \lambda I = \begin{pmatrix} a_{11}-\lambda & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22}-\lambda & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn}-\lambda \end{pmatrix}.   (5)
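As a quick numerical check of Equation (4), the following NumPy sketch (an illustration added here, not part of the paper's algorithms; the test matrix, the value of λ, and the step size h are arbitrary choices) compares −trace[(A − λI)^{-1}] f(λ) with a central finite difference of f(λ) = det(A − λI):

    import numpy as np

    np.random.seed(0)
    n = 6
    A = np.random.rand(n, n)        # small non-symmetric test matrix
    lam = 0.3                       # trial value of lambda
    I = np.eye(n)

    f = lambda s: np.linalg.det(A - s * I)

    # Equation (4): f'(lambda) = -trace[(A - lambda*I)^{-1}] * f(lambda)
    df_trace = -np.trace(np.linalg.inv(A - lam * I)) * f(lam)

    # central finite difference as an independent reference
    h = 1e-6
    df_fd = (f(lam + h) - f(lam - h)) / (2 * h)

    print(df_trace, df_fd)          # the two values should agree closely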

The LDU decomposition factors the matrix A − λI in (5) into a product of three factors:

A - \lambda I = L D U,   (6)

where the L, D, and U factors have the forms

L = \begin{pmatrix} 1 \\ l_{21} & 1 \\ \vdots & & \ddots \\ l_{n1} & l_{n2} & \cdots & 1 \end{pmatrix},   (7)

D = \mathrm{diag}(d_{11}, d_{22}, \ldots, d_{nn}),   (8)

U = \begin{pmatrix} 1 & u_{12} & \cdots & u_{1n} \\ & 1 & \cdots & u_{2n} \\ & & \ddots & \vdots \\ & & & 1 \end{pmatrix}.   (9)

We write the inverse of the L factor in the form

L^{-1} = \begin{pmatrix} 1 \\ g_{21} & 1 \\ \vdots & & \ddots \\ g_{n1} & g_{n2} & \cdots & 1 \end{pmatrix}.   (10)

Expanding the relation L L^{-1} = I (where I is the identity matrix), we obtain the following expressions for the elements g_{ij} of L^{-1}:

g_{ij} = -l_{ij}, \quad i = j+1; \qquad g_{ij} = -l_{ij} - \sum_{k=j+1}^{i-1} l_{ik} g_{kj}, \quad i \ge j+2.   (11)

Next, we write the inverse of the U factor in the form

U^{-1} = \begin{pmatrix} 1 & h_{12} & \cdots & h_{1n} \\ & 1 & \cdots & h_{2n} \\ & & \ddots & \vdots \\ & & & 1 \end{pmatrix}.   (12)

Expanding the relation U^{-1} U = I, we again obtain expressions for the elements h_{ij} of U^{-1}:

h_{ij} = -u_{ij}, \quad j = i+1; \qquad h_{ij} = -u_{ij} - \sum_{k=i+1}^{j-1} h_{ik} u_{kj}, \quad j \ge i+2.   (13)

Equations (11) and (13) indicate that g_{ij} and h_{ij} must be computed for all matrix elements. However, for matrix elements outside the half-band we have l_{ij} = 0 and u_{ij} = 0, and thus the computation requires only the elements l_{ij} and u_{ij} within the half-band. Moreover, the narrower the half-band, the more the computation time is reduced.

From Equation (4), we see that evaluating the derivative of the determinant requires only the diagonal elements of the inverse of the matrix A − λI in (6). Using Equations (7)-(10) and (12) to evaluate and sum the diagonal elements of the product U^{-1} D^{-1} L^{-1}, we obtain the following representation of Equation (4):

\frac{f'(\lambda)}{f(\lambda)} = -\sum_{i=1}^{n} \left( \frac{1}{d_{ii}} + \sum_{k=i+1}^{n} \frac{g_{ki} h_{ik}}{d_{kk}} \right).   (14)

This equation demonstrates that the derivative of the determinant may be computed from the elements of the inverses of the L, D, and U matrices obtained from the LDU decomposition of the original matrix. As noted above, the calculation uses only the elements of the matrices within a certain band near the diagonal, so computations on dense matrices may be handled just as if they were band matrices. Thus we expect an insignificant increase in computation time as compared to that required for band matrices without our proposed method. By augmenting an LDU decomposition program with an additional routine (which simply appends two vectors), we easily obtain a program for evaluating f'(λ)/f(λ). Such computations are useful for numerical analysis using algorithms such as the Newton-Raphson method or the Durand-Kerner method. Moreover, the proposed method follows easily from the process of solving simultaneous linear equations.
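To illustrate Equation (14) concretely, the sketch below (an added illustration, not one of the paper's algorithms; it uses full matrix inverses purely for checking, whereas the routines in Section 3 avoid them) performs a Doolittle-style LDU factorization of A − λI without pivoting on a small diagonally dominant test matrix and compares the double sum in (14) with −trace[(A − λI)^{-1}]:

    import numpy as np

    np.random.seed(1)
    n = 6
    A = np.random.rand(n, n) + n * np.eye(n)   # diagonally dominant, so no pivoting is needed
    lam = 0.5
    B = A - lam * np.eye(n)                    # the matrix of Equation (5)

    # Doolittle-style LDU factorization (no pivoting): B = L @ diag(d) @ U
    L, U, d = np.eye(n), np.eye(n), np.zeros(n)
    for i in range(n):
        d[i] = B[i, i] - (L[i, :i] * d[:i]) @ U[:i, i]
        for j in range(i + 1, n):
            L[j, i] = (B[j, i] - (L[j, :i] * d[:i]) @ U[:i, i]) / d[i]
            U[i, j] = (B[i, j] - (L[i, :i] * d[:i]) @ U[:i, j]) / d[i]

    G = np.linalg.inv(L)    # elements g_ij of Equation (10)
    H = np.linalg.inv(U)    # elements h_ij of Equation (12)

    # Equation (14): f'/f = -sum_i ( 1/d_ii + sum_{k>i} g_ki h_ik / d_kk )
    fd = -sum(1.0 / d[i] + sum(G[k, i] * H[i, k] / d[k] for k in range(i + 1, n))
              for i in range(n))

    print(fd, -np.trace(np.linalg.inv(B)))     # the two values should agree to machine precision

Here G and H are obtained with full inverses only to keep the check short; the routines in Section 3 instead build exactly the needed entries of g and h, column by column and row by row, using the two work vectors.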

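The Newton-Raphson connection mentioned above can also be made concrete. With fd = f'(λ)/f(λ) available, a Newton step for the characteristic equation f(λ) = 0 is λ ← λ − f(λ)/f'(λ) = λ − 1/fd, so no determinant value is needed at all. The following is a minimal added sketch of such an iteration (the test matrix, built to have the known real eigenvalues 1,...,6, and the starting guess are arbitrary; the trace is evaluated with an explicit inverse for brevity, whereas in practice fd would come from the LDU routine of Section 3):

    import numpy as np

    np.random.seed(2)
    n = 6
    V = np.random.rand(n, n) + np.eye(n)                        # well-conditioned random basis
    A = V @ np.diag(np.arange(1.0, n + 1)) @ np.linalg.inv(V)   # non-symmetric, eigenvalues 1..6
    I = np.eye(n)

    lam = 4.3                                       # initial guess near the eigenvalue 4
    for _ in range(50):
        fd = -np.trace(np.linalg.inv(A - lam * I))  # f'(lam)/f(lam), cf. Equations (4) and (14)
        step = 1.0 / fd                             # Newton correction, since f/f' = 1/fd
        lam -= step
        if abs(step) < 1e-12:
            break

    print(lam)    # converges to 4.0 (within roundoff)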
3. Algorithms for Implementing the Proposed Method

3.1. Algorithm for Dense Matrices

We first discuss an algorithm for dense matrices. The arrays and variables are as follows.

1) LDU decomposition and calculation of the derivative of a determinant with respect to an eigenvalue

(1) Input data
A: given coefficient matrix, 2-dimensional array of the form A(n,n)
b: work vector, n-dimensional array of the form b(n)
c: work vector, n-dimensional array of the form c(n)
n: given order of matrix A and vector b
eps: parameter to check singularity of the matrix

(2) Output data
A: L matrix, D matrix and U matrix, 2-dimensional array of the form A(n,n)
fd: derivative of determinant (the ratio f'(λ)/f(λ) of Equation (14))
ierr: error code
  = 0, for normal execution
  = 1, for singularity

(3) LDU decomposition

do i=1,n
  <d(i,i)>
  do k=1,i-1
    A(i,i)=A(i,i)-A(i,k)*A(k,k)*A(k,i)
  if (abs(A(i,i))<eps) then
    ierr=1
    return
  end if
  <l(j,i) and u(i,j)>
  do j=i+1,n
    do k=1,i-1
      A(j,i)=A(j,i)-A(j,k)*A(k,k)*A(k,i)
      A(i,j)=A(i,j)-A(i,k)*A(k,k)*A(k,j)
    A(j,i)=A(j,i)/A(i,i)
    A(i,j)=A(i,j)/A(i,i)
ierr=0

(4) Derivative of a determinant with respect to an eigenvalue

fd=0
do i=1,n
  <(i,i)>
  fd=fd-1/A(i,i)
  <(i,j)>
  do j=i+1,n
    b(j)=-A(j,i)
    c(j)=-A(i,j)
    do k=1,j-i-1
      b(j)=b(j)-A(j,i+k)*b(i+k)
      c(j)=c(j)-A(i+k,j)*c(i+k)
    fd=fd-b(j)*c(j)/A(j,j)

2) Calculation of the solution

(1) Input data
A: L matrix, D matrix and U matrix, 2-dimensional array of the form A(n,n)
b: given right-hand side vector, n-dimensional array of the form b(n)
n: given order of matrix A and vector b

(2) Output data
b: work and solution vector, n-dimensional array

(3) Forward substitution

do i=1,n
  do j=i+1,n
    b(j)=b(j)-A(j,i)*b(i)

(4) Backward substitution

do i=1,n
  b(i)=b(i)/A(i,i)
do i=1,n
  ii=n-i+1
  do j=1,ii-1
    b(j)=b(j)-A(j,ii)*b(ii)
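For readers who prefer a higher-level language, here is a direct transcription of the dense-matrix routine above into Python/NumPy (an added sketch, not the author's code; the function name is ours, indices are shifted to 0-based, and the singularity check is folded into an exception). It overwrites A with the L, D, and U factors exactly as the pseudocode does and accumulates fd = f'(λ)/f(λ):

    import numpy as np

    def ldu_derivative_dense(A, eps=1e-30):
        """In-place LDU factorization of A (= original matrix minus lambda*I) and
        accumulation of fd = f'(lambda)/f(lambda) as in Equations (11)-(14)."""
        n = A.shape[0]
        b = np.zeros(n)
        c = np.zeros(n)
        # (3) LDU decomposition, stored back into A
        for i in range(n):
            for k in range(i):
                A[i, i] -= A[i, k] * A[k, k] * A[k, i]
            if abs(A[i, i]) < eps:
                raise ValueError("matrix is numerically singular")
            for j in range(i + 1, n):
                for k in range(i):
                    A[j, i] -= A[j, k] * A[k, k] * A[k, i]
                    A[i, j] -= A[i, k] * A[k, k] * A[k, j]
                A[j, i] /= A[i, i]
                A[i, j] /= A[i, i]
        # (4) derivative of the determinant with respect to the eigenvalue
        fd = 0.0
        for i in range(n):
            fd -= 1.0 / A[i, i]
            for j in range(i + 1, n):
                b[j] = -A[j, i]              # g_{ji}, column i of L^{-1}
                c[j] = -A[i, j]              # h_{ij}, row i of U^{-1}
                for k in range(1, j - i):
                    b[j] -= A[j, i + k] * b[i + k]
                    c[j] -= A[i + k, j] * c[i + k]
                fd -= b[j] * c[j] / A[j, j]
        return fd

    # usage sketch: fd for A - lambda*I, checked against the explicit trace
    np.random.seed(3)
    n, lam = 6, 0.4
    A0 = np.random.rand(n, n) + n * np.eye(n)
    B = A0 - lam * np.eye(n)
    print(ldu_derivative_dense(B.copy()), -np.trace(np.linalg.inv(B)))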

3.2. Algorithm for Band Matrices

We next discuss an algorithm for band matrices. Figure 1 depicts a schematic representation of a band matrix. Here nl denotes the left half-bandwidth and nu denotes the right half-bandwidth. These numbers do not include the diagonal element, so the total bandwidth is nl + 1 + nu.

Figure 1. Band matrix.

1) LDU decomposition and calculation of the derivative of a determinant with respect to an eigenvalue

(1) Input data
A: given coefficient band matrix, 2-dimensional array of the form A(n,nl+1+nu)
b: work vector, n-dimensional array of the form b(n)
c: work vector, n-dimensional array of the form c(n)
n: given order of matrix A
nl: given left half-band width of matrix A
nu: given right half-band width of matrix A
eps: parameter to check singularity of the matrix

(2) Output data
A: L matrix, D matrix and U matrix, 2-dimensional array of the form A(n,nl+1+nu)
fd: derivative of determinant (the ratio f'(λ)/f(λ) of Equation (14))
ierr: error code
  = 0, for normal execution
  = 1, for singularity

(3) LDU decomposition

do i=1,n
  <d(i,i)>
  do j=max(1,i-min(nl,nu)),i-1
    A(i,nl+1)=A(i,nl+1)-A(i,nl+1-(i-j))*A(j,nl+1)*A(j,nl+1+(i-j))
  if (abs(A(i,nl+1))<eps) then
    ierr=1
    return
  end if
  <l(j,i)>
  do j=i+1,min(i+nl,n)
    sl=A(j,nl+1-(j-i))
    do k=max(1,j-nl,i-nu),i-1
      sl=sl-A(j,nl+1-(j-k))*A(k,nl+1)*A(k,nl+1+(i-k))
    A(j,nl+1-(j-i))=sl/A(i,nl+1)
  <u(i,j)>
  do j=i+1,min(i+nu,n)
    su=A(i,nl+1+(j-i))
    do k=max(1,j-nu,i-nl),i-1
      su=su-A(i,nl+1-(i-k))*A(k,nl+1)*A(k,nl+1+(j-k))
    A(i,nl+1+(j-i))=su/A(i,nl+1)
ierr=0

(4) Derivative of a determinant with respect to an eigenvalue

fd=0
do i=1,n
  <(i,i)>
  fd=fd-1/A(i,nl+1)
  <(i,j)>
  do j=i+1,min(i+nl,n)
    b(j)=-A(j,nl+1-(j-i))
    do k=1,j-i-1
      b(j)=b(j)-A(j,nl+1-(j-i)+k)*b(i+k)
  do j=i+nl+1,n
    b(j)=0.d0
    do k=1,nl
      b(j)=b(j)-A(j,k)*b(j-nl-1+k)
  do j=i+1,min(i+nu,n)
    c(j)=-A(i,nl+1+(j-i))
    do k=1,j-i-1
      c(j)=c(j)-A(i+k,nl+1+(j-i)-k)*c(i+k)
  do j=i+nu+1,n
    c(j)=0.d0
    do k=1,nu
      c(j)=c(j)-A(j-nu-1+k,nl+nu+2-k)*c(j-nu-1+k)
  do j=i+1,n
    fd=fd-b(j)*c(j)/A(j,nl+1)

2) Calculation of the solution

(1) Input data
A: given decomposed coefficient band matrix, 2-dimensional array of the form A(n,nl+1+nu)
b: given right-hand side vector, n-dimensional array of the form b(n)
n: given order of matrix A and vector b
nl: given left half-band width of matrix A
nu: given right half-band width of matrix A

(2) Output data
b: solution vector, n-dimensional array

(3) Forward substitution

do i=1,n
  do j=max(1,i-nl),i-1
    b(i)=b(i)-A(i,nl+1-(i-j))*b(j)

(4) Backward substitution

do i=1,n
  ii=n-i+1
  b(ii)=b(ii)/A(ii,nl+1)
  do j=ii+1,min(n,ii+nu)
    b(ii)=b(ii)-A(ii,nl+1+(j-ii))*b(j)
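The only delicate point in the band routines above is the storage convention: row i of the band array holds the dense entries in columns i-nl through i+nu, so dense column j maps to band column nl+1+(j-i). A small helper that packs a dense matrix into this layout may make the index arithmetic easier to follow (an added illustration; the function name is ours and 0-based indices are used, so the diagonal sits in band column nl rather than nl+1):

    import numpy as np

    def to_band(Ad, nl, nu):
        """Pack dense Ad (n x n) into the band layout assumed above:
        band[i, nl + (j - i)] = Ad[i, j] for j - i between -nl and nu (0-based)."""
        n = Ad.shape[0]
        band = np.zeros((n, nl + 1 + nu))
        for i in range(n):
            for j in range(max(0, i - nl), min(n, i + nu + 1)):
                band[i, nl + (j - i)] = Ad[i, j]
        return band

    # usage sketch: a tridiagonal example (nl = nu = 1)
    Ad = np.diag([4.0] * 5) + np.diag([1.0] * 4, 1) + np.diag([-2.0] * 4, -1)
    band = to_band(Ad, 1, 1)
    print(band)      # column 0: sub-diagonal, column 1: diagonal, column 2: super-diagonal

With this layout, the pseudocode's A(i,nl+1), A(i,nl+1-(i-j)), and A(i,nl+1+(j-i)) are simply the diagonal entry, a left-of-diagonal entry, and a right-of-diagonal entry of row i, respectively.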

4. Conclusion

We have demonstrated that, when computing the LDU decomposition (a typical example of a direct solution method), it is possible simultaneously to obtain the derivative of a determinant with respect to an eigenvalue. In addition, in the Newton-Raphson method the solution depends on the derivative of a determinant, so the result of our proposed computational algorithm may be used to determine the step size in that method. Indeed, the step size may be set automatically, allowing a reduction in the number of basic parameters, which has a significant impact on nonlinear analysis. Our proposed method is based on the LDU decomposition, and for band matrices the memory required to store the LDU factors is significantly reduced compared to the case of dense matrices. By augmenting an LDU decomposition program with an additional routine (which simply appends two vectors), we easily obtain a program for evaluating the derivative of a determinant with respect to an eigenvalue. The result of this computation is useful for numerical analysis using such algorithms as those of the Newton-Raphson method and the Durand-Kerner method. Moreover, the proposed method follows easily from the process of solving simultaneous linear equations and should prove an effective technique for eigenvalue problems and nonlinear problems.

REFERENCES

[1] M. Kashiwagi, "An Eigensolution Method for Solving the Largest or Smallest Eigenpair by the Conjugate Gradient Method," Transactions of the Japan Society for Aeronautical and Space Sciences, 1999, pp. 1-5.

[2] M. Kashiwagi, "A Method for Determining Intermediate Eigensolution of Sparse and Symmetric Matrices by the Double Shifted Inverse Power Method," Transactions of JSI, Vol. 9, No. 3, 2009, pp. 23-38.

[3] M. Kashiwagi, "A Method for Determining Eigensolutions of Large, Sparse, Symmetric Matrices by the Preconditioned Conjugate Gradient Method in the Generalized Eigenvalue Problem," Journal of Structural Engineering, Vol. 73, No. 629, 2008, pp. 209-217.

[4] M. Kashiwagi, "A Method for Determining Intermediate Eigenpairs of Sparse Symmetric Matrices by Lanczos and Shifted Inverse Power Method in Generalized Eigenvalue Problems," Journal of Structural and Construction Engineering, Vol. 76, No. 664, 2011, pp. 1181-1188. doi:10.3130/aijs.76.1181