IMPROVEMENT OF AN APPROXIMATE SET OF LATENT ROOTS AND MODAL COLUMNS OF A MATRIX BY METHODS AKIN TO THOSE OF CLASSICAL PERTURBATION THEORY
By H. A. JAHN (University of Birmingham)

[Received 7 October 1947]

SUMMARY
A method is described for simultaneously improving all the latent roots and modal columns of a given matrix, starting from a given complete set of approximate modal columns. It is considered that the method will be useful as a final step in any iteration process of determining these quantities. The method is illustrated by a numerical example. The modification needed when two or more of the latent roots are coincident, or nearly so, is very briefly indicated. The fundamental formulae are akin to those of classical perturbation theory, the corresponding formulae of which, for the special case of a Lagrange frequency equation, are given for convenience in the Appendix.

1. The fundamental equations
Let a complete, linearly independent set of approximate modal vectors x_1^{(0)}, ..., x_n^{(0)} of a square matrix A of order n be given. Then operating on any x_r^{(0)} by the matrix A gives a vector A x_r^{(0)} which can be expressed as a linear combination of the vectors x_1^{(0)}, ..., x_n^{(0)}. If we write this linear combination as

    A x_r^{(0)} = \lambda_r^{(1)} x_r^{(0)} + \sum_s' a_{rs}^{(1)} x_s^{(0)},        (1)

where \sum_s' denotes summation over s = 1, ..., n omitting s = r, then, since the x_r^{(0)} are approximate modes, the \lambda_r^{(1)} will be first approximations to the latent roots and the a_{rs}^{(1)} will be small quantities. An improvement on the original mode will then be given by
    x_r^{(1)} = x_r^{(0)} + \sum_s' \frac{a_{rs}^{(1)}}{\lambda_r^{(1)} - \lambda_s^{(1)}} x_s^{(0)},        (2)

so long as none of the differences \lambda_r^{(1)} - \lambda_s^{(1)} are very small (see §8), for we have

    A x_r^{(1)} = \lambda_r^{(1)} x_r^{(0)} + \sum_s' a_{rs}^{(1)} x_s^{(0)} + \sum_s' \frac{a_{rs}^{(1)}}{\lambda_r^{(1)} - \lambda_s^{(1)}} A x_s^{(0)},

or A x_r^{(1)} = \lambda_r^{(1)} x_r^{(1)} + terms of second order. Having found this first approximation to the latent roots and the modes, we may now take into account the second-order terms, writing

    A x_r^{(1)} = \lambda_r^{(2)} x_r^{(1)} + \sum_s' a_{rs}^{(2)} x_s^{(1)},        (3)

giving \lambda_r^{(2)} as the second approximation to the root, and

    x_r^{(2)} = x_r^{(1)} + \sum_s' \frac{a_{rs}^{(2)}}{\lambda_r^{(2)} - \lambda_s^{(2)}} x_s^{(1)}        (4)

as the second approximation to the mode. Or, in general, for the pth approximation,

    A x_r^{(p-1)} = \lambda_r^{(p)} x_r^{(p-1)} + \sum_s' a_{rs}^{(p)} x_s^{(p-1)},        (5)

    x_r^{(p)} = x_r^{(p-1)} + \sum_s' \frac{a_{rs}^{(p)}}{\lambda_r^{(p)} - \lambda_s^{(p)}} x_s^{(p-1)},        (6)

for the determination of \lambda_r^{(p)} and x_r^{(p)} given x_r^{(p-1)} (r = 1, ..., n).

2. Determination of the coefficients. First method
To find the coefficients \lambda_r, a_{rs} at any stage we have the linear vector equation

    \lambda_r x_r + \sum_s' a_{rs} x_s = A x_r,        (7)

or, in terms of components,

    \lambda_r x_{ri} + \sum_s' a_{rs} x_{si} = (A x_r)_i        (i = 1, ..., n),        (8)

where (A x_r)_i are the components of the vector A x_r. For a given value of r these are n linear equations for the n unknowns \lambda_r, a_{rs} (s = 1, 2, ..., r-1, r+1, ..., n).
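The first method may be sketched in modern numerical terms. The following NumPy fragment is an illustration of my own, not part of the paper; the symmetric test matrix and the perturbation of its exact modes are assumed purely for demonstration. For each r the n equations (8) are solved with the approximate modal matrix as coefficient matrix, and the improved mode is then formed as in (2).

```python
import numpy as np

def refine_once(A, X):
    """One step of the first method: for each r, solve the n linear
    equations (8), whose coefficient matrix is the modal matrix X,
    for the unknowns lambda_r and a_rs (s != r); then form the
    improved modes via equation (2)."""
    n = A.shape[0]
    lam = np.empty(n)
    coeffs = np.empty((n, n))
    for r in range(n):
        # Unknown vector u has u_r = lambda_r and u_s = a_rs (s != r),
        # since lambda_r x_r + sum' a_rs x_s = A x_r.
        u = np.linalg.solve(X, A @ X[:, r])
        lam[r] = u[r]
        coeffs[r] = u
    X_new = X.copy()
    for r in range(n):
        for s in range(n):
            if s != r:
                # Equation (2): divide each a_rs by the root separation
                X_new[:, r] += coeffs[r, s] / (lam[r] - lam[s]) * X[:, s]
    return lam, X_new

# Assumed example: perturb the exact modes of a symmetric matrix
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
w, V = np.linalg.eigh(A)                       # reference solution
rng = np.random.default_rng(0)
X0 = V + 0.05 * rng.standard_normal((3, 3))    # rough approximate modes
lam1, X1 = refine_once(A, X0)
lam2, X2 = refine_once(A, X1)
```

Each step costs one linear solve per mode; two steps already recover the latent roots of this well-separated example to several figures.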
Writing

    D = \det(x_1, x_2, ..., x_n)        (9)

for the determinant of the modal matrix, the solution of these equations may be written as

    D \lambda_r = \det(x_1, ..., x_{r-1}, A x_r, x_{r+1}, ..., x_n),        (10)

    D a_{rs} = \det(x_1, ..., x_{s-1}, A x_r, x_{s+1}, ..., x_n).        (11)

We see that to find the improved value of the rth latent root at any stage we take the matrix whose columns are the modes obtained from the preceding stage and replace the rth column by the components of the vector A x_r. Then the improved value of the root is the value of the determinant of this modified matrix divided by the determinant of the original modal matrix. The coefficients a_{rs} in the expression (6) for the improved modes are obtained in the same manner by replacing the sth column of the matrix of the modes by the components of A x_r, evaluating the determinant of the matrix so modified, and dividing again by the determinant of the original modal matrix.

3. The equations in matrix notation
It may be noted that the essential computational element in the method described here is the calculation of the inverse, or adjoint, of the given approximate modal matrix. For if

    X^{(p)} = (x_1^{(p)}, x_2^{(p)}, ..., x_n^{(p)})        (12)

is the given approximate modal matrix and we write

    A X^{(0)} = X^{(0)} A_1,        (13)

then comparison with equation (1) shows that

    (A_1)_{rr} = \lambda_r^{(1)},   (A_1)_{sr} = a_{rs}^{(1)}   (s \neq r),        (14)

i.e. A_1 is determined by

    A_1 = \tilde{X}^{(0)} A X^{(0)} / D_0,        (15)

where \tilde{X}^{(0)} is the adjoint of X^{(0)} and D_0 = |X^{(0)}|. Having found A_1, and hence \lambda_r^{(1)} (r = 1, ..., n), in this manner, we find the improved modal matrix X^{(1)} from

    X^{(1)} = X^{(0)} B_1,        (16)

where B_1 is derived from A_1 as follows:

    (B_1)_{rr} = 1,   (B_1)_{sr} = \frac{a_{rs}^{(1)}}{\lambda_r^{(1)} - \lambda_s^{(1)}}   (s \neq r),        (17)

as is seen by comparison with equation (2). Or, in general,

    A_p = \tilde{X}^{(p-1)} A X^{(p-1)} / D_{p-1},        (18)

    X^{(p)} = X^{(p-1)} B_p,        (19)

where B_p is obtained from A_p in the same manner as B_1 is obtained from A_1. This is illustrated in the following example.

4. Application to a symmetrical third-order matrix (Elementary Matrices (1)†, p. 30)
By the matrix iteration method the roots \lambda_1, \lambda_2 = 2.652, \lambda_3 and the corresponding modes x_1, x_2, x_3 of the matrix A of that example have been found. (The numerical arrays printed at this point are too garbled in this transcription to reproduce.)

† See reference (1) at the end.
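The matrix formulation of equations (18)-(19) can be sketched numerically as follows. This NumPy fragment is my own illustration, not the paper's computation: the inverse is obtained by a linear solve in place of the adjoint-over-determinant of (15), and the test matrix and starting modes are assumed.

```python
import numpy as np

def jahn_step(A, X):
    """One step of equations (18)-(19): form A_p = X^{-1} A X, whose
    diagonal holds the improved roots and whose off-diagonal element
    (A_p)_sr is a_rs; then post-multiply X by B_p, where (B_p)_rr = 1
    and (B_p)_sr = a_rs / (lambda_r - lambda_s) for s != r."""
    n = A.shape[0]
    Ap = np.linalg.solve(X, A @ X)
    lam = np.diag(Ap).copy()
    B = np.eye(n)
    for r in range(n):
        for s in range(n):
            if s != r:
                B[s, r] = Ap[s, r] / (lam[r] - lam[s])
    return lam, X @ B

# Assumed example: iterate from roughly known modes
A = np.array([[5.0, 2.0, 1.0],
              [2.0, 4.0, 0.0],
              [1.0, 0.0, 3.0]])
w, V = np.linalg.eigh(A)                 # reference solution
X = V + 0.05 * np.array([[0.3, -0.8, 0.4],
                         [-0.5, 0.2, 0.9],
                         [0.7, 0.6, -0.1]])
for _ in range(4):
    lam, X = jahn_step(A, X)
```

A few steps drive the off-diagonal coefficients of A_p toward zero, so that its diagonal converges on the latent roots and the columns of X on the modal columns.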
Suppose the approximate modal matrix X^{(0)} shown in the original had been found, for which the adjoint \tilde{X}^{(0)}, the determinant D_0, and the products A x_r^{(0)} are as tabulated there. Then we have for the first approximation the matrix A_1 of equation (15), whose diagonal elements give the improved roots \lambda_1^{(1)}, \lambda_2^{(1)} = 2.653, \lambda_3^{(1)}. For the improvement of the modes the differences \lambda_r^{(1)} - \lambda_s^{(1)} are formed, B_1 is constructed as in §3, and X^{(1)} = X^{(0)} B_1 is computed. Proceeding in the same way to the second approximation we find A_2 = \tilde{X}^{(1)} A X^{(1)} / D_1, giving for the second approximation to the roots \lambda_1^{(2)}, \lambda_2^{(2)}, \lambda_3^{(2)}. (The numerical arrays of these pages are too garbled in this transcription to reproduce.)

5. Connexion with Rayleigh's principle, and second method
There is another method of calculating \lambda_r, a_{rs} (s \neq r) from the vector equation

    \lambda_r x_r + \sum_s' a_{rs} x_s = A x_r,        (7)

which brings out the connexion with Rayleigh's principle (2) for the determination of latent roots given approximate modes. Taking scalar products with the vectors x_l (l = 1, ..., n) we obtain

    (A x_r, x_l) = \lambda_r (x_r, x_l) + \sum_s' a_{rs} (x_s, x_l)        (l = 1, ..., n).        (20)

These, for a given value of r, are n linear equations for the n unknowns \lambda_r, a_{rs} (s = 1, 2, ..., r-1, r+1, ..., n). Putting for brevity

    S_{rl} = (x_r, x_l),   A_{rl} = (A x_r, x_l),        (21)

and writing \Delta = |S_{rl}| for the determinant of the matrix of scalar products, the solution may be written

    \Delta \lambda_r = \det(S with its rth column replaced by the column A_{r1}, ..., A_{rn}),        (22)

    \Delta a_{rs} = \det(S with its sth column replaced by the column A_{r1}, ..., A_{rn}).        (23)

That (22) and (23) are identical with (10) and (11) may be seen as follows. Since the S_{rl} = (x_r, x_l) are the scalar products of two vectors we have the matrix relation

    S = X^T X,        (24)

where X = (x_1, ..., x_n) is the modal matrix with components x_{ri}. Consequently

    \Delta = D^2.        (25)
Similarly we have the relations

    \det(x_1, ..., x_{r-1}, A x_r, x_{r+1}, ..., x_n) \cdot D = \det(S with its rth column replaced by A_{r1}, ..., A_{rn}),        (26)

    \det(x_1, ..., x_{s-1}, A x_r, x_{s+1}, ..., x_n) \cdot D = \det(S with its sth column replaced by A_{r1}, ..., A_{rn}),        (27)

where (A x_r)_1, (A x_r)_2, ..., (A x_r)_n are the components of the vector A x_r. Consequently equations (10) and (11) may be obtained from (22) and (23) by dividing by D. The equations (21), (22), (23) have the advantage over the corresponding equations (9), (10), (11) that, for a symmetrical matrix A (for which the exact modes are orthogonal), the matrix S will be approximately diagonal and also, so long as none of the latent roots are very small, the diagonal terms A_{rr} = (A x_r, x_r) will be larger than the non-diagonal terms A_{rl} = (A x_r, x_l). Neglecting the smaller terms we have then from (22)

    \lambda_r \approx \frac{A_{rr}}{S_{rr}} = \frac{(A x_r, x_r)}{(x_r, x_r)},        (28)

which is Rayleigh's approximation for the latent roots. For the improvement of the modes we have, for s < r,

    \Delta a_{rs} = \det(S with its sth column replaced by A_{r1}, ..., A_{rn}),        (29)

in which the large terms are the diagonal elements S_{tt} and the element A_{rr}. Retaining only the largest terms in the expansion of this determinant, we find

    \Delta a_{rs} \approx A_{rs} \prod_{t \neq s} S_{tt} - A_{rr} S_{rs} \prod_{t \neq r, s} S_{tt}.

Since to the first approximation we have \lambda_r = A_{rr}/S_{rr}, we find, on division by \Delta \approx S_{11} S_{22} \cdots S_{nn},

    a_{rs} \approx \frac{A_{rs} - \lambda_r S_{rs}}{S_{ss}}.        (30)

The same result is obtained for s > r; thus (30) holds in all cases. To this approximation we are able to improve the roots and the modes without the solution of linear equations. For the iteration process of approximation we have thus

    \lambda_r^{(p)} = \frac{A_{rr}^{(p-1)}}{S_{rr}^{(p-1)}},        (31)

    a_{rs}^{(p)} = \frac{A_{rs}^{(p-1)} - \lambda_r^{(p)} S_{rs}^{(p-1)}}{S_{ss}^{(p-1)}}.        (32)

The improved mode is given by x_r^{(p)} = x_r^{(p-1)} + \sum_s' [a_{rs}^{(p)} / (\lambda_r^{(p)} - \lambda_s^{(p)})] x_s^{(p-1)}, which becomes, since \lambda_r^{(p)} - \lambda_s^{(p)} = (\lambda_r^{(p)} S_{ss}^{(p-1)} - A_{ss}^{(p-1)}) / S_{ss}^{(p-1)},

    x_r^{(p)} = x_r^{(p-1)} + \sum_s' \frac{A_{rs}^{(p-1)} - \lambda_r^{(p)} S_{rs}^{(p-1)}}{\lambda_r^{(p)} S_{ss}^{(p-1)} - A_{ss}^{(p-1)}} x_s^{(p-1)},        (33)

which gives the improvement in the modes in terms of the coefficients

    \lambda_r^{(p)} = A_{rr}^{(p-1)} / S_{rr}^{(p-1)},        (34)

    S_{rs}^{(p-1)} = (x_r^{(p-1)}, x_s^{(p-1)}),        (35)

    A_{rs}^{(p-1)} = (A x_r^{(p-1)}, x_s^{(p-1)}),        (36)

derived from the modes x_r^{(p-1)} of the preceding approximation.
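Since equations (31)-(36) involve only scalar products, one refinement step needs no linear solves at all. The following NumPy fragment is my own sketch of that step (the symmetric test matrix and starting modes are assumed for illustration):

```python
import numpy as np

def refine_scalar(A, X):
    """Second-method step, equations (31)-(33): with S_rs = (x_r, x_s)
    and A_rs = (A x_r, x_s), take lambda_r = A_rr/S_rr (Rayleigh's
    quotient) and update each mode by
    (A_rs - lambda_r S_rs)/(lambda_r S_ss - A_ss)."""
    S = X.T @ X            # S[r, s] = (x_r, x_s)
    C = X.T @ A @ X        # C[r, s] = (A x_r, x_s) for symmetric A
    lam = np.diag(C) / np.diag(S)                  # (31)
    n = len(lam)
    X_new = X.copy()
    for r in range(n):
        for s in range(n):
            if s != r:
                X_new[:, r] += (C[r, s] - lam[r] * S[r, s]) \
                    / (lam[r] * S[s, s] - C[s, s]) * X[:, s]   # (33)
    return lam, X_new

# Assumed example: symmetric matrix, nearly orthogonal starting modes
A = np.array([[6.0, 1.0, 0.5],
              [1.0, 4.0, 1.0],
              [0.5, 1.0, 2.0]])
w, V = np.linalg.eigh(A)                 # reference solution
X = V + 0.05 * np.array([[0.4, -0.2, 0.6],
                         [-0.3, 0.5, 0.1],
                         [0.2, 0.8, -0.7]])
lam, X = refine_scalar(A, X)
lam, X = refine_scalar(A, X)
```

As the paper notes, this variant presumes a symmetrical A with nearly orthogonal modes and latent roots not close to zero.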
6. Application of the second method to a third-order matrix
Applying these formulae to the example of §4 we have, with p = 1, the scalar products S^{(0)}, A^{(0)} and hence the first approximations to the roots, among which \lambda_2^{(1)} = 2.652, followed by the improvement of the modes as in §3, and then the second approximation. (The numerical arrays of this section are too garbled in this transcription to reproduce.)

It is seen, as was to be expected, that the method described in this paragraph converges slightly less rapidly than that employed in §4; it has, however, the distinct advantage that the solution of the set of n linear equations is avoided.

7. Application to the Lagrange frequency equation
The approximate orthogonality of the approximate modes which forms the basis of the method of §5 applies only when the matrix A is symmetrical. The method is, however, easily extended to a Lagrange frequency equation of the form

    (-\lambda M + K) x = 0        (37)
or
    (-\lambda I + U) x = 0,        (38)

where M is the kinetic energy matrix, K the potential energy matrix, and

    U = M^{-1} K,        (39)

if the usual extended definition of orthogonality in terms of the kinetic energy is made, i.e. x_r, x_s are defined to be orthogonal when

    (x_r, M x_s) = 0.        (40)

For, starting from the equation

    \lambda_r x_r + \sum_s' a_{rs} x_s = U x_r        (41)

and taking scalar products with M x_l, we have

    \lambda_r (x_r, M x_l) + \sum_s' a_{rs} (x_s, M x_l) = (U x_r, M x_l) = (M^{-1} K x_r, M x_l) = (K x_r, x_l).        (42)

The formulae of §5 will then hold, assuming again that none of the latent roots are very small, if we replace

    A_{rl} = (A x_r, x_l)   by   K_{rl} = (K x_r, x_l)        (43)
and
    S_{rl} = (x_r, x_l)   by   M_{rl} = (x_r, M x_l) = (M x_r, x_l).        (44)

Consequently

    \lambda_r^{(p)} = K_{rr}^{(p-1)} / M_{rr}^{(p-1)}        (45)

is the improved value of the root, and

    x_r^{(p)} = x_r^{(p-1)} + \sum_s' \frac{K_{rs}^{(p-1)} - \lambda_r^{(p)} M_{rs}^{(p-1)}}{\lambda_r^{(p)} M_{ss}^{(p-1)} - K_{ss}^{(p-1)}} x_s^{(p-1)}        (46)

the improvement in the mode. These may be compared with the corresponding formulae of classical perturbation theory given in the Appendix.
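In the same spirit, the §7 refinement for the pencil (-\lambda M + K) x = 0 can be sketched as follows. This NumPy fragment is my own illustration: the kinetic and potential energy matrices are assumed, and the reference solution is obtained by a Cholesky reduction of the pencil to an ordinary symmetric problem.

```python
import numpy as np

def refine_pencil(K, M, X):
    """Sec. 7 step: lambda_r = K_rr/M_rr (45) and the mode update
    (K_rs - lambda_r M_rs)/(lambda_r M_ss - K_ss) (46), with
    K_rs = (K x_r, x_s) and M_rs = (x_r, M x_s)."""
    Km = X.T @ K @ X
    Mm = X.T @ M @ X
    lam = np.diag(Km) / np.diag(Mm)
    n = len(lam)
    X_new = X.copy()
    for r in range(n):
        for s in range(n):
            if s != r:
                X_new[:, r] += (Km[r, s] - lam[r] * Mm[r, s]) \
                    / (lam[r] * Mm[s, s] - Km[s, s]) * X[:, s]
    return lam, X_new

# Assumed kinetic and potential energy matrices
M = np.array([[2.0, 0.3, 0.0],
              [0.3, 1.5, 0.2],
              [0.0, 0.2, 1.0]])
K = np.array([[12.0, 1.0, 0.0],
              [1.0, 4.0, 1.0],
              [0.0, 1.0, 1.0]])
L = np.linalg.cholesky(M)
Cs = np.linalg.solve(L, np.linalg.solve(L, K.T).T)   # L^{-1} K L^{-T}
w, U = np.linalg.eigh(Cs)            # exact latent roots of the pencil
V = np.linalg.solve(L.T, U)          # M-orthonormal exact modes
X = V + 0.05 * np.array([[0.2, -0.4, 0.3],
                         [0.5, 0.1, -0.6],
                         [-0.3, 0.7, 0.2]])
for _ in range(2):
    lam, X = refine_pencil(K, M, X)
```

The modes need only be approximately M-orthogonal, in keeping with the extended orthogonality definition (40).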
8. The case of coincident roots
The modification needed to the approximation procedure when two or more latent roots are coincident, or nearly so, is outlined briefly below. For the sake of simplicity take the case where the first two modes correspond to approximately equal roots. Let y_1^{(0)}, y_2^{(0)} be the approximate modes corresponding to these roots, whilst x_r^{(0)} (r = 3, ..., n) are those corresponding to the other roots. Then we will have relations of the form

    A y_1^{(0)} = \mu_{11}^{(1)} y_1^{(0)} + \mu_{12}^{(1)} y_2^{(0)} + \sum_{s=3}^n a_{1s}^{(1)} x_s^{(0)},        (47)

    A y_2^{(0)} = \mu_{21}^{(1)} y_1^{(0)} + \mu_{22}^{(1)} y_2^{(0)} + \sum_{s=3}^n a_{2s}^{(1)} x_s^{(0)},        (48)

    A x_r^{(0)} = \lambda_r^{(1)} x_r^{(0)} + a_{r1}^{(1)} y_1^{(0)} + a_{r2}^{(1)} y_2^{(0)} + \sum_{s=3, s \neq r}^n a_{rs}^{(1)} x_s^{(0)}   (r = 3, ..., n).        (49)

Here the coefficients \mu_{11}^{(1)}, \mu_{12}^{(1)}, \mu_{21}^{(1)}, \mu_{22}^{(1)} will all be of the same order of magnitude, but the coefficients a will still be small quantities. As in classical perturbation theory (3), before proceeding farther, we first diagonalize the matrix of the \mu coefficients. Let the transformation of y_1^{(0)}, y_2^{(0)} which does this (at least to the second order of small quantities) be given by

    x_1^{(0)} = c_{11} y_1^{(0)} + c_{12} y_2^{(0)},        (50)

    x_2^{(0)} = c_{21} y_1^{(0)} + c_{22} y_2^{(0)},        (51)

and let \lambda_1^{(1)}, \lambda_2^{(1)} be the resulting diagonal elements and consequently the first approximation to the first two latent roots. We shall then have

    A x_1^{(0)} = \lambda_1^{(1)} x_1^{(0)} + \sum_{s=3}^n a_{1s}^{(1)} x_s^{(0)},        (52)

    A x_2^{(0)} = \lambda_2^{(1)} x_2^{(0)} + \sum_{s=3}^n a_{2s}^{(1)} x_s^{(0)}.        (53)

Improved modes may then be found as before, viz.

    x_1^{(1)} = x_1^{(0)} + \sum_{s=3}^n \frac{a_{1s}^{(1)}}{\lambda_1^{(1)} - \lambda_s^{(1)}} x_s^{(0)},        (54)

    x_2^{(1)} = x_2^{(0)} + \sum_{s=3}^n \frac{a_{2s}^{(1)}}{\lambda_2^{(1)} - \lambda_s^{(1)}} x_s^{(0)},        (55)

and the process repeated if necessary. The coefficients \lambda_r, a_{rs} can be determined by any of the methods outlined in the preceding paragraphs.

9. Conclusions
It is considered that the method of improving the latent roots and modal columns of a matrix described here will be useful as a final step in any iteration process of determining these quantities. The special method of §§5 and 7 becomes invalid when one or more of the latent roots are very small, and applies further in its present form only to a symmetrical matrix or to one which is the product of two symmetrical matrices. It should be noted that the fundamental formulae of this report are akin to those of classical perturbation theory (2, 3), as shown in the Appendix. The method might in fact be described as that of perturbation theory in reverse, for in that theory one starts from the known latent roots and modal columns of a given matrix and derives those of a matrix differing slightly from the original matrix, whilst here one starts from approximate modal columns and latent roots of a given matrix and deduces improved values of these for the same matrix. This analogy formed the basis of the original derivation of the formulae given here.

REFERENCES
1. FRAZER, DUNCAN, and COLLAR, Elementary Matrices (Cambridge Univ. Press, 1938).
2. RAYLEIGH, Theory of Sound.
3. COURANT-HILBERT, Methoden der Mathematischen Physik, Bd. 1 (1931).

APPENDIX
The Corresponding Formulae of Classical Perturbation Theory for the Case of a Lagrangian Frequency Equation

Let M and K be the known kinetic energy and potential energy matrices of a given conservative system. Let x_r, \lambda_r denote the known modal columns and latent roots of the system, so that \lambda_r = \omega_r^2, where \omega_r is the circular frequency. Then we have

    (-\lambda_r M + K) x_r = 0,        (58)
or
    K x_r = \lambda_r M x_r.        (59)

Denoting the scalar product of two modal columns by
    (x_r, x_s) = \sum_i x_{ri} x_{si},        (60)

where x_{ri} are the components of the modal column or vector x_r, the orthogonality with respect to the kinetic energy of the modes may be expressed by

    (M x_r, x_s) = M_{rr} \delta_{rs},        (61)

where \delta_{rs} is the usual Kronecker symbol,

    \delta_{rs} = 0 for r \neq s and \delta_{rs} = 1 for r = s.        (62)
The classical perturbation problem consists in finding the changes \Delta\lambda_r, \Delta x_r in the latent roots and modes of the above system due to small changes \Delta M, \Delta K in the kinetic or potential energy matrices. Putting, for the modified system,

    \lambda_r' = \lambda_r + \Delta\lambda_r,        (63)

    x_r' = x_r + \sum_s' c_{rs} x_s,        (64)

we have

    \{ -(\lambda_r + \Delta\lambda_r)(M + \Delta M) + K + \Delta K \} ( x_r + \sum_s' c_{rs} x_s ) = 0.        (65)

Equating the small terms of first order to zero we find

    (-\lambda_r M + K) \sum_s' c_{rs} x_s + (-\lambda_r \Delta M - M \Delta\lambda_r + \Delta K) x_r = 0.        (66)

Taking scalar products with x_r, we obtain from this vector equation, since

    (M x_r, x_s) = M_{rr} \delta_{rs},   (K x_r, x_s) = \lambda_r (M x_r, x_s) = \lambda_r M_{rr} \delta_{rs} = K_{rr} \delta_{rs},        (67)

so that

    K_{rr} = \lambda_r M_{rr},        (68)

the relation

    -\lambda_r (\Delta M)_{rr} - M_{rr} \Delta\lambda_r + (\Delta K)_{rr} = 0,        (69)

from which we have

    \Delta\lambda_r = \frac{(\Delta K)_{rr} - \lambda_r (\Delta M)_{rr}}{M_{rr}},        (70)

    \lambda_r' = \lambda_r + \Delta\lambda_r = \frac{K_{rr} + (\Delta K)_{rr} - \lambda_r (\Delta M)_{rr}}{M_{rr}},        (71)

since \lambda_r = K_{rr}/M_{rr}. On the other hand, we have

    M_{rr}' = M_{rr} + (\Delta M)_{rr},   K_{rr}' = K_{rr} + (\Delta K)_{rr}.        (72)

Thus the modified latent root is given, to the first order of small quantities, by

    \lambda_r' = \frac{K_{rr}'}{M_{rr}'} = \frac{K_{rr} + (\Delta K)_{rr}}{M_{rr} + (\Delta M)_{rr}},        (73)

which may be compared with equation (45) of the text. The equivalent form

    \frac{\Delta\lambda_r}{\lambda_r} = \frac{(\Delta K)_{rr}}{K_{rr}} - \frac{(\Delta M)_{rr}}{M_{rr}}        (74)

for the relative change in the latent root, derived from (70), is useful. The fact that the introduction of small coupling terms (\Delta M)_{rs}, (\Delta K)_{rs} between the different normal modes of the original system has no effect on the frequency to the first order of small quantities may be noted.

To find the coefficients c_{rs} determining the changes in the modes we take scalar products of the vector equation (66) with x_s (s \neq r). This gives

    c_{rs} (-\lambda_r M_{ss} + K_{ss}) - \lambda_r (\Delta M)_{rs} + (\Delta K)_{rs} = 0,        (75)

    c_{rs} = \frac{\lambda_r (\Delta M)_{rs} - (\Delta K)_{rs}}{K_{ss} - \lambda_r M_{ss}},        (76)

or, since \lambda_r = K_{rr}/M_{rr},

    c_{rs} = \frac{K_{rr} (\Delta M)_{rs} - M_{rr} (\Delta K)_{rs}}{M_{rr} K_{ss} - K_{rr} M_{ss}},        (77)

giving, for the modification to the mode,

    x_r' = x_r + \sum_s' \frac{K_{rr} (\Delta M)_{rs} - M_{rr} (\Delta K)_{rs}}{M_{rr} K_{ss} - K_{rr} M_{ss}} x_s,        (78)

which may be compared with formula (46) of the text. The relation

    M_{rr} K_{ss} - K_{rr} M_{ss} = M_{rr} M_{ss} (\lambda_s - \lambda_r)        (79)

may be noted (compare equation (33)), so that the c_{rs} are of the form

    c_{rs} = \frac{(\Delta K)_{rs} - \lambda_r (\Delta M)_{rs}}{M_{ss} (\lambda_r - \lambda_s)}        (80)

(compare equation (2)). The case of coincident latent roots of the unmodified system and the extension of the calculations to the second approximation (second-order perturbations) will be found treated in ref. (3) or in any standard text-book of quantum theory, although the formulae are given there as applied to a complex hermitian matrix.
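As a numerical check of the first-order formulae (a sketch of my own, with assumed matrices; not part of the original Appendix), equation (70) can be compared against the exact latent roots of the perturbed pencil:

```python
import numpy as np

def first_order_roots(K, M, dK, dM, lam, V):
    """Equation (70): Delta lambda_r = ((dK)_rr - lambda_r (dM)_rr)/M_rr,
    where (dK)_rr = (dK x_r, x_r) etc. are taken in the known modal
    basis V (columns = exact modes x_r)."""
    dKrr = np.einsum('ir,ij,jr->r', V, dK, V)
    dMrr = np.einsum('ir,ij,jr->r', V, dM, V)
    Mrr = np.einsum('ir,ij,jr->r', V, M, V)
    return lam + (dKrr - lam * dMrr) / Mrr

def gen_eig(K, M):
    """Exact latent roots of (-lambda M + K) x = 0 via Cholesky reduction."""
    L = np.linalg.cholesky(M)
    C = np.linalg.solve(L, np.linalg.solve(L, K.T).T)
    w, U = np.linalg.eigh(C)
    return w, np.linalg.solve(L.T, U)

M = np.array([[2.0, 0.3, 0.0],
              [0.3, 1.5, 0.2],
              [0.0, 0.2, 1.0]])
K = np.array([[12.0, 1.0, 0.0],
              [1.0, 4.0, 1.0],
              [0.0, 1.0, 1.0]])
lam, V = gen_eig(K, M)
dM = 0.01 * np.array([[0.5, 0.2, 0.0],
                      [0.2, -0.3, 0.1],
                      [0.0, 0.1, 0.4]])
dK = 0.01 * np.array([[1.0, -0.5, 0.2],
                      [-0.5, 0.6, 0.0],
                      [0.2, 0.0, -0.8]])
predicted = first_order_roots(K, M, dK, dM, lam, V)
exact, _ = gen_eig(K + dK, M + dM)
```

The residual error of the prediction is of second order in the perturbation, in agreement with the remark above that first-order coupling terms do not affect the frequencies.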
More informationExplicit evaluation of the transmission factor T 1. Part I: For small dead-time ratios. by Jorg W. MUller
Rapport BIPM-87/5 Explicit evaluation of the transmission factor T (8,E) Part I: For small dead-time ratios by Jorg W. MUller Bureau International des Poids et Mesures, F-930 Sevres Abstract By a detailed
More informationExhibit 2-9/30/15 Invoice Filing Page 1841 of Page 3660 Docket No
xhibit 2-9/3/15 Invie Filing Pge 1841 f Pge 366 Dket. 44498 F u v 7? u ' 1 L ffi s xs L. s 91 S'.e q ; t w W yn S. s t = p '1 F? 5! 4 ` p V -', {} f6 3 j v > ; gl. li -. " F LL tfi = g us J 3 y 4 @" V)
More informationTHE QR METHOD A = Q 1 R 1
THE QR METHOD Given a square matrix A, form its QR factorization, as Then define A = Q 1 R 1 A 2 = R 1 Q 1 Continue this process: for k 1(withA 1 = A), A k = Q k R k A k+1 = R k Q k Then the sequence {A
More informationStark effect of a rigid rotor
J. Phys. B: At. Mol. Phys. 17 (1984) 3535-3544. Printed in Great Britain Stark effect of a rigid rotor M Cohen, Tova Feldmann and S Kais Department of Physical Chemistry, The Hebrew University, Jerusalem
More informationDiagonalization by a unitary similarity transformation
Physics 116A Winter 2011 Diagonalization by a unitary similarity transformation In these notes, we will always assume that the vector space V is a complex n-dimensional space 1 Introduction A semi-simple
More informationPower Series Solutions of Ordinary Differential Equations
Power Series Solutions for Ordinary Differential Equations James K. Peterson Department of Biological Sciences and Department of Mathematical Sciences Clemson University December 4, 2017 Outline Power
More informationSPACES OF MATRICES WITH SEVERAL ZERO EIGENVALUES
SPACES OF MATRICES WITH SEVERAL ZERO EIGENVALUES M. D. ATKINSON Let V be an w-dimensional vector space over some field F, \F\ ^ n, and let SC be a space of linear mappings from V into itself {SC ^ Horn
More informationYORK UNIVERSITY. Faculty of Science Department of Mathematics and Statistics MATH M Test #1. July 11, 2013 Solutions
YORK UNIVERSITY Faculty of Science Department of Mathematics and Statistics MATH 222 3. M Test # July, 23 Solutions. For each statement indicate whether it is always TRUE or sometimes FALSE. Note: For
More informationAsymptotics of generating the symmetric and alternating groups
Asymptotics of generating the symmetric and alternating groups John D. Dixon School of Mathematics and Statistics Carleton University, Ottawa, Ontario K2G 0E2 Canada jdixon@math.carleton.ca October 20,
More informationAPPENDIX A. Background Mathematics. A.1 Linear Algebra. Vector algebra. Let x denote the n-dimensional column vector with components x 1 x 2.
APPENDIX A Background Mathematics A. Linear Algebra A.. Vector algebra Let x denote the n-dimensional column vector with components 0 x x 2 B C @. A x n Definition 6 (scalar product). The scalar product
More informationOR MSc Maths Revision Course
OR MSc Maths Revision Course Tom Byrne School of Mathematics University of Edinburgh t.m.byrne@sms.ed.ac.uk 15 September 2017 General Information Today JCMB Lecture Theatre A, 09:30-12:30 Mathematics revision
More informationLinear Algebra: Lecture notes from Kolman and Hill 9th edition.
Linear Algebra: Lecture notes from Kolman and Hill 9th edition Taylan Şengül March 20, 2019 Please let me know of any mistakes in these notes Contents Week 1 1 11 Systems of Linear Equations 1 12 Matrices
More informationWelcome to Math 257/316 - Partial Differential Equations
Welcome to Math 257/316 - Partial Differential Equations Instructor: Mona Rahmani email: mrahmani@math.ubc.ca Office: Mathematics Building 110 Office hours: Mondays 2-3 pm, Wednesdays and Fridays 1-2 pm.
More informationSeries Solutions Near a Regular Singular Point
Series Solutions Near a Regular Singular Point MATH 365 Ordinary Differential Equations J. Robert Buchanan Department of Mathematics Fall 2018 Background We will find a power series solution to the equation:
More informationrhtre PAID U.S. POSTAGE Can't attend? Pass this on to a friend. Cleveland, Ohio Permit No. 799 First Class
rhtr irt Cl.S. POSTAG PAD Cllnd, Ohi Prmit. 799 Cn't ttnd? P thi n t frind. \ ; n l *di: >.8 >,5 G *' >(n n c. if9$9$.jj V G. r.t 0 H: u ) ' r x * H > x > i M
More informationĞ ğ ğ Ğ ğ Öğ ç ğ ö öğ ğ ŞÇ ğ ğ
Ğ Ü Ü Ü ğ ğ ğ Öğ ş öğ ş ğ öğ ö ö ş ğ ğ ö ğ Ğ ğ ğ Ğ ğ Öğ ç ğ ö öğ ğ ŞÇ ğ ğ l _.j l L., c :, c Ll Ll, c :r. l., }, l : ö,, Lc L.. c l Ll Lr. 0 c (} >,! l LA l l r r l rl c c.r; (Y ; c cy c r! r! \. L : Ll.,
More informationIntroduction Eigen Values and Eigen Vectors An Application Matrix Calculus Optimal Portfolio. Portfolios. Christopher Ting.
Portfolios Christopher Ting Christopher Ting http://www.mysmu.edu/faculty/christophert/ : christopherting@smu.edu.sg : 6828 0364 : LKCSB 5036 November 4, 2016 Christopher Ting QF 101 Week 12 November 4,
More information2 We imagine that our double pendulum is immersed in a uniform downward directed gravitational field, with gravitational constant g.
THE MULTIPLE SPHERICAL PENDULUM Thomas Wieting Reed College, 011 1 The Double Spherical Pendulum Small Oscillations 3 The Multiple Spherical Pendulum 4 Small Oscillations 5 Linear Mechanical Systems 1
More informationT h e C S E T I P r o j e c t
T h e P r o j e c t T H E P R O J E C T T A B L E O F C O N T E N T S A r t i c l e P a g e C o m p r e h e n s i v e A s s es s m e n t o f t h e U F O / E T I P h e n o m e n o n M a y 1 9 9 1 1 E T
More informationTHE MIDWAY & GAMES GRADE 6 STEM STEP BY STEP POTENTIAL & KINETIC ENERGY MOVE THE CROWDS
THE MIDWAY & GAMES GRADE 6 STEP BY STEP POTENTIAL & KINETIC ENERGY MOVE THE CROWDS & G S S Pl & K E Mv C I l ll l M T x Tx, F S T NERGY! k E? All x Exl M l l Wl k, v k W, M? j I ll xl l k M D M I l k,
More informationLinear Algebra Review (Course Notes for Math 308H - Spring 2016)
Linear Algebra Review (Course Notes for Math 308H - Spring 2016) Dr. Michael S. Pilant February 12, 2016 1 Background: We begin with one of the most fundamental notions in R 2, distance. Letting (x 1,
More informationMath 408 Advanced Linear Algebra
Math 408 Advanced Linear Algebra Chi-Kwong Li Chapter 4 Hermitian and symmetric matrices Basic properties Theorem Let A M n. The following are equivalent. Remark (a) A is Hermitian, i.e., A = A. (b) x
More informationCayley-Hamilton Theorem
Cayley-Hamilton Theorem Massoud Malek In all that follows, the n n identity matrix is denoted by I n, the n n zero matrix by Z n, and the zero vector by θ n Let A be an n n matrix Although det (λ I n A
More informationON SOME QUESTIONS ARISING IN THE APPROXIMATE SOLUTION OF NONLINEAR DIFFERENTIAL EQUATIONS*
333 ON SOME QUESTIONS ARISING IN THE APPROXIMATE SOLUTION OF NONLINEAR DIFFERENTIAL EQUATIONS* By R. BELLMAN, BAND Corporation, Santa Monica, California AND J. M. RICHARDSON, Hughes Research Laboratories,
More informationON THE GROUP OF ISOMORPHISMS OF A CERTAIN EXTENSION OF AN ABELIAN GROUP*
ON THE GROUP OF ISOMORPHISMS OF A CERTAIN EXTENSION OF AN ABELIAN GROUP* BY LOUIS C. MATHEWSON Introduction In 1908 Professor G. A. Miller showed that " if an abelian group 77 which involves operators
More information4. cosmt. 6. e tan 1rt 10. cos 2 t
20 Chapter 1 Fourier Series of Periodic Functions 3. sin3t s. sinh2t 7. lsint l 9. t 2 4. cosmt 6. e 1 8. tan 1rt 10. cos 2 t 1.1B Sketch two or more periods of the following functions. 1. f(t) = t2, -1[
More informationZhi-Wei Sun Department of Mathematics, Nanjing University Nanjing , People s Republic of China
J. Number Theory 16(016), 190 11. A RESULT SIMILAR TO LAGRANGE S THEOREM Zhi-Wei Sun Department of Mathematics, Nanjing University Nanjing 10093, People s Republic of China zwsun@nju.edu.cn http://math.nju.edu.cn/
More informationTaylor polynomials. 1. Introduction. 2. Linear approximation.
ucsc supplementary notes ams/econ 11a Taylor polynomials c 01 Yonatan Katznelson 1. Introduction The most elementary functions are polynomials because they involve only the most basic arithmetic operations
More informationPhase space, Tangent-Linear and Adjoint Models, Singular Vectors, Lyapunov Vectors and Normal Modes
Phase space, Tangent-Linear and Adjoint Models, Singular Vectors, Lyapunov Vectors and Normal Modes Assume a phase space of dimension N where Autonomous governing equations with initial state: = is a state
More informationPrincipal Component Analysis and Linear Discriminant Analysis
Principal Component Analysis and Linear Discriminant Analysis Ying Wu Electrical Engineering and Computer Science Northwestern University Evanston, IL 60208 http://www.eecs.northwestern.edu/~yingwu 1/29
More informationNOTES ON MATRICES OF FULL COLUMN (ROW) RANK. Shayle R. Searle ABSTRACT
NOTES ON MATRICES OF FULL COLUMN (ROW) RANK Shayle R. Searle Biometrics Unit, Cornell University, Ithaca, N.Y. 14853 BU-1361-M August 1996 ABSTRACT A useful left (right) inverse of a full column (row)
More information. D CR Nomenclature D 1
. D CR Nomenclature D 1 Appendix D: CR NOMENCLATURE D 2 The notation used by different investigators working in CR formulations has not coalesced, since the topic is in flux. This Appendix identifies the
More information'IEEE... AD-A THE SHAPE OF A LIQUID DROP IN THE FLOW OF A PERFECT in1 I FLUID(U) HARRY DIAMOND LASS ADEIPHI RD C A MORRISON FEB 83 HDL-TL-83-2
AD-A125 873 THE SHAPE OF A LIQUID DROP IN THE FLOW OF A PERFECT in1 I FLUID(U) HARRY DIAMOND LASS ADEIPHI RD C A MORRISON FEB 83 'IEEE... HDL-TL-83-2 IUNCLASSIFIED F/G 28/4 NL .04 2-0~ 1.25 uia. 1 -EV-
More informationLemma 8: Suppose the N by N matrix A has the following block upper triangular form:
17 4 Determinants and the Inverse of a Square Matrix In this section, we are going to use our knowledge of determinants and their properties to derive an explicit formula for the inverse of a square matrix
More informationChapter 11. Taylor Series. Josef Leydold Mathematical Methods WS 2018/19 11 Taylor Series 1 / 27
Chapter 11 Taylor Series Josef Leydold Mathematical Methods WS 2018/19 11 Taylor Series 1 / 27 First-Order Approximation We want to approximate function f by some simple function. Best possible approximation
More information