Chapter 5. Solution of System of Linear Equations. Module No. 6. Solution of Inconsistent and Ill Conditioned Systems


Numerical Analysis by Dr. Anita Pal, Assistant Professor, Department of Mathematics, National Institute of Technology Durgapur, Durgapur.


In previous modules, we have discussed several methods to solve a system of linear equations. In those modules it is assumed that the given system is well-posed, i.e. if one (or more) coefficient of the system is slightly changed, then there is no major change in the solution. Otherwise the system of equations is called ill-posed or ill-conditioned. In this module we discuss solution methods for ill-conditioned systems of equations. Before discussing the ill-conditioned system, we define some basic terms from linear algebra which are used to describe the methods.

6.1 Vector and matrix norms

Let x = (x_1, x_2, ..., x_n) be a vector of dimension n. The norm of the vector x is the size or length of x, and it is denoted by ||x||. The norm is a mapping from the set of vectors to the real numbers which satisfies the following conditions:

(i) ||x|| >= 0 and ||x|| = 0 iff x = 0 (6.1)
(ii) ||αx|| = |α| ||x|| for any real scalar α (6.2)
(iii) ||x + y|| <= ||x|| + ||y|| (triangle inequality). (6.3)

Several types of norms have been defined by many authors. The most useful vector norms are defined below:

(i) ||x||_1 = sum_{i=1}^{n} |x_i| (6.4)
(ii) ||x||_2 = ( sum_{i=1}^{n} x_i^2 )^{1/2} (Euclidean norm) (6.5)
(iii) ||x||_inf = max_i |x_i| (maximum norm or uniform norm). (6.6)
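As a quick illustration of (6.4)-(6.6), the following short sketch (assuming NumPy is available; the sample vector is chosen only for illustration) computes the three vector norms.

import numpy as np

x = np.array([1.0, -4.0, 3.0])      # illustrative vector

norm_1   = np.sum(np.abs(x))        # ||x||_1  = |x_1| + ... + |x_n|
norm_2   = np.sqrt(np.sum(x**2))    # ||x||_2  = Euclidean norm
norm_inf = np.max(np.abs(x))        # ||x||_inf = max_i |x_i|

print(norm_1, norm_2, norm_inf)     # 8.0  5.099...  4.0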

Now we define different types of matrix norms. Let A and B be two matrices such that A + B and AB are defined. The norm of a matrix A is denoted by ||A|| and it satisfies the following conditions:

(i) ||A|| >= 0 and ||A|| = 0 iff A = 0 (6.7)
(ii) ||αA|| = |α| ||A||, α a real scalar (6.8)
(iii) ||A + B|| <= ||A|| + ||B|| (6.9)
(iv) ||AB|| <= ||A|| ||B||. (6.10)

From (6.10) it follows that

||A^k|| <= ||A||^k (6.11)

for any positive integer k. Like the vector norms, some common matrix norms are

(i) ||A||_1 = max_j sum_i |a_ij| (the column norm) (6.12)
(ii) ||A||_2 = ( sum_i sum_j a_ij^2 )^{1/2} (the Euclidean norm) (6.13)
(iii) ||A||_inf = max_i sum_j |a_ij| (the row norm). (6.14)

The Euclidean norm is also known as the Erhard-Schmidt norm, the Schur norm, or the Frobenius norm. The concept of matrix norm is used to study the convergence of iterative methods for solving systems of linear equations. It is also used to study the stability of a system of equations.

Example 6.1 Let A be a given matrix. Find the matrix norms ||A||_1, ||A||_2 and ||A||_inf.

Solution. For the matrix considered, the maximum absolute column sum is ||A||_1 = 6, the Euclidean norm is ||A||_2 = ( sum_i sum_j a_ij^2 )^{1/2} = sqrt(122) ≈ 11.05, and the maximum absolute row sum is ||A||_inf = 16.
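The matrix norms (6.12)-(6.14) are computed in the same spirit. A minimal sketch, again assuming NumPy and using an arbitrary illustrative matrix (not the matrix of Example 6.1):

import numpy as np

A = np.array([[1.0, 2.0, -1.0],
              [3.0, 0.0,  2.0]])                 # illustrative matrix

col_norm = np.max(np.sum(np.abs(A), axis=0))     # ||A||_1  : largest absolute column sum
euc_norm = np.sqrt(np.sum(A**2))                 # ||A||_2  : Euclidean (Frobenius) norm
row_norm = np.max(np.sum(np.abs(A), axis=1))     # ||A||_inf: largest absolute row sum

print(col_norm, euc_norm, row_norm)              # 4.0  4.358...  5.0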

6.2 Ill-conditioned system of linear equations

Let us consider the following system of linear equations:

x + (1/3) y = 1.33
3x + y = 4. (6.15)

It is easy to verify that this system of equations has no solution: multiplying the first equation by 3 gives 3x + y = 3.99, which contradicts the second equation. But for different approximate values of 1/3 the system gives different and interesting results. First we take 1/3 ≈ 0.3. Then the system becomes

x + 0.3 y = 1.33
3x + y = 4. (6.16)

The solution of these equations is x = 1.3, y = 0.1. If we approximate 1/3 as 0.33, then the reduced system of equations is

x + 0.33 y = 1.33
3x + y = 4 (6.17)

and its solution is x = 1, y = 1. If the approximation is 0.333, then the system is

x + 0.333 y = 1.33
3x + y = 4 (6.18)

and its solution is x = -2, y = 10. When 1/3 ≈ 0.3333, the system is

x + 0.3333 y = 1.33
3x + y = 4 (6.19)

and its solution is x = -32, y = 100.

Note the systems of equations (6.15)-(6.19) and their solutions. These are very confusing situations. What is the best approximation of 1/3: 0.3 or 0.3333?
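A quick numerical check of this behaviour, assuming NumPy: solving the 2 x 2 system with successively "better" approximations of 1/3 shows how violently the solution moves.

import numpy as np

b = np.array([1.33, 4.0])
for c in (0.3, 0.33, 0.333, 0.3333):         # approximations of 1/3
    A = np.array([[1.0, c],
                  [3.0, 1.0]])
    x, y = np.linalg.solve(A, b)
    print(c, round(x, 4), round(y, 4))
# 0.3    ->  x =   1.3,  y =   0.1
# 0.33   ->  x =   1.0,  y =   1.0
# 0.333  ->  x =  -2.0,  y =  10.0
# 0.3333 ->  x = -32.0,  y = 100.0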

Observe that the solutions change drastically although the coefficient of y in the first equation is only slightly changed. That is, a small change in the coefficient of y in the first equation of the system produces a large change in the solution. Such systems are called ill-conditioned or ill-posed systems. On the other hand, if the change in the solution is small for small changes in the coefficients, then the system is called a well-conditioned or well-posed system.

Let us consider the following system of equations:

Ax = b. (6.20)

Suppose one or more elements of A and/or b are changed, and denote the changed matrices by A' and b'. Also, let y be the solution of the new system, i.e.

A'y = b'. (6.21)

Assume that the changes in the coefficients are very small. The system of equations (6.20) is called ill-conditioned when the change in the solution, y - x, is too large compared with the solution vector x of (6.20). Otherwise, the system of equations is called well-conditioned. If a system is ill-conditioned then the corresponding coefficient matrix is called an ill-conditioned matrix. For the above problem, i.e. for the system of equations (6.17), the coefficient matrix is

[ 1   0.33 ]
[ 3   1    ]

and it is an ill-conditioned matrix.

When the determinant |A| is small then, in general, the matrix A is ill-conditioned. But the term "small" has no definite meaning, so several measures have been suggested to quantify the ill-conditioning of a matrix. One of the simplest is defined below. Let A be a matrix; its condition number, denoted by Cond(A), is defined by

Cond(A) = ||A|| ||A^{-1}|| (6.22)

where ||.|| is any matrix norm. If Cond(A) is large then the matrix is called ill-conditioned and the corresponding system of equations is called an ill-conditioned system of equations. If Cond(A) is small then the matrix A and the corresponding system of equations are called well-conditioned.
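A minimal sketch of the condition number (6.22), assuming NumPy and using the Frobenius norm (the default of np.linalg.norm for a matrix), applied to the coefficient matrix of (6.17):

import numpy as np

A = np.array([[1.0, 0.33],
              [3.0, 1.00]])        # coefficient matrix of (6.17)

cond = np.linalg.norm(A) * np.linalg.norm(np.linalg.inv(A))
print(cond)                        # about 1.1e+03, a large number: A is ill-conditioned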

Let us consider two matrices A and B to illustrate the ill-conditioned and well-conditioned cases. For the first matrix, the product Cond(A) = ||A||_2 ||A^{-1}||_2 of the Euclidean norms of A and A^{-1} turns out to be a very large number; hence A is ill-conditioned. For the matrix B, Cond(B) = ||B||_2 ||B^{-1}||_2 is a relatively small quantity; thus B is well-conditioned.

The value of Cond(A) lies between 1 and infinity. If it is large, we say that the matrix is ill-conditioned. But there is no definite meaning of "large number", so this measure is not entirely satisfactory. We therefore define another parameter whose magnitude lies between 0 and 1. Let A = [a_ij] be an n x n matrix and let

r_i = ( sum_{j=1}^{n} a_ij^2 )^{1/2}, i = 1, 2, ..., n.

The quantity

ν(A) = |A| / (r_1 r_2 ... r_n) (6.23)

measures the smallness of the determinant |A|. It can be shown that -1 <= ν <= 1. If ν(A) is close to zero, then the matrix A is ill-conditioned, and if |ν(A)| is close to 1, then A is well-conditioned.

For the matrix A above, whose first row is (1, 4), we have r_1 = sqrt(17) and |A| = 0.12, so ν(A) = 0.12/(r_1 r_2) is very close to zero. For the matrix B, r_1 = sqrt(34), r_2 = sqrt(8) and |B| = 16, so ν(B) = 16/(sqrt(34) sqrt(8)) ≈ 0.970. Thus the matrix A is ill-conditioned while the matrix B is well-conditioned, its value of ν being very close to 1.
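The measure ν(A) of (6.23) is equally easy to compute. The sketch below assumes NumPy; the matrix used is only an illustrative choice consistent with r_1 = sqrt(34), r_2 = sqrt(8) and determinant 16 (its entries are an assumption, not taken from the original example).

import numpy as np

def nu(A):
    """nu(A) = det(A) / (r_1 r_2 ... r_n), where r_i is the Euclidean length of row i."""
    r = np.sqrt(np.sum(A**2, axis=1))
    return np.linalg.det(A) / np.prod(r)

B = np.array([[ 3.0, 5.0],
              [-2.0, 2.0]])    # hypothetical matrix with r_1 = sqrt(34), r_2 = sqrt(8), det = 16
print(nu(B))                   # about 0.970, close to 1, so B is well-conditioned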

6.3 Least squares method for inconsistent systems

Let us consider a system of equations in which the number of equations is not equal to the number of variables. Let such a system be

Ax = b (6.24)

where A, x and b are of order m x n, n x 1 and m x 1 respectively. Note that the coefficient matrix is rectangular, so the system may have no solution, a unique solution, or infinitely many solutions. Assume that the system is inconsistent, so it has no solution in the usual sense. But the system may have a least squares solution. A vector x is said to be a least squares solution if Ax - b is not the zero vector but ||Ax - b|| is minimum. The solution x_m is called the minimum norm least squares solution if

||x_m|| <= ||x_l|| (6.25)

for any x_l such that

||A x_l - b|| <= ||Ax - b|| for all x. (6.26)

Since A is a rectangular matrix, the least squares solution can be determined from the equation

x = A^+ b, (6.27)

where A^+ is the g-inverse (generalized inverse) of A. Since the Moore-Penrose inverse A^+ is unique, the minimum norm least squares solution is unique.
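In practice the least squares solution x = A^+ b of (6.27) is obtained from a library routine. A minimal sketch assuming NumPy, applied to a small hypothetical inconsistent system (np.linalg.pinv computes the Moore-Penrose inverse, and np.linalg.lstsq returns the least squares solution directly):

import numpy as np

A = np.array([[1.0,  1.0],
              [1.0, -1.0],
              [2.0,  1.0]])            # hypothetical 3 x 2 system (more equations than unknowns)
b = np.array([2.0, 0.0, 4.0])          # chosen so that the system is inconsistent

x_pinv = np.linalg.pinv(A) @ b                     # x = A^+ b
x_lsq  = np.linalg.lstsq(A, b, rcond=None)[0]      # the same least squares solution

print(x_pinv)                          # approximately [1.2857, 1.1429]
print(x_lsq)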

The least squares solution can also be determined without finding the g-inverse of A. This method is described below. If x is an exact solution of the system Ax = b, then Ax - b = 0; otherwise Ax - b is a non-null vector of order m x 1. In explicit form this vector is

( a_11 x_1 + a_12 x_2 + ... + a_1n x_n - b_1,
  a_21 x_1 + a_22 x_2 + ... + a_2n x_n - b_2,
  ...,
  a_m1 x_1 + a_m2 x_2 + ... + a_mn x_n - b_m )^t.

Let the square of the norm ||Ax - b|| be denoted by S. Therefore,

S = (a_11 x_1 + a_12 x_2 + ... + a_1n x_n - b_1)^2
  + (a_21 x_1 + a_22 x_2 + ... + a_2n x_n - b_2)^2
  + ...
  + (a_m1 x_1 + a_m2 x_2 + ... + a_mn x_n - b_m)^2
  = sum_{i=1}^{m} ( sum_{j=1}^{n} a_ij x_j - b_i )^2. (6.28)

The quantity S is called the sum of squares of the residuals. Now our aim is to find the vector x = (x_1, x_2, ..., x_n)^t such that S is minimum. The necessary conditions for S to be minimum are

∂S/∂x_1 = 0, ∂S/∂x_2 = 0, ..., ∂S/∂x_n = 0. (6.29)

(Since S is a convex quadratic in x_1, ..., x_n, these conditions also suffice.) Note that the system of equations (6.29) is non-homogeneous and contains n equations in the n unknowns x_1, x_2, ..., x_n. This system of equations can be solved by any method described in the previous modules. Let x_1 = x_1*, x_2 = x_2*, ..., x_n = x_n* be the solution of the equations (6.29). Then the least squares solution of the system of equations (6.24) is

x = (x_1*, x_2*, ..., x_n*)^t. (6.30)

The sum of squares of the residuals is given by

S = sum_{i=1}^{m} ( sum_{j=1}^{n} a_ij x_j* - b_i )^2. (6.31)
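In matrix form, the n conditions (6.29) are exactly the normal equations A^t A x = A^t b, so the least squares solution can be obtained from a single symmetric n x n solve, provided A has full column rank (otherwise one falls back on the g-inverse of (6.27)). A minimal sketch assuming NumPy:

import numpy as np

def least_squares(A, b):
    """Least squares solution of a (possibly inconsistent) system A x = b via the
    normal equations A^t A x = A^t b; assumes A has full column rank."""
    x = np.linalg.solve(A.T @ A, A.T @ b)
    S = np.sum((A @ x - b)**2)        # sum of squares of the residuals, as in (6.31)
    return x, S

Applied to the system of Example 6.3 below, this returns x ≈ (1.3, 0.3333) with S ≈ 0.0033.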

Let us consider two examples to illustrate the least squares method for solving an inconsistent system of equations.

Example 6.2 Find the g-inverse of the singular matrix

A = [ 4  8 ]
    [ 1  2 ]

and hence find a least squares solution of the inconsistent system of equations 4x + 8y = 2, x + 2y = 1.

Solution. Let α_1 = (4, 1)^t and α_2 = (8, 2)^t be the columns of A, and let A_1 = α_1. Then

A_1^+ = (α_1^t α_1)^{-1} α_1^t = (1/17)(4, 1) = (4/17, 1/17),
δ_2 = A_1^+ α_2 = (1/17)(4*8 + 1*2) = 2,
γ_2 = α_2 - A_1 δ_2 = (8, 2)^t - 2(4, 1)^t = (0, 0)^t (a null vector),
β_2 = (1 + δ_2^t δ_2)^{-1} δ_2^t A_1^+ = (1/5)*2*(4/17, 1/17) = (8/85, 2/85),
δ_2 β_2 = (16/85, 4/85).

Therefore

A_2^+ = [ A_1^+ - δ_2 β_2 ] = [ 4/85  1/85 ]
        [ β_2             ]   [ 8/85  2/85 ].

This is the g-inverse of A.

Second part: In matrix notation, the given system of equations is Ax = b, where

A = [ 4  8 ],  x = [ x ],  b = [ 2 ].
    [ 1  2 ]       [ y ]       [ 1 ]

Note that the coefficient matrix is singular, so the system has no conventional solution. But the least squares solution of this system of equations is x = A^+ b, i.e.

x = (1/85) [ 4  1 ] [ 2 ] = (1/85) [  9 ].
           [ 8  2 ] [ 1 ]          [ 18 ]

Hence, the least squares solution is x = 9/85, y = 18/85.
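The hand computation above can be checked against NumPy's built-in Moore-Penrose inverse; a minimal sketch:

import numpy as np

A = np.array([[4.0, 8.0],
              [1.0, 2.0]])
b = np.array([2.0, 1.0])

A_plus = np.linalg.pinv(A)
print(A_plus * 85)       # approximately [[4., 1.], [8., 2.]], i.e. A^+ = (1/85)[[4, 1], [8, 2]]
print(A_plus @ b)        # approximately [0.1059, 0.2118] = (9/85, 18/85)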

Example 6.3 Find the least squares solution of the following system of linear equations: x + 2y = 2.0, x - y = 1.0, x + 3y = 2.3, and 2x + y = 2.9. Also estimate the residual.

Solution. The sum of squares of the residuals is

S = (x + 2y - 2.0)^2 + (x - y - 1.0)^2 + (x + 3y - 2.3)^2 + (2x + y - 2.9)^2.

The problem is to find the values of x and y such that S is minimum. Thus

∂S/∂x = 0 and ∂S/∂y = 0.

Therefore the normal equations are

2(x + 2y - 2.0) + 2(x - y - 1.0) + 2(x + 3y - 2.3) + 4(2x + y - 2.9) = 0 and
4(x + 2y - 2.0) - 2(x - y - 1.0) + 6(x + 3y - 2.3) + 2(2x + y - 2.9) = 0.

After simplification, these equations reduce to

7x + 6y = 11.1 and 6x + 15y = 12.8.

The solution of these equations is x = 1.3 and y = 1/3 = 0.3333. This is the least squares solution of the given system of equations. The sum of squares of the residuals is

S = (1.3 + 2(0.3333) - 2.0)^2 + (1.3 - 0.3333 - 1.0)^2 + (1.3 + 3(0.3333) - 2.3)^2 + (2(1.3) + 0.3333 - 2.9)^2 ≈ 0.0033.
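The arithmetic of Example 6.3 is quickly verified; the sketch below (assuming NumPy) solves the normal equations derived above and evaluates the residual sum S.

import numpy as np

N = np.array([[7.0,  6.0],
              [6.0, 15.0]])            # normal equations 7x + 6y = 11.1, 6x + 15y = 12.8
c = np.array([11.1, 12.8])
x, y = np.linalg.solve(N, c)
print(x, y)                            # 1.3  0.3333...

A = np.array([[1.0, 2.0], [1.0, -1.0], [1.0, 3.0], [2.0, 1.0]])
b = np.array([2.0, 1.0, 2.3, 2.9])
print(np.sum((A @ np.array([x, y]) - b)**2))   # about 0.0033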

6.4 Method to solve ill-conditioned systems

It is very difficult to solve an ill-conditioned system of equations, and few methods are available for it. One simple idea is to carry out the calculations with a large number of significant digits, but computation with more significant digits takes much more time. A better approach is to improve the accuracy of an approximate solution by an iterative method. Such an iterative method is described below.

Let us consider the following ill-conditioned system of equations:

sum_{j=1}^{n} a_ij x_j = b_i, i = 1, 2, ..., n. (6.32)

Let (x̄_1, x̄_2, ..., x̄_n) be an approximate solution of (6.32). Since this is an approximate solution, sum_j a_ij x̄_j is not necessarily equal to b_i. For this solution, let the right hand side vector be b̄_i, i.e.

sum_{j=1}^{n} a_ij x̄_j = b̄_i, i = 1, 2, ..., n. (6.33)

Subtracting (6.33) from (6.32), we get

sum_{j=1}^{n} a_ij (x_j - x̄_j) = b_i - b̄_i,

i.e.

sum_{j=1}^{n} a_ij ε_j = d_i (6.34)

where ε_j = x_j - x̄_j and d_i = b_i - b̄_i, i, j = 1, 2, ..., n. Now, equation (6.34) is again a system of linear equations whose unknowns are ε_1, ε_2, ..., ε_n. Solving these equations we obtain the values of the ε_j's. Hence the new solution is given by x_j = ε_j + x̄_j, and this solution is a better approximation to the x_j's. This technique may be repeated to obtain a still better solution.
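A minimal sketch of this iterative improvement, assuming NumPy. Here the correction system (6.34) is solved with np.linalg.solve; in practice it would be solved with the same (inexact) factorization that produced the original approximate solution.

import numpy as np

def refine(A, b, x_approx, passes=3):
    """Improve an approximate solution x_approx of A x = b by iterative refinement."""
    x = np.asarray(x_approx, dtype=float).copy()
    for _ in range(passes):
        d = b - A @ x                  # d_i = b_i - b̄_i, residual of the current solution
        eps = np.linalg.solve(A, d)    # solve A eps = d for the correction, eq. (6.34)
        x = x + eps                    # improved solution x_j = x̄_j + eps_j
    return x

A = np.array([[1.0, 0.33],
              [3.0, 1.00]])
b = np.array([1.33, 4.0])
print(refine(A, b, [0.9, 1.2]))        # converges to [1., 1.], the solution of (6.17)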

6.5 The relaxation method

The relaxation method, invented by Southwell in 1946, is an iterative method used to solve a system of linear equations. Let

sum_{j=1}^{n} a_ij x_j = b_i (6.35)

be the ith equation, i = 1, 2, ..., n, of a system of linear equations. Let x^(k) = (x_1^(k), x_2^(k), ..., x_n^(k))^t be the kth iterated solution of the system. In general

sum_{j=1}^{n} a_ij x_j^(k) ≠ b_i, i = 1, 2, ..., n.

We denote the kth iterated residual for the ith equation by r_i^(k). Its value is given by

r_i^(k) = b_i - sum_{j=1}^{n} a_ij x_j^(k), i = 1, 2, ..., n. (6.36)

If r_i^(k) = 0 for all i = 1, 2, ..., n, then (x_1^(k), x_2^(k), ..., x_n^(k))^t is the exact solution of the given system of equations. If the residuals are not zero, or not small, for all equations, the following method is applied to reduce them.

In the relaxation method, the solution is improved successively by reducing the largest residual to zero at each iteration. To get fast convergence, the equations are rearranged so that the largest coefficients appear on the diagonal, i.e. the coefficient matrix becomes diagonally dominant. The aim of the method is to reduce the largest residual to zero. Let r_p be the residual of largest magnitude, occurring at the pth equation, at a particular iteration. Then the value of the variable x_p is increased by

dx_p = r_p / a_pp.

That is, x_p is replaced by x_p + dx_p to relax r_p, i.e. to reduce r_p to zero. The modified solution after this iteration is

x^(k+1) = (x_1^(k), x_2^(k), ..., x_{p-1}^(k), x_p + dx_p, x_{p+1}^(k), ..., x_n^(k)).

The method is repeated until all the residuals become zero or tend to zero.

Example 6.4 Solve the following system of linear equations by the relaxation method, taking (0, 0, 0) as the initial solution:

27x + 6y - z = 54
6x + 15y + 2z = 72
x + y + 54z = 110.

Solution. The given system of equations is diagonally dominant. The residuals r_1, r_2, r_3 are given by the following equations:

r_1 = 54 - 27x - 6y + z
r_2 = 72 - 6x - 15y - 2z
r_3 = 110 - x - y - 54z.

Here the initial solution is (0, 0, 0), i.e. x = y = z = 0. Therefore the residuals are r_1 = 54, r_2 = 72, r_3 = 110. The residual of largest magnitude is r_3, so the third equation has the largest error and we improve x_3 (i.e. z) first. The increment dx_3 in x_3 is

dx_3 = r_3 / a_33 = 110/54 = 2.037.

Thus the first iterated solution is (0, 0, 2.037). In the next iteration we determine the new residual of largest magnitude and relax it to zero. The process is repeated until all the residuals become zero or very small.

Each step of the iteration records the residuals r_1, r_2, r_3, the index p of the residual of maximum magnitude, the increment dx_p, and the updated solution (x, y, z). Continuing in this way, all the residuals eventually become very small, and the solution of the given system of equations is

x = 1.166, y = 4.075, z = 1.940,

correct up to three decimal places.
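A compact sketch of the relaxation iteration for this example, assuming NumPy: at every step the residual of largest magnitude is computed from (6.36) and relaxed to zero exactly as described above.

import numpy as np

A = np.array([[27.0,  6.0, -1.0],
              [ 6.0, 15.0,  2.0],
              [ 1.0,  1.0, 54.0]])
b = np.array([54.0, 72.0, 110.0])

x = np.zeros(3)                        # initial solution (0, 0, 0)
for k in range(200):
    r = b - A @ x                      # residuals r_i = b_i - sum_j a_ij x_j
    p = int(np.argmax(np.abs(r)))      # equation with the residual of largest magnitude
    if abs(r[p]) < 1e-6:
        break
    x[p] += r[p] / A[p, p]             # relax r_p to zero: x_p <- x_p + r_p / a_pp

print(x)                               # approximately [1.166, 4.075, 1.940]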

6.6 Successive overrelaxation (S.O.R.) method

The relaxation method can be modified to achieve faster convergence. For this purpose a suitable relaxation factor w is introduced. The ith equation of the system

sum_{j=1}^{n} a_ij x_j = b_i, i = 1, 2, ..., n, (6.37)

can be written as

sum_{j=1}^{i-1} a_ij x_j + sum_{j=i}^{n} a_ij x_j = b_i. (6.38)

Let (x_1^(0), x_2^(0), ..., x_n^(0)) be the initial solution and let

(x_1^(k+1), x_2^(k+1), ..., x_{i-1}^(k+1), x_i^(k), x_{i+1}^(k), ..., x_n^(k))

be the solution when the ith equation is being considered. Then equation (6.38) becomes

sum_{j=1}^{i-1} a_ij x_j^(k+1) + sum_{j=i}^{n} a_ij x_j^(k) = b_i. (6.39)

Since (x_1^(k+1), ..., x_{i-1}^(k+1), x_i^(k), ..., x_n^(k)) is only an approximate solution of the given system of equations, the residual of the ith equation is determined from

r_i = b_i - sum_{j=1}^{i-1} a_ij x_j^(k+1) - sum_{j=i}^{n} a_ij x_j^(k), i = 1, 2, ..., n. (6.40)

We denote the difference of x_i at two consecutive iterations by ε_i^(k), defined as ε_i^(k) = x_i^(k+1) - x_i^(k). In the successive overrelaxation (SOR) method it is assumed that

a_ii ε_i^(k) = w r_i, i = 1, 2, ..., n, (6.41)

where w is a scalar called the relaxation factor. Thus equation (6.41) becomes

a_ii x_i^(k+1) = a_ii x_i^(k) - w [ sum_{j=1}^{i-1} a_ij x_j^(k+1) + sum_{j=i}^{n} a_ij x_j^(k) - b_i ], (6.42)

i = 1, 2, ..., n; k = 0, 1, 2, .... The iteration process is repeated until the desired accuracy is achieved.

The above iteration method is called the overrelaxation method when 1 < w < 2, and the underrelaxation method when 0 < w < 1. When w = 1, the method becomes the well-known Gauss-Seidel iteration method.

The proper choice of w can speed up the convergence of the iteration scheme, and it depends on the given system of equations.

Example 6.5 Solve the following system of linear equations

4x_1 + 2x_2 + x_3 = 5
x_1 + 5x_2 + 2x_3 = 6
x_1 + x_2 + 7x_3 = 2

by the SOR method, taking a suitable relaxation factor w.

Solution. The SOR iteration scheme (6.42) for the given system of equations is

4 x_1^(k+1) = 4 x_1^(k) - w [ 4 x_1^(k) + 2 x_2^(k) + x_3^(k) - 5 ]
5 x_2^(k+1) = 5 x_2^(k) - w [ x_1^(k+1) + 5 x_2^(k) + 2 x_3^(k) - 6 ]
7 x_3^(k+1) = 7 x_3^(k) - w [ x_1^(k+1) + x_2^(k+1) + 7 x_3^(k) - 2 ].

Let x_1^(0) = x_2^(0) = x_3^(0) = 0. Carrying out the iterations, the solutions at the 8th and 9th iterations agree to four decimal places. Hence the required solution is

x_1 = 0.7203, x_2 = 1.0424, x_3 = 0.0339,

correct up to four decimal places.
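Finally, a sketch of the SOR scheme (6.42) applied to Example 6.5, assuming NumPy. The relaxation factor w = 1.05 is only an illustrative choice (the value used in the original example is not reproduced here); with w = 1 the loop is exactly the Gauss-Seidel method.

import numpy as np

A = np.array([[4.0, 2.0, 1.0],
              [1.0, 5.0, 2.0],
              [1.0, 1.0, 7.0]])
b = np.array([5.0, 6.0, 2.0])
w = 1.05                               # illustrative relaxation factor (an assumption)

x = np.zeros(3)                        # x^(0) = (0, 0, 0)
for k in range(50):
    x_old = x.copy()
    for i in range(3):
        r_i = b[i] - A[i, :] @ x       # residual (6.40): new values for j < i, old for j >= i
        x[i] += w * r_i / A[i, i]      # update (6.42)
    if np.max(np.abs(x - x_old)) < 1e-6:
        break

print(x)                               # approximately [0.7203, 1.0424, 0.0339]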
